Splunk Enterprise Security

Sendmail transactions prior to data model ingestion in Enterprise Security

panovattack
Communicator

When using the Enterprise Security Protocol Intelligence dashboards, how do you build a complete email transaction log (e.g. sourcetype=sendmail qid=* | transaction qid) prior to the sendmail logs being pulled into the data models? Since a single email 'transaction' is spread over numerous log events with the same qid, it would be advantageous to build the complete email 'event' before populating the data models. Any thoughts?

1 Solution

panovattack
Communicator

I found a workaround to this, based on some of the Splunk documentation. I removed the email tag from the raw email logs. I then run a report that aggregates all the fields required by the Email data model and consolidates them by session ID. Generally, this looks like:

"index=mta_syslog_log | stats values(email_datamodel_field) as email_datamodel_field by sid"

This report is then sent to a summary index (I've also accelerated it) and runs every 5 minutes. I then tag the summary index events with "email" and configure the ES CIM to include the summary index, using "index=summary source=saved_report_name" as the constraint for an eventtype which assigns the tags.

This duplicates some of the data in a summary index, but it vastly improved data model build performance. Not ideal, but it works.
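A minimal sketch of the eventtype and tag configuration this describes, using hypothetical stanza names (saved_report_name stands in for your actual scheduled report):

```ini
# eventtypes.conf -- match the summary-indexed report output
[email_summary_transactions]
search = index=summary source=saved_report_name

# tags.conf -- apply the "email" tag so the Email data model
# constraint search picks up the consolidated events
[eventtype=email_summary_transactions]
email = enabled
```

With this in place, the Email data model's base constraint (tag=email) matches the pre-aggregated summary events instead of the raw per-qid sendmail lines.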


panovattack
Communicator

I want to give this one a bump; it is becoming a point of significant frustration. Sendmail and email filter logs have fields like src_user, recipient, file_name, size, etc. spread across multiple events. The Enterprise Security dashboards expect all of this data to be on one event that gets CIM-normalized into one row of data. I built a transaction child node, but it can't be accelerated and means I have to rewrite a lot of the ES email dashboards. Is there a practical and efficient way to build the transactions prior to normalization into the data model so I can benefit from the ES dashboards and acceleration? This must be a common problem for many customers using ES.
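One way to pre-build the transaction without the non-accelerable transaction command is a scheduled stats aggregation keyed on the queue ID, along the lines of the accepted answer. A sketch, assuming CIM-style field extractions (src_user, recipient, file_name, size) already exist on the raw sendmail events:

```
index=mta_syslog_log sourcetype=sendmail qid=*
| stats min(_time) as _time
        values(src_user) as src_user
        values(recipient) as recipient
        values(file_name) as file_name
        max(size) as size
    by qid
```

Writing this to a summary index on a 5-minute schedule yields one row per email transaction, which the Email data model can then accelerate normally.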
