All Apps and Add-ons

Why does the indexer sometimes not classify AWS GuardDuty data with the proper sourcetype?

emcbolton
New Member

We are using the HTTP Event Collector (HEC) to ingest AWS GuardDuty CloudWatch Events via Amazon Kinesis Data Firehose. I have worked with a Splunk SE to get this running in two environments: one (on a dev license) is a single-instance deployment, and the other is distributed, with the HEC instance separate from the indexer. In both environments, the indexer does not always classify the incoming Firehose data as aws:cloudwatch:guardduty, and the unclassified data never reaches the Splunk GuardDuty dashboard. What I mean is that the data sometimes has the clean Splunk JSON formatting (plus signs and all), and other times arrives as multiple GuardDuty events concatenated into one text blob (no Splunk formatting, just one long string of text).
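If Firehose is delivering to the HEC raw endpoint (/services/collector/raw), one Splunk-side mitigation is to force index-time line breaking for the sourcetype. This is only a sketch, assuming each GuardDuty finding is a JSON object beginning with {"version"; adjust the LINE_BREAKER pattern to match your actual payloads. Note that if Firehose posts to the event endpoint (/services/collector/event), index-time line breaking does not apply and this will not help.

```
# props.conf on the indexer (or heavy forwarder) -- a sketch, not verified against your data
[aws:cloudwatch:guardduty]
SHOULD_LINEMERGE = false
# Break between back-to-back JSON objects: ...}{"version"...
LINE_BREAKER = \}(\s*)\{"version"
TRUNCATE = 0
```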

Is there a way to fix this, alert on the issue from Splunk, or have Splunk re-read the data?
Does this likely mean I need a Lambda function to break these into separate events (maybe it's a CloudWatch-to-Firehose issue) prior to their being sent to Firehose?
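A Lambda function attached as a Firehose data-transformation step is one common way to handle this. Below is a minimal sketch (my own, not from any Splunk/AWS add-on) that splits a blob of concatenated JSON objects into newline-delimited events before Firehose delivers them to HEC. The record input/output shape (recordId, result, data, base64-encoded) is the standard Firehose transformation contract; the splitting logic assumes the blob is back-to-back JSON objects with optional whitespace between them.

```python
import base64
import json


def lambda_handler(event, context):
    """Firehose transformation sketch: split a payload containing
    multiple concatenated GuardDuty events ({...}{...}) into
    newline-delimited JSON so Splunk can break them apart."""
    output = []
    for record in event["records"]:
        payload = base64.b64decode(record["data"]).decode("utf-8")

        # raw_decode parses one JSON object and reports where it ended,
        # letting us walk through several objects glued together.
        decoder = json.JSONDecoder()
        idx, events = 0, []
        while idx < len(payload):
            obj, end = decoder.raw_decode(payload, idx)
            events.append(json.dumps(obj))
            idx = end
            # Skip any whitespace between objects.
            while idx < len(payload) and payload[idx].isspace():
                idx += 1

        data = ("\n".join(events) + "\n").encode("utf-8")
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(data).decode("utf-8"),
        })
    return {"records": output}
```

With this in place the Firehose stream would hand Splunk one JSON object per line, so each GuardDuty finding becomes its own event regardless of how CloudWatch batched them.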
