Getting Data In

Delay in logging an event

sumitpandey1
New Member

We have a question about a Splunk alert that gets triggered at night and sends us false alarms. Splunk instance: https://<<InstanceNameHere>>:8000

The alert search is scheduled to run every 5 minutes and sends us an alarm when it sees no events. What's going on is that between 2-3 am CT, Splunk starts sending us false alarms because the search didn't see any events. But when we look back at that time frame, we can find events there.
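For context, the alert follows the usual no-events pattern, something like this (the index and sourcetype names are masked placeholders here, not our actual configuration):

index=your_index sourcetype=your_sourcetype
| stats count
| where count=0

The saved search is set to trigger the alert whenever this returns a result, i.e. whenever the event count for the window is zero.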

This brings us to the conclusion that either the events are delayed in reaching Splunk, or the alert search itself cannot see those events (which would be very strange, and we don't know how that could happen).

While investigating, we came across a troubleshooting doc: https://docs.splunk.com/Documentation/Splunk/7.0.2/Troubleshooting/Troubleshootingeventsindexingdela.... As per its suggestion, we ran a search with eval delay_sec=_indextime-_time | timechart min(delay_sec) avg(delay_sec) max(delay_sec) by host and found that the delays are higher between 2-3 am CT than at other times of the day.
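For anyone who wants to reproduce this, the full diagnostic search looked roughly like the following; the base search and the 5-minute span are placeholders we filled in, not part of the doc's suggestion:

index=your_index earliest=-24h
| eval delay_sec = _indextime - _time
| timechart span=5m min(delay_sec) avg(delay_sec) max(delay_sec) by host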

We checked the thruput limit configured for this Splunk instance and found it set to 512 KBps. We are not sure if this is the cause of the problem. We are still researching, but if anyone has seen this or knows about it, please let us know how to fix the issue.
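If the cap turns out to be the bottleneck, one thing we may try is raising or removing it in limits.conf on the forwarder. A sketch, assuming the 512 KBps value comes from the [thruput] stanza there:

# limits.conf on the forwarder sending these events
[thruput]
# 0 removes the cap; a higher KBps value raises it instead
maxKBps = 0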


elliotproebstel
Champion

We also saw issues like this, so we changed our searches that run every 5 minutes to filter on the _indextime field instead of the _time field. We did this by creating a macro called alert_time that takes one parameter ($minutes$) and is defined as: _index_earliest=-$minutes$m _index_latest=now. We then set the searches to look back over, say, the previous hour, but specify alert_time(5) inline in the search.
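In macros.conf form, that looks roughly like this (the macro definition is as described above; the surrounding search below is only an illustration with a placeholder index name):

# macros.conf
[alert_time(1)]
args = minutes
definition = _index_earliest=-$minutes$m _index_latest=now

A search scheduled every 5 minutes can then look back an hour by _time but only alert on recently indexed events:

index=your_index earliest=-1h `alert_time(5)`
| stats count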

This worked really well for us until we moved to a multisite architecture with two indexer clusters that replicate between the sites. We have since needed to delay our searches by creating a macro called alert_time_shifted, defined as: _index_earliest=-10m _index_latest=-5m.
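In the same macros.conf sketch, the shifted variant takes no arguments:

[alert_time_shifted]
definition = _index_earliest=-10m _index_latest=-5m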

You can play around with the times here and see what works well for your environment to avoid hitting the strange 2am-3am delays.


somesoni2
Revered Legend

First, refrain from sharing sensitive information like your instance URL (I've masked it). The delay could be at the indexer level as well if there is too much data coming its way. We generally recommend allowing a few minutes of delay in the search's time range to account for this indexing delay. So my question would be: what time range are you using? Since you're running it every 5 minutes, my suggestion would be to use -10m@m as the start (earliest) time and -5m@m as the end (latest) time (assuming you were looking at 5 minutes' worth of data).
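In savedsearches.conf terms, that delayed window would look something like this (the stanza name and schedule here are illustrative, not taken from your setup):

# savedsearches.conf
[no_events_alert]
enableSched = 1
cron_schedule = */5 * * * *
# search a 5-minute window that ends 5 minutes in the past
dispatch.earliest_time = -10m@m
dispatch.latest_time = -5m@m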
