Splunk Search

Time interval for searches

kholleran
Communicator

I have a best practice time question for veteran Splunkers out there. Right now I have a failed login search that runs every 15 minutes over the last 15-minute interval and alerts if failed logins on a particular server are greater than 3.

However, if I fail to log in twice at 1:59 and then twice at 2:01, that would be 4 failed logins, but it would not alert because the 2:00 run of the search would see 2 failures from 1:45 to 2:00 and the 2:15 run would see 2 failures from 2:00 to 2:15.

So the way I see it, I need some overlap, such as searching every 15 minutes over the last 20 minutes, but I was wondering how others handle this and if there is a "best practice."
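For reference, what I have in mind would look roughly like this (the index, sourcetype, and field names are just placeholders, not my actual search):

index=security sourcetype=wineventlog EventCode=4625 host=myserver earliest=-20m@m latest=now
| stats count AS failed_logins
| where failed_logins > 3

scheduled to run every 15 minutes, so consecutive runs overlap by 5 minutes.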

Thanks.

Kevin

1 Solution

ftk
Motivator

As a best practice, when I set up my alerts I build in a delay to ensure all items have been forwarded and indexed at the indexer, so they do not get skipped. For example, if I run a search every 15 minutes and it runs at 2:00:00, it will look at data from 1:45:00-2:00:00. If an event gets logged at 1:59:50, it might not get forwarded and indexed until 2:00:30 or so, but it will be indexed with a 1:59:50 timestamp. The 2:00:00 search runs before the event is indexed, and the next scheduled search running at 2:15:00 only looks at events from 2:00:00-2:15:00, so the event never gets alerted on.

As such, I always add a relative time range to my alert searches. For an every-15-minutes search, for example, I do:

my search terms earliest=-20m@m latest=-5m@m

If I run this search at 2:00:00, it will look at data from 1:40:00-1:55:00. That gives me a 5-minute buffer to account for forwarding/indexing delays.

Now for a search that looks for an aggregate of events, I think going with an overlap might be the way to go, if you are concerned about missing events. Just make sure you don't make the overlap too big or you might end up with duplicate alerts for the same events.
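For your failed login case, combining that buffer with a small overlap could look something like this (the index, sourcetype, and field names are just placeholders for whatever your data looks like, so adjust the window and threshold to fit):

index=security sourcetype=wineventlog EventCode=4625 host=yourserver earliest=-25m@m latest=-5m@m
| stats count AS failed_logins BY host
| where failed_logins > 3

Scheduled every 15 minutes, that gives you a 20-minute window ending 5 minutes in the past: delayed events still get counted, and a burst of failures that straddles a 15-minute boundary still trips the alert. The trade-off above still applies, though: the bigger the overlap, the more likely the same failures alert twice.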
