
Time interval for searches

kholleran
Communicator

I have a best practice time question for veteran Splunkers out there. Right now I have a failed login search that runs every 15 minutes over the last 15 minute interval and alerts out if failed logins on a particular server are greater than 3.

However, if I failed logging in twice at 1:59 and then twice at 2:01, it would be 4 failed logins but would not alert out because the 2:00 run of the search would see 2 failures from 1:45 to 2:00 and the 2:15 search would see 2 failures from 2:00 to 2:15.

So the way I see it is I need to have some overlap, such as searching every 15 minutes over the last 20 minutes, but I was wondering how others handle this and if there is a "best practice."
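For reference, the search behind that alert is roughly along these lines (the index, sourcetype, and field values here are just placeholders, not my real ones), scheduled every 15 minutes over the last 15 minutes:

index=os sourcetype=linux_secure action=failure host=myserver | stats count | where count > 3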

Thanks.

Kevin

1 Solution

ftk
Motivator

As a best practice when I set up my alerts, I build in a delay to ensure all items have been forwarded and indexed at the indexer so they do not get skipped by the scheduled search. For example, if I run a search every 15 minutes and it runs at 2:00:00, it will look at data from 1:45:00-2:00:00. If an event gets logged at 1:59:50 it might not get forwarded and indexed until 2:00:30 or so, but it will be indexed with a 1:59:50 timestamp. This means the next scheduled search, running at 2:15:00 and looking at events from 2:00:00-2:15:00, will miss this event.

As such I always add a relative time range to my alert searches. For a search that runs every 15 minutes, for example, I do:

my search terms earliest=-20m@m latest=-5m@m

If I run this search at 2:00:00 it will look at data from 1:40:00-1:55:00. This gives me a 5 minute buffer to account for forwarding/indexing delays.
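In savedsearches.conf terms you can express the same thing with the dispatch time settings instead of inline earliest/latest. Something like this (the stanza name and base search are placeholders):

[failed_login_alert]
search = my search terms
dispatch.earliest_time = -20m@m
dispatch.latest_time = -5m@m
cron_schedule = */15 * * * *
enableSched = 1
counttype = number of events
relation = greater than
quantity = 3

Either style works; the point is just that the search window trails real time by a few minutes.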

Now for a search that looks for an aggregate of events, I think going with an overlap might be the way to go, if you are concerned about missing events. Just make sure you don't make the overlap too big or you might end up with duplicate alerts for the same events.
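For the failed login case, an overlapping window could look something like this (again, the index/sourcetype/host values are placeholders), run on the same 15-minute schedule:

index=os sourcetype=linux_secure action=failure host=myserver earliest=-25m@m latest=-5m@m | stats count | where count > 3

That is a 20-minute window evaluated every 15 minutes, so bursts that straddle a boundary still get counted together, and you keep the 5 minute indexing buffer. If double alerts for the same burst become a nuisance, alert throttling (alert.suppress / alert.suppress.period in savedsearches.conf) can keep the alert from firing twice in a row.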
