Splunk Search

I know the data is in the index, why didn't my scheduled search find all of the events?

Mick
Splunk Employee

I have a saved search set up to check every minute for file changes. I have the start time set to [-1m] to search back 1 minute. I have the schedule set to [BASIC] and run every [minute]. I have the alert condition set to [if number of events] [is greater than] [0] to send an email to us.

Last night we had a file change, but for some reason we didn't receive email alerts for all of the events, even though they appear when I search [All time]. It appears the 1-minute search is not going back the full minute to catch all the messages.

If I click the search link in the email we received, it shows only the 7 events listed in the email.

If I change the timeframe from [Custom] to [All Time], I see a total of 13 events, which includes all of the relevant events.
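
For reference, here is a rough sketch of how a scheduled search like this is typically defined in savedsearches.conf - the stanza name, search terms, and email address below are only placeholders, not my actual config -

[file_change_alert]
search = <your_search_terms>
enableSched = 1
cron_schedule = * * * * *
dispatch.earliest_time = -1m
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = alerts@example.com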

1 Solution

Mick
Splunk Employee

It's likely that the events you're referring to as missed were not actually present in the index at the time the search ran. The only reason Splunk wouldn't pick them up is that they weren't there yet.

You can verify this with the following search - <your_search_terms> | convert ctime(_indextime) as IT - the IT field will tell you when each event was actually written to the index. When you're indexing a high volume of data, or data from a lot of different sources, there can be a lag between an event being produced and Splunk actually writing it to the index.
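
For example, building on that, a sketch that shows the event time, the index time, and the lag in seconds side by side (lag_seconds and IT are just illustrative field names) -

<your_search_terms> | eval lag_seconds=_indextime-_time | convert ctime(_indextime) as IT | table _time IT lag_seconds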

To account for this, many customers simply offset their searches a bit, so for a 'last minute' search you could start the window 2 or even 3 minutes ago and end it 1 or 2 minutes ago. For example -

<your_search_terms> startminutesago=3 endminutesago=2

You're still searching over a span of 1 minute, and running every minute means you'll still cover all possible time ranges, but you're allowing for the lag in getting data into the index.
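
Depending on your Splunk version, you can express the same offset with the earliest and latest time modifiers instead - this is a sketch of an equivalent relative time range, snapped to the minute -

<your_search_terms> earliest=-3m@m latest=-2m@m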

