Alerting

Subsearch hitting limit. How to circumvent

steven10172
Explorer

I'm currently developing a search that will run every 15 minutes as an alert. I would like the alert to send an email only if the current usage exceeds the average usage by a certain threshold.

In the query below I grab the usage for the last 15 minutes and compare it to the average usage for the previous 24 hours. I only show results where the current usage is at least 2x the average, so I can trigger the alert based on the number of results returned.

* index=voice earliest=-15m latest=-0m
| fields _raw,host
| eval bytes=len(_raw)
| eval kilobytes=bytes/1024
| eval uncompressedKB=kilobytes/.6
| stats sum(uncompressedKB) as Last15Minutes by host
| join host [ search * index=voice earliest=-25h@h latest=-1h@h
    | fields _raw,host
    | eval tmp_bytes=len(_raw)
    | eval tmp_kilobytes=tmp_bytes/1024
    | eval tmp_uncompressedKB=tmp_kilobytes/.6
    | stats sum(tmp_uncompressedKB) as Last24Hour by host ]
| eval avgKB=Last24Hour/24/60*15 
| where (avgKB*2.00)<Last15Minutes
| table host,Last15Minutes,Last24Hour,avgKB

The issue I'm having is that the average comes out really low, and it seems that the subsearch is hitting the 60-second subsearch limit and returning only partial results. Is there a way to combine the subsearch into the main search, or to make the subsearch its own scheduled search whose results get cached?

P.S. I don't have access to the .conf files.
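One join-free way to approach this (a rough, untested sketch that reuses the field names from the search above) is to scan the full 25-hour window once, split each event into a current or baseline bucket with a conditional eval, and compute both sums in a single stats:

index=voice earliest=-25h@h latest=now
| eval uncompressedKB=len(_raw)/1024/0.6
| eval window=case(_time>=relative_time(now(), "-15m"), "current",
                   _time<relative_time(now(), "-1h@h"), "baseline")
| stats sum(eval(if(window=="current", uncompressedKB, null()))) as Last15Minutes
        sum(eval(if(window=="baseline", uncompressedKB, null()))) as Last24Hour
        by host
| eval avgKB=Last24Hour/24/60*15
| where Last15Minutes > avgKB*2
| table host, Last15Minutes, Last24Hour, avgKB

There is no subsearch here, so the 60-second limit does not apply, but every run scans 25 hours of raw events, so the base search itself is heavier.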


chanfoli
Builder

Wouldn't the easiest way around the default subsearch time limit be to reverse your searches, i.e. make the narrower search your subsearch?
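For reference, the reversed version would look something like this (untested sketch; the 60-second limit then applies to the 15-minute search inside the brackets instead):

index=voice earliest=-25h@h latest=-1h@h
| eval uncompressedKB=len(_raw)/1024/0.6
| stats sum(uncompressedKB) as Last24Hour by host
| join host [ search index=voice earliest=-15m latest=now
    | eval uncompressedKB=len(_raw)/1024/0.6
    | stats sum(uncompressedKB) as Last15Minutes by host ]
| eval avgKB=Last24Hour/24/60*15
| where Last15Minutes > avgKB*2
| table host, Last15Minutes, Last24Hour, avgKB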


steven10172
Explorer

It would, but even though the smaller search is faster, it still takes more than 60 seconds to complete.


emiller42
Motivator

If you pull out the subsearch and schedule it as its own saved search, you can refer to its most recent results via the | loadjob command. (Splunk Reference)
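For example (sketch only; the saved-search name voice_24h_baseline and the nobody:search namespace are placeholders), schedule the 24-hour aggregation on its own, say hourly:

index=voice earliest=-25h@h latest=-1h@h
| eval uncompressedKB=len(_raw)/1024/0.6
| stats sum(uncompressedKB) as Last24Hour by host

Then the 15-minute alert search can pull in the cached results instead of recomputing them:

index=voice earliest=-15m latest=now
| eval uncompressedKB=len(_raw)/1024/0.6
| stats sum(uncompressedKB) as Last15Minutes by host
| join host [ | loadjob savedsearch="nobody:search:voice_24h_baseline" ]
| eval avgKB=Last24Hour/24/60*15
| where Last15Minutes > avgKB*2
| table host, Last15Minutes, Last24Hour, avgKB

Because loadjob only reads the artifacts of the already-finished scheduled search, the subsearch returns almost instantly and stays well under the limit.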

Another idea is to use summary indexing to store the aggregate data in various time slices, and then search that for your final output, but that's more complicated.
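A rough outline of that approach (it assumes a summary index, called voice_usage_summary here as a placeholder, already exists, which may require an admin given the lack of .conf access): schedule an hourly roll-up that collects per-host usage into the summary index:

index=voice earliest=-1h@h latest=@h
| eval uncompressedKB=len(_raw)/1024/0.6
| stats sum(uncompressedKB) as hourlyKB by host
| collect index=voice_usage_summary

The alert's subsearch then only has to aggregate 24 pre-summarized rows per host, which finishes well inside the limit:

index=voice earliest=-15m latest=now
| eval uncompressedKB=len(_raw)/1024/0.6
| stats sum(uncompressedKB) as Last15Minutes by host
| join host [ search index=voice_usage_summary earliest=-24h@h latest=@h
    | stats sum(hourlyKB) as Last24Hour by host ]
| eval avgKB=Last24Hour/24/60*15
| where Last15Minutes > avgKB*2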
