I'm trying to automate sending a "clear" Splunk alert by comparing results from a previous search with the current one. This search works:
source="/path/to/error.log" earliest=-10m@m latest=-5m@m "file does not exist" | eval results="Old" | append [search source="/path/to/error.log" earliest=-5m@m latest=now "file does not exist" | eval results="New"] | stats sum(eval(if(match(results,"Old"),1,0))) as Old sum(eval(if(match(results,"New"),1,0))) as New
(I'm not alerting for 404s - I just needed some data!)
This gives me a nice two-column view of the Old and New counts, and for the alert trigger the custom condition is "search Old>0 AND New=0".
The trouble is that I don't want to monitor a single log - I want to monitor a sourcetype, and to table these results by host and source. The eval statements are the only way I can think of to keep the old and new values distinct - "stats count by host, source" just dumps everything into a single value. I'd like to see something like:
host, source, old, new
webserver1, web1_error.log, 15, 3
webserver2, web2_error.log, 24, 0
Am I on the right track? I'd be grateful for any nudges in the right direction, because I've hit a wall.
Try this
sourcetype=xyz earliest=-10m@m latest=@m yourothercriteria
| eval timespan=if(_time < (now()-300),"Old","New")
| stats count(eval(timespan="Old")) as Old count(eval(timespan="New")) as New by host source
This will also be more efficient than the subsearch technique. now() is the time that the search started running, so now()-300 is 5 minutes before that.
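If you want to feed this straight into your clear alert, one way (a sketch, with sourcetype=xyz and yourothercriteria as placeholders, same as above) is to tack a where clause onto the end so that only host/source pairs that had old errors but no new ones survive:

sourcetype=xyz earliest=-10m@m latest=@m yourothercriteria
| eval timespan=if(_time < (now()-300),"Old","New")
| stats count(eval(timespan="Old")) as Old count(eval(timespan="New")) as New by host source
| where Old>0 AND New=0

Then the alert can simply trigger whenever the number of results is greater than zero, instead of needing the custom search condition.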
So much more elegant than mine! This is excellent - I can see it working already. Thanks!