Hello everyone,
I want to monitor existing alerts in Splunk. If an alert stops working properly and no longer finds anything, I want to receive a notification or an alert about it.
So far I don't know how to do this.
Is there something in the _internal index where Splunk logs its alert runs? Any suggestions?
Thanks in advance.
As a starting point, have a look at index=_internal sourcetype=scheduler
This gives you the logs of all scheduled searches: when each run started, when it completed, event count, result count, time taken, etc.
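As a minimal sketch of how those scheduler fields can be used (field names such as savedsearch_name, app, status and result_count are taken from the scheduler logs; the one-day window is just an example), a search that flags scheduled searches whose successful runs all returned zero results could look like:

index=_internal sourcetype=scheduler status=success earliest=-1d@d
| stats count as total_runs count(eval(result_count=0)) as zero_result_runs by savedsearch_name app
| where zero_result_runs = total_runs

Saved as a scheduled alert, this would notify you about searches that ran but found nothing in the chosen window.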
Thank you. I tried to build a summary with appendcols:
index=_internal sourcetype=scheduler earliest=-7d@d latest=-0d@d
| eval period="-7d"
| stats count min(result_count) as min_result_count max(result_count) as max_result_count avg(result_count) as avg_result_count by savedsearch_name app period
| eval avg_result_count= round(avg_result_count,2)
| table savedsearch_name app period min_result_count max_result_count avg_result_count count
| appendcols [search index=_internal sourcetype=scheduler earliest=-14d@d latest=-7d@d
| eval period= "-14d"
| stats count min(result_count) as min_result_count max(result_count) as max_result_count avg(result_count) as avg_result_count by savedsearch_name app period
| eval avg_result_count= round(avg_result_count,2)
| table savedsearch_name app period min_result_count max_result_count avg_result_count count
]
| appendcols [search index=_internal sourcetype=scheduler earliest=-21d@d latest=-14d@d
| eval period= "-21d"
| stats count min(result_count) as min_result_count max(result_count) as max_result_count avg(result_count) as avg_result_count by savedsearch_name app period
| eval avg_result_count= round(avg_result_count,2)
| table savedsearch_name app period min_result_count max_result_count avg_result_count count
]
But this does not work. (Note that the original subsearches also had malformed time modifiers: latest=-7@d and latest=-14@d are missing the unit and should be latest=-7d@d and latest=-14d@d.) And I don't know how to trigger an alert when the counts differ significantly from the previous weeks.
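For reference, one reason appendcols misbehaves here is that it glues columns together purely by row position, not by savedsearch_name, so as soon as the weeks contain different searches (or a different order), the rows no longer line up. A sketch of an alternative that avoids appendcols entirely: run one search over the whole range, derive a period label per event, and pivot with chart. The combined name field and the 50% threshold are arbitrary choices for illustration:

index=_internal sourcetype=scheduler earliest=-21d@d latest=@d
| eval period=case(_time >= relative_time(now(), "-7d@d"), "-7d", _time >= relative_time(now(), "-14d@d"), "-14d", true(), "-21d")
| eval name=app . ":" . savedsearch_name
| chart avg(result_count) as avg_result_count over name by period
| eval pct_change=round(('-7d' - '-14d') / '-14d' * 100, 1)
| where abs(pct_change) > 50

This yields one row per search with one column per week, so a week-over-week comparison (and an alert condition on pct_change) becomes straightforward.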