Hi,
I've been experiencing some issues with alerts triggering. I have a handful of alerts (5-10), and while most of them trigger as expected, some of them do not. I had assumed that maybe I had created those alerts incorrectly, but today our Splunk instance happened to crash and restart, and all of a sudden the alerts that had previously not been coming through started working!
Now my concern is: if those alerts are working now, does that mean some of the alerts that were previously working have stopped?
I assume this has something to do with some kind of concurrent search/job limit, but I haven't been able to find good information on how any such limits work, or how to confirm that this is actually the issue (my team's admin tells me I'm under whatever limits apply to me). I also haven't found anything on how to tell whether a given alert job will run or be skipped because of this hypothetical limit, or how to control which alerts get triggered if I am over it.
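For reference, the closest thing I've found to a way of checking is a scheduler-log search along these lines. This is just a rough sketch: I'm assuming skipped scheduled runs are recorded in _internal under sourcetype=scheduler with savedsearch_name/status/reason fields, and I may well be looking in the wrong place:

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, reason
| sort - count

Even if that's the right place to look, I'd still like to understand how the limit itself works and is calculated.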
Any thoughts? Thanks.