My setup is simple. Splunk reads the file /var/log/snmp.log into the index "snmp". I created a search: index="snmp"
and created an alert for it. It is scheduled to run every minute over the last minute of results, and it should trigger if the count > 1. For the action, I send the alert to Alert Manager, but it's not working. I can see the search completing under Jobs, but no alert appears in Triggered Alerts or in Alert Manager.
You might have more than 1 minute of indexing/event latency.
Try this in your search instead:
index=snmp _index_earliest=-1m
This filters by the time the event was indexed (_indextime) rather than by the timestamp parsed from the event itself.
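To check how much latency you actually have, you can compare _indextime against _time directly. A rough sketch (the 60-minute window and percentile are just illustrative choices):

```
index=snmp earliest=-60m
| eval latency_secs = _indextime - _time
| stats avg(latency_secs) max(latency_secs) perc95(latency_secs)
```

If max(latency_secs) regularly exceeds 60, a one-minute alert window keyed on event time will miss events.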
It is a best practice to use a "trailing" search window for these types of alerts. Something like this searches the one-minute window from 11 minutes ago to 10 minutes ago, which is much more likely to catch events that arrived late:
index=snmp earliest=-11m@m latest=-10m@m
Play with _index_earliest, _index_latest, earliest and latest a little and I think you'll solve your problem.
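If you end up configuring the alert outside the UI, a trailing-window version might look roughly like this in savedsearches.conf (the stanza name and threshold here are illustrative, not your actual saved search; wire up your Alert Manager action separately):

```
[snmp_volume_alert]
search = index=snmp
enableSched = 1
# run every minute
cron_schedule = * * * * *
# trailing one-minute window to absorb indexing latency
dispatch.earliest_time = -11m@m
dispatch.latest_time = -10m@m
# trigger when more than 1 event is found
counttype = number of events
relation = greater than
quantity = 1
```

The key idea is that the dispatch window deliberately lags real time, so events that take a few minutes to be indexed are still inside the window when the search runs.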
Here's a good write-up on the issue I think you're facing:
https://answers.splunk.com/answers/11870/how-can-i-view-the-indexing-latency-for-incoming-events-in-...