Hello,
I am having a hard time pinning down why most of my real-time alerts have stopped working. I have looked through scheduler.log and python.log and did not find anything insightful about the problem. Here are the symptoms:
Please follow the steps below to debug this issue:
1.) Back up $SPLUNK_HOME/etc/log.cfg, then edit it and locate the following entry:
category.SavedSplunker=INFO,scheduler
2.) Change INFO to DEBUG (i.e. category.SavedSplunker=DEBUG,scheduler)
3.) Save the changes and restart Splunk
4.) Try to generate an event that should trigger an alert but does not
5.) The debug messages (with diagnostics) should be written to $SPLUNK_HOME/var/log/splunk/scheduler.log
6.) Identify the root cause, fix it, then revert the change made in step 2 and restart Splunk to get rid of the DEBUG messages you hopefully no longer need
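Steps 1–3 can be sketched as a small shell snippet. This is just a sketch, not an official Splunk procedure: it assumes GNU sed, and when SPLUNK_HOME is unset it builds a scratch directory with a one-line log.cfg so you can try the edit safely before touching a real install.

```shell
# Sketch of steps 1-3 above; assumes GNU sed. If SPLUNK_HOME is unset, a
# scratch copy of log.cfg is created so the edit can be tested harmlessly.
if [ -z "$SPLUNK_HOME" ]; then
    SPLUNK_HOME=$(mktemp -d)
    mkdir -p "$SPLUNK_HOME/etc"
    echo 'category.SavedSplunker=INFO,scheduler' > "$SPLUNK_HOME/etc/log.cfg"
fi
cfg="$SPLUNK_HOME/etc/log.cfg"

cp "$cfg" "$cfg.bak"   # step 1: back up log.cfg before editing

# step 2: flip the SavedSplunker category from INFO to DEBUG
sed -i 's/^category\.SavedSplunker=INFO,scheduler$/category.SavedSplunker=DEBUG,scheduler/' "$cfg"

# step 3: restart Splunk so the new log level takes effect
# (skipped automatically when running against the scratch demo directory)
[ -x "$SPLUNK_HOME/bin/splunk" ] && "$SPLUNK_HOME/bin/splunk" restart

grep '^category\.SavedSplunker=' "$cfg"
# -> category.SavedSplunker=DEBUG,scheduler
```

For step 6, restore the backup and restart again: `cp "$cfg.bak" "$cfg" && "$SPLUNK_HOME/bin/splunk" restart`.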
Hope This Helps!
I'm in the same situation; what ended up being the solution for you?