Alerting

email alert problem

parth_jec
Path Finder

Hi,
I have installed splunk indexer v4.3.1-3 and configured email alert. I can see the alert being triggered in the alert manager but didn't recieve the email (double checked the email provided in alert config). I can see following errors/warnings in logs:

-splunkd.log-
07-09-2012 15:15:44.475 -0400 ERROR SearchResults - Unable to open output file: path=/apps/splunk/var/run/splunk/dispatch/scheduler__admin__search__SGVhcnRiZWF0IEZvdW5k_at_1341861060_390fb9622c8aadd9/per_result_alert/tmp_238.csv.gz.tmp error=No such file or directory

-scheduler.log-
07-09-2012 15:14:15.702 -0400 WARN SavedSplunker - Reached maximum number of per-result alerts for savedsearch_id="admin;search;Heartbeat Found", sid="scheduler__admin__search__SGVhcnRiZWF0IEZvdW5k_at_1341860940_a5a1bb207ef39a5f", limit=500
07-09-2012 15:14:16.932 -0400 INFO SavedSplunker - savedsearch_id="admin;search;Heartbeat Found", user="admin", app="search", savedsearch_name="Heartbeat Found", status=success, digest_mode=0, scheduled_time=1341860940, dispatch_time=1341860946, run_time=2.482, result_count=48931, alert_actions="email", sid="scheduler__admin__search__SGVhcnRiZWF0IEZvdW5k_at_1341860940_a5a1bb207ef39a5f", suppressed=0, fired=500, skipped=48431, action_time_ms=308360, thread_id="AlertNotifierWorker-14"

-python.log-
2012-07-09 15:17:34,085 ERROR Could not get job status for searchId=scheduler__admin__search__SGVhcnRiZWF0IEZvdW5k_at_1341861060_390fb9622c8aadd9, Error: [HTTP 404] https://127.0.0.1:8089/services/search/jobs/scheduler__admin__search__SGVhcnRiZWF0IEZvdW5k_at_1341861060_390fb9622c8aadd9?message_level=warn

I cannot identify the issue from these logs. Can someone please guide me?

Thanks,


parth_jec
Path Finder

Thanks for the reply, the problem was with the email server on the host.

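For anyone hitting the same symptom: a quick way to exercise the mail path is to fire the sendemail search command from an ad hoc search. A minimal sketch, assuming your default email settings and a hypothetical recipient address (replace you@example.com with your own):

index=_internal | head 1 | sendemail to="you@example.com" subject="Splunk mail test"

If this search also fails to deliver, the problem is in the SMTP path (the mailserver setting in alert_actions.conf, or the MTA running on the Splunk host) rather than in the alert configuration itself.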


mataharry
Communicator

Maybe the search returns too many events for the alert action:
"Reached maximum number of per-result alerts"

Please try a different search that returns a count of errors instead of the raw events. Instead of


source=mysource sourcetype=mysourcetype ERRORKEYWORD

with condition: number of results > 0, use

source=mysource sourcetype=mysourcetype ERRORKEYWORD | stats count by host

and alert on number of results > 0 (meaning at least one host has at least one error).
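If you genuinely need one alert per result for that many events, the per-result cap (500 by default, per the scheduler.log warning) can usually be raised in limits.conf. A sketch, assuming the max_per_result_alerts setting exists in your version; check your version's limits.conf spec before relying on it:

# limits.conf on the instance running the scheduler
[scheduler]
# assumption: this key controls the per-result alert cap (default 500)
max_per_result_alerts = 1000

That said, aggregating with stats as shown above is usually the better fix: one digest alert instead of tens of thousands of per-result emails.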
