Splunk Search

What is a good query to monitor for someone sending too many alerts?

MikeBertelsen
Communicator

I received an email from the ES techs that someone had sent over 128k alerts to the same address in a 24-hour period.
I tracked it down to two private alerts and disabled them.
Researching further, I found that those emailed alerts were only the ones that were sent successfully; a lot of people did not get their alerts or scheduled reports for that day.

Here is a sample of the error messages from _internal:
04-09-2019 09:15:07.768 -0500 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/search/bin/sendemail.py "results_link=http://TheSearchHeadServer:8000/app/ALL_my_sales/@go?sid=scheduler
karlA28BCL_ZGlnaXRhbF9zYWxlcwRMD559a15d8ba081a9e5_at_1554819300_52888" "ssname=Null Pointer" "graceful=True" "trigger_time=1554819306" results_file="/opt/splunk/var/run/splunk/dispatch/schedulerkarlA28BCL_ZGlnaXRhbF9zYWxlcw_RMDL559a15d8ba081a9e5_at_1554819300_52888/results.csv.gz"': ERROR:root:(452, '4.3.1 Insufficient system storage', u'SplunkSH@gmail.com') while sending mail to: karlA28BCL@gmail.com
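
For the broader question of spotting someone sending too many alerts before the mail server starts rejecting, one option is to count email alert actions per user and saved search from the scheduler log. A rough sketch, assuming the default scheduler sourcetype in _internal and that the alert_actions, user, and savedsearch_name fields are populated in your environment; the 1,000-per-day threshold is only a placeholder:

index=_internal sourcetype=scheduler alert_actions="*email*"
| bin _time span=1d
| stats count by _time, user, savedsearch_name
| where count > 1000

That could be saved as its own alert so a runaway sender shows up before deliveries start failing.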


MikeBertelsen
Communicator

Here is the query I have so far:
host=SplunkSH index=_internal "-0500 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/search/bin/sendemail.py" "Insufficient system storage'" "while sending mail to:"
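
One way to turn that into a monitor is to extract the destination address from the "while sending mail to:" tail and count per recipient. A rough sketch building on the query above; the recipient field name and the 1,000-per-day threshold are placeholders to tune for your environment:

host=SplunkSH index=_internal "ERROR ScriptRunner" "sendemail.py" "while sending mail to:"
| rex "while sending mail to:\s+(?<recipient>\S+)"
| bin _time span=1d
| stats count by _time, recipient
| where count > 1000

Scheduling that over the last 24 hours and alerting on any returned row would flag a repeat of the 128k-to-one-address scenario.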
