Alerting

Alert throttling capabilities

JIrojas
Explorer

Hi,

I've found an issue with alert throttling that I was not able to work around.

Given a search that works like this:

| mstats avg(_value) as value WHERE metric_name="disk.used_percent" span=5m by host, path
| eval "Disk Used (%)"=round(value,2)
| search "Disk Used (%)" >= 90 AND "Disk Used (%)" < 95
| table host, path, "Disk Used (%)"

If I set the throttling to "per result", the problem is that if 50 hosts cross the threshold, I get 50 individual alerts. With an email action that means 50 emails, which in this particular case is undesirable.

If I instead set the trigger condition to "Once" rather than "For each result", I get a single email with all 50 instances in the inline table, but then some alerts can go missing during the throttle window: hosts that newly cross the threshold while the alert is throttled never trigger (which is exactly the use case "per result" throttling is meant to solve).

Basically, what I need is a "smart throttling": silence alerts per host that has already triggered, while still batching all new occurrences at a given point into a single alert event, if possible.
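One way to approximate this in the search itself (so the alert can use the "Once" trigger with no throttling, and new offenders still arrive batched in a single email) is to track per-host alert state in a lookup. This is only a rough sketch: it assumes a lookup hypothetically named disk_alert_state (with fields host, path, last_alerted) has already been created, and uses a one-hour (3600-second) suppression window as an example:

| mstats avg(_value) as value WHERE metric_name="disk.used_percent" span=5m by host, path
| eval "Disk Used (%)"=round(value,2)
| search "Disk Used (%)" >= 90 AND "Disk Used (%)" < 95
| lookup disk_alert_state host path OUTPUTNEW last_alerted
| where isnull(last_alerted) OR now() - last_alerted > 3600
| eval last_alerted=now()
| outputlookup append=true disk_alert_state
| table host, path, "Disk Used (%)"

The lookup filter drops host/path pairs that already alerted within the window, and outputlookup records the ones that fire now. Note that append=true accumulates duplicate rows over time, so the lookup would need periodic cleanup (e.g. a scheduled search that dedups host/path keeping the latest last_alerted).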

Thanks!
