Alerting

Creating an alert when the Value of a counter hits a threshold

saccam447
Explorer

I'm trying to create an alert that triggers when the Value field in the results of my search ( source="perfmon:physical disk latency" counter="avg. disk sec/read" host="servername" instance=* ) is greater than 0.25. Does this need to be a basic or an advanced condition? I'm new to creating alerts.

Can someone point me in the right direction?

Much appreciated.

Scott


BobM
Builder

You have two choices, both using the where command; which one to use depends on what you want your users to see.
If you add it to your search string as below, you will only get events where Value is above 0.25. You can then set the alert to trigger if the number of events is greater than 0.

source="perfmon:physical disk latency" counter="avg. disk sec/read" host="servername" instance=* | where Value>0.25
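
For reference, and only as a rough sketch, this first option could also be configured directly in savedsearches.conf rather than through the UI. The stanza name, cron schedule, time range, and email address below are placeholders, not part of the original answer:

[Disk read latency over 0.25]
search = source="perfmon:physical disk latency" counter="avg. disk sec/read" host="servername" instance=* | where Value>0.25
enableSched = 1
cron_schedule = */15 * * * *
dispatch.earliest_time = -15m
dispatch.latest_time = now
# Basic condition: trigger when the number of events is greater than 0
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = you@example.com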

Or you can leave the condition out of your search and add it as a "custom condition" in the alert. The user will then only get an alert if one or more events have a Value over 0.25, but will see all events in the results. I would tend to display the results as a table and use the time picker to restrict the time range.

source="perfmon:physical disk latency" counter="avg. disk sec/read" host="servername" instance=* | table _time host object instance counter Value

Then click on "Create an Alert", give it a search name, share it if required, and click Next.
Change "Condition" to "If custom condition is met" and, in the "conditional search string" box, type

where Value > 0.25

Set the schedule, throttling, expiration, and severity to what you want, then set it up to send you an email or whatever your preferred alert action is.

You will then get an alert whenever any Value is over 0.25, and when you look at the results you will see all the events in your time range.
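
Similarly, the custom-condition version might look something like this as a savedsearches.conf stanza (again only a sketch; the stanza name, schedule, throttle period, and email address are placeholders):

[Disk read latency over 0.25 - custom condition]
search = source="perfmon:physical disk latency" counter="avg. disk sec/read" host="servername" instance=* | table _time host object instance counter Value
enableSched = 1
cron_schedule = */15 * * * *
dispatch.earliest_time = -15m
dispatch.latest_time = now
# Custom condition: the alert fires only if this conditional search returns results
counttype = custom
alert_condition = where Value > 0.25
# Throttling, expiration, severity and the email action mentioned above
alert.suppress = 1
alert.suppress.period = 60m
alert.expires = 24h
alert.severity = 4
action.email = 1
action.email.to = you@example.com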

BobM
Builder

I will update the reply.


saccam447
Explorer

Your first suggestion worked perfectly. I had issues getting the second option to work correctly, but I am really interested in it as well so I don't need to create a separate report for the alert. Would you please elaborate?
