Splunk Search

Generate an alert based off the results from a table or timechart

lhowel202
New Member

Very new to Splunk, but have what I think should be a pretty straightforward task. I have a search that results in a simple timechart. I want to create an alert if a result in the timechart equals something specific. The example in the 'eval' man page is close enough to serve as the framework. Example search:

source=eqs7day-M1.csv | eval Description=case(Depth<=70, "Shallow", Depth>70 AND Depth<=300, "Mid", Depth>300 AND Depth<=700, "Deep") | table Datetime, Region, Depth, Description

What would I append, or what conditional search would I use to create an alert if the table had a value equal to say, "Shallow"?


stmyers7941
Path Finder

From the search, click Save As --> Alert, set the trigger condition to Custom, and set the custom condition to: search Depth<=70. Schedule your search to run on a regular interval (e.g., every hour) and have it search over the past hour.

If you want "real-time" you can do a real-time alert, but that's costly. Instead, set the schedule to every 5 minutes (run on cron schedule as */5 * * * *), set your earliest to -6m@m and latest to -1m@m. This will run a search every 5 minutes over the last 5 full minutes of events (with a 1 minute buffer to ensure all events have been indexed), and the alert will trigger if the field "Depth" is less than or equal to 70.
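Putting that together with the example search from the question, the scheduled alert search could look something like this (a sketch only; the filter appended at the end replaces the Custom trigger condition, with the alert set to trigger when the number of results is greater than 0):

source=eqs7day-M1.csv | eval Description=case(Depth<=70, "Shallow", Depth>70 AND Depth<=300, "Mid", Depth>300 AND Depth<=700, "Deep") | search Depth<=70

Either approach works; appending the filter to the search itself keeps the trigger condition simple ("number of results > 0") and makes the alert logic visible in the search string.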

From there, just decide what you want the action to be.


lhowel202
New Member

Thanks for the feedback, I appreciate the insight. I just tried setting the custom condition to search Depth<=70, but now I'm getting an alert constantly. I'm wondering if the example I used wasn't close enough. Here is (essentially) my actual query:

sourcetype="engine" Acceptedmessage run | timechart count span=1h by host | eval total=Server1+Server2 | eval Test1=(Server1/total)*100 | eval Test2=(Server2/total)*100 | eval Result1=if(Test1 < 40, "Error", "OK") | eval Result2=if(Test2 < 40, "Error", "OK") | fields Test1, Result1, Test2, Result2

So my resulting timechart shows me what I want: I see the number of Accepted messages per host, and then the percentage of the total that each of the two servers grabbed. If one falls below 40% I see Error in the timechart, but the alert wasn't generating. When I set the custom condition to: search Error, I got alerts continually. Any thoughts?


stmyers7941
Path Finder

Ok, update your alert search by appending | search Result1="Error" OR Result2="Error" and then change the trigger condition to number of results greater than 0.

Also, make sure the alert is set to Trigger Once, not For Each Result. Let's say you've scheduled the alert to run every 5 minutes, looking back at the last 5 minutes: with "For Each Result" set, if your search returns 20 results below 40%, you'll get 20 alert messages. If you set it to "Trigger Once", you'll get 1 alert if the search returns any results below 40% within the last 5 minutes. This way you'll get at most 1 alert every 5 minutes, and only if Result1 or Result2 dropped below 40% within that window.
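For reference, the full alert search with that filter appended would look something like this (a sketch using the field names from your query above; the Custom trigger condition is no longer needed since the filter is now part of the search):

sourcetype="engine" Acceptedmessage run | timechart count span=1h by host | eval total=Server1+Server2 | eval Test1=(Server1/total)*100 | eval Test2=(Server2/total)*100 | eval Result1=if(Test1 < 40, "Error", "OK") | eval Result2=if(Test2 < 40, "Error", "OK") | fields Test1, Result1, Test2, Result2 | search Result1="Error" OR Result2="Error"

With this search, the alert returns rows only when one of the servers is below 40%, so "number of results greater than 0" with "Trigger Once" fires exactly once per scheduled run when there's a problem.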
