Reporting

Why do I keep getting an email every hour despite the trigger condition not being met?

DEAD_BEEF
Builder

I have a search that shows the number of logs from various indexes for the last 60 minutes. I have this saved as an alert to email me if the event count is < 1 million. I keep getting an email every hour even though the trigger condition isn't met, since the logs total more than 1M for the last 60 minutes. What am I doing wrong? I feel like I'm going crazy here.


search

| tstats count WHERE (index=cisco OR index=palo OR index=email)  BY index

results

index    count
cisco    3923160
palo     21720018
email    7583099
1 Solution

somesoni2
SplunkTrust

Your alert condition is matched based on the number of rows returned by the search, not the number of rows scanned. From your tstats search you'll get one row for each index, so the total number of results equals the total number of indexes, which I'm sure is way less than 1,000,000. Hence the condition matches every time and the alert is triggered. You should handle the triggering condition in the alert search itself. E.g., if you want to alert when, for any index, the number of events is < 1M for the selected time range, then:

Alert Search:

| tstats count WHERE (index=cisco OR index=palo OR index=email)  BY index | where count <1000000

Alert Condition: When the number of events (i.e., result rows) is greater than 0.

If all three indexes that you're checking have an event count of more than 1M, the alert search will return 0 results, so the alert will not trigger.
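
If the intent is instead to alert when the combined total across all three indexes drops below 1M (one reading of the question's "the logs total more than 1M"), the same approach works without the BY clause; this is only a sketch along the same lines:

| tstats count WHERE (index=cisco OR index=palo OR index=email) | where count < 1000000

Without BY index, tstats returns a single row with the overall count, so the same trigger (number of results greater than 0) applies.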


ryhluc01
Communicator

You have to create a custom condition. The number of results reflects how many rows your search actually returns, not the counts inside them.

Custom Condition
Trigger: search count < 1000000
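
A custom trigger condition is essentially a secondary search evaluated against the results of the alert search, and the alert fires if it returns any rows. You can sanity-check it by appending the condition to the base search yourself, for example:

| tstats count WHERE (index=cisco OR index=palo OR index=email) BY index | search count < 1000000

If this returns no rows for the last 60 minutes, the custom condition above would not trigger the alert.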

DEAD_BEEF
Builder

Oh ok, I didn't realize it was looking at the number of rows being returned instead of the count in each result. I adjusted it and it appears to be working as intended now.
