Reporting

Why do I keep getting an email every hour despite the trigger condition not being met?

DEAD_BEEF
Builder

I have a search that shows the number of logs from various indexes for the last 60 minutes. I have this saved as an alert to email me IF the event count is < 1 million. I keep getting an email every hour even though the trigger condition isn't met, since the logs total more than 1M for the last 60 minutes. What am I doing wrong? I feel like I'm going crazy here.

search

| tstats count WHERE (index=cisco OR index=palo OR index=email)  BY index

results

index    count
cisco    3923160
palo    21720018
email    7583099
0 Karma
1 Solution

somesoni2
Revered Legend

Your alert condition is evaluated against the number of rows returned by the search, not the number of events it counts. Your tstats search returns one row per index, so the total number of results equals the number of indexes, which is certainly far less than 1,000,000. Hence the condition matches every time and the alert fires. You should handle the trigger condition in the alert search itself. For example, if you want to alert when, for any index, the number of events is < 1M for the selected time range, then:

Alert Search:

| tstats count WHERE (index=cisco OR index=palo OR index=email)  BY index | where count <1000000

Alert Condition: when the number of results is greater than 0.

If all three indexes you're checking have event counts above 1M, the alert search will return 0 results, so the alert will not trigger.
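If instead you want to alert when the combined total across all three indexes drops below 1M (which is how the original question reads), a minimal variant of the same idea, assuming the same index names, would be:

| tstats count WHERE (index=cisco OR index=palo OR index=email) BY index | stats sum(count) AS total | where total < 1000000

with the same trigger condition (number of results greater than 0).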

0 Karma

ryhluc01
Communicator

You have to create a custom trigger condition. The built-in "Number of Results" condition goes by how many result rows your search returns, not by the event counts inside those rows.

Custom Condition
Trigger: search count < 1000000
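For reference, here is a rough sketch of how an alert with such a custom trigger condition might look in savedsearches.conf. The stanza name, schedule, and email address are placeholders for illustration, not taken from the original post, so check the settings against your Splunk version:

[low_index_volume_alert]
# scheduled hourly over the last 60 minutes
search = | tstats count WHERE (index=cisco OR index=palo OR index=email) BY index
dispatch.earliest_time = -60m
dispatch.latest_time = now
enableSched = 1
cron_schedule = 0 * * * *
# custom trigger: fire only if at least one result row has count < 1M
counttype = custom
alert_condition = search count < 1000000
action.email = 1
action.email.to = you@example.com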

0 Karma

DEAD_BEEF
Builder

Oh ok, I didn't realize it was looking at the # of rows being returned instead of the counts in each result. I adjusted it and it appears to be working as intended now.

0 Karma