
Conditional Alert - Minimum Results AND Rises By

jchampagne
Path Finder

I need to setup a conditional alert with the following criteria:
- The number of results returned from my search must be greater than 1
- Only generate a new alert if the result count increases by 1

For example, in my log, it is normal to see a login event at the beginning of each day. However, if I see more than one login event, I want to get an alert, because this means the client is getting disconnected. This is where the "event count is greater than 1" condition kicks in.

I want to receive an alert upon each occurrence of this login event beyond the first one. This is where the "result count increases by 1" condition kicks in.

How can I create a custom alert condition that uses both the "Number of events is greater than" as well as the "Number of events rises by" conditions?


gkanapathy
Splunk Employee
Splunk Employee

Seems to me you want to just be alerted for every login event except the first one in a day, right? Now, if there is only a small number of these returned in a search, then a quick way to do this would be:

sourcetype=myevents "login event" | eventstats earliest(_time) as firsttime | where _time!=firsttime

run over the period earliest=@d latest=now, and run that every 5 or 10 minutes or whatever. Now, if you have a lot of login events (if the scan count of the search is more than, say, several hundred or a few thousand), then this would be a bad way to run the search, and you would be better off summarizing and tracking so you don't keep searching over the same time range over and over again through the day (e.g., at 7:00am you run the search from midnight to 7:00am; at 7:05am you run it from midnight to 7:05am, which is wasteful because you just looked at all but the last 5 minutes already; etc.). But if the actual number is small (you're saying the normal situation is just one per day), then it's no big deal.
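
In case it helps to see it wired up end to end, here is a rough sketch of what that could look like as a scheduled alert in savedsearches.conf (the stanza name, the 10-minute cron schedule, and the email address are placeholders, not anything from the post above). The alert just fires whenever the search returns anything at all, since the where clause has already thrown away the first login of the day:

[extra_login_events]
search = sourcetype=myevents "login event" | eventstats earliest(_time) as firsttime | where _time!=firsttime
# run every 10 minutes over today so far
dispatch.earliest_time = @d
dispatch.latest_time = now
enableSched = 1
cron_schedule = */10 * * * *
# trigger when the filtered search returns any events
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = you@example.com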

jchampagne
Path Finder

That works perfectly! I'm going to go with this solution, as this particular log has less than 5k events per day and only a handful of login events. Thanks for your help!


gkanapathy
Splunk Employee
Splunk Employee

Yes, you're right. Replace the stats with eventstats earliest(_time) as firsttime, without the by _time. My mistake.

jchampagne
Path Finder

I like the logic of this query; however, it isn't working for me. When I run it with both piped sections, I get a number of matching events, but no results are displayed.

If I take off the where clause, I see all of my matching events, but the firsttime field is different for each result. In each row, firsttime matches the _time for that row... it isn't a global value for the entire result set. I think this is why the where clause is failing to return any results.


jcoates_splunk
Splunk Employee
Splunk Employee

I think you will want to break the problem into two -- with a subsearch or a saved search that uses outputlookup to write the current increase, and then inputlookup that result into your alerting search to aid in the decision.

It would be easy to just alert on "number of events is greater than 1" and run your search every N minutes over an N-minute period. Let's try it out with N = 5 minutes, running */5 * * * *:

8:02 event 1
8:05 X=1, no alert
8:07 event 2
8:10 X=1, no alert
8:11 event 3
8:12 event 4
8:15 X=2, alert
8:16 event 5
8:20 X=1, no alert
8:21 event 6
8:22 event 7
8:25 X=2, alert

The only catch is that you get an alert at both 8:15 and 8:25, when you want to suppress the 8:15 one because it's the first. In theory, a subsearch or map should be able to help you do that, but I can't make it work. You could easily use a scheduled search that outputs a lookup, though, and then inputlookup that into your alerting search.
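
As a rough sketch of that lookup idea (the lookup file name, the dummy key field, and the firsttime field name are made up purely for illustration): a scheduled search could record the day's first login time,

sourcetype=myevents "login event" earliest=@d latest=now | stats earliest(_time) as firsttime | outputlookup first_login_today.csv

and the alerting search could then read that value back in, drop the matching event, and trigger on any remaining results:

sourcetype=myevents "login event" earliest=@d latest=now | eval key=1 | join type=outer key [| inputlookup first_login_today.csv | eval key=1] | where _time!=firsttime

The join on a dummy key is just one way to attach the single stored value to every event; treat this as an outline rather than a finished search.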

Hopefully I'm understanding your problem correctly!

jcoates_splunk
Splunk Employee
Splunk Employee

Well, read something back in -- you just need an indication. Your periodic search can be something like "login exists | inputlookup loginsemaphore.csv | dedup login_time | outputlookup loginsemaphore.csv". If a login already exists, loginsemaphore ends up with something in it. If it doesn't, it doesn't. Keep the window to -1d and it'll work fine.
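
Spelled out a little more (purely as a sketch; the lookup name, the login_time field, and the today-only window are assumptions on my part), the accumulating search might look like:

sourcetype=myevents "login event" earliest=@d latest=now | eval login_time=_time | table login_time | inputlookup append=t loginsemaphore.csv | dedup login_time | where login_time>=relative_time(now(),"@d") | outputlookup loginsemaphore.csv

Note the append=t on inputlookup, so the previously stored rows are added to the new results instead of replacing them. The alerting search can then just count what is in the file, e.g. | inputlookup loginsemaphore.csv | stats count | where count>1, and trigger (with whatever throttling you need) whenever it returns a row.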

Gerald's method is easier to construct, but won't perform well at scale, as he notes.


jchampagne
Path Finder

If I'm understanding the logic, you're saying to run a periodic search that looks for the first login and outputs that value to a table. Then, in a second search, read the time of that event back in and exclude it?
