Alerting

create alert based on percentage change of avg history?

nirt
Path Finder

Hi there,
I was wondering if it's possible to create an alert for a change relative to history...
For example:

Every day I average 50K hits on a website. If on a given day I get either 10% more or 10% less than the daily average, I want an alarm.
However, I want this to be automatic: if today I have 50K hits and in a year I have 100K, then the 10% should track the 100K and not the 50K.
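(So with a 50K daily average the band is 45K to 55K; once the average grows to 100K, the same 10% rule gives 90K to 110K without me changing anything.)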

I hope this is quite clear

Thanks in advance


colinmchugo
Explorer

Hi guys

With regard to this

yoursearchheretoreturnhits earliest=-30d@d latest=@d |
eval recentEvent = if (_time>relative_time(now(),"-1d@d"),1,0) |
bucket _time span=1h |
stats count as hourlyCount sum(recentEvent) as newCount by _time |
eval hour=strftime(_time,"%H") |
stats avg(hourlyCount) as AveragebyHour sum(newCount) as SingleHourCount by hour |
eval lowRange = round(AveragebyHour*.9,0) |
eval highRange = round(AveragebyHour*1.1,0) |
where SingleHourCount < lowRange or SingleHourCount > highRange

How can I show the percentage increases or decreases in a visualization? Thanks a lot.
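One way might be to append a percent-deviation field after the final where and chart it by hour (pctChange is just an illustrative name, not a field from this thread):

where SingleHourCount < lowRange or SingleHourCount > highRange |
eval pctChange = round(((SingleHourCount-AveragebyHour)/AveragebyHour)*100,1) |
table hour, AveragebyHour, SingleHourCount, pctChange

A column chart of pctChange over hour would then show the increase or decrease directly.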

C.


richgalloway
SplunkTrust

This thread has been inactive for more than 4 years. You stand a better chance of getting an answer if you post a new question.

---
If this reply helps you, Karma would be appreciated.

qjvtenkroode
Explorer

Be aware: I'm reviving an old thread and have only skimmed the comments.

Also, some assumptions are made; keep in mind that the search is NOT OPTIMIZED AT ALL:

  1. Your field userCount is already extracted (with xmlkv for example).
  2. In this example all the test data is in the sourcetype "test4".
  3. Assumption that alerting is wanted based on the increased amount of users.
  4. Assumption that the total amount of userCount over the whole hour is to be calculated.
  5. Search is scheduled hourly and goes over the data from an hour before.

So the monster of a search that makes this happen:

sourcetype="test4" | eval currenthour=strftime(now()-3600, "%H") | eval match=if(date_wday=lower(strftime(now(), "%A")), "T","F") | eval match2=if(date_hour=currenthour,"T","F") | where match="T" and match2="T" | stats sum(userCount) by date_wday,date_hour,date_mday | stats avg(sum(userCount)) as AVERAGE | appendcols  [search sourcetype=test4 | eval currenthour=strftime(now()-3600, "%H") | eval match=if(date_wday=lower(strftime(now(), "%A")), "T","F") | eval match2=if(date_hour=currenthour,"T","F") | where match="T" and match2="T" | stats latest(userCount) as LATEST]| eval PERC=(((LATEST-AVERAGE)/AVERAGE)*100) | table PERC

The basic idea of this search is to run it every hour and calculate the percentage of growth based on the average of the month. It's really hard (if even possible) to exclude the last event's field value from the average, so instead we take the average over the whole month and compare it with the latest value.

So the first three evals fill some helper fields: the current hour (offset by one hour, since we go over the data from the previous hour) and two flags used to filter out the data that is not needed. The unnecessary data, meaning anything outside the matching hour and weekday, is filtered out with where match="T" and match2="T".

Then we sum all the userCount values by weekday, hour, and day of the month, so we get four buckets containing the summed userCount for that specific hour on that specific weekday. Now we can average these four summed values and append the summed value of the latest day. This is done with a subsearch, which again filters out the unnecessary data and takes the latest value of sum(userCount) by weekday, hour, and day of the month.

At this stage we are left with two values: the average over all four occurrences of this weekday in the month, and the current value for this weekday. With these we can calculate a percentage of growth for this hour on this weekday, compared with the average for the same hour and weekday over the whole month.
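For example, with made-up numbers: if AVERAGE is 50,000 and LATEST is 55,000, then PERC = ((55000 - 50000) / 50000) * 100 = 10, i.e. 10% growth; a LATEST of 45,000 would give PERC = -10.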

Now you could do some summarization or alerting based on this percentage growth, but be aware that this search uses the current hour and weekday!

Try to dissect the search and understand it on your own, and see if it works for your dataset. I tested this on a month's worth of self-created data which only had a timestamp and a value for userCount (e.g. 17 Aug 2012 08:57:46 userCount=20).

Good luck!

qjvtenkroode
Explorer

Ah, I think I see the issue: did you use the xmlkv command at the base of both the main search and the subsearch? It should be something like:

sourcetype="test4" | xmlkv | eval currenthour=strftime(now()-3600, "%H") ... [search sourcetype=test4 | xmlkv | eval currenthour=strftime(now()-3600, "%H") ... ]| eval PERC=(((LATEST-AVERAGE)/AVERAGE)*100) | table PERC

I shortened the query because of the character limit; replace ... with the rest of the query.


nirt
Path Finder

Hi!
Thanks for the very detailed response. I took your monster search and I'm afraid it did not return any results, due to the stats commands; could it be that I need to change how the date is formatted?
It is written as follows:

Mon Nov 19 12:51:00 UTC 2012
18727
.....

Also, I want the alert to fire on decreasing growth as well, to detect issues in the system (90% fewer users than the historical average is bad).

Let me know what info I can provide to troubleshoot the search a bit more

thanks again


lguinn2
Legend

New answer, based on the comments.

Can you put the xmlkv after the bucket?

Also, instead of xmlkv, try the rex command - which only extracts the usersCount field and is more efficient.

Finally, if usersCount is a count, don't count the count! Sum it instead in the first stats command. And the recentEvent field must change as well.

index="short_stats" host="us_short_stats" usersCount earliest=-30d@d latest=@d 
rex "\<usersCount>(?P<usersCount>\d+)\</usersCount>  |
eval recentEvent = if (_time>relative_time(now(),"-1d@d"),usersCount,0) |
bucket _time span=1h | 
stats sum(usersCount) as hourlyCount sum(recentEvent) as newCount by _time |
eval hour=strftime(_time,"%H") |
stats avg(hourlyCount) as AveragebyHour sum(newCount) as SingleHourCount by hour |
eval lowRange = round(AveragebyHour*.9,0) |
eval highRange = round(AveragebyHour*1.1,0) |
where SingleHourCount < lowRange or SingleHourCount > highRange

Does this help?

nirt
Path Finder

Hi again, I'm reviving this because I have only been able to check the last day (as per your search), not the last hour.
Would it be possible for you to assist me in changing the search to check the last completed hour, every hour? So at 11AM it would check 10AM.

Thanks
Nir


nirt
Path Finder

Oh, I completely forgot: is it possible to check the last hour? Meaning, if the time is 12:00 now, then do the comparison for 11:00?


nirt
Path Finder

Thanks, some more info:
My usersCount is inside an XML page; that's why I used xmlkv:

...
<usersCount>192</usersCount>

The count shows the current number; a sample is taken every minute.

Using your syntax (after adding the closing " that was missing after the rex) didn't show any results; however, this syntax worked:
index="short_stats" host="us_short_stats" usersCount earliest=-14d@d latest=@d | xmlkv | eval recentEvent = if (_time>relative_time(now(),"-1d@d"),usersCount,0)

I think the results are correct. Do you think the syntax I used is good, or is there a more efficient way to do it?


lguinn2
Legend

Okay, for the revised question: how to calculate based on the hour --

yoursearchheretoreturnhits earliest=-30d@d latest=@d |
eval recentEvent = if (_time>relative_time(now(),"-1d@d"),1,0) |
bucket _time span=1h |
stats count as hourlyCount sum(recentEvent) as newCount by _time |
eval hour=strftime(_time,"%H") |
stats avg(hourlyCount) as AveragebyHour sum(newCount) as SingleHourCount by hour |
eval lowRange = round(AveragebyHour*.9,0) |
eval highRange = round(AveragebyHour*1.1,0) |
where SingleHourCount < lowRange or SingleHourCount > highRange

I think this will work...


lguinn2
Legend

Can you put the xmlkv after the bucket?

Also, instead of xmlkv, you might be able to use this - which would only extract the usersCount field and be more efficient.

rex "\<usersCount>(?P<usersCount>\d+)\</usersCount>"

Finally, if usersCount is a count, don't count the count! Sum it instead in the first stats command. Instead of

stats count as hourlyCount sum(recentEvent) as newCount by _time |

use

stats sum(usersCount) as hourlyCount sum(recentEvent) as newCount by _time |


nirt
Path Finder

Thanks for the quick reply. I have revised it with my search as follows, and I think there is some mistake which I don't understand:
index="short_stats" host="us_short_stats" usersCount earliest=-30d@d latest=@d | xmlkv |
eval recentEvent = if (_time>relative_time(now(),"-1d@d"),1,0) |
bucket _time span=1h | (more here, can't enter all)

Information regarding my source:
I want it to be based on 'usersCount', and the source is XML; that's why I run xmlkv.
Have I done something wrong?
The output from the search does not make sense, as the high/low/avg values are the same for all hours.


lguinn2
Legend

I like sowings' suggestion: you should consider summary indexing. But here is a solution that will work, although it will be quite slow for longer time periods and larger Splunk environments. It computes an average over a longish period of time (30 days) and compares that to yesterday's count.

yoursearchheretoreturnhits earliest=-30d@d latest=@d |
eval recentEvent = if (_time>relative_time(now(),"-1d@d"),1,0) |
bucket _time span=1d |
stats count as dailyCount sum(recentEvent) as newCount by _time |
stats avg(dailyCount) as AverageOverTime sum(newCount) as SingleDayCount |
eval lowRange = round(AverageOverTime*.9,0) |
eval highRange = round(AverageOverTime*1.1,0) |
where SingleDayCount < lowRange or SingleDayCount > highRange

Alarm condition should be "#results > 0"
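In savedsearches.conf terms, the trigger could look roughly like this (the stanza name and cron schedule here are placeholder assumptions; "..." stands for the full search above):

# a minimal sketch; adjust the name, schedule, and alert actions to taste
[Daily hit count deviation]
search = yoursearchheretoreturnhits earliest=-30d@d latest=@d | ...
cron_schedule = 0 1 * * *
enableSched = 1
counttype = number of events
relation = greater than
quantity = 0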

That should work. Let me know if it doesn't and I can debug my typing...


nirt
Path Finder

Hi lguinn2, sorry for the late response, I only now managed to get to it.
Thanks for the great query, it looks great. However, I want to improve it a tad and would like your assistance:
I also want to calculate based on the hour; for example, the average for 01:00 is different than for 15:00.
Is it possible to do such a calculation?

Thanks in advance!


sowings
Splunk Employee

The main answer here is going to be to aim you towards summary indexing. The idea is that you'll want to maintain a bit of history, and then refer back to those values to compare "now" with "yesterday" or whatever your time range is.

The Splunk Deployment Monitor application illustrates this pretty well: it keeps track of the number of forwarders it's seen (so if one of your servers goes down...), but also the log volume from each forwarder, as well as from each sourcetype of data. It may be that the Deployment Monitor app will satisfy your needs. At the very least, it will illustrate the principles you need to build your own solution.
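As a very rough sketch of the summary-indexing idea (the index name summary_hits and the hourly schedule are assumptions, not anything from this thread), an hourly scheduled search could store one count per hour:

yoursearchheretoreturnhits earliest=-1h@h latest=@h |
stats count as hourlyCount |
collect index=summary_hits

The alerting search could then read the history back from the summary instead of rescanning the raw events:

index=summary_hits earliest=-30d@d latest=@h |
stats avg(hourlyCount) as AverageCount latest(hourlyCount) as LatestCount |
eval pct=round(((LatestCount-AverageCount)/AverageCount)*100,1) |
where pct>10 OR pct<-10

This keeps the 30-day comparison cheap, since it only scans one summary event per hour.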
