Splunk Search

How to identify drops in traffic with bucket and streamstats?

tomjones101
Explorer

Hi guys,

I am making a really cool alert to identify drops in traffic.

At the moment I am searching over a 10-minute period, putting the first 5 minutes in one bucket and the second 5 minutes in a second bucket, and calculating the difference. Then I define a threshold.

It works OK, but I want to make it better by dividing the count into three buckets instead of two, so I would have an earliest, a middle, and a latest bucket. Then set a condition along the lines of: if the middle count is x amount less than the earliest count, and the latest count is x amount less than the middle count, then trigger an alert.

This is what I have at the moment. Only two buckets.

index=blah earliest=-11m@m latest=-1m@m | bucket span=5m _time 
| stats count by _time, Platform, Check 
| streamstats window=2 global=f current=f first(count) as p_count by Platform Check  
| eval difference=count-p_count 
| eval pc_difference=abs(round(difference/(count+abs(difference))*100,0)) 
| sort str(Platform) str(Check) 
| search difference < 0 

And a pretty complicated condition to eliminate white noise.

| where (pc_difference=100) OR (pc_difference>70 AND p_count>20) 
       OR (pc_difference>90 AND p_count>20 AND count<5)
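For context, the percentage-drop formula above translates to plain Python like this (the counts are made up, just to show how the denominator damps noise on small counts):

```python
def pc_difference(count, p_count):
    """Mirror of the SPL eval:
    abs(round(difference/(count+abs(difference))*100, 0)).

    Dividing by (count + abs(difference)) rather than by p_count
    keeps the percentage from spiking on tiny counts, which is
    what filters out some of the white noise."""
    difference = count - p_count
    return abs(round(difference / (count + abs(difference)) * 100))

# A drop from 100 events to 40: difference = -60, denominator = 40 + 60 = 100
print(pc_difference(40, 100))   # 60
# A total outage (100 -> 0): difference = -100, denominator = 0 + 100 = 100
print(pc_difference(0, 100))    # 100
```

(Python's `round` uses banker's rounding, unlike Splunk's, but that only matters on exact .5 values.)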

What do I need to do to introduce a third bucket? Ideally I want to bucket the count into earliest, middle, and latest counts and assign a field to each. Then make calculations on the three fields.

I was thinking something like this.

index=blah earliest=-10m@m latest=-1m@m | bucket span=3m _time 
|  stats count by _time, Platform, Check 
| streamstats window=3 global=f current=f 
             first(count) as p_count latest(count) as m_count last(count) as l_count by Platform Check 
| .... 

and so on but I think this is flawed. I can't seem to find a way to put the earliest, middle, and latest into three fields.

I would very much appreciate some help here.

Cheers,

Tommy boi

1 Solution

woodcock
Esteemed Legend

Try this:

index=blah earliest=-10m@m latest=-1m@m | bucket span=3m _time 
|  stats count by _time, Platform, Check
| streamstats window=3 global=f current=f earliest(count) AS p_count latest(count) AS l_count sum(count) AS sum by Platform Check
| eval m_count = sum - p_count - l_count | fields - sum
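The middle bucket falls out by subtraction. A quick sanity check of that identity in plain Python (made-up counts, nothing Splunk-specific):

```python
# Three bucket counts inside a streamstats window=3
# for one Platform/Check pairing
window = [12, 7, 30]          # earliest, middle, latest bucket counts

p_count = window[0]           # earliest(count)
l_count = window[-1]          # latest(count)
total = sum(window)           # sum(count)

# Same eval as in the search: the middle count is whatever
# is left after removing the earliest and latest counts
m_count = total - p_count - l_count
print(m_count)                # 7, the middle bucket's count
```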

But, I think you would be much more satisfied with the Adaptive Thresholding capability in ITSI. In any case, check out this EXCELLENT blog post:
http://blogs.splunk.com/2016/01/29/writing-actionable-alerts/


tomjones101
Explorer

Thanks for your input @woodcock @lguinn2. This works OK, but not exactly to my requirements. In these cases the first, middle, and last counts are assigned fields, but they are not merged into one row, which makes them difficult to work with. Ideally the result would be one row for each check, with fields for the first, middle, and last counts. Do you think this is doable?


woodcock
Esteemed Legend

Did you try my answer? Each result/row should have 3 fields: l_count, m_count, and p_count (unless you have only 1 event for a particular Platform + Check pairing). Perhaps you mean that all Platform values should be merged; I will update my answer under that assumption (though it is by no means clear to me that this is the problem).


tomjones101
Explorer

Thanks Woodcock. This is exactly what I want.

Just made some slight adjustments and added some conditions. Works very well.

index=blah earliest=-11m@m latest=-1m@m 
| bucket span=4m _time 
| (lots of eval statements here) 
| stats count by _time, Platform, Check 
| streamstats window=3 global=f earliest(count) as earliest_count latest(count) as last_count sum(count) as total_count by Platform Check 
| eval middle_count=total_count-last_count-earliest_count 
| eval earliest_middle_difference=middle_count-earliest_count 
| eval middle_latest_difference=last_count-middle_count 
| eval earliest_latest_difference=last_count-earliest_count 
| eval count1=middle_count+earliest_count 
| eval count2=last_count+middle_count 
| eval count3=last_count+earliest_count 
| eval earliest_middle_drop=abs(round(earliest_middle_difference/(count1+abs(earliest_middle_difference))*100,0)) 
| eval middle_latest_drop=abs(round(middle_latest_difference/(count2+abs(middle_latest_difference))*100,0)) 
| eval earliest_latest_drop=abs(round(earliest_latest_difference/(count3+abs(earliest_latest_difference))*100,0)) 
| sort str(Platform) str(Check) 
| table _time Platform Check earliest_count middle_count last_count total_count earliest_middle_difference middle_latest_difference earliest_latest_difference earliest_middle_drop middle_latest_drop earliest_latest_drop 
| search earliest_middle_difference < 0 AND middle_latest_difference < 0 AND earliest_latest_difference < 0 
| where (earliest_middle_drop > 60 OR middle_latest_drop > 60) AND (earliest_latest_drop > 60)
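If it helps anyone adapting this, the condition logic at the tail of that search can be sketched in plain Python (the bucket counts below are made up):

```python
def sustained_drop(earliest, middle, last, threshold=60):
    """True when traffic falls across all three buckets AND the
    percentage drops clear the thresholds, mirroring the search above."""
    def drop(diff, denom):
        return abs(round(diff / (denom + abs(diff)) * 100))

    em_diff = middle - earliest
    ml_diff = last - middle
    el_diff = last - earliest
    # search clause: all three differences must be negative
    if not (em_diff < 0 and ml_diff < 0 and el_diff < 0):
        return False
    em_drop = drop(em_diff, middle + earliest)   # count1 denominator
    ml_drop = drop(ml_diff, last + middle)       # count2 denominator
    el_drop = drop(el_diff, last + earliest)     # count3 denominator
    return (em_drop > threshold or ml_drop > threshold) \
        and el_drop > threshold

# Caveat: because each denominator is the SUM of the two bucket counts
# plus |diff|, e.g. (m+e)+(e-m) = 2e, each drop value algebraically
# tops out at 50, so a threshold of 60 can never pass; a lower
# illustrative threshold is used here.
print(sustained_drop(100, 40, 5, threshold=25))   # True: steep drop
print(sustained_drop(100, 95, 90, threshold=25))  # False: mild decline
```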

ksharma7
Path Finder

@tomjones101 does your alert work for varying traffic, i.e. different volumes during peak and non-peak hours? I have not really tested it at the moment; I just wanted to know whether you are working with variable or constant traffic, so that I can try this logic in mine.


tomjones101
Explorer

Also, on ITSI: I have been looking into it. It seems pretty good, but there is something suspect about the cost, as it is not mentioned anywhere. :(


lguinn2
Legend

ITSI is not free. You have to get a quote from Sales.


lguinn2
Legend

That search seems pretty complicated. Why not do something like this?
Rule: Look at the last 6 time periods and define the 20th percentile and 80th percentile values for count by Platform and Check.
Alert if the current time period is outside the range from 20th - 80th percentile.

index=blah earliest=-36m@m latest=-1m@m 
| bucket span=5m _time 
| eval timeframe=if(_time < relative_time(now(),"-6m@m"),"before","now") 
| stats count(eval(timeframe="before")) as before_count count(eval(timeframe="now")) as now_count 
         by _time Platform Check  timeframe
| eventstats p20(before_count) as low_before p80(before_count) as high_before by Platform Check
| fields - before_count
| where timeframe="now" AND (now_count < low_before OR now_count > high_before)

In the above search, the only time a result will be returned is if the last time bucket contained a count that was outside the range. You can alert based on "# results > 0". If you actually want to see the results, for debugging purposes, simply remove the final line.

I think this is a lot more flexible. You can change the number of buckets that you examine, or the time range of the buckets, just by playing with the first few lines of the search. The real trick is in line 3, where you categorize an event as either belonging to "now" - the current time period - or "before" - a prior time period.
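The percentile-band idea is easy to prototype outside Splunk too. A minimal Python sketch with made-up bucket counts (note that Splunk's percentile interpolation may differ slightly from `statistics.quantiles`):

```python
import statistics

def outside_band(history, current, low_pct=20, high_pct=80):
    """Alert when `current` falls outside the low/high percentile band
    of recent counts -- the same idea as the p20/p80 eventstats above."""
    # quantiles with n=100 yields 99 cut points; index p-1 is the
    # p-th percentile
    qs = statistics.quantiles(history, n=100, method='inclusive')
    low, high = qs[low_pct - 1], qs[high_pct - 1]
    return current < low or current > high

history = [100, 95, 110, 105, 98, 102]   # last 6 five-minute buckets
print(outside_band(history, 40))   # True: well below the 20th percentile
print(outside_band(history, 101))  # False: inside the normal band
```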

woodcock
Esteemed Legend

If your desire is to ignore Platform values (e.g. "merge them"), then this will do it:

index=blah earliest=-10m@m latest=-1m@m | bucket span=3m _time 
|  stats count by _time, Check
| streamstats window=3 global=f current=f earliest(count) AS p_count latest(count) AS l_count sum(count) AS sum by Check
| eval m_count = sum - p_count - l_count | fields - sum