
Compare 90-day average to last 24-hour count

jdaivs
Explorer

I am trying to compare the event count from each of my devices for the last 24 hours to the daily average of each device over the last 90 days. Here is my query:

sourcetype=eventsfrommydevice | eval DEVICE_NAME=coalesce(tag,DEVICE_NAME)
| stats count BY DEVICE_NAME, date_month, date_mday
| stats avg(count) AS "Average_Event_Count" BY DEVICE_NAME
| eval Average_Event_Count=round(Average_Event_Count,2)

The time window is set to the last 90 days. The first eval statement is there because the device names changed during the last 90 days, which throws off the average because each device name is treated as a separate device. I tagged the old device names with the new names, so this eval replaces the old name with the tag value wherever a tag exists. Then I make a daily count, the next stats averages the daily event count, and the last eval just rounds the average to 2 decimal places.
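
For anyone unfamiliar with the coalesce trick, here is a toy, run-anywhere illustration (the device names are made up):

| makeresults
| eval tag="new-device-01", DEVICE_NAME="old-device-01"
| eval DEVICE_NAME=coalesce(tag, DEVICE_NAME)

DEVICE_NAME ends up as new-device-01; for events that have no tag, coalesce returns the existing DEVICE_NAME unchanged.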

I want to have another column in the results that shows the event count for the last 24 hours. Eventually I want to set a threshold to tell me when the event counts are too far above or too far below the average, but I need to analyze the results first. I can accomplish both sets of aggregation separately, but I can't seem to get them together. I've tried sub-searches and various eval expressions with no luck. Any help would be appreciated.

1 Solution

jdaivs
Explorer

So I have it now, but it is VERY ugly and takes quite a while to load because the 90-day data is cold.

sourcetype=mysourcedata | eval DEVICE_NAME=coalesce(tag,DEVICE_NAME)
| stats count BY DEVICE_NAME, date_month, date_mday
| stats avg(count) AS "Average_Alert_Count" BY DEVICE_NAME
| inputlookup append=t DEVICE_NAME.csv
| fillnull value=0 "Recent_Event_Count"
| dedup DEVICE_NAME
| eval Average_Alert_Count=round(Average_Alert_Count,2)
| join DEVICE_NAME append
  [ search sourcetype=mysourcedata earliest=-24h latest=now
  | eval DEVICE_NAME=coalesce(tag,DEVICE_NAME)
  | stats count as Recent_Event_Count by DEVICE_NAME
  | inputlookup append=t DEVICE_NAME.csv
  | fillnull value=0 "Recent_Event_Count"
  | dedup DEVICE_NAME]

Using the lookup table in the subsearch, I was able to get all of the devices with a zero count for the last 24 hours, but I was still missing a few where the 90-day average was also 0, so I added the lookup to the main search as well. Now I am getting everything, even where both the average and the count are 0.
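
In isolation, the pattern that backfills the zero-count devices looks like this (assuming DEVICE_NAME.csv lists every expected device name):

| stats count AS Recent_Event_Count BY DEVICE_NAME
| inputlookup append=t DEVICE_NAME.csv
| fillnull value=0 Recent_Event_Count
| dedup DEVICE_NAME

The stats rows come first, the lookup rows are appended after them, fillnull gives the appended rows a zero count, and dedup keeps only the first row per device, so devices that actually had events keep their real counts.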

Thanks for all the help!

This is a mess, though. I'd welcome any recommendations on how to clean this up and make the cold data return faster.
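
One possible cleanup, sketched here as an untested idea: compute both columns in a single pass over the 90 days by flagging events from the last 24 hours with an eval, so no join or subsearch is needed at all.

sourcetype=mysourcedata | eval DEVICE_NAME=coalesce(tag,DEVICE_NAME)
| eval is_recent=if(_time >= relative_time(now(), "-24h"), 1, 0)
| bin span=1d _time
| stats count AS daily_count, sum(is_recent) AS recent_count BY DEVICE_NAME, _time
| stats avg(daily_count) AS Average_Event_Count, sum(recent_count) AS Recent_Event_Count BY DEVICE_NAME
| eval Average_Event_Count=round(Average_Event_Count,2)

You would still need the inputlookup/fillnull/dedup step for devices with no events at all, and this won't make the cold buckets themselves return faster - for that, a summary index populated by a scheduled daily search is worth looking at.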


jdaivs
Explorer

Ok, I almost have it. I had to make some minor adjustments to what you all provided, but I am seeing the 90-day average and the last-24-hour event count with the query below:

sourcetype=mysourcedata | eval DEVICE_NAME=coalesce(tag,DEVICE_NAME)
| stats count BY DEVICE_NAME, date_month, date_mday
| stats avg(count) AS "Average_Event_Count" BY DEVICE_NAME
| eval Average_Event_Count=round(Average_Event_Count,2)
| join DEVICE_NAME
  [ search sourcetype=mysourcedata earliest=-24h latest=now
  | stats count as Recent_Event_Count by DEVICE_NAME]

The only problem I have now is that the join is limiting my results to devices that have had at least one event in the last 24 hours.
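
join defaults to an inner join, so devices that are missing from the subsearch get dropped. A minimal sketch of one way around it, using a left join plus fillnull (the subsearch here also gets the coalesce eval, so renamed devices still match):

... | join type=left DEVICE_NAME
  [ search sourcetype=mysourcedata earliest=-24h latest=now
  | eval DEVICE_NAME=coalesce(tag,DEVICE_NAME)
  | stats count as Recent_Event_Count by DEVICE_NAME]
| fillnull value=0 Recent_Event_Count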


martin_mueller
SplunkTrust

How about this?

 sourcetype=eventsfrommydevice | eval DEVICE_NAME=coalesce(tag,DEVICE_NAME)
| bin span=1d _time
| stats count BY DEVICE_NAME _time
| stats avg(count) AS "Average_Event_Count" BY DEVICE_NAME
| eval Average_Event_Count=round(Average_Event_Count,2)
| join DEVICE_NAME
  [ search sourcetype=eventsfrommydevice earliest=-24h latest=now
  | stats count as Recent_Event_Count by DEVICE_NAME]

Note, the average isn't calculated correctly here or in your original search. Say you search over three days and have one event for a device today, zero for yesterday, and two for the day before. You'd expect an average of one for that device, but the queries will return 1.5 - the zero-event day isn't going to appear with a zero-count, skewing the average if such days exist.
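
A run-anywhere repro of the skew, using the numbers from the example above:

| makeresults count=2
| streamstats count as day
| eval count=if(day=1, 1, 2)
| stats avg(count) AS skewed_average

This returns 1.5 rather than 1, because only the two days that actually had events exist as rows - the zero-event day never enters the average.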

martin_mueller
SplunkTrust

Remove one pipe at a time from the end of the search until you get results.

As for bringing two searches together, the join does just that.


jdaivs
Explorer

I've tried all of the suggestions above as-is and with variations, but I keep coming up with 0 results. This is an improvement, though, because I was getting errors before. I am starting to think that there may be something on the back-end not set up to allow the use of some of these commands.

Is there a way to run two separate searches and then bring them both together?


martin_mueller
SplunkTrust

The trouble with that is timechart replacing the row-based grouping of stats with column-based grouping. As a result, the stats avg(count) in line 4 will break because there is no longer a field named count. You'd have to go stats avg(*) as * and then transpose to allow the join to work... which seems a bit cumbersome.

As an alternative, untable to the rescue!

sourcetype=eventsfrommydevice | eval DEVICE_NAME=coalesce(tag,DEVICE_NAME)
| timechart count by DEVICE_NAME span=1d
| untable _time DEVICE_NAME count
| stats avg(count) as Average_Event_Count by DEVICE_NAME
| eval ...

lguinn2
Legend

Excellent point about the computation of the average, @martin_mueller.

If you use this instead of your bin and stats commands (lines 2 & 3) in the first answer:

timechart count by DEVICE_NAME span=1d

You won't have this problem, as timechart will correctly provide zero values for days with no count. But you have reminded me to go back and check my own work for this flaw!

martin_mueller
SplunkTrust

To work around the statistical shortcomings you could do this for the long-time daily average:

sourcetype=eventsfrommydevice | eval DEVICE_NAME=coalesce(tag,DEVICE_NAME)
| stats count by DEVICE_NAME
| addinfo | eval days = round((info_max_time - info_min_time) / 86400)
| eval Average_Event_Count = round(count / days, 2)
| fields DEVICE_NAME Average_Event_Count
| join ...
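
For reference, an end-to-end sketch that puts the pieces from this thread together: the addinfo-based average, a left join for the 24-hour count, and an illustrative threshold. The 2x and half multipliers are placeholders to tune against real data, not values from this thread:

sourcetype=eventsfrommydevice | eval DEVICE_NAME=coalesce(tag,DEVICE_NAME)
| stats count by DEVICE_NAME
| addinfo | eval days = round((info_max_time - info_min_time) / 86400)
| eval Average_Event_Count = round(count / days, 2)
| fields DEVICE_NAME Average_Event_Count
| join type=left DEVICE_NAME
  [ search sourcetype=eventsfrommydevice earliest=-24h latest=now
  | eval DEVICE_NAME=coalesce(tag,DEVICE_NAME)
  | stats count as Recent_Event_Count by DEVICE_NAME]
| fillnull value=0 Recent_Event_Count
| where Recent_Event_Count > 2 * Average_Event_Count OR Recent_Event_Count < 0.5 * Average_Event_Count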