Splunk Search

Question about Stats and statistical functions available

deeboh
Path Finder

Hey folks, I have a hopefully silly question about the stdev(), sum(), var(), etc. functions within the stats command. I have an example below where stats displays data, and then another which displays nothing.

sourcetype=log4j ERROR | eval warns=errorGroup+"-"+errorNum | top warns by errorDesc | stats stdev(count), var(count) by warns

The above displays a grid with data, albeit the values are zero for stdev(count) and var(count)

When I remove "| top warns by errorDesc", the succeeding search displays nothing except the first column of values for "warns". Nothing for stdev(count) or var(count).

Even more mysteriously, when I run the following query I actually am blessed with stdev and var data.

sourcetype=log4j ERROR | eval warns=errorGroup+"-"+errorNum | top warns by errorDesc | stats stdev(count), var(count)

Rather than completely show how inept I am, I won't question the workings of the stats command. Rather, I'll ask why adding "top warns by errorDesc" shows data.

Thanks in advance,

1 Solution

David
Splunk Employee

This may be slightly off, because I've never used this datatype myself, but I believe this is why you're seeing the results you described above:

sourcetype=log4j ERROR | eval warns=errorGroup+"-"+errorNum 
                       | top warns by errorDesc 
                       | stats stdev(count), var(count) by warns

You're asking for the stdev / var of the number of events. But top produces exactly one count per warns value, so each "by warns" group contains only a single number, and the stdev and var of a single value are zero.

When I remove "| top warns by errorDesc", the succeeding search displays
nothing except the first column of values for "warns". Nothing for stdev(count) or var(count).

If you remove "top warns by errorDesc" then the count field is never generated, so there's nothing to perform statistics on.

sourcetype=log4j ERROR | eval warns=errorGroup+"-"+errorNum 
                       | top warns by errorDesc 
                       | stats stdev(count), var(count)

That one is returning data: specifically, the stdev and var across the top types of errors. In essence, in the first search you're looking at the stdev of the count within a particular warn; in the last one you're looking across all the warns. So if you had 10 types of warns, nine of which occurred once and one of which occurred 30 times, the stdev would be 9.17 (at least, according to my web calculator).
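As a sanity check, that figure can be reproduced outside Splunk. A minimal Python sketch, using the hypothetical counts from the example above (nine warns seen once, one seen 30 times):

```python
import statistics

# Hypothetical daily counts from the example above:
# nine warn types occurred once, one occurred 30 times
counts = [1] * 9 + [30]

# statistics.stdev/variance are the sample (N-1) versions,
# which is what Splunk's stdev()/var() compute
print(round(statistics.stdev(counts), 2))     # → 9.17
print(round(statistics.variance(counts), 1))  # → 84.1
```

Note that Splunk also offers stdevp()/varp() for the population (N) versions, which would give slightly smaller numbers here.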

BTW: You might also find value in the following, related search (and because that's the direction I thought you were heading in, and started writing it before I re-read your original post):

sourcetype=log4j ERROR earliest=-7d@d latest=@d | eval warns=errorGroup+"-"+errorNum 
                   | stats count as Date_Warns_Count by date_mday,warns 
                   | stats stdev(Date_Warns_Count), var(Date_Warns_Count) by warns

That's similar to your first one, except it will give you the stdev and var of the daily count, per warn.
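To see what that two-stage stats pipeline is doing, here is a rough Python analogue. The (day, warn) pairs are made-up stand-ins for events; the first pass mirrors "stats count ... by date_mday, warns" and the second mirrors the stdev/var "by warns":

```python
from collections import defaultdict
from statistics import stdev, variance

# Hypothetical events: one (date_mday, warns) pair per log event
events = [
    (1, "DB-101"), (1, "DB-101"), (1, "NET-204"),
    (2, "DB-101"), (2, "NET-204"), (2, "NET-204"),
    (3, "DB-101"), (3, "DB-101"), (3, "DB-101"),
]

# First stats: count events per (date_mday, warns)
daily = defaultdict(int)
for day, warn in events:
    daily[(day, warn)] += 1

# Second stats: collect the daily counts per warn, then take stdev/var
per_warn = defaultdict(list)
for (day, warn), count in daily.items():
    per_warn[warn].append(count)

for warn in sorted(per_warn):
    counts = per_warn[warn]
    print(warn, counts, round(stdev(counts), 3), round(variance(counts), 3))
```

The point is that the first stats collapses the raw events into one count per day per warn, so the second stats has several numbers per warn to spread over, instead of the single number that top produces.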

Hope this is helpful.


David
Splunk Employee

Ah, fixed that in mine. Thanks for pointing it out 🙂


deeboh
Path Finder

Hey David, thanks for the hookup. The ultimate goal of this query is to assign a value to each error group by which I can judge "risk", much like stock market volatility. A lot of these errors are difficult to reproduce, and I want to use "risk" to decide if there is customer impact. I'm missing a few variables, I'm sure, but this is where my tinkering starts. 7-, 30-, and 90-day stdevs, as you've properly surmised in your description, will be my intent. I can tie these stdevs to install dates to see if certain installs cause more havoc than others? Dunno yet ;-).

By the way, date_day needed to be date_mday.

Thanks
