Splunk Search

Is it possible to calculate 'responseTime' values in a cumulative stats table?

alexandermunce
Communicator

I have identified an issue with a response time stats report that was built by a former Splunk specialist at my organization and I'm having trouble identifying the root cause or developing a better solution.

The goal is to produce a stats table where the first column defines the range (in seconds) and the second column displays a count of transactions that occurred in that range.

However, it seems that the calculations do not align with my own check of the raw data which I made in Excel — I feel the ranges must be incorrectly defined.

The ranges to include in the table are as follows:
<1.0 sec (the next column counts transactions with a response time between 0 and 1.0 seconds)
<2.0 sec (the next column counts transactions with a response time between 0 and 2.0 seconds)
<3.0 sec (the next column counts transactions with a response time between 0 and 3.0 seconds, etc.)
<4.0 sec
<5.0 sec
<6.0 sec
<7.0 sec
<8.0 sec
<9.0 sec
<=10.0 sec

The search query is as follows. It is line 2 that I can't get my head around and suspect may be incorrect: it seems to round values up, which is not appropriate because we are dealing with hard range cutoffs, i.e. 1.0 seconds, 2.0 seconds, etc.:

eventstats count as "total" | 
eval in_range=round(case(responseTime<10, ceil(responseTime), responseTime>=10.0,10.0),1) | 
streamstats count as cnt avg(responseTime) as run_avg | 
stats first(total) as total last(run_avg) as run_avg max(cnt) as count count as cnt by in_range | 
sort 0 in_range | 
eval range=if(in_range>=10, ">= 10.0 sec","< "+tostring(in_range)+" sec") | 
eval run_avg=round(run_avg,1) | 
rename cnt as "No of Transactions"| 
table range "No of Transactions"

The result of this search is a table with the correct format; however, the "No of Transactions" values do not seem to fall within the defined ranges.

Second part of the problem (optional):
In addition, the ranges are not cumulative: the ranges actually being reported appear to be 0-1 sec, 1-2 sec, 2-3 sec, etc.


jeffland
SplunkTrust

Line 2 will round your responseTime field up to the next integer if it's smaller than 10, limiting it at 10 and keeping one decimal. You could also think of this as round(min(ceil(responseTime), 10), 1). I don't see anything wrong with it.
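To make the bucketing concrete, here is a small Python model of that expression (the sample values are invented for illustration):

```python
import math

def in_range(response_time):
    # Mirror the SPL expression round(min(ceil(responseTime), 10), 1):
    # round the time up to the next whole second, capped at 10.
    return round(min(math.ceil(response_time), 10), 1)

print(in_range(0.4))   # → 1, i.e. the "< 1.0 sec" bucket
print(in_range(2.3))   # → 3, i.e. the "< 3.0 sec" bucket
print(in_range(12.7))  # → 10, i.e. the ">= 10.0 sec" bucket
```

Note that an exact boundary value like 2.0 stays in bucket 2, because ceil(2.0) = 2.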

What happens next in your search is mostly superfluous if all you're interested in is the count of transactions per range. Since I don't know what your raw data looks like, I'm going to assume that one transaction is one event with one responseTime value, which then requires a simple stats count by the bucketed range. Your entire search would be

index=foo responseTime=*
| eval range= round(min(ceil(responseTime), 10), 1)
| stats count as "No of transactions" by range
| sort range
| fieldformat range = if(range = 10, ">= ", "< ").range

It looks like the former Splunk specialist tried to calculate the average responseTime per range as well, but that is never shown in your final table. You could add it by including avg(responseTime) as "Average transaction response time" before the by clause of the stats.

As per your optional requirement regarding cumulative stats, there are multiple ways to do this. One is to simply reverse the sort order, calculate a running sum, and reverse the order again:

index=foo responseTime=*
| eval range= round(min(ceil(responseTime), 10), 1)
| stats count as "No of transactions" by range
| sort - range
| fieldformat range = if(range = 10, ">= ", "< ").range
| streamstats sum("No of transactions") as "No of transactions (total)"
| sort range

I've moved this info to a new field; you can change that if you don't need the individual values and are only interested in the cumulative stats.
If this doesn't work for you, feel free to provide more details, possibly sharing some of your (sanitized) data.
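For intuition, the streamstats running-sum step can be modeled in Python; the per-bucket counts below are invented for illustration:

```python
from itertools import accumulate

# Hypothetical counts per bucket (bucket 1 = "< 1.0 sec" ... 10 = ">= 10.0 sec")
counts = {1: 5, 2: 3, 3: 2, 10: 1}
buckets = sorted(counts)  # ascending, like | sort range
running = list(accumulate(counts[b] for b in buckets))
cumulative = dict(zip(buckets, running))
print(cumulative)  # → {1: 5, 2: 8, 3: 10, 10: 11}
# each bucket now counts all transactions in that bucket or below
```

Sorting descending first (like | sort - range) instead accumulates from the largest bucket downward.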



alexandermunce
Communicator

Hi Jeff,

This worked well and simplified the query, thank you for taking the time to assist.

I actually did not need to reverse the sort order as I wanted cumulative from the top of the list down.

My final requirement is to add a column showing each "No of transactions (total)" value as a percentage of the overall total.

I am thinking I might be best to do this in Excel.


jeffland
SplunkTrust
SplunkTrust

No problem. This adds a column named "Total transactions" to your table, then calculates what percentage each "No of transactions (total)" value is of that total in the field "Percentage of total transactions" (simply append to your existing search):

| eventstats sum("No of transactions") as "Total transactions"
| eval "Percentage of total transactions" = round('No of transactions (total)' / 'Total transactions' * 100, 0)

If you don't want to keep the column "Total transactions", just append another | fields - "Total transactions" to your search.
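The same arithmetic in Python terms, with invented cumulative counts for illustration:

```python
def percentage_of_total(cumulative_count, total):
    # Mirror the SPL eval:
    # round('No of transactions (total)' / 'Total transactions' * 100, 0)
    return round(cumulative_count / total * 100)

totals = [5, 8, 10, 11]   # hypothetical cumulative counts per range
grand_total = totals[-1]  # 11 transactions overall
print([percentage_of_total(c, grand_total) for c in totals])  # → [45, 73, 91, 100]
```

The last row is always 100%, since the final cumulative count equals the grand total.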
