Splunk Search

What to do when the appendcols command can't handle larger counts?

f5x6kb8
Explorer

We need to determine a 30-day average based on the counts of two events, a request and a response. The issue is that each type generates upwards of 30K events hourly. The search below works great for short durations, but once the duration increases, the count data from the appendcols subsearch is all over the map.
Any ideas would be greatly appreciated!!

index=blah blah blah 
| search field="A Response*"  
| timechart span=1h count as response  
| appendcols [ search field="A Request*"   
| timechart span=1h count as request ]  
| eval reciprocal=round(response/request,2)*100
1 Solution

DalJeanis
SplunkTrust

Eliminate appendcols by just processing the data once for both types. That also sidesteps the subsearch limits that make the appendcols counts erratic at higher volumes.

index=blah blah blah 
 | search field="A Response*" OR field="A Request*"
 | bin _time span=1h
 | eval request=if(like(field,"A Request%"),1,0)
 | eval response=if(like(field,"A Response%"),1,0)
 | timechart span=1h sum(request) as request, sum(response) as response  
 | eval reciprocal=round(response/request,2)*100

The filter might also be written like this, in place of the | search line. I'm not sure which is more efficient, or whether the .* on the end is needed.

  | regex field="^A Re(sponse|quest).*"
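
One quick way to settle the trailing .* question is to run the regex against a synthetic event built with makeresults (the sample field value here is made up):

  | makeresults
  | eval field="A Response 12345"
  | regex field="^A Re(sponse|quest)"

The event survives the filter even without the trailing .*, since the regex command keeps any event whose field contains a match; the pattern does not have to consume the whole value.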


f5x6kb8
Explorer

Worked like a charm! Thank you very much for taking the time. Have a great weekend!


niketn
Legend

You need to summarize the data per hour, which reduces each day's events from 30K*24 to just 24 summary rows per event type. Then you will be able to run subsearches like appendcols without dropping data.

Refer to the sitimechart and collect commands in the Splunk documentation:
https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Sitimechart
https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Collect
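
For example, here is a minimal sketch of that approach. The summary index name my_summary and the type field are placeholders of mine, and the base filter borrows the single-pass trick from the accepted answer. A scheduled search pre-summarizes the hourly counts:

index=blah blah blah field="A Response*" OR field="A Request*"
 | eval type=if(like(field,"A Request%"),"request","response")
 | sitimechart span=1h count by type
 | collect index=my_summary

The 30-day report then runs against the handful of summary rows per hour instead of the raw events:

index=my_summary
 | timechart span=1h count by type
 | eval reciprocal=round(response/request,2)*100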

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"

somesoni2
SplunkTrust
SplunkTrust

Or it may be possible to avoid the appendcols altogether. We can have a look if you share your full search.
