Splunk Search

How can I improve the performance of my search?

shankarananthth
Explorer
index="way" sourcetype="transactions"   
| transaction fields=Id keepevicted=true   
| eval Status=if(isnotnull(Error), "Failed", "Success") 
| where Status="Failed"  
|eval Partner= coalesce(partners,Partnering) 
| table Appname ServiceName Partner Stats _time Error ResponseTime  
| timechart span=1h useother=f count(Stats) by Appname

When I run the search above in our production environment over a 7-hour interval, it fetches 40,724 events out of 1,392,773 and runs for nearly 14 minutes.

Is there any option to reduce the running time of the search?
I suspect indexing won't help much in this scenario. Is there another option to optimize the search and reduce the time?

Thanks in advance.

Regards,
Shankarananth.T

0 Karma

preactivity
Path Finder

Always try to avoid the transaction command if you want better performance. You can improve performance by 10x by using Splunk metadata fields. I can help you with that; please contact me on Fiverr or by email (hurdlej1@gmail.com).

https://www.fiverr.com/s2/affc9b7a8a
https://www.fiverr.com/s2/608e8ed73f?utm_source=CopyLink_Mobile
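
For readers who land here later: the metadata-field approach usually means tstats, which reads indexed fields and never touches raw events. A minimal sketch of the idea, assuming only indexed fields are needed (tstats cannot see search-time extractions such as Error unless they are index-time fields or part of an accelerated data model):

| tstats count where index="way" AND sourcetype="transactions" by _time span=1h

This returns hourly event counts very quickly, but on its own it cannot reproduce the Failed/Success split from the original search.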

0 Karma

diogofgm
SplunkTrust

If you want to help, just post your answer here so every Splunk user looking for something similar will be able to find it.

------------
Hope I was able to help you. If so, some karma would be appreciated.
0 Karma

somesoni2
SplunkTrust

Try something like this

index="way" sourcetype="transactions"   | stats values(Appname) as Appname  values(Error) as Error first(_time) as _time by Id | where isnotnull(Error) | timechart span=1h count by Appname

Updated answer

index="way" sourcetype="transactions" Error=*  | stats values(Appname) as Appname  values(Error) as Error first(_time) as _time by Id | where isnotnull(Error) | timechart span=1h count by Appname
0 Karma

shankarananthth
Explorer

OK, I will try all your inputs.

0 Karma

shankarananthth
Explorer

Yes somesoni2, I tried it.
But the problem is the data volume: one hour alone yields nearly 1.5 million records.
Moreover, I'm working with log files to fetch the data.
Fetching and searching such a huge volume of data from log files takes a long time.
Is there any way to make the search run faster?

Thanks,
Shankarananth.T

0 Karma

pgreer_splunk
Splunk Employee

It might be helpful to take the quickest search you've attempted, run Job -> Inspect Job, and look at where the larger portions of your search time are spent. Are you running against a single indexer? Multiple indexers? An indexer cluster? Factors other than the search itself could be causing slower-than-desirable performance, beyond reducing the data set and using more efficient search commands. It would be interesting to see where the vast majority of the time is being spent for your search.
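
If you want a quick view of the same numbers outside the UI, one hedged sketch is listing recent jobs and their runtimes with the rest search command (assuming your role can read the jobs endpoint; the fields below are standard ones it returns):

| rest /services/search/jobs
| table sid, author, dispatchState, runDuration, eventCount
| sort - runDuration

The per-component timings (dispatch.fetch, command.search, and so on) still come from Job -> Inspect Job.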

0 Karma

shankarananthth
Explorer

Hi pgreer,
When I check Inspect Job, I see that dispatch.fetch takes the most time, nearly 75% of the total.
Yes, we are running against a single indexer.
The volume of data is very large; over a period of 4 hours it scans nearly 3 million records.

Regards,
Shankarananth.T

0 Karma

somesoni2
SplunkTrust

I think the early-filtering method from @javiergn should help you reduce the number of rows to be processed. Try the updated answers.

0 Karma

javiergn
SplunkTrust

One optimisation is to filter down to the fields you need at the very beginning of your query.
You might also be able to pre-filter by Failed status before your transaction begins.
I also don't understand why you need the Partner field if you are getting rid of it in your timechart, so you can remove that too.
Try this and let me know if it helps:

index="way" sourcetype="transactions" Error=*
| table _time, id, Error, Stats, ResponseTime, Appname
| transaction fields=Id keepevicted=true
| timechart span=1h useother=f count(Stats) by Appname

A more advanced optimisation would be to tune the transaction command with the maxspan or maxevents parameters. You can even replace transaction with stats and streamstats; there are several examples of transaction alternatives in the forum.
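
As an illustration, a stats-based sketch of the same failed-transaction timechart (field names taken from the original search; this assumes each Id maps to a single Appname, so values() is safe):

index="way" sourcetype="transactions"
| fields _time, Id, Error, Appname
| stats min(_time) as _time, values(Appname) as Appname, count(Error) as failures by Id
| where failures > 0
| timechart span=1h useother=f count by Appname

Unlike transaction, stats is distributable, so most of the work stays on the indexers.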

Hope that helps.

Thanks,
J

javiergn
SplunkTrust

Hi,

If you tell us exactly what you are trying to achieve, we might be able to help a bit more.

My guess is that, as I mentioned before, your transaction is taking a lot of resources, so you might be able to either tweak it or simply replace it with stats.

Also take a look at the comments from somesoni2 and pgreer, as you might find what you need there.

0 Karma

shankarananthth
Explorer

Hi,

Sorry for the late response, javiergn and somesoni2.
Thank you very much for your replies.
They were really useful; I can see some improvement in my query.
Last time, for a 1-hour interval of data, the query ran for 15 minutes.
Now it runs for 9 minutes.
But the end user still wants to reduce the time further.
I went through some other posts as well; they say there is no built-in command or event to optimize a query in Splunk.
Is there anything I can do in a .conf file to improve the performance of the query?

Regards,
Shankarananth.T

0 Karma
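
On the .conf question: no setting will make an inefficient search fast, but one knob sometimes discussed for raw-event-heavy searches is batch search pipeline parallelization on the indexers. A hedged sketch, assuming spare CPU cores are available; check the limits.conf documentation for your Splunk version before changing anything:

# limits.conf on the indexers (typically requires a restart)
[search]
# Run two search pipelines per search instead of the default one.
batch_search_max_pipeline = 2

Raising parallelism trades CPU for latency, so test it under production load first.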