
Why is the accelerated data model performing slowly?

vaibhavagg2006
Communicator

I have an accelerated data model that is performing very slowly. It takes more than 2 minutes to return just a count of events and more than 4 minutes to do any statistical functions. For example, the query below takes 121 seconds to run:

| tstats count from datamodel=mydatamodel where (nodename=mydatamodel.logs) (mydatamodel.tag=prod) groupby "mydatamodel.transactionID"

This returned around 4 million rows after scanning 12 million events. Is there anything I can do to improve the performance? We have a well-configured search head cluster and around 13 indexers.


robertlynch2020
Motivator

Did you get an answer to your question above? I am also having performance issues with | tstats.


micahkemp
Champion

If the data model is accelerated, you can use summariesonly=t to search only the accelerated data:

| tstats summariesonly=t count from datamodel=mydatamodel where (nodename=mydatamodel.logs) (mydatamodel.tag=prod) groupby "mydatamodel.transactionID"

This should result in a faster search.
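
One caveat worth noting: with summariesonly=t, events that have not yet been summarized (for example, very recent events waiting on the next acceleration run) are excluded from the results, so counts can come back slightly lower than with the default summariesonly=f behavior.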

esix_splunk
Splunk Employee

Is that streaming command equally spread across your indexers, or do you have one or two indexers that are taking longer than others?

Dispatch stream taking a lot of time could reflect indexer performance issues, especially around I/O. How are your normal searches running? What if you run tstats against the index instead of the DM?
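
For example, a baseline run of tstats directly against the raw index might look like the sketch below (the index name here is a placeholder, and only fields that are indexed by default, such as sourcetype, can be used for grouping):

| tstats count where index=your_index groupby sourcetype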

Circling back to the data model, what kind of events are going into the DM? And what does the base search look like?


vaibhavagg2006
Communicator

Yes, the streaming command is equally spread. All the indexers are taking almost equal time.

My base search looks something like this:
"index=idx1 (sourcetype=abc transactionID=) OR (sourcetype=xyz ("some search string") ) OR (sourcetype=blah (*search string))"

Also, I tried running a normal index search vs. tstats against the accelerated data model for the same set of data.
The response time was 168 seconds vs. 138 seconds.
Event count: 19,821,855

I am not able to run tstats against the index, as the fields I am trying to group by are not indexed.


esix_splunk
Splunk Employee

What are the terms of the acceleration? Is it actually accelerated? You can check the status of the data model in your Management Console, or in the Data Model Audit dashboard of CIM (if you're using it).
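
If you'd rather check from the search bar, a sketch along these lines is commonly used to list data model summaries and their completion status (the admin/summarization REST endpoint is what the audit dashboards query; verify the endpoint and field names on your Splunk version):

| rest splunk_server=local /services/admin/summarization by_tstats=t | table summary.id summary.complete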

It's worth noting that your accelerated data exists on the indexers, not on the SHC. If your indexers are facing any type of I/O, memory, or CPU contention, this can affect the performance of your data model.

Additionally, the structure of your data model can also adversely affect performance. What's the time span you're searching over? If you adjust the time span down, how does that affect search performance?
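
For instance, the same search narrowed to a shorter window might look like the sketch below (time modifiers inside the where clause are one way to do this; the time range picker works as well):

| tstats summariesonly=t count from datamodel=mydatamodel where (nodename=mydatamodel.logs) (mydatamodel.tag=prod) earliest=-4h groupby "mydatamodel.transactionID"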


vaibhavagg2006
Communicator

Yes, the data model is accelerated for the last 30 days.
The indexers are OK in terms of resources; the slowness is consistent, and the query takes the same time to run throughout the day.
I am searching over "Yesterday". The event count I mentioned (12 M) is for one day. The logs are in key-value format.


esix_splunk
Splunk Employee

Did you confirm that the data is actually accelerated via the MC or the DM Audit dashboard? This is a key thing to confirm. If your data model acceleration is not completing or working properly, then your tstats search just searches against the raw data, which would give you a search time similar to normal search performance.
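
A quick way to test this from the search bar is to run the same count with summaries enforced and then without, and compare (a sketch; if the first search returns far fewer events than the second, the summaries are incomplete and tstats is falling back to raw data):

| tstats summariesonly=t count from datamodel=mydatamodel
| tstats summariesonly=f count from datamodel=mydatamodel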

What does the base search of your data model look like?

Can you check the search log and see where most of the processing time is spent?


vaibhavagg2006
Communicator

Yes, I checked; the acceleration is fine.
It spends the most time in "command.tstats" and then "dispatch.stream.remote". Under "command.tstats", most of the time is taken by "command.tstats.query_tsidx".
Just curious: what does "invocations" mean in the job inspector? It shows 623 for "command.tstats" and around 25 for each indexer.
