I'm trying to track down slowness across all types of searches I run. Looking at the Search Job Inspector for several of them, I consistently find that dispatch.fetch takes up the vast majority of the overall search time.
I've read this document http://docs.splunk.com/Documentation/Splunk/5.0.1/Search/UsingtheSearchJobInspector, but I'm looking for some insight on how to troubleshoot a long dispatch.fetch time specifically.
Any insight?
Fetch time is mostly spent waiting for events to be pulled back from disk, so I would check whether there is a lot of I/O contention. It could also reflect how your searches are structured relative to your data: a dense search (where a large proportion of events in the index match your search terms) necessarily has more fetching to do than a sparse one.
How do I check for I/O contention? (I tried SOS, but it isn't showing this.)
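On Linux, `iostat -x 2` (from the sysstat package) is the usual way to watch for I/O contention: sustained %util near 100, or high await, on the device holding your indexes. If sysstat isn't installed on the indexer, a rough check can be scripted from /proc/diskstats alone; the temp-file path and 1-second window below are just illustrative:

```shell
#!/bin/sh
# Rough per-device busy check using only /proc/diskstats (Linux).
# Column 13 is cumulative milliseconds the device has spent doing
# I/O since boot; a 1-second delta close to 1000 means the disk
# was ~100% busy over that second. `iostat -x 2` reports the same
# figure as %util, along with average wait times (await).
awk '{print $3, $13}' /proc/diskstats > /tmp/diskstats.before
sleep 1
awk '{print $3, $13}' /proc/diskstats |
while read dev now; do
  prev=$(awk -v d="$dev" '$1 == d {print $2}' /tmp/diskstats.before)
  [ -n "$prev" ] || prev=0
  echo "$dev: $((now - prev)) ms busy in the last 1000 ms"
done
```

If one device is pegged while Splunk searches run, the next step is checking what that device actually is (local disk, network mount, etc.) and how its filesystem is mounted.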
I am connecting my Hunk application (6.4) to DataStax Cassandra 3.1 to get results for monitoring, and the results consistently take 5 seconds to render even though the table only has hundreds of rows.
I have also verified my CassandraERP connector class, which takes only milliseconds to return its response. Could anyone help me clarify this?
Execution costs
Duration (seconds)  Component                                   Invocations  Input count  Output count
0.00                command.fields                              4            1            1
0.00                command.search                              4            1            1
0.00                command.search.filter                       4            -            -
2.02                command.stdin                               3            -            1
2.00                command.stdin.cpd2sr                        2            1            1
0.00                command.stdin.calcfields                    1            1            1
2.00                command.stdin.cpd2sr.blocked                1            -            -
0.00                command.stdin.kv                            1            1            1
0.00                command.stdin.tags                          1            1            1
0.00                command.stdin.typer                         1            1            1
0.00                command.stdin.fieldalias                    1            1            1
0.00                command.stdin.lookups                       1            1            1
0.00                dispatch.check_disk_usage                   1            -            -
0.06                dispatch.createdSearchResultInfrastructure  1            -            -
0.04                dispatch.evaluate                           1            -            -
0.04                dispatch.evaluate.search                    1            -            -
4.08                dispatch.fetch                              6            -            -
0.00                dispatch.localSearch                        1            -            -
0.00                dispatch.preview                            1            -            -
0.00                dispatch.readEventsInResults                1            -            -
0.00                dispatch.stream.local                       1            -            -
0.00                dispatch.timeline                           6            -            -
0.03                dispatch.writeStatus                        8            -            -
0.01                startup.configuration                       1            -            -
0.03                startup.handoff                             1            -            -
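For what it's worth, a breakdown like the one above can be ranked mechanically. The throwaway snippet below (with a few durations copied from the table) sorts components by duration; note that command.stdin.cpd2sr.blocked accounts for 2 of the roughly 4 seconds, which suggests the time is spent blocked waiting on the external (Cassandra) command rather than on Splunk's own disk reads.

```python
# Rank Job Inspector "Execution costs" components by duration.
# The lines below are a subset copied from the table above.
costs_text = """\
4.08 dispatch.fetch
2.02 command.stdin
2.00 command.stdin.cpd2sr
2.00 command.stdin.cpd2sr.blocked
0.06 dispatch.createdSearchResultInfrastructure
0.04 dispatch.evaluate
"""

rows = [(float(duration), component)
        for duration, component in
        (line.split() for line in costs_text.splitlines())]

# Longest first, to see where the time actually goes.
for duration, component in sorted(rows, reverse=True):
    print(f"{duration:5.2f}  {component}")
```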
Thanks - the disk I/O contention comment led me to find a bad network storage mount parameter on the index server that was causing the disk to be very busy.