Looking at the job inspector, why is startup.handoff taking the majority of the search time?

cegoes
Explorer

Pastebin of search.log: http://pastebin.com/aAzw697G

Job Inspector execution costs:

    Duration (s)    Component    Invocations    Input count    Output count
    0.00    command.fields    15    197    197
    0.08    command.search    15    -    197
    0.04    command.search.index    24    -    -
    0.01    command.search.filter    2    -    -
    0.00    command.search.calcfields    2    1,167    1,167
    0.00    command.search.fieldalias    2    1,167    1,167
    0.00    command.search.index.usec_1_8    489    -    -
    0.00    command.search.index.usec_512_4096    5    -    -
    0.02    command.search.rawdata    2    -    -
    0.01    command.search.kv    2    -    -
    0.00    command.search.typer    2    197    197
    0.00    command.search.lookups    2    1,167    1,167
    0.00    command.search.summary    15    -    -
    0.00    command.search.tags    2    197    197
    0.00    dispatch.check_disk_usage    1    -    -
    0.00    dispatch.createdSearchResultInfrastructure    1    -    -
    0.04    dispatch.evaluate    1    -    -
    0.03    dispatch.evaluate.search    1    -    -
    0.04    dispatch.fetch    16    -    -
    0.06    dispatch.localSearch    1    -    -
    0.01    dispatch.readEventsInResults    1    -    -
    0.08    dispatch.stream.local    15    -    -
    0.31    dispatch.timeline    16    -    -
    0.05    dispatch.writeStatus    6    -    -
    0.02    startup.configuration    1    -    -
    0.64    startup.handoff    1    -    -

MuS
SplunkTrust

Hi cegoes,

looking at the docs http://docs.splunk.com/Documentation/Splunk/6.4.2/Search/ViewsearchjobpropertieswiththeJobInspector you can find this about startup.handoff:

The time elapsed between the forking of a separate search process and the beginning of useful work of the forked search processes. In other words it is the approximate time it takes to build the search apparatus. This is cumulative across all involved peers. If this takes a long time, it could be indicative of I/O issues with .conf files or the dispatch directory.

So, this value is cumulative across all peers involved in the search, which means that in a large environment you will see much higher numbers than on a single-instance Splunk.
As stated in the docs, a high number for startup.handoff can indicate a problem in your environment. Some hints can be found in this answer: https://answers.splunk.com/answers/247024/how-to-troubleshoot-why-startuphandoff-in-the-sear.html
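If you want to poke at those numbers outside the UI, here is a minimal Python sketch that reads the same execution costs from the search job's REST endpoint (the data behind the Job Inspector). It assumes a local instance on the default management port 8089; the SID and credentials are placeholders, and the "performance" field with its "duration_secs" key is my assumption about what the job entity exposes, so verify it against your own instance:

    # Minimal sketch (assumptions): local Splunk on the default management port
    # 8089, placeholder credentials, and an existing search job SID copied from
    # the Job Inspector. The "performance" field and its "duration_secs" key are
    # assumptions about what the job entity exposes - check against your instance.
    import requests

    BASE = "https://localhost:8089"
    SID = "1466012345.123"        # hypothetical SID - copy yours from the Job Inspector
    AUTH = ("admin", "changeme")  # placeholder credentials

    resp = requests.get(
        f"{BASE}/services/search/jobs/{SID}",
        params={"output_mode": "json"},
        auth=AUTH,
        verify=False,             # local test instance with a self-signed certificate
    )
    resp.raise_for_status()

    content = resp.json()["entry"][0]["content"]
    perf = content.get("performance", {})

    # Print each execution-cost component sorted by duration, highest first,
    # so a dominant startup.handoff stands out immediately.
    for name, stats in sorted(
        perf.items(),
        key=lambda kv: float(kv[1].get("duration_secs", 0)),
        reverse=True,
    ):
        print(f"{float(stats.get('duration_secs', 0)):8.2f}  {name}")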

Hope this helps ...

cheers, MuS


cegoes
Explorer

I'm running locally on Windows 7 with a Samsung 850 EVO SSD.
Due to my inexperience with Splunk, I'm not sure what in my local environment could be causing this.

Do you have any ideas as to what it could be?


MuS
SplunkTrust

Well, as mentioned in the answer at the provided link, the reasons can vary. It can be your local Windows box, or it can be your SSD - having an SSD does not automatically mean you will benefit from its theoretical I/O speed.
I would suggest taking a closer look at the machine specs to check whether the minimum requirements are met http://docs.splunk.com/Documentation/Splunk/6.4.2/Installation/Systemrequirements#Recommended_hardwa... and using Perfmon data to monitor what happens when you start a search on your box.
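If setting up Perfmon counters feels like overkill for a first look, a rough alternative is a small Python script using the third-party psutil package to sample CPU and disk throughput while the search runs - just a sketch, the one-second interval and 30-second window are arbitrary choices:

    # Rough stand-in for Perfmon: sample CPU and disk activity once per second
    # while the search runs, to see whether the box looks I/O-bound or CPU-bound
    # during startup.handoff. Requires the third-party psutil package.
    import time
    import psutil

    psutil.cpu_percent(interval=None)        # prime the CPU counter
    prev = psutil.disk_io_counters()

    for _ in range(30):                      # sample for roughly 30 seconds
        time.sleep(1)
        cur = psutil.disk_io_counters()
        cpu = psutil.cpu_percent(interval=None)
        read_mb = (cur.read_bytes - prev.read_bytes) / 1e6
        write_mb = (cur.write_bytes - prev.write_bytes) / 1e6
        print(f"cpu={cpu:5.1f}%  read={read_mb:6.1f} MB/s  write={write_mb:6.1f} MB/s")
        prev = cur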
