Splunk Search

We are getting an error when we search the Windows index; this has been happening for the past month

Kaushikkatta03
Explorer

Below is the error we got:

[hsplunkp01] Dispatch Runner: Configuration initialization for /opt/splunk/var/run/searchpeers/C090FDA2-105E-4875-A110-3F13FF986151-1510330174 took longer than expected (3157ms) when dispatching a search (search ID: remote_splunksearchhead.XXXXXXX.com_1510330563.816_8FAD37F6-D2F6-4C43-A22C-66B26D1236E6); this typically reflects underlying storage performance issues.

Kindly let us know troubleshooting methods and a resolution for this.


Richfez
SplunkTrust

You don't mention an operating system, OS version, Splunk version, or anything else, so this can only be generic help.

The first thing to do is examine your disk performance using sysstat, iostat, perfmon, or whatever other disk performance tool you have. The easiest metric to look at is "disk idle time". (The other metrics often get weird when there's RAID underneath the "disk" - not useless, and maybe perfectly fine, but harder to interpret.)

What idle time should be is really variable, but let's say that if you are under 50% idle for significant periods of time, it could simply be disk contention. Short spikes, even down to 0% idle, are fine, but longer stretches could be bad. In that case, your disks simply aren't fast enough.
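If your indexers are on Linux, here's a rough sketch of how you could measure that idle time yourself. This is just an illustration of mine, not anything Splunk ships - it assumes Python 3 and a readable /proc/diskstats, samples the "time spent doing I/Os" counter twice, and works out how idle each device was in between:

#!/usr/bin/env python3
# Hypothetical helper (not part of Splunk): estimate per-device "disk idle time"
# on Linux by sampling /proc/diskstats twice. Field 13 (1-based) is the total
# milliseconds the device spent doing I/O, so busy% = delta / interval and
# idle% = 100 - busy%.
import time

INTERVAL_S = 5  # sampling window in seconds (assumption: tune as needed)

def io_ms():
    """Return {device: cumulative milliseconds spent doing I/O}."""
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            if len(parts) < 14:
                continue
            dev, busy_ms = parts[2], int(parts[12])  # parts[12] = "time spent doing I/Os (ms)"
            stats[dev] = busy_ms
    return stats

before = io_ms()
time.sleep(INTERVAL_S)
after = io_ms()

for dev in sorted(after):
    if dev not in before:
        continue
    busy_pct = (after[dev] - before[dev]) / (INTERVAL_S * 1000.0) * 100.0
    idle_pct = max(0.0, 100.0 - busy_pct)
    flag = "  <-- possible contention" if idle_pct < 50 else ""
    print(f"{dev:12s} idle {idle_pct:6.1f}%{flag}")

Run it a few times while the problem searches are dispatching; consistently low idle on the volume holding $SPLUNK_DB points back at the storage.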

(Another great metric for this is "disk queue length", which should generally stay below the number of physical disks in the system and which should clear spikes quickly. So if you have 10 disks in RAID 10, the disk queue length should stay below 10; spikes may go quite a bit higher, but they should clear within a few seconds.)
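Along the same lines, and again assuming a Linux indexer with Python 3 (this is my own illustrative snippet, not a Splunk tool), you can watch the in-flight I/O count from /proc/diskstats as a rough stand-in for queue length. The device name and spindle count below are placeholders you'd swap for your own:

#!/usr/bin/env python3
# Hypothetical companion check (an assumption, not Splunk tooling): sample the
# "I/Os currently in progress" column of /proc/diskstats a few times to
# approximate disk queue length, then compare the readings against the number
# of physical spindles behind the device (e.g. 10 for a 10-disk RAID 10).
import time

DEVICE = "sda"        # assumption: the block device holding your index volumes
PHYSICAL_DISKS = 10   # assumption: spindle count behind the device
SAMPLES, PAUSE_S = 12, 1

def in_flight(device):
    """Return the number of I/Os currently in progress for `device`."""
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 14 and parts[2] == device:
                return int(parts[11])  # parts[11] = I/Os currently in progress
    raise ValueError(f"device {device!r} not found in /proc/diskstats")

readings = []
for _ in range(SAMPLES):
    readings.append(in_flight(DEVICE))
    time.sleep(PAUSE_S)

over = sum(1 for r in readings if r > PHYSICAL_DISKS)
print(f"{DEVICE}: samples={readings}")
print(f"{over}/{SAMPLES} samples exceeded {PHYSICAL_DISKS} (sustained excess suggests the disks can't keep up)")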

Another thing - directly related to the above - is your swap usage. If you are swapping because of a lack of RAM, every swap operation hits the disks and kills your performance for anything else disk-related.
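To see whether swapping is actually happening while searches run, you could sample the swap-in/swap-out counters in /proc/vmstat. The snippet below is just a sketch assuming a Linux indexer with Python 3 (running "vmstat 5" at a shell shows the same thing in the si/so columns):

#!/usr/bin/env python3
# Hypothetical check (an assumption, not part of Splunk): detect active swapping
# by sampling the pswpin/pswpout counters in /proc/vmstat. Non-zero deltas while
# searches run mean the box is short on RAM and the swap I/O is competing with
# Splunk for the same disks.
import time

INTERVAL_S = 10  # assumption: sampling window in seconds

def swap_counters():
    """Return (pages swapped in, pages swapped out) since boot."""
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, _, value = line.partition(" ")
            if key in ("pswpin", "pswpout"):
                counters[key] = int(value)
    return counters.get("pswpin", 0), counters.get("pswpout", 0)

in1, out1 = swap_counters()
time.sleep(INTERVAL_S)
in2, out2 = swap_counters()

print(f"pages swapped in : {in2 - in1} over {INTERVAL_S}s")
print(f"pages swapped out: {out2 - out1} over {INTERVAL_S}s")
if (in2 - in1) or (out2 - out1):
    print("Active swapping detected - add RAM or reduce memory pressure before blaming the disks alone.")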

Speaking of swap - how much RAM do you have in your indexer, and how much of it is free (or used for file system cache)? RAM can help a LOT. Don't stop at the wimpy amounts recommended by Splunk - there are graphs and charts (from the community) showing the performance increase of going even from 64 GB of RAM to 128 GB. File system cache is amazing. 🙂
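For a quick look at the RAM and file system cache picture, /proc/meminfo has everything you need. Here's a small sketch (again assuming Linux and Python 3; "free -g" gives you the same numbers) that prints the totals in GiB:

#!/usr/bin/env python3
# Hypothetical sketch (assumes a Linux indexer): report how much RAM the box
# has, how much is available, and how much is serving as filesystem cache -
# the cache is what keeps frequently read buckets off the slow disks.
def meminfo():
    """Return /proc/meminfo values in kB, keyed by field name."""
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            values[key] = int(rest.split()[0])  # first token is the value in kB
    return values

m = meminfo()
gib = 1024 * 1024  # kB per GiB

print(f"MemTotal     : {m['MemTotal'] / gib:6.1f} GiB")
print(f"MemAvailable : {m.get('MemAvailable', 0) / gib:6.1f} GiB")
print(f"Cached (FS)  : {m['Cached'] / gib:6.1f} GiB")
print(f"SwapTotal    : {m.get('SwapTotal', 0) / gib:6.1f} GiB")
print(f"SwapFree     : {m.get('SwapFree', 0) / gib:6.1f} GiB")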

In all cases, this problem indicates that your disks aren't fast enough or that your indexers can't cache enough of the disk (which also implies your disks are too slow).

This isn't necessarily the problem, but it needs to be double- or triple-checked before moving on to other potential causes.

Happy Splunking, I hope we help you get this sorted out!
-Rich
