Several of the canned searches in the ServiceNow app are complaining about configuration initialization taking a long time. In my case, I'm seeing values of 1000-2000 ms.
And when I get the complaint, I'm also not seeing search results. I'm not sure whether this is an app issue or a problem with my Splunk instance.
Any ideas where I should start?
Considerations regarding NFS for hot/warm buckets: http://docs.splunk.com/Documentation/Splunk/latest/Installation/Systemrequirements#Considerations_re...
Is this something I can suppress or at least expand the threshold for?
In 6.1.4 and 6.2, the message is restricted to admins only. We don't currently allow any tuning of the message threshold, as it's something we always want our support folks and engineers to see when examining diags and other forensics. It's a pretty serious red flag.
Have you tuned your NFS client-side parameters to allow for sufficient concurrency? For example, your NFS client should support enough concurrency for your search load -- one NFS client connection per concurrent search is a good starting point.
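As a concrete illustration on a Linux NFS client, per-mount concurrency is typically influenced by mount options. The host, export path, and option values below are placeholders, not tested recommendations for your environment:

```shell
# Illustrative /etc/fstab entry for a Linux NFS client. The nconnect option
# (kernel 5.3+) opens multiple TCP connections to the server per mount;
# rsize/wsize set the per-request transfer size. All values are examples only.
#
# filer:/vol/splunk  /opt/splunk  nfs  nconnect=8,rsize=1048576,wsize=1048576,hard  0  0
```

Check your NFS client's documentation for which options it actually supports before changing anything.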
We are currently using version 6.3 and have been getting the message below intermittently for a few months. We have even had issues where our search head became unresponsive and we had to restart the splunkd service. We have received this message from both of our indexers while running searches from both of our search heads. Our Splunk instance is on a Windows server platform. The Splunk software is on our D: drive with an NTFS file system, and the hot/warm storage for our indexers is on a 6+6 RAID 1+0 NTFS volume. Is there something we could do to safely tune our NTFS client-side parameters to allow sufficient concurrency for our searches?
[PL-WLMSPLPP04] Configuration initialization for D:\Program Files\Splunk\var\run\searchpeers\pl-wlmsplpp01-1447872759 took longer than expected (1011ms) when dispatching a search (search ID: remote_pl-wlmsplpp01_1447875015.646); this typically reflects underlying storage performance issues
Wonderful answer!
Doubling my default concurrency of 64 to 128 seems to have solved the warning issue.
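For anyone else hitting this: on older Linux NFS clients the concurrency cap in question is the RPC slot table. The sysctl name below is the stock Linux one; whether it applies is an assumption about your kernel, since newer kernels size the table dynamically:

```shell
# Show the current RPC slot table size (max in-flight NFS requests per mount)
# sysctl sunrpc.tcp_slot_table_entries
#
# Raise it (e.g. from 64 to 128) and persist across reboots -- values and
# file name are illustrative, and root privileges are required:
# echo 'sunrpc.tcp_slot_table_entries = 128' >> /etc/sysctl.d/90-nfs.conf
# sysctl -p /etc/sysctl.d/90-nfs.conf
```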
Thanks a lot for your help!
This message means your search processes are taking >1s to read initial configuration information from disk. What does the I/O subsystem underneath $SPLUNK_HOME/etc look like in your environment? If $SPLUNK_HOME/etc is networked storage, for example, there might be disk/network performance issues affecting search startup time.
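If you want to measure this outside of Splunk, a minimal sketch is to time how long it takes to read every file under a directory tree, roughly mimicking a search process loading configuration at startup (the example path is a placeholder):

```python
import os
import time

def time_config_read(root):
    """Walk a directory tree, read every file, and report bytes read plus
    elapsed wall-clock time in milliseconds."""
    start = time.monotonic()
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    total += len(f.read())
            except OSError:
                pass  # skip unreadable entries (permissions, sockets, etc.)
    elapsed_ms = (time.monotonic() - start) * 1000
    return total, elapsed_ms

# Example: point this at your $SPLUNK_HOME/etc, e.g.
# bytes_read, ms = time_config_read("/opt/splunk/etc")
```

If a cold read of $SPLUNK_HOME/etc takes on the order of a second, that's consistent with the warning.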
No huge props.conf files, and the networked storage is solid.
The only thing I can add here is that this indexer is the only indexer in my environment running 6.2.
Any chance the version discrepancy has something to do with this?
Any chance the version discrepancy has something to do with this?
The message is new in 6.2 and 6.1.4, so if your other indexers are running older versions, they're likely just not emitting any message (while still suffering from the same slowness).
the networked storage is solid
It's possible that networked storage performs fine in general, yet still isn't quite up to the task here.
How are you using networked storage here? Is this an indexer using mounted bundles?
Our storage for splunk is NFS-based, this includes all of $SPLUNK_HOME.
It seems to perform well; we don't see bottlenecks for indexing (i.e., indexing performance in S.O.S.).
The SH and other indexers are all running 6.1.3. This indexer in question is my test indexer and it's running 6.2.
So if this is a new message, it's very likely we'll see this complaint across all our gear. Is this something I can suppress or at least expand the threshold for?
I've seen huge props.conf files contribute to this figure. Check whether you have any of these; an often-forgotten location is the learned app.
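A quick way to look for oversized props.conf files is a find across $SPLUNK_HOME/etc; the 100 KiB threshold below is an arbitrary starting point, not a documented limit:

```shell
# Assumes $SPLUNK_HOME points at your Splunk install (default /opt/splunk).
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
if [ -d "$SPLUNK_HOME/etc" ]; then
    # List any props.conf larger than 100 KiB, including the learned app
    find "$SPLUNK_HOME/etc" -name props.conf -size +100k
fi
```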
Hm. Since the original post, I've seen it crop up for other searches. Seems to be specific to the one indexer.
May need to do some health-checking on that indexer.
This error shouldn't be specific to any particular index. No matter what index you search against, the search process initializes the same configurations.
The time-to-initialize could depend on the app from which the search is run and the user who runs the search, though.
Would this be true if the error is specific to one particular index?