Our Splunk environment is chronically under-resourced, so we see a lot of this message:

[umechujf,umechujs] Configuration initialization for D:\Splunk\etc took longer than expected (10797ms) when dispatching a search with search ID _MTI4NDg3MjQ0MDExNzAwNUBtaWw_MTI4NDg3MjQ0MDExNzAwNUBtaWw__t2monitor__ErrorCount_1694617402.9293. This usually indicates problems with underlying storage performance.

Our understanding is that the core issue is not so much storage as processor availability: Splunk had to wait roughly 10.8 seconds for its pool of processors to become available before it could dispatch the search. We are running a single search head (SH) and a single indexer (IDX), each configured with 10 CPU cores. This is also a VM environment, so those cores are shared resources. I know, that's basically everything Splunk advises against (did I mention we're also running Windows?). No, we can't address the overall resource situation right now.

The idea came up that reducing the number of cores might improve processor availability: if Splunk only had to wait for 4 or 8 cores to be free at once instead of 10, it might get to the point of actually starting the search with less initial delay, since a smaller pool of cores would need to be available first. So our question is: which server is most responsible for the delay, the SH or the IDX? And which would be the better candidate for reducing the number of available cores?
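For what it's worth, here is the back-of-the-envelope math we've been using to think about the trade-off. This is only a sketch: it assumes the documented limits.conf defaults of max_searches_per_cpu = 1 and base_max_searches = 6, which we have not verified against our Splunk version, and it says nothing about hypervisor co-scheduling itself.

```python
# Rough illustration of how the advertised core count feeds Splunk's
# historical-search concurrency ceiling. Formula and defaults are taken
# from the limits.conf [search] documentation; verify on your version.

MAX_SEARCHES_PER_CPU = 1  # limits.conf [search] default (assumed)
BASE_MAX_SEARCHES = 6     # limits.conf [search] default (assumed)

def max_concurrent_searches(cpu_cores: int) -> int:
    """Ceiling on concurrent historical searches for a given core count."""
    return MAX_SEARCHES_PER_CPU * cpu_cores + BASE_MAX_SEARCHES

for cores in (10, 8, 4):
    print(f"{cores:>2} cores -> up to {max_concurrent_searches(cores)} concurrent searches")
# 10 cores -> up to 16 concurrent searches
#  8 cores -> up to 14 concurrent searches
#  4 cores -> up to 10 concurrent searches
```

So, assuming those defaults hold, shrinking a box from 10 to 4 cores might reduce how long the VM waits for cores to be co-scheduled, but it also lowers how many searches that box can run at once, which is part of why we want to pick the right server to shrink.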