Hello,
The user has a role setting that allows up to 100 concurrent search jobs. However, at about 15-20 concurrent jobs, any new searches get queued. Is there a hard limit somewhere that has to be tweaked? Looking at CPU, RAM, and disk read/write performance, the system is not heavily utilized. We use a standalone server with indexing and search functions combined.
You are most likely hitting the system-wide limit for historical search concurrency.
The system-wide limit on concurrent historical searches is computed as:
max_hist_searches = max_searches_per_cpu x number_of_cpus + base_max_searches
See the section on concurrency in the limits.conf spec file:
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf?ac=partner_smt
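For illustration, here is how those two settings would appear in the [search] stanza of limits.conf. The values shown are, as I understand them, the shipping defaults, not a recommendation to change anything:

[search]
# historical searches allowed per CPU core (default: 1)
max_searches_per_cpu = 1
# flat allowance added on top of the per-CPU count (default: 6)
base_max_searches = 6

With those defaults on a 16-core box, max_hist_searches = 1 x 16 + 6 = 22, which lines up with searches queuing at around 15-20 concurrent jobs: the role-level limit of 100 never comes into play because the system-wide cap is reached first.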
I would caution against raising these limits, as you may see search performance degrade. Instead, consider adding more CPU cores or moving to a distributed deployment with search head clustering.