When submitting queries in rapid succession to Splunk via the REST API, I'm getting 503 errors from splunkd. In most cases this occurs after exactly 3 rapid-fire requests, although occasionally I get much further before seeing the error. The particular queries being executed don't seem to matter.
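For reproduction, the calls look roughly like this (a sketch: the host, credentials, and search string are placeholders, and the point at which the 503 appears depends on the concurrency quota in effect):

```shell
# Fire several search jobs back to back at the splunkd management port (8089).
# Credentials and the query are illustrative placeholders.
for i in 1 2 3 4; do
  curl -k -u admin:changeme https://localhost:8089/services/search/jobs \
       -d search="search index=_internal | head 10"
done
# Once the concurrent-search quota is exhausted, splunkd answers with HTTP 503.
```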
splunkd.log shows:
05-04-2010 20:02:31.969 ERROR DispatchCommand - Your maximum number of concurrent searches has been reached. usage=3 quota=3 The search was not run. SearchId=1273003350.8
05-04-2010 20:06:55.635 ERROR DispatchCommand - Your maximum number of concurrent searches has been reached. usage=3 quota=3 The search was not run. SearchId=1273003614.12
05-04-2010 20:06:58.822 ERROR DispatchCommand - Your maximum number of concurrent searches has been reached. usage=3 quota=3 The search was not run. SearchId=1273003617.16
In etc/system/local/limits.conf I have:
[search]
base_max_searches = 8
max_searches_per_cpu = 8
max_rt_search_multiplier = 6
Splunk appears to be ignoring limits.conf.
The base_max_searches setting has nothing to do with the srchJobsQuota setting under roles; the former is a system-wide server setting.
You are likely hitting the per-role search quota (set in authorize.conf) before you ever reach the system-wide search limit (set in limits.conf). authorize.conf determines the concurrent-search limit for each role.
http://www.splunk.com/base/Documentation/4.1.1/Admin/Authorizeconf
$SPLUNK_HOME/etc/system/local/authorize.conf
$SPLUNK_HOME/etc/system/default/authorize.conf
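You can confirm which values Splunk is actually applying, and which file each one comes from, with btool (run as the user that owns the Splunk installation):

```shell
# Show the effective authorize.conf settings and the file each comes from
$SPLUNK_HOME/bin/splunk btool authorize list --debug

# Likewise, confirm the effective [search] limits from limits.conf
$SPLUNK_HOME/bin/splunk btool limits list search --debug
```

This will show whether your base_max_searches change is being picked up and what srchJobsQuota is in effect for each role.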
The default srchJobsQuota for the user role is 3.
Based on your diag, there are no changes in your $SPLUNK_HOME/etc/system/local/authorize.conf, so Splunk is using the default srchJobsQuota = 3.
Please modify your $SPLUNK_HOME/etc/system/local/authorize.conf to increase your quota.
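For example, to raise the quota for the user role, you could add something like the following to $SPLUNK_HOME/etc/system/local/authorize.conf and restart splunkd (the stanza name assumes your searches run under the default user role, and the value 8 is illustrative — pick a quota your hardware can handle):

```ini
[role_user]
# Maximum number of concurrent historical searches a member of this role may run
srchJobsQuota = 8
```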