I've read a couple of other posts related to this, but I still need help. I just installed Splunk v4.1.3 on CentOS and moved my 8GB license over from the old Windows server. I migrated all the warm buckets over (about 190GB) and LOVE some of the built-in Status dashboards.
When logged in as a user, I have a problem. When I try to run a "Search details" dashboard search (push the search button) for last 24 hours, I get "Your maximum number of concurrent searches has been reached. usage=3 quota=3 The search was not run. SearchId=1279715666.60". When I click "Jobs" in upper left corner, it shows about 25 jobs all with a "Done" status. I select-all, delete all the jobs, and refresh to be sure they're gone. Then I go back and try to run the same search again, and get the same error but showing a different SearchID (SearchId=1279715854.75).
When logged in as an admin, I don't have this problem. My server is a Dell 710 with two quad-core Intel Xeon 2.4GHz CPUs (hyperthreading enabled) and about 70GB of RAM.
Thanks, Swack
The maximum number of concurrent searches that can be run system-wide is determined by a setting in limits.conf:
[search]
max_searches_per_cpu = <int>
* The maximum number of concurrent searches per CPU. The system-wide number of searches
* is computed as max_searches_per_cpu x number_of_cpus + 2
* Defaults to 2
You can increase this value in order to raise your system-wide concurrent search quota.
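As a sketch, raising that multiplier would look like the following in $SPLUNK_HOME/etc/system/local/limits.conf (the value 4 is purely illustrative, not a recommendation, and the change typically requires a Splunk restart):

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf
# Illustrative only: raising the per-CPU multiplier from the default
# of 2 to 4 means an 8-CPU host would allow 4 x 8 + 2 = 34 concurrent
# searches system-wide.
[search]
max_searches_per_cpu = 4
```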
But since you are not hitting the limit as admin, you likely need to increase your regular users' concurrent search quota. In authorize.conf you will want to tweak
srchJobsQuota = <number>
* Maximum number of concurrently running historical searches a member of this role can have (excludes real-time searches, see rtSrchJobsQuota)
and possibly
rtSrchJobsQuota = <number>
* Maximum number of concurrently running real-time searches a member of this role can have
for the appropriate roles.
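For example, a sketch of raising the quotas for the built-in "user" role in $SPLUNK_HOME/etc/system/local/authorize.conf (the values are illustrative; srchJobsQuota defaults to 3, which matches the "usage=3 quota=3" in the error above):

```ini
# $SPLUNK_HOME/etc/system/local/authorize.conf
# Illustrative: raise the concurrent-search quotas for the "user" role.
# srchJobsQuota defaults to 3, which is why the error reports quota=3.
[role_user]
srchJobsQuota = 6
rtSrchJobsQuota = 6
```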
I have this same issue, but for some reason only certain users get the error. All of the users are configured the same, and the actions they are performing in the application are the same. I can't figure out why some of them get this error when others do not. I could increase the search quota, but that doesn't answer the question of why some users get the error when others don't.
Here is some troubleshooting documentation: http://www.splunk.com/wiki/Community:TroubleshootingSearchQuotas
Please note: these values have changed in newer versions. Read the latest limits.conf:
base_max_searches = <int>
* A constant to add to the maximum number of searches, computed as a multiplier
of the CPUs.
* Defaults to 6
max_searches_per_cpu = <int>
* The maximum number of concurrent historical searches per CPU. The system-wide
limit of historical searches is computed as:
max_hist_searches = max_searches_per_cpu x number_of_cpus + base_max_searches
* Note: the maximum number of real-time searches is computed as:
max_rt_searches = max_rt_search_multiplier x max_hist_searches
* Defaults to 1
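To make the newer arithmetic concrete, here is a small sketch of the formula above (the CPU count is an assumption, and max_rt_search_multiplier defaults to 1 in limits.conf):

```python
# Compute the system-wide concurrent-search limits using the newer
# limits.conf formula. All input values are illustrative defaults.
def search_limits(number_of_cpus, max_searches_per_cpu=1,
                  base_max_searches=6, max_rt_search_multiplier=1):
    """Return (max_hist_searches, max_rt_searches)."""
    max_hist = max_searches_per_cpu * number_of_cpus + base_max_searches
    max_rt = max_rt_search_multiplier * max_hist
    return max_hist, max_rt

# An 8-CPU host with the defaults: 1 x 8 + 6 = 14 historical searches,
# and 1 x 14 = 14 real-time searches.
print(search_limits(8))  # -> (14, 14)
```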