Splunk Search

What should I do, if anything, when approaching the max number of searches?

hulahoop
Splunk Employee

Sometimes I see this message in Splunk Web:

You are approaching the maximum number of searches that can be run concurrently. current=15, maximum=18

What should I do about it?

1 Solution

hulahoop
Splunk Employee

If your hardware can support allocating more CPU resources to search execution, you can increase the maximum number of concurrent searches.

The maximum number of concurrent searches is based on the number of CPUs and is controlled by two settings in limits.conf. From limits.conf.spec:

> [search]
> max_searches_per_cpu = <int>
> * the maximum number of concurrent searches per CPU. The system-wide number of searches
> * is computed as max_searches_per_cpu x number_of_cpus + 2
> * Defaults to 2

> [scheduler]
> max_searches_perc = <integer>
> * the maximum number of searches the scheduler can run, as a percentage
> * of the maximum number of concurrent searches, see [search] max_searches_per_cpu
> * for how to set the system wide maximum number of searches
> * Defaults to 25

For an 8 CPU box, the default maximum number of concurrent searches is 18 (2 searches per CPU x 8 CPUs + 2), which matches the message above. It is likely, however, that such a server can support more searches per CPU, so it is reasonable to increase max_searches_per_cpu to 4.
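For example, a minimal override in $SPLUNK_HOME/etc/system/local/limits.conf might look like this (the value 4 is just the suggestion above, and the comment assumes the 8 CPU box from the example; adjust to your hardware):

    [search]
    # Allow up to 4 concurrent searches per CPU instead of the default 2.
    # With 8 CPUs, the system-wide limit becomes 4 x 8 + 2 = 34.
    max_searches_per_cpu = 4

A Splunk restart is typically needed for limits.conf changes to take effect.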

Additionally, if you are running many scheduled searches for alerts or dashboards, you may find that raising max_searches_perc above the default 25% gives scheduled searches a more equitable share of the concurrency limit. A sketch of that override follows.
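This override also lives in local/limits.conf; 50 here is only an illustrative value:

    [scheduler]
    # Let scheduled searches use up to 50% of the concurrent search limit,
    # instead of the default 25%.
    max_searches_perc = 50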

These settings will allow you to maximize your hardware. If you find this is not adequate, consider adding a server. In general, the recommended approach to scaling is to add more CPUs via additional Splunk servers so the workload of search execution can be shared.


kotique
New Member

How do I hide this popup? It's freaking out our users.



rdjoraev_splunk
Splunk Employee

max_searches_per_cpu=4 could be a typo. Splunk supports a maximum of 2 searches per CPU, so it should be max_searches_per_cpu=2 or less.


Lowell
Super Champion

Note that the default was bumped to max_searches_per_cpu=4 in 4.1 (or possibly earlier), so the "Defaults to 2" above is no longer accurate. I had bumped this to "3" in my local/limits.conf years ago, so after an upgrade I ended up with a lower value than the new default. Just a heads up!
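If you want to confirm which value actually wins after an upgrade, btool shows the effective setting and which file it comes from. For example (the grep filter is just for convenience):

    $SPLUNK_HOME/bin/splunk btool limits list search --debug | grep max_searches_per_cpu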

the_wolverine
Champion

You shouldn't need to do anything. If you do hit the max concurrent search limit, once a search completes, it frees up a spot for the next search in line. Eventually all searches will complete.

It is not necessarily bad to keep seeing this warning, as long as the maximum number of concurrent searches is not actually reached. If you consistently hit the maximum, you may want to look into staggering your searches.
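A simple way to stagger scheduled searches is to spread their cron schedules so they do not all fire at the same minute. A sketch in savedsearches.conf (the stanza names are made-up examples):

    [hourly report A]
    cron_schedule = 5 * * * *

    [hourly report B]
    cron_schedule = 35 * * * *

That way the two hourly searches are less likely to compete for the same concurrency slots.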
