
Is the search concurrency and limits.conf too high?

skirven
Communicator

Hi!
I'm wrestling with performance on our production Splunk installation and have been reading up on search concurrency and limits.conf. I'm trying to reconcile what I've read with what I'm seeing to make sure I understand it correctly.

In my current limits.conf (settings I contend are too high):
[search]

base_max_searches = 100
max_searches_per_cpu = 10
dispatch_dir_warning_size = 10000
max_rawsize_perchunk = 0
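
If I'm doing the math right, those settings allow an enormous amount of concurrency. A quick Python sketch, assuming the usual formula of max_searches_per_cpu x number_of_cpus + base_max_searches for the historical search limit:

# Rough check of what the current limits.conf allows per search head.
# The formula below is my understanding of how Splunk combines these settings:
#   max concurrent historical searches = max_searches_per_cpu * cpus + base_max_searches
base_max_searches = 100
max_searches_per_cpu = 10
cpus_per_sh = 16
search_heads = 15

per_sh_limit = max_searches_per_cpu * cpus_per_sh + base_max_searches
print(f"Per search head: {per_sh_limit} concurrent searches")        # 260
print(f"Across all 15 SHs: {per_sh_limit * search_heads} searches")  # 3900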

We have 15 SHs with 16 CPUs each. What I'm trying to wrap my head around is that in the DMC, in the SH section, under the "Search Concurrency" drop-down, I'm seeing one or two servers with a high number and most with a low number or 0. My thought is that with the limits set this high, Splunk ends up piling search processes onto one or two SHs, bogging them down and never really leveraging the power of the 15-SH cluster.
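
One thing I've been considering is querying each SH's REST API directly instead of clicking through the DMC, to see how evenly searches are actually spread. A rough Python sketch, assuming the /services/server/status/limits/search-concurrency endpoint (which I believe is what the DMC panels read from) and the field names shown; hostnames and credentials are placeholders:

import requests

# Rough check of actual vs. maximum search concurrency on each search head.
# Field names (active_hist_searches / max_hist_searches) are my best guess
# at what the endpoint returns, so verify against your own output.
search_heads = ["sh01.example.com", "sh02.example.com"]  # placeholder hostnames

for host in search_heads:
    resp = requests.get(
        f"https://{host}:8089/services/server/status/limits/search-concurrency",
        params={"output_mode": "json"},
        auth=("admin", "changeme"),  # replace with real credentials
        verify=False,                # assumes self-signed management certs
    )
    info = resp.json()["entry"][0]["content"]
    print(host, info.get("active_hist_searches"), "of", info.get("max_hist_searches"))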

We experience system crashes, and cases where a search head becomes unresponsive at the API level.

Would it be best to set limits.conf to something like:

base_max_searches = 6
max_searches_per_cpu = 1
dispatch_dir_warning_size = 10000
max_rawsize_perchunk = 0
max_searches_perc = 50

On a 16-core server, that works out to:

Max total searches of 22 (max_searches_per_cpu x 16 CPUs + base_max_searches = 16 + 6)
Max scheduled searches of 11 (max_searches_perc of 50% applied to the 22 total)
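
The same back-of-the-envelope math for the proposed settings, this time including the scheduled-search split from max_searches_perc (again, this is just my reading of how these values combine):

# Proposed limits.conf on a 16-core search head.
base_max_searches = 6
max_searches_per_cpu = 1
max_searches_perc = 50
cpus_per_sh = 16

total_limit = max_searches_per_cpu * cpus_per_sh + base_max_searches
scheduled_limit = total_limit * max_searches_perc // 100
print(f"Max total searches per SH: {total_limit}")          # 22
print(f"Max scheduled searches per SH: {scheduled_limit}")  # 11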

I'm still learning and reading, so I'd like some input to validate my findings.
Thank you,
Stephen Kirven
