Splunk Search

What is the maximum range of values that can be assigned to max_mem_usage_mb?

splunkIT
Splunk Employee

We currently have the limits.conf max_mem_usage_mb parameter set to 2000, which is 10x the default value (200). We have noticed instances of Splunk helper processes being killed due to OOM, and we suspect it might have something to do with the max_mem_usage_mb value being too high:

01-27-2014 15:54:18.645 -0500 ERROR STMgr - dir='/opt/splunk/var/lib/splunk/_internaldb/db/hot_v1_43' out of memory failure rc=1 warm_rc[-2,12] from st_txn_start
01-27-2014 15:54:18.645 -0500 ERROR StreamGroup - unexpected rc=1 from IndexableValue->index
01-27-2014 15:54:18.694 -0500 FATAL ProcessRunner - Unexpected EOF from process runner child!

This limit was set to a high value to accommodate end users running extremely large ad hoc queries. We made this decision because the system has a very large amount of memory (160 GB) and a large number of cores (32) available.
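For reference, the current configuration looks roughly like this (a sketch only; the stanza placement and file path are my assumptions):

# $SPLUNK_HOME/etc/system/local/limits.conf (assumed location)
[default]
# 10x the shipped default of 200, intended to accommodate very large ad hoc searches
max_mem_usage_mb = 2000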

As a rule of thumb, what is the maximum range of values that can be assigned to max_mem_usage_mb? Is it related to the total available memory (160 GB in this case)?

http://docs.splunk.com/Documentation/Splunk/5.0.7/Admin/Limitsconf


mzorzi
Splunk Employee

Release 6.2.1 contains improvements to the way this parameter is applied, so you might consider an upgrade:

max_mem_usage_mb = <non-negative integer>
* Provides a limitation to the amount of RAM a batch of events or results will use
  in the memory of search processes.
* Operates on an estimation of memory use which is not exact.
* The limitation is applied in an unusual way; if the number of results or events
  exceeds maxresults, AND the estimated memory exceeds this limit, the data is
  spilled to disk.
* This means, as a general rule, lower limits will cause a search to use more disk
  I/O and less RAM, and be somewhat slower, but should cause the same results to
  typically come out of the search in the end.
* This limit is currently applied to a number of search processors, but not all.
  However, more will likely be added as it proves necessary.
* The number is thus effectively a ceiling on batch size for many components of
  search for all searches run on this system.
* 0 will specify the size to be unbounded. In this case searches may be allowed to
  grow to arbitrary sizes.
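To illustrate the spill-to-disk behavior described above, here is a hedged limits.conf sketch (the stanza placement and the specific values are my assumptions, not guidance from the spec):

# $SPLUNK_HOME/etc/system/local/limits.conf (illustrative sketch only)
[default]
# With a lower cap, batches that exceed maxresults AND the estimated memory limit
# are spilled to disk: more disk I/O, less RAM, somewhat slower, same results.
max_mem_usage_mb = 500

# 0 removes the cap entirely, allowing batches to grow to arbitrary size,
# which increases the risk of OOM on very large ad hoc searches.
# max_mem_usage_mb = 0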
