We have Splunk 6.1.5 search head systems with a large amount of memory (128 GB) and 20 cores. The expected daily log volume is 500 GB, there are 5 concurrent users in total, and we expect approximately 30 concurrent searches on this system.
What max_mem_usage_mb value can I safely set? Is there a formula for arriving at a safe number? The last thing I want is to hit OOM errors because I set this attribute too high.
How much to assign to max_mem_usage_mb depends more on the number of results your searches return than on the size of your system. If you find that your search results are being truncated, increase the value of max_mem_usage_mb. Doubling the value to 400 MB shouldn't harm your system.
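For reference, a minimal sketch of how this might look in limits.conf; the 400 MB value mirrors the suggestion above and is an illustrative example, not an official recommendation:

```ini
# limits.conf (example values only)
[default]
# Upper bound, in MB, on the memory a single search process may use
# for in-memory result structures before spilling to disk.
# Doubling from the shipped default of 200 MB, per the answer above.
max_mem_usage_mb = 400
```

A restart of splunkd is typically needed for limits.conf changes to take effect.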
Do we need to set this limit on the search head or on the indexers?
Thanks