Deployment Architecture

Splunk process killed by OS (out of memory) on search head

Yod_ssoni
Explorer

Hi,
This morning, 2 of the 3 search heads in our cluster went down. When I checked, the splunkd processes had been killed by the OS with an 'out of memory' message in /var/log/messages, but the system had plenty of free memory (around 38%) at the time the processes were killed. I did not find any error messages in splunkd.log. Can anyone please let me know how to find the root cause of this issue and fix it? This has already happened 3-4 times.
Thanks in advance.

thanks,
Shashank Soni.

0 Karma

woodcock
Esteemed Legend

THP being enabled is the #1 cause of poor Splunk RAM management. Run a health check from your Monitoring Console (MC) and see whether everything is set up correctly. The health check verifies ulimits, too.
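If you want to confirm this from the command line rather than the MC, a quick sketch (sysfs paths vary by distro; RHEL 6, for example, uses /sys/kernel/mm/redhat_transparent_hugepage instead):

```shell
# Check whether Transparent Huge Pages (THP) are enabled.
# Splunk recommends both files report [never]; [always] means THP is active.
for f in /sys/kernel/mm/transparent_hugepage/enabled \
         /sys/kernel/mm/transparent_hugepage/defrag; do
    [ -r "$f" ] && echo "$f: $(cat "$f")"
done

# Confirm which processes the OOM killer actually targeted and why.
# (dmesg may be restricted to root on some systems.)
dmesg 2>/dev/null | grep -i 'killed process' || true
grep -i 'out of memory' /var/log/messages 2>/dev/null || true
```

The OOM-killer entries in dmesg / /var/log/messages include the victim's RSS and the kernel's badness score, which is usually the fastest way to see what actually exhausted memory.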

lacastillo
Path Finder

Just wanted to add this link describing THP's impact on memory to woodcock's answer.

http://docs.splunk.com/Documentation/Splunk/7.1.1/ReleaseNotes/SplunkandTHP

Hope this helps.

0 Karma

klaxdal
Contributor

Have you checked your ulimits ?
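One way to check them for the running process rather than your login shell (the `pgrep` pattern below is illustrative; adjust it if your splunkd process name differs):

```shell
# Show the limits the running splunkd process actually inherited.
SPLUNK_PID=$(pgrep -o splunkd 2>/dev/null)
if [ -n "$SPLUNK_PID" ] && [ -r "/proc/$SPLUNK_PID/limits" ]; then
    cat "/proc/$SPLUNK_PID/limits"
else
    # Fall back to the current shell's limits.
    ulimit -n   # open files
    ulimit -u   # max user processes
    ulimit -d   # data segment size (KB)
fi
```

This matters because limits set in /etc/security/limits.conf only apply to new login sessions; a splunkd started by an init script can run with different values than the ones you see in your shell.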

jkat54
SplunkTrust
SplunkTrust

From what I understand, hitting ulimits can trigger the OOM killer too. Upvoting.

0 Karma
