Splunk service on indexer server gets killed by OOM killer when it should not.

sylim_splunk
Splunk Employee

We operate a Splunk platform with a 10+ member SHC and an indexer cluster of 100+ indexers, on version 7.2.9. From time to time the Splunk service gets killed by the OOM killer on multiple indexers.
Using the search below we can chart memory usage by Splunk, but at the times when splunkd is killed the memory usage is far less than 250 GB (the maximum memory on each indexer), and even less than 100 GB according to the graph.

  • Search:
    index=_introspection host= sourcetype=splunk_resource_usage component=PerProcess
    | rename "data.args" as args, "data.process" as process, "data.process_type" as processt
    | eval process_class=case(
        ((process == "splunkd") AND like(processt,"search")), "Splunk Search",
        ((process == "splunkd") AND ((like(args,"-p %start%") AND (true() XOR like(args,"%process-runner%"))) OR (args == "service"))), "splunkd server",
        ((process == "splunkd") AND isnotnull(sid)), "search",
        ((process == "splunkd") AND ((like(args,"fsck%") OR like(args,"recover-metadata%")) OR like(args,"cluster_thing"))), "index service",
        ((process == "splunkd") AND (args == "instrument-resource-usage")), "scripted input",
        ((like(process,"python%") AND like(args,"%/appserver/mrsparkle/root.py%")) OR like(process,"splunkweb")), "Splunk Web",
        isnotnull(process_class), process_class)
    | bin _time span=10s
    | stats latest(data.mem_used) AS resource_usage_dedup latest(process_class) AS process_class by data.pid, _time
    | stats sum(resource_usage_dedup) AS resource_usage by _time, process_class
    | timechart minspan=10s bins=200 median(resource_usage) AS "Resource Usage" by process_class

  • Graph:
    (screenshot of the timechart above: memory usage by process class, with the peak well under 100 GB)

We found the search that caused the peak in the graph, but its memory usage still appears to be far less than the 250 GB of memory available on the server.
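
For anyone chasing the same symptom, here is a minimal sketch of the kind of search we used to find the culprit search, assuming the data.search_props.* fields are populated in your PerProcess introspection data (the host filter is a placeholder, and data.mem_used is reported in MB):

    index=_introspection host=<indexer> sourcetype=splunk_resource_usage component=PerProcess data.process_type=search
    | stats max(data.mem_used) AS peak_mem_mb latest(data.search_props.app) AS app latest(data.search_props.user) AS user by data.search_props.sid
    | sort - peak_mem_mb
    | head 20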

1 Solution

sylim_splunk
Splunk Employee

On further investigation, the kernel message shows this:

Dec 01 10:15:29 idx15 kernel: [72693.445279] Task in /system.slice/splunk.service killed as a result of limit of /system.slice/splunk.service
Dec 01 10:15:29 idx15 kernel: [72693.445891] memory: usage 104857600kB, limit 104857600kB, failcnt 1977805211

According to the log, splunkd hit the maximum usage defined by the splunk.service systemd unit, which is limited by "MemoryLimit=100G" - this value appears to be static regardless of the memory installed on the server.
We decided to increase the value to 90% of the memory installed on the server. If you see similar symptoms, you may need to check the value of this parameter and adjust it accordingly.
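
For reference, a rough sketch of how the change can be applied with a systemd drop-in instead of editing the unit file directly (the unit name splunk.service comes from the kernel log above; 225G is just an example of roughly 90% of 250 GB, so pick the value that matches your hardware):

    # Creates /etc/systemd/system/splunk.service.d/override.conf and opens it in an editor
    systemctl edit splunk.service

    # Add the following to the override file:
    #   [Service]
    #   MemoryLimit=225G

    # Reload systemd and restart Splunk so the new limit takes effect
    systemctl daemon-reload
    systemctl restart splunk.service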

sylim_splunk
Splunk Employee

This doc link mentions it; version 8.0+ adjusts the limit according to the memory available on the server.

https://docs.splunk.com/Documentation/Splunk/8.0.0/Admin/RunSplunkassystemdservice#Configure_systemd...
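
A quick way to confirm which limit systemd is actually enforcing (the unit name below matches the kernel log in this thread; on newer installs the generated unit may be named Splunkd.service):

    # Show the memory limit applied to the Splunk unit
    systemctl show -p MemoryLimit splunk.service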
