All,
A member of our management team is concerned about the number of processes and threads on a Splunk Forwarder. Curious what's normal? What might create more threads? Fewer? Most of my servers have anywhere from 40-49.
# ps auwxH|grep splunk|wc -l
43
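(Side note: I realize grep counts itself in that number. A cleaner count might look something like this — a rough sketch, where pgrep -o -x splunkd assumes the oldest splunkd PID is the main daemon:)

# Thread lines only; the [s]plunk pattern keeps grep from matching itself
ps auwxH | grep '[s]plunk' | wc -l

# Or ask ps directly for the thread count (NLWP) of the main splunkd process
ps -o nlwp= -p "$(pgrep -o -x splunkd)"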
ps auwwxH | grep -i splunk
Tue Mar 3 11:24:51 PST 2020
root 4980 0.0 0.2 362396 166344 ? Sl Feb27 0:24 splunkd -p 8089 restart
root 4980 0.0 0.2 362396 166344 ? Sl Feb27 0:00 splunkd -p 8089 restart
[...........]
root 4980 0.0 0.2 362396 166344 ? Sl Feb27 0:12 splunkd -p 8089 restart
root 4980 0.0 0.2 362396 166344 ? Sl Feb27 4:59 splunkd -p 8089 restart
root 4980 0.0 0.2 362396 166344 ? Sl Feb27 0:00 splunkd -p 8089 restart
root 4986 0.0 0.0 86200 1056 ? Ss Feb27 0:00 [splunkd pid=4980] splunkd -p 8089 restart [process-runner]
root 7619 0.0 0.0 103328 916 pts/32 S+ 11:24 0:00 grep -i splunk
My knee-jerk guess is that maybe we have a thread per monitored file, another reaching back to the deployment server, and maybe another per scripted input from Splunk_TA_nix?
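For reference, here's roughly how I was planning to check that theory (paths assume a default /opt/splunk install, and splunk list monitor prompts for credentials):

# What files/directories is splunkd tailing?
/opt/splunk/bin/splunk list monitor

# Which scripted inputs does Splunk_TA_nix define?
grep -r 'script://' /opt/splunk/etc/apps/Splunk_TA_nix/

# Per-thread view of the main splunkd process
ps H -o pid,lwp,pcpu,comm -p "$(pgrep -o -x splunkd)"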
Your output indicates to me that you have several defunct Splunk daemons, likely left over from the initial installation/startup.
It's not good practice to run Splunk as root (it's a major security risk). Obviously, test in non-prod first if possible, but my suggestion would be to bring Splunk down gracefully (/opt/splunk/bin/splunk stop or systemctl stop splunk), then search for any errant processes (ps -ef | grep -i splunk) and kill them (kill -9 <PID>).
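Rough sequence, assuming a default /opt/splunk install (your systemd unit name may differ):

# Bring Splunk down gracefully
/opt/splunk/bin/splunk stop

# Check for leftovers; the [s]plunk pattern keeps grep out of its own results
ps -ef | grep -i '[s]plunk'

# Only if something is still hanging around after the graceful stop
kill -9 <PID>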
Then modify /opt/splunk/etc/splunk-launch.conf and change the process owner to a non-root account (splunk, Splunk, etc.).
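In splunk-launch.conf that's the SPLUNK_OS_USER setting. The new account also needs to own the install directory, or splunkd won't be able to start. A minimal sketch, assuming the account is named splunk and may not exist yet:

# /opt/splunk/etc/splunk-launch.conf
SPLUNK_OS_USER=splunk

# Create the account if needed, then hand the install over to it
id splunk >/dev/null 2>&1 || useradd -r -m splunk
chown -R splunk:splunk /opt/splunk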
Then restart the daemon, and all should be well.
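With SPLUNK_OS_USER set, starting as root should drop privileges to the splunk user. A quick way to verify (the UID column should now show splunk, not root):

/opt/splunk/bin/splunk start

# Owner should read splunk, and the thread count should be back in the normal range
ps -ef | grep '[s]plunkd'
ps auwxH | grep '[s]plunk' | wc -l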