
Splunk process occupying too much memory on Solaris servers

sarnagar
Contributor

Hi All,

I have the Splunk forwarder installed on Solaris servers, and it has been occupying a lot of memory of late. What could be the reason for this, and how can I fix it?
It occupies more than 500 MB.


alacercogitatus
SplunkTrust

Check the number of open files. If Splunk is monitoring a large number of individual files, memory usage can expand drastically.

First, find the PID of the main Splunk process. Normally this is done with ps -ef | grep splunk. Then take that PID and put it into this command: lsof -p <pid> | wc -l. This counts the number of open files for the forwarder. The number might be very high. If it is not very high (say, under 10K), you may have other issues in play.
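For example, a minimal sketch of the two steps together (the grep pattern is illustrative and <pid> is a placeholder; substitute the PID your ps output shows):

# find the PID of the main splunkd process
ps -ef | grep splunkd

# count open files for that PID (replace <pid> with the number from above)
lsof -p <pid> | wc -l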

What version of Splunk forwarder? What version of Solaris?


sarnagar
Contributor

Hi,
Thank you for the response.
Actually, I'm not able to run the lsof command on the Solaris server. I get the below error:

lsof -p 5523 |wc -l

lsof: FATAL: lsof was compiled for a 32 bit kernel,
but this machine has booted a 64 bit kernel.
0
My Splunk UF version:

/opt/splunkforwarder/bin/splunk version

Splunk Universal Forwarder 6.1.4 (build 233537)

Solaris version:

uname -a

SunOS ss73fmoapq230 5.10 Generic_150401-23 i86pc i386 i86pc
