Hi All,
We have a fleet of AIX & Linux servers running the Universal Forwarder, and we have issues with the forwarder crashing due to low soft ulimit values on the servers (specifically the data segment size and resident memory size). All of the hard maximum limits are set to either unlimited or a very high value, so I'm looking for a way to update the soft limits within the Splunk config. Our Unix Admins are outsourced and it's very difficult to have the limits updated across a large fleet of servers.
Obviously, I could create a wrapper script to set the limits before running the bin/splunk command, but the problem is that the restarts are often initiated by the deployment server, and then the limits are reset.
Has anyone tried to do something similar? Is there any Splunk config that can be set to update the soft limits prior to the deployment client restarting the process?
Thanks,
Ash
The "ulimit" command is a shell builtin which influences only the environment of the current shell and of processes inheriting that environment. Changes to ulimit values only affect processes spawned by the shell after the change is made. Executing a ulimit command from within Splunk would be pointless, as it would only apply to child processes of the shell running the command. Consequently, Splunk would not be able to alter things with a deployable app bundle, unless it includes a script which fires off at installation time to alter:
a) /etc/security/limits.conf
to implement permanent, reboot-safe system changes
b) the Splunk startup script to include ulimit values.
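For concreteness, the two changes might look like the sketch below. The "splunk" account name and the init-script placement are assumptions to adapt for your environment; note that AIX keeps its per-user limits in /etc/security/limits (stanza format) rather than limits.conf.

```shell
# (a) /etc/security/limits.conf entries on Linux -- "splunk" is an
#     assumed account name for the user running the forwarder:
#
#   splunk  soft  data  unlimited
#   splunk  soft  rss   unlimited

# (b) ulimit lines added near the top of the Splunk init/startup
#     script, before splunkd is launched, so that restarts driven by
#     the init system pick them up as well:
ulimit -S -d unlimited   # soft data segment size
ulimit -S -m unlimited   # soft resident set size
```

Raising a soft limit with ulimit only works up to the hard limit, so this relies on the hard limits already being unlimited or very high, as described above.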
Either of these is going to require root privileges (although not necessarily by running as root - sudo may be appropriate). Both solutions will also require a new login session, or that Splunk is restarted after the change is made.
It's a good question, but it's unlikely that an option to set the system's soft ulimits exists in Splunk.
Yeah, I agree, but I was kinda hoping that someone may have already done something creative to get around this.