How can we change the ulimits for Splunk to the desired value?
I have edited the /etc/security/limits.conf file and rebooted the instance.
I added "* - nofile 64000" to the file.
But Splunk still shows only 4096. How can we change this value?
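(For reference, one quick way to see the limits the running splunkd process actually has, assuming splunkd is running as a daemon:)

    # open-file limit of the running splunkd process
    cat /proc/$(pgrep -o splunkd)/limits | grep -i 'open files'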
Hey, have a look at this. You can create a script:
https://answers.splunk.com/answers/223838/why-are-my-ulimits-settings-not-being-respected-on.html
Also
https://answers.splunk.com/answers/13313/how-to-tune-ulimit-on-my-server.html
https://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/
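Roughly what the first link suggests (the script layout below is an assumption - adjust it to whatever /etc/init.d/splunk looks like on your hosts): raise the limit inside the init script, in the same shell that launches splunkd.

    # /etc/init.d/splunk (excerpt - assumed layout)
    case "$1" in
      start)
        ulimit -n 64000            # raise the open-file limit before splunkd starts
        "/opt/splunk/bin/splunk" start
        ;;
      stop)
        "/opt/splunk/bin/splunk" stop
        ;;
    esac

This matters because ulimits are inherited from the parent shell, so a value raised in an interactive session never reaches a daemon started at boot.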
Is it necessary to change the ulimits of all the Splunk instances or just the indexers?
Is there any standard procedure for changing the ulimits of the servers?
I tried the "ulimit -n 65536" command on all the indexers, edited the /etc/init.d/splunk script as mentioned, and did a rolling restart on the servers. Still there is no change in the ulimits.
Is it the right process to be followed?
No, you will need to restart the OS, not just Splunk.
Take a look at maintenance mode
http://docs.splunk.com/Documentation/Splunk/7.0.1/Indexer/Usemaintenancemode
@nickhillscpl, how can we restart the OS without affecting the Splunk service?
Maintenance mode only restarts the Splunk service.
Maintenance mode is useful if you are restarting a cluster peer. If the machine does not restart within the cluster timeout, a fix-up operation will begin on all missing replicas. You want this to happen if one of your peers fails, but not if you're rebooting.
Enabling maintenance mode prevents the cluster from performing any fix-up operations while you restart your servers.
To perform a clean OS restart, enable maintenance mode on your cluster master, then run "splunk offline" on a peer and restart the OS. When that peer is back up and connected to the cluster, repeat the process for the other peers.
Then disable maintenance mode.
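A rough sketch of that sequence (paths assume a default /opt/splunk install, with the cluster master and peer roles as described above):

    # on the cluster master: stop fix-up activity during the reboots
    /opt/splunk/bin/splunk enable maintenance-mode

    # on each peer, one at a time:
    /opt/splunk/bin/splunk offline        # gracefully take the peer out of the cluster
    sudo reboot                           # full OS restart so the new ulimits apply
    # wait for the peer to rejoin the cluster before moving on to the next one

    # back on the cluster master, once every peer is done:
    /opt/splunk/bin/splunk disable maintenance-mode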
Try setting it in the Splunk init.d script:
https://answers.splunk.com/answers/223838/why-are-my-ulimits-settings-not-being-respected-on.html
Is it necessary to change the ulimits of all the Splunk instances or just the indexers?
I personally change the ulimits on all Splunk servers (for consistency).
I also use the init.d method to ensure the ulimits are 'sticky'.
** Note: you will need to fully restart the server (reboot) for the changes to take effect, since they live in the init.d script; running /opt/splunk/bin/splunk restart won't put the required change in place.
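One caveat worth adding (this is an assumption about your environment, the thread doesn't say): if Splunk was enabled for boot-start under systemd rather than init.d, the limits come from the unit file instead, e.g.

    # /etc/systemd/system/Splunkd.service (excerpt - the unit name may differ)
    [Service]
    LimitNOFILE=64000
    LimitNPROC=16000

followed by "systemctl daemon-reload" and a restart of the unit.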
How can we reboot all the servers of a cluster safely?
Rebooting is fine if it's a single instance.
Is there any standard procedure for changing the ulimits of the servers?
I tried the "ulimit -n 65536" command on all the indexers, edited the /etc/init.d/splunk script as mentioned, and did a rolling restart on the servers. Still there is no change in the ulimits.
Is it the right process to be followed?
Did you set both hard and soft limits?
Take a look at this -
https://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/
You may be seeing the soft limit, even though you have raised the system-wide hard limit.
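For example (a sketch, assuming Splunk runs as a dedicated 'splunk' user), /etc/security/limits.conf can carry explicit soft and hard entries, and the two can be checked separately from a shell running as that user:

    # /etc/security/limits.conf
    splunk  soft  nofile  64000
    splunk  hard  nofile  64000

    # check from a shell:
    ulimit -Sn    # soft limit - what processes actually get
    ulimit -Hn    # hard limit - the ceiling the soft limit can be raised to

Keep in mind that limits.conf is applied by PAM at login, which is why a daemon started from init at boot may not pick these values up - hence the init.d approach mentioned above.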
Is it necessary to change the ulimits of all the Splunk instances or just the indexers?
Is there any standard procedure for changing the ulimits of the servers?
I tried the "ulimit -n 65536" command on all the indexers, edited the /etc/init.d/splunk script as mentioned, and did a rolling restart on the servers. Still there is no change in the ulimits.
Is it the right process to be followed?
See the official Splunk doc:
http://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/ulimitErrors
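As that doc describes, splunkd records its effective ulimits (and warns about low ones) in splunkd.log when it starts, so you can confirm whether a change actually took effect (path assumes a default /opt/splunk install):

    grep -i ulimit /opt/splunk/var/log/splunk/splunkd.log | tail -20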