This has happened to me twice now. When I stop and start Splunk, splunkd and splunkweb both start, but the splunkd process dies immediately. I keep trying to stop and start it, but the same thing happens, and I don't see any errors in any of the log files under /var/log/splunk/.
If I try again after 20-30 minutes, everything works.
I think the reason was that I was using a transform. I needed to reset the source/sourcetype based on data in the messages, and I used a regex to do it. Splunk started crashing when the message load increased; my guess is the regex slowed down indexing and a queue filled up. I removed the transform and it hasn't happened again. The strange thing is that there is still nothing in the logs.
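For reference, the kind of index-time transform I mean looks roughly like this. The stanza names, sourcetype, and regex below are made up for illustration, not my actual config; the point is that the REGEX is evaluated against every incoming event at index time, which is why a heavy pattern under load can back things up.

```ini
# props.conf -- attach a transform to a (hypothetical) sourcetype
[my_syslog]
TRANSFORMS-set_st = rewrite_sourcetype

# transforms.conf -- rewrite the sourcetype when the event matches
[rewrite_sourcetype]
REGEX = ERROR\s+app=\w+
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::app_errors
```

(Rewriting the source instead works the same way, with DEST_KEY = MetaData:Source and FORMAT = source::yourvalue.)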
The above is just a guess, though.
After you do a splunk stop, run a ps -ef and grep for splunk. If you see a python process still running, you'll need to kill it, since that process sometimes doesn't die and will prevent Splunk from coming back up again.
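A minimal sketch of that check (the PID in the comment is hypothetical):

```shell
# After "splunk stop", look for leftover Splunk-related processes.
# The [s] bracket trick keeps the grep command itself out of the results.
ps -ef | grep '[s]plunk' || true

# If a stray python process shows up, e.g. (hypothetical PID):
#   root  12345   1  0 10:02 ?  00:00:01 python .../root.py restart
# kill it before restarting (escalate to kill -9 only if a plain kill fails):
# kill 12345
```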
If this isn't the issue, seeing the actual log entries from the time it dies would help. 🙂
It happened again. This time it died on its own while running. I grepped for python processes and found the root.py script running with the parameter "restart". I killed the splunk process, and that process got killed too. When I tried to start again, the same thing happened as I mentioned in my last post: the splunkd process starts and then dies, and I don't see anything in the logs.