Sharing a lesson learned...
Splunk 6.1.3 (but I think would apply to most) on RHEL 6.
I came in one morning unable to log into Splunk, with the web interface producing an error indicating that the drive was full. But when I checked, there was plenty of space, over 30 GB free. I have had Splunk stop indexing once before when free space dropped below the 2 GB minimum, as designed, but I had never seen this -- that condition did not prevent the web interface from working.
Hmm, I forgot to "answer" this so it would be closed. Thanks, Rich.
Cheers!
Long story short, the previous day I had created an alert set to fire in "real time". Be very careful with these! Overnight the alert fired, but I had set the criteria up wrong, so it fired off over 10,000 times. What was exhausted was not disk space but inodes -- inode usage can be checked with 'df -i'. The place that fills up is ../splunk/var/run/splunk/dispatch/ -- I removed all the alert artifacts in this directory and went happily about my business -- oh, and I removed the offending alert too.
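For anyone hitting the same symptom, here is a minimal sketch of how to confirm the condition before deleting anything (the dispatch path shown assumes a default $SPLUNK_HOME of /opt/splunk -- adjust for your install):

```shell
#!/bin/sh
# A "disk full" error with plenty of free bytes usually means inode
# exhaustion: look at the IUse% column, not Use%.
df -i

# Each fired alert leaves a search-artifact directory under dispatch;
# tens of thousands of them can consume every free inode.
# Assumed path: default $SPLUNK_HOME of /opt/splunk.
DISPATCH="${SPLUNK_HOME:-/opt/splunk}/var/run/splunk/dispatch"
if [ -d "$DISPATCH" ]; then
    # Count the artifacts first so you know the scale of the problem
    ls "$DISPATCH" | wc -l
fi
```

If IUse% is at or near 100% and the dispatch count is huge, the stale artifacts are the likely culprit. I believe newer Splunk versions also ship a `splunk clean-dispatch` CLI command for moving old artifacts out safely, which may be preferable to an `rm` by hand.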
Real-time/all-time alert searches are like a loaded gun -- handle with care.
Hi @Michael
I just moved your content around to the appropriate spaces and also accepted the answer for you so this post will get more hits. Thanks for sharing this 🙂 very helpful.
Patrick
Thanks for sharing, Michael. For the benefit of users searching for similar problems in the future, please answer this question and accept the answer. That will mark this as solved.