The original post by Jaci was made on my behalf. I have additional information that will hopefully give further clues for identifying the issue.
The large volume of data is accumulating in "C:\Program Files\Splunk\var\lib\splunk\fishbucket\db", not in the …\fishbucket\splunk_private_db folder.
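For anyone else who hits this: the …\fishbucket\db folder backs Splunk's internal _thefishbucket index, which can be cleaned from the CLI (run from the Splunk bin directory with splunkd stopped). This is the clean command I refer to below:

    splunk stop
    splunk clean eventdata -index _thefishbucket
    splunk start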
Three days ago, after identifying the fishbucket accumulation problem, I used the clean command and deleted several GB of data on our domain controllers, but the data keeps accumulating. (I use our domain controllers as the example because they generate the most events for Splunk to consume, but most or all of our Splunk clients have the problem.) On one of the domain controllers there is already 1.75 GB in the …\fishbucket\db folder after being cleaned three days ago. What I believe is occurring is that all of the events normally shipped off to the indexer are still being shipped, but are additionally ending up in the …\fishbucket\db folder. The question is why. I disabled the local SplunkLightForwarder app, then deployed and enabled a new app called wsu-lightforwarder. As discussed in the first post, wsu-lightforwarder is an exact copy of SplunkLightForwarder except for default-mode.conf. There must be a setting in default-mode.conf that is causing the problem, but I don't know which one (see the sketch below).
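For context, default-mode.conf controls which pipelines and processors a Splunk instance runs, with stanzas roughly like the following. This is only a sketch of the stanza format; the processor list is my recollection of the stock SplunkLightForwarder app, not a copy of my file:

    # default-mode.conf (format sketch only -- processor names are my
    # recollection of the stock light forwarder app, not my exact file)
    [pipeline:indexerPipe]
    # The stock app disables the heavy indexing/output processors here.
    # My guess is that whatever I re-enabled in my copy to allow syslog
    # output is also what started writing to the fishbucket.
    disabled_processors = indexandforward, diskusage, signing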
As a test, on one of the domain controllers I disabled wsu-lightforwarder, re-enabled SplunkLightForwarder, and used the clean command. This essentially reverted the Splunk client to its original state before the problem started, and I found the …\fishbucket\db folder did not grow. Then I went back to using my wsu-lightforwarder app and the problem started back up again. The problem is definitely with wsu-lightforwarder, and again the only difference from SplunkLightForwarder is default-mode.conf (see the first post). What could be causing this? Can I just disable the fishbucket in indexes.conf? I need to keep the small footprint of a light forwarder but must be able to output syslog, so I need to find a way to make my wsu-lightforwarder app work (my syslog output config is sketched below).
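For reference, the syslog requirement is met through outputs.conf; mine is set up along these lines (the group name, host, and port here are placeholders, not my real values):

    # outputs.conf (sketch -- target values are placeholders)
    [syslog]
    defaultGroup = wsu_syslog_group

    [syslog:wsu_syslog_group]
    server = syslog.example.com:514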
Thank you gkanapathy and mick for your posts. Any further help would be greatly appreciated.