Hello,
This morning I upgraded Splunk from version 4.3.4 to version 5.0. Since then, none of my forwarders are communicating and I am receiving the following error:
skipped indexing of internal audit event will keep dropping events until indexer congestion is remedied. Check disk space and other issues that may cause indexer to block
Has anyone else had this issue? What is the best way to back out the upgrade?
Thanks.
Kevin
Nothing to do with 5.0. Everything to do with an accidental copy of deploymentclient.conf: a script that was making some conf changes for me also applied them to the indexer, so the indexer was trying to send data to itself. It just didn't manifest yesterday and popped up when the services restarted during the upgrade.
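For anyone who hits the same thing, this is roughly what ended up on the indexer (hostnames and ports below are made-up placeholders, not my actual values):

# deploymentclient.conf -- accidentally copied onto the indexer
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy-server.example.com:8089

# ...which caused the deployment server to push down an outputs.conf like:
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
# this is the indexer itself, so it was forwarding its own data back to itself
server = indexer01.example.com:9997

Deleting the stray deploymentclient.conf from the indexer and restarting splunkd cleared it.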
Crisis Averted!!!! My own fault...
Hey,
I'm seeing a similar issue with one of my indexers after upgrading from 4.3.1 to 5.0. Check splunkd.log for:
WARN TcpInputProc - Stopping all listening ports. Queues blocked for more than 300 seconds
In my case, the parseQueue is getting gummed up on an indexer which never had the issue before. Restarting the indexer resolves the issue for about 12 hours before it gets gummed up again.
You can determine this by running the following query and looking for spikes in the parsequeue column (or review Splunk on Splunk to see it in pretty graph format):
index="_internal" source="*metrics.log" group="queue" earliest=-4h | timechart max(current_size) span=30m by name
Opening a ticket with support to pin down the problem. I'll update if I learn anything actionable.
@tgiles, do you have any UDP ports listening for data on the Splunk indexers? There is a memory leak documented in this answer: http://splunk-base.splunk.com/answers/64311/splunk-50-consuming-all-memory-is-it-possible-to-downgra...
Good luck. Mine appears fine since I removed the deploymentclient.conf that had pulled down an outputs.conf telling the indexer to forward to itself.
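If anyone wants to rule out the same misconfiguration on their own indexers, btool will show whether a stray deploymentclient.conf or forwarding stanza is in play, and which file each setting came from (paths assume a default $SPLUNK_HOME):

$SPLUNK_HOME/bin/splunk btool deploymentclient list --debug
$SPLUNK_HOME/bin/splunk btool outputs list --debug

If the outputs listing on an indexer shows a [tcpout:...] server entry pointing back at that same host, you've found the loop.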