
Index Congestion since upgrade to version 5.0

kholleran
Communicator

Hello,

This morning I upgraded my Splunk instance from version 4.3.4 to version 5.0. Since then, none of my forwarders have been communicating, and I am receiving the following error:

skipped indexing of internal audit event will keep dropping events until indexer congestion is remedied. Check disk space and other issues that may cause indexer to block

Has anyone else had this issue? What is the best way to back out the upgrade?

Thanks.

Kevin

1 Solution

kholleran
Communicator

Nothing to do with 5.0. It turned out to be an accidental copy of deploymentclient.conf onto the indexer by a script that was making some conf changes for me. The indexer was trying to send data to itself; it just didn't manifest yesterday and only popped up when the services restarted during the upgrade.
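
For anyone who hits the same thing, the combination looked roughly like this (the paths, hostnames, and ports below are placeholders, not my exact config). A stray deploymentclient.conf on the indexer:

# $SPLUNK_HOME/etc/system/local/deploymentclient.conf (example location)
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy-server.example.com:8089

...which pulled down an outputs.conf intended for forwarders, pointing the indexer back at itself:

# outputs.conf delivered by the deployment server (example)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer01.example.com:9997

With that outputs.conf in place, the indexer re-forwards its own data, which is what fills the queues and triggers the congestion warning.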

kholleran
Communicator

Crisis Averted!!!! My own fault...


tgiles
Path Finder

Hey,

I'm seeing a similar issue with one of my indexers after upgrading from 4.3.1 to 5.0. Check splunkd.log for:

WARN  TcpInputProc - Stopping all listening ports. Queues blocked for more than 300 seconds

In my case, the parsing queue (parsingqueue) is getting gummed up on an indexer that never had the issue before. Restarting the indexer resolves the issue for about 12 hours before it gets gummed up again.

You can determine this by running the following query and looking for spikes in the parsingqueue column (or review the Splunk on Splunk app to see it in pretty graph format):

index="_internal" source="*metrics.log" group="queue" earliest=-4h | timechart max(current_size) span=30m by name

Opening a ticket with support to pin down the problem. I'll update if I learn anything actionable.


sowings
Splunk Employee

@tgiles, do you have any UDP ports listening for data on the Splunk indexers? There is a memory leak documented in this answer: http://splunk-base.splunk.com/answers/64311/splunk-50-consuming-all-memory-is-it-possible-to-downgra...
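
A quick way to check (assuming a default install layout) is to dump the effective inputs with btool and look for any [udp://...] stanzas:

$SPLUNK_HOME/bin/splunk btool inputs list --debug

A UDP syslog input would show up as something like:

[udp://514]
connection_host = ip
sourcetype = syslog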


kholleran
Communicator

Good luck. Mine appears to be fine since I removed the deploymentclient.conf that had pulled down an outputs.conf telling the indexer to forward to itself.
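
In case it helps, the cleanup on my side amounted to something like the following (the exact paths depend on where the script dropped the file and which deployment app delivered the outputs.conf, so treat these as placeholders):

mv $SPLUNK_HOME/etc/system/local/deploymentclient.conf /tmp/deploymentclient.conf.bak
rm -r $SPLUNK_HOME/etc/apps/forwarder_outputs    # hypothetical deployed app containing the bad outputs.conf
$SPLUNK_HOME/bin/splunk restart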
