Getting Data In

My Splunk 6.3.2 universal forwarder is connecting, but why am I seeing "Could not send data to output queue" errors?

a212830
Champion

Hi,

I have a universal forwarder running 6.3.2, and I'm seeing the following in its logs on a regular basis. I'm also being told that data is missing...

01-20-2016 21:35:00.859 -0500 INFO  TailReader - Continuing...
01-20-2016 21:35:07.025 -0500 INFO  TailReader - Could not send data to output queue (structuredParsingQueue), retrying...
01-20-2016 21:35:08.034 -0500 INFO  TailReader - Could not send data to output queue (structuredParsingQueue), retrying...
01-20-2016 21:35:08.731 -0500 INFO  TailReader - Could not send data to output queue (parsingQueue), retrying...
01-20-2016 21:35:15.383 -0500 INFO  TailReader - Continuing...
01-20-2016 21:35:23.161 -0500 INFO  TailReader -   ...continuing.
01-20-2016 21:35:23.182 -0500 INFO  TailReader - Continuing...
01-20-2016 21:35:27.036 -0500 INFO  TailReader - Could not send data to output queue (structuredParsingQueue), retrying...
01-20-2016 21:35:29.076 -0500 INFO  TcpOutputProc - Closing stream for idx=X.X.X.X:9997
01-20-2016 21:35:29.076 -0500 INFO  TcpOutputProc - Connected to idx=X.X.X.X:9997
01-20-2016 21:35:29.863 -0500 INFO  TcpOutputProc - Closing stream for idx=X.X.X.X:9997
01-20-2016 21:35:29.863 -0500 INFO  TcpOutputProc - Connected to idx=X.X.X.X:9997
01-20-2016 21:35:31.226 -0500 INFO  TailReader - Could not send data to output queue (parsingQueue), retrying...
01-20-2016 21:35:35.022 -0500 INFO  TailReader - Could not send data to output queue (structuredParsingQueue), retrying...

The universal forwarder is connecting and load balancing, but I'm also seeing these "could not send data" messages, and I don't know why. When I grabbed this snapshot, the file hadn't been written to in about 30 minutes, so it's not busy. And the servers listed in outputs.conf are all reachable and running. (And what is "structuredParsingQueue", anyway?)

Any ideas? There are a lot of files here - they roll over about every 60-90 seconds once they reach 50 MB, so it's very busy during the day (but quiet right now).
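
For reference, sanity checks along these lines (the host and port are placeholders for the entries in outputs.conf) are what I mean by "reachable" - the configured outputs show as active, and the receiving port answers:

$SPLUNK_HOME/bin/splunk list forward-server   # lists active vs. configured-but-inactive forward targets
nc -vz X.X.X.X 9997                           # basic TCP reachability to the receiving port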


jplumsdaine22
Influencer

I'd check the network connection between the forwarder and the indexer first.

Then check for errors on the indexer itself - you may have run out of disk space, or the indexer could be hitting some other problem.
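
If it helps, quick-and-dirty checks along these lines cover both (the hostname and paths are assumptions for a default Linux install under /opt/splunk):

# from the forwarder: is the indexer's receiving port reachable?
nc -vz your-indexer 9997

# on the indexer: free space where the indexes live, and recent splunkd errors
df -h /opt/splunk/var/lib/splunk
grep -iE "error|blocked" /opt/splunk/var/log/splunk/splunkd.log | tail -50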

As for what all the queues are:

http://wiki.splunk.com/Community:HowIndexingWorks


sloshburch
Splunk Employee

Sounds like a support case to me. They'll likely jump into metrics.log to see if any queues are backed up. If you don't see any messages about blocking, then it might just be a network issue, as jplumsdaine22 mentioned.
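
If you want to check that yourself before opening the case, a search along these lines against the forwarder's internal logs will surface backed-up queues (group=queue and blocked=true are how metrics.log reports it; add a host= filter for your forwarder):

index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name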
