Getting Data In

Tcp output pipeline blocked. Attempt '300' to insert data failed. (Heavy forwarder ---> Indexer)

splunker12er
Motivator

Case:
I am gathering logs from a Cisco ASA and writing them to a log file. Using a monitor stanza, I'm monitoring that log file on a heavy forwarder and forwarding the logs to my indexer via splunktcp://9997.
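
Roughly, the relevant config looks like this (the file path, sourcetype, and indexer hostname below are placeholders, not my exact values):

inputs.conf on the heavy forwarder:
[monitor:///var/log/cisco-asa/asa.log]
sourcetype = cisco_asa
disabled = false

outputs.conf on the heavy forwarder:
[tcpout:primary_indexers]
server = indexer01:9997

inputs.conf on the indexer:
[splunktcp://9997]
disabled = false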

Issue:
The data is not visible in Splunk search.

Findings:
/opt/splunk/bin/splunk list monitor -- shows the monitored file name
/opt/splunk/bin/splunk list forward-server -- shows the indexer under Active forwards

On the heavy forwarder, I'm seeing the message "Tcp output pipeline blocked. Attempt '300' to insert data failed."

I set the following in server.conf:

[queue=parsingQueue]
maxSize = 10MB

but still no luck.
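
(For what it's worth, I believe the other queues that show up in metrics.log can be sized the same way in server.conf; the stanza names below are my assumption based on the queue names in that log, not something I have verified:)

[queue=aggQueue]
maxSize = 10MB

[queue=indexQueue]
maxSize = 10MB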

Splunk versions:
Heavy forwarder: Splunk 6.0.4 (build 207768)
Indexer/search head: Splunk 6.1.1 (build 207789)

Nothing notable in splunkd.log; I only found the below in metrics.log:

[root@splunkserver local]# tail -f /opt/splunk/var/log/splunk/metrics.log|grep blocked
03-05-2015 07:15:16.869 +0000 INFO  Metrics - group=queue, name=aggqueue, blocked=true, max_size_kb=1024, current_size_kb=1023, current_size=2728, largest_size=2763, smallest_size=0
03-05-2015 07:15:16.869 +0000 INFO  Metrics - group=queue, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1330, largest_size=1330, smallest_size=0
03-05-2015 07:15:16.869 +0000 INFO  Metrics - group=queue, name=typingqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1353, largest_size=1353, smallest_size=0
1 Solution

harsmarvania57
Ultra Champion

Can you please let me know whether you have configured limits.conf on the heavy forwarder? If yes, what have you set for "maxKBps"?
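
You can check the effective value with btool on the heavy forwarder, for example (path assumes a default /opt/splunk install):

/opt/splunk/bin/splunk btool limits list thruput --debug

Look for the maxKBps line in the [thruput] stanza; on a full Splunk instance it typically defaults to 0 (unlimited), while a low value there will throttle forwarding.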

As per your server logs, I can see that the Splunk instance is not able to write data to disk fast enough, for one of the following reasons:
1.) The forwarder is sending a huge volume of logs, so the indexqueue and other queues are full.
2.) The indexer is not able to write data to disk fast enough: an I/O problem (low IOPS due to a slow, low-RPM disk).

splunker12er
Motivator

Yes, same issue.

Cause 1:
From metrics.log I found that the index queues are blocked; the forwarder is sending too much data (please suggest a best practice to overcome this).

Cause 2:
Would adding more CPU cores resolve this?

skawasaki_splun
Splunk Employee

Increase IOPS on the indexer or scale out horizontally (i.e., add another indexer).
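
If you do add a second indexer, the forwarder can auto load-balance between them in outputs.conf; the hostnames below are just examples:

[tcpout:my_indexers]
server = indexer01:9997, indexer02:9997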

splunker12er
Motivator

Thanks. Any advice on the forwarder settings?

teunlaan
Contributor

Are you sure the indexer is the problem? You should also see "blocked" messages on the indexer side.
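
For example, on the indexer you can check its own metrics.log the same way you did on the forwarder (path assumes a default install):

grep "blocked=true" /opt/splunk/var/log/splunk/metrics.log | tail -20

If the indexer's queues are not blocked, the bottleneck is more likely the network or the forwarder itself.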

We have had such behaviour from our heavy forwarders, but in our case the network was the problem. It was full, so the heavy forwarder couldn't get the data onto the network during peak hours.

If you are 100% sure the indexer is the problem (it can't handle the amount of data), add another indexer next to it (getting more IOPS is probably harder).

For the forwarder: if the data volume is the same 24/7, you can't do anything. Otherwise you can set a lower number for "maxKBps" (so you don't get the error), but your data will arrive in the system later (at night, when there isn't much data being generated).
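
A throttled forwarder would look something like this in limits.conf (512 is just an example number, tune it to your volume):

[thruput]
maxKBps = 512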

harsmarvania57
Ultra Champion

Yes, teunlaan is correct. Add a new indexer for the extra logging volume, and you can also set maxKBps on the forwarder, but that will delay your logs.

harsmarvania57
Ultra Champion

As per your logs, your index queue and other queues are full. There are a couple of possible reasons for this:
1.) The forwarder is sending quite a high volume of logs.
2.) The indexer is not able to write data fast enough: an I/O problem.

splunker12er
Motivator

I am monitoring some 10 log files (all of them continuously open for writing Cisco ASA logs).

Out of the 10, I am able to search only 3 files on the search head.

In Splunk Web on the heavy forwarder, I see the warning below:

Tcp output pipeline blocked. Attempt '300' to insert data failed

Is this due to the huge volume of data being monitored, with Splunk unable to forward the logs to the indexer for indexing?
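
To get a rough idea of how much each source type is sending, I assume I can check the per-sourcetype throughput lines in the same metrics.log, e.g.:

grep "per_sourcetype_thruput" /opt/splunk/var/log/splunk/metrics.log | tail -20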
