Getting Data In

How do I troubleshoot the heavy-forwarder error "Tcp output pipeline blocked. Attempt '1400' to insert data failed." while monitoring syslog files?

splunker12er
Motivator

I am using a heavy forwarder to monitor Cisco ASA logs.
I have 10 Cisco ASA firewalls writing their logs to 10 different files on a syslog server (e.g. 10.0.0.1.log, 10.0.0.2.log, etc.).
I am monitoring all of these files with a monitor stanza in inputs.conf.
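Roughly like this, for reference (the path, index, and sourcetype here are illustrative, not my exact settings):

    [monitor:///var/log/syslog/cisco/*.log]
    sourcetype = cisco:asa
    index = network
    disabled = false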

Only three of the devices' logs are actually being indexed; I cannot search the other devices' logs from the search head.
In Splunk Web on the heavy forwarder I am seeing this error: "Tcp output pipeline blocked. Attempt '1400' to insert data failed."

The files are continuously open for writing: they grow to a certain size, roll to .tgz format, and new files are then opened for writing.
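In case it helps with troubleshooting, a search along these lines against the forwarder's metrics.log should show which queues are blocking (a rough sketch; it assumes the forwarder's _internal index is searchable, and <heavy_forwarder> is a placeholder for its hostname):

    index=_internal host=<heavy_forwarder> source=*metrics.log* group=queue blocked=true
    | stats count by name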

esix_splunk
Splunk Employee

Check the ulimits for the user that runs Splunk and for the box itself; you might be hitting OS limits.
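For example, from a shell as the user that runs splunkd (a quick sketch; which limits matter and what to raise them to depends on your OS and load):

    ulimit -n    # open file descriptors available to this user
    ulimit -u    # max user processes
    # splunkd also records the limits it detected at startup
    grep -i ulimit $SPLUNK_HOME/var/log/splunk/splunkd.log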

splunker12er
Motivator

Yes, I found that the issue is due to I/O.
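(For anyone hitting the same thing: per-device utilization and wait times on the indexer's index volume are the kind of numbers to look at, e.g. with iostat from the sysstat package, sampling every 5 seconds:

    iostat -x 5
)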

Rather than adding one more indexer to my deployment, can I increase the CPU cores and storage on the existing indexer?
Will that help resolve the issue?

(Because if I add one more indexer, I would have to set up distributed search, whereas in my case the indexer and search head are the same server and there are not many users searching.)
