Getting Data In

TcpInputProc errors after turning on autoLB on forwarders

mfrost8
Builder

We recently started turning on 'autoLB' for our lightweight forwarders. We use the default value of 30 seconds for the forwarder to rotate from indexer to indexer.
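For reference, the outputs.conf stanza on the forwarders looks something like the following (the target group name and server list here are just examples; 30 seconds is the default autoLBFrequency):

    [tcpout:primary_indexers]
    autoLB = true
    autoLBFrequency = 30
    server = indexer1.example.com:9997, indexer2.example.com:9997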

It seems to work fine, except that now we're seeing regular errors in splunkd.log from each forwarder as it disconnects from each indexer in turn. That is, we'll see

06-16-2010 12:14:15.337 INFO TcpInputProc - Connection in cooked mode from host.x.y.z
06-16-2010 12:14:15.478 INFO TcpInputProc - Valid signature found
06-16-2010 12:14:15.478 INFO TcpInputProc - Connection accepted from host.x.y.z

and then

06-16-2010 12:14:47.478 ERROR TcpInputProc - Error encountered for connection from host=host.x.y.z, ip=x.x.x.x. Connection closed by peer
06-16-2010 12:14:47.478 INFO TcpInputProc - Hostname=host.x.y.z closed connection

Because these forwarders are now going to disconnect about once per minute, this is going to generate a lot of new "errors" in splunkd.log.

Can this be suppressed in splunkd.log? As I understand it, it's not really a sign of a problem.

Thanks

mfrost8
Builder

According to support:

There was a bug reported and it was fixed in the 4.1.3 release. Please upgrade to 4.1.3 to see if it works fine on your side. Bug# SPL-31032.

When we upgraded from 4.1.2 to 4.1.4, this issue went away. FYI.

gkanapathy
Splunk Employee

Well, you certainly can change the log level for TcpInputProc to FATAL, but that might hide other problems. You could set it to just ERROR or WARN to get rid of the INFO messages, which should be fine.

I'd worry about why you are in fact getting the ERRORs, though, and investigate whether there is something funny going on with your network.

mfrost8
Builder

The thing is that nothing has changed other than our use of this autoLB functionality. Why would we get no errors of this kind when the forwarder is pinned to one indexer, but now get tons of them when we use autoLB? I find it hard to believe that this is a network issue. I realize we could turn off this level of logging, but I'd hate to lose other messages at this level.

gkanapathy
Splunk Employee

To change the log level permanently, you have to edit $SPLUNK_HOME/etc/log.cfg. Changing it in the Manager GUI only lasts until splunkd is next shut down.
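For example, to keep only warnings and errors from TcpInputProc, you would add or change a category line under the [splunkd] section, roughly like this (the exact set of category lines present in log.cfg varies by version, so treat this as a sketch):

    [splunkd]
    category.TcpInputProc=WARN

Then restart splunkd for the change to take effect.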
