Getting Data In

UAT SPLUNK install - forwarder not forwarding (Could not send data to output queue (parsingQueue), retrying...)

robertlynch2020
Motivator

Hi

I have set up a UAT install of Splunk on dell178srv. The new Splunk instance is up and running and I can access and use it 🙂

I have copied working forwarders over and changed two files, but they won't send data into the new Splunk instance.
The new machine is dell178srv. As I understand it, I should only have to change the two files below. Or am I missing something?

I am getting "Could not send data to output queue (parsingQueue), retrying..." in splunkforwarder/var/log/splunk/splunkd.log.

/dell178srv/apps/splunkforwarder/etc/system/local/inputs.conf
/dell178srv/apps/splunkforwarder/etc/system/local/server.conf

inputs.conf
[default]
host = dell178srv

server.conf
[sslConfig]
sslKeysfilePassword = $1$gTHXLJ1CRdIB

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
quota = MAX
slaves = *
stack_id = free

[general]
pass4SymmKey = $1$1n2DcNgEDoAB
serverName = dell178srv

[httpServer]
disableDefaultPort = true

1 Solution

mattymo
Splunk Employee

The main thing needed to ensure that data collected via inputs is sent to indexers is an outputs.conf configuration.

http://docs.splunk.com/Documentation/Forwarder/6.5.2/Forwarder/Configureforwardingwithoutputs.conf
https://docs.splunk.com/Documentation/Splunk/6.5.2/Admin/Outputsconf

Sample config of outputs.conf:

[tcpout]
defaultGroup=my_indexers

[tcpout:my_indexers]
server=mysplunk_indexer1:9997, mysplunk_indexer2:9996

[tcpout-server://mysplunk_indexer1:9997]
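
For the forwarder in the question, a minimal sketch of the missing file might look like the following; the receiving port 9997 and the group name are assumptions and need to match whatever the new dell178srv instance is actually listening on:

# /dell178srv/apps/splunkforwarder/etc/system/local/outputs.conf (sketch; host/port assumed)
[tcpout]
defaultGroup = uat_indexers

# Point the forwarder at the new UAT Splunk instance
[tcpout:uat_indexers]
server = dell178srv:9997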

Another easy way is to configure forwarding on a Search Head (Settings > Forwarding and receiving) in the GUI and borrow the config from it... that will show you how Splunk builds outputs.conf.
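
If the command line on the forwarder is handier, the equivalent stanza can also be generated with the forward-server CLI; the host, port, and credentials below are assumptions:

# On the forwarder: writes a [tcpout] stanza into etc/system/local/outputs.conf
$SPLUNK_HOME/bin/splunk add forward-server dell178srv:9997 -auth admin:changeme

# On the new Splunk instance: make sure it is actually listening for forwarded data
$SPLUNK_HOME/bin/splunk enable listen 9997 -auth admin:changeme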

If the forwarder already has a legit outputs config, then we may need to check for other symptoms that could cause the forwarder to back up and be unable to send data to the output queue... but let's cross that bridge when/if we need to.

A great place to troubleshoot forwarding is on the forwarder itself, in $SPLUNK_HOME/var/log/splunk/splunkd.log

[splunker@n00b-splkufw-01 splunk]$ tail -1000 splunkd.log | grep TcpOutputProc

You will be able to follow the forwarder's attempts to connect to your indexers and diagnose issues like forwarder connectivity, or your pipelines being full because data can't be moved to the indexers fast enough (again, probably a topic for another day).
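
A couple of quick checks on the forwarder can help narrow that down; this is just a sketch assuming a default $SPLUNK_HOME:

# Show which forward-servers the forwarder is configured with and whether they are active
$SPLUNK_HOME/bin/splunk list forward-server

# Look for blocked queues in metrics.log - a blocked parsingQueue usually means the data
# has nowhere to go (no outputs configured, or the receiver is unreachable)
grep "blocked=true" $SPLUNK_HOME/var/log/splunk/metrics.log | tail -20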

Look, my UF is smart enough to tell me that I have yet to complete my outputs config!

[splunker@n00b-splkufw-01 splunk]$ tail -1000 splunkd.log | grep TcpOutputProc
02-15-2017 00:52:19.691 +0000 INFO  TcpOutputProc - Initializing with fwdtype=lwf
02-15-2017 00:52:19.696 +0000 INFO  TcpOutputProc - found Whitelist forwardedindex.0.whitelist , RE : .*
02-15-2017 00:52:19.697 +0000 INFO  TcpOutputProc - found Blacklist forwardedindex.1.blacklist , RE : _.*
02-15-2017 00:52:19.697 +0000 INFO  TcpOutputProc - found Whitelist forwardedindex.2.whitelist , RE : (_audit|_introspection|_internal|_telemetry)
02-15-2017 00:52:19.813 +0000 ERROR TcpOutputProc - LightWeightForwarder/UniversalForwarder not configured. Please configure outputs.conf.
03-01-2017 01:48:04.039 +0000 INFO  TcpOutputProc - Initializing with fwdtype=lwf
03-01-2017 01:48:04.046 +0000 INFO  TcpOutputProc - found Whitelist forwardedindex.0.whitelist , RE : .*
03-01-2017 01:48:04.046 +0000 INFO  TcpOutputProc - found Blacklist forwardedindex.1.blacklist , RE : _.*
03-01-2017 01:48:04.047 +0000 INFO  TcpOutputProc - found Whitelist forwardedindex.2.whitelist , RE : (_audit|_introspection|_internal|_telemetry)
03-01-2017 01:48:04.171 +0000 ERROR TcpOutputProc - LightWeightForwarder/UniversalForwarder not configured. Please configure outputs.conf.
03-01-2017 01:49:36.928 +0000 INFO  TcpOutputProc - Initializing with fwdtype=lwf
03-01-2017 01:49:36.935 +0000 INFO  TcpOutputProc - found Whitelist forwardedindex.0.whitelist , RE : .*
03-01-2017 01:49:36.936 +0000 INFO  TcpOutputProc - found Blacklist forwardedindex.1.blacklist , RE : _.*
03-01-2017 01:49:36.936 +0000 INFO  TcpOutputProc - found Whitelist forwardedindex.2.whitelist , RE : (_audit|_introspection|_internal|_telemetry)
03-01-2017 01:49:37.073 +0000 ERROR TcpOutputProc - LightWeightForwarder/UniversalForwarder not configured. Please configure outputs.conf.

There is an amazing diagram on the pipelines and how data flows through Splunk deployments HERE

Hopefully that helps, if not come on back and let us know what you see!

- MattyMo


robertlynch2020
Motivator

Hi

Yes, it was the outputs config that I did not change when I copied it over.

Cheers.


mattymo
Splunk Employee

Awesome! Glad to hear you are up and running!

- MattyMo