
Strange queue name

dbamberger
New Member

Hello all.

I'm four days into my Splunk experience and have a problem I don't know where to begin tracking down. I have a Splunk 4.2.1 installation, and the indexer is showing metrics.log entries such as this:

06-09-2011 14:27:38.435 -0700 INFO Metrics - group=queue, name=192.168.27.128_9997, max_size_kb=500, current_size_kb=4, current_size=10, largest_size=10, smallest_size=10

This queue never gets smaller, only larger, and eventually seems to block itself, the aggqueue, and the typingqueue.
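For reference, a search along these lines (just a sketch, assuming the default _internal index is searchable on the indexer) charts that queue's growth from metrics.log:

index=_internal source=*metrics.log* group=queue name=192.168.27.128_9997 | timechart max(current_size_kb) AS queue_kb

Blocked queues also show up in metrics.log with blocked=true, so the same source can confirm when the backup starts.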

My question starts with: what generates that "name=192.168.27.128_9997"? The only thing I know of that contains that string is my forwarder's $SPLUNK_HOME/etc/apps/search/local/outputs.conf file, as the defaultGroup entry. The contents of the file are below and are a holdover from a 4.1.2 installation.

[tcpout]
defaultGroup = 192.168.27.128_9997
disabled = false

[tcpout:192.168.27.128_9997]
server = 192.168.27.128:9997

[tcpout-server://192.168.27.128:9997]

Hopefully, given some context, I can hunt down a more appropriate configuration and begin indexing our files.

Regards.
dbam


woodcock
Esteemed Legend

Either Splunk does not like your strange group name (all those numbers and dots/periods), or your indexers have SSL turned on and are listening on port 9998 instead of port 9997. There are tools to check which ports have listeners (I do not know your OS), or you can just experiment (try changing your default group to an alpha-only name). Here is another answer that shows a working pair of configuration files:
http://answers.splunk.com/answers/110392/ssl-connection-between-indexer-and-forwarder.html
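A quick way to check the listener (a sketch, assuming a Linux box; adjust for your OS):

netstat -an | grep 9997

And a minimal outputs.conf with an alpha-only group name (the name "primary_indexers" here is just an example, pick anything without dots):

[tcpout]
defaultGroup = primary_indexers
disabled = false

[tcpout:primary_indexers]
server = 192.168.27.128:9997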


mw
Splunk Employee

Is your Splunk server receiving data from your forwarders? Go to: Manager -> Forwarding and Receiving -> Configure Receiving

Is the server configured to receive on port 9997?

While you're in Manager, go to Manager -> Forwarding and Receiving -> Configure Forwarding, and make sure that the Splunk server isn't forwarding data to itself.
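The same receiving setting can also be checked on disk; a minimal inputs.conf stanza on the indexer that enables listening on 9997 looks like this (a sketch, assuming the stock splunktcp input):

[splunktcp://9997]
disabled = false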

dbamberger
New Member

It had been; we'd pulled in a good deal of fascinating stuff (recall, four days in!). Then, after an upgrade of the indexer and forwarders, it shut down on us. We had TCP 9997 set up as a listener.
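In case it helps anyone else reading along, the forwarder side can confirm whether the connection is actually up with the CLI (run from $SPLUNK_HOME/bin on the forwarder):

./splunk list forward-server

which reports active versus configured-but-inactive forward destinations.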
