Getting Data In

Setting the stopAcceptorAfterQBlock attribute in inputs.conf

k_harini
Communicator

Hi All,
We are running into a serious issue with our forwarder settings. Please help.
The forwarder was working fine when it was first set up, but for the last week it has not been working. Issue: in splunkd.log, the connection is established and one event is passed; after some time the connectivity is lost and port 9997 closes by itself.

I Googled and found the stopAcceptorAfterQBlock setting. My doubt is: should this go in the forwarder's inputs.conf or the indexer's inputs.conf?

If it belongs in the indexer's inputs.conf, which app should it go in?

We have added the forwarder's inputs.conf under the search app on the Splunk Universal Forwarder.

Can some experts please help with this? It is a big hindrance for our client demo.


woodcock
Esteemed Legend

The bottom line is that if it is related to blocking queues (as it appears to be), you need to fix that problem first:
https://wiki.splunk.com/Community:TroubleshootingBlockedQueues

You may need to open a support case.
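
If you want to see which queue is blocking before changing anything, a rough search along these lines (an illustrative sketch using the usual metrics.log approach, not copied from the wiki page) can be run on the Search Head over the last hour or so:

index=_internal source=*metrics.log sourcetype=splunkd group=queue
| eval is_blocked=if(blocked=="true", 1, 0)
| stats sum(is_blocked) AS blocked_count count AS total_samples BY host name
| eval pct_blocked=round(100*blocked_count/total_samples, 1)
| sort - pct_blocked

A queue with a high pct_blocked (often indexqueue or typingqueue on the indexer) is usually where the back-pressure starts; everything upstream of it, including the splunktcp input on 9997, backs up behind it.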


k_harini
Communicator

Thank you woodcock. Problem resolved. The indexer was forwarding to itself again: an outputs.conf was present on the indexer, and that was the issue. It is resolved now. Splunk Answers (answers.splunk.com) is my savior any time. Special thanks to Iguinn; she had clearly mentioned that this could be the reason in some other post.
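
For anyone who lands on this thread later, the misconfiguration was along these lines (an illustrative sketch; the group name and server value are placeholders, not the exact file): an outputs.conf sitting on the indexer itself and pointing back at its own receiving port, so the indexer kept re-forwarding its own data and its queues blocked.

# outputs.conf present on the indexer (it should not be, unless the indexer is deliberately forwarding elsewhere)
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = <the indexer's own address>:9997

Removing that file (or the tcpout stanzas) on the indexer and restarting Splunk breaks the loop; splunk cmd btool outputs list --debug shows which app the stanzas came from.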


woodcock
Esteemed Legend

Yes, she is a smart one. Be sure to upvote her answer in that other question to show your appreciation! It wouldn't hurt to post a link to that one as a comment under this answer, either (for others in your same boat later).


k_harini
Communicator

There is only one indexer. It was working fine, and all of a sudden this happened. The indexer is a Windows server and the forwarder is on AIX. There are multiple error messages here:
1. "Skipped indexing of internal audit event. Will keep dropping events until indexer congestion is remedied. Check disk space." (2nd issue)
2. "Forwarding to indexer group default-autolb-group blocked for 10 secs." (1st issue)

These are displayed in the indexer's messages.
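
(Side note for later readers: the "check disk space" part of message 1 can be verified on the Windows indexer roughly like this; the install path shown is the default one and may differ in your environment.)

REM Shows "bytes free" for the volume holding the index data
dir "C:\Program Files\Splunk\var\lib\splunk"

REM Shows the configured disk thresholds; indexing pauses when free space drops below minFreeSpace
"C:\Program Files\Splunk\bin\splunk.exe" btool server list diskUsage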


woodcock
Esteemed Legend

What is your architecture? How many indexers, and is the destination index clustered? Show us the error logs that you clearly have seen. Show us the inputs.conf on your forwarder. Obviously you have read https://docs.splunk.com/Documentation/Splunk/6.6.0/Forwarding/Receiverconnection, but did you notice that the suggestion to increase stopAcceptorAfterQBlock applies only to Windows indexers (surely you do not have Windows indexers!)?
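
For reference on the placement question: stopAcceptorAfterQBlock is a receiving-side setting, so if you were going to change it at all it would go in inputs.conf on the indexer next to the splunktcp receiving stanza, roughly like the sketch below (the value is purely illustrative, not a recommendation, and raising it does not fix the underlying blocked queue):

# inputs.conf on the indexer, e.g. $SPLUNK_HOME/etc/system/local/inputs.conf
[splunktcp://9997]
disabled = 0

[splunktcp]
# seconds to keep the receiving port open while the queues behind it are blocked, before closing it
stopAcceptorAfterQBlock = 600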

So here are other things to check (a rough sketch of the commands for items 1 and 2 follows the list):
1: Are the indexers trashing the data? Run splunk cmd btool props list on your indexer and send the output for the destination sourcetype. You should see the same details on each indexer. If that is all correct (no nullQueue), then we know that the indexers will not trash these events.
2: Is the forwarder sending? From the CLI on the UF, run splunk list monitor and splunk _internal call /services/admin/inputstatus/TailingProcessor:FileStatus.
3: Is there ANY data at all from this host? From the Search Head, run this search for "All time": |metadata type=hosts | where host=YourHostHere | convert ctime(*Time)
4: Is there any data for this sourcetype and host? From the Search Head, run this search for "All time": |tstats min(_time) AS earliestTime max(_time) AS latestTime groupby host sourcetype | search sourcetype="YourSourcetypeHere" host="YourHostHere" | convert ctime(*Time)
5: Was the data here, but now is gone? From the Search Head, run this search for "Last 7 days": index=_internal sourcetype=splunkd bucketmover | rex "[\/\\\](?<indexname>[^\/\\\]*)[\/\\\][^\/\\\]*db[\/\\\]db_(?<newestTime>\d+)_(?<oldestTime>\d+)_\d+" | rex "db_(?<newestTime>\d+)_(?<oldestTime>\d+)_\d+.*?[\/\\\](?<indexname>[^\/\\\]*)[\/\\\][^\/\\\]*db" | stats max(oldestTime) AS oldestTime BY indexname | eval retentionDays=(now()-oldestTime)/(60*60*24)
6: Are we sure that we are looking for the data in the correct place? Perhaps the inputs.conf contains a value such as index=main or index=test, so the data either is in the wrong place (e.g. index=main) or, even worse, got dropped entirely (because index=test is not defined on the indexers). If this is the case, there should be a message on the Search Head saying something like: Search peer idxX.XXX has the following message: Received event for unconfigured/disabled/deleted index=XXX with source=XXX host=XXX sourcetype="splunk_audit". So far received events from 1 missing index(es).
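
A rough sketch of the commands from items 1 and 2 (the forwarder path assumes a default Universal Forwarder install under /opt/splunkforwarder on the AIX host; adjust to your environment):

# On the indexer: dump the effective props and check the destination sourcetype for nullQueue routing
splunk cmd btool props list --debug

# On the Universal Forwarder: confirm the monitored inputs and the tailing processor's view of each file
/opt/splunkforwarder/bin/splunk list monitor
/opt/splunkforwarder/bin/splunk _internal call /services/admin/inputstatus/TailingProcessor:FileStatus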
