Seen in splunk.log repeatedly (nothing else)
TailingProcessor - Could not send data to output queue (parsingQueue), retrying...
Our forwarders seem to get blocked occasionally and are unable to recover. We've found them in this state for days sometimes, and because of the block, the forwarder's internal logs never reach Splunk, so we can't detect the condition from there.
Files monitored:
roughly 50-100 files, about 10 of them active; rolled files are ~100MB each at rotation.
A restart of the splunkforwarder resolves the issue.
If what @ddrillic (@lguinn) said is the problem, here is another way out:
https://answers.splunk.com/answers/309910/how-to-monitor-a-folder-for-newest-files-only-file.html
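The linked answer amounts to limiting how many files the tailer has to track. As a rough sketch (the path and whitelist below are placeholders, not from the original thread), `inputs.conf` on the forwarder can tell the monitor to skip files that haven't been modified recently:

```ini
# Hypothetical inputs.conf sketch: restrict the monitor to recent files so the
# tailing processor isn't tracking every rolled file in the directory.
[monitor:///var/log/myapp]
whitelist = \.log$
# Skip files whose modification time is older than one day.
ignoreOlderThan = 1d
```

With `ignoreOlderThan` set, rolled files age out of the monitor's working set instead of accumulating.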
The question is whether they are being blocked at the forwarder level or at the indexer level.
Cheerful discussion at "Could not send data to output queue (parsingQueue)".
@lguinn explained and said -
There are maybe 50-100 (~100MB) files, so that is not the issue. Also, it is forwarder-specific: only a handful of forwarders get blocked, and they never recover on their own.
Occasional blocking at the indexer is normal and clears on its own. I do see queue ceilings being hit, but while the indexer recovers, the forwarder stays blocked, often for days before I detect it. Restarting the splunkforwarder resolves it.
Is it just the parsing queue that's blocked?
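One way to answer that is to look at the forwarder's own `metrics.log` (under `$SPLUNK_HOME/var/log/splunk/`), which emits `group=queue` lines periodically. As a minimal sketch, assuming the typical line shape with a `name=` field and a `blocked=true` flag on saturated queues (the sample lines below are illustrative, not from this incident):

```python
import re

# Sample lines shaped like metrics.log group=queue events (illustrative only).
SAMPLE = """\
01-02-2024 10:00:00.000 +0000 INFO  Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=512
01-02-2024 10:00:00.000 +0000 INFO  Metrics - group=queue, name=tcpout_primary, max_size_kb=512, current_size_kb=3
"""

def blocked_queues(text):
    """Return the names of queues whose metrics line carries blocked=true."""
    names = []
    for line in text.splitlines():
        if "group=queue" not in line:
            continue
        match = re.search(r"name=(\w+)", line)
        if match and "blocked=true" in line:
            names.append(match.group(1))
    return names

print(blocked_queues(SAMPLE))  # ['parsingqueue']
```

If only `parsingqueue` shows `blocked=true` while the output (tcpout) queue stays healthy, the bottleneck is local to the forwarder rather than downstream at the indexer.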
I don't know what is causing it, but you can enable an alert in the (Deployment) Monitoring Console to notify you of Missing Forwarders. If you cannot find it, run the Health Checks and look for bread crumbs there.
Also, upgrade to the latest forwarder version of Splunk; I find that forwarders often run VERY far behind in versions, which is not good.
Yeah, we're planning on upgrading soon, as we're assuming this is an undocumented bug at this point.