Getting Data In

Blacklist problem

mjmcleod
New Member

I'm using the universal forwarder on Solaris.

I set up the following input:


[monitor:///var/log]
disabled = false
index = syslog

and then discovered the joy of /var/log/pool/poold, to which the system writes once every fifteen seconds. We don't really need or care about that log, so I added


blacklist = .*poold$

and restarted the forwarder. I can see from the output of "splunk list monitor" that the file is supposedly no longer being watched, but I'm still getting new events from it!
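For what it's worth, the pattern itself does match the full path: blacklist values are unanchored regular expressions matched against the whole file path. A quick plain-shell check (just grep, no Splunk involved) confirms it:

```shell
# Sanity check: does the blacklist regex match the full path?
# grep -qE exits 0 on a match, so "blacklisted" is printed only
# when the pattern matches.
echo "/var/log/pool/poold" | grep -qE '.*poold$' && echo "blacklisted"
```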

Someone else here mentioned that they used "splunk clean eventdata" to clear up this sort of problem, but that only works with the full Splunk install, not with the universal forwarder.

Any ideas?


jbsplunk
Splunk Employee

'splunk clean eventdata' clears data that has already been indexed. Since the data is coming in via a UF, and the UF has no indexes, this isn't a valid command there. You could try to run 'splunk clean eventdata syslog -f' on the indexer, but that would clear out ALL your syslog data, so use it carefully. If you've got a lot of data, it's also possible that some of it is still queued on the forwarder.

You could also look at the file input status by running the following command on the UF from $SPLUNK_HOME/bin:

splunk _internal call /services/admin/inputstatus/TailingProcessor:FileStatus
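To narrow the output down to the one file, you can pipe it through grep (a sketch only; it assumes SPLUNK_HOME points at the UF install, and the exact status fields vary by version):

```shell
# Query the UF's tailing processor for file-input status and filter
# for the noisy log; -A 3 shows a few status lines after each match.
cd "$SPLUNK_HOME/bin"
./splunk _internal call /services/admin/inputstatus/TailingProcessor:FileStatus | grep -A 3 poold
```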


mjmcleod
New Member

I'd been giving it a good 30 minutes to clear the forwarder queue, but it was still going.

Came back today after the weekend and it's stopped pushing data over from that log. So something of a false alarm!

