I'm using the universal forwarder on Solaris.
I set up the following input:
[monitor:///var/log]
disabled = false
index = syslog
and then discovered the joy of /var/log/pool/poold, to which the system writes once every fifteen seconds. We don't really need or care about that log, so I added
blacklist = .*poold$
and restarted the forwarder. I can see from the output of "splunk list monitor" that the file is supposedly no longer being watched, but I'm still getting new events from it!
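For reference, the full stanza in inputs.conf now reads:

[monitor:///var/log]
disabled = false
index = syslog
blacklist = .*poold$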
Someone else here mentioned that they used "splunk clean eventdata" to clear up this sort of problem, but that only works with the full Splunk install, not with the universal forwarder.
Any ideas?
'splunk clean eventdata' clears data that has already been indexed. Since the data comes in via a UF, and the UF has no indexes, it isn't a valid command there. You could try running 'splunk clean eventdata syslog -f', but that would clear out ALL your syslog data, so use it carefully. It's also possible, if you've got a lot of data, that some of it is still queued on the forwarder.
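If you did go that route, bear in mind that 'clean' has to run on the indexer itself (there's nothing to clean on the UF) and Splunk must be stopped first. Roughly something like the following, though check the CLI help for the exact syntax on your version:

splunk stop
splunk clean eventdata -index syslog -f
splunk start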
You could also look at the file input status by running the following command on the UF from $SPLUNK_HOME/bin:
splunk _internal call /services/admin/inputstatus/TailingProcessor:FileStatus
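For example, to check whether the poold file is still in the tailing list, you can grep the output of that call:

splunk _internal call /services/admin/inputstatus/TailingProcessor:FileStatus | grep poold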
I'd been giving it a good 30 minutes to clear the forwarder queue, but it was still going.
Came back today after the weekend and it's stopped pushing data over from that log. So something of a false alarm!