After upgrading a Solaris SPARC forwarder from Splunk 3.4.9 to 4.1.4 (build 82143), one log file stopped being indexed. Lots of new data is being written to it, but none of it is showing up on the indexer. Ten-plus other files on the same forwarder are being monitored fine. The inputs.conf stanza is:
[monitor:///log/syslog/network/netscalar.log]
disabled = false
In the TailingProcessor:FileStatus it's:
log/syslog/network/netscalar.log
file position 10184387
file size 10184387
percent 100.00
type open file
The log file is rotated, so there is also a netscalar.log.1 and so on in the directory. I tried clearing the eventdata in the fishbucket, which forced Splunk to re-read all the files (that took a few hours), but it didn't fix this one. Is there a way I can get Splunk to treat this file as entirely new so the data gets indexed?
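One option worth trying (a sketch, not a confirmed fix for this case): inputs.conf supports a crcSalt setting, and setting it to the literal string <SOURCE> mixes the full path into the CRC Splunk uses to recognize a file, which makes Splunk treat the file as brand new. Note that this re-indexes the file from the beginning, so existing events will be duplicated:

```
[monitor:///log/syslog/network/netscalar.log]
disabled = false
# <SOURCE> is a literal token, not a placeholder: it salts the
# file-signature CRC with the source path, so Splunk sees a "new" file
# and re-reads it from byte 0 (expect duplicate events for old data).
crcSalt = <SOURCE>
```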
I've seen issues in the past where a file stops forwarding because of the rotation strategy - whether the file is moved aside (inode change) or copied and then truncated in place. Not sure if that's still an issue in the newest versions of Splunk, but we used to handle this situation by splunking the whole directory and then whitelisting the specific file.
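For the original poster's path, that suggestion would look roughly like this (a sketch; the whitelist is a regex matched against the full path, and the trailing $ keeps the rotated netscalar.log.1, .2, ... files from matching):

```
[monitor:///log/syslog/network]
disabled = false
# Only pick up the live file, not its rotated siblings.
whitelist = netscalar\.log$
```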
I deleted the log file and created a new one, and Splunk is indexing it fine now. Thanks, all set.
I changed inputs.conf to monitor the directory and whitelist the file, but it still shows up as 100% read in the TailingProcessor:FileStatus. FWIW, the file size reported there matches the actual file size on disk.