I am using the universal forwarder to index a log file that is regenerated every time a new row is added. In other words, the logging mechanism rewrites the entire file periodically; it doesn't append rows to the existing file. The problem is that whenever new rows are added, the entire file gets re-indexed, which results in duplicate events. Is there a way to configure this input (in inputs.conf and/or props.conf) to prevent this from happening? Thanks.
I would write a shell script that takes a delta of the "real file" against a "copy file", appends only the new lines to the copy (echo "$new_lines" >> /other/directory/copy_file), and removes the real file (rm -f /real/directory/real_file), then have Splunk monitor /other/directory/copy_file instead of the original.
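Roughly something like this, as a minimal sketch. The two paths come from the suggestion above; the diff-based delta, the cron scheduling, and the sourcetype name are my assumptions about the setup:

    #!/bin/sh
    # Sketch of the delta-copy approach. Paths come from the answer above;
    # the diff-based delta assumes the logger rewrites the old rows first
    # and appends the new rows at the end of the regenerated file.
    REAL=/real/directory/real_file
    COPY=/other/directory/copy_file

    [ -f "$REAL" ] || exit 0    # nothing to do until the logger rewrites the file
    touch "$COPY"               # first run: start from an empty copy

    # Lines present in the regenerated file but not yet in the copy
    # (diff marks lines added in $REAL with a leading "> ").
    new_lines=$(diff "$COPY" "$REAL" | sed -n 's/^> //p')

    if [ -n "$new_lines" ]; then
        printf '%s\n' "$new_lines" >> "$COPY"
    fi

    rm -f "$REAL"               # as in the suggestion; the logger recreates it

Run it from cron (e.g. once a minute), and point the forwarder at the copy with a standard monitor stanza in inputs.conf (the sourcetype value here is just a placeholder):

    [monitor:///other/directory/copy_file]
    sourcetype = your_sourcetype

Because the copy file is only ever appended to, the forwarder will tail it normally and you avoid the re-indexing entirely.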