I'm feeding splunk a large quantity of historical gzipped syslog files for many, many different machines through a single TCP listener input. These archived files almost certainly contain overlapping data. Furthermore, new data may come in that overlaps with the old data. I can filter my search results to not show that duplicated data, but is it possible to strip any duplicate lines at index time?
A similar scenario arises with logrotate compressing and rotating logs; see http://answers.splunk.com/answers/121267/how-does-splunk-handle-nix-logrotate-based-log-rotation
No, that is not possible. Splunk's indexing pipeline processes each incoming event independently and keeps no record of previously indexed data to compare against, so duplicate events cannot be stripped at index time. Your options are to suppress duplicates at search time (for example with the `dedup` command) or to remove them before the data ever reaches the input.
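If you control the replay of the historical archives, one workaround is to deduplicate the lines yourself before forwarding them to the TCP input. Here is a minimal sketch in Python — the host/port values are placeholders for your own Splunk TCP listener, and `unique_lines`/`send_to_splunk` are hypothetical names, not part of any Splunk tooling. Note the `seen` set grows with every distinct line, so for very large archives you would want a bounded structure (e.g. a Bloom filter) instead.

```python
import gzip
import hashlib
import socket

def unique_lines(paths):
    """Yield each distinct line exactly once across all gzipped files.

    Lines are compared by SHA-1 digest of their exact bytes, so two
    copies of the same syslog line (including its timestamp) collapse
    to one, while lines differing in any character are both kept.
    """
    seen = set()
    for path in paths:
        with gzip.open(path, "rt", errors="replace") as f:
            for line in f:
                digest = hashlib.sha1(line.encode("utf-8")).digest()
                if digest not in seen:
                    seen.add(digest)
                    yield line

def send_to_splunk(paths, host="localhost", port=1514):
    """Stream the deduplicated lines to a Splunk TCP input.

    localhost:1514 is an assumed address -- substitute the host and
    port of your actual TCP listener.
    """
    with socket.create_connection((host, port)) as sock:
        for line in unique_lines(paths):
            sock.sendall(line.encode("utf-8"))
```

This only removes duplicates within a single replay run; if new overlapping data arrives through other inputs later, those duplicates will still be indexed, and you are back to filtering at search time.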