Hi,
I am seeing duplicate data ingested when a file that was already ingested is recreated after a system failure, containing the existing data plus new data.
For example, let's say test.csv has the following data:
a b c
When the file is deleted and recreated with the same name but with the following additional data,
a b c
1 2 3
4 5 6
then a b c is ingested again in addition to 1 2 3 and 4 5 6.
Can someone help me with the correct stanza to add to inputs.conf, or suggest any other solution, to avoid the data being duplicated as in the example above?
Thanks.
Hi,
by default, Splunk reads the first 256 bytes of a file and calculates a CRC checksum over them. That checksum is how Splunk decides whether it has already seen a file: a new file whose checksum matches a known one is treated as already read. In your case the first few lines of the old and the recreated file are identical, so Splunk cannot tell the two apart from the checksum alone, and the existing lines end up being indexed again, which gives you your duplicates.
You can adjust the number of bytes that go into the checksum with this setting in inputs.conf:
initCrcLength = <integer>
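For example, a monitor stanza might look like this (a sketch; the path is an assumption, so adjust it to wherever your file actually lives, and note that valid values for initCrcLength range from 256 to 1048576 bytes):

```
[monitor:///var/data/test.csv]
# Checksum the first 1024 bytes instead of the default 256,
# so files that share only a short header are still told apart.
initCrcLength = 1024
```

For this to help, the first initCrcLength bytes of the recreated file must actually differ from other files Splunk monitors; if all your files begin with the same long header, you would need to raise the value past the header length.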
Here is the link to the examples and spec of the inputs.conf file.
You can find additional details on initCrcLength there, as well as many more configuration options for your inputs.
Greetings
Tom