Getting Data In

How to avoid duplicate data when a file is recreated with the same name as one that was deleted earlier?

soniaraj13
New Member

Hi,

I see duplicate data being ingested when a file that was already ingested is deleted and then recreated after a system failure, containing the existing data plus new data.

For example, let's say test.csv contains the following data:

a b c

The file is then deleted and recreated with the same name, but with the following additional data:

a b c
1 2 3
4 5 6

Splunk ingests a b c again, in addition to 1 2 3 and 4 5 6.

Can someone help me with the correct stanza to add to inputs.conf, or any other solution, to avoid the data being duplicated as in the example above?

Thanks.


tom_frotscher
Builder

Hi,

By default, Splunk reads the first 256 bytes of a file and calculates a CRC checksum over them. When a file appears whose checksum matches one Splunk has already seen, it is treated as a known file and is not indexed again from the beginning. In your case the file is shorter than 256 bytes, so the appended lines fall inside the checksum window: the recreated file produces a different CRC, Splunk treats it as a brand-new file and indexes it from the start, and you get your duplicates.
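To illustrate why the checksum changes in your example, here is a rough sketch using Python's zlib.crc32 (this is only an illustration, not Splunk's exact internal CRC routine):

import zlib

CRC_WINDOW = 256  # matches the default initCrcLength

def init_crc(content: bytes) -> int:
    # Checksum at most the first 256 bytes; a file shorter than
    # the window is checksummed in its entirety.
    return zlib.crc32(content[:CRC_WINDOW])

old = b"a b c\n"                  # 6 bytes, entirely inside the window
new = b"a b c\n1 2 3\n4 5 6\n"    # appended lines also land inside the window

print(init_crc(old) == init_crc(new))  # False -> looks like a brand-new file

Because the file is smaller than the checksum window, appending data changes the checksum, so Splunk no longer recognizes the file.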

You can adjust the number of bytes that are read for the checksum with this setting in inputs.conf:

initCrcLength = <integer>
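For example, a monitor stanza could look like this (the path is just a placeholder for your own input; per the spec, the value must be between 256 and 1048576):

[monitor:///var/data/test.csv]
initCrcLength = 1024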

You can find the examples and spec for the inputs.conf file in the Splunk documentation.
There are additional details on initCrcLength there, as well as many more configuration options for your inputs.

Greetings

Tom
