Getting Data In

How to avoid indexing duplicate events with files being rotated and compressed?

srenou
New Member

Hello,

We have a WebLogic instance that writes its log file using log rotation and compresses the rotated file.
When the box is under heavy load, we see some data appear twice in Splunk: events are indexed when access.log is processed and indexed again when access.log.1.gz is processed.
We also see some data that was indexed only from access.log.1.gz.

Ignoring the gz file would mean losing that extra data, but in the current situation we are indexing duplicates.
Is there a workaround that avoids both losing data and processing duplicates?
I saw a proposal to remove the duplicates after indexing (| dedup _raw), but that sounds like an after-the-fact fix.
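For reference, one way to implement the "ignore the gz file" option would be a blacklist on the monitor stanza in inputs.conf. This is only a sketch of that approach and does not solve the missing-events side of the problem; the monitor path and sourcetype below are made up for illustration:

    # inputs.conf -- hypothetical monitor stanza for the WebLogic access log
    # The blacklist regex keeps the forwarder from reading the rotated,
    # compressed copies, so access.log.1.gz is never indexed. Duplicates
    # disappear, but any events that only ever made it into the .gz file
    # are lost.
    [monitor:///opt/weblogic/logs/access.log*]
    blacklist = \.gz$
    sourcetype = access_combined

By contrast, the | dedup _raw approach mentioned above only hides the duplicates at search time; the events are still indexed twice.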

It also appears that this does not happen when the system is not under stress, so Splunk is sometimes able to recognize that access.log.1.gz is the compressed version of the already-indexed access.log.

Thanks for any help as I am trying to get all my data and no duplicates.


nettrigger
Explorer

I have the same problem, and to this day Splunk has not been able to give me a proper, definitive answer. Disappointing.
