Getting Data In

Why are we receiving a massive number of duplicate events from our forwarder, resulting in 4 license violations?

ekst_andwii
New Member

We are seeing massive duplication of events from two log files indexed by Splunk. It started suddenly on a Friday, continued all weekend, and has resulted in 4 license violations. Other log files indexed by the same forwarder share the same monitor stanza and other configuration.

Example of line in the log file:

|2016-09-12 10:59:59,597|DEBUG|HikariCP connection filler (pool HikariPool-0)|HikariPool|After fill pool stats HikariPool-0 (total=20, inUse=3, avail=17, waiting=0)

In Splunk (event source):

...
|2016-09-12 10:59:59,597|DEBUG|HikariCP connection filler (pool HikariPool-0)|HikariPool|After fill pool stats HikariPool-0 (total=20, inUse=3, avail=17, waiting=0)
|2016-09-12 10:59:59,563|DEBUG|Hikari Housekeeping Timer (pool HikariPool-0)|HikariPool|Before cleanup pool stats HikariPool-0 (total=20, inUse=3, avail=17, waiting=0)
|2016-09-12 10:59:59,563|DEBUG|Hikari Housekeeping Timer (pool HikariPool-0)|HikariPool|After cleanup pool stats HikariPool-0 (total=20, inUse=3, avail=17, waiting=0)
|2016-09-12 10:59:59,597|DEBUG|HikariCP connection filler (pool HikariPool-0)|HikariPool|After fill pool stats HikariPool-0 (total=20, inUse=3, avail=17, waiting=0)
|2016-09-12 10:59:59,563|DEBUG|Hikari Housekeeping Timer (pool HikariPool-0)|HikariPool|Before cleanup pool stats HikariPool-0 (total=20, inUse=3, avail=17, waiting=0)
|2016-09-12 10:59:59,563|DEBUG|Hikari Housekeeping Timer (pool HikariPool-0)|HikariPool|After cleanup pool stats HikariPool-0 (total=20, inUse=3, avail=17, waiting=0)
|2016-09-12 10:59:59,597|DEBUG|HikariCP connection filler (pool HikariPool-0)|HikariPool|After fill pool stats HikariPool-0 (total=20, inUse=3, avail=17, waiting=0)
|2016-09-12 10:59:59,563|DEBUG|Hikari Housekeeping Timer (pool HikariPool-0)|HikariPool|Before cleanup pool stats HikariPool-0 (total=20, inUse=3, avail=17, waiting=0)
|2016-09-12 10:59:59,563|DEBUG|Hikari Housekeeping Timer (pool HikariPool-0)|HikariPool|After cleanup pool stats HikariPool-0 (total=20, inUse=3, avail=17, waiting=0)
|2016-09-12 10:59:59,597|DEBUG|HikariCP connection filler (pool HikariPool-0)|HikariPool|After fill pool stats HikariPool-0 (total=20, inUse=3, avail=17, waiting=0)
...

619 events in total for timestamp 2016-09-12 10:59:59,597
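For anyone wanting to quantify the duplication, a search along these lines counts identical raw events (the index and source below are placeholders, not our real values):

index=main source="/opt/tomcat/logs/app.log"
| stats count by _raw
| where count > 1
| sort - count

Each duplicated line then shows up once, with its copy count next to it.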

Any ideas??

0 Karma

ddrillic
Ultra Champion

Are you sure it's Splunk's fault? ;)

There is a similar case at: How do I fix a large amount of duplicate events that are locking out my instance?

It says -

-- Apologies for the confusion. This answer is all set. Further investigation showed that our product's logs were guilty of producing duplicate entries. Thank you for your help!

BTW, the best practice is to avoid DEBUG logging, as it inflates the logs, or, if it is truly needed, to minimize the time DEBUG logging is left enabled.
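If the application's log level can't be changed right away, a Splunk-side stopgap is to route DEBUG events to the nullQueue so they are discarded before indexing (and therefore don't count against the license). A minimal sketch, assuming a sourcetype of app_log and that the filter is applied on the indexer or a heavy forwarder:

props.conf
[app_log]
TRANSFORMS-drop_debug = drop_debug_events

transforms.conf
[drop_debug_events]
REGEX = \|DEBUG\|
DEST_KEY = queue
FORMAT = nullQueue

Note this only reduces indexed volume going forward; it does not explain or fix the duplication itself.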

0 Karma

ekst_andwii
New Member

Yes, we double-checked. There is only one log line in the file for the given timestamp.

0 Karma

lukejadamec
Super Champion

I've seen this before on a Linux system where the monitored directory was set up as a mirror, so the files were constantly being rewritten and re-indexed.
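If that is happening, the forwarder will typically keep re-reading the file from the top, and splunkd usually logs it. A search like the one below may surface it (the exact message wording can vary by version):

index=_internal sourcetype=splunkd component=WatchedFile "re-read entire file"
| stats count by host

Hits from the forwarder's host would suggest the files are being treated as rewritten and indexed again.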

0 Karma

ekst_andwii
New Member

I don't think there is any mirroring on the directory. Plain Linux installation (on a VM) with Tomcat.

0 Karma

lukejadamec
Super Champion

Can you post your inputs.conf settings for this source?
Can you also verify that they are all from the same source? The events you posted do not show source or sourcetype.

0 Karma

ekst_andwii
New Member

I appreciate your reply but I have submitted a support case to Splunk regarding this. I will let you know what the outcome is.

99% of the events are from the same source; a few(!) are from .1 files (rolled-over logs).
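For anyone checking the same thing, a breakdown along these lines shows where the events come from (the index is a placeholder):

index=main "After fill pool stats"
| stats count by host, source, sourcetype
| sort - count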

0 Karma

jwelch_splunk
Splunk Employee

Are you using Indexer Clustering and IndexerDiscovery?

0 Karma

ekst_andwii
New Member

No, this is pretty much a default installation. We use whitelists to include logs in the monitors.
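For reference, the monitor stanzas look roughly like this; the path, pattern, index and sourcetype below are illustrative placeholders, not the exact config:

inputs.conf
[monitor:///opt/tomcat/logs]
whitelist = \.log$
index = main
sourcetype = app_log
disabled = false

One thing worth double-checking in this situation is that crcSalt = <SOURCE> is not set for these monitors, since combining it with rotated logs is a known cause of files being re-indexed and showing up as duplicates.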

0 Karma