Getting Data In

Duplicate records

jaterlwj
Explorer

I have been testing the "Continuously index data from a file or directory this Splunk instance can access" option against a file containing, say, 24 rows.
I noticed that when I add a new row and refresh, there are now 49 rows: the original 24 records are being indexed again on top of the new 25. Is there any option to stop the duplicate rows?
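
For reference, a search along these lines (the index and source values are just placeholders for my setup) shows the raw events that now appear more than once:

index=main source="/path/to/my/file.log" | stats count by _raw | where count > 1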

Here are some specifics.
File format: .log
Specify the source: "Continuously index data from a file or directory this Splunk instance can access."

Set host: constant value
Set source type: manual
Destination index: default
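
In inputs.conf terms, the resulting monitor stanza should look roughly like this (the path, host, and sourcetype below are placeholders, not my real values):

[monitor:///path/to/my/file.log]
host = my-constant-host
sourcetype = my_manual_sourcetype
# no index setting here, so events go to the default index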


jaterlwj
Explorer

Any help would be good!


jaterlwj
Explorer

Hi, thanks for your reply.

Pardon my ignorance, but what should I look for under the _internal index? There are roughly 1.7M events in there. 😮


ak
Path Finder

Check the _internal index. It appears the whole file is being reread, hence the 24 + 25 = 49 rows.
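
Something like this narrows _internal down to the file-tailing messages for your file (the file name is a placeholder):

index=_internal sourcetype=splunkd (component=TailingProcessor OR component=TailReader OR component=WatchedFile) "my_file.log"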


ak
Path Finder

Is your log file terminated with an end-of-file message, something like [END OF LOG FILE]?

If so, this will confuse Splunk. Splunk uses the last 256 bytes for a CRC check. If a termination message is constantly appended to your file, the CRC check will fail; when that happens, Splunk rereads the whole file, which duplicates the records.

See: Splunk log rotation
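
If it does turn out to be CRC-related, the monitor stanza has a couple of documented settings worth reading up on before changing anything (the path below is a placeholder, and whether these help depends on why the check fails):

[monitor:///path/to/my/file.log]
# fingerprint the file with more than the default 256 bytes
initCrcLength = 1024
# mix the full source path into the CRC so similar files are not confused
crcSalt = <SOURCE>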


jaterlwj
Explorer

Hi, thank you for your reply! But the log file I used does not contain any end-of-file message.
