Splunk Search

How to avoid duplicates when a log file is read from a URL using a scripted input?

vallurupalli
New Member

We are reading HTTP logs from a web URL using the curl command. The web server log is exposed at http://host/webserver.log and is read by a scripted data input every 5 minutes.

If the log file contains older entries along with new ones, Splunk re-indexes the old entries together with the new ones on every read. How can we avoid these duplicates when the log file is not rotated but keeps accumulating new entries alongside the old ones that were already read during a previous call?


gkanapathy
Splunk Employee

Your script must keep track of what has been read, and output only new items. If it is helpful and you are on version 5.0, you can use modular inputs, which provide a checkpointing facility that makes this tracking easier.

http://docs.splunk.com/Documentation/Splunk/5.0/AdvancedDev/ModInputsCheckpoint
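A minimal sketch of one way to do this tracking in a scripted input, assuming the web server honors HTTP Range requests; the URL and checkpoint path below are placeholders for illustration, not part of any Splunk API:

    import os
    import sys
    import urllib.error
    import urllib.request

    # Hypothetical values -- adjust for your environment.
    URL = "http://host/webserver.log"
    CHECKPOINT_FILE = "/opt/splunk/var/run/webserver_log.offset"

    def read_offset():
        # Load the byte offset saved by the previous run, or 0 on the first run.
        if os.path.exists(CHECKPOINT_FILE):
            with open(CHECKPOINT_FILE) as f:
                return int(f.read().strip() or 0)
        return 0

    def save_offset(offset):
        with open(CHECKPOINT_FILE, "w") as f:
            f.write(str(offset))

    def main():
        offset = read_offset()
        req = urllib.request.Request(URL)
        # Ask the web server for only the bytes we have not seen yet.
        req.add_header("Range", "bytes=%d-" % offset)
        try:
            resp = urllib.request.urlopen(req)
        except urllib.error.HTTPError as e:
            if e.code == 416:  # Range not satisfiable: no new data
                return
            raise
        new_data = resp.read()
        if resp.getcode() == 200:
            # Server ignored the Range header and returned the whole file,
            # so skip the part we already indexed.
            new_data = new_data[offset:]
        # Emit only the new lines to stdout for Splunk to index.
        sys.stdout.write(new_data.decode("utf-8", errors="replace"))
        save_offset(offset + len(new_data))

    if __name__ == "__main__":
        main()

Note this sketch does not handle rotation: if the file shrinks, the saved offset goes stale and would need to be reset. The modular inputs checkpointing described in the linked docs gives you a managed place to store this kind of state instead of a hand-rolled file.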

theouhuios
Motivator

You can use the sort command to bring the newest events to the top. Alternatively, you can restrict the timeframe by setting earliest and latest in your search query.

E.g.: earliest=-5h@h latest=@h --> returns data that occurred in the last 5 hours
