Splunk Search

How to avoid duplicates when a log file is read from a URL using a scripted input?

vallurupalli
New Member

We are reading HTTP logs from a web URL using the curl command. The web server log is exposed at http://host/webserver.log and is read by a scripted data input every 5 minutes.

If the log file contains older entries along with new ones, the next read loads the old entries again together with the new ones, and Splunk indexes them twice. How can we avoid these duplicates when the log file is not rotated but keeps accumulating new entries alongside the old ones that were already read during the previous call?


gkanapathy
Splunk Employee

Your script must keep track of what has already been read and output only new items. If it is helpful and you are on version 5.0, you can use modular inputs, which provide a checkpointing facility that makes this tracking easier.

http://docs.splunk.com/Documentation/Splunk/5.0/AdvancedDev/ModInputsCheckpoint
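To illustrate the tracking idea, here is a minimal sketch of a scripted input that persists a byte-offset checkpoint between runs and emits only lines it has not seen before. The URL, checkpoint path, and function names are illustrative assumptions, not something defined in this thread or by the Splunk checkpoint API.

```python
import os
import urllib.request

def read_offset(path: str) -> int:
    """Load the byte offset saved by the previous run (0 if none)."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return 0

def write_offset(path: str, offset: int) -> None:
    """Persist the offset for the next scheduled run."""
    with open(path, "w") as f:
        f.write(str(offset))

def extract_new(data: bytes, offset: int) -> tuple:
    """Return (new_bytes, new_offset); start over if the log shrank (rotation)."""
    if len(data) < offset:
        offset = 0
    return data[offset:], len(data)

def run_once(url: str, checkpoint: str) -> None:
    """One scripted-input invocation: print only lines not seen before."""
    data = urllib.request.urlopen(url).read()
    new, offset = extract_new(data, read_offset(checkpoint))
    for line in new.decode("utf-8", errors="replace").splitlines():
        print(line)
    write_offset(checkpoint, offset)

# The checkpoint logic on in-memory data: only the tail past the
# saved offset is emitted on the second read.
new, offset = extract_new(b"old line\nnew line\n", 9)
assert new == b"new line\n" and offset == 18
```

Each scheduled run would call something like `run_once("http://host/webserver.log", "/path/to/checkpoint")`; Splunk then only ever receives the bytes appended since the previous poll. Note this assumes the server returns the full log on each request and that the log only grows between rotations.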

theouhuios
Motivator

You can use the sort command to list the newest events first. Alternatively, you can restrict the timeframe by setting earliest and latest in your search query.

Eg: earliest=-5h@h latest=@h --> returns data that has occurred in the last 5 hours
