Getting Data In

Monitor a File That's Being Purged Regularly

jepoyyyy
Explorer

Hi All,

I have a multi-tiered Splunk deployment and I am having some serious indexing lag from a remote host.

We have configured a forwarder to monitor a file that is purged every 30 minutes. After each interval, the contents of the file are written to an archive directory. The problem is that there is a significant lag before the data becomes searchable in Splunk. We sometimes see as much as a 5-hour indexing lag from that particular source. Checking it just now, the lag is down to 45 minutes, so it varies over time.

We're pretty sure it is not caused by an undersized Splunk infrastructure, because we also collect *nix stats (CPU, RAM, disk, etc.) from the same host and those events arrive in near real time.

Checking the forwarder's logs, we see this line from time to time:

WatchedFile - Checksum for seekptr didn't match, will re-read entire file="/some/file/name/file.log".

Is there an inputs.conf parameter I should use to monitor a file that is flushed regularly?
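For context, the input is configured with an ordinary monitor stanza along these lines (the path is the same placeholder as in the log message above; the index and sourcetype shown are illustrative, not our exact settings):

```
# inputs.conf on the forwarder
[monitor:///some/file/name/file.log]
index = main
sourcetype = app_log
```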

Any help would greatly be appreciated.

Kindest regards,
Jeff


jepoyyyy
Explorer

I found the root cause of this. The monitored file was simply too large for the forwarder's default bandwidth limit.

I increased maxKBps in limits.conf to accommodate the volume.
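For anyone hitting the same wall, the change was along these lines (the value below is illustrative; size it to your own data volume):

```
# limits.conf on the forwarder, e.g. $SPLUNK_HOME/etc/system/local/limits.conf
[thruput]
# Universal forwarders default to 256 KB/s; 0 removes the limit entirely.
maxKBps = 1024
```

A forwarder restart is required for the change to take effect.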

I hope this helps someone someday.

Kindest regards,
Jeff
