Monitoring Splunk

Delay when monitoring thousands of files

katalinali
Path Finder

I am monitoring several thousand files in Splunk, but I find it takes more than 30 minutes for new events to be indexed. I have set the following lines:

[inputproc]
max_fd = 256
time_before_close = 2

but this has not improved the situation. Are there any other ways to solve it?

Mick
Splunk Employee

Yes, remove the files that are no longer being updated, or blacklist files that you are not actually interested in.
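
For example, assuming a hypothetical monitored directory of /var/log/app, the blacklist is just a regular expression in the monitor stanza in inputs.conf:

[monitor:///var/log/app]
# Skip rotated or compressed copies that will never receive new events
blacklist = \.(gz|bz2|zip|\d+)$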

The monitor input was designed to pick up data as it is added to a file, so simply enabling it for thousands of static files is actually using it in the wrong way, as it will always go back and check files to see if they have been updated.

Using this method for a first-time load is fine, as long as you update your inputs once that initial data load is complete. Leaving it in place for a few hundred files is also fine, as Splunk can check that many files relatively quickly. As you increase the number of files being monitored, however, you slow down how quickly new data is picked up.
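
As a sketch (again assuming a hypothetical path), you can also tell the monitor to stop re-checking files that have not been written to recently with ignoreOlderThan:

[monitor:///var/log/app]
# Do not keep checking files whose modification time is older than 7 days
ignoreOlderThan = 7d

Note that files older than the threshold are skipped entirely, so only add this once the initial data load is complete.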

I suspect that you are actually monitoring more files than you think, or perhaps you are using an NFS mount; network latency is also an important factor.
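
One way to confirm exactly what the tailing processor is tracking is the Splunk CLI, run from $SPLUNK_HOME/bin on the instance doing the monitoring:

# List the configured monitor inputs
./splunk list monitor

# Show the status of every file the tailing processor is watching
./splunk list inputstatus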
