Monitoring Splunk

Delay when monitoring thousands of files

katalinali
Path Finder

I am monitoring several thousand files in Splunk, but I find that new events take more than 30 minutes to be indexed. I have set the following lines:

[inputproc]
max_fd = 256
time_before_close = 2

but this has not improved the situation. Are there any other ways to solve this?

Mick
Splunk Employee

Yes, remove the files that are no longer being updated, or blacklist files that you are not actually interested in.
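For example, a monitor stanza in inputs.conf can exclude files by regex with blacklist, and ignoreOlderThan (if your Splunk version supports it) tells the tailing processor to skip files whose modification time is older than the given age. The path and patterns below are placeholders for illustration, not a recommendation for your environment:

[monitor:///var/log/myapp]
# skip file types we are not actually interested in (hypothetical patterns)
blacklist = \.(gz|zip|bak)$
# stop checking files that have not been modified in the last 7 days
ignoreOlderThan = 7d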

The monitor input was designed to pick up data as it is added to a file, so simply enabling it for thousands of static files is actually using it in the wrong way, as it will always go back and check files to see if they have been updated.

Using this method for a first-time load is fine, as long as you update your inputs once that initial data load is complete. Leaving it in place for a few hundred files is also fine, as Splunk can check that many files relatively quickly. As you increase the number of files being monitored, however, you slow down how quickly new data is picked up.
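For that initial one-time load, a batch input (or the oneshot CLI command) is usually a better fit than monitor, because each file is indexed once and never re-checked. Be aware that a batch input with move_policy = sinkhole deletes files after indexing them, so only point it at copies you can afford to lose. A rough sketch, with a placeholder path:

[batch:///data/archive/old_logs]
# index each file once, then delete it (sinkhole behaviour)
move_policy = sinkhole
disabled = false

A single file can also be loaded once from the CLI with something like: ./splunk add oneshot /data/archive/old_logs/app.log -index main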

I suspect that you are actually monitoring more files than you think, or perhaps you are using an NFS mount; network latency is also an important factor.
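To confirm how many files the tailing processor is actually tracking, you can list the monitored inputs from the CLI, and optionally query the input status endpoint (the exact endpoint path may vary by version; credentials and host below are placeholders):

# list every file and directory covered by monitor stanzas
./splunk list monitor

# per-file tailing status from the management port
curl -k -u admin:changeme https://localhost:8089/services/admin/inputstatus/TailingProcessor:FileStatus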
