Monitoring Splunk

Delay when monitoring thousands of files

katalinali
Path Finder

I am monitoring several thousand files in Splunk, but I find that new events take more than 30 minutes to be indexed. I have set the following:

[inputproc]
max_fd = 256
time_before_close = 2

but it has not improved the situation. Are there any other ways to solve this?

Mick
Splunk Employee

Yes, remove the files that are no longer being updated, or blacklist files that you are not actually interested in.
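For example, a monitor stanza in inputs.conf can blacklist files by regular expression so the tailing processor stops re-checking them. This is only a minimal sketch; the /var/log/app path and the patterns are illustrative, not from the original post:

# inputs.conf -- illustrative sketch; path and regex are assumptions
[monitor:///var/log/app]
# Skip rotated and compressed copies that will never receive new events
blacklist = \.(gz|bz2|zip|bak)$

The blacklist value is a regular expression matched against the full path of each file under the monitored location.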

The monitor input was designed to pick up data as it is added to a file, so simply enabling it for thousands of static files is actually using it in the wrong way, as it will always go back and check files to see if they have been updated.

Using this method for a first-time load is fine, as long as you update your inputs once that initial data load is complete. Leaving it in place for a few hundred files is also fine, as Splunk can check that many files relatively quickly. As you increase the number of files being monitored, however, you slow down how quickly new data is picked up.
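If most of those files are a completed historical load, one option (a sketch, assuming a Splunk version that supports ignoreOlderThan and the same hypothetical path as above) is to have the monitor skip files that have not been modified recently, so only the active files keep getting polled:

# inputs.conf -- illustrative sketch; path and threshold are assumptions
[monitor:///var/log/app]
# Ignore files whose modification time is more than 7 days old, so
# static files from the initial load are no longer re-checked
ignoreOlderThan = 7d

Alternatively, a one-time load can be done with the splunk add oneshot CLI command or with a batch (sinkhole) input, so nothing is left on the monitoring list afterwards; note that a sinkhole input deletes the files after indexing.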

I suspect that you are actually monitoring more files than you think, or perhaps you are using an NFS mount; network latency is also an important factor.
