Getting Data In

splunk universal forwarder batch input forwarding but not deleting

stamstam
Explorer

Hi, we have an indexer cluster to which we index many, many small files; a few hundred thousand in total.
We run a universal forwarder on a strong machine (130 GB RAM, 24 CPUs) with a batch input on a local directory.
Our problem is as follows:
the data is indexed very slowly, and the batch input is also misbehaving a little.
It used to write a log line for every indexed file ("Batch input finished reading file..."), but now it writes a few, then stops, then continues to forward data but doesn't delete the files.
The only time we can see those log lines is when we turn on DEBUG level logging.
I have checked the logs and I don't have any blocked queues.

We would really appreciate it if anyone has a reasonable explanation for the problem I'm having, or could suggest another way of indexing this immense number of files.
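For reference, this kind of setup is a batch stanza in inputs.conf on the forwarder; a minimal sketch, where the directory path and sourcetype are placeholders, not the poster's actual values:

```ini
# inputs.conf on the universal forwarder
# [batch://...] reads each file once; with move_policy = sinkhole,
# Splunk deletes the file after it has been read.
# /data/incoming and the sourcetype are hypothetical placeholders.
[batch:///data/incoming]
move_policy = sinkhole
disabled = false
sourcetype = my_small_files
```

Note also that a universal forwarder ships with a default forwarding throughput cap (maxKBps in the [thruput] stanza of limits.conf), which is a common cause of slow indexing from a UF and may be worth checking here.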


somesoni2
Revered Legend

A few hundred thousand files can be too many for a single Universal Forwarder instance. Can you check the CPU percentage of the Splunk process on that box?
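One way to check, as a sketch for a Linux host (the process name splunkd is the usual one for a UF, but adjust if yours differs):

```shell
# Show CPU and memory usage of any splunkd processes;
# if none are found, fall back to listing the top CPU consumers.
ps -C splunkd -o pid,pcpu,pmem,etime,args --no-headers \
  || ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 10
```

A splunkd process pinned near 100% of one core would support the theory that a single UF instance is the bottleneck for this many files.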


splunker12er
Motivator

If your UF is running on Windows, there is a chance the files are being locked by their associated processes. You can run the procmon tool to analyze this.
Also check the file system permissions to verify that the UF has rights to delete the files.
