Okay, this sounds like a conceptual/design problem rather than an actual Splunk problem.
There are options that can help you increase the performance of the universal forwarder, such as splitting the inputs stanza into several more specific paths and adding an explicit sourcetype and index name to each stanza. Using one single input stanza for such an amount of data raises other concerns: is all the data going into the same index, and if not, who is doing the parsing and rewriting the index name? Setting these per stanza lets the UF handle this very efficiently.
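As a rough sketch of what that could look like in inputs.conf on the UF (paths, index names, and sourcetypes below are made up for illustration, not taken from your setup):

```ini
# One specific stanza per data source instead of a single catch-all monitor.
# The "..." in a monitor path tells Splunk to recurse into subdirectories.
[monitor:///var/log/app_a/...]
index = app_a
sourcetype = app_a:log
disabled = false

[monitor:///var/log/app_b/...]
index = app_b
sourcetype = app_b:log
disabled = false
```

This way the UF tags each event with the right index and sourcetype at input time, and nothing downstream has to rewrite them.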
Also, by using a batch stanza you depend on the IOPS performance of the disk, because each file is deleted after it has been read, which adds wait time to the overall throughput.
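To make the difference concrete, compare a batch (sinkhole) input, which deletes every file it reads, with a monitor input, which tracks files and leaves them in place (paths and names again just for illustration):

```ini
# batch input: each file is read once and then deleted from disk,
# so ingestion speed is coupled to the disk's delete/write IOPS
[batch:///data/incoming/*.log]
move_policy = sinkhole
index = main
sourcetype = my:data

# monitor input: files are tracked via the fishbucket and left in place,
# avoiding the per-file delete overhead
[monitor:///data/incoming/*.log]
index = main
sourcetype = my:data
```

If you don't strictly need the files removed after ingestion, a monitor stanza avoids that delete overhead entirely.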
The current design makes it really hard for the universal forwarder to perform at its best, and adding more and more data will only make it worse.
Ask yourself whether this is the best way for the universal forwarder to read the amount of data you produce, and then change the way the data gets ingested.
cheers, MuS