We have a universal forwarder installed on a VM server (40 GB hard drive). When the service went down yesterday, the logs started to queue up on the server as expected, but it took so long to get the service back up and running that the hard drive filled up. How can I set a hard cap on the size of logs the universal forwarder can queue up before it starts to purge the older logs?
Use case:
If the service fails, queue logs up to 10 GB (or 25% of free space, or some other static limit); once that limit is reached, purge old logs to make room for new logs until the service is restored.
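As a generic sketch of the purge policy described above (this is not a built-in Splunk setting; the function name and directory are hypothetical, and it assumes the queued logs are individual files in one directory):

```python
import os

def enforce_quota(log_dir, max_bytes):
    """Delete the oldest files in log_dir until the total size
    is at or below max_bytes. Returns the remaining total size."""
    files = [os.path.join(log_dir, name) for name in os.listdir(log_dir)]
    files = [path for path in files if os.path.isfile(path)]
    # Oldest first, by modification time, so new logs are kept.
    files.sort(key=os.path.getmtime)
    total = sum(os.path.getsize(path) for path in files)
    for path in files:
        if total <= max_bytes:
            break
        total -= os.path.getsize(path)
        os.remove(path)
    return total
```

Something like this could run from cron with, say, `max_bytes = 10 * 1024**3` to approximate the 10 GB cap; the 25%-of-free-space variant would compute the cap from `shutil.disk_usage` instead.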
Any help would be greatly appreciated! Thanks!
I was able to alleviate this issue by removing the forwarder from this server and installing Snare for the time being. This is not an ideal solution, but it does resolve the current issue.
I doubt that your problem is the Splunk instance, because the UF's output queue is only 500 KB and is held in memory.
Please check:
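For context, the in-memory output queue is controlled per tcpout group in outputs.conf on the forwarder; a minimal sketch (group name, server, and value are illustrative):

```ini
# outputs.conf on the universal forwarder (illustrative values)
[tcpout:primary_indexers]
server = indexer.example.com:9997
# In-memory output queue; raising this only buys seconds during an
# outage, it does not create a disk-backed buffer of the source logs.
maxQueueSize = 500KB
```

This is why a full disk points at the source files themselves rather than at the forwarder's queue.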
The logs that built up were system logs that the forwarder was monitoring; they accumulate on disk when the forwarder isn't working. I need to either make the forwarder stop queuing logs to send at a certain point, or roll the logs at a certain point. Does that make more sense? The issue is not with logs generated by the forwarder, but with the system logs to be forwarded, which sit on disk until the forwarder can send them.
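Since the buildup is in the monitored system logs themselves, the size cap has to be enforced on the OS side rather than in the forwarder. On a Linux host this is typically done with logrotate; a sketch, assuming the monitored files live under a hypothetical /var/log/app/ and the limits are illustrative:

```ini
# /etc/logrotate.d/app-logs -- path and limits are assumptions
/var/log/app/*.log {
    size 1G        # rotate once a file reaches 1 GB
    rotate 9       # keep at most 9 rotated copies (~10 GB worst case)
    compress
    missingok
    notifempty
    copytruncate   # truncate in place so the monitoring process
                   # keeps a valid file handle on the live log
}
```

Note the trade-off: any lines written during an extended outage that get rotated away before the forwarder recovers are lost, which is exactly the "purge old logs" behavior described in the use case. If these are Windows Event Logs (the mention of Snare suggests they might be), the equivalent control is the event log's maximum size and overwrite-oldest policy.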