Hello-
My current setup:
Device Syslog --> Syslog Server w/ Splunk HvyFwd --> Splunk Indexer
When I restart my Heavy Forwarder server or Splunkd, it takes up to 30 minutes to begin forwarding syslogs to the indexer. Is this due to the number of devices and folders stored within the syslog server, and is there a way to speed this process up?
Thanks,
When you stop Splunk, the syslog server keeps receiving syslog data, so when you start Splunk again it is behind and has to catch up (which can take a while).
I would look at the queues on the HF and on the indexing tier to see where the bottleneck is.
I would also look in the internal logs for errors on the monitored paths (files that are too big, permission problems, and similar).
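As a quick way to do that scan, here is a minimal sketch that pulls candidate lines out of splunkd.log. The substrings searched for (TailReader, WatchedFile, permissions, size) are assumptions about what slow-startup errors might look like, not an exhaustive list, and the log path will depend on your install:

```python
# Hedged sketch: scan splunkd.log for messages commonly associated with
# file-monitoring trouble at startup. The pattern list is illustrative.
def find_monitor_errors(log_path,
                        patterns=("TailReader", "WatchedFile",
                                  "Insufficient permissions", "too large")):
    """Return log lines containing any of the given substrings."""
    hits = []
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if any(p in line for p in patterns):
                hits.append(line.rstrip())
    return hits

# Example (path is an assumption for a default Linux install):
# for line in find_monitor_errors("/opt/splunk/var/log/splunk/splunkd.log"):
#     print(line)
```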
Check the number of directories and files monitored by the forwarder:
$SPLUNK_HOME/bin/splunk list monitor
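If that list is long, it can help to know roughly how many directories and files the forwarder has to rescan at startup. A small sketch that walks the syslog tree and counts them (the root path is an assumed example, not your actual layout):

```python
import os

# Hedged sketch: count directories and files under the syslog root to
# gauge the rescan load at forwarder startup.
def count_tree(root):
    """Return (dir_count, file_count) for everything under root."""
    dirs = files = 0
    for _, subdirs, filenames in os.walk(root):
        dirs += len(subdirs)
        files += len(filenames)
    return dirs, files

# Example (assumed path):
# print(count_tree("/var/log/syslog-devices"))
```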
Can you share details of how you are determining the delay is on the HF? Why are you using an HF on the syslog server?
Thanks for the response!
I run the heavy forwarder on top of my syslog server, and it actively monitors the device directories. This was recommended to me as best practice a few years back. It works wonderfully; it just takes a while to start forwarding again after restarting the heavy forwarder.
I've monitored the syslog file of a device using tail -f and can see that it's being updated instantly. If I look in splunkd.log after a restart, it takes 15-20 minutes for it to work through all of the specified directories and report that it will begin monitoring them. My file hierarchy is as follows:
DeviceType/DeviceName/date.log
example: CiscoSwitch/Switch69/20200212.log
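One thing that might shorten that rescan is limiting how far back the monitor looks. A hedged inputs.conf sketch, assuming a date-per-file layout like the one above where old daily logs no longer change (the path and sourcetype are illustrative, and note that inputs.conf's `ignoreOlderThan` permanently skips a file once its modtime falls outside the window, so choose the threshold carefully):

```ini
# Illustrative monitor stanza -- adjust path/sourcetype to your setup.
[monitor:///var/log/syslog-devices]
# Skip files not modified in the last 7 days so startup scans less history.
ignoreOlderThan = 7d
recursive = true
sourcetype = syslog
```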