If the files/directories are not accessible, Splunk drops them from its monitoring list, and they will not be monitored again until Splunk restarts (at which point it re-evaluates the monitoring list; the same behavior applies to the ignoreOlderThan attribute). So yes, it's expected.
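For reference, ignoreOlderThan is set per monitor stanza in inputs.conf on the forwarder; a minimal sketch, where the monitored path is just a placeholder:

[monitor:///var/log/myapp]
# Files whose modification time exceeds this age are dropped from
# the monitoring list and are not re-checked until Splunk restarts.
ignoreOlderThan = 7d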
OK, is it a feature or a bug? It seems to me that if permissions change, the forwarder should pick up those changes at some point without being bounced.
It's neither a feature nor a bug; it's just how it works. You can submit a P4 enhancement request if you think there is room for improvement.
Fair enough, but to me it's obviously a bug. The problem is how an admin can detect that permissions were changed. We have constant issues with this, and only after a major commotion, like today's, do we fix them by bouncing the forwarder. How can I, as an admin, detect something like that?
Is there a way to detect any of these underlying OS changes in the Splunk logs?
Things like access denied should be logged in the forwarder's splunkd.log.
That's great, so we might need "just" an alert based on this logging.
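Something along these lines might work as an alert search; a sketch, assuming the forwarders send their _internal logs to the indexers, and noting that the exact message text and component name vary by Splunk version:

index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR) "Insufficient permissions"
| stats count by host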
I agree, its pain when you have hundreds if not thousands of forwarders managing and requesting permissions for the files to the id that splunk forwarder is running as. We were in the same situation and came a long way since we have convinced our management to let our Splunk forwarders run as root. Which made things lot easier.
Oh man - I don't like this solution at all ; -) it just illustrates what a software limitation/bug forces us to do.
Is there maybe anything like http://<host>:8000/en-US/debug/refresh for the forwarder?
You may try this CLI command:
$SPLUNK_HOME/bin/splunk _internal call /configs/conf-inputs/_reload
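If the management port requires authentication, credentials can be passed inline with the standard -auth flag; a sketch with placeholder credentials:

$SPLUNK_HOME/bin/splunk _internal call /configs/conf-inputs/_reload -auth admin:changeme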
Interesting