Hi All,
I've been thinking for some time that I am not getting the performance I should be out of my Splunk setup and I'd like some advice for troubleshooting.
Currently, I have a single Search Head, which is searching across two Indexers (non-clustered) and a few Universal Forwarders.
They are all VMs, each with 4 vCPUs and 10 GB RAM, and they are idle most of the time. Setting the complete setup aside for now, I am focusing on a test in which I stopped the services on Indexer 1 and cleared its indexes, including the fishbucket index.
I have three inputs for Event Logs:
- Application log going to an 'application' index
- Security log going to a 'security' index
- System log going to a 'system' index
All three indexes are defined in indexes.conf.
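For context, here's roughly what the indexes.conf stanzas look like - the path settings shown are the Splunk defaults, not necessarily my exact values:

```
# indexes.conf - minimal sketch; homePath/coldPath/thawedPath shown
# here follow Splunk's default layout under $SPLUNK_DB
[application]
homePath   = $SPLUNK_DB/application/db
coldPath   = $SPLUNK_DB/application/colddb
thawedPath = $SPLUNK_DB/application/thaweddb

[security]
homePath   = $SPLUNK_DB/security/db
coldPath   = $SPLUNK_DB/security/colddb
thawedPath = $SPLUNK_DB/security/thaweddb

[system]
homePath   = $SPLUNK_DB/system/db
coldPath   = $SPLUNK_DB/system/colddb
thawedPath = $SPLUNK_DB/system/thaweddb
```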
I have configured inputs.conf to collect the local Event Logs on Indexer 1, so when I restart the services I expect it to start filling the three indexes defined above from the local Event Logs of the same names. This does happen, but really slowly - it indexed about 300 events in 45 minutes.
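The inputs.conf is along these lines, using the standard WinEventLog modular input (exact stanza contents are from memory, so treat this as a sketch):

```
# inputs.conf - Windows Event Log inputs mapped to the three indexes above
[WinEventLog://Application]
index = application
disabled = 0

[WinEventLog://Security]
index = security
disabled = 0

[WinEventLog://System]
index = system
disabled = 0
```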
Does anyone have any idea why this would be slow and how I might troubleshoot it?
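For what it's worth, one way I've been watching the throughput is via the indexer's own metrics - something like the following search (the per_index_thruput group in metrics.log is standard, though the exact fields you want to chart may vary):

```
index=_internal source=*metrics.log group=per_index_thruput
| timechart span=1m sum(kb) by series
```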
Many thanks!
M
OK, so after digging around it appears that it was indexing correctly - it just wasn't re-reading old event logs, because of checkpoint files being present under:
$SPLUNK_HOME\var\lib\splunk\modinputs\WinEventLog\
Once I cleared these out everything started working as expected 🙂
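In case it helps anyone else, the cleanup amounted to stopping Splunk, deleting the WinEventLog checkpoint files, and restarting - a sketch of the commands, assuming a default install path of C:\Program Files\Splunk:

```
REM Stop Splunk so the checkpoint files aren't in use
"C:\Program Files\Splunk\bin\splunk" stop

REM Delete the WinEventLog checkpoint files so the logs are re-read from scratch
del /q "C:\Program Files\Splunk\var\lib\splunk\modinputs\WinEventLog\*"

REM Restart Splunk; the Event Log inputs will now re-index from the beginning
"C:\Program Files\Splunk\bin\splunk" start
```

Note this will cause the full Event Logs to be re-indexed, which counts against your license volume.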