We use DB Connect to import log data from several databases into Splunk, running each input every 60 seconds to keep Splunk up to date. This worked fine until we rebooted the Splunk machine it runs on: afterwards we noticed CPU spikes every minute. All inputs that run every 60 seconds now start at the same time (as documented), which is impractical and inefficient. We have worked around this by giving each input a slightly different interval (60 seconds / 62 seconds / 64 seconds / etc.). It would be more practical if we could either define a cron schedule in seconds (which cron does not support) or define a delay for an input so that it does not start immediately after a Splunk restart but waits for a specific time. That would let us spread the inputs across the minute and thus balance the load on the CPU.
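The staggering workaround described above can be sketched directly in the DB Connect input configuration. The input and connection names below are hypothetical, and the exact stanza syntax varies between DB Connect versions, so treat this purely as an illustration of the offset intervals:

```ini
# db_inputs.conf (illustrative; stanza/attribute names depend on the
# DB Connect version in use). The point is the slightly different
# interval values that keep the inputs from all firing together.

[orders_log]
connection = orders_db
interval = 60

[payments_log]
connection = payments_db
interval = 62

[audit_log]
connection = audit_db
interval = 64
```

After a restart all three inputs still begin at the same moment, but because their periods differ by 2 seconds they drift apart over successive runs, which spreads the CPU load over the minute as described above.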
Alternatively, something like the schedule window that exists for scheduled reports could be offered for DB Connect inputs as well.
In case it helps.
We had a similar issue with OPSEC grabbing data every 30 seconds.
There was too much data to process: before one batch was finished, a new process was started, driving up CPU usage.
We increased the time between grabbing the data to 300 seconds.
Hi,
thank you for your comment. We prefer to get the data a little sooner, as we have some alerting set up based on this data. We cannot stretch the intervals much, as the data in SQL should be picked up within 60 seconds of being created.
Setting up the inputs with different intervals has helped, but it is not a very intuitive solution.
Matthijs