We've been experiencing latency and are trying to figure out ways to solve it.
We forward events to a Windows Event Collector (WEC), which runs a Splunk Universal Forwarder.
Our inputs.conf looks something like this:
[WinEventLog://ForwardedEvents]
sourcetype = WinEventLog:ForwardedEvents
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
index = wineventlog
renderXml = false
suppress_text = 0
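One setting in this stanza that is a common latency contributor is evt_resolve_ad_obj = 1, which makes the input resolve Active Directory objects (SIDs and GUIDs) into names at collection time, adding a lookup per event. As a sketch of one thing to try, not a definitive fix for your latency, the stanza with resolution disabled would look like:

```ini
[WinEventLog://ForwardedEvents]
sourcetype = WinEventLog:ForwardedEvents
disabled = 0
start_from = oldest
current_only = 0
# AD object resolution does a directory lookup per event;
# setting it to 0 trades readable account names for throughput
evt_resolve_ad_obj = 0
checkpointInterval = 5
index = wineventlog
renderXml = false
suppress_text = 0
```

Whether losing resolved account names is acceptable depends on your use case, so test this on one collector first.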
We were then instructed to change start_from = oldest to start_from = newest.
After restarting the UF on the Windows Event Collector, the newest events from that point in time were searchable.
But new events arriving *after* that point in time were not.
Has anyone else experienced this behavior?
When will Splunk catch up?
Read the spec for start_from in detail. The second bullet is the relevant one for your question, but the entire context is important too.
start_from = <string>
* How the input should chronologically read the Event Log channels.
* If you set this setting to "oldest", the input reads Windows event logs
from oldest to newest.
* If you set this setting to "newest" the input reads Windows event logs
in reverse, from newest to oldest. Once the input consumes the backlog of
events, it stops.
* If you set this setting to "newest", and at the same time set the
"current_only" setting to 0, the combination can result in the input
indexing duplicate events.
* Do not set this setting to "newest" and at the same time set the
"current_only" setting to 1. This results in the input not collecting
any events because you instructed it to read existing events from oldest
to newest and read only incoming events concurrently (A logically
impossible combination.)
* Default: "oldest".
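Going by the spec above, "newest" reads the backlog in reverse and then stops, which matches the behavior you saw. If the actual goal is to skip the old backlog and index only events that arrive while the forwarder is running, the combination the spec points at would be start_from = oldest together with current_only = 1. A minimal sketch (assuming you are willing to skip historical events entirely):

```ini
[WinEventLog://ForwardedEvents]
sourcetype = WinEventLog:ForwardedEvents
disabled = 0
# oldest + current_only = 1: ignore the existing backlog and
# collect only events that arrive while Splunk is running
start_from = oldest
current_only = 1
index = wineventlog
```

After the backlog pressure is gone you could switch back to current_only = 0 if you eventually need the historical events, at the risk of duplicates noted in the spec.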
Link: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf
OK, so if I read this correctly: after the newest events are indexed, no new events will be indexed until you change the setting back to start_from = oldest again?
Is my understanding correct?
I think so because I am facing the same issue.