
Timestamp logic/config on forwarder or indexer?

oreoshake
Communicator

I looked at the report for timestamping errors and found a fair number of them. I’ve been following the Splunk blogs and saw Vi’s http://blogs.splunk.com/2010/03/02/guess-what-time-it-is/

We absolutely should turn off timestamp checking and just go with Splunk’s time, because the logs in question capture console output (rootsh specifically), which of course contains plenty of timestamps well in the past.

Do I update props.conf on the indexer or on the forwarder? We are using light forwarders, so I’m not sure whether the timestamp is extracted on the indexers or on the forwarders.


bwooden
Splunk Employee

The timestamp is applied during parsing. A light forwarder does not parse; it just adds a bit of metadata about the event’s source before sending it along.

If you want to use Splunk time for a specific data source, you would want to modify the props.conf file in the local directory of the system doing the parsing.

Below is an example I stole from $SPLUNK_HOME/etc/system/README/props.conf.example


# The following example turns off DATETIME_CONFIG (which can speed up indexing) from any path
# that ends in /mylogs/*.log.

[source::.../mylogs/*.log]
DATETIME_CONFIG = NONE
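
For the specific goal above (ignoring the old timestamps embedded in the rootsh console capture and using Splunk’s own time instead), a sketch like the following could go in props.conf on the system doing the parsing. The sourcetype name rootsh is an assumption; substitute whatever sourcetype or source:: stanza actually matches the console-capture data.

# Hypothetical stanza; "rootsh" is an assumed sourcetype name for the console-capture data.
# DATETIME_CONFIG = CURRENT stamps each event with the time Splunk processes it,
# ignoring any timestamps embedded in the event text.
[rootsh]
DATETIME_CONFIG = CURRENT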


gkanapathy
Splunk Employee

The time is extracted where the log data is parsed. This is on the indexer if you are using a lightweight forwarder, and on a forwarder if you are using a heavy forwarder. (Parsing the data is the essential difference between light and heavy forwarding.)

Update: I wrote this, which goes into a more thorough explanation.
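
As a practical sketch of where that configuration lives (default installation paths assumed):

# Light/universal forwarder in front -> the indexer parses    -> edit props.conf on the indexer
# Heavy forwarder in front           -> that forwarder parses -> edit props.conf on the heavy forwarder
#
# Typical location on the parsing instance (or an app’s local directory):
#   $SPLUNK_HOME/etc/system/local/props.conf
#
# Parse-time settings such as DATETIME_CONFIG generally require a restart of the
# parsing instance and only affect data indexed after the change.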

gkanapathy
Splunk Employee

Indexer CPU maxing out is not common for indexing data; it is common when searching, though. The solution is either more indexers or much faster indexer disks.
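
If it helps to see where the CPU is going, one common check (a sketch; restrict to your indexer hosts as appropriate) is Splunk’s own metrics.log, which records per-processor pipeline CPU:

index=_internal source=*metrics.log* group=pipeline
| timechart span=5m sum(cpu_seconds) by processor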


oreoshake
Communicator

Also, the flow from LWF => heavy forwarder => indexer is intriguing. We are having indexer CPU performance issues... in your experience, is this common?


oreoshake
Communicator

Thanks! That post is very helpful.
