Timestamp logic/config on forwarder or indexer?

oreoshake
Communicator

I looked at the report on timestamping errors and found a fair number of them. I've been following the Splunk blogs and saw Vi's post: http://blogs.splunk.com/2010/03/02/guess-what-time-it-is/

We should absolutely turn off timestamp extraction and just go with Splunk's time, because the logs in question capture console output (rootsh specifically), which of course contains tons of timestamps well in the past.

Do I update props.conf on the indexer or the forwarder? We are using light forwarders, so I'm not sure whether the timestamp is extracted on the indexers or on the forwarders.

bwooden
Splunk Employee

The timestamp is applied during parsing, and a light forwarder does not parse; it adds just a bit of metadata about the source of the event before sending it along.

If you want to use Splunk time for a specific data source, you would modify the props.conf file in the local directory of the system doing the parsing.

Below is an example I stole from $SPLUNK_HOME/etc/system/README/props.conf.example:


# The following example turns off DATETIME_CONFIG (which can speed up indexing) from any path
# that ends in /mylogs/*.log.

[source::.../mylogs/*.log]
DATETIME_CONFIG = NONE
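
Riffing on that example for the rootsh case in the question: the stanza below is only a sketch, the source path is a placeholder, and using CURRENT (which stamps each event with the clock at parse time instead of reading a timestamp out of the event text) is an assumption rather than anything stated in the thread.

# Hypothetical stanza for the rootsh console-capture logs; adjust the
# path to wherever rootsh actually writes its session logs.
[source::.../rootsh/*.log]
DATETIME_CONFIG = CURRENT

Roughly speaking, NONE keeps whatever time the input layer assigned (file modification time, for monitored files), while CURRENT uses the parse-time clock; props.conf.spec documents the exact behavior for your version.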

gkanapathy
Splunk Employee

The time is extracted where the log data is parsed. This is on the indexer if you are using a lightweight forwarder, and on the forwarder if you are using a heavy forwarder. (Parsing the data is the essential difference between light and heavy forwarding.)
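
So for the light-forwarder setup in the question, the stanza belongs on the indexer. Here is a minimal sketch of the placement, assuming etc/system/local rather than an app's local directory, and reusing the stanza from the props.conf example above:

# On the indexer (timestamp settings take effect where parsing happens,
# so a light forwarder would ignore them):
# $SPLUNK_HOME/etc/system/local/props.conf
[source::.../mylogs/*.log]
DATETIME_CONFIG = NONE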

Update: I wrote this, which goes into a more thorough explanation.

gkanapathy
Splunk Employee

Maxing out indexer CPU is not common during indexing of data; it is common during searching, though. The solution is either more indexers or much faster indexer disks.

oreoshake
Communicator

Also, the LWF => heavy forwarder => indexer flow is intriguing. We are having indexer CPU performance issues... in your experience, is this common?
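
For reference, chaining the tiers like that is mostly a matter of pointing each tier's outputs.conf at the next hop. A rough sketch, with the hostnames and port 9997 as placeholders:

# outputs.conf on the light forwarder: send data to the heavy forwarder
[tcpout]
defaultGroup = heavy_forwarders

[tcpout:heavy_forwarders]
server = heavyfwd.example.com:9997

# inputs.conf on the heavy forwarder: listen for forwarded data
[splunktcp://9997]

# outputs.conf on the heavy forwarder: send parsed data on to the indexer
[tcpout]
defaultGroup = indexers

[tcpout:indexers]
server = indexer.example.com:9997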

oreoshake
Communicator

Thanks! That post is very helpful.
