Hi folks.
We have an entry in props.conf to parse our custom datestamps (format: YYYYMMDD HHMMSS.nnn) as follows:
MAX_TIMESTAMP_LOOKAHEAD = 50
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y%m%d %H%M%S.%3N
TIME_PREFIX = ^
pulldown_type = 1
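As a quick sanity check that the TIME_FORMAT matches the datestamp layout, here is a Python sketch. Python's strptime has no %3N directive (that is a Splunk extension for milliseconds), so %f, Python's fractional-seconds directive, stands in for it here; this is only a format check, not Splunk configuration.

```python
from datetime import datetime

# A well-formed sample in the YYYYMMDD HHMMSS.nnn layout.
ts = "20130805 160141.074"

# %f accepts 1-6 fractional digits, so ".074" parses as 74000 microseconds.
parsed = datetime.strptime(ts, "%Y%m%d %H%M%S.%f")
print(parsed.isoformat())  # 2013-08-05T16:01:41.074000
```

If a line does not start with a datestamp in exactly this layout (as with stack-trace continuation lines), strptime raises a ValueError, which mirrors the "cannot parse timestamp" symptom on the forwarder.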
Quite often, our Splunk heavy forwarder reports that it cannot parse timestamps for these sourcetypes. What I see in the source logfiles is that some events span multiple lines (in the form of Java stack traces). In that case, should we change SHOULD_LINEMERGE from false to true? Or do we need a far more complex regex to indicate the start of a new event?
Example (seriously munged and truncated, but just to give you an idea):
0130805 160141.074 some message about an error
com.stuff.api. Transactior timedout
at com.stuff.exception
at com.stuff.exception
at com.stuff.exception
at com.stuff
at com.stuff
at com.stuff
If you don't want Splunk to get confused about which timestamp to assign to the subsequent lines of a multiline event, you need to set SHOULD_LINEMERGE to true. The very fact that your logs contain multiline entries supports this. With SHOULD_LINEMERGE set to false, Splunk treats each line as an individual event, which you won't want here unless you know exactly what you're doing and why.
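A minimal props.conf sketch for this might look like the following. The stanza name [your_sourcetype] is a placeholder for your actual sourcetype, and this assumes BREAK_ONLY_BEFORE_DATE suits your traces: with SHOULD_LINEMERGE = true, Splunk re-merges lines into events and, with BREAK_ONLY_BEFORE_DATE = true, starts a new event only at a line that begins with a date, so the "at com.stuff..." continuation lines stay attached to the event above them.

```ini
[your_sourcetype]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
TIME_PREFIX = ^
TIME_FORMAT = %Y%m%d %H%M%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 50
```

If BREAK_ONLY_BEFORE_DATE proves too loose for your data, the alternative is a regex-based approach (e.g. BREAK_ONLY_BEFORE with a pattern matching your datestamp), but try the simple setting first.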