Getting Data In

Timestamp parsing - Failed to parse errors

Derek
Path Finder

Hi,

I have a log that contains large multiline events such as:

---- DS log entry made at 08/29/2011 11:03:00
*** Log is continued from intermediate LogID [15cc1510] ***
Message E:\FILES\SPOOL\DOMAINS\TEST.COM\B0391744043.RCP queued for remote delivery to domain TEST.COM (.LCK).
Message E:\FILES\SPOOL\DOMAINS\TEST.COM\B0391744045.RCP queued for remote delivery to domain TEST.COM (.LCK).
Message E:\FILES\SPOOL\DOMAINS\TEST.COM\B0391744047.RCP queued for remote delivery to domain TEST.COM (.LCK).
Message E:\FILES\SPOOL\DOMAINS\TEST.COM\B0391744051.RCP queued for remote delivery to domain TEST.COM (.LCK).
Message E:\FILES\SPOOL\DOMAINS\TEST.COM\B0391744053.RCP queued for remote delivery to domain TEST.COM (.LCK).
Message E:\FILES\SPOOL\DOMAINS\TEST.COM\B0391744054.RCP queued for remote delivery to domain TEST.COM (.LCK).
Message E:\FILES\SPOOL\DOMAINS\TEST.COM\B0391744056.RCP queued for remote delivery to domain TEST.COM (.LCK).
Message E:\FILES\SPOOL\DOMAINS\TEST.COM\B0391744062.RCP queued for remote delivery to domain TEST.COM (.LCK).
*** Intermediate LogID [15cc03e0] will be continued later. ***

I would like each line that starts with "Message" to be its own event so that I can search and manipulate related events. But the individual lines, once broken out, don't carry a timestamp, so I get errors like:

DateParserVerbose - Failed to parse timestamp for event. Context="source::/opt/logs/OPR20110829-7.LOG|host::server1|mysourcetype|" Text="Message E:\FILES\SPOOL\DOMAINS\TEST.COM\B0391744043.RCP queued for remote delivery to domain"

Using props.conf of:

[mysourcetype]
LINE_BREAKER = ([\r\n]+)((----)|(\*\*\*\s)|(Message\s))
SHOULD_LINEMERGE = false
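As a sanity check, here is a rough Python emulation of that line-breaking rule applied to a few lines of the sample log. Note that the asterisks have to be escaped (`\*{3}` here), since a bare `***` is not a valid regex; the lookahead mimics Splunk discarding only the newlines (the first capture group) and keeping the following text as the start of the next event:

```python
import re

# A few lines from the sample log above.
SAMPLE = (
    "---- DS log entry made at 08/29/2011 11:03:00\n"
    "*** Log is continued from intermediate LogID [15cc1510] ***\n"
    r"Message E:\FILES\SPOOL\DOMAINS\TEST.COM\B0391744043.RCP queued for remote delivery to domain TEST.COM (.LCK)."
    "\n"
    r"Message E:\FILES\SPOOL\DOMAINS\TEST.COM\B0391744045.RCP queued for remote delivery to domain TEST.COM (.LCK)."
    "\n"
    "*** Intermediate LogID [15cc03e0] will be continued later. ***"
)

# Break on newlines only when the next line starts with "----", "*** ", or "Message ".
breaker = re.compile(r"[\r\n]+(?=----|\*{3}\s|Message\s)")
events = breaker.split(SAMPLE)
for event in events:
    print(event)
# Yields 5 events: the "----" header, the opening "***" line,
# the two "Message" lines, and the closing "***" line.
```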

When I look at the events in the search app, they appear in the correct order with a timestamp that, per the precedence rules in the documentation, seems to be inherited from the preceding event.

The source is /opt/logs/OPR20110829-7.LOG, and I also tried a custom filename timestamp, with no luck.

How can I suppress the warnings in the splunkd logs if the timestamp precedence is working fine for me?

1 Solution

hexx
Splunk Employee

I don't believe that it's a good idea to suppress those warnings, but if you really need to do so, you could edit $SPLUNK_HOME/etc/log.cfg and push the threshold of DateParserVerbose to ERROR by adding this line:

category.DateParserVerbose=ERROR

This requires a restart of splunkd to take effect.

Please note that doing this will also hide warnings of this type for all other sources, which might be undesirable.


hexx
Splunk Employee
Splunk Employee

These errors are logged when Splunk is unable to find a timestamp in the event and falls back on something else, such as the timestamp of the previous event from that source/host/sourcetype, or the current time.
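For what it's worth, in your sample only the "---- DS log entry made at ..." header lines carry a parseable timestamp; the per-"Message" events produced by your line breaker have none, which is why DateParserVerbose warns and the fallback kicks in. A small sketch of the header's timestamp format (the strptime pattern here is just my reading of your sample, not anything Splunk-specific):

```python
from datetime import datetime

header = "---- DS log entry made at 08/29/2011 11:03:00"

# The portion after "made at " follows the pattern %m/%d/%Y %H:%M:%S.
ts = datetime.strptime(header.split("made at ")[1], "%m/%d/%Y %H:%M:%S")
print(ts.isoformat())  # 2011-08-29T11:03:00
```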


Derek
Path Finder

Yes, I would rather not, but I haven't been able to get anywhere with it yet.

When exactly does splunkd log the error? Is it whenever it can't parse a timestamp from the raw event and has to fall back on the filename or indexing time?
