Greetings Splunkers & Splunkettes!
I have the following log entry:
124.180.34.147 2011-10-23 00:09:55 - /xxx_hrtv3mstream0_2011060809500030 0 6 1 200 {aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee} 5.2.43 en-AU WMPlayer/11.0.5721.5251 - netgem 5.2.43 NetgemOS 5.2.43 SD863x 0 0 497478 rtsp TCP - - - 376000 373109 47 0 0 0 0 0 0 0 0 0 100 192.148.158.3 - 189 - - 1060200800 - rtsp://xxx.xxx.xxx.com/xxx_hrtv3mstream0_2011060809500030 - - - - live_create - -
For some reason, this entry, and ONLY this entry, has the following timestamp:
10/23/11 8:50:00.300 PM
This is NOT the indexing time (I indexed it just now, on the 26th of October at 15:10 AEDT), and 8:50 (or any hourly offset thereof due to timezones) is not mentioned anywhere in the entry (including the parts I redacted).
My props.conf is nothing special:
[mms_export_e_wms_90]
pulldown_type = true
KV_MODE = none
SHOULD_LINEMERGE = false
TZ = UTC
TRANSFORMS-comment = hash_comment
REPORT-fields = mms_export_e_wms_90_fields
Transforms does nothing but sinkhole comments and search-time field definitions.
Can someone shed any light on this pretty please?
<scratching-head>
EDIT: Just thought I'd add that I am crcSalt'ing the input, and this is after flushing the event indexes, so this isn't an artifact from a previous indexing (if that's even possible).
After much to-and-fro, it turned out that I was editing the wrong props.conf, not realising that the timestamp was set on the heavy forwarder as opposed to the indexer (where I was editing the props.conf).
Thanks all for your help regardless 🙂
Hi, it may be possible that Splunk tries to interpret the date from the "first" timestamp and the time from a "second" timestamp-like string later in the event, in a less than perfect way:
124.180.34.147
2011-10-23 00:09:55 - /xxx_hrtv3mstream0_201106
0809500030 0 6 1 200 {
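To illustrate the theory (a rough Python sketch of the suspected misparse, not of Splunk's internals): the digits "09500030" inside the stream ID read naturally as 09:50:00.30, and since the posted props.conf sets TZ = UTC, 09:50 UTC on the 23rd would display as 8:50:00.300 PM in AEDT (UTC+11), which matches the bogus timestamp.

```python
from datetime import datetime, timedelta, timezone

# The stream ID in the URL path contains digits that look like a time:
# /xxx_hrtv3mstream0_2011060809500030
#                    ^^^^^^^^^^^^^^^^  20110608 | 09500030
suspect = "09500030"
t = datetime.strptime(suspect, "%H%M%S%f").time()
print(t)  # 09:50:00.300000

# With TZ = UTC in props.conf, 09:50:00.30 UTC on 2011-10-23 renders as
# 8:50:00.300 PM when viewed in AEDT (UTC+11), matching the bad timestamp.
utc = datetime(2011, 10, 23, 9, 50, 0, 300000, tzinfo=timezone.utc)
aedt = utc.astimezone(timezone(timedelta(hours=11)))
print(aedt.strftime("%I:%M:%S.%f %p"))  # 08:50:00.300000 PM
```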
You probably need to specify TIME_FORMAT, TIME_PREFIX, and possibly MAX_TIMESTAMP_LOOKAHEAD (for that source/sourcetype) in props.conf on your indexer.
My guess is that your props.conf should look something like;
[mms_export_e_wms_90]
TIME_FORMAT=%Y-%m-%d %H:%M:%S
TIME_PREFIX=^\d+\.\d+\.\d+\.\d+\s
MAX_TIMESTAMP_LOOKAHEAD=30
...
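For what it's worth, here's a rough Python sketch (not Splunk's actual parser; the event text is abbreviated) of what those three settings would constrain Splunk to do: skip past the client IP, look at most 30 characters ahead, and parse the timestamp with the given format:

```python
import re
from datetime import datetime

# Abbreviated version of the problem event
event = ("124.180.34.147 2011-10-23 00:09:55 - "
         "/xxx_hrtv3mstream0_2011060809500030 0 6 1 200 ...")

# TIME_PREFIX=^\d+\.\d+\.\d+\.\d+\s  -> skip the leading client IP
prefix = re.match(r"^\d+\.\d+\.\d+\.\d+\s", event)
rest = event[prefix.end():]

# MAX_TIMESTAMP_LOOKAHEAD=30 -> only consider this window after the prefix,
# which keeps the stream ID's digits out of timestamp recognition entirely
window = rest[:30]

# TIME_FORMAT=%Y-%m-%d %H:%M:%S -> parse exactly 19 characters of timestamp
ts = datetime.strptime(window[:19], "%Y-%m-%d %H:%M:%S")
print(ts)  # 2011-10-23 00:09:55
```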
Hope this helps,
/Kristian
I recommend setting MAX_TIMESTAMP_LOOKAHEAD = 30. The timestamp lookahead is simple to set, and it makes Splunk more efficient, because it won't scan the whole event for the timestamp. Once in a while, Splunk just doesn't seem to want to quit searching the event for more time info! 🙂