A typical Event (which has no line breaks):
HOSTVULN: HOST_ID=109436564, IP="10.1.40.106", TRACKING_METHOD="AGENT", OS="Windows 10 Enterprise 64 bit Edition Version 1803", DNS="410-dt-12345-04", NETBIOS="410-DT-12345-04", LAST_SCAN_DATETIME="2020-01-09T18:06:05Z", LAST_VM_SCANNED_DATE="2020-01-09T17:59:24Z", SEVERITY=4, QID="372286", TYPE="CONFIRMED", SSL="0", STATUS="FIXED", FIRST_FOUND_DATETIME="2019-12-14T02:23:09Z", LAST_FOUND_DATETIME="2019-12-19T20:16:45Z", TIMES_FOUND="36", LAST_TEST_DATETIME="2020-01-09T17:59:24Z", LAST_UPDATE_DATETIME="2020-01-09T18:06:05Z", LAST_FIXED_DATETIME="2019-12-20T00:39:31Z", IS_IGNORED="0", IS_DISABLED="0"
Splunk is currently deriving the index time from LAST_SCAN_DATETIME="2020-01-09T18:06:05Z". I assume this is because it is the first date/time in the event. Fair enough.
I have two issues to fix.
1. I would prefer Splunk to base the index time on the second date/time, LAST_VM_SCANNED_DATE="2020-01-09T17:59:24Z", so I have written a regex for props.conf to account for this, destined for the index cluster search peers.
2. All of the times in the events are GMT (my local time is Pacific), and the events are currently being indexed 8 hours "into the future". I want the events indexed at my correct local time. Again, I have tried to correct for this in props.conf, destined for the index cluster search peers.
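The 8-hour "future" offset falls out of the arithmetic: if a UTC wall-clock time is interpreted as Pacific local time, the resulting epoch value lands 8 hours later than the true instant (in January, when PST = UTC-8). A minimal Python sketch of the effect (this is just an illustration of the timezone math, not Splunk internals):

```python
# Why a UTC timestamp read as Pacific lands ~8 hours in the future.
from datetime import datetime
from zoneinfo import ZoneInfo

raw = "2020-01-09T18:06:05Z"
naive = datetime.strptime(raw, "%Y-%m-%dT%H:%M:%SZ")

as_utc = naive.replace(tzinfo=ZoneInfo("UTC"))
# America/Los_Angeles is the modern name for US/Pacific.
as_pacific = naive.replace(tzinfo=ZoneInfo("America/Los_Angeles"))

# Same wall-clock digits, but the Pacific interpretation is 8 hours
# later in absolute (epoch) terms in January (PST = UTC-8).
diff_hours = (as_pacific.timestamp() - as_utc.timestamp()) / 3600
print(diff_hours)  # 8.0
```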
My overall problem is that, although props.conf is successfully pushed to the index cluster search peers (via a cluster bundle), the configuration is being completely ignored by Splunk. I'm unsure whether the props.conf configuration is invalid, in the wrong location, or something else entirely.
Here is the props.conf that is on the indexers:
[qualys:hostDetection]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIME_PREFIX = ^.+LAST_VM_SCANNED_DATE="
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
TZ = GMT
MAX_TIMESTAMP_LOOKAHEAD = 22
category = Custom
pulldown_type = 1
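The extraction logic the stanza asks for can be sanity-checked outside Splunk. Below is a rough Python sketch of the mechanism (simplified, not Splunk's actual parser): TIME_PREFIX is a regex whose match end marks where TIME_FORMAT parsing begins, bounded by MAX_TIMESTAMP_LOOKAHEAD characters. The abridged event string is taken from the sample above. It shows that the regex and format do locate the intended LAST_VM_SCANNED_DATE value:

```python
# Sketch of TIME_PREFIX + TIME_FORMAT extraction (not Splunk internals).
import re
from datetime import datetime

# Abridged version of the sample event, keeping both timestamps.
event = ('HOSTVULN: HOST_ID=109436564, IP="10.1.40.106", '
         'LAST_SCAN_DATETIME="2020-01-09T18:06:05Z", '
         'LAST_VM_SCANNED_DATE="2020-01-09T17:59:24Z", SEVERITY=4')

TIME_PREFIX = r'^.+LAST_VM_SCANNED_DATE="'
TIME_FORMAT = "%Y-%m-%dT%H:%M:%SZ"
MAX_TIMESTAMP_LOOKAHEAD = 22

m = re.search(TIME_PREFIX, event)
window = event[m.end():m.end() + MAX_TIMESTAMP_LOOKAHEAD]
# Splunk's parser tolerates trailing text; strptime is strict, so for
# this sketch trim the window to the 20-character timestamp itself.
ts = datetime.strptime(window[:20], TIME_FORMAT)
print(ts.isoformat())  # 2020-01-09T17:59:24
```

Note that the greedy `^.+` backtracks until the literal `LAST_VM_SCANNED_DATE="` matches, so it correctly skips past the earlier LAST_SCAN_DATETIME value. This suggests the stanza itself is syntactically sound, pointing the search for the fault elsewhere in the pipeline.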
I was particularly concerned about the line:
TIME_PREFIX = ^.+LAST_VM_SCANNED_DATE="
and whether either of the last two characters needed to be escaped with a backslash, but no combination I tried worked.
Advice would be much appreciated. Thank you.
Hi untieshoe,
give this regex a try:
TIME_PREFIX = LAST_VM_SCANNED_DATE="
Also check on one of the indexers whether your config gets applied, with this command:
splunk btool props list --debug
Hope this helps ...
cheers, MuS
Thank you MuS for the suggestions. The suggested regex did not affect the outcome in any way, unfortunately. btool indicates that the props.conf stanza is being included on the indexer(s).
Ultimately the problem turned out to be caused by a previously undiscovered props.conf file, buried within an app on an upstream heavy forwarder, complete with conflicting and overriding configurations.
The props file on the heavy forwarder contained (among other things):
TZ=US/Pacific
which is problematic for my purpose. Pacific is my local time zone, but it does not correspond to the time fields within the forwarded events (which are all UTC). So the solution was to alter the props.conf file in the app on the heavy forwarder so that we have:
TZ=UTC
This causes the UTC heavy forwarder data to be correctly indexed in my local Pacific time zone. Problem solved.
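To make the fix concrete: with TZ=UTC the raw timestamp is read as UTC, so the stored epoch value is correct, and a Pacific user then sees it rendered at the right local wall-clock time. A small Python sketch of that parse-as-UTC, display-as-local behavior (again just illustrating the timezone math, not Splunk itself):

```python
# Parse the raw timestamp as UTC, then render it in Pacific local time.
from datetime import datetime
from zoneinfo import ZoneInfo

raw = "2020-01-09T17:59:24Z"
utc_time = datetime.strptime(raw, "%Y-%m-%dT%H:%M:%SZ").replace(
    tzinfo=ZoneInfo("UTC"))
# America/Los_Angeles is the modern name for US/Pacific.
local = utc_time.astimezone(ZoneInfo("America/Los_Angeles"))
print(local.isoformat())  # 2020-01-09T09:59:24-08:00
```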
As for the other issue (wrong field used as the basis of index-time extraction), this too was caused by a conflicting setting in the props.conf file on the heavy forwarder. Again, fixed in the heavy forwarder's props.conf file.
I thank MuS for setting me on the right track for a solution, and especially for the regex structure suggestion which was correct:
TIME_PREFIX=LAST_VM_SCANNED_DATE="
This props.conf config works on the provided example:
SHOULD_LINEMERGE=false
NO_BINARY_CHECK=true
LINE_BREAKER=([\r\n]+)
TIME_PREFIX=LAST_VM_SCANNED_DATE="
TIME_FORMAT=%Y-%m-%dT%H:%M:%SZ
TZ=UTC
So let's step back and check some basics:
This will give you some starting points to troubleshoot this.
cheers, MuS
I've implemented your props suggestion MuS. Again, no joy unfortunately.
Thanks for the suggestions!
I have completed a rolling restart of the index cluster, but there has been no change in behavior. The time zone is still incorrect, and the wrong time field in the data is still being used for timestamping.
On a side note, out of curiosity I went searching for 'future data' after implementing the datetime.xml patch in December 2019 to fix the 'Splunk year 2020 bug'. I had not expected to find any future data and was surprised to find three sources with similar but different timestamp issues. The one I'm struggling with here is the last one left to fix, and it is very similar to one of the others. That other source simply needed TZ=UTC in props.conf for the affected sourcetype; this one is trickier in that the wrong time field is being extracted from the raw data. My dilemma is whether the new datetime.xml file has actually introduced this problem. I can't prove it one way or the other without a lot of work...
I don't think this is related; it sounds more like your events are either not being parsed at all or the config is not being applied. You might want to switch over to the community Slack channel and ask in #getting-data-in or #admin.
cheers, MuS
Not parsed or not applied? I'm not quite sure what you mean. The data is definitely ingested with the correct sourcetype, and the ingested data is searchable from the search head. Splunk is extracting fields within the data at search time...
I will admit to being new at props.conf, so there may yet be something basic I'm overlooking that you might take for granted, or assume I'm aware of... But I appreciate your help.