I figured this out: it wasn't applying the CSV extractions, so the header lines were getting included in the event data.
I solved it by switching the data types all to csv, but creating one custom CSV sourcetype with the necessary timestamp search modifications. ^.^
I had a similar issue, and the props and transforms below worked for me; in my scenario they ignored the lines starting with #:
props.conf
[<sourcetype>]
TRANSFORMS-ignore_comments = setnull
transforms.conf
[setnull]
REGEX = ^#
DEST_KEY = queue
FORMAT = nullQueue
These might be relevant to your issue:
http://blogs.splunk.com/2013/10/18/iis-logs-and-splunk-6/
http://blogs.splunk.com/2013/10/22/dropping-useless-headers-in-splunk-6/
I've wondered the same thing and have an idea, but haven't had a chance to try it: set up your transforms.conf to send lines beginning with '#' to nullQueue.
props.conf
[<sourcetype>]
SHOULD_LINEMERGE = false
TRANSFORMS-set = setnull,setparsing
transforms.conf
[setnull]
REGEX = ^#
DEST_KEY = queue
FORMAT = nullQueue
[setparsing]
REGEX = logit
DEST_KEY = queue
FORMAT = indexQueue
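The routing these transforms perform can be sketched in Python. This is a hypothetical simulation of the queue logic, not Splunk internals, and for simplicity it only models the setnull transform (lines matching ^# are sent to nullQueue and dropped; everything else reaches the index):

```python
import re

# Simulates the [setnull] transform above: lines matching the
# regex are routed to nullQueue (i.e. discarded before indexing).
SETNULL = re.compile(r"^#")

def route(lines):
    """Return only the lines that would reach the index queue."""
    indexed = []
    for line in lines:
        if SETNULL.match(line):
            continue  # routed to nullQueue, never indexed
        indexed.append(line)
    return indexed

# Hypothetical IIS-style log fragment: two comment headers, one event.
sample = [
    "#Software: Microsoft Internet Information Services 8.5",
    "#Fields: date time s-ip cs-method",
    "2013-10-18 12:00:01 10.0.0.1 GET",
]
print(route(sample))  # only the data line survives
```

Note that the transforms in TRANSFORMS-set are applied in order, so listing setnull before setparsing matters: events matching both regexes end up wherever the last matching transform routes them.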
What he said ^
Can you post an example? Do you want to not index those records, or just not have them show up in search?
Please don't post comments as answers; it irks me.