Real novice here. I am ingesting a sourcetype into Splunk, and want to filter out any events with the word "FAILED" right after the first IP address.
Below is my props in etc\apps\search\local
[sslbcoat1]
DATETIME_CONFIG =
NO_BINARY_CHECK = true
category = Custom
pulldown_type = true
TRANSFORMS-null = setnull
and then here's my transforms where I THOUGHT I was carving out failures and sending them to null:
[setnull]
REGEX = ^.+(F...).+$
DEST_KEY = queue
FORMAT = nullQueue
The regex might be wrong, but somehow I don't think that's the big problem. Thanks, and any insight would be appreciated. Also, every time I try to save props.conf it says I do not have permission to save, even though I'm a full admin; I have to fully unlock the directory to save. Is there a way to avoid that? Thanks.
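For reference, here is a quick way to sanity-check patterns like this outside Splunk before putting them in transforms.conf (both sample lines below are invented for illustration; they just mimic the log shape):

```python
import re

# The original pattern: any "F" followed by three characters, anywhere.
broad = re.compile(r"^.+(F...).+$")
# An IP-anchored alternative: "FAILED" immediately after an IP address.
strict = re.compile(r"\d+\.\d+\.\d+\.\d+\sFAILED")

failed_evt = "2014-04-17 13:50:04 1 10.0.5.20 FAILED - - - none"
# An otherwise-good event whose hostname happens to contain an "F":
good_evt = "2014-04-17 13:49:32 2910 10.0.5.21 TUNNELED - FTP-GW none"

assert broad.search(failed_evt)      # intended match
assert broad.search(good_evt)        # false positive: the "F" in "FTP-GW"
assert strict.search(failed_evt)     # intended match
assert not strict.search(good_evt)   # no longer matches the good event
```

The broad pattern would nullQueue any event containing an "F" with a few characters after it, which is why anchoring to the IP address is safer.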
I have a slightly different scenario but am facing a similar issue. Seeking your help.
We are integrating JSON logs via HEC into a Splunk heavy forwarder.
I have tried the configurations below. I am applying the props for the source. In transforms.conf there are different regexes: I want to route events to different indexes based on the log file, and route all the other files that are not required to the null queue. I cannot use FORMAT=indexQueue in transforms.conf, as I cannot specify multiple indexes in inputs.conf. This is not working and I am not getting the results I expect. Kindly help.
The configs are like below:
PROPS.CONF --
[source::*model-app*]
TRANSFORMS-segment=setnull,security_logs,application_logs,provisioning_logs
TRANSFORMS.CONF --
[setnull]
REGEX=class\"\:\"(.*?)\"
DEST_KEY = queue
FORMAT = nullQueue
[security_logs]
REGEX=(class\"\:\"(/var/log/cron|/var/log/audit/audit.log|/var/log/messages|/var/log/secure)\")
DEST_KEY=_MetaData:Index
FORMAT=model_sec
WRITE_META=true
LOOKAHEAD=40000
[application_logs]
REGEX=(class\"\:\"(/var/log/application.log|/var/log/local*?.log)\")
DEST_KEY=_MetaData:Index
FORMAT=model_app
WRITE_META=true
LOOKAHEAD=40000
[provisioning_logs]
REGEX=class\"\:\"(/opt/provgw-error_msg.log|/opt/provgw-bulkrequest.log|/opt/provgw/provgw-spml_command.log.*?)\"
DEST_KEY=_MetaData:Index
FORMAT=model_prov
WRITE_META=true
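One likely culprit in the configs above: the [setnull] regex (class\"\:\"(.*?)\") matches every event that has a class field, so every event's queue key is set to nullQueue, and the later transforms only rewrite _MetaData:Index without ever putting the event back on the index queue. Splunk's documented filtering pattern is a catch-all nullQueue transform followed by a transform that routes the wanted events back to indexQueue. A sketch, keeping the thread's escaping style (the setparsing stanza name and its keep-list regex are illustrative, not from the thread):

```ini
# transforms.conf -- transforms run in the order listed in props.conf,
# so the catch-all nullQueue comes first:
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# Hypothetical keep-list: put events from any wanted log back on the index queue
[setparsing]
REGEX = class\"\:\"(/var/log/|/opt/provgw)
DEST_KEY = queue
FORMAT = indexQueue
```

Then list it right after setnull in props.conf, e.g. TRANSFORMS-segment=setnull,setparsing,security_logs,application_logs,provisioning_logs, keeping the _MetaData:Index transforms unchanged.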
Your regex is definitely the culprit here.
Try this and it should work:
[setnull]
REGEX = \d+\.\d+\.\d+\.\d+\sFAILED
DEST_KEY = queue
FORMAT = nullQueue
So that regex pulls out the "FAILED", but I think the regex needs to match the entire event that must be sent to null? \d{4}\W\d{2}\W\d{2}\s\d{2}\W\d{2}\W\d{2}.+(FAIL..).+ does just that (even though it's hideous), but I tried both and search still returns the FAILED events. I should only need a TRANSFORMS-null in props.conf under my [sslbcoat1] stanza, and then a corresponding stanza in transforms.conf, right?
No, you're just telling Splunk to identify a pattern within an event. If that pattern matches the regex, the event is thrown out. You need to first apply the base configs I provided you in the comments so your events break correctly, then apply the nullQueue stanza I gave you, which will throw away your FAILED events.
Understood on the regex. The events already break correctly using Splunk's auto setting (I get 93 events, which is what I have before the null filter). Do I still need to add those base configs to the sourcetype stanza in props? And should the TRANSFORMS-null go under a source stanza rather than a sourcetype stanza?
Adding the base configs @skoelpin provided is a best practice. Specifying them means Splunk doesn't have to guess the right time format, etc., so it speeds things up a little.
If you don't have the right regex then the filter can't possibly work.
Please show some sample events, both those to be filtered and those to be kept.
2014-04-17 13:49:32 2910 XX.XX.XX.XXX TUNNELED - none - - XX.XX.XXX.XXX - XX.XX.XXX.XXX TLSv1 RC4-SHA 128 *.roadtrippers.com "none" TLSv1 RC4-SHA 128 - XXX.XXX.XXX.XXX SG-SSL-Proxy-Service XX.XX.XXX.XXX 584912014-04-17 13:50:04 1 XX.XX.X.XX FAILED - - - -XXX.XXX.XX.XXX - - - none - - - - none - - XXX.XXX.XXX.XXX SG-SSL-Proxy-Service - 57498
Two events mashed together. Splunk separates them by timestamp automatically when I index, but I need to separate all the events with "FAILED" in them and send them to null. I feel like I am missing something obvious. I am indexing these by uploading a .txt file with the sample logs in it.
The simplest regex for that is (FAILED), but that will match the word anywhere in the event. To find "FAILED" only after an IP address, try (\d+\.\d+\.\d+\.\d+\sFAILED).
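As a quick check outside Splunk, that pattern distinguishes the two sample events above. (The masked XX octets are replaced with invented digits below so the IPs are syntactically valid.)

```python
import re

pattern = re.compile(r"(\d+\.\d+\.\d+\.\d+\sFAILED)")

# The FAILED event from the sample, masked octets filled with made-up digits:
evt = ("2014-04-17 13:50:04 1 10.20.3.40 FAILED - - - -203.0.113.7 - - - "
       "none - - - - none - - 203.0.113.8 SG-SSL-Proxy-Service - 57498")
# The TUNNELED event that should be kept:
keep = ("2014-04-17 13:49:32 2910 10.20.3.41 TUNNELED - none - - "
        "203.0.113.9 TLSv1 RC4-SHA 128 *.roadtrippers.com")

assert pattern.search(evt)        # FAILED event matched -> goes to nullQueue
assert not pattern.search(keep)   # TUNNELED event not matched -> indexed
```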
Have you added MAX_DAYS_AGO to your props.conf file to account for the 2014 dates you're uploading?
Apologies, I didn't see this response. I have some sample data in a .txt with 93 timestamped events in it. I have the sourcetype defined as above, as well as a corresponding transform for setnull. I go to "Add Data", upload my "ssl_bcoat_2.txt", save it as the sslbcoat1 sourcetype, then create a text index to index it to. Just to be sure, I should be editing props and transforms in apps\local and not system\local, correct?
Correct. $SPLUNK_HOME\etc\apps\<app_name>\local, to be more specific.
So are you saying that the events are mashed together in a single event?
If so, that's because you didn't apply base configs to your props.conf.
Step 1: Add this to your props on the indexer and restart splunkd
[<YOUR SOURCETYPE>]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d+-\d+-\d+\s\d+:\d+:\d+\s
MAX_TIMESTAMP_LOOKAHEAD = 25
TRUNCATE = 10000
Step 2: Apply the regex I gave you below
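To see what that LINE_BREAKER does, here is a rough Python approximation (the two events are invented stand-ins for the sample above, with made-up IPs). Splunk discards the text captured by the first group as the event delimiter and keeps the timestamp that follows with the next event, which a zero-width lookahead can mimic:

```python
import re

# Two events separated by a newline before each timestamp, as in the real file:
raw = ("2014-04-17 13:49:32 2910 10.1.2.3 TUNNELED - none\n"
       "2014-04-17 13:50:04 1 10.4.5.6 FAILED - - -")

# Approximate LINE_BREAKER = ([\r\n]+)\d+-\d+-\d+\s\d+:\d+:\d+\s :
# split on the newline run, but only when a timestamp follows it.
events = re.split(r"[\r\n]+(?=\d+-\d+-\d+\s\d+:\d+:\d+\s)", raw)

assert len(events) == 2
assert events[1].startswith("2014-04-17 13:50:04")
```

With SHOULD_LINEMERGE = false this breaks each timestamped line into its own event, so the nullQueue transform can then match the FAILED events individually.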