One of my data sources has a host field in the raw packet. However, when we search the events, the host field is the name of the forwarder. Where do I rename that? I already use a transform, so can it be done there on ingestion?
What would be the syntax in the props.conf file?
If the event does not have a host extraction, Splunk will use the default host field from the forwarder's inputs.conf.
Run ./splunk btool inputs list --debug
on the forwarder to figure out which setting applies.
To change the host you can:
- if all the events from one input are from the same host, add a setting host=myactualhostname under the input stanza in inputs.conf
- if the host has to be extracted from the event itself, use a props/transforms rule on the indexers (or on the first heavy forwarder, if any)
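For the first case, a minimal inputs.conf sketch on the forwarder might look like this (the monitor path and hostname are placeholders, not taken from your setup):

```
# inputs.conf on the forwarder -- path and hostname are examples only
[monitor:///var/log/myapp]
host = myactualhostname
```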
If you are in a syslog data situation, look at the default syslog sourcetypes; they try to do that at index time.
Look at the default [syslog] definition in $SPLUNK_HOME/etc/system/default/props.conf and [syslog-host] in $SPLUNK_HOME/etc/system/default/transforms.conf for examples.
I get this stanza error:
Undocumented key used in transforms.conf; stanza='hostoverride' setting='DEST_KEY' key='MetData:Host'
Here is my transforms:
[hostoverride]
DEST_KEY = MetData:Host
REGEX = CEF:\s+\d(?:|[^|]+)(6)|+host=([a-zA-Z0-9.-_]+)
FORMAT = host::$1
I fixed the missing 'a'. I also checked my regex in regex101 and found it was not correct; I validated that the corrected regex finds the value. It is still not giving me the host I want.
2 questions:
1. Do I do the transforms at the forwarder or at the indexer?
2. I have 2 transforms. The first is the host override and the second parses the event, as it is a custom CEF-formatted string. Below is my props.conf for this index.
[netwitness]
FIELDALIAS-severity_as_id = severity as severity_id
FIELDALIAS-dst_as_dest = dst as dest
EVAL-app = netwitness
EXTRACT-subject = CEF:\s+\d(?:\|[^\|]+){4}\|(?<subject>[^\|]+)
EXTRACT-shost = CEF:\s+\d(?:\|[^\|]+){6}\|+host=(?<shost>[a-zA-Z0-9.-_]+)
TRANSFORMS-ho=hostoverride
TRANSFORMS = netwitness-extractions
props/transforms go where the parsing happens, i.e. the indexer. Unless you use a heavy forwarder, of course. But I am assuming you are collecting using a universal forwarder that sends directly to your indexer(s). If there is any intermediary forwarder in your forwarding chain that is a HEAVY forwarder, you need to put props/transforms there.
Oh, and I just noticed your second TRANSFORMS is not valid. It should be something like TRANSFORMS-net or some such.
You can also write: TRANSFORMS-ho=hostoverride, netwitness-extractions
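For example, the props.conf stanza could look like this (the class suffixes 'ho' and 'net' are arbitrary names; transforms listed in one class run in the order given):

```
[netwitness]
TRANSFORMS-ho = hostoverride
TRANSFORMS-net = netwitness-extractions

# or equivalently, one class listing both:
# TRANSFORMS-ho = hostoverride, netwitness-extractions
```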
What does your transforms.conf look like now?
Can you share a sample event as well?
I am using a light forwarder to send the log to the indexer, so I am guessing that is an intermediary?
[hostoverride]
REGEX = host=([a-zA-Z0-9.-_]+)
DEST_KEY = MetaData:Host
FORMAT = host::$1
[netwitness-extractions]
REGEX = CEF:\d+|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|([^|])|
FORMAT = nullQueue
sample Event:
CEF: 1|RSA|Netwitness|10.6|severity=2|Executables|sessionid=94463671599|host=support.content.office.microsoft.com,support.content.office.microsoft.com|src=10.51.0.139|spt=59014|dst=23.36.68.96|dport=80|fname=AF102430631.wat,AF102430631.wat|dorg=Akamai Technologies|client=Microsoft ULS 15.0,Microsoft ULS 15.0 (Windows NT 6.1; Microsoft ULS 15.0.4669)|extension=wat,wat|server=Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0|service=80|threat=|username=|content=application/octet-stream,application/octet-stream|action=get,GET|zone=internet,internet|analysis.service=http1.1 without accept header,http1.1 without referer header,ssl certificate self-signed|analysis.session=0,ratio low transmitted,watchlist port,first carve,long connection,session size 10-50k,first carve not dns|analysis.file=exe filetype,exe two sections,exe filetype but not exe extension,small executable extension mismatch,small executable|filetype=windows executable,x86 pe,windows dll,signed executable|office=|device.host=|ioc=|boc=|eoc=|icf.category=|
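As a quick sanity check outside Splunk, the simple host= regex from the transforms above can be run against a truncated copy of this sample event, e.g. in Python. Note the hyphen is moved to the end of the character class so it is read literally rather than as a range:

```python
import re

# Truncated copy of the sample CEF event posted above
event = ("CEF: 1|RSA|Netwitness|10.6|severity=2|Executables|"
         "sessionid=94463671599|host=support.content.office.microsoft.com,"
         "support.content.office.microsoft.com|src=10.51.0.139|spt=59014|")

# Same pattern as REGEX = host=([a-zA-Z0-9.-_]+), with the hyphen
# placed last so it cannot form an accidental character range
m = re.search(r"host=([a-zA-Z0-9._-]+)", event)
print(m.group(1))  # prints support.content.office.microsoft.com
```

The comma is not in the character class, so the match stops there and only the first of the two comma-separated hostnames is captured.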
"light forwarder" and "Universal forwarders" do not parse the events (except a few exceptions)
so the props/transforms has to be setup on your indexers (or on your first heavy forwarders is any are chained)
Thank You, That worked.
It's supposed to be MetaData:Host
you are missing an 'a'
so the 'host' in the raw packet is not the Splunk forwarder or the host of the device sending the event. Here is an example:
|host=teemfsw1.spt.com,
this is the server and domain name from the offending event.
I was converting it to shost using this: #EXTRACT-shost = CEF:\s+\d(?:\|[^\|]+){6}\|+host=(?<shost>[a-zA-Z0-9.-_]+)
But it sounds like host is a built-in field for the device sending to the indexer?
The required meta fields in Splunk are:
source, sourcetype, host (we could consider index and _time to also be always populated).
If you want to use a field at search time for the "original offending host", maybe you could use a different name to distinguish it from the "event sender host".
example with : offender_host
EXTRACT-shost = CEF\:\s+\d(?:\|[^\|]+){6}\|+host=(?<offender_host>[a-zA-Z0-9.-_]+)
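A quick check of that pattern against the sample event from earlier in the thread, e.g. in Python. Python uses the named-group syntax (?P<name>...) rather than PCRE's (?<name>...), and the hyphen is again moved to the end of the class:

```python
import re

# Shortened version of the sample event, with the example host from above
event = ("CEF: 1|RSA|Netwitness|10.6|severity=2|Executables|"
         "sessionid=94463671599|host=teemfsw1.spt.com,teemfsw1.spt.com|"
         "src=10.51.0.139|")

# Same logic as the EXTRACT above: skip six pipe-delimited CEF segments,
# then capture the value after host= into a named group
pattern = r"CEF:\s+\d(?:\|[^\|]+){6}\|+host=(?P<offender_host>[a-zA-Z0-9._-]+)"
m = re.search(pattern, event)
print(m.group("offender_host"))  # prints teemfsw1.spt.com
```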
The words you were searching for are "override host"
You will put code stanzas in props.conf and transforms.conf.
Here's a pretty good answer...
https://answers.splunk.com/answers/91933/can-you-override-host-for-an-input.html
And for you it will look something like this...
props.conf
[sourcetype_stanza_name]
TRANSFORMS-host_rename = host_rename_stanza
transforms.conf:
[host_rename_stanza]
REGEX = some regex that finds the host=([^\s]+)\s
DEST_KEY = MetaData:Host
FORMAT = host::$1
Can you please review this part of our documentation and let us know if you run into any trouble doing it like that?