Getting Data In

How do I blackhole unwanted server logs by configuring props.conf and transforms.conf?

erinaldo
Explorer

Our main syslog server just forwards everything to Splunk. We have exclusions in syslog for certain applications, but we would still like to clean out anything that isn't vital to Splunk. I've attempted to set up props.conf and transforms.conf appropriately, but it doesn't seem to work. I put them in /opt/splunk/etc/system/local instead of editing the default files.

props.conf
[source::udp:514]
TRANSFORMS-drop_hosts = drop_hosts

transforms.conf   
[drop_hosts]
SOURCE_KEY = Metadata:Host
REGEX = 192.168.158.131.log
DEST_KEY = queue
FORMAT = nullQueue

I am just testing it with one host right now, but when I pull up the Data Summary and look at the count for that IP, it continues to rise.
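
A quick way to confirm that splunkd is actually reading these stanzas from system/local is btool (this check is my addition, not part of the original question; the install path and stanza names come from the config above):

 # show which props.conf contributes the udp:514 stanza, and from which file
 /opt/splunk/bin/splunk btool props list source::udp:514 --debug
 # show the effective drop_hosts transform
 /opt/splunk/bin/splunk btool transforms list drop_hosts --debug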

0 Karma
1 Solution

somesoni2
Revered Legend

It will. My bad, I didn't realize it's a syslog input coming in directly on the indexers, so the configurations are in the correct place. Now we should check whether the entries themselves are correct. What is the actual host name that you see in the log entries? Is it really 192.168.158.131.log?

How about you try this in your transforms.conf (keep props.conf same)?

 [drop_hosts]
 SOURCE_KEY = Metadata:Host
 REGEX = 192\.168\.158\.131
 DEST_KEY = queue
 FORMAT = nullQueue
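
To see what Splunk actually recorded as the host for this input, a quick search scoped to the same source should show it (this snippet is my addition, not part of the answer):

 index=* source="udp:514" | stats count by host

If the host values turn out to be plain IP addresses with no .log suffix, the original REGEX ending in .log would never match, which is likely why the corrected pattern above works.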

erinaldo
Explorer

Yes, on the rsyslog server that's the actual entry in /var/log/remote.

That worked! Thank you sir.

0 Karma

erinaldo
Explorer

Are you able to whitelist/blacklist with Splunk as well? I may have some issues with the regex since the hostnames are all over the place. For example, we don't need to see any of the ipaddress.log hosts, but we may need to see server1.log and not server2.log.

0 Karma

somesoni2
Revered Legend

There is no blacklist/whitelist setting available for a UDP input. You can add multiple hosts to the same regex and/or add more transforms.conf stanzas, if that's what you need. Like this:

 props.conf
 [source::udp:514]
 TRANSFORMS-drop_hosts = drop_hosts_set1,drop_hosts_set2

 transforms.conf   
 [drop_hosts_set1]
 SOURCE_KEY = Metadata:Host
 REGEX = (192\.168\.158\.131)|(192\.168\.158\.132)|(....other hosts)
 DEST_KEY = queue
 FORMAT = nullQueue

 [drop_hosts_set2]
 SOURCE_KEY = Metadata:Host
 REGEX = (192\.168\.159\.131)|(192\.168\.159\.132)|(....other hosts)
 DEST_KEY = queue
 FORMAT = nullQueue
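
If the list of hosts you want to keep is shorter than the list you want to drop, a common variation is to send everything from the input to the null queue and then route only the wanted hosts back to the index queue; transforms in a TRANSFORMS- list are applied in order, so the later one wins. This is a sketch I'm adding for illustration; the stanza names and host names (drop_all, keep_wanted_hosts, server1, server3) are placeholders:

 props.conf
 [source::udp:514]
 # drop everything first, then re-queue only the hosts we want to keep
 TRANSFORMS-keep_only = drop_all,keep_wanted_hosts

 transforms.conf
 [drop_all]
 # matches every event and sends it to the null queue
 REGEX = .
 DEST_KEY = queue
 FORMAT = nullQueue

 [keep_wanted_hosts]
 # events from these hosts are routed back to the index queue
 SOURCE_KEY = Metadata:Host
 REGEX = (server1)|(server3)
 DEST_KEY = queue
 FORMAT = indexQueue
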
0 Karma

erinaldo
Explorer

Ok yeah I can make that work. Thanks for your help.

0 Karma

somesoni2
Revered Legend

A few questions:
1) On which Splunk server did you place these config files: the forwarder on the syslog server, or the indexers?
2) If you created these files on a forwarder at the syslog server, is it a Universal Forwarder or a Heavy Forwarder? If it's a Universal Forwarder, it won't work, since event filtering is not available on a UF; keep the configuration on the indexers (or on a Heavy Forwarder, if there is one between the UF and the indexers).
3) Assuming the files are now on the correct Splunk server, did you restart the Splunk service after making the change? (See the restart snippet after this list.)
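
For reference (my addition), index-time props/transforms changes on the indexer only take effect after splunkd is restarted; assuming the /opt/splunk path from the question:

 # restart splunkd on the indexer so the new props/transforms take effect
 /opt/splunk/bin/splunk restart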

0 Karma

erinaldo
Explorer

We actually just forward it from our rsyslog server; we don't use Splunk forwarders. Will this not work with those options on the indexer, then?

0 Karma

erinaldo
Explorer

The indexer. Yes, the Splunk services have been restarted.

0 Karma