Getting Data In

How To Filter Syslog Generated by Juniper Firewall

Kinan
Engager

We have configured our Juniper firewall to send its syslog data over UDP and set up Splunk to listen on that port. As you'd expect, we ran through our indexing volume almost immediately and had to tweak the Juniper settings to log only traffic information and skip event notifications, in order to reduce the log volume that gets indexed by Splunk. Even so, the volume Splunk indexes still looks large and may blow past our limit.

We'd like a way to filter the Juniper syslog before Splunk consumes it.
The [following best practice post] indicates that UDP is not recommended for syslog since data loss may occur, and it recommends setting up file-based inputs, forwarders, or some sort of middle-tier server to read the syslog from Juniper before pushing the log data to Splunk for indexing.
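
For reference, the middle-tier approach would look roughly like the rsyslog sketch below; the UDP port, source IP, and output path are placeholders for our environment, and a Splunk forwarder would then monitor the resulting file:

    # /etc/rsyslog.d/juniper.conf -- hypothetical middle-tier sketch
    # receive syslog from the firewall over UDP
    module(load="imudp")
    input(type="imudp" port="514")

    # write everything arriving from the firewall (placeholder IP) to its own
    # file, which a Splunk universal forwarder could then monitor
    if $fromhost-ip == '192.0.2.1' then /var/log/juniper/firewall.log
    & stop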

I wonder if other people out there have run into the same problem and could provide input on their experience and which route they ended up going with.

1 Solution

jbsplunk
Splunk Employee

Usually what people do in cases like this is stop the unwanted data from being indexed by routing it into the nullQueue. This prevents it from being written to disk and saves you the license cost associated with those events.

You can do this either on an indexer or via a heavy forwarder. The filtering happens in the typingQueue, where regex replacement occurs, because the filter is based on a regex that you define. There is therefore some cost associated with checking each event against the filter, which is why some people choose to do this on a heavy forwarder. Keep in mind that once data leaves the heavy forwarder it is 'cooked' and cannot be altered at the indexer, so if you need data parsing to happen and want to use a heavy forwarder, all of your parsing must occur at the heavy forwarder level.

For details on the specific configuration, please see this section in the documentation:

http://docs.splunk.com/Documentation/Splunk/latest/Deploy/Routeandfilterdatad#Filter_event_data_and_...
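
As a rough sketch, assuming your Juniper events arrive under a sourcetype named juniper (that name and the regex below are placeholders to adapt to your data), the configuration on the indexer or heavy forwarder would look something like this:

    # props.conf
    [juniper]
    # apply the filtering transform to every event of this sourcetype
    TRANSFORMS-null = setnull

    # transforms.conf
    [setnull]
    # events matching this regex are routed to the nullQueue and never indexed;
    # substitute a pattern that matches the notifications you want to discard
    REGEX = <pattern_matching_unwanted_events>
    DEST_KEY = queue
    FORMAT = nullQueue

Deploy both files on the indexer or heavy forwarder (for example under $SPLUNK_HOME/etc/system/local/) and restart Splunk for the filter to take effect.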
