Palo Alto Networks App for Splunk: Is it possible to restrict certain Palo Alto traffic logs?

turn1p
New Member

Hi,

New to Splunk Enterprise and only have the Palo Alto Networks App for Splunk + Palo Alto Networks Add-on for Splunk installed and configured.

Is it possible to restrict which traffic logs get indexed, or at least to determine which firewall rules are generating most of these logs? I noticed pan:traffic consuming around 75% of our daily license allowance, which leaves me little room to log much else in our environment.

Thanks in advance
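On the second part of the question - finding which rules generate most of these events - a search along these lines should show the top rules by event count, assuming the add-on's default pan_logs index and its extracted rule field (adjust the names to your environment):

index=pan_logs sourcetype=pan:traffic | top limit=20 rule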


btorresgil
Builder

Since Palo Alto Networks App version 5.x and Add-on version 3.6.x, the dashboards are populated by eventtype instead of by index. This means that if a user does not have access to an index, the logs in that index will not show up in the App dashboards. You can use this to control which logs each user sees.

For example, you can have one firewall send to a data input on port 515 and another firewall send to port 516. Port 515 goes to an index called pan_logs_perimeter and port 516 goes to an index called pan_logs_datacenter. Then create two users: a 'perimeter' user with permission to see only the pan_logs_perimeter index, and a 'datacenter' user that can see only the pan_logs_datacenter index. When both of these users access the App dashboards, each will see only the data from the index they are allowed to search. The 'perimeter' user will see only logs from pan_logs_perimeter, and the 'datacenter' user will see only logs from pan_logs_datacenter, even if they are using the same dashboard in the same App.
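A minimal sketch of what that setup could look like on the Splunk instance receiving the syslog feeds, assuming UDP data inputs and the index names above (the sourcetype follows the pan:log convention used elsewhere in this thread, and the role names are only illustrative):

inputs.conf:
[udp://515]
sourcetype = pan:log
index = pan_logs_perimeter
no_appending_timestamp = true

[udp://516]
sourcetype = pan:log
index = pan_logs_datacenter
no_appending_timestamp = true

authorize.conf:
# Each role may search only its own index
[role_perimeter]
importRoles = user
srchIndexesAllowed = pan_logs_perimeter

[role_datacenter]
importRoles = user
srchIndexesAllowed = pan_logs_datacenter

Assign the 'perimeter' and 'datacenter' users to the matching role, and the dashboards will populate only from the index each user is allowed to search.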

niemesrw
Path Finder

Hi turn1p -

We do something similar - when creating policies, we add an extra "key" of "xlog" to the policy name, which we then filter on at input time on the Splunk server that ingests the logs (in our case a heavy forwarder - you might do the same on your indexer).

Here's the relevant props & transforms:

props.conf:
# Attach the filtering transform to the raw PAN syslog sourcetype
[pan:log]
TRANSFORMS-drop = discard-nolog

transforms.conf:
# Send TRAFFIC events whose policy name contains "xlog" to the nullQueue (i.e. discard them)
[discard-nolog]
REGEX = TRAFFIC.*xlog
DEST_KEY = queue
FORMAT = nullQueue


sk314
Builder

This is much more graceful than what I did. I didn't know the log format itself was configurable on the PAN end. I ended up writing a crude regex to filter out allowed pan:traffic logs, like so:

[pan_traffic_allowed]
# Drop allowed TRAFFIC events (action field = allow)
REGEX = (([^,]*,){3}TRAFFIC,([^,]*,){26}allow,)
DEST_KEY = queue
FORMAT = nullQueue
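For that transform to take effect it still has to be referenced from props.conf on the ingesting instance - presumably against the same raw sourcetype as in the example above (adjust the stanza name if your events arrive under a different sourcetype):

props.conf:
[pan:log]
TRANSFORMS-drop_allowed = pan_traffic_allowed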

😞


niemesrw
Path Finder

Well, the format isn't really configurable - we're just adding some extra 'metadata' to the policy name. Since Brian Torres-Gil (who wrote this app) works for Palo Alto, he might be able to ask that they log tags or some other field we could parse on ingest.

Either way - whatever works!

sk314
Builder

That explains it - I was wondering why @btorresgil sounded familiar!! Also - whatever works!! 🙂
