Good afternoon,

We are currently sending all of our Palo Alto syslogs to a syslog server that collects multiple machines' syslogs and forwards them via a universal forwarder to our Splunk instance.

We filtered out all logs tagged with the Palo Alto device name and set the sourcetype to pan_log.

Here's the piece of our inputs.conf for the Palo Alto logs from our syslog server, /prod/splunkforwarder/etc/apps/syslog/default/inputs.conf:

[monitor:///prod/remotesyslog/logs/paloalto/]
blacklist = .gz$
disabled = false
sourcetype = pan_log
host_segment = 4
index = syslog

The index=syslog is the generic index name we use for all syslogs, rather than 'main' or 'default', etc.

We also updated macros.conf on the application side via our search head and included the index name under /opt/splunk/etc/apps/SplunkforPaloAltoNetworks/default:

Base macros:

[pan_threat]
definition = index=syslog sourcetype="pan_threat" NOT "THREAT,url"

[pan_traffic]
definition = index=syslog sourcetype="pan_traffic"

[pan_system]
definition = index=syslog sourcetype="pan_system"

[pan_config]
definition = index=syslog sourcetype="pan_config"

[pan_web_activity]
definition = index=syslog sourcetype="pan_threat" "THREAT,url"
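For reference, search macros like these are invoked in SPL with backticks, so a quick way to test one from the search bar is:

```
`pan_threat` | stats count by sourcetype
```

If the macro expands correctly, this should return event counts for the pan_threat sourcetype.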

Oddly enough, under the dir /opt/splunk/etc/apps/SplunkforPaloAltoNetworks/local, the inputs.conf listed there is empty. Is this correct?

Now, as it stands, I am able to see under the Splunk deployment monitor a pan_log sourcetype that is receiving traffic, but I am unable to view any data in the Palo Alto app or via an independent search such as sourcetype="pan_log" or 'pan_threat', etc.

Any help would be greatly appreciated.

asked 26 Oct '12, 12:10


I should have noted that our syslog server is load balancing out to 6 indexers.

(26 Oct '12, 12:40) be910j

A follow up:

Well, it appears I can now manually search against all the data coming in under 'pan_log' if I specify index=<index name> sourcetype=pan_log. My assumption is that it's not properly transforming to, say, pan_threat or pan_system, etc. transforms.conf and props.conf appear to be fine. I don't have an inputs.conf under the default folder, however, and my local folder's inputs.conf is empty. Does anyone have a good example of a proper inputs.conf for this app?


(02 Nov '12, 14:18) be910j

Thanks. Yes, so far I haven't edited or created any inputs.conf under the app directory, be it default or local; just on our forwarder on the syslog server. All the data is being captured, though, as it's specified in the inputs.conf on the syslog server, which is forwarding the syslogs of a multitude of systems (the PA just being another one of those); it just happens that the data isn't transforming. (comment continued...)

(06 Dec '12, 09:11) be910j

Cont: it is being tagged with the correct sourcetype by the inputs.conf on the syslog server before it comes over, and it comes over in that stream with index="syslog".

I guess what my question should be is: 1) does it need to be tagged as index="pan_logs" for transforms to function, or 2) can I just point the app to look in the "syslog" index where all the data is and pull out its sourcetype of 'pan_log' to get transforms to start happening?

(06 Dec '12, 09:11) be910j
1. No, but the sourcetype must be set to pan_log. 2. No; the props/transforms are happening pre-indexing. Given the architecture described, you should ignore any reference to opening a port on a Splunk instance via inputs.conf. You've already got a functioning input (although the host_segment is of questionable value). If the forwarder is a UF, the props/transforms from the PAN app are living happily on the Splunk indexer(s), and the PAN app is installed on the search head, this should all flow well. I suspect missing props/transforms on the indexers, or an inability to search the syslog index in the app.
(10 Dec '12, 11:51) ekost ♦
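Index-time sourcetype rewriting of the kind ekost describes is driven by props.conf and transforms.conf on the indexers. The sketch below shows only the general shape; the stanza names and regex are illustrative, not the PAN app's actual contents, so copy the real ones from the app's default directory.

```ini
# props.conf -- illustrative sketch, not the app's real config
[pan_log]
TRANSFORMS-pan = pan_threat_st

# transforms.conf -- rewrites the sourcetype at index time when the
# event contains ",THREAT," (the regex is an assumption)
[pan_threat_st]
REGEX = ,THREAT,
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::pan_threat
```

Because these rules fire at parse time, they must live on whichever instance first parses the data: the indexers here, since the forwarder is a universal forwarder.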

Ah, you know, I think you just called it. Checking my indexers, none of the transforms.conf files under etc/system/local have anything under ## INDEX-TIME TRANSFORMS (just one stanza for another app). So, since it needs to process the transforms at index time (forgive me, I'm a noob), can I just insert the transforms from the conf file that's in my app directory on the search head into the # index-time transforms segment of the local transforms file on all my indexers (along with the appropriate props updates as well)? Talk about feeling dumb; I've been looking at the search head this whole time.

(11 Dec '12, 12:47) be910j

Also, just a side note: really the only place I see the Palo Alto transforms data is under the app directory itself, under default on the search head; nowhere else do I see its transforms data.

(11 Dec '12, 12:55) be910j

5 Answers:

Adding to the summary indexing discussion: please take a look at this post: "Summary indexing on a search head" on Splunk Answers. Also, if you plan on using multiple indexers, I would discourage the use of summary indexes for now. Admittedly, the summary indexing use is not the best in this app. The summaries are of high dimensionality, which results in a low summarized-to-raw-data ratio; ultimately, the summaries will become very large. I am working on a better strategy for this.

'pan_threat' host="pa*": I was unable to recreate this issue. This search works OK on a newly installed Splunk instance with a fresh install of the app.

The reason that this:

index="pan_logs" pan_threat | bin _time span=5m | fillnull vsys app category src_ip dst_ip severity RISK threat_id CATEGORY | stats count by vsys app threat_id severity category src_ip dst_ip log_subtype CATEGORY RISK _time

works is that the pan_logs index is not in the default search path of the user running the search.

I appreciate your feedback. I have added several things to my to-do list for the next version of the app. Happy to talk to you in person about some of this.


answered 19 Dec '12, 19:05


edited 20 Dec '12, 01:14

Ah, OK. I just re-read the edit and noticed the dashboard uses index=summary DataCube, etc. Should I just edit that to point to the actual index for pan_logs / the macro, in place of using the summaries? Or can I just modify macros.conf.summary to get around it?
And thanks; I always get annoyed when people ask questions on forums and never report back what their results were. It doesn't help the community at all that way. And I might just take you up on that; after I get a few environment issues sorted I'll drop you a line. Cheers!

(20 Dec '12, 14:23) be910j

Just to follow up here on what was going on. It looks like the root of my dashboard issue, after the macros were working, was that I originally replaced my macros.conf with macros.conf.summary and forgot that it was still in my local dir, taking precedence over the default, even though I had updated to the latest version and otherwise disabled summary indexing in Manager for the specific si_* scheduled searches. Dashboards worked after .old'ing my local macros.conf and allowing it to use the default macros.conf in my default folder. I'm good to go now! Thanks again, Monzy!

(14 Jan '13, 11:58) be910j

The app's main dashboard page has inline searches; those searches use index=pan_logs. Other views have searches built on the macros. You have already modified those macros, but adding index=syslog was not necessary for those views.

Lastly, it is good practice to keep different log types separated by index. I would not recommend sending all syslog-type logs into one index.
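A dedicated index is defined in indexes.conf on each indexer. A minimal sketch, assuming the default volume layout (adjust the paths to your storage):

```ini
# indexes.conf -- minimal sketch for a dedicated PAN index
[pan_logs]
homePath   = $SPLUNK_DB/pan_logs/db
coldPath   = $SPLUNK_DB/pan_logs/colddb
thawedPath = $SPLUNK_DB/pan_logs/thaweddb
```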


answered 11 Dec '12, 20:18


Thanks. Since that original post I've updated the app, and it appears the macros edit for the index isn't there anymore, so I will continue on without it. We are planning to move them to their own index at some point; we are really just trying to get a POC for management at the moment, only taking system and config logs from one of our PAs. None of the transforms are occurring, though; the only data I am able to search against is all pan_log, not pan_system or pan_config, etc. After reading ekost's comment above I checked my indexers, and none of the PA transforms/props data is on my indexers at the moment.

(12 Dec '12, 06:07) be910j

(Ran out of characters...)
Aside from updating transforms and props on all my indexers, and breaking the PA logs off into their own index for best-practice's sake, is there anything I'm missing here?


(12 Dec '12, 06:14) be910j

No, that's all you need. Grab the props/transforms and roll them into an app (pan_props). Deploy the app to the indexers (preferably using a config-management tool or deployment server). Restart the indexers (serially preferred). Check that the app is enabled on the indexers and verify that the transforms are being applied (a typo is not a friend here).

(12 Dec '12, 09:57) ekost ♦
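The packaging steps ekost describes can be sketched in shell roughly as below. The paths are assumptions for a default install; this sketch stages everything under /tmp (with stub conf files) so it can be dry-run anywhere, and the push to the indexers is left as a comment.

```shell
# Assumed locations; on a real search head SPLUNK_ETC would be /opt/splunk/etc.
SPLUNK_ETC=${SPLUNK_ETC:-/tmp/splunk-etc}
SRC=$SPLUNK_ETC/apps/SplunkforPaloAltoNetworks/default
APP=$SPLUNK_ETC/apps/pan_props

# Lay out the new app with the usual default/local/metadata folders.
mkdir -p "$SRC" "$APP/default" "$APP/local" "$APP/metadata"

# Stub props/transforms stand in for the app's real files in this dry run;
# in practice you would copy the PAN app's actual conf files.
touch "$SRC/props.conf" "$SRC/transforms.conf"
cp "$SRC/props.conf" "$SRC/transforms.conf" "$APP/default/"

# Mark the app enabled so the indexers load it.
printf '[install]\nstate = enabled\n' > "$APP/default/app.conf"

# Then push $APP to each indexer (deployment server or scp) and restart
# them one at a time: /opt/splunk/bin/splunk restart
```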

Ah, OK. So rather than taking the transforms and props and merging them into the existing conf files under /etc/system/local on the indexers, I create a custom app (I'm assuming a bit like a deployment app, under etc/apps/(app-name)/default/), toss the props and transforms in there, and, I'm assuming, create the local dir and metadata folders, set app.conf to enabled, and then restart the indexers one after the other? Did I get that right? (I hate to ask 101-type questions; I've been reading all the various admin docs and find bits and pieces, but never really the entire picture.) Thanks!

(12 Dec '12, 11:43) be910j

Yup! That covers it. Splunk recently began providing 'TA' apps that are the components for the forwarder/indexer tier in a distributed Splunk architecture. However, legacy apps and anything home-grown will be managed like this: grabbing the bits you need and deploying them where they're needed.

(12 Dec '12, 12:25) ekost ♦

Awesome thanks a million! I'll be sure to post back my results for posterity in case someone else comes along in my shoes looking for the same answer.

(12 Dec '12, 14:09) be910j

Quick question: I've built my "pan_props" app, created the usual default/local/metadata folders, and dropped the props and transforms from the app/default directory off the search head into my pan_props folder; I haven't yet deployed this to my indexers. The question is this: since props.conf does a lookup on line 1, do I need to also include the lookups folder from the app in the "pan_props" that will be deployed to the indexers, or is it not necessary that it be available at index/parse time? I know lookups only happen at search time, but do they need to be available on the indexer?

(13 Dec '12, 10:08) be910j

You shouldn't be editing anything in the default folder; anything you want to modify should be in the local folder. I believe stanzas in local supersede anything in default. Here is what my inputs.conf in /opt/splunk/etc/apps/SplunkforPaloAltoNetworks/local contains:

connection_host = ip
sourcetype = pan_log
index = pan_logs
no_appending_timestamp = true
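As pasted, the settings above are missing their stanza header. A complete network-input stanza usually looks like the following; the protocol and port here are hypothetical, not taken from this thread:

```ini
# inputs.conf -- hypothetical syslog listener (udp://514 is a guess)
[udp://514]
connection_host = ip
sourcetype = pan_log
index = pan_logs
no_appending_timestamp = true
```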


answered 01 Dec '12, 10:20


OK, I hate to use the answer box because this is only kind of an answer, based off everything I've gathered from people on this thread. But if another noob is searching and finds this, hopefully this overview will get them part way there pretty quickly without sifting through and piecing together the whole thread (aside from my added issue at the bottom, that is).

Overview: distributed search

  • Installed the app.
  • Set the PA to forward syslog to a syslog server with a UF installed.
  • Using an app "syslog", set its inputs.conf to collect the PA logs
    and tag them as pan_log (initially merging into an existing index; now using a newly created index of 'pan_logs').
  • I was able to view data by searching pan_log.
  • No transforms occurring.
  • I had not created an app props dir on the indexers for the props and transforms confs.
  • Created an app directory and populated the props and transforms under its default dir, i.e. /opt/splunk/etc/apps/pan_props/default.
  • Restarted the indexers in sequence.
  • Received errors regarding the pan_logs index (the index was created on the search head but had not been created on the indexers).
  • Created the index on the indexers and updated indexes.conf to match.
  • Restarted in sequence again.
  • PA data now being transformed and residing in its own index.
  • However, macros were not searchable, so no 'pan_threat', just index=pan_logs sourcetype="pan_threat", and none of the saved searches worked.
  • So, just to see what would happen, I re-specified the index in the base macros in macros.conf on the search head (I know, Monzy, you said it was unnecessary to modify the macros, but for some reason it seemed to work so I went with it), i.e. index=pan_logs.
  • Restarted the splunk service on the search head and the macros started working.
  • Able to search based off macros for 'pan_system', etc.
  • Able to use saved searches. Progress!
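For concreteness, the macro edit in the last few steps mirrors the base macros quoted in the question, with the index swapped to the dedicated one, e.g.:

```ini
# macros.conf on the search head, after the edit described above
[pan_threat]
definition = index=pan_logs sourcetype="pan_threat" NOT "THREAT,url"

[pan_system]
definition = index=pan_logs sourcetype="pan_system"
```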

I am still having a few other issues but I'll post those as a comment on this.


answered 13 Dec '12, 22:28


edited 13 Dec '12, 22:33

What I'm not able to do at the moment:

1. Can't see dashboard data; it just doesn't load at all, getting null data. But if I investigate and pull the highlighted piece of syntax out of the root search happening before the summaries, that pulls data.

2. Strangely enough, I can't search a lot of specific items on their own or with a wildcard.

(13 Dec '12, 22:30) be910j

Example: searching host names like host="pa-fw-name" doesn't work, but if I search 'pan_threat' and then select the host name, it will append to my search and work; then, removing 'pan_threat' from the search, it will still work. But if I type it in directly, no go. Also, if I wildcard it, it won't work (even with 'pan_threat' as a preceding entry), so 'pan_threat' host="pa*" is a no-go, but complete the host name and it's OK? There are others where I have the same type of issue searching; that's just my best example of the problem.

(13 Dec '12, 22:31) be910j

OK, so I've confirmed on issue 1 above that summary indexes aren't searching. This:

index=summary DataCube=threat | bin _time span=5m | fillnull vsys app category src_ip dst_ip severity RISK threat_id CATEGORY | stats count by vsys app threat_id severity category src_ip dst_ip log_subtype CATEGORY RISK _time

doesn't work. This:

index="pan_logs" pan_threat | bin _time span=5m | fillnull vsys app category src_ip dst_ip severity RISK threat_id CATEGORY | stats count by vsys app threat_id severity category src_ip dst_ip log_subtype CATEGORY RISK _time

works.

(14 Dec '12, 13:27) be910j

Now, I've checked in Manager and summary indexing is enabled, and the macros.conf.summary file specifies a summary index on the search head. Is there something I'm missing on the indexers that would stop them from summary indexing? At this point I think this is my biggest hurdle left to get over. Currently the only things I have on the indexers are the props, the transforms, and the pan_logs index. Do I need to create a new index for the summary there and copy over macros.conf.summary?

(14 Dec '12, 13:39) be910j

In a typical distributed search configuration, the search head is responsible for generating and hosting summary data. However, you can configure the search head to forward any generated data to be hosted on the indexers; you'd have to check your configurations on the search head for an outputs.conf that redirects output to the indexers (just like your forwarders do). Also notable: the index named 'summary' is a system default and is created on both search heads and indexers. The PAN app does not appear to use a custom summary index.

(17 Dec '12, 10:52) ekost ♦
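The search-head forwarding ekost mentions is configured with an outputs.conf on the search head. A minimal sketch; the group name, server names, and port are placeholders:

```ini
# outputs.conf on the search head -- placeholder hosts and port
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = indexer1:9997, indexer2:9997
```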

Gotcha. It looks like we may have something messing with our summary indexes, so I'll follow that road to sort it out and revisit this, as it may be related.


(18 Dec '12, 08:47) be910j

I have an existing syslog server that was already receiving Palo Alto logs. I've installed the universal forwarder on the syslog server and it is successfully sending the data to Splunk. The PA app seems to be working OK, with the exception that searches by username always return no data. When I use the default Splunk search page, I would like the 'host' field to be populated with the hostname of the firewall that generated the log message; right now, it is always populated with the name of the logfile, "firewall.log". I have tried several iterations of transforms.conf and props.conf on the syslog/forwarder host. Here are the current contents:

DEST_KEY = MetaData:Host
REGEX = \b\w*-\w*-\d\b
FORMAT = host::$1


Any suggestions?


answered 14 May '13, 08:58


So you're having two issues then: no hostname for the PA, and no username data? Do you get something like 0's where a username should be on some charts?

cheers, Brandon

(14 May '13, 09:09) be910j


Usernames are present in the logs, and in the charts.

Interestingly, I found that filtering the charts for auser (where my username is domain\auser) does update the charts in the PA app. Putting 'auser' in the source user field, however, causes all of the charts to show 'no results found'.

I've tried a couple of other variations for transforms and props, but I can't seem to get the host field to populate correctly.

(14 May '13, 10:33) colinxb

Syslog-ng can do all kinds of nifty tricks with a syslog stream before the data is written into the log. So the very first thing is to review the firewall.log on the syslog host to be sure the real device hostname is in the event, in a consistent place or marked. If it isn't, that's likely your syslog-ng configuration, or the syslog stream is being forwarded through multiple hosts.

(22 May '13, 17:13) ekost ♦
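One thing also worth checking in the transforms snippet posted above: FORMAT = host::$1 substitutes capture group 1, but the regex as posted (\b\w*-\w*-\d\b) contains no capturing group. A quick way to sanity-check the pattern with the parentheses added, against a made-up sample event (the hostname pa-fw-1 is hypothetical):

```python
import re

# Pattern from the thread, with parentheses added so that $1 / group(1)
# actually captures the hostname.
pattern = re.compile(r"\b(\w*-\w*-\d)\b")

# Hypothetical syslog line; real PAN events will differ.
event = "Oct 26 12:10:00 pa-fw-1 1,2012/10/26 12:10:00,THREAT,url,..."

match = pattern.search(event)
print(match.group(1))  # -> pa-fw-1
```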

Last updated: 22 May '13, 17:13
