Good afternoon,
We are currently sending all of our Palo Alto syslogs to a syslog server that collects syslogs from multiple machines and forwards them via a universal forwarder to our Splunk instance.
We filtered out all logs tagged with the Palo Alto device name and set the sourcetype to pan_log.
Here's the relevant piece of our inputs.conf for the Palo Alto logs on our syslog server:
/prod/splunkforwarder/etc/apps/syslog/default/inputs.conf
[monitor:///prod/remotesyslog/logs/paloalto*/*]
blacklist=.gz$
disabled=false
sourcetype=pan_log
host_segment=4
index=syslog
The index=syslog is the generic index name we use for all syslogs, rather than 'main' or 'default', etc.
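For reference, host_segment=4 tells Splunk to take the fourth segment of the monitored file's path as the host value. A sketch with a hypothetical file name (the directory name is an example, not taken from the thread):

```ini
# Hypothetical monitored file, path segments numbered from the left:
#   /prod/remotesyslog/logs/paloalto-fw01/messages.log
#    1     2            3    4
# With host_segment = 4, events from this file get host=paloalto-fw01.
[monitor:///prod/remotesyslog/logs/paloalto*/*]
host_segment = 4
```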
We also updated the macros.conf on the application side via our search head and included the index name under:
/opt/splunk/etc/apps/SplunkforPaloAltoNetworks/default
[pan_threat]
definition = index=syslog sourcetype="pan_threat" NOT "THREAT,url"
[pan_traffic]
definition = index=syslog sourcetype="pan_traffic"
[pan_system]
definition = index=syslog sourcetype="pan_system"
[pan_config]
definition = index=syslog sourcetype="pan_config"
[pan_web_activity]
definition = index=syslog sourcetype="pan_threat" "THREAT,url"
Oddly enough, it's under this dir:
/opt/splunk/etc/apps/SplunkforPaloAltoNetworks/local
Now, as it stands, under the Splunk Deployment Monitor I can see a pan_log sourcetype that is receiving traffic, but I am unable to view any data in the Palo Alto app or via an independent search such as sourcetype="pan_log" or 'pan_threat', etc.
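As a sanity check (a suggestion, assuming the data really is reaching an indexer), two searches can help separate an indexing problem from a search-scope problem:

```
index=syslog sourcetype=pan_log | head 10

index=* sourcetype=pan_log | stats count by index, host
```

If the second search returns events but the first does not, the data is landing in a different index than expected, or the user's role cannot search the syslog index.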
Any help would be greatly appreciated.
Adding to the summary indexing discussion: please take a look at this post: http://splunk-base.splunk.com/answers/5837/summary-indexing-on-a-search-head . Also, if you plan on using multiple indexers, I would discourage the use of summary indexes for now. Admittedly, the summary indexing use is not the best in this app. The summaries are of high dimensionality, which results in a low summarized-to-raw data ratio. Ultimately, the summaries will become very large. I am working on a better strategy for this.
'pan_threat' host="pa*": I was unable to recreate this issue. This search works OK on a newly installed Splunk instance with a fresh install of the app.
The reason this works:
index="pan_logs" pan_threat | bin _time span=5m | fillnull vsys app category src_ip dst_ip severity RISK threat_id CATEGORY | stats count by vsys app threat_id severity category src_ip dst_ip log_subtype CATEGORY RISK _time
is that the pan_logs index is not in the default search path of the user running the search, so it only returns data when the index is named explicitly.
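One way to make such searches work without naming the index explicitly is to add it to the role's default search indexes, either in Manager (Access controls > Roles) or in authorize.conf. A sketch; the role name here is an example:

```ini
# authorize.conf on the search head (sketch)
[role_user]
srchIndexesDefault = main;pan_logs
```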
I appreciate your feedback. I have added several things to my to-do list for the next version of the app. Happy to talk to you in person about some of this.
I have an existing syslog server that was already receiving Palo Alto logs. I've installed the universal forwarder on the syslog server and it is successfully sending the data to Splunk. The PA app seems to be working OK, with the exception that searches by username always return no data. When I use the default Splunk search page, I would like the 'host' field to be populated with the hostname of the firewall that generated the log message. Right now, it is always populated with the name of the logfile, "firewall.log". I have tried several iterations of transforms.conf and props.conf on the syslog/forwarder host. Here are the current contents:
transforms.conf
[PAN_Firewall]
DEST_KEY = MetaData:Host
REGEX = (\b\w*-\w*-\d\b)
FORMAT = host::$1
props.conf
[source::/var/log/syslog/firewall.log]
TRANSFORMS-PAN=PAN_Firewall
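Two things worth checking here (a sketch, assuming firewall hostnames shaped like pa-fw-1 appear in the raw event): FORMAT = host::$1 needs a capturing group in REGEX to have anything to substitute, and host overrides are applied at parse time, which a universal forwarder does not do. So these stanzas would need to live on the indexers (or a heavy forwarder), not on a host running only the universal forwarder.

```ini
# transforms.conf (sketch): the parentheses create capture group $1
[PAN_Firewall]
DEST_KEY = MetaData:Host
REGEX = (\b\w+-\w+-\d+\b)
FORMAT = host::$1

# props.conf: deploy alongside the transform on the parsing tier
[source::/var/log/syslog/firewall.log]
TRANSFORMS-PAN = PAN_Firewall
```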
Any suggestions?
Syslog-ng can do all kinds of nifty tricks with a syslog stream before the data is written into the log. So the very first thing is to review the firewall.log on the syslog host, to be sure the real device hostname is present in the event in a consistent place or clearly marked. If it isn't, the likely culprit is your syslog-ng configuration, or the syslog stream being forwarded through multiple hosts.
True.
Usernames are present in the logs, and the charts.
Interestingly, I found that filtering the charts for 'auser' (where my username is domain\auser) does update the charts in the PA app. Putting 'auser' in the source user field, however, causes all of the charts to return 'No results found'.
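One thing worth checking (an assumption on my part, and the field name src_user is a guess at what the form searches under the hood): backslashes in a quoted search string are escape characters, so matching a literal domain\auser requires doubling the backslash:

```
src_user="domain\\auser"
```

If the doubled-backslash form returns results where the single-backslash form does not, the charts' filter is likely passing the value through unescaped.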
I've tried a couple of other variations for transforms and props, but I can't seem to get the host field to populate correctly.
So you're having two issues then: no hostname for the PA, and no username data? Do you get something like 0's where a username should be on some charts?
cheers,
Brandon
Just to follow up here on what was going on. It looks like the root of my dashboard issue, once the macros were working, was that I had originally replaced my macros.conf with macros.conf.summary and forgot that it was still in my local dir, taking precedence over the default, even though I had updated to the latest versions and otherwise disabled summary indexing within the manager for the specific si_* scheduled searches. Dashboards worked after .old'ing my local macros.conf and letting Splunk use the macros.conf in my default folder. I'm good to go now! Thanks again Monzy!
Ah OK, I just re-read the edit and noticed the dashboard uses index=summary DataCube etc. Should I just edit that to point to its actual index for pan_logs / the macro, in place of using the summaries? Or can I just modify macros.conf.summary to get around it?
And thanks. I always get annoyed when people ask questions on forums and never report back their results; it doesn't help the community at all that way. And I might just take you up on that: after I get a few environment issues sorted, I'll drop you a line.
Cheers!
OK, I hate to use the answer box because this is only kind of an answer, pieced together from everything I've gathered from people on this thread. But if another noob is searching and finds this, hopefully this overview will get them partway there fairly quickly, without sifting through and piecing together the whole thread (aside from my added issue at the bottom, that is).
Overview: Distributed search
I am still having a few other issues but I'll post those as a comment on this.
Gotcha. It looks like we may have something messing with our summary indexes, so I'll follow that road to sort it out, and revisit this as it may be related.
Thanks!
In a typical distributed search configuration, the search head is responsible for generating and hosting summary data. However, you can configure the search head to forward any generated data to be hosted on the indexers. You'd have to check your configurations on the search head for an outputs.conf that redirects output to the indexers (just like your forwarders do). Also notable: the index named 'summary' is a system default and is created on both search heads and indexers. The PAN app does not appear to use a custom summary index.
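For example, if the search head carries something like the following outputs.conf, everything it generates (summaries included) is forwarded to the indexers rather than kept locally. This is a sketch; the group name and server hostnames are placeholders:

```ini
# outputs.conf on the search head (sketch)
[tcpout]
defaultGroup = my_indexers
# false = do not also keep a local copy of what is forwarded
indexAndForward = false

[tcpout:my_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
```

With such a file in place, searching index=summary on the search head alone would return nothing; the summaries would live on the indexers.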
Now, I've checked in Manager and summary indexing is enabled, and the macros.conf.summary file specifies a summary index on the search head. Is there something I'm missing on the indexers that would stop them from summary indexing? At this point I think this is my biggest hurdle left to get over.
Currently the only thing I have on the indexers is the props transforms and the pan_logs index. Do I need to create a new index for the summary there and copy over macros.conf.summary?
OK, so I've confirmed issue 1 above: the summary index searches aren't returning anything.
This:
search index=summary DataCube=threat | bin _time span=5m | fillnull vsys app category src_ip dst_ip severity RISK threat_id CATEGORY | stats count by vsys app threat_id severity category src_ip dst_ip log_subtype CATEGORY RISK _time
doesn't work.
This:
index="pan_logs" pan_threat | bin _time span=5m | fillnull vsys app category src_ip dst_ip severity RISK threat_id CATEGORY | stats count by vsys app threat_id severity category src_ip dst_ip log_subtype CATEGORY RISK _time
works.
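To see whether the scheduled si_* searches are writing anything at all into the summary index, a quick check (field names here assume Splunk's standard summary-indexing metadata, which stamps each summary event with the name of the search that produced it):

```
index=summary | stats count by source, search_name
```

Zero results over a window where the si_* searches should have run points at the scheduled searches themselves (disabled, erroring, or forwarding their output elsewhere) rather than at the dashboards.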
Example:
I'm unable to search host names directly: host="pa-fw-name" doesn't work, but if I search 'pan_threat' and then click the host name, it gets appended to my search and works. Removing 'pan_threat' from the search, it still works. But if I type the host in directly, no go. Wildcarding it doesn't work either (even with 'pan_threat' as a preceding term), so 'pan_threat' host="pa*" is a no-go, yet completing the host name is OK. There are other searches with the same type of issue; that's just my best example of the problem.
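To see exactly which host values Splunk has stored (a typed exact-match search fails if the stored value differs even slightly from what you type), the metadata command is a quick check:

```
| metadata type=hosts index=pan_logs
```

Comparing the listed host values against what's being typed into the search bar usually shows whether the mismatch is in the value itself or in how the search is being parsed.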
What I'm not able to do at the moment:
1.
Dashboard data just doesn't load at all; I'm getting null data. But if I pull the highlighted piece of syntax out of the root search that runs before the summaries, that by itself pulls data.
2.
Strangely enough, I can't search a lot of specific items on their own or with a wildcard.
The app's main dashboard page has inline searches. Those searches use index=pan_logs. Other views have searches built on the macros. You have already modified those macros, but adding index=syslog was not necessary for those views.
Lastly, it is a good practice to keep different log types separated by indexes. I would not recommend sending all syslog type logs into one index.
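Creating a dedicated index is a one-stanza change in indexes.conf on each indexer, followed by a restart. A sketch using the default path layout:

```ini
# indexes.conf on each indexer (sketch)
[pan_logs]
homePath   = $SPLUNK_DB/pan_logs/db
coldPath   = $SPLUNK_DB/pan_logs/colddb
thawedPath = $SPLUNK_DB/pan_logs/thaweddb
```

The inputs.conf monitor stanza on the forwarder side would then point index=pan_logs instead of index=syslog.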
Quick question. I've built my "pan_props" app, created the usual default/local/metadata folders, and dropped the props and transforms from the app/default directory off the search head into my pan_props folder (I haven't yet deployed this to my indexers). The question is this: since the props does a lookup on line 1, do I need to also include the lookups folder from the app in my "pan_props" that will be deployed to the indexers, or is it not necessary that it be available at index and parse time? I know the lookups only happen at search time, but do they need to be available on the indexer?
Awesome, thanks a million! I'll be sure to post back my results for posterity, in case someone else comes along in my shoes looking for the same answer.
Yup! That covers it. Splunk recently began providing 'TA' apps that are the components for the forwarder/indexer tier in a distributed Splunk architecture. However, legacy apps and anything home-grown will be managed like this: grabbing the bits you need and deploying them where they're needed.
Ah OK, so rather than merging the transforms and props into the existing conf files under /etc/system/local on the indexers: create a custom app (I'm assuming a bit like a deployment app, under etc/apps/(app-name)/default/), toss in the props and transforms there, create the local dir and metadata folders, set app.conf to enabled, and then restart the indexers one after the other? Did I get that right? (I hate to ask 101-type questions; I've been reading all the various admin docs and find bits and pieces, but never really the entire picture.) Thanks!
No, that's all you need.
Grab the props/transforms and roll them into an app (pan_props).
Deploy app to the indexers (preferably using config mgmt tool or deployment server.)
Restart the indexers (serially preferred.)
Check that the app is enabled on the indexers and verify that the transforms are being applied (a typo is not a friend here).
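A minimal skeleton for such an app, continuing the pan_props example from above (the app.conf keys shown are the standard ones; adjust the label to taste):

```ini
# Directory layout:
#   $SPLUNK_HOME/etc/apps/pan_props/
#     default/app.conf
#     default/props.conf
#     default/transforms.conf
#     local/               (empty, reserved for site overrides)
#     metadata/default.meta

# default/app.conf
[install]
state = enabled

[ui]
# no views to show; this app only carries parse-time config
is_visible = false
label = pan_props
```

After the serial restart, a search like index=pan_logs | stats count by host, sourcetype on the search head is a quick way to confirm the transforms took effect on newly indexed data (host overrides only apply to events indexed after the change).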