
How to get transforms for the Splunk for Palo Alto Networks app to create and assign proper sourcetypes in a Splunk Cloud environment?

ss1210
New Member

Hi,

I'm using Splunk Cloud and trying to get the Splunk for Palo Alto app running, but I'm sure this issue applies to other apps. Since we need a tcp/udp input, I created a universal forwarder and installed the PAN app on my cloud search head.
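
For reference, the input on the forwarder is just a plain UDP stanza along these lines (port 514 and the pan_log sourcetype are what the docs suggested for my app version; yours may differ):

# inputs.conf on the universal forwarder
[udp://514]
sourcetype = pan_log
# keep Splunk from appending its own timestamp to each UDP packet
no_appending_timestamp = true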

The PAN logs are forwarding, but the sourcetypes aren't being created and assigned the way the app's transforms are supposed to do it. And if I unzip the PAN app onto the forwarder itself, nothing flows at all.

I've read that a universal forwarder doesn't process the transforms anyway; they're supposed to run on the indexer. But in Splunk Cloud I have no way to install the PAN app on my indexers. Does anyone know how to get the PAN app (or other apps) to handle the transforms?

In the Splunk Cloud settings UI, I can see that transform listed, and its regex matches a raw event, but the event still isn't assigned the right sourcetype. I'm assuming that's because the PAN app isn't on my indexers.
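
For anyone unfamiliar with it, the transform in question is an index-time sourcetype rewrite, roughly like this (paraphrasing from memory, not quoting the app's exact stanzas):

# props.conf (in the PAN app)
[pan_log]
TRANSFORMS-sourcetype = pan_traffic

# transforms.conf
[pan_traffic]
# if the raw event contains ",TRAFFIC," rewrite its sourcetype
REGEX = ,TRAFFIC,
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::pan_traffic

Because DEST_KEY rewrites happen at parse time, they only run on the first full Splunk instance the data passes through, which in my setup would be the cloud indexers I can't install apps on.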

Has anyone run into this before and figured out what to do?

Thank you!
Steve


acharlieh
Influencer

Typically with Splunk Cloud, if you need custom index-time parsing, you'll want to install and configure one or more Heavy Forwarders.

You then point your Universal Forwarders at the Heavy Forwarders (which do the parsing for you), and the Heavy Forwarders are in turn configured to send to your Splunk Cloud instance. Since the data arrives already parsed, the Splunk Cloud indexers just write it straight to disk.
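
As a rough sketch (hostnames and ports here are placeholders, not anything from your environment):

# outputs.conf on each Universal Forwarder: send raw data to the HFs
[tcpout]
defaultGroup = heavy_forwarders

[tcpout:heavy_forwarders]
server = hf1.example.com:9997, hf2.example.com:9997

# inputs.conf on each Heavy Forwarder: listen for forwarder traffic
[splunktcp://9997]

You install the PAN app on the Heavy Forwarders so its index-time transforms run there, and the HFs' own outputs.conf (typically the forwarder credentials app that Splunk Cloud provides) points at your cloud indexers.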

I'm not a Splunk Cloud customer myself, but if you log a support request, they might be able to add the app onto your indexers, but the Heavy Forwarder solution puts more control into your hands.

(Be careful and on the lookout for bottlenecks with this. @dwaddle has a better explanation, but the gist is: right now every UF sends data directly to your indexers; when you add HFs, the UFs load-balance across the HFs (the same amount of traffic as before), and the HFs then load-balance the parsed output across the indexers. If you have fewer HFs than indexers, you're introducing a bottleneck into your processing pipeline.)

If you're interested in the nitty-gritty of how the pipelines work from input through indexing, you may also like the How Indexing Works community wiki page. @amrit and @Jag also gave an excellent .conf2014 talk on the indexing pipeline entitled "How Splunkd Works"; they'll be reprising it at .conf2015, along with a talk on Pipeline Sets (a new feature, perhaps? I haven't personally learned about these yet).
