Getting Data In

What does sendCookedData actually do on a heavy forwarder (i.e. what does 'cooked' mean at a technical level)?

moonhound
Explorer

What transformations / processing happen when data is cooked on a heavy forwarder? Is it the same as the data being indexed, just without local storage (barring also setting indexAndForward to true)? Put another way: if an app says it has 'index-time operations', will those happen during the heavy forwarder's processing of the data? I can see that props.conf changes are applied, but I don't have a lot of leeway for testing at the moment.

I have a heavy forwarder sitting in front of an indexer cluster as a means of load balancing / homogenizing data that doesn't play nicely with load balancing or with Splunk in general. Some apps that people here have requested we set up say they can't handle indexing in an indexer cluster, so I'm trying to verify whether we can push those out onto the heavy forwarder and still end up with usable data in our cluster.

1 Solution

rphillips_splk
Splunk Employee

This post gives an explanation of cooked vs. raw data: http://answers.splunk.com/answers/292/what-is-the-distinction-between-parsed-unparsed-and-raw-data.h... As data comes into an indexer, it passes through a series of pipeline queues (tcpin, parsingQueue, AggQueue, typingQueue, indexingQueue) before it is written to disk (http://docs.splunk.com/File:Cloggedpipeline.png). If you have a heavy forwarder (i.e. a full Splunk instance acting as a forwarder), the data is processed through these queues at the heavy forwarder and not again at the indexer.
If you have an app whose props.conf / transforms.conf settings would typically be placed on an indexer, place them on the heavy forwarder instead, since the indexer will not re-parse data after the heavy forwarder has processed it.
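As a concrete sketch of what "place those on the heavy forwarder" means: an index-time transform, such as routing events to a different index based on the host field, is defined in props.conf / transforms.conf and must live on the heavy forwarder to take effect. The sourcetype name, regex, and index name below are hypothetical examples, not part of any particular app:

```ini
# props.conf on the heavy forwarder (hypothetical sourcetype)
[my_custom:sourcetype]
TRANSFORMS-route_by_host = route_prod_hosts

# transforms.conf on the heavy forwarder
[route_prod_hosts]
# Send events whose host begins with "prod-" to a dedicated index.
# _MetaData:Index is the destination key Splunk uses for index routing.
REGEX = ^prod-
SOURCE_KEY = MetaData:Host
DEST_KEY = _MetaData:Index
FORMAT = prod_index
```

If the same stanzas were deployed only on the indexers, they would have no effect on this data, because the events arrive at the indexers already cooked and skip the parsing pipeline there.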


moonhound
Explorer

That's very helpful, thank you!
