Question about reindexing indexed data:
I have a legacy Splunk 4.2.x server running.
It's set to index all data and forward it on to a new instance.
The intention is to deprecate the legacy instance, but not until the new instance is up and running.
The new instance has various props.conf and transforms.conf files configured. I also have various universal forwarders, and data from those universal forwarders is transformed on the new instance just as I want; in this case that is mostly setting an appropriate index to write to.
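For context, the routing on the new instance looks roughly like this. This is a minimal sketch; the sourcetype, stanza, and index names are placeholders, not my actual config:

```ini
# props.conf (on the new instance) -- sourcetype name is a placeholder
[my_sourcetype]
TRANSFORMS-route_index = set_app_index

# transforms.conf -- routes matching events to a specific index
[set_app_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = app_logs
```

This works as expected for events arriving directly from the universal forwarders.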
However, this is not the case for data forwarded from the legacy instance to the new instance: none of the props.conf and transforms.conf settings seem to have any effect on the new instance.
When I convert the legacy instance to a Light Forwarder ('splunk enable app SplunkLightForwarder'), everything starts working as I want: props.conf and transforms.conf process the events on the new instance as expected.
So why is this? I assume the (now reconfigured) light forwarder is sending raw/uncooked data, and my new instance then applies the props/transforms just fine.
The event data has been parsed (and indexed) on my legacy server and then forwarded to my new instance, so why don't my props/transforms apply on the new instance?
Do props/transforms only apply once in the forwarding pipeline?
How can I reapply props/transforms modifications on my new Splunk instance?
Can't cooked events be re-cooked? Is this the problem?
Or should applying props/transforms at each of several sequential forwarders be possible?
I also tried setting 'sendCookedData = false' on the legacy instance, but no good :(
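In case it matters, this is where I set it; the output group and host names below are placeholders for my real ones:

```ini
# outputs.conf on the legacy instance (group/host names are placeholders)
[tcpout:new_indexer]
server = newhost:9997
sendCookedData = false
```

(As far as I understand, a raw stream like this would need a plain tcp input on the receiver rather than the usual splunktcp receiving port, which may be why it did not help here.)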
Can someone provide some insight, please?
Though I have not experimented with your type of setup, I do believe you're right: data passes through the parsing phase just once.
So when your legacy machine is configured as a Heavy Forwarder with index-and-forward, the local props and transforms are applied. When you configure it as a Light Forwarder, the props and transforms on the new indexer are used instead.
Much as with porkchops, I don't think you can cook, uncook, and re-cook your data, i.e. force the new indexer to redo the parsing. Also, with index-and-forward enabled on the legacy machine, I don't think you can send uncooked data to a Splunk instance further down the processing line; the legacy machine will parse and index first, then forward the already-cooked events.
Would it not be possible to copy the relevant parts of props.conf and transforms.conf from the new indexer to the legacy machine and have the parsing done there?
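Something along these lines on the legacy machine, perhaps. Just a sketch; the host pattern and index name are hypothetical:

```ini
# props.conf on the legacy (parsing) machine -- host pattern is hypothetical
[host::legacy-source*]
TRANSFORMS-route = route_to_new_index

# transforms.conf -- same index-routing mechanism, applied at parse time
[route_to_new_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = new_index
```

Since the legacy machine is the instance doing the parsing in your current setup, that is where these stanzas would need to live for them to take effect.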
Hope this helps,