Getting Data In

Can multiple sourcetype stanzas in props.conf point to one single report extract? (see below)

dmacgillivray
Communicator

Hello Splunk Community,

Does the setup below seem logical? I am unsure whether ASCII precedence (the lexicographic order Splunk uses when merging configuration files) comes into play with this approach.
See props.conf, and then transforms.conf below that.

Thank you in advance.

-----props.conf-----

[sourcetype1]
SHOULD_LINEMERGE = false
REPORT-inserts  = extract_inserts
REPORT-updates  = extract_updates
REPORT-discards = extract_discards
REPORT-deletes  = extract_deletes

[sourcetype2]
SHOULD_LINEMERGE = false
REPORT-inserts  = extract_inserts
REPORT-updates  = extract_updates
REPORT-discards = extract_discards
REPORT-deletes  = extract_deletes

[sourcetype3]
SHOULD_LINEMERGE = false
REPORT-inserts  = extract_inserts
REPORT-updates  = extract_updates
REPORT-discards = extract_discards
REPORT-deletes  = extract_deletes

-----transforms.conf----

[extract_inserts]
REGEX = inserts:\s+(?<inserts>\w*)
MV_ADD = true

[extract_updates]
REGEX = updates:\s+(?<updates>\w*)
MV_ADD = true

[extract_discards]
REGEX = discards:\s+(?<discards>\w*)
MV_ADD = true

[extract_deletes]
REGEX = deletes:\s+(?<deletes>\w*)
MV_ADD = true
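As a quick sanity check outside Splunk, the four REGEX patterns can be tried against a sample event with Python's re module. The event line below is hypothetical (the actual log format is not shown in this thread), and note that Splunk's PCRE named-group syntax (?<name>...) becomes (?P<name>...) in Python:

```python
import re

# Hypothetical sample event -- the actual log format is not shown in the thread.
event = "batch summary inserts: 120 updates: 45 discards: 3 deletes: 7"

# Splunk's REGEX values use PCRE named groups (?<name>...);
# Python's re module spells the same thing (?P<name>...).
patterns = {
    "inserts":  r"inserts:\s+(?P<inserts>\w*)",
    "updates":  r"updates:\s+(?P<updates>\w*)",
    "discards": r"discards:\s+(?P<discards>\w*)",
    "deletes":  r"deletes:\s+(?P<deletes>\w*)",
}

# Collect each field value that its pattern extracts from the event.
fields = {name: m.group(name)
          for name, pat in patterns.items()
          if (m := re.search(pat, event))}

print(fields)  # {'inserts': '120', 'updates': '45', 'discards': '3', 'deletes': '7'}
```

If a pattern fails to match here, the corresponding REPORT extraction would also produce no field in Splunk.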
1 Solution

dmacgillivray
Communicator

Hi lguinn, thanks for your help. I am trying to get these fields in as the root fields of a data model. The root event in the model only allows one root event search, which is what gets accelerated. The problem is that all of this data goes through a heavy forwarder (HF) that I don't have access to, so I assumed the only parsing I could do was at index time. Given the conditions I've noted, I gather that search time would be the only way to go? There is some confusion on my part, as I still want to make use of props.conf and transforms.conf.

Thanks,
Daniel



lguinn2
Legend

When you are using a heavy forwarder, the parsing is done on the forwarder.

But creating fields does not happen on the forwarder. Index-time fields are created on the indexer and added to the keyword index that is stored with the events.

Search time fields are exactly what you want. In order to have search-time fields though, the props.conf and transforms.conf belong on the search head. If you aren't using a search head, then the configuration files go on the indexer(s).
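For concreteness, a typical search-time deployment would put both files in an app's local directory on the search head (the app name below is just a placeholder):

-----file locations-----

$SPLUNK_HOME/etc/apps/my_field_extracts/local/props.conf
$SPLUNK_HOME/etc/apps/my_field_extracts/local/transforms.conf

Because these are search-time extractions, no data needs to be reindexed when the regexes change.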

And now that I have re-read transforms.conf.spec - I realize that your syntax will, in fact, give you search-time fields. So you were already doing "the right thing," even if you were unsure of the syntax.

My apologies for causing any confusion.

dmacgillivray
Communicator

Thanks for verifying. I am glad there is such a great community of users out there supporting Splunk. Your help is very much appreciated.

lguinn2
Legend

Yes, this will work. This is exactly why Splunk has the capability of referencing a transformation from props.conf - so the transforms.conf stanzas can be reused.

Also, precedence should not matter/affect these transforms.

--Removed earlier text about search-time vs. index-time field extractions--

Your props.conf and transforms.conf will give you search-time field extractions - so these configuration files should be placed on your search head. If you are not using a search head, place the files on the indexer(s).
