Getting Data In

If I have a single data input, how can I edit my inputs.conf and outputs.conf to send this data to 2 different hosts, indexes, and sourcetypes?

tmarlette
Motivator

I have a single data input (myLog.log) and I need to send this same data to 2 different hosts, indexes, and sourcetypes. I am using a deployment server, so I have built two separate custom apps.

App1: inputs.conf

[monitor:///opt/splunk/var/log/myLog.log]
crcSalt = <SOURCE>
disabled = 0
sourcetype = testy_testy
index = test_apps

App1:outputs.conf

[tcpout]
defaultGroup = myCustomer1

[tcpout:myCustomer1]
server = 10.10.10.10:8089

App2:inputs.conf

[monitor:///opt/splunk/var/log/myLog.log]
crcSalt = <SOURCE>
disabled = 0
sourcetype = staging_test
index = staging

App2:outputs.conf

[tcpout]
defaultGroup = myCustomer2

[tcpout:myCustomer2]
server = 10.10.10.20:10080

Both of these apps exist on the same machine; however, I am not seeing any data in my second index. Will the fishbucket not register this as a new data source, and is there a way to configure this?

1 Solution

tmarlette
Motivator

The only way to do this is to route data via the _TCP_ROUTING setting in inputs.conf, and then change the sourcetypes / indexes using transforms at the receiving indexer level. This adds quite a bit of administrative overhead, but it is doable.
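As a rough sketch of the indexer-side piece (the stanza and transform names below are illustrative, not taken from the original setup), props.conf and transforms.conf on the receiving indexer could rewrite the index key for the routed events:

props.conf (on the receiving indexer):

[source::/opt/splunk/var/log/myLog.log]
TRANSFORMS-route_index = set_staging_index

transforms.conf (on the receiving indexer):

[set_staging_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = staging

DEST_KEY = _MetaData:Index is the documented key for overriding the destination index at parse time; a similar transform with DEST_KEY = MetaData:Sourcetype and FORMAT = sourcetype::staging_test would rewrite the sourcetype.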

For more information:
http://docs.splunk.com/Documentation/Splunk/6.0.2/Forwarding/Routeandfilterdatad
https://answers.splunk.com/answers/302247/routing-events-to-a-specific-index-based-on-a-fiel.html



MuS
SplunkTrust
SplunkTrust

Hi tmarlette,

I assume there is some problem with the precedence of your config files; see the docs http://docs.splunk.com/Documentation/Splunk/6.3.0/Admin/Wheretofindtheconfigurationfiles
I would rather try the filter and routing approach described in the docs here http://docs.splunk.com/Documentation/Splunk/6.3.0/Forwarding/Routeandfilterdatad if you have a heavyweight forwarder, or send all events to both indexers and filter and nullQueue them on the indexer http://docs.splunk.com/Documentation/Splunk/6.3.0/Forwarding/Routeandfilterdatad#Filter_event_data_a...

For your example it would be on server 1:

props.conf

[testy_testy]
TRANSFORMS-null-testy_testy = setnull

transforms.conf

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

and for server 2 it would be:

props.conf

[staging_test]
TRANSFORMS-null-staging_test = setnull

transforms.conf

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

or vice versa 😉 Hope this makes sense ...

cheers, MuS

Update:

Just found this: it should be possible. You have to define a group in each outputs.conf and specify which input (log path) you want to send to which group (Link to routing documentation).

outputs.conf in App1:

[tcpout:App1]
server=server1:9997

outputs.conf in App2:

[tcpout:App2]
server=server2:9997

inputs.conf in App1:

[monitor:///opt/splunk/var/log/myLog.log]
crcSalt = <SOURCE>
disabled = 0
sourcetype = testy_testy
index = test_apps
_TCP_ROUTING = App1

inputs.conf in App2:

[monitor:///opt/splunk/var/log/myLog.log]
crcSalt = <SOURCE>
disabled = 0
sourcetype = staging_test
index = staging
_TCP_ROUTING = App2

Make sure you test the precedence of the config files and stanzas with btool to see what is actually applied to the input:

$SPLUNK_HOME/bin/splunk cmd btool --debug inputs list | grep -v default 

Hope this makes more sense now 🙂

cheers, MuS

tmarlette
Motivator

hey MuS,

This looks like it will just route either of these data sets to the nullQueue on a HF, instead of to separate indexers / indexes / sourcetypes. I may not be understanding something.


tmarlette
Motivator

It does. Your latest method is what I'm doing right now, actually, and the forwarder simply doesn't ingest the file twice.


MuS
SplunkTrust
SplunkTrust

I assume you have conflicting config here because of this:

How app directory names affect precedence
Note: For most practical purposes, the information in this subsection probably won't matter, but it might prove useful if you need to force a certain order of evaluation or for troubleshooting.

To determine priority among the collection of apps directories, Splunk uses ASCII sort order. Files in an apps directory named "A" have a higher priority than files in an apps directory named "B", and so on. Also, all apps starting with an uppercase letter have precedence over any apps starting with a lowercase letter, due to ASCII sort order. ("A" has precedence over "Z", but "Z" has precedence over "a", for example.)

In addition, numbered directories have a higher priority than alphabetical directories and are evaluated in lexicographic, not numerical, order. For example, in descending order of precedence:

$SPLUNK_HOME/etc/apps/myapp1
$SPLUNK_HOME/etc/apps/myapp10
$SPLUNK_HOME/etc/apps/myapp2
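The lexicographic ordering in that excerpt is easy to confirm outside Splunk with a byte-order sort (directory names taken from the list above):

```shell
# ASCII/lexicographic sort: "myapp10" sorts before "myapp2",
# because '1' < '2' at the seventh character.
printf 'myapp2\nmyapp10\nmyapp1\n' | LC_ALL=C sort
# prints: myapp1, myapp10, myapp2 (one per line)
```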

MuS
SplunkTrust
SplunkTrust

Update ping; just added some new stuff to the answer
