Getting Data In

Heavy Forwarder Forwarding Question

jmads
Explorer

I am a Splunk novice and have created a Splunk indexer cluster in a Windows environment. I have two heavy forwarders gathering event log data from machines in each heavy forwarder’s specific subnet. When I log onto either indexer cluster member or the search head, I can see that event log data is being collected from both heavy forwarders in the Main index – good so far.

Now, on the Master Node, I updated the indexes.conf file (located at \etc\master-apps\_cluster\local) and created two new indexes – one named Cat and one named Dog. After distributing the configuration bundle, both indexers in the indexer cluster now show the Cat and Dog indexes – this part is good too.
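
For reference, the new index stanzas look something like this (a minimal sketch; the path and sizing settings are illustrative, and repFactor = auto is what tells the cluster peers to replicate the indexes):

[Cat]
homePath = $SPLUNK_DB/cat/db
coldPath = $SPLUNK_DB/cat/colddb
thawedPath = $SPLUNK_DB/cat/thaweddb
repFactor = auto

[Dog]
homePath = $SPLUNK_DB/dog/db
coldPath = $SPLUNK_DB/dog/colddb
thawedPath = $SPLUNK_DB/dog/thaweddb
repFactor = auto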

I cannot for the life of me figure out how to get one heavy forwarder to send all of the event log data it collects to the Cat index instead of the Main index, and the other heavy forwarder to send its data to the Dog index instead of Main. Can someone help me? I appreciate any assistance. Thanks!

1 Solution

pdaigle_splunk
Splunk Employee

On the Heavy Forwarder, modify the inputs.conf file and add an "index =" setting for each data source to define which index (Cat, Dog, Main, etc.) it should go to:

http://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf

There are great examples and more information on this in the inputs.conf spec documentation.
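
For example, for a Windows event log input it looks something like this (a minimal sketch; the stanza name is illustrative, use whatever inputs you actually have defined):

[WinEventLog://Security]
disabled = 0
index = Dog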


jmads
Explorer

I know that I have to put index = Dog somewhere in an inputs.conf file, but where? I added it to my inputs.conf file's default stanza in $SPLUNK_HOME/etc/system/local and rebooted, to no avail. What this boils down to is: how do I change the Heavy Forwarder's default index from Main to Dog?

0 Karma

jmads
Explorer

FrankVl, I truly appreciate your help! Unfortunately, my Splunk system is totally disconnected from the Internet; however, I have run the tool. Using the command you gave, I compared the output from a similar Heavy Forwarder that has not been pointed to Dog with the output from the Heavy Forwarder that has. Everywhere the latter shows index = Dog, the former shows index = default, yet both are still dumping all of their contents into Main. In fact, on the Heavy Forwarder pointing to Dog, there is no input pointing to Main or default. I even rebooted the Heavy Forwarder again and still no luck. Do you have any other ideas? In the meantime, I can start the process of getting the output from the command over to the Internet. Again, thank you!

0 Karma

pdaigle_splunk
Splunk Employee

Are you collecting data on the Heavy Forwarder using Universal Forwarders from individual hosts/servers/computers? Or how does data come into the Heavy Forwarder? Is the Heavy Forwarder only being used to consolidate the data into a single stream to the Indexers, or are you doing any sort of manipulation of the data once it hits the Heavy Forwarder, i.e., some sort of filtering and routing configuration?

0 Karma

jmads
Explorer

I simply had SCCM push out the Universal Forwarder to the Windows machines. The Universal Forwarder is configured to send the System and Security event logs to the Heavy Forwarder. The HF then sends them to the indexer cluster's default index of Main. I need this to be a different index. Thanks for helping!
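
For what it's worth, the forwarding side is just the stock setup, roughly like this (a sketch; the output group name hf_group is a placeholder, the receiving port is the usual 9997):

outputs.conf on the Universal Forwarder:

[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
server = SplunkHF-Dog01:9997

inputs.conf on the Universal Forwarder:

[WinEventLog://System]
disabled = 0

[WinEventLog://Security]
disabled = 0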

0 Karma

pdaigle_splunk
Splunk Employee

If the Heavy Forwarder is just configured to forward what is being sent from the Universal Forwarders, then that could explain why the data is still going to the default Main index on the Indexers. The Heavy Forwarder only applies those configuration files to data it is collecting or storing itself, not to data it is just forwarding on. In this case, you need to go to the Universal Forwarder and modify the inputs.conf file there to define the index for the defined inputs. Test it by making the change on one of the Universal Forwarders and see whether you are getting data into the index then.
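
For example, on one of the Universal Forwarders collecting the Windows event logs, a minimal sketch of the change (the inputs may live in an app under etc/apps rather than etc/system/local, but the stanzas look the same):

[WinEventLog://System]
disabled = 0
index = Dog

[WinEventLog://Security]
disabled = 0
index = Dog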

jmads
Explorer

AWESOME! It works! Now, I just need to update the deployment server to pass out Index = Dog to all of the hosts! Thanks for all of the help!
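
In case it helps anyone else, the plan is roughly this (a sketch; the app and server class names are made up for illustration): put the inputs.conf with index = Dog into a deployment app such as $SPLUNK_HOME/etc/deployment-apps/dog_windows_inputs/local/inputs.conf, then map the clients to it in serverclass.conf on the deployment server:

[serverClass:dog_subnet]
whitelist.0 = *

[serverClass:dog_subnet:app:dog_windows_inputs]
restartSplunkd = true

(In practice the whitelist would match only the hosts in that subnet rather than *.)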

0 Karma

pdaigle_splunk
Splunk Employee

That is great news! Glad we were able to help you out and get you headed in the right direction! You're welcome!

0 Karma

FrankVl
Ultra Champion

Can you share the inputs.conf for the actual inputs that you configured? It could be that those contain an explicit "index=main" setting.

If you don't know where to find those inputs.conf files, try the following on your Heavy Forwarder from $SPLUNK_HOME/bin/: ./splunk cmd btool inputs list --debug

Then share the output of that command here (mask any sensitive information if needed).

jmads
Explorer

Thank you for the responses! If I understand this correctly, I need to have a stanza for every host that is forwarding its event logs to the heavy forwarder? If I have 500 hosts sending event logs to one heavy forwarder, I will need 500 stanzas in that heavy forwarder's inputs.conf file? That means every time a machine is added to that subnet and receives the universal forwarder via SCCM, I will need to go to the heavy forwarder and update the inputs.conf file? I hope that this is wrong.

I was able to edit the inputs.conf file to import perfmon data into the Dog and Cat indexes; however, I was not able to get the event logs forwarded to the Heavy Forwarders from several hundred machines into the proper index (this data still shows up in the Main index). I wish there were simply a place in the inputs.conf file where I could change this default Main index to, say, index = Dog or Cat to get the logs there.
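
(The perfmon inputs that did land in the right index look something like this; the object, counters and interval are just examples:)

[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = _Total
interval = 10
index = Dog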

I tried the following inputs.conf file for getting the event logs forwarded to the Dog index instead of Main, based on ddrillic's response.

[default]
host = SplunkHF-Dog01

[tcp://SplunkHF-Dog01:9997]
connection_host = DNS
disabled = 0
sourcetype = WinEventLog:Security
source = WinEventLog:Security
index = Dog

What am I doing wrong? What else can I try?

0 Karma

FrankVl
Ultra Champion

No, you just need to set the index= for all your existing inputs. No need to define whole new inputs.

Or, if you want to apply it to all your inputs, you can also specify it under the [default] stanza, and make sure there are no explicit index=main settings elsewhere.

jmads
Explorer

FrankVl, I appreciate the response! I added index = Dog to my default stanza and the logs are still going to Main (and I did reboot the heavy forwarder as well as the members of the indexer cluster). Let me make sure that I have everything correct...

The inputs.conf file I am editing is in $SPLUNK_HOME/etc/system/local.

The entire inputs.conf file is
[default]
host = SplunkHF-Dog01
index = Dog

The original inputs.conf file that was there was just missing the index line.

No other settings have been changed in the heavy forwarder conf files. Should I update another conf file? As this conf file is in the local folder, should its settings take precedence over any other inputs.conf files? Thanks for the help!!

0 Karma

ddrillic
Ultra Champion

Something like this should work in inputs.conf -

[tcp://<host>:<port>]
connection_host = DNS
disabled = 0
sourcetype = <sourcetype>
source = <source>
index = <index name>

0 Karma

jmads
Explorer

PDAIGLE came up with the correct answer in one of the comments below!

0 Karma