Getting Data In

How do I route data from a universal forwarder and a local machine to two different indexes if they share the same directory path?

Bliide
Path Finder

I have data on a local machine in the following directory path: d:\log files\app name

I have data on a server with a universal forwarder in the following directory path: d:\log files\app name

The application records all the log files for the different applications within the same directory. I have the local data indexed to IndexA and I would like to index the data from the forwarded server to IndexB. How do I tell Splunk to only index the data from the forwarder on IndexB when they share the same directory path?

1 Solution

aakwah
Builder

You can create a monitor stanza in inputs.conf to process the forwarder's logs (they should be named differently from the local files, for example ending in .gz), as follows:

[monitor:///tmp/*.gz]
index = IndexB
sourcetype = fwd_logs

This stanza is for a Linux system; please modify the path for Windows.
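As a sketch, a Windows equivalent might look like the following, assuming the forwarded logs land in the directory from the question and carry the .gz suffix suggested above:

[monitor://d:\log files\app name\*.gz]
index = IndexB
sourcetype = fwd_logs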

Regards,
Ahmed


pradeepkumarg
Influencer

The same path shouldn't matter as long as the targeted inputs.conf files are different. In the inputs.conf on the forwarder, set the index attribute to IndexB.
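For example, the forwarder's own inputs.conf (typically under $SPLUNK_HOME/etc/system/local/ or an app's local directory) could monitor the directory from the question and route it to IndexB; this is a sketch, not a drop-in config:

[monitor://d:\log files\app name]
index = IndexB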



Bliide
Path Finder

It worked. I created a new index in an edited monitor stanza, and IndexB is now pulling in the correct data, but only data that was not indexed by the old setup. How can I tell IndexB to index the data in the log files that were already indexed previously?


aakwah
Builder

Good news !

You need to reprocess the old files by moving them to the new monitored directory. The issue is that the Splunk forwarder will not index them again because they have already been processed, so you have two options for reprocessing the old files:

-The Splunk forwarder keeps track of processed files through the fishbucket directory ("/opt/splunkforwarder/var/lib/splunk/fishbucket/"). If you remove all of its contents, Splunk will process every file under the monitored directories again, which will get the required files into the new index "IndexB". However, this will also cause duplicates, because all files will be reprocessed, so you should first move any already-processed files you don't want re-indexed to an archive directory.

-The second option is to make a small edit to the files you want to reprocess, for example by adding a newline or a space, since Splunk checks a file's checksum to decide whether it has already been processed. Unfortunately, changing the file name alone is not enough.
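The two options above can be sketched as shell commands. This is an illustration against throwaway paths only: the forwarder home and the log directory below are stand-ins, and on a real system you would stop the forwarder before touching the fishbucket.

```shell
# --- Option 1: wipe the fishbucket so ALL monitored files are re-read ---
SPLUNK_HOME=./splunkforwarder                      # real default: /opt/splunkforwarder
mkdir -p "$SPLUNK_HOME/var/lib/splunk/fishbucket"  # stand-in for the real directory
# "$SPLUNK_HOME/bin/splunk" stop                   # stop the forwarder first on a real system
rm -rf "$SPLUNK_HOME/var/lib/splunk/fishbucket"
# "$SPLUNK_HOME/bin/splunk" start

# --- Option 2: change each old file's checksum so only those files are re-read ---
mkdir -p old_logs
printf 'sample event\n' > old_logs/app.log         # stand-in for an old log file
for f in old_logs/*.log; do
  printf '\n' >> "$f"                              # appending a newline changes the checksum
done
```

Option 1 re-indexes everything under every monitored path, hence the duplicate warning above; option 2 is more surgical because only the files you touch are re-read.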

Hope this answers your questions; please let me know if you still have issues.

Regards,
Ahmed
