We are converting from a single Splunk instance to a cluster. At the same time we are implementing Universal Forwarders on several of our application servers.
I have several props.conf and transforms.conf rules written for the old system that I would like to implement on the new system. So far I have been unable to get the rules to activate.
1) I am able to select the index I want the logs to go to, but only by defining it on the UF. I have tried several different configurations for inputs.conf in my app on the Splunk indexers, but to no avail.
2) I would also like to push files from the deployment server to the UFs. I have been able to get the files over by including the host in the server class, but I have been unable to craft inputs.conf and outputs.conf files that will work on both the UFs and the indexers.
Anyone have some suggestions?
1) If you are using a UF, the input phase takes place on the forwarder, and the parsing and indexing phases take place on the indexer. This means that input-related configuration needs to be done on the forwarder, e.g.:
[monitor:///var/log/my_app]
index = blah
sourcetype = app_log
The inputs.conf on the indexer will most likely only need a TCP listener.
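For example, assuming the conventional receiving port 9997 (it just needs to match whatever port your forwarders' outputs.conf points at), the indexer-side inputs.conf can be as small as:
[splunktcp://9997]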
2) As mentioned in 1), you will need different inputs.conf files on forwarders and indexers. The outputs.conf file on the forwarders does not need to be complicated. Do you need an outputs.conf on the indexers? Where would they forward their data? If you are setting up load-balanced forwarding between the indexers in your cluster, something like this will most likely work in the outputs.conf on your forwarders:
[tcpout]
defaultGroup = lb
[tcpout:lb]
server = 1.2.3.4:4433, 1.2.3.5:4433
autoLB = true
UPDATE: If the indexing and searching functions are divided between separate hosts (dedicated indexer(s) and search head(s)), they will need different props.conf parameters. Line breaking, time formats, and other settings applied at index time need to go in the indexers' props.conf, while field extractions, tags, eventtypes, and other search-time configuration needs to be in the search heads' props.conf.
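As a rough sketch of how that split might look (the sourcetype matches the app_log example above; the transform and extraction names are made up and would point at stanzas in the corresponding transforms.conf):
On the indexers, props.conf:
[app_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TRANSFORMS-route = my_index_time_transform
On the search heads, props.conf:
[app_log]
REPORT-fields = my_search_time_extractions
EXTRACT-status = status=(?<status>\d+)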
UPDATE 2: Yes, they will most certainly use different stanzas. For inputs this is especially true. You would not want to open up a TCP port listening for log traffic (default 9997) on all your forwarders. Conversely, you would not want to monitor c:\logs\IIS on your Linux-based indexer.
Thus you need two 'applications' (each app really being just an inputs.conf file) and, at minimum, two serverclasses (indexer and forwarder).
Indexers get an inputs.conf file telling them to listen on port 9997, and forwarders get a different inputs.conf file telling them to monitor some directory or file.
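If you are pushing these apps from a deployment server, a serverclass.conf along these lines would map each app to the right role (the hostname patterns and app names here are made up; adjust them to your environment):
[serverClass:indexers]
whitelist.0 = idx*.example.com
[serverClass:indexers:app:indexer_inputs]
restartSplunkd = true
[serverClass:forwarders]
whitelist.0 = app*.example.com
[serverClass:forwarders:app:forwarder_inputs]
restartSplunkd = true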
Regarding props.conf files, you can in most cases (I believe) just push the same file out everywhere, as the server role (UF, HF, LWF, indexer, search head) will cause each instance to only read/use the parts appropriate for its role.
For more information, see:
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Outputsconf
http://docs.splunk.com/Documentation/Splunk/4.3/Admin/Configurationparametersandthedatapipeline
http://docs.splunk.com/Documentation/Splunk/4.3/Deploy/Datapipeline
Hope this helps,
Kristian
See further updates above. /k
Kristian,
I think 'different' is the wrong word. From what I have read they use different stanzas and can thus be placed in the same file.
From what I have found so far, my apps/.../props.conf has not made it to my Search Heads.
updated my answer for clarity /k
I have the UF sending the data; I just can't seem to get the props.conf on my indexer to be applied.
From your second link, I think I should be looking at the search head and not the indexer (I am using TRANSFORMS and REPORT settings).
Let me chew on this for a few hours. Thank you, Kristian.