I am trying to achieve the following:
1 - define the index on the forwarder directly in the inputs.conf (let's say index=default_index)
2 - once the data reaches the indexer, a transform modifies the index based on a string taken from the event's source (let's say index=var, since the file is somewhere in /var/log/)
3 - if the index defined at 2) doesn't exist, send the data to the index defined at 1)
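Step 1 is just a plain monitor input; a minimal sketch of what I mean, assuming a monitor stanza (the path here is illustrative):

```
# inputs.conf on the forwarder -- step 1 (path is illustrative)
[monitor:///var/log/IBM/apps]
index = default_index
disabled = false
```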
Right now it works as it should --> the second (and last) definition of the index takes precedence, but I end up with many error messages:
> Search peer splunk02-indexer has the
> following message: received event for
> unconfigured/disabled/deleted
> index='xxx-xxx' with
> source='source::/var/log/abc/apps/xxx-xxxi/IN147Z.xxx-xxx.log'
> host='host::mdrd90abc1'
> sourcetype='sourcetype::abc_apps' (3
> missing total)
My question: is this setup possible?
Thanks
Unfortunately yes, it comes down to "dynamic" indexes; it's about an IBM middleware, where several applications (with their corresponding processes) run on several environments (test/dev/inte/prod).
From the Splunk point of view, it is unknown when an application is deployed and what name it will have; what we did, as a prerequisite, is require every application to write its logs into a folder with its name, and we defined a whole nomenclature.
Basically I now have a structure like this (whatever is below IBM, apart from apps/, is not such a big problem):
/var/log/IBM/apps/App1
/var/log/IBM/apps/App2
...
/var/log/IBM/apps/AppX
the App* folder is created automatically every time a new app is deployed.
What I did:
- on the forwarder, in inputs.conf --> all files under /var/log/IBM/apps/ get the app index
- once they reach the indexer, the index is rewritten with a string taken from the path
```
SOURCE_KEY = MetaData:Source
REGEX = /var/log/IBM/apps/([^/]+)/([^/]+)\.([^/]+)\.log
DEST_KEY = _MetaData:Index
FORMAT = $1
```
(I take it that this operation will always take precedence, so my index will always be the one defined in this step.)
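For reference, those transform settings live in a named stanza in transforms.conf and have to be enabled from props.conf on the indexer; a sketch, assuming the sourcetype abc_apps from the error message above (the stanza name rewrite_index_from_source is mine):

```
# transforms.conf -- the settings above, wrapped in a (hypothetical) named stanza
[rewrite_index_from_source]
SOURCE_KEY = MetaData:Source
REGEX = /var/log/IBM/apps/([^/]+)/([^/]+)\.([^/]+)\.log
DEST_KEY = _MetaData:Index
FORMAT = $1

# props.conf -- apply the transform to the incoming sourcetype
[abc_apps]
TRANSFORMS-set_index = rewrite_index_from_source
```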
Now I have to add the index manually, and until I do, the logs go nowhere because the index is missing. Is there a way to make Splunk fall back to the index defined in step 1 on the forwarder when the index defined in step 2 doesn't exist?
hopefully I made sense 🙂
back to your questions:
Yes, unfortunately, I only see it when the "index missing" error pops up
Can you explain what you DO know in advance about the data?
Nothing, except the path and the structure/nomenclature of that path, on which I have to build my extraction regexes; I know that the fourth position is the index, and a few more fields come from the filename.
But, as said earlier, it is more of a theoretical question, based on the similarity with ACLs/iptables rules (for example), where the last rule that is satisfied takes precedence.
Thanks for your time
I started to answer and decided it would be more helpful if you could provide your transforms.conf file for us.
Also... it would help if you explained WHY you want to do this.
Is the name of the app truly completely dynamic and a total surprise?
Consider this: unique indexes are created because you want all the data in that index to have the same retention policy.
Sourcetypes are a further level of granularity. Can you explain what you DO know in advance about the data?
I think there is a bit of a "sideways angle" to the way you've done it... at least in terms of Splunk.
You cannot create a dynamic index in the .conf files (just naming it doesn't make it exist), but you can assign data to a sourcetype that has no definition, because the assignment in transforms creates it in the system and it inherits from defaults.
Not so with indexes. You can, however, create an index via the REST API, but the cost of "waiting" for it to be there would be giant.
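For completeness, creating an index over the REST API looks roughly like this (a sketch: the host, port, credentials, and index name are placeholders; the endpoint is Splunk's standard data/indexes one):

```
# Create a new index via the Splunk REST API (placeholders throughout)
curl -k -u admin:changeme \
     https://splunk02-indexer:8089/services/data/indexes \
     -d name=app1
```

Even if you automated this, the data arriving before the index exists would still hit the same "unconfigured/disabled/deleted index" error, which is the "waiting" cost mentioned above.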