Getting Data In

How to implement tagging on a universal forwarder to categorize data so we can filter our searches?

vinceskahan
Path Finder

I'm totally lost trying to decipher the impossibly dense, abstract documentation here. I need to do something that I'd hope is simple, and a full example is really needed. I'm getting nowhere fast trying to wrap my mind around the circularly referencing docs, none of which have CLI examples at all....

The problem: I have a variety of Linux VMs running universal forwarders, forwarding syslogs, custom logs, and the like to the central Splunk server we've set up. We try to notionally categorize each VM into groups that make sense for us (e.g., product-ABC-production-servers or product-XYZ-development-servers).

  • How do I define things on the forwarder machines so that all the data from a given system is categorized, letting us filter our searches etc. based on that categorization/tagging/bucketing/whatever-word-you-want-to-use?

  • What would a typical inputs.conf file entry for forwarding /var/log/messages look like?

  • What other file(s) do I need to edit to make the tagging/annotating happen?

  • What would a working example of 'those' files look like?

1 Solution

vinceskahan
Path Finder

Found an answer that seems to work....

_meta = key1::value1 key2::value2
(and so on; use double colons to separate each key from its value, and whitespace to separate multiple key/value pairs from each other)

Tested in inputs.conf on a universal forwarder using current Splunk.
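
For context, here's a minimal sketch of that setting inside a full monitor stanza; the key names group and env and their values are hypothetical, borrowed from the naming in the question:

[monitor:///var/log/messages]
sourcetype = syslog
# hypothetical keys - pick whatever key::value pairs fit your grouping
_meta = group::product-ABC-production-servers env::production

One caveat: _meta creates indexed fields, so you can always search them with the double-colon syntax (group::product-ABC-production-servers), but to search by field name (group=product-ABC-production-servers) you may also need a fields.conf entry on the search head declaring the field INDEXED = true.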


renjith_nair
Legend

The easiest way to achieve this is to create different indexes for your different environments, e.g. one index for production and one for development, and forward the data from each forwarder to the respective index.

For example, your product-ABC-production-servers will forward data to an index called abc-production, and the inputs.conf will be:

[monitor:///var/log/messages]
sourcetype = syslog
index = abc-production

This will be the same for all the forwarders in the production category; repeat the same for your development environment.
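
One assumption worth stating: the abc-production index has to exist on the indexer before the data arrives; depending on version, events sent to a nonexistent index are dropped or diverted to a last-chance index. Since you're working CLI-only, the index can be created on the indexer with:

$SPLUNK_HOME/bin/splunk add index abc-production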

There are many advantages to having separate indexes for production and development (assuming you have separate environments or different user bases). You can restrict user access to a particular set of data/servers, and you can search data from a set of hosts/forwarders easily with just index=abc-production.
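
As a quick illustration, a search scoped this way might look like the following (the error term is just a placeholder):

index=abc-production sourcetype=syslog error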

Reference :

http://docs.splunk.com/Documentation/Splunk/6.2.0/Updating/Exampleaddaninputtoforwarders
http://docs.splunk.com/Documentation/Splunk/6.2.0/Data/Usingforwardingagents
http://docs.splunk.com/Documentation/Splunk/6.2.0/Data/Configureyourinputs

Another way is to add tags for all your servers so that you can categorize them into different categories. But there is no data separation in this case.
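
To sketch what that looks like, since the original question was about tags: tags are search-time knowledge objects defined on the search head, not on the forwarder. A minimal, hypothetical example (the host name webserver01 and tag name production are made up) in $SPLUNK_HOME/etc/system/local/tags.conf on the search head:

[host=webserver01]
production = enabled

With that in place, a search like tag=production matches events from that host. Because tags are applied at search time, nothing needs to change on the forwarder for this approach.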

I would prefer the first option.

Hope this helps!

Happy Splunking!

vinceskahan
Path Finder

Thanks for the detailed answer, which is great for how/why to use indexes, but I was specifically asking about tags.

You mentioned 'Another way is to add tags for all your servers so that you can categorize them into different categories,' which is my actual question: how do I implement tags? What files do I edit, and where? Could you perhaps extend your example above by adding a tag to that monitor stanza, to complete the picture?

Thanks again for the help...


vinceskahan
Path Finder

Ummm, that doesn't help at all. It points to a maze of circular references to using Splunk Web. I'm trying to do everything via CLI from text-mode-only Splunk forwarders. As I mentioned in my original post, I'm looking for working CLI examples or references, not pointers to the (horrible) web-centric maze of Splunk docs.
