Getting Data In

Retain "Facility" via TCP/UDP:514 Direct Indexing

hmsjclee
Engager

Hi,

We're currently experimenting with having Splunk directly index our Syslog-NG logs.

However, we seem to have lost the "source" attribute that told us which log file each entry came from.

Is there any way to have props/transforms pick up some kind of SOURCE_KEY and REGEX to pull out the "facility" portion of the log shipping and use it in place of our "source" field?

Thank you.

John


ashbyj
Engager

Our Splunk server is also our syslog-ng server, so Splunk just indexes the local /var/log/messages, which contains entries from all of our hosts. Each log message gets the facility and level appended by adding this to syslog-ng.conf on the syslog server:

destination messages {
    file("/var/log/messages" template("$DATE $FULLHOST $MESSAGE [facility=$FACILITY level=$LEVEL]\n"));
};
log { source(src); filter(f_messages); destination(messages); };

Each client pushes to the syslog-ng server via tcp(514). When syslog-ng receives messages on tcp/514, it writes them to the messages file in the format specified by template(). Splunk automatically creates fields from anything of the form "field=foo", so you'll get fields for facility and level.
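For illustration only, a line written with that template might look something like the one below (the host and message text are made up); Splunk's automatic key=value extraction then surfaces facility and level as search-time fields:

    Mar 12 10:15:03 web01 sshd[2143]: Accepted publickey for deploy from 10.0.0.5 [facility=auth level=info]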

Again, Splunk is not listening on 514, but only indexing the local /var/log/messages. Syslog-ng is the one doing the listening.


southeringtonp
Motivator

The best approach is to run syslog-ng on the Splunk server and have it write events to different files based on the facility. If the facility is included in the filename or path, Splunk can then extract it into a field based on the source. Syslog-ng provides a $FACILITY macro that should give you what you need.

For more information on setting up syslog-ng with Splunk, look here.

Something along these lines should work...

In syslog-ng.conf:

destination d_split_by_host {
    file("/inputs/syslog/$HOST/$FACILITY/messages"
        owner("root")
        group("splunk")
        perm(0640)
        dir_perm(0750)
        create_dirs(yes)
    );
}
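To round that out (the snippet above only defines the destination), you would also need a source listening on 514 and a log statement wiring the two together. This is just a sketch and the names are examples:

source s_network {
    tcp(port(514));
    udp(port(514));
};
log { source(s_network); destination(d_split_by_host); };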

In transforms.conf:

[source-syslog-facility]
SOURCE_KEY = source
REGEX = ^/inputs/syslog/[^/]+/([^/]+)/
FORMAT = syslog_facility::$1

In props.conf:

[syslog]
REPORT-facility = source-syslog-facility
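On the Splunk side you would then just monitor that tree. A minimal sketch in inputs.conf, assuming the /inputs/syslog path used above (host_segment = 3 sets the host field from the $HOST directory):

[monitor:///inputs/syslog]
sourcetype = syslog
host_segment = 3

After that, a quick search like sourcetype=syslog | stats count by syslog_facility should confirm the field is being extracted.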

adamw
Communicator

It's going to end in fewer tears if you do what balbano suggested and have Splunk index the files that syslog-ng writes out. This was strongly recommended by all of the Splunk folks we've ever talked to.


Genti
Splunk Employee

If the information you want extracted is present in the event itself, this can be achieved through props/transforms.
If possible, post a sample of the log and exactly what you are trying to extract, and we can get back to you with a regex and the props/transforms entries to achieve the extraction.


balbano
Contributor

We previously ran a central syslog-ng server. We then installed our Splunk indexers on that same server and simply had Splunk monitor the local directories the syslog-ng logs were written to. Since we are still testing out different architectures, we decided to try having Splunk listen on tcp/udp 514 itself (previously bound to the syslog-ng service) and receive all the syslog traffic directly. That works fine, but what we are losing is the facility code that tells us which log file an event would have gone to. Is there any way to fix that?
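For context (a rough illustration, not our actual data): in the raw syslog protocol the facility only travels inside the numeric <PRI> prefix, where priority = facility * 8 + severity, so a line on the wire looks roughly like

    <38>Oct 11 22:14:15 web01 sshd[1234]: Accepted password for jdoe

with 38 = 4*8 + 6, i.e. facility 4 (auth) and severity 6 (info). Once that prefix is stripped at ingest, nothing left in the event text identifies the facility, which is why it disappeared when we cut syslog-ng out of the path.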


Paolo_Prigione
Builder

I'm not sure I understood your question. Is Splunk now indexing the syslog-ng data it receives on tcp/514, whereas before the syslog-ng server wrote the files to the Splunk indexer's file system?
