All Apps and Add-ons

Issue with received syslog packets

splkmika1
Explorer

I have the following setup:

  • Distributed Splunk Enterprise deployment with 2 clustered indexers, 1 cluster master, 1 search head.
  • Separate server configured with an instance of Kiwi Syslog Server (listening on UDP 514). Syslogs are being successfully written to disk based on sending device category (e.g. switch, firewall, etc.). This server also has an instance of Universal Forwarder installed, which is monitoring the log file and forwarding this data on to the indexer cluster.

The above seems to be working OK. I can see syslogs being received by the syslog server and written to the log file successfully. I can also log into my Splunk search head under the basic "Search & Reporting" app, search on the custom index which I am sending these syslogs to, and see the syslogs appearing on the indexer.

My issue however is three-fold:

  • Firstly, the Splunk indexers don't seem to be getting the host field correct. Without any sourcetype defined on my Universal Forwarder, the host field was being set to the facility and severity level (Local6.Notice) of the syslog message. I changed the sourcetype defined for this syslog file monitor on the Universal Forwarder to cisco:ios. Now it labels the host field with the hostname of the syslog server rather than the hostname of the device originating the syslog message. How do I get it to pick the correct hostname out of the syslog message?
  • Secondly, in a bid to solve this issue I installed the Cisco Networks Add-on (TA-cisco_ios) on my indexers and my search head, and then installed the Cisco Networks App (cisco_ios) on my search head. I believed that the add-on would help interpret the incoming Cisco syslog messages so that the syslog fields would be extracted correctly; however, the syslogs are still being displayed with host = the syslog server's hostname.
  • Finally, the newly installed Cisco Networks app, although it appears to have installed correctly, is not showing any received data, even though I can see the syslog messages using the basic "Search & Reporting" app. (If my syslogs are being placed in a custom index, do I need to tweak the app to look at the correct index?)

Thanks in advance 🙂


FrankVl
Ultra Champion

Best option would be to see if you can let Kiwi write to separate folders or files per originating source host. That way you can set the host field using the host_segment or host_regex setting in inputs.conf. Then you just need to make sure there are no props/transforms being applied that overwrite it again.

Alternatively:
- change the Kiwi logging format so that it matches what the respective TA expects, and doesn't write the facility and severity in the place where the TA expects to find the hostname
- write your own props/transforms to extract the hostname correctly. If you want help with that, it would be useful to see some sample data.
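For illustration, if Kiwi were configured to write one folder per device (the folder layout below is hypothetical, as are the paths), the forwarder-side inputs.conf might look roughly like this:

```ini
# inputs.conf on the Universal Forwarder -- hypothetical layout where Kiwi
# writes one folder per device: c:\syslogs\switches\<devicename>\syslog.txt
[monitor://c:\syslogs\switches\...]
# the first capture group of host_regex (matched against the full file path,
# here the last folder name) becomes the host field
host_regex = \\([^\\]+)\\[^\\]+$
index = networkstuff
sourcetype = syslog
```

With a flat one-file-per-device layout, host_segment (the Nth path segment) would work instead of host_regex.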


mikaelbje
Motivator

Also, version 2.5.4 of the Cisco Networks app has a bug that breaks the overview dashboard. Upgrade to the latest Splunk release and version 2.5.5 of the app.


splkmika1
Explorer

Thanks I downloaded the updated app and that improved things heaps. 🙂


damode
Motivator

@splkmika1 Can you please share how you have configured the Kiwi server? For example: is the data encoding set to UTF-8? What is the log file format set to?

Also, in which app have you put the inputs.conf that has the monitoring stanza reading your Cisco logs? Is it a customised app or the Cisco add-on?

Thanks!


splkmika1
Explorer

Damode,

Page 117 of the Kiwi Syslog Server Admin Guide (v9.6) covers configuring a custom log format for your syslog server. I simply set up the custom log format following those instructions, with these settings:

  • Log File Fields: Date, Time, Hostname, Message text
  • Date Format: YYYY-MM-DD
  • Time Format: HH:MM:SS
  • Delimiter: Tab
  • Qualifier: None
  • Adjust time to UTC: disabled
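With those settings, each line Kiwi writes to disk should look roughly like this (tab-delimited; the hostname and message body are illustrative, taken from a sample event quoted elsewhere in this thread):

```
2018-07-24	08:04:57	switch21	4221: JUL 24 08:04:58: syslog message body
```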

The big thing for me was adjusting the Kiwi Syslog configuration so that it didn't prepend its own facility and severity codes to each incoming syslog message. In my case, Splunk was inspecting the incoming syslogs and using the Facility.Severity value (e.g. Local6.Notice) as the host name. Once Kiwi had been told NOT to add this information, Splunk was able to correctly pull the hostname out of each syslog message.

I have all of the syslogs from my switches going into the one syslog file, with Kiwi prepending the hostname of the originating device to the start of each syslog message as it writes it to disk. Splunk then monitors this file and forwards it on... and the indexers are now able to correctly pull the originating hostname out of the received data.

In terms of the inputs.conf .... I simply used the CLI to add the file monitor to the Universal Forwarder running on my syslog server.

splunk add monitor c:\syslogs\switches\switch_logs.txt -index networkstuff -sourcetype syslog

This appears to add it to the following location:
$SPLUNK_HOME/etc/apps/search/local/inputs.conf
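For reference, that CLI command should produce a monitor stanza roughly like the following (a sketch; Splunk may write additional default settings):

```ini
# $SPLUNK_HOME/etc/apps/search/local/inputs.conf
[monitor://c:\syslogs\switches\switch_logs.txt]
index = networkstuff
sourcetype = syslog
```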


splkmika1
Explorer

Thanks for that, will download the latest version and give it a go.


splkmika1
Explorer

FrankVl, if I am attempting to put in place the third option that you listed above, would I be correct in assuming that the props/transforms would need to be located on the indexer (rather than the Universal Forwarder)?

The syslogs are coming in, in a tab-delimited format, as follows:
yyyy-mm-dd[tab]hh:mm:ss[tab]facility.severity[tab]... rest of syslog message

e.g.
2018-07-19 08:00:00 Local6.Notice switch01

Is it possible to do something similar to the following???:

props.conf
[source::mysource]
TRANSFORMS=hostoverwrite

transforms.conf
[hostoverwrite]
DEST_KEY = Metadata:Host
REGEX = ^\S+\s+\S+\t\w+.\w+\t(?P<host>\w+)
FORMAT = $1


FrankVl
Ultra Champion

Should indeed go on the indexer. Your attempt is close, but the FORMAT setting should be: FORMAT = host::$1 and the DEST_KEY should be MetaData:Host with capital D.

See also: http://docs.splunk.com/Documentation/Splunk/latest/Admin/Transformsconf#KEYS:
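Putting those corrections together, the pair of configs on the indexers would look something like this (the source stanza is the asker's placeholder; the dot in the facility.severity field is also escaped here, since a bare `.` matches any character):

```ini
# props.conf (on the indexers)
[source::mysource]
TRANSFORMS-hostoverwrite = hostoverwrite

# transforms.conf
[hostoverwrite]
DEST_KEY = MetaData:Host
REGEX = ^\S+\s+\S+\t\w+\.\w+\t(\w+)
FORMAT = host::$1
```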


splkmika1
Explorer

Thanks for that. I've had a go at that. I've modified those files as follows:

props.conf
[source::c:\data\syslogd\cisco_sw\syslog_sw.txt]
TRANSFORMS-syslogsw=hostoverwrite

transforms.conf
[hostoverwrite]
DEST_KEY = MetaData:Host
REGEX = ^S+\s+\S+\s+\w+.\w+\s+(?<host>switch\d\d)\s
FORMAT = host::$1

When I push this out from my cluster master to my cluster peers, it appears to be deployed successfully; however, when I go back to my search head, I can still see events coming in with the wrong host name. It's still pulling out the Facility.Severity codes like "Local6.Notice" and using that as a hostname.

As mentioned before the syslog messages coming in have the format:
2018-07-24 08:04:57 Local6.Notice switch21 4221: JUL 24 08:04:58: syslog message body

Am I correct in assuming that the "[source::c:\data\syslogd\cisco_sw\syslog_sw.txt]" line in my props.conf will only apply the transform if the event is coming from that file monitor source?
Also, I'm still a little uncertain about my REGEX, particularly the section inside the capture brackets (). I don't know whether I need to include the "?<host>" or whether my capture group should just be "(switch\d\d)".

I really appreciate the help that you've provided already on this.


FrankVl
Ultra Champion

Capture group should be without the ?
You're missing a \ before the first S
You're missing a \ before the .

Try: ^\S+\s+\S+\s+\w+\.\w+\s+(switch\d\d)\s

Make sure to test your regex using tools like regex101.com: https://regex101.com/r/j1DRbL/1
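As a quick sanity check, the suggested regex can also be tested against the sample event quoted earlier in the thread (a minimal Python sketch, outside Splunk):

```python
import re

# FrankVl's suggested host-extraction regex:
# skip date, time and facility.severity, then capture the switch hostname
pattern = re.compile(r"^\S+\s+\S+\s+\w+\.\w+\s+(switch\d\d)\s")

event = "2018-07-24 08:04:57 Local6.Notice switch21 4221: JUL 24 08:04:58: syslog message body"

match = pattern.match(event)
print(match.group(1))  # -> switch21
```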

What sourcetype do you use for this? There will be some transform in place for that sourcetype that pulls out the hostname as Local6.Notice. You need to make sure you override that transform. Sorry I didn't mention that before.


splkmika1
Explorer

Thanks for spotting that... I could have sworn that I had typed those two missing "\"s in the above post 🙂. I've just double-checked the transforms.conf on my indexer and those missing "\"s are present... though it looks as though I've got the characters within my () incorrect.

I used the CLI on the Universal Forwarder to add the file monitor with sourcetype "syslog" (which added the file monitor stanza under $SPLUNK_HOME/etc/apps/search/local/inputs.conf).
This sourcetype is being rewritten to cisco:ios by the Cisco Networks TA when it hits the indexer.

I've gone back to your earlier suggestion of fixing this problem at the syslog server, and tailored the output so that the syslog server doesn't prepend the Facility.Severity fields to incoming syslogs. With these fields no longer present in the syslog message, Splunk is now correctly picking up the host name of the device originating the syslogs.

I still think I have some more learning to do when it comes to REGEX and transforms.conf... but for the time being I've got things working 🙂

Thanks for your help in getting this issue resolved.


splkmika1
Explorer

Thanks for that response. I'll give it a go and let you know how I go.
