Getting Data In

Tuning my linux_secure sourcetype

daniel333
Builder

All,

I am seeing parsing queue slowdowns when large sets of linux_secure data come in. After talking with support, we decided that the default timestamp setting of "auto" was a bad idea for us, so I am rewriting the props.conf stanza for linux_secure:

[linux_secure2]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 27
NO_BINARY_CHECK = true
REPORT-syslog = syslog-extractions
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%dT%H:%m:%S.%q
TIME_PREFIX = ^
TZ = GMT
category = Operating System
description = Format for the /var/log/secure file containing all security related messages on a Linux machine
disabled = false
pulldown_type = true

However, when I run this, all my events come stamped at 4:00. Any idea where I went wrong?

Here is an example log:

2019-12-17T23:01:36.259901+00:00 ssomehost.somedomain.local sudo:   ambari : TTY=unknown ; PWD=/var/lib/ambari-agent ; USER=root ; ENV=PATH=/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/hdp/current/hadoop-client/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/chef/bin:/var/lib/ambari-agent ; COMMAND=/usr/bin/test -f /disk11
1 Solution

daniel333
Builder

Ah! Lowercase 'm'. I see it now.
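
For anyone landing here later: in strptime-style format strings, %m is the month and %M is the minutes, so the posted TIME_FORMAT was parsing the month field where the minutes should be. A minimal sketch of the corrected timestamp settings, keeping the rest of the stanza exactly as posted above:

[linux_secure2]
TIME_PREFIX = ^
# %M = minutes (the original stanza had %m, which is month)
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%q
MAX_TIMESTAMP_LOOKAHEAD = 27
TZ = GMT

Since these are index-time settings, they only take effect for newly indexed data, and the indexer or heavy forwarder doing the parsing needs a restart (or a props reload) to pick up the change.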
