Getting Data In

Why is date not parsing correctly on my search head cluster?

pfabrizi
Path Finder

I have two Splunk environments, DEV and PROD, and I am sending events from the same syslog source to both. I have this date-parsing configuration:

TIME_PREFIX=severity\=\d+\|
MAX_TIMESTAMP_LOOKAHEAD=22
TIME_FORMAT=%Y-%b-%d %H:%M:%S
TZ = UTC

Here is the event string:
Aug 29 11:08:30 tnnwsau1 CEF:1|RSA|Netwitness|10.6|severity=2|2018-Aug-29 15:05:07|Executables
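To illustrate what those settings are meant to do, here is a rough Python mimic of the lookup (this is not Splunk's actual parser, just a sketch): find the TIME_PREFIX pattern, read at most MAX_TIMESTAMP_LOOKAHEAD characters after it, and parse them with TIME_FORMAT.

```python
import re
from datetime import datetime

event = ("Aug 29 11:08:30 tnnwsau1 CEF:1|RSA|Netwitness|10.6|"
         "severity=2|2018-Aug-29 15:05:07|Executables")

prefix = re.compile(r"severity=\d+\|")   # TIME_PREFIX (unescaped for Python)
m = prefix.search(event)
lookahead = event[m.end():m.end() + 22]  # MAX_TIMESTAMP_LOOKAHEAD=22

# strptime requires an exact match, so trim the trailing "|E" remnant
# that falls inside the 22-character lookahead window.
ts_text = lookahead.split("|")[0]
ts = datetime.strptime(ts_text, "%Y-%b-%d %H:%M:%S")  # TIME_FORMAT
print(ts)  # 2018-08-29 15:05:07
```

When a configured TIME_PREFIX never matches (for example, because the stanza was not deployed to that environment), Splunk falls back to other timestamp sources, which is consistent with PROD picking up the leading syslog header time instead.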

In DEV it parses correctly (2018-Aug-29 15:05:07); however, in PROD the extracted timestamp is Aug 29 11:08:30.

My DEV is RHEL 6; PROD is RHEL 7.
Is there some global setting that might be an issue?

Also, our DEV is a single search head, whereas PROD is a clustered search head.

Any thoughts?

Thanks!


serjandrosov
Path Finder

You might need to check configuration consistency between the two environments for the sourcetype stanza (are you using [syslog] as the sourcetype for this data?).
Run this on both the PROD and DEV indexers:

$SPLUNK_HOME/bin/splunk cmd btool props list --debug

Look at the differences and sources.
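One way to compare the outputs side by side might look like this (a sketch only: the file paths are illustrative, and the [syslog] stanza name is an assumption about your sourcetype):

```shell
# Illustrative sketch: dump the effective props stanza on each environment,
# copy both files to one host, then diff them.
$SPLUNK_HOME/bin/splunk cmd btool props list syslog --debug > /tmp/props_dev.txt   # run on DEV
$SPLUNK_HOME/bin/splunk cmd btool props list syslog --debug > /tmp/props_prod.txt  # run on PROD
diff /tmp/props_dev.txt /tmp/props_prod.txt
```

The --debug flag prints the file each setting came from, which helps spot an app or local override present in only one environment.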


pfabrizi
Path Finder

Yeah, I did that.


poete
Builder

Hello @pfabrizi,

Did you check the global settings of the server, and more specifically the timezone?

In addition, did you check the timezone of the Splunk user you are running the tests with?

I hope this helps


pfabrizi
Path Finder

I am guessing this is the issue?

PROD:
ZONE="America/New_York"

DEV:
ZONE=US/Eastern
UTC=true
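As a quick sanity check (a Python sketch, not a Splunk command): US/Eastern is a legacy alias for America/New_York, so both zone names resolve to the same offset for this event's date. The setting that actually differs between the two hosts is UTC=true, which on RHEL 6 means the hardware clock is stored in UTC.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The event's embedded timestamp, taken as UTC per the TZ=UTC setting.
event_utc = datetime(2018, 8, 29, 15, 5, 7, tzinfo=ZoneInfo("UTC"))

# Both zone names yield the same local wall-clock time for this date.
for zone in ("America/New_York", "US/Eastern"):
    local = event_utc.astimezone(ZoneInfo(zone))
    print(zone, local.strftime("%Y-%b-%d %H:%M:%S %Z"))
# Both lines print: 2018-Aug-29 11:05:07 EDT
```

Note that 11:05:07 EDT is only minutes away from the syslog header time of 11:08:30 in the sample event, so comparing the two can help confirm which timestamp PROD is actually extracting.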

Thanks!
