
Configuring Timestamp Format

thomastaylor
Communicator

Hello everyone!

Currently, we're running into a problem configuring the timestamp extraction for one of our sourcetypes. After browsing a few questions on Splunk Answers, we are still not able to get the timestamp extracted correctly.

When we leave the extraction on "Auto", it does not extract the timezone from our logs:

2018-05-23 08:49:59.266814 - Toaster 375790 - INFO - Boot up sequence begin
2018-05-23 08:46:23.836273 - Toaster 450623 - INFO - Loading: Network Lib

TIME_FORMAT = %Y-%m-%d %H:%M:%S.%6N

We left the TIME_PREFIX empty because the timestamp is at the beginning of our log.

We have also attempted to use TIME_PREFIX = ^

Remaining Configuration:
MAX_TIMESTAMP_LOOKAHEAD = 50
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
TZ = America/New_York
category = Application
disabled = false
pulldown_type = true
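
Putting it all together, the stanza we are aiming for looks roughly like this (the [Kitchen Appliances] stanza name comes from later in this thread, and TIME_PREFIX = ^ is one of the variants we tried, so treat this as a sketch rather than our exact config):

[Kitchen Appliances]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%6N
MAX_TIMESTAMP_LOOKAHEAD = 50
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
TZ = America/New_York
category = Application
disabled = false
pulldown_type = true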

1 Solution

thomastaylor
Communicator

Thank you to everyone who answered this question! I should have included more details, but now I know what the solution is. These events were received via the HTTP Event Collector, and Splunk was trying to parse the timestamp out of the raw event that was sent.

By referencing this question, https://answers.splunk.com/answers/312321/can-we-process-the-timestamp-in-an-event-sent-to-t.html, you can see that, for the time to be extracted properly, the "time" field must be included in the body of the POST call.
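
For anyone who finds this later, here is roughly what that looks like (the host, token, and sourcetype values are placeholders, and the epoch value is our approximate conversion of the event's own timestamp from America/New_York):

# Illustrative only: replace the host and token, and set "time" to the event's actual timestamp in epoch seconds
curl -k https://<your-splunk-host>:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"time": 1527079799.266814, "sourcetype": "Kitchen Appliances", "event": "2018-05-23 08:49:59.266814 - Toaster 375790 - INFO - Boot up sequence begin"}'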



skoelpin
SplunkTrust

You should include TIME_PREFIX = ^, and you should change your MAX_TIMESTAMP_LOOKAHEAD to something more reasonable, like 28, rather than 1000.
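
Since the timestamp at the start of each line is 26 characters long, 28 leaves a little headroom. In the stanza that would be something like:

TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 28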


thomastaylor
Communicator

We have tried this as well. We just tried it again to make sure and it still did not yield the correct result.


skoelpin
SplunkTrust

The info I provided won't fix the TZ issue, but it will take some of the load off your indexers. As for the TZ: are the servers with the UFs installed in the same TZ as your indexers? What's your end goal? Have you tried changing the TZ in the Splunk UI?


thomastaylor
Communicator

Thanks for the response!

Right now, we are using the Splunk Cloud client as a proof of concept. We have not figured out a way to configure Splunk to pull the timestamp correctly from our log. We have changed the timezone in the UI to match the timezone from our log source.

The timestamp that shows in Splunk search is the time the log was received from the server, not the time the log was generated.
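
One quick way to see this (just a diagnostic search, with the index name as a placeholder) is to compare the parsed _time against the index time:

index=<your_index> sourcetype="Kitchen Appliances"
| eval index_time = strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| table _time index_time _raw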


adonio
Ultra Champion

Where is the timezone in the data?
Your format looks OK.
Can you share the full relevant props.conf?
I would recommend creating a sourcetype regardless and not leaving it on auto.


thomastaylor
Communicator

We have actually already created the source type named "Kitchen Appliances". We have updated the post with the full props.conf that we can view from the cloud client.


FrankVl
Ultra Champion

Can you provide some details on how you have deployed that configuration? What does the rest of your props.conf look like? On which instance is it deployed (and what is your architecture)?

Is that sourcetype the original sourcetype from inputs.conf, or are you overriding the sourcetype using transforms.conf?
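
(By overriding I mean a pair of stanzas roughly like the following; the source path, regex, and names here are just illustrative, not from your setup:)

props.conf:
[source::/var/log/appliances/*.log]
TRANSFORMS-set_sourcetype = override_kitchen_sourcetype

transforms.conf:
[override_kitchen_sourcetype]
REGEX = Toaster
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::kitchen_appliances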


thomastaylor
Communicator

We are currently using the cloud and cannot access the props.conf. We have created our own source type named Kitchen Appliances.

Remaining Configuration:

MAX_TIMESTAMP_LOOKAHEAD = 1000
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
TZ = America/New_York
category = Application
disabled = false
pulldown_type = true
