Splunk Search

Can and cannot overwrite _time

iKate
Builder

Hi everyone!

I made a table that shows the web sources from which visitors come to our service.
Clicking any row opens a timechart of visitors for the selected source. But it doesn't open for every source.

The data used for these tables and charts doesn't have a _time field, but it does have year, month, and day values. By concatenating these values and converting them to a timestamp I got _time and built the timechart.

Here is a working example with the source "yandex":

index=visitors source=web | where source_from="yandex" | strcat year "." month "." day date | convert timeformat="%Y.%m.%d" mktime(date) as _time | timechart sum(visitors) as visitors 

A non-working example with the source "google", even though it has 5 times more occurrences in our statistics than "yandex":

index=visitors source=web | where source_from="google" | strcat year "." month "." day date | convert timeformat="%Y.%m.%d" mktime(date) as _time | timechart sum(visitors) as visitors 

The data seems to be similar for both sources and has no gaps in days.

I found that in the second variant Splunk can't write the result into _time. Trying eval _time=... or strftime didn't help. How can I write to _time so I can use timechart?
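
The eval variant I tried looked roughly like this (just a sketch with the same fields as above, not the exact search):

index=visitors source=web | where source_from="google" | strcat year "." month "." day date | eval _time=strptime(date, "%Y.%m.%d") | timechart sum(visitors) as visitors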

kallu
Communicator

Like Ayn said, your life will be much easier if you extract timestamps from the CSV.
Something like this in your props.conf should do it.

    [web_visitors]
    MAX_TIMESTAMP_LOOKAHEAD = 10
    NO_BINARY_CHECK = 1
    SHOULD_LINEMERGE = false
    TIME_FORMAT = %Y,%m,%d
    TZ=UTC
    EXTRACT-web_visitors = (?i)^\d+,\d+,\d+,(?P<source_from>[^,]+),(?P<visitors>.+)$

This should parse both the timestamp and the "source_from" & "visitors" fields from your CSV.
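
If the file isn't already getting that sourcetype, a minimal inputs.conf sketch could look like the following (the monitor path and index name are placeholders, adjust to your setup):

    [monitor:///path/to/web_visitors.csv]
    sourcetype = web_visitors
    index = visitors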

And if the CSV header gets annoying, here are some ideas on what you can do about it:

http://splunk-base.splunk.com/answers/49366/how-to-ignore-first-three-line-of-my-log

iKate
Builder

It's better to say "Thank you very much kallu!" late than never :) I've implemented your suggestion and it has saved me lots of time and nerves.

iKate
Builder

Yes, sure, here's a piece of the CSV:

ga:year, ga:month, ga:day, ga:source, ga:visitors
2012,07,02,google,1907
2012,07,02,yandex,1009
2012,07,03,google,2090
2012,07,03,yandex,1598

Ayn
Legend

Could you provide us with a sample from the CSV file? I still think you should focus on getting your timestamp recognition set up properly instead of messing with workarounds.

iKate
Builder

@Ayn @bmacias84 You're right, _time is a default metadata field, but in our case the data comes from an indexed .csv file, so the _time value of every entry is the same: the time it was indexed.
@bmacias84 Sorry, I didn't catch what you expected 'as(time)' to do. In fact 'chart sum(visitors) as visitors over ctime' is how I'm building the chart, but it's still unclear why timechart only works occasionally in this case.
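
Concretely, the chart variant I'm running looks roughly like this (a sketch; the extra strftime is only there to make ctime readable on the axis):

index=visitors source=web | where source_from="google" | strcat year "." month "." day date | convert timeformat="%Y.%m.%d" mktime(date) as ctime | eval ctime=strftime(ctime, "%Y-%m-%d") | chart sum(visitors) as visitors over ctime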

bmacias84
Champion

Correct me if I'm wrong, but _time is a default metadata field. Metadata can only be overwritten at index time with a transform. Try using the chart command.


index=visitors source=web | where source_from="google" | strcat year "." month "." day date | convert timeformat="%Y.%m.%d" mktime(date) as ctime | chart sum(visitors) as visitors over ctime as(time)

Ayn
Legend

Splunk can do epoch. What do you mean it's not present in your data? You have fields containing date information, so that information must be in there somewhere.

iKate
Builder

As a timestamp I mean date & time in epoch format, like 123421341342.
Typically time information is present in our raw data, but not in this case.

Ayn
Legend

Not strictly an answer, but - what are you using as the timestamp in Splunk right now? It seems you want to sidestep that completely, so why not use the time you actually want in your searches instead?
