Splunk Search

Why does the timechart command display inconsistent results when the time range is changed?

fvegdom
Path Finder

When I use the following search (some criteria obfuscated for security):

index=main sourcetype=transaction application=foo component=bar  customerCode=x Type=y messageType=z | timechart span=1d count as count

and I set the time range to a single day (January 9th, 2017), the resulting table shows a single row:

_time                         count
2017-01-09T00:00:00.000+0100    1

This is what I expect, and it is consistent with the event I find when I omit the timechart command entirely. But when I change the time range to the whole of January 2017, I get this (only the first rows shown):

_time                         count
2017-01-01T00:00:00.000+0100    0
2017-01-02T00:00:00.000+0100    0
2017-01-03T00:00:00.000+0100    0
2017-01-04T00:00:00.000+0100    0
2017-01-05T00:00:00.000+0100    3
2017-01-06T00:00:00.000+0100    0
2017-01-07T00:00:00.000+0100    0
2017-01-08T00:00:00.000+0100    0
2017-01-09T00:00:00.000+0100    0
2017-01-10T00:00:00.000+0100    0
2017-01-11T00:00:00.000+0100    0
2017-01-12T00:00:00.000+0100    15
2017-01-13T00:00:00.000+0100    25

Suddenly nothing is counted on the 9th. How can this happen?

In fact, if I click the zero cell for that day in the returned table and select "View events", it takes me to the very event I would expect to be counted there:

2017-01-09T13:36:56.109+0100 TRAN b53a13ca-e1bc-4e64-964c-09c4714ba40e custom-operations process-engine 127.0.1.1 type:y|customerCode:x|duration:1327|bytesAllocated:15692632|executorUtilPct:0.0|messageType:z

Some data here is modified for security reasons, but that should not affect the answers.

1 Solution

fvegdom
Path Finder

It turns out the problem is not with timechart at all: the base search is not returning the missing event in the first place. I will create a separate question to find out why that is happening.
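For anyone debugging something similar, a quick way to rule out the time picker is to pin the range inside the search string itself with earliest/latest (a sketch using the obfuscated field names from the question; the time format follows Splunk's default %m/%d/%Y:%H:%M:%S):

```
index=main sourcetype=transaction application=foo component=bar
    customerCode=x Type=y messageType=z
    earliest="01/09/2017:00:00:00" latest="01/10/2017:00:00:00"
| stats count
```

If this returns 1 while the same search run from the picker over a wider range returns 0 for that day, the problem is in how events are retrieved for the range, not in timechart.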


niketn
Legend

First of all, I see no point in renaming count as count, unless you want it uppercase, like count as Count.
Can you also verify the results with the stats command instead of timechart?

<Your Base Search>
| bin _time span=1d
| stats count as Count by date_mday

fvegdom
Path Finder

Hi niketnilay,

thanks for your reply.
There is indeed no purpose in it; it was just a relic of something else I did earlier (it was dc(uid) as count).

when I run the search you suggest I get
date_mday  Count
12         15
13         25
16         25
19          7
20         13
24          4
 5          3
so similar to what I got from what somesoni2 suggested:

index=main sourcetype=transaction application=foo component=bar customerCode=x Type=y messageType=z | bucket span=1d _time | stats count by _time

except this is not ordered by time.
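For reference, a variant of that search that stays in chronological order is to bin and group on _time itself rather than date_mday (a sketch with the same obfuscated base search):

```
index=main sourcetype=transaction application=foo component=bar
    customerCode=x Type=y messageType=z
| bin _time span=1d
| stats count AS Count by _time
| sort 0 _time
```

Since _time is a numeric epoch value, sorting on it gives true time order, whereas date_mday sorts lexicographically (hence 5 appearing after 24 above).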

As I found in my last comment, however, the timechart command itself is not the problem. The initial search seems to miss this transaction entirely when it is run with a larger time range:

when I run the search index=main sourcetype=transaction application=foo component=bar customerCode=x Type=y messageType=z for only the 9th, I get the event above

but when I run it from the 8th to the 10th, I get no results!?

So I should focus on why the search is not returning this event in the first place, and not on any problems with timechart. Should I ask a separate question for that?


DalJeanis
SplunkTrust

I think that would be a good idea. People reading the title of this one will focus on "why does timechart...", while your real problem is "search time range returns inconsistent results".


DalJeanis
SplunkTrust

Please post the entire search that got you the last results.


fvegdom
Path Finder

It's the same search, but with a different time range:

index=main sourcetype=transaction application=foo component=bar customerCode=x Type=y messageType=z | timechart span=1d count as count


DalJeanis
SplunkTrust

Shot in the dark, but simplify "count as count" to just "count" -- and verify that there is no existing field called "count" on the events -- and see what happens.

I've noticed that Splunk occasionally has trouble distinguishing between the count it is computing at any given moment and a count field that already exists on an event record.


fvegdom
Path Finder

Thanks for your suggestion, but it makes no difference:

index=main sourcetype=transaction application=foo component=bar
customerCode=x Type=y messageType=z | timechart span=1d count

gives the same result.

I'm positive that there is no count field, and the same query with dc(uid) also gives no result (uid is the b53a13ca-e1bc-4e64-964c-09c4714ba40e code in the event above).

I also tried putting a | fields command before the timechart to rule out what you suggest:

index=main sourcetype=transaction application=foo component=bar
customerCode=x Type=y messageType=z | fields uid | timechart span=1d count

and that makes no difference either.


DalJeanis
SplunkTrust

Hmmm. I'm seeing nothing at all. There's only one handle left to pull on: try it without the span parameter, or with different span values -- 8h or something.


fvegdom
Path Finder

my replies keep disappearing...

I tried both your suggestions, but I discovered that the problem is not with timechart but before it:

when I run the search index=main sourcetype=transaction application=foo component=bar customerCode=x Type=y messageType=z for only the 9th, I get the event above

but when I run it from the 8th to the 10th, I get no results!?


somesoni2
SplunkTrust

How big is your main index? If you run just this for, say, Jan 8 to Jan 10, how many events do you get for each day?

index=main sourcetype=transaction | timechart span=1d count

fvegdom
Path Finder

@aaraneta, good to know, thanks.
@somesoni2 just woke up and ran it for the month, and this is what I get:

_time count
2017-01-01T00:00:00.000+0100 436892
2017-01-02T00:00:00.000+0100 633636
2017-01-03T00:00:00.000+0100 700639
2017-01-04T00:00:00.000+0100 691269
2017-01-05T00:00:00.000+0100 754214
2017-01-06T00:00:00.000+0100 708054
2017-01-07T00:00:00.000+0100 537937
2017-01-08T00:00:00.000+0100 528364
2017-01-09T00:00:00.000+0100 726485
2017-01-10T00:00:00.000+0100 807973
2017-01-11T00:00:00.000+0100 790816
2017-01-12T00:00:00.000+0100 795250
2017-01-13T00:00:00.000+0100 775745

so hundreds of thousands of events per day


somesoni2
SplunkTrust

Depending on the hardware configuration of your Splunk instance, this can be very large, and/or the indexers may be low on memory and dropping events. To confirm that the large number of results is the problem, and not the data bucket that stores the events for Jan 09, can you run your original search with a 24-hour time range from 2017-01-08T14:00:00+0100 to 2017-01-09T14:00:00+0100? That way we know there is a matching event while the time range is still one day, for which we know it works fine. After that, try 2017-01-08T00:00:00+0100 to 2017-01-09T14:00:00+0100 (about 1.5 days).
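If the suspicion falls on a particular index bucket, dbinspect can list the buckets of the index and their time spans (a sketch, assuming you have permission to run it; look for a bucket covering Jan 9 whose state or event count seems off):

```
| dbinspect index=main
| eval startTime=strftime(startEpoch, "%F %T"), endTime=strftime(endEpoch, "%F %T")
| table bucketId state startTime endTime eventCount
| sort 0 startTime
```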


fvegdom
Path Finder

Hi @somesoni2,

the same search with time range 2017-01-08T14:00:00+0100 to 2017-01-09T14:00:00+0100 returns the event.

The time range 2017-01-08T00:00:00+0100 to 2017-01-09T14:00:00+0100 does too.

If I leave the start of the range at 2017-01-08T00:00:00+0100, I can go up to 2017-01-10T11:00:00+0100 and get the same results.

If I set it to 2017-01-10T13:00:00+0100, I get 40 results, and then if I set it to 2017-01-10T14:00:00+0100, I get no results again.
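One more cross-check worth trying here (my suggestion, not something confirmed in the thread): tstats counts events from the indexed metadata, bypassing search-time field extraction, so comparing it against the raw search over the same Jan 8-10 range can show whether the events exist in the index but are being filtered out at search time:

```
| tstats count where index=main sourcetype=transaction by _time span=1d
```

If tstats shows events on Jan 9 that the raw search misses, the issue is likely in search-time processing rather than in the index itself.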


aaraneta_splunk
Splunk Employee

@fvegdom - Regarding your comment about your replies disappearing: since you are a new user on Answers, your comments were sent to the moderation queue before being published; I just published one of them above.

Typically, users see a banner at the top showing that their post is being moderated. I apologize if it did not display; unfortunately, that is a bug in our current platform that will hopefully be fixed soon.


somesoni2
SplunkTrust

Do you get results for Jan 9 when you run this?

index=main sourcetype=transaction application=foo component=bar 
customerCode=x Type=y messageType=z | bucket span=1d _time | stats count by _time

fvegdom
Path Finder

I tried that and I get this:

_time                         count
2017-01-05T00:00:00.000+0100    3
2017-01-12T00:00:00.000+0100    15
2017-01-13T00:00:00.000+0100    25
2017-01-16T00:00:00.000+0100    25
2017-01-19T00:00:00.000+0100    7
2017-01-20T00:00:00.000+0100    13
2017-01-24T00:00:00.000+0100    4

so basically no, there are no results for Jan 9.
This is what I would get if I ran the original timechart command with cont=false (verified).
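For context: by default timechart runs with cont=true, which fills empty time buckets with zero rows so the timeline is continuous. That is why the month-wide table at the top shows 0 for days where the base search returned nothing. With cont=false, only buckets that actually contain events appear (a sketch with the obfuscated search):

```
index=main sourcetype=transaction application=foo component=bar
    customerCode=x Type=y messageType=z
| timechart span=1d cont=false count
```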


DalJeanis
SplunkTrust

Hmmm. How is the different time range being entered? Standard search or dashboard?


fvegdom
Path Finder

Standard search. (I discovered it when I had a weird result in a dashboard, but I opened it in search.)
