Splunk Search

Timechart Using Too Few Bins

David
Splunk Employee

I have a timechart covering data that arrives every 10 minutes. If I look at the last 24 hours, that would generate 144 bins. The docs say that timechart defaults to bins=300, which finds the smallest bucket size that results in no more than 300 distinct buckets.

As I have fewer than 300 buckets, I would expect it to represent the data with 144 buckets covering ten-minute intervals. In reality, though, it's summarizing to every 30 minutes. I could force the issue with bins=144, but that becomes problematic if the user switches the time picker to 4 hours (gaps between points) or 48 hours (less accurate bucketing).
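For illustration, a minimal sketch of the two fixed-size options alluded to in this thread (the base search is a placeholder): bins=144 caps the bucket count, which happens to line up with ten-minute buckets over a 24-hour window, while span=10m pins the bucket size to the data cadence regardless of the time range:

<your search> | timechart bins=144 count

<your search> | timechart span=10m count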

How can I make timechart use up to 300 bins without destroying the graph at other time windows?

1 Solution

sideview
SplunkTrust

It seems like 300 is not actually the default.

I just ran a test search, and indeed it seems to use too few buckets -- the last 24 hours gets 49 buckets of 30 minutes each.

However, when I add bins=300 to the same timechart clause, I suddenly get 293 buckets of 5 minutes each.

<your search> | timechart count bins=300


David
Splunk Employee

Here's the implementation of this method, using sideview_utils to dynamically size the number of bins for timechart. Note for anyone who might stumble upon this: it is reasonably complex, not for the faint of heart, and should be tested extensively.

http://pastebin.com/jqDktMhC


sideview
SplunkTrust

The connect option won't help you, because there literally is a data point at zero. Connect will only draw a connection across null points. It's expected; it's just a fact of life when there's only so much granularity in the actual data. You could use Sideview Utils to embed a customBehavior in JS whereby it outputs a span="30m" / span="10m" term as appropriate, but I'm not sure it'd be worth the extra surface area.
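To illustrate the zero-versus-null distinction (a minimal sketch, assuming the aggregated field is simply called count): timechart count fills empty buckets with 0 rather than null, so connect has nothing to bridge unless the zeros are explicitly converted to nulls first, for example:

<your search> | timechart bins=300 count | eval count=if(count==0, null(), count)

Note that this also hides buckets where the count genuinely was zero, so it only papers over the underlying granularity mismatch.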

David
Splunk Employee

Hmm. I wonder if it's just specific to my data, but when I set bins=300 I also get 5-minute intervals -- except I only have data every 10 minutes, so I get a data point, then a null, then a data point, creating a ton of valleys. Is this expected, and should I address it with the connect option? It seems like it should be able to bin up to that number (in case I have 7 days of data), but just use the minimum reasonable bucketing for 4 hours of data.
