Deployment Architecture

How do I get splunk to read 152ms (milliseconds)?

gwright327
Explorer

I'm running a query to find out how many times a service gets called in a given number of milliseconds. I bucket the time in milliseconds and Splunk says the span is invalid.

1 Solution

Richfez
SplunkTrust

So, you've probably found that a run-anywhere search like the one below works:

index=_internal | bucket _time span=100ms | stats count by _time

But one like the next one won't:

index=_internal | bucket _time span=152ms | stats count by _time

bucket can work with sub-second spans, but to do so the buckets have to divide evenly into 1-second intervals. So 500ms, 250ms, 125ms, 100ms - these are all good. But 152ms, 333.33ms (heh, I tried lots of digits but it still wouldn't work!), 72ms - those won't work.
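Just to show the divide-evenly rule outside of Splunk, here's a quick Python check (nothing Splunk-specific, just the arithmetic behind which ms spans bucket will accept):

```python
# A millisecond span is accepted only if it divides evenly into
# one second (1000 ms). List all such spans:
valid = [ms for ms in range(1, 1001) if 1000 % ms == 0]
print(valid)

# 152 does not divide evenly into 1000, so span=152ms is rejected,
# while 125 does, so span=125ms works:
print(152 in valid)  # False
print(125 in valid)  # True
```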

If that doesn't work quite well enough, there is a sort of workaround I thought of to get more arbitrary divisions. You will have to test this thoroughly before relying on it - I can't guarantee it'll work without some adjustment and fixes. I don't have enough data like this to work with at home to really thoroughly test it, especially at the boundaries of seconds and whatnot.

The technique is to bucket events using a span that divides evenly both into 1 second and into the span you actually want. For 152ms the trick is to use 8ms as your base, because 8x19=152 (and 8 divides evenly into 1000). Once you have that, you can streamstats 19 of those 8ms time slots together to get your 152ms. See the footnote at the bottom for how I came up with 8ms times 19 slots if you need to.

index=_internal 
| bucket _time span=8ms 
| transaction _time 
| streamstats window=19 sum(eventcount) AS my_eventcount 
| table _time my_eventcount

And there you go, 152ms divisions.
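If it helps to see the two-step idea outside of SPL, here's a rough Python sketch of the same trick. The timestamps are made up, and this uses a forward-looking window where streamstats uses a trailing one, so treat it as an illustration of the math, not a faithful replica of the search:

```python
from collections import Counter

# Hypothetical event timestamps in milliseconds since some epoch.
events_ms = [3, 10, 15, 151, 152, 200, 310, 450, 999]

# Step 1: bucket into 8ms slots (8 divides evenly into 1000, so
# bucket accepts span=8ms). This mirrors bucket+transaction: one
# count per 8ms slot.
slot_counts = Counter((t // 8) * 8 for t in events_ms)

# Step 2: sum 19 consecutive 8ms slots, so each window covers
# 19 * 8 = 152ms. This mirrors the streamstats window=19 rolling sum.
last_slot = max(slot_counts)
windows = {}
for start in range(0, last_slot + 1, 8):
    windows[start] = sum(slot_counts.get(start + 8 * i, 0) for i in range(19))

# windows[0] covers [0ms, 152ms): events 3, 10, 15, 151 -> 4
print(windows[0])  # 4
```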

Please, update us with how that works out, or with your solution if you find one.

Footnote: I got 8ms x 19 by starting from the premise that I could do 1ms and add 152 of them together. But 152 is divisible by two, so why not 2ms and 76... oh, still even, so 4ms and 38... even again, so 8ms and 19 - ah, now I can't divide any more. I'm not positive doing something silly like 152 x 1ms is THAT bad, but this seemed a minor optimization. You can test if you want, if you have enough data to make things take some time.
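That halving in the footnote is just repeatedly dividing out 2 while the base still divides evenly into 1000ms. As a quick Python sanity check of the arithmetic:

```python
target_ms = 152
base, slots = 1, target_ms
# Double the base (halve the slot count) while the slot count stays
# a whole number and the base still divides evenly into 1000ms,
# which is what bucket requires of a sub-second span.
while slots % 2 == 0 and 1000 % (base * 2) == 0:
    base *= 2
    slots //= 2
print(base, slots)  # 8 19, i.e. 8ms x 19 = 152ms
```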

gwright327
Explorer

Rich,

Thank you for the help. I wasn't able to divide them as you stated, but I was able to round them to get the same result I was looking for.
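(For anyone else who lands here and wants to round the way described above, one way to pick the closest span that bucket will accept can be sketched in Python like so:)

```python
desired = 152
# All millisecond spans that divide evenly into 1000ms are valid.
candidates = [ms for ms in range(1, 1001) if 1000 % ms == 0]
# Pick the valid span closest to the desired one.
nearest = min(candidates, key=lambda ms: abs(ms - desired))
print(nearest)  # 125, so span=125ms is the closest legal span to 152ms
```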

gwright327
Explorer

Perfect! Thank you!

Richfez
SplunkTrust

I just now realized I never converted my "test" answers back into your own search, but I think that's probably not too hard. If you do need help doing so, be sure to ask.

Richfez
SplunkTrust

So you have events with timestamping that includes milliseconds - are these recognized as such in Splunk as the _time for the event or not? The inputs may need to be edited to make it so, as per here.
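(A sketch of what that edit might look like, purely as an assumption about the data - the exact TIME_FORMAT depends on your actual timestamp layout, and "your_sourcetype" is a placeholder. For timestamps like 2016-05-04 12:34:56.152, a props.conf stanza along these lines would tell Splunk to keep the milliseconds in _time:)

```
[your_sourcetype]
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
```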

Also, what exactly you are trying to do isn't quite clear. What report, timechart, or alert do you want to come out the other side?

gwright327
Explorer

Hi Rich,

The query I'm working with is below. When I try to input a three-digit number where the X's are for span, it gives me an error.

Query: index=* eventName=* earliest=-4h@h latest=@h | bucket _time span=XXXms | stats count by _time,eventName,host | stats max(count) by eventName

Richfez
SplunkTrust

I'm going to toss out a possible answer. If it resolves this, great; if not, we'll have to do more poking around.
