Splunk Search

scaling epoch timestamps results in strange values

pdovy
New Member

I've got a sourcetype which captures data for two nearly identical applications, the difference being that one calculates timestamps as microsecond epochs and the other as nanosecond epochs. I am using queries to do some latency analysis, so I'd like to scale up the microsecond epochs so that the results are all in the same units.

I'm trying to do the following in my query:

| eval SCALED_REQUEST_TIME = if(REQUEST_TIME > 10000000000000000, REQUEST_TIME, REQUEST_TIME * 1000)

However, I get some pretty strange results: for microsecond timestamps that get scaled by this line, the last 3 (new) digits are arbitrary. For example:

1321545903871484

becomes

1321545903871483904

I've tried using convert with num() to convert it beforehand, and using asnumber() in the eval, but I get the same result regardless.
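For reference, the shifted value above is exactly what falls out if the multiplication is carried out in IEEE 754 double precision. The following is a small Python sketch of that arithmetic (an illustration only, not Splunk SPL):

request_time = 1321545903871484        # microsecond epoch from the example above

exact = request_time * 1000            # exact integer arithmetic
rounded = int(request_time * 1e3)      # same product computed as a double

print(exact)     # 1321545903871484000
print(rounded)   # 1321545903871483904  <- matches the value observed above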


gkanapathy
Splunk Employee

Ah, I wonder if you're having trouble because this arithmetic is being done using floating point on the processor, which makes it subject to rounding problems. (It happens that 16 decimal digits is about the limit for double-precision FP numbers.) For something like this you would really want to use arbitrary-precision integers, or 64-bit (or larger) integers. I don't know if you can coerce this in Splunk, however, so I don't know if I have a solution for you.
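To put that digit limit concretely (again a Python sketch of the arithmetic, nothing Splunk-specific): a double's 53-bit mantissa represents integers exactly only up to 2**53, roughly 16 decimal digits, whereas integer arithmetic keeps every digit of the 19-digit nanosecond epoch and still fits in 64 bits.

# A double's 53-bit mantissa covers integers exactly only up to 2**53.
print(2**53)                          # 9007199254740992 (~9e15, 16 digits)

# The same scaling done with integers keeps every digit, and the
# 19-digit nanosecond value still fits in a signed 64-bit integer.
ns_epoch = 1321545903871484 * 1000    # Python ints are arbitrary precision
print(ns_epoch)                       # 1321545903871484000 (exact)
print(ns_epoch < 2**63)               # True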
