Splunk Search

Scaling epoch timestamps results in strange values

pdovy
New Member

I've got a sourcetype which captures data for two nearly identical applications, the difference being that one calculates timestamps as microsecond epochs and the other as nanosecond epochs. I am using queries to do some latency analysis, so I'd like to scale up the microsecond epochs so that the results are all in the same units.

I'm trying to do the following in my query:

| eval SCALED_REQUEST_TIME = if(REQUEST_TIME > 10000000000000000, REQUEST_TIME, REQUEST_TIME * 1000)

However, I get some pretty strange results: for microsecond timestamps that get scaled by this line, the trailing (new) digits come out as seemingly arbitrary values. For example:

1321545903871484

becomes

1321545903871483904

I've tried using convert with num() to convert it beforehand, and using asnumber() in the eval, but I get the same result regardless.
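
For what it's worth, the behavior doesn't seem to depend on my sourcetype at all; a synthetic event shows the same thing. (This assumes a Splunk version that has makeresults; any single-row generator such as | stats count should behave the same way.)

| makeresults
| eval REQUEST_TIME = 1321545903871484
| eval SCALED_REQUEST_TIME = REQUEST_TIME * 1000
| table REQUEST_TIME SCALED_REQUEST_TIME

For me this produces the same 1321545903871483904 shown above, rather than the expected 1321545903871484000.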


gkanapathy
Splunk Employee

Ah, I wonder if you're having trouble because this arithmetic is being done in floating point on the processor, which makes it subject to rounding problems. (It happens that 16 decimal digits is about the limit for double-precision FP numbers.) For something like this you would really want arbitrary-precision integers, or 64-bit (or wider) integers. I don't know if you can coerce this in Splunk, however, so I don't know that I have a solution for you.
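
To see the cutoff concretely: doubles can represent every integer exactly only up to 2^53 = 9007199254740992; above that they can only land on every 2nd integer, then every 4th, 8th, and so on. Your product 1321545903871484000 falls between 2^60 and 2^61, where representable doubles are 256 apart, and the nearest one works out to 1321545903871483904, which is exactly the value you're seeing. A quick synthetic search illustrates where exactness runs out (assuming your version has makeresults, and that eval literals go through the same double-precision path, which appears to be the case):

| makeresults
| eval at_limit = 9007199254740992, one_more = 9007199254740993
| eval collapsed = if(at_limit == one_more, "yes", "no")
| table at_limit one_more collapsed

collapsed should come back "yes", because 9007199254740993 is not representable as a double and rounds to 9007199254740992.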
