Splunk Search

scaling epoch timestamps results in strange values

pdovy
New Member

I've got a sourcetype which captures data for two nearly identical applications, the difference being that one calculates timestamps as microsecond epochs and the other as nanosecond epochs. I am using queries to do some latency analysis, so I'd like to scale up the microsecond epochs so that the results are all in the same units.

I'm trying to do the following in my query:

| eval SCALED_REQUEST_TIME = if(REQUEST_TIME > 10000000000000000, REQUEST_TIME, REQUEST_TIME * 1000)

However, I get some pretty strange results: for the microsecond timestamps that get scaled by this line, the last 3 (new) digits are arbitrary, for example:

1321545903871484

becomes

1321545903871483904

I've tried using convert with num() to convert it beforehand, and using asnumber() in the eval, but I get the same result regardless.


gkanapathy
Splunk Employee

Ah, I wonder if you're having trouble because this arithmetic is being done using floating point on the processor, which makes it subject to rounding problems. (It happens that 16 decimal digits is about the limit for double-precision FP numbers.) For something like this you would really want to use arbitrary-precision integers, or 64-bit (or wider) integers. I don't know if you can coerce this in Splunk, however, so I don't know if I have a solution for you.
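
As a rough illustration of the rounding (outside of Splunk, in plain Python), doing the same multiplication once with exact integers and once through a double reproduces the value you saw:

# Sketch of the double-precision rounding, assuming the eval arithmetic
# is done in IEEE-754 double precision rather than on exact integers.
microsecond_epoch = 1321545903871484          # 16 digits, still exactly representable
exact = microsecond_epoch * 1000              # Python ints are arbitrary precision
as_double = int(microsecond_epoch * 1000.0)   # forces the product through a double

print(exact)       # 1321545903871484000  (the intended nanosecond epoch)
print(as_double)   # 1321545903871483904  (the value reported above)

# At this magnitude, adjacent doubles are 256 apart (2**(60-52)), so the exact
# product gets rounded to the nearest multiple of 256 -- hence the last few
# digits looking arbitrary.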
