Splunk Search

scaling epoch timestamps results in strange values

pdovy
New Member

I've got a sourcetype which captures data for two nearly identical applications, the difference being that one calculates timestamps as microsecond epochs and the other as nanosecond epochs. I am using queries to do some latency analysis, so I'd like to scale up the microsecond epochs so that the results are all in the same units.

I'm trying to do the following in my query:

| eval SCALED_REQUEST_TIME = if(REQUEST_TIME > 10000000000000000, REQUEST_TIME, REQUEST_TIME * 1000)

However, I get some pretty strange results, namely that for microsecond timestamps scaled by this line, the last few digits (including the new ones) come out wrong. For example:

1321545903871484

becomes

1321545903871483904

I've tried using convert with num() to convert it beforehand, and using asnumber() in the eval, but I get the same result regardless.


gkanapathy
Splunk Employee

Ah, I wonder if you're having trouble because this arithmetic is being done using floating point on the processor, which makes it subject to rounding problems. (It happens that 16 decimal digits is about the limit for double-precision FP numbers.) For something like this you would really want to use arbitrary-precision integers, or 64-bit (or wider) integers. I don't know if you can coerce this in Splunk, however, so I don't know that I have a solution for you.
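
For reference, a double-precision float has a 53-bit mantissa, so integers above 2^53 (about 9.0e15) can no longer all be represented exactly. At the magnitude of a nanosecond epoch (around 1.3e18) the representable values are 256 apart, and 1321545903871484000 rounds to the nearest one, 1321545903871483904, which is exactly the result shown above.

One possible way around it, if REQUEST_TIME arrives as the raw digits (an untested sketch, using the field names from the question): do the "multiply by 1000" as string concatenation rather than arithmetic, so no floating-point multiply ever happens:

| eval SCALED_REQUEST_TIME = if(REQUEST_TIME > 10000000000000000, REQUEST_TIME, REQUEST_TIME . "000")

The comparison still runs numerically, which is fine because microsecond and nanosecond epochs differ by three orders of magnitude, but the scaled value ends up as a string of digits rather than a number, so any later arithmetic on it would run back into the same double-precision limit.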
