I'm using the map command to run a search on each row of a table I've created, containing IDs that link certain things together in the system/process I'm trying to analyze.
The table holds 4 columns: ID1, Time1, ID2, Time2
When I pass these values to the map command as $ID1$ etc., they all work fine, except for ID2, which is a large number prefixed with "RT", e.g. RT201804171037017795. This kept showing up as "null" in the resulting events and hence led to problems.
I realized that this "RT" prefix might cause the value to be recognized as a real-time search modifier, so I trimmed the "RT" with the trim() function: trim(ID2, "RT"). So far so good.
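The setup described so far could be sketched like this (lookup name, index, and time-window usage are hypothetical, not from the question):

```
| inputlookup id_table.csv
| table ID1, Time1, ID2, Time2
| eval ID2=trim(ID2, "RT")
| map maxsearches=50 search="search index=my_index earliest=$Time1$ latest=$Time2$ $ID1$ $ID2$"
```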
However, when I now pass the number to the map command as $ID2$
and use its value as a table field, ...| eval Identifier2=$ID2$ | table Identifier2 | ...
the resulting field is not 201804171037017795 but 201804171037017800. Because the number is so large, I thought it might be recognized as an Epoch time. This is probably the case, as both 2018041710370177 and 2018041710370178 result in the same Epoch time when interpreted as microseconds (according to epochconverter): Monday 12 December 2033 23:08:30.370.
Hence, the number is likely being rounded up because Splunk thinks I'm giving it an Epoch time, while it is simply a large identifier. Thus, my question (finally) is: how do I stop Splunk from recognizing this large number as a timestamp? I want to explicitly tell Splunk it is just a number, or even a string whose value does not matter; it should be parsed as a string used only for identification.
I've already tried toString(ID2), to no avail.
TL;DR: How to specifically tell Splunk how to handle a (large) value as a string/number and not as an Epoch time?
Within your map subsearch, wrap your reference to the variable in double-quotes, like this:
| makeresults
| eval f1="RT201804171037017795", f2=trim(f1, "RT")
| map
[| stats count
| eval sub_f1="$f1$", sub_f2="$f2$"]
This will force Splunk to treat the contents of what you're passing into the subsearch as a string (preserving all of it), and then you can convert it back to a number within the context of the subsearch.
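If a numeric form is genuinely needed inside the subsearch, the conversion back could be sketched like this (hypothetical field name sub_f2_num; note that for identifiers this long, converting back to a number can reintroduce the rounding, so keep the string form wherever possible):

```
| map
    [| stats count
     | eval sub_f2="$f2$"
     | eval sub_f2_num=tonumber(sub_f2)]
```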
I'm not sure the problem is what you think it is. I tried this test:
| makeresults
| eval f1="RT201804171037017795", f2=trim(f1, "RT")
| map
[| stats count
| eval sub_f1="$f1$", sub_f2="$f2$"]
I get back exactly what I'd expect - the values fed through. Maybe something else is causing the issue. Can you describe the source data and search approach?
Try this (no double quotes around sub_f2):
| makeresults
| eval f1="RT201804171037017795", f2=trim(f1, "RT")
| map
[| stats count
| eval sub_f1="$f1$", sub_f2=$f2$]
Just using "$ID1$" instead of $ID1$ did the trick....
I never knew this was a way to tell Splunk a field is a string. Are there more 'tricks' like this, and do you have a reference for them?
Thanks for the help, both of you! Sadly, I can't accept a comment as an answer, so if one of you posts this as an answer, I'll accept it 🙂
I stand corrected!
Try using ...| eval Identifier2=\"$ID2$\" | table Identifier2
in your map search to treat this as a string instead of a number. Since it's a very large number, Splunk might be rounding it to fit within its numeric precision limit.
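The rounding here is consistent with plain IEEE 754 double precision rather than timestamp parsing: 64-bit floats carry a 53-bit significand, so integers above 2**53 (about 9×10^15) are not all exactly representable, and an 18-digit ID exceeds that. A quick sketch in Python (outside Splunk, for intuition only) shows the same kind of rounding:

```python
# Doubles have a 53-bit significand, so integers above 2**53 (~9.0e15)
# cannot all be represented exactly.
big_id = "201804171037017795"  # the 18-digit identifier from the question

as_number = float(big_id)  # numeric interpretation (like an unquoted $ID2$ token)
as_string = big_id         # string interpretation (like a quoted "$ID2$" token)

print(int(as_number))      # low digits rounded to the nearest representable double
print(as_string)           # every digit preserved
print(int(as_number) == int(big_id))  # False: precision was lost
```

Keeping the value quoted as a string in the map subsearch sidesteps the numeric representation entirely, which is why "$ID2$" preserves all the digits.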