Hi,
I need to look up some values from a lookup by id, and I have multiple values per id, with more coming in from time to time. Entries are typically appended to the bottom of the lookup. The default behavior, which uses the first match from the top, is not what I need because I'm only interested in the latest entry per id. It would be nice to be able to perform the lookup from bottom to top, or to prepend new entries.
Here's what I came up with so far:
a) I could re-sort the lookup after inserting a new entry, effectively prepending the new entry
b) I could rewrite the entire lookup when adding an entry, something like
| eval new lookup entry
| inputlookup append=t lookup
| outputlookup lookup
which would also prepend it
c) I could set max_matches=100000 in my lookup definition and return a multivalue field, then get the last entry with | eval field = mvindex(field, -1), effectively doing the lookup from bottom to top
d) I could also work with a time-based lookup, using some date in the past, to reduce the number of matches
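To illustrate c): assuming a lookup definition named mylookup that maps id to a field value (names are illustrative, not from my actual setup), the search would look roughly like
| makeresults
| eval id = "42"
| lookup mylookup id OUTPUT value
| eval value = mvindex(value, -1)
with max_matches raised in the lookup definition's stanza in transforms.conf, e.g. max_matches = 100000.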
Each of these has at least one downside though. a) is ugly: I need to handle the additional step, and depending on how I create the entry this is more or less of a hassle. b) raises performance concerns: I'm moving around a potentially large amount of lookup content. c) is even uglier, because multivalue fields come with their own limits and quirks, and d) means I need to know a sensible value for that date.
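For completeness, d) would rely on a time-bounded lookup definition in transforms.conf. A sketch, assuming a CSV lookup mylookup.csv with a _time column holding epoch seconds (stanza and field names are illustrative):
[mylookup]
filename = mylookup.csv
time_field = _time
time_format = %s
max_offset_secs = 2592000
Here max_offset_secs caps how far apart the event time and a matching entry's time may be, which is what bounds the number of matches, and picking that 30-day value sensibly is exactly the downside mentioned above.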
Any other ideas?
You can also take advantage of the fact that a KV store returns records in _key-sorted order, meaning that if your KV store looks like
_key id field1
1234 1 foo
1235 2 bar
1236 3 baz
and you do
| makeresults
| eval _key = "1233", id = 1, field1 = "new"
| outputlookup append=t lookup
your KV store will look like this:
_key id field1
1233 1 new
1234 1 foo
1235 2 bar
1236 3 baz
This means your new value is prepended, and a lookup on id returns the new value. A candidate for such a decreasing key is a value like 2000000000 minus the epoch time at insert.
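Since _key is compared as a string, it can be worth zero-padding the decreasing value so lexicographic order keeps matching numeric order even as the difference shrinks in digit count. A sketch using the same lookup and field names as above, assuming the printf eval function is available:
| makeresults
| eval _key = printf("%010d", 2000000000 - now()), id = 1, field1 = "new"
| outputlookup append=t lookup
One caveat: two inserts for the same id within the same second would get the same key, so their relative order is not guaranteed.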