We have an automatic lookup that is based on a lookup file which is appended to by a scheduled report. The lookup is refreshed 6 times a day, and the automatic lookup appends a couple of fields from the lookup to the indexed events.
Whenever new records are added to the lookup, the automatic lookup doesn't return the new values when new events are queried.
Sometimes it takes 2 hours for the refreshed lookup to take effect; until then it keeps returning the older records, so it seems the lookup is being cached.
How can we stop the lookup from being cached?
Thanks,
Varun Negi
Look in the index=_internal logs for events about the distributed bundle push (the knowledge bundle), and check when the bundle was last sent to the indexers.
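As a rough sketch, a search along these lines can show the most recent replication activity per indexer (the component name is what splunkd typically logs for bundle replication; verify against your own _internal data):

index=_internal sourcetype=splunkd component=DistributedBundleReplicationManager
| stats latest(_time) AS last_push BY host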
If your bundle is large, it may take a few minutes to be sent from the search head to the indexers; if it is too large, you should see errors.
Also check that your lookup is not excluded from the bundle:
http://docs.splunk.com/Documentation/Splunk/6.5.3/DistSearch/Limittheknowledgebundlesize
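For reference, exclusions are defined in distsearch.conf on the search head. A hypothetical stanza that would keep a lookup out of the bundle looks like this (the key name and regex are examples, not taken from your config):

[replicationBlacklist]
skipmylookup = (.*[/\\])?lookups[/\\]myfile\.csv

If any pattern in that stanza matches your lookup file, the indexers never receive the updated copies.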
Finally, if your lookup is larger than 20MB, the indexers also have to pre-index it rather than loading it into memory. You can raise that limit (see limits.conf on the indexers):
[lookup]
max_memtable_bytes = 60000000
# keep lookups up to ~60MB in memory
I have been facing the same issue.
If I do an inputlookup I can see the latest values, and if I do a join with the results it works, but the automatic lookup refuses to fetch the latest data.
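For anyone debugging the same thing, running the lookup explicitly alongside the automatic one is a quick comparison (the index, lookup, and field names below are placeholders for your own):

index=myindex
| lookup myfile xxx OUTPUTNEW yyy AS yyy_manual
| table xxx yyy yyy_manual

If yyy from the automatic lookup is stale while yyy_manual is current, the lookup file is fine where the search runs and the problem is in how the automatic lookup is being applied.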
Did you find a solution for this? And what version of Splunk are you running?
Please explain how your automatic lookup is configured.
It would help if you could post the props.conf stanza that uses it, the transforms.conf stanza that defines it, and maybe also the contents of the files in the metadata folder from the same app.
props.conf -
[host::*]
LOOKUP-host = lookup myfile xxx AS xxx OUTPUTNEW yyy zzz
transforms.conf -
[myfile]
filename = myfile.csv
case_sensitive_match = false
There is no issue with the file itself, as the same file works correctly in another environment.