We have a small lookup that is updated by a search using outputlookup append=true.
The lookup file itself is small.
Our users noticed the lookup has lost the data added over the last 2 months.
Any clue?
We have a search head and indexer cluster.
We found the root cause: the bundle push to the search heads during the app upgrade replaced the lookup. To prevent this, we need to add -preserve-lookups true to the bundle push.
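For reference, the flag goes on the bundle push run from the deployer; a minimal sketch, where the target URI and credentials are placeholders for your environment:

```
splunk apply shcluster-bundle -target https://<sh-member>:8089 -preserve-lookups true -auth admin:<password>
```

Without -preserve-lookups true, lookup files that were modified on the search head cluster members can be overwritten by the copies in the pushed bundle.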
Start by checking the output from a single night to see whether it is really appending.
Also check whether the search that appends the data is running afoul of any subsearch limitations in its intermediate steps.
Then backtrack one month at a time and see what the daily jobs reported and what errors may have cropped up.
Is it adding new data again since your users brought it to your attention, or is it still broken? If it's still broken, can you run the search manually without the outputlookup to see if it still returns results?
Also, if it's still broken, is the owner of the saved search still a valid Splunk user? That's a long shot, but I think we've run into it before - user leaves company -> account terminated -> searches stop running (there would be errors in _internal)
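If the scheduled search is silently failing or being skipped, the scheduler logs in _internal will usually show it; a sketch, with the saved search name as a placeholder:

```
index=_internal sourcetype=scheduler savedsearch_name="<your saved search>"
| stats count by status, reason
```

A status of skipped (with a reason) or repeated errors here would point at the scheduler rather than the search logic itself.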
Hey Matt! Thanks for the help.
It is a scheduled search that runs daily.
The earliest data is from late 2016.
The content comes from the logs of certain sessions.
So if you do an inputlookup, do you see the data from late 2016?
http://docs.splunk.com/Documentation/Splunk/6.6.1/SearchReference/Inputlookup
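A quick way to check what the earliest entry is, assuming the lookup carries a time field named _time (adjust the field name to your lookup's schema):

```
| inputlookup <yourLookupFile>
| stats min(_time) as earliest, max(_time) as latest, count
```

If the earliest value is from late 2016 but the latest is two months old, that supports the "it stopped appending" theory rather than "it got overwritten".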
Hi chamrong! any chance someone did an outputlookup without append and blew it away?
You could search index=_internal for your lookup name to see if you can identify any searches that may have overwritten it in the search or audit logs...
Is it a scheduled search that populates this?
What's the earliest date you have in the lookup?
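To hunt for a search that clobbered the file, the audit index records the full search string per user; a sketch, with the lookup filename as a placeholder:

```
index=_audit action=search "outputlookup" "<yourLookupFile>"
| table _time, user, search
```

Any hit that uses outputlookup against your lookup without append=true is a candidate for having blown it away.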
As long as you still have the data, you can simply run a search that covers the span of the missing data and rebuild the lookup file.
Be sure to check | inputlookup <yourLookupFile>
to verify the users' claims!
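Assuming the session logs are still within your retention window, the rebuild could look something like this (the index, sourcetype, and field names are placeholders, not from the original thread):

```
index=<your_index> sourcetype=<session_sourcetype> earliest=-60d@d latest=now
| stats latest(_time) as last_seen by session_id
| outputlookup append=true <yourLookupFile>
```

Run it once over the missing two-month span, then re-run the | inputlookup check to confirm the gap is filled before re-enabling the nightly schedule.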