I have a lookup table, blacklist.csv, which contains blacklisted source and destination IPs. Using the search query below, I am listing the events that contain blacklisted IPs:
index=* sourcetype=pan* [|inputlookup blacklist.csv | fields dest_ip] OR [|inputlookup blacklist.csv | fields src_ip] | table dest_ip,src_ip,status,etc..
My lookup table has 10,000 records, which slows my search down. Is there a way to optimize the query, or any other mechanism to speed up the search?
You could index the CSV file into an index; I am sure that will speed up your search.
If you have a distributed environment and your lookup file is big, you can also add local=t.
Like so:
[|inputlookup local=t blacklist.csv
That sometimes helps speed it up.
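To expand on the indexing suggestion: one way to get the CSV contents into an index is the collect command. This is only a sketch, assuming a dedicated index (here hypothetically named blacklist_idx) has already been created; searching the indexed copy can then replace the repeated inputlookup subsearches.

```
| inputlookup blacklist.csv | collect index=blacklist_idx
```

Note that collect writes the rows as events into the target index, so this would be run once (or on a schedule when the lookup changes), not inside every search.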
See if this is faster than your current query.
index=* sourcetype=pan* [|inputlookup blacklist.csv | table dest_ip src_ip | format | format "" "" "OR" "" "OR" ""] | table dest_ip,src_ip,status,etc..
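For reference, format renders the subsearch rows as a single boolean expression; with the empty prefix/suffix arguments and "OR" separators above, the intent is to OR every value of both columns together rather than pairing dest_ip and src_ip per row. With hypothetical IPs, the expression handed back to the outer search would look roughly like:

```
dest_ip="10.0.0.1" OR src_ip="10.0.0.2" OR dest_ip="10.0.0.3" OR src_ip="10.0.0.4" OR ...
```

Because this is one subsearch instead of two, the lookup file is only read once.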
No, there are no duplicate entries in the lookup table.
I tried dedup, but it takes the same time.
The search query stays in the parsing state for a very long time.
Are there any duplicates in the list? As a first step, add dedup dest_ip and dedup src_ip to the respective subsearches.
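Applied to the original query, the deduplicated subsearches would look like this (keeping the rest of the search unchanged):

```
index=* sourcetype=pan* [|inputlookup blacklist.csv | dedup dest_ip | fields dest_ip] OR [|inputlookup blacklist.csv | dedup src_ip | fields src_ip] | table dest_ip,src_ip,status,etc..
```

This only helps if the columns actually contain repeated values, since it shrinks the expression the subsearch hands to the outer search.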