Hi All,
I have the following search command in my scheduled report:
index=linux_log mountedon
| dedup host, mountedon
| fields host, mountedon
| map search="search index=linux_log mountedon=$$mountedon$$ host=$$host$$ earliest=-6m@m
    | timechart avg(use) as use, values(host) as host, values(mountedon) as mountedon span=60m
    | predict use as predict future_timespan=4000
    | stats latest(predict) as predict, values(host) as host, values(mountedon) as mountedon, latest(_time) as predicttime
    | table host, mountedon, predicttime, predict" maxsearches=100000000000000000000000000000
| eval c_predicttime=strftime(predicttime,"%d-%m-%y %H:%M")
| table host, mountedon, predict, c_predicttime
| sort - predict
| where predict > 95
| stats count
This job takes a long time to finish, and when I look in the Job Manager I see that it has used more than 30MB of disk space for what is essentially a simple count. Is there a way to limit this?