Hi,
I have the following search, which is taking quite a while, and I was wondering whether there are any obvious improvements. It does scan a fair number of events (1 million+). I'm trying to count unique high-level URLs (just the host portion).
index=proxy sourcetype="leef" usrName!="-"
| eval url=urldecode(url)
| eval url=ltrim(url, "http://")
| eval url=ltrim(url, "https://")
| eval url=split(url, "/")
| eval url=mvindex(url,0)
| dedup src, dst
| top limit=100 url
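One thing worth knowing about the search above: ltrim(X, Y) trims any leading characters that appear in the set Y, not the literal prefix string, so ltrim(url, "http://") can eat into the hostname itself. Python's str.lstrip has the same character-set semantics, which makes the pitfall easy to demonstrate (the hostname below is made up):

```python
# ltrim/lstrip-style trimming removes leading characters found in the
# set {h, t, p, s, :, /}, not the literal prefix "https://".
url = "https://http-host.example.com/path"

# This also strips the "http" at the start of the hostname,
# leaving "-host.example.com/path".
stripped = url.lstrip("https://")
print(stripped)
```

That is why a host whose name starts with any of those characters gets mangled, even though the search looks correct at a glance.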
Try this. It applies fields and dedup first so the string work runs on far fewer events, and it collapses the ltrim/split/mvindex chain into a single rex that extracts the host:
index=proxy sourcetype="leef" usrName!="-"
| fields src dst url
| dedup src, dst
| eval url=urldecode(url)
| rex field=url "https?://(?<url>[^/]+)"
| top limit=100 url
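If it helps to see the extraction logic outside Splunk, here is a rough Python sketch of what that rex does: match an optional-s scheme, then capture everything up to the next slash as the host (the sample URLs are made up):

```python
import re

# Python equivalent of the SPL rex pattern in the answer above:
# capture the host portion of a decoded http/https URL.
HOST_RE = re.compile(r"https?://(?P<url>[^/]+)")

def top_level(url):
    """Return the host part of a URL, or None if it doesn't match."""
    m = HOST_RE.search(url)
    return m.group("url") if m else None

print(top_level("https://www.example.com/path/page"))  # www.example.com
```

One regex pass per event is cheaper than three evals plus a multivalue split, which is most of the speedup once dedup has already cut the event count.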
Thanks!!!!