Getting Data In

Efficient search?

a212830
Champion

Hi,

I have the following search, which is taking quite a while, and was wondering if there are any obvious improvements for it. It parses a fair number of events (1 million+). I'm trying to count unique high-level URLs.

index=proxy sourcetype="leef" usrName!="-" 
| eval url=urldecode(url) 
| eval url=ltrim(url, "http://") 
| eval url=ltrim(url, "https://") 
| eval url=split(url, "/") 
| eval url=mvindex(url,0) 
| dedup src, dst 
| top limit=100 url
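(One subtlety in the search above: SPL's ltrim(X, Y) treats Y as a set of characters to strip, not a literal prefix, so ltrim(url, "http://") can also eat leading letters of the hostname itself. Python's str.lstrip has the same character-set semantics, so a quick sketch illustrates the pitfall:)

```python
# SPL's ltrim(url, "http://") strips any of the characters {h, t, p, :, /}
# from the left -- not the literal string "http://". Python's str.lstrip
# behaves the same way, which makes the pitfall easy to demonstrate.
def ltrim_chars(s, chars):
    return s.lstrip(chars)  # strips any leading char found in `chars`

# The scheme is removed, but so is the leading "t" of "test":
print(ltrim_chars("http://test.com", "http://"))  # est.com
```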

somesoni2
Revered Legend

Try this

index=proxy sourcetype="leef" usrName!="-"
| fields src dst url
| dedup src, dst
| eval url=urldecode(url)
| rex field=url "https*\:\/\/(?<url>[^\/]+)"
| top limit=100 url
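The speedup comes from trimming the pipeline early (fields keeps only the three fields needed, and dedup runs before any per-event eval/rex work) and from replacing the two ltrim calls with a single regex that extracts the host portion. A rough Python translation of that rex line, to show what it matches (note Python's named-group syntax is (?P<name>...) where SPL's rex uses (?<name>...), and https? is used here in place of the answer's equivalent https*):

```python
import re

# Python translation of the answer's rex: capture everything between the
# scheme ("http://" or "https://") and the first "/" -- i.e. the host.
HOST_RE = re.compile(r"https?://(?P<url>[^/]+)")

def extract_host(url):
    m = HOST_RE.search(url)
    return m.group("url") if m else None

print(extract_host("https://www.example.com/path?q=1"))  # www.example.com
print(extract_host("http://test.com/index.html"))        # test.com
```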


a212830
Champion

Thanks!!!!
