Efficient search?

a212830
Champion

Hi,

I have the following search, which is taking quite a while, and I was wondering if there are any obvious improvements to make. It parses a fairly large number of events (1 million+). I'm trying to count unique top-level URLs (the hostname portion).

index=proxy sourcetype="leef" usrName!="-" 
| eval url=urldecode(url) 
| eval url=ltrim(url, "http://") 
| eval url=ltrim(url, "https://") 
| eval url=split(url, "/") 
| eval url=mvindex(url,0) 
| dedup src, dst 
| top limit=100 url
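
(One caveat with the search as written: in SPL, ltrim(X, Y) strips any leading characters that appear in the set Y, not the literal string Y. So ltrim(url, "http://") can also eat the first letters of hostnames built from those characters, e.g. the leading "t" of twitter.com. A rough sketch of the same idea using a regex replace instead, assuming the same field names:

index=proxy sourcetype="leef" usrName!="-"
| eval url=urldecode(url)
| eval url=replace(url, "^https?://", "")
| eval url=mvindex(split(url, "/"), 0)
| dedup src, dst
| top limit=100 url
)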

somesoni2
Revered Legend

Try this

index=proxy sourcetype="leef" usrName!="-"
| fields src dst url
| dedup src, dst
| eval url=urldecode(url)
| rex field=url "https*\:\/\/(?<url>[^\/]+)"
| top limit=100 url
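
The main wins here: fields keeps only src, dst, and url so less data is carried through the pipeline, dedup runs before any string work so the evals touch far fewer rows, and a single rex pulls the hostname out in one step instead of two ltrims plus split/mvindex.

If you don't need the percent column that top adds, a stats-based variant (a sketch, not benchmarked) does the same counting:

index=proxy sourcetype="leef" usrName!="-"
| fields src dst url
| dedup src, dst
| eval url=urldecode(url)
| rex field=url "https?://(?<url>[^/]+)"
| stats count by url
| sort - count
| head 100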

a212830
Champion

Thanks!!!!
