All Apps and Add-ons

Sorting issue in sideview utils

keerthana_k
Communicator

Hi

We are using a UI similar to the "Analyze data" example provided in Sideview Utils. We use the following query:

index=index_name | eval SessionDurationAvg = case(SessionDurationAvg < 60, "A< 1min", SessionDurationAvg >= 60 AND SessionDurationAvg < 900, "B1min - 15min", SessionDurationAvg >= 900 AND SessionDurationAvg < 1800, "C15min - 30min", SessionDurationAvg >= 1800 AND SessionDurationAvg < 2700, "D30min - 45min", SessionDurationAvg >= 2700 AND SessionDurationAvg < 3600, "E45min - 1hr", SessionDurationAvg >= 3600, "F> 1hr") | stats count as Count by ClientIP SessionDurationAvg | lookup bnigeoip clientip as ClientIP output client_city as City | eval SessionDurationAvg = substr(SessionDurationAvg, 2, len(SessionDurationAvg))
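The prefix-letter trick at the end of that query can be sketched outside Splunk (a minimal Python sketch; the labels mirror the case() above): a leading A–F forces the sort order, and substr(..., 2, len(...)) strips it back off for display.

```python
# Labels as produced by the case() above: a sort-prefix letter plus the
# human-readable bucket name.
labels = [
    "B1min - 15min", "A< 1min", "F> 1hr",
    "C15min - 30min", "E45min - 1hr", "D30min - 45min",
]

# Sorting on the prefixed value yields the intended bucket order...
ordered = sorted(labels)
# ...and slicing off the first character (substr(x, 2, len(x)) in SPL)
# recovers the display label.
display = [s[1:] for s in ordered]
print(display)
```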

We need the number of users in each session-duration bucket; the metric we use is dc(ClientIP) by SessionDurationAvg. We do get results, but the buckets are not ordered by size. We would like the buckets ordered as follows:

< 1min

1-15min

15-30min

30-45min

45min-1hr

> 1hr

Is there any way to achieve this?

Also, when we use this session-duration field as the split-by, legend entries containing the "<" symbol are not displayed when we mouse over the value in the chart. How do I correct this?

Thanks

Keerthana

1 Solution

sideview
SplunkTrust

First let's deal with the sort order question.

And let's translate the question into a search that runs against index=_internal data, so anyone can try it on their own Splunk instance:

index=_internal group=per_sourcetype_thruput | eval epsBucket=case(eps<0.1,"less than 0.1",eps<1,"between 0.1 and 1",eps<2,"between 1 and 2",eps<10,"between 2 and 10",eps<100,"between 10 and 100",eps<1000,"between 100 and 1000",eps>=1000,"1000 or more") | stats count by epsBucket

The same thing happens there - Splunk picks a lexicographic order for the epsBucket values, and they come out in an order that's confusing for us humans.
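The lexicographic scramble is easy to reproduce outside Splunk (a minimal Python sketch using the same labels):

```python
# The human-readable bucket labels from the search above.
labels = [
    "less than 0.1",
    "between 0.1 and 1",
    "between 1 and 2",
    "between 2 and 10",
    "between 10 and 100",
    "between 100 and 1000",
    "1000 or more",
]

# stats orders its by-field lexicographically, like Python's sorted():
# digits sort before letters, and "10" sorts before "2", so the buckets
# come out scrambled relative to their numeric meaning.
print(sorted(labels))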

So one fix is to rely on the stats command's alphabetical ordering and break the labeling into two steps. In the following search we create interim categories A, B, C, D, E, F, G, run stats and let Splunk order them alphabetically, and only at the end do another eval to turn those keys into our nice string labels.

index=_internal group=per_sourcetype_thruput | eval epsBucket=case(eps<0.1,"A",eps<1,"B",eps<2,"C",eps<10,"D",eps<100,"E",eps<1000,"F",eps>=1000,"G") | stats count by epsBucket | eval epsBucket=case(epsBucket="A","less than 0.1",epsBucket="B","between 0.1 and 1",epsBucket="C","between 1 and 2",epsBucket="D","between 2 and 10",epsBucket="E","between 10 and 100",epsBucket="F","between 100 and 1000",epsBucket="G","1000 or more")
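The two-step fix can be sketched in Python as well (the eps readings here are invented for illustration): bucket into interim keys A–G, count and sort on the key, and swap in the readable label only at the end.

```python
from collections import Counter

# Interim key per eps value, mirroring the first eval above.
def eps_key(eps):
    if eps < 0.1:  return "A"
    if eps < 1:    return "B"
    if eps < 2:    return "C"
    if eps < 10:   return "D"
    if eps < 100:  return "E"
    if eps < 1000: return "F"
    return "G"

# Readable labels, mirroring the final eval above.
LABELS = {
    "A": "less than 0.1", "B": "between 0.1 and 1", "C": "between 1 and 2",
    "D": "between 2 and 10", "E": "between 10 and 100",
    "F": "between 100 and 1000", "G": "1000 or more",
}

readings = [0.05, 0.5, 5, 50, 500, 5000, 0.5]   # hypothetical eps values
counts = Counter(eps_key(e) for e in readings)   # "stats count by epsBucket"

# Sort on the interim key, then relabel only for display:
table = [(LABELS[k], counts[k]) for k in sorted(counts)]
print(table)
```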

As a best-practice item, if you use the same bucketing in lots of searches it might make sense to wrap it in a macro, or even to turn the last eval clause into a lookup and keep the mapping in a small lookup file.
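As a sketch of the lookup variant (the file name and field names here are invented for illustration), the mapping could live in a small CSV:

```
key,label
A,less than 0.1
B,between 0.1 and 1
C,between 1 and 2
D,between 2 and 10
E,between 10 and 100
F,between 100 and 1000
G,1000 or more
```

The final eval then becomes something like `| lookup eps_bucket_labels key AS epsBucket OUTPUT label AS epsBucket`, assuming a lookup definition named eps_bucket_labels points at that file.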

It might also be possible to use a charting key to reorder the fields when the chart is being rendered, but I don't know of one.

As to the second question, about "<" and ">" characters disappearing from the legend or on mouseover, that again would be a core Splunk issue, but I'm afraid I cannot reproduce it on 5.0.3 with the JSChart module.

