Splunk Search

How to create a table depending on quarters and intervals?

Terka117
New Member

Hi Team,

I have data covering several years, sorted by specific dates and numbers. I would like to display them in a table on a quarterly basis and divide the numbers into six intervals (0-0,01; 0,011-0,03; 0,301-0,1; 0,101-1; 1,001-5; 5+). Moreover, not every day has activity, but some days have many rows.

Can anybody help me with a solution?
I have googled but haven't found anything.

Thank you!


DalJeanis
Legend

We can't interpret your intervals, because the dashes, leading zeroes, commas, and semicolons don't form any obvious pattern.

So, we're just going to make stuff up.

Let's suppose that you have data that has the date and a number. Let's say, so we can be concrete about it, that the number is the total number of movie tickets sold for a particular show, and that some days the movie theater is closed and other days it has many packed shows.

This first section just creates some test data. Your actual process will do something else, obviously.

    | makeresults 
    | eval mydata="2017-10-01,12 2017-10-01,21 2017-10-02,10 2017-10-03,14 2017-10-03,32 2017-10-07,12 2017-10-05,1 2017-10-06,13 2017-10-06,25 2017-10-06,12 2017-10-07,18 2017-10-07,16" 
    | makemv mydata 
    | mvexpand mydata 
    | makemv delim="," mydata 
    | eval myDate=mvindex(mydata,0) 
    | eval myCount=mvindex(mydata,1) | table myDate myCount
    | rename COMMENT as "the above enters one week's worth of test data with the date in myDate and the count in myCount"

This section does any necessary data conversion and then stats the ticket sales up by day...

    | rename COMMENT as "change the date into epoch time, bin at the day level (not needed here but useful to remember if you are looking at normal log events."
    | eval _time=strptime(myDate,"%Y-%m-%d")
    | bin _time span=1d

    | rename COMMENT as "add zero records for any days that might be missing, then sum it up by day."
    | appendpipe [
         | stats min(_time) as mintime max(_time) as maxtime 
         | eval myTimes = mvrange(mintime,maxtime+1,86400) 
         | mvexpand myTimes 
         | rename myTimes as _time 
         | table _time 
         | eval myCount=0
         ]
    | stats sum(myCount) as myCount by _time

This section puts them into buckets. Since we had no idea what you wanted for buckets, we just used sets of ten.

    | rename COMMENT as "now you have one record per day. put the count value into your buckets and stats them up"
    | eval myBucket=case(isnull(myCount) or myCount=0, "01", 
        myCount<10, "02", 
        myCount<20, "03", 
        myCount<30, "04", 
        myCount<40, "05", 
        myCount>=40, "06")
    | stats count as daysInBucket by myBucket

    | rename COMMENT as "add a zero record for each bucket and stats em again to make sure all buckets are display"
    | append [
        | makeresults 
        | eval myBucket="01 02 03 04 05 06" 
        | table myBucket 
        | makemv myBucket 
        | mvexpand myBucket 
        | eval daysInBucket = 0 
        ]
    | stats sum(daysInBucket) as daysInBucket by myBucket

And there you go.
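If you also need the quarterly split from your question, one way (untested, and myQuarter is just a name we made up) is to derive a quarter label from _time before the final stats, since stats throws _time away, and then pivot the buckets across quarters:

    | rename COMMENT as "derive a label like 2017-Q4 from _time; do this before any stats that drops it"
    | eval myQuarter=strftime(_time,"%Y")."-Q".ceiling(tonumber(strftime(_time,"%m"))/3)
    | chart count as daysInBucket over myQuarter by myBucket

If you keep the zero-record append, each dummy row would also need a myQuarter value so every quarter-bucket cell gets filled.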


Richfez
SplunkTrust

Could you explain it differently? Or at least confirm this restatement?

You'd like a count of numbers falling into each of those specific buckets, divided up by quarters.

Does that sound right?


Terka117
New Member

Almost! I would like to sum those numbers up, not count them. But the rest is correct 🙂
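So the last step would sum the values per bucket instead of counting days, roughly like this (just a sketch reusing DalJeanis's field names; totalInBucket is an invented name, and myQuarter assumes the quarterly eval from the answer above):

    | rename COMMENT as "sum the daily values per quarter and bucket instead of counting days"
    | stats sum(myCount) as totalInBucket by myQuarter, myBucket

The zero-record append would then set totalInBucket = 0 instead of daysInBucket.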
