Dashboards & Visualizations

How to get a graph of time spent vs. number of events

anirban_nag
Explorer

I have a program that runs every hour and logs events during each run.

Each run generates a UniqueID that stays the same until the program terminates for that hour's run. The program logs a FileName with each file it processes. To mark the start and end of a run, it logs Status=START and Status=END (Status is the field name). For example, below are two sample runs.

index=prg, _time=2:00, UniqueID=ID1, Status=START, Message="Program starts"
index=prg, _time=2:01, UniqueID=ID1, FileName=F1, Status=DEBUG, Message="File logged"
index=prg, _time=2:02, UniqueID=ID1, FileName=F2, Status=DEBUG, Message="File logged"
index=prg, _time=2:03, UniqueID=ID1, FileName=F3, Status=DEBUG, Message="File logged"
index=prg, _time=2:04, UniqueID=ID1, Status=END, Message="Program ends"

index=prg, _time=3:00, UniqueID=ID2, Status=START, Message="Program starts"
index=prg, _time=3:05, UniqueID=ID2, FileName=F11, Status=DEBUG, Message="File logged"
index=prg, _time=3:07, UniqueID=ID2, FileName=F12, Status=DEBUG, Message="File logged"
index=prg, _time=3:09, UniqueID=ID2, FileName=F13, Status=DEBUG, Message="File logged"
index=prg, _time=3:11, UniqueID=ID2, FileName=F17, Status=DEBUG, Message="File logged"
index=prg, _time=3:22, UniqueID=ID2, Status=END, Message="Program ends"

From the example above we can see that ID1 took 4 minutes to finish and logged 3 files, whereas ID2 took 22 minutes and logged 4 files. I need this in a graph, with time on the Y axis and number of files on the X axis, so we can see the trend: for a given number of files, what does the time look like?
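Outside Splunk, the per-run summary being asked for can be sanity-checked with a short Python sketch over the sample events above (the event tuples are hand-copied from the question; the `minutes` helper is just for illustration):

```python
# Sample events copied from the question: (clock, unique_id, status)
events = [
    ("2:00", "ID1", "START"), ("2:01", "ID1", "DEBUG"),
    ("2:02", "ID1", "DEBUG"), ("2:03", "ID1", "DEBUG"),
    ("2:04", "ID1", "END"),
    ("3:00", "ID2", "START"), ("3:05", "ID2", "DEBUG"),
    ("3:07", "ID2", "DEBUG"), ("3:09", "ID2", "DEBUG"),
    ("3:11", "ID2", "DEBUG"), ("3:22", "ID2", "END"),
]

def minutes(clock):
    """Convert an H:MM clock string to absolute minutes."""
    h, m = clock.split(":")
    return int(h) * 60 + int(m)

# Per run: count of DEBUG ("File logged") events and START-to-END span.
summary = {}
for clock, uid, status in events:
    t = minutes(clock)
    files, start, end = summary.get(uid, (0, t, t))
    summary[uid] = (files + (status == "DEBUG"), min(start, t), max(end, t))

# runs maps UniqueID -> (files logged, minutes taken)
runs = {uid: (files, end - start) for uid, (files, start, end) in summary.items()}
```

This reproduces the numbers in the question: ID1 logged 3 files in 4 minutes, ID2 logged 4 files in 22 minutes.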


woodcock
Esteemed Legend

Start with something like this:

| makeresults 
| eval raw="index=prg, _time=2:00, UniqueID=ID1, Status=START, Message=\"Program starts\"::index=prg, _time=2:01, UniqueID=ID1, FileName=F1, Status=DEBUG, Message=\"File logged\"::index=prg, _time=2:02, UniqueID=ID1, FileName=F2, Status=DEBUG, Message=\"File logged\"::index=prg, _time=2:03, UniqueID=ID1, FileName=F3, Status=DEBUG, Message=\"File logged\"::index=prg, _time=2:04, UniqueID=ID1, Status=END, Message=\"Program ends\"::index=prg, _time=3:00, UniqueID=ID2, Status=START, Message=\"Program starts\"::index=prg, _time=3:05, UniqueID=ID2, FileName=F11, Status=DEBUG, Message=\"File logged\":: index=prg, _time=3:07, UniqueID=ID2, FileName=F12, Status=DEBUG, Message=\"File logged\"::index=prg, _time=3:09, UniqueID=ID2, FileName=F13, Status=DEBUG, Message=\"File logged\"::index=prg, _time=3:11, UniqueID=ID2, FileName=F17, Status=DEBUG, Message=\"File logged\"::index=prg, _time=3:22, UniqueID=ID2, Status=END, Message=\"Program ends\""
| makemv delim="::" raw
| mvexpand raw
| rename raw AS _raw
| kv

| rename COMMENT AS "Everything above generates sample events; everything below is your solution"

| rex field=_raw "_time=((?<hours>\d+):)?(?<minutes>\d+):(?<seconds>\d+)"
| fillnull value="0" hours minutes seconds
| eval time = seconds + (60 * (minutes + 60 * hours))
| fields - hours minutes seconds
| stats count(eval(Message=="File logged")) AS files_logged range(time) AS time BY UniqueID

OK, so now you have your basic tabular data, but you need to decide the nature of your analysis. I am taking a guess here:

| chart avg(time) BY files_logged

Pick your visualization and profit!
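The rex/fillnull/eval steps above turn a clock string into total seconds; the same logic, sketched in Python (`to_seconds` is a hypothetical helper name, not part of the search):

```python
import re

def to_seconds(clock):
    """Mirror the rex + fillnull + eval steps: parse an optional-hours
    clock string (H:M:S or M:S) and convert it to total seconds."""
    m = re.match(r"((?P<hours>\d+):)?(?P<minutes>\d+):(?P<seconds>\d+)", clock)
    if m is None:
        return None
    hours = int(m.group("hours") or 0)   # fillnull value="0" for the optional group
    minutes = int(m.group("minutes"))
    seconds = int(m.group("seconds"))
    # eval time = seconds + (60 * (minutes + 60 * hours))
    return seconds + 60 * (minutes + 60 * hours)
```

With two clock parts the regex falls back to minutes:seconds, exactly as the optional `hours` group in the rex does.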


DalJeanis
SplunkTrust

Lots of ways to go with this. First you need to prep the data...

index=prg 
| bin _time as day span=1d
| stats 
    sum(eval(case(Status=="DEBUG",1))) as filecount,
    range(_time) as duration,
    min(_time) as _time 
    by day UniqueID

| rename COMMENT as "The above gives you the following information; duration is in seconds, so you need to divide by 60"
| table _time filecount duration day UniqueID 
| eval durationmin=round(duration/60,2) 

With the above information, you can do any of the following:

1) Calculate a static table

| stats min(durationmin) as minminutes,
        max(durationmin) as maxminutes,
        avg(durationmin) as avgminutes,
        stdev(durationmin) as stdevminutes
        by filecount
| eval minminutes=round(minminutes,2)
| eval maxminutes=round(maxminutes,2)
| eval avgminutes=round(avgminutes,2)
| eval stdevminutes=round(stdevminutes,2)
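As a sanity check, here is the static table's math in Python under assumed inputs: the first two rows come from the sample runs in the question, and the third is a made-up extra run so the stdev column has more than one point (Splunk's stdev of a single value is 0, which the sketch matches):

```python
import statistics
from collections import defaultdict

# Assumed per-run rows: (filecount, durationmin), as produced by the prep search.
# (3, 6.5) is hypothetical, added so one group has two data points.
rows = [(3, 4.0), (4, 22.0), (3, 6.5)]

groups = defaultdict(list)
for filecount, durationmin in rows:
    groups[filecount].append(durationmin)

# Same columns as the stats command above, grouped by filecount.
table = {
    fc: {
        "minminutes": round(min(vals), 2),
        "maxminutes": round(max(vals), 2),
        "avgminutes": round(statistics.mean(vals), 2),
        # statistics.stdev needs >= 2 points; a lone run gets 0.0
        "stdevminutes": round(statistics.stdev(vals), 2) if len(vals) > 1 else 0.0,
    }
    for fc, vals in groups.items()
}
```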

2) Calculate a windowed daily average

| appendpipe [
    | stats values(_time) as time values(filecount) as filecount 
    | mvexpand time
    | mvexpand filecount
    ]  
| stats values(*) as * by _time filecount
| streamstats time_window=7d 
        avg(durationmin) as avgminutes
        by filecount
| eval avgminutes=round(avgminutes,2)
| timechart span=1d avg(avgminutes) by filecount 
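The windowed idea in option 2 - for each day, average durationmin over the trailing 7 days within each filecount group - can be sketched in Python with hypothetical day numbers standing in for `_time` bucketed to 1d:

```python
from collections import defaultdict

# Hypothetical per-day rows: (day_number, filecount, durationmin)
rows = [(1, 3, 4.0), (2, 3, 6.0), (9, 3, 8.0)]

by_group = defaultdict(list)
for day, fc, dur in rows:
    by_group[fc].append((day, dur))

def trailing_avg(points, day, window=7):
    """Average the values whose day falls in the window (day - window, day]."""
    vals = [v for d, v in points if day - window < d <= day]
    return round(sum(vals) / len(vals), 2) if vals else None

# Day 2: days 1 and 2 are inside the 7-day window -> (4.0 + 6.0) / 2
# Day 9: days 1 and 2 have aged out -> only day 9's own value remains
```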

Or lots of other things. Depends on what you are trying to see.
