Splunk Search

A table with fields as lines

timmalos
Communicator

We have some events; with our field extractions they look like this:

Event 1 : Field1=VALUE11,Field2=VALUE12,Field3=VALUE13,... 
Event 2 : Field1=VALUE21,Field2=VALUE22,Field3=VALUE23,... 
Event 3 : Field1=VALUE31,Field2=VALUE32,Field3=VALUE33,... 
etc... 

What we would like to obtain is a table like this:

Metric    | Field1            | Field2            | Field3 ...
---------------------------------------------------------------------
Avg       | avg(Field1)       | avg(Field2)       | avg(Field3) ...
Median    | median(Field1)    | median(Field2)    | median(Field3) ...
Sparkline | sparkline(Field1) | sparkline(Field2) | sparkline(Field3) ...

What's the best way to do that?
It seems impossible to find an easy way.
We tried something in this direction:

| eval metric=mvappend("avg","median")
| mvexpand metric
| eval value=case(metric=="swFCPortRxC3Frames",swFCPortRxC3Frames,metric=="swFCPortRxC2Frames",swFCPortRxC2Frames)
| chart latest(value) by metric swFCPortIndex

Thanks for your help!


jonuwz
Influencer

When you have lots of metrics in the same event, you need to know the field names before you can run stats on them. The workaround is to use untable. Whenever you think "How the heck ..." - untable is usually the answer. xyseries is its inverse.

I have data similar to yours. Every two minutes I get a feed of disk utilization that looks like this:

Event 1: sda=0.0;sda1=0.0;sda2=0.0
Event 2: sda=10.3;sda1=1.0;sda2=9.3
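
To see what untable does (and that xyseries reverses it), here's a tiny self-contained sketch - nothing to do with my real search, just one row of made-up values from makeresults:

| makeresults
| eval sda=10.3, sda1=1.0, sda2=9.3
| untable _time field value

That turns the single wide row into three narrow rows (field = sda / sda1 / sda2, value = 10.3 / 1.0 / 9.3). Tacking | xyseries _time field value onto the end folds it back into the original wide shape.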

Here's a search that only hits the index once and works for arbitrarily named fields.
The first "table" is where you include / filter the fields you are interested in - this gives far more flexibility than using wildcards in the stats commands.

initial search
| table _time sd*
| untable _time field value
| eval metric="raw"
| appendpipe [ | stats avg(value) as data by field | eval metric="Average" ]
| appendpipe [ | stats median(value) as data by field | eval metric="Median" ]
| appendpipe [ | stats sparkline(avg(value)) as data by field | eval metric="Sparkline" ]
| where metric !="raw"
| xyseries metric field data

Result:

[screenshot of the resulting table: one row per metric (Average, Median, Sparkline), one column per sd* field]
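
In case appendpipe is new to you - it's doing the heavy lifting here. It runs its subpipeline over the results produced so far and appends that output to them, which is why the raw rows can feed all three stats passes and then be thrown away by the where. A minimal sketch with made-up numbers (unrelated to the disk data):

| makeresults count=3
| streamstats count as value
| appendpipe [ stats sum(value) as value | eval metric="total" ]

You get the original three rows (value 1, 2, 3) plus one appended row with value=6 and metric="total".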

You can save yourself a lot of headache by transposing the columns and rows:

initial search
| table _time sd*
| untable _time field value
| chart avg(value) as Average median(value) as Median sparkline(avg(value)) as sparkline by field

[screenshot of the resulting table: one row per sd* field, columns Average, Median, sparkline]
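
If you'd still rather have the metrics as rows and the fields as columns, like the table in your question, you could try tacking a transpose onto the end of that chart. Untested sketch on my part - the 100 is just an arbitrary row limit (raise it if you have more fields), and I'm not certain the sparkline cells survive the transpose cleanly:

initial search
| table _time sd*
| untable _time field value
| chart avg(value) as Average median(value) as Median sparkline(avg(value)) as Sparkline by field
| transpose 100 header_field=field column_name=Metric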


timmalos
Communicator

You're right, but I needed the untable command, which I didn't know about, to do that 🙂



lguinn2
Legend

I agree with @zeroactive, but this may do what you want:

yoursearchhere
| stats avg(Field*) as Field*
| eval Metric="Average"
| append [ search yoursearchhere
   | stats median(Field*) as Field*
   | eval Metric="Median" ]
| append [ search yoursearchhere
   | stats sparkline(count(Field*)) as Field*
   | eval Metric="Sparkline" ]
| table Metric Field*

jonuwz
Influencer

The sparklines will be the count of the samples, and won't plot the actual values.


zeroactive
Path Finder

Odd way to look at data... Why not do the table by Field ID, with columns for Avg, Median, and Sparkline?
