Splunk Search

How do you add a threshold to a bar chart to show when certain CPU/computer (e.g. server) resources exceed their normal thresholds?

bidahor13
Path Finder

Need help: I'm trying to create a bar chart to display the data below for each server:
1. Free Space

2. Free Megabytes

3. Idle Time

4. Current Bandwidth

5. Disk Time = Avg Disk sec/Transfer * Disk Transfers/sec

6. Disk Write Time

7. Avg. Disk Bytes/Transfer (total IOPS)

8. Avg. Disk Bytes/Write (write IOPS)

9. Avg. Disk Queue Length (Queue time is time spent waiting for the device because it is busy with another request or waiting for the SCSI bus to the device because it is busy)
10. Avg. Disk Write Queue Length

What is the best way to write the search to gather all this data? I already have Splunk collecting this data - except for items 5 - 10.
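For item 5, I sketched something like the following, assuming the data comes from the Splunk Add-on for Microsoft Windows, where each perfmon event carries counter and Value fields (the object and counter names are my assumptions and unverified against my data):

index=perfmon object="LogicalDisk" (counter="Avg. Disk sec/Transfer" OR counter="Disk Transfers/sec")
| chart avg(Value) over host by counter
| eval disk_time='Avg. Disk sec/Transfer' * 'Disk Transfers/sec'

The chart command pivots each counter into its own column per host, and the single quotes in eval let the multiplication reference field names that contain dots and slashes.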


woodcock
Esteemed Legend

You definitely have a very inefficient search with much redundancy in it. Try this one and see if it gives you what you need:

index=perfmon counter="*" collection="*"
| stats sparkline(avg(Value)) AS Trend_Avg_Disk_Bytes_Write avg(Value) AS AVG_KB min(Value) AS MIN_KB max(Value) AS MAX_KB by host
| eval AVG_KB=round(AVG_KB/1024, 1)
| eval MIN_KB=round(MIN_KB/1024, 1)
| eval MAX_KB=round(MAX_KB/1024, 1)

As far as creating an "over-threshold" chart, try something like this:

index=perfmon counter="*" collection="*"
| bin _time span=1h
| stats max(Value) AS MAX_KB by _time, host
| eval MAX_KB=round(MAX_KB/1024, 1)
| where MAX_KB>10
| xyseries _time host MAX_KB

skoelpin
SplunkTrust

You should consider redoing your search to specify an index rather than searching all indexes. If it's not slow now, it will be much slower in the future when you have millions of events. You're also using this in combination with the transaction command, which will further hurt search performance.
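For example, here is a sketch of the same rollup without transaction, scoped to a single index (the index name and host pattern are taken from your search below and may need adjusting):

index=perfmon counter="*" collection="*" host="XXXXX_*"
| stats avg(Value) AS AVG_KB min(Value) AS MIN_KB max(Value) AS MAX_KB by host, counter

A stats split by host and counter gives you the same per-counter rollup the transaction was grouping, at a fraction of the cost.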

I would create a dashboard with one panel for each metric you want rather than trying to bundle them all into one search.

bidahor13
Path Finder

Thanks for the heads-up, skoelpin. Any ideas on how to write the searches and display the data on the dashboard so that it looks meaningful to use?


skoelpin
SplunkTrust

What you should do is this: go to your search bar, specify one index, and run a search. For example, say you want to identify 'Free Megabytes':

index=megz (your query) | stats avg(Value) AS "Free Megabytes"

Then go to the top right, click 'Save As', and save it as a dashboard panel. Select 'New Dashboard' and give it a name and location. Then start a new search for 'Disk Time':

index=megz (your query) | eval diskTime='Avg. Disk sec/Transfer' * 'Disk Transfers/sec' | timechart avg(diskTime)

Then click 'Save As', choose 'Existing Dashboard', and save it to the same dashboard as the previous search. This will add a new panel to your dashboard, so you will end up with 10 independent panels making up your dashboard. This will be much faster than having all the searches in one panel and gives you more flexibility if you want to edit them. Also, once you get past a million events you will notice a difference in search performance. If some panels share the same search patterns, you could look into post-processing to speed them up; a sketch follows.
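Here is a rough sketch of the post-processing idea in Simple XML (the base query, panel title, and field names are placeholders, not a tested dashboard). One base search runs once, and each panel refines its results:

<dashboard>
  <label>Server Resources</label>
  <!-- Base search: runs once and is shared by every panel below -->
  <search id="base">
    <query>index=perfmon counter="*" | stats avg(Value) AS avg_value by host, counter</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <title>Free Megabytes</title>
      <chart>
        <!-- Post-process search: filters the shared results instead of re-scanning the index -->
        <search base="base">
          <query>search counter="Free Megabytes" | fields host, avg_value</query>
        </search>
      </chart>
    </panel>
  </row>
</dashboard>

Keep the base search lean (a transforming command like stats, with a limited field set), or the post-process results can be truncated.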


lguinn2
Legend

The best way to write this search depends entirely on the way the data has been collected. Is there a sourcetype that has been assigned to this data? What are the field names? Can you show a small sample of the data?

It is certainly possible to create a chart with many values, but the other problem is the scale. You can have a dashboard with multiple panels and each panel can have its own scale and format. But in a single chart, you can't have 6 different scales... so it may not be very interesting.

Have you considered a dashboard which allows you to group similar values (like IOPS) into charts?
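For example, items 7 and 8 are both bytes-per-operation counters and could share a single timechart, something like this (the object and counter names are assumptions; check them against your data):

index=perfmon object="LogicalDisk" (counter="Avg. Disk Bytes/Transfer" OR counter="Avg. Disk Bytes/Write")
| timechart avg(Value) by counter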


bidahor13
Path Finder

Kindly view the attached file above, called resources.



bidahor13
Path Finder
index=* collection="*" Host="XXXXX_*" index=perfmon counter="*" collection="*"
| transaction maxspan=3h counter by host
| stats sparkline(avg(Value)) AS Trend_Avg_Disk_Bytes_Write avg(Value) AS AVG_KB min(Value) AS MIN_KB max(Value) AS MAX_KB by host
| eval AVG_KB=round(AVG_KB/1024, 1)
| eval MIN_KB=round(MIN_KB/1024, 1)
| eval MAX_KB=round(MAX_KB/1024, 1)