How to build a dashboard to show critical devices without any overlap?

louispaul76
Engager

Hi Giuseppe,

Thanks for your quick reply. See below my search:

| inputlookup perimeter.csv
| eval SplunkHost=lower(SplunkHost) 
| join SplunkHost type=outer 
    [| metadata type=hosts index=my_indexes 
    | rename totalCount as Count, host as SplunkHost, lastTime as "Last Event" 
    | eval actualhourslag=(now()-'Last Event')/60/60 
    | eval SplunkHost=lower(SplunkHost)] 
| fieldformat "Last Event"=strftime('Last Event', "%c") 
| where actualhourslag>HoursLag OR NOT actualhourslag="*"
| stats sum(Count) by SplunkHost
| rename sum(Count) as total
| where total < 50

This is the search I'm using for my dashboard. When I use the timeframes 0-6 hours, 6-12 hours, 12-18 hours, 18-24 hours, or 1-5 days, the results are almost the same in some instances and they overlap. I would like to know if I can have each device appear in only one specific timeframe. For example, a device that falls in 0-6 hours should not also appear in 12-18 hours, 18-24 hours, and so on.

Best,

Louispaul76


woodcock
Esteemed Legend

First, fix your basic search like this:

| metadata type=hosts index=my_indexes 
| rename totalCount as Count, host as SplunkHost, lastTime as "Last Event" 
| eval actualhourslag=(now()-'Last Event')/60/60 
| eval which="metadata"
| inputlookup append=t perimeter.csv
| eval which=coalesce(which, "csv")
| eval SplunkHost=lower(SplunkHost)
| stats values(*) AS * BY SplunkHost
| where actualhourslag>HoursLag OR NOT actualhourslag="*"
| stats sum(Count) AS total max("Last Event") AS "Last Event" BY SplunkHost
| where total < 50
| fieldformat "Last Event"=strftime('Last Event', "%c")

Now I am not sure what you mean about the timeframe stuff but perhaps add something like this:

| bin span=6 actualhourslag AS timeframe
| stats <some stuff here> BY timeframe
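
For example (just a sketch, not tested against your data, and using illustrative field names hosts and host_count; it assumes the actualhourslag and SplunkHost fields produced by the search above), the placeholder could be filled in like this:

| bin span=6 actualhourslag AS timeframe
| stats values(SplunkHost) AS hosts dc(SplunkHost) AS host_count BY timeframe

Since each host has a single actualhourslag value, it lands in exactly one 6-hour bucket, which should give the non-overlapping view you described. You would apply this while actualhourslag is still present in the results, i.e. before the final sum(Count) stats.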

gcusello
SplunkTrust

Hi louispaul76,
beware that you created a new question instead of commenting on my answer!

Anyway, in your search I see two problems:

  • the first is that you use join, which is a very slow command, so it's better to use the stats command as I suggested;
  • the second is that you put | inputlookup in the main search and | metadata in the subsearch; it's better to reverse the order, because subsearches are limited to 50,000 results, so you might not get all the results of your secondary search.

So try something like this:

| metadata type=hosts index=my_indexes
| rename totalCount as Count, lastTime as "Last Event"
| eval actualhourslag=(now()-'Last Event')/60/60, host=lower(host), timeframe=case(tonumber(strftime('Last Event',"%H"))<7, "0-6", tonumber(strftime('Last Event',"%H"))<13, "7-12", tonumber(strftime('Last Event',"%H"))<19, "13-18", tonumber(strftime('Last Event',"%H"))>18, "19-24")
| stats values(timeframe) AS timeframe sum(Count) AS count BY host
| append [  | inputlookup perimeter.csv | eval host=lower(SplunkHost) ]
| stats values(timeframe) AS timeframe sum(count) AS Total BY host 

So you can apply all the rules you want using the timeframe field.
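
For example (just a sketch built on the search above), each dashboard panel could filter to a single bucket, so a host shown in the 0-6 panel never shows up in another one:

| where timeframe="0-6"
| table host timeframe Total

In a dashboard you could replace the literal "0-6" with a token from a dropdown input, so that one panel per bucket (or a single panel driven by the input) only ever shows each host once.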

Bye.
Giuseppe


nareshinsvu
Builder

Can you create a new column like found_flag in your csv file and keep updating it?

| inputlookup perimeter.csv | where isnull(found_flag) OR found_flag=""
| blah blah
| eval found_flag=1
| outputlookup perimeter.csv append=true

And then another search query to dedup the results

| inputlookup perimeter.csv | sort - found_flag | dedup SplunkHost | outputlookup perimeter.csv