
Help with looping over the result table

damucka
Builder

Hello,

Let us say I have one of the columns of the result table (of dbxquery) called HOST. It looks as follows:

HOST
ls5920/25
ls5920/25
ls5920/25
ls5921/26
ls5921/26
ls5921/26
ls5922/27
ls5922/27
ls5922/27

What I need to do now is to loop over these and trigger a dump for each host. To complicate things, a notation like ls5920/25 means that these are actually two hosts, primary and secondary: one being ls5920 and the second ls5925. I would like to trigger a dump for both. Also, the entries are duplicated, as shown above. The complete SPL at the moment looks as follows:
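For clarity, the intended expansion rule can be sketched outside SPL in Python (a hypothetical helper, just to pin down the logic: `ls5920/25` yields `ls5920` and `ls5925`, duplicates dropped):

```python
def expand_hosts(entries):
    """Expand entries like 'ls5920/25' into both hostnames, dropping duplicates."""
    hosts = []
    for entry in dict.fromkeys(entries):  # de-duplicate input, keep order
        prefix, _, suffix = entry.partition("/")          # 'ls5920', '25'
        hosts.append(prefix)                              # primary: ls5920
        hosts.append(prefix[:-len(suffix)] + suffix)      # secondary: ls5925
    return list(dict.fromkeys(hosts))

print(expand_hosts(["ls5920/25", "ls5920/25", "ls5921/26"]))
# -> ['ls5920', 'ls5925', 'ls5921', 'ls5926']
```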

| noop search_optimization=false| dbxquery query="call \"ML\".\"ML.PROCEDURES::PR_ALERT_TYPE_ANALYSING_LAST_MINUTES_ALL_HOSTS\"('BWP', to_timestamp(to_nvarchar('2019-06-19 10:31:00', 'YYYY-MM-DD HH24:MI'),'YYYY-MM-DD HH24:MI'), ?)" connection="HANA_MLBSO" 

        |rename comment AS " -------- Base search to fetch anomalies from the DB -------------------------"

        |rename comment AS "    -------- Count the result events and set it under totalCount: use in the custom condition to trigger the alert ( _totalCount > 0) -----------"
        | eventstats count as totalCount
        |rename comment AS " -------------------------------------------------"

        |rename comment AS "    -------- Search the executed alerts 5m in the past: if there were no anomalies in that time set the _trigger var to 1,
                                -------- then use it in custom trigger condition together with totalCount
                                ----------> resultcount 0 - 1 (no anomalies, the result consists of the totalCount being set by the eventstats above. Set the trigger and alert)
                                ----------> resultcount > 1 (anomalies found in the previous executions, do NOT trigger further alert actions)"
| appendcols 
         [
          search index=_internal sourcetype=scheduler 
            savedsearch_name="Anomaly Detection BWP Clone 2" OR
            savedsearch_name="Crash Dump Alert" OR 
            savedsearch_name="HSR-switch triggered" 
            earliest=-5m latest=now
         | convert ctime(scheduled_time) as SCHEDULE
         | convert ctime(dispatch_time) as DISPATCH 
         | stats max(result_count) as resultcount
         | rename comment AS "----------- count --------------"
         | eval trigger=case(resultcount<2, "1",1<2,"0")

         ]
        |rename comment AS " ------------------------------------------"

        |rename comment AS " ------------------------ In case of a 'NORMAL' state entry, the corresponding alert/notification should also be triggered -----------------------"
        | eval trigger=case(ALERT_TYPE="NORMAL","1",1<2,trigger)
        | eval reason=case(ALERT_TYPE="NORMAL","",1<2," - reason -")
        |rename comment AS " -------------------------------------------"

        |rename comment AS " --------------- Set the e-mail recipients -------------"
        | eval DBSID="TEST"
        | eval PRIO = "P1"
        | lookup email_groups.csv DBSID OUTPUT email_recipients_DBSID AS email_recipients_DBSID
        | lookup email_groups_critical_alerts.csv "PRIO" OUTPUT email_recipients_critical_alerts AS email_recipients_critical_alerts
        | strcat email_recipients_critical_alerts email_recipients_DBSID  email_recipients
        |rename comment AS " -----------------------------------------------------------------"

        |rename comment AS " --------------- rtedump triggering ---------------------"
        |eval rtetrigger=case(ALERT_TYPE="MAIN" AND trigger=0,"1",1<2,"0")
        |eval host_to_trigger="ls5947"

        |where rtetrigger=1
         |map maxsearches=1 search="dbxquery query=\"call SYS.MANAGEMENT_CONSOLE_PROC('runtimedump dump','ls5947:30240',?)\" connection=\"HANA_MLBSO_BHT\" "



        |rename comment AS " ------------------------------------------------------------------"

        | rename comment AS " ------------ Hide the auxiliary variables by renaming them to _var in order not to present them in the result ----------------------------------"
        | rename totalCount AS _totalCount
        | rename resultcount AS _resultcount
        | rename trigger AS _trigger
        | rename rtetrigger AS _rtetrigger
        | rename reason AS _reason
        | rename DBSID AS _DBSID
        | rename email_recipients AS _email_recipients
        | rename email_recipients_DBSID AS _email_recipients_DBSID
        | rename email_recipients_critical_alerts AS _email_recipients_critical_alerts
        | rename PRIO AS _PRIO
        | rename comment AS " ------------------------------------------------------"

Where I would need help with the looping is the "rtedump triggering" section - at the moment the dump is triggered statically for ls5947 there.

How would I achieve this?
The HOST column is part of the first dbxquery output. Ideally, the dump triggering should "just" be executed, while the table output remains unchanged and is still presented to the users.

Kind Regards,
Kamil


DavidHourani
Super Champion

Hi @damucka,
Have a look at the map command manual here:
https://docs.splunk.com/Documentation/Splunk/7.3.0/SearchReference/Map

You should be able to use the value of your host_to_trigger field directly by using it as a token $host_to_trigger$. So your search should look like this:

  ... |eval rtetrigger=case(ALERT_TYPE="MAIN" AND trigger=0,"1",1<2,"0")
       |eval host_to_trigger="ls5947"
       |where rtetrigger=1
       |map maxsearches=1 search="dbxquery query=\"call SYS.MANAGEMENT_CONSOLE_PROC('runtimedump dump','$host_to_trigger$:30240',?)\" connection=\"HANA_MLBSO_BHT\" "
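The per-row `$field$` token substitution that map performs can be sketched roughly in Python (a hypothetical analogue to illustrate the mechanism, not Splunk's actual implementation):

```python
# Rough analogue of how map substitutes $field$ tokens per input row
# (hypothetical sketch; not Splunk's actual implementation).
rows = [{"host_to_trigger": "ls5920"}, {"host_to_trigger": "ls5925"}]
template = ("call SYS.MANAGEMENT_CONSOLE_PROC("
            "'runtimedump dump','$host_to_trigger$:30240',?)")

queries = []
for row in rows:
    query = template
    for field, value in row.items():
        query = query.replace(f"${field}$", value)  # $host_to_trigger$ -> ls5920
    queries.append(query)

print(queries[0])
```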

Cheers,
David


damucka
Builder

Hi David,

Thank you, this works.
Now "just" the proper looping over the HOST column is missing, so that host_to_trigger is set for each entry.

Any idea on that?

Regards,
Kamil


DavidHourani
Super Champion

@damucka, what do you mean? Is the HOST column already populated by the search that precedes the "rtedump triggering" section? How are you getting that field?


damucka
Builder

Precisely, yes.
It is populated by the very first search / dbxquery and it looks as follows:

HOST
 ls5920/25
 ls5920/25
 ls5920/25
 ls5921/26
 ls5921/26
 ls5921/26
 ls5922/27
 ls5922/27
 ls5922/27

So, how would I loop over it, feed the entries to host_to_trigger, and then execute the following map query over them one by one? Also, the HOST column has duplicates, and an entry like ls5922/27 means I need to execute the map for two hosts, ls5922 and ls5927.


DavidHourani
Super Champion

Hi again @damucka,

So you can do it like this:

| makeresults 
| eval Existing_Host="ls5920/25" 
| eval FirstPart=substr(Existing_Host,1,4), SecondPart=substr(Existing_Host,5,7), SecondPart=split(SecondPart,"/") 
| mvexpand SecondPart 
| eval host_to_trigger=FirstPart+SecondPart
| dedup host_to_trigger
| table host_to_trigger

This splits your host field and gives you a host_to_trigger field with the complete list of hosts, based on your original ls5920/25-style values.

Let me know if that helps.


damucka
Builder

Hi David,

Perfect, thank you. It helps a lot and works as I would expect.
One last question to that:
- The next command, which triggers the dumps based on the hosts, is the map command with dbxquery:

        |map maxsearches=1 search="dbxquery query=\"call SYS.MANAGEMENT_CONSOLE_PROC('runtimedump dump','$host_to_trigger$:30240',?)\" connection=\"HANA_MLBSO_BHT\" "

It does not bring any result back, it just triggers the dump. This is fine. However, it also causes my whole result set to be empty, which is not what I want. Is there any way to run the map command but still have access to the search results as they were before the map command, and present them back?

As in the example above, how would I execute the map command but still present the table with host_to_trigger (and, in my case, some other variables/columns) to the end user?

Kind Regards,
Kamil


DavidHourani
Super Champion

Hi Kamil,

Glad to know I could help !

And yeah, this is the default behavior of the map command: the output only shows the results of map. If you want to keep some fields, you could append the needed columns to your dbxquery, and that should do the trick.

Please upvote and accept the comments and answers if they were helpful!

Cheers,
David


damucka
Builder

Hi David,

I am trying to implement it, but the moment I append I get a syntax error telling me that a "search" is expected for the append command.
So... would I have to execute the previous search (the one before map) once again and append its results to the map?
I cannot execute it entirely after the map, because the map depends on its results.

Or is there any other way to append the results of the previous search to the map without re-executing it?

Kind Regards,
Kamil


DavidHourani
Super Champion

You could add the append to the map command directly, but it will duplicate the results; see this query for more info:

index="_internal" | stats count by source | map search="|makeresults | eval A=\"$source$\" |append [|makeresults] "

Best would be to append right after map; it will save you a lot of resources. You could also include your entire map logic in a subsearch, but that might complicate things a little.

Cheers,
David


damucka
Builder

Thank you.
What I did was to append the whole map command to the previous search, and because within the append I need the variables for the map itself, I had to execute the dbxquery again. Maybe this is not the perfect solution, but I guess I can live with it.
So, the dump triggering section looks as follows now:

|rename comment AS " ****************************** Start:    rtedump triggering ************************************************************************  "
 | eval rtetrigger=case(ALERT_TYPE="MAIN" AND trigger=0,"1",1<2,"0")
 | where rtetrigger = 0 
 | append
 [

   | dbxquery query="call \"ML\".\"ML.PROCEDURES::PR_ALERT_TYPE_ANALYSING_LAST_MINUTES_ALL_HOSTS\"('BWP', to_timestamp(to_nvarchar('2019-06-19 10:31:00', 'YYYY-MM-DD HH24:MI'),'YYYY-MM-DD HH24:MI'), ?)" connection="HANA_MLBSO"

     | eval HOST="ls5945/47"
     | eval Existing_Host=HOST 
     | eval FirstPart=substr(Existing_Host,1,4), SecondPart=substr(Existing_Host,5,7), SecondPart=split(SecondPart,"/") 
     | mvexpand SecondPart 
     | eval host_to_trigger=FirstPart+SecondPart
     | dedup host_to_trigger 
     | table host_to_trigger    
   | map maxsearches=20 search="dbxquery query=\"call SYS.MANAGEMENT_CONSOLE_PROC('runtimedump dump','$host_to_trigger$:30240',?)\" connection=\"HANA_MLBSO_BHT\" "    
 ]  
|rename comment AS " ****************************  End:       rtedump triggering ***********************************************************************  "
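The net effect of this section can be sketched in Python (hypothetical stand-ins for the dbxquery rows and the dump call, just to illustrate the data flow: the dumps fire as a side effect while the original rows are returned unchanged):

```python
# End-to-end sketch of the final approach: expand HOST pairs, fire one dump
# per host, and keep the original rows untouched. run_dump stands in for the
# map/dbxquery call (hypothetical placeholder, not a Splunk API).
def run_pipeline(rows, run_dump):
    hosts = []
    for row in rows:
        prefix, _, suffix = row["HOST"].partition("/")     # 'ls5945', '47'
        hosts += [prefix, prefix[:-len(suffix)] + suffix]  # primary + secondary
    for host in dict.fromkeys(hosts):  # de-duplicate, like | dedup host_to_trigger
        run_dump(host)                 # side effect only, like the map/dbxquery
    return rows                        # the presented table stays unchanged

fired = []
rows = [{"HOST": "ls5945/47"}, {"HOST": "ls5945/47"}]
out = run_pipeline(rows, fired.append)
print(fired)  # ['ls5945', 'ls5947']
```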

Thank you for your support.

Regards,
Kamil


DavidHourani
Super Champion

Most welcome, Kamil! Glad I could help 🙂
