Is there a way I can ensure the rest of the search does not run when the dbxquery fails in a subsearch?

briancronrath
Contributor

I have a saved search that periodically writes a lookup file for us; it joins on data from a subsearch that runs the dbxquery command. We ran into some DB connection issues where search heads were intermittently unable to connect to the database and run the query. When that happened, the search essentially destroyed our lookup file, because the rest of the search continued to run even though the dbxquery had failed. Is there a way I can ensure the rest of the search does not run when the dbxquery in a subsearch fails?
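For context, the saved search follows a pattern roughly like this (a sketch only; the index, connection name, query, and join field are placeholders, not our actual search):

index=my_index sourcetype=my_sourcetype
| join id 
    [| dbxquery connection="my_db_connection" query="SELECT id, name FROM some_table"]
| outputlookup mylookup.csv

When the dbxquery fails, the join subsearch just comes back empty, the outer search keeps running, and the final outputlookup overwrites the lookup file with whatever empty or incomplete results are left.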

1 Solution

DalJeanis
Legend

In essence, you'll need to put another step in there.

Put the dbxquery results into a staging file first, then have a second search write the staging results into the real lookup only if the staging file actually contains something. If the dbxquery fails, nothing new lands in staging and the real lookup is left alone.
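The first scheduled search is just the dbxquery piped into the staging file. A minimal sketch, where the connection name and query stand in for whatever your saved search actually runs:

| dbxquery connection="my_db_connection" query="SELECT id, name FROM some_table"
| outputlookup append=f mystagingfile.csv

The second scheduled search then promotes staging into the real lookup only when staging has rows: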

| inputlookup myrealfile.csv
| eval filenum=0 ``` rows from the real lookup are file 0 ```
| inputlookup append=t mystagingfile.csv
| eval filenum=coalesce(filenum,1) ``` staging rows arrive without filenum, so they become file 1 ```
| eventstats max(filenum) as maxfile ``` maxfile=1 if staging has any rows, else 0 ```
| where filenum=maxfile ``` keep the staging rows when they exist, otherwise keep the real rows ```
| fields - filenum maxfile
| outputlookup append=f myrealfile.csv
| appendpipe 
    [| where false() ``` drop every row so staging is overwritten as an empty file ```
     | outputlookup append=f mystagingfile.csv 
     ]

...or, if you prefer this style...

| inputlookup mystagingfile.csv
| appendpipe 
    [ | stats count as newcount ``` one marker row holding the number of staging rows ```
      | eval rectype=if(newcount>0,"killme","keepme") ``` if staging has new rows, the old lookup rows should be killed ```
      | inputlookup append=true myrealfile.csv 
      | eventstats max(rectype) as rectype ``` spread the marker value onto the appended lookup rows ```
      | where isnull(newcount) AND rectype="keepme" ``` drop the marker row; keep the old rows only when staging was empty ```
      | fields - rectype
    ]
| outputlookup append=f myrealfile.csv
| appendpipe 
    [| where false() 
     | outputlookup append=f mystagingfile.csv 
     ]
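With either version, the dbxquery search writes only to staging, and this merge search runs on its own schedule afterward. The closing appendpipe empties the staging file once the merge is done, so if the next dbxquery run fails outright, staging is still empty and the merge simply rewrites the real lookup unchanged.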


