Hi @aamirs291 From the sounds of your requirements, you'll need to create some sort of state machine logic with a lookup file (CSV file or KV store) that tracks the "trigger status" of the incoming webhook events and whether they have already alerted out of Splunk. Based on your example input and requirements, I've created a run-anywhere data set to demonstrate the type of state logic. Adjust the dummy data at the start to test different scenarios:

| makeresults `comment("# create some dummy data - modify data to test")`
| eval dummydata="abc|Completed,bca|Queued,cab|Queued,def|unknown"
| makemv dummydata delim="," | mvexpand dummydata
| eval dummydata=split(dummydata, "|"), Process_Name=mvindex(dummydata, 0), Trigger_Status=mvindex(dummydata, 1)
`comment("# alert state logic: append previously triggered alert status event and filter out Completed ones")`
| inputlookup append=true triggerAlertStatus.csv
| fields Process_Name Trigger_Status
| stats
max(_time) AS _time
latest(Trigger_Status) AS Trigger_Status
BY Process_Name
| where Trigger_Status!="Completed"
| table _time Process_Name Trigger_Status
`comment("# lookup against Process_Name to see if alert has triggered previously and set pending alert if not")`
| lookup triggerAlertStatus.csv Process_Name OUTPUTNEW Alert_Status
| eval Alert_Status=if(isnull(Alert_Status), "pending", "triggered")
| outputlookup override_if_empty=false triggerAlertStatus.csv
| where Alert_Status="pending"

In another search tab, run "| inputlookup triggerAlertStatus.csv" to see how the lookup file changes on each run. Hopefully this gives you enough hints to get going - it will need adjusting to match your data and base search results etc. This final search would become your regularly scheduled Splunk alert, and every time it runs it would update the lookup file and maintain the alerting status. There is a detailed blog post here too for more background: https://www.splunk.com/en_us/blog/tips-and-tricks/maintaining-state-of-the-union.html Hope this helps.
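If it helps to reason about the state transitions outside of SPL, here is a rough Python sketch of the same logic (hypothetical names - the lookup file is modeled as a dict keyed by Process_Name, and one function call stands in for one scheduled run of the search):

```python
def run_alert_cycle(lookup, incoming):
    """One scheduled run: merge incoming events with the saved state,
    drop Completed processes, and mark never-seen processes as pending.

    lookup:   dict Process_Name -> {"Trigger_Status": ..., "Alert_Status": ...}
              (stands in for triggerAlertStatus.csv)
    incoming: dict Process_Name -> Trigger_Status from the base search
    """
    # inputlookup append=true + stats latest(Trigger_Status) BY Process_Name:
    # start from the saved statuses, then let the newer incoming events win
    state = {name: row["Trigger_Status"] for name, row in lookup.items()}
    state.update(incoming)

    # where Trigger_Status!="Completed": completed processes drop out of the
    # lookup on the next write, so they can alert again if they ever reappear
    state = {name: status for name, status in state.items() if status != "Completed"}

    # lookup ... OUTPUTNEW Alert_Status + eval: a process never seen before has
    # no saved Alert_Status, so it becomes "pending" (and fires the alert);
    # anything already in the lookup is marked "triggered" and stays quiet
    new_lookup = {}
    alerts = []
    for name, status in state.items():
        if name in lookup:
            alert = "triggered"
        else:
            alert = "pending"
            alerts.append(name)  # where Alert_Status="pending"
        new_lookup[name] = {"Trigger_Status": status, "Alert_Status": alert}
    return new_lookup, alerts
```

Running it twice with your example data shows the behaviour: the first run alerts on bca, cab and def (abc is filtered out as Completed); a second run where bca comes in as Completed removes it from the lookup and raises no new alerts.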