Splunk Search

How can I clean up my messy appended search?

cdgill
Explorer

Here is my search:

index=jenkins* job_name=mosaic-os*/master event_tag=job_event (type=started OR type=completed) (job_result=SUCCESS)
| dedup build_number
| eval build_duration = job_duration + queue_time + 'test_summary.duration'
| eventstats sum(stages{}.duration) as actual_build_time by build_number
| eval time_waiting_for_executor = job_duration-actual_build_time-queue_time
| stats values(job_name) as "Job Name", values(job_result) as "Included Job Results", avg(queue_time) as "Queue Time", avg(time_waiting_for_executor) as "Executor Wait Time", avg(build_duration) as "Build Duration", min(build_duration) as "Min Build Duration", max(build_duration) as "Max Build Duration", stdev(build_duration) as "Duration Standard Deviation", stdev(queue_time) as "Queue Time Standard Deviation", stdev(test_summary.duration) as "Test Standard Deviation"
| append
    [ search index=jenkins* job_name=mosaic-os*/master event_tag=job_event (type=started OR type=completed) (NOT job_result=SUCCESS)
    | dedup build_number
    | eval build_duration = job_duration + queue_time + 'test_summary.duration'
    | eventstats sum(stages{}.duration) as actual_build_time by build_number
    | eval time_waiting_for_executor = job_duration-actual_build_time-queue_time
    | stats values(job_name) as "Job Name", values(job_result) as "Included Job Results", avg(queue_time) as "Queue Time", avg(time_waiting_for_executor) as "Executor Wait Time", avg(build_duration) as "Build Duration", min(build_duration) as "Min Build Duration", max(build_duration) as "Max Build Duration", stdev(build_duration) as "Duration Standard Deviation", stdev(queue_time) as "Queue Time Standard Deviation", stdev(test_summary.duration) as "Test Standard Deviation" ]

Basically the only difference is that the first search shows successful builds while the second shows non-successful builds. There's got to be a better way to write this query, but I'm not well versed enough in Splunk to figure it out.


elliotproebstel
Champion

Final solution:

index=jenkins* job_name=mosaic-os*/master event_tag=job_event (type=started OR type=completed)
| dedup build_number 
| eval build_duration = job_duration + queue_time + 'test_summary.duration' 
| eventstats sum(stages{}.duration) as actual_build_time by build_number 
| eval time_waiting_for_executor = job_duration-actual_build_time-queue_time 
| eval result_status=if(job_result="SUCCESS", "SUCCESS", "FAILURE")
| stats values(job_name) as "Job Name", values(job_result) as "Included Job Results", avg(queue_time) as "Queue Time", avg(time_waiting_for_executor) as "Executor Wait Time", avg(build_duration) as "Build Duration", min(build_duration) as "Min Build Duration", max(build_duration) as "Max Build Duration", stdev(build_duration) as "Duration Standard Deviation", stdev(queue_time) as "Queue Time Standard Deviation", stdev(test_summary.duration) as "Test Standard Deviation" BY result_status
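
[Editor's note: if more than two buckets are ever needed (say, keeping ABORTED as its own row while still lumping FAILURE and UNSTABLE together), the same idea generalizes with case(). A sketch only, assuming job_result takes values like SUCCESS, FAILURE, UNSTABLE, and ABORTED — adjust the match arms to the actual data:

index=jenkins* job_name=mosaic-os*/master event_tag=job_event (type=started OR type=completed)
| dedup build_number
| eval result_status=case(job_result=="SUCCESS", "SUCCESS", job_result=="ABORTED", "ABORTED", true(), "FAILURE")
| stats avg(queue_time) as "Queue Time" by result_status

case() evaluates its condition/value pairs in order and returns the first match, with true() acting as the catch-all default.]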

cdgill
Explorer

I got the perfect result when I changed job_status to job_result. Thank you so much!


elliotproebstel
Champion

Ahh, yes - apologies for the typo, but glad we got it sorted! I'll convert the solving comment to an answer and fix the typo, so you can accept.


elliotproebstel
Champion

How about adding a BY clause to your stats command and using that to separate the successful vs non-successful builds? It would look like this:

index=jenkins* job_name=mosaic-os*/master event_tag=job_event (type=started OR type=completed)
| dedup build_number 
| eval build_duration = job_duration + queue_time + 'test_summary.duration' 
| eventstats sum(stages{}.duration) as actual_build_time by build_number 
| eval time_waiting_for_executor = job_duration-actual_build_time-queue_time 
| stats values(job_name) as "Job Name", values(job_result) as "Included Job Results", avg(queue_time) as "Queue Time", avg(time_waiting_for_executor) as "Executor Wait Time", avg(build_duration) as "Build Duration", min(build_duration) as "Min Build Duration", max(build_duration) as "Max Build Duration", stdev(build_duration) as "Duration Standard Deviation", stdev(queue_time) as "Queue Time Standard Deviation", stdev(test_summary.duration) as "Test Standard Deviation" BY job_result

Does that get you what you're looking for?


cdgill
Explorer

That almost gets me there, thank you! The only issue is that your solution breaks the job results up individually, whereas I want successful builds to be one row and failure/unstable/aborted builds to be grouped together in another row.
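
[Editor's note: for anyone hitting the same wall — stats ... BY job_result produces one row per distinct job_result value, so FAILURE, UNSTABLE, and ABORTED each get their own row. Collapsing them requires computing a derived field before the stats. A minimal sketch:

| eval result_status=if(job_result=="SUCCESS", "SUCCESS", "FAILURE")
| stats count by result_status

Every value that isn't SUCCESS then lands in the single FAILURE bucket.]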


elliotproebstel
Champion

Hmmm... Could you post a screencap of what you're getting now? I'd expect it to have one row for SUCCESS and one for whatever job_result isn't SUCCESS (maybe FAILURE?). You could edit your question to add it in. Feel free to redact/obfuscate any sensitive fields.


cdgill
Explorer

Here is the result: https://imgur.com/a/84V3A
