Splunk Search

Why is my appendcols search returning an incorrect count?

mackd
New Member

I have two separate searches that I want to combine into one. When I use appendcols, I get the wrong counts from the search encapsulated within appendcols. Can someone clue me in on what I'm doing wrong?

In the search below, the "Provisioned org" search returns a different (incorrect) count than when I run it on its own.

sourcetype=logs statusCode=400 "Org failure" earliest=-1mon@mon latest=@mon
| timechart span=1d count as FAILED
| appendcols [search sourcetype=logs "Provisioned org" earliest=-1mon@mon latest=@mon | timechart span=1d count as SUCCESSFUL]
1 Solution

niketn
Legend

Appendcols cannot correlate an unlimited number of events. Since you are trying to aggregate roughly one month of data for both successful and unsuccessful events, the subsearch may return more events than your Splunk configuration (hardware and limits) can handle. You would notice two symptoms: the search running slowly and older dates returning 0 counts.
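For context, the ceiling referred to above lives in limits.conf. A minimal sketch of the relevant stanza is below; the values shown are illustrative assumptions, not recommendations, and the actual defaults depend on your Splunk version (check limits.conf.spec on your install):

# limits.conf (illustrative values only)
[subsearch]
# maximum number of results a subsearch (such as the appendcols search) may return
maxout = 10000
# maximum number of seconds a subsearch may run before it is finalized
maxtime = 60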

1) Run appendcols over a relatively short period of time, such as a week or a single day (see the 7-day sketch after the timechart example below).
2) If the statusCode field, or any other field usable for correlation, is present in both successful and failed events, use a single stats/timechart search instead of correlation techniques like append, appendcols, or join. Assuming 200 is successful and everything else (including 400) is failed:

sourcetype=logs "Org failure" OR "Provisioned org" statusCode=*  earliest=-1mon@mon latest=@mon| timechart span=1d count(eval(statusCode=200)) as SUCCESS, count(eval(statusCode!=200)) as FAILED
____________________________________________
| makeresults | eval message= "Happy Splunking!!!"


mackd
New Member

Thank you. Yes, I did notice both conditions you mentioned - slow queries and 0 counts.
