Splunk Search

Why is my appendcols search returning an incorrect count?

mackd
New Member

I have two separate searches that I want to group into one. When I use appendcols I get wrong counts for the search encapsulated within appendcols. Can someone clue me into what I'm doing wrong?

In the search below, "Provisioned Org" returns an incorrect count compared to when I run that search on its own.

sourcetype=logs statusCode=400 "Org failure"  earliest=-1mon@mon latest=@mon| timechart span=1d count as FAILED|appendcols [search  sourcetype=logs   "Provisioned org"  earliest=-1mon@mon latest=@mon | timechart span=1d count as SUCCESSFUL]

niketn
Legend

Appendcols cannot reliably correlate that many events. Since you are trying to aggregate roughly one month of data for both successful and failed events, there may be more events than your Splunk configuration (hardware and limits) can handle, so the subsearch results get truncated. You would notice two symptoms: the search runs very slowly, and older dates return 0 counts.

1) You can run appendcols over a relatively short period of time, such as a week or a single day (see the sketch after the search below).
2) If the statusCode field (or any other field you can correlate on) is present for both successful and failed events, use a single stats/timechart search instead of correlation techniques like append, appendcols, or join. The search below assumes statusCode 200 means success and everything else (including 400) means failure.

sourcetype=logs "Org failure" OR "Provisioned org" statusCode=*  earliest=-1mon@mon latest=@mon| timechart span=1d count(eval(statusCode=200)) as SUCCESS, count(eval(statusCode!=200)) as FAILED
____________________________________________
| makeresults | eval message= "Happy Splunking!!!"


mackd
New Member

Thank you. Yes, I did notice both symptoms you mentioned: slow queries and 0 counts.
