Splunk Search

Why is my appendcols search returning an incorrect count?

mackd
New Member

I have two separate searches that I want to combine into one. When I use appendcols, I get wrong counts for the search inside the appendcols subsearch. Can someone clue me in on what I'm doing wrong?

In the search below, the "Provisioned org" subsearch returns a different (incorrect) count than when I run it on its own.

sourcetype=logs statusCode=400 "Org failure" earliest=-1mon@mon latest=@mon
| timechart span=1d count as FAILED
| appendcols [search sourcetype=logs "Provisioned org" earliest=-1mon@mon latest=@mon | timechart span=1d count as SUCCESSFUL]
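
For comparison, the subsearch run on its own (lifted straight out of the appendcols clause) is:

sourcetype=logs "Provisioned org" earliest=-1mon@mon latest=@mon
| timechart span=1d count as SUCCESSFUL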
1 Solution

niketn
Legend

Appendcols is not able to correlate a very large number of events. Since you are trying to aggregate roughly one month's worth of successful and failed events, there may be more events than your Splunk deployment can handle (hardware and configured limits). You would notice two things: the search running very slowly, and older dates returning 0 counts.

1) You can either run appendcols over a relatively short period of time, such as a week or a single day (a sketch follows after the search below).
2) If the statusCode field, or any other field that can be used for correlation, is present for both successful and failed events, then use the stats/timechart command instead of correlation techniques such as append, appendcols, or join. Assuming 200 is successful and everything else (including 400) is failed:

sourcetype=logs "Org failure" OR "Provisioned org" statusCode=*  earliest=-1mon@mon latest=@mon| timechart span=1d count(eval(statusCode=200)) as SUCCESS, count(eval(statusCode!=200)) as FAILED
____________________________________________
| makeresults | eval message= "Happy Splunking!!!"

mackd
New Member

Thank you. Yes, I did notice both conditions you mentioned - slow queries and 0 counts.
