Splunk Search

Silent hit of limit

iKate
Builder

Dear splunk employees,

Can you please implement an improvement to Splunk notifications: if any configuration limit is hit, inform the user.

I've faced this problem several times, and the most recent case is as follows: we have a scheduled search that uses the map command to put a specific date into a dbquery search and then performs other calculations.
Since it runs as a subsearch, it has a limit of 500000 events. One day we exceeded this number but didn't notice, because no indication of it was shown, so the results were misleading 😞

Please show such notifications near the search bar; or, if it is a scheduled search, send an alarm along with the results; or, if it is a server-side limit, send an alarm to the admin's email.

Hoping for your help! Thanks

DalJeanis
Legend

I find silent crashes annoying as well. The limit could be 50K or 10K depending on which limit you are hitting: maxresultrows (search, default 50K) or maxout (subsearch, default 10K).
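For reference, both of those limits live in limits.conf on the search head. A minimal sketch of the relevant stanzas, with the default values cited above (raise them with care, since they exist to protect search-head memory and runtime):

    # limits.conf (search head) - defaults shown, not a recommendation
    [searchresults]
    maxresultrows = 50000

    [subsearch]
    maxout = 10000
    maxtime = 60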

This limit is on the output of the results, not on the number of events that are analysed, so an effective subsearch can scan more events than that as long as it is selective about what it returns. Depending on your use case, there could be a creative way of getting around this limit by merging/concatenating events and then using mvexpand to spread them back out again...
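As a rough illustration of that trick (a sketch only; the index, sourcetype and field names are invented, not from this thread): have the subsearch pack many values into a multivalue field so it returns far fewer rows than maxout, then spread them back out in the outer search:

    index=web sourcetype=access_combined
    | append
        [ search index=web sourcetype=app_errors
          | stats values(session_id) AS session_id BY host ]
    | mvexpand session_id
    | stats count BY session_id

The subsearch here returns one row per host instead of one row per error event, so its output stays well under the row limit even when the number of underlying events does not.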

There are various places to trap this issue. For instance, you could set up an alert to notice the phrase "truncating to maxout" in the search results.

This post suggests adding maxout=0 and maxtime=0 as part of the append command (for example) as a way of beating the limits... https://answers.splunk.com/answers/77971/subsearch-subsearch-produced-173215-results-truncating-to-m...
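A sketch of that approach, with made-up index names (and the usual caveat that removing the caps also removes the memory and runtime protection they provide):

    index=web sourcetype=access_combined earliest=-24h
    | append maxout=0 maxtime=0
        [ search index=db_extract sourcetype=orders earliest=-24h ]
    | stats count BY sourcetype

Note that this only lifts the limits for that one append invocation; the global defaults stay in place for everything else.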

Mostly, the best practice is to eliminate subsearches wherever possible in favor of correlated searches: bring both sets of results into a single query and analyze them together. Or, alternatively, flip which part of the search is the main search and which is the subsearch. It's not always possible, but it's often the way to go, and if a subsearch is returning 50K results or more, there is at the very least a business case for a review.
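For example, instead of filtering one dataset by a subsearch over another, both datasets can be brought into one query and correlated with stats (again a sketch with invented index and field names):

    (index=web sourcetype=access_combined) OR (index=security sourcetype=failed_login)
    | stats dc(sourcetype) AS sources values(sourcetype) AS types count BY user
    | where sources > 1

This finds every user that appears in both datasets in a single pass, with no subsearch and therefore no maxout ceiling on either side.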

If you'd like help analyzing your search for improvement, then please post the query (minus any confidential information) into a new question and we'll help you all we can.

iKate
Builder

DalJeanis, thanks for these practical tips. Actually, I've already learned them empirically. And in my case, as I wrote in my previous answer, what helped was using an SQL time modification instead of mapping anything into dbquery.
And also thanks for offering help!


s2_splunk
Splunk Employee

Hi iKate,
you can submit any enhancement requests you have for the product by filing a P4 support case via the Splunk support portal.

Having said that:
Any kind of warning issued as a result of the execution of an ad-hoc search should result in a visual hint in the UI that alerts the user, either with a yellow or a red icon.
Although I have not tested/validated this, any errors or warnings produced by a scheduled search should produce a log entry in scheduler.log (index=_internal sourcetype=scheduler). You should be able to find a corresponding message whenever any limits are exceeded.
Identify the pattern and create a search that finds it and alerts on it to meet your need.
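A sketch of such a search (hedged: the exact sourcetype and message text can vary by version, and iKate notes below that the truncation message did not show up in _internal in her environment, so verify the pattern actually exists in your own logs before relying on an alert built from it):

    index=_internal (sourcetype=scheduler OR sourcetype=splunkd) "truncating to maxout"
    | table _time host sourcetype source _raw

Saved as an alert that triggers when the number of results is greater than zero, this would at least surface the problem the next time it happens.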

Finally, you may want to see if you can rewrite your search to not require a sub-search. In most cases, that is possible and far more efficient than using subsearches. There are a lot of folks on answers that can possibly help you with that.

Hope that helps!


iKate
Builder

Hi ssievert! Thanks for answering.

You're right, I'd better submit an enhancement case about making truncation notifications more visible to users. Maybe truncation issues (which greatly affect the integrity of the data) are not the only ones that happen almost silently, but they are what I've faced and suffered from several times.

So at the moment, if the map results exceed the subsearch limit in your confs, OR you join more rows than another configured limit allows (in my case it's 50000), what you get in the search UI is just a green mark that never makes you suspect something serious is happening.
But if you unwrap the notification you'll see really bad news like:

    [subsearch]: Search Processor: Subsearch produced 163678 results, truncating to maxout 50000

OR

    [map]: Search Processor: Subsearch produced 2266171 results, truncating to maxout 500000

One might have created a saved search long ago, when there was no problem with hitting any limit, and might not even have been aware of the limits a query can run into. But after some time the number of events grows, truncation starts to happen, and there is no indication of it at all.

Once you've stumbled upon this problem you might set up an alarm to find such issues in the logs, but searching the _internal index for "truncating" shows nothing, even when you know for sure that it has happened.

So all this may be rather confusing for users, especially new ones.

As for me, I rewrote the query using SQL date modifications instead.
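For anyone hitting the same issue, the idea is to push the date filter into the SQL itself rather than computing it in Splunk and mapping it into the database query. A rough sketch using DB Connect's dbxquery command (the connection name, table and SQL dialect are all invented; the older dbquery command mentioned above takes different syntax):

    | dbxquery connection=my_database query="SELECT id, amount, created_at FROM orders WHERE created_at >= CURRENT_DATE - INTERVAL '1' DAY"
    | stats sum(amount) AS total_amount BY id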
