I just ran into the problem -- Error in 'IndexScopedSearch': The search failed. More than 125000 events found at time. While these threads make sense, I have two related questions.
http://answers.splunk.com/questions/2958/error-in-indexscopedsearch-the-search-failed
http://answers.splunk.com/questions/303/whats-max-events-i-can-have-timestamped-with-a-particular-se...
First, I noticed that if I have a few seconds that are each under the 125k (or is it 100k?) limit for a unique combination of indexer/index/host/source/sourcetype, my searches work fine provided I'm searching a narrow enough timeframe. But once I search, say, the dreaded All time, I get this error. It looks like at a coarse enough time scale this rule may actually apply to 1-minute buckets. Is this correct?
Second, is this a limitation only in the UI? jrodman mentions "Splunk stores the values by second, but needs to return the data to the UI, and other clients...". Does this include scheduled searches for summary indexes? I'm just wondering whether the limitation comes from Splunk needing to create some sort of metadata for the UI and related clients, or if it's a specific storage-retrieval constraint.
This limit applies purely to the events retrieved by the search keywords, so scoping the search down to retrieve fewer than 100-200k events per second per distributed indexer will guarantee that you don't hit it.
However, it seems odd that you'd run into this by expanding your time range. This warning comes up for a particular second, and expanding the time range will only make a difference if it captures another second that has more events than this (again, per server).
The limitation is in splunkd itself, but we could explore making the parameter tunable based on memory available. The reason for the limit is that the index (and its notion of a cursor) doesn't support subsecond precision, and to guarantee inverse-time-order retrieval, we must pull all events for a given second and then sort them with respect to subsecond and arrival.
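To illustrate the retrieval described above, here is a minimal sketch (not Splunk source code; the names, the 125k constant, and the tiebreak rules are assumptions for illustration). Because the on-disk index orders events only to whole-second precision, every event sharing a given second must be buffered in memory and then sorted by subsecond time and arrival order before it can be returned newest-first -- that buffer is what the per-second limit bounds:

```python
from collections import namedtuple

# Illustrative event: whole-second timestamp, subsecond fraction, arrival order.
Event = namedtuple("Event", ["second", "subsecond", "arrival", "data"])

MAX_EVENTS_PER_SECOND = 125_000  # the limit from the error message

def events_for_second(index, second, limit=MAX_EVENTS_PER_SECOND):
    """Pull ALL events stamped with `second`, then sort to subsecond precision.

    The index cursor can only seek to whole seconds, so the entire
    second's worth of events must be held in memory at once.
    """
    bucket = [e for e in index if e.second == second]
    if len(bucket) > limit:
        raise RuntimeError(f"More than {limit} events found at time {second}")
    # Inverse-time order: latest subsecond first; arrival order breaks ties.
    bucket.sort(key=lambda e: (e.subsecond, e.arrival), reverse=True)
    return bucket

# Three events within the same second come back newest-first.
idx = [
    Event(100, 0.25, 1, "a"),
    Event(100, 0.75, 2, "b"),
    Event(100, 0.25, 0, "c"),
]
result = events_for_second(idx, 100)
# result order: "b" (0.75), then "a" and "c" (both 0.25, arrival tiebreak)
```

The point of the sketch is that the memory cost scales with the densest single second, not with the overall time range -- which is why a tunable, memory-based parameter is the natural direction.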
Did anyone resolve this? I'm having a similar issue.
We are experiencing this issue when creating summaries with more than 1M results...
Do you know what the limit is in 4.1.6?
In 4.1.6, the limit should be much higher, but still not tunable.
Hi Stephen, where can we change the limit in Splunk 4.1.6?