Getting Data In

Cannot index event because it is larger than mpool size

lmyrefelt
Builder

Hi, I am getting the above message from our indexers from time to time.

" Search peer * has the following message: cannot index event because it is larger than mpool size=* source=source::/blah/blah .

Except for trying to reduce the size of the generated output ... is there anything I can do or tweak on the Splunk side? Is there a size limitation on the (file-based) inputs?

Thanks!

1 Solution

lmyrefelt
Builder

OK, I might have been a little bit fast on the trigger there 😛
We have actually specified retention based on time, size, number of buckets, and whatnot for our indexes ...

So it seems the input (the monitored file) is bigger than at least one, if not all, of the configured hot buckets ...

Any thoughts? 🙂

07-27-2013 19:41:21.549 +0200 ERROR IndexProcessor - cannot index event because it is larger than mpool size=## source=source::##
07-27-2013 19:41:28.625 +0200 ERROR JournalSlice - Error writing: Success, file="##"
07-27-2013 19:41:28.635 +0200 ERROR databasePartitionPolicy - Unable to write raw: for idx=##, path='##'
07-27-2013 19:41:28.636 +0200 INFO databasePartitionPolicy - idx=## Moving from='hot_v1_96' to warm='write error on hot bucket'
07-27-2013 19:41:38.647 +0200 WARN JournalSlice - Path="##" never got expected file size even after waiting 10000 ms
07-27-2013 19:41:38.647 +0200 ERROR JournalSlice - Forcing immediate bucket roll due to incorrect size of bucket due to inconsistent file size of path="##"
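
For context, bucket sizing and retention of this kind live in indexes.conf; a minimal sketch, assuming a hypothetical index name and values (the setting names themselves are standard):

[my_index]
# Maximum size of a hot bucket: "auto" is ~750 MB, "auto_high_volume" ~10 GB on 64-bit systems
maxDataSize = auto_high_volume
# Maximum number of concurrent hot buckets for this index
maxHotBuckets = 3
# Freeze (age out) events older than ~180 days, expressed in seconds
frozenTimePeriodInSecs = 15552000
# Cap the total size of the index, in MB
maxTotalDataSizeMB = 500000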

rzylstra_splunk
Splunk Employee

Hi,

The mpool messages relate to the memory pool used by the Splunk indexer pipeline. The pool's size is derived from a capacity value tied to the indexer's memory limit.
See here: http://docs.splunk.com/Documentation/Splunk/6.5.1/Troubleshooting/Aboutmetricslog (search for "mpool")
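
If you want to inspect the pool on your own indexer, metrics.log reports an mpool group; a minimal sketch of the search (the exact fields reported are described on the docs page above):

index=_internal source=*metrics.log* group=mpool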

In props.conf you can adjust TRUNCATE = xxx (the default is 10000 bytes, the maximum size of a single event) and MAX_EVENTS = xxx (the default is 256, the maximum number of lines per event).
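
For example, a minimal props.conf sketch, assuming a hypothetical sourcetype name and values:

[mysourcetype]
# Maximum event size in bytes (default 10000); 0 disables truncation entirely
TRUNCATE = 100000
# Maximum number of lines merged into one event (default 256)
MAX_EVENTS = 512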

Try using this search to determine the best value for TRUNCATE:
sourcetype=mysourcetype | eval length=len(_raw) | stats max(length) perc95(length) max(linecount) perc95(linecount) 
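
As a hypothetical example: if perc95(length) comes back around 40000, setting TRUNCATE = 100000 leaves comfortable headroom, and max(linecount) tells you whether MAX_EVENTS needs raising as well.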

Hope this helps to clarify and sort out the issues.

Cheers

hettervik
Builder

Hi,

Long time no see!

Do you know why this error can occur? Could it be because LINE_BREAKER is not configured right, and thus certain events get too big for the mpool?

0 Karma

frodebjerke
New Member

Did you ever solve this issue? I'm experiencing the same thing on our Splunk 6.1.6 installation, with a log file of 9+ GB and an mpool of approximately 4 GB.

0 Karma

TonyLeeVT
Builder

Are you breaking up the file (via line breaking) at all? I figured this much out...

1) Splunk is better at searching many smaller events than one large event.
2) If you have a large file, send it to Splunk and cut it up into smaller events using line breaking.

My XML files were gigabytes in size, but when cut down to 15-line events, the ingest worked fine. Hope that helps. To get the event breaking right, it may be easier to test it on a file smaller than 9 GB first.
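
A minimal props.conf sketch of that kind of event breaking for XML, assuming a hypothetical sourcetype and a hypothetical <record> element as the event boundary:

[my_xml_sourcetype]
# Break the stream before each opening <record> tag; the first capture group is consumed as the boundary
LINE_BREAKER = ([\r\n]+)<record>
# Keep each broken chunk as one event; do not re-merge lines afterwards
SHOULD_LINEMERGE = false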

0 Karma

frodebjerke
New Member

Hmmm... well...
I sorted it out just two hours ago by setting MAX_DAYS_AGO in props.conf on the indexer to 185 (the default value is 2000), and so far it seems to be working fine. I believe Splunk tried to index the whole old log from way back, and all of the mpool ended up being spent on that.
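
In props.conf that would look something like this, assuming a hypothetical sourcetype name (MAX_DAYS_AGO itself is a standard setting):

[mysourcetype]
# Reject extracted timestamps more than 185 days in the past (default is 2000 days);
# events with older dates are indexed with the timestamp of the last acceptable event
MAX_DAYS_AGO = 185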

I'm still not quite sure it will hold up, but within a few days I will probably know for sure.
