Getting Data In

Error in splunkd.log: Breaking event because limit of 256 has been exceeded

sushma7
Path Finder

Hi Team,

I was indexing WebSphere logs into Splunk when, all of a sudden, it stopped indexing. When I looked into the logs I found the errors below:

06-23-2014 10:10:12.855 -0400 WARN AggregatorMiningProcessor - Breaking event because limit of 256 has been exceeded - data_source="/opt/IBM/WebSphereND64/AppServer/profiles/AppSrv01/logs/JVM2/SystemOut_14.06.23_10.10.12.log", data_host="SEP01XVP-004", data_sourcetype="systemout"
06-23-2014 10:10:12.856 -0400 WARN DateParserVerbose - Failed to parse timestamp. Defaulting to timestamp of previous event (Mon Jun 23 01:15:17 2014). Context: source::/opt/IBM/WebSphereND64/AppServer/profiles/AppSrv01/logs/JVM2/SystemOut_14.06.23_10.10.12.log|host::SEP01XVP-004|systemout|430

How can I overcome this? How do I get the WebSphere logs indexing again?
Kindly help on a priority basis.

Regards,
Sushma.

1 Solution

yannK
Splunk Employee

Guys, "WARN AggregatorMiningProcessor - Breaking event because limit of 256 has been exceeded"

This means that your multiline event has been cut into chunks of 256 lines because of the default limit;
see MAX_EVENTS=256 in props.conf.

It is usually followed by a warning that no timestamp was found on the second piece.
You can adapt your sourcetype, and maybe tune your timestamp extraction, to improve this.
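
As a sketch, raising the limit for the affected sourcetype in props.conf on the indexer (or heavy forwarder) could look like this; the stanza name comes from the "systemout" sourcetype in the warning, and 10000 is only an illustrative value (restart splunkd after editing):

[systemout]
# raise the multiline event limit from the default of 256 lines
MAX_EVENTS = 10000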

EDIT :
You can use this search to find the long events.

index=_internal source=*splunkd.log* WARN (AggregatorMiningProcessor OR LineBreakingProcessor OR DateParserVerbose)
| rex "(?<type>(Failed to parse timestamp|suspiciously far away|outside of the acceptable time window|too far away from the previous|Accepted time format has changed|Breaking event because limit of \d+|Truncating line because limit of \d+))"
| eval type=if(isnull(type),"unknown",type)
| rex "source::(?<eventsource>[^\|]*)\|host::(?<eventhost>[^\|]*)\|(?<eventsourcetype>[^\|]*)\|(?<eventport>[^\s]*)"
| eval eventsourcetype=if(isnull(eventsourcetype),data_sourcetype,eventsourcetype)
| stats count dc(eventhost) values(eventsource) dc(eventsource) values(type) values(index) by component eventsourcetype
| sort -count


jrodman
Splunk Employee

Be sure you ACTUALLY have events over 256 lines long.
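
A quick way to check is to look at the linecount default field for that sourcetype; a sketch (adjust the index to wherever your WebSphere data lands):

index=main sourcetype=systemout | stats max(linecount) AS max_lines count(eval(linecount>=256)) AS events_at_limit

If max_lines sits exactly at 256, your events are most likely being cut at the limit rather than genuinely ending there.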


yannK
Splunk Employee
Splunk Employee

Beware: TRUNCATE=0 may bite you if you have very bad events. You may want to keep a real limit (the default is 10000; why not start with 100000?).
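
A sketch with an explicit, generous limit instead of disabling truncation entirely (100000 is just the suggested starting point):

[systemout]
# cap very long lines at a finite length rather than turning truncation off
TRUNCATE = 100000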


sushma7
Path Finder

Thanks for your replies, guys! Setting MAX_EVENTS=10000 and TRUNCATE=0 in props.conf on the indexer and restarting the indexer resolved the issue.
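
For reference, a sketch of what that etc/system/local/props.conf change likely looked like, assuming the settings went under the systemout sourcetype stanza (and noting yannK's caution above about TRUNCATE=0):

[systemout]
MAX_EVENTS = 10000
# 0 disables line-length truncation entirely; see the caution above
TRUNCATE = 0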


lmyrefelt
Builder

@sushma7,

If you have it working on one machine, why don't you replicate the settings for that data input?

What your splunkd.log is telling you is that Splunk did not find (or recognize) a timestamp in the indexed event, and therefore it doesn't know how to break/format the given events. So if you have it working for one data source/input, you should be able to get it to work based on the settings from that one.

If your event starts with:
NN-NN-NNNN NN:NN:NN.NNN

24-06-2014 14:42:34.542

then the setting from linu1988 would work.

And if this is what your timestamp looks like, you should be able to use

TIME_FORMAT = %d-%m-%Y %H:%M:%S.%3N
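
Combined with linu1988's line-breaking regex, a sketch of the complete props.conf stanza might look like this (assuming your events really do start with that dd-mm-yyyy timestamp; the stanza name comes from the sourcetype in the warning):

[systemout]
BREAK_ONLY_BEFORE = \d{2}-\d{2}-\d{4}\s\d{2}:\d{2}:\d{2}\.\d{3}
TIME_FORMAT = %d-%m-%Y %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 25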

please check
http://strftime.net

and

http://man7.org/linux/man-pages/man3/strftime.3.html

good luck


lmyrefelt
Builder

If you can provide an example event, we can help you more.
You can try the following settings in props.conf (edit/adjust for your environment) for the affected sourcetype:
TIME_PREFIX = TIMESTAMP=
MAX_TIMESTAMP_LOOKAHEAD = 25
MAX_EVENTS =

and if you know the time format, you can also specify it with:
TIME_FORMAT =

The docs are here: http://docs.splunk.com/Documentation

How to configure timestamps:
http://docs.splunk.com/Documentation/Splunk/latest/Data/HowSplunkextractstimestamps
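
For instance, if an event began with a literal marker such as TIMESTAMP=2014-06-23 10:10:12.855 (a purely hypothetical format, since no sample event has been posted), the filled-in template could look like:

[systemout]
TIME_PREFIX = TIMESTAMP=
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_EVENTS = 10000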


lmyrefelt
Builder

@sushma7, BREAK_ONLY_BEFORE (as well as the other settings) should be in the props.conf file on your indexers (please read the docs I linked for in-depth details).


sushma7
Path Finder

Hi lmyrefelt,
As you suggested, I shall try changing the sourcetype, but just to give you an update: I am also collecting the same WebSphere logs from another machine, which works absolutely fine. The problem occurs only while collecting the WebSphere logs from this one box.


sushma7
Path Finder

Where should I use this line "BREAK_ONLY_BEFORE=\d{2}-\d{2}-\d{4}\s\d{2}:\d{2}:\d{2}.\d{3}"? Under inputs.conf in local?


linu1988
Champion

Use
BREAK_ONLY_BEFORE=\d{2}-\d{2}-\d{4}\s\d{2}:\d{2}:\d{2}\.\d{3}


lmyrefelt
Builder

The log that you are trying to index is:

/opt/IBM/WebSphereND64/AppServer/profiles/AppSrv01/logs/JVM2/SystemOut_14.06.23_10.10.12.log

There is supposed to be a log4j sourcetype; you could try to assign this sourcetype to the data input instead of the (default) "systemout" sourcetype.
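
A sketch of how that assignment might look in inputs.conf on the forwarder monitoring the file (the directory path comes from the warning above; log4j is the pretrained sourcetype being suggested):

[monitor:///opt/IBM/WebSphereND64/AppServer/profiles/AppSrv01/logs/JVM2]
sourcetype = log4j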


lmyrefelt
Builder

Well, this is your splunkd.log; what we need to see is your WebSphere log, to be able to determine timestamp handling, event breaking, etc.

And you should really read the docs that I linked for you 🙂


sushma7
Path Finder

The above-mentioned lines are what I retrieved from my logs; the same lines reappear a number of times.


sushma7
Path Finder

06-23-2014 10:10:12.855 -0400 WARN AggregatorMiningProcessor - Breaking event because limit of 256 has been exceeded - data_source="/opt/IBM/WebSphereND64/AppServer/profiles/AppSrv01/logs/JVM2/SystemOut_14.06.23_10.10.12.log", data_host="SEP01XVP-004", data_sourcetype="systemout"
06-23-2014 10:10:12.856 -0400 WARN DateParserVerbose - Failed to parse timestamp. Defaulting to timestamp of previous event (Mon Jun 23 01:15:17 2014). Context: source::/opt/IBM/WebSphereND64/AppServer/profiles/AppSrv01/logs/JVM2/SystemOut_14.06.23_10.10.12.log|host::SEP01XVP-004|systemout|430


lmyrefelt
Builder

If you can give us (paste here) an event from your logs (well, at least the start), it would help us to help you.

Also, read the docs; how to solve this problem is described there.


sushma7
Path Finder

Hi,

I have added the lines MAX_EVENTS=10000 and TRUNCATE=0 to props.conf (etc/system/local), assuming events would no longer break at the default limit since I set it to 10000, but this did not solve the issue. The WebSphere logs are not getting indexed, and when I check the logs I find the same error as above even after setting these values. Let me know if you need any more information from my side to investigate/advise on the issue.
