JMS Messaging Modular Input: "OutOfMemoryError: Java heap space" error after larger messages are read from the queue

bdahlb
Explorer

We receive an "OutOfMemoryError: Java heap space" error after larger messages are read from the queue. Once this happens, the affected queue is no longer read into Splunk. We have been able to reproduce the problem a few times by dropping a ~13MB message into WebSphere MQ.

Debug logs follow:

11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at com.splunk.modinput.jms.JMSModularInput$MessageReceiver.run(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at com.splunk.modinput.jms.JMSModularInput$MessageReceiver.streamMessageEvent(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at com.splunk.modinput.ModularInput.marshallObjectToXML(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at java.lang.StringBuilder.toString(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at java.lang.String.<init>(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at java.util.Arrays.copyOfRange(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py"" Exception in thread "Thread-3" java.lang.OutOfMemoryError: Java heap space
11-18-2015 16:33:05.792 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  INFO Streaming message to Splunk for indexing

We have three JMS inputs running on this instance; the other two continue to work after the first one fails. We have tried increasing the heap in the java_args list in jms.py by adding "-Xms256m","-Xmx256m", but this has not resolved the issue. Any help with this would be appreciated. A sketch of the change we made is below.
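
For reference, this is roughly what the relevant line in our jms.py looks like. The exact contents of the java_args list (classpath entries in particular) vary by add-on version, so treat this as an illustrative sketch rather than verbatim code from the add-on:

# jms.py -- arguments used to launch the JVM side of the modular input
# (illustrative sketch; classpath elided, list contents vary by version)
java_args = ["java",
             "-Xms256m",           # initial heap size
             "-Xmx256m",           # maximum heap size -- still exhausted by a ~13MB message
             "-classpath", "...",  # jars shipped with the add-on (elided)
             "com.splunk.modinput.jms.JMSModularInput"]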

1 Solution

Damien_Dallimor
Ultra Champion

Have you tried larger than 256MB?

bdahlb
Explorer

We had already increased this to 256MB, but apparently even that was too low for what we were doing; increasing it to 512MB resolved the issue. A sketch of the final setting is below.
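
As an illustrative sketch (again, not verbatim code from the add-on), the fix amounted to raising both heap flags in the same java_args list:

# jms.py -- raise the JVM heap ceiling so large messages can be
# marshalled to XML without exhausting heap space
# (illustrative sketch; classpath elided, list contents vary by version)
java_args = ["java",
             "-Xms512m",           # initial heap size
             "-Xmx512m",           # 512MB maximum heap was enough for our ~13MB messages
             "-classpath", "...",  # jars shipped with the add-on (elided)
             "com.splunk.modinput.jms.JMSModularInput"]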
