All Apps and Add-ons

JMS Messaging Modular Input: "OutOfMemoryError: Java heap space" error after larger messages are read from the queue

bdahlb
Explorer

We receive an "OutOfMemoryError: Java heap space" error after larger messages are read from the queue, and the affected queue is then no longer read into Splunk. We have been able to reproduce this a few times by dropping a ~13MB message into WebSphere MQ.

Debug logs follow:

11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at com.splunk.modinput.jms.JMSModularInput$MessageReceiver.run(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at com.splunk.modinput.jms.JMSModularInput$MessageReceiver.streamMessageEvent(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at com.splunk.modinput.ModularInput.marshallObjectToXML(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at java.lang.StringBuilder.toString(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at java.lang.String.<init>(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at java.util.Arrays.copyOfRange(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py"" Exception in thread "Thread-3" java.lang.OutOfMemoryError: Java heap space
11-18-2015 16:33:05.792 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  INFO Streaming message to Splunk for indexing

We have 3 JMS inputs running on this instance; the other 2 continue to work after the first one fails. We have tried increasing the JVM heap via the java_args setting in jms.py, using "-Xms256m","-Xmx256m", but this has not resolved the issue. Any help with this would be appreciated.
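
For reference, the heap flags go into the JVM argument list that jms.py builds before launching the modular input. A rough sketch of what we changed (the classpath and other entries are abridged placeholders, not the exact contents of the shipped jms.py):

    # jms.py (abridged sketch) -- JVM arguments used to launch the modular input
    java_args = [
        "java",
        "-Xms256m",   # initial heap size we set
        "-Xmx256m",   # maximum heap size we set; still not enough for our ~13MB messages
        "-classpath", "<jars bundled with the jms_ta app>",   # placeholder, not the real classpath
        "com.splunk.modinput.jms.JMSModularInput",            # main class, per the stack trace above
    ]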

0 Karma
1 Solution

Damien_Dallimor
Ultra Champion

Have you tried larger than 256MB?


0 Karma

bdahlb
Explorer

We had already upped this to 256MB, but apparently even that was too low for our message sizes; increasing it to 512MB resolved the issue.
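
For completeness, the fix was just swapping the two heap flags in that same java_args list in jms.py (everything else left unchanged):

    # jms.py (abridged sketch) -- the two entries we changed
    "-Xms512m",   # was "-Xms256m"
    "-Xmx512m",   # was "-Xmx256m"; 512MB gave enough headroom for our ~13MB messages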

0 Karma