
How to Increase Message Ingestion Rate Using JMS Modular Input

flee
Path Finder

Hello,

We're running JMS Modular Input (jms_ta) v1.3.7 on two Heavy Forwarders on Linux VMs, with jms_ta running on each and no customization applied to it. We noticed the message ingestion rate from the two queues is 20-30 messages per second per queue, whether we run one jms_ta instance or two against the same queues. Running two jms_ta instances against the same queues did drain all the messages in roughly half the time of a single instance, but the WatchQ monitor showed the per-queue ingestion rate was unchanged.

We'd like to increase the ingestion rate as much as possible. Is there any parameter in jms_ta, the heavy forwarder, the OS, the indexer, or the pipeline that would increase the ingestion rate?

We tried changing the following parameters (see the config sketch after this list), but there was no change in the ingestion rate.

  1. set the maxKBps to 0
  2. set the maxQueueSize to 256MB
  3. enabled connection pooling in the .bindings file
  4. changed the batch message size from the default in the .bindings file
  5. increased the JMS Messaging Modular Input JVM heap to 256MB
  6. created more than one input definition with different names but for the same queue
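
For reference, here's roughly what changes 1 and 2 look like in our limits.conf and outputs.conf on the heavy forwarder (a simplified sketch of just those two settings, not the full files; the JVM heap and .bindings changes are made through the modular input setup page and the JNDI bindings file, so they're not shown here):

  # limits.conf on the heavy forwarder - remove the forwarder thruput cap
  [thruput]
  maxKBps = 0

  # outputs.conf on the heavy forwarder - enlarge the output queue
  [tcpout]
  maxQueueSize = 256MB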

Any suggestions?

Thank you.

1 Solution

Damien_Dallimor
Ultra Champion

To scale out throughput when polling messages from queues (not topics), the recommended approach is to scale horizontally: deploy (n) JMS Modular Inputs across (n) Splunk Forwarders (Heavy or Universal) and forward the data into an Indexer cluster.

Check out this preso, starting at slide 20:

http://www.slideshare.net/damiendallimore/splunk-conf-2014-getting-the-message

Adding more stanzas within a single JMS Modular Input instance will soon hit limits, because each stanza is just a thread in the same JVM (this addresses your points 5 and 6 above). That is why I recommend multiple JMS Modular Inputs across multiple forwarders.
Furthermore, a single JMS Modular Input instance will likely hit a bottleneck in the STDOUT/STDIN OS buffer between the modular input process (writing to STDOUT) and the Splunk forwarder instance (reading from STDIN), which may lead to blocking in the JMS Modular Input's queue poller logic.
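
As a rough sketch of what that scale-out looks like: deploy the same JMS input stanza to each forwarder, and point every forwarder at the same indexer tier via outputs.conf. The queue name, JNDI values, index, and server names below are placeholders, and the attribute names should be checked against the jms_ta README for your version:

  # inputs.conf - identical copy deployed to each of the (n) forwarders
  [jms://queue/MY.APP.QUEUE]
  jms_connection_factory_name = ConnectionFactory
  jndi_initialcontext_factory = com.sun.jndi.fscontext.RefFSContextFactory
  jndi_provider_url = file:///opt/jndi/bindings
  index = jms
  sourcetype = jms_message
  disabled = 0

  # outputs.conf - every forwarder sends to the same indexer cluster
  [tcpout]
  defaultGroup = indexers

  [tcpout:indexers]
  server = idx1.example.com:9997, idx2.example.com:9997

Because these are queues, each message is consumed by only one of the forwarders, so the pollers share the load rather than duplicating data.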

So, scale out horizontally 🙂


