
Props.conf not being applied when using JMS Modular Input

lyndac
Contributor

I am using the Splunk JMS Modular Input to read data off a JMS queue. I've set up the modular input and data is being read off the queue as expected. However, I am having trouble getting the correct timestamp applied to the event at index time, and I am not sure whether it is related to the JMS Modular Input. If I put the event in a file and add it via Splunk Web, everything is perfect (the correct field is used as the timestamp, all the fields are extracted, etc.). But when I index the live data from the JMS queue, my props.conf does not seem to be applied: the event timestamp is the current time and none of the fields are extracted. This is a sample of one of the events read off the queue:

    [{ "host":"myhost", "observer":"bb","timestamp":145392582610,"group":{"units":"bps","location":"MD"},"start":1453925400000, "stop":1453925700000, "user":"jdoe"}]

I have set up a message handler because I may eventually want to do something with the message before it is sent (and I wanted to see how it works). My code is:

package org.splunkintegration.jms;

import java.util.Map;
import javax.jms.Message;

// AbstractMessageHandler and MessageReceiver come from the JMS Modular Input jar.
public class MyMessageHandler extends AbstractMessageHandler {

  @Override
  public void handleMessage(Message message, MessageReceiver context) throws Exception {
      String event = getMessageBody(message);
      transportMessage(event, String.valueOf(System.currentTimeMillis()), "");
  }

  @Override
  public void setParams(Map<String, String> params) {}
}

props.conf:

[jms-zebra]
INDEXED_EXTRACTIONS = json
KV_MODE = none
TIMESTAMP_FIELDS = stop
MAX_TIMESTAMP_LOOKAHEAD = 20
NO_BINARY_CHECK = true
MAX_EVENTS = 1000
TRUNCATE = 100000
category = Structured
disabled = false
pulldown_type = true

inputs.conf:

    [jms://queue/:Consumer.jms.zebra]
    browse_frequency = 30
    browse_mode = all
    browse_queue_only = 0
    destination_pass = *******
    destination_user = system
    durable = 0
    index = zeb
    index_message_header = 0
    index_message_properties = 0
    init_mode = local
    local_init_mode_resource_factory_impl = org.splunkintegration.jms.LocalActiveMQJMSResourceFactory
    local_init_mode_resource_factory_params = serverURL=tcp://xxx.xxx.xx.xx:61616,userName=system,password=*******
    message_handler_impl = org.splunkintegration.jms.MyMessageHandler
    message_selector = type=summary
    strip_newlines = 1
    sourcetype = jms-zebra

Murali2888
Communicator

Hi lyndac,

I suspect the problem lies with your INDEXED_EXTRACTIONS and KV_MODE settings.
As per the documentation:

INDEXED_EXTRACTIONS = < CSV|W3C|TSV|PSV|JSON >
* Tells Splunk the type of file and the extraction and/or parsing method
  Splunk should use on the file.

KV_MODE = [none|auto|auto_escaped|multi|json|xml]
* Used for search-time field extractions only.
* Specifies the field/value extraction mode for the data.

Per your configuration: when you upload the data as a file, the timestamp is extracted because INDEXED_EXTRACTIONS applies. But when you read off the JMS queue, KV_MODE is set to none, so the JSON fields are not extracted, which in turn nullifies your TIMESTAMP_FIELDS setting.

I would suggest two approaches.

  1. Can you try KV_MODE=json and see if the timestamp is extracted?
  2. You can use the configuration below, which we use to read SOAP payloads from JMS queues:

    TIME_PREFIX = "stop":
    TIME_FORMAT = %s
    TRUNCATE = 0

Please do let us know if this has resolved your problem.



lyndac
Contributor

I tried just changing KV_MODE = json, and that did get the fields extracted at search time, but it had no effect on the timestamp.
So then I tried your second suggestion, escaping the quotes and colon in the regex and changing TIME_FORMAT to account for the extra digits (13 instead of 10), and it's working now!!!

My props.conf now looks like this:

[jms-zebra]
INDEXED_EXTRACTIONS = json
KV_MODE = json
TIME_PREFIX = \"stop\"\:
TIME_FORMAT = %s%3N
TRUNCATE = 0
MAX_TIMESTAMP_LOOKAHEAD = 20
NO_BINARY_CHECK = true
MAX_EVENTS = 1000
category = Structured
disabled = false
pulldown_type = true

(I also dropped my old TRUNCATE = 100000 line, since it conflicted with TRUNCATE = 0 in the same stanza.)
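For anyone hitting the same digit-count trap: here is a quick standalone check (plain java.time, nothing Splunk-specific) of why %s alone misreads a 13-digit epoch value from the sample event, while %s%3N reads it correctly:

```java
import java.time.Instant;

public class EpochCheck {
    public static void main(String[] args) {
        // The "stop" field carries 13 digits: milliseconds since the Unix epoch.
        // TIME_FORMAT = %s%3N tells Splunk to read epoch seconds plus three
        // subsecond digits, which is exactly this interpretation:
        Instant fromMillis = Instant.ofEpochMilli(1453925700000L);
        System.out.println(fromMillis); // prints 2016-01-27T20:15:00Z

        // Interpreting the same 13-digit value with plain %s (epoch seconds)
        // lands tens of thousands of years in the future, far outside any
        // sane timestamp window, so Splunk falls back to the current time.
        Instant fromSeconds = Instant.ofEpochSecond(1453925700000L);
        System.out.println(fromSeconds);
    }
}
```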

Thank you for all the help! This was driving me NUTS!


Murali2888
Communicator

I am glad that it resolved your issue.


Jeremiah
Motivator

Indexed extractions aren't available for modular inputs. That's why your settings worked for a file, but not the JMS input. You'll have to rely on search time json field extraction. This is also why your timestamp isn't parsing; since "stop" isn't seen as a field, the timestamp isn't extracted. Just use the normal timestamp extraction settings like TIME_PREFIX, and it should work fine.
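To make that concrete, here is a minimal props.conf sketch along those lines: search-time JSON extraction plus the standard timestamp settings, with no INDEXED_EXTRACTIONS. The stanza name is taken from the original post; treat this as a starting point, not a verified config.

```ini
[jms-zebra]
# Search-time JSON field extraction (indexed extractions do not
# apply to modular inputs).
KV_MODE = json
# Standard timestamp extraction: find the literal "stop": key,
# then parse 13 digits as epoch seconds plus milliseconds.
TIME_PREFIX = \"stop\"\:
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 20
TRUNCATE = 0
```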
