Splunk Search

Why am I unable to search my JSON log file without using spath, even after configuring my props.conf?

rh0dium
Explorer

Hi Folks,

I have the following log file information. With my props.conf, Splunk consumes it and it displays fine visually, but I can't search on any of the elements without using spath. I would like to be able to search on any of the sub-fields natively. There are two problems; the first is that this search returns nothing:

host="analytics" severity="8" | chart count(program)

Current Props:

[json_timestamp]
BREAK_ONLY_BEFORE_DATE = true
TIMESTAMP_FIELDS = timestamp
INDEXED_EXTRACTIONS = json
KV_MODE = none

Sample Log Info:

{"timestamp":"2016-10-20T21:58:31.428263+00:00", "host":"analytics", "host_ip":"127.0.0.1", "type":"syslog", "class":"SPLUNK_SERVER", "version":"1", "app":"axis", "severity_label":"info", "severity":"6", "facility_label":"user", "facility":"1", "program":"yum", "pid":"16284", "syslog_tag":"yum[16284]:", "message":" Updated: tzdata-java-2016g-2.64.amzn1.noarch"}
{"timestamp":"2016-10-20T21:58:31.983626+00:00", "host":"analytics", "host_ip":"127.0.0.1", "type":"syslog", "class":"SPLUNK_SERVER", "version":"1", "app":"axis", "severity_label":"info", "severity":"6", "facility_label":"user", "facility":"1", "program":"yum", "pid":"16284", "syslog_tag":"yum[16284]:", "message":" Updated: tzdata-2016g-2.64.amzn1.noarch"}
{"timestamp":"2016-10-20T21:58:32.038861+00:00", "host":"analytics", "host_ip":"127.0.0.1", "type":"syslog", "class":"SPLUNK_SERVER", "version":"1", "app":"axis", "severity_label":"info", "severity":"6", "facility_label":"user", "facility":"1", "program":"yum", "pid":"16284", "syslog_tag":"yum[16284]:", "message":" Updated: kernel-tools-4.4.23-31.54.amzn1.x86_64"}
{"timestamp":"2016-10-20T21:58:32.206431+00:00", "host":"analytics", "host_ip":"127.0.0.1", "type":"syslog", "class":"SPLUNK_SERVER", "version":"1", "app":"axis", "severity_label":"info", "severity":"6", "facility_label":"user", "facility":"1", "program":"yum", "pid":"16284", "syslog_tag":"yum[16284]:", "message":" Updated: aws-cfn-bootstrap-1.4-13.8.amzn1.noarch"}
{"timestamp":"2016-10-20T22:08:31.979328+00:00", "host":"analytics", "host_ip":"127.0.0.1", "type":"syslog", "class":"SPLUNK_SERVER", "version":"1", "app":"axis", "severity_label":"notice", "severity":"5", "facility_label":"auth", "facility":"4", "program":"su", "pid":"-", "syslog_tag":"su:", "message":" (to root) ec2-user on pts/0"}
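
As a stopgap while the index-time extractions aren't working, the fields can be pulled out at search time with spath (the sourcetype name here is assumed to match the stanza below; adjust as needed):

```
sourcetype=json_timestamp host="analytics"
| spath
| search severity=6
| chart count by program
```

spath parses the raw JSON of each event at search time, so it works regardless of whether INDEXED_EXTRACTIONS was applied at ingest.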
1 Solution

TStrauch
Communicator

Hi,

I imported your test data with your given sourcetype. It works quite fine. All fields are extracted and there are no problems with searching.

Do you have a distributed environment? Make sure the input stanzas on your forwarders are configured with the above sourcetype, and make sure the props.conf is available on your indexers.

kind regards


rh0dium
Explorer

Hey, thanks!

I am running a distributed environment.

Are you sure it's parsing? Visually it looks like it is, but it's not actually parsed. The host field is extracted, but not the contents of the data. From the screenshot it looks like I should be able to search away, but I can't.

Full Page - Looks Parsable

So if I try filtering on one of the elements, no dice.

Unable to parse

I wonder whether the parsing happens on the forwarder or not. @DMohn suggests that I need to put this props.conf on my forwarders?


DMohn
Motivator

Hi @rh0dium,

No, the parsing does NOT happen on the forwarder, but on the indexer. Have a look in the Splunk wiki for a more detailed view: https://wiki.splunk.com/Community:HowIndexingWorks

I tried the same as @TStrauch: I placed a file with the stated contents on a forwarder, and created a sourcetype for the data with a props.conf entry on the indexer!
http://e54i.imgup.net/rh0_0d3ed.PNG

If done like this, the fields get extracted without any problems!
http://i18i.imgup.net/rh0_1a952.PNG

Please make sure you have the following settings done:

On the forwarder:
- Create a sourcetype for your forwarded data (with inputs.conf)

On the indexer:
- Create the same sourcetype with the extractions (with props.conf)
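
As a sketch, the two pieces above might look like this (the monitor path and file name are assumptions for illustration; the props.conf stanza is the one from the question):

```
# inputs.conf on the forwarder -- monitor path is an assumption
[monitor:///var/log/analytics/syslog.json]
sourcetype = json_timestamp

# props.conf on the indexer -- same stanza as in the question
[json_timestamp]
BREAK_ONLY_BEFORE_DATE = true
TIMESTAMP_FIELDS = timestamp
INDEXED_EXTRACTIONS = json
KV_MODE = none
```

The sourcetype name in inputs.conf must match the props.conf stanza name exactly, or the extractions will never apply.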

Hope this helps!

rh0dium
Explorer

You rock - it's working now.


Richfez
SplunkTrust

rh0dium,

Glad that worked! Could you mark this answer as "Accepted" to help all those who stumble across it later?

Thanks!
Rich


lquinn
Contributor

Are you using a forwarder? Where does the above props.conf reside?

rh0dium
Explorer

Yes I am! This is on the master. Why?


DMohn
Motivator

Please see the answer by @TStrauch below ... If you put the props.conf only on your master (whichever instance this means... license master? cluster master?), the INDEXED_EXTRACTIONS won't work, as they need to be on the indexer, as the name already suggests!
