All Apps and Add-ons

Hadoop Connect to MapR M3

melonman
Motivator

Hi

I am trying to use Hadoop Connect with a MapR M3 environment, but I can't find what value I need to put in the HDFS URI field.

I tried several ip:port combinations and maprfs:///, but I got this error:

Unable to connect to Hadoop cluster 'hdfs://xxx.xxx' with principal 'None': Could not find configuration for cluster 'None', please configure your cluster first..

Invalid stanza name

Is there any limitation or misconfiguration for MapR clusters?

Thanks,

Solution

cwl
Contributor

As mentioned in Hadoop Connect's release notes, if you are using MapR, you have to use NFS-mounted MapR filesystems via the local export option.
http://docs.splunk.com/Documentation/HadoopConnect/1.2/DeployHadoopConnect/Knownissues#Known_issues

Splunk Hadoop Connect currently does not offer support for MapR's HDFS implementation, but does support NFS mounted MapR filesystems via the local export option. (SPL-58420)
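In practice, the workaround is to mount MapR-FS through the MapR NFS gateway and point Hadoop Connect's local export at the mounted path instead of an HDFS URI. A minimal sketch, where the gateway host `mapr-gw` and the cluster name `my.cluster.com` are placeholders for your environment:

```shell
# Create a mount point and mount MapR-FS via the MapR NFS gateway.
# "mapr-gw" and "my.cluster.com" are placeholder names for this sketch.
sudo mkdir -p /mapr
sudo mount -t nfs -o nolock,hard mapr-gw:/mapr /mapr

# Verify the cluster filesystem is visible through the mount.
ls /mapr/my.cluster.com

# In Hadoop Connect, configure the export as a local path under
# /mapr/my.cluster.com/... rather than an hdfs:// or maprfs:// URI.
```

The exact mount options and export path depend on your MapR NFS gateway setup; consult your cluster's documentation for the correct host and cluster name.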

