I've managed to set up Hunk and run streaming searches without any problem. However, when I try to run a reporting search, which starts a MapReduce job, the search fails with an error message like this:
JobStartException - Failed to start MapReduce job. Please consult search.log for more information. Message: [Failed to start MapReduce job, name=....] and [Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.]
Any ideas what could be happening?
We've seen this error come up a number of times, and the error message (thrown by the Hadoop libraries) is misleading. In our observations, the root cause has been a version mismatch between the Hadoop client libraries on the Hunk server and those on the Hadoop cluster. Therefore, the first thing you want to do is verify that the versions are exactly the same.
Just a reminder: Hadoop is very sensitive when it comes to library versions, so we strongly recommend using the exact same version on the Hunk server as on the cluster.
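A quick way to check is to run `hadoop version` both on the Hunk server (using the client libraries the provider points at) and on a cluster node, then compare the reported versions. As a minimal sketch, the version strings below are hypothetical placeholders; in practice you would capture them from the `hadoop version` output on each host:

```shell
# Hypothetical version strings; in practice capture them with e.g.:
#   On the Hunk server:  $HADOOP_HOME/bin/hadoop version
#   On a cluster node:   hadoop version
local_ver="2.6.0"    # version of the client libraries on the Hunk server
cluster_ver="2.7.1"  # version running on the Hadoop cluster

if [ "$local_ver" = "$cluster_ver" ]; then
  echo "MATCH: Hunk and cluster both run Hadoop $local_ver"
else
  echo "MISMATCH: Hunk=$local_ver cluster=$cluster_ver"
fi
```

If the two differ, install the cluster's exact Hadoop version on the Hunk server and point the provider's Hadoop home at it.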