All Apps and Add-ons

Splunk Shuttl and HDFS on different machines

nawneel
Communicator

Hello, I have a couple of issues regarding Shuttl with HDFS archiving. The situation is as follows.

I have a CDH3 cluster, and on another machine I have my Splunk indexer where I have installed the Shuttl app.
I have also copied the core jar and guava-r09-jarjar.jar to $SPLUNK_HOME/etc/apps/shuttl/lib as required.

First, is this the correct architecture for a Splunk Shuttl deployment?
Second, while configuring the XML files (archiver.xml, server.xml, splunk.xml), which configuration file should I use to point to my CDH3 hosts? In other words, how will Splunk/Shuttl know where to archive my Splunk data?

Thanks in Advance


Petter_Eriksson
Splunk Employee

If you're using 0.7.x+ with the new configuration, you should use shuttl/conf/backend/hdfs.properties to point to the NameNode of your Hadoop cluster. The NameNode and Shuttl will coordinate where the files go.
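For reference, a minimal hdfs.properties might look like the following. The property names, host, and port here are assumptions based on a typical Shuttl setup; verify them against the template file shipped with your Shuttl version.

```properties
# Hypothetical example - check the hdfs.properties template
# bundled with your Shuttl version for the exact property names.
hadoop.host = namenode.example.com   # host of your CDH3 NameNode
hadoop.port = 8020                   # NameNode IPC port (CDH3 default)
```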

Also, since you're using CDH3, you might have to replace the hadoop.jar as well. I'm not sure about this, but if you run into trouble, that might be the cause.

The latest version of Shuttl is 0.8.2 as of this writing; I highly recommend using it.


Petter_Eriksson
Splunk Employee

The annoying part is that you'll have to kill the Shuttl process on every Splunk restart until Splunk has a solution for killing scripted inputs, which hopefully will come sooner rather than later.


Petter_Eriksson
Splunk Employee

The reason you're getting "BindException: Address already in use" is that Splunk is not killing scripted inputs correctly (Shuttl is configured to be a scripted input). This is a known issue, and you can read more about it here: http://splunk-base.splunk.com/answers/28733/scripted-input-without-a-shell

However, to fix the error you can kill Shuttl and Splunk will restart it:
ps -ef | grep [S]huttl  # get the PID of Shuttl (the [S] trick keeps grep from matching itself)
kill <process id of shuttl>  # kill the Shuttl process

It's safe to kill the Shuttl process. Data won't be lost.
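The two steps above can be sketched as a small script. This is just a convenience wrapper, not part of Shuttl itself: it uses pgrep instead of the ps/grep pipeline, and the "Shuttl" pattern is an assumption about how the process appears in your process list.

```shell
#!/bin/sh
# Sketch of the kill-and-let-Splunk-respawn workaround.
# Assumes the Shuttl process is identifiable by "Shuttl" on its
# command line; adjust the pattern if yours differs.

find_pid() {
    # -f matches against the full command line, not just the
    # process name; pgrep excludes its own process automatically.
    pgrep -f "$1" | head -n 1
}

pid=$(find_pid Shuttl)
if [ -n "$pid" ]; then
    kill "$pid"   # Splunk will restart the scripted input
fi
```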

  • Petter

nawneel
Communicator

I am currently using Shuttl version 0.8.1, and I am also using shuttl/conf/backend/hdfs.properties to point to the NameNode of my Hadoop cluster.
I have also copied the core jar and the Guava r09 jar to my shuttl/lib folder.

I am not able to see any details on the dashboard.
When I check my Shuttl log, I get this error:

ERROR ShuttlServer: Error during startup java.net.BindException: Address already in use
