I'm looking at using Shuttl combined with Amazon S3. I'm unfamiliar with Hadoop but have successfully installed and tested it.
Firstly, although the Shuttl documentation says that it works with Hadoop and S3 storage, there don't seem to be instructions on how to set this up.
Secondly, if I'm using S3 as the storage for my frozen buckets, where does Hadoop run? Does it run on-site with my Splunk installation, or as an Amazon EC2 instance?
The configuration has changed a little since you asked the question.
We've setup a page with examples and detailed documentation here:
https://github.com/splunk/splunk-shuttl/tree/master/examples
A belated response that I'll post here for others on Splunkbase.
To summarize:
If you use S3 as your storage, setting up Hadoop to consume the data is up to you. You can run Hadoop in EC2 or use EMR. (Keep in mind that you need to archive in CSV format; only Splunk can understand the Splunk bucket format.)
To set up Shuttl for Amazon S3, you provide a URI indicating the backend. Inside $SPLUNK_HOME/etc/apps/shuttl/conf you'll find archiver.xml. In that file, edit the archiverRootURI to have a value of the form: s3://
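As a rough sketch, the edit might look like the following. The bucket name and path here are hypothetical placeholders, and the surrounding elements of archiver.xml are omitted:

```xml
<!-- In $SPLUNK_HOME/etc/apps/shuttl/conf/archiver.xml -->
<!-- "my-bucket" and "/splunk-archives" are placeholders; substitute your own values -->
<archiverRootURI>s3://my-bucket/splunk-archives</archiverRootURI>
```

The rest of the file can stay as shipped; only the root URI needs to point at your S3 location.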
In addition, although "s3n" will also work with Shuttl, it comes with a file-size limitation. In general, it's safest to use "s3".
See the Quickstart guide for more information: https://github.com/splunk/splunk-shuttl/wiki/Quickstart-Guide