Hi.
I am currently using Hunk to connect to an Amazon S3 bucket for my virtual index. The end of the path to the data in HDFS changes daily to the current date in UTC, e.g. 2015/05/04. Is there a way in the UI to have this path change automatically based on the current UTC date? If not, can I use the REST API to do this? I've tried the API a bit, but the documentation does not appear to be valid; I'm getting a lot of "your site has moved" or "does not exist" errors. Any help would be greatly appreciated so I don't have to change this path manually on a daily basis. Thanks.
Do you want Hunk to look only at the files that match the search time range? You can probably use a regex for the earliest and latest time specification; Hunk will then search only the correct files, depending on the time range you search. Please see the following document for more details.
http://docs.splunk.com/Documentation/Hunk/latest/Hunk/SetupvirtualindexesforarchivedHadoopfiles
In your case, you can probably have something like the following.
[hadoop]
vix.provider = <provider_name>
vix.input.1.path = har:///<path_to_archive_file>/<archive_file>.har/...
vix.input.1.accept = \.gz$
vix.input.1.et.regex = /home/myindex/data/(\d+)/(\d+)/(\d+)/
vix.input.1.et.format = yyyyMMdd
vix.input.1.et.offset = 0
vix.input.1.lt.regex = /home/myindex/data/(\d+)/(\d+)/(\d+)/
vix.input.1.lt.format = yyyyMMdd
vix.input.1.lt.offset = 86400
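To illustrate how those et/lt settings work together, here is a small Python sketch (an approximation, not Hunk internals): the regex capture groups are concatenated, parsed with the format string, and the offsets (in seconds) are added to produce each file's earliest/latest time window. The path and function name are hypothetical.

```python
import re
from datetime import datetime, timedelta, timezone

# Mirrors the vix.input.1.et/lt settings from the stanza above.
ET_REGEX = r"/home/myindex/data/(\d+)/(\d+)/(\d+)/"
TIME_FORMAT = "%Y%m%d"   # Python equivalent of the yyyyMMdd format
ET_OFFSET = 0            # seconds added to the extracted earliest time
LT_OFFSET = 86400        # seconds added to the extracted latest time (one day)

def extract_time_range(path):
    """Concatenate the regex capture groups, parse with the format,
    then apply the offsets to get the file's assumed time window."""
    m = re.search(ET_REGEX, path)
    if m is None:
        return None  # path doesn't match; Hunk can't time-bound this file
    stamp = "".join(m.groups())  # e.g. "20150504"
    base = datetime.strptime(stamp, TIME_FORMAT).replace(tzinfo=timezone.utc)
    return base + timedelta(seconds=ET_OFFSET), base + timedelta(seconds=LT_OFFSET)

et, lt = extract_time_range("/home/myindex/data/2015/05/04/events.gz")
print(et.isoformat())  # 2015-05-04T00:00:00+00:00
print(lt.isoformat())  # 2015-05-05T00:00:00+00:00
```

A search whose time range falls entirely outside that window lets Hunk skip the file without reading it.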
I believe I have that. Are you saying that if my path is something like:
s3n://mysite-elblogs/MyLogs
instead of
s3n://mysite-elblogs/MyLogs/2015/05/05
it will automatically pull in only the last day's worth of data? I don't want it always pulling in the full month's worth of data and then filtering; that seems like a waste. I'd rather it grab only the last day of data instead of reading everything and then filtering.