Getting Data In

How do we send some of our index data in an indexer cluster to a frozen bucket on another server?

hagjos43
Contributor

Here's our situation: We have a single site indexer cluster with a search head, two indexers, and a deployment server / cluster master. We upgraded from a single machine (about 6 or 7 months ago) to what we have now. The original machine is quickly running out of space and we are out of SAS bays to increase disk capacity.

My question: If I want to send some of our index data (let's say the IIS index for example) to a frozen bucket on another server, how do I go about configuring this in a clustered environment? I read this doc: http://docs.splunk.com/Documentation/Splunk/6.3.1511/Indexer/Bucketsandclusters (specifically the very bottom of that page) and it doesn't seem entirely clear. Since we have a cluster, should we have a single frozen bucket location across all of our indexers? Or must we have two frozen bucket locations?

Thanks,
Joe

jkat54
SplunkTrust

Joe,

First of all... frozen buckets can't be read until they are moved to the thawed directory.

So I'm not sure what you want to accomplish by moving a "frozen" bucket anywhere except long-term storage. For example, you can use a coldToFrozenScript to move frozen buckets to S3 storage in the cloud, or to inexpensive backup drives/arrays/tapes.
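
For reference, here's a minimal indexes.conf sketch of the two built-in freezing options. The "iis" stanza name and the archive mount point are just placeholders, not anything from your environment, and in a cluster you'd push this to the peers from the cluster master:

    # indexes.conf (distributed to all peers via the cluster master)
    [iis]
    # Option 1: have Splunk copy each bucket's raw data into an archive directory
    # (e.g. an NFS/CIFS mount of another server) instead of deleting it at freeze time.
    coldToFrozenDir = /mnt/frozen-archive/iis

    # Option 2 (alternative to Option 1): run a custom script against each bucket as it freezes.
    # coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/coldToFrozenExample.py"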

I think you want the buckets rolling off the old server to land on the thawed path of the new environment so they become searchable there. In that case you would write a cold-to-frozen script that copies the buckets to your new indexers and places them in thawed (a sketch follows below). Now, if your new indexers have a replication factor of 2+, then you only need to place each bucket into one thawed directory (I BELIEVE BUT AM UNSURE).
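
Purely as a sketch of that idea (the destination mount below is hypothetical, and Splunk hands the script the path of the bucket being frozen as its first argument):

    # thaw_copy.py -- hypothetical coldToFrozenScript that copies each freezing
    # bucket onto a mount of the new environment's thaweddb instead of just
    # letting it disappear. Splunk invokes it as: script <path_to_bucket_dir>
    import os
    import shutil
    import sys

    DESTINATION = "/mnt/new-indexer/iis/thaweddb"  # placeholder mount point

    if __name__ == "__main__":
        if len(sys.argv) != 2:
            sys.exit("usage: thaw_copy.py <bucket_dir>")
        bucket_dir = sys.argv[1].rstrip(os.sep)
        if not os.path.isdir(bucket_dir):
            sys.exit("not a directory: " + bucket_dir)
        dest = os.path.join(DESTINATION, os.path.basename(bucket_dir))
        # Copy the whole bucket before Splunk removes the original; a non-zero
        # exit (e.g. an exception here) tells Splunk not to delete it yet.
        shutil.copytree(bucket_dir, dest)

Whether a bucket dropped into thaweddb like this gets picked up on a restart or still needs a splunk rebuild pass is something I'd verify with the test-index dry run mentioned below.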

Perhaps you'd be happier just shutting down the old server, mounting its filesystem with the HOT/WARM/COLD buckets, and copying them into your new HOT/WARM/COLD locations. Make sure both Splunk instances are down when you do the copy so you can verify everything copied cleanly before you turn them on again. I also recommend creating a "fake" or "test" index in the old environment, indexing a few events, allowing replication to occur, and then attempting the process above with the test index only.
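
If it helps, the throwaway index for that dry run is just a stanza like this in the old environment (the name and $SPLUNK_DB-relative paths are placeholders):

    # indexes.conf -- disposable index for rehearsing the bucket copy
    [migration_test]
    homePath   = $SPLUNK_DB/migration_test/db
    coldPath   = $SPLUNK_DB/migration_test/colddb
    thawedPath = $SPLUNK_DB/migration_test/thaweddb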

dxu_splunk
Splunk Employee

Hey jkat,

Clustered frozen buckets behave a little differently.

If this is 6.3+ and both copies of a bucket are searchable, but only one indexer has frozen the bucket, the bucket will still respond to search queries (since it will still be around on the other indexer).

Once the bucket is frozen on both indexers, the bucket will be fully deleted and gone.

hagjos43
Contributor

Thanks for your response. Actually, we are okay with frozen buckets being unsearchable unless users request very old historical data (by very old I'm talking 3+ years here).

We won't be mounting any additional storage to these indexers; that's currently not an option. The only way for this to work in our environment is to utilize another server or storage location on the network (this is air-gapped and not connected to the outside world, so AWS or cloud storage is not an option).

I know, I'm being difficult 🙂

-Joe
