Installation

Moving DB files during Migration

strive
Influencer

Hi,

We have a setup which is running on Splunk 4.3.1. We have a new setup running on Splunk 5.0.4.

We have diverted all our traffic to the new setup. Now we want to move all the warm DBs from the old setup to the new one. To complete this exercise successfully, we are considering the following approaches.

Approach 1:

  1. Take all the db_* directories from our 10 index directories: IDX1/db_*, IDX2/db_*, ..., IDX10/db_*.

  2. Create a .tgz file.

  3. In each index, note the highest bucket ID. Say, for example, it is 10. Then add 1 to it, so the number is 11.

  4. Untar the index DBs into the respective directories. While doing this, rename the directories in the new setup as db_Start_End_n+11, where n is the bucket ID. In this step, also rename the hot bucket directory.

Question: When we rename the directories, will the manifest file be updated automatically with the latest bucket ID? If not, what should we do?
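The rename in step 4 of Approach 1 can be sketched roughly as below. This assumes the standard warm-bucket naming convention db_&lt;newestTime&gt;_&lt;oldestTime&gt;_&lt;localId&gt;; the paths and the offset are simulated stand-ins, not real Splunk directories, so adjust them to your environment:

```shell
#!/bin/sh
# Sketch of Approach 1's rename step. OFFSET is the highest bucket ID
# already present on the new setup, plus one (11 in the example above).
set -e

OFFSET=11
SRC=$(mktemp -d)/old_idx/db    # stand-in for the old index's db directory
DST=$(mktemp -d)/new_idx/db    # stand-in for the new index's db directory
mkdir -p "$SRC" "$DST"

# Simulate two warm buckets untarred from the old setup.
mkdir -p "$SRC/db_1388534400_1388448000_0" "$SRC/db_1388620800_1388534400_1"

for b in "$SRC"/db_*_*_*; do
    name=$(basename "$b")
    id=${name##*_}        # trailing local bucket ID
    prefix=${name%_*}     # db_<newestTime>_<oldestTime>
    mv "$b" "$DST/${prefix}_$((id + OFFSET))"
done

ls "$DST"
```

After the loop, the moved buckets carry IDs 11 and 12, clear of the IDs already in use on the new setup.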

Approach 2:

  1. Take all the db_* directories from our 10 index directories: IDX1/db_*, IDX2/db_*, ..., IDX10/db_*.

  2. Create a .tgz file. While creating the .tgz file, rename the directories from the old setup to increment the bucket ID to n+99, so the warm buckets will be db_Start_End_n+99. For example, db_Start_End_0 becomes db_Start_End_99, db_Start_End_1 becomes db_Start_End_100, and so on.

In this case, will the manifest file be updated automatically to reflect the latest bucket ID (say, 201) when the bucket ID reaches 98 on the new setup?
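Doing the rename at archive-creation time, as Approach 2 describes, is awkward with GNU tar alone, since its --transform option takes a sed expression and cannot do the +99 arithmetic on the ID. A sketch under that assumption is to stage renamed copies first and then archive the staging directory (paths and bucket names below are simulated, not real Splunk directories):

```shell
#!/bin/sh
# Sketch of Approach 2: stage copies with the ID offset applied,
# then build the .tgz from the staging directory.
set -e

OFFSET=99
SRC=$(mktemp -d)      # stand-in for an old index's db directory
STAGE=$(mktemp -d)    # staging area for the renamed copies
mkdir -p "$SRC/db_1388534400_1388448000_0" "$SRC/db_1388620800_1388534400_1"

for b in "$SRC"/db_*_*_*; do
    name=$(basename "$b")
    id=${name##*_}
    prefix=${name%_*}
    cp -r "$b" "$STAGE/${prefix}_$((id + OFFSET))"
done

tar -czf "$STAGE.tgz" -C "$STAGE" .
tar -tzf "$STAGE.tgz" | sort
```

The archive listing then shows db_..._99 and db_..._100, matching the example in step 2.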

We are strongly inclined towards Approach 1. Based on what we have read in Splunk blogs and on Splunkbase, Approach 1 should work.

Please suggest the better and more workable approach.

Thanks

Strive

1 Solution

lukejadamec
Super Champion

I just upgraded a server with a complete restore of data (db files). You're making it sound way too complicated.

1) Never try to move hot buckets. Stop Splunk and all of the hot buckets will roll to warm buckets.

2) Just copy the warm and cold db folders from the old server index directories to the new server index directories. It is that simple.

3) If you have duplicate "unique ID numbers" then just change the ones that are duplicates. You may need to rebuild those buckets with new IDs, but there should not be that many.
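For step 3, the duplicate check can be sketched before copying anything, assuming the usual db_&lt;newestTime&gt;_&lt;oldestTime&gt;_&lt;localId&gt; bucket naming. The directories and bucket names below are simulated stand-ins, not real Splunk paths:

```shell
#!/bin/bash
# Sketch: list local bucket IDs that exist in both the old and the new
# index directory, i.e. the ones that would collide after a plain copy.
set -e

OLD=$(mktemp -d)
NEW=$(mktemp -d)
mkdir -p "$OLD/db_100_50_3" "$OLD/db_200_100_4" "$NEW/db_900_800_4"

bucket_ids() {   # print the trailing local ID of each bucket dir, sorted
    for b in "$1"/db_*_*_*; do
        basename "$b" | awk -F_ '{print $NF}'
    done | sort -u
}

OLD_IDS=$(mktemp); NEW_IDS=$(mktemp)
bucket_ids "$OLD" > "$OLD_IDS"
bucket_ids "$NEW" > "$NEW_IDS"

# comm -12 keeps only lines common to both sorted lists.
DUPS=$(comm -12 "$OLD_IDS" "$NEW_IDS")
echo "duplicate bucket IDs: $DUPS"
```

Only the IDs this prints need a rename (or a rebuild) before the copy; everything else can be moved as-is.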


strive
Influencer

Yes, we have started collecting data on new splunk server.


lukejadamec
Super Champion

Yes, change the names on the new server, but ideally there would be no new db folders on the new server, because the cutover would be done before the new server starts collecting data.
In your case that might not be possible. If you must change the bucket IDs from the old server, be sure to use a large enough offset.
The bucket manifest will update automatically. If for some reason you end up with a duplicate ID number, an error will show up in the splunkd log, and the index will go offline until the problem is corrected.
Lastly, you can move buckets while Splunk is running.


apfender_splunk
Splunk Employee

./splunk stop does not roll buckets.
./splunk start or ./splunk restart does.


strive
Influencer

We also have a few customers whose daily log volume ranges from 25 GB to 250 GB. Our new customers will have daily log volumes greater than 250 GB and up to 600 GB per day. We want to design a solution that works for everyone.


strive
Influencer

Hi, thanks for your response. I do not know your daily log volume; ours is 100 GB. It takes a considerable amount of time to move the index db files from the old server to the new server, so I assume there will definitely be more buckets created on the new server in the meantime.

For your point #3, you mean changing the names on the new server, am I right?


lukejadamec
Super Champion

Have you started collecting data on the new splunk server?
