Splunk Search

Any way to replicate search artifacts and indexes between 2 all-in-one Splunk instances without a cluster?

danielwan
Explorer

I have 2 separate all-in-one Splunk boxes running at different sites for DR purposes.
Is there any way to replicate the indexes and search artifacts between them?
Clustering is my last resort because a cluster seems to require a dedicated master node, and the search head and cluster peer must run on different hosts, which means at least 5 nodes in my situation: 1 master, 2 peer nodes (one at each site), and 2 search heads (one at each site).

0 Karma
1 Solution

mattymo
Splunk Employee

rsync or a scheduled task to do the copying should do the trick.

Are you actively indexing on both instances? Are users accessing both?

In a purely active/standby situation where the DR node is not used except in a hard cutover, this can work pretty easily. If both nodes are being actively used, it won't be so easy, as you need to avoid bucket collisions.

Check this blog and the links it contains, which cover what it takes to move buckets around safely:

https://www.splunk.com/blog/2012/02/21/restoring-an-index.html

If you truly expect proper high availability, with no downtime if a site fails, clustering is a much better option.

- MattyMo


0 Karma

woodcock
Esteemed Legend

Do you mean DR or HA? They really are different things. There is no HA solution for search heads. For DR, just take backups of $SPLUNK_HOME/etc and $SPLUNK_HOME/var (especially dispatch).
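That backup can be as simple as a periodic tarball of the two directories. A minimal sketch, where `SPLUNK_HOME` is a temp directory standing in for a real install and the backup path and schedule are assumptions:

```shell
#!/bin/sh
# Sketch of a DR backup of etc and var. SPLUNK_HOME is a placeholder;
# on a real box it would be e.g. /opt/splunk, and this would run from cron.
set -e
SPLUNK_HOME=$(mktemp -d)
mkdir -p "$SPLUNK_HOME/etc/system/local" "$SPLUNK_HOME/var/run/splunk/dispatch"
echo "[general]" > "$SPLUNK_HOME/etc/system/local/server.conf"

# Archive etc and var together; -C keeps paths relative to SPLUNK_HOME,
# so a restore is just: tar -xzf backup.tar.gz -C $SPLUNK_HOME
BACKUP="$(mktemp -d)/splunk-backup-$(date +%Y%m%d).tar.gz"
tar -czf "$BACKUP" -C "$SPLUNK_HOME" etc var
```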

0 Karma

danielwan
Explorer

I mean DR, in active/standby mode.
I am also evaluating clustering, but it needs quite a few hosts.

0 Karma


mattymo
Splunk Employee
Splunk Employee

In light of just DR: rsync your $SPLUNK_HOME/etc and $SPLUNK_HOME/var to the DR box, and plan a way to swing the forwarded traffic over. I would configure rsync to skip replicating the internal Splunk indexes to avoid any issues with duplication, or keep the DR Splunk stopped so it's not creating internal buckets at all.

That's how I backed up my standalone VM until we grew to distributed.

Make sure you practice restoring! 😉

- MattyMo
0 Karma