Getting Data In

universal forwarders in cluster failover

aaronkorn
Splunk Employee

Hello,

We have two Linux syslog servers set up in a cluster receiving syslog feeds. When the primary server goes down, syslog traffic fails over to the secondary server; the forwarder agent starts automatically on failover and stops when traffic fails back to the primary. The issue is that after failing back to the primary, it ingests duplicate data. What is the best way to handle universal forwarders in a cluster so that duplicates are eliminated and the agent continues indexing where it left off?

1 Solution

sowings
Splunk Employee

The "bookmark" for where a Splunk forwarder left off ingesting monitored files is in the fishbucket "index". If you rsync this folder ($SPLUNK_DB/fishbucket) between the forwarders, they should fail over / fail back gracefully, without too much duplication.

