Getting Data In

universal forwarders in cluster failover

aaronkorn
Splunk Employee

Hello,

We have two Linux syslog servers set up in a cluster receiving syslog feeds. When the primary server goes down, syslogging fails over to the secondary server; the forwarder agent starts automatically on failover and stops again when service fails back to the primary. One of the issues we are having is that after failback, the primary ingests duplicate data. What is the best way to handle universal forwarders in a cluster so that duplicates are eliminated and the agent continues indexing where it left off?

1 Solution

sowings
Splunk Employee

The "bookmark" for where a Splunk forwarder left off ingesting monitored files is in the fishbucket "index". If you rsync this folder ($SPLUNK_DB/fishbucket) between the forwarders, they should fail over / fail back gracefully, without too much duplication.
