
Splunk Universal Forwarder Data Recovery Following a Network Issue

splunkmasterfle
Path Finder

Hi,

Just wondering if anyone has encountered the following issue.

I want to set up a distributed Splunk environment consisting of one indexer and multiple forwarders, let's say six. The forwarders will be installed on a different network and must pass through a firewall to reach the indexer. If, for some reason, the network drops and the forwarders are unable to contact the indexer, what happens in this case?

-Do the forwarders stop sending data immediately?

-Will I lose some data from the files that the forwarders are monitoring?

-Is there a clean and elegant way to keep the files being monitored by the forwarders in sync with the events on the indexer?

I am trying to set up Splunk in a production environment, and capturing all of the events produced on the servers is crucial.

Has anyone had a similar issue and found a reliable solution?

Any help would be greatly appreciated!

Thanks!


grijhwani
Motivator

Splunk forwards data over TCP, so you don't lose data outright, although if your network outage lasts a long time you may find the forwarder starts chewing through memory as its output queue fills. Once the connection is restored it will eventually catch up automatically (provided it has the bandwidth).
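
For example, you can tune the output queue and turn on indexer acknowledgement in outputs.conf on each forwarder, so anything still in flight when the link dropped gets re-sent. A rough sketch only; the group name, host, port, and queue size below are placeholders, not recommendations:

# outputs.conf on each universal forwarder (placeholder values)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer.example.com:9997
# Ask the indexer to acknowledge data it has received, so the forwarder
# re-sends anything that was in flight when the connection dropped.
useACK = true
# Give the output queue more room to buffer events during a short outage.
maxQueueSize = 100MB

With useACK enabled the forwarder holds events until the indexer confirms receipt, and because monitored files are read from a tracked offset, the forwarder simply resumes reading from where it left off once the connection comes back.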


splunkmasterfle
Path Finder

Is this information taken from the Splunk documentation?
