Getting Data In

Missing events and flooding of data on the heavy forwarder

DataOrg
Builder

I have Splunk servers in 4 regions, and the architecture is:

UF (data from 20 locations) ---> HF ---> indexer ---> search head

So if I add any new UF that replaces an old server, I need to change props.conf on the HF to route its data to the correct indexer, which requires a restart of the HF. During this time some events are lost, and the HF is flooded with data once it comes back online. The routing is set up roughly as sketched below.
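(The host pattern, routing group name, and indexer address below are placeholders, not the real values.)

# props.conf on the HF: tie the new server's host to a routing transform
[host::new-server*]
TRANSFORMS-routing = route_to_region1

# transforms.conf on the HF: send matching events to the region1 output group
[route_to_region1]
SOURCE_KEY = MetaData:Host
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = region1_indexers

# outputs.conf on the HF: the output group pointing at that region's indexer
[tcpout:region1_indexers]
server = idx-region1.example.com:9997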

Can you please suggest a best practice to avoid the missing events and the flood of data?
I don't have clustering.

1 Solution

richgalloway
SplunkTrust

Best practice is to not use intermediate forwarders, except in special circumstances. The HF does not distribute events as evenly as the UFs would and actually makes the indexers work harder. Eliminating the HF means you don't have to restart it when you add a UF, so your problem goes away.

That said, the only lost events would be those "in flight" when the HF is restarted. That can be alleviated by turning on indexer acknowledgment (useACK); see the sketch below.
Flooding of data is normal when a connection is re-established. The UFs have been buffering events while the HF restarted and then send that buffer when the HF is back up.
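A minimal sketch of enabling indexer acknowledgment on the forwarders, assuming a standard outputs.conf layout (host names are placeholders):

# outputs.conf on each UF: keep events in the wait queue until they are acknowledged
[tcpout:hf_group]
server = hf.example.com:9997
useACK = true

With useACK on, the UF re-sends anything that was never acknowledged, so events in flight during the HF restart are delivered again instead of dropped. If the data passes through an intermediate HF, that HF normally needs useACK enabled on its own outputs.conf as well, so the acknowledgment chain runs end to end.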

---
If this reply helps you, Karma would be appreciated.


FrankVl
Ultra Champion

To add to that: if you are indeed stuck with using HFs as an intermediate layer, put more than one in place and use Splunk's auto load balancing to distribute data across the 2 (or more) HFs. If one of them is then down or restarted for maintenance, the other can still process data. A rough sketch follows below.
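A minimal sketch of what that looks like on the UF side, assuming two HFs (host names are placeholders):

# outputs.conf on each UF: list both HFs so the forwarder load balances between them
[tcpout:hf_group]
server = hf1.example.com:9997, hf2.example.com:9997
autoLBFrequency = 30

The UF switches targets on the autoLBFrequency interval and automatically skips a HF that is down, so a restart of one HF no longer interrupts the flow.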

DataOrg
Builder

I don't have 2 HFs. We have only one HF.


FrankVl
Ultra Champion

Sounds like you may need to add one (or more) then. It not only improves availability, it also improves data distribution over your indexers.
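On the HF-to-indexer side, distribution within each routing group can also be tuned; a rough sketch, assuming two indexers per region (names are placeholders):

# outputs.conf on each HF: list every indexer in the region and rotate between them on a timer
[tcpout:region1_indexers]
server = idx-region1a.example.com:9997, idx-region1b.example.com:9997
forceTimebasedAutoLB = true
autoLBFrequency = 30

forceTimebasedAutoLB makes the HF switch targets on the interval even on a long-lived stream, which spreads the data more evenly across the indexers in the group.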
