Getting Data In

Splunk Architecture for Production

meenal901
Communicator

Hi,
We have 140 production servers on which we plan to install universal forwarders.
We also need to filter the data so that only around 50 percent of it is sent to the indexers.
Each production server produces around 1.5 GB of data.

Given this data volume and server count, how many heavy forwarders, indexers, and search heads should we be using?
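For rough sizing context (assuming the 1.5 GB figure is per server per day, which is not stated explicitly above), that works out to about 140 x 1.5 GB ≈ 210 GB/day of raw data, or roughly 105 GB/day reaching the indexers after the 50 percent filtering.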


MuS
Legend

Hi meenal901,

This cannot be answered definitively here; it all depends on your existing infrastructure, your use cases, and other requirements, such as how many concurrent searches will run and whether you depend on near real-time data.

As a rule of thumb, take a look at the docs on the recommended reference hardware, which should be good for indexing about 100 GB/day.
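As a very rough illustration only: if your total volume is around 210 GB/day raw (roughly 105 GB/day after the 50 percent filtering), that reference figure would put you somewhere in the range of one to three reference indexers, depending on whether the filtering happens before or at the indexers, and before accounting for concurrent search load, replication, or headroom.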

cheers, MuS

martin_mueller
SplunkTrust

Additionally, the effort to perform the 50% filtering you mentioned depends heavily on how the filters are built. Very simple filters won't have a huge impact, while complex (usually badly built) filters can bring your servers to a halt.
Therefore it's impossible to say, based on just a few numbers, how many HFs you need, whether it would make sense to use HFs at the sources instead of UFs, whether it would make sense to send 100% of the data to the indexers and filter there (network constraints? legal issues?), and so on.
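For reference, the simple kind of filter Splunk documents for this is a props.conf/transforms.conf pair on a heavy forwarder or indexer that routes unwanted events to nullQueue. A minimal sketch, where the sourcetype name, transform name, and regex are placeholders rather than anything from your environment:

props.conf:

[my_sourcetype]
# placeholder sourcetype; use the sourcetype of the data you want to filter
TRANSFORMS-drop_noise = drop_debug_events

transforms.conf:

[drop_debug_events]
# placeholder regex; events matching it are routed to nullQueue and discarded
REGEX = \bDEBUG\b
DEST_KEY = queue
FORMAT = nullQueue

Events matching the regex are dropped at parse time and never reach the index (or the license meter); everything else passes through unchanged. Note this only works where parsing happens, i.e. on a heavy forwarder or an indexer, not on a universal forwarder.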

Schedule a workshop with your local Splunk Partner or Splunk Sales Engineer.
