
Splunk Architecture for Production

meenal901
Communicator

Hi,
We have 140 production servers on which we are planning to install universal forwarders.
Further, we need to filter the data and send only around 50 percent of it to the indexers.
Each production server produces around 1.5 GB of data.

With this data volume and server count, how many heavy forwarders, indexers, and search heads should we be using?


MuS
Legend

Hi meenal901,

this cannot be answered here; it all depends on your existing infrastructure, your use cases, and other requirements, such as how many concurrent searches will run, whether you depend on near real-time data, and so on.

As a rule of thumb, take a look at the docs on recommended hardware, which should be good for indexing about 100 GB/day.
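Just as a rough back-of-envelope using the numbers in your question (and assuming the 1.5 GB per server is a daily volume):

140 servers x 1.5 GB/day  ~ 210 GB/day raw
after ~50% filtering      ~ 105 GB/day actually indexed

So you land right around the volume that one reference-hardware indexer is rated for, which is why concurrent search load and your other requirements, not raw volume alone, will decide the final counts.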

cheers, MuS

martin_mueller
SplunkTrust

Additionally, the effort to perform the 50% filtering you mentioned depends heavily on how the filters are built. Very simple filters won't have a huge impact, while complex (usually badly built) filters can make your servers grind to a halt; see the sketch below.
Therefore it's impossible to say, based on just a few numbers, how many HFs you need, whether it'd make sense to use HFs at the sources instead of UFs, whether it'd make sense to send 100% to the indexers and filter there (network? legal issues?), and so on.
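For context, the standard way to drop events on a heavy forwarder (or indexer) is a props.conf/transforms.conf pair that routes matching events to nullQueue. A minimal sketch, where the sourcetype name and the regex are placeholders for whatever you actually want to discard:

# props.conf (on the heavy forwarder)
[my_sourcetype]
TRANSFORMS-filter = drop_unwanted_events

# transforms.conf
[drop_unwanted_events]
REGEX = \bDEBUG\b
DEST_KEY = queue
FORMAT = nullQueue

The cost of this filtering is dominated by the REGEX: a tight, anchored pattern is cheap, while a sloppy pattern run against large events is exactly the kind of "complex filter" that drags a forwarder down.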

Schedule a workshop with your local Splunk Partner or Splunk Sales Engineer.
