Hey guys,
This question is for those of you with some in-depth Splunk engineering experience.
I'm trying to design an optimal architecture for our Splunk installation. Here are my objectives.
Environment:
Well-specced hardware: 16 cores, CentOS (Linux), storage delivering approximately 4,000 IOPS total
I'd appreciate any real-life experiences any of you have had running Splunk in large-scale deployments.
Cheers~!
I'm going to say this all really depends on whether you're talking about a distributed search setup or an all-in-one.
Questions you've got to ask: how much throughput are you estimating? How many dashboards will you have, and with how many searches each? Will you have ad-hoc searches?
Every search costs 1 CPU core for the duration of the search. Now let's say each dashboard uses 6 searches, costing 6 CPUs (if you're not using post-process searches); then you have 3 saved searches running at any time; then you have 4 real-time dashboards at 2 searches apiece always running, which is 8 more, not counting ad-hoc searches. If all of that was happening constantly, that's 17 concurrent searches using 17 CPUs. That's the worst-case scenario. Now factor in that indexing takes more CPUs on top of that.
Worst case, you also hit the maximum of about 200 MB of memory per search, which is roughly 3.4 GB just in searches.
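The worst-case arithmetic above can be sketched as a quick back-of-envelope script (the per-search numbers are the assumptions from this post, not fixed Splunk constants):

```python
# Back-of-envelope worst-case sizing, using the example numbers from this post.
# Assumptions: 1 CPU core per concurrent search, ~200 MB peak memory per search.
DASHBOARD_SEARCHES = 6        # one dashboard, no post-process searches
SAVED_SEARCHES = 3            # scheduled searches running at any given time
REALTIME_SEARCHES = 4 * 2     # 4 real-time dashboards, 2 searches apiece

concurrent = DASHBOARD_SEARCHES + SAVED_SEARCHES + REALTIME_SEARCHES
cpu_cores = concurrent * 1                 # one core per search
memory_gb = concurrent * 200 / 1024        # 200 MB per search, converted to GB

print(f"{concurrent} concurrent searches -> {cpu_cores} cores, {memory_gb:.1f} GB RAM")
```

Swap in your own dashboard and search counts; the point is that concurrency, not data volume, is what eats search-head cores.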
Regarding IOPS, those will be consumed by searches and indexing. Your indexing could be affected by how intense your searches are, and vice versa.
These are just worst cases. Cheers
Read the "Scale your deployment" doc for more info.
An indexer will use as many IOPS as your hardware can give it. It will not utilize that many cores; the key is write performance. Your hardware should be capable of well over 50 GB/day of indexing volume.
A search head will use those cores, provided a number of users/searches are running in parallel; searches are usually more CPU-intensive than indexing.
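For a sanity check on how many of those parallel searches Splunk will even allow, the concurrency ceiling is driven by two limits.conf settings. The default values below match older Splunk versions; verify them against the docs for your release:

```python
# Rough concurrent-search ceiling on a search head, per the limits.conf formula:
#   max historical searches = max_searches_per_cpu * num_cpus + base_max_searches
# The defaults below are assumptions; check limits.conf docs for your version.
MAX_SEARCHES_PER_CPU = 1   # limits.conf default
BASE_MAX_SEARCHES = 6      # limits.conf default
num_cpus = 16              # the 16-core box from the question

max_hist_searches = MAX_SEARCHES_PER_CPU * num_cpus + BASE_MAX_SEARCHES
print(f"Default concurrent-search ceiling on {num_cpus} cores: {max_hist_searches}")
```

So on this 16-core box the defaults cap you at 22 concurrent historical searches; the worst-case load sketched earlier in the thread would fit, but without much headroom for ad-hoc users.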