Hi everybody
At the moment I've got about 170 indexes on my indexer.
What's the best-practice limit for the number of indexes per indexer? I've got a 2 x quad-core Xeon 2.0GHz machine with 16GB of memory.
Regards, Simon
Do you mean 170 distinct index directories in $SPLUNK_DB? or 170 buckets across all of your indexes?
There's no hard limit; it all depends on the total amount of data you are indexing, your disk, and the efficiency of your indexing settings. If you are actively indexing data to every single index, then Splunk may have a hard time keeping up with all of the aggregation, sourcetyping, etc. If you look for messages containing 'blocked=true' in the _internal index, that will tell you if you are hitting any resource limitations. CPU time is one possible bottleneck; disk contention is another.
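As a concrete way to run that check (this is a sketch against the standard metrics.log fields in _internal; field names may vary by version), a search like the following surfaces blocked-queue events and shows which queues are affected:

```
index=_internal source=*metrics.log* blocked=true
| stats count by name, group
```

If this returns rows for queues such as indexqueue or typingqueue, that points to where the pipeline is backing up; an empty result suggests the indexer is keeping up.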
If there are no 'blocked' messages, then that would indicate that your instance is happy and able to cope with the workload. If you're not indexing a high volume of data, I wouldn't expect Splunk to be complaining very much.
Hi Mick
I have 170 separate indexes, not buckets!
At the moment I can't find any "blocked" messages.
Well, that helped. Thanks for answering.