1. Controlling the size of a hot bucket:
maxDataSize = auto | auto_high_volume
auto = 750 MB
auto_high_volume = 10 GB
maxHotSpanSecs = <positive integer>
default value = 90 days
If I set both parameters, which one takes precedence?
Does changing the maxDataSize parameter to a higher value require tuning other parameters accordingly?
2. Performance of Search query
When I set maxDataSize = auto_high_volume (i.e., 10 GB), a hot bucket will grow to 10 GB and stay hot until then. Will searches be faster, since the data is available in the hot bucket?
The maxDataSize parameter controls the size of a single bucket, and maxHotSpanSecs controls the timespan of a bucket. Whenever either limit is reached, a hot bucket rolls to warm. Therefore, you will have some buckets that are smaller than maxDataSize because they hit the timespan limit, and some buckets that rolled because they were full (they hit maxDataSize) even though their timespan is less than maxHotSpanSecs.
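The either-limit-reached roll rule can be sketched in a few lines (this is an illustrative model of the behavior described above, not Splunk's actual implementation):

```python
# Sketch of the hot-bucket roll decision: a hot bucket rolls to warm
# when EITHER the size limit or the timespan limit is reached.
def should_roll(bucket_size_bytes, bucket_span_secs,
                max_data_size_bytes, max_hot_span_secs):
    return (bucket_size_bytes >= max_data_size_bytes
            or bucket_span_secs >= max_hot_span_secs)

TEN_GB = 10 * 1024**3   # maxDataSize = auto_high_volume
ONE_DAY = 86400         # maxHotSpanSecs = 86400

# A 2 GB bucket spanning a full day rolls on the timespan limit:
print(should_roll(2 * 1024**3, ONE_DAY, TEN_GB, ONE_DAY))   # True

# A bucket that fills to 10 GB in one hour rolls on the size limit:
print(should_roll(TEN_GB, 3600, TEN_GB, ONE_DAY))           # True

# A 1 GB bucket spanning one hour stays hot:
print(should_roll(1024**3, 3600, TEN_GB, ONE_DAY))          # False
```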
Generally, you should not need to tune other parameters. However, I would not set a bucket larger than 10GB without careful thought. It is slower to search many small buckets, but a super large bucket that contains many days of data is also not efficient.
Most searches in Splunk are run on timespans of 24 hours or less. If that is your case, you may want to size the buckets so that they roll about once a day. Again, avoid buckets smaller than 750 MB or larger than 10 GB.
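A quick arithmetic check makes the daily-roll sizing concrete (the 5 GB/day ingest rate below is a made-up example, not a figure from this thread):

```python
# Rough sizing check: how long would a bucket take to fill by size alone?
daily_ingest_gb = 5.0      # hypothetical ingest rate for one index
max_data_size_gb = 10.0    # auto_high_volume

days_to_fill = max_data_size_gb / daily_ingest_gb
print(days_to_fill)        # 2.0 -> by size alone the bucket would span
                           # ~2 days; setting maxHotSpanSecs = 86400
                           # forces a roll once a day instead.
```

At this ingest rate each daily bucket lands around 5 GB, comfortably inside the recommended 750 MB to 10 GB range.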
Would it be logical then to set the indexes up like this ...
maxDataSize=auto_high_volume
maxHotSpanSecs=86400 #one day
... so that the bucket doesn't get too big but it would also roll at least daily?