As far as I can tell, setting maxVolumeDataSizeMB does not trigger bucket moves and has no impact at all. Does anyone use this setting successfully, and can help me understand how it actually works? The documentation has not proven useful. Thanks!
Which version of Splunk, on what Platform? I don't know of any defects on the behavior that presently exist.
per indexes.conf.spec:
maxVolumeDataSizeMB =
* Optional.
* If set, this attribute will limit the total cumulative size of all databases
that reside on this volume to the maximum size specified, in MB.
* If the size is exceeded, Splunk will remove buckets with the oldest value of latest time (for a given bucket)
across all indexes in the volume, until the volume is below the maximum size.
Note that this can cause buckets to be frozen directly from a warm DB, if those
buckets happen to have the oldest value of latest time across all indexes in the volume.
It is important to understand that this value acts on the aggregate size of the indexes that reference the volume on the instance where this setting exists. In other words, this isn't the equivalent of running 'df -k' and seeing that the partition's usage is over the specified threshold. So if you've got other data there, such as frozen buckets, or you're using that disk for other types of storage, Splunk won't take that into consideration.
SPL-50187 was filed to make this feature more clear in the documentation.
splunkd should log messages with the 'BucketMover' component if something exceeds a trigger which has been set, so it would be a good idea to review it and get an idea of what is happening. What is the specific behavior you've observed that makes you think this doesn't work?
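To review that activity, you can grep splunkd.log for the BucketMover component. Here's a self-contained sketch that runs against a sample line, since the real log won't exist outside a Splunk host; the message format shown is illustrative only, not the exact splunkd wording, and the default log path is assumed to be $SPLUNK_HOME/var/log/splunk/splunkd.log:

```shell
# On a real indexer you'd run something like:
#   grep BucketMover $SPLUNK_HOME/var/log/splunk/splunkd.log
# Demo below filters a sample file (log line format is illustrative, not exact).
LOG=$(mktemp)
printf '%s\n' \
  '03-15-2012 10:00:00.000 INFO  BucketMover - idx=main moving bucket to colddb' \
  '03-15-2012 10:00:01.000 INFO  IndexProcessor - unrelated message' > "$LOG"
grep 'BucketMover' "$LOG"   # prints only the BucketMover line
rm -f "$LOG"
```

If that grep returns nothing on your indexer, Splunk never decided the trigger was exceeded, which points to a configuration issue rather than a defect.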
Hi
maxVolumeDataSizeMB only takes effect if you have a volume defined in indexes.conf and an index (also in indexes.conf) configured to use it. All indexes configured that way will use the volume's mount point for their storage, and you can use maxVolumeDataSizeMB to regulate how big you allow the volume to grow as a whole. Note that you can also set maxTotalDataSizeMB in an index stanza to regulate size on a per-index basis. My guess is you don't have maxTotalDataSizeMB set on an index. A piece of my indexes.conf is below - just note that I never roll things over to cold or frozen; I keep things hot or warm 😉
# Set up shared disk pool for indexers
[volume:hotwarm]
path = $SPLUNK_DB
maxVolumeDataSizeMB = 900000
# Main indexes
[main]
homePath = volume:hotwarm/defaultdb/db
coldPath = volume:hotwarm/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
maxTotalDataSizeMB = 850000
In English - my main index is stored in a volume called hotwarm. The hotwarm volume's size limit is 900,000 MB, but the main index in that volume can only grow to 850,000 MB (to leave space for some other, smaller indexes I have).
Show us your indexes.conf.
Yes. Another way to put it is that "volume" does not refer to your OS disk volumes. Rather it refers to the "volumes" as defined within indexes.conf, and things only are considered to live on a "volume" if defined as such in indexes.conf. That's probably what's misleading.
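To make the distinction concrete, here's a minimal hypothetical sketch (stanza names and paths are made up for illustration): only indexes whose paths reference volume:hotwarm count toward its maxVolumeDataSizeMB; an index on the same physical disk with absolute paths is invisible to the volume's accounting.

[volume:hotwarm]
path = /splunkdata
maxVolumeDataSizeMB = 500000

# Counts toward the 500,000 MB cap - its paths reference the volume:
[web]
homePath = volume:hotwarm/web/db
coldPath = volume:hotwarm/web/colddb
thawedPath = /splunkdata/web/thaweddb

# Lives on the same disk but does NOT count - its paths bypass the volume:
[firewall]
homePath = /splunkdata/firewall/db
coldPath = /splunkdata/firewall/colddb
thawedPath = /splunkdata/firewall/thaweddb

So even if /splunkdata fills up, only the [web] index's usage is measured against the 500,000 MB limit.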
Sorry, forgot to say this is for Splunk 4.3.1 on RHEL5.
I set maxVolumeDataSizeMB to 90% of a disk partition, but the partition filled to 100%, and no buckets moved from warm to cold. The partition is dedicated to Splunk, so there are no other files in there.