It is not ideal to change maxDataSize, since it defaults to 10 GB on 64-bit operating systems and that is generally an optimal value. However, you have very little storage space available, which makes things challenging with a chunk size of 10 GB per bucket.
At 10 GB per bucket there is simply not much room to work with, since some of your indexes have only 10 GB of storage space in total.
An important setting that is missing is maxWarmDBCount. Its default value of 300 is far too large for the amount of storage you currently have. What this means is that your indexes will never roll buckets over to 'cold', and the coldToFrozen script will never run because there are no cold buckets to freeze.
To get around this, we have to reduce the number of warm buckets accordingly so that buckets start rolling over to cold.
For the sake of illustration, I am going to use 750 MB per bucket.
The general formula goes like this:
For abcd, and assuming minFreeSpace is using the default value of 2G:
maxWarmDBCount = (LocalDiskSpace-minFreeSpace-(maxHotBuckets*maxDataSize))/maxDataSize
maxWarmDBCount = (38G - 2G - (10 * 0.75G)) / 0.75G
maxWarmDBCount = (36G - 7.5G) / 0.75G
maxWarmDBCount = 38
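If you want to sanity-check the arithmetic, a quick shell one-liner (a minimal sketch, just plugging in the example numbers above) gives the same result:

# (38 GB disk - 2 GB minFreeSpace - 10 hot buckets * 0.75 GB) / 0.75 GB per bucket
echo "scale=1; (38 - 2 - (10 * 0.75)) / 0.75" | bc
# prints 38.0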
In this case, my indexes.conf for abcd will look something like:
[abcd]
homePath = $SPLUNK_DB/abcd/db
coldPath = $SPLUNK_DB/abcd/colddb
thawedPath = $SPLUNK_DB/abcd/thaweddb
coldToFrozenScript = compressedExportabcd.sh
maxHotBuckets = 10
maxConcurrentOptimizes = 6
maxTotalDataSizeMB = 38912
maxHotIdleSecs = 86400
maxDataSize = 750
maxWarmDBCount = 38
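For completeness, here is a minimal sketch of what compressedExportabcd.sh might look like. Splunk invokes coldToFrozenScript with the path of the bucket being frozen as its only argument and expects an exit status of 0 before it removes the bucket; the archive destination below is only an assumption, so adjust it to your environment.

#!/bin/sh
# Sketch of a coldToFrozen script: Splunk passes the bucket path as $1.
BUCKET="$1"
ARCHIVE_DIR=/opt/frozen/abcd          # assumed destination, change as needed
mkdir -p "$ARCHIVE_DIR" || exit 1
# Compress the whole bucket directory into the archive location.
tar -czf "$ARCHIVE_DIR/$(basename "$BUCKET").tar.gz" -C "$(dirname "$BUCKET")" "$(basename "$BUCKET")"
exit $?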