I am running into a scenario where a high-volume index is quickly rolling over from hot/warm to cold and then to frozen. Our current requirement is to keep data for 18 months, or until it hits max storage. A few of the current settings for the index are shown below:
[volume:hot]
path = /opt/splunkDB/hot-warm
maxVolumeDataSizeMB = 2000000
[volume:cold]
path = /opt/splunkDB/cold
maxVolumeDataSizeMB = 7500000
[volume:frozen]
path = /opt/splunkDB/frozen
maxVolumeDataSizeMB = 950000
[high_volume_index]
coldPath = volume:cold/high_volume_index/colddb
coldPath.maxDataSizeMB = 0
coldToFrozenDir = /opt/splunkDB/frozen/high_volume_index/frozendb
coldToFrozenScript =
frozenTimePeriodInSecs = 47088000
homePath = volume:hot/high_volume_index/db
homePath.maxDataSizeMB = 0
hotBucketTimeRefreshInterval = 10
maxDataSize = auto
maxHotBuckets = 3
maxHotIdleSecs = 0
maxHotSpanSecs = 7776000
maxWarmDBCount = 300
Is there a way to manage max frozen data size, similar to coldPath.maxDataSizeMB, other than frozenTimePeriodInSecs? One solution I saw was to reduce frozenTimePeriodInSecs to a value smaller than 18 months for this single index, since it's not meeting our 18-month requirement now and could potentially block other indexes from meeting that requirement.
Hi there @ryan_robertson
Take a look at yannK's answer.
https://answers.splunk.com/answers/153365/is-there-a-way-to-control-the-size-of-frozen-buckets.html