Deployment Architecture

Setting a max index size for frozen data?

ryan_robertson
Engager

I am running into a scenario where a high-volume index is quickly rolling over from hot/warm to cold and then to frozen. Our current requirement is to keep data for 18 months, or until it hits max storage. A few of the current settings for the index are shown below:

[volume:hot]
path = /opt/splunkDB/hot-warm
maxVolumeDataSizeMB = 2000000

[volume:cold]
path = /opt/splunkDB/cold
maxVolumeDataSizeMB = 7500000

[volume:frozen]
path = /opt/splunkDB/frozen
maxVolumeDataSizeMB = 950000

[high_volume_index]
coldPath = volume:cold/high_volume_index/colddb
coldPath.maxDataSizeMB = 0
coldToFrozenDir = /opt/splunkDB/frozen/high_volume_index/frozendb
coldToFrozenScript =
frozenTimePeriodInSecs = 47088000
homePath = volume:hot/high_volume_index/db
homePath.maxDataSizeMB = 0
hotBucketTimeRefreshInterval = 10
maxDataSize = auto
maxHotBuckets = 3
maxHotIdleSecs = 0
maxHotSpanSecs = 7776000
maxWarmDBCount = 300
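
(For reference, the frozenTimePeriodInSecs value of 47088000 seconds works out to 47088000 / 86400 = 545 days, which is roughly our 18-month target.)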

Is there a way to manage the maximum frozen data size, similar to coldPath.maxDataSizeMB, other than frozenTimePeriodInSecs? One solution I saw was to reduce frozenTimePeriodInSecs to something shorter than 18 months for this single index, since it isn't meeting our 18-month requirement now anyway and could otherwise keep other indexes from meeting that requirement.
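
For illustration, here is roughly the kind of thing I have in mind instead: an archiving script plus a size check on the frozen directory, since nothing in indexes.conf seems to cap frozen storage directly. This is only an untested sketch; it assumes Splunk passes the cold bucket's directory path as the first argument to coldToFrozenScript and deletes the bucket itself once the script exits 0, and the path and 950000 MB cap below are just the values from our config above.

#!/usr/bin/env python3
# Untested sketch of a coldToFrozenScript that archives a bucket and then
# trims the frozen directory back under a size cap. Assumes Splunk passes
# the cold bucket's directory as argv[1] and deletes that bucket itself
# after this script exits 0. Paths and the cap mirror the config above.

import os
import shutil
import sys

FROZEN_DIR = "/opt/splunkDB/frozen/high_volume_index/frozendb"
MAX_FROZEN_MB = 950000  # same number as maxVolumeDataSizeMB on volume:frozen

def dir_size_mb(path):
    """Total size of all files under path, in MB."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / (1024 * 1024)

def trim_frozen(frozen_dir, max_mb):
    """Delete the oldest archived buckets until the directory fits the cap."""
    buckets = sorted(
        (os.path.join(frozen_dir, d) for d in os.listdir(frozen_dir)
         if os.path.isdir(os.path.join(frozen_dir, d))),
        key=os.path.getmtime,  # oldest archive first
    )
    while buckets and dir_size_mb(frozen_dir) > max_mb:
        shutil.rmtree(buckets.pop(0))

def main():
    bucket = sys.argv[1]  # cold bucket directory handed over by Splunk
    dest = os.path.join(FROZEN_DIR, os.path.basename(bucket))
    if not os.path.isdir(dest):
        shutil.copytree(bucket, dest)  # archive the whole bucket
    trim_frozen(FROZEN_DIR, MAX_FROZEN_MB)

if __name__ == "__main__":
    sys.exit(main())

If something like this worked, I'm assuming the stanza would point at it with coldToFrozenScript = "/usr/bin/python3" "/opt/splunkDB/scripts/freeze_and_trim.py" (path and filename made up), but I'd still want to confirm the exact calling convention against the coldToFrozenExample.py that ships in $SPLUNK_HOME/bin.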

