All,
Sorry guys, I don't do this much and the docs are not giving me the warm and fuzzies about how to do this.
I'd like to take advantage of the fact that I have pretty snappy local disks on my new servers to keep a day or two of log events there before rolling to cold. I've never really done this before. Where I'm running into trouble is the config for hot/warm. What's the setting to get Splunk rolling from fastdisk to bigdisk instead of just running out of space and crashing? Does Splunk just "know" to start rolling as it approaches 700 GB? I assume this would be a volume-level setting, but I'm not seeing it.
# 12TB store
[volume:bigdisk]
path = /data
maxVolumeDataSizeMB = 110000000

# 1TB local SSDs
[volume:fastdisk]
path = /splunk_local
maxVolumeDataSizeMB = 700000

[default]
# 10gig buckets
maxDataSize = auto_high_volume
# Company requires min 120 day logs available
frozenTimePeriodInSecs = 36820000
# Hot/Warm - should just roll to cold when fastdisk is low on space
homePath = volume:fastdisk/$_index_name/db
homePath.maxDataSizeMB = 700000
# Cold - should drop data when full - not crash
coldPath = volume:bigdisk/$_index_name/colddb
coldPath.maxDataSizeMB = 110000000
thawedPath = $SPLUNK_DB/$_index_name/thaweddb

[main]
[os]
[windows]
In all the examples I have seen, the following is set at the index level:
homePath.maxDataSizeMB = 700000
So I would set it on each index, sizing the values so that the cumulative total stays below 700000 MB; see the sketch below.
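A minimal sketch of that approach, assuming the three indexes from the config above (main, os, windows). The per-index numbers are purely illustrative, not recommendations; size them to your actual data volumes, keeping the total under the 700000 MB fastdisk volume cap:

# illustrative caps: 400000 + 150000 + 100000 = 650000 < 700000
[main]
homePath.maxDataSizeMB = 400000

[os]
homePath.maxDataSizeMB = 150000

[windows]
homePath.maxDataSizeMB = 100000

As I understand it, Splunk will also roll the oldest warm buckets to cold on its own once a home volume reaches its maxVolumeDataSizeMB, so these per-index caps act as an extra guard rather than the only control.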