Deployment Architecture

Is it possible to roll data from warm to cold with a time parameter? Otherwise there might be a GDPR problem

Dennis54321
Engager

Is there a way to roll data from warm to cold with a time parameter? I searched and didn't find one. There is only this post, in which the time is approximated with the help of maxWarmDBCount or homePath.maxDataSizeMB.

https://answers.splunk.com/answers/9911/way-to-move-warm-data-to-cold-by-time.html

Imagine a company with the following settings:
Hot/warm is on SSD and cold is on HDD.
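
(For context, the volumes referenced further down might be defined roughly like this; the mount-point paths are assumptions for illustration, not from the original post.)

# Hypothetical volume definitions; paths are illustrative assumptions
[volume:company_hot]
path = /mnt/ssd/splunk

[volume:company_cold]
path = /mnt/hdd/splunk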

# Roll hot buckets to warm after 6 hours
maxHotSpanSecs = 21600

# Roll warm buckets to cold once the size of hot and warm exceeds 3 TB
homePath.maxDataSizeMB = 3000000
# Set maxWarmDBCount to its maximum so the default bucket count never triggers the roll
maxWarmDBCount = 4294967295

# Delete data after 6 months (186 days)
frozenTimePeriodInSecs = 16070400

With this setup, logs are rolled from hot to warm after 6 hours via maxHotSpanSecs, and let's imagine that, at the company's daily logging volume, buckets are rolled from warm to cold after about 30 days via the homePath.maxDataSizeMB setting. The company uses this large SSD storage for hot/warm so that at least the last 30 days can be searched at SSD speed. With frozenTimePeriodInSecs, all data is deleted after 6 months.
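
To make the 30-day figure concrete, assume (purely for illustration) a steady ingest of about 100 GB/day into hot/warm:

3,000,000 MB / 100,000 MB per day = 30 days until the size cap forces the oldest warm buckets to cold.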

That's all fine so far.

The company now creates a new index, [SuperSensitiveData]. The logs in this index must be deleted after 7 days to be GDPR compliant, and the daily input volume of this index varies a lot. The Splunk admin creates the following stanza in indexes.conf:

[SuperSensitiveData]
homePath = volume:company_hot/debug/db
coldPath = volume:company_cold/debug/colddb
# thawedPath may not be defined in terms of a volume, so an absolute path is used
thawedPath = $SPLUNK_DB/debug/thaweddb

# Delete data after 7 days
frozenTimePeriodInSecs = 604800

Now the data in SuperSensitiveData will sit in hot/warm for 30 days before it is deleted, right? Because the frozenTimePeriodInSecs parameter in the SuperSensitiveData stanza only takes effect once data is in cold storage, and the data only gets there after 30 days.

And it can get even worse: if the company changes its environment and the daily data volume is halved, the SuperSensitiveData logs will sit there for 60 days.
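
With the same illustrative figures as above, halving the ingest doubles the residence time in hot/warm:

3,000,000 MB / 50,000 MB per day = 60 days before the buckets reach cold storage.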

Did I miss something here? Is there a way to solve this problem?
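
For illustration, a per-index variant of the size cap discussed above might look like the sketch below. This is only a sketch, not a verified fix: the 10 GB cap is an arbitrary assumption, and whether it works depends on the index's actual daily volume. The idea is that the sensitive index's warm buckets reach cold storage, where frozenTimePeriodInSecs can act, long before day 7:

[SuperSensitiveData]
homePath = volume:company_hot/debug/db
coldPath = volume:company_cold/debug/colddb
thawedPath = $SPLUNK_DB/debug/thaweddb
# Per-index cap (10 GB is an arbitrary example value): once hot+warm for
# this index exceeds it, warm buckets roll to cold well before day 7
homePath.maxDataSizeMB = 10000
# Delete (freeze) data after 7 days
frozenTimePeriodInSecs = 604800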

fene
New Member

I believe this can be done with the maxHotSpanSecs parameter in indexes.conf:

https://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf

maxHotSpanSecs = <positive integer>
* Upper bound of timespan of hot/warm buckets, in seconds.
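
A minimal sketch of that suggestion applied to the stanza from the question (the one-day value is an assumption for illustration, not a tested fix):

[SuperSensitiveData]
# Bound each hot bucket's timespan to one day so age-based
# retention operates on smaller, more predictable buckets
maxHotSpanSecs = 86400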

 
