Splunk Search

Reduce index period for old Index

trkswe
New Member

Hi All,

We have an index named axo, which is around 3 years old and holds around 300 GB of data.
We have now decided to reduce the index size by retaining only the latest 90 days of data.

We have set "frozenTimePeriodInSecs = 7776000" in /opt/splunk/etc/system/local/indexes.conf.
We also ran the btool command (./splunk cmd btool indexes list) to check whether multiple .conf files set this value.
The btool output also showed "frozenTimePeriodInSecs = 7776000", so the setting looks correct.
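
For reference, this is roughly what the stanza looks like and how we checked it (a sketch; the --debug flag just shows which file each value comes from):

# /opt/splunk/etc/system/local/indexes.conf
[axo]
# 90 days x 24 hours x 3600 seconds = 7776000
frozenTimePeriodInSecs = 7776000

# Check the effective settings for this index only:
# ./splunk cmd btool indexes list axo --debug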

However, when we search, we still see the old data from the past 2 years.

Is this the correct method of reducing the index size?
Do we need to follow any other method? Please guide.

PS: "maxHotSpanSecs = 7776000"

Thank you.

1 Solution

DavidHourani
Super Champion

Hi @trkswe,

Changing frozenTimePeriodInSecs does purge the older logs, but if old and new events are mixed in the same bucket of your index, that bucket will only expire once its newest event is older than 90 days.
This means that if you indexed 2-year-old data at the same time as data from less than 90 days ago, the old data will not be purged until the most recent data in that bucket goes past 90 days. This usually happens when you index multiple days/months/years of logs at the same time.
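
One way to confirm this is to look at the time span each bucket covers, for example with dbinspect (startEpoch, endEpoch, state and eventCount are standard dbinspect fields):

| dbinspect index=axo
| eval start=strftime(startEpoch,"%F"), end=strftime(endEpoch,"%F")
| table bucketId state start end eventCount

Any bucket whose end time is still within the last 90 days will not freeze, even if its start time is years in the past.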

Have a look at this wiki, it will help: https://wiki.splunk.com/Deploy:SplunkBucketRetentionTimestampsAndYou

PS: It's best practice to avoid using system/local for configurations. Try making an app and putting your configs there instead; it will be easier to maintain and manage.
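
For example, something like this (the app name here is just an example); remember to remove the setting from system/local afterwards, since system/local takes precedence over app configurations:

# /opt/splunk/etc/apps/my_index_settings/local/indexes.conf
[axo]
frozenTimePeriodInSecs = 7776000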

Cheers,
David


trkswe
New Member

Thanks a lot.

We confirmed with the analyst, and your assumption was right.
The old data was ingested a few months ago.

Thanks for the tip on best practice as well.
