On the Splunk Distributed Management Console, I see the following for one index:
DataAge Vs FrozenAge
1005/562
Why is DataAge greater than FrozenAge? Why is the data not rolling to frozen? Does DataAge use _indextime or the event timestamp to decide when to freeze an event? It looks like DataAge is calculated from the event timestamp, but bucket rolling appears to use _indextime.
Understanding the answer requires a solid grasp of Splunk's data retention and retirement policies:
http://docs.splunk.com/Documentation/Splunk/6.5.2/Indexer/Setaretirementandarchivingpolicy
The short answer: DataAge can exceed FrozenAge because the bucket holding the oldest events is still "open" (hot).
For example, a hot bucket's default maximum size is 10 GB on 64-bit systems and 750 MB on 32-bit systems. If the data you are adding to the index is much smaller, say 1 MB per day, it would take 10,240 days before the hot bucket fills. In that case Splunk never rolls the hot bucket to warm, and therefore can never roll it from warm to frozen, no matter how old the events inside are. (Note: Splunk does roll hot buckets whenever it stops/restarts.)
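You can confirm this with the dbinspect search command, which lists each bucket's state and event time range (the index name below is just a placeholder):

| dbinspect index=myindex
| search state=hot
| convert ctime(startEpoch) ctime(endEpoch) ctime(modTime)
| table bucketId state startEpoch endEpoch modTime sizeOnDiskMB

If the oldest startEpoch in the index belongs to a hot bucket, the events in that bucket cannot freeze until it rolls.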
One way to ensure your hot buckets roll is by setting maxHotSpanSecs and maxHotIdleSecs in your indexes.conf:
[indexName]
maxHotSpanSecs = 86401
maxHotIdleSecs = 86401
Thanks to @dwaddle for the reminder: do not use exactly 86400 or 3600 here, as those values trigger a bucket-spanning condition known as "ohSnap".
This causes the hot buckets to roll after 86,401 seconds (about one day) instead of waiting to reach their 10 GB/750 MB size limit. Restarting Splunk on the indexer will also trigger a hot bucket roll.
You can also set maxDataSize to a smaller number so buckets fill and roll sooner, but in practice you usually end up with a combination of indexes.conf settings when you're trying to hit a very specific retention period.
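As a sketch, a low-volume index with roughly 90-day retention might combine these settings like so (the index name and values are illustrative, not a recommendation):

[low_volume_index]
# roll hot buckets at least daily, even if they never fill
maxHotSpanSecs = 86401
maxHotIdleSecs = 86401
# cap bucket size at 100 MB so buckets fill and roll sooner (maxDataSize is in MB)
maxDataSize = 100
# freeze data older than ~90 days (value is in seconds)
frozenTimePeriodInSecs = 7776000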
I would also add that it's probably worth looking at some of the earliest data in the index affected by this problem to make sure that timestamp recognition is working properly. If the problem is partly being caused by bad timestamps, rolling hot buckets faster will hide the symptom but not the cause.
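A quick way to spot bad timestamps is to compare _time against _indextime on the oldest events (the index name is a placeholder):

index=myindex earliest=0
| eval lag_days = round((_indextime - _time) / 86400, 1)
| stats min(_time) as oldest_event max(lag_days) as max_lag_days
| convert ctime(oldest_event)

A large max_lag_days suggests events are being indexed with timestamps far in the past, which inflates DataAge.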
Low-throughput indexes are an interesting challenge. Make the buckets too large and they'll almost never roll, but make them too small and they can lead to big bucket counts in a large clustered environment with long retention. Either way, make a poor choice and you might be stuck with the consequences for a while!