I have lots of lines like the ones below from splunkd.log:
09-27-2013 15:04:28.681 +0000 INFO IndexProcessor - Starting to move buckets with the oldest latest time until we achieve compliance (current size=337945007355, max=146800640000)
09-27-2013 15:04:29.726 +0000 INFO IndexProcessor - Starting to move buckets with the oldest latest time until we achieve compliance (current size=337945445225, max=146800640000)
09-27-2013 15:04:30.761 +0000 INFO IndexProcessor - Starting to move buckets with the oldest latest time until we achieve compliance (current size=337948447673, max=146800640000)
Doing some basic math shows that the current size only ever increases. Does this mean that I'm accumulating buckets faster than they can be aged off? What should I do?
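One quick way to check the trend is to pull the reported sizes straight out of the log. This is just a sketch: it assumes the log lines have exactly the format shown above, and it writes the sample lines to a temp file for illustration (point it at your real splunkd.log instead).

```shell
# Extract the "current size" values so you can see whether the index
# is actually growing over time. Sample data is from the lines above.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
09-27-2013 15:04:28.681 +0000 INFO IndexProcessor - Starting to move buckets with the oldest latest time until we achieve compliance (current size=337945007355, max=146800640000)
09-27-2013 15:04:29.726 +0000 INFO IndexProcessor - Starting to move buckets with the oldest latest time until we achieve compliance (current size=337945445225, max=146800640000)
EOF
# Print only the byte counts, one per line, in log order.
sed -n 's/.*current size=\([0-9]*\).*/\1/p' "$LOG"
rm -f "$LOG"
```

If the numbers keep climbing past `max` over a long window, the index is gaining data faster than buckets are being aged out.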
I don't know that the message refers to hot buckets. I would take a look at the actual files on disk. Go to $SPLUNK_DB (on Linux, the default is /opt/splunk/var/lib/splunk). You should see one directory for each index, along with some other files; note that the directory for the main index is called defaultdb. Take a look at the directory tree. Also check the size of the db subdirectory and the colddb subdirectory: db holds the hot and warm buckets; colddb holds the cold buckets.
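A minimal sketch of that on-disk check, assuming the default Linux layout described above (override SPLUNK_DB if your install lives elsewhere):

```shell
# For each index directory under $SPLUNK_DB, report the size of its
# hot/warm (db) and cold (colddb) bucket subdirectories.
SPLUNK_DB=${SPLUNK_DB:-/opt/splunk/var/lib/splunk}
for idx in "$SPLUNK_DB"/*/; do
    # Skip entries that aren't index directories (no db subdirectory).
    [ -d "$idx/db" ] || continue
    printf '%s\n' "$idx"
    du -sh "$idx/db" "$idx/colddb" 2>/dev/null
done
```

Running this periodically (or under `watch`) makes it easy to see buckets draining from db into colddb.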
Does the overall size of the directories match what you have in indexes.conf? If you look at these directories over time, can you see the buckets moving from db to colddb and then aging out? I would also check the maximum size setting for your indexes (you can do this in the Splunk Manager GUI if you prefer). Is the overall size of your index large enough? If the maximum size is too small, then Splunk may be aging out data sooner than you like.
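If you prefer the command line to the Manager GUI, `btool` (which ships with Splunk) shows the effective, merged settings. A sketch, assuming the default install path; `main` is used here as an example index name:

```shell
# Show the effective maximum-size setting for the "main" index.
# btool merges every indexes.conf on the system, so this is what
# Splunk actually uses, not just one file's contents.
/opt/splunk/bin/splunk btool indexes list main | grep maxTotalDataSizeMB
```

Compare that value against the on-disk sizes you measured earlier.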
If you have a number of indexes and a high volume of data, buckets may be aging out of the indexes fairly quickly. That could be normal.
Here is the documentation on How Splunk Stores Indexes.