Getting Data In

Total space for index

skippylou
Communicator

I see a lot in the docs that shows how to set size limits on buckets, but I can't find out whether there is a way to limit the total size of an index and have the oldest data deleted when room is needed for new data - a FIFO approach.

Basically, I want to ensure that new logs coming in aren't blocked on me clearing space or archiving.

Thoughts?

Thanks,

Scott

1 Solution

ftk
Motivator

You can set maxTotalDataSizeMB on a per-index basis in indexes.conf.

maxTotalDataSizeMB =

  • The maximum size of an index (in MB).
  • If an index grows larger, the oldest data is frozen.
  • Defaults to 500000.

Once the data is moved to frozen, by default it is deleted: http://www.splunk.com/base/Documentation/latest/Admin/HowSplunkstoresindexes

After changing indexes.conf you will have to restart your Splunk instance.
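
A minimal example stanza (a sketch only - the index name and size below are placeholders, not values from this thread):

  [my_index]
  # Cap the whole index at roughly 50 GB.
  # When it grows past this, the oldest buckets are frozen,
  # which by default means deleted.
  maxTotalDataSizeMB = 50000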

Lowell
Super Champion

Be sure to check out this resource as well: http://www.splunk.com/wiki/Deploy:UnderstandingBuckets

Lowell
Super Champion

You are correct, a whole bucket is frozen (archived/deleted) at once. The 10GB default is for 64-bit systems; it's 700MB for 32-bit systems, so I think it's safe to say that anything in between should be fine. The issue is less about the size of your buckets than how many buckets you will end up with at that size. A hundred or two shouldn't be a problem, but 10,000 buckets will be. Having buckets with a smaller time span can improve performance if your searches are generally over small time ranges... so, yeah, it's complicated.
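
If you do experiment with smaller buckets, the setting lives in the same indexes.conf stanza as maxTotalDataSizeMB. A rough sketch, assuming a hypothetical index called my_index and a 1 GB bucket target (placeholder values, not from this thread):

  [my_index]
  maxTotalDataSizeMB = 50000
  # Roll hot buckets to warm at about 1 GB instead of the 10 GB
  # auto_high_volume size; "auto" (roughly 750 MB) is another common choice.
  maxDataSize = 1000

Smaller buckets mean more buckets for the same total index size, so keep an eye on the bucket count as described above.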

skippylou
Communicator

Thanks, after re-reading that it makes more sense now. Just to clarify: when it deletes, it seems it has to delete a whole bucket, which defaults to 10GB based on the maxDataSize setting for buckets. Have people seen any performance penalty from dropping that lower?
