Total space for index

skippylou
Communicator

I see a lot in the docs, etc. showing how to set limits on buckets, but I can't find whether there is a way to limit the size of an index and have old data deleted as room is needed for new data - a FIFO approach.

Basically, I want to ensure that any new logs coming in aren't blocked on me clearing space or archiving, etc.

Thoughts?

Thanks,

Scott

1 Solution

ftk
Motivator

You can set maxTotalDataSizeMB on a per-index basis in indexes.conf.

maxTotalDataSizeMB =

  • The maximum size of an index (in MB).
  • If an index grows larger, the oldest data is frozen.
  • Defaults to 500000.

Once the data is moved to frozen, by default it is deleted: http://www.splunk.com/base/Documentation/latest/Admin/HowSplunkstoresindexes

After changing indexes.conf you will have to restart your Splunk instance.
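
For illustration, a minimal indexes.conf stanza along these lines would cap an index and let old buckets roll off on their own (the index name, paths, and 100 GB figure are placeholders, not values from this thread):

  # indexes.conf - illustrative sketch; name, paths, and sizes are
  # placeholders, not recommendations from this thread
  [my_index]
  homePath   = $SPLUNK_DB/my_index/db
  coldPath   = $SPLUNK_DB/my_index/colddb
  thawedPath = $SPLUNK_DB/my_index/thaweddb
  # Cap the whole index at ~100 GB. When the cap is hit, the oldest
  # bucket rolls to frozen, and with no coldToFrozenDir or
  # coldToFrozenScript configured, frozen data is simply deleted.
  maxTotalDataSizeMB = 100000

That gives the FIFO behavior asked about: incoming data is never blocked, Splunk just ages out the oldest bucket to stay under the cap.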

Lowell
Super Champion

Be sure to check out this resource as well: http://www.splunk.com/wiki/Deploy:UnderstandingBuckets

Lowell
Super Champion

You are correct, a whole bucket is frozen (archived/deleted) at once. The 10GB default is for 64-bit systems; it's 700MB for 32-bit systems. So I think it's safe to say that anything in the middle should be fine. The issue is less about the size of your buckets than about how many buckets you will end up with at that size. A hundred or two shouldn't be a problem, but 10,000 buckets will be. Having buckets that span a smaller time range could improve performance if your searches generally cover small time ranges... so, yeah, it's complicated.
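
To make the bucket-count arithmetic concrete, here is an illustrative stanza (sizes are examples only, not recommendations): at a 100 GB index cap, 1 GB buckets mean on the order of 100 buckets, comfortably in the "hundred or two" range, whereas 10 MB buckets would mean roughly 10,000:

  # indexes.conf - illustrative bucket sizing only
  [my_index]
  maxTotalDataSizeMB = 100000
  # maxDataSize is the maximum size of a hot bucket in MB (or the
  # presets "auto" / "auto_high_volume"). 1000 MB buckets under a
  # 100 GB cap works out to ~100 buckets, and freezing then trims
  # ~1 GB at a time instead of ~10 GB.
  maxDataSize = 1000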

skippylou
Communicator

Thanks, after re-reading that it makes more sense now. Just to clarify: when Splunk deletes, it apparently has to drop a whole bucket at once, which defaults to 10GB based on maxDataSize. Has anyone seen a performance penalty from setting that lower?
