Getting Data In

Total space for index

skippylou
Communicator

I see a lot in the docs that shows how to set limits on buckets, but I can't seem to find whether there is a way to limit the size of an index and have old data deleted when room is needed for new data - a FIFO approach.

Basically, I want to ensure that incoming logs are never blocked while waiting on me to clear space, archive, etc.

Thoughts?

Thanks,

Scott

1 Solution

ftk
Motivator

You can set maxTotalDataSizeMB on a per index basis in indexes.conf.

maxTotalDataSizeMB =

  • The maximum size of an index (in MB).
  • If an index grows larger, the oldest data is frozen.
  • Defaults to 500000.

Once data is rolled to frozen, it is deleted by default: http://www.splunk.com/base/Documentation/latest/Admin/HowSplunkstoresindexes

After changing indexes.conf, you will have to restart your Splunk instance.
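
For example, a minimal indexes.conf sketch (the index name, paths, and 50 GB cap below are placeholders, not values from this thread):

  [my_index]
  homePath   = $SPLUNK_DB/my_index/db
  coldPath   = $SPLUNK_DB/my_index/colddb
  thawedPath = $SPLUNK_DB/my_index/thaweddb
  # Cap the whole index at roughly 50 GB. When the cap is exceeded,
  # the oldest bucket rolls to frozen and, with no coldToFrozenDir or
  # coldToFrozenScript configured, is deleted.
  maxTotalDataSizeMB = 50000

If you would rather archive than delete, coldToFrozenDir (or coldToFrozenScript) can be set in the same stanza so frozen buckets are copied out before they are removed from the index.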


Lowell
Super Champion

Be sure to check out this resource as well: http://www.splunk.com/wiki/Deploy:UnderstandingBuckets

0 Karma

Lowell
Super Champion

You are correct: a whole bucket is frozen (archived/deleted) at once. The 10GB default is for 64-bit systems; it's 700MB for 32-bit systems, so I think anything in between is a safe choice. The issue is less about the size of your buckets than about how many buckets you will end up with at that size. A hundred or two shouldn't be a problem, but 10,000 buckets will be. Buckets that span a smaller time range could also improve performance if your searches are generally over small time ranges... so, yeah, it's complicated.
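
If you do want smaller buckets, the per-bucket size is set with maxDataSize in the same indexes.conf stanza; a hedged sketch (the 1 GB figure is only illustrative):

  [my_index]
  # Roll hot buckets at roughly 1 GB (the value is in MB) instead of
  # the 10GB discussed above; "auto" and "auto_high_volume" let Splunk
  # choose a size for you.
  maxDataSize = 1000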

0 Karma

skippylou
Communicator

Thanks, after re-reading that it makes more sense now. Just to clarify: when data is deleted, it seems a whole bucket has to be deleted at once, and buckets default to 10GB based on maxDataSize. Have people seen any performance penalty from dropping that lower?

0 Karma