Getting Data In

Quick sanity check on my indexes.conf config?

daniel333
Builder

All,

Can you do a quick peer review of my index config here?

My expectation is that we can index as much as we want, but once the total goes over 1/2 TB, the oldest buckets will be dropped. Indexing itself will never stop, giving my user 1/2 TB to use however he pleases.

[volume:akamaidisk]
path = /akamaidata
maxVolumeDataSizeMB = 500000

[akamai2]
homePath = volume:akamaidisk/akamai2/db
coldPath = volume:akamaidisk/akamai2/colddb
thawedPath = $SPLUNK_DB/akamai2/thaweddb
maxDataSize = auto_high_volume
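Once data is flowing, a search like the following can sanity-check how much disk the index is actually consuming (assuming the index name akamai2 from the stanza above; sizeOnDiskMB is a field returned by dbinspect):

| dbinspect index=akamai2
| stats sum(sizeOnDiskMB) AS totalMB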

ddrillic
Ultra Champion

The 1/2 TB limit is configurable - 500000 MB happens to be the default maximum size of an index - and you can change it if you like.

The setting is maxTotalDataSizeMB, and the following thread discusses it: how to manage the size of my indexes to fit in my volumes.
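As a sketch, the per-index cap sits in the same stanza as the paths (the 500000 value here just mirrors the volume size from the question; adjust to taste):

[akamai2]
homePath = volume:akamaidisk/akamai2/db
coldPath = volume:akamaidisk/akamai2/colddb
thawedPath = $SPLUNK_DB/akamai2/thaweddb
maxDataSize = auto_high_volume
# Cap the total size of this one index. When exceeded, the oldest
# buckets roll to frozen (deleted by default, unless a
# coldToFrozenDir/coldToFrozenScript is configured).
maxTotalDataSizeMB = 500000

Note that maxVolumeDataSizeMB on the volume and maxTotalDataSizeMB on the index are enforced independently; whichever limit is hit first triggers bucket freezing.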
