
Splunk max index size configuration

Bhuvnesh
Engager

Hi,

I have a maximum of 63 GB available for Splunk indexes. I have 3 main indexes, and their sizes should be as below.

abcd 38G
abcd-fine 15G
abcd-mon 10G

I have prepared the local/indexes.conf file for this. Kindly review it and suggest whether it is good or if I am missing something.

[abcd]
homePath = $SPLUNK_DB/abcd/db
coldPath = $SPLUNK_DB/abcd/colddb
thawedPath = $SPLUNK_DB/abcd/thaweddb
coldToFrozenScript = compressedExportabcd.sh
maxHotBuckets = 10
maxConcurrentOptimizes = 6
maxTotalDataSizeMB = 38912
maxHotIdleSecs = 86400
maxDataSize = 2048

[abcd-audit]
homePath = $SPLUNK_DB/abcd-audit/db
coldPath = $SPLUNK_DB/abcd-audit/colddb
thawedPath = $SPLUNK_DB/abcd-audit/thaweddb
coldToFrozenScript = compressedExportabcdAudit.sh
maxHotBuckets = 10
maxConcurrentOptimizes = 6
maxTotalDataSizeMB = 512
maxHotIdleSecs = 86400
maxDataSize = 50

[abcd-monitoring]
homePath = $SPLUNK_DB/abcd-mon/db
coldPath = $SPLUNK_DB/abcd-mon/colddb
thawedPath = $SPLUNK_DB/abcd-mon/thaweddb
coldToFrozenScript = compressedExportabcdMon.sh
maxHotBuckets = 10
maxConcurrentOptimizes = 6
maxTotalDataSizeMB = 11264
maxHotIdleSecs = 86400
maxDataSize = auto

Many thanks,
Bhuvnesh


twkan
Splunk Employee

It is not ideal to change maxDataSize, as it defaults to 10 GB on 64-bit operating systems and that is an optimal value. However, you have very little storage space available, which makes things a bit challenging with such a huge chunk size of 10 GB per bucket.

Basically, at 10 GB per bucket there is not much room to work with, since some of your indexes have only about 10 GB of storage space in total.

An important setting that is missing is maxWarmDBCount; its default of 300 is too large for the amount of storage you currently have. What this means is that your indexes will never roll buckets over to 'cold', and the coldToFrozenScript will never run because there are no cold buckets to speak of.

To get around this, we have to reduce the number of warm buckets accordingly so that buckets do roll over to cold.

For the sake of illustration, I am going to use 750 MB per bucket.

The general formula goes like this:

For abcd, and assuming minFreeSpace is using the default value of 2G:

maxWarmDBCount = (LocalDiskSpace - minFreeSpace - (maxHotBuckets * maxDataSize)) / maxDataSize
maxWarmDBCount = (38G - 2G - (10 * 0.75G)) / 0.75G
maxWarmDBCount = 38
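
To sanity-check the arithmetic, here is a minimal Python sketch of the same calculation (sizes in GB; the 750 MB bucket size and the 2 GB minFreeSpace default are the assumptions above, and the 11 GB figure is only there to illustrate a smaller index):

def max_warm_db_count(local_disk_gb, min_free_gb=2.0,
                      max_hot_buckets=10, max_data_size_gb=0.75):
    # maxWarmDBCount = (LocalDiskSpace - minFreeSpace
    #                   - maxHotBuckets * maxDataSize) / maxDataSize
    usable_gb = local_disk_gb - min_free_gb - max_hot_buckets * max_data_size_gb
    return int(usable_gb // max_data_size_gb)

print(max_warm_db_count(38))   # abcd: 38 warm buckets, as in the worked example above
print(max_warm_db_count(11))   # an 11 GB index leaves room for only 2 warm buckets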

In this case, my indexes.conf for abcd will look something like:

[abcd]
homePath = $SPLUNK_DB/abcd/db
coldPath = $SPLUNK_DB/abcd/colddb
thawedPath = $SPLUNK_DB/abcd/thaweddb
coldToFrozenScript = compressedExportabcd.sh
maxHotBuckets = 10
maxConcurrentOptimizes = 6
maxTotalDataSizeMB = 38912
maxHotIdleSecs = 86400
maxDataSize = 750
maxWarmDBCount = 38
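
As a side note on the coldToFrozenScript entries: Splunk invokes the script with the path of the bucket about to be frozen as its last argument, and the script is expected to archive that bucket and exit 0 (Splunk then removes the bucket from the index; a non-zero exit makes it retry later). Here is a minimal Python sketch of that contract, with a hypothetical archive directory, just to show the shape your compressedExport*.sh scripts likely follow:

import os
import sys
import tarfile

ARCHIVE_DIR = "/opt/splunk_frozen"   # hypothetical destination, adjust as needed

def freeze(bucket_path):
    # Archive the whole bucket directory as a gzipped tarball.
    if not os.path.isdir(ARCHIVE_DIR):
        os.makedirs(ARCHIVE_DIR)
    bucket_name = os.path.basename(bucket_path.rstrip("/"))
    with tarfile.open(os.path.join(ARCHIVE_DIR, bucket_name + ".tar.gz"), "w:gz") as tar:
        tar.add(bucket_path, arcname=bucket_name)

if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit("usage: coldToFrozen.py <bucket_path>")
    freeze(sys.argv[-1])   # Splunk passes the bucket path as the last argument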

Bhuvnesh
Engager

Hi,

I am using version 4.1.6:

[root@abcd1 rawdata]# splunk version

Splunk 4.1.6 (build 89596)

Thanks,

Bhuvnesh
