Search Head out of disk space due to HUGE audit and _internaldb DBs

woodcock
Esteemed Legend

I am out of disk space on my oldest search head because of this:

cd ${SPLUNK_HOME}/var/lib/splunk; du -sh ./* 2> /dev/null | grep G
20G     ./audit
5.6G    ./_internaldb
1.3G    ./summary_forwarders
1.3G    ./summary_hosts
7.1G    ./summary_sources

Is it OK to go into the audit directory and delete the oldest buckets?
What are the "best practices" for managing file space in these always-growing DBs?
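For reference, Splunk's own accounting of the same sizes can be checked with a search like this (a sketch; dbinspect output field names may vary by version):

| dbinspect index=_audit | stats sum(sizeOnDiskMB) AS totalMB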

1 Solution

lguinn2
Legend

These indexes are not "always growing." They have a maximum size, which you can change; when you restart Splunk, it will pare the indexes down to that size. You can set this in indexes.conf, but if you are unsure, it is best to do it through the GUI on the indexer(s). I believe you will need to restart Splunk after changing this value.

maxTotalDataSizeMB = 500000

Don't just delete buckets. Let Splunk do it.
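
For example, in indexes.conf on the indexer (a sketch assuming a default install; the 500000 MB cap and the 90-day frozenTimePeriodInSecs are illustrative values, not recommendations):

# $SPLUNK_HOME/etc/system/local/indexes.conf
[_audit]
# Cap the index; Splunk freezes (deletes, by default) the oldest buckets past this size.
maxTotalDataSizeMB = 500000
# Optionally also roll buckets whose newest event is older than 90 days.
frozenTimePeriodInSecs = 7776000

[_internal]
maxTotalDataSizeMB = 500000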

Although Luke makes a great point, it would be a good idea to look at these indexes and see what's going on. The SOS (Splunk on Splunk) app has a lot of reports based on the _internal and _audit indexes, and most of Splunk's built-in reports and dashboards use these indexes as well.
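
For example, a couple of quick checks against those indexes (illustrative searches; run them over a bounded time range):

index=_internal source=*metrics.log* group=per_index_thruput | stats sum(kb) AS kb by series | sort - kb
index=_audit action=search | stats count by user | sort - count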

lukejadamec
Super Champion

If you're running default settings on the internal indexes, then you need to figure out why your deployment has such a large log volume: what messages are filling up the logs?
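
A rough way to see what is filling them (a sketch; the log_level and component fields are extracted from the splunkd sourcetype by default):

index=_internal | stats count by sourcetype, source | sort - count
index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR) | stats count by component | sort - count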

lguinn2
Legend

Splunk indexes its own logs - so the indexes contain the current log data, and as much historical log data as will fit in the allotted space. The idea is that you can use Splunk to search Splunk's own logs, just like you can search any other log file!

Don't delete the log files themselves, though: if Splunk were broken, you might need to inspect them manually.
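
For example, the same splunkd.log that sits on disk is searchable like this (illustrative):

index=_internal source=*splunkd.log* log_level=ERROR | head 20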


woodcock
Esteemed Legend

This is not in ${SPLUNK_HOME}/var/log/splunk/, though. Or are you saying that the db space is merely a place to hold old, already-indexed copies of the logs?
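
For comparison, the two locations side by side (a sketch; paths assume a default install):

du -sh ${SPLUNK_HOME}/var/log/splunk                 # the raw text log files
du -sh ${SPLUNK_HOME}/var/lib/splunk/_internaldb     # the indexed copies, stored as buckets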
