Hi there!
In our Splunk Enterprise deployment, the disk is getting almost full. However, we don't seem to have enough data to fill 200GB of disk space. How can I find out the detailed space usage and reduce it as necessary? Is there any way to automate this process in the future?
Best regards
Hyder
How much data do you ingest daily, and how long do you keep it? Have you enabled and reviewed the Distributed Management Console? It should show you some of those numbers.
http://docs.splunk.com/Documentation/Splunk/latest/DMC/DMCoverview
If you need to remove data, you could always shrink your retention period to purge old events.
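If you want to see where the space is actually going before changing anything, a search along these lines should break disk usage down per index (dbinspect is a built-in search command; sizeOnDiskMB is one of its standard output fields, reported per bucket):

| dbinspect index=*
| stats sum(sizeOnDiskMB) as totalMB by index
| sort - totalMB

Running that from the search head against all indexes should make it obvious whether one or two indexes dominate the 200GB.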
Thanks for your answer. We have data on one drive only, and that is filling up; however, there is plenty of space available on the machine's other drives. Looking at indexes.conf, I found several settings that could be changed. We have 30-40 indexes whose coldPath, homePath, and thawedPath could be modified in /opt/splunk/etc/apps/search/local/indexes.conf. Is there anywhere these values can be changed globally, so that indexes created in the future pick up this configuration by default as well? At this point we don't want to delete anything, just to move the data to a new directory after 1 year for some indexes and after 6 months for others.
Any advice on the configuration details would be highly appreciated.
Cheers.
/opt/splunk/etc/apps/search/local/indexes.conf seems to be the right indexes.conf, and you will probably need to change each value separately so that it refers to the other drives:
[xxxx]
coldPath = $SPLUNK_DB/xxxx/colddb
homePath = $SPLUNK_DB/xxxx/db
thawedPath = $SPLUNK_DB/xxxx/thaweddb
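On the global question: settings placed under a [default] stanza in indexes.conf apply to every index that doesn't override them, including indexes created later. A sketch of what that could look like (the /bigdisk path is just a placeholder for one of your other drives, and the frozenTimePeriodInSecs values are approximate second counts for 1 year and 6 months):

[default]
# roughly 1 year; events older than this are rolled to frozen
frozenTimePeriodInSecs = 31536000

[xxxx]
coldPath = /bigdisk/splunk/xxxx/colddb
# this index only keeps roughly 6 months
frozenTimePeriodInSecs = 15552000
# archive frozen buckets to another directory instead of deleting them
coldToFrozenDir = /bigdisk/splunk/frozen/xxxx

Note that Splunk deletes frozen data by default; setting coldToFrozenDir (or coldToFrozenScript) is what turns "purge after N days" into "move to a new directory after N days", which sounds like what you want. Also be aware that changing homePath/coldPath requires a Splunk restart, and existing buckets have to be moved to the new location manually while Splunk is stopped.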