Greetings;
I read some postings here that show how to adjust the sum-total disk space that SPLUNK uses. My root partition is below the magic 5GB limit, and my indexing has stopped. I've done this so far:
1) I looked inside /opt/splunk/etc/system/default/indexes.conf and saw the setting frozenTimePeriodInSecs, but the file says, "make no changes here, make changes in /opt/splunk/etc/system/local instead."
2) The files in that local directory are inputs.conf, migration.conf and server.conf. I added frozenTimePeriodInSecs to inputs.conf and set it equal to 1.5 years (47,088,000 seconds).
Upon restarting Splunk, I now get this message:
Checking conf files for problems...
Invalid key in stanza [default] in /opt/splunk/etc/system/local/inputs.conf, line 3: frozenTimePeriodInSecs (value: 47088000)
Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
Can anyone give me a rundown of the mathematical relationship between this value and the other bucket-retention settings, so that my config is consistent?
frozenTimePeriodInSecs is an indexes.conf setting, not an inputs.conf one. If you put it under a stanza in indexes.conf, it will delete buckets from that index whose youngest event is older than 47,088,000 seconds, i.e. 545 days.
Big data, small data, huge data - Splunk don't mind 😄
As an alternative to guessing how old your data can be, you can also restrict indexes by size. That should be easier to match up with your existing disks; see maxTotalDataSizeMB in the indexes.conf docs.
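A minimal sketch of a size cap, assuming a hypothetical index named web and a 2GB budget (both the index name and the number are illustrative, not recommendations):

```ini
# $SPLUNK_HOME/etc/system/local/indexes.conf
[web]
# Roll the oldest buckets to frozen (deleted, by default) once the
# index's total size on disk exceeds this many megabytes.
maxTotalDataSizeMB = 2048
```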
I can certainly do that, even though the file says, "don't make changes here." The reasoning is that the settings will be reset when you upgrade the product, which is only a small problem. I guess I can make the change in indexes.conf anyway...
I'll move my new line into a new indexes.conf file in the 'local' directory. I initially put it into the default directory and bounced Splunk, and it worked: I was under 5GB, and my indexing started up again. I also moved the magic threshold from 5000MB down to 4500MB. I run this server for educational purposes; with 100 messages per minute coming in and two hosts reporting, it took 2 years for me to amass 5GB of data 🙂
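For reference, the "magic threshold" mentioned above maps to the minFreeSpace setting in server.conf; a sketch of lowering it as described (4500 is the value from this thread, not a recommendation):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf
[diskUsage]
# Pause indexing when free space on the partition holding the indexes
# drops below this many megabytes (the shipped default is 5000).
minFreeSpace = 4500
```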
Don't make the changes in the indexes.conf file in the $SPLUNK_HOME/etc/system/default folder. This is bad: those files won't persist through an upgrade.
Instead, create an indexes.conf in the $SPLUNK_HOME/etc/system/local folder, create the stanza for your index (or [default]), and add frozenTimePeriodInSecs to that file.
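Something like this, using the 47,088,000-second value from the question (the [default] stanza applies it to every index; use a named stanza instead to target a single index):

```ini
# $SPLUNK_HOME/etc/system/local/indexes.conf
[default]
# Buckets whose newest event is older than this many seconds
# (~1.5 years) are rolled to frozen and, by default, deleted.
frozenTimePeriodInSecs = 47088000
```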
You should never edit the files in /default. If you need to change something there, create the file in the corresponding local directory and add the settings there. When Splunk reads configs, it starts with what's in default and then reads local, and the settings in local are merged with, or override, the defaults.
Do you have access to the Splunk on Splunk (SoS) app or the built-in Distributed Management Console (DMC)?
You should be able to run a health check if you are on the latest Splunk Enterprise version, 6.5. Even if you aren't, you should be able to check license usage/distribution, indexing bucket usage, performance-related stats, etc.
This is a stand-alone instance of Splunk running at my house 🙂 I've never used SoS or DMC before. I understand there's some way to fine-tune the bucket-retention settings; I guess they all have to work together. When I knocked my retention value back from 6 years to 1.5 years, were there other settings that don't 'mesh' with my 1.5-year setting?