Check the $SPLUNK_HOME/etc/system/local/server.conf file for the attribute minFreeSpace = <num>.
This attribute sets the minimum free disk space allowed on the partition that holds the indexed data. See the server.conf spec:
http://docs.splunk.com/Documentation/Splunk/6.0/Admin/Serverconf
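As a sketch, the attribute lives in the [diskUsage] stanza of server.conf (the 5000 MB value below is just an example; the default differs by Splunk version, so check the spec above):

```ini
[diskUsage]
# Pause indexing when free space on the partition holding indexed data
# drops below this many MB (example value)
minFreeSpace = 5000
```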
The solution is to increase the size of the audit partition, reduce the volume of data being indexed, decrease the age at which data is frozen (deleted or archived), or decrease the maximum size the index can reach before it is frozen.
If you cannot increase the size of your disk, then you need to modify your data retirement policy. See the docs at:
http://docs.splunk.com/Documentation/Splunk/6.0/Indexer/Setaretirementandarchivingpolicy
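A retirement policy is set per index in indexes.conf. A minimal sketch, assuming a 90-day retention window and a 250 GB cap (the index name and values here are examples, not recommendations):

```ini
[main]
# Freeze (delete or archive) buckets once their newest event is older than 90 days
frozenTimePeriodInSecs = 7776000
# Also freeze the oldest buckets whenever the index exceeds this total size
maxTotalDataSizeMB = 250000
```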
If it is the audit index or other internal indexes that are causing the problem, find out why they are growing so large so fast. If that growth looks normal, then the partition where the audit index is stored is probably simply too small.
I believe it means that if free disk space on the partition holding the index falls below the configured number of MB, indexing is paused.
If you would like to control the size of your index, I would recommend setting the following in indexes.conf:
[indexName]
maxTotalDataSizeMB = 250000
http://docs.splunk.com/Documentation/Splunk/6.0/Indexer/Setaretirementandarchivingpolicy
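To pick a sensible maxTotalDataSizeMB, you can estimate on-disk size from daily ingest and retention. A rough back-of-the-envelope sketch (the 50% compression ratio is an assumption; actual compression varies a lot by data type):

```python
def estimated_index_size_mb(daily_ingest_mb, retention_days, compression_ratio=0.5):
    """Rough on-disk size of an index: raw volume over the retention
    window times an assumed compression ratio (varies by data type)."""
    return daily_ingest_mb * retention_days * compression_ratio

# Example: 10 GB/day ingested, kept for 90 days
size = estimated_index_size_mb(10_000, 90)
print(f"maxTotalDataSizeMB should be at least ~{size:.0f} MB")
```

If the estimate comes out larger than the partition, no setting will save you; either retention has to shrink or the disk has to grow.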
I am getting the following disk-space issue related to this setting:
You are low in disk space on partition "/opt/splunk/var/lib/splunk/audit/db". Indexing has been paused. Will resume when free disk space rises above 1000MB.
Every time I lower the size here, the issue comes back after a few days. Please suggest a permanent solution.