Monitoring Splunk

Search Peer indexer Minimum Free Disk Space Reached

brdr
Contributor

I've read some Answers on this issue and understand how to solve it by adjusting server.conf. The question I have is how exactly to trace this error back to the object (search, report, alert, etc.) that is causing the issue. We have a couple thousand of these objects, multiple search clusters, and 20 indexes in the cluster. It would be great to have steps to isolate the offending object.

Thanks,
brdr


jnudell_2
Builder

Hi @brdr ,

This message is usually an indication of improperly configured storage for indexing operations. If your indexers have less than 5GB of free disk space on the hot/cold storage volumes (the default threshold for this message, set by minFreeSpace in server.conf; see the sketch after this list), you probably have one of the following situations:
1. You have not properly configured indexes.conf volume management settings that allow Splunk to clean up space as needed on the hot/cold volumes.
2. You have not provided the recommended minimum disk space of 300GB for /opt/splunk (or wherever Splunk is installed), and search operations are overfilling that space, causing this message.
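
To be explicit about that threshold: it is the minFreeSpace setting in server.conf on the indexers, and 5000 MB is the shipped default. A minimal sketch, assuming a default install (raise or lower it only if you understand the trade-offs):

[diskUsage]
# Splunk blocks indexing and searching when free space on a monitored
# partition (index volumes, dispatch directory) drops below this value, in MB.
minFreeSpace = 5000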

From your description, it sounds like #2 in this case. If it were me, I would investigate the server(s) in question to determine which folder is causing the issue (most likely something in /opt/splunk/var). From the CLI (Linux) I would use the following command:

du -sh /opt/splunk/*

This will show how much storage is being used by each directory in /opt/splunk. If it's var, then I would check var as well:
du -sh /opt/splunk/var/*

Depending upon which directory below that is causing the issue, there are different steps to take, but you would have an idea of where the offending data resides.
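
If the dispatch directory turns out to be the space hog, the directory names themselves help with attribution: each subdirectory is a search ID (SID), and SIDs for scheduled searches typically embed the owner, app, and saved-search name (or a hash of it), which points you back at the offending object. A quick sketch, assuming a default /opt/splunk install on Linux:

# largest dispatch artifacts first; look at the SID naming to see which
# scheduled search or ad hoc user produced them
du -sh /opt/splunk/var/run/splunk/dispatch/* | sort -rh | head -20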

If you have NOT allocated the default minimum of 300GB for /opt/splunk, I would highly recommend that you do so. If the hot/cold data shares the same mount point as /opt/splunk, then I would recommend reviewing your indexes.conf and implementing volume/index management that rolls data appropriately and does not allow your disk to fill up. Typically, I recommend configuring things so that 5% - 10% of the volume is left as free space.
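
A minimal indexes.conf sketch of that volume-based approach (the volume name, index name, and size cap below are placeholders, not recommendations):

# cap the total space Splunk may use on the mount point that holds hot/warm
# and cold buckets, so data rolls/freezes before the disk fills
[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 900000

# point indexes at the volume instead of hard-coded paths
# (thawedPath cannot reference a volume)
[my_index]
homePath = volume:primary/my_index/db
coldPath = volume:primary/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb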

I hope this helps.

nareshinsvu
Builder

I ran into a similar issue.

My indexers had huge files under $SPLUNK_HOME/var/run/searchpeers. These are replicated copies of the knowledge bundles from the search heads, including a lot of unwanted lookup files. After clearing those lookups on the search heads, disk usage on the indexers came down.
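
If replicated lookups are the culprit, one option (on the search heads) is to keep specific oversized lookups out of the knowledge bundle with a replication blacklist in distsearch.conf. A sketch only; the stanza key and the app/lookup names below are placeholders, and the exact regex matching rules should be checked against the distsearch.conf spec for your version:

[replicationBlacklist]
# exclude one oversized lookup from the bundle replicated to the indexers
exclude_huge_lookup = apps[/\\]myapp[/\\]lookups[/\\]huge_lookup\.csv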

Did you search for such files, or do some housekeeping?


Vijeta
Influencer

@brdr you can use the DMC (Monitoring Console) and look at search activity there; it will give you the top 10 long-running searches.
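
Along the same lines, a search against the audit index can rank the heaviest completed searches and tie them back to a saved-search name. A sketch; verify the field names (total_run_time, savedsearch_name) against the audit events in your environment:

index=_audit action=search info=completed
| stats max(total_run_time) AS run_time_sec count BY user savedsearch_name
| sort - run_time_sec
| head 20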
