We successfully implemented DFS replication in place of the shared mounted path.
We have 3 search heads, all configured with search head pooling enabled. Each search head has this configured in $SPLUNK_HOME/etc/system/local/server.conf
[pooling]
state = enabled
storage = c:\Splunk
We ran into an issue where saved search alerts were being run from multiple search heads, so we dedicated a single search head as our "job" server. The other 2 search heads have the scheduler pipeline disabled with this configuration setting in $SPLUNK_HOME/etc/system/local/default-mode.conf
[pipeline:scheduler]
disabled = true
The only other configuration necessary is setting up DFS replication itself: each search head replicates its local C:\Splunk directory to the other 2 search heads.
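As a rough sketch of that replication setup, assuming Windows Server's DFS Replication role and its DFSR PowerShell module are installed, a full-mesh replication group for three search heads could look like the following. The group/folder names and the SH1/SH2/SH3 host names are placeholders, not values from our environment:

```powershell
# Create a replication group and a replicated folder for the pooled storage.
New-DfsReplicationGroup -GroupName "SplunkPool"
New-DfsReplicatedFolder -GroupName "SplunkPool" -FolderName "Splunk"
Add-DfsrMember -GroupName "SplunkPool" -ComputerName "SH1","SH2","SH3"

# Full mesh between the three members; Add-DfsrConnection creates
# connections in both directions by default.
Add-DfsrConnection -GroupName "SplunkPool" -SourceComputerName "SH1" -DestinationComputerName "SH2"
Add-DfsrConnection -GroupName "SplunkPool" -SourceComputerName "SH1" -DestinationComputerName "SH3"
Add-DfsrConnection -GroupName "SplunkPool" -SourceComputerName "SH2" -DestinationComputerName "SH3"

# Point every member at its local C:\Splunk content path.
foreach ($sh in "SH1","SH2","SH3") {
    Set-DfsrMembership -GroupName "SplunkPool" -FolderName "Splunk" `
        -ComputerName $sh -ContentPath "C:\Splunk" -Force
}
```

The key design point is the full mesh: every search head replicates to the other two directly, so no single member is a replication bottleneck or single point of failure.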
This setup has worked very well for us for about a year now. One thing to note, though: the C:\Splunk\var\run\splunk\dispatch directory is extremely active, with files constantly being created, updated, and deleted. Because of this, on a very active system with lots of users, DFS may have trouble keeping up with all the changes.