I have multiple search heads and multiple indexers in a cluster. I've deployed SOS to the cluster master (also the deployment server and license manager), and to all search heads. I've deployed the TA-SOS to all indexers, enabled scripting on all servers, and restarted the entire environment.
All _internal data is being sent to the cluster.
On my master node, SOS shows only itself and the search peers. This is the same for the rest of the search heads.
I want to be able to see everything from my master node. Am I hoping for too much, or have I missed something in the configurations?
Thanks...
I've made some headway -
I edited $SPLUNK_HOME/etc/apps/sos/lookups/splunk_servers_cache.csv and added the other search heads to the list. SOS still does not report their server hardware and OS statistics, but I can now see the search statistics from my master node.
That's mostly what I wanted.
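For anyone scripting the same workaround: a minimal sketch of appending search heads to the cache lookup. Note the column layout below (hostname plus management port) is an assumption for illustration only; copy an existing row from your own splunk_servers_cache.csv and match its fields exactly.

```python
import csv
import os
import tempfile

def add_search_head(cache_path, host, mgmt_port="8089"):
    """Append a search head entry to SOS's server cache lookup.

    ASSUMPTION: the two-column layout (host, management port) is
    hypothetical -- mirror the real columns in your cache file.
    """
    with open(cache_path, "a", newline="") as f:
        csv.writer(f).writerow([host, mgmt_port])

# Demonstrate against a throwaway copy rather than the live lookup:
path = os.path.join(tempfile.mkdtemp(), "splunk_servers_cache.csv")
with open(path, "w", newline="") as f:
    csv.writer(f).writerow(["splunk_server", "mgmt_port"])  # hypothetical header
add_search_head(path, "sh01.example.com")
add_search_head(path, "sh02.example.com")
print(open(path).read())
```

Restart (or reload the lookup on) the instance running SOS after editing so the dashboards pick up the new rows.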
I've made some headway -
I edited the /$SPLUNK_APP/etc/apps/sos/lookups/splunk_servers_cache.csv file by adding the additional search heads into the list. SOS does not report the server hardware and OS statistics, but I can now see the search statistics from my master node.
That's mostly want I wanted.
This is indeed how one should proceed. Instance auto-discovery in S.o.S piggy-backs on distributed search, which is why search heads cannot discover each other but only their own search peers.
The missing hardware and OS details for those instances are expected for the same reason: S.o.S relies on distributed search to collect them.
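A consequence of the above is that registering the other search heads as distributed-search peers of the node running S.o.S should let auto-discovery find them. A hedged sketch of building that REST call (Splunk's `search/distributed/peers` endpoint); the hostnames and credentials here are placeholders, and you should verify the endpoint against your Splunk version's REST API reference before using it:

```python
from urllib.parse import urlencode

def build_add_peer_request(mgmt_uri, peer_host, peer_port=8089,
                           remote_user="admin", remote_pass="changeme"):
    """Build the URL and form body for registering `peer_host` as a
    distributed-search peer of the instance at `mgmt_uri`.

    ASSUMPTION: parameter names follow the documented distributed-search
    peers endpoint; the hosts/credentials are illustrative placeholders.
    """
    url = f"{mgmt_uri}/services/search/distributed/peers"
    body = urlencode({
        "name": f"{peer_host}:{peer_port}",
        "remoteUsername": remote_user,
        "remotePassword": remote_pass,
    })
    return url, body

# Example: register a search head with a (hypothetical) master node.
url, body = build_add_peer_request("https://master.example.com:8089",
                                   "sh01.example.com")
print(url)
print(body)
```

Sending this as an authenticated POST (e.g. via `curl -k -u admin:...`) would add the peer; editing the cache CSV by hand, as above, only works around the discovery step without enabling full data collection.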