Deployment Architecture

How do I remove a search head that is appearing under the Indexer Clustering page?

dawid_schulz
Explorer

Hi,

I've got a problem - I added an additional search head to a Splunk cluster (not a search head cluster) and I can see it in the search heads list on the Indexer Clustering: Master Node page. I was testing search just to check something, and now I want to delete it from the cluster, but with no success. I ran splunk remove search-server but got:

In handler 'distsearch-peer': There is no search peer with a URI of xxx.xxx.xxx.xxx. Either the URI you entered is incorrect or the search peer has already been removed.

But the URI is correct, of course, and the peer still appears on the Indexer Clustering page. splunk remove cluster-peers with the GUID isn't working either.

Any ideas how to clean this mess up?
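For reference, the commands tried were of this form (the IP, management port, credentials, and GUID below are placeholders):

```shell
# Remove the peer from the distributed search configuration
# (run on the instance where the peer was added):
splunk remove search-server -url xxx.xxx.xxx.xxx:8089 -auth admin:changeme

# Remove the peer record by GUID on the cluster master:
splunk remove cluster-peers -peers <guid>
```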

Martin_Doering
Explorer

I also had the same issue (using Splunk version 7.2.7), and a restart of the cluster master did not help.

First, you need to disable indexer clustering on the search head you want to remove. In Splunk Web, go to Settings --> Indexer Clustering. Then remove the cluster master you do not want to collect data from; this only works if it is not the only cluster master. If it is the only one, click Edit --> Disable Indexer Clustering instead.
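Equivalently, you can disable clustering on the search head by removing (or commenting out) its [clustering] stanza in server.conf and restarting splunkd. A sketch, with placeholder values mirroring the search head stanza posted later in this thread:

```ini
# $SPLUNK_HOME/etc/system/local/server.conf on the search head.
# Comment out (or delete) this stanza, then restart splunkd:
#[clustering]
#master_uri = https://<master-host>:8089
#mode = searchhead
#multisite = true
```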

Afterwards, the search head will show as "Unavailable" in the monitoring console of the cluster master.

After some digging through the configuration files of the cluster master (looking for the search head's IP address and host name), I found some leftovers of the removed search head:

  1. In the [settings] stanza of $SPLUNK_HOME/etc/apps/splunk_monitoring_console/local/splunk_monitoring_console_assets.conf, in the configuredPeers key
  2. In $SPLUNK_HOME/etc/apps/splunk_monitoring_console/lookups/assets.csv
  3. In $SPLUNK_HOME/etc/apps/splunk_monitoring_console/lookups/dmc_forwarder_assets.csv

After removing those leftovers and restarting splunkd, the search head was gone from the cluster master as well.
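The lookup cleanup above can be sketched as a small helper; the function name and the example hostname are ours, while the file paths come from the steps above (the configuredPeers key in splunk_monitoring_console_assets.conf still needs a manual edit):

```shell
# remove_stale_host: drop every row mentioning the stale search head from one
# of the monitoring console lookup files, keeping a .bak copy first.
remove_stale_host() {
  file="$1"; host="$2"
  [ -f "$file" ] || return 0                      # nothing to do if the file is absent
  cp "$file" "$file.bak"                          # keep a backup before editing
  grep -v "$host" "$file.bak" > "$file" || true   # grep exits 1 if no rows survive
}

# Example (hypothetical hostname "old-sh01"):
# remove_stale_host "$SPLUNK_HOME/etc/apps/splunk_monitoring_console/lookups/assets.csv" old-sh01
# remove_stale_host "$SPLUNK_HOME/etc/apps/splunk_monitoring_console/lookups/dmc_forwarder_assets.csv" old-sh01
```

Restart splunkd afterwards, as described above.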

dkrichards16
Path Finder

I had the same issue, and a restart of Splunk on the cluster master cleared it up.
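For completeness, the restart on the cluster master is just the standard command:

```shell
$SPLUNK_HOME/bin/splunk restart
```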


jchampagne_splu
Splunk Employee

What do your server.conf files look like on the cluster master, indexers (search peers), and search head?
http://docs.splunk.com/Documentation/Splunk/6.3.1511/Indexer/Enableclustersindetail


dawid_schulz
Explorer

CLUSTER MASTER

[general]
pass4SymmKey = xxxxxxxxxx
serverName = xxxxxxxxxx
site = site4

[clustering]
available_sites = site4,site2
mode = master
multisite = true
pass4SymmKey = xxxxxxxxxxxxxx
site_replication_factor = origin:1,site2:2,site4:2,total:5
site_search_factor = origin:1,site2:2,site4:2,total:5

INDEXER

[general]
pass4SymmKey = xxxxxx
serverName = xxxxxxx
site = site4

[clustering]
master_uri = xxxxxxxx
mode = slave
pass4SymmKey = xxxxxxx

SEARCH HEAD

[general]
pass4SymmKey = xxxxxxxx
serverName = xxxxxxxx
site = site4
sessionTimeout = 8h

[clustering]
master_uri = xxxxxxx
mode = searchhead
multisite = true
pass4SymmKey = xxxxx

As you can see, nothing unusual.
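If it helps debugging, you can check what the cluster master still has registered. A sketch, run on the cluster master with placeholder credentials:

```shell
# List the peers the cluster master currently tracks:
splunk list cluster-peers

# The search heads it tracks are visible via the REST API
# (-k skips certificate verification; credentials are placeholders):
curl -k -u admin:changeme https://localhost:8089/services/cluster/master/searchheads
```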
