I’m still seeing intermittent connection error messages between the Splunk search heads and indexers.
It looks like the search heads copy the knowledge bundle to the indexers over port 8089 on a recurring basis. If those copies fail, distributed searches and scheduled reports fail too. So, kinda important. (A quick way to sanity-check that port 8089 is reachable is sketched after the log excerpt below.)

Sample errors from splunkd.log:
    05-12-2011 08:45:56.443 -0500 WARN DistributedBundleReplicationManager - Unable to connect to remote peer: https://64.x.x.x:8089.
    05-12-2011 08:45:28.158 -0500 ERROR DistributedBundleReplicationManager - Unable to get remote checksum from peer named 10.x.x.x with uri=https://64.x.x.x:8089
    05-12-2011 08:45:28.158 -0500 WARN DistributedBundleReplicationManager - Unable to connect to remote peer: https://64.x.x.x:8089.
    05-12-2011 08:45:28.157 -0500 ERROR DistributedBundleReplicationManager - Unable to get remote checksum from peer named 10.x.x.x with uri=https://64.x.x.x:8089
    05-12-2011 08:45:28.157 -0500 WARN DistributedBundleReplicationManager - Unable to connect to remote peer: https://64.x.x.x:8089.
    05-12-2011 08:45:28.156 -0500 WARN DistributedBundleReplicationManager - Unable to login to remote peer at https://64.x.x.x:8089 named 10.x.x.x with username splunk-system-user
    05-12-2011 08:45:28.155 -0500 WARN DistributedBundleReplicationManager - Unable to login to remote peer at https://64.x.x.x:8089 named 10.x.x.x with username splunk-system-user
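For anyone else hitting this, a rough way to sanity-check that the peer's management port is actually reachable from the search head is a quick REST probe. This is just a sketch: the peer URI is the masked placeholder from the errors above (substitute your real indexer address), and it probes /services/server/info, a standard splunkd REST endpoint. An HTTP 401 still counts as reachable, since it proves splunkd answered on 8089; certificate checks are disabled because splunkd ships with a self-signed cert by default.

    import ssl
    import urllib.error
    import urllib.request

    # Placeholder from the errors above -- substitute your real indexer address.
    PEER = "https://64.x.x.x:8089"

    # splunkd's management port uses a self-signed certificate by default, so
    # skip verification for this one-off reachability test.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    # /services/server/info is a standard splunkd REST endpoint. Without
    # credentials it returns 401, which still proves the port is answering.
    try:
        with urllib.request.urlopen(PEER + "/services/server/info",
                                    context=ctx, timeout=10) as resp:
            print("reachable, HTTP %d" % resp.status)
    except urllib.error.HTTPError as e:
        print("reachable, HTTP %d" % e.code)  # 401 = splunkd answered
    except OSError as e:
        print("NOT reachable: %s" % e)  # refused, timed out, DNS, etc.

If the probe fails from the search head, the bundle replication errors above are just a symptom of whatever is blocking 8089 (firewall, routing, or something local interfering with the network).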
Disregard. It turned out to be a scripting issue on our end.