When I run the 'sendemail' command from a search I can successfully send out an email to *****@gmail.com:
INFO sendemail:134 - Sending email. subject="test", results_link="None", recipients="[u'*****@gmail.com']",
server="127.0.0.1"
host = sh01
But when I attempt to use an alert (created on SH01), python.log shows an ERROR from SH02:
ERROR sendemail:452 - [Errno 111] Connection refused while sending mail to: *****@gmail.com
host = sh02
ERROR sendemail:137 - Sending email. subject="Splunk Alert: toot test", results_link="https://****:8000/app/search/search?q=%7Cloadjob%20rt_scheduler_am9uYXRoYW4ucGh1bmc_search_RMD5a4567364310f5ab7_at_1531841295_42380.1152_A796D57B-F1E3-44CF-B9F2-EBA799BB1E72%20%7C%20head%2032%20%7C%20tail%201&earliest=0&latest=now", recipients="[u'****@gmail.com']", server="127.0.0.1"
host = sh02
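The "Connection refused" above means nothing is accepting connections on the configured mail server address from sh02. A quick way to confirm is a plain socket check; this is a sketch, and the host/port (127.0.0.1:25, the default SMTP port, matching server="127.0.0.1" in the log) are assumptions:

```python
import socket

def smtp_reachable(host="127.0.0.1", port=25, timeout=3.0):
    """Return True if something is listening on host:port.

    This only checks TCP reachability, not that the listener
    actually speaks SMTP.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers ConnectionRefusedError, timeouts, unreachable hosts, etc.
        return False
```

Running this on sh01 and sh02 should show reachable on sh01 and refused on sh02, matching the two log excerpts.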
Can anyone explain why this log is coming from sh02, and whether I can make the alert action run on sh01 instead?
Do you have the email server configured on SH02?
I do not, and I do not have permission to configure it. Is there a way to make sh01 always attempt the alert action?
Is it a search head cluster?
Read here and apply the opposite logic:
https://docs.splunk.com/Documentation/Splunk/7.1.2/DistSearch/Adhocclustermember
I am doubtful that you will have access to this, since you mentioned you have no access to set the mail server. I think a quick note to your admin will do; it takes less than a minute to fix.
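To spell out the "opposite logic": the doc above explains how to mark a member "ad hoc only" so the scheduler never assigns it scheduled searches. Applied in reverse here, that would mean marking sh02 and sh03 as ad hoc members, leaving sh01 as the only member that runs scheduled alerts (and thus the email action). A minimal sketch of what your admin would set, assuming filesystem access on those members (a restart of the affected instances is required):

```ini
# server.conf on sh02 and sh03
# Ad hoc only: the scheduler will not dispatch scheduled searches
# to this member, so alert actions will not fire from it.
[shclustering]
adhoc_searchhead = true
```

Alternatively (and more robustly), the admin can simply configure the same mail server on every member under Settings > Server settings > Email settings, so the alert works no matter which member runs it.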
It is a search head cluster of sh01, sh02, and sh03. Changing the captain role didn't seem to do anything: sh02 and sh03 are still the hosts where the email ERROR log appears, and the alert never fires from sh01.
This might not be a problem I can resolve alone. Thank you!