I have a JMX monitor that I first deployed with a typo in the IP address. I updated the JMX XML config file with the correct IP; however, the JMX input is still trying to poll the old IP. I have tried the steps below to clear the issue, but the problem persists.
Here are the log entries from splunkd.log that led me to realize I had keyed in the wrong IP:
04-13-2015 08:00:28.136 -0400 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\SPLUNK4JMX\bin\jmx.py"" host=192.168.100.137, jmxServiceURL=, jmxport=1099, jvmDescription=EHE-FORUM-SOA-EXT-1, processID=0,stanza=jmx://Forum-EHE_EveryMinute,systemErrorMessage="Failed to retrieve RMIServer stub: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: 192.168.100.137; nested exception is:
04-13-2015 08:00:28.136 -0400 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\SPLUNK4JMX\bin\jmx.py"" java.net.ConnectException: Connection timed out: connect]"
Here are the log entries from splunkd.log after correcting the IP:
04-13-2015 08:01:13.740 -0400 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\SPLUNK4JMX\bin\jmx.py"" host=192.168.100.184, jmxServiceURL=, jmxport=1099, jvmDescription=EHE-FORUM-SOA-EXT-1, processID=0,stanza=jmx://Forum-EHE_Hourly,systemErrorMessage="Connection refused to host: 192.168.100.137; nested exception is:
04-13-2015 08:01:13.740 -0400 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\SPLUNK4JMX\bin\jmx.py"" java.net.ConnectException: Connection timed out: connect"
Old Host IP: 192.168.100.137
New Host IP: 192.168.100.184
How do I clear the residual cache of the old IP?
I eventually ran a packet capture and discovered that the IP listed in "Connection refused to host: 192.168.100.137;" actually came from the device itself. It has two NICs, and I was trying to route traffic to the NIC that JMX was not bound to, because I wanted to keep the JMX traffic on the same network rather than route it through a series of security appliances and a firewall.
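For anyone hitting the same symptom: with RMI-based JMX, the registry hands the client a stub containing whatever address the monitored JVM advertises, and the client then connects back to that address, ignoring the host you put in the config. That is why the error names an IP you never configured. On a multi-homed (two-NIC) box, the usual fix is to set the java.rmi.server.hostname system property on the monitored JVM so the stub advertises the reachable NIC. A sketch, assuming a standard JVM launch script; the jmxremote flags shown are typical and may differ from your actual setup:

```shell
# On the monitored JVM (EHE-FORUM-SOA-EXT-1 in this thread), advertise the
# reachable NIC in the RMI stub so clients connect back to the right address.
# java.rmi.server.hostname is the standard property; the remaining flags are
# illustrative jmxremote settings, not taken from the original deployment.
java -Djava.rmi.server.hostname=192.168.100.184 \
     -Dcom.sun.management.jmxremote.port=1099 \
     -Dcom.sun.management.jmxremote.authenticate=true \
     -Dcom.sun.management.jmxremote.ssl=false \
     ...
```

With that property set, the "Connection refused to host: &lt;other IP&gt;" error goes away because the stub no longer points the client at the unreachable NIC.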
Are you sure you are editing the correct config file? According to your logs, the jmx stanza is different in the two entries:
stanza=jmx://Forum-EHE_EveryMinute
And
stanza=jmx://Forum-EHE_Hourly
Good question. I edited both at the same time, and I am getting the error on both stanzas:
04-13-2015 11:20:50.365 -0400 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\SPLUNK4JMX\bin\jmx.py"" host=192.168.100.184, jmxServiceURL=, jmxport=1099, jvmDescription=EHE-FORUM-SOA-EXT-1, processID=0,stanza=jmx://Forum-EHE_EveryMinute,systemErrorMessage="Connection refused to host: 192.168.100.137; nested exception is:
Here is the Hourly config:
<!-- Hourly Polling -->
<jmxpoller>
<formatter className="com.dtdsoftware.splunk.formatter.TokenizedMBeanNameQuotesStrippedFormatter"/>
<cluster name="XXX" description="XXX">
<mbean domain="XXX" properties="XXX" dumpAllAttributes="true"/>
<jmxserver host="192.168.100.184" jmxport="1099" lookupPath="/fsjmx" jvmDescription="EHE-FORUM-SOA-EXT-1" jmxuser="XXX" jmxpass="XXX"/>
<jmxserver host="192.168.100.186" jmxport="1099" lookupPath="/fsjmx" jvmDescription="EHE-FORUM-SOA-EXT-2" jmxuser="XXX" jmxpass="XXX"/>
</cluster>
</jmxpoller>
Here is the EveryMinute Config:
<!-- Minute Polling -->
<jmxpoller>
<formatter className="com.dtdsoftware.splunk.formatter.TokenizedMBeanNameQuotesStrippedFormatter"/>
<cluster name="forum-prod_8x" description="PROD Forum Systems 8.x">
<mbean domain="XXX" properties="name=XXX">
<attribute name="CPU Utilization" outputname="cpuUtilization"/>
</mbean>
<jmxserver host="192.168.100.184" jmxport="1099" lookupPath="/fsjmx" jvmDescription="EHE-FORUM-SOA-EXT-1" jmxuser="XXX" jmxpass="XXX"/>
<jmxserver host="192.168.100.186" jmxport="1099" lookupPath="/fsjmx" jvmDescription="EHE-FORUM-SOA-EXT-2" jmxuser="XXX" jmxpass="XXX"/>
</cluster>
</jmxpoller>
This is a known-good configuration: I have it deployed to nine other Splunk servers that are successfully collecting both hourly and every-minute data.
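For reference, each jmxserver element with a lookupPath corresponds to a JNDI-style JMX service URL, which is what the poller ultimately connects to. A minimal sketch of how the host, jmxport, and lookupPath attributes map onto that URL; the exact URL form the app builds internally is an assumption, and the connect call is left commented out because it needs the server up plus credentials:

```java
import javax.management.remote.JMXServiceURL;

public class JmxUrlSketch {
    public static void main(String[] args) throws Exception {
        // Values from the Hourly config's first <jmxserver> element.
        String host = "192.168.100.184";
        int jmxport = 1099;
        String lookupPath = "/fsjmx";

        // lookupPath is the JNDI name under which the RMI registry
        // listening on jmxport serves the JMX connector stub.
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://" + host + ":" + jmxport + lookupPath);
        System.out.println(url.getURLPath());

        // An actual connectivity check would look like this (requires the
        // server to be reachable and an env map with jmxuser/jmxpass):
        // JMXConnector c = JMXConnectorFactory.connect(url, env);
    }
}
```

Pointing a standalone client such as JConsole at the same URL is a quick way to confirm which NIC the stub advertises, independent of Splunk.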