We currently have a forwarder with multiple NICs.
eth0: 192.168.1.x
eth1: 192.168.2.x
All of the data comes in on eth0 (192.168.1.x) and is then forwarded to our indexers, which are on the 192.168.2.x subnet, via the eth1 interface. We can confirm this with tcpdump. However, we noticed that the source IP of the outbound packets is the eth0 address (192.168.1.x). This creates a circuitous path: the ACK packets are routed back to the 192.168.1.x address instead of back to eth1 at 192.168.2.x.
Is this a Linux configuration issue or a Splunk configuration issue?
Any help would be much appreciated.
Thanks!
Typically this is controlled by the operating system. When a unix process uses the connect() system call to make a TCP connection, the source address is determined by the outbound interface in the routing table. There are some things an application can do to change that, though: a programmer can call bind() with a specific local address prior to connect() to force the use of that source address. This is how SPLUNK_BINDIP works.
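The bind-before-connect pattern described above can be sketched in Python. This is a minimal, self-contained illustration of the mechanism (roughly what SPLUNK_BINDIP does for splunkd), not Splunk's actual code; both endpoints run on loopback here so it works anywhere, but on a real forwarder you would bind to the eth1 address instead.

```python
import socket
import threading

# A throwaway local listener standing in for the indexer
# (port 0 lets the OS pick a free port).
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
indexer_addr = listener.getsockname()
threading.Thread(target=listener.accept, daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# bind() before connect() pins the source address of the outbound
# connection; port 0 means the kernel picks an ephemeral source port.
client.bind(("127.0.0.1", 0))
client.connect(indexer_addr)

# getsockname() shows the local (source) address actually in use.
source_ip = client.getsockname()[0]
print(source_ip)  # the pinned source address

client.close()
listener.close()
```

Without the client-side bind(), the kernel would choose the source address itself based on the route to the destination.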
Are you using SPLUNK_BINDIP? If you are, does this behaviour change when you disable it and restart your forwarder?
If you took a tcpdump of a telnet or a netcat connecting to your indexer, does it also set the source address to the 192.168.1.x address? You could run nc -z -v 192.168.2.x 7777 and tcpdump the result. Of course, replace 7777 with your indexer port number.
AH! okay. That makes sense. The TCP connection was already open with the 192.168.1.x address. The new interface would have changed the routing, but not affected the source address of the existing TCP connection.
Thanks, we did that and it worked correctly, so we decided to restart the service (we hadn't done that since we brought up the second interface). It looks like splunkd just needed to reset its interface/IP. We just wanted to make sure this was all we had to do before restarting our production forwarder unnecessarily.