Splunk Enterprise Security

Does Splunk support dual NIC interfaces on the private network to improve performance?

danny12345
Explorer

First, some background on our Splunk environment. We are setting up a 2-site cluster with a replication factor of 2, a search head cluster of 3 to 4 nodes, and an indexer cluster with about 11 indexers at each site. Our networks are separated into private and public networks: users log into Splunk through the public network, while all internal communication between the various nodes happens on the private network, which is connected to a dedicated private switch. We also have enough network interfaces on our nodes and enough ports on the private switch to support either bonded interfaces or dual NIC connections with 2 separate IP addresses.

The question we have is: does Splunk support dual NIC connections with 2 separate IP addresses on the private network, kept apart from the public network interface, which sits on its own public network? If the answer is no, we have a follow-up question: will Splunk be able to utilize the bandwidth of a bonded network interface (2 x 10Gb vs. 1 x 10Gb) better than a single 10Gb connection? If the answer is yes, that would let us avoid the somewhat involved process of configuring bonded network interfaces in the OS by simply setting up 2 separate interfaces with 2 IP addresses, but we're not sure whether Splunk supports this, or what others have been thinking or trying.

Here is a link to a similar question, but it's not exactly what I'm asking (that thread is about how to do bonding, whereas I'm asking whether and why to do it), so I am asking a new question:

https://answers.splunk.com/answers/72426/rhel-nic-bonding-with-splunk.html

Thank you for your help, and please let me know if you need more information!

1 Solution

rabbidroid
Path Finder

Splunk is not aware of the underlying network setup; it will utilize whatever bandwidth the host makes available to it. In the case of bonding on Linux, Splunk uses the bond0 interface (or whatever name you give it), which acts as the master of the interfaces you enslave to it. Splunk is never aware of the slave interfaces.

There are multiple bonding modes in Linux; the mode I use is 802.3ad (mode 4), which provides both load balancing and fault tolerance. In my case that is 2 x 10GbE, giving 20Gb of aggregate bandwidth. A switch that supports LACP is required for this configuration.
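
For reference, this is roughly what creating such a bond looks like on a RHEL/CentOS-style host with nmcli. Treat it as a sketch: the interface names (eno49/eno50), connection names, and the 10.0.0.11/24 address are placeholders for whatever your private network uses, and the switch ports have to be configured for LACP as well.

# create the bond with LACP (mode 4), MII monitoring, and a fast LACP rate
nmcli con add type bond ifname bond0 con-name bond0 bond.options "mode=802.3ad,miimon=100,lacp_rate=fast"

# enslave the two 10GbE interfaces to the bond
nmcli con add type bond-slave ifname eno49 master bond0
nmcli con add type bond-slave ifname eno50 master bond0

# give the bond its private-network address and bring it up
nmcli con mod bond0 ipv4.method manual ipv4.addresses 10.0.0.11/24
nmcli con up bond0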

To check which mode your interface is configured in, you can run:
cat /proc/net/bonding/bond0
which should give output similar to this:

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 5c:b9:01:9c:c5:90
Active Aggregator Info:
    Aggregator ID: 1
    Number of ports: 2
    Actor Key: 15
    Partner Key: 32990
    Partner Mac Address: 00:23:04:ee:be:01

Slave Interface: eno49
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 10
Permanent HW addr: 5c:b9:01:9c:c5:90
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 5c:b9:01:9c:c5:90
    port key: 15
    port priority: 255
    port number: 1
    port state: 63
details partner lacp pdu:
    system priority: 32667
    system mac address: 00:23:04:ee:be:01
    oper key: 32990
    port priority: 32768
    port number: 31276
    port state: 61

Slave Interface: eno50
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 5c:b9:01:9c:c5:91
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 5c:b9:01:9c:c5:90
    port key: 15
    port priority: 255
    port number: 2
    port state: 63
details partner lacp pdu:
    system priority: 32667
    system mac address: 00:23:04:ee:be:01
    oper key: 32990
    port priority: 32768
    port number: 14892
    port state: 61

TL;DR: if configured correctly with a supported switch, yes.


danny12345
Explorer

Thank you very much for your response. It is clear now that Splunk does indeed support bonded networks. The real question I have now is: does Splunk actually make use of the additional network throughput, and is there a way to figure that out? It seems like a 10Gb connection should be fast enough for Splunk.


rabbidroid
Path Finder

It will use all the throughput the OS makes available to it, be it 1GbE or 200GbE.
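
If you want to see for yourself whether the extra bandwidth is actually being used, the easiest check is at the OS level while the cluster is busy (initial replication, bucket fixup, heavy searches). A quick sketch, assuming the bond is named bond0:

# sample the bond's byte counters over 10 seconds and print the average rates
rx1=$(cat /sys/class/net/bond0/statistics/rx_bytes)
tx1=$(cat /sys/class/net/bond0/statistics/tx_bytes)
sleep 10
rx2=$(cat /sys/class/net/bond0/statistics/rx_bytes)
tx2=$(cat /sys/class/net/bond0/statistics/tx_bytes)
echo "rx: $(( (rx2 - rx1) * 8 / 10 / 1000000 )) Mbit/s  tx: $(( (tx2 - tx1) * 8 / 10 / 1000000 )) Mbit/s"

sar -n DEV 1 (from the sysstat package) shows the same per-interface rates, including the individual slaves, which tells you whether traffic is really being spread across both links.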


danny12345
Explorer

Yes, thank you, I understand that. But does Splunk actually make use of the additional throughput, in the sense of really sending that much data across the wire, not just being able to send that much if it wanted to? Clearly it can, since, as you said, that part is handled at the OS layer.


nickhills
Ultra Champion

Splunk will run as fast as you push it.

If you only have 4 cores, 4GB of RAM, and 10Mbit networking, it will run just fine if your event count is low and you seldom perform searches (I know that's not quite true 🙂).

However, if you're indexing 2TB a day and have hundreds of users and thousands of searches, Splunk will happily take advantage of 192 cores, 500GB of RAM, and all the network (and disk) I/O you can throw at it.

If my comment helps, please give it a thumbs up!

rabbidroid
Path Finder

10GbE should be more than enough for indexing and searching. If it's not enough, you should probably add more indexers instead of more NICs.

The only time you should be concerned about throughput is when you're using SmartStore, as the traffic to S3 will increase depending on your setup and search behavior. But with many indexers each capable of 10Gb/s, you will probably saturate your uplinks to AWS or your local S3 storage solution first.

Also, Splunk performs a lot better with more but slimmer servers rather than fewer fat ones. Having 10 servers with 128 cores each is not the same as 20 servers with 64 cores each. More, smaller servers = better.


danny12345
Explorer

Thanks @rabbidroid, that answers the question. We also expected 10GbE to be enough for Splunk, since the network is not the bottleneck here; CPU is. We are not using SmartStore either.


codebuilder
SplunkTrust

You will not realize any performance gain by using bonding.
NIC bonding provides high availability for your networking (active/failover). It does not aggregate throughput.

----
An upvote would be appreciated and Accept Solution if it helps!

danny12345
Explorer

Thank you for your response. I'm not trying to be mean, but that statement is false: bonded networks can provide load balancing, not just fault tolerance (e.g., mode 0 or mode 2). The main question really becomes: is Splunk able to utilize the additional throughput, or are there other bottlenecks besides network speed, such as CPU? In addition, you did not answer the original question, which was whether Splunk supports dual NIC connections on an internal private network (not the public network) with 2 separate IP addresses. Thank you again very much!
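
For what it's worth, regarding the two-separate-IPs option: splunkd simply binds to the addresses the OS presents, and by default it listens on all of them. If you want to pin its listening ports (management, splunktcp, replication) to the private address, splunk-launch.conf has a SPLUNK_BINDIP setting. A minimal sketch, with 10.0.0.11 standing in for the private-network address:

# $SPLUNK_HOME/etc/splunk-launch.conf
# Bind all of splunkd's listening ports to the private-network address;
# restart Splunk after changing this.
SPLUNK_BINDIP=10.0.0.11

Outbound connections still follow the OS routing table, so with two IPs it is the host's routes that determine which interface the cluster traffic leaves on.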
