
Bonded and Bridged Networking for Virtualization Hosts

For many years, most servers have shipped with several network interfaces, but for most purposes, albeit with some notable exceptions, the bandwidth of a single network interface has rarely been the limiting factor in the performance of my services. It has therefore been common to use only a single network interface on each server.

However, I now have a workload that requires lots of bandwidth, and it seems useful to take advantage of all those spare NICs. Virtual machine hosts often need plenty of bandwidth, since potentially many services are funnelled through the host's NIC, so 4Gbps sounds much more appealing than 1Gbps.

Fortunately, Linux networking is powerful and conveniently allows several interfaces to be joined into a single pipe. Linux refers to this as a bond, but you might also see the word trunk used this way, although trunking more often refers to the practice of making several VLANs available on a single interface.

The first part of this article discusses and demonstrates bonding network interfaces.

Most virtualization technologies (including KVM) recommend bridged networking for VM guests so that they can use IP addresses on the external network without NAT and/or special port-forwarding effort. The second part of this article briefly explains how to modify the bonded setup to allow for bridged networking. The result is that guest VMs can request and obtain IP addresses over DHCP, the DHCP server sees each DHCP request coming from the (virtual) MAC address that you configured for the VM's NIC, all VMs benefit from the large, bonded pipe, and everything just works.
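
As a point of reference for that second part: once the bridge (br0, configured below) exists on the host, a KVM/libvirt guest can be attached to it with an interface definition along these lines. This is only a sketch; the MAC address is a placeholder and the virtio model is an assumption, not a requirement.

<interface type='bridge'>
  <source bridge='br0'/>
  <mac address='52:54:00:aa:bb:cc'/>
  <model type='virtio'/>
</interface>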

How to simply use 2 network interfaces in a single bond

The basic idea is to create a virtual network interface as a bond, configure it with the IP addressing information that we want, but no physical interfaces, and then add the physical interfaces to the bond. Here's a diagram:

                                  +--------+
                    +-------------| eth0   |
+----------+        |             +--------+
|          |--------+
|  bond0   |
|          |--------+
+----------+        |             +--------+
                    +-------------| eth1   |
                                  +--------+

Under RHEL6/CentOS6, we need to make a few simple changes under /etc/sysconfig/network-scripts and load the bonding driver module.

Configuring the bond interface

Here is one possible setup; there are others.

/etc/sysconfig/network-scripts/ifcfg-bond0:

DEVICE=bond0
USERCTL=no
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.8.0.2
GATEWAY=10.8.0.1
NETMASK=255.255.255.0
BONDING_OPTS="mode=2 miimon=80 xmit_hash_policy=layer3+4"

BOOTPROTO could also take the value dhcp, in which case delete the entries for IPADDR, GATEWAY and NETMASK. The option xmit_hash_policy only applies to bonding in modes 2 or 4. See the kernel bond documentation for details.
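
For illustration, a DHCP-based variant of the same file might look like the following sketch; the bonding options are unchanged from above.

DEVICE=bond0
USERCTL=no
NM_CONTROLLED=no
BOOTPROTO=dhcp
ONBOOT=yes
BONDING_OPTS="mode=2 miimon=80 xmit_hash_policy=layer3+4"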

Adding physical interfaces to the bond

/etc/sysconfig/network-scripts/ifcfg-eth0:

DEVICE=eth0
HWADDR=00:10:AA:BB:CC:DD
USERCTL=no
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
ETHTOOL_OPTS="speed 1000 duplex full"
  • The last line is optional, but the test machine's Broadcom NetXtreme (BCM5709) using the bnx2 kernel module seemed to need it.
  • Then repeat for additional NICs (ifcfg-ethN), as in the sketch below.
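
For example, an ifcfg-eth1 for the second NIC would be identical apart from the device name and hardware address; the MAC below is a placeholder for the second NIC's real address.

DEVICE=eth1
HWADDR=00:10:AA:BB:CC:DE
USERCTL=no
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
ETHTOOL_OPTS="speed 1000 duplex full"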

Load the kernel module for the bonding driver

On el5 systems this happened in /etc/modprobe.conf, but on el6 systems it has moved to an include-based approach under /etc/modprobe.d/, so create /etc/modprobe.d/bonding.conf:

alias bond0 bonding
options bond0 mode=4 miimon=100

There are a number of different possibilities for the value of mode, as documented in the Linux kernel's bonding documentation. Mode 4 is 802.3ad dynamic link aggregation using LACP and requires support from the uplink switch. Another likely possibility is mode 6 (balance-alb, adaptive load balancing), which does not require special switch support.

The second line (options bond0 ...) is an alternative to BONDING_OPTS in ifcfg-bond0; configure the bonding parameters in one place or the other, not both.
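
To activate the changes without a reboot, something along these lines should work; this is a sketch, and a full network restart may briefly interrupt connectivity.

# load the bonding driver
modprobe bonding
# restart networking so the new ifcfg-* files take effect
service network restart
# verify that the bond is up and its slaves are active
cat /proc/net/bonding/bond0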

How to modify the bonded config to implement Bridged networking

Create a virtual interface for the bridge, /etc/sysconfig/network-scripts/ifcfg-br0:

DEVICE=br0
TYPE=Bridge
NM_CONTROLLED=no
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.8.0.2
GATEWAY=10.8.0.1
NETMASK=255.255.255.0

BOOTPROTO could also take the value dhcp, in which case delete the entries for IPADDR, GATEWAY and NETMASK.

Modify the bond to add it to the bridge. Delete the IP address info and append a single line to /etc/sysconfig/network-scripts/ifcfg-bond0:

DEVICE=bond0
USERCTL=no
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=1 miimon=80 xmit_hash_policy=layer3+4"
BRIDGE=br0

The physical interface(s) stay slaved to the bond, unchanged from the bonded setup above. /etc/sysconfig/network-scripts/ifcfg-eth0:

DEVICE=eth0
HWADDR=00:10:AA:BB:CC:DD
USERCTL=no
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
ETHTOOL_OPTS="speed 1000 duplex full"
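
After restarting the network, the bridged setup can be checked with standard tools; a sketch, assuming the bridge-utils package is installed for brctl.

# restart networking so the bridge, bond and slave changes take effect
service network restart
# the bridge should exist and list bond0 as a member interface
brctl show br0
# the bond should still report its slaves as up
cat /proc/net/bonding/bond0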

Using LACP (802.3ad) to aggregate interfaces

Bonding mode=4 uses LACP to provide better performance, but it requires cooperation from the switch. Many managed switches support this, including Cisco models such as the 3750, and even (some?) Dell PowerConnect models.
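
On the Linux side, the only change from the earlier examples is the bonding mode in ifcfg-bond0, for example as below; lacp_rate is optional, and this line is a sketch rather than a drop-in requirement.

BONDING_OPTS="mode=4 miimon=100 lacp_rate=slow xmit_hash_policy=layer3+4"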

Cisco IOS example:

interface port-channel 1
  switchport mode trunk
  switchport trunk native vlan 40
  switchport trunk allowed vlan 40-44

interface GigabitEthernet1/0/1
  switchport mode trunk
  switchport trunk native vlan 40
  switchport trunk allowed vlan 40-44
  channel-protocol lacp
  channel-group 1 mode active

interface GigabitEthernet2/0/1
  switchport mode trunk
  switchport trunk native vlan 40
  switchport trunk allowed vlan 40-44
  channel-protocol lacp
  channel-group 1 mode active
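
Once the host's bond comes up, the aggregation can be verified from both ends; a sketch:

On the switch:
  show etherchannel summary
On the Linux host (look for the 802.3ad info section and matching aggregator IDs):
  cat /proc/net/bonding/bond0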