
How to Configure LACP Bonding Using Netplan in Ubuntu?

In many network environments, a server is connected to the network devices over multiple interfaces for redundancy.

If one of the links goes down, the remaining links carry the traffic.

Bonding with LACP on netplan

Bonding is a way to combine multiple interfaces into one logical link and get the maximum bandwidth out of them. Several protocols can be used to create a bond; one of them is the industry standard known as LACP. In this guide, we are going to configure bonding using LACP with Netplan on Ubuntu.

If you are from a networking background, you probably know the bond interface as a port-channel, bridge aggregation, or link aggregation. On the server side, however, we call it bonding or a bond interface. In the end, it is the same thing.

Below is what my lab physical connectivity looks like.

The Ubuntu box is connected to a Layer 3 switch with six interfaces.

[Figure: lab topology for the netplan bonding example]

I am going to club all six of those interfaces into a single bond0 interface with an IP address. This interface is an untagged (access) port, and it is configured as a port-channel on the switch side.

Later we will take a look at a tagged configuration using a specific VLAN.

Note: The switch configuration is not covered here.
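As a quick preview of the tagged setup, a VLAN interface can be stacked on top of the bond with a vlans: stanza. The VLAN ID 100 and the address below are placeholders, not values from this lab:

```yaml
# Hypothetical tagged variant: bond0 carries VLAN 100 as a child interface.
vlans:
  bond0.100:
    id: 100           # placeholder VLAN ID
    link: bond0
    addresses: [10.1.100.10/24]   # placeholder address
```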

In /etc/netplan, edit the YAML file as shown below.

Note: Before you make any changes, back up the configuration.
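A minimal backup sketch, assuming the default /etc/netplan location (the directory is overridable so you can adapt it to your setup):

```shell
# Copy every netplan YAML to a dated backup directory before editing.
# NETPLAN_DIR is an assumption; the Ubuntu default is /etc/netplan.
NETPLAN_DIR="${NETPLAN_DIR:-/etc/netplan}"
BACKUP_DIR="${BACKUP_DIR:-$HOME/netplan-backup-$(date +%Y%m%d)}"
mkdir -p "$BACKUP_DIR"
cp "$NETPLAN_DIR"/*.yaml "$BACKUP_DIR"/ 2>/dev/null \
  || echo "no YAML files found in $NETPLAN_DIR"
echo "backed up to $BACKUP_DIR"
```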

Step 1. Group the interfaces.

First, you need to group all the interfaces as one. Since my interface names start with ens, I grouped them with the pattern ens*, which matches any interface whose name starts with ens.

ethernets:
   eports:
     match: 
       name: ens*
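If your NIC names do not share a common prefix, netplan can also match each member by MAC address instead of a name glob. The MAC addresses below are placeholders:

```yaml
# Alternative grouping: one matched entry per NIC, by MAC address.
ethernets:
  eport0:
    match:
      macaddress: "d0:67:26:bb:16:f0"   # placeholder MAC
  eport1:
    match:
      macaddress: "d0:67:26:bb:16:f1"   # placeholder MAC
```

Each matched entry would then be listed in the bond's interfaces: list in the next step.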

Step 2. Configure the Bond interface.

Then define the bond interface, reference the eports interface group that you have just created, and configure the IP address on it as well.

bonds:
   bond0:
     interfaces: [eports]
     addresses: [10.1.1.10/24]
     gateway4: 10.1.1.1
     nameservers:
       search: [local]
       addresses: [4.2.2.2]
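One caveat: on newer netplan releases the gateway4 key is deprecated in favour of a routes: entry. It still works but prints a warning; the equivalent default route would be:

```yaml
# Replacement for "gateway4: 10.1.1.1" on netplan versions that warn
# about the deprecated key:
routes:
  - to: default
    via: 10.1.1.1
```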

Step 3. Add the LACP configuration.

After that, add the LACP parameters. LACP (802.3ad) is the standard link-aggregation protocol; lacp-rate: fast asks the partner to send LACPDUs every second instead of every 30 seconds, and mii-monitor-interval: 100 checks the link state every 100 milliseconds.

   parameters:
       mode: 802.3ad
       lacp-rate: fast
       mii-monitor-interval: 100
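The three parameters above are the minimum for LACP; netplan exposes further optional bond tuning knobs. The values below are illustrative, not required for this lab:

```yaml
parameters:
  mode: 802.3ad
  lacp-rate: fast
  mii-monitor-interval: 100
  transmit-hash-policy: layer3+4   # hash flows across members by IP and port
  min-links: 1                     # members required before the bond is up
```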
The final netplan configuration would look like below.
network:
 version: 2
 renderer: networkd
 ethernets:
   eports:
     match: 
       name: ens*
 bonds:
   bond0:
     interfaces: [eports]
     addresses: [10.1.1.10/24]
     gateway4: 10.1.1.1
     nameservers:
       search: [local]
       addresses: [4.2.2.2]
     parameters:
       mode: 802.3ad
       lacp-rate: fast
       mii-monitor-interval: 100

Step 4. Apply the configuration.

Save the file and apply the configuration that you have just made using the command below.

sudo netplan apply
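If you are working on a remote box, netplan try is a safer variant of apply: it applies the change but automatically rolls it back unless you confirm within the timeout, so a bad bond config cannot lock you out. You can also run netplan generate first to validate the YAML without applying it.

```shell
sudo netplan generate           # parse the YAML and write backend files
sudo netplan try --timeout 120  # revert after 120 s unless you confirm
```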

Step 5. Verification.

In a bond there are slaves and a master: the physical interfaces that are part of the logical bond interface are the slaves, and the bond interface itself is the master.

Run ip addr; you should see that each physical interface has become a SLAVE and is up (<BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP>).

2: ens6f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fb brd ff:ff:ff:ff:ff:ff
3: ens6f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fb brd ff:ff:ff:ff:ff:ff
4: ens5f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fb brd ff:ff:ff:ff:ff:ff
5: ens5f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fb brd ff:ff:ff:ff:ff:ff
6: ens3f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fb brd ff:ff:ff:ff:ff:ff
7: ens3f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fb brd ff:ff:ff:ff:ff:ff
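ip addr only shows the link flags; the kernel's bonding driver also exposes the negotiated LACP state. Check that the aggregator and partner details look sane:

```shell
# Per-bond LACP details: mode, LACP rate, aggregator IDs, and the
# partner's system MAC appear here once the switch forms the port-channel.
cat /proc/net/bonding/bond0
```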

Looking at the status of the bond0 interface, you can see it has become the MASTER, it is up, and it has the IP address 10.1.1.10, which is good!

16: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fb brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.10/24 brd 10.1.1.255 scope global bond0
       valid_lft forever preferred_lft forever
Let's ping the default gateway to make sure that the connectivity is okay.
saif@gld:/etc/netplan$ ping -c 4 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
64 bytes from 10.1.1.1: icmp_seq=1 ttl=255 time=0.729 ms
64 bytes from 10.1.1.1: icmp_seq=2 ttl=255 time=0.644 ms
64 bytes from 10.1.1.1: icmp_seq=3 ttl=255 time=0.627 ms
64 bytes from 10.1.1.1: icmp_seq=4 ttl=255 time=0.907 ms
--- 10.1.1.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3075ms
rtt min/avg/max/mdev = 0.627/0.726/0.907/0.115 ms
saif@gld:/etc/netplan$

Great! Our bond0 is working as expected.