Discover the Netplan network configuration in Ubuntu



Ever since Ubuntu changed its network configuration from the traditional ifupdown style to netplan, many users have been upset. Some were annoyed to the point that they removed netplan entirely and installed ifupdown back on the system.

So is netplan going to stay? And if yes, shouldn't we learn how to configure it on our Ubuntu machines?

The answer to both questions is yes. After doing some research about netplan, it looks like it is here to stay. And if you look at network automation trends, you can see that YAML (YAML Ain't Markup Language) files are used in many places; netplan uses YAML files as well.

Now the options we have are to either install ifupdown, which you never know how long will remain available, or to just accept the change and learn netplan.

If you are ready to accept the change, then let's get started.

In this blog we will look at different scenarios in which we can configure the network using netplan in Ubuntu, with both DHCP and static configurations.

1. DHCP configuration with netplan

By default you don't have to create or modify anything in netplan if you are planning to use DHCP; an Ubuntu machine will have an IP address configured automatically via DHCP out of the box. This is how the netplan config looks on an Ubuntu server.

user@ubuntu:~$ cat /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by
# the datasource.  Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    ethernets:
        ens33:
            dhcp4: true
    version: 2
user@ubuntu:~$

What if you are an Ubuntu desktop user? Then it would look like below.

user@ubuntu:~$ cat /etc/netplan/01-network-manager-all.yaml
# Let NetworkManager manage all devices on this system
network:
  version: 2
  renderer: NetworkManager
user@ubuntu:~$

What are the differences you can spot between these two?

There is a renderer mentioned on the Ubuntu desktop but not on the server.

Although DHCP is enabled on both of them, the Ubuntu server has the value dhcp4: true, while the desktop does not.

The yaml file names are also different.

As you can see, the yaml configuration changed from cloud-init to NetworkManager; sometimes the configuration file is named 01-netcfg.yaml.

It may be different on your machine as well, so make sure you are editing the correct file.
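
If you are not sure which file your machine actually uses, a quick listing of the directory shows every netplan file present:

ls /etc/netplan/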

2. Static IP configuration using netplan

If you are using an Ubuntu machine as a server, it's unlikely that you would be using DHCP. Let's look at how we can configure the IP statically.

Before we modify the netplan, make sure you take a backup of the current netplan configuration.
Type the command below to take a backup of the existing configuration.

sudo cp /etc/netplan/01-network-manager-all.yaml /etc/netplan/01-network-manager-all.yaml.bak

Now let's look at the physical interface names by typing ip addr; as you can see, I have six interfaces on this machine.

Let me give a different IP to each of the six interfaces.
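
By the way, if you prefer a more compact view, the iproute2 brief mode prints one line per interface with the same information:

ip -br link show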

sudo nano /etc/netplan/50-cloud-init.yaml

Edit the yaml file as follows.

# Let NetworkManager manage all devices on this system
network:
    version: 2
    renderer: NetworkManager
    ethernets:
        ens3f0:
          dhcp4: no
          addresses: [10.25.101.206/24]
          gateway4: 10.25.101.12
        ens3f1:
          dhcp4: no
          addresses: [10.25.101.207/24]
          gateway4: 10.25.101.12
        ens5f0:
          dhcp4: no
          addresses: [10.25.101.208/24]
          gateway4: 10.25.101.12
        ens5f1:
          dhcp4: no
          addresses: [10.25.101.209/24]
          gateway4: 10.25.101.12
        ens6f0:
          dhcp4: no
          addresses: [10.25.101.210/24]
          gateway4: 10.25.101.12
        ens6f1:
          dhcp4: no
          addresses: [10.25.101.211/24]
          gateway4: 10.25.101.12
Apply the config:

sudo netplan apply
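
Tip: if you are connected over SSH and worried about locking yourself out, netplan also has a safer variant that rolls the change back automatically unless you confirm it within a timeout:

sudo netplan try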

As you can see, the IP addresses have been changed now.

2: ens6f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d0:67:26:bb:17:42 brd ff:ff:ff:ff:ff:ff
    inet 10.25.101.210/24 brd 10.25.101.255 scope global noprefixroute ens6f0
       valid_lft forever preferred_lft forever
    inet6 fe80::d267:26ff:febb:1742/64 scope link 
       valid_lft forever preferred_lft forever
3: ens6f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d0:67:26:bb:17:43 brd ff:ff:ff:ff:ff:ff
    inet 10.25.101.211/24 brd 10.25.101.255 scope global noprefixroute ens6f1
       valid_lft forever preferred_lft forever
    inet6 fe80::d267:26ff:febb:1743/64 scope link 
       valid_lft forever preferred_lft forever
4: ens5f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d0:67:26:bb:10:04 brd ff:ff:ff:ff:ff:ff
    inet 10.25.101.208/24 brd 10.25.101.255 scope global noprefixroute ens5f0
       valid_lft forever preferred_lft forever
    inet6 fe80::d267:26ff:febb:1004/64 scope link 
       valid_lft forever preferred_lft forever
5: ens5f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d0:67:26:bb:10:05 brd ff:ff:ff:ff:ff:ff
    inet 10.25.101.209/24 brd 10.25.101.255 scope global noprefixroute ens5f1
       valid_lft forever preferred_lft forever
    inet6 fe80::d267:26ff:febb:1005/64 scope link 
       valid_lft forever preferred_lft forever
6: ens3f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fa brd ff:ff:ff:ff:ff:ff
    inet 10.25.101.206/24 brd 10.25.101.255 scope global noprefixroute ens3f0
       valid_lft forever preferred_lft forever
    inet6 fe80::d267:26ff:febb:16fa/64 scope link 
       valid_lft forever preferred_lft forever
7: ens3f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fb brd ff:ff:ff:ff:ff:ff
    inet 10.25.101.207/24 brd 10.25.101.255 scope global noprefixroute ens3f1
       valid_lft forever preferred_lft forever
    inet6 fe80::d267:26ff:febb:16fb/64 scope link 
       valid_lft forever preferred_lft forever

What if I have only one interface on my Ubuntu machine?

Then it's way easier; you just have to add that one interface's IP configuration instead of all of the above, as shown in the sketch below.
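
For example, a minimal single-interface sketch would look like this (assuming the interface is named ens33; check the actual name with ip addr):

network:
    version: 2
    ethernets:
        ens33:
          dhcp4: no
          addresses: [10.25.101.206/24]
          gateway4: 10.25.101.12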

We are not done yet; let's go ahead and initiate the ping from the switch and verify the connectivity again.

[GLD]ping 10.25.101.206
Ping 10.25.101.206 (10.25.101.206): 56 data bytes, press CTRL_C to break
56 bytes from 10.25.101.206: icmp_seq=0 ttl=64 time=1.177 ms
56 bytes from 10.25.101.206: icmp_seq=1 ttl=64 time=0.938 ms
56 bytes from 10.25.101.206: icmp_seq=2 ttl=64 time=0.726 ms
56 bytes from 10.25.101.206: icmp_seq=3 ttl=64 time=0.692 ms
56 bytes from 10.25.101.206: icmp_seq=4 ttl=64 time=0.826 ms

--- Ping statistics for 10.25.101.206 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 0.692/0.872/1.177/0.175 ms
[GLD]ping 10.25.101.207
Ping 10.25.101.207 (10.25.101.207): 56 data bytes, press CTRL_C to break
56 bytes from 10.25.101.207: icmp_seq=0 ttl=64 time=0.774 ms
56 bytes from 10.25.101.207: icmp_seq=1 ttl=64 time=0.743 ms
56 bytes from 10.25.101.207: icmp_seq=2 ttl=64 time=0.769 ms
56 bytes from 10.25.101.207: icmp_seq=3 ttl=64 time=0.635 ms
56 bytes from 10.25.101.207: icmp_seq=4 ttl=64 time=0.781 ms

--- Ping statistics for 10.25.101.207 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 0.635/0.740/0.781/0.054 ms
[GLD]ping 10.25.101.208
Ping 10.25.101.208 (10.25.101.208): 56 data bytes, press CTRL_C to break
56 bytes from 10.25.101.208: icmp_seq=0 ttl=64 time=0.970 ms
56 bytes from 10.25.101.208: icmp_seq=1 ttl=64 time=0.821 ms
56 bytes from 10.25.101.208: icmp_seq=2 ttl=64 time=0.737 ms
56 bytes from 10.25.101.208: icmp_seq=3 ttl=64 time=0.832 ms
56 bytes from 10.25.101.208: icmp_seq=4 ttl=64 time=0.930 ms

--- Ping statistics for 10.25.101.208 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 0.737/0.858/0.970/0.083 ms
[GLD]ping 10.25.101.209
Ping 10.25.101.209 (10.25.101.209): 56 data bytes, press CTRL_C to break
56 bytes from 10.25.101.209: icmp_seq=0 ttl=64 time=0.879 ms
56 bytes from 10.25.101.209: icmp_seq=1 ttl=64 time=0.825 ms
56 bytes from 10.25.101.209: icmp_seq=2 ttl=64 time=0.692 ms
56 bytes from 10.25.101.209: icmp_seq=3 ttl=64 time=0.741 ms
56 bytes from 10.25.101.209: icmp_seq=4 ttl=64 time=0.759 ms

--- Ping statistics for 10.25.101.209 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 0.692/0.779/0.879/0.066 ms
[GLD]ping 10.25.101.210
Ping 10.25.101.210 (10.25.101.210): 56 data bytes, press CTRL_C to break
56 bytes from 10.25.101.210: icmp_seq=0 ttl=64 time=0.897 ms
56 bytes from 10.25.101.210: icmp_seq=1 ttl=64 time=0.710 ms
56 bytes from 10.25.101.210: icmp_seq=2 ttl=64 time=1.277 ms
56 bytes from 10.25.101.210: icmp_seq=3 ttl=64 time=0.715 ms
56 bytes from 10.25.101.210: icmp_seq=4 ttl=64 time=0.731 ms

--- Ping statistics for 10.25.101.210 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 0.710/0.866/1.277/0.217 ms
[GLD]ping 10.25.101.211
Ping 10.25.101.211 (10.25.101.211): 56 data bytes, press CTRL_C to break
56 bytes from 10.25.101.211: icmp_seq=0 ttl=64 time=5.480 ms
56 bytes from 10.25.101.211: icmp_seq=1 ttl=64 time=0.821 ms
56 bytes from 10.25.101.211: icmp_seq=2 ttl=64 time=0.798 ms
56 bytes from 10.25.101.211: icmp_seq=3 ttl=64 time=0.730 ms
56 bytes from 10.25.101.211: icmp_seq=4 ttl=64 time=0.759 ms

--- Ping statistics for 10.25.101.211 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 0.730/1.718/5.480/1.881 ms
[GLD]

That was pretty easy, wasn't it? Let's go a bit more advanced.

3. Adding secondary static IP address on the same interface using netplan

Sometimes you may want to use two different IP addresses on the same interface of your server, one being the primary and the other being the secondary. This is how you can configure that. Note that in netplan the addresses key takes a list, so both IPs go into a single addresses line.

# Let NetworkManager manage all devices on this system
network:
    version: 2
    renderer: NetworkManager
    ethernets:
        ens3f0:
          dhcp4: no
          addresses: [10.25.101.206/24, 10.25.101.216/24]
          gateway4: 10.25.101.12
        ens3f1:
          dhcp4: no
          addresses: [10.25.101.207/24, 10.25.101.217/24]
          gateway4: 10.25.101.12
        ens5f0:
          dhcp4: no
          addresses: [10.25.101.208/24, 10.25.101.218/24]
          gateway4: 10.25.101.12
        ens5f1:
          dhcp4: no
          addresses: [10.25.101.209/24, 10.25.101.219/24]
          gateway4: 10.25.101.12
        ens6f0:
          dhcp4: no
          addresses: [10.25.101.210/24, 10.25.101.220/24]
          gateway4: 10.25.101.12
        ens6f1:
          dhcp4: no
          addresses: [10.25.101.211/24, 10.25.101.221/24]
          gateway4: 10.25.101.12

After sudo netplan apply, the IP configuration would look like below. Apart from the primary IP address, you can also see the secondary IP address.

2: ens6f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d0:67:26:bb:17:42 brd ff:ff:ff:ff:ff:ff
    inet 10.25.101.210/24 brd 10.25.101.255 scope global noprefixroute ens6f0
       valid_lft forever preferred_lft forever
    inet 10.25.101.220/24 brd 10.25.101.255 scope global secondary noprefixroute ens6f0
       valid_lft forever preferred_lft forever
    inet6 fe80::d267:26ff:febb:1742/64 scope link 
       valid_lft forever preferred_lft forever
3: ens6f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d0:67:26:bb:17:43 brd ff:ff:ff:ff:ff:ff
    inet 10.25.101.211/24 brd 10.25.101.255 scope global noprefixroute ens6f1
       valid_lft forever preferred_lft forever
    inet 10.25.101.221/24 brd 10.25.101.255 scope global secondary noprefixroute ens6f1
       valid_lft forever preferred_lft forever
    inet6 fe80::d267:26ff:febb:1743/64 scope link 
       valid_lft forever preferred_lft forever
4: ens5f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d0:67:26:bb:10:04 brd ff:ff:ff:ff:ff:ff
    inet 10.25.101.208/24 brd 10.25.101.255 scope global noprefixroute ens5f0
       valid_lft forever preferred_lft forever
    inet 10.25.101.218/24 brd 10.25.101.255 scope global secondary noprefixroute ens5f0
       valid_lft forever preferred_lft forever
    inet6 fe80::d267:26ff:febb:1004/64 scope link 
       valid_lft forever preferred_lft forever
5: ens5f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d0:67:26:bb:10:05 brd ff:ff:ff:ff:ff:ff
    inet 10.25.101.209/24 brd 10.25.101.255 scope global noprefixroute ens5f1
       valid_lft forever preferred_lft forever
    inet 10.25.101.219/24 brd 10.25.101.255 scope global secondary noprefixroute ens5f1
       valid_lft forever preferred_lft forever
    inet6 fe80::d267:26ff:febb:1005/64 scope link 
       valid_lft forever preferred_lft forever
6: ens3f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fa brd ff:ff:ff:ff:ff:ff
    inet 10.25.101.206/24 brd 10.25.101.255 scope global noprefixroute ens3f0
       valid_lft forever preferred_lft forever
    inet 10.25.101.216/24 brd 10.25.101.255 scope global secondary noprefixroute ens3f0
       valid_lft forever preferred_lft forever
    inet6 fe80::d267:26ff:febb:16fa/64 scope link 
       valid_lft forever preferred_lft forever
7: ens3f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fb brd ff:ff:ff:ff:ff:ff
    inet 10.25.101.207/24 brd 10.25.101.255 scope global noprefixroute ens3f1
       valid_lft forever preferred_lft forever
    inet 10.25.101.217/24 brd 10.25.101.255 scope global secondary noprefixroute ens3f1
       valid_lft forever preferred_lft forever
    inet6 fe80::d267:26ff:febb:16fb/64 scope link 
       valid_lft forever preferred_lft forever

Let's ping the newly added IPs.

[GLD]ping 10.25.101.216
Ping 10.25.101.216 (10.25.101.216): 56 data bytes, press CTRL_C to break
56 bytes from 10.25.101.216: icmp_seq=0 ttl=64 time=0.703 ms
56 bytes from 10.25.101.216: icmp_seq=1 ttl=64 time=2.972 ms
56 bytes from 10.25.101.216: icmp_seq=2 ttl=64 time=0.690 ms
56 bytes from 10.25.101.216: icmp_seq=3 ttl=64 time=0.704 ms
56 bytes from 10.25.101.216: icmp_seq=4 ttl=64 time=0.729 ms

--- Ping statistics for 10.25.101.216 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 0.690/1.160/2.972/0.906 ms
[GLD]ping 10.25.101.217
Ping 10.25.101.217 (10.25.101.217): 56 data bytes, press CTRL_C to break
56 bytes from 10.25.101.217: icmp_seq=0 ttl=64 time=0.794 ms
56 bytes from 10.25.101.217: icmp_seq=1 ttl=64 time=0.852 ms
56 bytes from 10.25.101.217: icmp_seq=2 ttl=64 time=0.908 ms
56 bytes from 10.25.101.217: icmp_seq=3 ttl=64 time=0.788 ms
56 bytes from 10.25.101.217: icmp_seq=4 ttl=64 time=0.711 ms

--- Ping statistics for 10.25.101.217 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 0.711/0.811/0.908/0.066 ms
[GLD]ping 10.25.101.218
Ping 10.25.101.218 (10.25.101.218): 56 data bytes, press CTRL_C to break
56 bytes from 10.25.101.218: icmp_seq=0 ttl=64 time=0.817 ms
56 bytes from 10.25.101.218: icmp_seq=1 ttl=64 time=0.727 ms
56 bytes from 10.25.101.218: icmp_seq=2 ttl=64 time=0.725 ms
56 bytes from 10.25.101.218: icmp_seq=3 ttl=64 time=0.651 ms
56 bytes from 10.25.101.218: icmp_seq=4 ttl=64 time=0.860 ms

--- Ping statistics for 10.25.101.218 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 0.651/0.756/0.860/0.074 ms
[GLD]ping 10.25.101.219
Ping 10.25.101.219 (10.25.101.219): 56 data bytes, press CTRL_C to break
56 bytes from 10.25.101.219: icmp_seq=0 ttl=64 time=0.973 ms
56 bytes from 10.25.101.219: icmp_seq=1 ttl=64 time=0.728 ms
56 bytes from 10.25.101.219: icmp_seq=2 ttl=64 time=8.656 ms
56 bytes from 10.25.101.219: icmp_seq=3 ttl=64 time=0.735 ms
56 bytes from 10.25.101.219: icmp_seq=4 ttl=64 time=0.737 ms

--- Ping statistics for 10.25.101.219 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 0.728/2.366/8.656/3.146 ms
[GLD]ping 10.25.101.220
Ping 10.25.101.220 (10.25.101.220): 56 data bytes, press CTRL_C to break
56 bytes from 10.25.101.220: icmp_seq=0 ttl=64 time=1.040 ms
56 bytes from 10.25.101.220: icmp_seq=1 ttl=64 time=0.725 ms
56 bytes from 10.25.101.220: icmp_seq=2 ttl=64 time=0.698 ms
56 bytes from 10.25.101.220: icmp_seq=3 ttl=64 time=0.753 ms
56 bytes from 10.25.101.220: icmp_seq=4 ttl=64 time=0.718 ms

--- Ping statistics for 10.25.101.220 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 0.698/0.787/1.040/0.128 ms
[GLD]ping 10.25.101.221
Ping 10.25.101.221 (10.25.101.221): 56 data bytes, press CTRL_C to break
56 bytes from 10.25.101.221: icmp_seq=0 ttl=64 time=1.314 ms
56 bytes from 10.25.101.221: icmp_seq=1 ttl=64 time=0.803 ms
56 bytes from 10.25.101.221: icmp_seq=2 ttl=64 time=5.904 ms
56 bytes from 10.25.101.221: icmp_seq=3 ttl=64 time=0.735 ms
56 bytes from 10.25.101.221: icmp_seq=4 ttl=64 time=0.766 ms

--- Ping statistics for 10.25.101.221 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 0.735/1.904/5.904/2.011 ms
[GLD]

Awesome, that works just fine!

You can achieve the same result by keeping the first address on DHCP and the second as static in the netplan configuration, as sketched below.
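
A minimal sketch of that mixed setup for a single interface (same interface name as above, with a hypothetical secondary address):

network:
    version: 2
    ethernets:
        ens3f0:
          dhcp4: true                      # primary address comes from DHCP
          addresses: [10.25.101.216/24]    # static secondary on the same interface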

4. Bonding with LACP on netplan

Bonding is a way to club multiple interfaces together as one. If you are from a networking background, you would usually call it a port channel in Cisco terms, or bridge aggregation / link aggregation with other vendors.

Creation of a Bond interface with an IP address – Untagged interface.

I have an Ubuntu machine with six interfaces connected to a switch as below, and I am going to club all six interfaces into bond0 with an IP address. This interface is an untagged port (access port); later we will look at a tagged configuration using a specific VLAN.
Note: switch configuration is not covered here.

(Diagram: bonding using netplan with LACP.)

Step 1. In /etc/netplan edit the yaml file as below.

  • First you need to group all the interfaces as one. Since my interface names all start with ens, I grouped them as below.
ethernets:
   eports:
     match: 
       name: ens*
  • Then define the bond interface, reference the interface group eports that you have just created, and configure the IP address as well.
bonds:
   bond0:
     interfaces: [eports]
     addresses: [10.1.1.10/24]
     gateway4: 10.1.1.1
     nameservers:
       search: [local]
       addresses: [4.2.2.2]
  • After that add the LACP configuration as well.
   parameters:
       mode: 802.3ad
       lacp-rate: fast
       mii-monitor-interval: 100
  • The final netplan configuration would look like below.
network:
 version: 2
 renderer: networkd
 ethernets:
   eports:
     match: 
       name: ens*
 bonds:
   bond0:
     interfaces: [eports]
     addresses: [10.1.1.10/24]
     gateway4: 10.1.1.1
     nameservers:
       search: [local]
       addresses: [4.2.2.2]
     parameters:
       mode: 802.3ad
       lacp-rate: fast
       mii-monitor-interval: 100

Step 2. Save the configuration and apply the change you have just made using the command below.

sudo netplan apply
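
Optionally, you can also ask netplan to just parse the yaml and generate the backend configuration without activating anything, which catches syntax mistakes early:

sudo netplan generate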

Step 3. Verify the configuration.

  • When you type ip addr, you can see the physical interfaces have become SLAVE and are up now.
2: ens6f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fb brd ff:ff:ff:ff:ff:ff
3: ens6f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fb brd ff:ff:ff:ff:ff:ff
4: ens5f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fb brd ff:ff:ff:ff:ff:ff
5: ens5f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fb brd ff:ff:ff:ff:ff:ff
6: ens3f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fb brd ff:ff:ff:ff:ff:ff
7: ens3f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fb brd ff:ff:ff:ff:ff:ff
  • And bond0 has become the master; it is up and it also got the IP address 10.1.1.10.
16: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d0:67:26:bb:16:fb brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.10/24 brd 10.1.1.255 scope global bond0
       valid_lft forever preferred_lft forever
  • Let's ping the default gateway to make sure that the connectivity is okay.
user@ubuntu:/etc/netplan$ ping -c 4 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
64 bytes from 10.1.1.1: icmp_seq=1 ttl=255 time=0.729 ms
64 bytes from 10.1.1.1: icmp_seq=2 ttl=255 time=0.644 ms
64 bytes from 10.1.1.1: icmp_seq=3 ttl=255 time=0.627 ms
64 bytes from 10.1.1.1: icmp_seq=4 ttl=255 time=0.907 ms

--- 10.1.1.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3075ms
rtt min/avg/max/mdev = 0.627/0.726/0.907/0.115 ms
user@ubuntu:/etc/netplan$

Great! Our bond0 is working as expected.
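
Beyond ip addr, the kernel bonding driver also exposes the negotiated LACP state per slave through procfs, which is handy for confirming that the 802.3ad negotiation really succeeded:

cat /proc/net/bonding/bond0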

Creation of a Bond interface with a VLAN tagged.

In this method, we are going to create bond0 with a sub-interface (bond0.10) that represents a VLAN tagged interface.

Step 1. Go to /etc/netplan and edit the netplan configuration file as below.

cd /etc/netplan/
sudo nano 01-network-manager-all.yaml
  • As below, group all the interfaces into one group named eports.
network:
 version: 2
 renderer: networkd
 ethernets:
     eports:
       match:
         name: ens*
  • In the netplan yaml file, add the bond0 interface under bonds; just below that, add the LACP parameters.
bonds:
   bond0:
     interfaces: [eports]
     dhcp4: no
     parameters:
       mode: 802.3ad
       mii-monitor-interval: 100
  • Create a VLAN with the vlans block and name it bond0.10, with the id set to the VLAN ID, 10. You should also point to the underlying bond0 interface using link, then add the IP address parameters.
vlans:
   bond0.10:
     id: 10
     link: bond0
     addresses: [10.1.1.10/24]
     gateway4: 10.1.1.1
     nameservers:
       search: [local]
       addresses: [4.2.2.2]
  • The final netplan configuration would look like below.
network:
 version: 2
 renderer: networkd
 ethernets:
     eports:
       match:
         name: ens*
 bonds:
   bond0:
     interfaces: [eports]
     dhcp4: no
     parameters:
       mode: 802.3ad
       lacp-rate: fast
       mii-monitor-interval: 100
 vlans:
   bond0.10:
     id: 10
     link: bond0
     addresses: [10.1.1.10/24]
     gateway4: 10.1.1.1
     nameservers:
       search: [local]
       addresses: [4.2.2.2]

Step 2. Apply the netplan configuration.

sudo netplan apply

Step 3. Netplan verification.

As you can see, the physical interfaces have become slaves and bond0 has become the master.

2: ens6f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether a6:29:49:0c:2a:db brd ff:ff:ff:ff:ff:ff
3: ens6f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether a6:29:49:0c:2a:db brd ff:ff:ff:ff:ff:ff
4: ens5f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether a6:29:49:0c:2a:db brd ff:ff:ff:ff:ff:ff
5: ens5f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether a6:29:49:0c:2a:db brd ff:ff:ff:ff:ff:ff
6: ens3f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether a6:29:49:0c:2a:db brd ff:ff:ff:ff:ff:ff
7: ens3f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether a6:29:49:0c:2a:db brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a6:29:49:0c:2a:db brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a429:49ff:fe0c:2adb/64 scope link 
       valid_lft forever preferred_lft forever
  • You can also see the VLAN interface as bond0.10.
9: bond0.10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a6:29:49:0c:2a:db brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.10/24 brd 10.1.1.255 scope global bond0.10
       valid_lft forever preferred_lft forever
    inet6 fe80::a429:49ff:fe0c:2adb/64 scope link
  • Let's ping from the switch to the IP of the VLAN interface that we created on Ubuntu using netplan.
<GLD>ping 10.1.1.10
Ping 10.1.1.10 (10.1.1.10): 56 data bytes, press CTRL_C to break
56 bytes from 10.1.1.10: icmp_seq=0 ttl=64 time=1.447 ms
56 bytes from 10.1.1.10: icmp_seq=1 ttl=64 time=1.544 ms
56 bytes from 10.1.1.10: icmp_seq=2 ttl=64 time=1.842 ms
56 bytes from 10.1.1.10: icmp_seq=3 ttl=64 time=5.763 ms
56 bytes from 10.1.1.10: icmp_seq=4 ttl=64 time=1.542 ms

--- Ping statistics for 10.1.1.10 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 1.447/2.428/5.763/1.673 ms
<GLD>

Well, that worked just fine, and our VLAN interface is up and running.
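
If you also want to confirm the 802.1Q tag itself, ip can print the VLAN details of the sub-interface:

ip -d link show bond0.10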

Bonding error

If you ever tried to configure bonding on the Ubuntu desktop using NetworkManager, you might have seen the below error when you tried to group the interfaces.

networkmanager definitions do not support name globbing

To resolve this issue, change the renderer from NetworkManager to networkd as below. If you apply the netplan now, everything should work just fine.
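
A minimal sketch of that fix; only the renderer line changes, the rest of your bonding configuration stays the same:

network:
 version: 2
 renderer: networkd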


