
SDN with OVN and OVS (Lab Build)
This is the final part in my series on OVN and OVS, where we pull together the background research and knowledge I've laid out in parts 1-5 into a working lab based on the following environment.
There are going to be 3 host servers (chassis) in my lab, excitingly named chassis1-3. Each has a dedicated NIC for 'management' traffic (eth0) and a NIC for 'provider' traffic (eth1). The management address space is 10.0.20.0/24 and the provider space is 192.168.0.0/24.
To simulate a virtual 'tenant' we will have 3 VMs (one per chassis) connected to a logical switch (ls-test-internal). External connectivity to/via the provider network will come from a logical router (lr-test) that connects ls-test-internal to a second logical switch (ls-provider), which uplinks to our provider space through an OVS bridge (br-provider).
Let's get into the lab configuration. It is based on SUSE Leap, but all of this translates to other distros with a few changes to paths and network configuration tools.
Chassis base configuration
To prepare the chassis we need to configure some basic settings: the hostname, IP addressing for eth0 (eth1 will remain unnumbered), and installation of packages for libvirt, openvswitch and OVN.
# set the hostname (insert chassis1-3)
sudo nano /etc/hostname
# configure ifcfg scripts for eth0 and eth1
sudo nano /etc/sysconfig/network/ifcfg-eth0
BOOTPROTO='static'
IPADDR=10.0.20.10x/24
STARTMODE='auto'
MTU=9000
ZONE=public
sudo nano /etc/sysconfig/network/ifcfg-eth1
BOOTPROTO=none
STARTMODE=auto
MTU=9000
ZONE=public
# configure a default route
sudo nano /etc/sysconfig/network/routes
default 10.0.20.254
# configure DNS resolution
sudo nano /etc/sysconfig/network/config
NETCONFIG_DNS_STATIC_SERVERS="10.0.20.254"
For eth0 and eth1 I have set the MTU to 9000 to allow jumbo frames and to avoid having to reduce the MTU on guest VMs from the default of 1500. Jumbo frames might not be supported in your network switch environment.
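To sanity-check that jumbo frames actually pass between chassis, a ping with the don't-fragment flag works (8972 bytes of ICMP payload plus the ICMP and IP headers fills a 9000-byte MTU; 10.0.20.102 here stands in for whichever peer you're testing against):
# verify jumbo frames end-to-end without fragmentation
ping -M do -s 8972 -c 3 10.0.20.102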
# install libvirt packages using the kvm_server pattern
sudo zypper in -t pattern kvm_server
# install OVN and openvswitch
sudo zypper in openvswitch3 ovn3 ovn3-central ovn3-host
OK, those are the main steps for configuring the chassis. Next we can start configuring OVN and OVS.
OVN and OVS configuration
# set the openvswitch system-id to the hostname (chassis1-3)
nano /etc/openvswitch/system-id.conf
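If you prefer not to open an editor, writing the hostname straight into the file does the same thing:
# write the hostname into system-id.conf (same effect as editing it by hand)
echo "$(hostname)" > /etc/openvswitch/system-id.conf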
# enable the firewall services and additional ports for ovn-central (northd) to function.
firewall-cmd --zone=public --add-service=ovn-central-firewall-service --permanent
firewall-cmd --zone=public --add-service=ovn-host-firewall-service --permanent
firewall-cmd --zone=public --add-port=6643/tcp --permanent
firewall-cmd --zone=public --add-port=6644/tcp --permanent
# reload the firewall to make changes active
firewall-cmd --reload
# start openvswitch and ovn-controller
systemctl enable --now openvswitch
systemctl enable --now ovn-controller
Before we start ovn-northd (central) we need to provide the configuration needed by the ovn-ctl script to initialise our cluster. You don't run ovn-ctl directly; it runs when you start ovn-northd, using the values in OVN_NORTHD_OPTS during the init process.
# define OVN_NORTHD_OPTS in /etc/sysconfig/ovn
# first server (chassis1)
echo 'OVN_NORTHD_OPTS="--db-nb-addr=10.0.20.101 --db-nb-create-insecure-remote=yes --db-sb-addr=10.0.20.101 --db-sb-create-insecure-remote=yes --db-nb-cluster-local-addr=10.0.20.101 --db-sb-cluster-local-addr=10.0.20.101 --ovn-northd-nb-db=tcp:10.0.20.101:6641,tcp:10.0.20.102:6641,tcp:10.0.20.103:6641 --ovn-northd-sb-db=tcp:10.0.20.101:6642,tcp:10.0.20.102:6642,tcp:10.0.20.103:6642"' >> /etc/sysconfig/ovn
# second server (chassis2)
echo 'OVN_NORTHD_OPTS="--db-nb-addr=10.0.20.102 --db-nb-create-insecure-remote=yes --db-sb-addr=10.0.20.102 --db-sb-create-insecure-remote=yes --db-nb-cluster-local-addr=10.0.20.102 --db-sb-cluster-local-addr=10.0.20.102 --db-nb-cluster-remote-addr=10.0.20.101 --db-sb-cluster-remote-addr=10.0.20.101 --ovn-northd-nb-db=tcp:10.0.20.102:6641,tcp:10.0.20.101:6641,tcp:10.0.20.103:6641 --ovn-northd-sb-db=tcp:10.0.20.102:6642,tcp:10.0.20.101:6642,tcp:10.0.20.103:6642"' >> /etc/sysconfig/ovn
# third server (chassis3)
echo 'OVN_NORTHD_OPTS="--db-nb-addr=10.0.20.103 --db-nb-create-insecure-remote=yes --db-sb-addr=10.0.20.103 --db-sb-create-insecure-remote=yes --db-nb-cluster-local-addr=10.0.20.103 --db-sb-cluster-local-addr=10.0.20.103 --db-nb-cluster-remote-addr=10.0.20.101 --db-sb-cluster-remote-addr=10.0.20.101 --ovn-northd-nb-db=tcp:10.0.20.103:6641,tcp:10.0.20.101:6641,tcp:10.0.20.102:6641 --ovn-northd-sb-db=tcp:10.0.20.103:6642,tcp:10.0.20.101:6642,tcp:10.0.20.102:6642"' >> /etc/sysconfig/ovn
Before we start ovn-northd there are 2 environment variables I've found necessary to avoid constantly getting an error when running ovn-nbctl and ovn-sbctl commands. Without these you have to know which chassis is the current leader and run all commands from there, or add the --no-leader-only option to every command, which is a pain.
# update system environment variables (or do it for your user under .bashrc)
nano /etc/environment
OVN_NBCTL_OPTIONS=--no-leader-only
OVN_SBCTL_OPTIONS=--no-leader-only
Now you can start ovn-northd. Check the status with journalctl or tail the log files to validate that the cluster has formed.
# start ovn-northd
systemctl enable --now ovn-northd.service
# check service status
journalctl -u ovn-northd.service
# tail the log file
tail -f /var/log/ovn/ovn-northd.log
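You can also ask each database for its Raft view directly; a quick sketch, assuming the control sockets live under /var/run/ovn (the path can differ between distros):
# query Raft cluster status of the NB and SB databases
ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound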
The final stage of configuration for OVN and OVS is to connect ovn-controller to ovn-central (northd) and create the bridge used for the uplink to the provider network. In my case I will let wicked (the network configuration tool for SUSE Leap) handle the creation of the OVS bridge. As far as I know you can achieve the same thing in Netplan, but I haven't tested it.
# create wicked ifcfg script for br-provider OVS bridge
nano /etc/sysconfig/network/ifcfg-br-provider
STARTMODE=auto
BOOTPROTO=none
OVS_BRIDGE=yes
OVS_BRIDGE_PORT_DEVICE=eth1
ZONE=public
# reload all interfaces to apply changes
wicked ifreload all
# check br-provider is created
ovs-vsctl show
7eeae319-dece-4ada-baba-fb46f3c2e82e
    Bridge br-provider
        Port eth1
            Interface eth1
        Port br-provider
            Interface br-provider
                type: internal
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "3.1.0"
And finally update OVS to connect ovn-controller to ovn-central (northd).
# chassis1
ovs-vsctl set open_vswitch . external-ids:ovn-remote=tcp:10.0.20.101:6642,tcp:10.0.20.102:6642,tcp:10.0.20.103:6642
ovs-vsctl set open_vswitch . external-ids:ovn-encap-type=geneve
ovs-vsctl set open_vswitch . external-ids:ovn-nb=tcp:10.0.20.101:6641
ovs-vsctl set open_vswitch . external-ids:ovn-encap-ip=10.0.20.101
ovs-vsctl set open_vswitch . external-ids:ovn-bridge-mappings=UPLINK:br-provider
# chassis2
ovs-vsctl set open_vswitch . external-ids:ovn-remote=tcp:10.0.20.101:6642,tcp:10.0.20.102:6642,tcp:10.0.20.103:6642
ovs-vsctl set open_vswitch . external-ids:ovn-encap-type=geneve
ovs-vsctl set open_vswitch . external-ids:ovn-nb=tcp:10.0.20.102:6641
ovs-vsctl set open_vswitch . external-ids:ovn-encap-ip=10.0.20.102
ovs-vsctl set open_vswitch . external-ids:ovn-bridge-mappings=UPLINK:br-provider
# chassis3
ovs-vsctl set open_vswitch . external-ids:ovn-remote=tcp:10.0.20.101:6642,tcp:10.0.20.102:6642,tcp:10.0.20.103:6642
ovs-vsctl set open_vswitch . external-ids:ovn-encap-type=geneve
ovs-vsctl set open_vswitch . external-ids:ovn-nb=tcp:10.0.20.103:6641
ovs-vsctl set open_vswitch . external-ids:ovn-encap-ip=10.0.20.103
ovs-vsctl set open_vswitch . external-ids:ovn-bridge-mappings=UPLINK:br-provider
Note the last setting in each of the chassis code blocks above where I define an arbitrary name "UPLINK" as an ovn-bridge-mapping to the br-provider bridge we configured before.
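You can read the mapping back at any time to confirm what ovn-controller will use:
# confirm the bridge mapping stored in the Open_vSwitch table
ovs-vsctl get open_vswitch . external-ids:ovn-bridge-mappings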
To check this is all working and that our OVS chassis have been registered in ovn-central by ovn-controller, use the ovn-sbctl show command.
ovn-sbctl show
Chassis chassis1
    hostname: chassis1
    Encap geneve
        ip: "10.0.20.101"
        options: {csum="true"}
Chassis chassis2
    hostname: chassis2
    Encap geneve
        ip: "10.0.20.102"
        options: {csum="true"}
Chassis chassis3
    hostname: chassis3
    Encap geneve
        ip: "10.0.20.103"
        options: {csum="true"}
Guest integration
We integrate libvirt guests by updating their network interface definition to use an openvswitch virtual port. This connects (patches) the guest interface to br-int so the OVN programming can take effect.
The following is an example of a front-end network defined through virsh.
# define a libvirt network from a predefined template
nano ovs_net.xml
<network>
  <name>ovs-net</name>
  <forward mode='bridge'/>
  <bridge name='br-int'/>
  <virtualport type='openvswitch'/>
</network>
# define the network using virsh
virsh net-define ovs_net.xml
# set the network to autostart
virsh net-autostart ovs-net
# start the network
virsh net-start ovs-net
Now launch/install a VM. My example is based on a downloaded cloud image of SUSE Leap Micro; you can use any image you want, as that's not the focus of this lab.
# launching vm1 from a cloud image
virt-install --name vm1 --os-variant slm6.0 --vcpus 2 --memory 2048 \
--graphics vnc --virt-type kvm --disk /var/lib/libvirt/images/vm1.qcow2 \
--cdrom /var/lib/libvirt/images/cloudinit.iso \
--network=network=ovs-net,model=virtio,virtualport_type=openvswitch,target.dev=vm1_eth0
Once the guest boots you can see the newly created port (vm1_eth0) attached to br-int. We need to make a note of the assigned port id and attached-mac using the following commands.
# check vm1_eth0 has been created as a port in OVS
ovs-vsctl show
7eeae319-dece-4ada-baba-fb46f3c2e82e
    Bridge br-provider
        Port eth1
            Interface eth1
        Port br-provider
            Interface br-provider
                type: internal
    Bridge br-int
        fail_mode: secure
        datapath_type: system
        Port vm1_eth0
            Interface vm1_eth0
        Port br-int
            Interface br-int
                type: internal
        Port ovn-leap42-0
            Interface ovn-leap42-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.20.102"}
        Port ovn-leap43-0
            Interface ovn-leap43-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.20.103"}
    ovs_version: "3.1.0"
# get the port id and attached-mac
ovs-vsctl get interface vm1_eth0 external_ids:iface-id
"76757c0f-9892-4614-bdc0-96756d3e28da"
ovs-vsctl get interface vm1_eth0 external_ids:attached-mac
"52:54:00:b2:9b:2b"
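If you'd rather pull the iface-id and attached-mac for every guest port in one pass, listing the whole Interface table works too:
# dump name and external_ids for all OVS interfaces on this chassis
ovs-vsctl --columns=name,external_ids list interface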
Now that we have guest VMs booted (rerun the steps above for VMs 2 and 3), we can do the rest of the network programming via the ovn-central (northd) API.
OVN programming and configuration
First we need to create the logical switch to connect our guest VMs.
# create the logical switch (ls-test-internal)
ovn-nbctl ls-add ls-test-internal
# add ports for each guest VM (repeat for each guest VM)
ovn-nbctl lsp-add ls-test-internal 76757c0f-9892-4614-bdc0-96756d3e28da
ovn-nbctl lsp-set-addresses 76757c0f-9892-4614-bdc0-96756d3e28da "52:54:00:b2:9b:2b 192.168.100.11/24"
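# repeat for vm2 and vm3, substituting each guest's own iface-id and attached-mac
# (placeholder values below; continuing my addressing scheme of .12 and .13 for the other guests)
ovn-nbctl lsp-add ls-test-internal <vm2-iface-id>
ovn-nbctl lsp-set-addresses <vm2-iface-id> "<vm2-attached-mac> 192.168.100.12/24"
ovn-nbctl lsp-add ls-test-internal <vm3-iface-id>
ovn-nbctl lsp-set-addresses <vm3-iface-id> "<vm3-attached-mac> 192.168.100.13/24"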
# validate ports are up (connected)
ovn-nbctl list logical_switch_port
_uuid : ad4c0257-d4ab-40e9-98a5-7c4b28d6ff07
addresses : ["52:54:00:b2:9b:2b 192.168.100.11/24"]
dhcpv4_options : []
dhcpv6_options : []
dynamic_addresses : []
enabled : []
external_ids : {}
ha_chassis_group : []
mirror_rules : []
name : "76757c0f-9892-4614-bdc0-96756d3e28da"
options : {}
parent_name : []
port_security : []
tag : []
tag_request : []
type : ""
up : true
All being well (ovn-nbctl list logical_switch_port shows "up : true" for each port), your guests are now connected to ls-test-internal and can ping each other. I am not going to cover guest OS configuration in this post; just give them an IP and test connectivity.
Next let's add the logical switch ls-provider to uplink us to the provider network, and create a logical router lr-test to provide external connectivity.
Start by defining the new switch, router and associated ports.
# create logical switch ls-provider
ovn-nbctl ls-add ls-provider
# create logical router lr-test
ovn-nbctl lr-add lr-test
# connect lr-test to ls-test-internal
ovn-nbctl lrp-add lr-test lr-test-ls-test-internal 00:00:00:00:00:01 192.168.100.1/24
# connect lr-test to ls-provider
ovn-nbctl lrp-add lr-test lr-test-ls-provider 00:00:00:00:00:02 192.168.0.1/24
# connect ls-test-internal to lr-test
ovn-nbctl lsp-add ls-test-internal ls-test-internal-lr-test
ovn-nbctl lsp-set-type ls-test-internal-lr-test router
ovn-nbctl lsp-set-addresses ls-test-internal-lr-test router
ovn-nbctl lsp-set-options ls-test-internal-lr-test router-port=lr-test-ls-test-internal
# connect ls-provider to lr-test
ovn-nbctl lsp-add ls-provider ls-provider-lr-test
ovn-nbctl lsp-set-type ls-provider-lr-test router
ovn-nbctl lsp-set-addresses ls-provider-lr-test router
ovn-nbctl lsp-set-options ls-provider-lr-test router-port=lr-test-ls-provider
Now that we have the logical router created and connected to ls-test-internal, our guest VMs will be able to ping both the internal-facing IP (192.168.100.1) and the provider-facing IP (192.168.0.1) of the router.
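A quick check from any guest, for example vm1:
# both router addresses should now answer from inside the tenant network
ping -c 3 192.168.100.1
ping -c 3 192.168.0.1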
To gain access to the provider network we now need to connect ls-provider to the physical uplink, by adding a localnet port on ls-provider and linking it to the br-provider bridge through an ovn-bridge-mapping.
# create the localnet port
ovn-nbctl lsp-add ls-provider ls-provider-br-provider
ovn-nbctl lsp-set-type ls-provider-br-provider localnet
ovn-nbctl lsp-set-addresses ls-provider-br-provider unknown
ovn-nbctl lsp-set-options ls-provider-br-provider network_name=UPLINK
Note that lsp-set-options defines a network_name key/value to tell OVS which ovn-bridge-mapping to use when looking up the bridge for creating the associated patch port.
Before we schedule this port on our chassis, let's add a route into lr-test so that it forwards traffic. In this example I am just using a default route to the firewall interface terminating the provider network.
# set a route on lr-test
ovn-nbctl lr-route-add lr-test 0.0.0.0/0 192.168.0.254
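You can confirm the route landed with:
# list the static routes programmed on lr-test
ovn-nbctl lr-route-list lr-test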
We can now schedule our port. As this is an HA setup with chassis1-3 participating equally, I will assign a priority to each to ensure one of them is always hosting the gateway port binding.
# assign gateway chassis priorities (higher wins)
ovn-nbctl lrp-set-gateway-chassis lr-test-ls-provider chassis1 100
ovn-nbctl lrp-set-gateway-chassis lr-test-ls-provider chassis2 75
ovn-nbctl lrp-set-gateway-chassis lr-test-ls-provider chassis3 50
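To confirm the priorities were stored:
# show the gateway chassis list and priorities for the provider-facing router port
ovn-nbctl lrp-get-gateway-chassis lr-test-ls-provider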
If you have something that will respond on 192.168.0.254 you should now be able to ping it from any of your guest VMs.
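To check everything is working end to end, these are the commands I reach for (exact output depends on your environment):
# logical topology: both switches, lr-test and all of the ports we created
ovn-nbctl show
# chassis view: the chassis currently hosting the gateway port shows its binding
ovn-sbctl show
# and from a guest VM, test reachability beyond the provider uplink
ping -c 3 192.168.0.254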
Wrapping Up
Throughout this series I've tried to demystify some of the inner workings of configuring OVN. Yes, you could use LXD or even OpenStack, but I hope that by following this series you've begun to understand some of the foundational elements of software defined networking that underpin these amazing open source projects.
Time permitting, I will start unpacking some of the other topics that should now be accessible, like multi-tenancy and other virtual network functions, so leave a comment if there is something that would aid your own project.