Running Docker With Open vSwitch Network Overlay on CentOS 7


Docker containers are the technology of the decade that really sped up the CI/CD cycle. Running clustered Docker solutions sometimes requires network fencing or multi-node connectivity. Luckily, Docker supports Open vSwitch overlay networking out of the box with just a few modifications to the start-up services, which I'm about to cover in this post.

Requirements

  • At least 2 CentOS 7.x nodes for overlay networking to work (10.0.10.2 and 10.0.10.3)
  • Docker (current) from CentOS official repo (version 1.12.6 at the time of writing)
  • Open vSwitch 2.8.0 (RPM manually built from source)
  • HashiCorp Consul for Docker configuration store

Installation

Open vSwitch RPMs

I prefer to build RPMs from source whenever possible and distribute them to servers from a local repository. Copy the contents of the following code snippet to a file on your system and execute it. On success, you should have RPMs created under $HOME/rpmbuild/openvswitch/RPMS.

Build Open vSwitch RPMs
#!/bin/bash
set -e ${VERBOSE:+-x}
OVS_VERSION=${OVS_VERSION:-2.8.0}
JOB_BASE_NAME="openvswitch"

TOPDIR="${HOME}/rpmbuild/${JOB_BASE_NAME}"

# Create the rpmbuild folder skeleton
mkdir -p ${TOPDIR}/{BUILD,RPMS,SOURCES,SPECS,SRPMS}

# Download Open vSwitch
wget http://openvswitch.org/releases/openvswitch-${OVS_VERSION}.tar.gz -O ${TOPDIR}/SOURCES/openvswitch-${OVS_VERSION}.tar.gz

# Unarchive it
tar zxf ${TOPDIR}/SOURCES/openvswitch-${OVS_VERSION}.tar.gz

# Build the RPMs
if [[ `lsb_release -r | awk '{print $2}'` =~ ^7 ]]; then
  echo "Building RPMs for CentOS 7"
  echo
  spectool -A -g -C ${TOPDIR}/SOURCES ./openvswitch-${OVS_VERSION}/rhel/openvswitch-fedora.spec
  spectool -A -g -C ${TOPDIR}/SOURCES ./openvswitch-${OVS_VERSION}/rhel/openvswitch-dkms.spec
  sudo /usr/bin/yum-builddep -y ./openvswitch-${OVS_VERSION}/rhel/openvswitch-fedora.spec
  sudo /usr/bin/yum-builddep -y ./openvswitch-${OVS_VERSION}/rhel/openvswitch-dkms.spec
  rpmbuild ${VERBOSE:+-v} --define "_topdir ${TOPDIR}" -ba --nocheck ./openvswitch-${OVS_VERSION}/rhel/openvswitch-fedora.spec
  rpmbuild ${VERBOSE:+-v} --define "_topdir ${TOPDIR}" -ba --nocheck ./openvswitch-${OVS_VERSION}/rhel/openvswitch-dkms.spec

  echo "Copying files to remote RPMs storage"
  cp -f ${VERBOSE:+-v} ${TOPDIR}/SRPMS/*.src.rpm ${HOME}/rpms/source
  cp -f ${VERBOSE:+-v} ${TOPDIR}/RPMS/*/*.rpm ${HOME}/rpms
else
  echo "Build for `lsb_release -r` not yet supported"
  exit 0
fi

rm -rf ./openvswitch-${OVS_VERSION}
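
If you save the snippet as, say, build-ovs-rpms.sh (the filename is just an example), it can be run as a regular user with sudo rights. OVS_VERSION and VERBOSE are optional environment variables read by the script.

Run the build script (example)
chmod +x build-ovs-rpms.sh
OVS_VERSION=2.8.0 VERBOSE=1 ./build-ovs-rpms.sh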

Install Open vSwitch

Install Open vSwitch
yum localinstall $HOME/rpmbuild/openvswitch/RPMS/x86_64/openvswitch-2.8.0-1.el7.centos.x86_64.rpm \
  $HOME/rpmbuild/openvswitch/RPMS/x86_64/openvswitch-ovn-central-2.8.0-1.el7.centos.x86_64.rpm \
  $HOME/rpmbuild/openvswitch/RPMS/x86_64/openvswitch-ovn-host-2.8.0-1.el7.centos.x86_64.rpm \
  $HOME/rpmbuild/openvswitch/RPMS/x86_64/openvswitch-ovn-docker-2.8.0-1.el7.centos.x86_64.rpm
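
You will most likely also want the openvswitch service itself enabled and running on every node before configuring OVN; the RPMs built above ship a systemd unit for it. Treat this as a sanity check rather than a hard requirement, since the OVN units we start later pull it in as a dependency:

Enable and start Open vSwitch (optional sanity check)
systemctl enable openvswitch.service
systemctl start openvswitch.service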

Docker

Install Docker
yum install docker

HashiCorp Consul

I plan to run Consul as a Docker container on a separate node, so I'm not covering the manual installation steps. You can again build RPMs and distribute them from a repository to your nodes.

Configuration

HashiCorp Consul

On an external node of your choice, run a Consul container and expose its ports. Make a note of the node's IP; we're going to need it later on.

Run Consul in a container
docker run -d -p 8400:8400 -p 8500:8500 -p 8600:53 consul
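
If you want to verify that the agent is reachable before moving on, Consul's HTTP API can be queried from one of the Docker nodes (an optional check; the endpoint below is part of Consul's standard status API, and $IP_OF_EXTERNAL_CONSUL_NODE is the IP you just noted):

Check Consul is reachable (optional)
curl http://$IP_OF_EXTERNAL_CONSUL_NODE:8500/v1/status/leader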

Docker

For multi-host networking with OVN and Docker, Docker has to be started with a distributed key-value store. Start your Docker daemon with:

Start Docker with distributed key-value store
docker daemon --cluster-store=consul://$IP_OF_EXTERNAL_CONSUL_NODE:8500 \
  --cluster-advertise=$HOST_IP:0

where $IP_OF_EXTERNAL_CONSUL_NODE is the IP address you noted in the previous step and $HOST_IP is the IP of the node you start Docker on. These options can be included in /etc/sysconfig/docker too.
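
On CentOS 7 the stock docker package reads extra daemon flags from the OPTIONS variable in /etc/sysconfig/docker, so a sketch of that file could look like the following (the addresses are placeholders: 10.0.10.4 stands in for the external Consul node and 10.0.10.2 is one of the Docker hosts from the Requirements section):

Example /etc/sysconfig/docker (placeholder addresses)
# --cluster-store points at the external Consul node (placeholder IP),
# --cluster-advertise at this host's own IP
OPTIONS='--selinux-enabled --cluster-store=consul://10.0.10.4:8500 --cluster-advertise=10.0.10.2:0'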

Open vSwitch overlay mode

Start the ovn-northd daemon on the external node and expose the OVN Northbound and Southbound databases over TCP:

Start OVN Northbound Database
systemctl enable ovn-northd.service
systemctl start ovn-northd.service
ovn-nbctl set-connection ptcp:6641
ovn-sbctl set-connection ptcp:6642
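
To confirm that both databases are now listening on TCP, a quick check on the external node (just a convenience, not required) is:

Check OVN database listeners (optional)
ss -ltn | grep -E '6641|6642'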

On each host where you plan to spawn your containers, you will need to run the command below once. You may need to run it again if your OVS database gets cleared; it is harmless to run it again in any case:

ovs-vsctl set Open_vSwitch . \
    external_ids:ovn-remote="tcp:$CENTRAL_IP:6642" \
    external_ids:ovn-nb="tcp:$CENTRAL_IP:6641" \
    external_ids:ovn-encap-ip=$LOCAL_IP \
    external_ids:ovn-encap-type="$ENCAP_TYPE"

where:

$CENTRAL_IP is the IP address of the external node that runs ovn-northd and the OVN databases.

$LOCAL_IP is the IP address via which other hosts can reach this host. This acts as your local tunnel endpoint.

$ENCAP_TYPE is the type of tunnel that you would like to use for overlay networking. The options are geneve or stt. Your kernel must have support for your chosen $ENCAP_TYPE. Both geneve and stt are part of the Open vSwitch kernel module (the openvswitch-dkms RPM built earlier ships an out-of-tree build of it).
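
As a concrete illustration, on the 10.0.10.2 node from the Requirements section, using geneve encapsulation and a hypothetical central node address of 10.0.10.4, the command becomes:

Example host configuration (placeholder central IP)
# 10.0.10.4 is a placeholder for the external/central node's IP
ovs-vsctl set Open_vSwitch . \
    external_ids:ovn-remote="tcp:10.0.10.4:6642" \
    external_ids:ovn-nb="tcp:10.0.10.4:6641" \
    external_ids:ovn-encap-ip=10.0.10.2 \
    external_ids:ovn-encap-type="geneve"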

Start the ovn-controller

systemctl enable ovn-controller.service
systemctl start ovn-controller.service

Start the Open vSwitch network driver

By default Docker uses the Linux bridge for networking, but it has support for external drivers. To use Open vSwitch instead of the Linux bridge, you will need to start the Open vSwitch driver. Unluckily for us there is no .service script shipped, so copy the contents of the following snippet to /etc/systemd/system/ovn-docker-overlay-driver.service:

Open vSwitch network driver service script
[Unit]
Description=OVN Docker Overlay daemon
After=syslog.target
Requires=openvswitch.service
After=openvswitch.service

[Service]
Type=forking
RemainAfterExit=yes
Environment=OVS_RUNDIR=%t/openvswitch OVS_DBDIR=/var/lib/openvswitch
EnvironmentFile=-/etc/sysconfig/ovn-docker-overlay-driver
PIDFile=/var/run/openvswitch/ovn-docker-overlay-driver.pid
ExecStart=/usr/bin/ovn-docker-overlay-driver --detach --pidfile /var/run/openvswitch/ovn-docker-overlay-driver.pid $OVN_DOCKER_OVERLAY_OPTS
KillSignal=TERM

[Install]
WantedBy=multi-user.target

and start it:

Start Open vSwitch network driver service
systemctl enable ovn-docker-overlay-driver.service
systemctl start ovn-docker-overlay-driver.service

Test setup

Create OVN overlay network

On one of the nodes create the overlay network. Remember, the Docker nodes use Consul as their configuration store.

Create OVN overlay network
docker network create -d openvswitch --subnet=192.168.200.0/24 ovsnetwork
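
Since both Docker daemons share the same Consul store, the new network should show up on the other node too, and OVN on the central node should now have a matching logical switch. A quick optional check:

Verify the overlay network (optional)
docker network ls | grep ovsnetwork    # on each Docker node
ovn-nbctl show                         # on the central node; should list a logical switch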

Attach Docker container to the overlay network

On each Docker host run a small container attached to the network we’ve just created.

Attach Docker container
docker run -it --net=ovsnetwork alpine /bin/ash

If everything was set up properly, you should be able to ping each container from the other. Congratulations!
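
For example, find each container's address with docker network inspect ovsnetwork on its host (the address below is just an example of what the subnet above would typically hand out), then ping from the shell of the container on the other node:

Ping across the overlay (example address)
# inside the alpine container on the second host; 192.168.200.2 is an example address
ping -c 3 192.168.200.2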
