VNF-in-a-Box: Set Up a Playground for Edge Services on DPDK-Supported Hardware


Introduction

The intent of this tutorial is to show how to create a playground-like environment for the prototyping and evaluation of potential edge services. It serves as an introduction to the chosen technologies and contains instructions to create a test environment on various infrastructures, including the Intel Atom® C3000 processor series kits for use as the Network Function Virtualized Infrastructure (NFVI). To accelerate packet processing, one physical network interface controller (NIC) will be passed down to the containers by making use of the Data Plane Development Kit (DPDK) and a plugin for Vector Packet Processing (VPP). Alternatively, the same setup can be achieved by using a virtual machine that can be created using instructions in the tutorial VNF-in-a-Box: Set Up a Playground for Edge Services on a Virtual Machine.

Playground Components

This playground includes the following tools and frameworks:


Kata Containers

Kata Containers is a Docker* runtime alternative for greater workload isolation and security. It is an open source project designed to bring together the advantages of virtual machines and containers to build a standard implementation of lightweight virtual machines (VMs) that act and perform like containers but provide the workload isolation and security advantages of VMs. Because the Kata runtime is compatible with the Open Container Initiative* (OCI) specs, Kata Containers can run side by side with Docker (runc*) containers — even on the same host — and work seamlessly with the Kubernetes* Container Runtime Interface (CRI). Kata Containers enjoys industry support from some of the world's largest cloud service providers, operating system vendors, and telecom equipment manufacturers. The code is hosted on GitHub* under the Apache* License Version 2 and the project is managed by the OpenStack* Foundation.

Data Plane Development Kit (DPDK)

Data Plane Development Kit (DPDK) is a set of libraries that accelerate packet processing workloads running on a wide variety of CPU architectures. In this playground it serves as an alternative to the default Docker networking path.

Open Baton

Open Baton is a network functions virtualization (NFV) management and orchestration (MANO) framework, driven by Fraunhofer FOKUS and TU Berlin. It provides full automation of service deployment and lifecycle management and is the result of an agile design process for building a framework capable of orchestrating virtualized network functions (VNF) services across heterogeneous infrastructures.

Vector Packet Processing (VPP)

VPP is the open source version of the vector packet processing technology from Cisco*, a high-performance packet-processing stack that can run on commodity CPUs.

Tutorial Goal

The goal in the following sections is to set up a test environment that uses Docker to deploy Kata Containers. The container will be connected to one of the physical NICs of the host, providing a boost in packet throughput. This is achieved by configuring the host to allocate hardware resources to VPP (using DPDK), including RAM (in the form of hugepages), CPUs (by dedicating cores to DPDK), and network resources (using DPDK NIC drivers). The original operating system will no longer manage these resources and cannot interfere with subsequent operations. Finally, this tutorial introduces Open Baton as an NFV MANO framework with a modified/enhanced Docker Virtual Network Function Manager (VNFM). We’ll use it to deploy DPDK-empowered virtual network functions (VNFs) on the Intel Atom C3000 processor series kits. This tutorial is partially based on the following guides:

Kata Container Developer Guide

DPDK Quick Start Guide

Prerequisites

Hardware

We used the following hardware for this tutorial:

  • Intel Atom® C3000 processor series kit
  • Workstation with Ubuntu* 18
  • Switch + Ethernet cables
  • USB Stick (to install CentOS* on the Intel Atom C3000 processor series kit)

The specifications of the Intel Atom C3000 processor series kit we used for this tutorial are as follows:

CPU: 4 cores, 2.2 GHz
Ethernet: 4x 1 GbE ports (RJ-45)
I/O connectors: 1x USB 3.0 port, 1x micro-USB console port

Software

The configuration has been tested using the following software:

OS: CentOS* 7
Kernel: 3.10.0-862.14.4.el7.x86_64
Docker*: 18.09.0-ce (package)
DPDK usertools: 17.11.2 (source)
VPP: 18.10 release (package)
DPDK (VPP): 18.08 (installed with VPP)
Go*: 1.9.4 (package)
Kata Containers: 1.4.0 (source/package)
QEMU-lite: 2.11 (installed with Kata)
VPP CNM plugin: latest (10 Mar 2018)

Configure the Playground – Intel Atom® C3000 Processor Series Kit

This chapter deals with the configuration and installation on real hardware using the recently released Intel Atom C3000 processor series kit. With its 4 cores and 8 GB of DDR4-1866 memory, it works well for this playground, providing a small box with enough resources and the required features.

Depending on how you received the hardware, you may have to set up and configure the box via the serial console. This procedure is described in the user manual of the Intel Atom C3000 processor series kit. To follow this tutorial, you’ll need an open terminal session with root rights connected to the box (e.g., Laptop running Ubuntu* 16.04 - connected via USB).

CPU Flags and HugePages

The first step is to use the GRand Unified Bootloader (GRUB) to set a few CPU flags and allocate hugepages. You must decide whether to allocate the hugepages at boot or afterward, and whether to use 2 MB (hugepage) or 1 GB (gigapage) pages. In this tutorial, we will allocate 1600 * 2 MB pages after booting, as we have a total of about 8 GB RAM available on the machine. These hugepages will be used by VPP as well as by the Kata Containers. The purpose of the hugepages is to increase overall performance: the translation lookaside buffer (TLB) in the CPU caches virtual-to-physical address mappings.

Assuming the TLB contains 256 entries and each entry maps a 4,096-byte page, the TLB can cover up to 1 MB of memory without hugepages. With 2 MB hugepages, each TLB entry points to 2 MB, which increases the coverage to 512 MB.
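As a quick sanity check of the arithmetic above, the following shell one-liners reproduce the coverage figures (the 256-entry TLB is an assumed example value, not something measured on this box):

# Coverage with 4 KB pages: 256 entries * 4 KB
echo "$(( 256 * 4 )) KB"      # 1024 KB = 1 MB
# Coverage with 2 MB hugepages: 256 entries * 2 MB
echo "$(( 256 * 2 )) MB"      # 512 MB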

# Use your favourite editor to modify /etc/default/grub
# e.g. : 
vi /etc/default/grub

Locate the line starting with GRUB_CMDLINE_LINUX_DEFAULT, which we’ll modify to enable the input-output memory management unit (IOMMU) and Intel® Virtualization Technology (Intel® VT) for Directed I/O, so that hardware resources can be passed down to virtual machines.

# Therefore we will append the following to the already existing content
iommu=pt intel_iommu=on

You may also consider limiting the CPUs available to the operating system and assigning dedicated CPUs to the VPP-DPDK environment.

# If we want to reserve 2 of our 4 cores (numbered 0-3) for VPP-DPDK, we could add the following
isolcpus=2-3

The result may look like the following, depending on your preferences:

GRUB_CMDLINE_LINUX_DEFAULT="crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb iommu=pt intel_iommu=on isolcpus=2-3"

Now let’s allocate a few hugepages. We will check the default settings and currently available hugepages and afterward apply a new configuration.

# You can check the default hugepage settings via:
cat /proc/meminfo | grep Huge

# Allocate 1600 hugepages 
echo "vm.nr_hugepages = 1600" >> /etc/sysctl.conf
sysctl -p

Depending on your configuration, the hugepage setup might look like the following:

# cat /proc/meminfo | grep Huge
AnonHugePages:     16384 kB
HugePages_Total:    1600
HugePages_Free:     1600
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Because we have modified the GRUB configuration we’ll rebuild the GRUB config file.

grub2-mkconfig -o "$(readlink -e /etc/grub2.cfg)"
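Once the box has been rebooted later in this tutorial, you can verify that the new kernel parameters were actually applied; a minimal check:

# The output should contain iommu=pt, intel_iommu=on and, if configured, isolcpus=2-3
cat /proc/cmdline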

Network Configuration

This assumes you are connected to the Intel Atom C3000 processor series kit via the serial port. As the OS is CentOS* 7, we will set up the network interfaces with static IPv4 addresses as follows.

# Depending on your setup (hardware or virtualized NICs) the interface names may differ

# The network scripts are located at : /etc/sysconfig/network-scripts/ifcfg-*
# We will edit the following files and edit the content for the specific lines

# Interface name : "enp3s0f0" (on hardware)
# This will serve as management (mgmt) interface (Port 1)
DEFROUTE=yes
ONBOOT="yes"
IPADDR="192.168.0.2"
PREFIX="24"
GATEWAY="192.168.0.1"

# Interface name : "enp3s0f1" (on hardware)
# This will later serve as DPDK (dpdk) interface (Port 2)
DEFROUTE=no
ONBOOT=no
IPADDR="192.168.3.2"
PREFIX="24"
GATEWAY="192.168.3.3"

# Interface name : "enp4s0f0" (on hardware)
# This won't be used later (unused), but we can still set it up (Port 3)
DEFROUTE=no
ONBOOT="yes"
IPADDR="192.168.2.2"
PREFIX="24"
GATEWAY="192.168.2.3"

# Interface name : "enp4s0f1" (on hardware)
# This will later serve as bridged (bridge) interface (Port 4)
DEFROUTE=no
ONBOOT="yes"
IPADDR="192.168.1.2"
PREFIX="24"
GATEWAY="192.168.1.3"

Consider adding your Secure Shell (SSH) public key to the box's authorized_keys file to avoid typing your credentials each time you connect. Since we have changed the GRUB config and set up static IPv4 addresses, we should perform a reboot.
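A minimal sketch for both steps, assuming you connect as root to the management address 192.168.0.2 configured above:

# On your workstation: copy your SSH public key to the box
ssh-copy-id root@192.168.0.2
# On the box: reboot to apply the GRUB and network changes
reboot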

Set Up Internet Connectivity

Ensure that your default route is set correctly and that you added a nameserver in your resolv.conf.

Downloading and installing packages from the internet requires connecting the machine to a network with internet access, or routing its traffic to the internet via another computer.

# Assumptions :
#   laptop WLAN (wlan0)    - access to internet
#   laptop ETH  (eth0)     - 192.168.0.3
#   Hardware ETH (enp3s0f0) - 192.168.0.2

# On the laptop allow ip forwarding and setup a NAT via iptables :

sysctl -w net.ipv4.ip_forward=1
iptables -A FORWARD -o wlan0 -i eth0 -s 192.168.0.0/24 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -F POSTROUTING
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE

# Add a nameserver to the /etc/resolv.conf on the machine 
# Open the file via your favorite editor or simply append a default nameserver

echo "nameserver 8.8.8.8" >> /etc/resolv.conf

This configuration is not persistent, and you will lose connectivity after a reboot.
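Before installing packages, it is worth confirming that the temporary uplink actually works; a quick check on the box might look like this:

# Verify the default route, raw connectivity, and name resolution
ip route show default
ping -c 3 8.8.8.8
curl -sI https://download.docker.com | head -n 1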

Install Go*

To build the VPP Docker plugin and the Kata Containers components we need to install Go.

curl -O https://storage.googleapis.com/golang/go1.9.4.linux-amd64.tar.gz
tar -zxvf  go1.9.4.linux-amd64.tar.gz -C /usr/local/
export PATH=$PATH:/usr/local/go/bin
# The GOPATH will likely be in your root home directory
export GOPATH=/root/go
# to make the PATH changes persistent :
echo "export PATH=\$PATH:/usr/local/go/bin" >> /etc/profile.d/path.sh
echo "export GOPATH=/root/go" >> /etc/profile.d/path.sh

Install Docker*

Because the Kata runtime is a replacement for the default Docker runtime (runc) we will have to install Docker as well.

# Installing necessary packages for Docker
yum install -y yum-utils device-mapper-persistent-data lvm2
# Adding Docker repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install latest Docker (At the time this tutorial was written it was 18.09)
yum install -y docker-ce
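The package installation does not necessarily start the daemon, so enable and start it before continuing; a short sketch:

# Enable Docker at boot, start it now, and verify the installed version
systemctl enable docker
systemctl start docker
docker --version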

Install Kata Containers

Decide how to install the Kata Containers components. You can use the prebuilt packages from their repositories or check out the source code and build them yourself. It is also possible to run a mixed setup.

source /etc/os-release
yum -y install yum-utils
ARCH=$(arch)
# Adding Kata Containers repositories
yum-config-manager --add-repo "http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/master/CentOS_${VERSION_ID}/home:katacontainers:releases:${ARCH}:master.repo"
# Install Kata Containers via prebuilt packages
yum -y install kata-runtime kata-proxy kata-shim
# Enable hugepages usage
sed -i -e 's/^# *\(enable_hugepages\).*=.*$/\1 = true/g' /usr/share/defaults/kata-containers/configuration.toml
# disable initrd image option
sed -i 's/^\(initrd =.*\)/# \1/g' /usr/share/defaults/kata-containers/configuration.toml
# Set default memory to 512MB
sed -i 's/^\(default_memory = 2048\)/default_memory = 512/g' /usr/share/defaults/kata-containers/configuration.toml
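To confirm that the installation succeeded and that the host is capable of running Kata Containers, the runtime ships a self-check; for example:

kata-runtime --version
# Verifies that the host provides the required features (virtualization support, kernel modules, etc.)
kata-runtime kata-check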

Download and Install DPDK

We will download, build, and install DPDK from source manually; the build produces the igb_uio DPDK NIC driver, and we will use the usertools from this installation to bind NICs to it later.

# Perform an update
yum update --exclude=kernel
# Install necessary packages to build the DPDK source code
yum install -y kernel-devel numactl-devel gcc
cd /usr/local/src
curl -O http://fast.dpdk.org/rel/dpdk-17.11.2.tar.xz
tar -xf dpdk-17.11.2.tar.xz
dpdkfolder=$(ls /usr/local/src | grep dpdk | grep "$(echo dpdk-17.11.2 | cut -d '-' -f2)" | grep -v tar)
cd $dpdkfolder

Open config/common_base to set your preferred build options.

CONFIG_RTE_BUILD_SHARED_LIB=y
CONFIG_RTE_EAL_IGB_UIO=y
CONFIG_RTE_EAL_VFIO=y
CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=y
CONFIG_RTE_LIBRTE_VHOST=y
CONFIG_RTE_LIBRTE_VHOST_NUMA=y
CONFIG_RTE_LIBRTE_PMD_VHOST=y

CONFIG_RTE_KNI_PREEMPT_DEFAULT=n

Also, be sure to disable the KNI-related build options in config/common_linuxapp.

CONFIG_RTE_KNI_KMOD=n
CONFIG_RTE_LIBRTE_KNI=n
CONFIG_RTE_LIBRTE_PMD_KNI=n

Now we are ready to build and install DPDK.

# Build
make install T=x86_64-native-linuxapp-gcc DESTDIR=install

Afterwards, we will add the usertools to our PATH.

export PATH=$PATH:/usr/local/src/dpdk-stable-17.11.2/usertools/
echo "export PATH=\$PATH:/usr/local/src/dpdk-stable-17.11.2/usertools/" >> /etc/profile.d/path.sh

Load Necessary Drivers

# If you intend to use igb_uio as you have built the DPDK source code manually :
modprobe uio
insmod /usr/local/src/$dpdkfolder/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko

modprobe uio_pci_generic
modprobe vfio-pci

# To make this persistent :
echo "uio" >> /etc/modules-load.d/obkatadpdkenv.conf
echo "uio_pci_generic" >> /etc/modules-load.d/obkatadpdkenv.conf
echo "vfio-pci" >> /etc/modules-load.d/obkatadpdkenv.conf

Bind the NIC Supporting DPDK to the DPDK Driver

We will choose NIC number 2 (0000:03:00.1) for DPDK support. Before we can bind it, we have to check whether the kernel has already brought up the NIC interface.


# To take down an active interface, and for the DPDK usertools to work properly, we need some additional tools
yum install -y net-tools pciutils libpcap-devel

# First lets list the available NICs

dpdk-devbind.py --status

# A sample output would look like the following :

Network devices using kernel driver
===================================
0000:03:00.0 'Ethernet Connection X553 1GbE 15e4' if=enp3s0f0 drv=ixgbe unused=igb_uio,vfio-pci,uio_pci_generic *Active*
0000:03:00.1 'Ethernet Connection X553 1GbE 15e4' if=enp3s0f1 drv=ixgbe unused=igb_uio,vfio-pci,uio_pci_generic *Active*
0000:04:00.0 'Ethernet Connection X553 1GbE 15e5' if=enp4s0f0 drv=ixgbe unused=igb_uio,vfio-pci,uio_pci_generic *Active*
0000:04:00.1 'Ethernet Connection X553 1GbE 15e5' if=enp4s0f1 drv=ixgbe unused=igb_uio,vfio-pci,uio_pci_generic *Active*

If the NIC you want to use for VPP-DPDK is shown as active, you will have to take the interface down as the kernel should not control this interface. The procedure may look like the following:

# Assuming your interface name was enp3s0f1 (relating to the NIC at pci address 0000:03:00.1) :
ifconfig enp3s0f1 down

# Now we are able to change the driver for that NIC
dpdk-devbind.py --bind=igb_uio 0000:03:00.1
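Running the status command again should now list the NIC under the DPDK-compatible drivers; for example:

dpdk-devbind.py --status
# 0000:03:00.1 should now be listed under
# "Network devices using DPDK-compatible driver" with drv=igb_uio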

Install VPP

For this tutorial, we will go with the 18.10 release of VPP. It installs its own DPDK version, which you can check via the command vppctl show dpdk version. To add the repository to CentOS, create /etc/yum.repos.d/fdio-release.repo with the following content:

[fdio-release]
name=fd.io release branch latest merge
baseurl=https://nexus.fd.io/content/repositories/fd.io.centos7/
enabled=1
gpgcheck=0

Now install VPP.

yum install -y vpp vpp-plugins
# If you want to tweak the performance you may edit the /etc/vpp/startup.conf
# For the CPU part, corelist-workers can be set according to your available CPU cores ( if you have 4 CPUs you may set it to “2-3” )
# For the DPDK part, socket-mem can be increased according to your hugepage settings ( if you have 1600 2MB hugepages you may set it to “1024,1024” )
systemctl restart vpp
# List interfaces
vppctl show int
# List NICs
vppctl show hardware
# Show NIC PCI slots
vppctl show pci
# If you want to modify the VPP startup parameters : /etc/vpp/startup.conf


Bring up the network interface now handled by VPP:

# List interfaces
vppctl show int 
# Most probably your interface name will be 
#       GigabitEthernet0/6/0 (if you are using a virtualized NIC)
#       TenGigabitEthernet3/0/1 (if you are working with the real hardware)
vppctl set interface state TenGigabitEthernet3/0/1 up

Install Kata VPP Docker* Plugin

The Kata VPP Docker plugin is used to create the VPP virtual host (vhost) user interface, which is attached to the Kata Containers.

# Install git if not already done
yum install git
# Create the plugin directory
mkdir -p /etc/docker/plugins 
# Get VPP Docker plugin
go get -d -u github.com/clearcontainers/vpp
cd $GOPATH/src/github.com/clearcontainers/vpp
go build
# Enable the Docker plugin
cp vpp.json /etc/docker/plugins/vpp.json
# Restart Docker
systemctl daemon-reload
systemctl restart docker
# Start the plugin
./vpp -alsologtostderr -logtostderr
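The plugin runs in the foreground and has to keep running while networks and containers are created. If you do not want to dedicate a terminal session to it, a simple (non-production) way to keep it alive in the background is the following sketch; the log file path is an arbitrary choice:

# Run the plugin in the background and capture its output
cd $GOPATH/src/github.com/clearcontainers/vpp
nohup ./vpp -alsologtostderr -logtostderr > /var/log/vpp-docker-plugin.log 2>&1 &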

Docker* Runtime Configuration

We will create a file for configuring the default Docker runtime.

mkdir -p /etc/systemd/system/docker.service.d/
touch /etc/systemd/system/docker.service.d/kata-containers.conf

The file defines which runtimes are available. To use Docker from a remote device (which has no access to the local Docker socket), you need to additionally expose a TCP socket; the commented lines in the example below show how. With the active line in this example, the default runtime will be the Kata runtime.

[Service]
ExecStart=
#ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 -D --default-runtime=runc
#ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -D --default-runtime=runc
#ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 -D --add-runtime kata-runtime=/usr/bin/kata-runtime --default-runtime=kata-runtime
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -D --add-runtime kata-runtime=/usr/bin/kata-runtime --default-runtime=kata-runtime

To contact the Docker API from a remote machine, allow TCP connections on port 2376 through the firewall.

sudo firewall-cmd --zone=public --add-port=2376/tcp --permanent
sudo firewall-cmd --reload

After modifying the Docker runtime, restart Docker.

systemctl daemon-reload
systemctl restart docker
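To verify that the Kata runtime is really the default, and that containers now run inside their own lightweight VM, a small smoke test (it pulls the public busybox image) might look like this:

# The default runtime should be reported as kata-runtime
docker info 2>/dev/null | grep -i runtime
# A Kata container reports its own guest kernel, which differs from the host kernel
uname -r
docker run --rm busybox uname -r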

Disable Security-Enhanced Linux (SELinux)

If we do not disable SELinux for this setup, VPP will have problems creating the sockets. Edit /etc/sysconfig/selinux as follows:

# Change the value from
# SELINUX=enforcing
# to
SELINUX=disabled
# to disable it right away without restarting execute
setenforce 0

Optional - Create Docker* Networks

At this stage, we can decide whether we want to create the necessary Docker networks manually or let Open Baton create them automatically.

docker network create -d=vpp --ipam-driver=vpp --subnet=192.168.3.0/24 --gateway=192.168.3.1  vpp_net
docker network create --subnet=192.168.10.0/24 --gateway=192.168.10.1  normal_net


If you choose to let Open Baton create the networks, beware: after each deployment the leftover network will block the creation of a new network with the same CIDR, so you will have to delete it before starting another deployment.
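Assuming the networks carry the virtual-link names used later in this tutorial (vpp_net and normal_net), cleaning up before a new deployment might look like this:

# List the existing Docker networks and remove the leftover ones
docker network ls
docker network rm vpp_net normal_net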

Set Up Open Baton with Docker-Compose

In this setup we will run Open Baton directly on the hardware under consideration. We can easily get a working environment up and running by using a Docker-compose file. We will use the default Docker runtime (runc) for this setup as we want to reserve the remaining resources for the Kata Containers.

Install Docker-Compose

We will stick with compose file version 2.x, because version 3.x does not support setting memory limits and runtime values for standard (non-swarm) deployments.

curl -L "https://github.com/docker/compose/releases/download/1.23.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
export PATH=$PATH:/usr/local/bin
echo "export PATH=\$PATH:/usr/local/bin" >> /etc/profile.d/path.sh

Use this YAML file to deploy Open Baton:

version: '2.3'
services:
  nfvo:
    image: openbaton/nfvo:6.0.1
    mem_limit: 512MB
    runtime: runc
    depends_on:
      - rabbitmq_broker
      - nfvo_database
    restart: always
    environment:
      - NFVO_RABBIT_BROKERIP=192.168.0.2
      - NFVO_QUOTA_CHECK=false
      - NFVO_PLUGIN_INSTALLATION-DIR=/dev/null
      - SPRING_RABBITMQ_HOST=192.168.20.6
      - SPRING_DATASOURCE_URL=jdbc:mysql://192.168.20.5:3306/openbaton
      - SPRING_DATASOURCE_DRIVER-CLASS-NAME=org.mariadb.jdbc.Driver
      - SPRING_JPA_DATABASE-PLATFORM=org.hibernate.dialect.MySQLDialect
      - SPRING_JPA_HIBERNATE_DDL-AUTO=update
    ports:
      - "8080:8080"
    networks:
      ob_net:
        ipv4_address: 192.168.20.2
  vnfm-docker-go:
    image: openbaton/vnfm-docker-go:6.0.1
    mem_limit: 256MB
    runtime: runc
    depends_on:
      - nfvo
    restart: always
    environment:
      - BROKER_IP=192.168.20.6
    networks:
      ob_net:
       ipv4_address: 192.168.20.3
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:rw
  driver-docker-go:
    image: openbaton/driver-docker-go:6.0.1
    mem_limit: 256MB
    runtime: runc
    depends_on:
      - nfvo
    restart: always
    environment:
      - BROKER_IP=192.168.20.6
    networks:
      ob_net:
       ipv4_address: 192.168.20.4
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:rw
  rabbitmq_broker:
    image: rabbitmq:3-management-alpine
    mem_limit: 512MB
    runtime: runc
    hostname: openbaton-rabbitmq
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=openbaton
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      ob_net:
        ipv4_address: 192.168.20.6
  nfvo_database:
    image: mysql/mysql-server:5.7.20
    mem_limit: 512MB
    runtime: runc
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=true
      - MYSQL_DATABASE=openbaton
      - MYSQL_USER=admin
      - MYSQL_PASSWORD=changeme
    networks:
      ob_net:
        ipv4_address: 192.168.20.5
networks:
  ob_net:
    driver: bridge
    ipam:
     config:
       - subnet: 192.168.20.0/24
         gateway: 192.168.20.1

If you saved the content in a file (e.g., OpenBaton.yaml), start it up by using the following command:

# You may have to load KVM kernel modules first
modprobe kvm
# To make this persistent :
echo "kvm" >> /etc/modules-load.d/obkatadpdkenv.conf

COMPOSE_HTTP_TIMEOUT=240 docker-compose -f OpenBaton.yaml up -d

If you are new to Open Baton, the GitHub documentation is a good starting point; however, the basic workflow is covered in this document. Once the containers are deployed, it will take a few minutes for Open Baton to configure itself. When it finishes, you will be able to access the dashboard via your browser at 192.168.0.2:8080. The default credentials are the user admin with the password openbaton. We will use the dashboard from a remote machine.
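To check that all Open Baton containers came up and that the dashboard is answering, a quick look from the box itself might be:

# List the state of the composed services
docker-compose -f OpenBaton.yaml ps
# The dashboard should answer with an HTTP status line once the NFVO has started
curl -sI http://192.168.0.2:8080 | head -n 1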


Register a Point of Presence

In order to deploy our VNFs on our infrastructure, we need to tell Open Baton where and how to contact it. This is done by registering a Point of Presence (PoP), which in this case is our newly created Kata environment. Note that we use the local Docker socket; alternatively, you can insert the URL of your environment if you enabled the remote API. You can either copy and paste the JSON definition of the PoP (see below) or enter it manually in the form.

{ 
  "name": "silicombox",
  "authUrl": "unix:///var/run/docker.sock",
  "tenant": "1.38",
  "type": "docker"
}

The PoP is registered via the VIM Instances tab in the Open Baton menu.

Prepare the Docker* Image

Use any Docker image for your VNFs. You can create an original image or use a preexisting one. For this tutorial, we will create our own Docker image from a Dockerfile that uses Alpine as the base image and installs and starts an Iperf server. Execute the following commands directly in the CLI on your box:

mkdir ~/dockerfiledir
cd  ~/dockerfiledir

cat <<EOF > Dockerfile
FROM alpine:latest
RUN apk add --no-cache iperf
ENTRYPOINT iperf -s & iperf -u -s
EOF

docker build -t ob-tutorial-iperf:latest -f Dockerfile . -m 256MB
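You can confirm that the image was built and is available locally:

docker images ob-tutorial-iperf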

Onboard VNFDs and NSD

Next, we upload our VNFDs. As this tutorial involves a very basic use case, it will work with what is already available in the Docker images. This means there are no lifecycle scripts to be executed; we simply upload a basic Network Service Descriptor (NSD). To do this, navigate to the NS Descriptors tab in the Catalogue drop-down menu in the left bar.


You may use this JSON file representing our NSD:

{  
   "name":"Iperf-Servers",
   "vendor":"Intel-FOKUS",
   "version":"1.0",
   "vnfd":[  deply 
      {  
         "name":"Iperf-Server-Normal",
         "vendor":"Intel-FOKUS",
         "version":"1.0",
         "lifecycle_event":[],
         "configurations":{
               "configurationParameters":[{
                     "confKey":"publish",      
                     "value":"5001"
                                   }],
               "name":"iperf-configuration"
         },
         "virtual_link":[  
            {  
               "name":"normal_net"
            }
         ],
         "vdu":[  
            {  
               "vm_image":[  
                  "ob-tutorial-iperf:latest"
               ],
               "scale_in_out":1,
               "vnfc":[  
                  {  
                     "connection_point":[  
                        {  
                           "virtual_link_reference":"normal_net",
                           "fixedIp":"192.168.10.2"
                        }
                     ]
                  }
               ]
            }
         ],
         "deployment_flavour":[  
            {  
               "flavour_key":"m1.small"
            }
         ],
         "type":"server",
         "endpoint":"docker"
      },
      {  
         "name":"Iperf-Server-DPDK",
         "vendor":"Intel-FOKUS",
         "version":"1.0",
         "lifecycle_event":[],
         "virtual_link":[  
            {  
               "name":"vpp_net"
            }
         ],
         "vdu":[  
            {  
               "vm_image":[  
                  "ob-tutorial-iperf:latest"
               ],
               "scale_in_out":1,
               "vnfc":[  
                  {  
                     "connection_point":[  
                        {  
                           "virtual_link_reference":"vpp_net",
                           "fixedIp":"192.168.3.2"
                        }
                     ]
                  }
               ]
            }
         ],
         "deployment_flavour":[  
            {  
               "flavour_key":"m1.small"
            }
         ],
         "type":"server",
         "endpoint":"docker"
      }
   ],
   "vld":[  
      {  
         "name":"vpp_net",
         "cidr":"192.168.3.0\/24",
         "metadata": {
             "driver": "vpp",
             "ipam-driver": "vpp"
             }
      },
      {  
         "name":"normal_net",
         "cidr":"192.168.10.0\/24
      }
   ]
}

Now you have uploaded the tutorial NSD, which consists of two Iperf Servers that will be deployed in two separate networks. One will be deployed using the VPP DPDK network (vpp_net), the other will use the default Docker bridge (normal_net).

Deploy the Network Service

Now that we have saved our NSD, we can deploy it. Again, we navigate to the NS Descriptors tab and select our newly onboarded NSD, Iperf-Servers.


We choose to deploy it on our infrastructure, which we have named silicombox, so we add this PoP to both of our VNFDs (Iperf-Server-DPDK and Iperf-Server-Normal). Afterwards, we can launch our NSD.

Once the NSD is launched, you can navigate to the NS Records tab, found inside the Orchestrate NS drop-down menu. Here you can see all your deployed network services, the so-called Network Service Records (NSRs), and the execution of the different lifecycles of your NSR and VNFRs. After a short time, we should see our NSR Iperf-Servers in the ACTIVE state.


Using the Docker CLI, you can see your NSR containers running alongside the Open Baton containers.

Via the following commands you can check the logs of the running service to see further details (e.g., throughput, packet loss).

# Check Docker for the container IDs
docker ps
# Access the logs using the container ID of the appropriate Iperf server
docker logs c18770f57b5c
# Or tail the log to see it in real time
docker logs -f c18770f57b5c

Our setup is now complete; however, we cannot yet directly reach the deployed Iperf Servers from the remote machine.

How to Use the Network Service

Since we have deployed two Iperf Servers, we can now use an Iperf client to test the networks. Depending on your setup, you may use another machine (such as the laptop) or a VM connected to the networks. Since we want to reach machines behind a network address translation (NAT) network, we need to add a route on the client machine in order to reach the Kata Containers.

# Access normal_net via bridge interface of the machine
ip route replace 192.168.10.0/24 via 192.168.1.2


We must also add the DPDK interface to the bridge of the Kata Container interface in VPP. Otherwise, the Iperf Server running in the DPDK network will not be able to reach outside of the machine.

# List interfaces
vppctl show int
# You will now see that there is another interface besides your DPDK NIC (VirtualEthernet0/0/0)
# The interface also got a layer 2 bridge assigned, you can check for bridges via :
vppctl show bridge
# Most probably your bridge will be the id=1
# And check the bridge details via :
vppctl show bridge 1 detail

# Now we need to add our DPDK NIC to this bridge as well
#       GigabitEthernet0/6/0 (if you are using a virtualized NIC)
#       TenGigabitEthernet3/0/1 (if you are working with the real hardware)
vppctl set interface l2 bridge TenGigabitEthernet3/0/1 1

Now both VNFs should be reachable via a remote machine, which must be connected to the bridge and the DPDK network. Using the Iperf Client we can start testing both networks.

# (execute on your workstation!)
# install iperf
sudo apt-get install -y iperf
# connect to “normal” instance in TCP mode
iperf -c 192.168.10.2
# connect to “normal” instance in UDP mode
iperf -u -c 192.168.10.2
# connect via DPDK in TCP mode
iperf -c 192.168.3.2
# connect via DPDK in UDP mode
iperf -u -c 192.168.3.2

If you experience connectivity issues using Iperf in user datagram protocol (UDP) mode, check the iptables on your machine hosting the Kata environment.

# Set iptable rule to allow UDP connections on port 5001 for the Iperf server running on 192.168.10.2
iptables -t nat -A DOCKER -p udp --dport 5001 -j DNAT --to-destination 192.168.10.2:5001
iptables -A DOCKER -j ACCEPT -p udp --destination 192.168.10.2 --dport 5001

We ran our experiments to compare both network types using two Intel Atom C3000 processor series kits connected via gigabit switches. The experiments showed that the VPP-DPDK network exhibits increased packet throughput when handling small UDP packets.

Summary

In this tutorial, we've set up a playground around the Open Baton NFV MANO framework, Kata Containers, and DPDK. We can deploy VNFs onto our infrastructure that make use of the DPDK-enabled boost in packet throughput. By running the Iperf use case with small UDP packets, we can verify the advantage of the VPP-DPDK network compared to the default network.

As we are using a very basic configuration of VPP, DPDK, and Kata Containers, the next step would be to tune the setup to further increase performance. This is a good starting point for learning more about DPDK, since we have a running setup that we can benchmark directly to evaluate any change in configuration.

To go further into the topic of NFV, we can write our own network functions with Open Baton and deploy them on our Kata Containers DPDK-enabled infrastructure.

Resources

Open Baton

Kata Containers

VPP - Vector Packet Processing

DPDK - Data Plane Development Kit