Accelerate Container Networking with Data Plane Development Kit (DPDK)

ID 688026
Updated 10/22/2018
Version Latest
Public

Introduction

Containers are a lightweight virtualization technique. Compared with virtual machines (VMs), which usually take minutes to start, containers need only seconds, and they are much easier to deploy. With these advantages, containers are widely used in networking applications.

Virtual network functions (VNFs), such as the virtual router (vRouter) and the virtual firewall (vFirewall), are among the main applications that leverage containers to implement virtualization. NoSQL databases, such as key-value stores, also use containers to achieve large-scale deployment.

To guarantee performance, container networks must satisfy the following requirements:

  • High throughput and low latency
  • Isolation, especially in the multitenancy scenario
  • Quality of service for different container requirements

This article focuses on methods to provide high throughput for packet input/output (I/O) in container networks using the Data Plane Development Kit (DPDK).

Optimizing Container Networking

Container networking is based on the kernel network stack and is isolated through kernel network namespaces. A Linux* bridge is the default network for Docker*: a Docker instance connects to the outside by creating a pair of virtual Ethernet devices (veth) and attaching one end to the bridge. However, the performance of the Linux bridge and veth pair is usually not satisfactory.

In general, there are two ways to improve container network performance:

  • Directly assign network interface card (NIC) resources to containers. Specifically, package virtual functions (VFs) or queues of the NIC as virtual interfaces, and allocate them to specific container network namespaces. As shown in figure 1, container instances can then use the hardware directly to perform packet I/O, which eliminates expensive software emulation overhead.
  • Avoid using an overlay network to get rid of the overhead from complicated packet encapsulation and decapsulation.

Figure 1. Directly assign hardware resources to containers.

However, the above techniques still rely on kernel-space drivers, and the context switches between user space and kernel space introduce significant overhead for networking applications.

Using Data Plane Development Kit (DPDK) to Accelerate Container Networking

The Data Plane Development Kit (DPDK) consists of libraries that accelerate packet processing applications running on a wide variety of CPU architectures. One of its main components is a set of user-space drivers for high-speed NICs, such as 10 Gbps and 40 Gbps NICs. With acceleration techniques such as batching, polling, and huge pages, DPDK provides extremely fast packet I/O with a minimal number of CPU cycles. This makes DPDK an efficient way to accelerate container networking.
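
To make the polling and batching model concrete, the following minimal sketch shows a DPDK forwarding loop that echoes received packets back out of the same port, using DPDK's standard ethdev API. It assumes a single DPDK-bound port (port 0); the pool and ring sizes are illustrative choices, not values mandated by DPDK.

```c
#include <stdint.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_debug.h>

#define NUM_MBUFS       8191
#define MBUF_CACHE_SIZE 250
#define RX_RING_SIZE    1024
#define TX_RING_SIZE    1024
#define BURST_SIZE      32   /* batching: up to 32 packets per RX/TX call */

int main(int argc, char **argv)
{
    uint16_t port_id = 0;    /* assumes at least one port bound to DPDK */

    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL initialization failed\n");

    /* Packet buffers come from a huge-page backed mempool. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("MBUF_POOL",
            NUM_MBUFS, MBUF_CACHE_SIZE, 0,
            RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL)
        rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");

    /* One RX queue and one TX queue with the default device configuration. */
    struct rte_eth_conf port_conf = {0};
    if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) != 0 ||
        rte_eth_rx_queue_setup(port_id, 0, RX_RING_SIZE,
                rte_eth_dev_socket_id(port_id), NULL, pool) != 0 ||
        rte_eth_tx_queue_setup(port_id, 0, TX_RING_SIZE,
                rte_eth_dev_socket_id(port_id), NULL) != 0 ||
        rte_eth_dev_start(port_id) != 0)
        rte_exit(EXIT_FAILURE, "Port %u setup failed\n", port_id);

    struct rte_mbuf *bufs[BURST_SIZE];
    for (;;) {
        /* Polling: busy-poll the NIC queue instead of waiting for interrupts. */
        uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
        if (nb_rx == 0)
            continue;

        /* Echo the whole burst back out of the same port (queue 0). */
        uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, bufs, nb_rx);

        /* Free any packets the TX queue could not accept. */
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);
    }
    return 0;
}
```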

There are two widely used solutions for accelerating container networking with DPDK. One is called the device pass-through model and the other is the vSwitch model.

Device pass-through model

The device pass-through model uses DPDK as the VF driver to perform packet I/O for container instances. As shown in figure 2, the NIC's VF is driven by the user-space DPDK driver. Applications inside containers receive and send packets directly in user space, without context switches. In addition, a single receive (RX) or transmit (TX) operation requires only one direct memory access (DMA) and no extra memory copies, which greatly improves throughput.

Figure 2. Use DPDK as the user space VF driver.
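
As a hypothetical sketch of this model, the code below shows how a DPDK application running inside a container might initialize the Environment Abstraction Layer (EAL) so that it uses only the VF assigned to that container. The PCI address, core list, memory size, and file prefix are placeholders; the flags follow DPDK 17.x/18.x command-line conventions (newer releases replace -w with -a/--allow).

```c
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>

int main(void)
{
    /* EAL arguments for one container instance (all values are placeholders). */
    char *eal_args[] = {
        "app",
        "-l", "0-1",               /* CPU cores given to this container        */
        "--socket-mem", "512",     /* hugepage memory, in MB                   */
        "-w", "0000:03:02.0",      /* the VF passed through to this container  */
        "--file-prefix", "c1",     /* unique runtime files per container       */
    };
    int eal_argc = sizeof(eal_args) / sizeof(eal_args[0]);

    if (rte_eal_init(eal_argc, eal_args) < 0)
        rte_exit(EXIT_FAILURE, "EAL initialization failed\n");

    /* From here the container owns the VF directly: a forwarding loop like
     * the one shown earlier can poll it without any kernel involvement. */
    return 0;
}
```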

vSwitch model

The vSwitch model uses a DPDK-accelerated, centralized software switch (for example, Open vSwitch* or Vector Packet Processing) to forward packets among container instances and to the outside network. Each container instance uses a DPDK virtual device, virtio-user, to communicate with the centralized switch. Figure 3 illustrates the framework of the vSwitch model.

Virtio-user is a user-space virtual device designed for container networking. It follows the virtio protocol and communicates with a backend device such as vhost-user or vhost-net. In the vSwitch model, the centralized switch creates a vhost-user port for each virtio-user device, and all packets to or from a container instance travel through this virtio-user/vhost-user pair. Packet I/O therefore stays entirely in user space, eliminating context-switch overhead. In addition, virtio-user and vhost-user support advanced features, such as dequeue zero-copy and vector processing, that help guarantee high performance for container networking.

Figure 3. vSwitch model framework.
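
The hypothetical sketch below shows the container side of this model: the application creates a virtio-user device whose backend is a vhost-user socket exported by the centralized switch, and the device then appears as an ordinary DPDK port. The socket path, device name, core list, and file prefix are placeholders; on the switch side, a matching vhost-user port (for example, an OVS-DPDK dpdkvhostuser port) would own that socket.

```c
#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_debug.h>

int main(void)
{
    /* EAL arguments for one container instance (all values are placeholders). */
    char *eal_args[] = {
        "app",
        "-l", "2-3",
        "--no-pci",                /* no physical NIC is needed in this model   */
        "--vdev", "virtio_user0,path=/var/run/usvhost-1,queues=1",
        "--file-prefix", "c2",
    };
    int eal_argc = sizeof(eal_args) / sizeof(eal_args[0]);

    if (rte_eal_init(eal_argc, eal_args) < 0)
        rte_exit(EXIT_FAILURE, "EAL initialization failed\n");

    /* virtio_user0 shows up as a DPDK ethdev port; look it up by name and
     * drive it with the same RX/TX burst loop as a physical port. */
    uint16_t port_id;
    if (rte_eth_dev_get_port_by_name("virtio_user0", &port_id) != 0)
        rte_exit(EXIT_FAILURE, "virtio_user0 not found\n");

    printf("virtio-user port id: %u\n", port_id);
    return 0;
}
```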

Conclusion

Users can leverage containers to implement virtualization that starts faster and is easier to deploy than virtual machines. This article explained how to optimize container networking and how to use DPDK to accelerate packet processing through two widely used solutions: the device pass-through model and the vSwitch model. For more information, check out this article on Getting Started with the Data Plane Development Kit.

About the Authors

Jianfeng Tan, a software engineer at Intel working on driver development for the DPDK para-virtualized NIC, virtio, focuses on accelerating container networking with DPDK and NFV technology. Jianfeng has a master’s degree in Computer Science and Technology from Tsinghua University.

Jiayu Hu, a software engineer at Intel, works on developing the DPDK virtio driver and networking stack acceleration libraries. Her main research areas include NFV and container networking. Jiayu received her master’s degree from the University of Science and Technology of China (USTC).

"