

Performance of OVS-DPDK in Container Networks
Quang-Huy Nguyen, School of Electronic Engineering, Soongsil University, Seoul, Korea (huynq@dcn.ssu.ac.kr)
Younghan Kim, School of Electronic Engineering, Soongsil University, Seoul, Korea (younghak@ssu.ac.kr)

Abstract—Network Function Virtualization (NFV) is a prominent technology that replaces traditional hardware network nodes with virtualized network software. Along with the trend in software development of splitting applications into microservices, containers have emerged to improve the reliability, availability, and operability of applications. To support I/O-intensive workloads in high-density container environments, combining OVS with DPDK increases performance significantly. In this paper, we design and integrate OVS-DPDK into container networking in two cases. In the first case, we deployed two containers on a single host and used OVS-DPDK as a virtual bridge connecting them. In the second case, we configured the system on two hosts. We then measured the throughput and CPU usage and evaluated OVS and OVS-DPDK in both cases.

Keywords—NFV, OVS, DPDK

I. INTRODUCTION

In recent years, the telecommunication industry has evolved towards Network Function Virtualization (NFV), where traditional "hardware heavy" network node functions are replaced by virtualized software that can run on generic hardware. Current de facto implementations of software virtualization use Virtual Machines (VMs) and are rapidly moving towards lightweight containers [1]. NFV was originally deployed in VMs as Virtualized Network Functions (VNFs) and is now also deployed in containers as Containerized Network Functions (CNFs). With advanced container technology, CNFs can easily be created, migrated, backed up, and deleted at different network locations without depending on specific hardware.

With the rapid growth of container technology, network functions (such as firewalls and routing) are divided into many pieces called microservices and linked together to create a service function chain that replaces existing monolithic applications. For instance, LinkedIn replaced its monolithic application with microservices in 2011, which resulted in less complex software and shorter development cycles. Container networking currently faces many challenges in adapting to this change: high container density increases both the latency of establishing the container network and the container launch time.

An important component of the Network Function Virtualization Infrastructure (NFVI) is the virtual switch, which connects VM to VM or container to container. There are many kinds of virtual switches, such as OpenVSwitch (OVS) [4], the Hyper-V virtual switch [5], OFSoftSwitch [6], and Lagopus [7]. Our implementation is based on OVS because it is a feature-rich, multilayer, distributed virtual switch widely used as the main networking layer in virtual environments and other SDN applications. OVS has traditionally been divided into a fast kernel-based datapath (fastpath) consisting of a flow table and a slower userspace datapath (slowpath) that processes packets that did not match any flow in the fastpath. To turn the user space into a fastpath, Intel has integrated the Data Plane Development Kit (DPDK) into OVS.

DPDK is a framework that offers efficient implementations of a wide set of common functions, such as network interface controller (NIC) packet input/output, easy access to hardware features (e.g., SR-IOV, FDIR), memory allocation, and queuing. Compared with the Linux network stack, DPDK's prominent feature is that it can bypass the kernel networking stack to bring high performance to applications in userspace. DPDK provides data plane libraries and optimized NIC drivers, such as Poll Mode Drivers (PMDs) for high-speed processing, lockless ring buffers, and Huge Pages for allocating and managing memory. Developers can build on these libraries in their applications to achieve high performance.

By integrating OVS with DPDK, the fastpath also resides in userland, minimizing kernel-to-userland interactions and leveraging the high performance offered by DPDK. Several papers have already applied DPDK to VM-to-VM connections to improve network transport performance. In [8], the authors developed the NetVM platform, which takes advantage of DPDK's capabilities to achieve high throughput for VNF chains. Wang et al. [9] used OVS-DPDK to capture and transport VM traffic. Yang and Huang [10] compared OVS-DPDK on a physical host with OVS-DPDK deployed in a Docker container, testing inter-VM throughput.

In this paper, we focus on designing and integrating OVS-DPDK for container networking in two cases. In the first case, we deployed the containers on a single host and used OVS-DPDK as a virtual bridge connecting them. In the second case, we configured the system on two hosts. We then measured the throughput and CPU usage and evaluated OVS and OVS-DPDK in both cases.

Fig. 1. OVS-DPDK architecture

II. BACKGROUND

A. Data Plane Development Kit

DPDK aims to provide a simple and complete framework for fast packet processing in data plane applications. It implements a "run to completion" model for packet processing, meaning that all resources must be allocated before the data plane application is called. Those dedicated resources are executed on dedicated logical processing cores.

This approach is the opposite of the Linux kernel, where a scheduler and interrupts switch between processes; in the DPDK architecture, devices are accessed by constant polling. This avoids context-switching and interrupt-processing overhead at the cost of dedicating entire CPU cores to packet processing.

In practice, DPDK offers a series of poll mode drivers (PMDs) that enable direct transfer of packets between user space and the physical interfaces, bypassing the kernel network stack altogether. This approach provides a significant performance boost over kernel forwarding by eliminating interrupt handling and bypassing the kernel stack. DPDK is a set of libraries; to use them, an application must link against these libraries and invoke the relevant APIs.
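The following minimal sketch illustrates this run-to-completion, polling-based model: the application initializes the EAL, sets up one port with a PMD, and forwards packets in a busy loop. It is not the code used in our testbed; the port number, queue sizes, and buffer pool size are placeholder values chosen only for illustration.

    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    int main(int argc, char **argv)
    {
        /* Initialize the Environment Abstraction Layer: hugepages, cores, devices. */
        if (rte_eal_init(argc, argv) < 0)
            return -1;

        /* Packet buffer pool shared by the receive and transmit queues of the PMD. */
        struct rte_mempool *pool = rte_pktmbuf_pool_create("MBUF_POOL", 8191, 256, 0,
                RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        if (pool == NULL)
            return -1;

        /* Assumed: a single port 0 with one rx queue and one tx queue. */
        uint16_t port = 0;
        struct rte_eth_conf port_conf = {0};
        if (rte_eth_dev_configure(port, 1, 1, &port_conf) < 0 ||
            rte_eth_rx_queue_setup(port, 0, 512, rte_eth_dev_socket_id(port), NULL, pool) < 0 ||
            rte_eth_tx_queue_setup(port, 0, 512, rte_eth_dev_socket_id(port), NULL) < 0 ||
            rte_eth_dev_start(port) < 0)
            return -1;

        /* Run-to-completion loop: this core does nothing but poll the device,
         * so there are no interrupts and no context switches on the datapath. */
        struct rte_mbuf *bufs[BURST_SIZE];
        for (;;) {
            uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
            if (nb_rx == 0)
                continue;
            uint16_t nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);
            for (uint16_t i = nb_tx; i < nb_rx; i++)  /* drop what tx could not take */
                rte_pktmbuf_free(bufs[i]);
        }
        return 0;
    }

An application of this kind is launched with EAL arguments (core mask, hugepage memory), which is also how DPDK-based tools such as Pktgen [3] and OVS-DPDK itself are started.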
B. OpenVSwitch

OVS is a software switch that enables packet forwarding inside the kernel. It is composed of a userspace part and a kernel part:

• User space - including a database (ovsdb-server) and an OVS daemon for managing and controlling the switch (ovs-vswitchd).

• Kernel space - including the OVS kernel module responsible for the datapath or forwarding plane.

The OVS kernel module contains a simple flow table for forwarding received packets. Still, a small portion of packets, called exception packets (the first packet of an OpenFlow flow), do not match any existing entry in kernel space and are sent to the userspace OVS daemon (ovs-vswitchd) to be handled. The daemon then analyzes the packet and updates the OVS kernel flow table so that additional packets of this flow go directly through the OVS kernel module forwarding table. This approach eliminates the need for context switches between user space and kernel space for most of the traffic; however, performance is still limited by the Linux network stack, which is not well suited for use cases with high packet-rate demands.
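This exception-packet handling is essentially a two-level lookup: a small per-packet cache backed by a slower classifier that installs a new cache entry on a miss. The following purely conceptual sketch (not OVS source code; the flow key, direct-mapped cache, and action structure are invented for the example) illustrates that control flow.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Invented stand-ins for a packet and a forwarding decision. */
    struct pkt { uint64_t flow_key; };
    struct action { int out_port; };

    #define CACHE_SLOTS 1024
    static struct { bool valid; uint64_t key; struct action act; } cache[CACHE_SLOTS];

    /* Fast path: the per-flow cache consulted for every packet ("kernel" table). */
    static bool fast_lookup(uint64_t key, struct action *act)
    {
        size_t slot = key % CACHE_SLOTS;
        if (cache[slot].valid && cache[slot].key == key) {
            *act = cache[slot].act;
            return true;
        }
        return false;
    }

    /* Slow path: stand-in for the userspace daemon consulting the full OpenFlow tables. */
    static struct action slow_classify(const struct pkt *p)
    {
        struct action a = { .out_port = (int)(p->flow_key % 4) };
        return a;
    }

    /* A cache hit is forwarded immediately; a miss (exception packet) is classified
     * by the slow path and the result is installed so that later packets of the
     * same flow stay on the fast path. */
    static struct action process(const struct pkt *p)
    {
        struct action act;
        if (fast_lookup(p->flow_key, &act))
            return act;
        act = slow_classify(p);
        size_t slot = p->flow_key % CACHE_SLOTS;
        cache[slot].valid = true;
        cache[slot].key = p->flow_key;
        cache[slot].act = act;
        return act;
    }

    int main(void)
    {
        struct pkt p = { .flow_key = 42 };
        struct action first = process(&p);   /* miss: slow path, entry installed */
        struct action second = process(&p);  /* hit: served from the cache */
        printf("first=%d cached=%d\n", first.out_port, second.out_port);
        return 0;
    }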
C. Integrating DPDK into OVS

Integrating DPDK into OVS leverages the PMD drivers and moves the former OVS kernel-module forwarding table into user space. In addition, to speed up container networking, the vhost-user/virtio-pmd architecture is implemented in OVS-DPDK. Vhost-user (the backend) runs in the host userspace as part of the OVS-DPDK userspace application. As mentioned, DPDK is a library, and the vhost-user module consists of additional APIs inside that library; OVS-DPDK is the actual application linked with this library and invoking its APIs. Virtio-pmd (the frontend) runs in the container and is a poll mode driver that consumes dedicated cores and polls without interrupts. For an application running in user space to consume the virtio-pmd, it also needs to be linked with the DPDK library. Fig. 1 illustrates how these components come together.
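To make the frontend/backend split concrete, the following sketch shows how a DPDK application inside a container could attach to the vhost-user socket exposed by OVS-DPDK by handing the EAL a virtio-user virtual device. The socket path, device name, and core number are placeholder values, not the settings of our testbed.

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>

    int main(void)
    {
        /* EAL arguments built in code instead of on the command line. The --vdev
         * option creates a virtio-user port (the frontend PMD) backed by the
         * vhost-user socket that the OVS-DPDK backend listens on. The socket
         * path and the polling core are assumed values for this illustration. */
        char *eal_argv[] = {
            "container-app",
            "-l", "1",   /* one dedicated polling core inside the container */
            "--vdev", "virtio_user0,path=/var/run/openvswitch/vhost-user0",
            NULL
        };
        int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0])) - 1;

        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return -1;
        }

        /* From here on the virtio-user port behaves like any other ethdev: it is
         * configured and polled exactly as in the earlier PMD sketch
         * (rte_eth_dev_configure, rte_eth_rx_burst, rte_eth_tx_burst, ...). */
        puts("virtio-user frontend attached to the OVS-DPDK vhost-user backend");
        return 0;
    }

On the host side, the matching backend port is typically created by adding an interface of type dpdkvhostuser to the OVS-DPDK bridge, which is what produces the socket the container connects to.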
III. DESIGN AND IMPLEMENTATION

In this section, we design and implement the CNFs and OVS-DPDK in two cases and evaluate them. In the first case, we deploy the system on a single host; the second case is deployed on two hosts. We then test the throughput of packets transmitted from container to container and compare the results between OVS and OVS-DPDK.

Fig. 2. Testbed on a single host

Fig. 3. Throughput comparison of the testbed on a host (throughput in Mbit/s versus packet size, 64-1518 bytes, for OVS and OVS-DPDK)

Fig. 5. Testbed on two hosts

We use an Intel Core i5-6300 processor (base frequency 2.4 GHz, turbo frequency 3.0 GHz, 8 threads), 6 GB of memory, and an Intel 82540EM Gigabit Ethernet Controller (1 Gbit/s) for each host. The OS environment is Ubuntu 16.04 (kernel 4.4.0). OVS (version 2.6.1) and DPDK (version 16.11.1) are used in the system.

A. Testbed on a single host

Packets are generated by Pktgen [3], a software packet generator based on the DPDK fast packet processing framework, and by iperf [11], a widely used network performance measurement tool that runs on various platforms. To implement OVS-DPDK for the inter-container network, we deployed two containers run by Docker [2] and OVS-DPDK in the userspace of a single host. One container generates traffic using the installed packet generator tool; the other captures packets and measures performance. The throughput and CPU usage metrics are measured every second. Fig. 2 shows the components configured in the testbed.

Fig. 3 shows the average throughput of OVS and OVS-DPDK. Overall, throughput increases with packet size. The throughput of OVS rises steadily from 104 to 165 Mbit/s as the packet size grows from 64 to 1518 bytes, while the throughput of OVS-DPDK grows dramatically from 59 to 1160 Mbit/s. At 64 bytes, OVS-DPDK has lower throughput than OVS because the smaller the packet size, the higher the CPU consumption, and OVS-DPDK consumes more CPU than OVS, as shown in Fig. 4.

B. Testbed on multiple hosts

In the second case, as shown in Fig. 5, OVS-DPDK and the two containers were deployed on two hosts. The metrics used in this case are the same as those used in testbed 1.

Fig. 6 illustrates the average throughput of the inter-container network deployed across the two hosts. Overall, throughput increases for both OVS and OVS-DPDK. The throughput of OVS grows moderately, from 7 to 17 Mbit/s at packet sizes of 64 to 1518 bytes, respectively, while OVS-DPDK shows a sharp rise, from 7 to 141 Mbit/s over the same range of packet sizes.

Fig. 4. CPU consumption while running (approximately 30.23% for OVS and 69.86% for OVS-DPDK)

Fig. 6. Throughput comparison of the testbed on two hosts (throughput in Mbit/s versus packet size, 64-1518 bytes, for OVS and OVS-DPDK)

Compared to the testbed on a single host, the throughput decreases significantly because in the first case all connections are handled by virtual devices on one host, so packets do not traverse the physical link as they do in the second case.

IV. CONCLUSION

We implemented OVS-DPDK for the inter-container network in two use cases. In the first case, the testbed was deployed on a single host using virtual network interfaces and a virtual switch to connect the containers, while the second case used two hosts for the inter-container network. Both cases showed the outstanding capability of OVS-DPDK over the original OVS, although OVS-DPDK consumed more CPU than OVS.

Although OVS-DPDK offers higher performance than the original OVS, it is still not good enough in general. In future work, we will extend our testbed with other techniques, such as SR-IOV with DPDK, which allows one NIC device to appear as multiple separate physical NIC devices, and NUMA-aware CPU management, to achieve high-speed packet processing for the inter-container network.

ACKNOWLEDGMENT

This work was partly supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-00613, Development of Content-oriented Delay Tolerant Networking in Multi-access Edge Computing Environment) and the ITRC (Information Technology Research Center) support program (IITP-2020-2017-0-01633).

REFERENCES

[1] D. Bernstein, "Containers and cloud: From LXC to Docker to Kubernetes," IEEE Cloud Computing, vol. 1, no. 3, pp. 81-84, 2014.
[2] Docker, https://www.docker.com
[3] R. Olsson, "Pktgen the Linux packet generator," Proceedings of the Linux Symposium, Ottawa, Canada, vol. 2, 2005.
[4] Open vSwitch project, http://www.openvswitch.org
[5] Hyper-V Virtual Switch project, https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v-virtual-switch/hyper-v-virtual-switch
[6] OFSoftSwitch project, https://github.com/CPqD/ofsoftswitch13
[7] Lagopus switch project, http://www.lagopus.org
[8] J. Hwang, K. K. Ramakrishnan, and T. Wood, "NetVM: High performance and flexible networking using virtualization on commodity platforms," IEEE Transactions on Network and Service Management, vol. 12, no. 1, pp. 34-47, 2015.
[9] L.-M. Wang et al., "OVS-DPDK port mirroring via NIC offloading," NOMS 2020 - IEEE/IFIP Network Operations and Management Symposium, IEEE, 2020.
[10] M. Yang and Y. Huang, "OVS-DPDK with TSO feature running under Docker," 2018 International Conference on Information Networking (ICOIN), IEEE, 2018.
[11] iperf traffic generator, https://iperf.fr
