Container4NFV on Arm

Project: Container4NFV, https://wiki.opnfv.org/display/container4nfv
Editors: Trevor Tao (Arm Ltd)
Authors: Trevor Tao (Arm Ltd)
Abstract: This document gives a brief introduction to Arm’s work status, strategy and possible roadmap for Container4NFV on the arm64 server platform. It is intended to give readers an overview of Arm’s current capabilities and future direction for the Container4NFV project.

1. Container4NFV on Arm

1.1. Abstract

This document gives a brief introduction to Arm’s work status, strategy and possible roadmap for Container4NFV on the arm64 server platform. It is intended to give readers an overview of Arm’s current capabilities and future direction for the Container4NFV project.

1.2. Introduction

Arm is a silver member of OPNFV and actively takes part in the Container4NFV project, which aims to enable a container-based NFV infrastructure on edge nodes or in the core network on the Arm server platform. We introduce Arm’s containerized NFV-I work from the following aspects: 1. Architecture, 2. Container Networking, 3. Related Projects, 4. Current Status and Future Plan, 5. Contacts.

1.3. Architecture

Arm’s containerized NFV infrastructure aligns with the Container4NFV architecture, which typically uses Kubernetes as the Container Orchestration Engine (COE) and CNI as the networking framework. Currently, a typical containerized NFV-I architecture on Arm is composed of an installer, Kubernetes, related OPNFV projects such as Functest and Yardstick, and possibly Arm Node Feature Discovery (A-NFD), which would discover certain resources and their usage status on Arm servers and is still to be developed. In the future, higher-level VNF orchestration engines such as Tacker or ONAP would also be brought in to facilitate the deployment of actual VNFs.

Containerized NFV Infrastructure on Arm

A typical VNF networking service deployment is shown in the following figure:

Networking Service Deployment on Arm Server

1.4. Container Networking

1.4.1. Basic Networking Model

Since Arm’s containerized NFV infrastructure uses Kubernetes as the COE, CNI plug-ins are used to orchestrate networking. Every time a pod is initialized or removed, the default CNI plug-in is called with the default configuration. This CNI plug-in creates a pseudo interface, attaches it to the relevant underlay network, sets the IP address and routes, and maps the interface into the pod namespace.
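As an illustration, a minimal sketch of such a default CNI network configuration (typically placed under /etc/cni/net.d/ on each node) could use the bridge plugin with host-local IPAM; the network name, bridge name and subnet below are illustrative assumptions, not part of any particular deployment:

{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
      "type": "host-local",
      "subnet": "10.244.0.0/16",
      "routes": [
          { "dst": "0.0.0.0/0" }
      ]
  }
}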

The Kubernetes networking model satisfies the following fundamental requirements:

  • all containers can communicate with all other containers without NAT
  • all nodes can communicate with all containers (and vice versa) without NAT
  • the IP address that a container sees itself as is the same IP address that others see it as

On the Arm platform, the most common Kubernetes networking solution is Flannel, which uses an overlay network to enable pod communication across hosts. The arm64 version of the Flannel release can be found here. Project Calico is also a high-performance, highly scalable networking solution which provides network policy for connecting Kubernetes pods, based on the same IP networking principles as the internet. However, Calico for Arm is still under development, and enabling it for container networking on Arm is one of our tasks in Container4NFV.
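For reference, a minimal sketch of a Flannel CNI network configuration is shown below; the network name "cbr0" is only a common convention and is not mandated:

{
    "name": "cbr0",
    "type": "flannel",
    "delegate": {
        "isDefaultGateway": true
    }
}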

Refer to the guide for more details on how Kubernetes uses CNI plug-ins to orchestrate networking; the invocation mechanism is the same as described above.

Most of the ordinary Kubernetes CNI plugins for arm64, including bridge, flannel, loopback, host-local, portmap, macvlan, ipvlan, ptp and noop, can be found in the containernetworking CNI releases. The current stable version of the CNI plugins for arm64 is v0.6.0.

1.4.2. Multiple Interfaces Support in a Pod

Kubernetes initially supports only one CNI interface per pod with one cluster-wide configuration. However, for some VNFs with data plane acceleration, one or two additional interfaces are needed for high-performance data access, besides the normal interfaces (such as Flannel, Calico, Weave or PTP) which are still kept for control or configuration purposes.

The SR-IOV CNI or DPDK CNI can be chosen to add data plane acceleration interfaces to Kubernetes pods. Arm is improving the SR-IOV CNI so that the PF can be assigned directly if a VF is not needed or not available.
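For reference, a standalone SR-IOV CNI network configuration (without Multus) might look like the sketch below, assuming eth1 is the SR-IOV-capable physical interface; the network name and addresses are illustrative:

{
    "name": "sriov-net",
    "type": "sriov",
    "master": "eth1",
    "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ],
        "gateway": "192.168.1.1"
    }
}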

With the help of the Multus CNI plugin, multiple interfaces can be added at the same time when deploying a pod. Multus CNI has the following features:

  • It acts as a contact point between the container runtime and other plugins; it does not have any network configuration of its own and calls other plugins, such as flannel or calico, to do the real network configuration job.
  • Multus reuses the concept of invoking delegates from flannel: it groups the plugins into delegates and invokes them in sequential order, according to the JSON schema in the CNI configuration.
  • The number of plugins supported depends on the number of delegates in the configuration file.
  • The master plugin provides the “eth0” interface in the pod; the rest of the plugins (minion plugins, e.g. sriov, ipam) provide interfaces named “net0”, “net1”, ... “netn”.
  • The “masterplugin” option is the only network configuration option specific to Multus CNI; it identifies the primary network, and the default route will point to the primary network.

A typical Multus CNI configuration with DPDK passthrough (SR-IOV PF) enabled is given below:

{
  "name": "multus-k8s-network",
  "type": "multus",
  "delegates": [
      {
              "type": "flannel",
              "masterplugin": true,
              "delegate": {
                      "isDefaultGateway": true
              }
      },
      {
              "type": "sriov",
              "master": "eth1",
              "dpdk": {
                      "ethernet_driver": "ixgbe",
                      "io_driver": "vfio-pci",
                      "dpdk_devbind": "/root/dpdk/usertools/dpdk-devbind.py"
              }
      },
      {
              "type": "sriov",
              "master": "eth2",
              "dpdk": {
                      "ethernet_driver": "ixgbe",
                      "io_driver": "vfio-pci",
                      "dpdk_devbind": "/root/dpdk/usertools/dpdk-devbind.py"
              }
      }
  ]
}

1.6. Current Status and Future Plan

For the Arm containerized NFV-I, we have now enabled Multus CNI together with the Flannel CNI and the SR-IOV/DPDK CNI. Data plane acceleration with DPDK over SR-IOV or NIC passthrough in containers has also been enabled and tested.

Container Networking Acceleration with DPDK

A typical VNF (OpenWRT) has been enabled on the arm64 containerized platform to demonstrate a vCPE use case.

vCPE Use Case

We have also enabled Yardstick to verify the compliance of pod communication in the Kubernetes context.

Yardstick Container Test Environment on Arm NFV-I

For the future plan, we will continue to align with the development roadmap of Container4NFV. The following work is also planned for Arm Container4NFV in the next ‘F’ release:

  • Project Calico enablement for arm64
  • VPP with DPDK/ODP for container networking
  • OPNFV installer enablement on Arm for Container4NFV
  • Possible enhancements to Yardstick and Functest
  • Typical VNFs w/o data plane acceleration
  • CI work with Yardstick and Functest

1.7. Contacts

Trevor Tao (Zijin Tao), Bin Lu, Song Zhu, Kaly Xin and Yibo Cai from Arm have made contributions to this document.

Trevor Tao: trevor.tao@arm.com
Bin Lu: bin.lu@arm.com
Song Zhu: song.zhu@arm.com
Kaly Xin: kaly.xin@arm.com
Yibo Cai: yibo.cai@arm.com