vCGNAPT samplevnf

1. vCGNAPT - Release Notes

1.1. Introduction

This is the beta release of the vCGNAPT VNF. The vCGNAPT application can be run independently (refer to INSTALL.rst).

1.2. User Guide

Refer to README.rst for further details on vCGNAPT, its high-level design (HLD), supported features and test plan. For build configurations and execution prerequisites, refer to INSTALL.rst.

1.3. Features for this release

This release supports the following features as part of vCGNAPT:

  • vCGNAPT can run as a standalone application on a bare-metal Linux server or in a virtual machine using SR-IOV and OVS-DPDK.
  • Static NAT
  • Dynamic NAT
  • Static NAPT
  • Dynamic NAPT
  • ARP (request, response, gratuitous)
  • ICMP (terminal echo, echo response, passthrough)
  • ICMPv6 and ND (Neighbor Discovery)
  • UDP, TCP and ICMP protocol passthrough
  • Multithread support
  • Multiple physical port support
  • Limiting max ports per client
  • Limiting max clients per public IP address
  • Live Session tracking to NAT flow
  • PCP support
  • NAT64
  • ALG SIP
  • ALG FTP

1.4. System requirements - OS and kernel version

vCGNAPT is supported on Ubuntu 14.04 and 16.04 with kernel versions below 4.5.

VNFs on bare metal:

  • OS: Ubuntu 14.04 or 16.04 LTS
  • Kernel: < 4.5
  • Download/install the image ubuntu-16.04.1-server-amd64.iso from http://releases.ubuntu.com/16.04/

VNFs on a standalone hypervisor:

HOST OS: Ubuntu 14.04 or 16.04 LTS (download/install the image ubuntu-16.04.1-server-amd64.iso from http://releases.ubuntu.com/16.04/)

  • OVS (DPDK) - 2.5
  • kernel: < 4.5
  • Hypervisor - KVM
  • VM OS - Ubuntu 16.04/Ubuntu 14.04
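
The OS release and kernel version of the host can be checked with standard Ubuntu commands, for example:

  lsb_release -d    # should report Ubuntu 14.04 or 16.04 LTS
  uname -r          # should report a kernel version below 4.5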

1.5. Known Bugs and limitations

  • The hardware load balancer feature is supported on Fortville NICs with firmware version 4.53 and below.
  • L4 UDP Replay is used to capture throughput for dynamic CGNAPT.
  • Hardware checksum offload is not supported for IPv6 traffic.
  • CGNAPT on SR-IOV has been tested with up to 4 threads.

1.6. Future Work

  • SCTP passthrough support
  • Multi-homing support
  • Performance optimization on different platforms

1.7. References

The DPDK guides linked in the Installation Guide below provide additional information for the different DPDK versions.

2. vCGNAPT - Readme

2.1. Introduction

This application implements vCGNAPT. The idea of vCGNAPT is to extend the life of the service provider's IPv4 network infrastructure and mitigate IPv4 address exhaustion by using address and port translation at large scale. It processes traffic in both directions.

It also supports connectivity between an IPv6 access network and an IPv4 data network using IPv6-to-IPv4 address translation, and vice versa.

2.1.1. About DPDK

The DPDK IP Pipeline Framework provides a set of libraries to build a pipeline application. In this document, the CG-NAT application is explained together with its building blocks.

This document assumes the reader is familiar with DPDK concepts and the IP Pipeline Framework. For more details, read the DPDK Getting Started Guide, the DPDK Programmer's Guide and the DPDK Sample Applications Guide.

2.2. Scope

This application provides a standalone, DPDK-based, high-performance vCGNAPT Virtual Network Function implementation.

2.3. Features

The vCGNAPT VNF currently supports the following functionality:
  • Static NAT
  • Dynamic NAT
  • Static NAPT
  • Dynamic NAPT
  • ARP (request, response, gratuitous)
  • ICMP (terminal echo, echo response, passthrough)
  • ICMPv6 and ND (Neighbor Discovery)
  • UDP, TCP and ICMP protocol passthrough
  • Multithread support
  • Multiple physical port support
  • Limiting max ports per client
  • Limiting max clients per public IP address
  • Live Session tracking to NAT flow
  • NAT64
  • PCP Support
  • ALG SIP
  • ALG FTP

2.4. High Level Design

The upstream path carries traffic from the private to the public network, and the downstream path carries traffic from the public to the private network. vCGNAPT uses the same set of components to process upstream and downstream traffic.

In the vCGNAPT application, each component is constructed using the IP Pipeline framework. The application includes a master pipeline component, a load balancer pipeline component and a vCGNAPT pipeline component.

A pipeline is a collection of input ports, table(s), output ports and actions (functions). In the vCGNAPT pipeline, the main sub-components are the in-port function handler, the table and the table function handler. vCGNAPT rules are configured in the table, which translates egress and ingress traffic according to the physical port on which the packet arrived. The possible actions are forwarding the packet to an output port (either egress or ingress) or dropping it.
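
For orientation, the sketch below shows how these pipeline components are typically declared in the vCGNAPT configuration file. The section layout follows the IP Pipeline convention; the core assignments, queue names and values are illustrative only and are not copied from a shipped config file.

  [PIPELINE0]
  type = MASTER
  core = 0

  [PIPELINE1]
  type = ARPICMP
  core = 0
  pktq_in = SWQ0
  pktq_out = TXQ0.0 TXQ1.0

  [PIPELINE2]
  type = LOADB
  core = 1
  pktq_in = RXQ0.0 RXQ1.0
  pktq_out = SWQ1 SWQ2
  n_vnf_threads = 1

  [PIPELINE3]
  type = CGNAPT
  core = 2
  pktq_in = SWQ1 SWQ2
  pktq_out = TXQ0.0 TXQ1.0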

2.5. vCGNAPT Graphical Overview

The idea of vCGNAPT is to extend the life of the service provider's IPv4 network infrastructure and mitigate IPv4 address exhaustion by using address and port translation at large scale. It processes traffic in both directions.

+------------------+
|                 +-----+
| Private consumer | CPE  |---------------+
|   IPv4 traffic  +-----+                 |
+------------------+                      |
               +------------------+       v        +----------------+
               |                  | +------------+ |                |
               |   Private IPv4   | |  vCGNAPT   | |    Public      |
               |  access network  | |   NAT44    | |  IPv4 traffic  |
               |                  | +------------+ |                |
               +------------------+       |        +----------------+
+------------------+                      |
|                 +-----+                 |
| Private consumer| CPE |-----------------+
|  IPv4 traffic   +-----+
+------------------+
    Figure 1: vCGNAPT deployment in Service provider network

2.6. Components of vCGNAPT

In vCGNAPT, each component is constructed using the packet framework. The application includes a master pipeline component, a receive/transmit driver, a load balancer pipeline component and vCGNAPT worker pipeline components. A pipeline is a collection of input ports, table(s), output ports and actions (functions).

2.6.1. Receive and transmit driver

Packets are received in bulk and handed to the load balancer thread. The transmit path takes packets from the worker threads through dedicated rings and sends them to the hardware queues.

2.6.2. ARPICMP pipeline

The ARPICMP pipeline is responsible for handling all L2/L3 ARP-related packets.

This component does not process any data traffic and should be configured on core 0 to save cores for the components that do process traffic. It is responsible for:

  1. Initializing each component of the pipeline application in different threads.
  2. Providing a CLI shell to the user.
  3. Propagating commands from the user to the corresponding components.
  4. Handling ARP and ICMP.

2.6.3. Load Balancer pipeline

The load balancer is part of the multi-threaded vCGNAPT release and distributes flows to multiple vCGNAPT worker threads.

It distributes traffic based on the 2-tuple or 5-tuple (source address, source port, destination address, destination port and protocol), applying XOR logic over the tuple fields to select an active worker thread and thereby maintaining flow-to-worker-thread affinity.

The tuple used for distribution can be modified/configured in the configuration file.

2.7. vCGNAPT - Static

The vCGNAPT component translates private IP & port to public IP & port on the egress side and public IP & port to private IP & port on the ingress side, based on the NAT rules added to the pipeline hash table. The NAT rules are added to the hash table via user commands. Packets that have a matching egress key or ingress key in the NAT table are processed to change IP & port and are forwarded to the output port. Packets that do not have a match are subject to a default action, which may be to drop the packet.
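
As an illustration, a static translation is added through the script file using the entry addm command; the syntax and values below mirror the example given in the Static CGNAPT section later in this guide.

  p <pipeline id> entry addm <prv_ipv4/6> <prv_port> <pub_ip> <pub_port> <phy_port> <ttl> <no_of_entries> <end_prv_port> <end_pub_port>
  p 3 entry addm 152.16.100.20 1234 152.16.40.10 1 0 500 65535 1234 65535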

2.8. vCGNAPT - Dynamic

The vCGNAPT component translates private IP & port to public IP & port on the egress side and public IP & port to private IP & port on the ingress side, based on the NAT rules added to the pipeline hash table. The dynamic nature of vCGNAPT refers to NAT entries being added to the hash table dynamically when a new packet arrives. A NAT rule is added to the hash table automatically when there is no matching entry in the table, and the packet is circulated through a software queue. Packets that have a matching egress key or ingress key in the NAT table are processed to change IP & port and are forwarded to the output port defined in the entry.

Dynamic vCGNAPT can also act as static vCGNAPT: NAT entries can be added statically. The static NAT entries' port range must not conflict with the dynamic NAT port range.

2.8.1. vCGNAPT Static Topology

IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 1) IXIA
Operation:
Egress --> Packets sent from IXIA (port 0) are CGNAPTed towards IXIA (port 1).
Ingress --> Packets sent from IXIA (port 1) are CGNAPTed towards IXIA (port 0).

2.8.2. vCGNAPT Dynamic Topology (L4REPLAY)

IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 0)L4REPLAY
Operation:
Egress --> Packets sent from IXIA are CGNAPTed towards L3FWD/L4REPLAY.
Ingress --> On receiving packets (private to public network), L4REPLAY immediately
replays the traffic back towards the IXIA interface (public --> private).

2.8.3. How to run L4Replay

After the installation of samplevnf:

cd <samplevnf>/VNFs/L4Replay
./build/L4replay -c <core_mask> -n <no_of_channels (e.g. 2)> -- -p <PORT_MASK> --config="(port,queue,lcore)"
    e.g.: ./build/L4replay -c 0xf -n 4 -- -p 0x3 --config="(0,0,1)"

2.9. Installation, Compile and Execution

Please refer to <samplevnf>/docs/vCGNAPT/INSTALL.rst for installation, configuration, compilation and execution.

3. vCGNAPT - Installation Guide

3.1. vCGNAPT Compilation

After downloading (or cloning via git) the samplevnf repository into a directory:

3.1.1. Dependencies

  • A supported DPDK version ($DPDK_RTE_VER = 16.04, 16.11, 17.02 or 17.05), downloaded and installed via vnf_build.sh or manually from http://fast.dpdk.org/rel/
  • libpcap-dev
  • libzmq
  • libcurl
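
On Ubuntu these dependencies can usually be installed with apt; the package names below are the common Ubuntu packages and may differ slightly between releases.

  sudo apt-get update
  sudo apt-get install -y build-essential libpcap-dev libzmq3-dev libcurl4-openssl-dev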

3.1.2. Environment variables

Apply all the additional patches in 'patches/dpdk_custom_patch/' and build DPDK; this is required only for DPDK version 16.04.

export RTE_SDK=<dpdk directory>
export RTE_TARGET=x86_64-native-linuxapp-gcc

This is done by the vnf_build.sh script.

3.2. Auto Build:

$ ./tools/vnf_build.sh   (run from the samplevnf root folder)

Follow the on-screen steps from option [1] to [9] and select option [8] to build the VNFs. The script automatically downloads the selected DPDK version and any required patches, sets up everything and builds the vCGNAPT VNF.

Following are the options for setup:

----------------------------------------------------------
 Step 1: Environment setup.
----------------------------------------------------------
[1] Check OS and network connection
[2] Select DPDK RTE version

----------------------------------------------------------
 Step 2: Download and Install
----------------------------------------------------------
[3] Agree to download
[4] Download packages
[5] Download DPDK zip
[6] Build and Install DPDK
[7] Setup hugepages

----------------------------------------------------------
 Step 3: Build VNFs
----------------------------------------------------------
[8] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay)

[9] Exit Script

A vCGNAPT executable will be created at the following location: samplevnf/VNFs/vCGNAPT/build/vCGNAPT

3.3. Manual Build:

  1. Download DPDK supported version from dpdk.org

  2. Unzip dpdk-$DPDK_RTE_VER.zip and apply the DPDK patches only in the case of 16.04 (not required for other DPDK versions):

    • cd dpdk

      • patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-management.patch
      • patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-Rx-hang-when-disable-LLDP.patch
      • patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-status-change-interrupt.patch
      • patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-VF-bonded-device-link-down.patch
      • patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/disable-acl-debug-logs.patch
      • patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/set-log-level-to-info.patch
    • build dpdk

      • make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
      • cd x86_64-native-linuxapp-gcc
      • make
    • Setup huge pages

      • For 1G/2M hugepage sizes, for example 1G pages, the size must be specified explicitly and can also optionally be set as the default hugepage size for the system. For example, to reserve 8G of hugepage memory in the form of eight 1G pages (plus 2048 2M pages), the following options should be passed to the kernel: default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048
      • To do this, open the /etc/default/grub configuration file and append "default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048" to the GRUB_CMDLINE_LINUX entry (see the sketch after this list).
  3. Setup Environment Variable

    • export RTE_SDK=<samplevnf>/dpdk
    • export RTE_TARGET=x86_64-native-linuxapp-gcc
    • export VNF_CORE=<samplevnf> or using ./tools/setenv.sh
  4. Build vCGNAPT VNFs

    • cd <samplevnf>/VNFs/vCGNAPT
    • make clean
    • make
  5. A vCGNAPT executable will be created at the following location:

    • <samplevnf>/VNFs/vCGNAPT/build/vCGNAPT
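
A minimal sketch of the hugepage setup described in step 2, assuming Ubuntu with GRUB2: edit /etc/default/grub, regenerate the GRUB configuration, reboot and verify.

  # append to the existing GRUB_CMDLINE_LINUX entry in /etc/default/grub
  GRUB_CMDLINE_LINUX="... default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"

  sudo update-grub
  sudo reboot
  grep Huge /proc/meminfo    # verify HugePages_Total after the reboot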

3.4. Run

3.4.1. Setup Port to run VNF

For DPDK version 16.04:
1. cd <samplevnf>/dpdk
2. ./tools/dpdk_nic_bind.py --status <--- lists the network devices
3. ./tools/dpdk_nic_bind.py -b igb_uio <PCI Port 0> <PCI Port 1>
More details: http://dpdk.org/doc/guides-16.04/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules

For DPDK version 16.11:
1. cd <samplevnf>/dpdk
2. ./tools/dpdk-devbind.py --status <--- lists the network devices
3. ./tools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
More details: http://dpdk.org/doc/guides-16.11/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules

For DPDK versions 17.xx:
1. cd <samplevnf>/dpdk
2. ./usertools/dpdk-devbind.py --status <--- lists the network devices
3. ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
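
Before binding the ports, the UIO kernel modules must be loaded. A typical sequence, assuming the x86_64-native-linuxapp-gcc DPDK build from the previous section, is:

  sudo modprobe uio
  sudo insmod <samplevnf>/dpdk/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko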

Make the necessary changes to the config files to run the vCGNAPT VNF, e.g.:
ports_mac_list = 00:00:00:30:21:F0 00:00:00:30:21:F1

3.4.2. Dynamic CGNAPT

Update the configuration file according to the system configuration.

./vCGNAPT -p <port mask> -f <config> -s <script>                    (software load balancer)
./vCGNAPT -p <port mask> -f <config> -s <script> --hwlb <num_WT>    (hardware load balancer)

3.4.3. Static CGNAPT

Update the script file and add a static NAT entry.

e.g.,
;p <pipeline id> entry addm <prv_ipv4/6> <prv_port> <pub_ip> <pub_port> <phy_port> <ttl> <no_of_entries> <end_prv_port> <end_pub_port>
;p 3 entry addm 152.16.100.20 1234 152.16.40.10 1 0 500 65535 1234 65535

3.4.4. Run IPv4

Software LoadB:

cd <samplevnf>/VNFs/vCGNAPT/build
./vCGNAPT -p 0x3 -f ./config/arp_txrx-2P-1T.cfg  -s ./config/arp_txrx_ScriptFile_2P.cfg


Hardware LoadB:

cd <samplevnf>/VNFs/vCGNAPT/build
./vCGNAPT -p 0x3 -f ./config/arp_hwlb-2P-1T.cfg  -s ./config/arp_hwlb_scriptfile_2P.cfg --hwlb 1

3.4.5. Run IPv6

Software LoadB:

cd <samplevnf>/VNFs/vCGNAPT/build
./vCGNAPT -p 0x3 -f ./config/arp_txrx-2P-1T-ipv6.cfg  -s ./config/arp_txrx_ScriptFile_2P.cfg


Hardware LoadB:

cd <samplevnf>/VNFs/vCGNAPT/build
./vCGNAPT -p 0x3 -f ./config/arp_hwlb-2P-1T-ipv6.cfg  -s ./config/arp_hwlb_scriptfile_2P.cfg --hwlb 1

3.4.6. vCGNAPT execution on BM & SRIOV

To run the VNF, execute the following:
samplevnf/VNFs/vCGNAPT# ./build/vCGNAPT -p 0x3 -f ./config/arp_txrx-2P-1T.cfg -s ./config/arp_txrx_ScriptFile_2P.cfg
Command Line Params:
-p PORTMASK: Hexadecimal bitmask of ports to configure
-f CONFIG FILE: vCGNAPT configuration file
-s SCRIPT FILE: vCGNAPT script file

3.4.7. vCGNAPT execution on OVS

To run the VNF, execute the following:

samplevnf/VNFs/vCGNAPT# ./build/vCGNAPT -p 0x3 -f ./config/arp_txrx-2P-1T.cfg -s ./config/arp_txrx_ScriptFile_2P.cfg --disable-hw-csum
Command Line Params:
-p PORTMASK: Hexadecimal bitmask of ports to configure
-f CONFIG FILE: vCGNAPT configuration file
-s SCRIPT FILE: vCGNAPT script file
--disable-hw-csum: Disable TCP/UDP hardware checksum offload