vFW samplevnf

1. vFW - Release Notes

1.1. Introduction

This is a beta release of the Sample Virtual Firewall (vFW) VNF. The vFW application can be run independently (refer to INSTALL.rst).

1.2. User Guide

Refer to README.rst for further details on the vFW, its high-level design (HLD), supported features, and test plan. For build configurations and execution prerequisites, please refer to INSTALL.rst.

1.3. Features for this release

This release supports the following features as part of vFW:

  • Basic packet filtering (malformed packets, IP fragments)
  • Connection tracking for TCP and UDP
  • Access Control List for rule based policy enforcement
  • SYN-flood protection via Synproxy* for TCP
  • UDP, TCP and ICMP protocol pass-through
  • CLI support to enable/disable connection tracking, synproxy, and basic packet filtering
  • L2L3 stack support for ARP/ICMP handling
  • ARP (request, response, gratuitous)
  • ICMP (terminal echo, echo response, passthrough)
  • ICMPv6 and ND (Neighbor Discovery)
  • Hardware and Software Load Balancing
  • Multithread support
  • Multiple physical port support

1.4. System requirements - OS and kernel version

The vFW is supported on Ubuntu 14.04 and Ubuntu 16.04 with kernel versions below 4.5.

VNFs on BareMetal support:
  • OS: Ubuntu 14.04 or 16.04 LTS (http://releases.ubuntu.com/16.04/)
  • Kernel: < 4.5
  • Download/Install the image: ubuntu-16.04.1-server-amd64.iso

VNFs on Standalone Hypervisor:
  • Host OS: Ubuntu 14.04 or 16.04 LTS (http://releases.ubuntu.com/16.04/)
  • Download/Install the image: ubuntu-16.04.1-server-amd64.iso
  • OVS (DPDK) - 2.5
  • Kernel: < 4.5
  • Hypervisor - KVM
  • VM OS - Ubuntu 16.04/Ubuntu 14.04
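
To verify that a host meets these requirements, standard commands are sufficient (a quick sanity check, not part of the vFW tooling):

    uname -r          # kernel version must be below 4.5
    lsb_release -d    # confirm Ubuntu 14.04 or 16.04 LTS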

1.5. Known Bugs and limitations

  • The Hardware Load Balancer feature is supported on the Fortville NIC with firmware version 4.53 and below.
  • Hardware Checksum offload is not supported for IPv6 traffic.
  • vFW on SRIOV is tested up to 4 threads.
  • HTTP with multiple clients/servers over HWLB is not working.

1.6. Future Work

The following are possible future enhancements:

  • Automatic enable/disable of synproxy
  • Support TCP timestamps with synproxy
  • FTP ALG integration
  • Performance optimization on different platforms

1.7. References

The following links provide additional information for the different supported DPDK versions:

  • http://dpdk.org/doc/guides-16.04/
  • http://dpdk.org/doc/guides-16.11/
  • http://dpdk.org/doc/guides-17.02/
  • http://dpdk.org/doc/guides-17.05/

2. vFW - Readme

2.1. Introduction

The virtual firewall (vFW) is an application that implements a firewall. A vFW is used as a barrier between a secure internal network and an insecure external network. The firewall performs dynamic packet filtering: it keeps track of the state of Layer 4 (Transport) traffic by examining both incoming and outgoing packets over time, and packets which don't fall within the expected parameters for the state of their connection are discarded. Dynamic packet filtering is performed by a connection tracking component, similar to that supported in Linux. The firewall also supports an Access Control List (ACL) for rule-based policy enforcement. The firewall is built on top of DPDK and uses its packet framework library.
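
As a conceptual illustration (hypothetical names, not the vFW API), the following sketch shows how a tracked TCP state could gate whether a packet falls within expected parameters:

    /* A conceptual sketch of the stateful decision: packets that do
     * not fit the tracked state of their connection are discarded.
     * The enum and function are illustrative only. */
    #include <stdbool.h>

    enum conn_state { CT_NONE, CT_SYN_SEEN, CT_ESTABLISHED };

    static bool allow_tcp_packet(enum conn_state state, bool syn, bool ack)
    {
        switch (state) {
        case CT_NONE:        return syn && !ack; /* only a fresh SYN may open a flow */
        case CT_SYN_SEEN:    return syn && ack;  /* expect the SYN-ACK next */
        case CT_ESTABLISHED: return true;        /* normal data traffic */
        }
        return false; /* anything else is outside expected parameters */
    }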

2.1.1. About DPDK

The DPDK IP Pipeline Framework provides a set of libraries to build a pipeline application. In this document, vFW will be explained in detail with its own building blocks.

This document assumes the reader possesses the knowledge of DPDK concepts and packet framework. For more details, read DPDK Getting Started Guide, DPDK Programmers Guide, DPDK Sample Applications Guide.

2.2. Scope

This application provides a standalone DPDK based high performance vFW Virtual Network Function implementation.

2.3. Features

The vFW VNF currently supports the following functionality:
  • Basic packet filtering (malformed packets, IP fragments)
  • Connection tracking for TCP and UDP
  • Access Control List for rule based policy enforcement
  • SYN-flood protection via Synproxy* for TCP
  • UDP, TCP and ICMP protocol pass-through
  • CLI support to enable/disable connection tracking, synproxy, and basic packet filtering
  • Multithread support
  • Multiple physical port support
  • Hardware and Software Load Balancing
  • L2L3 stack support for ARP/ICMP handling
  • ARP (request, response, gratuitous)
  • ICMP (terminal echo, echo response, passthrough)
  • ICMPv6 and ND (Neighbor Discovery)

2.4. High Level Design

The firewall performs basic filtering of malformed packets, and dynamic packet filtering of incoming packets using the connection tracker library. The connection data is stored in a DPDK hash table, with one entry per connection. The hash key is based on the source address/port, destination address/port, and protocol of a packet. The hash key is processed so that a single entry is used regardless of which direction the packet is flowing (i.e., with source and destination swapped). The ACL is implemented as a library statically linked to vFW and is used for rule-based packet filtering.

TCP connections and UDP pseudo-connections are tracked separately even if the addresses and ports are identical; including the protocol in the hash key ensures this.
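
A minimal sketch of such a direction-independent key is shown below, assuming IPv4; the struct layout and function name are illustrative, not the vFW's actual key format. Ordering the endpoints numerically makes both directions of a flow produce the same key, and keeping the protocol in the key separates TCP from UDP. A key built this way would then index the DPDK hash table mentioned above:

    #include <stdint.h>

    struct conn_key {
        uint32_t ip_lo,   ip_hi;    /* numerically ordered addresses      */
        uint16_t port_lo, port_hi;  /* ports paired with those addresses  */
        uint8_t  proto;             /* keeps TCP and UDP entries distinct */
    };

    static struct conn_key make_key(uint32_t src_ip, uint16_t src_port,
                                    uint32_t dst_ip, uint16_t dst_port,
                                    uint8_t proto)
    {
        struct conn_key k = { .proto = proto };
        /* Order the endpoints so both directions of a flow hash alike. */
        if (src_ip < dst_ip || (src_ip == dst_ip && src_port < dst_port)) {
            k.ip_lo = src_ip; k.port_lo = src_port;
            k.ip_hi = dst_ip; k.port_hi = dst_port;
        } else {
            k.ip_lo = dst_ip; k.port_lo = dst_port;
            k.ip_hi = src_ip; k.port_hi = src_port;
        }
        return k;
    }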

The Input FIFO contains all incoming packets for vFW filtering. The vFW filter has no dependency on which component has written to the Input FIFO. Packets are dequeued from the FIFO in bulk for processing by the vFW and then enqueued to the Output FIFO. Either software or hardware load balancing can be used for traffic distribution across multiple worker threads; hardware load balancing requires Ethernet Flow Director support in the NIC (e.g., the Fortville X710). The Input and Output FIFOs are implemented using DPDK ring buffers.
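
A minimal sketch of this FIFO stage using the DPDK rte_ring burst API (signatures as of DPDK 17.05; earlier supported releases omit the final argument); ring names and the burst size are assumptions:

    #include <rte_ring.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    static void vfw_fifo_stage(struct rte_ring *in_fifo, struct rte_ring *out_fifo)
    {
        struct rte_mbuf *pkts[BURST_SIZE];
        /* Dequeue a bulk of packets from the Input FIFO... */
        unsigned int n = rte_ring_dequeue_burst(in_fifo, (void **)pkts,
                                                BURST_SIZE, NULL);
        /* ...filtering would happen here (omitted)... */
        /* ...then enqueue the surviving packets to the Output FIFO. */
        if (n > 0)
            rte_ring_enqueue_burst(out_fifo, (void **)pkts, n, NULL);
    }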

2.5. Components of vFW

In vFW, each component is constructed using packet framework pipelines. These include the Rx and Tx drivers, the Master pipeline, the load balancer pipeline, and the vFW worker pipeline components. A pipeline in the framework is a collection of input ports, table(s), output ports, and actions (functions).

2.5.1. Receive and Transmit Driver

Packets are received in bulk and provided to the load balancer (LB) thread. Transmit takes packets from the worker threads via a dedicated ring and sends them to the hardware queue.
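
The following sketch shows the shape of these loops using DPDK burst APIs (17.05-era signatures); the port/queue ids, ring, and burst size are assumptions:

    #include <rte_ethdev.h>
    #include <rte_ring.h>
    #include <rte_mbuf.h>

    #define BURST 32

    /* Rx: read a burst from the NIC and hand it to the LB thread's ring. */
    static void rx_driver(uint8_t port_id, struct rte_ring *lb_ring)
    {
        struct rte_mbuf *pkts[BURST];
        uint16_t n = rte_eth_rx_burst(port_id, 0 /* queue */, pkts, BURST);
        if (n > 0)
            rte_ring_enqueue_burst(lb_ring, (void **)pkts, n, NULL);
    }

    /* Tx: drain the workers' dedicated ring into the hardware queue. */
    static void tx_driver(uint8_t port_id, struct rte_ring *tx_ring)
    {
        struct rte_mbuf *pkts[BURST];
        unsigned int n = rte_ring_dequeue_burst(tx_ring, (void **)pkts,
                                                BURST, NULL);
        uint16_t sent = rte_eth_tx_burst(port_id, 0, pkts, (uint16_t)n);
        while (sent < n)              /* free what the NIC refused */
            rte_pktmbuf_free(pkts[sent++]);
    }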

2.5.2. Master Pipeline

The Master component is part of all the IP Pipeline applications. This component does not process any packets and should be configured on core 0 (see the sample snippet below), leaving the other cores free for traffic processing. This component is responsible for:

  1. Initializing each component of the pipeline application in a different thread
  2. Providing a CLI shell for user control/debug
  3. Propagating commands from the user to the corresponding components
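
For example, the sample configuration files express the core-0 placement with a section of the following form (a representative snippet; the exact contents vary per config file):

    [PIPELINE0]
    type = MASTER
    core = 0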

2.5.3. ARPICMP Pipeline

This pipeline processes the ARP and ICMP packets.

2.5.4. TXRX Pipelines

The TXRX pipelines are pass-through pipelines that forward both ingress and egress traffic to the load balancer. They are required when the software load balancer is used.

2.5.5. Load Balancer Pipeline

The vFW supports both hardware and software load balancing of traffic across multiple VNF threads. Hardware load balancing requires NIC support, such as Flow Director, for steering packets to the application through hardware queues.

The software load balancer is also supported where hardware load balancing can't be used for any reason. The TXRX pipelines, together with the LOADB pipeline, provide software load balancing by distributing flows to the multiple vFW worker threads. The load balancer (HW or SW) distributes traffic based on the 5-tuple (source address, source port, destination address, destination port, and protocol), applying XOR logic to spread packets across the active worker threads, thereby maintaining an affinity of flows to worker threads.
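
A minimal sketch of the XOR-based distribution is below; the exact fold vFW applies may differ, this only illustrates the flow-affinity property (the same 5-tuple always maps to the same worker):

    #include <stdint.h>

    static uint32_t pick_worker(uint32_t src_ip, uint32_t dst_ip,
                                uint16_t src_port, uint16_t dst_port,
                                uint8_t proto, uint32_t n_workers)
    {
        /* XOR the 5-tuple fields into one value... */
        uint32_t h = src_ip ^ dst_ip
                   ^ (((uint32_t)src_port << 16) | dst_port)
                   ^ proto;
        h ^= h >> 16;            /* fold the high bits down */
        /* ...and map it onto the active worker threads. */
        return h % n_workers;
    }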

2.5.6. vFW Pipeline

The vFW performs the basic packet filtering and drops invalid and malformed packets. Dynamic packet filtering is done using the connection tracker library. Packets are processed in bulk, and a hash table is used to maintain the connection details. Every TCP/UDP packet is passed through the connection tracker library to validate its connection. The ACL library integrated into the firewall provides rule-based filtering.
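
As an illustration of the basic filtering stage, the sketch below (DPDK pre-19.x struct names; the checks are a simplified subset of what a firewall would apply) drops truncated packets, malformed IPv4 headers, and IP fragments:

    #include <rte_mbuf.h>
    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_byteorder.h>

    static int ipv4_packet_ok(struct rte_mbuf *m)
    {
        if (rte_pktmbuf_data_len(m) <
            sizeof(struct ether_hdr) + sizeof(struct ipv4_hdr))
            return 0;                               /* truncated packet */

        struct ipv4_hdr *ip = rte_pktmbuf_mtod_offset(
                m, struct ipv4_hdr *, sizeof(struct ether_hdr));

        if ((ip->version_ihl >> 4) != 4 ||
            (ip->version_ihl & 0x0f) < 5)
            return 0;                               /* malformed header */

        if (ip->fragment_offset &
            rte_cpu_to_be_16(IPV4_HDR_OFFSET_MASK | IPV4_HDR_MF_FLAG))
            return 0;                               /* IP fragment */

        return 1;
    }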

2.5.7. vFW Topology

IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 1) IXIA
operation:
Egress --> The packets sent out from IXIA (port 0) will be firewalled to IXIA (port 1).
Ingress --> The packets sent out from IXIA (port 1) will be firewalled to IXIA (port 0).

2.5.8. vFW Topology (L4REPLAY)

IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 0)L4REPLAY
operation:
Egress --> The packets sent out from IXIA will pass through the vFW to L3FWD/L4REPLAY.
Ingress --> On receiving packets (private to public network), L4REPLAY
immediately replays the traffic back to the IXIA interface (Pub --> Priv).

2.5.9. How to run L4Replay

After installing samplevnf:

go to <samplevnf/VNFs/L4Replay>
./build/L4replay -c core_mask -n no_of_channels (e.g. 2) -- -p PORT_MASK --config="(port,queue,lcore)"
    eg: ./L4replay -c 0xf -n 4 -- -p 0x3 --config="(0,0,1)"

2.6. Installation, Compile and Execution

Please refer to <samplevnf>/docs/vFW/INSTALL.rst for installation, configuration, compilation and execution.

3. vFW - Installation Guide

3.1. vFW Compilation

After downloading (or git cloning) the repository into a directory (samplevnf):

3.1.1. Dependencies

  • DPDK supported versions ($DPDK_RTE_VER = 16.04, 16.11, 17.02 or 17.05), downloaded and installed via vnf_build.sh or manually from http://fast.dpdk.org/rel/dpdk-$DPDK_RTE_VER.zip. Both options are available as part of vnf_build.sh below.
  • libpcap-dev
  • libzmq
  • libcurl
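
On Ubuntu these libraries can be installed with apt; the development-package names below (libzmq3-dev, libcurl4-openssl-dev) are assumptions for the libzmq and libcurl headers:

    sudo apt-get install libpcap-dev libzmq3-dev libcurl4-openssl-dev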

3.1.2. Environment variables

Apply all the additional patches in ‘patches/dpdk_custom_patch/’ and build dpdk (NOTE: required only for DPDK version 16.04).

export RTE_SDK=<dpdk directory>

export RTE_TARGET=x86_64-native-linuxapp-gcc

This is done by vnf_build.sh script.

3.2. Auto Build

$ ./tools/vnf_build.sh in samplevnf root folder

Follow the on-screen steps from option [1] to [9], selecting option [8] to build the VNFs. The script automatically downloads the selected DPDK version, applies any required patches, sets everything up, and builds the vFW VNF.

Following are the options for setup:

----------------------------------------------------------
 Step 1: Environment setup.
----------------------------------------------------------
[1] Check OS and network connection
[2] Select DPDK RTE version

----------------------------------------------------------
 Step 2: Download and Install
----------------------------------------------------------
[3] Agree to download
[4] Download packages
[5] Download DPDK zip
[6] Build and Install DPDK
[7] Setup hugepages

----------------------------------------------------------
 Step 3: Build VNFs
----------------------------------------------------------
[8] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay)

[9] Exit Script

A vFW executable will be created at the following location: samplevnf/VNFs/vFW/build/vFW

3.3. Manual Build

  1. Download DPDK supported version from dpdk.org

  2. unzip dpdk-$DPDK_RTE_VER.zip and apply the DPDK patches (required only for 16.04; not for other DPDK versions)

    • cd dpdk

      • patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-management.patch
      • patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-Rx-hang-when-disable-LLDP.patch
      • patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-status-change-interrupt.patch
      • patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-VF-bonded-device-link-down.patch
      • patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/disable-acl-debug-logs.patch
      • patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/set-log-level-to-info.patch
    • build dpdk
      • make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
      • cd x86_64-native-linuxapp-gcc
      • make
    • Setup huge pages
      • For 1G/2M hugepage sizes, the size must be specified explicitly, and 1G can optionally be set as the default hugepage size for the system. For example, to reserve 8G of hugepage memory as eight 1G pages plus 2048 2M pages, pass the following options to the kernel: default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048
      • To make this persistent, edit the /etc/default/grub configuration file and append "default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"
        to the GRUB_CMDLINE_LINUX entry, then apply it as shown below.
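
      After editing the GRUB entry, apply and verify the change with the standard commands below (the /mnt/huge mount point is a common convention, not something vFW mandates):

        sudo update-grub
        sudo reboot
        grep Huge /proc/meminfo                  # verify the pages were reserved
        sudo mkdir -p /mnt/huge
        sudo mount -t hugetlbfs nodev /mnt/huge
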
  3. Setup Environment Variable

    • export RTE_SDK=<samplevnf>/dpdk
    • export RTE_TARGET=x86_64-native-linuxapp-gcc
    • export VNF_CORE=<samplevnf>

    or using ./tools/setenv.sh

  4. Build vFW VNFs

    • cd <samplevnf>/VNFs/vFW
    • make clean
    • make
  5. The vFW executable will be created at the following location

    • <samplevnf>/VNFs/vFW/build/vFW

3.4. Run

3.4.1. Setup Port to run VNF

The tools folder and utilities names are different across DPDK versions.

For DPDK version 16.04:
1. cd <samplevnf>/dpdk
2. ./tools/dpdk_nic_bind.py --status <--- List the network device
3. ./tools/dpdk_nic_bind.py -b igb_uio <PCI Port 0> <PCI Port 1>

More details: http://dpdk.org/doc/guides-16.04/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules

For DPDK version 16.11:
1. cd <samplevnf>/dpdk
2. ./tools/dpdk-devbind.py --status <--- List the network device
3. ./tools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>

More details: http://dpdk.org/doc/guides-16.11/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules

For DPDK versions 17.xx:
1. cd <samplevnf>/dpdk
2. ./usertools/dpdk-devbind.py --status <--- List the network device
3. ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>

More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules

Make the necessary changes to the config files to run the vFW VNF, e.g.:

ports_mac_list = 00:00:00:30:21:01 00:00:00:30:21:00

3.4.2. Firewall Run commands

Update the configuration according to system configuration.

./vFW -p <port mask> -f <config> -s <script>                  (Software LB)
./vFW -p <port mask> -f <config> -s <script> --hwlb <num_WT>  (Hardware LB)

3.4.2.1. Run IPv4

To run the vFW in Software LB or Hardware LB with IPv4 traffic:

Software LoadB:

cd <samplevnf>/VNFs/vFW/
./build/vFW -p 0x3 -f ./config/VFW_SWLB_IPV4_SinglePortPair_4Thread.cfg  -s ./config/VFW_SWLB_IPV4_SinglePortPair_script.tc


Hardware LoadB:

cd <samplevnf>/VNFs/vFW/
./build/vFW -p 0x3 -f ./config/VFW_HWLB_IPV4_SinglePortPair_4Thread.cfg  -s ./config/VFW_HWLB_IPV4_SinglePortPair_script.tc --hwlb 4
3.4.2.2. Run IPv6

To run the vFW in Software LB or Hardware LB with IPv6 traffic:

Software LoadB:

cd <samplevnf>/VNFs/vFW
./build/vFW -p 0x3 -f ./config/VFW_SWLB_IPV6_SinglePortPair_4Thread.cfg  -s ./config/VFW_SWLB_IPV6_SinglePortPair_script.tc


Hardware LoadB:

cd <samplevnf>/VNFs/vFW/
./build/vFW -p 0x3 -f ./config/VFW_HWLB_IPV6_SinglePortPair_4Thread.cfg  -s ./config/VFW_HWLB_IPV6_SinglePortPair_script.tc --hwlb 4
3.4.2.3. vFW execution on BM & SRIOV

To run the VNF, execute the following:

samplevnf/VNFs/vFW# ./build/vFW -p 0x3 -f ./config/VFW_SWLB_IPV4_SinglePortPair_4Thread.cfg  -s ./config/VFW_SWLB_IPV4_SinglePortPair_script.tc
Command Line Params:
-p PORTMASK: Hexadecimal bitmask of ports to configure
-f CONFIG FILE: vFW configuration file
-s SCRIPT FILE: vFW script file
3.4.2.4. vFW execution on OVS
To run the VNF, execute the following:
samplevnf/VNFs/vFW# ./build/vFW -p 0x3 -f ./config/VFW_SWLB_IPV4_SinglePortPair_4Thread.cfg  -s ./config/VFW_SWLB_IPV4_SinglePortPair_script.tc --disable-hw-csum
Command Line Params:
-p PORTMASK: Hexadecimal bitmask of ports to configure
-f CONFIG FILE: vFW configuration file
-s SCRIPT FILE: vFW script file
--disable-hw-csum: Disable TCP/UDP HW checksum offload