OPNFV Configuration Guide

Colorado 1.0

Abstract

This document provides guidance for the configurations available in the Colorado release of OPNFV.

The release includes four installer tools that leverage different technologies: Apex, Compass4nfv, Fuel and JOID, each of which deploys components of the platform.

This document also covers the selection of tools and components, including guidelines for how to deploy and configure the platform to an operational state.

Configuration Options

OPNFV provides a variety of virtual infrastructure deployments, called scenarios, designed to host virtualised network functions (VNFs). KVM4NFV scenarios provide specific capabilities and/or components aimed at solving specific problems in the deployment of VNFs. A KVM4NFV scenario includes components such as OpenStack and KVM, each of which may bring in different source components or configurations.

KVM4NFV Scenarios

Each KVM4NFV scenario provides unique features and capabilities; it is important to understand the capabilities of your target platform before installing and configuring it. This configuration guide outlines how to install and configure components in order to enable the required features.

Scenarios are implemented as deployable compositions through integration with an installation tool. OPNFV supports multiple installation tools and for any given release not all tools will support all scenarios. While our target is to establish parity across the installation tools to ensure they can provide all scenarios, the practical challenge of achieving that goal for any given feature and release results in some disparity.

Colorado scenario overview

The following table provides an overview of the installation tools and available scenarios in the Colorado release of OPNFV.

Scenario status is indicated by a weather pattern icon. All scenarios listed with a weather pattern can be deployed and run in your environment or a Pharos lab; however, they may have known limitations or issues, as indicated by the icon.

Weather pattern icon legend:

Weather Icon Scenario Status
../images/weather-clear.jpg Stable, no known issues
../images/weather-few-clouds.jpg Stable, documented limitations
../images/weather-overcast.jpg Deployable, stability or feature limitations
../images/weather-dash.jpg Not deployed with this installer

Scenarios that are not yet in a state of “Stable, no known issues” will continue to be stabilised, and updates will be made on the stable/colorado branch. While we intend that all Colorado scenarios should be stable, it is worth checking regularly to see the current status. Due to our dependency on upstream communities and code, some issues may not be resolved prior to the D release.

Scenario Naming

In OPNFV, scenarios are identified by short scenario names; these names follow a scheme that identifies the key components and behaviours of the scenario. The rules for scenario naming are as follows:

os-[controller]-[feature]-[mode]-[option]
Details of the fields are:
  • os: mandatory
    • Refers to the platform type used
    • possible value: os (OpenStack)
  • [controller]: mandatory
    • Refers to the SDN controller integrated in the platform
    • example values: nosdn, ocl, odl, onos
  • [feature]: mandatory
    • Refers to the feature projects supported by the scenario
    • example values: nofeature, kvm, ovs, sfc
  • [mode]: mandatory
    • Refers to the deployment type, which may include for instance high availability
    • possible values: ha, noha
  • [option]: optional
    • Used for scenarios that do not otherwise fit the naming scheme
    • The optional field should be omitted from the short scenario name if it is not needed

Some examples of supported scenario names are:

  • os-nosdn-kvm-noha
    • This is an OpenStack-based deployment using Neutron, including the OPNFV enhanced KVM hypervisor
  • os-onos-nofeature-ha
    • This is an OpenStack deployment in high availability mode including ONOS as the SDN controller
  • os-odl_l2-sfc
    • This is an OpenStack deployment using OpenDaylight and OVS enabled with SFC features
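
As a quick illustration, a short scenario name can be split into its fields with standard shell tools; the scenario name below is only an example:

  # Split a short scenario name into its fields (illustrative only)
  name="os-nosdn-kvm-ha"
  IFS='-' read -r platform controller feature mode option <<< "$name"
  echo "platform=$platform controller=$controller feature=$feature mode=$mode option=${option:-none}"
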
Installing your scenario

There are two main methods of deploying your target scenario: one is to follow this guide, which walks you through the process of deploying to your hardware using scripts or ISO images; the other is to set up a Jenkins slave and connect your infrastructure to the OPNFV Jenkins master.

For the purposes of evaluation and development, a number of Colorado scenarios can be deployed virtually to reduce the requirements on physical infrastructure. Details and instructions on performing virtual deployments can be found in the installer-specific installation instructions.

To set up a Jenkins slave for automated deployment to your lab, refer to the Jenkins slave connect guide.

Introduction

In the KVM4NFV project, we focus on enhancing the KVM hypervisor for NFV, initially by looking at the following areas:

  • Minimal interrupt latency variation for data plane VNFs:
    • Minimal timing variation for timing correctness of real-time VNFs
    • Minimal packet latency variation for data-plane VNFs
  • Inter-VM communication
  • Fast live migration

Configuration of Cyclictest

Cyclictest measures the latency of response to a stimulus. Achieving low latency with the KVM4NFV project requires setting up a special test environment. This environment includes the BIOS settings, kernel configuration, kernel parameters and the run-time environment.
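
For illustration, a typical cyclictest invocation in such an environment looks like the following; the priority, interval, loop count and CPU affinity shown here are examples only, not the exact parameters used by the KVM4NFV test cases:

  # One measurement thread pinned to an isolated CPU, with memory locked
  # -m: lock memory, -n: use clock_nanosleep, -q: print only the summary
  cyclictest -m -n -q -p 99 -i 1000 -l 100000 -t 1 -a 1 -h 60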

Pre-configuration activities

Intel POD1 is currently used as the OPNFV-KVM4NFV test environment. The latest build packages are downloaded onto the Intel pod1-jump server from the artifact repository. Yardstick, running in an Ubuntu Docker container on the pod1-jump server, triggers the cyclictest run.

Running cyclictest through Yardstick configures the host (pod1-node1) and the guest, and then executes cyclictest on the guest.
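
In outline, the trigger from the Yardstick container looks like the following; the container name and the test case path are placeholders rather than the exact ones used on pod1-jump:

  # Start the cyclictest test case from inside the Yardstick container
  docker exec -it yardstick yardstick task start /path/to/cyclictest-testcase.yaml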

The following scripts are used for configuring host and guest to create a special test environment and achieve low latency.

host-setup0.sh: Running this script installs the latest kernel RPM on the host and makes the following changes to create the special test environment; the corresponding kernel boot parameters are sketched after this list:

  • Isolates CPUs from the general scheduler
  • Stops timer ticks on isolated CPUs whenever possible
  • Stops RCU callbacks on isolated CPUs
  • Enables the Intel IOMMU driver and disables DMA translation for devices
  • Sets HugeTLB pages to 1GB
  • Disables machine check
  • Disables clocksource verification at runtime
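
The changes above map onto kernel boot parameters similar to the following; the CPU list and hugepage count are placeholders, and the actual script may update the boot loader differently:

  # Add the test-environment parameters to the host kernel command line
  # (the CPU list and hugepage count are placeholders)
  grubby --update-kernel=ALL \
      --args="isolcpus=1-7 nohz_full=1-7 rcu_nocbs=1-7 intel_iommu=on iommu=pt"
  grubby --update-kernel=ALL \
      --args="default_hugepagesz=1G hugepagesz=1G hugepages=8 mce=off tsc=reliable"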

host-setup1.sh: Running this script makes the following test environment changes (illustrative commands follow the list):

  • Disables watchdogs to reduce overhead
  • Disables RT throttling
  • Reroutes interrupts bound to isolated CPUs to CPU 0
  • Changes the iptables rules so that the guest can be reached over SSH remotely
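
In outline, these changes correspond to commands like the following; the SSH port, guest address and affinity mask are placeholders:

  # Disable the soft-lockup and NMI watchdogs and RT throttling
  echo 0  > /proc/sys/kernel/watchdog
  echo 0  > /proc/sys/kernel/nmi_watchdog
  echo -1 > /proc/sys/kernel/sched_rt_runtime_us
  # Route all reroutable interrupts to CPU 0 (affinity mask 1)
  for irq in /proc/irq/*/smp_affinity; do echo 1 > "$irq" 2>/dev/null; done
  # Forward an SSH port on the host to the guest (addresses are placeholders)
  iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 192.168.122.2:22
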
host-run-qemu.sh: Running this script launches a guest VM on the host (an example invocation is sketched below).
Note: download the guest disk image from the artifact repository.
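
An illustrative QEMU invocation is sketched below; the image path, memory size, CPU count and tap interface are placeholders, and the actual script's options may differ:

  # Launch the guest with KVM acceleration and hugepage-backed memory
  qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 4096 \
      -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
      -numa node,memdev=mem -mem-prealloc \
      -drive file=/root/guest.img,format=raw \
      -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
      -device virtio-net-pci,netdev=net0 \
      -nographic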

guest-setup0.sh: Running this script on the guest VM installs the latest build kernel RPM and cyclictest, and makes the following configuration changes on the guest (see the sketch after this list):

  • Isolates CPUs from the general scheduler
  • Stops timer ticks on isolated CPUs whenever possible
  • Uses polling idle loop to improve performance
  • Disables clocksource verification at runtime
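
These guest changes correspond to kernel boot parameters similar to the following; the CPU list is a placeholder:

  # Add the guest test-environment parameters to its kernel command line
  grubby --update-kernel=ALL --args="isolcpus=1 nohz_full=1 idle=poll tsc=reliable"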

guest-setup1.sh: Running this script on the guest VM makes the following configuration changes (illustrated after the list):

  • Disables watchdogs to reduce overhead
  • Routes device interrupts to the non-RT CPU
  • Disables RT throttling
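
These mirror the host-side changes and, in outline, correspond to commands like the following run inside the guest (the affinity mask is a placeholder):

  # Disable the watchdog and RT throttling on the guest
  echo 0  > /proc/sys/kernel/watchdog
  echo -1 > /proc/sys/kernel/sched_rt_runtime_us
  # Route device interrupts to the non-RT vCPU (CPU 0, mask 1)
  for irq in /proc/irq/*/smp_affinity; do echo 1 > "$irq" 2>/dev/null; done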

Hardware configuration

Currently Intel POD1 is used as the test environment for kvmfornfv to execute cyclictest. As part of this test environment, Intel pod1-jump is configured as a Jenkins slave and all the latest build artifacts are downloaded onto it. Intel pod1-node1 is the host on which a guest VM is launched as part of running cyclictest through Yardstick.