OPNFV Documentation

Open Platform for NFV (OPNFV) facilitates the development and evolution of NFV components across various open source ecosystems. Through system level integration, deployment and testing, OPNFV creates a reference NFV platform to accelerate the transformation of enterprise and service provider networks. Participation is open to anyone, whether you are an employee of a member company or just passionate about network transformation.

Platform overview

Introduction

Network Functions Virtualization (NFV) is transforming the networking industry via software-defined infrastructures and open source is the proven method for quickly developing software for commercial products and services that can move markets. Open Platform for NFV (OPNFV) facilitates the development and evolution of NFV components across various open source ecosystems. Through system level integration, deployment and testing, OPNFV constructs a reference NFV platform to accelerate the transformation of enterprise and service provider networks. As an open source project, OPNFV is uniquely positioned to bring together the work of standards bodies, open source communities, service providers and commercial suppliers to deliver a de facto NFV platform for the industry.

By integrating components from upstream projects, the community is able to conduct performance and use case-based testing on a variety of solutions to ensure the platform’s suitability for NFV use cases. OPNFV also works upstream with other open source communities to bring contributions and learnings from its work directly to those communities in the form of blueprints, patches, bugs, and new code.

OPNFV focuses on building NFV Infrastructure (NFVI) and Virtualised Infrastructure Management (VIM) by integrating components from upstream projects such as OpenDaylight, ONOS, Tungsten Fabric, OVN, OpenStack, Kubernetes, Ceph Storage, KVM, Open vSwitch, and Linux. More recently, OPNFV has extended its portfolio of forwarding solutions to include DPDK, fd.io and ODP. The platform is able to run on both Intel and ARM commercial and white-box hardware, supports VM, container and bare-metal workloads, and, as of the Fraser release, includes Management and Network Orchestration (MANO) components primarily for application composition and management.

These capabilities, along with application programmable interfaces (APIs) to other NFV elements, form the basic infrastructure required for Virtualized Network Functions (VNF) and MANO components.

Concentrating on these components while also considering proposed projects on additional topics (such as the MANO components and applications themselves), OPNFV aims to enhance NFV services by increasing performance and power efficiency, improving reliability, availability and serviceability, and delivering comprehensive platform instrumentation.

OPNFV Platform Architecture

The OPNFV project addresses a number of aspects in the development of a consistent virtualisation platform including common hardware requirements, software architecture, MANO and applications.

OPNFV Platform Overview Diagram

Overview infographic of the OPNFV platform and projects.

To address these areas effectively, the OPNFV platform architecture can be decomposed into the following basic building blocks:

  • Hardware: Infrastructure working group, Pharos project and associated activities
  • Software Platform: Platform integration and deployment projects
  • MANO: MANO working group and associated projects
  • Tooling and testing: Testing working group and test projects
  • Applications: all other areas, which drive requirements for OPNFV

OPNFV Lab Infrastructure

The infrastructure working group oversees such topics as lab management, workflow, definitions, metrics and tools for OPNFV infrastructure.

Fundamental to the WG is the Pharos Specification which provides a set of defined lab infrastructures over a geographically and technically diverse federated global OPNFV lab.

Labs may instantiate bare-metal and virtual environments that are accessed remotely by the community and used for OPNFV platform and feature development, build, deploy and testing. No two labs are the same and the heterogeneity of the Pharos environment provides the ideal platform for establishing hardware and software abstractions providing well understood performance characteristics.

Community labs are hosted by OPNFV member companies on a voluntary basis. The Linux Foundation also hosts an OPNFV lab that provides centralized CI and other production resources which are linked to community labs.

The Lab-as-a-Service (LaaS) offering gives developers on-demand access to NFV infrastructure. Planned lab capabilities include the ability to easily automate the deployment and testing of any OPNFV install scenario in any lab environment, using a concept called “Dynamic CI”.

OPNFV Software Platform Architecture

The OPNFV software platform is composed exclusively of open source implementations of platform components. OPNFV is able to draw from the rich ecosystem of NFV-related technologies available in open source communities, and then integrate, test, measure and improve these components in conjunction with our upstream communities.

Virtual Infrastructure Management

OPNFV derives its virtual infrastructure management from one of its largest upstream ecosystems: OpenStack. OpenStack provides a complete reference cloud management system and associated technologies. While the OpenStack community sustains a broad set of projects, not all of them are relevant in the NFV domain; the OPNFV community therefore consumes a subset of OpenStack projects, and the usage and composition may vary depending on the installer and scenario.

For details on the scenarios available in OPNFV and the specific composition of components refer to the OPNFV User Guide & Configuration Guide.

OPNFV now also has initial support for containerized VNFs.

Operating Systems

OPNFV currently uses Linux on all target machines; this can include Ubuntu, CentOS or SUSE Linux. The specific version of Linux used for any deployment is documented in the installation guide.

Networking Technologies

SDN Controllers

OPNFV, as an NFV-focused project, has a significant investment in networking technologies and provides a broad variety of integrated open source reference solutions. The diversity of controllers usable in OPNFV is matched by a similarly diverse set of forwarding technologies.

Many SDN controllers relevant to virtual environments are available today, and the OPNFV community supports and contributes to a number of them. The controllers being worked on by the community during this release of OPNFV include:

  • Neutron: an OpenStack project to provide “network connectivity as a service” between interface devices (e.g., vNICs) managed by other OpenStack services (e.g. Nova).
  • OpenDaylight: addresses multivendor, traditional and greenfield networks, establishing the industry’s de facto SDN platform and providing the foundation for networks of the future.
  • Tungsten Fabric: an open source SDN controller designed for cloud and NFV use cases. It has an analytics engine and well-defined northbound REST APIs for configuration and for gathering operational/analytics data.
  • OVN: a virtual networking solution developed by the same team that created OVS. OVN stands for Open Virtual Network and differs from the above projects in that it focuses only on overlay networks.

Data Plane

OPNFV extends Linux virtual networking capabilities by using virtual switching and routing components. The OPNFV community proactively engages with the following open source communities to address performance, scale and resiliency needs apparent in carrier networks.

  • OVS (Open vSwitch): a production quality, multilayer virtual switch designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols.
  • FD.io (Fast Data - Input/Output): a high-performance alternative to Open vSwitch. The core engine of FD.io is the Vector Packet Processing (VPP) engine, which processes a number of packets in parallel instead of one at a time, significantly improving packet throughput.
  • DPDK: a set of libraries that bypass the kernel and provide polling mechanisms, instead of interrupt based operations, to speed up packet processing. DPDK works with both OVS and FD.io.

MANO

OPNFV integrates open source MANO projects for NFV orchestration and VNF management. New MANO projects are constantly being added.

Deployment Architecture

A typical OPNFV deployment starts with three controller nodes running in a high-availability configuration, including control plane components from OpenStack, SDN controllers, etc., and a minimum of two compute nodes for the deployment of workloads (VNFs). A detailed description of the hardware required to support this five-node configuration can be found in the Pharos specification: Pharos Project

In addition to the deployment on a highly available physical infrastructure, OPNFV can be deployed for development and lab purposes in a virtual environment. In this case each of the hosts is provided by a virtual machine and allows control and workload placement using nested virtualization.

The initial deployment is done using a staging server, referred to as the “jumphost”. This server (either physical or virtual) is first installed with the installation program, which then installs OpenStack and other components on the controller and compute nodes. See the OPNFV User Guide & Configuration Guide for more details.

The OPNFV Testing Ecosystem

The OPNFV community has set out to address the needs of virtualization in the carrier network and as such platform validation and measurements are a cornerstone to the iterative releases and objectives.

To simplify the complex task of feature, component and platform validation and characterization, the testing community has established a fully automated method addressing all key areas of platform validation. This required the integration of a variety of testing frameworks into our CI systems, real-time and automated analysis of results, and the storage and publication of key facts for each run, as shown in the following diagram.

Overview infographic of the OPNFV testing Ecosystem

Release Verification

The OPNFV community relies on its testing community to establish release criteria for each OPNFV release. With each release cycle the testing criteria become more stringent and more representative of our feature and resiliency requirements. Each release establishes a set of deployment scenarios to validate; the testing infrastructure and test suites need to accommodate these features and capabilities.

The release criteria as established by the testing teams include passing a set of test cases derived from the functional testing project ‘functest,’ a set of test cases derived from our platform system and performance test project ‘yardstick,’ and a selection of test cases for feature capabilities derived from other test projects such as bottlenecks, vsperf, cperf and storperf. The scenario needs to be deployed, pass these tests, and be removed from the infrastructure iteratively in order to fulfill the release criteria.

Functest

Functest provides a functional testing framework incorporating a number of test suites and test cases that test and verify OPNFV platform functionality. The scope of Functest and the relevant test cases can be found in the Functest User Guide.

Functest provides both feature project and component test suite integration, leveraging OpenStack and SDN controllers testing frameworks to verify the key components of the OPNFV platform are running successfully.

Yardstick

Yardstick is a testing project for verifying infrastructure compliance when running VNF applications. Yardstick benchmarks a number of characteristics and performance vectors of the infrastructure, making it a valuable pre-deployment NFVI testing tool.

Yardstick provides a flexible testing framework for launching other OPNFV testing projects.

There are two types of test cases in Yardstick:

  • Yardstick generic test cases, which cover basic characteristics benchmarking in the compute, storage and network areas.
  • OPNFV feature test cases, which cover basic telecom feature testing from OPNFV projects; for example NFV-KVM, SFC, IPv6, Parser, Availability and SDN VPN.

System Evaluation and compliance testing

The OPNFV community is developing a set of test suites intended to evaluate a set of reference behaviors and capabilities for NFV systems developed externally from the OPNFV ecosystem, and to measure their ability to provide the features and capabilities developed in the OPNFV ecosystem.

The Dovetail project will provide a test framework and methodology usable on any NFV platform, including an agreed set of test cases establishing evaluation criteria for exercising an OPNFV-compatible system. The Dovetail project has begun establishing the test framework and will provide a preliminary methodology for the Fraser release. Work will continue to develop these test cases to establish a standalone compliance evaluation solution in future releases.

Additional Testing

Besides the test suites and cases for release verification, additional testing is performed to validate specific features or characteristics of the OPNFV platform. These testing frameworks and test cases may address specific needs, such as extended measurements, additional testing stimuli, or tests simulating environmental disturbances or failures.

These additional testing activities provide a more complete evaluation of the OPNFV platform. Some of the projects focused on these testing areas include:

Bottlenecks

Bottlenecks provides a framework to find system limitations and bottlenecks, providing root cause isolation capabilities to facilitate system evaluation.

NFVBench

NFVbench is a lightweight end-to-end dataplane benchmarking framework project. It includes traffic generator(s) and measures a number of packet performance related metrics.

QTIP

QTIP boils down NFVI compute and storage performance into a single metric for easy comparison. QTIP computes this number based on five different categories of compute metrics and relies on StorPerf for storage metrics.
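As a rough illustration of how several category scores can be reduced to a single comparable indicator, the sketch below combines per-category benchmark scores using a geometric mean. The category names, normalization and choice of mean are illustrative assumptions, not QTIP's actual formula.

```python
import math

def composite_score(category_scores):
    """Collapse per-category benchmark scores into one indicator.

    Uses a geometric mean so that a weak score in any single category
    drags the overall indicator down.  The categories below are
    illustrative, not QTIP's actual metric set.
    """
    values = list(category_scores.values())
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical scores normalized against a baseline system (= 1.0)
scores = {
    "integer": 1.20,
    "floating_point": 1.10,
    "memory": 0.95,
    "dpi": 1.05,
    "ssl": 1.00,
}
indicator = composite_score(scores)
```

A single number like this makes it easy to rank systems, at the cost of hiding which category caused a difference; the per-category data remains available for that.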

Storperf

Storperf measures the performance of external block storage. The goal of this project is to provide a report based on SNIA’s (Storage Networking Industry Association) Performance Test Specification.

VSPERF

VSPERF provides an automated test-framework and comprehensive test suite for measuring data-plane performance of the NFVI including switching technology, physical and virtual network interfaces. The provided test cases with network topologies can be customized while also allowing individual versions of Operating System, vSwitch and hypervisor to be specified.

Installation

Abstract

This is an overview document for the installation of the Gambia release of OPNFV.

The Gambia release can be installed making use of any of the installer projects in OPNFV: Apex, Compass4Nfv or Fuel. Each installer provides the ability to install a common OPNFV platform as well as integrating additional features delivered through a variety of scenarios by the OPNFV community.

Introduction

The OPNFV platform is comprised of a variety of upstream components that may be deployed on your infrastructure. A composition of components, tools and configurations is identified in OPNFV as a deployment scenario.

The various OPNFV scenarios provide unique features and capabilities that you may want to leverage, and it is important to understand your required target platform capabilities before installing and configuring your scenarios.

An OPNFV installation requires either a physical infrastructure environment as defined in the Pharos specification, or a virtual one. When configuring a physical infrastructure it is strongly advised to follow the Pharos configuration guidelines.

Scenarios

OPNFV scenarios are designed to host virtualised network functions (VNFs) in a variety of deployment architectures and locations. Each scenario provides specific capabilities and/or components aimed at solving specific problems for the deployment of VNFs.

A scenario may, for instance, include components such as OpenStack, OpenDaylight, OVS, KVM etc., where each scenario will include different source components or configurations.

To learn more about the scenarios supported in the Gambia release refer to the scenario description documents provided:

Installation Procedure

Detailed step by step instructions for working with an installation toolchain and installing the required scenario are provided by the installation projects. The projects providing installation support for the OPNFV Gambia release are: Apex, Compass4nfv and Fuel.

The instructions for each toolchain can be found in these links:

OPNFV Test Frameworks

If you have elected to install the OPNFV platform using the deployment toolchain provided by OPNFV, your system will have been validated once the installation is completed. The basic deployment validation only addresses a small part of capabilities in the platform and you may want to execute more exhaustive tests. Some investigation will be required to select the right test suites to run on your platform.

Many of the OPNFV test projects provide user guide documentation and installation instructions within this document.

User Guide & Configuration Guide

Abstract

OPNFV is a collaborative project aimed at providing a variety of virtualisation deployments intended to host applications serving the networking and carrier industries. This document provides guidance and instructions for using platform features designed to support these applications that are made available in the OPNFV Gambia release.

This document is not intended to replace or replicate documentation from other upstream open source projects such as KVM, OpenDaylight, OpenStack, etc., but to highlight the features and capabilities delivered through the OPNFV project.

Introduction

OPNFV provides a suite of scenarios (infrastructure deployment options) that can be installed to host virtualised network functions (VNFs). This document intends to help users of the platform leverage the features and capabilities delivered by OPNFV.

OPNFV's Continuous Integration builds, deploys and tests combinations of virtual infrastructure components in what are defined as scenarios. A scenario may include components such as KVM, OpenDaylight, OpenStack, OVS, etc., where each scenario will include different source components or configurations. Scenarios are designed to enable specific features and capabilities in the platform that can be leveraged by the OPNFV user community.

Feature Overview

The following links outline the feature deliverables from participating OPNFV projects in the Gambia release. Each of the participating projects provides detailed descriptions about the delivered features including use cases, implementation, and configuration specifics.

The following Configuration Guides and User Guides assume that the reader already has some knowledge about a given project’s specifics and deliverables. These Guides are intended to be used following the installation with an OPNFV installer to allow users to deploy and implement features delivered by OPNFV.

If you are unsure about the specifics of a given project, please refer to the OPNFV wiki page at http://wiki.opnfv.org for more details.

Testing Frameworks

Testing Framework Overview

OPNFV Testing Overview

Introduction

Testing is one of the key activities in OPNFV and includes unit, feature, component, system level testing for development, automated deployment, performance characterization and stress testing.

Test projects are dedicated to providing frameworks, tooling and test cases categorized as functional, performance or compliance testing. Test projects fulfill different roles such as verifying VIM functionality, benchmarking components and platforms, or analysing measured KPIs for OPNFV release scenarios.

Feature projects also provide their own test suites that either run independently or within a test project.

This document details the OPNFV testing ecosystem, describes common test components used by individual OPNFV projects and provides links to project specific documentation.

The OPNFV Testing Ecosystem

The OPNFV testing projects are represented in the following diagram:

Overview of OPNFV Testing projects

The major testing projects are described in the table below:

Project Description
Bottlenecks This project aims to find system bottlenecks by testing and verifying OPNFV infrastructure in a staging environment before committing it to a production environment. Instead of debugging a deployment in a production environment, an automatic method for executing benchmarks that validates the deployment during staging is adopted. This project forms a staging framework to find bottlenecks and to analyse the OPNFV infrastructure.
CPerf SDN Controller benchmarks and performance testing, applicable to controllers in general. Collaboration of upstream controller testing experts, external test tool developers and the standards community. Primarily contribute to upstream/external tooling, then add jobs to run those tools on OPNFV’s infrastructure.
Dovetail This project intends to define and provide a set of OPNFV-related validation criteria/tests that will provide input for the OPNFV Compliance Verification Program. The Dovetail project is executed with the guidance and oversight of the Compliance and Certification (C&C) committee and works to secure the goals of the C&C committee for each release. The project intends to incrementally define qualification criteria that establish the foundations of how one is able to measure the ability to utilize the OPNFV platform, how the platform itself should behave, and how applications may be deployed on the platform.
Functest This project deals with the functional testing of the VIM and NFVI. It leverages several upstream test suites (OpenStack, ODL, ONOS, etc.) and can be used by feature projects to launch feature test suites in CI/CD. The project is used for scenario validation.
NFVbench NFVbench is a compact and self-contained data plane performance measurement tool for OpenStack-based NFVI platforms. It is agnostic of the NFVI distribution, Neutron networking implementation and hardware. It runs on any Linux server with a DPDK-compliant NIC connected to the NFVI platform data plane and bundles a highly efficient software traffic generator. It provides a fully automated measurement of the most common packet paths at any level of scale and load using RFC 2544. Available as a Docker container with simple command line and REST interfaces, it is easy to use as it takes care of most of the guesswork generally associated with data plane benchmarking, and can run in any lab or in production environments.
QTIP QTIP, as the project for “Platform Performance Benchmarking” in OPNFV, aims to provide users with a simple indicator of performance, supported by comprehensive testing data and a transparent calculation formula. It provides a platform with common services for performance benchmarking which helps users build indicators by themselves with ease.
StorPerf The purpose of this project is to provide a tool to measure block and object storage performance in an NFVI. When complemented with a characterization of typical VF storage performance requirements, it can provide pass/fail thresholds for test, staging, and production NFVI environments.
VSPERF VSPERF is an OPNFV project that provides an automated test-framework and comprehensive test suite based on Industry Test Specifications for measuring NFVI data-plane performance. The data-path includes switching technologies with physical and virtual network interfaces. The VSPERF architecture is switch and traffic generator agnostic and test cases can be easily customized. Software versions and configurations including the vSwitch (OVS or VPP) as well as the network topology are controlled by VSPERF (independent of OpenStack). VSPERF is used as a development tool for optimizing switching technologies, qualification of packet processing components and for pre-deployment evaluation of the NFV platform data-path.
Yardstick The goal of the project is to verify infrastructure compliance when running VNF applications. The NFV use cases described in ETSI GS NFV 001 show a large variety of applications, each defining specific requirements and complex configurations on the underlying infrastructure and test tools. The Yardstick concept decomposes typical VNF workload performance metrics into a number of characteristics/performance vectors, each of which can be represented by a distinct test case.

Testing Working Group Resources

Test Results Collection Framework

Any test project that runs in the global OPNFV lab infrastructure and is integrated with OPNFV CI can push test results to the community Test Database using a common Test API. This database can be used to track the evolution of testing and to analyse test runs in order to compare results across installers, scenarios, and technically and geographically diverse hardware environments.

Results from the database are used to generate a dashboard with the current test status for each testing project. Please note that you can also deploy the Test Database and Test API locally in your own environment.

Overall Test Architecture

The management of test results can be summarized as follows:

+-------------+    +-------------+    +-------------+
|             |    |             |    |             |
|   Test      |    |   Test      |    |   Test      |
| Project #1  |    | Project #2  |    | Project #N  |
|             |    |             |    |             |
+-------------+    +-------------+    +-------------+
         |               |               |
         V               V               V
     +---------------------------------------------+
     |                                             |
     |           Test Rest API front end           |
     |    http://testresults.opnfv.org/test        |
     |                                             |
     +---------------------------------------------+
         ^                |                     ^
         |                V                     |
         |     +-------------------------+      |
         |     |                         |      |
         |     |    Test Results DB      |      |
         |     |         Mongo DB        |      |
         |     |                         |      |
         |     +-------------------------+      |
         |                                      |
         |                                      |
   +----------------------+        +----------------------+
   |                      |        |                      |
   | Testing Dashboards   |        |  Test Landing page   |
   |                      |        |                      |
   +----------------------+        +----------------------+
The Test Database

A MongoDB database was introduced for the Brahmaputra release. The following collections are declared in this database:

  • pods: the list of pods used for production CI
  • projects: the list of projects providing test cases
  • test cases: the test cases related to a given project
  • results: the results of the test cases
  • scenarios: the OPNFV scenarios tested in CI

This database can be used by any project through the Test API. Please note that projects may also use additional databases. The Test Database is mainly used to collect CI test results and generate scenario trust indicators. The Test Database is also cloned for OPNFV Plugfests in order to provide a private datastore accessible only to Plugfest participants.

Test API description

The Test API is used to declare pods, projects, test cases and test results. Pods correspond to a cluster of machines (3 controller and 2 compute nodes in HA mode) used to run the tests and are defined in the Pharos project. The results pushed to the database are related to pods, projects and test cases. Trying to push results generated from a non-referenced pod will cause the Test API to return an error message.

The data model is very basic; five objects are available:
  • Pods
  • Projects
  • Test cases
  • Results
  • Scenarios
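To make the relationship between these objects concrete, the sketch below builds a hypothetical result document of the kind a test project might push through the Test API. The field names are assumptions based on the objects listed above, not the authoritative schema; consult the Test API documentation for the real one.

```python
import json

# Hypothetical result payload referencing a pod, a project and a test
# case.  Field names are illustrative only.
result = {
    "pod_name": "example-pod",          # must be a pod already declared
    "project_name": "functest",
    "case_name": "vping_ssh",
    "installer": "fuel",
    "scenario": "os-nosdn-nofeature-ha",
    "criteria": "PASS",
    "start_date": "2018-11-01 10:00:00",
    "stop_date": "2018-11-01 10:05:00",
}
payload = json.dumps(result)
```

Because pods must be declared before results referencing them are accepted, a payload naming an unknown pod would be rejected, as described above.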

For detailed information, please go to http://artifacts.opnfv.org/releng/docs/testapi.html

The code of the Test API is hosted in the releng-testresults repository [TST2]. The static documentation of the Test API can be found at [TST3]. The Test API has been dockerized and may be installed locally in your lab.

The deployment of the Test API has been automated. A Jenkins job manages:

  • the unit tests of the Test API
  • the creation of a new docker file
  • the deployment of the new Test API
  • the archive of the old Test API
  • the backup of the Mongo DB
Test API Authorization

PUT/DELETE/POST operations of the Test API now require token-based authorization. The token must be added to the request using an ‘X-Auth-Token’ header in order to access the database.

e.g.:

headers['X-Auth-Token']

The value of the header, i.e. the token, can be accessed through the Jenkins environment variable TestApiToken. The token value is added as a masked password.

headers['X-Auth-Token'] = os.environ.get('TestApiToken')

The above example is in Python. Token-based authentication has been added so that only CI pods running Jenkins jobs can access the database. Please note that token authorization is currently implemented but not yet enabled.
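A fuller sketch of how a CI job could attach the token to a POST is shown below, using only the Python standard library. The endpoint path and payload fields are assumptions for illustration, and no request is actually sent here.

```python
import json
import os
import urllib.request

# Token exposed to the job as a masked Jenkins environment variable;
# a dummy fallback is used for local runs where it is unset.
token = os.environ.get("TestApiToken", "dummy-token-for-local-runs")

headers = {
    "Content-Type": "application/json",
    "X-Auth-Token": token,  # required for PUT/DELETE/POST operations
}

# Hypothetical endpoint path; see the Test API docs for the real routes.
req = urllib.request.Request(
    "http://testresults.opnfv.org/test/api/v1/results",
    data=json.dumps({"case_name": "vping_ssh", "criteria": "PASS"}).encode(),
    headers=headers,
    method="POST",
)
# urllib.request.urlopen(req) would submit the result from a CI pod.
```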

Test Project Reporting

The reporting page for the test projects is http://testresults.opnfv.org/reporting/

Testing group reporting page

This page provides reporting per OPNFV release and per testing project.

Testing group Euphrates reporting page

An evolution of the reporting page is planned to unify test reporting by creating a landing page that shows the scenario status at a glance (this information was previously consolidated manually on a wiki page). The landing page will be displayed per scenario and show:

  • the status of the deployment
  • the score from each test suite (there is no overall score; scoring is determined by each test project)
  • a trust indicator

Test Case Catalog

Until the Colorado release, each testing project managed its own list of test cases, which made it very hard to get a global view of the available test cases across the different test projects. A common view was possible through the API, but it was not very user friendly. Test cases per project may be listed by calling:

with project_name: bottlenecks, functest, qtip, storperf, vsperf, yardstick

A test case catalog has now been created [TST4]. Roll over a project name and click to get the list of its test cases; then click on a test case to get more details.

Testing group testcase catalog
Test Dashboards

The Test Dashboard is used to provide a consistent view of the results collected in CI. The results shown on the dashboard are post processed from the Database, which only contains raw results. The dashboard can be used in addition to the reporting page (high level view) to allow the creation of specific graphs according to what the test owner wants to show.

In Brahmaputra, a basic dashboard was created in Functest. In Colorado, Yardstick used Grafana (time-based graphs) and ELK (complex graphs). Since Danube, the OPNFV testing community has adopted the ELK framework and uses Bitergia to create highly flexible dashboards [TST5].

Testing group dashboard
Power Consumption Monitoring Framework
Introduction

Power consumption is a key driver for NFV. Just as an end user is interested in knowing which smartphone applications are power hungry (and explain why the phone has to be plugged in every day), we are interested in knowing which VNFs consume the most power.

Power consumption is hard to evaluate empirically. It is however possible to collect information and leverage the Pharos federation to try to detect profiles/footprints. Thanks to CI, we know that we are running a known, deterministic list of test cases. The idea is to correlate this knowledge with the power consumption in order to identify statistical bias.

High Level Architecture

The energy recorder high level architecture may be described as follows:

Energy recorder high level architecture

The energy monitoring system is based on 3 software components:

  • Power info collector: polls servers to collect instantaneous power consumption information
  • Energy recording API + influxdb: receives server consumption on one leg and scenario notifications on the other. It is then able to establish the correlation between consumption and scenario and stores it into a time-series database (influxdb)
  • Python SDK: a Python SDK using decorators to send notifications to the Energy recording API from test case scenarios
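The correlation step performed by the recording API can be pictured with a minimal sketch: each power sample is tagged with the scenario step whose time window contains it. All names and structures here are illustrative, not the recorder's actual code.

```python
# Illustrative sketch of the correlation done by the recording API:
# tag each power sample with the scenario step active at that time.
# Names and data shapes are hypothetical.

def tag_samples(samples, steps):
    """samples: list of (timestamp, watts); steps: list of (start, name)
    sorted by start time. Returns list of (timestamp, watts, step_name)."""
    tagged = []
    for ts, watts in samples:
        current = None
        for start, name in steps:
            if start <= ts:
                current = name
            else:
                break
        tagged.append((ts, watts, current))
    return tagged

steps = [(0, "deploy"), (60, "tempest"), (180, "teardown")]
samples = [(10, 230.0), (90, 310.5), (200, 180.2)]
# Each sample is now associated with the scenario step it occurred in,
# ready to be written to a time-series database.
print(tag_samples(samples, steps))
```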

Power Info Collector

It collects instantaneous power consumption information and sends it to the Event API in charge of data storage. The collector uses different connectors to read the power consumption on remote servers:

  • IPMI: this is the basic method and is manufacturer dependent. Depending on the manufacturer, the refresh delay may vary (generally from 10 to 30 sec.)
  • Redfish: an industry RESTful API for hardware management. Unfortunately it is not yet supported by many suppliers.
  • ILO: the HP RESTful API. This connector supports both the 2.1 and 2.4 versions of HP-ILO.

IPMI is supported by at least:

  • HP
  • IBM
  • Dell
  • Nokia
  • Advantech
  • Lenovo
  • Huawei

Redfish API has been successfully tested on:

  • HP
  • Dell
  • Huawei (E9000 class servers used in OPNFV Community Labs are IPMI 2.0 compliant and use the Redfish login interface through browsers supporting JRE1.7/1.8)

Several test campaigns done with a physical wattmeter showed that IPMI results were not very accurate while Redfish results were. So if Redfish is available, it is highly recommended to use it.
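This recommendation can be expressed as a tiny selection helper. The function name and the boolean capability flags are illustrative, not part of the actual collector.

```python
# Minimal sketch of the recommendation above: prefer Redfish when the
# server exposes it, fall back to IPMI otherwise. Hypothetical helper,
# not the collector's real code.

def pick_connector(supports_redfish, supports_ipmi):
    if supports_redfish:
        return "redfish"  # more accurate readings per wattmeter campaigns
    if supports_ipmi:
        return "ipmi"     # widely supported, but less accurate
    raise ValueError("no supported power-reading connector")
```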

Installation

To run the server power consumption collector agent, you need to deploy a docker container locally on your infrastructure.

This container requires:

  • Connectivity on the LAN where server administration services (ILO, eDrac, IPMI, ...) are configured and IP access to the POD’s servers
  • Outgoing HTTP access to the Event API (internet)

Build the image by typing:

curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/docker/server-collector.dockerfile|docker build -t energyrecorder/collector -

Create local folder on your host for logs and config files:

mkdir -p /etc/energyrecorder
mkdir -p /var/log/energyrecorder

In /etc/energyrecorder create a configuration for logging in a file named collector-logging.conf:

curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/server-collector/conf/collector-logging.conf.sample > /etc/energyrecorder/collector-logging.conf

Check the configuration in this file (folders, log levels, ...). In /etc/energyrecorder, create a configuration for the collector in a file named collector-settings.yaml:

curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/server-collector/conf/collector-settings.yaml.sample > /etc/energyrecorder/collector-settings.yaml

Define the “PODS” section and their “servers” section according to the environment to monitor. Note: The “environment” key should correspond to the pod name, as defined in the “NODE_NAME” environment variable by CI when running.
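The note above can be illustrated with a small hypothetical helper that checks the “environment” keys against NODE_NAME. The dict shapes below are assumptions for illustration, not the collector’s actual configuration schema.

```python
# Hypothetical helper: find the POD entries whose "environment" key
# matches the NODE_NAME variable set by CI. Dict shapes are assumed.
import os

def matching_pods(pods, node_name=None):
    """pods: list of dicts loosely shaped like the PODS section of
    collector-settings.yaml. Returns pods matching the CI node name."""
    node_name = node_name or os.environ.get("NODE_NAME", "")
    return [pod for pod in pods if pod.get("environment") == node_name]

pods = [
    {"environment": "lf-pod2", "servers": [{"host": "10.0.0.1", "type": "ipmi"}]},
    {"environment": "intel-pod18", "servers": []},
]
print(matching_pods(pods, node_name="lf-pod2"))
```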

IMPORTANT NOTE: To apply a new configuration, you need to kill the running container and start a new one (see below).

Run

To run the container, you have to map folder located on the host to folders in the container (config, logs):

docker run -d --name energy-collector --restart=always -v /etc/energyrecorder:/usr/local/energyrecorder/server-collector/conf -v /var/log/energyrecorder:/var/log/energyrecorder energyrecorder/collector
Energy Recording API

An event API to insert contextual information when monitoring energy (e.g. start Functest, start Tempest, destroy VM, ...). It is associated with an influxDB to store the power consumption measures. It is hosted on a shared environment with the following access points:

Component Connectivity
Energy recording API documentation http://energy.opnfv.fr/resources/doc/
influxDB (data) http://energy.opnfv.fr:8086

If you need, you can also host your own instance of the Energy recording API (in that case, the Python SDK may require a settings update). If you plan to use the default shared API, the following steps are not required.

Image creation

First, you need to build an image:

curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/docker/recording-api.dockerfile|docker build -t energyrecorder/api -
Setup

Create local folder on your host for logs and config files:

mkdir -p /etc/energyrecorder
mkdir -p /var/log/energyrecorder
mkdir -p /var/lib/influxdb

In /etc/energyrecorder create a configuration for logging in a file named webapp-logging.conf:

curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/recording-api/conf/webapp-logging.conf.sample > /etc/energyrecorder/webapp-logging.conf

Check the configuration in this file (folders, log levels, ...).

In /etc/energyrecorder create a configuration for the collector in a file named webapp-settings.yaml:

curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/recording-api/conf/webapp-settings.yaml.sample > /etc/energyrecorder/webapp-settings.yaml

Normally the included configuration is ready to use, except for the influx username/password (see the run command below). Use the admin user here.

IMPORTANT NOTE: To apply a new configuration, you need to kill the running container and start a new one (see below).

Run

To run the container, you have to map folder located on the host to folders in the container (config, logs):

docker run -d --name energyrecorder-api -p 8086:8086 -p 8888:8888  -v /etc/energyrecorder:/usr/local/energyrecorder/web.py/conf -v /var/log/energyrecorder/:/var/log/energyrecorder -v /var/lib/influxdb:/var/lib/influxdb energyrecorder/webapp admin-influx-user-name admin-password readonly-influx-user-name user-password

with

Parameter name            Description
admin-influx-user-name    Influx user with admin grants to create
admin-password            Influx password to set for the admin user
readonly-influx-user-name Influx user with readonly grants to create
user-password             Influx password to set for the readonly user

NOTE: The local folder /var/lib/influxdb is the location where influx data are stored. You may use any other location at your convenience. Just remember to define this mapping properly when running the container.

Power consumption Python SDK

The Python SDK is minimally intrusive: it is based on Python decorators that trigger calls to the event API.

It is currently hosted in Functest repo but if other projects adopt it, a dedicated project could be created and/or it could be hosted in Releng.

How to use the SDK

import the energy library:

import functest.energy.energy as energy

Notify that you want power recording in your testcase:

@energy.enable_recording
def run(self):
    self.do_some_stuff1()
    self.do_some_stuff2()

If you want to register additional steps during the scenarios, you can do it in two different ways.

Notify step on method definition:

@energy.set_step("step1")
def do_some_stuff1(self):
    ...

@energy.set_step("step2")
def do_some_stuff2(self):
    ...

Notify directly from code:

@energy.enable_recording
def run(self):
    energy.set_step("step1")
    self.do_some_stuff1()
    ...
    energy.set_step("step2")
    self.do_some_stuff2()
SDK Setting

Settings delivered in the project git are ready to use and assume that you will use the shared energy recording API. If you want to use another instance, you have to update the key “energy_recorder.api_url” in <FUNCTEST>/functest/ci/config_functest.yaml by setting the proper hostname/IP.

Results

Here is an example of results coming from LF POD2. This sequence represents several CI runs in a row (0 power corresponds to a hard reboot of the servers).

You may connect to http://energy.opnfv.fr:3000 for more results (ask the infra team for credentials).

Energy monitoring of LF POD2
OPNFV Test Group Information

For more information or to participate in the OPNFV test community please see the following:

wiki: https://wiki.opnfv.org/testing

mailing list: test-wg@lists.opnfv.org

IRC channel: #opnfv-testperf

weekly meeting (https://wiki.opnfv.org/display/meetings/TestPerf):
  • Usual time: Every Thursday 15:00-16:00 UTC / 7:00-8:00 PST
  • APAC time: 2nd Wednesday of the month 8:00-9:00 UTC

Reference Documentation

Project Documentation links
Bottlenecks https://wiki.opnfv.org/display/bottlenecks/Bottlenecks
CPerf https://wiki.opnfv.org/display/cperf
Dovetail https://wiki.opnfv.org/display/dovetail
Functest https://wiki.opnfv.org/display/functest/
NFVbench https://wiki.opnfv.org/display/nfvbench/
QTIP https://wiki.opnfv.org/display/qtip
StorPerf https://wiki.opnfv.org/display/storperf/Storperf
VSPERF https://wiki.opnfv.org/display/vsperf
Yardstick https://wiki.opnfv.org/display/yardstick/Yardstick

[TST1]: OPNFV web site

[TST2]: TestAPI code repository link in releng-testresults

[TST3]: TestAPI autogenerated documentation

[TST4]: Testcase catalog

[TST5]: Testing group dashboard


This page provides the links to the installation, configuration and user guides of the different test projects.

Bottlenecks

Yardstick

Testing Developer Guides

Testing group

Test Framework Overview
Testing developer guide
Introduction

The OPNFV testing ecosystem is wide.

This guide aims to provide guidelines for new developers involved in test areas.

For the description of the ecosystem, see [DEV1].

Developer journey

There are several ways to join test projects as a developer. In fact you may:

  • Develop new test cases
  • Develop frameworks
  • Develop tooling (reporting, dashboards, graphs, middleware,...)
  • Troubleshoot results
  • Post-process results

These different tasks may be done within a specific project or as a shared resource across the different projects.

If you develop new test cases, the best practice is to contribute upstream as much as possible. You may contact the testing group to know which project - in OPNFV or upstream - would be the best place to host the test cases. Such contributions are usually directly connected to a specific project, more details can be found in the user guides of the testing projects.

Each OPNFV testing project provides test cases and the framework to manage them. As a developer, you can obviously contribute to them. The developer guide of the testing projects shall indicate the procedure to follow.

Tooling may be specific to a project or generic to all the projects. For specific tooling, please report to the test project user guide. The tooling used by several test projects will be detailed in this document.

The best event to meet the testing community is probably the plugfest. Such an event is organized after each release. Most of the test projects are present.

The summit is also a good opportunity to meet most of the actors [DEV4].

Be involved in the testing group

The testing group is a self-organized working group. The OPNFV projects dealing with testing are invited to participate in order to elaborate and consolidate a consistent test strategy (test case definition, scope of projects, resources for long duration, documentation, ...) and align tooling and best practices.

A weekly meeting is organized and the agenda may be amended by any participant. Two slots have been defined (US/Europe and APAC). Agendas and minutes are public. See [DEV3] for details. The testing group IRC channel is #opnfv-testperf.

Best practices

Not all test projects have the same maturity and/or number of contributors, and the nature of the test projects may also differ. The following best practices may therefore not be accurate for all projects and are only indicative. Contact the testing group for further details.

Repository structure

Most of the projects have a similar structure, which can be defined as follows:

`-- home
  |-- requirements.txt
  |-- setup.py
  |-- tox.ini
  |
  |-- <project>
  |       |-- <api>
  |       |-- <framework>
  |       `-- <test cases>
  |
  |-- docker
  |     |-- Dockerfile
  |     `-- Dockerfile.aarch64.patch
  |-- <unit tests>
  `- docs
     |-- release
     |   |-- release-notes
     |   `-- results
     `-- testing
         |-- developer
         |     `-- devguide
         |-- user
               `-- userguide
API

Test projects are installing tools and triggering tests. When it is possible it is recommended to implement an API in order to perform the different actions.

Each test project should be able to expose and consume APIs from other test projects. This pseudo micro-service approach should allow a flexible use of the different projects and reduce the risk of overlap. In fact, if project A provides an API to deploy a traffic generator, it is better to reuse it rather than implementing a new way to deploy it. This approach has not been implemented yet, but the prerequisite of exposing an API has already been met by several test projects.

CLI

Most of the test projects provide a Docker image as a deliverable. Once connected, it is possible to prepare the environment and run tests through a CLI.

Dockerization

Dockerization has been introduced in Brahmaputra and adopted by most of the test projects. Docker containers are pulled on the jumphost of the OPNFV POD. <TODO Jose/Mark/Alec>

Code quality

It is recommended to control the quality of the code of the testing projects, and more precisely to implement some verifications before any merge:

  • pep8
  • pylint
  • unit tests (python 2.7)
  • unit tests (python 3.5)

The code of the test project must be covered by unit tests. The coverage shall be reasonable and not decrease when adding new features to the framework. The use of tox is recommended. It is possible to implement strict rules (no decrease of pylint score, unit test coverages) on critical python classes.
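As a minimal illustration of the practice, here is a self-contained unit test using only the standard library; the helper under test is hypothetical, not from any OPNFV framework. tox would typically run such tests alongside pep8/pylint checks.

```python
# Illustrative unit test: the function under test is a made-up
# framework helper, shown only to demonstrate the testing pattern.
import unittest

def parse_criteria(value):
    """Hypothetical helper: normalise a test criteria string."""
    return "PASS" if str(value).strip().upper() in ("PASS", "100") else "FAIL"

class ParseCriteriaTest(unittest.TestCase):
    def test_pass_values(self):
        self.assertEqual(parse_criteria(" pass "), "PASS")
        self.assertEqual(parse_criteria("100"), "PASS")

    def test_fail_values(self):
        self.assertEqual(parse_criteria("0"), "FAIL")

if __name__ == "__main__":
    # exit=False so the test run does not terminate the interpreter
    unittest.main(exit=False)
```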

Third party tooling

Several test projects integrate third party tooling for code quality check and/or traffic generation. Some of the tools can be listed as follows:

Project      Tool            Comments
Bottlenecks  TODO
Functest     Tempest         OpenStack test tooling
             Rally           OpenStack test tooling
             Refstack        OpenStack test tooling
             RobotFramework  Used for ODL tests
QTIP         Unixbench, RAMSpeed, nDPI, openSSL, inxi
Storperf     TODO
VSPERF       TODO
Yardstick    Moongen         Traffic generator
             Trex            Traffic generator
             Pktgen          Traffic generator
             IxLoad, IxNet   Traffic generator
             SPEC            Compute
             Unixbench       Compute
             RAMSpeed        Compute
             LMBench         Compute
             Iperf3          Network
             Netperf         Network
             Pktgen-DPDK     Network
             Testpmd         Network
             L2fwd           Network
             Fio             Storage
             Bonnie++        Storage
Testing group configuration parameters
Testing categories

The testing group defined several categories also known as tiers. These categories can be used to group test suites.

Category Description
Healthcheck Simple and quick healthcheck test cases
Smoke Set of smoke test cases/suites to validate the release
Features Test cases that validate a specific feature on top of OPNFV. Those come from Feature projects and need a bit of support for integration
Components Tests on a specific component (e.g. OpenStack, OVS, DPDK,..) It may extend smoke tests
Performance Performance qualification
VNF Test cases related to deploy an open source VNF including an orchestrator
Stress Stress and robustness tests
In Service In service testing
Testing domains

The domains deal with the technical scope of the tests. It shall correspond to domains defined for the certification program:

  • compute
  • network
  • storage
  • hypervisor
  • container
  • vim
  • mano
  • vnf
  • ...
Testing coverage

One of the goals of the testing working group is to identify the poorly covered areas and avoid testing overlap. Ideally based on the declaration of the test cases, through the tags, domains and tier fields, it shall be possible to create heuristic maps.
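A heuristic map of this kind can be sketched in a few lines, counting declared cases per domain; the case entries below are made up for illustration, whereas real data would come from the test case catalog described earlier.

```python
# Illustrative coverage "heuristic map": count declared test cases per
# domain to spot poorly covered areas. The case dicts are invented.
from collections import Counter

cases = [
    {"name": "vping", "domain": "network", "tier": "healthcheck"},
    {"name": "tempest", "domain": "vim", "tier": "smoke"},
    {"name": "fio", "domain": "storage", "tier": "performance"},
    {"name": "odl", "domain": "network", "tier": "smoke"},
]

coverage = Counter(case["domain"] for case in cases)
# Domains with low (or zero) counts are candidates for new tests.
print(coverage)
```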

Reliability, Stress and Long Duration Testing

Resiliency of NFV refers to the ability of the NFV framework to limit disruption and return to normal or at a minimum acceptable service delivery level in the face of a fault, failure, or an event that disrupts the normal operation [DEV5].

Reliability testing evaluates the ability of the SUT to recover in the face of faults, failures, or disruptions of normal operation; or simply, the ability of the SUT to absorb “disruptions”.

Reliability tests use different forms of faults as stimulus, and the test must measure the reaction in terms of the outage time or impairments to transmission.

Stress testing involves producing excess load as stimulus, and the test must measure the reaction in terms of unexpected outages or (more likely) impairments to transmission.

These kinds of “load” will cause “disruption” which can easily be found in system logs. The purpose is to raise such “load” to evaluate whether the SUT can provide an acceptable level of service, or level of confidence, during such circumstances. In Danube and Euphrates, we only considered stress tests with excess load over the OPNFV Platform.

In Danube, the Bottlenecks and Yardstick projects jointly implemented 2 stress tests (concurrently create/destroy VM pairs and do ping; system throughput limit), with Bottlenecks acting as the load manager calling Yardstick to execute each test iteration. These tests are designed to find breaking points and provide a level of confidence in the system to users. Summaries of the test cases are listed at the following addresses:

Stress test cases for OPNFV Euphrates (OS Ocata) release can be seen as extension/enhancement of those in D release. These tests are located in Bottlenecks/Yardstick repo (Bottlenecks as load manager while Yardstick execute each test iteration):

network usage from different VM pairs): https://wiki.opnfv.org/display/DEV/Intern+Project%3A+Baseline+Stress+Test+Case+for+Bottlenecks+E+Release

In the OPNFV E release, we also plan to do long duration testing over OS Ocata. A separate CI pipeline testing OPNFV XCI (OSA) is proposed to accomplish the job. We have applied for a specific pod for the testing. Proposals and details are listed below:

The long duration testing is supposed to be started when OPNFV E release is published. A simple monitoring module for these tests is also planned to be added: https://wiki.opnfv.org/display/DEV/Intern+Project%3A+Monitoring+Stress+Testing+for+Bottlenecks+E+Release

How TOs
Where can I find information on the different test projects?

On http://docs.opnfv.org! A section is dedicated to the testing projects. You will find the overview of the ecosystem and the links to the project documents.

Another source is the testing wiki on https://wiki.opnfv.org/display/testing

You may also contact the testing group on the IRC channel #opnfv-testperf or by mail at test-wg AT lists.opnfv.org (testing group) or opnfv-tech-discuss AT lists.opnfv.org (generic technical discussions).

How can I contribute to a test project?

As any project, the best solution is to contact the project. The project members with their email address can be found under https://git.opnfv.org/<project>/tree/INFO

You may also send a mail to the testing mailing list or use the IRC channel #opnfv-testperf

Where can I find hardware resources?

You should discuss this topic with the project you are working with. If you need access to an OPNFV community POD, it is possible to contact the infrastructure group. Depending on your needs (scenario/installer/tooling), it should be possible to find free time slots on one OPNFV community POD from the Pharos federation. Create a JIRA ticket to describe your needs on https://jira.opnfv.org/projects/INFRA. You must already be an OPNFV contributor. See https://wiki.opnfv.org/display/DEV/Developer+Getting+Started.

Please note that lots of projects have their own “how to contribute” or “get started” page on the OPNFV wiki.

How do I integrate my tests in CI?

It shall be discussed directly with the project you are working with. It is done through Jenkins jobs calling testing project files, but the way to onboard cases differs from one project to another.

How to declare my tests in the test Database?

If you have access to the test API swagger (access granted to contributors), you may use the swagger interface of the test API to declare your project. The URL is http://testresults.opnfv.org/test/swagger/spec.html.

Testing Group Test API swagger

Click on Spec, the list of available methods must be displayed.

Testing Group Test API swagger

For the declaration of a new project, use the POST /api/v1/projects method. For the declaration of new test cases in an existing project, use the POST /api/v1/projects/{project_name}/cases method.

Testing group declare new test case
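A hedged sketch of these two declaration calls follows; the payload fields (`name`, `description`) are assumptions for illustration, and the swagger spec remains the authoritative schema.

```python
# Sketch of the two POST declarations described above. Payload fields
# are assumed; consult the TestAPI swagger spec before relying on them.
import json

TESTAPI = "http://testresults.opnfv.org/test/api/v1"

def declare_project_request(name, description=""):
    """Build URL and JSON body for declaring a new project."""
    return ("{}/projects".format(TESTAPI),
            json.dumps({"name": name, "description": description}))

def declare_case_request(project, case, description=""):
    """Build URL and JSON body for declaring a case in a project."""
    return ("{}/projects/{}/cases".format(TESTAPI, project),
            json.dumps({"name": case, "description": description}))

url, body = declare_case_request("myproject", "mycase")
# POST `body` to `url` with Content-Type: application/json
```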
How to push your results into the Test Database?

The test database is used to collect test results. By default it is enabled only for CI tests from Production CI pods.

Please note that it is possible to create your own local database.

A dedicated database is for instance created for each plugfest.

The architecture and associated API is described in previous chapter. If you want to push your results from CI, you just have to call the API at the end of your script.

You can also reuse a python function defined in functest_utils.py [DEV2]
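A sketch of what such a call could look like at the end of a CI run is shown below. The field names follow the TestAPI result model as commonly used by test projects, but treat them as assumptions and check the swagger spec.

```python
# Hedged sketch of pushing a result at the end of a CI run.
# Field names are assumptions based on the TestAPI result model.
import json

RESULTS_URL = "http://testresults.opnfv.org/test/api/v1/results"

def build_result(project, case, pod, installer, version, criteria,
                 details=None):
    return {
        "project_name": project,
        "case_name": case,
        "pod_name": pod,
        "installer": installer,
        "version": version,
        "criteria": criteria,   # e.g. "PASS" / "FAIL"
        "details": details or {},
    }

payload = json.dumps(build_result("functest", "tempest", "lf-pod2",
                                  "fuel", "master", "PASS"))
# POST `payload` to RESULTS_URL with Content-Type: application/json
```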

Where can I find the documentation on the test API?

The Test API is now documented in this document (see the sections above). You may also find autogenerated documentation at http://artifacts.opnfv.org/releng/docs/testapi.html. A web portal is also under construction for certification at http://testresults.opnfv.org/test/#/

I have tests, to which category should I declare them?

See table above.

The main ambiguity could be between the features and VNF categories. In fact, sometimes you have to spawn VMs to demonstrate the capabilities of the feature you introduced. We recommend declaring your test in the feature category.

The VNF category is really dedicated to tests including:

  • creation of resources
  • deployment of an orchestrator/VNFM
  • deployment of the VNF
  • test of the VNFM
  • freeing of the resources

The goal is not to study a particular feature of the infrastructure but to have a whole end-to-end test of a VNF automatically deployed in CI. Moreover, VNF tests run in weekly jobs (once a week), while feature tests run in daily jobs and are used to compute a scenario score.

Where are the logs of CI runs?

Logs and configuration files can be pushed to the artifact server from the CI under http://artifacts.opnfv.org/<project name>

References

[DEV1]: OPNFV Testing Ecosystem

[DEV2]: Python code sample to push results into the Database

[DEV3]: Testing group wiki page

[DEV4]: Conversation with the testing community, OPNFV Beijing Summit

[DEV5]: GS NFV 003

IRC support chan: #opnfv-testperf

Bottlenecks

Dovetail / OPNFV Verified Program

Functest

StorPerf

VSPERF

Yardstick

Developer

Documentation Guide

Documentation Guide

This page intends to cover the documentation handling for OPNFV. OPNFV projects are expected to create a variety of document types, according to the nature of the project. Some of these are common to projects that develop/integrate features into the OPNFV platform, e.g. Installation Instructions and User/Configurations Guides. Other document types may be project-specific.

Getting Started with Documentation for Your Project

OPNFV documentation is automated and integrated into our git & gerrit toolchains.

We use RST document templates in our repositories and automatically render HTML and PDF versions of the documents in our artifact store. Our wiki is also able to integrate these rendered documents directly, allowing projects to use the revision-controlled documentation process for project information, content and deliverables. Read this page, which elaborates on how documentation is to be included within opnfvdocs.

Licencing your documentation

All contributions to the OPNFV project are done in accordance with the OPNFV licensing requirements. Documentation in OPNFV is contributed in accordance with the Creative Commons 4.0 and the SPDX (https://spdx.org/) licences. All documentation files need to be licensed using the text below. The license may be applied in the first lines of all contributed RST files:

.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. SPDX-License-Identifier: CC-BY-4.0
.. (c) <optionally add copywriters name>

These lines will not be rendered in the html and pdf files.
How and where to store the document content files in your repository

All documentation for your project should be structured and stored in the <repo>/docs/ directory. The documentation toolchain will look in these directories and be triggered on events in these directories when generating documents.

Document structure and contribution

A general structure is proposed for storing and handling documents that are common across many projects but also for documents that may be project specific. The documentation is divided into three areas Release, Development and Testing. Templates for these areas can be found under opnfvdocs/docs/templates/.

Project teams are encouraged to use templates provided by the opnfvdocs project to ensure that there is consistency across the community. Following representation shows the expected structure:

docs/
├── development
│   ├── design
│   ├── overview
│   └── requirements
├── release
│   ├── configguide
│   ├── installation
│   ├── release-notes
│   ├── scenarios
│   │   └── scenario.name
│   └── userguide
├── testing
│   ├── developer
│   └── user
└── infrastructure
    ├── hardware-infrastructure
    ├── software-infrastructure
    ├── continuous-integration
    └── cross-community-continuous-integration
Release documentation

Release documentation is the set of documents that are published for each OPNFV release. These documents are created and developed following the OPNFV release process and milestones and should reflect the content of the OPNFV release. These documents have a master index.rst file in the <opnfvdocs> repository and extract content from other repositories. To provide content into these documents place your <content>.rst files in a directory in your repository that matches the master document and add a reference to that file in the correct place in the corresponding index.rst file in opnfvdocs/docs/release/.

Platform Overview: opnfvdocs/docs/release/overview

  • Note this document is not a contribution driven document
  • Content for this is prepared by the Marketing team together with the opnfvdocs team

Installation Instruction: <repo>/docs/release/installation

  • Folder for documents describing how to deploy each installer and scenario descriptions
  • Release notes will be included here <To Confirm>
  • Security related documents will be included here
  • Note that this document will be compiled into ‘OPNFV Installation Instruction’

User Guide: <repo>/docs/release/userguide

  • Folder for manuals to use specific features
  • Folder for documents describing how to install/configure project specific components and features
  • Can be the directory where API reference for project specific features are stored
  • Note this document will be compiled into ‘OPNFV userguide’

Configuration Guide: <repo>/docs/release/configguide

  • Brief introduction to configure OPNFV with its dependencies.

Release Notes: <repo>/docs/release/release-notes

  • Changes brought about in the release cycle.
  • Include version details.
Testing documentation

Documentation created by test projects can be stored under two different sub directories, /user or /development. Release notes will be stored under <repo>/docs/release/release-notes.

User documentation: <repo>/testing/user/ Will collect the documentation of the test projects allowing the end user to perform testing towards an OPNFV SUT, e.g. Functest/Yardstick/Vsperf/Storperf/Bottlenecks/Qtip installation/config & user guides.

Development documentation: <repo>/testing/development/ Will collect documentation to explain how to create your own test case and leverage existing testing frameworks, e.g. developer guides.

Development Documentation

Project specific documents such as design documentation, project overview or requirement documentation can be stored under /docs/development. Links to generated documents will be displayed under the Development Documentation section on docs.opnfv.org. You are encouraged to establish the following basic structure for your project as needed:

Requirement Documentation: <repo>/docs/development/requirements/

  • Folder for your requirement documentation
  • For details on requirements projects’ structures see the Requirements Projects page.

Design Documentation: <repo>/docs/development/design

  • Folder for your upstream design documents (blueprints, development proposals, etc..)

Project overview: <repo>/docs/development/overview

  • Folder for any project specific documentation.
Infrastructure Documentation

Infrastructure documentation can be stored under <repo>/docs/ folder of corresponding infrastructure project.

Including your Documentation

In your project repository

Add your documentation to your repository in the folder structure and according to the templates listed above. The documentation templates you will require are available in the opnfvdocs/docs/templates/ directory; you should copy the relevant templates to the <repo>/docs/ directory in your repository. For instance, if you want to document the userguide, then your steps shall be as follows:

git clone ssh://<your_id>@gerrit.opnfv.org:29418/opnfvdocs.git
cp -p opnfvdocs/docs/userguide/* <my_repo>/docs/userguide/

You should then add the relevant information to the template that will explain the documentation. When you are done writing, you can commit the documentation to the project repository.

git add .
git commit --signoff --all
git review
In OPNFVDocs Composite Documentation
In toctree
To import project documents from project repositories, we use submodules.
Each project is stored in opnfvdocs/docs/submodule/ as follows:
_images/Submodules.jpg

To include your project specific documentation in the composite documentation, first identify where your project documentation should be included. Say your project userguide should figure in the ‘OPNFV Userguide’, then:

vim opnfvdocs/docs/release/userguide.introduction.rst

This opens the text editor. Identify where you want to add the userguide. If the userguide is to be added to the toctree, simply include the path to it, example:

.. toctree::
    :maxdepth: 1

    submodules/functest/docs/userguide/index
    submodules/bottlenecks/docs/userguide/index
    submodules/yardstick/docs/userguide/index
    <submodules/path-to-your-file>
‘doc8’ Validation

It is recommended that all RST content is validated against doc8 standards. To validate your RST files using doc8, install doc8.

sudo pip install doc8

doc8 can now be used to check the rst files. Execute as,

doc8 --ignore D000,D001 <file>
Testing: Build Documentation Locally
Composite OPNFVDOCS documentation

To build whole documentation under opnfvdocs/, follow these steps:

Install virtual environment.

sudo pip install virtualenv
cd /local/repo/path/to/project

Download the OPNFVDOCS repository.

git clone https://gerrit.opnfv.org/gerrit/opnfvdocs

Change directory to opnfvdocs & install requirements.

cd opnfvdocs
sudo pip install -r etc/requirements.txt

Update submodules, build documentation using tox & then open using any browser.

cd opnfvdocs
git submodule update --init
tox -edocs
firefox docs/_build/html/index.html

Note

Make sure to run tox -edocs and not just tox.

Individual project documentation

To test how the documentation renders in HTML, follow these steps:

Install virtual environment.

sudo pip install virtualenv
cd /local/repo/path/to/project

Download the opnfvdocs repository.

git clone https://gerrit.opnfv.org/gerrit/opnfvdocs

Change directory to opnfvdocs & install requirements.

cd opnfvdocs
sudo pip install -r etc/requirements.txt

Move the conf.py file to your project folder where RST files have been kept:

mv opnfvdocs/docs/conf.py <path-to-your-folder>/

Move the static files to your project folder:

mv opnfvdocs/_static/ <path-to-your-folder>/

Build the documentation from within your project folder:

sphinx-build -b html <path-to-your-folder> <path-to-output-folder>

Your documentation will be built as HTML inside the specified output folder.

Note

Be sure to remove conf.py, the _static/ files, and the output folder from <project>/docs/ afterwards. These are for testing only. Only commit the rst files and related content.

Adding your project repository as a submodule

Clone the opnfvdocs repository and add your project repository to .gitmodules, following the conventions of the file:

cd docs/submodules/
git submodule add https://gerrit.opnfv.org/gerrit/$reponame
git submodule init $reponame/
git submodule update $reponame/
git add .
git commit -sv
git review
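The steps above can be exercised locally against throwaway repositories before touching Gerrit. In this sketch, project is a hypothetical stand-in for $reponame and all paths are illustrative:

```shell
# Local dry run of the submodule workflow, using throwaway repositories
# in place of the Gerrit remote; "project" is a hypothetical $reponame.
set -e
tmp=$(mktemp -d)
id="-c user.name=demo -c user.email=demo@example.org"

# Stand-in for your project repository
git init -q "$tmp/project"
git -C "$tmp/project" $id commit -q --allow-empty -m "initial commit"

# Stand-in for the opnfvdocs repository
git init -q "$tmp/opnfvdocs"
git -C "$tmp/opnfvdocs" $id commit -q --allow-empty -m "initial commit"

mkdir -p "$tmp/opnfvdocs/docs/submodules"
cd "$tmp/opnfvdocs/docs/submodules"

# file:// submodules are blocked by default in newer git releases,
# hence protocol.file.allow=always for this local demonstration only
git -c protocol.file.allow=always submodule add "$tmp/project"
git submodule init project
git submodule update project

cat "$tmp/opnfvdocs/.gitmodules"   # the new submodule entry lands here
```

Note that git submodule init and git submodule update are effectively no-ops here, since git submodule add already clones and registers the submodule; they are kept to mirror the documented flow.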
Removing a project repository as a submodule
git rm docs/submodules/$reponame
rm -rf .git/modules/$reponame
git config -f .git/config --remove-section submodule.$reponame 2> /dev/null
git add .
git commit -sv
git review

Submodule Transition

Moving away from submodules.

At the cost of some release-time overhead, the transition provides projects with several benefits:

  • Local builds - Projects will be able to build and view their docs locally, as they would appear on the OPNFV Docs website.
  • Reduced build time - Patchset verification will only run against the individual project’s docs, not all projects.
  • Decoupled build failures - An error introduced in one project’s docs will not break the builds of all the other projects.
Steps

To make the transition, the following steps need to be taken across three repositories:

  • Your project repository (Ex. Fuel)
  • The Releng repository
  • The OPNFV Docs repository
Adding a Local Build

In your project repo:

  1. Add the following files:

    docs/conf.py

    from docs_conf.conf import *  # noqa: F401,F403
    

    docs/conf.yaml

    ---
    project_cfg: opnfv
    project: Example
    

    docs/requirements.txt

    lfdocs-conf
    sphinx_opnfv_theme
    # Uncomment the following line if your project uses Sphinx to document
    # HTTP APIs
    # sphinxcontrib-httpdomain
    

    tox.ini

    [tox]
    minversion = 1.6
    envlist =
        docs,
        docs-linkcheck
    skipsdist = true
    
    [testenv:docs]
    deps = -rdocs/requirements.txt
    commands =
        sphinx-build -b html -n -d {envtmpdir}/doctrees ./docs/ {toxinidir}/docs/_build/html
        echo "Generated docs available in {toxinidir}/docs/_build/html"
    whitelist_externals = echo
    
    [testenv:docs-linkcheck]
    deps = -rdocs/requirements.txt
    commands = sphinx-build -b linkcheck -d {envtmpdir}/doctrees ./docs/ {toxinidir}/docs/_build/linkcheck
    

    .gitignore

    .tox/
    docs/_build/*

    docs/index.rst

    If this file doesn’t exist, it will need to be created, along with any other missing index files for directories (release, development). An example of the file’s content looks like this:

    .. This work is licensed under a Creative Commons Attribution 4.0 International License.
    .. SPDX-License-Identifier: CC-BY-4.0
    .. (c) Open Platform for NFV Project, Inc. and its contributors
    
    .. _<project-name>:
    
    ==============
    <project-name>
    ==============
    
    .. toctree::
       :numbered:
       :maxdepth: 2
    
       release/release-notes/index
       release/installation/index
       release/userguide/index
       scenarios/index
    

You can verify the build works by running:

tox -e docs
Creating a CI Job

In the releng repository:

  1. Update your project’s job file jjb/<project>/<project>-jobs.yaml with the following (taken from this guide):

    ---
    - project:
        name: PROJECT
        project: PROJECT
        project-name: 'PROJECT'
    
        project-pattern: 'PROJECT'
        rtd-build-url: RTD_BUILD_URL
        rtd-token: RTD_TOKEN
    
        jobs:
          - '{project-name}-rtd-jobs'
    

You can either send an email to helpdesk in order to get a copy of RTD_BUILD_URL and RTD_TOKEN, ping aricg or bramwelt in #opnfv-docs on Freenode, or add Aric Gardner or Trevor Bramwell to your patch as a reviewer and they will pass along the token and build URL.

Removing the Submodule

In the opnfvdocs repository:

  1. Add an intersphinx link to the opnfvdocs repo configuration:

    docs/conf.py

    intersphinx_mapping['<project>'] = ('http://opnfv-<project>.readthedocs.io', None)
    

    If the project exists on ReadTheDocs, and the previous change was merged in and the job ran, you can verify the linking is working correctly by finding the following line in the output of tox -e docs:

    loading intersphinx inventory from https://opnfv-<project>.readthedocs.io/en/latest/objects.inv...
    
  2. Ensure all references in opnfvdocs are using :ref: or :doc: and not directly specifying submodule files with ../submodules/<project>.

    For example:

    .. toctree::
    
       ../submodules/releng/docs/overview.rst
    

    Would become:

    .. toctree::
    
       :ref:`Releng Overview <releng:overview>`
    

    Some more examples can be seen here.

  3. Remove the submodule from opnfvdocs, replacing <project> with your project and commit the change:

    git rm docs/submodules/<project>
    git commit -s
    git review
    

Addendum

Index File

The index file must relatively reference your other rst files in that directory.

Here is an example index.rst :

*******************
Documentation Title
*******************

.. toctree::
   :numbered:
   :maxdepth: 2

   documentation-example
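For the toctree above to resolve, documentation-example.rst must sit next to index.rst in the same directory; the layout would look like this (hypothetical tree):

```text
docs/
├── index.rst
└── documentation-example.rst
```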
Source Files

Document source files have to be written in reStructuredText (rst) format. Each file will be built as an HTML page.

Here is an example source rst file :

=============
Chapter Title
=============

Section Title
=============

Subsection Title
----------------

Hello!
Writing RST Markdown

See http://sphinx-doc.org/rest.html .

Hint: You can add build-specific content by using the ‘only’ directive with a build type (‘html’ or ‘singlehtml’) for an OPNFV document. However, this is discouraged, since it can result in different views of the same document.

.. only:: html

    This line will be shown only in html version.
Verify Job

The verify job name is docs-verify-rtd-{branch}.

When you send document changes to Gerrit, Jenkins will build your documents in HTML formats (normal and single-page) to verify that the new document can be built successfully. Please check the Jenkins log and artifacts carefully. You can still improve your document even if the build job succeeded.

Merge Job

The merge job name is docs-merge-rtd-{branch}.

Once the patch is merged, Jenkins will automatically trigger a build of the new documentation. This might take about 15 minutes while ReadTheDocs builds the documentation. The newly built documentation will show up in the appropriate place under docs.opnfv.org/{branch}/path-to-file.

OPNFV Projects

Daisy4NFV

Edgecloud

  • Edgecloud overview

Joid

  • JOID overview

Opera

  • Opera overview

Infrastructure

Infrastructure Overview

OPNFV develops, operates, and maintains infrastructure which is used by the OPNFV community for development, integration, and testing purposes. The OPNFV Infrastructure Working Group (Infra WG) oversees the OPNFV infrastructure, ensuring it is kept up to date and in a state which serves the community in the best possible way.

The Infra WG is working towards a model whereby we have a seamless pipeline for handling resource requests from the OPNFV community, from both development and Continuous Integration perspectives. Automation of requests and integration with existing automation tools is a primary driver in reaching this model. In the Infra WG, we imagine a model where the infrastructure requirements that are specified by a feature, installer or other relevant project within OPNFV are requested, provisioned, used, reported on, and subsequently torn down with no (or minimal) user intervention at the physical/infrastructure level.

Objectives of the Infra WG are:

  • Deliver efficiently dimensioned resources to meet OPNFV community needs on request, in a timely manner that ensures maximum usage (capacity) and maximum density (distribution of workloads)
  • Satisfy the needs of the twice-yearly release projects; this includes being able to handle load (the number of projects and requests) as well as need (topology and different layouts)
  • Support OPNFV community users. As the Infra WG, we are integral to all aspects of the OPNFV community (since it starts with the hardware) - this can mean troubleshooting any element within the stack
  • Provide a method to expand and adapt as OPNFV community needs grow, and provide this to hosting providers (lab providers) as input for growth forecasts so they can better judge how best to contribute their resources
  • Work with reporting and other groups to ensure we have adequate feedback for the end users of the labs on how their systems, code, and features perform

The details of what is provided as part of the infrastructure can be seen in following chapters.

Hardware Infrastructure

TBD

Software Infrastructure

Continuous Integration - CI

Please see the details of CI in the chapters below.

Cross Community Continuous Integration - XCI

Please see the details of XCI in the chapters below.

Operations Supporting Tools

Found a typo or any other feedback? Send an email to users@opnfv.org or talk to us on IRC.