Note

This page is still under development.

OPNFV Documentation

Open Platform for NFV (OPNFV) facilitates the development and evolution of NFV components across various open source ecosystems. Through system level integration, deployment and testing, OPNFV creates a reference NFV platform to accelerate the transformation of enterprise and service provider networks. Participation is open to anyone, whether you are an employee of a member company or just passionate about network transformation.

Release Documentation

OPNFV Overview

Introduction

Network Functions Virtualization (NFV) is transforming the networking industry via software-defined infrastructures and open source is the proven method for quickly developing software for commercial products and services that can move markets. Open Platform for NFV (OPNFV) facilitates the development and evolution of NFV components across various open source ecosystems. Through system level integration, deployment and testing, OPNFV constructs a reference NFV platform to accelerate the transformation of enterprise and service provider networks. As an open source project, OPNFV is uniquely positioned to bring together the work of standards bodies, open source communities, service providers and commercial suppliers to deliver a de facto NFV platform for the industry.

By integrating components from upstream projects, the community is able to conduct performance and use case-based testing on a variety of solutions to ensure the platform’s suitability for NFV use cases. OPNFV also works upstream with other open source communities to bring contributions and learnings from its work directly to those communities in the form of blueprints, patches, bugs, and new code.

OPNFV initially focused on building NFV Infrastructure (NFVI) and Virtualised Infrastructure Management (VIM) by integrating components from upstream projects such as OpenDaylight, OpenStack, Ceph Storage, KVM, Open vSwitch, and Linux. More recently, OPNFV has extended its portfolio of forwarding solutions to include fd.io and ODP, is able to run on both Intel and ARM commercial and white-box hardware, and includes Management and Network Orchestration (MANO) components, primarily for application composition and management, in the Colorado release.

These capabilities, along with application programmable interfaces (APIs) to other NFV elements, form the basic infrastructure required for Virtualized Network Functions (VNF) and MANO components.

Concentrating on these components while also considering proposed projects on additional topics (such as the MANO components and applications themselves), OPNFV aims to enhance NFV services by increasing performance and power efficiency, improving reliability, availability and serviceability, and delivering comprehensive platform instrumentation.

OPNFV Platform Architecture

The OPNFV project addresses a number of aspects in the development of a consistent virtualisation platform, including common hardware requirements, software architecture, MANO and applications.

OPNFV Platform Overview Diagram

Overview infographic of the OPNFV platform and projects.

To address these areas effectively, the OPNFV platform architecture can be decomposed into the following basic building blocks:

  • Hardware: with the Infra working group, Pharos project and associated activities
  • Software Platform: through the platform integration and deployment projects
  • MANO: through the MANO working group and associated projects
  • Applications: which affect all other areas and drive requirements for OPNFV

OPNFV Lab Infrastructure

The infrastructure working group oversees such topics as lab management, workflow, definitions, metrics and tools for OPNFV infrastructure.

Fundamental to the working group is the Pharos project, which provides a set of defined lab infrastructures across a geographically and technically diverse, federated global OPNFV lab.

Labs may instantiate bare-metal and virtual environments that are accessed remotely by the community and used for OPNFV platform and feature development, build, deployment and testing. No two labs are the same, and the heterogeneity of the Pharos environment makes it an ideal platform for establishing hardware and software abstractions with well-understood performance characteristics.

Community labs are hosted by OPNFV member companies on a voluntary basis. The Linux Foundation also hosts an OPNFV lab that provides centralized CI and other production resources which are linked to community labs. Future lab capabilities will include the ability to easily automate deployment and testing of any OPNFV install scenario in any lab environment, as well as on a nested “lab as a service” virtual infrastructure.

OPNFV Software Platform Architecture

The OPNFV software platform is composed exclusively of open source implementations of platform components. OPNFV draws from the rich ecosystem of NFV-related technologies available in open source and then integrates, tests, measures and improves these components in conjunction with the upstream source communities.

While the composition of the OPNFV software platform is highly complex and made up of many projects and components, a subset of these projects gains the most attention from the OPNFV community and drives the development of new technologies and capabilities.

Virtual Infrastructure Management

OPNFV derives its virtual infrastructure management from one of its largest upstream ecosystems, OpenStack. OpenStack provides a complete reference cloud management system and associated technologies. While the OpenStack community sustains a broad set of projects, not all technologies are relevant in an NFV domain; the OPNFV community therefore consumes a subset of OpenStack projects, where the usage and composition may vary depending on the installer and scenario.

For details on the scenarios available in OPNFV and the specific composition of components refer to the OPNFV installation instructions: http://artifacts.opnfv.org/opnfvdocs/colorado/2.0/docs/installationprocedure/index.html

Operating Systems

OPNFV currently uses Linux on all target machines; this can include Ubuntu, CentOS or SUSE Linux. The specific version of Linux used for any deployment is documented in the installation guide.

Networking Technologies
SDN Controllers

OPNFV, as an NFV-focused project, has a significant investment in networking technologies and provides a broad variety of integrated open source reference solutions. The diversity of controllers that can be used in OPNFV is matched by a similarly diverse set of forwarding technologies.

Many SDN controllers relevant to virtual environments are available today, and the OPNFV community supports and contributes to a number of them. The controllers worked on by the community during this release of OPNFV include:

  • Neutron: an OpenStack project to provide “network connectivity as a service” between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., nova); a minimal usage sketch follows this list.
  • OpenDaylight: addresses multivendor, traditional and greenfield networks, establishing the industry’s de facto SDN platform and providing the foundation for networks of the future.
  • ONOS: a carrier-grade SDN network operating system designed for high availability, performance and scale-out.
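
As a brief illustration of Neutron’s “network connectivity as a service”, the sketch below creates a tenant network and subnet with the standard OpenStack client. The credentials file path, network name and CIDR are placeholders; adjust them to your deployment (older deployments may only ship the legacy neutron client).

# Source the credentials file provided by your installer (path is an example).
source openrc

# Create a tenant network and an IPv4 subnet on it (names and CIDR are placeholders).
openstack network create example-net
openstack subnet create example-subnet --network example-net --subnet-range 10.0.10.0/24

# Confirm the network is visible to other OpenStack services.
openstack network list
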
Data Plane

OPNFV extends Linux virtual networking capabilities by using virtual switching and routing components. The OPNFV community proactively engages with these source communities to address performance, scale and resiliency needs apparent in carrier networks.

  • FD.io (Fast data - Input/Output): a collection of several projects and libraries to amplify the transformation that began with Data Plane Development Kit (DPDK) to support flexible, programmable and composable services on a generic hardware platform.
  • Open vSwitch: a production quality, multilayer virtual switch designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols; a minimal sketch follows this list.
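
As a minimal illustration of Open vSwitch’s programmatic extension, the following sketch creates a bridge and attaches a port with the standard ovs-vsctl tool. The bridge and interface names are placeholders.

# Create a bridge and attach a (placeholder) physical interface to it.
sudo ovs-vsctl add-br br-example
sudo ovs-vsctl add-port br-example eth1

# Inspect the resulting switch configuration.
sudo ovs-vsctl show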

Deployment Architecture

A typical OPNFV deployment starts with three controller nodes running in a high availability configuration, including control plane components from OpenStack, SDN, etc., and a minimum of two compute nodes for deployment of workloads (VNFs). A detailed description of the hardware required to support this five-node configuration can be found in the Pharos specification: http://artifacts.opnfv.org/pharos/colorado/2.0/docs/specification/index.html

In addition to deployment on a highly available physical infrastructure, OPNFV can be deployed for development and lab purposes in a virtual environment. In this case, each host is provided by a virtual machine, and control and workload placement rely on nested virtualization.
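
Before attempting a virtual deployment, it is worth confirming that the host supports nested virtualization. The following is a generic Linux/KVM sketch for an Intel-based host, not an installer-specific step; AMD hosts use the kvm_amd module instead.

# Returns Y (or 1) if nested virtualization is enabled for kvm_intel.
cat /sys/module/kvm_intel/parameters/nested

# Enable it if needed (reloads the module; do this only on an idle host).
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel nested=1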

The initial deployment is done using a staging server, referred to as the “jumphost”. This server, either physical or virtual, is first installed with the installation program, which then installs OpenStack and other components on the controller nodes and compute nodes. See the OPNFV User Guide for more details.
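
Once the installer completes, a quick sanity check of the control plane can be run from the jumphost with the standard OpenStack client. This is an indicative sketch only; the location of the credentials file and the client packaging differ per installer, so consult the relevant installation guide.

# Source the admin credentials produced by your installer (path is an example).
source /root/openrc

# Verify that core services, compute services and hypervisors are registered and up.
openstack service list
openstack compute service list
openstack hypervisor list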

The OPNFV Testing Ecosystem

The OPNFV community has set out to address the needs of virtualization in the carrier network; as such, platform validation and measurement are a cornerstone of its iterative releases and objectives.

To simplify the complex task of feature, component and platform validation and characterization, the testing community has established a fully automated method for addressing all key areas of platform validation. This required integrating a variety of testing frameworks into the CI systems, with real-time, automated analysis of results and the storage and publication of key facts for each run, as shown in the following diagram.

Overview infographic of the Colorado OPNFV testing Ecosystem

Release Verification

The OPNFV community relies on its testing community to establish release criteria for each OPNFV release. With each release cycle the testing criteria become more stringent and more representative of our feature and resiliency requirements.

As each OPNFV release establishes a set of deployment scenarios to validate, the testing infrastructure and test suites need to accommodate these features and capabilities. Complexity increases not only in the validation of the scenarios themselves; some test cases require multiple datacenters to execute when evaluating features such as multisite and distributed datacenter solutions.

The release criteria as established by the testing teams include passing a set of test cases derived from the functional testing project ‘functest,’ a set of test cases derived from our platform system and performance test project ‘yardstick,’ and a selection of test cases for feature capabilities derived from other test projects such as bottlenecks, vsperf, cperf and storperf. The scenario needs to be able to be deployed, pass these tests, and be removed from the infrastructure iteratively (no fewer than four times) in order to fulfill the release criteria.

Functest

Functest provides a functional testing framework incorporating a number of test suites and test cases that test and verify OPNFV platform functionality. The scope of Functest and the relevant test cases can be found in its user guide.

Functest provides both feature project and component test suite integration, leveraging OpenStack and SDN controller testing frameworks to verify that the key components of the OPNFV platform are running successfully.
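
Functest is delivered as a Docker container. The sketch below is indicative only and assumes the Colorado-era Docker workflow; the image tag, environment variables and entry point are assumptions and should be taken from the Functest user guide and your installer’s documentation.

# Pull the Functest container (tag is an assumption; check the user guide).
docker pull opnfv/functest:colorado.2.0

# Run it, pointing at your installer and scenario (values are placeholders).
docker run -ti \
  -e "INSTALLER_TYPE=fuel" \
  -e "INSTALLER_IP=10.20.0.2" \
  -e "DEPLOY_SCENARIO=os-nosdn-nofeature-ha" \
  opnfv/functest:colorado.2.0 /bin/bash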

Yardstick

Yardstick is a testing project for verifying infrastructure compliance when running VNF applications. Yardstick benchmarks a number of characteristics and performance vectors on the infrastructure, making it a valuable pre-deployment NFVI testing tool.

Yardstick provides a flexible testing framework for launching other OPNFV testing projects; an indicative invocation sketch follows the list below.

There are two types of test cases in Yardstick:

  • Yardstick generic test cases, including basic characteristics benchmarking in the compute, storage and network areas.
  • OPNFV feature test cases, including basic telecom feature testing from OPNFV projects; for example nfv-kvm, sfc, ipv6, Parser, Availability and SDN VPN.
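
The following is an indicative sketch of how a Yardstick test case is launched. The sample file and CLI form are based on the Yardstick user guide and should be verified against your installed version; results go to the dispatcher configured for your setup (file, HTTP or InfluxDB).

# Run one of the generic sample test cases shipped with Yardstick.
yardstick task start samples/ping.yaml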

System Evaluation and compliance testing

The OPNFV community is developing a set of test suites intended to evaluate reference behaviors and capabilities of NFV systems developed outside the OPNFV ecosystem, measuring their ability to provide the features and capabilities developed within the OPNFV ecosystem.

The Dovetail project will provide a test framework and methodology able to be used on any NFV platform, including an agreed set of test cases establishing evaluation criteria for exercising an OPNFV compatible system. The Dovetail project has begun establishing the test framework and will provide a preliminary methodology for the Colorado release. Work will continue to develop these test cases into a standalone compliance evaluation solution in future releases.

Additional Testing

Besides the test suites and cases for release verification, additional testing is performed to validate specific features or characteristics of the OPNFV platform. These testing frameworks and test cases may address specific needs, such as extended measurements, additional testing stimuli, or tests simulating environmental disturbances or failures.

These additional testing activities provide a more complete evaluation of the OPNFV platform. Some of the projects focused on these testing areas include:

VSPERF

VSPERF provides a generic and architecture agnostic vSwitch testing framework and associated tests. This serves as a basis for validating the suitability of different vSwitch implementations and deployments.

Bottlenecks

Bottlenecks provides a framework to find system limitations and bottlenecks, providing root cause isolation capabilities to facilitate system evaluation.

OPNFV Installation

Abstract

The following document provides an overview of the instructions required for the installation of the Colorado release of OPNFV.

The Colorado release can be installed using a variety of technologies provided by the integration projects participating in OPNFV: Apex, Compass4Nfv, Fuel and JOID. Each installer provides the ability to install a common OPNFV platform as well as integrating additional features delivered through a variety of scenarios by the OPNFV community.

Introduction

The OPNFV platform is composed of a variety of upstream components that may be deployed on your physical infrastructure. A composition of components, tools and configurations is identified in OPNFV as a deployment scenario. The various OPNFV scenarios provide unique features and capabilities that you may want to leverage, so it is important to understand your required target platform capabilities before installing and configuring your target scenario.

An OPNFV installation requires either a physical or a virtual infrastructure environment as defined in the Pharos specification <http://artifacts.opnfv.org/pharos/colorado/2.0/docs/specification/index.html>. When configuring a physical infrastructure, it is strongly advised to follow the Pharos configuration guidelines.

Scenarios

OPNFV scenarios are designed to host virtualised network functions (VNFs) in a variety of deployment architectures and locations. Each scenario provides specific capabilities and/or components aimed at solving specific problems for the deployment of VNFs. A scenario may, for instance, include components such as OpenStack, OpenDaylight, OVS, KVM etc., where each scenario will include different source components or configurations.

To learn more about the scenarios supported in the Colorado release refer to the scenario description documents provided:

  • Accelerated OVS os-nosdn-ovs <http://artifacts.opnfv.org/ovsnfv/colorado/2.0/docs/scenarios_os-nosdn-ovs/index.html>
  • Layer 3 overlay using OpenDaylight os-odl-l2-bgpvpn <http://artifacts.opnfv.org/sdnvpn/colorado/2.0/docs/scenarios_os-odl_l2-bgpvpn/index.html>
  • FD.io based forwarding using OpenDaylight SDN os-odl-l2-fdio-noha <http://artifacts.opnfv.org/fds/colorado/2.0/scenarios_os-odl_l2-fdio-noha/index.html>
  • High availability service function chaining os-odl-l2-sfc-ha <http://artifacts.opnfv.org/sfc/colorado/2.0/docs/scenarios_os-odl_l2-sfc-ha/index.html>
  • Service function chaining os-odl-l2-sfc-noha <http://artifacts.opnfv.org/sfc/colorado/2.0/docs/scenarios_os-odl_l2-sfc-noha/index.html>
  • Accelerated KVM hypervisor os-nosdn-kvm-ha <http://artifacts.opnfv.org/kvmfornfv/colorado/2.0/docs/scenarios_os-nosdn-kvm-ha/index.html>
  • LXD container hypervisor os-nosdn-lxd-noha <http://artifacts.opnfv.org/joid/colorado/2.0/docs/scenarios_os-nosdn-lxd-noha/index.html>
  • High Availability LXD container hypervisor os-nosdn-lxd-ha <http://artifacts.opnfv.org/joid/colorado/2.0/docs/scenarios_os-nosdn-lxd-ha/index.html>

Installation Procedure

Detailed step-by-step instructions for working with an installation toolchain and installing the required scenario are provided by each installation project. The four projects providing installation support for the OPNFV Colorado release are: Apex, Compass4nfv, Fuel and Joid.

The instructions for each toolchain can be found in these links:

  • Apex installation instruction
  • Compass4nfv installation instruction
  • Fuel installation instruction
  • Joid installation instruction

OPNFV Test Frameworks

If you have elected to install the OPNFV platform using the deployment toolchain provided by OPNFV, your system will have been validated once the installation is completed. The basic deployment validation only addresses a small portion of the capability provided by the platform, and you may wish to execute more exhaustive tests. Some investigation is required to select the right test suites to run on your platform from the available projects and suites.

Many of the OPNFV test projects provide user guide documentation and installation instructions, as listed below:

  • Functest user guide
  • Yardstick user guide
  • vSwitchPerf user guide
  • Software Fastpath Service Quality Metrics (SFQM) user guide
  • Bottlenecks user guide
  • Storage Performance Benchmarking for NFVI (StorPerf) user guide

Security Notes

The following patches were applied to fix security issues discovered in OPNFV projects during the Colorado release cycle.

  • Removal of private keys
  • Fix security issues of eval-s in testapi
  • Implements use of yaml.safe_load
  • Fix security issues reported by the security audit
  • Fix issues found in security review
  • Removing OpenSteak Project
  • Remove unsed files in open-contrail role
  • Get rid of private key in repo
  • Handling file loads and tmp dirs differently
  • Remove `Debug = True` when run Flask and add logger

OPNFV User Guide & Configuration Guide

Abstract

OPNFV is a collaborative project aimed at providing a variety of virtualization deployments intended to host applications serving the networking and carrier industry. This document provides guidance and instructions for using platform features designed to support these applications, made available in the OPNFV Colorado release.

This document is not intended to replace or replicate documentation from other open source projects such as OpenStack or OpenDaylight, but rather to highlight the features and capabilities delivered through the OPNFV project.

Introduction

OPNFV provides a suite of scenarios, infrastructure deployment options, which are able to be installed to host virtualized network functions (VNFs). This guide intends to help users of the platform leverage the features and capabilities delivered by the OPNFV project in support of these applications.

OPNFV Continuous Integration builds, deploys and tests combinations of virtual infrastructure components in what are defined as scenarios. A scenario may include components such as OpenStack, OpenDaylight, OVS, KVM etc., where each scenario will include different source components or configurations. Scenarios are designed to enable specific features and capabilities in the platform that can be leveraged by the OPNFV user community.

Feature Overview

The following links outline the feature deliveries from the participating OPNFV projects in the Colorado release. Each of the participating projects provides detailed descriptions of the delivered features, including use cases, implementation and configuration specifics, on a per-project basis.

The following configuration guides and user guides assume that the reader already has some information about a given project's specifics and deliverables. These guides are intended to be used following the installation of a given OPNFV installer, to allow a user to deploy and use the features delivered by OPNFV.

If you are unsure about the specifics of a given project, please refer to the OPNFV projects home page, found on http://wiki.opnfv.org, for specific details.

You can find project-specific usage and configuration information below:

Feature Configuration Guides

  • Armband Configuration Guide
  • Copper Configuration Guide
  • Doctor Configuration Guide
  • IPv6 Configuration Guide
  • KVMforNFV Configuration Guide
  • Moon Configuration Guide
  • Multisite Configuration Guide
  • ONOSFW Configuration Guide
  • OVSNFV Configuration Guide
  • Promise Configuration Guide
  • SFC Configuration Guide

Feature User Guides

  • Copper User Guide
  • Doctor User Guide
  • Domino User Guide
  • IPv6 User Guide
  • KVMforNFV User Guide
  • Moon User Guide
  • Multisite User Guide
  • ONOSFW User Guide
  • OVSNFV User Guide
  • Parser User Guide
  • Promise User Guide
  • SDNVPN User Guide
  • SFC User Guide

OPNFV Release Notes

Apex

Apex Release Notes

Armband

Armband Release Notes

Availability

No Release documentation

Bottlenecks

Bottlenecks Release Notes

Compass4nfv

Compass4nfv Release Notes

Conductor

Conductor Release Notes

Copper

Copper Release Notes

Daisy

Daisy Release Notes

Doctor

Doctor Release Notes

Domino

Domino Release Notes

Dovetail

Dovetail Release Notes

Dpacc

Dpacc Release Notes

Enfv

Enfv Release Notes

Escalator

Escalator Release Notes

Fastpathmetrics

Fastpathmetrics Release Notes

Fds

Fds Release Notes

Fuel

Fuel Release Notes

Functest

Functest Release Notes

Genesisreq

Genesisreq Release Notes

Inspector

Inspector Release Notes

Ipv6

Ipv6 Release Notes

Joid

Joid Release Notes

Kvmfornfv

Kvmfornfv Release Notes

Lsoapi

Lsoapi Release Notes

Moon

Moon Release Notes

Multisite

Multisite Release Notes

Netready

Netready Release Notes

Octopus

Octopus Release Notes

Onosfw

Onosfw Release Notes

Ovsnfv

Ovsnfv Release Notes

Parser

Parser Release Notes

Pharos

Pharos Release Notes

Prediction

Prediction Release Notes

Promise

Promise Release Notes

Qtip

Qtip Release Notes

Releng

Releng Release Notes

Sdnvpn

Sdnvpn Release Notes

Sfc

Sfc Release Notes

Storperf

Storperf Release Notes

Ves

Ves Release Notes

Vnf_Forwarding_Graph

Vnf_Forwarding_Graph Release Notes

Vswitchperf

Vswitchperf Release Notes

Yardstick

Yardstick Release Notes

Ovno

Ovno Release Notes

Snaps

Snaps Release Notes

Testing Documentation

Development Documentation

OPNFV Projects

ARMband

Barometer

Copper

Doctor

Domino

IPV6

KVM for NFV

Moon

Multisite

Netready

ONOS

OVSNFV

Parser

Pharos

Promise

SDNVPN

SFC

How to work with Documentation

Documentation Guide

This page intends to cover the documentation handling for OPNFV. OPNFV projects are expected to create a variety of document types, according to the nature of the project. Some of these are common to projects that develop/integrate features into the OPNFV platform, e.g. Installation Instructions and User/Configurations Guides. Other document types may be project-specific.

Getting Started with Documentation for Your Project

OPNFV documentation is automated and integrated into our git & gerrit toolchains.

We use RST document templates in our repositories and automatically render HTML and PDF versions of the documents to our artifact store. Our wiki is also able to integrate these rendered documents directly, allowing projects to use the revision-controlled documentation process for project information, content and deliverables. Read this page, which elaborates on how documentation is to be included within opnfvdocs.

Licensing your documentation

All contributions to the OPNFV project are made in accordance with the OPNFV licensing requirements. Documentation in OPNFV is contributed in accordance with the Creative Commons Attribution 4.0 International licence. All documentation files need to be licensed using this licence. The following example may be applied in the first lines of all contributed RST files:

.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) <optionally add copyright holder's name>

Alternatively, you may use the SPDX (https://spdx.org/) shorthand instead of the full Creative Commons URL.
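
A minimal sketch of such a header, using the SPDX identifier for the Creative Commons Attribution 4.0 licence (confirm the exact form your project expects):

.. SPDX-License-Identifier: CC-BY-4.0
.. (c) <optionally add copyright holder's name>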

How and where to store the document content files in your repository

All documentation for your project should be structured and stored in the <repo>/docs/ directory. The documentation toolchain looks in these directories and is triggered by events in them when generating documents.

Document structure and contribution

A general structure is proposed for storing and handling documents that are common across many projects, as well as documents that may be project specific. The documentation is divided into three areas: Release, Development and Testing. Templates for these areas can be found under opnfvdocs/docs/templates/.

Project teams are encouraged to use the templates provided by the opnfvdocs project to ensure that there is consistency across the community. The following representation shows the expected structure:

docs/
├── development
│   ├── design
│   ├── overview
│   └── requirements
├── release
│   ├── configguide
│   ├── installation
│   ├── release-notes
│   ├── scenarios
│   │   └── scenario.name
│   └── userguide
└── testing
    ├── developer
    └── user
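
To bootstrap this layout in a project repository, the following sketch creates the directories shown above (create only the folders your project actually needs; scenario directories are added per scenario):

# Create the documentation skeleton inside your project repository.
mkdir -p docs/release/{configguide,installation,release-notes,scenarios,userguide} \
         docs/development/{design,overview,requirements} \
         docs/testing/{developer,user}
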
Release documentation

Release documentation is the set of documents that are published for each OPNFV release. These documents are created and developed following the OPNFV release process and milestones and should reflect the content of the OPNFV release. These documents have a master index.rst file in the <opnfvdocs> repository and extract content from other repositories. To provide content into these documents, place your <content>.rst files in a directory in your repository that matches the master document and add a reference to that file in the correct place in the corresponding index.rst file in opnfvdocs/docs/release/.

Platform Overview: opnfvdocs/docs/release/overview

  • Note this document is not a contribution driven document
  • Content for this is prepared by the Marketing team together with the opnfvdocs team

Installation Instruction: <repo>/docs/release/installation

  • Folder for documents describing how to deploy each installer and scenario descriptions
  • Release notes will be included here <To Confirm>
  • Security related documents will be included here
  • Note that this document will be compiled into ‘OPNFV Installation Instruction’

User Guide: <repo>/docs/release/userguide

  • Folder for manuals to use specific features
  • Folder for documents describing how to install/configure project specific components and features
  • Can be the directory where the API reference for project-specific features is stored
  • Note this document will be compiled into ‘OPNFV userguide’

Configuration Guide: <repo>/docs/release/configguide

  • Brief introduction to configuring OPNFV with its dependencies.

Release Notes: <repo>/docs/release/release-notes

  • Changes brought about in the release cycle.
  • Include version details.

Testing documentation

Documentation created by test projects can be stored under two different sub-directories, /user or /developer. Release notes will be stored under <repo>/docs/release/release-notes

User documentation: <repo>/docs/testing/user/ collects the documentation of the test projects that allows the end user to perform testing against an OPNFV SUT, e.g. the Functest/Yardstick/Vsperf/Storperf/Bottlenecks/Qtip installation/configuration and user guides.

Development documentation: <repo>/docs/testing/developer/ collects documentation that explains how to create your own test case and leverage existing testing frameworks, e.g. developer guides.

Development Documentation

Project-specific documents such as design documentation, project overview or requirement documentation can be stored under /docs/development. Links to generated documents will be displayed under the Development Documentation section on docs.opnfv.org. You are encouraged to establish the following basic structure for your project as needed:

Requirement Documentation: <repo>/docs/development/requirements/

  • Folder for your requirement documentation
  • For details on requirements projects’ structures see the Requirements Projects page.

Design Documentation: <repo>/docs/development/design

  • Folder for your upstream design documents (blueprints, development proposals, etc.)

Project overview: <repo>/docs/development/overview

  • Folder for any project specific documentation.

Including your Documentation

In your project repository

Add your documentation to your repository in the folder structure and according to the templates listed above. The documentation templates you will require are available in the opnfvdocs/docs/templates/ directory; copy the relevant templates to the <repo>/docs/ directory in your repository. For instance, if you want to document the userguide, your steps would be as follows:

git clone ssh://<your_id>@gerrit.opnfv.org:29418/opnfvdocs.git
cp -p opnfvdocs/docs/userguide/* <my_repo>/docs/userguide/

You should then add the relevant information to the template to explain the documentation. When you are done writing, you can commit the documentation to the project repository.

git add .
git commit --signoff --all
git review

In OPNFVDocs Composite Documentation

In toctree

To import project documents from project repositories, we use submodules. Each project is stored as a submodule under opnfvdocs/docs/submodules/.
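
For reference, a submodule is added to opnfvdocs roughly as follows. This is normally handled by the opnfvdocs team, and the repository URL and path here are illustrative placeholders.

# From a clone of opnfvdocs; <project> is a placeholder for the project name.
cd opnfvdocs
git submodule add https://gerrit.opnfv.org/gerrit/<project> docs/submodules/<project>
git commit -s -m "Add <project> documentation submodule"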

To include your project-specific documentation in the composite documentation, first identify where your project documentation should be included. Say your project userguide should appear in the ‘OPNFV Userguide’, then:

vim opnfvdocs/docs/release/userguide.introduction.rst

This opens the text editor. Identify where you want to add the userguide. If the userguide is to be added to the toctree, simply include the path to it, for example:

.. toctree::
    :maxdepth: 1

    submodules/functest/docs/userguide/index
    submodules/bottlenecks/docs/userguide/index
    submodules/yardstick/docs/userguide/index
    <submodules/path-to-your-file>

‘doc8’ Validation

It is recommended that all RST content is validated against doc8 standards. To validate your RST files using doc8, install doc8.

sudo pip install doc8

doc8 can now be used to check the RST files. Execute it as:

doc8 --ignore D000,D001 <file>

Testing: Build Documentation Locally

Composite OPNFVDOCS documentation

To build the whole documentation under opnfvdocs/, follow these steps:

Install virtual environment.

sudo pip install virtualenv
cd /local/repo/path/to/project

Download the OPNFVDOCS repository.

git clone https://gerrit.opnfv.org/gerrit/opnfvdocs

Change directory to opnfvdocs & install requirements.

cd opnfvdocs
sudo pip install -r etc/requirements.txt

Update submodules, build documentation using tox & then open using any browser.

cd opnfvdocs
git submodule update --init
tox -edocs
firefox docs/_build/html/index.html

Note

Make sure to run tox -edocs and not just tox.

Individual project documentation

To test how the documentation renders in HTML, follow these steps:

Install virtual environment.

sudo pip install virtualenv
cd /local/repo/path/to/project

Download the opnfvdocs repository.

git clone https://gerrit.opnfv.org/gerrit/opnfvdocs

Change directory to opnfvdocs & install requirements.

cd opnfvdocs
sudo pip install -r etc/requirements.txt

Move the conf.py file to your project folder where RST files have been kept:

mv opnfvdocs/docs/conf.py <path-to-your-folder>/

Move the static files to your project folder:

mv opnfvdocs/_static/ <path-to-your-folder>/

Build the documentation from within your project folder:

sphinx-build -b html <path-to-your-folder> <path-to-output-folder>

Your documentation will be built as HTML inside the specified output folder.

Note

Be sure to remove the conf.py file, the _static/ files and the output folder from <project>/docs/. This is for testing only. Only commit the RST files and related content.