Using the test frameworks in OPNFV

Testing is one of the key activities in OPNFV. Validation can include component-level testing, system testing, automated deployment validation, and performance characteristics testing.

The following sections outline how to use the test projects that deliver automated test suites and frameworks in the Brahmaputra release of OPNFV.

Description of the test cases

Functest is an OPNFV project dedicated to functional testing. In continuous integration, it is launched after a fresh OPNFV installation. The Functest target is to verify the basic functions of the infrastructure.

Functest includes different test suites, each containing several test cases. Test cases are developed both in Functest and in feature projects.

The current list of test suites can be grouped into three main domains:

+----------------+----------------+-------------------------------------------+
| Domain         | Test suite     | Comments                                  |
+================+================+===========================================+
|                | vPing          | NFV "Hello World" using SSH connection    |
|                |                | and floating IP                           |
|                +----------------+-------------------------------------------+
|    VIM         | vPing_userdata | Ping using userdata and cloud-init        |
|                |                | mechanism                                 |
|                +----------------+-------------------------------------------+
|(Virtualised    | Tempest        | OpenStack reference test suite `[2]`_     |
| Infrastructure +----------------+-------------------------------------------+
| Manager)       | Rally scenario | OpenStack testing tool testing OpenStack  |
|                |                | modules `[3]`_                            |
+----------------+----------------+-------------------------------------------+
|                | OpenDaylight   | OpenDaylight test suite                   |
|                +----------------+-------------------------------------------+
| Controllers    | ONOS           | Test suite of ONOS L2 and L3 functions    |
|                +----------------+-------------------------------------------+
|                | OpenContrail   |                                           |
+----------------+----------------+-------------------------------------------+
| Features       | vIMS           | Demonstrates the capability to deploy a   |
|                |                | realistic NFV use case.                   |
|                |                | The IP Multimedia Subsystem is a typical  |
|                |                | Telco use case, referenced by ETSI.       |
|                |                | It provides a fully functional VoIP system|
|                +----------------+-------------------------------------------+
|                | Promise        | Resource reservation and management       |
|                |                | project to identify NFV related           |
|                |                | requirements and realize resource         |
|                |                | reservation for future usage by capacity  |
|                |                | management of resource pools regarding    |
|                |                | compute, network and storage.             |
|                +----------------+-------------------------------------------+
|                | SDNVPN         |                                           |
+----------------+----------------+-------------------------------------------+

Most of the test suites are developed upstream. For example, Tempest is the OpenStack integration test suite. Functest is in charge of integrating the different functional test suites.

The Tempest suite has been customized, but no new test cases have been created. Some OPNFV feature projects (e.g. SDNVPN) have created Tempest test cases and pushed them upstream.

The tests run from CI are pushed into a database. The goal is to populate the database with results and to show them on a Test Dashboard.

There is no real notion of Test domain or Test coverage yet. Basic components (VIM, controllers) are tested through their own suites. Feature projects also provide their own test suites.

The vIMS test case was integrated to demonstrate the capability to deploy a relatively complex NFV scenario on top of the OPNFV infrastructure.

Functest considers OPNFV as a black box. OPNFV, since Brahmaputra, offers lots of possible combinations:

  • 3 controllers (OpenDaylight, ONOS, OpenContrail)
  • 4 installers (Apex, Compass, Fuel, Joid)

However, most of the tests shall be runnable on any configuration.

Executing the functest suites

Manual testing

Once the Functest docker container is running and the Functest environment is ready (prepared through the /home/opnfv/repos/functest/docker/prepare_env.sh script), the system is ready to run the tests.

The script run_tests.sh is located in $repos_dir/functest/docker and it has several options:

./run_tests.sh -h
Script to trigger the tests automatically.

usage:
    bash run_tests.sh [--offline] [-h|--help] [-t <test_name>]

where:
    -h|--help         show this help text
    -r|--report       push results to database (false by default)
    -n|--no-clean     do not clean up OpenStack resources after test run
    -t|--test         run specific set of tests
      <test_name>     one or more of the following: vping,vping_userdata,odl,rally,tempest,vims,onos,promise. Separated by comma.

examples:
    run_tests.sh
    run_tests.sh --test vping,odl
    run_tests.sh -t tempest,rally --no-clean

The -r option is used by Continuous Integration in order to push the test results into a test collection database; see the next section for details. In manual mode, you must not use it: your attempt will most likely be rejected, as your POD must be declared in the database for the results to be collected.

The -n option is used to preserve all the existing OpenStack resources after the test cases have been executed.

The -t option can be used to specify the list of tests you want to launch; by default Functest will try to launch all of its test suites in the following order: vPing, odl, Tempest, vIMS, Rally. You may launch a single test by using -t <the test you want to launch>.

Within the Tempest test suite you can define which test cases to execute in your environment by editing the test_list.txt file before executing the run_tests.sh script.

Please note that Functest includes a cleaning mechanism in order to remove everything except what was present after a fresh install. If you create your own VMs, tenants, networks etc. and then launch Functest, they will all be deleted after the tests are executed. Use the --no-clean option of run_tests.sh in order to preserve all the existing resources. However, be aware that Tempest and Rally create a lot of resources (users, tenants, networks, volumes etc.) that are not always properly cleaned up, so this cleaning function has been added to keep the system as clean as possible after a full Functest run.

You may also add your own tests by adding a section to the run_test() function.

Automated testing

As mentioned in [1], prepare_env.sh and run_tests.sh can be executed within the container from Jenkins. Two jobs have been created: one to run all the tests, and one that allows running the test suites one by one. You thus just have to launch the appropriate Jenkins job on the target lab, and all the tests shall run automatically.

When the tests are automatically started from CI, a basic algorithm has been created in order to detect whether a test is runnable or not on the given scenario. In fact, one of the most challenging tasks in Brahmaputra consists in dealing with many scenarios and installers. Functest test suites cannot be systematically run (e.g. the ODL suite cannot be run on an ONOS scenario).

CI provides several pieces of information:

  • The installer (apex|compass|fuel|joid)
  • The scenario [controller]-[feature]-[mode] with
    • controller = (odl|onos|ocl|nosdn)
    • feature = (ovs(dpdk)|kvm)
    • mode = (ha|noha)

Constraints per test case are defined in the Functest configuration file /home/opnfv/functest/config/config_functest.yaml:

test-dependencies:
   functest:
       vims:
           scenario: '(ocl)|(odl)|(nosdn)'
       vping:
       vping_userdata:
           scenario: '(ocl)|(odl)|(nosdn)'
       tempest:
       rally:
       odl:
           scenario: 'odl'
       onos:
           scenario: 'onos'
       ....

At the end of the Functest environment creation (prepare_env.sh, see `[1]`_), a file (/home/opnfv/functest/conf/testcase-list.txt) is created with the list of all the runnable tests. The static constraints are treated as regular expressions and compared with the scenario name. For instance, odl can be run only on scenarios that include odl in their name.
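
The check described above can be sketched as follows; this is a minimal illustration with the constraint strings copied from the configuration excerpt above, not Functest's actual code:

```python
# Illustration of the runnability check: each static constraint is a
# regex matched against the scenario name. Tests without a constraint
# entry are always runnable.
import re

constraints = {
    'vims': '(ocl)|(odl)|(nosdn)',
    'odl': 'odl',
    'onos': 'onos',
}

def is_runnable(test, scenario):
    """Return True when the test's constraint regex (if any) matches
    somewhere in the scenario name."""
    pattern = constraints.get(test)
    return pattern is None or re.search(pattern, scenario) is not None

print(is_runnable('odl', 'odl-nofeature-ha'))    # True
print(is_runnable('odl', 'onos-nofeature-ha'))   # False
```

The scenario names used here are illustrative; the real scenario strings follow the [controller]-[feature]-[mode] convention described earlier.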

The order of execution is also described in the Functest configuration file:

test_exec_priority:

   1: vping
   2: vping_userdata
   3: tempest
   4: odl
   5: onos
   6: ovno
   7: doctor
   8: promise
   9: odl-vpnservice
   10: bgpvpn
   11: openstack-neutron-bgpvpn-api-extension-tests
   12: vims
   13: rally

The tests are executed in the following order:

  • Basic scenario (vPing, vPing_userdata, Tempest)
  • Controller suites: ODL or ONOS or OpenContrail
  • Feature projects (promise, vIMS)
  • Rally (benchmark scenario)
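
The ordering step itself is a simple sort; as a sketch (priorities copied from the test_exec_priority list above, example subset of runnable tests, not Functest's actual code):

```python
# Illustration only: sort the runnable subset of tests by their
# test_exec_priority value.
priority = {'vping': 1, 'vping_userdata': 2, 'tempest': 3, 'odl': 4,
            'onos': 5, 'promise': 8, 'vims': 12, 'rally': 13}

runnable = ['rally', 'vims', 'odl', 'vping']   # e.g. on an odl scenario
ordered = sorted(runnable, key=priority.get)
print(ordered)  # ['vping', 'odl', 'vims', 'rally']
```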

At the end of an automated execution, everything is cleaned. Before running Functest, a snapshot of the OpenStack configuration (users, tenants, networks, ...) is taken. After Functest, a clean mechanism is launched to delete everything that was not properly deleted, in order to restore the system to its state prior to the tests.
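
The snapshot-and-clean principle amounts to a set difference; a minimal sketch with made-up resource IDs (an illustration of the idea, not Functest's actual code):

```python
# Resources present before the run (the snapshot) and after it.
snapshot = {'net-mgmt', 'tenant-admin', 'user-admin'}      # before Functest
after_run = {'net-mgmt', 'tenant-admin', 'user-admin',
             'net-rally-1', 'user-tempest-9'}              # after Functest

# Anything absent from the snapshot was created by the tests
# and is scheduled for deletion.
to_delete = sorted(after_run - snapshot)
print(to_delete)  # ['net-rally-1', 'user-tempest-9']
```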

Getting Started with ‘vsperf’

VSPERF requires a traffic generator to run tests. Automated traffic generator support in VSPERF includes:

  • IXIA traffic generator (IxNetwork hardware) and a machine that runs the IXIA client software.
  • Spirent traffic generator (TestCenter hardware chassis or TestCenter virtual in a VM) and a VM to run the Spirent Virtual Deployment Service image, formerly known as “Spirent LabServer”.

If you want to use another traffic generator, please select the Dummy generator option as shown in the Traffic generator instructions.

To see the supported operating systems, vSwitches and system requirements, please follow the installation instructions.

Follow the Traffic generator instructions to install and configure a suitable traffic generator.

In order to run VSPERF, you will need to download DPDK and OVS. You can build them manually in a preferred location, or you can use vswitchperf/src. The vswitchperf/src directory contains makefiles that will clone and build the libraries that VSPERF depends on, such as DPDK and OVS. To clone and build, simply:

$ cd src
$ make

VSPERF can be used with stock OVS (without DPDK support). When the build is finished, the libraries are stored in the src_vanilla directory.

The ‘make’ builds all options in src:

  • Vanilla OVS
  • OVS with vhost_user as the guest access method (with DPDK support)
  • OVS with vhost_cuse as the guest access method (with DPDK support)

The vhost_user build will reside in src/ovs. The vhost_cuse build will reside in vswitchperf/src_cuse. The Vanilla OVS build will reside in vswitchperf/src_vanilla.

To delete a src subdirectory and its contents (to allow a re-clone), simply use:

$ make clobber

The 10_custom.conf file is the configuration file that overrides the default configuration in all the other configuration files in ./conf. The supplied 10_custom.conf file MUST be modified, as it contains configuration items for which there are no reasonable default values.

The configuration items that can be added are not limited to the initial contents. Any configuration item mentioned in any .conf file in the ./conf directory can be added, and its custom value will override the default.
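
As an illustrative sketch, a minimal 10_custom.conf might look like the fragment below; the values (and any item not shown elsewhere in this document) are assumptions to adapt to your environment:

```python
# Hypothetical 10_custom.conf sketch -- values are examples only.
# Select the traffic generator; the Dummy generator needs no hardware.
TRAFFICGEN = 'Dummy'
# Select the vswitch under test (same item as in the examples below).
VSWITCH = 'OvsVanilla'
```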

If your 10_custom.conf doesn't reside in the ./conf directory, or if you want to use an alternative configuration file, the file can be passed to vsperf via the --conf-file argument.

$ ./vsperf --conf-file <path_to_custom_conf> ...

Note that configuration passed in via the environment (--load-env) or via another command line argument will override both the default and your custom configuration files. This “priority hierarchy” can be described like so (1 = max priority):

  1. Command line arguments
  2. Environment variables
  3. Configuration file(s)
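
The hierarchy amounts to a layered merge; a minimal sketch of the principle (not VSPERF's actual implementation; the item values are illustrative):

```python
# Later merges have higher priority: conf files < environment < CLI.
conf_files = {'VSWITCH': 'OvsDpdkVhost', 'TRAFFICGEN': 'Dummy'}
environment = {'TRAFFICGEN': 'TestCenter'}      # e.g. via --load-env
cli_args = {'VSWITCH': 'OvsVanilla'}            # command line arguments

effective = {**conf_files, **environment, **cli_args}
print(effective)  # {'VSWITCH': 'OvsVanilla', 'TRAFFICGEN': 'TestCenter'}
```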

vsperf uses a VM called vloop_vnf for looping traffic in the PVP and PVVP deployment scenarios. The image can be downloaded from http://artifacts.opnfv.org/.

$ wget http://artifacts.opnfv.org/vswitchperf/vloop-vnf-ubuntu-14.04_20151216.qcow2

vloop_vnf forwards traffic through a VM using one of:

  • DPDK testpmd
  • Linux Bridge
  • l2fwd kernel module

Alternatively you can use your own QEMU image.

l2fwd is a kernel module that provides OSI Layer 2 IPv4 termination or forwarding, with support for Destination Network Address Translation (DNAT) of both MAC and IP addresses. l2fwd can be found in <vswitchperf_dir>/src/l2fwd.

Before running any tests make sure you have root permissions by adding the following line to /etc/sudoers:

username ALL=(ALL)       NOPASSWD: ALL

username in the example above should be replaced with a real username.

To list the available tests:

$ ./vsperf --list

To run a single test:

$ ./vsperf $TESTNAME

Where $TESTNAME is the name of the vsperf test you would like to run.

To run a group of tests, for example all tests with a name containing ‘RFC2544’:

$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf --tests="RFC2544"

To run all tests:

$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf

Some tests allow for configurable parameters, including test duration (in seconds) as well as packet sizes (in bytes).

$ ./vsperf --conf-file user_settings.py
    --tests RFC2544Tput
    --test-param "duration=10;pkt_sizes=128"

For all available options, check out the help dialog:

$ ./vsperf --help

  1. If needed, recompile src for all OVS variants:
$ cd src
$ make distclean
$ make

  2. Update your 10_custom.conf file to use the appropriate variables for Vanilla OVS:

VSWITCH = 'OvsVanilla'
VSWITCH_VANILLA_PHY_PORT_NAMES = ['$PORT1', '$PORT2']

Where $PORT1 and $PORT2 are the Linux interfaces you’d like to bind to the vswitch.

  3. Run test:
$ ./vsperf --conf-file=<path_to_custom_conf>

Please note that if you don't want to configure Vanilla OVS through the configuration file, you can pass it as a CLI argument; however, you must still set the ports.

$ ./vsperf --vswitch OvsVanilla

To run tests using vhost-user as the guest access method:

  1. Set VHOST_METHOD and VNF of your settings file to:
VHOST_METHOD='user'
VNF = 'QemuDpdkVhost'
  2. If needed, recompile src for all OVS variants:
$ cd src
$ make distclean
$ make
  3. Run test:
$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf

To run tests using vhost-cuse as the guest access method:

  1. Set VHOST_METHOD and VNF of your settings file to:
VHOST_METHOD='cuse'
VNF = 'QemuDpdkVhostCuse'
  2. If needed, recompile src for all OVS variants:
$ cd src
$ make distclean
$ make
  3. Run test:
$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf

To run tests using Vanilla OVS:

  1. Set the following variables:
VSWITCH = 'OvsVanilla'
VNF = 'QemuVirtioNet'

VANILLA_TGEN_PORT1_IP = n.n.n.n
VANILLA_TGEN_PORT1_MAC = nn:nn:nn:nn:nn:nn

VANILLA_TGEN_PORT2_IP = n.n.n.n
VANILLA_TGEN_PORT2_MAC = nn:nn:nn:nn:nn:nn

VANILLA_BRIDGE_IP = n.n.n.n

or use --test-param

./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
         --test-param "vanilla_tgen_tx_ip=n.n.n.n;
                       vanilla_tgen_tx_mac=nn:nn:nn:nn:nn:nn"
  2. If needed, recompile src for all OVS variants:
$ cd src
$ make distclean
$ make
  3. Run test:
$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf

To select the loopback application that will perform traffic forwarding inside the VM, the following configuration parameter should be set:

GUEST_LOOPBACK = ['testpmd', 'testpmd']

or use --test-param

$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
      --test-param "guest_loopback=testpmd"

Supported loopback applications are:

'testpmd'       - testpmd from dpdk will be built and used
'l2fwd'         - l2fwd module provided by Huawei will be built and used
'linux_bridge'  - linux bridge will be configured
'buildin'       - nothing will be configured by vsperf; VM image must
                  ensure traffic forwarding between its interfaces

A guest loopback application must be configured; otherwise traffic will not be forwarded by the VM, and test cases with PVP and PVVP deployments will fail. The guest loopback application is set to ‘testpmd’ by default.

Every developer participating in the VSPERF project should run pylint before their Python code is submitted for review. A project-specific configuration for pylint is available in ‘pylint.rc’.

Example of manual pylint invocation:

$ pylint --rcfile ./pylintrc ./vsperf

If you encounter the following error with the PVP or PVVP deployment scenario: “-path=/dev/hugepages,share=on: unable to map backing store for hugepages: Cannot allocate memory”, check the amount of hugepages on your system:

$ cat /proc/meminfo | grep HugePages

By default the vswitchd is launched with 1GB of memory. To change this, modify the --socket-mem parameter in conf/02_vswitch.conf to allocate an appropriate amount of memory:

VSWITCHD_DPDK_ARGS = ['-c', '0x4', '-n', '4', '--socket-mem 1024,0']