Using the test frameworks in OPNFV

Testing is one of the key activities in OPNFV. Validation can include component-level testing, system testing, automated deployment validation and performance characteristics testing.

The following sections outline how to use the test projects that deliver automated test suites and frameworks in the Brahmaputra release of OPNFV.

Overview of the test suites

Functest is the OPNFV project primarily targeting function testing. In the Continuous Integration pipeline, it is launched after an OPNFV fresh installation to validate and verify the basic functions of the infrastructure.

The current list of test suites can be grouped into 3 main domains:

+----------------+----------------+-------------------------------------------+
| Domain         | Test suite     | Comments                                  |
+================+================+===========================================+
|                | vPing          | NFV "Hello World" using SSH connection    |
|                |                | and floating IP                           |
|                +----------------+-------------------------------------------+
|    VIM         | vPing_userdata | Ping using userdata and cloud-init        |
|                |                | mechanism                                 |
|                +----------------+-------------------------------------------+
|(Virtualised    | Tempest        | OpenStack reference test suite `[2]`_     |
| Infrastructure +----------------+-------------------------------------------+
| Manager)       | Rally bench    | OpenStack testing tool benchmarking       |
|                |                | OpenStack modules `[3]`_                  |
+----------------+----------------+-------------------------------------------+
|                | OpenDaylight   | OpenDaylight test suite                   |
|                +----------------+-------------------------------------------+
| Controllers    | ONOS           | Test suite of ONOS L2 and L3 functions    |
|                +----------------+-------------------------------------------+
|                | OpenContrail   |                                           |
+----------------+----------------+-------------------------------------------+
| Features       | vIMS           | Example of a real VNF deployment to show  |
|                |                | the NFV capabilities of the platform.     |
|                |                | The IP Multimedia Subsystem is a typical  |
|                |                | Telco test case, referenced by ETSI.      |
|                |                | It provides a fully functional VoIP System|
|                +----------------+-------------------------------------------+
|                | Promise        | Resource reservation and management       |
|                |                | project to identify NFV related           |
|                |                | requirements and realize resource         |
|                |                | reservation for future usage by capacity  |
|                |                | management of resource pools regarding    |
|                |                | compute, network and storage.             |
|                +----------------+-------------------------------------------+
|                | SDNVPN         |                                           |
+----------------+----------------+-------------------------------------------+

Functest includes several test suites, each containing a number of test cases. Some of the tests are developed by Functest team members whereas others are integrated from upstream communities or other OPNFV projects. For example, Tempest is the OpenStack integration test suite and Functest is in charge of the selection, integration and automation of the tests that fit in OPNFV.

The Tempest suite has been customized but no new test cases have been created. Some OPNFV feature projects (e.g. SDNVPN) have written Tempest test cases and pushed them upstream to be used by Functest.

The results produced by the tests run from CI are pushed and collected in a NoSQL database. The goal is to populate the database with results from different sources and scenarios and to show them on a Dashboard.

There is no real notion of Test domain or Test coverage. Basic components (VIM, controllers) are tested through their own suites. Feature projects also provide their own test suites with different ways of running their tests.

The vIMS test case was integrated to demonstrate the capability to deploy a relatively complex NFV scenario on top of the OPNFV infrastructure.

Functest considers OPNFV as a black box. OPNFV, since the Brahmaputra release, offers lots of potential combinations:

  • 3 controllers (OpenDaylight, ONOS, OpenContrail)
  • 4 installers (Apex, Compass, Fuel, Joid)

Most of the tests are runnable on any combination, but some others might have restrictions imposed by the installers or the available deployed features.

Executing the functest suites

Manual testing

Once the Functest docker container is running and the Functest environment has been prepared (through the /home/opnfv/repos/functest/docker/prepare_env.sh script), the system is ready to run the tests.

The script run_tests.sh launches the tests in an automated way. Although it is possible to execute the different tests manually, it is recommended to use this shell script, which calls the actual test scripts with the appropriate parameters.

It is located in $repos_dir/functest/docker and it has several options:

./run_tests.sh -h
Script to trigger the tests automatically.

usage:
    bash run_tests.sh [-h|--help] [-r|--report] [-n|--no-clean] [-t|--test <test_name>]

where:
    -h|--help         show this help text
    -r|--report       push results to database (false by default)
    -n|--no-clean     do not clean up OpenStack resources after test run
    -s|--serial       run tests in one thread
    -t|--test         run specific set of tests
      <test_name>     one or more of the following separated by comma:
                         vping_ssh,vping_userdata,odl,rally,tempest,vims,onos,promise,ovno

examples:
    run_tests.sh
    run_tests.sh --test vping,odl
    run_tests.sh -t tempest,rally --no-clean

The -r option is used by the OPNFV Continuous Integration automation mechanisms in order to push the test results into the NoSQL results collection database. This database is read-only for a regular user, since pushing data requires special rights and special conditions.

The -t option can be used to specify a list of desired tests to be launched. By default Functest will launch all the test suites in the following order: vPing, Tempest, vIMS, Rally.

A single test or a set of tests may be launched at once using -t <test_name>, specifying the test name or names separated by commas from the following list: [vping,vping_userdata,odl,rally,tempest,vims,onos,promise].

The -n option is used to preserve all the OpenStack resources created by the tests after their execution.

Please note that Functest includes a cleaning mechanism to remove all the VIM resources except those that were present before running any test. The script $repos_dir/functest/testcases/VIM/OpenStack/CI/libraries/generate_defaults.py is called once by prepare_env.sh when setting up the Functest environment, to snapshot all the OpenStack resources (images, networks, volumes, security groups, tenants, users) so that an eventual cleanup does not remove any of these defaults.

The -s option forces execution of test cases in a single thread. Currently this option affects Tempest test cases only and can be used e.g. for troubleshooting concurrency problems.

The script $repos_dir/functest/testcases/VIM/OpenStack/CI/libraries/clean_openstack.py is normally called after a test execution if the -n flag is not specified. It is in charge of cleaning the OpenStack resources that are not listed in the defaults file generated previously, which is stored in /home/opnfv/functest/conf/os_defaults.yaml in the docker container.

It is important to mention that any new OpenStack resources created manually after preparing the Functest environment will be removed if this flag is not specified in the run_tests.sh command. This cleanup mechanism is included in Functest because some test suites, such as Tempest or Rally, create a lot of resources (users, tenants, networks, volumes etc.) that are not always properly cleaned up, so this cleaning function keeps the system as clean as it was before a full Functest execution.

Within the Tempest test suite it is possible to define which test cases to execute by editing the test_list.txt file before executing the run_tests.sh script. This file is located at $repos_dir/functest/testcases/VIM/OpenStack/CI/custom_tests/test_list.txt
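
As a sketch of this step, a custom list could be written as follows. The two test names below are illustrative examples only, not a recommended subset, and $repos_dir is assumed to be set as in the rest of this guide:

```shell
# Write an illustrative two-case selection into the custom Tempest list.
# $repos_dir falls back to /tmp here only so the sketch is self-contained.
TEST_LIST=${repos_dir:-/tmp}/functest/testcases/VIM/OpenStack/CI/custom_tests/test_list.txt
mkdir -p "$(dirname "$TEST_LIST")"
cat > "$TEST_LIST" <<'EOF'
tempest.api.compute.servers.test_create_server
tempest.api.network.test_networks
EOF
```

After this, run_tests.sh -t tempest would restrict the Tempest run to the listed cases.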

Although run_tests.sh provides an easy way to run any test, it is possible to do a direct call to the desired test script. For example:

python $repos_dir/functest/testcases/vPing/vPing.py -d

Automated testing

As mentioned in `[1]`_, prepare_env.sh and run_tests.sh can be called within the container from Jenkins. There are 2 jobs that automate all the manual steps explained in the previous section. One job runs all the tests; the other allows testing suite by suite by specifying the test name. The user may use either job to execute the desired test suites.

One of the most challenging tasks in the Brahmaputra release is dealing with many scenarios and installers. Thus, when the tests are automatically started from CI, a basic algorithm detects whether a given test is runnable on the given scenario. Some Functest test suites cannot be systematically run (e.g. the ODL suite cannot be run on an ONOS scenario).

CI provides some useful information passed to the container as environment variables:

  • Installer (apex|compass|fuel|joid), stored in INSTALLER_TYPE
  • Installer IP of the engine or VM running the actual deployment, stored in INSTALLER_IP
  • The scenario [controller]-[feature]-[mode], stored in DEPLOY_SCENARIO with
    • controller = (odl|onos|ocl|nosdn)
    • feature = (ovs(dpdk)|kvm)
    • mode = (ha|noha)
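
A CI job would therefore export something like the following before starting the container (all values here are illustrative, not taken from a real deployment):

```shell
# Illustrative values only; a real CI job sets these according to the
# actual installer and deployed scenario.
export INSTALLER_TYPE=fuel
export INSTALLER_IP=10.20.0.2
export DEPLOY_SCENARIO=odl-ovs-ha   # [controller]-[feature]-[mode]
```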

The constraints per test case are defined in the Functest configuration file /home/opnfv/functest/config/config_functest.yaml:

test-dependencies:
   functest:
       vims:
           scenario: '(ocl)|(odl)|(nosdn)'
       vping:
       vping_userdata:
           scenario: '(ocl)|(odl)|(nosdn)'
       tempest:
       rally:
       odl:
           scenario: 'odl'
       onos:
           scenario: 'onos'
       ....

At the end of the Functest environment creation (prepare_env.sh, see `[1]`_), a file /home/opnfv/functest/conf/testcase-list.txt is created with the list of all the runnable tests. Functest considers the static constraints as regular expressions and compares them with the given scenario name. For instance, the ODL suite can be run only on a scenario including 'odl' in its name.
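
The matching itself amounts to a regular-expression test, which can be sketched in plain shell (the scenario name is illustrative; the constraint is the vims entry from config_functest.yaml above):

```shell
# Check an illustrative scenario name against a constraint regex
# taken from config_functest.yaml.
SCENARIO=odl-ovs-ha
CONSTRAINT='(ocl)|(odl)|(nosdn)'
if echo "$SCENARIO" | grep -qE "$CONSTRAINT"; then
    echo "vims is runnable on $SCENARIO"
fi
```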

The order of execution is also described in the Functest configuration file:

test_exec_priority:

   1: vping_ssh
   2: vping_userdata
   3: tempest
   4: odl
   5: onos
   6: ovno
   7: doctor
   8: promise
   9: odl-vpnservice
   10: bgpvpn
   11: openstack-neutron-bgpvpn-api-extension-tests
   12: vims
   13: rally

The tests are executed in the following order:

  • Basic scenario (vPing_ssh, vPing_userdata, Tempest)
  • Controller suites: ODL or ONOS or OpenContrail
  • Feature projects (promise, vIMS)
  • Rally (benchmark scenario)

As explained before, at the end of an automated execution, the OpenStack resources created by the tests are removed (unless the -n flag was specified).

Getting Started with ‘vsperf’

VSPERF requires a traffic generator to run tests. Automated traffic generator support in VSPERF includes:

  • IXIA traffic generator (IxNetwork hardware) and a machine that runs the IXIA client software.
  • Spirent traffic generator (TestCenter hardware chassis or TestCenter virtual in a VM) and a VM to run the Spirent Virtual Deployment Service image, formerly known as “Spirent LabServer”.

If you want to use another traffic generator, please select the Dummy generator option as shown in the Traffic generator instructions.

To see the supported Operating Systems, vSwitches and system requirements, please follow the installation instructions.

Follow the Traffic generator instructions to install and configure a suitable traffic generator.

In order to run VSPERF, you will need to download DPDK and OVS. You can do this manually and build them in a preferred location, or you can use vswitchperf/src. The vswitchperf/src directory contains makefiles that will clone and build the libraries that VSPERF depends on, such as DPDK and OVS. To clone and build, simply:

$ cd src
$ make

VSPERF can be used with stock OVS (without DPDK support). When the build is finished, the libraries are stored in the src_vanilla directory.

The ‘make’ builds all options in src:

  • Vanilla OVS
  • OVS with vhost_user as the guest access method (with DPDK support)
  • OVS with vhost_cuse as the guest access method (with DPDK support)

The vhost_user build will reside in src/ovs/. The vhost_cuse build will reside in vswitchperf/src_cuse. The Vanilla OVS build will reside in vswitchperf/src_vanilla.

To delete a src subdirectory and its contents so you can re-clone, simply use:

$ make clobber

The 10_custom.conf file is the configuration file that overrides default configurations in all the other configuration files in ./conf. The supplied 10_custom.conf file MUST be modified, as it contains configuration items for which there are no reasonable default values.

The configuration items that can be added are not limited to the initial contents. Any configuration item mentioned in any .conf file in the ./conf directory can be added, and that item will be overridden by the custom configuration value.

If your 10_custom.conf doesn't reside in the ./conf directory, or if you want to use an alternative configuration file, the file can be passed to vsperf via the --conf-file argument.

$ ./vsperf --conf-file <path_to_custom_conf> ...

Note that configuration passed in via the environment (--load-env) or via another command line argument will override both the default and your custom configuration files. This “priority hierarchy” can be described like so (1 = max priority):

  1. Command line arguments
  2. Environment variables
  3. Configuration file(s)
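
As a toy illustration of this hierarchy (plain shell, not VSPERF code; the parameter name is made up for the example):

```shell
# Toy emulation of the precedence: CLI > environment > configuration file.
conf_duration=30            # value a .conf file would supply
env_duration=               # value --load-env would supply (unset here)
cli_duration=10             # value a command line argument would supply
duration=${cli_duration:-${env_duration:-$conf_duration}}
echo "$duration"   # prints 10: the command line value wins
```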

vsperf uses a VM called vloop_vnf for looping traffic in the PVP and PVVP deployment scenarios. The image can be downloaded from http://artifacts.opnfv.org/.

$ wget http://artifacts.opnfv.org/vswitchperf/vloop-vnf-ubuntu-14.04_20151216.qcow2

vloop_vnf forwards traffic through a VM using one of:

  • DPDK testpmd
  • Linux Bridge
  • l2fwd kernel module

Alternatively you can use your own QEMU image.

l2fwd is a kernel module that provides OSI Layer 2 IPv4 termination or forwarding with support for Destination Network Address Translation (DNAT) for both the MAC and IP addresses. l2fwd can be found in <vswitchperf_dir>/src/l2fwd

Before running any tests make sure you have root permissions by adding the following line to /etc/sudoers:

username ALL=(ALL)       NOPASSWD: ALL

username in the example above should be replaced with a real username.

To list the available tests:

$ ./vsperf --list

To run a single test:

$ ./vsperf $TESTNAME

Where $TESTNAME is the name of the vsperf test you would like to run.

To run a group of tests, for example all tests with a name containing ‘RFC2544’:

$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf --tests="RFC2544"

To run all tests:

$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf

Some tests allow for configurable parameters, including test duration (in seconds) as well as packet sizes (in bytes).

$ ./vsperf --conf-file user_settings.py
    --tests RFC2544Tput
    --test-param "duration=10;pkt_sizes=128"

For all available options, check out the help dialog:

$ ./vsperf --help

  1. If needed, recompile src for all OVS variants
$ cd src
$ make distclean
$ make

2. Update your 10_custom.conf file to use the appropriate variables for Vanilla OVS:

VSWITCH = 'OvsVanilla'
VSWITCH_VANILLA_PHY_PORT_NAMES = ['$PORT1', '$PORT2']

Where $PORT1 and $PORT2 are the Linux interfaces you’d like to bind to the vswitch.

  3. Run test:
$ ./vsperf --conf-file=<path_to_custom_conf>

Please note that if you don't want to configure Vanilla OVS through the configuration file, you can pass it as a CLI argument, but you must still set the ports.

$ ./vsperf --vswitch OvsVanilla

To run tests using vhost-user as guest access method:

  1. Set VHOST_METHOD and VNF in your settings file to:

VHOST_METHOD='user'
VNF = 'QemuDpdkVhost'

  2. If needed, recompile src for all OVS variants

$ cd src
$ make distclean
$ make

  3. Run test:
$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf

To run tests using vhost-cuse as guest access method:

  1. Set VHOST_METHOD and VNF in your settings file to:

VHOST_METHOD='cuse'
VNF = 'QemuDpdkVhostCuse'

  2. If needed, recompile src for all OVS variants

$ cd src
$ make distclean
$ make

  3. Run test:
$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf

To run tests using Vanilla OVS:

  1. Set the following variables:
VSWITCH = 'OvsVanilla'
VNF = 'QemuVirtioNet'

VANILLA_TGEN_PORT1_IP = n.n.n.n
VANILLA_TGEN_PORT1_MAC = nn:nn:nn:nn:nn:nn

VANILLA_TGEN_PORT2_IP = n.n.n.n
VANILLA_TGEN_PORT2_MAC = nn:nn:nn:nn:nn:nn

VANILLA_BRIDGE_IP = n.n.n.n

or use --test-param

$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
           --test-param "vanilla_tgen_tx_ip=n.n.n.n;
                         vanilla_tgen_tx_mac=nn:nn:nn:nn:nn:nn"

  2. If needed, recompile src for all OVS variants
$ cd src
$ make distclean
$ make

  3. Run test:
$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf

To select the loopback application that will perform traffic forwarding inside the VM, the following configuration parameter should be set:

GUEST_LOOPBACK = ['testpmd', 'testpmd']

or use --test-param

$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
      --test-param "guest_loopback=testpmd"

Supported loopback applications are:

'testpmd'       - testpmd from dpdk will be built and used
'l2fwd'         - l2fwd module provided by Huawei will be built and used
'linux_bridge'  - linux bridge will be configured
'buildin'       - nothing will be configured by vsperf; VM image must
                  ensure traffic forwarding between its interfaces

A guest loopback application must be configured, otherwise traffic will not be forwarded by the VM and test cases with PVP and PVVP deployments will fail. The guest loopback application is set to 'testpmd' by default.

To select the application that will perform packet forwarding, the following configuration parameters should be set:

VSWITCH = 'none'
PKTFWD = 'TestPMD'

or use --vswitch and --fwdapp

$ ./vsperf --conf-file user_settings.py
         --vswitch none
         --fwdapp TestPMD

Supported Packet Forwarding applications are:

'testpmd'       - testpmd from dpdk

1. Update your 10_custom.conf file to use the appropriate variables for the selected Packet Forwarder:

# testpmd configuration
TESTPMD_ARGS = []
# packet forwarding mode: io|mac|mac_retry|macswap|flowgen|rxonly|txonly|csum|icmpecho
TESTPMD_FWD_MODE = 'csum'
# checksum calculation layer: ip|udp|tcp|sctp|outer-ip
TESTPMD_CSUM_LAYER = 'ip'
# checksum calculation place: hw (hardware) | sw (software)
TESTPMD_CSUM_CALC = 'sw'
# recognize tunnel headers: on|off
TESTPMD_CSUM_PARSE_TUNNEL = 'off'

  2. Run test:
$ ./vsperf --conf-file <path_to_settings_py>

VSPERF can be run in different modes. By default it will configure the vSwitch, the traffic generator and the VNF. However, it can also be used just for configuration and execution of the traffic generator, or for execution of all components except the traffic generator itself.

The mode of operation is driven by the configuration parameter -m or --mode:

-m MODE, --mode MODE  vsperf mode of operation;
    Values:
        "normal" - execute vSwitch, VNF and traffic generator
        "trafficgen" - execute only traffic generator
        "trafficgen-off" - execute vSwitch and VNF

In case VSPERF is executed in "trafficgen" mode, the traffic generator should be configured through the --test-param option. Supported CLI options useful for traffic generator configuration are:

'traffic_type'  - One of the supported traffic types. E.g. rfc2544,
                  back2back or continuous
                  Default value is "rfc2544".
'bidirectional' - Specifies if generated traffic will be full-duplex (true)
                  or half-duplex (false)
                  Default value is "false".
'iload'         - Defines desired percentage of frame rate used during
                  continuous stream tests.
                  Default value is 100.
'multistream'   - Defines number of flows simulated by traffic generator.
                  Value 0 disables MultiStream feature
                  Default value is 0.
'stream_type'   - Stream Type is an extension of the "MultiStream" feature.
                  If MultiStream is disabled, then Stream Type will be
                  ignored. Stream Type defines ISO OSI network layer used
                  for simulation of multiple streams.
                  Default value is "L4".

Example of execution of VSPERF in “trafficgen” mode:

$ ./vsperf -m trafficgen --trafficgen IxNet --conf-file vsperf.conf
    --test-params "traffic_type=continuous;bidirectional=True;iload=60"

Every developer participating in the VSPERF project should run pylint before their Python code is submitted for review. Project-specific configuration for pylint is available in 'pylint.rc'.

Example of manual pylint invocation:

$ pylint --rcfile ./pylintrc ./vsperf

If you encounter the following error with the PVP or PVVP deployment scenario: "-path=/dev/hugepages,share=on: unable to map backing store for hugepages: Cannot allocate memory", check the amount of hugepages on your system:

$ cat /proc/meminfo | grep HugePages

By default the vswitchd is launched with 1 GB of memory. To change this, modify the --socket-mem parameter in conf/02_vswitch.conf to allocate an appropriate amount of memory:

VSWITCHD_DPDK_ARGS = ['-c', '0x4', '-n', '4', '--socket-mem 1024,0']
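
For reference, the number of hugepages that a given socket-mem value requires is simple arithmetic. The sketch below assumes a 2 MB hugepage size (check Hugepagesize in /proc/meminfo for your system):

```shell
# 1024 MB of --socket-mem divided by an assumed 2 MB hugepage size:
socket_mem_mb=1024
hugepage_size_mb=2
pages_needed=$(( socket_mem_mb / hugepage_size_mb ))
echo "$pages_needed"   # prints 512
```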