Bottlenecks User Guide

For each test suite, you can set up either a teststory or a testcase to run specific tests. A teststory groups several testcases as a set in one configuration file. You can invoke a teststory or a testcase through the Bottlenecks user interfaces. Details are given in the following sections.

Brief Introduction of the Test Suites in Project Releases

Brahmaputra:

  • Rubbos is introduced, which is an end-to-end NFVI performance tool.
  • The virtual switch test framework (VSTF) is also introduced, a test framework used for vSwitch performance tests.

Colorado:

  • Rubbos is refactored using Puppet, which eases the integration with several load generators (Client) and workers (Tomcat).
  • VSTF is refactored by extracting the test cases' configuration information.

Danube:

  • The POSCA testsuite is introduced to implement stress (factor), feature and tuning tests in a parametric manner.
  • Two testcases are developed and integrated into the community CI pipeline.
  • Rubbos and VSTF are no longer supported.

Euphrates:

  • Introduction of a simple monitoring module, i.e., Prometheus + Collectd + Node + Grafana, to monitor the system behavior when executing stress tests.
  • Support for VNF scale-up/scale-out tests to verify the NFVI's capability to adapt to resource consumption.
  • Extension of the life-cycle test to the data plane to validate the system's capability to handle concurrent network usage.
  • The testing framework is revised to support installer-agnostic testing.

These enhancements and test cases help end users gain a more comprehensive understanding of the SUT. Graphic reports of the system behavior, in addition to the test case results, are provided to indicate the confidence level of the SUT. The installer-agnostic testing framework allows end users to run stress tests adaptively over either open-source or commercial deployments.

Integration Description

Release       Integrated Installer   Supported Testsuite
Brahmaputra   Fuel                   Rubbos, VSTF
Colorado      Compass                Rubbos, VSTF
Danube        Compass                POSCA
Euphrates     Any                    POSCA

Test suite & Test case Description

Testsuite   Testcases
POSCA       posca_factor_ping
            posca_factor_system_bandwidth
            posca_factor_throughputs
            posca_feature_scaleup
            posca_feature_scaleout

For the test suites abandoned in previous Bottlenecks releases, please refer to http://docs.opnfv.org/en/stable-danube/submodules/bottlenecks/docs/testing/user/userguide/deprecated.html.

POSCA Testsuite Guide

POSCA Introduction

The POSCA (Parametric Bottlenecks Testing Catalogue) test suite classifies the bottleneck test cases and results into five categories. The results are then analyzed and bottlenecks are searched for among these categories.

The POSCA testsuite aims to locate bottlenecks in a parametric manner and to decouple the bottlenecks from the deployment requirements. The POSCA testsuite provides a user-friendly way to profile and understand the E2E system behavior and deployment requirements.

Goals of the POSCA testsuite:
  1. Automatically locate the bottlenecks in an iterative manner.
  2. Automatically generate the testing report for bottlenecks in different categories.
  3. Implement automated staging.
Scopes of the POSCA testsuite:
  1. Modeling, testing and test result analysis.
  2. Parameter selection and algorithms.
Test stories of the POSCA testsuite:
  1. Factor test (stress test): base test cases on which the feature and optimization tests depend.
  2. Feature test: test cases for features/scenarios.
  3. Optimization test: tests to tune system parameters.

The detailed workflow is illustrated below.

Preinstall Packages
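
The only preinstall step is docker-compose, which Bottlenecks needs on the host. The snippet below removes any stale binary and installs docker-compose 1.11.0; run it as root or with sudo, since it writes to /usr/local/bin.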

# Remove any previously installed docker-compose binary
if [ -e /usr/local/bin/docker-compose ]; then
    rm -f /usr/local/bin/docker-compose
fi
# Install docker-compose 1.11.0 for the current platform and make it executable
curl -L https://github.com/docker/compose/releases/download/1.11.0/docker-compose-$(uname -s)-$(uname -m) > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
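
To verify the installation, you can check the version afterwards:

docker-compose --version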

Run POSCA Locally

The preparation of the test environment, the installation of the testing tools, the execution of the tests and the reporting/analysis of the POSCA test suite are highly automated; only a few steps are needed to run it locally.

In Euphrates, Bottlenecks has modified its framework to support installer-agnostic testing, which means that test cases can be executed over different deployments.

Downloading Bottlenecks Software

mkdir /home/opnfv
cd /home/opnfv
git clone https://gerrit.opnfv.org/gerrit/bottlenecks
cd bottlenecks
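
If you want a released version instead of the latest code, you can check out the corresponding stable branch. Assuming the usual OPNFV branch naming, for Euphrates this would be:

git checkout stable/euphrates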

Preparing Python Virtual Environment

. pre_virt_env.sh

Preparing Configuration/Description Files

Put the OpenStack RC file (admin_rc.sh), the OpenStack CA certificate (os_cacert) and pod.yaml (the pod description file) in the /tmp directory. Edit admin_rc.sh and add the following line:

export OS_CACERT=/tmp/os_cacert
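
For reference, admin_rc.sh is a standard OpenStack credentials file. A minimal sketch is shown below; all values are placeholders for your deployment and the exact variable set depends on your Keystone version:

export OS_USERNAME=admin
export OS_PASSWORD=<admin_password>
export OS_AUTH_URL=http://<keystone_endpoint>:5000/v3
export OS_PROJECT_NAME=admin
export OS_CACERT=/tmp/os_cacert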

If you are using Compass, Fuel, Apex or JOID to deploy your OpenStack environment, you can use the following command to generate the required files.

bash ./utils/env_prepare/config_prepare.sh -i <installer> [--debug]

Note that if you execute the command above, admin_rc.sh and pod.yaml are created automatically in the /tmp directory, with the line export OS_CACERT=/tmp/os_cacert already added to admin_rc.sh.
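
For example, for a Compass-based deployment with debug output enabled:

bash ./utils/env_prepare/config_prepare.sh -i compass --debug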

Executing Specified Testcase

  1. Bottlenecks provides a CLI interface to run the tests, which is one of the most convenient ways since it is closer to natural language. A GUI with a REST API will be provided in a later update.
bottlenecks testcase|teststory run <testname>

For the *testcase* command, <testname> should be the same as the name of the test case configuration file located in testsuites/posca/testcase_cfg.
For stress tests in Danube/Euphrates, <testname> should be replaced by either *posca_factor_ping* or *posca_factor_system_bandwidth*.
For the *teststory* command, a user can specify the test cases to be executed by defining them in a teststory configuration file located in testsuites/posca/testsuite_story. An example named *posca_factor_test* is provided there.
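
For example, the following invocations run the ping stress test case and the example test story, respectively:

bottlenecks testcase run posca_factor_ping
bottlenecks teststory run posca_factor_test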
  2. There are two other ways to run test cases and test stories, described below.

    The first way is to use the shell script.

bash run_tests.sh [-h|--help] -s <testsuite>|-c <testcase>
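
For example, using the test case and test story named above (assuming run_tests.sh accepts the teststory name via -s):

bash run_tests.sh -c posca_factor_ping
bash run_tests.sh -s posca_factor_test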


The second way is to use the Python interpreter inside the Bottlenecks Docker container.

REPORT=False
opts="--privileged=true -id"
docker_volume="-v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp"
docker run $opts --name bottlenecks-load-master $docker_volume opnfv/bottlenecks:latest /bin/bash
sleep 5
POSCA_SCRIPT="/home/opnfv/bottlenecks/testsuites/posca"
docker exec bottlenecks-load-master python ${POSCA_SCRIPT}/../run_posca.py testcase|teststory <testname> ${REPORT}

Showing Report

Bottlenecks uses ELK to illustrate the testing results. Assuming the IP of the SUT (System Under Test) is denoted as ipaddr, the address of Kibana is http://[ipaddr]:5601; one can visit this address to see the visualizations. The address of Elasticsearch is http://[ipaddr]:9200; one can use any REST tool to access the testing data stored in Elasticsearch.
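
For example, a quick way to inspect the stored data from the command line, using the standard Elasticsearch REST API:

# List the available indices
curl -s "http://[ipaddr]:9200/_cat/indices?v"
# Fetch a few documents from a given index (replace <index_name> accordingly)
curl -s "http://[ipaddr]:9200/<index_name>/_search?size=5&pretty"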

Cleaning Up Environment

. rm_virt_env.sh

If you want to clean up the Docker containers created during the test, you can execute the additional command below.

bash run_tests.sh --cleanup

Note that you can also add the cleanup parameter when you run a test case; the environment will then be cleaned up automatically when the test completes.
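
For example, assuming the parameter can be appended to a test case run:

bash run_tests.sh -c posca_factor_ping --cleanup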

Run POSCA through Community CI

POSCA test cases are now run by the OPNFV CI. See https://build.opnfv.org for details of the build jobs. Each build job is set up to execute a single test case. The test results/logs are printed on the job web page and reported automatically to the community MongoDB. There are two ways to report the results.

  1. Report testing result by shell script
bash run_tests.sh [-h|--help] -s <testsuite>|-c <testcase> --report
  2. Report testing result by python interpreter
REPORT=True
opts="--privileged=true -id"
docker_volume="-v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp"
docker run $opts --name bottlenecks-load-master $docker_volume opnfv/bottlenecks:latest /bin/bash
sleep 5
POSCA_SCRIPT="/home/opnfv/bottlenecks/testsuites/posca"
docker exec bottlenecks-load-master python ${POSCA_SCRIPT}/../run_posca.py testcase|teststory <testname> ${REPORT}

Test Result Description

Dashboard Guide

Scope

This document provides an overview of the results of test cases developed by the OPNFV Bottlenecks Project, executed on OPNFV community labs.

The OPNFV CI (Continuous Integration) system provides automated build, deploy and testing for the software developed in OPNFV. Unless stated otherwise, the reported tests are automated via Jenkins jobs.

Test results are visible in the following dashboard:

  • Testing dashboard: uses MongoDB to store test results and Bitergia for visualization; it includes the Rubbos and VSTF test results.

Bottlenecks - Test Cases

POSCA Stress (Factor) Test of System Bandwidth

Test Case

Bottlenecks POSCA Stress Test Traffic
test case name   posca_factor_system_bandwidth
description      Stress test regarding the baseline of the system for a single user,
                 i.e., a VM pair, while increasing the packet size
configuration    config file:
                 testsuites/posca/testcase_cfg/posca_factor_system_bandwidth.yaml
                 stack number: 1
test result      packet loss rate, latency, throughput, CPU usage

Configuration

test_config:
  tool: netperf
  protocol: tcp
  test_time: 20
  tx_pkt_sizes: 64, 256, 1024, 4096, 8192, 16384, 32768, 65536
  rx_pkt_sizes: 64, 256, 1024, 4096, 8192, 16384, 32768, 65536
  cpu_load: 0.9
  latency: 100000
runner_config:
  dashboard: "y"
  dashboard_ip:
  stack_create: yardstick
  yardstick_test_ip:
  yardstick_test_dir: "samples"
  yardstick_testcase: "netperf_bottlenecks"

POSCA Stress (Factor) Test of Performance Life-Cycle

Test Case

Bottlenecks POSCA Stress Test Ping
test case name   posca_posca_ping
description      Stress test regarding life-cycle, using ping to validate the construction
                 of the VM pairs
configuration    config file:
                 testsuites/posca/testcase_cfg/posca_posca_ping.yaml
                 stack number: 5, 10, 20, 50 ...
test result      packet loss rate, success rate, test time, latency

Configuration

load_manager:
  scenarios:
    tool: ping
    test_times: 100
    package_size:
    num_stack: 5, 10, 20
    package_loss: 10%

  contexts:
    stack_create: yardstick
    flavor:
    yardstick_test_ip:
    yardstick_test_dir: "samples"
    yardstick_testcase: "ping_bottlenecks"

dashboard:
  dashboard: "y"
  dashboard_ip: