QTIP User Guide

Overview

QTIP is the project for Platform Performance Benchmarking in OPNFV. It aims to provide users with a simple indicator of performance, one that is backed by comprehensive testing data and transparent calculation formulas.

QTIP introduces a concept called QPI, a.k.a. QTIP Performance Index, which aims to be a TRUE indicator of performance. TRUE reflects the core value of QPI in four aspects:

  • Transparent: being an open source project, users can inspect all details behind QPI, e.g. formulas, metrics and raw data
  • Reliable: the integrity of QPI is guaranteed by traceability of each step back to the raw test results
  • Understandable: QPI is broken down into section scores and workload scores in the report to help users understand it
  • Extensible: users may create their own QPI by composing the existing metrics in QTIP or adding new metrics

Benchmarks

The builtin benchmarks of QTIP are located in the <package_root>/benchmarks folder:

  • QPI: specifications of how a QPI is calculated and the sources of its metrics
  • metric: performance metrics referred to in QPIs, currently categorized by performance testing tool
  • plan: executable benchmarking plans which collect metrics and calculate QPIs

Run with Ansible

QTIP benchmarking tasks are built upon Ansible playbooks and roles. If you are familiar with Ansible, it is possible to run them with the ansible-playbook command.

Create workspace

There is a playbook in tests/integration used for creating a new workspace for QTIP benchmarking:

cd qtip/tests/integration
ansible-playbook workspace-create.yml

You will be prompted for the required information about the pod under test:

(qtip) ➜  integration git:(master) ✗ ansible-playbook workspace-create.yml
name of the pod under test (used in reporting) [qtip-pod]:
scenario deployed in the pod: [default]:
installer type of the pod (apex|fuel|other) [fuel]:
master host/vm of the installer (accessible by `ssh <hostname>`): f5
workspace name (new directory will be created) [workspace]:

PLAY [localhost] ***************************************************************

TASK [qtip-workspace : generating workspace] ***********************************

PLAY RECAP *********************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=0

You may hit Enter to accept the default values.

NOTE: if this playbook is moved to another directory, the configuration in ansible.cfg needs to be updated accordingly. The Ansible roles from QTIP, i.e. <path_of_qtip>/resources/ansible_roles, must be added to roles_path in the Ansible configuration file. For example:

roles_path = ../../resources/ansible_roles
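
A minimal ansible.cfg sketch showing where this setting lives (the path is the example value from above; adjust it to the actual location of the QTIP roles):

[defaults]
roles_path = ../../resources/ansible_roles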

Executing benchmark

Before executing the setup playbook, make sure ~/.ssh/config has been configured properly so that you can log in to the master node “directly”. Skip the next section if you can log in with ssh <master-host> from localhost.
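
A quick way to verify direct access (hostname is just an arbitrary remote command; <master-host> is the alias defined in ~/.ssh/config):

ssh <master-host> hostname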

SSH access to master node

It is common that the master node is behind a jump host. In this case, the ssh option ProxyCommand and ssh-agent are required.

Assume that you need to log in to the deploy server, then log in to the master node from there. An example configuration is as follows:

Host fuel-deploy
  HostName 172.50.0.250
  User root

Host fuel-master
  HostName 192.168.122.63
  User root
  ProxyCommand ssh -o 'ForwardAgent yes' fuel-deploy 'ssh-add && nc %h %p'

If several jumps are required to reach the master node, the jump hosts can be chained as below:

Host jumphost
  HostName 10.62.105.31
  User zte
  Port 22

Host fuel-deploy
  HostName 172.50.0.250
  User root
  ProxyJump jumphost

Host fuel-master
  HostName 192.168.122.63
  User root
  ProxyCommand ssh -o 'ForwardAgent yes' fuel-deploy 'ssh-add && nc %h %p'

NOTE: ProxyJump is equivalent to the long ProxyCommand option above, but it is only available since OpenSSH 7.3.

Setup testing environment

Run the setup playbook to generate the Ansible inventory of the system under test by querying the slave nodes from the installer master:

cd workspace
ansible-playbook setup.yml

Currently, QTIP supports automatic discovery from apex and fuel.

It will update the hosts file and ssh.cfg.
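
To verify that the discovered nodes are reachable, an ad-hoc ping can be used (a hedged example; it assumes the workspace's ansible.cfg picks up the generated hosts file and ssh.cfg):

ansible -i hosts all -m ping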

Run the tests

It is important to note that ssh-agent is required to run the tests. It must be started correctly to ensure execution:

eval $(ssh-agent)
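
Depending on your setup, you may also need to load the key used to reach the master node into the agent (a hedged example; the key path is an assumption):

ssh-add ~/.ssh/id_rsa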

The agent can be stopped afterwards with:

eval $(ssh-agent -k)

Run the benchmarks with the following command:

ansible-playbook run.yml

CAVEAT: QTIP will install required packages in the system under test.
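
To benchmark only a subset of the discovered nodes, Ansible's standard --limit option can be used (node-2 is a hypothetical host name from the generated inventory):

ansible-playbook run.yml --limit node-2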

Inspect the results

The test results and calculated output are stored in the results directory:

current/
    node-2/
        arithmetic/
            metric.json
            report
            unixbench.log
        dpi/
        ...
    node-4/
    ...
    qtip-pod-qpi.json
qtip-pod-20170425-1710/
qtip-pod-20170425-1914/
...

The folders are named <pod_name>-<start_time>/ and the results are organized by the hosts under test. Inside each host directory, the test data are organized by metric, as defined in the QPI specification.

For each metric, the results usually include the following content:

  • log file generated by the performance testing tool
  • metrics collected from the log file
  • report rendered with the collected metrics
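
To take a quick look at the aggregated scores, the QPI output can be pretty-printed with Python's built-in JSON tool (the file name follows the <pod_name>-qpi.json pattern shown above):

python -m json.tool current/qtip-pod-qpi.json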

Teardown the test environment

QTIP creates temporary files for testing in the system under test. Execute the teardown playbook to clean them up:

ansible-playbook teardown.yml

CLI User Manual

QTIP consists of a number of benchmarking tools or metrics, grouped under QPIs. QPIs map to the different components of an NFVI ecosystem, such as compute, network and storage. Depending on the type of application, a user may group them under plans.

QTIP CLI provides an interface to all of the above components. A help page provides a list of all the commands along with a short description:

qtip [-h|--help]

Typically, a complete plan is executed in the target environment. QTIP ships with a number of sample plans. A list of all the available plans can be viewed with:

qtip plan list

To view the details of a specific plan:

qtip plan show <plan_name>

where plan_name is one of those listed by the previous command.

To execute a complete plan:

qtip plan run <plan_name> -p <path_to_result_directory>

QTIP does not limit result storage to a specific directory; a user may specify their own result location as above. An important thing to remember is to provide the absolute path of the result directory:

mkdir result
qtip plan run <plan_name> -p $PWD/result

Similarly, the same commands can be used for the other two components making up the plans, i.e. QPIs and metrics. For example, to run a single metric:

qtip metric run <metric_name> -p $PWD/result

The same can be applied for a QPI.

QTIP also provides a utility to view benchmarking results on the console. One just needs to provide the path where the results are stored. Extending the example above:

qtip report show <metric_name> -p $PWD/result

The debug option helps identify errors by providing a detailed traceback. It can be enabled as:

qtip [-d|--debug] plan run <plan_name>

API User Manual

QTIP consists of a number of benchmarking tools or metrics, grouped under QPIs. QPIs map to the different components of an NFVI ecosystem, such as compute, network and storage. Depending on the type of application, a user may group them under plans.

QTIP API provides a RESTful interface to all of the above components. Users can retrieve the list of plans, QPIs and metrics, as well as their individual information.

Running

After installing QTIP, the API server can be run using the qtip-api command on the local machine.

All the resources and their corresponding operation details can be seen at /v1.0/ui.

The whole API specification in JSON format can be seen at /v1.0/swagger.json.

The data models are given below:

  • Plan
  • Metric
  • QPI

Plan:

{
  "name": <plan name>,
  "description": <plan profile>,
  "info": <{plan info}>,
  "config": <{plan configuration}>,
  "QPIs": <[list of qpis]>
}

Metric:

{
  "name": <metric name>,
  "description": <metric description>,
  "links": <[links with metric information]>,
  "workloads": <[cpu workloads (single_cpu, multi_cpu)]>
}

QPI:

{
  "name": <qpi name>,
  "description": <qpi description>,
  "formula": <formula>,
  "sections": <[list of sections with different metrics and formulas]>
}

The API can be described as follows:

Plans:

Method   Path                  Description
GET      /v1.0/plans           Get the list of all plans
GET      /v1.0/plans/{name}    Get details of the specified plan

Metrics:

Method   Path                  Description
GET      /v1.0/metrics         Get the list of all metrics
GET      /v1.0/metrics/{name}  Get details of the specified metric

QPIs:

Method   Path                  Description
GET      /v1.0/qpis            Get the list of all QPIs
GET      /v1.0/qpis/{name}     Get details of the specified QPI

Note: running the API with the connexion CLI does not require the base path (/v1.0/) in the URL.
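
For example, the endpoints can be queried with curl, assuming the API server is reachable at localhost on port 5000 (the port is an assumption; adjust it to your deployment):

curl http://localhost:5000/v1.0/plans
curl http://localhost:5000/v1.0/metrics/<metric_name>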

Compute Performance Benchmarking

The compute QPI aims to benchmark the compute components of an OPNFV platform. Such components include CPU performance and memory performance.

The compute QPI consists of both synthetic and application specific benchmarks to test compute components.

All the compute benchmarks can be run in the following scenario: on baremetal machines provisioned by an OPNFV installer (host machines).

Note: The compute benchmark suite contains relatively old benchmarks such as Dhrystone and Whetstone. The suite will be updated with better benchmarks, such as Linbench, for the OPNFV E release.

Getting started

Notice: All descriptions are based on the QTIP container.

Inventory File

QTIP uses Ansible to trigger benchmark tests. Ansible uses an inventory file to determine which hosts to work against. QTIP can automatically generate an inventory file via the OPNFV installer. Users can also write their own inventory information into /home/opnfv/qtip/hosts. This file is just a text file containing a list of host IP addresses. For example:

[hosts]
10.20.0.11
10.20.0.12
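
To check that Ansible can reach the listed hosts, an ad-hoc ping can be run against this inventory (a hedged example; the remote user root is an assumption):

ansible all -i /home/opnfv/qtip/hosts -m ping -u root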

QTIP Key Pair

QTIP uses an SSH key pair to connect to remote hosts. When users execute the compute QPI, QTIP will generate a key pair named QtipKey under /home/opnfv/qtip/ and pass the public key to the remote hosts.

If the environment variable CI_DEBUG is set to true, users should delete the key manually. If CI_DEBUG is not set or is set to false, QTIP will delete the key from the remote hosts before the execution ends. Please make sure the key is deleted from the remote hosts, otherwise it can introduce a security flaw.
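
A hedged sketch of manual cleanup on a remote host (10.20.0.11 is an example address from the inventory above; inspect authorized_keys first and adjust the pattern if the QTIP entry is identified differently):

# list the keys currently authorized on the node
ssh root@10.20.0.11 "cat ~/.ssh/authorized_keys"
# remove the QTIP entry, assuming its line mentions QtipKey (verify before running)
ssh root@10.20.0.11 "sed -i '/QtipKey/d' ~/.ssh/authorized_keys"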

Commands

In a QTIP container, you can run the compute QPI by using the QTIP CLI:

mkdir result
qtip plan run <plan_name> -p $PWD/result

QTIP generates results in the $PWD/result directory; they are stored under a timestamped directory name.

You can get more details from userguide/cli.rst.

Metrics

The benchmarks include:

Dhrystone 2.1

Dhrystone is a synthetic benchmark for measuring CPU performance. It uses integer calculations to evaluate CPU capabilities. Both single-CPU and multi-CPU performance are measured.

Dhrystone, however, is a dated benchmark and has some shortcomings. Written in C, it is a small program that does not test the CPU memory subsystem. Additionally, Dhrystone results can be skewed by compiler optimizations and, in some cases, by hardware configuration.
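
The unixbench.log in the results tree above suggests these scores are collected via UnixBench. A hedged sketch of the underlying invocation (run from the UnixBench source directory; dhry2reg is UnixBench's identifier for Dhrystone 2, and the Whetstone benchmark below uses the analogous whetstone-double test):

# one single-copy run and one run with a copy per CPU
./Run -c 1 -c "$(nproc)" dhry2reg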

References: http://www.eembc.org/techlit/datasheets/dhrystone_wp.pdf

Whetstone

Whetstone is a synthetic benchmark to measure CPU floating point operation performance. Both single-CPU and multi-CPU performance are measured.

Like Dhrystone, Whetstone is a dated benchmark and has shortcomings.

References:

http://www.netlib.org/benchmark/whetstone.c

OpenSSL Speed

OpenSSL Speed can be used to benchmark compute performance of a machine. In QTIP, two OpenSSL Speed benchmarks are incorporated:

  1. RSA signatures/sec signed by a machine
  2. AES 128-bit encryption throughput for a machine across cipher block sizes
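
An illustrative sketch of the underlying OpenSSL commands (rsa2048 and aes-128-cbc are example parameters; QTIP's exact invocation may differ):

# RSA sign/verify operations per second
openssl speed rsa2048
# AES-128 throughput for the standard set of block sizes
openssl speed -evp aes-128-cbc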

References:

https://www.openssl.org/docs/manmaster/apps/speed.html

RAMSpeed

RAMSpeed is used to measure a machine's memory performance. The problem (array) size is large enough to ensure cache misses so that the main machine memory is used.

INTmem and FLOATmem benchmarks are executed in 4 different scenarios:

  1. Copy: a(i)=b(i)
  2. Add: a(i)=b(i)+c(i)
  3. Scale: a(i)=b(i)*d
  4. Triad: a(i)=b(i)+c(i)*d

INTmem uses integers for these four scenarios whereas FLOATmem uses floating-point values.
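
A hedged sketch of running these with the multi-processor RAMspeed binary (the benchmark IDs follow the numbering in the RAMspeed documentation, where 3 selects INTmem and 6 selects FLOATmem; verify against your installed version):

# integer Copy/Add/Scale/Triad
./ramsmp -b 3
# floating-point Copy/Add/Scale/Triad
./ramsmp -b 6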

References:

http://alasir.com/software/ramspeed/

https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/W51a7ffcf4dfd_4b40_9d82_446ebc23c550/page/Untangling+memory+access+measurements

DPI

nDPI is a modified variant of OpenDPI (open source Deep Packet Inspection) and is maintained by ntop. An example application called pcapreader has been developed and is available for use along with nDPI.

A sample .pcap file is passed to the pcapreader application. nDPI classifies the traffic in the pcap file into different categories based on string matching. The pcapreader application reports the rate at which the traffic was classified, indicating a machine's computational performance. The test is run 10 times and the average of the obtained numbers is taken.
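
A hedged sketch of such a run (the example application ships as pcapReader in older nDPI releases and ndpiReader in newer ones; sample.pcap is a placeholder file name):

./pcapReader -i sample.pcap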

nDPI may provide inconsistent results and was added in Brahmaputra for experimental purposes.

References:

http://www.ntop.org/products/deep-packet-inspection/ndpi/

http://www.ntop.org/wp-content/uploads/2013/12/nDPI_QuickStartGuide.pdf