QTIP User Guide

Overview

QTIP is the project for Platform Performance Benchmarking in OPNFV. It aims to provide users with a simple performance indicator that is nevertheless backed by comprehensive testing data and a transparent calculation formula.

QTIP introduces a concept called QPI, a.k.a. QTIP Performance Index, which aims to be a TRUE indicator of performance. TRUE reflects the core values of QPI in four aspects:

  • Transparent: being an open source project, users can inspect all details behind a QPI, e.g. formulas, metrics and raw data
  • Reliable: the integrity of a QPI is guaranteed by traceability of each step back to the raw test results
  • Understandable: a QPI is broken down into section scores and workload scores in the report to help users understand it
  • Extensible: users may create their own QPI by composing the existing metrics in QTIP or by adding new metrics

Benchmarks

The built-in benchmarks of QTIP are located in the <package_root>/benchmarks folder:

  • QPI: specifications of how a QPI is calculated and the sources of its metrics
  • metric: performance metrics referred to in QPIs, currently categorized by performance testing tool
  • plan: executable benchmarking plans which collect metrics and calculate QPIs
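
A sketch of the expected layout, assuming the subfolder names match the categories above (exact contents vary by QTIP release):

<package_root>/benchmarks/
├── QPI/      # QPI specifications
├── metric/   # metric definitions, grouped by testing tool
└── plan/     # executable benchmarking plans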

Getting started with QTIP

Installation

Refer to `installation and configuration guide`_ for details

Create

Create a new project to hold the necessary configurations and test results

qtip create <project_name>

The user will be prompted for the OPNFV installer, its hostname, etc.

**Pod Name [unknown]: zte-pod1**
The user's choice of name for the OPNFV pod

**OPNFV Installer [manual]: fuel**
QTIP currently supports fuel and apex only

**Installer Hostname [dummy-host]: master**
The hostname of the fuel or apex installer node. If there are problems resolving the hostname via interactive input, the same hostname can be added to the **~/.ssh/config** file of the current user.
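
For example, a minimal **~/.ssh/config** entry might look like the sketch below; the IP address and user are placeholders for illustration, not values supplied by QTIP:

Host master
    HostName 192.0.2.10   # IP address of the fuel/apex installer node (placeholder)
    User root             # assumption: adjust to the account used on the installer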

**OPNFV Scenario [unknown]: os-nosdn-nofeature-ha**
Depends on the OPNFV scenario deployed

Setup

With the project created, the user should now proceed to set up the testing environment. In this step, SSH connections to the hosts in the SUT will be configured automatically:

cd <project_name>
qtip setup

Run

QTIP uses ssh-agent to authenticate the SSH connections to hosts in the SUT. It must be started correctly before running the tests:

eval $(ssh-agent)

Then run the tests with qtip run
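
Putting the run step together, a typical session might look like the sketch below; the ssh-add step and key path are assumptions about how the agent is normally loaded, not a fixed QTIP requirement:

eval $(ssh-agent)
ssh-add <path_to_private_key>   # assumption: the key that grants access to the SUT hosts
qtip run -v                     # -v for verbose output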

Teardown

Clean up the temporary folder on target hosts.

Note

The installed packages for testing won’t be uninstalled.

One more thing

You may use -v for verbose output (-vvv for more, -vvvv to enable connection debugging)

CLI User Manual

QTIP consists of a number of benchmarking tools or metrics, grouped under QPIs. QPIs map to the different components of an NFVI ecosystem, such as compute, network and storage. Depending on the type of application, a user may group them under plans.

Bash Command Completion

To enable command completion, an environment variable needs to be set. Add the following line to the .bashrc file:

eval "$(_QTIP_COMPLETE=source qtip)"

Getting help

The QTIP CLI provides an interface to all of the above components. A help page provides a list of all the commands along with a short description.

qtip [-h|--help]

Usage

Typically a complete plan is executed against the target environment. QTIP ships with a number of sample plans by default. A list of all the available plans can be viewed with:

qtip plan list

To view the details of a specific plan:

qtip plan show <plan_name>

where plan_name is one of those listed by the previous command.

To execute a complete plan:

qtip plan run <plan_name> -p <path_to_result_directory>

QTIP does not limit result storage to a specific directory. Instead, users may specify their own result location as above. An important thing to remember is to provide the absolute path of the result directory.

mkdir result
qtip plan run <plan_name> -p $PWD/result

Similarly, the same commands can be used for the other two components making up the plans, i.e. QPIs and metrics. For example, to run a single metric:

qtip metric run <metric_name> -p $PWD/result

The same can be applied for a QPI.
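
For instance, assuming the qpi subcommand follows the same pattern as plan and metric (as the text above implies), a single QPI could be run with:

qtip qpi run <qpi_name> -p $PWD/result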

QTIP also provides a utility to view benchmarking results on the console. One just needs to provide the path where the results are stored. Extending the example above:

qtip report show <metric_name> -p $PWD/result

Debugging options

The debug option helps identify errors by providing a detailed traceback. It can be enabled as follows:

qtip [-d|--debug] plan run <plan_name>

API User Manual

QTIP consists of a number of benchmarking tools or metrics, grouped under QPIs. QPIs map to the different components of an NFVI ecosystem, such as compute, network and storage. Depending on the type of application, a user may group them under plans.

The QTIP API provides a RESTful interface to all of the above components. Users can retrieve the list of plans, QPIs and metrics, as well as their individual details.

Running

After installing QTIP, the API server can be run using the command qtip-api on the local machine.

All the resources and their corresponding operation details can be seen at /v1.0/ui.

The whole API specification in json format can be seen at /v1.0/swagger.json.

The data models are given below:

  • Plan
  • Metric
  • QPI

Plan:

{
  "name": <plan name>,
  "description": <plan profile>,
  "info": <{plan info}>,
  "config": <{plan configuration}>,
  "QPIs": <[list of qpis]>,
},

Metric:

{
  "name": <metric name>,
  "description": <metric description>,
  "links": <[links with metric information]>,
  "workloads": <[cpu workloads(single_cpu, multi_cpu]>,
},

QPI:

{
  "name": <qpi name>,
  "description": <qpi description>,
  "formula": <formula>,
  "sections": <[list of sections with different metrics and formulaes]>,
}

The API can be described as follows:

Plans:

Method   Path                  Description
GET      /v1.0/plans           Get the list of all plans
GET      /v1.0/plans/{name}    Get details of the specified plan

Metrics:

Method   Path                    Description
GET      /v1.0/metrics           Get the list of all metrics
GET      /v1.0/metrics/{name}    Get details of specified metric

QPIs:

Method   Path                 Description
GET      /v1.0/qpis           Get the list of all QPIs
GET      /v1.0/qpis/{name}    Get details of specified QPI

Note:
Running the API with the connexion CLI does not require the base path (/v1.0/) in the URL.
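
For example, assuming the API server started by qtip-api is reachable at <host>:<port>, the plan resources can be queried with curl:

curl http://<host>:<port>/v1.0/plans          # list all plans
curl http://<host>:<port>/v1.0/plans/<name>   # details of a specific plan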

Web Portal User Manual

QTIP consists of different tools (metrics) to benchmark the NFVI. These metrics fall under different NFVI subsystems (QPIs) such as compute, storage and network. QTIP benchmarking tasks are built upon Ansible playbooks and roles. The QTIP web portal is a platform that exposes QTIP as a benchmarking service hosted on a central host.

Running

After setting up the web portal as instructed in the config guide, cd into the web directory and run:

python manage.py runserver 0.0.0.0

You can access the portal by logging onto <host>:8000/bench/login/

If you want to use port 80, you may need sudo permission.

sudo python manage.py runserver 0.0.0.0:80

To deploy on WSGI, use the Django deployment tutorial.

Features

After logging in, you'll be redirected to the QTIP-Web Dashboard. You'll see the following menus on the left:

  • Repos
  • Run Benchmarks
  • Tasks

Repos

Repos are links to QTIP workspaces. This menu lists all the added repos. Links to new repos can be added here.

Run Benchmarks

To run a benchmark, select the corresponding repo and run. QTIP Benchmarking service will clone the workspace and run the benchmarks. Inventories used are predefined in the workspace repo in the /hosts/ config file.

Tasks

All running or completed benchmark jobs can be seen in Tasks menu with their status.

New users can be added by the admin in the Django admin app by logging into /admin/.

Compute Performance Benchmarking

The compute QPI aims to benchmark the compute components of an OPNFV platform. Such components include CPU performance and memory performance.

The compute QPI consists of both synthetic and application specific benchmarks to test compute components.

All the compute benchmarks can be run in the following scenario: on bare metal machines provisioned by an OPNFV installer (host machines)

Note: The compute benchmark suite contains relatively old benchmarks such as Dhrystone and Whetstone. The suite will be updated with better benchmarks such as Linbench for the OPNFV E release.

Getting started

Notice: All descriptions are based on the QTIP container.

Inventory File

QTIP uses Ansible to trigger the benchmark tests. Ansible uses an inventory file to determine what hosts to work against. QTIP can automatically generate an inventory file via the OPNFV installer. Users can also write their own inventory information into /home/opnfv/qtip/hosts. This file is just a text file containing a list of host IP addresses. For example:

[hosts]
10.20.0.11
10.20.0.12

QTIP Key Pair

QTIP uses an SSH key pair to connect to remote hosts. When users execute the compute QPI, QTIP will generate a key pair named QtipKey under /home/opnfv/qtip/ and push the public key to the remote hosts.

If the environment variable CI_DEBUG is set to true, users should delete the key manually. If CI_DEBUG is not set or set to false, QTIP will delete the key from the remote hosts before the execution ends. Please make sure the key is deleted from the remote hosts, otherwise it can introduce a security flaw.
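
If manual cleanup is needed, a sketch along these lines can be used; the file names follow the QtipKey naming above, while the remote login user and the sed pattern (which assumes the authorized_keys entry can be identified by a 'QtipKey' comment) are assumptions to be adjusted for the actual environment:

# remove the local key pair
rm -f /home/opnfv/qtip/QtipKey /home/opnfv/qtip/QtipKey.pub
# remove the matching public key from each remote host listed in the inventory
ssh <user>@10.20.0.11 "sed -i '/QtipKey/d' ~/.ssh/authorized_keys"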

Commands

In a QTIP container, you can run the compute QPI using the QTIP CLI:

mkdir result
qtip plan run <plan_name> -p $PWD/result

QTIP generates results in the $PWD/result directory; they are grouped under a timestamped directory name.

You can get more details from userguide/cli.rst.

Metrics

The benchmarks include:

Dhrystone 2.1

Dhrystone is a synthetic benchmark for measuring CPU performance. It uses integer calculations to evaluate CPU capabilities. Both single-CPU and multi-CPU performance are measured.

Dhrystone, however, is a dated benchmark and has some shortcomings. Written in C, it is a small program that doesn't test the CPU memory subsystem. Additionally, Dhrystone results can be affected by compiler optimizations and, in some cases, hardware configuration.

References: http://www.eembc.org/techlit/datasheets/dhrystone_wp.pdf

Whetstone

Whetstone is a synthetic benchmark to measure CPU floating point operation performance. Both single-CPU and multi-CPU performance are measured.

Like Dhrystone, Whetstone is a dated benchmark and has shortcomings.

References:

http://www.netlib.org/benchmark/whetstone.c

OpenSSL Speed

OpenSSL Speed can be used to benchmark compute performance of a machine. In QTIP, two OpenSSL Speed benchmarks are incorporated:

  1. RSA signatures/sec signed by a machine
  2. AES 128-bit encryption throughput for a machine for cipher block sizes
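
For reference, roughly equivalent standalone OpenSSL commands are sketched below; QTIP drives OpenSSL Speed through its own playbooks, so the exact options it passes may differ:

openssl speed rsa                # RSA sign/verify operations per second
openssl speed -evp aes-128-cbc   # AES-128 throughput across cipher block sizes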

References:

https://www.openssl.org/docs/manmaster/apps/speed.html

RAMSpeed

RAMSpeed is used to measure a machine's memory performance. The problem (array) size is large enough to ensure cache misses so that the main machine memory is used.

INTmem and FLOATmem benchmarks are executed in 4 different scenarios:

  1. Copy: a(i)=b(i)
  2. Add: a(i)=b(i)+c(i)
  3. Scale: a(i)=b(i)*d
  4. Triad: a(i)=b(i)+c(i)*d

INTmem uses integers in these four benchmarks whereas FLOATmem uses floating points for these benchmarks.

References:

http://alasir.com/software/ramspeed/

https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/W51a7ffcf4dfd_4b40_9d82_446ebc23c550/page/Untangling+memory+access+measurements

DPI

nDPI is a modified variant of OpenDPI, the open source deep packet inspection library, and is maintained by ntop. An example application called pcapreader has been developed and is available for use along with nDPI.

A sample .pcap file is passed to the pcapreader application. nDPI classifies traffic in the pcap file into different categories based on string matching. The pcapreader application provides a throughput number for the rate at which traffic was classified, indicating a machine's computational performance. The benchmark is run 10 times and an average of the obtained numbers is taken.

nDPI may provide inconsistent results and was added in Brahmaputra for experimental purposes.

References:

http://www.ntop.org/products/deep-packet-inspection/ndpi/

http://www.ntop.org/wp-content/uploads/2013/12/nDPI_QuickStartGuide.pdf