QTIP Developer Guide¶
Overview¶
QTIP uses Python as its primary programming language and builds the framework from the following packages:
Module | Package
---|---
api | Connexion - API first applications with OpenAPI/Swagger and Flask
cli | Click - the “Command Line Interface Creation Kit”
template | Jinja2 - a full featured template engine for Python
docs | Sphinx - a tool that makes it easy to create intelligent and beautiful documentation
testing | pytest - a mature full-featured Python testing tool that helps you write better programs
Source Code¶
The structure of the repository is based on the recommended sample in The Hitchhiker’s Guide to Python.
Path | Content
---|---
./benchmarks/ | builtin benchmark assets including plan, QPI and metrics
./contrib/ | independent project/plugin/code contributed to QTIP
./docker/ | configuration for building the Docker image for QTIP deployment
./docs/ | release notes, user and developer documentation, design proposals
./legacy/ | obsolete legacy code that is unmaintained but kept for reference
./opt/ | optional components, e.g. scripts to set up infrastructure services for QTIP
./qtip/ | the actual package
./tests/ | package functional and unit tests
./third-party/ | third party code included in the QTIP project
Coding Style¶
QTIP follows the OpenStack Style Guidelines for source code and commit messages.
Specifically, it is recommended to link each patch set with a JIRA issue. Put:
JIRA: QTIP-n
in the commit message to create an automatic link.
Testing¶
All testing related code is stored in ./tests/
Path | Content
---|---
./tests/data/ | data fixtures for testing
./tests/unit/ | unit tests for each module, following the same layout as ./qtip/
./conftest.py | pytest configuration in project scope
tox is used to automate the testing tasks:
cd <project_root>
pip install tox
tox
The test cases are written in pytest. You may run them selectively with:
pytest tests/unit/reporter
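For reference, this is the shape of a pytest-style unit test as it might appear under ./tests/unit/; the helper and its behaviour below are hypothetical, not actual QTIP code:

```python
# hypothetical example; QTIP's real unit tests live under tests/unit/
def format_score(score):
    # stand-in for a QTIP reporter helper that renders a score
    return 'QPI: {:.1f}'.format(score)


def test_format_score():
    # pytest discovers and runs functions named test_* automatically
    assert format_score(2500) == 'QPI: 2500.0'
```

Running `pytest` from the project root will collect and execute such functions without any boilerplate test classes.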
Branching¶
Stable branches are created when features are frozen for the next release. According to the OPNFV release milestone description, the stable branch window opens at MS6 and closes at MS7.
- Contact the gerrit admin <opnfv-helpdesk@rt.linuxfoundation.org> to create a branch for the project.
- Set up qtip jobs and docker jobs for the stable branch in releng.
- Follow the instructions for the stable branch.
NOTE: we do NOT create branches for feature development as in the popular GitHub Flow
Releasing¶
Tag the deliverables and write the release note.
Git repository¶
Follow the example in Git Tagging Instructions for Danube to tag the source code:
git fetch gerrit
git checkout stable/<release-name>
git tag -am "<release-version>" <release-version>
git push gerrit <release-version>
Docker image¶
- Log in to OPNFV Jenkins
- Go to the `qtip-docker-build-push-<release>`_ job and click “Build With Parameters”
- Fill in RELEASE_VERSION with the version number, not including the release name, e.g. 1.0
- Trigger a manual build
Python Package¶
QTIP is also available as a Python package. It is hosted on the Python Package Index (PyPI).
- Install twine with
pip install twine
- Build the distributions
python setup.py sdist bdist_wheel
- Upload the distributions built with
twine upload dist/*
NOTE: only package maintainers are permitted to upload the package versions.
Release note¶
Create the release note under qtip/docs/release/release-notes and update index.rst
Architecture¶
In Danube, QTIP releases its standalone mode, also known as solo:

The runner can be launched from the CLI (command line interface) or the API (application programming interface) and drives the testing jobs. The generated data, including raw performance data and the testing environment, is fed to the collector. Performance metrics are parsed from the raw data and used for QPI calculation. Then the benchmark report is rendered with the benchmarking results.
The execution can be detailed in the diagram below:

Framework¶
QTIP is built upon Ansible by extending modules, playbook roles and plugins.
Modules¶
QTIP creates dedicated modules to gather the slave node list and information from the installer master. See the embedded documentation in qtip/ansible_library/modules for details.
Plugins¶
Stored in qtip/ansible_library/plugins
Action plugins¶
Several action plugins have been created for test data post processing:
- collect - parse and collect metrics from raw test results like log files
- calculate - calculate score according to specification
- aggregate - aggregate calculated results from all hosts under test
Playbook roles¶
QTIP roles¶
- qtip - main qtip tasks
- qtip-common - common tasks required in QTIP
- qtip-workspace - generate a workspace for running benchmarks
The qtip roles should be included with a specified action and output directory, e.g.:
- { role: inxi, output: "{{ qtip_results }}/sysinfo", tags: [run, inxi, sysinfo] }
testing roles¶
Testing roles are organized by testing tools
- inxi - system information tool
- nDPI
- openssl
- ramspeed
- unixbench
supporting roles¶
- opnfv-testapi - report result to testapi
Tags¶
Tags are used to categorize the test tasks from different aspects.
- stages like run, collect, calculate, aggregate, report
- test tools like inxi, ndpi, etc.
- information or metrics like sysinfo, dpi, ssl

Use ansible-playbook run.yml --list-tags to list all tags, and ansible-playbook run.yml --list-tasks to list all tasks.

During development of post processing, you may skip the run stage to save time, e.g.

ansible-playbook run.yml --tags collect,calculate,aggregate
CLI - Command Line Interface¶
QTIP consists of different tools (metrics) to benchmark the NFVI. These metrics fall under different NFVI subsystems (QPIs) such as compute, storage and network. A plan consists of one or more QPIs, depending upon how the end user wants to measure performance. The CLI is designed to help the user execute benchmarks and view the respective scores.
Framework¶
QTIP CLI has been created using the Python package Click, the “Command Line Interface Creation Kit”. It has been chosen for a number of reasons: it presents the user with a very simple yet powerful API to build complex applications. One of its most striking features is command nesting.
As explained, QTIP consists of metrics, QPIs and plans. The CLI is designed to provide an interface to all these components. It is responsible for execution, as well as providing listings and details of each individual element making up these components.
Design¶
The CLI’s entry point extends Click’s built-in MultiCommand class. Two methods are overridden to provide custom configurations.
class QtipCli(click.MultiCommand):

    def list_commands(self, ctx):
        rv = []
        for filename in os.listdir(cmd_folder):
            if filename.endswith('.py') and \
               filename.startswith('cmd_'):
                rv.append(filename[4:-3])
        rv.sort()
        return rv

    def get_command(self, ctx, name):
        try:
            if sys.version_info[0] == 2:
                name = name.encode('ascii', 'replace')
            mod = __import__('qtip.cli.commands.cmd_' + name,
                             None, None, ['cli'])
        except ImportError:
            return
        return mod.cli
Commands and subcommands will then be loaded by the get_command method above.
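The same lazy-loading pattern can be exercised in isolation. The sketch below keeps commands in a dict instead of discovering them from a cmd_ folder; all names here are hypothetical and serve only to illustrate how MultiCommand dispatches:

```python
import click
from click.testing import CliRunner

# Commands held in a dict instead of being discovered from a cmd_ folder.
_COMMANDS = {
    'list': click.Command('list', callback=lambda: click.echo('plans: ...')),
}


class DemoCli(click.MultiCommand):
    def list_commands(self, ctx):
        return sorted(_COMMANDS)

    def get_command(self, ctx, name):
        # Returning None makes Click report an unknown command.
        return _COMMANDS.get(name)


cli = DemoCli(help='demo entry point')
result = CliRunner().invoke(cli, ['list'])
print(result.output.strip())
```

CliRunner invokes the CLI in-process, which is also how the unit tests under ./tests/ can exercise command dispatch without spawning a shell.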
Extending the Framework¶
The framework can easily be extended as per the user’s requirements. One example is overriding the builtin configurations with user defined ones. These can be written in a file, loaded via a Click Context and passed through to all the commands.
class Context:

    def __init__(self):
        self.config = ConfigParser.ConfigParser()
        self.config.read('path/to/configuration_file')

    def get_paths(self):
        paths = self.config.get('section', 'path')
        return paths
The above example loads configuration from user defined paths, which then need to be provided to the actual command definitions.
from qtip.cli.entry import Context

pass_context = click.make_pass_decorator(Context, ensure=False)

@cli.command('list', help='List the Plans')
@pass_context
def list(ctx):
    plans = Plan.list_all(ctx.paths())
    table = utils.table('Plans', plans)
    click.echo(table)
API - Application Programming Interface¶
QTIP consists of different tools (metrics) to benchmark the NFVI. These metrics fall under different NFVI subsystems (QPIs) such as compute, storage and network. A plan consists of one or more QPIs, depending upon how the end user wants to measure performance. The API is designed to expose a RESTful interface to the user for executing benchmarks and viewing the respective scores.
Framework¶
QTIP API has been created using the Python package Connexion. It has been chosen for a number of reasons. It follows the API First approach to create micro-services: first the API specification is defined from the client side perspective, followed by the implementation of the micro-service. It decouples the business logic from routing and resource mapping, making design and implementation cleaner.
It has two major components:
- API Specification
The API specification is defined in a yaml or json file. Connexion follows the Open API specification to determine the design and maps the endpoints to methods in Python.
- Micro-service Implementation
Connexion maps the operationId corresponding to every operation in the API specification to a method in Python which handles requests and responses.
As explained, QTIP consists of metrics, QPIs and plans. The API is designed to provide a RESTful interface to all these components. It is responsible for providing listings and details of each individual element making up these components.
Design¶
Specification¶
The API’s entry point (main) runs a connexion App class object after adding the API specification using the App.add_api method. It loads the specification from the swagger.yaml file by specifying specification_dir.

Connexion reads the API’s endpoints (paths), operations, their request and response parameter details and response definitions from the API specification, i.e. swagger.yaml in this case.

The following example demonstrates the specification for the resource plans.
paths:
  /plans/{name}:
    get:
      summary: Get a plan by plan name
      operationId: qtip.api.controllers.plan.get_plan
      tags:
        - Plan
        - Standalone
      parameters:
        - name: name
          in: path
          description: Plan name
          required: true
          type: string
      responses:
        200:
          description: Plan information
          schema:
            $ref: '#/definitions/Plan'
        404:
          description: Plan not found
          schema:
            $ref: '#/definitions/Error'
        501:
          description: Resource not implemented
          schema:
            $ref: '#/definitions/Error'
        default:
          description: Unexpected error
          schema:
            $ref: '#/definitions/Error'

definitions:
  Plan:
    type: object
    required:
      - name
    properties:
      name:
        type: string
      description:
        type: string
      info:
        type: object
      config:
        type: object
Every operationId in the above operations corresponds to a method in the controllers. QTIP has three controller modules, one each for plan, QPI and metric. Connexion will read these mappings and automatically route endpoints to the business logic.

The Swagger Editor can be explored to play with more such examples and to validate the specification.
Controllers¶
The request is handled through these methods and response is sent back to the client. Connexion takes care of data validation.
@common.check_endpoint_for_error(resource='Plan')
def get_plan(name):
    plan_spec = plan.Plan(name)
    return plan_spec.content
In the above code, get_plan takes a plan name and returns its content.
The decorator check_endpoint_for_error defined in common is used to handle errors and return a suitable error response.
During development the server can be run by passing the specification file (swagger.yaml in this case) to the connexion CLI:

connexion run <path_to_specification_file> -v
Extending the Framework¶
Modifying Existing API:¶
The API can be modified by adding entries in swagger.yaml and adding the corresponding controller mapped from the operationId.

- Adding endpoints:

New endpoints can be defined in the paths section in swagger.yaml. To add a new resource dummy:

paths:
  /dummies:
    get:
      summary: Get all dummies
      operationId: qtip.api.controllers.dummy.get_dummies
      tags:
        - dummy
      responses:
        200:
          description: Foo information
          schema:
            $ref: '#/definitions/Dummy'
        default:
          description: Unexpected error
          schema:
            $ref: '#/definitions/Error'

And then the model of the resource can be defined in the definitions section:

definitions:
  Dummy:
    type: object
    required:
      - name
    properties:
      name:
        type: string
      description:
        type: string
      id:
        type: string
- Adding controller methods:

Methods for handling requests and responses for every operation of the added endpoint can be implemented in the controllers.

In controllers.dummy:

def get_dummies():
    all_dummies = [<code to get all dummies>]
    return all_dummies, httplib.OK

- Adding error responses:

Decorators for handling errors are defined in common.py in api:

from qtip.api import common

@common.check_endpoint_for_error(resource='dummy', operation='get')
def get_dummies():
    all_dummies = [<code to get all dummies>]
    return all_dummies
Adding new API:¶
The API can easily be extended by adding more APIs to the Connexion.App class object using the add_api class method.

In __main__:

def get_app():
    app = connexion.App(__name__, specification_dir=swagger_dir)
    app.add_api('swagger.yaml',
                base_path='/v1.0',
                strict_validation=True)
    return app

Extending it to add new APIs, where the new API should have all endpoints mapped using operationId:

from qtip.api import __main__

my_app = __main__.get_app()
my_app.add_api('new_api.yaml', base_path='/api2', strict_validation=True)
my_app.run(host="0.0.0.0", port=5000)
Compute QPI¶
The compute QPI gives the user an overall score for system compute performance.
Summary¶
The compute QPI is calibrated using a ZTE E9000 server as a baseline, with a score of 2500 points. Higher scores are better, with double the score indicating double the performance. The compute QPI provides three different kinds of scores:
- Workload Scores
- Section Scores
- Compute QPI Scores
Baseline¶
A ZTE E9000 server with two deca-core Intel Xeon processors and 128560.0 MB of memory.
Workload Scores¶
Each time a workload is executed, QTIP calculates a score based on the computer’s performance compared to the baseline performance.
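The exact arithmetic is not spelled out in this guide, but a sketch consistent with the linear scaling described above (“double the score indicating double the performance”) looks like this; the measurement numbers are illustrative assumptions:

```python
BASELINE_SCORE = 2500  # calibrated score of the baseline machine


def workload_score(measured, baseline_measured):
    # Linear scaling: doubling the measured performance relative to
    # the baseline doubles the score.
    return BASELINE_SCORE * measured / baseline_measured


# A machine twice as fast as the baseline on a workload scores 5000.
print(workload_score(34.6, 17.3))
```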
Section Scores¶
QTIP uses a number of different tests, or workloads, to measure performance. The workloads are divided into five different sections:
Section | Detail | Indication
---|---|---
Integer | Integer workloads measure the integer instruction performance of the host or VM by running the Dhrystone test. | All applications rely on integer performance.
Floating point | Floating point workloads measure floating point performance by running the Whetstone test. | Floating point performance is especially important in video games and digital content creation applications.
Memory | Memory workloads measure memory bandwidth by running the RamSpeed test. | Software working with large amounts of data relies on memory performance.
DPI | DPI workloads measure deep-packet inspection speed by running the nDPI test. | Software performing network packet analysis relies on DPI performance.
SSL | SSL workloads measure cipher speeds by using the OpenSSL tool. | Software encrypting large amounts of data relies on SSL performance.
A section score is the geometric mean of all the workload scores for workloads that are part of the section. These scores are useful for determining the performance of the computer in a particular area.
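As a quick sketch of that calculation (the scores below are made-up examples, not QTIP output):

```python
import math


def section_score(workload_scores):
    # Geometric mean of all workload scores in the section.
    product = math.prod(workload_scores)
    return product ** (1.0 / len(workload_scores))


# Two workload scores in one section; note the geometric mean (~2530)
# sits below the arithmetic mean (2600).
print(section_score([2000, 3200]))
```

The geometric mean is the conventional choice here because it dampens the effect of a single outlier workload on the section result.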
Compute QPI Scores¶
The compute QPI score is the weighted arithmetic mean of the five section scores. The compute QPI score provides a way to quickly compare performance across different computers and different platforms without getting bogged down in details.
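A minimal sketch of the final aggregation; the weights below are placeholders, as the actual weighting is not given in this guide:

```python
def compute_qpi(section_scores, weights):
    # Weighted arithmetic mean of the section scores.
    total = sum(s * w for s, w in zip(section_scores, weights))
    return total / sum(weights)


# With equal weights this reduces to a plain arithmetic mean of the
# five section scores (integer, floating point, memory, DPI, SSL).
print(compute_qpi([2400, 2600, 2500, 2300, 2700], [1, 1, 1, 1, 1]))
```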