Post Configuration Activities

Once you have deployed and configured your scenario and features, you should validate the state of the system using the following guides.

Scenario validation activities

The following guides provide information on how to validate the installation of your scenario, based on the tools and test suites available for the installation tool you have selected:

IPv6 Post Installation Procedures

Congratulations, you have completed the setup of a service VM acting as an IPv6 vRouter, and you have validated the setup based on the instructions in the previous sections. If you want to test your setup further, you can ping6 among VM1, VM2, vRouter and ipv6-router.
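
For example, to check reachability from VM1 (the target below is a placeholder; substitute the actual IPv6 address of vRouter in your setup):

# run from a shell on VM1; <vRouter-IPv6-address> is hypothetical
ping6 -c 4 <vRouter-IPv6-address>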

This setup allows further open innovation by any third party. For more instructions and documentation, please refer to:

  1. IPv6 Configuration Guide (HTML): http://artifacts.opnfv.org/ipv6/docs/setupservicevm/index.html
  2. IPv6 User Guide (HTML): http://artifacts.opnfv.org/ipv6/docs/gapanalysis/index.html

Refer to the relevant testing guides, results, and release notes of the Yardstick project.

<Project> post installation procedures

Add a brief introduction to the methods of validating the installation according to this specific installer or feature.

Automated post installation activities

Describe specific post installation activities performed by the OPNFV deployment pipeline including testing activities and reports. Refer to the relevant testing guides, results, and release notes.

Note: this section should be singular and derived from the test projects once we have one test suite to run for all deploy tools. This is not the case yet, so each deploy tool will need to provide (hopefully very similar) documentation of this.

<Project> post configuration procedures

Describe any deploy tool or feature specific scripts, tests or procedures that should be carried out on the deployment post install and configuration in this section.

Platform components validation

Describe any component specific validation procedures necessary for your deployment tool in this section.

Feature validation activities

The following sections provide information on how to validate the features you have installed in your scenario:

Copper post installation procedures

This release focused on the use of the OpenStack Congress service for managing configuration policy. The Congress install verification procedure described here is largely manual. This procedure, as well as the longer-term goal of automated verification support, is a work in progress. The procedure is further specific to an environment based on one OPNFV installer (JOID, i.e. MAAS/Juju).

Automated post installation activities

No automated procedures are provided at this time.

Copper post configuration procedures

No configuration procedures are required beyond the basic install procedure.

Platform components validation

The following are notes on creating a container as a test driver for Congress. This is based upon an Ubuntu host as installed by JOID.

Create and Activate the Container

On the jumphost:

sudo lxc-create -n trusty-copper -t /usr/share/lxc/templates/lxc-ubuntu \
      -- -b ubuntu ~/opnfv
sudo lxc-start -n trusty-copper -d
sudo lxc-info --name trusty-copper
      (typical output)
Name:           trusty-copper
State:          RUNNING
PID:            4563
IP:             10.0.3.44
CPU use:        28.77 seconds
BlkIO use:      522.79 MiB
Memory use:     559.75 MiB
KMem use:       0 bytes
Link:           vethDMFOAN
 TX bytes:      2.62 MiB
 RX bytes:      88.48 MiB
 Total bytes:   91.10 MiB

Log in and configure the test server

ssh ubuntu@10.0.3.44
sudo apt-get update
sudo apt-get upgrade -y

# Install pip
sudo apt-get install python-pip -y

# Install java
sudo apt-get install default-jre -y

# Install other dependencies
sudo apt-get install git gcc python-dev libxml2 libxslt1-dev \
      libzip-dev php5-curl -y

# Setup OpenStack environment variables per your OPNFV install
export CONGRESS_HOST=192.168.10.117
export KEYSTONE_HOST=192.168.10.108
export CEILOMETER_HOST=192.168.10.105
export CINDER_HOST=192.168.10.101
export GLANCE_HOST=192.168.10.106
export HEAT_HOST=192.168.10.107
export NEUTRON_HOST=192.168.10.111
export NOVA_HOST=192.168.10.112
source ~/admin-openrc.sh

# Install and test OpenStack client
mkdir ~/git
cd ~/git
git clone https://github.com/openstack/python-openstackclient.git
cd python-openstackclient
git checkout stable/liberty
sudo pip install -r requirements.txt
sudo python setup.py install
openstack service list
      (typical output)
+----------------------------------+------------+----------------+
| ID                               | Name       | Type           |
+----------------------------------+------------+----------------+
| 2f8799ae50f24c928c021fabf8a50f5f | keystone   | identity       |
| 351b13f56d9a4e25849406ec1d5a2726 | cinder     | volume         |
| 5129510c3143454f9ba8ec7e6735e267 | cinderv2   | volumev2       |
| 5ee1e220460f41dea9be06921400ce9b | congress   | policy         |
| 78e73a7789a14f56a5d248a0cd141201 | quantum    | network        |
| 9d5a00fb475a45b2ae6767528299ed6b | ceilometer | metering       |
| 9e4b1624ef0b434abc0b82f607c5045c | heat       | orchestration  |
| b6c01ceb5023442d9f394b83f2a18e01 | heat-cfn   | cloudformation |
| ba6199e3505045ad87e2a7175bd0c57f | glance     | image          |
| d753f304a0d541dbb989780ae70328a8 | nova       | compute        |
+----------------------------------+------------+----------------+

# Install and test Congress client
cd ~/git
git clone https://github.com/openstack/python-congressclient.git
cd python-congressclient
git checkout stable/liberty
sudo pip install -r requirements.txt
sudo python setup.py install
openstack congress driver list
      (typical output)
+------------+--------------------------------------------------------------------------+
| id         | description                                                              |
+------------+--------------------------------------------------------------------------+
| ceilometer | Datasource driver that interfaces with ceilometer.                       |
| neutronv2  | Datasource driver that interfaces with OpenStack Networking aka Neutron. |
| nova       | Datasource driver that interfaces with OpenStack Compute aka nova.       |
| keystone   | Datasource driver that interfaces with keystone.                         |
| cinder     | Datasource driver that interfaces with OpenStack cinder.                 |
| glancev2   | Datasource driver that interfaces with OpenStack Images aka Glance.      |
+------------+--------------------------------------------------------------------------+

# Install and test Glance client
cd ~/git
git clone https://github.com/openstack/python-glanceclient.git
cd python-glanceclient
git checkout stable/liberty
sudo pip install -r requirements.txt
sudo python setup.py install
glance image-list
      (typical output)
+--------------------------------------+---------------------+
| ID                                   | Name                |
+--------------------------------------+---------------------+
| 6ce4433e-65c0-4cd8-958d-b06e30c76241 | cirros-0.3.3-x86_64 |
+--------------------------------------+---------------------+

# Install and test Neutron client
cd ~/git
git clone https://github.com/openstack/python-neutronclient.git
cd python-neutronclient
git checkout stable/liberty
sudo pip install -r requirements.txt
sudo python setup.py install
neutron net-list
      (typical output)
+--------------------------------------+----------+------------------------------------------------------+
| id                                   | name     | subnets                                              |
+--------------------------------------+----------+------------------------------------------------------+
| dc6227df-af41-439f-bd2c-c2c2f0fe7fc5 | public   | 5745846c-dd79-4900-a7da-bf506348ceac 192.168.10.0/24 |
| a3f9f13a-5de9-4d3b-98c8-d2e40a2ef8e9 | internal | 5e0be862-90da-44ab-af43-56d5c65aa049 10.0.0.0/24     |
+--------------------------------------+----------+------------------------------------------------------+

# Install and test Nova client
cd ~/git
git clone https://github.com/openstack/python-novaclient.git
cd python-novaclient
git checkout stable/liberty
sudo pip install -r requirements.txt
sudo python setup.py install
nova hypervisor-list
      (typical output)
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 1  | compute1.maas       | up    | enabled |
+----+---------------------+-------+---------+

# Install and test Keystone client
cd ~/git
git clone https://github.com/openstack/python-keystoneclient.git
cd python-keystoneclient
git checkout stable/liberty
sudo pip install -r requirements.txt
sudo python setup.py install
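
No test command is shown for the Keystone client itself; as a quick sanity check (a suggestion, not part of the original procedure), you can request a token using the OpenStack client installed earlier:

# verify that authentication against Keystone works
openstack token issue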

Set up the Congress Test Webapp

# Clone Copper (if not already cloned in user home)
cd ~/git
if [ ! -d ~/git/copper ]; then \
      git clone https://gerrit.opnfv.org/gerrit/copper; fi

# Copy the Apache config
sudo cp ~/git/copper/components/congress/test-webapp/www/ubuntu-apache2.conf \
      /etc/apache2/apache2.conf

# Point proxy.php to the Congress server per your install
sed -i -- "s/192.168.10.117/$CONGRESS_HOST/g" \
~/git/copper/components/congress/test-webapp/www/html/proxy/index.php

# Copy the webapp to the Apache root directory and fix permissions
sudo cp -R ~/git/copper/components/congress/test-webapp/www/html /var/www
sudo chmod 755 /var/www/html -R

# Make webapp log directory and set permissions
mkdir ~/logs
chmod 777 ~/logs

# Restart Apache
sudo service apache2 restart

Using the Test Webapp

Browse to the trusty-copper server IP address.
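
For a quick command-line check that the webapp is up (using the container IP 10.0.3.44 from the lxc-info output above; substitute your own):

# fetch only the HTTP response headers from the webapp
curl -I http://10.0.3.44/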

The interactive options are meant to be self-explanatory given a basic familiarity with the Congress service and data model. Additional features and UI elements will be added to the app over time.

Additional testing and validation activities

Many of our testing tools can be installed manually to facilitate targeted testing of the features and capabilities of your scenario. The following guides provide instructions on setting up these test suites:

Functional testing installation

Pull the Functest Docker image from Docker Hub:

$ docker pull opnfv/functest:brahmaputra.1.0

Check that the image is available:

$ docker images
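
If desired, you can narrow the listing to the Functest image:

$ docker images opnfv/functest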

Run the Docker container, providing the following environment variables:

- INSTALLER_TYPE: possible values are "apex", "compass", "fuel" or "joid".
- INSTALLER_IP: the IP address of the installer.

Each installer has its own installation strategy, so Functest may need to know the IP of the installer to retrieve the credentials (e.g. usually "10.20.0.2" for fuel; not needed for joid).

The minimal command to run the Functest Docker container is the following:

docker run -it -e "INSTALLER_IP=10.20.0.2" -e "INSTALLER_TYPE=fuel" opnfv/functest:brahmaputra.1.0 /bin/bash

Optionally, it is possible to specify the container name through the --name option:

docker run --name "CONTAINER_NAME" -it -e "INSTALLER_IP=10.20.0.2" -e "INSTALLER_TYPE=fuel" opnfv/functest:brahmaputra.1.0 /bin/bash

It is also possible to indicate the path of the OpenStack credentials file using -v:

docker run -it -e "INSTALLER_IP=10.20.0.2" -e "INSTALLER_TYPE=fuel" -v <path_to_your_local_creds_file>:/home/opnfv/functest/conf/openstack.creds opnfv/functest:brahmaputra.1.0 /bin/bash

The local file will be mounted in the container under /home/opnfv/functest/conf/openstack.creds.

After the run command, a prompt appears, which means that you are inside the container and ready to run Functest.

Inside the container, the following directory structure should be in place:

`-- home
    `-- opnfv
      |-- functest
      |   |-- conf
      |   |-- data
      |   `-- results
      `-- repos
          |-- bgpvpn
          |-- functest
          |-- odl_integration
          |-- rally
          |-- releng
          `-- vims-test

Basically the container includes:

  • Functest directory to store the configuration (the OpenStack credentials are pasted in /home/opnfv/functest/conf), the data (images needed for offline testing) and the results (some temporary artifacts may be stored here)
  • Repositories: the functest repository will be used to prepare the environment and run the tests. Other repositories are used for the installation of the tooling (e.g. rally) and/or the retrieval of feature project scenarios (e.g. bgpvpn)

The directory structure under the functest repository can be described as follows:

.
  |-- INFO
  |-- LICENSE
  |-- commons
  |   |-- ims
  |   |-- mobile
  |   `-- traffic-profile-guidelines.rst
  |-- docker
  |   |-- Dockerfile
  |   |-- common.sh
  |   |-- prepare_env.sh
  |   |-- requirements.pip
  |   `-- run_tests.sh
  |-- docs
  |   |-- configguide
  |   |-- functest.rst
  |   |-- images
  |   `-- userguide
  `-- testcases
      |-- Controllers
      |-- VIM
      |-- __init__.py
      |-- config_functest.py
      |-- config_functest.yaml
      |-- functest_utils.py
      |-- functest_utils.pyc
      |-- vIMS
      `-- vPing

We can distinguish four different folders:

  • commons: a folder dedicated to storing traffic profiles or any test inputs that could be reused by any test project
  • docker: this folder includes the scripts used to set up the environment and run the tests
  • docs: this folder includes the user guide and the installation/configuration guide
  • testcases: this folder includes the scripts required by the Functest internal test cases

First, run the script to install the Functest environment:

$ ${repos_dir}/functest/docker/prepare_env.sh

NOTE: ${repos_dir} is a default environment variable inside the Docker container; it points to /home/opnfv/repos
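
You can confirm this from inside the container:

$ echo ${repos_dir}
/home/opnfv/repos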

Run the script to start the tests:

$ ${repos_dir}/functest/docker/run_tests.sh
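
Once the tests have finished, any artifacts produced can be inspected under the results directory shown in the tree above:

$ ls /home/opnfv/functest/results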

Installing vswitchperf

Supported operating systems:

  • CentOS 7
  • Fedora 20
  • Fedora 21
  • Fedora 22
  • Ubuntu 14.04

The vSwitch must support OpenFlow 1.3 or greater. Supported vSwitches:

  • OVS (built from source).
  • OVS with DPDK (built from source).

Supported hypervisor:

  • QEMU version 2.3.
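
A quick way to check OpenFlow 1.3 support on a running OVS instance is to query a bridge with the OpenFlow 1.3 protocol explicitly (a sketch; the bridge name br0 is an assumption, substitute your own):

$ ovs-ofctl -O OpenFlow13 show br0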

A simple VNF that forwards traffic through a VM, using:

  • DPDK testpmd
  • Linux bridge
  • custom l2fwd module

The VM image can be downloaded from: http://artifacts.opnfv.org/vswitchperf/vloop-vnf-ubuntu-14.04_20151216.qcow2
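
For example, to fetch the image with wget:

$ wget http://artifacts.opnfv.org/vswitchperf/vloop-vnf-ubuntu-14.04_20151216.qcow2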

The test suite requires Python 3.3 and relies on a number of other packages. These need to be installed for the test suite to function.
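
You can check which Python 3 version is available on your system before proceeding:

$ python3 --version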

Installation of the required packages, preparation of the Python 3 virtual environment and compilation of OVS, DPDK and QEMU are performed by the script systems/build_base_machine.sh. It should be executed under the user account that will be used for vsperf execution.

Please note: password-less sudo access must be configured for the given user account before the script is executed.

Execution of installation script:

$ cd systems
$ ./build_base_machine.sh

Please note: you don't need to go into any of the systems subdirectories; simply run the top-level build_base_machine.sh and your OS will be detected automatically.

The build_base_machine.sh script will install all the vsperf dependencies in terms of system packages, Python 3.x and the required Python modules. In the case of CentOS 7 it will install Python 3.3 from an additional repository provided by Software Collections. The installation script will also use virtualenv to create a vsperf virtual environment, which is isolated from the default Python environment. This environment will reside in a directory called vsperfenv in $HOME.

You will need to activate the virtual environment every time you start a new shell session. Its activation is specific to your OS:

CentOS 7

$ scl enable python33 bash
$ cd $HOME/vsperfenv
$ source bin/activate

Fedora and Ubuntu

$ cd $HOME/vsperfenv
$ source bin/activate
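
In either case, you can leave the virtual environment at any time with the standard virtualenv command:

$ deactivate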

Working Behind a Proxy

If you’re behind a proxy, you’ll likely want to configure this before running any of the above. For example:

export http_proxy=proxy.mycompany.com:123
export https_proxy=proxy.mycompany.com:123
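
Depending on your environment, you may also need to exempt local addresses from the proxy (the exact list is site-specific):

export no_proxy=localhost,127.0.0.1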