Scenario Overview and Description

1. Scenario Abstract

This chapter provides a detailed explanation of the various scenario files deployed as part of the KVM4NFV Euphrates release.

1.1. Release Features

Scenario Name                    Colorado   Danube   Euphrates
os-nosdn-kvm-ha                  Y          Y
os-nosdn-kvm_ovs_dpdk-noha                  Y        Y
os-nosdn-kvm_ovs_dpdk-ha                    Y        Y
os-nosdn-kvm_ovs_dpdk_bar-noha              Y
os-nosdn-kvm_ovs_dpdk_bar-ha                Y

1.2. Euphrates Release Scenario Overview

Scenario Name                  No. of Controllers   No. of Computes   Plugin Names   DPDK   OVS
os-nosdn-kvm_ovs_dpdk-noha     1                    1                 KVM            Y      Y
os-nosdn-kvm_ovs_dpdk-ha       3                    2                 KVM            Y      Y

2. KVM4NFV Scenario-Description

2.1. Abstract

This document describes the procedure to deploy/test KVM4NFV scenarios in a nested virtualization environment. This has been verified with the os-nosdn-kvm-ha, os-nosdn-kvm-noha, os-nosdn-kvm_ovs_dpdk-ha, os-nosdn-kvm_ovs_dpdk-noha and os-nosdn-kvm_ovs_dpdk_bar-ha test scenarios.

2.2. Version Features

Release Features
Colorado
  • Scenario Testing feature was not part of the Colorado release of KVM4NFV
Danube
  • High Availability/No-High Availability deployment configuration of the KVM4NFV software suite using Fuel
  • A multi-node setup with 3 controller and 2 compute nodes is deployed for HA
  • A multi-node setup with 1 controller and 3 compute nodes is deployed for NO-HA
  • Scenarios os-nosdn-kvm_ovs_dpdk-ha, os-nosdn-kvm_ovs_dpdk_bar-ha, os-nosdn-kvm_ovs_dpdk-noha and os-nosdn-kvm_ovs_dpdk_bar-noha are supported
Euphrates
  • High Availability/No-High Availability deployment configuration of the KVM4NFV software suite using Apex
  • A multi-node setup with 3 controller and 2 compute nodes is deployed for HA
  • A multi-node setup with 1 controller and 1 compute node is deployed for NO-HA
  • Scenarios os-nosdn-kvm_ovs_dpdk-ha and os-nosdn-kvm_ovs_dpdk-noha are supported

2.3. Introduction

The purpose of testing the os-nosdn-kvm_ovs_dpdk-ha, os-nosdn-kvm_ovs_dpdk_bar-ha, os-nosdn-kvm_ovs_dpdk-noha and os-nosdn-kvm_ovs_dpdk_bar-noha scenarios is to verify the High Availability/No-High Availability deployment and configuration of the OPNFV software suite with OpenStack and without SDN software.

This OPNFV software suite includes the latest OPNFV KVM4NFV software packages (Linux kernel and QEMU patches) for achieving low latency, as well as OPNFV Barometer for traffic, performance and platform monitoring.

When using the Fuel installer, the High Availability feature is achieved by deploying an OpenStack multi-node setup with 1 Fuel Master, 3 controller and 2 compute nodes. The No-High Availability feature is achieved by deploying an OpenStack multi-node setup with 1 Fuel Master, 1 controller and 3 compute nodes.

When using the Apex installer, the High Availability feature is achieved by deploying an OpenStack multi-node setup with 1 undercloud, 3 overcloud controller and 2 overcloud compute nodes. The No-High Availability feature is achieved by deploying an OpenStack multi-node setup with 1 undercloud, 1 overcloud controller and 1 overcloud compute node.

KVM4NFV packages will be installed on the compute nodes as part of the deployment. The scenario testcase deploys a multi-node setup by using the OPNFV Fuel and Apex deployers.

2.4. System pre-requisites

  • RAM - Minimum 16GB
  • HARD DISK - Minimum 500GB
  • Linux OS installed and running
  • Nested Virtualization enabled, which can be checked by,
$ cat /sys/module/kvm_intel/parameters/nested
  Y

$ cat /proc/cpuinfo | grep vmx

Note: If nested virtualization is disabled, enable it as follows.

For Ubuntu:
$ modprobe kvm_intel
$ echo Y > /sys/module/kvm_intel/parameters/nested
$ sudo reboot
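
To make the setting persistent across reboots on Ubuntu as well, the module option can be recorded in a modprobe configuration file; a minimal sketch, mirroring the RHEL approach below (assumes an Intel CPU):

$ cat << EOF | sudo tee /etc/modprobe.d/kvm_intel.conf
options kvm-intel nested=1
EOF
$ sudo reboot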

For RHEL:
$ cat << EOF > /etc/modprobe.d/kvm_intel.conf
  options kvm-intel nested=1
  options kvm-intel enable_shadow_vmcs=1
  options kvm-intel enable_apicv=1
  options kvm-intel ept=1
  EOF
$ cat << EOF > /etc/sysctl.d/98-rp-filter.conf
  net.ipv4.conf.default.rp_filter = 0
  net.ipv4.conf.all.rp_filter = 0
  EOF
$ sudo reboot

2.5. Environment Setup

2.5.1. Enable network access after the installation

For CentOS: Log in as the "root" user. After the installation completes, the Ethernet interfaces are not enabled by default in CentOS 7. Change the line "ONBOOT=no" to "ONBOOT=yes" in the network interface configuration file (such as ifcfg-enp6s0f0 or ifcfg-em1, whichever interface you want to connect) in the /etc/sysconfig/network-scripts sub-directory. The default BOOTPROTO in the network interface configuration file is dhcp. Then use the following command to enable network access:

systemctl restart network
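
The ONBOOT change described above can also be applied non-interactively; a minimal sketch, assuming the interface configuration file is ifcfg-enp6s0f0 (substitute your own interface name):

sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-enp6s0f0
systemctl restart network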

2.5.2. Configuring Proxy

For Ubuntu: Create an apt.conf file in /etc/apt if it doesn't exist. It is used to set the proxy for apt-get when working behind a proxy server.

Acquire::http::proxy "http://<username>:<password>@<proxy>:<port>/";
Acquire::https::proxy "https://<username>:<password>@<proxy>:<port>/";
Acquire::ftp::proxy "ftp://<username>:<password>@<proxy>:<port>/";
Acquire::socks::proxy "socks://<username>:<password>@<proxy>:<port>/";

For CentOS: Edit /etc/yum.conf to work behind a proxy server by adding the line below.

$ echo "proxy=http://<username>:<password>@<proxy>:<port>/" >> /etc/yum.conf

2.5.3. Install redsocks

For CentOS: Since there is no redsocks package for CentOS Linux release 7.2.1511, you need to build redsocks from source yourself. Use the following commands to create a "proxy_redsocks" sub-directory under /root:

cd ~
mkdir proxy_redsocks

Since you can't download files on your CentOS system yet, use the following command on another CentOS or Ubuntu system to download the redsocks source into a file named "redsocks-src":

wget -O redsocks-src --no-check-certificate https://github.com/darkk/redsocks/zipball/master

Also download libevent-devel-2.0.21-4.el7.x86_64.rpm by:

wget ftp://fr2.rpmfind.net/linux/centos/7.2.1511/os/x86_64/Packages/libevent-devel-2.0.21-4.el7.x86_64.rpm

Copy both the redsocks-src and libevent-devel-2.0.21-4.el7.x86_64.rpm files into ~/proxy_redsocks on your CentOS system using "scp", for example as shown below.
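
A sketch of the copy step, run on the CentOS system and assuming the files were downloaded to the home directory of root on a host reachable as <download-host> (adjust user, host and paths as needed):

cd ~/proxy_redsocks
scp root@<download-host>:~/redsocks-src .
scp root@<download-host>:~/libevent-devel-2.0.21-4.el7.x86_64.rpm .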

Back on your CentOS system, first install libevent-devel using libevent-devel-2.0.21-4.el7.x86_64.rpm as below:

cd ~/proxy_redsocks
yum install -y libevent-devel-2.0.21-4.el7.x86_64.rpm

Build redsocks by:

cd ~/proxy_redsocks
unzip redsocks-src
cd darkk-redsocks-78a73fc
yum -y install gcc
make
cp redsocks ~/proxy_redsocks/.

Create a redsocks.conf in ~/proxy_redsocks with the following contents:

base {
log_debug = on;
log_info = on;
log = "file:/root/proxy.log";
daemon = on;
redirector = iptables;
}
redsocks {
local_ip = 0.0.0.0;
local_port = 6666;
// socks5 proxy server
ip = <proxy>;
port = 1080;
type = socks5;
}
redudp {
local_ip = 0.0.0.0;
local_port = 8888;
ip = <proxy>;
port = 1080;
}
dnstc {
local_ip = 127.0.0.1;
local_port = 5300;
}

Start the redsocks service by:

cd ~/proxy_redsocks
./redsocks -c redsocks.conf

Note: The redsocks service is not persistent; you need to execute the above-mentioned commands after every reboot.

Create intc-proxy.sh in ~/proxy_redsocks with the following contents and make it executable with "chmod +x intc-proxy.sh":

iptables -t nat -N REDSOCKS
iptables -t nat -A REDSOCKS -d 0.0.0.0/8 -j RETURN
iptables -t nat -A REDSOCKS -d 10.0.0.0/8 -j RETURN
iptables -t nat -A REDSOCKS -d 127.0.0.0/8 -j RETURN
iptables -t nat -A REDSOCKS -d 169.254.0.0/16 -j RETURN
iptables -t nat -A REDSOCKS -d 172.16.0.0/12 -j RETURN
iptables -t nat -A REDSOCKS -d 192.168.0.0/16 -j RETURN
iptables -t nat -A REDSOCKS -d 224.0.0.0/4 -j RETURN
iptables -t nat -A REDSOCKS -d 240.0.0.0/4 -j RETURN
iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-ports 6666
iptables -t nat -A REDSOCKS -p udp -j REDIRECT --to-ports 8888
iptables -t nat -A OUTPUT -p tcp  -j REDSOCKS
iptables -t nat -A PREROUTING  -p tcp  -j REDSOCKS

Enable the REDSOCKS NAT chain rules by:

cd ~/proxy_redsocks
./intc-proxy.sh

Note: These REDSOCKS NAT chain rules are not persistent; you need to execute the above-mentioned commands after every reboot.

2.5.4. Network Time Protocol (NTP) setup and configuration

Install ntp by:

$ sudo apt-get update
$ sudo apt-get install -y ntp

Insert the following two lines after the "server ntp.ubuntu.com" line and before the "# Access control configuration; see link for" line in the /etc/ntp.conf file:

server 127.127.1.0
fudge 127.127.1.0 stratum 10
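
If you prefer to script the edit, the following GNU sed sketch appends the two lines after the "server ntp.ubuntu.com" entry (assuming that line is present in /etc/ntp.conf):

$ sudo sed -i '/^server ntp.ubuntu.com/a server 127.127.1.0' /etc/ntp.conf
$ sudo sed -i '/^server 127.127.1.0/a fudge 127.127.1.0 stratum 10' /etc/ntp.conf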

Restart the ntp server to apply the changes

$ sudo service ntp restart

2.6. Scenario Testing

There are four ways of performing scenario testing:
  1. Fuel
  2. Apex
  3. OPNFV-Playground
  4. Jenkins Project

2.6.1. Fuel

1 Clone the fuel repo :

$ git clone https://gerrit.opnfv.org/gerrit/fuel.git

2 Check out the specific version of the branch to deploy :

The default branch is master; to use a stable release version, check out the corresponding stable branch, for example as shown below.
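
A minimal sketch, assuming stable/Danube is the desired release branch (substitute the branch or tag you want to deploy):

$ cd ~/fuel
$ git checkout stable/Danube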

3 Building the Fuel iso :

$ cd ~/fuel/ci/
$ ./build.sh -h

Provide the necessary options that are required to build an iso. Create a customized iso as per the deployment needs.

$ cd ~/fuel/build/
$ make

Alternatively, you can download the latest stable Fuel ISO from:

http://artifacts.opnfv.org/fuel.html

4 Creating a new deployment scenario

(i). Naming the scenario file

Include the new deployment scenario yaml file in ~/fuel/deploy/scenario/. The file name should adhere to the following format:

<ha | no-ha>_<SDN Controller>_<feature-1>_..._<feature-n>.yaml

(ii). Meta data

The deployment configuration file should contain configuration metadata as stated below:

deployment-scenario-metadata:
        title:
        version:
        created:

(iii). "stack-extensions" Module

To include fuel plugins in the deployment configuration file, use the "stack-extensions" key:

Example:
        stack-extensions:
           - module: fuel-plugin-collectd-ceilometer
             module-config-name: fuel-barometer
             module-config-version: 1.0.0
             module-config-override:
             #module-config overrides

Note: The "module-config-name" and "module-config-version" should match the name of the plugin configuration file.

The "module-config-override" key is used to configure the plugin by overriding the corresponding keys in the plugin config yaml file present in ~/fuel/deploy/config/plugins/.

(iv).  “dea-override-config” Module

To configure the HA/No-HA mode, network segmentation types and role to node assignments, use the “dea-override-config” key.

Example:
dea-override-config:
  environment:
    mode: ha
    net_segment_type: tun
  nodes:
  - id: 1
    interfaces: interfaces_1
    role: mongo,controller,opendaylight
  - id: 2
    interfaces: interfaces_1
    role: mongo,controller
  - id: 3
    interfaces: interfaces_1
    role: mongo,controller
  - id: 4
    interfaces: interfaces_1
    role: ceph-osd,compute
  - id: 5
    interfaces: interfaces_1
    role: ceph-osd,compute

  settings:
    editable:
      storage:
        ephemeral_ceph:
          description: Configures Nova to store ephemeral volumes in RBD. This works best if Ceph
          is enabled for volumes and images, too. Enables live migration of all types of Ceph
          backed VMs (without this option, live migration will only work with VMs launched from
          Cinder volumes).
          label: Ceph RBD for ephemeral volumes (Nova)
          type: checkbox
          value: true
          weight: 75
        images_ceph:
          description: Configures Glance to use the Ceph RBD backend to store images. If enabled,
          this option will prevent Swift from installing.
          label: Ceph RBD for images (Glance)
          restrictions:
          - settings:storage.images_vcenter.value == true: Only one Glance backend could be selected.
          type: checkbox
          value: true
          weight: 30

The "dea-override-config" section should provide at least {environment: {mode: 'value', net_segment_type: 'value'}} and {nodes: 1,2,...}, and can also enable additional stack features such as ceph and heat, which override the corresponding keys in dea_base.yaml and dea_pod_override.yaml.

(v). “dha-override-config”  Module

In order to configure the pod dha definition, use the "dha-override-config" key. This is an optional key placed at the end of the scenario file.

(vi). Mapping to short scenario name

The scenario.yaml file is used to map the short scenario names to one or more deployment scenario configuration yaml files. The short scenario names should follow the scheme below:

       [os]-[controller]-[feature]-[mode]-[option]

[os]: mandatory
possible value: os

Please note that this field is needed in order to select parent jobs to list and do blocking relations between them.

[controller]: mandatory
example values: nosdn, ocl, odl, onos

[mode]: mandatory
possible values: ha, noha

[option]: optional

Used for scenarios that do not fit into the naming scheme. The optional field should not be included in the short scenario name if there is no optional feature.

Example:
    1. os-nosdn-kvm-noha
    2. os-nosdn-kvm_ovs_dpdk_bar-ha

Example of how short scenario names are mapped to configuration yaml files:

os-nosdn-kvm_ovs_dpdk-ha:
    configfile: ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml

Note:

  • ( - ) is used as the separator between fields. [os-nosdn-kvm_ovs_dpdk-ha]
  • ( _ ) is used to separate values belonging to the same field. [os-nosdn-kvm_ovs_bar-ha]

5 Deploying the scenario

Command to deploy the os-nosdn-kvm_ovs_dpdk-ha scenario:

$ cd ~/fuel/ci/
$ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default \
-s ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso
where,

-b is used to specify the configuration directory

-f is used to re-deploy on the existing deployment

-i is used to specify the image downloaded from artifacts.

-l is used to specify the lab name

-p is used to specify POD name

-s is used to specify the scenario file

Note:

Check $ sudo ./deploy.sh -h for further information.

2.6.2. Apex

The Apex installer uses CentOS as the platform.

1 Install Packages :

Install the necessary packages as follows:

cd ~
yum install -y git rpm-build python-setuptools python-setuptools-devel
yum install -y epel-release gcc
curl -O https://bootstrap.pypa.io/get-pip.py
yum install -y python3 python34
/usr/bin/python3.4 get-pip.py
yum install -y python34-devel python34-setuptools
yum install -y libffi-devel python-devel openssl-devel
yum -y install libxslt-devel libxml2-devel

Then you can use "dev_deploy_check.sh" from the Apex installer source to install the remaining necessary packages as follows:

cd ~
git clone https://gerrit.opnfv.org/gerrit/p/apex.git
export CONFIG=$(pwd)/apex/build
export LIB=$(pwd)/apex/lib
export PYTHONPATH=$PYTHONPATH:$(pwd)/apex/lib/python
cd apex/ci
./dev_deploy_check.sh
yum install -y python2-oslo-config python2-debtcollector

2 Create ssh key :

Use the following commands to create an ssh key; when asked for a passphrase, just press return for an empty passphrase:

cd ~
ssh-keygen -t rsa

Then prepare the authorized_keys for Apex scenario deployment:

cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

3 Create default pool :

Use the following command to define the default pool device:

cd ~
virsh pool-define /dev/stdin <<EOF
<pool type='dir'>
  <name>default</name>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>
EOF

Use the following commands to start the default pool device and set it to autostart:

virsh pool-start default
virsh pool-autostart default

Use the following commands to verify that the default pool device was created, started and set to autostart successfully:

virsh pool-list
virsh pool-info default

4 Get Apex source code :

Get Apex installer source code:

git clone https://gerrit.opnfv.org/gerrit/p/apex.git
cd apex

5 Modify code to work behind proxy :

In the "lib" sub-directory of the Apex source, change line 284 of the "common-functions.sh" file from "if ping -c 2 www.google.com > /dev/null; then" to "if curl www.google.com > /dev/null; then", since ping to www.google.com does not work behind the Intel proxy.
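
A sketch of applying this substitution non-interactively, assuming the quoted line appears verbatim in lib/common-functions.sh:

cd ~/apex/lib
sed -i 's|if ping -c 2 www.google.com > /dev/null; then|if curl www.google.com > /dev/null; then|' common-functions.sh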

6 Setup build environment :

Set up the build environment by:

cd ~
export BASE=$(pwd)/apex/build
export LIB=$(pwd)/apex/lib
export PYTHONPATH=$PYTHONPATH:$(pwd)/apex/lib/python
export IMAGES=$(pwd)/apex/.build

7 Build Apex installer :

Build undercloud image by:

cd ~/apex/build
make images-clean
make undercloud

You can look at the targets in ~/apex/build/Makefile to build an image for a specific feature. The following shows how to build the vanilla ODL image (this can be used as the overcloud image for the basic (nosdn-nofeature) and opendaylight test scenarios):

cd ~/apex/build
make overcloud-opendaylight

You can build the complete set of images (undercloud, overcloud-full, overcloud-opendaylight, overcloud-onos) by:

cd ~/apex/build
make images

8 Modification of network_settings.yaml :

Since we are working behind a proxy, we need to modify network_settings.yaml in ~/apex/config/network to make the deployment work properly. To avoid accidentally checking the modification into the repo, it is recommended that you copy "network_settings.yaml" to "intc_network_settings.yaml" in ~/apex/config/network and make the following modification in intc_network_settings.yaml:

Change the dns_servers setting from

dns_servers: ["8.8.8.8", "8.8.4.4"]

to

dns_servers: ["<ip-address>"]

Also, you need to modify deploy.sh in apex/ci, changing "ntp_server="pool.ntp.org"" to "ntp_server="<ip-address>"", to reflect the fact that we cannot reach an outside NTP server and should use local time instead, as shown below.
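
A sketch of both modifications applied with sed, assuming the default values shown above are present verbatim in the copied files:

cd ~/apex
sed -i 's/dns_servers: \["8.8.8.8", "8.8.4.4"\]/dns_servers: ["<ip-address>"]/' config/network/intc_network_settings.yaml
sed -i 's/ntp_server="pool.ntp.org"/ntp_server="<ip-address>"/' ci/deploy.sh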

9 Commands to deploy scenario :

The following shows the commands used to deploy the os-nosdn-kvm_ovs_dpdk-noha scenario behind the proxy:

cd ~/apex/ci
./clean.sh
./dev_deploy_check.sh
./deploy.sh -v --ping-site <ping_ip-address> --dnslookup-site <dns_ip-address> -n \
~/apex/config/network/intc_network_settings.yaml -d \
~/apex/config/deploy/os-nosdn-kvm_ovs_dpdk-noha.yaml

10 Accessing the Overcloud dashboard :

If the deployment completes successfully, the last few output lines from the deployment will look like the following:

INFO: Undercloud VM has been setup to NAT Overcloud public network
Undercloud IP: <ip-address>, please connect by doing 'opnfv-util undercloud'
Overcloud dashboard available at http://<ip-address>/dashboard
INFO: Post Install Configuration Complete

11 Accessing the Undercloud and Overcloud through command line :

At the end of the deployment we obtain the Undercloud IP. One can log in to the Undercloud and obtain the Overcloud IP as follows:

cd ~/apex/ci/
./util.sh undercloud
source stackrc
nova list
ssh heat-admin@<overcloud-ip>

2.6.3. OPNFV-Playground

Install OPNFV-Playground (the tool chain to deploy/test CI scenarios in fuel@opnfv):

$ cd ~
$ git clone https://github.com/jonasbjurel/OPNFV-Playground.git
$ cd OPNFV-Playground/ci_fuel_opnfv/
  • Follow the README.rst in the ~/OPNFV-Playground/ci_fuel_opnfv sub-folder to complete all necessary installation and setup.
  • Section "RUNNING THE PIPELINE" in README.rst explains how to use this ci_pipeline to deploy/test CI test scenarios. You can also use

./ci_pipeline.sh --help  ##to learn more options.

1 Downgrade paramiko package from 2.x.x to 1.10.0

The paramiko package 2.x.x does not work with the OPNFV-Playground tool chain at present; Jira ticket FUEL-188 has been raised for this.

Check the paramiko package version by following the steps below on your system:

$ python
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> import paramiko
>>> print paramiko.__version__
>>> exit()

You will get the current paramiko package version; if it is 2.x.x, uninstall that version by:

$  sudo pip uninstall paramiko

Ubuntu 14.04 LTS provides the python-paramiko package (1.10.0); install it by:

$ sudo apt-get install python-paramiko

Verify it by following:

$ python
>>> import paramiko
>>> print paramiko.__version__
>>> exit()

2  Clone the fuel@opnfv

Check out the specific version of the desired branch of fuel@opnfv:

$ cd ~
$ git clone https://gerrit.opnfv.org/gerrit/fuel.git
$ cd fuel

By default this will be the master branch; in order to deploy from the Colorado/Danube branch, do:

$ git checkout stable/Danube

3 Creating the scenario

Implement the scenario file as described in 3.1.4

4 Deploying the scenario

You can use the following commands to deploy/test the os-nosdn-kvm_ovs_dpdk-(no)ha and os-nosdn-kvm_ovs_dpdk_bar-(no)ha scenarios:

$ cd ~/OPNFV-Playground/ci_fuel_opnfv/

For os-nosdn-kvm_ovs_dpdk-ha :

$ ./ci_pipeline.sh -r ~/fuel -i /root/fuel.iso -B -n intel-sc -s os-nosdn-kvm_ovs_dpdk-ha

For os-nosdn-kvm_ovs_dpdk_bar-ha:

$ ./ci_pipeline.sh -r ~/fuel -i /root/fuel.iso -B -n intel-sc -s os-nosdn-kvm_ovs_dpdk_bar-ha

The "ci_pipeline.sh" script first clones the local fuel repo, then deploys the os-nosdn-kvm_ovs_dpdk-ha/os-nosdn-kvm_ovs_dpdk_bar-ha scenario from the given ISO, and runs the Functest and Yardstick tests. The log of the deployment/test (ci.log) can be found in ~/OPNFV-Playground/ci_fuel_opnfv/artifact/master/YYYY-MM-DD--HH.mm, where YYYY-MM-DD--HH.mm is the date/time at which you started "ci_pipeline.sh".
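
To follow a run while it is in progress, the ci.log under the newest artifact directory can be tailed; a sketch (the timestamped directory name depends on when the run was started):

$ tail -f ~/OPNFV-Playground/ci_fuel_opnfv/artifact/master/*/ci.log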

Note:

Check $ ./ci_pipeline.sh -h for further information.

2.6.4. Jenkins Project

The os-nosdn-kvm_ovs_dpdk-(no)ha and os-nosdn-kvm_ovs_dpdk_bar-(no)ha scenarios can be executed from the following Jenkins projects:

HA scenarios:
  1. “fuel-os-nosdn-kvm_ovs_dpdk-ha-baremetal-daily-master” (os-nosdn-kvm_ovs_dpdk-ha)
  2. “fuel-os-nosdn-kvm_ovs_dpdk_bar-ha-baremetal-daily-master” (os-nosdn-kvm_ovs_dpdk_bar-ha)
  3. “apex-os-nosdn-kvm_ovs_dpdk-ha-baremetal-master” (os-nosdn-kvm_ovs_dpdk-ha)
NOHA scenarios:
  1. “fuel-os-nosdn-kvm_ovs_dpdk-noha-virtual-daily-master” (os-nosdn-kvm_ovs_dpdk-noha)
  2. “fuel-os-nosdn-kvm_ovs_dpdk_bar-noha-virtual-daily-master” (os-nosdn-kvm_ovs_dpdk_bar-noha)
  3. “apex-os-nosdn-kvm_ovs_dpdk-noha-baremetal-master” (os-nosdn-kvm_ovs_dpdk-noha)

os-nosdn-kvm_ovs_dpdk-noha Overview and Description

1. os-nosdn-kvm_ovs_dpdk-noha Description

1.1. Introduction

The purpose of os-nosdn-kvm_ovs_dpdk-noha scenario testing is to test the No High Availability deployment and configuration of the OPNFV software suite with OpenStack and without SDN software. This OPNFV software suite includes the latest OPNFV KVM4NFV software packages (Linux kernel and QEMU patches) for achieving low latency. When deployed using Fuel, the No High Availability feature is achieved by deploying an OpenStack multi-node setup with 1 controller and 3 compute nodes; when using Apex, the setup has 1 controller and 1 compute node.

KVM4NFV packages are installed on the compute nodes as part of the deployment. This scenario testcase is deployed on a multi-node setup by using the OPNFV Fuel and Apex deployers.

Using Fuel Installer

1.2. Scenario Components and Composition

This scenario deploys the No High Availability OPNFV Cloud based on the configurations provided in no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml. This yaml file contains the following configurations and is passed as an argument to the deploy.py script:

  • scenario.yaml: This configuration file defines the translation between the short deployment scenario name (os-nosdn-kvm_ovs_dpdk-noha) and the actual deployment scenario configuration file (no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml)
  • deployment-scenario-metadata: Contains the configuration metadata such as title, version, created and comment.
deployment-scenario-metadata:
   title: NFV KVM and OVS-DPDK NOHA deployment
   version: 0.0.1
   created: Dec 20 2016
   comment: NFV KVM and OVS-DPDK
  • stack-extensions: Stack extensions are OPNFV added-value features in the form of a fuel plugin. Plugins listed in stack-extensions are enabled and configured. The os-nosdn-kvm_ovs_dpdk-noha scenario currently uses the KVM-1.0.0 plugin.
stack-extensions:
   - module: fuel-plugin-kvm
     module-config-name: fuel-nfvkvm
     module-config-version: 1.0.0
     module-config-override:
       # Module config overrides
  • dea-override-config: Used to configure the NO-HA mode, network segmentation types and role-to-node assignments. These configurations override the corresponding keys in dea_base.yaml and dea_pod_override.yaml. These keys are used to deploy multiple nodes (1 controller, 3 computes) as mentioned below.

    • Node 1:
      • This node has MongoDB and Controller roles
      • The controller node runs the Identity service, Image Service, management portions of Compute and Networking, Networking plug-in and the dashboard
      • Uses VLAN as an interface
    • Node 2:
      • This node has compute and Ceph-osd roles
      • Ceph is a massively scalable, open source, distributed storage system
      • By default, Compute uses KVM as the hypervisor
      • Uses DPDK as an interface
    • Node 3:
      • This node has compute and Ceph-osd roles
      • Ceph is a massively scalable, open source, distributed storage system
      • By default, Compute uses KVM as the hypervisor
      • Uses DPDK as an interface
    • Node 4:
      • This node has compute and Ceph-osd roles
      • Ceph is a massively scalable, open source, distributed storage system
      • By default, Compute uses KVM as the hypervisor
      • Uses DPDK as an interface

    The below is the dea-override-config of the no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml file.

dea-override-config:
  fuel:
    FEATURE_GROUPS:
    - experimental
  environment:
    net_segment_type: vlan
  nodes:
  - id: 1
    interfaces: interfaces_vlan
    role: mongo,controller
  - id: 2
    interfaces: interfaces_dpdk
    role: ceph-osd,compute
    attributes: attributes_1
  - id: 3
    interfaces: interfaces_dpdk
    role: ceph-osd,compute
    attributes: attributes_1
  - id: 4
    interfaces: interfaces_dpdk
    role: ceph-osd,compute
    attributes: attributes_1

  attributes_1:
    hugepages:
      dpdk:
        value: 1024
      nova:
        value:
          '2048': 1024

  network:
    networking_parameters:
      segmentation_type: vlan
    networks:
    - cidr: null
      gateway: null
      ip_ranges: []
      meta:
        configurable: false
        map_priority: 2
        name: private
        neutron_vlan_range: true
        notation: null
        render_addr_mask: null
        render_type: null
        seg_type: vlan
        use_gateway: false
        vlan_start: null
      name: private
      vlan_start: null

  settings:
    editable:
      storage:
        ephemeral_ceph:
          description: Configures Nova to store ephemeral volumes in RBD. This works best if Ceph
          is enabled for volumes and images, too. Enables live migration of all types of Ceph
          backed VMs (without this option, live migration will only work with VMs launched from
          Cinder volumes).
          label: Ceph RBD for ephemeral volumes (Nova)
          type: checkbox
          value: true
          weight: 75
        images_ceph:
          description: Configures Glance to use the Ceph RBD backend to store images. If enabled,
          this option will prevent Swift from installing.
          label: Ceph RBD for images (Glance)
          restrictions:
          - settings:storage.images_vcenter.value == true: Only one Glance backend could be selected.
          type: checkbox
          value: true
          weight: 30
  • dha-override-config: Provides information about the VM definition and network config for virtual deployment. These configurations override the pod dha definition and point to the controller, compute and fuel definition files. The no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml has no dha-config changes, i.e., the default configuration is used.
  • The os-nosdn-kvm_ovs_dpdk-noha scenario is successful when all 4 nodes are accessible, up and running.

Note:

  • In os-nosdn-kvm_ovs_dpdk-noha scenario, OVS is installed on the compute nodes with DPDK configured
  • Hugepages for DPDK are configured in the attributes_1 section of the no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml (a quick verification sketch follows this note)

  • Hugepages are only configured for compute nodes
  • This results in faster communication and data transfer among the compute nodes
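
As a quick sanity check (not part of the official scenario validation), hugepage allocation on a deployed compute node can be inspected from /proc/meminfo:

grep -i huge /proc/meminfo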

1.3. Scenario Usage Overview

  • The high availability feature is disabled and deployment is done by deploy.py with no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml as an argument.
  • Install Fuel Master and deploy OPNFV Cloud from scratch on Hardware Environment:

Command to deploy the os-nosdn-kvm_ovs_dpdk-noha scenario:

$ cd ~/fuel/ci/
$ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default \
-s no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso
where,

-b is used to specify the configuration directory

-i is used to specify the image downloaded from artifacts.

Note:

Check $ sudo ./deploy.sh -h for further information.
  • os-nosdn-kvm_ovs_dpdk-noha scenario can be executed from the jenkins project “fuel-os-nosdn-kvm_ovs_dpdk-noha-baremetal-daily-master”
  • This scenario provides the No High Availability feature by deploying 1 controller and 3 compute nodes and checking that all 4 nodes are accessible (IP, up & running).
  • The test scenario is passed if the deployment is successful and all 4 nodes are accessible (IP, up & running).

Using Apex Installer

1.4. Scenario Components and Composition

This scenario is composed of common OpenStack services enabled by default, including Nova, Neutron, Glance, Cinder, Keystone, Horizon. Optionally and by default, Tacker and Congress services are also enabled. Ceph is used as the backend storage to Cinder on all deployed nodes.

The os-nosdn-kvm_ovs_dpdk-noha.yaml file contains following configurations and is passed as an argument to deploy.sh script.

  • global-params: Used to define global parameters; there is only one such parameter, i.e., ha_enabled
global-params:
  ha_enabled: false
  • deploy_options: Used to define the type of SDN controller; configure tacker, congress and service function chaining (sfc) support for ODL and ONOS; configure ODL with SDNVPN support; select which dataplane to use for overcloud tenant networks; choose whether to run the kvm real time kernel (rt_kvm) on the compute node(s) to reduce the network latencies caused by network function virtualization; and choose whether to install and configure fdio functionality in the overcloud
deploy_options:
  sdn_controller: false
  tacker: true
  congress: true
  sfc: false
  vpn: false
  rt_kvm: true
  dataplane: ovs_dpdk
  • performance: Used to set performance options on specific roles. The valid roles are ‘Compute’, ‘Controller’ and ‘Storage’, and the valid sections are ‘kernel’ and ‘nova’
performance:
  Controller:
    kernel:
      hugepages: 1024
      hugepagesz: 2M
  Compute:
    kernel:
      hugepagesz: 2M
      hugepages: 2048
      intel_iommu: 'on'
      iommu: pt
    ovs:
      socket_memory: 1024
      pmd_cores: 2
      dpdk_cores: 1

1.5. Scenario Usage Overview

  • The No High Availability deployment is achieved by executing deploy.sh with os-nosdn-kvm_ovs_dpdk-noha.yaml as an argument.
  • Build the undercloud and overcloud images as mentioned below:
cd ~/apex/build/
make images-clean
make images
  • Command to deploy os-nosdn-kvm_ovs_dpdk-noha scenario:
cd ~/apex/ci/
./clean.sh
./dev_dep_check.sh
./deploy.sh -v --ping-site <ping_ip-address> --dnslookup-site <dns_ip-address> -n \
~/apex/config/network/intc_network_settings.yaml -d ~/apex/config/deploy/os-nosdn-kvm_ovs_dpdk-noha.yaml
where,
-v is used for virtual deployment

-n is used for providing the network configuration file

-d is used for providing the scenario configuration file

1.6. References

For more information on the OPNFV Euphrates release, please visit http://www.opnfv.org/Euphrates

os-nosdn-kvm_ovs_dpdk-ha Overview and Description

1. os-nosdn-kvm_ovs_dpdk-ha Description

1.1. Introduction

The purpose of os-nosdn-kvm_ovs_dpdk-ha scenario testing is to test the High Availability deployment and configuration of the OPNFV software suite with OpenStack and without SDN software. This OPNFV software suite includes the latest OPNFV KVM4NFV software packages (Linux kernel and QEMU patches) for achieving low latency. The High Availability feature is achieved by deploying an OpenStack multi-node setup with 3 controller and 2 compute nodes.

KVM4NFV packages are installed on the compute nodes as part of the deployment. This scenario testcase is deployed on a multi-node setup by using the OPNFV Fuel and Apex deployers.

Using Fuel Installer

1.2. Scenario Components and Composition

This scenario deploys the High Availability OPNFV Cloud based on the configurations provided in ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml. This yaml file contains the following configurations and is passed as an argument to the deploy.py script:

  • scenario.yaml: This configuration file defines the translation between the short deployment scenario name (os-nosdn-kvm_ovs_dpdk-ha) and the actual deployment scenario configuration file (ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml)
  • deployment-scenario-metadata: Contains the configuration metadata such as title, version, created and comment.
deployment-scenario-metadata:
   title: NFV KVM and OVS-DPDK HA deployment
   version: 0.0.1
   created: Dec 20 2016
   comment: NFV KVM and OVS-DPDK
  • stack-extensions: Stack extensions are OPNFV added-value features in the form of a fuel plugin. Plugins listed in stack-extensions are enabled and configured. The os-nosdn-kvm_ovs_dpdk-ha scenario currently uses the KVM-1.0.0 plugin.
stack-extensions:
   - module: fuel-plugin-kvm
     module-config-name: fuel-nfvkvm
     module-config-version: 1.0.0
     module-config-override:
       # Module config overrides
  • dea-override-config: Used to configure the HA mode, network segmentation types and role-to-node assignments. These configurations override the corresponding keys in dea_base.yaml and dea_pod_override.yaml. These keys are used to deploy multiple nodes (3 controllers, 2 computes) as mentioned below.

    • Node 1:
      • This node has MongoDB and Controller roles
      • The controller node runs the Identity service, Image Service, management portions of Compute and Networking, Networking plug-in and the dashboard
      • Uses VLAN as an interface
    • Node 2:
      • This node has Ceph-osd and Controller roles
      • The controller node runs the Identity service, Image Service, management portions of Compute and Networking, Networking plug-in and the dashboard
      • Ceph is a massively scalable, open source, distributed storage system
      • Uses VLAN as an interface
    • Node 3:
      • This node has Controller role in order to achieve high availability.
      • Uses VLAN as an interface
    • Node 4:
      • This node has compute and Ceph-osd roles
      • Ceph is a massively scalable, open source, distributed storage system
      • By default, Compute uses KVM as the hypervisor
      • Uses DPDK as an interface
    • Node 5:
      • This node has compute and Ceph-osd roles
      • Ceph is a massively scalable, open source, distributed storage system
      • By default, Compute uses KVM as the hypervisor
      • Uses DPDK as an interface

    The below is the dea-override-config of the ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml file.

dea-override-config:
  fuel:
    FEATURE_GROUPS:
    - experimental
  nodes:
  - id: 1
    interfaces: interfaces_1
    role: controller
  - id: 2
    interfaces: interfaces_1
    role: mongo,controller
  - id: 3
    interfaces: interfaces_1
    role: ceph-osd,controller
  - id: 4
    interfaces: interfaces_dpdk
    role: ceph-osd,compute
    attributes: attributes_1
  - id: 5
    interfaces: interfaces_dpdk
    role: ceph-osd,compute
    attributes: attributes_1

  attributes_1:
    hugepages:
      dpdk:
        value: 1024
      nova:
        value:
          '2048': 1024

  settings:
    editable:
      storage:
        ephemeral_ceph:
          description: Configures Nova to store ephemeral volumes in RBD. This works best if Ceph
          is enabled for volumes and images, too. Enables live migration of all types of Ceph
          backed VMs (without this option, live migration will only work with VMs launched from
          Cinder volumes).
          label: Ceph RBD for ephemeral volumes (Nova)
          type: checkbox
          value: true
          weight: 75
        images_ceph:
          description: Configures Glance to use the Ceph RBD backend to store images. If enabled,
          this option will prevent Swift from installing.
          label: Ceph RBD for images (Glance)
          restrictions:
          - settings:storage.images_vcenter.value == true: Only one Glance backend could be selected.
          type: checkbox
          value: true
          weight: 30
  • dha-override-config: Provides information about the VM definition and network config for virtual deployment. These configurations override the pod dha definition and point to the controller, compute and fuel definition files.

    The below is the dha-override-config of the ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml file.

dha-override-config:
  nodes:
  - id: 1
    libvirtName: controller1
    libvirtTemplate: templates/virtual_environment/vms/controller.xml
  - id: 2
    libvirtName: controller2
    libvirtTemplate: templates/virtual_environment/vms/controller.xml
  - id: 3
    libvirtName: controller3
    libvirtTemplate: templates/virtual_environment/vms/controller.xml
  - id: 4
    libvirtName: compute1
    libvirtTemplate: templates/virtual_environment/vms/compute.xml
  - id: 5
    libvirtName: compute2
    libvirtTemplate: templates/virtual_environment/vms/compute.xml
  - id: 6
    libvirtName: fuel-master
    libvirtTemplate: templates/virtual_environment/vms/fuel.xml
    isFuel: yes
    username: root
    password: r00tme
  • The os-nosdn-kvm_ovs_dpdk-ha scenario is successful when all 5 nodes are accessible, up and running.

Note:

  • In os-nosdn-kvm_ovs_dpdk-ha scenario, OVS is installed on the compute nodes with DPDK configured
  • Hugepages for DPDK are configured in the attributes_1 section of the ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml

  • Hugepages are only configured for compute nodes
  • This results in faster communication and data transfer among the compute nodes

1.3. Scenario Usage Overview

  • The high availability feature can be achieved by executing deploy.py with ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml as an argument.
  • Install Fuel Master and deploy OPNFV Cloud from scratch on Hardware Environment:

Command to deploy the os-nosdn-kvm_ovs_dpdk-ha scenario:

$ cd ~/fuel/ci/
$ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default \
-s ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso
where,

-b is used to specify the configuration directory

-i is used to specify the image downloaded from artifacts.

Note:

Check $ sudo ./deploy.sh -h for further information.
  • os-nosdn-kvm_ovs_dpdk-ha scenario can be executed from the jenkins project “fuel-os-nosdn-kvm_ovs_dpdk-ha-baremetal-daily-master”
  • This scenario provides the High Availability feature by deploying 3 controller and 2 compute nodes and checking that all 5 nodes are accessible (IP, up & running).
  • The test scenario is passed if the deployment is successful and all 5 nodes are accessible (IP, up & running).

Using Apex Installer

1.4. Scenario Components and Composition

This scenario is composed of common OpenStack services enabled by default, including Nova, Neutron, Glance, Cinder, Keystone, Horizon. Optionally and by default, Tacker and Congress services are also enabled. Ceph is used as the backend storage to Cinder on all deployed nodes.

All services are in HA, meaning that there are multiple cloned instances of each service, and they are balanced by HA Proxy using a Virtual IP Address per service.

The os-nosdn-kvm_ovs_dpdk-ha.yaml file contains following configurations and is passed as an argument to deploy.sh script.

  • global-params: Used to define global parameters; there is only one such parameter, i.e., ha_enabled
global-params:
  ha_enabled: true
  • deploy_options: Used to define the type of SDN controller; configure tacker, congress and service function chaining (sfc) support for ODL and ONOS; configure ODL with SDNVPN support; select which dataplane to use for overcloud tenant networks; choose whether to run the kvm real time kernel (rt_kvm) on the compute node(s) to reduce the network latencies caused by network function virtualization; and choose whether to install and configure fdio functionality in the overcloud
deploy_options:
  sdn_controller: false
  tacker: true
  congress: true
  sfc: false
  vpn: false
  rt_kvm: true
  dataplane: ovs_dpdk
  • performance: Used to set performance options on specific roles. The valid roles are ‘Compute’, ‘Controller’ and ‘Storage’, and the valid sections are ‘kernel’ and ‘nova’
performance:
  Controller:
    kernel:
      hugepages: 1024
      hugepagesz: 2M
  Compute:
    kernel:
      hugepagesz: 2M
      hugepages: 2048
      intel_iommu: 'on'
      iommu: pt
    ovs:
      socket_memory: 1024
      pmd_cores: 2
      dpdk_cores: 1

1.5. Scenario Usage Overview

  • The high availability feature can be achieved by executing deploy.sh with os-nosdn-kvm_ovs_dpdk-ha.yaml as an argument.
  • Build the undercloud and overcloud images as mentioned below:
cd ~/apex/build/
make images-clean
make images
  • Command to deploy os-nosdn-kvm_ovs_dpdk-ha scenario:
cd ~/apex/ci/
./clean.sh
./dev_dep_check.sh
./deploy.sh -v --ping-site <ping_ip-address> --dnslookup-site <dns_ip-address> -n \
~/apex/config/network/intc_network_settings.yaml -d ~/apex/config/deploy/os-nosdn-kvm_ovs_dpdk-ha.yaml
where,
-v is used for virtual deployment

-n is used for providing the network configuration file

-d is used for providing the scenario configuration file

1.6. References

For more information on the OPNFV Euphrates release, please visit http://www.opnfv.org/Euphrates

os-nosdn-kvm_ovs_dpdk_bar-noha Overview and Description

1. os-nosdn-kvm_ovs_dpdk_bar-noha Description

1.1. Introduction

The purpose of os-nosdn-kvm_ovs_dpdk_bar-noha scenario testing is to test the No High Availability deployment and configuration of the OPNFV software suite with OpenStack and without SDN software. This OPNFV software suite includes the latest OPNFV KVM4NFV software packages (Linux kernel and QEMU patches) for achieving low latency. The No High Availability feature is achieved by deploying an OpenStack multi-node setup with 1 controller and 3 compute nodes.

OPNFV Barometer packages are used for traffic, performance and platform monitoring. KVM4NFV packages are installed on the compute nodes as part of the deployment. This scenario testcase is deployed on a multi-node setup by using the OPNFV Fuel deployer.

1.2. Scenario Components and Composition

This scenario deploys the No High Availability OPNFV Cloud based on the configurations provided in no-ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml. This yaml file contains the following configurations and is passed as an argument to the deploy.py script:

  • scenario.yaml: This configuration file defines the translation between the short deployment scenario name (os-nosdn-kvm_ovs_dpdk_bar-noha) and the actual deployment scenario configuration file (no-ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml)
  • deployment-scenario-metadata: Contains the configuration metadata such as title, version, created and comment.
deployment-scenario-metadata:
   title: NFV KVM and OVS-DPDK HA deployment
   version: 0.0.1
   created: Dec 20 2016
   comment: NFV KVM and OVS-DPDK
  • stack-extensions: Stack extensions are OPNFV added-value features in the form of a fuel plugin. Plugins listed in stack-extensions are enabled and configured. The os-nosdn-kvm_ovs_dpdk_bar-noha scenario currently uses the KVM-1.0.0 and barometer-1.0.0 plugins.
stack-extensions:
   - module: fuel-plugin-kvm
     module-config-name: fuel-nfvkvm
     module-config-version: 1.0.0
     module-config-override:
      # Module config overrides
   - module: fuel-plugin-collectd-ceilometer
     module-config-name: fuel-barometer
     module-config-version: 1.0.0
     module-config-override:
       # Module config overrides
  • dea-override-config: Used to configure the NO-HA mode, network segmentation types and role-to-node assignments. These configurations override the corresponding keys in dea_base.yaml and dea_pod_override.yaml. These keys are used to deploy multiple nodes (1 controller, 3 computes) as mentioned below.

    • Node 1:
      • This node has MongoDB and Controller roles
      • The controller node runs the Identity service, Image Service, management portions of Compute and Networking, Networking plug-in and the dashboard
      • Uses VLAN as an interface
    • Node 2:
      • This node has compute and Ceph-osd roles
      • Ceph is a massively scalable, open source, distributed storage system
      • By default, Compute uses KVM as the hypervisor
      • Uses DPDK as an interface
    • Node 3:
      • This node has compute and Ceph-osd roles
      • Ceph is a massively scalable, open source, distributed storage system
      • By default, Compute uses KVM as the hypervisor
      • Uses DPDK as an interface
    • Node 4:
      • This node has compute and Ceph-osd roles
      • Ceph is a massively scalable, open source, distributed storage system
      • By default, Compute uses KVM as the hypervisor
      • Uses DPDK as an interface

    The below is the dea-override-config of the no-ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml file.

dea-override-config:
  fuel:
    FEATURE_GROUPS:
    - experimental
  environment:
    net_segment_type: vlan
  nodes:
  - id: 1
    interfaces: interfaces_vlan
    role: mongo,controller
  - id: 2
    interfaces: interfaces_dpdk
    role: ceph-osd,compute
    attributes: attributes_1
  - id: 3
    interfaces: interfaces_dpdk
    role: ceph-osd,compute
    attributes: attributes_1
  - id: 4
    interfaces: interfaces_dpdk
    role: ceph-osd,compute
    attributes: attributes_1

  attributes_1:
    hugepages:
      dpdk:
        value: 1024
      nova:
        value:
          '2048': 1024

  network:
    networking_parameters:
      segmentation_type: vlan
    networks:
    - cidr: null
      gateway: null
      ip_ranges: []
      meta:
        configurable: false
        map_priority: 2
        name: private
        neutron_vlan_range: true
        notation: null
        render_addr_mask: null
        render_type: null
        seg_type: vlan
        use_gateway: false
        vlan_start: null
      name: private
      vlan_start: null

  settings:
    editable:
      storage:
        ephemeral_ceph:
          description: Configures Nova to store ephemeral volumes in RBD. This works best if Ceph
          is enabled for volumes and images, too. Enables live migration of all types of Ceph
          backed VMs (without this option, live migration will only work with VMs launched from
          Cinder volumes).
          label: Ceph RBD for ephemeral volumes (Nova)
          type: checkbox
          value: true
          weight: 75
        images_ceph:
          description: Configures Glance to use the Ceph RBD backend to store images. If enabled,
          this option will prevent Swift from installing.
          label: Ceph RBD for images (Glance)
          restrictions:
          - settings:storage.images_vcenter.value == true: Only one Glance backend could be selected.
          type: checkbox
          value: true
          weight: 30
  • dha-override-config: Provides information about the VM definition and network config for virtual deployment. These configurations override the pod dha definition and point to the controller, compute and fuel definition files. The no-ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml has no dha-config changes, i.e., the default configuration is used.
  • The os-nosdn-kvm_ovs_dpdk_bar-noha scenario is successful when all 4 nodes are accessible, up and running.

Note:

  • In os-nosdn-kvm_ovs_dpdk_bar-noha scenario, OVS is installed on the compute nodes with DPDK configured
  • The Barometer plugin is also implemented along with the KVM plugin.
  • Hugepages for DPDK are configured in the attributes_1 section of the no-ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml
  • Hugepages are only configured for compute nodes
  • This results in faster communication and data transfer among the compute nodes

1.3. Scenario Usage Overview

  • The high availability feature is disabled and deployment is done by deploy.py with no-ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml as an argument.
  • Install Fuel Master and deploy OPNFV Cloud from scratch on Hardware Environment:

Command to deploy the os-nosdn-kvm_ovs_dpdk_bar-noha scenario:

$ cd ~/fuel/ci/
$ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default \
-s no-ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso
where,

-b is used to specify the configuration directory

-i is used to specify the image downloaded from artifacts.

Note:

Check $ sudo ./deploy.sh -h for further information.
  • os-nosdn-kvm_ovs_dpdk_bar-noha scenario can be executed from the jenkins project “fuel-os-nosdn-kvm_ovs_dpdk_bar-noha-baremetal-daily-master”
  • This scenario provides the No High Availability feature by deploying 1 controller and 3 compute nodes and checking that all 4 nodes are accessible (IP, up & running).
  • The test scenario is passed if the deployment is successful and all 4 nodes are accessible (IP, up & running).

1.4. Known Limitations, Issues and Workarounds

  • Test scenario os-nosdn-kvm_ovs_dpdk_bar-noha result is not stable.

1.5. References

For more information on the OPNFV Euphrates release, please visit http://www.opnfv.org/Euphrates

os-nosdn-kvm_ovs_dpdk_bar-ha Overview and Description

1. os-nosdn-kvm_ovs_dpdk_bar-ha Description

1.1. Introduction

The purpose of os-nosdn-kvm_ovs_dpdk_bar-ha scenario testing is to test the High Availability deployment and configuration of the OPNFV software suite with OpenStack and without SDN software. This OPNFV software suite includes the latest OPNFV KVM4NFV software packages (Linux kernel and QEMU patches) for achieving low latency. The High Availability feature is achieved by deploying an OpenStack multi-node setup with 3 controller and 2 compute nodes.

OPNFV Barometer packages are used for traffic, performance and platform monitoring. KVM4NFV packages are installed on the compute nodes as part of the deployment. This scenario testcase is deployed on a multi-node setup by using the OPNFV Fuel deployer.

1.2. Scenario Components and Composition

This scenario deploys the High Availability OPNFV Cloud based on the configurations provided in ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml. This yaml file contains the following configurations and is passed as an argument to the deploy.py script:

  • scenario.yaml: This configuration file defines the translation between the short deployment scenario name (os-nosdn-kvm_ovs_dpdk_bar-ha) and the actual deployment scenario configuration file (ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml)
  • deployment-scenario-metadata: Contains the configuration metadata such as title, version, created and comment.
deployment-scenario-metadata:
   title: NFV KVM and OVS-DPDK HA deployment
   version: 0.0.1
   created: Dec 20 2016
   comment: NFV KVM and OVS-DPDK
  • stack-extensions: Stack extensions are OPNFV added-value features in the form of a fuel plugin. Plugins listed in stack-extensions are enabled and configured. The os-nosdn-kvm_ovs_dpdk_bar-ha scenario currently uses the KVM-1.0.0 plugin and the barometer plugin.
stack-extensions:
   - module: fuel-plugin-kvm
     module-config-name: fuel-nfvkvm
     module-config-version: 1.0.0
     module-config-override:
      # Module config overrides
   - module: fuel-plugin-collectd-ceilometer
     module-config-name: fuel-barometer
     module-config-version: 1.0.0
     module-config-override:
       # Module config overrides
  • dea-override-config: Used to configure the HA mode, network segmentation types and role-to-node assignments. These configurations override the corresponding keys in dea_base.yaml and dea_pod_override.yaml. These keys are used to deploy multiple nodes (3 controllers, 2 computes) as mentioned below.

    • Node 1:
      • This node has MongoDB and Controller roles
      • The controller node runs the Identity service, Image Service, management portions of Compute and Networking, Networking plug-in and the dashboard
      • Uses VLAN as an interface
    • Node 2:
      • This node has Ceph-osd and Controller roles
      • The controller node runs the Identity service, Image Service, management portions of Compute and Networking, Networking plug-in and the dashboard
      • Ceph is a massively scalable, open source, distributed storage system
      • Uses VLAN as an interface
    • Node 3:
      • This node has Controller role in order to achieve high availability.
      • Uses VLAN as an interface
    • Node 4:
      • This node has compute and Ceph-osd roles
      • Ceph is a massively scalable, open source, distributed storage system
      • By default, Compute uses KVM as the hypervisor
      • Uses DPDK as an interface
    • Node 5:
      • This node has compute and Ceph-osd roles
      • Ceph is a massively scalable, open source, distributed storage system
      • By default, Compute uses KVM as the hypervisor
      • Uses DPDK as an interface

    The below is the dea-override-config of the ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml file.

dea-override-config:
  fuel:
    FEATURE_GROUPS:
    - experimental
  nodes:
  - id: 1
    interfaces: interfaces_1
    role: controller
  - id: 2
    interfaces: interfaces_1
    role: mongo,controller
  - id: 3
    interfaces: interfaces_1
    role: ceph-osd,controller
  - id: 4
    interfaces: interfaces_dpdk
    role: ceph-osd,compute
    attributes: attributes_1
  - id: 5
    interfaces: interfaces_dpdk
    role: ceph-osd,compute
    attributes: attributes_1

  attributes_1:
    hugepages:
      dpdk:
        value: 1024
      nova:
        value:
          '2048': 1024

  settings:
    editable:
      storage:
        ephemeral_ceph:
          description: Configures Nova to store ephemeral volumes in RBD. This works best if Ceph
          is enabled for volumes and images, too. Enables live migration of all types of Ceph
          backed VMs (without this option, live migration will only work with VMs launched from
          Cinder volumes).
          label: Ceph RBD for ephemeral volumes (Nova)
          type: checkbox
          value: true
          weight: 75
        images_ceph:
          description: Configures Glance to use the Ceph RBD backend to store images. If enabled,
          this option will prevent Swift from installing.
          label: Ceph RBD for images (Glance)
          restrictions:
          - settings:storage.images_vcenter.value == true: Only one Glance backend could be selected.
          type: checkbox
          value: true
          weight: 30
  • dha-override-config: Provides information about the VM definition and network config for virtual deployment. These configurations override the pod dha definition and point to the controller, compute and fuel definition files.

    The below is the dha-override-config of the ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml file.

dha-override-config:
  nodes:
  - id: 1
    libvirtName: controller1
    libvirtTemplate: templates/virtual_environment/vms/controller.xml
  - id: 2
    libvirtName: controller2
    libvirtTemplate: templates/virtual_environment/vms/controller.xml
  - id: 3
    libvirtName: controller3
    libvirtTemplate: templates/virtual_environment/vms/controller.xml
  - id: 4
    libvirtName: compute1
    libvirtTemplate: templates/virtual_environment/vms/compute.xml
  - id: 5
    libvirtName: compute2
    libvirtTemplate: templates/virtual_environment/vms/compute.xml
  - id: 6
    libvirtName: fuel-master
    libvirtTemplate: templates/virtual_environment/vms/fuel.xml
    isFuel: yes
    username: root
    password: r00tme
  • The os-nosdn-kvm_ovs_dpdk_bar-ha scenario is successful when all 5 nodes are accessible, up and running.

Note:

  • In os-nosdn-kvm_ovs_dpdk_bar-ha scenario, OVS is installed on the compute nodes with DPDK configured
  • The Barometer plugin is also implemented along with the KVM plugin
  • Hugepages for DPDK are configured in the attributes_1 section of the ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml

  • Hugepages are only configured for compute nodes
  • This results in faster communication and data transfer among the compute nodes

1.3. Scenario Usage Overview

  • The high availability feature can be achieved by executing deploy.py with ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml as an argument.
  • Install Fuel Master and deploy OPNFV Cloud from scratch on Hardware Environment:

Command to deploy the os-nosdn-kvm_ovs_dpdk_bar-ha scenario:

$ cd ~/fuel/ci/
$ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default \
-s ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso
where,

-b is used to specify the configuration directory

-i is used to specify the image downloaded from artifacts.

Note:

Check $ sudo ./deploy.sh -h for further information.
  • os-nosdn-kvm_ovs_dpdk_bar-ha scenario can be executed from the jenkins project “fuel-os-nosdn-kvm_ovs_dpdk_bar-ha-baremetal-daily-master”
  • This scenario provides the High Availability feature by deploying 3 controller and 2 compute nodes and checking that all 5 nodes are accessible (IP, up & running).
  • The test scenario is passed if the deployment is successful and all 5 nodes are accessible (IP, up & running).

1.4. Known Limitations, Issues and Workarounds

  • Test scenario os-nosdn-kvm_ovs_dpdk_bar-ha result is not stable.

1.5. References

For more information on the OPNFV Euphrates release, please visit http://www.opnfv.org/Euphrates