OPNFV Daisy4nfv Installation Guide

Abstract

This document describes how to install the Euphrates release of OPNFV when using Daisy4nfv as a deployment tool, covering its limitations, dependencies and required resources.

Version history

Date         Ver.    Author              Comment
2017-02-07   0.0.1   Zhijiang Hu (ZTE)   Initial version

Daisy4nfv configuration

This document provides guidelines on how to install and configure the Euphrates release of OPNFV when using Daisy as a deployment tool including required software and hardware configurations.

Daisy supports installation and configuration of the host OS, OpenStack, etc. on both Virtual nodes and Bare Metal nodes.

The audience of this document is assumed to have good knowledge of networking and Unix/Linux administration.

Prerequisites

Before starting the installation of the Euphrates release of OPNFV, some planning must be done.

Retrieve the installation iso image

First of all, the installation iso, which includes the packages of Daisy, the OS, OpenStack, and so on, is needed for deploying your OPNFV environment.

The stable release iso image can be retrieved via the OPNFV software download page.

The daily build iso image can be retrieved via OPNFV artifact repository:

http://artifacts.opnfv.org/daisy.html

NOTE: Search the keyword “daisy/Euphrates” to locate the iso image.

E.g. daisy/opnfv-2017-10-06_09-50-23.iso

Download the iso file, then mount it to a specified directory and get the opnfv-*.bin from that directory.
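
For example, a minimal sketch of mounting the iso and extracting the bin file (the mount point /mnt/daisy-iso is just an assumption; any empty directory will do):

sudo mkdir -p /mnt/daisy-iso                  # assumed mount point
sudo mount -o loop opnfv-*.iso /mnt/daisy-iso
cp /mnt/daisy-iso/opnfv-*.bin .               # copy the deployment bin file out of the iso
sudo umount /mnt/daisy-iso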

The git URL and sha512 checksum of the iso image are recorded in the corresponding properties file. Based on these, the matching deployment scripts can be retrieved.

Retrieve the deployment scripts

To retrieve the Daisy repository on the Jumphost, use the following command:

  • git clone https://gerrit.opnfv.org/gerrit/daisy

To get the stable Euphrates release, you can use the following command:

  • git checkout euphrates.1.0

Setup Requirements

If you have only 1 Bare Metal server, Virtual deployment is recommended. If you have 3 or more servers, Bare Metal deployment is recommended. The minimum number of servers for each role in a Bare Metal deployment is listed below.

Role          Number of Servers
Jump Host     1
Controller    1
Compute       1

Jumphost Requirements

The Jumphost requirements are outlined below:

  1. CentOS 7.2 (Pre-installed).
  2. Root access.
  3. Libvirt virtualization support (for virtual deployment; a quick verification sketch follows this list).
  4. Minimum 1 NIC (or 2 NICs for virtual deployment).
    • PXE installation Network (receiving PXE requests from nodes and providing OS provisioning)
    • IPMI Network (node power control and setting PXE-first boot via the IPMI interface)
    • Internet access (for getting the latest OS updates)
    • External Interface (for virtual deployment, exclusively used by instance traffic to access the rest of the Internet)
  5. 16 GB of RAM for a Bare Metal deployment, 64 GB of RAM for a Virtual deployment.
  6. CPU cores: 32, Memory: 64 GB, Hard Disk: 500 GB (a Virtual deployment needs a 1 TB Hard Disk).
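
As a quick check of items 3, 5 and 6 above, the following sketch (assuming a CentOS 7 Jumphost with the libvirt packages already installed) verifies hardware virtualization support, the libvirtd service, and the available memory and disk space:

egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero output means hardware virtualization is available
systemctl status libvirtd            # the libvirt daemon should be active (virtual deployment only)
free -g                              # check the amount of RAM
df -h                                # check the available disk space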

Bare Metal Node Requirements

Bare Metal nodes require:

  1. IPMI enabled on the OOB interface for power control.
  2. BIOS boot priority should be PXE first, then local hard disk.
  3. Minimum 1 NIC for Compute nodes, 2 NICs for Controller nodes.
    • PXE installation Network (broadcasting PXE requests)
    • IPMI Network (receiving IPMI commands from the Jumphost)
    • Internet access (for getting the latest OS updates)
    • External Interface (for virtual deployment, exclusively used by instance traffic to access the rest of the Internet)

Network Requirements

Network requirements include:

  1. No DHCP or TFTP server running on networks used by OPNFV.
  2. 2-7 separate networks with connectivity between Jumphost and nodes.
    • PXE installation Network
    • IPMI Network
    • Internet access Network
    • OpenStack Public API Network
    • OpenStack Private API Network
    • OpenStack External Network
    • OpenStack Tenant Network (currently, VxLAN only)
  3. Lights-out OOB network access from the Jumphost, with IPMI enabled on the nodes (Bare Metal deployment only).
  4. The Internet access Network has Internet access, meaning gateway and DNS availability.
  5. OpenStack External Network has Internet access too if you want instances to access the Internet.

Note: All networks except the OpenStack External Network can share one NIC (default configuration) or use an exclusive NIC (reconfigured in network.yml).

Execution Requirements (Bare Metal Only)

In order to execute a deployment, one must gather the following information (a quick way to verify it with ipmitool is sketched after this list):

  1. IPMI IP addresses of the nodes.
  2. IPMI login information for the nodes (user/password).
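
Once gathered, this information can be verified from the Jumphost with ipmitool, roughly as follows (a sketch; the IP, user and password below are placeholders for the values gathered above):

ipmitool -I lanplus -H <ipmi-ip> -U <user> -P <password> power status         # check the node power state
ipmitool -I lanplus -H <ipmi-ip> -U <user> -P <password> chassis bootdev pxe  # set PXE as the next boot device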

Installation Guide (Bare Metal Deployment)

Nodes Configuration (Bare Metal Deployment)

The file below is the inventory template of deployment nodes:

"./deploy/config/bm_environment/zte-baremetal1/deploy.yml"

You can use it as a reference to write your own host names and roles.

  • name – Host name for deployment node after installation.
  • roles – Components deployed. CONTROLLER_LB is for the Controller role and COMPUTER is for the Compute role. Currently only these two roles are supported. The first CONTROLLER_LB is also used for the ODL controller. Three hosts in the inventory will be chosen to set up the Ceph storage cluster.

Set TYPE and FLAVOR

E.g.

TYPE: virtual
FLAVOR: cluster

Assignment of different roles to servers

E.g. OpenStack only deployment roles setting

hosts:
  - name: host1
    roles:
      - CONTROLLER_LB
  - name: host2
    roles:
      - COMPUTER
  - name: host3
    roles:
      - COMPUTER

NOTE: For B/M, Daisy uses the MAC addresses defined in deploy.yml to map discovered nodes to the node items defined in deploy.yml, then assigns the role described by each node item to the discovered node by name pattern. Currently, controller01, controller02, and controller03 will be assigned the Controller role while computer01, computer02, computer03, and computer04 will be assigned the Compute role.

NOTE: For V/M, there is no MAC address defined in deploy.yml for each virtual machine. Instead, Daisy will fill that blank by getting the MAC address from "virsh dumpxml".
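
For reference, the MAC address of a virtual machine can be read from libvirt like this (the domain name below is only a placeholder; list the actual domains with "virsh list --all"):

virsh dumpxml <domain-name> | grep "mac address"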

Network Configuration (Bare Metal Deployment)

Before deployment, there are some network configurations to be checked based on your network topology. The default network configuration file for Daisy is "./deploy/config/bm_environment/zte-baremetal1/network.yml". You can use it as a reference to write your own network configuration.

The following figure shows the default network configuration.

+-B/M--------+------------------------------+
|Jumperserver+                              |
+------------+                       +--+   |
|                                    |  |   |
|                +-V/M--------+      |  |   |
|                | Daisyserver+------+  |   |
|                +------------+      |  |   |
|                                    |  |   |
+------------------------------------|  |---+
                                     |  |
                                     |  |
      +--+                           |  |
      |  |       +-B/M--------+      |  |
      |  +-------+ Controller +------+  |
      |  |       | ODL(Opt.)  |      |  |
      |  |       | Network    |      |  |
      |  |       | CephOSD1   |      |  |
      |  |       +------------+      |  |
      |  |                           |  |
      |  |                           |  |
      |  |                           |  |
      |  |       +-B/M--------+      |  |
      |  +-------+  Compute1  +------+  |
      |  |       |  CephOSD2  |      |  |
      |  |       +------------+      |  |
      |  |                           |  |
      |  |                           |  |
      |  |                           |  |
      |  |       +-B/M--------+      |  |
      |  +-------+  Compute2  +------+  |
      |  |       |  CephOSD3  |      |  |
      |  |       +------------+      |  |
      |  |                           |  |
      |  |                           |  |
      |  |                           |  |
      +--+                           +--+
        ^                             ^
        |                             |
        |                             |
       /---------------------------\  |
       |      External Network     |  |
       \---------------------------/  |
              /-----------------------+---\
              |    Installation Network   |
              |    Public/Private API     |
              |      Internet Access      |
              |      Tenant Network       |
              |     Storage Network       |
              |     HeartBeat Network     |
              \---------------------------/

Note: For Flat External networks (used by default), a physical interface is needed on each compute node for recent ODL NetVirt versions. If a HeartBeat network is selected and configured in network.yml, the keepalived interface will be the heartbeat interface.

Start Deployment (Bare Metal Deployment)

(1) Git clone the latest daisy4nfv code from OPNFV: "git clone https://gerrit.opnfv.org/gerrit/daisy".

(2) Download the latest bin file (such as opnfv-2017-06-06_23-00-04.bin) of Daisy from http://artifacts.opnfv.org/daisy.html and rename it to opnfv.bin. Check https://build.opnfv.org/ci/job/daisy-os-odl-nofeature-ha-baremetal-daily-master/, and if the 'snaps_health_check' result of Functest is 'PASS', you can use this verified bin to deploy OpenStack in your own environment.
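
For example (a sketch; substitute the actual bin file link copied from the artifacts page):

# copy the real link of the latest bin file from http://artifacts.opnfv.org/daisy.html
wget -O opnfv.bin "<bin-file-url>"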

(3) Assume the cloned directory is $workdir, which is laid out like below:

[root@daisyserver daisy]# ls
ci  code  deploy  deploy.log  docker  docs  INFO  known_hosts  LICENSE
requirements.txt  setup.py  templates  test-requirements.txt  tests  tools  tox.ini

Make sure the opnfv.bin file is in $workdir.

(4) Enter $workdir, which is laid out like below:

[root@daisyserver daisy]# ls
ci  code  deploy  docker  docs  INFO  LICENSE  requirements.txt  setup.py
templates  test-requirements.txt  tests  tools  tox.ini

Create the folder labs/zte/pod2/daisy/config in $workdir.

(5) Move ./deploy/config/bm_environment/zte-baremetal1/deploy.yml and ./deploy/config/bm_environment/zte-baremetal1/network.yml to the labs/zte/pod2/daisy/config directory.
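
Steps (4) and (5) can be done, for instance, as follows (a sketch; cp is used here instead of mv so that the original templates are kept):

cd $workdir
mkdir -p labs/zte/pod2/daisy/config
cp ./deploy/config/bm_environment/zte-baremetal1/deploy.yml  labs/zte/pod2/daisy/config/
cp ./deploy/config/bm_environment/zte-baremetal1/network.yml labs/zte/pod2/daisy/config/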

Note: If SELinux is disabled on the host, please delete the section shown below from all xml files in the templates/physical_environment/vms/ directory:

<seclabel type='dynamic' model='selinux' relabel='yes'>
  <label>system_u:system_r:svirt_t:s0:c182,c195</label>
  <imagelabel>system_u:object_r:svirt_image_t:s0:c182,c195</imagelabel>
</seclabel>

(6) Configure the bridge on the jumperserver so that the Daisy VM can connect to the target nodes, using the commands below:

brctl addbr br7
brctl addif br7 enp3s0f3    # the interface used by the jumperserver to connect to the Daisy VM
ifconfig br7 10.20.7.1 netmask 255.255.255.0 up
service network restart

(7) Run the script deploy.sh in daisy/ci/deploy/ with the command:

sudo ./ci/deploy/deploy.sh -L $(cd ./;pwd) -l zte -p pod2 -s os-nosdn-nofeature-noha

Note:
  • The value after -L should be an absolute path pointing to the directory that contains the labs/zte/pod2/daisy/config directory.
  • The value after -p (pod2) comes from the path "labs/zte/pod2".
  • The value after -l (zte) comes from the path "labs/zte".
  • -s "os-nosdn-nofeature-ha" is used to deploy multi-node OpenStack.
  • -s "os-nosdn-nofeature-noha" is used to deploy all-in-one OpenStack.

(8) When the deployment succeeds, the floating IP of OpenStack is 10.20.7.11, the login account is "admin" and the password is "keystone".

Installation Guide (Virtual Deployment)

Nodes Configuration (Virtual Deployment)

The file below is the inventory template of deployment nodes:

"./deploy/config/vm_environment/zte-virtual1/deploy.yml"

You can use it as a reference to write your own host names and roles.

  • name – Host name for deployment node after installation.
  • roles – Components deployed.

Set TYPE and FLAVOR

E.g.

TYPE: virtual
FLAVOR: cluster

Assignment of different roles to servers

E.g. OpenStack only deployment roles setting

hosts:
  - name: host1
    roles:
      - controller

  - name: host2
    roles:
      - compute

NOTE: For B/M, Daisy uses the MAC addresses defined in deploy.yml to map discovered nodes to the node items defined in deploy.yml, then assigns the role described by each node item to the discovered node by name pattern. Currently, controller01, controller02, and controller03 will be assigned the Controller role while computer01, computer02, computer03, and computer04 will be assigned the Compute role.

NOTE: For V/M, there is no MAC address defined in deploy.yml for each virtual machine. Instead, Daisy will fill that blank by getting the MAC address from "virsh dumpxml".

E.g. OpenStack and ceph deployment roles setting

hosts:
  - name: host1
    roles:
      - controller

  - name: host2
    roles:
      - compute

Network Configuration (Virtual Deployment)

Before deployment, there are some network configurations to be checked based on your network topology. The default network configuration file for Daisy is "daisy/deploy/config/vm_environment/zte-virtual1/network.yml". You can use it as a reference to write your own network configuration.

The following figure shows the default network configuration.

+-B/M--------+------------------------------+
|Jumperserver+                              |
+------------+                       +--+   |
|                                    |  |   |
|                +-V/M--------+      |  |   |
|                | Daisyserver+------+  |   |
|                +------------+      |  |   |
|                                    |  |   |
|     +--+                           |  |   |
|     |  |       +-V/M--------+      |  |   |
|     |  +-------+ Controller +------+  |   |
|     |  |       | ODL(Opt.)  |      |  |   |
|     |  |       | Network    |      |  |   |
|     |  |       | Ceph1      |      |  |   |
|     |  |       +------------+      |  |   |
|     |  |                           |  |   |
|     |  |                           |  |   |
|     |  |                           |  |   |
|     |  |       +-V/M--------+      |  |   |
|     |  +-------+  Compute1  +------+  |   |
|     |  |       |  Ceph2     |      |  |   |
|     |  |       +------------+      |  |   |
|     |  |                           |  |   |
|     |  |                           |  |   |
|     |  |                           |  |   |
|     |  |       +-V/M--------+      |  |   |
|     |  +-------+  Compute2  +------+  |   |
|     |  |       |  Ceph3     |      |  |   |
|     |  |       +------------+      |  |   |
|     |  |                           |  |   |
|     |  |                           |  |   |
|     |  |                           |  |   |
|     +--+                           +--+   |
|       ^                             ^     |
|       |                             |     |
|       |                             |     |
|      /---------------------------\  |     |
|      |      External Network     |  |     |
|      \---------------------------/  |     |
|             /-----------------------+---\ |
|             |    Installation Network   | |
|             |    Public/Private API     | |
|             |      Internet Access      | |
|             |      Tenant Network       | |
|             |     Storage Network       | |
|             |     HeartBeat Network     | |
|             \---------------------------/ |
+-------------------------------------------+

Note: For Flat External networks (used by default), a physical interface is needed on each compute node for recent ODL NetVirt versions. If a HeartBeat network is selected and configured in network.yml, the keepalived interface will be the heartbeat interface.

Start Deployment (Virtual Deployment)

(1) Git clone the latest daisy4nfv code from OPNFV: "git clone https://gerrit.opnfv.org/gerrit/daisy", and make sure the current branch is master.

(2) Download the latest bin file (such as opnfv-2017-06-06_23-00-04.bin) of Daisy from http://artifacts.opnfv.org/daisy.html and rename it to opnfv.bin. Check https://build.opnfv.org/ci/job/daisy-os-odl-nofeature-ha-baremetal-daily-master/, and if the 'snaps_health_check' result of Functest is 'PASS', you can use this verified bin to deploy OpenStack in your own environment.

(3) Assume the cloned directory is $workdir, which is laid out like below:

[root@daisyserver daisy]# ls
ci  code  deploy  docker  docs  INFO  LICENSE  requirements.txt  setup.py
templates  test-requirements.txt  tests  tools  tox.ini

Make sure the opnfv.bin file is in $workdir.

(4) Enter $workdir and create the folder labs/zte/virtual1/daisy/config in $workdir.

(5) Move deploy/config/vm_environment/zte-virtual1/deploy.yml and deploy/config/vm_environment/zte-virtual1/network.yml to the labs/zte/virtual1/daisy/config directory.
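
As in the bare metal case, steps (4) and (5) can be done as follows (a sketch; cp instead of mv keeps the original templates):

cd $workdir
mkdir -p labs/zte/virtual1/daisy/config
cp ./deploy/config/vm_environment/zte-virtual1/deploy.yml  labs/zte/virtual1/daisy/config/
cp ./deploy/config/vm_environment/zte-virtual1/network.yml labs/zte/virtual1/daisy/config/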

Note: The zte-virtual1 config files deploy OpenStack with five nodes (3 controller/LB nodes and 2 compute nodes). If you want to deploy an all-in-one OpenStack, change zte-virtual1 to zte-virtual2.

Note: If SELinux is disabled on the host, please delete the section shown below from all xml files in the templates/virtual_environment/vms/ directory:

<seclabel type='dynamic' model='selinux' relabel='yes'>
  <label>system_u:system_r:svirt_t:s0:c182,c195</label>
  <imagelabel>system_u:object_r:svirt_image_t:s0:c182,c195</imagelabel>
</seclabel>

(6) Run the script deploy.sh in daisy/ci/deploy/ with the command:

sudo ./ci/deploy/deploy.sh -L $(cd ./;pwd) -l zte -p virtual1 -s os-nosdn-nofeature-ha

Note:
  • The value after -L should be an absolute path pointing to the directory that contains the labs/zte/virtual1/daisy/config directory.
  • The value after -p (virtual1) comes from the path "labs/zte/virtual1".
  • The value after -l (zte) comes from the path "labs/zte".
  • -s "os-nosdn-nofeature-ha" is used to deploy multi-node OpenStack.
  • -s "os-nosdn-nofeature-noha" is used to deploy all-in-one OpenStack.

(7) When the deployment succeeds, the floating IP of OpenStack is 10.20.11.11, the login account is "admin" and the password is "keystone".

Deployment Error Recovery Guide

Deployment may fail for different kinds of reasons, such as a Daisy VM creation error, target node failure during OS installation, or a Kolla deploy command error. These errors can be grouped into several error levels. We define the Recovery Levels below to fulfill the recovery requirements at each error level.

1. Recovery Level 0

This level restarts the whole deployment. It is mainly used to retry after errors such as a failed Daisy VM creation. For example, we use the following command to do a virtual deployment (on the jump host):

sudo ./ci/deploy/deploy.sh -b ./ -l zte -p virtual1 -s os-nosdn-nofeature-ha

If the command failed because of a Daisy VM creation error, then re-running the above command will restart the whole deployment, which includes rebuilding the Daisy VM image and restarting the Daisy VM.

2. Recovery Level 1

If the Daisy VM was created successfully, but bugs were encountered in the Daisy code or in the software of the target OS which prevent the deployment from completing, the user or the developer may not want to recreate the Daisy VM during the next deployment attempt but just modify some pieces of code in it. To achieve this, he/she can redo the deployment by deleting all clusters and hosts first (in the Daisy VM):

source /root/daisyrc_admin
# delete all existing clusters
for i in `daisy cluster-list | awk -F "|" '{print $2}' | sed -n '4p' | tr -d " "`;do daisy cluster-delete $i;done
# delete all discovered hosts
for i in `daisy host-list | awk -F "|" '{print $2}'| grep -o "[^ ]\+\( \+[^ ]\+\)*"|tail -n +2`;do daisy host-delete $i;done

Then, adjust the deployment command as below and run it again (on the jump host):

sudo ./ci/deploy/deploy.sh -S -b ./ -l zte -p virtual1 -s os-nosdn-nofeature-ha

Pay attention to the "-S" argument above; it lets the deployment process skip re-creating the Daisy VM and use the existing one.

3. Recovery Level 2

If both the Daisy VM and the target nodes' OS are OK, but an error occurred during the OpenStack deployment, then there is no need to re-install the target OS before retrying the deployment. In this level, all we need to do is retry the Daisy deployment commands as follows (in the Daisy VM):

source /root/daisyrc_admin
daisy uninstall <cluster-id>
daisy install <cluster-id>

This basically does a kolla-ansible destroy followed by a kolla-ansible deploy.
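
For reference, this corresponds roughly to the following kolla-ansible commands, which Daisy drives internally (a sketch; the inventory path is a placeholder, and you normally do not run these by hand):

kolla-ansible -i <inventory> destroy --yes-i-really-really-mean-it   # tear down the failed OpenStack deployment
kolla-ansible -i <inventory> deploy                                  # deploy OpenStack again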

OpenStack Minor Version Update Guide

Thanks to Kolla's kolla-ansible upgrade function, Daisy is able to update the OpenStack minor version as follows:

1. Get the new version file from the Daisy team only. Since Daisy's Kolla images are built to meet the OPNFV requirements and have their own file packaging layout, Daisy requires users to always use the Kolla image file built by the Daisy team. Currently, it can be obtained from http://artifacts.opnfv.org/daisy/upstream, or see the "Build Your Own Kolla Image For Daisy" chapter for how to build your own image.

2. Put the new version file into /var/lib/daisy/versionfile/kolla/, for example: /var/lib/daisy/versionfile/kolla/kolla-image-ocata-170811155446.tgz

3. Add the version file to Daisy's version management database, then get the version ID.

[root@daisy ~]# source /root/daisyrc_admin
[root@daisy ~]# daisy version-add kolla-image-ocata-170811155446.tgz kolla
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| checksum    | None                                 |
| created_at  | 2017-08-28T06:45:25.000000           |
| description | None                                 |
| id          | 8be92587-34d7-43e8-9862-a5288c651079 |
| name        | kolla-image-ocata-170811155446.tgz   |
| owner       | None                                 |
| size        | 0                                    |
| status      | unused                               |
| target_id   | None                                 |
| type        | kolla                                |
| updated_at  | 2017-08-28T06:45:25.000000           |
| version     | None                                 |
+-------------+--------------------------------------+

4. Get the cluster ID.

[root@daisy ~]# daisy cluster-list
+--------------------------------------+-------------+...
| ID                                   | Name        |...
+--------------------------------------+-------------+...
| d4c1e0d3-c4b8-4745-aab0-0510e62f0ebb | clustertest |...
+--------------------------------------+-------------+...

5. Issue the update command, passing the cluster ID and version ID.

[root@daisy ~]# daisy update d4c1e0d3-c4b8-4745-aab0-0510e62f0ebb --update-object kolla --version-id 8be92587-34d7-43e8-9862-a5288c651079
+----------+--------------+
| Property | Value        |
+----------+--------------+
| status   | begin update |
+----------+--------------+

6. Since step 5's command is non-blocking, the user needs to run the following command to get the updating progress.

[root@daisy ~]# daisy host-list --cluster-id d4c1e0d3-c4b8-4745-aab0-0510e62f0ebb
...+---------------+-------------+-------------------------+
...| Role_progress | Role_status | Role_messages           |
...+---------------+-------------+-------------------------+
...| 0             | updating    | prechecking envirnoment |
...+---------------+-------------+-------------------------+

Note: The above command returns many fields. The user only has to pay attention to the Role_xxx fields in this case.

Build Your Own Kolla Image For Daisy

The following command will build an Ocata Kolla image for Daisy based on Daisy's fork of the openstack/kolla project. This is also the method Daisy used for the Euphrates release.

The reason for using a fork of the openstack/kolla project is to backport ODL support from the Pike branch to the Ocata branch.

cd ./ci
./kolla-build.sh

After building, the above command will put the Kolla image into the /tmp/kolla-build-output directory, and the image version will be 4.0.2.

If you want to build an image which can be used to update 4.0.2, run the following command:

cd ./ci
./kolla-build.sh -e 1

This time the image version will be 4.0.2.1, which is higher than 4.0.2, so it can be used to replace the old version.

Deployment Test Guide

After a successful deployment of OpenStack, daisy4nfv uses Functest to test the OpenStack APIs. You can follow the instructions below to test the deployed OpenStack from the jumperserver.

1. Run 'docker pull opnfv/functest', then run the 'docker images' command to make sure you have the latest Functest image.

2. Start the Functest container:

docker run -ti --name functest -e INSTALLER_TYPE="daisy" -e INSTALLER_IP="10.20.11.2" -e NODE_NAME="zte-vtest" -e DEPLOY_SCENARIO="os-nosdn-nofeature-ha" -e BUILD_TAG="jenkins-functest-daisy-virtual-daily-master-1259" -e DEPLOY_TYPE="virt" opnfv/functest:latest /bin/bash

Before running the above command, change the following parameters as needed:
  • DEPLOY_SCENARIO: indicates the scenario
  • DEPLOY_TYPE: virt/baremetal
  • NODE_NAME: pod name
  • INSTALLER_IP: Daisy VM node IP

3. Log in to the Daisy VM node to get the /etc/kolla/admin-openrc.sh file, and write its contents into the /home/opnfv/functest/conf/openstack.creds file of the Functest container.
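
One possible way to do this from the jumperserver, assuming ssh access to the Daisy VM (IP from step 2) and the container name used above:

scp root@10.20.11.2:/etc/kolla/admin-openrc.sh ./admin-openrc.sh                # fetch the credentials from the Daisy VM
docker cp ./admin-openrc.sh functest:/home/opnfv/functest/conf/openstack.creds  # copy them into the Functest container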

4. Run the command 'functest env prepare' to prepare the Functest environment.

5. Run the command 'functest testcase list' to list all the test cases that can be run.

6. Run the command 'functest testcase run testcase_name' to run the testcase_name test case of Functest.