This document describes how to install the Arno SR1 release of OPNFV when using Foreman/QuickStack as a deployment tool, covering its limitations, dependencies and required system resources.
Arno SR1 release of OPNFV when using Foreman as a deployment tool Docs (c) by Tim Rozet (RedHat)
Arno SR1 release of OPNFV when using Foreman as a deployment tool Docs are licensed under a Creative Commons Attribution 4.0 International License. You should have received a copy of the license along with this. If not, see <http://creativecommons.org/licenses/by/4.0/>.
| Date | Ver. | Author | Comment |
| --- | --- | --- | --- |
| 2015-05-07 | 0.0.1 | Tim Rozet (RedHat) | First draft |
| 2015-05-27 | 0.0.2 | Christopher Price (Ericsson AB) | Minor changes & formatting |
| 2015-06-02 | 0.0.3 | Christopher Price (Ericsson AB) | Minor changes & formatting |
| 2015-06-03 | 0.0.4 | Ildiko Vancsa (Ericsson) | Minor changes |
| 2015-09-10 | 0.2.0 | Tim Rozet (Red Hat) | Update to SR1 |
| 2015-09-25 | 0.2.1 | Randy Levensalor (CableLabs) | Added CLI verification |
This document describes the steps to install an OPNFV Arno SR1 reference platform, as defined by the Bootstrap/Getting-Started (BGS) Project using the Foreman/QuickStack installer.
The audience is assumed to have a good background in networking and Linux administration.
Foreman/QuickStack uses the Foreman Open Source project as a server management tool, which in turn manages and executes Genesis/QuickStack. Genesis/QuickStack consists of layers of Puppet modules that are capable of provisioning the OPNFV Target System (3 controllers, n number of compute nodes).
The Genesis repo contains the necessary tools to install and deploy an OPNFV target system using Foreman/QuickStack. These tools consist of the Foreman/QuickStack bootable ISO (arno.2015.2.0.foreman.iso) and the automatic deployment script (deploy.sh).
An OPNFV install requires a "Jumphost" in order to operate. The bootable ISO will allow you to install a customized CentOS 7 release to the Jumphost, which then gives you the required packages needed to run deploy.sh. If you already have a Jumphost with CentOS 7 installed, you may choose to ignore the ISO step and instead move directly to cloning the git repository and running deploy.sh. In this case, deploy.sh will install the necessary packages for you in order to execute.
deploy.sh installs the Foreman/QuickStack VM server using Vagrant with VirtualBox as its provider. This VM is then used to provision the OPNFV target system (3 controllers, n compute nodes). These nodes can be either virtual or bare metal. This guide contains instructions for installing both.
The Jumphost requirements are outlined below:
Network requirements include:
Note: Storage network will be consolidated to the private network if only 3 networks are used.
Bare metal nodes require:
In order to execute a deployment, one must gather the following information:
Note: For a single NIC/network bare metal deployment, the MAC address of the admin and private interfaces will be the same.
The setup presumes that you have 6 bare metal servers and have already set up connectivity on at least 1 or 3 interfaces for all servers via a TOR switch or other network implementation.
The physical TOR switches are not automatically configured by the OPNFV reference platform. All the networks involved in the OPNFV infrastructure, as well as the provider networks and the private tenant VLANs, need to be manually configured.
The Jumphost can be installed using the bootable ISO. The Jumphost should then be configured with an IP gateway on its admin or public interface and configured with a working DNS server. The Jumphost should also have routable access to the lights out network.
deploy.sh is then executed in order to install the Foreman/QuickStack Vagrant VM. deploy.sh uses a configuration file with YAML format in order to know how to install and provision the OPNFV target system. The information gathered under section Execution Requirements (Bare Metal Only) is put into this configuration file.
deploy.sh brings up a CentOS 7 Vagrant VM, provided by VirtualBox. The VM then executes an Ansible project called Khaleesi in order to install Foreman and QuickStack. Once the Foreman/QuickStack VM is up, Foreman will be configured with the nodes' information. This includes MAC address, IPMI, OpenStack type (controller, compute, OpenDaylight controller) and other information. At this point Khaleesi makes a REST API call to Foreman to instruct it to provision the hardware.
Foreman will then reboot the nodes via IPMI. The nodes should already be set to PXE boot first off the admin interface. Foreman will then allow the nodes to PXE and install CentOS 7 as well as Puppet. Foreman/QuickStack VM server runs a Puppet Master and the nodes query this master to get their appropriate OPNFV configuration. The nodes will then reboot one more time and once back up, will DHCP on their private, public and storage NICs to gain IP addresses. The nodes will now check in via Puppet and start installing OPNFV.
Khaleesi will wait until these nodes are fully provisioned and then return a success or failure based on the outcome of the Puppet application.
The VM node deployment operates almost the same way as the bare metal deployment, with a few differences. deploy.sh still installs the Foreman/QuickStack VM in exactly the same way, but the part of the Khaleesi Ansible playbook that IPMI reboots/PXE boots the servers is skipped. Instead, deploy.sh brings up N additional Vagrant VMs (where N is 3 control nodes + n compute nodes). These VMs already come up with CentOS 7, so instead of re-provisioning the entire VM, deploy.sh initiates a small Bash script that signals to Foreman that those nodes are built and installs/configures Puppet on them.
To Foreman these nodes look as if they have just been built, and they register the same way as bare metal nodes.
A VM deployment will automatically use the host's default gateway interface for all of the VMs' internet access by bridging the VMs' NICs (public network). The other networks (admin, private, storage) are all created as internal VirtualBox networks. Therefore only a single interface on the host is needed for a VM deployment.
This section goes step-by-step on how to correctly install and provision the OPNFV target system to bare metal nodes.
You now need to take the MAC address/IPMI info gathered in section Execution Requirements (Bare Metal Only) and create the YAML inventory (also known as configuration) file for deploy.sh.
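A minimal sketch of what one node entry in such an inventory file might look like is shown below. The key names here are illustrative assumptions, not the exact schema; use the sample opnfv_ksgen_settings file shipped in the Genesis repo as the authoritative template.

# Illustrative only -- key names are assumptions, not the exact opnfv_ksgen_settings schema.
nodes:
  node1:
    type: controller                 # controller, compute, or OpenDaylight controller
    admin_mac: "52:54:00:aa:bb:01"   # MAC of the admin/PXE interface gathered earlier
    bmc_ip: 192.168.1.101            # IPMI (lights out) address of this server
    bmc_user: admin                  # IPMI username
    bmc_pass: password               # IPMI password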
You are now ready to deploy OPNFV! deploy.sh will use your /var/opt/opnfv/ directory to store its Vagrant VMs. Your Foreman/QuickStack Vagrant VM will be running out of /var/opt/opnfv/foreman_vm/.
It is also recommended that you power off your nodes before running deploy.sh. If there are DHCP servers or other network services running on those nodes, they may conflict with the installation.
Follow the steps below to execute:
Note: This applies to the default detection of at least 3 VLANs/interfaces configured on your Jumphost, with interfaces assigned to networks by NIC order (1st Admin, 2nd Private, 3rd Public). If you wish to use a single interface for a bare metal install, see the help output for "-single_baremetal_nic". If you would like to specify the NIC-to-logical-network mapping, see the help output for "-admin_nic", "-private_nic", "-public_nic", "-storage_nic". Example invocations are sketched below.
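For example, using the options named in the note above (the interface names em1/em2/em3 are placeholders for your own NICs; confirm the exact argument syntax with "deploy.sh -h"):

./deploy.sh -single_baremetal_nic em1                          # single-interface bare metal install
./deploy.sh -admin_nic em1 -private_nic em2 -public_nic em3    # explicit NIC-to-network mapping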
Now that the installer has finished it is a good idea to check and make sure things are working correctly. To access your Foreman/QuickStack VM:
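As a minimal sketch, based on the Vagrant layout described elsewhere in this guide, you can reach the VM from the Jumphost like this:

cd /var/opt/opnfv/foreman_vm/
vagrant ssh        # log into the Foreman/QuickStack VM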
Note: You can find out more about how to use Foreman by going to http://www.theforeman.org/ or by watching a walkthrough video here: https://bluejeans.com/s/89gb/
Now that you have Horizon access, let's make sure OpenStack and the OPNFV target system are working correctly:
Note: You may also want to expand this pool by giving a larger range, or you can simply hit Create without entering anything and the entire subnet range will be used for DHCP.
Congratulations, you have successfully installed OPNFV!
This section is for users who do not have web access or prefer to use command line rather than a web browser to validate the OpenStack installation. Do not run this if you have already completed the OpenStack verification, since this uses the same names.
Install the OpenStack CLI tools or log-in to one of the compute or control servers.
Find the IP of the Keystone public VIP. As root:
cat /var/opt/opnfv/foreman_vm/opnfv_ksgen_settings.yml | grep keystone_public_vip
Set the environment variables. Substitute the keystone public VIP for <VIP> below.
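A sketch of the standard OpenStack environment variables is shown below; the tenant, user and password values are assumptions, so take the real ones from your opnfv_ksgen_settings.yml.

export OS_AUTH_URL=http://<VIP>:5000/v2.0
export OS_TENANT_NAME=admin        # assumption -- use your deployment's tenant
export OS_USERNAME=admin           # assumption -- use your deployment's user
export OS_PASSWORD=<admin password from opnfv_ksgen_settings.yml>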
Load the CirrOS image into glance.
glance image-create --copy-from http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --name 'CirrOS'
Verify the image is downloaded. The status will be "active" when the download completes.
glance image-show CirrOS
Create a private tenant network.
neutron net-create test_network
Verify the network has been created by running the command below.
neutron net-show test_network
Create a subnet for the tenant network.
neutron subnet-create test_network --name test_subnet --dns-nameserver 8.8.8.8 10.0.0.0/24
Verify the subnet was created.
neutron subnet-show test_subnet
Add an interface from the test_subnet to the provider router.
neutron router-interface-add provider_router test_subnet
Verify the interface was added.
neutron router-port-list
Deploy a VM.
nova boot --flavor 1 --image CirrOS cirros1
Wait for the VM to complete booting. This can be completed by viewing the console log until a login prompt appears.
nova console-log cirros1
Get the local IP of the VM.
nova show cirros1 | grep test_network
Get the port ID for that IP. Replace 10.0.0.2 in the command below with the IP from the previous step; the port ID is the first series of numbers and letters in the output.
neutron port-list | grep 10.0.0.2 | awk ' { print $2 } '
Assign a floating IP to the VM. Substitute the port ID from the previous command for <PORT_ID>.
neutron floatingip-create --port-id <PORT_ID> provider_network
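If you prefer to script the last two steps together, a sketch like the following captures the port ID into a shell variable first (the IP value is a placeholder for the address reported by nova show):

VM_IP=10.0.0.2                                              # replace with the IP from nova show
PORT_ID=$(neutron port-list | grep $VM_IP | awk '{ print $2 }')
neutron floatingip-create --port-id $PORT_ID provider_network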
Log in to the VM. Substitute <FLOATING_IP> with the floating_ip_address displayed in the output of the previous command.
ssh cirros@<FLOATING_IP>
Logout and create a second VM.
nova boot --flavor 1 --image CirrOS cirros2
Get the IP for cirros2.
nova show cirros2 | grep test_network
Log back into cirros1 (repeating the ssh step above) and ping cirros2. Replace <CIRROS2> with the IP from the previous step.
ping <CIRROS2>
This section goes step-by-step on how to correctly install and provision the OPNFV target system to VM nodes.
Follow the instructions in the Install Bare Metal Jumphost section, except that you only need 1 network interface on the host system with internet connectivity.
It is optional to create an inventory file for virtual deployments. Since the nodes are virtual, you are welcome to use the provided opnfv_ksgen_settings files. You may also elect to customize your deployment; options include modifying the domain name of your deployment as well as allocating specific resources per node.
Modifying VM resources is necessary for larger virtual deployments in order to run more Nova instances. To modify these resources you can edit each of the following node parameters in the Inventory file:
You are now ready to deploy OPNFV! deploy.sh will use your /var/opt/opnfv/ directory to store its Vagrant VMs. Your Foreman/QuickStack Vagrant VM will run out of /var/opt/opnfv/foreman_vm/. Your compute and subsequent controller nodes will run in:
Each VM will be brought up and bridged to your Jumphost NIC for the public network. deploy.sh will first bring up your Foreman/QuickStack Vagrant VM and afterwards it will bring up each of the nodes listed above, in order of controllers first.
Follow the steps below to execute:
Note: You may also wish to use other options like manually selecting the NIC to be used on your host, etc. Please use "deploy.sh -h" to see a full list of options available.
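For example, start by listing the available options; the -virtual flag shown here is an assumption, so confirm the exact flag name in the help output before using it:

./deploy.sh -h          # list all supported options
./deploy.sh -virtual    # assumed flag for a virtual (VM node) deployment -- verify via -h first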
Follow the instructions in the Verifying the Setup section.
Also, for a VM deployment you can easily access your nodes by going to /var/opt/opnfv/<node name> and then running vagrant ssh (password is "vagrant"). You can use this to go to a controller and check OpenStack services, OpenDaylight, etc.
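For example (the node name placeholder matches the directory layout above):

cd /var/opt/opnfv/<node name>
vagrant ssh              # password is "vagrant"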
Follow the steps in OpenStack Verification section.
Please see the Arno FAQ.
All Foreman/QuickStack and "common" entities are protected by the Apache 2.0 License.
Upstream OpenDaylight provides a number of packaging and deployment options meant for consumption by downstream projects like OPNFV.
Currently, OPNFV Foreman uses OpenDaylight's Puppet module, which in turn depends on OpenDaylight's RPM hosted on the CentOS Community Build System.
| Authors: | Tim Rozet (trozet@redhat.com) |
| --- | --- |
| Version: | 0.2.0 |
Documentation tracking
Revision: 563547b4a9f44090f32c0e17d040114854563760
Build date: Wed Sep 30 21:27:27 UTC 2015