Compliance Verification program test specification

Introduction

The OPNFV CVP provides a series of test areas aimed at evaluating the operation of an NFV system in accordance with carrier networking needs. Each test area contains a number of associated test cases, which are described in detail in the associated test specification.

All tests in the CVP are required to fulfill a specific set of criteria so that the CVP can provide a fair assessment of the system under test. Test requirements are described in the ‘Test Case Requirements’ document.

All test areas addressed in the CVP are covered in the following test specification documents.

Revision: 5d2ee5ed2a8629e2a14d371f0f040efcb669e0d6 Build date: 2017-10-25

OpenStack Services HA test specification

Scope

The HA test area evaluates the ability of the System Under Test to support service continuity and recovery from component failures in OpenStack controller services (“nova-api”, “neutron-server”, “keystone”, “glance-api”, “cinder-api”) and in the “load balancer” service.

The tests in this test area emulate component failures by killing the processes of the above target services, stressing the CPU load, or blocking disk I/O on the selected controller node, and then check whether the impacted services are still available and whether the killed processes are recovered on the selected controller node within a given time interval.

References

This test area references the following specifications:

Definitions and abbreviations

The following terms and abbreviations are used in conjunction with this test area:

  • SUT - system under test
  • Monitor - tools used to measure the service outage time and the process outage time
  • Service outage time - the outage time (seconds) of the specific OpenStack service
  • Process outage time - the outage time (seconds) from the specific processes being killed until their recovery

System Under Test (SUT)

The system under test is assumed to be the NFVi and VIM in operation on a Pharos compliant infrastructure.

The SUT is assumed to be in a high availability configuration, which typically means that more than one controller node is present in the System Under Test.

Test Area Structure

The HA test area is structured with the following test cases in a sequential manner.

Each test case is able to run independently. A preceding test case’s failure will not affect subsequent test cases.

Preconditions of each test case will be described in the following test descriptions.

Test Descriptions

Test Case 1 - Controller node OpenStack service down - nova-api
Short name

opnfv.ha.tc001.nova-api_service_down

Use case specification

This test case verifies the service continuity capability in the face of the software process failure. It kills the processes of OpenStack “nova-api” service on the selected controller node, then checks whether the “nova-api” service is still available during the failure, by creating a VM then deleting the VM, and checks whether the killed processes are recovered within a given time interval.

Test preconditions

There is more than one controller node providing the “nova-api” service API endpoint. One controller node is denoted as Node1 in the following.

Basic test flow execution description and pass/fail criteria
Methodology for verifying service continuity and recovery

The service continuity and process recovery capabilities of the “nova-api” service are evaluated by monitoring service outage time, process outage time, and the results of nova operations.

Service outage time is measured by continuously executing the “openstack server list” command in a loop and checking if the response of the command request is returned with no failure. When the response fails, the “nova-api” service is considered to be in outage. The time between the first response failure and the last response failure is taken as the service outage time.
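The measurement described above can be sketched as a small shell function. This is a hypothetical illustration, not part of the specification: the function name and parameters are assumptions, with the CLI command and monitoring window passed as arguments.

```shell
#!/usr/bin/env bash
# Hypothetical service-outage monitor: run a CLI command in a loop and
# derive the outage time from the first and last failure timestamps.
monitor_service_outage() {
    local cmd="$1" window="${2:-60}" first_fail="" last_fail="" now
    local deadline=$(( $(date +%s) + window ))
    while [ "$(date +%s)" -lt "$deadline" ]; do
        if ! $cmd >/dev/null 2>&1; then          # e.g. "openstack server list"
            now=$(date +%s.%N)
            [ -z "$first_fail" ] && first_fail="$now"
            last_fail="$now"
        fi
        sleep 0.1
    done
    if [ -n "$first_fail" ]; then
        awk -v a="$first_fail" -v b="$last_fail" \
            'BEGIN { printf "service outage time: %.1fs\n", b - a }'
    else
        echo "service outage time: 0.0s"
    fi
}

# Against the real SUT (assuming OpenStack credentials are sourced):
#   monitor_service_outage "openstack server list" 300
```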

Process outage time is measured by checking the status of “nova-api” processes on the selected controller node. The time from the “nova-api” processes being killed to the time of their recovery is the process outage time. Process recovery is verified by checking the existence of “nova-api” processes.
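The process check can be sketched similarly with pgrep. Again a hypothetical illustration (function name and parameters are assumptions), using “nova-api” as the example pattern:

```shell
#!/usr/bin/env bash
# Hypothetical process-outage monitor: time the interval between the
# target processes disappearing and reappearing.
monitor_process_outage() {
    local pattern="$1" window="${2:-60}" down_at="" up_at=""
    local deadline=$(( $(date +%s) + window ))
    while [ "$(date +%s)" -lt "$deadline" ]; do
        if pgrep -f "$pattern" >/dev/null 2>&1; then
            # processes exist; if they were down before, they have recovered
            if [ -n "$down_at" ]; then up_at=$(date +%s); break; fi
        else
            # first moment the processes are observed as killed
            [ -z "$down_at" ] && down_at=$(date +%s)
        fi
        sleep 0.1
    done
    if [ -n "$down_at" ] && [ -n "$up_at" ]; then
        echo "process outage time: $((up_at - down_at))s"
    else
        echo "no recovery observed within ${window}s"
    fi
}

# Usage on Node1: monitor_process_outage "nova-api" 60
```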

If all nova operations are carried out correctly within a given time interval, the “nova-api” service is considered to be continuously available.

Test execution
  • Test action 1: Connect to Node1 through SSH, and check that “nova-api” processes are running on Node1
  • Test action 2: Create an image with “openstack image create test-cirros --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare”
  • Test action 3: Execute “openstack flavor create m1.test --id auto --ram 512 --disk 1 --vcpus 1” to create flavor “m1.test”.
  • Test action 4: Start two monitors: one for “nova-api” processes and the other for “openstack server list” command. Each monitor will run as an independent process
  • Test action 5: Connect to Node1 through SSH, and then kill the “nova-api” processes
  • Test action 6: When “openstack server list” returns with no error, calculate the service outage time, and execute command “openstack server create --flavor m1.test --image test-cirros test-instance”
  • Test action 7: Continuously execute “openstack server show test-instance” to check if the status of VM “test-instance” is “Active”
  • Test action 8: If VM “test-instance” is “Active”, execute “openstack server delete test-instance”, then execute “openstack server list” to check if the VM is not in the list
  • Test action 9: Continuously measure process outage time from the monitor until the process outage time is more than 30s
Pass / fail criteria

The process outage time is less than 30s.

The service outage time is less than 5s.

The nova operations are carried out in the above order and no errors occur.

A negative result will be generated if any of the above criteria are not fully met.

Post conditions

Restart the “nova-api” processes if they are not running. Delete the image with “openstack image delete test-cirros”. Delete the flavor with “openstack flavor delete m1.test”.

Test Case 2 - Controller node OpenStack service down - neutron-server
Short name

opnfv.ha.tc002.neutron-server_service_down

Use case specification

This test verifies the high availability of the “neutron-server” service provided by OpenStack controller nodes. It kills the processes of OpenStack “neutron-server” service on the selected controller node, then checks whether the “neutron-server” service is still available, by creating a network and deleting the network, and checks whether the killed processes are recovered.

Test preconditions

There is more than one controller node providing the “neutron-server” service API endpoint. One controller node is denoted as Node1 in the following.

Basic test flow execution description and pass/fail criteria
Methodology for monitoring high availability

The high availability of the “neutron-server” service is evaluated by monitoring service outage time, process outage time, and the results of neutron operations.

Service outage time is tested by continuously executing the “openstack router list” command in a loop and checking if the response of the command request is returned with no failure. When the response fails, the “neutron-server” service is considered to be in outage. The time between the first response failure and the last response failure is taken as the service outage time.

Process outage time is tested by checking the status of “neutron-server” processes on the selected controller node. The time from the “neutron-server” processes being killed to the time of their recovery is the process outage time. Process recovery is verified by checking the existence of “neutron-server” processes.

Test execution
  • Test action 1: Connect to Node1 through SSH, and check that “neutron-server” processes are running on Node1
  • Test action 2: Start two monitors: one for “neutron-server” process and the other for “openstack router list” command. Each monitor will run as an independent process.
  • Test action 3: Connect to Node1 through SSH, and then kill the “neutron-server” processes
  • Test action 4: When “openstack router list” returns with no error, calculate the service outage time, and execute “openstack network create test-network”
  • Test action 5: Continuously execute “openstack network show test-network” to check if the status of “test-network” is “Active”
  • Test action 6: If “test-network” is “Active”, execute “openstack network delete test-network”, then execute “openstack network list” to check if the “test-network” is not in the list
  • Test action 7: Continuously measure process outage time from the monitor until the process outage time is more than 30s
Pass / fail criteria

The process outage time is less than 30s.

The service outage time is less than 5s.

The neutron operations are carried out in the above order and no errors occur.

A negative result will be generated if any of the above criteria are not fully met.

Post conditions

Restart the processes of “neutron-server” if they are not running.

Test Case 3 - Controller node OpenStack service down - keystone
Short name

opnfv.ha.tc003.keystone_service_down

Use case specification

This test verifies the high availability of the “keystone” service provided by OpenStack controller nodes. It kills the processes of OpenStack “keystone” service on the selected controller node, then checks whether the “keystone” service is still available by executing command “openstack user list” and whether the killed processes are recovered.

Test preconditions

There is more than one controller node providing the “keystone” service API endpoint. One controller node is denoted as Node1 in the following.

Basic test flow execution description and pass/fail criteria
Methodology for monitoring high availability

The high availability of the “keystone” service is evaluated by monitoring service outage time and process outage time.

Service outage time is tested by continuously executing the “openstack user list” command in a loop and checking if the response of the command request is returned with no failure. When the response fails, the “keystone” service is considered to be in outage. The time between the first response failure and the last response failure is taken as the service outage time.

Process outage time is tested by checking the status of “keystone” processes on the selected controller node. The time from the “keystone” processes being killed to the time of their recovery is the process outage time. Process recovery is verified by checking the existence of “keystone” processes.

Test execution
  • Test action 1: Connect to Node1 through SSH, and check that “keystone” processes are running on Node1
  • Test action 2: Start two monitors: one for “keystone” process and the other for “openstack user list” command. Each monitor will run as an independent process.
  • Test action 3: Connect to Node1 through SSH, and then kill the “keystone” processes
  • Test action 4: Continuously measure service outage time from the monitor until the service outage time is more than 5s
  • Test action 5: Continuously measure process outage time from the monitor until the process outage time is more than 30s
Pass / fail criteria

The process outage time is less than 30s.

The service outage time is less than 5s.

A negative result will be generated if any of the above criteria are not fully met.

Post conditions

Restart the processes of “keystone” if they are not running.

Test Case 4 - Controller node OpenStack service down - glance-api
Short name

opnfv.ha.tc004.glance-api_service_down

Use case specification

This test verifies the high availability of the “glance-api” service provided by OpenStack controller nodes. It kills the processes of OpenStack “glance-api” service on the selected controller node, then checks whether the “glance-api” service is still available, by creating image and deleting image, and checks whether the killed processes are recovered.

Test preconditions

There is more than one controller node providing the “glance-api” service API endpoint. One controller node is denoted as Node1 in the following.

Basic test flow execution description and pass/fail criteria
Methodology for monitoring high availability

The high availability of the “glance-api” service is evaluated by monitoring service outage time, process outage time, and the results of glance operations.

Service outage time is tested by continuously executing the “openstack image list” command in a loop and checking if the response of the command request is returned with no failure. When the response fails, the “glance-api” service is considered to be in outage. The time between the first response failure and the last response failure is taken as the service outage time.

Process outage time is tested by checking the status of “glance-api” processes on the selected controller node. The time from the “glance-api” processes being killed to the time of their recovery is the process outage time. Process recovery is verified by checking the existence of “glance-api” processes.

Test execution
  • Test action 1: Connect to Node1 through SSH, and check that “glance-api” processes are running on Node1
  • Test action 2: Start two monitors: one for “glance-api” process and the other for “openstack image list” command. Each monitor will run as an independent process.
  • Test action 3: Connect to Node1 through SSH, and then kill the “glance-api” processes
  • Test action 4: When “openstack image list” returns with no error, calculate the service outage time, and execute “openstack image create test-image --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare”
  • Test action 5: Continuously execute “openstack image show test-image” to check if the status of “test-image” is “active”
  • Test action 6: If “test-image” is “active”, execute “openstack image delete test-image”. Then execute “openstack image list” to check if “test-image” is not in the list
  • Test action 7: Continuously measure process outage time from the monitor until the process outage time is more than 30s
Pass / fail criteria

The process outage time is less than 30s.

The service outage time is less than 5s.

The glance operations are carried out in the above order and no errors occur.

A negative result will be generated if any of the above criteria are not fully met.

Post conditions

Restart the processes of “glance-api” if they are not running.

Delete image with “openstack image delete test-image”.

Test Case 5 - Controller node OpenStack service down - cinder-api
Short name

opnfv.ha.tc005.cinder-api_service_down

Use case specification

This test verifies the high availability of the “cinder-api” service provided by OpenStack controller nodes. It kills the processes of OpenStack “cinder-api” service on the selected controller node, then checks whether the “cinder-api” service is still available by executing command “openstack volume list” and whether the killed processes are recovered.

Test preconditions

There is more than one controller node providing the “cinder-api” service API endpoint. One controller node is denoted as Node1 in the following.

Basic test flow execution description and pass/fail criteria
Methodology for monitoring high availability

The high availability of the “cinder-api” service is evaluated by monitoring service outage time and process outage time.

Service outage time is tested by continuously executing the “openstack volume list” command in a loop and checking if the response of the command request is returned with no failure. When the response fails, the “cinder-api” service is considered to be in outage. The time between the first response failure and the last response failure is taken as the service outage time.

Process outage time is tested by checking the status of “cinder-api” processes on the selected controller node. The time from the “cinder-api” processes being killed to the time of their recovery is the process outage time. Process recovery is verified by checking the existence of “cinder-api” processes.

Test execution
  • Test action 1: Connect to Node1 through SSH, and check that “cinder-api” processes are running on Node1
  • Test action 2: Start two monitors: one for “cinder-api” process and the other for “openstack volume list” command. Each monitor will run as an independent process.
  • Test action 3: Connect to Node1 through SSH, and then kill the “cinder-api” processes
  • Test action 4: Continuously measure service outage time from the monitor until the service outage time is more than 5s
  • Test action 5: Continuously measure process outage time from the monitor until the process outage time is more than 30s
Pass / fail criteria

The process outage time is less than 30s.

The service outage time is less than 5s.

The cinder operations are carried out in the above order and no errors occur.

A negative result will be generated if any of the above criteria are not fully met.

Post conditions

Restart the processes of “cinder-api” if they are not running.

Test Case 6 - Controller Node CPU Overload High Availability
Short name

opnfv.ha.tc006.cpu_overload

Use case specification

This test verifies the availability of services when one of the controller nodes suffers from heavy CPU overload. Even when the CPU usage of the specified controller node reaches 100%, which breaks down the OpenStack services on that node, the OpenStack services should continue to be available. This test case stresses the CPU usage of a specific controller node to 100%, then checks whether all services provided by the SUT are still available using the monitor tools.

Test preconditions

There is more than one controller node providing the “cinder-api”, “neutron-server”, “glance-api” and “keystone” service API endpoints. One controller node is denoted as Node1 in the following.

Basic test flow execution description and pass/fail criteria
Methodology for monitoring high availability

The high availability of the related OpenStack services is evaluated by monitoring service outage time.

Service outage time is tested by continuously executing the “openstack router list”, “openstack stack list”, “openstack volume list” and “openstack image list” commands in a loop and checking if the response of each command request is returned with no failure. When the response fails, the related service is considered to be in outage. The time between the first response failure and the last response failure is taken as the service outage time.

Methodology for stressing CPU usage

To evaluate the high availability of the target OpenStack services under heavy CPU load, the test case first gets the number of logical CPU cores on the target controller node by shell command, then starts that many ‘dd’ commands, each continuously copying from /dev/zero to /dev/null in a loop. The ‘dd’ operation uses only CPU, with no I/O, which makes it ideal for stressing CPU usage.

Since the ‘dd’ commands run continuously and each is CPU-bound, the scheduler places each ‘dd’ command on a different logical CPU core, eventually driving the usage rate of all logical CPU cores to 100%.
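A minimal sketch of this stress procedure, assuming GNU coreutils on the controller node; the function name and duration parameter are illustrative, not part of the specification:

```shell
#!/usr/bin/env bash
# Hypothetical CPU stress: one CPU-bound dd per logical core.
stress_all_cores() {
    local duration="${1:-10}" cores pids=""
    cores=$(nproc)                                 # number of logical CPU cores
    for _ in $(seq "$cores"); do
        # pure in-memory copy: CPU-bound, no disk I/O
        dd if=/dev/zero of=/dev/null bs=1M 2>/dev/null &
        pids="$pids $!"
    done
    sleep "$duration"
    kill $pids 2>/dev/null                         # stop the stress (Test action 4)
    wait 2>/dev/null
    echo "stressed $cores logical cores for ${duration}s"
}
```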

Test execution
  • Test action 1: Start four monitors: one for “openstack image list” command, one for “openstack router list” command, one for “openstack stack list” command and the last one for “openstack volume list” command. Each monitor will run as an independent process.
  • Test action 2: Connect to Node1 through SSH, and then stress all logical CPU cores usage rate to 100%
  • Test action 3: Continuously measure all the service outage times until they are more than 5s
  • Test action 4: Kill the process that stresses the CPU usage
Pass / fail criteria

All the service outage times are less than 5s.

A negative result will be generated if the above is not met in completion.

Post conditions

No impact on the SUT.

Test Case 7 - Controller Node Disk I/O Overload High Availability
Short name

opnfv.ha.tc007.disk_I/O_overload

Use case specification

This test verifies the high availability of the controller node. When the disk I/O of the specific disk is overloaded, which breaks down the OpenStack services on this node, the read and write services should continue to be available. This test case blocks the disk I/O of the specific controller node, then checks whether the services that need to read or write the disk of the controller node are still available using monitor tools.

Test preconditions

There is more than one controller node. One controller node is denoted as Node1 in the following. The controller node has at least 20GB of free disk space.

Basic test flow execution description and pass/fail criteria
Methodology for monitoring high availability

The high availability of the nova service is evaluated by monitoring service outage time.

Service availability is tested by continuously executing the “openstack flavor list” command in a loop and checking if the response of the command request is returned with no failure. When the response fails, the related service is considered to be in outage.

Methodology for stressing disk I/O

To evaluate the high availability of the target OpenStack service under heavy I/O load, the test case executes a shell command on the selected controller node to continuously write 8KB blocks to /test.dbf.
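A sketch of this write load, assuming GNU dd; the /test.dbf path comes from the text, while the burst size and iteration parameters are illustrative (the real test loops until explicitly stopped):

```shell
#!/usr/bin/env bash
# Hypothetical disk I/O stress: repeatedly write bursts of 8KB blocks to
# the target file and flush them to disk (conv=fsync).
stress_disk_io() {
    local target="${1:-/test.dbf}" iterations="${2:-0}" i=0
    # iterations=0 means "run until killed", matching the test behaviour
    while [ "$iterations" -eq 0 ] || [ "$i" -lt "$iterations" ]; do
        dd if=/dev/zero of="$target" bs=8K count=1024 conv=fsync 2>/dev/null || return 1
        i=$((i + 1))
    done
    echo "wrote $i bursts of 8KB blocks to $target"
}

# Cleanup (as in Test action 6): stop the writer, then remove /test.dbf
```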

Test execution
  • Test action 1: Connect to Node1 through SSH, and then stress disk I/O by continuously writing 8kb blocks to /test.dbf
  • Test action 2: Start a monitor: for “openstack flavor list” command
  • Test action 3: Create a flavor called “test-001”
  • Test action 4: Check whether the flavor “test-001” is created
  • Test action 5: Continuously measure service outage time from the monitor until the service outage time is more than 5s
  • Test action 6: Stop writing to /test.dbf and delete file /test.dbf
Pass / fail criteria

The service outage time is less than 5s.

The nova operations are carried out in the above order and no errors occur.

A negative result will be generated if any of the above criteria are not fully met.

Post conditions

Delete flavor with “openstack flavor delete test-001”.

Test Case 8 - Controller Load Balance as a Service High Availability
Short name

opnfv.ha.tc008.load_balance_service_down

Use case specification

This test verifies the high availability of the “load balancer” service. When the “load balancer” service of a specified controller node is killed, it is checked whether the “load balancer” service on the other controller nodes keeps working, and whether the controller node restarts the “load balancer” service. This test case kills the processes of the “load balancer” service on the selected controller node, then checks whether the requests of the related OpenStack commands are processed with no failure and whether the killed processes are recovered.

Test preconditions

There is more than one controller node providing the “load balancer” service for the REST API. One controller node is denoted as Node1 in the following.

Basic test flow execution description and pass/fail criteria
Methodology for monitoring high availability

The high availability of the “load balancer” service is evaluated by monitoring service outage time and process outage time.

Service outage time is tested by continuously executing the “openstack image list” command in a loop and checking if the response of the command request is returned with no failure. When the response fails, the “load balancer” service is considered to be in outage. The time between the first response failure and the last response failure is taken as the service outage time.

Process outage time is tested by checking the status of the processes of the “load balancer” service on the selected controller node. The time from those processes being killed to the time of their recovery is the process outage time. Process recovery is verified by checking the existence of the processes of the “load balancer” service.

Test execution
  • Test action 1: Connect to Node1 through SSH, and check that processes of “load balancer” service are running on Node1
  • Test action 2: Start two monitors: one for processes of “load balancer” service and the other for “openstack image list” command. Each monitor will run as an independent process
  • Test action 3: Connect to Node1 through SSH, and then kill the processes of “load balancer” service
  • Test action 4: Continuously measure service outage time from the monitor until the service outage time is more than 5s
  • Test action 5: Continuously measure process outage time from the monitor until the process outage time is more than 30s
Pass / fail criteria

The process outage time is less than 30s.

The service outage time is less than 5s.

A negative result will be generated if any of the above criteria are not fully met.

Post conditions

Restart the processes of “load balancer” if they are not running.

VIM compute operations test specification

Scope

The VIM compute operations test area evaluates the ability of the system under test to support VIM compute operations. The test cases documented here are the compute API test cases in the OpenStack Interop guideline 2016.8 as implemented by the RefStack client. These test cases will evaluate basic OpenStack (as a VIM) compute operations, including:

  • Image management operations
  • Basic support operations
  • API version support operations
  • Quotas management operations
  • Basic server operations
  • Volume management operations

References

Definitions and abbreviations

The following terms and abbreviations are used in conjunction with this test area:

  • API - Application Programming Interface
  • NFVi - Network Functions Virtualization infrastructure
  • SUT - System Under Test
  • UUID - Universally Unique Identifier
  • VIM - Virtual Infrastructure Manager
  • VM - Virtual Machine

System Under Test (SUT)

The system under test is assumed to be the NFVi and VIM deployed with a Pharos compliant infrastructure.

Test Area Structure

The test area is structured based on VIM compute API operations. Each test case is able to run independently, i.e. independent of the state created by a previous test. Specifically, every test performs clean-up operations which return the system to the same state as before the test.

For brevity, the test cases in this test area are summarized together based on the operations they are testing.

Test Descriptions

API Used and Reference

Servers: https://developer.openstack.org/api-ref/compute/

  • create server
  • delete server
  • list servers
  • start server
  • stop server
  • update server
  • get server action
  • set server metadata
  • update server metadata
  • rebuild server
  • create image
  • delete image
  • create keypair
  • delete keypair

Block storage: https://developer.openstack.org/api-ref/block-storage

  • create volume
  • delete volume
  • attach volume to server
  • detach volume from server
Test Case 1 - Image operations within the Compute API
Test case specification

tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_delete_image
tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_image_specify_multibyte_character_image_name
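These tempest test IDs can typically be selected with tempest’s regex filter; a sketch assuming a configured tempest workspace (the RefStack client wraps an equivalent invocation):

```shell
#!/usr/bin/env bash
# Build an alternation regex from the two tempest test IDs above and
# print the resulting invocation (workspace setup is assumed).
tests="tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_delete_image
tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_image_specify_multibyte_character_image_name"

regex="($(printf '%s' "$tests" | tr '\n' '|'))"
echo "tempest run --regex '$regex'"
```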

Test preconditions
  • Compute server extension API
Basic test flow execution description and pass/fail criteria
  • Test action 1: Create a server VM1 with an image IMG1 and wait for VM1 to reach ‘ACTIVE’ status
  • Test action 2: Create a new server image IMG2 from VM1, specifying image name and image metadata. Wait for IMG2 to reach ‘ACTIVE’ status, and then delete IMG2
  • Test assertion 1: Verify IMG2 is created with correct image name and image metadata; verify IMG1’s ‘minRam’ equals IMG2’s ‘minRam’, and IMG2’s ‘minDisk’ equals IMG1’s ‘minDisk’ or VM1’s flavor disk size
  • Test assertion 2: Verify IMG2 is deleted correctly
  • Test action 3: Create another server image IMG3 from VM1, specifying an image name with a 3-byte UTF-8 character
  • Test assertion 3: Verify IMG3 is created correctly
  • Test action 4: Delete VM1, IMG1 and IMG3

This test evaluates the Compute API ability of creating image from server, deleting image, creating server image with multi-byte character name. Specifically, the test verifies that:

  • Compute server create image and delete image APIs work correctly.
  • Compute server image can be created with multi-byte character name.

In order to pass this test, all test assertions listed in the test execution above need to pass.

Post conditions

N/A

Test Case 2 - Action operation within the Compute API
Test case specification

tempest.api.compute.servers.test_instance_actions.InstanceActionsTestJSON.test_get_instance_action
tempest.api.compute.servers.test_instance_actions.InstanceActionsTestJSON.test_list_instance_actions

Test preconditions
  • Compute server extension API
Basic test flow execution description and pass/fail criteria
  • Test action 1: Create a server VM1 and wait for VM1 to reach ‘ACTIVE’ status
  • Test action 2: Get the action details ACT_DTL of VM1
  • Test assertion 1: Verify ACT_DTL’s ‘instance_uuid’ matches VM1’s ID and ACT_DTL’s ‘action’ matches ‘create’
  • Test action 3: Create a server VM2 and wait for VM2 to reach ‘ACTIVE’ status
  • Test action 4: Delete server VM2 and wait for VM2 to reach termination
  • Test action 5: Get the action list ACT_LST of VM2
  • Test assertion 2: Verify ACT_LST’s length is 2 and two actions are ‘create’ and ‘delete’
  • Test action 6: Delete VM1

This test evaluates the Compute API ability of getting the action details of a provided server and getting the action list of a deleted server. Specifically, the test verifies that:

  • Get the details of the action in a specified server.
  • List the actions that were performed on the specified server.

In order to pass this test, all test assertions listed in the test execution above need to pass.

Post conditions

N/A

Test Case 3 - Generate, import and delete SSH keys within Compute services
Test case specification

tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_specify_keypair

Test preconditions
  • Compute server extension API
Basic test flow execution description and pass/fail criteria
  • Test action 1: Create a keypair KEYP1 and list all existing keypairs
  • Test action 2: Create a server VM1 with KEYP1 and wait for VM1 to reach ‘ACTIVE’ status
  • Test action 3: Show details of VM1
  • Test assertion 1: Verify value of ‘key_name’ in the details equals to the name of KEYP1
  • Test action 4: Delete KEYP1 and VM1

This test evaluates the Compute API ability of creating a keypair, listing keypairs and creating a server with a provided keypair. Specifically, the test verifies that:

  • Compute create keypair and list keypair APIs work correctly.
  • While creating a server, keypair can be specified.

In order to pass this test, all test assertions listed in the test execution above need to pass.

Post conditions

N/A

Test Case 4 - List supported versions of the Compute API
Test case specification

tempest.api.compute.test_versions.TestVersions.test_list_api_versions

Test preconditions
  • Compute versions extension API
Basic test flow execution description and pass/fail criteria
  • Test action 1: Get a list of versioned endpoints in the SUT
  • Test assertion 1: Verify endpoints versions start at ‘v2.0’

This test evaluates the functionality of listing all available APIs to API consumers. Specifically, the test verifies that:

  • Compute list API versions API works correctly.

In order to pass this test, all test assertions listed in the test execution above need to pass.

Post conditions

N/A

Test Case 5 - Quotas management in Compute API
Test case specification

tempest.api.compute.test_quotas.QuotasTestJSON.test_get_default_quotas
tempest.api.compute.test_quotas.QuotasTestJSON.test_get_quotas

Test preconditions
  • Compute quotas extension API
Basic test flow execution description and pass/fail criteria
  • Test action 1: Get the default quota set using the tenant ID
  • Test assertion 1: Verify the default quota set ID matches tenant ID and the default quota set is complete
  • Test action 2: Get the quota set using the tenant ID
  • Test assertion 2: Verify the quota set ID matches tenant ID and the quota set is complete
  • Test action 3: Get the quota set using the user ID
  • Test assertion 3: Verify the quota set ID matches tenant ID and the quota set is complete

This test evaluates the functionality of getting quota set. Specifically, the test verifies that:

  • User can get the default quota set for its tenant.
  • User can get the quota set for its tenant.
  • User can get the quota set using user ID.

In order to pass this test, all test assertions listed in the test execution above need to pass.
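The completeness check behind Test assertions 1-3 can be sketched offline; the expected field list below is a hypothetical subset of the Compute quota keys, and the payload is illustrative:

```python
# Illustrative check of Test assertions 1-3: the returned quota set must
# carry the tenant's ID and a complete set of quota fields. The field
# list here is a hypothetical subset of the Compute quota-set keys.
EXPECTED_QUOTA_KEYS = {"instances", "cores", "ram", "key_pairs",
                       "metadata_items", "server_groups"}

def quota_set_is_valid(quota_set: dict, tenant_id: str) -> bool:
    return (quota_set.get("id") == tenant_id
            and EXPECTED_QUOTA_KEYS <= quota_set.keys())

sample = {"id": "tenant-123", "instances": 10, "cores": 20, "ram": 51200,
          "key_pairs": 100, "metadata_items": 128, "server_groups": 10}
assert quota_set_is_valid(sample, "tenant-123")
assert not quota_set_is_valid(sample, "other-tenant")
```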

Post conditions

N/A

Test Case 6 - Basic server operations in the Compute API
Test case specification

This test case evaluates the Compute API’s ability to perform basic server operations, including:

  • Create a server with admin password
  • Create a server with a name that already exists
  • Create a server with a numeric name
  • Create a server with metadata that exceeds the length limit
  • Create a server with a name whose length exceeds 255 characters
  • Create a server with an unknown flavor
  • Create a server with an unknown image ID
  • Create a server with an invalid network UUID
  • Delete a server using a server ID that exceeds length limit
  • Delete a server using a negative server ID
  • Get the details of a nonexistent server
  • Verify the instance host name is the same as the server name
  • Create a server with an invalid access IPv6 address
  • List all existent servers
  • Filter the (detailed) list of servers by flavor, image, server name, server status or limit
  • Lock a server and try server stop, unlock and retry
  • Get and delete metadata from a server
  • List and set metadata for a server
  • Reboot, rebuild, stop and start a server
  • Update a server’s access addresses and server name

The reference is:

tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_server_with_admin_password
tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_with_existing_server_name
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_numeric_server_name
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_server_metadata_exceeds_length_limit
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_server_name_length_exceeds_256
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_with_invalid_flavor
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_with_invalid_image
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_with_invalid_network_uuid
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_delete_server_pass_id_exceeding_length_limit
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_delete_server_pass_negative_id
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_get_non_existent_server
tempest.api.compute.servers.test_create_server.ServersTestJSON.test_host_name_is_same_as_server_name
tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_host_name_is_same_as_server_name
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_invalid_ip_v6_address
tempest.api.compute.servers.test_create_server.ServersTestJSON.test_list_servers
tempest.api.compute.servers.test_create_server.ServersTestJSON.test_list_servers_with_detail
tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_list_servers
tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_list_servers_with_detail
tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_flavor
tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_image
tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_server_name
tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_server_status
tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_limit_results
tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_flavor
tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_image
tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_limit
tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_server_name
tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_server_status
tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filtered_by_name_wildcard
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since_future_date
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since_invalid_date
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_limits
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_limits_greater_than_actual_count
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_limits_pass_negative_value
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_limits_pass_string
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_non_existing_flavor
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_non_existing_image
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_non_existing_server_name
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_detail_server_is_deleted
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_status_non_existing
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_with_a_deleted_server
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_lock_unlock_server
tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_delete_server_metadata_item
tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_get_server_metadata_item
tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_list_server_metadata
tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_set_server_metadata
tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_set_server_metadata_item
tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_update_server_metadata
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_server_name_blank
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_reboot_server_hard
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_reboot_non_existent_server
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_rebuild_server
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_rebuild_deleted_server
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_rebuild_non_existent_server
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_stop_start_server
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_stop_non_existent_server
tempest.api.compute.servers.test_servers.ServersTestJSON.test_update_access_server_address
tempest.api.compute.servers.test_servers.ServersTestJSON.test_update_server_name
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_update_name_of_non_existent_server
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_update_server_name_length_exceeds_256
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_update_server_set_empty_name
tempest.api.compute.servers.test_create_server.ServersTestJSON.test_verify_created_server_vcpus
tempest.api.compute.servers.test_create_server.ServersTestJSON.test_verify_server_details
tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_verify_created_server_vcpus
tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_verify_server_details

tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_active_status
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_rebuild_reboot_deleted_server

Note: the last two test cases are aliases of the following two test cases, respectively:

tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_server_status
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_rebuild_deleted_server

Aliases should always be included so that the test run is Tempest version agnostic and can be used to test different versions of OpenStack.

Test preconditions
  • Compute server extension API
Basic test flow execution description and pass/fail criteria
  • Test action 1: Create a server VM1 with an admin password ‘testpassword’
  • Test assertion 1: Verify the password returned in the response equals ‘testpassword’
  • Test action 2: Generate a VM name VM_NAME
  • Test action 3: Create 2 servers VM2 and VM3 both with name VM_NAME
  • Test assertion 2: Verify VM2’s ID is not equal to VM3’s ID, and VM2’s name equals VM3’s name
  • Test action 4: Create a server VM4 with a numeric name ‘12345’
  • Test assertion 3: Verify creating VM4 failed
  • Test action 5: Create a server VM5 with long metadata {'a': 'b' * 260}
  • Test assertion 4: Verify creating VM5 failed
  • Test action 6: Create a server VM6 with name length exceeding 255 characters
  • Test assertion 5: Verify creating VM6 failed
  • Test action 7: Create a server VM7 with an unknown flavor ‘-1’
  • Test assertion 6: Verify creating VM7 failed
  • Test action 8: Create a server VM8 with an unknown image ID ‘-1’
  • Test assertion 7: Verify creating VM8 failed
  • Test action 9: Create a server VM9 with an invalid network UUID ‘a-b-c-d-e-f-g-h-i-j’
  • Test assertion 8: Verify creating VM9 failed
  • Test action 10: Delete a server using a server ID that exceeds system’s max integer limit
  • Test assertion 9: Verify deleting server failed
  • Test action 11: Delete a server using a server ID ‘-1’
  • Test assertion 10: Verify deleting server failed
  • Test action 12: Get a nonexistent server by using a randomly generated server ID
  • Test assertion 11: Verify get server failed
  • Test action 13: SSH into a provided server and get server’s hostname
  • Test assertion 12: Verify server’s host name is the same as the server name
  • Test action 14: SSH into a provided server and get server’s hostname (manual disk configuration)
  • Test assertion 13: Verify server’s host name is the same as the server name (manual disk configuration)
  • Test action 15: Create a server with an invalid access IPv6 address
  • Test assertion 14: Verify creating server failed, a bad request error is returned in response
  • Test action 16: List all existent servers
  • Test assertion 15: Verify a provided server is in the server list
  • Test action 17: List all existent servers in detail
  • Test assertion 16: Verify a provided server is in the detailed server list
  • Test action 18: List all existent servers (manual disk configuration)
  • Test assertion 17: Verify a provided server is in the server list (manual disk configuration)
  • Test action 19: List all existent servers in detail (manual disk configuration)
  • Test assertion 18: Verify a provided server is in the detailed server list (manual disk configuration)
  • Test action 20: List all existent servers in detail and filter the server list by flavor
  • Test assertion 19: Verify the filtered server list is correct
  • Test action 21: List all existent servers in detail and filter the server list by image
  • Test assertion 20: Verify the filtered server list is correct
  • Test action 22: List all existent servers in detail and filter the server list by server name
  • Test assertion 21: Verify the filtered server list is correct
  • Test action 23: List all existent servers in detail and filter the server list by server status
  • Test assertion 22: Verify the filtered server list is correct
  • Test action 24: List all existent servers in detail and filter the server list by display limit ‘1’
  • Test assertion 23: Verify the length of filtered server list is 1
  • Test action 25: List all existent servers and filter the server list by flavor
  • Test assertion 24: Verify the filtered server list is correct
  • Test action 26: List all existent servers and filter the server list by image
  • Test assertion 25: Verify the filtered server list is correct
  • Test action 27: List all existent servers and filter the server list by display limit ‘1’
  • Test assertion 26: Verify the length of filtered server list is 1
  • Test action 28: List all existent servers and filter the server list by server name
  • Test assertion 27: Verify the filtered server list is correct
  • Test action 29: List all existent servers and filter the server list by server status
  • Test assertion 28: Verify the filtered server list is correct
  • Test action 30: List all existent servers and filter the server list by server name wildcard
  • Test assertion 29: Verify the filtered server list is correct
  • Test action 31: List all existent servers and filter the server list by part of server name
  • Test assertion 30: Verify the filtered server list is correct
  • Test action 32: List all existent servers and filter the server list by a future change-since date
  • Test assertion 31: Verify the filtered server list is empty
  • Test action 33: List all existent servers and filter the server list by an invalid change-since date format
  • Test assertion 32: Verify a bad request error is returned in the response
  • Test action 34: List all existent servers and filter the server list by display limit ‘1’
  • Test assertion 33: Verify the length of filtered server list is 1
  • Test action 35: List all existent servers and filter the server list by a display limit value greater than the length of the server list
  • Test assertion 34: Verify the length of filtered server list equals to the length of server list
  • Test action 36: List all existent servers and filter the server list by display limit ‘-1’
  • Test assertion 35: Verify a bad request error is returned in the response
  • Test action 37: List all existent servers and filter the server list by a string type limit value ‘testing’
  • Test assertion 36: Verify a bad request error is returned in the response
  • Test action 38: List all existent servers and filter the server list by a nonexistent flavor
  • Test assertion 37: Verify the filtered server list is empty
  • Test action 39: List all existent servers and filter the server list by a nonexistent image
  • Test assertion 38: Verify the filtered server list is empty
  • Test action 40: List all existent servers and filter the server list by a nonexistent server name
  • Test assertion 39: Verify the filtered server list is empty
  • Test action 41: List all existent servers in detail and search the server list for a deleted server
  • Test assertion 40: Verify the deleted server is not in the server list
  • Test action 42: List all existent servers and filter the server list by a nonexistent server status
  • Test assertion 41: Verify the filtered server list is empty
  • Test action 43: List all existent servers in detail
  • Test assertion 42: Verify a provided deleted server’s id is not in the server list
  • Test action 44: Lock a provided server VM10 and retrieve the server’s status
  • Test assertion 43: Verify VM10 is in ‘ACTIVE’ status
  • Test action 45: Stop VM10
  • Test assertion 44: Verify stop VM10 failed
  • Test action 46: Unlock VM10 and stop VM10 again
  • Test assertion 45: Verify VM10 is stopped and in ‘SHUTOFF’ status
  • Test action 47: Start VM10
  • Test assertion 46: Verify VM10 is in ‘ACTIVE’ status
  • Test action 48: Delete metadata item ‘key1’ from a provided server
  • Test assertion 47: Verify the metadata item is removed
  • Test action 49: Get metadata item ‘key2’ from a provided server
  • Test assertion 48: Verify the metadata item is correct
  • Test action 50: List all metadata key/value pair for a provided server
  • Test assertion 49: Verify all metadata are retrieved correctly
  • Test action 51: Set metadata {‘meta2’: ‘data2’, ‘meta3’: ‘data3’} for a provided server
  • Test assertion 50: Verify server’s metadata are replaced correctly
  • Test action 52: Set the value of metadata item ‘nova’ to ‘alt’ for a provided server
  • Test assertion 51: Verify server’s metadata are set correctly
  • Test action 53: Update metadata {‘key1’: ‘alt1’, ‘key3’: ‘value3’} for a provided server
  • Test assertion 52: Verify server’s metadata are updated correctly
  • Test action 54: Create a server with empty name parameter
  • Test assertion 53: Verify create server failed
  • Test action 55: Hard reboot a provided server
  • Test assertion 54: Verify server is rebooted successfully
  • Test action 56: Soft reboot a nonexistent server
  • Test assertion 55: Verify reboot failed, an error is returned in the response
  • Test action 57: Rebuild a provided server with new image, new server name and metadata
  • Test assertion 56: Verify server is rebuilt successfully, server image, name and metadata are correct
  • Test action 58: Create a server VM11
  • Test action 59: Delete VM11 and wait for VM11 to reach termination
  • Test action 60: Rebuild VM11 with another image
  • Test assertion 57: Verify rebuild server failed, an error is returned in the response
  • Test action 61: Rebuild a nonexistent server
  • Test assertion 58: Verify rebuild server failed, an error is returned in the response
  • Test action 62: Stop a provided server
  • Test assertion 59: Verify server reaches ‘SHUTOFF’ status
  • Test action 63: Start the stopped server
  • Test assertion 60: Verify server reaches ‘ACTIVE’ status
  • Test action 64: Stop a nonexistent server
  • Test assertion 61: Verify stop server failed, an error is returned in the response
  • Test action 65: Create a server VM12 and wait for it to reach ‘ACTIVE’ status
  • Test action 66: Update VM12’s IPv4 and IPv6 access addresses
  • Test assertion 62: Verify VM12’s access addresses have been updated correctly
  • Test action 67: Create a server VM13 and wait for it to reach ‘ACTIVE’ status
  • Test action 68: Update VM13’s server name with the non-ASCII characters ‘\u00CD\u00F1st\u00E1\u00F1c\u00E9’ (‘Íñstáñcé’)
  • Test assertion 63: Verify VM13’s server name has been updated correctly
  • Test action 69: Update the server name of a nonexistent server
  • Test assertion 64: Verify update server name failed, an ‘object not found’ error is returned in the response
  • Test action 70: Update a provided server’s name with a 256-character long name
  • Test assertion 65: Verify update server name failed, a bad request error is returned in the response
  • Test action 71: Update a provided server’s server name with an empty string
  • Test assertion 66: Verify update server name failed, a bad request error is returned in the response
  • Test action 72: Get the number of vcpus of a provided server
  • Test action 73: Get the number of vcpus stated by the server’s flavor
  • Test assertion 67: Verify that the number of vcpus reported by the server matches the amount stated by the server’s flavor
  • Test action 74: Create a server VM14
  • Test assertion 68: Verify VM14’s server attributes are set correctly
  • Test action 75: Get the number of vcpus of a provided server (manual disk configuration)
  • Test action 76: Get the number of vcpus stated by the server’s flavor (manual disk configuration)
  • Test assertion 69: Verify that the number of vcpus reported by the server matches the amount stated by the server’s flavor (manual disk configuration)
  • Test action 77: Create a server VM15 (manual disk configuration)
  • Test assertion 70: Verify VM15’s server attributes are set correctly (manual disk configuration)
  • Test action 78: Delete all VMs created

This test evaluates the functionality of basic server operations. Specifically, the test verifies that:

  • If an admin password is provided on server creation, the server’s root password should be set to that password
  • Creating a server with a name that already exists is allowed
  • Creating a server with a numeric name or a name that exceeds the length limit is not allowed
  • Creating a server with metadata that exceeds the length limit is not allowed
  • Creating a server with an invalid flavor, an invalid image or an invalid network UUID is not allowed
  • Deleting a server with a server ID that exceeds the length limit or a nonexistent server ID is not allowed
  • A provided server’s host name is the same as the server name
  • Creating a server with an invalid IPv6 access address is not allowed
  • A created server is in the (detailed) list of servers
  • The (detailed) list of servers can be filtered by flavor, image, server name, server status and display limit, respectively
  • Filtering the list of servers by a future change-since date returns an empty list
  • Filtering the list of servers by an invalid date format, a negative display limit or a string type display limit value is not allowed
  • Filtering the list of servers by a nonexistent flavor, image, server name or server status returns an empty list
  • Deleted servers are not in the list of servers
  • Deleted servers do not show by default in the list of servers
  • A locked server cannot be stopped by a non-admin user
  • Metadata can be retrieved from and deleted from a server
  • Server metadata can be listed, set and updated
  • Creating a server with an empty name parameter is not allowed
  • Hard rebooting a server power cycles the server
  • Rebooting, rebuilding or stopping a nonexistent server is not allowed
  • A server can be rebuilt using a provided image and metadata
  • A server can be stopped and started again
  • A server’s name and access addresses can be updated
  • Updating the name of a nonexistent server is not allowed
  • Updating the name of a server to a name that exceeds the length limit is not allowed
  • Updating the name of a server to an empty string is not allowed
  • The number of vcpus reported by the server matches the amount stated by the server’s flavor
  • The specified server attributes are set correctly

In order to pass this test, all test assertions listed in the test execution above need to pass.
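The filter semantics exercised in test actions 20-31 can be sketched offline as follows; the helper and the server records are illustrative, not the Tempest implementation:

```python
# Illustrative sketch of the list-servers filters checked above:
# filtering by (partial) server name, server status, and a display
# limit. The server records are hypothetical, trimmed to the fields
# the filters touch.
def filter_servers(servers, name=None, status=None, limit=None):
    result = [s for s in servers
              if (name is None or name in s["name"])      # wildcard-style
              and (status is None or s["status"] == status)]
    return result[:limit] if limit is not None else result

servers = [{"name": "vm-alpha", "status": "ACTIVE"},
           {"name": "vm-beta", "status": "SHUTOFF"},
           {"name": "web-1", "status": "ACTIVE"}]
assert len(filter_servers(servers, status="ACTIVE")) == 2
assert filter_servers(servers, name="vm-")[0]["name"] == "vm-alpha"
assert len(filter_servers(servers, limit=1)) == 1
assert filter_servers(servers, name="nonexistent") == []
```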

Post conditions

N/A

Test Case 7 - Retrieve volume information through the Compute API
Test case specification

This test case evaluates the Compute API’s ability to attach a volume to a specific server and retrieve volume information. The reference is:

tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_attach_detach_volume
tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_list_get_volume_attachments

Test preconditions
  • Compute volume extension API
Basic test flow execution description and pass/fail criteria
  • Test action 1: Create a server VM1 and a volume VOL1
  • Test action 2: Attach VOL1 to VM1
  • Test assertion 1: Stop VM1 successfully and wait for VM1 to reach ‘SHUTOFF’ status
  • Test assertion 2: Start VM1 successfully and wait for VM1 to reach ‘ACTIVE’ status
  • Test assertion 3: SSH into VM1 and verify VOL1 is in VM1’s root disk devices
  • Test action 3: Detach VOL1 from VM1
  • Test assertion 4: Stop VM1 successfully and wait for VM1 to reach ‘SHUTOFF’ status
  • Test assertion 5: Start VM1 successfully and wait for VM1 to reach ‘ACTIVE’ status
  • Test assertion 6: SSH into VM1 and verify VOL1 is not in VM1’s root disk devices
  • Test action 4: Create a server VM2 and a volume VOL2
  • Test action 5: Attach VOL2 to VM2
  • Test action 6: List VM2’s volume attachments
  • Test assertion 7: Verify the length of the list is 1 and VOL2 attachment is in the list
  • Test action 7: Retrieve VM2’s volume information
  • Test assertion 8: Verify volume information is correct
  • Test action 8: Delete VM1, VM2, VOL1 and VOL2

This test evaluates the functionality of retrieving volume information. Specifically, the test verifies that:

  • Stopping and starting a server with an attached volume works correctly.
  • A server’s volume information can be retrieved correctly.

In order to pass this test, all test assertions listed in the test execution above need to pass.
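Test assertion 7 can be sketched offline against a trimmed volume-attachments payload; the helper and the IDs are illustrative, not the Tempest implementation:

```python
# Illustrative check of Test assertion 7: after attaching one volume,
# the server's attachment list has length 1 and contains that volume.
# The payload mirrors a trimmed volume-attachments response; the IDs
# are hypothetical.
def attachment_assertion_holds(attachments: list, volume_id: str) -> bool:
    return (len(attachments) == 1
            and attachments[0].get("volumeId") == volume_id)

sample = [{"id": "att-1", "serverId": "vm2", "volumeId": "vol-2"}]
assert attachment_assertion_holds(sample, "vol-2")
assert not attachment_assertion_holds(sample, "vol-9")
```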

Post conditions

N/A

VIM identity operations test specification

Scope

The VIM identity test area evaluates the ability of the system under test to support VIM identity operations. The tests in this area will evaluate API discovery operations within the Identity v3 API, auth operations within the Identity API.

References

Definitions and abbreviations

The following terms and abbreviations are used in conjunction with this test area

  • API - Application Programming Interface
  • NFVi - Network Functions Virtualisation infrastructure
  • VIM - Virtual Infrastructure Manager

System Under Test (SUT)

The system under test is assumed to be the NFVi and VIM in operation on a Pharos compliant infrastructure.

Test Area Structure

The test area is structured based on VIM identity operations. Each test case is able to run independently, i.e. irrespective of the state created by a previous test.

Dependency Description

The VIM identity operations test cases are a part of the OpenStack interoperability Tempest test cases. For the Danube-based Dovetail release, the OpenStack interoperability guidelines (version 2016.08) are adopted, which are valid for the Kilo, Liberty, Mitaka and Newton releases of OpenStack.

Test Descriptions

API discovery operations within the Identity v3 API
Use case specification

tempest.api.identity.v3.TestApiDiscovery.test_api_version_resources
tempest.api.identity.v3.TestApiDiscovery.test_api_media_types
tempest.api.identity.v3.TestApiDiscovery.test_api_version_statuses
tempest.api.identity.v3.test_api_discovery.TestApiDiscovery.test_api_version_resources
tempest.api.identity.v3.test_api_discovery.TestApiDiscovery.test_api_media_types
tempest.api.identity.v3.test_api_discovery.TestApiDiscovery.test_api_version_statuses

Note: the latter three test cases are aliases of the former three, respectively. Aliases should always be included so that the test run is Tempest version agnostic and can be used to test different versions of OpenStack.

Test preconditions

None

Basic test flow execution description and pass/fail criteria
Test execution
  • Test action 1: Show the v3 identity API description, the test passes if the keys ‘id’, ‘links’, ‘media-types’, ‘status’ and ‘updated’ are all included in the description response message.
  • Test action 2: Get the value of the v3 identity API ‘media-types’, the test passes if API version 2 and version 3 are both included in the response.
  • Test action 3: Show the v3 identity API description, the test passes if every identity API ‘status’ value is one of ‘current’, ‘stable’, ‘experimental’, ‘supported’ or ‘deprecated’.
Pass / fail criteria

This test case passes if all test action steps execute successfully and all assertions are affirmed. If any test steps fails to execute successfully or any of the assertions is not met, the test case fails.
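The checks in test actions 1 and 3 can be sketched offline against a trimmed identity version description; the helper and the sample body are illustrative, not the Tempest implementation:

```python
# Illustrative check of Test actions 1 and 3: the v3 identity API
# description must carry the listed keys, and the advertised 'status'
# must come from the known set. The sample body is hypothetical.
REQUIRED_KEYS = {"id", "links", "media-types", "status", "updated"}
KNOWN_STATUSES = {"current", "stable", "experimental", "supported",
                  "deprecated"}

def api_description_is_valid(version: dict) -> bool:
    return (REQUIRED_KEYS <= version.keys()
            and version["status"] in KNOWN_STATUSES)

sample = {"id": "v3.8", "links": [], "media-types": [],
          "status": "stable", "updated": "2017-02-22T00:00:00Z"}
assert api_description_is_valid(sample)
```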

Post conditions

None

Auth operations within the Identity API
Use case specification

tempest.api.identity.v3.test_tokens.TokensV3Test.test_create_token

Test preconditions

None

Basic test flow execution description and pass/fail criteria
Test execution
  • Test action 1: Get a token using system credentials, the test passes if the returned token_id is not empty and is of string type.
  • Test action 2: Get the user_id from the token response message, the test passes if it equals the user_id used to get the token.
  • Test action 3: Get the user_name from the token response message, the test passes if it equals the user_name used to get the token.
  • Test action 4: Get the method from the token response message, the test passes if it equals ‘password’, the method used to get the token.
Pass / fail criteria

This test case passes if all test action steps execute successfully and all assertions are affirmed. If any test steps fails to execute successfully or any of the assertions is not met, the test case fails.
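The four checks above can be sketched offline against a trimmed Identity v3 token body; the helper, field names and values are illustrative, not the Tempest implementation:

```python
# Illustrative check of Test actions 1-4: the token response must carry
# a non-empty string token id, echo back the requesting user, and report
# 'password' as the auth method. The body is a hypothetical, trimmed
# Identity v3 token response.
def token_response_is_valid(token_id, token_body, user_id, user_name):
    user = token_body["token"]["user"]
    return (isinstance(token_id, str) and token_id != ""
            and user["id"] == user_id
            and user["name"] == user_name
            and "password" in token_body["token"]["methods"])

body = {"token": {"methods": ["password"],
                  "user": {"id": "u-1", "name": "tester"}}}
assert token_response_is_valid("gAAAAAB-sample-token", body, "u-1", "tester")
```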

Post conditions

None

VIM image operations test specification

Scope

The VIM image test area evaluates the ability of the system under test to support VIM image operations. The test cases documented here are the Image API test cases in the OpenStack Interop guideline 2016.08 as implemented by the RefStack client. These test cases evaluate basic OpenStack (as a VIM) image operations, including image creation, image listing, image update and image deletion capabilities, using the Glance v2 API.

References

Definitions and abbreviations

The following terms and abbreviations are used in conjunction with this test area

  • API - Application Programming Interface
  • CRUD - Create, Read, Update, and Delete
  • NFVi - Network Functions Virtualization infrastructure
  • VIM - Virtual Infrastructure Manager

System Under Test (SUT)

The system under test is assumed to be the NFVi and VIM in operation on a Pharos compliant infrastructure.

Test Area Structure

The test area is structured based on VIM image operations. Each test case is able to run independently, i.e. irrespective of the state created by a previous test.

For brevity, the test cases in this test area are summarized together based on the operations they are testing.

Test Descriptions

API Used and Reference

Images: https://developer.openstack.org/api-ref/image/v2/

  • create image
  • delete image
  • show image details
  • show images
  • show image schema
  • show images schema
  • upload binary image data
  • add image tag
  • delete image tag
Image get tests using the Glance v2 API
Test case specification

tempest.api.image.v2.test_images.ListImagesTest.test_get_image_schema
tempest.api.image.v2.test_images.ListImagesTest.test_get_images_schema
tempest.api.image.v2.test_images_negative.ImagesNegativeTest.test_get_delete_deleted_image
tempest.api.image.v2.test_images_negative.ImagesNegativeTest.test_get_image_null_id
tempest.api.image.v2.test_images_negative.ImagesNegativeTest.test_get_non_existent_image

tempest.api.image.v2.test_images.ListUserImagesTest.test_get_image_schema
tempest.api.image.v2.test_images.ListUserImagesTest.test_get_images_schema

Note: the latter two test cases are aliases of the former two, respectively. Aliases should always be included so that the test run is Tempest version agnostic and can be used to test different versions of OpenStack.

Test preconditions

Glance is available.

Basic test flow execution description and pass/fail criteria
  • Test action 1: Create 6 images and store their ids in a created images list.
  • Test action 2: Use image v2 API to show image schema and check the body of the response.
  • Test assertion 1: In the body of the response, the value of the key ‘name’ is ‘image’.
  • Test action 3: Use image v2 API to show images schema and check the body of the response.
  • Test assertion 2: In the body of the response, the value of the key ‘name’ is ‘images’.
  • Test action 4: Create an image with name ‘test’, container_formats ‘bare’ and disk_formats ‘raw’. Delete this image with its id and then try to show it with its id. Delete this deleted image again with its id and check the API’s response code.
  • Test assertion 3: The operations of showing and deleting a deleted image with its id both get 404 response code.
  • Test action 5: Use a null image id to show an image and check the API’s response code.
  • Test assertion 4: The API’s response code is 404.
  • Test action 6: Generate a random uuid and use it as the image id to show the image.
  • Test assertion 5: The API’s response code is 404.
  • Test action 7: Delete the 6 images with the stored ids. Show all images and check whether the 6 images’ ids are not in the show list.
  • Test assertion 6: The 6 images’ ids are not found in the show list.

The first two test cases evaluate the ability to use the Glance v2 API to show the image and images schemas. The latter three test cases evaluate the ability to use the Glance v2 API to show images given a deleted image id, a null image id and a non-existent image id. Specifically, these test cases verify that:

  • Glance image get API can show the image and images schema.
  • Glance image get API can’t show an image with a deleted image id.
  • Glance image get API can’t show an image with a null image id.
  • Glance image get API can’t show an image with a non-existing image id.

In order to pass this test, all test assertions listed in the test execution above need to pass.
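The 404 semantics exercised by test assertions 3-5 can be sketched with a hypothetical in-memory image store. The class and method names below are illustrative placeholders, not the real Glance API; the point is that a deleted, null, or non-existent image id all map to a 404-style result.

```python
# Hypothetical sketch of the "show image" semantics above: showing or
# deleting an image with a deleted, null, or unknown id yields 404.
# ImageStore and its methods are illustrative, not the Glance implementation.
import uuid

class ImageStore:
    def __init__(self):
        self._images = {}

    def create(self, name, container_format, disk_format):
        image_id = str(uuid.uuid4())
        self._images[image_id] = {"id": image_id, "name": name,
                                  "container_format": container_format,
                                  "disk_format": disk_format}
        return image_id

    def show(self, image_id):
        # Deleted, null and unknown ids all return 404 (test assertions 3-5)
        if image_id is None or image_id not in self._images:
            return 404, None
        return 200, self._images[image_id]

    def delete(self, image_id):
        if image_id is None or image_id not in self._images:
            return 404
        del self._images[image_id]
        return 204

store = ImageStore()
img = store.create("test", "bare", "raw")
assert store.delete(img) == 204
assert store.show(img)[0] == 404                # show a deleted image id
assert store.delete(img) == 404                 # delete a deleted image id
assert store.show(None)[0] == 404               # null image id
assert store.show(str(uuid.uuid4()))[0] == 404  # random, non-existent uuid
```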

Post conditions

None

CRUD image operations in Images API v2
Test case specification

tempest.api.image.v2.test_images.ListImagesTest.test_list_no_params

tempest.api.image.v2.test_images.ListImagesTest.test_index_no_params tempest.api.image.v2.test_images.ListUserImagesTest.test_list_no_params

Note: the latter two test cases are aliases of the former one. Aliases should always be included so that test runs are Tempest-version agnostic and can be used to test different versions of OpenStack.

Test preconditions

Glance is available.

Basic test flow execution description and pass/fail criteria
  • Test action 1: Create 6 images and store their ids in a created images list.
  • Test action 2: List all images and check whether the ids listed are in the created images list.
  • Test assertion 1: The ids retrieved from the list images API are in the created images list.

This test case evaluates the ability to use Glance v2 API to list images. Specifically it verifies that:

  • Glance image API can show the images.

In order to pass this test, all test assertions listed in the test execution above need to pass.
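The check behind test assertion 1 reduces to a subset test: every id stored at creation time must appear among the ids returned by the list call. A minimal sketch, with illustrative helper and field names rather than the real Tempest helpers:

```python
# Minimal sketch of test assertion 1: every created image id must appear
# in the ids returned by the list-images call. Names are illustrative.
def ids_all_listed(created_ids, listed_images):
    listed_ids = {img["id"] for img in listed_images}
    return set(created_ids).issubset(listed_ids)

listed = [{"id": f"img-{i}"} for i in range(10)]
created = ["img-2", "img-5", "img-9"]
assert ids_all_listed(created, listed)
assert not ids_all_listed(["img-42"], listed)
```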

Post conditions

None

Image list tests using the Glance v2 API
Test case specification

tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_container_format tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_disk_format tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_limit tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_min_max_size tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_size tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_status tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_visibility

tempest.api.image.v2.test_images.ListUserImagesTest.test_list_images_param_container_format tempest.api.image.v2.test_images.ListUserImagesTest.test_list_images_param_disk_format tempest.api.image.v2.test_images.ListUserImagesTest.test_list_images_param_limit tempest.api.image.v2.test_images.ListUserImagesTest.test_list_images_param_min_max_size tempest.api.image.v2.test_images.ListUserImagesTest.test_list_images_param_size tempest.api.image.v2.test_images.ListUserImagesTest.test_list_images_param_status tempest.api.image.v2.test_images.ListUserImagesTest.test_list_images_param_visibility

Note: the latter 7 test cases are aliases of the former 7, respectively. Aliases should always be included so that test runs are Tempest-version agnostic and can be used to test different versions of OpenStack.

Test preconditions

Glance is available.

Basic test flow execution description and pass/fail criteria
  • Test action 1: Create 6 images with a random size ranging from 1024 to 4096 and visibility ‘private’; set their (container_format, disk_format) pair to be (ami, ami), (ami, ari), (ami, aki), (ami, vhd), (ami, vmdk) and (ami, raw); store their ids in a list and upload the binary images data.
  • Test action 2: Use Glance v2 API to list all images whose container_format is ‘ami’ and store the response details in a list.
  • Test assertion 1: The list is not empty and all the values of container_format in the list are ‘ami’.
  • Test action 3: Use Glance v2 API to list all images whose disk_format is ‘raw’ and store the response details in a list.
  • Test assertion 2: The list is not empty and all the values of disk_format in the list are ‘raw’.
  • Test action 4: Use Glance v2 API to list one image by setting limit to be 1 and store the response details in a list.
  • Test assertion 3: The length of the list is one.
  • Test action 5: Use Glance v2 API to list images by setting size_min and size_max, and store the response images’ sizes in a list. Choose the first image’s size as the median, size_min is median-500 and size_max is median+500.
  • Test assertion 4: All sizes in the list are no less than size_min and no more than size_max.
  • Test action 6: Use Glance v2 API to show the first created image with its id and get its size from the response. Use Glance v2 API to list images whose size is equal to this size and store the response details in a list.
  • Test assertion 5: All sizes of the images in the list are equal to the size used to list the images.
  • Test action 7: Use Glance v2 API to list the images whose status is active and store the response details in a list.
  • Test assertion 6: All status of images in the list are active.
  • Test action 8: Use Glance v2 API to list the images whose visibility is private and store the response details in a list.
  • Test assertion 7: All images’ values of visibility in the list are private.
  • Test action 9: Delete the 6 images with the stored ids. Show images and check whether the 6 ids are not in the show list.
  • Test assertion 8: The stored 6 ids are not found in the show list.

This test case evaluates the ability to use Glance v2 API to list images with different parameters. Specifically it verifies that:

  • Glance image API can list images filtered by container_format.
  • Glance image API can list images filtered by disk_format.
  • Glance image API can limit the number of images returned.
  • Glance image API can list images filtered by size_min and size_max.
  • Glance image API can list images filtered by an exact size.
  • Glance image API can list images filtered by status.
  • Glance image API can list images filtered by visibility.

In order to pass this test, all test assertions listed in the test execution above need to pass.
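The filter semantics exercised above (container_format, disk_format, limit, size_min/size_max, status, visibility) can be mimicked locally with a small sketch. This is not the Glance implementation; the function and test data are illustrative, mirroring only the filter behavior the assertions rely on.

```python
# Hedged sketch of the list-image filters above, applied client-side to an
# in-memory image list. Not the real Glance server logic.
def list_images(images, limit=None, size_min=None, size_max=None, **filters):
    result = []
    for img in images:
        if any(img.get(k) != v for k, v in filters.items()):
            continue
        if size_min is not None and img["size"] < size_min:
            continue
        if size_max is not None and img["size"] > size_max:
            continue
        result.append(img)
    return result[:limit] if limit is not None else result

# Six images shaped like those in test action 1 (sizes are illustrative)
images = [
    {"id": i, "container_format": "ami", "disk_format": d,
     "size": 1024 + 500 * i, "status": "active", "visibility": "private"}
    for i, d in enumerate(["ami", "ari", "aki", "vhd", "vmdk", "raw"])
]

assert all(i["container_format"] == "ami"
           for i in list_images(images, container_format="ami"))
assert [i["disk_format"] for i in list_images(images, disk_format="raw")] == ["raw"]
assert len(list_images(images, limit=1)) == 1
median = images[0]["size"]
for img in list_images(images, size_min=median - 500, size_max=median + 500):
    assert median - 500 <= img["size"] <= median + 500
```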

Post conditions

None

Image update tests using the Glance v2 API
Test case specification

tempest.api.image.v2.test_images.BasicOperationsImagesTest.test_update_image tempest.api.image.v2.test_images_tags.ImagesTagsTest.test_update_delete_tags_for_image tempest.api.image.v2.test_images_tags_negative.ImagesTagsNegativeTest.test_update_tags_for_non_existing_image

Test preconditions

Glance is available.

Basic test flow execution description and pass/fail criteria
  • Test action 1: Create an image with container_formats ‘ami’, disk_formats ‘ami’ and visibility ‘private’ and store its id returned in the response. Check whether the status of the created image is ‘queued’.
  • Test assertion 1: The status of the created image is ‘queued’.
  • Test action 2: Use the stored image id to upload the binary image data and update this image’s name. Show this image with the stored id. Check if the stored id and name used to update the image are equal to the id and name in the show list.
  • Test assertion 2: The id and name returned in the show list are equal to the stored id and name used to update the image.
  • Test action 3: Create an image with container_formats ‘bare’, disk_formats ‘raw’ and visibility ‘private’ and store its id returned in the response.
  • Test action 4: Use the stored id to add a tag. Show the image with the stored id and check if the tag used to add is in the image’s tags returned in the show list.
  • Test assertion 3: The tag used to add into the image is in the show list.
  • Test action 5: Use the stored id to delete this tag. Show the image with the stored id and check if the tag used to delete is not in the show list.
  • Test assertion 4: The tag used to delete from the image is not in the show list.
  • Test action 6: Generate a random uuid as the image id. Use the image id to add a tag into the image’s tags.
  • Test assertion 5: The API’s response code is 404.
  • Test action 7: Delete the images created in test action 1 and 3. Show the images and check whether the ids are not in the show list.
  • Test assertion 6: The two ids are not found in the show list.

This test case evaluates the ability to use Glance v2 API to update images with different parameters. Specifically it verifies that:

  • Glance image API can update image’s name with the existing image id.
  • Glance image API can update image’s tags with the existing image id.
  • Glance image API can’t update image’s tags with a non-existing image id.

In order to pass this test, all test assertions listed in the test execution above need to pass.
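The tag behavior in test assertions 3-5 can be sketched with a hypothetical in-memory store: adding and deleting a tag on an existing image succeeds, while tagging a non-existent image id yields 404. Function names and return codes are illustrative, not the real Glance API.

```python
# Illustrative sketch of the tag operations above. Not Glance itself.
import uuid

images = {"img-1": {"id": "img-1", "tags": set()}}

def add_tag(image_id, tag):
    if image_id not in images:
        return 404          # tagging a non-existent image (assertion 5)
    images[image_id]["tags"].add(tag)
    return 204

def delete_tag(image_id, tag):
    if image_id not in images or tag not in images[image_id]["tags"]:
        return 404
    images[image_id]["tags"].discard(tag)
    return 204

assert add_tag("img-1", "gold") == 204
assert "gold" in images["img-1"]["tags"]          # assertion 3
assert delete_tag("img-1", "gold") == 204
assert "gold" not in images["img-1"]["tags"]      # assertion 4
assert add_tag(str(uuid.uuid4()), "gold") == 404  # assertion 5
```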

Post conditions

None

Image deletion tests using the Glance v2 API
Test case specification

tempest.api.image.v2.test_images.BasicOperationsImagesTest.test_delete_image tempest.api.image.v2.test_images_negative.ImagesNegativeTest.test_delete_image_null_id tempest.api.image.v2.test_images_negative.ImagesNegativeTest.test_delete_non_existing_image tempest.api.image.v2.test_images_tags_negative.ImagesTagsNegativeTest.test_delete_non_existing_tag

Test preconditions

Glance is available.

Basic test flow execution description and pass/fail criteria
  • Test action 1: Create an image with container_formats ‘ami’, disk_formats ‘ami’ and visibility ‘private’. Use the id of the created image to delete the image. List all images and check whether this id is in the list.
  • Test assertion 1: The id of the created image is not found in the list of all images after the deletion operation.
  • Test action 2: Delete images with a null id and check the API’s response code.
  • Test assertion 2: The API’s response code is 404.
  • Test action 3: Generate a random uuid and delete images with this uuid as image id. Check the API’s response code.
  • Test assertion 3: The API’s response code is 404.
  • Test action 4: Create an image with container_formats ‘bare’, disk_formats ‘raw’ and visibility ‘private’. Delete this image’s tag with the image id and a random tag. Check the API’s response code.
  • Test assertion 4: The API’s response code is 404.
  • Test action 5: Delete the images created in test action 1 and 4. List all images and check whether the ids are in the list.
  • Test assertion 5: The two ids are not found in the list.

The first three test cases evaluate the ability to use Glance v2 API to delete images with an existing image id, a null image id and a non-existing image id. The last one evaluates the ability to use the API to delete a non-existing image tag. Specifically it verifies that:

  • Glance image deletion API can delete the image with an existing id.
  • Glance image deletion API can’t delete an image with a null image id.
  • Glance image deletion API can’t delete an image with a non-existing image id.
  • Glance image deletion API can’t delete a non-existing image tag.

In order to pass this test, all test assertions listed in the test execution above need to pass.

Post conditions

None

VIM network operations test specification

Scope

The VIM network test area evaluates the ability of the system under test to support VIM network operations. The test cases documented here are the network API test cases in the OpenStack Interop guideline 2016.8 as implemented by the RefStack client. These test cases evaluate basic OpenStack (as a VIM) network operations, including basic CRUD operations on L2 networks, L2 network ports and security groups.

References

Definitions and abbreviations

The following terms and abbreviations are used in conjunction with this test area

  • API - Application Programming Interface
  • CRUD - Create, Read, Update and Delete
  • NFVi - Network Functions Virtualization infrastructure
  • VIM - Virtual Infrastructure Manager

System Under Test (SUT)

The system under test is assumed to be the NFVi and VIM in operation on a Pharos compliant infrastructure.

Test Area Structure

The test area is structured based on VIM network operations. Each test case is able to run independently, i.e. independent of the state created by previous tests. Specifically, every test performs clean-up operations which return the system to the same state as before the test.

For brevity, the test cases in this test area are summarized together based on the operations they are testing.

Test Descriptions

API Used and Reference

Network: http://developer.openstack.org/api-ref/networking/v2/index.html

  • create network
  • update network
  • list networks
  • show network details
  • delete network
  • create subnet
  • update subnet
  • list subnets
  • show subnet details
  • delete subnet
  • create port
  • bulk create ports
  • update port
  • list ports
  • show port details
  • delete port
  • create security group
  • update security group
  • list security groups
  • show security group
  • delete security group
  • create security group rule
  • list security group rules
  • show security group rule
  • delete security group rule
Basic CRUD operations on L2 networks and L2 network ports
Test case specification

tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_allocation_pools tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_dhcp_enabled tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_gw tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_gw_and_allocation_pools tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_host_routes_and_dns_nameservers tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_without_gateway tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_all_attributes tempest.api.network.test_networks.NetworksTest.test_create_update_delete_network_subnet tempest.api.network.test_networks.NetworksTest.test_delete_network_with_subnet tempest.api.network.test_networks.NetworksTest.test_list_networks tempest.api.network.test_networks.NetworksTest.test_list_networks_fields tempest.api.network.test_networks.NetworksTest.test_list_subnets tempest.api.network.test_networks.NetworksTest.test_list_subnets_fields tempest.api.network.test_networks.NetworksTest.test_show_network tempest.api.network.test_networks.NetworksTest.test_show_network_fields tempest.api.network.test_networks.NetworksTest.test_show_subnet tempest.api.network.test_networks.NetworksTest.test_show_subnet_fields tempest.api.network.test_networks.NetworksTest.test_update_subnet_gw_dns_host_routes_dhcp tempest.api.network.test_ports.PortsTestJSON.test_create_bulk_port tempest.api.network.test_ports.PortsTestJSON.test_create_port_in_allowed_allocation_pools tempest.api.network.test_ports.PortsTestJSON.test_create_update_delete_port tempest.api.network.test_ports.PortsTestJSON.test_list_ports tempest.api.network.test_ports.PortsTestJSON.test_list_ports_fields tempest.api.network.test_ports.PortsTestJSON.test_show_port tempest.api.network.test_ports.PortsTestJSON.test_show_port_fields 
tempest.api.network.test_ports.PortsTestJSON.test_update_port_with_security_group_and_extra_attributes tempest.api.network.test_ports.PortsTestJSON.test_update_port_with_two_security_groups_and_extra_attributes

tempest.api.network.test_networks.NetworksTestJSON.test_create_delete_subnet_with_allocation_pools tempest.api.network.test_networks.NetworksTestJSON.test_create_delete_subnet_with_dhcp_enabled tempest.api.network.test_networks.NetworksTestJSON.test_create_delete_subnet_with_gw tempest.api.network.test_networks.NetworksTestJSON.test_create_delete_subnet_with_gw_and_allocation_pools tempest.api.network.test_networks.NetworksTestJSON.test_create_delete_subnet_with_host_routes_and_dns_nameservers tempest.api.network.test_networks.NetworksTestJSON.test_create_delete_subnet_without_gateway tempest.api.network.test_networks.NetworksTestJSON.test_create_delete_subnet_all_attributes tempest.api.network.test_networks.NetworksTestJSON.test_create_update_delete_network_subnet tempest.api.network.test_networks.NetworksTestJSON.test_delete_network_with_subnet tempest.api.network.test_networks.NetworksTestJSON.test_list_networks tempest.api.network.test_networks.NetworksTestJSON.test_list_networks_fields tempest.api.network.test_networks.NetworksTestJSON.test_list_subnets tempest.api.network.test_networks.NetworksTestJSON.test_list_subnets_fields tempest.api.network.test_networks.NetworksTestJSON.test_show_network tempest.api.network.test_networks.NetworksTestJSON.test_show_network_fields tempest.api.network.test_networks.NetworksTestJSON.test_show_subnet tempest.api.network.test_networks.NetworksTestJSON.test_show_subnet_fields tempest.api.network.test_networks.NetworksTestJSON.test_update_subnet_gw_dns_host_routes_dhcp

Note: the latter 18 test cases are aliases of the first 18, respectively. Aliases should always be included so that test runs are Tempest-version agnostic and can be used to test different versions of OpenStack.

Test preconditions

Neutron is available.

Basic test flow execution description and pass/fail criteria
  • Test action 1: Create a network and create a subnet of this network by setting allocation_pools, then check the details of the subnet and delete the subnet and network
  • Test assertion 1: The allocation_pools returned in the response equals the one used to create the subnet, and the network and subnet ids are not found after deletion
  • Test action 2: Create a network and create a subnet of this network by setting enable_dhcp “True”, then check the details of the subnet and delete the subnet and network
  • Test assertion 2: The enable_dhcp returned in the response is “True” and the network and subnet ids are not found after deletion
  • Test action 3: Create a network and create a subnet of this network by setting gateway_ip, then check the details of the subnet and delete the subnet and network
  • Test assertion 3: The gateway_ip returned in the response equals the one used to create the subnet, and the network and subnet ids are not found after deletion
  • Test action 4: Create a network and create a subnet of this network by setting allocation_pools and gateway_ip, then check the details of the subnet and delete the subnet and network
  • Test assertion 4: The allocation_pools and gateway_ip returned in the response are equal to the ones used to create the subnet, and the network and subnet ids are not found after deletion
  • Test action 5: Create a network and create a subnet of this network by setting host_routes and dns_nameservers, then check the details of the subnet and delete the subnet and network
  • Test assertion 5: The host_routes and dns_nameservers returned in the response are equal to the ones used to create the subnet, and the network and subnet ids are not found after deletion
  • Test action 6: Create a network and create a subnet of this network without setting gateway_ip, then delete the subnet and network
  • Test assertion 6: The network and subnet ids are not found after deletion
  • Test action 7: Create a network and create a subnet of this network by setting enable_dhcp “true”, gateway_ip, ip_version, cidr, host_routes, allocation_pools and dns_nameservers, then check the details of the subnet and delete the subnet and network
  • Test assertion 7: The values returned in the response are equal to the ones used to create the subnet, and the network and subnet ids are not found after deletion
  • Test action 8: Create a network and update this network’s name, then create a subnet and update this subnet’s name, delete the subnet and network
  • Test assertion 8: The network’s status and subnet’s status are both ‘ACTIVE’ after creation, their names equal the new names used to update them, and the network and subnet ids are not found after deletion
  • Test action 9: Create a network and create a subnet of this network, then delete this network
  • Test assertion 9: The subnet has also been deleted after deleting the network
  • Test action 10: Create a network and list all networks
  • Test assertion 10: The network created is found in the list
  • Test action 11: Create a network and list networks with the id and name of the created network
  • Test assertion 11: The id and name of the listed network equal the created network’s id and name
  • Test action 12: Create a network and create a subnet of this network, then list all subnets
  • Test assertion 12: The subnet created is found in the list
  • Test action 13: Create a network and create a subnet of this network, then list subnets with the id and network_id of the created subnet
  • Test assertion 13: The id and network_id of the listed subnet equal those of the created subnet
  • Test action 14: Create a network and show network’s details with the id of the created network
  • Test assertion 14: The id and name returned in the response are equal to the created network’s id and name
  • Test action 15: Create a network and show only the network’s id and name info with the id of the created network
  • Test assertion 15: The keys returned in the response are only id and name, and the values of all the keys equal the network’s id and name
  • Test action 16: Create a network and create a subnet of this network, then show subnet’s details with the id of the created subnet
  • Test assertion 16: The id and cidr info returned in the response are equal to the created subnet’s id and cidr
  • Test action 17: Create a network and create a subnet of this network, then show subnet’s id and network_id info with the id of the created subnet
  • Test assertion 17: The keys returned in the response are just id and network_id, and the values of all the keys equal the subnet’s id and network_id
  • Test action 18: Create a network and create a subnet of this network, then update subnet’s name, host_routes, dns_nameservers and gateway_ip
  • Test assertion 18: The name, host_routes, dns_nameservers and gateway_ip returned in the response are equal to the values used to update the subnet
  • Test action 19: Create 2 networks and bulk create 2 ports with the ids of the created networks
  • Test assertion 19: The network_id of each port equals the one used to create the port and the admin_state_up of each port is True
  • Test action 20: Create a network and create a subnet of this network by setting allocation_pools, then create a port with the created network’s id
  • Test assertion 20: The ip_address of the created port is in the range of the allocation_pools
  • Test action 21: Create a network and create a port with its id, then update the port’s name and set its admin_state_up to be False
  • Test assertion 21: The name returned in the response equals the name used to update the port and the port’s admin_state_up is False
  • Test action 22: Create a network and create a port with its id, then list all ports
  • Test assertion 22: The created port is found in the list
  • Test action 23: Create a network and create a port with its id, then list ports with the id and mac_address of the created port
  • Test assertion 23: The created port is found in the list
  • Test action 24: Create a network and create a port with its id, then show the port’s details
  • Test assertion 24: The key ‘id’ is in the details
  • Test action 25: Create a network and create a port with its id, then show the port’s id and mac_address info with the port’s id
  • Test assertion 25: The keys returned in the response are just id and mac_address, and the values of all the keys equal the port’s id and mac_address
  • Test action 26: Create a network, 2 subnets (SUBNET1 and SUBNET2) and 2 security groups (SG1 and SG2), create a port with SG1 and SUBNET1, then update the port’s security group to SG2 and its subnet_id to SUBNET2
  • Test assertion 26: The port’s subnet_id equals SUBNET2’s id and its security_group_ids equal SG2’s id
  • Test action 27: Create a network, 2 subnets (SUBNET1 and SUBNET2) and 3 security groups (SG1, SG2 and SG3), create a port with SG1 and SUBNET1, then update the port’s security group to SG2 and SG3 and its subnet_id to SUBNET2
  • Test assertion 27: The port’s subnet_id equals SUBNET2’s id and its security_group_ids equal the ids of SG2 and SG3

These test cases evaluate the ability to perform basic CRUD operations on L2 networks and L2 network ports. Specifically, they verify that:

  • Subnets can be created successfully by setting different parameters.
  • Subnets can be updated after being created.
  • Ports can be bulk created with network ids.
  • A port’s security group(s) can be updated after the port is created.
  • Networks/subnets/ports can be listed with their ids and other parameters.
  • All details or special fields’ info of networks/subnets/ports can be shown with their ids.
  • Networks/subnets/ports can be successfully deleted.

In order to pass this test, all test assertions listed in the test execution above need to pass.
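Test assertion 20 above checks that the fixed IP a port receives falls inside the subnet's allocation_pools. A small sketch of that range check using the standard-library ipaddress module (the pool values below are illustrative, not taken from the tests):

```python
# Sketch of test assertion 20: the port's ip_address must lie within one
# of the subnet's allocation_pools. Pool data is illustrative.
import ipaddress

def in_allocation_pools(ip, pools):
    addr = ipaddress.ip_address(ip)
    return any(
        ipaddress.ip_address(p["start"]) <= addr <= ipaddress.ip_address(p["end"])
        for p in pools
    )

pools = [{"start": "10.0.0.10", "end": "10.0.0.50"}]
assert in_allocation_pools("10.0.0.25", pools)
assert not in_allocation_pools("10.0.0.2", pools)
```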

Post conditions

N/A

Basic CRUD operations on security groups
Test case specification

tempest.api.network.test_security_groups.SecGroupTest.test_create_list_update_show_delete_security_group tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_additional_args tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_icmp_type_code tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_protocol_integer_value tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_remote_group_id tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_remote_ip_prefix tempest.api.network.test_security_groups.SecGroupTest.test_create_show_delete_security_group_rule tempest.api.network.test_security_groups.SecGroupTest.test_list_security_groups tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_additional_default_security_group_fails tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_duplicate_security_group_rule_fails tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_bad_ethertype tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_bad_protocol tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_bad_remote_ip_prefix tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_invalid_ports tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_non_existent_remote_groupid tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_non_existent_security_group tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_delete_non_existent_security_group 
tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_show_non_existent_security_group tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_show_non_existent_security_group_rule

Test preconditions

Neutron is available.

Basic test flow execution description and pass/fail criteria
  • Test action 1: Create a security group SG1, list all security groups, update the name and description of SG1, show details of SG1 and delete SG1
  • Test assertion 1: SG1 is in the list, the name and description of SG1 equal the ones used to update it, the name and description of SG1 shown in the details equal the ones used to update it, and SG1’s id is not found after deletion
  • Test action 2: Create a security group SG1, and create a rule with protocol ‘tcp’, port_range_min and port_range_max
  • Test assertion 2: The values returned in the response equal the ones used to create the rule
  • Test action 3: Create a security group SG1, and create a rule with protocol ‘icmp’ and icmp_type_codes
  • Test assertion 3: The values returned in the response equal the ones used to create the rule
  • Test action 4: Create a security group SG1, and create a rule with protocol ‘17’
  • Test assertion 4: The values returned in the response equal the ones used to create the rule
  • Test action 5: Create a security group SG1, and create a rule with protocol ‘udp’, port_range_min, port_range_max and remote_group_id
  • Test assertion 5: The values returned in the response equal the ones used to create the rule
  • Test action 6: Create a security group SG1, and create a rule with protocol ‘tcp’, port_range_min, port_range_max and remote_ip_prefix
  • Test assertion 6: The values returned in the response equal the ones used to create the rule
  • Test action 7: Create a security group SG1, create 3 rules with protocol ‘tcp’, ‘udp’ and ‘icmp’ respectively, show details of each rule, list all rules and delete all rules
  • Test assertion 7: The values in the shown details equal the ones used to create each rule, all rules are found in the list, and no rules are found after deletion
  • Test action 8: List all security groups
  • Test assertion 8: There is one default security group in the list
  • Test action 9: Create a security group whose name is ‘default’
  • Test assertion 9: Failed to create this security group because of name conflict
  • Test action 10: Create a security group SG1, create a rule with protocol ‘tcp’, port_range_min and port_range_max, and create another tcp rule with the same parameters
  • Test assertion 10: Failed to create this security group rule because it duplicates an existing rule
  • Test action 11: Create a security group SG1, and create a rule with ethertype ‘bad_ethertype’
  • Test assertion 11: Failed to create this security group rule because of bad ethertype
  • Test action 12: Create a security group SG1, and create a rule with protocol ‘bad_protocol_name’
  • Test assertion 12: Failed to create this security group rule because of bad protocol
  • Test action 13: Create a security group SG1, and create a rule with remote_ip_prefix ‘92.168.1./24’, ‘192.168.1.1/33’, ‘bad_prefix’ and ‘256’ respectively
  • Test assertion 13: Failed to create these security group rules because of bad remote_ip_prefix
  • Test action 14: Create a security group SG1, and create a tcp rule with (port_range_min, port_range_max) (-16, 80), (80, 79), (80, 65536), (None, 6) and (-16, 65536) respectively
  • Test assertion 14: Failed to create these security group rules because of bad ports
  • Test action 15: Create a security group SG1, and create a tcp rule with remote_group_id ‘bad_group_id’ and a random uuid respectively
  • Test assertion 15: Failed to create these security group rules because of nonexistent remote_group_id
  • Test action 16: Create a security group SG1, and create a rule with a random uuid as security_group_id
  • Test assertion 16: Failed to create this security group rule because of nonexistent security_group_id
  • Test action 17: Generate a random uuid and use this id to delete security group
  • Test assertion 17: Failed to delete security group because of nonexistent security_group_id
  • Test action 18: Generate a random uuid and use this id to show security group
  • Test assertion 18: Failed to show security group because of nonexistent id of security group
  • Test action 19: Generate a random uuid and use this id to show security group rule
  • Test assertion 19: Failed to show security group rule because of nonexistent id of security group rule

These test cases evaluate the ability to perform basic CRUD operations on security groups and security group rules. Specifically, they verify that:

  • Security groups can be created, listed, updated, shown and deleted.
  • Security group rules can be created with different parameters, listed, shown and deleted.
  • An additional default security group cannot be created.
  • Duplicate security group rules cannot be created.
  • Security group rules cannot be created with a bad ethertype, protocol, remote_ip_prefix, ports, remote_group_id or security_group_id.
  • Security groups and security group rules with nonexistent IDs cannot be shown or deleted.

In order to pass this test, all test assertions listed in the test execution above need to pass.
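
The negative assertions above (bad ethertype, protocol, remote_ip_prefix and port ranges) can be sketched as client-side checks. This is an illustrative helper only, not part of the Neutron API; the accepted protocol and ethertype sets are assumptions drawn from the rules exercised in this test case.

```python
import ipaddress

# Assumed value sets, taken from the rules exercised above.
VALID_PROTOCOLS = {"tcp", "udp", "icmp"}
VALID_ETHERTYPES = {"IPv4", "IPv6"}

def validate_rule(protocol=None, ethertype="IPv4",
                  port_range_min=None, port_range_max=None,
                  remote_ip_prefix=None):
    """Return True if the rule parameters would be accepted, False otherwise."""
    if ethertype not in VALID_ETHERTYPES:
        return False                      # assertion 11: bad ethertype
    if protocol is not None and protocol not in VALID_PROTOCOLS:
        return False                      # assertion 12: bad protocol
    if remote_ip_prefix is not None:
        try:
            ipaddress.ip_network(remote_ip_prefix, strict=False)
        except ValueError:
            return False                  # assertion 13: bad remote_ip_prefix
    ports = (port_range_min, port_range_max)
    if any(p is not None for p in ports):
        if any(p is None for p in ports):
            return False                  # assertion 14: e.g. (None, 6)
        if not (0 < port_range_min <= port_range_max <= 65535):
            return False                  # assertion 14: bad port ranges
    return True
```

Each negative rule listed in test actions 11 to 14 is rejected by exactly one of these checks.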

Post conditions

N/A

VIM volume operations test specification

Scope

The VIM volume operations test area evaluates the ability of the system under test to support VIM volume operations. The test cases documented here are the volume API test cases in the OpenStack Interop guideline 2016.8 as implemented by the RefStack client. These test cases will evaluate basic OpenStack (as a VIM) volume operations, including:

  • Volume attach and detach operations
  • Volume service availability zone operations
  • Volume cloning operations
  • Image copy-to-volume operations
  • Volume creation and deletion operations
  • Volume service extension listing
  • Volume metadata operations
  • Volume snapshot operations

References

Definitions and abbreviations

The following terms and abbreviations are used in conjunction with this test area

  • API - Application Programming Interface
  • NFVi - Network Functions Virtualization infrastructure
  • SUT - System Under Test
  • VIM - Virtual Infrastructure Manager
  • VM - Virtual Machine

System Under Test (SUT)

The system under test is assumed to be the NFVi and VIM deployed on a Pharos compliant infrastructure.

Test Area Structure

The test area is structured based on VIM volume API operations. Each test case is able to run independently, i.e. independent of the state created by previous tests. Specifically, every test performs clean-up operations which return the system to the same state as before the test.

For brevity, the test cases in this test area are summarized together based on the operations they are testing.

Test Descriptions

API Used and Reference

Block storage: https://developer.openstack.org/api-ref/block-storage

  • create volume
  • delete volume
  • update volume
  • attach volume to server
  • detach volume from server
  • create volume metadata
  • update volume metadata
  • delete volume metadata
  • list volume
  • create snapshot
  • update snapshot
  • delete snapshot
Test Case 1 - Volume attach and detach operations with the Cinder v2 API
Test case specification

tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_attach_detach_volume_to_instance tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_get_volume_attachment tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_attach_volumes_with_nonexistent_volume_id tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_detach_volumes_with_invalid_volume_id

Test preconditions
  • Volume extension API
Basic test flow execution description and pass/fail criteria
  • Test action 1: Create a server VM1
  • Test action 2: Attach a provided VOL1 to VM1
  • Test assertion 1: Verify VOL1 is in ‘in-use’ status
  • Test action 3: Detach VOL1 from VM1
  • Test assertion 2: Verify VOL1 is in ‘available’ status
  • Test action 4: Create a server VM2
  • Test action 5: Attach a provided VOL2 to VM2 and wait for VOL2 to reach ‘in-use’ status
  • Test action 6: Retrieve VOL2’s attachment information ATTCH_INFO
  • Test assertion 3: Verify ATTCH_INFO is correct
  • Test action 7: Create a server VM3 and wait for VM3 to reach ‘ACTIVE’ status
  • Test action 8: Attach a non-existent volume to VM3
  • Test assertion 4: Verify attach volume failed, a ‘Not Found’ error is returned in the response
  • Test action 9: Detach a volume from a server by using an invalid volume ID
  • Test assertion 5: Verify detach volume failed, a ‘Not Found’ error is returned in the response

This test evaluates the volume API ability of attaching a volume to a server and detaching a volume from a server. Specifically, the test verifies that:

  • Volumes can be attached to and detached from servers.
  • Volume attachment information can be retrieved.
  • Attaching or detaching a volume using an invalid volume ID is not allowed.

In order to pass this test, all test assertions listed in the test execution above need to pass.
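
The attach and detach operations above map onto two Compute API requests on the os-volume_attachments resource. The sketch below only builds the (method, path, body) triples; the server and volume IDs are placeholders and no live HTTP call is made.

```python
def attach_volume_request(server_id, volume_id):
    """POST /servers/{server_id}/os-volume_attachments attaches a volume."""
    return ("POST",
            f"/servers/{server_id}/os-volume_attachments",
            {"volumeAttachment": {"volumeId": volume_id}})

def detach_volume_request(server_id, volume_id):
    """DELETE on the attachment resource detaches the volume; no body is sent."""
    return ("DELETE",
            f"/servers/{server_id}/os-volume_attachments/{volume_id}",
            None)
```

The negative assertions 4 and 5 correspond to issuing these requests with a nonexistent or invalid volume ID, for which the API returns ‘Not Found’.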

Post conditions

N/A

Test Case 2 - Volume service availability zone operations with the Cinder v2 API
Test case specification

tempest.api.volume.test_availability_zone.AvailabilityZoneV2TestJSON.test_get_availability_zone_list

tempest.api.volume.test_availability_zone.AvailabilityZoneTestJSON.test_get_availability_zone_list

Note: the second test case is an alias of the first one. Aliases should always be included so that the test run is Tempest version agnostic and can be used to test different versions of OpenStack.

Test preconditions
  • Volume extension API
Basic test flow execution description and pass/fail criteria
  • Test action 1: List all existent availability zones
  • Test assertion 1: Verify the availability zone list length is greater than 0

This test case evaluates the volume API ability of listing availability zones. Specifically, the test verifies that:

  • Availability zones can be listed.

In order to pass this test, all test assertions listed in the test execution above need to pass.

Post conditions

N/A

Test Case 3 - Volume cloning operations with the Cinder v2 API
Test case specification

tempest.api.volume.test_volumes_get.VolumesV2GetTest.test_volume_create_get_update_delete_as_clone

tempest.api.volume.test_volumes_get.VolumesGetTest.test_volume_create_get_update_delete_as_clone

Note: the second test case is an alias of the first one. Aliases should always be included so that the test run is Tempest version agnostic and can be used to test different versions of OpenStack.

Test preconditions
  • Volume extension API
  • Cinder volume clones feature is enabled
Basic test flow execution description and pass/fail criteria
  • Test action 1: Create a volume VOL1
  • Test action 2: Create a volume VOL2 from source volume VOL1 with a specific name and metadata
  • Test action 3: Wait for VOL2 to reach ‘available’ status
  • Test assertion 1: Verify the name of VOL2 is correct
  • Test action 4: Retrieve VOL2’s detail information
  • Test assertion 2: Verify the retrieved volume name, ID and metadata are the same as VOL2
  • Test assertion 3: Verify VOL2’s bootable flag is ‘False’
  • Test action 5: Update the name of VOL2 with the original value
  • Test action 6: Update the name of VOL2 with a new value
  • Test assertion 4: Verify the name of VOL2 is updated successfully
  • Test action 7: Create a volume VOL3 with no name specified and a description containing the characters '@#$%^*'
  • Test assertion 5: Verify VOL3 is created successfully
  • Test action 8: Update the name and description of VOL3 with the original values
  • Test assertion 6: Verify VOL3’s bootable flag is ‘False’

This test case evaluates the volume API ability of creating a cloned volume from a source volume, getting cloned volume detail information and updating cloned volume attributes.

Specifically, the test verifies that:

  • Cloned volume can be created from a source volume.
  • Cloned volume detail information can be retrieved.
  • Cloned volume detail information can be updated.

In order to pass this test, all test assertions listed in the test execution above need to pass.
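
Creating VOL2 from VOL1 corresponds to a create-volume request whose body carries source_volid instead of an image reference. The sketch below only builds the request body for POST /v2/{project_id}/volumes; IDs and metadata are placeholders.

```python
def clone_volume_body(source_volid, name=None, metadata=None):
    """Create-volume body; passing source_volid asks Cinder to clone that volume."""
    volume = {"source_volid": source_volid}
    if name is not None:
        volume["name"] = name          # drives assertion 1 (name is correct)
    if metadata is not None:
        volume["metadata"] = metadata  # drives assertion 2 (metadata matches)
    return {"volume": volume}
```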

Post conditions

N/A

Test Case 4 - Image copy-to-volume operations with the Cinder v2 API
Test case specification

tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_volume_bootable tempest.api.volume.test_volumes_get.VolumesV2GetTest.test_volume_create_get_update_delete_from_image

tempest.api.volume.test_volumes_get.VolumesActionsTest.test_volume_bootable tempest.api.volume.test_volumes_get.VolumesGetTest.test_volume_create_get_update_delete_from_image

Note: the last 2 test cases are aliases of the first 2. Aliases should always be included so that the test run is Tempest version agnostic and can be used to test different versions of OpenStack.

Test preconditions
  • Volume extension API
Basic test flow execution description and pass/fail criteria
  • Test action 1: Set a provided volume VOL1’s bootable flag to ‘True’
  • Test action 2: Retrieve VOL1’s bootable flag
  • Test assertion 1: Verify VOL1’s bootable flag is ‘True’
  • Test action 3: Set a provided volume VOL1’s bootable flag to ‘False’
  • Test action 4: Retrieve VOL1’s bootable flag
  • Test assertion 2: Verify VOL1’s bootable flag is ‘False’
  • Test action 5: Create a bootable volume VOL2 from one image with a specific name and metadata
  • Test action 6: Wait for VOL2 to reach ‘available’ status
  • Test assertion 3: Verify the name of VOL2 is correct
  • Test action 7: Retrieve VOL2’s information
  • Test assertion 4: Verify the retrieved volume name, ID and metadata are the same as VOL2
  • Test assertion 5: Verify VOL2’s bootable flag is ‘True’
  • Test action 8: Update the name of VOL2 with the original value
  • Test action 9: Update the name of VOL2 with a new value
  • Test assertion 6: Verify the name of VOL2 is updated successfully
  • Test action 10: Create a volume VOL3 with no name specified and a description containing the characters '@#$%^*'
  • Test assertion 7: Verify VOL3 is created successfully
  • Test action 11: Update the name and description of VOL3 with the original values
  • Test assertion 8: Verify VOL3’s bootable flag is ‘True’

This test case evaluates the volume API ability of updating a volume’s bootable flag, creating a bootable volume from an image, getting bootable volume detail information and updating a bootable volume.

Specifically, the test verifies that:

  • The volume bootable flag can be set and retrieved.
  • A bootable volume can be created from an image.
  • Bootable volume detail information can be retrieved.
  • Bootable volume detail information can be updated.

In order to pass this test, all test assertions listed in the test execution above need to pass.
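
The bootable-flag updates (test actions 1 and 3) and the volume-from-image creation (test action 5) map onto two Block Storage v2 request bodies. The sketch below only builds those bodies; the image reference is a placeholder.

```python
def set_bootable_body(bootable):
    """Body for POST /v2/{project_id}/volumes/{volume_id}/action."""
    return {"os-set_bootable": {"bootable": bootable}}

def volume_from_image_body(size, image_ref, name=None):
    """Create-volume body; passing imageRef yields a bootable volume (assertion 5)."""
    volume = {"size": size, "imageRef": image_ref}
    if name is not None:
        volume["name"] = name
    return {"volume": volume}
```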

Post conditions

N/A

Test Case 5 - Volume creation and deletion operations with the Cinder v2 API
Test case specification

tempest.api.volume.test_volumes_get.VolumesV2GetTest.test_volume_create_get_update_delete tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_invalid_size tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_nonexistent_source_volid tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_nonexistent_volume_type tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_out_passing_size tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_size_negative tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_size_zero

tempest.api.volume.test_volumes_get.VolumesGetTest.test_volume_create_get_update_delete tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_create_volume_with_invalid_size tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_create_volume_with_nonexistent_source_volid tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_create_volume_with_nonexistent_volume_type

tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_without_passing_size tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_create_volume_without_passing_size

tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_create_volume_with_size_negative tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_create_volume_with_size_zero

Note: test cases 8 to 11 are aliases of the first 4 test cases, test cases 12 and 13 are both aliases of test case 5, and test cases 14 and 15 are aliases of test cases 6 and 7, respectively. Aliases should always be included so that the test run is Tempest version agnostic and can be used to test different versions of OpenStack.

Test preconditions
  • Volume extension API
Basic test flow execution description and pass/fail criteria
  • Test action 1: Create a volume VOL1 with a specific name and metadata
  • Test action 2: Wait for VOL1 to reach ‘available’ status
  • Test assertion 1: Verify the name of VOL1 is correct
  • Test action 3: Retrieve VOL1’s information
  • Test assertion 2: Verify the retrieved volume name, ID and metadata are the same as VOL1
  • Test assertion 3: Verify VOL1’s bootable flag is ‘False’
  • Test action 4: Update the name of VOL1 with the original value
  • Test action 5: Update the name of VOL1 with a new value
  • Test assertion 4: Verify the name of VOL1 is updated successfully
  • Test action 6: Create a volume VOL2 with no name specified and a description containing the characters '@#$%^*'
  • Test assertion 5: Verify VOL2 is created successfully
  • Test action 7: Update the name and description of VOL2 with the original values
  • Test assertion 6: Verify VOL2’s bootable flag is ‘False’
  • Test action 8: Create a volume with an invalid size ‘#$%’
  • Test assertion 7: Verify create volume failed, a bad request error is returned in the response
  • Test action 9: Create a volume with a nonexistent source volume
  • Test assertion 8: Verify create volume failed, a ‘Not Found’ error is returned in the response
  • Test action 10: Create a volume with a nonexistent volume type
  • Test assertion 9: Verify create volume failed, a ‘Not Found’ error is returned in the response
  • Test action 11: Create a volume without passing a volume size
  • Test assertion 10: Verify create volume failed, a bad request error is returned in the response
  • Test action 12: Create a volume with a negative volume size
  • Test assertion 11: Verify create volume failed, a bad request error is returned in the response
  • Test action 13: Create a volume with volume size ‘0’
  • Test assertion 12: Verify create volume failed, a bad request error is returned in the response

This test case evaluates the volume API ability of creating a volume, getting volume detail information and updating a volume. Specifically, the test verifies that:

  • Volumes can be created, with or without a name specified.
  • Volume detail information can be retrieved and updated.
  • Creating a volume with an invalid size is not allowed.
  • Creating a volume with a nonexistent source volume or volume type is not allowed.
  • Creating a volume without passing a volume size is not allowed.
  • Creating a volume with a negative volume size is not allowed.
  • Creating a volume with volume size ‘0’ is not allowed.

In order to pass this test, all test assertions listed in the test execution above need to pass.
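
The size-related negative assertions (7 and 10 to 12) reduce to a single rule: the size must be present and a positive integer. The helper below is an illustrative client-side sketch of that rule, not part of the Cinder API.

```python
def is_valid_volume_size(size):
    """True only for a positive integer size, mirroring assertions 7 and 10-12."""
    if size is None:
        return False                     # assertion 10: size not passed
    if isinstance(size, bool) or not isinstance(size, int):
        return False                     # assertion 7: invalid size such as '#$%'
    return size > 0                      # assertions 11-12: negative or zero size
```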

Post conditions

N/A

Test Case 6 - Volume service extension listing operations with the Cinder v2 API
Test case specification

tempest.api.volume.test_extensions.ExtensionsV2TestJSON.test_list_extensions

tempest.api.volume.test_extensions.ExtensionsTestJSON.test_list_extensions

Note: the second test case is an alias of the first one. Aliases should always be included so that the test run is Tempest version agnostic and can be used to test different versions of OpenStack.

Test preconditions
  • Volume extension API
  • At least one Cinder extension is configured
Basic test flow execution description and pass/fail criteria
  • Test action 1: List all cinder service extensions
  • Test assertion 1: Verify all extensions are listed in the extension list

This test case evaluates the volume API ability of listing all existent volume service extensions. Specifically, the test verifies that:

  • Cinder service extensions can be listed.

In order to pass this test, all test assertions listed in the test execution above need to pass.

Post conditions

N/A

Test Case 7 - Volume GET operations with the Cinder v2 API
Test case specification

tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_get_invalid_volume_id tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_get_volume_without_passing_volume_id tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_volume_get_nonexistent_volume_id

tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_get_invalid_volume_id tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_get_volume_without_passing_volume_id tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_volume_get_nonexistent_volume_id

Note: the latter 3 test cases are aliases of the first 3. Aliases should always be included so that the test run is Tempest version agnostic and can be used to test different versions of OpenStack.

Test preconditions
  • Volume extension API
Basic test flow execution description and pass/fail criteria
  • Test action 1: Retrieve a volume with an invalid volume ID
  • Test assertion 1: Verify retrieve volume failed, a ‘Not Found’ error is returned in the response
  • Test action 2: Retrieve a volume with an empty volume ID
  • Test assertion 2: Verify retrieve volume failed, a ‘Not Found’ error is returned in the response
  • Test action 3: Retrieve a volume with a nonexistent volume ID
  • Test assertion 3: Verify retrieve volume failed, a ‘Not Found’ error is returned in the response

This test case evaluates the volume API ability of getting volumes. Specifically, the test verifies that:

  • Getting a volume with an invalid, empty or nonexistent volume ID is not allowed.

In order to pass this test, all test assertions listed in the test execution above need to pass.

Post conditions

N/A

Test Case 8 - Volume listing operations with the Cinder v2 API
Test case specification

tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_by_name tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_details_by_name tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_param_display_name_and_status tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_with_detail_param_display_name_and_status tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_with_detail_param_metadata tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_with_details tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_with_param_metadata tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volumes_list_by_availability_zone tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volumes_list_by_status tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volumes_list_details_by_availability_zone tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volumes_list_details_by_status tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_list_volumes_detail_with_invalid_status tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_list_volumes_detail_with_nonexistent_name tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_list_volumes_with_invalid_status tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_list_volumes_with_nonexistent_name tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_details_pagination tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_details_with_multiple_params tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_pagination

tempest.api.volume.test_volumes_list.VolumesListTestJSON.test_volume_list tempest.api.volume.test_volumes_list.VolumesListTestJSON.test_volume_list_by_name tempest.api.volume.test_volumes_list.VolumesListTestJSON.test_volume_list_details_by_name tempest.api.volume.test_volumes_list.VolumesListTestJSON.test_volume_list_param_display_name_and_status tempest.api.volume.test_volumes_list.VolumesListTestJSON.test_volume_list_with_detail_param_display_name_and_status tempest.api.volume.test_volumes_list.VolumesListTestJSON.test_volume_list_with_detail_param_metadata tempest.api.volume.test_volumes_list.VolumesListTestJSON.test_volume_list_with_details tempest.api.volume.test_volumes_list.VolumesListTestJSON.test_volume_list_with_param_metadata tempest.api.volume.test_volumes_list.VolumesListTestJSON.test_volume_list_by_availability_zone tempest.api.volume.test_volumes_list.VolumesListTestJSON.test_volume_list_by_status tempest.api.volume.test_volumes_list.VolumesListTestJSON.test_volume_list_details_by_availability_zone tempest.api.volume.test_volumes_list.VolumesListTestJSON.test_volume_list_details_by_status tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_list_volumes_detail_with_invalid_status tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_list_volumes_detail_with_nonexistent_name tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_list_volumes_with_invalid_status tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_list_volumes_with_nonexistent_name tempest.api.volume.v2.test_volumes_list.VolumesListTestJSON.test_volume_list_details_pagination tempest.api.volume.v2.test_volumes_list.VolumesListTestJSON.test_volume_list_details_with_multiple_params tempest.api.volume.v2.test_volumes_list.VolumesListTestJSON.test_volume_list_pagination

Note: the latter 19 test cases are aliases of the first 19. Aliases should always be included so that the test run is Tempest version agnostic and can be used to test different versions of OpenStack.

Test preconditions
  • Volume extension API
  • The backing file for the volume group that Nova uses has space for at least 3 1G volumes
Basic test flow execution description and pass/fail criteria
  • Test action 1: List all existent volumes
  • Test assertion 1: Verify the volume list is complete
  • Test action 2: List existent volumes and filter the volume list by volume name
  • Test assertion 2: Verify the length of filtered volume list is 1 and the retrieved volume is correct
  • Test action 3: List existent volumes in detail and filter the volume list by volume name
  • Test assertion 3: Verify the length of filtered volume list is 1 and the retrieved volume is correct
  • Test action 4: List existent volumes and filter the volume list by volume name and status ‘available’
  • Test assertion 4: Verify the name and status parameters of the fetched volume are correct
  • Test action 5: List existent volumes in detail and filter the volume list by volume name and status ‘available’
  • Test assertion 5: Verify the name and status parameters of the fetched volume are correct
  • Test action 6: List all existent volumes in detail and filter the volume list by volume metadata
  • Test assertion 6: Verify the metadata parameter of the fetched volume is correct
  • Test action 7: List all existent volumes in detail
  • Test assertion 7: Verify the volume list is complete
  • Test action 8: List all existent volumes and filter the volume list by volume metadata
  • Test assertion 8: Verify the metadata parameter of the fetched volume is correct
  • Test action 9: List existent volumes and filter the volume list by availability zone
  • Test assertion 9: Verify the availability zone parameter of the fetched volume is correct
  • Test action 10: List all existent volumes and filter the volume list by volume status ‘available’
  • Test assertion 10: Verify the status parameter of the fetched volume is correct
  • Test action 11: List existent volumes in detail and filter the volume list by availability zone
  • Test assertion 11: Verify the availability zone parameter of the fetched volume is correct
  • Test action 12: List all existent volumes in detail and filter the volume list by volume status ‘available’
  • Test assertion 12: Verify the status parameter of the fetched volume is correct
  • Test action 13: List all existent volumes in detail and filter the volume list by an invalid volume status ‘null’
  • Test assertion 13: Verify the filtered volume list is empty
  • Test action 14: List all existent volumes in detail and filter the volume list by a non-existent volume name
  • Test assertion 14: Verify the filtered volume list is empty
  • Test action 15: List all existent volumes and filter the volume list by an invalid volume status ‘null’
  • Test assertion 15: Verify the filtered volume list is empty
  • Test action 16: List all existent volumes and filter the volume list by a non-existent volume name
  • Test assertion 16: Verify the filtered volume list is empty
  • Test action 17: List all existent volumes in detail and paginate the volume list by desired volume IDs
  • Test assertion 17: Verify only the desired volumes are listed in the filtered volume list
  • Test action 18: List all existent volumes in detail and filter the volume list by volume status ‘available’ and display limit ‘2’
  • Test action 19: Sort the filtered volume list by IDs in ascending order
  • Test assertion 18: Verify the length of filtered volume list is 2
  • Test assertion 19: Verify the status of retrieved volumes is correct
  • Test assertion 20: Verify the filtered volume list is sorted correctly
  • Test action 20: List all existent volumes in detail and filter the volume list by volume status ‘available’ and display limit ‘2’
  • Test action 21: Sort the filtered volume list by IDs in descending order
  • Test assertion 21: Verify the length of filtered volume list is 2
  • Test assertion 22: Verify the status of retrieved volumes is correct
  • Test assertion 23: Verify the filtered volume list is sorted correctly
  • Test action 22: List all existent volumes and paginate the volume list by desired volume IDs
  • Test assertion 24: Verify only the desired volumes are listed in the filtered volume list

This test case evaluates the volume API ability of getting a list of volumes and filtering the volume list. Specifically, the test verifies that:

  • Getting a list of volumes, with or without detail, is successful.
  • Filtering the volume list by name, status, metadata or availability zone, with or without detail, is successful.
  • Volume list pagination functionality works.
  • Getting a detailed list of volumes using combined filter conditions is successful.

In order to pass this test, all test assertions listed in the test execution above need to pass.
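
The filter, sort and limit semantics checked above can be illustrated with a pure in-memory sketch. The volume records below are illustrative placeholders, not live API responses, and the function is not part of any OpenStack client.

```python
SAMPLE_VOLUMES = [
    {"id": "v1", "name": "vol-a", "status": "available", "metadata": {"k": "1"}},
    {"id": "v2", "name": "vol-b", "status": "in-use", "metadata": {"k": "2"}},
    {"id": "v3", "name": "vol-c", "status": "available", "metadata": {"k": "1"}},
]

def list_volumes(volumes, name=None, status=None, metadata=None,
                 limit=None, sort_dir="asc"):
    """Filter by exact name/status/metadata, sort by ID, then apply the limit."""
    result = [v for v in volumes
              if (name is None or v["name"] == name)
              and (status is None or v["status"] == status)
              and (metadata is None or v["metadata"] == metadata)]
    result.sort(key=lambda v: v["id"], reverse=(sort_dir == "desc"))
    return result if limit is None else result[:limit]
```

Filtering by a nonexistent name or the invalid status ‘null’ simply yields an empty list, matching assertions 13 to 16; combining status ‘available’, a limit of 2 and a sort direction matches assertions 18 to 23.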

Post conditions

N/A

Test Case 9 - Volume metadata operations with the Cinder v2 API
Test case specification

tempest.api.volume.test_volume_metadata.VolumesV2MetadataTest.test_create_get_delete_volume_metadata tempest.api.volume.test_volume_metadata.VolumesV2MetadataTest.test_update_volume_metadata_item

tempest.api.volume.test_volume_metadata.VolumesMetadataTest.test_crud_volume_metadata tempest.api.volume.test_volume_metadata.VolumesV2MetadataTest.test_crud_volume_metadata

tempest.api.volume.test_volume_metadata.VolumesMetadataTest.test_update_volume_metadata_item tempest.api.volume.test_volume_metadata.VolumesMetadataTest.test_update_show_volume_metadata_item

Note: test cases 3 and 4 are aliases of the first test case, and the last 2 test cases are aliases of the second test case. Aliases should always be included so that the test run is Tempest version agnostic and can be used to test different versions of OpenStack.

Test preconditions
  • Volume extension API
Basic test flow execution description and pass/fail criteria
  • Test action 1: Create metadata for a provided volume VOL1
  • Test action 2: Get the metadata of VOL1
  • Test assertion 1: Verify the metadata of VOL1 is correct
  • Test action 3: Update the metadata of VOL1
  • Test assertion 2: Verify the metadata of VOL1 is updated
  • Test action 4: Delete one metadata item ‘key1’ of VOL1
  • Test assertion 3: Verify the metadata item ‘key1’ is deleted
  • Test action 5: Create metadata for a provided volume VOL2
  • Test assertion 4: Verify the metadata of VOL2 is correct
  • Test action 6: Update one metadata item ‘key3’ of VOL2
  • Test assertion 5: Verify the metadata of VOL2 is updated

This test case evaluates the volume API ability of creating metadata for a volume, getting the metadata of a volume, updating volume metadata and deleting a metadata item of a volume. Specifically, the test verifies that:

  • Metadata can be created for a volume.
  • Volume metadata can be retrieved.
  • Volume metadata and individual metadata items can be updated.
  • A metadata item of a volume can be deleted.

In order to pass this test, all test assertions listed in the test execution above need to pass.
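
The metadata operations above map onto Block Storage v2 request bodies: whole-metadata operations wrap the mapping in "metadata", while a single-item update (test action 6) wraps it in "meta". The sketch below only builds the bodies; keys and values are placeholders.

```python
def create_metadata_body(metadata):
    """Body for POST /v2/{project_id}/volumes/{volume_id}/metadata."""
    return {"metadata": metadata}

def update_metadata_item_body(key, value):
    """Body for PUT /v2/{project_id}/volumes/{volume_id}/metadata/{key}."""
    return {"meta": {key: value}}
```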

Post conditions

N/A

Test Case 10 - Verification of read-only status on volumes with the Cinder v2 API
Test case specification

tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_volume_readonly_update

tempest.api.volume.test_volumes_actions.VolumesActionsTest.test_volume_readonly_update

Note: the second test case is an alias of the first one. Aliases should always be included so that the test run is Tempest version agnostic and can be used to test different versions of OpenStack.

Test preconditions
  • Volume extension API
Basic test flow execution description and pass/fail criteria
  • Test action 1: Update a provided volume VOL1’s read-only access mode to ‘True’
  • Test assertion 1: Verify VOL1 is in read-only access mode
  • Test action 2: Update a provided volume VOL1’s read-only access mode to ‘False’
  • Test assertion 2: Verify VOL1 is not in read-only access mode

This test case evaluates the volume API ability of setting and updating volume read-only access mode. Specifically, the test verifies that:

  • Volume read-only access mode can be set and updated.

In order to pass this test, all test assertions listed in the test execution above need to pass.
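
Test actions 1 and 2 above issue the same volume action with opposite flags. The sketch below only builds the request body for POST /v2/{project_id}/volumes/{volume_id}/action.

```python
def update_readonly_body(readonly):
    """Volume-action body toggling read-only access mode."""
    return {"os-update_readonly_flag": {"readonly": readonly}}
```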

Post conditions

N/A

Test Case 11 - Volume reservation operations with the Cinder v2 API
Test case specification

tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_reserve_unreserve_volume tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_reserve_volume_with_negative_volume_status tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_reserve_volume_with_nonexistent_volume_id tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_unreserve_volume_with_nonexistent_volume_id

tempest.api.volume.test_volumes_actions.VolumesActionsTest.test_reserve_unreserve_volume tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_reserve_volume_with_negative_volume_status tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_reserve_volume_with_nonexistent_volume_id tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_unreserve_volume_with_nonexistent_volume_id

Note: the last 4 test cases are aliases of the first 4 ones. Aliases should always be included so that the test run is Tempest version agnostic and can be used to test different versions of OpenStack.

Test preconditions
  • Volume extension API
Basic test flow execution description and pass/fail criteria
  • Test action 1: Update a provided volume VOL1 as reserved
  • Test assertion 1: Verify VOL1 is in ‘attaching’ status
  • Test action 2: Update VOL1 as un-reserved
  • Test assertion 2: Verify VOL1 is in ‘available’ status
  • Test action 3: Update a provided volume VOL2 as reserved
  • Test action 4: Update VOL2 as reserved again
  • Test assertion 3: Verify update VOL2 status failed, a bad request error is returned in the response
  • Test action 5: Update VOL2 as un-reserved
  • Test action 6: Update a non-existent volume as reserved by using an invalid volume ID
  • Test assertion 4: Verify update non-existent volume as reserved failed, a ‘Not Found’ error is returned in the response
  • Test action 7: Update a non-existent volume as un-reserved by using an invalid volume ID
  • Test assertion 5: Verify update non-existent volume as un-reserved failed, a ‘Not Found’ error is returned in the response

This test case evaluates the volume API's ability to reserve and unreserve volumes. Specifically, the test verifies that:

  • Volume can be reserved and un-reserved.
  • Update a non-existent volume as reserved is not allowed.
  • Update a non-existent volume as un-reserved is not allowed.

In order to pass this test, all test assertions listed in the test execution above need to pass.
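Reserving and unreserving are likewise Cinder v2 volume actions. A minimal sketch of the payloads and of the outcomes the assertions above expect (the path and constant names are illustrative):

```python
import json

# Both actions are sent as:
#   POST /v2/{project_id}/volumes/{volume_id}/action
RESERVE_BODY = json.dumps({"os-reserve": {}})
UNRESERVE_BODY = json.dumps({"os-unreserve": {}})

# Outcomes exercised by the assertions above:
#  - reserve on an 'available' volume       -> status becomes 'attaching'
#  - unreserve                              -> status returns to 'available'
#  - reserve on an already-reserved volume  -> 400 Bad Request
#  - reserve/unreserve with a bogus ID      -> 404 Not Found
EXPECTED_STATUS_AFTER_RESERVE = "attaching"
EXPECTED_STATUS_AFTER_UNRESERVE = "available"
```

Note that the error cases fall out naturally: a second reserve is rejected because the volume is no longer in a reservable status, while a nonexistent ID addresses a resource that is simply not there.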

Post conditions

N/A

Test Case 12 - Volume snapshot creation/deletion operations with the Cinder v2 API
Test case specification

tempest.api.volume.test_snapshot_metadata.SnapshotV2MetadataTestJSON.test_create_get_delete_snapshot_metadata
tempest.api.volume.test_snapshot_metadata.SnapshotV2MetadataTestJSON.test_update_snapshot_metadata_item
tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_nonexistent_snapshot_id
tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_delete_invalid_volume_id
tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_delete_volume_without_passing_volume_id
tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_volume_delete_nonexistent_volume_id
tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_snapshot_create_get_list_update_delete
tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_volume_from_snapshot
tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_snapshots_list_details_with_params
tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_snapshots_list_with_params
tempest.api.volume.test_volumes_snapshots_negative.VolumesV2SnapshotNegativeTestJSON.test_create_snapshot_with_nonexistent_volume_id
tempest.api.volume.test_volumes_snapshots_negative.VolumesV2SnapshotNegativeTestJSON.test_create_snapshot_without_passing_volume_id

tempest.api.volume.test_snapshot_metadata.SnapshotMetadataTestJSON.test_crud_snapshot_metadata
tempest.api.volume.test_snapshot_metadata.SnapshotV2MetadataTestJSON.test_crud_snapshot_metadata

tempest.api.volume.test_snapshot_metadata.SnapshotMetadataTestJSON.test_update_snapshot_metadata_item
tempest.api.volume.test_snapshot_metadata.SnapshotMetadataTestJSON.test_update_show_snapshot_metadata_item

tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_create_volume_with_nonexistent_snapshot_id
tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_delete_invalid_volume_id
tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_delete_volume_without_passing_volume_id
tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_volume_delete_nonexistent_volume_id
tempest.api.volume.test_volumes_snapshots.VolumesSnapshotTestJSON.test_snapshot_create_get_list_update_delete
tempest.api.volume.test_volumes_snapshots.VolumesSnapshotTestJSON.test_volume_from_snapshot

tempest.api.volume.test_volumes_snapshots_list.VolumesSnapshotListTestJSON.test_snapshots_list_details_with_params
tempest.api.volume.test_volumes_snapshots_list.VolumesV2SnapshotListTestJSON.test_snapshots_list_details_with_params

tempest.api.volume.test_volumes_snapshots_list.VolumesSnapshotListTestJSON.test_snapshots_list_with_params
tempest.api.volume.test_volumes_snapshots_list.VolumesV2SnapshotListTestJSON.test_snapshots_list_with_params

tempest.api.volume.test_volumes_snapshots_negative.VolumesSnapshotNegativeTestJSON.test_create_snapshot_with_nonexistent_volume_id
tempest.api.volume.test_volumes_snapshots_negative.VolumesSnapshotNegativeTestJSON.test_create_snapshot_without_passing_volume_id

Note: test cases 13 and 14 are aliases of test case 1, test cases 15 and 16 are aliases of test case 2, test cases 17 to 22 are aliases of test cases 3 to 8 respectively, test cases 23 and 24 are aliases of test case 9, test cases 25 and 26 are aliases of test case 10, and test cases 27 and 28 are aliases of test cases 11 and 12 respectively. Aliases should always be included so that the test run is Tempest version agnostic and can be used to test different versions of OpenStack.

Test preconditions
  • Volume extension API
Basic test flow execution description and pass/fail criteria
  • Test action 1: Create metadata for a provided snapshot SNAP1
  • Test action 2: Get the metadata of SNAP1
  • Test assertion 1: Verify the metadata of SNAP1 is correct
  • Test action 3: Update the metadata of SNAP1
  • Test assertion 2: Verify the metadata of SNAP1 is updated
  • Test action 4: Delete one metadata item ‘key3’ of SNAP1
  • Test assertion 3: Verify the metadata item ‘key3’ is deleted
  • Test action 5: Create metadata for a provided snapshot SNAP2
  • Test assertion 4: Verify the metadata of SNAP2 is correct
  • Test action 6: Update one metadata item ‘key3’ of SNAP2
  • Test assertion 5: Verify the metadata of SNAP2 is updated
  • Test action 7: Create a volume with a nonexistent snapshot
  • Test assertion 6: Verify create volume failed, a ‘Not Found’ error is returned in the response
  • Test action 8: Delete a volume with an invalid volume ID
  • Test assertion 7: Verify delete volume failed, a ‘Not Found’ error is returned in the response
  • Test action 9: Delete a volume with an empty volume ID
  • Test assertion 8: Verify delete volume failed, a ‘Not Found’ error is returned in the response
  • Test action 10: Delete a volume with a nonexistent volume ID
  • Test assertion 9: Verify delete volume failed, a ‘Not Found’ error is returned in the response
  • Test action 11: Create a snapshot SNAP2 from a provided volume VOL1
  • Test action 12: Retrieve SNAP2’s detail information
  • Test assertion 10: Verify SNAP2 is created from VOL1
  • Test action 13: Update the name and description of SNAP2
  • Test assertion 11: Verify the name and description of SNAP2 are updated in the response body of update snapshot API
  • Test action 14: Retrieve SNAP2’s detail information
  • Test assertion 12: Verify the name and description of SNAP2 are correct
  • Test action 15: Delete SNAP2
  • Test action 16: Create a volume VOL2 with a volume size
  • Test action 17: Create a snapshot SNAP3 from VOL2
  • Test action 18: Create a volume VOL3 from SNAP3 with a bigger volume size
  • Test action 19: Retrieve VOL3’s detail information
  • Test assertion 13: Verify volume size and source snapshot of VOL3 are correct
  • Test action 20: List all snapshots in detail and filter the snapshot list by name
  • Test assertion 14: Verify the filtered snapshot list is correct
  • Test action 21: List all snapshots in detail and filter the snapshot list by status
  • Test assertion 15: Verify the filtered snapshot list is correct
  • Test action 22: List all snapshots in detail and filter the snapshot list by name and status
  • Test assertion 16: Verify the filtered snapshot list is correct
  • Test action 23: List all snapshots and filter the snapshot list by name
  • Test assertion 17: Verify the filtered snapshot list is correct
  • Test action 24: List all snapshots and filter the snapshot list by status
  • Test assertion 18: Verify the filtered snapshot list is correct
  • Test action 25: List all snapshots and filter the snapshot list by name and status
  • Test assertion 19: Verify the filtered snapshot list is correct
  • Test action 26: Create a snapshot from a nonexistent volume by using an invalid volume ID
  • Test assertion 20: Verify create snapshot failed, a ‘Not Found’ error is returned in the response
  • Test action 27: Create a snapshot from a volume by using an empty volume ID
  • Test assertion 21: Verify create snapshot failed, a ‘Not Found’ error is returned in the response

This test case evaluates the volume API's ability to manage snapshots and snapshot metadata. Specifically, the test verifies that:

  • Create metadata for snapshot successfully.
  • Get metadata of snapshot successfully.
  • Update snapshot metadata and metadata item successfully.
  • Delete metadata item of a snapshot successfully.
  • Create a volume from a nonexistent snapshot is not allowed.
  • Delete a volume using an invalid volume ID is not allowed.
  • Delete a volume without passing the volume ID is not allowed.
  • Delete a non-existent volume is not allowed.
  • Create snapshot successfully.
  • Get snapshot’s detail information successfully.
  • Update snapshot attributes successfully.
  • Delete snapshot successfully.
Create a volume from a snapshot, passing a size different from the source, successfully.
  • List snapshot details by display_name and status filters successfully.
  • Create a snapshot from a nonexistent volume is not allowed.
  • Create a snapshot from a volume without passing the volume ID is not allowed.

In order to pass this test, all test assertions listed in the test execution above need to pass.
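The metadata portion of the flow maps onto the Cinder v2 snapshot-metadata endpoints. A sketch of the three requests used by test actions 1, 4 and 6; the helper names and the `{project_id}` placeholder are illustrative:

```python
import json

BASE = "/v2/{project_id}/snapshots/%s/metadata"

def create_metadata(snapshot_id, metadata):
    # POST sets the snapshot's metadata dict (test action 1).
    return ("POST", BASE % snapshot_id, json.dumps({"metadata": metadata}))

def update_metadata_item(snapshot_id, key, value):
    # PUT on .../metadata/{key} updates a single item (test action 6).
    return ("PUT", BASE % snapshot_id + "/" + key,
            json.dumps({"meta": {key: value}}))

def delete_metadata_item(snapshot_id, key):
    # DELETE on .../metadata/{key} removes one item (test action 4).
    return ("DELETE", BASE % snapshot_id + "/" + key, None)
```

Assertions 1 to 5 then amount to reading the metadata back after each request and comparing it against the expected dict.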

Post conditions

N/A

Test Case 13 - Volume update operations with the Cinder v2 API
Test case specification

tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_update_volume_with_empty_volume_id
tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_update_volume_with_invalid_volume_id
tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_update_volume_with_nonexistent_volume_id

tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_update_volume_with_empty_volume_id
tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_update_volume_with_invalid_volume_id
tempest.api.volume.test_volumes_negative.VolumesNegativeTest.test_update_volume_with_nonexistent_volume_id

Note: the last 3 test cases are aliases of the first 3 ones. Aliases should always be included so that the test run is Tempest version agnostic and can be used to test different versions of OpenStack.

Test preconditions
  • Volume extension API
Basic test flow execution description and pass/fail criteria
  • Test action 1: Update a volume by using an empty volume ID
  • Test assertion 1: Verify update volume failed, a ‘Not Found’ error is returned in the response
  • Test action 2: Update a volume by using an invalid volume ID
  • Test assertion 2: Verify update volume failed, a ‘Not Found’ error is returned in the response
  • Test action 3: Update a non-existent volume by using a randomly generated volume ID
  • Test assertion 3: Verify update volume failed, a ‘Not Found’ error is returned in the response

This test case evaluates the volume API's ability to update volume attributes. Specifically, the test verifies that:

  • Update a volume without passing the volume ID is not allowed.
  • Update a volume using an invalid volume ID is not allowed.
  • Update a non-existent volume is not allowed.

In order to pass this test, all test assertions listed in the test execution above need to pass.
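All three negative cases hit the same Cinder v2 endpoint. Because the volume ID is part of the URL path, an empty, invalid or randomly generated ID simply addresses a resource that does not exist, which is why all three assertions expect the same 'Not Found' error. A sketch of the request, with the helper name and `{project_id}` placeholder being illustrative:

```python
import json

def update_volume_request(volume_id, name=None, description=None):
    # PUT /v2/{project_id}/volumes/{volume_id}
    volume = {}
    if name is not None:
        volume["name"] = name
    if description is not None:
        volume["description"] = description
    path = "/v2/{project_id}/volumes/%s" % volume_id
    return path, json.dumps({"volume": volume})

# Test actions 1-3 send this request with an empty, an invalid,
# and a randomly generated volume ID; each yields 404 Not Found.
path, body = update_volume_request("", name="new-name")
```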

Post conditions

N/A