LFS264: 
OPNFV Fundamentals  
- V.10.08.2020 -

Copyright, The Linux Foundation 2018-2020. All rights reserved.

Lab 1 - Deploy an OPNFV Scenario

Important Notes

We recommend doing this entire set of labs in one session. If, for some reason, you are not able
to do so, you have two options:

1. You can keep the VM running. This will maintain continuity, but can get expensive.

2. You can tear down the VM and spin it up when you want to continue. You will need to
run ​Lab 1​ every time you spin up a VM. Labs 2, 3, 4 are independent and need not be
completed sequentially.

Commands and outputs will be displayed in Courier New Bold Blue font.

Highlighted text​ means you need to modify the command with the indicated parameter.

Lab

1. If you do not already have one, create an Azure Cloud account: https://portal.azure.com.

In order to get access to the image you will use for this lab, use this Google Form to provide us
with the email address linked to your Azure Cloud account. You should get access to the image
within 24 hours; if not, please send an email with your information to
training@linuxfoundation.org.

2. Create a VM with 16 vCPUs, 64GB memory, 300GB HDD using an OPNFV Fuel image
(image name: aarna-opnfv-iruya-90-pub-09252020-00 - Gen1). Use the resource
group aarna-opnfv-resource-group-00. Your email address needs to be added to the
resource group for the image, or else this operation will fail. This should be done automatically
once you sign up for the course, but if it is not, please contact us.

Choose Standard D16 v3 as the machine type.


 

3. Create the instance.

4. SSH into your new instance, either using the Azure CLI or starting a new session from the
Azure dashboard. 
 
Download aarna.pem
Download aarna.ppk
chmod 400 aarna.pem
chmod 400 aarna.ppk
ssh -i aarna.pem aarna@<VM External IP>

For Windows users: PuTTY instructions (these were written for the Linux version of PuTTY; minor
modifications might be needed on Windows)

Under Session, enter the instance Public IP (e.g. 104.154.133.161), Connection Type = ssh,
Port = 22

Go to Connection > Data and set Auto-login username = aarna

Go to Connection > SSH > Auth and select the aarna.ppk file

5. In order to deploy OPNFV on a single node, we will use the Fuel virtual deploy method. Make
sure the following prerequisites are met:

● OS distribution support:
OPNFV Fuel has been validated by CI using the following distributions installed on the node:
● CentOS 7 (recommended by the Pharos specification)
● Ubuntu Xenial 16.04
● Ubuntu Bionic 18.04, which is what is used for this lab

● Virtualization support:
The machine on which we install and configure OPNFV must support virtualization, and if it is
itself a virtual machine, it must support nested virtualization. Use the command below to check
whether the machine supports virtualization.

lscpu | grep Virtualization

# Output
Virtualization: ​VT-x

# Make sure the output shows VT-x
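
Because the Azure VM used here is itself a virtual machine, you can also confirm that nested virtualization is enabled. This is a minimal optional check, assuming an Intel CPU (AMD hosts expose kvm_amd instead); if the module is not yet loaded, re-run it after installing KVM in the next step.

# Should print Y (or 1) when nested virtualization is enabled
cat /sys/module/kvm_intel/parameters/nested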


 
6. Install KVM and verify. Run the following commands to install KVM on the node:
sudo apt update


sudo apt install qemu qemu-kvm libvirt-bin bridge-utils virt-manager

# Make sure the KVM modules are loaded, using the lsmod and grep commands:

lsmod | grep -i kvm
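
As an additional sanity check (a suggestion, not part of the original lab flow), you can confirm that libvirt is running and can talk to KVM:

# The libvirtd service should be active, and virsh should show an (empty) domain list
sudo systemctl status libvirtd
virsh list --all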


 
7. Make sure the user running the deploy script belongs to the sudo and libvirt groups, and
has passwordless sudo access. For example, if the user name is aarna, then aarna should
be a member of the sudo and libvirt groups. Run the following commands to add the user to the
groups.

sudo usermod -aG sudo aarna


sudo usermod -aG libvirt aarna
sudo reboot

# After the reboot, edit the sudoers file and add the user


sudo visudo

# Add the following line


%aarna ALL=(ALL) NOPASSWD:ALL
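
To confirm the changes took effect, here is a quick verification sketch (the user name aarna matches the example above):

# 'id' should list both sudo and libvirt among the user's groups
id aarna

# This should print OK without prompting for a password
sudo -n true && echo "passwordless sudo OK"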
 
8. Local Artifact Storage - The folder containing temporary deploy artifacts
(/fuel_deployment/tmpdir in the example below) needs to have mode 777 in order for
libvirt to be able to use them.

mkdir -p -m 777 /fuel_deployment/tmpdir
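# Verify the permissions before continuing (expect drwxrwxrwx)
ls -ld /fuel_deployment/tmpdir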

9. Install Docker, after updating your existing list of packages. Next, install a few prerequisite
packages which let apt use packages over HTTPS.

sudo apt update

sudo apt install apt-transport-https ca-certificates curl software-properties-common

# Add the GPG key for the official Docker repository to your system

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -


# Add the Docker repository to APT sources:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"

# Update the package database with the Docker packages from the
# newly added repo

sudo apt update

# Make sure you are about to install from the Docker repo instead of
# the default Ubuntu repo

apt-cache policy docker-ce

# You’ll see output like this, although the version number for
# Docker may be different:
# Output of apt-cache policy docker-ce

docker-ce:
  Installed: (none)
  Candidate: 18.03.1~ce~3-0~ubuntu
  Version table:
     18.03.1~ce~3-0~ubuntu 500
        500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages

# Notice that docker-ce is not installed, but the candidate for
# installation is from the Docker repository for Ubuntu 18.04 (bionic).
# Finally, install Docker:

sudo apt install docker-ce

# Docker should now be installed, the daemon started, and the
# process enabled to start on boot. Check that it's running

sudo systemctl status docker

# The output should be similar to the following, showing that
# the service is active and running

docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2018-07-05 15:08:39 UTC; 2min 55s ago
     Docs: https://docs.docker.com
 Main PID: 10096 (dockerd)
    Tasks: 16
   CGroup: /system.slice/docker.service
           ├─10096 /usr/bin/dockerd -H fd://
           └─10113 docker-containerd --config /var/run/docker/containerd/containerd.toml
 

10. Follow the steps below to start the deployment:

● Clone the OPNFV Fuel code from Gerrit.
● Check out the Iruya branch or release tag.
● Start the deploy script.

The deployment uses the OPNFV Pharos project as input (PDF and IDF files) for the hardware and
network configuration of all current OPNFV PODs.

When deploying a new POD, you may pass the -b flag to the deploy script to override the path
for the labconfig directory structure containing the PDF and IDF (<URI to
configuration repo ...> is the absolute path to a local or remote directory structure,
populated similarly to the pharos git repo, i.e. PDF/IDF reside in a subdirectory called
labs/<lab_name>).

git clone https://git.opnfv.org/fuel
cd fuel
sudo git checkout stable/iruya

ci/deploy.sh -l <lab_name> -p <pod_name> -b <URI to configuration repo containing the PDF/IDF files> -s <scenario> -D -S <Storage directory for deploy artifacts> |& tee deploy.log

# Below is the example command to deploy Fuel OPNFV

# nohup sudo fuel/ci/deploy.sh -l aarna -p virtual1 -b file:///fuel_deployment/fuel/mcp/scripts/pharos/ -s os-nosdn-nofeature-noha -S /fuel_deployment/tmpdir/ -o ubuntu1804 -D &

# Note: For a virtual deploy, the existing virtual POD definitions
# can be used as-is. You do not need to edit any of the files.
# This will take nearly 1.5 hours to install.

# Typical Cluster Examples
# Common cluster layouts usually fall into one of the cases described
# below, categorized by deployment type (baremetal, virtual or hybrid)
# and high availability (HA or noHA).

# A simplified overview of the steps deploy.sh will automatically perform:
# - create a Salt Master Docker container on the jumpserver, which will
#   drive the rest of the installation;
# - baremetal or hybrid only: create a MaaS container node, which will be
#   leveraged using Salt to handle OS provisioning on the baremetal nodes;
# - leverage Salt to install & configure OpenStack;
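
Since the example command above runs the deploy in the background with nohup, here is a suggested way to follow its progress (a sketch, not part of the official procedure):

# Follow the deploy output; press Ctrl+C to stop watching (the deploy keeps running)
tail -f nohup.out

# Once the deployment is done, the Salt master container (named fuel) should be up
sudo docker ps --filter name=fuel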

Workaround:

Powering the VM off and on multiple times can break the OpenStack cluster; in that case, the
Functest and Yardstick tests will fail with communication failure errors. You can then re-deploy
OPNFV Fuel on the same machine without any cleanup. You just need to re-run the deploy script
and it will redeploy.

# Below is the command to redeploy Fuel OPNFV.
# It is the same command we used for deploying OPNFV Fuel.

# nohup sudo fuel/ci/deploy.sh -l aarna -p virtual1 -b file:///fuel_deployment/fuel/mcp/scripts/pharos/ -s os-nosdn-nofeature-noha -S /fuel_deployment/tmpdir/ -o ubuntu1804 -D &

Lab 2 - Explore an OPNFV Scenario

Important Notes

We recommend doing this entire set of labs in one session. If, for some reason, you are not able
to do so, you have two options:

1. You can keep the VM running. This will maintain continuity, but can get expensive.

2. You can tear down the VM and spin it up when you want to continue. You will need to
run ​Lab 1​ every time you spin up a VM. Labs 2, 3, 4 are independent and need not be
completed sequentially.

Commands and outputs will be displayed in Courier New Bold Blue font.

Highlighted text​ means you need to modify the command with the indicated parameter.

Lab

1. Explore the other scenarios under the /etc/opnfv-fuel/ folder, and see how scenarios are
defined.

diff /etc/opnfv-fuel/os-nosdn-nofeature-noha.yaml 
/etc/opnfv-fuel/os-odl-nofeature-ha.yaml 

2. Check the OpenStack installation, and explore the details.

Questions:

● Give an example of a service that would normally not be in a standard OpenStack
distribution.

● Why is it included?

source openrc 

# Command to list all openstack API endpoint details 


openstack endpoint list 
 
# Command to list all regions 
openstack region list 
 
# Command to list tenants/project  
openstack project list 
 
# Command to list networks 
openstack network list 
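
To help answer the questions above, you can also list the registered services and compare them against a vanilla OpenStack install (a suggested extra step, not required by the lab):

# Command to list all registered OpenStack services
openstack service list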
 
3. Next, we will access the Horizon Dashboard. Open a ​NEW​ terminal window since the below
command does not exit.

# Open a separate terminal


# Create SOCKS5 SSH proxy tunnel on your Azure instance
# To access the VM horizon dashboard
# This command on your laptop will create a tunnel
# Command will not exit till you close the terminal

ssh -t -i aarna.pem -o CheckHostIP=no -o IdentitiesOnly=yes -o StrictHostKeyChecking=no aarna@{VM External IP} -N -p 22 -D localhost:5000

MAC USERS:
ssh -t -i aarna.pem -o CheckHostIP=no -o IdentitiesOnly=yes -o StrictHostKeyChecking=no aarna@{VM External IP} -N -p 22 -D localhost:5000

WINDOWS USERS:
Under Session, just enter the instance Public IP (e.g. 104.154.133.161),
Connection Type = ssh, Port = 22
 

 
Go to Connection --> Data and set Auto-login username = aarna

Go to Connection --> SSH --> Auth and select the aarna.ppk file

Go to Connection --> SSH --> Tunnels
Source Port = 5000, Destination = Dynamic, Protocol = Auto,
and then press the Add button.

After adding it, you should see D5000 under Forwarded Ports.

Now select the Open button.
(This will open an SSH terminal window as well as a background SOCKS tunnel proxy
process listening on your laptop's local port 5000.)

4. Configure Firefox as follows:

# Steps to configure your proxy server in the Firefox browser

Open Firefox and go to Preferences --> Network Proxy --> Settings and enter
your SSH proxy details
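
The exact menu labels vary slightly between Firefox versions, but the values below match the tunnel created above (a sketch of the expected settings, not a screenshot from the lab):

# Manual proxy configuration in Firefox:
#   SOCKS Host: localhost     Port: 5000
#   Select SOCKS v5
#   (If available, enable "Proxy DNS when using SOCKS v5")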

5. Now access Horizon.

# Go to Firefox and enter only the IP address, not the subdirectory or port number
# e.g. http://192.168.37.10 and NOT http://192.168.37.10:5000/v3
Go to URL <INSERT ABOVE IP ADDRESS ONLY>

# Username: admin, password: OS_PASSWORD from the openrc file

6. Let us view some of the above information using the Horizon GUI

Go to Networks → Network Topology
Go to Identity → Projects → admin

exit # back to jumphost

Lab 3 - Run Functest

Important Notes

We recommend doing this entire set of labs in one session. If, for some reason, you are not able
to do so, you have two options:

1. You can keep the VM running. This will maintain continuity, but can get expensive.

2. You can tear down the VM and spin it up when you want to continue. You will need to
run ​Lab 1​ every time you spin up a VM. Labs 2, 3, 4 are independent and need not be
completed sequentially.

Commands and outputs will be displayed in Courier New Bold Blue font.

Highlighted text​ means you need to modify the command with the indicated parameter.

Lab

1. Run as superuser (if not already running as superuser)

sudo -i # if needed

2. Set up the environment to run Functest.

mkdir /opnfv-functest 
 
# Log into the saltmaster node cfg01 which is a docker container 

 
ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2 
 
# You can login using the Docker command as well 
docker exec -it fuel bash 

 
# From this node we can reach to all other openstack nodes.  
# View the IPs of All the Components 
 
salt "*" network.ip_addrs 
 
cfg01.mcp-odl-ha.local: 
- 10.20.0.2 
- 172.16.10.100 
mas01.mcp-odl-ha.local: 
- 10.20.0.3 
- 172.16.10.3 
- 192.168.11.3 
......................... 
 
# Login to openstack controller node (ctl01) via this saltmaster node. 
 
ssh ctl01 
 
# You will enter the shell of undercloud instance, as a user ‘stack’ 
 
ifconfig eth0 
 
 
# Accessing Openstack 
# Once the deployment is complete, Openstack CLI is accessible from  
# controller VM (ctl01) 
 
# Openstack credentials are at /root/keystonercv3. 
source /root/keystonercv3 
openstack image list 
 
+--------------------------------------+-------------+--------+
| ID                                   | Name        | Status |
+--------------------------------------+-------------+--------+
| 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage | active |
| 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04 | active |
+--------------------------------------+-------------+--------+
 
openstack network list 
 
+--------------------------------------+-----------------------------------+--------------------------------------+
| ID                                   | Name                              | Subnets                              |
+--------------------------------------+-----------------------------------+--------------------------------------+
| 32920d7b-b180-4ee6-acb8-fb88b9a601bf | HeatUtilsCreateComplexStackTests- | e6bc435e-d51c-4c2d-bad1-6ead4edcfeda |
|                                      | fc8bb9e8-3751-477e-bade-          |                                      |
|                                      | d7bef1b71cf5-net                  |                                      |
| f5451589-83b3-4471-bcb8-734858026903 | external                          | 7cb8e43c-61e4-4c5e-8451-d670f1023e31 |
+--------------------------------------+-----------------------------------+--------------------------------------+
 
# You will have to copy the contents of /root/keystonercv3  
 
cat keystonercv3  
 
# Note the name of the external network (usually ‘external’) 
 
# Exit from undercloud ssh 
exit # back to jumphost 
 
# Create the file '/opnfv-functest/openstack.creds' with the contents
# from the keystonercv3 file that you copied
 
# Sample openstack.creds : 
export OS_IDENTITY_API_VERSION=3 
export OS_AUTH_URL=​http://172.16.10.36:35357/v3 
export OS_PROJECT_DOMAIN_NAME=Default 
export OS_USER_DOMAIN_NAME=Default 
export OS_PROJECT_NAME=admin 
export OS_TENANT_NAME=admin 
export OS_USERNAME=admin 
export OS_PASSWORD=opnfv_secret 
export OS_REGION_NAME=RegionOne 
export OS_INTERFACE=internal 
export OS_ENDPOINT_TYPE="internal" 
export VOLUME_DEVICE_NAME=sdc 
export EXTERNAL_NETWORK="floating_net" 
 
 
# Substitute vi with your favorite editor 
vi /opnfv-functest/openstack.creds  
 
# Insert contents into this file 
# Add the following line to this file  
 
export EXTERNAL_NETWORK=<​INSERT EXTERNAL NETWORK NAME​> 
 
# Save file 
 
# Now, create env file with the following 
vi /opnfv-functest/env 
 
# Insert the following contents, with the changes mentioned in comments 
INSTALLER_TYPE=apex 
# Change this one to your undercloud eth0 IP address 
INSTALLER_IP=<​INSERT IP address of eth0 on undercloud instance​> 
DEPLOY_SCENARIO=os-nosdn-nofeature-noha 
CI_DEBUG=true 
EXTERNAL_NETWORK=<​INSERT EXTERNAL NETWORK NAME​> 
 
# Save file 
 
# Source the environment  
source /opnfv-functest/env 
 
# Make sure the undercloud IP is accessible 
ping -c 2 $INSTALLER_IP 
 
# Create the folder /opnfv-functest/images/, and download cirros-0.x.x-x86_64-disk.img
# into that folder
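
A minimal sketch of that step, assuming the commonly used CirrOS 0.4.0 image (adjust the version to whatever your Functest release expects):

mkdir -p /opnfv-functest/images
cd /opnfv-functest/images
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img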

3. Run Functest (~10 minutes).

nohup sudo docker run --env-file /opnfv-functest/env -v \
/opnfv-functest/openstack.creds:/home/opnfv/functest/conf/env_file -v \
/opnfv-functest/images:/home/opnfv/functest/images \
opnfv/functest-healthcheck:opnfv-9.0.0 &
 
 
# We can access the container output in nohup.out file 
# Just press Ctrl+C to close the log 
tail -f nohup.out 
 
# Get the docker container information 
docker ps -a 
 
# We can access the docker container logs to see the progress 
docker logs {CONTAINER ID for opnfv/functest-healthcheck:opnfv-9.0.0} 
 
 
# Go to the Horizon window; during the api_check test, look at the network
# topology to see activity, and then see the resources being torn down once
# the test is complete
 
# Alternatively, we can use the below command to access the docker container  
# console logs; similar to tail, just press Ctrl+C to close the log 
 
docker ps -a 
docker logs -f <container name> # must be run while functest is running 
 
 
2020-09-19 13:50:47,882 - xtesting.ci.run_tests - INFO - Xtesting report: 
 
+--------------------------+------------------+---------------------+------------------+----------------+ 
| TEST CASE | PROJECT | TIER | DURATION | RESULT | 
+--------------------------+------------------+---------------------+------------------+----------------+ 
| connection_check | functest | healthcheck | 00:03 | PASS | 
| tenantnetwork1 | functest | healthcheck | 00:04 | PASS | 
| tenantnetwork2 | functest | healthcheck | 00:06 | PASS | 
| vmready1 | functest | healthcheck | 00:06 | PASS | 
| vmready2 | functest | healthcheck | 00:07 | PASS | 
| singlevm1 | functest | healthcheck | 05:23 | PASS | 
| singlevm2 | functest | healthcheck | 05:25 | PASS | 
| vping_ssh | functest | healthcheck | 05:59 | PASS | 
+--------------------------+------------------+---------------------+------------------+----------------+ 
 
 
 
 
 
4. Run tests individually. 
 
# Execute individual tests  
 

# Stop running docker container 


docker stop $(docker ps -a -q) 
 
# Delete old docker container 
docker rm $(docker ps -a -q) 
 
# We can run all the tests manually by passing the test cases name  
# along with the docker run command as given below. 
 
 
sudo docker run --env-file /opnfv-functest/env -v \
/opnfv-functest/openstack.creds:/home/opnfv/functest/conf/env_file -v \
/opnfv-functest/images:/home/opnfv/functest/images \
opnfv/functest-healthcheck:opnfv-9.0.0 run_tests -t <test case name>

# Sample command:
sudo docker run --env-file /opnfv-functest/env -v \
/opnfv-functest/openstack.creds:/home/opnfv/functest/conf/env_file -v \
/opnfv-functest/images:/home/opnfv/functest/images \
opnfv/functest-healthcheck:opnfv-9.0.0 run_tests -t connection_check
 
# The above command will run the test case named “connection_check” 
# individually. Similarly you can run all other tests. 
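
If you are unsure which options are available, the container's run_tests entry point accepts a help flag; this is a hedged suggestion and the exact output depends on the Functest release:

sudo docker run --env-file /opnfv-functest/env -v \
/opnfv-functest/openstack.creds:/home/opnfv/functest/conf/env_file -v \
/opnfv-functest/images:/home/opnfv/functest/images \
opnfv/functest-healthcheck:opnfv-9.0.0 run_tests -h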
 
 
 
 

Lab 4 - Run Yardstick

Important Notes

We recommend doing this entire set of labs in one session. If, for some reason, you are not able
to do so, you have two options:

1. You can keep the VM running. This will maintain continuity, but can get expensive.

2. You can tear down the VM and spin it up when you want to continue. You will need to
run ​Lab 1​ every time you spin up a VM. Labs 2, 3, 4 are independent and need not be
completed sequentially.

Commands and outputs will be displayed in Courier New Bold Blue font.

Highlighted text​ means you need to modify the command with the indicated parameter.

Lab

1. Create the yardstick subdirectory on the jumphost.

# No need to run this sudo command if you ran it already 


sudo -i # if needed 
 
mkdir /opnfv-yardstick 
cd /opnfv-yardstick 

2. Set up the environment to run Yardstick. Feel free to copy the two files (openstack.creds,
env) from the Functest directory if you just ran Functest, to save some time.

# Log into the saltmaster node cfg01 which is a docker container. 


 
ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2 
 
# We can login using the Docker command as well. 
 
docker exec -it fuel bash 
 
# From this node we can reach to all other openstack nodes.  
# View the IPs of All the Components 
 
salt "*" network.ip_addrs 
 
cfg01.mcp-odl-ha.local: 
- 10.20.0.2 
- 172.16.10.100 
mas01.mcp-odl-ha.local: 
- 10.20.0.3 
- 172.16.10.3 
- 192.168.11.3 
......................... 
 
# Login to openstack controller node (ctl01) via this saltmaster node 
 
ssh ctl01 
 
# You will enter the shell of undercloud instance, as a user ‘stack’ 
 
ifconfig eth0 
 
 
# Accessing Openstack 
 
# Once the deployment is complete, Openstack CLI is accessible from  
# controller VM (ctl01) 
 
# Openstack credentials are at /root/keystonercv3 
 
source keystonercv3 
openstack image list 
 

+--------------------------------------+-------------+--------+
| ID                                   | Name        | Status |
+--------------------------------------+-------------+--------+
| 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage | active |
| 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04 | active |
+--------------------------------------+-------------+--------+
 
 
 
# You will enter the shell of undercloud instance, as a user ‘stack’ 
 
ifconfig eth0 
 
# Make note of the IP address of eth0 on the undercloud instance -
# it is usually of the format 192.168.122.X. It is easiest to copy
# these items into an editor

# You will have to copy the contents of /root/keystonercv3

source keystonercv3
cat keystonercv3

# Copy the contents of this file: OpenStack credentials needed for
# Yardstick
 
openstack network list 
 
+--------------------------------------+-----------------------------------+--------------------------------------+
| ID                                   | Name                              | Subnets                              |
+--------------------------------------+-----------------------------------+--------------------------------------+
| 32920d7b-b180-4ee6-acb8-fb88b9a601bf | HeatUtilsCreateComplexStackTests- | e6bc435e-d51c-4c2d-bad1-6ead4edcfeda |
|                                      | fc8bb9e8-3751-477e-bade-          |                                      |
|                                      | d7bef1b71cf5-net                  |                                      |
| f5451589-83b3-4471-bcb8-734858026903 | external                          | 7cb8e43c-61e4-4c5e-8451-d670f1023e31 |
+--------------------------------------+-----------------------------------+--------------------------------------+
 
# Note the name of the external network (usually ‘external’) 
 
# Exit from undercloud ssh 
 
exit # back to jumphost 
 
# Create the file '/opnfv-yardstick/openstack.creds' with the contents
# from the keystonercv3 file that you copied
 
# Sample openstack.creds : 
export OS_IDENTITY_API_VERSION=3 
export OS_AUTH_URL=​http://172.16.10.36:35357/v3 
export OS_PROJECT_DOMAIN_NAME=Default 
export OS_USER_DOMAIN_NAME=Default 
export OS_PROJECT_NAME=admin 
export OS_TENANT_NAME=admin 
export OS_USERNAME=admin 
export OS_PASSWORD=opnfv_secret 
export OS_REGION_NAME=RegionOne 
export OS_INTERFACE=internal 
export OS_ENDPOINT_TYPE="internal" 
export VOLUME_DEVICE_NAME=sdc 
export EXTERNAL_NETWORK="floating_net" 
 
 
# Substitute vi with your favorite editor 
vi /opnfv-yardstick/openstack.creds  
 
# Insert contents into this file 
# Add the following line to this file  
 
export EXTERNAL_NETWORK=<​INSERT EXTERNAL NETWORK NAME​> 
 
# Save file 
 
 
 
 
# Now, create env file with the following 
vi /opnfv-yardstick/env 
 
# Insert the following contents, with the changes mentioned in comments 


 
INSTALLER_TYPE=apex 
# Change this one to your undercloud eth0 IP address 
INSTALLER_IP=<INSERT IP address of eth0 on undercloud instance> 
DEPLOY_SCENARIO=os-nosdn-nofeature-noha 
CI_DEBUG=true 
EXTERNAL_NETWORK=<​INSERT EXTERNAL NETWORK NAME​> 
 
# Save file 
 
# Source the environment  
source /opnfv-yardstick/env 
 
# Make sure the undercloud IP is accessible 
ping -c 2 $INSTALLER_IP 

3. Run Yardstick.

# This command will download the docker images when it is run for first time 
  
docker run -itd --privileged -v /var/run/docker.sock:/var/run/docker.sock \
-v /opnfv-yardstick/openstack.creds:/etc/yardstick/openstack.creds \
-p 8888:5000 --name yardstick opnfv/yardstick:opnfv-8.0.0
 
# To run the individual tests, enter the bash shell of Yardstick container 
 
docker exec -it yardstick /bin/bash 
 
# Run the prepare command for setting up the Yardstick environment
# This command may take a few minutes to run
 
yardstick env prepare 
cd yardstick  
 
# Get a feel for the command 
 
yardstick -h 
 
# Run a sample ping test 
 
more samples/ping.yaml 
yardstick task start samples/ping.yaml 

 
# By default, the results will be written to a text file  
# Look at the results in /tmp/yardstick.out 
 
cat /tmp/yardstick.out 
 
cat /tmp/report.html 
 
# Sample snip of report.html 
 
<td>1</td> 
<td>opnfv_yardstick_tc002</td> 
<td>PASS</td> 
 
# Run few other tests in sample directory 
 
yardstick task start samples/fio.yaml 
 
# Once you see a message saying Heat Stack creation is done, 
# Go to Horizon, Orchestration → Stack → <stack name>,  
# Compute → Instances, Compute → Images, Network → Network Topology 
# Get a sense for what the test is doing 
 
yardstick task start samples/perf.yaml 
 

4. Add a new test and then run it.

# Copy one of the existing tests (samples/nstat.yaml) to a new test  


# eg., samples/nstat-new.yaml 
 
cp samples/nstat.yaml samples/nstat-new.yaml 
 
# Modify the "runner" option from Iteration to Duration, as follows:

vi samples/nstat-new.yaml

runner:
  type: Duration
  duration: 30
 
# Run the new test now 
 
yardstick task start samples/nstat-new.yaml 
 

# Look for the test results in /tmp/yardstick.out 


 
cat /tmp/yardstick.out 
 

5. By default, Yardstick stores the results in a text file (in JSON format), /tmp/yardstick.out,
but in this lab we will configure it to store the results in a local InfluxDB and display them
graphically using Grafana.

The diagram below shows various methods of viewing the results.

# Configure InfluxDB and Store results in DB, and display them graphically 
# Run the following commands from the Yardstick container 


 
yardstick env influxdb 
yardstick env grafana 
 
# Exit the container and go to the Jump Host, by pressing CTRL-P-Q sequence.  
# This will make sure the container is still running.  
# DO NOT PRESS CTRL-D!  
 
docker ps # note the name of the influxdb container 
 
# There will be 3 Docker containers running - yardstick, influxdb & grafana.
# Note the name of the influxdb container, e.g. sad_stallman
 
CONTAINER ID   IMAGE                         COMMAND                  CREATED          STATUS          PORTS                                            NAMES
31b0ab8669bb   grafana/grafana:4.4.3         "/run.sh"                42 minutes ago   Up 42 minutes   0.0.0.0:1948->3000/tcp                           adoring_feynman
805682d70060   tutum/influxdb:0.13           "/run.sh"                45 minutes ago   Up 45 minutes   0.0.0.0:8083->8083/tcp, 0.0.0.0:8086->8086/tcp   sad_stallman
c9495b943a93   opnfv/yardstick:opnfv-5.1.0   "/usr/bin/supervisord"   4 hours ago      Up 4 hours      0.0.0.0:8888->5000/tcp                           yardstick
 
 
# Run following commands from the Jump Host 
 
docker network ls  
docker network inspect bridge  
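
Alternatively, instead of reading through the inspect output below, you can extract the container IP directly; this is a sketch, so substitute the name of your own influxdb container:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <influxdb container name>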
 
# From the output of the above command, note the IP address of the
# influxdb container (ignore the /16); for example, the IP
# address below is 172.17.0.3 (in this example the container is called "gifted_nobel")
 
 
"cf98ac830d4b75692ff4d33a3cd661c459f46b5d31a289b43d71276dc8d6b775": { 
"Name": "gifted_nobel", 
"EndpointID": 
"d68ca4c63e8618fbf71800a589351b0ea048a9c51a0f1453c38668fc260f2 
1ac", 
"MacAddress": "02:42:ac:11:00:03", 
"IPv4Address": "172.17.0.3/16", 

"IPv6Address": "" 

 
# Now, login to Yardstick container 
 
sudo docker exec -it yardstick /bin/bash 
 
# Copy and edit the file /etc/yardstick/yardstick.conf 
 
cp yardstick/etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf 
 
# Substitute vi with your favorite editor 
 
vi /etc/yardstick/yardstick.conf 
 
# Modify yardstick.conf on Yardstick Container 
 
[DEFAULT] 
debug = False 
dispatcher = influxdb 
 
[dispatcher_http] 
timeout = 5 
target = http://127.0.0.1:8000/results 
 
[dispatcher_file] 
file_path = /tmp/yardstick.out 
max_bytes = 0 
backup_count = 0 
 
[dispatcher_influxdb] 
timeout = 5 
target = http://<INSERT influxdb container IP>:8086 
db_name = yardstick 
username = root 
password = root 
 
# Save the above file,  
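
Before running the long test, you can optionally confirm that InfluxDB is reachable and that the 'yardstick' database exists. This is a hedged check using the InfluxDB 0.13 HTTP API; substitute the container IP noted above, and expect 'yardstick' in the databases list once 'yardstick env influxdb' has completed.

curl -G "http://<INSERT influxdb container IP>:8086/query" --data-urlencode "q=SHOW DATABASES"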
 
# Run a Yardstick test (takes 15+ minutes): opnfv_yardstick_tc001.yaml
# Review the test - what metrics does the test measure, what is the max
# number of ports tested, and how many parameters can be configured?
 
cd yardstick 
 
# To list all the testcases, use this command: 
 
yardstick testcase list 
 
yardstick task start tests/opnfv/test_cases/opnfv_yardstick_tc001.yaml 
 
 
# The test may throw DEBUG messages about SSH failures, which can be ignored. 
# Wait for the test to run to completion before looking at the results  
# in Dashboard.  
 
 
 
# In-class users will be provided the IP address

# From your regular browser (NOT Firefox with the proxy), log in to the Grafana
# Dashboard at the URL http://<Your VM External IP>:1948
# Note that Grafana runs on port 3000 inside the docker container,
# which is mapped to host port 1948

# Log in to the Grafana dashboard using the following credentials:
# username: admin, pw: admin
http://<INSERT YOUR VM EXTERNAL IP>:1948
 
 
 

Please make sure your cloud firewall rules allow http access to port 
1948 
 
 
# Click on Home
View the dashboard for opnfv_yardstick_tc001
Click on "Today" and switch to "Last 15 minutes"
 

 
 
# Review the graphs   

Troubleshooting steps
 
If you want to use a custom image for the Yardstick tests, you can create a Glance image with the
name yardstick-image.

If the VM instance takes too long to boot, increase the resources of yardstick-flavor
as per the guest OS requirements. For example, you can increase it to 2 vCPUs, 1.5 GB RAM and a
5 GB disk.
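
A hypothetical way to do that from the OpenStack CLI: flavors cannot be edited in place, so the sketch below deletes and recreates yardstick-flavor (the values are examples only; run it with the OpenStack credentials sourced):

openstack flavor delete yardstick-flavor
openstack flavor create --vcpus 2 --ram 1536 --disk 5 yardstick-flavor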
