LFS264:
OPNFV Fundamentals
- V.10.08.2020 -
_________________________________________________________________________________________________________
Important Notes
We recommend doing this entire set of labs in one session. If, for some reason, you are not able
to do so, you have two options:
1. You can keep the VM running. This will maintain continuity, but can get expensive.
2. You can tear down the VM and spin it up when you want to continue. You will need to
run Lab 1 every time you spin up a VM. Labs 2, 3, 4 are independent and need not be
completed sequentially.
Commands and outputs will be displayed in Courier New Bold Blue font.
Highlighted text means you need to modify the command with the indicated parameter.
Lab
1. In order to get access to the image you will use for this lab, use this Google Form to provide
us with the email address linked to your Azure Cloud account. You should get access to the
image within 24 hours; if not, please send an email with your information to
training@linuxfoundation.org.
2. Create a VM with 16 vCPUs, 64 GB memory, and a 300 GB HDD using the OPNFV Fuel image
(image name: aarna-opnfv-iruya-90-pub-09252020-00 - Gen1). Use
aarna-opnfv-resource-group-00 as the resource group. Your email address needs to be added
to the resource group for the image, or else this operation will fail. This should be done
automatically once you sign up for the course, but if it is not, please contact us.
_________________________________________________________________________________________________________
4. SSH into your new instance, either using the Azure CLI or by starting a new session from the
Azure dashboard.
Download aarna.pem
Download aarna.ppk
chmod 400 aarna.pem
chmod 400 aarna.ppk
ssh -i aarna.pem aarna@<VM External IP>
For Windows users: PuTTY instructions (these were written for PuTTY on Linux; minor
modifications may be needed on Windows)
Under Session, enter the instance Public IP (e.g. 104.154.133.161), Connection Type = ssh,
Port = 22
_________________________________________________________________________________________________________
5. In order to deploy OPNFV on a single node, we will use the Fuel deployment method for a
virtual deploy. Make sure the following prerequisites are met:
● OS distribution support:
OPNFV Fuel has been validated by CI using the following distributions installed on the node:
● CentOS 7 (recommended by the Pharos specification)
● Ubuntu Xenial 16.04
● Ubuntu Bionic 18.04, which is what is used for this lab.
● Virtualization support:
The machine on which we install and configure OPNFV must support virtualization; when
installing on a virtual machine, it must support nested virtualization. Use the command
below to check whether the machine supports virtualization.
# Output
Virtualization: VT-x
# Make sure the KVM module is loaded, using the lsmod and grep commands:
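The checks described above can be run as follows (the exact output varies by CPU; "VT-x"
appears on Intel hosts and "AMD-V" on AMD hosts):

```shell
# Count hardware virtualization flags in /proc/cpuinfo
# (vmx = Intel VT-x, svm = AMD-V); a count of 0 means no support
egrep -c '(vmx|svm)' /proc/cpuinfo

# lscpu reports the same information in a friendlier form,
# e.g. "Virtualization: VT-x"
lscpu | grep -i virtualization

# Make sure the KVM modules (kvm, plus kvm_intel or kvm_amd) are loaded
lsmod | grep kvm
```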
9. Install Docker after updating your existing list of packages. Next, install a few prerequisite
packages which let apt use packages over HTTPS.
# Add the GPG key for the official Docker repository to your system
# Update the package database with the Docker packages from the
# newly added repo
# Make sure you are about to install from the Docker repo instead of
# the default Ubuntu repo
# You’ll see output like this, although the version number for
# Docker may be different:
# Output of apt-cache policy docker-ce
docker-ce:
  Installed: (none)
  Candidate: 18.03.1~ce~3-0~ubuntu
  Version table:
     18.03.1~ce~3-0~ubuntu 500
        500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
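On Ubuntu Bionic, the steps sketched in the comments above typically look like this (repository
URL and package names per Docker's official Ubuntu install instructions):

```shell
# Install prerequisite packages that let apt use repositories over HTTPS
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl \
     software-properties-common

# Add the GPG key for the official Docker repository to the system
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Add the Docker repository and refresh the package database
sudo add-apt-repository \
     "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt-get update

# Verify the install candidate comes from the Docker repo, then install
apt-cache policy docker-ce
sudo apt-get install -y docker-ce
```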
_________________________________________________________________________________________________________
The deployment uses the OPNFV Pharos project as input (PDF and IDF files) for hardware and
network configuration of all current OPNFV PODs.
When deploying a new POD, you may pass the -b flag to the deploy script to override the path
to the labconfig directory structure containing the PDF and IDF (<URI to
configuration repo ...> is the absolute path to a local or remote directory structure,
populated similarly to the Pharos git repo, i.e. the PDF/IDF reside in a subdirectory called
labs/<lab_name>).
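As a sketch, assuming the Fuel ci/deploy.sh entry point (the lab and pod names here are
hypothetical, and the exact flags should be checked against your Fuel release):

```shell
# Deploy using PDF/IDF from a local directory laid out like the Pharos repo,
# i.e. ./labs/my-virtual-pod/ contains the PDF and IDF files
ci/deploy.sh -l my-virtual-pod -p virtual1 \
             -s os-nosdn-nofeature-noha \
             -b file:///home/aarna/pod-configs
```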
_________________________________________________________________________________________________________
Workaround:
Powering the VM off and on multiple times can break the OpenStack cluster; in that case, the
Functest and Yardstick tests will fail with communication failure errors. You can then re-deploy
OPNFV Fuel on the same machine without any cleanup: just re-run the deploy script and it will
redeploy.
_________________________________________________________________________________________________________
Lab
1. Explore other scenarios under the /etc/opnfv-fuel/ folder and see how scenarios are
defined.
diff /etc/opnfv-fuel/os-nosdn-nofeature-noha.yaml \
     /etc/opnfv-fuel/os-odl-nofeature-ha.yaml
Questions:
● Why is it included?
source openrc
MAC USERS:
ssh -t -i aarna.pem -o CheckHostIP=no -o IdentitiesOnly=yes \
    -o StrictHostKeyChecking=no aarna@<VM External IP> -N -p 22 -D localhost:5000
WINDOWS USERS:
Under Session, just enter the instance Public IP (e.g. 104.154.133.161),
Connection Type = ssh, Port = 22
_________________________________________________________________________________________________________
Go to Connection --> Data and set Auto-Login User Name = aarna
_________________________________________________________________________________________________________
6. Let us view some of the above information using the Horizon GUI
_________________________________________________________________________________________________________
Lab
sudo -i # if needed
mkdir /opnfv-functest
# Log into the saltmaster node cfg01 which is a docker container
ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2
# You can login using the Docker command as well
docker exec -it fuel bash
# From this node we can reach all other OpenStack nodes.
# View the IPs of All the Components
salt "*" network.ip_addrs
cfg01.mcp-odl-ha.local:
- 10.20.0.2
- 172.16.10.100
mas01.mcp-odl-ha.local:
- 10.20.0.3
- 172.16.10.3
- 192.168.11.3
.........................
# Log into the OpenStack controller node (ctl01) via this saltmaster node.
ssh ctl01
# You will enter the shell of the undercloud instance as the user ‘stack’
ifconfig eth0
# Accessing Openstack
# Once the deployment is complete, Openstack CLI is accessible from
# controller VM (ctl01)
# Openstack credentials are at /root/keystonercv3.
source /root/keystonercv3
openstack image list
+--------------------------------------+-------------+--------+
| ID                                   | Name        | Status |
+--------------------------------------+-------------+--------+
| 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage | active |
| 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04 | active |
+--------------------------------------+-------------+--------+
openstack network list
+--------------------------------------+---------------------------------------------------------------------------+--------------------------------------+
| ID                                   | Name                                                                      | Subnets                              |
+--------------------------------------+---------------------------------------------------------------------------+--------------------------------------+
| 32920d7b-b180-4ee6-acb8-fb88b9a601bf | HeatUtilsCreateComplexStackTests-fc8bb9e8-3751-477e-bade-d7bef1b71cf5-net | e6bc435e-d51c-4c2d-bad1-6ead4edcfeda |
| f5451589-83b3-4471-bcb8-734858026903 | external                                                                  | 7cb8e43c-61e4-4c5e-8451-d670f1023e31 |
+--------------------------------------+---------------------------------------------------------------------------+--------------------------------------+
# You will have to copy the contents of /root/keystonercv3
cat keystonercv3
# Note the name of the external network (usually ‘external’)
# Exit from undercloud ssh
exit # back to jumphost
# Create the file ‘/opnfv-functest/openstack.creds’ with the contents
# of the keystonercv3 file that you copied
# Sample openstack.creds :
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://172.16.10.36:35357/v3
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=opnfv_secret
_________________________________________________________________________________________________________
export OS_REGION_NAME=RegionOne
export OS_INTERFACE=internal
export OS_ENDPOINT_TYPE="internal"
export VOLUME_DEVICE_NAME=sdc
export EXTERNAL_NETWORK="floating_net"
# Substitute vi with your favorite editor
vi /opnfv-functest/openstack.creds
# Insert contents into this file
# Add the following line to this file
export EXTERNAL_NETWORK=<INSERT EXTERNAL NETWORK NAME>
# Save file
# Now, create env file with the following
vi /opnfv-functest/env
# Insert the following contents, with the changes mentioned in comments
INSTALLER_TYPE=apex
# Change this one to your undercloud eth0 IP address
INSTALLER_IP=<INSERT IP address of eth0 on undercloud instance>
DEPLOY_SCENARIO=os-nosdn-nofeature-noha
CI_DEBUG=true
EXTERNAL_NETWORK=<INSERT EXTERNAL NETWORK NAME>
# Save file
# Source the environment
source /opnfv-functest/env
# Make sure the undercloud IP is accessible
ping -c 2 $INSTALLER_IP
# Create folder images/, and download cirros-0.x.x-x86_64-disk.img to
# images folder
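For example (substitute the exact CirrOS version the course specifies for <version>; the
download host below is the standard CirrOS mirror):

```shell
# Create the images folder that the Functest container will mount
mkdir -p /opnfv-functest/images
cd /opnfv-functest/images

# Download the CirrOS disk image; keep the version the lab calls for
curl -LO http://download.cirros-cloud.net/<version>/cirros-<version>-x86_64-disk.img
```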
_________________________________________________________________________________________________________
nohup sudo docker run --env-file /opnfv-functest/env \
    -v /opnfv-functest/openstack.creds:/home/opnfv/functest/conf/env_file \
    -v /opnfv-functest/images:/home/opnfv/functest/images \
    opnfv/functest-healthcheck:opnfv-9.0.0 &
# We can access the container output in the nohup.out file;
# just press Ctrl+C to close the log
tail -f nohup.out
# Get the docker container information
docker ps -a
# We can access the docker container logs to see the progress
docker logs {CONTAINER ID for opnfv/functest-healthcheck:opnfv-9.0.0}
# Go to the Horizon window, during the api_check test, look at the network
# topology to see activity, and then resources torn down once the test is
# complete
# Alternatively, we can use the below command to access the docker container
# console logs; similar to tail, just press Ctrl+C to close the log
docker ps -a
docker logs -f <container name> # must be run while functest is running
2020-09-19 13:50:47,882 - xtesting.ci.run_tests - INFO - Xtesting report:
+--------------------------+------------------+---------------------+------------------+----------------+
| TEST CASE | PROJECT | TIER | DURATION | RESULT |
+--------------------------+------------------+---------------------+------------------+----------------+
| connection_check | functest | healthcheck | 00:03 | PASS |
| tenantnetwork1 | functest | healthcheck | 00:04 | PASS |
| tenantnetwork2 | functest | healthcheck | 00:06 | PASS |
| vmready1 | functest | healthcheck | 00:06 | PASS |
| vmready2 | functest | healthcheck | 00:07 | PASS |
| singlevm1 | functest | healthcheck | 05:23 | PASS |
| singlevm2 | functest | healthcheck | 05:25 | PASS |
| vping_ssh | functest | healthcheck | 05:59 | PASS |
+--------------------------+------------------+---------------------+------------------+----------------+
4. Run tests individually.
# Execute individual tests
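One way to do this, assuming the same image, env file, and volume mounts as the full
healthcheck run, is to override the container's default command with xtesting's run_tests
entry point and a test case name from the report above:

```shell
# Run a single healthcheck test case instead of the whole tier
sudo docker run --env-file /opnfv-functest/env \
    -v /opnfv-functest/openstack.creds:/home/opnfv/functest/conf/env_file \
    -v /opnfv-functest/images:/home/opnfv/functest/images \
    opnfv/functest-healthcheck:opnfv-9.0.0 run_tests -t connection_check
```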
_________________________________________________________________________________________________________
Lab
_________________________________________________________________________________________________________
2. Set up the environment to run Yardstick. Feel free to copy the two files (openstack.creds,
env) from the Functest directory if you just ran Functest, to save some time.
_________________________________________________________________________________________________________
+--------------------------------------+-------------+--------+
| ID                                   | Name        | Status |
+--------------------------------------+-------------+--------+
| 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage | active |
| 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04 | active |
+--------------------------------------+-------------+--------+
# You will enter the shell of the undercloud instance as the user ‘stack’
ifconfig eth0
# Make note of the IP address of eth0 on the undercloud instance -
# it is usually of the form 192.168.122.X. It is easiest to copy
# these items into an editor
# You will have to copy the contents of /root/keystonercv3
source keystonercv3
cat keystonercv3
# Copy the contents of this file: OpenStack credentials needed for
# Functest
openstack network list
+--------------------------------------+---------------------------------------------------------------------------+--------------------------------------+
| ID                                   | Name                                                                      | Subnets                              |
+--------------------------------------+---------------------------------------------------------------------------+--------------------------------------+
| 32920d7b-b180-4ee6-acb8-fb88b9a601bf | HeatUtilsCreateComplexStackTests-fc8bb9e8-3751-477e-bade-d7bef1b71cf5-net | e6bc435e-d51c-4c2d-bad1-6ead4edcfeda |
| f5451589-83b3-4471-bcb8-734858026903 | external                                                                  | 7cb8e43c-61e4-4c5e-8451-d670f1023e31 |
+--------------------------------------+---------------------------------------------------------------------------+--------------------------------------+
# Note the name of the external network (usually ‘external’)
# Exit from undercloud ssh
exit # back to jumphost
# Create the file ‘/opnfv-yardstick/openstack.creds’ with the contents
# of the keystonercv3 file that you copied
# Sample openstack.creds :
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://172.16.10.36:35357/v3
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=opnfv_secret
export OS_REGION_NAME=RegionOne
export OS_INTERFACE=internal
export OS_ENDPOINT_TYPE="internal"
export VOLUME_DEVICE_NAME=sdc
export EXTERNAL_NETWORK="floating_net"
# Substitute vi with your favorite editor
vi /opnfv-yardstick/openstack.creds
# Insert contents into this file
# Add the following line to this file
export EXTERNAL_NETWORK=<INSERT EXTERNAL NETWORK NAME>
# Save file
# Now, create env file with the following
vi /opnfv-yardstick/env
_________________________________________________________________________________________________________
3. Run Yardstick.
# This command will download the docker images when it is run for first time
docker run -itd --privileged -v /var/run/docker.sock:/var/run/docker.sock \
    -v /opnfv-yardstick/openstack.creds:/etc/yardstick/openstack.creds \
    -p 8888:5000 --name yardstick opnfv/yardstick:opnfv-8.0.0
# To run the individual tests, enter the bash shell of Yardstick container
docker exec -it yardstick /bin/bash
# Run the prepare command for setting up Yardstick environment
# This command may take a few minutes to run
yardstick env prepare
cd yardstick
# Get a feel for the command
yardstick -h
# Run a sample ping test
more samples/ping.yaml
yardstick task start samples/ping.yaml
_________________________________________________________________________________________________________
# By default, the results will be written to a text file
# Look at the results in /tmp/yardstick.out
cat /tmp/yardstick.out
cat /tmp/report.html
# Sample snip of report.html
<td>1</td>
<td>opnfv_yardstick_tc002</td>
<td>PASS</td>
# Run a few other tests in the samples directory
yardstick task start samples/fio.yaml
# Once you see a message saying Heat Stack creation is done,
# Go to Horizon, Orchestration → Stack → <stack name>,
# Compute → Instances, Compute → Images, Network → Network Topology
# Get a sense for what the test is doing
yardstick task start samples/perf.yaml
_________________________________________________________________________________________________________
4. By default, Yardstick stores results in a text file (in JSON format), /tmp/yardstick.out;
in this lab, we will configure it to store results in a local InfluxDB and display them graphically
using Grafana.
# Configure InfluxDB and Store results in DB, and display them graphically
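The Yardstick CLI can automate this from inside the Yardstick container; alternatively, the
InfluxDB container can be started by hand (the image tag and port mapping below follow the
Yardstick documentation of that era and may need adjusting):

```shell
# Option 1: let Yardstick set up InfluxDB (and, separately, Grafana)
yardstick env influxdb
yardstick env grafana

# Option 2: start InfluxDB by hand and find its IP for yardstick.conf
docker run -d --name influxdb -p 8086:8086 tutum/influxdb
docker inspect \
    -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' influxdb
```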
_________________________________________________________________________________________________________
# (truncated container-inspect JSON output; note the InfluxDB
# container's IP address for use in yardstick.conf below)
# Now, login to Yardstick container
sudo docker exec -it yardstick /bin/bash
# Copy and edit the file /etc/yardstick/yardstick.conf
cp yardstick/etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
# Substitute vi with your favorite editor
vi /etc/yardstick/yardstick.conf
# Modify yardstick.conf on Yardstick Container
[DEFAULT]
debug = False
dispatcher = influxdb
[dispatcher_http]
timeout = 5
target = http://127.0.0.1:8000/results
[dispatcher_file]
file_path = /tmp/yardstick.out
max_bytes = 0
backup_count = 0
[dispatcher_influxdb]
timeout = 5
target = http://<INSERT influxdb container IP>:8086
db_name = yardstick
username = root
password = root
# Save the above file,
# Run a Yardstick test (takes 15+ minutes): opnfv_yardstick_tc001.yaml
# Review the test - what metrics does the test measure, what is the max
# number of ports tested, and how many parameters can be configured?
cd yardstick
# To list all the testcases, use this command:
_________________________________________________________________________________________________________
yardstick testcase list
yardstick task start tests/opnfv/test_cases/opnfv_yardstick_tc001.yaml
# The test may throw DEBUG messages about SSH failures, which can be ignored.
# Wait for the test to run to completion before looking at the results
# in Dashboard.
# In-class users will be provided the IP address
# From your regular browser (NOT Firefox with proxy), login to the Grafana
# Dashboard at the URL http://<Your VM External IP>:1948
# Note that it runs at port number 3000 on the docker container,
# which is mapped to host port 1948
#
# Login to Grafana dashboard using the following credentials:
# username: admin, pw: admin
http://<INSERT YOUR VM EXTERNAL IP>:1948
Please make sure your cloud firewall rules allow HTTP access to port 1948
# Click on Home
# View the dashboard for opnfv_yardstick_tc001
# Click on “Today” and switch to “Last 15 minutes”
_________________________________________________________________________________________________________
# Review the graphs
_________________________________________________________________________________________________________
Troubleshooting steps
If you want to use a custom image for the Yardstick tests, you can create a Glance image with
the name yardstick-image.
If the VM takes too long to boot the instance, increase the resources of yardstick-flavor
as per the OS requirements (for example, to 2 vCPUs, 1.5 GB RAM, and a 5 GB disk).
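Flavors cannot be resized in place, so one way to apply the example values above is to delete
and recreate the flavor with the standard OpenStack CLI:

```shell
# Recreate yardstick-flavor with 2 vCPUs, 1.5 GB RAM, and a 5 GB disk
openstack flavor delete yardstick-flavor
openstack flavor create --vcpus 2 --ram 1536 --disk 5 yardstick-flavor
```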
_________________________________________________________________________________________________________