Lab Manual
Copyright © 2022 VMware, Inc. All rights reserved. This manual and its accompanying
materials are protected by U.S. and international copyright and intellectual property laws.
VMware products are covered by one or more patents listed at
http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of
VMware, Inc. in the United States and/or other jurisdictions. All other marks and names
mentioned herein may be trademarks of their respective companies. VMware vSphere® with
VMware Tanzu®, VMware vSphere® High Availability, VMware vSphere® Distributed
Switch™, VMware vSphere® Distributed Resource Scheduler™, VMware vSphere® Client™,
VMware vSphere® 2015, VMware vSphere®, VMware vCenter Server®, VMware
Workstation™, VMware View®, VMware Horizon® View™, VMware Verify™, VMware Tanzu®,
VMware Tanzu® Enterprise, VMware Tanzu® Community, VMware Tanzu® Basic, VMware
Tanzu® Standard, VMware Horizon® 7, VMware Horizon® 7 on VMware
Cloud™ on AWS, VMware Certificate Authority, VMware Tanzu® Kubernetes
Grid™ Service, VMware Tanzu® Kubernetes Grid™, VMware Pivotal Labs® Platform
Management™, Project Photon OS™, VMware Photon™, VMware NSX-T™ Data Center,
VMware NSX® Manager™, VMware NSX®, VMware Go™, VMware ESXi™, and VMware
vSphere® Distributed Resource Scheduler™ are registered trademarks or trademarks of
VMware, Inc. in the United States and/or other jurisdictions.
The training material is provided “as is,” and all express or implied conditions, representations,
and warranties, including any implied warranty of merchantability, fitness for a particular
purpose or noninfringement, are disclaimed, even if VMware, Inc., has been advised of the
possibility of such claims. This material is designed to be used for reference purposes in
conjunction with a training course.
The training material is not a standalone training tool. Use of the training material for self-study without class attendance is not recommended. These materials and the computer programs to which they relate are the property of, and embody trade secrets and confidential information proprietary to, VMware, Inc., and may not be reproduced, copied, disclosed, transferred, adapted, or modified without the express written approval of VMware, Inc.
www.vmware.com/education
Typographical Conventions
• <ESXi_host_name> — angle brackets indicate a variable that you replace with a value from your lab environment.
Contents
Task 2: Log In to the Developer Workstation VM ............................................................................................ 20
Task 3: Download and Install the Kubernetes CLI Package............................................................................ 21
Lab 6 Creating and Configuring a vSphere with Tanzu Namespace ....................... 23
Task 1: Log In to the vSphere Client ....................................................................................................................... 24
Task 2: Create a vSphere with Tanzu Namespace............................................................................................ 24
Task 3: Configure Permissions and Storage for the Namespace................................................................. 25
Task 4: Access the Namespace Using the kubectl CLI ................................................................................... 26
Lab 7 Deploying a Container Application That Runs in a vSphere Pod ................. 27
Task 1: Deploy a vSphere Pod .................................................................................................................................. 28
Task 2: View the Deployed vSphere Pod............................................................................................................. 29
Lab 8 Scaling Out a vSphere Pod Deployment ............................................................... 31
Task 1: Scale Out a vSphere Pod Deployment ................................................................................................... 32
Task 2: View the Scaled-Out vSphere Pod Deployment................................................................................ 33
Lab 9 Deploying a vSphere Pod with a Persistent Volume........................................ 35
Task 1: Create a Persistent Volume Claim............................................................................................................. 36
Task 2: View the Persistent Volume Claim ............................................................................................................37
Task 3: Create a vSphere Pod Deployment ........................................................................................................ 38
Task 4: Delete the Deployment and the Persistent Volume Claim ............................................................. 39
Lab 10 Creating a Kubernetes Service ................................................................................ 41
Task 1: Create a Load Balancer Kubernetes Service ........................................................................................ 42
Task 2: Obtain the Kubernetes Service External IP Address ........................................................................ 43
Lab 11 Creating a Kubernetes Network Policy ................................................................ 45
Task 1: Create a Network Policy to Deny Traffic............................................................................................... 46
Lab 12 Viewing Kubernetes Objects ................................................................................... 49
Task 1: Log In to NSX Manager ................................................................................................................................. 50
Task 2: View Segments ............................................................................................................................................... 50
Task 3: View Virtual Servers ...................................................................................................................................... 52
Task 4: View the Distributed Firewall ..................................................................................................................... 53
Task 5: View Namespaces .......................................................................................................................................... 55
Task 6: View the Network Topology ..................................................................................................................... 56
Lab 13 Enabling the Harbor Registry................................................................................... 59
Task 1: Log In to the vSphere Client ....................................................................................................................... 60
Task 2: Enable an Embedded Harbor Registry .................................................................................................... 61
Lab 14 Pushing and Deploying Harbor Images ................................................................ 63
Task 1: Install and Configure the vSphere Docker Credential Helper ......................................................... 64
Task 2: Push an Image to Harbor ............................................................................................................................. 65
Task 3: Review the Image in Harbor ....................................................................................................................... 65
Task 4: Deploy an Image from Harbor ................................................................................................................... 66
Lab 15 Configuring a Content Library ................................................................................. 69
Task 1: Log In to the vSphere Client ....................................................................................................................... 70
Task 2: Upload the Tanzu Kubernetes Cluster Template .................................................................................71
Lab 16 Deploying a Tanzu Kubernetes Cluster ............................................................... 73
Task 1: View a Tanzu Kubernetes Cluster Deployment YAML File ............................................................. 74
Task 2: Deploy a Tanzu Kubernetes Cluster ........................................................................................................ 76
Lab 17 Working with Tanzu Kubernetes Clusters .......................................................... 77
Task 1: Apply a Pod Security Policy ........................................................................................................................ 78
Task 2: Deploy a Container Application................................................................................................................. 80
Task 3: Scale Out a Tanzu Kubernetes Cluster................................................................................................... 82
Lab 18 Control Plane Certificate Management ................................................................ 85
Task 1: Log In to the vSphere Client ....................................................................................................................... 86
Task 2: Generate a Certificate Signing Request ................................................................................................. 87
Task 3: Obtain a Signed Certificate ......................................................................................................................... 88
Task 4: Install the Certificate Authority Root Certificate ................................................................................ 89
Task 5: Replace the Control Plane Management Certificate ......................................................................... 90
Lab 19 (Optional) Deploying the Yelb Application as vSphere Pods ....................... 91
Task 1: Create a Namespace ...................................................................................................................................... 92
Task 2: Deploy the Yelb Application....................................................................................................................... 92
Lab 20 (Optional) Deploying the Yelb Application to a Tanzu Kubernetes Cluster ........... 93
Task 1: Deploy the Yelb Application........................................................................................................................ 94
Task 2: Delete a Tanzu Kubernetes Cluster ......................................................................................................... 95
Lab 1 Verifying Docker on the
Developer Workstation
Task 1: Connect to the Developer Workstation VM
You connect to the Photon OS virtual machine using SSH. You use this VM to perform CLI-based
tasks.
1. Start MTPuTTY from the taskbar shortcut on the student Windows desktop.
2. Under Site-A Systems in the left pane of MTPuTTY, double-click SA-CLI-VM to log in.
• Password: VMware1!
3. Verify that the Docker binary is installed.
which docker
The command should return the path to the Docker binary, /usr/bin/docker.
4. Verify that Docker is running.
systemctl status docker
If necessary, press Ctrl+C to display the command prompt.
The command should return an active status for Docker: active (running).
root@sa-cli-vm [ ~ ]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; disabled;
vendor preset: disabled)
Active: active (running) since Fri 2020-04-10 08:24:25 UTC; 5s ago
Docs: https://docs.docker.com
Main PID: 742 (dockerd)
Tasks: 11
Memory: 114.6M
CGroup: /system.slice/docker.service
└─742 /usr/bin/dockerd -H fd:// --
containerd=/run/containerd/containerd.sock
# snipped #
Apr 10 08:24:25 sa-cli-vm systemd[1]: Started Docker Application
Container Engine.
root@sa-cli-vm [ ~ ]#
5. Verify the Docker client and server version.
docker version
This command returns the Docker client and server version.
root@sa-cli-vm [ ~ ]# docker version
Client: Docker Engine - Community
Version: 18.09.9
API version: 1.39
Go version: go1.11.13
Git commit: 039a7df
Built: Thu Nov 14 01:02:31 2019
OS/Arch: linux/amd64
Experimental: false
Task 3: Inspect the vSphere Environment
You verify that no alarms or alerts are present in the vSphere environment.
1. Open the Chrome browser by clicking the shortcut on the taskbar of the student desktop.
The browser automatically redirects you to the vSphere Client URL at https://sa-vcsa-01.vclass.local/ui.
2. If the browser displays a security warning, select Advanced > Proceed to sa-vcsa-01.vclass.local.
3. If the browser does not automatically redirect to the vSphere Client URL, enter https://sa-vcsa-01.vclass.local/ui in the address bar.
4. Log in to the vSphere Client using the Single Sign-On Administrator credentials.
5. In the vSphere Client, select Menu (hamburger) > Inventory > Hosts and Clusters and, if
necessary, expand the vCenter Server inventory.
Task 4: Inspect the NSX Environment
You verify that no alarms or alerts are present in NSX.
1. On the student Windows desktop, open the Chrome browser and go to the NSX UI at https://sa-nsxmgr-01.vclass.local.
3. On the NSX Manager home page, click the Alarm Bell icon.
Lab 2 Running a Container Image
Task 1: Connect to the Developer Workstation VM
You connect to the Photon OS virtual machine using SSH. You use this VM to perform CLI-based
tasks.
1. Start MTPuTTY from the taskbar shortcut on the student Windows desktop.
2. Pull the Nginx container image from the local image registry.
docker pull 172.20.10.30/nginx:1.16
The number after the colon (:) is the version of the container image, which is called a tag. If
no tag is specified, the latest version of an image is pulled.
1. Use Docker to run the container image that you previously pulled.
docker run -d -p 8080:80 172.20.10.30/nginx:1.16
-d runs the container in the background, releasing the command prompt, and prints the
container ID.
-p maps an external port (8080) to the internal port (80) of the container.
2. List the running containers.
docker ps
This command returns all the containers that are currently running.
The Nginx application is now running as a container in the SA-CLI-VM virtual machine.
3. On the student Windows desktop, open the Chrome browser and go to http://sa-cli-vm.vclass.local:8080.
Port 8080 is appended to the URL because this port was specified for running the container.
The Nginx web server landing page opens in the browser, confirming access to the container
application. Docker runs the container application in SA-CLI-VM.
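The same check can be made from inside SA-CLI-VM without a browser; a quick sketch using curl:

```shell
# Request the Nginx landing page through the mapped host port 8080.
# Run this on SA-CLI-VM while the container is up.
curl -s http://localhost:8080 | head -n 5
```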
Task 4: Stop a Running Container
You stop a running container.
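The stop steps are elided above; a typical sequence looks like the following (the container ID shown is a placeholder, substitute the ID reported by docker ps):

```shell
# Identify the running container's ID.
docker ps

# Stop the container by ID (a1b2c3 is a placeholder).
docker stop a1b2c3
```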
4. List the running containers to confirm that the container is no longer running.
docker ps
5. On the student Windows desktop, refresh the Chrome browser to open http://sa-cli-vm.vclass.local:8080.
6. Log out of SA-CLI-VM.
exit
Lab 3 Building a Custom Container
Image
Task 1: Connect to the Developer Workstation VM
You connect to the Photon OS virtual machine using SSH. You use this VM to perform CLI-based
tasks.
1. Start MTPuTTY from the taskbar shortcut on the student Windows desktop.
• Password: VMware1!
cd /root/Lab3
2. Read the Dockerfile.
cat Dockerfile
FROM 172.20.10.30/nginx:1.16
RUN rm /usr/share/nginx/html/index.html
COPY index.html /usr/share/nginx/html/
COPY image.svg /usr/share/nginx/html/
RUN chmod 444 /usr/share/nginx/html/index.html
RUN chmod 444 /usr/share/nginx/html/image.svg
This Dockerfile provides specifications for building a container image:
• FROM declares the base image, the Nginx image from the local registry.
• RUN rm deletes the default index.html file from the base image.
• COPY copies the custom index.html and image.svg files into the image.
• RUN the chmod command to modify the permissions of the copied files.
Task 3: Use the Dockerfile to Build an Image
You use the Dockerfile to build a new container image.
docker images
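The build and run commands for this task are not reproduced above. A typical sequence, assuming the image is tagged custom-nginx (a hypothetical name; use the tag given in class), looks like:

```shell
# Build an image from the Dockerfile in /root/Lab3.
# The tag custom-nginx is illustrative.
docker build -t custom-nginx /root/Lab3

# Confirm that the new image is listed locally.
docker images

# Run the image, mapping host port 8080 to container port 80.
docker run -d -p 8080:80 custom-nginx
```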
-p maps an external port (8080) to the internal port (80) of the container.
2. List the running containers.
docker ps
This command returns all the containers that are running.
3. On the student Windows desktop, open the Chrome browser and go to http://sa-cli-vm.vclass.local:8080.
Port 8080 is appended to the URL because this port was specified for running the container.
The Nginx web server landing page opens in the browser, confirming access to the container
application.
The webpage looks different from the one that you viewed previously because you are
running an image with a custom index.html file.
4. List the running containers.
docker ps
This command returns all the running containers.
7. List the running containers to confirm that the container is no longer running.
docker ps
8. Log out of SA-CLI-VM.
exit
Lab 4 Enabling vSphere with Tanzu
Task 1: Log In to the vSphere Client
You log in to the vSphere environment using the vSphere Client.
1. Open the Chrome browser by clicking the shortcut on the taskbar of the student desktop.
The browser automatically redirects you to the vSphere Client URL at https://sa-vcsa-01.vclass.local/ui.
2. If the browser displays a security warning, select Advanced > Proceed to sa-vcsa-01.vclass.local.
3. If the browser does not automatically redirect to the vSphere Client URL, enter https://sa-vcsa-01.vclass.local/ui in the address bar.
4. Log in to the vSphere Client using the Single Sign-On Administrator credentials.
Task 2: Create a Content Library
You create a content library to store Tanzu Kubernetes cluster templates.
1. In the vSphere Client, select Menu (hamburger) > Content Libraries.
2. Click CREATE.
3. In the New Content Library wizard, enter Kubernetes as the content library name and
click Next.
5. Click Next.
6. Select SA-CL-01 as the datastore for the content library and click Next.
7. Click Finish.
NOTE
The content library is used in a later lab, but you must create the content library before you
can enable the cluster for vSphere with Tanzu.
Task 3: Verify That vSphere HA and vSphere DRS Are Enabled
Because vSphere HA and vSphere DRS must be enabled on the ESXi cluster to support vSphere
with Tanzu, you verify that these features are enabled.
1. In the vSphere Client, select Menu (hamburger) > Inventory > Hosts and Clusters.
9. Before continuing, wait until vSphere HA is enabled and all tasks are completed.
Task 4: Enable Workload Management
You enable the cluster for vSphere with Tanzu using the Workload Management wizard.
3. On the vCenter Server and Network section, select NSX and click Next.
4. On the Select a Cluster section, select the ESXi cluster to support vSphere with Tanzu.
5. Select SA-Compute-01 from the list of compatible clusters and click Next.
If the SA-Compute-01 cluster is not visible, it might mean that vSphere HA or vSphere DRS
is not enabled on the cluster.
6. On the Storage section, assign the K8S Storage Policy to the control plane nodes, the
ephemeral disks, and the image cache.
a. From the Control Plane Storage Policy drop-down menu, select K8S Storage Policy.
b. From the Ephemeral Disks Storage Policy drop-down menu, select K8S Storage
Policy.
c. From the Image Cache Storage Policy drop-down menu, select K8S Storage Policy.
d. Click Next.
7. In the Management Network section, define values for management networking and click
Next.
Parameter Action
8. In the Workload Network section, define values for workload networking and click Next.
Parameter Action
10. Select the Kubernetes content library, click OK, and click Next.
a. From the Control Plane Size drop-down menu, select Tiny.
c. Click Finish.
The entire process can take up to 45 minutes to complete. You might need to refresh the
vSphere Client.
The process is complete when the cluster reports a Config Status of Running and a Control Plane Node IP Address of 192.168.30.34. In some cases, the IP address might differ; as long as it is in the 192.168.30.x range, you can continue.
NOTE
When the process is finished, the SA-Compute-01 cluster is enabled for vSphere with Tanzu.
Three control plane VMs are deployed to the cluster, and Spherelet is installed on each of
the ESXi hosts.
13. After the enablement process completes and the Config Status reports Running, record the
control plane node IP address. __________
NOTE
Alarms might appear on the control plane VMs. It is expected that the user cannot clear these
alarms from the virtual machine object.
Select vCenter > Monitor > Triggered Alarms. From here, you can select, acknowledge, and
reset the alarms for the control plane VMs.
1. In the vSphere Client, select Menu (hamburger) > Administration > Licenses.
5. Select the check box beside the asset called SA-Compute-01 and click Assign License.
6. Select the TZ-BS-TLSS-C license from the list of existing licenses and click OK.
Lab 5 Downloading and Configuring
the Kubernetes CLI
IMPORTANT
vSphere with Tanzu must be enabled before you can perform the tasks in this lab.
Task 1: Access the vSphere with Tanzu Landing Page
After vSphere with Tanzu is enabled, you access a landing page for information about
downloading vSphere with Tanzu CLI tools.
1. On the student Windows desktop, open the Chrome browser and go to https://<cluster_ip>.
• For <cluster_ip>, enter the control plane node IP address that you recorded previously.
• Select Menu (hamburger) > Workload Management > Supervisor Clusters in the
vSphere Client to find the IP address again.
• If the browser displays a security warning, click Advanced > Proceed to <cluster_ip>.
The page provides useful information and steps for downloading the Kubernetes CLI
package for your operating system.
Task 2: Log In to the Developer Workstation VM
You connect to the Photon OS virtual machine using SSH.
1. Start MTPuTTY from the taskbar shortcut on the student Windows desktop.
• Password: VMware1!
Task 3: Download and Install the Kubernetes CLI Package
You download and install the vSphere with Tanzu CLI packages to SA-CLI-VM.
1. If you are not in the /root directory, run the cd /root command to change to the
/root directory.
2. Run the wget command to download the vsphere-plugin.zip package for Linux
operating systems.
wget https://<cluster_ip>/wcp/plugin/linux-amd64/vsphere-plugin.zip
The <cluster_ip> parameter is the control plane node IP address that you recorded
previously. You can also find this address by selecting Menu (hamburger) > Workload
Management > Supervisor Clusters in the vSphere Client.
3. Unzip the downloaded package.
unzip vsphere-plugin.zip
4. Configure the environment variable PATH to include the extracted bin directory and set up
Tab autocompletion.
cat ~/.bash_profile
The output of the file is similar to the example.
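The profile example itself is not reproduced above. A typical configuration, assuming the plugin was extracted to /root/bin (use the directory where you unzipped vsphere-plugin.zip), might look like:

```shell
# Append the extracted bin directory to PATH and enable kubectl tab
# completion. The /root/bin path is an assumption for illustration.
echo 'export PATH=$PATH:/root/bin' >> ~/.bash_profile
echo 'source <(kubectl completion bash)' >> ~/.bash_profile

# Review the updated profile.
cat ~/.bash_profile
```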
6. Log out of SA-CLI-VM so that the profile changes take effect when you log back in.
exit
7. Under Site-A Systems in the left pane of MTPuTTY, double-click SA-CLI-VM to log back in.
9. Close MTPuTTY.
Lab 6 Creating and Configuring a
vSphere with Tanzu Namespace
Task 1: Log In to the vSphere Client
You log in to the vSphere environment using the vSphere Client.
1. Open the Chrome browser by clicking the shortcut on the taskbar of the student desktop.
The browser automatically redirects you to the vSphere Client URL at https://sa-vcsa-01.vclass.local/ui.
2. If the browser displays a security warning, select Advanced > Proceed to sa-vcsa-01.vclass.local.
3. If the browser does not automatically redirect to the vSphere Client URL, enter https://sa-vcsa-01.vclass.local/ui in the address bar.
4. Log in to the vSphere Client using the Single Sign-On Administrator credentials.
Task 2: Create a vSphere with Tanzu Namespace
You create a namespace to run Kubernetes workloads.
1. In the vSphere Client, select Menu (hamburger) > Workload Management > Namespaces.
5. Click Create.
The namespace is created and shows a Config Status of Running and a Kubernetes Status of
Active.
Task 3: Configure Permissions and Storage for the Namespace
Using the vSphere Client, you assign permissions and a VM storage policy to the vSphere with
Tanzu namespace so that a user can authenticate to the namespace.
b. Search for the user devops01 and select the Can edit role.
c. Click OK.
4. In the Select Storage Policies window, select K8S Storage Policy and click OK.
Your namespace is configured with a storage policy and user permissions.
c. Click OK.
The associated VM classes determine the resource sizing of virtual machines that can be
deployed in the namespace. The VM class is used in a later lab when you deploy a Tanzu
Kubernetes cluster.
Task 4: Access the Namespace Using the kubectl CLI
You log in to the vSphere with Tanzu namespace using the kubectl CLI to view the newly
created namespace.
1. Start MTPuTTY from the taskbar shortcut on the student Windows desktop.
• Password: VMware1!
3. From SA-CLI-VM, use the kubectl CLI to connect to the vSphere with Tanzu control plane as the devops01 user.
kubectl vsphere login --server <cluster_ip> -u devops01@vsphere.local
• User name: devops01@vsphere.local
• Password: VMware1!
4. After logging in to the vSphere with Tanzu control plane, list the available namespaces.
5. If namespace-01 is not marked as the current namespace, run the kubectl config
use-context namespace-01 command.
6. Inspect the namespace.
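The commands for steps 4 through 6 can be sketched as follows (after a kubectl vsphere login, the namespaces that you can access appear as kubectl contexts):

```shell
# List the contexts (namespaces) available to the devops01 user.
kubectl config get-contexts

# Switch to namespace-01 if it is not the current context.
kubectl config use-context namespace-01

# Inspect the namespace.
kubectl describe namespace namespace-01
```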
Lab 7 Deploying a Container
Application That Runs in a vSphere
Pod
Task 1: Deploy a vSphere Pod
You use the kubectl CLI to deploy a vSphere Pod that runs a container application.
1. From SA-CLI-VM, use the kubectl CLI to connect to the vSphere with Tanzu control plane as
the devops01 user.
kubectl vsphere login --server <cluster_ip> -u devops01@vsphere.local
The <cluster_ip> parameter is the control plane node IP address. If needed, you can
access this address by selecting Menu (hamburger) > Workload Management > Supervisor
Clusters in the vSphere Client.
The credentials for the devops01 user are as follows:
• User name: devops01@vsphere.local
• Password: VMware1!
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 172.20.10.30/nginx:1.16
        ports:
        - containerPort: 80
4. Create the Kubernetes deployment.
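The apply command is elided above; assuming the deployment YAML shown earlier is saved at /root/Lab7/nginx-deployment.yaml (a hypothetical path), the step looks like:

```shell
# Create the deployment from the manifest (file path is illustrative).
kubectl apply -f /root/Lab7/nginx-deployment.yaml

# Watch the vSphere Pod come up.
kubectl get pods
```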
1. In the vSphere Client, select Menu (hamburger) > Inventory > Hosts and Clusters.
3. In the vCenter Server inventory, select the namespace-01 object and click Monitor.
This object is the Kubernetes pod that you deployed using the kubectl CLI. vSphere Pods
are first-class citizens in vSphere.
Lab 8 Scaling Out a vSphere Pod
Deployment
Task 1: Scale Out a vSphere Pod Deployment
You use the kubectl CLI to scale out the number of replicas in a vSphere Pod deployment.
1. From SA-CLI-VM, use the kubectl CLI to connect to the vSphere with Tanzu control plane as
the devops01 user.
kubectl vsphere login --server <cluster_ip> -u devops01@vsphere.local
The <cluster_ip> parameter is the control plane node IP address that you recorded
previously. You can also find this address by selecting Menu (hamburger) > Workload
Management > Supervisor Clusters in the vSphere Client.
The credentials for the devops01 user are as follows:
• User name: devops01@vsphere.local
• Password: VMware1!
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 172.20.10.30/nginx:1.16
        ports:
        - containerPort: 80
4. Apply the updated YAML.
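The apply step can also be done imperatively; both forms are sketched below (the file path is hypothetical):

```shell
# Declarative: re-apply the edited manifest with replicas: 3.
kubectl apply -f /root/Lab8/nginx-deployment.yaml

# Imperative alternative: scale without editing the YAML.
kubectl scale deployment nginx-deployment --replicas=3
```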
1. In the vSphere Client, select Menu (hamburger) > Inventory > Hosts and Clusters.
3. In the vCenter Server inventory, select the namespace-01 object and click Monitor.
5. In the vCenter Server inventory, verify that three nginx-deployment vSphere Pods are
running.
6. If the additional vSphere Pods are not visible, refresh the vSphere Client.
Lab 9 Deploying a vSphere Pod with a
Persistent Volume
Task 1: Create a Persistent Volume Claim
You create a persistent volume claim to be used with a vSphere Pod deployment.
1. From SA-CLI-VM, use the kubectl CLI to connect to the vSphere with Tanzu control plane as
the devops01 user.
kubectl vsphere login --server <cluster_ip> -u devops01@vsphere.local
The <cluster_ip> parameter is the control plane node IP address that you can find by
selecting Menu (hamburger) > Workload Management > Supervisor Clusters in the
vSphere Client.
The credentials for the devops01 user are as follows:
• User name: devops01@vsphere.local
• Password: VMware1!
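The claim YAML read in this task is not reproduced above. A minimal claim consistent with the claimName my-pvc used later in this lab might look like this (the storage class name and size are assumptions; use the values shown in class):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc                 # matches claimName in pod-with-pv.yaml
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: k8s-storage-policy   # assumed; use the class from your namespace
  resources:
    requests:
      storage: 1Gi             # illustrative size
```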
4. Create the persistent volume claim by applying the YAML file.
1. In the vSphere Client, select Menu (hamburger) > Inventory > Hosts and Clusters.
3. In the vCenter Server inventory, select SA-Compute-01 and select Monitor > Cloud Native
Storage > Container Volumes.
You can view the persistent volume claims for the cluster.
4. To the left of the volume name, click the Details icon to view more information about the
persistent volume claim.
The persistent volume claim is created but not associated with any vSphere Pods.
Task 3: Create a vSphere Pod Deployment
You create a vSphere Pod deployment to consume the persistent volume claim.
1. From SA-CLI-VM, run the cat command to read the pod deployment YAML file
/root/Lab9/pod-with-pv.yaml.
cat /root/Lab9/pod-with-pv.yaml
The YAML file deploys an instance of Nginx. A volumes section appears.
volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-pvc
2. Create the Kubernetes deployment with a persistent volume by applying the deployment
YAML file.
5. Click the Details icon next to the volume name to view more information about the
persistent volume claim.
The persistent volume claim is now associated with the new pod deployment.
Task 4: Delete the Deployment and the Persistent Volume Claim
You delete the pod deployment and the persistent volume claim.
NOTE
Deleting a deployment does not delete the persistent volumes associated with the
deployment. Persistent volume claims must be deleted as a separate task.
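The delete commands are elided above; they can be sketched as follows (the deployment file path is hypothetical, the claim name comes from the YAML shown earlier):

```shell
# Delete the pod deployment.
kubectl delete -f /root/Lab9/pod-with-pv.yaml

# The persistent volume claim outlives the deployment; remove it explicitly.
kubectl delete pvc my-pvc
```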
Lab 10 Creating a Kubernetes Service
Task 1: Create a Load Balancer Kubernetes Service
You create a load balancer Kubernetes service to provide ingress connectivity to a vSphere Pod
deployment.
1. From SA-CLI-VM, use the kubectl CLI to connect to the vSphere with Tanzu control plane as
the devops01 user.
• Password: VMware1!
3. Read the service YAML file.
cat /root/Lab10/lb-service.yaml
The command returns a simple service YAML file, which declares a service of the type
LoadBalancer to apply to pods with the label nginx.
apiVersion: v1
kind: Service
metadata:
  name: lb-service
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
4. Create the load balancer service.
kubectl apply -f /root/Lab10/lb-service.yaml
A new NSX virtual service is created and issued with an IP address from the ingress CIDR
that was defined during the vSphere with Tanzu enablement process.
Task 2: Obtain the Kubernetes Service External IP Address
You obtain the Kubernetes service external IP address to access the container application.
1. From SA-CLI-VM, run the kubectl get services command to list the Kubernetes services.
2. Record the external IP address value for your load balancer service (lb-service).
__________
3. On the student Windows desktop, open the Chrome browser and go to http://<external_ip>.
The <external_ip> parameter is the external IP address value of the lb-service, as recorded in the previous step.
The Nginx web server landing page opens in the browser, confirming access to the container
application running as a vSphere Pod.
Lab 11 Creating a Kubernetes Network
Policy
Task 1: Create a Network Policy to Deny Traffic
You create a Kubernetes network policy to deny traffic to a vSphere Pod.
1. From SA-CLI-VM, use the kubectl CLI to connect to the vSphere with Tanzu control plane as
the devops01 user.
• Password: VMware1!
3. Read the network policy YAML file.
cat /root/Lab11/network-policy.yaml
This simple network policy YAML file declares that all ingress and egress traffic is denied for
pods with the label nginx.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-nginx
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
    - Ingress
    - Egress
4. Create the Kubernetes network policy.
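Using the file read earlier in this task, the apply step can be sketched as:

```shell
# Apply the network policy manifest.
kubectl apply -f /root/Lab11/network-policy.yaml

# Confirm that the policy exists in the current namespace.
kubectl get networkpolicy
```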
6. On the student Windows desktop, open the Chrome browser and go to
http://<external_ip>.
The <external_ip> parameter is the value of the load balancer service (lb-service) that
you recorded previously.
If needed, you can obtain the <external_ip> by running kubectl get services
from SA-CLI-VM.
NOTE
The browser might have a cache of the webpage. Refresh the browser if you continue to see
the Nginx webpage.
7. In the vSphere Client, select namespace-01 and select Network > Network Policies.
You can view the Kubernetes network policies that are applied in the namespace.
Lab 12 Viewing Kubernetes Objects
Task 1: Log In to NSX Manager
You log in to the NSX Manager user interface.
1. On the student Windows desktop, open the Chrome browser and go to the NSX UI at
https://sa-nsxmgr-01.vclass.local.
IMPORTANT
Values and variables in your lab are different from the screenshots. Screenshots are
examples only.
Task 2: View Segments
You view the segment that NSX creates for the vSphere with Tanzu namespace.
1. From the NSX Manager home page, select Networking > Segments.
You can see only the segment that is created when the namespace-01 object is created in
the vSphere Client.
3. Click the number in the Ports column for the namespace-01 segment.
4. Click Close.
Task 3: View Virtual Servers
You view the virtual server that is created when a Kubernetes load balancer service is created.
1. From the NSX Manager home page, select Networking > Load Balancing.
• Distributed Load Balancer: Used by the control plane VM system pods and the cluster IP
address of pods
• Server Load Balancer: Used for the control plane API access
• Server Load Balancer: Used to service the Kubernetes load balancer services for
external access (ingress) for a specific namespace
NOTE
The screenshot was captured when using vSphere 7 Update 1c. Previous versions might
display a different configuration.
2. Click the Virtual Servers tab to view a list of all virtual servers.
3. In the filter text field, enter lb-service, which is the name for the Kubernetes load
balancer service.
Two results are listed. The virtual server with the word "domain" at the start of its name is
assigned an IP address from the 192.168.30.32/27 CIDR, which is defined as the ingress
CIDR.
Task 4: View the Distributed Firewall
You view the distributed firewall rules that are created when a Kubernetes network policy is
created.
1. From the NSX Manager home page, select Security > Distributed Firewall.
3. Expand the two rules with namespace-01-deny-all at the beginning of their names.
These rules are created by the network policy. As defined by the Kubernetes network
policy, the firewall rule is configured to drop all ingress and egress traffic.
5. In the View Members window, click Segment Ports.
6. Click Close.
Task 5: View Namespaces
You view the container workloads that are visible to NSX Manager.
1. From the NSX Manager home page, select Inventory > Containers > Namespaces.
2. In the filter text field, enter namespace-01 to display only this namespace.
3. Click the numbers in the Pods, Services, and Networking columns to view more details about
each item.
NOTE
You get a global view of all container-capable clusters that are visible to NSX Manager.
Task 6: View the Network Topology
You access the network topology view of the environment.
1. From the NSX Manager home page, select Networking > Network Topology.
A collapsed view of the entire network topology appears.
3. Click and drag the topology view to find the VMs icon and click the icon.
The vSphere with Tanzu control plane VMs appear.
4. Click and drag the topology view to find the Pods icon and click the icon.
NOTE
vSphere Pods appear as both VM and pod objects in the topology view.
Lab 13 Enabling the Harbor Registry
Task 1: Log In to the vSphere Client
You log in to the vSphere environment using the vSphere Client.
1. Open Chrome using the shortcut on the taskbar of the student desktop.
2. If the browser does not automatically redirect to the vSphere Client URL, enter
https://sa-vcsa-01.vclass.local/ui in the address bar.
3. Log in to the vSphere Client using the Single Sign-On Administrator credentials.
Task 2: Enable an Embedded Harbor Registry
You use the vSphere Client to enable the embedded Harbor registry.
1. In the vSphere Client, click SA-Compute-01 and select Configure > Supervisor Cluster >
Image Registry.
3. In the Select Storage Policies window, select K8S Storage Policy and click OK.
Lab 14 Pushing and Deploying Harbor Images
Task 1: Install and Configure the vSphere Docker Credential Helper
You install and configure the vSphere Docker credential helper so that you can authenticate to
Harbor.
2. If you are not in the /root directory, run the cd /root command.
wget https://<cluster_ip>/wcp/helper/linux-amd64/vsphere-docker-credential-helper.zip
The <cluster_ip> value is the control plane node IP address that you recorded
previously. If needed, you can find the address by selecting Menu (hamburger) > Workload
Management > Supervisor Clusters in the vSphere Client.
unzip vsphere-docker-credential-helper.zip
5. Create a subdirectory in /etc/docker/certs.d/ that corresponds to the IP address
of the Harbor instance.
mkdir -p /etc/docker/certs.d/<harbor_ip>
The <harbor_ip> value is the IP address that you recorded in the previous lab when you
enabled the embedded Harbor registry. You can also access this address in the vSphere
Client by selecting the SA-Compute-01 cluster and selecting Configure > Namespaces >
Image Registry.
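Taken together, the commands in this task form a short sequence like the following. The placeholders are as described above; running the steps from /root and unpacking into the current directory are assumptions consistent with the lab text:

```shell
cd /root
# Download the credential helper bundle from the Supervisor Cluster control plane
wget https://<cluster_ip>/wcp/helper/linux-amd64/vsphere-docker-credential-helper.zip
unzip vsphere-docker-credential-helper.zip
# Trust the Harbor certificate: Docker looks for per-registry CA files under this path
mkdir -p /etc/docker/certs.d/<harbor_ip>
```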
Task 2: Push an Image to Harbor
You tag an image and push the image to the Harbor repository.
1. From SA-CLI-VM, run the docker tag command to tag the custom image that you
created in a previous lab.
docker images
3. Upload the image to the Harbor repository.
The <harbor_ip> value is the IP address that you recorded when you enabled the
embedded Harbor registry. You can also find the address in the vSphere Client by selecting
SA-Compute-01 and selecting Configure > Namespaces > Image Registry.
• Password: VMware1!
The image that you pushed to the repository is listed. All images pushed to the repository
are listed here.
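The tag-and-push commands themselves are not reproduced in the text. They follow the usual Docker pattern; in this sketch the image name and Harbor project are hypothetical placeholders, not values from the lab:

```shell
# Hypothetical example: image name and project are placeholders
docker tag my-web-app:latest <harbor_ip>/namespace-01/my-web-app:latest
docker push <harbor_ip>/namespace-01/my-web-app:latest
```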
Task 4: Deploy an Image from Harbor
You deploy an image stored in a Harbor repository.
If Harbor was deployed with an IP address other than 192.168.30.35, you must update the
YAML file to use the correct Harbor IP address.
2. Apply this YAML file to deploy the application and service.
<external_ip> is the external IP address that you recorded in the previous step.
Lab 15 Configuring a Content Library
Task 1: Log In to the vSphere Client
You log in to the vSphere environment using the vSphere Client.
1. Open the Chrome browser using the shortcut on the taskbar of the student desktop.
2. If the browser does not automatically redirect to the vSphere Client URL, enter the URL
https://sa-vcsa-01.vclass.local/ui in the address bar.
3. Log in to the vSphere Client using the Single Sign-On Administrator credentials.
Task 2: Upload the Tanzu Kubernetes Cluster Template
You upload the Tanzu Kubernetes cluster template to the content library.
IMPORTANT
Changing the name of the imported template might cause deployed Tanzu Kubernetes
clusters to be reconciled and updated with new virtual machines.
8. Click Import.
9. Monitor the progress of the upload in the Recent Tasks pane of the vSphere Client.
CAUTION
Do not refresh or close the browser window while the upload is in progress. The upload can
take up to 10 minutes.
Lab 16 Deploying a Tanzu Kubernetes Cluster
Task 1: View a Tanzu Kubernetes Cluster Deployment YAML File
You view the Tanzu Kubernetes cluster deployment YAML file.
1. From SA-CLI-VM, use the kubectl CLI to connect to the vSphere with Tanzu control plane as
the devops01 user.
• Password: VMware1!
5. To read the /root/Lab16/deploy-tkg-cluster.yaml file, run the cat
command.
cat /root/Lab16/deploy-tkg-cluster.yaml
A YAML declaration is returned. It declares that a Tanzu Kubernetes cluster of version
v1.18.10 is deployed to namespace-01, with one control plane VM and one worker VM. The
deployment uses the same storage class listed in the previous step: k8s-storage-policy. The
deployment uses the VM Class best-effort-xsmall.
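The manifest is not reprinted in this lab, but the scale-out copy shown in a later lab states that it differs only in the worker count, so the deploy file is presumably along these lines (reconstructed, with one worker):

```yaml
# Reconstructed sketch of /root/Lab16/deploy-tkg-cluster.yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01
  namespace: namespace-01
spec:
  distribution:
    fullVersion: v1.18.10+vmware.1-tkg.1.3a6cd48
  topology:
    controlPlane:
      count: 1
      class: best-effort-xsmall
      storageClass: k8s-storage-policy
    workers:
      count: 1
      class: best-effort-xsmall
      storageClass: k8s-storage-policy
  settings:
    network:
      cni:
        name: calico
      services:
        cidrBlocks: ["172.16.100.0/24"]
      pods:
        cidrBlocks: ["172.16.200.0/24"]
```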
Task 2: Deploy a Tanzu Kubernetes Cluster
Using the Tanzu Kubernetes Grid Service for vSphere, you deploy a Tanzu Kubernetes cluster.
4. Verify the completion of the deployment and VM creation in the Recent Tasks pane.
5. After both the control plane VM and the worker VM are deployed, view the Tanzu
Kubernetes cluster from SA-CLI-VM.
Node Status:
tkg-cluster-01-control-plane-cxqg6: ready
tkg-cluster-01-workers-dp4kj-6596cd84c-l6p6h: ready
Phase: running
Vm Status:
tkg-cluster-01-control-plane-cxqg6: ready
tkg-cluster-01-workers-dp4kj-6596cd84c-l6p6h: ready
Events: <none>
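Status output of this shape typically comes from describing the cluster resource. A plausible form of the elided command, using the resource and namespace names from this lab, is:

```shell
kubectl describe tanzukubernetescluster tkg-cluster-01 -n namespace-01
```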
Lab 17 Working with Tanzu Kubernetes Clusters
Task 1: Apply a Pod Security Policy
You apply a pod security policy to a Tanzu Kubernetes cluster so that a demo application can
run.
NOTE
For more information about using pod security policies with Tanzu Kubernetes clusters, see
vSphere with Tanzu Configuration and Management at
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-152BE7D2-E227-4DAA-B527-557B564D9718.html.
1. From SA-CLI-VM, use the kubectl CLI to connect to the Tanzu Kubernetes cluster as the
devops01 user.
The <tkg_cluster> parameter is the name of the Tanzu Kubernetes cluster, and the
<namespace> parameter is the Supervisor Cluster namespace where this cluster resides.
Example command with parameter values:
• Password: VMware1!
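The example command mentioned above is not reproduced in the text. With the kubectl vsphere plugin, a Tanzu Kubernetes cluster login generally adds two cluster flags to the Supervisor login; the values here match this lab, and the devops01 user's vsphere.local domain is an assumption:

```shell
# Hypothetical sketch; verify flag names against your kubectl vsphere plugin version
kubectl vsphere login --server=https://<cluster_ip> \
  --vsphere-username devops01@vsphere.local \
  --tanzu-kubernetes-cluster-name tkg-cluster-01 \
  --tanzu-kubernetes-cluster-namespace namespace-01 \
  --insecure-skip-tls-verify
```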
3. To read the /root/Lab17/allow-runasroot-clusterrole.yaml file, run the
cat command.
cat /root/Lab17/allow-runasroot-clusterrole.yaml
Tanzu Kubernetes Grid Service provisions Tanzu Kubernetes clusters with the PodSecurityPolicy
admission controller enabled. A pod security policy is required to deploy workloads.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: psp:privileged
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- vmware-system-privileged
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: all:psp:privileged
roleRef:
kind: ClusterRole
name: psp:privileged
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
name: system:serviceaccounts
apiGroup: rbac.authorization.k8s.io
4. Apply the allow-runasroot-clusterrole.yaml file to grant privileges for the
demo application to run.
Task 2: Deploy a Container Application
You deploy a container application to a Tanzu Kubernetes cluster.
This process is almost identical to deploying container applications using vSphere Pods.
2. Deploy the container application by applying the YAML file.
<external_ip> is the external IP address of the load balancer service from the previous
step.
NOTE
Pods that are deployed on a Tanzu Kubernetes cluster are not visible from the vSphere
Client. They reside in the Tanzu Kubernetes cluster worker node VMs.
Task 3: Scale Out a Tanzu Kubernetes Cluster
You scale the number of Tanzu Kubernetes cluster worker nodes from one to three.
1. On SA-CLI-VM, return to the supervisor namespace context where the Tanzu Kubernetes
cluster resides.
cat /root/Lab17/scale-tkg-cluster.yaml
The file is a copy of the deploy-tkg-cluster.yaml file, except that the number of
worker node VMs is increased to three.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
name: tkg-cluster-01
namespace: namespace-01
spec:
distribution:
fullVersion: v1.18.10+vmware.1-tkg.1.3a6cd48
topology:
controlPlane:
count: 1
class: best-effort-xsmall
storageClass: k8s-storage-policy
workers:
count: 3 # <<<--- increased number of workers to 3
class: best-effort-xsmall
storageClass: k8s-storage-policy
settings:
network:
cni:
name: calico
services:
cidrBlocks: ["172.16.100.0/24"]
pods:
cidrBlocks: ["172.16.200.0/24"]
3. Apply the updated YAML file.
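Step 3 presumably reuses the file shown above:

```shell
kubectl apply -f /root/Lab17/scale-tkg-cluster.yaml
```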
4. If your active context is not the Supervisor Namespace called namespace-01, rerun step 1.
6. After all the new worker VMs are deployed, view the Tanzu Kubernetes cluster from SA-
CLI-VM.
Lab 18 Control Plane Certificate Management
Task 1: Log In to the vSphere Client
You log in to the vSphere environment using the vSphere Client.
1. Start the Chrome browser by using the shortcut on the taskbar of the student desktop.
2. If the browser does not automatically redirect to the vSphere Client URL, enter
https://sa-vcsa-01.vclass.local/ui in the address bar.
3. Log in to the vSphere Client using the Single Sign-On Administrator credentials.
Task 2: Generate a Certificate Signing Request
You create a certificate signing request (CSR) for the control plane management interface. The
CSR is then provided to a certificate authority.
1. In the vSphere Client, select SA-Compute-01 and select Configure > Supervisor Cluster >
Certificates.
2. In the Workload Platform Management tile, select Generate CSR from the Actions drop-
down menu.
NOTE
The value for the common name must be vspherek8s.vclass.local. The other values can be
changed.
In the CSR form, for the State/Province parameter, enter CA.
6. Click Finish.
Task 3: Obtain a Signed Certificate
You provide a CSR to a certificate authority to download a signed certificate.
2. Log in.
3. In the Microsoft Active Directory Certificate Services home page, click Request a
Certificate.
5. Paste the copied contents of the CSR into the Saved Request text box.
7. Click Submit.
The browser might display the download warning This type of file can harm
your computer. Do you want to keep certnew.cer anyway?.
10. Click Keep on the warning to save the certnew.cer file.
Task 4: Install the Certificate Authority Root Certificate
You install the certificate authority root certificate into the vCenter Server trusted root store.
1. In the vSphere Client, select Menu (hamburger) > Administration > Certificate
Management.
5. Click Open.
6. Click Add.
A second entry is visible in the vSphere Client under Trusted Root Certificates.
Task 5: Replace the Control Plane Management Certificate
You install a new signed certificate for the vSphere with Tanzu control plane VMs.
1. In the vSphere Client, select Menu (hamburger) > Inventory > Hosts and Clusters.
2. Select SA-Compute-01.
3. Select Configure > Supervisor Cluster > Certificates.
4. In the Workload Platform Management tile, select Replace Certificate from the Actions
drop-down menu.
5. In the Replace Certificate window, click Upload Certificate File.
6. In the Open window, browse to C:\Materials\Downloads and select the
certnew.cer file.
7. Click Open.
8. Click Replace.
9. Open a new browser tab and go to https://192.168.30.33.
The vSphere with Tanzu control plane landing page opens.
NOTE
You might need to wait a few minutes, and close and reopen the browser, for the certificate
to appear as trusted and secure.
Lab 19 (Optional) Deploying the Yelb Application as vSphere Pods
Task 1: Create a Namespace
You create and configure a namespace to host the Yelb application.
1. In the vSphere Client, create a vSphere with Tanzu namespace called yelb.
2. Provide the yelb namespace with the VM storage policy called K8S Storage Policy.
5. Obtain the external IP address of the yelb-ui load balancer service and open a browser to
this IP address.
Lab 20 (Optional) Deploying the Yelb Application to a Tanzu Kubernetes Cluster
Task 1: Deploy the Yelb Application
You deploy the Yelb application to a Tanzu Kubernetes cluster.
1. From SA-CLI-VM, log in to the vSphere with Tanzu control plane, including the Tanzu
Kubernetes cluster login flags.
2. Verify that the active context is the Tanzu Kubernetes cluster namespace.
5. Obtain the external IP address of the yelb-ui load balancer service and open a browser to this
IP address.
Task 2: Delete a Tanzu Kubernetes Cluster
You delete a Tanzu Kubernetes cluster and its associated services.
IMPORTANT
Any container applications running in the Tanzu Kubernetes cluster are deleted.
3. From the vSphere Client, observe that the Tanzu Kubernetes cluster VMs are deleted.
Any services, for example, load balancer services, that are associated with the Tanzu
Kubernetes cluster are also deleted.