
VMware vSphere with Tanzu:

Deploy and Manage [V7]


Lab Manual

VMware® Education Services


VMware, Inc.
www.vmware.com/education
VMware vSphere with Tanzu: Deploy and Manage

Lab Manual

Part Number EDU-EN-VSKDM7-LAB (11-FEB-2022)

Copyright © 2022 VMware, Inc. All rights reserved. This manual and its accompanying
materials are protected by U.S. and international copyright and intellectual property laws.
VMware products are covered by one or more patents listed at
http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of
VMware, Inc. in the United States and/or other jurisdictions. All other marks and names
mentioned herein may be trademarks of their respective companies. VMware vSphere® with
VMware Tanzu®, VMware vSphere® High Availability, VMware vSphere® Distributed
Switch™, VMware vSphere® Distributed Resource Scheduler™, VMware vSphere® Client™,
VMware vSphere® 2015, VMware vSphere®, VMware vCenter Server®, VMware
Workstation™, VMware View®, VMware Horizon® View™, VMware Verify™, VMware Tanzu®,
VMware Tanzu® Enterprise, VMware Tanzu® Community, VMware Tanzu® Basic, VMware
Tanzu® Standard, VMware Horizon® 7, VMware Horizon® 7, VMware Horizon® 7 on VMware
Cloud™ on AWS, VMware Certificate Authority, VMware Tanzu® Kubernetes
Grid™ Service, VMware Tanzu® Kubernetes Grid™, VMware Pivotal Labs® Platform
Management™, Project Photon OS™, VMware Photon™, VMware NSX-T™ Data Center,
VMware NSX® Manager™, VMware NSX®, VMware Go™, VMware ESXi™, and VMware
vSphere® Distributed Resource Scheduler™ are registered trademarks or trademarks of
VMware, Inc. in the United States and/or other jurisdictions.

The training material is provided “as is,” and all express or implied conditions, representations,
and warranties, including any implied warranty of merchantability, fitness for a particular
purpose or noninfringement, are disclaimed, even if VMware, Inc., has been advised of the
possibility of such claims. This material is designed to be used for reference purposes in
conjunction with a training course.

The training material is not a standalone training tool. Use of the training material for self-
study without class attendance is not recommended. These materials and the computer
programs to which they relate are the property of, and embody trade secrets and confidential
information proprietary to, VMware, Inc., and may not be reproduced, copied, disclosed,
transferred, adapted or modified without the express written approval of VMware, Inc.

Typographical Conventions

The following typographical conventions are used in this course.

Conventions        Usage and Examples

Monospace          Identifies command names, command options, parameters, code
                   fragments, error messages, filenames, folder names, directory
                   names, and path names:

                   • Run the esxtop command.

                   • ... found in the /var/log/messages file.

Monospace Bold     Identifies user inputs:

                   • Enter ipconfig /release.

Boldface           Identifies user interface controls:

                   • Click the Configuration tab.

Italic             Identifies book titles:

                   • vSphere Virtual Machine Administration

<>                 Indicates placeholder variables:

                   • <ESXi_host_name>

                   • ... the Settings/<Your_Name>.txt file

Contents

Lab 1 Verifying Docker on the Developer Workstation ................................................... 1


Task 1: Connect to the Developer Workstation VM ............................................................................................ 2
Task 2: Start Docker ........................................................................................................................................................ 2
Task 3: Inspect the vSphere Environment ............................................................................................................... 4
Task 4: Inspect the NSX-T Data Center Environment ........................................................................................ 4
Lab 2 Running a Container Image ........................................................................................... 5
Task 1: Connect to the Developer Workstation VM ............................................................................................ 6
Task 2: Pull a Container Image ..................................................................................................................................... 6
Task 3: Run a Container Image..................................................................................................................................... 6
Task 4: Stop a Running Container ............................................................................................................................... 8
Lab 3 Building a Custom Container Image ........................................................................... 9
Task 1: Connect to the Developer Workstation VM .......................................................................................... 10
Task 2: Inspect the Dockerfile .................................................................................................................................... 10
Task 3: Use the Dockerfile to Build an Image ........................................................................................................ 11
Task 4: Use the Newly Built Image to Run a Container ..................................................................................... 11
Lab 4 Enabling vSphere with Tanzu..................................................................................... 13
Task 1: Log In to the vSphere Client ........................................................................................................................ 14
Task 2: Create a Content Library .............................................................................................................................. 14
Task 3: Verify That vSphere HA and vSphere DRS Are Enabled ................................................................ 15
Task 4: Enable vSphere with Tanzu ......................................................................................................................... 15
Task 5: License the Cluster.......................................................................................................................................... 18
Lab 5 Downloading and Configuring the Kubernetes CLI ............................................ 19
Task 1: Access the vSphere with Tanzu Landing Page.................................................................................... 20

Task 2: Log In to the Developer Workstation VM ............................................................................................ 20
Task 3: Download and Install the Kubernetes CLI Package............................................................................ 21
Lab 6 Creating and Configuring a vSphere with Tanzu Namespace ....................... 23
Task 1: Log In to the vSphere Client ....................................................................................................................... 24
Task 2: Create a vSphere with Tanzu Namespace............................................................................................ 24
Task 3: Configure Permissions and Storage for the Namespace................................................................. 25
Task 4: Access the Namespace Using the kubectl CLI ................................................................................... 26
Lab 7 Deploying a Container Application That Runs in a vSphere Pod ................. 27
Task 1: Deploy a vSphere Pod .................................................................................................................................. 28
Task 2: View the Deployed vSphere Pod............................................................................................................. 29
Lab 8 Scaling Out a vSphere Pod Deployment ............................................................... 31
Task 1: Scale Out a vSphere Pod Deployment ................................................................................................... 32
Task 2: View the Scaled-Out vSphere Pod Deployment................................................................................ 33
Lab 9 Deploying a vSphere Pod with a Persistent Volume........................................ 35
Task 1: Create a Persistent Volume Claim............................................................................................................. 36
Task 2: View the Persistent Volume Claim ............................................................................................................37
Task 3: Create a vSphere Pod Deployment ........................................................................................................ 38
Task 4: Delete the Deployment and the Persistent Volume Claim ............................................................. 39
Lab 10 Creating a Kubernetes Service ................................................................................ 41
Task 1: Create a Load Balancer Kubernetes Service ........................................................................................ 42
Task 2: Obtain the Kubernetes Service External IP Address ........................................................................ 43
Lab 11 Creating a Kubernetes Network Policy ................................................................ 45
Task 1: Create a Network Policy to Deny Traffic............................................................................................... 46
Lab 12 Viewing Kubernetes Objects ................................................................................... 49
Task 1: Log In to NSX Manager ................................................................................................................................. 50
Task 2: View Segments ............................................................................................................................................... 50
Task 3: View Virtual Servers ...................................................................................................................................... 52
Task 4: View the Distributed Firewall ..................................................................................................................... 53
Task 5: View Namespaces .......................................................................................................................................... 55
Task 6: View the Network Topology ..................................................................................................................... 56
Lab 13 Enabling the Harbor Registry................................................................................... 59
Task 1: Log In to the vSphere Client ....................................................................................................................... 60
Task 2: Enable an Embedded Harbor Registry .................................................................................................... 61

Lab 14 Pushing and Deploying Harbor Images ................................................................ 63
Task 1: Install and Configure the vSphere Docker Credential Helper ......................................................... 64
Task 2: Push an Image to Harbor ............................................................................................................................. 65
Task 3: Review the Image in Harbor ....................................................................................................................... 65
Task 4: Deploy an Image from Harbor ................................................................................................................... 66
Lab 15 Configuring a Content Library ................................................................................. 69
Task 1: Log In to the vSphere Client ....................................................................................................................... 70
Task 2: Upload the Tanzu Kubernetes Cluster Template .................................................................................71
Lab 16 Deploying a Tanzu Kubernetes Cluster ............................................................... 73
Task 1: View a Tanzu Kubernetes Cluster Deployment YAML File ............................................................. 74
Task 2: Deploy a Tanzu Kubernetes Cluster ........................................................................................................ 76
Lab 17 Working with Tanzu Kubernetes Clusters .......................................................... 77
Task 1: Apply a Pod Security Policy ........................................................................................................................ 78
Task 2: Deploy a Container Application................................................................................................................. 80
Task 3: Scale Out a Tanzu Kubernetes Cluster................................................................................................... 82
Lab 18 Control Plane Certificate Management ................................................................ 85
Task 1: Log In to the vSphere Client ....................................................................................................................... 86
Task 2: Generate a Certificate Signing Request ................................................................................................. 87
Task 3: Obtain a Signed Certificate ......................................................................................................................... 88
Task 4: Install the Certificate Authority Root Certificate ................................................................................ 89
Task 5: Replace the Control Plane Management Certificate ......................................................................... 90
Lab 19 (Optional) Deploying the Yelb Application as vSphere Pods ....................... 91
Task 1: Create a Namespace ...................................................................................................................................... 92
Task 2: Deploy the Yelb Application....................................................................................................................... 92
Lab 20 (Optional) Deploying the Yelb Application to a Tanzu Kubernetes Cluster ................. 93
Task 1: Deploy the Yelb Application........................................................................................................................ 94
Task 2: Delete a Tanzu Kubernetes Cluster ......................................................................................................... 95

Lab 1 Verifying Docker on the
Developer Workstation

Objective and Tasks


Connect to the developer workstation and verify that Docker is running:

1. Connect to the Developer Workstation VM

2. Start Docker

3. Inspect the vSphere Environment

4. Inspect the NSX-T Data Center Environment

GuideMe Lab 01

Task 1: Connect to the Developer Workstation VM
You connect to the Photon OS virtual machine using SSH. You use this VM to perform CLI-based
tasks.

1. Start MTPuTTY from the taskbar shortcut on the student Windows desktop.

2. In the left pane of MTPuTTY, double-click SA-CLI-VM under Site-A Systems.

MTPuTTY saves the credentials for SA-CLI-VM:

• User name: root

• Password: VMware1!

Task 2: Start Docker


You verify that Docker is installed and running on the developer workstation.

1. Verify that the Docker engine is installed.

which docker
The command should return the path to the Docker binary /usr/bin/docker.

2. Verify that Docker is running.

systemctl status docker


The command might return an inactive status for Docker: inactive (dead).

root@sa-cli-vm [ ~ ]# systemctl status docker


● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; disabled;
vendor preset: disabled)
Active: inactive (dead)
Docs: https://docs.docker.com
root@sa-cli-vm [ ~ ]#
3. Start Docker.

systemctl start docker

4. Verify that Docker is running.
systemctl status docker
If necessary, press Ctrl+C to display the command prompt.
The command should return an active status for Docker: active (running).
root@sa-cli-vm [ ~ ]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; disabled;
vendor preset: disabled)
Active: active (running) since Fri 2020-04-10 08:24:25 UTC; 5s ago
Docs: https://docs.docker.com
Main PID: 742 (dockerd)
Tasks: 11
Memory: 114.6M
CGroup: /system.slice/docker.service
└─742 /usr/bin/dockerd -H fd:// --
containerd=/run/containerd/containerd.sock
# snipped #
Apr 10 08:24:25 sa-cli-vm systemd[1]: Started Docker Application
Container Engine.
root@sa-cli-vm [ ~ ]#
5. Verify the Docker client and server version.
docker version
This command returns the Docker client and server version.
root@sa-cli-vm [ ~ ]# docker version
Client: Docker Engine - Community
Version: 18.09.9
API version: 1.39
Go version: go1.11.13
Git commit: 039a7df
Built: Thu Nov 14 01:02:31 2019
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community


Engine:
Version: 18.09.9
API version: 1.39 (minimum version 1.12)
Go version: go1.11.13
Git commit: 039a7df
Built: Thu Nov 14 01:05:16 2019
OS/Arch: linux/amd64
Experimental: false
root@sa-cli-vm [ ~ ]#
6. Log out of SA-CLI-VM.
exit

Task 3: Inspect the vSphere Environment
You verify that no alarms or alerts are present in the vSphere environment.

1. Open the Chrome browser by clicking the shortcut on the taskbar of the student desktop.

The browser automatically redirects you to the vSphere Client URL at
https://sa-vcsa-01.vclass.local/ui.

2. If the browser displays a security warning, select Advanced > Proceed to
sa-vcsa-01.vclass.local.

3. If the browser does not automatically redirect to the vSphere Client URL, enter
https://sa-vcsa-01.vclass.local/ui in the address bar.
4. Log in to the vSphere Client using the Single Sign-On Administrator credentials.

a. Enter administrator@vSphere.local for the user name.

b. Enter VMware1! for the password.

5. In the vSphere Client, select Menu (hamburger) > Inventory > Hosts and Clusters and, if
necessary, expand the vCenter Server inventory.

6. Check for alarms or alerts.

Task 4: Inspect the NSX-T Data Center Environment


You verify that no alarms or alerts are present in the NSX-T Data Center environment.

1. On the student Windows desktop, open the Chrome browser and go to the NSX UI at
https://sa-nsxmgr-01.vclass.local.

2. Log in to NSX Manager.

a. Enter admin as the user name.

b. Enter VMware1!VMware1! as the password.

3. On the NSX Manager home page, click the Alarm Bell icon.

4. Check for alarms or alerts.

Lab 2 Running a Container Image

Objective and Tasks


Pull, run, and stop a container image:

1. Connect to the Developer Workstation VM

2. Pull a Container Image

3. Run a Container Image

4. Stop a Running Container

GuideMe Lab 02

Task 1: Connect to the Developer Workstation VM
You connect to the Photon OS virtual machine using SSH. You use this VM to perform CLI-based
tasks.

1. Start MTPuTTY from the taskbar shortcut on the student Windows desktop.

2. In the left pane of MTPuTTY, double-click SA-CLI-VM under Site-A Systems.


MTPuTTY saves the credentials for SA-CLI-VM:
• User name: root
• Password: VMware1!

Task 2: Pull a Container Image


You use Docker to pull a container image. In this lab environment, the image is pulled from a local image registry rather than from the default Docker repository (docker.io).

1. List the current available container images on the workstation.


docker images
The command returns a list containing one image. This image is used by the Linux
workstation to host a local container image registry.

2. Pull the Nginx container image from the local image registry.
docker pull 172.20.10.30/nginx:1.16
The number after the colon (:) is the version of the container image, which is called a tag. If
no tag is specified, the latest version of an image is pulled.
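
For example, a pull command without a tag (not used in this lab) implicitly requests the
latest tag from the same registry, assuming the registry publishes that tag:

docker pull 172.20.10.30/nginx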

3. List the available container images.


docker images
You can see the image that you pulled.

Task 3: Run a Container Image


You use Docker to run a container image.

1. Use Docker to run the container image that you previously pulled.
docker run -d -p 8080:80 172.20.10.30/nginx:1.16
-d runs the container in the background, releasing the command prompt, and prints the
container ID.
-p maps an external port (8080) to the internal port (80) of the container.
2. List the running containers.
docker ps
This command returns all the containers that are currently running.
The Nginx application is now running as a container in the SA-CLI-VM virtual machine.

3. On the student Windows desktop, open the Chrome browser and go to
http://sa-cli-vm.vclass.local:8080.

Port 8080 is appended to the URL because this port was specified for running the container.

The Nginx web server landing page opens in the browser, confirming access to the container
application. Docker runs the container application in SA-CLI-VM.

Task 4: Stop a Running Container
You stop a running container.

1. List the running containers.


docker ps
2. Record the Container ID value for the Nginx container. __________

3. Stop the container.

docker stop <container_id>


Replace <container_id> with the value of the container ID that you recorded.

Do not stop the registry container.

4. List the running containers to confirm that the container is no longer running.

docker ps
5. On the student Windows desktop, refresh the Chrome browser to open
http://sa-cli-vm.vclass.local:8080.

The webpage is not accessible, because the container is no longer running.

6. Close the browser tab window.

7. Log out of SA-CLI-VM.

exit

Lab 3 Building a Custom Container
Image

Objective and Tasks


Use a Dockerfile to build a custom container image:

1. Connect to the Developer Workstation VM

2. Inspect the Dockerfile

3. Use the Dockerfile to Build an Image

4. Use the Newly Built Image to Run a Container

GuideMe Lab 03

Task 1: Connect to the Developer Workstation VM
You connect to the Photon OS virtual machine using SSH. You use this VM to perform CLI-based
tasks.

1. Start MTPuTTY from the taskbar shortcut on the student Windows desktop.

2. In the left pane of MTPuTTY, double-click SA-CLI-VM under Site-A Systems.

MTPuTTY saves the credentials for SA-CLI-VM:

• User name: root

• Password: VMware1!

Task 2: Inspect the Dockerfile


You read the Dockerfile for information about how to build the new image.

1. Change to the Lab3 directory.

cd /root/Lab3
2. Read the Dockerfile.

cat Dockerfile
FROM 172.20.10.10/nginx:1.16
RUN rm /usr/share/nginx/html/index.html
COPY index.html /usr/share/nginx/html/
COPY image.svg /usr/share/nginx/html/
RUN chmod 444 /usr/share/nginx/html/index.html
RUN chmod 444 /usr/share/nginx/html/image.svg
This Dockerfile provides specifications for building a container image:

• FROM a base using the nginx:1.16 image.


• RUN the rm command to remove the /usr/share/nginx/html/index.html
file.

• COPY the index.html and image.svg files from /root/Lab3/ on SA-CLI-VM to the
/usr/share/nginx/html/ directory.

• RUN the chmod command to modify the permissions of the copied files.

Task 3: Use the Dockerfile to Build an Image
You use the Dockerfile to build a new image for a container.

1. Build a new container image from the Dockerfile.


docker build -t myimage:1.0 /root/Lab3/
The docker build command requires several parameters and has the following syntax:

docker build -t <image_name>:<tag> <Dockerfile_Path>


2. List the available images to verify that the newly compiled image is available.

docker images

Task 4: Use the Newly Built Image to Run a Container


You use Docker to run the container image.

1. Use Docker to run the container image that you created.

docker run -d -p 8080:80 myimage:1.0


-d runs the container in the background, releasing the command prompt, and prints the
container ID.

-p maps an external port (8080) to the internal port (80) of the container.
2. List the running containers.

docker ps
This command returns all the containers that are running.

The custom Nginx application is running as a container in SA-CLI-VM.

3. On the student Windows desktop, open the Chrome browser and go to
http://sa-cli-vm.vclass.local:8080.

Port 8080 is appended to the URL because this port was specified for running the container.

The Nginx web server landing page opens in the browser, confirming access to the container
application.

The webpage looks different from the one that you viewed previously because you are
running an image with a custom index.html file.

4. List the running containers.

docker ps
This command returns all the running containers.

5. Record the Container ID value for the myimage container. __________

6. Stop the container.


docker stop <container_id>
Replace <container_id> with the value of the container ID that you recorded.

Do not stop the registry container.

7. List the running containers to confirm that the container is no longer running.

docker ps
8. Log out of SA-CLI-VM.

exit

Lab 4 Enabling vSphere with Tanzu

Objective and Tasks


Use the vSphere Client to enable vSphere with Tanzu:

1. Log In to the vSphere Client

2. Create a Content Library

3. Verify That vSphere HA and vSphere DRS Are Enabled

4. Enable vSphere with Tanzu

5. License the Cluster

GuideMe Lab 04

Task 1: Log In to the vSphere Client
You log in to the vSphere environment using the vSphere Client.

1. Open the Chrome browser by clicking the shortcut on the taskbar of the student desktop.

The browser automatically redirects you to the vSphere Client URL at
https://sa-vcsa-01.vclass.local/ui.

2. If the browser displays a security warning, select Advanced > Proceed to
sa-vcsa-01.vclass.local.

3. If the browser does not automatically redirect to the vSphere Client URL, enter
https://sa-vcsa-01.vclass.local/ui in the address bar.
4. Log in to the vSphere Client using the Single Sign-On Administrator credentials.

a. Enter administrator@vSphere.local for the user name.

b. Enter VMware1! for the password.

5. Expand the vCenter Server inventory, if needed.

Task 2: Create a Content Library


You create a content library to host the Tanzu Kubernetes Grid templates.

1. In the vSphere Client, select Menu (hamburger) > Content Libraries.

2. Click CREATE.

3. In the New Content Library wizard, enter Kubernetes as the content library name and
click Next.

4. Select Local content library and click Next.

5. Click Next.

6. Select SA-CL-01 as the datastore for the content library and click Next.

7. Click Finish.

NOTE

The content library is used in a later lab, but you must create the content library before you
can enable the cluster for vSphere with Tanzu.

Task 3: Verify That vSphere HA and vSphere DRS Are Enabled
Because vSphere HA and vSphere DRS must be enabled on the ESXi cluster to support vSphere
with Tanzu, you verify that these features are enabled.

1. In the vSphere Client, select Menu (hamburger) > Inventory > Hosts and Clusters.

You can expand the inventory if required.

2. Select SA-Compute-01 > Configure > vSphere DRS.

3. Confirm that vSphere DRS is turned on.

4. If vSphere DRS is not turned on, enable it.

5. Ensure that DRS automation is configured as fully automated.

6. Select SA-Compute-01 > Configure > vSphere Availability.


7. Confirm that vSphere HA is turned on.

8. If vSphere HA is turned off, enable it with the default settings.

9. Before continuing, wait until vSphere HA is enabled and all tasks are completed.

Task 4: Enable vSphere with Tanzu


You use the vSphere Client to enable vSphere with Tanzu on the SA-Compute-01 cluster.

1. In the vSphere Client, select Menu (hamburger) > Workload Management.


Workload Management is a new section of the vSphere Client for deploying and managing
vSphere with Tanzu.

2. Click Get Started.


The Enable Workload Management wizard opens.

3. On the vCenter Server and Network section, select NSX and click Next.

4. On the Select a Cluster section, select the ESXi cluster to support vSphere with Tanzu.

5. Select SA-Compute-01 from the list of compatible clusters and click Next.
If the SA-Compute-01 cluster is not visible, it might mean that vSphere HA or vSphere DRS
is not enabled on the cluster.

6. On the Storage section, assign the K8S Storage Policy to the control plane nodes, the
ephemeral disks, and the image cache.
a. From the Control Plane Storage Policy drop-down menu, select K8S Storage Policy.
b. From the Ephemeral Disks Storage Policy drop-down menu, select K8S Storage
Policy.
c. From the Image Cache Storage Policy drop-down menu, select K8S Storage Policy.
d. Click Next.

7. In the Management Network section, define values for management networking and click
Next.

Parameter Action

Network Mode Select Static.

Network Select Mgmt-DPortGroup.

Starting IP Address Enter 172.20.10.160

Subnet Mask Enter 255.255.255.0

Gateway Enter 172.20.10.10

DNS Server(s) Enter 172.20.10.10

DNS Search Domain(s) Enter vclass.local

NTP Server Enter ntp.vclass.local

8. In the Workload Network section, define values for workload networking and click Next.

Parameter Action

vSphere Distributed Switch Select DSwitch.

Edge Cluster Select nsx-basic-ec.

DNS Server Enter 172.20.10.10

Tier-0 Gateway Select nsx-basic-t0.

NAT Mode Click Enabled.

Subnet Prefix Enter /24

Namespace Network Enter 10.244.0.0/21

Service CIDRs Enter 10.96.0.0/24

Ingress CIDRs Enter 192.168.30.32/27

Egress CIDRs Enter 192.168.30.64/27

9. In the TKG Configuration section, click Add.

10. Select the Kubernetes content library, click OK, and click Next.

11. On the Review and Confirm page, configure the Advanced Settings.

a. From the Image Control Plane Size drop-down menu, select Tiny.

b. In API Server DNS Name(s), enter vspherek8s.vclass.local

c. Click Finish.

The entire process can take up to 45 minutes to complete. You might need to refresh the
vSphere Client.

12. Click the Supervisor Clusters tab.

The process is complete when the cluster reports a Config Status of Running and a Control
Plane Node IP Address of 192.168.30.34. In some cases, the IP address might differ. As long
as it begins with 192.168.30.x, you can continue.

NOTE

The IP address temporarily appears as 172.20.10.160.

When the process is finished, the SA-Compute-01 cluster is enabled for vSphere with Tanzu.
Three control plane VMs are deployed to the cluster, and Spherelet is installed on each of
the ESXi hosts.

13. After the enablement process completes and the Config Status reports Running, record the
control plane node IP address. __________

NOTE

Alarms might appear on the control plane VMs. These alarms cannot be cleared directly from
the virtual machine object.

Select vCenter > Monitor > Triggered Alarms. From here, you can select, acknowledge, and
reset the alarms for the control plane VMs.

Task 5: License the Cluster


You use the vSphere Client to apply a vSphere with Tanzu license to the enabled cluster.

1. In the vSphere Client, select Menu (hamburger) > Administration > Licenses.

2. On the Licenses tab, find the license called TZ-BS-TLSS-C.

This license appears as unassigned in the State column.

3. Click the Assets tab.

4. Click the Supervisor Clusters tab.

5. Select the check box beside the asset called SA-Compute-01 and click Assign License.

6. Select the TZ-BS-TLSS-C license from the list of existing licenses and click OK.

The SA-Compute-01 cluster is now licensed for vSphere with Tanzu.

Lab 5 Downloading and Configuring
the Kubernetes CLI

Objective and Tasks


Download and configure the Kubernetes CLI:

1. Access the vSphere with Tanzu Landing Page

2. Log In to the Developer Workstation VM

3. Download and Install the Kubernetes CLI Package

IMPORTANT

vSphere with Tanzu must be enabled before you can perform the tasks in this lab.

GuideMe Lab 05

19
Task 1: Access the vSphere with Tanzu Landing Page
After vSphere with Tanzu is enabled, you access a landing page for information about
downloading vSphere with Tanzu CLI tools.

1. Open a Chrome browser tab and go to https://<cluster_ip>.

• For <cluster_ip>, enter the control plane node IP address that you recorded
previously.

• Select Menu (hamburger) > Workload Management > Supervisor Clusters in the
vSphere Client to find the IP address again.

• If the browser displays a security warning, click Advanced > Proceed to <cluster_ip>.

The vSphere with Tanzu landing page opens.

2. Review the vSphere with Tanzu landing page.

The page provides useful information and steps for downloading the Kubernetes CLI
package for your operating system.

3. Close the browser tab.

Task 2: Log In to the Developer Workstation VM


You connect to the Photon OS virtual machine so that you can perform CLI-based tasks.

1. Start MTPuTTY from the taskbar shortcut on the student Windows desktop.

2. In the left pane of MTPuTTY, double-click SA-CLI-VM under Site-A Systems.

MTPuTTY saves the credentials for SA-CLI-VM:

• User name: root

• Password: VMware1!

Task 3: Download and Install the Kubernetes CLI Package
You download and install the vSphere with Tanzu CLI packages to SA-CLI-VM.

1. If you are not in the /root directory, run the cd /root command to change to the
/root directory.
2. Run the wget command to download the vsphere-plugin.zip package for Linux
operating systems.

wget https://<cluster_ip>/wcp/plugin/linux-amd64/vsphere-plugin.zip
The <cluster_ip> parameter is the control plane node IP address that you recorded
previously. You can also find this address by selecting Menu (hamburger) > Workload
Management > Supervisor Clusters in the vSphere Client.

3. Run the unzip command to extract the downloaded ZIP package.

unzip vsphere-plugin.zip
4. Configure the environment variable PATH to include the extracted bin directory and set up
Tab autocompletion.

a. Enter echo 'export PATH=/root/bin:$PATH' >> ~/.bash_profile

b. Enter echo 'source <(kubectl completion bash)' >> ~/.bash_profile
5. To read the ~/.bash_profile file, run the cat command.

cat ~/.bash_profile
The output of the file is similar to the example.

root@sa-cli-vm [ ~ ]# cat ~/.bash_profile


export PATH=/root/bin:$PATH
source <(kubectl completion bash)
root@sa-cli-vm [ ~ ]#
6. Log out of SA-CLI-VM.

exit
7. Under Site-A Systems in the left pane of MTPuTTY, double-click SA-CLI-VM to log back in.

8. Verify Tab autocompletion of kubectl commands.


a. Enter kubectl and press the spacebar.

b. Press Tab twice.

9. Close MTPuTTY.

Lab 6 Creating and Configuring a
vSphere with Tanzu Namespace

Objective and Tasks


Use the vSphere Client to create and configure a vSphere with Tanzu namespace:

1. Log In to the vSphere Client

2. Create a vSphere with Tanzu Namespace

3. Configure Permissions and Storage for the Namespace

4. Access the Namespace Using the kubectl CLI

GuideMe Lab 06

Task 1: Log In to the vSphere Client
You log in to the vSphere environment using the vSphere Client.

1. Open the Chrome browser by clicking the shortcut on the taskbar of the student desktop.

The browser automatically redirects you to the vSphere Client URL at
https://sa-vcsa-01.vclass.local/ui.

2. If the browser displays a security warning, select Advanced > Proceed to
sa-vcsa-01.vclass.local.

3. If the browser does not automatically redirect to the vSphere Client URL, enter
https://sa-vcsa-01.vclass.local/ui in the address bar.
4. Log in to the vSphere Client using the Single Sign-On Administrator credentials.

a. Enter administrator@vSphere.local for the user name.

b. Enter VMware1! for the password.

5. Expand the vCenter Server inventory, if needed.

Task 2: Create a vSphere with Tanzu Namespace


You use the vSphere Client to create a vSphere with Tanzu namespace for assigning resources
and permissions.

1. In the vSphere Client, select Menu (hamburger) > Workload Management > Namespaces.

2. Click Create Namespace.

3. Expand the inventory tree and select the SA-Compute-01 cluster.

4. Enter namespace-01 as your namespace name.

The name must be in a DNS-compliant format (a-z, 0-9, -).

5. Click Create.

The namespace is created and shows a Config Status of Running and a Kubernetes Status of
Active.

6. Select the Don't show for future workloads check box.

7. Click Got It.

Task 3: Configure Permissions and Storage for the Namespace
Using the vSphere Client, you assign permissions and a VM storage policy to the vSphere with
Tanzu namespace so that a user can authenticate to the namespace.

1. On the Summary tab for namespace-01, click Add Permissions.

2. In the Add Permissions window, assign edit permissions.

a. Select vsphere.local as the identity source.

b. Search for the user devops01 and select the Can edit role.

c. Click OK.

3. On the Summary tab for namespace-01, click Add Storage.

4. In the Select Storage Policies window, select K8S Storage Policy and click OK.
Your namespace is configured with a storage policy and user permissions.

The assigned storage policy is translated into a Kubernetes storage class.

5. In the VM Service window, add a VM class.

a. Click ADD VM CLASS.

b. Select best-effort-xsmall in the VM Class Name column.

c. Click OK.

The associated VM classes determine the resource sizing of virtual machines that can be
deployed in the namespace. The VM class is used in a later lab when you deploy a Tanzu
Kubernetes cluster.

Task 4: Access the Namespace Using the kubectl CLI
You log in to the vSphere with Tanzu namespace using the kubectl CLI to view the newly
created namespace.

1. Start MTPuTTY from the taskbar shortcut on the student Windows desktop.

2. In the left pane of MTPuTTY, double-click SA-CLI-VM under Site-A Systems.

MTPuTTY saves the credentials for SA-CLI-VM:

• User name: root

• Password: VMware1!

3. From SA-CLI-VM, use the kubectl CLI to connect to the vSphere with Tanzu control plane as
the devops01 user.

kubectl vsphere login --server <cluster_ip> -u devops01@vsphere.local

The <cluster_ip> parameter is the control plane node IP address. If needed, you can
access this address by selecting Menu (hamburger) > Workload Management > Supervisor
Clusters in the vSphere Client.

The credentials for the devops01 user are as follows:

• User name: devops01@vsphere.local

• Password: VMware1!

4. After logging in to the vSphere with Tanzu control plane, list the available namespaces.

kubectl config get-contexts


The asterisk (*) denotes the current namespace.
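
The output is similar to the following example; the cluster IP address and context details
depend on your environment:

CURRENT   NAME            CLUSTER         AUTHINFO                                   NAMESPACE
          192.168.30.34   192.168.30.34   wcp:192.168.30.34:devops01@vsphere.local
*         namespace-01    192.168.30.34   wcp:192.168.30.34:devops01@vsphere.local   namespace-01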

5. If namespace-01 is not marked as the current namespace, run the kubectl config
use-context namespace-01 command.
6. Inspect the namespace.

kubectl describe ns namespace-01


The storage class available to this namespace is shown. The storage class corresponds to
the VM storage policy that you applied in the vSphere Client.

7. Log out of the vSphere with Tanzu control plane.

kubectl vsphere logout

Lab 7 Deploying a Container
Application That Runs in a vSphere
Pod

Objective and Tasks


Use the kubectl CLI to deploy a container application running in a vSphere Pod:

1. Deploy a vSphere Pod

2. View the Deployed vSphere Pod

GuideMe Lab 07

Task 1: Deploy a vSphere Pod
You use the kubectl CLI to deploy a vSphere Pod that runs a container application.

1. From SA-CLI-VM, use the kubectl CLI to connect to the vSphere with Tanzu control plane as
the devops01 user.
kubectl vsphere login --server <cluster_ip> -u
devops01@vsphere.local
The <cluster_ip> parameter is the control plane node IP address. If needed, you can
access this address by selecting Menu (hamburger) > Workload Management > Supervisor
Clusters in the vSphere Client.
The credentials for the devops01 user are as follows:
• User name: devops01@vsphere.local
• Password: VMware1!

2. Verify that you are in the namespace-01 context.


kubectl config use-context namespace-01
3. To read the /root/Lab7/nginx-deployment.yaml file, run the cat command.
cat /root/Lab7/nginx-deployment.yaml
The command returns a simple deployment YAML file. The file declares a Kubernetes
deployment, which consists of an Nginx application container with one replica.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 172.20.10.30/nginx:1.16
        ports:
        - containerPort: 80

4. Create the Kubernetes deployment.

kubectl apply -f /root/Lab7/nginx-deployment.yaml


5. List the available Kubernetes deployments.

kubectl get deployment


If the READY column does not display 1/1, wait a few seconds and try the command again.
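
When the deployment is ready, the output is similar to the following example (the AGE value
is illustrative):

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   1/1     1            1           30s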

6. List the pods in the deployment.


kubectl get pods
7. Display more details about the deployment.

kubectl describe deployment nginx-deployment

Task 2: View the Deployed vSphere Pod


You use the vSphere Client to view the deployed vSphere Pod.

1. In the vSphere Client, select Menu (hamburger) > Inventory > Hosts and Clusters.

2. Expand the vCenter Server inventory, if needed.

The vSphere Pod is located in the namespace-01 inventory object.

3. In the vCenter Server inventory, select the namespace-01 object and click Monitor.

4. Under Events, select Kubernetes.

Kubernetes-related events about the deployment task appear.

5. In the vCenter Server inventory, select the nginx-deployment object.

This object is the Kubernetes pod that you deployed using the kubectl CLI. vSphere Pods
are first-class citizens in vSphere.

Lab 8 Scaling Out a vSphere Pod
Deployment

Objective and Tasks


Scale out a vSphere Pod deployment using the kubectl CLI:

1. Scale Out a vSphere Pod Deployment

2. View the Scaled-Out vSphere Pod Deployment

GuideMe Lab 08

Task 1: Scale Out a vSphere Pod Deployment
You use the kubectl CLI to scale out the number of replicas in a vSphere Pod deployment.

1. From SA-CLI-VM, use the kubectl CLI to connect to the vSphere with Tanzu control plane as
the devops01 user.
kubectl vsphere login --server <cluster_ip> -u
devops01@vsphere.local
The <cluster_ip> parameter is the control plane node IP address that you recorded
previously. You can also find this address by selecting Menu (hamburger) > Workload
Management > Supervisor Clusters in the vSphere Client.
The credentials for the devops01 user are as follows:
• User name: devops01@vsphere.local
• Password: VMware1!

2. Verify that you are in the namespace-01 context.


kubectl config use-context namespace-01
3. To read the /root/Lab8/scale-nginx-deployment.yaml file, run the cat
command.
cat /root/Lab8/scale-nginx-deployment.yaml
The command returns the YAML file. In this YAML file, the number of replicas is set to three.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 172.20.10.30/nginx:1.16
        ports:
        - containerPort: 80
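
As a side note, the same scale-out can be performed imperatively, without editing the YAML
file. This command is not used in this lab:

kubectl scale deployment nginx-deployment --replicas=3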

4. Apply the updated YAML.

kubectl apply -f /root/Lab8/scale-nginx-deployment.yaml


5. List the available Kubernetes deployments.

kubectl get deployment


The command should return a value of 3/3 for the Ready, Up-To-Date, and Available
columns.

6. List the pods in the deployment.

kubectl get pods


Three pods are listed.
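
The output is similar to the following example; the pod name suffixes and AGE values are
illustrative:

NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5c689d88bb-4x7tq   1/1     Running   0          3m
nginx-deployment-5c689d88bb-9kzpl   1/1     Running   0          20s
nginx-deployment-5c689d88bb-xv2mw   1/1     Running   0          20s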

Task 2: View the Scaled-Out vSphere Pod Deployment


You use the vSphere Client to review the scaled-out vSphere Pod deployment.

1. In the vSphere Client, select Menu (hamburger) > Inventory > Hosts and Clusters.

2. Expand the vCenter Server inventory, if needed.

3. In the vCenter Server inventory, select the namespace-01 object and click Monitor.

4. Under Events, select Kubernetes.

Kubernetes-related events about the scale-out task appear.

5. In the vCenter Server inventory, verify that three nginx-deployment vSphere Pods are
running.

6. If the additional vSphere Pods are not visible, refresh the vSphere Client.

Lab 9 Deploying a vSphere Pod with a
Persistent Volume

Objective and Tasks


Deploy a vSphere Pod with a persistent volume:

1. Create a Persistent Volume Claim

2. View the Persistent Volume Claim

3. Create a vSphere Pod Deployment

4. Delete the Deployment and the Persistent Volume Claim

GuideMe Lab 09

Task 1: Create a Persistent Volume Claim
You create a persistent volume claim to be used with a vSphere Pod deployment.

1. From SA-CLI-VM, use the kubectl CLI to connect to the vSphere with Tanzu control plane as
the devops01 user.
kubectl vsphere login --server <cluster_ip> -u
devops01@vsphere.local
The <cluster_ip> parameter is the control plane node IP address that you can find by
selecting Menu (hamburger) > Workload Management > Supervisor Clusters in the
vSphere Client.
The credentials for the devops01 user are as follows:
• User name: devops01@vsphere.local
• Password: VMware1!

2. Verify that you are in the namespace-01 context.


kubectl config use-context namespace-01
3. To read the persistent volume claim YAML file /root/Lab9/pvc.yaml, run the cat
command.
cat /root/Lab9/pvc.yaml
This persistent volume claim requests a volume of 3Gi and uses the storage class called
k8s-storage-policy, which corresponds to the K8S Storage Policy VM storage policy.
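
The file is not reproduced in full here. A minimal sketch of an equivalent claim follows; the
access mode is an assumption for illustration, and the actual file in /root/Lab9 might differ:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce            # assumed access mode
  resources:
    requests:
      storage: 3Gi
  storageClassName: k8s-storage-policy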

4. Create the persistent volume claim by applying the YAML file.

kubectl apply -f /root/Lab9/pvc.yaml


5. List the available persistent volume claims.

kubectl get pvc


If you observe the value <Invalid> in the Age column, wait a few seconds and run the
command again.
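
When the claim is bound, the output is similar to the following example; the VOLUME and AGE
values are placeholders:

NAME     STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
my-pvc   Bound    pvc-<id>   3Gi        RWO            k8s-storage-policy   15s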

Task 2: View the Persistent Volume Claim


You use the vSphere Client to inspect container volumes for persistent claims.

1. In the vSphere Client, select Menu (hamburger) > Inventory > Hosts and Clusters.

2. Expand the vCenter Server inventory, if needed.

3. In the vCenter Server inventory, select SA-Compute-01 and select Monitor > Cloud Native
Storage > Container Volumes.

You might need to scroll down to find Container Volumes.

You can view the persistent volume claims for the cluster.

4. To the left of the volume name, click the Details icon to view more information about the
persistent volume claim.

The persistent volume claim is created but not associated with any vSphere Pods.

Task 3: Create a vSphere Pod Deployment
You create a vSphere Pod deployment to consume the persistent volume claim.

1. From SA-CLI-VM, run the cat command to read the pod deployment YAML file
/root/Lab9/pod-with-pv.yaml.
cat /root/Lab9/pod-with-pv.yaml
The YAML file deploys an instance of Nginx. The file includes a volumes section that
references the persistent volume claim:

volumes:
- name: my-volume
  persistentVolumeClaim:
    claimName: my-pvc
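
The rest of the file is not reproduced here. A minimal sketch of what such a deployment might
look like follows; the label, image, and mount path are assumptions for illustration, and the
actual file in /root/Lab9 might differ:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-with-pv-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod-with-pv                     # assumed label
  template:
    metadata:
      labels:
        app: pod-with-pv                   # assumed label
    spec:
      containers:
      - name: nginx
        image: 172.20.10.30/nginx:1.16     # assumed image
        volumeMounts:
        - name: my-volume
          mountPath: /usr/share/nginx/html # assumed mount path
      volumes:
      - name: my-volume
        persistentVolumeClaim:
          claimName: my-pvc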
2. Create the Kubernetes deployment with a persistent volume by applying the deployment
YAML file.

kubectl apply -f /root/Lab9/pod-with-pv.yaml


3. Monitor the pod creation and wait for the Ready column to report 1/1 and the Status column
to report Running.

kubectl get pods


4. In the vSphere Client, select SA-Compute-01 and select Monitor > Cloud Native Storage >
Container Volumes.

You might need to refresh the page.

5. Click the Details icon next to the volume name to view more information about the
persistent volume claim.

The persistent volume claim is now associated with the new pod deployment.

Task 4: Delete the Deployment and the Persistent Volume Claim
You delete the pod deployment and the persistent volume claim.

1. From SA-CLI-VM, delete the deployment called pod-with-pv-deployment.


kubectl delete deployment pod-with-pv-deployment

NOTE

Deleting a deployment does not delete the persistent volumes associated with the
deployment. Persistent volume claims must be deleted as a separate task.

2. Delete the persistent volume claim.

kubectl delete pvc my-pvc


3. In the vSphere Client, select SA-Compute-01 and select Monitor > Cloud Native Storage >
Container Volumes.

You might need to refresh the vSphere Client.

The persistent volume claim is deleted.

Lab 10 Creating a Kubernetes Service

Objective and Tasks


Create a load balancer Kubernetes service:

1. Create a Load Balancer Kubernetes Service

2. Obtain the Kubernetes Service External IP Address

GuideMe Lab 10

Task 1: Create a Load Balancer Kubernetes Service
You create a load balancer Kubernetes service to provide ingress connectivity to a vSphere Pod
deployment.

1. From SA-CLI-VM, use the kubectl CLI to connect to the vSphere with Tanzu control plane as
the devops01 user.

kubectl vsphere login --server <cluster_ip> -u devops01@vsphere.local

<cluster_ip> is the control plane node IP address that you recorded previously. If
needed, you can find this address by selecting Menu (hamburger) > Workload Management
> Supervisor Clusters in the vSphere Client.

The credentials for the devops01 user are as follows:

• User name: devops01@vsphere.local

• Password: VMware1!

2. Verify that you are in the namespace-01 context.

kubectl config use-context namespace-01


3. To read the /root/Lab10/lb-service.yaml file, run the cat command.

cat /root/Lab10/lb-service.yaml
The command returns a simple service YAML file, which declares a service of type
LoadBalancer that applies to pods with the label app: nginx.

apiVersion: v1
kind: Service
metadata:
  name: lb-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
4. Create the load balancer service.
kubectl apply -f /root/Lab10/lb-service.yaml
A new NSX virtual service is created and issued with an IP address from the ingress CIDR
that was defined during the vSphere with Tanzu enablement process.

Task 2: Obtain the Kubernetes Service External IP Address
You obtain the Kubernetes service external IP address to access the container application.

1. From SA-CLI-VM, list the Kubernetes services.

kubectl get services


All services in the namespace are listed.
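
The output is similar to the following example. The cluster IP, node port, and AGE values are
illustrative; the external IP address is allocated from the ingress CIDR (192.168.30.32/27)
that was defined when vSphere with Tanzu was enabled:

NAME         TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
lb-service   LoadBalancer   10.96.0.67   192.168.30.35   80:30080/TCP   40s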

2. Record the external IP address value for your load balancer service (lb-service).
__________

3. On the student Windows desktop, open the Chrome browser and go to


http://<external_ip>.

The <external_ip> is the external IP address value of the lb-service, as recorded in the
previous step.

The Nginx web server landing page opens in the browser, confirming access to the container
application running as a vSphere Pod.

4. Close the browser tab to the Nginx application.

Lab 11 Creating a Kubernetes Network
Policy

Objective and Tasks


Create and view a Kubernetes network policy:

1. Create a Network Policy to Deny Traffic

GuideMe Lab 11

Task 1: Create a Network Policy to Deny Traffic
You create a Kubernetes network policy to deny traffic to a vSphere Pod.

1. From SA-CLI-VM, use the kubectl CLI to connect to the vSphere with Tanzu control plane as
the devops01 user.

kubectl vsphere login --server <cluster_ip> -u devops01@vsphere.local

The <cluster_ip> value is the control plane node IP address that you recorded
previously. You can find the address in the vSphere Client by selecting Menu (hamburger) >
Workload Management > Supervisor Clusters.

The credentials for the devops01 user are as follows:

• User name: devops01@vsphere.local

• Password: VMware1!

2. Verify that you are in the namespace-01 context.

kubectl config use-context namespace-01


3. To read the /root/Lab11/network-policy.yaml file, run the cat command.

cat /root/Lab11/network-policy.yaml
This simple network policy YAML file declares that all ingress and egress traffic is denied for
pods with the label app: nginx.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-nginx
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  - Egress
4. Create the Kubernetes network policy.

kubectl apply -f /root/Lab11/network-policy.yaml


5. List all network policies in the current namespace.

kubectl get networkpolicy


The new Kubernetes network policy results in the creation of an NSX distributed firewall rule.
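
The output is similar to the following example (the AGE value is illustrative):

NAME             POD-SELECTOR   AGE
deny-all-nginx   app=nginx      10s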

6. On the student Windows desktop, open the Chrome browser and go to
http://<external_ip>.

The <external_ip> parameter is the value of the load balancer service (lb-service) that
you recorded previously.

If needed, you can obtain the <external_ip> by running kubectl get services
from SA-CLI-VM.

The browser can no longer access the container application.

NOTE

The browser might have a cache of the webpage. Refresh the browser if you continue to see
the Nginx webpage.

7. In the vSphere Client, select namespace-01 and select Network > Network Policies.

You can view the Kubernetes network policies that are applied in the namespace.

Lab 12 Viewing Kubernetes Objects

Objective and Tasks


View Kubernetes objects in NSX Manager:

1. Log In to NSX Manager

2. View Segments

3. View Virtual Servers

4. View the Distributed Firewall

5. View Namespaces

6. View the Network Topology

GuideMe Lab 12

Task 1: Log In to NSX Manager
You log in to the NSX Manager user interface.

1. On the student Windows desktop, open the Chrome browser and go to the NSX UI at
https://sa-nsxmgr-01.vclass.local.

2. Log in to the NSX Manager.

a. Enter admin as the user name.

b. Enter VMware1!VMware1! as the password.

IMPORTANT

Values and variables in your lab are different from the screenshots. Screenshots are
examples only.

Task 2: View Segments


You view the segment that is created when the vSphere with Tanzu namespace is configured.

1. From the NSX Manager home page, select Networking > Segments.

A list of all segments appears.

2. Enter namespace-01 in the filter text field.

You can see only the segment that is created when the namespace-01 object is created in
the vSphere Client.

3. Click the number in the Ports column for the namespace-01 segment.

A list of the vSphere Pods connected to the segment appears.

4. Click Close.

Task 3: View Virtual Servers
You view the virtual server that is created when a Kubernetes load balancer service is created.

1. From the NSX Manager home page, select Networking > Load Balancing.

Three load balancers are listed:

• Distributed Load Balancer: Used by the control plane VM system pods and the cluster IP
address of pods

• Server Load Balancer: Used for the control plane API access

• Server Load Balancer: Used to service the Kubernetes load balancer services for
external access (ingress) for a specific namespace

NOTE

The screenshot was captured when using vSphere 7 Update 1c. Previous versions might
display a different configuration.

2. Click the Virtual Servers tab to view a list of all virtual servers.

3. In the filter text field, enter lb-service, which is the name for the Kubernetes load
balancer service.

Two results are listed. The virtual server with the word "domain" at the start of its name is
assigned an IP address from the 192.168.30.32/27 CIDR, which is defined as the ingress
CIDR.

Task 4: View the Distributed Firewall
You view the distributed firewall rules that are created when a Kubernetes network policy is
created.

1. From the NSX Manager home page, select Security > Distributed Firewall.

2. Click the All Rules tab.

A list of all distributed firewall rules appears.

3. Expand the two rules with namespace-01-deny-all at the beginning of their names.

These rules are created by the network policy. As defined by the Kubernetes network
policy, the firewall rule is configured to drop all ingress and egress traffic.

4. In the Applied To column, click the namespace-01-deny-all-nginx-tgt label.

5. In the View Members window, click Segment Ports.

The segment ports corresponding to the vSphere Pods are listed.

6. Click Close.

Task 5: View Namespaces
You view the container workloads that are visible to NSX Manager.

1. From the NSX Manager home page, select Inventory > Containers > Namespaces.

All namespaces, including system-managed namespaces, are listed.

2. In the filter text field, enter namespace-01 to display only this namespace.

3. Click the numbers in the Pods, Services, and Networking columns to view more details about
each item.

NOTE

The IP address is the namespace's egress IP address.

4. Click the Clusters tab.

You get a global view of all container-capable clusters that are visible to NSX Manager.

Task 6: View the Network Topology
You access the network topology view of the environment.

1. From the NSX Manager home page, select Networking > Network Topology.
A collapsed view of the entire network topology appears.

2. Click the Segments icon to expand the segments.


You can use the zoom-in and zoom-out icons at the bottom right of the topology view, if
needed.

3. Click and drag the topology view to find the VMs icon and click the icon.
The vSphere with Tanzu control plane VMs appear.

4. Click and drag the topology view to find the Pods icon and click the icon.

The vSphere Pods appear.

NOTE

vSphere Pods appear as both VM and pod objects in the topology view.

5. Close the browser tab to the NSX Manager user interface.

Lab 13 Enabling the Harbor Registry

Objective and Tasks


Enable an embedded Harbor registry:

1. Log In to the vSphere Client

2. Enable an Embedded Harbor Registry

GuideMe Lab 13

Task 1: Log In to the vSphere Client
You log in to the vSphere environment using the vSphere Client.

1. Open Chrome using the shortcut on the taskbar of the student desktop.

The browser automatically redirects to the vSphere Client at https://sa-vcsa-01.vclass.local/ui.

2. If the browser does not automatically redirect to the vSphere Client URL, enter
https://sa-vcsa-01.vclass.local/ui in the address bar.
3. Log in to the vSphere Client using the Single Sign-On Administrator credentials.

a. Enter administrator@vsphere.local as the user name.

b. Enter VMware1! as the password.

4. Expand the vCenter Server inventory, if needed.

Task 2: Enable an Embedded Harbor Registry
You use the vSphere Client to enable the embedded Harbor registry.

1. In the vSphere Client, click SA-Compute-01 and select Configure > Supervisor Cluster >
Image Registry.

2. Click Enable Harbor.

3. In the Select Storage Policies window, select K8S Storage Policy and click OK.

NOTE

Enabling Harbor can take up to 20 minutes to complete.

A system-managed namespace (vmware-system-registry-xxxx) is created, and Harbor vSphere Pods are also created.

4. Record the IP address in Link to Harbor UI. __________
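If you want to confirm from the command line that the registry namespace and Harbor pods described in this task were created, the following optional check is one way to do it. It assumes that you are logged in to the Supervisor Cluster with kubectl (as in earlier labs) using an account that can view system namespaces, and the <registry_id> suffix is a placeholder that differs in each environment.

# List the system-managed registry namespace and its Harbor pods
kubectl get namespaces | grep vmware-system-registry
kubectl get pods -n vmware-system-registry-<registry_id>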

Lab 14 Pushing and Deploying Harbor
Images

Objective and Tasks


Push, review, and deploy Harbor images:

1. Install and Configure the vSphere Docker Credential Helper

2. Push an Image to Harbor

3. Review the Image in Harbor

4. Deploy an Image from Harbor

GuideMe Lab 14

Task 1: Install and Configure the vSphere Docker Credential Helper
You install and configure the vSphere Docker credential helper so that you can authenticate to
Harbor.

1. Using MTPuTTY, log in to SA-CLI-VM.

2. If you are not in the /root directory, run the cd /root command.

3. To download the vsphere-docker-credential-helper.zip package for Linux operating systems, run the wget command.

wget https://<cluster_ip>/wcp/helper/linux-amd64/vsphere-docker-credential-helper.zip
The <cluster_ip> value is the control plane node IP address that you recorded
previously. If needed, you can find the address by selecting Menu (hamburger) > Workload
Management > Supervisor Clusters in the vSphere Client.

4. Run the unzip command to extract the downloaded ZIP package.

unzip vsphere-docker-credential-helper.zip
5. Create a subdirectory in /etc/docker/certs.d/ that corresponds to the IP address
of the Harbor instance.

mkdir -p /etc/docker/certs.d/<harbor_ip>
The <harbor_ip> value is the IP address that you recorded in the previous lab when you
enabled the embedded Harbor registry. You can also access this address in the vSphere
Client by selecting the SA-Compute-01 cluster and selecting Configure > Namespaces >
Image Registry.

6. Run the cp command to copy the ca.crt certificate to the /etc/docker/certs.d/<harbor_ip> directory that you created.

cp /root/Lab14/ca.crt /etc/docker/certs.d/<harbor_ip>/ca.crt
A copy of ca.crt, the vCenter Server VMware CA root certificate, is prepopulated in the
/root/Lab14/ directory on SA-CLI-VM.
7. Restart the Docker process.

systemctl restart docker


8. Log in to Harbor using the devops01@vsphere.local credentials.

docker-credential-vsphere login <harbor_ip>


The credentials for the devops01 user are as follows:
• User name: devops01@vsphere.local
• Password: VMware1!
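The following sketch shows the same sequence, steps 3 through 8, with example addresses filled in: 192.168.30.33 as the control plane node IP address and 192.168.30.35 as the Harbor IP address. These addresses are examples only; substitute the values that you recorded in your lab.

# Download and extract the credential helper from the control plane
wget https://192.168.30.33/wcp/helper/linux-amd64/vsphere-docker-credential-helper.zip
unzip vsphere-docker-credential-helper.zip
# Trust the Harbor certificate and restart Docker
mkdir -p /etc/docker/certs.d/192.168.30.35
cp /root/Lab14/ca.crt /etc/docker/certs.d/192.168.30.35/ca.crt
systemctl restart docker
# Authenticate to Harbor as devops01@vsphere.local
docker-credential-vsphere login 192.168.30.35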

Task 2: Push an Image to Harbor
You tag an image and push the image to the Harbor repository.

1. From SA-CLI-VM, run the docker tag command to tag the custom image that you
created in a previous lab.

docker tag myimage:1.0 <harbor_ip>/namespace-01/myimage:1.0


The <harbor_ip> value is the IP address that you recorded when you enabled the
embedded Harbor registry. You can also find the address in the vSphere Client by selecting
SA-Compute-01 and selecting Configure > Namespaces > Image Registry.

2. List the images and confirm the tag.

docker images
3. Upload the image to the Harbor repository.

docker push <harbor_ip>/namespace-01/myimage:1.0
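As an example, with Harbor at 192.168.30.35 (replace with your recorded address), the complete tag-and-push sequence looks as follows:

docker tag myimage:1.0 192.168.30.35/namespace-01/myimage:1.0
docker images
docker push 192.168.30.35/namespace-01/myimage:1.0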

Task 3: Review the Image in Harbor


You log in to the Harbor user interface and review the uploaded image.

1. Open a Chrome browser window to https://<harbor_ip>.

The <harbor_ip> value is the IP address that you recorded when you enabled the
embedded Harbor registry. You can also find the address in the vSphere Client by selecting
SA-Compute-01 and selecting Configure > Namespaces > Image Registry.

2. Log in to the Harbor user interface by using the devops01@vsphere.local credentials.

The credentials for the devops01 user are as follows:

• User name: devops01@vsphere.local

• Password: VMware1!

3. In the list of Projects, select the namespace-01 project.

A unique project is automatically created for each namespace.

4. Click the Repositories tab.

The image that you pushed to the repository is listed. All images pushed to the repository
are listed here.

5. Close the Harbor browser window.

Task 4: Deploy an Image from Harbor
You deploy an image stored in a Harbor repository.

1. From SA-CLI-VM, run the cat command to read the /root/Lab14/deploy-from-harbor.yaml file.

cat /root/Lab14/deploy-from-harbor.yaml
The command returns a simple deployment YAML file, almost identical to the files used in previous labs. The main change is that the image field specifies the full path to the image in the Harbor registry. The file also includes a load balancer service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myimage-deployment
  labels:
    app: myimage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myimage
  template:
    metadata:
      labels:
        app: myimage
    spec:
      containers:
      - name: myimage
        image: 192.168.30.35/namespace-01/myimage:1.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myimage-lb
spec:
  selector:
    app: myimage
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
NOTE

If Harbor is deployed with an IP address other than 192.168.30.35, you must update the YAML file to use the correct Harbor IP address.

2. Apply this YAML file to deploy the application and service.

kubectl apply -f /root/Lab14/deploy-from-harbor.yaml


3. Get the external IP address of this new deployment by listing the Kubernetes services.

kubectl get services


4. Record the external_ip value for the myimage-lb service. __________
5. Open a browser to http://<external_ip>.

<external_ip> is the external IP address that you recorded in the previous step.
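As an optional alternative to the browser, you can check the service from SA-CLI-VM with curl, assuming curl is installed there (it typically is on Photon OS). A successful response should return the page served by the container.

curl http://<external_ip>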

Lab 15 Configuring a Content Library

Objective and Tasks


Configure a content library for Tanzu Kubernetes Grid Service:

1. Log In to the vSphere Client

2. Upload the Tanzu Kubernetes Cluster Template

GuideMe Lab 15

Task 1: Log In to the vSphere Client
You log in to the vSphere environment using the vSphere Client.

1. Open the Chrome browser using the shortcut on the taskbar of the student desktop.

The browser automatically redirects to the vSphere Client at https://sa-vcsa-01.vclass.local/ui.

2. If the browser does not automatically redirect to the vSphere Client URL, enter the URL
https://sa-vcsa-01.vclass.local/ui in the address bar.
3. Log in to the vSphere Client using the Single Sign-On Administrator credentials.

a. Enter administrator@vsphere.local for the user name.

b. Enter VMware1! as the password.

4. Expand the vCenter Server inventory, if needed.

Task 2: Upload the Tanzu Kubernetes Cluster Template
You upload the Tanzu Kubernetes cluster template to the content library.

1. In the vSphere Client, select Menu (hamburger) > Content Libraries.

2. Click the Kubernetes content library.

3. Select Templates > OVF & OVA Templates.

4. From the Actions drop-down menu, select Import Item.

5. In the Import Library Item window, select Local file.

6. Upload the Tanzu Kubernetes cluster template.

a. Click Upload Files.

b. In the Open window, browse to the E:\photon-3-k8s-v1.18.10---vmware.1-tkg.1.3a6cd48 directory.

c. Select all files in the directory and click Open.

7. In the Import Library Item window, enter v1.18.10---vmware.1-tkg.1.3a6cd48 to change the item name.

IMPORTANT

Changing the name of the imported template might cause deployed Tanzu Kubernetes
clusters to be reconciled and updated with new virtual machines.

8. Click Import.

9. Monitor the progress of the upload in the Recent Tasks pane of the vSphere Client.

CAUTION

Do not refresh or close the browser window while the upload is in progress. The upload can
take up to 10 minutes.

Lab 16 Deploying a Tanzu Kubernetes
Cluster

Objective and Tasks


Use the Tanzu Kubernetes Grid Service to deploy a Tanzu Kubernetes cluster:

1. View a Tanzu Kubernetes Cluster Deployment YAML File

2. Deploy a Tanzu Kubernetes Cluster

GuideMe Lab 16

Task 1: View a Tanzu Kubernetes Cluster Deployment YAML File
You view the Tanzu Kubernetes cluster deployment YAML file.

1. From SA-CLI-VM, use the kubectl CLI to connect to the vSphere with Tanzu control plane as
the devops01 user.

kubectl vsphere login --server <cluster_ip> -u devops01@vsphere.local

The <cluster_ip> parameter is the control plane node IP address that you recorded
previously. If needed, you can find the address by selecting Menu (hamburger) > Workload
Management > Supervisor Clusters in the vSphere Client.

The credentials for the devops01 user are as follows:

• User name: devops01@vsphere.local

• Password: VMware1!

2. Verify that you are in the namespace-01 context.

kubectl config use-context namespace-01


3. List the available Tanzu Kubernetes cluster versions.

kubectl get virtualmachineimages


The Tanzu Kubernetes cluster template that you uploaded to the content library is listed.

4. View the available Kubernetes storage classes in the namespace.

kubectl describe ns namespace-01


The available storage classes are listed under Resource.

5. To read the /root/Lab16/deploy-tkg-cluster.yaml file, run the cat
command.

cat /root/Lab16/deploy-tkg-cluster.yaml
A YAML declaration is returned. It declares that a Tanzu Kubernetes cluster of version
v1.18.10 is deployed to namespace-01, with one control plane VM and one worker VM. The
deployment uses the same storage class listed in the previous step: k8s-storage-policy. The
deployment uses the VM Class best-effort-xsmall.

root@sa-cli-vm [ ~ ]# cat /root/Lab16/deploy-tkg-cluster.yaml

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01
  namespace: namespace-01
spec:
  distribution:
    fullVersion: v1.18.10+vmware.1-tkg.1.3a6cd48
  topology:
    controlPlane:
      count: 1
      class: best-effort-xsmall
      storageClass: k8s-storage-policy
    workers:
      count: 1
      class: best-effort-xsmall
      storageClass: k8s-storage-policy
  settings:
    network:
      cni:
        name: calico
      services:
        cidrBlocks: ["172.16.100.0/24"]
      pods:
        cidrBlocks: ["172.16.200.0/24"]

Task 2: Deploy a Tanzu Kubernetes Cluster
Using the Tanzu Kubernetes Grid Service for vSphere, you deploy a Tanzu Kubernetes cluster.

1. From SA-CLI-VM, deploy a Tanzu Kubernetes cluster by applying the /root/Lab16/deploy-tkg-cluster.yaml file.

kubectl apply -f /root/Lab16/deploy-tkg-cluster.yaml
2. Monitor the deployment of the Tanzu Kubernetes cluster.

kubectl get tanzukubernetesclusters,virtualmachines


The control plane VM is deployed first, and then the worker VM is deployed.

3. In the vSphere Client, monitor the deployment.

4. Verify the completion of the deployment and VM creation in the Recent Tasks pane.

The deployment might take more than 15 minutes to complete.

5. After both the control plane VM and the worker VM are deployed, view the Tanzu
Kubernetes cluster from SA-CLI-VM.

kubectl describe tanzukubernetesclusters


At the bottom of the output, the node status and the VM status appear as ready.

Node Status:
  tkg-cluster-01-control-plane-cxqg6:             ready
  tkg-cluster-01-workers-dp4kj-6596cd84c-l6p6h:   ready
Phase:  running
Vm Status:
  tkg-cluster-01-control-plane-cxqg6:             ready
  tkg-cluster-01-workers-dp4kj-6596cd84c-l6p6h:   ready
Events:  <none>
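To follow the deployment continuously instead of rerunning the get command in step 2, you can watch each resource type; the --watch flag streams updates until you press Ctrl+C. Run the commands in separate MTPuTTY sessions or one after the other.

kubectl get tanzukubernetesclusters --watch
kubectl get virtualmachines --watch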

Lab 17 Working with Tanzu
Kubernetes Clusters

Objective and Tasks


Apply a policy to, deploy an application to, and scale out a Tanzu Kubernetes cluster:

1. Apply a Pod Security Policy

2. Deploy a Container Application

3. Scale Out a Tanzu Kubernetes Cluster

GuideMe Lab 17

Task 1: Apply a Pod Security Policy
You apply a pod security policy to a Tanzu Kubernetes cluster so that a demo application can
run.

NOTE

For more information about using pod security policies with Tanzu Kubernetes clusters, see vSphere with Tanzu Configuration and Management at https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-152BE7D2-E227-4DAA-B527-557B564D9718.html.

1. From SA-CLI-VM, use the kubectl CLI to connect to the Tanzu Kubernetes cluster as the
devops01 user.

kubectl vsphere login --server <cluster_ip> -u devops01@vsphere.local --tanzu-kubernetes-cluster-name <tkg_cluster> --tanzu-kubernetes-cluster-namespace <namespace>

The <cluster_ip> parameter is the control plane node IP address that you recorded
previously. You can find it in the vSphere Client by selecting Menu (hamburger) > Workload
Management > Supervisor Clusters.

The <tkg_cluster> parameter is the name of the Tanzu Kubernetes cluster, and the
<namespace> parameter is the Supervisor Cluster namespace where this cluster resides.
Example command with parameter values:

kubectl vsphere login --server 192.168.30.33 -u devops01@vsphere.local --tanzu-kubernetes-cluster-name tkg-cluster-01 --tanzu-kubernetes-cluster-namespace namespace-01

The credentials for the devops01 user are as follows:

• User name: devops01@vsphere.local

• Password: VMware1!

2. Verify that you are in the tkg-cluster-01 context.

kubectl config use-context tkg-cluster-01

3. To read the /root/Lab17/allow-runasroot-clusterrole.yaml file, run the
cat command.
cat /root/Lab17/allow-runasroot-clusterrole.yaml
Tanzu Kubernetes Grid Service provisions Tanzu Kubernetes clusters with the PodSecurityPolicy admission controller enabled, so a binding that grants use of a pod security policy is required before workloads can be deployed.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:privileged
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - vmware-system-privileged
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: all:psp:privileged
roleRef:
  kind: ClusterRole
  name: psp:privileged
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io
4. Apply the allow-runasroot-clusterrole.yaml file to grant privileges for the
demo application to run.

kubectl apply -f /root/Lab17/allow-runasroot-clusterrole.yaml
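As an optional check, you can confirm that the objects defined in the YAML file now exist in the Tanzu Kubernetes cluster. The object names below come from the file shown in step 3; the pod security policy vmware-system-privileged is provided by the system.

kubectl get clusterrole psp:privileged
kubectl get clusterrolebinding all:psp:privileged
kubectl get podsecuritypolicy vmware-system-privileged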

Task 2: Deploy a Container Application
You deploy a container application to a Tanzu Kubernetes cluster.

This process is almost identical to deploying container applications using vSphere Pods.

1. On SA-CLI-VM, use the cat command to read the /root/Lab17/app-on-tkg-deployment.yaml file.

cat /root/Lab17/app-on-tkg-deployment.yaml
The command returns a simple deployment YAML file for deploying an Nginx container
application and a load balancer service to provide ingress access to the container application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-on-tkg-deployment
  labels:
    app: app-on-tkg
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-on-tkg
  template:
    metadata:
      labels:
        app: app-on-tkg
    spec:
      containers:
      - name: app-on-tkg
        image: 172.20.10.30/nginx:1.16
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: tkg-loadbalancer
spec:
  selector:
    app: app-on-tkg
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer

2. Deploy the container application by applying the YAML file.

kubectl apply -f /root/Lab17/app-on-tkg-deployment.yaml


3. Monitor the creation of the deployment, the pod, and the load balancer service.

kubectl get deployments,pods,services


Monitor until the deployment and the pod show 1/1 in the READY column and the load balancer service displays an external IP address.

4. On the student desktop, open the Chrome browser to http://<external_ip>.

<external_ip> is the external IP address of the load balancer service from the previous
step.

The default Nginx landing page opens in the browser window.

NOTE

Pods that are deployed on a Tanzu Kubernetes cluster are not visible from the vSphere
Client. They reside in the Tanzu Kubernetes cluster worker node VMs.
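Because the pods are not visible in the vSphere Client, you can instead see which worker node each pod is scheduled on from the kubectl CLI; the -o wide option adds the node name and pod IP columns to the listing.

kubectl get pods -o wide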

Task 3: Scale Out a Tanzu Kubernetes Cluster
You scale the number of Tanzu Kubernetes cluster worker nodes from one to three.

1. On SA-CLI-VM, return to the supervisor namespace context where the Tanzu Kubernetes
cluster resides.

kubectl config use-context namespace-01


2. Run the cat command to read the /root/Lab17/scale-tkg-cluster.yaml file.

cat /root/Lab17/scale-tkg-cluster.yaml
The file is a copy of the deploy-tkg-cluster.yaml file, except that the number of
worker node VMs is increased to three.

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01
  namespace: namespace-01
spec:
  distribution:
    fullVersion: v1.18.10+vmware.1-tkg.1.3a6cd48
  topology:
    controlPlane:
      count: 1
      class: best-effort-xsmall
      storageClass: k8s-storage-policy
    workers:
      count: 3   # <<<--- increased number of workers to 3
      class: best-effort-xsmall
      storageClass: k8s-storage-policy
  settings:
    network:
      cni:
        name: calico
      services:
        cidrBlocks: ["172.16.100.0/24"]
      pods:
        cidrBlocks: ["172.16.200.0/24"]

3. Apply the updated YAML file.

kubectl apply -f /root/Lab17/scale-tkg-cluster.yaml


If you get an error message, it might indicate that your active context is not the Supervisor
Namespace.

4. If your active context is not the Supervisor Namespace called namespace-01, rerun step 1.

kubectl config use-context namespace-01


5. From the vSphere Client, monitor the deployment of the new worker node VMs and wait for
them to be deployed and powered on.

6. After all the new worker VMs are deployed, view the Tanzu Kubernetes cluster from SA-
CLI-VM.

kubectl describe tanzukubernetesclusters


Verify that the three nodes are ready and healthy.
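Optionally, you can also confirm the scale-out from inside the Tanzu Kubernetes cluster itself by switching to its context and listing the Kubernetes nodes. Four nodes should be reported: one control plane node and three worker nodes.

kubectl config use-context tkg-cluster-01
kubectl get nodes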

Lab 18 Control Plane Certificate
Management

Objective and Tasks


Replace the default control plane management certificate:

1. Log In to the vSphere Client

2. Generate a Certificate Signing Request

3. Obtain a Signed Certificate

4. Install the Certificate Authority Root Certificate

5. Replace the Control Plane Management Certificate

GuideMe Lab 18

Task 1: Log In to the vSphere Client
You log in to the vSphere environment using the vSphere Client.

1. Start the Chrome browser by using the shortcut on the taskbar of the student desktop.

The browser automatically redirects to the vSphere Client at https://sa-vcsa-01.vclass.local/ui.

2. If the browser does not automatically redirect to the vSphere Client URL, enter
https://sa-vcsa-01.vclass.local/ui in the address bar.
3. Log in to the vSphere Client using the Single Sign-On Administrator credentials.

a. Enter administrator@vsphere.local as the user name.

b. Enter VMware1! as the password.

4. Expand the vCenter Server inventory, if needed.

Task 2: Generate a Certificate Signing Request
You create a certificate signing request (CSR) for the control plane management interface. The
CSR is then provided to a certificate authority.

1. In the vSphere Client, select SA-Compute-01 and select Configure > Supervisor Cluster >
Certificates.

2. In the Workload Platform Management tile, select Generate CSR from the Actions drop-
down menu.

3. Configure the CSR and click Next.

NOTE

The value for the common name must be vspherek8s.vclass.local. The other values can be
changed.

Parameter              Action

Common name            Enter vspherek8s.vclass.local
Organization           Enter VMware
Organizational Unit    Enter VMware Education
Country                Enter United States
State/Province         Enter CA
Locality               Enter Palo Alto
Email Address          Enter noemail@vmware.com

4. Click Download to save the CSR file to the student desktop.

5. Click Copy to copy the contents to the clipboard.

6. Click Finish.
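If you want to inspect the CSR before submitting it, you can decode it with openssl, for example on SA-CLI-VM after copying the downloaded file there (openssl is typically available on Photon OS). The <csr_file> value is a placeholder for the file name that the vSphere Client saved. The output should show vspherek8s.vclass.local as the subject common name.

openssl req -in <csr_file> -noout -text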

Task 3: Obtain a Signed Certificate
You provide a CSR to a certificate authority to download a signed certificate.

1. Open a Chrome browser window to http://dc.vclass.local/certsrv.

2. Log in.

a. Enter VCLASS\Administrator as the user name.


b. Enter VMware1! as the password.

3. In the Microsoft Active Directory Certificate Services home page, click Request a
Certificate.

4. Click Advanced certificate request.

5. Paste the copied contents of the CSR into the Saved Request text box.

6. From the Certificate Template drop-down menu, select vSphere.

7. Click Submit.

8. Select Base 64 encoded.

9. Click Download Certificate.

The browser might display the download warning "This type of file can harm your computer. Do you want to keep certnew.cer anyway?"
10. Click Keep on the warning to save the certnew.cer file.

The certnew.cer file is downloaded to the C:\Materials\Downloads directory on the student desktop.

Task 4: Install the Certificate Authority Root Certificate
You install the certificate authority root certificate into the vCenter Server trusted root store.

1. In the vSphere Client, select Menu (hamburger) > Administration > Certificate
Management.

2. Click Add next to Trusted Root Certificates.

3. In the Add Trusted Root Certificates dialog box, click Browse.

4. Browse to C:\Materials\Downloads and select the MSCA_Root.cer file.

5. Click Open.

6. Click Add.

A second entry is visible in the vSphere Client under Trusted Root Certificates.

Task 5: Replace the Control Plane Management Certificate
You install a new signed certificate for the vSphere with Tanzu control plane VMs.
1. In the vSphere Client, select Menu (hamburger) > Inventory > Hosts and Clusters.
2. Select SA-Compute-01.
3. Select Configure > Supervisor Cluster > Certificates.
4. In the Workload Platform Management tile, select Replace Certificate from the Actions
drop-down menu.
5. In the Replace Certificate window, click Upload Certificate File.
6. In the Open window, browse to C:\Materials\Downloads and select the
certnew.cer file.
7. Click Open.
8. Click Replace.
9. Open a new browser tab and go to https://192.168.30.33.
The vSphere with Tanzu control plane landing page opens.

The webpage should identify itself as secure.

NOTE

You might need to wait a few minutes, and close and reopen the browser, for the certificate
to appear as trusted and secure.
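You can also verify the new certificate from SA-CLI-VM rather than the browser, assuming openssl is installed there. The following command prints the subject, issuer, and validity dates of the certificate presented by the control plane endpoint; the issuer should now reflect the certificate authority on dc.vclass.local.

# Retrieve the certificate from the control plane endpoint and decode it
echo | openssl s_client -connect 192.168.30.33:443 2>/dev/null | openssl x509 -noout -subject -issuer -dates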

Lab 19 (Optional) Deploying the Yelb
Application as vSphere Pods

Objective and Tasks


Deploy the Yelb application as vSphere Pods:

1. Create a Namespace

2. Deploy the Yelb Application

GuideMe Lab 19

Task 1: Create a Namespace
You create and configure a namespace to host the Yelb application.

1. In the vSphere Client, create a vSphere with Tanzu namespace called yelb.

2. Provide the yelb namespace with the VM storage policy called K8S Storage Policy.

3. Assign Edit permissions on the yelb namespace to the devops01@vsphere.local user.

Task 2: Deploy the Yelb Application


You deploy the Yelb application to the new namespace.

1. From SA-CLI-VM, log in to the vSphere with Tanzu control plane.

2. Verify that the active context is the yelb namespace.

3. Deploy the Yelb application by applying the /root/Lab19/deploy-yelb-app.yaml file using the kubectl CLI.
4. Use the kubectl CLI to monitor the creation of the pods and services for the application.

5. Obtain the external IP address of the yelb-ui load balancer service and open a browser to
this IP address.

6. Vote for your favorite food!
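The steps above intentionally omit the exact commands. One possible sequence, based on the patterns used in the earlier labs (substitute the control plane node IP address that you recorded), is:

kubectl vsphere login --server <cluster_ip> -u devops01@vsphere.local
kubectl config use-context yelb
kubectl apply -f /root/Lab19/deploy-yelb-app.yaml
kubectl get pods,services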

Lab 20 (Optional) Deploying the Yelb
Application to a Tanzu Kubernetes
Cluster

Objective and Tasks


Deploy a Yelb application to a Tanzu Kubernetes Cluster:

1. Deploy the Yelb Application

2. Delete a Tanzu Kubernetes Cluster

GuideMe Lab 20

Task 1: Deploy the Yelb Application
You deploy the Yelb application to a Tanzu Kubernetes cluster.

1. From SA-CLI-VM, log in to the vSphere with Tanzu control plane, including the Tanzu
Kubernetes cluster login flags.

2. Verify that the active context is the Tanzu Kubernetes cluster namespace.

3. Deploy the Yelb application by applying the /root/Lab20/deploy-yelb-on-tkg.yaml file using the kubectl CLI.
4. Use the kubectl CLI to monitor the creation of the pods and services for the application.

5. Obtain the external IP address of the yelb-ui load balancer service and open a browser to this
IP address.

6. Vote for your favorite food!
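As in the previous lab, the exact commands are omitted. One possible sequence, following the pattern from Lab 17 (substitute the control plane node IP address that you recorded), is:

kubectl vsphere login --server <cluster_ip> -u devops01@vsphere.local --tanzu-kubernetes-cluster-name tkg-cluster-01 --tanzu-kubernetes-cluster-namespace namespace-01
kubectl config use-context tkg-cluster-01
kubectl apply -f /root/Lab20/deploy-yelb-on-tkg.yaml
kubectl get pods,services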

Task 2: Delete a Tanzu Kubernetes Cluster
You delete a Tanzu Kubernetes cluster and its associated services.

1. From SA-CLI-VM, list the running Tanzu Kubernetes clusters.

kubectl get tanzukubernetesclusters


The Tanzu Kubernetes cluster should be in a running state, with one control plane and three
workers.

root@sa-cli-vm [ ~ ]# kubectl get tanzukubernetesclusters
NAME             CONTROL PLANE   WORKER   DISTRIBUTION                      AGE    PHASE
tkg-cluster-01   1               3        v1.18.10+vmware.1-tkg.1.3a6cd48   177m   running
root@sa-cli-vm [ ~ ]#
2. Delete the Tanzu Kubernetes cluster.

kubectl delete tanzukubernetesclusters <tkg_cluster_name>


The <tkg_cluster_name> value is the name of the Tanzu Kubernetes cluster to be
deleted.

root@sa-cli-vm [ ~ ]# kubectl delete tanzukubernetesclusters tkg-cluster-01
tanzukubernetescluster.run.tanzu.vmware.com "tkg-cluster-01" deleted
root@sa-cli-vm [ ~ ]#

IMPORTANT

Any container applications running in the Tanzu Kubernetes cluster are deleted.

3. From the vSphere Client, observe that the Tanzu Kubernetes cluster VMs are deleted.

Any services, for example, load balancer services, that are associated with the Tanzu
Kubernetes cluster are also deleted.
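As a final optional check from SA-CLI-VM, while in the namespace-01 context, listing the virtual machines should eventually return no Tanzu Kubernetes cluster node VMs once deletion completes.

kubectl config use-context namespace-01
kubectl get virtualmachines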

