
VMware VCP-DCV 2023 Study Guide

vSphere 8.0 Version

SECTION 1: Architectures & Technologies

Objective 1.1: Identify Pre-Requisites & Components for vSphere Implementation


Resource: ESXi 8.0 Installation and Setup Guide (2022), pp. 12-19
vCenter Server 8.0U1 Installation and Setup Guide (2023), pp. 8-24

Hello vCommunity friends! I am glad to be able to provide this VMware vSphere VCP-DCV
2023 Study Guide, referencing vSphere version 8. Thank you for downloading and reading. I
hope you find this Study Guide useful for your VCP-DCV studies. Please don’t hesitate to
provide any feedback you may have to make this Guide better. You can find and message
me on Twitter or LinkedIn. Now let’s get to the Guide…

The two core components of vSphere are ESXi and vCenter Server. With the vSphere 7
release, the Platform Services Controller (PSC) is no longer a part of the vSphere installation
process. All PSC services and functions have been incorporated into vCenter Server,
providing for an easier install process and less complexity. In this Objective, I will cover all
the pre-requisites and components making up the two vSphere components, as well as
provide several common component maximums you should be familiar with. Installation
steps and more detailed component feature explanations will be discussed in other
Objectives later in this Guide.

• vSphere (ESXi) pre-requisites and components

o ESXi requirements include:


▪ Verify hardware support on VMware’s Hardware Compatibility List (HCL)
▪ Enable the following BIOS settings:
o Intel VT-x or AMD RVI hardware virtualization to run 64-bit VMs
o The NX/XD bit; enabling hyperthreading is optional
▪ Minimum 2 CPU Cores / 8 GB RAM
▪ 64-bit processors released after September 2006
▪ Supported Gigabit Ethernet Controller
▪ ESXi boot storage requirements:
o Boot storage of at least 8GB for USB & SD drives; NOTE: SD & USB boot
devices will "soon" be deprecated
o If booting from local disk, SAN, or iSCSI LUNs, a minimum of 32GB
o ESXi can take up to 138GB total, and only creates a VMFS datastore if
there’s a minimum of 4GB free, so a total of 142GB minimum is needed to
share boot media with datastore storage
▪ Firewall ports – review port requirements on VMware’s new ports utility
▪ ESXi Host Client is supported by the following browser versions & higher:
Page 1 of 180 Author: Shane Williford

o Chrome 89, Firefox 80, Microsoft Edge 90, Safari 9
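The boot-media sizing rules above reduce to simple arithmetic; here's a quick sketch (illustrative only; the 138GB and 4GB figures come from the notes above):

```python
def min_disk_gb(share_with_vmfs: bool) -> int:
    """Minimum boot device size per the sizing notes above (a sketch)."""
    ESXI_MAX_GB = 138      # maximum space ESXi may consume for its system partitions
    VMFS_MIN_FREE_GB = 4   # free space required before a VMFS datastore is created
    DISK_BOOT_MIN_GB = 32  # minimum for local disk / SAN / iSCSI boot
    if share_with_vmfs:
        return ESXI_MAX_GB + VMFS_MIN_FREE_GB
    return DISK_BOOT_MIN_GB

assert min_disk_gb(share_with_vmfs=True) == 142
assert min_disk_gb(share_with_vmfs=False) == 32
```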

o The following are several vSphere maximums per ESXi Host. To see a full list,
refer to the VMware config max website

Host CPUs                                  896
Host memory                                24TB
vCPUs                                      4096
Virtual machines                           1024
Fault Tolerance VMs                        4
VMFS / FC / iSCSI volume, LUN size         64TB
VMFS volumes                               1024
FC or iSCSI LUNs                           1024
Physical NICs                              32 (1GbE) / 16 (10GbE)
Number of vDS                              16
Network switch ports (vSS and vDS)         4096
VMDirectPath devices                       8
Hosts per Cluster                          96
Resource pools                             1600 (tree depth = 8)
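A sketch of how you might sanity-check a planned host design against these per-host maximums (values transcribed from the table above; always confirm current limits on the VMware config max site):

```python
# Per-host maximums from the table above (a subset, for illustration).
HOST_MAX = {
    "cpus": 896, "memory_tb": 24, "vcpus": 4096, "vms": 1024,
    "ft_vms": 4, "vmfs_volumes": 1024, "physical_nics_1gbe": 32,
    "vds": 16,
}

def over_limits(design: dict) -> list:
    """Return the names of any design values that exceed the documented maximums."""
    return [k for k, v in design.items() if k in HOST_MAX and v > HOST_MAX[k]]

assert over_limits({"vms": 1200, "vcpus": 512}) == ["vms"]
assert over_limits({"cpus": 896, "ft_vms": 4}) == []
```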

The following section contains the components and requirements for installing vCenter Server:

• vCenter Server pre-requisites and components

o vCenter Appliance is preinstalled with – Photon 3.0 OS, PostgreSQL Database, &
virtual hardware 10; and has two components:
▪ Authentication Services:
o vCenter Single Sign-On (SSO)
o License Service
o Lookup Service
o VMware Certificate Authority (VMCA)
▪ vCenter Server services:
o vCenter Server
o PostgreSQL
o vSphere Client
o vSphere ESXi Dump Collector
o vSphere Auto Deploy
o vSphere Lifecycle Manager Extension
o vSphere Lifecycle Manager

▪ The VCSA must be deployed to an ESXi Host or vCenter Server running 6.7 or later


▪ Hardware requirements: CPU and memory vary by deployment size, from
Tiny through X-Large; see the sizing tables in the Installation Guide

▪ Storage requirements: vary by deployment size and storage size
(Default, Large, or X-Large); see the Installation Guide

▪ Proper DNS and time synchronization


▪ Firewall ports – review port requirements on VMware’s new ports utility
▪ Client machine used to install the VCSA:
o Windows 10 desktop or higher; Windows Server 2016 or higher,
with at least 2 CPUs / 4 Cores, 2.3 GHz, 4GB RAM, 32 GB disk

o Linux OS support: SUSE 15 or Ubuntu 18.04 with at least 1 CPU /
2 Cores of 2.3 GHz, 4GB RAM, and 16 GB disk; CLI requires x64
o macOS: v10.15 or higher, 1 CPU / 4 Cores, 8GB RAM, 150GB disk
NOTE: all OSs require a minimum 1024x768 resolution for proper GUI
display

The following are several vSphere maximums for vCenter Server. To see a full
list, refer to the VMware config max website

Hosts                                      2500
Powered-on / Registered VMs                40000 / 45000
Linked vCenters                            15
Concurrent web client sessions             100
vMotion operations per Host                4 (1GbE) / 8 (10GbE)
Storage vMotions per Host                  2
vMotion / Storage vMotion per datastore    128 / 8
Datastore Clusters                         256
Datastores per Datastore Cluster           64
Number of vDS                              128

Objective 1.2: Describe the Components and Topology of a VMware vCenter Architecture

Resource: vCenter Server 8.0U1 Installation and Setup Guide (2023)

• vCenter Topology was discussed in Objective 1.1 above, but to review:


o vCenter Server, since vSphere 7, can only be installed as an appliance; a
Windows version is no longer provided
o Also, there is no longer a PSC installed alongside vCenter. All components and
services associated with the former PSC are now incorporated into the VCSA
o The VCSA has 2 main services – Authentication Services and vCenter Server
Services; see 1.1 above for listed services within each component
o The vCenter Server appliance is comprised of:
▪ vCenter Server
▪ PostgreSQL
▪ Photon 3.0 OS
▪ Virtual hardware version 10


Objective 1.3: Describe Storage Concepts


Resource: vSphere 8.0 Storage Guide (2023)
vSphere 8.0 Resource Management Guide (2022), pp. 63-69; 90-102
vSAN 8.0 Planning and Deployment Guide (2022)

1.3.1 – Identify and Differentiate Different Storage Access Protocols for vSphere

• Local storage
o ESXi Host local storage refers to internal hard disks or external storage systems
connected to the Host via SAS or SATA protocols. Supported local devices
include: SCSI, IDE, SAS, SATA, USB, Flash, and NVMe
Since this storage type is local to the ESXi Host, it cannot be shared with
other Hosts. As such, vSphere features such as High Availability (HA)
and vMotion are not available.
NOTE: USB and IDE devices cannot be used to store VMs

• Networked storage
o These external storage systems are accessed by ESXi Hosts over a high-speed
storage network, and are shared among all Hosts configured to access the
system. Storage devices are represented as Logical Units (LUNs), each with a
unique identifier – a World Wide Port Name (WWPN) for FC, or an iSCSI
Qualified Name (IQN) for iSCSI
▪ Shared LUNs must be presented to all ESXi Hosts with the same UUID
▪ You cannot access the same LUN with different protocols (e.g. FC and iSCSI)
▪ A LUN can contain only one VMware File System (VMFS) datastore
o VMware supports the following types of Networked storage:

Fibre Channel (FC)


o Uses a Storage Area Network (SAN) to transfer data from ESXi Hosts to the
storage system using the FC protocol. This protocol packages SCSI commands
into FC frames. To connect to the FC SAN, the Host uses a FC Host Bus Adapter
(HBA)
o FC supports three Storage Array types:
▪ Active-active – supports access to LUNs simultaneously through all
available storage ports. All paths are active unless a path fails
▪ Active-passive – one storage processor actively provides access to a
given LUN, while other storage processors act as a backup (standby) for
the LUN yet can provide active access to other LUNs. If an active port
fails, a backup/standby path can be activated by the servers accessing it
▪ Asymmetric – Asymmetric Logical Unit Access (ALUA) provides different
levels of access per port


o Access to a FC LUN can be configured in one of two ways:


▪ LUN Zoning defines which Host’s HBA devices can connect to which
Storage Processors (SPs) or targets. Using single-initiator-single-target is
the preferred Zoning method
▪ LUN Masking is a permission-based process making a LUN available to
some ESXi Hosts and unavailable to other Hosts
o Increase the SCSI TimeoutValue in Windows VMs registry to 60 so the VM can
tolerate delayed I/O from a path failover
o N-Port ID Virtualization can be used by VMs for RDM access. A WWPN and
WWNN are paired in a VM .vmx file, and the VMkernel instantiates a virtual port
(vPort) on the Host HBA used for LUN access
o FC HBA considerations
▪ Do not mix HBA vendors
▪ Ensure the same firmware level on each HBA
▪ Set the timeout value for detecting failover as discussed above
▪ ESXi supports 32Gb end-to-end FC connectivity
o FC Multipathing is a method to ensure a remote storage system is highly
available to an ESXi Host. To achieve a highly available connection to the storage
system, multiple FC HBAs must be used. View VMware’s representation below:

o Multipathing is possible by way of the Pluggable Storage Architecture (PSA). The


PSA will be discussed in more detail in Sub-Objective 1.3.3 below
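The zoning and masking rules above can be modeled as a tiny access check; this is purely illustrative (names like `host1-hba0` and `lun-17` are made up, and real fabrics enforce this in the switch and array, not the host):

```python
# Single-initiator-single-target zoning: which HBA may reach which target.
zones = {("host1-hba0", "array-spA"), ("host2-hba0", "array-spA")}
# LUN masking: which hosts a given LUN is presented to.
lun_mask = {"lun-17": {"host1", "host2"}}

def can_see_lun(host, hba, target, lun):
    # A host reaches a LUN only if its HBA is zoned to the target
    # AND the LUN's mask includes that host.
    return (hba, target) in zones and host in lun_mask.get(lun, set())

assert can_see_lun("host1", "host1-hba0", "array-spA", "lun-17")
assert not can_see_lun("host3", "host3-hba0", "array-spA", "lun-17")
```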


Fibre Channel over Ethernet (FCoE)


o Hardware FCoE Adapters are Converged Network Adapters (CNAs) containing
both FC and network capabilities; this now is the only FCoE protocol supported
▪ Appear as both a vmnic and FCoE adapter in the vSphere Client
▪ No configuration is necessary
o Software FCoE is no longer a supported storage protocol in production
environments

Internet SCSI (iSCSI)


o Connect to storage systems using Ethernet connections between ESXi Hosts and
high-performance storage systems using iSCSI HBAs or NICs (i.e. “initiators”)
o iSCSI names are generated by ESXi and are represented as IQNs or Extended
Unique Identifiers (EUIs) to identify an iSCSI node
▪ IQN example – iqn.1998-01.com.vmware.iscsi:hostname1
▪ EUI example – eui.0123456789ABCDEF ( eui.16_hex_digits )
o There are two types of iSCSI Initiators to connect to iSCSI targets:
▪ Software iSCSI uses built-in VMware code in the VMkernel to connect to
iSCSI storage through standard NICs
▪ Hardware iSCSI uses 3rd party adapters to offload iSCSI processing from
the ESXi Host, using one of two categories:
-Dependent Hardware relies on VMware networking, iSCSI
configuration, and management interfaces (ex. Broadcom 5709)
-Independent Hardware uses its own networking and iSCSI
configuration and management interfaces (ex. QLogic QLA4052)
o iSCSI Extensions for RDMA (iSER)
▪ With this protocol, ESXi Hosts instead replace traditional TCP/IP transport
with RDMA, enabling transfer of data directly between ESXi Host memory
buffers and storage devices, reducing TCP/IP overhead and thus latency
and CPU load on storage devices
o iSCSI supports four Storage Types
▪ Active-active – same as discussed in FC above
▪ Active-passive – same as discussed in FC above
▪ Asymmetric – same as discussed in FC above
▪ Virtual port – supports access to all LUNs through a single virtual port,
and handles connection balancing and failovers transparently
o Discovery sessions return available targets to the ESXi Host, using one of two
methods:
▪ Dynamic Discovery, or SendTargets, obtains a list of accessible targets
▪ Static Discovery can only access a particular target by name and address
o Authentication uses a name and key pair. ESXi uses the Challenge
Handshake Authentication Protocol (CHAP)


o Access control is set up on the iSCSI storage system by Initiator Name, IP Address,
or CHAP
o Error correction, disabled by default, can be achieved by enabling header digests
and data digests, which check header info and SCSI data transferred between
initiator and target in both directions. Enabling them adds processing overhead,
however, which can affect throughput and CPU load and reduce performance
o Jumbo Frames of MTU size greater than 1500 (generally, 9000) must be configured
end-to-end from Host to storage system – on the vSwitch, Software iSCSI Adapter,
VMkernel port, and physical switches
o iSCSI Port Binding is used for multipathing to ensure a highly available network
storage configuration. This entails assigning each ESXi Host physical NIC to
a separate VMkernel port, then binding each VMkernel port to the Software
iSCSI Adapter. For hardware multipathing, 2 physical HBAs are required and are
utilized similar to FC multipathing. Hardware and Software multipathing are
visually represented by VMware as in the image below:
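The IQN and EUI name formats shown earlier can be checked with a rough validator; these are simplified patterns for illustration, not the full RFC 3720 grammar:

```python
import re

# Simplified sketches of the two iSCSI node-name formats described above:
# iqn.<yyyy-mm>.<reversed domain>[:<optional name>]  and  eui.<16 hex digits>
IQN = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.\-]+(:.+)?$")
EUI = re.compile(r"^eui\.[0-9A-Fa-f]{16}$")

def iscsi_name_type(name: str) -> str:
    if IQN.match(name):
        return "iqn"
    if EUI.match(name):
        return "eui"
    return "invalid"

# The two examples from the notes above:
assert iscsi_name_type("iqn.1998-01.com.vmware.iscsi:hostname1") == "iqn"
assert iscsi_name_type("eui.0123456789ABCDEF") == "eui"
```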

Network-Attached Storage (NAS)


o Uses the TCP/IP network to access remote file servers through the NFS client in
the VMkernel using standard network adapters
o ESXi uses one of two NFS protocols, version 3 and 4.1


o NFS 3 and NFS 4.1 have the following differences:

                        NFS 3                    NFS 4.1
Security                AUTH_SYS                 AUTH_SYS and Kerberos (krb5)
Encryption              not supported            AES 128 or AES 256
Multipathing            not supported            Through session trunking
Locking mechanism       Client-side locking      Server-side locking
vSphere features        All supported            Does not support SIOC, SDRS,
(HA, DRS, FT, etc)                               or vVols; SRM w/array-replic

o To upgrade NFS 3 datastores to NFS 4.1, you must storage vMotion VMs off the
NFS 3 datastore and unmount it, then remount it back as NFS 4.1 and storage
vMotion VMs back
o Some general NFS rules:
▪ You cannot connect a NFS datastore to different Hosts using different
NFS versions
▪ You can run different NFS version datastores on the same Host

o NFS 3 firewall rule behavior


▪ When adding or mounting a NFS 3 datastore, If the nfsClient rule set
is disabled, ESXi enables it and disables the Allow All IP Address policy by
setting the allowedAll flag to FALSE . The NFS server IP is added to the
allowed list of outgoing IP addresses
▪ If the nfsClient rule set is enabled, its state and the Allow All IP
Address policy are not changed. The NFS server IP is added to the allowed
list of outgoing IP addresses
▪ When removing or unmounting a NFS 3 datastore, if no remaining NFS
datastores are mounted from the NFS server IP of the datastore being
removed, ESXi removes the NFS server IP
▪ If there are no other mounted NFS 3 datastores, ESXi disables the
nfsClient rule set

o NFS 4 firewall rule behavior


▪ When mounting the first NFS 4.1 datastore, ESXi enables the
nfsClient rule set, opens port 2049 for all IP addresses, and sets the
allowedAll flag to TRUE
▪ Unmounting a NFS 4.1 datastore does not affect the firewall state. Port
2049 remains open until closed explicitly
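The NFS 3 firewall behavior above can be modeled as a small state machine. This is a sketch of the described rules, not ESXi code (the class and method names are mine):

```python
class NfsClientFirewall:
    """Toy model of the nfsClient rule-set behavior for NFS 3 mounts."""
    def __init__(self):
        self.ruleset_enabled = False
        self.allowed_all = True       # the Allow All IP Address policy
        self.allowed_ips = set()
        self.mounts = {}              # server IP -> count of mounted NFS 3 datastores

    def mount(self, server_ip):
        if not self.ruleset_enabled:
            self.ruleset_enabled = True
            self.allowed_all = False  # allowedAll flag set to FALSE
        self.allowed_ips.add(server_ip)
        self.mounts[server_ip] = self.mounts.get(server_ip, 0) + 1

    def unmount(self, server_ip):
        self.mounts[server_ip] -= 1
        if self.mounts[server_ip] == 0:   # last datastore from this server removed
            del self.mounts[server_ip]
            self.allowed_ips.discard(server_ip)
        if not self.mounts:               # no NFS 3 datastores left at all
            self.ruleset_enabled = False

fw = NfsClientFirewall()
fw.mount("10.0.0.5")
assert fw.ruleset_enabled and not fw.allowed_all
fw.unmount("10.0.0.5")
assert not fw.ruleset_enabled
```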


Overview of Storage Types and vSphere Feature Supportability:

[Table not reproduced here – it compares vSphere feature support across Local
Storage, Fibre Channel, iSCSI, and NFS]


1.3.2 – Describe Storage Datastore Types for VMware vSphere

• VMware File System (VMFS) – datastores using local, FC, or iSCSI storage devices

• Network File System (NFS) – datastores created using NAS over NFS devices; only NFS
v3 and NFS v4.1 are supported

• Software Defined Storage (vSAN) – datastore created when using VMware vSAN, a
VMware-proprietary version of software defined storage. vSAN aggregates all Cluster
ESXi Host local storage into a single datastore shared by all Hosts in the vSAN Cluster. I’ll
discuss the requirements and components of vSAN later in Objective 1.7

• Virtual Volumes (vVols)


o vVols, identified by a unique GUID, are an encapsulation of VM files, VMDKs, and
their derivatives stored natively on a storage system. vVols are created
automatically when performing a VM operation: creation, cloning, snapshotting
o The following are vVol types which make up core VM elements
▪ Data-vVols correspond directly to each VM .vmdk file, either thick- or
thin- provisioned
▪ Config-vVol is a small home directory containing VM metadata
files, such as .vmx, .log, etc; is thin-provisioned
▪ Swap-vVol holds a VM’s memory pages and is created when a VM
powers on; is thick-provisioned by default
▪ Snapshot-vVol holds content of VM memory for a snapshot, and is thick-
provisioned
▪ Other is used for a special feature, such as Content-Based Read Cache
(CBRC)
NOTE: VM at minimum creates a Data-vVol, Config-vVol, and Swap-vVol
o A Storage Provider, or VASA Provider, is a software component acting as a storage
awareness service for vSphere, mediating out-of-band communication between
vCenter Server and ESXi Hosts on one side and the storage system on the other
o Storage Containers are a pool of raw storage capacity or an aggregation of
storage capabilities a storage system can provide to virtual volumes
o Protocol Endpoints are a logical I/O proxy for ESXi Hosts to communicate with
virtual volumes and the virtual disk files the virtual volumes encapsulate
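Per the NOTE above, a VM always gets at least a Config-vVol, Data-vVol, and Swap-vVol; a toy model of which objects exist for a given VM (illustrative only):

```python
def vvols_for_vm(disks=1, snapshots=0, powered_on=True):
    """Sketch of the vVol objects backing a VM, per the types listed above."""
    objs = ["config"] + ["data"] * disks   # one Data-vVol per .vmdk
    if powered_on:
        objs.append("swap")                # Swap-vVol created at power-on
    objs += ["snapshot"] * snapshots       # Snapshot-vVol per memory snapshot
    return objs

# The documented minimum for a powered-on, single-disk VM:
assert set(vvols_for_vm()) == {"config", "data", "swap"}
```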


Below is VMware’s visual representation of the vVol architecture

• Persistent Memory (PMem or Non-Volatile Memory [NVM])


o These devices combine performance and speed for VM workloads such as
acceleration databases and analytics which require high bandwidth, low latency,
and persistence.

1.3.3 – Explain the Importance of Advanced Storage Configuration (VASA, VAAI, PSA)

• vSphere API for Storage Awareness (VASA) – a set of APIs allowing storage arrays to
communicate and integrate with vCenter Server for management functionality such as:
o Storage array features
o Storage health, configuration info, and capacity
o Disk type
o Snapshot space reduced
o Storage deduplication
o vVol type

• vSphere API for Array Integration (VAAI) – allows for hardware acceleration and
offloading of certain operations to the storage system from the ESXi Host, allowing the
Host to perform operations faster while consuming less resources
o Requirements: vSphere 4.1 (NAS not supported until v5.0), ESXi Enterprise
license (or higher), storage arrays supporting VAAI
o Operations using VAAI (vSphere 5.x and higher):
▪ Hardware Assisted Locking or Atomic Test and Set (ATS) for locking VMFS
files
▪ Clone Blocks / Full Copy / XCOPY to copy data within an array
▪ Block Zeroing or 'write same' to zero out disk regions
▪ Thin provisioning
▪ Block Delete / unmap for reclaiming unused disk space
▪ Full File Clone (NAS)
▪ Fast File Clone or Snapshot Support (NAS)

• Pluggable Storage Architecture (PSA)


o PSA is a VMkernel layer within ESXi. It’s an open and modular framework
coordinating various software modules responsible for multipathing. The
following components make up the PSA:

Native Multipathing Plugin (NMP) is VMware’s VMkernel multipathing plugin
ESXi uses by default, providing a default path selection algorithm based on
storage array type. NMP manages 2 VMware sub-modules:

▪ Path Selection Plugin (PSP) is responsible for selecting the physical path
for I/O requests, and uses one of 3 policies:

VMW_PSP_MRU – the Most Recently Used policy selects the first
working path discovered at system boot. When that path becomes unavailable,
an alternate path is chosen, and if the original becomes available again,
I/O is not reverted to it. MRU does not use the preferred path setting,
but can use path ranking. This is the default for most active-passive arrays

VMW_PSP_FIXED – the Fixed policy uses a designated preferred path;
if no preferred path is assigned, the first working path discovered at boot
is selected. When the preferred path becomes unavailable, an alternate path
is chosen, and when it becomes available again, I/O is reverted back to it.
This policy is the default for most active-active storage systems


VMW_PSP_RR – the Round Robin policy uses an automatic path
selection algorithm rotating through configured paths, selecting optimal
paths based on I/O bandwidth and latency. Active-passive arrays use all
active paths, while active-active arrays use all available paths

▪ Storage Array Type Plugin (SATP) is responsible for array-specific
operations: determining the state of a path, monitoring path health,
performing path activation, and detecting path errors. Rules are searched
first by driver, then vendor/model, then lastly by transport. One of the
following generic SATP policies is used:

VMW_SATP_LOCAL – is used for local devices
VMW_SATP_DEFAULT_AA – is used for active-active arrays
VMW_SATP_DEFAULT_AP – is used for active-passive arrays
VMW_SATP_DEFAULT_ALUA – is used for ALUA-compliant arrays

If no SATP rule matches, the default is VMW_SATP_DEFAULT_AA with PSP VMW_PSP_FIXED
If the VMW_SATP_DEFAULT_ALUA policy is used, the default PSP is VMW_PSP_MRU
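The key difference between MRU and Fixed is failback behavior, which can be sketched as a decision rule (illustrative, not VMware code; the path names are made up):

```python
def next_path(policy, current, preferred, available):
    """Pick the path for the next I/O under the MRU or Fixed policy (sketch)."""
    if current in available:
        if policy == "VMW_PSP_FIXED" and preferred in available:
            return preferred   # Fixed reverts (fails back) to the preferred path
        return current         # MRU keeps using the working path
    return available[0]        # current path failed: choose an alternate

# Preferred path vmhba0 is down; both policies stay on the alternate.
down = ["vmhba1:C0:T0:L0"]
assert next_path("VMW_PSP_MRU", "vmhba1:C0:T0:L0", "vmhba0:C0:T0:L0", down) == "vmhba1:C0:T0:L0"

# Preferred path returns: Fixed fails back, MRU does not.
both = ["vmhba0:C0:T0:L0", "vmhba1:C0:T0:L0"]
assert next_path("VMW_PSP_FIXED", "vmhba1:C0:T0:L0", "vmhba0:C0:T0:L0", both) == "vmhba0:C0:T0:L0"
assert next_path("VMW_PSP_MRU", "vmhba1:C0:T0:L0", "vmhba0:C0:T0:L0", both) == "vmhba1:C0:T0:L0"
```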

o Multipathing Plugins (MPP) are 3rd party plugins created by using various
VMkernel APIs. These 3rd party plugins offer multipathing and failover functions
for a particular storage array
▪ VMware High Performance Plugin (HPP) is like the NMP but is used for
high-speed storage devices, such as NVMe PCIe flash, and uses the
following Path Selection Schemes (PSS):
– Fixed
– LB-RR (Load-Balance Round Robin)
– LB-IOPs
– LB-Bytes
– LB-Latency
▪ Claim Rules determine whether a MPP or NMP owns a path to a storage
device. NMP claim rules match the device with a SATP and PSP, whereas
MPP rules are ordered with lower numbers having precedence
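The claim-rule ordering can be sketched like this: rules are evaluated in ascending rule-number order and the first match claims the path (the rule numbers and predicates here are illustrative, not real ESXi claim rules):

```python
# Each rule: (number, owning plugin, match predicate). Lower numbers win.
rules = [
    (101, "MASK_PATH", lambda p: p.get("masked", False)),
    (65535, "NMP", lambda p: True),   # catch-all default at the highest number
]

def claim(path: dict) -> str:
    """Return the plugin that claims this path (first match in number order)."""
    for number, plugin, matches in sorted(rules, key=lambda r: r[0]):
        if matches(path):
            return plugin

assert claim({"masked": True}) == "MASK_PATH"
assert claim({"masked": False}) == "NMP"
```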


Below is VMware’s visual representation of the PSA and Plugins architecture

1.3.4 – Describe Storage Policies

• Storage Policies – a fundamental vSphere object controlling which underlying types
of storage are provided to which VMs, assuring compliance and guaranteeing SLAs.
Before configuring Policies, make sure you've installed storage (VASA) providers or
I/O filters on your ESXi Hosts, and created tags to associate underlying storage with
its capabilities. You can also modify Storage Policy Components to meet your specific
needs, then use those when configuring your Policies
VM Storage Policies
o Policy characteristics (i.e. storage requirements):
▪ Types = Host, vSAN, vSAN Direct, Tag, or combination of the 4
o Policies to configure
▪ Host policies:
– Encryption
– Storage I/O Control: IOPs limits/reservation/shares
▪ vSAN policies:
– Availability: Site Disaster Tolerance , Failure to Tolerate settings

– Storage rules policies:


* Encryption: Data At-Rest, No preference, None
* Space Efficiency: Dedup & Compression, Compression, None

* Tier - All Flash, Hybrid


– Advanced rules: Disk Stripes, IOPs, Space Reservation, Flash cache,
Disable Checksum, Force Provisioning
– Tags
▪ vSAN Direct (Datastore) Placement: uses Tagging
▪ Tags
o Once policies are configured, assign a Datastore to the configured above criteria,
then assign the Policy(ies) to desired VMs to meet SLAs and/or underlying
storage requirements
o Using the Policy wizard, different policy Rules or Rule Sets can be defined:
▪ Datastore-specific Rules
▪ Placement Rules - Capability
▪ Placement Rules - Tags
▪ Host-based Services Rules

1.3.5 – Describe Basic Storage Concepts in vSAN, and vVols

• Virtual SAN (vSAN)


o vSAN is a distributed layer of software which aggregates local or direct-attached
storage into a single storage pool shared across all ESXi Hosts in the vSAN
Cluster. vSAN uses a software-defined approach, virtualizing Host physical
storage into pools of storage which are then assigned to VMs according to their
quality-of-service (QoS) requirements
o vSAN can be implemented in one of 2 architectures – vSAN Original Storage
Architecture, which uses the traditional Disk Group using Hybrid storage; and
vSAN Express Storage Architecture (ESA), which uses All Flash devices, used for


both cache and capacity, aggregated to form a storage pool. There can only be 1
storage pool configured per ESXi Host
o The Original Storage Architecture uses flash and capacity (HDD) Drives combined
to form a Disk Group. A Disk Group contains a minimum of 2 drives – one
flash drive used for caching, and one or more HDDs used for capacity. A
maximum of 5 Disk Groups can be configured per ESXi Host. I'll discuss more
limitations and requirements in Objective 1.7
o A vSAN Datastore consists of the following object types
▪ VM Home Namespace – VM home directory where all VM configuration
files are stored
▪ VMDK – a VM disk file; .vmdk
▪ VM Swap Object – created when powering on a VM
▪ Snapshot Delta VMDKs – created upon taking VM snapshots; not created
on ESA vSANs
▪ Memory Object – created when the snapshot memory option is selected
when creating or suspending a VM
o Some more Disk Group characteristics to be aware of:
▪ Only 1 flash device can be used per Disk Group
▪ There can only be a maximum of 7 HDDs used per Disk Group
▪ When creating a Disk Group, VMware suggests using a 10% Flash Cache
to Capacity ratio
▪ ESXi Hosts cannot contribute to more than one vSAN Cluster; but Hosts
can access external storage shared across Clusters (i.e. SAN, NFS, vVol)
▪ Whether an ESXi Host has 1 or several Disk Groups, all Disk Groups
configured on every Host in the Cluster only form 1 vSAN Datastore
▪ vSAN does not support DPM, RDM, SIOC, or SE Sparse Disks
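The OSA disk-group limits above can be expressed as a quick validator (a sketch; note VMware's 10% flash-cache-to-capacity ratio is guidance, not a hard limit, so it isn't enforced here):

```python
def check_disk_group(flash_devices: int, hdds: int) -> list:
    """Validate one OSA disk group against the rules listed above."""
    errors = []
    if flash_devices != 1:
        errors.append("exactly 1 flash (cache) device per Disk Group")
    if not 1 <= hdds <= 7:
        errors.append("1-7 capacity HDDs per Disk Group")
    return errors

def check_host(disk_groups: list) -> list:
    """Validate a host's disk groups; each entry is (flash_devices, hdds)."""
    if len(disk_groups) > 5:
        return ["maximum 5 Disk Groups per ESXi Host"]
    return [e for g in disk_groups for e in check_disk_group(*g)]

assert check_host([(1, 7), (1, 2)]) == []        # two valid disk groups
assert check_host([(2, 8)]) != []                # too many flash devices and HDDs
```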

• Virtual Volumes (vVols) – I discussed vVols in depth in Sub-Objective 1.3.2 above. Refer
to that section to review vVol concepts

1.3.6 – Identify Use Cases for Raw Device Mapping (RDM), Persistent Memory (PMem), Non-
Volatile Memory Express (NVMe), NVMe over Fabrics (NVMe-oF), and RDMA (iSER)

• Raw Device Mapping (RDM) – RDM provides a mechanism for a VM to have direct
access to a LUN on a physical storage subsystem. RDM is a mapping file which acts as a
proxy for a raw physical storage device (see screenshot below). VMFS Datastores are
pretty much the standard when assigning storage to VMs, but there are a couple use-
cases when RDMs are preferred:
o SAN snapshot or other layered applications run in a VM. Using RDM enables
backup offloading systems using features inherent to the SAN
o Microsoft Clustering Service (MSCS) spanning physical hosts


• RDM Benefits include the following:


o User-friendly persistent names
o Dynamic name resolution
o Distributed file locking
o File permissions
o File system operations
o VM Snapshots – virtual compatibility mode only
o vMotion
o SAN management agents
o N-Port ID virtualization

• Persistent Memory (PMem or NVM) – I touched on this in Sub-Objective 1.3.2, but to


further expand on what I shared earlier, PMem devices combine performance and speed
for VM workloads such as acceleration databases and analytics which require high
bandwidth, low latency, and persistence.
o Once added to an ESXi Host, the Host detects the hardware and then formats
and mounts it as a PMem Datastore, using VMFS-L as the filesystem. Only one
PMem Datastore per ESXi Host is supported
o PMem Datastores can only store NVDIMM devices and VM virtual disks. They
cannot be used to store other VM files such as .vmx or .log, etc.
o PMem has two access modes:
▪ Direct-access mode (vPMem) – PMem regions present NVDIMMs to VMs;
VM hardware must be vSphere 6.7+ & guest OS Windows 2016+
▪ Virtual disk mode (vPMemDisk) – available to any VM with any OS for
configuring virtual disks


• Non-Volatile Memory Express (NVMe) – is a standardized protocol designed specifically


for high-performance multi-queue communication with NVM (PMem) devices. NVMe
can connect to local or network storage devices. Serves as alternative to SCSI storage
o NVMe over Fabrics (NVMe-oF) – fabric transports used by the VMware NVMe
protocol, providing distance connectivity between a Host and a target
storage device on a shared storage array. NVMe-oF targets are discovered by the
VMware HPP via a Discovery Controller, which returns a list of NVMe
Controllers, each in turn associated with NVMe namespaces – similar to a
storage volume or LUN. Namespaces are where you create Datastores. The
following are different types of NVMe-oF:
▪ NVMe over RDMA
▪ NVMe over FC (FC-NVMe)
▪ NVMe over TCP
o Since these are a type of PMem, the use cases for this transport are the same as
shared above – any workloads which require high bandwidth, low latency, and
persistence, such as those using acceleration databases or analytics

• RDMA (iSER) – iSCSI Extensions for RDMA (iSER) uses the same iSCSI framework, but
replaces the TCP/IP transport with the Remote Direct Memory Access (RDMA)
transport. Using this transport allows data to transfer directly between the memory
buffers of the ESXi Host and storage devices, using an underlying RDMA fabric. This
eliminates TCP/IP processing overhead, and thus reduces latency. To use RDMA, an
RDMA-capable network adapter must be installed in the ESXi Hosts (e.g. Mellanox
MT27700), and underlying physical switches must support RDMA
o There are no specific use cases provided in VMware documentation on why you
would use iSER over iSCSI. But if you have workloads needing lower latency than
regular iSCSI provides, and you don't want to incur the costs of implementing the
high-speed storage solutions mentioned above, this is a viable alternative

1.3.7 – Describe Datastore Clusters (SDRS)

• vSphere Storage DRS (SDRS)


o SDRS is the result of creating a Datastore Cluster. A Datastore Cluster is a
collection of datastores to manage storage resources and support resource
allocation policies created at the Cluster level. Like compute DRS, if SDRS is
configured for manual mode you are prompted for initial placement of VM
virtual disks. In automatic mode, VM disks are moved for you


o You can use the following resource management capabilities to trigger SDRS
recommendations
▪ Space utilization load balancing is used to trigger recommendations
based off a set threshold on a datastore. If the threshold is exceeded, a
migration is recommended
▪ I/O latency load balancing can be based off a set I/O latency threshold,
below which migrations are not allowed
▪ Anti-affinity rules can be used to have VM disks on different datastores
Each setting above can be configured to: Use cluster settings, No automation
(manual mode), or Fully automated
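The space-utilization trigger above amounts to a simple threshold check; a sketch (the 80% threshold is the commonly cited default, and the datastore names are made up):

```python
def sdrs_space_recommendations(datastores: dict, threshold_pct: int = 80) -> list:
    """Recommend migrating off any datastore whose used-space % exceeds the threshold."""
    return [name for name, used_pct in datastores.items() if used_pct > threshold_pct]

# ds1 exceeds the 80% threshold, so SDRS would recommend a migration from it.
assert sdrs_space_recommendations({"ds1": 85, "ds2": 60}) == ["ds1"]
```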

• I’ll share more about requirements and cover a few advanced features/options later in
Sub-Objective 7.4.3

1.3.8 – Describe Storage I/O Control (SIOC)

• Storage I/O Control (SIOC)


o SIOC provides Cluster-wide storage I/O prioritization, allowing better workload
consolidation and helping reduce cost due to over-provisioning. It uses per-VM
Shares and Limits constructs to control the I/O allocated to VMs during congestion
o Configuring SIOC:
▪ Enable SIOC on a Datastore: right-click > Configure Storage I/O Control
▪ Set SIOC Shares and upper Limit of IOPs allowed for each VM


o SIOC has the following requirements/limitations, & is configured per Datastore:


▪ vCenter Server and ESXi 4.1 or higher
▪ Datastore with SIOC can be managed by only 1 vCenter Server
▪ Datastore with SIOC cannot have RDMs; iSCSI, FC, NFS are supported
▪ Datastore with SIOC cannot have multiple extents
o SIOC integrates with Storage DRS based on SPBM (Storage Policy) rules
▪ The SDRS EnforceStorageProfiles Advanced option has 1 of 3
values:
- 0 (default value) means SPBM policy is not enforced on the SDRS Cluster
- 1 means SPBM policy is soft enforced on the SDRS Cluster
- 2 means SPBM policy is hard enforced on the SDRS Cluster
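To make the Shares and Limits constructs from the SIOC section concrete, here is a hedged sketch (the VM names and numbers are invented, and SIOC's real scheduler is more sophisticated) of dividing a congested datastore's available IOPS among VMs in proportion to their shares, capped by any per-VM Limit:

```python
def sioc_allocate(vms, available_iops):
    """Divide available IOPS proportionally to shares, capped by per-VM limits.

    vms: {name: (shares, limit_iops_or_None)}
    """
    total_shares = sum(s for s, _ in vms.values())
    alloc = {}
    for name, (shares, limit) in vms.items():
        fair = available_iops * shares / total_shares  # proportional share
        alloc[name] = min(fair, limit) if limit is not None else fair
    return alloc

# Two VMs competing for 3000 IOPS; VM-B has double the shares of VM-A
print(sioc_allocate({"VM-A": (1000, None), "VM-B": (2000, None)}, 3000))
# {'VM-A': 1000.0, 'VM-B': 2000.0}
```

Remember SIOC only enforces this prioritization during congestion; when the datastore is not congested, VMs are not throttled by shares.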

Objective 1.4: Describe ESXi Cluster Concepts


Resource: vSphere 8.0 Availability Guide (2022), pp. 11-48
vSphere 8.0 Resource Management Guide (2022), pp. 71-88; 107-126
vSphere 8.0 vCenter Server and Host Management Guide

In this Objective, I’ll be discussing Cluster terms, concepts, and limitations. I'll share about
configuration and management of ESXi Clusters later in Section Objectives 4.6, 4.7, and 7.5.

1.4.1 – Describe VMware Distributed Resource Scheduler (DRS)

• vSphere Distributed Resource Scheduler (DRS)


o Simply put, a DRS Cluster is a collection of ESXi Hosts and VMs with shared
resources and a shared management interface. When VMs are powered on in the
Cluster, DRS automatically places them on Hosts in such a way as to optimize VM
resource utilization in the Cluster. DRS performs admission control during this
Initial Placement phase by verifying there are sufficient Cluster resources to
power the VMs on. If there aren't, the operation is not allowed. If there are
available resources, DRS displays a recommendation in the Power On
Recommendations section as to where to place the VM(s). If the DRS Automation
Level is set to Fully Automated, the recommended action is performed
automatically. For group VM power-on, initial placement recommendations are
given at the Cluster level. If any VM in a group power-on has its placement
settings set to manual, then all VM power-on placement recommendations will be
manual. DRS automatic migrations have 1 of 5 threshold levels, ranging from
Conservative (mandatory migrations only) to Aggressive (more frequent
migrations); the default threshold is Level 3
o Additionally, when a DRS Cluster becomes resource-unbalanced, DRS migrates
(or recommends migration of) VMs to other Hosts in the Cluster using VMware
vMotion. If the Automation Level is Manual or Partially Automated, a
recommendation is given and manual intervention is required. If needed, you can
specify per-VM migration settings
o With the release of vSphere 7U1, DRS relies on vSphere Cluster Services
(vCLS) and the availability of the vCLS "agent" VMs. With vSphere 7U2, vCLS is
enabled by default and ensures that if vCenter becomes unavailable, Cluster
services remain available to maintain the resources and health of workloads
running in Clusters. vCLS agent VMs get created when you add ESXi Hosts to
Clusters; up to 3 vCLS VMs per Cluster are required and are distributed among
the Hosts in the Cluster. If there are only 1 or 2 ESXi Hosts in the Cluster,
then only 1 or 2 vCLS VMs get created. The vCLS VMs are managed by the ESX
Agent Manager and Workload Control Plane vCenter services. Also, vCLS VMs can
run on ESXi versions other than v7 (e.g. 6.7), as long as that version is
compatible with vCenter Server 7U2. For vCLS VM Datastore placement, DRS uses
the following:
▪ Shared Datastores are selected over local Datastores if possible
▪ Datastores with more free space have preference
▪ No more than 1 vCLS VM is placed on a Datastore if possible
o vCLS VM characteristics:
▪ vCLS VMs do not have a NIC
▪ You can tag vCLS VMs
▪ You can Storage vMotion vCLS VMs
▪ You can assign a different Storage Policy to vCLS VMs (a warning message
is displayed)
▪ Anti-affinity rules are created to keep vCLS VMs off the same Host, and
are checked every 3 minutes
▪ vCLS VMs are not displayed in the Hosts & Clusters view, but are rather
placed in a VMs and Templates folder, named vCLS
▪ If you need to put an ESXi Host into Maintenance Mode, vCLS VMs must be
manually Storage vMotioned off, or you can put the Cluster in Retreat Mode
Retreat Mode:
– Copy the Cluster ID part of the Cluster browser URL:
domain-c<number>
– Go to vCenter > Configure > Advanced Settings, and add a new setting
for: config.vcls.clusters.domain-c<number>.enabled
using the cluster ID copied above, and set the value to False
– Click Save
– Results of doing so:
1. vCLS Service runs every 30secs
2. After 1 minute, all vCLS VMs are cleaned up
3. Cluster Service health = Degraded


4. If DRS is enabled, it stops functioning and will only become functional
again when Retreat Mode is disabled (i.e. changing the False value to True)
▪ Since DRS relies on these VMs, they are always on. For a list of operations
which might disrupt these VMs, see pg. 86, Resource Management Guide
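The Retreat Mode steps above can be sketched as a small helper that extracts the cluster ID from the Cluster's browser URL and builds the advanced-setting name (the URL below is a made-up example; you would still add the setting, with a False value, manually under vCenter > Configure > Advanced Settings):

```python
# Sketch: derive the vCLS Retreat Mode advanced-setting name from a Cluster's
# browser URL. The cluster ID is the "domain-c<number>" portion of the URL.
import re

def retreat_mode_setting(cluster_url):
    match = re.search(r"(domain-c\d+)", cluster_url)
    if match is None:
        raise ValueError("no cluster ID found in URL")
    return f"config.vcls.clusters.{match.group(1)}.enabled"

# Hypothetical example URL, for illustration only
url = "https://vcenter.example.com/ui/app/cluster;nav=h/urn:vmomi:ClusterComputeResource:domain-c8:abc"
print(retreat_mode_setting(url))  # config.vcls.clusters.domain-c8.enabled
```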
o DRS Migration recommendations can be one of the following:
▪ Balance average CPU load or reservations
▪ Balance average memory load or reservations
▪ Satisfy resource pool reservations
▪ Meet affinity rules
▪ Host is entering maintenance mode
o Also new in vSphere 7, DRS uses a 'VM happiness' score to determine
migration recommendations, which measures VM execution efficiency and is known
as the VM DRS Score. The Cluster DRS Score, in turn, is a weighted average of
the VM DRS Scores of all powered-on VMs in the Cluster. I will share more
detail below in Sub-Objective 1.4.3
o DRS also allows you flexibility in controlling VM placement using affinity and anti-
affinity rules, allowing you to either keep VMs together on the same Host or
place them on separate Hosts. You can create 2 types of rules:
▪ VM-Host affinity or anti-affinity rules are specified against a group of
VMs and group of ESXi Hosts. Good use case is for licensing requirements
– Affinity rule: group of VMs should or must run on a group of Hosts
– Anti-affinity rule: VMs in a group cannot run on a group of Hosts
▪ VM-VM affinity or anti-affinity rules are used between individual VMs
– Affinity rule: DRS tries to keep a pair of VMs on the same Host
– Anti-affinity: DRS tries to keep a pair of VMs on different Hosts
o If 2 rules are in conflict with each other, the older rule takes precedence and the
newer rule is disabled. Also, DRS gives higher precedence to anti-affinity rules
over affinity rules. Any violations of affinity rules can be found under the Cluster
> Monitor tab > vSphere DRS section, then click Faults
o DRS Clusters can be in 1 of 3 states: valid (green), overcommitted (yellow), and
invalid (red)
o Keep in mind the following behaviors when using DRS, affinity rules, and
resource pools:
▪ When DRS is disabled, any configured affinity rules are lost
▪ When DRS is disabled, all resource pools are removed and are only
recoverable if you saved a snapshot of the resource pool tree
▪ When adding a standalone Host to a DRS Cluster or removing a Host from
the Cluster, the Host’s Resource Pools are removed


1.4.2 – Describe Enhanced vMotion Compatibility (EVC)

• Enhanced vMotion Compatibility (EVC)


o vSphere EVC is a Cluster feature to improve vMotion compatibility between
source and destination Hosts which have differing CPU feature sets. It allows you
to mask these CPU feature sets from VMs based on the following – ESXi Host
version, Host CPU family/model, BIOS settings, VM compatibility setting, and VM
guest OS. If there is a mismatch between source and destination Host user-level
or kernel-level CPU features, migration will not be allowed. User-level features
include SSE3, SSSE3, AES, etc and kernel-level features include AMD-NX or Intel-
XD security. When using EVC, you configure all Cluster Hosts processors to
present the feature set of a baseline processor, called EVC mode, to mask certain
CPU features so Hosts can present a feature set of an earlier processor. EVC only
masks CPU feature sets affecting vMotion compatibility. With vSphere 7U1, EVC
now supports Virtual Shared Graphics Acceleration (vSGA). To enable EVC, Hosts
must meet the following requirements:
▪ Check CPU compatibility on the HCL
▪ ESXi 6.5 or later
▪ Hosts are connected to vCenter Server
▪ A single processor vendor used for all Cluster ESXi Hosts – Intel or AMD
▪ Configure Hosts for vMotion
▪ Enable BIOS features if available:
– Intel VT-x or AMD-V hardware virtualization
– Intel XD or AMD NX
▪ Power off VMs or migrate out of the Cluster
o Be aware when changing EVC in a Cluster, a VM does not use the new feature set
until it is powered-off then back on, not restarted

1.4.3 – Describe How DRS Scores Virtual Machines

The vSphere Resource Management documentation shares how DRS scores VMs, but not
with much detail. As such, I will reference two blog posts from VMware Technical Marketing
Architect Niels Hagoort, who goes into more Score detail. I’ll just cover the main points, but
for further reading & to fill in the blanks, the posts can be read from here & here.

• In previous versions of vSphere, DRS used a Cluster-wide standard deviation model to


optimize workload 'happiness'. The previous DRS model focused on the ESXi Host,
whereas the new model focuses on the VM itself.


DRS Deviation Model (previous vSphere versions)

DRS VM Score Model (expressed as a percentage)

'VM happiness' refers to the ability of a VM to consume the resources it is
entitled to, which depends on many factors such as – Cluster sizing, ESXi Host
utilization, workload characteristics, and VM configuration of vCPU/Memory/Network.
The VM DRS Score is simply the goodness ('happiness') of the VM on its current Host in
the Cluster, expressed as a percentage, and calculated every minute. The Goodness
Modelling can be expressed as VMs having an ideal throughput and actual throughput
for resources (CPU, RAM, Network). When there's no contention, those 2 items are
equal (actual = ideal). When there is contention for resources, there is a cost for the
resource, which hurts the VM’s actual throughput metric.

VM under resource contention with other Cluster VMs


Actual = “goodness”; Ideal = “demand”
o Actual = Ideal – Cost
o Efficiency = Actual / Ideal
o Total Efficiency = Efficiency of CPU * Efficiency of Memory * Efficiency of
Network
o Total Efficiency = VM DRS Score; or, more specifically, the combination of the
efficiency of each resource
o What are the Costs? For CPU:
▪ Cache
▪ Ready
▪ Tax
o Memory Cost?
▪ Burstiness
▪ Reclamation
▪ Tax
o Network?
▪ Utilization
o There’s also a cost for migration of the VM (vMotion)
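Pulling the formulas above together, a toy calculation of a VM DRS Score might look like the following (the throughput and cost numbers are invented, purely to show the arithmetic):

```python
# Worked example of the efficiency math above (numbers are illustrative only).
# Per resource: actual = ideal - cost, efficiency = actual / ideal.
# The VM DRS Score is the product of the per-resource efficiencies.
def vm_drs_score(ideal, costs):
    """ideal: {resource: ideal throughput}; costs: {resource: contention cost}."""
    score = 1.0
    for resource, ideal_tp in ideal.items():
        actual = ideal_tp - costs.get(resource, 0)  # actual = ideal - cost
        score *= actual / ideal_tp                  # multiply efficiencies
    return score * 100                              # expressed as a percentage

ideal = {"cpu": 100, "mem": 100, "net": 100}
costs = {"cpu": 10, "mem": 5, "net": 0}  # e.g. ready time, reclamation
print(round(vm_drs_score(ideal, costs), 1))  # 85.5
```

So a VM with 90% CPU efficiency and 95% memory efficiency lands at a VM DRS Score of roughly 85%, even though no single resource is heavily contended.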

1.4.4 – Describe VMware vSphere High Availability (HA)

• vSphere HA
o vSphere HA provides high availability to virtual machines by combining VMs and
the Hosts they reside on into vSphere Clusters. ESXi Hosts are monitored and, in
the event of a Host failure, VMs are restarted on other available ESXi Hosts in the
Cluster
o vSphere HA works in the following way: when vSphere HA is enabled, vCenter
Server installs the HA agent (FDM) on each ESXi Host. The ESXi Host with the
most datastores connected to it has the advantage in becoming the primary, as
determined by an election process, and the remaining Hosts are secondaries. The
primary Host monitors secondary Hosts every second through network
heartbeats through the management network, or if vSAN is used it’s the vSAN
network. If heartbeats are not received through this method, before the primary
Host determines a secondary Host has failed, the primary Host detects the
liveness of the secondaries by seeing if they are exchanging datastore
heartbeats. The default number of datastores used in heartbeating is 2, and a
maximum of 5 can be configured using the vSphere HA Advanced option:
das.heartbeatDsPerHost. When vSphere HA is enabled, a root directory is
created on datastores used for heartbeating, named .vSphere-HA. vSAN
datastores cannot be used for heartbeating
o The Primary Host is responsible for detecting the following types of Host failures:
▪ A Host failure is when a Host stops functioning
▪ A Host network partition is when a Host loses connection to the primary
▪ A Host network isolation is when a Host is running but cannot observe
traffic from other Host HA agents, or is unable to ping the network
isolation address

– When a Host is running but isolated, you can configure one of 3


responses:
Disabled means host isolation responses are suspended. Also, if an
individual VM has its isolation response disabled, then no isolation
response is made for that VM
Power off and restart VMs hard powers VMs and restarts them on
another Host
Shutdown and restart VMs gracefully shuts down VMs if VMware Tools is
installed on the VM, and restarts them on another Host

▪ A Proactive HA failure occurs when there is a failure in a Host component


resulting in loss of redundancy, for example a power supply failure. For
these failures, remediation can be handled via automation in the vSphere
Availability section of the vSphere Client

o The vSphere HA Primary Host uses several factors when determining which
available Cluster Hosts to restart VMs on:
▪ File accessibility of a VM’s files
▪ VM Host compatibility with affinity rules
▪ Available resources on a Host to meet VM reservations
▪ ESXi Host limits, for example VMs per Host or vCPUs per Host maximums
▪ Feature constraints, e.g. vSphere HA does not violate anti-affinity rules or
fault tolerance limits


1.4.5 – Identify Use Cases for Fault Tolerance (FT)

• Fault Tolerance (FT) provides continuous availability to your most mission-critical VMs.
When enabled on a VM, the source VM, called the primary or protected VM, replicates
to a newly created secondary VM on another ESXi Host separate from the primary VM.
These VMs continuously monitor the status of one another ensuring FT is maintained.
The use-case for having FT VMs is to ensure continuation of service in the event of an
ESXi Host failure only. FT VMs are not backups! Secondary VMs are not a completely
isolated copy of a primary VM with incremental data to rollback to. So, if any data gets
corrupt within the primary VM guest OS, this same corrupt data gets synchronized over
to the secondary VM, thus making the secondary VM corrupt as well. Recoverability is
thus only achieved by having a VM backup solution in place. FT also avoids a 'split-brain'
situation, which can arise from having 2 active VM copies after a failure. This is achieved
by atomic file locking on shared storage to coordinate failover, so only 1 VM continues
running as primary and a new secondary is respawned automatically. Also, FT can now support
SMP VMs with up to 8 vCPUs (Enterprise Plus licensing)

• VMware states FT has the following specific use-cases:


o Applications which must always be available, especially with long client
connections
o Custom applications with no other Clustering options
o Cases where there may be Clustering options, but are too complicated to
implement and maintain

• FT has the following limitations:


o No snapshots
o No Storage vMotion
o No Linked Clones
o VMs cannot reside on vVol Datastores
o No SPBM (Storage Policy Based Management) is allowed
o No VBS-enabled VMs

Objective 1.5: Explain the Differences Between VMware Standard Switches (VSS)
and Distributed Switches (VDS)
Resource: vSphere 8.0 Networking Guide (2022)

• Standard Switch and Distributed Switch Differences


o In describing each Switch type, understand that whatever is not listed generally
means both types have the same features (i.e. VMkernel capability, Port Groups,
VLANs, etc.). VSS and VDS differences are as follows:


▪ VSS is host-based, whereas VDS is centralized in vCenter


▪ VSS comes with ESXi installed by default, whereas VDS requires
Enterprise Plus licensing and vCenter Server
▪ Since vSphere 5.1 both VSS and VDS have the capability to
rollback/recover the management network via the Host DCUI
▪ Both have 2 component 'planes' making up its construct – management
plane (configures data plane) and data plane (switching, filtering, tagging,
etc). But with a VSS, they are as one and reside and are configured
individually per Host; with a VDS the planes are separate, with
management being vCenter Server and data still residing on the Host and
sometimes referred to as the host proxy switch
▪ VDS offers more feature sets than a vSS, including
– NIOC capability
– Load balance policy based off physical NIC Load
– Netflow traffic monitoring
– Port Blocking
– Inbound traffic shaping, in addition to outbound; VSS is outbound only
▪ VDS offers networking vMotion

See below graphic for further vSwitch Policy differences:


• vSphere Standard Switch (VSS)


o To provide network connectivity to ESXi Hosts and VMs, you connect Host
physical NICs (pNics/vmnics) to standard switch uplink ports to provide external
network connectivity. In turn, VMs have virtual NICs (vNics) you connect to VSS
port groups. And each VSS port group can use one or more Host vmnics to
handle network traffic. When connecting 2 or more vmnics to a standard switch,
they are transparently teamed. If no vmnic is connected to a port group, VMs
connected to the same port group can communicate with each other in what’s
referred to as an isolated environment, because there is no communication with
the external network. And, to ensure efficient use of Host resources, standard
switch ports are dynamically scaled up and down, up to the Host maximum
supported (1016 active/ephemeral ports)
o Each standard switch port group is identified by a network label, unique to the
Host. You can use network labels to make network configuration of VMs
portable across Hosts. This can be achieved by creating the same network label
name on port groups in a datacenter that use vmnics connected to one
broadcast domain on the physical network. For example, you can create
Production and Test port groups on Hosts sharing the same broadcast domain

VMware visual representation of standard switches is shown below:


o When creating a standard switch, you have the option to create it in one of 3
ways - VMkernel Network Adapter, Physical Network Adapter, Virtual Machine
Port Group for a Standard Switch

VMkernel Network – see the next Sub-Objective for more information

Virtual Machine Port Group handles VM network traffic on the standard switch

Physical Network Adapter provides external network connectivity

o VSS networking management, such as creating a VSS, adding port groups,


configuring EST/VST/VGT VLAN tagging, & MTU size will be discussed in Obj. 4.2

1.5.1 – Describe VMKernel Networking

• VMkernel Networking
o VMkernel networking is used for handling system network traffic and provides
connectivity between Hosts. Keep in mind when configuring virtual networking,
if creating a vMotion network, all ESXi Hosts for which you want to migrate VMs
to must be in the same broadcast domain – Layer 2 subnet. The VMkernel
Network has several TCP/IP Stacks:
▪ Default – supports several network traffic services (see pg. 46)
▪ vMotion – for VM live migration traffic


▪ Provisioning – for cold migration, cloning, and snapshot traffic


▪ Custom – create a separate stack for your needs; must be done via CLI
▪ Mirror – for when using mirror stack on ERSPAN
o VMkernel System Traffic Types:
▪ Management – for configuration and mgmt communication for Hosts,
vCenter Server, and HA
▪ vMotion
▪ Provisioning
▪ IP Storage – software iSCSI, Dependent Hardware iSCSI, and NFS
▪ Fault Tolerance – inter-VM traffic, and FT logging
▪ vSphere Replication – outgoing source Host replication data transferred
to the Replication Server
▪ vSphere Replication NFC – incoming replication data at replication target
▪ vSAN
▪ vSphere Backup NFC
▪ NVMe over TCP
▪ NVMe over RDMA

1.5.2 – Manage Networking on Multiple ESXi Hosts with vSphere Distributed Switch (VDS)

• vSphere Distributed Switch (VDS)


o A Distributed Switch provides centralized management and monitoring of
vSphere networking configuration among all ESXi Hosts associated with it. A VDS
is setup on vCenter and all configured settings are propagated to Hosts
connected to it. That is also very relevant information to keep in mind in
distinguishing between a standard and distributed switch – standard switches
are local to ESXi Hosts, whereas distributed switches are a vCenter Server
construct and shared among the Hosts which are connected to it


o As mentioned at the beginning of this Objective, distributed switches consist of


two logical sections – the data plane (also called “host proxy switch”), which is
used for package switching, filtering, tagging, etc; and a management plane,
used for configuring data plane functionality. The management plane resides on
vCenter, while the data plane remains local to each ESXi Host. VDS versions have
evolved over the past few VMware releases. When creating a distributed switch,
you can view when certain feature sets were added to which version by clicking
the 'info icon' next to the "Feature per version" text:

Though not shown in the image below, the latest VDS version is 8.0


And not shown above, features such as Network I/O Control version 3 and
IGMP/MLD snooping were introduced in VDS version 6.0

o If you need to upgrade your VDS to a newer version, ESXi Hosts and VMs
connected to the VDS will experience brief downtime. To begin adding Hosts, rt-
click on the VDS > Add and Manage Hosts option

At the Manage Physical Adapters screen, expand each Host and select the
Physical Adapter you want added to the VDS and click Next. You can then
migrate any VMkernel Adapters or VM Networking (next 2 screens) if desired, or
create or move them later. Again, what’s nice about a VDS is it’s a "one-stop-shop"
configuration platform for all ESXi Hosts in your Datacenter. You can view
its Topology from the Configure tab

o A distributed switch has two abstractions used to create consistent networking


configuration for Hosts, VMs, and VMkernel services:
▪ Uplink port group, or dvUplink, is defined at VDS creation and can have
one or more uplinks. Uplinks are a template used to configure physical
connections to Hosts, as well as failover and load balancing policies. As
with a standard switch, you assign vmnics to VDS uplinks, configure
appropriate policies, and those settings are propagated down to the host
proxy switches
▪ Distributed port group provides network connectivity to VMs and
accommodate VMkernel traffic, each identified using a network label
unique to the datacenter. Like uplinks, you configure load balancing and
teaming policies, but additionally configure VLAN, security policy, traffic
shaping, and others, which are also propagated down to the host proxy
switches
o Lastly, I’ll leave you with a few best practices to keep in mind when setting up
your vSphere virtual networking environment. For a full list of VMware
recommendations, review pg. 279 in the Networking Guide
▪ Isolate your vSphere network services to improve security and
performance, i.e. management, vMotion, FT, NFS, etc
▪ Dedicate a 2nd vmnic to a group of VMs if possible, or use NIOC and traffic
shaping to guarantee bandwidth to VMs
▪ For highest security and traffic service isolation, create separate standard
and/or distributed switches and dedicate vmnics to them
▪ Keep vMotion traffic on a separate network as guest OS memory
contents is transmitted over the network


▪ Configure the same MTU size on all VMkernel network adapters,
otherwise you might experience network connectivity issues
▪ Use vmxnet3 vNics on VMs for best performance

1.5.3 – Describe Networking Policies

• Teaming and Failover Policy


o Load Balancing – handles only egress traffic. Ingress is configured on the physical
switch
▪ Route based on IP hash – vSwitch selects uplinks based on source &
destination IP of each packet
▪ Route based on source MAC hash – vSwitch selects uplinks based on VM
MAC address
▪ Route based on originating virtual port (default policy) – vSwitch selects
uplinks based on VM port IDs
▪ Use explicit failover – no actual load balancing. The vSwitch just uses the
vmnic which is first in the list
▪ Route based on physical NIC load (VDS only)
o Network Failure Detection
▪ Link Status Only – relies on pulled cables or pSwitch power failures, but
not on downstream issues or STP
▪ Beacon Probing – sends out beacon probes every second; best to use
with at least 3 vmnics in a team
o Notify Switches (Yes / No) – ESXi Host vSwitch sends notifications to the physical
network to update lookup tables when the Host physical NIC 1st connects to the
vSwitch or traffic is re-routed to a different vmnic in the team
o Failback (Yes / No) – enabled by default in a team; when a failed vmnic returns
online, it is set back to active and replaces the standby NIC
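As a toy model of the default "Route based on originating virtual port" policy above: the uplink choice depends only on the VM's virtual port ID, so a given VM sticks to one vmnic until the team changes. The modulo mapping here is my stand-in for illustration, not ESXi's actual internal mapping:

```python
# Hedged model of "Route based on originating virtual port" (the default
# load balancing policy). The same port ID always maps to the same uplink,
# so per-VM traffic does not hop between vmnics.
def select_uplink(port_id, uplinks):
    return uplinks[port_id % len(uplinks)]

team = ["vmnic0", "vmnic1"]
print(select_uplink(10, team))  # vmnic0
print(select_uplink(11, team))  # vmnic1
```

This is also why a single VM cannot exceed one vmnic's bandwidth under this policy, even with multiple uplinks in the team.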

• Security Policy
o Promiscuous Mode – when set to Accept, the vSwitch passes all traffic on the
port group to the VM; default = Reject
o MAC Address Changes – if the VM guest OS changes the effective MAC of the
VM, the vSwitch drops all INBOUND packets to the VM; default = Reject
o Forged Transmits – if the source MAC is different than what’s in the VM .vmx
file, the vSwitch drops all OUTBOUND traffic from the VM; default = Reject

• Traffic Shaping Policies


o Average bandwidth (Kbit/s)
o Peak bandwidth (Kbit/s)
o Burst size (KB)
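To see how the three shaping values interact, here is a rough worked example (the numbers are invented): the burst size bounds how long traffic can run at the peak rate before being pulled back to the average rate.

```python
# Rough illustration of how Burst size bounds a traffic burst (values are
# invented). Burst size (KB) is the extra data allowed above the average
# rate, so the time a flow can sustain peak rate is approximately:
#   burst_seconds = burst_bits / (peak - average)
def max_burst_seconds(avg_kbits, peak_kbits, burst_kb):
    burst_kbits = burst_kb * 8  # KB -> kilobits
    return burst_kbits / (peak_kbits - avg_kbits)

print(max_burst_seconds(avg_kbits=100_000, peak_kbits=200_000, burst_kb=12_500))
# 1.0 (one second at peak before shaping back to the average rate)
```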


1.5.4 – Manage Network I/O Control (NIOC) on vSphere Distributed Switch (VDS)

• Configure NIOC
o To first utilize NIOC, you have to create a Distributed Switch. The VDS features
you want or need in your organization will determine the NIOC version you need.
If a VDS is already created, then the only thing that needs to be done is to modify its
Properties and enable NIOC functionality.


o To configure and manage system traffic, you then click on the VDS > Configure,
then select System Traffic under the Resource Allocation section. You can then
click on a System Traffic type and Edit its resource allocation, controlling the
bandwidth this traffic type utilizes

o To manage VM Traffic, you would go into the Network Resource Pools area and
add an allocation. After you’ve created all you need, you then assign the
Resource Pool to the VDS Port Group the VMs are attached to


1.5.5 – Describe Network I/O Control (NIOC)

• Network I/O Control (NIOC)


NIOC version 3, introduced with VDS version 6.0, is used to allocate network bandwidth to
business-critical applications and resolve situations where several types of traffic
compete for network resources. NIOC allocates bandwidth to VMs via Shares,
Reservations, and Limits, concepts which will be discussed in more detail in Objective
5.1.1
o NIOC has the following requirements
▪ NIOC requires a vSphere Enterprise Plus license
▪ NIOC version 3 requires ESXi 6.0 and higher and VDS 6.0 and higher
▪ VMs cannot use SR-IOV
o Bandwidth can be allocated for the following system network traffic types
NOTE: total bandwidth reserved cannot exceed 75% of the bandwidth provided
by the physical adapter
▪ Management
▪ Fault Tolerance
▪ NFS
▪ vSAN
▪ vMotion
▪ vSphere Replication
▪ vSphere Data Protection Backup
▪ VM
▪ NVMe over TCP
o NIOC can be implemented in one of two allocation models for VMs:
▪ Allocation across an entire VDS
▪ Allocation on the physical adapter which carries VM traffic
o vSphere guarantees sufficient bandwidth by utilizing Admission Control at ESXi
Host and Cluster levels based on bandwidth reservation and teaming policy
▪ Admission control on the VDS verifies if an ESXi Host physical adapter can
supply minimum bandwidth to the VM according to teaming policy and
bandwidth reservation and the VM network adapter reservation is less
than the quota of the network resource pool
▪ Admission control with vSphere DRS ensures when powering on a VM,
the VM is placed on an ESXi Host to guarantee bandwidth for the VM
according to the teaming policy
▪ Admission control with vSphere HA ensures a VM is restarted on an ESXi
Host according to bandwidth reservation and teaming policies
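The 75% reservation rule noted above can be sketched as a simple admission check (the traffic types shown and all bandwidth numbers are invented for illustration):

```python
# Sketch of the NIOC reservation rule: total bandwidth reserved for system
# traffic types cannot exceed 75% of the physical adapter's capacity.
def nioc_reservation_ok(reservations_mbps, adapter_capacity_mbps):
    return sum(reservations_mbps.values()) <= 0.75 * adapter_capacity_mbps

reservations = {"vMotion": 2000, "vSAN": 3000, "VM": 2000}  # Mbit/s
print(nioc_reservation_ok(reservations, 10_000))  # True (7000 <= 7500)
```

On a 10 GbE adapter, for example, at most 7.5 Gbit/s of reservations can be configured; the remainder stays available for unreserved traffic.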


Objective 1.6: Describe VMware vSphere Lifecycle Manager Concepts


Resource: Managing Host and Cluster Lifecycle Guide (2022)

• vSphere Lifecycle Manager (vLCM) Overview


o I will just discuss vSphere Lifecycle Manager (vLCM) as an overview here, then
will go over configuration and management tasks in Objectives 4.14 and 7.9.
vSphere Lifecycle Manager is a vSphere install/update/upgrade patch
management service which runs on a PostgreSQL database. vLCM is an improved
version of VMware Update Manager (VUM). It uses the Internet to download
updates for the vLCM depot or, in more secure environments, uses the Update
Manager Download Service (UMDS) to perform the downloads, or you can
import them manually. Generally speaking, on the surface it is much the same
and has many of the same attributes as VUM – Baselines/Baseline Groups,
Imported ISOs, Updates, Stage/Remediate, Attach/Detach/Check Compliance,
Settings. So what’s the major difference? vLCM Images. vLCM Images are a
desired software specification which can be applied to all ESXi Hosts in a Cluster.
Software & firmware updates happen simultaneously in a single workflow.
Specifically, a vLCM Image is a combination of software, drivers, and firmware
installs & upgrades all packaged together. When creating an Image, there are 4
potential components you can add:
▪ ESXi version – this component is mandatory; others are optional
▪ Vendor add-on
▪ Firmware and drivers add-on
▪ Any additional components
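To make the Image structure concrete, here is a small hedged sketch of the four component slots (all names and versions below are invented; vLCM itself defines Images in the vSphere Client, not in Python):

```python
# Hypothetical representation of a vLCM Image's four component slots.
# Only the ESXi version is mandatory; the rest are optional.
def validate_image(image):
    if not image.get("esxi_version"):
        raise ValueError("ESXi version is mandatory in a vLCM Image")
    return True

image = {
    "esxi_version": "8.0 U1",             # mandatory
    "vendor_addon": "ExampleVendor-1.2",  # optional (name invented)
    "firmware_addon": None,               # optional firmware/drivers add-on
    "components": [],                     # optional additional components
}
print(validate_image(image))  # True
```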
o Another advantage of vLCM is the ability to check ESXi Hosts hardware against
the VMware Compatibility Guide (VCG). Initially, vLCM uses Baselines for
updating. You can create your image against a Cluster at any time, and when you
do, Baselines can no longer be used for the Cluster – the Baselines option
disappears from the vLCM UI (highlighted below).


With vLCM you can also update VM Tools and VM Hardware, as seen from the
list shown in the image above.

To check a Host against the VCG, select a Host in the Inventory in the Client >
Updates tab, then select Hardware Compatibility. Choose a vSphere version
from the drop-down, and click Apply. You will then see a list of items displayed
to verify compatibility with (last 3 screenshots)


o If you have multiple vCenter Servers, and thus multiple vLCMs, you can configure
each vLCM per vCenter only. Settings will not propagate to other vLCM
instances. Similarly, when patching, you do so to objects attached only to a given
vCenter Server.


o When you enable other VMware solutions which use a vLCM Image, the feature
automatically uploads an offline bundle with components into the vLCM depot
and adds its components to all Hosts in a Cluster. Cluster solutions currently
integrated with vLCM are: vSphere HA, vSAN, Tanzu, and NSX
o Though vLCM with Images is a powerful upgrade to VUM, it still may not be the
best upgrade solution for your organization. Review the differences between the
two approaches, then make the best upgrade management choice for your
environment (view all differences on pp. 23-25, Lifecycle Guide)


Objective 1.7: Describe the Basics of vSAN as Primary Storage


Resource: vSAN 8.0 Planning and Deployment Guide (2022)
Administering VMware vSAN 8.0 Guide (2022)

• vSAN Components:
o vSAN architecture components consist of:
▪ vSphere, minimum 5.5 U1
▪ vSphere Cluster
▪ Disk groups, consisting of at least 1 flash device for cache and at least 1,
but up to 7, hard disk drives for capacity storage; maximum of 5 groups
per ESXi Host
▪ Witness is the component containing metadata that serves as a
tiebreaker when a decision must be made regarding availability of
surviving datastore components
▪ Storage Policy-Based Management (SPBM) for VM storage requirements
▪ Ruby vSphere Console (RVC) is a CLI console for managing and
troubleshooting a vSAN Cluster
▪ vSAN Observer runs on RVC and is used to analyze performance and
monitor a vSAN Cluster
▪ Fault domains
▪ vSAN storage objects
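To make the Witness tiebreaker concept concrete, here's a minimal Python sketch (my own illustration, not VMware code) of the vSAN quorum rule – an object needs strictly more than 50% of its component votes to stay accessible:

```python
# Hedged sketch (not VMware code): why a Witness breaks ties.
# A vSAN object stays accessible only while more than 50% of its
# component votes are reachable; the witness holds the tiebreaker vote.

def object_accessible(votes_present: int, votes_total: int) -> bool:
    """vSAN-style quorum rule: strictly more than half the votes."""
    return votes_present * 2 > votes_total

# RAID-1 mirror: replica A (1 vote) + replica B (1 vote) + witness (1 vote).
# One replica lost, witness reachable -> 2 of 3 votes -> still accessible.
assert object_accessible(2, 3) is True
# Replica and witness both lost -> 1 of 3 votes -> inaccessible.
assert object_accessible(1, 3) is False
```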
o vSAN datastore components:
▪ A VM Home Namespace is the home directory where all VM
configuration files are stored, such as .vmx, .vmdk, .log, and snapshot
descriptor files
▪ VMDK is the VM disk file that stores a VM's hard disk contents
▪ A VM Swap object is created when a VM is powered on
▪ Snapshot Delta VMDKs are created when a snapshot is taken
▪ A Memory object is created when the snapshot memory is selected when
taking a VM snapshot

• vSAN Supportability
o vSAN does not support vSphere features such as: SIOC, DPM, RDMs, or VMFS

• vSAN Health and Topology info:


o VMs are in one of two compliance states, Compliant and Noncompliant, viewable
on the Virtual Disks page > Physical Disk Placement tab
o vSAN components can be in one of two following states:
▪ Absent is when vSAN detects a temporary component failure, e.g. a Host
is restarted



▪ Degraded is when vSAN detects a permanent component failure, e.g. a
component is on a failed device
o vSAN objects are in one of two following states:
▪ Healthy is when at least one full RAID-1 mirror is available or the
minimum required number of data segments are available
▪ Unhealthy is when there is no full mirror available or the minimum
required number of data segments are unavailable for RAID-5 or RAID-6
objects
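As a sizing aside, the RAID levels mentioned above carry different raw-capacity overheads. Here's a hedged Python sketch using the standard vSAN multipliers (2x for a RAID-1 mirror at FTT=1, 1.33x for RAID-5, 1.5x for RAID-6) – illustrative arithmetic, not an official sizing tool:

```python
# Hedged sketch: raw-capacity multipliers for the vSAN RAID schemes
# mentioned above (standard vSAN sizing figures, not an official tool).

def raw_capacity_gb(usable_gb: float, scheme: str) -> float:
    multipliers = {
        "RAID-1-FTT1": 2.0,   # full mirror: 2 copies of the data
        "RAID-5": 4 / 3,      # 3 data + 1 parity segments
        "RAID-6": 1.5,        # 4 data + 2 parity segments
    }
    return usable_gb * multipliers[scheme]

assert raw_capacity_gb(100, "RAID-1-FTT1") == 200
assert round(raw_capacity_gb(300, "RAID-5")) == 400
assert raw_capacity_gb(100, "RAID-6") == 150
```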
o vSAN Cluster can be one of two types
▪ Hybrid Clusters have a mix of flash storage, used for cache; and hard disk
storage, used for capacity
▪ All flash Clusters use flash for both cache and capacity
o vSAN Clusters can be Deployed in one of the following ways:
▪ Standard vSAN Cluster deployment consists of minimum of 3 ESXi Hosts
▪ Two-Node vSAN Cluster deployment is generally used for ROBO locations
with an optional Witness Host at a separate remote location
▪ Stretched vSAN Cluster provides resiliency against the loss of an entire
site by distributing ESXi Hosts in a Cluster across two sites with latency
less than 5ms between the sites, and a Witness Host at a 3rd location

1.7.1 – Identify Basic vSAN Requirements

• Virtual SAN (vSAN) Requirements


o To implement vSAN, the following minimum requirements must be met:
▪ 3 ESXi Hosts, or 2 ESXi Hosts and a Witness Host for Two-Node Clusters
▪ Only ESXi 5.5 U1 and later Hosts can join a vSAN Cluster
▪ 8GB RAM; 512GB RAM for vSAN ESA (see what ESA is below)
▪ 1 GbE for Hybrid Clusters; 10 GbE for All-Flash; 25 GbE for ESA
▪ 1 flash drive for cache, and 1 hard disk drive for capacity storage
▪ Storage controllers configured for passthrough (recommended) or RAID-0
▪ Deactivate storage controller cache, or set it to 100% read if not able to
deactivate
▪ Set storage controller queue depth greater than or equal to 256
▪ Have uniform vSphere and vCenter versions and similar hardware vendor
▪ ESXi Hosts configured for same on-disk format, version 2 or version 3
▪ A valid vSAN Standard license (or Enterprise for encryption or Stretched
Clusters)
▪ Must turn off vSphere HA to enable vSAN; re-enable HA after vSAN
▪ vSAN Datastores cannot be used for vSphere HA heartbeating
▪ RTT no greater than 1ms for Standard vSAN, 5ms for Stretched, and
200ms from main site to Witness Host
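The RTT limits in the last bullet can be captured in a quick Python sketch (figures taken from the list above; the function itself is illustrative, not a VMware tool):

```python
# Hedged sketch: the latency (RTT) limits listed above for each vSAN
# deployment type; figures come from this guide, not a VMware API.

RTT_LIMIT_MS = {
    "standard": 1,     # hosts in a Standard vSAN Cluster
    "stretched": 5,    # between the two data sites
    "witness": 200,    # main site to the Witness Host
}

def rtt_ok(deployment: str, measured_ms: float) -> bool:
    """True if a measured RTT is within the limit for that deployment."""
    return measured_ms <= RTT_LIMIT_MS[deployment]

assert rtt_ok("stretched", 4.2)     # 4.2 ms is fine for Stretched
assert not rtt_ok("standard", 3)    # 3 ms exceeds the 1 ms Standard limit
```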


1.7.2 – Identify Express Storage Architecture (ESA) Concepts for vSAN

• vSAN Express Storage Architecture (ESA):


o vSAN ESA is a vSAN storage solution designed for high performance NVMe based
TLC flash devices and high performance networks. Each ESXi Host contributing to
storage contains a storage pool of at least 4 flash devices, each providing both
cache and capacity to the Cluster
o vSAN ESA can be deployed in 1 of 2 ways:
▪ vSAN Ready Nodes via VMware Partners – Cisco, DELL, Fujitsu, IBM,
Supermicro
▪ User-Defined vSAN Cluster – using VCG to comprise the vSAN
components
o vSAN ESA Deployment Options:
▪ Standard vSAN Cluster – a minimum of 3 ESXi Hosts deployed in a vSAN
Cluster in a single site, connected to the same Layer-2 network. All-flash
configurations require 10 GbE, and vSAN ESA requires 25 GbE
▪ Two-Node vSAN Cluster – ROBO setup of 2 Hosts at the same location
connected to the same switch or are directly connected. You can add a
3rd Host as a witness, but this Host is usually at the main site with vCenter
▪ vSAN Stretched Cluster – ESXi Hosts distributed evenly and stretched
across 2 sites which have a network latency of no more than 5ms. A
witness Host is added at a 3rd site, and acts as a tie-breaker in the event
of a network partition between the 2 sites

Objective 1.8: Describe the Role of Virtual Machine Encryption in a Datacenter


Resource: vSphere 8.0 Security Guide (March 2023)
vSphere 8.0 Virtual Machine Administration Guide (Jan 2023)

• VM Encryption
o Before you can use VM Encryption, there are a few pre-requisites you need to
take care of. First, make sure you have a Key Provider set up in your
environment, added in vCenter under Configure tab > and under Security, select
Key Providers, which is essentially a Key Management Server (KMS). I discuss
the 3 types of Key Providers below in Sub-Objective 1.8.2. Next, you must enable
all ESXi Hosts for Encryption Mode. To do so, select a Host > Configure tab, then
under the System section click Security Profile. Scroll down to Host Encryption
Mode and click the Edit button


After you’ve enabled your ESXi Hosts for Encryption Mode, you then need to
create a "Host-Based" encryption Storage Policy (Storage Policies were covered
earlier in this Guide)

o After you have those few things configured, all you need to do is create a VM
and enable it for encryption. During the create VM wizard, when selecting your
storage, choose the option to 'Encrypt this virtual machine'

o To encrypt a current VM, you must first power it off, then follow the procedure
noted in Sub-Objective 7.12.1


• VM Encryption secures your environment by encrypting the following VM components:


o Virtual Machine Files: NVRAM, VSWP, VMSN
o Virtual Disk Files (VMDKs)
o ESXi Core Dumps
o VM Swap File
o vTPMs
• VM Components Not Encrypted
o VM Configuration File (VMX)
o Virtual Machine Log Files
o VM Descriptor File

1.8.1 – Describe vSphere Trust Authority

• vSphere Trust Authority (vTA) Overview


o vTA is simply a technology you can implement in vSphere to enhance workload
security, establishing a greater trust by associating your ESXi Hosts’ hardware
root of trust (using Trusted Platform Modules, or TPMs) to the workload itself.
vTA uses remote attestation – security affirmation ESXi Hosts are running
authentic VMware or VMware-signed partner software – and access control to
advance cryptographic capabilities. vTA is responsible for ensuring Hosts are
running trusted software and for releasing encryption keys only to trusted ESXi
Hosts
o To use vTA, you implement & configure vTA services to attest ESXi Hosts, which
then become capable of performing cryptographic operations. The 2 main
components of vTA are:
▪ Attestation Services – uses TPM 2.0 chips in Hosts to establish a root of
trust, and verifies software measurements against a list of administrator-
approved ESXi versions
▪ Key Provider Services – encapsulates Key Servers & exposes Trusted Key
Providers which can be specified when encrypting VMs; limited to KMIP
o vTA consists of:
▪ Preferably separate vCenter Servers – vTA vCenter and workload vCenter
▪ vTA Cluster – ESXi Hosts separate from prod workloads (preferably, but
not required) which have been attested
▪ Key Servers – store encryption keys used by Key Provider Service
▪ Trusted Clusters – ESXi Hosts remotely attested with TPM 2.0 and
running encrypted workloads
▪ Encrypted workload VMs
▪ At least one KMIP-compliant Key Management Server (KMS)


VMware’s Trust Authority Architecture diagram can be viewed in the vSphere Security Guide

1.8.2 – Describe the Role of Key Management Services (KMS) Server in vSphere

• The Key Management Server (KMS) is a Key Management Interoperability Protocol


(KMIP) management server associated with a Key Provider. A Standard Key
Provider and Trusted Key Provider require an external Key Server.
o Standard Key Provider – vCenter requests keys from a Key Server; the
Key Server generates and stores the keys, then vCenter retrieves the keys and
distributes them to the required ESXi Hosts
o Trusted Key Provider – uses a Key Provider Service which enables trusted Hosts
to fetch keys directly
o vSphere Native Key Provider – beginning with vSphere 7U2, no external Key
Server is required. vCenter generates a private key, Key Derivation Key (KDK),
and pushes it to all Hosts in the Cluster. The ESXi Hosts then generate Data
Encryption Keys (DEKs) to enable security functionality such as vTPM. An
Enterprise Plus license is required, and this service can coincide with an external
key server infrastructure
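To illustrate the KDK/DEK relationship described above, here's a conceptual Python sketch using only the standard library. This is NOT VMware's implementation – just the envelope-key idea: one key pushed from the provider, per-VM keys derived locally so the key server never sees them:

```python
# Hedged conceptual sketch (stdlib only, NOT VMware's implementation):
# the key provider hands the hosts a Key Derivation Key (KDK); each host
# derives per-VM Data Encryption Keys (DEKs) from it locally.

import hashlib
import hmac
import os

kdk = os.urandom(32)  # stand-in for the KDK pushed by vCenter to the hosts

def derive_dek(kdk: bytes, vm_id: str) -> bytes:
    """Derive a per-VM DEK from the KDK (HMAC-based, HKDF-like)."""
    return hmac.new(kdk, vm_id.encode(), hashlib.sha256).digest()

# Same KDK + same VM -> same DEK; different VMs get independent DEKs.
assert derive_dek(kdk, "vm-101") == derive_dek(kdk, "vm-101")
assert derive_dek(kdk, "vm-101") != derive_dek(kdk, "vm-102")
```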


Objective 1.9: Recognize Methods of Securing Virtual Machines


Resource: vSphere 8.0 Virtual Machine Administration Guide (Jan 2023)
vSphere 8.0 Security Guide (March 2023)

• Methods to Secure Virtual Machines:


o VM Hardening – keep VMs patched and updated; upgrade VMware Tools; remove
any unneeded virtual devices (serial, USB, etc)
o VM Encryption
o Utilize Virtual Trusted Platform Modules to perform attestation, key generation,
and other security tasks
o Enable UEFI Secure Boot – ensures a VM boots using only software trusted by
the PC/OS manufacturer
o Perform virus scans
o Use hardened VM Templates to deploy new VMs ensuring security consistency
o Minimize VM Console use
o Deactivate applications, systems, or services within the guest OS

1.9.1 – Recognize Use Cases for a Virtual Trusted Platform Module (vTPM)

• Virtual Trusted Platform Module (vTPM)


o vTPM enables the guest OS of a VM to create and store keys which are private,
and not exposed to the guest OS, thus significantly reducing the attack surface
risk of the VM. vTPM uses an encrypted VM .nvram file as the vTPM secure
storage
o Pre-requisites to implement vTPM include:
▪ VM using EFI firmware
▪ VM Hardware version 14+
▪ vCenter Server 6.7+ for Windows guest OS; 7U2 for Linux guest OS
▪ VM Encryption
▪ Guest OS – Windows 2008 Server+/Windows 7+, or Linux
▪ Having a Trust Authority in place
▪ A Key Provider in place
o Once you’ve met the pre-req’s, create a new VM. Then on the 'Customize
Hardware' section, add a new device, and select to add a Trusted Platform
Module


1.9.2 – Differentiate Between Basic Input/Output Systems (BIOS) and Unified Extensible
Firmware Interface (UEFI) Firmware

• VMware is more than likely referring to Legacy BIOS vs EFI settings. I think the best way
to tackle this topic is to first describe BIOS/UEFI differences, then discuss why VMware is
moving away from Legacy BIOS – though I think the reason will become obvious as I
provide more information here:
o BIOS, and for simplicity's sake I'll just be referring to physical devices initially,
resides on an EPROM (Erasable Programmable Read-Only Memory) chip. It
provides 'helper functions' which allow reading boot sectors from attached
storage and printing items to the screen. A function it performs you may be
aware of is the Power-On Self Test (POST). After it runs this test, BIOS initializes
remaining hardware, detects connected peripherals, and verifies they're all
healthy. BIOS runs only in 16-bit mode, limiting the speed at which it can
perform its boot functions. After these functions are completed, BIOS code
searches storage devices for the Boot Loader (typically the Master Boot Record,
in the first 512 bytes of the first sector) and when found, the firmware passes
control of the computer over to it
o UEFI performs mostly the same functions as BIOS, but instead of storing info on
the firmware, it stores info in a .efi file kept on a special EFI System Partition
(ESP). The ESP also contains the Boot Loader, and the disk is laid out with a
GUID Partition Table (GPT). And lastly, UEFI runs in 32-bit or 64-bit mode, as
opposed to 16-bit, making it more performant

• So, with this basic information, I'll share some limitations of BIOS vs UEFI:
o Running only in 16-bit mode, BIOS typically doesn't allow mouse input
when manipulating BIOS settings; you have to tab through settings with the
keyboard. Also, the bootup process takes significantly longer than UEFI
o Because it uses MBR, drive sizes cannot be larger than around 2TB
o UEFI, since it uses higher bit modes, provides a better UI, with mouse &
keyboard navigation, as well as significantly faster boot times
o UEFI uses GUID Partition Table (GPT), allowing disk sizes larger than 2TB
o UEFI provides a lot more functionality within it which you can choose to enable
or disable based on your computing needs. Probably the most intriguing feature
within UEFI, and VMware's most compelling reason to transition from Legacy
BIOS to UEFI, is the ability to perform Secure Boot, which prohibits a computer
from running unauthorized and unsigned applications, reducing the attack vector
by those wishing to invoke malicious intent on a system
o With regards to VMware, some other security features which require UEFI are:
▪ Persistent memory
▪ Intel SGX (vSGX)
▪ TPM 2.0


▪ Recent DPU/SmartNIC support


▪ See this VMware KB discussing Legacy BIOS deprecation
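As a quick sanity check on the ~2TB MBR limit mentioned earlier, the arithmetic works out in a couple of lines of Python (MBR stores sector counts in 32-bit fields, and legacy sectors are 512 bytes):

```python
# Hedged arithmetic check of the ~2TB MBR limit mentioned above:
# MBR uses 32-bit sector fields, and a legacy sector is 512 bytes.

MAX_SECTORS = 2**32      # largest sector count a 32-bit field can hold
SECTOR_BYTES = 512

max_mbr_bytes = MAX_SECTORS * SECTOR_BYTES
assert max_mbr_bytes == 2 * 1024**4   # exactly 2 TiB
print(f"MBR addressable limit: {max_mbr_bytes / 1024**4:.0f} TiB")
```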

1.9.3 – Recognize Use Cases for Microsoft Virtualization-Based Security (VBS)

• Microsoft Virtualization-Based Security (VBS)


o A MS feature introduced in Windows 10 and Windows Server 2016. It uses
hardware and software virtualization to enhance system security by creating an
isolated, hypervisor-restricted, specialized subsystem
o Requirements to use:
▪ vSphere 6.7+
▪ VM Hardware version 14+
o Caveats to using VBS (features which don't work with VBS):
▪ Fault Tolerance
▪ CPU/Memory Hotadd
▪ PCI Passthrough
▪ New Windows 10/Windows 2016 VMs created with Hardware version less
than 14 use Legacy BIOS by default. When implementing VBS on
these machines, the VM firmware type must be changed to EFI and the
OS must be reinstalled

Objective 1.10: Describe Identity (Provider) Federation


Resource: vSphere 8.0 Authentication Guide (February 2023), pp. 109-121

• Identity Federation
o Since vSphere 7, vCenter Server supports Identity Federation for authentication,
allowing you to configure an external identity provider to interact with the
identity source on behalf of vCenter Server. What happens is when a user logs in,
the credentials are redirected by vCenter to the external identity provider. User
credentials are no longer provided to vCenter Server directly. Instead, the user
provides their credentials to the external identity provider and vCenter just
trusts this external provider to perform the authentication. Why is this
beneficial? Credentials are never handled by vCenter. And, you can use other
security measures external providers offer such as Multi-Factor Authentication
(MFA), which improves security. Configuring Identity Federation will be discussed
later in Sub-Objective 4.4.1


• VMware’s process flow for configuring Identity Provider Federation is documented in the Authentication Guide

• Components of vCenter Identity Federation:


o vCenter Server
o Identity Provider service configured in vCenter
o An AD FS server and Microsoft Active Directory (AD) domain
o An AD FS application group
o AD groups and users which map to vCenter Server groups and users

1.10.1 – Describe the Architecture of Identity Federation

• vCenter Identity Federation Architecture


o vCenter uses the OpenID Connect (OIDC) protocol to receive an identity token which
authenticates the user with vCenter. You create an OIDC connection in AD FS (an
Application Group), consisting of a server application and web API, to establish a
relying party trust between vCenter and an Identity Provider. The API and
application components specify the information vCenter uses to trust and
communicate with the AD FS server. Within vCenter, you then create an Identity
Provider, then finally configure group memberships in vCenter to authorize logins
from the users in the AD FS domain. The following information is required to
configure an Identity Provider in vCenter:
▪ Client Identifier – the UUID string generated by the AD FS Application
Group wizard identifying the Application Group itself
▪ Shared Secret – also generated by the AD FS Application Group wizard &
is used to authenticate vCenter Server with AD FS
▪ OpenID Address – OpenID Provider Endpoint URL of the AD FS server, e.g.
https://webserver.domain.com/adfs/.well-known/openid-configuration
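For context, here's what a minimal OpenID discovery document at that well-known URL looks like, and the fields vCenter would care about. The document below is a hypothetical example sketched in Python, not captured from a real AD FS server:

```python
# Hedged sketch: a minimal, HYPOTHETICAL OpenID discovery document of the
# kind served at .../.well-known/openid-configuration (not real AD FS output).

import json

discovery_json = """{
  "issuer": "https://webserver.domain.com/adfs",
  "authorization_endpoint": "https://webserver.domain.com/adfs/oauth2/authorize",
  "token_endpoint": "https://webserver.domain.com/adfs/oauth2/token",
  "jwks_uri": "https://webserver.domain.com/adfs/discovery/keys"
}"""

conf = json.loads(discovery_json)
# Conceptually: logins are redirected to the authorization endpoint, and
# returned ID tokens are validated against the keys published at jwks_uri.
assert conf["issuer"].endswith("/adfs")
assert "authorization_endpoint" in conf and "jwks_uri" in conf
```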


1.10.2 – Recognize Use Cases for Identity Federation

• Use Cases (Benefits) for Identity Federation:


o Ability to use Single Sign-On with existing federated infrastructure
o Improve data security due to the fact vCenter Server never handles credentials
o Ability to use the Identity Provider's authentication mechanisms, such as MFA

Objective 1.11: Describe VMware vSphere Distributed Services Engine


Resource: ESXi 8.0 Installation and Setup Guide (2022), pp. 9-11
Managing Host and Cluster Lifecycle Guide (2022)

• vSphere Distributed Services Engine


o The vSphere Distributed Services Engine provides organizations the opportunity
to offload networking service operations from ESXi Host CPU to the server Data
Processing Unit (DPU) or SmartNIC. Not only does the DPU provide network
operation offloading, but also security and compression acceleration. Using the
DPU for such tasks frees up Host CPU for business-critical workloads
o System Requirements of the Distributed Services Engine:
▪ vCenter Server 8+
▪ ESXi 8+

1.11.1 – Describe the Role of a Data Processing Unit (DPU) in vSphere

• Data Processing Unit (DPU)


o The DPU is essentially a SmartNIC device – a high-performance network interface
card with added embedded CPU cores, memory, and a hypervisor running
independently of the Host hypervisor, making DPUs like limited-resource servers
with multiple general-purpose compute cores. The ESXi hypervisor which runs on
the DPU is a fully functional hypervisor, but can only be run on ARM CPU
architectures. As such, you don't use this hypervisor to provision VMs. Its sole
purpose is to optimize I/O activities, such as packet offloads, external
management, etc. The DPU is a pre-configured device, and you don't manage
it separately from the ESXi Host. To begin using DPUs, they are typically pre-
installed on DPU-ready hardware (server). Then, all that is needed is to install
vSphere 8 or higher on the Host. The install process installs ESXi not only on the
Host but the DPU as well without any secondary interaction or configuration
required.
o Only the following vSphere components are supported to work with DPUs:
▪ NSX


o Limitations for DPUs:


▪ An ESXi Host can have only 1 DPU
▪ If there is one ESXi Host in a Cluster with a DPU, all other ESXi Hosts must
have a DPU as well
▪ DPU devices must be from the same vendor and of the same model; but,
the generation may vary
▪ Currently the only DPU vendors supported are: NVIDIA and AMD on
servers from DELL and HPE
▪ You cannot install DPUs separately in your environment; they must be
pre-installed

Objective 1.12: Identify Use Cases for VMware Tools


Resource: VMware Tools KB Article (2020)

• The following are use cases for installing VMware Tools


o Has supported guest OS device drivers, including network drivers
o VM guest usability – provides a better keyboard and mouse experience, as well
as enhanced display support
o Allows graceful shutdown of the guest OS
o Can be used to quiesce the guest OS for snapshotting
o Component is used with VM image backup tools
o Can be used for time synchronization

Objective 1.13: Describe the High-Level Components of VMware vSphere with Tanzu

Resource: vSphere 8.0 With Tanzu Concepts and Planning Guide (2022)

• vSphere With Tanzu Components


o Supervisor
o Namespace
o vSphere Pods
o Control Plane VM
o Tanzu Kubernetes Grid
o VM Service/Operator


1.13.1 – Identify the Use Case for a Supervisor Cluster and Supervisor Namespace

• Clusters enabled with vSphere with Tanzu are called Supervisors. You can select a 3-
Zone deployment, enabling Supervisor on 3 vSphere Clusters to have Cluster-level HA,
or a one-to-one mapping between a vSphere Cluster and Supervisor, to have ESXi Host-
level HA. Supervisors are the base of the vSphere with Tanzu platform, providing
necessary components and resources for running workloads such as vSphere Pods, VMs,
and Tanzu Kubernetes Grid (TKG) Clusters

• vSphere Namespace sets the boundaries where vSphere Pods, VMs, and TKG Clusters
can run. Upon initial creation, they have unlimited resources (CPU, Memory, Storage).
But vSphere Admins generally create limits for those resources, as well as limits for how
many Kubernetes objects can run in the Namespace. This is done by a vSphere Resource
Pool being created per Namespace

1.13.2 – Identify the Use Case for vSphere Zones

• vSphere Zones are mapped to vSphere Clusters to provide High Availability (HA) against
Cluster-level failure to workloads deployed on vSphere with Tanzu. You then are able to
create a Supervisor, mapped to 1 or 3 of these Zones

1.13.3 – Identify the Use Case for a VMware Tanzu Kubernetes Grid (TKG) Cluster

• Tanzu Kubernetes Grid provisions Clusters which include components necessary to


integrate with underlying Namespace resources. A TKG Cluster is a full distribution of
Kubernetes built, signed, and supported by VMware. They are built on top of the
Supervisor, which is itself a Kubernetes Cluster. The TKG is defined in the vSphere
Namespace as a custom resource

SECTION 2: VMware Products and Solutions


Objective 2.1: Describe the Role of vSphere in the Software-Defined Datacenter
(SDDC)
Resource: Architecture and Design 5.0 – VMware Validated SDDC Design Guide (March 2019)
Architecture and Design 6.2 for the Management Domain (February 2021)

I continued to use the referenced Resources above for this section, but the resources are
officially discontinued by VMware. Though discontinued, I believe VMware's overall


concepts of what a SDDC is have not changed. How they go about implementing it has, with
the introduction of VMware vSphere+, vSphere With Tanzu, and their monitoring suite of
products getting an overhaul & rebranding to VMware Aria Suite

• Software-Defined Datacenter (SDDC)


o What is a SDDC? It’s the concept of running an entire datacenter with as little
hardware and manual configuration as possible (my definition).
VMware’s description suggests “no hardware”, but from a realist perspective,
this is not possible because hardware is of course needed at the heart of the
SDDC design, which runs vSphere – ESXi and vCenter Server. In addition to
virtualizing as much of the hardware stack as possible, there should be as much
automation as possible to minimize errors, misconfigurations, and save on
time-to-deployment. The last concept is insight into your DC using VMware Aria
products such as VMware Aria Operations and Aria Operations for Logs.

Above the physical hardware layer is where the SDDC lies, with vSphere the
foundational layer, then storage, network, ancillary services (FT, vReplication),
Containers, Cloud Services, etc


o A Standard SDDC Architecture involves multiple vCenter Servers and their
associated workloads. In a Consolidated SDDC design, management and
workload Clusters fall under 1 vCenter Server umbrella.


Objective 2.2: Identify Use Cases for VMware vSphere+


Resource: VMware vSphere+: Introducing the Enterprise Workload Platform

• VMware vSphere+
o What it is – vSphere+ is a multi-cloud workload platform, delivering benefits of
the Cloud to on-prem workloads; combining vSphere virtualization technology, a
Kubernetes environment, and high-value Cloud services to transform existing on-
prem deployments into SaaS-enabled infrastructure.
o vSphere+ Components
▪ Cloud Console – single management plane to centralize tasks normally
performed across multiple vCenter instances
▪ Admin Services –
– vCenter Lifecycle Management service – for on-prem vCenter updates
– Global inventory service – visualize resources across entire vSphere+
estate
– Event view service – for alerts and events across entire vSphere+ estate
– Security health check service – monitor security posture and highlight
security exposures across entire vSphere+ estate
– Provision VM service – create VMs from Cloud Console instead of
vCenter


– Configuration management service – to monitor vCenter configuration


drift
▪ Developer Services – Get container-based (infrastructure) services as
needed for Developers to quickly test and deploy applications
– Tanzu Kubernetes Grid service – run containerized applications
– VM service – deploy VMs
– Networking service – configure, monitor, administer switching for VMs
and k8s workloads
– Storage service – manage persistent disks
– Tanzu integrated service
– Tanzu mission control essentials
▪ Cloud Integration
o Use case – when organizations want an infrastructure which combines all the
benefits of on-prem deployments with all the benefits of Cloud

Objective 2.3: Identify Use Cases for VMware vCenter Converter


Resource: Using VMware vCenter Converter Standalone 6.3 Guide (Oct 2022)

• VMware vCenter Converter Standalone Use-Cases


o For use anytime you want to convert the following:
▪ Import running physical and virtual machines to standalone or managed
ESXi Hosts
▪ Import VMs running on VMware Workstation or Microsoft Hyper-V to
managed ESXi Hosts
▪ Export VMs managed by vCenter to other VMware VM formats
▪ Configure vCenter managed VMs so they're bootable, install VMware
Tools, or customize the guest OS
▪ Customize vCenter managed VM guest OS – change hostname or
network settings, as well as OS services
▪ Reduce time to setup a new virtual environment
▪ Migrate legacy servers to new hardware without reinstalling OS or apps
▪ Perform migrations across heterogeneous hardware
▪ Readjust volume sizes and place volumes on separate disks

Objective 2.4: Identify DR Use-Cases


Resource: Various VMware Guides used throughout this Guide & VMware’s Documentation repository

• DR Use-Cases
o I think this goes without saying, but DR Use-Cases are obvious – a DC or VI Admin
should have a DR plan which has been tested (important!) and in place in the
event of any kind of mishap, misconfiguration, or disaster resulting in taking


down business servers, applications, and/or the whole IT environment.


Organizations are generally in the business of making money. If its backbone (i.e.
IT environment) is inaccessible, then the business loses money. Developing
workable Recovery Time Objectives (RTOs) – acceptable time to recover downed
systems, and Recovery Point Objectives (RPOs) – acceptable amount of data loss
from downed systems, is critical in minimizing loss of revenue for the business in
the event of unplanned outages.
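The RPO idea above reduces to simple arithmetic: in the worst case, the data lost equals the time since the last replication or backup cycle. A small illustrative Python sketch (values are hypothetical):

```python
# Hedged sketch: checking a replication schedule against an RPO target.
# Illustrative arithmetic only; the interval and RPO values are hypothetical.

def meets_rpo(replication_interval_min: int, rpo_min: int) -> bool:
    """Worst-case data loss equals the replication interval."""
    return replication_interval_min <= rpo_min

# Replicating every 15 minutes satisfies a 60-minute RPO...
assert meets_rpo(replication_interval_min=15, rpo_min=60)
# ...but a 4-hour interval cannot meet the same RPO.
assert not meets_rpo(replication_interval_min=240, rpo_min=60)
```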

2.4.1 – Identify vSphere Replication Options


Resource: VMware vSphere Replication 8.6 Administration Guide (2023)

• VMware vSphere Replication (vRep)


o vSphere Replication is an extension of vCenter Server providing a hypervisor-
based VM replication and recovery solution for your environment. You can use
vRep in place of storage replication to replicate VMs to a remote site to protect
them against partial or complete site failure. You can configure point-in-time
(PiT) snapshots of VMs (maximum of 24) to best fit your organizational needs.
When combined with Site Recovery Manager, you can have a holistic DR
solution
o Limitations to be aware of:
▪ There can only be one vRep appliance per vCenter instance
▪ Only 400 maximum replications per vRep appliance; 4000 per vCenter
▪ No external DB allowed; vRep uses internal PostgreSQL
▪ Unsupported VMware solutions with vRep: FT and vLCM

2.4.2 – Identify Use Cases for VMware Site Recovery Manager (SRM)

• Site Recovery Manager


o SRM is VMware’s backup, disaster recovery, and business continuity tool. With
it, you can plan, test, and run recovery of VMs between a protected vCenter
Server site and a recovery vCenter Server site. SRM allows the use of either an
appliance or Windows-based central management, deployed at both sites. You
can utilize one of 2 deployment models – single-site or two-site deployment
model

SECTION 3: There are no testable objectives for this section


SECTION 4: Installing, Configuring, and Setup


Objective 4.1: Describe Single Sign-On (SSO)
Resource: vSphere 8.0 Authentication Guide (February 2023)

• vCenter Server Single Sign-On (SSO)


o SSO is part of the Authentication Services component of vCenter Server, and
provides secure authentication services to vSphere software components.
vSphere components communicate with each other via secure token exchange
mechanism, instead of requiring each component to authenticate separately
with a directory service, like Active Directory. SSO uses vsphere.local as its
domain where vSphere solutions and components are registered. SSO can
authenticate users against vsphere.local or an external directory source. The
domain name is used by the VMware Directory Service ( vmdir )
o SSO Components include:
▪ Security Token Service (STS) – issues SAML tokens representing a user
▪ Administration Server – allows admin users to configure users/groups
▪ vCenter Lookup Service
▪ VMware Directory Service ( vmdir )
▪ Identity Management – handles identity mgmt and STS auth requests

4.1.1 – Configure a Single Sign-On Domain

• To configure SSO, you are prompted during the VCSA Stage 2 installation process to
create a new SSO domain or join an existing one. For a new SSO domain, provide the
information shown in the screenshot below


4.1.2 – Join an Existing Single Sign-On Domain

• Join Existing SSO Domain


o To join an existing domain during VCSA (Stage 2) install, you must provide:
– The vCenter Server SSO FQDN or IP to join
– HTTPS port to communicate with the vCenter Server SSO to join
– SSO domain name for the SSO you want to join (i.e. vsphere.local)
– Password for the SSO admin account of the SSO you want to join
o To join an existing SSO domain after VCSA install, you must use CLI, and be
running a minimum of vCenter Server 6.7U1
– Run a pre-check CLI command (not required if no replication partner):
cmsso-util domain-repoint -m pre-check --src-emb-admin Admin
--replication-partner-fqdn Node_FQDN --replication-partner-admin
Admin --dest-domain-name SSO_domain

– Execute a CLI command:


cmsso-util domain-repoint -m execute --src-emb-admin Admin
--replication-partner-fqdn Node_FQDN --replication-partner-admin
Admin --dest-domain-name SSO_domain


– Repoint when there is a replication partner:


You must first unregister (decommission) the vCenter you’re repointing the
current SSO domain from by shutting it down then running a CLI command on a
partner vCenter node:
cmsso-util unregister --node-pnid Node_FQDN --username Node_Admin
--passwd Admin_Pwd

…then, power on the vCenter wanting to repoint and execute the commands
shown above
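The repoint commands above follow one fixed pattern, differing only in the `-m` mode. A small helper that assembles the command string can make that explicit — a sketch only (the helper name is mine; the actual `cmsso-util` command must be run in the vCenter appliance shell):

```python
def repoint_cmd(mode, src_admin, partner_fqdn, partner_admin, dest_domain):
    """Assemble the cmsso-util domain-repoint command line shown above.
    mode is 'pre-check' or 'execute'; the flags mirror the Guide's example."""
    if mode not in ("pre-check", "execute"):
        raise ValueError("mode must be 'pre-check' or 'execute'")
    return (f"cmsso-util domain-repoint -m {mode} "
            f"--src-emb-admin {src_admin} "
            f"--replication-partner-fqdn {partner_fqdn} "
            f"--replication-partner-admin {partner_admin} "
            f"--dest-domain-name {dest_domain}")
```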

• NOTE: VMware added quite a few new SSO Groups, about half of which shouldn't be
modified. I recommend reviewing the group list in the Authentication Guide, on pp. 125-
126

Objective 4.2: Configure vSphere Distributed Switches (VDS)


Resource: vSphere 8.0 Networking Guide, (2022), pp. 32-41

• Configure a VDS
o To configure a new Distributed vSwitch, click on a Datacenter (DC) Inventory
object > Actions link, or rt-click on the DC object > Distributed Switch > New
Distributed Switch

4.2.1 – Create a Distributed Switch

• Create a Distributed Switch


o To create a new VDS, follow the initial procedure shared above
o Give the vDS a Name
o Select the vDS version
o Configure the number of Uplinks


o Enable/disable NIOC as desired


o Choose whether to create a Port Group & if so, give it a name

4.2.2 – Add ESXi Hosts to a Distributed Switch

• Add ESXi Hosts to a VDS


o Rt-click the vDS, or go into vCenter Networking Inventory > Actions link, and
choose Add or Manage Hosts
o Choose the Add Hosts option
o Select the ESXi Hosts from the list you want to add to the VDS
o Modify the Host physical adapters by assigning to the vDS Uplink as needed
▪ Modify Adapters for All Hosts or Adapters per Host by selecting the tab
o Manage any Host VMkernel Adapters
▪ Modify Adapters for All Hosts or Adapters per Host by selecting the tab
o Then choose to Migrate VM Networking if needed, and Complete the wizard

4.2.3 – Examine the Distributed Switch Configuration

• View VDS Settings


o Rt-click the VDS > Settings > Edit Settings option
o Click on the various tabs to view their settings: General, Advanced, Uplinks
o Advanced settings:
▪ Configure MTU
▪ Multicast Filtering Mode
▪ Discovery Protocol type & operation – disabled, CDP, LLDP
▪ Administrator name and other details info
o Rt-click the VDS > Settings > Edit Private VLAN to view/add/modify PVLAN
o Rt-click the VDS > Settings > Edit Netflow to view/add/modify Netflow settings
o VLANs: rt-click VDS Port Group > Edit Settings > VLAN section
NOTE: these VLAN types also apply to Standard Switches
▪ External Switch Tagging (EST): set VLAN to 0
▪ Virtual Guest Tagging (VGT): set VLAN to 4095
▪ Virtual Switch Tagging (VST): set VLAN from 1-4094
o Other VDS Port Group settings:
▪ General settings:
– Name
– Port binding: static or ephemeral
– Port allocation: elastic or fixed
– Number of ports
– Network Resource Pool assignment


▪ Advanced settings:

▪ Other settings are similar to vSS Port Groups
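The three VLAN tagging modes listed above map directly to the VLAN ID value set on a port group. A minimal sketch (function name is my own, not a VMware API):

```python
def vlan_tagging_mode(vlan_id):
    """Classify a port-group VLAN ID into its tagging mode, per the rules
    above: 0 = External Switch Tagging (EST), 4095 = Virtual Guest
    Tagging (VGT), 1-4094 = Virtual Switch Tagging (VST)."""
    if vlan_id == 0:
        return "EST"
    if vlan_id == 4095:
        return "VGT"
    if 1 <= vlan_id <= 4094:
        return "VST"
    raise ValueError("VLAN ID must be 0-4095")
```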

Objective 4.3: Configure VSS Advanced Virtual Networking Options


Resource: vSphere 8.0 Networking Guide, (2022)

• Configure a VSS
o To configure a Standard vSwitch, click on a Host in your virtual inventory >
Configure tab > Networking section > Virtual Switches

From here you can click the ellipsis on the Physical Adapter box to view
Discovery Protocol (CDP only for vSS) information of the physical nic and the
hardware (switch) it’s connected to. You can click the ellipsis on the left Port

Group (PG) box to either view the settings, edit them, or remove the PG
altogether. Almost all settings exist on both the vSwitch & PG. The advantage of
configuring settings on the vSwitch is that they propagate to the PGs, where
applicable. If not, no worries, you can override vSwitch-level settings at the PG
layer. The main difference between vSwitch & PG settings is that VLANs are
configured at the PG layer, while MTU size is configured at the vSwitch level

• Advanced VSS Options


o Some advanced options to be aware of are:
(These were covered in Sub-Objective 1.5.3 above; Refer to the Sub-Objective for
review)
▪ VLAN IDs – use these to segregate and isolate different network traffic
types
▪ Security policies – Promiscuous Mode, MAC Address Changes, & Forged
Transmits (on a VSS, Promiscuous Mode defaults to Reject, while MAC
Address Changes and Forged Transmits default to Accept)
▪ Traffic shaping – defined by Average bandwidth, Peak bandwidth, and
Burst size; settings you define to restrict outbound bandwidth traffic


▪ Teaming and failover – load balancing and failure detection policies for a
given PG, or at VSS-level. From the Edit Settings option on a VSS or PG,
you can modify the settings to meet your environment’s needs. Note the
Override option next to each configurable item below. Override is
available within Security and Traffic Shaping policies as well
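As an example of how the traffic-shaping parameters above interact: the burst size determines roughly how long traffic can run at peak bandwidth before the shaper throttles it back to average. This is a simplified model of that relationship, not VMware's actual shaper implementation; the function name and units (kbit/s for bandwidth, KB for burst, as in the UI) are my framing:

```python
def max_burst_duration_s(avg_kbps, peak_kbps, burst_kb):
    """Approximate how long traffic can run at peak bandwidth before the
    burst allowance (burst_kb) is exhausted and the shaper throttles the
    flow back down to the average bandwidth."""
    if peak_kbps <= avg_kbps:
        return float("inf")  # flow never exceeds average; no throttling
    excess_kbit = burst_kb * 8  # convert burst size from KB to kbit
    return excess_kbit / (peak_kbps - avg_kbps)
```

For example, with a 1,000 kbit/s average, 2,000 kbit/s peak, and 1,000 KB burst, traffic can run at peak for about 8 seconds.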

Objective 4.4: Setup Identity Sources


Resource: vCenter Server 8.0 Configuration Guide (2022)
vSphere 8.0 Authentication Guide (February 2023), pp. 118-121

4.4.1 – Configure Identity Federation

• Identity Federation
o Currently, vCenter supports only one external identity provider – AD FS –
alongside the built-in vsphere.local identity source. Before I go over the
configuration of federation, here are some requirements:
▪ AD FS for Windows 2016 or later is deployed, and connected to AD
▪ AD FS Application Group for vCenter created in AD, as noted here
▪ If using a local CA, upload the root cert to the Trust Root Store in vCenter
▪ A vCenter Server Admin Group created in AD FS
▪ vCenter Server 7.0 or later


o Because of the extensiveness of this configuration, I will only provide an
overview. I recommend you thoroughly read through the Authentication Guide
to get familiar with the process
▪ First, if you use an internal CA, add the root CA to vCenter Server Trusted
Root Store: Menu > Administration > Certificates > Certificate
Management and upload your internal root cert
▪ From Administration, now go to Single Sign-On and select Configuration,
then from the Identity Provider tab, click the “i” (info) link icon on the
right to retrieve 2 URI addresses

▪ Next, create an Open ID from the link I provided above


o Return to the Identity Provider tab, click the Change Identity Provider link on
the right, and walk through the wizard to complete the information. Most of the
information needed is the same format used for AD over LDAP / Open LDAP,
which I share in Sub-Objective 4.4.2 below. Reference this information to
complete the parameters in the wizard

• vCenter Server Identity Sources


o These are external directory services vCenter Server interacts with for user
authentication to vSphere solutions
o Configuration – select vCenter Server in the Client > Configure tab > Settings
section > General then Edit link to configure directory service timeout settings
o Add an Identity Source – Menu > Administration > under Single Sign-On section
select Configuration, then under the Identity Provider tab choose Identity
Sources and click Add


Options to configure will depend on the directory service you want to add.
More on other options is in subsequent Sub-Objectives below.

4.4.2 – Configure Lightweight Directory Access Protocol (LDAP) Integration

• Active Directory over LDAP / Open LDAP


o AD over LDAP and Open LDAP have pretty much the same configurable options,
and each parameter required has a specific format it needs to be in. To configure
either one, click on Menu > Administration > under Single Sign-On section,
select Configuration, then under the Identity Provider tab choose Identity
Sources , then select the Add link and choose which to add from the drop-down.
After your selection, the following screen appears


Everything marked with an asterisk (*) is required. The data format requirements
for each parameter are shown in the screenshot below. Not pictured is the LDAP
server URL format, which needs to be in the ldap://hostname:port format
(or, ldaps)
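The LDAP server URL format described above can be sanity-checked with a few lines of Python. This is an illustrative sketch only (the helper name is mine), assuming the wizard requires an explicit port as shown:

```python
from urllib.parse import urlparse

def valid_ldap_url(url):
    """Check a URL matches the ldap://hostname:port (or ldaps://) format
    required by the AD-over-LDAP / OpenLDAP identity source wizard."""
    parsed = urlparse(url)
    return (parsed.scheme in ("ldap", "ldaps")
            and bool(parsed.hostname)
            and parsed.port is not None)
```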


4.4.3 – Configure Active Directory Federation

• Active Directory Configuration


o To join Active Directory, you must first add your AD Domain by clicking the
Active Directory Domain option, which is below Identity Sources as shown in
the screenshot above. After you configure your AD, you can then add it as an
Identity Source. Enter your domain in FQDN format; it is recommended to select
'Use Machine Account', unless you plan on changing the machine name at some
point, in which case it is recommended to use the SPN option. The SPN needs to
be in the format STS/domain.com , and a user (in the UPN format
user@domain.com ) and password who can authenticate with this Identity
Source need to be supplied
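The SPN and UPN formats above can be expressed as two tiny format checks. These helpers are illustrative only (names and logic are mine, not VMware validation code):

```python
def valid_spn(spn):
    """SPN for the AD identity source must be in the STS/domain.com
    format noted above."""
    parts = spn.split("/")
    return len(parts) == 2 and parts[0] == "STS" and "." in parts[1]

def valid_upn(upn):
    """The authenticating user must be in UPN format: user@domain.com."""
    user, sep, domain = upn.partition("@")
    return bool(user) and sep == "@" and "." in domain
```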


Objective 4.5: Deploy and Configure vCenter Server Appliance (VCSA)


Resource: vCenter Server 8.0 Installation and Setup Guide, Update 1 (2023), pp. 15-49

• Deploy VCSA
o There are two ways to deploy the VCSA – GUI and CLI. I will only be covering the
GUI process. If you’re a CLI nut, you can review the deployment process in the
vCenter Install Guide. Before VCSA deployment, make sure the deployment
system and the target system (vCenter or ESXi Host) are supported. I covered
requirements in Objective 1.1, so review those before proceeding. Also verify
time is sync’d among all systems and that a DNS record for the VCSA is created
o After you’ve verified you’ve met all the pre-requisites, download the VCSA ISO
from VMware and mount it on a supported system. Run the installer file from the
mounted VCSA installer directory: e.g. vcsa-ui-
installer/mac/installer.app then select the Install option.
NOTE: for Mac users, on newer macOS versions, you may run into an error upon
opening the installer.app file. Use this VMware KB to be able to open the file and
begin installation


If you have an external PSC, you will need to use the converge utility to embed it
into the VCSA, as external PSCs are no longer supported (see KB here). Fill in the
information prompted for


NOTE: for the target you’re installing the VCSA on, if you choose a Host in a DRS
Cluster, VMware recommends choosing the target vCenter Server instead, or
changing DRS to manual automation, at least temporarily, until the install is
completed. The user connecting to the target must be either root or an
administrator of the system. Review the VCSA root user password requirements
by clicking the “i” (info) link

Choose the Deployment Size and Storage; storage column in the chart will
change based off your selections


Finish the remaining screens – Datastore, Networking

• Configure VCSA
o Once the install completes, you are ready for Stage 2…

…and are then prompted for a few configuration items – Time Sync, SSO Domain
Join selection, CEIP, then completion


Log into the Mgmt Interface – https://IPorFQDN:5480 to modify settings.


One setting I recommend is to set your VCSA root password to not expire,
under the Administration section. Doing so helps you avoid being locked out
after expiry. Or, at least set some kind of reminder to reset it before expiry


You can enable SSH, BASH, etc from the Access section

Configure Time Sync with the Host it resides on or via NTP server from Time

And the area I really find useful is Services. You can Start, Stop, Restart any
vCenter Server service from here

To update your appliance, simply go to Update, run a check against the VMware
update depot URL or CD, or add your own URL (FTPS / HTTPS), changeable under
Update Settings. You can also do a file-level Backup of your appliance using the
following protocols: FTP, FTPS, HTTP, HTTPS, SFTP, NFS, or SMB. Inventory &
configuration data is required to backup; stats, events, & tasks are optional

And lastly, you can redirect logging to a Syslog Server if desired, using 1 of the
supported protocols shown below (3 syslog servers max are allowed)

Objective 4.6: Create and Configure vSphere High Availability (HA) and DRS
Advanced Options (Admission Control, Proactive HA, Etc.)
Resource: vSphere 8.0 Availability Guide (2022), pp. 31-44

• vSphere HA Concepts and Configuration


o To be able to use vSphere HA, the following requirements must be met:
▪ vSphere Standard license
▪ Shared storage
▪ vSphere Cluster with minimum of 2 ESXi Hosts, each configured with a
management network used for HA network heartbeating
▪ ESXi Host access to all VM networks
▪ Depending on vSphere HA requirements, VMware Tools installed in VMs
o After the above requirements have been met, you then need to create a vSphere
Cluster at the Datacenter level in vCenter. Right-click the Datacenter object and
select New Cluster and enable the option for vSphere HA


o After vSphere HA has been enabled, you can then configure features and
options within the Cluster. I can’t cover every nook and cranny vSphere HA offers
in this Study Guide, but I recommend you view all the HA Cluster options and
make the decision for each which best meets your organization’s needs and
requirements. First, vSphere HA not only helps make VMs highly available, but
can also protect against potential VM "split-brain" scenarios in which a VM may
be running on an isolated ESXi Host, but the primary Host cannot detect the VM
because the Host the VM is on is isolated. As such, the primary Host will try to
restart the VM. The restart succeeds if the isolated Host no longer has access to
the Datastore on which the VM resides. This causes a VM split-brain because
now 2 instances of the same VM are running. The vSphere HA setting used to
mitigate this scenario is called VM Component Protection (VMCP), and is a
combination of two vSphere HA settings:

– Datastore with PDL (permanent device loss) refers to unrecoverable loss of
accessibility to a storage device, mitigated only by powering off VMs. When
enabled, you can only select Disabled, Issue events, or Power off and restart VMs
– Datastore with APD (all paths down) refers to a transient or unknown loss of
access to a storage device. APD setting options are more granular than PDL


IMPORTANT: If Host Monitoring or VM Restart Priority settings are disabled,
VMCP cannot be used to perform VM restarts. Also, VMCP requires a minimum of
ESXi 6.0 and is not supported for VMs with FT, RDM, vSAN, or vVols

o One of the most important HA configurations you need to decide upon which
best suits your environment is the concept of Admission Control. vSphere HA
Admission Control ensures a HA-enabled Cluster has enough reserved resources
available for VM restarts in the event of 1 or more ESXi Host failures.


Any action on a Cluster resource which potentially violates constraints imposed
by Admission Control is not permitted, such as:
– Powering on VMs
– Migrating VMs
– Increasing CPU or memory reservations of VMs

o Admission Control can be configured for one of 3 options – Cluster resource
percentage, Slot policy, or Dedicated failover hosts. I’ll discuss each below:
– Cluster resource percentage is a policy where vSphere HA ensures a
percentage of CPU and memory are reserved for failover. The policy is enforced
by:
Calculating current powered-on VM CPU and memory resource requirements
Calculating total Cluster CPU and memory resources
Calculating current failover capacity, as (total Cluster resources – VM
requirements) / total Cluster resources, for both CPU and memory
Determining if the current capacity of either CPU or memory is more or less than
the configured failover capacity %. If current capacity is less, the VM operations
listed above are not permitted

– Slot policy is where a specified number of ESXi Hosts can fail and sufficient
resources will still be available to restart all VMs on failed Hosts. This policy is
determined by:
Calculating slot size, which is a logical representation of CPU and memory sized
to satisfy requirements of powered-on VMs. Slot size is calculated by VM CPU
reservation (or 32MHz if none configured) and memory reservation then
selecting the largest value of each
Determines how many slots each Cluster Host holds, by dividing each Host’s
CPU and memory capacity by the slot size and taking the smaller result. To
determine failover capacity, the Host(s) with the largest number of slots are
“removed” and the remaining Host slots are added
Current failover capacity is determined by the number of ESXi Hosts which can
fail and still leave enough slots for all powered-on VMs
Determines if the current failover capacity is more or less than the configured
failover capacity. If current capacity (in slots) is less, the VM operations listed
above are not permitted


– Dedicated failover hosts is the easiest of all the admission control policies. If
this policy is selected, one or more ESXi Hosts are held in reserve and not used
to run VMs during normal operations; they are used solely in the event of a Host
failure. If a failover Host cannot be used, for example if there aren’t enough Host
resources available, then other available Cluster Hosts are used. Note: VM-VM
affinity rules are not enforced on failover Hosts
Regardless of Admission Control policy used, be aware of one other setting you
can configure – Percentage degradation VMs tolerate. View the image below for
explanation. Note: DRS must be enabled for this option to be available
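The two calculation-based policies above can be modeled numerically. This is a simplified illustration of the math the Availability Guide describes — function names and structure are my own, not HA's actual implementation:

```python
def resource_percentage_capacity(total_cpu_mhz, total_mem_mb,
                                 reserved_cpu_mhz, reserved_mem_mb):
    """Cluster Resource Percentage: current failover capacity as
    (total Cluster resources - powered-on VM reservations) / total,
    computed separately for CPU and memory."""
    cpu_pct = 100 * (total_cpu_mhz - reserved_cpu_mhz) / total_cpu_mhz
    mem_pct = 100 * (total_mem_mb - reserved_mem_mb) / total_mem_mb
    return cpu_pct, mem_pct

def slot_policy_failover_capacity(hosts, vm_reservations):
    """Slot Policy: slot size is the largest CPU reservation (32MHz floor)
    and the largest memory reservation across powered-on VMs. Each host
    holds min(cpu // slot_cpu, mem // slot_mem) slots. Failover capacity
    is how many of the largest hosts can fail while the remaining slots
    still hold every powered-on VM.

    hosts: list of (cpu_mhz, mem_mb); vm_reservations: same shape."""
    slot_cpu = max(max((cpu for cpu, _ in vm_reservations), default=0), 32)
    slot_mem = max((mem for _, mem in vm_reservations), default=0) or 1
    host_slots = sorted((min(cpu // slot_cpu, mem // slot_mem)
                         for cpu, mem in hosts), reverse=True)
    needed = len(vm_reservations)
    failed = 0
    # "fail" the biggest hosts first while enough slots remain for all VMs
    while (failed < len(host_slots) - 1
           and sum(host_slots[failed + 1:]) >= needed):
        failed += 1
    return failed
```

For example, two identical 8GHz / 8GB hosts running four VMs with 2GHz / 2GB reservations each yield a slot-policy failover capacity of one host.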

o Lastly, some vSphere HA options can be modified to fit your organizational
needs. To do so, you can add advanced configurations in the vSphere Availability
Advanced options section. I will share a few of the more common settings, but
you can review the full list on pp. 41-43
▪ das.isolationaddressX is the ping IP address used to determine if a
Host is network isolated. You can have multiple addresses (0-9). The
address used by default is the management network gateway address
▪ das.heartbeatDsPerHost is number of heartbeat datastores to use
▪ das.slotcpuinmhz defines maximum bound CPU slot size
▪ das.slotmeminmb defines the maximum bound memory slot size
▪ das.isolationshutdowntimeout specifies time a VM waits to power
down (default is 300secs)
▪ das.ignoreRedundantNetWarning ignores the 'no HA redundant
network' warning message

• vSphere DRS Concepts and Configuration


o I discussed quite a few DRS concepts in Sub-Objective 1.4.1 above. I covered
vCLS agent VMs, affinity/anti-affinity rules, DRS characteristics, etc. Go back to
this section for review

Objective 4.7: Deploy and Configure vCenter Server High Availability (HA)
Resource: vSphere 8.0 Availability Guide (2022), pp. 72-77

• vCenter Server HA
o vCenter HA protects the vCenter Server Appliance (VCSA) against Host and
hardware failure, as well as reduce downtime due to VCSA patching. After

configuring your network for vCenter HA, you implement a 3-node cluster
comprised of Active, Passive, and Witness nodes:
▪ The Active node
– Uses the active vCenter
– Contains the public IP
– Uses the vCenter HA network to replicate data to the Passive node
– Uses the vCenter HA network to communicate with the Witness node
▪ The Passive node
– Is initially a clone of the Active node
– Receives updates from and synchronizes state with the Active node
– Automatically takes over the Active node role in the event of a failure
▪ The Witness node is solely a lightweight clone of the Active node
protecting against a split-brain scenario

• vCenter Server HA requirements


o vCenter HA must meet the following requirements:
▪ vCenter Server Standard license
▪ VCSA minimum 4 vCPU / 16GB RAM, requiring a 'Small' vCenter
Deployment size (see Section 1.1 above) or larger
▪ Protected vCenter Server Appliance (VCSA) needs to be version 6.5 and
higher
▪ Datastore support for VMFS, vSAN, and NFS
▪ ESXi 5.5 or higher, and strongly recommended to use 3 Hosts
– If vSphere DRS is used, 3 Hosts are required
▪ If a management vCenter Server is used, version 5.5 or higher
▪ Network requires a separate subnet for vCenter HA with less than 10ms
latency

• vCenter HA Implementations
o You can implement VCSA HA in one of two ways
▪ Basic configuration requires one of 2 environment VCSA setups to
implement: the VCSA is managing its own ESXi Host and VM (i.e. self-
managed vCenter) or the VCSA is managed by another vCenter Server
(management vCenter) and both are in the same SSO domain (i.e. v6.5).

Basic workflow is as follows:


– User deploys the VCSA which becomes the Active node
– User adds a 2nd network port group used for vCenter HA on each Host
– User starts vCenter HA process, selecting Basic and supplying IP
address, target Host or Cluster, and Datastore for each clone


– The system automatically clones the Active node as a Passive node,
with the same network and hostname settings, & adds a 2nd vNic to each node
– The system clones the Active node as a Witness node
– The system sets up the vCenter HA network the nodes communicate on
▪ Advanced configuration allows more control over the deployment. With
this option, you have to clone the Active node manually. And, if you
remove vCenter HA, you have to delete all nodes you created. The
Advanced workflow is as follows:
– User deploys the VCSA which becomes the Active node
– User adds a 2nd network port group used for vCenter HA on each Host
– User adds a 2nd vNic to the Active node
– User logs into the VCSA with vSphere Client
– User starts vCenter HA process, selecting Advanced and supplying IP
address and subnet information for the Passive and Witness nodes
– User logs into the management vCenter and creates 2 clones of the
Active node
– User returns to the vCenter HA configuration to finish the process
– The system sets up the vCenter HA network the nodes communicate on

o vCenter HA uses two networks on the VCSA


▪ Management network configured with a static IP, serves client requests
▪ vCenter HA network connects the 3 nodes and replicates the appliance
state, as well as monitors heartbeats. This network
– Must have a static IP
– Must be on a different subnet than the management network
– Must have network latency < 10ms between all 3 nodes
– Must not have a default gateway for the Cluster network
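Two of the vCenter HA network constraints above (separate subnet from management, under 10ms latency) lend themselves to a quick pre-flight check. A sketch only — the function name and parameters are my own, not a VMware tool:

```python
import ipaddress

def vcha_network_ok(mgmt_ip, mgmt_prefix, ha_ip, ha_prefix, latency_ms):
    """Check that the vCenter HA network is on a different subnet than
    the management network and that node-to-node latency is under 10ms,
    per the requirements listed above."""
    mgmt_net = ipaddress.ip_interface(f"{mgmt_ip}/{mgmt_prefix}").network
    ha_net = ipaddress.ip_interface(f"{ha_ip}/{ha_prefix}").network
    return mgmt_net != ha_net and latency_ms < 10
```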

• Configure vCenter HA
o To configure vCenter HA, click on vCenter Server in the vSphere Client >
Configure tab, then select vCenter HA under the Settings section. Choose the
desired deployment type, Basic or Advanced, by deselecting the option to
Automatically create clones for Passive and Witness nodes, then follow the
workflow described above


o After setting up your vCenter HA environment, you can initiate a failover to
verify the Passive node takes over as Active. From the vCenter HA screen, click
Initiate Failover to begin the process
o You can also place the vCenter HA Cluster in maintenance mode, or even disable
it. In vCenter HA settings, select Edit then choose any of the shown options –
Enable, Maintenance Mode, Disable, or even Remove vCenter HA Cluster


o If you need to shut down the Cluster nodes for any reason, they must be shut
down in a specific order:
– Passive node
– Active node
– Witness
* vCenter HA Nodes can then be started up in any order

o Lastly, you can back up the Active node for added protection against failure.
Keep in mind, before restoring the Active node, if secondary nodes are still
present, you need to remove them as not doing so may result in unpredictable
behavior

Objective 4.8: Setup a Content Library


Resource: vSphere 8.0 Virtual Machine Administration Guide (Jan 2023), pp. 50-90

First, let me share what a Content Library is, then I will move on with Content Library
configuration details. A Content Library is a container object for VM and vApp templates, as
well as files such as ISOs, images, and txt files. Using templates, you can deploy VMs or
vApps in the vSphere inventory. Doing so across vCenter Server instances in the same or
different location results in consistency, compliance, efficiency, and automation in
deploying workloads at scale. Content Libraries store and manage content in the form of
library items. A single item can contain one or multiple files. As an example, an OVF consists
of .ovf, .vmdk, and .mf files. When uploading an OVF template, all files are uploaded, but
you only see one library item in the Content Library UI. Starting with vSphere 7U1, Content
Libraries not only support the OVF template format (sole format previously), but now also
support VM templates (NOTE: vApp templates are still converted to OVF templates).
Content Libraries are managed from a single vCenter Server, but you can share them with
other vCenters if HTTPS traffic is allowed between the instances. There are two
Content Library types:


Local Library is used to store items in a single vCenter Server. You can publish them so other
vCenter Server users can subscribe to them, as well as configure a password for
authentication to the Library. A caveat is if you have a VM template in the Local Library, you
cannot publish it

Subscribed Library is a library you create when you subscribe to a published library. The
Subscribed Library can be created in the same vCenter as the published library or in a remote one.
When creating a Subscribed Library, you can choose to immediately download the full
contents of the library upon creation, or download only the published library metadata and
later download the full content of the library. Keep in mind, when using a Subscribed
Library, you can only use the library content, not add items to it.

• Content Library pre-requisites


o The only pre-requisites for creating a Content Library are to be running at least
vCenter Server 6.5 and to have the Content library.Create local library or Content
library.Create subscribed library privilege

4.8.1 – Create a Content Library

• Create a Content Library


o To create a Content Library from the vSphere Client, select Menu > Content
Libraries, then from the Content Library window click + to Create New Content
Library


o Choose the Library type you want to create, and optionally publish it and/or
enable password authentication to the Library

o To make sure you have the latest Library content if using a Subscribed Library,
you can initiate a manual synchronization by rt-clicking the Library >
Synchronize. Make sure you have the Content library.Sync subscribed library
privilege


o When delegating permissions for Content Libraries, it’s important to understand
the Content Library hierarchical structure. They work in vCenter Server, but are
not direct children of vCenter; they are under the global root. So if you grant a
user a Content Library permission at the vCenter Server level, the user will not be
able to see or manage Content Libraries. You need to assign permissions at the
global root level. There is a sample Content Library Administrator role which can
be assigned, or you can copy this role and modify the permissions (and rename
it) to fit your organizational needs. Administrators at the vCenter Server level can
manage Content Libraries, but only if also given the Read-Only role at the global
root
o One last little nugget introduced in vSphere 7 is the ability to view history of
changes to Templates, allowing you to revert to a prior change if need be (like
reverting to a previous snapshot). This is referred to as 'version control'. Cool, eh?!
I'll discuss this next in Objective 4.10 below

4.8.2 – Add Content to a Content Library

• Importing and exporting content to/from a Content Library


o A user must have at least the Content library.Add library item or .Update files
privileges
o To add Items to Content Libraries, you Import them. To do so, simply rt-click the
Content Library and choose Import Library Item. You can choose to import from
a URL or local file. Conversely, to export, simply select Export Item instead of
Import
o For further Content Library management tasks, review pp. 68-90

4.8.3 – Publish a Local Content Library

• To Publish a Local Content Library, rt-click the Content Library > Edit Settings, then click
the Enable Publishing box and, if desired, enable password authentication


Take note (or Copy) of the Subscription URL as this will be needed when wanting to
create a Subscribed Content Library
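For illustration, the Subscription URL of a Published Library generally points at the library's lib.json manifest. A minimal sketch of the format (the vCenter FQDN and library ID below are hypothetical — copy the real URL from your Library's Edit Settings dialog):

```shell
# Hypothetical values - substitute your vCenter FQDN and the library ID
# shown in the Edit Settings dialog of your Published Content Library.
VCENTER="vcsa01.lab.local"
LIBRARY_ID="8c1e120f-2a42-4eab-ae26-9c9e04b4d0a2"
SUBSCRIPTION_URL="https://${VCENTER}/cls/vcsp/lib/${LIBRARY_ID}/lib.json"
echo "${SUBSCRIPTION_URL}"
```

The subscriber side pastes this URL when creating a Subscribed Content Library.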

Objective 4.9: Subscribe to a Content Library


Resource: vSphere 8.0 Virtual Machine Administration Guide (Jan 2023), pp. 59-62

4.9.1 – Create a Subscribed Content Library

• I already touched on this in Sub-Objective 4.8.1. To create a Subscribed Content Library,


you simply create a new Content Library and choose Subscribed Content Library and fill
in the requested information of any Published (Local) Content Library – its URL,
authentication information if required, and whether to sync the data immediately or
when needed

4.9.2 – Subscribe to a Published Content Library

• Subscribing to a Content Library is the same as creating a Subscribed Content Library

4.9.3 – Deploy Virtual Machines From a Subscribed Content Library

• To create a VM from Content Library content (i.e. Template), click the Content Library
name to open the Library. Then, select the VM Templates tab, rt-click a Template and
select New VM from this Template. Configure information as you're prompted in the
wizard

Objective 4.10: Manage Virtual Machine Template Versions


Resource: vSphere 8.0 Virtual Machine Administration Guide (Jan 2023), pp. 88-90

• When managing VM Templates in a Content Library, you perform a 'Check Out VM from
VM Template' operation to make changes to the Template. This is needed if, for
example, you need to perform guest OS or any OS application updates, or make any
other configuration changes. This process is analogous to the 'Convert to Virtual
Machine…' operation on regular vCenter Server Template objects. When you are done
with making any changes, you perform a 'Check in VM' operation; again, analogous to a
'Convert to Template' operation

4.10.1 – Update Template in a Content Library

• As mentioned above, to update a VM Template in a Content Library, you simply need to


Check Out the VM. To check out a Template, do the following:
o Content Library > click the Library item to open it > select the VM Templates tab
> choose a Template and click on it, then click the Check Out VM from this
Template button and finish the requested wizard items. Power on the VM and
make changes as needed
o After Template changes are needed, from the same Library VM Templates tab,
select the VM Template in the list > click Check In VM to Template button
o Template Versioning: if you need to revert a VM Template to a previous version,
from the same Library VM Templates tab, select the VM Template in the list,
then from the vertical timeline, navigate to the Previous State of the VM
Template, click the horizontal ellipsis icon (…) and select Revert to This
Version; or, if you need to remove the previous version, select Delete Version

Objective 4.11: Configure vCenter Server File-Based Backup


Resource: vCenter Server 8.0 Installation and Setup Guide, Update 1 (2023), pp. 51-64

• vCenter Server File-Based Backup (FBB) Overview


o FBB allows you to recover your vCenter Server in the event of a failure instead of
having to re-configure it all from scratch. Using the vCenter Server Management
Interface (VAMI), you configure a remote server with any one of several

supported protocols – FTP, FTPS, HTTP, HTTPS, SFTP, NFS, or SMB to transfer a
backup of your vCenter Server config files. Then, using the vCenter Server GUI
Installer, you can perform a restore – view the deploy VCSA UI screenshot I
shared in Objective 4.5 above.
What exactly is backed up? "Inventory and configuration" information such as:
▪ Configurations – VM resource settings, DRS configuration/rules, Resource
Pool hierarchy & settings
▪ Storage DRS – Datastore Cluster configuration/membership, SIOC
▪ vSphere HA, with potential caveats
▪ Review pp. 51-55 for further details, limitations, potential inconsistencies
▪ You can optionally also select to backup 'Stats and Data' (see screenshot
below)

• Configure File-Based Backup


o Log into the VAMI (https://IPorFQDN_VCSA:5480) > Backup on the left
pane
o Under the Backup Schedule section, click the Configure link on the right. NOTE: a
remote server must be set up prior to creating your backup schedule

An example of the remote server URL format is provided in the UI.

The Backup will then run on the schedule you define. If you want to run a manual
backup, simply click the Backup Now link under the Activity section.
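The remote server location is simply <protocol>://<server>[:port]/<path>. A quick sketch of assembling one (the server name and folder below are hypothetical):

```shell
# Hypothetical backup target; any supported protocol works the same way:
# FTP, FTPS, HTTP, HTTPS, SFTP, NFS, or SMB.
PROTOCOL="sftp"
BACKUP_SERVER="backups.lab.local"
BACKUP_FOLDER="vcsa/vcsa01"
BACKUP_URL="${PROTOCOL}://${BACKUP_SERVER}/${BACKUP_FOLDER}"
echo "${BACKUP_URL}"
```

You enter the same location string later during a restore, so keep a record of it alongside the backup credentials.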

o Restore process (pp. 59-64):


▪ From the VCSA installer, browse to the OS type sub-folder (lin64, mac,
win32) and open the Installer; select the Restore option
▪ Click Next through the first couple screens, then enter your Backup
Location details, identical to the settings used when creating your Backup
Schedule
▪ Enter vSphere connection (ESXi or vCenter), similar to when you first
installed VCSA
▪ Select Deployment and Storage size; finish the remaining screens
▪ After the VCSA is deployed, Stage 2 begins the restore process. If you
have Enhanced Linked Mode, you will be prompted for SSO credentials

Objective 4.12: Configure vSphere Trust Authority


Resource: vSphere 8.0 Security Guide (March 2023), pp. 256-293

• vSphere Trust Authority (vTA) was discussed in more detail in Sub-Objective 1.8.1 above.
Review this for a refresher. Use the following procedures to configure a vTA:

Pre-requisites –
o Ensure ESXi Hosts used in a Trust Authority or Trust Cluster have TPMs v2.0
o Gather ESXi Hosts TPM Endorsement Key (EK) Certificates
o ESXi Hosts must have UEFI Secure Boot enabled
o Must be using vCenter Server 7.0+
o A dedicated vCenter Server for the Trust Authority Cluster and ESXi Hosts
o A separate vCenter Server for the Trusted Cluster and Trusted ESXi Hosts; NOTE:
only 1 Trusted Cluster allowed per vCenter
o A key server (KMS) already in place
o VMs must be configured for EFI firmware and Secure Boot
o Privileges listed on pg. 204
o Note the following unsupported vSphere features:
▪ vSAN encryption
▪ First Class Disk (FCD)
▪ vSphere Replication
▪ Host Profiles

• The specific details on how to configure vTA are quite lengthy. As such, I'll only cover the
high-level steps mentioned in the Security Guide, on pp. 257-258. Please review those
pages and subsequent pages 258-292 for explicit details on the implementation and
configuration process. Also, be familiar with vTA terms on pp. 241-242.
Configuration –
o Set up your workstation to configure the vTA
o Enable the Trust Authority Admin – assign users to the TrustedAdmins group
o Enable the Trust Authority State – make a vCenter Server Cluster into a Trusted
Authority Cluster
o Collect information about vCenter Server & ESXi Hosts to be trusted (export as
files)
o Import the Trusted Host information into the Trust Authority Cluster
o Create the Key Provider on the Trust Authority Cluster
o Export the Trust Authority Cluster information – this will be used later to import
into a Trusted Cluster
o Import the Trust Authority Information to the Trusted Hosts
o Configure the Trusted Key Provider for the Trusted Hosts (Client or CLI).
From the vSphere Client, select vCenter > Configure tab > under Security section
select Key Providers. Click to Add Trusted Key Providers. My screenshot shows
Add Standard Providers because I haven’t configured my vCenter and Cluster as
a vTA

Objective 4.13: Configure vSphere Certificates


Resource: vSphere 8.0 Authentication Guide (February 2023), pp. 18-81
vSphere 8.0 Security Guide (March 2023), pp. 74-93

• VMware Certificate Management (VMCA)


o Before I go over the explicit certificate configuration process, you can change any
certificate CSR or Mode options you want by clicking on vCenter Server in the
vSphere Client > Configure > Settings section and select Advanced Settings ,
then Edit Settings

NOTE: New with vSphere 8, you can only generate a CSR with a minimum 3072-
bit key length, but vSphere still accepts importing custom certificates with 2048-bit keys

o Click the filter icon (hourglass icon in the Name column), and type in:
vpxd.certmgmt and you can then view all cert parameters. Most are used for
ESXi Host CSRs to the VMCA (VMware Certificate Authority, i.e. vCenter). For the
mode parameter, the default is vmca. There are 3 possible modes: vmca ,
custom (for using your own CA), and thumbprint , used for legacy 5.5/6.0
vSphere versions, now deprecated.

o To provision ESXi Hosts with a new VMCA certificate, you can do one of a few
things – disconnect/reconnect the Host from/to vCenter, or click on a Host in the
Client > Configure > under System, select Certificate. Then click the Renew
Certificate link. Or simply rt-click on a Host > Certificates

o Managing Certificates can be done a few ways – vSphere Client (minimally),


traditionally through command-line connected to vCenter: /usr/lib/vmware-
vmca/bin/certificate-manager , or other CLI methods: certool , vecs-
cli , dir-cli , sso-config – for STS mgmt, and service-control
o Certificate Management Modes are:

▪ VMCA – vCenter Server (VMCA) handles all certs

▪ Hybrid – VMware-recommended approach; the Machine SSL cert is
replaced with one from your own or an external CA, while VMCA continues
to handle ESXi Host & solution user certs

▪ Custom – all certs are replaced with org or external CA

o vSphere Certificates are stored:


▪ ESXi: /etc/vmware/ssl
▪ VMware Endpoint Certificate Store (VECS) – local repository for
certificates, private keys, and other cert info
▪ Machine SSL: VECS; used by reverse-proxy, vmdir, & vpxd services
▪ Solution users: VECS; used by machine (STS), vpxd , vpxd-
extension, vsphere-webclient , & wcp (Tanzu)
o Import Certificate Requirements
▪ 2048-bit to 16384-bit length
▪ PEM format – PKCS8 or PKCS1
▪ CRT format
▪ x509 version 3
▪ SubjectAltName must contain DNS name=machineFQDN
▪ Key Usage – Digital Signature, Key Encipherment
▪ Extended Key Usage – empty or Server Authentication
▪ Wildcard certificates are not supported
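If you're generating your own CSR to meet these requirements, a sketch using openssl might look like the following (the FQDN is a placeholder, and your CA may impose additional requirements — verify against the Security Guide before submitting):

```shell
# Generate a 3072-bit key and a CSR whose SubjectAltName carries the
# machine FQDN, in line with the import requirements listed above.
# vcsa01.lab.local is a placeholder - use your actual FQDN.
openssl req -new -newkey rsa:3072 -nodes \
  -keyout vcsa01.key -out vcsa01.csr \
  -subj "/C=US/O=Lab/CN=vcsa01.lab.local" \
  -addext "subjectAltName=DNS:vcsa01.lab.local" \
  -addext "keyUsage=digitalSignature,keyEncipherment" \
  -addext "extendedKeyUsage=serverAuth"

# Sanity-check the CSR before submitting it to your CA
openssl req -in vcsa01.csr -noout -verify
```

Note the -addext flag requires OpenSSL 1.1.1 or later; some CAs also apply their own key-usage extensions at signing time regardless of what the CSR requests.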
o To configure your environment certificates, use the cert management utility via
SSH to vCenter Server. Select option 1 to begin Hybrid Mode

Before going forward with this Mode, you must first create a cert template to
use for the Machine-SSL cert. VMware has an amazing walkthrough created
during the vSphere 6.5 release by Adam Eckerle. It still applies for vSphere 8,
except you no longer connect to a PSC; you perform the tasks solely against
vCenter Server. You can view the walkthrough here ; other info also can
be found in VMware KB 2112009 . You will also need the Root Cert
of the CA you’re using, and any Subordinate CA certs, if implemented (Microsoft
CA recommends using Sub-CAs), for chaining certs, as noted in the workflow
o You then basically follow the prompted workflow, the walkthrough, and KB if
needed. After replacing the Machine-SSL cert, you should be able to connect to
vCenter with the vSphere Client and the cert be trusted. NOTE: ESXi Host certs
will not be trusted, unless you add the VMCA as a Trusted Root in your local CA
store

4.13.1 – Describe Enterprise PKI’s Role for SSL Certificates

• Enterprise Public Key Infrastructure (PKI) role is dependent on the business. How a
company manages its PKI will depend on several factors – auditing, security practices
and requirements, compliance (i.e. SOX for Finance or HIPAA for Healthcare), budget
resources, FTE expertise, etc. Larger enterprises tend to have a more robust PKI, utilizing

external CAs to manage certs for most of their environment, and thus tend to configure
their virtual infrastructure using the Custom method; whereas smaller entities are
content to use VMCA or Hybrid methods for their environment. So, what role do
Enterprise PKIs play for SSL certs? Not much if a business doesn’t have an in-house
infrastructure; or, semi-extensive if using an external CA (needing to generate CSRs for
infrastructure cert requests, etc.)

Objective 4.14: Configure vSphere Lifecycle Manager (vLCM)


Resource: Managing Host and Cluster Lifecycle Guide (2022)

• Configure vSphere Lifecycle Manager (vLCM)


o From an install standpoint, there is really nothing you need to install because
vLCM comes pre-installed with the VCSA. But there are some settings you need
to look at to see if the defaults meet your needs, as well as create
Baselines/Groups and/or Images. To determine if you should use Images or the
legacy Baselines method, review the vLCM theory information discussed in
Objective 1.6. To review vLCM UI and settings, log into the vSphere Client >
Menu drop-down > and select Lifecycle Manager.

You are then presented with a screen of several tabs – Image Depot, Updates,
Imported ISOs, Baselines, and Settings. Go to the Settings tab to review the
information. Under the Administration section, you can modify the Patch
Downloads schedule by changing the time, and even disable it by unchecking the
'download patches' checkbox in the Edit screen, but you cannot delete it. For
Patch Setup, you can disable the pre-defined URLs, but not delete. You can
create a custom download source of 3rd party vendors by clicking the New
button.

Edit the Settings for Host Remediation for Baselines and Images (pretty much
same info for each), as well as VMs

After you’re satisfied with the vLCM settings, you can go to the Baselines section
and create Baselines, or use any of the 3 pre-defined ones. You can’t delete
them, but if they contain most of the settings you want, you can make a copy of
them so you don’t have to create a new one from scratch. Just give it a new
name upon creation. If there is a Baseline version of ESXi you need to deploy, the
Imported ISOs section is where you upload it. To use it, you have to create an
Upgrade Baseline type (not a 'patch' or 'extension').

Once you’ve created all the foundational settings, you’re ready to start patching.
To begin, go to your vSphere Inventory and click on a container (Datacenter or
Cluster) or individual Host, then select Updates tab. Scroll down to attach any of
your Baselines you created, or pre-defined ones. After you attach all you want,
check the container for Compliance

After you’ve checked for Compliance, scroll down to the Attach Baselines section
& choose to Remediate. vLCM will perform remediation according to the settings
you configured – place the Host in maintenance mode, migrate VMs, & perform
a restart or Quick Boot, depending on what you set up. Hosts will remediate in
succession, unless you configured to do so in parallel. To remediate groups of
VMs, go into VMs & Templates view, click a VM Folder container > Updates tab,
& perform a VM Tools and/or Hardware Upgrade to match the Host operation.

o The only remaining item to cover are Images. To use vLCM Images, click on a
Cluster > Updates, then select Images under the Hosts section and go through
the process to set up the Images method. As mentioned earlier, if you
choose to use this patching method, you cannot go back to Baselines. The
Baselines option will be removed from the Host section list

Objective 4.15: Configure Different Network Stacks


Resource: vSphere 8.0 Networking Guide (2022), pp. 83-84
vSphere 8.0 Single Host Management Guide (2022), pp. 162-165

• TCP/IP Networking Stacks


o There are 3 default stacks – Default, which can handle several traffic types;
vMotion; and Provisioning, which handles snapshotting and cold migrations

Click on any item in the list, then Edit to change the networking settings for the
given Stack

• To create a Custom stack, SSH into a Host and run:

esxcli network ip netstack add -N "New_Stack_Name" ; once added, you
can then assign a VMkernel Adapter to it in the vSphere Client

Objective 4.16: Configure Host Profiles


Resource: vSphere 8.0 Host Profiles Guide (March 2022), pp. 18-24

• VMware ESXi Host Profiles


o Host Profiles provide VMware Admins a centralized mechanism for Host
configuration and configuration compliance to more easily manage multiple
Hosts and Clusters of Hosts

• Host Profile Configuration


o Configuring Host Profiles involves the following high-level tasks:
▪ Set up and configure a Reference Host; NOTE: since vSphere 6, the
Reference Host is no longer required to be available to perform certain
Host Profile tasks
▪ Extract the configuration, which is the creation of a Host Profile
▪ Attach individual ESXi Hosts or full Clusters to the Host Profile
▪ Check the Hosts or Clusters for Compliance
▪ If there is non-compliance, Apply the Host Profile (remediate)

o Profile creation: Menu drop-down in vSphere Client > Policies and Profiles >
Host Profiles
o From the main menu, click the Extract Host Profile link, choose a Host from the
list, give it a name, then click Finish

Once extracted, it will appear in the Profile list as a link. Click it to view
it. From here, if you need to, you can edit its settings. Scroll through the
Levels and add (click green “+”) or remove (red “x”) items as needed, add
parameters, etc.

After all is good, click Save, then from the Actions menu, choose the
Attach/Detach Hosts and Clusters option. Clicking the Clusters or Hosts objects
in the left pane of the Profile main window, you can verify the Profile is attached

o Either from the Action menu (shown above) or by rt-clicking the Host Profile,
select to Check Host Profile Compliance. If any objects are not compliant, simply
apply the Profile by choosing Remediate. Before remediation, I recommend
doing a 'pre-check' to see what exactly will be changed on the Host or Cluster

Objective 4.17: Identify ESXi Host Boot Options


Resource: ESXi 8.0 Installation and Setup Guide (2022)

• ESXi Boot Options


o There are several ways to boot ESXi, because there are several ways it can be
installed:
▪ Local disk or USB
▪ PXE
▪ Via Auto Deploy
▪ Scripted

o For scripted Boot options, at the ESXi bootloader screen, press SHIFT+O

o You are presented with the runweasel command prompt, where you enter
kickstart (ks.cfg) boot options. Below is an example (default location:
/etc/vmware/weasel/ks.cfg & password - myp@ssw0rd ):
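For illustration, a minimal installation script along those lines might look like this sketch (values are examples, not the verbatim default file):

```
# Example ESXi kickstart (ks.cfg) - illustrative values only
vmaccepteula
rootpw myp@ssw0rd
install --firstdisk --overwritevmfs
network --bootproto=dhcp --device=vmnic0
reboot
```

You would point the installer at a script like this with a boot option such as ks=<location>.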

For a full list of ESXi Boot options, refer to pp. 80-82 in the Install Guide. I
recommend being familiar with the parameters and the format they need to be
in.

4.17.1 – Configure Quick Boot


VMware KB Article

• First, Quick Boot simply "restarts" only the vSphere software on ESXi Hosts. A full
hardware (cold) restart is not performed, so remediation process times are significantly
reduced. Another beauty of Quick Boot is there’s nothing on the hardware side you
really need to configure. Server vendors have hardware configured for this out of the
box. All you need to do is make sure you are running vSphere on Quick Boot-supported
hardware. Check the KB referenced at the beginning of this Objective for more info.
That being said, to make sure Quick Boot is used during Lifecycle Management tasks, as
noted in the vLCM Objective above, the Remediation settings in vLCM need to have
Quick Boot enabled. The option is enabled by default, but do verify it

4.17.2 – Securely Boot ESXi Hosts

• ESXi Secure Boot


o Secure Boot has been in vSphere since version 6.5. To configure it, you need
hardware with UEFI Secure Boot support (a TPM 2.0 chip additionally enables
host attestation), then just enable the option in the firmware. The ESXi bootloader contains
a public key and uses this key to verify the signature of the kernel and the secure
boot VIB verifier. The VIB verifier verifies every installed VIB. If any part is
unsigned, the Host fails to boot

Objective 4.18: Deploy and Configure Clusters Using the vSphere Cluster
Quickstart Workflow
Resource: vCenter Server 8.0 Host and Management Guide (2022), pp. 45-53

• Cluster Quickstart is a workflow within the vSphere Client that groups common
tasks and offers configuration wizards which guide you through the process of
configuring and extending a vSphere Cluster. To use the Cluster Quickstart workflow, ESXi
Hosts must be 6.7U2 or later. And, you cannot assign licenses to ESXi Hosts in the
Cluster via Quickstart. Licenses must be manually assigned.

The Cluster Quickstart workflow consists of 3 cards to configure your Cluster:


o Cluster Basics card – enabling Cluster features such as DRS, HA, vSAN
o Add Hosts card – add ESXi Hosts to the Cluster
o Configure Cluster card – configure network settings for vMotion and vSAN,
customize Cluster services, and create a vSAN Datastore

• To configure a Cluster with Quickstart, you must first create a Cluster. To do so, rt-click
on a Datacenter object in the vCenter Inventory > New Cluster. Give the Cluster a name,
and enable any services you want – DRS, HA, vSAN. You can also optionally choose to
Manage all Hosts in the Cluster with a single image. This option refers to Cluster
Lifecycle Management. I touched on Images in Objective 1.6. After you create your
Cluster, you are directed to the Cluster Quickstart configuration location under
Configure tab > Configuration section > Quickstart

• To continue using the Quickstart workflow, click the Add button from the Add Hosts
card, and provide the Hostname and credentials for each ESXi Host you want to add to
the Cluster. Click Next, then OK to finish

• Continue with Quickstart Cluster configuration by clicking the Configure button from the
Configure Cluster card, and change any configuration options which meet your
organizational needs

Objective 4.19: Setup and Configure VMware ESXi


Resource: vCenter Server 8.0 Host and Management Guide (2022)
ESXi 8.0 Installation and Setup Guide (2022)

4.19.1 – Configure Time Configuration

• ESXi Time Configuration


o To configure time on an ESXi Host, select it in the vSphere Client Inventory >
Configure tab > System section > Time Configuration, then click the Add Service
drop-down and choose Precision Time Protocol or Network Time Protocol
o Enter the URL of a PTP or NTP server then click OK; use a comma to separate
multiple URLs
o Stop/Start the Time Service

4.19.2 – Configure ESXi Services

• ESXi Services
o To configure any ESXi service, select an ESXi Host in the vSphere Client
Inventory > Configure tab > System section > Services, then select a service from
the list and choose an option: Restart, Start, Stop, Edit Startup Policy…
o You can also manage ESXi services directly from an ESXi Host. Log into the ESXi
Host Client, click Manage, then choose the Services tab. Select a service from the
list, then choose a management option

4.19.3 – Configure Product Locker

• ESXi Product Locker


o Product Locker is a symbolic link to the ESXi Host storage location for VMware
Tools. To modify this path, select an ESXi Host in the vSphere Client
Inventory > Configure tab > System section > Advanced System Settings, then
click the Edit button. To quickly find this option, click the filter (hourglass) icon to
the right of 'Key' and type in productlocker. The full parameter name is
UserVars.ProductLockerLocation . To set this parameter value, click in its
'Value' field and enter the path

4.19.4 – Configure Lockdown Mode

• Lockdown Mode
o To enable Lockdown Mode on ESXi Hosts, select a Host > Configure tab >
System section, then click Security Profile. In the Lockdown Mode section click
the Edit button

o Lockdown Mode behavior when enabled:


▪ Normal lockdown mode – privileged users can access vCenter through
the vSphere Web Client or SDK; local Host administrator users who are in
the Exception Users list can log into the DCUI, or any user in the
DCUI.Access ESXi Host Advanced setting can access the DCUI
▪ Strict lockdown mode – as with normal mode, privileged users can access
vCenter through the Web Client or SDK; DCUI access is disabled for all
users
o For SSH or Shell access, when in Lockdown Mode, users with administrative
permissions in the Exception Users list, or who are in the DCUI.Access
Advanced setting can use these services. To add users to this group, you first
create a local Host user, or you can use an AD user, then from the Host >
Configure tab > System section, select Advanced System Settings search for
DCUI.Access, then type in the user

4.19.5 – Configure ESXi Firewall

• ESXi Firewall
o To configure firewall services and their behavior, select an ESXi Host, select it in
the vSphere Client Inventory > Configure tab > System section > Firewall. Click
either the Inbound or Outbound tab, then the Edit button. Select a Firewall
service and click the right arrow ( > ), to expand configuration options (Allow IP
list)

Objective 4.20: Configure VMware vSphere With Tanzu


Resource: Installing and Configuring vSphere 8.0 With Tanzu Guide (2023)
Using Tanzu Kubernetes Grid 2 With vSphere 8.0 With Tanzu (2022)

4.20.1 – Configure a Supervisor Cluster and Supervisor Namespace

• As a reminder from the Tanzu discussion in Objective 1.13, you associate a Supervisor to
a Cluster of 1-3 Cluster resources, which are mapped to vSphere Zones. A 3-Zone
Supervisor provides failure protection at a Cluster level, whereas a 1-Zone Supervisor
provides failure protection only at an ESXi Host level. When making the decision which
deployment to use, keep in mind if you do go with the 1-Zone Supervisor, you can't
expand the Supervisor to a 3-Zone deployment later on

• Pre-requisites to be aware before creating your Supervisor Clusters:

o Each Cluster must have at least 2 ESXi Hosts; or, 3 ESXi Hosts for Clusters using
vSAN
o Clusters must be using shared storage
o Enable HA on the Cluster
o Enable DRS with Fully-Automated or Partially-Automated mode; NOTE: disabling
DRS at any point on the Supervisor will break your Tanzu Kubernetes Clusters
o The account provisioning the Cluster must have the Modify cluster-wide
configuration permission
o Create Storage Policies to determine datastore placement of Supervisor control
plane VMs; and, Policies for containers and images if the Supervisor supports
vSphere Pods
o Configure a networking stack using either NSX or vDS with NSX Advanced Load
Balancer or HAProxy Load Balancer
o Create a Content Library to provide the system with distributions of Tanzu
Kubernetes releases in the form of OVA templates
o Create vSphere Zones; see Sub-Objective 4.20.3 below

• Create a Supervisor:
3-Zone Supervisor –
o Click Home menu > Workload Management

o Click the Add License button if you have a valid Tanzu License, or do a 60-day
trial
o Scroll down and click the Get Started button; NOTE: if you're evaluating, you
cannot continue without entering personal info
o On the vCenter Server and Network page, choose vDS as the networking
stack, if not using NSX
o On the Supervisor Location page, select vSphere Zone Deployment and enter a
Supervisor name, choose the Datacenter object the Zones are located in, then
select the compatible/configured Zones
o On the Storage page, configure storage for placement of control plane VMs
o Configure Load Balancer settings
o For the Management Network, configure settings used for the network used for
Kubernetes control plane VMs -> DHCP or Static network mode
o For the Workload Network, configure network traffic settings used for
Kubernetes workloads running on the Supervisor -> DHCP or Static network
mode; NOTE: if using DHCP, you can't create new Workload networks after the
Supervisor is created
o On the Tanzu Kubernetes Grid page, click the Add button and select the
subscribed Content Library containing the VM images for deploying the Tanzu
Kubernetes Cluster nodes
o Review all your settings, then click Finish
Creating a 1-Zone Supervisor is virtually the same. To view the slight differences,
review pp. 98-113 in the Tanzu Install and Configure Guide

• Configuring vSphere Namespaces


o Namespaces are logical vSphere objects on which you set resource quotas and
user permissions, similar to vSphere Resource Pools
o Items to note regarding Namespaces:
▪ Namespaces on 1-Zone Supervisors with NSX support vSphere Pods, VMs,
and TKG Clusters
▪ Namespaces on 3-Zone Supervisor with NSX do not support vSphere Pods
▪ Namespaces using the vSphere Networking Stack do not support vSphere
Pods; they only support TKG Clusters and VMs
o Namespaces creation pre-requisites:
▪ A Supervisor must already be created
▪ Create users or groups to access the Namespace
▪ Create policies for persistent storage
▪ Create VM classes and Content Libraries for stand-alone VMs
▪ Create a Content Library for Tanzu Kubernetes releases to use with TKG
Clusters
▪ Ensure proper permissions to create Namespaces
o Namespace creation:
▪ Click Home menu > Workload Management
▪ Select the Namespace tab > Create Namespace button
▪ Select the Supervisor to place the Namespace in
▪ Enter a Namespace name
▪ Select the Workload network from the Network drop-down for vSphere
networking stack; select Override cluster network settings if using NSX
▪ Click Create when settings have been configured
▪ Assign storage to the Namespace from the Storage pane > Add Storage
▪ Set up VM Service for stand-alone VMs
▪ Configure Namespace for TKG Clusters
▪ Configure resource limits: Select the Namespace > Configure > Resource
Limits
▪ Configure object limits: Select the Namespace > Configure > Object
Limits, then Edit

4.20.2 – Configure a Tanzu Kubernetes Grid Cluster

• To configure a TKG Cluster:


o Configure vSphere Zones
o Create Storage Policies
o Configure vSphere Networking stack -> NSX with Load Balancer or vDS with
external Load Balancer
o Install a License

o Configure a Supervisor instance


o Install CLIs
o Create a Content Library used for Tanzu Kubernetes releases
o Create/configure Namespaces
o Provision TKG2 Clusters -> reference Ch. 6 of the Using TKG2 Guide for details

4.20.3 – Configure vSphere Zones

• Create vSphere Zones


If using a 3-Zone Supervisor
o Select the vCenter Server in the Inventory > Configure tab > vSphere Zones,
then click Add New vSphere Zone
o Give the Zone a name
o Select a Cluster to map to the Zone, then click Finish
o Repeat to create Zones for a 3-Zone Supervisor
o If you need to remove a Cluster from a Zone, you can only do so when a
Supervisor is not enabled on the Zone. To remove the Cluster, click on the Zone
and select the ellipsis (…) > Remove Cluster

4.20.4 – Configure Namespace Permissions

• Be sure to first create users or groups beforehand


• Click Home menu > Workload Management
• From the Permissions pane, click Add Permissions, then select an Identity Source, then
a user or group, and a role

SECTION 5: Performance-tuning, Optimizations, and Upgrades


Objective 5.1: Identify Resource Pools Use Cases
Resource: vSphere 8.0 Resource Management Guide (2022), pp. 71-81

• Resource Pools (RPs) characteristics

o Resource pools are logical abstractions for flexible management of CPU and
memory resources, grouped in hierarchies
o Each standalone ESXi Host & vSphere Cluster has an invisible root resource pool.
When adding an ESXi Host to a Cluster, any resource pool configured on the Host
will be lost, and the root resource pool will be a part of the Cluster


o RPs utilize Share, Reservation, and Limit constructs. I will define these terms and
give further explanation and examples of how to use & calculate them in Sub-
Objective 5.1.1 below:
o Resource pools behave in the following ways:
▪ A VM’s Reservation and Limit settings do not change when moving into a
new resource pool
▪ A VM’s % Shares adjust to reflect the total number of Shares in use in a
new resource pool, if a VM is configured with default High, Normal, Low
values. If Share values are custom, % Share is maintained
▪ Resource pools use Admission Control to determine if a VM can be
powered on or not. Admission Control determines the unreserved
resource capacity (CPU or memory) available in a resource pool
o A root resource pool can be segregated further with child resource pools, which
own a portion of the parent resource pool’s resources
o A resource pool can contain child resource pools, VMs, or both
o Resource pools have the following hierarchical structure:
▪ Root resource pool is on a standalone ESXi Host or Cluster level
▪ Parent resource pool is at a higher level than sub-resource pools
▪ Sibling is a VM or resource pool at the same level
▪ Child resource pool is below a parent resource pool

Below is VMware’s visual representation of a resource pool hierarchical structure

• Resource Pools use cases and Share concepts


o Resource pools have the following use cases
▪ Compartmentalize Cluster resources for individual departments
▪ Isolation between resource pools and sharing within resource pools
▪ Access-control delegation to department administrators
▪ Provides separation of resources from hardware for management and
maintenance
▪ Provides ability to manage VMs with multitier services


• To create a RP, rt-click on a Host or Cluster > New Resource Pool, then configure the
desired metrics to meet your needs

5.1.1 – Explain Shares, Limits, and Reservations

• First, I’ll define each of the terms, then I’ll give a couple real-world workable samples to
further explain how Shares work
o Shares represent the relative importance of CPU and memory resources of
sibling powered-on VMs and resource pools, with respect to a parent resource
pool’s total resources, when resources are under contention.

Because Shares are relative and values change based on number of VMs in a
resource pool, you may have to change a VM’s Share value (see below) when
moving in or out of a resource pool
▪ High represents 4 times that of Low
▪ Normal represents 2 times that of Low; default RP setting
▪ Low represents value of 1
o Reservation represents a guaranteed amount of resource to a VM or resource
pool; default CPU and Memory Reservation is 0
o Expandable reservation is a setting allowing child resource pools to get more
compute resources from its direct parent RP when the child resource pool is
short of its configured resources; the setting is enabled by default
▪ Expandable reservation is recursive, meaning if the direct parent RP
doesn’t have enough resources available, it can ask the next higher
parent resource pool for resources
▪ If there are not enough resources available, a VM cannot power on
o Limits are upper bounds of a resource; default CPU & Memory value is Unlimited
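The interplay between Reservations and Expandable reservation can be sketched as a recursive admission-control check. Below is a simplified, illustrative model (the pool names and MHz figures are made up, and this is not VMware's actual implementation):

```python
# Simplified model of resource-pool admission control with Expandable
# Reservation. Illustrative only -- not VMware's implementation.

class ResourcePool:
    def __init__(self, name, reservation, expandable=True, parent=None):
        self.name = name
        self.reservation = reservation      # MHz (or MB) reserved for this pool
        self.reserved_by_children = 0       # already handed out to children/VMs
        self.expandable = expandable
        self.parent = parent

    def unreserved(self):
        return self.reservation - self.reserved_by_children

    def try_reserve(self, amount):
        """Admission control: succeed only if this pool (or, when Expandable
        is set, an ancestor) can cover the requested reservation."""
        if self.unreserved() >= amount:
            self.reserved_by_children += amount
            return True
        if self.expandable and self.parent is not None:
            # borrow the shortfall from the direct parent (recursively)
            shortfall = amount - max(self.unreserved(), 0)
            if self.parent.try_reserve(shortfall):
                self.reserved_by_children += amount
                return True
        return False

root = ResourcePool("root", reservation=10000, expandable=False)
child = ResourcePool("child", reservation=2000, parent=root)

print(child.try_reserve(1500))   # True -- fits in the child's own 2000 MHz
print(child.try_reserve(1500))   # True -- short 1000 MHz, borrowed from root
```

The second power-on only succeeds because Expandable reservation lets the child pool pull the missing 1,000 MHz from its parent; with `expandable=False` on the child, that VM would fail admission control.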

• When figuring out Share values needed for VM workloads, it’s best to work from the
VM-level up to resource pools, instead of Cluster resource pool down. Below is a
resource pool example to further explain Shares:

▪ A Cluster has 100 CPUs and 100GB RAM

▪ Calculate High Shares allocation in a RP:


– 100 CPU x 4 = 400 Shares of CPU for “High” (using 1 = Low; 2 = Normal;
4 = High)
– Divide Shares value number (400) by Total overall Shares
(400+200+100=700) for each value to determine the % of RP Shares
needed for High value VMs; 400/700=57% for High
– Multiply the % by the total CPU resources for the resource pool, then
divide the number by how many VMs are High allocated; e.g.
57%x100=57CPUs for High Share VMs / # of VMs in RP = per VM CPU
– Do the same above procedure for memory

▪ Calculate Normal and Low Shares allocation in a RP following the same
steps above

▪ To further explain Shares calculation, review the following example:

– Cluster = 12 GHz CPU


– RP1 is allocated 8 GHz, configured for High, which is 67% of Cluster CPU


– RP2 is allocated 4 GHz, configured for Normal, which is 33% of CPU


– VM1 is in RP1
– VM2 and VM3 are in RP2
– Since VM1 is the only VM in RP1, it gets 67% of Cluster CPU resources
– Because there are 2 VMs in RP2, each VM will get roughly 16.5% of
Cluster CPU, which is determined by RP2’s 33% Cluster allocation divided
by the number of VMs in RP2
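The worked example above can be reproduced with a few lines of arithmetic (a quick sanity-check script, not anything VMware ships):

```python
# Reproduce the worked example: 12 GHz cluster, RP1 = High (8 GHz),
# RP2 = Normal (4 GHz), with two equal-share VMs inside RP2.

cluster_ghz = 12.0
rp1_ghz, rp2_ghz = 8.0, 4.0          # High vs Normal allocation (2:1 ratio)

rp1_pct = rp1_ghz / cluster_ghz * 100   # -> the "67%" figure above
rp2_pct = rp2_ghz / cluster_ghz * 100   # -> the "33%" figure above

# RP2 holds two VMs with equal shares, so each gets half of RP2's slice
vm_pct = rp2_pct / 2

print(round(rp1_pct), round(rp2_pct), round(vm_pct, 1))  # 67 33 16.7
```

Note the unrounded per-VM figure is 16.7%; quoting "16%" or "16.5%" is just the same number after rounding the intermediate 33% value.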

Objective 5.2: Monitor Resources of vCenter Appliance and vSphere 8
Environment
Resource: vSphere 8.0 Monitoring and Performance Guide (2022)
vCenter Server 8.0 Configuration Guide (2022), pg. 60-62

• Monitor VCSA resources


o There are several locations to monitor your vCenter appliance:
▪ From the vSphere Client, select vCenter > Monitor tab > and select
Skyline Health. This will give you an overview of the overall health of
your virtual environment, providing potential resolution avenues for you
▪ From the vCenter Server Management Interface (VCSMI), login and go to
the Monitor section. This area provides an overview of VCSA CPU,
Memory, Network, Disk, and Database use and health

▪ A final place to go to check on VCSA resources is via SSH using vimtop.


Review the vCenter Config Guide, pp. 69-71 for more info and cmd
parameters


• Monitor other vSphere resources


o ESXi Hosts
▪ Click any Host in the vSphere Client > Monitor tab > Hardware Health to
get an overview of your Host’s hardware health status

▪ You can also click to get an overview of CPU, Memory, and other metrics
under the Performance (charts) section > Overview or Advanced options.
You can configure different parameters and options to fit the
performance metric you’re trying to monitor


▪ You can also log into an ESXi Host DCUI and get performance and
hardware health metrics
▪ And finally, the CLI method to view ESXi Host performance via SSH is to
use esxtop.
o VMs
▪ To monitor VM resources, you can click on a VM and view its Summary
tab, or a more holistic overview approach is to click on a vSphere
container object, like a Cluster or VM folder, then click the VMs tab. You
can view CPU and memory resources as needed. You can also use
esxtop on ESXi Hosts to view VM performance metrics

o Clusters
▪ And last, don’t forget about Cluster resources.. DRS! You can view the
health of your Cluster by browsing to the Monitor tab > vSphere DRS
section, then clicking on a status you want to view, like VM DRS Score or
CPU Utilization


Objective 5.3: Identify and Use Resource Monitoring Tools


Resource: vSphere vRealize Operations Manager 8.0 Configuration Guide (March 2020)

• Performance Monitoring Tools


o Well, there’s not really much more to discuss with this Objective. I shared all the
tools and locations in the previous Objective about where and how to monitor
various resources. Aside from those locations & tools, VMware does have one
product designed solely for performance monitoring of your whole environment
from a single pane of glass, so to speak – VMware Aria Operations (formerly
vRealize Operations). Once set up, it can pull metrics from your virtual
environment for you to view, as well as let you drill down to the source of an issue

Objective 5.4: Configure Network I/O Control (NIOC)


Resource: vSphere 8.0 Networking Guide (2022), pp. 200

• Configuring System Traffic NIOC


o Enable NIOC on the VDS


o Then, from the vSphere Client Networking node, select the VDS > Configure
tab, then under the Resource Allocation section choose System Traffic. Select the
System Traffic type you want to configure from the list, then click the Edit button.


Configure the network Shares, Reservations, and/or Limit for the system traffic
type, then click OK. Repeat for each System Traffic type you are wanting to
configure.
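Under contention, NIOC carves out each traffic type's reservation first and then splits the remaining uplink bandwidth in proportion to shares. Here is a simplified model of that math on a 10 Gbit/s uplink (the traffic types, share values, and reservations are illustrative, and this is not VMware's scheduler):

```python
# Simplified NIOC model: reservations are satisfied first, then the spare
# uplink bandwidth is divided among traffic types by their share values.
# Numbers are illustrative only.

LINK_MBPS = 10_000   # one 10 Gbit/s physical uplink

traffic = {
    # type: (shares, reservation_mbps)
    "management": (50, 0),
    "vmotion":    (100, 1000),
    "vm":         (100, 0),
}

def nioc_allocation(traffic, link=LINK_MBPS):
    reserved = sum(r for _, r in traffic.values())
    spare = link - reserved
    total_shares = sum(s for s, _ in traffic.values())
    # each type keeps its reservation plus a shares-proportional cut of the rest
    return {t: r + spare * s / total_shares for t, (s, r) in traffic.items()}

alloc = nioc_allocation(traffic)
print({t: round(v) for t, v in alloc.items()})
# -> management 1800, vmotion 4600, vm 3600 (Mbit/s); vMotion keeps its
#    1000 Mbit/s reservation plus its share of the remaining 9000
```

A Limit, if configured, would simply cap a type's result from this calculation.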

• Configuring VM Traffic NIOC


o As with System Traffic, enable NIOC if it’s not already. Then, configure the
Virtual Machine Traffic option under System Traffic – the desired Shares,
Reservations, and Limits
o Next, select Network Resource Pools to be able to allocate the desired network
quota to a set of VMs


o After you have configured the desired quota, click OK, then assign the Network
Resource Pool to the vDS Port Group to which the VMs needing the network
quota are attached


Objective 5.5: Configure Storage I/O Control (SIOC)


Resource: vSphere 8.0 Resource Management Guide (2022), pp. 63-70

• Per-Datastore SIOC
o Before sharing the simple method of enabling SIOC, there are a few
requirements which must be met before you can enable it:
▪ SIOC Datastores must be managed by only 1 vCenter Server
▪ SIOC Datastores do not support RDM
▪ SIOC Datastores cannot have multiple extents
o To configure SIOC on a Datastore, simply rt-click on the Datastore > Configure
Storage I/O Control… then, configure the peak throughput % you want

o You can then configure SIOC Shares and IOPS Limits one of two ways – from the
VM > Edit Settings > expand Hard Disk section and set both parameters

… or, the more appropriate way, especially for use with a group of VMs, is doing
so via a Storage Policy, then assign the Policy to the VMs
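When a datastore is congested, SIOC distributes its available IOPS across the competing disks in proportion to their Shares, while still honoring any per-disk IOPS Limit. A simplified sketch of that distribution (illustrative values, not VMware's actual algorithm):

```python
# Sketch: divide a congested datastore's IOPS by per-disk shares, pinning
# any disk whose fair share exceeds its configured IOPS Limit at that Limit
# and redistributing the leftover. Illustrative model only.

def sioc_iops(capacity, disks):
    """disks: {name: (shares, limit_or_None)} -> {name: allocated_iops}"""
    alloc = {}
    remaining = dict(disks)
    while remaining:
        total = sum(s for s, _ in remaining.values())
        capped = {n: lim for n, (s, lim) in remaining.items()
                  if lim is not None and capacity * s / total > lim}
        if not capped:
            # no disk is over its Limit: hand out shares-proportional IOPS
            for n, (s, _) in remaining.items():
                alloc[n] = capacity * s / total
            break
        for n, lim in capped.items():
            alloc[n] = lim            # pin at the Limit
            capacity -= lim           # ...and redistribute what's left
            del remaining[n]
    return alloc

print(sioc_iops(3000, {"vm1": (1000, None),
                       "vm2": (1000, 500),    # capped by its IOPS Limit
                       "vm3": (2000, None)}))
```

Here vm2's fair share (750 IOPS) exceeds its 500 IOPS Limit, so it is pinned at 500 and vm1/vm3 split the freed-up IOPS by their shares.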


• Storage DRS
o When you create SDRS Clusters, SIOC is enabled on all Datastores you place in
the Cluster by default. To avoid errors when creating a SDRS Cluster, keep in
mind the requirements listed above for SIOC

Objective 5.6: Configure a VM Port Group to be Offloaded to a DPU


Resource: vSphere 8.0 Networking Guide (2022), pp. 31-33

• VM Port Group Offloaded to a DPU


o You can offload network traffic to a DPU when NSX is enabled. Traffic forwarding
logic is offloaded from the ESXi Host to the vDS backed by the DPU
o Network offloads compatibility cannot be modified after associating ESXi Hosts
to the vDS
o You can only add ESXi Hosts backed by a DPU to the vDS
o To offload traffic to a DPU:
▪ Create a vDS by rt-clicking a Datacenter > Distributed Switch > New
Distributed Switch ; NOTE: To be able to use Network Offloads, you must
create a vDS version 8 or later
▪ On the Configure settings page, use the drop-down menu to select
Network Offloads Compatibility type:
– None
– Pensando
– NVIDIA Bluefield
▪ Finish configuring the remaining vDS settings – Number of Uplinks,
Enable/Disable NIOC, and optionally create a Default Port Group
▪ Click Finish when completed


Objective 5.7: Explain the Performance Impact of Maintaining VM Snapshots


Resource: VMware KB 1015180 ; VMware KB 1025279

• Snapshot Overview
o Per VMware’s KBs, a VM snapshot is a point-in-time reference of a VM’s state
(power-on/off/suspended) and data (files, disks, memory, network). Before
continuing, one important concept regarding snapshots is *they are not
backups*. A VM backup is a complete/whole new copy of a VM and all its files
(disks, configs, logs, etc). A snapshot is not a whole new copy of a VM. The
snapshot process simply creates a new VM disk where all subsequent I/O writes
are processed, called a “delta” disk ( -delta.vmdk ). This delta disk has the
opportunity to grow up to the full size of the original/parent disk, thus you can
see the potential issue this presents. Snapshots are great for the following use-
cases:
▪ Testing OS patches
▪ Testing application updates/upgrades
▪ Development changes
You can take a snapshot, then perform any of the above tasks. If any of the
above tasks turn out to be non-destructive to the VM, it is advisable to
consolidate the snapshot – merging all I/O changes with the parent disk, then
removing the corresponding delta disk.

• Snapshot Performance Impact


o If you let VM snapshots grow, there are a couple things to keep in mind
▪ VM performance will degrade over time, to the point of OS and/or app
responsiveness being dreadfully slow
▪ There is a potential of VM disk corruption, leading to the VM no longer
being usable and a restore from backup being required – full VM image
or VM disk, depending on your VM backup solution capabilities
o Some best practices VMware recommends you follow:
▪ No more than 32 snapshots in a chain per VM; for best performance, use only 2-3
▪ Try not to keep snapshots for more than 3 days (72hrs)
▪ Since 3rd party backup solutions use snapshots via APIs, make sure they
are removed upon successful backup completion
▪ When using Storage vMotion, keep in mind any snapshots are
consolidated at the end of a successful svMotion
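Because every delta disk can grow to the full size of its parent, a forgotten snapshot chain can quietly multiply a VM's datastore footprint. A back-of-envelope sketch (the sizing formula is a simplification; real delta growth depends on write patterns and block-level metadata):

```python
# Worst-case datastore usage of a snapshot chain: the base disk plus up to
# chain_length full-size delta disks, plus one memory-snapshot file per
# snapshot if memory state was captured. Rough estimate, illustrative only.

def worst_case_gb(base_disk_gb, chain_length, memory_gb=0):
    return base_disk_gb * (1 + chain_length) + memory_gb * chain_length

print(worst_case_gb(100, 1))      # 200 -> one snapshot can double usage
print(worst_case_gb(100, 3, 8))   # 424 -> 3 snapshots, 100 GB disk, 8 GB RAM
```

This is why the best practices above stress deleting snapshots within a few days: the longer a chain lives, the closer it creeps toward this worst case, and reads must traverse the whole chain.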

Objective 5.8: Use Update Planner to Identify Opportunities to Update VMware
vCenter
Resource: VMware Update Planner Blog Article (2020)


• VMware Update Planner


o Update Planner is a "feature" VMware introduced with vSphere 7. To me, this is
more annoying than beneficial, but I digress. The main part of this tool is an
Update banner being displayed when VMware releases a new patch, update, or
upgrade to vCenter Server

o Though more annoying to me than beneficial, I do see where this could be
advantageous to larger environments. Not by the banner nuisance, but by the
centralized means with which to check vCenter component compatibility with
new updates. When an update is released, you can click on the Update Available
link from the vCenter Server Summary tab and you'll be redirected to the
Updates tab > Update Planner section

From Update Planner, you can click the Monitor Current Interoperability link,
which directs you then to the Monitor tab > vCenter Server section >
Interoperability and you'll see issues, if any, of upgrading your appliance
o Pre-requisites for using Update Planner:
▪ Update Planner is intended for vCenter upgrades starting with v7; it does
not assist in moving from a version older than v7 since Update Planner is
a feature introduced with v7
▪ You must join the VMware Customer Experience Improvement Program
(CEIP) to use Update Planner
– VCSA must be connected to the Internet to use CEIP
– Update Planner queries VMware Product Interoperability Matrices
online and reports findings via the vSphere Client


Objective 5.9: Use vSphere Lifecycle Manager to Determine the Need for Updates
and Upgrades
Resource: Managing Host and Cluster Lifecycle Guide (2022)

5.9.1 – Update Virtual Machines

• When using vLCM to update VMs, the update is for VMware Tools and/or Virtual
Hardware, not for the guest OS. Just wanted to make that clear if it wasn't already. To
upgrade either Tools or Hardware:
o First make sure all ESXi Hosts in a Cluster the VMs are in are updated to the
latest vSphere release
o Click on either a Cluster from Hosts and Clusters or a Folder from VMs and
Templates > Updates tab
o When updating Tools and Hardware, make sure to update Tools first. And, when
updating Tools, VMs must be powered on. If they are not, vLCM powers them
on. When updating Hardware, VMs must be powered off. If they are not, vLCM
powers them down.
o To update either Tools or Hardware, select VMs in the list, then click the
Upgrade to Match Host link
o Scroll down to configure Scheduling Options -> Power on, Power off, or
Suspend VMs
o Choose Rollback Options -> take a Snapshot, and whether to delete them or
how long to retain them, when the upgrade finishes
o When VMs are selected and settings configured, click the Upgrade to Match
Host button to begin the upgrade process
o vSphere version and its corresponding Virtual Hardware version to note (per
VMware KB):
vSphere Version Virtual Hardware
ESXi 5.5 10
ESXi 6.0 11
ESXi 6.5 13
ESXi 6.7 14
ESXi 6.7U2 15
ESXi 7.0 17
ESXi 7U1 18
ESXi 7U2 19
ESXi 8.0 20
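The release-to-hardware-version table can be handy as a lookup when scripting pre-upgrade compatibility checks. A small sketch (the dict just mirrors the table; note ESXi 6.7U2 introduced hardware version 15, while version 16 belonged to Workstation/Fusion releases):

```python
# ESXi release -> maximum virtual hardware version it supports.
# A host runs its own hardware version and everything older.

VHW = {
    "ESXi 5.5": 10, "ESXi 6.0": 11, "ESXi 6.5": 13, "ESXi 6.7": 14,
    "ESXi 6.7U2": 15, "ESXi 7.0": 17, "ESXi 7U1": 18, "ESXi 7U2": 19,
    "ESXi 8.0": 20,
}

def host_can_run(host_release, vm_hw_version):
    return vm_hw_version <= VHW[host_release]

print(host_can_run("ESXi 7U2", 14))   # True  -> older HW versions run fine
print(host_can_run("ESXi 6.7", 17))   # False -> HW 17 needs ESXi 7.0 or newer
```

This is the same check vLCM's "Upgrade to Match Host" performs conceptually: hosts must be at the target release before VM hardware can be raised to its version.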


5.9.2 – Update VMware ESXi

• Upgrade ESXi Host


o First, you need to determine which method of upgrading you want to use ->
Images or Baselines. If you use Images, you won't see a Baselines option
o Once determined, you then select a 'container' object, a Cluster or Datacenter
object > Updates tab
o Click Baselines (if using), then scroll down to the Attached Baselines section
o Click the Attach drop-down link, and choose a Baseline to attach
o Once the Baseline is attached, you then need to check for Compliance. Scroll
back up to the Compliance section and click on the Check Compliance link
o After Compliance is confirmed to be non-compliant, scroll back down to the
Attached Baselines section, select the Baseline from the list, then click the
Remediate link. Or, you can optionally Stage the patch/update to the Host and
remediate at a later time
o The ESXi Host will then complete the update

Objective 5.10: Use Performance Charts to Monitor Performance


Resource: vSphere 8.0 Monitoring and Performance Guide (2022), pp. 9-121

• Performance Chart Types – Line, Bar, Pie, Stacked


• Data Counter Attributes – Unit of measurement, Statistic type, Rollup type, Collection
level
• vSphere Metric Groups – Cluster Services (HA, etc), CPU, Memory, Disk, Datastore,
Network, Power, Storage Adapter, Storage Path, System, Virtual Disk, Virtual Flash
• Collection Intervals – 1-day, 1-week, 1-month, 1-year
• Collection Levels:
– Level 1: default; most basic level; for long-term monitoring
– Level 2: all Level 1 metrics plus additional counters; for long-term monitoring
– Level 3: same as Level 1 and 2; includes device statistics; for short-term monitoring
– Level 4: contains all metrics; for short-term monitoring
• To view a Performance chart for a given vCenter Inventory object, click the object >
Monitor tab > Performance section, then click on Overview or Advanced options. The
Overview contains 5 major categories -> CPU, Memory, Memory Rate, Disk, and
Network; Advanced option allows viewing 1 metric at a time, and you can click the Chart
Options link to change parameters of the chart you're viewing -> add/remove Counters,
change Timespan, change Chart Type
• Depending on the Inventory object you select will determine the View options listed in
the drop-down in the Advanced area. For example, clicking an ESXi Host object will have
many more View options than a Cluster or Datacenter object. As such, you'll also have
many more Chart Options to choose from with an ESXi Host vs a Cluster


Overview Charts

Advanced Chart

Objective 5.11: Perform Proactive Management With VMware Skyline


Resource: Skyline Planning and Deployment Guide (2023)

• VMware Skyline
o Skyline is a proactive self-service support technology available to customers with
an active Production Support or Premier Services contract. Skyline automatically
and securely collects, aggregates, and analyzes customer-specific product usage
data to proactively identify potential issues and improve time-to-resolution.


Customer data is transferred to VMware over an encrypted channel, ensuring
privacy, and is stored on a secure VMware repository in the U.S.
o Components:
▪ Cloud Services Organization (CSO) – a logical container for all your
VMware Cloud Services, to include identity & access management, billing
& subscription, and development & support centers
▪ VMware Skyline Collector – a virtual appliance which resides within the
customer's virtual infrastructure and automatically & securely collects
product usage data such as: configuration, feature, and performance data
▪ VMware Skyline Advisor – a self-service, customer-facing web portal
which allows customers to access proactive findings and
recommendations on-demand. You access Skyline Advisor via VMware
Cloud Services using a Customer Connect username and password
o Supportability:
▪ vSphere 5.5+
▪ vSAN 6+
▪ NSX 6.2+
▪ Horizon View 7+
▪ Others (view on pg. 9)
o Basic steps to use Skyline:
▪ Create your Cloud Services Organization, or if you already have one,
select Proceed to Service to use Skyline Advisor
▪ Associate your Entitlement Account (EA) with your CSO
▪ Download and install the VMware Skyline Collector appliance in your
vSphere environment
– 2 vCPU, 8GB RAM
– Collector Web UI support: Edge, Chrome, FF
– Port requirements: 443, 7444, 9543
▪ Register your Collector with your CSO, using the Collector registration
token

Objective 5.12: Use vCenter Server Management Interface to Update vCenter


Resource: vCenter Server 8.0 Upgrade Guide (2022), pp. 173-177

• Update VCSA via VCSMI


o Go to URL: https://IPorFQDN:5480
o Click the Update option on the left
o Click the Check Updates drop-down > Check CD ROM + URL to check for any
updates, if none already show in the Update list
o If there are any updates, click on the update, then select Stage and Install to
start the Update process


o You'll have to accept a EULA (generally), and the update process can take
anywhere from 1-2 hours

Objective 5.13: Complete Lifecycle Activities for VMware vSphere With Tanzu
Resource: Maintaining vSphere 8.0 With Tanzu Guide (2023)

5.13.1 – Update Supervisor Cluster

• Update Supervisor Cluster


o To update a Supervisor, including the Kubernetes version the Supervisor is
running, you perform a vSphere Namespace update
o Before updating the Namespace, update vCenter Server and ESXi Hosts to the
supported version
o Click Menu > Workload Management, select Namespaces, then Updates tab
o Select the Available Version you want to update to; NOTE: you must always
update incrementally…do not skip an update
o Select a Supervisor(s) to update
o Click Apply Updates
o After the update is finished, update the vSphere Plugin for kubectl

5.13.2 – Backup and Restore VMware vSphere with Tanzu

• Backup vSphere with Tanzu


o Backing up Tanzu components involves various layers and tools
▪ vSphere Pods – Velero Plugin for vSphere
▪ Workloads on a TKG Cluster – Velero Plugin for vSphere
▪ Supervisor Cluster – vCenter; Velero Plugin for vSphere
▪ vCenter Configuration – vCenter
▪ NSX-T Datacenter – NSX Manager
o Install Velero Plugin for vSphere on the Supervisor
Pre-requisites:
▪ Supervisor is configured with NSX-T
▪ Supervisor is version 1.21 or later
▪ vSphere Namespace is created and configured for the Velero Plugin
▪ Create a S3-compatible object store to backup persistent volumes
▪ (Optional) Create a backup & restore network separate from Tanzu
management traffic
Install:
▪ Install and configure a Data Manager VM (URL in the Guide)
▪ Deploy the OVA in the vSphere Datacenter where Workload
Management is enabled – rt-click Datacenter > Deploy OVF Template


▪ Provide the required OVA deployment details: VM name, networks, etc


– If you receive an 'OVF descriptor' error, use Chrome
– Change the deployed VM's CD/DVD drive from Host Device to Client
Device, or configuration changes will not save
▪ Rt-click VM > Edit Settings > VM Options tab, select Advanced > Edit
Configuration Parameters ; use pg. 22 as a guide on what to enter
▪ Do NOT power on the VM until the vSphere Velero Operator service is
enabled
– Download the Velero Operator YAML (URL in the Guide)
– Within vCenter, Home menu > Workload Management, select the
Services tab
– Select the target vCenter Server from the drop-down
– Drag/drop the YAML file to the Add New Service card, or click Add and
browse to the file, then verify the Operator Service is in an Active state
– In the Services tab, find Velero Operator, then click Actions > Install on
Supervisors and select the target Supervisor to install the service on
– Configure the Operator Service as shown on pg. 23 of the Guide then
click Finish
– Verify the Operator Service is on the Supervisor: Home menu >
Workload Management, click the Supervisors tab and select the
Supervisor where the Service was installed. Click the Configure tab >
Overview under Supervisor Services and verify Velero Operator shows
as Configured
– In Namespaces tab, verify the Velero namespace is shown
▪ Power on the Data Manager VM
▪ Create a vSphere Namespace for the Velero Plugin; reference Sub-
Objective 4.20.1 on creating a Namespace
▪ Download/install the Velero vSphere CLI (URL in the Guide)
– Create a Linux VM jumpbox or use one already created
– Copy the CLI to the Linux VM, extract it, & make it writable
– Finish the remaining steps shown on pp. 24-25 of the Guide
o To backup a Tanzu object, run the Velero CLI against the object. Refer to pp. 29-
32 for the steps on how to do so


SECTION 6: Troubleshooting and Repairing


Objective 6.1: Identify Use Cases for Enabling vSphere Cluster Services (vCLS)
Retreat Mode
Resource: vSphere 8.0 Resource Management Guide (2022), pp. 82-89

• vSphere Cluster Services (vCLS)


o vCLS is activated by default when you create a Cluster and add ESXi Hosts to the
Cluster, in vCenter Server 7.0 U3 or later. A maximum of 3 vCLS agent VMs are
created, unless you only have 1 or 2 Hosts in the Cluster. In that case, only 1 or 2
vCLS VMs are created, respectively. An anti-affinity Cluster rule is created to
ensure the vCLS VMs are not all on the same ESXi Host
o vCLS ensures Cluster services remain available to maintain the resources and
health of workloads in the event vCenter Server is unavailable. NOTE: DRS and
HA still require vCenter Server
o To monitor the health of Cluster Services, select a Cluster > Summary tab, then
scroll to view the Cluster Services tile and the Cluster status (Healthy, Degraded,
Unhealthy)
• Cluster Retreat Mode
o vCLS VM datastore placement is determined by pre-defined vCenter Server logic.
If you need to override this logic, you can add Datastores to place vCLS VMs on
via vCenter, selecting the Cluster > Configure tab > vSphere Cluster Services
section, then click Datastores, then the Add button to select Datastores for vCLS
placement. NOTE: Datastores blocked by vSAN or SRM maintenance mode
cannot be used
o To put Cluster Services in Retreat Mode, for example when needing to place a
(SDRS) Datastore in Maintenance Mode, do the following:
▪ Select the Cluster in the Inventory where SDRS is enabled
▪ Copy the Cluster domain ID from the URL, beginning with domain-c###
▪ Go to vCenter Server object > Configure tab > Settings section, then
select Advanced Settings, and click the Edit button
▪ Add a new entry:
config.vcls.clusters.domain-c####.enabled
▪ Set the new entry value to false , then click Save
▪ After placing vCLS in Retreat Mode, the Cluster status will show as
Degraded and DRS stops functioning until the Cluster is taken out of
Retreat Mode (set the Advanced Setting entry value back to true ).
Also note vSphere HA does not perform optimal VM placement during
a Host failure while in Retreat Mode


Objective 6.2: Differentiate Between the Main Management Services in vSphere
ESXi and vCenter and Their Corresponding Log Files
Resource: vSphere 8.0 Monitoring and Performance Guide (2022), pp. 210

• vSphere ESXi Logs


o To increase ESXi Host security, it is recommended to redirect Host logs to a
datastore so they persist across reboots, as logs are by default stored in an
in-memory file system where only 24hrs of log data is kept. It is also recommended, if
available, to direct logs to a central host – a log management server, for more in-
depth analysis, search capability, and troubleshooting.
To configure directing logs to a remote server, modify each ESXi Host's Advanced
settings parameter, selecting Host > Configure tab > System section, and select
Advanced System Settings and search for Syslog.global.logHost . Enter
the URL in the format: ssl://<hostName>:1514

ESXi Host stores all its logs in the /var/log directory. Each log has the following
functions:
▪ VMkernel – vmkernel.log; logs related to ESXi and VMs
• Warnings – vmkwarning.log; warning and alert messages excerpted from the VMkernel log
• Summary – vmksummary.log; ESXi uptime and availability
▪ ESXi Host agent – hostd.log; information about VM and ESXi
configuration and management services
▪ vCenter agent – vpxa.log; information about vCenter agent
▪ Shell – shell.log; records Shell commands and events
▪ vSphere HA agent – fdm.log
▪ Authentication – auth.log; contains local authentication info
▪ System messages – syslog.log; general log information used for
troubleshooting; was formerly messages.log

o Besides the local CLI-based log locations, you can log into the Host DCUI (direct-
console UI) to view several logs – Syslog, VMkernel, Config, vCenter
(vpxa), Management(hostd), or Observation(vobd). To view logs via
DCUI, select Monitor, then choose the Logs tab


To generate a log bundle via CLI, simply SSH to a Host and type: vm-support
Once complete, review the completion output for the datastore log location

• vCenter Server Logs


o Logs location for vCenter Server:
VCSA – logs are located in /var/log/vmware
▪ vpxd.log – main vCenter log consisting of all vSphere Client
and WebServices connections, internal tasks and
events, and communication with vCenter agent
(vpxa) on ESXi Hosts
▪ vpxd-profiler.log – profiled metrics performed in vCenter
▪ vpxd-alert.log – non-fatal info about vpxd processes
▪ cim-diag.log – (or vws.log) CIM monitoring information
▪ ls.log – health report for Licensing Service extension
▪ eam.log – health report for ESXi Agent Monitor extension
▪ vimtool.log – dump of string during vCenter installation of DNS,
username, and JDBC creation output
▪ stats.log – performance data history from ESXi Hosts
▪ sms.log – health report for Storage Monitor Service
▪ vmdir.log – logging for Directory Services
▪ drmdump/ – actions taken by DRS

o Through the vSphere Client go to: vCenter > Monitor > Skyline Health to review
several categories of information to check on the health of your environment


• Virtual Machine Logs


o VM logs are, by default, stored with the VM’s configuration file
(.vmx) and named vmware-#.log. From an ESXi Host, the location is:
/vmfs/volumes/<datastore-id>/<vm-name>

• Remote Syslog
o Finally, if you configured remote Syslog forwarding for your VCSA through the
vCenter Server Management Interface (VAMI), you can review logs on your Syslog
server; when configuring, you can only use the TLS, TCP, UDP, & RELP protocols

Objective 6.3: Generate a Log Bundle


Resource: vSphere 8.0 Virtual Machine Administration Guide (Jan 2023), p. 270

• Generate a Log Bundle


o To generate a Log Bundle from an ESXi Host directly, rt-click the Host object >
Generate support bundle. You will then need to re-authenticate, then save the
bundle to the desired directory. The downside to creating a bundle this way is
not being able to select/deselect granular options


Or, as mentioned in the previous Objective, SSH to the ESXi Host and type:
vm-support , and take note the directory the log bundle was saved

o To generate a Log Bundle via vCenter Server, rt-click the vCenter Server object >
Export System logs option. You can then choose which ESXi Hosts to export logs
from. Make sure to scroll down to the bottom of the 'choose ESXi Hosts' window
so you can also select to Export vCenter Server logs if they are needed

On the next screen, you then have multiple system log options to choose from
by selecting or deselecting them, as well as whether to gather performance data
and whether to encrypt Core Dumps with a password or not


SECTION 7: Administrative and Operational Tasks


Objective 7.1: Create and Manage Virtual Machine Snapshots
Resource: vSphere 8.0 Virtual Machine Administration Guide (Jan 2023), pp. 283-295

• Create VM Snapshot
o I discussed Snapshot details a bit in Objective 5.7, so I won’t go over them again
here. I’ll just provide how to create a snapshot
o The easiest way to create a Snapshot is simply to rt-click a VM > Take a
Snapshot…


Or, you can select a VM in your Inventory and go to the Snapshot tab > Take a
Snapshot button. Provide a name and optional description, then click Create. I
suggest giving the snap a description, e.g.
pre-OS update snap - <initials>, <date & time> so you can
readily identify 1. what it was created for, 2. by whom, & 3. the date/time
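The naming convention above can be generated consistently with a small helper (a sketch of the suggested convention, not a vSphere API):

```python
# Sketch: compose '<purpose> - <initials>, <date & time>' for a snapshot description.
from datetime import datetime

def snapshot_description(purpose: str, initials: str, when: datetime) -> str:
    """Encode what the snap is for, who took it, and when."""
    return f"{purpose} - {initials}, {when.strftime('%Y-%m-%d %H:%M')}"
```
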

• Managing Snapshots
o Managing Snapshots is quite easy as well. If you have several snapshots in a
"tree", you can click on any one in the tree to Revert the VM to the point-in-time
snap. All subsequent Snapshots will be retained, unless you choose to delete
them. If you delete them, all data associated with those Snapshots will be lost. If
you instead choose to consolidate all the Snapshots in a chain, all data in each
Snapshot delta disk will be merged with the parent disk, and then all Snapshot
files will be deleted
o There are a couple options you should be aware of, as you can see in the
screenshot above – Include VM memory, and Quiescing. If you include VM
memory when snapshotting, you retain the power-state of the VM. For
quiescing, you can take a snapshot of a VM in a consistent state. VMware Tools
quiesces the guest OS to temporarily prevent I/O so the Snap is in a consistent
state. Quiescing isn’t performed, and is greyed out, if the memory option is
selected. If neither memory nor quiescing is selected, the VM Snap will only be
in a crash-consistent state

Objective 7.2: Create Virtual Machines Using Different Methods


Resource: vSphere 8.0 Virtual Machine Administration Guide (Jan 2023)

• Create Virtual Machines – Traditional Method


o I’ll only briefly cover a few simple VM creation options. Rt-click a vCenter
Datacenter or Cluster object, or an ESXi Host > New Virtual Machine and select
Create New Virtual Machine. Just provide all the necessary configurations –
compute, storage, network, guest OS, etc.


• Create Virtual Machines – OVF


o Either rt-click on a Datacenter, Cluster, or ESXi Host; or click on one of the
objects, then from the Actions menu select Deploy OVF Template…


You can then use a URL, or browse for .ova or .ovf/.vmdk files. Once chosen,
just follow through the deploy wizard and provide the requested information –
VM name, licensing (if needed), storage, network, etc

• Create VM From Template


o Whether Template or OVF files are in your vSphere environment, or vSphere
Content Library, deploying is virtually the same, except where you deploy it
from. You simply rt-click on the Template and choose New VM From This
Template. As with any new VM, just provide the necessary information, and
configure resources (vCPUs, memory, disk, network, etc) as needed


Objective 7.3: Manage Virtual Machines (Modify VM Settings, per-VM EVC,


Latency Sensitivity, CPU affinity, etc)
Resource: vSphere 8.0 Virtual Machine Administration Guide (Jan 2023)

• After you create your VM, you can modify its settings as needed, anywhere from CPU to
memory, or add hard disks, & other virtual devices. To modify any of a VM’s settings, rt-
click the VM > Edit Settings, or select the VM and choose Edit Settings from the Actions
menu. In the VM Options tab (2nd screenshot), you can select the guest OS version,
enable encryption, as well as BOOT options and several other items


• Other Configurations
o Aside from virtual hardware configurations, there are quite a few other VM
management tasks, such as:
▪ Per-VM EVC – select the VM > Configure tab > Settings section, then
choose VMware EVC and click the Edit button
▪ VM Latency Sensitivity – VM > Edit Settings > VM Options tab, then
expand the Advanced section and modify Latency Sensitivity (Normal,
High)
▪ CPU Affinity – VM > Edit Settings > Virtual Hardware tab, then expand
CPU and under Schedule Affinity select physical processor affinity; NOTE:
the VM must be powered off, and DRS must be off
▪ Guest OS install – a couple ways to do this is via a connected ISO from the
VM virtual CD device or PXE boot
▪ Guest Customization – configuring new VM deployments to utilize a
Customization Specification file to add the VM to the domain, use
sysprep, and prompt for vNic configuration. You create these from Menu
> Policies and Profiles section
▪ Storage Policies – from Policies and Profiles section, then assign to VMs
by rt-clicking VM > VM Policies > select Edit VM Storage Policies
▪ Upgrade VMware Tools – rt-click VM > Guest OS > Upgrade (or Install)
VMware Tools


Objective 7.4: Manage Storage (Datastores, Storage Profiles, etc.)


Resource: vSphere 8.0 Storage Guide (2023)

• Create Datastores
o Although the Objective doesn’t say explicitly, I’ll first briefly share how to create
a Datastore, then I’ll discuss management tasks in the subsequent Sub-
Objectives
o To create a standard Datastore, you first configure a LUN on your array or NAS
storage system, configuring appropriate Host access using IQN/EUI, WWN, or
credentials. Once done, you rt-click on a Host or Cluster > Storage > New
Datastore…

Select what kind of storage you want to add – VMFS, NFS, or vVol, then click
Next. If no Device is listed, perform a Rescan of your storage adapters; your
new LUN Device should then be listed in the New Datastore wizard's Name &
Device window (you will need to restart the New Datastore wizard). Then just
finish the wizard, providing appropriate selections depending on the storage
type you’re adding – VMFS version and whether to use the whole LUN; or NFS
credentials for NAS devices, etc. Rescan the storage adapters for all other
Hosts you assigned to the LUN and the Datastore will display automatically
(you may need to refresh the vSphere Client browser window)

7.4.1 – Configure and Modify Datastores (Expand, Upgrade, etc.)

• Datastore Modification
o To increase Datastore capacity, simply rt-click it > Increase Datastore Capacity,
or select the Datastore > Configure tab > General option, then click the Increase
button in the Capacity section, & follow the wizard to increase the capacity. The
amount by which you increase will be dependent on the LUN unused space left
o In this same area, but in the Datastore Capabilities section, you can enable
Storage I/O Control, clicking the Edit button


o If you upgraded your vSphere environment from an older version, and still have
older VMFS versions, you may want to upgrade your VMFS Datastores. VMFS 5
Datastores cannot be upgraded in place to VMFS 6. Instead, create a new VMFS 6
Datastore, migrate VMs, data, and files to the new Datastore, then remove the
VMFS 5 Datastore
o To remove a VMFS Datastore – go to the Storage Inventory and rt-click the
Datastore you’re wanting to remove, and select Unmount. When prompted,
select all Hosts to Unmount the Datastore from. There are a few pre-requisites
to be able to Unmount:
▪ Datastore cannot be part of a Storage DRS (SDRS) Cluster
▪ SIOC must be disabled
▪ No VMs (on or off) are on the Datastore
If the Datastore was a VMFS Datastore, it’ll still show in your vSphere Storage
Inventory. If it was an NFS or vVol Datastore, it’ll be removed. For VMFS, you
can rt-click the Datastore > Mount to re-mount it if needed. For NFS & vVol,
you have to go through the New Datastore wizard to re-mount
o When deleting a Datastore, all data residing on the Datastore will be destroyed.
First Unmount it (not a requirement, but it is the VMware-recommended way to
fully delete a Datastore), then rt-click the Datastore and select Delete
Datastore. If you instead want to remove the Datastore from your vSphere
inventory but keep all your data: after Unmounting, go to each Host it’s
connected to > Configure tab > Storage Devices > select the Device/Datastore
name in the top section, then from the Paths tab in the bottom section, click
on each Path and click the Disable button. Do NOT rescan your storage adapters
yet! Once you perform this task on each Host, go to your storage array and take
the Volume/LUN "offline". This will disable connections to the ESXi Hosts. You
can then rescan your storage adapters and the unneeded Datastore will no longer
show in your inventory

7.4.2 – Create Virtual Machine Storage Policies

• Create Storage Policies


o To create various VM Storage Policies to meet organizational SLAs, navigate in
the vSphere Client to Menu > Policies and Profiles


Select VM Storage Policies and click Create, then give the Policy a Name &
optional Description and click Next. The Policy Structure option you select
determines your remaining options – Host-based rules for Encryption and/or
Storage I/O Control; various storage options associated with vSAN; and Tagging
(as shown in sequence in the following screenshots)


Keep in mind, you can select multiple Policy Structures for your policy. You aren’t
limited to only selecting a Host-Based Policy, etc. Once you’ve configured all your
options, verify compatible Datastores at the Storage Compatibility window, then
click Finish
o You can then assign the Storage Policy to a VM

7.4.3 – Configure Storage Cluster Options

• vSphere Storage DRS (SDRS)


o I covered what a Datastore Cluster was briefly in Sub-Objective 1.3.7. Here, I’ll go
into a little bit more detail, and share how to configure a Datastore Cluster.
Before you can enable SDRS, you must meet several requirements:
Storage vMotion:
▪ vSphere Standard license to support storage vMotion
▪ ESXi 5.0 or later
▪ Enough free memory on ESXi Hosts for storage vMotion
▪ Sufficient space on destination datastores
▪ Destination datastore is not in maintenance mode
General Requirements to enable SDRS, or when adding a Datastore to a Cluster:
▪ Cannot use different datastore types (i.e. NFS with VMFS or vice versa)
▪ Hosts attached to the datastore must be ESXi 5.0 or later
▪ The datastore cannot be in more than one vCenter Server
▪ Datastores must have homogenous hardware acceleration
o To enable SDRS, you create a new Datastore Cluster at the vCenter Datacenter
object level, selecting Actions menu > Storage > New Datastore Cluster, then
configure each setting that meets your organizational needs and requirements


o For Advanced options (see below), you can configure the following:
▪ VM disk affinity – allows you to keep all disks associated with a VM
together or on separate datastores
▪ Interval at which to check datastore imbalance
▪ Minimum space utilization difference – refers to the % difference
between source and destination datastore. For example, if the current
space usage on source Datastore is 82% and destination Datastore is 79%
and you configure "5", the migration is not performed
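The "minimum space utilization difference" behavior described above can be expressed as a one-line check (an illustrative sketch; SDRS's real algorithm weighs additional factors such as I/O load):

```python
# Sketch: SDRS migrates only if source usage exceeds destination usage
# by at least the configured threshold percentage.
def sdrs_should_migrate(source_used_pct: float, dest_used_pct: float,
                        min_diff_pct: float) -> bool:
    return (source_used_pct - dest_used_pct) >= min_diff_pct
```

With the example above, a source at 82% and a destination at 79% fall short of a threshold of 5, so no migration occurs.
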

• SDRS Maintenance Operations


o If you need to do maintenance on Cluster datastores, keep in mind the following
before placing a Datastore in maintenance mode:
▪ SDRS must be enabled on the Cluster the datastore is in
▪ No CD image files can be stored on the Datastore
▪ There are at least 2 Datastores remaining in the SDRS Cluster


Objective 7.5: Create Distributed Resource Scheduler (DRS) Affinity/Anti-Affinity


Rules for Common Uses Cases
Resource: vSphere 8.0 Resource Management Guide (2022), pp. 123-128

• Create Affinity/Anti-Affinity Rules


o After deciding what kind of Rules you need – VM to Host, or VM to VM – just
browse to the Cluster you want to create the Rules on > Configure tab > then in
Configuration section, click VM/Host Groups and click the Add button

Give your 1st Group a Name, then select which group Type to create first, a VM
or Host Group. Click the Add button and you’ll be prompted to select a group of
either VMs or Hosts, depending on what Type you selected to create first.
Repeat for the 2nd group Type (VM or Host), giving it a Name and selecting
desired VMs or Hosts. When finished with each group, click OK to finish


You then go back under the Configuration section and select VM/Host Rules,
then click the Add button


Here is where you choose what type of Rule to create. For the first two options –
'Keep VMs Together' or 'Separate VMs' – you don’t use any of the Groups you just
created. Those are VM to VM Affinity & Anti-Affinity Rules, respectively. If you
select either of those Rules then click the Add button, you’re prompted to select
the VMs from a list to keep together on the same Host or keep on different
Hosts (i.e. separate). The VM to Host Rule is where you use the Groups you just
created. Notice how the options change depending on the Rule type you select


o Choosing the "must" or "should" option will depend on your need or
requirement. Generally, for licensing requirements – a VM with an application
with certain license requirements such that it can’t run on multiple ESXi Hosts –
you should choose the "must run on Hosts…" option. NOTE: The Host Group you
create for this requirement should only contain a single ESXi Host. Using the
"should" option is for 'try your best' scenarios where you’d like to meet the
requirement but don’t want to risk performance, so you allow for actions such as
vMotion to meet performance SLAs for the application(s).
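The "must" versus "should" behavior can be sketched as a placement filter (an illustrative model only – DRS's real admission logic is more involved, and the host names used are hypothetical):

```python
# Sketch: 'must' is a hard constraint on candidate hosts; 'should' is a
# preference that can be ignored when no preferred host is available.
def placement_candidates(all_hosts, rule_hosts, enforcement):
    in_rule = [h for h in all_hosts if h in rule_hosts]
    if enforcement == "must":
        return in_rule                                  # hard filter, may be empty
    return in_rule if in_rule else list(all_hosts)      # soft preference, falls back
```
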


Objective 7.6: Migrate Virtual Machines


Resource: vCenter Server 8.0 and ESXi Host Management Guide (2022), pp. 120-167

• vMotion ('standard', Long-Distance, Cross-vCenter)


o When configuring each ESXi Host for vMotion, regardless of type:
▪ You must utilize a minimum of a 1GbE NIC per ESXi Host, but it is
recommended to use a 10GbE NIC
▪ If using a vSS switch type, make sure the Port Group labels used for
vMotion have identical label names on each ESXi Host
▪ vSS Port Group Security option settings (i.e. Promiscuous, MAC Address,
Forged Transmit) must also be identical on all ESXi Hosts
▪ When adding VMkernel Network Adapters, the IP Address subnet must
be the same on all Hosts wanting to migrate VMs to/from, and you must
select the vMotion TCP/IP Stack; for Long-Distance vMotion, use the
Provisioning TCP/IP Stack

After you’ve finished all your vMotion configurations, to perform a vMotion,
simply rt-click a VM > Migrate, and select a migration type


At the subsequent screens in the Migration wizard, choose the destination ESXi
Host (i.e. the compute resource) to migrate to, a virtual network if you need
to change the network your VM vNIC is using (click the Advanced button to
change networks for multiple VM vNICs), and the priority (CPU scheduling
priority) of the migration

• Storage vMotion
o As a reminder, since vSphere 4.0, you do not need to have VMs configured for
vMotion to use sVMotion, unless you’re using the 'Migrate Compute and Storage
Resource' option. To perform a sVMotion, rt-click a VM > Migrate, then choose
Change Storage Resource Only, then choose the Datastore wanting to migrate
the VM disks to, as well as the VM disk format (i.e. Thin Provisioned, Thick, etc).
Toggle the 'Configure per disk' slider button to choose storage for multiple VM
disks


7.6.1 – Identify Requirements for Storage vMotion, Cold Migration, vMotion, and Cross vCenter
Export

• Cold Migration
o Cold migration simply means migrating a VM between ESXi Hosts across Clusters,
Datacenters, or vCenter Server instances while a VM is either in the powered-off
or suspended state. If your application can afford brief downtime, this may be a
valid option for you. When a powered-off VM is migrated, if the VM guest OS is
64-bit, the destination Host is checked for one requirement – guest OS
compatibility. No other CPU-compatibility checks are performed. If migrating a
suspended VM, then the destination does need to meet another check such as
CPU compatibility.

Cold migration process:


–The VM’s .vmx, .nvram, .log, and .vmss (suspended) files are moved to the
target datastore, as well as VM disks. Something to be aware of when
performing a cold migration – VM migration data (called provisioning traffic)
uses the management network for cold migrations. If you need to perform
multiple VM cold migrations, and often, it’s recommended to isolate this traffic
using a VMkernel network adapter for provisioning traffic and do so on a
dedicated VLAN or subnet
–The VM is then registered on the target Host
– Once migrated, the source VM (& disks, if performing with storage migration)
are deleted
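The file set moved in the steps above can be summarized in a small helper (a sketch; real VMs name their logs vmware-#.log and also carry .vmdk descriptor files):

```python
# Sketch: the per-VM files relocated during a cold migration; the
# suspend-state (.vmss) file exists only for suspended VMs.
def cold_migration_files(vm_name: str, suspended: bool = False):
    files = [f"{vm_name}.vmx", f"{vm_name}.nvram", "vmware.log"]
    if suspended:
        files.append(f"{vm_name}.vmss")
    return files
```
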

• vMotion
o vMotion refers to the process of migrating a VM from a source to destination
ESXi Host while powered-on, migrating the entire state of the VM. You can
choose one of 3 vMotion types – compute migration, storage migration, or both.
The state information transferred refers to the VM memory content and all
information identifying and defining the VM like BIOS, device, CPU, MAC
address, chipset state information, etc. Unlike cold migration where
compatibility checks for a destination Host may be optional, the checks for
vMotion are required. During vMotion, the following stages occur:
▪ vCenter verifies the VM is in a stable state with the source Host
▪ The VM state is moved to the destination Host
▪ The VM resumes normal operation on the destination Host
o Before using vMotion, make sure to meet vMotion requirements:
▪ vSphere Standard license, Enterprise Plus for cross-vCenter vMotion
▪ Shared storage among Hosts
▪ vMotion network configured on each Host


– Ensure each Host is configured with a VMkernel adapter for vMotion


– If using a vSS, make sure network labels match across Hosts
– Ensure each Host uses a dedicated network for vMotion traffic
– Ensure the network used for vMotion has at least 250Mbps bandwidth
– Best practice is to use one NIC as failover
– Best practice is to configure jumbo frames for best performance
o You can perform several types of vMotion migrations, as listed below:
▪ vMotion for migrating VM compute to a destination Host
▪ Long-distance vMotion for long distances if latency < 150ms; and use
Provisioning TCP/IP Stack
▪ vMotion to another Datacenter
▪ vMotion to another vCenter Server starting from vSphere 6.0 and later;
vCenter Servers must be Enhanced-Linked Mode; following compatibility:
– MAC addresses must be compatible between vCenters
– Cannot migrate from a vDS to vSS
– Cannot migrate from differing vDS versions
– Cannot migrate from internal network, i.e. no NIC
– Cannot migrate if vDS is not working properly
▪ Advanced Cross-vCenter vMotion – ability to vMotion across vCenters
without need of vCenters to be in Enhanced Linked Mode
– Source vCenter needs to be 6.5 or later and target vCenter needs to be
v7U1c or later
– Can be done from different local SSO domains or from Cloud
▪ Storage vMotion VM disks must be in persistent mode or virtual RDMs
o Starting with vSphere 6.7, you can vMotion vGPU VMs
o Starting with vSphere 6.5, vMotion always uses encryption for encrypted VM
migrations. Cross-vCenter vMotion has the following supportability:
▪ Using vMotion encryption with unencrypted VMs to another vCenter is
supported
▪ Using vMotion encryption with encrypted VMs to another vCenter is not
supported because one vCenter cannot verify the other vCenter KMS
▪ For local vMotion, encrypted VMs always use encrypted vMotion, and
this function cannot be disabled


▪ For local vMotion, an unencrypted VM can use 1 of 3 encrypted vMotion
states – Disabled, Opportunistic (encrypt if both Hosts support it), or
Required (the migration fails if encryption is not supported)
▪ To configure encryption, go to VM settings > VM Options tab, Encryption
section and select the option best suited for your organization
o Be aware of some vMotion limitations and considerations:
▪ Source and destination Host IPs must be same family, i.e. all IPv4 or IPv6
▪ You can vMotion VMs with Host-connected USBs if enabled for vMotion
▪ You cannot vMotion VMs using source Host devices not accessible to the
destination Host, i.e. physical CD drive
▪ You cannot vMotion with VMs connected to client devices
o To improve vMotion compatibility between source and destination Hosts, you
can mask certain CPU feature sets to the VM by enabling Enhanced vMotion
Compatibility (EVC). Refer to Sub-Objective 1.4.2 for review of this technology.
o In Objective 1.1 I provided vCenter vMotion limits, but will provide a quick
review here. If using 1GbE NICs, you can have 4 simultaneous vMotions; if using
a 10GbE NIC you can have 8; a total of 128 simultaneous vMotions can occur on
each Datastore.
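Those limits can be captured in a quick reference helper (a sketch based on the maximums quoted above):

```python
# Sketch: per-host concurrent vMotion limit depends on vMotion NIC speed;
# the per-datastore limit is a flat 128 simultaneous vMotions.
DATASTORE_VMOTION_LIMIT = 128

def max_host_vmotions(nic_gbps: float) -> int:
    return 8 if nic_gbps >= 10 else 4
```
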

• Storage vMotion (sVMotion)


o Storage vMotion requirements:
▪ vSphere Standard license
▪ ESXi 5.0 or later with access to source & destination shared storage
▪ Enough free memory on the ESXi Host for storage vMotion
▪ Sufficient space on destination datastores
▪ VM disks in persistent mode
▪ Destination datastore is not in maintenance mode


o To use encryption with storage vMotion, be aware of the following
supportability: if the disks are encrypted, the data transmitted during
migration is also encrypted
o As mentioned in the vMotion section, I covered storage vMotion limits in
Objective 1.1, but will also provide a quick review here. You can have only 2
simultaneous storage vMotion migrations per Host, and 8 total per Datastore

Objective 7.7: Configure Role-Based User Management


Resource: vSphere 8.0 Security Guide (March 2023)

• Role-Based Management
o Definitions:
▪ Privileges – rights to perform actions on an object
▪ Role – set of privileges granting access to perform action on an object
▪ Permissions – role(s) assigned to a user or group to perform actions on
object
– Propagated: selecting the option for permissions to be assigned to a
vSphere object and objects below it in the vSphere hierarchy
– Explicit: permission added to an object without propagation
o Permissions priorities
▪ Propagation is not enabled by default; it must be manually enabled
▪ Child permissions have precedence over "parent" permissions
▪ User permissions have precedence over "group" permissions
▪ If a user is in multiple groups, all groups permissions apply
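The precedence rules above can be modeled in a few lines (an illustrative model of the rules, not vCenter's actual authorization code; the privilege strings are hypothetical):

```python
# Sketch of vSphere permission precedence: a user-specific permission beats
# group permissions; multiple group permissions are combined; otherwise a
# propagated permission from a parent object applies.
def effective_privileges(user_perm=None, group_perms=(), parent_perm=None):
    if user_perm is not None:
        return set(user_perm)              # user permission overrides groups
    if group_perms:
        combined = set()
        for perm in group_perms:           # all applicable group permissions apply
            combined |= set(perm)
        return combined
    return set(parent_perm or ())          # fall back to the propagated permission
```
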

• Configure Role-Based Management


o To begin providing user-security in your vSphere environment, you must first
create Roles. If you’re wanting to utilize users in your organizational Directory
Services solution, first add your Directory. Authentication was covered in
Objective 4.4, so review this before continuing if needed. After you’ve added
your Directory Service, go to Menu > Administration > and select Roles under
Access Control, then click the “+” to add a new role. You cannot modify vSphere
default Roles (Administrator, Read Only, No Access), but you can Clone them,
then select them to modify them ("pencil icon") if you want. There are other pre-
defined 'Sample' Roles you can use, and these are modifiable. But I recommend
not using them. As with the default Roles, just make a Clone of the Sample, then
modify as needed. If you want to view all Privileges a default, Sample, or Custom
Role has, click it in the Role list, then select the Privileges tab. vSphere will
display all Privileges assigned to the Role.

To view where the Role is assigned, click the Usage tab. This will also show
whether the Role/Permission is Propagated or not.

When creating a new Role, click the add button and select a Category to create a
Role on. If multiple diverse privileges are needed, you can select different
categories and choose the desired privileges for each. Click Next, give the Role a
Name, then Finish when completed


o To assign a Permission to the vSphere hierarchy, add a user to your new Role by
clicking on the object in the vSphere hierarchy > Permissions tab, and clicking
the Add button to add the new Permission. Browse your Directory (or a locally-
created User) for the User and choose your newly created Role to
associate/assign to the User. Select whether the Permission should Propagate or
not.


o For a list of Privileges for Common Tasks, refer to pp. 50-53, Security Guide. I
have provided an overview table below:

Each entry below shows Object: privilege → minimum default role on that object:

▪ Create VM
– Destination Folder or Datacenter: VirtualMachine.Inventory.Create New → Administrator
– Destination Host/Cluster/Resource Pool: resource.AssignVMtoResourcePool → Resource Pool Admin
– Destination Datastore: datastore.AllocateSpace → Datastore Consumer
– Network to assign to the VM: network.AssignNetwork → Network Admin
▪ Deploy VM from Template
– Several privileges for the Destination Folder or Datacenter, Template, Destination
Host/Cluster/Resource Pool, Destination Datastore, and Network → Administrator
(except Datastore and Network)
▪ Take Snapshot
– Source VM or Folder: VirtualMachine.SnapshotMgmt.CreateSnap → VM Power User
– Destination Datastore or Datastore Folder: datastore.AllocateSpace → Datastore Consumer
▪ Move VM into Resource Pool
– Source VM or Folder: resource.AssignVMtoResourcePool, VirtualMachine.Inventory.move → Administrator
– Destination Resource Pool: resource.AssignVMtoResourcePool → Administrator
▪ Install Guest OS
– Source VM: several VirtualMachine.Interaction.X privileges → VM Power User
– Datastore with the ISO: datastore.BrowseDatastore → VM Power User
▪ Migrate VM with vMotion
– Source VM or Folder: resource.migratePoweredOnVM → Resource Pool Admin
– Destination Host/Cluster/Resource Pool (if different than Source): resource.AssignVMtoResourcePool → Resource Pool Admin
▪ Cold Migrate VM
– Source VM or Folder: resource.migratePoweredOffVM → Resource Pool Admin
– Destination Host/Cluster/Resource Pool (if different than Source): resource.AssignVMtoResourcePool → Resource Pool Admin
– Destination Datastore (if different than Source): datastore.AllocateSpace → Datastore Consumer
▪ Migrate VM with sVMotion
– Source VM or Folder: resource.migratePoweredOnVM → Resource Pool Admin
– Destination Datastore: datastore.AllocateSpace → Datastore Consumer
▪ Move Host into Cluster
– Source Host: host.inventory.addHostToCluster → Administrator
– Destination Cluster: host.inventory.addHostToCluster → Administrator

o Manage certificates in vCenter: Certificate Management.Create (or Delete)


o Summary of Permissions table above – minimum permissions for a task


▪ Create VM/VMotion/Cold Migrate/Migrate with sVMotion – Resource


Pool Admin
▪ Deploy VM from Template/Move VM in Res Pool /Move Host in Cluster –
Administrator
▪ Take Snapshot/Power On VM/Install Guest OS – VM Power User

Objective 7.8: Manage Host Profiles


Resource: vSphere 8.0 Host Profiles Guide (2022)

• I covered Host Profiles extensively in Objective 4.16 – what they are, how to create
one, and how to use them – so I ask you to go back to that Objective to review the
information. As far as managing Host Profiles goes, it’s pretty simple. You can create
multiple Profiles (if desired), then click on one to modify it as needed and Attach it to
various ESXi Hosts to make them compliant with your desired environment
configuration. Whenever you make a change to a current Host Profile, your Hosts will
fall out of compliance, so you will have to reapply the Profile to your Hosts

Objective 7.9: Utilize VMware vSphere Lifecycle Manager


Resource: Managing Host and Cluster Lifecycle Guide (2022)

7.9.1 – Describe Firmware Upgrades for ESXi

• Firmware updates can be done using vLCM Images with a firmware and drivers add-on – a
vendor-provided add-on containing components which encapsulate firmware update
packages and might also contain the necessary drivers. Firmware updates are not available
via the VMware depot, but rather through a special vendor depot whose content you can
access through a software module called a hardware support manager (HSM).
This HSM is a plug-in which registers itself as a vCenter extension. Each vendor provides
& manages a separate HSM which integrates with vCenter Server. For each Cluster you
manage with a single image, you can select the HSM to use for the Cluster, and it then
provides you with a list of firmware updates you can choose to use which might add or
remove components to/from a vLCM image. Some HSM examples are:
o Dell – OpenManage Integration for VMware vCenter (OMIVV) appliance
o HPE – a management tool which is a part of iLO Amplifier; an appliance
o Lenovo – Lenovo xClarity Integrator for VMware vCenter appliance

7.9.2 – Describe ESXi Updates

• I touched on this in Sub-Objective 5.9.2, at least as far as 'how-to'. But maybe you don’t
know what specifically an update vs upgrade is? An Upgrade is when you apply software
to your ESXi Host to get to the next 'major release', or version, of vSphere – i.e. from
v6.7Ux to v7. An ESXi Update, similar to 'Patch Tuesday' for Windows, is the process of
applying smaller, yet cumulative, software packages to your Host – for example, Security
Fixes, Bug Fixes, Critical Patches, etc.
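
The Upgrade-vs-Update distinction boils down to whether the major release changes, which a simple version comparison can illustrate. The helper and the version strings here are invented for the example:

```python
# Illustrative helper: an Upgrade moves to a new major vSphere release,
# while an Update stays within the same release line (patches, fixes).
def classify_change(current: str, target: str) -> str:
    cur_major = int(current.split(".")[0])
    tgt_major = int(target.split(".")[0])
    return "Upgrade" if tgt_major > cur_major else "Update"

assert classify_change("6.7.0", "7.0.0") == "Upgrade"   # next major release
assert classify_change("7.0.1", "7.0.3") == "Update"    # cumulative patch
```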

7.9.3 – Describe Component & Driver Updates for ESXi

• The best way to describe Components is to simply go through the hierarchy VMware
and their partners (OEMs) use for software packaging
o vSphere Installation Bundles (VIBs) – the basic building block for creating install
packages, containing metadata & a binary payload, which is the actual software to
be installed; the smallest software unit or module, not an entire 'feature'
o Bulletins – used by vLCM Baselines; a grouping of one or more VIBs
o Components – a Bulletin, but with additional metadata and, unlike a Bulletin, a
logical grouping of VIBs providing you a complete and visible feature upon install.
With vSphere 7, these became the basic packaging construct for VIBs. VMware
bundles Components together into a fully functional base (i.e. bootable) ESXi
Image, whereas vendors bundle Components together as vendor add-ons, which
cannot be used to boot. They are usually OEM content and drivers.

Within the vLCM Home View, Bulletins are what you see under the Updates tab,
and Components are under the Image Depot tab.
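
The packaging hierarchy above can be pictured as a small data model. The class names and fields are illustrative only, not actual vLCM objects:

```python
# Toy model of the packaging hierarchy: VIBs are the smallest unit,
# Components group VIBs into one complete feature, and an image is a
# bootable base plus optional vendor add-ons (OEM content/drivers).
from dataclasses import dataclass, field

@dataclass
class VIB:
    name: str          # metadata
    payload: bytes     # the actual software to be installed

@dataclass
class Component:
    name: str
    vibs: list         # logical grouping of VIBs = one visible feature

@dataclass
class Image:
    base: list                                         # bootable VMware base image
    vendor_addons: list = field(default_factory=list)  # not bootable on their own

    def all_vibs(self) -> list:
        return [vib for comp in self.base + self.vendor_addons for vib in comp.vibs]

base = Component("esx-base", [VIB("esx-base", b"...")])
driver = Component("nic-driver", [VIB("net-driver", b"...")])
img = Image(base=[base], vendor_addons=[driver])
assert len(img.all_vibs()) == 2
```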

7.9.4 – Describe Hardware Compatibility Check

• ESXi Host Hardware Compatibility


o You can check an ESXi Host against the VCG in vLCM. NOTE: to be able to use this
tool, you must join the Customer Experience Improvement Program (CEIP). To
check compatibility, select a Host in the Inventory > Updates tab, then select
Hardware Compatibility. Choose a vSphere version from the drop-down, and
click Apply. You will then see a list of items displayed to verify compatibility with
the VMware Compatibility Guide (VCG)
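
As a rough illustration of the idea behind this check – comparing a Host's hardware inventory against compatibility data for a chosen vSphere version – here is a hypothetical sketch. The data and helper are made up for the example and are not the real VCG service:

```python
# Hypothetical compatibility check: flag any host hardware not listed
# for the selected vSphere version. Data is illustrative only.
vcg = {
    "8.0": {"supported_cpus": {"Intel Xeon Gold 6338"}, "supported_nics": {"Intel X710"}},
}

def check_host(host: dict, version: str) -> list:
    entry = vcg[version]
    issues = []
    if host["cpu"] not in entry["supported_cpus"]:
        issues.append(f"CPU {host['cpu']} not listed for {version}")
    if host["nic"] not in entry["supported_nics"]:
        issues.append(f"NIC {host['nic']} not listed for {version}")
    return issues

host = {"cpu": "Intel Xeon Gold 6338", "nic": "Intel X710"}
assert check_host(host, "8.0") == []
```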

7.9.5 – Describe ESXi Cluster Image Export Functionality

• A vLCM Image Export creates either a JSON, ISO, or ZIP file. You can then import this file
and use it for other Clusters in the same vCenter Server, or in another vCenter Server.
JSON files can be used for Images in other Clusters to save you from having to recreate
them fully, although when using one for another vCenter Server, you might also need to use
a ZIP file to make sure all components and drivers are included in the Image on the
target vCenter Server Cluster. If you export an Image as an ISO, keep in mind a few
caveats:
o You cannot use this exported ISO for a Cluster currently using a vLCM Image
o You cannot create a vLCM Image from this ISO
o You import the exported ISO into the Imported ISOs tab of vLCM and can only
use it for Baselines

• To perform the export, go to your Cluster containing the Image you want to export,
select the Updates tab, and click on Images. Click the 'ellipsis' on the right > Export, then
choose a file type to export. Next, go to the target vCenter and/or Cluster > Updates
tab, select Images, then click the Import Image button
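
A JSON export/import round trip can be pictured like this – the image-spec field names and add-on values are invented for illustration and do not reflect the actual vLCM JSON schema:

```python
# Illustrative JSON export of an image spec: serialize on the source
# cluster, deserialize on the target (components may still need a
# separate ZIP depot on another vCenter Server).
import json

image_spec = {
    "base_image": {"version": "8.0.1"},
    "add_on": {"name": "vendor-addon", "version": "A02"},
    "components": {"vendor-drivers": "1.0"},
}

exported = json.dumps(image_spec, indent=2)   # export from source cluster
imported = json.loads(exported)               # import on the target cluster
assert imported == image_spec                 # spec survives the round trip
```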


7.9.6 – Create ESXi Cluster Image

• Set up an Image on a Cluster which currently uses Baselines


o Select the Cluster > Updates tab, then select Images and click the Setup Image button
o vLCM will then proceed to check if the Cluster is eligible to use Images. If there are no
problems, the Convert to an Image pane appears. If there are errors or warnings
reported, resolve those then restart the process
o From the ESXi Version drop-down, select an ESXi Image
o Optionally add a Vendor Add-On
o Optionally add Firmware and Drivers Add-On
o Optionally add Additional Components
o Optionally (but recommended) Validate the Image by clicking the Validate button
o Click Save, then Finish Image Setup
• Another option is to Import an Image from another Cluster. To do so, see pp. 87-88
• After creating an Image, you can go back into it later to Edit it to add any Components you didn't
when initially creating it
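
The setup steps above boil down to "base image mandatory, everything else optional", which a small hypothetical validation sketch can capture. The real eligibility checks are performed by vLCM itself; this function and its field names are made up:

```python
# Illustrative validation mirroring the Setup Image wizard: an ESXi
# base image is required; add-ons and extra components are optional.
def validate_image(spec: dict) -> list:
    errors = []
    if not spec.get("esxi_version"):
        errors.append("An ESXi version (base image) is required")
    return errors

spec = {"esxi_version": "8.0.1"}           # mandatory base image
spec["vendor_addon"] = "vendor-addon"      # optional
spec["firmware_addon"] = "hsm-addon"       # optional
assert validate_image(spec) == []
assert validate_image({}) == ["An ESXi version (base image) is required"]
```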

Objective 7.10: Use Predefined Alarms in VMware vCenter


Resource: vSphere 8.0 Monitoring and Performance Guide (2022), pp. 155-161

• Preconfigured vSphere Alarms


o vSphere has quite a few predefined, or preconfigured, alarms within its
Inventory. To see a whole list of them, view the pages listed in the Monitoring
Guide noted above. Also note, some of the preconfigured alarms are Stateless.
For these alarms, vCenter doesn't keep data on them or display their status, and
you cannot acknowledge or reset them. In the list, these alarm types are
marked with an asterisk in the Guide
o Preconfigured alarms monitor object operations in the vSphere Inventory. Any
changes you make should only be to their respective Actions


▪ If the modify options are greyed out when you select an alarm, it is more
than likely because the alarm was created at a higher level in the
Inventory, and the alarm must be modified from that level
▪ You can Edit, Delete, or Disable (or Enable) any of the alarms
o To view any preconfigured alarms, click on a vSphere object > Configure tab >
then scroll down to, and select, Alarm Definitions

Objective 7.11: Create Custom Alarms


Resource: vSphere 8.0 Monitoring and Performance Guide (2022)

• Create Custom Alarms


o Before diving into configuring alarms, let me first share what they are, at a
high level. Alarms are simply notifications activated in response to an event, set
of conditions, or the state of a vSphere Inventory object. An alarm definition
consists of the following elements:
▪ Name/(optional) description
▪ Target types – type of object monitored; Clusters, Hosts, VMs
▪ Alarm rules – defines event, condition, or state which triggers the alarm
and defines notification severity and/or operations which occur in
response to the triggered alarm
– Trigger: list is dependent on the target type selected
– Severity level: warning (yellow) or critical (red); normal = green
– Action: send email; send snmp trap; run a script
– Advanced action: dependent on the target type; e.g. enter/exit
maintenance mode for Host type; power on/off VM for VM type
▪ Last modified – date/time alarm was last changed
o Configure alarms:
▪ Select a vSphere object, rt-click it, and choose Alarms > New Alarm
Definition, or click the Add link from the Inventory objects Configure tab
> Alarm Definitions section


▪ Give the Alarm a Name


▪ Decide what action you want to monitor


▪ As you can see above, there are a variety of "arguments" to choose from.
Once you’ve chosen what to monitor, select a Severity Level (Warning or
Critical) and whether to Send Email, SNMP Traps, or Run a Script. You can
also choose more Advanced options (2nd screenshot below)


▪ Choose a reset action, if desired:

▪ Then Enable the Alarm, and click Create:
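
Putting the alarm-definition elements from this Objective together, here is a toy model of a definition and how its trigger might be evaluated. The dict structure and function are illustrative only, not the vCenter alarm API:

```python
# Toy alarm definition: target type, a rule (trigger + severity), and
# actions; evaluate() returns the severity if the trigger fires.
alarm = {
    "name": "Host CPU usage",
    "target_type": "Host",
    "rule": {"metric": "cpu_usage_pct", "above": 90, "severity": "critical"},
    "actions": ["send_email"],
}

def evaluate(alarm: dict, sample: dict) -> str:
    rule = alarm["rule"]
    if sample[rule["metric"]] > rule["above"]:
        return rule["severity"]   # warning = yellow, critical = red
    return "normal"               # green

assert evaluate(alarm, {"cpu_usage_pct": 95}) == "critical"
assert evaluate(alarm, {"cpu_usage_pct": 40}) == "normal"
```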


Objective 7.12: Deploy an Encrypted Virtual Machine


Resource: vSphere 8.0 Security Guide (March 2023)
vCenter Server 8.0 and Host Management Guide (2022), pp. 158-161

7.12.1 – Convert a Non-Encrypted Virtual Machine to an Encrypted Virtual Machine

• Encrypt an Existing Virtual Machine


o Prerequisites needed before procedure:
▪ A Key Provider (KMS Server) must be in place
▪ An encryption Storage Policy should already be created
▪ Ensure the VM to be encrypted is powered off
▪ Permissions: Cryptographic Operations.Encrypt new
▪ You can only encrypt VM virtual disks of an encrypted VM. You cannot
encrypt VM virtual disks and not the VM Home (config files, etc)
o Process:
▪ Rt-click VM > VM Policies > Edit VM Storage Policies
▪ In the Edit Policies window, click the Storage Policy drop-down and select
the VM Encryption Policy
▪ Click to configure 'per disk' if you want to configure different policies for
VM Home (VM configuration files) and Virtual Disks. Click OK when done

▪ Another way to encrypt the VM is Edit Settings > VM Options tab,


expand the Encryption section, and click the Encrypt VM drop-down to
select the VM Encryption Policy
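
The prerequisites listed above amount to a pre-flight check before encrypting, which can be sketched as follows. The field names are illustrative, not the vSphere API:

```python
# Illustrative pre-flight check mirroring the encryption prerequisites:
# key provider present, encryption storage policy available, VM powered
# off, and no attempt to encrypt disks without the VM Home files.
def can_encrypt(vm: dict, env: dict) -> list:
    problems = []
    if not env.get("key_provider"):
        problems.append("No Key Provider (KMS) configured")
    if not env.get("encryption_storage_policy"):
        problems.append("No VM Encryption Storage Policy available")
    if vm.get("power_state") != "poweredOff":
        problems.append("VM must be powered off")
    if vm.get("encrypt_disks_only"):
        problems.append("Cannot encrypt virtual disks without also encrypting VM Home")
    return problems

env = {"key_provider": "kms-1", "encryption_storage_policy": "VM Encryption Policy"}
assert can_encrypt({"power_state": "poweredOff"}, env) == []
assert can_encrypt({"power_state": "poweredOn"}, env) == ["VM must be powered off"]
```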

7.12.2 – Migrate an Encrypted Virtual Machine

• Encrypted VMotion
o Encrypted VMs always use encrypted VMotion. NOTE: if you change a VM to be
unencrypted, the Encrypted VMotion state will still be at 'Required' (see below)
until you explicitly change it to another state


o You can choose 1 of 3 encryption states for unencrypted VMs to use with
VMotion:
   ▪ Disabled – do not use encrypted VMotion
   ▪ Opportunistic (the default) – use encrypted VMotion if the source &
     destination Hosts support it; otherwise, migrate unencrypted
   ▪ Required – only allow encrypted VMotion; if the source or destination
     Host doesn't support it, the migration fails
o To configure one of the above states, expand the Encryption section, and change
the Encrypted VMotion option drop-down. NOTE: You must be running vSphere
6.5+ to use encrypted VMotion; and, you can only change the state on
unencrypted VMs
o Migrate an encrypted VM: nothing new for this. Once encryption is enabled in
the VM settings & an encrypted Storage Policy is set, just VMotion the VM
o Cross vCenter Encrypted VMotion
▪ Allowed as long as the following is met:
– Both source & target vCenters use the same Key Provider
– The Key Providers are configured with the same Name
– vCenter Server 7.0+
– ESXi 6.7+
– Must be performed by using vSphere APIs
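
The behavior of the three Encrypted vMotion states, for both encrypted and unencrypted VMs, can be summarized in a short sketch. It models the documented outcomes; the function itself is illustrative, not a vSphere API:

```python
# Illustrative model of Encrypted vMotion behavior: encrypted VMs always
# migrate encrypted; unencrypted VMs follow the Disabled/Opportunistic/
# Required setting, depending on whether both hosts support encryption.
def vmotion_encryption(vm_encrypted: bool, setting: str, hosts_support: bool) -> str:
    if vm_encrypted or setting == "Required":
        if not hosts_support:
            raise RuntimeError("Migration fails: encryption required but unsupported")
        return "encrypted"
    if setting == "Opportunistic":
        return "encrypted" if hosts_support else "plaintext"
    return "plaintext"   # Disabled

assert vmotion_encryption(True, "Disabled", True) == "encrypted"     # always encrypted
assert vmotion_encryption(False, "Opportunistic", False) == "plaintext"
assert vmotion_encryption(False, "Disabled", True) == "plaintext"
```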

7.12.3 – Configure Virtual Machine vMotion Encryption Properties

• I discussed this in the Sub-Objective above

We made it through the very last Objective! Congratulations on finishing and allowing me to go
through your VCP-DCV studies with you. I hope this Study Guide helps you in gaining knowledge on
the newest vSphere features, as well as for passing your VCP-DCV exam. Please let me know what
you think, and if you notice any grammatical or technical discrepancies. You can ping me on
Twitter or LinkedIn. Feel free to share this Study Guide with others, but as with all my past Study Guides, please
give credit to the author (me). Thank you so much!
