VMware VCP-DCV 2023 Study Guide
Hello vCommunity friends! I am glad to be able to provide this VMware vSphere VCP-DCV
2023 Study Guide, referencing vSphere version 8. Thank you for downloading and reading. I
hope you find this Study Guide useful for your VCP-DCV studies. Please don’t hesitate to
provide any feedback you may have to make this Guide better. You can find and message
me on Twitter or LinkedIn. Now let’s get to the Guide…
The two core components of vSphere are ESXi and vCenter Server. With the vSphere 7
release, the Platform Services Controller (PSC) is no longer a part of the vSphere installation
process. All PSC services and functions have been incorporated into vCenter Server,
providing an easier install process and less complexity. In this Objective, I will cover the
pre-requisites and components making up these two core products, as well as
provide several common configuration maximums you should be familiar with. Installation
steps and more detailed component feature explanations will be discussed in other
Objectives later in this Guide.
o The following are several vSphere maximums per ESXi Host. To see a full list,
refer to the VMware config max website
The following section contains the components and requirements for installing vCenter Server:
o vCenter Appliance is preinstalled with – Photon 3.0 OS, PostgreSQL Database, &
virtual hardware 10; and has two components:
▪ Authentication Services:
o vCenter Single Sign-On (SSO)
o License Service
o Lookup Service
o VMware Certificate Authority (VMCA)
▪ vCenter Server services:
o vCenter Server
o PostgreSQL
o vSphere Client
o vSphere ESXi Dump Collector
o vSphere Auto Deploy
o vSphere Lifecycle Manager Extension
o vSphere Lifecycle Manager
Page 2 of 180 Author: Shane Williford
VMware VCP-DCV 2023 Study Guide
vSphere 8.0 Version
▪ Storage requirements:
o The following are several vSphere maximums for vCenter Server. To see a full
list, refer to the VMware config max website
Hosts – 2500
Powered-on VMs / Registered VMs – 40000 / 45000
Linked vCenters – 15
Concurrent web client sessions – 100
vMotion operations per Host – 4 (1GbE) / 8 (10GbE)
Storage vMotions per Host – 2
vMotion / Storage vMotion per datastore – 128 / 8
Datastore Clusters – 256
Datastores per Datastore Cluster – 64
Number of vDS – 128
1.3.1 – Identify and Differentiate Different Storage Access Protocols for vSphere
• Local storage
o ESXi Host local storage refers to internal hard disks or external storage systems
connected to the Host via SAS or SATA protocols. Supported local devices
include: SCSI, IDE, SAS, SATA, USB, Flash, and NVMe
Since this storage type is local to the ESXi Host, it cannot be shared with
other Hosts. As such, vSphere features such as High Availability (HA)
and vMotion are not available.
NOTE: USB and IDE cannot be used to store VMs
• Networked storage
o These external storage systems are accessed by ESXi Hosts over a high-speed
storage network, and are shared among all Hosts configured to access the
system. Storage devices are represented as Logical Units (LUNs) and have a
distinct UUID name – World Wide Port Name (WWPN) for FC, and an iSCSI
Qualified Name (IQN) for iSCSI
▪ Shared LUNs must be presented to all ESXi Hosts with the same UUID
▪ You cannot access LUNs with different protocols (i.e. FC and iSCSI)
▪ LUNs can contain only one VMware File System (VMFS) datastore
o VMware supports the following types of Networked storage:
o Access control is setup on the iSCSI storage system by Initiator Name, IP Address,
or CHAP
o Error correction, disabled by default, can be achieved by enabling header digests
and data digests, which check header info and SCSI data transferred between initiator
and target in both directions. Enabling them incurs additional processing,
however, increasing CPU load and reducing throughput
o Jumbo Frames with an MTU greater than 1500 (generally, 9000) are required end-
to-end from Host to storage system. This configuration is needed on the vSwitch,
Software iSCSI Adapter, VMkernel port, and physical switches
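The end-to-end MTU configuration above can be sketched with esxcli and verified with vmkping; the vSwitch name, VMkernel port, and target IP below are assumptions for illustration:

```shell
# Assumed names: vSwitch1, vmk1, and a storage target at 192.168.50.10
# Set MTU 9000 on the standard vSwitch carrying iSCSI traffic
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
# Set MTU 9000 on the VMkernel port used for iSCSI
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
# Verify end-to-end: 8972 bytes of payload plus 28 bytes of IP/ICMP headers = 9000;
# -d sets the Don't Fragment bit, so a failure indicates an MTU mismatch in the path
vmkping -I vmk1 -d -s 8972 192.168.50.10
```

Remember the physical switch ports must also be set to 9000 out-of-band; vmkping only confirms the path once every hop is configured.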
o iSCSI Port binding is used for multipathing to ensure a highly available network
storage configuration. What this entails is assigning an ESXi Host physical NIC to
a separate VMkernel port, then binding each VMkernel port to the Software
iSCSI Adapter. For hardware multipathing, 2 physical HBAs are required and are
utilized similarly to FC multipathing. Hardware and Software multipathing are
visually represented by VMware as in the image below:
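A minimal port-binding sketch from the ESXi shell, assuming a software iSCSI adapter named vmhba65 and two VMkernel ports vmk1/vmk2 (all names are illustrative):

```shell
# Bind each VMkernel port (each backed by one physical NIC) to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk2
# Confirm the bindings
esxcli iscsi networkportal list --adapter=vmhba65
```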
o To upgrade NFS 3 datastores to NFS 4.1, you must storage vMotion VMs off the
NFS 3 datastore and unmount it, then remount it back as NFS 4.1 and storage
vMotion VMs back
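The unmount/remount step above can also be done from the ESXi shell; the datastore, NAS host, and export names here are hypothetical:

```shell
# After storage vMotioning VMs off, unmount the NFS 3 datastore
esxcli storage nfs remove --volume-name=ds-nfs01
# Remount the same export as NFS 4.1
esxcli storage nfs41 add --hosts=nas01.lab.local --share=/export/ds01 --volume-name=ds-nfs01
# Verify the new mount
esxcli storage nfs41 list
```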
o Some general NFS rules:
▪ You cannot connect a NFS datastore to different Hosts using different
NFS versions
▪ You can run different NFS version datastores on the same Host
• VMware File System (VMFS) – datastores using local, FC, or iSCSI storage devices
• Network File System (NFS) – datastores created using NAS over NFS devices; only NFS
v3 and NFS v4.1 are supported
• Software Defined Storage (vSAN) – datastore created when using VMware vSAN, a
VMware-proprietary version of software defined storage. vSAN aggregates all Cluster
ESXi Host local storage into a single datastore shared by all Hosts in the vSAN Cluster. I’ll
discuss the requirements and components of vSAN later in Objective 1.7
1.3.3 – Explain the Importance of Advanced Storage Configuration (VASA, VAAI, PSA)
• vSphere API for Storage Awareness (VASA) – a set of APIs allowing storage arrays to
communicate and integrate with vCenter Server for management functionality such as:
o Storage array features
o Storage health, configuration info, and capacity
o Disk type
o Snapshot space reduction
o Storage deduplication
o vVol type
• vSphere API for Array Integration (VAAI) – allows for hardware acceleration and
offloading of certain operations to the storage system from the ESXi Host, allowing the
Host to perform operations faster while consuming less resources
o Requirements: vSphere 4.1 (NAS not supported until v5.0), ESXi Enterprise
license (or higher), storage arrays supporting VAAI
o Operations using VAAI (vSphere 5.x and higher):
▪ Hardware Assisted Locking or Atomic Test and Set (ATS) for locking VMFS
files
▪ Clone Blocks / Full Copy / XCOPY to copy data within an array
▪ Block Zeroing or 'write same' to zero out disk regions
▪ Thin provisioning
▪ Block Delete / unmap for reclaiming unused disk space
▪ Full File Clone (NAS)
▪ Fast File Clone or Snapshot Support (NAS)
▪ Path Selection Plugin (PSP) is responsible for selecting the physical path
for I/O requests, using one of 3 policies: Fixed (VMW_PSP_FIXED), Most
Recently Used (VMW_PSP_MRU), and Round Robin (VMW_PSP_RR)
The default SATP if none is found is VMW_SATP_DEFAULT_AA, and its PSP is VMW_PSP_FIXED
If the VMW_SATP_ALUA SATP is used, the default PSP is VMW_PSP_MRU
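You can inspect and change these defaults from the ESXi shell; the device ID below is a hypothetical placeholder:

```shell
# Show each SATP and its default PSP (e.g. VMW_SATP_DEFAULT_AA -> VMW_PSP_FIXED)
esxcli storage nmp satp list
# Show the SATP/PSP currently assigned to each storage device
esxcli storage nmp device list
# Change one device's PSP to Round Robin (the naa ID is a placeholder)
esxcli storage nmp device set --device=naa.60000000000000000000000000000001 --psp=VMW_PSP_RR
```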
o Multipathing Plugins (MPP) are 3rd party plugins created by using various
VMkernel APIs. These 3rd party plugins offer multipathing and failover functions
for a particular storage array
▪ VMware High Performance Plugin (HPP) is like the NMP but is used for
high-speed storage devices, such as NVMe PCIe flash, and uses the
following Path Selection Schemes (PSS):
– Fixed
– LB-RR (Load-Balance Round Robin)
– LB-IOPs
– LB-Bytes
– LB-Latency
▪ Claim Rules determine whether a MPP or NMP owns a path to a storage
device. NMP claim rules match the device with a SATP and PSP, whereas
MPP rules are ordered with lower numbers having precedence
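The claim-rule ordering and SATP matching described above can be viewed from the ESXi shell:

```shell
# MPP/NMP claim rules, ordered by rule number (lower numbers take precedence)
esxcli storage core claimrule list
# SATP rules that match devices to a SATP (and therefore a default PSP)
esxcli storage nmp satp rule list
```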
o The Express Storage Architecture (ESA) uses flash devices contributing to
both cache and capacity, aggregated to form a storage pool. There can only be 1
storage pool configured per ESXi Host
o The Original Storage Architecture (OSA) uses flash and capacity (HDD) drives combined
to form a Disk Group. A Disk Group contains a minimum of 2 drives – one
flash drive used for caching, and one or more HDDs used for capacity. A
maximum of 5 Disk Groups can be configured per ESXi Host. I'll discuss more
limitations and requirements in Objective 1.7
o A vSAN Datastore consists of the following object types
▪ VM Home Namespace – VM home directory where all VM configuration
files are stored
▪ VMDK – a VM disk file; .vmdk
▪ VM Swap Object – created when powering on a VM
▪ Snapshot Delta VMDKs – created upon taking VM snapshots; not created
on ESA vSANs
▪ Memory Object – created when the snapshot memory option is selected
when creating or suspending a VM
o Some more Disk Group characteristics to be aware of:
▪ Only 1 flash device can be used per Disk Group
▪ There can only be a maximum of 7 HDDs used per Disk Group
▪ When creating a Disk Group, VMware suggests using a 10% Flash Cache
to Capacity ratio
▪ ESXi Hosts cannot contribute to more than one vSAN Cluster, but Hosts
can access external storage shared across Clusters (i.e. SAN, NFS, vVol)
▪ Whether an ESXi Host has 1 or several Disk Groups, all Disk Groups
configured on every Host in the Cluster only form 1 vSAN Datastore
▪ vSAN does not support DPM, RDM, SIOC, or SE Sparse Disks
• Virtual Volumes (vVols) – I discussed vVols in depth in Sub-Objective 1.3.2 above. Refer
to that section to review vVol concepts
1.3.6 – Identify Use Cases for Raw Device Mapping (RDM), Persistent Memory (PMem), Non-
Volatile Memory Express (NVMe), NVMe over Fabrics (NVMe-oF), and RDMA (iSER)
• Raw Device Mapping (RDM) – RDM provides a mechanism for a VM to have direct
access to a LUN on a physical storage subsystem. RDM is a mapping file which acts as a
proxy for a raw physical storage device (see screenshot below). VMFS Datastores are
pretty much the standard when assigning storage to VMs, but there are a couple use-
cases when RDMs are preferred:
o SAN snapshot or other layered applications run in a VM. Using RDM enables
backup offloading systems using features inherent to the SAN
o Microsoft Clustering Service (MSCS) spanning physical hosts
• RDMA (iSER) – iSCSI Extensions for RDMA (iSER) uses the same iSCSI framework, but
replaces the TCP/IP transport with the Remote Direct Memory Access (RDMA)
transport. Using this transport allows data to transfer directly between the memory
buffers of the ESXi Host and storage devices over an underlying RDMA fabric. This
eliminates TCP/IP processing overhead, and thus reduces latency. To use RDMA, an
RDMA-capable network adapter must be installed in the ESXi Hosts (e.g. Mellanox
MT27700), and underlying physical switches must support RDMA
o There are no specific use cases provided in VMware documentation on why you
would use iSER over iSCSI. But if you have workloads needing lower latency
than regular workloads, and you don't have, nor want to incur the costs of
implementing, the high-speed storage solutions mentioned above, this is a
viable alternative
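Enabling iSER once an RDMA-capable NIC is present can be sketched as follows, per VMware's iSER configuration flow (verify the commands against your ESXi build):

```shell
# List RDMA-capable devices detected by the Host
esxcli rdma device list
# Create a software iSER adapter on top of the RDMA device
esxcli rdma iser add
# The new vmhba then appears alongside the other iSCSI adapters
esxcli iscsi adapter list
```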
o You can use the following resource management capabilities to trigger SDRS
recommendations
▪ Space utilization load balancing is used to trigger recommendations
based off a set threshold on a datastore. If the threshold is exceeded, a
migration is recommended
▪ I/O latency load balancing can be based off a set I/O latency threshold,
below which migrations are not allowed
▪ Anti-affinity rules can be used to have VM disks on different datastores
Each setting above can be configured to: Use cluster settings, No automation
(manual mode), or Fully automated
• I’ll share more about requirements and cover a few advanced features/options later in
Sub-Objective 7.4.3
In this Objective, I’ll be discussing Cluster terms, concepts, and limitations. I'll share about
configuration and management of ESXi Clusters later in Section Objectives 4.6, 4.7, and 7.5.
The vSphere Resource Management documentation shares how DRS scores VMs, but not
with much detail. As such, I will reference two blog posts from VMware Technical Marketing
Architect Niels Hagoort, who goes into more Score detail. I’ll just cover the main points, but
for further reading & to fill in the blanks, the posts can be read from here & here.
'VM happiness' refers to the ability of a resource (VM) to consume the resources it
is entitled to, which depends on many factors such as Cluster sizing, ESXi Host
utilization, workload characteristics, and VM configuration of vCPU/Memory/Network.
The VM DRS Score is simply the goodness ('happiness') of the VM on its current Host in
the Cluster, expressed as a percentage, and calculated every minute. The Goodness
Modelling can be expressed as VMs having an ideal throughput and actual throughput
for resources (CPU, RAM, Network). When there’s no contention, those 2 items are
equal (actual = ideal). When there is contention for resources, there is a cost for the
resource, which hurts the VM’s actual throughput metric.
• vSphere HA
o vSphere HA provides high availability to virtual machines by combining VMs and
the Hosts they reside on into vSphere Clusters. ESXi Hosts are monitored and, in
the event of a Host failure, VMs are restarted on other available ESXi Hosts in the
Cluster
o vSphere HA works in the following way: when vSphere HA is enabled, vCenter
Server installs the HA agent (FDM, the Fault Domain Manager) on each ESXi Host. An
election process determines the primary Host, with the ESXi Host connected to the
most datastores having an advantage, and the remaining Hosts become secondaries.
The primary Host monitors secondary Hosts every second through network
heartbeats over the management network, or over the vSAN network if vSAN is
used. If heartbeats are not received through this method, before the primary
Host determines a secondary Host has failed, it checks the liveness of the
secondaries by seeing if they are still exchanging datastore heartbeats. The
default number of datastores used for heartbeating is 2, and a maximum of 5 can
be configured using the vSphere HA Advanced option das.heartbeatDsPerHost
o The vSphere HA Primary Host uses several factors when determining which
available Cluster Hosts to restart VMs on:
▪ File accessibility of a VM’s files
▪ VM Host compatibility with affinity rules
▪ Available resources on a Host to meet VM reservations
▪ ESXi Host limits, for example VMs per Host or vCPUs per Host maximums
▪ Feature constraints, e.g. vSphere HA does not violate anti-affinity rules or
fault tolerance limits
• Fault Tolerance (FT) provides continuous availability to your most mission-critical VMs.
When enabled on a VM, the source VM, called the primary or protected VM, replicates
to a newly created secondary VM on another ESXi Host separate from the primary VM.
These VMs continuously monitor the status of one another ensuring FT is maintained.
The use-case for having FT VMs is to ensure continuation of service in the event of an
ESXi Host failure only. FT VMs are not backups! A secondary VM is not a completely
isolated copy of a primary VM with incremental data to roll back to. So, if any data gets
corrupted within the primary VM guest OS, this same corrupt data gets synchronized over
to the secondary VM, making the secondary VM corrupt as well. Recoverability is
thus only achieved by having a VM backup solution in place. FT also avoids a 'split-brain'
situation, which can arise from having 2 active VM copies after a failure. This is achieved
by atomic file locking on shared storage to coordinate failover so only 1 VM continues
running as primary, and a new secondary is respawned automatically. Also, FT can now
support SMP VMs with up to 8 vCPUs (Enterprise Plus licensing)
Objective 1.5: Explain the Differences Between VMware Standard Switches (VSS)
and Distributed Switches (VDS)
Resource: vSphere 8.0 Networking Guide (2022)
o When creating a standard switch, you have the option to create it in one of 3
ways - VMkernel Network Adapter, Physical Network Adapter, Virtual Machine
Port Group for a Standard Switch
Virtual Machine Port Group handles VM network traffic on the standard switch
• VMkernel Networking
o VMkernel networking is used for handling system network traffic and provides
connectivity between Hosts. Keep in mind when configuring virtual networking,
if creating a vMotion network, all ESXi Hosts to which you want to migrate VMs
must be in the same broadcast domain (Layer 2 subnet). The VMkernel
Network has several TCP/IP Stacks:
▪ Default – supports several network traffic services (see pg. 46)
▪ vMotion – for VM live migration traffic
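The configured TCP/IP stacks, and which VMkernel ports are bound to each, can be listed from the ESXi shell:

```shell
# Show all TCP/IP stacks (defaultTcpipStack, vmotion, provisioning, etc.)
esxcli network ip netstack list
# Show VMkernel interfaces and the stack each belongs to
esxcli network ip interface list
```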
1.5.2 – Manage Networking on Multiple ESXi Hosts with vSphere Distributed Switch (VDS)
Though not shown in the image below, the latest VDS version is 8.0
And not shown above, features such as Network I/O Control and IGMP/MLD
snooping were introduced in VDS version 6.0
o If you need to upgrade your VDS to a newer version, ESXi Hosts and VMs
connected to the VDS will experience brief downtime. To begin adding Hosts, rt-
click on the VDS > Add and Manage Hosts option
At the Manage Physical Adapters screen, expand each Host and select the
Physical Adapter you want added to the VDS and click Next. You can then
migrate any VMkernel Adapters or VM Networking (next 2 screens) if desired, or
create or move them later. Again, what’s nice about a VDS is it’s a "one-stop-
shop" configuration platform for all ESXi Hosts in your Datacenter. You can view
its Topology from the Configure tab
• Security Policy
o Promiscuous Mode – the vSwitch allows all traffic to pass to the VM; default = Reject
o MAC Address Changes – if the VM guest OS changes the effective MAC of the
VM, the vSwitch drops all INBOUND packets; default = Reject
o Forged Transmits – if the source MAC is different than what’s in the VM .vmx
file, the vSwitch drops all OUTBOUND traffic from the VM; default = Reject
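The effective security policy of a standard switch can be checked from the ESXi shell (vSwitch0 is the usual default name):

```shell
# Display the Promiscuous Mode, MAC Address Changes, and Forged Transmits settings
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
```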
1.5.4 – Manage Network I/O Control (NIOC) on vSphere Distributed Switch (VDS)
• Configure NIOC
o To first utilize NIOC, you have to create a Distributed Switch. The VDS features
you want or need in your organization will determine the NIOC version you need.
If a VDS is already created, then the only thing needed is to modify its
Properties and enable NIOC functionality.
o To configure and manage system traffic, you then click on the VDS > Configure,
then select System Traffic under the Resource Allocation section. You can then
click on a System Traffic type and Edit its resource allocation, controlling the
bandwidth this traffic type utilizes
o To manage VM Traffic, you would go into the Network Resource Pools area and
add an allocation. After you’ve created all you need, you then assign the
Resource Pool to the VDS Port Group the VMs are attached to
With vLCM you can also update VM Tools and VM Hardware, as seen from the
list shown in the image above.
To check a Host against the VCG, select a Host in the Inventory in the Client >
Updates tab, then select Hardware Compatibility. Choose a vSphere version
from the drop-down, and click Apply. You will then see a list of items displayed
to verify compatibility with (last 3 screenshots)
o If you have multiple vCenter Servers, and thus multiple vLCMs, you can configure
each vLCM per vCenter only. Settings will not propagate to other vLCM
instances. Similarly, when patching, you do so to objects attached only to a given
vCenter Server.
o When you enable other VMware solutions which use a vLCM Image, the feature
automatically uploads an offline bundle with components into the vLCM depot
and adds its components to all Hosts in a Cluster. Cluster solutions currently
integrated with vLCM are: vSphere HA, vSAN, Tanzu, and NSX
o Though vLCM with Images is a powerful upgrade to VUM, it still may not be the
best upgrade solution for your organization. View the diagram below to see
some of the differences, then make the best upgrade management choice for
your environment (view all differences on pp. 23-25, Lifecycle Guide):
• vSAN Components:
o vSAN architecture components consist of:
▪ vSphere, minimum 5.5 U1
▪ vSphere Cluster
▪ Disk groups, consisting of at least 1 flash device for cache and at least 1,
but up to 7, hard disk drives for capacity storage; maximum of 5 groups
per ESXi Host
▪ Witness is the component containing metadata that serves as a
tiebreaker when a decision must be made regarding availability of
surviving datastore components
▪ Storage Policy-Based Management (SPBM) for VM storage requirements
▪ Ruby vSphere Console (RVC) is a CLI console for managing and
troubleshooting a vSAN Cluster
▪ vSAN Observer runs on RVC and is used to analyze performance and
monitor a vSAN Cluster
▪ Fault domains
▪ vSAN storage objects
o vSAN datastore components:
▪ A VM Home Namespace is the home directory where all VM
configuration files are stored, such as .vmx, .vmdk, .log, and snapshot
descriptor files
▪ VMDK is the VM disk that stores a VM hard disk contents
▪ A VM Swap object is created when a VM is powered on
▪ Snapshot Delta VMDKs are created when a snapshot is taken
▪ A Memory object is created when the snapshot memory is selected when
taking a VM snapshot
• vSAN Supportability
o vSAN does not support vSphere features such as: SIOC, DPM, RDMs, or VMFS
▪ Degraded is when vSAN detects a permanent component failure, e.g. a
component is on a failed device
o vSAN objects are in one of two following states:
▪ Healthy is when at least one full RAID-1 mirror is available or the
minimum required number of data segments are available
▪ Unhealthy is when there is no full mirror available or the minimum
required number of data segments are unavailable for RAID-5 or RAID-6
objects
o vSAN Cluster can be one of two types
▪ Hybrid Clusters have a mix of flash storage, used for cache; and hard disk
storage, used for capacity
▪ All flash Clusters use flash for both cache and capacity
o vSAN Clusters can be Deployed in one of the following ways:
▪ Standard vSAN Cluster deployment consists of minimum of 3 ESXi Hosts
▪ Two-Node vSAN Cluster deployment is generally used for ROBO locations
with an optional Witness Host at a separate remote location
▪ Stretched vSAN Cluster provides resiliency against the loss of an entire
site by distributing ESXi Hosts in a Cluster across two sites with latency
less than 5ms between the sites, and a Witness Host at a 3rd location
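From any Host in the Cluster, membership and the disks contributed to vSAN can be verified from the ESXi shell:

```shell
# Cluster membership, role (master/backup/agent), and health of this Host
esxcli vsan cluster get
# Disks this Host contributes, including whether each is a cache or capacity tier device
esxcli vsan storage list
```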
• VM Encryption
o Before you can use VM Encryption, there are a few pre-requisites you need to
take care of. First, make sure you have a Key Provider, which is essentially a
Key Management Server (KMS), set up in your environment: in vCenter, select the
Configure tab and, under Security, select Key Providers. I discuss the 3 types
of Key Providers below in Sub-Objective 1.8.2. Next, you must enable
all ESXi Hosts for Encryption Mode. To do so, select a Host > Configure tab, then
under the System section click Security Profile. Scroll down to Host Encryption
Mode and click the Edit button
After you’ve enabled your ESXi Hosts for Encryption Mode, you then need to
create a "Host-Based" encryption Storage Policy. I covered Storage Policies
earlier, but a screenshot review is shown below
o After you have those few things configured, all you need to do is create a VM
and enable it for encryption. During the create VM wizard, when selecting your
storage, choose the option to 'Encrypt this virtual machine'
o To encrypt a current VM, you must first power it off, then follow the procedure
noted in Sub-Objective 7.12.1
1.8.2 – Describe the Role of Key Management Services (KMS) Server in vSphere
1.9.1 – Recognize Use Cases for a Virtual Trusted Platform Module (vTPM)
1.9.2 – Differentiate Between Basic Input/Output Systems (BIOS) and Unified Extensible
Firmware Interface (UEFI) Firmware
• VMware is more than likely referring to Legacy BIOS vs EFI settings. I think the best way
to tackle this topic is to first describe BIOS/UEFI differences, then discuss why VMware is
moving away from Legacy BIOS, although I think it'll become obvious as to why as I
provide more information here:
o BIOS, and for simplicity's sake I'll just be referring to physical devices initially,
resides on an EPROM (Erasable Programmable Read-Only Memory) chip. It
provides 'helper functions' which allow reading boot sectors from
attached storage and printing items to the screen. A function it
performs you may be aware of is the Power-On Self Test (POST). After it runs this
test, BIOS initializes remaining hardware, detects connected peripherals, and
verifies they're all healthy. BIOS can only run on a 16-bit processor, limiting the
speed at which it can perform its boot functions. After these functions are
completed, BIOS code searches storage devices for the Boot Loader (typically in
the Master Boot Record, in the first 512 bytes of the first sector) and when found,
the firmware passes control of the computer over to it
o UEFI performs mostly the same functions as BIOS, but instead of storing info in
the firmware, it stores info in a .efi file on a special EFI System Partition
(ESP). The ESP, which resides on a GUID Partition Table (GPT) disk, also contains
the Boot Loader. And lastly, UEFI runs in 32-bit or 64-bit mode, as opposed to
16-bit, making it more performant
• So, with this basic information, I'll share some limitations of BIOS vs UEFI:
o Running only in 16-bit mode, BIOS typically doesn't allow mouse input
when manipulating BIOS settings. You have to tab through settings. Also, the
bootup process takes significantly longer than UEFI
o Because it uses MBR, drive sizes cannot be larger than around 2TB
o UEFI, since it uses higher bit modes, provides a better UI, with mouse &
keyboard navigation, and provides significantly faster boot times
o UEFI uses GUID Partition Table (GPT), allowing disk sizes larger than 2TB
o UEFI provides a lot more functionality within it which you can choose to enable
or disable based on your computing needs. Probably the most intriguing feature
within UEFI, and VMware's most compelling reason to transition from Legacy
BIOS to UEFI, is the ability to perform Secure Boot, which prohibits a computer
from running unauthorized and unsigned applications, reducing the attack vector
by those wishing to invoke malicious intent on a system
o With regards to VMware, some other features which require UEFI are:
▪ Persistent memory
▪ Intel Software Guard Extensions (vSGX)
▪ TPM 2.0
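Inside a Linux guest (or on any Linux machine), you can check which firmware type the system booted with using a quick shell test, since UEFI-booted systems expose /sys/firmware/efi:

```shell
# /sys/firmware/efi only exists when the system booted via UEFI
if [ -d /sys/firmware/efi ]; then
  firmware="UEFI"
else
  firmware="Legacy BIOS"
fi
echo "Firmware type: $firmware"
```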
• Identity Federation
o Since vSphere 7, vCenter Server supports Identity Federation for authentication,
allowing you to configure an external identity provider to interact with the
identity source on behalf of vCenter Server. What happens is when a user logs in,
the credentials are redirected by vCenter to the external identity provider. User
credentials are no longer provided to vCenter Server directly. Instead, the user
provides their credentials to the external identity provider and vCenter just
trusts this external provider to perform the authentication. Why is this
beneficial? Credentials are never handled by vCenter. And, you can use other
security measures external providers offer such as Multi-Factor Authentication
(MFA), which improves security. Configuring Identity Federation will be discussed
later in Sub-Objective 4.4.1
1.13.1 – Identify the Use Case for a Supervisor Cluster and Supervisor Namespace
• Clusters enabled with vSphere with Tanzu are called Supervisors. You can select a 3-
Zone deployment, enabling Supervisor on 3 vSphere Clusters to have Cluster-level HA,
or a one-to-one mapping between a vSphere Cluster and Supervisor, to have ESXi Host-
level HA. Supervisors are the base of the vSphere with Tanzu platform, providing
necessary components and resources for running workloads such as vSphere Pods, VMs,
and Tanzu Kubernetes Grid (TKG) Clusters
• vSphere Namespace sets the boundaries where vSphere Pods, VMs, and TKG Clusters
can run. Upon initial creation, they have unlimited resources (CPU, Memory, Storage).
But vSphere Admins generally create limits for those resources, as well as limits for how
many Kubernetes objects can run in the Namespace. This is done by a vSphere Resource
Pool being created per Namespace
• vSphere Zones are mapped to vSphere Clusters to provide High Availability (HA) against
Cluster-level failure to workloads deployed on vSphere with Tanzu. You then are able to
create a Supervisor, mapped to 1 or 3 of these Zones
1.13.3 – Identify the Use Case for a VMware Tanzu Kubernetes Grid (TKG) Cluster
I continued to use the referenced Resources above for this section, but the resources are
officially discontinued by VMware. Though discontinued, I believe VMware's overall
concept of what a SDDC is has not changed. How they go about implementing it has, with
the introduction of VMware vSphere+, vSphere with Tanzu, and their monitoring suite of
products getting an overhaul and rebranding to VMware Aria Suite
Above the physical hardware layer is where the SDDC lies, with vSphere the
foundational layer, then storage, network, ancillary services (FT, vReplication),
Containers, Cloud Services, etc
• VMware vSphere+
o What it is – vSphere+ is a multi-cloud workload platform, delivering benefits of
the Cloud to on-prem workloads; combining vSphere virtualization technology, a
Kubernetes environment, and high-value Cloud services to transform existing on-
prem deployments into SaaS-enabled infrastructure.
o vSphere+ Components
▪ Cloud Console – single management plane to centralize tasks normally
performed across multiple vCenter instances
▪ Admin Services –
– vCenter Lifecycle Management service – for on-prem vCenter updates
– Global inventory service – visualize resources across entire vSphere+
estate
– Event view service – for alerts and events across entire vSphere+ estate
– Security health check service – monitor security posture and highlight
security exposures across entire vSphere+ estate
– Provision VM service – create VMs from Cloud Console instead of
vCenter
• DR Use-Cases
o I think this goes without saying, but DR Use-Cases are obvious – a DC or VI Admin
should have a DR plan which has been tested (important!) and is in place in the
event of any kind of mishap, misconfiguration, or disaster resulting in taking
2.4.2 – Identify Use Cases for VMware Site Recovery Manager (SRM)
• To configure SSO, you are prompted during the VCSA Stage 2 installation process to
create a new SSO domain or join an existing SSO Domain. For a new SSO domain, provide
the information shown in the screenshot below
…then, power on the vCenter you want to repoint and execute the commands
shown above
• NOTE: VMware added quite a few new SSO Groups, about half of which shouldn't be
modified. I recommend reviewing the group list in the Authentication Guide, on pp. 125-
126
• Configure a VDS
o To configure a new Distributed vSwitch, click on a Datacenter (DC) Inventory
object > Actions link, or rt-click on the DC object > Distributed Switch > New
Distributed Switch
▪ Advanced settings:
• Configure a VSS
o To configure a Standard vSwitch, click on a Host in your virtual inventory >
Configure tab > Networking section > Virtual Switches
From here you can click the ellipsis on the Physical Adapter box to view
Discovery Protocol (CDP only for vSS) information of the physical nic and the
hardware (switch) it’s connected to. You can click the ellipsis on the left Port
Group (PG) box to either view the settings, edit them, or remove the PG
altogether. Most settings exist on both the vSwitch & PG. The advantage of
configuring settings on the vSwitch is settings will propagate to the PGs, if your
settings are applicable for PGs as well. If not, no worries, you can override
vSwitch-level settings at the PG layer. The only differences between vSwitch & PG
settings are the ability to configure VLANs, which is done at the PG layer, and MTU
size, which is configured at the vSwitch level
▪ Teaming and failover – load balancing and failure detection policies for a
given PG, or at VSS-level. From the Edit Settings option on a VSS or PG,
you can modify the settings to meet your environment’s needs. Note the
Override option next to each configurable item below. Override is
available within Security and Traffic Shaping policies as well
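The same VSS and Port Group configuration can also be scripted from the ESXi shell; the switch name, uplink, Port Group, and VLAN ID below are illustrative:

```shell
# Create a standard vSwitch and attach a physical uplink
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
# Create a VM Port Group and tag it with VLAN 20
esxcli network vswitch standard portgroup add --portgroup-name=PG-VM --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=PG-VM --vlan-id=20
```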
• Identity Federation
o Currently, vCenter only supports 1 external identity provider – AD FS – and 1
identity source – vsphere.local. Before I go over configuration of federation, first
here are some requirements:
▪ AD FS for Windows 2016 or later is deployed, and connected to AD
▪ AD FS Application Group for vCenter created in AD, as noted here
▪ If using a local CA, upload the root cert to the Trust Root Store in vCenter
▪ A vCenter Server Admin Group created in AD FS
▪ vCenter Server 7.0 or later
Everything marked with an asterisk (*) is required. The data format requirements
for each parameter are shown in the screenshot below. Not pictured is the LDAP
server URL format, which needs to be in the ldap://hostname:port format
(or ldaps:// for secure LDAP)
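To sanity-check the ldap://hostname:port format mentioned above, a quick sketch using Python's standard urllib (the hostnames here are placeholders, not real servers):

```python
from urllib.parse import urlparse

def validate_ldap_url(url: str) -> bool:
    """Return True if the URL matches the ldap://hostname:port (or ldaps) format."""
    parsed = urlparse(url)
    return (
        parsed.scheme in ("ldap", "ldaps")
        and bool(parsed.hostname)
        and parsed.port is not None
    )

print(validate_ldap_url("ldap://dc01.corp.local:389"))   # True
print(validate_ldap_url("ldaps://dc01.corp.local:636"))  # True
print(validate_ldap_url("dc01.corp.local"))              # False: missing scheme/port
```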
• Deploy VCSA
o There are two ways to deploy the VCSA – GUI and CLI. I will only be covering the
GUI process. If you're a CLI nut, you can review the deployment process in the
vCenter Install Guide. Before VCSA deployment, make sure the system you deploy
from and the target system (vCenter or ESXi Host) are supported. I covered
requirements in Objective 1.1, so review those before proceeding. Also verify time
is synchronized among all systems and verify a DNS record for the VCSA is created
o After you've verified all the pre-requisites are met, download the VCSA ISO from
VMware and mount it on a supported system. Run the installer file from the
mounted VCSA installer directory, e.g. vcsa-ui-installer/mac/installer.app,
then select the Install option.
NOTE: for Mac users, on newer macOS versions, you may run into an error upon
opening the installer.app file. Use this VMware KB to be able to open the file and
begin installation
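For the CLI route mentioned above, deployment is driven by a JSON template consumed by the vcsa-deploy utility on the ISO. A minimal sketch of building such a template follows; the field names mirror the sample templates shipped on the ISO (under vcsa-cli-installer/templates/install), but verify them against your ISO version, and every value below is a placeholder:

```python
import json

# Sketch of a vcsa-deploy install template. All hostnames, IPs, and passwords
# are placeholders; field names should be checked against the sample templates
# on your VCSA ISO, as they can change between versions.
template = {
    "__version": "2.13.0",  # template schema version, per the ISO samples
    "new_vcsa": {
        "esxi": {
            "hostname": "esxi01.corp.local",   # target Host (placeholder)
            "username": "root",
            "password": "CHANGE-ME",
            "deployment_network": "VM Network",
            "datastore": "datastore1",
        },
        "appliance": {
            "deployment_option": "small",       # sizing, cf. the Stage 1 chart
            "name": "vcsa01",
            "thin_disk_mode": True,
        },
        "network": {
            "ip_family": "ipv4",
            "mode": "static",
            "ip": "192.168.1.10",
            "prefix": "24",
            "gateway": "192.168.1.1",
            "dns_servers": ["192.168.1.2"],
            "system_name": "vcsa01.corp.local", # must resolve in DNS
        },
        "os": {"password": "CHANGE-ME", "ssh_enable": True},
        "sso": {"password": "CHANGE-ME", "domain_name": "vsphere.local"},
    },
    "ceip": {"settings": {"ceip_enabled": False}},
}

with open("vcsa_install.json", "w") as fh:
    json.dump(template, fh, indent=2)

# The CLI installer would then consume the file, e.g.:
#   vcsa-deploy install --accept-eula --precheck-only vcsa_install.json
```

Running with --precheck-only first validates the template and environment without deploying anything.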
If you have an external PSC, you will need to use the converge utility to embed it
into the VCSA, as external PSCs are no longer supported (see KB here), then fill in
the information you're prompted for
NOTE: for the target you're installing the VCSA on, if you choose a Host and it's in
a DRS Cluster, VMware recommends choosing the target vCenter Server instead, or
changing DRS to manual automation, at least temporarily, until the install is
completed. The user connecting to the target must either be root or an
administrator of the system. Review the VCSA root user password requirements by
clicking the "i" (info) link
Choose the Deployment Size and Storage; the Storage column in the chart will
change based on your selections
• Configure VCSA
o Once the install completes, you are ready for Stage 2…
…and are then prompted for a few configuration items – Time Sync, SSO Domain
Join selection, CEIP, then completion
You can enable SSH, BASH, etc. from the Access section
Configure Time Sync with the Host the VCSA resides on, or via NTP server, from
the Time section
And the area I really find useful is Services. You can Start, Stop, or Restart any
vCenter Server service from here
Configuration data is required for a backup; stats, events, & tasks are optional
And lastly, you can redirect logging to a Syslog Server if desired, using 1 of the
supported protocols shown below (3 syslog servers max are allowed)
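The forwarding above is configured in the VAMI, but the wire format is plain syslog; a minimal sketch of what an RFC 5424 message looks like on the wire (facility, hostname, and app values are illustrative, and the send is shown only as a comment):

```python
import socket  # used only in the commented-out send example below

def syslog_message(facility: int, severity: int, host: str, app: str, msg: str) -> bytes:
    """Compose a minimal RFC 5424 syslog message; PRI = facility * 8 + severity.
    Timestamp and other header fields are left as the nil value '-'."""
    pri = facility * 8 + severity
    return f"<{pri}>1 - {host} {app} - - - {msg}".encode()

packet = syslog_message(facility=1, severity=6, host="vcsa01", app="test", msg="hello")
print(packet)  # b'<14>1 - vcsa01 test - - - hello'

# Sending over UDP/514 to a syslog server would look like (not executed here):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("syslog.corp.local", 514))
```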
Objective 4.6: Create and Configure vSphere High Availability (HA) and DRS
Advanced Options (Admission Control, Proactive HA, Etc.)
Resource: vSphere 8.0 Availability Guide (2022), pp. 31-44
o After vSphere HA has been enabled, you can then configure features and
options within the Cluster. I can’t cover every nook and cranny vSphere HA offers
in this Study Guide, but I recommend you view all the HA Cluster options and
make the decision for each which best meets your organization’s needs and
requirements. First, vSphere HA not only helps make VMs highly available, but
can also protect against potential VM "split-brain" scenarios in which a VM may
be running on an isolated ESXi Host, but the primary Host cannot detect the VM
because the Host the VM is on is isolated. As such, the primary Host will try to
restart the VM. The restart succeeds if the isolated Host no longer has access to
the Datastore on which the VM resides. This causes a VM split-brain because
now 2 instances of the same VM are running. The vSphere HA setting used to
mitigate this scenario is called VM Component Protection (VMCP), and is a
combination of two vSphere HA settings:
o One of the most important HA configurations you need to decide upon which
best suits your environment is the concept of Admission Control. vSphere HA
Admission Control ensures a HA-enabled Cluster has enough reserved resources
available for VM restarts in the event of 1 or more ESXi Host failures.
– Slot policy is where a specified number of ESXi Hosts can fail and sufficient
resources will still be available to restart all VMs from the failed Hosts. This policy
is determined by:
Calculating slot size, a logical representation of CPU and memory sized to satisfy
the requirements of any powered-on VM. The CPU slot size is the largest CPU
reservation of any powered-on VM (or 32MHz if none is configured); the memory
slot size is the largest memory reservation (plus overhead)
Determining how many slots each Cluster Host holds, by dividing each Host's CPU
and memory capacity by the respective slot size and taking the smaller result. The
Host with the largest number of slots is "removed" (to account for the configured
Host failures) and the remaining Hosts' slots are added
Current failover capacity is the number of ESXi Hosts which can fail and still leave
enough slots for all powered-on VMs
If the current failover capacity is less than the configured failover capacity,
operations such as powering on VMs are not permitted
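The slot-policy arithmetic above can be sketched as follows (all VM reservations and Host capacities are made-up numbers, and real slot sizes also include memory overhead):

```python
# Sketch of vSphere HA slot-policy admission control with illustrative figures.
vm_reservations = [  # (cpu_mhz, mem_mb) per powered-on VM; 0 = no reservation
    (500, 1024), (0, 2048), (1000, 0),
]
hosts = [  # (cpu_mhz, mem_mb) usable capacity per Host
    (8000, 16384), (8000, 16384), (4000, 8192),
]
host_failures_to_tolerate = 1

# 1. Slot size: largest CPU reservation (32 MHz default) and largest memory
#    reservation among powered-on VMs
cpu_slot = max(max(cpu for cpu, _ in vm_reservations), 32)
mem_slot = max(mem for _, mem in vm_reservations)

# 2. Slots per Host: capacity divided by slot size, smaller result wins
slots_per_host = sorted(
    min(cpu // cpu_slot, mem // mem_slot) for cpu, mem in hosts
)

# 3. Drop the largest Host(s) to cover the configured failures, then compare
#    the remaining slots against the powered-on VM count
remaining_slots = sum(slots_per_host[:-host_failures_to_tolerate])
required_slots = len(vm_reservations)

print(f"slot size: {cpu_slot} MHz / {mem_slot} MB")      # 1000 MHz / 2048 MB
print(f"slots per host: {slots_per_host}")               # [4, 8, 8]
print("power-on allowed:", remaining_slots >= required_slots)
```

With these numbers, one 8-slot Host is removed, leaving 12 slots for 3 VMs, so power-ons are permitted.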
– Dedicated failover hosts is the easiest of all the admission control policies. If
this policy is selected, one or more ESXi Hosts are set aside, not used for running
VMs during normal operations, and used solely in the event of a Host failure. If a
failover Host cannot be used, for example there aren't enough Host resources
available, then other available Cluster Hosts are used. Note: VM-VM affinity rules
are not enforced on failover Hosts
Regardless of the Admission Control policy used, be aware of one other setting
you can configure – Performance degradation VMs tolerate. View the image below
for an explanation. Note: DRS must be enabled for this option to be available
Objective 4.7: Deploy and Configure vCenter Server High Availability (HA)
Resource: vSphere 8.0 Availability Guide (2022), pp. 72-77
• vCenter Server HA
o vCenter HA protects the vCenter Server Appliance (VCSA) against Host and
hardware failure, as well as reducing downtime due to VCSA patching. After
configuring your network for vCenter HA, you implement a 3-node cluster
comprised of Active, Passive, and Witness nodes:
▪ The Active node
– Uses the active vCenter
– Contains the public IP
– Uses the vCenter HA network to replicate data to the Passive node
– Uses the vCenter HA network to communicate with the Witness node
▪ The Passive node
– Is initially a clone of the Active node
– Receives updates from and synchronizes state with the Active node
– Automatically takes over the Active node role in the event of a failure
▪ The Witness node is solely a lightweight clone of the Active node
protecting against a split-brain scenario
• vCenter HA Implementations
o You can implement VCSA HA in one of two ways
▪ Basic configuration requires one of two VCSA environment setups to
implement: the VCSA manages its own ESXi Host and VM (i.e. a self-
managed vCenter), or the VCSA is managed by another vCenter Server
(a management vCenter) and both are in the same SSO domain (i.e. v6.5).
• Configure vCenter HA
o To configure vCenter HA, click on vCenter Server in the vSphere Client >
Configure tab, then select vCenter HA under the Settings section. Choose the
desired deployment type – Basic, or Advanced by deselecting the option to
Automatically create clones for Passive and Witness nodes – then follow the
workflow described above
o If you need to shut down the Cluster nodes for any reason, they must be shut
down in a specific order:
– Passive node
– Active node
– Witness
* vCenter HA Nodes can then be started up in any order
o Lastly, you can back up the Active node for added protection against failure.
Keep in mind, before restoring the Active node, if secondary nodes are still
present, you need to remove them as not doing so may result in unpredictable
behavior
First, let me share what a Content Library is, then I will move on with Content Library
configuration details. A Content Library is a container object for VM and vApp templates, as
well as files such as ISOs, images, and txt files. Using templates, you can deploy VMs or
vApps in the vSphere inventory. Doing so across vCenter Server instances in the same or
different location results in consistency, compliance, efficiency, and automation in
deploying workloads at scale. Content Libraries stores and manages content in the form of
library items. A single item can contain one or multiple files. As an example, an OVF consists
of .ovf, .vmdk, and .mf files. When uploading an OVF template, all files are uploaded, but
you only see one library item in the Content Library UI. Starting with vSphere 7U1, Content
Libraries not only support the OVF template format (sole format previously), but now also
support VM templates (NOTE: vApp templates are still converted to OVF templates).
Content Libraries are managed from a single vCenter Server, but you can share them with
other vCenters if HTTPS traffic is allowed between the instances. There are two
types of Content Library:
Local Library is used to store items in a single vCenter Server. You can publish them so other
vCenter Server users can subscribe to them, as well as configure a password for
authentication to the Library. A caveat is if you have a VM template in the Local Library, you
cannot publish it
Subscribed Library is a library you create when you subscribe to a published library.
The Subscribed Library can be created in the same vCenter Server as the published
library, or in a remote one.
When creating a Subscribed Library, you can choose to immediately download the full
contents of the library upon creation, or download only the published library metadata and
later download the full content of the library. Keep in mind, when using a Subscribed
Library, you can only use the library content, not add items to it.
o Choose the Library type you want to create, and optionally publish it and/or
enable password authentication to the Library
o To make sure you have the latest Library content if using a Subscribed Library,
you can initiate a manual synchronization by rt-clicking the Library >
Synchronize. Make sure you have the Content library.Sync subscribed library
privilege
• To Publish a Local Content Library, rt-click the Content Library > Edit Settings, then
check the Enable Publishing box and, if desired, enable password authentication
Take note (or Copy) of the Subscription URL as this will be needed when wanting to
create a Subscribed Content Library
• To create a VM from Content Library content (i.e. Template), click the Content Library
name to open the Library. Then, select the VM Templates tab, rt-click a Template and
select New VM from this Template. Configure information as you're prompted in the
wizard
• When managing VM Templates in a Content Library, you perform a 'Check Out VM from
VM Template' operation to make changes to the Template. This is needed if, for
example, you need to perform guest OS or any OS application updates, or make any
other configuration changes. This process is analogous to the 'Convert to Virtual
Machine…' operation on regular vCenter Server Template objects. When you are done
making changes, you perform a 'Check in VM' operation; again, analogous to a
'Convert to Template' operation
supported protocols – FTP, FTPS, HTTP, HTTPS, SFTP, NFS, or SMB to transfer a
backup of your vCenter Server config files. Then, using the vCenter Server GUI
Installer, you can perform a restore – view the deploy VCSA UI screenshot I
shared in Objective 4.5 above.
What exactly is backed up? "Inventory and configuration" information such as:
▪ Configurations – VM resource settings, DRS configuration/rules, Resource
Pool hierarchy & settings
▪ Storage DRS – Datastore Cluster configuration/membership, SIOC
▪ vSphere HA, with potential caveats
▪ Review pp. 51-55 for further details, limitations, potential inconsistencies
▪ You can optionally also select to backup 'Stats and Data' (see screenshot
below)
• vSphere Trust Authority (vTA) was discussed in more detail in Sub-Objective 1.8.1 above.
Review this for a refresher. Use the following procedures to configure a vTA:
Pre-requisites –
o Ensure ESXi Hosts used in a Trust Authority or Trusted Cluster have TPM 2.0
o Gather ESXi Hosts TPM Endorsement Key (EK) Certificates
o ESXi Hosts must have UEFI Secure Boot enabled
o Must be using vCenter Server 7.0+
o A dedicated vCenter Server for the Trust Authority Cluster and ESXi Hosts
o A separate vCenter Server for the Trusted Cluster and Trusted ESXi Hosts; NOTE:
only 1 Trusted Cluster allowed per vCenter
o A key server (KMS) already in place
o VMs must be configured for EFI firmware and Secure Boot
o Privileges listed on pg. 204
o Note the following unsupported vSphere features:
▪ vSAN encryption
▪ First Class Disk (FCD)
▪ vSphere Replication
▪ Host Profiles
• The specific details on how to configure vTA are quite lengthy. As such, I'll only cover the
high-level steps mentioned in the Security Guide, on pp. 257-258. Please review those
pages and subsequent pages 258-292 for explicit details on the implementation and
configuration process. Also, be familiar with vTA terms on pp. 241-242.
Configuration –
o Set up your workstation to configure the vTA
o Enable the Trust Authority Admin – assign users to the TrustedAdmins group
o Enable the Trust Authority State – make a vCenter Server Cluster into a Trusted
Authority Cluster
o Collect information about vCenter Server & ESXi Hosts to be trusted (export as
files)
o Import the Trusted Host information into the Trust Authority Cluster
o Create the Key Provider on the Trust Authority Cluster
o Export the Trust Authority Cluster information – this will be used later to import
into a Trusted Cluster
o Import the Trust Authority Information to the Trusted Hosts
o Configure the Trusted Key Provider for the Trusted Hosts (Client or CLI).
From the vSphere Client, select vCenter > Configure tab > under Security section
select Key Providers. Click to Add Trusted Key Providers. My screenshot shows
Add Standard Providers because I haven’t configured my vCenter and Cluster as
a vTA
NOTE: New with vSphere 8, you can only generate a CSR with a minimum 3072-bit
key length, but custom certificates with 2048-bit keys are still accepted
o Click the filter icon (hourglass icon in the Name column), and type in:
vpxd.certmgmt and you can then view all cert parameters. Most are used for
ESXi Host CSRs to the VMCA (VMware Certificate Authority, i.e. vCenter). For the
mode parameter, the default is vmca. There are 3 possible modes: vmca ,
custom (for using your own CA), and thumbprint , used for legacy 5.5/6.0
vSphere versions and now deprecated.
o To provision ESXi Hosts with a new VMCA certificate, you can do one of a few
things – disconnect/reconnect the Host from/to vCenter, or click on a Host in the
Client > Configure > under System, select Certificate. Then click the Renew
Certificate link. Or simply rt-click on a Host > Certificates
Before going forward with this Mode, you must first create a cert template to
use for the Machine-SSL cert. VMware has an amazing walkthrough created
during the vSphere 6.5 release by Adam Eckerle. It still applies for vSphere 8,
except you no longer connect to a PSC but instead perform the tasks solely
against vCenter Server. You can view the walkthrough here; other info can also
be found in VMware KB 2112009 . Additionally, you will also need the Root Cert
of the CA you’re using, and any Subordinate CA certs, if implemented (Microsoft
CA recommends using Sub-CAs) for chaining certs, as noted in the workflow
o You then basically follow the prompted workflow, the walkthrough, and KB if
needed. After replacing the Machine-SSL cert, you should be able to connect to
vCenter with the vSphere Client and the cert be trusted. NOTE: ESXi Host certs
will not be trusted, unless you add the VMCA as a Trusted Root in your local CA
store
• The Enterprise Public Key Infrastructure (PKI) role is dependent on the business. How a
company manages its PKI will depend on several factors – auditing, security practices
and requirements, compliance (i.e. SOX for Finance or HIPAA for Healthcare), budget
resources, FTE expertise, etc. Larger enterprises tend to have a more robust PKI, utilizing
external CAs to manage certs for most of their environment, and thus tend to configure
their virtual infrastructure using the Custom method; whereas smaller entities are
content to use VMCA or Hybrid methods for their environment. So, what role does
Enterprise PKIs play for SSL certs? Not much if a business doesn’t have an in-house
infrastructure; or, semi-extensive if using an external CA (needing to generate CSRs for
infrastructure cert requests, etc.)
You are then presented with a screen of several tabs – Image Depot, Updates,
Imported ISOs, Baselines, and Settings. Go to the Settings tab to review the
information. Under the Administration section, you can modify the Patch
Downloads schedule by changing the time, and even disable it by unchecking the
'download patches' checkbox in the Edit screen, but you cannot delete it. For
Patch Setup, you can disable the pre-defined URLs, but not delete them. You can
create a custom download source of 3rd party vendors by clicking the New
button.
Edit the Settings for Host Remediation for Baselines and Images (pretty much
same info for each), as well as VMs
After you’re satisfied with the vLCM settings, you can go to the Baselines section
and create Baselines, or use any of the 3 pre-defined ones. You can’t delete
them, but if they contain most of the settings you want, you can make a copy of
them so you don’t have to create a new one from scratch. Just give it a new
name upon creation. If there is a specific ESXi version you need to deploy, the
Imported ISOs section is where you upload its ISO. To use it, you have to create an
Upgrade Baseline type (not a 'patch' or 'extension').
Once you’ve created all the foundational settings, you’re ready to start patching.
To begin, go to your vSphere Inventory and click on a container (Datacenter or
Cluster) or individual Host, then select Updates tab. Scroll down to attach any of
your Baselines you created, or pre-defined ones. After you attach all you want,
check the container for Compliance
After you’ve checked for Compliance, scroll down to the Attach Baselines section
& choose to Remediate. vLCM will perform remediation according to the settings
you configured – place the Host in maintenance mode, migrate VMs, & perform
a restart or Quick Boot, depending on what you set up. Hosts will remediate in
succession, unless you configured to do so in parallel. To remediate groups of
VMs, go into VMs & Templates view, click a VM Folder container > Updates tab,
& perform a VM Tools and/or Hardware Upgrade to match the Host operation.
o The only remaining item to cover is Images. To use vLCM Images, click on a
Cluster > Updates, then select Images under the Hosts section and go through
the process to set up the Images method. As mentioned earlier, if you choose
to use this patching method, you cannot go back to Baselines. The Baselines
option will be removed from the Host section list
Click on any item in the list, then Edit to change the networking settings for the
given Stack
o Profile creation: Menu drop-down in vSphere Client > Policies and Profiles >
Host Profiles
o From the main menu, click the Extract Host Profile link, choose a Host from the
list, give it a name, then click Finish
Once extracted, the Profile will appear in the Profile list as a link. Click it to view
it. From here, if you need to, you can click to edit its settings. Scroll through the
Levels and add (click green "+") or remove (red "x") items as needed, add
parameters, etc.
After all is good, click Save, then from the Actions menu, choose the
Attach/Detach Hosts and Clusters option. Clicking the Clusters or Hosts objects
in the left pane of the Profile main window, you can verify the Profile is attached
o Either from the Action menu (shown above) or by rt-clicking the Host Profile,
select to Check Host Profile Compliance. If any objects are not compliant, simply
apply the Profile by choosing Remediate. Before remediation, I recommend
doing a 'pre-check' to see what exactly will be changed on the Host or Cluster
o For scripted Boot options, at the ESXi bootloader screen, press SHIFT+O
For a full list of ESXi Boot options, refer to pp. 80-82 in the Install Guide. I
recommend being familiar with the parameters and the format they need to be
in.
• First, Quick Boot is simply ESXi Hosts "restarting" only the vSphere software. A full
hardware (cold) restart is not performed, so remediation process times are significantly
reduced. Another beauty of Quick Boot is there’s nothing on the hardware side you
really need to configure. Server vendors have hardware configured for this out of the
box. All you need to do is make sure you are running vSphere on Quick Boot-supported
hardware. Check the KB referenced at the beginning of this Objective for more info.
That being said, to make sure Quick Boot is used during Lifecycle Management tasks, as
noted in the vLCM Objective above, the Remediation settings in vLCM need to have
Quick Boot enabled. The option is enabled by default, but do verify it
Objective 4.18: Deploy and Configure Clusters Using the vSphere Cluster
Quickstart Workflow
Resource: vCenter Server 8.0 Host and Management Guide (2022), pp. 45-53
• Cluster Quickstart is a workflow within the vSphere Client you can use to group common
tasks and offers configuration wizards which guide you through the process of
configuring and extending a vSphere Cluster. To use Cluster Quickstart workflow, ESXi
Hosts must be 6.7U2 or later. And, you cannot assign licenses to ESXi Hosts in the
Cluster via Quickstart. Licenses must be manually assigned.
• To configure a Cluster with Quickstart, you must first create a Cluster. To do so, rt-click
on a Datacenter object in the vCenter Inventory > New Cluster. Give the Cluster a name,
and enable any services you want – DRS, HA, vSAN. You can also optionally choose to
Manage all Hosts in the Cluster with a single image. This option refers to Cluster
Lifecycle Management. I touched on Images in Objective 1.6. After you create your
Cluster, you are directed to the Cluster Quickstart configuration location under
Configure tab > Configuration section > Quickstart
• To continue using the Quickstart workflow, click the Add button from the Add Hosts
card, and provide the Hostname and credentials for each ESXi Host you want to add to
the Cluster. Click Next, then OK to finish
• Continue with Quickstart Cluster configuration by clicking the Configure button from the
Configure Cluster card, and change any configuration options which meet your
organizational needs
• ESXi Services
o To configure any ESXi service, select an ESXi Host in the vSphere Client
Inventory > Configure tab > System section > Services, then select a service from
the list and choose an option: Restart, Start, Stop, Edit Startup Policy…
o You can also manage ESXi services directly from an ESXi Host. Log into the ESXi
Host Client, click Manage, then choose the Services tab. Select a service from the
list, then choose a management option
• Lock-Down Mode
o To enable Lock-Down Mode on ESXi Hosts, select a Host > Configure tab >
System section, then click Security Profile. In the Lockdown Mode section click
the Edit button
• ESXi Firewall
o To configure firewall services and their behavior, select an ESXi Host in the
vSphere Client Inventory > Configure tab > System section > Firewall. Click
either the Inbound or Outbound tab, then the Edit button. Select a Firewall
service and click the right arrow ( > ), to expand configuration options (Allow IP
list)
• As a reminder from the Tanzu discussion in Objective 1.13, you associate a Supervisor
with 1-3 vSphere Clusters, which are mapped to vSphere Zones. A 3-Zone
Supervisor provides failure protection at a Cluster level, whereas a 1-Zone Supervisor
provides failure protection only at an ESXi Host level. When making the decision which
deployment to use, keep in mind if you do go with the 1-Zone Supervisor, you can't
expand the Supervisor to a 3-Zone deployment later on
o Each Cluster must have at least 2 ESXi Hosts; or, 3 ESXi Hosts for Clusters using
vSAN
o Clusters must be using shared storage
o Enable HA on the Cluster
o Enable DRS with Fully-Automated or Partially-Automated mode; NOTE: disabling
DRS at any point on the Supervisor will break your Tanzu Kubernetes Clusters
o The account provisioning the Cluster must have the Modify cluster-wide
configuration permission
o Create Storage Policies to determine datastore placement of Supervisor control
plane VMs; and, Policies for containers and images if the Supervisor supports
vSphere Pods
o Configure a networking stack using either NSX or vDS with NSX Advanced Load
Balancer or HAProxy Load Balancer
o Create a Content Library to provide the system with distributions of Tanzu
Kubernetes releases in the form of OVA templates
o Create vSphere Zones; see Sub-Objective 4.20.3 below
• Create a Supervisor:
3-Zone Supervisor –
o Click Home menu > Workload Management
o Click the Add License button if you have a valid Tanzu License, or do a 60-day
trial
o Scroll down and click the Get Started button; NOTE: if you're evaluating, you
cannot continue without entering personal info
o On the vCenter Server and Network page, select the vCenter Server and choose
vDS as the networking stack, if not using NSX
o On the Supervisor Location page, select vSphere Zone Deployment and enter a
Supervisor name, choose the Datacenter object where the Zones are located,
then select the compatible/configured Zones
o On the Storage page, configure storage for placement of control plane VMs
o Configure Load Balancer settings
o For the Management Network, configure settings used for the network used for
Kubernetes control plane VMs -> DHCP or Static network mode
o For the Workload Network, configure network traffic settings used for
Kubernetes workloads running on the Supervisor -> DHCP or Static network
mode; NOTE: if using DHCP, you can't create new Workload networks after the
Supervisor is created
o On the Tanzu Kubernetes Grid page, click the Add button and select the
subscribed Content Library containing the VM images for deploying the Tanzu
Kubernetes Cluster nodes
o Review all your settings, then click Finish
Creating a 1-Zone Supervisor is virtually the same. To view the slight differences,
review pp. 98-113 in the Tanzu Install and Configure Guide
o Resource pools are logical abstractions for flexible management of CPU and
memory resources, grouped in hierarchies
o Each standalone ESXi Host & vSphere Cluster has an invisible root resource pool.
When adding an ESXi Host to a Cluster, any resource pool configured on the Host
will be lost, and the root resource pool will be a part of the Cluster
o RPs utilize Share, Reservation, and Limit constructs. I will define these terms and
give further explanation and examples of how to use & calculate them in Sub-
Objective 5.1.1 below:
o Resource pools behave in the following ways:
▪ A VM’s Reservation and Limit settings do not change when moving into a
new resource pool
▪ A VM’s % Shares adjust to reflect the total number of Shares in use in a
new resource pool, if a VM is configured with default High, Normal, Low
values. If Share values are custom, % Share is maintained
▪ Resource pools use Admission Control to determine if a VM can be
powered on or not. Admission Control determines the unreserved
resource capacity (CPU or memory) available in a resource pool
o A root resource pool can be segregated further with child resource pools, which
own a portion of the parent resource pool's resources
o A resource pool can contain child resource pools, VMs, or both
o Resource pools have the following hierarchical structure:
▪ Root resource pool is on a standalone ESXi Host or Cluster level
▪ Parent resource pool is at a higher level than sub-resource pools
▪ Sibling is a VM or resource pool at the same level
▪ Child resource pool is below a parent resource pool
• To create a RP, rt-click on a Host or Cluster > New Resource Pool, then configure the
desired metrics to meet your needs
• First, I’ll define each of the terms, then I’ll give a couple real-world workable samples to
further explain how Shares work
o Shares represent the relative importance of CPU and memory resources of
sibling powered-on VMs and resource pools, with respect to a parent resource
pool’s total resources, when resources are under contention.
Because Shares are relative and values change based on number of VMs in a
resource pool, you may have to change a VM’s Share value (see below) when
moving in or out of a resource pool
▪ High represents 4 times that of Low
▪ Normal represents 2 times that of Low; default RP setting
▪ Low represents value of 1
o Reservation represents a guaranteed amount of resource to a VM or resource
pool; default CPU and Memory Reservation is 0
o Expandable reservation is a setting allowing child resource pools to get more
compute resources from its direct parent RP when the child resource pool is
short of its configured resources; the setting is enabled by default
▪ Expandable reservation is recursive, meaning if the direct parent RP
doesn’t have enough resources available, it can ask the next higher
parent resource pool for resources
▪ If there are not enough resources available, a VM cannot power on
o Limits are upper bounds of a resource; default CPU & Memory value is Unlimited
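The expandable-reservation behavior described above can be modeled as a recursive walk up the pool tree; this is a simplified sketch with invented pool names and numbers (real admission control also accounts for per-VM overhead):

```python
# Toy model of resource-pool admission control with Expandable Reservation.
# A power-on reserving `needed` MB succeeds if the pool's unreserved capacity
# covers it, or -- if expandable -- a parent pool can supply the shortfall.
class ResourcePool:
    def __init__(self, name, reservation, expandable=True, parent=None):
        self.name = name
        self.reservation = reservation  # MB reserved for this pool
        self.reserved_by_children = 0   # MB already handed out below
        self.expandable = expandable
        self.parent = parent

    def can_reserve(self, needed: int) -> bool:
        unreserved = self.reservation - self.reserved_by_children
        if needed <= unreserved:
            return True
        if self.expandable and self.parent is not None:
            # ask the direct parent for the shortfall (recursive)
            return self.parent.can_reserve(needed - unreserved)
        return False

root = ResourcePool("root", reservation=8192)
child = ResourcePool("child", reservation=1024, expandable=True, parent=root)
root.reserved_by_children = child.reservation  # child's reservation comes from root

print(child.can_reserve(512))   # fits within the child's own reservation
print(child.can_reserve(2048))  # shortfall of 1024 borrowed from root
child.expandable = False
print(child.can_reserve(2048))  # now fails: no expansion allowed
```

Because the check recurses through `parent`, a grandchild pool can reach the root, matching the recursive behavior noted above; with expansion disabled, the VM simply cannot power on.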
• When figuring out Share values needed for VM workloads, it’s best to work from the
VM-level up to resource pools, instead of Cluster resource pool down. Below is a
resource pool example to further explain Shares:
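The Share mechanics above can be sketched with a quick calculation; the pool capacity and VM names are invented, and the 2000/1000/500 values are the common per-vCPU CPU share defaults for High/Normal/Low (verify against your vSphere version):

```python
# Sketch of proportional CPU allocation under contention using Shares.
# High:Normal:Low = 4:2:1, commonly expressed as 2000/1000/500 CPU shares.
SHARE_VALUES = {"High": 2000, "Normal": 1000, "Low": 500}

def allocate(pool_mhz: int, vms: dict) -> dict:
    """Split a contended pool among powered-on VMs by relative Shares."""
    total = sum(SHARE_VALUES[level] for level in vms.values())
    return {
        name: pool_mhz * SHARE_VALUES[level] // total
        for name, level in vms.items()
    }

vms = {"db01": "High", "app01": "Normal", "test01": "Low"}
print(allocate(7000, vms))  # {'db01': 4000, 'app01': 2000, 'test01': 1000}
```

Note the relativity called out above: adding a fourth VM to the pool changes every VM's slice, which is why Share values may need revisiting when VMs move in or out of a pool.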
▪ You can also click to get an overview of CPU, Memory, and other metrics
under the Performance (charts) section > Overview or Advanced options.
You can configure different parameters and options to fit the
performance metric you’re trying to monitor
▪ You can also log into an ESXi Host Client and get performance and
hardware health metrics
▪ And finally, the CLI method to view ESXi Host performance via SSH is to
use esxtop.
o VMs
▪ To monitor VM resources, you can click on a VM and view its Summary
tab, or a more holistic overview approach is to click on a vSphere
container object, like a Cluster or VM folder, then click the VMs tab. You
can view CPU and memory resources as needed. You can also use
esxtop on ESXi Hosts to view VM performance metrics
o Clusters
▪ And last, don't forget about Cluster resources… DRS! You can view the
health of your Cluster by browsing to the Monitor tab > vSphere DRS
section, then clicking on a status you want to view, like VM DRS Score or
CPU Utilization
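For the esxtop CLI mentioned above, batch mode (esxtop -b, redirected to a file) emits CSV you can post-process offline. A minimal parsing sketch follows; the column name here is a simplified stand-in, as real esxtop headers are much longer and hardware-specific:

```python
import csv
import io

# esxtop batch mode (esxtop -b -n <iterations> > perf.csv) writes one CSV row
# per sample. This inline sample is a simplified stand-in for real output;
# actual column names look like "\\host\Physical Cpu(_Total)\% Util Time".
sample = io.StringIO(
    '"Time","Physical Cpu(_Total)\\% Util Time"\n'
    '"10:00:00","25.5"\n'
    '"10:00:05","80.1"\n'
)
reader = csv.DictReader(sample)
cpu = [float(row['Physical Cpu(_Total)\\% Util Time']) for row in reader]
print(f"avg CPU util: {sum(cpu) / len(cpu):.1f}%")  # avg CPU util: 52.8%
```

The same approach works for memory or network counters: pick the column by header name and aggregate across samples.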
o Then, from the vSphere Client Networking view, select the VDS > Configure
tab, then under the Resource Allocation section choose System Traffic. Select the
System Traffic type you want to configure from the list, then click the Edit button.
Configure the network Shares, Reservation, and/or Limit for the system traffic
type, then click OK. Repeat for each System Traffic type you want to
configure.
o After you've configured the desired quota, click OK, then assign the Network
Resource Pool to the vDS Port Group to which the VMs you want the network
quota applied are attached
• Per-Datastore SIOC
o Before sharing the simple method of enabling SIOC, there are a few
requirements which must be met before you can enable it:
▪ SIOC Datastores must be managed by only 1 vCenter Server
▪ SIOC Datastores do not support RDM
▪ SIOC Datastores cannot have multiple extents
o To configure SIOC on a Datastore, simply rt-click on the Datastore > Configure
Storage I/O Control… then, configure the peak throughput % you want
o You can then configure SIOC Shares and IOPS Limits one of two ways – from the
VM > Edit Settings > expand the Hard Disk section and set both parameters
… or, the more appropriate way, especially for a group of VMs, via a Storage
Policy, which you then assign to the VMs
• Storage DRS
o When you create SDRS Clusters, SIOC is enabled on all Datastores you place in
the Cluster by default. To avoid errors when creating a SDRS Cluster, keep in
mind the requirements listed above for SIOC
• Snapshot Overview
o Per VMware’s KBs, a VM snapshot is a point-in-time reference of a VM’s state
(power-on/off/suspended) and data (files, disks, memory, network). Before
continuing, one important concept regarding snapshots is *they are not
backups*. A VM backup is a complete/whole new copy of a VM and all its files
(disks, configs, logs, etc). A snapshot is not a whole new copy of a VM. The
snapshot process simply creates a new VM disk, called a “delta” disk
( -delta.vmdk ), where all subsequent I/O writes are processed. This delta
disk can grow up to the full size of the original/parent disk, so you can see
the potential issue this presents. Snapshots are great for the following use-
cases:
▪ Testing OS patches
▪ Testing application updates/upgrades
▪ Development changes
You can take a snapshot, then perform any of the above tasks. If the task
turns out to be non-destructive to the VM, it is advisable to consolidate the
snapshot – merging all I/O changes into the parent disk, then removing the
corresponding delta disk.
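The delta-disk behavior described above can be sketched conceptually in Python (this models the idea only, not VMware's actual on-disk format): reads fall through to the parent disk unless a block was rewritten after the snapshot, and consolidation merges the delta back into the parent:

```python
# Conceptual model of a snapshot delta disk (not VMware code).
parent = {0: "A", 1: "B", 2: "C"}   # block -> data on the base .vmdk
delta = {}                          # the "-delta.vmdk" created at snapshot time

def write(block, data):
    delta[block] = data             # post-snapshot writes land in the delta disk

def read(block):
    # Changed blocks come from the delta; unchanged ones fall through to parent
    return delta.get(block, parent[block])

def consolidate():
    parent.update(delta)            # merge all changed blocks into the parent
    delta.clear()                   # the delta disk is then removed

write(1, "B2")
assert read(1) == "B2" and read(0) == "A"
consolidate()
assert parent[1] == "B2" and not delta
```

Note how the delta only holds blocks that changed, which is why it starts small but can eventually grow to the size of the parent.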
From Update Planner, you can click the Monitor Current Interoperability link,
which directs you to the Monitor tab > vCenter Server section >
Interoperability, where you'll see any issues with upgrading your appliance
o Pre-requisites for using Update Planner:
▪ Update Planner is intended for vCenter upgrades starting with v7; it does
not assist in moving from a version older than v7 since Update Planner is
a feature introduced with v7
▪ You must join the VMware Customer Experience Improvement Program
(CEIP) to use Update Planner
– VCSA must be connected to the Internet to use CEIP
– Update Planner queries VMware Product Interoperability Matrices
online and reports findings via the vSphere Client
Objective 5.9: Use vSphere Lifecycle Manager to Determine the Need for Updates
and Upgrades
Resource: Managing Host and Cluster Lifecycle Guide (2022)
• When using vLCM to update VMs, the update is for VMware Tools and/or Virtual
Hardware, not for the guest OS. Just wanted to make that clear if it wasn't already. To
upgrade either Tools or Hardware:
o First make sure all ESXi Hosts in a Cluster the VMs are in are updated to the
latest vSphere release
o Click on either a Cluster from Hosts and Clusters or a Folder from VMs and
Templates > Updates tab
o When updating Tools and Hardware, make sure to update Tools first. And, when
updating Tools, VMs must be powered on. If they are not, vLCM powers them
on. When updating Hardware, VMs must be powered off. If they are not, vLCM
powers them down.
o To update either Tools or Hardware, select VMs in the list, then click the
Upgrade to Match Host link
o Scroll down the VM list to configure Scheduling Options -> Power on, Power
off, or Suspend VMs
o Choose Rollback Options -> take a Snapshot, and whether to delete them or
how long to retain them, when the upgrade finishes
o When VMs are selected and settings configured, click the Upgrade to Match
Host button to begin the upgrade process
o vSphere version and its corresponding Virtual Hardware version to note:
VMware KB
vSphere Version    Virtual Hardware
ESXi 5.5           10
ESXi 6.0           11
ESXi 6.5           13
ESXi 6.7           14
ESXi 6.7 U2        15
ESXi 7.0           17
ESXi 7.0 U1        18
ESXi 7.0 U2        19
ESXi 8.0           20
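As a quick sketch of the "Upgrade to Match Host" decision, the table above can be expressed as a lookup (illustrative Python only; a subset of versions, and the function name is my own):

```python
# Default (latest) virtual hardware version per ESXi release, from the table above.
DEFAULT_HW = {"6.5": 13, "6.7": 14, "7.0": 17, "8.0": 20}

def needs_hw_upgrade(vm_hw_version, host_esxi_version):
    """A VM needs a hardware upgrade when its version is below the host's default."""
    return vm_hw_version < DEFAULT_HW[host_esxi_version]

assert needs_hw_upgrade(14, "8.0")      # vmx-14 VM on an ESXi 8 host
assert not needs_hw_upgrade(20, "8.0")  # already at the host's latest
```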
• VMware Skyline
o Skyline is a proactive self-service support technology available to customers
with an active Production Support or Premier Services contract. Skyline
automatically and securely collects, aggregates, and analyzes customer-specific
product usage data to proactively identify potential issues and improve
time-to-resolution.
o You'll have to accept a EULA (generally), and the update process can take
anywhere from 1 to 2 hours
Objective 5.13: Complete Lifecycle Activities for VMware vSphere With Tanzu
Resource: Maintaining vSphere 8.0 With Tanzu Guide (2023)
An ESXi Host stores all its logs in the /var/log directory. Each log has the
following function:
▪ VMkernel – vmkernel.log; logs related to ESXi and VMs
• Warnings – vmkwarning.log; logs VM-related activities
• Summary – vmksummary.log; ESXi uptime and availability
▪ ESXi Host agent – hostd.log; information about VM and ESXi
configuration and management services
▪ vCenter agent – vpxa.log; information about vCenter agent
▪ Shell – shell.log; records Shell commands and events
▪ vSphere HA agent – fdm.log
▪ Authentication – auth.log; contains local authentication info
▪ System messages – syslog.log; general log information used for
troubleshooting; was formerly messages.log
o Besides the local CLI-based log locations, you can log into the Host DCUI
(direct-console UI) to view several logs – Syslog, VMkernel, Config, vCenter
(vpxa), Management (hostd), or Observation (vobd). To view logs via the
DCUI, select Monitor, then choose the Logs tab
To generate a log bundle via CLI, simply SSH to a Host and type: vm-support
Once complete, review the completion output for the datastore log location
o Through the vSphere Client go to: vCenter > Monitor > Skyline Health to review
several categories of information to check on the health of your environment
• Remote Syslog
o Finally, if you configured a remote Syslog target for your VCSA, you can
review logs on your Syslog server. When configuring, you can only use the
TLS, TCP, UDP, & RELP protocols
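As an illustration of the protocol restriction above, here is a hedged Python sketch that validates a remote Syslog target URI against the four allowed protocols; the hostname and port below are made up, and the URI scheme format is an assumption for illustration:

```python
# Validate a syslog target URI against the protocols the guide lists.
from urllib.parse import urlsplit

ALLOWED_PROTOCOLS = {"tls", "tcp", "udp", "relp"}

def valid_syslog_target(uri):
    """True when the URI uses an allowed protocol and has a host and port."""
    parts = urlsplit(uri)
    return (parts.scheme in ALLOWED_PROTOCOLS
            and bool(parts.hostname)
            and parts.port is not None)

assert valid_syslog_target("tls://syslog.example.com:1514")
assert not valid_syslog_target("http://syslog.example.com:514")  # not allowed
```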
Or, as mentioned in the previous Objective, SSH to the ESXi Host and type:
vm-support , and take note of the directory the log bundle was saved to
o To generate a Log Bundle via vCenter Server, rt-click the vCenter Server object >
Export System logs option. You can then choose which ESXi Hosts to export logs
from. Make sure to scroll down to the bottom of the 'choose ESXi Hosts' window
so you can also select to Export vCenter Server logs if they are needed
On the next screen, you then have multiple system log options to choose from
by selecting or deselecting them, as well as whether to gather performance
data, and whether to encrypt Core Dumps with a password
• Create VM Snapshot
o I discussed Snapshot details a bit in Objective 5.7, so I won’t go over them again
here. I’ll just provide how to create a snapshot
o The easiest way to create a Snapshot is simply to rt-click a VM > Take a
Snapshot…
Or, you can select a VM in your Inventory and go to the Snapshot tab > Take a
Snapshot button. Provide a name and optional description, then click Create. I
suggest giving the snap a description, e.g.
pre-OS update snap - <initials>, <date & time> so you can
readily identify 1. what it was created for, 2. by whom, & 3. the date/time
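The naming suggestion above can be wrapped in a small helper; this is just my own illustration of the "what / who / when" convention, not anything VMware provides:

```python
# Helper generating a snapshot name in the "what - who, when" convention.
from datetime import datetime

def snapshot_name(purpose, initials, when=None):
    """Build a snapshot name identifying purpose, author initials, and time."""
    when = when or datetime.now()
    return f"{purpose} - {initials}, {when:%Y-%m-%d %H:%M}"

name = snapshot_name("pre-OS update snap", "SW", datetime(2023, 5, 1, 9, 30))
assert name == "pre-OS update snap - SW, 2023-05-01 09:30"
```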
• Managing Snapshots
o Managing Snapshots is quite easy as well. If you have several snapshots in a
"tree", you can click on any one in the tree to Revert the VM to the point-in-time
snap. All subsequent Snapshots will be retained, unless you choose to delete
them. If you delete them, all data associated with those Snapshots will be lost. If
you instead choose to consolidate all the Snapshots in a chain, all data in each
Snapshot delta disk will be merged with the parent disk, and then all Snapshot
files will be deleted
o There are a couple of options you should be aware of, as you can see in the
screenshot above – Include VM memory, and Quiescing. If you include VM
memory when snapshotting, you retain the power-state of the VM. With
quiescing, you can take a snapshot of a VM in a consistent state: VMware
Tools quiesces the guest OS to temporarily prevent I/O so the Snap is
consistent. Quiescing isn’t performed, and is greyed out, if memory is
selected. If neither memory nor quiescing is selected, the VM Snap will only
be in a crash-consistent state
You can then use a URL, or browse for .ova or .ovf/.vmdk files. Once chosen,
just follow the deploy wizard and provide the requested information –
VM name, licensing (if needed), storage, network, etc
• After you create your VM, you can modify its settings as needed, anywhere from CPU to
memory, or add hard disks, & other virtual devices. To modify any of a VM’s settings, rt-
click the VM > Edit Settings, or select the VM and choose Edit Settings from the Actions
menu. In the VM Options tab (2nd screenshot), you can select the guest OS version,
enable encryption, as well as BOOT options and several other items
• Other Configurations
o Aside from virtual hardware configurations, there are quite a few other VM
management tasks, such as:
▪ Per-VM EVC – select the VM > Configure tab > Settings section, then
choose VMware EVC and click the Edit button
▪ VM Latency Sensitivity – VM > Edit Settings > VM Options tab, then
expand the Advanced section and modify Latency Sensitivity (Normal,
High)
▪ CPU Affinity – VM > Edit Settings > Virtual Hardware tab, then expand
CPU and under Schedule Affinity select physical processor affinity; NOTE:
the VM must be powered off, and DRS must be off
▪ Guest OS install – a couple ways to do this is via a connected ISO from the
VM virtual CD device or PXE boot
▪ Guest Customization – configuring new VM deployments to utilize a
Customization Specification file to add the VM to the domain, use
sysprep, and prompt for vNic configuration. You create these from Menu
> Policies and Profiles section
▪ Storage Policies – from Policies and Profiles section, then assign to VMs
by rt-clicking VM > VM Policies > select Edit VM Storage Policies
▪ Upgrade VMware Tools – rt-click VM > Guest OS > Upgrade (or Install)
VMware Tools
• Create Datastores
o Although the Objective doesn’t say explicitly, I’ll first briefly share how to create
a Datastore, then I’ll discuss management tasks in the subsequent Sub-
Objectives
o To create a standard Datastore, you first configure a LUN on your array or NAS
storage system, configuring appropriate Host access using IQN/EUI, WWN, or
credentials. Once done, you rt-click on a Host or Cluster > Storage > New
Datastore…
Select the kind of storage you want to add – VMFS, NFS, or vVol, then click
Next. No Device is listed. Why? Did you perform a Rescan of your storage
adapters? Once you do a Rescan, your new LUN Device should be listed in the
New Datastore wizard Name & Device window (you will need to restart the New
Datastore wizard). Then just finish the wizard, providing the appropriate
selections depending on the storage type you’re adding – VMFS version and
whether to use the whole LUN; or NFS credentials for NAS devices, etc. Rescan
the storage adapters on all other Hosts you assigned to the LUN and the
Datastore will display automatically (you may need to refresh the vSphere
Client browser window)
• Datastore Modification
o To increase Datastore capacity, simply rt-click it > Increase Datastore
Capacity, or select the Datastore > Configure tab > General option, then
click the Increase button in the Capacity section, & follow the wizard. The
amount by which you can increase depends on the unused space left on the LUN
o In this same area, but in the Datastore Capabilities section, you can enable
Storage I/O Control by clicking the Edit button
o If you upgraded your vSphere environment from an older version and still have
older VMFS versions, you may want to upgrade your VMFS Datastores. VMFS 5
Datastores cannot be upgraded directly. Instead, create a new VMFS 6
Datastore, migrate VMs, data, and files to the new Datastore, then remove the
VMFS 5 Datastore
o To remove a VMFS Datastore – go to the Storage Inventory and rt-click the
Datastore you’re wanting to remove, and select Unmount. When prompted,
select all Hosts to Unmount the Datastore from. There are a few pre-requisites
to be able to Unmount:
▪ Datastore cannot be part of a Storage DRS (SDRS) Cluster
▪ SIOC must be disabled
▪ No VMs (on or off) are on the Datastore
If the Datastore was a VMFS Datastore, it’ll still show in your vSphere
Storage Inventory. If it was an NFS or vVol Datastore, it’ll be removed. For
VMFS, you can rt-click the Datastore > Mount to re-mount it if needed. For
NFS & vVol, you have to go through the New Datastore wizard to re-mount
o When deleting a Datastore, all data residing on it will be destroyed. Once
you’ve Unmounted it (not a requirement, but the VMware-recommended way to
fully delete a Datastore), rt-click the Datastore and select Delete
Datastore. If you instead want to remove the Datastore from your vSphere
inventory but keep all your data: after Unmounting, go to each Host it’s
connected to > Configure tab > Storage Devices, select the Device/Datastore
name in the top section, then from the Paths tab in the bottom section, click
on each Path and click the Disable button. Do NOT rescan your storage
adapters! Once you perform this task on each Host, go to your storage array
and take the Volume/LUN "offline". This disables connections to the ESXi
Hosts. You should then be able to rescan your storage adapters, and the
unneeded Datastore will no longer show in your inventory
Give it a Name & optional Description, then click Next. The Policy Structure
option you select determines the remaining options you provide information
for – Host-based rules for Encryption and/or Storage I/O Control; various
storage options associated with vSAN; and Tagging (as shown in sequence in
the following screenshots)
Keep in mind, you can select multiple Policy Structures for your policy. You
aren’t limited to only selecting a Host-Based Policy, etc. Once you’ve
configured all your selections, complete the wizard
o For Advanced options (see below), you can configure the following:
▪ VM disk affinity – allows you to keep all disks associated with a VM
together or on separate datastores
▪ Interval at which to check datastore imbalance
▪ Minimum space utilization difference – refers to the % difference
between source and destination datastore. For example, if the current
space usage on source Datastore is 82% and destination Datastore is 79%
and you configure "5", the migration is not performed
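The worked example above translates directly to a couple of lines of Python (an illustrative sketch of the threshold check, not SDRS's actual algorithm):

```python
# SDRS skips a migration when the utilization difference between source and
# destination datastores is below the configured minimum difference.
def sdrs_would_migrate(source_util_pct, dest_util_pct, min_diff_pct):
    return (source_util_pct - dest_util_pct) >= min_diff_pct

assert not sdrs_would_migrate(82, 79, 5)  # difference of 3 < 5: no migration
assert sdrs_would_migrate(85, 79, 5)      # difference of 6 >= 5: candidate move
```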
Give your 1st Group a Name, then select which group Type to create first, a VM
or Host Group. Click the Add button and you’ll be prompted to select a group of
either VMs or Hosts, depending on what Type you selected to create first.
Repeat for the 2nd group Type (VM or Host), giving it a Name and selecting
desired VMs or Hosts. When finished with each group, click OK to finish
You then go back under the Configuration section and select VM/Host Rules,
then click the Add button
Here is where you choose what type of Rule to create. For the first two
options – 'Keep VMs Together' or 'Separate VMs' – you don’t use any of the
Groups you just created. Those are VM-to-VM Affinity & Anti-Affinity Rules,
respectively. If you select either of those Rules then click the Add button,
you’re prompted to select the VMs from a list to keep together on the same
Host or keep on different Hosts (i.e. separate). The VM > Host Rule is where
you use the Groups you just created. Notice how the options change depending
on the Rule type you select
At the subsequent screens in the Migration wizard, choose the ESXi Host (i.e.
the compute resource) to migrate to, a virtual network if you need to change
the network a VM vNIC is using (click the Advanced button to change networks
for multiple VM vNICs), and the priority (CPU scheduling priority) of the
migration
• Storage vMotion
o As a reminder, since vSphere 4.0, you do not need to have VMs configured for
vMotion to use sVMotion, unless you’re using the 'Migrate Compute and Storage
Resource' option. To perform a sVMotion, rt-click a VM > Migrate, then choose
Change Storage Resource Only, then choose the Datastore wanting to migrate
the VM disks to, as well as the VM disk format (i.e. Thin Provisioned, Thick, etc).
Toggle the 'Configure per disk' slider button to choose storage for multiple VM
disks
7.6.1 – Identify Requirements for Storage vMotion, Cold Migration, vMotion, and Cross vCenter Export
• Cold Migration
o Cold migration simply means migrating a VM between ESXi Hosts across Clusters,
Datacenters, or vCenter Server instances while a VM is either in the powered-off
or suspended state. If your application can afford brief downtime, this may be a
valid option for you. When a powered-off VM is migrated, if the VM guest OS is
64-bit, the destination Host is checked for one requirement – guest OS
compatibility. No other CPU-compatibility checks are performed. If migrating a
suspended VM, then the destination does need to meet another check such as
CPU compatibility.
• vMotion
o vMotion refers to the process of migrating a VM from a source to destination
ESXi Host while powered-on, migrating the entire state of the VM. You can
choose one of 3 vMotion types – compute migration, storage migration, or both.
The state information transferred refers to the VM memory content and all
information identifying and defining the VM like BIOS, device, CPU, MAC
address, chipset state information, etc. Unlike cold migration where
compatibility checks for a destination Host may be optional, the checks for
vMotion are required. During vMotion, the following stages occur:
▪ vCenter verifies the VM is in a stable state with the source Host
▪ The VM state is moved to the destination Host
▪ The VM resumes normal operation on the destination Host
o Before using vMotion, make sure to meet vMotion requirements:
▪ vSphere Standard license, Enterprise Plus for cross-vCenter vMotion
▪ Shared storage among Hosts
▪ vMotion network configured on each Host
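A simplified pre-flight check mirroring the requirements list above might look like the sketch below; the dictionary fields are my own stand-ins, not vSphere's data model, and real vMotion compatibility checks cover far more (licensing, CPU/EVC, device state, etc.):

```python
# Illustrative vMotion pre-flight check (not vSphere's actual validation).
def vmotion_precheck(vm, src_host, dst_host):
    problems = []
    # Each Host needs a vMotion-enabled VMkernel interface
    if not (src_host["vmotion_vmk"] and dst_host["vmotion_vmk"]):
        problems.append("vMotion VMkernel networking not configured on both Hosts")
    # The VM's storage must be visible to the destination Host (shared storage)
    if not set(vm["datastores"]) <= set(dst_host["datastores"]):
        problems.append("VM storage is not shared with the destination Host")
    return problems

vm = {"datastores": {"ds01"}}
src = {"vmotion_vmk": True, "datastores": {"ds01"}}
dst = {"vmotion_vmk": False, "datastores": {"ds01"}}
assert vmotion_precheck(vm, src, dst) == [
    "vMotion VMkernel networking not configured on both Hosts"]
```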
• Role-Based Management
o Definitions:
▪ Privileges – rights to perform actions on an object
▪ Role – set of privileges granting access to perform action on an object
▪ Permissions – role(s) assigned to a user or group to perform actions on
object
– Propagated: selecting the option for permissions to be assigned to a
vSphere object and objects below it in the vSphere hierarchy
– Explicit: permission added to an object without propagation
o Permissions priorities
▪ Propagation is not enabled by default; it must be manually enabled
▪ Child permissions have precedence over "parent" permissions
▪ User permissions have precedence over "group" permissions
▪ If a user is in multiple groups, the permissions of all those groups apply
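The precedence rules above can be sketched as a small resolution function (conceptual only, not VMware's implementation): permissions defined on the child object override propagated parent permissions, a user entry beats group entries, and multiple group entries are combined:

```python
# Conceptual sketch of vSphere permission precedence.
def effective_privileges(user, groups, child_perms, parent_perms):
    """Resolve privileges: child object beats parent, user beats group,
    multiple group permissions are unioned."""
    for perms in (child_perms, parent_perms):   # child-level permissions win
        if user in perms:                       # a user entry beats group entries
            return perms[user]
        combined = set()
        for g in groups:
            combined |= perms.get(g, set())     # union of all group privileges
        if combined:
            return combined
    return set()

child = {"ops-team": {"VirtualMachine.Interact.PowerOn"}}
parent = {"alice": {"Datastore.AllocateSpace"}}
# Alice is in ops-team: the child-level group permission applies, not the
# user permission propagated from the parent.
assert effective_privileges("alice", ["ops-team"], child, parent) == {
    "VirtualMachine.Interact.PowerOn"}
```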
To view where the Role is assigned, click the Usage tab. This will also show
whether the Role/Permission is Propagated or not.
When creating a new Role, click the Add button and select a Category to
create the Role on. If multiple diverse privileges are needed, you can select
different categories and choose the desired privileges for each. Click Next,
give the Role a Name, then click Finish when completed
o To assign a Permission to the vSphere hierarchy, add a user to your new Role by
clicking on the object in the vSphere hierarchy > Permissions tab, and clicking
the Add button to add the new Permission. Browse your Directory (or a locally-
created User) for the User and choose your newly created Role to
associate/assign to the User. Select whether the Permission should Propagate or
not.
o For a list of Privileges for Common Tasks, refer to pp. 50-53, Security Guide. I
have provided an overview table below:
Destination DS or DS Folder: Datastore.AllocateSpace (Role on destination
DS: Datastore Consumer)
Move VM into Resource Pool – Source VM/Folder:
Resource.AssignVMtoResourcePool, VirtualMachine.Inventory.Move (Role on VM:
Administrator)
• I covered Host Profiles extensively in Objective 4.16. I covered what they are, how to
create one, and how to use them, so I ask you go back to this Objective to review the
information. As far as managing Host Profiles goes, it’s pretty simple. You
can create multiple Profiles (if desired), then click on one to modify it as
needed and Attach it to various ESXi Hosts to make them compliant with your
desired environment configuration. Whenever you make a change to a current
Host Profile, your Hosts will fall out of compliance, so you will have to
reapply the Profile to your Hosts
• Firmware updates can be done using vLCM Images, using a firmware and drivers add-on,
which is a vendor-provided add-on containing components which encapsulate firmware
update packages and which might also contain the necessary drivers. Firmware updates
are not available via VMware depot, but rather a special vendor depot whose content
you can access through a software module called a hardware support manager (HSM).
This HSM is a plug-in which registers itself as a vCenter extension. Each vendor provides
& manages a separate HSM which integrates with vCenter Server. For each Cluster you
manage with a single image, you can select the HSM to use for the Cluster, and it then
provides you with a list of firmware updates you can choose to use which might add or
remove components to/from a vLCM image. Some HSM examples are:
o Dell – OpenManage Integration for VMware vCenter (OMIVV) appliance
o HPE – a management tool which is a part of iLO Amplifier; an appliance
o Lenovo – Lenovo xClarity Integrator for VMware vCenter appliance
• I touched on this in Sub-Objective 5.9.2, at least as far as 'how-to'. But
maybe you don’t know what specifically an update vs an upgrade is? An Upgrade
is when you apply software to your ESXi Host to get to the next 'major
release', or version, of vSphere – i.e. from v6.7Ux to v7. An ESXi Update,
similar to 'Patch Tuesday' for Windows Updates, is the process of applying
smaller, yet cumulative, software packages to your Host – for example,
Security Fixes, Bug Fixes, Critical Patches, etc.
• The best way to describe Components is to simply go through the hierarchy VMware
and their partners (OEMs) use for software packaging
o vSphere Installation Bundles (VIBs) – the basic building block for creating
install packages, containing metadata & a binary payload, which is the actual
software to be installed; the smallest software unit or module, not an entire
'feature'
o Bulletins – used by vLCM Baselines; a grouping of one or more VIBs
o Components – a bulletin, but with additional metadata; unlike Bulletins, a
Component is a logical grouping of VIBs providing you a complete and visible
feature upon install. With vSphere 7, these are now the basic packaging
construct for VIBs. VMware bundles components together into a fully
functional Base (i.e. bootable) ESXi Image, whereas vendors bundle components
together as vendor add-ons, which cannot be used to boot with. Add-ons are
usually OEM content and drivers.
Within the vLCM Home View, Bulletins are what you see under the Updates tab,
and Components are under the Image Depot tab.
• A vLCM Image Export creates either a JSON, ISO, or ZIP file. You can then import this file
and use for other Clusters in the same vCenter Server, or into another vCenter Server.
JSON files can be used for Images in other Clusters to save you from having to recreate
them fully, although when using for another vCenter Server, you might also need to use
a ZIP file to make sure all components and drivers are included in the Image on the
target vCenter Server Cluster. If you export an Image as an ISO, keep in mind a few
caveats:
o You cannot use this exported ISO for a Cluster currently using a vLCM Image
o You cannot create a vLCM Image from this ISO
o You import the exported ISO into the Imported ISOs tab of vLCM and can only
use it for Baselines
• To perform the export, go to your Cluster containing the Image you want to export,
select the Updates tab, and click on Images. Click the 'ellipsis' on the right > Export, then
choose a file type to export. Next, go to the target vCenter and/or Cluster > Updates
tab, select Images, then click the Import Image button
▪ As you can see above, there are a variety of "arguments" to choose from.
Once you’ve chosen what to monitor, select a Severity Level (Warning or
Critical) and whether to Send Email, SNMP Traps, or Run a Script. You can
also choose more Advanced options (2nd screenshot below)
• Encrypted vMotion
o Encrypted VMs always use encrypted vMotion. NOTE: if you change a VM to be
unencrypted, the Encrypted vMotion state will still be at 'Required' (see
below) until you explicitly change it to another state
o You can choose one of three encryption states for unencrypted VMs to use
with vMotion: Disabled, Opportunistic (the default – encryption is used when
both Hosts support it), or Required (the migration fails if encryption
cannot be used)
o To configure one of the above states, expand the Encryption section and
change the Encrypted vMotion option drop-down. NOTE: You must be running
vSphere 6.5+ to use encrypted vMotion; and you can only change the state on
unencrypted VMs
o Migrate an encrypted VM: nothing new for this. Once encryption is enabled in
the VM settings & an encrypted Storage Policy is set, just vMotion the VM
o Cross vCenter Encrypted VMotion
▪ Allowed as long as the following is met:
– Both source & target vCenters use the same Key Provider
– The Key Providers are configured with the same Name
– vCenter Server 7.0+
– ESXi 6.7+
– Must be performed by using vSphere APIs
We made it through the very last Objective! Congratulations on finishing and allowing me to go
through your VCP-DCV studies with you. I hope this Study Guide helps you in gaining knowledge on
the newest vSphere features, as well as for passing your VCP-DCV exam. Please let me know what
you think, and if you notice any grammatical or technical discrepancies. You can ping me here or
here. Feel free to share this Study Guide with others, but as with all my past Study Guides, please
give credit to the author (me). Thank you so much!