Lab UCSPE 2017 PDF
Antonius Yunianto
Pre-Sales Data Center
anton.yunianto@comstor.com
Introduction
Installation
Setting Up UCSM
Introduction
KVM
ESXi Installation
GENERAL
Cisco Unified Computing System (UCS) Manager provides unified, embedded management
of all software and hardware components in the Cisco UCS. It controls multiple chassis and
manages resources for thousands of virtual machines.
DOCUMENT DIVISIONS
We aim to train partners so they can get comfortable selling and configuring the Cisco
Unified Computing System and become familiar with its architecture. This hands-on lab is
based on the UCS Platform Emulator, release 2.2.1bPE1.
If you have additional questions or remarks on the lab guide, please drop me an e-mail:
anton.yunianto@comstor.com
- UCS Manager
o Embedded on the Fabric Interconnects; it manages the entire UCS domain
- Fabric Interconnects
o 10GE unified fabric switches that handle native Fibre Channel, FCoE, and Ethernet
- Chassis IO Module
o Remote line card
- Blade server chassis
o Flexible bay configurations, up to 8 half-width blades
- I/O adapters
o Choice of multiple adapters
- Blades (and rack servers)
o x86 industry standard
o Patented extended memory
The picture below shows the front and rear view of a Cisco UCS environment with 16 blades:
Depending on the number of cables you connect between the chassis and the FIs, you can
scale up to 160 Gbps of bandwidth per chassis.
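As a quick sanity check of that figure, here is a small Python sketch (illustrative only; it assumes 10 Gb per link and two IOMs per chassis, as described in the Fabric Extenders section below):

# Each IOM-to-FI cable is a 10 Gb link and a chassis has two IOMs (one per fabric).
LINK_SPEED_GB = 10
IOMS_PER_CHASSIS = 2

for uplinks_per_iom in (1, 2, 4, 8):
    total = uplinks_per_iom * LINK_SPEED_GB * IOMS_PER_CHASSIS
    print(f"{uplinks_per_iom} uplinks per IOM -> {total} Gbps per chassis")
# With a 2208XP and all 8 uplinks cabled on each IOM, this prints 160 Gbps per chassis.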
Hint: It’s important to design this correctly as the Fabric Interconnects are licensed on a per
port model. So depending on your applications/server bandwidth needs, take this into
account as a port license costs $2774 list (+- $1100 buy price).
The Cisco UCS Fabric Interconnects are a core part of the Cisco Unified Computing System.
Typically deployed in redundant pairs, the Cisco Fabric Interconnects provide uniform access
to both networks and storage. They benefit from a low total cost of ownership (TCO) with
enhanced features and capabilities:
o The expansion module (PID: UCS-FI-E16UP) has 16 Unified Ports with 8 port
licenses.
When you configure the links between the Cisco UCS 2200 Series IOM and a Cisco UCS
6200 series fabric interconnect in fabric port channel mode, the available VIF namespace
on the adapter varies depending on where the IOM uplinks are connected to the fabric
interconnect ports.
Inside the 6248 fabric interconnect there are six sets of eight contiguous ports, with each
set of ports managed by a single chip. When uplinks are connected such that all of the
uplinks from an IOM are connected to a set of ports managed by a single chip, Cisco UCS
Manager maximizes the number of VIFs used in service profiles deployed on the blades in
the chassis. If uplink connections from an IOM are distributed across ports managed by
separate chips, the VIF count is decreased.
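To make the port-group rule concrete, here is an illustrative Python sketch (not a UCSM feature; the port numbering is my assumption) that checks whether all uplinks from an IOM land inside a single 8-port group on a 6248:

# The 6248 manages its 48 ports in six groups of eight contiguous ports,
# each group owned by one ASIC (ports 1-8, 9-16, ..., 41-48).
def port_group(port):
    """Return the 8-port group (0-5) that a 6248 port number belongs to."""
    return (port - 1) // 8

def uplinks_share_one_group(ports):
    """True if all IOM uplinks terminate on ports owned by the same ASIC."""
    return len({port_group(p) for p in ports}) == 1

print(uplinks_share_one_group([1, 2, 3, 4]))   # True  -> maximum VIF namespace
print(uplinks_share_one_group([7, 8, 9, 10]))  # False -> reduced VIF count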
FABRIC EXTENDERS
There are currently two UCS Fabric Extenders on the market for use in the 5108 chassis.
FEX MODEL NIFs HIFs
2204XP 4 16
2208XP 8 32
The NIFs (Network Interfaces) are the physical links from the Fabric Extender to the Fabric
Interconnects. The HIFs (Host Interfaces) are the internal traces (links) to the servers.
Note: There are also remote Fabric Extenders available, called the Nexus 2000 Series Fabric
Extenders. These devices extend the number of ports of their parent switch. The Nexus 2000
platform MUST be combined with a parent device: Fabric Interconnects or Nexus 5000, 6000,
7000 or 9000 Series switches. The Nexus 2000 Series are 'dumb' devices, as all the traffic from
their ports is switched by the upstream device; the Nexus Fabric Extenders don't switch frames locally.
- VIC 1240
- Port Expander (only in combination with the VIC 1240)
- VIC 1280
The Cisco UCS Virtual Interface Card (VIC) 1240 is a 4-port 10 Gigabit Ethernet, Fibre Channel
over Ethernet (FCoE)-capable modular LAN on motherboard (mLOM) designed exclusively
for the M3 generation of Cisco UCS B-Series Blade Servers.
When used in combination with an optional Port Expander, the Cisco UCS VIC 1240
capabilities can be expanded to eight ports of 10 Gigabit Ethernet.
The Cisco UCS VIC 1240/1280 enables a policy-based, stateless, agile server infrastructure
that can present up to 256 PCIe standards-compliant interfaces to the host that can be
dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs).
In addition, the Cisco UCS VIC 1240/1280 supports Cisco Data Center Virtual Machine Fabric
Extender (VM-FEX) technology, which extends the Cisco UCS fabric interconnect ports to
virtual machines, simplifying server virtualization deployment.
2208XP SCENARIOS
Here you can find the different scenarios for how the HIFs connect to the server I/O card slots:
Scenario 1: 2208XP Fabric Extender with I/O Card VIC1240
The 2208XP FEX has 32 HIFs (Host Interfaces), so there are 4 traces available per FEX for
every blade in the chassis. The 5108 chassis can hold up to 8 servers, so 32 HIFs divided by
8 = 4 traces per blade available. Every trace is a 10GBASE-KR link.
As you can see, only the VIC 1240 has been installed in the blade, so only 2 traces per FEX
are used. That still gives 2 x 10 Gb per FEX, which results in 40 Gb per server.
Scenario 2: 2208XP Fabric Extender with I/O Card VIC1240 + I/O Card Port Expander
The 2208XP FEX has 32 HIFs (Host Interfaces), so there are 4 traces available per FEX for
every blade in the chassis. The 5108 chassis can hold up to 8 servers, so 32 HIFs divided by
8 = 4 traces per blade available. Every trace is a 10GBASE-KR link.
As you can see, the VIC 1240 AND the Port Expander have been installed in the blade, so 4
traces per FEX are used, which results in 4 x 10 Gb per FEX and 80 Gb per server.
The Port Expander Card for VIC 1240 installed in the mezzanine slot acts as a pass-through
device to channel two ports to each of the Fabric Extenders.
HINT: Please bear in mind that when using the Port Expander, UCS Manager sees the VIC 1240
and the Port Expander as a single VIC. Even though two cards are installed, if the VIC 1240
fails you'll lose connectivity to the blade. If you want redundancy, you'll have to select the
VIC 1280 instead (but the 1280 is more expensive).
Scenario 3: 2208XP Fabric Extender with I/O Card VIC1240 + I/O Card VIC1280
The 2208XP FEX has 32 HIFs (Host Interfaces), so there are 4 traces available per FEX for
every blade in the chassis. The 5108 chassis can hold up to 8 servers, so 32 HIFs divided by
8 = 4 traces per blade available. Every trace is a 10GBASE-KR link.
As you can see, the VIC 1240 AND the VIC 1280 have been installed in the blade, so 4 traces
per FEX are used (2 per adapter), which results in 4 x 10 Gb per FEX and 80 Gb per server.
2204XP SCENARIOS
Scenario 1: 2204XP Fabric Extender with I/O Card VIC1240
The 2204XP FEX has 16 HIFs (Host Interfaces), so there are 2 traces available per FEX for
every blade in the chassis. The 5108 chassis can hold up to 8 servers, so 16 HIFs divided by
8 = 2 traces per blade available. Every trace is a 10GBASE-KR link.
As you can see, only the VIC 1240 has been installed in the blade, so only 1 trace per FEX is
used, which results in 1 x 10 Gb per FEX and 20 Gb per server.
Scenario 2: 2204XP Fabric Extender with I/O Card VIC1240 + I/O Card Port Expander
The 2204XP FEX has 16 HIFs (Host Interfaces), so there are 2 traces available per FEX for
every blade in the chassis. The 5108 chassis can hold up to 8 servers, so 16 HIFs divided by
8 = 2 traces per blade available. Every trace is a 10GBASE-KR link.
As you can see, the VIC 1240 AND the Port Expander have been installed in the blade, so 2
traces per FEX are used, which results in 2 x 10 Gb per FEX and 40 Gb per server.
One port from the VIC 1240 is channeled to 2204XP Fabric Extender A and one is channeled
to 2204XP Fabric Extender B. The Port Expander Card for VIC 1240 installed in the mezzanine
slot acts as a pass-through device to channel one port to each of the Fabric Extenders. The
result is 20 Gbps of bandwidth to each Fabric Extender.
Scenario 3: 2204XP Fabric Extender with I/O Card VIC1240 + I/O Card VIC1280
The 2204XP FEX has 16 HIFs (Host Interfaces), so there are 2 traces available per FEX for
every blade in the chassis. The 5108 chassis can hold up to 8 servers, so 16 HIFs divided by
8 = 2 traces per blade available. Every trace is a 10GBASE-KR link.
As you can see, the VIC 1240 AND the VIC 1280 have been installed in the blade, so 2 traces
per FEX are used (1 per adapter), which results in 2 x 10 Gb per FEX and 40 Gb per server.
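All six scenarios follow the same arithmetic. The following Python sketch (illustrative only; the trace counts are taken from the scenario descriptions above) reproduces the per-FEX and per-server bandwidth figures:

# Traces used per FEX per blade for each FEX model / adapter combination.
# Every trace is a 10 Gb 10GBASE-KR lane, and there is one FEX per fabric.
TRACES_PER_FEX = {
    ("2208XP", "VIC 1240"):                 2,
    ("2208XP", "VIC 1240 + Port Expander"): 4,
    ("2208XP", "VIC 1240 + VIC 1280"):      4,
    ("2204XP", "VIC 1240"):                 1,
    ("2204XP", "VIC 1240 + Port Expander"): 2,
    ("2204XP", "VIC 1240 + VIC 1280"):      2,
}

GB_PER_TRACE = 10
FEX_PER_CHASSIS = 2

for (fex, adapters), traces in TRACES_PER_FEX.items():
    per_fex = traces * GB_PER_TRACE
    per_server = per_fex * FEX_PER_CHASSIS
    print(f"{fex} + {adapters}: {per_fex} Gb/FEX, {per_server} Gb per server")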
We will perform the hands-on exercises on a UCS Platform Emulator, which students can install
on their own laptops. The advantage of the Emulator is that students don't need access to
Comstor's Data Center and can use this configuration guide offline. The Emulator is a
perfect platform to discover the many features of Cisco UCS Manager, as it has the same
layout and almost the same functions as the 'real' UCS Manager.
Installation requirements:
Use the USB drive provided in the class or download the following files
o UCS Platform Emulator (CCO-ID is required):
https://communities.cisco.com/docs/DOC-37827
o VMware Player (if you have VMware Workstation on your laptop, that’s also
supported)
(https://my.vmware.com/web/vmware/free#desktop_end_user_computing/vmware_player/6_0)
If you downloaded the .zip file, please extract the UCSPE folder to your desktop.
Now VMware Player (or your VMware Workstation) should open and you'll see the Linux VM
boot up. The unpacking, installation and booting can take up to 10-15 minutes depending
on your system's hardware. Please read through the following section during the
installation.
This step isn’t available in the UCS Platform Emulator as you don’t have to configure the
Fabric Interconnects IP addresses in the emulator. Please read through these steps to
understand how the Fabric Interconnects provide High Availability.
HINT: Before setting up the Fabric Interconnects, make sure L1 on FI-A is connected to L1 on
FI-B and L2 on FI-A is connected to L2 on FI-B. These GbE connections are needed to provide
a heartbeat between the Fabric Interconnects and to synchronize the configuration.
We will set up a virtual IP address for UCS Manager, as the management plane of a Cisco UCS
environment is active/passive. When you connect to the virtual IP address, you connect to the
active Fabric Interconnect.
The Fabric Interconnects are now in cluster mode (high availability) and ready to be accessed
through the Unified Computing System Manager (UCSM).
When the UCSM Platform Emulator has finished the boot process, please log in via the UCSM
UI IP address (in this case 192.168.70.130):
- Login: admin
- Password: admin
The second-generation Fabric Interconnects (6248UP/6296UP) have Unified Ports, which
means they support both the Ethernet and Fibre Channel protocols. Before configuring server
ports, uplinks, etc., we need to determine which ports we are going to use for which protocol.
This is important because the module itself needs to reboot every time we reconfigure a port
type. If we reconfigure a port on an expansion module, only the expansion module will
perform a reboot.
Please take this into account, because you don't want to reboot a whole module in a
production environment!
4. You will get a warning that the module where you change ports will reboot
afterwards, please click YES.
5. In the wizard, you can select which module you want and with the slider you can
configure Ethernet or Fibre Channel ports. Ethernet ports are blue and Fibre Channel
ports are purple.
Ethernet ports are allocated from the beginning of the module, and Fibre Channel ports are allocated from the end, counting backwards.
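As a simple illustration of that allocation rule (a sketch; the 32-port count is an assumption for a 6248UP fixed module, not taken from the emulator), the following Python snippet shows which port numbers end up as Ethernet and which as Fibre Channel for a given slider position:

# Ethernet ports are allocated from the start of the module,
# Fibre Channel ports from the end.
def split_unified_ports(total_ports, fc_ports):
    ethernet = list(range(1, total_ports - fc_ports + 1))
    fibre_channel = list(range(total_ports - fc_ports + 1, total_ports + 1))
    return ethernet, fibre_channel

eth, fc = split_unified_ports(total_ports=32, fc_ports=4)
print("Ethernet ports:", eth)        # ports 1-28
print("Fibre Channel ports:", fc)    # ports 29-32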
This configuration guide shows how to set up the initial steps for southbound (servers) and
northbound (LAN/SAN) connectivity.
CHASSIS POLICY
Discovery Policy:
The chassis discovery policy specifies the minimum number of links between the IOM and the
Fabric Interconnect that a chassis needs in order to be discovered, and whether those links
are grouped into a port channel (the Link Grouping Preference).
HINT: Port-Channel is best practice, as all the servers then share the aggregate bandwidth of
the port channel between the FEX and the FIs.
CHASSIS DISCOVERY
Note: This step has already been done in the UCS Manager Platform Emulator, but please go
through it, as this is what configures the server ports and triggers the chassis discovery.
Do the same for Fabric Interconnect B and wait until the chassis/server discovery is
completed. You can monitor the server discovery by selecting a server and looking at the
status field on the General tab.
CONFIGURING UPLINKS
We are going to configure northbound LAN connectivity. In a perfect scenario you would
have two upstream Nexus 5500/7000 Series switches connected with virtual port channels
(vPCs), but switches from other vendors also work perfectly.
As you can see, the ports on the Fabric Interconnect are automatically configured (this is
a fixed configuration) in LACP active mode, which means they will initiate the negotiation.
Your upstream switch can therefore run LACP in either active or passive mode.
As we now have selected the ports for our LAN uplinks, we have to create a Port-Channel.
HINT: Don’t forget to create port-channels on the upstream switch and put the LACP mode in
Active or Passive. The UCS port channels are statically set to LACP mode active and cannot be
modified.
Fabric Interconnect A / B:
ID 1
Name Port-Channel1
5. In the Next screen, we have to select the ports we need to add to our Port Channel.
6. Select Ethernet ports E1/17 and E1/18 as uplink ports for the Port Channel and
move them to the right column with the >> button.
7. Click finish to close this wizard.
Do exactly the same for Fabric B with the same input values.
Note: The port channel will not come up, as this is an emulator and there is no upstream
switch.
In a real environment it can take a couple of seconds before the port channel comes up, as it
has to negotiate with the upstream switch.
Now that the port channels and LAN uplinks are configured, we have to create the VLANs.
HINT: Make sure you have also created the VLANs on the upstream switch and that they are
allowed on the port channel.
4. Create the following VLANs as Common/Global VLANs (VLAN 1 is created by default and
is the native VLAN):
VLAN ID NAME
64 MGMT
65 DATA_TRAFFIC
66 vMOTION
Common/Global: The VLANs apply to both fabrics and use the same configuration
parameters in both cases
Fabric A: The VLANs only apply to fabric A.
Fabric B: The VLANs only apply to fabric B.
Both Fabrics Configured Differently: The VLANs apply to both fabrics but you can
specify different VLAN IDs for each fabric.
HINT: If you are working in large environments, for example at cloud or service providers, it's
possible that you have a lot of VLANs. If you are not sure whether a VLAN ID is already in use,
you can use the 'Check Overlap' function, which checks whether the VLAN is already configured.
INTRODUCTION
In this LAB guide, we will configure the environment with native Fibre Channel.
The upstream connection is a Fibre Channel connection to two MDS switches.
The Fabric Interconnect is in (default) FC end-host mode, which means it runs NPV (N-Port
Virtualization). The NPV switch is connected to an NPIV-enabled switch. NPIV allows a
single physical N_Port to have multiple WWPNs, and therefore multiple N_Port_IDs,
associated with it. This is necessary because the Fabric Interconnect doesn't act as a normal
Fibre Channel switch, but as a "big server with several HBAs".
First we have to create the appropriate VSANs so we can communicate with the upstream
Fibre Channel switch.
A Fibre Channel port channel allows you to group several physical Fibre Channel ports to
create one logical Fibre Channel link to provide fault-tolerance and high-speed connectivity.
Fabric Interconnect A:
ID 10
Name Port-Channel10
Fabric Interconnect B:
ID 11
Name Port-Channel11
6. In the Next screen, we have to select the ports we need to add to our Port Channel.
7. Select FC ports 1/47 and 1/48 as uplink ports for the Port Channel and
move them to the right column with the >> button. Click Finish to close this wizard.
Note: The port channel will not come up, as this is an emulator and the port channel
doesn't have an upstream switch.
In a real environment it can take a couple of seconds before the port channel comes up, as it
has to negotiate with the upstream switch.
Both Fibre Channel interfaces on the expansion module are automatically enabled as uplink
ports, so we don't have to change anything. When you select a Fibre Channel port, you can
see that its overall status is 'failed':
The overall status is 'failed' because the upstream MDS is configured for a specific VSAN, so
we have to map the FC interfaces to the appropriate VSANs.
Fabric Interconnect A:
1. Navigate to the SAN tab and navigate to the SAN Cloud.
2. Select Fabric A.
3. Select Fibre Channel Port Channel 10 on Fabric Interconnect A.
4. Under Properties on the right, select VSAN10 and apply these settings by clicking
Save Changes.
5. You will see the port turn green instead of red.
1. Select Fabric B.
2. Select Fibre Channel Port Channel 11 on Fabric Interconnect B.
3. Under Properties on the right, select VSAN11 and apply these settings by clicking
Save Changes.
4. You will see the port turn green instead of red.
After you have mapped the port channels to the specific VSANs, all the ports should be up
(screenshot from real equipment):
INTRODUCTION
- A compute node is just an execution engine for any application (CPU, memory, and
disk, whether flash or hard drive). The servers themselves aren't doing anything until you
assign them a personality (Service Profile)
- The servers can easily then be deployed, cloned, grown, shrunk, de-activated,
archived, re-activated, etc…
If you create organizations in a multi-tenant environment, you can also set up one or more
of the following for each organization or for a sub-organization in the same hierarchy:
Resource pools
Policies
Service profiles
Service profile templates
1. You can create an organization wherever you want with the + new button on the
top of the UCSM
2. Select Create Organization.
Organization
Name UCS_Training
Description Training UCS
Before we start to create service profiles, we have to define different pools so we can use
the addressing for MAC addresses, WWNN, WWPN, etc. from pools. The use of pools is
especially important when we are dealing with stateless computing.
Organization
From: 10.8.64.2
Size: 16
Subnet Mask: 255.255.255.0
Default Gateway: 10.8.64.1
Primary DNS: 8.8.8.8
Secondary DNS: 0.0.0.0
5. Click on IP Addresses; here you can see that the IPs are created and automatically
assigned to certain blades.
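As a quick sanity check of the block defined above (a sketch, assuming the pool lives in the 10.8.64.0/24 management subnet shown above), the following Python snippet lists the management IP range the blades will draw their KVM/CIMC addresses from:

import ipaddress

start = ipaddress.IPv4Address("10.8.64.2")   # 'From' address of the pool
size = 16                                    # pool size
pool = [start + i for i in range(size)]
print(pool[0], "-", pool[-1])                # 10.8.64.2 - 10.8.64.17

# All addresses must fall inside the management subnet:
assert all(ip in ipaddress.ip_network("10.8.64.0/24") for ip in pool)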
UUID: a globally unique ID for a given server, composed of a prefix and a suffix.
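For illustration only (the values below are made up; in UCSM the prefix is usually derived per domain and the suffixes come from a UUID suffix pool), this is how a prefix and a suffix combine into the UUID a server reports:

# Hypothetical prefix/suffix values, just to show the composition.
prefix = "12345678-ABCD-EF01"        # domain-wide prefix (8-4-4 hex digits)
suffix = "0000-000000000001"         # taken from the UUID suffix pool (4-12)
uuid = f"{prefix}-{suffix}"
print(uuid)                          # 12345678-ABCD-EF01-0000-000000000001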
Using pools lets you communicate WWNs to the SAN team ahead of deployment so they can
pre-provision LUNs for boot-from-SAN and proactively perform the zoning and LUN masking
configuration.
An adapter uses one Node WWN (WWNN) and as many Port WWNs (WWPNs) as there are
vHBAs on that adapter.
WWNN
From: 20:00:00:25:B5:00:00:00
Size: 32
WWPNs are used to assign addresses to the Virtual Host Bus Adapters (vHBAs) and are used
for zoning and masking. Use these within sub-organizations!
SAN-A
From: 20:00:00:25:B5:0A:00:00
Size: 32
SAN-B
From: 20:00:00:25:B5:0B:00:00
Size: 32
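A small illustrative helper (not part of UCSM) that enumerates the first addresses of the WWPN blocks defined above; note how the sixth octet (0A/0B) identifies the fabric, which makes zoning and troubleshooting on the MDS switches easier:

def wwpn_block(base, size):
    """List the first addresses of a WWPN block. Only the last octet is
    incremented, so size must be 256 or less."""
    octets = base.split(":")
    start = int(octets[-1], 16)
    return [":".join(octets[:-1] + ["%02X" % (start + i)]) for i in range(size)]

print(wwpn_block("20:00:00:25:B5:0A:00:00", 4))  # first WWPNs of the SAN-A pool
print(wwpn_block("20:00:00:25:B5:0B:00:00", 4))  # first WWPNs of the SAN-B pool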
When creating a block of MAC addresses, use size 64 as a best practice. I also recommend
using the "00:25:B5" MAC prefix to ensure MAC uniqueness.
We will create separate MAC address pools for Fabric A and Fabric B so it's easier to
troubleshoot later if we encounter problems.
3. Click on the + sign on the right and give it the following name: MAC-MGMT-A
4. Click Next and select Add.
5. Use the following first MAC: 00:25:B5:0A:00:00 and size: 128.
6. Click finish to create the MAC Addresses block.
7. Click on the + sign on the right and give it the following name: MAC-MGMT-B
8. Click Next and select Add.
9. Use the following first MAC: 00:25:B5:0B:00:00 and size: 128.
10. Click finish to create the MAC Addresses block.
11. Click on the + sign on the right and give it the following name: MAC-vMOTION-A
12. Click Next and select Add.
13. Use the following first MAC: 00:25:B5:1A:00:00 and size: 128.
14. Click finish to create the MAC Addresses block.
15. Click on the + sign on the right and give it the following name: MAC-vMOTION-B
16. Click Next and select Add.
17. Use the following first MAC: 00:25:B5:1B:00:00 and size: 128.
18. Click finish to create the MAC Addresses block.
19. Click on the + sign on the right and give it the following name: MAC-VMTRAFFIC-A
20. Click Next and select Add.
21. Use the following first MAC: 00:25:B5:2A:00:00 and size: 128.
22. Click finish to create the MAC Addresses block.
23. Click on the + sign on the right and give it the following name: MAC-VMTRAFFIC-B
24. Click Next and select Add.
25. Use the following first MAC: 00:25:B5:2B:00:00 and size: 128.
26. Click finish to create the MAC Addresses block.
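All the MAC blocks created above follow the same pattern: the Cisco 00:25:B5 prefix, one octet that encodes traffic type and fabric (0A/0B for MGMT, 1A/1B for vMOTION, 2A/2B for VM traffic), and a 16-bit counter. The helper below (illustrative only, not a UCSM feature) generates the first addresses of any block:

def mac_block(first_mac, size):
    # Keep everything up to the last two octets, then count upwards.
    prefix = first_mac[:-5]                        # e.g. "00:25:B5:0A:"
    start = int(first_mac[-5:].replace(":", ""), 16)
    return [f"{prefix}{(start + i) >> 8:02X}:{(start + i) & 0xFF:02X}"
            for i in range(size)]

print(mac_block("00:25:B5:0A:00:00", 3))
# ['00:25:B5:0A:00:00', '00:25:B5:0A:00:01', '00:25:B5:0A:00:02']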
This policy defines how a vNIC on a server connects to the LAN. This policy is also referred to
as a vNIC LAN connectivity policy. A best practice for ESXi is configuring 8 vNICs but as this is a
demo, let’s create 6 of them.
DATA_TRAFFIC-B
vNIC-B
Name DATA_TRAFFIC-B
Description: -
Fabric ID: Fabric B
Target: Adapter
Failover: No Failover
Template type: Initial
VLAN: Select DATA_TRAFFIC (default as native)
MTU: 1500 (can be set to 9000 if using, e.g., NFS)
MAC Pool: MAC-VMTRAFFIC-B
Network Control Policy: CDP
MGMT-A
vNIC-A
Name MGMT-A
Description: -
Fabric ID: Fabric A
Target: Adapter
Failover: No Failover
Template type: Initial
VLAN: Select MGMT (default as native)
MTU: 1500 (can be set to 9000 if using, e.g., NFS)
MAC Pool: MAC-MGMT-A
Network Control Policy: CDP
MGMT-B
vNIC-B
Name MGMT-B (same settings as MGMT-A, but with Fabric ID: Fabric B and MAC Pool: MAC-MGMT-B)
vMOTION-A
vNIC-A
Name vMOTION-A
Description: -
Fabric ID: Fabric A
Target: Adapter
Failover: No Failover
Template type: Initial
VLAN: Select vMOTION (default as native)
MTU: 1500 (can be set to 9000 if using, e.g., NFS)
MAC Pool: MAC-vMOTION-A
Network Control Policy: CDP
vMOTION-B
vNIC-A
Name vMOTION-B
Description: -
Fabric ID: Fabric B
Target: Adapter
Failover: No Failover
Template type: Initial
VLAN: Select vMOTION (default as native)
MTU: 1500 (can be set to 9000 if using, e.g., NFS)
MAC Pool: MAC-vMOTION-B
Network Control Policy: CDP
This template is a policy that defines how a vHBA on a server connects to the SAN. It is also
referred to as a vHBA SAN connectivity template. A best practice for most environments is
configuring 2 vHBAs to both SAN-A and SAN-B.
vHBA-A
vHBA-A
Name vHBA-A
Description: -
Fabric ID: Fabric A
VSAN: VSAN10
Template Type: Initial
Max Data Field Size 2048
WWPN Pool SAN-A
vHBA-B
vHBA-B
Name vHBA-B
Description: -
Fabric ID: Fabric B
VSAN: VSAN11
Template Type: Initial
Max Data Field Size 2048
WWPN Pool SAN-B
You can configure a boot policy to boot one or more servers from an operating system
image on the SAN. A boot policy can consist of SD cards, internal HDDs, boot from SAN, and so on.
The boot from SAN policy can include a primary and a secondary SAN boot. If the primary
boot fails, the server attempts to boot from the secondary.
Cisco recommends that you use a SAN boot, because it offers the most service profile
mobility within the system. If you boot from the SAN when you move a service profile from
one server to another, the new server boots from the exact same operating system image.
Therefore, the new server appears to be the exact same server to the network.
9. Select Add SAN Boot Target: (Add SAN Boot target to SAN secondary)
Add SAN boot target:
Boot Target Lun: 0
Boot Target WWPN: 50:0a:09:81:88:cd:39:b7
Finally, we can create the service profile template. We are going to configure a template, so
we can create different service profiles from that template.
- Initial Template: An initial template is used to create new servers (service profiles)
with unique IDs, but after a server is deployed there is no link between the
server and the template. Changes to the template will not propagate to the
servers; all changes to items defined by the template must be made individually
on each server deployed from the initial template.
Networking:
Name MGMT-B
Use vNIC Template: YES
MAC Address Assignment MAC-B
vNIC Template MGMT-B
Adapter Policy: VMware
Networking:
Name DATA_TRAFFIC-A
Use vNIC Template: YES
MAC Address Assignment MAC-A
vNIC Template DATA_TRAFFIC-A
Adapter Policy: VMware
Networking:
Name DATA_TRAFFIC-B
Use vNIC Template: YES
MAC Address Assignment MAC-B
vNIC Template DATA_TRAFFIC-B
Adapter Policy: VMware
Networking:
Name vMOTION-A
Use vNIC Template: YES
MAC Address Assignment MAC-A
vNIC Template vMOTION-A
Adapter Policy: VMware
Networking:
Name vMOTION-B
Use vNIC Template: YES
MAC Address Assignment MAC-B
vNIC Template vMOTION-B
Adapter Policy: VMware
Select World Wide Node Name assignment and choose the WWNN pool we've created
(WWNN).
Create vHBA-A
Name vHBA-A
Use vHBA Template: YES
vHBA Template vHBA-A
Adapter Policy: VMware
You can see the Service Profiles are created under your organization:
4. Under Actions, select Associate Service Profile and choose the Service Profile you’ve
created.
In a real environment you can watch the step sequence of associating the service profile with
the blade on the FSM tab:
KVM
Note: As we aren’t working on physical devices, the ESXI installation is not supported in this
lab guide.
To install VMware ESXi on the blade server, we need to KVM the server.
6. Select OK to reset the server and select Power Cycle (as we don’t have any OS
installed on the server)
1. Go back to the KVM screen and wait until the server boots. The ESXi installer will
start automatically, as we set the CD-ROM drive as the first boot device (or press
Enter if you don't want to wait).
4. Now the ESXi installer scans for available devices and it should see the boot-from-SAN
LUN we've defined in the Service Profile. Select the NetApp LUN by pressing
Enter.
8. After the installation is complete, press Enter to reboot the server. There is no need
to remove (unmap) the installation virtual disk, as UCSM automatically does this
when the server reboots.
After ESXi has been installed, we have to set up the ESXi networking. By default, ESXi will
look for a DHCP server, but we don't have DHCP enabled on this network, so we need to
assign a static IP address.
1. Press F2 to customize the system and press Enter twice, as we didn't set up a
password for the ESXi login.
2. Select Configure Management Network:
Now that we have set up the ESXi management network, we can start using the VMware
vSphere Client to configure vSwitch uplinks and assign VMs to them.