
CISCO UNIFIED COMPUTING SYSTEM

Antonius Yunianto
Pre-Sales Data Center
anton.yunianto@comstor.com

This lab is created to give partners a hands-on impression of Cisco's Unified Computing System.

The focus of this lab is to gain experience with the initial setup and service profiles.



TABLE OF CONTENTS

Cisco Unified Computing System
    Introduction
    UCS Architecture Overview
        Overview of UCS and Components
        Fabric Interconnects
        Fabric Extenders
        Virtual Interface Card
    Installation
    Infrastructure Tasks
        Configuring Fabric Interconnects (HA)
        Setting Up UCS Manager Platform Emulator
        Configuring Port Types
        Setting Up UCSM
    Configuring Service Profiles
        Introduction
        Configuring Organizations
        Configuring Pools
        Configuring Network Control Policies
        Configuring vNIC Templates
        Configuring vHBA Templates
        Configuring Boot Policy (Boot from SAN)
        Configuring Service Profile Template
        Create Service Profile from Template
        Assigning Service Profiles to a Blade
    Installing ESXi on UCS
        KVM
        ESXi Installation
        ESXi Networking
    Thank You!



INTRODUCTION

GENERAL

Cisco Unified Computing System (UCS) Manager provides unified, embedded management
of all software and hardware components in the Cisco UCS. It controls multiple chassis and
manages resources for thousands of virtual machines.

DOCUMENT DIVISIONS

We aim to train partners so they become comfortable selling and configuring the Cisco Unified Computing System and familiar with its architecture. This hands-on lab is based on the UCS Platform Emulator, release 2.2.1bPE1.

If you have additional questions or remarks on the lab guide, please drop me an e-mail:
anton.yunianto@comstor.com



UCS ARCHITECTURE OVERVIEW

OVERVIEW OF UCS AND COMPONENTS

Cisco UCS is built from multiple components:

- UCS Manager
o Embedded on the Fabric Interconnects; manages the entire UCS domain
- Fabric Interconnects
o 10GE unified fabric switches that can handle native FC, FCoE, and Ethernet
- Chassis IO Module
o Remote line card
- Blade server chassis
o Flexible bay configurations, up to 8 half-width blades
- I/O adapters
o Choice of multiple adapters
- Blades (and rack servers)
o x86 industry standard
o Patented extended memory

The picture below shows the front and rear view of a Cisco UCS environment with 16 blades:



As shown in the picture above, we have to connect the two Fabric Extenders in the chassis to the Fabric Interconnects. Be sure to connect FEX A to FI A and FEX B to FI B for proper cabling/chassis discovery.

Depending on the number of cables you connect between the chassis and the FIs, you can scale up to 160 Gbps of bandwidth per chassis.

Hint: It’s important to design this correctly as the Fabric Interconnects are licensed on a per
port model. So depending on your applications/server bandwidth needs, take this into
account as a port license costs $2774 list (+- $1100 buy price).



FABRIC INTERCONNECTS

The Cisco UCS Fabric Interconnects are a core part of the Cisco Unified Computing System.
Typically deployed in redundant pairs, the Cisco Fabric Interconnects provide uniform access
to both networks and storage. They benefit from a low total cost of ownership (TCO) with
enhanced features and capabilities:

- Increased bandwidth up to 960 Gbps


- High-performance, flexible, unified ports capable of line-rate, low-latency, lossless
1/10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and 4/2/1 and 8/4/2
Fibre Channel
- Reduced port-to-port latency from 3.2 microseconds to 2 microseconds
- Centralized unified management with Cisco UCS Manager
- Efficient cooling and serviceability: Front-to-back cooling, redundant front-plug fans
and power supplies, and rear cabling

Currently there are two different Fabric Interconnects on the market:

- 6248UP (PID: UCS-FI-6248UP-UPG): supports up to 48 ports (standard 32 ports with 12 licenses)
o 1 expansion slot

- 6296UP (PID: UCS-FI-6296UP-UPG): supports up to 96 ports (standard 48 ports with 18 licenses)
o 3 expansion slots
o The expansion module (PID: UCS-FI-E16UP) has 16 Unified Ports with 8 port licenses.



Note: Cabling Considerations for Fabric Port Channels:

When you configure the links between the Cisco UCS 2200 Series IOM and a Cisco UCS
6200 series fabric interconnect in fabric port channel mode, the available VIF namespace
on the adapter varies depending on where the IOM uplinks are connected to the fabric
interconnect ports.

Inside the 6248 fabric interconnect there are six sets of eight contiguous ports, with each
set of ports managed by a single chip. When uplinks are connected such that all of the
uplinks from an IOM are connected to a set of ports managed by a single chip, Cisco UCS
Manager maximizes the number of VIFs used in service profiles deployed on the blades in
the chassis. If uplink connections from an IOM are distributed across ports managed by
separate chips, the VIF count is decreased.

FABRIC EXTENDERS

There are currently two UCS Fabric Extenders on the market for use in the 5108 chassis.

Model    NIF (Network Interfaces)    HIF (Host Interfaces)
2204     4                           16
2208     8                           32

The NIF (Network Interfaces) are the physical links from the Fabric Extender to the Fabric
Interconnects. The HIF (Host Interfaces) are the internal traces (links) to the servers.

Note: There are also remote Fabric Extenders available, called the Nexus 2000 Series Fabric Extenders. These devices extend the number of ports of the parent switch. The Nexus 2000 platform MUST be combined with the Fabric Interconnects or the Nexus 5000, 6000, 7000 or 9000 Series. The Nexus 2000 Series are 'dumb' devices: all traffic from their ports is switched by the upstream device; the Nexus Fabric Extenders don't switch frames locally.



VIRTUAL INTERFACE CARD

There are currently 3 Cisco Virtual Interface cards on the market.

- VIC 1240
- Port Expander (only in combination with the VIC 1240)
- VIC 1280

Note: The VIC 1240 is currently included in the SmartPlay 7 promotions.

The Cisco UCS Virtual Interface Card (VIC) 1240 is a 4-port 10 Gigabit Ethernet, Fibre Channel
over Ethernet (FCoE)-capable modular LAN on motherboard (mLOM) designed exclusively
for the M3 generation of Cisco UCS B-Series Blade Servers.

When used in combination with an optional Port Expander, the Cisco UCS VIC 1240
capabilities can be expanded to eight ports of 10 Gigabit Ethernet.

The Cisco UCS VIC 1240/1280 enables a policy-based, stateless, agile server infrastructure
that can present up to 256 PCIe standards-compliant interfaces to the host that can be
dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs).

In addition, the Cisco UCS VIC 1240/1280 supports Cisco Data Center Virtual Machine Fabric
Extender (VM-FEX) technology, which extends the Cisco UCS fabric interconnect ports to
virtual machines, simplifying server virtualization deployment.

2208XP SCENARIOS

Here you can find different scenarios showing how the HIFs connect to the server I/O card slots:

Scenario 1: 2208XP Fabric Extender with I/O Card VIC1240

The 2208XP FEX has 32 HIFs (Host Interfaces), so there are 4 traces available per FEX for every blade in the chassis. The 5108 chassis can hold up to 8 servers, so 32 HIFs divided by 8 = 4 traces per I/O slot. Every trace is a 10GBASE-KR link.

As you can see, only the 1240 has been installed in the blade, so only 2 traces/FEX are available. Still, we have 2 x 10 Gb per FEX, which results in 40 Gb per server.



Scenario 2: 2208XP Fabric Extender with I/O Card VIC1240 + I/O Card Port Expander

The 2208XP FEX has 32 HIFs (Host Interfaces), so there are 4 traces available per FEX for every blade in the chassis. The 5108 chassis can hold up to 8 servers, so 32 HIFs divided by 8 = 4 traces per I/O slot. Every trace is a 10GBASE-KR link.

As you can see, the 1240 AND the Port Expander have been installed in the blade, so 4 traces/FEX are available, which results in 4 x 10 Gb per FEX, or 80 Gb per server.

The Port Expander Card for VIC 1240 installed in the mezzanine slot acts as a pass-through
device to channel two ports to each of the Fabric Extenders.

HINT: Please bear in mind that when using the Port Expander, UCS Manager sees the VIC 1240 and the Port Expander as a single VIC. Even though two cards are installed, if the VIC 1240 fails, you'll lose connectivity to the blade. If you want redundancy, you'll have to select the 1280 (but the 1280 is more expensive).



Scenario 3: 2208XP Fabric Extender with I/O Card VIC1240 + VIC1280

The 2208XP FEX has 32 HIFs (Host Interfaces), so there are 4 traces available per FEX for every blade in the chassis. The 5108 chassis can hold up to 8 servers, so 32 HIFs divided by 8 = 4 traces per I/O slot. Every trace is a 10GBASE-KR link.

As you can see, the 1240 AND the 1280 have been installed in the blade, so 4 traces/FEX are available, which results in 4 x 10 Gb per FEX, or 80 Gb per server.



2204XP SCENARIOS

Scenario 1: 2204XP Fabric Extender with I/O Card VIC1240

The 2204XP FEX has 16 HIFs (Host Interfaces), so there are 2 traces available per FEX for every blade in the chassis. The 5108 chassis can hold up to 8 servers, so 16 HIFs divided by 8 = 2 traces per blade. Every trace is a 10GBASE-KR link.

As you can see, only the 1240 has been installed in the blade, so only 1 trace/FEX is available, which results in 1 x 10 Gb per FEX, or 20 Gb per server.

Scenario 2: 2204XP Fabric Extender with I/O Card VIC1240 + I/O Card Port Expander

The 2204XP FEX has 16 HIFs (Host Interfaces), so there are 2 traces available per FEX for every blade in the chassis. The 5108 chassis can hold up to 8 servers, so 16 HIFs divided by 8 = 2 traces per I/O slot. Every trace is a 10GBASE-KR link.

As you can see, the 1240 AND the Port Expander have been installed in the blade, so 2 traces/FEX are available, which results in 2 x 10 Gb per FEX, or 40 Gb per server.

One port from the VIC 1240 is channeled to 2204XP Fabric Extender A and one is channeled
to 2204XP Fabric Extender B. The Port Expander Card for VIC 1240 installed in the mezzanine
slot acts as a pass-through device to channel one port to each of the Fabric Extenders. The
result is 20 Gbps of bandwidth to each Fabric Extender.



HINT: Also bear in mind that when using the Port Expander, UCS Manager sees the VIC 1240 and the Port Expander as a single VIC. Even though two cards are installed, if the VIC 1240 fails, you'll lose connectivity to the blade. If you want redundancy, you'll have to select the 1280 (but the 1280 is more expensive).



Scenario 3: 2204XP Fabric Extender with I/O Card VIC1240 + VIC1280.

The 2204XP FEX has 16 HIFs (Host Interfaces), so there are 2 traces available per FEX for every blade in the chassis. The 5108 chassis can hold up to 8 servers, so 16 HIFs divided by 8 = 2 traces per I/O slot. Every trace is a 10GBASE-KR link.

As you can see, the 1240 AND the VIC 1280 have been installed in the blade, so 2 traces/FEX are available, which results in 2 x 10 Gb per FEX, or 40 Gb per server.



INSTALLATION

We will perform the hands-on exercises on a UCS Platform Emulator, which students can install on their own laptops. The advantage of the Emulator is that students don't need access to Comstor's Data Center and can use this configuration guide offline. The Emulator is a perfect platform to discover the many features of Cisco UCS Manager, as it has the same layout and almost the same functions as the 'real' UCS Manager.

Installation requirements:

Use the USB drive provided in the class or download the following files:
o UCS Platform Emulator (CCO-ID is required): https://communities.cisco.com/docs/DOC-37827
o VMware Player (VMware Workstation is also supported if you already have it on your laptop): https://my.vmware.com/web/vmware/free#desktop_end_user_computing/vmware_player/6_0

If you downloaded the .zip file, please extract the UCSPE folder to your desktop.

Open the folder and open the following file:

Now VMware Player (or your VMware Workstation) should open and you'll see Linux booting up. The unpacking, installation, and boot can take 10-15 minutes depending on your system's hardware. Please read through the following section during the installation.



INFRASTRUCTURE TASKS

CONFIGURING FABRIC INTERCONNECTS (HA)

This step isn’t available in the UCS Platform Emulator as you don’t have to configure the
Fabric Interconnects IP addresses in the emulator. Please read through these steps to
understand how the Fabric Interconnects provide High Availability.

HINT: Before setting up the Fabric Interconnects, make sure L1 on FI-A is connected to L1 on FI-B and L2 on FI-A is connected to L2 on FI-B. These GbE connections are needed to provide a heartbeat between the Fabric Interconnects and to synchronize the configuration.

We will set up a virtual IP address for UCS Manager, as the management plane of a Cisco UCS environment is active/passive. When you connect to the virtual IP address, you connect to the active Fabric Interconnect.

The data plane is active/active.

Connect to Fabric Interconnect A through console cable:

1. Enter the configuration method: CONSOLE


2. Enter the setup mode: SETUP
3. You have chosen to setup new FI. Continue: Y
4. Enforce strong password: N
5. Enter password for Admin “Password” and confirm your password
6. Is the Fabric Interconnect part of a cluster? YES
7. Enter the switch fabric: A or B
8. Enter system name: FI-A (please keep this simple)
9. Physical switch Mgmt0 IP address: X.X.X.X
10. Physical switch Mgmt0 net mask: X.X.X.X
11. Default gateway: X.X.X.X
12. Cluster IP address: X.X.X.X (here you will connect to for UCS Manager)
13. Configure DNS server: N
14. Configure default domain name: N
15. UCS Central: N
16. Apply and save configuration: YES



Connect to Fabric Interconnect B through console cable:

1. Enter the configuration method: CONSOLE


2. FI will be added to the cluster. Continue? Y
3. Physical switch Mgmt0 IP address: X.X.X.X
4. Apply and save configuration: YES

The Fabric Interconnects are now in cluster mode (high availability) and ready to be accessed
through the Unified Computing System Manager (UCSM).



SETTING UP UCS MANAGER PLATFORM EMULATOR

When the UCSM Platform Emulator has finished the boot process, please log in via the UCS UI IP address (in this case: 192.168.70.130).

Connect to the UCSM IP address and log in to the UCS Manager:

- Login: admin
- Password: admin



CONFIGURING PORT TYPES

The second-generation Fabric Interconnects (6248UP/6296UP) have Unified Ports, which means they support both the Ethernet and Fibre Channel protocols. Before configuring server ports, uplinks, etc., we need to determine which ports we are going to use for which protocol.

This is important, as the module needs to reboot every time we reconfigure a port type. If we reconfigure a port on an expansion module, only the expansion module will reboot.

Please take this into account, because you don't want to reboot a whole module in a production environment!

Change the port type:

1. Navigate in the Equipment tab to Fabric Interconnects


2. Select Fabric Interconnect A
3. In the Actions Menu select Configure Unified Ports:

4. You will get a warning that the module where you change ports will reboot afterwards; please click YES.
5. In the wizard, you can select which module you want and with the slider you can
configure Ethernet or Fibre Channel ports. Ethernet ports are blue and Fibre Channel
ports are purple.

Ethernet ports start from the beginning and Fibre Channel ports start counting backwards.



6. Select the first 2 Fibre Channel ports on the fixed module. (Remember, these ports count backwards.) The interfaces should look like this on both Fabric Interconnects:



SETTING UP UCSM

This configuration guide shows how to set up the initial steps for southbound (servers) and
northbound (LAN/SAN) connectivity.

CHASSIS POLICY AND DISCOVERY

CHASSIS POLICY

1. Navigate in the Equipment tab to Policies: Global Policies

2. Select 1 LINK and Port-Channel.

Discovery Policy:

The chassis discovery policy discovers only chassis connected with at least the number of links selected. Let's explain this with an example:

You have 3 chassis connected:


- The first chassis with 1 link per FEX to the chassis
- The second chassis with 2 links per FEX to the chassis
- The third chassis with 4 links per FEX to the chassis

When 1 link is selected, every chassis will be discovered.
When 2 links are selected, only the second and third chassis will be discovered.
When 4 links are selected, only the third chassis will be discovered.
When 8 links are selected, no chassis will be discovered.



Link Grouping reference:

- None: No links from the IOM to a FI are grouped in a port channel


- Port Channel: All links from an IOM to a FI are grouped in a port channel

HINT: Port-Channel is best practice, as all the servers use the bandwidth of the port-channel between the FEX and the FIs.

CHASSIS DISCOVERY

Note: This step is already done in the UCS Manager Platform Emulator, but please go through it, as it enables the server ports and the chassis discovery.

1. Navigate in the Equipment tab to Fabric Interconnects


2. Select Fabric Interconnect A
3. Right click on Ethernet port e1/1
4. Select configure as server port
5. Select YES to confirm
6. Right click on Ethernet port e1/2
7. Select configure as server port
8. Select YES to confirm
9. Right click on Ethernet port e1/3
10. Select configure as server port
11. Select YES to confirm
12. Right click on Ethernet port e1/4
13. Select configure as server port
14. Select YES to confirm

Do the same for Fabric Interconnect B and wait until the chassis/server discovery is completed. You can monitor the server discovery by selecting a server and looking at the status field in the General tab.
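For reference, the same server ports can also be configured from the UCS Manager CLI. The lines below are a minimal sketch, assuming slot 1, ports 1-2 on fabric A (repeat for the remaining ports and for fabric B); exact prompts can differ slightly between UCSM releases:

    UCS-A# scope eth-server
    UCS-A /eth-server # scope fabric a
    UCS-A /eth-server/fabric # create interface 1 1
    UCS-A /eth-server/fabric/interface* # exit
    UCS-A /eth-server/fabric # create interface 1 2
    UCS-A /eth-server/fabric/interface* # commit-buffer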



SETTING UP LAN CONNECTIVITY

CONFIGURING UPLINKS
We are going to configure northbound LAN connectivity. In a perfect scenario you would have 2 upstream Nexus 5500/7000 Series switches connected with virtual port-channels (vPCs), but other (vendor) switches also work perfectly.

In both topologies, we should create port-channels to the upstream switch. In an active port-channel (LACP) with two or more links, we have redundancy: if a link in the port-channel fails, the port-channel reconverges and stays up until no links are working anymore. An LACP port-channel also supports load-balancing.

As you can see, the ports on the Fabric Interconnect are automatically configured (and this is a fixed configuration) in LACP active mode, which means they will initiate a negotiation. So your upstream switch can run LACP in active or passive mode.



Navigate in the Equipment tab to Fabric Interconnects and select Fabric Interconnect A:

1. Right click on Ethernet port e1/17


2. Select configure as uplink port
3. Select YES to confirm
4. Right click on Ethernet port e1/18
5. Select configure as uplink port
6. Select YES to confirm

Do the same for Fabric Interconnect B.

CONFIGURING PORT CHANNEL

Now that we have selected the ports for our LAN uplinks, we have to create a port-channel.

HINT: Don’t forget to create port-channels on the upstream switch and put the LACP mode in
Active or Passive. The UCS port channels are statically set to LACP mode active and cannot be
modified.
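For reference, a minimal sketch of a matching configuration on an upstream Nexus switch is shown below. The port numbers (Ethernet 1/17-18) and the allowed VLANs (1, 64-66, created in the next section) are assumptions for this lab; adapt them to your own cabling and VLAN plan:

    feature lacp

    interface port-channel 1
      switchport mode trunk
      switchport trunk allowed vlan 1,64-66

    interface ethernet 1/17-18
      switchport mode trunk
      switchport trunk allowed vlan 1,64-66
      channel-group 1 mode active
      no shutdown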

1. Navigate to the LAN tab and navigate to the LAN Cloud.


2. Select Port Channels under Fabric A
3. Click on the + sign on the right, a popup wizard will open.



4. In the wizard, give the Port Channel an ID and name:

Fabric Interconnect A / B:
ID 1
Name Port-Channel1

5. In the Next screen, we have to select the ports we need to add to our Port Channel.
6. Select Ethernet port E1/17 and E1/18 as uplink ports for the Port Channel and
export them to the right column with the >> sign.
7. Click finish to close this wizard.

Do exactly the same for Fabric B with the same input values.

Note: The port-channel will not come up, as this is a simulator and there is no upstream switch.

In a real environment it can take a couple of seconds before the port-channel comes up, as it has to negotiate with the upstream switch.



CONFIGURING VLANS

Now that the port-channels and LAN uplinks are configured, we have to create the VLANs.

HINT: Make sure you have also created the VLANs on the upstream switch and that they are allowed on the port-channel.

1. Navigate to the LAN tab and navigate to the LAN Cloud.


2. Select VLANs (make sure you don’t select the VLANs under the Fabric Switch)
3. Add a VLAN with the + sign.

4. Create the following VLANs as Common/Global VLANs: (VLAN 1 is created by default and is the native VLAN)

VLAN ID NAME
64 MGMT
65 DATA_TRAFFIC
66 vMOTION

The differences between the VLAN policies:

Common/Global: The VLANs apply to both fabrics and use the same configuration
parameters in both cases
Fabric A: The VLANs only apply to fabric A.
Fabric B: The VLANs only apply to fabric B.
Both Fabrics Configured Differently: The VLANs apply to both fabrics but you can
specify different VLAN IDs for each fabric.
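For reference, the same Common/Global VLANs can be created from the UCS Manager CLI; the following is a minimal sketch (prompts abbreviated):

    UCS-A# scope eth-uplink
    UCS-A /eth-uplink # create vlan MGMT 64
    UCS-A /eth-uplink/vlan* # exit
    UCS-A /eth-uplink # create vlan DATA_TRAFFIC 65
    UCS-A /eth-uplink/vlan* # exit
    UCS-A /eth-uplink # create vlan vMOTION 66
    UCS-A /eth-uplink/vlan* # commit-buffer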



You should end up with the following VLAN configuration (VLAN 1 should be the native VLAN):

HINT: If you are working in large environments, for example at cloud or service providers, it's possible that you have a lot of VLANs. If you are not sure whether a VLAN is already in use, you can use the 'Check Overlap' function. This checks whether the VLAN is already configured.

This function also works for VSANs.



SETTING UP SAN CONNECTIVITY

INTRODUCTION

In this LAB guide, we will configure the environment with native Fibre Channel.
The upstream connection is a Fibre Channel connection to two MDS switches.

- Fabric Interconnect A -> MDS A -> Storage FC interface A


- Fabric Interconnect B -> MDS B -> Storage FC interface B

The Fabric Interconnect is in (default) FC end-host mode, which means it runs NPV (N-Port Virtualization). The NPV switch is connected to an NPIV-capable switch. NPIV allows a single physical N_Port to have multiple WWPNs, and therefore multiple N_Port_IDs, associated with it. This is necessary as the Fabric Interconnect doesn't act as a normal switch but as a "big server with several HBAs".



CONFIGURING VSANS

First we have to create the appropriate VSANs so we can communicate with the upstream
Fibre Channel switch.

HINT: Make sure the VSANs exist on the upstream switch.

1. Navigate to the SAN tab and navigate to the SAN Cloud.


2. Select VSANs (make sure you don’t select the VSANs under the Fabric Switch)
3. Add a VSAN with the + sign.

4. Create the following VSANs:

VSAN / FCoE ID    Fabric    Name
10                A         VSAN10
11                B         VSAN11

You should end up with the following VSAN configuration:
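For reference, the equivalent UCS Manager CLI for creating these VSANs is sketched below. It assumes the FCoE VLAN ID is set equal to the VSAN ID (any unused VLAN ID that does not overlap with an Ethernet VLAN will do):

    UCS-A# scope fc-uplink
    UCS-A /fc-uplink # scope fabric a
    UCS-A /fc-uplink/fabric # create vsan VSAN10 10 10
    UCS-A /fc-uplink/fabric/vsan* # exit
    UCS-A /fc-uplink/fabric # exit
    UCS-A /fc-uplink # scope fabric b
    UCS-A /fc-uplink/fabric # create vsan VSAN11 11 11
    UCS-A /fc-uplink/fabric/vsan* # commit-buffer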



CONFIGURE PORT CHANNEL

A Fibre Channel port channel allows you to group several physical Fibre Channel ports to
create one logical Fibre Channel link to provide fault-tolerance and high-speed connectivity.

1. Navigate to the SAN tab and navigate to the SAN Cloud


2. Select Fabric A
3. Select FC Port Channels under Fabric A
4. Click on the + sign on the right. A popup wizard will open.

5. In the wizard, give the Port Channel an ID and name:

Fabric Interconnect A:
ID 10
Name Port-Channel10

Fabric Interconnect B:
ID 11
Name Port-Channel11

6. In the Next screen, we have to select the ports we need to add to our Port Channel.
7. Select FC ports 1/47 and 1/48 as uplink ports for the Port Channel and export them to the right column with the >> sign. Click Finish to close this wizard.



Do exactly the same for Fabric B with the specified values.

Note: The port-channel will not come up, as this is a simulator and the port-channel doesn't have an upstream switch.

In a real environment it can take a couple of seconds before the port-channel comes up, as it has to negotiate with the upstream switch.



CONFIGURING SAN UPLINKS

Both Fibre Channel interfaces on the expansion module are automatically enabled as uplink ports, so we don't have to change anything. When you select a Fibre Channel port, you can see its overall status is 'failed':

Note: In the emulator the behavior is different; the status is UP.

The overall status is 'failed' because the upstream MDS is configured for a specific VSAN, so we have to map the FC interfaces to the appropriate VSAN.

Fabric Interconnect A:
1. Navigate to the SAN tab and navigate to the SAN Cloud
2. Select Fabric A.
3. Select Fibre Channel Port Channel 10 on Fabric Interconnect A
4. Under Properties on the right, select VSAN10 and apply these settings by clicking Save Changes.
5. You will see the port become green instead of red.



Fabric Interconnect B:

1. Select Fabric B.
2. Select Fibre Channel Port Channel 11 on Fabric Interconnect B
3. Under Properties on the right, select VSAN11 and apply these settings by clicking Save Changes.
4. You will see the port become green instead of red.

After you have mapped the port-channels to the specific VSANs, all the ports should be up (real-equipment screenshot):



HINT: Make sure you have also created the FC port-channel on the upstream switch. Below you can find a configuration example of how to configure an FC port-channel on an MDS switch:
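(The following is a representative sketch for MDS A; the FC interface numbers and VSAN membership are assumptions for this lab. NPIV must be enabled on the MDS because the Fabric Interconnect runs in NPV mode.)

    feature npiv

    vsan database
      vsan 10 name VSAN10

    interface port-channel 10
      switchport mode F
      channel mode active

    interface fc1/1, fc1/2
      switchport mode F
      channel-group 10 force
      no shutdown

    vsan database
      vsan 10 interface port-channel 10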



CONFIGURING SERVICE PROFILES

INTRODUCTION

A service profile is an extension of the virtual machine abstraction applied to physical


servers. The definition has been expanded to include elements of the environment that span
the entire data center, encapsulating the server identity (LAN and SAN addressing, I/O
configurations, firmware versions, boot order, network VLAN, physical port, and quality-of-
service [QoS] policies) in logical "service profiles" that can be dynamically created and
associated with any physical server in the system within minutes rather than hours or days.

The UCS architecture is designed to enable stateless computing:

- A compute node is just an execution engine for any application (CPU, memory, and disk – flash or hard drive). The servers themselves aren't doing anything until you assign them a personality (Service Profile).
- The servers can then easily be deployed, cloned, grown, shrunk, deactivated, archived, reactivated, etc.



CONFIGURING ORGANIZATIONS
In the Unified Computing System Manager it's possible to create Organizations. As a result, you can achieve logical isolation between organizations without providing a dedicated physical infrastructure for each organization.

If you create organizations in a multi-tenant environment, you can also set up one or more
of the following for each organization or for a sub-organization in the same hierarchy:

Resource pools
Policies
Service profiles
Service profile templates

Let’s create an organization for this LAB guide:

1. You can create an organization wherever you want with the + new button on the
top of the UCSM
2. Select Create Organization.

3. Give the organization the following details:

Organization
Name UCS_Training
Description Training UCS



CONFIGURING POOLS

Before we start to create service profiles, we have to define different pools so we can use
the addressing for MAC addresses, WWNN, WWPN, etc. from pools. The use of pools is
especially important when we are dealing with stateless computing.

SERVER IP MANAGEMENT POOL

1. On the LAN tab scroll down and select Pools.


2. Navigate into the Pools -> Root -> IP Pools -> IP Pool Ext-mgmt
3. Click on the Create block of IPv4 Addresses.
4. Use the following values for the creation of a block of IPv4 addresses:

Block of IPv4 Addresses
From: 10.8.64.2
Size: 16
Subnet Mask: 255.255.255.0
Default Gateway: 10.8.64.1
Primary DNS: 8.8.8.8
Secondary DNS: 0.0.0.0

5. Click on IP Addresses; here you can see the IPs are created and automatically assigned to certain blades.



UUID POOLS

UUID: a global ID that is unique to a given server, composed of a prefix and a suffix.

Hint for UUID:


- Use the root "default" pool as the global default pool for all Service Profiles
- Populate the default pool with a block of 512 IDs
- Don't change the original prefix; it is unique to this UCS domain

1. On the Servers tab, scroll down and select Pools.


2. Navigate into the Pools -> Root -> UUID Suffix Pools
3. Select Create a Block of UUID Suffixes and enter 512 as the size.

4. Click OK to create the UUID Suffixes block.



WWNN POOLS

Using pools lets you communicate WWNs to the SAN team ahead of deployment so they can pre-provision LUNs for boot-from-SAN and proactively perform zoning and LUN masking configuration.

An adapter uses one Node WWN (WWNN) and as many Port WWN (WWPN) as there are
vHBAs for that adapter.

Hints for Node Name pool:


- Create one large pool that's a multiple of 16 and contains fewer than 128 entries
- Create the pool at the root organization (you can use the default pool)
- Zoning and masking do not use the Node WWN
- Ensure node pools and port pools do not overlap

1. On the SAN tab, scroll down and select Pools.


2. Navigate into the Pools -> Root -> WWNN Pools
3. Click on the + sign on the right, give it the following name: WWNN, and click Next.
4. Click Add and fill in the following values:

WWNN
From: 20:00:00:25:B5:00:00:00
Size: 32



WWPN POOLS

WWPNs are used to assign addresses to the virtual host bus adapters (vHBAs) and are used for zoning and masking. Use this within sub-organizations!

1. On the SAN tab, scroll down and select Pools.


2. Navigate into the Pools -> Root -> Sub-Organizations -> UCS_Training -> WWPN
Pools
3. Click on the + sign on the right, give it the following name: SAN-A, and click Next.
4. Click Add and fill in the following values:

SAN-A
From: 20:00:00:25:B5:0A:00:00
Size: 32

5. Click OK and Finish to add the block.


6. Click on the + sign on the right, give it the following name: SAN-B, and click Next.
7. Click Add and fill in the following values:

SAN-B
From: 20:00:00:25:B5:0B:00:00
Size: 32

8. Click OK and Finish to add the block.
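For reference, a minimal sketch of the equivalent UCS Manager CLI for the SAN-A pool (the WWN range matches the values above; the org path /UCS_Training under root is an assumption):

    UCS-A# scope org /UCS_Training
    UCS-A /org # create wwn-pool SAN-A port-wwn-assignment
    UCS-A /org/wwn-pool* # create block 20:00:00:25:B5:0A:00:00 20:00:00:25:B5:0A:00:1F
    UCS-A /org/wwn-pool/block* # commit-buffer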



MAC ADDRESS POOLS
The purpose of this pool is assigning addresses to adapters. This is a mandatory step, as the M81KR, 1240, Port Expander, and 1280 adapters don't have BIAs (burnt-in addresses). The MAC addresses assigned here will later be assigned in the Service Profiles to the vNICs of the adapters.

When creating a block of MAC addresses, use size 64 as a best practice. I also recommend using the "00:25:B5" MAC prefix to ensure MAC uniqueness.

Suggested MAC pool structure (256 MACs per pool):

OUI         Extension ID
00:25:B5    Domain ID | OS Type | ##

We will create MAC address pools for Fabric A and Fabric B so it's easier to troubleshoot later if we encounter problems.

1. On the LAN tab, scroll down and select Pools.


2. Navigate into the Pools -> Root -> Sub-Organization -> UCS_Training -> MAC Pools

3. Click on the + sign on the right and give it the following name: MAC-MGMT-A
4. Click Next and select Add.
5. Use the following first MAC: 00:25:B5:0A:00:00 and size: 128.
6. Click finish to create the MAC Addresses block.

Do the same for MAC-MGMT-B:

7. Click on the + sign on the right and give it the following name: MAC-MGMT-B
8. Click Next and select Add.
9. Use the following first MAC: 00:25:B5:0B:00:00 and size: 128.



10. Click finish to create the MAC Addresses block.

11. Click on the + sign on the right and give it the following name: MAC-vMOTION-A
12. Click Next and select Add.
13. Use the following first MAC: 00:25:B5:1A:00:00 and size: 128.
14. Click finish to create the MAC Addresses block.

15. Click on the + sign on the right and give it the following name: MAC-vMOTION-B
16. Click Next and select Add.
17. Use the following first MAC: 00:25:B5:1B:00:00 and size: 128.
18. Click finish to create the MAC Addresses block.

19. Click on the + sign on the right and give it the following name: MAC-VMTRAFFIC-A
20. Click Next and select Add.
21. Use the following first MAC: 00:25:B5:2A:00:00 and size: 128.
22. Click finish to create the MAC Addresses block.

23. Click on the + sign on the right and give it the following name: MAC-VMTRAFFIC-B
24. Click Next and select Add.
25. Use the following first MAC: 00:25:B5:2B:00:00 and size: 128.
26. Click finish to create the MAC Addresses block.
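For reference, a minimal sketch of the equivalent UCS Manager CLI for one of these pools (MAC-MGMT-A). The block of 128 addresses matches the size used above, and the org path /UCS_Training under root is an assumption:

    UCS-A# scope org /UCS_Training
    UCS-A /org # create mac-pool MAC-MGMT-A
    UCS-A /org/mac-pool* # create block 00:25:B5:0A:00:00 00:25:B5:0A:00:7F
    UCS-A /org/mac-pool/block* # commit-buffer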



CONFIGURE NETWORK CONTROL POLICIES

We will configure this policy as we prefer to have CDP enabled.

1. On the LAN tab, scroll down and select Policies.


2. Navigate into Root -> Sub-Organizations -> UCS_Training -> Network Control Policy
and click Add.
3. Give the Policy a name and enable CDP:

4. Click OK to create the policy.

CONFIGURING VNIC TEMPLATES

This policy defines how a vNIC on a server connects to the LAN. This policy is also referred to
as a vNIC LAN connectivity policy. A best practice for ESXi is configuring 8 vNICs but as this is a
demo, let’s create 6 of them.

1. On the LAN tab, scroll down and select Policies.


2. Navigate into Root -> Sub-Organizations -> UCS_Training -> vNIC Templates and
click Add
3. Use the following values to create two vNIC templates:



DATA_TRAFFIC-A
vNIC-A
Name DATA_TRAFFIC-A
Description: -
Fabric ID: Fabric A
Target: Adapter
Failover: No Failover
Template type: Initial
VLAN: Select DATA_TRAFFIC (default as native)
MTU: 1500 (can be set to 9000 if using e.g. iSCSI)
MAC Pool: MAC-VMTRAFFIC-A
Network Control Policy: CDP

DATA_TRAFFIC-B
vNIC-B
Name DATA_TRAFFIC-B
Description: -
Fabric ID: Fabric B
Target: Adapter
Failover: No Failover
Template type: Initial
VLAN: Select DATA_TRAFFIC (default as native)
MTU: 1500 (can be set to 9000 if using e.g. NFS)
MAC Pool: MAC-VMTRAFFIC-B
Network Control Policy: CDP

MGMT-A
vNIC-A
Name MGMT-A
Description: -
Fabric ID: Fabric A
Target: Adapter
Failover: No Failover
Template type: Initial
VLAN: Select MGMT (default as native)
MTU: 1500 (can be set to 9000 if using e.g. NFS)
MAC Pool: MAC-MGMT-A
Network Control Policy: CDP

MGMT-B
vNIC-B
Name MGMT-B



Description: -
Fabric ID: Fabric B
Target: Adapter
Failover: No Failover
Template type: Initial
VLAN: Select MGMT (default as native)
MTU: 1500 (can be set to 9000 if using e.g. NFS)
MAC Pool: MAC-MGMT-B
Network Control Policy: CDP

vMOTION-A
vNIC-A
Name vMOTION-A
Description: -
Fabric ID: Fabric A
Target: Adapter
Failover: No Failover
Template type: Initial
VLAN: Select vMOTION (default as native)
MTU: 1500 (can be set to 9000 if using e.g. NFS)
MAC Pool: MAC-vMOTION-A
Network Control Policy: CDP

vMOTION-B
vNIC-B
Name vMOTION-B
Description: -
Fabric ID: Fabric B
Target: Adapter
Failover: No Failover
Template type: Initial
VLAN: Select vMOTION (default as native)
MTU: 1500 (can be set to 9000 if using e.g. NFS)
MAC Pool: MAC-vMOTION-B
Network Control Policy: CDP



CONFIGURING VHBA TEMPLATES

This template is a policy that defines how a vHBA on a server connects to the SAN. It is also referred to as a vHBA SAN connectivity template. A best practice for most environments is configuring 2 vHBAs, one to SAN-A and one to SAN-B.

1. On the SAN tab, scroll down and select Policies.


2. Navigate into Root -> Sub-Organizations -> UCS_Training -> vHBA Templates and
click Add
3. Use the following values to create two vHBA templates:

vHBA-A
vHBA-A
Name vHBA-A
Description: -
Fabric ID: Fabric A
VSAN: VSAN10
Template Type: Initial
Max Data Field Size 2048
WWPN Pool SAN-A

vHBA-B
vHBA-B
Name vHBA-B
Description: -
Fabric ID: Fabric B
VSAN: VSAN11
Template Type: Initial
Max Data Field Size 2048
WWPN Pool SAN-B



CONFIGURING BOOT POLICY (BOOT FROM SAN)

You can configure a boot policy to boot one or more servers from an operating system image on the SAN. A boot policy can consist of SD cards, internal HDDs, boot from SAN, and so on.

The boot from SAN policy can include a primary and a secondary SAN boot. If the primary
boot fails, the server attempts to boot from the secondary.

Cisco recommends that you use a SAN boot, because it offers the most service profile
mobility within the system. If you boot from the SAN when you move a service profile from
one server to another, the new server boots from the exact same operating system image.
Therefore, the new server appears to be the exact same server to the network.

1. On the Servers tab, scroll down and select Policies.


2. Navigate into Root -> Sub-Organizations -> UCS_Training -> Boot Policies
3. Click Add and use the following values:

Create boot policy:


Name BFS
Description: Boot_From_SAN
Reboot on order change: No
Enforce vNIC/vHBA/iSCSI name YES

4. Select the Local Devices dropdown menu


5. Add CD-ROM:
6. Select the vHBA dropdown menu and Add SAN boot:
Add SAN boot
vHBA: vHBA-A
Type: Primary

7. Select Add SAN Boot Target:


Add SAN boot target:
Boot Target Lun: 0
Boot Target WWPN: 50:0a:09:82:88:cd:39:b7
Type: Primary

8. Select the vHBA dropdown menu and Add SAN boot:


Add SAN boot
vHBA: vHBA-B
Type: Secondary

9. Select Add SAN Boot Target: (Add SAN Boot target to SAN secondary)
Add SAN boot target:
Boot Target Lun: 0
Boot Target WWPN: 50:0a:09:81:88:cd:39:b7



Type: Secondary
10. Click OK to create the Boot Policy.

Verify you have the same as below:



CONFIGURING SERVICE PROFILE TEMPLATE

Finally, we can create the service profile template. We are going to configure a template, so
we can create different service profiles from that template.

There are two different types of templates:

- Initial Template: The initial template is used to create new service profiles with unique IDs, but after a server is deployed there is no linkage between the server and the template, so changes to the template will not propagate to the server, and all changes to items defined by the template must be made individually on each server deployed with the initial template.

- Updating Template: An updating template maintains a link between the template


and the deployed servers, and changes to the template cascade to the servers
deployed with that template on a schedule determined by the administrator.
o Use this with caution because when you configure something wrong here,
it will be pushed to the service profiles linked to this template!

1. Navigate to the Server tab


2. Right-click your UCS_Training organization
3. Select Create Service Template.

IDENTIFY SERVICE PROFILE TEMPLATE

Identify Service Profile Template:


Name ESXI_TEMPLATE
Type: Updating
UUID Select the default pool



NETWORKING
How would you like to configure LAN connectivity: Expert
Click Add



Networking:
Name MGMT-A
Use vNIC Template: YES
MAC Address Assignment MAC-A
vNIC Template MGMT-A
Adapter Policy: VMware

Add another vNIC:

Networking:
Name MGMT-B
Use vNIC Template: YES
MAC Address Assignment MAC-B
vNIC Template MGMT-B
Adapter Policy: VMware

Add another vNIC:

Networking:
Name DATA_TRAFFIC-A
Use vNIC Template: YES
MAC Address Assignment MAC-A
vNIC Template DATA_TRAFFIC-A
Adapter Policy: VMware

Add another vNIC:

Networking:
Name DATA_TRAFFIC-B
Use vNIC Template: YES
MAC Address Assignment MAC-B
vNIC Template DATA_TRAFFIC-B
Adapter Policy: VMware

Add another vNIC:

Networking:
Name vMOTION-A
Use vNIC Template: YES
MAC Address Assignment MAC-A
vNIC Template vMOTION-A
Adapter Policy: VMware



Add another vNIC:

Networking:
Name vMOTION-B
Use vNIC Template: YES
MAC Address Assignment MAC-B
vNIC Template vMOTION-B
Adapter Policy: VMware



STORAGE

How would you like to configure SAN connectivity: Expert


Click +Add

For the World Wide Node Name assignment, select the WWNN pool we've created (WWNN).



Click + Add so we can create the vHBAs.

Create the first vHBA with the following values:

Create vHBA-A
Name vHBA-A
Use vHBA Template: YES
vHBA Template vHBA-A
Adapter Policy: VMware

Create the second vHBA:


Create vHBA-B
Name vHBA-B
Use vHBA Template: YES
vHBA Template vHBA-B
Adapter Policy: VMware

Click OK and Next.



We can skip the Zoning part, as the Fabric Interconnects are in end-host mode and zoning doesn't apply here. We can also skip the vNIC/vHBA placement, as the placement is performed by the system.

SERVER BOOT ORDER

Select the Server Boot Order policy:

Server Boot Order


Boot Policy BFS (the boot-from-SAN policy created earlier)

Click Finish to create the service profile template.



CREATE SERVICE PROFILE FROM TEMPLATE

1. Navigate to your sub-organization


2. Select your template
3. In the General tab, select Create Service Profiles From Template.

4. Use the following values:

Create SP From Template


Naming Prefix: ESXI_
Name Suffix Starting number: 1
Number: 3

5. Click OK to create the Service Profile from the template.

You can see the Service Profiles are created under your organization:



ASSIGNING SERVICE PROFILES TO A BLADE

1. Go to the Equipment tab


2. Select the server you want to assign the service profile.
3. You’ll see the overall status is ‘unassociated’. That means the Service Profiles are
not associated with the servers so they aren’t active. Remember, without Service
Profiles, blades won’t perform anything.

4. Under Actions, select Associate Service Profile and choose the Service Profile you’ve
created.

5. Click OK to continue; a popup message will warn that this action requires an immediate reboot of the UCS server.
6. Click YES to continue.

In a real environment you can watch the step sequence of assigning the service profile to the blade in the FSM tab:



INSTALLING ESXI ON UCS

KVM

Note: As we aren’t working on physical devices, the ESXI installation is not supported in this
lab guide.

To install VMware ESXi on the blade server, we need to open a KVM session to the server.

1. Go to your server with the associated service profile


2. Select KVM Console (the Java Web launcher will open.)



3. Select Virtual Media and select Add Image…

4. Browse to the ESXi file: ESXi_5.1.0-799733.x86_64 and select Open.


5. Make sure to map the drive and reset the server.

6. Select OK to reset the server and select Power Cycle (as we don't have any OS installed on the server).



ESXI INSTALLATION

1. Go back to the KVM screen and wait until the server boots. The ESXi installer will start automatically, as we set up the CD-ROM drive as the first boot device (or press Enter if you don't want to wait).

2. Next, press Enter to continue:



3. Press F11 to accept the EULA:

4. Now the ESXi installer scans for available devices, and it should see the boot-from-SAN LUN we've configured in the Service Profile. Select the NetApp LUN by pressing Enter.



5. Next we have to select the keyboard layout. Select your choice and continue by pressing Enter.
6. The next step is entering a root password, which is recommended by VMware (and of course in production), but as this is a demo, we can leave it blank.
7. Next, we have to confirm the installation of ESXi on the NetApp disk. Press F11 to continue with the installation and wait until the installation has finished.

8. After the installation is complete, press Enter to reboot the server. There is no need to remove (unmap) the installation virtual disk, as UCSM automatically does this when the server reboots.



ESXI NETWORKING

After ESXi has been installed, we have to set up the ESXi networking. By default, ESXi will look for a DHCP server, but DHCP is not enabled in this environment, so we need to assign a static IP address.

1. Press F2 to customize the system and press Enter twice, as we didn't set up a password for the ESXi login.
2. Select Configure Management Network:



3. Select Network Adapters
4. Select the NIC’s by pressing spacebar we have defined for the mangement parts
(you can find them by the MAC address we have defined)

5. Go to the VLAN section and provide VLAN 64 (management VLAN). As the


management VLAN is not the native VLAN, we will need to tag the frames.

6. Next, we have to set the management IP for the ESXi host.


7. Go to IP configuration.
8. With the spacebar, select Set static IP address and network configuration, and fill in the IP details.
9. Press Escape and restart the management network by selecting Y.
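Alternatively, the same management network settings can be applied from the ESXi shell (SSH or the local console). The lines below are a sketch only: the IP address is an assumption, and vmk0 and "Management Network" are the default names ESXi uses for the management VMkernel interface and port group:

    esxcli network vswitch standard portgroup set --portgroup-name="Management Network" --vlan-id=64
    esxcli network ip interface ipv4 set --interface-name=vmk0 --type=static --ipv4=10.8.64.50 --netmask=255.255.255.0
    esxcfg-route 10.8.64.1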



You can verify the management network settings by selecting Test Management Network and testing that you can ping the default gateway.

Now we have set up the ESXi management network, and we can start using the VMware vSphere Client to configure vSwitch uplinks and assign VMs to them.



THANK YOU!

