
VMware Integration

BRKDCT-2868
Virtualization

[Figure: three virtualization models side by side. VMware: apps on guest OSes running on a hypervisor on the physical CPU. Microsoft: apps on guest OSes running on a hypervisor layered on a host OS. XEN, a.k.a. paravirtualization: apps on modified guest OSes running on a modified, stripped-down OS with hypervisor.]


Migration

• VMotion, a.k.a. VM Migration, allows a VM to be relocated to different hardware without having to interrupt service.
• Downtime is on the order of a few milliseconds to a few minutes, not hours or days.
• Can be used to perform maintenance on a server.
• Can be used to shift workloads more efficiently.
• 2 types of Migration:
  VMotion Migration
  Regular Migration
[Figure: a VM moving between two ESX hosts, each with a Console OS, the VMware Virtualization Layer/Hypervisor, and its own CPUs.]

VMware Architecture in a Nutshell
[Figure: ESX Server host in a nutshell: virtual machines (app + OS) and the Console OS run on the VM Virtualization Layer, which runs on the physical hardware (CPUs, NICs); the host attaches to a Mgmt network, a VM Kernel network, and a Production network.]


VMware HA Clustering

[Figure: a VMware HA cluster of three ESX hosts (ESX Host 1–3) running App1–App5; after a host failure, HA restarts the failed host's VMs (App1, App2) on the surviving hosts.]


Application-level HA clustering
(Provided by MSCS, Veritas etc…)

[Figure: application-level HA clustering (MSCS, Veritas, etc.) runs inside the guest OSes: App1 and App2 are clustered across VMs on different ESX hosts, independently of the hypervisor layer.]


Agenda

• VMware LAN Networking
  vSwitch Basics
  NIC Teaming
  vSwitch vs LAN Switch
• Cisco/VMware DC Designs
• SAN Designs

VMware Networking Components
Per ESX-server configuration:
• VMs connect through their vNICs to virtual ports on a vSwitch.
• VMNICs (the physical NICs, e.g. vmnic0 and vmnic1) are the vSwitch uplinks.
[Figure: VM_LUN_0005 and VM_LUN_0007 attach via vNICs to virtual ports on vSwitch0, which uplinks through vmnic0 and vmnic1.]
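A minimal service-console sketch of building the same thing from the command line (vSwitch0/vmnic0/vmnic1 match the figure; the "VM_Production" port-group label is an illustrative assumption, and the VI Client achieves the same result graphically):

  # Show the current vSwitch configuration
  esxcfg-vswitch -l
  # Create a vSwitch and attach two physical uplinks (vmnics) to it
  esxcfg-vswitch -a vSwitch0
  esxcfg-vswitch -L vmnic0 vSwitch0
  esxcfg-vswitch -L vmnic1 vSwitch0
  # Add a port group; its name is the Network Label the VMs' vNICs point at
  esxcfg-vswitch -A VM_Production vSwitch0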

vNIC MAC Address

• The VM's MAC address is automatically generated.
• Mechanisms exist to avoid MAC collisions.
• The VM's MAC address doesn't change with migration.
• The VM's MAC address can be made static by modifying the configuration file: ethernetN.address = 00:50:56:XX:YY:ZZ

Example from /vmfs/volumes/46b9d79a-2de6e23e-929d-001b78bb5a2c/VM_LUN_0005/VM_LUN_0005.vmx:

Auto-generated:
ethernet0.addressType = "vpx"
ethernet0.generatedAddress = "00:50:56:b0:5f:24"

Static:
ethernet0.addressType = "static"
ethernet0.address = "00:50:56:00:00:06"

vSwitch Forwarding Characteristics

• Forwarding is based on MAC address (no learning): if traffic doesn't match a VM MAC, it is sent out to the vmnic.
• VM-to-VM traffic stays local.
• vSwitches tag traffic with the 802.1Q VLAN ID.
• vSwitches are 802.1Q capable.
• vSwitches can create EtherChannels.


vSwitch Creation

• You don't have to select a NIC (uplink) when creating a vSwitch.
• The Network Label is just a name.
• A VM's vNIC selects its Port-Group by specifying that NETWORK LABEL.
[Screenshots: vSwitch creation and VM network selection in the VI Client.]

VM ↔ Port-Group ↔ vSwitch


VLANs - External Switch Tagging (EST)

• VLAN tagging and stripping is done by the physical switch.
• No ESX configuration is required, as the server is not tagging.
• The number of VLANs supported is limited to the number of physical NICs in the server.
[Figure: VM1, VM2, the Service Console, and the VMkernel behind vSwitch A and vSwitch B; each physical NIC connects to an untagged switch port (VLAN 100, VLAN 200).]

VLANs - Virtual Switch Tagging (VST)

• The vSwitch tags outgoing frames with the VLAN ID.
• The vSwitch strips any dot1Q tag before delivering the frame to the VM.
• The physical NICs and the switch ports operate as an 802.1Q trunk.
• The number of VLANs per guest is limited to the number of vNICs.
• No VTP or DTP; everything is static configuration. Prune VLANs on the trunk so the ESX host doesn't process unnecessary broadcasts.
[Figure: VM1, VM2, Service Console, and VMkernel on vSwitch A; the physical NICs carry dot1Q trunks (VLAN 100, VLAN 200) to the physical switches.]

VLANs - Virtual Guest Tagging (VGT)

• The Port-Group VLAN ID is set to 4095.
• Tagging and stripping of dot1Q VLAN IDs happens in the guest VM; this requires an 802.1Q driver in the guest.
• The guest can send/receive any tagged VLAN frame.
• The number of VLANs per guest is not limited to the number of vNICs.
• VMware does not ship the driver:
  Windows: E1000
  Linux: dot1q module
[Figure: dot1Q tags pass through the vSwitch between the guest and the physical trunks (VLAN 100, VLAN 200).]
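A hedged sketch of the port-group VLAN setting behind VST and VGT, from the ESX service console (the port-group names are illustrative assumptions; the VI Client exposes the same VLAN ID field):

  # VST: the vSwitch tags/strips VLAN 100 for this port group
  esxcfg-vswitch -p VM_Production -v 100 vSwitch0
  # VGT: VLAN ID 4095 passes dot1Q tags through to the guest's own 802.1Q driver
  esxcfg-vswitch -p VGT_AllVLANs -v 4095 vSwitch0
  # Verify the VLAN column in the listing
  esxcfg-vswitch -l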
Agenda

• VMware LAN Networking
  vSwitch Basics
  NIC Teaming
  vSwitch vs LAN Switch
• Cisco/VMware DC Designs
• SAN Designs


Meaning of NIC Teaming in VMware (1)

• In VMware, NIC teaming means teaming the ESX server's NIC cards (vmnic0–vmnic3), i.e. the vSwitch uplinks.
• Giving a VM multiple vNICs is NOT NIC teaming.
[Figure: vmnic0–vmnic3 grouped as vSwitch uplinks (NIC teaming) contrasted with a VM that simply has several vNICs.]

Meaning of NIC Teaming in VMware (2)
• Teaming is configured at the vmnic (uplink) level.
• Configuration at the vNIC level is NOT teaming.
[Screenshot: NIC Teaming tab of the vSwitch properties in the VI Client.]


Design Example
2 NICs, VLAN 1 and 2, Active/Standby

[Figure: vSwitch0 with two uplinks (vmnic0, vmnic1), each an 802.1Q trunk carrying VLANs 1 and 2; Port-Group 1 (VLAN 2) holds VM1 and VM2, Port-Group 2 (VLAN 1) holds the Service Console.]

Active/Standby per-Port-Group

[Figure: vSwitch0 uplinked to CBS-left via VMNIC0 and to CBS-right via VMNIC1; Port-Group1 (VM5 .5, VM7 .7) and Port-Group2 (VM4 .4, VM6 .6) each have their own active/standby uplink selection.]

Port-Group Overrides vSwitch Global Configuration

Active/Active

[Figure: both ESX server NICs (vmnic0, vmnic1) are active uplinks of one vSwitch; VM1–VM5 in a single Port-Group are spread across the two uplinks.]


Active/Active
IP-Based Load Balancing

• Works with Channel-Group mode ON (static EtherChannel).
• LACP is not supported; an LACP port-channel leaves the switch ports suspended (see below):

9w0d: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/14, changed state to up
9w0d: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/13, changed state to up
9w0d: %EC-5-L3DONTBNDL2: Gi1/0/14 suspended: LACP currently not enabled on the remote port.
9w0d: %EC-5-L3DONTBNDL2: Gi1/0/13 suspended: LACP currently not enabled on the remote port.

[Figure: vmnic0 and vmnic1 bundled to the upstream switch; VM1–VM4 in one Port-Group on the vSwitch.]
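A hedged Cisco IOS sketch of the matching switch side: a static EtherChannel (mode on, no LACP/PAgP), paired with the vSwitch load-balancing policy "Route based on IP hash"; interface and VLAN numbers are illustrative assumptions:

! Static EtherChannel toward the ESX host's vmnic0/vmnic1
interface range GigabitEthernet1/0/13 - 14
 description ESX host IP-hash NIC team
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 1,2
 switchport mode trunk
 switchport nonegotiate
 spanning-tree portfast trunk
 channel-group 1 mode on
!
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk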

Agenda

• VMware LAN Networking
  vSwitch Basics
  NIC Teaming
  vSwitch vs LAN Switch
• Cisco/VMware DC Designs
• SAN Designs


All Links Active, No Spanning-Tree: Is There a Loop?

[Figure: vSwitch1 with four active uplinks (NIC1–NIC4), two to CBS-left and two to CBS-right; Port-Group1 (VM5 .5, VM7 .7) and Port-Group2 (VM4 .4, VM6 .6). The vSwitch runs no Spanning Tree, yet all uplinks forward.]
Broadcast/Multicast/Unknown Unicast
Forwarding in Active/Active (1)

[Figure: vSwitch0 with two active 802.1Q uplinks (vmnic0, vmnic1) carrying VLANs 1 and 2; VM1 and VM2 in Port-Group 1 (VLAN 2). Broadcast/multicast/unknown-unicast traffic from a VM is sent out only through that VM's assigned uplink.]


Broadcast/Multicast/Unknown Unicast
Forwarding in Active/Active (2)

[Figure: the same active/active vSwitch seen from the physical network: broadcasts arriving on one 802.1Q uplink (NIC1) are delivered to the local VMs (VM1–VM3) but are not forwarded back out the other uplink (NIC2).]

Can the vSwitch Pass Traffic Through?

E.g. HSRP?

[Figure: two upstream devices exchanging HSRP hellos toward NIC1 and NIC2 of the same vSwitch; the vSwitch never bridges frames between its uplinks, so it cannot be used as a pass-through/transit path.]


Is This Design Possible?

[Figure: ESX server1's vSwitch dual-homed with 802.1Q trunks, link 1 to Catalyst1 and link 2 to Catalyst2; VM5 (.5) and VM7 (.7) attached.]
vSwitch Security

• Promiscuous Mode (Reject) prevents a port from capturing traffic whose destination address is not the VM's own address.
• MAC Address Changes (Reject) prevents the VM from modifying its vNIC address.
• Forged Transmits (Reject) prevents the VM from sending out traffic with a different source MAC (e.g. NLB).


vSwitch vs LAN Switch


• Similarly to a LAN switch:
  Forwarding based on MAC address
  VM-to-VM traffic stays local
  vSwitches tag traffic with the 802.1Q VLAN ID
  vSwitches are 802.1Q capable
  vSwitches can create EtherChannels
  Preemption configuration (similar to Flexlinks, but no delay preemption)

• Differently from a LAN switch:
  No learning
  No Spanning Tree Protocol
  No dynamic trunk negotiation (DTP)
  No 802.3ad LACP
  Two EtherChannels backing up each other is not possible
  No SPAN/mirroring capabilities: traffic capturing is not the equivalent of SPAN
  Port Security limited

Agenda

• VMware LAN Networking
  vSwitch Basics
  NIC Teaming
  vSwitch vs LAN Switch
• Cisco/VMware DC Designs
• SAN Designs


vSwitch and NIC Teaming Best Practices


• Q: Should I use multiple vSwitches or multiple Port-Groups to isolate traffic?
  A: We didn't see any advantage in using multiple vSwitches; multiple Port-Groups with different VLANs give you enough flexibility to isolate servers.
• Q: Should I use EST or VST?
  A: Always use VST, i.e. assign the VLAN from the vSwitch.
• Q: Can I use the native VLAN for VMs?
  A: Yes you can, but to keep it simple, don't. If you do, do not tag VMs with the native VLAN.
• Q: Which NIC Teaming configuration should I use?
  A: Active/Active, Virtual Port-ID based.
• Q: Do I have to attach all NICs in the team to the same switch or to different switches?
  A: With Active/Active Virtual Port-ID based, it doesn't matter.
• Q: Should I use Beaconing?
  A: No.
• Q: Should I use Rolling Failover (i.e. no preemption)?
  A: No, the default is good; just enable trunkfast on the Cisco switch.

Cisco Switchport Configuration
• Make it a trunk.
• Enable trunkfast.
• Can the native VLAN be used for VMs? Yes, but if you do, you have 2 options:
  Configure VLAN ID = 0 for the VMs that are going to use the native VLAN (preferred)
  Configure "vlan dot1q tag native" on the 6k (not recommended)
• Do not enable Port Security (see next slide).
• Make sure that "teamed" NICs are in the same Layer 2 domain.
• Provide a redundant Layer 2 path (typically for SC, VMKernel, and VM Production).

interface GigabitEthernetX/X
 description <<** VM Port **>>
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk native vlan <id>
 switchport trunk allowed vlan xx,yy-zz
 switchport mode trunk
 switchport nonegotiate
 no cdp enable
 spanning-tree portfast trunk
!



Configuration with 2 NICs
SC, VMKernel, and Production share the NICs

• Both uplinks (VMNIC1, VMNIC2) are 802.1Q trunks carrying the Production VLANs, the Service Console, and the VMkernel.
• vSwitch0 global NIC teaming: Active/Active (VST).
• Port-Group overrides: Service Console Active/Standby vmnic1/vmnic2; VMkernel Active/Standby vmnic2/vmnic1.
[Figure: ESX server with Port-Groups 1–3 (VMs, Service Console, VMkernel) on vSwitch0; HBA1/HBA2 provide storage access.]

Configuration with 2 NICs
Dedicated NIC for SC and VMKernel, separate NIC for Production

• Both uplinks (VMNIC1, VMNIC2) are 802.1Q trunks carrying the Production VLANs, the Service Console, and the VMkernel.
• vSwitch0 global NIC teaming: Active/Standby vmnic1/vmnic2 (VST), used by the Production Port-Group.
• Port-Group overrides: Service Console and VMkernel Active/Standby vmnic2/vmnic1.
[Figure: ESX server with Port-Groups 1–3 (VMs, Service Console, VMkernel) on vSwitch0; HBA1/HBA2 provide storage access.]

Network Attachment (1)


[Figure: root and secondary root aggregation switches running Rapid PVST+; access switches Catalyst1 and Catalyst2 attached with no blocked port and no loop. ESX server1 and ESX server2 each dual-home VMNIC1/VMNIC2 with 802.1Q trunks (Production, SC, VMKernel) to both Catalysts, with trunkfast and BPDU guard on the server-facing ports. All NICs are used and traffic is distributed on all links.]

Network Attachment (2)
[Figure: the same dual-homed ESX attachment over a typical Spanning-Tree V-shape topology: Catalyst1 and Catalyst2 uplink to the root and secondary root (Rapid PVST+); 802.1Q trunks (Production, SC, VMKernel) with trunkfast and BPDU guard on the server-facing ports. All NICs are used and traffic is distributed on all links.]

Configuration with 4 NICs


Dedicated NICs for SC and VMKernel

• Dedicated NIC for the Service Console, dedicated NIC for the VMkernel, and a redundant Active/Active NIC pair for the Production VLANs.
• This isolates management access and isolates the VMkernel.
• How good is this design? A failure of the single SC or VMkernel NIC completely isolates the host:
  VirtualCenter cannot control the ESX host; management access is lost.
  If the host is part of an HA cluster, its VMs are powered down.
  If iSCSI is used, iSCSI access is lost; this is the worst possible failure and very complicated to recover from.
  VMotion can't run, and if the host is part of a DRS cluster, automatic migration is prevented.
[Figure: vSwitch with Port-Group 1 (VMs), Service Console, and VMkernel; VMNIC1–VMNIC4 uplinks; HBA1/HBA2 for storage.]
Configuration with 4 NICs

Redundant SC and VMKernel connectivity

• Production VLANs: Active/Active on vmnic1/vmnic3, so all links are used.
• Service Console: Active/Standby vmnic2/vmnic4; VMkernel: Active/Standby vmnic4/vmnic2 ("dedicated" NICs for SC and VMkernel in normal operation).
• HA is augmented by teaming across different NIC chipsets: Production and management traffic are spread over chipset 1 and chipset 2.
• On a NIC or chipset failure, SC/VMkernel swap to the surviving NIC (e.g. to vmnic4), Production traffic continues on the remaining uplink, and VirtualCenter can still control the host.
[Figure: ESX server with Port-Group 1 (VMs), Service Console, and VMkernel on one vSwitch; HBA1/HBA2 for storage.]

Network Attachment (1)


[Figure: root and secondary root aggregation switches (Rapid PVST+); Catalyst1 and Catalyst2 attached with no blocked port and no loop. Each ESX server attaches four NICs, split across both Catalysts: 802.1Q trunks for Production on one pair and 802.1Q trunks for SC and VMkernel on the other; trunkfast and BPDU guard on the server-facing ports.]
Network Attachment (2)
[Figure: the same four-NIC attachment over a typical Spanning-Tree V-shape topology: Catalyst1 and Catalyst2 uplink to the root and secondary root (Rapid PVST+); each ESX server splits its Production trunks and its SC/VMkernel trunks across both Catalysts, with trunkfast and BPDU guard on the server-facing ports.]

How About?
[Figure: another four-NIC attachment option over the same V-shape topology (root/secondary root, Catalyst1/Catalyst2), again with separate 802.1Q trunks for Production and for SC/VMkernel and with trunkfast and BPDU guard on the server-facing ports.]

4 NICs with Etherchannel
“Clustered” switches

[Figure: with "clustered" upstream switches acting as one logical switch, each ESX server can bundle its NICs into EtherChannels: one 802.1Q bundle for the Production VLANs and one for SC and VMkernel, spanning both physical switches.]


VMotion Migration Requirements

VMKernel Network can be routed
[Figure: ESX Server host with Mgmt, VM Kernel, and Production networks; the VM Kernel network sits behind a router, illustrating that it can be routed.]
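A brief service-console sketch of the routed VMkernel setup (the IP addresses and the "VMkernel" port-group name are illustrative assumptions):

  # Create the VMkernel interface on its port group
  esxcfg-vmknic -a -i 10.0.200.11 -n 255.255.255.0 VMkernel
  # Set the VMkernel default gateway so VMotion/iSCSI traffic can be routed
  esxcfg-route 10.0.200.1
  # Display the current VMkernel default route
  esxcfg-route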


VMotion L2 Design

[Figure: ESX Host 1 in Rack1 and ESX Host 2 in Rack10, each with its own vSwitches for the Service Console, the VMkernel, and the VMs (VM4–VM6); a Layer 2 design in which the VMkernel and VM VLANs span both racks for VMotion.]

HA clustering (1)

• EMC/Legato AAM based.
• The HA agent runs in every host.
• Heartbeats are unicast UDP, port ~8042 (4 UDP ports opened).
• Heartbeats run on the Service Console ONLY.
• When a failure occurs, the ESX host pings the gateway (on the SERVICE CONSOLE only) to verify network connectivity.
• If the ESX host is isolated, it shuts down its VMs, thus releasing the locks on the SAN.

Recommendations:
  Have 2 Service Consoles on redundant paths.
  Avoid losing SAN access (e.g. via iSCSI).
  Make sure you know beforehand if DRS is activated too!

Caveats:
  Losing Production VLAN connectivity only ISOLATES the VMs (there's no equivalent of uplink tracking on the vSwitch).

Solution: NIC TEAMING.


HA clustering (2)

[Figure: ESX1 and ESX2 hosts, each running VM1/VM2, attached to three networks: iSCSI access/VMkernel 10.0.200.0, COS (Service Console) 10.0.2.0, and Production 10.0.100.0.]


Agenda

• VMware LAN Networking
  vSwitch Basics
  NIC Teaming
  vSwitch vs LAN Switch
• Cisco/VMware DC Designs
• SAN Designs


Multiple ESX Servers—Shared Storage

VMFS
VMFS is a high-performance cluster file system for virtual machines:
• Stores the entire virtual machine state in a central location
• Supports heterogeneous storage arrays
• Adds more storage to a VMFS volume dynamically
• Allows multiple ESX Servers to access the same virtual machine storage concurrently
• Enables virtualization-based distributed infrastructure services such as VMotion, DRS, and HA
[Figure: several ESX Servers sharing VMFS volumes on networked storage; a VM's disk is a file (A.vmdk) on the shared volume.]


The Storage Stack in VI3


• The ESX storage stack (VSCSI, Disklib, VMFS, LVM) aggregates physical volumes (LUNs), provisions logical containers (virtual disks VD1–VD5), provides services such as snapshots, and selectively presents those containers to the VMs.
• It is a clustered, host-based volume manager and file system.
• Analogous to how VI3 virtualizes servers: it looks like a SAN to the VMs; a network of LUNs is presented to a network of VMs.
[Figure: VM1–VM5 on ESX 1 and ESX 2, whose storage stacks sit above a SAN switch and LUNs 1–3.]

Standard Access of Virtual Disks on VMFS

[Figure: ESX1–ESX3 running VM1–VM6, all using VMFS1 carved out of LUN1.]

• The LUN(s) are presented to an ESX Server cluster via standard LUN masking and zoning.
• VMFS is a clustered volume manager and file system that arbitrates access to the shared LUN.
  Data is still protected so that only the right application has access; the point of control moves from the SAN to the vmkernel, but there is no loss of security.
• ESX Server creates virtual machines (VMs), each with their own virtual disk(s).
  The virtual disks are really files on VMFS.
  Each VM has a virtual LSI SCSI adapter in its virtual hardware model.
  Each VM sees its virtual disk(s) as local SCSI targets, whether the virtual disk files sit on local storage, iSCSI, or Fibre Channel.
  VMFS makes sure that only one VM is accessing a virtual disk at a time.
• With VMotion, CPU state and memory are transferred from one host to another, but the virtual disks stay still.
  VMFS manages the transfer of access from the source to the destination ESX Server.
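A small service-console sketch underscoring that a virtual disk is just a file on the shared VMFS volume (datastore and VM directory names are illustrative assumptions):

  # A VM's virtual disks are ordinary files under /vmfs/volumes
  ls /vmfs/volumes/VMFS1/VM_LUN_0005/
  # Create an additional 10 GB virtual disk file on the same VMFS volume
  vmkfstools -c 10G /vmfs/volumes/VMFS1/VM_LUN_0005/data_disk.vmdk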

Three Layers of the Storage Stack


• Virtual disks (VMDK): what the Virtual Machine sees
• Datastores / VMFS volumes (LUNs): what the ESX Server manages
• Physical disks: what the Storage Array provides

ESX Server View of SAN

• FibreChannel disk arrays appear as SCSI targets (devices), each of which may have one or more LUNs.
• On boot, ESX Server scans for all LUNs by sending an inquiry command to each possible target/LUN number.
• The rescan command causes ESX Server to scan again, looking for added or removed targets/LUNs.
• ESX Server can send normal SCSI commands to any LUN, just like a local disk.
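A hedged sketch of the rescan step from the service console (the vmhba numbers vary per host and are assumptions here; the VI Client's Rescan button does the same thing):

  # Rescan an HBA for added or removed targets/LUNs
  esxcfg-rescan vmhba1
  esxcfg-rescan vmhba2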


ESX Server View of SAN (Cont.)

• Built-in locking mechanism ensures multiple hosts can access the same disk on the SAN safely.
  VMFS-2 and VMFS-3 are distributed file systems and do the appropriate on-disk locking to allow many ESX Servers to access the same VMFS.
• Storage is a resource that must be monitored and managed to ensure the performance of the VMs.
  Leverage 3rd-party systems and storage management tools.
  Use VirtualCenter to monitor storage performance from the virtual-infrastructure point of view.

Choices in Protocol

• FC, iSCSI or NAS?
  Best practice is to leverage the existing infrastructure and not introduce too many changes all at once.
  Virtual environments can leverage all types; you can choose what fits best and even mix them.
  Common industry perceptions and trade-offs still apply in the virtual world.
  What works well for one does not work for all.


Which Protocol to Choose?


• Leverage the existing infrastructure when possible.
• Consider customer expertise and ability to learn.
• Consider the costs (dollars and performance).
• What does the environment need in terms of throughput? Size for aggregate throughput before capacity.
• What functionality is really needed for the Virtual Machines:
  VMotion, HA, DRS (work on both NAS and SAN)
  VMware Consolidated Backup (VCB)
  ESX boot from disk
  Future scalability
  DR requirements

FC SAN—Considerations

• Leverage multiple paths for high availability.
• Manually distribute I/O-intensive VMs on separate paths.
• Block access provides optimal performance for large, high-transactional-throughput workloads.
• Considered the industrial-strength backbone for most large enterprise environments.
• Requires expertise in the storage management team.
• Expensive price per port of connectivity.
• Increasing to 10 Gb throughput (soon).


iSCSI—Considerations
• Uses the standard LAN infrastructure. Best practices:
  Have a dedicated LAN/VLAN to isolate iSCSI from other network traffic.
  Use GbE or faster network.
  Use multiple NICs or iSCSI HBAs.
  Use an iSCSI HBA for performance environments.
  Use the SW initiator for cost-sensitive environments.
• Supports all VI3 features:
  VMotion, DRS, HA
  ESX boot from HW initiator only
  VCB is in experimental support today; full support shortly.
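A hedged service-console sketch of enabling the ESX 3.x software iSCSI initiator (the vmhba number assigned to the SW initiator differs between releases and is an assumption; targets and CHAP are normally configured in the VI Client):

  # Enable the software iSCSI initiator
  esxcfg-swiscsi -e
  # Open the service-console firewall for the software iSCSI client
  esxcfg-firewall -e swISCSIClient
  # After adding discovery targets, rescan the SW iSCSI adapter
  esxcfg-rescan vmhba40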

NFS—Considerations
• Has more protocol overhead but less file-system overhead than VMFS, as the NAS file system lives on the NAS head.
• Simple to define in ESX by providing:
  The NFS server hostname or IP
  The NFS share
  The ESX local datastore name
• No tuning required for ESX, as most settings are already defined:
  No options for rsize or wsize
  Version is v3
  Protocol is TCP
• Max mount points = 8 by default; can be increased to a hard limit of 32.
• Supports almost all VI3 features except VCB.
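A minimal service-console sketch of adding such an NFS datastore (hostname, export path, and datastore label are illustrative assumptions):

  # Mount an NFS export as an ESX datastore
  esxcfg-nas -a -o nas01.example.com -s /vol/vmware_ds1 NFS_Datastore1
  # List the configured NAS datastores
  esxcfg-nas -l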



Summary of Features Supported


Protocol               VMotion, DRS & HA   VCB    ESX boot from disk
FC SAN                 Yes                 Yes    Yes
iSCSI SAN (HW init)    Yes                 Soon   Yes
iSCSI SAN (SW init)    Yes                 Soon   No
NFS                    Yes                 No     No

Choosing Disk Technologies

• Traditional performance factors:
  Capacity / price
  Disk types (SCSI, FC, SATA/SAS)
  Access time; IOPS; sustained transfer rate
  Drive RPM to reduce rotational latency
  Seek time
  Reliability (MTBF)
• VM performance is ultimately gated by IOPS density and storage space.
• IOPS density = number of read IOPS per GB; higher is better.

The Choices One Needs to Consider


• FS vs. raw: VMFS vs. RDM (when to use each)
• NFS vs. block: NAS vs. SAN (why use each)
• iSCSI vs. FC: what is the trade-off?
• Boot from SAN: sometimes needed for diskless servers
• Recommended size of LUN: it depends on application needs…
• File system vs. LUN snapshots (host or array vs. VMware VMFS snapshots): which to pick?
• Scalability (factors to consider): number of hosts, dynamic adding of capacity, practical vs. physical limits

Trade Offs to Consider

• Ease of provisioning
• Ease of ongoing management
• Performance optimization
• Scalability: headroom to grow
• Function of 3rd-party services:
  Remote mirroring
  Backups
  Enterprise systems management
• Skill level of the administration team
• How many shared vs. isolated storage resources

Isolate vs. Consolidate Storage Resources

• RDMs map a single LUN to one VM.
• One can also dedicate a single VMFS volume to one VM.
• When comparing VMFS to RDMs, these are the two configurations that should be compared.
• The bigger question is how many VMs can share a single VMFS volume without contention causing pain.
• The answer is that it depends on many variables:
  The number of VMs and their workload type
  The number of ESX servers those VMs are spread across
  The number of concurrent requests to the same disk sector/platter
Isolate vs. Consolidate

• Isolate: poor utilization, islands of allocations, more management.
• Consolidate: increased utilization, easier provisioning, less management.


Where Have You Heard This Before

• Remember the DAS → SAN migration.
• Convergence of LAN and NAS.
• All the same concerns have been raised before:
  What if the workload of some causes problems for all?
  How will we know who is taking the lion's share of a resource?
  What if it does not work out?

Our biggest obstacle is conventional wisdom: "The Earth is flat!", "If man were meant to fly he would have wings."

VMFS vs. RDM—RDM Advantages

• Virtual machine partitions are stored in the native guest OS file-system format, facilitating "layered applications" that need this level of access.
• As there is only one virtual machine on a LUN, you get much finer-grained characterization of the LUN and no I/O or SCSI-reservation lock contention; the LUN can be designed for optimal performance.
• With "Virtual Compatibility" mode, virtual machines keep many of the features of being on a VMFS, such as file locking to allow multiple access, and snapshots.


VMFS vs. RDM—RDM Advantages

• With "Physical Compatibility" mode, the virtual machine can send almost all "low-level" SCSI commands to the target device, including command and control to a storage controller, for example through SAN management agents running in the virtual machine.
• Dynamic name resolution: stores unique information about the LUN regardless of changes to its physical address due to hardware or path changes.

VMFS vs. RDM—RDM Disadvantages

• Not available for block or RAID devices that do not report a SCSI serial number.
• No snapshots in "Physical Compatibility" mode; they are only available in "Virtual Compatibility" mode.
• Can be very inefficient in that, unlike VMFS, only one VM can access an RDM.


RDMs and Replication

• RDM-mapped RAW LUNs can be replicated to the remote site.
• RDMs reference the RAW LUNs via:
  The LUN number
  The LUN ID
• VMFS-3 volumes on the remote site will have an unusable RDM configuration if either property changes.
• Remove the old RDMs and recreate them:
  Correlate the RDM entries to the correct RAW LUNs.
  Use the same RDM file name as the old one to avoid editing the .vmx file.
Storage—Type of Access

RAW:
• RAW may give better performance.
• RAW means more LUNs, and therefore more provisioning time.
• Advanced features still work.

VMFS:
• Leverage templates and quick provisioning.
• Fewer LUNs means you don't have to watch Heap.
• Scales better with Consolidated Backup.
• The preferred method.


Storage—How Big Can I Go?

• One big volume or individual volumes?
  Will you be doing replication? More granular slices will help.
  High-performance applications? Individual volumes could help.
• With Virtual Infrastructure 3, the VMDK, swap, config files, log files, and snapshots all live on VMFS.

What Is iSCSI?

• A SCSI transport protocol, enabling access to storage devices over standard TCP/IP networks.
  Maps SCSI block-oriented storage over TCP/IP.
  Similar to mapping SCSI over Fibre Channel.
• "Initiators", such as an iSCSI HBA in an ESX Server, send SCSI commands to "targets" located in iSCSI storage systems.
[Figure: an initiator reaching block storage across an IP network.]


VMware iSCSI Overview

• VMware added iSCSI as a supported option in VI3:
  Block-level I/O over TCP/IP using the SCSI-3 protocol
  Supports both hardware and software initiators
  GigE NICs MUST be used for SW initiators (no 100 Mb NICs)
  iSCSI HBAs (HW initiator) and NICs (SW initiator) are supported today
  Check the HCL for supported HW initiators and SW NICs
• Not supported in ESX 3.0.1:
  10 GigE
  Jumbo frames
  Multiple Connections per Session (MC/S)
  TCP Offload Engine (TOE) cards
VMware ESX Storage Options
[Figure: VMs on ESX accessing storage over FC, iSCSI/NFS, and DAS (local SCSI).]
• 80%+ of the install base uses FC storage.
• iSCSI is popular in the SMB market.
• DAS is not popular because it prohibits VMotion.


Virtual Servers Share a Physical HBA


• A zone includes the physical HBA and the storage array.
• Access control is delegated to the storage array's "LUN masking and mapping"; it is based on the physical HBA pWWN and is therefore the same for all VMs.
• The hypervisor is in charge of the mapping; errors may be disastrous.
[Figure: several virtual servers share one physical HBA (pWWN-P): a single login on a single point-to-point connection to the MDS9000 fabric, one entry in the FC name server, one zone, with LUN mapping and masking enforced on the storage array.]
NPIV Usage Examples
• Virtual machine aggregation: with an NPIV-enabled HBA, multiple virtual N_Port logins share the host's single physical connection to the fabric F_Port.
• "Intelligent pass-thru": the switch becomes an HBA concentrator, connecting upstream as an NP_Port to an F_Port.


Raw Device Mapping

• RDM allows direct read/write access to a disk.
• The block mapping is still maintained within a VMFS file (the RDM mapping file).
• Rarely used, but important for clustering (MSCS is supported).
• Used with NPIV environments.
[Figure: VM1 reaches its LUN through an RDM mapping file stored on VMFS; VM2 uses a regular VMFS virtual disk.]
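A hedged service-console sketch of creating the two RDM flavors discussed in the preceding slides (the vmhba device path and datastore/VM names are illustrative assumptions):

  # Virtual compatibility RDM: keeps VMFS-style file locking and snapshots
  vmkfstools -r /vmfs/devices/disks/vmhba1:0:3:0 /vmfs/volumes/VMFS1/VM_LUN_0005/rdm_virtual.vmdk
  # Physical compatibility RDM: passes almost all SCSI commands to the array
  vmkfstools -z /vmfs/devices/disks/vmhba1:0:4:0 /vmfs/volumes/VMFS1/VM_LUN_0005/rdm_physical.vmdk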

Storage Multi-Pathing
• No storage load balancing; strictly failover.
• Two modes of operation dictate the behavior: Fixed and Most Recently Used.
• Fixed mode:
  Allows definition of preferred paths.
  If the preferred path fails, a secondary path is used.
  If the preferred path reappears, it will fail back.
• Most Recently Used:
  If the current path fails, a secondary path is used.
  If the previous path reappears, the current path is still used.
• Supports both Active/Active and Active/Passive arrays.
• Auto-detects multiple paths.
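A short service-console sketch for inspecting multipathing (output is ESX 3.x specific; changing the policy or the preferred path is typically done in the VI Client under Configuration > Storage):

  # List every LUN with its paths and the current policy (Fixed or Most Recently Used)
  esxcfg-mpath -l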


Q and A

Recommended Reading


Complete Your Online Session Evaluation

• Give us your feedback and you could win fabulous prizes. Winners announced daily.
• Receive 20 Passport points for each session evaluation you complete.
• Complete your session evaluation online now (open a browser through our wireless network to access our portal) or visit one of the Internet stations throughout the Convention Center.

Don't forget to activate your Cisco Live virtual account for access to all session material on-demand, and return for our live virtual event in October 2008. Go to the Collaboration Zone in World of Solutions or visit www.cisco-live.com.
