
CCAT Troubleshooting Training

XenServer
April 2012

Citrix Consulting Architecture Team


Agenda

XenServer Storage Overview


Troubleshooting Storage Issues
XenServer Performance Troubleshooting
XenServer Storage
Architecture Overview
XenServer Storage Overview

XenServer Storage Objects


SRs, VDIs, PBDs and VBDs

Virtual Disk Data Formats


File-based VHD, LVM and StorageLink
XenServer Storage Objects
What is an SR (Storage Repository)?

Describes a particular storage target in which Virtual Disk


Images (VDIs) are stored.
Flexible: supports a wide variety of storage types.
Centralized: easier to manage and more reliable within a
XenServer pool.
Must be accessible to each XenServer host.
XenServer Storage Objects
VDIs, PBDs, VBDs

Virtual Disk Images are a storage abstraction that is


presented to a VM.
Physical Block Devices represent the interface between a
physical server and an attached SR.
Virtual Block Devices are connector objects that allow
mappings between VDIs and VMs.
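
As an illustrative check from the XenServer CLI (the uuid values are placeholders), the same object chain can be walked with xe list commands:

# xe sr-list params=uuid,name-label            # SRs known to the pool
# xe pbd-list sr-uuid=<sr-uuid>                # PBDs connecting each host to that SR
# xe vdi-list sr-uuid=<sr-uuid> params=uuid    # VDIs stored in the SR
# xe vbd-list vdi-uuid=<vdi-uuid>              # VBDs mapping a VDI into a VM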



XenServer Storage Objects
[Diagram: the SR is connected to each XenServer host through a PBD; each VDI in the SR is presented to its Virtual Machine through a VBD.]
Virtual Disk Data Formats
File-based VHD

VM images are stored as thin-provisioned VHD format files


on either a local non-shared file system (EXT type SR) or a
shared NFS target (NFS type SR).
What is VHD?
A Virtual Hard Disk (VHD) is a file formatted to be structurally identical to a
physical Hard Disk Drive.
The VHD Image Format Specification was released by Microsoft in June 2005.
Virtual Disk Data Formats
Logical Volume (LVM)-based VHDs

The default XenServer block device-based storage inserts a


Logical Volume manager on a disk. VDIs are represented
as volumes within the Volume manager.
Introduced LVHD in XenServer 5.5
Enhances LVM for SRs
Hosts VHD files directly on LVM volumes
Adds Advanced Storage features like Fast Cloning and Snapshots
Fast and simple upgrade
Backwards compatible
Virtual Disk Data Formats
StorageLink (LUN per VDI)

LUNs are directly mapped to VMs as VDIs by SR types that


provide an array-specific plug-in (NetApp, EqualLogic or
StorageLink type SRs). The array storage abstraction
therefore matches the VDI storage abstraction for
environments that manage storage provisioning at an array
level.



Virtual Disk Data Formats
StorageLink Architecture

XenServer calls direct to Array APIs to


provision and adjust storage on demand.
Fully leverages array hardware capabilities.
Virtual disk drives are individual LUNs.
High performance storage model.
Only the server running a VM connects to
the individual LUN(s) for that VM.
A special master server coordinates which
servers connect to which LUNs



LVM vs. StorageLink
[Diagram: with the LVM model, each XenServer host attaches over iSCSI/FC to a single LUN holding the Storage Repository; the LUN is an LVM Volume Group and each VM virtual disk is an LVM Logical Volume containing a VHD header plus data. With the StorageLink model, each VM virtual disk is its own LUN on the array.]
Troubleshooting and Diagnosing
Common Storage Issues
Troubleshooting XenServer Storage
Native Troubleshooting Tools - XenServer Logs

Always check the logs first! XenServer creates several logs


that are useful for diagnosing storage problems
/var/log/messages # General messages and system related stuff
/var/log/xensource.log # Logging specific to XenAPI
/var/log/SMlog # Logging specific to XenServer storage manager

Often errors logged in any of these files can be searched for in the Citrix
Knowledge Center for a solution. See http://support.citrix.com.
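
For illustration, a few hedged examples of inspecting these logs from dom0 (the SR uuid is a placeholder):

# tail -f /var/log/SMlog                       # Watch storage manager activity live
# grep -i error /var/log/SMlog | tail -20      # Most recent storage manager errors
# grep <sr-uuid> /var/log/xensource.log        # XenAPI activity for a specific SR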



Troubleshooting XenServer Storage
Native Troubleshooting Tools - XenAPI commands

The XenAPI (xe) can be used to troubleshoot storage issues too


# xe sr-scan # Force XAPI to sync the database with local VDIs present in
the underlying substrate.
# xe sr-probe # Using device-config parameters you can probe a block device
for its characteristics, such as existing VM metadata and SR
uuid.
# xe pbd-plug/unplug # Manually plug or unplug a PBD for an SR. This can be
useful when repairing an SR in XenCenter fails.
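
A hedged sketch of how these fit together when an SR will not attach (the device-config values are illustrative placeholders, not taken from these slides):

# xe sr-probe type=lvmoiscsi device-config:target=10.0.0.10 device-config:targetIQN=<iqn>
# xe pbd-list sr-uuid=<sr-uuid> params=uuid,currently-attached
# xe pbd-unplug uuid=<pbd-uuid>
# xe pbd-plug uuid=<pbd-uuid>                  # Errors from the plug attempt land in /var/log/SMlog
# xe sr-scan uuid=<sr-uuid>                    # Re-sync XAPI with the VDIs on the substrate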



Troubleshooting XenServer Storage
Native Troubleshooting Tools - VHD commands

See and verify the mount point of a file-based VHD SR:
# /var/run/sr-mount/<SR UUID>

Fully provision a VHD SR with vhd-util:
See http://support.citrix.com/article/CTX118842

Check VHD architecture:
# hexdump -vC <VDI-UUID>.vhd | less
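
As an illustrative use of vhd-util against a file-based SR (the paths are placeholders; subcommand availability can vary between XenServer releases):

# vhd-util check -n /var/run/sr-mount/<SR UUID>/<VDI-UUID>.vhd     # Validate the VHD metadata
# vhd-util query -n /var/run/sr-mount/<SR UUID>/<VDI-UUID>.vhd -v  # Report the virtual size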



Troubleshooting XenServer Storage
Storage Multipathing

Ensure that multipathing is enabled if you have multiple paths zoned to the
XenServer
Use sg_map -x and check the host and bus IDs

Problems if you do not enable multipath


I/O Errors
Decrease in performance
Errors during SR.create

What is multipath.conf vs multipath-enabled.conf


multipath.conf is a symlink to either multipath-enabled.conf or multipath-disabled.conf

DMP vs. MPP multipathing


http://support.citrix.com/article/ctx121364
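
A hedged sketch of the usual multipath checks from dom0:

# ls -l /etc/multipath.conf        # Confirm which file the symlink currently points to
# multipath -ll                    # Show multipath topology and active/failed paths
# sg_map -x                        # Compare host and bus IDs to confirm multiple paths per LUN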
Troubleshooting XenServer Storage
SAN Debugging

Always start at the hardware


adapter, use the Qlogic or
Emulex CLI tools to verify the
LUNs known to the adapter
For QLogic, run scli
For Emulex, run hbanyware (HBAnyware)

Use xe sr-probe
type=lvmohba to trigger a bus
refresh
Troubleshooting XenServer Storage
Additional Scenarios
Unable to create SRs
Verify that XenServer can see the storage/LUN
Use fdisk and /dev/disk/xxx
Verify that HBA can see the LUN
Use the HBA CLI tools
Verify that iSCSI can login:
# iscsiadm -m node -L all # Will force the iscsid service to log into the storage array.

Clearing the device mappings via CMD line


# echo 1 > /sys/class/scsi_device/x:x:x:x/device/delete
Be extremely careful what device is being deleted!

Clean up of orphaned VDIs, XC not displaying the right amount of free storage
If a logical volume has no corresponding VDI it can be deleted. Be extremely careful with this
because if you delete a parent disk, you lose all of its differencing disks.
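
A hedged sketch of the visibility checks above (the target IP is a placeholder):

# fdisk -l                                            # Is the LUN visible as a block device?
# ls -l /dev/disk/by-id                               # Is there a by-id entry for the expected SCSI ID?
# iscsiadm -m discovery -t sendtargets -p 10.0.0.10   # Can the iSCSI target be discovered?
# iscsiadm -m node -L all                             # Log in to all discovered targets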
XenServer Performance
Troubleshooting
XenServer Performance Overview

How do we determine optimal VM density for a host?


XenServer Hardware
Infrastructure, such as network and storage
Workload and sizing demands of the virtual machines
Native XenServer characteristics
[Diagram: XenServer architecture. The toolstack, native drivers and netback backends run in Dom0; each DomU guest OS runs its applications and connects its netfront drivers to Dom0's netback; all domains sit on the Xen Hypervisor running on the host machine hardware.]


XenServer Performance Overview

External Factors
Network
Storage
VM Workload and Sizing
Domain 0 Memory Management

[Diagram: Dom0 memory management. The default Dom0 allocation of 752 MB is split between memory for Dom0 itself (352 MB) and memory for managing DomUs (400 MB); each running DomU adds its own small memory footprint inside Dom0, so the total required grows with the number of VMs (n). Total XenServer memory is divided between Dom0 memory and the pool available for DomUs (e.g. 12 GB). The default 752 MB allows for about 60 VMs per host.]
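
A minimal sketch of this budget as shell arithmetic; the 352 MB base comes from the diagram, while the per-VM footprint is an illustrative assumption, not a figure confirmed by the slide:

# Dom0 memory budget sketch (per-VM value is an assumption)
DOM0_MB=752      # default Dom0 allocation
BASE_MB=352      # memory for Dom0 itself (from the diagram)
PER_VM_MB=6      # assumed Dom0 overhead per running DomU (illustrative)
echo $(( (DOM0_MB - BASE_MB) / PER_VM_MB ))   # roughly 66, in line with "about 60 VMs per host"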
Troubleshooting XenServer Performance

What happens when we start more VMs than Dom0 has memory to
manage?
Slow VM performance, poor user experience.
Slow response from the XenAPI; it takes longer to process tasks like
starting, shutting down and migrating virtual machines.
It can cause XenServer host instability resulting in unpredictable
behavior and potentially crashing the XenServer host machine!!
Troubleshooting XenServer Performance

There are two common ways to monitor performance in XenServer

XenCenter Performance Tab XenServer Command Line Interface


Troubleshooting XenServer Performance

Using XenCenter
Good for at-a-glance monitoring
Unwieldy for refined or customized performance testing
Difficult to use for historical trending
Data cannot be easily exported
Some types of information are not gathered.
Troubleshooting XenServer Performance

# top # Provides a dynamic real-time view of a running system.

Tasks: 68 total, 2 running, 65 sleeping, 0 stopped, 1 zombie


Cpu(s): 13.0%us, 33.6%sy, 0.0%ni, 1.0%id, 52.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 417792k total, 302832k used, 114960k free, 68384k buffers
Swap: 524280k total, 104k used, 524176k free, 80928k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND


12857 65550 15 0 27784 3012 1280 D 19 0.6 0:00.57 qemu-dm
4679 root 12 -3 281m 16m 5308 S 12 3.4 3:47.85 xapi
5993 root 15 -3 6164 2276 1188 S 2 0.5 0:24.73 stunnel
1264 root 16 -4 2244 664 384 S 0 0.1 0:24.00 udevd
4641 root 15 0 16348 1936 952 S 0 0.4 0:01.25 xenstored
4650 root 15 0 12304 652 544 S 0 0.1 0:00.05 blktapctrl
12722 root 15 0 2188 1052 836 R 0 0.2 0:00.03 top
Troubleshooting XenServer Performance

Performance monitoring commands


# xentop # Displays real-time information about a Xen system and domains.

xentop - 17:24:33   Xen 3.3.1
4 domains: 1 running, 3 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
Mem: 12580820k total, 7092880k used, 5487940k free CPUs: 8 @ 1600MHz
NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS ...
Domain-0 -----r 9849 65.5 417792 3.9 no limit n/a 8 0 ...
Win2K3-01 ------ 1 1.5 2097020 16.7 2106164 16.7 2 1 ...
Win2K3-02 ------ 1 4.3 2097020 16.7 2106164 16.7 2 1 ...
Win2K3-03 ------ 0 9.8 2097020 16.7 2106164 16.7 2 1 ...
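
xentop can also be run non-interactively to capture samples to a file; a hedged example (flags may vary slightly by release):

# xentop -b -d 5 -i 12 > /tmp/xentop.log   # Batch mode: 12 samples at 5-second intervals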
Troubleshooting XenServer Performance

What if I need to increase my VM-density-per-host?

We can tune XenServer to increase VM density.


Troubleshooting XenServer Performance

This was achieved by making two key configuration changes to a


default XenServer 6.x installation.
Increased the amount of RAM assigned to Dom0 to 2.94GB from the default
752MB; increasing it enabled us to launch more desktop clients.
Increased the Xen-heap setting to take into account the large number of VMs
on this single server host. This was done by adding "xenheap_megabytes=24"
to the Xen command-line in /boot/extlinux.conf which resulted in an increase
from the default of 16MB to 24MB.
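
For illustration only, the Xen command line in /boot/extlinux.conf might be extended roughly as follows; the surrounding options are omitted and the dom0_mem value simply reflects the 2.94 GB used in the study:

# Illustrative fragment of /boot/extlinux.conf (do not copy verbatim)
append /boot/xen.gz dom0_mem=2940M xenheap_megabytes=24 ... --- /boot/vmlinuz-2.6-xen ... --- /boot/initrd-2.6-xen.img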
Troubleshooting XenServer Performance

Both the scalability study and instructions for increasing


Dom0 memory limits are documented in the Citrix
Knowledge Center here:
http://support.citrix.com/article/CTX124086 - XenServer Single Server Scalability
with XenDesktop
http://support.citrix.com/article/CTX124259 - Adjusting Dom0 and Xenheap
Setting in XenServer
Disclaimer: Your results may vary! This
testing was done on very high-end equipment
using Citrix best practices!
Troubleshooting XenServer Performance

Troubleshooting commands - Storage


# iostat # Reports basic I/O stats for devices and partitions
# hdparm # Performs timed sequential reads
# dd     # Simple, common block device copy utility
TIP: iSCSI storage throughput can usually be tied directly
to network performance. If there is slow throughput for an
iSCSI storage array, perform network diagnostics first!!
Troubleshooting XenServer Performance

Troubleshooting commands - Network


# tcpdump # Dumps traffic on a network
http://support.citrix.com/article/CTX120869 - detailed instructions for using
tcpdump.

# netstat # Display network interface statistics


# ifconfig # Display and configure network interfaces

TIP: You can always type man followed by


a Linux command name (e.g., man netstat)
to get detailed help for the command.
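
A hedged example of combining these for an iSCSI throughput complaint (the bridge name and target address are placeholders):

# ifconfig xenbr0                                          # Check errors/drops on the storage-facing bridge
# netstat -i                                               # Per-interface packet and error counters
# tcpdump -i xenbr0 -n host 10.0.0.10 -w /tmp/iscsi.pcap   # Capture traffic to the iSCSI target for offline analysis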
Troubleshooting XenServer Performance
Running Shell Scripts
Can capture customized data sets
Can be run over defined periods of time
Can be formatted specifically for reporting purposes.
Requires knowledge of Linux and shell scripting languages.
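
A minimal sketch of such a script; the interval and output path are placeholders:

#!/bin/bash
# Capture basic host performance samples every 60 seconds (illustrative sketch)
OUT=/var/log/perf-capture.log
while true; do
    date >> "$OUT"
    xentop -b -i 1 >> "$OUT"    # One snapshot of per-domain CPU and memory usage
    iostat -k >> "$OUT"         # Basic block device I/O statistics
    sleep 60
done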
Troubleshooting XenServer Performance
Additional Information
On the Citrix Knowledge Center you can find shell script examples,
procedures and best practices for how to troubleshoot all aspects of
a XenServer environment.
Some useful links to troubleshooting articles:
http://support.citrix.com/article/CTX124157
http://support.citrix.com/article/CTX121634
http://support.citrix.com/article/CTX122806
http://support.citrix.com/article/CTX120737
Troubleshooting Virtual Machine Performance

Common performance related issues inside the VM:


High CPU
Disk/registry contention
High network utilization
Memory
High CPU

Identify the offending thread(s)


Identify the top function call and its module
Capture user memory dump of offending process for analysis
Engage respective application vendor

Process Explorer can be used for live stack-trace viewing!


The Windows Performance Tools

Next generation performance monitoring from Microsoft


Track CPU usage, application start times, boot issues etc.
Identify common performance problems without a debugger
Included with Windows 7 SDK Download
The Windows Performance Tools: Case Study

ICA test run where problem occurred


Notice that on this dual processor machine 1 processor is
frequently at or very close to 100%.

Looking inside the test run to see which module was executing the most instructions showed that it was wfica32.exe.
The Windows Performance Tools: Case Study

Drilling into the calls of wfica32.exe led to the Windows function


NtUserSetCursor(), which results in calls to the igdkmd32.sys driver
and then into the kernel, specifically the memcpy() function.
Memory Dump Collection

User dumps contain a snapshot of a process's memory


Kernel dumps contain a snapshot of kernel memory space
A complete memory dump contains both the kernel and the
entire user space
User Dump Collection

Configure a default post-mortem debugger:

How to Set the NT Symbolic Debugger as a Default Windows


Postmortem Debugger (CTX105888)

How to Set WinDbg as a Default Windows Postmortem


Debugger (CTX107528)

Use Task Manager for manual dumps


System Dump Collection

Small Memory Dump

Generally we avoid these.

Kernel Memory Dump

Captured on a system crash.

Complete Memory Dump

Captured when the system is unresponsive.

To enable a complete memory dump using the registry, set the CrashDumpEnabled value to 1 under:
HKLM\System\CurrentControlSet\Control\CrashControl
Or use Control Panel -> System -> Advanced Tab -> Startup and Recovery.
System Dump Collection

Windows 7 introduced the Dedicated Dump Drive setting


Allows a pagefile to be configured on a dedicated drive for dump
capture
Recommended to debug VMs streamed through PVS

How to Recover Windows Kernel Level Dump Files


from Provisioned Target (CTX123642)
Backup Slides
Storage Management and
Monitoring
Management and Monitoring Overview

Understanding how XenServer Perceives the Storage


Monitoring Storage
Protecting Your Data



Management and Monitoring
Understanding the physical disk layout

# fdisk -l # Lists the physical block devices on the host

Disk /dev/cciss/c0d0: 146.7 GB, 146778685440 bytes
255 heads, 32 sectors/track, 35132 cylinders
Units = cylinders of 8160 * 512 = 4177920 bytes

Device Boot         Start   End     Blocks      Id  System
/dev/cciss/c0d0p1 * 1       981     4002464     83  Linux
/dev/cciss/c0d0p2   982     1962    4002480     83  Linux
/dev/cciss/c0d0p3   1963    35132   135333600   83  Linux

Notes: /dev/cciss/c0d0 denotes a SCSI block device locally attached to the system (an HP RAID array in this case). The first partition on the disk contains the boot information for the OS.



Management and Monitoring
Understanding the physical disk layout (continued)

# fdisk -l # Continued output

Disk /dev/sda: 107.3 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sda doesn't contain a valid partition table

Notes: /dev/sda implies a block device using the SCSI Generic (sg) driver; it is likely attached via a separate interface such as iSCSI or an FC HBA. This disk is part of a Storage Repository using LVM and therefore does not require a local partition table.



Management and Monitoring
Understanding the physical disk layout (continued)

# sg_map -x # Displays the mapping between Linux sg and regular SCSI devices
/dev/sg0 0 0 0 0 13

/dev/sg1 0 0 0 1 0 /dev/sda

/dev/sg2 0 0 0 2 0 /dev/sdb

/dev/sg3 1 0 0 0 13

/dev/sg4 1 0 0 1 0 /dev/sdc

/dev/sg5 1 0 0 2 0 /dev/sdd

Columns: Host Number, Bus, SCSI ID, LUN, SCSI Type



Management and Monitoring
Understanding the physical disk layout (continued)

# ll /dev/disk/by-id # List the attached block devices by SCSI ID.


cciss-3600508b1001035373120202020200003 -> ../../cciss/c0d0

cciss-3600508b1001035373120202020200003-part1 -> ../../cciss/c0d0p1

cciss-3600508b1001035373120202020200003-part2 -> ../../cciss/c0d0p2

cciss-3600508b1001035373120202020200003-part3 -> ../../cciss/c0d0p3

scsi-360a98000503350642f4a553833616b57 -> ../../sda

Notes: The unique IDs are assigned by udev and correspond to individual block devices. The scsi-* entry shown is mapped to /dev/sda.
Management and Monitoring
Understanding the physical disk layout (continued)

To identify a specific SR
based on the SCSI ID,
compare /dev/disk/by-id
with the SR in
XenCenter



Management and Monitoring
LVM-related commands

# pvs # Lists physical volumes


PV VG Fmt Attr PSize PFree

/dev/sda VG_XenStorage-40bbf542-b9d9-ffa1-6efe-aa9c56aadd95 lvm2 a- 99.99G 59.88G

Notes: /dev/sda is the Linux sg device; the volume group name contains the SR UUID; the LVM Volume Group is stored on the physical volume.

# vgs # Lists volume groups


VG #PV #LV #SN Attr VSize VFree

VG_XenStorage-40bbf542-b9d9-ffa1-6efe-aa9c56aadd95 1 4 0 wz--n- 99.99G 59.88G



Management and Monitoring
LVM (continued)

# lvs # Lists the logical volumes

LV                                        VG            Attr   LSize
VHD-c67a887f-3a1a-41f4-8d40-1b21f6307c4a  VG_XenStor... -wi--- 24.00G
VHD-c9b919a7-b93b-49ea-abe5-00acb8240cf5  VG_XenStor... -wi-ao 8.00G
VHD-f3d26dde-254f-4d80-a3bb-d993e904bd63  VG_XenStor... -wi--- 24.00G
LV-e056f479-b0f3-49f3-bc5d-6c226657ae6c   VG_XenStor... -wi-ao 10.00G
LV-ebdcad46-66d9-4020-baa1-0d5b6ac439c7   VG_XenStor... -wi-ao 24.00G

Notes: The VHD-* and LV-* logical volumes represent the containers for individual VDIs. The 'a' and 'o' attributes indicate the LV is active and open, implying it is attached to a running VM.
Tip: Type lvm help for a complete list of LVM command options.
Management and Monitoring
Understanding how the physical storage is represented as virtual
objects in XenServer using the XenAPI
# xe sr-list type=lvmoiscsi params=name-label,uuid,VDIs,PBDs
# Lists the SRs configured for the pool
name-label ( RW) : NetApp - iSCSI

uuid ( RO) : 40bbf542-b9d9-ffa1-6efe-aa9c56aadd95

VDIs (SRO) : f3d26dde-254f-4d80-a3bb-d993e904bd63; c67a887f-3a1a-41f4...

PBDs (SRO) : 27d05ffc-07d3-4f02-d265-3594a2179f8f


Note that the VDI UUID is the same as the logical volume ID; make a note of this UUID to refer back to. Using the PBD UUID from this command output, we will query for its characteristics in the next slide.



Management and Monitoring
Understanding how the physical storage is represented as virtual
objects in XenServer using the XenAPI (continued)
# xe pbd-list uuid=27d0 params=uuid,sr-uuid,device-config,currently-attached
# List PBD params
uuid ( RO) : 27d05ffc-07d3-4f02-d265-3594a2179f8f

sr-uuid ( RO): 40bbf542-b9d9-ffa1-6efe-aa9c56aadd95

device-config (MRO): port: 3260; SCSIid: 360a98000503350642f4a553833616b57;


target: 10.12.45.10; targetIQN: iqn.1992-08.com.netapp:sn.135027806

currently-attached ( RO): true


device-config describes all the physical
characteristics of the block device
attached to this PBD. Note the SCSIid as
referenced earlier from /dev/disk/by-id
Management and Monitoring
Understanding how the physical storage is represented as virtual
objects in XenServer using the XenAPI (continued)
# xe vdi-list uuid=f3d26dde-254f-4d80-a3bb- params=uuid,sr-uuid,vbd-uuids
# List VDI params
uuid ( RO) : f3d26dde-254f-4d80-a3bb-d993e904bd63

sr-uuid ( RO): 40bbf542-b9d9-ffa1-6efe-aa9c56aadd95

vbd-uuids (SRO): 69afb055-3b52-57e3-63fa-d26b82a9b01d

This tells us what VBDs are attached to this VDI.


We will use this UUID in the next slide to query for
the VBD characteristics and determine which VM
this disk is attached to.



Management and Monitoring
Understanding how the physical storage is represented as virtual
objects in XenServer using the XenAPI (continued)
# xe vbd-list uuid=69afb055-3b52- params=uuid,vm-uuid,vm-name-label,vdi-uuid,mode
# List VBD params
uuid ( RO) : 69afb055-3b52-57e3-63fa-d26b82a9b01d

vm-uuid ( RO): 2c3a0e82-3f96-eab8-4982-db33fdb3bd88

vm-name-label ( RO): Windows 7 Test

vdi-uuid ( RO): f3d26dde-254f-4d80-a3bb-d993e904bd63


This tells us which VM (name
mode ( RW): RW and UUID) this VBD is attached
Tip: You can issue xe help to, and which VDI it is providing
<command> to get syntax help for to the VM.
any xe commands.
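
The whole chain can also be scripted; a minimal sketch, where the VM name shown is a placeholder:

#!/bin/bash
# Walk from a VM down to the SR its virtual disks live on (illustrative sketch)
VM_NAME="Windows 7 Test"
VM_UUID=$(xe vm-list name-label="$VM_NAME" params=uuid --minimal)
for VBD in $(xe vbd-list vm-uuid="$VM_UUID" params=uuid --minimal | tr ',' ' '); do
    VDI=$(xe vbd-param-get uuid="$VBD" param-name=vdi-uuid 2>/dev/null)
    [ -z "$VDI" ] && continue                 # skip empty drives (e.g. a CD with no disc)
    SR=$(xe vdi-param-get uuid="$VDI" param-name=sr-uuid)
    echo "VBD $VBD -> VDI $VDI -> SR $SR"
done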
Management and Monitoring
Fibre Channel LUN Zoning
Since Enterprise SANs consolidate data from multiple servers and operating systems, many types of traffic
and data are sent through the interface, whether it is fabric or the network.

With Fibre Channel, to ensure security and dedicated resources, an administrator creates zones and zone sets
to restrict access to specified areas. A zone divides the fabric into groups of devices.

Zone sets are groups of zones. Each zone set represents different configurations that optimize the fabric for
certain functions.

WWN - Each HBA has a unique World Wide Name (similar to an Ethernet MAC)

node WWN (WWNN) - can be shared by some or all ports of a device


port WWN (WWPN) - necessarily unique to each port
Fibre Channel LUN Zoning
[Diagram: FC Switch example. Pool1 contains Xen1 and Xen2; Pool2 contains Xen3. Each host's WWN is zoned on the FC switch (Zone1 and Zone2) together with the storage WWNs. On the array, one initiator group contains Xen1 and Xen2 and another contains Xen3; the storage presents LUN0, LUN1 and LUN2.]
Management and Monitoring
iSCSI Isolation
With iSCSI type storage a similar concept of isolation as fibre-channel zoning can be achieved by using IP
subnets and, if required, VLANs.

IQN - Each storage interface (NIC or iSCSI HBA) has a unique iSCSI Qualified Name configured

Target IQN - Typically associated with the storage provider interface


Initiator IQN - Configured on the client side, i.e. the device requesting access to the storage.

IQN format is standardized:


iqn.yyyy-mm.{reversed domain name} (e.g. iqn.2001-04.com.acme:storage.tape.sys1.xyz)
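
A hedged way to confirm the initiator IQN in use on a XenServer host (the open-iscsi config path is standard; the xe other-config key is an assumption that may vary by release):

# cat /etc/iscsi/initiatorname.iscsi                                                # IQN used by the open-iscsi initiator
# xe host-param-get uuid=<host-uuid> param-name=other-config param-key=iscsi_iqn   # IQN recorded by XenServer, if set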
iSCSI Isolation
[Diagram: iSCSI example. Pool1 contains Xen1 and Xen2; Pool2 contains Xen3. Each host has its own initiator IQN. Traffic is isolated onto VLAN1/Subnet1 and VLAN2/Subnet2 through the network switch, each leading to a storage controller interface with its own target IQN. The storage presents LUN0, LUN1 and LUN2.]
Management and Monitoring
Monitoring XenServer Storage - Alerts

XenServer will generate alerts for certain storage events:


Missing or duplicate IQNs configured
HA state file lost or inaccessible
PBD plug failure on server startup

XenServer can be configured to send alert notifications via


email too.
See the XenServer Administrator's Guide for more
information about configuring alerts.
Management and Monitoring
Monitoring XenServer Storage - CLI Commands

# iostat -k # Reports basic I/O stats for devices and partitions


avg-cpu: %user %nice %system %iowait %steal %idle

0.12 0.00 0.05 0.09 0.02 99.72

Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn

cciss/c0d0 4.05 0.52 32.11 164361 10156264

sda 0.11 1.38 1.79 437259 566151

Note: iostat is not a great performance indicator for shared


storage devices because it is unaware of external bottlenecks, for
example the network in the case of iSCSI.
Management and Monitoring
Monitoring XenServer Storage - CLI Commands

# hdparm -t /dev/<device> # Performs timed sequential reads


/dev/cciss/c0d0:

Timing buffered disk reads: 286 MB in 3.00 seconds = 95.19 MB/sec

Has some limitations:


Does not measure non-sequential disk reads.
Does not measure disk write speed
May not be accurate with non-local storage devices since it is
unaware of underlying bus architecture (iSCSI, FC, etc.)
Must be sampled repeatedly over time to get an accurate picture
of I/O read performance.



Management and Monitoring
Monitoring XenServer Storage - CLI Commands

# dd if=<infile> of=<outfile> # Simple, common block device copy utility


# dd if=/dev/<device> of=/dev/null
1998929+0 records in
1998929+0 records out
1023451648 bytes (1.0 GB) copied, 13.8456 seconds, 73.9 MB/s

if = infile, the source dd reads from. of = outfile, the target dd writes to.

WARNING: NEVER run dd specifying an active, running VHD as the outfile; it


WILL destroy the VM container making it unreadable!!



Management and Monitoring
Monitoring XenServer Storage - Additional Tips

iSCSI storage throughput can usually be tied directly to network


performance. If there is slow throughput for an iSCSI storage array,
perform network diagnostics first!!
Many SAN arrays have native logging and monitoring tools that can
identify bottlenecks affecting storage performance.
Refer to the Citrix Knowledge Base for best practices and known
issues relating to storage performance.
http://support.citrix.com/article/CTX121634
http://support.citrix.com/article/CTX122806
http://support.citrix.com/article/CTX120737
Management and Monitoring
Protecting Your Data - Backup VM Metadata
Can use xsconsole or the CLI.
Makes the SR portable.
Can be used as part of a Disaster Recovery solution, or as part of regular maintenance of the environment.
Can be scheduled within xsconsole.
For more information relating to using XenServer as a Disaster Recovery solution, refer to the Citrix Knowledge Center:
http://support.citrix.com/article/CTX117258
http://support.citrix.com/article/CTX121099
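
A hedged CLI example (the dump file path is a placeholder; xe-backup-metadata is the helper script driven by xsconsole, and its flags may vary by release):

# xe pool-dump-database file-name=/root/pool-db-backup   # Dump the pool database, including VM metadata
# xe-backup-metadata -c -u <sr-uuid>                     # Write portable VM metadata onto the SR itself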
Management and Monitoring
Protecting Your Data - Exporting VMs

Virtual machines can be exported directly out of XenServer


into XVA files that contain a complete clone of the VM and all
of its attached VDIs.
Can be initiated via XenCenter or from the XenServer CLI.
VM must be offline (shutdown) during export process.
Since it backs up all the VM data it can take a very long time
depending on the size of the VM!
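
A hedged CLI example (the VM name and destination path are placeholders):

# xe vm-shutdown vm="Windows 7 Test"                                     # VM must be offline before export
# xe vm-export vm="Windows 7 Test" filename=/mnt/backup/win7-test.xva    # Writes a complete XVA clone of the VM and its VDIs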
Management and Monitoring
Protecting Your Data - Creating VM Snapshots

Snapshots create VDI clones of a VM that can be used for


backup or quickly provisioned into new VMs or templates.
XenServer supports two types in version 5.5:
Regular - Supports all guest environments, including Linux
Quiesced - Takes advantage of the Windows Volume Shadow Copy Service (VSS).
It requires the manual installation of in-guest components to enable.
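
Hedged CLI examples (names are placeholders; the quiesced variant requires the in-guest VSS components noted above):

# xe vm-snapshot vm="Windows 7 Test" new-name-label=pre-patch-snap          # Regular snapshot
# xe vm-snapshot-with-quiesce vm="Windows 7 Test" new-name-label=vss-snap   # Quiesced (VSS) snapshot
# xe snapshot-list params=uuid,name-label                                   # List existing snapshots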



Management and Monitoring
Protecting Your Data - Creating VM Snapshots (continued)
New in XenServer 5.6!
Introduces snapshot Revert, a.k.a.
Checkpoint.
Introduces a new snapshot mode: Snapshot
with disk and memory
XenCenter GUI enhanced for easier
management of VM snapshots and to
support Checkpoint feature.



Management and Monitoring
Protecting Your Data - Third-Party Solutions

There are also Third-Party backup options:


In-guest backups can be performed using any guest-supported solution (backup
agents running in Windows or Linux, for example).
Volume snapshots performed directly on the storage via StorageLink plugins
(for Dell and NetApp).
Backup solutions that plug into the XenAPI to capture VM data, or clone the
LVM data directly.

