VLab VNX With VMware Integration Lab01 Unisphere Overview
Table of contents
Introduction
  Purpose
  Scope
  Terminology
Technology overview
  EMC Unisphere
    EMC Unisphere features and benefits
Lab overview
  Lab environment
  Labs
  Connecting to the lab
Lab 1 Unisphere Overview
  Overview
  Lab
    Logging into Unisphere
    Browsing the Unisphere console
    Dashboard
    System
    Hosts
    Support
Introduction
Unisphere overview for block and file
Purpose
This Lab Guide serves as a guide to a demonstration of EMC Unisphere for VNX.
Scope
Unisphere Overview
FAST Overview
Terminology

Storage Groups: A collection of one or more LUNs or metaLUNs to which you connect one or more servers. These LUNs are accessible only to the server that is connected to the Storage Group.

RAID types: The RAID type of a LUN determines the type of redundancy, and therefore the data integrity, the LUN provides. AX-Series storage systems support only the Hot Spare, RAID 5, and RAID 1/0 RAID types.

LUNs: A grouping of one or more disks or disk partitions into one span of disk storage space. A LUN/virtual disk looks like an individual disk to the server's operating system. It has a RAID type and properties that define it.

RAID Groups: A set of disks on which you create one or more LUNs. Each LUN you create on a RAID Group is distributed equally across the disks in the Group. Each RAID Group supports only one RAID type.

Reserved LUN pool: The reserved LUN pool works with replication software, such as SnapView, SAN Copy, and MirrorView, to store data or information required to complete a replication task.
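The redundancy each RAID type provides has a direct cost in usable capacity. The following is a simplified illustration of that trade-off, not an exact VNX calculation — real arrays reserve additional space for metadata, vault drives, and hot spares:

```python
def usable_capacity(disk_count, disk_size_gb, raid_type):
    """Approximate usable capacity for the RAID types named above.

    Simplified sketch only: real arrays reserve extra capacity for
    metadata, vault drives, and hot spares.
    """
    if raid_type == "RAID 5":
        # One disk's worth of capacity is consumed by parity.
        return (disk_count - 1) * disk_size_gb
    if raid_type == "RAID 1/0":
        # Mirrored pairs: half the raw capacity is usable.
        return (disk_count // 2) * disk_size_gb
    raise ValueError("unsupported RAID type: " + raid_type)

print(usable_capacity(5, 600, "RAID 5"))    # 2400
print(usable_capacity(4, 600, "RAID 1/0"))  # 1200
```

Note how a 5-disk RAID 5 group gives more usable space than a 4-disk RAID 1/0 group of the same drives, at the cost of slower rebuilds and write-penalty overhead.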
Technology overview
EMC Unisphere
EMC Unisphere is a web-based user interface (UI) that you can run from any computer with a supported web browser. Unisphere offers simplified array management for many tasks, and it is easy to learn, so you can be productive right away.
Lab overview
Lab environment

VNX-Block
IP: 192.168.0.30
Array Credentials
Username: sysadmin
Password: sysadmin

VNX-File
IP: 192.168.0.35
Array Credentials
Username: nasadmin
Password: nasadmin

vCenter Server (Windows)
IP: 192.168.0.11
Windows Credentials
Username: vlab\administrator
Password: Password

Labs

FAST Overview
NFS Overview
CIFS Overview
Lab
VNX hardware
Unisphere Analyzer
Support features
Figure 1. Launching Unisphere
2. Log in to Unisphere with the following credentials. (See Figure 2)
Username: sysadmin
Password: sysadmin
Figure 2. Log in to Unisphere
Dashboard
3.
Figure 3.
4.
Customize the All Systems Dashboard by clicking Customize on the top right.
(See Figure 4)
Figure 4.
5.
You can easily add or remove detail on the dashboard while in customize mode. Once complete, click Done. (See Figure 5)
Figure 5.
6.
You will now select an array to manage. From the System list click VNX-Block. (See Figure 6)
Figure 6.
7.
You are now presented with a new Dashboard relating specifically to the array
VNX-Block. (See Figure 7)
Figure 7. VNX-Block Dashboard
8.
You can also customize this Dashboard by clicking Customize on the top
right. From the Customize menu you can add multiple options to your
dashboard by dragging the icons to the dashboard. Once complete click
Done. (See Figure 8)
Figure 8.
System
9.
Click System from the top navigation bar to review the sub sections. Note
System is broken into Hardware, Monitoring and Alerts & Reports. (See
Figure 9)
Figure 9.
10. To review the Storage Hardware of your VNX Array click Storage Hardware.
(See Figure 10)
Figure 10.
11. Under Disk Array Enclosures expand Bus 1 Enclosure 0 > Disks and click Bus 1 Enclosure 0 Disk 1. Note that the relevant disk has been highlighted in the center pane. This can be used to easily identify hardware components of your VNX Array. Note also the Wizards and Service Tasks that can be performed from the right-hand pane. (See Figure 11)
Figure 11.
12. To review Unisphere Monitoring and Alerts click System > Monitoring and
Alerts. (See Figure 12)
Figure 12.
13. Monitoring and Alerts contains the following sub-sections. (See Figure 13)
Figure 13.
Hosts
14. To access Hosts click Hosts on the navigation toolbar. (See Figure 14)
Figure 14. Accessing Hosts
15. To view all hosts connected to your array click Host List. (See Figure 15)
Figure 15.
16. From the Host list you can see that you have two VMware ESXi hosts connected to your array. (See Figure 16)
Figure 16.
17. EMC Unisphere is VMware Aware. To view the VMware Integration within
Unisphere click Hosts > Virtualization. (See Figure 17)
Figure 17. Access Virtualization
18. Currently you do not have anything listed under the VMware Infrastructure
tab. This section can be used to monitor vCenter or ESX hosts. To add your
vCenter click Add. (See Figure 18)
Figure 18. Adding vCenter
Figure 19.
20. To add your vCenter Server click Add. (See Figure 20)
Figure 20.
21. Enter the following vCenter Server credentials and click OK. (See Figure 21)
a. IP Address: 192.168.0.11
b. Username: vlab\administrator
c. Password: Password123!
Figure 21.
Figure 22.
Figure 23. Click OK
Figure 24.
Figure 25.
Figure 26.
Figure 27. View VM details
28. Next you will see how easy it is to add additional hosts to a Storage Group
using Unisphere. To do this click Hosts > Storage Groups. (See Figure 28)
Figure 28.
29. Under Storage Groups click vLab Cluster and note that currently you have one
ESXi host in your storage group (esxi01.vlab.local). (See Figure 29)
Figure 29.
30. To add an additional host click Connect Hosts. (See Figure 30)
Figure 30.
31. Click esx02.vlab.local and click the highlighted arrow to add it to the storage
group. (See Figure 31)
Figure 31. Adding esx02.vlab.local
Figure 32.
Figure 33.
Figure 34.
35. Review the Storage Groups screen and note that you now have two hosts in the vLab Cluster Storage Group. Both of these hosts will now have access to all LUNs added to this storage group. (See Figure 35)
Figure 35.
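The masking behavior just described — every host in a storage group sees every LUN in that group, and only those LUNs — can be sketched as a simple membership model. This is an illustrative toy model with hypothetical names, not how the array implements masking internally:

```python
# Minimal model of storage-group masking: a host can access a LUN
# only when both belong to the same storage group.
storage_groups = {
    "vLab Cluster": {
        "hosts": {"esxi01.vlab.local", "esx02.vlab.local"},
        "luns": {"LUN_10"},  # hypothetical LUN name for illustration
    },
}

def host_can_access(host, lun):
    """True when some storage group contains both the host and the LUN."""
    return any(
        host in group["hosts"] and lun in group["luns"]
        for group in storage_groups.values()
    )

print(host_can_access("esx02.vlab.local", "LUN_10"))   # True
print(host_can_access("other.vlab.local", "LUN_10"))   # False
```

Adding a host to the group (as you did with esx02.vlab.local) is just adding it to the `hosts` set: it immediately gains access to every LUN already in the group.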
Support
36. To access the Unisphere Support section click Support. (See Figure 36)
Figure 36.
37. From the Support section you can access the following. (Due to firewall restrictions in vLab, you will not be able to enable the Support options.) (See Figure 37)
a. How Tos: Procedures to help you manage and service your system
e. Product Support Page: Best practice documents, white papers, and Live Chat with EMC Engineers
Figure 37. Support options
Lab
Logging into Unisphere
1. If you are not already logged into Unisphere from the previous lab, please do the following. Otherwise continue to Step 4.
2. Launch Unisphere. (See Figure 38)
Figure 38. Launching Unisphere
3. Log in to Unisphere with the following credentials. (See Figure 39)
Username: sysadmin
Password: sysadmin
Figure 39.
4.
Figure 40. Accessing System
5. On the right-hand pane click System Properties. (See Figure 41)
Figure 41.
6.
From the VNX-Block Storage System Properties screen click the FAST Cache
tab. (See Figure 42)
Figure 42.
7.
Here you can see that FAST Cache is enabled on your VNX Array and is currently using two Flash disks in a RAID 1 configuration. (See Figure 43)
Figure 43.
8.
To close the FAST Cache configuration screen click OK. (See Figure 44)
Figure 44.
9.
You will now review the properties of a Storage Pool to see how easy it is to
manage FAST Cache. To do this click Storage > Storage pools. (See Figure 45)
Figure 45.
10. Right-click Pool 0 - FAST Cache Enabled. Click Properties. (See Figure 46)
Figure 46.
11. On the VNX-Block Pool 0: Storage Pool Properties screen click Advanced and note that FAST Cache is enabled at the Pool level. This means that all LUNs created from this storage pool will benefit from FAST Cache. To finish click OK. (See Figure 47)
Figure 47.
Lab
Logging into Unisphere
1. If you are not already logged into Unisphere from the previous lab, please do the following. Otherwise continue to Step 4.
2. Launch Unisphere. (See Figure 48)
Figure 48. Launching Unisphere
3. Log in to Unisphere with the following credentials. (See Figure 49)
Username: sysadmin
Password: sysadmin
Figure 49.
4.
To create a new storage pool you need to browse to Storage Pools. Click Storage > Storage Pools. (See Figure 50)
Figure 50.
5.
Figure 51.
6.
Figure 52.
7.
Click Advanced and note that FAST Cache is Enabled by default. Click OK to create the new pool. (See Figure 53)
Figure 53.
8.
Figure 54. Click Yes
9. Click Yes to acknowledge the Warning shown below. (See Figure 55)
Figure 55. Click Yes
10. Click OK to acknowledge the Message shown below. (See Figure 56)
Figure 56. Click OK
Please note that it may take a few minutes for the Unisphere GUI to refresh so
that you can review the status of your new storage pool.
11. Once the GUI has refreshed you can now see that your storage pool is in an
Initializing state. It may take a few minutes for your pool to change to a Ready
State. (See Figure 57)
Figure 57.
12. You will now review the FAST configuration for Pool 0. To do this right-click Pool 0 - FAST Cache Enabled and click Properties. (See Figure 58)
Figure 58.
13. On the General tab note that both RAID Type and Disk Type are Mixed. (See Figure 59)
Figure 59.
14. To view the FAST configuration for Pool 0 click Tiering. Note that Auto-Tiering is set to Scheduled. Note also the metrics for Data to Move Down/Up. In the bottom pane note that data within the pool is spread across all three disk tiers. (See Figure 60)
Figure 60.
15. To modify the Relocation schedule click Relocation Schedule. From here you can easily manage and change the Data Relocation Rate and also the Data Relocation Schedule. To finish click OK. (See Figure 61)
Figure 61.
Figure 62.
Figure 63.
18. Click Tiering. Note the Tiering Policy is set to Start High then Auto-Tier, which is the recommended setting. The following options are available under Tiering Policy.
a. Start High then Auto-Tier (recommended): First sets the preferred tier for data relocation to the highest performing disk drives with available space, then relocates the LUN's data based on the LUN's performance statistics and the auto-tiering algorithm.
b. Auto-Tier: Sets the initial data placement to Optimized Pool and then relocates the LUN's data based on the LUN's performance statistics and the auto-tiering algorithm.
c. Highest Available Tier: Sets the preferred tier for initial data placement and data relocation to the highest performing disk drives with available space.
d. Lowest Available Tier: Sets the preferred tier for initial data placement and data relocation to the most cost-effective disk drives with available space.
Figure 64.
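The initial-placement behavior of these policies can be sketched as follows. This is an illustrative simplification, not EMC's actual tiering algorithm, and the tier names and free-space figures are hypothetical:

```python
# Tiers ordered fastest-first; free_gb is the space remaining per tier.
tiers = [
    {"name": "Extreme Performance", "free_gb": 100},
    {"name": "Performance", "free_gb": 500},
    {"name": "Capacity", "free_gb": 2000},
]

def initial_tier(policy, size_gb):
    """Pick the tier for initial placement of a new LUN (simplified)."""
    if policy in ("Start High then Auto-Tier", "Highest Available Tier"):
        candidates = tiers                  # prefer fastest tier with space
    elif policy == "Lowest Available Tier":
        candidates = list(reversed(tiers))  # prefer cheapest tier with space
    else:  # "Auto-Tier": spreads across the pool; simplified here to first fit
        candidates = tiers
    for tier in candidates:
        if tier["free_gb"] >= size_gb:
            return tier["name"]
    raise RuntimeError("pool out of space")

print(initial_tier("Start High then Auto-Tier", 300))  # Performance
print(initial_tier("Lowest Available Tier", 300))      # Capacity
```

The difference between Start High then Auto-Tier and Highest Available Tier shows up later: the former lets the auto-tiering algorithm relocate data downward over time, while the latter keeps data pinned to the fastest tier.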
Figure 65.
Figure 66. Click Next
21. Click Assign LUNs to the servers and check both ESXi hosts. Click Next. (See Figure 67)
Figure 67.
22. Click Yes to acknowledge the Warning shown below. (See Figure 68)
Figure 68. Click Yes
23. Select your Storage System VNX-Block. Click Next. (See Figure 69)
Figure 69.
24. Click Pool 0 FAST Cache Enabled. Click Next. (See Figure 70)
Figure 70.
25. Click Thin to create a virtually provisioned LUN. Click Next. (See Figure 71)
A thin LUN lets you assign more storage capacity to a host than is physically
available. Storage is assigned to the server in a capacity-on-demand method from a
shared pool. A thin LUN competes with other LUNs in the pool for the available pool
storage. The storage system software monitors and adds storage capacity, as
required, to each pool, not each LUN. This simplifies the creation and allocation of
storage capacity. For thin LUNs, you must install the thin provisioning enabler on the
system.
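The capacity-on-demand behavior described above can be sketched as a toy model. This is an illustration of the thin-provisioning concept, not the actual VNX implementation:

```python
class ThinPool:
    """Toy model of a shared pool backing thin LUNs: physical capacity
    is consumed only when a LUN actually writes, not when it is created."""
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.consumed_gb = 0

class ThinLUN:
    def __init__(self, pool, subscribed_gb):
        self.pool = pool
        self.subscribed_gb = subscribed_gb  # capacity the host sees
        self.written_gb = 0

    def write(self, gb):
        # Thin LUNs compete for the same physical pool capacity.
        if self.pool.consumed_gb + gb > self.pool.physical_gb:
            raise RuntimeError("pool exhausted: add capacity to the pool")
        self.pool.consumed_gb += gb
        self.written_gb += gb

pool = ThinPool(physical_gb=100)
# Oversubscription: two 80 GB thin LUNs carved from a 100 GB pool.
lun_a, lun_b = ThinLUN(pool, 80), ThinLUN(pool, 80)
lun_a.write(30)
lun_b.write(40)
print(pool.consumed_gb)  # 70
```

The hosts together see 160 GB of addressable capacity, yet only 70 GB of the 100 GB pool is actually consumed — which is why the storage system monitors pool consumption and alerts you to add capacity before the pool runs out.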
Figure 71. LUN Features
26. Use the following details for LUN Properties and click Next. (See Figure 72)
a. Number of LUNs: 1
Figure 72. LUN Properties
27. Click Continue without adding LUNs to a folder. Click Next. (See Figure 73)
Figure 73. Select Folder
28. Review the Summary and click Finish. (See Figure 74)
Figure 74.
29. Review the Results screen and click Finish. (See Figure 75)
Figure 75.
30. To see your newly provisioned LUN click Hosts > Storage Groups. (See Figure
76)
Figure 76.
31. Under Storage Groups click vLab Cluster. In the bottom pane click LUNs. Note that the new LUN you just provisioned is available to both ESX hosts in the storage group. (See Figure 77)
Figure 77.
Figure 78.
33. Right-click Pool 0 - FAST Cache Enabled. Click Expand. (See Figure 79)
Figure 79.
34. You should have 3 SAS drives available with which to expand the pool. Be sure to deselect Perform a Background Verify on the new storage. Click OK. (See Figure 80)
Figure 80.
Figure 81.
Figure 82.
37. To monitor the progress of the automatic rebalance of data, right-click Pool 0 - FAST Cache Enabled. Click Properties. (See Figure 83)
Figure 83.
38. Under Operation In Progress you can see the Rebalancing % complete details. NOTE: It may take up to 5 minutes for the rebalancing to begin. Click Refresh to monitor progress. Click OK to close. (See Figure 84)
Figure 84.
Lab
Logging into Unisphere
1. If you are not already logged into Unisphere from the previous lab, please do the following. Otherwise continue to Step 4.
2. Launch Unisphere. (See Figure 85)
Figure 85. Launching Unisphere
3. Log in to Unisphere with the following credentials. (See Figure 86)
Username: sysadmin
Password: sysadmin
Figure 86.
4.
This lab uses Virtual Storage Appliances (VSA) to replicate EMC arrays. For
this reason you need to add the VNX-File VSA to your VNX-Block instance of
Unisphere. To do this click Home. (See Figure 87)
Figure 87. Access Home
5.
Figure 88. System List
6.
Figure 89.
7.
Enter the following IP in the Connect dialog and click Connect. (See Figure 90)
a. Connect: 192.168.0.35
Figure 90. VNX-File VSA IP
8.
Figure 91.
9.
Enter the following details on the login screen. Click Login. (See Figure 92)
a. Name: nasadmin
b. Password: nasadmin
c. Scope: Local (You must select Local Scope from the drop-down or this step will fail)
Figure 92. Login Details
10. You should now see the VNX-File appliance listed in your system list. (See
Figure 93)
Figure 93. VNX-File Added
11. To manage VNX-File click All Systems > VNX-File. (See Figure 94)
Figure 94. Manage VNX-File
12. To view the storage pools for file click Storage > Storage Pools. (See Figure
95)
Figure 95.
13. Note that you have two storage pools from which you can create File Systems.
(See Figure 96)
Figure 96.
Figure 97.
15. Note that you have two existing File Systems, FS01 and FS02. To create a new File System click Create. (See Figure 98)
Figure 98.
16. Create the new File System with the following details and click OK. (See Figure 99)
Figure 99.
NFS Shares
17. Now that you have created a new File System, you can create shares for users to access it. To do this click Storage > NFS. (See Figure 100)
Figure 100. Navigate to NFS
18. Note that you have two existing NFS Exports - /FS01 and /FS02. To create a
new NFS export click Create. (See Figure 101)
Figure 101.
19. Create the new NFS export with the following details. Click OK. (See Figure 102)
b. Path: /Lab4
Figure 102.
CIFS Shares
20. EMC VNX allows you to share the same File System as an NFS share and a CIFS share simultaneously, offering more flexibility in mixed OS environments. To create a CIFS share from your File System click Storage > CIFS. (See Figure 103)
Figure 103. Navigate to CIFS
21. Currently you do not have any CIFS shares set up. To enable EMC VNX to replace your physical file servers you can set up a CIFS server in Unisphere. This step has been completed for you, but to review it click CIFS Servers. You have a standalone CIFS Server available that is using a VNX Interface configured with IP 192.168.1.35. (See Figure 104)
Figure 104.
22. To create a new CIFS share click Shares > Create. (See Figure 105)
Figure 105.
23. Create your new CIFS share with the following details and click OK. (See Figure 106)
c. Path: \Lab4
Figure 106.
24. Review your new share under the shares tab. (See Figure 107)
Figure 107.
25. To access your new share you can browse to the CIFS Server by doing the following. Click Start. Enter the NetBIOS name of the CIFS Server: \\cifs01. (See Figure 108)
Figure 108. Browse to cifs01
26. Enter the following credentials to connect. Click OK. (See Figure 109)
a. Username: cifs01\administrator
b. Password: Password
27. Windows Explorer should now open and you can right-click Lab4 and click Map Network Drive. (See Figure 110)
Figure 110.
28. Accept the defaults and click Finish. (See Figure 111)
Figure 111.
A very useful feature of your unified VNX is the ability to take periodic snapshots of your file systems using File System Checkpoints so that your users can easily recover deleted files. To see an example of this, do the following.
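The workflow you are about to walk through — checkpoint, delete, restore — can be sketched in a few lines. This is a toy model of the concept only; real VNX checkpoints are copy-on-write snapshots taken at the file-system level, not full copies:

```python
import copy

# Toy file system: path -> contents.
fs = {"/Lab4/New Doc.txt": "Original Text"}

# Take a checkpoint: a point-in-time, read-only view of the file system.
# (Modeled here as a deep copy; VNX uses copy-on-write, not a full copy.)
checkpoint = copy.deepcopy(fs)

# A user deletes the file...
del fs["/Lab4/New Doc.txt"]

# ...then restores it from the checkpoint, without administrator help.
fs["/Lab4/New Doc.txt"] = checkpoint["/Lab4/New Doc.txt"]
print(fs["/Lab4/New Doc.txt"])  # Original Text
```

Windows exposes exactly this restore path through the Restore Previous Versions dialog you will use below.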
29. In your new mapped drive, right-click, click New, then click Text Document. (See Figure 112)
Figure 112.
30. Call your new text document New Doc. (See Figure 113)
Figure 113.
31. Open your New Doc, enter the following text, and save the changes. (See Figure 114)
a. Original Text
Figure 114. Enter text
32. You will now take a manual File System Checkpoint of your Lab4 file system.
To do this click Data Protection > Create File System Checkpoint. (See Figure
115)
Figure 115.
33. Create your new checkpoint with the following parameters and click OK. (See Figure 116)
Figure 116.
34. Now browse back to your mapped drive and delete the file New Doc. (See
Figure 117)
Figure 117.
35. Now that you have deleted your file, you can restore it using the File System Checkpoint taken earlier. Right-click the Lab4 mapped drive and click Restore Previous Versions. (See Figure 118)
Figure 118.
36. Under Folder versions click the manual checkpoint that you took earlier and
click Open. (See Figure 119)
Figure 119.
37. In the Checkpoint explorer window right-click New Doc and click Copy. (See Figure 120)
Figure 120.
38. Return to your mapped drive and paste the checkpoint version of the file to
restore it. (See Figure 121)
Figure 121.
39. Finally open the file to verify the text you typed earlier. (See Figure 122)
Figure 122.
Lab
In this lab you will view various Analyzer options and open a pre-configured file with detailed performance data. As this is a demo lab there is no load on the array to view performance data in real time, so the lab will only be based on archived performance data.
1.
2.
Here you will see four panes for various options within Analyzer. (See Figure 124)
3.
To start and stop Analyzer gathering performance data on the array, click Performance Data Logging under Settings. (See Figure 125)
Here you can specify intervals and stop automatically after a certain number of days.
As this is a demo environment there is no load on the array so click OK
without starting Data Logging. You will view an existing performance file later
in the lab.
4.
5.
Performance data can be viewed in real time, or archive files can be opened. To view performance details in real time you can use the charts under Performance Charts. (See Figure 128) (Note: as there is no load on this array, you will view an archive file instead.)
6.
If there are multiple .nar files over a period of time, they can be merged into one single file (Merge Archive).
.nar files can be copied from the array to a local machine using the Retrieve Archive option.
.nar files can also be dumped to an Excel file if you have another application that will do analysis on the numbers.
7.
b. Click vLab.nar
c. Click Open
8.
You can specify time ranges to view, so if you have a .nar file that spans days or weeks you can narrow down a specific time range within those days/weeks.
Click OK without changing the times. (See Figure 131)
9.
Right-click anywhere on this page to see the options available. To view as much detailed information as possible select Performance Detail. (See Figure 132)
10. You will then see options for performance counters you want to view. (See
Figure 133)
You can view performance counters of LUNs, Storage Pools, RAID Groups, and Storage Processors (SP). As you can see, one LUN is selected and Utilization is the counter selected. Details of this LUN's performance can be seen in the chart in the right pane.
11. Scroll down the list of LUNs and select EMCW-LUN10. In the pane under that select Response Time (ms) and Total Bandwidth (MB/s). (See Figure 134)
Now you can see various response times in the chart in the right pane. The Y axis depicts the performance counter and the X axis depicts the time.
Figure 134. Viewing Response Time and Bandwidth for LUN EMCW-LUN10
12. Click Storage Pool and SP to browse for other performance counters to view
on various components on the array. (See Figure 135 and Figure 136)
13. If you right-click on a graph in the right pane you have the option to modify the line colours, types, and so on by selecting Chart Configuration. This graph can also be saved as an image file, copied, or printed. (See Figure 137)
14. When you have finished, click X on both windows to close Unisphere Analyzer.
Conclusion
Summary
Support
FAST Overview
Unisphere Analyzer
Customer
For any issues or feedback, or for further information about EMC products, solutions,
and demonstrations, contact your EMC representative.
EMC personnel
For any issues or feedback, please email demoteam@emc.com. Please include as
much detail as possible to ensure your email is addressed in a timely manner.
References
For additional labs in this series, please download the following guides:
VMware with VNX Integration - Data Protection with VNX Snapshot and Clones.
There is an additional separate lab for an in-depth analysis of FAST and FAST VP with Unisphere Analyzer. This lab is called Unisphere Analyzer - Evaluating FAST Cache and FAST-VP.