Introduction To HPE Nimble Storage DHCI Rev1.41
Storage dHCI
Lab guide
Rev. V1.0
Confidential – For Training Purposes Only
Use of this material to deliver training without prior written permission from HPE is prohibited.
Introduction to HPE Nimble Storage dHCI
The information contained herein is subject to change without notice. The only warranties for
HPE products and services are set forth in the express warranty statements accompanying
such products and services. Nothing herein should be construed as constituting an additional
warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or
omissions contained herein.
This is an HPE copyrighted work that may not be reproduced without the written permission of
Hewlett Packard Enterprise. You may not use these materials to deliver training to any person
outside of your organization without the written permission of HPE.
VMware ESX® and ESXi®, VMware vSphere and vSphere Client®, and VMware vCenter®
are trademarks or registered trademarks in the United States and certain other countries.
Microsoft Windows and Windows Server are trademarks of Microsoft Corporation. HPE
Nimble Storage Array basic volume operations is an independent publication and is neither
affiliated with, nor authorized, sponsored, or approved by, Microsoft Corporation.
Printed in USA
Table of Contents
Introduction
Objectives
Target audience
Lab configuration
Lab 1: Initial Array Configuration
Task 1: Discovering an array
Task 2: Configuring subnets
Lab 2: vCenter, Cluster, Server and Datastore configuration
Task 1: Nimble dHCI stack-setup manager
Lab 3: The HPE Nimble dHCI Plug-in for vCenter
Task 1: The HPE Nimble dHCI Plug-in for vCenter
Lab 4: The HPE Nimble dHCI Configuration Checker
Task 1: Configuration Check
Lab 5: Infrastructure Management
Task 1: Expanding the cluster by adding a new server
Task 2: Storage System Information
Lab 6: One Button Updates
Task 1: One Button Updates
Lab 7: VMFS Datastores
Task 1: Creating a VMFS Datastore
Task 2: Growing a VMFS Datastore
Task 3: Snapshots and Snapshot Schedules
Task 4: Zero Copy Clones
Task 5: Deleting VMFS Datastores
Lab 8: HPE Nimble dHCI and VMware VVols
Task 1: Create a VVols Datastore
Task 2: Create a VMware storage policy
Task 3: Create a VVol VM by using a storage policy
Task 4: VVol delete and restore from recycle bin
Task 5: Create manual snapshots for VVols from VMware
Task 6: Restore or create a clone from a snapshot
Conclusion/review
For users interested in the setup process, please start with Addendum A.
Objectives
The goals and tasks in this lab guide include:
Target audience
These labs are designed for HPE dHCI customers and potential customers, system administrators (SAs), systems engineers (SEs), and any HPE partners new to Nimble Storage arrays who need practical experience with the HPE dHCI solution. A good understanding of Nimble Storage array technology, as well as iSCSI storage connectivity, is recommended for this lab. Some experience with VMware vCenter 6.7 or later is also recommended.
Lab configuration
The resources used for this specific lab are shown in the diagram below.
NOTE: This lab is configured using a virtual version of the HPE Nimble array, available for lab use only. The virtual version provides all the same administrative operations that a physical array provides, but it does not provide the same performance or high availability as a physical array, and it consists of only one controller node. The lab also implements virtual ESX hosts and therefore has no direct iLO integration with the servers.
This lab environment is configured with the default and recommended three dHCI network subnets:
The lab environment has a pre-installed HPE Nimble dHCI cluster with vCenter and two ESXi servers. A
third ESXi server is available to expand the cluster.
Also refer to the best practice and configuration guides at HPE InfoSight for more details:
https://infosight.hpe.com
1. From the desktop double click the “Tools” folder to open it.
2. Double click the “Nimble Lab Script” icon to start the script menu.
3. From the menu, select Option 13 by typing 13 and then pressing Enter. When asked to confirm the action, select Y and press Enter.
4. Please wait for the operation to complete. It should take less than a minute, and the output should state that Array-06 changed from a PoweredOff to a PoweredOn status.
5. Once completed, press Enter to close the PowerShell window. (You might have to close the window manually if it does not close automatically.)
Wait about 1-2 minutes for the array to fully boot before continuing with this lab.
Performing the initial array configuration of a new Nimble dHCI array is a two-step process.
In the first step, you will discover the uninitialized Nimble dHCI array using Nimble Setup Manager (NSM).
NSM uses a low-level network scan to find uninitialized Nimble arrays and then uses a “zero config
network” to connect to the array instance. For this step, the Windows server with Nimble Setup Manager
installed and the uninitialized Nimble array must be on the same network segment.
After the array is connected to the temporary management IP address, the wizard provides a step-by-step guide through the process of naming the array, establishing a new management IP, and setting up the administrator (admin) user password.
After the management IP address is set, you can connect to the new address and log on using the new
username and password to complete the second step in the process.
Objectives:
After completing this lab, you should be able to perform the following tasks using the wizard provided:
NOTE: It might take a few seconds for the application to start. In some environments a User
Account Control message will be displayed. If you see this message, click the Yes button to
proceed.
2. Nimble Setup Manager (NSM) displays all the arrays that are detected on the network but have not yet been initialized. NSM uses UDP port 5353 to discover arrays. Because of the protocol used, this traffic is normally not forwarded across network segments or subnets. The Windows server running NSM must be on the same subnet, and sometimes the same network segment, as the array for NSM to find the Nimble array. As shown in the following screenshot, NSM should show an “af-xxxx” array that is uninitialized. If you see any other arrays, just ignore them.
Select the radio button next to the af array. Then click Next to proceed.
NOTE: If more than one AF array is listed, please ask the instructor for help selecting the correct array. Selecting the incorrect array will cause issues later in the lab, and the lab will fail.
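The port number in Step 2 is worth a closer look: UDP 5353 is the multicast DNS (mDNS) port, which is why discovery traffic normally does not cross subnet boundaries. As an illustration only (the exact discovery protocol and service name NSM uses are not documented in this guide), a minimal mDNS-style query packet in standard DNS wire format can be built like this:

```python
import struct

def build_mdns_query(service: str) -> bytes:
    """Build a minimal mDNS PTR query in DNS wire format.

    mDNS queries use transaction ID 0 and are sent to the multicast
    group 224.0.0.251 on UDP port 5353, the same port the lab notes
    NSM uses for array discovery.
    """
    # Header: ID=0, flags=0, QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack("!6H", 0, 0, 1, 0, 0, 0)
    # QNAME: each dot-separated label is length-prefixed, then a 0 terminator
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in service.split(".")
    ) + b"\x00"
    # QTYPE=PTR (12), QCLASS=IN (1)
    question = qname + struct.pack("!2H", 12, 1)
    return header + question

# Generic DNS-SD service-enumeration name, used here purely for
# illustration; the real NSM service name is an unknown.
packet = build_mdns_query("_services._dns-sd._udp.local")
```

Sending such a packet to 224.0.0.251:5353 would only ever reach hosts on the local segment, which is exactly the constraint the step above describes.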
3. NSM assigns the selected array a temporary IP address on a network segment known as a zero config network, and then opens a web browser to the array to complete the setup.
Click OK to continue.
NOTE: In some cases, you might experience a short network interruption. This is normal while the network discovers the array.
4. A Google Chrome web browser opens automatically to the URL of the new array. If a privacy warning is displayed for the connection, click Advanced and then Proceed.
5. The HPE Nimble Storage terms and conditions, end user license agreements and third-party software
notices are displayed. After reading the agreement, scroll down to the bottom of the agreement and
select the checkbox to acknowledge it. Then click the Proceed button.
NOTE:
• The acknowledgement checkbox is not active until you have scrolled through all the
notices.
• You might need to use the outside scroll bar to view the acknowledgement checkbox
and the Proceed button.
6. The top part of the screen displays the array serial number, model and software version. Verify that
the af array was selected by looking at the first two letters of the array serial number.
In the bottom part of the screen, select the Set up this array but do not join a group radio button.
Then click Next to proceed.
NOTE: Use caution if you copy and paste from this lab guide. Be sure you do not copy blank
spaces.
Please use the provided password for the system to make sure the script later in the lab will
work correctly.
8. Initialization takes a few minutes to complete. Wait for the process to finish before continuing.
9. After initialization has completed, the success dialog window is displayed. Click Continue to proceed.
1. The browser window opens to the newly initialized array at IP address 192.168.100.120. If prompted with a privacy warning, click Advanced → Proceed.
2. When the array login screen appears, enter the following credentials:
– Username: admin
– Password: !HPEstorage2050 (case-sensitive)
Click Log in to log on to the array as an administrator.
3. After logging in to the array, you might be presented with a usage warning. Click OK to proceed.
4. After login, the Setup wizard is displayed with five steps to complete:
– Subnet Configuration
– Interface Assignment
– Domain
– Time
– Support
5. From the first screen, you will configure the data networks. The first step is to configure the subnets.
In the top part of the screen, verify the Management IP settings. For this lab we are only using a
PRIMARY Management IP.
Delete the SECONDARY IP address before proceeding.
6. In the subnet section, ensure that the management network is set to the Mgmt only traffic type.
9. The Interface Assignment step is displayed. Configure the following interfaces using these
parameters:
• The eth1 interface is already set as a management subnet. Leave this setting unchanged.
• Set eth2 to management subnet.
• Set eth3 to Data-1 subnet and enter 172.0.1.121 for the data IP address.
• Set eth4 to Data-2 subnet and enter 172.0.2.121 for the data IP address.
• Set eth5 to Data-1 subnet and enter 172.0.1.122 for the data IP address.
• Set eth6 to Data-2 subnet and enter 172.0.2.122 for the data IP address.
• Set the controller A diagnostic IP address to: 192.168.100.121
• Set the controller B diagnostic IP address to: 192.168.100.122
Click Next to continue.
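The interface assignments above pair each data IP with its subnet (eth3/eth5 on Data-1, eth4/eth6 on Data-2). That pairing can be sanity-checked programmatically; the sketch below uses Python's standard ipaddress module, and the /24 prefix lengths are an assumption for illustration, since the lab guide does not state the subnet masks:

```python
import ipaddress

# Interface-to-IP plan from the assignment step above.
# The /24 prefix lengths are assumed; adjust to match the real masks.
plan = {
    "eth3": ("172.0.1.0/24", "172.0.1.121"),
    "eth4": ("172.0.2.0/24", "172.0.2.121"),
    "eth5": ("172.0.1.0/24", "172.0.1.122"),
    "eth6": ("172.0.2.0/24", "172.0.2.122"),
}

def misassigned(plan):
    """Return the interfaces whose IP does not fall inside its subnet."""
    return [
        nic for nic, (subnet, ip) in plan.items()
        if ipaddress.ip_address(ip) not in ipaddress.ip_network(subnet)
    ]
```

An empty result means every data interface lands in the subnet it was assigned to; a non-empty result points at the mistyped entry before the wizard rejects it.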
The following diagram shows the port details for the configuration for controller A of the array.
Controller B would have identical settings that get activated when controller B is active.
In the diagram, the array has two dual-port Ethernet NICs with the left-most port on each NIC
connected to the Data-1 network and the right-side ports to the Data-2 network. Each interface port
will have a dedicated IP address and an additional floating discovery IP for each data network that is
always active on one of the network ports for the subnet.
The diagnostic IP address is specific to each controller. Each controller’s diagnostic IP is active at all times to allow support users to connect to a standby controller for troubleshooting, if required. The diagnostic IP does not fail over during a controller failover.
The Array Discovery IP is a floating IP address on the management port and will fail over between management ports as well as between controllers. This is the main IP address for management and discovery of the array.
Only one management port will be active on a controller at any one time. If the management port
fails, the Discovery IP and management functions are moved to the second management port of that
controller. If the second port fails, all operations are moved to the standby controller, which then
becomes the active controller for the array. In some events, a single network interface failure might
trigger a controller failover. Refer to the system architecture documentation for more details.
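The failover order just described (first management port, then the second port on the same controller, then the standby controller) can be summarized in a short sketch. This is a deliberate simplification for illustration only; as the text notes, a real array weighs more health signals than simple link state and may fail over a controller directly:

```python
def active_management_path(port1_up: bool, port2_up: bool):
    """Simplified model of the management failover order described above.

    Illustration only: a real array considers more health signals than
    link state, and some single-interface failures can trigger a full
    controller failover directly.
    """
    if port1_up:
        return ("active controller", "management port 1")
    if port2_up:
        # Discovery IP and management functions move to the second port.
        return ("active controller", "management port 2")
    # Both management ports down: the standby controller takes over.
    return ("standby controller (now active)", "management port 1")
```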
NOTE: Please make sure the DNS IP is entered correctly to avoid issues later.
NOTE: Please make sure the NTP IP is entered correctly to avoid issues later.
• Time Zone
i. Region: America
ii. Country/state/city: Los_Angeles
Click Next to continue.
NOTE: If an error message is displayed when you click the Finish button, your session might
have timed out. You will need to return to Step 1 of Task 3 in this lab to log in and re-enter the
configuration details.
14. It can take several minutes for the array to complete all the configuration changes and restart the services. Please wait for the process to complete before continuing.
15. After all the services are running, the Setup Complete message is displayed.
Click Continue to return to the array management UI login screen.
NOTE: Please make sure not to skip this step, as it can affect the rest of the setup.
2. Then open a new Chrome browser window by selecting the Chrome icon on the desktop.
3. In the browser address bar, enter the newly initialized array IP address 192.168.100.120, or select dHCI-Array-06 from the Nimble array bookmark folder.
5. If an array login screen appears, log in to the array UI using the following credentials. If not, go to the next step:
– Username: admin
– Password: !HPEstorage2050 (case-sensitive)
Click Log In to log on to the array.
6. The HPE Nimble dHCI setup welcome screen is presented. On this screen you can see a quick overview of the configuration steps. Take a few moments to familiarize yourself with these items.
- Welcome
- Configure vCenter
- Configure Cluster
- Add New Servers
- Configure Servers
- Provision Datastores
- Summary
You may need to scroll to the bottom to see the Next button.
NOTE: If the page does not load, click the browser refresh button. You might also have to select Advanced → Proceed if a security warning is displayed. If that does not work, check that you closed and reopened the browser as specified in Step 1 of this lab.
8. To save time in the lab, a vCenter server has already been configured and can be used. In a new installation, you also have the option to deploy the vCenter server.
Select the Use an existing vCenter Server radio button.
NOTE: If a “Failed to login to vCenter” error is displayed, check the username and password. If the error persists, it is possible that the DNS IP was entered incorrectly during the array setup. Ask the instructor to assist in opening a CLI (PuTTY) session to the array (192.168.100.120) and, using the command line, check whether the DNS setting is correct, or correct it if it is not.
The command line options to check DNS are:
To check the DNS setting, use “group --info” (the DNS setting is listed near the top of the output).
To set DNS, use “group --edit --dnsserver 192.168.100.2” (an error is displayed after the command executes, but “group --info” will show the new DNS IP).
10. Click the VMware EULA link to open the VMware license agreement in a new browser tab. (There may be two links, depending on the version.)
12. In the Configure Cluster section of the wizard, select the Create new Cluster from the discovered ProLiant servers radio button.
Click NEXT.
14. In the Add New Servers dialog box, three ESXi servers should be detected and listed as new servers available to be added to the configuration. In some cases, not all three servers are detected. If so, click the Refresh button to detect the missing servers. If the refresh fails to detect the servers, use the next step (Step 15) to manually add the missing servers.
• Select 2 ESXi servers by clicking the checkbox for each one. (We recommend that you select the servers with IPs 192.168.100.221 and 192.168.100.222.)
• Make sure to leave one server (192.168.100.223) unselected. (This server will be added in a later lab.)
15. If you were able to select the 2 servers in the previous step, select Next to continue. If there were not enough servers listed, select the ones that you can, and then manually add the missing servers using the Add Server button.
Enter the missing server IP address(es): 192.168.100.221 or 192.168.100.222.
Enter the root password for the ESXi server: !HPEstorage2050
Select Add to add the server. Repeat for all missing servers.
Select Next when done.
16. Configure the new IPs for the servers. Note that each ESX server has two network interfaces on the management subnet and one interface on each data subnet.
• Management IP Range: 192.168.100.50
• iSCSI IP Range 1: 172.0.1.50
• iSCSI IP Range 2: 172.0.2.50
• ESXi Root Password: !HPEstorage2050
• iLO Admin Password for Stack Manager: !HPEstorage2050
Click NEXT.
NOTE: A contiguous block of IP addresses is required for the management and iSCSI IPs specified. The IP entered is considered the starting IP address, and the shadow field shows the top IP address to be used. All IPs in between must be available for use during the configuration.
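The contiguous-block rule in the note above is simple arithmetic: the top address shown in the shadow field is the starting IP plus (count - 1). A small sketch using Python's standard ipaddress module makes this concrete; the address count of 4 is an assumption for illustration (2 servers times 2 management NICs, per the interface layout described in Step 16), not a figure stated by the wizard:

```python
import ipaddress

def ip_block(start: str, count: int):
    """Return the contiguous block of `count` addresses starting at `start`.

    The dHCI wizard treats the entered IP as the starting address; the
    shadow field it displays corresponds to the last element returned here.
    """
    first = ipaddress.ip_address(start)
    return [str(first + i) for i in range(count)]

# Assumed counts for illustration: 4 management addresses, 2 per data subnet.
mgmt = ip_block("192.168.100.50", 4)
iscsi1 = ip_block("172.0.1.50", 2)
iscsi2 = ip_block("172.0.2.50", 2)
```

Every address in each returned list must be free before starting the configuration, which is exactly what the note requires.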
17. By default, two datastores will be created and used for the cluster heartbeat. These cannot be changed or removed. Optionally, one or more new datastores can be added as part of the deployment, or you can use the vCenter Plug-in tools to create datastores after deployment.
Select the Add Datastore button to add an additional datastore during configuration.
19. Make sure the new datastore is listed and then select Next to continue.
20. The Summary window will be displayed. Verify that all the settings are correct and then select Finish
to start the configuration.
NOTE: At this point, the setup will check that the DNS and NTP settings on the array are valid. If these IPs were entered incorrectly earlier in the array setup, an error will be displayed. Please ask the instructor to assist you in opening a CLI session to the array (192.168.100.120) and use the command line to check and correct the DNS and NTP settings. The command line options for NTP are:
To check the NTP setting, use “group --info” (the NTP setting is listed near the top of the output).
To set NTP, use “group --edit --ntpserver 192.168.100.2” (an error is displayed after the command executes, but “group --info” will show the new NTP IP).
21. The setup progress screen will be displayed. Feel free to expand some of the progress menu items and follow the setup configuration steps. Please be patient; it will take several minutes for the configuration and setup to complete. Wait for all the tasks to complete before proceeding.
22. Once 100% complete, the setup task is done. At this point, close the browser.
24. Because the lab environment uses a virtualized array and servers, the lab requires running a custom script to populate the iLO information for the virtual environment.
The script output should show that the server information was successfully populated. If not, ask the instructor for help. Once the data is successfully loaded, this window can be closed.
NOTE: If the standard password was not used during the array initialization, the script will need to be adjusted with the correct array admin password.
25. At this point, your HPE Nimble dHCI stack is complete. In the next lab you will explore the management of the dHCI environment.
Objectives:
• Log in to vCenter
• Locate and open the dHCI plug-in for vCenter
• Navigate the dHCI plug-in
1. From the desktop, double-click the Google Chrome icon to open the browser if it is not open already.
NOTE: Predefined bookmark folders for the HPE Nimble array and VMware vCenter have been created in the bookmark bar of the Chrome browser. These bookmarks have been added to the lab environment for your convenience. They are not automatically created as part of the product or any installation script.
There is also a link to the HPE InfoSight Welcome Center. This site has valuable resources for planning the installation of an HPE Nimble dHCI environment.
2. Select the vCenter-C bookmark from the VMware vCenter bookmark folder, or manually enter https://192.168.100.45/ui in the URL bar. If prompted with a privacy warning, click Advanced → Proceed. (You may have to do this twice due to name resolution.)
4. In the lab we are using VMware 6.7 with the HTML5 client. HPE Nimble dHCI also supports VMware 7.0. The vCenter home screen should be the default landing page; if not, you can return to it by using the Menu dropdown and then the Home option.
5. The vCenter home screen shows an overview of the cluster. In this case the cluster has two ESX servers and no VMs at this stage. Look at the Installed Plugins list and note that the Nimble Storage vCenter Plug-in is installed. This is done as part of the stack setup during the dHCI installation.
NOTE: If a message about licenses that are about to expire is displayed, click the X to the right of the message to dismiss it. Sometimes this message covers the message about the Nimble Plug-in being loaded, and you may need to refresh the screen.
NOTE: If the Nimble Plug-in is not listed, try logging out of the vCenter web interface and logging back in again. New plug-ins are only loaded during initial login. There might also be a message at the top of the screen stating that new plug-ins were installed and the browser needs a refresh. If this does not work, ask the instructor for help.
6. In the Home view, the HPE Nimble Storage Plug-in is available as an option in the left-hand menu. The plug-in can also be reached by selecting the Menu drop-down and then selecting HPE Nimble Storage from the drop-down menu.
7. In the right panel, a list of Nimble array groups is displayed. Note the gr-dHCI-arrays group that was created during dHCI setup, with overview information on the array group utilization.
Click the blue gr-dHCI-arrays link to expand the array group information.
8. The array group overview screen is displayed, which is similar to the standard HPE Nimble UI overview screen. Panels for array Performance, Protection, Usage, System info and Alarms are displayed. There are also tabs near the top of the display to get more information about Datastores, VVol VMs, Inventory, Events, Configuration Checks and Updates.
Take a moment to look at the information displayed in the main panels.
Note the Systems panel. This panel is specific to the dHCI environment and is not part of the standard array UI. It provides a summary of the systems configured and indicates whether any systems have errors or warnings.
NOTE: Try to maximize the display area in order to see all the tabs and the scrollbar. If
required, change the zoom for the browser to show more details. Also minimize the Recent
Tasks menu using the double down arrows on the bottom right of the screen.
9. Note that if you hover the mouse over the numbers for the ESXi Hosts, Datastore, VM, Server or
Array, more details about the numbers are displayed in the pop-up.
10. Above the system status panels are a set of menus to manage the dHCI environment. These menus
will be covered in the next section in detail.
In this lab we will deliberately introduce an error and then show you how to detect and fix it.
Objectives:
12. Note that of the 61 rules that were checked, 6 returned errors. Most of these errors are due to the fact that we are using virtual ESX servers in the lab environment instead of physical DL servers, and these virtual ESX servers do not have iLO access. These errors can be ignored in the lab environment but should be addressed if they show up in a production environment.
Some dHCI operations require VMware DRS to be enabled in order to function properly. In the lab environment, we will turn off DRS to show how the resulting error is reported by the configuration checker.
13. To create an error, select the main Menu option and then select “Hosts and Clusters” from the drop-down menu.
14. Select the dHCI-Cluster on the left and then select Configure on the right panel. Under Services
select the vSphere DRS service.
Note that the Service is currently turned on.
Select the Edit option on the far right.
15. Turn off vSphere DRS by switching the slider button to grey.
Leave the other setting as is and select OK
16. Return to the HPE Nimble storage plug-in by selecting the main Menu and then select “HPE Nimble
Storage” from the drop down.
19. In the confirmation dialog select Rerun to confirm re-running the configuration checks.
20. A task will be started in the background to perform the configuration test. This task can be monitored in the Recent Tasks dialog. If the Recent Tasks dialog is not open, there should be an option to open it at the bottom left of the browser window. Note that sometimes you might have to change to full-screen view to see the Recent Tasks option.
21. Once the task completes, select the refresh button in the HPE Nimble plug-in view to update the results. Note that the number of errors has now changed from 6 to 8, and if you scroll down through the list of checks, you will notice that the vSphere DRS Setting rule gives us an error.
NOTE: To optimize screen space, you can minimize the Recent Tasks dialog using the double down arrows on the right.
22. Now let’s fix the DRS issue we just created. Select the main Menu option and then select “Hosts and Clusters” from the drop-down menu.
23. Select the dHCI-Cluster on the left and then select Configure on the right panel. Under Services
select the vSphere DRS service.
Note that the Service is currently turned off.
Select the Edit option on the far right.
Objectives:
1. Click the Menu icon and then click HPE Nimble Storage. This will open the HPE Nimble
vCenter Plug-in module that allows for easy storage management from within the vCenter Server UI.
2. Select the gr-dHCI-arrays group name for the Nimble array we are working with.
3. Select the Inventory tab and then select the Servers option from the drop-down menu.
4. The 2 ESXi servers added during the configuration are listed. Take a minute to look at all the server
information that is displayed.
Note: Since we use virtual ESX hosts in the lab, we need to provide “fake” iLO information to the array. If any of the servers show an error for the iLO, it might be that our script failed or did not run. This should not prevent you from performing any actions in the lab.
5. Select the “+” option in the menu bar to add a new server to the cluster.
6. The discovery process should discover the remaining available server in your environment. If the server is not discovered, select the Refresh button to run the discovery again. If the server is still not discovered, skip to Step 7 to manually add the server.
If the server is discovered, select the checkbox for the server, which should be 192.168.100.223.
Read the note below, then click Next.
Note: Two servers show as unsupported. If you hover over the red icon, the reason these servers are excluded is displayed. This can help you troubleshoot if you did not discover the servers you were looking for.
7. This step applies only if the server was not discovered. Select the Add Server button to manually add the server.
• Enter the server IP address, which should be 192.168.100.223
• Enter the root password !HPEstorage2050
• Select Add to add the server to the list.
Once the server has been added and the checkbox has been selected, then select Next to continue.
Scroll down within the wizard box and complete the password information:
• ESXi Root Password: !HPEstorage2050
• iLO Admin Password: !HPEstorage2050
Select ADD to complete the operation.
10. Once all the steps complete, we need to run a script to “fake” the iLO information in the array, because we are using virtual ESX servers and not real DL servers in the lab.
11. In order to update the server information in the plug-in display, we need to exit the plug-in and then reload the information.
Select the Menu and then select Home from the drop-down menu
12. Click the Menu icon and then click HPE Nimble Storage. This will open the HPE Nimble
vCenter Plug-in module that allows for easy storage management from within the vCenter Server UI.
13. Select the gr-dHCI-arrays group name for the Nimble array we are working with.
14. Select the Inventory tab and then select the Servers option from the drop-down menu.
15. The 3rd ESXi server will be listed under the Servers section of the Nimble plug-in.
NOTE: If the iLO address shows an error, then run the load_ilo script on the desktop and
refresh the browser window.
1. Select the Inventory tab and then select the Storage option from the drop-down menu.
2. The array overview panel displays all the important information about the array: group name, SW
versions, pool utilization, and general array health. If replication were configured, the replication
partner and status information would be displayed too.
NOTE: Some of the links will launch the HPE Nimble UI. The login for the UI is:
Username: admin (lowercase a)
Password: !HPEstorage2050
Feel free to explore the Nimble UI if you have time.
3. Select the Events menu item in the menu bar. This displays array log events as well as any alarms
that were triggered on the array. The same information can also be obtained from the array UI, but
having it available in the plug-in makes it easy to monitor all events in the solution.
Objectives:
2. The browser should open to the vCenter login page. If for some reason this did not happen,
select the vCenter-C bookmark from the VMware vCenter bookmark folder or manually enter
https://192.168.100.45/ui in the URL bar. If prompted with a privacy warning, click Advanced →
Proceed (you may have to do this twice due to name resolution).
5. Select the gr-dHCI-arrays group name for the Nimble array we are working with.
7. For legal reasons, the HPE version of the appropriate VMware ESX software depot has to be
manually downloaded from the VMware site and then uploaded to the array for use in the upgrade
process.
To see the correct file name for the ESX install file, click the information icon.
To start the upload, select the Choose File button.
9. Wait for the file to fully complete the upload (signified by a green tick) and then select Continue.
10. The Pre-Check operations will check if the system is ready for an upgrade.
Once the checks are completed, select Continue
NOTE: You might get an error about DRS not being turned on or not being in fully automated
mode. This error was introduced into the environment to demonstrate the configuration
checks function. See Lab 2 to correct this error, then try again.
11. Please read the End User License Agreement (EULA) or scroll to the bottom and then select the
checkbox to acknowledge you have read the agreement.
NOTE: If the checkbox does not activate when you scroll down, make sure the browser
window is not at full size. Use the “restore down” button in the top right
to take the Chrome browser window on the lab desktop out of full-screen mode. This normally
fixes the checkbox, and you should be able to select it and continue. NOTE: You
may need to resize the window a few times before you can select the checkbox.
12. The update processes will download the Nimble Connection Manager (NCM) from InfoSight and
stage the download on the ESX server.
NOTE: If the update process fails to start with an error, it is possible that you used “_”
(underscore) instead of “-” (hyphen) in the cluster name. The current update version has an
issue with underscores in the cluster name. We do not have an easy solution to this in
the current lab environment. You can try renaming the cluster, but it is not 100% confirmed
that this will work. This issue will be fixed in the next version of the update manager.
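The naming restriction in the note above can be checked before kicking off an update. A minimal sketch — the hyphen-only rule comes from the note; the function names themselves are illustrative:

```python
def cluster_name_ok(name: str) -> bool:
    """True if the cluster name avoids underscores, which the current
    dHCI update manager cannot handle (per the note above)."""
    return "_" not in name

def suggested_name(name: str) -> str:
    """Suggest a compliant name by swapping underscores for hyphens."""
    return name.replace("_", "-")

print(cluster_name_ok("dHCI_Cluster"))  # -> False
print(suggested_name("dHCI_Cluster"))   # -> dHCI-Cluster
```

Running this check on your intended cluster name before the update saves a failed dry run.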
After some more tests and a dry run on the first ESX server, the install will start automatically.
13. Once the update starts, the ESX server will be put in maintenance mode and then the software
update will be automatically performed.
The Upgrade monitor can be closed at any time, and the update will continue in the background.
14. To return to the Update monitor at any time, select the Update option in the Nimble vCenter
Plug-in and then select the % complete link.
15. The status of the ESX server updates can be monitored from the Hosts and Clusters view (select the
menu drop-down and then select the Hosts and Clusters option). The ESX server currently in the update
process is the one in maintenance mode. After the update is complete, the server will
reboot, and once back in the cluster it will exit maintenance mode.
16. Once one server update is complete, the update process will automatically continue with the next
server in the list until all servers in the cluster are at the same OS and NCM levels.
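The rolling sequence in steps 13–16 can be sketched as a simple loop. This is a local illustration of the ordering only, not the real orchestration; the host names and step labels are made up:

```python
def rolling_update(hosts):
    """Illustrate the per-host order the dHCI update manager follows:
    one host at a time enters maintenance mode, is updated, reboots,
    and rejoins the cluster before the next host starts."""
    log = []
    for host in hosts:
        log.append(f"{host}: enter maintenance mode")
        log.append(f"{host}: install ESXi update + NCM")
        log.append(f"{host}: reboot")
        log.append(f"{host}: exit maintenance mode")
    return log

for line in rolling_update(["esx-1", "esx-2", "esx-3"]):
    print(line)
```

Because only one host is out of the cluster at a time, the VMs keep running on the remaining hosts throughout the update.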
17. This process can take up to 15 minutes to complete for the 3 ESX hosts in this cluster. Once the
update is complete, the new cluster version information is displayed in the array stack configuration.
Note that the HPE Service Pack for ProLiant (SPP) still needs to be run manually on the servers to
update the firmware on the servers.
18. Once the update is complete, the catalog will be updated with the current version of the stack. If a
new version becomes available, the catalog will list it.
Note that the catalog might also list versions that are not applicable to the configuration; these will
show as grayed-out options that cannot be installed.
Objectives:
2. Note the information about the configured datastores on the VMware cluster and the usage
information on the array. This allows for a global view of VMware and array information from within
vCenter. Most users should see the two Reserved-Service-Datastore-x entries in the list; these are
reserved for dHCI and cluster usage and should not be deleted or modified. Users will also see a
Nimble-ds-1 datastore that was created when you performed the manual setup of the dHCI cluster lab.
3. Note the three tabs on the right to switch between Summary, Performance and Protection stats
display. Take a few minutes to explore these tabs and the values displayed.
4. Click the + button in the icon bar to create a new VMFS datastore.
6. Monitor the processes in the Recent Task window or the task console. Note that after the volume is
created on the array, a rescan is performed on the host and the VMFS datastore is created and
mounted on all hosts.
7. After the task completes, the new datastore will be listed. You might have to use the refresh button
to refresh the view.
2. In the Grow VMFS Datastore dialog, enter the new size of 80 GiB for the datastore and select Grow.
4. Use the refresh button to update the display once the task is complete.
1. Select the Checkbox for the dHCI-VMFS-1 datastore and then select the Pencil (Edit) icon to
edit the datastore setting.
2. In the Edit VMFS Datastore dialog we leave the Size setting as is.
Select Next to move to the Protection setting.
5. In the Schedules section, set the following: (In the lab we use an aggressive schedule in order
to have some snapshots during the lab time. For production environments, the schedule should fit your
recovery objectives (RPO and RTO).)
• Schedule Name: 10-minute
• Take Recovery Point: every 10 minutes
• Time Interval: 0:00 to 23:59
• Days of the week: (every day)
• Retain: 15 Snapshots
You have the option to add multiple schedules to a volume collection; for the lab we will just create this
one.
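The retention window implied by a schedule is simple arithmetic: the interval times the retained count. A quick sketch using the lab's values (the function is illustrative, not an array API):

```python
def retention_window_minutes(interval_minutes: int, retained: int) -> int:
    """How far back the oldest retained snapshot reaches, assuming the
    schedule fires continuously at the given interval."""
    return interval_minutes * retained

# Lab schedule: every 10 minutes, retain 15 snapshots -> 150 minutes (~2.5 h)
print(retention_window_minutes(10, 15))  # -> 150
```

For production, work backwards from your RPO: choose an interval at or below the RPO, then size the retained count for how far back you need to recover.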
Select Next to progress to Performance setting.
6. The Performance section allows Quality of Service (QoS) settings that restrict a
datastore to a specific performance profile. For this lab we will leave these at the default.
Select Next to progress to the Summary section.
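Behind the UI, a QoS limit is stored as a pair of attributes on the array volume. A hedged sketch of such a payload — the `limit_iops`/`limit_mbps` names and the -1 = unlimited convention follow the Nimble REST API as best we can tell, so treat them as assumptions:

```python
def qos_payload(iops_limit=None, mbps_limit=None):
    """Build a volume-update payload; None means 'No Limit' (sent as -1,
    assumed to follow the Nimble REST convention)."""
    return {
        "limit_iops": -1 if iops_limit is None else iops_limit,
        "limit_mbps": -1 if mbps_limit is None else mbps_limit,
    }

print(qos_payload())                 # lab default: no limits
print(qos_payload(iops_limit=5000))  # example: cap a datastore at 5000 IOPS
```

The plug-in sends the equivalent of the first call when you leave both limits at the default.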
7. In the Summary section, check the changes that were made and then select Save to implement the
changes.
8. It can take up to 10 minutes before the first snapshot from the volume collection schedule is created.
In the meantime, we will create an ad hoc snapshot of the datastore.
Select the checkbox for the dHCI-VMFS-1 datastore, then select the Snapshot icon
Select the checkbox for the dHCI-VMFS-1 datastore and then select the Clone icon.
3. The list of available snapshots should include Snap-01 as well as one or more snapshots from
the 10-minute schedule. Select the Snap-01 snapshot to be used as the base of the clone.
Select Clone to start the clone process.
5. Once the task completes, use the Refresh icon to refresh the view in the plug-in. Note that the
new clone named Cloned-VMFS-1 is treated the same as a regular volume.
In the Summary view you can see how much data is used for snapshot-specific data, as well as
the total data represented by the volume. Note that although Cloned-VMFS-1 represents 55.2
MiB of data in the screenshot below, all of that data is in reality based on the base snapshot from the
dHCI-VMFS-1 volume, and the snapshot will only use space for any data unique to the snapshot itself.
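The space accounting described above amounts to one line of arithmetic: a clone consumes only the data written since it diverged from its base snapshot. An illustrative sketch (the numbers mirror the screenshot; the function is not a real array API):

```python
def clone_space_used_mib(logical_mib: float, unique_mib: float) -> float:
    """Space a clone actually consumes on the array: only data unique to
    the clone. Shared data is accounted to the base snapshot, so the
    logical size does not matter for consumption."""
    return unique_mib

# A clone representing 55.2 MiB logically, with no unique writes yet,
# consumes essentially nothing beyond the shared base snapshot.
print(clone_space_used_mib(55.2, 0.0))  # -> 0.0
```

This is why clones are cheap to create for test/dev copies: the cost only grows as the clone's data diverges from the original.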
1. Select the checkbox next to the Cloned-VMFS-1 datastore and then select the Delete icon to
delete the datastore.
2. In the Delete VMFS Datastore dialog you have the option to retain the volume on the
group. If this checkbox is selected, the volume is only removed from VMware, not deleted on the
array. With the checkbox unselected, the volume will be permanently deleted from both VMware and
the Nimble array.
Leave the checkbox unselected.
Select the Delete button to delete the datastore.
During the stack setup of a new dHCI environment, the VASA provider for Nimble was added to
vCenter. This ensures the environment is ready to support VVols.
This task is optional for those who are not using VVols yet, but can be very informative for those who
want to explore VVols.
Objectives:
1. From the dHCI menu in the HPE Nimble Storage Plug-in for vCenter select the Datastores option
and then select VVOL from the drop down menu.
4. During the dHCI stack setup, the VASA Provider was registered with vCenter, but no VVol datastore
was created.
Select the + icon to create a new VVol datastore.
7. In the Create VVol Datastore performance settings, Quality of Service (QoS) limits can be applied for
all VVols from this datastore.
Leave the IOPS Limit and MiB/s Limit settings at No Limit.
Select Create to create the VVol Datastore.
9. Once the task completes, use the Refresh icon to refresh the view in the plug-in. The dHCI-VVol-
Nimble datastore should be listed. Take a few minutes to look at the values that are displayed for this
datastore.
Note that there are options to Edit, Grow, and Delete a VVol datastore if needed.
An example is a Microsoft SQL Server database. The log files and data files would traditionally be placed
in different datastores because of the unique way each accesses the data. By adding a new hard disk to
an existing or new VM, you can apply a different storage policy to the new disk. For the SQL Server
example, you would create a VM with a storage policy for the operating system. Then add a new disk with
a specific storage policy for the SQL Server data and add another disk with a different storage policy for
the SQL Server logs.
Storage policies can define array features such as encryption, deduplication, performance, protection
schedules and replication schedules, to name a few. Each policy can have different settings and when a
new VVol is created, the vCenter server shares the policy settings with the array to create a VVol on the
array with those specific settings. A single datastore can have multiple storage policies and therefore can
have multiple VVols with different characteristics defined.
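The policy-to-array flow above can be sketched as a simple mapping: vCenter hands the policy rules to the array via VASA, and the array applies them per VVol, falling back to defaults for anything the policy does not set. All rule and attribute names here are made up for illustration:

```python
def apply_policy(policy_rules: dict) -> dict:
    """Merge storage-policy rules over illustrative array defaults,
    the way a new VVol would pick up its per-volume settings."""
    settings = {
        "perf_policy": "default",
        "snapshot_schedule": None,
        "replication": False,
    }
    settings.update(policy_rules)
    return settings

gold = apply_policy({
    "perf_policy": "VDI",                # application policy rule
    "snapshot_schedule": "minutely/10",  # protection schedule rule
})
print(gold)
```

The key point the sketch captures: each VVol carries its own settings, so one datastore can hold volumes created under many different policies.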
Objective
After completing this lab, you should be able to create a VMware storage policy.
1. Click the main Menu icon and then click Policies and Profiles
3. Provide a name for the new storage policy: Nimble VVol Gold Policy
Enter an optional description: Gold class volume policy for Nimble VVols
Click Next to continue.
4. Select the checkbox to Enable rules for NimbleStorage storage and leave all the other
checkboxes unchecked.
Click Next to continue.
5. Click the Add Rule dropdown and select Application Policy from the list.
6. Select the Operating System drop-down list on the right. Note that these are all the application
profiles available from the array. Since this is a general policy for the Nimble volumes, we will select
VDI as the application type.
7. Select the Add Rule drop down list again, and then select Protection schedule (minutely) from the
list.
9. Set Delete replicas from partner to Yes using the drop down list.
Click Next to continue.
NOTE: This setting enables the deletion of replicated VVols on the DR site if the production
VVol is deleted. This is a behavior that the administrator has to set manually based on disaster
recovery requirements.
10. A list of compatible datastores will be shown. Make sure that the dHCI-VVOL-Nimble datastore is
displayed.
Click Next to continue.
11. Review the settings and then select Finish to create the Policy.
12. The new storage policy will now be listed in the list of policies.
Select the Nimble VVol Gold Policy to see details about the policy in the bottom dialog window.
13. Take a look at the various options and tabs for the storage policy. Storage policies enable a VMware
administrator to monitor which VMs are compliant with the policy and if any are noncompliant. Over
time, policies can be changed and then applied to all the VMs that use that policy.
Later during the lab, return to the storage policies and see how the information in the tabs has
changed.
14. Note that there is, by default, a VVol No Requirements Policy. If you accidentally select this later in
the lab, you will still create VVols on any compatible VVol datastore, BUT the volumes will NOT have
the application policy or the snapshot schedule that you created in the previous Nimble VVol
Gold Policy.
1. In the vCenter web client, click the Menu icon and then click VMs and Templates
2. Right-click the dHCI-labs data center. From the drop-down menu, select New Virtual Machine…
3. In the New Virtual Machine dialog select Create a new Virtual Machine
Select Next to continue
5. Make sure dHCI-Cluster is selected for the compute resource; this will enable the VM to run on any
of the ESXi hosts in the cluster.
Select Next to continue.
6. In the select Storage dialog click the drop down labeled Datastore Default for VM Storage Policy
and select Nimble VVol Gold Policy that was created in the previous task.
8. In the Select Compatibility dialog keep the default ESXi 6.7 and later
Select Next to continue.
NOTE: In the lab we are just trying to get a basic VM up and running; we will not install an
actual OS, so the OS setting does not really matter for the lab.
10. In the Customize hardware dialog leave all the settings at the default.
Select Next to Continue.
11. Review the setting and then select Finish to deploy the new VM
13. After the process is completed, the new VM should be listed in the tree structure on the left.
In the left tree structure, select the VVol-VM1 and then click the Summary tab.
Find the VM Storage Policies information that is listed for the VM. Note: you may need to scroll
down to find this information.
14. Click the Menu icon and then click HPE Nimble Storage. This will open the HPE Nimble
vCenter Plug-in module that allows for easy storage management from within the vCenter Server UI.
15. Select the gr-dHCI-arrays group name for the Nimble array we are working with.
16. Select the VVol VMs menu and then select Local from the drop-down menu.
17. Note that VVol-VM1 is listed with details of space and snapshot usage.
2. Select VVol-VM1 in the left tree structure. Right-click VVol-VM1 and then select the
Delete from Disk option from the pop-up menu.
Note that for a standard VM, or on other storage implementations, this would permanently delete the
VM and all the associated files/volumes.
5. Click the Menu icon and then click HPE Nimble Storage. This will open the HPE Nimble
vCenter Plug-in module that allows for easy storage management from within the vCenter Server UI.
6. Select the gr-dHCI-arrays group name for the Nimble array we are working with.
7. Select the VVol VMs menu and then select Local from the drop-down menu.
8. Note that VVol-VM1 is not listed any more as a local VVol VM.
Note the trash can icon, which can be used to Purge the volumes, immediately freeing up any
space the VM occupies on the array and overriding the 72-hour wait time.
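The 72-hour wait mentioned above gives a recovery window before a deleted VVol VM's volumes are reclaimed. A small sketch of the deadline arithmetic (the 72-hour figure comes from the note; the function itself is illustrative, not an array API):

```python
from datetime import datetime, timedelta

DEFERRED_DELETE = timedelta(hours=72)  # wait period from the note above

def auto_purge_time(deleted_at: datetime) -> datetime:
    """When a deleted VVol VM's volumes would be reclaimed automatically,
    unless Purge (immediate) or Undelete is used first."""
    return deleted_at + DEFERRED_DELETE

print(auto_purge_time(datetime(2024, 1, 1, 12, 0)))  # -> 2024-01-04 12:00:00
```

In other words, Undelete works only within this window; Purge short-circuits it when you need the space back immediately.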
Select the checkbox next to VVOL-VM1, then select the Undelete icon.
11. Monitor the progress in the Recent Tasks panel. It will only take a minute or so to restore.
12. Select the Local tab option, the VVol-VM1 should now be listed as a local VVol VM again.
3. In the Take VM Snapshot for VVol-VM1 dialog window, perform the following actions:
• Provide a name for the snapshot: Lab-Snapshot1
• Enter an optional description: Manual lab snapshot
• Leave all other settings at the defaults (unselected).
Click OK to create the snapshot.
1. Click the Menu icon and then click HPE Nimble Storage. This will open the HPE Nimble
vCenter Plug-in module that allows for easy storage management from within the vCenter Server UI.
2. Select the gr-dHCI-arrays group name for the Nimble array we are working with.
3. Select the VVol VMs menu and then select Local from the drop down menu.
5. The restore dialog allows you to select or search for a specific time and date for a snapshot, or by
default shows the latest snapshot for the selected VM.
Note that you can also restore individual disks of the VM if required.
Select the Show More option to see more of the recent snapshot recovery points.
6. Select a Snapshot to use as a base for the restore. (You might have only one to select from
depending on how much time has elapsed since recovering the deleted VM and how many scheduled
snapshots have been created in that time.)
Scroll down and select the radio button for Clone to a new VM. Note: if you select Replace the
existing VM, the VM will have to be in a powered-off state.
Provide a name for the new VM: Clone_VVol-VM1
Select the Restore button to start the restore operation.
8. Monitor the progress in the Recent Tasks panel. Note how fast the process completes.
Refresh the vCenter plug-in using the refresh button to the far right.
9. Note that the cloned VM has the same storage policies as the original VM. You may have to scroll
down to find this information.
Conclusion/review
This lab guide has stepped you through procedures to gain practical experience using HPE
Nimble dHCI after the initial install was completed. It covered a management overview of the dHCI
environment from within the VMware vCenter interface using the Nimble plug-in, as well as using the
plug-in to add an additional server to the stack. The lab also covered the one-button upgrade
process for ESX, NCM, and Nimble OS versions that are NOS 5.2 or later.