Clustered Data ONTAP 8.2
Hands-on Introductory Lab
Version: 2.1
Date: 1 July 2013
By: Robin Dammers
Wessel Gans
In this lab you will learn the basics of how to configure a Clustered Data ONTAP (cDOT) system.
These tasks will instruct you in the fundamentals of Clustered ONTAP so that you can set up an
environment to be used for basic features. You will perform basic cluster administration and
configuration using primarily OnCommand System Manager, as well as a limited amount of CLI
(command-line interface) use.
It is assumed that you already have a basic knowledge of Clustered ONTAP architecture and Vserver
functionality including volumes and LIFs.
To access the lab you will need to set up a wireless LAN connection to:
WLAN ID: cmlab
WPA key: cmlabnetapp01
From the WLAN you will enter a private network with access to your lab environment.
There are eight available configurations, each containing:
a. A Windows host
b. Two Cluster Nodes, based on ONTAP 8.2 simulators
Each configuration (POD) has its own IP range; please stick to it to avoid unexpected behavior.
If you need extra IP addresses for some tests, please feel free to ask us.
In this lab guide, IP addresses contain an x (e.g. 192.168.80.x0), where x is the number of your POD.
When you have set up the WLAN connection, make an RDP connection to the Windows host in
your POD (192.168.80.x0) with the credentials:
User name: CMODELAB\Administrator
Password: Netapp01
You can access your cluster via the cluster management LIF (192.168.80.x1) with the credentials:
User name: admin
Password: Netapp01
You will first check the configuration, using the CLI. Then you will add the cluster to System Manager
and use this interface for most of the rest of the lab. You use the CLI initially to get familiar with it, since
not all tasks can be performed with System Manager in the current release. The Clustered Data
ONTAP 8.2 CLI is a very powerful and flexible interface and is used more extensively in the Clustered
ONTAP Advanced Lab.
1. Use PuTTY SSH to access the CLI. Telnet is disabled by default for security reasons.
4. Highlight the clusterx Saved Session and click Load. The address shown in the Host Name (or
IP address) field is the cluster management LIF. It is set to 192.168.80.x1 (where x is the
number of your POD in this lab).
5. Click Open to start the cluster management shell. You may see a message about the RSA
fingerprint; if so, type yes to continue.
Note: You are now logged into the cluster management shell as the administrative user with full
privileges to the entire cluster. This shell, also known as the Clustered ONTAP CLI, uses the cluster
name as the prompt; cluster0x::> in your configuration.
7. You will briefly use the CLI to perform some basic cluster verification and then switch to
OnCommand System Manager, which will be used for most of the rest of the lab. Although most
basic cluster configuration can be performed using OnCommand System Manager, it is useful to
also have basic familiarity with the CLI.
8. Type the bolded commands at the Clustered ONTAP CLI prompt. Your output should look like
this (examples in this lab are based on cluster01):
b. cluster show: shows that the cluster is healthy, and consists of two cluster nodes,
called cluster01-01 and cluster01-02. The cluster node names are assigned automatically,
derived from the cluster name entered at setup time. The setup was run before the start of the
lab and the cluster was called cluster01.
c. node show: gives more information about the individual cluster nodes, including their
uptime and hardware type, shown in the Model field. The value SIMBOX shows that the
platform is actually a simulator, rather than physical FAS or V-Series controllers. As you go
through this lab, do you see any other indications that the simulator is being used?
Remember it's running a true Clustered Data ONTAP 8.2 instance.
d. network interface show: displays all the logical interfaces (LIFs) in the cluster and
tells you which cluster node and physical port currently hosts each one. There are three types
of LIFs already defined during the basic cluster setup.
8. Increase the Clustered ONTAP CLI timeout value so that the session remains active for longer
(default is 30 minutes).
cluster01::> system timeout modify 90
OnCommand System Manager 3.0 can manage both Clustered Data ONTAP 8.2 and 7-Mode
systems in the same instance. You will launch OnCommand System Manager and discover the
cluster so it can be managed.
1. Click the indicated icon on your desktop to launch OnCommand System Manager 3.0. (If you see
a message about OnCommand System Manager updates, select Do not check for updates).
3. Type the cluster management IP address, 192.168.80.x1, in the Host Name or IP Address field
and click the More double arrow icon to expand the options. Click the Credentials radio button
and enter the same User Name and Password used to log in to the cDOT CLI (admin /
Netapp01). Click Add to add the cluster.
5. Double click the cluster0x entry to launch the management tab for the cluster. The summary
panel shows information about the Properties, System Health, Alarms and aggregate storage
utilization. On the left side are three top-level entries: Cluster, Vservers, and Nodes (i.e. Storage
Controllers).
1. In the left navigation pane, under cluster0x, expand Storage (click the small white triangle on the
left of the System Manager navigation pane).
2. Click the Aggregates entry, and wait a moment to display the Aggregates list. You can drag the
Name column border to expand it to display the full name of each aggregate.
3. The root volume on each node is contained in the associated root aggregate. This volume is not
visible from System Manager, so use the cDOT CLI session to display it and check the size. It
should have plenty of free space so there is no need to expand it (48% used as shown).
3. Choose the Timezone from the pulldown (e.g. US/Pacific, Europe/Amsterdam, or Asia/Macau).
Enable the NTP Service and add the NTP Server, in this lab the AD server 192.168.80.1. Click
Add and OK.
2. Click Create at the top of the aggregate list to launch the Create Aggregate wizard. Click Next to
start.
6. In the Number of capacity disks to use selection field, select 5 disks for the new aggregate and
click Save and Close.
9. The Aggregates pane in System Manager should automatically refresh to show the newly
created aggregate. If it does not, click Refresh.
Create the following additional aggregates:
cluster0x-01: aggr2_cluster0x_01
cluster0x-02: aggr1_cluster0x_02
cluster0x-02: aggr2_cluster0x_02
You can either use the System Manager wizard as before, or the CLI with commands like those sketched below.
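A minimal CLI sketch, assuming the aggregate and node names listed above (note that -diskcount counts all disks in the aggregate, so the value may need adjusting to match the wizard's capacity-disk selection; substitute your POD number for x):
cluster0x::> storage aggregate create -aggregate aggr2_cluster0x_01 -node cluster0x-01 -diskcount 5
cluster0x::> storage aggregate create -aggregate aggr1_cluster0x_02 -node cluster0x-02 -diskcount 5
cluster0x::> storage aggregate create -aggregate aggr2_cluster0x_02 -node cluster0x-02 -diskcount 5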
Note: The Clustered Data ONTAP CLI is very powerful and uses a hierarchical command structure.
Here are some quick hints and tips.
Type ? to show the list of available base commands
Type any base command (for example, aggregate). The prompt changes so you can see
where you are in the command tree. Type ? again to see just the aggregate sub-commands.
The CLI also uses TAB key completion and pre-fills fields to save you typing where possible.
Try aggr show <TAB>.
The CLI fills in the next parameter for the command, which is aggregate. It also fills in aggr, since all
your aggregates begin with that string, and shows you all the aggregates currently defined.
Type 0 <TAB> appended to the current partial command. Now you just see the two
aggregates beginning with aggr0 with all the common text in the aggregate names completed.
11. Go to Aggregate -> Edit in System Manager and view some of the aggregate properties which
can be adjusted.
12. After all additional aggregates are created, the System Manager Aggregates panel should look
like this. You may need to click Refresh.
2. Click any of the green arrows in the dashboard to drill down to that area. For example, to jump to
the Aggregates pane, click the arrow next to Data Aggregates. To jump to the Storage
Controllers pane, click the arrow next to Number of Nodes.
3. To display the Ethernet ports available on each cluster node, and the assigned roles, navigate to
Nodes -> cluster0x -> cluster0x-01 -> Configuration -> Ports/Adapters. The ports are shown
individually per node. Repeat to show the ports of cluster0x-02.
5. Return to the cluster dashboard (select Cluster in the left pane of System Manager). Note that
Number of Vservers says NA, as no data serving Vservers are defined yet. This is your next
task, so that the cluster can start serving data.
You've verified the cluster setup and learned some basic cluster commands. It's time to start serving
data. To do that, you need to create a Vserver.
As you know, a Vserver is the fundamental and required virtual storage entity in Clustered Data
ONTAP. The Vserver provides a namespace for NAS hosts and a container for LUNs for SAN hosts.
It provides the framework for moving storage and networking resources non-disruptively across the
cluster, and provides per-Vserver user authentication. At least one Vserver is required to serve data,
and up to hundreds of Vservers can be defined in a cluster, to provide data and application isolation
and multi-tenancy. Each Vserver is enabled for the data access protocols required, and contains its
own set of volumes and LUNs, as well as dedicated logical interfaces (LIFs).
Repeat the steps to create a second LIF and finish the LIF wizard after creating this
second interface.
vs1_nas_lif2
cifs protocol
home node cluster0x-02
home port e0d
192.168.80.x6
255.255.255.0
No gateway (leave empty)
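If you prefer the CLI over the wizard, a single network interface create command along these lines should produce the same interface (a sketch, using the values listed above):
cluster0x::> network interface create -vserver vs1 -lif vs1_nas_lif2 -role data -data-protocol cifs -home-node cluster0x-02 -home-port e0d -address 192.168.80.x6 -netmask 255.255.255.0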
At this point the CIFS setup for vs1 has been completed and a share has been
created.
6. Assign additional aggregates, which will be used in other sections of this lab, to vs1, which
currently has only aggr1_cluster0x_01 assigned.
Add the following aggregates to vs1 (a CLI sketch follows this list):
- aggr1_cluster0x_02
- aggr2_cluster0x_01
- aggr2_cluster0x_02
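A minimal CLI sketch, assuming the aggregate names above; note that -aggr-list replaces the whole list, so the existing aggr1_cluster0x_01 is included again:
cluster0x::> vserver modify -vserver vs1 -aggr-list aggr1_cluster0x_01,aggr1_cluster0x_02,aggr2_cluster0x_01,aggr2_cluster0x_02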
In this section you will setup a dedicated management LIF for Vserver vs1 using System Manager.
Using a dedicated Vserver management LIF provides better security in a compartmentalized or
multi-tenancy environment. Application integration tools, like SnapDrive, are able to use this as well.
1. On the cluster dashboard, click the green arrow next to the Number of Vservers entry.
Alternatively, click Vservers in the left panel. Both methods open the Vservers panel.
If not correctly displayed, close the tab cluster01 and reopen it.
2. Click vs1 -> Configuration -> Network Interfaces and verify that for both LIFs management
access is disabled.
Name: vs1_mgmt_lif
Interface role: Management
Node: cluster0x-02
Port: e0d
6. Enter network details below for the interface and click Next.
IP address: 192.168.80.x4
Netmask: 255.255.255.0
1. In the System Manager Vserver tab go to vs1 -> Configuration -> Security -> Users. Note that
the vsadmin account is locked.
The Vserver administrator account vsadmin is now ready for use. We will use it in the SnapDrive
lab.
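For reference, the same unlock can be done at the CLI (a sketch; security login password prompts for the new password interactively):
cluster0x::> security login password -vserver vs1 -username vsadmin
cluster0x::> security login unlock -vserver vs1 -username vsadmin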
1. Click the vs1 -> Storage -> Volumes view. Highlight volume root_vol and click Edit on the top-
row actions. In UNIX permissions check all the Read/Write/Execute boxes for
Owner/Group/Others as shown. Click Save and Close.
If not correctly displayed, close the tab cluster01 and reopen it.
3. Take a quick look at the other options and tabs available for thin provisioning, storage efficiency
and advanced options, as well as volume autogrow and other features. Do not change anything
else now; there is more on storage efficiency later in the lab.
4. Click Namespace on the left System Manager pane. You will see the volume mounted in the
Vserver namespace, junctioned under the root volume, using the volume name for the junction
path (directory).
If not correctly displayed, close the tab cluster01 and reopen it. Then navigate to the namespace
of vs1. (Actions performed on the command line are not always automatically refreshed.)
5. The internal volume name and the junction path (directory) visible to NAS clients do not have to
match, however. To prove this, change the volume's junction path. Click the vs1_vol1 entry to
highlight it and click Unmount in the list of actions above.
6. Check Force volume unmount operation and click Unmount. Typically you would not unmount
a volume with active client I/O; however at the moment there is no client access.
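The remount itself is done with the Mount action in the same view; the CLI equivalent would be something like the following sketch, where the junction path /testvol is the new path used in the rest of this lab:
cluster0x::> volume mount -vserver vs1 -volume vs1_vol1 -junction-path /testvol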
8. The Namespace view refreshes to show the volume is mounted at the desired location.
1. Access the CIFS share on your Windows client - click the Windows Explorer icon on the Launch
Pad, click Computer on the left panel if necessary to display the currently accessible drives, and
click Map network drive on the top-row actions.
2. Choose Y from the Drive pulldown and enter the folder you want to map. You can just specify
\\VS1-CLUSTER0x - remember this was defined in the CIFS Server Name in the Vserver wizard.
3. Click Browse to see the defined shares, and drilldown to show ClassShare. Select it and click
OK.
6. Use Notepad or similar to create a simple test document in the share to confirm read/write access.
3. Set the Total capacity to 150MB and click anywhere in the panel to refresh the colored storage
utilization bar. Hover over any of the areas to see what they mean, i.e. Data Space Available,
Snapshot Space Available, Snapshot Space Used. Click Next to expand the volume.
5. You have one more chance to confirm the resize. Click Next to commit the operation.
6. Click Finish to complete the Wizard. System Manager shows the new volume sizes.
1. Use System Manager to create another volume called vs1_vol2, 100MB in size, on any non-aggr0*
aggregate (lab tip: use a different aggregate than the one used for vs1_vol1). The finished System Manager
volume display should look like this:
2. Edit the volume to set the permissions to read/write/execute for all, as you did on the other
volumes (step 1 and 2 in section 5.1 of this lab).
3. Check the Namespace view; as before, System Manager automatically created the junction path
for the new volume under the root volume, using the volume name.
5. In the Namespace view, check the volumes are nested as shown. Correct if necessary.
6. On the CIFS client, verify the new folder/directory is now visible. Copy some data there if you
wish. See how easy it is to seamlessly add capacity to the Vserver by creating and mounting
additional volumes.
3. Highlight a destination aggregate on the other cluster node, i.e. aggr2_cluster0x_02, and click
Move.
5. A confirmation dialog will pop up with the Job ID. Click on the Job ID number to follow, or go to
Cluster -> Diagnostics -> Job to follow the volume move status.
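The same move can also be started from the CLI; a sketch, assuming vs1_vol1 is the volume you selected:
cluster0x::> volume move start -vserver vs1 -volume vs1_vol1 -destination-aggregate aggr2_cluster0x_02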
System Manager provides management of network interfaces; however, certain tasks require the
command line. Let's look at what you can do with System Manager.
2. Take note of the Home Port and the Current Port of vs1_nas_lif1 in the Failover Properties
section of the view. The status shows it currently hosted on its home port e0c of cluster node
cluster0x-01. We will compare this information at a later point in the lab.
3. As System Manager does not provide the ability to directly migrate a LIF, use the command line
to move the LIF to the other node. Open PuTTY from your desktop and issue a session to your
cluster management LIF.
4. View the network interfaces at the CLI for vs1. The node and port for vs1_nas_lif1 is the same as
shown in System Manager and it is currently on its home port (Is Home is true).
cluster01::> net int show -vserver vs1
(network interface show)
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
vs1
vs1_mgmt_lif up/up 192.168.80.14/24 cluster01-02 e0d true
vs1_nas_lif1 up/up 192.168.80.15/24 cluster01-01 e0c true
vs1_nas_lif2 up/up 192.168.80.16/24 cluster01-02 e0d true
3 entries were displayed.
5. Issue the migrate command for the chosen LIF using this information:
a. Vserver name: vs1
b. LIF name: vs1_nas_lif1
c. Source node: cluster0x-01
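The migrate command itself is not reproduced above; a sketch using the parameters listed, with the destination being the other cluster node (the exact parameter names may vary slightly by release):
cluster0x::> network interface migrate -vserver vs1 -lif vs1_nas_lif1 -dest-node cluster0x-02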
7. The LIF is now on the new node and port and the Is Home field is false, as you would expect. You
could send the LIF home at the CLI using the network interface revert command. However, let's do it
with System Manager this time.
8. Look in System Manager in the Network Interfaces view again. You may have to click Refresh to
update. Compare it with the view in step 2 of this section. You can see that the Home Port and
the Current Port for vs1_nas_lif1 are different, and the Send to Home option on the toolbar is now
available. This is only valid for LIFs which are not on their home port.
9. Click Send to Home. You can see the LIF is now home.
2. Return to the Vserver view (Vservers -> cluster0x). Highlight the vs1 Vserver and click Edit in
the top-line actions.
3. Click the Protocols tab and check the iSCSI protocol. Click Save and Close.
5. In the left pane, navigate to Vservers -> cluster0x -> vs1 -> Configuration -> Protocols -> iSCSI.
Note the iSCSI Service is not running and there are no iSCSI Interfaces defined.
6. Click Start to start the iSCSI Service. The display refreshes to show it is now running. An iSCSI
Target Node Name is automatically assigned, as well as an iSCSI Target Alias.
Create the LIFs that will be used by iSCSI hosts to access LUNs. You cannot use the existing NAS
LIFs; LIFs are for either SAN or NAS access. One reason for this is that SAN LIFs do not migrate or
fail over, whereas NAS LIFs do. It is recommended to create at least one LIF per fabric per node in
SAN configurations, to ensure there is always an available path to the LUNs. SAN paths are either
Optimized or Unoptimized, which is covered later in this lab.
2. Click Create to launch the Network Interface Create Wizard. Click Next to begin.
3. Enter vs1_san_lif1 as the name of the first LIF. Make sure to check the Data radio button for the
Role. Click Next.
5. Click Browse to show the available ports. Expand cluster0x-01 and choose e0d. Click OK.
Question: How are the ports for selection chosen? Hint: try the network port show command
at the CLI. Since iSCSI uses Ethernet, any data role port can host an iSCSI LIF.
7. The configuration summary displays. If it is correct, click Next to create the LIF.
8. Click Finish to complete the wizard. The new iSCSI LIF shows in the list of Network Interfaces.
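For reference, the equivalent CLI command for this LIF would look roughly like this (a sketch; the IP address 192.168.80.x7 is taken from the later verification step in this lab):
cluster0x::> network interface create -vserver vs1 -lif vs1_san_lif1 -role data -data-protocol iscsi -home-node cluster0x-01 -home-port e0d -address 192.168.80.x7 -netmask 255.255.255.0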
In this section you will create the iSCSI objects for host access to a LUN.
1. Find out the name of the Windows host iSCSI Initiator: on the Windows jump-client, click the icon
on your desktop to launch the iSCSI Initiator Properties applet. On the Configuration tab,
highlight and copy the string in the Initiator Name field.
4. On the Initiators tab, click Add. Paste the Windows initiator name you copied in step 1 in this
section and click OK.
6. Create a LUN to assign to the Windows host. Click the LUN Management tab and click Create to
launch the Create LUN Wizard. Click Next to begin.
9. Map the LUN to the igroup just created - check the Map box next to the vs1-igroup1 entry and
click Next.
10. If the LUN Summary is correct, click Next to create the LUN, otherwise go back and fix the
settings.
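At the CLI, the same igroup, LUN and mapping could be created roughly as follows (a sketch: the igroup name and mapping come from the steps above, but the LUN path and size shown here are illustrative; use the actual initiator name you copied in step 1):
cluster0x::> lun igroup create -vserver vs1 -igroup vs1-igroup1 -protocol iscsi -ostype windows -initiator <initiator-name-from-step-1>
cluster0x::> lun create -vserver vs1 -path /vol/vs1_vol2/lun1 -size 100MB -ostype windows
cluster0x::> lun map -vserver vs1 -path /vol/vs1_vol2/lun1 -igroup vs1-igroup1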
The last step is to make the LUN available to the Windows host. You will use just native Windows
utilities in this lab. It is more common to use SnapDrive to configure LUNs. This will be covered later
in an optional lab.
1. Access or re-open the iSCSI Initiators Properties applet on the Windows jump-host. Create an
iSCSI session from the Windows host to the SAN target LIFs on each of the cluster nodes. Click
the Discovery tab, and then Discover Portal.
3. In the Targets tab, highlight the Inactive connection, and click Connect.
5. Under Advanced Settings, in the Target Portal IP pulldown, select the IP address of the LIF you
configured in section 8.2 (should be 192.168.80.x7), then click OK. Click OK again in the
Connect To Target dialog.
6. The connection should now show status Connected. Highlight it and click Properties.
8. In the Connect to Target dialog, as before in step 4 of this lab section, check Enable multi-path,
and click Advanced
9. Select the IP address of the other iSCSI LIF (192.168.80.x8) from the Target Portal IP pulldown.
Click OK, then OK in the Connect To Target dialog, and OK to go to the iSCSI Initiator
Properties window.
10. Check if MPIO is configured for iSCSI devices on your Windows client. MPIO is required for all
SAN connections in Data ONTAP Cluster-Mode to provide multi-pathing capability.
Go to Start -> Administrative Tools -> MPIO. In the MPIO Properties applet, click the Discover
Multi-paths tab. The Add support for iSCSI devices checkbox should be grayed out, indicating
MPIO is already enabled.
If this is the case, skip the next step and go straight to step 12.
12. Windows should now see the LUN with the correct path status. Click the Server Management
icon on the Task Bar to launch the Server Management applet.
15. The disk is probably Unknown/Offline. Right-click Disk 1 and select Online.
16. The status should now be Unknown/Not Initialized. Right-click Disk 1 again and select
Initialize Disk.
17. Accept the defaults in the Initialize Disk dialog and click OK.
18. The disk should now show status Basic/Online. Right-click the white box marked Unallocated
next to Disk 1, and select Properties.
On the MPIO tab, you should see one path with TPG State Active/Optimized and one path with
TPG State Active/Unoptimized.
20. Click Cancel to exit the Properties window. Right-click the box marked Unallocated again and
select New Simple Volume.
24. Accept the default formatting options, or enter your own Volume label (e.g. ISCSILUN1) if you
like. Click Next.
26. You will momentarily see the drive with Status Formatting in the drive list. When the format is
complete, it will display as shown.
27. Test access to the LUN by creating a file, like you did with CIFS in section 5.2 step 6.
28. You have successfully configured a Vserver for multiprotocol CIFS and iSCSI data access.
2. In the LUN Properties window go to the MPIO tab and determine the Active/Optimized path, and
note the Path Id (in this case 3000103).
3. Access or re-open the iSCSI Initiator Properties applet. Click the Targets tab, highlight the
connected target and go to Devices.
5. The MPIO Path Details displays the Target Portal of the active path, which corresponds with the
IP address of the active LIF. In this case the IP address is 192.168.80.x8.
7. We want to run some I/O on the LUN to verify that it is able to non-disruptively switch to the
other path when the optimized path goes down. Copy the mp4 movie from the workshopdata
share to drive I: and start the movie.
8. We will initiate a LIF failure by explicitly taking the LIF offline. Note that LIFs for SAN protocols do
not failover, since we always have multiple paths available.
Return to System Manager, highlight the Active/Optimized LIF (vs1_san_lif2), right-click and
Disable the interface.
Click on the Refresh button, and after a few seconds the interface is shown as disabled.
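The same disable can be done from the CLI (a sketch; -status-admin up re-enables the LIF again later):
cluster0x::> network interface modify -vserver vs1 -lif vs1_san_lif2 -status-admin down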
Question: Is your movie still running?
9. Examine the path status. In Windows if you still had the LUN Properties window open, it does not
refresh automatically. Close and re-open the LUN properties window. You will see only one path
which is marked Active/Unoptimized. This shows that the Windows traffic goes through the
path on the other node as long as the original path (LIF) is down. Verify you can still read and
write to the LUN so that the multi-pathing is proven to work.
11. Check your paths again in Windows. They should be back showing the two paths with the same
state as before we initiated the failure.
1. The data Vserver needs to be added to the DNS configuration of your domain or to the local hosts
file.
Check in your local hosts file whether the Vserver vs1 has an entry, via:
- the shortcut on your desktop, or
- C:\Windows\System32\drivers\etc\hosts
2. Use System Manager to create a new SAN volume called vs1_lun2_vol, 100MB in size, on any
non-aggr0* aggregate. The finished System Manager volume display should look like this:
4. In the left pane, click WIN0x (Local) then click Transport Protocol Settings in the right pane.
5. Click Add, and complete the fields as shown. SnapDrive works at the Vserver level, not at the
cluster level, so we will use the Vserver management configuration (management LIF and
vsadmin user) we created in Section 2 of this lab.
6. You will see the Vserver storage system appear in the list of known Storage Systems in Transport
Protocol Settings. Click OK.
11. Navigate down to the volume previously created in this lab (vs1_lun2_vol) to hold the new
Windows LUN. Enter vs1-lun2 for the LUN Name. Click Next.
12. Make sure the Dedicated radio button is selected. Click Next.
14. In the Select Initiators window, check your Windows host's iSCSI initiator and click Next.
17. As the LUN is created, you can watch the progress in the Details panel. When complete, the
display refreshes to show the Disk Details.
This section gives a brief overview of some of the efficiency features in Clustered ONTAP: Snapshots,
Deduplication, Compression, and Quality of Service.
10.1 SNAPSHOTS
1. To display the snapshot policy for a volume in System Manager, go to Vservers -> cluster0x ->
vs1 -> Storage -> Volumes.
2. Select Volume vs1_vol1 and click Snapshot Copies and select Configure.
3. You can change the Snapshot Reserve, choose whether the .snapshot directory is visible to
clients, and also enable snapshot schedules. The current snapshot policy is called default, with
hourly, daily and weekly snapshot schedules. Make sure the Snapshot directory is visible, and
click Cancel to exit.
5. Copy some data to volume vs1_vol1 on the Windows client. Do you remember the directory
junction point for this volume?
Open Windows Explorer and copy the files from the workshopdata (drive Z:) share to your
ClassShare (drive Y:): Z:\Clustered ONTAP Documentation -> Y:\testvol
Select the Volume vs1_vol1, right-click, select Snapshot Copies, click Create.
7. Name the snapshot testsnapshot and click Create to take the snapshot.
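The CLI equivalent of taking this snapshot would be (a sketch):
cluster0x::> volume snapshot create -vserver vs1 -volume vs1_vol1 -snapshot testsnapshot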
9. On the Windows client, delete one (or more) of the files in the testvol directory.
Browse to the testsnapshot folder. Now you should see the deleted file in the snapshot you just
created, and be able to copy it back to restore it.
11. The second method is by using the Previous Versions functionality. In Windows Explorer highlight
the directory testvol, right-click and select Properties.
13. It is also possible to perform restores from the Storage Admin level via System Manager or CLI.
Snapshot Backup and restores from LUNs can also be done from SnapDrive at vsadmin level.
These methods are not covered in this lab, but feel free to try.
In this section you will enable storage efficiency (deduplication and data compression) on a volume.
No license is needed; these are standard ONTAP features.
2. The Storage Efficiency column shows Disabled for all volumes, as it is not enabled for any
volumes yet. Highlight volume vs1_vol2 and click the Storage Efficiency tab at the bottom. It
tells you to Edit the volume to enable storage efficiency. Click Edit at the top of the panel.
3. Click the Storage Efficiency tab on the Edit Volume screen. Check the boxes Enable Storage
Efficiency and Enable Compression.
You can also enable compression, and select to use post-process or in-line compression. If
you select post-process, the sequence is Compression followed by Deduplication. If you
select in-line, data is compressed while it is written - be careful choosing this option in
performance-sensitive environments.
4. Set the options as shown, On-demand deduplication and Inline Compression (by checking the
Compress Inline option) and click Save and Close.
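The equivalent CLI commands would be roughly (a sketch; volume efficiency is the cDOT command family behind the sis alias used later in this section):
cluster0x::> volume efficiency on -vserver vs1 -volume vs1_vol2
cluster0x::> volume efficiency modify -vserver vs1 -volume vs1_vol2 -compression true -inline-compression true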
6. Copy some data into the volume (share Y:\testvol\testing2). There is sample dedupable data in
the workshopdata (drive Z:) share. Select the folder Z:\DedupeData and copy it to
Y:\testvol\testing2. (If necessary increase the size of the volume)
7. After the files are copied, in the System Manager Volumes display, click Refresh and look at the
Storage Efficiency tab for the volume. You should see Used Data Space has grown
considerably. Since you enabled in-line data compression, you should also see some
compression savings.
9. Refresh the System Manager view until the Last Run Details time/date stamp has updated. The
scan will take a few minutes to complete.
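The scan itself (step 8, not reproduced above) can also be started from the CLI; a sketch, where -scan-old-data processes the data already in the volume:
cluster0x::> volume efficiency start -vserver vs1 -volume vs1_vol2 -scan-old-data true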
You can also monitor the scan progress with the CLI sis status command.
cluster01::> sis status
Vserver Volume State Status Progress
----------- ------------------- -------- ------------ -------------------
vs1 vs1_vol2 Enabled Active 110348 KB (98%) Done
cluster01::> sis status
Vserver Volume State Status Progress
----------- ------------------- -------- ------------ -------------------
vs1 vs1_vol2 Enabled Active 172128 KB Verified
cluster01::> sis status
Vserver Volume State Status Progress
----------- ------------------- -------- ------------ -------------------
vs1 vs1_vol2 Enabled Active 0% Merged
cluster01::> sis status
Vserver Volume State Status Progress
----------- ------------------- -------- ------------ -------------------
vs1 vs1_vol2 Enabled Idle Idle for 00:00:01
10. Look at the savings in the Last Run Details pane. Now, both deduplication and compression
savings are reported, and total space savings has increased.
In this section you will enable Quality of Service on an object. This could be a Vserver, volume, LUN
or file (e.g. vmdk). No license is needed; this is a standard Clustered ONTAP feature.
QoS is a new feature in Clustered Data ONTAP 8.2. Management of QoS is not included in System
Manager 3.0, and therefore it should be configured through the CLI.
GUI integration is planned for ONTAP 8.2.1 and System Manager 3.1.
14. From the CLI, apply the created QoS policy to your LUN (tip: use lun show to copy your LUN path).
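A sketch of creating and applying a policy group; the policy-group name and the 100 IOPS limit are illustrative, and the LUN path should be the one copied from your lun show output:
cluster0x::> qos policy-group create -policy-group pg-lab -vserver vs1 -max-throughput 100iops
cluster0x::> lun modify -vserver vs1 -path /vol/vs1_lun2_vol/vs1-lun2 -qos-policy-group pg-lab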
15. View the Performance Monitor and note the generated load.