There are two checksum types available for drives used by Data ONTAP: BCS (block) and AZCS
(zoned). Understanding how the checksum types differ and how they impact storage management
enables you to manage your storage more effectively.
Both checksum types provide the same resiliency capabilities. BCS optimizes for data access speed,
and reserves the smallest amount of capacity for the checksum for drives with 520-byte sectors.
AZCS provides enhanced storage utilization and capacity for drives with 512-byte sectors. You
cannot change the checksum type of a drive.
To determine the checksum type of a specific drive model, see the Hardware Universe.
Aggregates have a checksum type, which is determined by the checksum type of the drives or array
LUNs that compose the aggregate. The following configuration rules apply to aggregates, drives, and
checksums:
Checksum types cannot be combined within RAID groups.
This means that you must consider checksum type when you provide hot spare drives.
When you add storage to an aggregate, if it has a different checksum type than the storage in the
RAID group to which it would normally be added, Data ONTAP creates a new RAID group.
An aggregate can have RAID groups of both checksum types.
These aggregates have a checksum type of mixed.
For mirrored aggregates, both plexes must have the same checksum type.
Drives of a different checksum type cannot be used to replace a failed drive.
You cannot change the checksum type of a drive.
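The configuration rules above can be modeled in a small sketch (illustrative Python, not Data ONTAP code; the data structures and field names are hypothetical):

```python
# Illustrative sketch (not Data ONTAP code): how the checksum rules above
# interact when storage is added to an aggregate.

def add_drives(aggregate, drive_checksum, count):
    """Add drives to an aggregate, honoring the checksum rules:
    a RAID group holds one checksum type, so drives of a different
    type go into a new RAID group and the aggregate becomes 'mixed'."""
    # Find an existing RAID group with a matching checksum type.
    for group in aggregate["raid_groups"]:
        if group["checksum"] == drive_checksum:
            group["drives"] += count
            break
    else:
        # No matching group: Data ONTAP creates a new RAID group.
        aggregate["raid_groups"].append(
            {"checksum": drive_checksum, "drives": count}
        )
    # The aggregate's checksum type reflects its RAID groups.
    types = {g["checksum"] for g in aggregate["raid_groups"]}
    aggregate["checksum"] = types.pop() if len(types) == 1 else "mixed"
    return aggregate

aggr = {"raid_groups": [{"checksum": "block", "drives": 14}], "checksum": "block"}
add_drives(aggr, "zoned", 6)
# The zoned drives land in their own RAID group; the aggregate is now mixed.
```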
When Data ONTAP puts a disk into the maintenance center and that disk is housed in a storage shelf
that supports automatic power cycling, power to that disk might be turned off for a short period of
time. If the disk returns to a ready state after the power cycle, the maintenance center tests the disk.
Otherwise, the maintenance center fails the disk immediately.
When Data ONTAP can put a disk into the maintenance center
When Data ONTAP detects certain disk errors, it tries to put the disk into the maintenance center for
testing. Certain requirements must be met for the disk to be put into the maintenance center.
If a disk experiences more errors than are allowed for that disk type, Data ONTAP takes one of the
following actions:
If the disk.maint_center.spares_check option is set to on (the default) and two or more
spares are available (four for multi-disk carriers), Data ONTAP takes the disk out of service and
assigns it to the maintenance center for data management operations and further testing.
If the disk.maint_center.spares_check option is set to on and fewer than two spares are
available (four for multi-disk carriers), Data ONTAP does not assign the disk to the maintenance
center.
It fails the disk and designates the disk as a broken disk.
If the disk.maint_center.spares_check option is set to off, Data ONTAP assigns the disk
to the maintenance center without checking the number of available spares.
Note: The disk.maint_center.spares_check option has no effect on putting disks into the
maintenance center from the command-line interface.
Data ONTAP does not put SSDs into the maintenance center.
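The decision logic described above can be summarized in a short sketch (illustrative Python, not Data ONTAP code; the function, parameter names, and error threshold are hypothetical):

```python
# Illustrative sketch (not Data ONTAP code) of the maintenance-center
# decision described above. Names and thresholds are hypothetical.

def disposition(errors, error_limit, spares_check_on, available_spares,
                multi_disk_carrier=False, is_ssd=False):
    """Return what Data ONTAP does with a disk relative to its error limit."""
    if errors <= error_limit or is_ssd:
        return "in service"          # SSDs are never sent to the maintenance center
    required = 4 if multi_disk_carrier else 2
    if not spares_check_on:
        return "maintenance center"  # spares are not checked at all
    if available_spares >= required:
        return "maintenance center"
    return "failed"                  # too few spares: the disk is failed outright

disposition(errors=5, error_limit=3, spares_check_on=True, available_spares=3)
# -> "maintenance center"
```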
Usually, you manage SSDs the same as HDDs, including firmware updates, scrubs, and zeroing.
However, some Data ONTAP capabilities do not make sense for SSDs, and SSDs are not supported
on all hardware models.
SSDs cannot be combined with HDDs within the same RAID group. When you replace an SSD in an
aggregate, you must replace it with another SSD. Similarly, when you physically replace an SSD
within a shelf, you must replace it with another SSD.
The following capabilities of Data ONTAP are not available for SSDs:
Disk sanitization is not supported for all SSD part numbers.
The maintenance center
FlexShare
1. Identify the name of the adapter whose state you want to change:
storage show adapter
The field labeled Slot lists the adapter name.
Configuring an optimum RAID group size requires a trade-off of factors. You must decide which
factors (speed of RAID rebuild, assurance against risk of data loss due to drive failure, optimized
I/O performance, and maximized data storage space) are most important for the aggregate that you
are configuring.
When you create larger RAID groups, you maximize the space available for data storage for the same
amount of storage used for parity (also known as the parity tax). On the other hand, when a disk
fails in a larger RAID group, reconstruction time is increased, impacting performance for a longer
period of time. In addition, having more disks in a RAID group increases the probability of a
multiple disk failure within the same RAID group.
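The parity-tax side of this trade-off is easy to quantify. The following sketch (illustrative arithmetic, not Data ONTAP code) compares two RAID-DP group sizes for the same disk count:

```python
# A quick way to see the "parity tax" trade-off described above
# (illustrative arithmetic, not Data ONTAP code).

def parity_tax(total_disks, raid_group_size, parity_per_group=2):
    """Fraction of disks consumed by parity (RAID-DP uses two
    parity disks per RAID group)."""
    groups = -(-total_disks // raid_group_size)  # ceiling division
    return groups * parity_per_group / total_disks

# 48 disks: larger RAID groups mean fewer parity disks...
small = parity_tax(48, raid_group_size=12)   # 4 groups -> 8 parity disks
large = parity_tax(48, raid_group_size=24)   # 2 groups -> 4 parity disks
print(f"12-disk groups: {small:.1%} parity, 24-disk groups: {large:.1%} parity")
# ...but each failure in a 24-disk group triggers a longer rebuild and
# exposes more disks to a concurrent second failure.
```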
Note: Storage array vendors refer to these groupings variously as RAID
groups, parity groups, disk groups, Parity RAID groups, and other terms.
Follow these steps when planning your Data ONTAP RAID groups for array LUNs:
1. Plan the size of the aggregate that best meets your data needs.
2. Plan the number and size of RAID groups that you need for the size of the aggregate.
Note: It is best to use the default RAID group size for array LUNs. The default RAID group
size is adequate for most organizations. The default RAID group size is different for array
LUNs and disks.
3. Plan the size of the LUNs that you need in your RAID groups.
To avoid a performance penalty, all array LUNs in a particular RAID group should be the
same size.
The LUNs should be the same size in all RAID groups in the aggregate.
4. Ask the storage array administrator to create the number of LUNs of the size you need for the
aggregate.
The LUNs should be optimized for performance, according to the instructions in the storage array
vendor documentation.
5. Create all the RAID groups in the aggregate at the same time.
Note: Do not mix array LUNs from storage arrays with different characteristics in the same
aggregate. Ensure that each RAID group is the same size as the other RAID groups in the aggregate, and that the array LUNs are
the same size as the LUNs in the other RAID groups in the aggregate.
When a disk goes down, Data ONTAP performs the following actions:
Logs the activity in the /etc/messages file.
Sends an AutoSupport message.
Attention: Always replace the failed disks with new hot spare disks as soon as possible, so that hot
spare disks are always available in the storage system. If no spare of the same size is available, Data ONTAP uses a spare of a
larger size and restricts its capacity to match the size of the disk it is replacing.
You create a Flash Pool aggregate by enabling the feature on an existing 64-bit aggregate composed
of HDD RAID groups, and then adding one or more SSD RAID groups to that aggregate. This
results in two sets of RAID groups for that aggregate: SSD RAID groups (the SSD cache) and HDD
RAID groups.
Before you begin
You must have identified a valid 64-bit aggregate composed of HDDs to convert to a Flash Pool
aggregate.
You must have determined write-caching eligibility of the volumes associated with the aggregate,
and completed any required steps to resolve eligibility issues.
You must have determined the SSDs you will be adding, and these SSDs must be owned by the
node on which you are creating the Flash Pool aggregate.
You must have determined the checksum types of both the SSDs you are adding and the HDDs
already in the aggregate.
You must have determined the number of SSDs you are adding and the optimal RAID group size
for the SSD RAID groups.
Using fewer RAID groups in the SSD cache reduces the number of parity disks required.
You must have determined the RAID level you want to use for the SSD cache.
You must have familiarized yourself with the configuration requirements for Flash Pool
aggregates.
About this task
After you add an SSD cache to an aggregate to create a Flash Pool aggregate, you cannot remove the
SSD cache to convert the aggregate back to its original configuration.
You can change the RAID group size of the SSD cache, but you cannot make this change until after
SSDs have been added. After disks have been added to a RAID group, they cannot be removed. If
you know that you want to use a different RAID group size than the default SSD RAID group size,
you can add a small number of SSDs at first. Then, after you update the RAID group size, you can add
the rest of the SSDs.
By default, the RAID level of the SSD cache is the same as the RAID level of the HDD RAID
groups. You can override this default selection by specifying the -t option when you add the first
SSD RAID groups. Although the SSD cache is providing caching for the HDD RAID groups, the
SSD cache is integral to the health of the aggregate as a whole. An SSD RAID group that
experiences a failure that exceeds the RAID protection capability of the RAID level in use takes the
aggregate offline. For this reason, it is a best practice to keep the RAID level of the SSD cache the
same as that of the HDD RAID groups.
There are platform- and workload-specific best practices for Flash Pool SSD cache size and
configuration. For information about these best practices, see Technical Report 4070: NetApp Flash
Pool Design and Implementation Guide.
Steps
1. Mark the aggregate as eligible to become a Flash Pool aggregate:
aggr options aggr_name hybrid_enabled on
If this step does not succeed, determine write-caching eligibility for the target aggregate.
2. Add the SSDs to the aggregate by using the aggr add command.
You can specify the SSDs by ID or by using the disk_type and ndisks parameters. You do not
need to specify a new RAID group; Data ONTAP automatically puts the SSDs into their own
RAID group.
If you plan to change the RAID group size for the SSD cache, you should add only a small
number of SSDs in this step. (You must add at least three.)
If the HDDs and the SSDs do not have the same checksum type, or if the aggregate is a
mixed-checksum aggregate, then you must use the -c parameter to specify the checksum type of the
disks you are adding to the aggregate.
You can specify a different RAID type for the SSD cache by using the -t option.
3. If you want a different RAID group size for the SSD cache than for the HDD RAID groups,
change the SSD RAID group size:
aggr options aggr_name cache_raid_group_size size
4. If you did not add all of the required SSDs in the previous step, add the rest of the SSDs by using
the aggr add command again.
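The phased sequence in steps 2 through 4 can be sketched as follows (illustrative Python that only builds 7-Mode command strings; the aggregate and disk names are hypothetical):

```python
# Illustrative sketch (not Data ONTAP code) of the phased SSD-add
# sequence above, for a non-default cache RAID group size.

def flash_pool_add_commands(aggr, ssds, cache_group_size, min_initial=3):
    """Return the CLI commands for steps 2-4: add a minimal SSD RAID
    group first, resize the cache RAID group, then add the rest."""
    if len(ssds) < min_initial:
        raise ValueError("at least three SSDs are required")
    first, rest = ssds[:min_initial], ssds[min_initial:]
    cmds = [f"aggr add {aggr} -d {' '.join(first)}"]
    cmds.append(f"aggr options {aggr} cache_raid_group_size {cache_group_size}")
    if rest:
        cmds.append(f"aggr add {aggr} -d {' '.join(rest)}")
    return cmds

for cmd in flash_pool_add_commands("aggr1", ["0a.30", "0a.31", "0a.32", "0a.33"], 8):
    print(cmd)
```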
The procedure described here does not apply to aggregates composed of array LUNs.
If you are moving an aggregate composed of disks using Storage Encryption, you need to take some
extra steps before and after moving the aggregate. If the physical security of the disks during the
move is a concern and your key management server supports the creation of a trust relationship
between two storage systems, then you should use that capability to retain secure encryption on the
disks during the move. Otherwise, you must set the encryption key to a known value before moving
the disks and give them a new authentication key after the disks are installed in the destination
storage system. This is the method described in the steps below.
Steps
1. Enter the following command at the source storage system to locate the disks that contain the
aggregate:
aggr status aggr_name -r
The locations of the data, parity, and dParity disks in the aggregate appear under the HA, SHELF,
and BAY columns (dParity disks appear for RAID-DP aggregates only).
2. If you are moving disks using Storage Encryption, reset their authentication key to their MSID
(the default security ID set by the manufacturer) by entering the following command on the
source storage system:
disk encrypt rekey 0x0 disk_list
You can also use the wildcard character (*) to specify the disks to be rekeyed. For example, to
rekey all disks in a specific shelf, you can specify adapter_name.shelf-ID.* as your disk list.
3. Boot the source storage system into Maintenance mode.
4. Take the aggregate offline:
aggr offline aggr_name
The aggregate is taken offline and its hosted volumes are unmounted.
5. Reboot into Normal mode.
6. If disk ownership autoassignment is on, turn it off:
options disk.auto_assign off
If the system is part of an HA pair, you must complete this step on each node.
7. Remove the software ownership information from the disks to be moved by entering the
following command for each disk:
disk assign disk_name -s unowned -f
8. Follow the instructions in the disk shelf hardware guide to remove the disks or shelves that you
identified previously from the source storage system.
9. If you turned off disk ownership autoassignment previously, turn it back on:
options disk.auto_assign on
If the system is part of an HA pair, you must complete this step on each node.
10. Install the disks or disk shelves in the target storage system.
11. Assign the disks that you moved to the target storage system by entering the following command
for each moved disk:
disk assign disk_name
The newly relocated aggregate is offline and considered as a foreign aggregate. If the newly
relocated aggregate has the same name as an existing aggregate on the target storage system, Data
ONTAP renames it aggr_name(1), where aggr_name is the original name of the aggregate.
12. Confirm that the newly relocated aggregate is complete:
aggr status aggr_name
Attention: If the aggregate is incomplete (if it has a status of partial), add all missing disks
before proceeding. Do not try to add missing disks after the aggregate comes online; doing so
causes them to become hot spare disks. You can identify the disks currently used by the
aggregate by using the aggr status -r command.
13. If the storage system renamed the aggregate because of a name conflict, rename the aggregate:
aggr rename aggr_name new_name
aggr status aggr_name
After you move the aggregate and bring it online in the destination storage system, you need to
recreate the following configuration information for all volumes associated with the aggregate:
Client connections (CIFS shares or NFS exports)
Scheduled tasks (for example, deduplication or reallocation)
Quotas
Relationships between volumes (for example, SnapMirror or SnapVault)
FlexCache volumes
LUN connection information
If your destination volume is on the same storage system as the source volume, your system must
have enough free space to contain both copies of the volume during the migration.
If the new FlexVol volume will be the root volume, it must meet the minimum size requirements for
root volumes, which are based on your storage system. Data ONTAP prevents you from designating
as root a volume that does not meet the minimum size requirement.
Steps
1. Enter the following command to determine the amount of space your traditional volume uses:
df -Ah vol_name
Example
sys1> df -Ah vol0
Aggregate                total       used      avail  capacity
vol0                      24GB     1434MB       22GB        7%
vol0/.snapshot          6220MB     4864MB     6215MB        0%
The total space used by the traditional volume is displayed as used for the volume name.
2. Enter the following command to determine the number of inodes your traditional volume uses:
df -i vol_name
Example
sys1> df -i vol0
Filesystem            iused       ifree   %iused   Mounted on
vol0                1010214    27921855       3%   /vol/vol0
5. Set the space guarantee on the destination volume so that writes to the volume do not fail due to a lack of available space in the containing
aggregate.
6. Confirm that the size of the destination volume is at least as large as the source volume by
entering the following command on the target volume:
df -h vol_name
7. Confirm that the destination volume has at least as many inodes as the source volume by entering
the following command on the destination volume:
df -i vol_name
Note: If you need to increase the number of inodes in the destination volume, use the
maxfiles command.
Result
You have created a destination volume with sufficient resources to accept the data from the source
volume.
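The space and inode checks in steps 6 and 7 can be sketched as a small helper (illustrative Python, not Data ONTAP code; all values are hypothetical):

```python
# Illustrative sketch (not Data ONTAP code) of the checks above: the
# destination volume needs at least as much space and at least as many
# inodes as the source volume currently uses.

def destination_ok(src_used_kb, src_inodes_used, dst_size_kb, dst_inodes):
    """Return (ok, problems) for a proposed migration destination."""
    problems = []
    if dst_size_kb < src_used_kb:
        problems.append("destination smaller than source data")
    if dst_inodes < src_inodes_used:
        problems.append("too few inodes: raise with the maxfiles command")
    return (not problems, problems)

ok, problems = destination_ok(
    src_used_kb=1_500_000,    # ~1.5 GB of source data (hypothetical)
    src_inodes_used=10_000,
    dst_size_kb=25_000_000,   # ~24 GB destination (hypothetical)
    dst_inodes=50_000,
)
# ok is True: the destination has enough space and inodes.
```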
options ndmpd.authtype challenge
Note: If you are migrating your volume between storage systems, make sure that these options
are set correctly on both systems so that the copy operation completes successfully.
For more information about the ndmpcopy command, see the Data ONTAP Data Protection
Online Backup and Recovery Guide for 7-Mode.
4. Verify that the ndmpcopy operation completed successfully by validating the copied data.
Result
The target volume now contains the data from the source volume.
Snapshot copies on the source volume are not affected by this procedure. However, they are not
replicated to the target FlexVol volume as part of the migration.
1. If you are migrating your root volume, complete the following steps:
a. Make the new FlexVol volume the root volume by entering the following command:
vol options vol_name root
Example
vol options vol0 root
4. If you are migrating the root volume and you changed the name of the root volume, update the
httpd.rootdir option to point to the new root volume.
5. If quotas were used with the traditional volume, configure the quotas on the new FlexVol volume.
6. Create a Snapshot copy of the target volume and create a new Snapshot schedule as needed.
For more information, see the Data ONTAP Data Protection Tape Backup and Recovery Guide
for 7-Mode.
7. Start using the migrated volume for the data source for your applications.
8. When you are confident the volume migration was successful, you can take the original volume
offline or destroy it.
Note: You should preserve the original volume and its Snapshot copies until the new FlexVol
volume has been stable for some time.
The volume that you are designating to be the new root volume must meet the minimum size
requirement. The required minimum size for the root volume varies, depending on the storage system
model. If the volume is too small to become the new root volume, Data ONTAP prevents you from
setting the root option.
In addition, the volume that you are designating to be the new root volume must have at least 2 GB
of free space. It must also have a fractional reserve of 100%. The vol status -v command
displays information about a volume's fractional reserve.
If you use a FlexVol volume for the root volume, ensure that it has a guarantee of volume.
Starting in Data ONTAP 8.0.1, you can designate a volume in a 64-bit aggregate to be the new root
volume.
If you move the root volume outside the current root aggregate, you must also change the value of
the aggregate root option so that the aggregate containing the root volume becomes the root
aggregate.
For storage systems with the root volume on the storage array, the array LUN used for the root
volume must meet the minimum array LUN size for the root volume. For more information about the
minimum array LUN size for the root volume, see the Hardware Universe at hwu.netapp.com.
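The eligibility rules above can be collected into one check (illustrative Python, not Data ONTAP code; the platform minimum size used here is a hypothetical placeholder, since the real value varies by storage system model):

```python
# Illustrative sketch (not Data ONTAP code) of the documented
# requirements for designating a new root volume.

def can_become_root(size_gb, free_gb, fractional_reserve, guarantee,
                    platform_min_root_gb=250):
    """Check the documented requirements for the 'root' option."""
    return (size_gb >= platform_min_root_gb    # platform-specific minimum size
            and free_gb >= 2                   # at least 2 GB of free space
            and fractional_reserve == 100      # fractional reserve of 100%
            and guarantee == "volume")         # FlexVol guarantee of 'volume'

can_become_root(size_gb=300, free_gb=5, fractional_reserve=100, guarantee="volume")
# -> True
```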
About this task
You might want to change the storage system's root volume, for example, when you migrate your
root volume from a traditional volume to a FlexVol volume.
Steps
1. Identify an existing volume to use as the new root volume, or create the new root volume by
using the vol create command.
2. Use the ndmpcopy command to copy the /etc directory and all of its subdirectories from the
current root volume to the new root volume.
For more information about ndmpcopy, see the Data ONTAP Data Protection Tape Backup and
Recovery Guide for 7-Mode.
3. Enter the following command to specify the new root volume:
vol options vol_name root
vol_name is the name of the new root volume.
If the volume does not have at least 2 GB of free space, the command fails and an error message
appears.
After a volume is designated to become the root volume, it cannot be brought offline or restricted.
4. If you moved the root volume outside the current root aggregate, enter the following command to
change the value of the aggregate root option so that the aggregate containing the root volume
becomes the root aggregate:
aggr options aggr_name root
aggr_name is the name of the new root aggregate.
For more information about the aggregate root option, see the na_aggr(1) man page.
5. Enter the following command to reboot the storage system:
reboot
When the storage system finishes rebooting, the root volume is changed to the specified volume.
If you changed the root aggregate, a new root volume is created during the reboot when the
aggregate does not already contain a FlexVol volume designated as the root volume and when the
aggregate has at least 2 GB of free space.
6. Update the httpd.rootdir option to point to the new root volume.
A FlexCache volume is a sparsely-populated volume on a local storage system that is backed by a
volume on a different, optionally remote, storage system. A sparsely-populated volume or a sparse
volume provides access to data in the backing volume (also called the origin volume) without
requiring that all the data be in the sparse volume.
You can use only FlexVol volumes to create FlexCache volumes. However, many of the regular
FlexVol volume features are not supported on FlexCache volumes, such as Snapshot copy creation,
deduplication, compression, FlexClone volume creation, volume move, and volume copy.
You can use FlexCache volumes to speed up access to data, or to offload traffic from heavily
accessed volumes. FlexCache volumes help improve performance, especially when clients need to
access the same data repeatedly, because the data can be served directly without having to access the
source. Therefore, you can use FlexCache volumes to handle system workloads that are read-intensive.
Cache consistency techniques help in ensuring that the data served by the FlexCache volumes
remains consistent with the data in the origin volumes.
The caching system for a clustered Data ONTAP volume must have Data ONTAP 8.x or later
operating in 7-Mode.
The caching system must have a valid NFS license, with NFS enabled.
Note: The NFS license is not required when the caching system is an SA system.
The licensed_feature.flexcache_nfs.enable option must be set to on.
If the origin volume is in a vFiler unit, you must set this option for the vFiler context.
The flexcache.enable option must be set to on.
Note: If the origin volume is in a vFiler unit, you must set this option for the vFiler context.
For information about configuring and managing FlexCache volumes in a clustered Data ONTAP
environment, see the Clustered Data ONTAP Logical Storage Management Guide.
The following capabilities are not supported for FlexCache volumes:
Deduplication
Creation of FlexCache volumes in any vFiler unit other than vFiler0
Creation of FlexCache volumes in the same aggregate as their origin volume
Mounting the FlexCache volume as a read-only volume
If your origin volume is larger than 16 TB, the output of the df command on the caching system will
show "---" for the size information about the origin volume. To see the size information for the origin
volume, you must run the df command on the origin system.
You cannot use the following Data ONTAP capabilities on FlexCache origin volumes or storage
systems without rendering all of the FlexCache volumes backed by that volume or storage system
unusable:
Note: If you want to perform these operations on an origin system, you can destroy the affected
FlexCache volumes, perform the operation, and re-create the FlexCache volumes. However, the
FlexCache volumes will need to be repopulated.
You cannot move an origin volume between vFiler units or to vFiler0 by using any of the
following commands:
vfiler move
vfiler add
vfiler remove
vfiler destroy
Note: You can use SnapMover (vfiler migrate) to migrate an origin volume without
rendering its FlexCache volumes unusable. When you take a FlexCache
volume offline, the space allocated for the FlexCache becomes available for use by other volumes
in the aggregate (as with all FlexVol volumes). However, unlike regular FlexVol volumes,
FlexCache volumes cannot be brought online if there is insufficient space in the aggregate to honor
their space guarantee.
If an aggregate containing FlexCache volumes runs out of free space, Data ONTAP
randomly selects a FlexCache volume in that aggregate to be truncated. Truncation means that
files are removed from the FlexCache volume until the size of the volume is decreased to a
predetermined percentage of its former size.
If you have regular FlexVol volumes in the same aggregate as your FlexCache volumes, and the
aggregate starts filling up, the FlexCache volumes can lose some of their unreserved space (if it is not
being used). In this case, when the FlexCache volume needs to fetch a new data block and it does not
have enough free space to accommodate the data block, an existing data block is removed from one
of the FlexCache volumes to accommodate the new data block.
If the ejected data is causing many cache misses, you can add more space to the aggregate or move
some of the data to another aggregate.
You must have configured and enabled the FlexCache feature correctly on the caching system.
If the origin volume is a clustered Data ONTAP volume, the following items must be true:
There is a data LIF configured for the origin volume's SVM enabled for the fcache
protocol.
For more information about creating and configuring LIFs, see the Clustered Data ONTAP
Network Management Guide.
An export policy exists for the origin volume's SVM that includes the flexcache protocol,
and all systems hosting FlexCache volumes you want to access this origin volume are listed as
clients for the export policy.
For more information about creating export policies, see the Clustered Data ONTAP File
Access Management Guide for NFS.
Step
1. Create the FlexCache volume:
vol create cache_vol aggr [size{k|m|g|t}] -S origin_system:source_vol
source_vol is the name of the volume you want to use as the origin volume on the origin
system.
cache_vol is the name of the new FlexCache volume you want to create.
aggr is the name of the containing aggregate for the new FlexCache volume.
size{k|m|g|t} specifies the FlexCache volume size in kilobytes, megabytes, gigabytes, or
terabytes. If you do not specify a unit, the size is taken in bytes and rounded up to the nearest multiple of 4
KB.
Note: For best performance, do not specify a size when you create a FlexCache volume.
The new FlexCache volume is created and an entry is added to the /etc/exports file for the new
volume.
Example
The following command creates a FlexCache volume called newcachevol, with the
Autogrow capability enabled, in the aggregate called aggr1, with a source volume vol1 on
storage system corp_storage:
vol create newcachevol aggr1 -S corp_storage:vol1
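The rounding rule for sizes given without a unit can be sketched as follows (illustrative Python, not Data ONTAP code):

```python
# A small sketch of the documented size handling: a size given without
# a unit is taken in bytes and rounded up to the nearest multiple of
# 4 KB (illustrative, not Data ONTAP code).

def flexcache_size_bytes(size: int, block: int = 4096) -> int:
    """Round a byte count up to the next multiple of 4 KB."""
    return -(-size // block) * block   # ceiling division

flexcache_size_bytes(10_000)   # -> 12288 (three 4 KB blocks)
```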
___
Are snapshots included in volume tape
backups?
KB Doc ID 3012459 Version: 4.0 Published date: 06/20/2014 Views: 633
Answer
Are snapshots included in volume tape backups?
When restoring from backup, will snap list show available snapshots prior to the backup made?
Snapshots are not preserved during a backup. Backup takes a snapshot of the active volume and
backs up only that data; it does not back up the snapshot metadata or the data located in the snap
reserve.
Each snapshot instance would need to be backed up to have an available copy of the volume in
those various stages (which would probably already exist through normal backup scheduling).
__
What are the options when creating a
volume?
KB Doc ID 3010775 Version: 6.0 Published date: 06/20/2014 Views: 1636
Answer
What are the arguments of the vol create command?
The following describes the options when creating a volume:
SYNTAX:
vol create volname [ -l language_code ] [ -f ] [ -m ] [ -L ] [ -n ] [ -t
raidtype ] [ -r raidsize ] { ndisks[@size] | -d disk1 [ disk2 ... ] [ -d diskn
[ diskn+1 ... ] ] }
The -t raidtype argument specifies the RAID type used when creating raidgroups for this
volume. The valid types are raid4 (one parity disk per raidgroup) and raid_dp (two parity disks
per raidgroup).
The -r raidsize argument specifies the maximum number of disks in each RAID group in the
volume. The maximum and default values of raidsize are platform-dependent, based on
performance and reliability considerations.
ndisks is the number of disks in the volume, including the parity disks. The disks in this newly
created volume come from the pool of spare disks. The smallest disks in this pool join the
volume first, unless you specify the @size argument. size is the disk size in gigabytes (GB), and
disks that are within 10% of the specified size will be selected for use in the volume.
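The spare-selection behavior described above can be sketched as follows (illustrative Python, not Data ONTAP code; the disk sizes are hypothetical):

```python
# Illustrative sketch (not Data ONTAP code) of the spare-selection rule
# above: with @size, spares within 10% of the requested size qualify;
# without it, the smallest spares join the volume first.

def pick_spares(spare_sizes_gb, ndisks, at_size_gb=None):
    """Return the spare sizes that would be used for 'ndisks[@size]'."""
    if at_size_gb is not None:
        candidates = [s for s in spare_sizes_gb
                      if abs(s - at_size_gb) <= 0.10 * at_size_gb]
    else:
        candidates = sorted(spare_sizes_gb)   # smallest spares first
    if len(candidates) < ndisks:
        raise ValueError("not enough matching spare disks")
    return candidates[:ndisks]

pick_spares([272, 274, 546, 547], ndisks=2, at_size_gb=272)
# -> the two ~272 GB spares; the 546/547 GB spares fall outside 10%
```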
The -m option can be used to specify that the new volume be mirrored (have two plexes) upon
creation. If this option is given, then the indicated disks will be split across the two plexes. By
default, the new volume will not be mirrored.
The -L option can be used to specify that the new volume be a SnapLock volume upon creation.
If this option is given, then the newly created volume will be a SnapLock volume. SnapLock
volumes behave differently from regular volumes and are not usable as regular volumes.
SnapLock volumes should only ever be created for use by applications that are specifically
designed to operate correctly with these types of volumes.
The -n option can be used to display the command that the system will execute, without actually
making any changes. This is useful for displaying the automatically selected disks, for example.
If you use the -d disk1 [ disk2 ... ] argument, the filer creates the volume with the specified spare
disks disk1, disk2, and so on. You can specify a space-separated list of disk names. Two separate
lists must be specified if the new volume is mirrored. In the case that the new volume is
mirrored, the indicated disks must result in an equal number of disks on each new plex.
The disks in a plex are not permitted to span spare pools. This behavior can be overridden with
the -f option.
If you use the -l language_code argument, the filer creates the volume with the language
specified by the language code. The default is the language of the root volume of the filer.
Language codes are:
C (POSIX)
da
(Danish)
de
(German)
en
(English)
en_US
(English (US))
es
(Spanish)
fi
(Finnish)
fr
(French)
he
(Hebrew)
it
(Italian)
ja
(Japanese euc-j)
ja_JP.PCK
(Japanese PCK (sjis))
ko
(Korean)
no
(Norwegian)
nl
(Dutch)
pt
(Portuguese)
sv
(Swedish)
zh
(Simplified Chinese)
zh.GBK
(Simplified Chinese (GBK))
zh_TW
(Traditional Chinese euc-tw)
zh_TW.BIG5
(Traditional Chinese Big 5)
To use UTF-8 as the Network File System (NFS) character set, append .UTF-8 to the language code.
___
How to determine if volumes contain non-Unicode directories
Description
When files are created using Network File System (NFS) on volumes that have the create_ucode
option disabled, the directories must be converted to the Unicode format when CIFS users access
the directories. For very high file count volumes, this conversion process can impact
performance.
For alternative methods of moving directories to the Unicode format, see BUG 13131:
Conversion of large directories to CIFS Unicode format can take a long time
This article describes the procedure for determining whether a volume contains non-Unicode
directories, while ensuring that the Unicode conversion is not triggered during the process.
Procedure
In this scenario, a volume uc will be available, containing around 40 files that were created
from NFS with the create_ucode option enabled. There will also be 100 additional files and two
directories that were created with the create_ucode option disabled.
Perform the following steps:
1. Create a snapshot for testing and checking the number of inodes used in the
volume.
Note: An empty volume usually has around 100 used inodes for special files.
ata3050-rtp*> snap create uc uc_check; df -i uc
Filesystem        iused      ifree   %iused   Mounted on
/vol/uc/            249     311031       0%   /vol/uc/
In this example, there are approximately 149 directories and files contained
in the /vol/uc path.
2. Mount uc_check, the snapshot created in Step 1.
Note: Do not attempt to access the active filesystem through CIFS, as this will trigger the
Unicode conversion process, regardless of the value of convert_ucode.
Note: From a CIFS perspective, there are only 40 files and five folders present in the root
of the volume. These are the 40 files that were created when the create_ucode option was
enabled. This disparity between the number of inodes used and those reported by CIFS
indicates the likelihood of unconverted files being present in the volume.
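The disparity heuristic described in this note can be sketched as follows (illustrative Python, not Data ONTAP code; the ~100 special-file inodes is the rule of thumb from Step 1):

```python
# Illustrative sketch (not Data ONTAP code) of the heuristic above:
# compare the inodes reported by df -i with the file and folder count
# visible over CIFS. A volume carries roughly 100 inodes for special
# files, so a large surplus suggests unconverted (non-Unicode) entries.

def estimated_unconverted(iused, cifs_visible, special_inodes=100):
    """Estimate how many inodes belong to entries CIFS cannot yet see."""
    return max(iused - special_inodes - cifs_visible, 0)

estimated_unconverted(iused=249, cifs_visible=45)
# -> 104: close to the ~100 non-Unicode files plus two directories
#    described in this scenario
```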
3. Access the active filesystem of /vol/uc and traverse the directory structure to
trigger the Unicode conversion process.
4. Once completed, take another snapshot such that a stable state is available
to view the filesystem again:
ata3050-rtp*> snap create uc after_convert; df -i uc
Filesystem        iused      ifree   %iused   Mounted on
/vol/uc/            249     311031       0%   /vol/uc/
From the root of the volume of the after_convert snapshot, note that the
additional 100 files that were originally created in the non-Unicode format are
now reported. This indicates that the conversion has occurred.
Notes:
Running df -i during snapshot creation is useful to account for the changes
being made to the filesystem by active users.
Gathering the file/folder properties from CIFS might require significant time
for larger filesystems.
Description
Error message: qtree: There are shares with connections on qtree
/vol/vol1/data. Its security style cannot be changed at this time.
When changing a mixed qtree to UNIX, what happens to NT files, owner and security?
When creating a new qtree default permissions are incorrect
How to change NTFS permissions on the volume level
Procedure
The command syntax for changing security styles is as follows:
qtree security <path> <unix | ntfs | mixed>
Note: Use "/" to specify the root volume instead of /vol/vol0.
To change qtree test on volume vol0 to unix:
qtree security /vol/vol0/test/ unix
When trying to change security, you may receive an error indicating the qtree security cannot be
changed.
qtree security /vol/vol1/data ntfs
qtree: There are shares with connections on qtree /vol/vol1/data.
Its security style cannot be changed at this time.
There can be no clients connected to a share on the qtree being changed. Use the cifs
sessions command to identify the connected clients. Then either have those clients disconnect,
or use the cifs terminate <clientname> command.
Note: The UNIX and NTFS file permissions are not removed when the qtree security style is
changed. Files keep their original security permissions until those permissions are explicitly
modified.
For example, if NTFS qtree "home_dir" is changed to UNIX, the files in "home_dir" will retain
their NTFS permissions. If an individual file named "test" in qtree "home_dir" is later
modified by a UNIX user, assuming the UNIX user maps to a valid NTFS user that has the
proper NTFS permissions, then the permissions on file "test" will become UNIX style. All other
file permissions in qtree "home_dir" will remain NTFS style until their permissions are modified
by a UNIX user.
Warning: Be sure to fully understand how NTFS and UNIX permissions function before
changing the qtree security style. Also, be sure the desired NTFS-user-to-UNIX-user
mapping is established.
Deleted snapshot does not release space in volume / aggregate
Symptoms
BUG 141224. This issue might occur when a user deletes a snapshot and the available space
does not change. It might also occur when the snapshot report does not show many blocks
held, but the df output shows a large amount of space held by snapshots.
Another possible cause is improper functioning of the volume scanner.
Solution
The snap delete processing is performed in the background, so you do not always see the space
reclaimed immediately. This is done so that the delete and recalculation time does not slow
down system performance.
If you are relying on the space freed by deleting a snapshot, the fastest way to have the
storage system recalculate and finish deleting all information from the deleted snapshot
is to offline/online the volume. However, it is recommended to wait until the background
processes finish the delete. The processes might take a long time when many snapshots are
deleted on a system at once.
Answer
Before estimating the necessary size of the volume, decide how to manage storage at the volume
level. In SAN environments, there are three methods to consider for managing storage at the
volume level: Volume Autosize, Snapshot Autodelete, and Fractional Reserve. The method you
select helps determine the volume size. In Data ONTAP, by default, Fractional Reserve is set
to 100 percent, and Volume Autosize and Snapshot Autodelete are disabled. However, in a SAN
environment, use the Snapshot Autodelete method or the Volume Autosize method; both are
less complicated than the Fractional Reserve method.
Volume Autosize: Volume Autosize automatically makes more free space available when
a FlexVol volume is nearly full by incrementally increasing the volume size.
Fractional Reserve: Fractional Reserve is a volume setting that enables you to configure
how much space Data ONTAP reserves in the volume for overwrites in space-reserved
LUNs and files when Snapshot copies are created.
Volume Autosizing
Volume Autosize is useful if the volume's containing aggregate has enough space to support a
larger volume. Volume Autosize allows you to use the free space in the containing aggregate as a
pool of available space shared between all the volumes on the aggregate.
Volumes can be configured to automatically grow as needed, as long as the aggregate has free
space. When using the Volume Autosize method, you can increase the volume size incrementally
and set a maximum size for the volume. Monitor the space usage of both the aggregate and the
volumes within that aggregate to ensure volumes are not competing for available space.
Note: The autosize capability is disabled by default. Run the vol autosize command to enable,
configure and to view the current autosize settings for a volume.
Snapshot Autodelete
Snapshot Autodelete is a volume-level option that allows you to define a policy for
automatically deleting snapshot copies, based on a definable threshold. You can set the
threshold, or trigger, that causes snapshot copies to be deleted automatically.
Fractional Reserve
When SIS is enabled on the volume, fractional reserve behaves as if a snapshot is always present.
Therefore, fractional reserve will be honored and the volume will appear to have less space
available. This can be problematic, as LUNs can potentially go offline if the volume fills up and
Thin Provisioned:
Best Practices:
When creating a LUN and its containing volume, it is highly recommended to take the proposed
size of the LUN and add 5GB as the size for the containing volume; this allows for buffering
and metadata. This rule of thumb applies for LUNs up to 1TB in size, and VMware and SnapDrive
already apply it automatically.
If you intend to create a LUN larger than 1TB, make the containing volume 2-3% larger than the
LUN it will contain.
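The sizing rule of thumb above can be sketched as a small helper (an illustrative assumption: 5GB of headroom up to a 1TB LUN, and ~3% headroom above that, taking the upper end of the 2-3% range):

```python
def suggested_volume_size_gb(lun_size_gb):
    """Sketch of the container-volume sizing rule of thumb described above.

    Assumptions: 1 TB = 1024 GB; 3% headroom is used for LUNs over 1 TB.
    """
    if lun_size_gb <= 1024:
        # Up to 1 TB: add 5 GB for buffering and metadata.
        return lun_size_gb + 5
    # Larger than 1 TB: make the volume 2-3% larger than the LUN.
    return round(lun_size_gb * 1.03, 1)

print(suggested_volume_size_gb(500))   # 500 GB LUN -> 505 GB volume
print(suggested_volume_size_gb(2048))  # 2 TB LUN -> ~2109.4 GB volume
```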
There are many ways to configure the NetApp storage appliance for LUN thin provisioning; each
has advantages and disadvantages. It should be noted that it is possible to have thinly provisioned
volumes and non-thinly provisioned volumes on the same storage system or even the same
aggregate. LUNs for critical production applications might be configured without thin
provisioning, while LUNs for other types of applications might be thinly provisioned. The
following are considered to be best practice configurations:
Volume Guarantee=None Configuration
guarantee = none
LUN reservation = enabled
fractional_reserve = 0%
snap_reserve = 0%
autodelete = volume / oldest_first
autosize = off
try_first = snap_delete
This configuration has the advantage that the free space in the aggregate is used as a shared
pool of free space. The disadvantages are a high level of dependency between volumes and that
the level of thin provisioning cannot easily be tuned on an individual volume basis. When
using this configuration, the total size of the volumes will be greater than the actual
storage available in the host aggregate. With this configuration, the storage administrator
will generally size volumes so that they only need to manage and monitor the used space in
the aggregate.
Autogrow/Autodelete Configuration
guarantee = volume
LUN reservation = disabled
fractional_reserve = 0%
snap_reserve = 0%
autodelete = volume / oldest_first
autosize = on
try_first = autogrow
This configuration has the advantage that it is possible, if desired, to finely tune the level of thin
provisioning for each application. With this configuration, the volume size defines or guarantees
an amount of space that is only available to LUNs within that volume. The aggregate provides a
shared storage pool of available space for all the volumes contained within it. If the LUNs or
snapshot copies require more space than available in the volume, the volumes will automatically
grow, taking more space from the containing aggregate.
The degree of thin provisioning is done on a per-volume level, allowing an administrator to, for
example, set the volume size to 95% of the cumulative LUN size for a more critical application
and to 80% for a less critical application. It is possible to tune how much of the shared available
space in the aggregate a particular application can consume by setting the maximum size to
which the volume is allowed to grow as explained in the description of the autogrow feature.
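The per-volume tuning described above can be sketched numerically (a hypothetical helper; the 95% and 80% figures are the examples given in the text):

```python
def thin_volume_size_gb(cumulative_lun_size_gb, provision_pct):
    """Size a volume as a percentage of the cumulative LUN size it holds.

    Hypothetical helper illustrating per-volume thin-provisioning tuning:
    a higher percentage means less thin provisioning (more guaranteed space).
    """
    return cumulative_lun_size_gb * provision_pct / 100

# For 1000 GB of LUNs in a volume:
print(thin_volume_size_gb(1000, 95))  # more critical application -> 950.0 GB
print(thin_volume_size_gb(1000, 80))  # less critical application -> 800.0 GB
```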
In cases where snapshots are also being used, the volume might also be configured larger than
the size of the LUNs contained within the volume. The advantage of having the LUN space
reservation disabled in that case is that snapshots can then use the space that is not needed by the
LUNs. The LUNs themselves are also not in danger of running out of space because the
autodelete feature will remove the snapshots consuming space. It should be noted that,
currently, snapshots used to create clones will not be deleted by autodelete.
Thick Provisioned:
Explanation:
This is the default type. When you use thick provisioning, all of the space specified for the LUN
is allocated from the volume at LUN creation time. Even if the volume fills up to 100%,
the LUN still has space allocated to it and can still be written to.
Related Links:
Chapter 2: 'How 100 percent fractional reserve affects available space' of Block Access
Management Guide for FCP
Answer
Every volume has a language. The storage system uses a character set appropriate to the
language. The language of a volume can be specified during volume creation or can be
modified later after creation. By default, the language of a volume is the same as that of the root
volume. In some cases, volume language is not required to be set. The following are the different
situations:
1. If the volume is used only for NFS (versions earlier than v4):
Do nothing (but it does matter when the files are created by Unicode clients).
2. If the volume is used only for CIFS, or for NFSv4 and later:
Volume language is not relevant for CIFS. For NFS, set the language of the volume to the
language of its clients.
3. If the volume is used for both CIFS and NFS (versions earlier than v4):
Set the language of the volume to the locale used by NFS.
4. Activate these options immediately on volume creation by turning on create_ucode
and convert_ucode volume options.
vol options <volume_name> create_ucode on | off
vol options <volume_name> convert_ucode on | off
5. Downlevel legacy clients such as MS-DOS, which do not support Unicode, do require the
volume language setting; it provides the OEM character set used by these clients.
For legacy clients, use the 'en' volume language setting, which provides the normal NFS
character set and the cp850 OEM character set covering most European languages,
including German, Spanish, and French. Otherwise, use 'en_US', which provides the NFS
character set and the cp437 OEM character set. The differences between the two can be
found in the relevant cp850.h and cp437.h header files.
Best practices:
It is best if all volumes use the same language. If a volume's language differs from the
console language, commands with path names might not work.
To see the same file names from Windows and UNIX, only use characters that are legal
for both and are legal in the NFS character set.
For example, do not put a Japanese file name on a French volume.
Technically, there is no need to reboot after changing a volume's language, since both the
on-disk and in-memory translation tables are changed to the new language. However,
if error messages like Error starting OEM char set or Error starting NFS char
set are encountered, a reboot is required, since the new in-memory tables could not
be built, perhaps due to insufficient memory. There is also a risk of stale data in
memory if the system is not rebooted.
Changing the volume language after data has been written might have some effects if it falls into
any of the categories below:
1. If the volume contains only replicas of the Windows OSSV data, then there should be no
cause for concern.
2. If ALL of the following conditions prevail, then there is no workaround except
reinitializing the SnapVault or qtree SnapMirror relationships when they fail:
a. The volume contains replica qtrees of non-Unicode sources, that is,
storage system qtrees that were not accessed by the Common Internet File
System (CIFS) protocol and where the volume options create_ucode and
convert_ucode are both OFF
b. The primary volume has create_ucode turned on, and the secondary does not
c. The primary data has non-ASCII filenames with multiple hardlinks in the same
directory (not the same directory tree, but the same directory)
For replica data not falling into either of the above categories:
NFS access on the secondary volume might be impaired (names might look odd, or you can only
see an NFS alternate name like: 8U10000) until a directory operation happens on the primary,
and a successful update operation completes.
To accelerate recovery in this case, rename each non-ASCII filename on the primary. Ideally,
rename each file into another directory, and then rename it back to its original position.
The next snapvault/snapmirror update will then complete correctly.
For NON-replica data:
1. If the volume's create_ucode and convert_ucode options are both OFF, and the NFS
data is accessed only using NFS (NEVER by CIFS), there will be no issues.
2. If either the create_ucode or convert_ucode option is set on the volume, or if the NFS
data is also accessed by CIFS, there could be some issues related to the NFS alternate name.
3. If you have files that have characters beyond 0x7f that are in non-Unicode directories you
will have issues in accessing them after the switch. If you are sure that those do not exist,
everything should be OK.
For files in Unicode directories, the Unicode name is definitive, and the issue is that those
names are translated based on the character set you specify. So, if the client is configured to
accept UTF-8 names, then everything should work.
Can I use an extended character set (i.e. Japanese) without changing the language on the
storage system?
If the volume's language is a UTF-8 language set (e.g. en_US.UTF-8), and the client is writing
filenames less than 255 bytes in length, the language does not need to change. If the filenames
are quite long - e.g. longer than 85 characters, with each character translating into 3 bytes
of UTF-8 - the result is a filename larger than the allowed size, and users would see the
NFS alternate name if they attempted to access the file from NFS. In that case, changing
the language to a localized supported language (e.g. ja_v1) would allow 2 bytes per Japanese
character, making an 85-character file name only 170 bytes instead of 255. The administrator
would still run into the file name limitation once a file name reaches 128 characters.
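The byte arithmetic behind the 255-byte limit above can be checked directly (an illustrative sketch; Shift JIS is used here as a stand-in 2-byte-per-character Japanese encoding, which is an assumption about what ja_v1 behaves like):

```python
# Compare the on-disk byte length of the same Japanese filename in UTF-8
# (3 bytes per character) versus a 2-byte-per-character localized encoding.
LIMIT = 255                    # filename byte limit discussed above
name = "\u3042" * 86           # 86 hiragana characters (just over 85)

utf8_bytes = len(name.encode("utf-8"))      # 86 * 3 = 258 bytes -> too long
sjis_bytes = len(name.encode("shift_jis"))  # 86 * 2 = 172 bytes -> fits

print(utf8_bytes > LIMIT)   # True: UTF-8 name exceeds the limit
print(sjis_bytes > LIMIT)   # False: 2-byte encoding stays under it
```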
Description
How can I migrate a SnapVault relationship to a new source/primary volume?
How can I move my SnapVault relationships from a traditional volume to a FlexVol?
Procedure
It is possible to move a SnapVault relationship from one volume to another on the primary
(source) filer. The new volume can have a different volume name or the same volume name can
be used (if migrating from traditional to FlexVols, for example). This article contains the
procedures to perform both types of migrations.
Notes:
A SnapVault restore of the data is required, and thus the work should be scheduled during
a maintenance window.
The SnapVault restore will require a data transfer from the secondary filer to the primary
filer.
If the new volume will have a different volume name than the original:
1. Verify the existing relationship:
dst_filer> snapvault status
Snapvault secondary is ON.
Source                        Destination                    State        Lag       Status
src_filer:/vol/source/qtree1  dst_filer:/vol/sv_dest/qtree1  Snapvaulted  01:25:35  Idle
2. Disable all protocol (CIFS, NFS, iSCSI, etc) access to the primary volume. This will
prevent any data change on the primary volume's qtrees while performing the data
migration.
3. Perform a SnapVault update to ensure the secondary filer has the same data as the
primary filer:
dst_filer> snapvault update /vol/sv_dest/qtree1
5. Perform a SnapVault restore from the secondary qtree to create the primary qtree on the
newly created FlexVol:
src_filer> snapvault restore -S dst_filer:/vol/sv_dest/qtree1
/vol/flex_source/qtree1
Restore from dst_filer:/vol/sv_dest/qtree1 to /vol/flex_source/qtree1
started.
Monitor progress with the 'snapvault status' command.
Abort the restore with ^C.
Data restored to /vol/flex_source/qtree1.
Made qtree /vol/flex_source/qtree1 writable.
Restore to /vol/flex_source/qtree1 completed successfully.
6. Verify the SnapVault status on the primary following the restore. It should show the
original relationship in a state of "Source" and a state of "Broken-off" for the relationship
from the restore:
src_filer> snapvault status
Snapvault primary is ON.
Source                         Destination                        State       Lag       Status
dst_filer:/vol/sv_dest/qtree1  src_filer:/vol/flex_source/qtree1  Broken-off  01:36:39  Idle
src_filer:/vol/source/qtree1   dst_filer:/vol/sv_dest/qtree1      Source      01:40:10  Idle
7. Resync the SnapVault relationship for the new primary FlexVol on the secondary filer.
dst_filer> snapvault start -r -S src_filer:/vol/flex_source/qtree1
/vol/sv_dest/qtree1
8. Once the resync has completed, the SnapVault relationship will be from
src_filer:/vol/flex_source/qtree1 to dst_filer:/vol/sv_dest/qtree1. The SnapVault status on
the primary filer will show three relationships:
a. The SnapVault relationship for the SnapVault restore performed in Step 5.
b. The new SnapVault relationship to the primary FlexVol.
c. The original SnapVault relationship to the traditional volume.
src_filer> snapvault status
Snapvault primary is ON.
Source                             Destination                        State       Lag       Status
dst_filer:/vol/sv_dest/qtree1      src_filer:/vol/flex_source/qtree1  Broken-off  00:01:40  Idle
src_filer:/vol/flex_source/qtree1  dst_filer:/vol/sv_dest/qtree1      Source      01:45:51  Idle
src_filer:/vol/source/qtree1       dst_filer:/vol/sv_dest/qtree1      Source
9. Confirm the SnapVault status on the secondary filer shows only the relationship for the
new primary FlexVol:
dst_filer> snapvault status
Snapvault secondary is ON.
Source                             Destination                    State        Lag       Status
src_filer:/vol/flex_source/qtree1  dst_filer:/vol/sv_dest/qtree1  Snapvaulted  00:06:54  Idle
10. Remove the original SnapVault relationship from the primary filer:
src_filer> snapvault release /vol/source/qtree1
dst_filer:/vol/sv_dest/qtree1
11. Clean-up the Broken-off relationship for the restore by deleting the base snapshot on the
primary filer for this relationship.
a. Determine the base snapshot's name by locating it in the output of a "snapvault
status -l". Be sure to look for the relationship where the State is "Broken-off".
src_filer> snapvault status -l /vol/flex_source/qtree1
Snapvault primary is ON.
Source:                  dst_filer:/vol/sv_dest/qtree1
Destination:             src_filer:/vol/flex_source/qtree1
Status:                  Idle
Progress:
State:                   Broken-off
Lag:                     01:52:45
Mirror Timestamp:        Sun Oct 15 18:59:31 GMT 2006
Base Snapshot:           src_filer(0022579971)_flex_source_qtree1-dst.2
Current Transfer Type:
Current Transfer Error:
Contents:
Last Transfer Type:      Initialize
Last Transfer Size:      48 KB
Last Transfer Duration:  00:00:01
Last Transfer From:      dst_filer:/vol/sv_dest/qtree1
12. Confirm the primary filer only shows the SnapVault relationship for the new FlexVol:
src_filer> snapvault status
13. Change the SnapVault snap schedule on the primary filer to use the new FlexVol name
and stop snapshots on the old SnapVault primary volume.
a. Add the schedule for the new source FlexVol:
14. Verify a successful manual SnapVault update from the secondary filer:
dst_filer> snapvault update /vol/sv_dest/qtree1
15. Re-enable protocols (CIFS, NFS, iSCSI, etc) on the primary filer.
If the new volume will have the same volume name as the original:
1. Verify the existing relationship:
2. Disable all protocol (CIFS, NFS, iSCSI, etc) access to the primary volume. This will
prevent any data change on the primary volume's qtrees while performing the data
migration.
3. Perform a SnapVault update to ensure the secondary filer has the same data as the
primary filer:
dst_filer> snapvault update /vol/sv_dest/qtree1
6. Perform a SnapVault restore from the secondary qtree to create the primary qtree on the
newly created flexvol:
src_filer> snapvault restore -S dst_filer:/vol/sv_dest/qtree1
/vol/source/qtree1
Restore from dst_filer:/vol/sv_dest/qtree1 to /vol/source/qtree1
started.
Monitor progress with the 'snapvault status' command.
Abort the restore with ^C.
Data restored to /vol/source/qtree1.
Made qtree /vol/source/qtree1 writable.
Restore to /vol/source/qtree1 completed successfully.
7. Verify the SnapVault status on the primary following the restore. It should show the
relationship in a state of "Broken-off" for the relationship from the restore.
src_filer> snapvault status
Snapvault primary is ON.
Source                         Destination                   State       Lag       Status
dst_filer:/vol/sv_dest/qtree1  src_filer:/vol/source/qtree1  Broken-off  01:36:39  Idle
8. Resync the SnapVault relationship for the new primary FlexVol on the secondary filer.
dst_filer> snapvault start -r -S src_filer:/vol/source/qtree1
/vol/sv_dest/qtree1
9. Once the resync has completed, the SnapVault relationship will be from
src_filer:/vol/source/qtree1 to dst_filer:/vol/sv_dest/qtree1. The SnapVault status on the
primary filer will show two relationships:
a. The SnapVault relationship for the SnapVault restore performed in Step 6.
b. The new SnapVault relationship to the primary volume.
src_filer> snapvault status
Snapvault primary is ON.
Source                         Destination                    State       Lag       Status
dst_filer:/vol/sv_dest/qtree1  src_filer:/vol/source/qtree1   Broken-off  01:42:21  Idle
src_filer:/vol/source/qtree1   dst_filer:/vol/sv_dest/qtree1  Source      00:01:40  Idle
10. Confirm the SnapVault status on the secondary filer shows only the relationship for the
new primary FlexVol:
dst_filer> snapvault status
Snapvault secondary is ON.
Source                        Destination                    State        Lag       Status
src_filer:/vol/source/qtree1  dst_filer:/vol/sv_dest/qtree1  Snapvaulted  00:06:54  Idle
11. Clean-up the Broken-off relationship for the restore by deleting the base snapshot on the
primary filer for this relationship.
a. Determine the base snapshot's name by locating it in the output of a "snapvault
status -l". Be sure to look for the relationship where the State is "Broken-off":
src_filer> snapvault status -l /vol/source/qtree1
Snapvault primary is ON.
Source:                  dst_filer:/vol/sv_dest/qtree1
Destination:             src_filer:/vol/source/qtree1
Status:                  Idle
Progress:
State:                   Broken-off
Lag:                     01:52:45
Mirror Timestamp:        Sun Oct 15 18:59:31 GMT 2006
Base Snapshot:           src_filer(0022579971)_flex_source_qtree1-dst.2
Current Transfer Type:
Current Transfer Error:
Contents:
Last Transfer Type:      Initialize
Last Transfer Size:      48 KB
Last Transfer Duration:  00:00:01
Last Transfer From:      dst_filer:/vol/sv_dest/qtree1
12. Confirm the primary filer only shows the SnapVault relationship for the new FlexVol:
src_filer> snapvault status
13. Since no changes were made to the volume name, the SnapVault snap sched does not
need to be updated.
14. Verify a successful manual SnapVault update from the secondary filer:
dst_filer> snapvault update /vol/sv_dest/qtree1
15. Re-enable protocols (CIFS, NFS, iSCSI, etc) on the primary filer.
Description
LUNs can be copied from one volume to another within Data ONTAP using ndmpcopy, dump
and restore, SnapMirror and Volume Copy. These tools are designed to handle the necessary
metadata which describes the LUN to Data ONTAP. LUNs must never be copied via NAS
protocols such as CIFS and NFS.
Procedure
Ndmpcopy, dump and restore can be used to copy a LUN from one volume to another volume
that is located on the same or a different filer. These tools can also be used to copy a LUN from
a traditional volume to a FlexVol or to copy a LUN from within a snapshot. Additional tools such
as SnapMirror and Volume Copy can also be used to copy LUNs with the limitations described
below.
Ndmpcopy utilizes dump and restore to perform the work of reading in the original LUN and
writing a copy to the destination, respectively. Before copying a LUN with ndmpcopy or dump,
it is important to create a consistent snapshot of the LUN or otherwise disconnect the host from
the LUN cleanly. This will allow all host data to be written or flushed to the LUN.
LUNs may only exist in the root of a volume or qtree. Copying a LUN to a subdirectory will
prevent Data ONTAP from recognizing the LUN and it will not appear in the output of 'lun
show'.
LUNs do not need to be copied under the following circumstances:
If the LUN is being moved within the same volume or is being renamed. To move or
rename a LUN, use the 'lun move' command, e.g. lun move <lun_path>
<to_lun_path>. Both paths must be in the same volume.
If the LUN is to be copied from a snapshot on the same volume. To copy a LUN from a
snapshot use the command 'lun clone'.
Using ndmpcopy
> ndmpcopy source_filer:/vol/my_data_luns/lun0
destination_filer:/vol/my_data_luns
Note: If you run into issues with inode warnings or internal errors, you might need to use the -f
switch, as it tells Data ONTAP you are transferring system files.
> ndmpcopy -f /vol/<volume>/<lun> /vol/<volume>
In this example, the full path of the LUN is specified for the source filer and only the destination
volume is specified on the destination filer. The destination location must be the volume or qtree
that the LUN will be restored to. If the destination volume does not exist, ndmpcopy will place the
LUN into a subdirectory of the filer's root volume.
The filers specified in the 'ndmpcopy' command can be the same filer or different filers, and the
volumes can either both be traditional volumes or FlexVols or a combination of traditional
volumes and FlexVols. Once the LUN is copied successfully, it will appear in the output of 'lun
show' on the destination filer.
For situations where the use of a network is not available for ndmpcopy, LUNs can be copied
between filers using the 'dump' and 'restore' commands.
Using dump and restore
The 'dump' and 'restore' commands can be used to facilitate copying a LUN from one filer to
another. In this two-step process, dump is used to write the LUN to a regular file as a dump
image. Dump can also write to tape; this procedure assumes a regular file is being used, but
the procedure is otherwise the same. The dump image can then be copied and transported to the
destination filer, where it is recovered using the restore command.
1. Use the 'dump' command to backup the LUN to a regular file:
> dump 0f /vol/somevol/backup_lun.dump /vol/data_luns/lun0
This command writes the backup data to a regular file, in this example called
"/vol/somevol/backup_lun.dump". This file can be copied from the filer via the NFS or CIFS
protocols, and then transported and copied onto the destination filer.
2. Use the 'restore' command to recover the LUN from the dump file.
Restore the full dump archive to the desired volume:
Or restore the specified LUN, lun0, from the dump archive to the desired volume:
These commands restore the LUN from the dump archive file to the desired location.
The first command restores the entire archive, which may be useful if the archive contains
multiple LUNs. The second command restores only the specified LUN, lun0, to the desired
location. Notice that the restore location, like the ndmpcopy destination, must be the root of a
volume or qtree.
Using SnapMirror
SnapMirror can be used to mirror entire volumes or qtrees. The mirrored volume or qtree will
contain an exact replica copy of the LUN from the original location. SnapMirror may only mirror
volumes of the same type. For example, FlexVols may only be mirrored to other FlexVols.
SnapMirror can also mirror Qtrees without a limitation on the volume type. Please refer to the
Data ONTAP Data Protection Online Backup and Recovery Guide for documentation on using
SnapMirror and for a complete list of limitations.
Using Volume Copy
Volume copy is useful for copying an entire volume within a single filer or to another filer. The
copied volume will contain exact copies of any LUNs contained within. Volume Copy may only
copy volumes of the same type. For example, FlexVols may only be copied to other FlexVols.
Please refer to the Data ONTAP Data Protection Online Backup and Recovery Guide for
documentation on using Volume Copy and for a complete list of limitations.
Why is the reported iSCSI LUN latency higher than the volume latency?
KB Doc ID 3014134 Version: 4.0 Published date: 08/06/2015
Answer
It is often observed that the latency measured for iSCSI (and FCP) is significantly higher than
that for the underlying volume, and the operation count at the volume level is higher than that
measured on the contained LUNs.
For example, consider a client issuing 256KB reads while latency and operation count are
measured for the volume and its contained LUN:
o Initiator MaxRecvDataSegmentLength=65536
o Target MaxRecvDataSegmentLength=65536
o 7-Mode iSCSI will always limit the burst and segment length to 64KB. This
might be further reduced by the initiator/client configuration.
o iSCSI latency is measured from when the first PDU of the command is fully
received until the last PDU of the response is sent to the output queue.
o If the data path between the client and storage is slow (for example, due to
congestion, low-bandwidth elements, packet loss, and so on), this latency
will be reflected in the latency of larger (typically > 64KB) operations.
o LUN Latency ~= VolumeLatency * ROUNDUP(OperationSize / 64KB) +
NetworkRoundTripTime * (ROUNDUP(OperationSize / SegmentLength) - 1)
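The approximation above can be evaluated directly; this sketch reproduces the shape of the test results shown below (assumed inputs: ~0.05 ms volume latency, a 100 ms injected round trip, and 64KB segments):

```python
import math

def lun_latency_ms(vol_latency_ms, op_size_kb, rtt_ms, segment_kb=64):
    """Sketch of the LUN latency formula above (times in ms, sizes in KB).

    Each 64KB slice incurs one volume operation; each extra segment beyond
    the first incurs one additional network round trip.
    """
    return (vol_latency_ms * math.ceil(op_size_kb / 64)
            + rtt_ms * (math.ceil(op_size_kb / segment_kb) - 1))

# 64KB op: no extra round trips; 128KB: 1 round trip; 256KB: 3 round trips.
for size_kb in (64, 128, 256):
    print(size_kb, lun_latency_ms(0.05, size_kb, 100))
```

With these inputs the 128KB and 256KB cases come out near 100 ms and 300 ms, matching the ~101 ms and ~302 ms controller LUN latencies measured in the tests below.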
In the example below, a Linux iSCSI client is used with network latency artificially added by
running the following command:
# tc qdisc add dev eth0 root netem delay 100ms
Initial test with 64KB read operations (host-measured latency = 103ms; controller LUN latency ==
volume latency ~ 0.06ms):
fas01*> stats show -r -n 1 -i 5 volume:demo2:read_latency
volume:demo2:read_ops lun:/vol/demo2/lun2-BLH0G?BQgK/T:avg_read_latency
lun:/vol/demo2/lun2-BLH0G?BQgK/T:read_ops
volume:demo2:read_latency:61.42us
volume:demo2:read_ops:9/s
lun:/vol/demo2/lun2-BLH0G?BQgK/T:avg_read_latency:0.04ms
lun:/vol/demo2/lun2-BLH0G?BQgK/T:read_ops:9/s
Test with 128KB read operations (host measured latency = 205ms, controller LUN latency of
101ms includes 1 network round trip):
fas01*> stats show -r -n 1 -i 5 volume:demo2:read_latency
volume:demo2:read_ops lun:/vol/demo2/lun2-BLH0G?BQgK/T:avg_read_latency
lun:/vol/demo2/lun2-BLH0G?BQgK/T:read_ops
volume:demo2:read_latency:44.71us
volume:demo2:read_ops:9/s
lun:/vol/demo2/lun2-BLH0G?BQgK/T:avg_read_latency:101.58ms
lun:/vol/demo2/lun2-BLH0G?BQgK/T:read_ops:4/s
Test with 256KB read operations (host measured latency = 408ms, controller LUN latency of
302ms includes 3 network round trips):
fas01*> stats show -r -n 1 -i 5 volume:demo2:read_latency
volume:demo2:read_ops lun:/vol/demo2/lun2-BLH0G?BQgK/T:avg_read_latency
lun:/vol/demo2/lun2-BLH0G?BQgK/T:read_ops
volume:demo2:read_latency:47.02us
volume:demo2:read_ops:10/s
lun:/vol/demo2/lun2-BLH0G?BQgK/T:avg_read_latency:302.62ms
lun:/vol/demo2/lun2-BLH0G?BQgK/T:read_ops:2/s
Most environments will include a mixture of operation sizes, and the effect of external latency
will vary depending on the mixture.
A good test to see if there are external factors impacting latency is to calculate the operation time
(latency * ops) at the volume and LUN level. If the LUN operation time is significantly higher
than the volume operation time and there are operations > 64KB, then it is likely that external
client or network factors are impacting performance.
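The operation-time test described above can be sketched with the figures from the 256KB example (a minimal illustration using the measured values; the factor-of-100 threshold is only for this demonstration, not a documented cutoff):

```python
def busy_ms_per_sec(latency_ms, ops_per_sec):
    """Operation time: total latency accumulated per second of wall clock."""
    return latency_ms * ops_per_sec

# Values from the 256KB read test above:
vol_busy = busy_ms_per_sec(0.04702, 10)  # volume: ~0.47 ms of work per second
lun_busy = busy_ms_per_sec(302.62, 2)    # LUN:   ~605 ms of work per second

# The LUN operation time vastly exceeds the volume operation time, which
# points to external (client/network) latency rather than the storage system.
print(lun_busy > 100 * vol_busy)  # True
```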
Another example: a 1MB FCP operation is segmented into 16 volume operations, so the latency
for a 1MB FCP operation includes 16 volume operations.
In short, iSCSI and FCP operations to a LUN are often split into multiple volume operations.
A-SIS Deduplication shows no changes in space saved on volume
Symptoms
Starting the A-SIS service on storage system volumes for the first time and issuing a df shows no
change: the volume is the same size as before A-SIS was started, and no space is shown as
saved.
df -h volname
Filesystem               total     used    avail  capacity  Mounted on
/vol/volname/           4053GB   3970GB     83GB       98%  /vol/volname/
/vol/volname/.snapshot     0MB   2033GB      0MB      ---%  /vol/volname/.snapshot
Cause
When A-SIS service is started for the first time on a volume, there might be little or no change in
the volume size. This can occur if there are snapshots created before starting the service.
Solution
As the snapshots roll over and are deleted, some change in space may be noticed.
Once the last snapshot created before the A-SIS service was started is deleted, users should
notice a significant amount of space savings.
How does user authentication work on a mixed qtree or volume?
Answer
In a mixed security qtree or volume, each individual file and directory can have either Windows
permissions or UNIX permissions, not both.
In a mixed security qtree, the style of permissions on a file or directory depends upon which
host, UNIX or Windows, last changed permissions. If a user accesses a file that has a different
security style, then the user must map to a user that has the appropriate security style.
For details on mapping users between Windows and UNIX, see the Security Troubleshooter.
Also, see the wcc man page.
To determine if Windows or UNIX permissions are on a particular file or directory, use
SecureShare Access.
How does WAFL and striping distribute data among disks?
Answer
In the example above, data will be 100% balanced when the new drive holds 0.75 GB of data.
This means 0.75 GB of data currently on the old drives must be changed and reallocated
to the new drive before the data distribution is balanced.
Note: The more disk drives you have, the smaller percentage of the data will need to be moved
to get the data to a balanced state.
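Assuming the elided example used three old disks holding 3 GB in total (disk counts and sizes here are illustrative assumptions consistent with the 0.75 GB figure above), the arithmetic can be sketched as:

```python
def data_to_move(total_gb, old_disks, new_disks=1):
    """GB that must be rewritten onto the new disk(s) for an even layout."""
    per_disk_after = total_gb / (old_disks + new_disks)
    return per_disk_after * new_disks

# 3 old disks, 3 GB total: each disk ends with 0.75 GB after adding one
# disk, so 0.75 GB must move to the new disk.
print(data_to_move(3.0, 3))          # 0.75
# With more disks, the fraction of total data that must move shrinks:
print(data_to_move(3.0, 11) / 3.0)   # ~0.083 (about 8% instead of 25%)
```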
You might have an archival system where no old data is ever deleted. In these instances, the
distribution would not even out much, but such a system is predominantly a read, so the write
performance is not much of an issue. Data distribution can be balanced while doing a full dump/
restore by copying the data around. When a file is copied and the original file is deleted, WAFL
is given a chance to even out the data distribution as it does the write allocation.
Description of write allocation:
The write allocation in the WAFL code keeps a Current Write Location (CWL) pointer for each
disk that indicates where the next write will occur. The CWL for each disk starts at the beginning
of the disks and advances to the end, filling in every unallocated slot. WAFL selects which disk
to use based on which CWL is behind the others, so the CWLs for all the disks stay close
together, which is why the parity disk does not have to seek. It is possible for one CWL to get ahead of the others
because WAFL writes successive blocks of a single file onto a single disk.
The end result is that during the first few passes through all the disks, the new disk will have a
lot of data written to it because it is completely empty. As the old data is removed and new data
is written, the data evens out among the disk drives.
Reallocation:
While WAFL attempts to evenly lay out the written data, it may be necessary over time to force a
reallocation of the data. The storage administrator should reference the System Administration
Guide for their specific release of Data ONTAP regarding instructions and caveats to consider
when running this command.
Is it better to create another volume or expand an existing volume?
Answer
A new volume will initially perform better than just adding disks and expanding a current
volume. Eventually, a large volume will perform better because it can usually take advantage of
all the spindles, but the data will get spread out over all the drives over time. For a better
explanation, see 3011896: How does Write Anywhere File Layout and striping distribute data
among disks?
Symptoms
Output of df -A and df shows difference between the amount of space used on the aggregate and
the actual size of the volume in the kbytes column of the df command
Cause
Space guarantee is disabled.
Solution
If all of the volumes in that aggregate are space guaranteed, the used column of the df -A for the
aggregate will reflect the total of the kbytes column of the df output. If some (or all) of the
volumes on that aggregate are not space guaranteed, follow these rules:
Type vol status to determine if the volume is space guaranteed or not.
For the guaranteed volumes, add the space for the volume and .snapshots in the kbytes
column from the output of the df command.
For the non-guaranteed volumes, add the space for the volume and .snapshots in the used
column. This number should be roughly the same as the used column for the aggregate in
the df -A output.
snapmirrored=off,
create_ucode=on,
convert_ucode=on,
maxdirsize=10485,
fs_size_fixed=off,
guarantee=volume, <----------------
svo_enable=off,
svo_checksum=off,
svo_allow_rman=off,
svo_reject_errors=off,
fractional_reserve=100
Containing aggregate: 'aggr0'
Space guaranteed
vol4        92274688 + 23068672 = 115343360   From kbytes column
vol1       297432176 + 37531776 = 334963952   From used column
vol2        22167932 +  2918424 =  25086356   From used column
vol0         4194304 +  1048576 =   5242880   From kbytes column
vol0_lon      599800 +   144076 =    743876   From used column
                                 ----------
Total                             481380424
aggr0 used                        483107712
The difference is caused by rounding of the values for each df -A and df output.
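The reconciliation rules above can be sketched as a short script. The figures come from the example; which column each volume contributes follows the guarantee state shown:

```python
# Reconcile df (per-volume) with df -A (aggregate used): guaranteed volumes
# contribute their kbytes column, non-guaranteed volumes their used column.
volumes = [
    # (name, volume_kb, snapshot_kb, column_used)
    ("vol4",      92274688, 23068672, "kbytes"),
    ("vol1",     297432176, 37531776, "used"),
    ("vol2",      22167932,  2918424, "used"),
    ("vol0",       4194304,  1048576, "kbytes"),
    ("vol0_lon",    599800,   144076, "used"),
]

total = sum(vol + snap for _, vol, snap, _ in volumes)
print(total)  # 481380424, roughly the 483107712 KB df -A reports as used
```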
Note: A volume containing a space reserved LUN will show the full size of the LUN in df, under
used space, whereas df -A and aggr show_space display only the blocks which have been
written to in the used column.
For example, LUN1.lun in volume /vol/lun/ is a 100GB space reserved LUN with 10GB of
written data. df will show that the volume has 100GB used space (due to space reservation), but
df -A and aggr show_space will show 10GB used (as the calculation is based on the number of
4K blocks which have been written).
Related Links:
Description
Remove disks from a volume
Can a volume be shrunk?
Reduce the number of data disks to increase number of hot spares
Reduce the RAID group size
Decrease the size of a volume
How do I shrink an aggregate?
How do I delete a RAID group from a volume/aggregate that has multiple RAID groups?
Procedure
An aggregate or traditional volume is comprised of one or more RAID groups. A RAID group
consists of one or more data disks, across which client data is striped and stored, plus one or two
parity disks.
When Data ONTAP's WAFL file system writes data to an aggregate/volume, the data is striped
across the data disks in the RAID group(s) and parity is calculated and written to the parity
disks. This allows for increased performance as all available disk spindles in the
aggregate/volume's RAID group(s) are used. If a disk was to be removed from a RAID group,
the data on that disk would be missing from the aggregate/volume.
For disk failures, the missing disk is reconstructed using parity. Data ONTAP will always
attempt to reconstruct data on failed/missing drives on a spare drive assuming the RAID group
contains sufficient disks to allow for reconstruction. RAID4 groups can lose 1 disk; RAID-DP
groups can lose up to 2 disks.
Since Data ONTAP uses all of the disks in a RAID group, the only way in which an
aggregate/traditional volume can be shrunk is by changing the RAID type from RAID-DP to
RAID4. This will cause the second parity drive to be returned to the Spares Pool.
CAUTION:
NetApp's Storage Best Practices recommend using RAID-DP for better data
protection. Therefore, it is not advisable to convert volumes to RAID4.
WARNING:
Consult the NetApp System Configuration Guide before converting a volume to RAID4 to
ensure that the maximum aggregate size is not exceeded.
Note that FlexVols do not have the same restriction. FlexVols can be shrunk provided the
new FlexVol size is equal to or greater than the amount of storage space used in the FlexVol. The
"vol size" command is used to change the size of the FlexVol. Refer to the
Data ONTAP Man pages for more information regarding using this command.
If the RAID group needs to be reduced by more than one drive or it is not possible to convert the
RAID group to RAID4, the only way to remove the disks is to create a new aggregate of the
desired size and migrate the data to this aggregate. The aggr and vol commands are used to
create and destroy volumes and aggregates.
The following methods can be used to migrate data to the new properly-sized aggregate and
associated FlexVols. These methods assume the filer has sufficient disks to create a new
aggregate of the proper size. Although these methods require downtime, the downtime can be
minimized by using incremental updates to replicate changed data at the time of the cutover to
the new aggregate/volume.
ndmpcopy
o The ndmpcopy command can be used to migrate data at a file level. This method
is restricted to one level 1 and one level 2 incremental backup following the level
0 baseline transfer.
o Once the data is transferred to the new volume, the original volume can be
destroyed and the new volume renamed to the original name.
o Example:
Destroy vol1.
SnapMirror
o Qtree SnapMirror can be used to migrate data between qtrees on the original
FlexVol and the newly built FlexVol. The source and destination volumes can be
different sizes.
o Volume SnapMirror can migrate data at the FlexVol level, but the destination
volume must be equal to or larger than the source volume.
o SnapMirror allows for unlimited incremental updates provided the relationship is
not broken off.
o Once the data is transferred to the new volume, break the SnapMirror, destroy the
original volume, and rename the new volume to the original name.
o Example:
Destroy vol1.
If the filer does not have sufficient disks to create a new aggregate, the following methods can be
used. These methods will require downtime during the work.
Backup the data to tape and then destroy the original volume.
1. Make a full tape backup of the volume.
2. Verify the backup and test a restore.
3. Disable protocol access to the volume/aggregate.
4. Destroy the volume/aggregate and recreate it with the desired number of disks.
Note that the disks will need to be zeroed before the new aggregate creation can
complete.
5. Restore the backup to the new volume.
6. Re-enable protocol access to the new volume.
Answer
All of the space disappeared on a LUN-containing volume after the first snapshot. The space
required for a LUN-containing volume with 100% fractional reserve follows the 2x LUN + delta
rule. This rule specifies that (for volumes with fractional_reserve at 100) the LUN-containing
volume must have up to an extra 100 percent of the space the mapped host expects to see
available for writes on that LUN, plus room for the snapshot rate of change. A typical Windows
host expects to be able to write to almost 100% of its available LUN at all times if the LUN is
used in active production, as opposed to archive. When portions of the space on the LUN
provisioned to the host get marked as read only by a snapshot, that space must still be provided
to the host for random metadata writes; hence the used fractional reserve can grow up to 100%.
For example, if a 5GB LUN has 2GB written to it when the snapshot is initially taken, the used
space in the containing volume will show as 7GB. This is caused by a 2GB reserve held for
OS overwrites of the 2GB of blocks locked by the snapshot. If a 1GB change occurs after the last
snapshot was taken on the LUN but used space stays the same, 3GB will now be reserved and
used space shows as 8GB. In summary, the system will continually compensate to ensure that
the OS has perceived writability to the full 5GB when writing to the LUN. This will even
occur when used space equals 4.9GB according to the host OS: with one snapshot
and no change, used space would show as 9.9GB; with 1GB of change and another base
snapshot, 10.9GB would show as used.
If 3Gig of change continually occurs on a 5Gig LUN ( 4.9 gigs used on the OS level) over the
course of a snapshot's lifetime, then minimum advisable space would be 10Gig + 3Gig = 13 Gig.
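The 2x LUN + delta arithmetic above can be sketched as a simple helper. This is a simplified model of fractional reserve accounting, not the exact WAFL calculation:

```python
def min_volume_size_gb(lun_gb, delta_gb, fractional_reserve=1.0):
    """Minimum advisable volume size: LUN size, plus overwrite reserve
    (fractional_reserve * LUN size), plus snapshot rate-of-change delta."""
    return lun_gb + lun_gb * fractional_reserve + delta_gb

# 5GB LUN with 3GB of change over a snapshot's lifetime at 100% reserve:
print(min_volume_size_gb(5, 3))  # 13.0, matching the 10GB + 3GB = 13GB figure
```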
This can be tested with the following process:
1. Created a volume with snap reserve of 0.
2. Create a LUN and right-click on it in My Computer to get its size.
3. Convert the LUN's size as well as the used and free space to KB.
Remember that snapinfo directories fill up with file data quickly. They will also change quickly
depending on the number of online backups required. Therefore the delta may be higher than
expected for the containing LUN.
Also, fractional reserve can be tweaked to a lower amount, but in the event of host overwrites
hitting read-only blocks, LUNs can be abruptly taken offline.
For more information, see the chapter entitled "Understanding space reservation for volumes and
LUNs" in the Block Access Admin Guide for the appropriate release of Data ONTAP, or
see Thin Provisioning in a NetApp SAN or IP SAN Enterprise Environment.
Related links:
Answer
When determining how many disks to add to an existing aggregate or traditional volume
(TradVol), the following must be considered:
When disks are added to an existing aggregate/TradVol, the storage system will attempt to keep
the amount of data stored on each disk about equal.
For example, if you have four 20GB data disks in a TradVol/aggregate containing 60 GB of data,
each disk will hold approximately 15 GB. The total space in the volume is 80 GB. The used
space is 60GB. Each data disk contains 15GB of data.
When you add a new disk to that aggregate/TradVol, the storage system will write new data to
this disk until it matches the pre-existing disks, which contain 15 GB each.
In the previous example, after adding one 20GB disk to the aggregate/TradVol, the total size will
be 100GB. The used space is still 60GB. The original four disks contain 15GB used space each.
The newly added disk has 0GB used. Writes to the aggregate/TradVol will go to the newly added
disk until its used space reaches 15GB. Once all five data disks have 15GB used, then the data
will be striped across all five disks.
For best performance, it is advisable to add a new RAID group of equal size to existing RAID
groups. If a new RAID group cannot be added, then at minimum, three or more disks should be
added at the same time to an existing RAID group. This allows the storage system to write new
data across multiple disks.
For example, if you have four 20GB data disks in an aggregate/TradVol containing 60 GB of
data, each disk will hold approximately 15 GB. The total space in the volume is 80 GB. The used
space is 60GB. Each data disk contains 15GB of data.
When three new disks are added, the total space in the aggregate/TradVol is 140GB. The used
space is 60GB. The original four disks contain 15GB each of data. The three new disks contain
0GB of data. When new data is written to the aggregate/TradVol, it will be striped evenly across
the three new disks until each disk contains 15GB of data. Once that occurs, new data will be
striped evenly across all seven data disks.
By adding a minimum of three disks at a time to an aggregate/TradVol, the throughput to disk is
increased by providing more disks to write to at a given time.
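The fill-to-match behavior described above can be simulated with a greedy placement loop. This is a conceptual sketch of the policy, not actual WAFL code:

```python
def place_writes(disk_used_gb, write_gb, chunk_gb=1.0):
    """Send each chunk of new data to the least-full disk, approximating
    the storage system's preference for writing to emptier disks first."""
    disks = [float(d) for d in disk_used_gb]
    remaining = float(write_gb)
    while remaining > 1e-9:
        step = min(chunk_gb, remaining)
        disks[disks.index(min(disks))] += step  # least-full disk wins
        remaining -= step
    return disks

# Four disks at 15GB each plus three new empty disks; write 45GB of new data:
result = place_writes([15, 15, 15, 15, 0, 0, 0], 45)
print(result)  # all seven disks converge on 15 GB, after which striping is even
```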
See the example below. In clustered Data ONTAP, enter 'node run -node <node name>' to go
to the node shell; these commands will work there.
Spare disks

RAID Disk Device    HA SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)     Phys (MB/blks)
--------- ------    -- ----- --- ---- ---- ----- ---- ------------------ ------------------
Spare disks for zoned and advanced_zoned checksum
spare     0a.01.0L2 0a 1     0   SA:A      MSATA 7200 2855865/5848812032 2861588/5860533168
spare     0a.01.2L2 0a 1     2   SA:A      MSATA 7200 2855865/5848812032 2861588/5860533168
spare     0a.01.4L1 0a 1     4   SA:A      MSATA 7200 2855865/5848812032 2861588/5860533168
spare     0a.01.4L2 0a 1     4   SA:A      MSATA 7200 2855865/5848812032 2861588/5860533168

Spare disks

RAID Disk Device    HA SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)     Phys (MB/blks)
--------- ------    -- ----- --- ---- ---- ----- ---- ------------------ ------------------
Spare disks for zoned and advanced_zoned checksum
spare     0a.01.4L2 0a 1     4   SA:A      MSATA 7200 2855865/5848812032 2861588/5860533168
Additionally, for storage systems running Data ONTAP 7.0 and later, the reallocate command
can be run following the addition of disks. Reallocate will optimize the layout of data on the
volume. If the aggregate contains FlexVol volumes with snapshot copies, then a traditional
reallocation should be avoided, as it will require additional space to maintain the Snapshot
copies.
For additional information, refer to the following:
1010989: How to add larger capacity disks to a RAID group that contains smaller
capacity disks
How to check the reserved space of a volume and display snapshot statistics
Description
The df command can be used to display snapshot statistics. To provide information about
snapshot disk utilization, the df command treats a snapshot as a separate partition from the active
file system.
To check the reserved space of a volume, type:
filer> df -r
Filesystem            kbytes     used     avail      reserved  Mounted on
/vol/vol0/            845518696  2512156  842730768  275772    /vol/vol0/
/vol/vol0/.snapshot   0          2265032  0          0         /vol/vol0/.snapshot
The reserved space is displayed in the 5th column. This example demonstrates that the snapshot
reserve is full. If snapshot creation fails, try deleting the oldest snapshot shown in the output of
snap list using the snap delete command.
When using VLDs, the following error message may appear indicating snapshot creation failed:
A snapshot of the specified Virtualized Local Disk Device can not be taken at
this time because there is not enough space in the snapshot reserve. (Error
Code: 0xc004021c) Failed to take snapshot. Failed to backup storage group.
CLEANUP FAILED BACKUP
Also check df output for the volume in question. If usage is higher than 33%, it is most likely that
disks need to be added to the volume, as the VLD has grown too large for the existing disk space.
Related article:
3010415: What does the reserved space column in df -r output signify?
How to disable the automatic Network File System (NFS) export created when a
volume is created
Description
When a volume is created, an entry is automatically added to the /etc/exports file.
Procedure
Disable the auto-update (which is ON by default) with: options nfs.export.auto-update off
Description
When a volume is created, the 'Everyone' group is added by default to the security permissions.
As a result, these permissions get propagated down to sub-folders as they are created. This
causes inconvenience when modifying security permissions on folders, as each folder must be
maintained. It is best practice to remove the 'Everyone' group as per
Microsoft's recommendation.
Procedure
1. Use Windows to manage the file-level permissions on the volume level.
1. Create a volume level share. You may choose to make it a hidden admin share.
2. Access that share via Windows Explorer by entering \\filername into the address
field. A populated list of Windows shares will be displayed, in which you can
right-click a share and manage its security permissions.
3. From the security tab, select 'advanced' and clear the 'inherit' check box.
4. Add the desired users and remove everyone. Click apply. This process takes a
little more time than secedit.
2. There is a tool called 'secedit' that allows you to adjust the DACLS on a volume and
remove the 'Everyone' attribute that is propagating down to your subdirectories.
1. To view this DACL, run the following command on your filer:
fsecurity show /vol/volname
2. The DACL is what sets the top level attribute for the volume. To adjust this,
download secedit. See readme file.
3. Open the secedit tool.
4. Click add. Locate the field for entering the path for the volume. Enter your
volume path:
In this case, the domain administrator and SnapDrive service account was
added to the list:
The -c will check the file without applying it. If it succeeds, run again without the
-c:
filer> fsecurity apply /etc/security_test.conf
Now, when creating folders under the share, the following appears in properties:
This provides global coverage, but also exposes you to issues in an environment where
you are using NFS and CIFS to access the volumes. To prevent the "permissions
clobber," see 1011133.
Mixed mode is generally not recommended, but there are no issues when using these
volumes for CIFS access.
KB 54342 has been replaced with this KB.
Answer
Although it is always recommended to set the volume language before writing data into the
volume, there might be situations where there is a need to change the volume language after
writing data. For example, on encountering BUG 133965.
Changing the volume language after data has been written might have some effects if it falls into
any of the categories below:
1. If the volume contains only replicas of the Windows OSSV data, then there should be no
cause for concern.
2. If ALL of the following conditions apply, then there is no workaround except re-initializing the SnapVault or qtree SnapMirror relationships when they fail:
a. The volume contains replica qtrees of non-unicode sources, that is:
Storage system qtrees which were not accessed by Common Internet File
System (CIFS) protocol and where the volume options create_ucode and
convert_ucode are both OFF.
b. The volume has create_ucode turned on, and the secondary does not.
c. The primary data has non-ASCII filenames with multiple hardlinks in the same
directory (not same directory tree, but same directory).
For replica data not falling into either of the above categories:
NFS access on the secondary volume might be impaired (names look odd, or you can only see a
Network File System (NFS) alternate name like :8U10000) until a directory operation happens
on the primary and a successful update operation completes.
To accelerate recovery in this case, rename each non-ASCII filename on the primary. Ideally,
rename each file into another directory, and then rename it back to its original position.
SnapVault/SnapMirror will then update correctly.
For NON-replica data:
1. If the volume's create_ucode and convert_ucode options are both OFF, and your
data is accessed only via NFS (NEVER by CIFS), you have no worries.
2. If either create_ucode or convert_ucode option is set on the volume, or if the NFS data
are accessed by CIFS, you may experience some annoyance when accessing via NFS, as
described above (funny-looking name or NFS alternate name). But if you rename it you
should be fine.
3. If you have files with characters beyond 0x7f that are in non-Unicode directories, you
will have problems accessing them after the switch. If you are sure that no such files
exist, everything should be OK.
For files that are in Unicode directories, the Unicode name is definitive, and the issue is that
those names are translated based on the character set you specify. So, if the client is configured
to accept UTF8 names, then everything should work.
Note: A reboot is required for the new language mappings to take effect (no reboot is required
when creating a new volume with the correct language).
Answer
Deduplication has a maximum flexible volume size limit. If deduplication is to be used on a
volume, that volume must be within the size limit. If an attempt is made to enable deduplication
on a volume that is larger than the maximum volume size, it fails with an appropriate error
message. This limit varies based on the platform and Data ONTAP version. This is an important
consideration when flexible volumes are moved to a different platform with a smaller maximum
flexible volume size.
The deduplication maximum flexible volume size limits (including any snap reserve space) for
the different NetApp storage system platforms are listed below.
Note:
There is a limitation on the size of the volume while enabling dedupe in FAS2050.
For FAS2050, the maximum volume size for enabling dedupe is 1TB from Data ONTAP 7.2.5.1
to 7.3.0, and it increased to 2TB in Data ONTAP 7.3.1 and later 7.3.x versions. In this
scenario, if you need to enable dedupe on, for example, a 3TB volume, divide the 3TB volume
into 2 or more smaller volumes and then enable dedupe on them.
In Data ONTAP 8.1, volume size limitation on dedupe is removed, but FAS2050 is not
supported.
For clustered Data ONTAP, the minimum requirement is ONTAP 8.1, and, as with 7-Mode,
there are no size limitations in that and all later versions.
Deduplication also has a maximum shared data limit, which varies by platform
type. Once this limit is reached, any additional new data written to the volume is not
deduplicated, but writes to the volume continue to work successfully.
Maximum Total Data Limit For Deduplication
Both the maximum supported volume sizes and the maximum shared data limit for deduplication
limit the maximum amount of data that can be stored within a flexible volume using
deduplication. The sum of the two becomes the maximum total (logical data) size limit for a
deduplicated volume.
Table 2 - Maximum total data limit in a deduplicated volume
To summarize, when using Data ONTAP 8.0.1 where the max volume sizes for all platforms is
16TB, the maximum total data limit for any storage system is:
Max Total Data Limit = [(MaxSupportedVolSize) + (MaxSharedDataLimit)]
= [ (16TB) + (16TB) ]
= 32TB
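The calculation above can be expressed directly, using the values from the Data ONTAP 8.0.1 example:

```python
def max_total_data_tb(max_supported_vol_size_tb, max_shared_data_limit_tb):
    """Logical-data ceiling for a deduplicated flexible volume: the sum of
    the maximum supported volume size and the maximum shared data limit."""
    return max_supported_vol_size_tb + max_shared_data_limit_tb

print(max_total_data_tb(16, 16))  # 32
```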
Description
Procedure
The following is the proper method for using SNMP on the filer with respect to disk utilization:
1. Enable SNMP
filer> options snmp.enable on
2. Initialize SNMP
filer> snmp init 1
(This is the time interval in seconds at which you would like the trap to be polled)
3. Set a trap on volume capacity
filer> snmp traps dfPerCentKBytesCapacity.priority critical
A trap is now set to issue a critical alert when the volume is at 96% usage.
There are more options you can specify, but this should give you a good start. For more
information on configuring SNMP, see your respective Network Management Guide.
What is an over-committed aggregate?
Answer
After creating 100GB volume "test" with no guarantee and 20GB of data:
filer> df -Ah
Why do I see space available on my volume, but my filer tells me I don't have any space left
on my device?
An aggregate becomes over-committed by creating a situation where the volume allocation
exceeds the aggregate allocation.
If one creates an aggregate of 500GB, then they are limited to 500GB of free space (after WAFL
overhead).
If volume guarantees are on, then you could create five 100GB volumes and the aggregate would
show 100% space used in df -A. However, if volume guarantees are disabled, you could create as
many 100GB volumes as you wanted, and the aggregate would only see the data inside of the
volumes as taken. When this happens, the volumes will fill over time as they are used, and once
they reach a total of 500GB used, the aggregate will show as full and no more writes can take
place on that aggregate, even if the individual volumes have not been filled.
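The over-commitment scenario above can be checked with a small helper; the sizes are hypothetical, matching the 500GB example:

```python
def overcommit_ratio(aggr_usable_gb, volume_sizes_gb):
    """Ratio above 1.0 means the volumes promise more space than exists."""
    return sum(volume_sizes_gb) / aggr_usable_gb

# Five 100GB guaranteed volumes on a 500GB aggregate: fully committed.
print(overcommit_ratio(500, [100] * 5))  # 1.0
# Eight 100GB no-guarantee volumes: over-committed by 1.6x.
print(overcommit_ratio(500, [100] * 8))  # 1.6
```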
Why do I need volume guarantees enabled?
Volume guarantees need to be enabled in a majority of cases to avoid a situation where one can
no longer write to an aggregate due to lack of space. If volume guarantees are on, the space
usage can be monitored on a per-volume basis, and there is an accurate representation of what
you want to allocate versus what you are using. You are avoiding a situation of not knowing
when you are running out of space by guaranteeing you have space available.
How do I find out how much space I am actually using in my aggregate?
Df and df -A, when used together, can help illustrate how much space is actually on an aggregate
versus how much the volumes are using.
However, these commands can be misinterpreted and, occasionally, inaccurate.
The best way to show how much space is being used versus allocated is "aggr show_space".
This command will show the accurate amount of space actually being used, regardless of
guarantees.
aggr show_space with volume guarantee on for "test":
filer> aggr show_space aggr1 -h
Aggregate 'aggr1'
    Total space  WAFL reserve  Snap reserve  Usable space  BSR NVLOG
          825GB          82GB          37GB         705GB     1180MB

Volume        Allocated      Used  Guarantee
...                50GB     214MB     volume
test              100GB*    816KB     volume

Aggregate     Allocated      Used      Avail
Total space       150GB     215MB      554GB
Snap reserve       37GB     133MB       37GB
WAFL reserve       82GB    1207MB       81GB
*Note how the allocation of the volume "test" greatly differs from the "used".
aggr show_space with volume guarantee disabled for "test":
filer> aggr show_space aggr1 -h
Aggregate 'aggr1'
    Total space  WAFL reserve  Snap reserve  Usable space  BSR NVLOG
          825GB          82GB          37GB         705GB     1180MB

Volume        Allocated      Used  Guarantee
...                50GB     214MB     volume
test              868KB*    868KB       none

Aggregate     Allocated      Used      Avail
Total space        50GB     215MB      654GB
Snap reserve       37GB     133MB       37GB
WAFL reserve       82GB    1207MB       81GB
*Notice how "test" shows only 868KB used. This is because the "20GB" inside the volume is
actually a LUN with space reservations turned on but no data inside of it. Additionally, note how
the space allocated matches the space used. This is how the filer sees the space in a volume with
no guarantee versus one with a guarantee enabled.
Related links:
2010570 - Discrepancies between aggregate and volume when using df command
3011856 - What are the space requirements for LUNs with volumes that have 100% fractional
reserve?
2011527 - Volume guarantee is disabled after reboot or offline / online volumes
Answer
Space used in the aggregate is more than the sum of all flexible volumes, and df -Ar is showing
reserved space for the aggregate.
An aggregate may be showing reserved space for the following two reasons:
1. Volume guarantee is set to file and the volume contains files with space reservation,
e.g., Logical Unit Numbers (LUNs) or databases with file reservation turned on. To ensure
that the aggregate always has enough free blocks to accommodate the changes within that
reserved file, it reserves space within the aggregate that may not be used by other flexible
volumes.
2. The aggregate's snap autodelete option is turned off. This means that the aggregate
snapshots may grow endlessly into the aggregate's user space, and a volume that has
guarantee set to volume could run out of space without knowing about the low space
within the aggregate. As a result, the aggregate reserves the used space of all flexible
volumes.
Example:
Filer> aggr options aggr0
root, diskroot, nosnap=off, raidtype=raid4, raidsize=8,
ignore_inconsistent=off, snapmirrored=off, resyncsnaptime=60,
fs_size_fixed=off, snapshot_autodelete=off, lost_write_protect=on
Filer> df -Ar aggr0
Aggregate         kbytes     used      avail    reserved
aggr0             62649908   53934328  8715580  1595016
aggr0/.snapshot   0          2204      0        0
Answer
What should I expect if the aggregate is showing 100% full?
In Data ONTAP 7G and Data ONTAP 8 7-Mode, df -A shows that an aggregate is 100% full.
For example:
storage1> df -A
Aggregate         kilobytes   used        available  capacity
aggr0             5238716824  5231747608  6969216    100%
aggr0/.snapshot   275721936   60389932    215332004  22%
In Clustered Data ONTAP, storage aggregate show will display the space usage of the
aggregates on the system. For example:
cluster1::> storage aggregate show
Aggregate  Size    Available  Used%  State   #Vols  Nodes  RAID Status
---------  ------  ---------  -----  ------  -----  -----  -----------
aggr0      6.21TB  1.78TB     71%    online  49     node0  raid_dp
aggr1      6.65TB  6.42TB     3%     online  4      node1  raid_dp
aggr2      1.77TB  1.63TB     8%     online  1      node2  raid_dp
aggr3      1.77TB  1.73TB     2%     online  2      node3  raid_dp
An aggregate showing 100% used space in df -A might not actually be using 100% of the space.
If space reservations are used on the FlexVol volumes or LUNs, space will be marked reserved
and thus be calculated as used space even though the blocks on disk are actually free space. An
aggregate pools all the blocks that are not currently holding data (including free space in volume-guaranteed FlexVol volumes, unused Snapshot reserve space, and unused overwrite reserve space
for LUNs) into an internal pool that it hands out to the FlexVol volumes on demand.
If all the FlexVol volumes start to fill up at once, that might cause a problem depending
on the workload.
If they do not all fill up at once, this will not cause a problem. There is nothing wrong
with running a FlexVol volume at 100% full as long as there is still free space on the
underlying aggregate.
In Data ONTAP 7G and Data ONTAP 8 7-Mode, Snapshot usage on aggregates is limited to 5%.
In the example above:
Snapshot usage is 22%, meaning 22% of the 5% reserve is used.
Snapshot space is 275721936 KB, which is 5% of the total, and of that, 60389932 KB
(22%) is used.
o 5% is the default value, and it can be changed with snap reserve -A.
The actual space used by aggregate Snapshot copies and the rate of change of a specific system
can be monitored and used to determine the best reserve space for the specific customer
environment. Also, in the above example, snap sched -A is set to the default (0 1 4@9,14,19).
The Snapshot usage of 22% is contributing to the fullness of the aggregate.
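The arithmetic behind these percentages can be checked directly against the df -A numbers from the example above. A minimal Python sketch (the values are taken from that output; the 5% and 22% figures are rounded):

```python
# Values from the example 'df -A' output (in KB).
data_space_kb = 5238716824    # aggregate line: space usable for data
snap_reserve_kb = 275721936   # .snapshot line: aggregate Snapshot reserve
snap_used_kb = 60389932       # .snapshot line: space used by Snapshot copies

# The Snapshot reserve is carved out of the total aggregate size, so the
# total is the data space plus the reserve.
total_kb = data_space_kb + snap_reserve_kb

reserve_fraction = snap_reserve_kb / total_kb        # ~0.05 (the 5% default)
snap_used_fraction = snap_used_kb / snap_reserve_kb  # ~0.22 (the 22% shown by df)

print(f"reserve: {reserve_fraction:.1%}, snapshot usage: {snap_used_fraction:.0%}")
# prints: reserve: 5.0%, snapshot usage: 22%
```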
In Clustered Data ONTAP, the Snapshot reserve on aggregates can be viewed using storage
aggregate show -percent-snapshot-space . The percent can be changed using storage
aggregate modify -percent-snapshot-space percent .
If and when the Snapshot used space grows enough to fill the allotted space (5%), the Snapshot
copies are deleted automatically to stop them from growing beyond the aggregate Snapshot
reserve. This aggregate-level behavior is different from the FlexVol volume-level Snapshot
behavior. This happens automatically for aggregates only, not for traditional volumes or FlexVol
volumes.
If an aggregate becomes full due to actual data consuming all available blocks, the FlexVol
volumes hosted on the aggregate will also show as full. For FlexVol volumes used by
NAS protocols such as CIFS and NFS, the clients will receive a disk full error when attempting
to write to the FlexVol volume. For FlexVol volumes containing thin-provisioned LUNs and used by
SAN protocols such as iSCSI and FCP, the LUN will be taken offline when it reaches full
capacity.
Data ONTAP administrators have several options when managing available storage space:
Depending on how future storage needs are projected, quotas may be set up within Data
ONTAP for the users to manage unexpected storage usage.
Snapshot space usage should be monitored to ensure it is not overrunning the Snapshot
reserve as this would reduce the amount of writable space available to users in the
FlexVol volume.
Data ONTAP 7.1 and later contain FlexVol volume auto-grow and free space preservation
features that allow the FlexVol volume to grow automatically based on storage needs,
reducing the chance of the volume running out of space. When using this feature, the
FlexVol volume storage usage should be monitored against the space available in the
underlying aggregate.
o For more information on space management features in Data ONTAP 7G and
Data ONTAP 8 7-Mode, reference the Data ONTAP 8.2 Storage Management
Guide for 7-Mode.
o For Clustered Data ONTAP, more information can be found in the Clustered Data
ONTAP 8.2 Logical Storage Management Guide.
Deduplication may also be used to reduce the amount of space used in a FlexVol volume.
Deduplication works at the block level to eliminate duplicate data blocks.
o The Efficient IT Calculator can be used to estimate savings using deduplication.
o More information on deduplication in Clustered Data ONTAP can be found in the
Clustered Data ONTAP 8.2 Logical Storage Management Guide.
o For Data ONTAP 7G and Data ONTAP 8 7-Mode, reference the Data ONTAP 8.2
Storage Efficiency Guide for 7-Mode.
Data ONTAP 8.0.1 and later support data compression in a FlexVol volume as a way to
increase storage efficiency by enabling more data to be stored using less space.
Compression can be configured as in-line or post-process. In-line compression occurs as
the data is being written. Post-process compression runs as a low-priority background
process on data already written to disk.
o More information on data compression in Clustered Data ONTAP can be found in
the Clustered Data ONTAP 8.2 Logical Storage Management Guide.
o For Data ONTAP 7G and Data ONTAP 8 7-Mode, reference the Data ONTAP 8.2
Storage Efficiency Guide for 7-Mode.
The LUN option space_alloc can be used to control whether the LUN goes offline
when it reaches 100% utilization.
o More information on this LUN option can be found in the Data ONTAP 8.2
SAN Administration Guide for 7-Mode on page 18.
Answer
1. If the Data ONTAP version is below 7.0RC3, please do not copy an aggregate containing
the root volume. If an aggregate containing the root volume is copied, the copy becomes
the root volume. This change takes effect when the filer is rebooted.
For example, if the aggr copy command is used to copy aggr0, or if there is a
SyncMirror relationship with aggr0 and the mirror is broken, running vol status will
show that vol0 is still marked as diskroot, but vol0(1), the copy of vol0, is actually now
the root volume. Booting from vol0(1) could result in data corruption.
2. If the Data ONTAP version is 7.0RC3 or above (excluding GX), the aggregates
containing the root volume can be copied.
Answer
Understanding how creating an aggregate creates RAID groups
RAID groups are created when an aggregate is created, or when new disks are added to the
aggregate, depending on how many disks are specified relative to the RAID group size. The size
of the RAID group is determined from the options specified when creating the aggregate.
Definitions:
Syntax for creating an aggregate:
aggr create <aggr-name>
[-f] [-l <language-code>] [-L [compliance | enterprise]]
[-m] [-n] [-r <raid-group-size>] [-R <rpm>]
[-T { ATA | EATA | FCAL | LUN | SAS | SATA | SCSI | XATA | XSAS } ]
[-t { raid4 | raid_dp } ] [-v] <disk-list>
Flexible Volume (FlexVol) - The total amount of disk space to be used. Remember that a
FlexVol volume is created inside an aggregate.
RAID Group Size - The maximum number of disks allowed before a new RAID group is
created.
Examples:
Suppose an aggregate is created with the following commands (note that the -r option specifies
the maximum number of disks per RAID group):
aggr create aggr1 -r 5 15
This command will create an aggregate with 15 disks and 3 separate RAID groups - each with 5
disks.
Now suppose the following is entered:
aggr create aggr1 -r 16 5
This command will create an aggregate with 5 disks and 1 RAID group of 5 disks. As disks are
added to the aggregate, they will automatically be added to the RAID group until it contains 16
disks. If the aggregate grows to 17 disks, a new RAID group will be created.
And finally, suppose the following is entered:
aggr create aggr1 -r 16 3
In this case, an aggregate with 3 disks and a single RAID group of 3 disks will be created.
However, since the default RAID type is dual parity, two disks in this RAID group will be used
for parity. Therefore, the aggregate will have at most one disk available for data.
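The fill pattern in the examples above can be sketched in Python. This is an illustrative model only, ignoring parity placement and any group-balancing heuristics Data ONTAP may apply; raid_groups is a hypothetical helper, not an ONTAP command:

```python
def raid_groups(total_disks: int, group_size: int) -> list[int]:
    """Return the disk count of each RAID group, filling each group to
    group_size before a new one is created (simplified model)."""
    return [min(group_size, total_disks - start)
            for start in range(0, total_disks, group_size)]

print(raid_groups(15, 5))   # 'aggr create aggr1 -r 5 15'  -> [5, 5, 5]
print(raid_groups(5, 16))   # 'aggr create aggr1 -r 16 5'  -> [5]
print(raid_groups(17, 16))  # growing past 16 disks        -> [16, 1]
```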
For more information, please see the Storage Management Guide for your version of Data
ONTAP.
Description
How do you set a different flexible volume to be root if that volume exists on a different
aggregate from the current root volume?
Procedure
Data ONTAP allows the end user to define a new root volume on a different aggregate.
Note: This procedure is only valid for Data ONTAP 7-Mode.
To do so is a two-step process.
Note: The steps below will not work while in Takeover.
1. From maintenance mode, set the 'root' flag to the aggregate that holds the new volume
that we want to make root. In the example below, we'll be making aggr1/vol1 our new
root volume.
=========
(1) Normal boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Initialize all disks.
(4a) Same as option 4, but create a flexible root volume.
(5) Maintenance mode boot.
Selection (1-5)? 5
*> aggr options aggr1 root
Aggregate 'aggr1' will become root at the next boot.
*>
=========
2. Halt the storage system and reboot it to the 1-5 menu. At the 1-5 menu, issue the
vol_pick_root command to enqueue the pick_root process on the new root volume.
Note: The command "vol_pick_root" should only be run under the instruction of
NetApp Technical Support, as it can alter file systems and may result in data loss if used
incorrectly.
=========
(1) Normal boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Initialize all disks.
(4a) Same as option 4, but create a flexible root volume.
(5) Maintenance mode boot.
Selection (1-5)? vol_pick_root vol1 aggr1
vol_pick_root: successfully enqueued
Please choose one of the following:
(1) Normal boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Initialize all disks.
The system will now boot. However, it will use vol1 in aggr1 as the new root volume. Unless the
contents of /etc were previously copied to vol1, Data ONTAP will begin the setup process,
prompting for a hostname, IP address, gateway, etc.
License data will NOT be affected.
To restore the original volume (i.e., vol0) as the root volume, simply issue 'vol options vol0 root'
at the command line, followed by a reboot. Any changes made to the etc directory on vol1 will
NOT be reflected on vol0 (shares, rc, Data ONTAP changes).
Keep in mind that when booting to a new root volume, the system files will likely not be on the
new root volume (unless they were pre-staged). Either reinstall them via the NOW site or run
without them. If opting for the latter, features that require a complete install (HTTP, Java,
NDMP, etc.) will not function until those system files are installed.
Answer
Yes, it is possible to rename SyncMirror aggregates without any disruption in 'user access' or
SyncMirror.
Example: CLI output of a SyncMirror aggregate rename
> aggr status aggr06
Aggr State Status Options
aggr06 online raid4, aggr
mirrored
32-bit
Volumes: Test01
In the example above, since there is no change in the volume name, there would not be
any impact on the clients. Also, as seen in the aggr status output after the rename, there is no
impact on SyncMirror either.
Description
This article explains how to move an aggregate from one controller in an HA pair to its partner
on a system using software disk ownership. This procedure only applies to 7-Mode systems.
Procedure
For reference: FILER1 owns the disk initially. FILER2 is where the
disks/volume are being moved to.
WARNING:
Before starting this procedure, confirm that there are no aggregates or FlexVols on FILER2
that have the same name as the original aggregate/FlexVols or traditional volume being
moved. Failure to do so will result in the relocated volume(s) with conflicting names being
appended with a (1) instead of the original name.
Note:
This procedure is only supported under the following conditions:
If disks are moved outside the HA pair, the shelf MUST NOT be moved.
This procedure should not be mistaken for a way to move a disk or shelf to another filer. If the
shelf needs to be moved outside of the HA pair, downtime is required.
1. Start by taking the aggregate offline from FILER1.
Example:
For a traditional volume:
FILER1>aggr offline <volname>
Note:
It will be necessary to reconfigure any Common Internet File System (CIFS)
shares, Network File System (NFS) exports, or configure the appropriate igroups on the
partner for the relocated volume(s) before clients can access this data.
Answer
Is it possible to use the aggregate level snapshot to recreate or restore an individual volume,
without overwriting other volumes on the aggregate?
Aggregate level snapshots can be used to revert the entire aggregate, thus reverting all flexible
volumes. However, it is not possible to restore an individual volume. The command 'aggr copy
start -S' can be used to make a copy of an aggregate and restore the volume from there.
See article 3011218 - FlexVol Volumes FAQ
For more information about the 'aggr copy' command, consult the Manual Page Reference for
your version of Data ONTAP, available on the NetApp Support site.
Suppose you have an aggregate 'aggrsrc' and you want to restore the FlexVol volume 'vol1' from the
aggregate snapshot 'monkey'. You can use the following procedure to make a copy of aggrsrc,
and then get the volume from the snapshot:
1. Create and restrict the target aggregate.
ata3050-rtp> aggr create aggrdst -t raid4 2
ata3050-rtp> aggr restrict aggrdst
Answer
This article covers the following topics:
Disk attributes used during spare selection
Spare selection for new aggregate creation
Spare selection for disk addition to an existing aggregate
Spare selection for replacement of a failed disk
Spare selection with DS4486 shelves
Spare selection parameters and options
Spare selection with the '-disklist' option
Examples
Data ONTAP uses the following disk attributes during spare selection:
Disk type
RPM
Checksum type
Disk size
Pool
Pre-zeroed status
Disk type
Data ONTAP associates a disk type with every disk in the system, based on the disk technology
and connectivity type. The disk types used by Data ONTAP are:
1. BSAS - High capacity bridged SATA disks with additional hardware to enable them to be
plugged into a SAS shelf
2. SAS - Serial Attached SCSI disks in matching shelves
3. FSAS - High-capacity (Fat) Serial Attached SCSI disks
4. SATA - Serial ATA disks in SAS shelves
5. MSATA - SATA disks in DS4486 multi-carrier disk shelves
6. SSD - Solid State disks
7. ATA - ATA disks with either IDE or serial ATA interface in shelves connected in FC-AL
(Fibre Channel Arbitrated Loop)
8. FCAL - FC disks in shelves connected in FC-AL
9. LUN - A logical storage device backed by third-party storage and used by Data ONTAP
as a disk
Disk type mixing options
Data ONTAP provides a configuration option, raid.disktype.enable, that determines whether
or not mixing certain disk types in the same aggregate is allowed. If this option is set to true,
separation of disks by disk type is strictly enforced, and only disks of a single disk type are
allowed to be part of an aggregate. If the option is set to false, Data ONTAP forms the
following groups of disks, and considers all disks in a group equal during spare selection:
1. Group disk type SAS - This group includes high performance, enterprise class disk types
- FCAL and SAS.
2. Group disk type SATA - This group includes high capacity, near-line disk types - BSAS,
FSAS, SATA and ATA. The MSATA disk type is not included in this group, and cannot be
mixed together with any other disk type.
With the raid.disktype.enable option set to false, specifying a disk type with the '-T'
option will result in the equivalent group disk type being used for spare selection, and the final
set of selected spare disks may include disks from all the disk types included in the group disk
type. For example, specifying '-T BSAS' in the aggregate creation or addition command will
result in the group disk type SATA being used, and all BSAS, SATA and ATA disks will be
considered equally during spare selection. The final set of selected spares may have a mix of
BSAS, SATA and ATA disks, all of which will be added into the same aggregate. Thus, with the
raid.disktype.enable option set to false, it is not possible to enforce the selection of disks of
strictly one disk type, if the desired disk type is part of either of the two groups listed above. The
only way to enforce selection of disks of a single disk type is to set raid.disktype.enable to
true. The default value of the option is false.
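The effect of the raid.disktype.enable option on spare matching can be illustrated with a small Python sketch. The group mapping follows the two groups listed above; effective_disktype is a hypothetical helper for illustration, not part of Data ONTAP:

```python
# Group membership from the article: FCAL/SAS form group disk type 'SAS';
# BSAS/FSAS/SATA/ATA form group disk type 'SATA'; MSATA is never mixed.
GROUPS = {
    "FCAL": "SAS", "SAS": "SAS",
    "BSAS": "SATA", "FSAS": "SATA", "SATA": "SATA", "ATA": "SATA",
}

def effective_disktype(disk_type: str, strict: bool) -> str:
    """Disk type used for spare matching.

    strict=True models raid.disktype.enable=true (exact type enforced);
    strict=False models the default, where members of a group are
    considered equal during spare selection.
    """
    if strict:
        return disk_type
    return GROUPS.get(disk_type, disk_type)  # MSATA, SSD, LUN stay as-is

print(effective_disktype("BSAS", strict=False))   # -> SATA (may mix with ATA/SATA/FSAS)
print(effective_disktype("BSAS", strict=True))    # -> BSAS (only BSAS selected)
print(effective_disktype("MSATA", strict=False))  # -> MSATA (never mixed)
```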
If the raid.disktype.enable option is changed from false to true on a system that has
existing aggregates with a mix of disk types, those aggregates will continue to accept new disks
belonging to all the disk types already present in the aggregate. However, Data ONTAP will not
allow new aggregates to be created with a mix of disk types, for as long as the
raid.disktype.enable option is set to true.
Starting with Data ONTAP 8.2, the raid.disktype.enable option is deprecated, and has been
replaced by two new configuration options:
1. raid.mix.hdd.disktype.performance This option controls the mixing of high
performance, enterprise class disk types - FCAL and SAS. The default value of this
option is false, which means that these disk types cannot be present together in the same
aggregate by default.
2. raid.mix.hdd.disktype.capacity This option controls the mixing of high capacity,
near-line disk types - BSAS, FSAS, SATA and ATA. The default value of this option is
true, which means that these disk types can be present together in the same aggregate by
default.
Note that the behaviour of these two options is exactly the opposite of the behaviour of the
raid.disktype.enable option. For these two options, a value of true means that mixing of
disk types is allowed, while a value of false means that mixing is not allowed. In the case of
raid.disktype.enable, it is the opposite - a value of true means that disk types are strictly
enforced and mixing is not allowed, while a value of false means that mixing is allowed.
The rest of this article uses the term 'disk type mixing options' to refer to the configuration
options described above that determine whether or not mixing certain disk types in the same
aggregate is allowed. In Data ONTAP 8.1 and earlier releases, this term refers to the option
raid.disktype.enable. In Data ONTAP 8.2 and later releases, this term refers to the options
raid.mix.hdd.disktype.performance and raid.mix.hdd.disktype.capacity.
Flash Pools
Data ONTAP 8.1.1 introduces support for Flash Pools, which are aggregates containing both
HDDs (hard disk drives) and SSDs (solid state disks), in different RAID groups. The term HDD
refers to a mechanical storage device that uses rotating media. All disk types listed in the Disk
type section, with the exception of SSD and LUN, are considered HDD disk types. The term
SSD refers to a flash memory-based storage device, and is represented by the disk type 'SSD' in
the list in the Disk type section. The disk type 'LUN' represents a logical storage device and is
considered neither HDD nor SSD. A Flash Pool can be created by enabling the conversion
feature on an existing aggregate containing HDDs, and then adding SSDs to it. To enable the
feature on an existing aggregate, the 'hybrid_enabled' option needs to be set to true for the
aggregate, using the command 'aggr options <aggr_name> hybrid_enabled true' in
7-Mode and 'aggregate modify -aggregate <aggr_name> -hybrid_enabled true' in
C-Mode. Enabling this option on an aggregate allows it to have two storage tiers - an HDD tier and
an SSD tier. New disks can be added to both tiers, using the 'aggr add' command in 7-Mode
and the 'aggregate add-disks' command in C-Mode. The command specified must contain
enough information to unambiguously identify which storage tier the disks are to be added to.
This can be achieved by using the '-T' (disktype) option to specify a disk type, the '-g' (RAID
group) option to specify the RAID group to which the disks are to be added, or the '-d'
(disklist) option to explicitly specify a disk list. Data ONTAP uses the specified input to
decide which storage tier to add the new disks to.
The HDD tier of a Flash Pool behaves just like a normal aggregate containing HDDs. The disk
types that are allowed to be present together in the HDD tier depend on the values of the disk
type mixing options, as described earlier.
RPM
Data ONTAP also uses disk speed as a spare selection criterion. For hard disk drives (HDDs),
which use rotating media, the speed is measured in revolutions per minute (RPM). The rotational
speeds supported by Data ONTAP include 5.4K, 7.2K, 10K and 15K RPM, as referenced later in
this article.
The concept of rotational speed does not apply to non-HDD disks. Thus, disks of type SSD and
LUN do not have associated RPM values.
Mixing disks with different RPMs in a single aggregate is not recommended. Adding a disk with
a lower RPM value to an aggregate will result in a reduction in the maximum I/O throughput
achievable from the aggregate, because throughput is limited by the speed of the slowest disk in
the aggregate. For the same reason, adding a disk with higher RPM value to an aggregate will
result in no improvement in performance.
However, in some scenarios, it is useful to have the ability to mix disks of different RPMs in an
aggregate. For example, ATA disks have transitioned from 5.4K RPM to 7.2K RPM drives, and
many systems have a mix of disks with these two speeds. It would be inconvenient if Data
ONTAP did not permit the mixing of these two disk speeds in the same aggregate, because there
would be no way to gradually transition an aggregate from using the slower disks to using the
faster disks. Thus, even though it recommends against it, Data ONTAP allows the mixing of
disks with different RPMs in the same aggregate, and provides two configuration options to
control this behavior.
RPM mixing options
The following two configuration options determine whether or not the mixing of disks with
different RPMs in a single aggregate is allowed:
1. The option raid.rpm.ata.enable controls the mixing of ATA disks (disks of type
ATA, SATA, BSAS and MSATA) of different RPMs in the same aggregate. If the option
is set to true, ATA disks with different RPM values are considered different, and Data
ONTAP only selects disks with the same RPM value to be part of an aggregate. If the
option is set to false, ATA disks with different RPMs are considered equal and Data
ONTAP may select disks with different RPMs to be part of the same aggregate.
2. The option raid.rpm.fcal.enable controls the mixing of SAS and FCAL disks with
different RPMs in the same aggregate. If the option is set to true, FCAL and SAS disks
with different RPMs are considered different, and Data ONTAP only selects disks with
the same RPM value to be part of an aggregate. If the option is set to false, FCAL and
SAS disks with different RPMs are considered equal and Data ONTAP may select disks
with different RPMs to be part of the same aggregate.
The default value of raid.rpm.fcal.enable is true, which means that mixing of FCAL and
SAS disks of different speeds in the same aggregate is not allowed by default. This is because
15K RPM drives are more expensive than 10K RPM drives, and using 15K RPM drives
exclusively in an aggregate guarantees better performance. The default value of
raid.rpm.ata.enable, however, is false, which means that mixing of ATA disks of different
speeds in the same aggregate is allowed by default. This allows systems that have aggregates
with 5.4K RPM ATA disks nearing end-of-life (EOL) to transition easily to 7.2K RPM disks.
As in the case of the disk type mixing options, there is no way to ensure the selection of disks
with a certain RPM value during aggregate creation or disk addition if the above two options are
set to false. If the system has a mix of disks with different RPMs, a desired RPM value
specified with the '-R' option during aggregate creation may be ignored, if the corresponding
configuration option is set to false. For example, if the user specifies '-T ATA -R 5400' in the
aggregate creation command, to ensure the selection of 5.4K RPM ATA disks on a system with
5.4K RPM and 7.2K RPM ATA disks, Data ONTAP could end up selecting the 7.2K RPM ATA
disks instead, if the option raid.rpm.ata.enable is set to false. This is because the two sets
of disks are considered equivalent with respect to RPM, and the final selection is made based on
one of the other disk attributes like disk size, checksum type, etc., which could result in the 7.2K
RPM disks being given preference. To enforce the selection of disks of a specific RPM value, the
configuration option for that disk type must be set to true.
Starting with Data ONTAP 8.2, the raid.rpm.ata.enable and raid.rpm.fcal.enable options
are deprecated, and have been replaced by two new options that behave in exactly the same way,
but are named differently to better indicate their functionality:
1. raid.mix.hdd.rpm.capacity This option replaces raid.rpm.ata.enable and
controls the mixing of capacity-based hard disk types (BSAS, FSAS, SATA, ATA and
MSATA). The default value is true, which means that mixing is allowed.
2. raid.mix.hdd.rpm.performance This option replaces raid.rpm.fcal.enable and
controls the mixing of performance-based hard disk types (FCAL and SAS). The default
value is false, which means that mixing is not allowed.
Note that the behaviour of the two new options is exactly the opposite of the behaviour of the old
options. For the new options, a value of true means that disks with different RPMs are allowed
to be part of the same aggregate, while a value of false means that they are not. In the case of
raid.rpm.ata.enable and raid.rpm.fcal.enable, it is the opposite - a value of true means
that disks are strictly separated by RPM and mixing of RPMs in the same aggregate is not
allowed, while a value of false means that mixing is allowed.
The rest of this article uses the term 'RPM mixing options' to refer to the configuration options
described above that determine whether or not the mixing of disks with different RPMs in the
same aggregate is allowed. In Data ONTAP 8.1 and earlier releases, this term refers to the
options raid.rpm.ata.enable and raid.rpm.fcal.enable. In Data ONTAP 8.2 and later
releases, this term refers to the options raid.mix.hdd.rpm.capacity and
raid.mix.hdd.rpm.performance.
Checksum
The checksum type of a disk is another attribute used by Data ONTAP during spare selection.
Data ONTAP supports the following checksum types:
1. Block checksum (BCS): This checksum scheme uses 64 bytes to store checksum
information for every 4096 bytes (4KB) of data. This scheme can be used on disks
formatted with 520 bytes per sector ('bps') or 512 bytes per sector. On 520 bps disks, sets
of 8 sectors are used to store 4KB of data and 64 bytes of checksum information. This
scheme makes the best use of the available disk capacity. On disks formatted with 512
bps, Data ONTAP uses a scheme called 8/9 formatting to implement BCS. The scheme
uses sets of 9 sectors - 8 512-byte sectors to store 4KB of data, with the 9th sector used to
store 64 bytes of checksum information for the preceding 8 sectors. This scheme leaves
about 10% of the available disk capacity unused, because only 64 bytes of every 9th
sector is used for storing the checksum, with the remaining 448 bytes not used. Block
checksums can also be used on disks formatted with 4160 bytes per sector.
2. Zone checksum (ZCS): In this checksum scheme, 63 blocks of 4KB each are followed
by a single 4KB block of checksum information for the preceding 63 blocks. This scheme
makes good use of the available disk capacity, but has a performance penalty because
data and checksums are not co-located and an extra seek may be required to read the
checksum information. Because of this performance penalty, the ZCS scheme is not
widely used on disks any longer. However, it is still used on some older systems, and
with LUNs.
3. Advanced Zone checksum (AZCS): This checksum scheme was introduced in Data
ONTAP 8.1.1, specifically for disks requiring optimal storage efficiency and for disks
formatted with 4 Kilobytes per sector. A new scheme is required for 4K bps disks because
a scheme similar to the 8/9 BCS scheme on these disks would result in wastage of almost
50% of the disk capacity, and the performance penalty of the ZCS scheme would be too
high. In the AZCS scheme, a disk is divided into zones with 64 4KB blocks in each zone.
The middle block in each zone is designated the checksum block, and stores checksum
information for all the other blocks in the zone. Placing the checksum block in the middle
of a zone reduces the average seek distance between a data block and the checksum
block, and results in better performance when compared to the ZCS scheme. The AZCS
scheme can also be used on disks formatted with 512 bytes per sector.
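The capacity overheads of the three schemes follow directly from the layouts described above. A quick Python check (illustrative arithmetic only; fractions are of raw formatted capacity):

```python
# BCS on 520 bps disks: 8 sectors hold 4096 B of data + 64 B of checksum.
bcs_520 = 4096 / (8 * 520)        # ~98.5% of raw capacity holds data

# BCS via 8/9 formatting on 512 bps disks: 9 sectors per 4 KB of data,
# with only 64 B of the 9th sector used for the checksum.
bcs_8_9_data = 4096 / (9 * 512)   # ~88.9% of raw capacity holds data
bcs_8_9_unused = 448 / (9 * 512)  # ~9.7% unused (the 'about 10%' above)

# ZCS and AZCS: 1 checksum block per 64 blocks (63 data + 1 checksum).
zcs_azcs = 63 / 64                # ~98.4% of raw capacity holds data

print(f"BCS/520bps: {bcs_520:.1%}, BCS 8/9: {bcs_8_9_data:.1%}, "
      f"ZCS/AZCS: {zcs_azcs:.1%}")
# prints: BCS/520bps: 98.5%, BCS 8/9: 88.9%, ZCS/AZCS: 98.4%
```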
The following list shows the current checksum types supported by the various Data ONTAP disk
types. Note that this list is subject to change. To get up-to-date information for a specific Data
ONTAP release, check the product documentation on the Support site.
1. SAS, FCAL - BCS
2. ATA, SATA, BSAS, FSAS - BCS
3. MSATA - AZCS
4. SSD - BCS
Disks of type LUN can be used in BCS, ZCS and AZCS aggregates.
The 'disk assign -c' command in 7-Mode and 'storage disk assign -checksum'
command in C-Mode can be used to assign a specified checksum type to a disk or LUN. The
command accepts two checksum values - 'block' and 'zoned'. Disks and LUNs that are
assigned the 'block' checksum type can be added to BCS aggregates, and those that are
assigned the 'zoned' checksum type can be added to AZCS aggregates as well as older ZCS
aggregates.
Mixed checksum aggregates
Each aggregate in a Data ONTAP system is assigned a checksum type, based on the checksum
type of the disks in the aggregate. Aggregates with BCS checksum disks have a checksum type
of 'block', aggregates with AZCS checksum disks have a checksum type of 'azcs', and aggregates
with zoned checksum LUNs have a checksum type of 'zoned'. Data ONTAP also allows
aggregates with checksum type 'mixed' - these aggregates have both AZCS and BCS checksum
disks, but in separate RAID groups. Such aggregates are called 'mixed checksum aggregates'. A
mixed checksum aggregate is created when BCS disks are added to an AZCS aggregate, or when
AZCS disks are added to a block checksum aggregate. A new RAID group is formed with the
newly added disks, and the aggregate's checksum type is set to 'mixed'.
Disk size
Data ONTAP also uses disk size as a spare selection criterion. The user can specify a desired disk
size value in the aggregate creation or disk addition command (using the '@size' option). In the
case of failed disk replacement, the desired size value is the size of the failed disk needing
replacement.
Given a desired value of disk size, Data ONTAP uses a spread factor of 20% to identify suitable
spare disks. For every spare disk being considered, Data ONTAP computes two sizes - a
'minimum' size, which is 80% of the spare disk's size, and a 'maximum' size, which is 120% of
the spare disk's size. It then checks to see if the desired size value falls in the range defined by
the spare disk's minimum and maximum sizes. If it does, the spare disk is considered suitable for
selection, with respect to disk size.
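The 20% spread rule can be expressed directly in code. A minimal sketch (size_matches is a hypothetical helper name; the sizes here are right-sized usable capacities, in TB):

```python
def size_matches(desired_size: float, spare_size: float) -> bool:
    """A spare is suitable, with respect to disk size, if the desired size
    falls within 80%-120% of the spare's (right-sized) usable capacity."""
    return 0.8 * spare_size <= desired_size <= 1.2 * spare_size

# A '2 TB' SATA disk right-sizes to roughly 1.62 TB of usable capacity,
# so asking for 2 TB does NOT match it (range is ~1.30-1.94 TB):
print(size_matches(2.00, 1.62))   # -> False
# Asking for the right-sized value (or anything in range) does match:
print(size_matches(1.62, 1.62))   # -> True
print(size_matches(1.90, 1.62))   # -> True
```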
The disk size value used by Data ONTAP for all these calculations is the right-sized value of a
disk's physical capacity, also referred to as the disk's 'usable capacity'. Right-sizing is a process
that Data ONTAP uses to standardize the number of usable sectors on a disk, so that disks of
similar sizes from different manufacturers can be used interchangeably in a Data ONTAP system.
Right-sizing also takes into account the amount of space on the disk needed by Data ONTAP for
its own use. The usable capacity of a disk is smaller than the physical capacity, and can be
viewed using the 'sysconfig -r' command on 7-Mode (column 'Used MB/blks') and the
'storage disk show -fields usable-size' command in C-Mode. The Storage
Management Guide contains a table listing the physical capacities and usable capacities for the
different disks supported by Data ONTAP.
Another point to be noted is that Data ONTAP calculates and reports disk size values using
binary prefixes, while disk manufacturers report disk sizes using SI prefixes. Because of the use of
different units, the disk sizes reported by Data ONTAP are smaller than the disk sizes advertised
by the manufacturers. For more information, see article 3011274: What are the different numbers
that are used as disk capacity, system capacity, aggregate and volume sizes, in Data ONTAP,
technical documentation, and marketing materials?
The size policy followed by Data ONTAP, in combination with the right-sizing of disks and the
difference in disk size reporting units could result in unexpected spare selection behavior. For
example, on a system with 2 TB SATA disks, specifying a desired size value of 2 TB in the
aggregate creation or addition command does not result in the selection of the 2 TB disks present
in the system. This is because 2 TB disks actually have a usable capacity of 1.62 TB, after right-sizing and using binary prefixes to calculate disk size. Using the Data ONTAP size selection
policy, the 20% spread calculated on a spare disk of size 1.62 TB gives a range of {1.29 TB, 1.94
TB}, which does not include the specified disk size of 2 TB. Thus, Data ONTAP does not select
any of the 2 TB spare disks, even though the system has 2 TB disks and the user has specifically
asked for them. The same behavior is seen with disks of size 1 TB and 3 TB.
To ensure that Data ONTAP picks a specific spare disk given an input size, the user should
specify a size value such that the 80%-120% calculation performed on the desired spare disk's
usable capacity results in a range that includes the specified size value. For example, to ensure
the selection of 2 TB disks present in a system, the user should check the usable capacity of a 2
TB disk using the command 'sysconfig -r' and then specify a size value that lies in the 80%-120% range of that value.
Usable capacity of a 2 TB disk, from 'sysconfig -r':
Used (MB/blks)
--------------
1695466/3472314368
So, any size value in the range {80% of 1695466 MB, 120% of 1695466 MB} will result in the
selection of the 2 TB spare disks. For example: '@1695466M' or '@1695G' or '@1700G'.
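The selection rule above can be sketched in a few lines (the function name is illustrative, not a Data ONTAP internal):

```python
def size_matches(spare_usable_mb, desired_mb):
    """Return True if the desired size falls within 80%-120% of the
    spare disk's usable (right-sized) capacity."""
    return 0.8 * spare_usable_mb <= desired_mb <= 1.2 * spare_usable_mb

usable_mb = 1695466  # usable capacity of a "2 TB" disk, from 'sysconfig -r'

# Asking for 2 TB (2097152 MB in binary units) misses the 80%-120% window...
print(size_matches(usable_mb, 2 * 1024 * 1024))  # False

# ...while asking for the usable capacity itself (or e.g. '@1695G') succeeds.
print(size_matches(usable_mb, 1695466))          # True
print(size_matches(usable_mb, 1695 * 1024))      # True
```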
Pool
A pool is an abstraction used by Data ONTAP to segregate disks into groups, according to user
specified assignments. All spare disks in a Data ONTAP system are assigned to one of two spare
pools - Pool0 or Pool1. The general guidelines for assigning disks to pools are:
1. Disks in the same shelf or storage array should be assigned to the same pool
2. There should be an equal or close to equal number of disks assigned to each pool
By default, all spare disks are assigned to Pool0 when a Data ONTAP system is started up. If the
system is not configured to use SyncMirror, having all disks in a single pool is sufficient for the
creation of aggregates. If SyncMirror is enabled on the system, Data ONTAP requires the
segregation of disks into two pools for the creation of SyncMirror aggregates. A SyncMirror
aggregate contains two copies of the same WAFL filesystem, which are kept in sync with each other.
Each copy is called a 'plex'. In order to provide the best protection against data loss, the disks
comprising one plex of a SyncMirror aggregate need to be physically separated from the disks
comprising the other plex. During the creation of a SyncMirror aggregate, Data ONTAP selects
an equal number of spare disks from each pool, and creates one plex of the aggregate with the
disks selected from Pool0, and the other plex with the disks selected from Pool1. If the
assignment of disks to pools has been done according to the guidelines listed above, this method
of selecting disks ensures that the loss of a single disk shelf or storage array affects only one plex
of the aggregate, and that normal data access can continue from the other plex while the affected
plex is being restored.
The command 'disk assign -p <pool_number>' can be used to assign disks to a pool, in
both 7-Mode and C-Mode. If SyncMirror is enabled on the system, a system administrator will
have to assign disks to Pool1 using this command, before any SyncMirror aggregates can be
created.
Pre-zeroed status
Data ONTAP requires all spare disks that were previously part of an aggregate to be zeroed
before they can be added to a new aggregate. Disk zeroing ensures that the creation of a new
aggregate does not require a parity computation, and that addition of disks to an existing
aggregate does not require a re-computation of parity across all RAID groups to which the new
disks have been added. Non-zeroed spare disks that are selected for aggregate creation or
addition have to be zeroed first, lengthening the overall duration of the aggregate creation or
addition process. Replacement of a failed disk does not require completely zeroed spares, since
reconstruction of data on the replacement disk overwrites the existing data on some of the disk
blocks. The blocks that are not overwritten during reconstruction, however, have to be zeroed
before the disk can be used by the aggregate.
Data ONTAP gives preference to pre-zeroed disks during spare selection for aggregate creation
and addition, as well as failed disk replacement. However, despite the benefits of having pre-zeroed spare disks available in a system, Data ONTAP does not automatically zero disks as soon
as they are removed from aggregates. This is to minimize the possibility of irrecoverable data
loss in the event of a scenario where data on a disk is required even after the disk has been
removed from the aggregate. Disk zeroing can only be started by the system administrator, using
the command 'disk zero spares' in 7-Mode and 'storage disk zerospares' in C-Mode.
This command starts the zeroing process in the background on the spare disks present in the
system at that time.
Topology-based optimization of selected spares
Data ONTAP performs an optimization based on the topology of the storage system, on the set of
spare disks that have been selected for aggregate creation or addition or failed disk replacement.
First, it constructs a topology layout with the selected spare disks ordered by channel, shelf and
slot. Then, it considers all the points of failure in the storage system (adapters, switches, bridges,
shelves, and so on), and estimates the 'load' on each by counting the number of existing filesystem disks
associated with each point of failure. When allocating spares, Data ONTAP attempts to distribute
disks evenly across the different points of failure. It also attempts to minimize the points of
failure that the selected disks have in common with the other disks in the target RAID group.
Finally, it allocates the required number of spares, alternating the selected disks between all
significant points of failure.
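The allocation step can be approximated with a greedy sketch (an assumed simplification: shelves are the only points of failure considered here, and the function name is illustrative):

```python
from collections import Counter

def allocate_spares(candidates, existing_load, count):
    """Pick `count` spares, preferring disks whose point of failure
    (here, a shelf) currently carries the fewest filesystem disks,
    alternating between points of failure as disks are taken.
    `candidates` is a list of (disk_name, shelf) pairs."""
    load = Counter(existing_load)
    pool = list(candidates)
    chosen = []
    for _ in range(count):
        disk = min(pool, key=lambda d: load[d[1]])  # least-loaded shelf first
        pool.remove(disk)
        load[disk[1]] += 1
        chosen.append(disk[0])
    return chosen

spares = [("d1", "shelf0"), ("d2", "shelf0"), ("d3", "shelf1"), ("d4", "shelf2")]
print(allocate_spares(spares, {"shelf0": 4, "shelf1": 1, "shelf2": 1}, 3))
# ['d3', 'd4', 'd1'] -- the heavily loaded shelf0 is drawn from last
```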
Spare selection for new aggregate creation
Data ONTAP uses the following disk attributes for spare selection - disk type, checksum type,
RPM and disk size. The user may specify desired values for some of these attributes in the
aggregate creation command. For the attributes not specified by the user, Data ONTAP
determines the values that will provide the best selection of spares.
First, Data ONTAP decides the disk type and checksum type of the disks to be selected. If the
user has not specified a desired disk type, it finds the disk type with the largest number of spare
disks. If the user has specified a desired checksum type, it only counts the disks with that
checksum type. If not, it looks through the disks in the following order of checksum type:
1. Advanced zone checksum disks
2. Block checksum disks
3. Zoned checksum disks
For each checksum type, Data ONTAP determines the disk type that has the largest number of
disks. If this number is insufficient for the creation of the new aggregate, it considers the disks
with the next checksum type, and so on. If no checksum type has a sufficient number of disks,
the aggregate create operation fails. Additional user-specified attributes are also considered in
this step. For example, if the user has specified a desired checksum type and a desired RPM
value, Data ONTAP determines the disk type that has the most disks with the specified checksum
and RPM values.
If there are two or more disk types with the same number of spare disks, Data ONTAP selects a
disk type in the following order of preference:
1. MSATA
2. FSAS
3. BSAS
4. SSD
5. SATA
6. SAS
7. LUN
8. ATA
9. FCAL
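A minimal sketch of the count-then-tie-break logic (the PREFERENCE list mirrors the order above; the function is illustrative, not the actual implementation):

```python
# Tie-break order when two or more disk types have the same spare count.
PREFERENCE = ["MSATA", "FSAS", "BSAS", "SSD", "SATA", "SAS", "LUN", "ATA", "FCAL"]

def pick_disk_type(spare_counts):
    """Choose the disk type with the most spares; break ties using the
    documented preference order. `spare_counts` maps type -> count."""
    best = max(spare_counts.values())
    tied = [t for t, n in spare_counts.items() if n == best]
    return min(tied, key=PREFERENCE.index)

print(pick_disk_type({"SAS": 10, "FCAL": 10}))  # SAS (tie broken by preference)
print(pick_disk_type({"SATA": 5, "FCAL": 8}))   # FCAL (simple majority)
```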
Once it has identified a set of disks according to disk type and checksum type, a subset is
selected based on RPM. This step is only performed if the identified disk type is neither SSD nor
LUN, since the concept of rotational speed does not apply to these disk types. If the user has
specified a desired RPM value, only disks with that value are retained in the selected set. If the
user has not specified a value, Data ONTAP groups all selected disks by their RPM values and
chooses the group that has the largest number of disks. If two or more groups have the same
number of disks, the group with the highest RPM is selected. The value of the RPM mixing
option for a specified disk type determines if the disks of that disk type will be considered equal
with respect to RPM. If the option is set to false, all disks of that disk type are counted together
in the same group, even if they have different RPM values. If the option is set to true, disks of
that disk type are strictly separated into groups according to their RPM values.
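The RPM grouping can be sketched as follows (names are illustrative; the strict_separation flag corresponds to the RPM mixing option being set to true):

```python
from collections import Counter

def choose_rpm_group(disks, strict_separation):
    """Pick the RPM group with the most disks; ties go to the higher
    RPM. With strict separation off (RPM mixing option false), all
    disks count as one group. `disks` is a list of (name, rpm) pairs."""
    if not strict_separation:
        return list(disks)  # all disks considered equal with respect to RPM
    counts = Counter(rpm for _, rpm in disks)
    # Key (count, rpm): largest group wins, higher RPM breaks ties.
    best_rpm = max(counts.items(), key=lambda kv: (kv[1], kv[0]))[0]
    return [d for d in disks if d[1] == best_rpm]

disks = [("a", 10000), ("b", 10000), ("c", 15000), ("d", 15000)]
# Two equal-sized groups: the 15K group wins the tie.
print([name for name, _ in choose_rpm_group(disks, strict_separation=True)])
# ['c', 'd']
```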
If the user has specified a desired disk size in the aggregate creation command, Data ONTAP
selects spare disks such that the desired size lies within 80%-120% of the spare disk's size. If the
user has not specified a desired size, Data ONTAP uses the selected disks in ascending order of
size. The largest disk is made the dparity disk and the next largest disk is made the parity disk of
the RAID group. Among disks of the same size, preference is given to pre-zeroed disks.
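The ordering and parity assignment can be sketched like this (RAID-DP assumed; the tuple layout and function name are illustrative):

```python
def assign_raid_positions(spares):
    """Order spares by size ascending (pre-zeroed first within a size);
    the largest disk becomes dparity and the next largest parity.
    `spares` is a list of (name, size_mb, prezeroed) tuples."""
    ordered = sorted(spares, key=lambda d: (d[1], not d[2]))
    dparity, parity = ordered[-1][0], ordered[-2][0]
    data = [d[0] for d in ordered[:-2]]
    return dparity, parity, data

spares = [("d1", 1000, True), ("d2", 2000, False),
          ("d3", 2000, True), ("d4", 1500, True)]
print(assign_raid_positions(spares))  # ('d2', 'd3', ['d1', 'd4'])
```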
Once a set of spare disks has been identified based on these attributes, Data ONTAP optimizes
the selection based on the topology of the storage system. The topology optimization procedure
is described in detail in the Topology-based optimization of selected spares section.
As mentioned earlier, the values of disk type and RPM considered by Data ONTAP during spare
selection depend on the values of the disk type mixing options and the RPM mixing options.
Creation of a root aggregate
Data ONTAP is designed to prefer HDDs over SSDs for the creation of the root aggregate in the
system, even if SSDs are more numerous. SSDs are selected for the root aggregate only if there
are not enough HDDs.
Creation of an unmirrored aggregate
For an unmirrored aggregate, Data ONTAP selects a set of spare disks from one of the two pools.
It counts the number of available spare disks in each pool and chooses the set that has the larger
number. If neither of the two pools has a sufficient number of disks, the aggregate creation will
fail with an error message. Data ONTAP will never select a set of disks that spans the two pools.
However, this behavior can be overridden by specifying the '-d/-disklist' option with a list
of disks spanning both pools, and the '-f/-force' option to override the pool check.
2. 'new' - create one or more new RAID groups with the disks being added
3. 'all' - add disks to all existing RAID groups until they are full; create new RAID
groups after that
If the user has not specified a disk type but has specified a RAID group value, Data ONTAP will
try to determine the disk type from the RAID group value specified. For example, if the user
specifies an existing RAID group, Data ONTAP will choose spare disks with the same disk type as
the disks in that RAID group. If no RAID group value is specified, Data ONTAP will choose
disks with the disk type of the first RAID group in the aggregate. If new disks are to be added to
a Flash Pool, the aggregate addition command must contain enough information to
unambiguously identify the tier to which the disks are to be added. This can be done by explicitly
specifying the disk type using the '-T' option, or by specifying a RAID group value (with the
'-g' option) from which Data ONTAP can infer the disk type. The '-d' option may also be
used to explicitly specify a disk list. However, Data ONTAP only allows disks to be added to one
tier in a single command, so the disk list specified may not contain both HDDs and SSDs.
Checksum type: The user may specify a desired checksum type for the disks to be added. If the
specified checksum type is different from the prevailing checksum type of the aggregate, the
aggregate will become a mixed-checksum aggregate (described in the Mixed checksum
aggregates section), and one or more new RAID groups will be created with the newly added
disks. If the user has not specified a desired checksum type, Data ONTAP chooses disks of the
same checksum type as the first RAID group in the aggregate.
RPM: The user is not allowed to specify a desired RPM value for disks to be added to an
existing aggregate. Data ONTAP determines the prevailing RPM value in the aggregate, by
grouping the disks in the aggregate by RPM, and choosing the RPM with the largest count of
disks. If two or more RPM groups contain the same number of disks, the larger RPM
value is chosen as the desired RPM value. In the absence of spares with the desired RPM value,
Data ONTAP may select disks with a different RPM. This depends on the value of the RPM
mixing option for the selected disk type - if the value is set to false, disks with a different RPM
value may be selected. Disks with an RPM different from that of the majority of disks in the
aggregate may be added to the aggregate, by specifying the disks with the '-d/-disklist'
option together with the '-f/-force' option.
Size: If the user has specified a desired size for the disks to be added, Data ONTAP chooses
spare disks such that the desired size lies within 80% - 120% of the selected spare disk's size. If
the user has not specified a desired size, Data ONTAP uses the size of the largest data disk in the
target RAID group as a 'baseline' size, and selects spare disks in the following order:
1. Disks that are the same size as the baseline size
2. Disks that are smaller than the baseline size, in descending order
3. Disks that are larger than the baseline size, in ascending order
If the disks are going to form a new RAID group, Data ONTAP finds the newest RAID group in
the aggregate with the same disk type and checksum type as the disks being added, and uses the
size of the largest data disk in that RAID group as the baseline size.
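The baseline ordering above can be sketched as (function name is illustrative):

```python
def order_by_baseline(sizes, baseline):
    """Order candidate spare sizes for aggregate addition: exact
    matches to the baseline first, then smaller sizes in descending
    order, then larger sizes in ascending order."""
    same = [s for s in sizes if s == baseline]
    smaller = sorted((s for s in sizes if s < baseline), reverse=True)
    larger = sorted(s for s in sizes if s > baseline)
    return same + smaller + larger

print(order_by_baseline([900, 1200, 1000, 800, 1100, 1000], baseline=1000))
# [1000, 1000, 900, 800, 1100, 1200]
```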
Once a set of spare disks has been identified based on these attributes, Data ONTAP optimizes
the selection based on the topology of the storage system. The optimization procedure is
described in detail in the Topology-based optimization of selected spares section.
Pool: A matching spare disk has to belong to the same pool as the parent plex of the aggregate
containing the failed disk. In the absence of a matching spare disk, Data ONTAP may select a
suitable spare disk from the opposite pool, if the aggregate is unmirrored. For a mirrored
aggregate, Data ONTAP will select a disk from the opposite pool only if the aggregate is mirror-degraded or is resyncing.
Checksum: The desired checksum type of a spare disk is the checksum type of the RAID group
that the failed disk belonged to. Data ONTAP may select a spare disk with a different checksum
type, if the selected spare disk also supports the desired checksum type.
Size: Selected spare disks have to be the same size as or larger than the failed disk being
replaced. If the disks selected are larger in size, they are downsized before being used.
If multiple matching or suitable spare disks are found, Data ONTAP uses two additional
attributes to choose a single disk - the pre-zeroed status of the disks and the topology of the
storage system. Data ONTAP gives preference to spares that are already zeroed, as described in
the Pre-zeroed status section. It also tries to optimize the selection based on the topology of the
storage system, as described in the Topology-based optimization of selected spares section.
Failed disk replacement in an unmirrored aggregate
Data ONTAP first tries to find a matching spare disk to replace a failed disk in an unmirrored
aggregate. If there are no matching spares found, it tries to find a suitable spare disk, by varying
the selection attributes in the following order:
1. Different RPM, same pool
2. Same RPM, different pool
3. Different RPM, different pool
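The search order above can be sketched as (simplified to RPM and pool only; names are illustrative):

```python
def find_replacement(spares, failed_rpm, failed_pool):
    """Try for a matching spare first, then relax attributes in the
    documented order: different RPM/same pool, same RPM/different
    pool, different RPM/different pool. `spares` maps
    name -> (rpm, pool)."""
    variations = [
        lambda r, p: r == failed_rpm and p == failed_pool,  # matching spare
        lambda r, p: r != failed_rpm and p == failed_pool,  # 1. diff RPM, same pool
        lambda r, p: r == failed_rpm and p != failed_pool,  # 2. same RPM, diff pool
        lambda r, p: r != failed_rpm and p != failed_pool,  # 3. diff RPM, diff pool
    ]
    for matches in variations:
        for name, (rpm, pool) in spares.items():
            if matches(rpm, pool):
                return name
    return None  # no suitable spare: the disk replacement fails

# Failed disk: 10K RPM, Pool0 (compare the worked example later in this doc).
spares = {"g1": (15000, 1), "g2": (7200, 1), "g3": (15000, 0)}
print(find_replacement(spares, failed_rpm=10000, failed_pool=0))  # g3
```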
Failed disk replacement in a SyncMirror aggregate
As in the case of an unmirrored aggregate, Data ONTAP first tries to find a matching spare disk
to replace a failed disk. If there are no matching spares available, it looks for suitable spares. The
attribute variations listed above are tried in the same order, with one difference - Data ONTAP
does not look for suitable spare disks in the opposite pool, if the aggregate is in a normal, fault-isolated state. Data ONTAP will search for suitable spares in the opposite pool only if the
aggregate is mirror-degraded or is resyncing, with the plex containing the failed disk serving as
the source of the resync. In all other cases, the disk replacement will fail if there are not any
suitable or matching spares available in the same pool.
Spare selection with DS4486 shelves
Data ONTAP 8.1.1 introduces support for DS4486 disk shelves - a new dense disk shelf in which
two physical disks are housed per disk carrier. In a DS4486 shelf, the smallest field replaceable
unit (FRU) is the disk carrier, which means that it is the smallest unit in the shelf that can be
replaced individually. If either of the disks in a carrier fails, the entire carrier has to be replaced,
even if the other disk is healthy. If the healthy disk in a failed carrier is part of an aggregate, Data
ONTAP has to initiate a disk copy operation to copy the healthy disk to another disk, before the
carrier can be taken out of the shelf to be replaced. Thus, spare selection in a DS4486
environment is slightly different, because each carrier has to be considered a single point of
failure.
Data ONTAP avoids allocating two spares from the same carrier into the same RAID group,
because a failure in one of the disks in the carrier would require a complete disk copy of the
healthy disk along with the reconstruction on the selected spare disk, putting the RAID group at
risk while these operations are in progress. Data ONTAP also avoids selecting a spare disk from
a carrier that already has a failed or pre-failed disk. These modifications in the selection are all
performed during the topology optimization stage. The selection of spare disks is done as usual,
with each disk in a carrier considered independently (disks within the same carrier usually have
identical characteristics). Once Data ONTAP has identified candidate spare disks, it orders all of
them by channel, shelf, carrier and slot. All selected spare disks that have a failed or pre-failed
disk as a carrier-mate are removed from consideration. It then estimates the 'load' on each point
of failure in the topology, including each carrier. A carrier that has two spare disks is given a
higher preference than a carrier that has one spare disk and one used disk. Data ONTAP then
allocates disks, trying as far as possible to evenly distribute disks across all points of failure, and
alternating the selected disks between channels, shelves and carriers.
When the number of spare disks in the system is low, Data ONTAP cannot avoid allocating two
disks from a carrier into the same RAID group. When this happens, a background process is
started after the aggregate addition, which performs a series of disk copy operations to rearrange
the disks in existing RAID groups to eliminate cases of two disks from one carrier being in the
same RAID group.
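The carrier-aware filtering and preference described above can be sketched roughly as (shelf/channel ordering omitted for brevity; names are illustrative):

```python
def usable_carrier_spares(candidates, failed_disks):
    """Drop candidate spares whose carrier contains a failed or
    pre-failed disk, and prefer carriers where both disks are spares
    over carriers holding one spare and one used disk.
    `candidates` and `failed_disks` are (disk, carrier) pairs."""
    per_carrier = {}
    for disk, carrier in candidates:
        per_carrier.setdefault(carrier, []).append(disk)
    failed_carriers = {carrier for _, carrier in failed_disks}
    usable = [(d, c) for d, c in candidates if c not in failed_carriers]
    # Carriers holding two spare disks sort ahead of mixed carriers.
    return sorted(usable, key=lambda dc: -len(per_carrier[dc[1]]))

candidates = [("s1", "c1"), ("s2", "c1"), ("s3", "c2"), ("s4", "c3")]
failed = [("f1", "c3")]  # s4 shares carrier c3 with a failed disk
print(usable_carrier_spares(candidates, failed))
# [('s1', 'c1'), ('s2', 'c1'), ('s3', 'c2')]
```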
Spare selection parameters and options
The aggregate creation and addition commands accept certain input parameters which can be
used to specify values for disk attributes that must be considered during spare selection. During
aggregate creation or addition, the user should specify values for as many of these parameters as
possible to ensure the selection of a desired set of disks. These parameters are as follows:
1. -T <disk type>
2. -R <rpm value>
3. -c <checksum type>
4. @<size value>
In addition to these parameters, spare selection behavior also depends on the values of the disk
type mixing options and RPM mixing options. Unexpected spare disk selections could arise as a
result of the values that these options are set to. For instance, in Data ONTAP 8.1 and earlier,
disk type mixing is allowed by default, which could result in an unexpected disk type being
selected, even when the '-T' option is explicitly used to specify a disk type. As an example, if
disk type mixing is allowed, Data ONTAP considers FCAL and SAS disks to be part of the same
disk type group ('SAS'), so a command like 'aggr create <aggrname> -T FCAL
<diskcount>' may result in the aggregate being created with SAS disks, even if the required
number of FCAL disks are present in the system. This is because the FCAL and SAS disks are
considered equivalent with regard to disk type, and so the selection of disks is made on the basis
of other disk attributes like RPM, checksum type, size, topology, etc., which could result in the
SAS disks being given preference over the FCAL disks. If a strict enforcement of disk types is
required, the disk type mixing options should be disabled.
Similar to the enforcement of disk type, the RPM mixing options control the selection of disks
based on RPM. If a strict enforcement of RPM is required, these options should be disabled.
Spare selection with the '-disklist' option
The aggregate creation and addition commands have an option '-d' that accepts a space-separated list of spare disks. Data ONTAP checks this list to ensure that the disks have
compatible values of disk type, RPM, checksum type and pool, and then carries out the creation
or addition operation with the specified disks. For the creation of an unmirrored aggregate, Data
ONTAP checks that the disks in the disk list belong to the same pool and have the same RPM
value. For the addition of disks to an unmirrored aggregate, Data ONTAP checks that the disks in
the disk list belong to the same pool, and have the same RPM value as the prevailing RPM in the
aggregate. If these checks fail, Data ONTAP rejects the disk list and fails the command. This
behavior can be overridden with the '-f/-force' option - when a disk list is specified along
with the '-f' option, Data ONTAP ignores the results of the RPM and pool checks, thus
allowing disks from different pools and with different RPMs to be present in the same aggregate.
For the creation of or addition of disks to a SyncMirror aggregate, Data ONTAP expects two disk
lists to be specified, one for each pool. The '-f' option can be used here as well, to override the
RPM and pool checks.
Examples
On a system with 10 FCAL, 10 SAS and 10 SATA disks, the user executes the command
'aggr create <aggrname> 5'. Which disk type does Data ONTAP select for the creation
of the new aggregate?
The disk type selected depends on the value of the disk type mixing option. If disk type mixing is
allowed, FCAL and SAS disks are considered as having group disk type SAS, so they are
counted together. Data ONTAP picks the disk type that has the largest number of disks. Assuming
that all disks have the same checksum type, it selects disk type SAS (10 FCAL + 10 SAS disks =
20 disks with group disk type SAS vs. 10 disks with group disk type ATA). From the set of disks
with group disk type SAS, Data ONTAP could end up selecting either FCAL or SAS disks for
the creation of the aggregate - that would depend on the other disk attributes, such as RPM, size,
pre-zeroed status and storage topology.
If disk type mixing is not allowed, the three disk types are considered separately. Since all three
disk types have the same number of disks, Data ONTAP chooses a disk type in the order listed in
the Spare selection for new aggregate creation section. SAS is higher on the list than FCAL and
SATA, so Data ONTAP will select 5 SAS disks for the creation of the new aggregate.
On a system with 6 SATA BCS disks, 4 MSATA AZCS disks and 8 FCAL BCS disks, the
user executes the command 'aggr create <aggrname> 5'. Which disk type and checksum
type does Data ONTAP select for the creation of the aggregate?
The selection is done first by the checksum type, then by disk type and count. Data ONTAP first
considers AZCS checksum disks, and counts the number of disks of each disk type. Since there
are only 4 AZCS checksum disks in total and the user wants 5 disks, Data ONTAP moves on to the next
checksum type - BCS. There are 6 SATA disks and 8 FCAL disks with checksum type BCS. Data
ONTAP selects the disk type which has the higher number of disks - FCAL. If there was an equal
number of SATA and FCAL disks, it would have selected a disk type in the order listed in the
Spare selection for new aggregate creation section, so it would have picked SATA. In both cases,
the checksum type selected is BCS.
A disk in an unmirrored aggregate fails and Data ONTAP has to select a spare disk to
replace it. The other disks in the aggregate are of type FCAL, checksum BCS, 10K RPM
and from Pool0. The available spare disks are as follows:
1. Group 1 - disk type FCAL, checksum BCS, RPM 15K, Pool1
2. Group 2 - disk type SATA, checksum BCS, RPM 7.2K, Pool1
3. Group 3 - disk type SAS, checksum BCS, RPM 15K, Pool0
Which group of disks does Data ONTAP pick a replacement disk from?
In this case, there is no perfectly matching spare available for the failed disk, because none of the
spare disks have all the desired attributes. Data ONTAP first identifies the spare disks with
matching disk type. Assuming that disk type mixing is allowed on the system, Data ONTAP
treats FCAL and SAS disks as having the same effective disk type, so all FCAL and SAS spare
disks are considered suitable replacements with respect to disk type. From this set of disks, Data
ONTAP tries to find a suitable spare disk to replace the failed disk using the variations listed
earlier:
1. Different RPM, same pool
2. Same RPM, different pool
3. Different RPM, different pool
Looking at the list of variations, the disks in group 3 match variation 1 on the list - different
RPM, same pool. So Data ONTAP will pick a replacement disk from group 3. In this example, if
the disks in group 3 were not present, Data ONTAP would go down the list to variation 3 - different RPM, different pool - and pick a disk from group 1.
If disk type mixing was turned off on the system, Data ONTAP would consider FCAL and SAS
disks different with regard to disk type, and would consider only the FCAL spare disks suitable
replacements for the failed disk. Thus, it would select a replacement disk from among the
available FCAL spare disks in Group 1.
Can I mix Fibre Channel (FC) and Serial Attached SCSI (SAS) drives in the same
aggregate?
Answer
I have FC and SAS disks in my system. Can I use them in the same aggregate?
SAS and FC disks are treated as the same disk type when creating or increasing the size of an
aggregate.
Example of a sysconfig -r with both SAS and FC drives:
Aggregate aggr1 (online, raid_dp)
Plex /aggr1/plex0 (online, normal, active, pool0)
RAID group /aggr1/plex0/rg0
RAID Disk Device  HA SHELF BAY CHAN Pool Type RPM   Used (MB/blks)   Phys (MB/blks)
--------- ------- -- ----- --- ---- ---- ---- ----- ---------------- ----------------
dparity   0a.32   0a 2     0   FC:A 0    FCAL 15000 136000/278528000 137104/280790184
parity    0c.00.5 0c 0     5   SA:2 0    SAS  15000 136000/278528000 137104/280790184
data      0a.33   0a 2     1   FC:A 0    FCAL 15000 136000/278528000 137104/280790184
data      0c.00.6 0c 0     6   SA:2 0    SAS  15000 136000/278528000 137104/280790184
See the section on How Data ONTAP works with disks in the Storage Management Guide.
Answer
NetApp Flash Pool is an intelligent storage caching product within the NetApp Virtual Storage
Tier (VST) product family. A Flash Pool aggregate (or Hybrid aggregate) configures Solid-State
Drives (SSDs) and Hard Disk Drives (HDDs), either performance disk drives (often referred to
as SAS or FC) or capacity disk drives (often called SATA) into a single storage pool (aggregate)
with the SSDs providing a fast-response-time cache for volumes that are provisioned on the
Flash Pool aggregate.
Provisioning a volume in a Flash Pool aggregate can provide one or more of the following
benefits:
Persistent low read latency for large active datasets: NetApp systems configured with
Flash Pool can cache up to 100 times more data than configurations that have no
supplemental flash-based cache. The data can be read 2 to 10 times faster from the cache
than from HDDs. In addition, data cached in a Flash Pool aggregate is available through
planned and unplanned storage controller takeovers, enabling consistent read
performance throughout these events.
More HDD operations for other workloads: Repeat random read and random overwrite
operations utilize the SSD cache, enabling HDDs to handle more reads and writes for
other workloads, such as sequential reads and writes.
Increased system throughput (IOPS): For a system where throughput is limited due to
high HDD utilization, adding Flash Pool cache can increase total IOPS by serving
random requests from the SSD cache.
HDD reduction: A storage system that is configured with Flash Pool to support a given
set of workloads typically has fewer of the same type of HDD, and often fewer and
lower-cost-per-terabyte HDDs, than does a system that is not configured with Flash Pool.
Although configuring a NetApp storage system with Flash Pool can provide significant benefits,
there are some things that Flash Pool does not do. For example:
Accelerate write operations: The NetApp Data ONTAP operating system is already
write-optimized through the use of write cache and nonvolatile memory (NVRAM or
NVMEM). Flash Pool caching of overwrite data is done primarily to offload the intensive
write operations of rapidly changing data from HDDs.
A Flash Pool aggregate can be created non-disruptively, that is, while the system is operating and
serving data. The process of creating a Flash Pool aggregate has three steps:
1. Create the 64-bit HDD aggregate (unless it already exists).
Notes:
o When creating an aggregate of multiple HDD RAID groups, NetApp's best
practice is to size each RAID group with the same number of drives or with no
more than 1 drive difference (for example, one RAID group of 16 HDDs and a
second one of 15 HDDs is acceptable).
o If an existing aggregate is 32-bit, it must be converted to a 64-bit aggregate before
it is eligible to become a Flash Pool aggregate. As noted in section 3.1, there are
situations in which a converted 64-bit aggregate is not eligible to become a Flash
Pool aggregate.
2. Set the hybrid_enabled option to on for the aggregate:
Note: A RAID group cannot be removed from an aggregate after the aggregate has been
created.
For Data ONTAP operating in 7-Mode, run the following commands:
1. aggr options <aggr_name> hybrid_enabled on
2. aggr add <aggr_name> -T SSD <number_of_disks>
- OR -
aggr add <aggr_name> -d <diskid1>,<diskid2>
Reverting a Flash Pool aggregate back to a standard HDD-only aggregate requires migrating the
volumes to an HDD-only aggregate. After all volumes have been moved from a Flash Pool
aggregate, the aggregate can be destroyed, and then the SSDs and HDDs are returned to the
spares pool, which makes them available for use in other aggregates or Flash Pool aggregates.
A Flash Pool aggregate that has an SSD RAID group containing one data drive is supported;
however, with such a configuration, the SSD cache can become a bottleneck for some system
deployments. Therefore, NetApp recommends configuring Flash Pool aggregates with a
minimum number of data SSDs, as shown in the table below:
For further details, see TR-4070: Flash Pool Design and Implementation Guide.
Answer
On upgrading Data ONTAP to version 8.1 or later, aggregates will show an additional status in
the output of the 'aggr status -v' command (this is a diagnostic command).
aggr2 online          raid_dp, aggr       nosnap=off, raidtype=raid_dp,
                      64-bit              raidsize=14,
                      rlw_upgrading       ignore_inconsistent=off,
                                          snapmirrored=off,
                                          resyncsnaptime=60,
                                          fs_size_fixed=off,
                                          snapshot_autodelete=on,
                                          lost_write_protect=on,
                                          ha_policy=cfo,
                                          hybrid_enabled=off,
                                          percent_snapshot_space=5%,
                                          free_space_realloc=off,
                                          raid_lost_write=on,
                                          thorough_scrub=off,
                                          raid_cv=on
      Volumes: <none>
Lost writes are writes that a disk has confirmed to Data ONTAP as written, but that have not
made it to the disk (usually due to some damage on the media or the head of the drive, and in
rare cases, also the shelf module or HBA hardware or firmware defects).
RAID lost write protection (rlw) in Data ONTAP 8.1 and higher is an enhancement to the pre-existing (Data ONTAP 7.0 and higher) lost write protection.
Initially when the filer is upgraded, the aggregates will have the status rlw_upgrading. This
means that the rlw feature is enabled (by default) but is not yet active. For it to become active a
full RAID scrub needs to be run over all RAID groups of the concerned aggregate. Note that any
existing RAID scrubs are typically cancelled, and a new (modified) RAID scrub will have to be
run, which happens either automatically by the RAID scrub schedule (default is Sunday 1 a.m.)
or by starting a one-time manual scrub.
Note: 'rlw_upgrading' is just a flag/state; it does not indicate an active process running in the
background. This means that there is no background process impacting the storage system's
performance. The only performance impact expected is that of the scrub itself, which can be
scheduled and stopped by the usual means (for more information, see the 'aggr scrub' man pages).
The active process of performing the upgrade is included as part of a RAID scrub. A full manual
scrub will not be initiated automatically following an upgrade of Data ONTAP. The aggr scrub
status command will indicate if RAID scrubs are currently suspended (not actively running at
that moment).
When a scrub has completed successfully, the status will usually change to rlw_on (exceptions
are documented in the Related Link below). The different states are referenced in the diag man
pages.
An explanation of the different rlw states displayed in "aggr status -v" is given in the man
pages.
About scrubbing:
The default scrub starts only once a week and runs for only six hours, with low performance
impact. Under these conditions, a scrub might run for many weeks before completing.
Completion happens per RAID group, so verify that it has completed for all RAID groups
of the rlw-enabled aggregate. The completion message is printed in EMS.
For Example:
Sun May 20 00:33:41 GMT [filer1: config_thread: raid.rg.scrub.done:notice]:
/aggr04/plex0/rg2: scrub completed in 1:33:35.76
Sun May 20 00:33:41 GMT [filer1: config_thread: raid.rg.scrub.done:notice]:
/aggr04/plex0/rg0: scrub completed in 1:33:35.76
Sun May 20 00:33:41 GMT [filer1: config_thread: raid.rg.scrub.done:notice]:
/aggr04/plex0/rg1: scrub completed in 1:33:35.76
Sun May 20 00:33:41 GMT [filer1: config_thread: raid.rg.scrub.done:notice]:
/aggr04/plex0/rg3: scrub completed in 1:33:35.76
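The per-RAID-group completion check described above can be automated by scanning the EMS log. The following is an illustrative Python sketch (the helper name and log handling are assumptions for illustration, not a NetApp tool); it extracts which RAID groups of an aggregate have logged a scrub-completion event:

```python
import re

def scrubbed_groups(ems_lines, aggr="aggr04"):
    """Return the set of RAID groups of the given aggregate that have a
    raid.rg.scrub.done message (log format copied from the example above)."""
    pat = re.compile(r"raid\.rg\.scrub\.done.*?/%s/plex\d+/(rg\d+): scrub completed" % aggr)
    return {m.group(1) for line in ems_lines for m in [pat.search(line)] if m}

log = [
    "Sun May 20 00:33:41 GMT [filer1: config_thread: raid.rg.scrub.done:notice]: "
    "/aggr04/plex0/rg2: scrub completed in 1:33:35.76",
    "Sun May 20 00:33:41 GMT [filer1: config_thread: raid.rg.scrub.done:notice]: "
    "/aggr04/plex0/rg0: scrub completed in 1:33:35.76",
]
print(sorted(scrubbed_groups(log)))  # ['rg0', 'rg2']
```

Comparing the returned set against the aggregate's full RAID group list (from sysconfig -r) shows whether the scrub has finished everywhere.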
Speeding up a scrub:
In order to speed up a scrub, there are several options available.
Wait for the weekly schedule to start, but allow the scrub to run to completion by
changing the following option from its default of "360" (minutes) to "-1" (unlimited):
options raid.scrub.duration -1
-OR
Set up a specific schedule to run a scrub during periods of low user impact:
options raid.scrub.schedule 480m@mon@22,480m@tue@22,480m@wed@22,480m@thu@22,480m@fri@22,480m@sat@22,480m@sun@22
You can disable RAID LW protection at either the aggregate level or globally on the filer. In
all versions of Data ONTAP between 8.1 and 8.2, subsequent attempts to re-enable
this protection on existing aggregates will probably fail.
Also, downgrading Data ONTAP to a version prior to 8.1 will remove this feature for
existing aggregates. For aggregates that existed prior to the downgrade, subsequent
upgrades to the 8.1.x family will initiate a RAID LW upgrade scan that is probably going
to fail.
Aggregates created after the feature was disabled, or after the feature was re-enabled, will
be able to successfully enable RAID LW protection.
Upgrades and downgrades between Data ONTAP versions 8.1 and later (e.g. 8.1 to 8.1.1,
8.1.x to 8.2) will not affect this feature, as long as the feature is not disabled by the user
or by NGS.
This feature disables parity flip (when adding a larger disk to the RAID group) and tetris
optimization for SyncMirror.
Enabling or disabling RAID LW protection (be aware of the previous notes on
disabling rlw before running either of the commands!):
Enable or disable the option globally:
options raid.lost_write_enable on|off
Description
Upgrading 10K FC aggregates to 15K FC aggregates.
Procedure
Warning: It is not a best practice to mix disks of different RPMs in the same aggregate,
because a RAID group can operate only as fast as the slowest disk it contains.
In such a setup, the RAID group does not take advantage of the higher RPM drives.
The procedure to upgrade 10K FC aggregates to 15K FC aggregates can also use the disk
replace command with the -m option in Data ONTAP 7.1 and 7.2. For example:
disk replace start -f -m disk10K spare15K
Answer
What are the different layers of storage?
An aggregate is a collection of physical disk space that serves as a container for one or more
RAID groups. Within each aggregate there are one or more FlexVol volumes. A FlexVol volume is
allocated as a portion of the available space within an aggregate, and it can contain one or
more LUNs for use with the iSCSI, FC, or FCoE protocols.
Thin Provisioning, Data Deduplication, Data Compression, Thin Replication
File: No space reserved at volume creation. Individual files or LUNs guaranteed space
when created.
A space guarantee of file reserves space in the aggregate so that any file in the volume
with space reservation enabled can be completely rewritten, even if its blocks are being
retained on disk by a Snapshot copy.
Note: When the uncommitted space in an aggregate is exhausted, only writes to volumes
or files in that aggregate with space guarantees are guaranteed to succeed.
WAFL reserve: WAFL reserves 10% of the total disk space for aggregate-level metadata
and performance. The space used for maintaining the volumes in the aggregate comes out
of the WAFL reserve, and it cannot be changed.
Aggregate Snap reserve: the amount of space reserved for aggregate snapshots.
Snapshot reserve: the percentage of disk space reserved for snapshots for each of the
volumes in the system. Reserve space can be used only by snapshots, not by the
active file system. The default value is 20%, and it can be changed.
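As a toy illustration of the reserve arithmetic (plain Python, not ONTAP code; the function name is invented):

```python
def usable_space(volume_gb, snap_reserve_pct=20):
    """Toy model: space visible to the active file system
    after the snapshot reserve is set aside."""
    reserve = volume_gb * snap_reserve_pct / 100
    return volume_gb - reserve

print(usable_space(100))      # 100 GB volume, default 20% reserve -> 80.0
print(usable_space(100, 5))   # same volume with the reserve lowered to 5% -> 95.0
```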
Use snap delta to display the rate of change of data between snapshot copies. When used
without any arguments, it displays the rate of change of data between snapshots for all volumes
in the system (or all aggregates, in the case of snap delta -A).
snap delta [ vol_name] [ snapshot-name ] [ snapshot-name ]
If a volume is specified, the rate of change of data is displayed for that particular volume.
snap reclaimable <vol_name> <snapshot-name> ... - Displays the amount of space that would be
reclaimed if the listed snapshots were deleted from the volume.
Fractional reserve sets aside space for overwrites, up to the size of the LUN, once
snapshots are taken.
At this point the 100 GB of space is full, and any additional attempt to write fails.
Check the occupied space in the LUN by running the lun show -v <LUN Name> command.
Set fractional reserve to 0 and the reserved space disappears, allowing you to take a snapshot.
If you overwrite more than 25 GB of data on a snapshot-protected LUN, it goes offline.
If the volume containing the LUN is full, any subsequent attempts to write data to the LUN will
fail. Data ONTAP takes the LUN offline to avoid any inconsistencies. When a volume is
running out of space, you can choose one of the following options:
1. Snapshot Autodelete:
snap autodelete vol_name option value
To define which Snapshot copies to delete, use the following options and their
corresponding values in the snap autodelete command.

Option: commitment
Specifies whether a snapshot copy is linked to data protection utilities
(SnapMirror or NDMPcopy) or data backing mechanisms (volume or LUN clones).
You can set this value for a volume with the snapshot-clone-dependency option
set to on. An error message is returned if you set this option on a volume with
the snapshot-clone-dependency option set to off.

Option: trigger
Defines when to automatically begin deleting snapshot copies.

Option: target_free_space
Defines when to stop deleting snapshot copies. For example, if you specify 30,
then snapshot copies are deleted until 30 percent of the volume becomes free.

Option: delete_order
Specifies whether the oldest or the newest snapshot copies are deleted first.

Option: prefix
Delete snapshot copies with a specific prefix last. You can specify up to
15 characters (for example, sv_snap_week). Use this option only if you
specify prefix for the defer_delete option.

Option: destroy_list
Destroy one of the following types of snapshot copies.
2. Volume autosize
Volume autosize allows a flexible volume to automatically grow or shrink in size within an
aggregate. Autogrow is useful when a volume is about to run out of available space, but
there is space available in the containing aggregate for the volume to grow. Autoshrink is
useful in combination with autogrow. It can return unused blocks to the aggregate when
the amount of data in a volume drops below a user-configurable shrink threshold.
Autoshrink can be enabled via the grow_shrink subcommand; autoshrink without
autogrow is not supported.
vol autosize <volname> [-m <size>[k|m|g|t]] [-i <size>[k|m|g|t]] [-minimumsize <size>[k|m|g|t]] [-grow-threshold-percent <used space %>] [-shrink-threshold-percent <used space %>]
The autogrow feature works together with snap autodelete to automatically reclaim space
when a volume is about to become full. The volume option try_first controls the order
in which these two reclaim policies are used.
vol options <vol-name> try_first [volume_grow | snap_delete]
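The interaction between autosize and snap autodelete described above can be sketched as a toy decision function (illustrative Python only; the names and logic are simplified assumptions, not Data ONTAP internals):

```python
def reclaim_space(used_pct, grow_threshold_pct, aggr_has_space, try_first="volume_grow"):
    """Toy model of how try_first orders the two reclaim policies when a
    volume approaches its grow threshold."""
    if used_pct < grow_threshold_pct:
        return "no action"
    # try_first=volume_grow attempts autogrow before deleting snapshots;
    # try_first=snap_delete reverses the order.
    policies = (["autogrow", "snap_autodelete"]
                if try_first == "volume_grow"
                else ["snap_autodelete", "autogrow"])
    for policy in policies:
        if policy == "autogrow" and aggr_has_space:
            return "autogrow"
        if policy == "snap_autodelete":
            return "snap_autodelete"
    return "no action"

print(reclaim_space(97, 95, aggr_has_space=True))   # autogrow wins first
print(reclaim_space(97, 95, aggr_has_space=False))  # falls back to snapshot autodelete
```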
Description
Procedure
At NetApp, a thick to thin conversion means to remove the reservations and guarantees.
The NetApp storage model starts with the RAID group. RAID groups are combined into pools of
storage called Aggregates. Aggregates are partitioned into logical containers called FlexVols.
Either LUNs or files can reside in FlexVols. A thin LUN is given the illusion of being thick by
reserving space within the Flexvol for the full size of the LUN. To convert a thick LUN to thin,
just set the LUN SpaceReserved property of the LUN to 'off'. This can be performed on the fly
and involves no actual movement of data; the LUN is already thin inside.
For more information, see From Thick to Thin and Back Again - A Whirlwind Tour of Thin
Provisioning on NetApp
There is no impact in performing this conversion since you are just removing the reservations
and guarantees from the volume or LUN.
Usually you can set autogrow for the volumes or can enable alerting, and expand the volume
manually when it reaches the threshold.
on 2010-07-07 09:59 AM
PowerShell Toolkit
Erick Moore recently did a nice little script to determine how much space your current thick
LUNs and volumes are wasting http://communities.netapp.com/docs/DOC-6383 . This set me
thinking about Thin Provisioning and how it applies to NetApp.
Since at least 2004, all LUNs on NetApp have been thin at heart. Back in those days, the market wasn't
as accepting of thin provisioning. You may recall that as late as 2008, much of the industry
thinking revolved around the fallacy of "Real Fibre Channel." In order to give thin LUNs the
appearance of being thick, NetApp used a system of reservations and guarantees to ensure that
the space was really there.
What a difference the last few years have made. Today, the market gets thin. Many vendors
have various means and methods to migrate or convert thick LUNs to thin ones. Many of these
methods are time consuming and involve the physical movement of data between storage pools.
On NetApp, which has always been thin inside, this is not the case. On NetApp, a thick to thin
conversion simply means that we remove the reservations and guarantees.
The NetApp storage model starts with the RAID group. RAID groups are combined into pools
of storage called Aggregates. Aggregates are partitioned into logical containers called FlexVols.
Either LUNs or files can reside in FlexVols. A thin LUN is given the illusion of being thick by
reserving space within the FlexVol for the full size of the LUN. To convert a thick LUN to thin,
you simply set the LUN's SpaceReserved property to off. This can be done on the
fly and involves no actual movement of data; the LUN is already thin inside.
At the FlexVol level, we have a combination of the volume reserve and the volume guarantee. The
guarantee reserves the space of the volume from the containing aggregate, or pool. The guarantee
can be set to volume, file, or none. Volume reserves the full declared space of the volume from the
containing aggregate. File reserves the total declared space of the files in the volume from the
containing aggregate. None reserves no space from the aggregate. When the guarantee is set to
volume, you can also declare a percentage of space to reserve, called the fractional reserve. If you
set that to 0, then no space is reserved.
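A toy accounting sketch of the thick/thin difference (plain Python; the names are invented for illustration, not ONTAP internals):

```python
def flexvol_available(flexvol_size, lun_size, lun_used, space_reserved):
    """Toy model: a space-reserved ("thick") LUN charges its full declared
    size against the FlexVol, while a thin LUN charges only the blocks
    actually written."""
    charged = lun_size if space_reserved else lun_used
    return flexvol_size - charged

# 500 GB FlexVol holding a 200 GB LUN with 50 GB actually written:
print(flexvol_available(500, 200, 50, space_reserved=True))   # thick -> 300
print(flexvol_available(500, 200, 50, space_reserved=False))  # thin  -> 450
```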
Thin or thick LUNs can exist in thin or thick volumes. There are no restrictions in this regard.
One oddity, a holdover from the days of emulated thick LUNs, is that in versions of Data
ONTAP prior to 7.3.3, in order to set a volume guarantee to none, you must first set the reserve
to 100%. This caveat was removed in ONTAP 7.3.3. This means that in versions prior to 7.3.3, you
need to make sure you temporarily have room for the full 100% reserve in your volume, if you are
currently using a reserve of less than 100, before you can set the guarantee to none. In ONTAP
7.3.3 there is no such need; if your LUNs (thick or thin) fit in your thick volume today, then you
can convert without potential for space issues.
All of this converting of thick to thin LUNs and volumes, in any version of ONTAP, can be done
quickly and easily via the Data ONTAP PowerShell Toolkit. I have included two examples: one
for ONTAP versions prior to 7.3.3, and one for ONTAP 7.3.3. Both versions convert all of the
LUNs, and the volumes that contain them, on aggr1 from thick to thin:
ONTAP Prior to 7.3.3
Connect-NaController SIM1
foreach ($vol in (Get-NaVol | ? { $_.ContainingAggregate -eq "aggr1" })) {
    Get-NaLun | ? { $_.Path -Like ("*" + $vol.Name + "*") } | Set-NaLunSpaceReserved off
    Set-NaVolOption $vol.Name guarantee volume
    Set-NaVolOption $vol.Name fractional_reserve 100
    Set-NaVolOption $vol.Name guarantee none
}
ONTAP 7.3.3
Connect-NaController SIM1
foreach ($vol in (Get-NaVol | ? { $_.ContainingAggregate -eq "aggr1" })) {
    Get-NaLun | ? { $_.Path -Like ("*" + $vol.Name + "*") } | Set-NaLunSpaceReserved off
    Set-NaVolOption $vol.Name guarantee none
}
Should you decide that thin provisioning is not for you after you've made the conversion, no
worries. Converting back from thin to thick is as simple as setting the reservations and
guarantees (provided you have the space). If you do decide to move forward with thin provisioning
then, as Erick states in his post, it's important to monitor: you want to have enough time to
take action before running out of space. Expect more to follow on that one.
Answer
The following questions have been asked by Technical Support Engineers and Escalation
Engineers while doing hands on lab for flex volumes:
1. When you create a flexible volume, where is the volume created?
The volume is created in the aggregate you specified. For example:
vol create flex_vol_1 aggr_1 30m
13. Does changing raidtype of an aggregate from RAID-DP to RAID4 free up the 2nd
parity disk?
Yes. This is the same as for a traditional volume.
14. Are there any snap commands for the aggregate?
Yes. The volume snap commands are applicable for an aggregate. They are accessed via
the "-A" option on the snap commands.
15. For SnapMirror transfers, does the RAID type matter?
No. Flexible volume snapmirror transfers are above the level of RAID and thus are
completely independent of the RAID configuration and geometry.
16. What is snap reclaimable?
Displays the amount of space that can be reclaimed when the list of snapshots passed in
(as input argument) is deleted.
17. Why is guarantee-volume(disabled) set on a SnapMirror destination?
Prior to Data ONTAP 8, guarantees were disabled by design. With guarantees disabled, it
was possible for the destination aggregate to run out of space, thereby causing the
SnapMirror transfers to fail. Data ONTAP 8 and later no longer behave this way.
18. Why can you QSM from a qtree on a traditional volume to a qtree on a flex volume,
but can't SnapMirror from a trad volume to a flexible volume?
QSM operates at the logical level (think files); hence, the type of volume doesn't matter.
SnapMirror operates at block level (think disk blocks); the buftree internals of a trad/flex
volume are different; hence, you can't move blocks between different volume types.
19. What does guarantee=none mean on a flexible volume?
This means that space for the flexible volume isn't guaranteed by the aggregate. Writes to
the volume could fail if the aggregate containing the volume becomes full, even before
the volume has used any space on the aggregate.
Answer
Data ONTAP 7.0.1 is the first release to offer non-disruptive disk firmware upgrades for filers
having the requisite drive configuration. The non-disruptive disk firmware upgrades take place
automatically in the background when the disks are members of aggregates of the following
types:
RAID-DP
Manually updating on a per-disk basis. For example, if you want to update firmware on
disk IDs 0, 1, and 3 on adapter 8, enter the following command:
disk_fw_update 8.0 8.1 8.3
Manually updating all disks at once. Use the disk_fw_update command without
arguments:
disk_fw_update
Automatic updates during reboot: move files to /etc/disk_fw and reboot the filer. This
process will happen with any Data ONTAP update that includes newer firmware.
Automatic updates on disk insertion: happens automatically every time a disk with earlier
firmware is inserted. Since inserted disks are spare disks first, there is no risk to data
availability.
Symptoms
A volume with logical unit numbers (LUNs) inside it and fractional reserve at 100% should only
show usable space as reserved if a snapshot exists on the volume, locking the reserved blocks.
However, in some cases, fractional reserve can be seen consuming the available space of a volume
even when no snapshots exist.
For example:
7251SIM*> vol create sis space 100m
Creation of volume sis with size 100m on containing aggregate
space has completed.
7251SIM*> snap sched sis 0 0 0
7251SIM*> snap reserve sis 0
7251SIM*> vol status -v sis
Volume State Status Options
sis online raid_dp, flex nosnap=off, nosnapdir=off,
sis minra=off, no_atime_update=off,
nvfail=off,
ignore_inconsistent=off,
snapmirrored=off,
create_ucode=on,
convert_ucode=on,
maxdirsize=1310,
schedsnapname=ordinal,
fs_size_fixed=off,
guarantee=volume, svo_enable=off,
svo_checksum=off,
svo_allow_rman=off,
svo_reject_errors=off,
no_i2p=off,
fractional_reserve=100,
extent=off,
try_first=volume_grow,
sis_logging=on, read_realloc=off,
Containing aggregate: space
Plex /space/plex0: online, normal, active
RAID group /space/plex0/rg0: normal
7251SIM*> df -h sis
Filesystem total used avail capacity Mounted on
/vol/sis/ 100MB 80KB 99MB 0% /vol/sis/
/vol/sis/.snapshot 0GB 0GB 0GB ---% /vol/sis/.snapshot
This now shows that some reserve is being used, as a partition is being written to the LUN.
Notice now that 2 MB has been used for the NTFS partition, which is committed to reserve:
volume:sis:wv_fsinfo_blks_blks_rsrv_overwrite:564 (564 blocks * 4 KB /
1024 = 2.2 MB)
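The conversion can be checked with a line of arithmetic (assuming 4 KiB WAFL blocks, as the counter above does):

```python
# Quick arithmetic check of the reserved-overwrite counter shown above.
BLOCK_KIB = 4                     # WAFL block size in KiB (assumption)
blocks = 564                      # wv_fsinfo_blks_blks_rsrv_overwrite
mib = blocks * BLOCK_KIB / 1024   # KiB -> MiB
print(round(mib, 1))              # -> 2.2
```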
Hence it appears that fractional reserve is being used even though there are no snapshots on the
volume:
7251SIM*> snap list -n sis
Volume sis
working...
No snapshots exist.
Cause
When SIS is enabled on the volume, fractional reserve behaves as if a snapshot is always present.
Therefore, fractional reserve will be honored and the volume will appear to have less space
available. This can be problematic, as LUNs can potentially go offline if the volume fills up and
no overwrite space is available. LUNs going offline in this manner can be dangerous, as they can
become corrupted.
Solution
When using LUNs inside of a volume with SIS enabled, you should reference paragraph 'LUN'
in Chapter 'DEDUPLICATION WITH OTHER NETAPP FEATURES' of TR-3505.
This document covers several setup possibilities depending on the desire to save space or protect
against over-committing.
Answer
General Questions
What is the Maintenance Center?
The purpose of the Maintenance Center is to improve storage reliability by reducing the number
of unnecessary disk returns to NetApp due to transient errors.
The Maintenance Center provides a new disk diagnostics capability built into Data ONTAP. The
Maintenance Center automatically manages disk failures through a systematic failure-verification
process while the failing disk is still in the customer's system. A disk is identified by
the current health management system as being a potential failure. Instead of the disk being
failed and an AutoSupport Return Merchandise Authorization (RMA) case being generated, the
disk is removed from the current aggregate and sent to the Maintenance Center. User data is
migrated from the disk onto a spare, through reconstruction or Rapid RAID recovery, depending
on the type of errors being received. The process occurs without user intervention, and only a few
messages are sent to the console reporting the action.
Once in the Maintenance Center, the disk is tested in the background, without disrupting the
other operations of the system. If the transient errors can be repaired, the disk will be returned to
the spares pool. If not, the disk is failed. In many cases, the testing provided can correct errors
that would have previously caused a drive to be failed, or would have caused system
interruption, for example, a WAFL hang panic.
What are the key customer benefits of the Maintenance Center?
The Maintenance Center improves the customer experience with NetApp disk drives by
significantly reducing the number of unnecessary disk returns. Customers will have lower
lifetime management costs stemming from fewer component failures and increased system
reliability.
How does a drive get selected to go into the Maintenance Center?
Data ONTAP has a defined set of errors and thresholds, which are used to select disks for
maintenance. This set of thresholds and errors may vary between releases, as they are modified
based on new information. Disks that receive errors that are known to be fatal will not go
into maintenance testing and will be failed.
Currently the list includes:
Health triggers which are based on recommendations from disk drive manufacturers to
warn of potential problems
The errors and error thresholds will evolve with new disk technologies and information
gathered from the current release.
How does the customer know when a disk enters the Maintenance Center?
When a disk enters the Maintenance Center, an Event Management System (EMS) event is
posted. There is another EMS event when a disk completes testing successfully, fails testing, or
when testing is aborted. All Maintenance Center EMS events have a syslog message. The CLI
commands vol status -r and sysconfig -r show disks that are in the Maintenance Center. The
disk maint status command can be used to list drives that are being maintenance tested and to
display test progress.
Can I turn off the Maintenance Center feature and what is the impact?
Yes, the following command can be executed:
options disk.maint_center.enable off
Please see the Disk performance and health section of the Storage Management Guide for more
details. The Maintenance Center improves overall disk reliability. When the Maintenance Center
is turned off, a problematic disk will be automatically failed instead of being tested.
Will the Maintenance Center affect the performance of my NetApp appliance?
The Maintenance Center has a very minimal performance impact on the NetApp appliance. Many
of the Maintenance Center diagnostic tests are executed directly by the drive instead of
requiring CPU resources from the NetApp appliance.
How many NetApp devices can be in the Maintenance Center at a time?
The Maintenance Center supports concurrent diagnostics of up to 84 disks. You can limit the
number of disks running Maintenance Center tests with the following command:
options disk.maint_center.max_disks max_disks
If a disk fails its tests, the disk is failed and an ASUP is sent for a replacement disk. The
current rule is only one visit to the Maintenance Center for each disk.
What type of data does the Maintenance Center collect?
The Maintenance Center does not collect any customer data. The Maintenance Center collects
only NetApp disk-specific information such as:
Test output and whether specific errors were detected, such as medium errors
What is the relationship between AutoSupport (ASUP) and the Maintenance Center?
AutoSupport is a notification tool that is built into Data ONTAP which enables you to set up
specific notifications to both yourself and the NetApp Global Support Center. The Maintenance
Center uses AutoSupport to transport its findings back to NetApp as a part of the weekly data
log.
Where can I get more information about the Maintenance Center?
Please see the Data ONTAP 7.1 release notes and the Storage Management Guide for more
information about the Maintenance Center.
What is a maintenance disks pool?
The maintenance disks pool refers to disks being tested by the Maintenance Center. sysconfig -r
output may show a maintenance disks section listing the disks being tested.
How long will it take before the Maintenance Center makes a decision to either return the disk
to service or fail it out and generate a support case for disk replacement?
The Maintenance Center will fail the drive on the first test that fails; if it fails the first test, the
drive will be failed out and an ASUP generated. If all the tests run successfully, the drive is
returned to the spare pool at the end of the cycle. The total time depends on the size and type
of the disk; however, it is approximately equal to 2.5 times the zeroing time for the disk.
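As a rule-of-thumb sketch of that estimate (illustrative Python; the function is not a NetApp tool):

```python
def maintenance_time_estimate(zeroing_hours):
    """Rule of thumb from the FAQ above: a full Maintenance Center test
    cycle takes roughly 2.5x the disk's zeroing time."""
    return 2.5 * zeroing_hours

print(maintenance_time_estimate(4))  # a disk that zeroes in 4 h -> ~10 h
```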
Overview of wafliron
Answer
The following aspects of wafliron are addressed in this article:
What is wafliron?
What happens if the storage system is power cycled or rebooted while wafliron is
running?
What is wafliron?
Wafliron is a Data ONTAP(R) tool that will check a WAFL(R) file system for inconsistencies and
correct any inconsistencies found. It should be run under the direction of
NetApp Technical Support. Wafliron can be run on a traditional volume or an aggregate. When
run against an aggregate, it will check the aggregate and all associated FlexVol(R) volumes. It
cannot be run on an individual FlexVol volume within an aggregate.
Wafliron can be run with the storage system online, provided that the root volume does not need
to be checked. When run, wafliron performs the following actions on the traditional volume or in
the aggregate and associated FlexVol volumes:
Scans inodes
It is not unusual for a 1 TB aggregate to take three or more hours to mount. Specific times vary,
but for large aggregates/volumes, NetApp recommends planning a downtime window.
Once the aggregate and associated FlexVol volumes are mounted, data will be served while the
wafliron continues to check the data. For an aggregate, all FlexVol volumes must be mounted
before data can be served from any of the FlexVol volumes in the aggregate. If the aggregate
contains a FlexVol volume with LUNs, then all LUNs within that FlexVol volume must complete
their Phase 1 checks before any LUN in that FlexVol volume can be brought online.
Note: Prior to Data ONTAP 7.3, all volumes within an aggregate needed to complete Phase 1
before any volume was accessible. This behavior changed in Data ONTAP 7.3. Please see section
Can you prioritize which volumes wafliron checks first? for more information on prioritizing
volumes.
The vol status command can be used to monitor whether the volumes have been remounted. If
the volumes are still in the mounting phase, vol status will show:
storage1> vol status
Volume State
Status
Options
vol0 online
raid4, trad
root
vol status: Volume 'tst' is temporarily busy (vol remount).
vol status: Volume 'vol1' is temporarily busy (vol remount).
Wafliron allows the storage system to serve the data in the aggregate once it completes its
baseline checks. If it is started from the Special Boot Menu, the storage system will
automatically boot and start serving data once the baseline checks are complete.
WAFL_check, however, must be run from the Special Boot Menu and the storage system will not
be serving data until the WAFL_check completes and the administrator chooses to commit
changes.
WARNING: NetApp Technical Support should always be consulted before running either
wafliron or WAFL_check.
What are the phases of wafliron?
Wafliron has three phases for checking aggregates and volumes.
Note: Wafliron is a diagnostic tool, and its usage and output is subject to change.
Phase 1
Verifies file system access by checking the necessary metadata. This includes checks of
the aggregate metadata associated with each FlexVol volume contained in that aggregate,
metadata tracking free space, and Snapshot copy sanity.
Phase 1 will check the aggregate first and then each FlexVol volume on that aggregate.
After all FlexVol volumes within the aggregate are checked, the aggregate and FlexVol
volumes will be mounted.
The only status provided during this phase is a message to console logging the start of
wafliron. The progress cannot be monitored during this phase.
WARNING: LUNs will not be available until Phase 1 completes. LUNs may not be
automatically set to an online state. See section Why are LUNs still offline after wafliron
phase 1 completes? for more information.
WARNING: Snapshot copies are read-only and therefore cannot be modified by wafliron.
If a Snapshot copy contains an inconsistency, the Snapshot copy will need to be deleted
in order to remove the inconsistency from the file system. Always contact NetApp
Support before deleting a Snapshot copy that is suspected to contain an inconsistency.
Phase 2
Verifies the metadata for user data. If a user requests data that has not yet been checked,
wafliron will check and repair it (if necessary) on-demand. Due to this on-demand
checking, users may see increased latency during this phase.
In Data ONTAP 7.2.3 and later, aggr wafliron status -s will provide progress for
the wafliron.
Phase 3
Performs clean-up tasks such as finding lost blocks/files and verifying used blocks.
In Data ONTAP 7.2.3 and later, aggr wafliron status -s will provide progress for
the wafliron.
The above conditions can be checked using the aggr status -r or vol status -r commands.
Example 1: online aggregate that is mounted
Wafliron can be run on this aggregate.
storage1> aggr status -r aggr0
Aggregate aggr0 (online, raid_dp) (block checksums)
Note: After entering the command above, the storage system console may become unresponsive
for a period of time. The storage system should be monitored for at least thirty minutes
following the start of the wafliron. If the console is still unresponsive after this time,
NetApp Technical Support should be contacted.
Can wafliron be run on a root aggregate/volume?
Wafliron can be run on a root aggregate/volume. However, it cannot be done with the storage
system booted. This limitation is due to several factors such as:
If the WAFL file system for a root aggregate/volume on a storage system is inconsistent,
the storage system will be unable to boot.
If the root aggregate/volume is not inconsistent and wafliron is started, wafliron would
need to unmount the root aggregate/volume to perform its baseline checks. Since the root
aggregate/volume must be online and available for the storage system to be operational,
wafliron would be unable to do this.
Because of these factors, wafliron can only be started on a root aggregate/volume from the
Special Boot Menu.
WARNING: If wafliron needs to be run on an aggregate containing the FlexVol root volume or
on a traditional root volume, downtime must be scheduled for the storage system. However, this
downtime can be minimized by running wafliron from the Special Boot Menu. When wafliron is
run from the Special Boot Menu, it will perform some preliminary checks and corrections and
then automatically boot the storage system. Once the storage system is booted, data will be
available in the affected volumes while the wafliron continues to complete its checks and make
any necessary changes.
To run wafliron on a root aggregate/traditional volume, the storage system must first be booted to
the Special Boot Menu using the following steps:
1. Reboot or boot the storage system.
2. During the boot process, when prompted to "Press CTRL-C for Special Boot Menu"
press CTRL-C. A five-item menu appears.
3. At the "(1-5)" prompt, enter the hidden command wafliron.
WARNING: Prior to Data ONTAP 7.3, the above steps will initiate a wafliron on all aggregates
and FlexVol volumes. This will cause the storage system to initiate the first phase of the
wafliron and then boot Data ONTAP. Note that the filer will boot significantly more slowly when
performing this task. Once Data ONTAP boots, wafliron will be running on all volumes.
For Data ONTAP 7.3 and later, if wafliron is started from the Special Boot Menu, it will only
check the root aggregate. All other aggregates can only be checked using wafliron from within
Data ONTAP.
Can wafliron be run on a deduplicated (SIS) volume?
WARNING: NetApp Technical Support must be contacted prior to running wafliron on an
aggregate containing deduplicated FlexVol volumes.
Before attempting wafliron, the storage system must be net-booted to Data ONTAP version
7.2.4P5D6 as this version includes critical fixes for wafliron when run against deduplicated
volumes.
Can wafliron be run on a volume used by SnapMirror or SnapVault?
Wafliron can be run on a volume used by SnapMirror(R) or SnapVault(R). However, some
limitations apply depending on the SnapMirror/SnapVault configuration.
If the volume is the source for a Volume SnapMirror or contains source qtrees for a Qtree
SnapMirror or SnapVault:
o Since the source of a SnapMirror or SnapVault is read/write, wafliron can be run
using the same command as used on a regular aggregate:
resume for the FlexVol volumes that were ironed. Progress of the scan can be
monitored with 'wafl scan status'.
After wafliron is run on a destination volume for Volume SnapMirror, a "block type
initialization" scan must be performed on the traditional/FlexVol volume that was
checked and modified by wafliron. Until this scanner completes, volume SnapMirror
relationships cannot be re-synchronized, updated, or initialized. This behavior is being
tracked as BUG 142586, which is first fixed in Data ONTAP 7.0.6, 7.1.2, and 7.2.2. The
"block type initialization" scan may take several days to complete depending on the size
of the FlexVol volume and the load on the storage appliance. To check the status of the
command, use the wafl scan status command in advanced privilege mode:
storage1> priv set advanced
storage1*> wafl scan status
Volume sm_dest:
Scan id   Type of scan                progress
1         block type initialization   30454809
If multiple FlexVol volumes are specified, they are checked in order. If a FlexVol volume on the
aggregate is not listed, then it will be checked after all FlexVol volumes specified in the
command are checked.
WARNING: Several exceptions apply to FlexVol volume prioritization:
If a FlexClone(R) volume is specified in the list, its parent FlexVol volume will also be
prioritized.
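The prioritization rules above can be modeled in a few lines. This is an illustrative sketch, not ONTAP code; the function name is invented, and the assumption that a FlexClone's parent is visited immediately before the clone is mine:

```python
def wafliron_check_order(all_volumes, specified, flexclone_parent=None):
    """Model the order in which wafliron visits FlexVol volumes on an
    aggregate: volumes named on the command line first, in the order
    given, then every remaining volume on the aggregate.

    flexclone_parent maps a FlexClone volume name to its parent; if a
    clone is specified, its parent is prioritized alongside it
    (parent-before-clone here is an assumption of this sketch).
    """
    flexclone_parent = flexclone_parent or {}
    order = []
    for vol in specified:
        parent = flexclone_parent.get(vol)
        if parent and parent not in order:
            order.append(parent)   # parent is checked with its clone
        if vol not in order:
            order.append(vol)
    for vol in all_volumes:
        if vol not in order:
            order.append(vol)      # unspecified volumes follow
    return order
```

For example, specifying a clone and one regular volume puts the clone's parent and the specified volumes ahead of everything else on the aggregate.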
44 files done
14 directories done
199685 inodes done
11282 blocks done
wafliron is active on aggregate: aggr1
Scanning (16% done).
Example:
storage1*> aggr wafliron status
wafliron is active on volume: vol1
Scanning (2% done).
wafliron is active on aggregate: aggr0
Scanning (0% done).
By default, wafliron information is logged to the storage system's console as well as the
/etc/messages file. The messages logged include wafliron start time, changes made, a summary
of the changes, and the completion time for the aggregate and all FlexVol volumes.
To check the progress of a wafliron on a FlexVol volume residing on the aggregate being ironed:
storage1> priv set advanced
storage1*> wafl scan status volname
Example:
storage1*> wafl scan status vol1
Volume vol1:
Scan id   Type of scan      progress
158       wafliron demand   156003 (156597/156595) of 3640875
Once the wafliron is complete, the storage system should be returned to normal administrative
mode using the following command:
storage1*> priv set admin
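The progress column of the scan output can be turned into a rough percentage. A minimal sketch, assuming the column reads "current ... of total" (the parenthesized counters are ignored); the function name is hypothetical:

```python
import re

def scan_progress(progress_field):
    """Parse the progress column of `wafl scan status` output, e.g.
    '156003 (156597/156595) of 3640875', and return percent complete.

    This sketch assumes the field means 'current of total'; the exact
    meaning of the parenthesized counters is not documented here.
    """
    m = re.match(r"(\d+)\s*(?:\([^)]*\))?\s*of\s*(\d+)", progress_field)
    if not m:
        raise ValueError("unrecognized progress format: %r" % progress_field)
    current, total = int(m.group(1)), int(m.group(2))
    return 100.0 * current / total
```

On the example above, the scan would be roughly 4% complete.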
If wafliron is in Phase 1 (mounting) and is interrupted, the wafliron will start from
scratch.
Available memory
One reason for the performance penalty is that when clients access data, wafliron must first
check the data before fulfilling clients' requests. This behavior ensures the clients receive
consistent data and prevents the storage system from panicking should clients touch inconsistent
data. If the storage system's load is heavy due to client requests, it is recommended that the
administrator plan for a high performance penalty, although the actual impact may be less.
How can wafliron be run on a pre-Data ONTAP 7G storage system?
Prior to Data ONTAP 7G, only traditional volumes were available. As such, the vol wafliron
start command must be used to initiate wafliron.
Why are LUNs still offline after wafliron phase 1 completes?
When an aggregate is marked inconsistent, the FlexVol volumes and LUNs will go offline until
the file system is checked. If the NVFAIL option is enabled, the LUNs will not be brought online
automatically when the FlexVol volume is brought online after wafliron Phase 1 checks. This is
expected behavior. Once the volume is online, the storage administrator will need to manually
online the LUNs individually. NetApp highly recommends monitoring system performance using
sysstat while bringing the LUNs online.
Note: During the LUN online work, the sysstat may show the filer CPU pegged at 100%. This is
not necessarily an indication of a problem.
Description
The article describes the procedure to migrate from traditional volumes to flexible volumes.
Procedure
Administrators wishing to take advantage of all the new features available with ONTAP 7.0
flexible volumes might find that they need to migrate their data from a traditional volume to a
flexible volume. There is currently no in-place upgrade option to perform this transformation.
Ndmpcopy:
When there is no need to preserve the snapshots on the traditional volumes when migrating to
flexible volumes, ndmpcopy is the best option. To migrate all the data from a 100 Gig traditional
volume to a new flexible volume, perform the following steps:
1. List the snapshots on the traditional volume:
filer> snap list traditional
  %/total    date           name
---------- ------------    --------
 0% ( 0%)  Mar 04 00:15    snap.1
 0% ( 0%)  Mar 03 14:00    snap.2
 0% ( 0%)  Mar 02 14:00    snap.3
 0% ( 0%)  Mar 01 14:00    snap.4
2. Create the flexible volume. Pick a size for the flexible volume that is suitable to hold all
the qtrees that are to be transferred. This size can be adjusted if it is too big or small with
flexible volumes using:
'vol size +amount' or 'vol size -amount'
filer> vol create flex aggregate 50g
3. Initialize the flexible volume with the qtrees to be migrated using the oldest snapshot
found on the traditional volume:
filer> snapmirror initialize -S filer:/vol/traditional/qtree -s snap.4
filer:/vol/flexible/qtree
4. For each additional qtree (if any) to be transferred, initialize the qtree:
filer> snapmirror initialize -S filer:/vol/traditional/qtree2 -s snap.4
filer:/vol/flexible/qtree2
5. When the snapmirror operation is completed, there will be one qtree created on the
flexible volume for each qtree transferred. To preserve this snapshot:
filer> snapmirror status
6. Bring over the incremental changes found in each snapshot on the traditional volume.
Start with the oldest snapshot that has not been transferred and update the flexible volume
with that data, preserving the snapshot at the completion of the transfer:
filer> snapmirror update -S filer:/vol/traditional/qtree -s snap.3
filer:/vol/flexible/qtree
7. For each additional qtree that is being transferred, update the destination:
filer> snapmirror update -S filer:/vol/traditional/qtree2 -s snap.3
filer:/vol/flexible/qtree2
filer> snapmirror status
This step is repeated for all the snapshots in the traditional volume. Note that our flexible
volume snapshots will contain only the data related to the qtrees being transferred, and
not other data that might be present on the traditional volume and preserved in snapshots
(such as other qtrees not being transferred).
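The snapshot-replay loop in steps 3 through 7 can be summarized as: baseline from the oldest snapshot, update through each newer snapshot in turn, then one final unnamed update followed by a break. A sketch that generates the command sequence; the volume, qtree, and snapshot names are placeholders:

```python
def migration_commands(src_vol, dst_vol, qtrees, snapshots_newest_first):
    """Generate the snapmirror command sequence for migrating qtrees
    from a traditional volume to a flexible volume, replaying snapshots
    from oldest to newest and finishing with an unnamed (current-data)
    update. snapshots_newest_first mirrors `snap list` output order
    (newest at the top). Paths and names here are illustrative only."""
    cmds = []
    oldest_first = list(reversed(snapshots_newest_first))
    base = oldest_first[0]
    for q in qtrees:                      # baseline from the oldest snapshot
        cmds.append("snapmirror initialize -S filer:/vol/%s/%s -s %s "
                    "filer:/vol/%s/%s" % (src_vol, q, base, dst_vol, q))
    for snap in oldest_first[1:]:         # replay each newer snapshot in turn
        for q in qtrees:
            cmds.append("snapmirror update -S filer:/vol/%s/%s -s %s "
                        "filer:/vol/%s/%s" % (src_vol, q, snap, dst_vol, q))
    for q in qtrees:                      # final update: data in no snapshot
        cmds.append("snapmirror update -S filer:/vol/%s/%s "
                    "filer:/vol/%s/%s" % (src_vol, q, dst_vol, q))
    for q in qtrees:                      # cut over
        cmds.append("snapmirror break filer:/vol/%s/%s" % (dst_vol, q))
    return cmds
```

The same shape applies whether one qtree or several are being migrated; only the inner loops widen.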
8. The next step involves one last transfer of the data. This will cover all the data on the
traditional volume related to the qtrees being migrated that are not preserved in a
snapshot:
filer> snapmirror update -S filer:/vol/traditional/qtree
filer:/vol/flexible/qtree
11. For each additional qtree that was transferred, break the SnapMirror relationship:
filer> snapmirror break filer:/vol/flexible/qtree2
1. Initialize the flexible volume with the oldest snapshot found on the traditional volume:
filer> snapmirror initialize -S filer:/vol/traditional/- -s snap.1
filer:/vol/flexible/root
filer> snapmirror status
2. Bring over the data in each snapshot and preserve the snapshot when it is complete:
filer> snapmirror update -S filer:/vol/traditional/- -s snap.2
filer:/vol/flexible/root
filer> snapmirror status
3. Repeat this step for each snapshot in the traditional volume. Finally, do one last transfer
to migrate data not in any snapshot and break the SnapMirror relationship:
filer> snapmirror update -S filer:/vol/traditional/-
filer:/vol/flexible/root
filer> snapmirror status
(wait for update to complete)
filer> snapmirror break filer:/vol/flexible/root
Configuration changes:
After the migration is complete, make sure the configuration files such as /etc/exportfs, CIFS
shares, vfiler assignments and SnapMirror relationships are updated to reflect the new flexible
volume names.
Aggregate shows nearly full but volume doesn't
2015-06-24 12:25 PM
Hey all. I'm fairly new to NetApp and have been doing a lot of reading, but haven't found anything to
directly explain this and was hoping you guys could help. I'm not even sure this is precisely the
right forum, but it seemed a valid place to try.
My NetApp has 2 disk aggregates of 26 disks each. aggr0 has 7.91TB, aggr1 has 4.88TB.
However, vol1 shows 1.87TB remaining and the other volume shows 9GB remaining.
I'm really confused how the aggregate can have only 86GB free, but the volume has 1.8TB free.
Is this pre-allocated space and thus not really an issue as it appears to be?
BOBSHOUSEOFCARDS
Aggregates are physical. The total space available within an aggregate is based on the physical
space on the disks that make up the aggregates.
Volumes are logical. Data of course takes up physical space when written to a volume, but until
it is, the "size" of a volume is just a logical number.
Now - how that logical space interacts with physical space, in terms of capacity available,
depends on the volume's "space guarantee". The default guarantee is "volume", which you might
also hear described as "thick" provisioning. A space guarantee of "volume" means that
the defined capacity of the logical volume is available in the physical aggregate up front. Hence,
when you create a 1TB volume in a 4TB aggregate, the total capacity of the volume is
immediately subtracted from the available capacity of the aggregate to "reserve" the space. Note
that nothing has yet been written to the volume. The volume will show 1TB available, and the
aggregate will show 3TB available. In actuality, 4TB is still physically available in total, but
1TB of it is reserved in practice for the 1TB volume. With a space guarantee of volume you
need to have the space available as indicated by aggregate capacity to define a new volume on
that aggregate.
The alternative is "thin" provisioning which is a space guarantee set to "none". With this
definition, the aggregate available capacity is reduced only when actual data is physically used
by one of the volumes. With this type of space guarantee you can define volumes that would
actually consume more space than exists in the aggregate if they were all filled - this is "over"
provisioning. Over provisioning isn't a bad thing and in specific circumstances can be a very
useful thing but obviously requires a defined management strategy for dealing with space
consumption as aggregates start to fill.
So to your specific case. Aggr1 is 4.88TB. The two volumes defined on the aggregate total
about 4.77TB. If set to the default space guarantee, this capacity is immediately removed from
the aggregate's "available" capacity when reported. So the available space in the aggregate you
indicate is right, even though the volumes report a bunch of available space. The space
guarantee just changes the lens through which you view capacities.
Bob
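The arithmetic behind the two guarantees can be sketched as follows. This is a simplified model that ignores WAFL and Snapshot reserves; the function name and the example numbers (taken loosely from the case above) are illustrative:

```python
def aggregate_available(aggr_size, volumes):
    """Compute an aggregate's reported available space under the two
    space-guarantee policies described above. Each volume is a tuple of
    (defined_size, used, guarantee), all sizes in the same unit (e.g. TB).

    Simplified model: real aggregates also carry reserves (WAFL,
    Snapshot) that this sketch ignores.
    """
    available = aggr_size
    for size, used, guarantee in volumes:
        if guarantee == "volume":      # thick: full size reserved up front
            available -= size
        else:                          # "none": only written data counts
            available -= used
    return available

# A 4.88 TB aggregate with two thick volumes totaling ~4.77 TB reports
# only ~0.11 TB available, even if the volumes are mostly empty:
print(aggregate_available(4.88, [(2.9, 1.0, "volume"), (1.87, 0.2, "volume")]))
```

With guarantee "none", the same aggregate would instead report everything except the 1.2 TB actually written as available, which is exactly the over-provisioning risk described above.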
ASHISHKESARKAR
Doubt for Volume and LUN Migration
2014-01-21 01:57 AM
Dear All,
We did a POC with the NetApp V3220 series and found one difficulty regarding the LUN migration
process.
Because a LUN is a subpart of a volume (in NetApp storage), if a LUN has to be migrated to
another disk location, the entire volume containing it must be migrated.
The migration process is also command-line only; no GUI option is available for it.
2015-04-09 12:57 AM
Hello,
I have a thick-provisioned LUN in a thin-provisioned volume.
Storage vMotion as suggested in article I mentioned is really the most straightforward way to
reduce space consumption.
2015-06-06 11:59 AM
Labels: FAS
just a basic question... I have a LUN mapped to a snapmirrored volume. i need to increase the
LUN space at the source. So, do I need to increase the LUN space at the destination volume
also?
Because, as per my understanding, the destination LUN will automatically resize to match the
source LUN once the snapmirror update completes (resync).
A)
You are confusing LUNs and Volumes a little in your question.
Volumes are Snapmirrored to volumes. A LUN is nothing more than a file within a volume. So
changing the size of a LUN within a volume doesn't require anything to be done on the
Snapmirror destination. Snapmirror doesn't care what is in the volume that is being mirrored - it
just makes a replica copy. So if you change a LUN (file) on the source the change will replicate
to the snapmirror destination on the next update.
Now the caveat of course is if you are resizing the volume that contains the LUN in question,
perhaps to reserve space for the newly expanded LUN. In that case, you *may* need to take
action. In 7-mode, changing the size of the source volume in a snapmirror requires you to also
change the size at the destination. In Clustered Data ONTAP, the destination can automatically
pick up the new source volume size and adjust accordingly assuming space exists to expand at
the destination.
I make the point on terminology only for clarity. LUN mapping refers to making a LUN
available to hosts through an igroup. Volumes contain LUNs. Volumes are snapmirrored to a
destination or series of destination volumes.
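The volume-versus-LUN distinction in this answer can be captured in a toy model: a LUN is just a file inside the replicated volume, so resizing it needs no destination action, while resizing the source volume itself does (in 7-Mode). Everything here (class and method names, return strings) is illustrative, not a NetApp API:

```python
class SnapMirrorRelationship:
    """Toy model of volume-level replication. SnapMirror copies the whole
    volume, so a LUN (a file in the volume) replicates on the next update
    with no extra steps; growing the *volume* needs the destination grown
    too in 7-Mode, while clustered Data ONTAP can adjust automatically."""

    def __init__(self, src_vol_size, dst_vol_size, mode="7-mode"):
        self.src_vol_size = src_vol_size
        self.dst_vol_size = dst_vol_size
        self.mode = mode

    def resize_lun(self, new_lun_size):
        # Nothing to do on the destination: the resized LUN file simply
        # replicates on the next snapmirror update.
        return "no destination action needed"

    def resize_src_volume(self, new_size):
        self.src_vol_size = new_size
        if self.mode == "7-mode":
            # The administrator must resize the destination volume too.
            return "resize destination volume manually"
        # Clustered Data ONTAP can grow the destination automatically,
        # assuming space exists there.
        self.dst_vol_size = max(self.dst_vol_size, new_size)
        return "destination adjusted automatically"
```

In other words, only the volume-resize path ever requires destination-side work, which is the caveat the answer calls out.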