ONTAP 9 Commands Manual Page Reference
exit
Quit the CLI session
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The exit command ends the current CLI session.
Examples
The following example ends the current CLI session:
cluster1::> exit
Goodbye
215-14175_A0 May 2019 Copyright © 2019 NetApp, Inc. All rights reserved.
Web: www.netapp.com • Feedback: doccomments@netapp.com
history
Show the history of commands for this CLI session
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The history command displays the command history of the current CLI session. A numeric ID precedes each command. Use
this number with the redo command to re-execute that history item.
Examples
The following example displays the command history of the current CLI session:
cluster1::> history
1 vserver show
2 man volume show
3 volume delete -vserver vs0 -volume temporary2
4 volume modify { -volume temp* } -state offline
cluster1::> redo 3
Related references
redo on page 3
man
Display the online manual pages
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The man command displays the manual page of the command you specify. If you do not specify a command, the man
command displays the man page index.
Parameters
[<text>] - Valid CLI command
The command for which you'd like to see the manual page. The syntax of the command is the same as the
command itself. The man command supports abbreviations and tab completion of the command name.
Examples
The following example displays the manual page for the storage aggregate create command.
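A representative invocation (the man page output itself is omitted here):
cluster1::> man storage aggregate create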
redo
Execute a previous command
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The redo command re-executes a command that has been executed previously in the current CLI session. Specify a previously
run command using:
• A string that matches part of a previous command. For example, if the only volume command you have run is volume
show, enter redo vol to re-execute the command.
• The numeric ID of a previous command, as listed by the history command. For example, enter redo 4 to re-execute the
fourth command in the history list.
• A negative offset from the end of the history list. For example, enter redo -2 to re-execute the command that you ran two
commands ago.
Parameters
[<text>] - String, Event Number, or Negative Offset
Use this parameter to specify a string, a numeric ID from the command history, or a negative number that
identifies the command to be re-executed.
Examples
The following example re-executes command number 10 in the command history:
cluster1::> redo 10
Related references
history on page 2
rows
Show/Set the rows for the CLI session
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The rows command displays the number of rows that can be displayed in the current CLI session before the interface pauses
output. If you do not set this value, it adjusts automatically based on the actual height of your terminal. If the actual height is
undefined, the default number of rows is 24.
Specify a number to set the number of rows that can be displayed. Setting this value manually disables auto-adjustment. Specify
zero (0) to disable pausing.
You can also set this value using the set -rows command.
Parameters
[<integer>] - Number of Rows the Screen Can Display
Use this parameter to specify the number of rows your terminal can display.
Examples
The following example displays the current number of rows, then resets the number of rows to 48:
cluster1::> rows
36
cluster1::> rows 48
Related references
set on page 4
set
Display/Set CLI session settings
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The set command changes attributes of the user interface.
Parameters
[-privilege <PrivilegeLevel>] - Privilege Level
Use this parameter to specify the privilege level of the command session. Possible values are
• admin - Used for routine system management commands
• advanced - Used for infrequent or potentially dangerous commands
• diagnostic - Used for detailed diagnostic commands that are used only by support personnel
[-units {raw|B|KB|MB|GB|TB|PB|auto}] - Data Units
Use this parameter to specify the default units used when reporting data sizes. For example:
• B - Bytes
[-prompt-timestamp {above|inline|none}] - Display Prompt Timestamp
Use this parameter to specify whether and where a timestamp is displayed in the prompt. Possible values are
• above - print the timestamp using the system timestamp format on the line above the remainder of the
prompt.
• inline - print the timestamp using the system timestamp format at the beginning of the line with the
remainder of the prompt.
Examples
The following example sets the privilege level to advanced:
cluster1::> set -privilege advanced
Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by NetApp personnel.
Do you wish to continue? (y or n): y
cluster1::*>
The following example causes all fields to be shown in output rows, with a comma used as the field separator:
cluster1::> set -showallfields true -showseparator ","
The following example displays a timestamp on the line above the prompt:
cluster1::> set -prompt-timestamp above
[2/25/2016 16:38:38]
cluster1::>
Related references
rows on page 3
top
Go to the top-level directory
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The top command changes the current working directory of the command prompt to the top-level command directory.
Examples
The following example returns the command prompt from the storage aggregate directory to the top-level directory:
cluster1::storage aggregate> top
cluster1::>
Related references
storage aggregate on page 820
up
Go up one directory
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The up command, which can also be specified as two dots (..), changes the current working directory of the command prompt
to the directory that is up one level in the command hierarchy.
Examples
The following example takes the command prompt up one level from the storage aggregate directory:
cluster1::storage aggregate> up
cluster1::storage>
Related references
storage aggregate on page 820
application commands
Display and manage applications
application provisioning commands
Manage application provisioning
Description
This command modifies the options for application provisioning operations.
Parameters
[-is-mixed-storage-services-allowed {true|false}] - Is Mixed Storage Services Allowed
Specifies whether mixed cost storage services are allowed for provisioning placement. If the value of this
parameter is false, only the aggregates closest to the performance requirements of the storage service are
used. If the value of this parameter is true, all aggregates with sufficient performance are considered. The
initial value for this option is false.
Examples
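No example survives in this extract; assuming the command path is application provisioning config modify (the heading above was truncated), an invocation might look like:
cluster1::> application provisioning config modify -is-mixed-storage-services-allowed true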
Description
This command displays options for application provisioning.
Examples
autobalance commands
The autobalance directory
Description
The autobalance aggregate show-aggregate-state command displays information about an aggregate state that is
considered by the Auto Balance Aggregate feature.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
If this parameter is specified, the display will be limited to only those aggregates with a node that matches the
specified value.
[-aggregate <aggregate name>] - Name of the Aggregate
If this parameter is specified, the display will be limited to only that aggregate with a name that matches the
specified value.
[-total-size {<integer>[KB|MB|GB|TB|PB]}] - Total Size of the Aggregate
If this parameter is specified, the display will be limited to only those aggregates with a total-size that matches
the specified value.
[-used-size {<integer>[KB|MB|GB|TB|PB]}] - Used Size of the Aggregate
If this parameter is specified, the display will be limited to only those aggregates with a used-size that matches
the specified value.
[-aggregate-unbalanced-threshold {<integer>[KB|MB|GB|TB|PB]}] - Threshold When Aggregate Is
Considered Unbalanced
If this parameter is specified, the display will be limited to only those aggregates with a threshold that matches
the specified value.
[-outgoing-size {<integer>[KB|MB|GB|TB|PB]}] - Size of Outgoing Volumes in the Aggregate
If this parameter is specified, the display will be limited to only those aggregates with an outgoing-size that
matches the specified value. The outgoing size is equal to the total size of the volumes that are moving away
from each of those aggregates.
[-incoming-size {<integer>[KB|MB|GB|TB|PB]}] - Size of Incoming Volumes in the Aggregate
If this parameter is specified, the display will be limited to only those aggregates with an incoming-size that
matches the specified value. The incoming size is equal to the total size of the volumes that are moving toward
each of those aggregates.
[-raidtype {raid_tec|raid_dp|raid4}] - RAID Type
If this parameter is specified, the display will be limited to only those aggregates with a raidtype that matches
the specified value.
Examples
The following example displays information about the state for all aggregates in the cluster.
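A representative invocation for this example (prompt assumed):
cluster1::> autobalance aggregate show-aggregate-state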
Aggregate: aggr_1
Total Size: 12.61GB
Used Size: 111.6MB
Outgoing Size: 0B
Incoming Size: 0B
Aggregate Used Space Threshold: 8.83GB
Aggregate Available Space Threshold: 5.04GB
RAID Type: raid4
Home Cluster ID: edf0379b-16da-11e6-aa3c-0050568558c2
Attributes: Excluded
The following example displays information about all entries of the aggregate state, for all aggregates in the cluster.
Description
The autobalance aggregate show-unbalanced-volume-state command displays information about a volume that is
considered by the Auto Balance Aggregate feature.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
If this parameter is specified, the display will be limited to only those volumes with a node that matches the
specified value.
[-DSID <integer>] - DSID of the Last Volume Queried
If this parameter is specified, the display will be limited to only those volumes with a DSID that matches the
specified value.
[-aggregate <aggregate name>] - Aggregate
If this parameter is specified, the display will be limited to only those volumes with an aggregate name that
matches the specified value.
Examples
The following example displays information about all of the unbalanced volumes that the Auto Balance Aggregate
feature is aware of.
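A representative invocation for this example (prompt assumed):
cluster1::> autobalance aggregate show-unbalanced-volume-state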
Volume: ro10
Footprint: 20.20MB
Last Time Over IOPS Threshold: 3/12/2014 16:20:18
Last Placed: 3/11/2014 10:16:04
Attributes: Over IOPS Threshold
Stabilizing
Volume: test
Footprint: 20.20MB
Last Time Over IOPS Threshold: 3/12/2014 16:20:18
Last Placed: 3/11/2014 10:16:42
Attributes: Over IOPS Threshold
In Mirror
Stabilizing
The following example displays all of the information that the Auto Balance Aggregate feature has collected for all of the
unbalanced volumes it is aware of.
Node Name: cluster-1-01
DSID of the Last Volume Queried: 1026
Aggregate: aggr_1
Name of the Volume: test
Last Time Threshold Crossed: 3/12/2014 16:20:18
Last Time Volume Was Moved: 3/11/2014 10:16:42
Is Volume Currently Moving: false
Is Volume Quiesced: false
Total Size of the Volume: 20.20MB
Volume's Attributes: Over IOPS Threshold
In Mirror
Stabilizing
Last Time Volume State Was Checked: 3/13/2014 08:20:18
Description
The autobalance aggregate config modify command allows the user to customize the parameters that determine when
volumes should be considered for automatic move or recommendation by the Auto Balance Aggregate feature.
Parameters
[-is-enabled {true|false}] - Is the Auto Balance Aggregate Feature Enabled
This specifies whether the Auto Balance Aggregate feature is enabled and running.
[-aggregate-unbalanced-threshold-percent <integer>] - Threshold When Aggregate Is Considered
Unbalanced (%)
This specifies the space used threshold percentage that will cause the Auto Balance Aggregate feature to
consider an aggregate as unbalanced.
[-aggregate-available-threshold-percent <integer>] - Threshold When Aggregate Is Considered
Balanced (%)
This specifies the threshold percentage which will determine if an aggregate is a target destination for a move.
The Auto Balance Aggregate feature will attempt to move volumes from an unbalanced aggregate until it is
under this percentage.
Examples
The following example displays a modification for the default configuration of the Auto Balance Aggregate feature
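A sketch of such a modification, using the parameters listed above (values are illustrative):
cluster1::> autobalance aggregate config modify -is-enabled true -aggregate-unbalanced-threshold-percent 80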
Description
The autobalance aggregate config show command displays information about parameters that determine when volumes
should be considered for automatic move or recommendation by the Auto Balance Aggregate feature.
Examples
The following example displays the default configuration for the Auto Balance Aggregate feature
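Based on the parameters described above, the output would resemble the following (values are illustrative):
cluster1::> autobalance aggregate config show
            Is the Auto Balance Aggregate Feature Enabled: false
    Threshold When Aggregate Is Considered Unbalanced (%): 70
      Threshold When Aggregate Is Considered Balanced (%): 40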
cluster add-node
Expand the cluster by discovering and adding new nodes
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The cluster add-node command discovers and adds new nodes to the cluster. When the -node-count parameter is
specified, the command attempts to add that many nodes to the cluster. The -node-ip parameter can be specified to directly
add a node. The -cluster-ips parameter can be specified to directly add one or more nodes in parallel. Only one
of the -node-count, -node-ip, and -cluster-ips parameters can be provided. The system node show-discovered command
displays all the nodes discovered on the local network.
Note: The -node-count parameter is deprecated and may be removed in a future release of Data ONTAP. Use the
-cluster-ips parameter instead.
Note: The -node-ip parameter is deprecated and may be removed in a future release of Data ONTAP. Use the
-cluster-ips parameter instead.
Parameters
{ -cluster-ips <IP Address>, ... - List of Cluster Interface IP Addresses of the Nodes Being Added
This parameter contains a comma-separated list of cluster interface IP addresses of the nodes being added. All
the nodes specified in the list will be added to the cluster.
| -retry [true] - Retry a failed cluster add-node operation
Use this parameter to retry the most recently failed cluster add-node command with the originally
specified parameters. Retry is not supported if the cluster add-node command was originally run with
either the -node-count or -node-ip parameters.
| -node-count <integer> - (DEPRECATED)-Number of Nodes Being Added
Number of nodes to be added to the cluster. If fewer nodes are discovered than the number specified, all the
discovered nodes are added to the cluster, and the command fails because there are fewer nodes than specified.
If more nodes are discovered than the number specified, the command fails because there is no way to determine
which nodes you intend to add to the cluster.
Note: The -node-count parameter is supported on non-shared architecture platforms only.
[-foreground {true|false}] - Foreground Process
When set to false the command runs in the background as a job. The default is true, which causes the
command to return after the operation completes.
[-allow-mixed-version-join [true]] - Allow a Node At a Different Version to Join Cluster
This parameter allows nodes with different, but compatible versions of Data ONTAP to be added to the cluster.
A Data ONTAP best practice is to add nodes to the cluster that are of the same Data ONTAP version as the
nodes in the cluster, but that may not always be possible.
Examples
The following example adds a node using -cluster-ips:
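A representative invocation (the IP addresses are illustrative):
cluster1::> cluster add-node -cluster-ips 192.0.2.1,192.0.2.2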
Related references
system node show-discovered on page 1276
cluster create on page 17
cluster add-node-status
Show cluster expansion progress
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The cluster add-node-status command displays the progress of a node joining a cluster initiated by using the
cluster create command or the cluster add-node command.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node-uuid <UUID>] - Node UUID
Selects the nodes that match the specified node UUID.
[-node-name <text>] - Node Name
Selects the nodes that match the specified node name.
[-cluster-ip <IP Address>] - IP Address of a Cluster Interface of Node
Selects the nodes that match the specified cluster IP address.
Examples
The following example shows the progress of a node add operation:
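A sketch of the invocation and output, using only the fields listed above (values and layout are illustrative):
cluster1::> cluster add-node-status
Node Name: node2
 Node UUID: 1e2d3c4b-16da-11e6-aa3c-0050568558c2
Cluster IP: 169.254.115.9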
Related references
cluster create on page 17
cluster add-node on page 15
cluster create
Create a cluster
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The cluster create command creates a cluster with one or more nodes. When the -node-count parameter is specified, the
command attempts to add that many nodes to the cluster. The -cluster-ips parameter can be specified to add one or more
nodes in parallel. Only one of the -node-count and -cluster-ips parameters can be provided.
Note that single-node clusters do not require configuring the cluster network. A cluster network interface must be configured
before other nodes can join the cluster.
Note: The -node-count parameter is deprecated and may be removed in a future release of Data ONTAP. Use the
-cluster-ips parameter instead.
Parameters
[-license <License Code V2>] - (DEPRECATED)-Base License
Note: This parameter is deprecated and may be removed in a future release of Data ONTAP.
Use this optional parameter to specify the base license for the cluster. Obtain this value from your sales or
support representative.
-clustername <text> - Cluster Name
Use this parameter to specify the name of the cluster you are creating.
• The name must contain only the following characters: A-Z, a-z, 0-9, "-" or "_".
• The first character must be one of the following characters: A-Z or a-z.
• The last character must be one of the following characters: A-Z, a-z or 0-9.
• The system reserves the following names: "all", "cluster", "local" and "localhost".
| [-cluster-ips <IP Address>, ...] - List of Cluster Interface IP Addresses of the Nodes Being Added
This parameter contains a comma separated list of cluster interface IP addresses of the nodes in the cluster you
are creating. All the nodes specified in the list will be added to the cluster.
| [-node-count <integer>] - (DEPRECATED)-Node Count
Use this parameter to specify the number of nodes in the cluster you are creating.
Examples
The following example creates a cluster named cluster1:
The following example creates a cluster named cluster1 with a node count of 4 on a non-shared architecture platform:
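Representative invocations for the two examples above (IP addresses are illustrative):
cluster1::> cluster create -clustername cluster1 -cluster-ips 169.254.21.1,169.254.21.2
cluster1::> cluster create -clustername cluster1 -node-count 4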
cluster join
(DEPRECATED)-Join an existing cluster using the specified member's IP address or by cluster name
Availability: This command is available to cluster administrators at the admin privilege level.
Description
Note: This command is deprecated and may be removed in a future release of Data ONTAP. Use cluster add-node from a
node in the cluster instead.
The cluster join command adds a node to an existing cluster. Use the cluster create command to create a cluster if one
does not already exist.
Note that a cluster network interface must be configured for the cluster before other nodes can join the cluster.
Examples
The following example joins the local node to a cluster. The IP address 192.0.2.66 is the address of a cluster interface of a
node that already belongs to the cluster.
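A representative invocation using the address mentioned above (the -clusteripaddr parameter name is assumed):
cluster1::> cluster join -clusteripaddr 192.0.2.66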
Related references
cluster add-node on page 15
cluster create on page 17
cluster modify
Modify cluster node membership attributes
Availability: This command is available to cluster administrators at the advanced privilege level.
Description
The cluster modify command modifies the cluster attributes of a node, including its eligibility to participate in the cluster.
At the advanced privilege level, you can use the command to specify whether a node holds epsilon. Epsilon is an extra fractional
vote that enables quorum to form using slightly weaker requirements. For example, two out of four eligible nodes are sufficient
to form quorum if one of those two nodes holds epsilon.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the node to modify. If you do not specify a node, the command runs
on the local node.
[-epsilon {true|false}] - Epsilon
Use this parameter with the value true to specify that the node holds Epsilon in the cluster. Use this
parameter with the value false to specify that the node does not hold Epsilon in the cluster. In a cluster, only
one node can be designated as Epsilon at any given time. You can designate a node as Epsilon to add weight to
its voting in a cluster with an even number of nodes.
[-eligibility {true|false}] - Eligibility
Use this parameter with the value true to specify that the node is eligible to participate in the cluster. Use this
parameter with the value false to specify that the node is not eligible to participate in the cluster.
If you modify a node as ineligible to participate in the cluster, the command prompts you for confirmation
before it runs.
[-skip-quorum-check-before-eligible [true]] - Skip Quorum Check Before Setting Node Eligible
If this parameter is specified, quorum checks will be skipped prior to setting a node eligible. When setting a
node to eligible, the operation will continue even if there is a possible data outage due to a quorum issue.
[-skip-quorum-check-before-ineligible [true]] - Skip Quorum Check Before Setting Node Ineligible
If this parameter is specified, quorum checks will be skipped prior to setting a node ineligible. When setting a
node to ineligible, the operation will continue even if there is a possible data outage due to a quorum issue.
Examples
This example modifies a node to make it eligible to participate in the cluster.
The following example removes epsilon from the node named node0 and adds it to the node named node1:
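The epsilon example can be sketched as follows, using the node names given above (advanced-privilege prompt assumed):
cluster1::*> cluster modify -node node0 -epsilon false
cluster1::*> cluster modify -node node1 -epsilon true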
cluster ping-cluster
Ping remote cluster interfaces and perform RPC server check
Availability: This command is available to cluster administrators at the advanced privilege level.
Description
The cluster ping-cluster command probes network connectivity to remote cluster interfaces, and performs an RPC server
check.
Parameters
-node <nodename> - Node
Use this parameter to send the ping from the node you specify.
[-use-sitelist {true|false}] - Use Sitelist for Cluster Interfaces
Use this parameter with the value true to specify that the command use the sitelist to determine any
incomplete cluster IP information. Use this parameter with the value false to specify that the command not
use the sitelist.
[-skip-rpccheck {true|false}] - Skip RPC Server Check
Use this parameter with the value true to specify that the command not perform the rpcinfo check of remote
hosts. Use this parameter with the value false to specify that the command perform the rpcinfo check. The
rpcinfo check checks the status of the RPC servers on the remote hosts. By default, the rpcinfo check runs on
the program number of the portmapper. Use the -rpc-prognum parameter to override this default.
[-rpc-prognum <integer>] - RPC Server to Check
Use this parameter to override default behavior and run the rpcinfo check on the program number you specify.
By default, the rpcinfo check runs on the program number of the portmapper.
cluster remove-node
Remove a node from the cluster
Availability: This command is available to cluster administrators at the advanced privilege level.
Description
The cluster remove-node command removes a node from a cluster.
Before you can remove a node from a cluster, you must shut down all of the node's shared resources, such as virtual interfaces to
clients. If any of a node's shared resources are still active, the command fails. The failure message will display which active
resources must be shut down before the node can be removed from the cluster.
Parameters
{ -node <nodename> - Node to Unjoin
Use this parameter to specify the name of the node to remove from the cluster.
| -cluster-ip <IP Address>} - IP Address of a Cluster Interface of Node to Unjoin
Use this parameter to specify the cluster IP of the node to remove from the cluster.
[-skip-quorum-check-before-unjoin [true]] - Skip Quorum Check before Unjoin
If this parameter is specified, quorum checks will be skipped prior to the remove-node command. The
operation will continue even if there is a possible data outage due to a quorum issue.
[-skip-last-low-version-node-check [true]] - Skip the Check That Prevents Unjoining the Last Low
Versioned Node
This parameter allows the node with lowest version of Data ONTAP to be removed from the cluster.
Examples
The following example shows how to remove the node named node4 from the cluster.
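A representative invocation for this example:
cluster1::*> cluster remove-node -node node4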
The following example forcibly removes the node from the cluster:
cluster1::*> cluster remove-node -node node4 -force
cluster setup
Setup wizard
Availability: This command is available to cluster administrators at the admin privilege level.
Description
Note: Use of this command to join a node to an existing cluster is deprecated and might be removed in a future release of
Data ONTAP. From a node in the cluster use the system node show-discovered command and then use the cluster
add-node command.
The cluster setup command runs the cluster setup wizard, which can be used to either create a cluster or join a node to an
existing cluster. When you run the cluster setup wizard, enter the appropriate information at the prompts. You will be asked to
provide the following information to create a cluster:
• Cluster name
• Location
• Cluster IP address
The cluster management interface is used for managing the cluster. It provides one IP address to manage the cluster and will fail
over to another node, if necessary. This is the preferred IP address for managing the cluster, but you can also manage the cluster
by logging in to the node management IP address of a node in the cluster. Since the cluster management interface must be able
to fail over, the port role for the interface must be "data" and typically the best choice for an IP address is one on the data
network. The node management interface will not fail over, so an IP address on the management network and a port with the
role "node management" is the best choice. Alternatively, you can assign an IP address on the data network to the cluster
management interface - if that is better in your network topology - but the port must be a data port. The two examples below
illustrate the cluster create and cluster join operations, respectively.
Examples
The following example shows the create option of cluster setup.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
This system will send event messages and periodic reports to NetApp Technical
Support. To disable this feature, enter
autosupport modify -support disable
within 24 hours.
Otherwise, press Enter to complete cluster setup using the command line
interface:
Do you want to create a new cluster or join an existing cluster? {create, join}:
create
Do you intend for this node to be used as a single node cluster? {yes, no} [no]:
Enter the cluster management interface port [e0d]:
Enter the cluster management interface IP address: 192.0.2.60
Enter the cluster management interface netmask: 255.255.255.192
Enter the cluster management interface default gateway [192.0.2.1]:
A cluster management interface on port e0d with IP address 192.0.2.60 has been created. You can
use this address to connect to and manage the cluster.
SFO is licensed.
SFO will be enabled when the partner joins the cluster.
To complete cluster setup, you must join each additional node to the cluster
by running "system node show-discovered" and "cluster add-node" from a node in the cluster.
To complete system configuration, you can use either OnCommand System Manager
or the Data ONTAP command-line interface.
To access OnCommand System Manager, point your web browser to the cluster
management IP address (https://192.0.2.60).
cluster1::>
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
This system will send event messages and periodic reports to NetApp Technical
Support. To disable this feature, enter
autosupport modify -support disable
within 24 hours.
Otherwise, press Enter to complete cluster setup using the command line
interface:
Do you want to create a new cluster or join an existing cluster? {create, join}:
join
Enter the IP address of an interface on the private cluster network from the
cluster you want to join: 169.254.115.8
SFO is licensed.
SFO will be enabled when the partner joins the cluster.
To complete cluster setup, you must join each additional node to the cluster
by running "system node show-discovered" and "cluster add-node" from a node in the cluster.
To complete system configuration, you can use either OnCommand System Manager
or the Data ONTAP command-line interface.
To access OnCommand System Manager, point your web browser to the cluster
management IP address (https://192.0.2.60).
cluster1::>
Related references
system node show-discovered on page 1276
cluster add-node on page 15
cluster create on page 17
cluster join on page 18
cluster show
Display cluster node members
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The cluster show command displays information about the nodes in a cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the nodes that match this parameter value.
[-node-uuid <UUID>] - UUID (privilege: advanced)
Selects the nodes that match this parameter value.
[-epsilon {true|false}] - Epsilon (privilege: advanced)
Selects the nodes that match this parameter value. In a cluster, only one node can be designated as Epsilon at
any given time. You can designate a node as Epsilon to add weight to its voting in a cluster with an even
number of nodes.
[-eligibility {true|false}] - Eligibility
Selects the nodes that match this parameter value (true means eligible to participate in the cluster).
[-health {true|false}] - Health
Selects the nodes that match this parameter value (true means online).
Examples
The following example displays information about all nodes in the cluster:
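The tabular output would resemble the following (node names and values are illustrative):
cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
node0                 true    true
node1                 true    true
2 entries were displayed.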
The following example displays information about the node named node1:
Node: node1
Eligibility: true
Health: true
Description
The cluster contact-info modify command modifies contact information for the cluster administrators. If any values
contain spaces, you must enclose those values in quotes.
Use the cluster contact-info show command to display contact information for the cluster administrators.
Parameters
[-primary-name <text>] - Name of Primary Contact
Use this parameter to specify the name of the primary contact.
[-primary-phone <text>] - Phone Number of Primary Contact
Use this parameter to specify the phone number of the primary contact.
[-primary-alt-phone <text>] - Alternate Phone Number of Primary Contact
Use this parameter to specify the alternate phone number of the primary contact.
[-primary-email <text>] - Email Address or User ID of Primary Contact
Use this parameter to specify the email address of the primary contact.
[-secondary-name <text>] - Name of Secondary Contact
Use this parameter to specify the name of the secondary contact.
[-secondary-phone <text>] - Phone Number of Secondary Contact
Use this parameter to specify the phone number of the secondary contact.
[-secondary-alt-phone <text>] - Alternate Phone Number of Secondary Contact
Use this parameter to specify the alternate phone number of the secondary contact.
[-secondary-email <text>] - Email Address or User ID of Secondary Contact
Use this parameter to specify the email address of the secondary contact.
[-business-name <text>] - Business Name
Use this parameter to specify the name of the business responsible for this cluster.
[-address <text>] - Business Address
Use this parameter to specify the street address of the business responsible for this cluster.
[-city <text>] - City Where Business Resides
Use this parameter to specify the name of the city in which the business is located.
[-state <text>] - State Where Business Resides
Use this parameter to specify the name of the state or province in which the business is located.
[-country <Country Code>] - 2-Character Country Code
Use this parameter to specify the 2-character country code of the country in which the business is located.
Examples
The following example changes the name and phone numbers of the secondary contact person for the cluster.
The following example changes the mailing address of the business responsible for the cluster.
cluster1::> cluster contact-info modify -address "123 Example Avenue" -city Exampleville -state
"New Example" -zip-code 99999 -country US
Related references
cluster contact-info show on page 28
Description
The cluster contact-info show command displays contact information for the cluster administrators.
Examples
The following example shows example output for this command.
Parameters
[-timezone <Area/Location Timezone>] - Time Zone
This parameter sets the timezone, specified in the Olson format.
{ [-date {MM/DD/YYYY HH:MM:SS [{+|-}hh:mm]}] - Date and Time
This parameter sets the date and time, in the format MM/DD/YYYY HH:MM:SS.
| [-dateandtime <[[[[[cc]yy]mm]dd]hhmm[.ss]]>] - Date and Time
This parameter sets the date and time information, in the format [[[[[cc]yy]mm]dd]hhmm[.ss]]. The argument
for setting the date and time is interpreted as follows:
If the first two digits of the year are omitted, and the last two digits are greater than 68, a date in the 1900s is
used. Otherwise, a date in the 2000s is used. If all four digits of the year are omitted, the default is the current
year. If the month or day is omitted, the default is the current month or day, respectively. If the seconds are
omitted, the default is set to 00. The system automatically handles the time changes for Daylight Saving and
Standard time, and for leap seconds and years.
| [-utcdateandtime | -u <[[[[[cc]yy]mm]dd]hhmm[.ss]]>]} - UTC Date and Time
This parameter sets the date and time information in Coordinated Universal Time (UTC), in the format
[[[[[cc]yy]mm]dd]hhmm[.ss]]. -u is an alias for -utcdateandtime. The argument for setting the date and time is
interpreted as follows:
If the first two digits of the year are omitted, and the last two digits are greater than 68, a date in the 1900s is
used. Otherwise, a date in the 2000s is used. If all four digits of the year are omitted, the default is the current
year. If the month or day is omitted, the default is the current month or day, respectively. If the seconds are
omitted, the default is set to 00. Time changes for Daylight Saving and Standard time, and for leap seconds
and years, are handled automatically.
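The interpretation rules above can be modeled with a short sketch. This is illustrative Python, not ONTAP code; the helper name parse_dateandtime is hypothetical, and the absolute MM/DD/YYYY form is not covered:

```python
from datetime import datetime

def parse_dateandtime(arg, now=None):
    """Model the [[[[[cc]yy]mm]dd]hhmm[.ss]] interpretation rules:
    omitted leading fields default to the current date, omitted
    seconds default to 00, and a two-digit year above 68 is taken
    as 19yy, otherwise as 20yy."""
    now = now or datetime.now()
    if "." in arg:
        arg, ss = arg.split(".")
        second = int(ss)
    else:
        second = 0  # seconds omitted
    digits, hhmm = arg[:-4], arg[-4:]  # hhmm is always present
    hour, minute = int(hhmm[:2]), int(hhmm[2:])
    day = int(digits[-2:]) if len(digits) >= 2 else now.day
    month = int(digits[-4:-2]) if len(digits) >= 4 else now.month
    if len(digits) >= 8:    # ccyy given in full
        year = int(digits[-8:-4])
    elif len(digits) >= 6:  # yy only: pick the century
        yy = int(digits[-6:-4])
        year = 1900 + yy if yy > 68 else 2000 + yy
    else:                   # year omitted entirely
        year = now.year
    return datetime(year, month, day, hour, minute, second)
```

For example, under these rules the argument "201105220925.00" maps to May 22, 2011, 09:25:00.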
The following example sets the date and time in the UTC format to May 22, 2011, at 09:25:00 a.m.:
Description
The cluster date show command displays the time zone, date, and time settings for one or more nodes in the cluster. By
default, the command displays date and time settings for all nodes in the cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-utc ]
Displays date and time information in Coordinated Universal Time (UTC).
| [-utcdate ]
Displays date and time information in UTC.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the nodes that match this parameter value.
[-timezone <Area/Location Timezone>] - Time Zone
Selects the nodes that match this parameter value (specified in the Olson format).
[-date {MM/DD/YYYY HH:MM:SS [{+|-}hh:mm]}] - Date and Time
Selects the nodes that match this parameter value.
[-utc-date <MM/DD/YYYY HH:MM:SS>] - UTC Date and Time
Selects the nodes that match this parameter value.
[-dateandtime <[[[[[cc]yy]mm]dd]hhmm[.ss]]>] - Date and Time
Selects the nodes that match this parameter value, interpreted as described for the -dateandtime parameter above.
Examples
The following example displays the date and time settings for all nodes in the cluster:
Description
The cluster date zoneinfo load-from-uri command loads a new set of timezone zoneinfo data to replace the version
installed in the cluster. Each release of Data ONTAP software contains the timezone data that is current at the time of release.
If the timezone rules change between Data ONTAP releases, for example, if a country changes when it observes daylight saving
time, an update to the cluster zoneinfo data may be required.
Use only zoneinfo files provided by NetApp for use in Data ONTAP with this command.
To update the zoneinfo database, do the following:
• Download the required zoneinfo file from the NetApp support website.
• Place the file on a local web server that is accessible from the cluster without a password.
• Execute the cluster date zoneinfo load-from-uri command, passing the Uniform Resource Identifier (URI) of
the file as a parameter.
Parameters
-uri {(ftp|http)://(hostname|IPv4 Address|'['IPv6 Address']')...} - URI of Timezone Zoneinfo Data
URI of the new zoneinfo file.
Examples
The following example loads a new version of the timezone zoneinfo database to the cluster:
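A sketch of such an invocation follows; the host and file names are placeholders, not NetApp-provided values:

cluster1::> cluster date zoneinfo load-from-uri -uri http://www.example.com/zoneinfo.zip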
Related references
cluster date zoneinfo show on page 32
Description
Display information about the current timezone zoneinfo data.
Examples
The following example shows the zoneinfo information for a cluster:
cluster ha commands
Manage high-availability configuration
cluster ha modify
Modify high-availability configuration of cluster management services
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The cluster ha modify command enables or disables cluster high availability in a two-node cluster. Enable high availability
when performing some procedures, such as replacing hardware.
Note: This command is required to enable high availability if the cluster only has two nodes. Do not run this command in a
cluster that has three or more nodes.
Note: Cluster high availability for two-node clusters differs from the storage failover technology used between two nodes for
storage high availability.
Examples
The following example enables cluster high availability in a cluster.
cluster ha show
Show high-availability configuration status for the cluster
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The cluster ha show command displays the high-availability status of the cluster. Cluster high-availability mode applies
only to two-node clusters.
Examples
The following example displays the high-availability status for a two-node cluster:
Description
The cluster identity modify command changes a cluster's identity information.
Parameters
[-name <Cluster name>] - Cluster Name
Use this parameter to specify a new name for the cluster.
• The name must contain only the following characters: A-Z, a-z, 0-9, "-" or "_".
• The first character must be one of the following characters: A-Z or a-z.
• The last character must be one of the following characters: A-Z, a-z or 0-9.
• The system reserves the following names: "all", "cluster", "local" and "localhost".
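The naming rules above can be summarized in a short sketch. This is illustrative Python, not ONTAP code; the function name is hypothetical:

```python
import re

# Names the system reserves, per the rules above.
RESERVED_NAMES = {"all", "cluster", "local", "localhost"}

def is_valid_cluster_name(name):
    """Check a proposed cluster name: only A-Z, a-z, 0-9, "-", "_";
    first character a letter; last character a letter or digit;
    not one of the reserved names."""
    if name in RESERVED_NAMES:
        return False
    return re.fullmatch(r"[A-Za-z](?:[A-Za-z0-9_-]*[A-Za-z0-9])?", name) is not None
```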
Examples
The following example renames the current cluster to cluster2:
Description
The cluster identity show command displays the identity information of the cluster.
Examples
The following example displays the cluster's UUID, name, serial number, location and contact information:
cluster1::>
The following example displays the cluster's UUID, name, serial number, location, contact information, and RDB UUID:
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
cluster1::*>
Description
The cluster image cancel-update command is used to cancel an update that is in either paused-by-user or paused-by-
error state. The update cannot be canceled if it is not in a paused state.
Examples
The following example displays a cancel-update operation:
Info: Canceling update. It may take a few minutes to finish canceling the update
Description
The cluster image pause-update command is used to pause a currently running update. The update pauses at the next
predefined pause point (for example, after validation, download to the boot device, takeover completion, or giveback
completion), which might take some time to reach. When the update reaches the pause point, it transitions into the
paused-by-user state.
Examples
The following example displays a pause-update operation:
Info: Pausing update. It may take a few minutes to finish pausing the update
Description
The cluster image resume-update command is used to resume an update that is currently in the paused-by-user or
paused-by-error state. If the update is not paused, an error is returned.
Examples
The following example shows a resume-update operation:
Description
The cluster image show command displays information about the version of Data ONTAP that is running on each node and
the date/time when it was installed. By default, the command displays the following information:
• Node name
• Current version
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays information about the specified node.
[-version <text>] - Current Version
Displays information about the nodes running the specified version.
[-date <MM/DD/YYYY HH:MM:SS>] - Date Installed
Displays information about the nodes with the specified installation date.
Examples
The following example displays information about currently running images on all nodes of the cluster:
Description
The cluster image show-update-history command displays the update history for each node. By default, the command
displays the following information:
• Status
• Package version
• Start time
• Completion time
• Component ID
• Previous version
• Updated version
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-component-id {<nodename>|local}] - Component ID
Displays updates for the specified component.
[-start-time <MM/DD/YYYY HH:MM:SS>] - Start Time
Displays updates with the specified start time.
[-package-version <text>] - Package Version
Displays updates for the specified package version.
[-status {successful|canceled|back-out}] - Status
Displays updates that completed with the specified status.
[-completion-time <MM/DD/YYYY HH:MM:SS>] - Completion Time
Displays updates with the specified completion time.
[-previous-version <text>] - Previous Version
Displays updates with the specified previous version.
[-updated-version <text>] - Updated Version
Displays updates with the specified updated version.
Description
The cluster image show-update-log command displays detailed information about the currently running or previously
run nondisruptive updates. By default, the command displays the following information:
• Phase
• Transaction
• Transaction ID
• Component ID
• Time stamp
• Status
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-trans-id <integer>] - Transaction ID
Displays information for the step associated with the specified transaction ID.
[-component-id {<nodename>|local}] - Component ID
Displays information for steps associated with the specified component.
Examples
The following example displays information about automated nondisruptive update events:
Description
The cluster image show-update-log-detail command displays detailed information about the currently running and
previously run nondisruptive update events. By default, the command displays the following information:
• Node
• Transaction ID
• Time stamp
• Destination node
• Task phase
• Task name
• Task status
• Message
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays information only for the specified node.
[-task-id <integer>] - Task Id
Displays information only for the specified task ID.
[-posted-time <MM/DD/YYYY HH:MM:SS>] - Posted Time
Displays information that occurred at the specified time.
[-msg-seq-no <integer>] - Message Sequence
Displays information only for the specified message sequence number.
[-current-pid <integer>] - Process ID
Displays information only for the specified process ID.
[-destination <text>] - Task Target node
Displays information only for the specified destination node.
[-ndu-phase {validation|prereq-updates|ontap-updates|package-management|default-phase|post-
update-checks}] - Update phase
Displays information only for the specified phase.
[-task-name {initialize|mount-image|restart-hm|get-health|run-scripts|unmount-image|clear-
alert|post-restart-hm|cleanup-rd|synch-image|do-download-job|do-failover-job|do-giveback-
Examples
The following example displays detailed information about automated nondisruptive updates:
Description
The cluster image show-update-progress command displays information about the current state of an update. By
default, the command displays the following information:
• Update phase
• Status
• Estimated Duration
• Elapsed Duration
Examples
The following example shows the automated nondisruptive update of two nodes, nodeA and nodeB. In this case, nodeA's
update is waiting and nodeB's update is in progress; nodeB's giveback operation is in progress.
Estimated Elapsed
Update Phase Status Duration Duration
-------------------- ----------------- --------------- ---------------
Pre-update checks completed 00:10:00 00:00:02
Data ONTAP updates in-progress 01:23:00 00:32:07
Details:
cluster1::>
The following example shows the automated nondisruptive update of two nodes, nodeA and nodeB. In this case, the
automated nondisruptive update is paused-on-error in the "Data ONTAP updates" phase. nodeA's update is waiting and
nodeB's update has failed. "Status Description" shows nodeB's error and the corrective action.
Estimated Elapsed
Update Phase Status Duration Duration
-------------------- ----------------- --------------- ---------------
Pre-update checks completed 00:10:00 00:00:02
Data ONTAP updates paused-on-error 00:49:00 00:05:21
Details:
cluster1::>
The following example shows that the automated nondisruptive update is paused-on-error in the "Post-update checks"
update phase, and "Status Description" shows the error and action.
Estimated Elapsed
Update Phase Status Duration Duration
-------------------- --------------- --------------- ---------------
Data ONTAP updates completed 02:19:00 00:00:03
Post-update checks paused-on-error 00:10:00 00:00:02
Details:
cluster1::>
The following example shows that the automated nondisruptive update is completed on nodeA and nodeB.
The following example shows the automated update of a two-node MetroCluster configuration with clusters cluster_A
and cluster_B. In this case, cluster_A's update is waiting and cluster_B's update is in progress; cluster_B's switchback
operation is in progress.
Estimated Elapsed
Cluster Duration Duration Status
------------------------------ --------------- --------------- ----------------
cluster_A 00:00:00 00:00:00 waiting
cluster_B 00:00:00 00:06:42 in-progress
cluster_A::>
The following example shows that the automated update is completed on both cluster_A and cluster_B in a two-node
MetroCluster configuration.
Estimated Elapsed
Cluster Duration Duration Status
------------------------------ --------------- --------------- ----------------
cluster_A 00:00:00 00:20:44 completed
cluster_B 00:00:00 00:10:43 completed
cluster_A::>
Description
The cluster image update command is used to initiate a Data ONTAP update. The update is preceded by a validation of
the cluster to ensure that any issues that might affect the update are identified. There are two types of cluster update. A
rolling update updates Data ONTAP one HA pair at a time; it is performed for clusters with fewer than eight nodes, or when
the -force-rolling option is specified for clusters with eight or more nodes. A batch update is used for clusters of eight or
more nodes, and updates multiple HA pairs at the same time.
There are predefined points in the update when the update can be paused (either by the user or by an error). These pause points
occur after validation, after download to the boot device, after takeover has completed, and after giveback has completed.
Examples
The following example shows the update operation:
Description
The cluster image validate command checks for issues within the cluster that might lead to problems during the update.
Parameters
[-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
[-version <text>] - Update Version
Specifies the Data ONTAP version to use to validate the cluster.
[-rolling [true]] - Rolling Update
Specify this optional parameter on a cluster with eight or more nodes to perform a rolling-update check. The
default is to perform a batch-update check.
Note: This parameter is only supported on a cluster with eight or more nodes, and is not supported for two-
node MetroCluster.
Examples
The following example shows the validate operation:
Parameters
-version <text> - Version To Be Deleted
Specifies the package version that is to be deleted.
Examples
The following example deletes the package with version 8.3:
Description
The cluster image package get command fetches a Data ONTAP package file specified by the URL into the cluster. The
package is stored in the cluster package repository and the information from the package is stored in the update database.
Parameters
-url <text> - Package URL
Specifies the URL from which to get the package.
Examples
The following example displays how to get a package from a URL:
Description
The cluster image package show-repository command displays the package versions that are in the cluster package
repository. By default, the command displays the following information:
• Package version
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
Examples
The following example displays the packages in the cluster package repository:
Description
The cluster kernel-service show command displays the following information from the master node for each node in
the cluster:
• Node name
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
Examples
The following example displays information about all nodes in the cluster:
Parameters
-destination <Remote InetAddress> - Destination Host
Host name or IPv4 or IPv6 address of the server to forward the logs to.
[-port <integer>] - Destination Port
The port that the destination server listens on.
[-protocol {udp-unencrypted|tcp-unencrypted|tcp-encrypted}] - Log Forwarding Protocol
The protocol used for sending messages to the destination. It can be one of the following values:
udp-unencrypted, tcp-unencrypted, or tcp-encrypted.
Examples
This example causes audit logs to be forwarded to a server at address 192.168.0.1, port 514 with USER facility.
cluster1::> cluster log-forwarding create -destination 192.168.0.1 -port 514 -facility user
Description
The cluster log-forwarding delete command deletes log forwarding destinations for remote logging.
Parameters
-destination <Remote InetAddress> - Destination Host
Host name or IPv4 or IPv6 address of the server to delete the forwarding entry for.
-port <integer> - Destination Port
The port that the destination server listens on.
Description
The cluster log-forwarding modify command modifies log forwarding destinations for remote logging.
Parameters
-destination <Remote InetAddress> - Destination Host
The host name or IPv4 or IPv6 address of the server to be modified.
-port <integer> - Destination Port
The port that the destination server listens on.
[-verify-server {true|false}] - Verify Destination Server Identity
When this parameter is set to true, the identity of the log forwarding destination is verified by validating its
certificate. The value can be set to true only when the tcp-encrypted value is selected in the protocol
field. When this value is true, the remote server might be validated by OCSP. OCSP validation for cluster
logs is controlled with the security config ocsp enable -app audit_log and security config
ocsp disable -app audit_log commands.
[-facility <Syslog Facility>] - Syslog Facility
The syslog facility to use for the forwarded logs.
Examples
This example modifies the facility of audit logs that are forwarded to the destination server at address 192.168.0.1, port
514.
cluster1::> cluster log-forwarding modify -destination 192.168.0.1 -port 514 -facility local1
Description
The cluster log-forwarding show command displays log forwarding information:
• Destination (IPv4/IPv6/hostname)
• Port number
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-destination <Remote InetAddress>] - Destination Host
If this optional parameter is specified, the command displays information about the forwarding destinations
with the specified host name, IPv4 or IPv6 address.
[-port <integer>] - Destination Port
If this optional parameter is specified, the command displays information about the forwarding destinations
with the specified ports.
[-protocol {udp-unencrypted|tcp-unencrypted|tcp-encrypted}] - Log Forwarding Protocol
If this optional parameter is specified, the command displays information about the forwarding destinations
with the specified protocols.
[-verify-server {true|false}] - Verify Destination Server Identity
If this optional parameter is specified, the command displays information about the forwarding destinations
with the specified verify-server values.
[-facility <Syslog Facility>] - Syslog Facility
If this optional parameter is specified, the command displays information about the forwarding destinations
with the specified facility.
Examples
Verify Syslog
Destination Host Port Protocol Server Facility
------------------------ ------ --------------- ------ --------
192.168.0.1 514 udp-unencrypted false user
Description
The cluster peer create command establishes a peer relationship between two clusters. Cluster peering enables
independent clusters to coordinate and exchange data.
Before creating a new cluster peer relationship, make sure that both clusters are individually healthy and that there are no other
peer relationships between the two clusters that might interfere with the new relationship.
Parameters
[-peer-addrs <Remote InetAddress>, ...] - Remote Intercluster Addresses
Use this parameter to specify the names or IP addresses of the logical interfaces used for intercluster
communication. Separate the addresses with commas.
The addresses you provide here are associated with the remote cluster until you modify or delete the
relationship, regardless of whether the addresses are valid. Make sure to provide addresses which you know
will remain available on the remote cluster. You can use the hostnames of the remote cluster's intercluster
addresses, the IP addresses of the remote cluster's intercluster LIFs or both.
[-username <text>] - Remote User Name
Use this optional parameter to specify a username that is used to run a reciprocal cluster peer create
command on the peered cluster. If you do not supply a username for reciprocal creation, you must run
cluster peer create again on the remote cluster to complete the peering relationship.
If you specify the username for the remote cluster, you will be prompted to enter the associated remote
password. These credentials are not stored; they are used only during creation to authenticate with the remote
cluster and to enable the remote cluster to authorize the peering request. The provided username's profile must
have access to the console application in the remote cluster.
Use the security login role show and security login show commands on each cluster to find user
names and their privilege levels.
[-no-authentication [true]] - Do Not Use Authentication
Use this optional parameter when omitting the -username parameter to indicate that you will create an
unauthenticated peering relationship.
[-timeout <integer>] - Operation Timeout (seconds) (privilege: advanced)
Use this optional parameter to specify a timeout value for peer communications. Specify the value in seconds.
The default timeout value is 60 seconds.
[-address-family {ipv4|ipv6}] - Address Family of Relationship
Use this optional parameter to specify the address family of the cluster peer relationship. The default is based
on existing relationships, existing local intercluster LIFs belonging to a particular address-family, and the
addresses supplied to the cluster peer create command.
[-offer-expiration {MM/DD/YYYY HH:MM:SS | {1..7}days | {1..168}hours | PnDTnHnMnS | PnW}] -
Passphrase Match Deadline
Specifying cluster peer create normally creates an offer to establish authentication with a cluster that is
a potential peer of this cluster. Such offers expire unless they are accepted within a set time. Use this optional
parameter to specify the date and time at which this offer should expire, after which the offer will no longer
be accepted.
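The relative forms accepted by this parameter can be modeled with a short sketch. This is illustrative Python, not ONTAP code; the function name is hypothetical, and the absolute MM/DD/YYYY HH:MM:SS form is omitted:

```python
import re
from datetime import timedelta

def parse_offer_expiration(text):
    """Interpret the relative -offer-expiration forms:
    {1..7}days, {1..168}hours, PnW, and PnDTnHnMnS."""
    m = re.fullmatch(r"(\d+)days", text)
    if m and 1 <= int(m.group(1)) <= 7:
        return timedelta(days=int(m.group(1)))
    m = re.fullmatch(r"(\d+)hours", text)
    if m and 1 <= int(m.group(1)) <= 168:
        return timedelta(hours=int(m.group(1)))
    m = re.fullmatch(r"P(\d+)W", text)  # ISO 8601 weeks form
    if m:
        return timedelta(weeks=int(m.group(1)))
    # ISO 8601 days/hours/minutes/seconds form, e.g. P1DT2H30M
    m = re.fullmatch(r"P(?:(\d+)D)?(?:T(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?)?", text)
    if m and any(m.groups()):
        d, h, mi, s = (int(g) if g else 0 for g in m.groups())
        return timedelta(days=d, hours=h, minutes=mi, seconds=s)
    raise ValueError("unrecognized expiration: " + repr(text))
```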
[-rpc-connect-timeout <integer>] - Timeout for RPC Connect (seconds) (privilege: advanced)
Use this optional parameter to specify a timeout value for the RPC connect during peer communications.
Specify the value in seconds. The default timeout value is 10 seconds.
[-update-ping-timeout <integer>] - Timeout for Update Pings (seconds) (privilege: advanced)
Use this optional parameter to specify a timeout value for pings while updating remote cluster information.
Specify the value in seconds. The default timeout value is 5 seconds. This parameter applies only to cluster
peer relationships using the IPv4 protocol.
Examples
This example creates a peer relationship between cluster1 and cluster2. This reciprocal create executes the create
command on both the local cluster and the remote cluster. The cluster peer create command can use the hostnames of
cluster2's intercluster addresses, the IP addresses of cluster2's intercluster LIFs, or both. Note that the admin user's
password was typed at the prompt, but was not displayed.
Remote Password:
This example shows coordinated peer creation. The cluster peer create command was issued locally on each
cluster. This does not require you to provide the username and password for the remote cluster. There is a password
prompt, but if you are logged in as the admin user, you may simply press enter.
Remote Password:
NOTICE: Addition of the local cluster information to the remote cluster has
failed with the following error: not authorized for that command. You may
need to repeat this command on the remote cluster.
Remote Password:
NOTICE: Addition of the local cluster information to the remote cluster has
failed with the following error: not authorized for that command. You may
need to repeat this command on the remote cluster.
This example shows a reciprocal cluster peer create over IPv6 addresses, that establishes a cluster peer relationship with
an IPv6 address family.
Remote Password:
This example shows creation of an authenticated peering relationship. It is an example of using the coordinated method to
create a cluster peer relationship. The cluster peer create command is issued locally on each cluster. Before
executing this pair of commands, a passphrase to be used with the commands is chosen and given at the prompts. The
passphrase can be any text; it is prompted for twice on each cluster, and all four copies of the passphrase must agree. The
passphrase does not echo on the screen. The passphrase must be longer than the minimum length as specified by the
cluster peer policy on both clusters.
Notice: Now use the same passphrase in the "cluster peer create" command in the
other cluster.
The following example creates a peer relationship between cluster1 and cluster2 using system-generated passphrases:
Passphrase: UCa+6lRVICXeL/gq1WrK7ShR
Peer Cluster Name: cluster2
Initial Allowed Vserver Peers: -
Expiration Time: 6/7/2017 09:16:10 +5:30
Intercluster LIF IP: 10.140.106.185
The following example creates a cluster peer offer from cluster1 for an anonymous cluster, using a system-generated
passphrase and an offer expiration period of two days; cluster2 then uses the offer from cluster1 with the system-generated
passphrase:
Passphrase: UCa+6lRVICXeL/gq1WrK7ShR
Peer Cluster Name: Clus_7ShR (temporary generated)
Initial Allowed Vserver Peers: -
Expiration Time: 6/9/2017 08:16:10 +5:30
Intercluster LIF IP: 10.140.106.185
Cluster "cluster1" creates an offer with the initial-allowed-vserver-peers option set to Vservers "vs1" and "vs2".
The peer cluster "cluster2" then uses the offer and creates a peer relationship with cluster1. Upon successful establishment
of the peer relationship, Vserver peer permission entries are created for the Vservers "vs1" and "vs2" in cluster
"cluster1" for the peer cluster "cluster2". The following example describes the usage of the initial-allowed-vserver-
peers option in the cluster peer creation workflow:
Passphrase: UCa+6lRVICXeL/gq1WrK7ShR
Peer Cluster Name: Clus_7ShR (temporary generated)
Initial Allowed Vserver Peers: vs1,vs2
Expiration Time: 6/7/2017 09:16:10 +5:30
Intercluster LIF IP: 10.140.106.185
Related references
security login role show on page 558
security login show on page 542
cluster show on page 26
cluster peer show on page 62
cluster peer policy on page 76
Description
The cluster peer delete command removes a peering relationship. It removes the relationship records, state data, and all
associated jobs.
Before removing the relationship, the command verifies that no resources depend on the relationship. For example, if any
SnapMirror relationships exist, the command denies the request to delete the peering relationship. You must remove all
dependencies for the deletion to succeed. The cluster peer delete command removes only the local instance of the peer
relationship. An administrator in the peer cluster must use the cluster peer delete command there as well to completely
remove the relationship.
Parameters
-cluster <text> - Peer Cluster Name
Use this parameter to specify the peering relationship to delete by specifying the name of the peered cluster.
Examples
This example shows a failed deletion due to a SnapMirror dependency.
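The invocation might look like the following sketch; the peer cluster name "cluster2" is illustrative:

cluster1::> cluster peer delete -cluster cluster2

If a SnapMirror relationship still depends on the peering, the command reports the dependency and the relationship is not deleted.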
Description
The cluster peer modify command modifies the attributes of a peering relationship. When you modify a peer relationship
and specify -peer-addrs, all of the remote addresses must respond, must be intercluster addresses, and must belong to the
remote cluster being modified; otherwise, the modification request is denied.
Parameters
-cluster <text> - Peer Cluster Name
Use this parameter to specify the peering relationship to modify by specifying the name of the peered cluster.
[-peer-addrs <Remote InetAddress>, ...] - Remote Intercluster Addresses
Use this parameter to specify the names or IP addresses of the logical interfaces used for intercluster
communication. Separate the addresses with commas. The list of addresses you provide replaces the existing
list of addresses.
[-address-family {ipv4|ipv6}] - Address Family of Relationship
Use this parameter to specify the address family of the names specified with the peer-addrs parameter.
[-timeout <integer>] - Operation Timeout (seconds) (privilege: advanced)
Use this parameter to specify a timeout value for peer communications. Specify the value in seconds.
[-auth-status-admin {no-authentication|use-authentication|revoked}] - Authentication Status Admin
Use this parameter to specify the desired authentication state of the cluster peer relationship. This parameter
can have the following values:
• use-authentication - The cluster peer relationship is to be authenticated. After you use this value, you will
be prompted for a passphrase to be used in determining a new authentication key, just as in the
authenticated cluster peer create command, or you can use the generate-passphrase option to
generate the passphrase automatically.
• revoked - The cluster peer relationship is no longer to be trusted. Peering communication with this cluster
peer is suspended until the two clusters set their auth-status-admin attributes either both to no-
authentication or both to use-authentication.
Examples
This example modifies the peering relationship to use a new IP address in the remote cluster for intercluster
communications and revokes authentication.
View the existing cluster peer configuration using the following command:
Warning: You are removing authentication from the peering relationship with
cluster "cluster2". Use the "cluster peer modify" command on
cluster "cluster2" with the "-auth-status-admin
no-authentication" parameter to complete authentication removal from
the peering relationship.
The following example modifies the peering relationship to use authentication with the -generate-passphrase option.
Notice: Use the below system-generated passphrase in the "cluster peer modify"
command in the other cluster.
Passphrase: UCa+6lRVICXeL/gq1WrK7ShR
Expiration Time: 6/7/2017 09:16:10 +5:30
Peer Cluster Name: cluster2
Until then, the operational authentication state of the relationship remains "pending".
Modify the cluster peer relationship in cluster2 with the use-authentication option, using the auto-generated
passphrase.
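A sketch of the corresponding command on the other cluster, as described above; the cluster name is illustrative:

cluster2::> cluster peer modify -cluster cluster1 -auth-status-admin use-authentication

When prompted, enter the system-generated passphrase shown in the notice on the first cluster.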
Related references
cluster peer create on page 52
Description
The cluster peer modify-local-name command modifies the local name for a remote cluster. The new local name must
be unique among all the local names for the remote clusters with which this cluster is peered.
Parameters
-name <text> - Cluster Peer Name
Use this parameter to specify the existing local name for a peer cluster.
-new-name <Cluster name> - Cluster Peer Local Name
Use this parameter to specify the new local name of the peer cluster. The new local name must conform to the
same rules as a cluster name.
Examples
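A sketch of a typical invocation; the names "cluster2" and "cluster2A" are illustrative:

cluster1::> cluster peer modify-local-name -name cluster2 -new-name cluster2A

After this command, the remote cluster previously displayed locally as "cluster2" is displayed as "cluster2A".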
Related references
cluster identity modify on page 33
Description
The cluster peer ping command displays the status of the network mesh used by the peering relationship. The command
checks the network connection to each remote IP address known by the cluster, including all intercluster addresses. A
known address might be temporarily absent during the ping; such addresses are not checked, but their absence is temporary.
The most useful parameters for diagnosing problems are -count and -packet-size. Use the -count and -packet-size
parameters to diagnose problems similarly to how you use them with the standard ping utility.
To display network connection status within a cluster, use the network ping command.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-originating-node {<nodename>|local}] - Node that Initiates Ping
Use this parameter to send the ping from the node you specify.
[-destination-cluster <Cluster name>] - Cluster to Ping
Use this parameter to specify the peer cluster you wish to ping.
Examples
This example shows a ping of cluster1 and cluster2 from cluster2. All nodes are reachable.
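A sketch of the invocation described above; the cluster name is illustrative:

cluster2::> cluster peer ping -destination-cluster cluster1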
Related references
network ping on page 299
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-cluster <text>] - Peer Cluster Name
Selects the peered clusters that match this parameter value.
[-cluster-uuid <UUID>] - Cluster UUID (privilege: advanced)
Selects the peered clusters that match this parameter value.
[-peer-addrs <Remote InetAddress>, ...] - Remote Intercluster Addresses
Selects the peered clusters that match this parameter value (remote-host name or IP address).
[-availability <availability>] - Availability of the Remote Cluster
Selects the peered clusters that match this parameter value. This parameter can have five different values:
• Available - The peer cluster availability status will be Available only if all the nodes in the local cluster
are able to contact all the nodes in the remote cluster.
• Partial - The peer cluster availability status will be Partial only if some nodes in the local cluster are not
able to contact some or all nodes in the peer cluster.
• Unavailable - The peer cluster availability status will be Unavailable only if all the nodes in the local
cluster are not able to contact any node in the peer cluster.
• Pending - The peer cluster availability status will be Pending while the system is creating in-memory
health data.
• Unidentified - The peer cluster availability status will be Unidentified if the cluster peer offer was created for
an anonymous cluster and is unused. When the offer is used, the availability changes to one of the statuses
mentioned above.
Note: If one or more nodes in the local cluster are offline or unreachable, then those nodes are not used to
determine the availability status for the remote nodes.
[-rcluster <text>] - Remote Cluster Name
Selects the peered clusters that match this parameter value.
[-ip-addrs <Remote InetAddress>, ...] - Active IP Addresses
Selects the peered clusters that match this parameter value.
[-serialnumber <Cluster Serial Number>] - Cluster Serial Number
Selects the peered clusters that match this parameter value.
[-remote-cluster-nodes <text>, ...] - Remote Cluster Nodes
Selects the peered clusters that match this parameter value.
[-remote-cluster-health {true|false}] - Remote Cluster Health
Selects the peered clusters that match this parameter value.
• true - This means that there is cluster quorum in the peer cluster.
• false - This means that there is no cluster quorum in the peer cluster.
[-auth-status-operational {ok|absent|pending|expired|revoked|declined|refused|ok-and-offer|
absent-but-offer|revoked-but-offer|key-mismatch|intent-mismatch|incapable}] - Authentication
Status Operational
Selects the peered clusters that match this parameter value, which must be one of the following values.
• ok - The clusters both use authentication and they have agreed on an authentication key.
• pending - This cluster has made an outstanding offer to authenticate with the other cluster, but agreement
has not yet been reached.
• expired - This cluster's offer to authenticate with the other cluster expired before agreement was reached.
• declined - The other cluster has revoked the authentication agreement and is declining to communicate
with this cluster.
• refused - The other cluster actively refuses the communication attempts, perhaps because its part of the
peering relationship has been deleted.
• ok-and-offer - The clusters agree on an authentication key and are using it. In addition, this cluster has
made an outstanding offer to re-authenticate with the other cluster.
• absent-but-offer - The clusters currently agree that neither side requires authentication of the other, but this
cluster has made an outstanding offer to authenticate.
• revoked-but-offer - This cluster has revoked any authentication agreement, but it has made an outstanding
offer to authenticate.
• key-mismatch - The two clusters both believe that they are authenticated, but one of the shared secrets has
become corrupted.
• incapable - The other cluster is no longer running a version of Data ONTAP that supports authenticated
cluster peering.
Examples
This example shows the output of the cluster peer show command when all nodes in the local cluster are able to contact
all nodes in the remote peer cluster. Additionally, the peer relationship is authenticated and operating correctly.
This example shows the output of the cluster peer show command when some nodes in the local cluster are not able to
contact some or all of the nodes in the remote peer cluster.
This example shows the output of the cluster peer show command when some nodes in the local cluster cannot be
contacted from the node where the command is executed, but all the other nodes, including the node on which the
command is executed, are able to contact all nodes in the remote peer cluster.
This example shows the output of the cluster peer show command when some nodes in the local cluster cannot be
contacted from the node where the command is executed, and the node on which the command is executed is also unable
to contact the remote peer cluster.
This example shows the output of the cluster peer show command when none of the nodes in the local cluster are able
to contact any node in the remote peer cluster.
This example shows the output of the cluster peer show command while the system is creating in-memory health
data.
This example shows the output of the cluster peer show command for the offer created for an anonymous cluster:
Description
The cluster peer connection show command displays information about the current TCP connections and how they are
supporting the set of peering relationships.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-cluster-name <text>] - Remote Cluster
Selects the connections associated with the named peered cluster.
[-node {<nodename>|local}] - Node
Selects the connections hosted by the given node.
[-connection-type {mgmt-client|mgmt-server|data}] - Cluster Peering Connection Type
Selects the connections of the named type. This parameter can have one of three different values:
• Mgmt-client - Management-plane client connections, created so that this node can make management-
plane requests of other nodes.
• Mgmt-server - Management-plane server connections, over which this node services requests made by
other nodes' mgmt-client connections.
• Data - Data-plane connections, which carry cross-cluster data traffic between the peered clusters.
Examples
This example shows the output of the cluster peer connection show command.
Description
The cluster peer health show command displays information about the health of the nodes in peer clusters from the
perspective of the nodes in the local cluster. The command obtains health information by performing connectivity and status
probes of each peer cluster's nodes from each node in the local cluster.
To enable quick access to remote cluster health information, remote cluster health status is periodically checked and cached.
These cached results enable users and system features to quickly assess the availability of remote resources. By default, this
command accesses cached results. Use the -bypass-cache true option to force a current, non-cached check of remote cluster
health.
Examples
The following example shows typical output for this command in a cluster of two nodes that has a peer cluster of two
nodes.
The following example shows detailed health information for node3 in cluster2 from the perspective of node1 in cluster1.
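Sketches of typical invocations; the second form forces a current, non-cached check as described above:

cluster1::> cluster peer health show
cluster1::> cluster peer health show -bypass-cache true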
Related references
cluster peer create on page 52
cluster peer modify on page 58
Description
The cluster peer offer cancel command cancels an outstanding offer to authenticate with a potentially peered cluster.
After the command completes, the given cluster can no longer establish authentication using the given authentication offer.
Parameters
-cluster <text> - Peer Cluster Name
Use this parameter to specify which offer should be cancelled, by specifying the name of the cluster to which
the offer is extended.
Examples
The following example cancels the authentication offer to cluster2.
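The invocation might look like the following sketch; the cluster name "cluster2" is illustrative:

cluster1::> cluster peer offer cancel -cluster cluster2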
Parameters
-cluster <text> - Peer Cluster Name
Use this parameter to specify the offer to be modified by indicating the name of the cluster to which it has
been extended.
[-offer-expiration {MM/DD/YYYY HH:MM:SS | {1..7}days | {1..168}hours | PnDTnHnMnS | PnW}] -
Authentication Offer Expiration Time
Use this parameter to specify the new expiration time for the offer.
[-initial-allowed-vserver-peers <Vserver Name>, ...] - Vservers Initially Allowed for Peering
Use this optional parameter to specify the list of Vservers for which reciprocal Vserver peering with peer
cluster should be enabled.
Examples
This example modifies the expiration time for the authentication offer to push it out by an hour.
cluster1::> cluster peer offer modify -cluster cluster2 -offer-expiration "7/23/2013 16:45:47"
Related references
cluster peer offer cancel on page 74
Description
The cluster peer offer show command displays information about authentication offers still pending with potential peer
clusters. By default, the command displays information about all unexpired offers made by the local cluster.
To display detailed information about a specific offer, run the command with the -cluster parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example displays information about the outstanding authentication offers:
Description
The cluster peer policy modify command modifies the prevailing policy settings. One setting governs whether
unauthenticated cluster peer relationships can exist. The other setting specifies a minimum length for passphrases.
Parameters
[-is-unauthenticated-access-permitted {true|false}] - Is Unauthenticated Cluster Peer Access Permitted
Use this parameter to specify whether unauthenticated peering relationships are allowed to exist. Setting the
parameter value to true allows such relationships to exist. Setting the value to false prevents both the
creation of unauthenticated peering relationships as well as the modification of existing peering relationships
to be unauthenticated. Setting the value to false is not possible if the cluster currently is in any
unauthenticated relationships.
Examples
This example modifies the peering policy to disallow unauthenticated intercluster communications.
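A sketch of the invocation described above:

cluster1::> cluster peer policy modify -is-unauthenticated-access-permitted false

The command fails if the cluster currently participates in any unauthenticated peering relationships.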
Related references
cluster peer create on page 52
cluster peer modify on page 58
Description
The cluster peer policy show command displays the prevailing cluster peer authentication policy. There are two policies
at present: one to control whether any cluster peer relationships can be unauthenticated, and one to control the minimum length
for a passphrase. If the policy is set to preclude unauthenticated peering relationships, then unauthenticated relationships cannot
be created inadvertently. Passphrases of less than the minimum length may not be used. By default, this minimum length is set
to 8, so passphrases must be 8 characters long or longer.
Examples
This example shows the cluster peer policy when unauthenticated relationships may not be created inadvertently.
Description
The cluster quorum-service options modify command modifies the values of cluster quorum services options.
Parameters
[-ignore-quorum-warning-confirmations {true|false}] - Whether or Not Warnings Are Enabled
Specifies whether cluster quorum warnings and confirmations should be ignored when cluster operations
could negatively impact cluster quorum:
Examples
The following example shows the usage of this command:
Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
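The advanced-privilege invocation that produces the prompt above might look like this sketch; the value shown is illustrative:

cluster1::*> cluster quorum-service options modify -ignore-quorum-warning-confirmations true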
Related references
system node halt on page 1266
system node reboot on page 1269
storage failover takeover on page 1012
Description
The cluster quorum-service options show command displays the values of cluster quorum services options.
Examples
The following example demonstrates showing the state of ignore-quorum-warning-confirmations when it is false and true.
Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
Description
The cluster ring show command displays a cluster's ring-replication status. Support personnel might ask you to run this
command to assist with troubleshooting.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the rings that match this parameter value.
[-unitname {mgmt|vldb|vifmgr|bcomd|crs|availd}] - Unit Name
Selects the rings that match this parameter value. Possible values are:
Examples
The following example displays information about all replication rings in a two-node cluster:
Description
The cluster statistics show command displays cluster-wide statistics. Each item lists the current value and, if
applicable, the change (delta) from the previously reported value.
At the diagnostic privilege level, the command displays the following information:
Description
The cluster time-service ntp key create command creates a cryptographic key that can be used to verify that
Network Time Protocol (NTP) packets are coming from a configured NTP server.
To use the created key it must be assigned to the required NTP server configuration using the cluster time-service ntp
server create or cluster time-service ntp server modify commands.
Note: The id, key-type and value must all be configured identically on both the ONTAP cluster and the external NTP time
server for the cluster to be able to synchronize time to that server.
Parameters
-id <integer> - NTP Symmetric Authentication Key ID
Uniquely identifies this key in the cluster. Must be an integer between 1 and 65535.
-type <sha1> - NTP Symmetric Authentication Key Type
The cryptographic algorithm that this key is used with. Only SHA1 is currently supported.
Examples
The following example creates a new SHA-1 NTP symmetric authentication key.
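A sketch of a typical invocation. The -value parameter and the 40-character hexadecimal key string shown here are illustrative assumptions; the same id, type, and value must also be configured on the external NTP server:

cluster1::> cluster time-service ntp key create -id 1 -type sha1 -value 2e874852e7d41cda65b23915aa5544838b366c51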
Related references
cluster time-service ntp server create on page 86
cluster time-service ntp server modify on page 87
Description
Delete an NTP symmetric authentication key.
Note: It is not possible to delete a key that is referenced in an existing NTP server configuration. Remove all references to this
key using the cluster time-service ntp server modify or cluster time-service ntp server delete
commands before attempting to delete the key using this command.
Parameters
-id <integer> - NTP Symmetric Authentication Key ID
Unique identifier of this key in the cluster.
Examples
The following example deletes the NTP key with ID 1.
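The invocation might look like the following sketch:

cluster1::> cluster time-service ntp key delete -id 1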
Related references
cluster time-service ntp server modify on page 87
cluster time-service ntp server delete on page 87
Description
The cluster time-service ntp key modify command modifies a Network Time Protocol (NTP) symmetric
authentication key.
Parameters
-id <integer> - NTP Symmetric Authentication Key ID
Unique identifier of this key in the cluster.
Examples
The following example modifies the NTP key with ID 1 to have a new value.
Description
The cluster time-service ntp key show command displays the configured Network Time Protocol (NTP) symmetric
authentication keys.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-id <integer>] - NTP Symmetric Authentication Key ID
If this parameter is specified, the command displays the keys that match the specified key ID.
[-type <sha1>] - NTP Symmetric Authentication Key Type
If this parameter is specified, the command displays the keys that match the specified key type.
[-value <text>] - NTP Symmetric Authentication Key Value
If this parameter is specified, the command displays the keys that match the specified value.
Examples
The following example displays information about the NTP authentication keys in the cluster:
Description
The cluster time-service ntp security modify command allows setting of security parameters related to the
Network Time Protocol (NTP) subsystem.
Parameters
[-is-query-enabled {true|false}] - Is Querying of NTP Server Enabled?
Setting this parameter to true allows querying of the NTP subsystem from systems external to the cluster. For
example, querying a node using the standard "ntpq" command can be enabled by this command. The default
setting is false to protect against possible security vulnerabilities. If querying of the NTP subsystem is
disabled, the cluster time-service ntp status show command can be used to obtain similar
information. Although querying of the NTP subsystem from external hosts can be disabled with this
command, executing a local query to the localhost address is always enabled.
Examples
The following example enables the querying of the NTP subsystem from clients external to the cluster:
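A sketch of the invocation described above:

cluster1::> cluster time-service ntp security modify -is-query-enabled true

After this change, a node can be queried from an external host with the standard ntpq utility.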
Related references
cluster time-service ntp status show on page 90
Description
The cluster time-service ntp security show command displays the configuration of security features related to the
Network Time Protocol (NTP) subsystem.
Examples
The following example displays the NTP security configuration of the cluster:
Description
The cluster time-service ntp server create command associates the cluster with an external network time server for
time correction and adjustment by using the Network Time Protocol (NTP).
The command resolves the time server host name to an IP address and performs several validation checks. If an error is detected
during validation, it is reported.
The validation checks performed by this command include the following:
• The NTP replies to an NTP query with the specified protocol version.
• The NTP reply indicates that the external time server is synchronized to another time server.
• The distance and dispersion of the NTP reply from the "root" or source clock are within the required limits.
Parameters
-server <text> - NTP Server Host Name, IPv4 or IPv6 Address
This parameter specifies the host name or IP address of the external NTP server that is to be associated with
the cluster for time correction and adjustment.
[-version {3|4|auto}] - NTP Version for Server (default: auto)
Use this parameter to optionally specify the NTP protocol version that should be used for communicating with
the external NTP server. If the external NTP server does not support the specified protocol version, time
exchange cannot take place.
The supported values for this parameter include the following:
• 3 - Use NTP protocol version 3, which is based on Internet Standard request for comments (RFC) #1305.
• 4 - Use NTP protocol version 4, which is based on Internet Standard RFC #5905.
Examples
The following example associates the cluster with an NTP server named ntp1.example.com.
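The invocation might look like the following sketch; the server name is illustrative:

cluster1::> cluster time-service ntp server create -server ntp1.example.com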
Related references
cluster time-service ntp key create on page 82
Description
The cluster time-service ntp server delete command removes the association between the cluster and an external
network time server that uses the Network Time Protocol (NTP).
Parameters
-server <text> - NTP Server Host Name, IPv4 or IPv6 Address
This specifies the host name or IP address of an existing external NTP server that the cluster will disassociate
from.
Examples
The following example disassociates an NTP server named ntp2.example.com from the cluster:
Description
The cluster time-service ntp server modify command modifies the configuration of an existing external network
time server that uses the Network Time Protocol (NTP) for time correction and adjustment.
Parameters
-server <text> - NTP Server Host Name, IPv4 or IPv6 Address
This parameter specifies the host name or IP address of an existing external NTP server that is to be modified.
[-version {3|4|auto}] - NTP Version for Server (default: auto)
Use this parameter to optionally specify the NTP protocol version that should be used for communicating with
the external NTP server. If the external NTP server does not support the specified protocol version, time
exchange cannot take place.
The supported values for this parameter include the following:
• 3 - Use NTP protocol version 3, which is based on Internet Standard request for comments (RFC) #1305.
• 4 - Use NTP protocol version 4, which is based on Internet Standard RFC #5905.
Examples
The following example modifies the NTP version of an NTP server named ntp1.example.com. The NTP version is
changed to 4.
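A sketch of the invocation described above; the server name is illustrative:

cluster1::> cluster time-service ntp server modify -server ntp1.example.com -version 4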
Related references
cluster time-service ntp key create on page 82
Description
The cluster time-service ntp server reset command replaces the current configuration with one of the selected
configurations.
If no configuration, or more than one, is selected, the command fails.
Parameters
[-use-public {true|false}] - Reset Server List to Public Identified Defaults (default: false)
When set to true, this specifies that the public NTP server list used by Data ONTAP should replace the current
configuration.
The default setting is false.
Examples
The following example replaces the current time service configuration with the default public NTP server list that is used
by Data ONTAP.
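A sketch of the invocation described above:

cluster1::> cluster time-service ntp server reset -use-public true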
Description
The cluster time-service ntp server show command displays the association between the cluster and external
network time servers that use the Network Time Protocol (NTP).
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command only displays the fields that you
specify. For example: -fields server, version.
| [-instance ]}
If this parameter is specified, the command displays all the available field information.
[-server <text>] - NTP Server Host Name, IPv4 or IPv6 Address
If this parameter is specified, the command displays the external NTP servers that match the specified server
name or IP address.
[-version {3|4|auto}] - NTP Version for Server (default: auto)
If this parameter is specified, the command displays the external NTP servers that use the specified NTP
version.
[-is-preferred {true|false}] - Is Preferred NTP Server (default: false) (privilege: advanced)
If this parameter is specified, the command displays the external NTP server or servers that match the
specified preferred server status.
[-is-public {true|false}] - Is Public NTP Server Default (privilege: advanced)
If this parameter is specified, the command displays the information for the external NTP servers that are
either on the NTP server list defined by Data ONTAP (true) or not on the list (false).
[-is-authentication-enabled {true|false}] - Is NTP Symmetric Key Authentication Enabled
If this parameter is specified, the command displays the external NTP server or servers that require NTP
symmetric key authentication for communication.
[-key-id <integer>] - NTP Symmetric Authentication Key ID
If this parameter is specified, the command displays the external NTP server or servers that match the
specified symmetric authentication key ID.
Examples
The following example displays information about all external NTP time servers that are associated with the cluster:
Description
The cluster time-service ntp status show command displays the status of the associations between the cluster and
external network time servers that use the Network Time Protocol (NTP).
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all entries.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays information related to associations on the specified node.
[-server <text>] - NTP Server Host Name, IPv4 or IPv6 Address
If this parameter is specified, the command displays information about the associations related to the specified
NTP server. The server should be specified as it is configured in the cluster time-service ntp server
show command.
[-server-address <IP Address>] - Server IP Address
If this parameter is specified, the command displays information about the associations related to the NTP
server with the specified IP address.
[-is-peer-reachable {true|false}] - Is Peer Reachable and Responding to Polls?
If this parameter is specified as true, the command displays information about associations with the NTP
servers that have been successfully polled.
[-is-peer-selected {true|false}] - Is Peer Selected as Clock Source?
If this parameter is specified as true, the command displays information about associations with the NTP
servers that have been selected as the current clock source.
[-selection-state <State of NTP Peer Selection>] - State of Server Selection
If this parameter is specified, the command displays information about associations with the specified
selection state.
[-selection-state-text <text>] - Description of Server Selection State
If this parameter is specified, the command displays information about associations with the specified
selection state description.
[-poll-interval <integer>] - Poll Interval (secs)
If this parameter is specified, the command displays information about associations that have the specified
polling interval.
[-time-last-poll <integer>] - Time from Last Poll (secs)
If this parameter is specified, the command displays information about associations whose time since the
last poll matches the specified value.
Examples
The following example displays the status of the NTP associations of the cluster:
The following example displays the status of the association with the specified external NTP server:
Node: node-1
NTP Server Host Name, IPv4 or IPv6 Address: ntp1.example.com
Server IP Address: 10.56.32.33
Is Peer Reachable and Responding to Polls?: true
Is Peer Selected as Clock Source?: true
State of Server Selection: sys_peer
Description of Server Selection State: Currently Selected Server
Poll Interval (secs): 64
Time from Last Poll (secs): 1
Offset from Server Time (ms): 26.736
Delay Time to Server (ms): 61.772
Maximum Offset Error (ms): 3.064
Reachability of Server: 01
Stratum of Server Clock: 2
Reference Clock at Server: 10.56.68.21
Reported Packet and Peer Errors: -
Event Commands
Manage system events
The event commands enable you to work with system events and set up notifications.
Description
The event catalog show command displays information about events in the catalog. By default, this command displays the
following information:
To display detailed information about a specific event, run the command with the -message-name parameter, and specify the
name of the event. The detailed view adds the following information:
You can specify additional parameters to limit output to the information that matches those parameters. For example, to display
information only about events with an event name that begins with raid, enter the command with the -message-name raid*
parameter. The parameter value can either be a specific text string or a wildcard pattern.
Alternatively, an event filter can also be specified to limit the output events.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-message-name <Message Name>] - Message Name
Selects the events that match this parameter value.
Examples
The following example displays the event catalog:
Description
The event filter copy command copies an existing filter to a new filter. The new filter will be created with rules from the
source filter. For more information, see the event filter create command.
Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter to copy.
-new-filter-name <text> - New Event Filter Name
Use this mandatory parameter to specify the name of the new event filter to create with the rules copied from the source filter.
Filter Name Rule Rule Message Name SNMP Trap Type Severity
Position Type
----------- -------- --------- ---------------------- --------------- --------
no-info-debug-events
2 exclude * * *
12 entries were displayed.
Related references
event filter create on page 95
These fields are evaluated for a match using a logical "AND" operation: name AND severity AND SNMP trap type. Within a
field, the specified values are evaluated with an implicit logical "OR" operation. So, if -snmp-trap-type Standard,
Built-in is specified, then the event must match Standard OR Built-in. The wildcard matches all values for the field.
• Type - include or exclude. When an event matches an include rule, it will be included into the filter, whereas it will be
excluded from the filter if it matches an exclude rule.
Rules are checked in the order they are listed for a filter, until a match is found. There is an implicit rule at the end that matches
every event to be excluded. For more information, see the event filter rule command.
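The matching semantics above (the three fields are ANDed, values within a field are ORed, rules are checked in order until the first match, and an implicit trailing rule excludes everything else) can be sketched in Python. The rule structure below is a simplified stand-in for illustration, not the EMS implementation:

```python
from fnmatch import fnmatch

def field_matches(values, actual):
    # Within a field, the listed values are ORed; "*" matches every value.
    return any(fnmatch(actual, v) for v in values)

def event_included(rules, name, severity, trap_type):
    # Rules are checked in order; the first rule whose three fields
    # all match (logical AND) decides include vs. exclude.
    for rule in rules:
        if (field_matches(rule["names"], name)
                and field_matches(rule["severities"], severity)
                and field_matches(rule["trap_types"], trap_type)):
            return rule["type"] == "include"
    # Implicit final rule: exclude every event that no rule matched.
    return False

rules = [
    {"type": "include", "names": ["wafl.*"],
     "severities": ["EMERGENCY"], "trap_types": ["*"]},
]
print(event_included(rules, "wafl.scan.done", "EMERGENCY", "Severity-based"))  # True
print(event_included(rules, "raid.mirror.ok", "NOTICE", "Standard"))           # False
```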
There are three system-defined event filters provided for your use:
• default-trap-events - This filter matches all ALERT and EMERGENCY events. It also matches all Standard, Built-in SNMP
trap type events.
• no-info-debug-events - This filter matches all non-INFO and non-DEBUG messages (EMERGENCY, ALERT, ERROR and
NOTICE).
Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter to create. An event filter name is 2 to 64
characters long. Valid characters are the following ASCII characters: A-Z, a-z, 0-9, "_", and "-". The name
must start and end with: A-Z, a-z, "_", or 0-9.
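The naming constraints above can be expressed as a single check. The regular expression below is an illustration of the stated rules (2 to 64 characters; "-" allowed only in the interior), not a pattern taken from ONTAP itself:

```python
import re

# 2-64 characters; A-Z, a-z, 0-9, "_" and "-" allowed; the name must
# start and end with A-Z, a-z, "_", or 0-9 (so "-" cannot be first or last).
_FILTER_NAME = re.compile(r'^[A-Za-z0-9_][A-Za-z0-9_-]{0,62}[A-Za-z0-9_]$')

def valid_filter_name(name):
    return bool(_FILTER_NAME.match(name))

print(valid_filter_name("emer-wafl-events"))  # True
print(valid_filter_name("-bad"))              # False: starts with "-"
print(valid_filter_name("a"))                 # False: shorter than 2 characters
```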
Examples
The following example creates an event filter named filter1:
Related references
event filter rule on page 102
Description
The event filter delete command deletes an existing event filter, along with all its rules.
The system-defined event filters cannot be deleted.
For more information, see the event filter create command.
Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter to delete.
Examples
The following example deletes an event filter named filter1:
Related references
event filter create on page 95
Description
The event filter rename command is used to rename an existing event filter.
There are system-defined event filters provided for your use. The system-defined event filters cannot be modified or deleted.
For more information, see the event filter create command.
Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter to rename.
-new-filter-name <text> - New Event Filter Name
Use this mandatory parameter to specify the new name for the event filter.
Examples
The following example renames an existing filter named filter1 as emer-wafl-events:
Related references
event filter create on page 95
Description
The event filter show command displays all the event filters which are configured. An event filter is used to select the
events of interest and is made up of one or more rules, each of which contains the following three fields:
These fields are evaluated for a match using a logical "AND" operation: name AND severity AND SNMP trap type. Within a
field, the specified values are evaluated with an implicit logical "OR" operation. So, if -snmp-trap-type Standard,
Built-in is specified, then the event must match Standard OR Built-in. The wildcard matches all values for the field.
• Type - include or exclude. When an event matches an include rule, it will be included into the filter, whereas it will be
excluded from the filter if it matches an exclude rule.
Rules are checked in the order they are listed for a filter, until a match is found. There is an implicit rule at the end that matches
every event to be excluded. For more information, see event filter rule command.
There are three system-defined event filters provided for your use:
• default-trap-events - This filter matches all ALERT and EMERGENCY events. It also matches all Standard, Built-in SNMP
trap type events.
• no-info-debug-events - This filter matches all non-INFO and non-DEBUG messages (EMERGENCY, ALERT, ERROR and
NOTICE).
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
• include - Events matching this rule are included in the specified filter.
• exclude - Events matching this rule are excluded from the specified filter.
• EMERGENCY - Disruption.
• NOTICE - Information.
• INFORMATIONAL - Information.
• Severity-based - Traps specific to events that do not belong to the above two types.
• * - Includes all SNMP trap types.
Examples
The following example displays the event filters:
The following example displays the event filters queried on the SNMP trap type value "Standard":
The following example displays the event filters with one or more rules that have no condition on the SNMP trap type.
Note that the wildcard character has to be specified in double-quotes. Without double-quotes, output would be the same as
not querying on the field.
Related references
event filter rule on page 102
Description
The event filter test command is used to test an event filter. When specified with a message name, the command displays
whether the message name is included or excluded from the filter. When specified without a message name, the command
displays the number of events from the catalog that match the filter. For more information, see the event filter create
command.
Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter to test.
[-message-name <Message Name>] - Message Name
Use this optional parameter to specify the message name of the event to test against the filter.
Examples
The following example tests an event filter named err-wafl-no-scan-but-clone:
Filter Name Rule Rule Message Name SNMP Trap Type Severity
Position Type
----------- -------- --------- ---------------------- --------------- --------
no-info-debug-events
2 exclude * * *
12 entries were displayed.
Related references
event filter create on page 95
Description
The event filter rule add command adds a new rule to an existing event filter. See event filter create for more
information on event filters and how to create a new event filter.
Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter to which the rule is added. Rules cannot
be added to system-defined event filters.
Examples
The following example adds a rule to an existing event filter "emer-and-wafl": All events with severity EMERGENCY
and message name starting with "wafl.*" are included in the filter. Not specifying the SNMP trap type implies a default
value of "*".
cluster1::> event filter rule add -filter-name emer-and-wafl -type include -message-name wafl.* -
severity EMERGENCY
cluster1::> event filter show
Filter Name Rule Rule Message Name SNMP Trap Type Severity
Position Type
----------- -------- --------- ---------------------- --------------- --------
default-trap-events
1 include * * EMERGENCY, ALERT
2 include * Standard, Built-in
*
3 exclude * * *
emer-and-wafl
1 include wafl.* * EMERGENCY
2 exclude * * *
important-events
1 include * * EMERGENCY, ALERT
2 include callhome.* * ERROR
3 exclude * * *
no-info-debug-events
1 include * * EMERGENCY, ALERT, ERROR,
NOTICE
2 exclude * * *
10 entries were displayed.
The following example adds a rule to the event filter "emer-and-wafl" at position 1: All events with severity ALERT and
message name starting with "wafl.scan.*" are included in the filter.
cluster1::> event filter rule add -filter-name emer-and-wafl -type include -message-name
wafl.scan.* -position 1 -severity ALERT
The following example adds a rule to the event filter "emer-and-wafl" to include all "Standard" SNMP trap type events:
cluster1::> event filter rule add -filter-name emer-and-wafl -type include -snmp-trap-type Standard
Related references
event filter create on page 95
Description
The event filter rule delete command deletes a rule from an event filter. The position of all the rules following the
deleted rule is updated to maintain a contiguous sequence. Use event filter show command to view the filters and the rules
associated with them.
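The renumbering behavior described above (positions of all rules after the deleted one shift down to stay contiguous) can be sketched as follows; the list-of-dicts representation is an assumption for illustration, not the EMS data structure:

```python
def delete_rule(rules, position):
    """Remove the 1-based rule at `position` and renumber the remaining
    rules so their positions stay contiguous, as the command documents."""
    del rules[position - 1]
    for i, rule in enumerate(rules, start=1):
        rule["position"] = i
    return rules

rules = [{"position": 1, "name": "wafl.*"},
         {"position": 2, "name": "raid.*"},
         {"position": 3, "name": "*"}]
delete_rule(rules, 2)
print([r["position"] for r in rules])  # [1, 2]
print([r["name"] for r in rules])      # ['wafl.*', '*']
```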
Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter from which you want to delete the rule.
Rules cannot be deleted from system-defined filters.
Examples
The following example deletes a rule at position 2 from an existing event filter "emer-and-wafl":
Related references
event filter show on page 99
Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter from which you want to change the
position of the rule. Rules from system-defined event filters cannot be modified.
-position <integer> - Rule Position
Use this mandatory parameter to specify the position of the rule you want to change. It should be in the range
(1..n-1), where 'n' is the position of the last rule, which is an implicit rule.
-to-position <integer> - New Rule Position
Use this mandatory parameter to specify the new position to move the rule. It should be in the range (1..n-1),
where 'n' is the position of the last rule, which is an implicit rule.
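The position constraint above, where both the source and target positions must fall in the range (1..n-1) because rule n is the implicit trailing rule, can be sketched like this; the validation and list representation are illustrative assumptions:

```python
def move_rule(rules, position, to_position):
    """Move the 1-based rule at `position` to `to_position`. Both must be
    in 1..n-1, where rule n is the implicit exclude-all rule that always
    stays last (a sketch of the documented constraint, not ONTAP internals)."""
    n = len(rules) + 1                      # +1 accounts for the implicit rule
    for p in (position, to_position):
        if not 1 <= p <= n - 1:
            raise ValueError("position out of range (1..%d)" % (n - 1))
    rule = rules.pop(position - 1)
    rules.insert(to_position - 1, rule)
    return rules

rules = ["include wafl.scan.*", "include wafl.*"]
move_rule(rules, 1, 2)
print(rules)  # ['include wafl.*', 'include wafl.scan.*']
```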
Examples
The following example changes the position of a rule from 1 to 2 from an existing event filter "emer-and-wafl":
Description
The event config force-sync command forces a node's EMS configuration to be synchronized with the cluster wide EMS
configuration. The configuration is automatically synchronized among all nodes in the cluster, but in rare cases a node may not
be updated. This command simplifies the recovery from this issue.
The following example shows where this command is useful: An email destination is configured for all CRITICAL level event
occurrences. When the event is generated, all nodes generate an email except one. This command forces that node to refresh a
stale configuration.
Parameters
[-node {<nodename>|local}] - Node
The node parameter specifies which controller will be synchronized.
Description
Use the event config modify command to configure event notification and logging for the cluster.
Parameters
[-mail-from <mail address>] - Mail From
Use this parameter to configure the email address from which email notifications will be sent. You can
configure the cluster to send email notifications when specific events occur. Use the event route add-
destinations and event destination create commands to configure email destinations for events.
[-mail-server <text>] - Mail Server (SMTP)
Use this parameter to configure the name or IP address of the SMTP server used by the cluster when sending
email notification of events.
[-suppression {on|off}] - Event Throttling/Suppression (privilege: advanced)
Use this parameter to configure whether event suppression algorithms are enabled ("on") or disabled ("off").
The event processing system implements several algorithms to throttle events. The documentation for the event
show-suppression command describes the suppression algorithms in detail.
Note: The suppression parameter can disable both autosuppression and duplicate suppression, but timer
suppression cannot be disabled.
Examples
The following command sets the "Mail From" address for event notifications to "admin@example.com" and the "Mail
Server" to "mail.example.com":
Related references
event route add-destinations on page 131
event destination create on page 110
event log show on page 116
event config set-proxy-password on page 108
Description
Use the event config set-proxy-password command to set the password for authenticated access to an HTTP or HTTPS
proxy being used for EMS notifications. This password is used with the user name you specify using the event config
modify -proxy-user command to send EMS messages to REST API destinations via the proxy you specify using the event
config modify -proxy-url command. If you enter the command without parameters, the command prompts you for a
password and for a confirmation of that password. Enter the same password at both prompts. The password is not displayed.
Examples
The following example shows successful execution of this command:
Description
The event config show command displays information about the configuration of event notification and event logging for
the cluster.
"Mail From" is the email address that the event notification system uses as the "From" address for email notifications.
"Mail Server" is the name or IP address of the SMTP server that the event notification system uses to send email notification of
events.
"Proxy URL" is the HTTP or HTTPS proxy server URL used by rest-api type EMS notification destinations if your organization
uses a proxy.
"Proxy User Name" is the user name for the HTTP or HTTPS proxy server if authentication is required.
"Suppression" indicates whether event suppression algorithms are enabled ("on") or disabled ("off"). The event processing
system implements several algorithms to throttle events.
Note: The suppression parameter can disable both autosuppression and duplicate suppression, but not timer suppression.
"Console" indicates whether events are displayed on the console port ("on") or not displayed ("off").
Examples
The following example displays the configuration of event notification for the cluster:
The following example displays the configuration of event notification with HTTP or HTTPS proxy:
Description
Note: This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead, use the "event
notification destination" command set.
The event destination create command creates a new event destination. An event destination is a list of addresses that
receive event notifications. These addresses can be e-mail addresses, SNMP trap hosts, and syslog servers. Event destinations
are used by event routes. Event routes describe which events generate notifications, and event destinations describe where to
send those notifications.
When you create a destination, you can add e-mail addresses, SNMP trap hosts, and syslog hosts to the definition of the
destination. Once the destination is fully defined, use the event route add-destinations command to associate the
destination with event routes so that notifications of those events are sent to the recipients in the destination.
To see the current list of all destinations and their recipients, use the event destination show command.
There are several default destinations provided for your use.
• allevents - A useful destination for all system events, though no events are routed to this destination by default.
• asup - Events routed to this destination trigger AutoSupport(tm). Only use this destination to send notifications to technical
support. See system node autosupport for more information.
• criticals - A useful destination for critical events though no events are routed to this destination by default.
• pager - A useful destination for all events that are urgent enough to page a system administrator, though no events are routed
to this destination by default.
• traphost - The default destination for all SNMP traps. You can also use the system snmp traphost add command to add
SNMP recipients to the traphost default destination.
To add recipients to the default destinations, use the event destination modify command.
You should not create a destination that sends events to more than one type of recipient. Use separate destinations for e-mail,
SNMP, and syslog activity. Also, use the traphost default destination for all SNMP activity. You must not create any other
destination that sends traps to SNMP trap hosts. The traphost default destination is not required to be added to any event route.
Parameters
-name <text> - Name
This mandatory parameter specifies the name of the event destination to create.
Examples
The following example creates an event destination named support.email that e-mails events to the addresses
supportmgr@example.com, techsupport@example.com, and oncall@example.com.
This example creates an event destination named support.bucket01 that sends the notifications to a syslog host.
Related references
event config modify on page 107
event route add-destinations on page 131
event destination show on page 114
system node autosupport on page 1279
system snmp traphost add on page 1437
event destination modify on page 112
Description
Note: This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead, use the "event
notification destination" command set.
• asup - Events routed to this destination trigger AutoSupport(tm). Only use this destination to send notifications to technical
support. See system node autosupport for more information.
• criticals - A useful destination for critical events though no events are routed to this destination by default.
• pager - A useful destination for all events that are urgent enough to page a system administrator, though no events are routed
to this destination by default.
• traphost - The default destination for all SNMP traps. You can also use the system snmp traphost delete command to
delete SNMP recipients from the traphost default destination.
To see the current list of all destinations, use the event destination show command. To add a new destination to the list,
use the event destination create command.
Parameters
-name <text> - Name
This mandatory parameter specifies the event destination to delete.
Examples
The following example deletes an event destination named manager.pager:
Related references
event route remove-destinations on page 133
system node autosupport on page 1279
system snmp traphost delete on page 1437
event destination show on page 114
event destination create on page 110
Description
Note: This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead, use the "event
notification destination" command set.
The event destination modify command changes the definition of an existing event destination. An event destination is a
list of addresses that receive event notifications. These addresses can be e-mail addresses, SNMP traphosts, and syslog servers.
You must not create a destination that sends events to more than one type of recipient. Use separate destinations for e-mail,
SNMP, and syslog activity. Also, use the traphost default destination for all SNMP activity. You should not create any other
destination that sends to SNMP traphosts. The traphost default destination is not required to be added to any event route.
Parameters
-name <text> - Name
This mandatory parameter specifies the name of the event destination to modify.
[-mail <mail address>, ...] - Mail Destination
Use this parameter to specify one or more e-mail addresses to which event notifications will be sent. For
events to properly generate e-mail notifications, the event system must also be configured with an address and
mail server from which to send mail. See event config modify for more information.
[-snmp <Remote IP>, ...] - SNMP Destination
To send traps to SNMP trap hosts, use this parameter with the host names or IP addresses of those trap hosts.
[-syslog <Remote IP>, ...] - Syslog Destination
Use this parameter with the host names or IP addresses of any remote syslog daemons to which syslog entries
will be sent.
[-syslog-facility <Syslog Facility>] - Syslog Facility
This parameter optionally specifies a syslog facility with which the syslog is sent. Possible values for this
parameter are default, local0, local1, local2, local3, local4, local5, local6, and local7. If you specify the default
syslog facility, syslogs are tagged LOG_KERN or LOG_USER.
[-snmp-community <text>] - SNMP Trap Community
To specify an SNMP trap community, use this parameter with that string.
[-hide-parameters {true|false}] - Hide Parameter Values?
Enter this parameter with the value "true" to hide event parameters by removing them from event notifications.
This is useful to prevent sensitive information from being sent over non-secure channels. Enter it with the
value "false" to turn off parameter hiding.
Examples
The following example modifies an event destination named snmp.hosts to send events to SNMP trap hosts named
traphost1 and traphost2:
This example adds the e-mail address of a remote support facility to an existing list of e-mail recipients.
Name: support
Mail Destination: support.hq@company.com
SNMP Destination: -
Syslog Destination: -
Syslog Facility: -
SNMP Trap Community: -
Hide Parameter Values?: -
Name: support
Mail Destination: support.hq@company.com, support.remote@company.com
SNMP Destination: -
Syslog Destination: -
Syslog Facility: -
SNMP Trap Community: -
Hide Parameter Values?: -
Related references
event config modify on page 107
event destination show on page 114
Description
Note: This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead, use the "event
notification destination" command set.
The event destination show command displays information about configured event destinations. An event destination is a
list of addresses that receive event notifications. These addresses can be e-mail addresses, SNMP trap hosts, and syslog servers.
Event destinations are used by event routes. Event routes describe which events generate notifications, and event destinations
describe where to send those notifications.
Default destinations:
• allevents - A useful destination for all system events, though no events are routed to this destination by default.
• asup - Events routed to this destination trigger AutoSupport(tm). Only use this destination to send notifications to technical
support. See system node autosupport for more information.
• criticals - A useful destination for critical events although no events are routed to this destination by default.
• pager - A useful destination for all events that are urgent enough to page a system administrator, though no events are routed
to this destination by default.
• traphost - The default destination for all SNMP traps. You can also use the system snmp traphost show command to
view SNMP recipients for the traphost default destination.
To add recipients to the default destination, use the event destination modify command.
Note: While you can use both host names and IP addresses with parameters, only IP addresses are stored. Unless all DNS and
reverse-DNS operations complete successfully, IP addresses might appear in command output.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-facility ]
Displays only the syslog destinations and syslog facilities.
Examples
The following example displays information about all event destinations:
Hide
Name Mail Dest. SNMP Dest. Syslog Dest. Params
---------------- ----------------- ------------------ ------------------ ------
allevents - - logger.example.com -
asup - - - -
criticals oncall - - -
@example.com
pager pager@example.com - - -
support.email supportmgr - - -
@example.com,
techsupport
@example.com,
oncall
@example.com - - -
traphost - th0.example.com, - -
th1.example.com
Related references
system node autosupport on page 1279
system snmp traphost show on page 1438
event destination modify on page 112
Description
The event log show command displays the contents of the event log, which lists significant occurrences within the cluster.
Use the event catalog show command to display information about events that can occur.
By default, the command displays EMERGENCY, ALERT and ERROR severity level events with the following information,
with the most recent events listed first:
To display detailed information about events, use one or more of the optional parameters that affect how the command output is
displayed and the amount of detail that is included. For example, to display all detailed event information, use the -detail
parameter.
To display NOTICE, INFORMATIONAL or DEBUG severity level events, use the -severity parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-detail ]
Displays additional event information such as the sequence number of the event.
| [-detailtime ]
Displays detailed event information in reverse chronological order.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays a list of events for the node you specify. Use this parameter with the -seqnum parameter to display
detailed information.
[-seqnum <Sequence Number>] - Sequence#
Selects the events that match this parameter value. Use with the -node parameter to display detailed
information.
[-time <MM/DD/YYYY HH:MM:SS>] - Time
Selects the events that match this parameter value. Use the format: MM/DD/YYYY HH:MM:SS [+-
HH:MM]. You can specify a time range by using the ".." operator between two time statements.
Comparative time values are relative to "now". For example, to display only events that occurred within the
last minute:
• ERROR - Degradation.
• NOTICE - Information.
• INFORMATIONAL - Information.
To display all events, including ones with severity levels of NOTICE, INFORMATIONAL and DEBUG,
specify severity as follows:
[-ems-severity {NODE_FAULT|SVC_FAULT|NODE_ERROR|SVC_ERROR|WARNING|NOTICE|INFO|DEBUG|VAR}] -
EMS Severity (privilege: advanced)
Selects the events that match this parameter value. Severity levels:
• NODE_FAULT - Data corruption has been detected or the node is unable to provide client service
• SVC_FAULT - A temporary loss of service, typically a transient software fault, has been detected
• NODE_ERROR - A hardware error that is not immediately fatal has been detected
• SVC_ERROR - A software error that is not immediately fatal has been detected
Examples
Selects the events that match this parameter value. Use the format: MM/DD/YYYY HH:MM:SS [+- HH:MM]. You can
specify a time range by using the ".." operator between two time statements.
Comparative time values are relative to "now". For example, to display only events that occurred within the last minute:
Selects the events that match this parameter value. Severity levels are as follows:
• EMERGENCY - Disruption.
• ERROR - Degradation.
• NOTICE - Information.
• INFORMATIONAL - Information.
To display all events, including ones with severity levels of NOTICE, INFORMATIONAL and DEBUG, specify severity
as follows:
This example demonstrates how to use a range with the -time parameter to display all events that occurred during an
extended time period. It displays all events that occurred between 1:45pm and 1:50pm on November 9, 2010.
The -time parameter also accepts values that are relative to "now". The following example displays events that occurred
more than one hour ago:
Severity levels sort in the order opposite to what you might expect. The following example displays all events that have a
severity level of ERROR or more severe:
Related references
event catalog show on page 92
Description
Note: This command has been deprecated. It may be removed from a future major release of Data ONTAP. Instead, use the
"event notification history" command set.
The event mailhistory delete command deletes a record from the e-mail history.
To delete a record, you must know which node contains the record, and the record's sequence number. Use the event
mailhistory show command to view this information.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the node that contains the e-mail history record to delete.
-seqnum <Sequence Number> - Sequence Number
Use this parameter to specify the sequence number of the e-mail history record to delete.
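No example block survives in this copy; a minimal sketch, assuming node "node1" holds a record with sequence number 15:

```
cluster1::> event mailhistory delete -node node1 -seqnum 15
```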
Related references
event mailhistory show on page 120
Description
Note: This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead, use the "event
notification history" command set.
The event mailhistory show command displays a list of the event notifications that have been e-mailed. The command
output depends on the parameters you specify with the command. By default, the command displays basic information about all
notification e-mails that were sent.
To display detailed information about a specific mail-history record, run the command with the -seqnum parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the mail-history records that match this parameter value.
[-seqnum <Sequence Number>] - Sequence Number
Selects the mail-history records that match this parameter value.
[-message-name <Message Name>] - Message Name
Selects the mail-history records that match this parameter value.
[-address <mail address>, ...] - Mail Address
Selects the mail-history records that match this parameter value.
[-time <MM/DD/YYYY HH:MM:SS>] - Transmission Time
Selects the mail-history records that match this parameter value.
[-message <text>] - Alert Message
Selects the mail-history records that match this parameter value (text pattern).
[-previous-time <MM/DD/YYYY HH:MM:SS>] - Previous Transmission Time
Selects the mail-history records that match this parameter value.
[-num-drops-since-previous <integer>] - Number of Drops Since Previous Transmission
Selects the mail-history records that match this parameter value (number of event drops since last
transmission).
Description
The event notification create command creates a new notification that forwards the set of events defined by an event filter to one or more notification destinations.
Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter. Events that are included in the event filter
are forwarded to the destinations specified in the destinations parameter.
The filter name passed to this command must be an existing filter. For more information, see the event
filter create command.
Examples
The following example creates an event notification for filter name "filter1" to destinations "email_dest, snmp-traphost
and syslog_dest":
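The command block is missing here; a reconstructed invocation matching the example text (sketch):

```
cluster1::> event notification create -filter-name filter1 -destinations email_dest,snmp-traphost,syslog_dest
```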
Related references
event filter create on page 95
event destination create on page 110
Description
The event notification delete command deletes an existing event notification.
Parameters
-ID <integer> - Event Notification ID
Use this parameter to specify the ID of the notification to be deleted.
Examples
The following example shows the deletion of event notification with ID 1:
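A reconstructed invocation (sketch):

```
cluster1::> event notification delete -ID 1
```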
Description
The event notification modify command modifies an existing event notification.
Examples
The following example shows the modification of event notification with ID 1:
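A reconstructed invocation (sketch; the new filter name "filter2" is a placeholder, and the parameter set assumed here mirrors event notification create):

```
cluster1::> event notification modify -ID 1 -filter-name filter2
```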
Description
The event notification show command displays the list of existing event notifications.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-ID <integer>] - Event Notification ID
Use this parameter to display the detailed information about the notification ID you specify.
[-filter-name <text>] - Event Filter Name
Use this parameter to display event notifications that use the filter-name you specify.
[-destinations <text>, ...] - List of Event Notification Destinations
Use this parameter to display event notifications that use the destinations you specify.
Examples
The following example displays the event notification:
Description
The event notification destination create command creates a new event notification destination of email, syslog, or rest-api type.
The following system-defined notification destination is configured for your use:
• snmp-traphost - This destination reflects the configuration in "system snmp traphost".
Parameters
-name <text> - Destination Name
Use this mandatory parameter to specify the name of the notification destination that is to be created. An event
notification destination name must be 2 to 64 characters long. Valid characters are the following ASCII
characters: A-Z, a-z, 0-9, "_", and "-". The name must start and end with: A-Z, a-z, or 0-9.
{ -email <mail address> - Email Destination
Use this parameter to specify the email address to which event notifications will be sent. For events to properly
generate email notifications, the event system must also be configured with an address and mail server from
which the mail will be sent. See event config modify command for more information.
| -syslog <text> - Syslog Destination
Use this parameter to specify the syslog server host name or IP address to which syslog entries will be sent.
| -rest-api-url <text> - REST API Server URL
Use this parameter to specify the REST API server URL to which event notifications will be sent. Enter the full
URL, which must start with either an http:// or https:// prefix. To specify a URL that contains a question mark,
press ESC followed by the "?".
If an https:// URL is specified, then Data ONTAP verifies the identity of the destination host by validating its
certificate. If the Online Certificate Status Protocol (OCSP) is enabled for EMS, then Data ONTAP uses that
protocol to determine the certificate's revocation status. Use the security config ocsp show -application ems command to determine whether the OCSP-based certificate revocation status check is enabled for EMS.
[-certificate-authority <text>] - Client Certificate Issuing CA
Use this parameter to specify the name of the certificate authority (CA) that signed the client certificate that
will be sent in case mutual authentication with the REST API server is required.
There can be multiple client certificates installed for the admin vserver in the cluster, and this parameter, along
with the certificate-serial parameter, uniquely identifies which one.
Use the security certificate show command to see the list of certificates installed in the cluster.
Examples
The following example shows the creation of a new event notification destination of type email called
"StorageAdminEmail":
The following example shows the creation of a new event notification destination of type rest-api called "RestApi":
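Reconstructed invocations for both examples (sketches; the email address and URL are placeholders, not values from the original examples):

```
cluster1::> event notification destination create -name StorageAdminEmail -email storage-admins@example.com
cluster1::> event notification destination create -name RestApi -rest-api-url https://rest.example.com/rest
```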
Related references
event config modify on page 107
security certificate show on page 478
Description
The event notification destination delete command deletes an event notification destination.
The following system-defined notification destination is configured for your use:
• snmp-traphost - This destination reflects the configuration in "system snmp traphost". To remove snmp-traphost addresses,
use the system snmp traphost command.
Parameters
-name <text> - Destination Name
Use this mandatory parameter to specify the name of an event destination to be removed.
Examples
The following shows the examples of deleting event notification destinations:
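A reconstructed invocation (sketch; "StorageAdminEmail" is a destination name carried over from the create examples):

```
cluster1::> event notification destination delete -name StorageAdminEmail
```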
Related references
system snmp traphost on page 1436
Description
The event notification destination modify command modifies an event notification destination.
The following system-defined notification destination is configured for your use:
• snmp-traphost - This destination reflects the configuration in "system snmp traphost". To modify traphost addresses, use the
system snmp traphost command.
Parameters
-name <text> - Destination Name
Use this mandatory parameter to specify the name of an event notification destination to be modified. The
name of the destination must already exist.
{ [-email <mail address>] - Email Destination
Use this parameter to specify a new value of email address to replace the current address in the event
notification destination. The parameter is specified only when the event notification destination type is already
"email". You cannot specify this parameter for a destination that already has a different type of destination address.
| [-syslog <text>] - Syslog Destination
Use this parameter to specify a new syslog server host name or IP address to replace the current address of the event notification destination. The parameter is specified only when the event notification destination type is already "syslog".
Examples
The following example shows the modification of event notification destinations:
The following example shows how to clear the client certificate configuration when mutual authentication with the REST
API server is no longer required:
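Reconstructed invocations for both examples (sketches; the new email address is a placeholder, and the use of "-" to clear the certificate fields is an assumption, not confirmed by this copy):

```
cluster1::> event notification destination modify -name StorageAdminEmail -email new-admins@example.com
cluster1::> event notification destination modify -name RestApi -certificate-authority - -certificate-serial -
```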
Related references
security certificate show on page 478
system snmp traphost on page 1436
Description
The event notification destination show command displays event notification destinations. Note: For a rest-api destination type, OCSP information is not included; it is available through the security config ocsp show -app ems command.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-name <text>] - Destination Name
Use this optional parameter to display information of an event notification destination that has the specified
name.
[-type {snmp|email|syslog|rest-api}] - Type of Destination
Use this optional parameter to display information of event notification destinations that have the specified
destination type.
[-destination <text>, ...] - Destination
Use this optional parameter to display information of event notification destinations that have the specified
destination address. Enter multiple addresses separated by a comma.
[-server-ca-present {true|false}] - Server CA Certificates Present?
Use this optional parameter to display information of event notification destinations that have the specified
server-ca-present value. This field indicates whether certificates of the server-ca type exist in the system. If not, event messages will not be sent to a rest-api type destination that has an HTTPS URL.
[-certificate-authority <text>] - Client Certificate Issuing CA
Use this optional parameter to display information of event notification destinations that have the specified
certificate authority name.
[-certificate-serial <text>] - Client Certificate Serial Number
Use this optional parameter to display information of event notification destinations that have the specified
certificate serial number.
Examples
The following shows examples of "event notification destination show":
Description
The event notification history show command displays a list of event messages that have been sent to a notification
destination. Information displayed by the command for each event is identical to that of the event log show command. This
command displays events sent to a notification destination while the event log show command displays all events that have been
logged.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
-destination <text> - Destination
Specifies the destination whose sent event messages are to be displayed.
[-node {<nodename>|local}] - Node
Displays a list of events for the node you specify. Use this parameter with the -seqnum parameter to display
detailed information.
• ERROR - Degradation.
• NOTICE - Information.
• INFORMATIONAL - Information.
Examples
The following example displays all the events which match "important-events" filter and forwarded to the "snmp-
traphost" destination:
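A reconstructed invocation (sketch; it filters on the destination only, since that is the parameter documented above):

```
cluster1::> event notification history show -destination snmp-traphost
```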
Description
Note: This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead, use the "event
notification" command set.
The event route add-destinations command adds destinations to an event route. Any existing destinations assigned to
the route are not removed.
The destinations you add must already exist. See the documentation for the event destination create command for
information about creating destinations. To show all existing destinations and their attributes, use the event destination
show command. To remove destinations from an event route, use the event route remove-destinations command.
You can use extended queries with such parameters as -severity and -snmp-support to specify multiple events that meet
certain criteria. See examples below that show how to use extended queries.
Parameters
-message-name <Message Name> - Message Name
Specify the message name of the event you are modifying. You can use wildcards to specify a family of events
or type of event.
[-severity {EMERGENCY|ALERT|ERROR|NOTICE|INFORMATIONAL|DEBUG}] - Severity
Use this optional parameter to specify a set of events that match this parameter value. You must use the -message-name parameter with wildcards to specify the family of events or type of events.
-destinations <Event Destination>, ... - Destinations
Specify a comma-separated list of destinations to which notifications for the named event are sent. These
destinations will be added to any existing destinations assigned to this event route.
Examples
The following example specifies that all RAID events go to the destinations named support.email, mgr.email, and
sreng.pager:
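A reconstructed invocation (sketch, using a wildcard to cover the RAID event family):

```
cluster1::> event route add-destinations -message-name raid* -destinations support.email,mgr.email,sreng.pager
```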
The following example specifies that all alert and emergency events go to the destination named test_dest:
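A reconstructed invocation (sketch; "<=ALERT" matches ALERT and EMERGENCY because severity levels sort in reverse order):

```
cluster1::> event route add-destinations -message-name * -severity <=ALERT -destinations test_dest
```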
The following example specifies that all alert events that support an SNMP trap go to the destination named traphost. In
this example, because the -snmp-support parameter is specified as part of extended queries, the -severity parameter
must also be specified in the extended queries:
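A reconstructed invocation (sketch; the brace syntax for extended queries is an assumption based on the surrounding text):

```
cluster1::> event route add-destinations {-snmp-support true -severity ALERT} -destinations traphost
```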
Related references
event destination create on page 110
event destination show on page 114
event route remove-destinations on page 133
Description
Note: This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead, use the "event
notification" command set.
Use the event route modify command to modify an event's destination, frequency threshold, and time threshold. The
event's destination must already exist; see the documentation for the event destination create command for information
about creating destinations. The frequency threshold and time threshold prevent multiple event notifications in a brief period of
time.
You can use extended queries with such parameters as -severity and -snmp-support to specify multiple events that meet
certain criteria. See examples provided in the event route add-destinations command manpage that show how to use
extended queries.
The frequency threshold specifies the number of times an event occurs before a repeat notification of the event is sent; for
instance, a frequency threshold of 5 indicates that a notification is sent every fifth time an event occurs. The time threshold
specifies the number of seconds between notifications for an event; for instance, a time threshold of 120 indicates that a
notification is sent only if it has been two minutes or more since the last notification for that event was sent.
If both the frequency threshold and time threshold are set, a notification is sent if either threshold is met. For instance, if the
frequency threshold is set to 5 and the time threshold is set to 120, and the event occurs more than five times within two
minutes, a notification is sent. If both thresholds are set to 0 (zero) or empty ("-" or ""), there is no suppression of multiple event
notifications.
Parameters
-message-name <Message Name> - Message Name
Specify the message name of the event you are modifying. You can use wildcards to specify a family of events
or type of event.
[-severity {EMERGENCY|ALERT|ERROR|NOTICE|INFORMATIONAL|DEBUG}] - Severity
Use this optional parameter to specify a set of events that match this parameter value. You must use the -message-name parameter with wildcards to specify the family of events or type of events.
[-destinations <Event Destination>, ...] - Destinations
Use this optional parameter to specify a comma-separated list of destinations to which notifications for the named event are sent. Using this parameter replaces the current list of destinations with the list of destinations you specify.
Examples
The following example modifies all RAID events to send messages to a destination named "support.email", and specifies
that repeat notifications should be sent only if an event occurs more than five times within 60 seconds.
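A reconstructed invocation (sketch; the -frequencythreshold and -timethreshold parameter names are assumptions based on the thresholds described above):

```
cluster1::> event route modify -message-name raid* -destinations support.email -frequencythreshold 5 -timethreshold 60
```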
Related references
event route add-destinations on page 131
event route remove-destinations on page 133
event destination create on page 110
Description
Note: This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead, use the "event
notification" command set.
The event route remove-destinations command can be used to remove existing destinations from an event route. This
command removes only the specified destinations from the route, leaving any other destinations assigned to that route.
The named destinations are not deleted, just removed from the specified event route. To delete a destination entirely, use the
event destination delete command. To show all existing destinations and their attributes, use the event destination
show command.
You can use extended queries with such parameters as -severity and -snmp-support to specify multiple events that meet
certain criteria. See examples provided in the event route add-destinations command manpage that show how to use
extended queries.
Parameters
-message-name <Message Name> - Message Name
Specify the message name of the event you are modifying. You can use wildcards to specify a family of events
or type of event.
[-severity {EMERGENCY|ALERT|ERROR|NOTICE|INFORMATIONAL|DEBUG}] - Severity
Use this optional parameter to specify a set of events that match this parameter value. You must use the -message-name parameter with wildcards to specify the family of events or type of events.
-destinations <Event Destination>, ... - Destinations
Specify a comma-separated list of destinations to remove from the event's list of destinations.
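No example block survives for this command; a minimal sketch:

```
cluster1::> event route remove-destinations -message-name raid* -destinations sreng.pager
```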
Related references
event destination delete on page 111
event destination show on page 114
event route add-destinations on page 131
Description
Note: This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead, use the "event
catalog" command set.
This command displays information about event routes. Event routes describe which events generate notifications. A route
specifies what to watch for, whom to notify, and what to do should a particular event occur. By default, the command displays
the following information:
To display detailed information about a specific event route, run the command with the -message-name parameter, and specify
the name of the message. The detailed view adds the following information:
You can specify additional parameters to limit output to the information that matches those parameters. For example, to display
information only about events with a message name that begins with "raid", run the command with the -message-name
raid* parameter. You can enter either a specific text string or a wildcard pattern.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-message-name <Message Name>] - Message Name
Selects the event routes that match this parameter value.
• EMERGENCY - Disruption
• ERROR - Degradation
• NOTICE - Information
• INFORMATIONAL - Information
• DEBUG - Debug information
Examples
The following example displays information about all event routes:
Description
Note: This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead, use the "event
notification history" command set.
The event snmphistory delete command deletes an SNMP trap-history record. To delete a record, you must know which node generated the event and the event's sequence number in the trap history.
Use the event snmphistory show command to display a list of trap-history records and their sequence numbers.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the node that contains the snmp history record to delete.
-seqnum <Sequence Number> - Sequence Number
Use this parameter to specify the sequence number of the SNMP trap-history record to delete.
Examples
The following example deletes all SNMP trap-history records on node1:
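A reconstructed invocation (sketch; the wildcard for -seqnum is an assumption):

```
cluster1::> event snmphistory delete -node node1 -seqnum *
```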
Related references
event snmphistory show on page 136
Description
Note: This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead, use the "event
notification history" command set.
The event snmphistory show command displays a list of event notifications that have been sent to SNMP traps. The
command output depends on the parameters specified with the command. By default, the command displays general information
about all trap-history records.
To display detailed information about a specific trap-history record, run the command with the -seqnum parameter.
Examples
The following example displays information about all SNMP trap-history records:
Description
The event status show command summarizes information about occurrences of events. For detailed information about
specific occurrences of events, use the event log show command.
• NODE_FAULT - The node has detected data corruption, or is unable to provide client service.
• NODE_ERROR - The node has detected a hardware error that is not immediately fatal.
• SVC_ERROR - The node has detected a software error that is not immediately fatal.
• VAR - These messages have variable severity. Severity level for these messages is selected at runtime.
Examples
The following example displays recent event-occurrence status for node1:
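A reconstructed invocation for this first example (sketch):

```
cluster1::> event status show -node node1
```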
The following example displays a summary of events which are warnings or more severe:
cluster1::> event status show -node node1 -severity <=warning -fields indications,drops,severity
node message-name indications drops severity
------- ------------------------ ----------- ----- --------
node1 api.output.invalidSchema 5463 840 WARNING
node1 callhome.dsk.config 1 0 WARNING
node1 callhome.sys.config 1 0 SVC_ERROR
node1 cecc_log.dropped 145 0 WARNING
node1 cecc_log.entry 5 0 WARNING
node1 cecc_log.entry_no_syslog 4540 218 WARNING
node1 cecc_log.summary 5 0 WARNING
node1 cf.fm.noPartnerVariable 5469 839 WARNING
node1 cf.fm.notkoverBadMbox 1 0 WARNING
node1 cf.fm.notkoverClusterDisable 1 0 WARNING
node1 cf.fsm.backupMailboxError 1 0 WARNING
node1 cf.takeover.disabled 23 0 WARNING
node1 cmds.sysconf.logErr 1 0 NODE_ERROR
node1 config.noPartnerDisks 1 0 NODE_ERROR
node1 fci.initialization.failed 2 0 NODE_ERROR
node1 fcp.service.adapter 1 0 WARNING
node1 fmmb.BlobNotFound 1 0 WARNING
node1 ha.takeoverImpNotDef 1 0 WARNING
node1 httpd.config.mime.missing 2 0 WARNING
node1 mgr.opsmgr.autoreg.norec 1 0 WARNING
node1 monitor.globalStatus.critical 1 0 NODE_ERROR
The example below demonstrates using the -probability parameter and -fields parameter together to display a list
of events that might be suppressed.
Related references
event config modify on page 107
event log show on page 116
Job Commands
Manage jobs and job schedules
The job commands enable you to manage jobs and schedules. A job is defined as any asynchronous operation.
job delete
Delete a job
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The job delete command deletes a job. Use the job show command to view a list of running jobs that can be deleted.
Parameters
-id <integer> - Job ID
The numeric ID of the job you want to delete. A job ID is a positive integer.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
Examples
The following example deletes the job that has ID 99:
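The command block is missing here; a reconstruction from the example text (sketch):

```
cluster1::> job delete -id 99
```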
job pause
Pause a job
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The job pause command pauses a job. Use the job resume command to resume a paused job. Use the job show command
to view a list of running jobs that can be paused.
Parameters
-id <integer> - Job ID
The numeric ID of the job you want to pause. A job ID is a positive integer.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
Examples
The following example pauses the job that has ID 183:
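A reconstructed invocation (sketch):

```
cluster1::> job pause -id 183
```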
Related references
job resume on page 141
job show on page 142
job resume
Resume a job
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The job resume command resumes a job that was previously paused by using the job pause command. Use the job show
command to view a list of paused jobs that can be resumed.
Parameters
-id <integer> - Job ID
The numeric ID of the paused job to be resumed. A job ID is a positive integer.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
Examples
The following example resumes the paused job that has ID 183:
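A reconstructed invocation (sketch):

```
cluster1::> job resume -id 183
```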
Related references
job pause on page 141
job show on page 142
job show
Display a list of jobs
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The job show command displays information about jobs. By default, the command displays information about all current jobs.
To display detailed information about a specific job, run the command with the -id parameter.
You can specify additional parameters to select information that matches the values you specify for those parameters. For
example, to display information only about jobs running on a specific node, run the command with the -node parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-inprogress ]
Displays the job ID, the job name, the owning Vserver, and the progress of the job.
| [-jobstate ]
Displays information about each job's state, including the queue state, whether the job was restarted, and when
the job has completely timed out.
| [-sched ]
Displays the job ID, the job name, the owning Vserver, and the schedule on which the job runs.
| [-times ]
Displays the job ID, the job name, the owning Vserver, the time when the job was last queued, the time when
the job was last started, and the time when the job most recently ended.
| [-type ]
Displays the job ID, the job name, the job type, and the job category.
| [-jobuuid ] (privilege: advanced)
Displays the job ID, the job name, the owning Vserver, and the job UUID.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-id <integer>] - Job ID
Selects the jobs that match the ID or range of IDs that you specify.
[-vserver <vserver name>] - Owning Vserver
Selects jobs that are owned by the specified Vserver.
job show-bynode
Display a list of jobs by node
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The job show-bynode command displays information about jobs on a per-node basis. The command output depends on the
parameters specified with the command. If no parameters are specified, the command displays information about all jobs in the
cluster that are currently owned by a node.
To display detailed information about a specific job, run the command with the -id parameter. The detailed view includes all of
the default information plus additional items.
You can specify additional parameters to display only information that matches the values you specify for those parameters. For
example, to display information only about jobs running on a specific node, run the command with the -node parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display information only about the jobs that are associated with the node you specify.
[-id <integer>] - Job ID
Use this parameter to display information only about the jobs that match the ID or range of IDs you specify.
[-vserver <vserver name>] - Owning Vserver
Use this parameter with the name of a Vserver to display only jobs that are owned by that Vserver.
Examples
The following example displays information about all jobs on a per-node basis:
job show-cluster
Display a list of cluster jobs
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The job show-cluster command displays information about cluster-affiliated jobs. The command output depends on the
parameters specified with the command. If no parameters are specified, the command displays information about all cluster-
affiliated jobs.
To display detailed information about a specific job, run the command with the -id parameter. The detailed view includes all of
the default information plus additional items.
You can specify additional parameters to display only information that matches the values you specify for those parameters. For
example, to display information only about jobs running on a specific node, run the command with the -node parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-id <integer>] - Job ID
Use this parameter to display information only about the jobs that match the ID or range of IDs you specify.
Examples
The following example displays information about all cluster-affiliated jobs:
job show-completed
Display a list of completed jobs
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The job show-completed command displays information about completed jobs. The command output depends on the
parameters you specify with the command. If you do not use any parameters, the command displays information about all
completed jobs.
To display detailed information about a specific job, run the command with the -id parameter. The detailed view includes all of
the default information plus additional items.
You can specify additional parameters to display only information that matches those parameters. For instance, to display
information only about jobs running on a specific node, run the command with the -node parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-id <integer>] - Job ID
Use this parameter to display information only about the jobs that match the ID or range of IDs you specify.
[-vserver <vserver name>] - Owning Vserver
Use this parameter with the name of a Vserver to display only jobs that are owned by that Vserver.
[-name <text>] - Name
Use this parameter to display information only about the jobs that match the name you specify.
[-description <text>] - Description
Use this parameter to display information only about the jobs that match the description you specify.
[-priority {Low|Medium|High|Exclusive}] - Priority
Use this parameter to display information only about the jobs that match the priority you specify.
Examples
The following example displays information about all completed jobs:
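Command line for this example (output omitted):

cluster1::> job show-completed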
job stop
Stop a job
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The job stop command stops a running job. A stopped job cannot be resumed. Use the job pause command to pause a job
so that you can later resume it. Use the job show command to view a list of running jobs.
Parameters
-id <integer> - Job ID
The numeric ID of the job to stop. A job ID is a positive integer.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
Examples
The following example stops the job that has ID 101:
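Command line for this example:

cluster1::> job stop -id 101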
Related references
job pause on page 141
job show on page 142
job unclaim
Unclaim a cluster job
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
Description
The job unclaim command causes a cluster-affiliated job that is owned by an unavailable node to be unclaimed by that node.
Another node in the cluster can then take ownership of the job. Use the job show-cluster command to obtain a list of
cluster-affiliated jobs.
Parameters
-id <integer> - Job ID
Use this parameter to specify the ID number of the job to unclaim.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
Related references
job show-cluster on page 145
job watch-progress
Watch the progress of a job
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The job watch-progress command displays the progress of a job, and periodically updates that display. You can specify the
frequency of the updates.
Parameters
-id <integer> - Job ID
Use this parameter to specify the numeric ID of the job to monitor.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
[-interval <integer>] - Refresh Interval (seconds)
Use this parameter to specify the number of seconds between updates.
Examples
The following example shows how to monitor the progress of the job that has ID 222 on Vserver vs0. The progress
display updates every 3 seconds.
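Command line for this example, built from the parameters described above (progress output omitted):

cluster1::> job watch-progress -vserver vs0 -id 222 -interval 3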
Description
The job history show command displays a history of completed jobs, with newer entries displayed first. You can specify
optional parameters to display only the job history items that match those parameters. For example, to display
information about jobs that were completed on February 27 at noon, run the command with -endtime "02/27 12:00:00".
Examples
The following example displays information about all completed jobs:
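Command line for this example (output omitted):

cluster1::> job history show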
The following example shows how to use a range with the "endtime" parameter to select only the events that ended
between 8:15 and 8:16 on August 22nd.
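One way to express this range, assuming the CLI's "value..value" range query syntax for date-time fields (output omitted):

cluster1::> job history show -endtime "08/22 08:15:00".."08/22 08:16:00"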
Description
The job initstate show command displays information about the initialization states of job-manager processes.
Examples
The following example shows how to display general job-manager initialization-state information for a cluster.
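Command line for this example (output omitted):

cluster1::> job initstate show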
Description
The job private delete command deletes a private job. Private jobs are affiliated with a specific node and do not use any
cluster facilities, such as the replicated database.
If you use this command on a job that does not support the delete operation, the command returns an error message.
Use the job private show command to view a list of private jobs that can be deleted.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node with which the private job is associated.
-id <integer> - Job ID
Use this parameter to specify the numeric ID of the private job to be deleted. A job ID is a positive integer.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
Examples
The following example shows how to delete the job that has ID 273 from the node named node2:
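Command line for this example:

cluster1::> job private delete -node node2 -id 273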
Related references
job private show on page 156
Description
The job private pause command pauses a private job. Private jobs are affiliated with a specific node and do not use any
cluster facilities, such as the replicated database.
If you use this command to pause a job that does not support the pause operation, the command returns an error message.
Use the job private resume command to resume a paused private job.
Use the job private show command to view a list of private jobs.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node with which the private job is associated.
-id <integer> - Job ID
Use this parameter to specify the numeric ID of the private job to be paused. A job ID is a positive
integer.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
Examples
The following example pauses the private job that has ID 99 on the node node1:
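Command line for this example:

cluster1::> job private pause -node node1 -id 99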
Related references
job private resume on page 155
job private show on page 156
Description
The job private resume command resumes a private job that was paused by using the job private pause command.
Private jobs are affiliated with a specific node and do not use any cluster facilities, such as the replicated database.
Use the job private show command to view a list of paused private jobs that can be resumed.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node with which the paused private job is associated.
-id <integer> - Job ID
Use this parameter to specify the numeric ID of the paused private job to be resumed. A job ID is a positive
integer.
Examples
The following example resumes the paused private job that has ID 99 on a node named node2:
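Command line for this example:

cluster1::> job private resume -node node2 -id 99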
Related references
job private pause on page 155
job private show on page 156
Description
The job private show command displays information about private jobs. Private jobs are affiliated with a specific node and
do not use any cluster facilities, such as the replicated database.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-inprogress ]
Displays the job ID, name, owning Vserver, and progress of each private job.
| [-jobstate ]
Displays information about each private job's state, including the queue state, whether the job was restarted,
and when the job has timed out.
| [-jobuuid ]
Displays the ID, name, owning Vserver, and UUID of each private job.
| [-sched ]
Displays the job ID, name, owning Vserver, and run schedule of each private job.
| [-times ]
Displays the queue time, start time, and end time of each private job.
| [-type ]
Displays the type and category of each private job.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the private jobs that match this parameter value.
[-id <integer>] - Job ID
Selects the private jobs that match the ID or range of IDs that you specify.
Examples
The following example displays information about all private jobs on the local node:
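Command line for this example (output omitted):

cluster1::> job private show -node local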
Description
The job private show-completed command displays information about completed private jobs. Private jobs are affiliated
with a specific node and do not use any cluster facilities, such as the replicated database.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display information only about completed jobs that are associated with the node you
specify.
[-id <integer>] - Job ID
Use this parameter to display information only about completed jobs that have the ID you specify.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to display only completed jobs that are owned by the Vserver you specify.
[-name <text>] - Name
Use this parameter to display information only about completed jobs that have the name you specify.
[-description <text>] - Description
Use this parameter to display information only about completed jobs that have the description you specify.
[-priority {Low|Medium|High|Exclusive}] - Priority
Use this parameter to display information only about completed jobs that have the priority you specify.
[-schedule <job_schedule>] - Schedule
Use this parameter to display information only about completed jobs that have the schedule you specify.
[-queuetime <MM/DD HH:MM:SS>] - Queue Time
Use this parameter to display information only about completed jobs that have the queue time you specify.
Examples
The following example shows how to display information about all completed private jobs on the node named node1:
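Command line for this example (output omitted):

cluster1::> job private show-completed -node node1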
Description
The job private stop command stops a private job. Private jobs are affiliated with a specific node and do not use any
cluster facilities, such as the replicated database.
Parameters
-node {<nodename>|local} - Node
This specifies the node on which the job is running.
-id <integer> - Job ID
This specifies the numeric ID of the job that is to be stopped.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
Examples
The following example stops a private job with the ID 416 on a node named node0:
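Command line for this example:

cluster1::> job private stop -node node0 -id 416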
Description
The job private watch-progress command displays and periodically updates the progress of a private job. A private job
is a job that is associated with a specific node and does not use cluster facilities. You can specify the frequency of the progress
updates.
Parameters
-node {<nodename>|local} - Node
This specifies the node on which the job is running.
-id <integer> - Job ID
This specifies the numeric ID of the job whose progress is to be monitored.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
[-interval <integer>] - Refresh Interval (seconds)
This optionally specifies, in seconds, the frequency of the updates.
Examples
The following example monitors the progress of the private job that has ID 127 on a node named node1. The progress is
updated every 2 seconds.
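Command line for this example, built from the parameters described above (progress output omitted):

cluster1::> job private watch-progress -node node1 -id 127 -interval 2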
Description
The job schedule delete command deletes a schedule. Use the job schedule show command to display all current
schedules.
You cannot delete any schedules that are in use by jobs. Use the job schedule show-jobs command to display jobs by
schedule.
You cannot delete any schedules that are referenced by:
• SnapMirror entries
You must remove all references to a schedule before you can delete it. If you attempt to delete a schedule that is referenced, an
error message will list which entries reference the schedule you want to delete. Use the show command for each of the items
listed by the error message to display which entries reference the schedule. You may need to use the -instance parameter to
display more detail.
Parameters
[-cluster <Cluster name>] - Cluster
This parameter specifies the name of the cluster on which you want to delete a schedule. By default, the
schedule is deleted from the local cluster. In a MetroCluster configuration, the partner cluster can be specified
if the local cluster is in switchover state.
-name <text> - Schedule Name
Use this parameter with the name of an existing schedule to specify the schedule you want to delete.
Examples
The following example deletes a schedule named overnightbackup:
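Command line for this example:

cluster1::> job schedule delete -name overnightbackup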
Related references
job schedule show on page 162
job schedule show-jobs on page 162
Description
The job schedule show command displays information about schedules.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-cluster <Cluster name>] - Cluster
Selects the schedules that match this parameter value.
[-name <text>] - Schedule Name
Selects the schedules that match this parameter value.
[-type {cron|interval|builtin}] - Schedule Type
Selects the schedules that match this parameter value.
[-description <text>] - Description
Selects the schedules that match this parameter value.
Examples
The following example displays information about all schedules:
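Command line for this example (output omitted):

cluster1::> job schedule show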
Description
The job schedule show-jobs command displays information about jobs that are associated with schedules.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
Examples
The following example shows information about schedules that are associated with jobs:
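Command line for this example (output omitted):

cluster1::> job schedule show-jobs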
Description
The job schedule cron create command creates a cron schedule. A cron schedule, like a UNIX cron job, runs at a
specified time. You can also specify months, days of the month, or days of the week on which the schedule will run.
If you specify values for both days of the month and days of the week, they are considered independently. For example, a cron
schedule with the day specification Friday, 13 runs every Friday and on the 13th day of each month, not just on every Friday the
13th.
Parameters
[-cluster <Cluster name>] - Cluster
This parameter specifies the name of the cluster on which you want to create a cron schedule. By default, the
schedule is created on the local cluster. In a MetroCluster configuration, the partner cluster can be specified if
the local cluster is in switchover state.
Examples
The following example creates a cron schedule named weekendcron that runs on weekend days (Saturday and Sunday) at
3:00 a.m.
cluster1::> job schedule cron create -name weekendcron -dayofweek "Saturday, Sunday" -hour 3 -minute 0
Description
The job schedule cron delete command deletes a cron schedule. Use the job schedule cron show command to
display all current cron schedules.
You cannot delete any cron schedules that are associated with jobs. Use the job schedule show-jobs command to display
jobs by schedule.
Parameters
[-cluster <Cluster name>] - Cluster
This parameter specifies the name of the cluster on which you want to delete a cron schedule. By default, the
schedule is deleted from the local cluster. In a MetroCluster configuration, the partner cluster can be specified
if the local cluster is in switchover state.
-name <text> - Name
Use this parameter with the name of an existing cron schedule to specify the cron schedule that you want to
delete.
Examples
The following example deletes a cron schedule named midnightcron:
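Command line for this example:

cluster1::> job schedule cron delete -name midnightcron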
Related references
job schedule cron show on page 166
job schedule show-jobs on page 162
Description
The job schedule cron modify command modifies a cron schedule. A cron schedule, like a UNIX cron job, runs at a
specified time. You can also specify months, days of the month, or days of the week on which the schedule runs. Use the job
schedule cron show command to display all current cron schedules. See the documentation for job schedule cron
show for more information about how cron schedules work.
Modifying one parameter of a cron schedule does not affect the other parameters. For example, if a cron schedule is set to run at
3:15 a.m., and you modify the "hour" parameter to 4, the schedule's new time is 4:15 a.m. To clear a parameter of the
schedule, you must explicitly set that portion to "0" or "-". Some parameters can also be set to "all".
Parameters
[-cluster <Cluster name>] - Cluster
Use this parameter to specify the cluster of an existing cron schedule you want to modify. The local cluster is
provided as the default value. In a MetroCluster configuration, the partner cluster can be specified if the local
cluster is in switchover state.
-name <text> - Name
Use this parameter with the name of an existing cron schedule to specify the cron schedule you want to
modify.
[-month <cron_month>, ...] - Month
Use this parameter to specify a new "month" value for the cron schedule. Valid values are January, February,
March, April, May, June, July, August, September, October, November, December, or all. Specify "all" to run
the schedule every month.
[-dayofweek <cron_dayofweek>, ...] - Day of Week
Use this parameter to specify a new "day of week" value for the cron schedule. Valid values include Sunday,
Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, or all. Specify "all" to run the schedule every day.
[-day <cron_dayofmonth>, ...] - Day
Use this parameter to specify a new "day of month" value for the cron schedule. Valid values range from 1 to
31.
[-hour <cron_hour>, ...] - Hour
Use this parameter to specify a new "hour of the day" value for the cron schedule. Valid values range from 0
(midnight) to 23 (11:00 p.m.). Specify "all" to run the schedule every hour.
[-minute <cron_minute>, ...] - Minute
Use this parameter to specify a new "minute of the hour" value for the cron schedule. Valid values range from
0 to 59.
Examples
The following example modifies a cron schedule named weekendcron so that it runs at 3:15 a.m.:
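Command line for this example, built from the parameters described above:

cluster1::> job schedule cron modify -name weekendcron -hour 3 -minute 15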
Related references
job schedule cron show on page 166
Description
The job schedule cron show command displays information about cron schedules. A cron schedule runs a job at a
specified time on specified days.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-cluster <Cluster name>] - Cluster
Selects the cron schedules that match this parameter value.
[-name <text>] - Name
Selects the cron schedules that match this parameter value.
[-month <cron_month>, ...] - Month
Selects the cron schedules that match this parameter value. Valid values are January, February, March,
April, May, June, July, August, September, October, November, December, or all.
[-dayofweek <cron_dayofweek>, ...] - Day of Week
Selects the cron schedules that match this parameter value. Valid values include Sunday, Monday, Tuesday,
Wednesday, Thursday, Friday, Saturday, or all.
[-day <cron_dayofmonth>, ...] - Day
Selects the cron schedules that match this parameter value. Valid values range from 1 to 31.
[-hour <cron_hour>, ...] - Hour
Selects the cron schedules that match this parameter value.
[-minute <cron_minute>, ...] - Minute
Selects the cron schedules that match the minute or range of minutes that you specify.
[-description <text>] - Description
Selects the cron schedules that match this parameter value.
Examples
The following example displays information about all current cron schedules:
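Command line for this example (output omitted):

cluster1::> job schedule cron show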
The following example displays information about the cron schedule named weekly:
Cluster: cluster1
Name: weekly
Month: -
Day of Week: Sunday
Day: -
Hour: 0
Minute: 15
Description: Sun@0:15
Description
The job schedule interval create command creates an interval schedule. An interval schedule runs jobs at specified intervals after
the previous job finishes. For instance, if a job uses an interval schedule of 12 hours and takes 30 minutes to complete, successive
runs start 12 hours and 30 minutes apart: each new run begins 12 hours after the previous run finishes.
Each of the numerical parameters of the interval must be a whole number. These parameters can be used individually, or
combined to define complex time values. For example, use a value of 1 day, 12 hours to create an interval of 1.5 days.
Large parameter values are converted into larger units. For example, if you create a schedule with an interval of 36 hours, the
job schedule interval show command will display it with an interval of 1 day 12 hours.
Parameters
[-cluster <Cluster name>] - Cluster
This parameter specifies the name of the cluster on which you want to create an interval schedule. By default,
the schedule is created on the local cluster. In a MetroCluster configuration, the partner cluster can be
specified if the local cluster is in switchover state.
-name <text> - Name
Use this parameter to specify the name of the interval schedule you want to create.
[-days <integer>] - Days
Use this parameter to specify the "days" portion of the schedule's interval. A day is one calendar day.
Examples
The following example creates an interval schedule named rollingdaily that runs six hours after the completion of the
previous occurrence of the job:
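Command line for this example, assuming the create command accepts the same -hours interval parameter documented for job schedule interval modify:

cluster1::> job schedule interval create -name rollingdaily -hours 6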
Related references
job schedule interval show on page 169
Description
The job schedule interval delete command deletes an interval schedule. Use the job schedule interval show
command to display all current interval schedules.
You cannot delete interval schedules that are currently being run. Use the job schedule show-jobs command to display
jobs by schedule.
Parameters
[-cluster <Cluster name>] - Cluster
This parameter specifies the name of the cluster on which you want to delete an interval schedule. By default,
the schedule is deleted from the local cluster. In a MetroCluster configuration, the partner cluster can be
specified if the local cluster is in switchover state.
-name <text> - Name
Use this parameter with the name of an existing interval schedule to specify the interval schedule you want to
delete.
Examples
The following example deletes an interval schedule named rollingdaily:
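Command line for this example:

cluster1::> job schedule interval delete -name rollingdaily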
Related references
job schedule interval show on page 169
job schedule show-jobs on page 162
Description
The job schedule interval modify command modifies an interval schedule. An interval schedule runs jobs at a specified
interval after the previous job finishes. Use the job schedule interval show command to display all current interval
schedules. See the documentation of job schedule interval show for more information on how interval schedules work.
Modifying one parameter of a schedule's interval does not affect the other parameters. For example, if a schedule's interval is 1
day 12 hours, and you modify the "hours" parameter to 16, the schedule's new interval is 1 day 16 hours. To clear a parameter of
the schedule's interval, you must explicitly set that parameter to "0" or "-".
Parameters
[-cluster <Cluster name>] - Cluster
Use this parameter to specify the cluster of an existing interval schedule you want to modify. The local cluster
is provided as the default value. In a MetroCluster configuration, the partner cluster can be specified if the
local cluster is in switchover state.
-name <text> - Name
Use this parameter with the name of an existing interval schedule to specify the interval schedule you want to
modify.
[-days <integer>] - Days
Use this parameter to specify a different "days" value for the schedule's interval.
[-hours <integer>] - Hours
Use this parameter to specify a different "hours" value for the schedule's interval.
[-minutes <integer>] - Minutes
Use this parameter to specify a different "minutes" value for the schedule's interval.
[-seconds <integer>] - Seconds
Use this parameter to specify a different "seconds" value for the schedule's interval.
Examples
The following example sets the schedule named rollingdaily to run every eight hours:
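Command line for this example, assuming the schedule's interval is expressed only in hours:

cluster1::> job schedule interval modify -name rollingdaily -hours 8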
Related references
job schedule interval show on page 169
Description
The job schedule interval show command displays information about interval schedules.
Examples
The following example displays information about all interval schedules:
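Command line for this example (output omitted):

cluster1::> job schedule interval show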
lun commands
Manage LUNs
lun create
Create a new LUN
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
This command creates a new LUN of a specific size. You cannot create a LUN at a path that already exists. You must create
LUNs at the root of a volume or qtree. You cannot create LUNs in the Vserver root volume.
Note: When you create a LUN from a file, that file cannot be deleted without deleting the LUN itself.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
{ -path <path> - LUN Path
Specifies the path of the new LUN. The LUN path cannot contain any files. Examples of correct LUN paths
are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume that contains the new LUN.
[-qtree <qtree name>] - Qtree Name
Specifies the qtree that contains the new LUN.
-lun <text>} - LUN Name
Specifies the new LUN name. A LUN name is a case-sensitive name and has the following requirements:
• Can contain the letters A-Z, a-z, numbers 0-9, "-", "_", "}", "{", and ".".
-size <size> - LUN Size
Specifies the size of the new LUN. The size can be specified with a one-character multiplier suffix:
• c (1 byte)
• w (2 bytes)
• B (512 bytes)
• k (1024 bytes)
• M (k*k bytes)
• G (k*m bytes)
• T (m*m bytes)
-ostype <LUN Operating System Format> - OS Type
Specifies the OS type for the new LUN. The OS types include:
• hyper_v - the LUN stores Windows Server 2008 or Windows Server 2012 Hyper-V data
• linux - the LUN stores a Linux raw disk without a partition table.
• windows - the LUN stores a raw disk type in a single-partition Windows disk using the Master Boot
Record (MBR) partitioning style.
• windows_gpt - the LUN stores Windows data using the GUID Partition Type (GPT) partitioning style.
• windows_2008 - the LUN stores Windows data for Windows 2008 or later systems.
[-caching-policy <text>] - Caching Policy
Specifies the caching policy for the LUN. The caching policies are:
• auto - Read caches all metadata and randomly read user data blocks, and write caches all randomly
overwritten user data blocks.
• random_read - Read caches all metadata and randomly read user data blocks.
• random_read_write - Read caches all metadata, randomly read and randomly written user data blocks.
• all_read - Read caches all metadata, randomly read and sequentially read user data blocks.
• all_read_random_write - Read caches all metadata, randomly read, sequentially read, and randomly written
user data.
• all - Read caches all data blocks read and written. It does not do any write caching.
Examples
cluster1::> lun create -vserver vs1 -path /vol/vol1/lun1 -size 100M -ostype linux
lun delete
Delete the LUN
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
{ -path <path> - LUN Path
Specifies the path of the LUN you want to delete. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume that contains the LUN you want to delete.
[-qtree <qtree name>] - Qtree Name
Specifies the qtree that contains the LUN you want to delete.
-lun <text>} - LUN Name
Specifies the LUN that you want to delete.
[-force | -f [true]] - Force Deletion of an Online and Mapped LUN
Force deletion of an online LUN that is mapped to an initiator group.
[-force-fenced [true]] - Force Deletion of a Fenced LUN
Force deletion of a LUN that is currently fenced.
Examples
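A representative invocation (the Vserver name and path are illustrative):

cluster1::> lun delete -vserver vs1 -path /vol/vol1/lun1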
Related references
lun mapping delete on page 212
lun maxsize
Display the maximum possible size of a LUN on a given volume or qtree.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
This command returns the maximum size of LUNs for different OS types in a volume or qtree. The command also reports the
maximum possible size for LUNs with or without Snapshots. You can specify the path of the volume or qtree to
determine the maximum size of a LUN that you want to create within that volume or qtree.
If you do not specify a path, the command returns the maximum LUN size for each OS type for all volumes and qtrees in a
cluster.
The available space in a volume can change over time, which means that the size reported by lun maxsize can change as well.
In addition, the maximum LUN size allowed in a lun resize command may be less than the size reported by lun maxsize.
Parameters
[-ostype <LUN Operating System Format>] - OS Type
Specifies the OS type for which the maximum LUN size is reported. The OS types include:
• hyper_v - the LUN stores Windows Server 2008 or Windows Server 2012 Hyper-V data
• linux - the LUN stores a Linux raw disk without a partition table.
• windows - the LUN stores a raw disk type in a single-partition Windows disk using the Master Boot
Record (MBR) partitioning style.
• windows_gpt - the LUN stores Windows data using the GUID Partition Type (GPT) partitioning style.
• windows_2008 - the LUN stores Windows data for Windows 2008 or later systems.
Related references
lun resize on page 180
lun modify
Modify a LUN
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
This command modifies LUN attributes. Because LUN modifications can result in data corruption or other problems, we
recommend that you call technical support if you are unsure of the possible consequences of modifying a LUN.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
{ -path <path> - LUN Path
Specifies the path for the LUN you want to modify. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume for the LUN you want to modify.
-qtree <qtree name> - Qtree Name
Specifies the qtree for the LUN you want to modify.
-lun <text>} - LUN Name
Specifies the name for the LUN you want to modify. A LUN name is a case-sensitive name and has the
following requirements:
• Can contain the letters A through Z, a through z, numbers 0 through 9, hyphen (-), underscore (_), right
bracket (}), left bracket ({) and period (.).
{ [-serial <text>] - Serial Number
Specifies the new serial number for the LUN you want to modify.
Some of the characters that are valid in a LUN serial number also have special meaning to the cluster shell
command line:
• The question mark (?) activates the command line active help. In order to type a question mark as part of a
LUN's serial number, it is necessary to disable active help with the command set -active-help
false. Active help can later be re-enabled with the command set -active-help true.
• The number sign (#) indicates the beginning of a comment to the command line and will cause the
remainder of the line to be ignored. To avoid this, enclose the serial number in double quotes (").
Alternatively, the -serial-hex parameter can be used to set the LUN serial number specifying the serial
number encoded in hexadecimal form.
| [-serial-hex <Hex String>]} - Serial Number (Hex)
Specifies the serial number, encoded in hexadecimal form, for the LUN you want to modify. See the
description of the -serial parameter for additional details.
[-comment <text>] - Comment
Specifies the comment for the LUN you want to modify.
[-space-allocation {enabled|disabled}] - Space Allocation
Specifies the new value for the space allocation attribute of the LUN. The space allocation attribute determines
if the LUN supports the SCSI Thin Provisioning features defined in the Logical Block Provisioning section of
the SCSI SBC-3 standard.
Specifying enabled for this parameter enables support for the SCSI Thin Provisioning features.
Specifying disabled for this parameter disables support for the SCSI Thin Provisioning features.
Hosts and file systems that do not support SCSI Thin Provisioning should not enable space allocation.
[-state {online|offline|nvfail|space-error|foreign-lun-error}] - State
Specifies the administrative state of a LUN. The options are:
• online
• offline
Examples
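A representative invocation (the Vserver name, LUN path, and comment text are placeholders):
cluster1::> lun modify -vserver vs1 -path /vol/vol1/lun1 -comment "production database LUN"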
lun move-in-volume
Move a LUN within a volume
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
This command moves a LUN to a new path in the same volume or renames a LUN. If you are organizing LUNs in a qtree, the
command moves a LUN from one qtree to another. You can perform a LUN move while the LUN is online and serving data.
The process is non-disruptive. Use the lun move start command to move a LUN to a different volume within the same
Vserver.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
{ -path <path> - LUN Path
Specifies the path of the LUN you want to move. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume of the LUN you want to move.
[-qtree <qtree name>] - Qtree Name
Specifies the qtree of the LUN you want to move.
-lun <text>} - LUN Name
Specifies the name of the LUN that you want to move.
{ -new-path <path> - New LUN Path
Specifies the new path of the LUN. Examples of correct LUN paths are /vol/vol1/lun1 and /vol/vol1/
qtree1/lun1.
| [-new-qtree <qtree name>] - New Qtree Name
Specifies the new qtree name that you want to move the LUN to.
-new-lun <text>} - New LUN Name
Specifies the new name of the LUN.
Examples
cluster1::> lun move-in-volume -vserver vs1 -volume vol1 -lun lun1 -new-lun newlun1
Related references
lun move start on page 220
lun resize
Changes the size of the LUN to the input value size.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
This command resizes a LUN. You can resize a LUN that is mapped and online. However, to prevent any potential problems,
take the LUN offline before resizing it.
When you reduce the size of a LUN, its data can be truncated, so the command returns an error if you attempt a size reduction. To override the error, use the force parameter.
When you increase the size of a LUN, the maximum resize size is based on the initial geometry of the LUN and the currently
available space in the volume. You will receive an error message if you exceed this limit. The lun show -instance command
reports the "Maximum Resize Size" for a LUN based on the initial geometry. The lun maxsize command reports the
maximum LUN size based on the available space. The maximum size of the LUN is the smaller of the two limits reported by the
lun show -instance and lun maxsize commands.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
{ -path <path> - LUN Path
Specifies the path of the LUN that you want to resize. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume that contains the LUN that you want to resize.
[-qtree <qtree name>] - Qtree Name
Specifies the qtree that contains the LUN that you want to resize.
-lun <text>} - LUN Name
Specifies the LUN name that you want to resize.
[-force | -f [true]] - Force Reduce LUN Size
Overrides any warnings if you are reducing the size of the LUN. If you use this parameter without a value, it is
set to true, and the command does not prompt you when reducing the size of a LUN would produce warnings.
If you do not use this parameter, the command displays an error if reducing the size of a LUN would create a
problem.
[-size <size>] - New Size
Specifies the new size of the LUN, using one of the following unit suffixes:
• c (1 byte)
• B (512 bytes)
• k (1024 bytes)
• M (k*k bytes)
• G (k*m bytes)
• T (m*m bytes)
Examples
cluster1::> lun resize -vserver vs1 -path /vol/vol1/lun1 -size 500M
Error: command failed: Reducing LUN size without coordination with the host system
may cause permanent data loss or corruption. Use the force flag to allow
LUN size reduction.
Related references
lun show on page 181
lun maxsize on page 174
lun show
Display a list of LUNs
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The lun show command displays information for LUNs. Use the -instance parameter to display additional LUN details, such as
serial number and space-reservation settings.
[-ostype <text>] - OS Type
Selects the LUNs that match this parameter value. The operating system types include:
• hyper_v- the LUN stores Windows Server 2008 or Windows Server 2012 Hyper-V data
• linux- the LUN stores a Linux raw disk without a partition table.
• windows- the LUN stores a raw disk type in a single-partition Windows disk using the Master Boot
Record (MBR) partitioning style.
• windows_gpt- the LUN stores Windows data using the GUID Partition Type (GPT) partitioning style.
• windows_2008- the LUN stores Windows data for Windows 2008 or later systems.
[-serial <text>] - Serial Number
Selects the LUNs that match this parameter value. A LUN serial number can contain:
• upper- and lower-case letters
• numbers
• the characters: &, <, >, /, -, #, $, %, *, +, =, ?, @, [, !, ], ^, ~
Some of the characters that are valid in a LUN serial number also have special meaning to the cluster shell
command:
• The question mark (?) activates the command line active help. In order to type a question mark as part of a
LUN's serial number, it is necessary to disable active help with the command set -active-help
false. Active help can later be re-enabled with the command set -active-help true.
• The number sign (#) indicates the beginning of a comment to the command line and will cause the
remainder of the line to be ignored. To avoid this, enclose the serial number in double quotes (").
• The less than (<), greater than (>), asterisk (*), and exclamation point (!) influence the query behavior of
the command. To use these as characters in a LUN's serial query, you must first press escape (ESC). To use
these characters to influence the query, enclose the serial number, or partial serial number, in double quotes
(") and apply <, >, *, or !, outside of the double quotes.
Alternatively, the -serial-hex parameter can be used to select LUNs using the serial number encoded in
hexadecimal form.
[-serial-hex <Hex String>] - Serial Number (Hex)
Selects the LUNs that match this parameter value. This parameter applies to the LUN serial number encoded
in hexadecimal form. See the description of the -serial parameter for additional details.
[-comment <text>] - Comment
Selects the LUNs that match this parameter value.
[-space-reserve-honored {true|false}] - Space Reservations Honored
Selects the LUNs that match this parameter value. A value of true selects LUNs whose space reservation is honored
by the container volume. A value of false selects LUNs that are not space-reserved.
[-space-allocation {enabled|disabled}] - Space Allocation
Selects the LUNs that match this parameter value. The space allocation attribute determines if the LUN
supports the SCSI Thin Provisioning features defined in the Logical Block Provisioning section of the SCSI
SBC-3 standard.
Specifying enabled for this parameter selects LUNs with support enabled for the SCSI Thin Provisioning
features.
Specifying disabled for this parameter selects LUNs with support disabled for the SCSI Thin Provisioning
features.
Hosts and file systems that do not support SCSI Thin Provisioning should not enable space allocation.
[-state {online|offline|nvfail|space-error|foreign-lun-error}] - State
Selects the LUNs that match this parameter value. The states are:
• online - The LUN is online.
• offline - The LUN is administratively offline, or a more detailed offline reason is not available.
• foreign-lun-error - The LUN has been automatically taken offline due to a media error on the
associated foreign LUN.
• nvfail - The LUN has been automatically taken offline due to an NVRAM failure.
• space-error - The LUN has been automatically taken offline due to insufficient space.
[-container-state {online|aggregate-offline|volume-offline|error}] - LUN Container State
(privilege: advanced)
Selects the LUNs that match this parameter value.
[-caching-policy <text>] - Caching Policy
Selects the LUNs that match this parameter value. A caching policy defines how the system caches a LUN's data in Flash Cache modules. The policies are:
• auto - Read caches all metadata and randomly read user data blocks, and write caches all randomly
overwritten user data blocks.
• random_read - Read caches all metadata and randomly read user data blocks.
• random_read_write - Read caches all metadata, randomly read and randomly written user data blocks.
• all_read - Read caches all metadata, randomly read and sequentially read user data blocks.
• all_read_random_write - Read caches all metadata, randomly read, sequentially read, and randomly written
user data.
• all - Read caches all data blocks read and written. It does not do any write caching.
Examples
The following example displays details of the LUN at path /vol/vol1/lun1 in Vserver vs1.
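A command matching this description; the original output listing is not shown here:
cluster1::> lun show -vserver vs1 -path /vol/vol1/lun1 -instance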
The following example displays information for the LUN with serial number 1r/wc+9Cpbls:
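A command matching this description; the serial number is enclosed in double quotes, as recommended above for serial numbers containing special characters:
cluster1::> lun show -serial "1r/wc+9Cpbls"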
The following example displays all the LUNs on Vserver vs1 and volume vol1:
lun bind create
Bind a VVol LUN to a protocol endpoint
Description
This command creates a new binding between a protocol endpoint and a vvol LUN. If a binding between the specified endpoint
and vvol already exists, the reference count for the binding is incremented by one.
Note: For optimal results, the protocol endpoint and vvol must be hosted by the same node in the cluster.
Parameters
-vserver <Vserver Name> - Vserver name
Specifies the name of the Vserver.
-protocol-endpoint-path <path> - Protocol Endpoint
Specifies the path to the protocol endpoint. The specified LUN must already exist and be of class "protocol-
endpoint". Examples of correct LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
-vvol-path <path> - VVol Path
Specifies the path to the vvol. The specified LUN must already exist and be of the class "vvol". Examples of
correct LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
Examples
cluster1::*> lun bind create -vserver vs1 -protocol-endpoint-path /vol/VV1/PE1 -vvol-path /vol/
VV3/234ace
lun bind destroy
Unbind a VVol LUN from a protocol endpoint
Description
Decrement the reference count of the binding between a protocol endpoint and vvol LUN. If the resulting reference count is
zero, the binding is removed.
Parameters
-vserver <Vserver Name> - Vserver name
Specifies the Vserver.
-protocol-endpoint-path <path> - Protocol Endpoint
Specifies the path of the protocol endpoint LUN. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
-vvol-path <path> - VVol Path
Specifies the path of the vvol LUN. Examples of correct LUN paths are /vol/vol1/lun1 and /vol/vol1/
qtree1/lun1.
[-force [true]] - If true, unbind the Vvol completely even if the current reference count is greater than 1. The
default is false.
Completely remove the specified binding, regardless of the current reference count.
lun bind show
Show the configured VVol to protocol endpoint bindings
Description
Shows the configured VVol to protocol endpoint bindings.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Selects the bindings that match this parameter value.
[-protocol-endpoint-msid <integer>] - PE MSID
Selects the bindings that match this parameter value.
[-protocol-endpoint-vdisk-id <text>] - PE Vdisk ID
Selects the bindings that match this parameter value.
[-vvol-msid <integer>] - VVol MSID
Selects the bindings that match this parameter value.
[-vvol-vdisk-id <text>] - VVol Vdisk ID
Selects the bindings that match this parameter value.
[-vserver-uuid <UUID>] - Vserver UUID
Selects the bindings that match this parameter value.
[-protocol-endpoint-path <path>] - Protocol Endpoint
Selects the bindings that match this parameter value. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
[-protocol-endpoint-node <nodename>] - PE Node
Selects the bindings that match this parameter value.
[-vvol-path <path>] - VVol
Selects the bindings that match this parameter value. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
[-vvol-node <nodename>] - VVol Node
Selects the bindings that match this parameter value.
[-secondary-lun <Hex 64bit Integer>] - Secondary LUN
Selects the bindings that match this parameter value.
Examples
lun copy cancel
Cancel a LUN copy operation prior to creation of the new LUN
Description
The lun copy cancel command cancels an ongoing LUN copy operation prior to creation of the new LUN. The command
fails if the LUN already exists at the destination path; in that case, use the lun delete command to delete the LUN at the
destination path.
All data transfers will be halted.
Note: This is an advanced command because the preferred way to cancel a LUN copy operation is to wait until the new LUN
becomes visible, and then use the lun delete command to delete the LUN.
Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the destination LUN.
-destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
Examples
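A representative invocation at the advanced privilege level (the Vserver name and destination path are placeholders):
cluster1::*> lun copy cancel -vserver vs1 -destination-path /vol/vol2/lun2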
Related references
lun delete on page 173
lun copy modify
Modify an ongoing LUN copy operation
Description
The lun copy modify command modifies the maximum throughput of an ongoing copy operation.
Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the destination LUN.
-destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
-max-throughput {<integer>[KB|MB|GB|TB|PB]} - Maximum Transfer Rate (per sec)
Specifies the maximum amount of data, in bytes, that can be transferred per second in support of this
operation. This mechanism can be used to throttle a transfer, to reduce its impact on the performance of the
source and destination nodes.
Note: The specified value will be rounded up to the nearest megabyte.
Examples
cluster1::> lun copy modify -vserver vs1 -destination-path /vol/vol2/lun2 -max-throughput 25MB
lun copy pause
Pause an ongoing LUN copy operation
Description
The lun copy pause command pauses an ongoing copy operation. Use the lun copy resume command to resume the copy
operation.
Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the destination LUN.
-destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
Examples
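A representative invocation (the Vserver name and destination path are placeholders):
cluster1::> lun copy pause -vserver vs1 -destination-path /vol/vol2/lun2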
Related references
lun copy resume on page 191
lun copy resume
Resume a paused LUN copy operation
Description
The lun copy resume command resumes a paused copy operation.
Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the destination LUN.
-destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
Examples
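A representative invocation (the Vserver name and destination path are placeholders):
cluster1::> lun copy resume -vserver vs1 -destination-path /vol/vol2/lun2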
Related references
lun copy pause on page 190
lun copy show
Display a list of LUNs currently being copied
Description
The lun copy show command shows information about LUNs currently being copied in the cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Destination Vserver Name
Selects LUN copy operations that match this parameter value.
[-destination-path <path>] - Destination Path
Selects LUN copy operations that match this parameter value.
[-source-vserver <vserver name>] - Source Vserver Name
Selects LUN copy operations that match this parameter value.
[-source-path <path>] - Source Path
Selects LUN copy operations that match this parameter value.
[-source-snapshot <snapshot name>] - Source Snapshot Name
Selects LUN copy operations that match this parameter value.
Examples
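A representative invocation; the original output table is not shown here:
cluster1::> lun copy show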
The example above displays information about all the LUN copy operations in the cluster.
lun copy start
Start copying a LUN from one volume to another within a cluster
Description
The lun copy start command initiates copying of a LUN from one volume to another. The destination volume can be
located in the same Vserver as the source volume (intra-Vserver copy) or in a different Vserver (inter-Vserver).
Note: A cluster administrator must first create a Vserver peering relationship using vserver peer create before initiating
an inter-Vserver LUN copy operation.
Parameters
-vserver <Vserver Name> - Destination Vserver Name
Specifies the name of the Vserver that will host the new LUN.
| -destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
-source-path <path>} - Source Path
Specifies the full path to the source LUN, in the format /vol/<volume>[/.snapshot/<snapshot>][/<qtree>]/
<lun>.
[-source-vserver <vserver name>] - Source Vserver Name
Optionally specifies the name of the Vserver hosting the LUN to be copied.
If this parameter is not specified, it is assumed that an intra-Vserver copy operation is being initiated. The
source volume is expected to be in the same Vserver as the destination volume.
[-promote-early [true]] - Promote Early
Optionally specifies that the destination LUN needs to be promoted early.
If the destination is promoted early, the new LUN will be visible immediately. However, Snapshot copies of
the volume containing the new LUN cannot be taken until the LUN copy operation reaches 'Moving Data'
status.
If the destination is promoted late, the new LUN will be visible only after it has been fully framed. However,
the LUN copy job will not block the creation of Snapshot copies of the volume containing the new LUN.
If this parameter is not specified, the destination LUN will be promoted late.
[-max-throughput {<integer>[KB|MB|GB|TB|PB]}] - Maximum Transfer Rate (per sec)
Optionally specifies the maximum amount of data, in bytes, that can be transferred per second in support of
this operation. This mechanism can be used to throttle a transfer, to reduce its impact on the performance of
the source and destination nodes.
If this parameter is not specified, throttling is not applied to the data transfer.
Note: The specified value will be rounded up to the nearest megabyte.
Examples
cluster1::> lun copy start -vserver vs2 -destination-path /vol/vol2/lun2 -source-vserver vs1 -
source-path /vol/vol1/lun1
Starts an inter-Vserver copy of LUN lun1 from volume vol1 in Vserver vs1 to lun2 on volume vol2 in Vserver vs2.
cluster1::> lun copy start -vserver vs1 -destination-path /vol/vol2/lun2 -source-path /vol/vol1/
lun1
Starts an intra-Vserver copy of LUN lun1 from volume vol1 in Vserver vs1 to lun2 on volume vol2 in Vserver vs1.
cluster1::> lun copy start -vserver vs1 -destination-path /vol/vol2/lun2 -source-path /vol/
vol1/.snapshot/snap1/lun1
Starts an intra-Vserver copy of LUN lun1 from Snapshot copy snap1 of volume vol1 in Vserver vs1 to lun2 on volume vol2 in Vserver vs1.
Related references
lun copy resume on page 191
vserver peer create on page 2049
lun copy modify on page 190
lun copy pause on page 190
lun copy show on page 191
lun igroup add
Add initiators to an initiator group
Description
This command adds initiators to an existing initiator group (igroup). You can add an initiator to an initiator group only if there
are no LUN mapping conflicts. Mapping conflicts occur when an initiator is already paired with a LUN. If you attempt to run
this command and there are LUN mapping conflicts, the command returns an error.
An initiator cannot be a member of two igroups of different OS types. For example, if you have an initiator that belongs to a
Solaris igroup, the command does not allow you to add this initiator to an AIX igroup.
When you add FCP initiators, you can specify an alias instead of the initiator's World Wide Port Name (WWPN) or the iSCSI
Qualified name (IQN).
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-igroup <text> - Igroup Name
Specifies the initiator group to which you want to add a new initiator.
-initiator <text>, ... - Initiators
Specifies the initiator that you want to add. You can specify the WWPN, IQN, or alias of the initiator.
Examples
cluster1::> lun igroup add -vserver vs1 -igroup ig1 -initiator iqn.1992-08.com.mv.mvinitiator
lun igroup bind
Bind an existing initiator group to a port set
Description
This command binds an initiator group to a port set so the host knows which iSCSI or FCP LIFs, or target portal groups (TPGs),
to use when accessing its LUNs. If you do not bind an igroup to a port set and you map a LUN
to the igroup, then the initiators in the igroup can access the LUN on any port on the Vserver.
The initiator group cannot be bound to another port set when you use this command. If you attempt to bind a port set to an
initiator group that is already bound to an existing port set, the command returns an error. You can only bind an initiator group
to one port set at a time.
If the initiator group is bound, use the lun igroup unbind command to unbind the initiator group from the port set. After the
initiator group is unbound, you can bind it to another port set.
You can only bind an initiator group to a non-empty port set.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-igroup <text> - Igroup Name
Specifies the initiator group that you want to bind a port set to.
-portset <text> - Portset Binding Igroup
Specifies the port set name that you want to bind an initiator group to.
Examples
cluster1::> lun igroup bind -vserver vs1 -igroup ig1 -portset ps1
Related references
lun igroup unbind on page 201
lun igroup create
Create a new initiator group
Description
This command creates a new initiator group (igroup). Use igroups to control which hosts have access to specific LUNs. When
you bind an igroup to a port set, a host in the igroup can access the LUNs only by connecting to the target ports in the port set.
When you create an igroup, you can add multiple existing initiators by specifying them in a list, separating them with commas.
Later, you can add or remove initiators from the initiator group. Use the lun igroup add command to add initiators. Use the
lun igroup remove command to remove an initiator. Unless the -initiator option is supplied, no initiators are added to a
new igroup.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-igroup <text> - Igroup Name
Specifies the name of the new initiator group. An initiator group name is a case-sensitive name that must
contain one to 96 characters. Spaces are not allowed.
Note: It might be useful to provide meaningful names for igroups, ones that describe the hosts that can
access the LUNs mapped to them.
[-ostype | -t {solaris|windows|hpux|aix|linux|netware|vmware|openvms|xen|hyper_v}] - OS Type
Specifies the operating system type for the new initiator group. The operating system types of initiators are:
• solaris
• windows
• hpux
• aix
• linux
• netware
• vmware
• openvms
• xen
• hyper_v
Examples
cluster1::> lun igroup create -vserver vs1 -igroup ig1 -protocol mixed -ostype linux -initiator
iqn.2001-04.com.example:abc123
Related references
lun igroup add on page 194
lun igroup remove on page 199
lun igroup bind on page 195
lun igroup unbind on page 201
lun igroup delete
Delete an initiator group
Description
This command deletes an existing initiator group. By default, you cannot delete an initiator group if LUN maps for that initiator
group exist. You need to unmap all the LUNs that are associated with that initiator group before you can delete the initiator
group. Use the lun unmap command to remove LUNs from an initiator group.
You can specify the force option to delete an initiator group and remove existing LUN maps defined for that initiator group.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-igroup <text> - Igroup Name
Specifies the initiator group that you want to delete.
[-force | -f [true]] - Force
Deletes an initiator group and all associated LUN maps.
Examples
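A representative invocation (the Vserver and igroup names are placeholders):
cluster1::> lun igroup delete -vserver vs1 -igroup ig1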
Related references
lun mapping delete on page 212
lun igroup disable-aix-support
Disable SAN AIX support on the cluster
Description
This command disables the SAN AIX support across the cluster (all Vservers and all AIX initiator groups). However, before
you can disable SAN AIX support, you must remove all SAN AIX related objects from the cluster. You need to unmap all the
LUNs that are associated with the AIX initiator groups. Then you need to delete all of the AIX initiator groups. Use the lun
unmap command to remove LUNs from an initiator group. Use the lun igroup delete command to delete an initiator group.
Examples
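A representative invocation; the command takes no parameters:
cluster1::> lun igroup disable-aix-support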
Related references
lun mapping delete on page 212
lun igroup modify
Modify an existing initiator group
Description
This command modifies an attribute for an initiator group. Currently, the only settable attribute is the operating system.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-igroup <text> - Igroup Name
Specifies the initiator group whose attribute you want to modify.
[-ostype | -t {solaris|windows|hpux|aix|linux|netware|vmware|openvms|xen|hyper_v}] - OS Type
Specifies the new operating system type for the initiator group. The operating system types of initiators are
• solaris
• windows
• hpux
• aix
• linux
• netware
• vmware
• openvms
• xen
• hyper_v
Examples
cluster1::> lun igroup modify -vserver vs1 -igroup ig1 -ostype windows
lun igroup remove
Remove initiators from an initiator group
Description
This command removes an initiator from an initiator group. You can only remove an initiator if no existing LUN maps are
defined for that initiator group. You must unmap the LUNs from the initiator group with the lun unmap command before you
can remove initiators from the initiator group.
You can use the force option to remove an initiator and associated LUN maps.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-igroup <text> - Igroup Name
Specifies the initiator group from which you want to remove an initiator.
-initiator <text>, ... - Initiators
Specifies the initiator name you want to remove. Use the WWPN, IQN or the alias of the initiator.
[-force | -f [true]] - Force
Forcibly removes an initiator and any associated LUN maps.
Examples
cluster1::> lun igroup remove -vserver vs1 -igroup ig1 -initiator iqn.1992-08.com.mv.mvinitiator
Related references
lun mapping delete on page 212
lun igroup rename
Rename an existing initiator group
Description
This command renames an existing initiator group. Renaming an initiator group does not affect access to the
LUNs mapped to it.
An initiator group name is a case-sensitive name and must meet the following requirements:
• Can contain the letters A through Z, a through z, numbers 0 through 9, hyphen (-), underscore (_), colon (:), and period (.).
Examples
cluster1::> lun igroup rename -vserver vs1 -igroup ig1 -new-name ignew1
lun igroup show
Display a list of initiator groups
Description
This command displays status information for initiator groups (igroup). By default, the command displays status for all initiator
groups.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Specifies the Vserver.
[-igroup <text>] - Igroup Name
Selects the initiator groups that match this parameter value.
[-protocol <protocol_enum>] - Protocol
Selects the initiator groups that match this parameter value (FCP, iSCSI, or mixed).
[-ostype | -t {solaris|windows|hpux|aix|linux|netware|vmware|openvms|xen|hyper_v}] - OS Type
Selects the initiator groups that match this parameter value. The operating system types are
• solaris
• windows
• hpux
• aix
• linux
• netware
• vmware
• openvms
• xen
• hyper_v
Examples
lun igroup unbind
Unbind an existing initiator group from a port set
Description
This command unbinds an initiator group from a port set. When you unbind an initiator group from a port set, all of the
initiators in the initiator group have access to all target LUNs on all network interfaces.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
Examples
lun import create
Create an import relationship between a LUN and a foreign disk
Description
This command creates an import relationship between a specified LUN and a specified foreign disk so you can import the
foreign disk data into a LUN.
The foreign disk must be marked as foreign using the storage disk set-foreign-lun command before you can begin the
import process.
The LUN must be of the same size as the foreign disk.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN into which you want to import the foreign disk data.
-foreign-disk <text> - Foreign Disk Serial Number
Specifies the serial number of the Foreign Disk.
-path <path> - LUN Path
Specifies the path of the LUN where you want to import the data of the foreign disk to. Examples of correct
LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
Examples
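A representative invocation (the Vserver name, LUN path, and foreign disk serial number are placeholders):
cluster1::> lun import create -vserver vs1 -foreign-disk 6000B5D0006A0000006A020E00040000 -path /vol/vol1/lun1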
Related references
storage disk set-foreign-lun on page 948
lun import delete
Delete the import relationship of a LUN or a foreign disk
Description
This command deletes the import relationship of a specified LUN or a specified foreign disk.
Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN whose import relationship you want to delete.
-path <path> - LUN Path
Specifies the path of the LUN where you want to delete the import relationship. Examples of correct LUN
paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
| -foreign-disk <text>} - Foreign Disk Serial Number
Specifies the serial number of the foreign disk.
[-force {true|false}] - Force Delete
When set to true, stops the in-progress data import.
Examples
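A representative invocation (the Vserver name and LUN path are placeholders):
cluster1::> lun import delete -vserver vs1 -path /vol/vol1/lun1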
Related references
lun import stop on page 208
lun import pause
Pause the data import for a LUN
Description
This command pauses the data import to a specified LUN.
This command does not reset import checkpoints. To resume a paused import, use the lun import resume command, which
restarts from the last checkpoint taken before you paused the data import.
If you want to resume the data import from the beginning, use the lun import stop command, and then use the lun import
start command.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN you want to pause the data import to.
-path <path> - LUN Path
Specifies the path of the LUN you want to pause the data import to. Examples of correct LUN paths
are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
lun import prepare-to-downgrade
Prepare the cluster for a downgrade by disabling the online LUN import feature
Description
This command prepares the cluster for a downgrade to a version of Data ONTAP earlier than 8.3.1 by disabling the online LUN
import feature. Before using this command, verify that all LUNs in import relationships are offline by running the lun show command.
Examples
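A representative invocation; the command takes no parameters:
cluster1::> lun import prepare-to-downgrade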
Related references
lun show on page 181
lun import resume
Resume the data import for a LUN
Description
Resumes the data import to a specified LUN.
The import starts from the last checkpoint taken before you paused the data import.
If you want to resume the data import from the beginning, use the lun import stop command. Then use the lun import start
command.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN you want to resume the data import to.
-path <path> - LUN Path
Specifies the path of the LUN that you want to resume the data import to. Examples of correct LUN paths
are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
Examples
lun import show
Display a list of import relationships
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Displays import relationships for a specified Vserver.
[-foreign-disk <text>] - Foreign Disk Serial Number
Enables you to see the import relationship for a particular foreign disk with the specified serial number.
[-path <path>] - LUN Path
Enables you to see the import relationship for a particular LUN path. Examples of correct LUN paths
are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
[-import-home-node {<nodename>|local}] - Import Home Node
Enables you to see the node that initially started the data import and where the I/O for the foreign disk is
directed. If failover occurs, any in-progress data import restarts on the partner node.
[-import-current-node {<nodename>|local}] - Import Current Node
Displays the node that is currently importing data and where the I/O for the foreign disk is directed. During
giveback, if the import home node differs from the current node, the import restarts on the initial
node (import-home-node).
[-operation-in-progress {import|verify}] - Operation in Progress
Enables you to see the imports in progress or import verification in progress.
[-admin-state {stopped|started|paused}] - Admin State
Enables you to see the import relationships for a specified administrative state. For example, you can list all
the imports that have started in a cluster.
[-operational-state {in_progress|failed|completed|stopped|paused}] - Operational State
Enables you to see the import relationships for a specified operational state. For example, you can list all the
imports that have completed in a cluster.
[-percent-complete <integer>] - Percent Complete
Enables you to see the percentage of completion for both import and verification. To list all completed imports
and verifications, specify 100.
[-imported-blocks <integer>] - Blocks Imported
Enables you to see the number of blocks that have been imported to the LUN.
[-compared-blocks <integer>] - Blocks Compared
Enables you to see the number of LUN and foreign disk blocks that have been compared.
[-total-blocks <integer>] - Total Blocks
Enables you to see the total number of blocks that must be imported to complete the data import to a LUN or
the number of blocks that must be compared to complete the import verification.
[-estimated-remaining-duration {<seconds>|[<d> days] <hh>:<mm>[:<ss>]}] - Estimated Remaining
Duration
If this parameter is specified, the command displays import or verify operations that match the specified time.
Examples
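No example listing survived extraction here. A plausible lun import show invocation using the documented filter parameters (names are illustrative) is:

```
cluster1::> lun import show -vserver vs1 -path /vol/vol1/lun1
```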
Description
This command initiates the data import to a specified LUN.
You must use the lun import create command to create an import relationship between a LUN and a foreign disk before you can
initiate the data import.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN you want to import data to.
-path <path> - LUN Path
Specifies the path of the LUN that you want to import data to. Examples of correct LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
Examples
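The example listing is missing from the extracted text. Following the pattern of the other lun import commands, an invocation (with illustrative names) would be:

```
cluster1::> lun import start -vserver vs1 -path /vol/vol1/lun1
```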
Description
This command stops the data import into a specified LUN.
If you stop the data import and later start it again using the lun import start command, the import restarts
from the beginning.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN you want to stop importing data to.
-path <path> - LUN Path
Specifies the path of the LUN that you want to stop the data import to.
Examples
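The console listing here was lost. Using the two documented parameters (illustrative names), an invocation would be:

```
cluster1::> lun import stop -vserver vs1 -path /vol/vol1/lun1
```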
Related references
lun import start on page 207
Description
This command throttles the speed of the import for a given LUN by specifying a maximum throughput limit on the import.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN to which data from the foreign disk is imported.
-path <path> - LUN Path
Specifies the path of the LUN to which data from the foreign disk is imported. Examples of correct LUN paths
are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
-max-throughput-limit {<integer>[KB|MB|GB|TB|PB]} - Maximum Throughput Limit (per sec)
Specifies the maximum throughput allocated for processing import requests on the bound LUN. At creation
time, the default is zero. A value of zero means import traffic is processed at a best-effort rate alongside
ongoing user I/O. A non-zero value throttles the import to at most the specified maximum throughput limit.
Examples
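The example listing did not survive extraction. An invocation built from the documented parameters (names and the 100MB limit are illustrative) would be:

```
cluster1::> lun import throttle -vserver vs1 -path /vol/vol1/lun1 -max-throughput-limit 100MB
```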
Description
This command compares the LUN and the foreign disk block by block; running it is optional.
Before you can start this verification, the operational state must be stopped or completed. Use the lun import show
command to determine the operational state.
If a block mismatch occurs, the verification process stops.
Verification must be done offline. Ensure that the foreign disk and the LUN cannot be accessed by a host. To prevent access to the
LUN, take it offline administratively using the lun offline command.
Note: The specified LUN must be in an import relationship with a foreign disk before you can verify the data import.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN you want to compare block by block with the foreign disk.
-path <path> - LUN Path
Specifies the path of the LUN that you want to compare the foreign disk to. Examples of correct LUN paths
are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
Examples
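The example listing is missing. A plausible invocation using the documented parameters (illustrative names) is:

```
cluster1::> lun import verify start -vserver vs1 -path /vol/vol1/lun1
```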
Description
This command stops the block by block verification of the foreign disk and LUN data.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN you want to stop block by block comparison with the foreign
disk.
-path <path> - LUN Path
Specifies the path of the LUN that you want to stop the block by block comparison. Examples of correct LUN
paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
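Examples
A hypothetical invocation (illustrative names), mirroring the pattern of the other lun import verify commands:

```
cluster1::> lun import verify stop -vserver vs1 -path /vol/vol1/lun1
```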
Related references
lun import verify start on page 209
Description
This command is used before or after a data mobility event that modifies the owning node of a LUN. It adds the new optimized
nodes to the specified LUN mapping's reporting nodes.
For more information on managing reporting nodes in response to data mobility events, please see the Data ONTAP SAN
Administration Guide.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver containing the LUN.
{ -path <path> - LUN Path
Specifies the path of the LUN. Examples of correct LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume that contains the LUN.
[-qtree <qtree name>] - Qtree Name
Specifies the qtree that contains the LUN.
-lun <text>} - LUN Name
Specifies the LUN name.
-igroup | -g <text> - Igroup Name
Specifies the igroup the LUN is mapped to.
{ -local-nodes [true] - Add Nodes for Current LUN Location
Add the current LUN owner node and HA partner to the LUN mapping's reporting nodes.
This option should be used after a LUN mobility event to restore optimized access to the LUN.
| -destination-aggregate <aggregate name> - Add Nodes for Aggregate
Add the specified aggregate's owner node and HA partner to the LUN mapping's reporting nodes.
This option may be used prior to a LUN mobility event that changes the LUN's containing aggregate.
Examples
cluster1::> lun mapping add-reporting-nodes -vserver vs1 -path /vol/vol1/lun1 -igroup ig1
Add the current owner node and HA partner for the LUN mapping of /vol/vol1/lun1 to igroup ig1
cluster1::> lun mapping add-reporting-nodes -vserver vs1 -volume vol1 -lun * -igroup ig1 -destination-aggregate aggr2
Description
This command maps a LUN to all of the initiators in an initiator group (igroup). After you map the LUN, the LUN is visible to
all initiators in the igroup.
Data ONTAP ensures that there are no LUN map conflicts whether the LUN is offline or online. A LUN map conflict is a
mapping that would violate either of the following rules:
• Each LUN can be mapped to an initiator only once. A LUN can be mapped to multiple igroups as long as each igroup has a
distinct set of initiators.
• LUN IDs must be unique such that every initiator has a unique ID for each LUN to which it is mapped. If you map a LUN to
an igroup, the LUN ID for that mapping cannot be reused by any of the initiators in that igroup.
In order to determine if a LUN ID is valid for a mapping, Data ONTAP checks each initiator in the igroup to make sure that
the LUN ID is not used for another mapping that includes that initiator.
Note: Prior to mapping a LUN, you must have at least one iSCSI or FCP LIF provisioned on the LUN's owner node and high-availability partner node.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN you want to map.
{ -path <path> - LUN Path
Specifies the path of the LUN that you want to map. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume that contains the LUN you want to map.
[-qtree <qtree name>] - Qtree Name
Specifies the qtree that contains the LUN you want to map.
Examples
cluster1::> lun mapping create -vserver vs1 -path /vol/vol1/lun1 -igroup ig1 -lun-id 8
Description
This command unmaps a LUN from an initiator group. After you use this command, the LUN is not visible to any of the
initiators in the initiator group.
Parameters
-vserver <Vserver Name> - Vserver Name
Selects the LUN maps for the Vserver that matches the parameter value.
{ -path <path> - LUN Path
Specifies the path of the LUN you want to unmap. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume of the LUN you want to unmap.
-qtree <qtree name> - Qtree Name
Specifies the qtree of the LUN you want to unmap.
-lun <text>} - LUN Name
Specifies the name of the LUN you want to unmap.
-igroup | -g <text> - Igroup Name
Specifies the initiator group that you want to unmap the LUN from.
Examples
cluster1::> lun mapping delete -vserver vs1 -path /vol/vol1/lun1 -igroup ig1
Description
This command is used after a data mobility event to remove reporting nodes that are no longer required for optimized access
from the specified LUN mapping.
For more information on managing reporting nodes in response to data mobility events, please see the Data ONTAP SAN
Administration Guide.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver containing the LUN.
{ -path <path> - LUN Path
Specifies the path of the LUN. Examples of correct LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume that contains the LUN.
[-qtree <qtree name>] - Qtree Name
Specifies the qtree that contains the LUN.
-lun <text>} - LUN Name
Specifies the LUN name.
-igroup | -g <text> - Igroup Name
Specifies the igroup the LUN is mapped to.
-remote-nodes [true] - Remove Remote Nodes for LUN Location
If specified, remove all nodes other than the LUN's owner and HA partner from the LUN mapping's reporting
nodes.
Examples
cluster1::> lun mapping remove-reporting-nodes -vserver vs1 -path /vol/vol1/lun1 -igroup ig1
Description
This command lists the mappings between LUNs and initiator groups.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
• windows - the LUN stores a raw disk type in a single-partition Windows disk using the Master Boot
Record (MBR) partitioning style.
• linux - the LUN stores a Linux raw disk without a partition table.
• hyper_v - the LUN stores Windows Server 2008 or Windows Server 2012 Hyper-V data.
Examples
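The console listing for this command was lost in extraction. A minimal invocation of lun mapping show filtered by the documented -vserver parameter (name illustrative) would be:

```
cluster1::> lun mapping show -vserver vs1
```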
Description
The lun mapping show-initiator command lists the LUNs that are mapped to an initiator group containing a
specific initiator.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Selects the LUN mappings for the vserver that you specify.
-initiator <text> - Initiator Name
Selects the LUN mappings for the initiator that you specify.
[-lun-id <integer>] - Logical Unit Number
Selects the LUN mappings with a LUN ID that you specify.
[-igroup <text>] - Igroup Name
Selects the LUN mappings for the initiator group that you specify.
[-path <path>] - LUN Path
Selects the LUN mappings for the LUN path that you specify.
[-node <nodename>] - LUN Node
Selects the LUN mappings for the LUNs which are being hosted on the node that you specify.
[-reporting-nodes <nodename>, ...] - Reporting Nodes
Selects the LUN mappings for the LUNs which have reporting nodes that you specify.
[-vserver-uuid <UUID>] - Vserver UUID
Selects the LUN mappings for the Vserver UUID that you specify.
Examples
The following example displays the LUN mappings for initiator 20:10:0a:50:00:01:01:01 in Vserver vs1.
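The console listing for this caption is missing; the command that produced it, using the caption's own initiator and Vserver names, was presumably:

```
cluster1::> lun mapping show-initiator -vserver vs1 -initiator 20:10:0a:50:00:01:01:01
```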
Description
The lun move cancel command cancels an ongoing LUN move operation prior to creation of the new LUN. The command
fails if the LUN already exists at the destination path; in that case, allow the current move operation to complete and then move
it back using the lun move start command.
All data transfers will be halted. If the source LUN was quiesced, it will be restored to normal operation.
Note: This is an advanced command because the preferred way to cancel a LUN move operation is to wait until the new LUN
becomes visible, and then move it back.
Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the destination LUN.
-destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
Related references
lun move start on page 220
Description
The lun move modify command modifies the maximum throughput of an ongoing move operation.
Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the destination LUN.
-destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
-max-throughput {<integer>[KB|MB|GB|TB|PB]} - Maximum Transfer Rate (per sec)
Specifies the maximum amount of data, in bytes, that can be transferred per second in support of this
operation. This mechanism can be used to throttle a transfer, to reduce its impact on the performance of the
source and destination nodes.
Note: The specified value will be rounded up to the nearest megabyte.
Examples
cluster1::> lun move modify -vserver vs1 -destination-path /vol/vol2/lun2 -max-throughput 25MB
Description
The lun move pause command pauses an ongoing move operation. Use the lun move resume command to resume the
move operation.
Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the destination LUN.
-destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
Related references
lun move resume on page 218
Description
The lun move resume command resumes a paused move operation.
Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the destination LUN.
-destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
Examples
Related references
lun move pause on page 217
Description
The lun move show command shows information about LUNs currently being moved in the cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Selects LUN move operations that match this parameter value.
[-destination-path <text>] - Destination Path
Selects LUN move operations that match this parameter value.
Examples
The example above displays information about all the LUN move operations in the cluster.
Description
The lun move start command initiates moving of a LUN from one volume to another. The destination volume can be
located on the same node as the original volume or on a different node.
Note: Use the lun move-in-volume command if you want to rename the LUN or move it within the same volume.
Note: This command does not support movement of LUNs that are created from files.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the new LUN.
| -destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
-source-path <path>} - Source Path
Specifies the full path to the source LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
[-promote-late [true]] - Promote Late
Optionally specifies that the destination LUN needs to be promoted late.
If the destination is promoted early, the new LUN will be visible immediately. However, Snapshot copies of
the volume containing the new LUN cannot be taken until the LUN move operation reaches 'Moving Data'
status.
If the destination is promoted late, the new LUN will be visible only after it has been fully framed. However,
the LUN move job will not block the creation of Snapshot copies of the volume containing the new LUN.
If this parameter is not specified, the destination LUN will be promoted early.
[-max-throughput {<integer>[KB|MB|GB|TB|PB]}] - Maximum Transfer Rate (per sec)
Optionally specifies the maximum amount of data, in bytes, that can be transferred per second in support of
this operation. This mechanism can be used to throttle a transfer, to reduce its impact on the performance of
the source and destination nodes.
If this parameter is not specified, throttling is not applied to the data transfer.
Note: The specified value will be rounded up to the nearest megabyte.
cluster1::> lun move start -vserver vs1 -destination-path /vol/vol2/lun2 -source-path /vol/vol1/lun1
Related references
lun move resume on page 218
lun move-in-volume on page 179
lun move modify on page 217
lun move pause on page 217
lun move show on page 218
Description
Clears the persistent reservation for the specified LUN.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
{ -path <path> - LUN Path
Specifies the path of the LUN. Examples of correct LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume.
-lun <text> - LUN Name
Specifies the name of the LUN.
[-qtree <qtree name>]} - Qtree Name
Specifies the qtree.
Examples
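The example listing did not survive extraction. An invocation using the documented path form (illustrative names) would be:

```
cluster1::> lun persistent-reservation clear -vserver vs1 -path /vol/vol1/lun1
```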
Description
Displays reservation information for a specified LUN in a Vserver. Unlike other show commands, the user must specify the
LUN.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
{ -path <path> - LUN Path
Specifies the path of the LUN. Examples of correct LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume.
-lun <text> - LUN Name
Specifies the name of the LUN.
[-qtree <qtree name>]} - Qtree Name
Specifies the qtree.
[-scsi-revision {scsi2|scsi3}] - SCSI Revision
Selects the reservations that match this parameter value.
[-entry-type {reservation|registration}] - Reservation or Registration
Selects the reservations that match this parameter value.
[-protocol {fcp|iscsi}] - Protocol
Selects the reservations that match this parameter value.
[-reservation-key <text>] - Reservation Key
Selects the reservations that match this parameter value.
[-reservation-type-code <text>] - Reservation Type
Selects the reservations that match this parameter value. The possible values for SCSI-3 reservations are:
• write exclusive
• exclusive access
• regular
• third party
Examples
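The console listing here is missing. Since this show command requires the LUN to be specified, a plausible invocation (illustrative names) is:

```
cluster1::> lun persistent-reservation show -vserver vs1 -path /vol/vol1/lun1
```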
Description
This command adds existing iSCSI and FCP LIFs to a port set. To create a new port set, use the lun portset create
command.
Use the network interface create command to create new LIFs.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
Examples
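The example listing was lost, and this chunk documents only the -vserver parameter for lun portset add, so the -portset and -port-name names below are assumptions modeled on the lun portset remove parameters:

```
cluster1::> lun portset add -vserver vs1 -portset ps1 -port-name lif1
```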
Related references
lun portset create on page 224
network interface create on page 335
Description
This command creates a new port set for FCP and iSCSI. The port set name can include a maximum of 96 characters. You can
add LIFs to the new port set. If you do not add a LIF to the port set, you create an empty port set. To add LIFs to an existing port
set, use the lun portset add command.
After you create a port set, you must bind the port set to an igroup so the host knows which FC or iSCSI LIFs to access. If you
do not bind an igroup to a port set, and you map a LUN to an igroup, then the initiators in the igroup can access the LUN on any
LIF on the Vserver.
Note: You cannot bind an igroup to an empty port set because the initiators in the igroup would have no LIFs to access the
LUN.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-portset <text> - Portset Name
Specifies the name of the new port set. A port set name is a case-sensitive name that must contain one to 96
characters. Spaces are not allowed.
[-port-name <port_name>, ...] - LIF Or TPG Name
Specifies the name of the logical interface that you want to add to the portset you want to create.
{ [-protocol {mixed|fcp|iscsi}] - Protocol
Specifies if the portset protocol type is fcp, iscsi, or mixed. The default is mixed.
| [-fcp | -f [true]] - FCP
Specifies FCP as the protocol type of the new port set.
| [-iscsi | -i [true]]} - iSCSI
Specifies iSCSI as the protocol type of the new port set.
Examples
Creates a port set iscsips on Vserver vs1 with the protocol type of iscsi.
Creates a port set fcppc on Vserver vs1 with the protocol type of fcp.
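The commands for the two captioned examples are missing from the extracted text; based on the documented -protocol flag, they were presumably similar to:

```
cluster1::> lun portset create -vserver vs1 -portset iscsips -protocol iscsi
cluster1::> lun portset create -vserver vs1 -portset fcppc -protocol fcp
```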
cluster1::> lun portset create -vserver vs1 -portset ps2 -protocol mixed -port-name l11
Related references
lun portset add on page 223
Description
This command deletes an existing port set. By default, you cannot delete a port set if it is bound to an initiator group. If a port
set is bound to an initiator group, you can do one of the following:
• specify the force option to unbind the port set from the initiator group and delete the port set.
• use the lun igroup unbind command to unbind the port set from the initiator group. Then you can delete the port set.
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-portset <text> - Portset Name
Specifies the port set you want to delete.
[-force | -f [true]] - Force
Forcibly unbinds the port set from the initiator group.
Examples
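The example listing is missing. An invocation built from the documented parameters (illustrative names) would be:

```
cluster1::> lun portset delete -vserver vs1 -portset ps1
```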
Related references
lun igroup unbind on page 201
Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-portset <text> - Portset Name
Specifies the port set you want to remove a LIF from.
-port-name <port_name>, ... - LIF or TPG Name
Specifies the LIF name you want to remove from the port set.
Examples
cluster1::> lun portset remove -vserver vs1 -portset ps1 -port-name lif1
Related references
lun igroup unbind on page 201
Description
This command displays the LIFs in a port set. By default, the command displays all LIFs in all port sets.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Selects the port sets that match this parameter value.
[-portset <text>] - Portset Name
Selects the port sets that match this parameter value.
[-port-name <port_name>, ...] - LIF Or TPG Name
Selects the port sets that match this parameter value.
[-protocol {mixed|fcp|iscsi}] - Protocol
Selects the port sets that match this parameter value.
[-port-count <integer>] - Number Of Ports
Selects the port sets that match this parameter value.
Examples
The example above displays the port sets that contain zero LIFs.
The example above displays the port sets that have the iSCSI protocol.
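The two console listings referenced above were lost in extraction; the filters described correspond to invocations like these (built from the documented parameters):

```
cluster1::> lun portset show -port-count 0
cluster1::> lun portset show -protocol iscsi
```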
Description
The lun transition show command displays information about the LUN transition processing status of volumes. If no
parameters are specified, the command displays the following information about all volumes:
• Vserver name
• Volume name
• Transition status
• active - The volume is in an active SnapMirror transition relationship and not yet transitioned.
• none - The volume did not contain LUNs to transition from Data ONTAP 7-Mode.
Examples
The following example displays LUN transition information for all volumes in a Vserver named vs1:
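The console listing for this caption is missing; the command that produced it was presumably:

```
cluster1::> lun transition show -vserver vs1
```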
Description
The lun transition start command starts LUN transition for the specified volume. Normally, transition is started
automatically when snapmirror break is issued for the volume; this command allows you to restart transition if automatic
transitioning was interrupted or failed.
Examples
The following example starts LUN transition on a volume named volume1 in a Vserver named vs1:
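The command line for this caption is missing, and this chunk does not document the parameters of lun transition start, so the following is an assumption built from the caption's Vserver and volume names:

```
cluster1::> lun transition start -vserver vs1 -volume volume1
```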
Related references
snapmirror break on page 635
Description
The lun transition 7-mode delete command deletes an untransitioned LUN copied from a Data ONTAP 7-Mode
system. This allows the administrator to recover space from the volume for LUNs that cannot be transitioned to clustered Data
ONTAP (for example, LUNs with an unsupported OS type) without disrupting LUNs that have transitioned.
Parameters
-vserver <Vserver Name> - Vserver Name
This specifies the name of the Vserver from which the LUN is to be deleted. If only one data Vserver exists,
you do not need to specify this parameter.
-path <path> - LUN Path
This specifies the path to the LUN to delete.
Examples
The following example deletes the LUN /vol/vol1/lun1 in a Vserver named vs1:
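The command for this caption is missing; from the documented parameters and the caption's names, it was presumably:

```
cluster1::> lun transition 7-mode delete -vserver vs1 -path /vol/vol1/lun1
```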
• Vserver name
• LUN path
• Size
• Whether or not the LUN has been transitioned to clustered Data ONTAP
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Selects the 7-Mode LUNs in the specified Vserver.
[-path <path>] - LUN Path
Selects the 7-Mode LUNs with the specified path.
[-volume <volume name>] - Volume Name
Selects the 7-Mode LUNs that match the specified volume.
[-ostype {vmware|hyper_v|windows_2008|windows_gpt|windows|linux|xen|solaris|solaris_efi|
hpux|aix|netware|openvms}] - OS Type
Selects the 7-Mode LUNs that match the specified operating system type.
[-size <size>] - LUN Size
Selects the 7-Mode LUNs that match the specified size.
[-prefix-size <size>] - Prefix Stream Size
Selects the 7-Mode LUNs that match the specified prefix stream size.
[-suffix-size <size>] - Suffix Stream Size
Selects the 7-Mode LUNs that match the specified suffix stream size.
[-serial <text>] - Serial Number
Selects the 7-Mode LUNs that match the specified serial number for clustered Data ONTAP. LUNs where
is-transitioned is false do not have a serial number assigned for clustered Data ONTAP.
[-uuid <UUID>] - UUID
Selects the 7-Mode LUNs that match the specified UUID for clustered Data ONTAP. LUNs where
is-transitioned is false do not have a UUID assigned for clustered Data ONTAP.
[-serial-7-mode <text>] - 7-mode Serial Number
Selects the 7-Mode LUNs that match the specified serial number from 7-Mode.
[-is-transitioned {true|false}] - Transition Complete
Selects the 7-Mode LUNs that match the specified transition state. LUNs where this value is true have been
transitioned and are available to be mapped for client access. LUNs where this value is false have not yet
been transitioned and may not be mapped.
Examples
The following example displays a summary of all 7-Mode LUNs for the volume vol1 in a Vserver named vs1:
The following example displays detailed information for the 7-Mode LUN /vol/vol1/lun2 in a Vserver named vs1:
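The console listings for the two captions above are missing; based on the documented parameters, the invocations were presumably similar to:

```
cluster1::> lun transition 7-mode show -vserver vs1 -volume vol1
cluster1::> lun transition 7-mode show -vserver vs1 -path /vol/vol1/lun2 -instance
```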
metrocluster commands
Manage MetroCluster
The metrocluster commands enable you to manage MetroCluster.
metrocluster configure
Configure MetroCluster and start DR mirroring for the node and its DR group
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The metrocluster configure command creates a MetroCluster configuration on either all the nodes in both MetroCluster
clusters or solely on nodes in a DR group. The command configures a HA partner, DR partner, and a DR auxiliary partner for
the nodes and also starts NVRAM mirroring between the configured DR partner nodes.
In MetroCluster, a DR group is a group of four nodes, two in each of the MetroCluster clusters. In a two-node
MetroCluster configuration, a DR group is a group of two nodes, one in each of the MetroCluster clusters.
There can be several DR groups in the MetroCluster configuration. MetroCluster provides synchronous DR protection to all data
sets belonging to nodes within a properly configured DR group.
Without the -node parameter, the metrocluster configure command configures all the DR groups in both the
MetroCluster clusters.
With the -node mynode parameter, the command configures both the mynode node and its HA partner node from the local
cluster, and its DR partner and DR auxiliary partner from the peer cluster.
Before running the metrocluster configure command, the aggregates and Vservers on each node must be prepared for the
MetroCluster configuration. Each node should have:
• At least one non-root, mirrored aggregate of size greater than 10GB. This non-root aggregate should not have any volumes in
it.
• No other non-root aggregates. Any other non-root, unmirrored aggregates and volumes should be deleted.
• No Vservers other than Vservers of type "node" or "admin." Any Vservers that are not of type "node" or "admin" should be
deleted.
Parameters
[-node-name {<nodename>|local}] - Node to Configure
This optional parameter specifies the name of a single node in the local cluster. The command creates
MetroCluster configuration on the local node specified by this parameter and the three other nodes belonging
to the same DR group.
[-refresh {true|false}] - Refresh Configuration (privilege: advanced)
This optional parameter specifies if the node partner configuration steps should be done again. Not specifying
this parameter will cause the MetroCluster configuration to continue using the current node partner
information.
[-allow-with-one-aggregate {true|false}] - Override the Two Data Aggregates Requirement (privilege:
advanced)
This optional parameter specifies if MetroCluster configuration should be allowed with only one data
aggregate in each cluster. This option has no effect if two or more aggregates are present.
Examples
The following example shows the creation of the MetroCluster configuration for a single DR group:
The following example shows the creation of the MetroCluster configuration for all DR groups:
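The console listings for both captions are missing; from the documented -node-name parameter (the node name below is illustrative), the invocations were presumably:

```
cluster1::> metrocluster configure -node-name nodeA1
cluster1::> metrocluster configure
```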
Related references
metrocluster configuration-settings on page 265
metrocluster configuration-settings dr-group create on page 275
metrocluster configuration-settings interface create on page 278
metrocluster configuration-settings connection connect on page 268
metrocluster show on page 236
metrocluster heal
Heal DR data aggregates and DR root aggregates
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The metrocluster heal command heals DR data aggregates and DR root aggregates in preparation for a DR switchback.
You must issue this command twice to complete the two phases of the healing process: first to heal the aggregates by
resynchronizing the mirrored plexes and then to heal the root aggregates by switching them back to the disaster site. The DR
partner nodes must be powered off and remote disk shelves must be powered on before running this command.
Parameters
-phase {aggregates|root-aggregates} - MetroCluster Healing Phase
This parameter specifies the healing phase. The first phase, aggregates, heals aggregates by resynchronizing
mirrored plexes. The second phase, root-aggregates, heals the root aggregates of partner nodes. Healing
root aggregates switches them back to the disaster site, allowing the site to boot up.
[-override-vetoes [true]] - Override All Soft Vetoes
This optional parameter overrides almost all heal operation soft vetoes. If this optional parameter is set to true, the system overrides subsystem soft vetoes that might prevent the heal operation. Hard vetoes cannot be overridden and can still prevent the heal operation.
Examples
The following example performs the healing of both the aggregates and root aggregates:
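The two-phase sequence described above can be sketched as follows (the cluster prompt name is illustrative):

```
cluster_A::> metrocluster heal -phase aggregates

cluster_A::> metrocluster heal -phase root-aggregates
```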
Description
The metrocluster modify command modifies MetroCluster parameters for nodes in the MetroCluster configuration.
Parameters
{ -auto-switchover-failure-domain <MetroCluster AUSO Failure Domain> - Cluster Level AUSO Option
This parameter specifies the configuration of automatic switchover. Modifying auto-switchover-failure-domain is not supported on a MetroCluster over IP configuration.
All nodes in a MetroCluster configuration must have this option enabled to enable automatic switchover on
failure.
Examples
The following example shows the output of a MetroCluster modification performed on a node:
Related references
metrocluster configure on page 231
Description
The metrocluster show command displays configuration information for the pair of clusters configured in MetroCluster.
This command displays the following details about the local cluster and the DR partner cluster:
• Configuration State: This field specifies the configuration state of the cluster.
• AUSO Failure Domain: This field specifies the AUSO failure domain of the cluster.
Parameters
[-periodic-check-status ]
If this option is used the MetroCluster periodic check status is displayed.
Examples
The following example shows the output of the command before MetroCluster configuration is done:
The following example shows the output of the command after MetroCluster configuration is done only for some DR
groups:
The following example shows the output of the command after MetroCluster configuration is done:
The following example shows the output of the command in switchover mode:
The following example shows the output of the command when the -periodic-check-status option is used:
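The example outputs are not reproduced here; as a sketch (cluster prompt name illustrative), the invocations referred to above are:

```
cluster_A::> metrocluster show

cluster_A::> metrocluster show -periodic-check-status
```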
Related references
metrocluster node show on page 291
metrocluster configure on page 231
metrocluster switchback
Switch back storage and client access
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The metrocluster switchback command initiates the switchback of storage and client access from nodes in the DR site to
their home nodes. The home nodes and storage shelves must be powered on and reachable by nodes in the DR site. The
metrocluster heal -phase aggregates and metrocluster heal -phase root-aggregates commands must have
successfully completed before running the metrocluster switchback command.
Parameters
[-override-vetoes | -f [true]] - Override All Soft Vetoes
This optional parameter overrides all switchback operation soft vetoes. If this optional parameter is used, the system overrides subsystem soft vetoes that might prevent the switchback operation. Hard vetoes cannot be overridden and can still prevent the switchback operation.
Examples
The following is an example of how to start the switchback operation.
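As a sketch (cluster prompt name illustrative), the operation is started with:

```
cluster_A::> metrocluster switchback
```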
Related references
metrocluster heal on page 234
metrocluster switchover
Switch over storage and client access
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The metrocluster switchover command initiates the switchover of storage and client access from the source cluster to the
disaster recovery (DR) site. This command is to be used after a disaster that renders all the nodes in the source cluster unreachable and powered off. It can also be used for a negotiated switchover when an outage of the source cluster is anticipated, as in cases such as disaster recovery testing or a site going offline for maintenance. If a switchover operation previously failed on certain nodes on the DR site, issuing the command again retries the operation on all of those nodes.
Parameters
{ [-simulate [true]] - Simulate Negotiated Switchover (privilege: advanced)
If this optional parameter is used, the system runs a simulation of the negotiated switchover operation to make
sure all the prerequisites for the operation are met. This parameter cannot be used together with the -forced-on-disaster parameter.
| [-forced-on-disaster [true]] - Force Switchover on Disaster
This optional parameter forces a switchover on disaster. This parameter should be used if all the nodes on the
disaster stricken site are powered off and unreachable. In the absence of this parameter, the command attempts
to perform a negotiated switchover operation.
[-force-nvfail-all [true]] - Sets in-nvfailed-state on All Volumes (privilege: advanced)
If this parameter is used, the switchover command will set the in-nvfailed-state parameter to true for all volumes being switched over and will set the -dr-force-nvfail parameter to true for any volumes that do not already have it enabled. This parameter has no effect when performing a negotiated switchover.
[-retry-failed-nodes <Node name>, ...]} - Nodes to Switchover
This optional parameter takes the list of nodes that previously failed the switchover operation and retries the switchover operation on each of those nodes. This parameter is applicable only for a switchover with the -forced-on-disaster parameter.
[-override-vetoes [true]] - Override All Soft Vetoes
This optional parameter overrides all switchover operation soft vetoes. If this parameter is used, the system
overrides all subsystem soft vetoes that might prevent the switchover operation. Hard vetoes cannot be
overridden and can still prevent the switchover operation.
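The two modes described above can be sketched as follows (cluster prompt name illustrative):

```
cluster_A::> metrocluster switchover

cluster_A::> metrocluster switchover -forced-on-disaster true
```

The first form attempts a negotiated switchover; the second forces a switchover after a disaster.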
Related references
metrocluster show on page 236
metrocluster operation show on page 294
metrocluster heal on page 234
metrocluster switchback on page 237
Description
The metrocluster check disable-periodic-check command disables the periodic checking of the MetroCluster
configuration.
After this command is run, the MetroCluster Check job will be prevented from periodically checking the configuration for
errors.
Examples
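As a sketch (cluster prompt name illustrative), the command is invoked without parameters:

```
cluster_A::> metrocluster check disable-periodic-check
```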
Related references
metrocluster check enable-periodic-check on page 240
Description
The metrocluster check enable-periodic-check command enables the periodic checking of the MetroCluster
configuration.
After this command is run, the MetroCluster Check job will be able to run in the background and periodically check the configuration for errors.
Examples
Related references
metrocluster check disable-periodic-check on page 239
Description
The metrocluster check run command performs checks on the MetroCluster configuration and reports configuration errors, if any.
To run this command, at least one DR group needs to be configured. The command checks the following parts of the
configuration:
Node Configuration:
• metrocluster-ready: This check verifies that the node is ready for MetroCluster configuration.
• local-ha-partner: This check verifies that the HA partner node is in the same cluster.
• ha-mirroring-on: This check verifies that HA mirroring for the node is configured.
• symmetric-ha-relationship: This check verifies that the relationship between the node and its HA partner is symmetric.
• remote-dr-partner: This check verifies that the DR partner node is in the remote cluster.
• dr-mirroring-on: This check verifies that DR mirroring for the node is configured.
• symmetric-dr-relationship: This check verifies that the relationship between the node and its DR partner is symmetric.
• remote-dr-auxiliary-partner: This check verifies that the DR auxiliary partner node is in the remote cluster.
• symmetric-dr-auxiliary-relationship: This check verifies that the relationship between the node and its DR auxiliary partner
is symmetric.
• node-object-limit: This check verifies that the node object limit option for the node is turned on.
Aggregate Configuration:
• disk-pool-allocation: This check verifies that the disks belonging to this aggregate have been correctly allocated to the right
pools.
At the end of the check, the command displays a summary of the results. This summary output can be viewed again by running metrocluster check show. If any of the rows in this output show warnings, more details can be viewed by running the metrocluster check show command for that component.
Parameters
[-skip-dr-simulation {true|false}] - Skip the DR Readiness Checks (privilege: advanced)
If this optional parameter is set to true, the switchover and switchback simulations are not run.
Examples
The following example shows the execution of the command when there are no warnings:
Component Result
------------------- ---------
nodes ok
clusters ok
lifs ok
config-replication ok
aggregates ok
5 entries were displayed.
Command completed. Use the "metrocluster check show -instance" command or sub-commands in
"metrocluster check" directory for detailed results.
The following example shows the execution of the command when there are some warnings:
Component Result
------------------- ---------
nodes warning
clusters ok
lifs ok
config-replication ok
aggregates ok
5 entries were displayed.
Command completed. Use the "metrocluster check show -instance" command or sub-commands in
"metrocluster check" directory for detailed results.
Related references
metrocluster check show on page 242
Description
The metrocluster check show command displays the results of the metrocluster check run command.
This command displays the high-level verification results for each of the components. If there are any errors for a component,
running the show command for that component (for example metrocluster check node show or metrocluster check
aggregate show) will display more information about the warning.
Note: This command does not run the checks; it only displays the results of checks. To see the latest results, run the metrocluster check run command and then run this command.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-timestamp <MM/DD/YYYY HH:MM:SS>] - Time of Check
This is the time at which the metrocluster check run command was last run in this cluster and these
results were produced. If this parameter is specified, only rows with this timestamp will be displayed.
[-component <MetroCluster Check Components>] - Name of the Component
This is the name of the component. If this parameter is specified, only rows with this component will be
displayed.
[-result {ok|warning|not-run|not-applicable}] - Result of the Check
This is the result of the check for the component. If this parameter is specified, only rows with this result will
be displayed.
[-additional-info <text>] - Additional Information/Recovery Steps
This is the additional info for the verification for this component. This field will have detailed information
about the warning and recovery steps. If this parameter is specified, only rows with this additional info will be
displayed.
Examples
The following example shows the execution of the command when there are no warnings:
Component Result
------------------- ---------
nodes ok
clusters ok
lifs ok
config-replication ok
aggregates ok
The following example shows the execution of the command when there are some warnings:
Component Result
------------------- ---------
nodes warning
clusters ok
lifs ok
config-replication ok
aggregates ok
connections ok
6 entries were displayed.
The following example shows the execution of the command with -instance option:
Related references
metrocluster check run on page 240
metrocluster check node show on page 257
metrocluster check aggregate show on page 244
metrocluster check cluster show on page 246
metrocluster check config-replication show on page 249
metrocluster check connection show on page 252
Description
The metrocluster check aggregate show command displays the results of aggregate checks performed by the
metrocluster check run command.
The command verifies the following aspects of the configuration of all aggregates in MetroCluster:
• disk-pool-allocation: This check verifies that the disks belonging to this aggregate have been correctly allocated to the right
pools.
Additional information about the warnings (if any) and recovery steps can be viewed by running the command with the -instance option.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <Node name>] - Node Name
This is the name of the node for which the check was run. If this parameter is specified, only rows with this
node will be displayed.
[-aggregate <aggregate name>] - Name of the Aggregate
This is the name of the aggregate for which the check was run. If this parameter is specified, only rows with
this aggregate will be displayed.
[-check <MetroCluster Aggregate Check>] - Type of Check
This is the type of the check performed. If this parameter is specified, only rows with this check will be
displayed.
[-cluster <Cluster name>] - Name of Cluster
This is the name of the cluster the node belongs to. If this parameter is specified, only rows with this cluster
will be displayed.
[-result {ok|warning|not-run|not-applicable}] - Result of the Check
This is the result of the check. If this parameter is specified, only rows with this result will be displayed.
[-additional-info <text>, ...] - Additional Information/Recovery Steps
This is additional information about the check. This field has more information and recovery steps for the
warning. If this parameter is specified, only rows with this additional info will be displayed.
Examples
The following example shows the execution of the command in a MetroCluster configuration with two nodes per cluster:
The following example shows the execution of the command with -instance option:
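The example outputs are not reproduced here; as a sketch (cluster prompt name illustrative), the two invocations referred to above are:

```
cluster_A::> metrocluster check aggregate show

cluster_A::> metrocluster check aggregate show -instance
```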
Related references
metrocluster check run on page 240
metrocluster check show on page 242
metrocluster check node show on page 257
Description
The metrocluster check cluster show command displays the results of cluster checks performed by the metrocluster
check run command.
• negotiated-switchover-ready: This check verifies that the cluster is ready for a negotiated switchover operation.
• switchback-ready: This check verifies that the cluster is ready for a switchback operation.
• job-schedules: This check verifies that the job schedules between the local and remote clusters are consistent.
• licenses: This check verifies that the licenses between the local and remote clusters are consistent.
• periodic-check-enabled: This check verifies that the periodic MetroCluster Check Job is enabled.
• onboard-key-management: This check verifies that the Onboard Key Management hierarchies are consistent.
• external-key-management: This check verifies that the External Key Management configurations are consistent.
Additional information about the warnings (if any) and recovery steps can be viewed by running the command with the -instance parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-check {negotiated-switchover-ready|switchback-ready|job-schedules|licenses|periodic-check-enabled|onboard-key-management|external-key-management}] - Type of Check
This is the type of the check performed. If this parameter is specified, only rows with this check will be
displayed.
[-cluster <Cluster name>] - Cluster Name
This is the name of the cluster the check results apply to. If this parameter is specified, only rows matching the
specified cluster will be displayed.
[-result {ok|warning|not-run|not-applicable}] - Result of the Check
This is the result of the check. If this parameter is specified, only rows with this result will be displayed.
[-additional-info <text>] - Additional Information/Recovery Steps
This is additional information about the check. This field has more information and recovery steps for the
warning. If this parameter is specified, only rows with this additional info will be displayed.
Examples
The following example shows the execution of the command in a MetroCluster configuration:
The following example shows the execution of the command with the -instance parameter:
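The example outputs are not reproduced here; as a sketch (cluster prompt name illustrative), the invocations referred to above are:

```
cluster_A::> metrocluster check cluster show

cluster_A::> metrocluster check cluster show -instance
```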
Related references
metrocluster check run on page 240
metrocluster check show on page 242
metrocluster check node show on page 257
Description
The metrocluster check config-replication show command displays the results of MetroCluster configuration
replication.
The command verifies the following aspects of MetroCluster configuration replication:
• Remote Heartbeat: Verifies that the MetroCluster configuration replication heartbeat with the remote cluster is healthy.
• Last Heartbeat Sent: Prints the timestamp of the last MetroCluster configuration replication heartbeat sent to the remote
cluster.
• Last Heartbeat Received: Prints the timestamp of the last MetroCluster configuration replication heartbeat received from the remote cluster.
• Storage Remarks: Prints the underlying root cause for unhealthy MetroCluster configuration storage.
• Vserver Streams: Verifies that MetroCluster configuration replication Vserver streams are healthy.
• Cluster Streams: Verifies that MetroCluster configuration replication Cluster streams are healthy.
Additional information about the warnings (if any) and recovery steps can be viewed by running the command with the -instance option.
Examples
The following example shows the output of metrocluster check config-replication show:
Related references
metrocluster check run on page 240
metrocluster check show on page 242
metrocluster check config-replication show-aggregate-eligibility on page 250
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The metrocluster check config-replication show-aggregate-eligibility command displays the MetroCluster
configuration replication aggregate eligibility.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Aggregate
This is the aggregate name. If this parameter is specified, only rows with this aggregate will be displayed.
[-hosted-configuration-replication-volumes <volume name>, ...] - Currently Hosted Configuration
Replication Volumes
This is the list of the configuration replication volumes hosted on this aggregate. If this parameter is specified,
only rows with these configuration replication volumes will be displayed.
[-is-eligible-to-host-additional-volumes {true|false}] - Eligibility to Host Another Configuration
Replication Volume
This is the eligibility of the aggregate to host additional configuration replication volumes. If this parameter is
specified, only rows with this eligibility will be displayed.
Examples
The following example shows the execution of the command in a MetroCluster configuration with thirteen aggregates in
the cluster:
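The example output is not reproduced here; as a sketch (cluster prompt name illustrative), the invocation referred to above is:

```
cluster_A::> metrocluster check config-replication show-aggregate-eligibility
```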
Related references
metrocluster check run on page 240
metrocluster check show on page 242
metrocluster check config-replication show on page 249
Description
The metrocluster check config-replication show-capture-status command indicates whether or not a
configuration change that would prevent a negotiated switchover is currently being captured for replication.
Examples
The following example shows the execution of the command in a MetroCluster configuration when capture is not in
progress:
Description
The metrocluster check connection show command displays the check results of connections for nodes in a
MetroCluster over IP configuration.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that you
specify.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all entries.
[-dr-group-id <integer>] - DR Group ID
If this parameter is specified, the command displays information for the matching DR group.
[-cluster-uuid <UUID>] - Cluster UUID
If this parameter is specified, the command displays information for the matching cluster specified by uuid.
[-cluster <Cluster name>] - Cluster Name
If this parameter is specified, the command displays information for the matching cluster.
[-node-uuid <UUID>] - Node UUID
If this parameter is specified, the command displays information for the matching node specified by uuid.
[-node <text>] - Node Name
If this parameter is specified, the command displays information for the matching nodes.
[-home-port {<netport>|<ifgrp>}] - Home Port
If this parameter is specified, the command displays information for the matching home-port.
[-relationship-type <Roles of MetroCluster Nodes>] - Relationship Role Type
If this parameter is specified, the command displays information for the matching relationship-type.
[-source-address <IP Address>] - Source Network Address
If this parameter is specified, the command displays information for the matching source address.
[-destination-address <IP Address>] - Destination Network Address
If this parameter is specified, the command displays information for the matching destination address.
[-partner-cluster-uuid <UUID>] - Partner Cluster UUID
If this parameter is specified, the command displays information for the matching partner-cluster-uuid.
[-partner-node-uuid <UUID>] - Partner Node UUID
If this parameter is specified, the command displays information for the matching partner-node-uuid.
Examples
The following example shows the output of the metrocluster check connection show command:
Description
The metrocluster check lif repair-placement command reruns LIF placement for the LIFs displayed by the metrocluster check lif show command. This command is expected to be run after the administrator manually rectifies the LIF placement failures displayed in the metrocluster check lif show command output. The command is successful if the LIF placement rerun does not encounter any LIF placement failures. Confirm the result by running the metrocluster check lif show command again.
Parameters
-vserver <Vserver Name> - sync-source Vserver Name
This is the name of the sync source Vserver that has LIF placement failures as reported by the metrocluster
check lif show command. This input ensures that the command is run on the specified Vserver.
Examples
The following example shows the execution of the command with a sync source Vserver and a LIF specified:
The following example shows the execution of the command with only a sync-source Vserver specified:
Command completed. Run the "metrocluster check lif show" command for results.
clusA::>
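The invocation lines of the examples above are not reproduced here. As a sketch, using the -vserver parameter documented above (vs_src is a hypothetical sync-source Vserver name):

```
clusA::> metrocluster check lif repair-placement -vserver vs_src
```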
Related references
metrocluster check lif show on page 255
Description
The metrocluster check lif show command displays the LIF placement failures in the MetroCluster configuration.
The command verifies the following aspects of the LIF placements of all the data LIFs in the MetroCluster configuration:
• lif-placed-on-dr-node: This check verifies that the LIF is placed on the DR partner node.
• port-selection: This check verifies that the LIF is placed on the correct port.
The LIF placement failures are mostly fabric or network connectivity issues that require manual intervention. Once the connectivity issues are resolved manually, the administrator is expected to run the metrocluster check lif repair-placement command to resolve the LIF placement issues for the sync-source Vserver.
Additional information about the warnings (if any) and recovery steps can be viewed by running the command with the -instance option.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example shows the execution of the command in a MetroCluster configuration with two nodes per cluster:
Related references
metrocluster check lif repair-placement on page 254
Description
The metrocluster check node show command displays the results of node checks performed by the metrocluster
check run command.
The command displays the results of the following node configuration checks:
• node-reachable: This check verifies that the node is reachable.
• metrocluster-ready: This check verifies that the node is ready for MetroCluster configuration.
• local-ha-partner: This check verifies that the HA partner node is in the same cluster.
• ha-mirroring-on: This check verifies that HA mirroring for the node is configured.
• symmetric-ha-relationship: This check verifies that the relationship between the node and its HA partner is symmetric.
• remote-dr-partner: This check verifies that the DR partner node is in the remote cluster.
• dr-mirroring-on: This check verifies that DR mirroring for the node is configured.
• remote-dr-auxiliary-partner: This check verifies that the DR auxiliary partner node is in the remote cluster.
• symmetric-dr-auxiliary-relationship: This check verifies that the relationship between the node and its DR auxiliary partner
is symmetric.
• has-intercluster-lif: This check verifies that the node has an intercluster LIF.
• node-object-limit: This check verifies that the node object limit option for the node is turned on.
• automatic-uso: This check verifies that the automatic USO option for the node is enabled.
Additional information about the warnings (if any) and recovery steps can be viewed by running the command with the -instance parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <Node name>] - Node Name
This is the name of the node for which the check was run. If this parameter is specified, only rows with this
node will be displayed.
Examples
The following example shows the execution of the command in a MetroCluster configuration with two nodes per cluster:
The following example shows the execution of the command with the -instance parameter:
Related references
metrocluster check run on page 240
metrocluster check show on page 242
metrocluster check aggregate show on page 244
Description
The metrocluster check volume show command displays the results of volume checks performed by the metrocluster
check run command.
The command displays the results of the following volume configuration checks:
• mixed-flexgroups: This check looks for FlexGroup volumes residing on a mix of mirrored and unmirrored aggregates.
Additional information about the warnings, if any, and recovery steps can be viewed by running the command with the -instance parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
This is the name of the Vserver that contains the volume that the check results apply to. If this parameter is specified, only rows matching the specified Vserver will be displayed.
[-volume <volume name>] - Volume Name
This is the name of the volume that the check results apply to. If this parameter is specified, only rows
matching the specified volume will be displayed.
[-check <MetroCluster Volume Check>] - Type of Check
This is the type of the check performed. If this parameter is specified, only rows with this check will be
displayed.
[-result {ok|warning|not-run|not-applicable}] - Result of the Check
This is the result of the check. If this parameter is specified, only rows with this result will be displayed.
[-additional-info <text>, ...] - Additional Information/Recovery Steps
This is additional information about the check. This field has more information and recovery steps for the
warning. If this parameter is specified, only rows with this additional info will be displayed.
Examples
The following example shows the execution of the command in a MetroCluster configuration:
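The example output is not reproduced here; as a sketch (cluster prompt name illustrative), the invocation referred to above is:

```
cluster_A::> metrocluster check volume show
```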
Related references
metrocluster check run on page 240
metrocluster check show on page 242
metrocluster check node show on page 257
metrocluster check aggregate show on page 244
Description
The metrocluster config-replication cluster-storage-configuration modify command modifies the
configuration of storage used for configuration replication.
Parameters
[-disallowed-aggregates <aggregate name>, ...] - Disallowed Aggregates
Use this parameter to set the list of storage aggregates that are not available to host storage for configuration
replication.
Examples
The following example disallows two aggregates named aggr1 and aggr2:
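A sketch of the invocation for the example above (cluster prompt name illustrative):

```
cluster_A::> metrocluster config-replication cluster-storage-configuration modify -disallowed-aggregates aggr1,aggr2
```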
Related references
metrocluster config-replication cluster-storage-configuration show on page 264
Description
The metrocluster config-replication cluster-storage-configuration show command shows details of the
configuration of the storage used for configuration replication.
The information displayed is the following:
• Disallowed Aggregates - The list of storage aggregates that are configured as not allowed to host storage areas.
• Auto-Repair - Displays true if the automatic repair of storage areas used by configuration replication is enabled.
• Auto-Recreate - Displays true if the automatic recreation of storage volumes used by configuration replication is enabled.
• Use Mirrored Aggregate - Displays true if storage areas for configuration replication are to be hosted on a mirrored
aggregate.
Examples
The following is an example of the metrocluster config-replication cluster-storage-configuration
show command:
Disallowed Aggregates: -
Auto-Repair: true
Auto-Recreate: true
Use Mirrored Aggregate: true
Related references
metrocluster config-replication cluster-storage-configuration modify on page 263
Description
The metrocluster config-replication resync-status show command displays the state of the configuration
synchronization operation between the two clusters in the MetroCluster configuration.
• Source: This is the source side whose configuration is being replicated to the destination side.
• Destination: This is the destination side where the configuration is being replicated to from the source side.
Examples
The following example shows the output of the command when synchronization is in progress:
The following example shows the output of the command when synchronization from clusB to clusA is in progress:
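The example outputs are not reproduced here; as a sketch (cluster prompt name illustrative), the invocation referred to above is:

```
cluster_A::> metrocluster config-replication resync-status show
```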
Related references
metrocluster show on page 236
metrocluster check config-replication show on page 249
Description
The metrocluster configuration-settings show-status command displays the configuration settings status for nodes in a MetroCluster setup. If a DR group has not been created, only the status for nodes in the local cluster is displayed.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that you
specify.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all entries.
Examples
The following example shows the display of MetroCluster setup status:
Nodes do not have a valid platform-specific personality value (equivalent to HAOSC parameter on
non-Apollo platforms) for a MetroCluster setup.
Output of the command when MetroCluster setup uses IP links and before "metrocluster
configuration-settings dr-group create" command is run:
Examples
The following example shows the output for the check command in MetroCluster over IP configurations:
Description
The metrocluster configuration-settings connection connect command configures the connections that mirror
NV logs and access remote storage between partner nodes in a MetroCluster setup.
This command is used for MetroCluster setups that are connected through IP links. MetroCluster setups that are connected
through FC links configure the FC connections automatically.
The metrocluster configuration-settings commands are run in the following order to set up MetroCluster:
• metrocluster configuration-settings dr-group create
• metrocluster configuration-settings interface create
• metrocluster configuration-settings connection connect
Before running this command, the DR groups must have been configured. Run the metrocluster configuration-settings
dr-group show command to verify that every node is partnered in a DR group.
After the connections are configured, the nodes will:
• Have NV log mirroring configured but disabled. NV log mirroring will be enabled by the metrocluster
configure command.
• Have access to remote storage. Use the storage disk show -pool Pool1 command to view the remote disks that are
hosted on DR partner nodes.
The DR groups and network logical interfaces that were configured by the metrocluster configuration-settings
commands cannot be deleted after the connections have been configured. The metrocluster configuration-settings
connection disconnect command must be run to remove the connections before the DR groups and network logical
interfaces can be deleted.
Examples
The following example shows configuration of connections in a MetroCluster over IP setup:
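A representative invocation might look like the following (the cluster name is illustrative; output is omitted):
clusA::> metrocluster configuration-settings connection connect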
Related references
metrocluster configuration-settings on page 265
metrocluster configuration-settings dr-group create on page 275
metrocluster configuration-settings interface create on page 278
metrocluster configuration-settings dr-group show on page 277
metrocluster configuration-settings interface show on page 282
metrocluster configure on page 231
metrocluster configuration-settings connection disconnect on page 270
Description
The metrocluster configuration-settings connection disconnect command removes the connections between
nodes in a DR group that are used to mirror NV logs and access remote storage.
This command cannot be run if a node in the DR group has remote disks assigned to the node. The assigned ownership of
remote disks can be removed by running the storage disk removeowner command.
The metrocluster configuration-settings commands are run in the following order to remove the MetroCluster over IP
configuration:
• metrocluster configuration-settings connection disconnect
• metrocluster configuration-settings interface delete
• metrocluster configuration-settings dr-group delete
Parameters
-dr-group-id <integer> - DR Group ID
This parameter identifies the DR group to be disconnected.
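For example, to disconnect DR group 1 (the group ID and cluster name are illustrative; output omitted):
clusA::> metrocluster configuration-settings connection disconnect -dr-group-id 1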
Related references
storage disk removeowner on page 946
metrocluster configuration-settings on page 265
metrocluster configuration-settings interface delete on page 280
metrocluster configuration-settings dr-group delete on page 276
Description
The metrocluster configuration-settings connection show command displays the connection configuration
information between the nodes in a MetroCluster setup.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that you
specify.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all entries.
[-dr-group-id <integer>] - DR Group ID
If this parameter is specified, the command displays information for the matching DR group.
[-cluster-uuid <UUID>] - Cluster UUID
If this parameter is specified, the command displays information for the matching cluster specified by uuid.
[-cluster <Cluster name>] - Cluster Name
If this parameter is specified, the command displays information for the matching cluster.
[-node-uuid <UUID>] - Node UUID
If this parameter is specified, the command displays information for the matching node specified by uuid.
[-node <text>] - Node Name
If this parameter is specified, the command displays information for the matching nodes.
[-home-port {<netport>|<ifgrp>}] - Home Port
If this parameter is specified, the command displays information for the matching home-port.
[-relationship-type <Roles of MetroCluster Nodes>] - Relationship Role Type
If this parameter is specified, the command displays information for the matching relationship-type.
[-source-address <IP Address>] - Source Network Address
If this parameter is specified, the command displays information for the matching source address.
[-destination-address <IP Address>] - Destination Network Address
If this parameter is specified, the command displays information for the matching destination address.
Examples
The following example shows the output of the metrocluster configuration-settings connection show
command:
Output of the command before the connections are established using the metrocluster
configuration-settings connection connect command:
Output of the command after the connections are established using the metrocluster
configuration-settings connection connect command:
Related references
metrocluster configuration-settings connection connect on page 268
Description
The metrocluster configuration-settings dr-group create command partners the nodes that will comprise a DR
group in a MetroCluster setup.
This command is used for MetroCluster setups that are connected through IP links. MetroCluster setups that are connected
through FC links configure DR groups automatically and do not require the metrocluster configuration-settings
commands.
The metrocluster configuration-settings commands are run in the following order to set up MetroCluster:
• metrocluster configuration-settings dr-group create
• metrocluster configuration-settings interface create
• metrocluster configuration-settings connection connect
Before running this command, cluster peering must be configured between the local and partner clusters. Run the cluster
peer show command to verify that peering is available between the local and partner clusters.
This command configures a local node and a remote node as DR partner nodes. The command also configures the HA partner of
the local node and the HA partner of the remote node as the other DR partner nodes in the DR group.
Parameters
-partner-cluster <Cluster name> - Partner Cluster Name
Use this parameter to specify the name of the partner cluster.
-local-node {<nodename>|local} - Local Node Name
Use this parameter to specify the name of a node in the local cluster.
-remote-node <text> - Remote Node Name
Use this parameter to specify the name of a node in the partner cluster that is to be the DR partner of the
specified local node.
Examples
The following example shows the creation of the MetroCluster DR group:
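A representative invocation, using the parameters documented above (cluster and node names are illustrative; output omitted):
clusA::> metrocluster configuration-settings dr-group create -partner-cluster clusB -local-node clusA-01 -remote-node clusB-01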
Related references
metrocluster configuration-settings on page 265
metrocluster configuration-settings interface create on page 278
metrocluster configuration-settings connection connect on page 268
cluster peer show on page 62
Description
The metrocluster configuration-settings dr-group delete command deletes a DR group and its node
partnerships that were configured using the metrocluster configuration-settings dr-group create command.
This command cannot be run if the metrocluster configuration-settings interface create command has
configured a network logical interface on a network port provisioned for MetroCluster. In that case, the metrocluster
configuration-settings interface delete command must first be run to delete the network logical interfaces on
every node in the DR group.
The metrocluster configuration-settings commands are run in the following order to remove the MetroCluster over
IP configuration:
• metrocluster configuration-settings connection disconnect
• metrocluster configuration-settings interface delete
• metrocluster configuration-settings dr-group delete
Examples
The following example shows the deletion of the MetroCluster DR group:
Warning: This command deletes the existing DR group relationship. Are you sure
you want to proceed ? {y|n}: y
[Job 279] Job succeeded: DR Group Delete is successful.
Related references
metrocluster configuration-settings dr-group create on page 275
metrocluster configuration-settings interface create on page 278
metrocluster configuration-settings interface delete on page 280
metrocluster configuration-settings on page 265
metrocluster configuration-settings connection disconnect on page 270
Description
The metrocluster configuration-settings dr-group show command displays the DR groups and their nodes.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that you
specify.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all entries.
[-dr-group-id <integer>] - DR Group ID
If this parameter is specified, the command displays information for the matching DR group.
[-cluster-uuid <UUID>] - Cluster UUID
If this parameter is specified, the command displays information for the matching cluster uuid.
Examples
The following example illustrates the display of DR group configuration in a four-node MetroCluster setup:
Description
The metrocluster configuration-settings interface create command configures the network logical interfaces
that will be used on a node in a MetroCluster setup to mirror NV logs and access remote storage.
This command is used for MetroCluster setups that are connected through IP links. MetroCluster setups that are connected
through FC links do not require the user to provision network logical interfaces to mirror NV logs and access remote storage.
The metrocluster configuration-settings commands are run in the following order to set up MetroCluster:
• metrocluster configuration-settings dr-group create
• metrocluster configuration-settings interface create
• metrocluster configuration-settings connection connect
Before running this command, the node's DR group must be configured using the metrocluster configuration-settings
dr-group create command. Run the metrocluster configuration-settings dr-group show
command to verify that the node's DR group has been configured.
Examples
This example shows configuring a logical interface on a MetroCluster IP-capable port:
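A representative invocation follows; the -address and -netmask parameters are assumptions based on the interface create syntax, and all names and addresses are illustrative (output omitted):
clusA::> metrocluster configuration-settings interface create -cluster-name clusA -home-node clusA-01 -home-port e1a -address 10.1.1.1 -netmask 255.255.255.0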
Related references
metrocluster configuration-settings on page 265
metrocluster configuration-settings dr-group create on page 275
metrocluster configuration-settings connection connect on page 268
metrocluster configuration-settings dr-group show on page 277
The metrocluster configuration-settings commands are run in the following order to remove the MetroCluster over
IP configuration:
• metrocluster configuration-settings connection disconnect
• metrocluster configuration-settings interface delete
• metrocluster configuration-settings dr-group delete
Parameters
-cluster-name <Cluster name> - Cluster Name
Use this parameter to specify the name of the local or partner cluster.
-home-node <text> - Home Node
Use this parameter to specify the home node in the cluster which hosts the interface.
-home-port {<netport>|<ifgrp>} - Home Port
Use this parameter to specify the home port provisioned for MetroCluster.
Examples
The following example shows the deletion of an interface in a MetroCluster setup:
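A representative invocation, using the parameters documented above (cluster, node, and port names are illustrative; output omitted):
clusA::> metrocluster configuration-settings interface delete -cluster-name clusA -home-node clusA-01 -home-port e1a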
Related references
metrocluster configuration-settings connection connect on page 268
metrocluster configuration-settings connection disconnect on page 270
metrocluster configuration-settings on page 265
metrocluster configuration-settings dr-group delete on page 276
Description
The metrocluster configuration-settings interface show command displays the network logical interfaces that
were provisioned for MetroCluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that you
specify.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all entries.
[-dr-group-id <integer>] - DR Group ID
If this parameter is specified, the command displays information for the matching DR group.
[-cluster-uuid <UUID>] - Cluster UUID
If this parameter is specified, the command displays information for the matching cluster specified by uuid.
[-cluster <Cluster name>] - Cluster Name
If this parameter is specified, the command displays information for the matching cluster.
[-node-uuid <UUID>] - Node UUID
If this parameter is specified, the command displays information for the matching node specified by uuid.
[-node <text>] - Node Name
If this parameter is specified, the command displays information for the matching nodes.
Examples
The following example illustrates the display of logical interfaces configured in a four-node MetroCluster setup:
Parameters
-node {<nodename>|local} - Node Name
This parameter specifies the node name.
-is-ood-enabled {true|false} - Is Out-of-Order Delivery Enabled?
This parameter specifies the out-of-order delivery setting on the adapter.
Examples
The following example enables out-of-order delivery for the port 'fcvi_device_0' on the node 'clusA-01':
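A representative invocation follows; the metrocluster interconnect adapter modify command name and the -adapter-name parameter used to identify the port are assumptions, and the node and port names are taken from the example description (output omitted):
clusA::> metrocluster interconnect adapter modify -node clusA-01 -adapter-name fcvi_device_0 -is-ood-enabled true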
Description
The metrocluster interconnect adapter show command displays interconnect adapter information for the nodes in a
MetroCluster configuration.
This command displays the following details about the local node and the HA partner node:
• Node: This field specifies the name of the node in the cluster.
• Adapter Name: This field specifies the name of the interconnect adapter.
• Adapter Type: This field specifies the type of the interconnect adapter.
• Link Status: This field specifies the physical link status of the interconnect adapter.
• Is OOD Enabled: This field specifies the out-of-order delivery status of the interconnect adapter.
• IP Address: This field specifies the IP address assigned to the interconnect adapter.
• Port Number: This field specifies the port number of the interconnect adapter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-connectivity ]
Displays the connectivity information from all the interconnect adapters to the connected nodes.
| [-switch ]
Displays details of switches connected to all the interconnect adapters.
Examples
The following example shows the output of the command during normal operation (neither cluster is in switchover state):
The following example shows the output of the command after MetroCluster switchover is performed:
The following example shows the output of the command with the connectivity field during normal operation (neither cluster
is in switchover state):
clusA::> metrocluster interconnect adapter show -connectivity -node local -type FC-VI
Remote Adapters:
Remote Adapters:
The following example shows the output of the command with the connectivity field after MetroCluster switchover is
performed.
clusA::> metrocluster interconnect adapter show -connectivity -node local -type FC-VI
Remote Adapters:
Remote Adapters:
Description
The metrocluster interconnect mirror show command displays NVRAM mirror information for the nodes configured
in a MetroCluster.
This command displays the following details about the local node and the HA partner node:
• Node: This field specifies the name of the node in the cluster.
• Partner Name: This field specifies the name of the partner node.
• Mirror Admin Status: This field specifies the administrative status of the NVRAM mirror between partner nodes.
• Mirror Oper Status: This field specifies the operational status of the NVRAM mirror between partner nodes.
• Adapter: This field specifies the name of the interconnect adapter used for NVRAM mirroring.
• Type: This field specifies the type of the interconnect adapter used for NVRAM mirroring.
• Status: This field specifies the physical status of the interconnect adapter used for NVRAM mirroring.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
If this parameter is specified, mirror details of the specified node are displayed.
[-partner-type {HA|DR|AUX}] - Partner Type
If this parameter is specified, mirror details of the specified partner type are displayed.
[-adapter <text>] - Adapter
If this parameter is specified, mirror details of the specified adapter are displayed.
[-type <text>] - Adapter Type
If this parameter is specified, mirror details of the specified adapter type are displayed.
[-status <text>] - Status
If this parameter is specified, mirror details of the adapter with the specified status are displayed.
Examples
The following example shows the output of the command during normal operation (neither cluster is in switchover state):
The following example shows the output of the command after MetroCluster switchover is performed:
Description
The metrocluster interconnect mirror multipath show command displays the NVRAM mirror multipath policy for
the nodes configured in a MetroCluster.
This command displays the following details about the local node and the HA partner node:
• Node: This field specifies the name of the node in the cluster.
• Multipath Policy: This field specifies the multipath policy used for NVRAM mirroring.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
If this parameter is specified, mirror details of the specified node are displayed.
[-multipath-policy {no-mp|static-map|dynamic-map|round-robin}] - Multipath Policy
If this parameter is specified, nodes with the specified multipath policy are displayed.
Examples
The following example shows the output of the command:
Description
The metrocluster node show command displays configuration information for the nodes in the MetroCluster configuration.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-partners ]
If this option is used, the MetroCluster node partnership view will be displayed.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-dr-group-id <integer>] - DR Group ID
If this parameter is specified, all nodes belonging to the specified DR group are displayed.
[-cluster <Cluster name>] - Cluster Name
If this parameter is specified, all nodes belonging to the specified cluster are displayed.
[-node <Node name>] - Node Name
If this parameter is specified, the specified node is displayed.
[-ha-partner <Node name>] - HA Partner Name
If this parameter is specified, the node with the specified HA partner is displayed.
[-dr-cluster <Cluster name>] - DR Cluster Name
If this parameter is specified, all nodes belonging to the specified cluster are displayed.
[-dr-partner <Node name>] - DR Partner Name
If this parameter is specified, the node with the specified DR partner is displayed.
[-dr-auxiliary <Node name>] - DR Auxiliary Name
If this parameter is specified, the node with the specified DR auxiliary partner is displayed.
[-node-uuid <UUID>] - Node UUID
If this parameter is specified, the node with the specified UUID is displayed.
[-ha-partner-uuid <UUID>] - HA Partner UUID
If this parameter is specified, the node with the specified HA partner UUID is displayed.
[-dr-partner-uuid <UUID>] - DR Partner UUID
If this parameter is specified, the node with the specified DR partner UUID is displayed.
[-dr-auxiliary-uuid <UUID>] - DR Auxiliary UUID
If this parameter is specified, the node with the specified DR auxiliary partner UUID is displayed.
[-node-cluster-uuid <UUID>] - Node Cluster UUID
If this parameter is specified, all nodes belonging to the specified cluster are displayed.
[-ha-partner-cluster-uuid <UUID>] - HA Partner Cluster UUID
If this parameter is specified, all nodes whose HA partner belong to the specified cluster are displayed.
Examples
The following example shows the output of the command before the MetroCluster configuration is done:
The following example shows the output of the command when some DR groups in the MetroCluster configuration are
not yet configured:
The following example shows the output of the command after all DR groups in the MetroCluster configuration are
configured:
Related references
metrocluster configure on page 231
metrocluster show on page 236
Description
The metrocluster operation show command displays information about the most recent MetroCluster operation run on
the local cluster.
This command will display information about all MetroCluster commands except for the commands in the metrocluster
check directory. This command will not display any information after MetroCluster has been completely unconfigured using
the metrocluster unconfigure command.
Examples
The following example shows the output of metrocluster operation show after a metrocluster
configure command has completed successfully:
Related references
metrocluster check on page 239
metrocluster configure on page 231
metrocluster operation history show on page 294
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-operation-uuid <UUID>] - Identifier for the Operation
This is the UUID of the operation. If this parameter is specified, only the operation with this UUID is
displayed.
[-cluster <Cluster name>] - Cluster Where the Command Was Run
This is the name of the cluster where the command was run. If this parameter is specified, only the operations
that were run in this cluster are displayed.
[-node-name <Node name>] - Node Where the Command Was Run
This is the name of the node where the command was run. If this parameter is specified, only the operations
that were run on this node are displayed.
[-operation <MetroCluster Operation Name>] - Name of the Operation
This is the name of the operation. If this parameter is specified, only the operations with this name are
displayed.
[-start-time <MM/DD/YYYY HH:MM:SS>] - Start Time
This is the time the operation started execution. If this parameter is specified, only the operations that were
started at this time are displayed.
[-state <MetroCluster Operation state>] - State of the Operation
This is the state of the operation. If this parameter is specified, only the operations that are in this state are
displayed.
[-end-time <MM/DD/YYYY HH:MM:SS>] - End Time
This is the time the operation completed. If this parameter is specified, only the operations that completed at
this time are displayed.
[-error-list <text>, ...] - Error List For the Operation
This is the list of errors that were encountered during an operation's execution. If this parameter is specified,
only the operations that have the matching errors are displayed.
[-job-id <integer>] - Identifier for the Job
This is the job id for the operation. If this parameter is specified, only the operation that has the matching job
id is displayed.
[-additional-info <text>] - Additional Info for Auto Heal
This is the completion status of the auto heal aggregates and auto heal root aggregates phases when processing
switchover with auto heal.
Related references
metrocluster check on page 239
metrocluster operation show on page 294
Description
The metrocluster vserver recover-from-partial-switchback command executes the steps necessary for a
Vserver to be in a healthy state after partial completion of the switchback.
Examples
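A minimal invocation sketch (any required parameters are omitted here; output omitted):
clusA::> metrocluster vserver recover-from-partial-switchback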
Related references
metrocluster vserver recover-from-partial-switchover on page 296
Description
The metrocluster vserver recover-from-partial-switchover command executes the steps necessary for a
Vserver to be in a healthy state after partial completion of the switchover.
Related references
metrocluster vserver recover-from-partial-switchback on page 296
Description
The metrocluster vserver resync command resynchronizes the Vserver with its partner Vserver.
Parameters
-cluster <Cluster name> - Cluster Name
Name of the cluster where the Vserver belongs.
-vserver <vserver> - Vserver
Name of the Vserver to be resynchronized.
Examples
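A representative invocation, using the parameters documented above (cluster and Vserver names are illustrative; output omitted):
clusA::> metrocluster vserver resync -cluster clusA -vserver vs1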
Related references
metrocluster vserver show on page 297
Description
The metrocluster vserver show command displays configuration information for all pairs of Vservers in MetroCluster.
Parameters
{ [-fields <fieldname>, ...]
The command output includes the specified field or fields.
| [-creation-time ] (privilege: advanced)
Shows the last configuration modification time on the Vserver.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Possible values for the configuration state of a Vserver pair are:
• healthy
• unhealthy
Examples
The following example shows the output of the command when partner Vservers are created:
Cluster: clusA
Partner Configuration
Vserver Vserver State
------------------- ---------------------- -----------------
clusA clusB healthy
vs1 vs1-mc healthy
Cluster: clusB
Partner Configuration
Vserver Vserver State
------------------- ---------------------- -----------------
clusB clusA healthy
3 entries were displayed.
The following example shows the output of the command when the partner Vserver creation is pending:
Cluster: clusA
Related references
metrocluster vserver resync on page 297
Network Commands
Manage physical and virtual network connections
The network commands enable you to manage the network interfaces in a cluster.
network ping
Ping
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The network ping command displays whether a remote address is reachable and responsive and, if specified, the number of
transmitted and received packets and their round-trip time. The command requires a source node or logical interface from
which the ping will be run, and a destination IP address. You can specify the source node by name, or a logical interface and its
Vserver.
Parameters
{ -node <nodename> - Node
Use this parameter to send the ping from the node you specify.
| -lif <lif-name>} - Logical Interface
Use this parameter to send the ping from the logical interface you specify.
-vserver <vserver> - Vserver
Use this parameter to send the ping from the Vserver where the intended logical interface resides. The default
value is the system Vserver for cluster administrators.
[-use-source-port {true|false}] - (DEPRECATED)-Use Source Port of Logical Interface (privilege:
advanced)
This parameter is only applicable when the -lif parameter is specified. When set to true, the ping packet will
be sent out via the port which is currently hosting the IP address of the logical interface. Otherwise, the ping
packet will be sent out via a port based on the routing table.
Note: The use-source-port parameter is deprecated and may be removed in a future release of Data
ONTAP.
Examples
This example shows a ping from node xena to the destination server 10.98.16.164 with the server responding that it is up
and running.
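A representative session might look like the following (the node name and address are taken from the example description, and the response line shown is typical):
cluster1::> network ping -node xena -destination 10.98.16.164
10.98.16.164 is alive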
network ping6
Ping an IPv6 address
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The network ping6 command uses the ICMPv6 protocol's mandatory ICMP6_ECHO_REQUEST datagram to elicit an
ICMP6_ECHO_REPLY from a host or gateway. ICMP6_ECHO_REQUEST datagrams ("pings") have an IPv6 header and an
ICMPv6 header, formatted as documented in RFC 2463.
Parameters
{ -node <nodename> - Node Name
Use this parameter to originate ping6 from the specified node.
Related references
network ping on page 299
network traceroute on page 303
network traceroute6 on page 304
network test-path
Test path performance between two nodes
Availability: This command is available to cluster administrators at the advanced privilege level.
Description
The network test-path command runs a performance test between two nodes. The command requires a source node, a
destination node, a destination cluster, and an application or session type. All tests are run using intracluster or intercluster LIFs,
depending on whether the test is between two nodes in the same cluster or between nodes in peered clusters.
The test itself is different from most bandwidth test tools. It creates a "session" consisting of TCP connections between all
possible paths between the nodes being tested. This is how internal Data ONTAP applications communicate between nodes.
This means the test is using multiple paths, and thus the bandwidth reported might exceed the capacity of a single 10 Gb path.
Parameters
-source-node {<nodename>|local} - Node Initiating Session
Use this parameter to specify the node that initiates the test. The source node must be a member of the
cluster in which the command is run.
-destination-cluster <Cluster name> - Cluster Containing Passive Node
Use this parameter to specify the destination cluster: either the local cluster or a peered cluster.
-destination-node <text> - Remote Node in Destination Cluster
Use this parameter to specify the destination node in the destination cluster.
-session-type {AsyncMirrorLocal|AsyncMirrorRemote|RemoteDataTransfer} - Type of Session to Test
The session type parameter is used to mimic the application settings used. A session consists of multiple TCP
connections.
• AsyncMirrorLocal: settings used by SnapMirror between nodes in the same cluster
• AsyncMirrorRemote: settings used by SnapMirror between nodes in different clusters
• RemoteDataTransfer: settings used by Data ONTAP for remote data access between nodes in the same
cluster
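A representative invocation at the advanced privilege level, using the parameters documented above (cluster and node names are illustrative; output omitted):
cluster1::*> network test-path -source-node node1 -destination-cluster cluster2 -destination-node node2 -session-type AsyncMirrorRemote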
Related references
network test-link on page 419
network traceroute
Traceroute
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The network traceroute command performs a network probe from a node to a specified IP address. The command requires
a source node or logical interface and a destination IP address. You can specify the source node by name, or specify a logical
interface and its Vserver. The traceroute is performed between the source and destination.
Parameters
{ -node <nodename> - Node
Use this parameter to originate the traceroute from the node you specify.
| -lif <lif-name>} - Logical Interface
Use this parameter to originate the traceroute from the specified network interface.
-vserver <vserver> - LIF Owner
Use this parameter to originate the traceroute from the Vserver where the intended logical interface resides.
The default value is the system Vserver for cluster administrators.
-destination <Remote InetAddress> - Destination
Use this parameter to specify the remote internet address destination of the traceroute.
[-maxttl | -m <integer>] - Maximum Number of Hops
Use this parameter to specify the maximum number of hops (time-to-live) setting used by outgoing probe
packets. The default is 30 hops.
[-numeric | -n [true]] - Print Hop Numerically
Use this parameter to print the hop addresses only numerically rather than symbolically and numerically.
[-port <integer>] - Base UDP Port Number
Use this parameter to specify the base UDP port number used in probes. The default is port 33434.
[-packet-size <integer>] - Packet Size
Use this parameter to specify the size of probe packets, in bytes.
Examples
This example shows a traceroute from node node1 to a destination address of 10.98.16.164, showing a maximum of five
hops.
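A representative invocation matching the description above (output omitted):
cluster1::> network traceroute -node node1 -destination 10.98.16.164 -maxttl 5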
network traceroute6
traceroute6
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The network traceroute6 command performs a network probe from a node to a specified IPv6 address. The command
requires a source node or logical interface, the Vserver from which traceroute6 will originate, and a destination IPv6 address.
traceroute6 is performed between the source and destination.
Parameters
{ -node <nodename> - Node
Use this parameter to originate traceroute6 from the node you specify. This parameter is available only to
cluster administrators.
| -lif <lif-name>} - Logical Interface
Use this parameter to originate traceroute6 from the logical interface you specify.
-vserver <vserver name> - LIF Owner
Use this parameter to originate traceroute6 from the Vserver you specify. The default value is the system
Vserver for cluster administrators.
[-debug-mode | -d [true]] - Debug Mode
Use this parameter to enable socket level debugging. The default value is false.
{ [-icmp6 | -I [true]] - ICMP6 ECHO instead of UDP
Use this parameter to specify the use of ICMP6 ECHO instead of UDP datagrams for the probes. The default
value is false.
| [-udp | -U [true]]} - UDP
Use this parameter to specify the use of UDP datagrams for the probes. The default value is true.
Examples
The following example shows traceroute6 from node node1 to the destination fd20:8b1e:b255:4071:d255:1fcd:a8cd:b9e8.
Related references
network traceroute on page 303
network ping6 on page 300
network ping on page 299
Description
The network arp create command creates a static ARP entry for a given Vserver. Statically created ARP entries will be
stored permanently in the Vserver context and will be used by the network stack.
Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the name of the Vserver on which the ARP entry is created.
-remotehost <IP Address> - Remote IP Address
Use this parameter to specify the IP address to be added as an ARP entry.
-mac <MAC Address> - MAC Address
Use this parameter to specify the MAC address (Ethernet address) for the host specified with -remotehost.
Specify the MAC address as six hex bytes separated by colons.
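The expected format can be checked with a simple pattern. The sketch below (Python; is_valid_mac is a hypothetical helper, not an ONTAP API) accepts six colon-separated hex bytes of one or two digits each, which also matches the single-digit form shown in the active-entry output elsewhere in this reference:

```python
import re

# Six hex bytes separated by colons, each byte written with one or
# two hex digits, e.g. 40:55:39:25:27:c1 or 0:55:39:27:d1:c1.
_MAC_RE = re.compile(r"^([0-9A-Fa-f]{1,2}:){5}[0-9A-Fa-f]{1,2}$")

def is_valid_mac(mac: str) -> bool:
    """Return True if mac is six colon-separated hex bytes."""
    return _MAC_RE.match(mac) is not None
```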
Examples
The following example creates a static ARP entry on Vserver vs1 for the remote host with the IP address 10.63.0.2 having
MAC address 40:55:39:25:27:c1
cluster1::> network arp create -vserver vs1 -remotehost 10.63.0.2 -mac 40:55:39:25:27:c1
Description
The network arp delete command deletes static ARP entries from the Vserver and from the network stack.
Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the name of the Vserver from which the ARP entry is deleted.
-remotehost <IP Address> - Remote IP Address
Use this parameter to specify the IP address of the ARP entry being deleted.
Examples
The following example deletes the ARP entry for IP address 10.63.0.2 from the Vserver vs1.
Description
The network arp show command displays static ARP entries present in a given Vserver. This command does not display
dynamically learned ARP entries in the network stack. Use the network arp active-entry show command to display
dynamically learned ARP entries in the network stack.
Parameters
{ [-fields <fieldname>, ...]
Use this parameter to display only certain fields of the ARP table.
| [-instance ]}
Use this parameter to display all the fields of the ARP table.
[-vserver <vserver name>] - Vserver Name
Use this parameter to display ARP entries that are specific to a given Vserver.
[-remotehost <IP Address>] - Remote IP Address
Use this parameter to display ARP entries for the specified IP address.
[-mac <MAC Address>] - MAC Address
Use this parameter to display the ARP entry for the specified MAC address.
[-ipspace <IPspace>] - IPspace
Use this parameter to specify the IPspace associated with the Vserver.
Examples
The following example displays static ARP entries from the Vserver vs1.
Related references
network arp active-entry show on page 308
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the node in which the ARP entry is deleted.
-vserver <vserver> - System or Admin Vserver Name
Use this parameter to specify the name of the Vserver in which the ARP entry is deleted. Only Vservers with a
type of Admin or System have dynamically learned ARP entries.
-subnet-group <IP Address/Mask> - Subnet Group Name
Use this parameter to specify the name of the routing group in which the ARP entry is deleted.
-remotehost <text> - Remote IP Address
Use this parameter to specify the IP address to be deleted from the active ARP entries.
-port <text> - Port
Use this parameter to specify the name of the Port to be deleted from the active ARP entries.
Examples
The following example deletes the active ARP entry with an IP address of 10.224.64.1, subnet group of 0.0.0.0/0, port e0c
on node node2 in the Admin Vserver cluster1:
Related references
network arp delete on page 306
Description
The network arp active-entry show command displays ARP entries present in the network stack of the node. The
entries include both dynamically learned ARP entries and user-configured static ARP entries.
Parameters
{ [-fields <fieldname>, ...]
Use this parameter to display only certain fields of the active ARP table.
| [-instance ]}
Use this parameter to display all the fields of the active ARP table.
[-node {<nodename>|local}] - Node
Use this parameter to display active ARP entries that are specific to a given node.
[-vserver <vserver>] - System or Admin Vserver Name
Use this parameter to display active ARP entries that are specific to a given System or Admin Vserver. Data
and Node Vservers will not have any active-arp entries.
Examples
The following example displays active ARP entries for the Admin Vserver cluster1:
Node: node-01
Vserver: cluster1
Subnet Group: 169.254.0.0/16
Remote IP Address MAC Address Port
----------------- ----------------- -------
169.254.106.95 0:55:39:27:d1:c1 lo
Description
The network bgp config create command is used to create the border gateway protocol (BGP) configuration for a node.
It can be used to override the BGP parameters defined in the global BGP defaults.
Parameters
-node {<nodename>|local} - Node
This parameter specifies the node on which configuration details will be created.
-asn <integer> - Autonomous System Number
This parameter specifies the autonomous system number (ASN). The ASN attribute is a non-negative 16-bit
integer. It should typically be chosen from RFC6996 "Autonomous System (AS) Reservation for Private Use"
or the AS number assigned to the operator's organization.
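As a quick illustration of those constraints, the sketch below (Python; classify_asn is a hypothetical helper, not an ONTAP API) classifies an ASN as invalid (does not fit in 16 bits), private-use (the RFC 6996 range 64512-65534), or public:

```python
# RFC 6996 reserves 64512-65534 for private use (range() excludes 65535).
PRIVATE_ASN = range(64512, 65535)

def classify_asn(asn: int) -> str:
    """Classify an autonomous system number as a 16-bit BGP ASN."""
    if not (0 <= asn <= 65535):
        return "invalid"        # does not fit in a non-negative 16-bit integer
    if asn in PRIVATE_ASN:
        return "private-use"    # RFC 6996 private-use range
    return "public"
```

Note that the global default ASN of 65501 (see network bgp defaults modify) falls inside the RFC 6996 private-use range.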
Examples
cluster1::> network bgp config create -node node1 -asn 10 -hold-time 180 -router-id 10.0.1.112
Description
The network bgp config delete command deletes a node's border gateway protocol (BGP) configuration. A BGP
configuration cannot be deleted if there are BGP peer groups configured on the associated node.
Parameters
-node {<nodename>|local} - Node
This parameter specifies the node for which the BGP configuration will be deleted.
Examples
Description
The network bgp config modify command is used to modify a node's border gateway protocol (BGP) configuration.
Parameters
-node {<nodename>|local} - Node
This parameter specifies the node on which BGP configuration will be modified.
[-asn <integer>] - Autonomous System Number
This parameter specifies the autonomous system number (ASN). The ASN attribute is a non-negative 16-bit
integer. It should typically be chosen from RFC6996 "Autonomous System (AS) Reservation for Private Use"
or the AS number assigned to the operator's organization.
[-hold-time <integer>] - Hold Time
This parameter specifies the hold time in seconds.
Examples
cluster1::> network bgp config modify -node node1 -router-id 1.1.1.1 -asn 20
Description
The network bgp config show command displays the border gateway protocol (BGP) configuration for each node.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter selects the BGP configurations that match the specified node.
[-asn <integer>] - Autonomous System Number
This parameter selects the BGP configurations that match the specified autonomous system number.
[-hold-time <integer>] - Hold Time
This parameter selects BGP configurations that match the specified hold time.
[-router-id <IP Address>] - Router ID
This parameter selects the BGP configurations that match the specified router ID.
Examples
Description
The network bgp defaults modify command modifies the global defaults for border gateway protocol (BGP)
configurations.
Parameters
[-asn <integer>] - Autonomous System Number
This parameter specifies the autonomous system number (ASN). The ASN attribute is a non-negative 16-bit
integer. It should typically be chosen from RFC6996 "Autonomous System (AS) Reservation for Private Use",
or the AS number assigned to the operator's organization. The default ASN is 65501.
[-hold-time <integer>] - Hold Time
This parameter specifies the hold time in seconds. The default value is 180.
Examples
Description
The network bgp defaults show command displays the global defaults for border gateway protocol (BGP) configurations.
Examples
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter selects the BGP status entries that match the specified node.
[-vserver <vserver name>] - Vserver
This parameter selects the BGP status entries for the specified Vserver.
[-ipv4-status {unknown|unconfigured|up|down}] - IPv4 status
This parameter selects the BGP status that matches the specified status for IPv4 address family.
[-ipv6-status {unknown|unconfigured|up|down}] - IPv6 status
This parameter selects the BGP status that matches the specified status for IPv6 address family.
Examples
Description
The network bgp peer-group create command is used to create a border gateway protocol (BGP) peer group. A BGP
peer group will advertise VIP routes for the list of Vservers in the peer group's vserver-list using the BGP LIF of the peer
group. A BGP peer group will advertise VIP routes to a peer router using the border gateway protocol. The address of the peer
router is identified by the peer-address value.
Parameters
-ipspace <IPspace> - IPspace Name
This parameter specifies the IPspace of the peer group being created.
-peer-group <text> - Peer Group Name
This parameter specifies the name of the peer group being created.
Examples
cluster1::> network bgp peer-group create -peer-group group1 -ipspace Default -bgp-lif bgp_lif -
peer-address 10.0.1.112
Description
The network bgp peer-group delete command is used to delete border gateway protocol (BGP) peer group
configuration.
Parameters
-ipspace <IPspace> - IPspace Name
This parameter specifies the IPspace of the BGP peer group being deleted.
-peer-group <text> - Peer Group Name
This parameter specifies the name of the BGP peer group being deleted.
Examples
Description
The network bgp peer-group modify command is used to modify a border gateway protocol (BGP) peer group
configuration.
Examples
cluster1::> network bgp peer-group modify -ipspace Default -peer-group peer1 -peer-address
10.10.10.10
Description
The network bgp peer-group rename command is used to assign a new name to a BGP peer group.
Parameters
-ipspace <IPspace> - IPspace Name
This parameter specifies the IPspace of the peer group being renamed.
-peer-group <text> - Peer Group Name
The name of the peer group to be updated.
-new-name <text> - New Name
The new name for the peer group.
Examples
Description
The network bgp peer-group show command displays the BGP peer groups configuration.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
Examples
Parameters
-route-table-id <text> - Route Table ID
This parameter is used to provide the name of the external routing table to be created.
Examples
The following example creates an external routing table "eni-123456":
Description
The network cloud routing-table delete command deletes an existing external routing table.
Parameters
-route-table-id <text> - Route Table ID
This parameter is used to provide the name of an existing external routing table to be deleted.
Examples
The following example deletes the external routing table "eni-123456":
Description
The network connections active show command displays information about active network connections.
Note: The results of this command set are refreshed independently every 30 seconds and might not reflect the immediate state
of the system.
Examples
The following example displays information about active network connections for the node named node0:
Description
The network connections active show-clients command displays information about client connections, including the
client's IP address and the number of client connections.
Note: The results of this command set are refreshed independently every 30 seconds and might not reflect the immediate state
of the system.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example displays information about active client connections:
Description
The network connections active show-lifs command displays the number of active connections on each logical
interface, organized by node and Vserver.
Note: The results of this command set are refreshed independently every 30 seconds and might not reflect the immediate state
of the system.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display information only about the connections on the node you specify.
[-vserver <vserver>] - Vserver
Use this parameter to display information only about the connections that are using the node or Vserver you
specify.
[-lif-name <lif-name>] - Logical Interface Name
Use this parameter to display information only about the connections that are using the logical interface you
specify.
Examples
The following example displays information about the servers and logical interfaces being used by all active connections:
Description
The network connections active show-protocols command displays the number of active connections per protocol,
organized by node.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display information only about the connections on the node you specify.
[-vserver <vserver>] - Vserver
This parameter is used by the system to break down the output per Vserver.
[-proto {UDP|TCP}] - Protocol
Use this parameter to display information only about the connections that use the network protocol you
specify. Possible values include tcp (TCP), udp (UDP), and NA (not applicable).
[-count <integer>] - Client Count
Use this parameter to display only protocols with the number of active client connections you specify.
Examples
The following example displays information about all network protocols being used by active connections:
Description
The network connections active show-services command displays the number of active connections by protocol
service, organized by node.
Note: The results of this command set are refreshed independently every 30 seconds and might not reflect the immediate state
of the system.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example displays information about all protocol services being used by active connections:
Description
The network connections listening show command displays information about network connections that are in an
open and listening state.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the listening connections that match this parameter value.
Examples
The following example displays information about all listening network connections:
The following example displays detailed information about listening network connections for the node named node0:
Description
The network device-discovery show command displays information about discovered devices. This information may be useful
in determining the network topology or investigating connectivity issues. By default, the command displays the following
information:
• Local interface
• Discovered device
• Discovered interface
• Discovered platform
Parameters
{ [-fields <fieldname>, ...]
Include the specified field or fields in the command output. Use '-fields ?' to display the valid fields.
| [-instance ]}
Use this parameter to display detailed information about all fields.
[-node <nodename>] - Node
Displays the discovery ports that match the node name.
[-protocol {cdp|lldp}] - Protocol
Displays the devices that are discovered by the given protocol.
• "router" - Router
• "switch" - Switch
• "host" - Host
• "igmp" - IGMP
• "repeater" - Repeater
• "phone" - Phone
Examples
Description
Modifies the FCP target adapter information.
The adapter argument is in the form Xy or Xy_z where X and z are integers and y is a letter. An example is 4a or 4a_1.
You cannot bring an adapter offline until all logical interfaces connected to that adapter are offline. Use the network
interface modify command to take your logical interfaces offline.
The speed option sets the Fibre Channel link speed of an adapter. You can set adapters that support:
• 10Gb/s to 10 or auto
• 8Gb/s to 2, 4, 8 or auto
• 4Gb/s to 2, 4 or auto
• 2Gb/s to 2 or auto
By default, the link speed option is set to auto for auto negotiation. Setting the link speed to a specific value disables auto
negotiation. Under certain conditions, a speed mismatch can prevent the adapter from coming online.
Note: The system reports the actual link speed with the "Data Link Rate (Gbit)" field in the output of network fcp
adapter show -instance.
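The allowed -speed settings therefore depend on the adapter's maximum link speed; the list above can be summarized as a small lookup table (Python, illustrative only, not an ONTAP API):

```python
# Valid -speed values per adapter maximum link speed (Gb/s), as listed
# above. "auto" enables auto negotiation; a fixed value disables it.
VALID_SPEEDS = {
    10: {"10", "auto"},
    8:  {"2", "4", "8", "auto"},
    4:  {"2", "4", "auto"},
    2:  {"2", "auto"},
}

def is_valid_speed(max_speed: int, setting: str) -> bool:
    """Return True if the -speed setting is valid for an adapter
    with the given maximum link speed."""
    return setting in VALID_SPEEDS.get(max_speed, set())
```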
Parameters
-node {<nodename>|local} - Node
Specifies the node of the target adapter.
Examples
Related references
network fcp adapter show on page 328
network interface modify on page 340
Description
Displays FCP target adapter information. You can also use this information to determine if adapters are active and online.
The adapter argument is in the form Xy or Xy_z where X and z are integers and y is a letter. An example is 4a or 4a_1.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays information only about the FCP target adapters that are
present on the specified node.
[-adapter <text>] - Adapter
If this parameter is specified, the command displays information only about the FCP target adapters that match
the specified name.
[-description <text>] - Description
If this parameter is specified, the command displays information only about the FCP target adapters that match
the specified description.
[-physical-protocol {fibre-channel|ethernet}] - Physical Protocol
If this parameter is specified, the command displays information only about the FCP target adapters that match
the specified physical protocol. Possible values are fibre-channel and ethernet.
Examples
The example above displays information regarding FCP adapters within cluster1.
Node: sti6280-021
Adapter: 0a
Description: Fibre Channel Target Adapter 0a (QLogic 2532 (2562), rev. 2, 8G)
Physical Protocol: fibre-channel
Maximum Speed: 8
Administrative Status: up
Operational Status: online
Extended Status: ADAPTER UP
Host Port Address: 30012c
Firmware Revision: 5.8.0
Data Link Rate (Gbit): 4
Fabric Established: true
Fabric Name: 20:14:54:7f:ee:54:b9:01
Connection Established: ptp
Is Connection Established: true
Mediatype: ptp
Configured Speed: auto
Adapter WWNN: 50:0a:09:80:8f:7f:8b:1c
Adapter WWPN: 50:0a:09:81:8f:7f:8b:1c
Switch Port: RTP-AG01-410B51:1/41
Form Factor Of Transceiver: SFP
Vendor Name Of Transceiver: OPNEXT,INC
Part Number Of Transceiver: TRS2000EN-SC01
Revision Of Transceiver: 0000
Serial Number Of Transceiver: T10H64793
FC Capabilities Of Transceiver: 10 (Gbit/sec)
Vendor OUI Of Transceiver: 0:11:64
Wavelength In Nanometers: 850
Date Code Of Transceiver: 10:08:17
Validity Of Transceiver: true
Connector Used: LC
Encoding Used: 64B66B
Is Internally Calibrated: true
Received Optical Power: 441.3 (uWatts)
Is Received Power In Range: true
SFP Transmitted Optical Power: 600.4 (uWatts)
Is Xmit Power In Range: true
Description
Display FCP topology interconnect elements per adapter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to select the interconnect elements for adapters that are located on the node that you
specify.
[-adapter <text>] - Adapter
Use this parameter to select the interconnect elements for the specified adapter.
[-domain-id <integer>] - Domain Id
Use this parameter to select the interconnect elements with the specified domain identifier.
[-port-wwpn <text>] - Port WWPN
Use this parameter to select the interconnect elements with the specified port world wide name.
[-switch-name <text>] - Switch Name
Use this parameter to select the interconnect elements with the specified switch.
[-switch-vendor <text>] - Switch Vendor
Use this parameter to select the interconnect elements with the specified vendor.
[-switch-release <text>] - Switch Release
Use this parameter to select the interconnect elements with the specified release.
[-switch-wwn <text>] - Switch WWN
Use this parameter to select the interconnect elements with the specified world wide name.
[-port-count <integer>] - Port Count
Use this parameter to select the interconnect elements with the specified port count.
[-port-slot <text>] - Port Slot
Use this parameter to select the interconnect elements with the specified port slot.
[-port-state {Unknown|Online|Offline|Testing|Fault}] - Port State
Use this parameter to select the interconnect elements with the specified port state.
[-port-type {None|N-Port|NL-Port|FNL-Port|NX-Port|F-Port|FL-Port|E-Port|B-Port|TNP-Port|TF-
Port|NV-Port|FV-Port|SD-Port|TE-Port|TL-Port}] - Port Type
Use this parameter to select the interconnect elements with the specified port type.
Examples
Description
Displays the active zone set information.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
Description
The network interface create command creates a logical interface (LIF).
Note: A logical interface is an IP address associated with a physical network port. For logical interfaces using NAS data
protocols, the interface can fail over or be migrated to a different physical port in the event of component failures, thereby
continuing to provide network access despite the component failure. Logical interfaces using SAN data protocols do not
support migration or failover.
Note: On some cloud platforms, this operation might perform changes to the external route tables.
Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver on which the LIF is created.
-lif <lif-name> - Logical Interface Name
Use this parameter to specify the name of the LIF that is created. For iSCSI and FC LIFs, the name cannot be
more than 254 characters.
[-service-policy <text>] - Service Policy
Use this parameter to specify a service policy for the LIF. If no policy is specified, a default policy will be
assigned automatically. Use the network interface service-policy show command to review
available service policies.
-role {cluster|data|node-mgmt|intercluster|cluster-mgmt} - (DEPRECATED)-Role
Note: This parameter has been deprecated and may be removed in a future version of ONTAP. Use the -
service-policy parameter instead.
Use this parameter to specify the role of the LIF. LIFs can have one of five roles:
• Data LIFs, which provide data access to NAS and SAN clients
LIFs with the cluster-management role behave as LIFs with the node-management role except that cluster-
management LIFs can fail over between nodes.
[-data-protocol {nfs|cifs|iscsi|fcp|fcache|none|fc-nvme}, ...] - Data Protocol
Use this parameter to specify the list of data protocols that can be configured on the LIF. The supported
protocols are NFS, CIFS, iSCSI, FCP, and FC-NVMe. NFS and CIFS are available by default when you create
a LIF. If you specify "none", the LIF does not support any data protocols. Also, none, iscsi, fcp or fc-nvme
cannot be combined with any other protocols.
• system-defined - The system determines appropriate failover targets for the LIF. The default behavior is
that failover targets are chosen from the LIF's current hosting node and also from one other non-partner
node when possible.
• local-only - The LIF fails over to a port on the local or home node of the LIF.
• sfo-partner-only - The LIF fails over to a port on the home node or SFO partner only.
• broadcast-domain-wide - The LIF fails over to a port in the same broadcast domain as the home port.
The failover policy for cluster logical interfaces is local-only and cannot be changed. The default failover
policy for data logical interfaces is system-defined. This value can be changed.
Note: Logical interfaces for SAN protocols do not support failover. Thus, such interfaces will always show
this parameter as disabled.
Note: Logical interfaces for SAN traffic do not support auto-revert. Thus, this parameter is always false on
such interfaces.
Examples
The following example creates an IPv4 LIF named datalif1 and an IPv6 LIF named datalif2 on a Vserver named vs0.
Their home node is node0 and home port is e0c. The failover policy broadcast-domain-wide is assigned to both LIFs.
The firewall policy is data and the LIFs are automatically reverted to their home node at startup and under other
circumstances. The datalif1 has the IP address 192.0.2.130 and netmask 255.255.255.128, and datalif2 has the IP address
3ffe:1::aaaa and netmask length of 64.
cluster1::> network interface create -vserver vs0 -lif datalif1 -role data -home-node node0 -home-
port e0c -address 192.0.2.130 -netmask 255.255.255.128 -failover-policy broadcast-domain-wide -
firewall-policy data -auto-revert true
cluster1::> network interface create -vserver vs0 -lif datalif2 -role data -home-node node0 -home-
port e0c -address 3ffe:1::aaaa -netmask-length 64 -failover-policy broadcast-domain-wide -firewall-
policy data -auto-revert true
Related references
network interface service-policy show on page 366
network interface revert on page 343
system services firewall policy show on page 1393
system services firewall policy modify on page 1393
network interface failover-groups on page 352
Description
The network interface delete command deletes a logical interface (LIF) from a Vserver. Only administratively down
LIFs can be deleted. To make a LIF administratively down, use the network interface modify command to set the "status-
admin" parameter to "down".
Note: If the LIF is configured for a SAN protocol and is part of a port set, the LIF must be removed from the port set before it
can be deleted. To determine if a LIF is in a port set, use the lun portset show command. To remove the LIF from the
port set, use the lun portset remove command.
Note: On some cloud platforms, this operation might perform changes to the external route tables.
Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver on which the logical interface to be deleted is located.
Examples
The following example deletes a logical interface named cluslif3 that is located on a Vserver named vs0.
Related references
network interface modify on page 340
lun portset show on page 226
lun portset remove on page 225
Description
The network interface migrate command migrates a logical interface to a port or interface group on the node you
specify.
Note: Manual migration of a logical interface can take up to 15 seconds to complete. Also, when you migrate a cluster logical
interface, you must do so from the local node. Logical interface migration is a best-effort command, and can only be
completed if the destination node and port are operational.
Note: Logical interfaces for SAN protocols do not support migration. Attempts to do so will result in an error.
Note: On some cloud platforms, this operation might perform changes to the external route tables.
Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver that owns the logical interface that is to be migrated.
-lif <lif-name> - Logical Interface Name
Use this parameter to specify the logical interface that is to be migrated.
-destination-node <nodename> - Destination Node
Use this parameter to specify the node to which the logical interface is to be migrated.
[-destination-port {<netport>|<ifgrp>}] - Destination Port
Use this parameter to specify the port or interface group to which the logical interface is to be migrated.
[-force [true]] - Force Migrate Data LIF Flag (privilege: advanced)
Use this parameter to force the migration operation.
Examples
The following example migrates a logical interface named datalif1 on a Vserver named vs0 to port e0c on a node named
node2:
cluster1::> network interface migrate -vserver vs0 -lif datalif1 -dest-node node2 -dest-port e0c
Description
The network interface migrate-all command migrates all data logical interfaces from the node you specify.
Note: Manual migration of a logical interface can take up to 15 seconds to complete. Logical interface migration is a best-
effort command and can only be completed if the destination node and port are operational. Logical interface migration
requires that the logical interface be pre-configured with valid failover rules to facilitate failover to a remote node.
Note: Logical interfaces for SAN protocols do not support migration. Attempts to do so will result in an error.
Note: On some cloud platforms, this operation might perform changes to the external route tables.
Parameters
-node <nodename> - Node
Use this parameter to specify the node from which all logical interfaces are migrated. Each data logical
interface is migrated to another node in the cluster, assuming that the logical interface is configured with
failover rules that specify an operational node and port.
[-port {<netport>|<ifgrp>}] - Port
Use this parameter to specify the port from which all logical interfaces are migrated. This option cannot be
used with asynchronous migrations. If this parameter is not specified, then logical interfaces will be migrated
away from all ports on the specified node.
Examples
The following example migrates all data logical interfaces from the current (local) node.
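Based on the parameters documented above, the command takes this form when run against the local node:
cluster1::> network interface migrate-all -node local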
Description
The network interface modify command modifies attributes of a logical interface (LIF).
Note: You cannot modify some properties of an iSCSI or FCP LIF, such as -home-node or -home-port, if the LIF is in a
port set. To modify these properties, first remove the LIF from the port set. To determine if a LIF is in a port set, use the lun
portset show command. To remove the LIF from the port set, use the lun portset remove command.
Note: On some cloud platforms, this operation might perform changes to the external route tables.
Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver on which the LIF to be modified is located.
-lif <lif-name> - Logical Interface Name
Use this parameter to specify the name of the LIF that is to be modified.
[-failover-policy <failover-policy>] - Failover Policy
Use this parameter to specify the failover policy for the LIF:
• system-defined - The system determines appropriate failover targets for the LIF. The default behavior is
that failover targets are chosen from the LIF's current hosting node and also from one other non-partner
node when possible.
• local-only - The LIF fails over to a port on the local or home node of the LIF.
• sfo-partner-only - The LIF fails over to a port on the home node or SFO partner only.
• broadcast-domain-wide - The LIF fails over to a port in the same broadcast domain as the home port.
Note: The failover policy for cluster logical interfaces is local-only and cannot be changed. The default
failover policy for data logical interfaces is system-defined. This value can be changed.
Note: Logical interfaces for SAN protocols do not support failover. Thus, such interfaces always show this
parameter as disabled.
Examples
The following example modifies a LIF named datalif1 on a logical server named vs0. The LIF's netmask is modified to
255.255.255.128.
cluster1::> network interface modify -vserver vs0 -lif datalif1 -netmask 255.255.255.128
Related references
network interface revert on page 343
system services firewall policy show on page 1393
system services firewall policy modify on page 1393
network interface failover-groups on page 352
lun portset show on page 226
lun portset remove on page 225
Description
Use the network interface rename command to change the name of an existing logical interface.
Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver on which the logical interface to rename is located.
-lif <lif-name> - Logical Interface Name
Use this parameter to specify the name of the logical interface to rename.
-newname <text> - The new name for the interface
Use this parameter to specify the new name of the logical interface. For iSCSI and FC LIFs, the name cannot
be more than 254 characters.
Examples
The following example renames a cluster logical interface named cluslif1 to cluslif4 on a Vserver named vs0.
cluster1::> network interface rename -vserver vs0 -lif cluslif1 -newname cluslif4
Note: On some cloud platforms, this operation might perform changes to the external route tables.
Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver on which the logical interface to be reverted is located.
-lif <lif-name> - Logical Interface Name
Use this parameter to specify the logical interface that is to be reverted.
Note: Logical interfaces for SAN protocols are always home. Thus, this command has no effect on such
interfaces. The same applies to logical interfaces for NAS protocols that are already home.
Examples
The following example returns any logical interfaces that are not currently on their home ports to their home ports.
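Based on the parameters above, a command of this form reverts the logical interfaces of a Vserver (the wildcard value is illustrative):
cluster1::> network interface revert -vserver vs0 -lif *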
Related references
network interface show on page 344
Description
The network interface show command displays information about logical interfaces.
Running the command with the -failover parameter displays information relevant to logical interface failover rules.
Running the command with the -status parameter displays information relevant to logical interface operational status.
Running the command with the -by-ipspace parameter displays information relevant to logical interfaces on a specific
IPspace.
See the examples for more information.
You can specify additional parameters to display only information that matches those parameters. For example, to display
information only about logical interfaces whose operational status is down, run the command with the -status-oper down
parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that you
specify.
| [-by-ipspace ]
Use this parameter to display logical interfaces sorted by IPspace and Vserver.
Examples
The following example displays general information about all logical interfaces.
The following example displays failover information about all logical interfaces.
Description
The network interface start-cluster-check command initiates an accessibility check from every logical interface to
every aggregate. Automatic checks run periodically, but this command manually initiates a check immediately.
This command produces no direct output. Any errors encountered during the check are reported in the event log. See the event
log show command for more information.
Examples
This example shows an execution of this command, with all parameters and output.
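The command takes no required parameters and produces no direct output:
cluster1::> network interface start-cluster-check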
Related references
event log show on page 116
Description
The network interface capacity show command displays the number of IP LIFs of role data supported on the cluster,
as well as the number of IP LIFs of role data currently configured on the cluster.
Note: The number of IP LIFs of role data that are supported on a node depends on the hardware platform and the Cluster's
Data ONTAP version. If one or more nodes in the cluster cannot support additional LIFs, then none of the nodes in the cluster
can support additional LIFs.
Examples
The following displays the IP data LIF capacity.
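The command takes no parameters; the capacity counts in the output are omitted here:
cluster1::> network interface capacity show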
Description
The network interface capacity details show command displays the number of IP LIFs of role data that can be
configured on each node, the number of IP LIFs of role data that are supported on each node, and the number of IP LIFs of
role data that are configured to be homed on each node.
Note: The number of IP LIFs of role data that are supported on a node depends on the hardware platform and the Cluster's
Data ONTAP version. If one or more nodes in the cluster cannot support additional LIFs, then none of the nodes in the cluster
can support additional LIFs.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
Use this parameter to specify the node for which to obtain data LIF capacity.
Examples
The following displays the IP data LIF capacity.
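Based on the parameters above, the command takes this form, optionally limited to one node (the node name is illustrative):
cluster1::> network interface capacity details show -node node1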
Related references
cluster image show on page 36
Description
This command identifies logical interfaces (LIFs) at risk of becoming inaccessible if their hosting nodes were to experience an
outage. The source-nodes parameter is the only required input.
The tuple <destination-nodes, vserver-name, lif-name> is sufficient to uniquely identify a record in the returned listing. All
fields other than source-nodes can be filtered on in the usual fashion. There are some examples of this filtering below.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example shows all the at-risk LIFs for a specific two-node outage in a six-node cluster.
The following example shows the same two-node outage scenario, but now with some filtering applied to the results.
Description
The network interface failover-groups add-targets command enables you to add a list of failover targets such as
network ports, interface groups, or VLANs to an existing logical interface failover group.
Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the name of the Vservers from which this failover group is accessible.
-failover-group <text> - Failover Group Name
Use this parameter to specify the failover group that you want to extend.
-targets <<node>:<port>>, ... - Failover Targets
Use this parameter to specify the failover targets such as network ports, interface groups, or VLANs you wish
to add to the failover group.
Examples
This example shows the failover group "clyde" being extended to include additional failover targets.
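Based on the parameters above, the extension corresponds to a command of this form (the node and port names are illustrative):
cluster1::> network interface failover-groups add-targets -vserver vs1 -failover-group clyde -targets xena1:e0c,xena1:e0d-100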
Description
The network interface failover-groups create command creates a grouping of failover targets for logical interfaces
on one or more nodes. Use this command to add a new network port or interface group to an existing failover group.
Note: Interfaces for SAN protocols do not support failover. Such interfaces are not valid failover targets.
Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the name of the Vservers from which this failover group is accessible.
-failover-group <text> - Failover Group Name
Use this parameter to specify the name of the logical interface failover group that you want to create.
Examples
The following example shows how to create a failover group named failover-group_2 containing ports e1e and e2e on
node Xena.
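Assuming a -targets parameter as used by the related add-targets command, the example corresponds to a command of this form:
cluster1::> network interface failover-groups create -vserver vs1 -failover-group failover-group_2 -targets xena:e1e,xena:e2e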
Description
The network interface failover-groups delete command deletes a logical interface failover group.
Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the name of the Vservers from which this failover group is accessible.
-failover-group <text> - Failover Group Name
Use this parameter to specify the name of the logical interface failover group to be deleted.
Examples
The following example shows how to delete a failover group named failover-group_2.
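Based on the parameters above, the command takes this form (the Vserver name is illustrative):
cluster1::> network interface failover-groups delete -vserver vs1 -failover-group failover-group_2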
Description
The network interface failover-groups modify command enables you to modify the list of network ports, interface
groups, or VLANs belonging to an existing logical interface failover group. The specified list will overwrite the existing list of
network ports, interface groups, and VLANs currently belonging to the logical interface failover group.
Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the name of the Vserver(s) from which this failover group is accessible.
-failover-group <text> - Failover Group Name
Use this parameter to specify the failover group that you want to modify.
[-targets <<node>:<port>>, ...] - Failover Targets
Use this parameter to specify the network ports, interface groups, or VLANs you wish to now belong to the
failover group.
Examples
The following example modifies the failover group "clyde" on Vserver vs1 to contain the listed failover targets.
cluster1::> network interface failover-groups modify -vserver vs1 -failover-group clyde -targets xena1:e0c, xena1:e0d-100, xena2:a0a
Description
The network interface failover-groups remove-targets command enables you to specify a list of failover targets
such as network ports, interface groups, or VLANs to be removed from an existing logical interface failover group.
Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the name of the Vserver(s) from which this failover group is accessible.
-failover-group <text> - Failover Group Name
Use this parameter to specify the failover group that you want to remove failover targets from.
-targets <<node>:<port>>, ... - Failover Targets
Use this parameter to specify the failover targets such as network ports, interface groups, or VLANs you wish
to remove from the failover group.
Examples
This example shows the failover targets xena1:e0c and xena1:e0d-100 being removed from the failover group "clyde".
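Based on the parameters above, the removal corresponds to a command of this form:
cluster1::> network interface failover-groups remove-targets -vserver vs1 -failover-group clyde -targets xena1:e0c,xena1:e0d-100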
Description
The network interface failover-groups rename command enables you to rename an existing logical interface failover
group.
Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the name of the Vservers from which this failover group is accessible.
-failover-group <text> - Failover Group Name
Use this parameter to specify the failover group that you want to rename.
-new-failover-group-name <text> - New name
Use this parameter to specify the new name of the failover group.
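Examples
The following shows a rename of this form (the Vserver and group names are illustrative):
cluster1::> network interface failover-groups rename -vserver vs1 -failover-group clyde -new-failover-group-name bonnie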
Description
The network interface failover-groups show command displays information about logical interface failover groups.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver>] - Vserver Name
Use this parameter to display information only about the logical interface failover groups that have the target
Vserver you specify.
[-failover-group <text>] - Failover Group Name
Use this parameter to display information only about the logical interface failover groups you specify.
[-targets <<node>:<port>>, ...] - Failover Targets
Use this parameter to display information only about the logical interface failover groups that have the failover
target (physical port, interface group, or VLAN) you specify.
[-broadcast-domain <Broadcast Domain>] - Broadcast Domain
Use this parameter to display information only about the logical interface failover groups that have the
broadcast domain you specify.
Examples
The following example displays information about all logical interface failover groups on a two-node cluster.
Description
The network interface dns-lb-stats show command displays the statistics for DNS load-balancing lookups for the
zones belonging to the specified Vserver. These statistics represent the data for the Vserver on the local node. The following
counts can be seen in the statistics output:
• success-count : Number of successful lookups.
• authoritative-count : Number of authoritative answers sent.
• nonauthoritative-count : Number of non-authoritative answers sent.
• rr-set-missing-count : Number of times the RR set was missing.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver>] - Vserver
Use this parameter to display DNS load-balancer statistics only for the specified Vservers.
[-zone <text>] - DNS Zone
Use this parameter to display DNS load-balancer statistics only for the specified DNS zones.
[-success-count <integer>] - Successful Lookup Count
Use this parameter to display DNS load-balancer statistics only for the specified number of successful
lookups.
[-authoritative-count <integer>] - Authoritative Answer Count
Use this parameter to display DNS load-balancer statistics only for the specified number of authoritative
answers sent.
[-nonauthoritative-count <integer>] - Non Authoritative Answer Count
Use this parameter to display DNS load-balancer statistics only for the specified number of non-authoritative
answers sent.
[-rr-set-missing-count <integer>] - RR Set Missing Count
Use this parameter to display DNS load-balancer statistics only for the specified number of times the RR set
was missing.
Examples
The following example displays stats for the zone "x.com".
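Based on the parameters above, the command takes this form (the statistics output is omitted):
cluster1::> network interface dns-lb-stats show -zone x.com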
Description
The network interface lif-weights show command displays the weights assigned to each LIF in a DNS load-balancing
zone in a Vserver.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver>] - Vserver
Use this parameter to display information only for the specified Vservers.
[-zone <text>] - DNS Zone
Use this parameter to display information only for the specified DNS zones.
[-address <IP Address>] - Network Address
Use this parameter to display information only for the specified IP addresses.
[-weight <double>] - Load Balancer Weight
Use this parameter to display information only for the specified load balancer weights
Examples
The following example displays LIF weights for vserver "vs1".
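Based on the parameters above, the command takes this form (the weights output is omitted):
cluster1::> network interface lif-weights show -vserver vs1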
Description
The network interface service-policy add-service command adds an additional service to an existing service-
policy. When an allowed address list is specified, the list applies to only the service being added. Existing services included in
this policy are not impacted.
Parameters
-vserver <vserver name> - Vserver
Use this parameter to specify the name of the Vserver of the service policy to be updated.
-policy <text> - Policy Name
Use this parameter to specify the name of service policy to be updated.
-service <LIF Service Name> - Service entry to be added
Use this parameter to specify the name of service to be added to the existing service policy.
[-allowed-addresses <IP Address/Mask>, ...] - Allowed address ranges for the service
Use this parameter to specify a list of subnet masks for addresses that are allowed to access this service. Use
the value 0.0.0.0/0 to represent the wildcard IPv4 address and ::/0 to represent the wildcard IPv6 address.
Examples
The following example shows the addition of a service to an existing service policy.
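Based on the parameters above, the addition corresponds to a command of this form (the policy, service, and address values are illustrative):
cluster1::> network interface service-policy add-service -vserver cluster1 -policy custom -service management-ssh -allowed-addresses 0.0.0.0/0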
Description
The network interface service-policy clone command creates a new service policy that includes the same services
and allowed addresses as an existing policy. Once the new service policy has been created, it can be modified as necessary
without impacting the original policy.
Parameters
-vserver <vserver name> - Vserver
Use this parameter to specify the name of the Vserver of the service policy to be cloned.
-policy <text> - Policy Name
Use this parameter to specify the name of the service policy to be cloned.
-target-vserver <vserver name> - Vserver Name
Use this parameter to specify the name of the vserver on which the new service policy should be created.
-target-policy <text> - New Service Policy Name
Use this parameter to specify the name of the new service policy.
Examples
The following example shows the cloning of a service policy.
ipspace1
default-intercluster intercluster-core: 0.0.0.0/0
management-https: 0.0.0.0/0
cluster1::> network interface service-policy clone -vserver cluster1 -policy custom1 -target-vserver ipspace1 -target-policy custom2
ipspace1
custom2 intercluster-core: 0.0.0.0/0
management-core: 0.0.0.0/0
management-ssh: 0.0.0.0/0
Description
The network interface service-policy create command creates a new service policy with a list of included services.
LIFs can reference this policy to control the list of services that they are able to transport on their network. Services can
represent applications accessed by a LIF as well as applications served by this cluster.
Parameters
-vserver <vserver name> - Vserver
Use this parameter to specify the name of the Vserver on which the service policy will be created.
-policy <text> - Policy Name
Use this parameter to specify the name of service policy to be created.
[-services <LIF Service Name>, ...] - Included Services
Use this parameter to specify a list of services that should be included in this policy.
Examples
The following example shows the creation of a service policy with no initial services.
empty -
The following example shows the creation of a new service policy with a specified service list.
cluster1::> network interface service-policy create -vserver cluster1 -policy custom -services
intercluster-core,management-ssh
empty -
Description
The network interface service-policy delete command deletes an existing service policy.
Examples
The following example shows the deletion of a service policy.
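Assuming -vserver and -policy parameters as in the other service-policy commands, the deletion corresponds to a command of this form:
cluster1::> network interface service-policy delete -vserver cluster1 -policy custom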
Description
The network interface service-policy modify-service command modifies the policy-specific attributes of a
service that is already included in a particular service policy. Other services in the policy are not impacted by the change.
Parameters
-vserver <vserver name> - Vserver
Use this parameter to specify the name of the Vserver of the service policy to be updated.
Examples
The following example shows the modification of a service on an existing service policy.
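Assuming -service and -allowed-addresses parameters as in the add-service command, the modification corresponds to a command of this form (values are illustrative):
cluster1::> network interface service-policy modify-service -vserver cluster1 -policy custom -service management-ssh -allowed-addresses 10.1.0.0/16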
Description
The network interface service-policy remove-service command removes an individual service from an existing
service policy. Other services in the policy are not impacted by the change.
Parameters
-vserver <vserver name> - Vserver
Use this parameter to specify the name of the Vserver of the service policy to be updated.
Examples
The following example shows the removal of a service from an existing service policy.
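Assuming a -service parameter as in the add-service command, the removal corresponds to a command of this form:
cluster1::> network interface service-policy remove-service -vserver cluster1 -policy custom -service management-ssh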
Description
The network interface service-policy rename command assigns a new name to an existing service policy without
disrupting the LIFs using the policy.
Parameters
-vserver <vserver name> - Vserver
Use this parameter to specify the name of the Vserver of the service policy to be renamed.
-policy <text> - Policy Name
Use this parameter to specify the name of the service policy to be renamed.
-new-name <text> - New Service Policy Name
Use this parameter to specify the new name for the service policy.
Examples
The following example renames the service policy "custom" on Vserver cluster1 to "system".
cluster1::> network interface service-policy rename -vserver cluster1 -policy custom -new-name system
Description
The network interface service-policy restore-defaults command restores a built-in service-policy to its original
state. The default list of services replaces any customizations that have been applied by an administrator. All included services
will be updated to use the default allowed address list.
Parameters
-vserver <vserver name> - Vserver
Use this parameter to specify the name of the Vserver of the service policy to be restored.
-policy <text> - Policy Name
Use this parameter to specify the name of the service policy to be restored.
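Examples
The following shows a restore of a built-in policy (the policy name is illustrative):
cluster1::> network interface service-policy restore-defaults -vserver cluster1 -policy default-management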
Description
The network interface service-policy show command displays existing service policies.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that you
specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver
Selects service policies that match the specified vserver name.
[-policy <text>] - Policy Name
Selects service policies that match the specified service policy name.
Examples
The following example displays the built-in service policies.
Description
The network interface service show command displays available services for IP LIFs and the TCP or UDP ports that
each service listens on. The ports listed correspond to the well-known ports on which each service can be expected to open a
listening socket. Services that do not listen for ingress connections are presented with an empty port list.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that you
specify.
| [-restrictions ] (privilege: advanced)
If you specify the -restrictions parameter, the command displays available services for IP
LIFs and usage restrictions for each service. The restrictions determine which LIFs are permitted to use each
service and what restrictions the service implies for the LIFs that do use it.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-service <LIF Service Name>] - Service Name
Selects services that match the specified service name.
Examples
The following example displays the built-in services.
Description
IPspaces are distinct IP address spaces in which Storage Virtual Machines (SVMs) reside. The "Cluster" IPspace and "Default"
IPspace are created by default. You can create more custom IPspaces when you need your SVMs to have overlapping IP
addresses, or when you need more control over networking configurations for cluster peering. See the "Network
Management Guide" for the limit on how many custom IPspaces are supported on your system.
Parameters
-ipspace <IPspace> - IPspace name
The name of the IPspace to be created.
• The name must contain only the following characters: A-Z, a-z, 0-9, ".", "-" or "_".
• The first character of each label, delimited by ".", must be one of the following characters: A-Z or a-z.
• The last character of each label, delimited by ".", must be one of the following characters: A-Z, a-z or 0-9.
• The system reserves the following names: "all", "local" and "localhost".
Examples
The following example creates IPspace "ips1".
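Based on the parameters above, the command takes this form:
cluster1::> network ipspace create -ipspace ips1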
Description
Delete an IPspace that contains no ports or Vservers.
Parameters
-ipspace <IPspace> - IPspace name
The name of the IPspace to be deleted. If the IPspace is associated with one or more logical interfaces, you
must delete them before you can delete the IPspace.
Examples
The following example deletes the IPspace "ips1".
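Based on the parameters above, the command takes this form:
cluster1::> network ipspace delete -ipspace ips1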
Description
Rename an IPspace.
Parameters
-ipspace <IPspace> - IPspace name
The name of the IPspace to be renamed.
-new-name <IPspace> - New Name
The new name for the IPspace.
• The name must contain only the following characters: A-Z, a-z, 0-9, ".", "-" or "_".
• The first character of each label, delimited by ".", must be one of the following characters: A-Z or a-z.
• The last character of each label, delimited by ".", must be one of the following characters: A-Z, a-z or 0-9.
• The system reserves the following names: "all", "cluster", "local" and "localhost".
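Examples
The following shows a rename of this form (the names are illustrative):
cluster1::> network ipspace rename -ipspace ips1 -new-name ips2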
Description
Display network IPspaces.
Parameters
{ [-fields <fieldname>, ...]
Specify the fields to be displayed for each IPspace.
| [-instance ]}
Display all parameters of the IPspace objects.
[-ipspace <IPspace>] - IPspace name
Display the names of the IPspaces.
[-ports <<node>:<port>>, ...] - Ports
The list of network ports assigned to each IPspace.
[-broadcast-domains <Broadcast Domain>, ...] - Broadcast Domains
The list of broadcast domains that belong to the IPspace.
[-vservers <vserver name>, ...] - Vservers
The list of Vservers assigned to each IPspace.
Examples
The following example displays general information about IPspaces.
Description
The network ndp default-router delete-all command deletes default router lists from the specified IPspace.
Parameters
-ipspace <IPspace> - IPspace Name
Use this parameter to specify the IPspace where the default routers are to be deleted.
Examples
The following example deletes default routers from IPspace ips1.
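Based on the parameters above, the command takes this form:
cluster1::> network ndp default-router delete-all -ipspace ips1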
Description
The network ndp default-router show command displays Neighbor Discovery Protocol (NDP) default routers learned
on a specified port.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays the NDP default routers from the specified node.
[-ipspace <IPspace>] - IPspace name
Displays the NDP default routers from the specified IPspace.
[-port {<netport>|<ifgrp>}] - Port
Displays the NDP default routers from the specified port.
[-router-addr <IP Address>] - Router Address
Displays the default routers that have the specified IPv6 addresses.
Examples
The following example displays NDP default routers on local port e0f.
Node: node1
IPspace: Default
Port Router Address Flag Expire Time
-------- -------------------------- -------------- --------------
e0f fe80::5:73ff:fea0:107 none 0d0h23m9s
Description
The network ndp neighbor create command creates a static Neighbor Discovery Protocol (NDP) neighbor entry within a
Vserver.
Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver on which the NDP neighbor is to be created.
-neighbor <IP Address> - Neighbor Address
Use this parameter to specify the neighbor's IPv6 address.
-mac-address <MAC Address> - MAC Address
Use this parameter to specify the neighbor's MAC address.
Examples
The following example creates an NDP neighbor entry within Vserver vs0.
cluster1::*> network ndp neighbor create -vserver vs0 -neighbor 20:20::20 -mac-address
10:10:10:0:0:1
Description
The network ndp neighbor delete command deletes a static Neighbor Discovery Protocol (NDP) neighbor entry within a
Vserver.
Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver on which the NDP neighbor is to be deleted.
-neighbor <IP Address> - Neighbor Address
Use this parameter to specify the neighbor's IPv6 address.
Examples
The following example deletes an NDP neighbor entry within Vserver vs0.
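Mirroring the create example above, the command takes this form:
cluster1::*> network ndp neighbor delete -vserver vs0 -neighbor 20:20::20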
Description
The network ndp neighbor show command displays a group of static Neighbor Discovery Protocol (NDP) neighbors
within one or more Vservers. You can view static NDP neighbors within specified Vservers, neighbors with specified IPv6
address, and neighbors with specified MAC address.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
Displays the static NDP neighbors that have the specified Vserver as their origin.
[-neighbor <IP Address>] - Neighbor Address
Displays the static NDP neighbors that have the specified IPv6 address.
[-mac-address <MAC Address>] - MAC Address
Displays the static NDP neighbors that have the specified MAC address.
Examples
The following example displays all of the static NDP neighbors configured on Vserver vs0.
Description
The network ndp neighbor active-entry delete command deletes a Neighbor Discovery Protocol (NDP) neighbor
entry on the specified port from a given Vserver's subnet group.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the neighbor entry is to be deleted.
-vserver <vserver> - System or Admin Vserver Name
Use this parameter to specify the System or Admin Vserver on which the neighbor entry is to be deleted.
-subnet-group <IP Address/Mask> - Subnet Group
Use this parameter to specify the subnet group from which the neighbor entry is to be deleted.
-neighbor <IP Address> - Neighbor
Use this parameter to specify the IPv6 address of the neighbor entry which is to be deleted.
-port {<netport>|<ifgrp>} - Port
Use this parameter to specify the port on which the neighbor entry is to be deleted.
Examples
The following example deletes a neighbor entry from the Admin Vserver cluster1:
cluster1::*> network ndp neighbor active-entry delete -vserver cluster1 -node local -subnet-
group ::/0 -neighbor fe80:4::5:73ff:fea0:107 -port e0d
Description
The network ndp neighbor active-entry show command displays Neighbor Discovery Protocol (NDP) neighbor cache
entries on one or more nodes. You can view NDP neighbors within specified nodes and within specified System or Admin
Vservers.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-verbose ]
Displays the expire time, state, is-router, and probe count fields.
Examples
The following example displays NDP neighbors on the Admin Vserver cluster1:
Node: node1
Vserver: cluster1
Subnet Group: ::/0
Neighbor MAC Address Port
------------------------------ -------------------- --------
fe80:4::5:73ff:fea0:107 00:05:73:a0:01:07 e0d
fe80:4::226:98ff:fe0c:b6c1 00:26:98:0c:b6:c1 e0d
fe80:4::4255:39ff:fe25:27c1 40:55:39:25:27:c1 e0d
3 entries were displayed.
Description
The network ndp prefix delete-all command deletes all prefixes learned from the specified IPspace.
Parameters
-ipspace <IPspace> - IPspace Name
Use this parameter to specify the IPspace where the IPv6 prefixes are to be deleted.
Examples
The following example deletes all IPv6 prefixes within IPspace ips1.
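cluster1::> network ndp prefix delete-all -ipspace ips1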
Description
The network ndp prefix show command displays IPv6 prefixes on one or more nodes.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-verbose ]
Displays the valid-lifetime, preferred-lifetime, origin and advertising-router fields.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays the IPv6 prefixes from the specified node.
[-ipspace <IPspace>] - IPspace name
Displays the IPv6 prefixes from the specified IPspace.
[-port {<netport>|<ifgrp>}] - Port
Displays the IPv6 prefixes on the specified port.
[-prefix <IP Address/Mask>] - Prefix
Displays the IPv6 prefixes with the specified prefix value.
[-flag {none|on-link|autonomous|on-link-autonomous}] - Flag
Displays the IPv6 prefixes with the specified flag. The flag indicates whether a prefix is on-link and whether it
can be used in autonomous address configuration.
Examples
The following example displays IPv6 prefixes on port e0f.
Node: node1
IPspace: Default
Port Prefix Flag Expire Time
--------- ------------------------- ------------------ -------------
e0f fd20:8b1e:b255:814e::/64 on-link-autonomous 29d23h56m48s
Description
This command enables or disables cluster health notifications on the specified node.
Parameters
-node {<nodename>|local} - Node
This parameter specifies the node for which the cluster health notification status will be modified.
[-enabled {true|false}] - Cluster Health Notifications Enabled
Setting this parameter to true enables cluster health notification. Setting it to false disables cluster health
notification.
Examples
The following example modifies the cluster health notification status for a node:
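For example, for a node named node1 (the node name is illustrative):
cluster1::> network options cluster-health-notifications modify -node node1 -enabled true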
Description
The network options cluster-health-notifications show command displays whether the node's cluster health
notifications are enabled.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter specifies the node for which the cluster health notification status will be displayed.
[-enabled {true|false}] - Cluster Health Notifications Enabled
Selects the entries that match this parameter value.
Examples
The following example displays the cluster health notification status for a node:
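For example, for a node named node1 (the node name is illustrative):
cluster1::> network options cluster-health-notifications show -node node1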
Description
This command enables or disables the automatic detection of a switchless cluster. A switchless cluster consists of two nodes
where the cluster ports are directly connected without a switch between them.
Examples
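The following example enables switchless cluster detection, assuming an -enabled parameter analogous to the other network options commands:
cluster1::> network options detect-switchless-cluster modify -enabled true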
Description
The network options detect-switchless-cluster show command displays whether switchless cluster detection is enabled.
Examples
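The following example displays whether switchless cluster detection is enabled:
cluster1::> network options detect-switchless-cluster show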
Description
This command sets the state of IPv6 options for the cluster.
Parameters
[-enabled [true]] - IPv6 Enabled
Setting this parameter to true enables IPv6 for the cluster. IPv6 cannot be disabled once it is enabled for the
cluster. Call technical support for guidance regarding disabling IPv6.
[-is-ra-processing-enabled {true|false}] - Router Advertisement (RA) Processing Enabled
Setting this parameter to true enables the cluster to process IPv6 router advertisements. Setting it to false
disables router advertisement processing by the cluster.
Examples
The following example disables IPv6 Router Advertisement processing for the
cluster:
cluster1::> network options ipv6 modify -is-ra-processing-enabled false
Description
This command displays the current state of IPv6 options for the cluster.
Examples
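The following example displays the current state of IPv6 options for the cluster:
cluster1::> network options ipv6 show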
Description
This command sets the state of the geometric mean algorithm for load balancing.
Parameters
[-enable {true|false}] - Geometric Mean Algorithm for load balancing
Setting this parameter to true enables the geometric mean algorithm for load balancing. Setting it to false
disables the geometric mean algorithm for the cluster.
Examples
The following example enables the geometric mean algorithm for load balancing:
cluster1::> network options load-balancing modify -enable true
The following example disables the geometric mean algorithm for load balancing:
cluster1::> network options load-balancing modify -enable false
Description
This command displays whether the geometric mean load-balancing algorithm is in use.
Examples
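The following example displays the load-balancing configuration:
cluster1::> network options load-balancing show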
Description
The network options multipath-routing modify command modifies the cluster-wide multipath routing
configuration.
Parameters
[-is-enabled {true|false}] - Is Multipath Routing Enabled
This parameter specifies whether multipath routing configuration is enabled or not. Setting this parameter to
true enables multipath routing for all nodes in the cluster.
Examples
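The following example enables multipath routing for all nodes in the cluster:
cluster1::> network options multipath-routing modify -is-enabled true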
Description
The network options multipath-routing show command displays the multipath routing configuration for the cluster.
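Examples
The following example displays the multipath routing configuration for the cluster:
cluster1::> network options multipath-routing show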
Description
This command disables the given port health monitors for the given IPspaces in the cluster.
Parameters
-ipspace <IPspace> - IPspace Name
The name of the IPspace for which the specified port health monitors are disabled.
-health-monitors {l2-reachability|link-flapping|crc-errors|vswitch-link}, ... - List of Port Health
Monitors to Disable
The port health monitors to disable.
Examples
The following example disables the "l2_reachability" health monitor for the "Default" IPspace.
Note: The status of the "link_flapping" monitor is unaffected by the command.
Description
This command enables the given port health monitors for the given IPspaces in the cluster.
Parameters
-ipspace <IPspace> - IPspace Name
The name of the IPspace for which the specified port health monitors are enabled.
-health-monitors {l2-reachability|link-flapping|crc-errors|vswitch-link}, ... - List of Port Health
Monitors to Enable
The port health monitors to enable. Upon enabling the l2_reachability health monitor, it runs in an
"unpromoted" state. While in this state, the monitor does not mark any ports as unhealthy due to the
l2_reachability health check. The monitor is promoted in the "Cluster" IPspace when the "Cluster" broadcast
domain is found to have passed the l2_reachability health check. An EMS event called
"vifmgr.hm.promoted" is generated when the health monitor is promoted for the IPspace.
Examples
The following example enables the "l2_reachability" health monitor for the "Default" IPspace:
Note: The status of the "link_flapping" monitor is unaffected by the command.
Description
This command modifies the enabled port health monitors for the given IPspaces in the cluster.
Examples
The following example modifies the port health monitor configuration of the "Default" IPspace such that only the
"link_flapping" port health monitor is enabled.
Note: Only the specified monitor is enabled after the modify command is issued.
Description
This command displays the enabled port health monitors for the IPspaces in the cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example lists all port health monitors that are enabled for all IPspaces in the cluster.
Description
This command sets whether the start of authority (SOA) record is sent in the DNS response.
Parameters
[-enable {true|false}] - Enable sending SOA
Setting this parameter to true enables sending the start of authority (SOA) record in the DNS response.
Setting it to false disables sending the SOA record in the DNS response for the cluster.
Examples
The following example enables sending the start of authority (SOA) record in the DNS response:
cluster1::> network options send-soa modify -enable true
The following example disables sending the start of authority (SOA) record in the DNS response:
cluster1::> network options send-soa modify -enable false
Examples
Description
This command sets whether the cluster network is in switchless or switched mode. A switchless cluster is physically formed by
connecting two nodes back-to-back, without a switch between them.
Parameters
[-enabled {true|false}] - Enable Switchless Cluster
This parameter specifies whether the switchless cluster is enabled or not. Setting this parameter to true
enables the switchless cluster.
Examples
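The following example enables switchless cluster mode:
cluster1::> network options switchless-cluster modify -enabled true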
Description
The network options switchless-cluster show command displays the attributes of a switchless cluster.
Examples
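The following example displays the attributes of the switchless cluster:
cluster1::> network options switchless-cluster show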
Description
The network port delete command deletes a network port that is no longer physically present on the storage system.
Parameters
-node {<nodename>|local} - Node
This specifies the node on which the port is located.
-port {<netport>|<ifgrp>} - Port
This specifies the port to delete.
Examples
The following example deletes port e0c from a node named node0. The command works only when the port does not
physically exist on the storage system.
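cluster1::> network port delete -node node0 -port e0c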
Description
The network port modify command enables you to change the maximum transmission unit (MTU) setting, autonegotiation
setting, administrative duplex mode, and administrative speed of a specified network port.
The MTU of ports that belong to broadcast-domains must be updated through the broadcast-domain modify command.
Modification of a port's IPspace will only work before a node is added to a cluster, when the cluster version is below Data
ONTAP 8.3, or when the node is offline. To change the IPspace of a port once the node is in a Data ONTAP 8.3 cluster, the port
should be added to a broadcast-domain that belongs to that IPspace.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the port is located.
-port {<netport>|<ifgrp>} - Port
Use this parameter to specify the port that you want to modify.
[-mtu <integer>] - MTU
The port's MTU setting. The default setting for ports in the "Cluster" IPspace is 9000 bytes. All other ports use
a default value of 1500 bytes.
Examples
The following example modifies port e0a on a node named node0 not to use auto-negotiation, to preferably use half
duplex mode, and to preferably run at 100 Mbps.
cluster1::> network port modify -node node0 -port e0a -autonegotiate-admin false -duplex-admin
half -speed-admin 100
Description
The network port show command displays information about network ports. The command output indicates any inactive
links, and lists the reason for the inactive status.
Some parameters can have "administrative" and "operational" values. The administrative setting is the preferred value for that
parameter, which is set when the port is created or modified. The operational value is the actual current value of that parameter.
For example, if the network is underperforming due to network problems, the operational speed value can be lower than the
administrative setting.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-health ]
Use this parameter to display detailed health information for the specified network ports.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the network ports that match this parameter value. Use this parameter with the -port parameter to
select a port.
[-port {<netport>|<ifgrp>}] - Port
Selects the network ports that match this parameter value. If you do not use this parameter, the command
displays information about all network ports.
[-link {off|up|down}] - Link
Selects the network ports that match this parameter value.
[-mtu <integer>] - MTU
Selects the network ports that match this parameter value.
[-autonegotiate-admin {true|false}] - Auto-Negotiation Administrative
Selects the network ports that match this parameter value.
[-autonegotiate-oper {true|false}] - Auto-Negotiation Operational
Selects the network ports that match this parameter value.
[-duplex-admin {auto|half|full}] - Duplex Mode Administrative
Selects the network ports that match this parameter value.
[-duplex-oper {auto|half|full}] - Duplex Mode Operational
Selects the network ports that match this parameter value.
[-speed-admin {auto|10|100|1000|10000|100000|40000|25000}] - Speed Administrative
Selects the network ports that match this parameter value.
[-speed-oper {auto|10|100|1000|10000|100000|40000|25000}] - Speed Operational
Selects the network ports that match this parameter value.
[-flowcontrol-admin {none|receive|send|full}] - Flow Control Administrative
Selects the network ports that match this parameter value.
[-flowcontrol-oper {none|receive|send|full}] - Flow Control Operational
Selects the network ports that match this parameter value.
[-mac <MAC Address>] - MAC Address
Selects the network ports that match this parameter value.
[-up-admin {true|false}] - Up Administrative (privilege: advanced)
Selects the network ports that match this parameter value.
[-type {physical|if-group|vlan|vip}] - Port Type
Selects the network ports that match this parameter value.
Examples
The following example displays information about all network ports.
Node: node1
                                                                       Ignore
                                                  Speed(Mbps) Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000 auto/1000   healthy  false
e0b       Cluster      Cluster          up   9000 auto/1000   healthy  false
e0c       Default      Default          up   1500 auto/1000   degraded false
e0d       Default      Default          up   1500 auto/1000   degraded true
The following example displays health information about all network ports.
Node: node2
Port Link Health Status Ignore Health Status Degraded Reasons
---- ---- ------------- -------------------- ----------------
e0a  up   healthy       false                -
e0b  up   healthy       false                -
e0c  up   healthy       false                -
e0d  up   degraded      false                -
Description
The network port show-address-filter-info command displays information about the port's address filter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
-node <nodename> - Node
Use this parameter to specify the node.
-port {<netport>|<ifgrp>} - Port
Use this parameter to specify the port. For example, e0c.
[-num-total <integer>] - Total Number Of Entries
Use this parameter to specify the total number of entries.
[-num-used <integer>] - Number Of Used Entries
Use this parameter to specify the number of used entries.
Examples
The following example displays information of the given port's address filter on the specified node of the cluster.
Node: node1
     Total Number   Number of
Port of Address     Used Address   Used Address
Name Filter Entries Filter Entries Filter Entries
---- -------------- -------------- ------------------
e0c  1328           3              U 0 a0 98 40 e 6
                                   M 1 80 c2 0 0 e
                                   M 1 0 5e 0 0 fb
Description
Add ports to a broadcast domain.
Note: The IPSpace of the ports added will be updated to the IPSpace of the broadcast-domain. The ports will be added to the
failover-group of the broadcast-domain. The MTU of the ports will be updated to the MTU of the broadcast-domain.
Parameters
-ipspace <IPspace> - IPspace Name
The IPspace of the broadcast domain.
-broadcast-domain <Broadcast Domain> - Layer 2 Broadcast Domain
The broadcast domain for this port assignment.
-ports <<node>:<port>>, ... - List of ports
The ports to be added to this broadcast domain.
Examples
The following example adds the port "e0d" on node "cluster1-1" and port "e0d" on node "cluster1-2" to broadcast domain
"mgmt" in IPspace "Default".
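cluster1::> network port broadcast-domain add-ports -ipspace Default -broadcast-domain mgmt -ports cluster1-1:e0d,cluster1-2:e0d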
Description
Create a new broadcast domain.
Note: The IPSpace of the ports added will be updated to the IPSpace of the broadcast-domain. A failover-group will be
generated containing the ports of the broadcast-domain. The MTU of all of the ports in the broadcast-domain will be updated
to the MTU specified for the broadcast-domain.
Parameters
[-ipspace <IPspace>] - IPspace Name
The IPspace to which the new broadcast domain belongs.
-broadcast-domain <Broadcast Domain> - Layer 2 Broadcast Domain
The name of the broadcast domain to be created. The name of the broadcast domain needs to be unique within
the IPspace.
-mtu <integer> - Configured MTU
MTU of the broadcast domain.
[-ports <<node>:<port>>, ...] - Ports
The network ports to be added to the broadcast domain. Ports need to be added to the broadcast domain before
interfaces can be hosted on the port. By default, no port will be added to the broadcast domain.
Examples
The following example creates broadcast domain "mgmt" in IPspace "Default" with an MTU of 1500 and network ports
e0c from node "gx1" and node "gx2".
cluster1::> network port broadcast-domain create -ipspace Default -broadcast-domain mgmt -mtu 1500
-ports gx1:e0c,gx2:e0c
Description
Delete a broadcast domain that contains no ports.
Parameters
-ipspace <IPspace> - IPspace Name
The IPspace to which the broadcast domain belongs
-broadcast-domain <Broadcast Domain> - Layer 2 Broadcast Domain
The name of the broadcast domain to be deleted.
Examples
The following example deletes the broadcast domain "mgmt" in IPspace "Default".
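cluster1::> network port broadcast-domain delete -ipspace Default -broadcast-domain mgmt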
Description
Merges a broadcast domain into an existing broadcast domain.
Parameters
-ipspace <IPspace> - IPspace Name
The IPspace of the broadcast domain.
-broadcast-domain <Broadcast Domain> - Layer 2 Broadcast Domain
The merging broadcast domain.
-into-broadcast-domain <Broadcast Domain> - Merge with This Layer 2 Broadcast Domain
The target broadcast domain for the merge operation.
Examples
The following example merges broadcast domain "bd-mgmt" in IPspace "Default" to broadcast domain "bd-data".
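cluster1::> network port broadcast-domain merge -ipspace Default -broadcast-domain bd-mgmt -into-broadcast-domain bd-data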
Description
Modify a broadcast domain.
Parameters
-ipspace <IPspace> - IPspace Name
The IPspace to which the broadcast domain belongs.
-broadcast-domain <Broadcast Domain> - Layer 2 Broadcast Domain
The name of the broadcast domain.
[-mtu <integer>] - Configured MTU
MTU of the broadcast domain.
Examples
The following example modifies the mtu attribute of broadcast domain "mgmt" in IPspace "Default" to 1500
cluster1::network port broadcast-domain*> modify -ipspace Default -broadcast-domain mgmt -mtu 1500
Description
Remove port assignments from a broadcast domain.
Parameters
-ipspace <IPspace> - IPspace Name
The IPspace of the broadcast domain.
-broadcast-domain <Broadcast Domain> - Layer 2 Broadcast Domain
The broadcast domain of the ports.
-ports <<node>:<port>>, ... - List of ports
The ports to be removed from the broadcast domain.
Examples
The following example removes port "e0d" on node "cluster1-1" and port "e0d" on node "cluster1-2" from broadcast
domain "mgmt" in IPspace "Default".
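cluster1::> network port broadcast-domain remove-ports -ipspace Default -broadcast-domain mgmt -ports cluster1-1:e0d,cluster1-2:e0d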
Description
Rename a broadcast domain.
Parameters
-ipspace <IPspace> - IPspace Name
The IPspace to which the broadcast domain belongs.
-broadcast-domain <Broadcast Domain> - Layer 2 Broadcast Domain
The name of the broadcast domain.
-new-name <text> - New Name
The new name of the broadcast domain.
Examples
The following example renames the broadcast domain named "mgmt" to "mgmt2" in IPspace "Default".
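cluster1::> network port broadcast-domain rename -ipspace Default -broadcast-domain mgmt -new-name mgmt2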
Description
Display broadcast domain information.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-ipspace <IPspace>] - IPspace Name
Selects the broadcast domains that match the IPspace name.
[-broadcast-domain <Broadcast Domain>] - Layer 2 Broadcast Domain
Selects the broadcast domains that match the broadcast domain name.
[-mtu <integer>] - Configured MTU
Selects the broadcast domains that match the MTU value. This field is the MTU that was configured by the
user, which might be different from the operational MTU.
[-ports <<node>:<port>>, ...] - Ports
Selects the broadcast domains that contain the network ports. For example, node1:e0a will display broadcast
domains that contain node1:e0a network port.
[-port-update-status {complete|in-progress|error|overridden-while-offline}, ...] - Port Update
Status
Selects the broadcast domains that contain the network port status. For example, specifying "error" will
display broadcast domains that contain "Error" network port status.
[-port-update-status-details <text>, ...] - Status Detail Description
Selects the broadcast domains that contain the network port status detail text.
[-port-update-status-combined {complete|in-progress|error|overridden-while-offline}] -
Combined Port Update Status
Selects the broadcast domains that contain the combined network port status. For example, specifying "error"
will display broadcast domains that contain a combined network port status of "Error".
[-failover-groups <failover-group>, ...] - Failover Groups
Selects the broadcast domains that contain the failover groups.
[-subnet-names <subnet name>, ...] - Subnet Names
Selects the broadcast domains that contain the subnet name or names.
[-is-vip {true|false}] - Is VIP Broadcast Domain
Selects the broadcast domains that match a specific "is-vip" flag. Specifying "true" matches only broadcast
domains associated with the Virtual IP feature; "false" matches only broadcast domains that do not.
Examples
The following example displays general information about broadcast domains.
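cluster1::> network port broadcast-domain show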
Description
Splits ports from a broadcast domain into a new broadcast domain.
The following restrictions apply to this command:
• If the ports are in a failover group, all ports in the failover group must be provided. Use network interface failover-
groups show to see which ports are in failover groups.
• If the ports have LIFs associated with them, the LIFs cannot be part of a subnet's ranges and the LIF's curr-port and
home-port must both be provided. Use network interface show -fields subnet-name, home-node, home-port,
curr-node, curr-port to see which ports have LIFs associated with them and whether the LIFs are part of a subnet's
ranges. Use network subnet remove-ranges with the LIF's IP address and -force-update-lif-associations set
to true to remove the LIF's association with a subnet.
Parameters
-ipspace <IPspace> - IPspace Name
The IPspace of the broadcast domain.
-broadcast-domain <Broadcast Domain> - Layer 2 Broadcast Domain
The broadcast domain to split.
-new-broadcast-domain <Broadcast Domain> - New Layer 2 Broadcast Domain Name
The new broadcast domain.
-ports <<node>:<port>>, ... - List of Ports
The ports to be split from this broadcast domain.
Examples
The following example splits port "e0d" on node "cluster1-1" and port "e0d" on node "cluster1-2" from broadcast domain
"bd-mgmt" in IPspace "Default" to broadcast domain "bd-data".
cluster1::> network port broadcast-domain split -ipspace Default -broadcast-domain bd-mgmt -new-
broadcast-domain bd-data -ports cluster1-1:e0d, cluster1-2:e0d
Related references
network interface failover-groups show on page 355
network interface show on page 344
network subnet remove-ranges on page 414
Description
The network port ifgrp add-port command adds a network port to a port interface group. The port interface group must
already exist. You can create a port interface group by using the network port ifgrp create command.
The following restrictions apply to port interface groups:
• A port that is already a member of a port interface group cannot be added to another port interface group.
• A port to which a logical interface is already bound cannot be added to a port interface group.
• A port that already has an assigned failover role cannot be added to a port interface group.
• A port that is assigned to a broadcast domain cannot be added to a port interface group.
• All ports in a port interface group must be physically located on the same node.
Parameters
-node {<nodename>|local} - Node
The node on which the port interface group is located.
-ifgrp <ifgrp name> - Interface Group Name
The port interface group to which a port is to be added.
-port <netport> - Specifies the name of the port.
The network port that is to be added to the port interface group.
Examples
The following example adds port e0c to port interface group a1a on a node named node1:
cluster1::> network port ifgrp add-port -node node1 -ifgrp a1a -port e0c
Related references
network port ifgrp create on page 398
Description
The network port ifgrp create command creates a port interface group on a node.
Parameters
-node {<nodename>|local} - Node
The node on which the port interface group will be created.
-ifgrp <ifgrp name> - Interface Group Name
The name of the port interface group that will be created. Port interface groups must be named using the
syntax "a<number><letter>", where <number> is an integer in the range [0-999] without leading zeros and
<letter> is a lowercase letter. For example, "a0a", "a0b", "a1c", and "a2a" are all valid port interface group
names.
-distr-func {mac|ip|sequential|port} - Distribution Function
The distribution function of the port interface group that will be created. Valid values are:
• mac - Network traffic is distributed based on MAC addresses
• ip - Network traffic is distributed based on IP addresses
• sequential - Network traffic is distributed in round-robin fashion from the list of configured, available ports
• port - Network traffic is distributed based on the transport layer (TCP/UDP) ports
-mode {multimode|multimode_lacp|singlemode} - Create Policy
The mode of the port interface group that will be created. Valid values are:
• multimode - Bundle multiple member ports of the interface group to act as a single trunked port
• multimode_lacp - Bundle multiple member ports of the interface group using Link Aggregation Control
Protocol
• singlemode - Provide port redundancy using member ports of the interface group for failover
Examples
The following example creates a port interface group named a0a on node node0 with a distribution function of ip:
cluster1::> network port ifgrp create -node node0 -ifgrp a0a -distr-func ip -mode multimode
Related references
network port ifgrp add-port on page 398
Description
The network port ifgrp delete command destroys a port interface group.
Note: When you delete an interface group port, it is automatically removed from failover rules and groups to which it
belongs.
Examples
The following example deletes port interface group a0b from a node named node0.
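For example, using the -node and -ifgrp parameters common to the other ifgrp commands:
cluster1::> network port ifgrp delete -node node0 -ifgrp a0b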
Description
The network port ifgrp remove-port command removes a network port from a port interface group.
Parameters
-node {<nodename>|local} - Node
The node on which the port interface group is located.
-ifgrp <ifgrp name> - Interface Group Name
The port interface group from which a port will be removed.
-port <netport> - Specifies the name of the port.
The network port that will be removed from the port interface group.
Examples
The following example removes port e0d from port interface group a1a on a node named node1:
cluster1::> network port ifgrp remove-port -node node1 -ifgrp a1a -port e0d
Description
The network port ifgrp show command displays information about port interface groups. By default, it displays
information about all port interface groups on all nodes in the cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example displays information about all port interface groups.
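cluster1::> network port ifgrp show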
Description
The network port vip create command creates a VIP port in the specified IPspace on the specified node. Only one VIP
port can be created per IPspace on the given node.
Examples
This example shows how to create a VIP port named v0a in ipspace ips on node1.
cluster1::> network port vip create -node node1 -port v0a -ipspace ips
Description
The network port vip delete command deletes a VIP port.
Parameters
-node {<nodename>|local} - Node
The node associated with the VIP port to be deleted.
-port <netport> - Network Port
The name of the VIP port to be deleted.
Examples
This example shows how to delete VIP Port v0a on node1.
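cluster1::> network port vip delete -node node1 -port v0a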
Description
The network port vip show command displays information about VIP ports.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
Examples
The following example shows that VIP port v0a has been created in IPspace ips on node1.
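cluster1::> network port vip show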
Description
The network port vlan create command attaches a VLAN to a network port on a specified node.
Parameters
-node {<nodename>|local} - Node
The node to which the VLAN is to be attached.
Note: You cannot attach a VLAN to a cluster port.
Examples
This example shows how to create VLAN e1c-80 attached to network port e1c on node1.
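For example, using the -vlan-name form documented under the network port vlan delete command:
cluster1::> network port vlan create -node node1 -vlan-name e1c-80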
Description
The network port vlan delete command deletes a VLAN from a network port.
Note: When you delete a VLAN port, it is automatically removed from all failover rules and groups that use it.
Parameters
-node {<nodename>|local} - Node
The node from which the VLAN is to be deleted.
{ -vlan-name {<netport>|<ifgrp>} - VLAN Name
The name of the VLAN that is to be deleted
| -port {<netport>|<ifgrp>} - Associated Network Port
The network port to which the VLAN is attached.
-vlan-id <integer>} - Network Switch VLAN Identifier
The ID tag of the deleted VLAN.
Examples
This example shows how to delete VLAN e1c-80 from network port e1c on node1.
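cluster1::> network port vlan delete -node node1 -vlan-name e1c-80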
Description
The network port vlan show command displays information about network ports that are attached to VLANs. The
command output indicates any inactive links and lists the reason for the inactive status.
If the operational duplex mode and speed cannot be determined (for instance, if the link is down), they are listed as undef,
meaning undefined.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the VLAN network ports that match this parameter value.
{ [-vlan-name {<netport>|<ifgrp>}] - VLAN Name
Selects the VLAN network ports that match this parameter value.
Examples
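The following example displays information about all VLAN network ports:
cluster1::> network port vlan show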
Description
The network qos-marking modify command modifies the QoS marking values for different protocols, for each IPspace.
Parameters
-ipspace <IPspace> - IPspace name
Use this parameter to specify the IPspace for which the QoS marking entry is to be modified.
-protocol <text> - Protocol
Use this parameter to specify the protocol for which the QoS marking entry is to be modified. The possible
values are NFS, CIFS, iSCSI, SnapMirror, SnapMirror-Sync, NDMP, FTP, HTTP-admin, HTTP-filesrv, SSH,
Telnet, and SNMP.
[-dscp <integer>] - DSCP Marking Value
Use this parameter to specify the DSCP value. The possible values are 0 to 63.
[-is-enabled {true|false}] - Is QoS Marking Enabled
Use this parameter to enable or disable the QoS marking for the specified protocol and IPspace.
Examples
The following example modifies the QoS marking entry for the NFS protocol in the Default IPspace:
cluster1::> network qos-marking modify -ipspace Default -protocol NFS -dscp 10 -is-enabled true
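As background (not part of the ONTAP command itself): a DSCP value occupies the upper six bits of the IP header's TOS/Traffic Class byte, which is where the 0 to 63 range comes from. A minimal sketch of that encoding:

```python
def dscp_to_tos_byte(dscp):
    """Place a DSCP value (0-63) into the upper 6 bits of the
    IP TOS / Traffic Class byte, as it appears on the wire."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP must be in the range 0-63")
    return dscp << 2

# DSCP 10, as set in the example above, yields a TOS byte of 40.
assert dscp_to_tos_byte(10) == 40
```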
Description
The network qos-marking show command displays the QoS marking values for different protocols, for each IPspace.
Parameters
{ [-fields <fieldname>, ...]
Use this parameter to display only certain fields of the QoS marking table.
| [-instance ]}
Use this parameter to display all the fields of the QoS marking table.
[-ipspace <IPspace>] - IPspace name
Use this parameter to display the QoS marking entries for the specified IPspace.
[-protocol <text>] - Protocol
Use this parameter to display the QoS marking entries for the specified protocol. The possible values are NFS,
CIFS, iSCSI, SnapMirror, SnapMirror-Sync, NDMP, FTP, HTTP-admin, HTTP-filesrv, SSH, Telnet, and
SNMP.
[-dscp <integer>] - DSCP Marking Value
Use this parameter to display the QoS marking entries matching the specified DSCP value. The possible
values are 0 to 63.
[-is-enabled {true|false}] - Is QoS Marking Enabled
Use this parameter to display the QoS marking entries matching the specified flag.
Examples
The following example displays the QoS marking entries for the Default IPspace.
Description
The network route create command creates a static route within a Vserver.
Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver on which the route is to be created.
-destination <IP Address/Mask> - Destination/Mask
Use this parameter to specify the IP address and subnet mask of the route's destination. The format for this
value is: address, slash ("/"), mask. For example, 0.0.0.0/0 is a valid destination value for creating a default
IPv4 route, and ::/0 is a valid destination value for creating a default IPv6 route.
-gateway <IP Address> - Gateway
Use this parameter to specify the IP address of the gateway server leading to the route's destination.
[-metric <integer>] - Metric
Use this parameter to specify the metric of the route.
Examples
The following example creates default routes within Vserver vs0 for IPv4 and IPv6.
cluster1::> network route create -vserver vs0 -destination 0.0.0.0/0 -gateway 10.61.208.1
cluster1::> network route create -vserver vs0 -destination ::/0 -gateway 3ffe:1::1
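The address/mask format described above can be checked with Python's standard ipaddress module; this is an illustrative sketch, not part of ONTAP:

```python
import ipaddress

def is_valid_destination(dest):
    """Return True if dest is in address/mask form, e.g. 0.0.0.0/0
    (default IPv4 route) or ::/0 (default IPv6 route)."""
    try:
        ipaddress.ip_network(dest)  # strict: host bits must be zero
        return True
    except ValueError:
        return False

assert is_valid_destination("0.0.0.0/0")
assert is_valid_destination("::/0")
assert not is_valid_destination("not-a-subnet")
```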
Description
The network route delete command deletes a static route from a Vserver.
Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver on which the route is to be deleted.
-destination <IP Address/Mask> - Destination/Mask
Use this parameter to specify the IP address and subnet mask of the route's destination. The format for this
value is: address, slash ("/"), mask. For example, 0.0.0.0/0 is a correctly formatted value for the -destination
parameter.
-gateway <IP Address> - Gateway
Use this parameter to specify the gateway on which the route is to be deleted.
Examples
The following example deletes a route within Vserver vs0 for destination 0.0.0.0/0.
Description
The network route show command displays a group of static routes within one or more Vservers. You can view routes
within specified Vservers, routes with specified destinations, and routes with specified gateways.
Parameters
{ [-fields <fieldname>, ...]
Use this parameter to display only certain fields of the routing tables.
| [-instance ]}
Use this parameter to display all fields of the routing tables.
[-vserver <vserver>] - Vserver Name
Use this parameter to display only routes that have the specified Vserver as their origin.
[-destination <IP Address/Mask>] - Destination/Mask
Use this parameter to display only routes that have the specified IP address and subnet mask as their
destination. The format for this value is: address, slash ("/"), mask. For example, 0.0.0.0/0 is a valid
value for the -destination parameter.
[-gateway <IP Address>] - Gateway
Use this parameter to display only routes that have the specified IP address as their gateway.
[-metric <integer>] - Metric
Use this parameter to display only routes that have the specified metric.
[-ipspace <IPspace>] - IPspace Name
Use this parameter to optionally specify the IPspace associated with the Vserver. This parameter can be used
in conjunction with the Vserver parameter in order to configure the same route across multiple Vservers within
an IPspace.
[-address-family {ipv4|ipv6|ipv6z}] - Address family of the route
Use this parameter to display only the routes that have the specified address-family.
Examples
The following example displays information about all routes.
Description
The network route show-lifs command displays the association of static routes and Logical Interfaces (LIFs) within one
or more Vservers. You can view routes within specified Vservers, routes with specified destinations, and routes with specified
gateways.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver>] - Vserver Name
Use this parameter to display only routes that have the specified Vserver as their origin.
[-destination <IP Address/Mask>] - Destination/Mask
Use this parameter to display only routes that have the specified IP address and subnet mask as their
destination. The format for this value is: address, slash ("/"), mask. For example, 0.0.0.0/0 is a valid value
for the -destination parameter.
[-gateway <IP Address>] - Gateway
Use this parameter to display only routes that have the specified IP address as their gateway.
[-lifs <lif-name>, ...] - Logical Interfaces
Use this parameter to display only the routes that are associated with the specified Logical Interfaces (LIFs).
[-address-family {ipv4|ipv6|ipv6z}] - Address Family
Use this parameter to display only the routes that belong to the specified address family.
Description
The network route active-entry show command displays installed routes on one or more nodes. You can view routes
within specified nodes, within specified Vservers, routes in specified subnet groups, and routes with specified destinations.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
The flags in the command output have the following meanings:
• U - Usable
• G - Gateway
• H - Host
• R - Reject
• D - Dynamic
• S - Static
• 1 - Protocol1
• 2 - Protocol2
• L - Llinfo
• C - Clone
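The single-letter flags above appear concatenated in the Flags column of the output (for example, UGS in the example below). A small sketch of how such a string decodes:

```python
# Legend of the single-letter route flags listed above.
ROUTE_FLAGS = {
    "U": "Usable", "G": "Gateway", "H": "Host", "R": "Reject",
    "D": "Dynamic", "S": "Static", "1": "Protocol1",
    "2": "Protocol2", "L": "Llinfo", "C": "Clone",
}

def decode_flags(flags):
    """Expand a Flags column value such as 'UGS' into flag names."""
    return [ROUTE_FLAGS[f] for f in flags]

assert decode_flags("UGS") == ["Usable", "Gateway", "Static"]
```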
Examples
The following example displays active routes on all nodes in Vserver vs0 with subnet-group 10.10.10.0/24.
Vserver: vs0
Node: node1
Subnet Group: 10.10.10.0/24
Destination Gateway Interface Metric Flags
---------------------- ------------------- --------- ------ -----
default 10.10.10.1 e0c 0 UGS
Vserver: vs0
Node: node2
Subnet Group: 10.10.10.0/24
Destination Gateway Interface Metric Flags
---------------------- ------------------- --------- ------ -----
default 10.10.10.1 e0c 0 UGS
2 entries were displayed.
Related references
network route show on page 408
Description
Add new address ranges to a subnet.
Note: All addresses in a range must be of the same address family (IPv4 or IPv6) and must have the same subnet mask.
Ranges that overlap or are adjacent to existing ranges are merged with the existing ranges.
Parameters
-ipspace <IPspace> - IPspace Name
The IPspace in which the range resides.
Examples
The following example allocates addresses for subnet s1 in IPspace Default.
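The merge behavior described in the note above (overlapping or adjacent ranges collapse into one) can be sketched as follows; this is an illustration of the rule, not ONTAP's implementation:

```python
import ipaddress

def merge_ranges(ranges):
    """Merge overlapping or adjacent (start, end) IP address ranges,
    given as strings of one address family."""
    as_ints = sorted(
        (int(ipaddress.ip_address(s)), int(ipaddress.ip_address(e)))
        for s, e in ranges
    )
    merged = []
    for start, end in as_ints:
        # Adjacent means the new start is exactly one past the last end.
        if merged and start <= merged[-1][1] + 1:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return [(str(ipaddress.ip_address(s)), str(ipaddress.ip_address(e)))
            for s, e in merged]

# Two adjacent ranges collapse into one.
assert merge_ranges([("10.98.1.1", "10.98.1.10"),
                     ("10.98.1.11", "10.98.1.20")]) == \
    [("10.98.1.1", "10.98.1.20")]
```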
Description
Create a new subnet.
Parameters
[-ipspace <IPspace>] - IPspace Name
The IPspace to which the new subnet belongs.
-subnet-name <subnet name> - Subnet Name
The name of the subnet to be created. The name of the subnet needs to be unique within the IPspace.
-broadcast-domain <Broadcast Domain> - Broadcast Domain
The broadcast domain to which the new subnet belongs.
-subnet <IP Address/Mask> - Layer 3 Subnet
The address and mask of the subnet.
[-gateway <IP Address>] - Gateway
The gateway of the subnet.
[-ip-ranges {<ipaddr>|<ipaddr>-<ipaddr>}, ...] - IP Addresses or IP Address Ranges
The IP ranges associated with this subnet.
[-force-update-lif-associations [true]] - Change the subnet association
This command will fail if any service processor interfaces or network interfaces are using the IP addresses in
the ranges provided. Using this parameter will associate any manually addressed interfaces with the subnet and
will allow the command to succeed.
Examples
The following examples create subnets named s1 and s6 in IPspace Default.
Description
Delete a subnet that contains no ports.
Parameters
-ipspace <IPspace> - IPspace Name
The IPspace to which the subnet belongs.
-subnet-name <subnet name> - Subnet Name
The name of the subnet to be deleted.
[-force-update-lif-associations [true]] - Change the subnet association
This command will fail if the subnet has ranges containing any existing service processor interface or network
interface IP addresses. Setting this value to true will remove the network interface associations with the subnet
and allow the command to succeed. However, it will not affect service processor interfaces.
Examples
The following example deletes subnet s1 in IPspace Default.
Description
Modify a subnet.
Parameters
-ipspace <IPspace> - IPspace Name
The IPspace to which the subnet belongs.
-subnet-name <subnet name> - Subnet Name
The name of the subnet to modify.
[-subnet <IP Address/Mask>] - Layer 3 Subnet
The new address and mask of the subnet.
[-gateway <IP Address>] - Gateway
The new gateway address.
Examples
The following example modifies the subnet address and gateway.
cluster1::> network subnet modify -ipspace Default -subnet-name s1 -subnet 192.168.2.0/24 -gateway
192.168.2.1
Description
Remove address ranges from a subnet.
Parameters
-ipspace <IPspace> - IPspace Name
The IPspace in which the range resides.
-subnet-name <subnet name> - Subnet Name
The name of the subnet.
-ip-ranges {<ipaddr>|<ipaddr>-<ipaddr>}, ... - IP Ranges
IP ranges to remove.
[-force-update-lif-associations [true]] - Force Update LIF Associations
This command will fail if any existing service processor interfaces or network interfaces are using IP
addresses in the ranges provided. Using this parameter will remove the subnet's association with those
interfaces and allow the command to succeed.
Examples
The following example removes an address range with starting address of 10.98.1.1 from subnet s1 in IPspace
Default.
Parameters
-ipspace <IPspace> - IPspace Name
The IPspace to which the subnet belongs.
-subnet-name <subnet name> - Subnet Name
The name of the subnet to rename.
-new-name <text> - New Name
The new name for the subnet.
Examples
The following example renames subnet s1 to s3.
Description
Display subnet information.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-ipspace <IPspace>] - IPspace Name
Selects the subnets that match the given IPspace name.
[-subnet-name <subnet name>] - Subnet Name
Selects the subnets that match the given subnet name.
[-broadcast-domain <Broadcast Domain>] - Broadcast Domain
Selects the subnets that match the given broadcast domain name.
[-subnet <IP Address/Mask>] - Layer 3 Subnet
Selects the subnets that match the given address and mask.
[-gateway <IP Address>] - Gateway
Selects the subnets that match the given gateway address.
[-ip-ranges {<ipaddr>|<ipaddr>-<ipaddr>}, ...] - IP Addresses or IP Address Ranges
Selects the subnets that match the given IP range.
[-total-count <integer>] - Total Address Count
Selects the subnets that match the given total address count.
Examples
The following example displays general information about the subnets.
IPspace: Default
Subnet Broadcast Avail/
Name Subnet Domain Gateway Total Ranges
--------- ---------------- --------- --------------- --------- ---------------
s4 192.168.4.0/24 bd4 192.168.4.1 5/5 192.168.5.6-192.168.5.10
s6 192.168.6.0/24 bd4 192.168.6.1 5/5 192.168.6.6-192.168.6.10
IPspace: ips1
Subnet Broadcast Avail/
Name Subnet Domain Gateway Total Ranges
--------- ---------------- --------- --------------- --------- ---------------
s10 192.168.6.0/24 bd10 192.168.6.1 0/0 -
3 entries were displayed.
Description
The network tcpdump show command shows currently running packet traces (via tcpdump) on a matching node.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
Use this parameter optionally to show the details of running packet traces on a matching node.
[-port {<netport>|<ifgrp>}] - Port
Use this parameter optionally to show the details of a running packet trace on a matching network interface.
Examples
The following example shows the details of running packet traces on nodes "node1" and "node2":
Description
The network tcpdump start command starts packet tracing (via tcpdump) with the given parameters.
Parameters
-node {<nodename>|local} - Node Name
Use this parameter to specify the node on which the packet trace should run.
-port {<netport>|<ifgrp>} - Port
Use this parameter to specify the network interface for packet tracing.
[-address <IP Address>] - IP Address
Use this parameter to optionally specify the address for packet tracing.
[-protocol-port <integer>] - Protocol Port Number
Use this parameter to optionally specify the protocol port number for packet tracing.
[-buffer-size <integer>] - Buffer Size in KB
Use this parameter to optionally specify the buffer size for packet tracing. The default buffer size is 4 KB.
[-file-size <integer>] - Trace File Size in MB
Use this parameter to optionally specify the trace file size for packet tracing. The default trace file size is 1
GB.
[-rolling-traces <integer>] - Number of Rolling Trace Files
Use this parameter to optionally specify the number of rolling trace files for packet tracing. The default
number of rolling trace files is 2.
Examples
The following example starts packet tracing on node "node1" with address "10.98.16.164", network interface "e0c", buffer
size "10 KB", and protocol port number "10000":
Parameters
-node {<nodename>|local} - Node Name
Use this parameter to specify the node on which the packet tracing must be stopped.
-port {<netport>|<ifgrp>} - Port
Use this parameter to specify the network interface on which the packet tracing must be stopped.
Examples
The following example stops a packet trace on network interface "e0a" from node "node1":
Description
The network tcpdump trace delete command deletes the tcpdump trace file from a matching node.
Parameters
-node {<nodename>|local} - Node Name
Use this parameter to delete the tcpdump trace file from a matching node.
-trace-file <text> - Trace File
Use this parameter to specify the tcpdump trace file to be deleted.
Examples
The following example deletes a list of tcpdump trace files from node "node1" using a wildcard pattern:
Description
The network tcpdump trace show command shows the list of tcpdump trace files. The trace files are located in
/mroot/etc/log/packet_traces/.
Examples
The following example shows the list of trace files on nodes "node1" and "node2":
Description
The network test-link run-test command runs a performance test between two nodes. The command requires a source
node, Vserver, and destination address.
Before executing the network test-link run-test command, the network test-link start-server command
must be run to start a server on the node hosting the destination LIF. After all tests to that node are complete, the
network test-link stop-server command must be run to stop the server.
The test results are stored non-persistently and can be viewed using the network test-link show command. Results include
input parameters, the bandwidth achieved, and the date and time of the test.
Parameters
-node {<nodename>|local} - Node Name
Use this parameter to specify the node which initiates the test.
-vserver <vserver> - Vserver
Use this parameter to specify the Vserver used to access the destination LIF. The DC (Data Channel) Vserver
option is available only in an ONTAP Select or ONTAP Cloud cluster. It is a special Vserver that hosts LIFs
used to mirror data aggregates to the partner node.
Examples
The following example runs a test between the cluster LIFs, including the start and stop of the server side of the test:
cluster1::*> network test-link run-test -node node2 -vserver Cluster -destination 172.31.112.173
Node: node2
Vserver: Cluster
Destination: 172.31.112.173
Time of Test: 4/22/2016 15:33:18
MB/s: 41.2678
Related references
network test-link start-server on page 421
network test-link stop-server on page 422
network test-link show on page 420
network test-link on page 419
network test-path on page 302
storage iscsi-initiator on page 1032
Description
The network test-link show command displays the results of prior network test-link run-test commands.
The test results are stored non-persistently and can be viewed using the network test-link show command. Results include
input parameters, the bandwidth achieved, and the date and time of the test.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
-node {<nodename>|local} - Node
Selects the nodes that match this parameter value. Use this parameter to display the test results specific to a
node. By default, the test results across all nodes are shown.
Examples
The following example runs a test between the cluster LIFs twice and then demonstrates the show command results:
cluster1::*> network test-link run-test -node node2 -vserver Cluster -destination 172.31.112.173
Node: node2
Vserver: Cluster
Destination: 172.31.112.173
Time of Test: 4/25/2016 10:37:52
MB/s: 29.9946
cluster1::*> network test-link run-test -node node2 -vserver Cluster -destination 172.31.112.173
Node: node2
Vserver: Cluster
Destination: 172.31.112.173
Time of Test: 4/25/2016 10:38:32
MB/s: 39.8192
Related references
network test-link run-test on page 419
network test-link start-server on page 421
network test-link stop-server on page 422
Description
The network test-link start-server command starts the server side of the network test-link test on the designated
node.
Only one server at a time can be running for the network test-link command on a given node. If the network test-link
start-server command is issued and a server is already running on the node, then the command is ignored, and the existing
server continues to run.
Parameters
-node {<nodename>|local} - Node Name
Use this parameter to specify the node where the server is to be started.
Examples
The following example starts a server:
Related references
network test-link on page 419
network test-link run-test on page 419
network test-link stop-server on page 422
network test-link show on page 420
Description
The network test-link stop-server command stops the network test-link server running on the designated node.
Parameters
-node {<nodename>|local} - Node Name
Use this parameter to specify the node where the server is to be stopped.
Examples
The following example starts a server and stops it:
Related references
network test-link on page 419
network test-link run-test on page 419
network test-link start-server on page 421
network test-link show on page 420
Description
This command sets options that can be used to fine-tune ICMP protocol behavior.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the ICMP tuning options are modified.
[-is-drop-redirect-enabled {true|false}] - Drop redirect ICMP
Set this parameter to true to drop redirect ICMP messages.
[-tx-icmp-limit <integer>] - Maximum number of ICMP packets sent per second
Sets the maximum number of ICMP messages, including TCP RSTs, that can be sent per second.
[-redirect-timeout <integer>] - Maximum seconds for route redirect timeout
Use this parameter to specify the number of seconds after which a redirected route is deleted. A value of zero
means infinity. The default value is 300 seconds.
Examples
Description
This command displays the current state of the ICMP tuning options for the given node.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
Displays all ICMP tuning options.
Examples
Description
This command sets options that can be used to fine-tune ICMPv6 protocol behavior.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the ICMPv6 tuning options are modified.
[-is-v6-redirect-accepted {true|false}] - Accept redirects via ICMPv6
Set this parameter to indicate whether redirect ICMPv6 messages are accepted.
[-redirect-v6-timeout <integer>] - Maximum seconds for route redirect timeout
Use this parameter to specify the number of seconds after which a redirected route is deleted. A value of zero
means infinity. The default value is 300 seconds.
Examples
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
Displays all ICMPv6 tuning options.
[-node {<nodename>|local}] - Node
Specifies the node for which the ICMPv6 tuning options are displayed.
[-is-v6-redirect-accepted {true|false}] - Accept redirects via ICMPv6
Displays all entries that match the "is-v6-redirect-accepted" value.
[-redirect-v6-timeout <integer>] - Maximum seconds for route redirect timeout
Displays all the entries that match the "redirect-v6-timeout" value.
Examples
Description
This command sets TCP tuning options on the node.
Parameters
-node {<nodename>|local} - Node
Indicates on which node the TCP tuning options will be modified.
[-is-path-mtu-discovery-enabled {true|false}] - Path MTU discovery enabled
Enables the path MTU discovery feature.
[-is-rfc3465-enabled {true|false}] - RFC3465 enabled
Enables the RFC3465 feature.
[-max-cwnd-increment <integer>] - Maximum congestion window segments incrementation
Sets the maximum congestion window increment, in segments, during slow start.
Examples
Description
This command displays the current state of the TCP tuning options for the given node.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
Displays all TCP tuning options.
[-node {<nodename>|local}] - Node
Specifies the node for which the TCP tuning options will be displayed.
[-is-path-mtu-discovery-enabled {true|false}] - Path MTU discovery enabled
Displays all entries that match the "is-path-mtu-discovery-enabled" value.
[-is-rfc3465-enabled {true|false}] - RFC3465 enabled
Displays all entries that match the "is-rfc3465-enabled" value.
[-max-cwnd-increment <integer>] - Maximum congestion window segments incrementation
Displays all entries that match the "max-cwnd-increment" value.
[-is-rfc3390-enabled {true|false}] - RFC3390 enabled
Displays all entries that match the "is-rfc3390-enabled" value.
[-is-sack-enabled {true|false}] - SACK support enabled
Displays all entries that match the "is-sack-enabled" value.
Examples
protection-type show
Display the supported protection types and available RPOs
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
Description
This command displays the protection types available for application provisioning.
Parameters
{ [-fields <fieldname>, ...]
Specifies fields that you want included in the output. You can use -fields ? to display the available fields.
| [-instance ]}
Specifies the display of all available fields for each selected protection type.
[-vserver <vserver name>] - Vserver
Selects the protection types of Vservers that match the parameter value.
[-protection-type {local|remote}] - Protection Type
Selects the protection types that match the parameter value.
[-rpo-list <text>, ...] - List of available RPOs
Selects the protection types whose list of available RPOs matches the parameter value.
[-rpo-list-description <text>, ...] - List of descriptions of available RPOs
Selects the protection types whose list of descriptions of available RPOs matches the parameter value.
[-description <text>] - Description of Protection Type
Selects the protection types with a description that matches the parameter value.
Examples
The following example displays the protection types and the associated available RPOs for all Vservers in the cluster.
qos commands
QoS settings
Description
The qos adaptive-policy-group create command creates a new adaptive policy group. After the adaptive policy group
is created, you can assign one or more storage objects to the policy. When a storage object is assigned to an adaptive policy
group, the maximum throughput QoS setting automatically adjusts based on the storage object used space or the storage object
allocated space. QoS minimum throughput setting is calculated from the expected-iops parameter and the storage object
allocated size. It is set only for the storage objects that reside on AFF platforms.
After you create an adaptive policy group, use the storage object create command or storage object modify command to apply
the adaptive policy group to a storage object.
Parameters
-policy-group <text> - Name
Specifies the name of the adaptive policy group. Adaptive policy group names must be unique and are
restricted to 127 alphanumeric characters including underscores "_" and hyphens "-". Adaptive policy group
names must start with an alphanumeric character. Use the qos adaptive-policy-group rename
command to change the adaptive policy group name.
-vserver <vserver name> - Vserver
Specifies the data Vserver to which this adaptive policy group belongs. You can apply this adaptive policy
group only to the storage objects contained in the specified Vserver. If the system has only one Vserver, then
the command uses that Vserver by default.
-expected-iops {<integer>[IOPS[/{GB|TB}]] (default: TB)} - Expected IOPS
Specifies the minimum expected IOPS per TB or GB allocated based on the storage object allocated size.
-peak-iops {<integer>[IOPS[/{GB|TB}]] (default: TB)} - Peak IOPS
Specifies the maximum possible IOPS per TB or GB allocated based on the storage object allocated size or the
storage object used size.
[-absolute-min-iops <qos_tput>] - Absolute Minimum IOPS
Specifies the absolute minimum IOPS which is used as an override when the expected IOPS is less than this
value. The default value is computed as follows:
if expected-iops >= 6144/TB, then absolute-min-iops = 1000IOPS; if expected-iops >= 2048/TB
and expected-iops < 6144/TB, then absolute-min-iops = 500IOPS; if expected-iops >= 1/MB and expected-
iops < 2048/TB, then absolute-min-iops = 75IOPS.
[-expected-iops-allocation {used-space|allocated-space}] - Expected IOPS Allocation
Specifies the expected IOPS allocation policy. The allocation policy is either allocated-space or used-
space. When the expected-iops-allocation policy is set to allocated-space, the expected IOPS is
calculated based on the size of the storage object. When the expected-iops-allocation policy is set to used-
space, the expected IOPS is calculated based on the amount of data stored in the storage object taking into
account storage efficiencies. The default value is allocated-space.
Examples
The following example creates the "p1" adaptive policy group, which belongs to Vserver "vs1", with expected-iops
of 100IOPS/TB, peak-iops of 1000IOPS/TB, and the default value for absolute-min-iops:
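The default-value tiers quoted under -absolute-min-iops can be restated as a small function (expected IOPS expressed per TB of allocated space; a sketch of the quoted rule, not NetApp code):

```python
def default_absolute_min_iops(expected_iops_per_tb):
    """Default absolute-min-iops for a given expected-iops setting,
    following the tiers quoted in the parameter description."""
    if expected_iops_per_tb >= 6144:
        return 1000
    if expected_iops_per_tb >= 2048:
        return 500
    return 75  # the lowest tier

assert default_absolute_min_iops(100) == 75   # as in the example above
assert default_absolute_min_iops(2048) == 500
assert default_absolute_min_iops(6144) == 1000
```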
Related references
qos adaptive-policy-group rename on page 431
Description
The qos adaptive-policy-group delete command deletes an adaptive policy group from a cluster. You cannot delete a
policy group if a QoS workload associated with a storage object is assigned to it unless you use the -force parameter. Using
the -force parameter will delete all the QoS workloads for storage objects associated with the specified adaptive policy groups.
Only user created adaptive policy groups can be deleted. Default adaptive policy groups are read only and cannot be deleted.
Parameters
-policy-group <text> - Name
Specifies the name of the adaptive policy group that you want to delete.
[-force [true]] - Force Delete Workloads for the QoS adaptive policy group (privilege: advanced)
Specifies whether to delete an adaptive policy group along with any underlying workloads.
Examples
The following example deletes "p1" adaptive policy group:
Deletes the "p1" adaptive policy group along with any underlying QoS workloads.
Description
The qos adaptive-policy-group modify command modifies an adaptive policy group.
Only user-created adaptive policy groups can be modified. Default adaptive policy groups are read-only and cannot be modified.
Parameters
-policy-group <text> - Name
Specifies the name of the adaptive policy group. Adaptive policy group names must be unique and are
restricted to 127 alphanumeric characters including underscores "_" and hyphens "-". Adaptive policy group
names must start with an alphanumeric character. Use the qos adaptive-policy-group rename
command to change the adaptive policy group name.
[-expected-iops {<integer>[IOPS[/{GB|TB}]] (default: TB)}] - Expected IOPS
Specifies the minimum expected IOPS per TB or GB allocated based on the storage object allocated size. QoS
minimum throughput setting is calculated from the expected-iops parameter. It is set only for the storage
objects that reside on AFF platforms.
[-peak-iops {<integer>[IOPS[/{GB|TB}]] (default: TB)}] - Peak IOPS
Specifies the maximum possible IOPS per TB or GB allocated based on the storage object allocated size or the
storage object used size.
[-absolute-min-iops <qos_tput>] - Absolute Minimum IOPS
Specifies the absolute minimum IOPS which is used as an override when the expected IOPS is less than this
value. The default value is computed as follows:
if expected-iops >= 6144/TB, then absolute-min-iops = 1000IOPS; if expected-iops >= 2048/TB
and expected-iops < 6144/TB, then absolute-min-iops = 500IOPS; if expected-iops >= 1/MB and expected-
iops < 2048/TB, then absolute-min-iops = 75IOPS.
[-expected-iops-allocation {used-space|allocated-space}] - Expected IOPS Allocation
Specifies the expected IOPS allocation policy. The allocation policy is either allocated-space or used-
space. When the expected-iops-allocation policy is set to allocated-space, the expected IOPS is
calculated based on the size of the storage object. When the expected-iops-allocation policy is set to used-
space, the expected IOPS is calculated based on the amount of data stored in the storage object taking into
account storage efficiencies. The default value is allocated-space.
[-peak-iops-allocation {used-space|allocated-space}] - Peak IOPS Allocation
Specifies the peak IOPS allocation policy. The allocation policy is either allocated-space or used-space.
When the peak-iops-allocation policy is set to allocated-space, the peak IOPS is calculated based on the
size of the storage object. When the peak-iops-allocation policy is set to used-space, the peak IOPS is
calculated based on the amount of data stored in the storage object taking into account storage efficiencies.
The default value is used-space.
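The practical difference between the two allocation policies is which size the per-TB (or per-GB) rate is multiplied by. The sketch below illustrates this; the function and parameter names are hypothetical and not part of the CLI.

```python
def effective_peak_iops(peak_iops_per_tb, allocated_tb, used_tb,
                        policy="used-space"):
    """Scale the per-TB peak IOPS by the size the policy selects:
    the object's allocated size, or the logical used size."""
    size_tb = used_tb if policy == "used-space" else allocated_tb
    return peak_iops_per_tb * size_tb
```

For example, a 4 TB volume holding 1 TB of data with -peak-iops 1000IOPS/TB would be limited to 1000 IOPS under used-space but 4000 IOPS under allocated-space.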
[-block-size {ANY|4K|8K|16K|32K|64K|128K}] - Block Size
Specifies the I/O block size for the QoS adaptive policy group. The default value is "ANY". When a block-size of "ANY" is specified, control is by IOPS alone. When a block-size other than "ANY" is specified, control is by both IOPS and bytes per second (bps), where bps is the product of IOPS and block-size.
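The derived bps ceiling follows directly from the product described above. The helper below is a hypothetical sketch (binary sizes for 4K, 8K, etc. are assumed), not ONTAP code:

```python
def bps_limit(iops_limit, block_size):
    """Return the bytes-per-second ceiling implied by a block size,
    or None when block-size is ANY (control is by IOPS alone)."""
    sizes = {"4K": 4096, "8K": 8192, "16K": 16384,
             "32K": 32768, "64K": 65536, "128K": 131072}
    if block_size == "ANY":
        return None
    return iops_limit * sizes[block_size]
```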
Related references
qos adaptive-policy-group rename on page 431
qos adaptive-policy-group rename
Description
The qos adaptive-policy-group rename command changes the name of an existing adaptive policy group.
Parameters
-policy-group <text> - Name
Specifies the existing name of the adaptive policy group that you want to rename.
-new-name <text> - New adaptive policy group name
Specifies the new name of the adaptive policy group. Adaptive policy group names must be unique and are
restricted to 127 alphanumeric characters including underscores "_" and hyphens "-". Adaptive policy group
names must start with an alphanumeric character.
Examples
qos adaptive-policy-group show
Description
The qos adaptive-policy-group show command shows the current settings of the adaptive policy groups on a cluster.
You can view the list of adaptive policy groups and also the detailed information about a specific adaptive policy group.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-policy-group <text>] - Name
Selects the adaptive policy groups that match this parameter value.
Examples
The example above displays all adaptive policy groups on the cluster.
qos policy-group create
Description
The qos policy-group create command creates a new policy group. You can use a QoS policy group to control a set of
storage objects known as "workloads" - LUNs, volumes, files, or Vservers. Policy groups define measurable service level
objectives (SLOs) that apply to the storage objects with which the policy group is associated.
After you create a policy group, you use the storage object create command or the storage object modify command to apply the
policy group to a storage object.
Parameters
-policy-group <text> - Policy Group Name
Specifies the name of the policy group. Policy group names must be unique and are restricted to 127
alphanumeric characters including underscores "_" and hyphens "-". Policy group names must start with an
alphanumeric character. You use the qos policy-group rename command to change the policy group
name.
-vserver <vserver name> - Vserver
Specifies the data Vserver to which this policy group belongs. You can apply this policy group to only the
storage objects contained in the specified Vserver. For example, if you want to apply this policy group to a
volume, that volume must belong to the specified Vserver. Using this parameter does not apply the policy
group's SLOs to the Vserver. You need to use the vserver modify command if you want to apply this policy
group to the Vserver. If the system has only one Vserver, then the command uses that Vserver by default.
[-max-throughput <qos_tput>] - Maximum Throughput
Specifies the maximum throughput for the policy group. A maximum throughput limit specifies the
throughput that the policy group must not exceed. It is specified in terms of IOPS or MB/s, or a combination
of comma separated IOPS and MB/s. The range is one to infinity. A value of zero is accepted but is internally
treated as infinity.
The values entered here are case-insensitive, and the units are base ten. There should be no space between the
number and the units. The default value for max-throughput is infinity, which can be specified by the special
value "INF". Note that there is no default unit - all numbers except zero require explicit specification of the
units.
Two reserved keywords are available: "none" removes a previously set value, and "INF" specifies the maximum available value.
Examples of valid throughput specifications are: "100B/s", "10KB/s", "1gb/s", "500MB/s", "1tb/s", "100iops",
"100iops,400KB/s", and "800KB/s,100iops"
Examples
Creates the "p1" policy group which belongs to Vserver "vs1" with default policy values.
Creates the "p2" policy group which belongs to Vserver "vs1" with the maximum throughput set to 500 MB/s.
cluster1::> qos policy-group create p3 -vserver vs1 -max-throughput 500MB/s -is-shared false
Related references
qos policy-group rename on page 435
qos policy-group delete
Description
The qos policy-group delete command deletes a policy group from a cluster. You cannot delete a policy group to which a QoS workload associated with a storage object is assigned unless you use "-force". Using "-force" deletes all the QoS workloads for storage objects associated with the specified policy groups.
You can only delete user-defined policy groups. You cannot delete preset policy groups.
Parameters
-policy-group <text> - Policy Group Name
Specifies the name of the policy group that you want to delete.
[-force [true]] - Force Delete Workloads for the QoS Policy Group (privilege: advanced)
Specifies whether to delete a policy group along with any underlying workloads.
Examples
qos policy-group modify
Description
The qos policy-group modify command modifies a user-created policy group.
Parameters
-policy-group <text> - Policy Group Name
Specifies the name of the policy group that you want to modify.
[-max-throughput <qos_tput>] - Maximum Throughput
Specifies the maximum throughput for the policy group. A maximum throughput limit specifies the
throughput that the policy group must not exceed. It is specified in terms of IOPS or MB/s, or a combination
of comma separated IOPS and MB/s. The range is one to infinity. A value of zero is accepted but is internally
treated as infinity.
The values entered here are case-insensitive, and the units are base ten. There should be no space between the
number and the units. The default value for max-throughput is infinity, which can be specified by the special
value "INF". Note there is no default unit - all numbers except zero require explicit specification of the units.
Two reserved keywords are available: "none" removes a previously set value, and "INF" specifies the maximum available value.
Examples of valid throughput specifications are: "100B/s", "10KB/s", "1gb/s", "500MB/s", "1tb/s", and
"100iops".
[-min-throughput <qos_tput>] - Minimum Throughput
Specifies the minimum throughput for the policy group. A minimum throughput specifies the desired
performance level for a policy group. It is specified in terms of IOPS.
The values entered here are case-insensitive, and the units are base ten. There should be no space between the
number and the units. The default value for min-throughput is "0". The default unit is IOPS.
One reserved keyword, "none", is available to remove a previously set value.
Examples of valid throughput specifications are: "100iops" and "100".
Examples
qos policy-group rename
Description
The qos policy-group rename command changes the name of an existing policy group.
Examples
qos policy-group show
Description
The qos policy-group show command shows the current settings of the policy groups on a cluster. You can display a list of
the policy groups and you can view detailed information about a specific policy group.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-policy-group <text>] - Policy Group Name
Selects the policy groups that match this parameter value
Policy groups define measurable service level objectives (SLOs) that apply to the storage objects with which
the policy group is associated.
[-vserver <vserver name>] - Vserver
Selects the policy groups that match this parameter value
[-uuid <UUID>] - Uuid
Selects the policy groups that match this parameter value
[-class <QoS Configuration Class>] - Policy Group Class
Selects the policy groups that match this parameter value
[-pgid <integer>] - Policy Group ID
Selects the policy groups that match this parameter value
This uniquely identifies the policy group
[-max-throughput <qos_tput>] - Maximum Throughput
Selects the policy groups that match this parameter value
A maximum throughput limit specifies the throughput (in IOPS or MB/s) that the policy group must not
exceed.
Examples
qos settings cache modify
Description
The qos settings cache modify command modifies the existing default caching-policy. The list of caching policies can be obtained from the qos settings cache show -fields cache-setting command.
Examples
qos settings cache show
Description
The qos settings cache show command shows the current caching-policies, the class to which each belongs, the number of workloads associated with each policy, and whether or not each is set as the default. The following external-cache policies are
available:
• auto - Read caches all metadata and randomly read user data blocks, and write caches all randomly overwritten user data
blocks.
• all - Read caches all data blocks read and written. It does not do any write caching.
• all-random_write - Read caches all data blocks read and written. It also write caches randomly overwritten user data blocks.
• all_read - Read caches all metadata, randomly read, and sequentially read user data blocks.
• all_read-random_write - Read caches all metadata, randomly read, and sequentially read user data blocks. It also write
caches randomly overwritten user data blocks.
• all_read_random_write - Read caches all metadata, randomly read, sequentially read and randomly written user data.
• all_read_random_write-random_write - Read caches all metadata, randomly read, sequentially read, and randomly written
user data blocks. It also write caches randomly overwritten user data blocks.
• meta-random_write - Read caches all metadata and write caches randomly overwritten user data blocks.
• noread-random_write - Write caches all randomly overwritten user data blocks. It does not do any read caching.
• random_read - Read caches all metadata and randomly read user data blocks.
• random_read_write - Read caches all metadata, randomly read and randomly written user data blocks.
• random_read_write-random_write - Read caches all metadata, randomly read, and randomly written user data blocks. It also
write caches randomly overwritten user data blocks.
Note: In a caching-policy name, a hyphen (-) separates the read caching policy from the write caching policy.
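The naming rule in the note can be expressed as a one-line split. This is a hypothetical helper for illustration; it relies on underscores being internal to a policy name, so only the hyphen acts as the read/write separator:

```python
def split_cache_policy(name):
    """Split '<read-policy>[-<write-policy>]' at the hyphen separator."""
    read, _, write = name.partition("-")
    return read, (write or None)
```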
Examples
qos statistics characteristics show
Description
The qos statistics characteristics show command displays data that characterizes the behavior of QoS policy
groups.
The command displays the following data:
• Throughput achieved in kilobytes per second (KB/s) or megabytes per second (MB/s) as appropriate (Throughput)
• Concurrency, which indicates the number of concurrent users generating the I/O traffic (Concurrency)
The results displayed per iteration are sorted by IOPS. Each iteration starts with a row that displays the total IOPS used across
all QoS policy groups. Other columns in this row are either totals or averages.
Parameters
[-node {<nodename>|local}] - Node
Selects the policy groups that match this parameter value. If you do not specify this parameter, the command
displays data for the entire cluster.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this parameter,
the command iterates until interrupted by Ctrl-C.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. Valid values are from 1 to 20. The default value
is 10.
| [-policy-group <text>]} - QoS Policy Group Name
Selects the QoS policy group whose name matches the specified value. If you do not specify this parameter,
the command displays data for all QoS policy groups.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
Examples
The example above displays the characteristics of the 4 QoS policy groups with the highest IOPS values and refreshes the
display 100 times before terminating.
qos statistics latency show
Description
The qos statistics latency show command displays the average latencies for QoS policy groups across the various Data
ONTAP subsystems.
The command displays the following data:
• Latency observed per I/O operation across the internally connected nodes in a Cluster (Cluster)
• Latency observed per I/O operation in the Data management subsystem (Data)
The results displayed per iteration are sorted by the Latency field. Each iteration starts with a row that displays the average
latency, in microseconds (us) or milliseconds (ms), observed across all QoS policy groups.
Examples
The example above displays latencies for the 3 QoS policy groups with the highest latencies and refreshes the display 100
times before terminating.
qos statistics performance show
Description
The qos statistics performance show command shows the current system performance levels that QoS policy groups
are achieving.
The command displays the following data:
• Throughput in kilobytes per second (KB/s) or megabytes per second (MB/s) as appropriate (Throughput)
• Latency observed per request in microseconds (us) or milliseconds (ms) as appropriate (Latency)
The results displayed per iteration are sorted by IOPS. Each iteration starts with a row that displays the total IOPS used across
all QoS policy groups. Other columns in this row are either totals or averages.
Parameters
[-node {<nodename>|local}] - Node
Selects the policy groups that match this parameter value. If you do not specify this parameter, the command
displays data for the entire cluster.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this parameter,
the command iterates until interrupted by Ctrl-C.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. Valid values are from 1 to 20. The default value
is 10.
| [-policy-group <text>]} - QoS Policy Group Name
Selects the QoS policy group whose name matches the specified value. If you do not specify this parameter,
the command displays data for all QoS policy groups.
Examples
The example above displays the system performance for the 4 QoS policy groups with the highest IOPS and it refreshes
the display 100 times before terminating.
qos statistics resource cpu show
Description
The qos statistics resource cpu show command displays the CPU utilization for QoS policy groups per node.
The command displays the following data:
The results displayed per iteration are sorted by total CPU utilization. Each iteration starts with a row that displays the total CPU
utilization across all QoS policy groups.
Examples
cluster1::> qos statistics resource cpu show -node nodeA -iterations 100 -rows 3
Policy Group CPU
-------------------- -----
-total- (100%) 9%
fast 1%
slow 3%
medium 5%
-total- (100%) 8%
slow 1%
fast 3%
medium 3%
The following example shows the output when the session privilege level is diagnostic.
cluster1::*> qos statistics resource cpu show -node nodeB -iterations 100 -rows 3
Policy Group CPU Wafl_exempt Kahuna Network Raid Exempt Protocol
-------------------- ----- ----------- ------ ------- ----- ------ --------
-total- (200%) 21% 0% 0% 0% 0% 18% 3%
fast 19% 0% 0% 0% 0% 16% 3%
medium 1% 0% 0% 0% 0% 1% 0%
slow 1% 0% 0% 0% 0% 1% 0%
-total- (200%) 22% 0% 0% 0% 0% 19% 3%
fast 18% 0% 0% 0% 0% 15% 3%
medium 2% 0% 0% 0% 0% 2% 0%
slow 2% 0% 0% 0% 0% 2% 0%
The example above displays the total CPU utilization for the 3 QoS policy groups with the highest CPU utilization and it
refreshes the display 100 times before terminating.
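In the diagnostic-privilege output above, the CPU column is consistent with the sum of the per-domain columns (first total row: 18% Exempt + 3% Protocol = 21%). The sketch below states that relationship; it is an assumption drawn from the sample rows, not a documented formula:

```python
def total_cpu_pct(wafl_exempt, kahuna, network, raid, exempt, protocol):
    """Sum the per-domain CPU columns shown at diagnostic privilege."""
    return wafl_exempt + kahuna + network + raid + exempt + protocol
```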
cluster1::> qos statistics resource cpu show -node local -iterations 100 -policy-group pg1
Policy Group CPU
-------------------- -----
-total- (100%) 7%
pg1 1%
-total- (100%) 7%
pg1 1%
-total- (100%) 7%
qos statistics resource disk show
Description
The qos statistics resource disk show command displays the disk utilization for QoS policy groups per node. The
disk utilization shows the percentage of time spent on the disk during read and write operations. The command displays disk
utilization for system-defined policy groups; however, their disk utilization is not included in the total utilization. The command
only supports hard disks.
The command displays the following data:
The results displayed are sorted by total disk utilization. Each iteration starts with a row that displays the total disk utilization
across all QoS policy groups.
Parameters
-node {<nodename>|local} - Node
Selects the policy groups that match this parameter value.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this parameter,
the command iterates until interrupted by Ctrl-C.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. Valid values are from 1 to 20. The default value
is 10.
| [-policy-group <text>]} - QoS Policy Group Name
Selects the QoS policy group whose name matches the specified value. If you do not specify this parameter,
the command displays data for all QoS policy groups.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
Examples
cluster1::> qos statistics resource disk show -node nodeA -iterations 100 -rows 3
Policy Group Disk Number of HDD Disks
-------------------- ----- -------------------
-total- 40% 27
pg1 22% 5
The example above displays the total disk utilization for the 3 QoS policy groups with the highest disk utilization and it
refreshes the display 100 times before terminating.
cluster1::> qos statistics resource disk show -node local -iterations 100 -policy-group pg1
Policy Group Disk Number of HDD Disks
-------------------- ----- -------------------
-total- 3% 10
pg1 1% 24
-total- 3% 10
pg1 1% 24
-total- 3% 10
pg1 1% 24
-total- 3% 10
pg1 1% 24
qos statistics volume characteristics show
Description
The qos statistics volume characteristics show command displays data that characterizes the behavior of volumes.
The command displays the following data:
• Throughput achieved in kilobytes per second (KB/s) or megabytes per second (MB/s) as appropriate (Throughput)
• Concurrency, which indicates the number of concurrent users generating the I/O traffic (Concurrency)
The results displayed per iteration are sorted by IOPS. Each iteration starts with a row that displays the total IOPS used across
all volumes. Other columns in this row are either totals or averages.
Examples
The example above displays characteristics for the 3 volumes with the highest IOPS and it refreshes the display 100 times
before terminating.
cluster1::> qos statistics volume characteristics show -vserver vs0 -volume vs0_vol0 -
iterations 100
Workload ID IOPS Throughput Request Size Read Concurrency
--------------- ------ -------- ---------------- ------------ ---- -----------
-total- - 1567 783.33KB/s 512B 90% 2
vs0_vol0-wid1.. 15658 785 392.33KB/s 512B 89% 1
-total- - 1521 760.50KB/s 512B 90% 1
vs0_vol0-wid1.. 15658 982 491.17KB/s 512B 90% 0
-total- - 1482 741.00KB/s 512B 89% 0
vs0_vol0-wid1.. 15658 945 472.50KB/s 512B 90% 0
-total- - 1482 741.00KB/s 512B 89% 0
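In the output above, the Throughput, IOPS, and Request size columns are mutually consistent: dividing throughput by IOPS recovers the average request size (for the first total row, 783.33 KB/s over 1567 IOPS is about 512 bytes). A quick check, assuming binary KB:

```python
def avg_request_size_bytes(throughput_kb_s, iops):
    """Average request size implied by a throughput/IOPS pair."""
    return throughput_kb_s * 1024 / iops
```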
qos statistics volume latency show
Description
The qos statistics volume latency show command displays the average latencies for volumes on Data ONTAP
subsystems.
The command displays the following data:
• Latency observed per I/O operation across the internally connected nodes in a Cluster (Cluster)
• Latency observed per I/O operation in the Data management subsystem (Data)
The results displayed per iteration are sorted by the total latency field. Each iteration starts with a row that displays the average
latency, in microseconds (us) or milliseconds (ms) observed across all volumes.
Parameters
[-node {<nodename>|local}] - Node
Selects the volumes that match this parameter value. If you do not specify this parameter, the command
displays data for the entire cluster.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. The default setting is 10. The allowed range of
values is 1 to 20.
| -vserver <vserver name> - Vserver Name
Specifies the Vserver to which the volume belongs.
-volume <volume name>} - Volume Name
Selects the latency data that match this parameter value. Enter a complete volume name or press the <Tab>
key to complete the name. Wildcard query characters are not supported.
Examples
The example above displays latencies for the 3 volumes with the highest latencies and it refreshes the display 100 times
before terminating.
cluster1::> qos statistics volume latency show -vserver vs0 -volume vs0_vol0 -iterations 100
Workload ID Latency Network Cluster Data Disk QoS
NVRAM Cloud
--------------- ------ ---------- ---------- ---------- ---------- ---------- ----------
---------- ----------
-total- - 455.00us 158.00us 0ms 297.00us 0ms 0ms
0ms 0ms
vs0_vol0-wid1.. 15658 428.00us 155.00us 0ms 273.00us 0ms 0ms
0ms 0ms
-total- - 337.00us 130.00us 0ms 207.00us 0ms 0ms
0ms 0ms
vs0_vol0-wid1.. 15658 316.00us 128.00us 0ms 188.00us 0ms 0ms
0ms 0ms
-total- - 464.00us 132.00us 0ms 332.00us 0ms 0ms
0ms 0ms
vs0_vol0-wid1.. 15658 471.00us 130.00us 0ms 341.00us 0ms 0ms
0ms 0ms
-total- - 321.00us 138.00us 0ms 183.00us 0ms 0ms
0ms 0ms
vs0_vol0-wid1.. 15658 302.00us 137.00us 0ms 165.00us 0ms 0ms
0ms 0ms
qos statistics volume performance show
Description
The qos statistics volume performance show command shows the current system performance that each volume is
achieving.
The command displays the following data:
• Throughput in kilobytes per second (KB/s) or megabytes per second (MB/s) as appropriate (Throughput)
• Latency observed per request in microseconds (us) or milliseconds (ms) as appropriate (Latency)
The results displayed per iteration are sorted by IOPS. Each iteration starts with a row that displays the total IOPS used across
all volumes. Other columns in this row are either totals or averages.
Parameters
[-node {<nodename>|local}] - Node
Selects the volumes that match this parameter value. If you do not specify this parameter, the command
displays data for the entire cluster.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. The default setting is 10. The allowed range of
values is 1 to 20.
| -vserver <vserver name> - Vserver Name
Specifies the Vserver to which the volume belongs.
-volume <volume name>} - Volume Name
Selects the performance data that match this parameter value. Enter a complete volume name or press the
<Tab> key to complete the name. Wildcard query characters are not supported.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this parameter,
the command iterates until interrupted by Ctrl-C.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
Examples
The example above displays the system performance for the 3 volumes with the highest IOPS and it refreshes the display
100 times before terminating.
cluster1::> qos statistics volume performance show -vserver vs0 -volume vs0_vol0 -iterations
100
Workload ID IOPS Throughput Latency
--------------- ------ -------- ---------------- ----------
-total- - 1278 639.17KB/s 404.00us
vs0_vol0-wid1.. 15658 526 263.17KB/s 436.00us
-total- - 1315 657.33KB/s 86.00us
vs0_vol0-wid1.. 15658 528 264.17KB/s 88.00us
-total- - 1220 609.83KB/s 418.00us
vs0_vol0-wid1.. 15658 515 257.33KB/s 531.00us
-total- - 1202 600.83KB/s 815.00us
vs0_vol0-wid1.. 15658 519 259.67KB/s 924.00us
-total- - 1240 620.17KB/s 311.00us
vs0_vol0-wid1.. 15658 525 262.50KB/s 297.00us
qos statistics volume resource cpu show
Description
The qos statistics volume resource cpu show command displays the CPU utilization for volumes per node.
The command displays the following data:
The results displayed per iteration are sorted by total CPU utilization. Each iteration starts with a row that displays the total CPU
utilization across all volumes.
Parameters
-node {<nodename>|local} - Node
Selects the volumes that match this parameter value.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. The default setting is 10. The allowed range of
values is 1 to 20.
| -vserver <vserver name> - Vserver Name
Specifies the Vserver to which the volume belongs.
-volume <volume name>} - Volume Name
Selects the CPU utilization data that match this parameter value. Enter a complete volume name or press the
<Tab> key to complete the name. Wildcard query characters are not supported.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this parameter,
the command iterates until interrupted by Ctrl-C.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
[-show-flexgroup-as-constituents {true|false}] - Display Flexgroups as Constituents
If this parameter is specified with a value of true, the command displays data for FlexVols and FlexGroup constituents. Otherwise, it displays data for FlexVols and FlexGroups.
Examples
cluster1::> qos statistics volume resource cpu show -node nodeA -iterations 100 -rows 3
Workload ID CPU
--------------- ----- -----
-total- (100%) - 9%
vs0vol1-wid-102 102 5%
vs0vol2-wid-121 121 2%
vs2_vol0-wid-.. 212 2%
-total- (100%) - 8%
vs0vol1-wid-102 102 5%
vs0vol2-wid-121 121 2%
vs2_vol0-wid-.. 212 1%
The example above displays total CPU utilization for the 3 volumes with the highest CPU utilization and it refreshes the
display 100 times before terminating.
cluster1::> qos statistics volume resource cpu show -node local -vserver vs0 -volume vs0_vol1 -
iterations 100
Workload ID CPU
--------------- ----- -----
-total- (100%) - 2%
vs0_vol1-wid7.. 7916 2%
-total- (100%) - 2%
vs0_vol1-wid7.. 7916 2%
-total- (100%) - 1%
vs0_vol1-wid7.. 7916 1%
The following example shows the output when the session privilege level is "diagnostic".
cluster1::*> qos statistics volume resource cpu show -node nodeB -iterations 100 -rows 3
Workload ID CPU Wafl_exempt Kahuna Network Raid Exempt Protocol
--------------- ----- ----- ----------- ------ ------- ----- ------ --------
-total- (200%) - 23% 0% 0% 0% 0% 18% 5%
vs0vol1-wid-102 102 18% 0% 0% 0% 0% 15% 3%
vs0vol2-wid-121 121 3% 0% 0% 0% 0% 2% 1%
vs2_vol0-wid-.. 212 2% 0% 0% 0% 0% 1% 1%
-total- (200%) - 24% 0% 0% 0% 0% 19% 5%
vs0vol1-wid-102 102 19% 0% 0% 0% 0% 16% 3%
vs0vol2-wid-121 121 3% 0% 0% 0% 0% 2% 1%
vs2_vol0-wid-.. 212 2% 0% 0% 0% 0% 1% 1%
qos statistics volume resource disk show
Description
The qos statistics volume resource disk show command displays the disk utilization for volumes per node. The
disk utilization shows the percentage of time spent on the disk during read and write operations. The command only supports
hard disks.
The command displays the following data:
The results displayed are sorted by total disk utilization. Each iteration starts with a row that displays the total disk utilization
across all volumes.
Parameters
-node {<nodename>|local} - Node
Selects the volumes that match this parameter value.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. The default setting is 10. The allowed range of
values is 1 to 20.
| -vserver <vserver name> - Vserver Name
Specifies the Vserver to which the volume belongs.
Examples
cluster1::> qos statistics volume resource disk show -node nodeB -iterations 100 -rows
3
Workload ID Disk Number of HDD Disks
--------------- ------ ----- -------------------
-total- (100%) - 30% 4
vs0vol1-wid101 101 12% 2
vs0vol2-wid121 121 10% 1
vol0-wid1002 1002 8% 1
-total- (100%) - 30% 4
vs0vol1-wid101 101 12% 2
vs0vol2-wid121 121 10% 1
vol0-wid1002 1002 8% 1
The example above displays total disk utilization for the 3 volumes with the highest disk utilization and it refreshes the
display 100 times before terminating.
cluster1::> qos statistics volume resource disk show -node local -vserver vs0 -volume
vs0_vol0 -iterations 100
Workload ID Disk Number of HDD Disks
--------------- ------ ------ -------------------
-total- - 5% 10
vs0_vol0-wid1.. 15658 1% 6
-total- - 5% 10
vs0_vol0-wid1.. 15658 1% 6
-total- - 6% 10
vs0_vol0-wid1.. 15658 2% 6
-total- - 6% 10
vs0_vol0-wid1.. 15658 2% 6
-total- - 6% 10
vs0_vol0-wid1.. 15658 2% 6
qos statistics workload characteristics show
Description
The qos statistics workload characteristics show command displays data that characterizes the behavior of QoS
workloads.
The command displays the following data:
• The QoS workload name (Workload)
• Concurrency, which indicates the number of concurrent users generating the I/O traffic (Concurrency)
The results displayed per iteration are sorted by IOPS. Each iteration starts with a row that displays the total IOPS used across
all QoS workloads. Other columns in this row are either totals or averages.
Parameters
[-node {<nodename>|local}] - Node
Selects the QoS workloads that match this parameter value. If you do not specify this parameter, the command
displays data for the entire cluster.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this parameter,
the command iterates until interrupted by Ctrl-C.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. Valid values are from 1 to 20. The default value
is 10.
[-policy-group <text>] - QoS Policy Group Name
Selects the QoS workloads that belong to the QoS policy group specified by this parameter value. If you do
not specify this parameter, the command displays data for all QoS workloads.
| [-workload <text>] - QoS Workload Name
Selects the QoS workload that matches this parameter value. If you do not specify this parameter, the command
displays data for all QoS workloads.
| [-workload-id <integer>]} - QoS Workload ID
Selects the QoS workload that matches the QoS workload ID specified by this parameter value.
[-show-flexgroup-as-constituents {true|false}] - Display Flexgroups as Constituents
If this parameter is specified with a value of true, the command displays data for FlexVols and FlexGroup constituents. Otherwise, it displays data for FlexVols and FlexGroups.
The example above displays characteristics for the 4 QoS workloads with the highest IOPS and it refreshes the display
100 times before terminating.
cluster1::> qos statistics workload characteristics show -iterations 100 -rows 2 -policy-
group pg1
Workload ID IOPS Throughput Request size Read Concurrency
--------------- ------ -------- ---------------- ------------ ---- -----------
-total- - 243 546.86KB/s 2307B 61% 1
file-test1_a-.. 6437 34 136.00KB/s 4096B 100% 0
file-test1_c-.. 5078 33 133.33KB/s 4096B 100% 0
-total- - 310 3.09MB/s 10428B 55% 1
file-test1_a-.. 6437 36 142.67KB/s 4096B 100% 0
file-test1_b-.. 9492 35 138.67KB/s 4096B 100% 0
-total- - 192 575.71KB/s 3075B 71% 1
file-test1-wi.. 7872 39 157.33KB/s 4096B 100% 0
file-test1_c-.. 5078 38 153.33KB/s 4096B 100% 0
The example above displays the characteristics for the 2 QoS workloads belonging to QoS policy group pg1 with the
highest IOPS and it refreshes the display 100 times before terminating.
cluster1::> qos statistics workload characteristics show -iterations 100 -workload-id 9492
Workload ID IOPS Throughput Request size Read Concurrency
--------------- ------ -------- ---------------- ------------ ---- -----------
-total- - 737 2.14MB/s 3045B 79% 1
file-test1_b-.. 9492 265 1058.67KB/s 4096B 100% 0
-total- - 717 4.26MB/s 6235B 80% 1
file-test1_b-.. 9492 272 1086.67KB/s 4096B 100% 1
-total- - 623 2.50MB/s 4202B 86% 0
file-test1_b-.. 9492 263 1050.67KB/s 4096B 100% 0
-total- - 595 2.11MB/s 3712B 89% 0
file-test1_b-.. 9492 266 1064.00KB/s 4096B 100% 0
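The Throughput and Request size columns above carry unit suffixes (KB/s, MB/s, B), so the values must be normalized before any post-processing. A minimal Python sketch, assuming the suffix set seen in these examples and 1024-based units (whether ONTAP uses 1024- or 1000-byte kilobytes here is an assumption):

```python
import re

_UNITS = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3}

def parse_size(value):
    """Convert '546.86KB/s' or '4096B' to bytes (per second for rates)."""
    m = re.fullmatch(r"([\d.]+)(B|KB|MB|GB)(/s)?", value)
    if not m:
        raise ValueError("unrecognized value: %r" % value)
    return float(m.group(1)) * _UNITS[m.group(2)]

print(parse_size("4096B"))       # 4096.0
print(parse_size("546.86KB/s"))  # ~559984.64 bytes per second
```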
• Latency observed per I/O operation across the internally connected nodes in a Cluster (Cluster)
• Latency observed per I/O operation in the Data management subsystem (Data)
The results displayed per iteration are sorted by the total latency field. Each iteration starts with a row that displays the average
latency, in microseconds (us) or milliseconds (ms) observed across all QoS workloads.
Parameters
[-node {<nodename>|local}] - Node
Selects the QoS workloads that match this parameter value. If you do not specify this parameter, the command
displays data for the entire cluster.
[-iterations <integer>] - Number of Iterations
Specifies the number of times that the command refreshes the display with updated data before terminating. If
you do not specify this parameter, the command continues to run until you interrupt it by pressing Ctrl-C.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. Valid values are from 1 to 20. The default value
is 10.
[-policy-group <text>] - QoS Policy Group Name
Selects the QoS workloads that belong to the QoS policy group specified by this parameter value. If you do
not specify this parameter, the command displays data for all QoS workloads.
| [-workload <text>] - QoS Workload Name
Selects the QoS workload that matches this parameter value. If you do not specify this parameter, the command
displays data for all QoS workloads.
| [-workload-id <integer>]} - QoS Workload ID
Selects the QoS workload that matches the QoS workload ID specified by this parameter value.
[-show-flexgroup-as-constituents {true|false}] - Display Flexgroups as Constituents
If this parameter is set to true, the command displays data for FlexVols and FlexGroup constituents. Otherwise, the
command displays data for FlexVols and FlexGroups.
The example above displays latencies for the 3 QoS workloads with the highest latencies and it refreshes the display 100
times before terminating.
cluster1::> qos statistics workload latency show -iterations 100 -rows 2 -policy-group pg1
Workload ID Latency Network Cluster Data Disk QoS
NVRAM Cloud
--------------- ------ ---------- ---------- ---------- ---------- ---------- ----------
---------- ----------
-total- - 4.80ms 287.00us 0ms 427.00us 4.08ms 0ms
0ms 0ms
file-test1-wi.. 7872 9.60ms 265.00us 0ms 479.00us 8.85ms 0ms
0ms 0ms
file-test1_a-.. 6437 8.22ms 262.00us 0ms 424.00us 7.53ms 0ms
0ms 0ms
-total- - 4.20ms 296.00us 0ms 421.00us 3.48ms 0ms
0ms 0ms
file-test1-wi.. 7872 8.70ms 211.00us 0ms 489.00us 8.00ms 0ms
0ms 0ms
file-test1_a-.. 6437 6.70ms 297.00us 0ms 464.00us 5.94ms 0ms
0ms 0ms
-total- - 5.90ms 303.00us 0ms 1.71ms 3.88ms 0ms
0ms 0ms
file-test1-wi.. 7872 11.36ms 263.00us 0ms 2.06ms 9.04ms 0ms
0ms 0ms
file-test1_a-.. 6437 9.48ms 250.00us 0ms 2.30ms 6.93ms 0ms
0ms 0ms
The example above displays latencies for the 2 QoS workloads belonging to QoS policy group pg1 with the highest IOPS
and it refreshes the display 100 times before terminating.
cluster1::> qos statistics workload latency show -iterations 100 -workload-id 9492
Workload ID Latency Network Cluster Data Disk QoS
NVRAM Cloud
--------------- ------ ---------- ---------- ---------- ---------- ---------- ----------
---------- ----------
-total- - 443.00us 273.00us 0ms 170.00us 0ms 0ms
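The per-subsystem latency columns (Network, Cluster, Data, Disk, QoS, NVRAM, Cloud) mix us and ms units. A Python sketch that normalizes them to microseconds and cross-checks them against the first -total- row of the pg1 example above, where Network, Data, and Disk account for nearly all of the 4.80ms total and the remaining columns are 0ms:

```python
def to_us(latency):
    """Convert a latency string such as '4.80ms' or '287.00us' to microseconds."""
    if latency.endswith("us"):
        return float(latency[:-2])
    if latency.endswith("ms"):
        return float(latency[:-2]) * 1000.0
    raise ValueError("unrecognized latency: %r" % latency)

# First -total- row of the pg1 example: Network, Cluster, Data, Disk, QoS, NVRAM, Cloud.
components = ["287.00us", "0ms", "427.00us", "4.08ms", "0ms", "0ms", "0ms"]
print(to_us("4.80ms"))                    # 4800.0
print(sum(to_us(c) for c in components))  # ~4794 -- close to the 4.80ms total
```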
Description
The qos statistics workload performance show command shows the current system performance that each QoS
workload is achieving.
The command displays the following data:
• Throughput in kilobytes per second (KB/s) or megabytes per second (MB/s) as appropriate (Throughput)
• Latency observed per request in microseconds (us) or milliseconds (ms) as appropriate (Latency)
The results displayed per iteration are sorted by IOPS. Each iteration starts with a row that displays the total IOPS used across
all QoS workloads. Other columns in this row are either totals or averages.
Parameters
[-node {<nodename>|local}] - Node
Selects the QoS workloads that match this parameter value. If you do not specify this parameter, the command
displays data for the entire cluster.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this parameter,
the command iterates until interrupted by Ctrl-C.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. Valid values are from 1 to 20. The default value
is 10.
Examples
The example above displays the system performance for the 4 QoS workloads with the highest IOPS and it refreshes the
display 100 times before terminating.
cluster1::> qos statistics workload performance show -iterations 100 -rows 2 -policy-group pg1
Workload ID IOPS Throughput Latency
--------------- ------ -------- ---------------- ----------
-total- - 2598 9.96MB/s 1223.00us
file-testfile.. 4228 650 2.54MB/s 1322.00us
file-testfile.. 11201 635 2.48MB/s 1128.00us
-total- - 2825 10.89MB/s 714.00us
file-testfile.. 4228 707 2.76MB/s 759.00us
file-testfile.. 11201 697 2.72MB/s 693.00us
-total- - 2696 10.13MB/s 1149.00us
file-testfile.. 4228 645 2.52MB/s 945.00us
file-testfile.. 6827 634 2.48MB/s 1115.00us
The example above displays the system performance for the 2 QoS workloads belonging to QoS policy group pg1 with
the highest IOPS and it refreshes the display 100 times before terminating.
cluster1::> qos statistics workload performance show -iterations 100 -workload-id 11201
Workload ID IOPS Throughput Latency
--------------- ------ -------- ---------------- ----------
-total- - 2866 10.92MB/s 905.00us
file-testfile.. 11201 674 2.63MB/s 889.00us
-total- - 2761 10.55MB/s 1054.00us
file-testfile.. 11201 638 2.49MB/s 1055.00us
-total- - 2810 10.58MB/s 832.00us
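The IOPS, Throughput, and Latency columns are related: throughput divided by IOPS gives the average request size that the characteristics command reports. A sketch of that cross-check against the first -total- row above (2598 IOPS at 9.96MB/s), assuming 1024-based megabytes:

```python
def avg_request_size(throughput_bytes_per_s, iops):
    """Average bytes per request implied by a throughput/IOPS pair."""
    return throughput_bytes_per_s / iops

size = avg_request_size(9.96 * 1024 * 1024, 2598)
print(round(size))  # ~4020 bytes per request
```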
Description
The qos statistics workload resource cpu show command displays the CPU utilization for QoS workloads per node.
The command displays the following data:
The results displayed per iteration are sorted by total CPU utilization. Each iteration starts with a row that displays the total CPU
utilization across all QoS workloads.
Parameters
-node {<nodename>|local} - Node
Selects the QoS workloads that match this parameter value.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this parameter,
the command iterates until interrupted by Ctrl-C.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. Valid values are from 1 to 20. The default value
is 10.
[-policy-group <text>] - QoS Policy Group Name
Selects the QoS workloads that belong to the QoS policy group specified by this parameter value. If you do
not specify this parameter, the command displays data for all QoS workloads.
| [-workload <text>] - QoS Workload Name
Selects the QoS workload that matches this parameter value. If you do not specify this parameter, the command
displays data for all QoS workloads.
| [-workload-id <integer>]} - QoS Workload ID
Selects the QoS workload that matches the QoS workload ID specified by this parameter value.
Examples
cluster1::> qos statistics workload resource cpu show -node nodeA -iterations 100 -rows 3
Workload ID CPU
--------------- ----- -----
-total- (100%) - 9%
vs0-wid-102 102 5%
file-bigvmdk-.. 121 2%
vs2_vol0-wid-.. 212 2%
-total- (100%) - 8%
vs0-wid-102 102 5%
file-bigvmdk-.. 121 2%
vs2_vol0-wid-.. 212 1%
The following example shows the output when the session privilege level is "diagnostic".
cluster1::*> qos statistics workload resource cpu show -node nodeB -iterations 100 -rows 3
Workload ID CPU Wafl_exempt Kahuna Network Raid Exempt Protocol
--------------- ----- ----- ----------- ------ ------- ----- ------ --------
-total- (200%) - 23% 0% 0% 0% 0% 18% 5%
vs0-wid-102 102 18% 0% 0% 0% 0% 15% 3%
file-bigvmdk-.. 121 3% 0% 0% 0% 0% 2% 1%
vs2_vol0-wid-.. 212 2% 0% 0% 0% 0% 1% 1%
-total- (200%) - 24% 0% 0% 0% 0% 19% 5%
vs0-wid-102 102 19% 0% 0% 0% 0% 16% 3%
file-bigvmdk-.. 121 3% 0% 0% 0% 0% 2% 1%
vs2_vol0-wid-.. 212 2% 0% 0% 0% 0% 1% 1%
The example above displays total CPU utilization for the 3 QoS workloads with the highest CPU utilization and it
refreshes the display 100 times before terminating.
cluster1::> qos statistics workload resource cpu show -node local -iterations 100 -rows 2 -
policy-group pg1
Workload ID CPU
--------------- ----- -----
-total- (100%) - 41%
file-test1_b-.. 9492 16%
file-test1_c-.. 5078 16%
-total- (100%) - 43%
file-test1_c-.. 5078 17%
file-test1_b-.. 9492 16%
-total- (100%) - 40%
file-test1_c-.. 5078 16%
file-test1_b-.. 9492 15%
The example above displays total CPU utilization for the 2 QoS workloads belonging to QoS policy group pg1 with the
highest IOPS and it refreshes the display 100 times before terminating.
cluster1::> qos statistics workload resource cpu show -node local -iterations 100 -workload-id
9492
Workload ID CPU
--------------- ----- -----
-total- (100%) - 15%
file-test1_b-.. 9492 3%
-total- (100%) - 14%
file-test1_b-.. 9492 3%
-total- (100%) - 14%
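In the diagnostic example above the -total- row is labelled (200%) rather than (100%), which appears to mean that utilization is reported out of 100% per CPU on that node. Treating the parenthesized label as the node's capacity, a Python sketch normalizes a workload's percentage to a fraction of the whole node; this interpretation of the label is an assumption, not something the manual states:

```python
def node_cpu_fraction(percent_used, capacity_percent):
    """E.g. 23% used on a node labelled (200%) -> 0.115 of node capacity."""
    return percent_used / capacity_percent

print(node_cpu_fraction(23, 200))  # 0.115
```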
Description
The qos statistics workload resource disk show command displays the disk utilization for QoS workloads per
node. The disk utilization shows the percentage of time spent on the disk during read and write operations. The command
displays disk utilization for system-defined workloads; however, their disk utilization is not included in the total utilization. The
command only supports hard disks.
The command displays the following data:
The results displayed are sorted by total disk utilization. Each iteration starts with a row that displays the total disk utilization
across all QoS workloads.
Parameters
-node {<nodename>|local} - Node
Selects the QoS workloads that match this parameter value.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this parameter,
the command iterates until interrupted by Ctrl-C.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. Valid values are from 1 to 20. The default value
is 10.
[-policy-group <text>] - QoS Policy Group Name
Selects the QoS workloads that belong to the QoS policy group specified by this parameter value. If you do
not specify this parameter, the command displays data for all QoS workloads.
| [-workload <text>] - QoS Workload Name
Selects the QoS workload that matches this parameter value. If you do not specify this parameter, the command
displays data for all QoS workloads.
Examples
cluster1::> qos statistics workload resource disk show -node nodeB -iterations 100 -
rows 3
Workload ID Disk Number of HDD Disks
--------------- ------ ----- -------------------
-total- (100%) - 30% 4
_RAID - 20% 4
vs0-wid101 101 12% 2
file-1-wid121 121 10% 1
vol0-wid1002 1002 8% 1
_WAFL - 7% 3
-total- (100%) - 30% 4
vs0-wid101 101 12% 2
file-1-wid121 121 10% 1
_RAID - 10% 4
vol0-wid1002 1002 8% 1
_WAFL - 7% 3
The example above displays total disk utilization for the 3 QoS workloads with the highest disk utilization and it refreshes
the display 100 times before terminating.
cluster1::> qos statistics workload resource disk show -node local -iterations 100 -rows 2 -policy-
group pg1
Workload ID Disk Number of HDD Disks
--------------- ------ ----- -------------------
-total- - 3% 10
file-test1_a-.. 6437 6% 6
file-test1-wi.. 7872 6% 6
-total- - 3% 10
file-test1_a-.. 6437 5% 6
file-test1-wi.. 7872 5% 6
-total- - 3% 10
file-test1_a-.. 6437 6% 6
file-test1-wi.. 7872 6% 6
The example above displays total disk utilization for the 2 QoS workloads belonging to QoS policy group pg1 with the
highest IOPS and it refreshes the display 100 times before terminating.
cluster1::> qos statistics workload resource disk show -node local -iterations 100 -workload-id
6437
Workload ID Disk Number of HDD Disks
--------------- ------ ----- -------------------
-total- - 3% 10
file-test1_a-.. 6437 6% 6
-total- - 3% 10
file-test1_a-.. 6437 5% 6
-total- - 3% 10
file-test1_a-.. 6437 6% 6
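The description notes that system-defined workloads are listed but excluded from the total utilization; in the first nodeB example above they appear as _RAID and _WAFL. A Python sketch that sums only the user-visible rows; the leading-underscore naming convention for system-defined workloads is an assumption taken from that sample output:

```python
# Rows from the first nodeB iteration above: (workload, disk-utilization %).
rows = [("_RAID", 20), ("vs0-wid101", 12), ("file-1-wid121", 10),
        ("vol0-wid1002", 8), ("_WAFL", 7)]

# Skip system-defined workloads (underscore-prefixed, by assumption).
user_total = sum(pct for name, pct in rows if not name.startswith("_"))
print(user_total)  # 30 -- matches the -total- row of that iteration
```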
Description
Shows the current status of workloads on a cluster. Use this command to determine the types of workloads that are currently on
a cluster. The types of workloads include: system-defined, preset, and user-defined. The system generates system-defined and
preset workloads. You cannot create, modify, or delete these workloads. You can modify or delete a user-defined workload, but
you cannot create one.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-workload <text>] - Workload Name
If you use this parameter, the command displays the workloads that contain the specified workload name.
[-uuid <UUID>] - Workload UUID (privilege: advanced)
If you use this parameter, the command displays the workloads that contain the specified UUID.
[-class <QoS Configuration Class>] - Workload Class
If you use this parameter, the command displays the workloads that contain the specified class. The Class
options include system-defined, preset, and user-defined.
[-wid <integer>] - Workload ID
If you use this parameter, the command displays the workloads that contain the specified internal workload ID.
[-category <text>] - Workload Category
If you use this parameter, the command displays the workloads that contain the specified category. The
category options include Scanner and Efficiency.
[-policy-group <text>] - Policy Group Name
If you use this parameter, the command displays the workloads that match the specified policy group name.
[-read-ahead <text>] - Read-ahead Tunables
If you use this parameter, the command displays the workloads that contain the specified read-ahead cache
tunable.
[-vserver <vserver name>] - Vserver
If you use this parameter, the command displays the workloads that match the specified Vserver.
[-volume <volume name>] - Volume
If you use this parameter, the command displays the workloads that match the specified volume.
[-qtree <qtree name>] - Qtree Name
If you use this parameter, the command displays the workloads that match the specified Qtree name.
Examples
Security Commands
The security directory
The security commands enable you to manage security for the management interface.
security snmpusers
Show SNMP users
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The security snmpusers command displays the following information about SNMP users:
• User name
• Authentication method
• Hexadecimal engine ID
• Authentication protocol
• Privacy protocol
• Security group
Examples
The following example displays information about all SNMP users:
Description
The security audit modify command modifies the following audit-logging settings for the management interface:
• Whether get requests for the Data ONTAP API (ONTAPI) are audited
Parameters
[-cliget {on|off}] - Enable Auditing of CLI Get Operations
This specifies whether get requests for the CLI are audited. The default setting is off.
[-httpget {on|off}] - Enable Auditing of HTTP Get Operations
This specifies whether get requests for the web (HTTP) interface are audited. The default setting is off.
[-ontapiget {on|off}] - Enable Auditing of Data ONTAP API Get Operations
This specifies whether get requests for the Data ONTAP API (ONTAPI) interface are audited. The default
setting is off.
Examples
The following example turns off auditing of get requests for the CLI interface:
Description
The security audit show command displays the following audit-logging settings for the management interface:
• Whether get requests for the web (HTTP) interface are audited
• Whether get requests for the Data ONTAP API (ONTAPI) are audited
Audit log entries are written to the 'audit' log, viewable via the 'security audit log show' command.
Description
The security audit log show command displays cluster-wide audit log messages. Messages from each node are
interleaved in chronological order.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-detail ]
This display option shows the individual fields of the audit record.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-timestamp <Date>] - Log Entry Timestamp
Selects the entries that match the specified input for timestamp. This will be in a human-readable format
<day> <month> <day of month> <hour>:<min>:<sec> <year> in the local timezone.
[-node {<nodename>|local}] - Node
Selects the entries that match the specified input for node.
[-entry <text>] - Log Message Entry
Selects the entries that match the specified input for entry.
[-session-id <text>] - Session ID
This is the session ID for this audit record. Each ssh/console session is assigned a unique session ID. Each
ZAPI/HTTP/SNMP request is assigned a unique session ID.
[-command-id <text>] - Command ID
This is useful with ssh/console sessions. Each command in a session is assigned a unique command ID. Each
ZAPI/HTTP/SNMP request does not have a command ID.
[-application <text>] - Protocol
This is the application that was used.
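The -timestamp parameter above uses the human-readable format <day> <month> <day of month> <hour>:<min>:<sec> <year>. In Python's strptime notation that corresponds to "%a %b %d %H:%M:%S %Y"; a small sketch (the sample timestamp is invented for illustration):

```python
from datetime import datetime

AUDIT_TS_FORMAT = "%a %b %d %H:%M:%S %Y"  # local-time format described above

def parse_audit_timestamp(text):
    """Parse an audit-log timestamp string into a naive local datetime."""
    return datetime.strptime(text, AUDIT_TS_FORMAT)

ts = parse_audit_timestamp("Thu May 16 13:32:02 2019")
print(ts.isoformat())  # 2019-05-16T13:32:02
```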
Show Commands
Manage Digital Certificates
Field description for all the certificate show commands.
Description
The security certificate create command creates and installs a self-signed digital certificate, which can be used for
server authentication, for signing other certificates by acting as a certificate authority (CA), or for Data ONTAP as an SSL
client. The certificate function is selected by the -type field. Self-signed digital certificates are not as secure as certificates signed
by a CA. Therefore, they are not recommended in a production environment.
Parameters
-vserver <Vserver Name> - Name of Vserver
This specifies the name of the Vserver on which the certificate will exist.
-common-name <FQDN or Custom Common Name> - FQDN or Custom Common Name
This specifies the desired certificate name as a fully qualified domain name (FQDN) or custom common name
or the name of a person. The supported characters, which are a subset of the ASCII character set, are as
follows:
• Numbers 0 through 9
The common name must not start or end with a "-" or a ".". The maximum length is 253 characters.
-type <type of certificate> - Type of Certificate
This specifies the certificate type. Valid values are the following:
• server - creates and installs a self-signed digital certificate and intermediate certificates to be used for
server authentication
• client - includes a self-signed digital certificate and private key to be used for Data ONTAP as an SSL
client
cluster1::> security certificate create -vserver vs0 -common-name www.example.com -type server
This example creates a root-ca type, self-signed digital certificate with a 2048-bit private key generated by the SHA256
hashing function that will expire in 365 days for a Vserver named vs0 for use by the Software group in IT at a company
whose custom common name is www.example.com, located in Sunnyvale, California, USA. The email address of the
contact administrator who manages the Vserver is web@example.com.
cluster1::> security certificate create -vserver vs0 -common-name www.example.com -type root-
ca -size 2048 -country US -state California -locality Sunnyvale -organization IT -unit
Software -email-addr web@example.com -expire-days 365 -hash-function SHA256
This example creates a client type of self-signed digital certificate for a Vserver named vs0 at a company that uses Data
ONTAP as an SSL client. The company's custom common name is www.example.com and its Vserver name is vs0.
cluster1::> security certificate create -vserver vs0 -common-name www.example.com -type client
-size 2048 -country US -state California -locality Sunnyvale -organization IT -unit Software
-email-addr web@example.com -expire-days 365 -hash-function SHA256
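The common-name rules stated above (no leading or trailing "-" or ".", at most 253 characters, a subset of ASCII) can be checked client-side before running the command. A Python sketch; the allowed character set used here (letters, digits, "-", ".") is an assumption, since the manual's full character list is not reproduced in this excerpt:

```python
import re

def is_valid_common_name(name):
    """Client-side check of the common-name constraints described above."""
    if not 1 <= len(name) <= 253:
        return False
    if name[0] in "-." or name[-1] in "-.":
        return False
    # Assumed allowed set: ASCII letters, digits, hyphen, dot.
    return re.fullmatch(r"[A-Za-z0-9.\-]+", name) is not None

print(is_valid_common_name("www.example.com"))  # True
print(is_valid_common_name("-bad.example."))    # False
```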
Description
This command deletes an installed digital security certificate.
Parameters
-vserver <Vserver Name> - Name of Vserver
This specifies the Vserver that contains the certificate.
-common-name <FQDN or Custom Common Name> - FQDN or Custom Common Name
This specifies the desired certificate name as a fully qualified domain name (FQDN) or custom common name
or the name of a person. The supported characters, which are a subset of the ASCII character set, are as
follows:
• Numbers 0 through 9
The common name must not start or end with a "-" or a ".". The maximum length is 253 characters.
[-serial <text>] - Serial Number of Certificate
This specifies the certificate serial number.
-ca <text> - Certificate Authority
This specifies the certificate authority (CA).
-type <type of certificate> - Type of Certificate
This specifies the certificate type. Valid values are the following:
• root-ca - includes a self-signed digital certificate to sign other certificates by acting as a certificate
authority (CA)
• client-ca - includes the public key certificate for the root CA of the SSL client. If this client-ca
certificate is created as part of a root-ca, it will be deleted along with the corresponding deletion of the
root-ca.
• server-ca - includes the public key certificate for the root CA of the SSL server to which Data ONTAP
is a client. If this server-ca certificate is created as part of a root-ca, it will be deleted along with the
corresponding deletion of the root-ca.
• client - includes a public key certificate and private key to be used for Data ONTAP as an SSL client
Examples
This example deletes a root-ca type digital certificate for a Vserver named vs0 in a company named www.example.com
with serial number 4F57D3D1.
Description
This command generates a digital certificate signing request and displays it on the console. A certificate signing request (CSR or
certification request) is a message sent securely to a certificate authority (CA) via any electronic media, to apply for a digital
identity certificate.
Parameters
-common-name <FQDN or Custom Common Name> - FQDN or Custom Common Name
This specifies the desired certificate name as a fully qualified domain name (FQDN) or custom common name
or the name of a person. The supported characters, which are a subset of the ASCII character set, are as
follows:
• Numbers 0 through 9
Examples
This example creates a certificate-signing request with a 2048-bit private key generated by the SHA256 hashing function
for use by the Software group in IT at a company whose custom common name is www.example.com, located in
Sunnyvale, California, USA. The email address of the contact administrator who manages the Vserver is
web@example.com.
Private Key :
-----BEGIN RSA PRIVATE KEY-----
MIIBOwIBAAJBAPXFanNoJApT1nzSxOcxixqImRRGZCR7tVmTYyqPSuTvfhVtwDJb
mXuj6U3a1woUsb13wfEvQnHVFNci2ninsJ8CAwEAAQJAWt2AO+bW3FKezEuIrQlu
KoMyRYK455wtMk8BrOyJfhYsB20B28eifjJvRWdTOBEav99M7cEzgPv+p5kaZTTM
gQIhAPsp+j1hrUXSRj979LIJJY0sNez397i7ViFXWQScx/ehAiEA+oDbOooWlVvu
xj4aitxVBu6ByVckYU8LbsfeRNsZwD8CIQCbZ1/ENvmlJ/P7N9Exj2NCtEYxd0Q5
cwBZ5NfZeMBpwQIhAPk0KWQSLadGfsKO077itF+h9FGFNHbtuNTrVq4vPW3nAiAA
peMBQgEv28y2r8D4dkYzxcXmjzJluUSZSZ9c/wS6fA==
-----END RSA PRIVATE KEY-----
Note: Please keep a copy of your certificate request and private key for future
reference.
Description
The security certificate install command installs digital security certificates signed by a certificate authority (CA)
and the public key certificate of the root CA. Digital security certificates also include the intermediate certificates needed to
construct the chain for server certificates (the server type), client-side root CA certificates (the client-ca type), or server-side
root CA certificates (the server-ca type). With FIPS enabled, the following restrictions apply to the certificate being installed:
• server/client/server-ca/client-ca: key size of at least 2048 bits
• server/client: hash function other than MD-5 or SHA-1
• server-ca/client-ca (intermediate CA): hash function other than MD-5 or SHA-1
• server-ca/client-ca (root CA): hash function other than MD-5
Parameters
-vserver <Vserver Name> - Name of Vserver
This specifies the Vserver that contains the certificate.
-type <type of certificate> - Type of Certificate
This specifies the certificate type. Valid values are the following:
• server - includes server certificates and intermediate certificates.
• client-ca - includes the public key certificate for the root CA of the SSL client
• server-ca - includes the public key certificate for the root CA of the SSL server to which Data ONTAP
is a client
• client - includes a self-signed or CA-signed digital certificate and private key to be used for Data
ONTAP as an SSL client
Examples
This example installs a CA-signed certificate (along with intermediate certificates) for a Vserver named vs0.
You should keep a copy of the private key and the CA-signed digital certificate
for future reference.
This example installs a CA certificate for client authentication for a Vserver named vs0.
You should keep a copy of the CA-signed digital certificate for future
reference.
This example installs a CA certificate for server authentication for a Vserver named vs0. In this case, Data ONTAP acts as
an SSL client.
You should keep a copy of the CA-signed digital certificate for future
reference.
Description
This command allows the user to modify the system's identifier for an installed digital certificate. It does not alter the certificate
itself.
Parameters
-vserver <Vserver Name> - Vserver Name
This specifies the name of the Vserver on which the certificate exists.
-cert-name <text> - Existing Certificate Name
This specifies the current system identifier for the certificate.
-new-name <text> - New Certificate Name
This specifies the desired system identifier for the certificate. It must be unique among certificates in the
Vserver.
Examples
Description
This command displays information about the installed digital certificates. Some details are displayed only when you use the
command with the -instance parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
Examples
The examples below display information about digital certificates.
MIIDfTCCAmWgAwIBAwIBADANBgkqhkiG9w0BAQsFADBgMRQwEgYDVQQDEwtsYWIu
YWJjLmNvbTELMAkGA1UEBhMCVVMxCTAHBgNVBAgTADEJMAcGA1UEBxMAMQkwBwYD
VQQKEwAxCTAHBgNVBAsTADEPMA0GCSqGSIb3DQEJARYAMB4XDTEwMDQzMDE4MTQ0
BgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFCVG7dYGe51akE14ecaCdL
+LOAxUMA0G
CSqGSIb3DQEBCwUAA4IBAQBJlE51pkDY3ZpsSrQeMOoWLteIR
+1H0wKZOM1Bhy6Q
+gsE3XEtnN07AE4npjIT0eVP0nI9QIJAbP0uPKaCGAVBSBMoM2mOwbfswI7aJoEh
+XuEoNr0GOz+mltnfhgvl1fT6Ms+xzd3LGZYQTworus2
-----END CERTIFICATE-----
Country Name (2 letter code): US
State or Province Name (full name): California
Locality Name (e.g. city): Sunnyvale
Organization Name (e.g. company): example
Organization Unit (e.g. section): IT
Email Address (Contact Name): web@example.com
Protocol: SSL
Hashing Function: SHA256
Description
This command displays information about the Data ONTAP generated digital certificates. Some details are displayed
only when you use the command with the -instance parameter.
Examples
The examples below display information about Data ONTAP generated digital certificates.
MIIDfTCCAmWgAwIBAwIBADANBgkqhkiG9w0BAQsFADBgMRQwEgYDVQQDEwtsYWIu
YWJjLmNvbTELMAkGA1UEBhMCVVMxCTAHBgNVBAgTADEJMAcGA1UEBxMAMQkwBwYD
VQQKEwAxCTAHBgNVBAsTADEPMA0GCSqGSIb3DQEJARYAMB4XDTEwMDQzMDE4MTQ0
BgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFCVG7dYGe51akE14ecaCdL
+LOAxUMA0G
CSqGSIb3DQEBCwUAA4IBAQBJlE51pkDY3ZpsSrQeMOoWLteIR
+1H0wKZOM1Bhy6Q
+gsE3XEtnN07AE4npjIT0eVP0nI9QIJAbP0uPKaCGAVBSBMoM2mOwbfswI7aJoEh
+XuEoNr0GOz+mltnfhgvl1fT6Ms+xzd3LGZYQTworus2
-----END CERTIFICATE-----
Country Name (2 letter code): US
State or Province Name (full name): California
Locality Name (e.g. city): Sunnyvale
Organization Name (e.g. company): example
Organization Unit (e.g. section): IT
Email Address (Contact Name): web@example.com
Protocol: SSL
Hashing Function: SHA256
Related references
security certificate show on page 478
Description
This command displays information about the default CA certificates that come pre-installed with Data ONTAP. Some details
are displayed only when you use the command with the -instance parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Name of Vserver
Selects the Vserver whose digital certificates you want to display.
[-common-name <FQDN or Custom Common Name>] - FQDN or Custom Common Name
Selects the certificates that match this parameter value.
[-serial <text>] - Serial Number of Certificate
Selects the certificates that match this parameter value.
[-ca <text>] - Certificate Authority
Selects the certificates that match this parameter value.
[-type <type of certificate>] - Type of Certificate
Selects the certificates that match this parameter value.
[-subtype <kmip-cert>] - (DEPRECATED)-Certificate Subtype
Note: This parameter has been deprecated in ONTAP 9.6 and may be removed in a future release of Data
ONTAP.
Selects the certificate subtype that matches the specified value. The only valid value is kmip-cert, the Key Management Interoperability Protocol (KMIP) certificate subtype.
Examples
The examples below display information about the pre-installed truststore digital certificates.
MIIDfTCCAmWgAwIBAwIBADANBgkqhkiG9w0BAQsFADBgMRQwEgYDVQQDEwtsYWIu
YWJjLmNvbTELMAkGA1UEBhMCVVMxCTAHBgNVBAgTADEJMAcGA1UEBxMAMQkwBwYD
VQQKEwAxCTAHBgNVBAsTADEPMA0GCSqGSIb3DQEJARYAMB4XDTEwMDQzMDE4MTQ0
BgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFCVG7dYGe51akE14ecaCdL+LOAxUMA0G
CSqGSIb3DQEBCwUAA4IBAQBJlE51pkDY3ZpsSrQeMOoWLteIR+1H0wKZOM1Bhy6Q
+gsE3XEtnN07AE4npjIT0eVP0nI9QIJAbP0uPKaCGAVBSBMoM2mOwbfswI7aJoEh
+XuEoNr0GOz+mltnfhgvl1fT6Ms+xzd3LGZYQTworus2
-----END CERTIFICATE-----
Country Name (2 letter code): US
State or Province Name (full name): California
Locality Name (e.g. city): Sunnyvale
Organization Name (e.g. company): example
Related references
security certificate show on page 478
Description
This command displays information about the user installed digital certificates. Some details are displayed only when you use
the command with the -instance parameter. In systems upgraded to Data ONTAP 9.4 or later, existing Data ONTAP generated
certificates will also be shown as part of this command.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Name of Vserver
Selects the Vserver whose digital certificates you want to display.
[-common-name <FQDN or Custom Common Name>] - FQDN or Custom Common Name
Selects the certificates that match this parameter value.
[-serial <text>] - Serial Number of Certificate
Selects the certificates that match this parameter value.
[-ca <text>] - Certificate Authority
Selects the certificates that match this parameter value.
[-type <type of certificate>] - Type of Certificate
Selects the certificates that match this parameter value.
[-subtype <kmip-cert>] - (DEPRECATED)-Certificate Subtype
Note: This parameter has been deprecated in ONTAP 9.6 and may be removed in a future release of Data
ONTAP.
Selects the certificate subtype that matches the specified value. The only valid value is kmip-cert, the Key Management Interoperability Protocol (KMIP) certificate subtype.
Examples
The examples below display information about user installed digital certificates.
MIIDfTCCAmWgAwIBAwIBADANBgkqhkiG9w0BAQsFADBgMRQwEgYDVQQDEwtsYWIu
YWJjLmNvbTELMAkGA1UEBhMCVVMxCTAHBgNVBAgTADEJMAcGA1UEBxMAMQkwBwYD
VQQKEwAxCTAHBgNVBAsTADEPMA0GCSqGSIb3DQEJARYAMB4XDTEwMDQzMDE4MTQ0
BgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFCVG7dYGe51akE14ecaCdL+LOAxUMA0G
CSqGSIb3DQEBCwUAA4IBAQBJlE51pkDY3ZpsSrQeMOoWLteIR
+gsE3XEtnN07AE4npjIT0eVP0nI9QIJAbP0uPKaCGAVBSBMoM2mOwbfswI7aJoEh
+XuEoNr0GOz+mltnfhgvl1fT6Ms+xzd3LGZYQTworus2
-----END CERTIFICATE-----
Country Name (2 letter code): US
State or Province Name (full name): California
Locality Name (e.g. city): Sunnyvale
Organization Name (e.g. company): example
Organization Unit (e.g. section): IT
Email Address (Contact Name): web@example.com
Protocol: SSL
Hashing Function: SHA256
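The command invocation for the example above is not shown; assuming this section documents the security certificate show command (as the related reference suggests) and a hypothetical cluster prompt, the detailed output shown would come from an invocation such as:
cluster1::> security certificate show -instance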
Related references
security certificate show on page 478
security certificate sign
Description
This command signs a digital certificate signing request and generates a certificate using a Self-Signed Root CA certificate in
either PEM or PKCS12 format. You can use the security certificate generate-csr command to generate a digital
certificate signing request.
Parameters
-vserver <Vserver Name> - Name of Vserver
This specifies the name of the Vserver on which the signed certificate will exist.
-ca <text> - Certificate Authority to Sign
This specifies the name of the Certificate Authority that will sign the certificate.
-ca-serial <text> - Serial Number of CA Certificate
This specifies the serial number of the Certificate Authority that will sign the certificate.
[-expire-days <integer>] - Number of Days until Expiration
This specifies the number of days until the signed certificate expires. The default value is 365 days. Possible
values are between 1 and 3652.
[-format <certificate format>] - Certificate Format
This specifies the format of signed certificate. The default value is PEM. Possible values include PEM and
PKCS12.
[-destination {(ftp|http)://(hostname|IPv4 Address|'['IPv6 Address']')...}] - Where to Send File
This specifies the destination to upload the signed certificate. This option can only be used when the format is
PKCS12.
[-hash-function <hashing function>] - Hashing Function
This specifies the cryptographic hashing function for the self-signed certificate. The default value is SHA256.
Possible values include SHA1, SHA256 and MD5.
Examples
This example signs a digital certificate for a Vserver named vs0 using a Certificate Authority certificate that has a ca of
www.ca.com and a ca-serial of 4F4EB629 in PEM format using the SHA256 hashing function.
Signed Certificate:
-----BEGIN CERTIFICATE-----
MIICwDCCAaigAwIBAgIET1oskDANBgkqhkiG9w0BAQsFADBdMREwDwYDVQQDEwh2
czAuY2VydDELMAkGA1UEBhMCVVMxCTAHBgNVBAgTADEJMAcGA1UEBxMAMQkwBwYD
VQQKEwAxCTAHBgNVBAsTADEPMA0GCSqGSIb3DQEJARYAMB4XDTEyMDMwOTE2MTUx
M1oXDTEyMDQxNDE2MTUxM1owYDEUMBIGA1UEAxMLZXhhbXBsZS5jb20xCzAJBgNV
BAYTAlVTMQkwBwYDVQQIEwAxCTAHBgNVBAcTADEJMAcGA1UEChMAMQkwBwYDVQQL
EwAxDzANBgkqhkiG9w0BCQEWADBcMA0GCSqGSIb3DQEBAQUAA0sAMEgCQQD1xWpz
-----END CERTIFICATE-----
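The invocation for the example above is not shown; mirroring the values stated in the example text and the pattern of the second example below, it would take a form such as (prompt assumed):
cluster1::> security certificate sign -vserver vs0 -ca www.ca.com -ca-serial 4F4EB629 -format PEM -hash-function SHA256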
This example signs a digital certificate that expires in 36 days and exports it to the destination ftp://10.98.1.1//u/sam/sign.pfx
for a Vserver named vs0, using a Certificate Authority certificate that has a ca value of www.ca.com and a ca-serial value of
4F4EB629, in PKCS12 format, using the MD5 hashing function.
cluster1::> security certificate sign -vserver vs0 -ca www.ca.com -ca-serial 4F4EB629
-expire-days 36 -format PKCS12 -destination ftp://10.98.1.1//u/sam/sign.pfx -hash-function
MD5
Signed Certificate:
-----BEGIN CERTIFICATE-----
MIICwDCCAaigAwIBAgIET1ot8jANBgkqhkiG9w0BAQsFADBdMREwDwYDVQQDEwh2
czAuY2VydDELMAkGA1UEBhMCVVMxCTAHBgNVBAgTADEJMAcGA1UEBxMAMQkwBwYD
VQQKEwAxCTAHBgNVBAsTADEPMA0GCSqGSIb3DQEJARYAMB4XDTEyMDMwOTE2MjEw
NloXDTEyMDQxNDE2MjEwNlowYDEUMBIGA1UEAxMLZXhhbXBsZS5jb20xCzAJBgNV
BAYTAlVTMQkwBwYDVQQIEwAxCTAHBgNVBAcTADEJMAcGA1UEChMAMQkwBwYDVQQL
EwAxDzANBgkqhkiG9w0BCQEWADBcMA0GCSqGSIb3DQEBAQUAA0sAMEgCQQD1xWpz
oarXHSyDzv3T5QIxBGRJ0ACtgdjJuqtuAdmnKvKfLS1o4C90
-----END CERTIFICATE-----
Related references
security certificate generate-csr on page 474
security certificate ca-issued revoke
Description
This command revokes a digital certificate signed by a Self-Signed Root CA.
Parameters
-vserver <Vserver Name> - Vserver
This specifies the name of the Vserver on which the certificate is stored.
-serial <text> - Serial Number of Certificate
This specifies the serial number of the certificate.
-ca <text> - Certificate Authority
This specifies the name of the Certificate Authority whose certificate will be revoked.
-ca-serial <text> - Serial Number of CA Certificate
This specifies the serial number of the Certificate Authority's certificate.
[-common-name <FQDN or Custom Common Name>] - FQDN or Custom Common Name
This specifies a fully qualified domain name (FQDN) or custom common name or the name of a person. This
field is optional if ca-serial is specified.
Examples
This example revokes a signed digital certificate for a Vserver named vs0 with serial as 4F5A2DF2 for a Certificate
Authority certificate that has a ca of www.ca.com and a ca-serial of 4F4EB629.
cluster1::> security certificate ca-issued revoke -vserver vs0 -serial 4F5A2DF2 -ca www.ca.com -
ca-serial 4F4EB629
security certificate ca-issued show
Description
This command displays the following information about the digital certificates issued by the self-signed root-ca:
• Vserver
• Certificate Authority
• Expiration date
• Revocation date
To display more details, run the command with the -instance parameter. This will add the following information:
• Country name
• Locality name
• Organization name
• Organization unit
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver
Selects the certificates that match this parameter value.
[-serial <text>] - Serial Number of Certificate
Selects the certificates that match this parameter value.
[-ca <text>] - Certificate Authority
Selects the certificates that match this parameter value.
[-ca-serial <text>] - Serial Number of CA Certificate
Selects the certificates that match this parameter value.
[-common-name <FQDN or Custom Common Name>] - FQDN or Custom Common Name
Selects the certificates that match this parameter value.
[-status <status of certificate>] - Status of Certificate
Selects the certificates that match this parameter value. Possible values include active and revoked.
[-expiration <Date>] - Certificate Expiration Date
Selects the certificates that match this parameter value.
[-revocation <Date>] - Certificate Revocation Date
Selects the certificates that match this parameter value.
[-country <text>] - Country Name (2 letter code)
Selects the certificates that match this parameter value.
Examples
The examples below display information about CA issued digital certificates.
Serial Number of
Vserver Serial Number Common Name CA's Certificate Status
---------- --------------- --------------------------- ---------------- -------
vs0 4F5A2C90 example.com 4F4EB629 active
Certificate Authority: vs0.cert
Expiration Date: Sat Apr 14 16:15:13 2012
Revocation Date: -
Vserver: vs0
Serial Number of Certificate: 4F5A2C90
Certificate Authority: vs0.cert
Serial Number of CA Certificate: 4F4EB629
FQDN or Custom Common Name: example.com
Status of Certificate: active
Certificate Expiration Date: Sat Apr 14 16:15:13 2012
Certificate Revocation Date: -
Country Name (2 letter code): US
State or Province Name (full name): California
Locality Name (e.g. city): Sunnyvale
Organization Name (e.g. company): example
Organization Unit (e.g. section): IT
Email Address (Contact Name): web@example.com
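The tabular listing and the detailed listing above would come from invocations such as the following (the prompt and the use of -instance for the detailed form are assumed):
cluster1::> security certificate ca-issued show
cluster1::> security certificate ca-issued show -instance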
security certificate truststore show
Field descriptions for all the certificate show commands.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Name of Vserver
Selects the Vserver whose digital certificates you want to display.
[-common-name <FQDN or Custom Common Name>] - FQDN or Custom Common Name
Selects the certificates that match this parameter value.
[-serial <text>] - Serial Number of Certificate
Selects the certificates that match this parameter value.
[-ca <text>] - Certificate Authority
Selects the certificates that match this parameter value.
[-type <type of certificate>] - Type of Certificate
Selects the certificates that match this parameter value.
[-subtype <kmip-cert>] - (DEPRECATED)-Certificate Subtype
Selects the certificate subtype that matches the specified value. The only valid value is kmip-cert, the Key Management Interoperability Protocol (KMIP) certificate subtype.
Examples
The examples below display information about the pre-installed truststore digital certificates.
MIIDfTCCAmWgAwIBAwIBADANBgkqhkiG9w0BAQsFADBgMRQwEgYDVQQDEwtsYWIu
YWJjLmNvbTELMAkGA1UEBhMCVVMxCTAHBgNVBAgTADEJMAcGA1UEBxMAMQkwBwYD
VQQKEwAxCTAHBgNVBAsTADEPMA0GCSqGSIb3DQEJARYAMB4XDTEwMDQzMDE4MTQ0
BgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFCVG7dYGe51akE14ecaCdL+LOAxUMA0G
CSqGSIb3DQEBCwUAA4IBAQBJlE51pkDY3ZpsSrQeMOoWLteIR+1H0wKZOM1Bhy6Q
+gsE3XEtnN07AE4npjIT0eVP0nI9QIJAbP0uPKaCGAVBSBMoM2mOwbfswI7aJoEh
+XuEoNr0GOz+mltnfhgvl1fT6Ms+xzd3LGZYQTworus2
-----END CERTIFICATE-----
Country Name (2 letter code): US
State or Province Name (full name): California
Locality Name (e.g. city): Sunnyvale
Organization Name (e.g. company): example
Organization Unit (e.g. section): IT
Email Address (Contact Name): web@example.com
Protocol: SSL
Hashing Function: SHA256
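The command invocation for the example above is not shown; assuming the command name security certificate truststore show and a hypothetical cluster prompt, the detailed output shown would come from:
cluster1::> security certificate truststore show -instance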
Related references
security certificate show on page 478
security config modify
Description
The security config modify command modifies the existing cluster-wide security configuration. If you enable FIPS-compliant
mode, the cluster automatically selects only compliant TLS protocols (currently TLSv1.2 and TLSv1.1). Disabling FIPS-compliant
mode does not, by itself, re-enable non-compliant protocols.
Use the -supported-protocols parameter to include or exclude TLS protocols independently of the FIPS mode. All protocols at
or above the lowest version specified will be enabled, even those not explicitly specified. By default, FIPS mode is disabled, and
Data ONTAP supports the TLSv1.2, TLSv1.1 and TLSv1 protocols. For backward compatibility, Data ONTAP supports adding
SSLv3 to the supported-protocols list when FIPS mode is disabled.
Use the -supported-ciphers parameter to configure only AES, or AES and 3DES, or to disable weak ciphers such as RC4 by
specifying !RC4. By default, the supported-ciphers setting is ALL:!LOW:!aNULL:!EXP:!eNULL. This setting enables all
supported cipher suites for the protocols except those with no authentication, no encryption, no exports, or low encryption
(currently suites using 64-bit or 56-bit encryption algorithms). Select a cipher suite that is available with the corresponding
selected protocol; an invalid configuration may cause some functionality to fail to operate properly. Refer to
"https://www.openssl.org/docs/apps/ciphers.html", published by the OpenSSL Software Foundation, for the correct cipher
string syntax.
After modifying the security configuration, reboot all the nodes manually.
Parameters
-interface <SSL> - FIPS-Compliant Interface
Selects the FIPS-compliant interface. Default is SSL.
[-is-fips-enabled {true|false}] - FIPS Mode
Enables or disables FIPS-compliant mode for the entire cluster. Default is false.
[-supported-protocols {TLSv1.2|TLSv1.1|TLSv1|SSLv3}, ...] - Supported Protocols
Selects the supported protocols for the selected interface. Default is TLSv1.2,TLSv1.1,TLSv1
[-supported-ciphers <Cipher String>] - Supported Ciphers
Selects the supported cipher suites for the selected interface. Default is ALL:!LOW:!aNULL:!EXP:!eNULL.
Examples
The following command enables FIPS mode in the cluster. (Default setting for FIPS mode is false)
The following command modifies supported protocols to TLSv1.2 and TLSv1.1 in the cluster. (Default setting for
supported protocols is TLSv1.2,TLSv1.1,TLSv1)
The following command modifies supported ciphers to ALL:!LOW:!aNULL:!EXP:!eNULL:!RC4 in the cluster. (Default
setting for supported ciphers is ALL:!LOW:!aNULL:!EXP:!eNULL)
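The command lines for the three examples above are not shown; based on the parameters documented for this command (and a hypothetical cluster prompt), they would take forms such as:
cluster1::> security config modify -interface SSL -is-fips-enabled true
cluster1::> security config modify -interface SSL -supported-protocols TLSv1.2,TLSv1.1
cluster1::> security config modify -interface SSL -supported-ciphers ALL:!LOW:!aNULL:!EXP:!eNULL:!RC4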
security config show
Description
The security config show command displays the security configurations of the cluster in advanced privilege mode.
Default values are as follows:
• interface: SSL
• is-fips-enabled: false
• supported-protocols: TLSv1.2, TLSv1.1, TLSv1
• supported-ciphers: ALL:!LOW:!aNULL:!EXP:!eNULL
The default cipher suites represent all suites for the listed protocols except those that have no authentication, no encryption, no
exports, and low encryption (below 64 or 56 bit).
Enabling FIPS mode will cause the entire cluster to use FIPS-compliant crypto operations only.
Use the security config modify command to change the protocols and ciphers that the cluster will support. When all the
nodes in the cluster are updated with the modified settings, the cluster security config ready value will be shown as yes.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-interface <SSL>] - FIPS-Compliant Interface
Displays configurations that match the specified value for the interface.
[-is-fips-enabled {true|false}] - FIPS Mode
Display configurations that match the specified value for FIPS mode.
[-supported-protocols {TLSv1.2|TLSv1.1|TLSv1|SSLv3}, ...] - Supported Protocols
Displays configurations that match the specified protocols.
[-supported-ciphers <Cipher String>] - Supported Ciphers
Displays the configurations that match the specified supported ciphers.
Examples
The following example shows the default security configurations for a cluster.
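Because this command is available in advanced privilege mode, a representative session would be (prompt assumed):
cluster1::> set -privilege advanced
cluster1::*> security config show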
Related references
security config modify on page 494
security config ocsp disable
Description
The security config ocsp disable command disables the OCSP-based certificate status check for applications
supporting SSL/TLS communications. For more information about the OCSP-based certificate status check for applications
supporting SSL/TLS communications, see the security config ocsp show command.
Parameters
-application <Application supporting SSL/TLS protocol>, ... - Application Name
Use this parameter to specify the applications for which to disable OCSP support. To disable it for all applications, use the
value 'all'. Note: You cannot specify the value 'all' together with other application names.
Examples
The following example disables the OCSP support for AutoSupport and EMS applications:
The following example disables the OCSP support for all applications:
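Representative invocations for the two examples above (the application keywords autosupport and ems, and the cluster prompt, are assumed):
cluster1::> security config ocsp disable -application autosupport,ems
cluster1::> security config ocsp disable -application all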
Related references
security config ocsp show on page 498
security config ocsp enable
Description
The security config ocsp enable command enables the OCSP-based certificate status check for applications supporting
SSL/TLS communications. For more information about the OCSP-based certificate status check for applications supporting
SSL/TLS communications, see the security config ocsp show command.
Parameters
-application <Application supporting SSL/TLS protocol>, ... - List of Applications
Use this parameter to specify the applications for which to enable OCSP support. To enable it for all applications, use the
value 'all'. Note: You cannot specify the value 'all' together with other application names.
Examples
The following example enables the OCSP support for AutoSupport and EMS applications:
The following example enables the OCSP support for all applications:
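Representative invocations for the two examples above (the application keywords autosupport and ems, and the cluster prompt, are assumed):
cluster1::> security config ocsp enable -application autosupport,ems
cluster1::> security config ocsp enable -application all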
security config ocsp show
Description
The security config ocsp show command displays the support status of the OCSP-based certificate status check for
applications supporting SSL/TLS communications. If the OCSP support is enabled for an application, this check is done in
addition to the certificate chain validation as part of the SSL handshake process. The OCSP-based certificate status check is
done for all the certificates in the chain, provided that each certificate specifies OCSP URI access points. If no access points
are specified, the OCSP-based certificate revocation status check is skipped for that certificate, and checking continues for the
rest of the certificates in the chain.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-application <Application supporting SSL/TLS protocol>] - Application Name
Selects the application that matches this parameter value. Applications include:
• autosupport - AutoSupport
• ldap_ad - Lightweight Directory Access Protocol - Active Directory (query and modify items in Active
Directory)
• ldap_nis_namemap - Lightweight Directory Access Protocol - NIS and Name Mapping (query Unix user,
group, netgroup and name mapping information)
Examples
The following example displays the OCSP support for the applications supporting SSL/TLS communications:
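The invocation for the example above would simply be (prompt assumed):
cluster1::> security config ocsp show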
security config status show
Description
The security config status show command displays the required reboot status of the nodes in the cluster after security
configuration settings have been modified using the security config modify command. Use this command to monitor the
status of the required reboot process. When all nodes have rebooted, the cluster is ready to use the new security configuration
settings.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
Select the node whose reboot-status you want to display.
[-reboot-needed {true|false}] - Reboot Needed
Selects the nodes whose reboot-needed status matches this value. This status indicates whether the node requires a reboot for the new security configuration to take effect.
Examples
The following example displays the status of a configuration change in a four-node cluster.
The following example shows the output of the command after the cluster reboot process is complete.
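Both outputs described above come from the same invocation, repeated after the cluster reboot process (prompt assumed):
cluster1::> security config status show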
Related references
security config modify on page 494
security key-manager add
Description
Note: This command is deprecated and may be removed in a future release. Use security key-manager external
add-servers instead.
This command adds a key management server at the indicated IP address to its list of four possible active key management
servers. The command fails if there are already four key management servers configured. This command is not supported when
onboard key management is enabled.
Parameters
-address <IP Address> - IP Address
This parameter specifies the IP address of the key management server you want to use to store keys.
[-server-port <integer>] - Server TCP Port
This parameter specifies the TCP port on which the key management server will listen for incoming
connections.
Examples
The following example adds the key management server with address 10.233.1.98, listening for incoming connections on
the default TCP port 5696, to the list of key management servers used by the external key manager:
The following example adds the key management server with address 10.233.1.98, listening for incoming connections on
TCP port 15696, to the list of key management servers used by the external key manager:
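Based on the parameters documented above (and assuming the deprecated command name security key-manager add, with a hypothetical cluster prompt), the two invocations would be:
cluster1::> security key-manager add -address 10.233.1.98
cluster1::> security key-manager add -address 10.233.1.98 -server-port 15696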
security key-manager create-key
Description
Note: This command is deprecated and may be removed in a future release. Use security key-manager key create
instead.
This command creates a new authentication key (AK) and stores it on the configured key management servers. The command
fails if the configured key management servers are already storing more than 128 AKs. If the command fails because the cluster
has more than 128 keys, delete unused keys on your key management servers and try the command again. This command is not
supported when onboard key management is enabled.
Parameters
[-key-tag <text>] - Key Tag
This parameter specifies the key tag that you want to associate with the new authentication key (AK). The
default value is the node name. This parameter can be used to help identify created authentication keys (AKs). For example,
the key-tag parameter of the security key-manager query command can be used to query for a specific key-tag value.
[-prompt-for-key {true|false}] - Prompt for Authentication Passphrase
If you specify this parameter as true, the command prompts you to enter an authentication passphrase
manually instead of generating it automatically. For security reasons, the authentication passphrase you
entered is not displayed at the command prompt. You must enter the authentication passphrase a second time
for verification. To avoid errors, copy and paste authentication passphrases electronically instead of entering
them manually. Data ONTAP saves the resulting authentication key/key ID pair automatically on the
configured key management servers.
Examples
The following example creates an authentication key with the node name as the default key-tag value:
Verifying requirements...
Node: node1
Creating authentication key...
Authentication key creation successful.
Key ID: 00000000000000000200000000000100D0F7C2462D626B739FE81B89F29A092F.
Node: node2
Key manager restore operation initialized.
Successfully restored key information.
Verifying requirements...
Node: node1
Creating authentication key...
Authentication key creation successful.
Key ID: 00000000000000000200000000000100B8297A6189BC24B9B84C1916ED576857.
The following example creates an authentication key with a user-specified authentication passphrase:
Verifying requirements...
Node: node1
Creating authentication key...
Authentication key creation successful.
Key ID: 000000000000000002000000000001006268333F870860128FBE17D393E5083B.
Node: node2
Key manager restore operation initialized.
Successfully restored key information.
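Assuming the deprecated command name security key-manager create-key (and a hypothetical cluster prompt), the invocations for the two examples above would take the form:
cluster1::> security key-manager create-key
cluster1::> security key-manager create-key -prompt-for-key true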
Related references
security key-manager key create on page 524
security key-manager remove
Description
Note: This command is deprecated and may be removed in a future release. Use security key-manager external
remove-servers instead.
This command removes the key management server at the indicated IP address from the list of active key management servers.
If the indicated key management server is the sole storage location for any key that is in use by Data ONTAP, you will be unable
to remove the key server. This command is not supported when onboard key management is enabled.
Parameters
-address <IP Address> - IP Address
This parameter specifies the IP address of the key management server you want to remove from use.
Examples
The following example removes the key server at IP address 10.233.1.198 from the set of configured key management
servers:
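Assuming the deprecated command name security key-manager remove (and a hypothetical cluster prompt), the invocation would be:
cluster1::> security key-manager remove -address 10.233.1.198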
Related references
security key-manager external remove-servers on page 517
security key-manager delete-key-database
Description
Note: This command is deprecated and might be removed in a future release. Use security key-manager onboard
disable instead.
The security key-manager delete-key-database command permanently deletes the onboard key-management
configuration from all nodes of the cluster.
Examples
The following example deletes the onboard key-management configuration from all nodes of the cluster:
Warning: This command will permanently delete all keys from onboard key management.
Do you want to continue? {y|n}: y
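The invocation preceding the warning prompt above would be (cluster prompt assumed):
cluster1::> security key-manager delete-key-database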
Related references
security key-manager onboard disable on page 530
security key-manager delete-kmip-config
Description
Note: This command is deprecated and may be removed in a future release. Use security key-manager external
disable instead.
The security key-manager delete-kmip-config command permanently deletes the Key Management Interoperability
Protocol (KMIP) server configuration from all nodes of the cluster.
Note: The keys stored by the external KMIP servers cannot be deleted by Data ONTAP, and must be deleted by using external
tools.
Examples
The following example deletes the KMIP-server configuration from all nodes of the cluster:
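A representative invocation (cluster prompt assumed):
cluster1::> security key-manager delete-kmip-config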
security key-manager prepare-to-downgrade
Description
Note: This command is deprecated and might be removed in a future release.
The security key-manager prepare-to-downgrade command disables the onboard key management features that are
not supported in releases prior to ONTAP 9.1.0. The features that are disabled are onboard key management support for
Metrocluster configurations and Volume Encryption (VE).
Examples
The following example disables the onboard key management support for Metrocluster configurations and Volume
Encryption (VE):
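A representative invocation (cluster prompt assumed):
cluster1::> security key-manager prepare-to-downgrade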
security key-manager query
Description
Note: This command is deprecated and may be removed in a future release. Use security key-manager key query
instead.
This command displays the IDs of the keys that are stored on the key management servers. This command does not update the
key tables on the node. To refresh the key tables on the nodes with the key management server key tables, run the security
key-manager restore command. This command is not supported when onboard key management is enabled.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter specifies the name of the node that queries the specified key management servers. If this
parameter is not specified, then all nodes will query the specified key management servers.
[-address <IP Address>] - IP Address
This parameter specifies the IP address of the key management server that you want to query.
[-key-id <key id>] - Key ID
If you specify this parameter, then the command displays only the key IDs that match the specified value.
Examples
The following example shows all the keys on all configured key servers, and whether those keys have been restored for all
nodes in the cluster:
Node: node1
Key Manager: 10.0.0.10
Server Status: available
Node: node2
Key Manager: 10.0.0.10
Server Status: available
If any listed keys have "no" in the "Restored" column, run "security key-manager
restore" to restore those keys.
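A minimal invocation matching the output above (prompt style as in the later example) would be:
cluster-1::> security key-manager query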
The following example shows all keys stored on the key server with address "10.0.0.10" from node "node1" with key-tag
"node1":
cluster-1::> security key-manager query -address 10.0.0.10 -node node1 -key-tag node1
Node: node1
Key Manager: 10.0.0.10
Server Status: available
If any listed keys have "no" in the "Restored" column, run "security key-manager
restore" to restore those keys.
The following example shows the Volume Encryption Key (VEK) with key-tag (i.e., volume UUID)
"301a4e57-9efb-11e7-b2bc-0050569c227f" on nodes where that key has not been restored:
Node: node2
Key Manager: 10.0.0.10
Server Status: available
If any listed keys have "no" in the "Restored" column, run "security key-manager restore" to
restore those keys.
Related references
security key-manager key query on page 526
security key-manager restore on page 506
security key-manager restore
Description
Note: This command is deprecated and may be removed in a future release. Use security key-manager external
restore instead.
This command retrieves and restores any current unrestored keys associated with the storage controller from the specified key
management servers. This command is not supported when onboard key management is enabled.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter specifies the name of the node that is to load the key IDs into its internal key table. If not
specified, all nodes retrieve keys into their internal key table.
Examples
The following command restores keys that are currently on a key server but are not stored within the key tables on the
cluster:
Node: node1
Key Manager: 10.0.0.10
Server Status: available
Key IDs
-------------------------------------------------------
000000000000000002000000000001001d71f3b2468d7e16a6e6972d3e6645200000000000000000
000000000000000002000000000005004d03aca5b72cd20b2f83eae1531c605e0000000000000000
Node: node2
Key Manager: 10.0.0.10
Server Status: available
Key IDs
-------------------------------------------------------
000000000000000002000000000001001d71f3b2468d7e16a6e6972d3e6645200000000000000000
000000000000000002000000000005004d03aca5b72cd20b2f83eae1531c605e0000000000000000
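The invocation for the example above would simply be (prompt assumed):
cluster-1::> security key-manager restore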
The following example loads any keys that exist on the key servers with IP address 10.0.0.10 with key-tag "node1" that are not
currently stored in key tables of the nodes in the cluster. In this example, a key with that key-tag was missing from two
nodes in the cluster:
Node: node1
Key Manager: 10.0.0.10
Server Status: available
Key IDs
-------------------------------------------------------
000000000000000002000000000001001d71f3b2468d7e16a6e6972d3e6645200000000000000000
Node: node2
Key Manager: 10.0.0.10
Key IDs
-------------------------------------------------------
000000000000000002000000000001001d71f3b2468d7e16a6e6972d3e6645200000000000000000
Related references
security key-manager external restore on page 518
security key-manager setup
Description
Note: This command is deprecated and might be removed in a future release. To set up an external key manager, use security key-manager external enable; to set up the onboard key manager, use security key-manager onboard enable instead.
The security key-manager setup command enables you to configure key management. Data ONTAP supports two mutually exclusive key management methods: external, via one or more Key Management Interoperability Protocol (KMIP) servers, or internal, via an onboard key manager. This command is used to configure an external or internal key manager. When configuring an external key management server, this command records networking information on all nodes; this information is used during the boot process to retrieve keys needed for booting from the KMIP servers. For onboard key management, this command prompts you to configure a passphrase to protect internal keys in encrypted form.
This command can also be used to refresh missing onboard keys. For example, if you add a node to a cluster that has onboard
key management configured, you will run this command to refresh the missing keys.
For onboard key management in a MetroCluster configuration, if the security key-manager update-passphrase
command is used to update the passphrase on one site, then run the security key-manager setup command with the new
passphrase on the partner site before proceeding with any key-manager operations.
Parameters
[-node <nodename>] - Node Name
This parameter is used only with onboard key management when a refresh operation is required (see command
description). This parameter is ignored when configuring external key management and during the initial setup
of onboard key management.
[-cc-mode-enabled {yes|no}] - Enable Common Criteria Mode?
When configuring onboard key management, this parameter is used to specify that Common Criteria (CC)
mode should be enabled. When CC mode is enabled, you will be required to provide a cluster passphrase that
is between 64 and 256 ASCII characters long, and you will be required to enter that passphrase each time a node reboots.
[-sync-metrocluster-config {yes|no}] - Sync MetroCluster Configuration from Peer
When configuring onboard key management in a MetroCluster configuration, this parameter is used to indicate
that the security key-manager setup command has been performed on the peer cluster, and that the
security key-manager setup command on this cluster should import the peer's configuration.
Examples
The following example creates a configuration for external key management:
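A representative invocation that starts the setup wizard shown below (the prompt and cluster name are illustrative):

```
cluster1::> security key-manager setup
```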
Restart the key manager setup wizard with "security key-manager setup". To
accept a default or omit a question, do not enter a value.
Would you like to configure onboard key management? {yes, no} [yes]: no
Would you like to configure the KMIP server environment? {yes, no} [yes]: yes
The following example creates a configuration for onboard key management:
Restart the key manager setup wizard with "security key-manager setup". To
accept a default or omit a question, do not enter a value.
Would you like to configure onboard key management? {yes, no} [yes]: yes
Enter the cluster-wide passphrase for onboard key management. To continue the
configuration, enter the passphrase, otherwise type "exit":
Re-enter the cluster-wide passphrase:
After configuring onboard key management, save the encrypted configuration data
in a safe location so that you can use it if you need to perform a manual recovery
operation. To view the data, use the "security key-manager backup show" command.
The following example creates a configuration for onboard key management with Common Criteria mode enabled:
Restart the key manager setup wizard with "security key-manager setup". To
accept a default or omit a question, do not enter a value.
Would you like to configure onboard key management? {yes, no} [yes]: yes
Enter the cluster-wide passphrase for onboard key management. To continue the
configuration, enter the passphrase, otherwise type "exit":
Re-enter the cluster-wide passphrase:
After configuring onboard key management, save the encrypted configuration data
in a safe location so that you can use it if you need to perform a manual recovery
operation. To view the data, use the "security key-manager backup show" command.
Related references
security key-manager external enable on page 515
Description
Note: This command is deprecated and may be removed in a future release. Use security key-manager external
show instead.
This command displays the key management servers configured on the cluster. This command is not supported when onboard
key management is enabled.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-status ]
If you specify this parameter, the command displays the status of each key management server.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter specifies the node for which you want to retrieve key management server status. If this parameter is not specified, the command retrieves the key management server status for all nodes.
[-address <IP Address>] - IP Address
If you specify this parameter, the command displays only the key management servers registered at the specified address; multiple matching servers may be displayed.
[-server-port <integer>] - Server TCP Port
If you specify this parameter, the command displays only key servers listening on this port.
Examples
The following example lists all configured key management servers:
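A representative invocation of this deprecated command (the prompt is illustrative):

```
cluster1::> security key-manager show
```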
The following example lists all configured key management servers, the TCP port on which those servers are expected to
listen for incoming KMIP connections, and their server status:
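A representative invocation, using the -status parameter documented above (the prompt is illustrative):

```
cluster1::> security key-manager show -status
```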
Related references
security key-manager external show on page 519
Description
This command displays the list of configured key managers.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver
If you specify this parameter, then the command will list the key manager configured for the given Vserver.
[-key-store <Key Store>] - Key Store
If you specify this parameter, then the command displays only the vservers that have the given key-store
configured.
Examples
The following example shows all configured key managers in the cluster. In the example, the admin vserver has onboard
key management configured and the data vserver "datavs1" has external key management configured:
Description
Note: This command is deprecated and might be removed in a future release. Use security key-manager onboard
update-passphrase instead.
Examples
The following example updates the cluster-wide passphrase used for onboard key management:
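A representative invocation of this deprecated command (the prompt is illustrative):

```
cluster1::> security key-manager update-passphrase
```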
Warning: This command will reconfigure the cluster passphrase for onboard
key-management.
Do you want to continue? {y|n}: y
Related references
security key-manager onboard update-passphrase on page 533
security key-manager setup on page 508
Description
Note: This command is deprecated and might be removed in a future release. Use security key-manager onboard
show-backup instead.
This command displays the backup information for onboard key management, which can be used to recover the cluster in the event of a catastrophic failure. The information displayed is for the cluster as a whole (not individual nodes). This command is
not supported for an external key management configuration.
Examples
The following example displays the onboard key management backup data for the cluster:
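A representative invocation of this deprecated command (the prompt is illustrative):

```
cluster1::> security key-manager backup show
```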
Related references
security key-manager onboard show-backup on page 531
Description
This command modifies the key management configuration options.
Parameters
[-cc-mode-enabled {true|false}] - Enable Common Criteria Mode
This parameter modifies the configuration state of the Onboard Key Manager (OKM) Common Criteria (CC)
mode. CC mode enforces some of the policies required by the Common Criteria "Collaborative Protection
Profile for Full Drive Encryption-Authorization Acquisition" (FDE-AA cPP) and "Collaborative Protection
Profile for Full Drive Encryption-Encryption Engine" documents.
Examples
The following command enables Common Criteria mode in the cluster:
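A representative invocation, using the -cc-mode-enabled parameter documented above (the prompt is illustrative):

```
cluster1::> security key-manager config modify -cc-mode-enabled true
```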
Description
This command displays the key management configuration options.
The "cc-mode-enabled" option reflects the current configuration state for Common Criteria (CC) mode for onboard key
management. CC mode is an operational mode that enforces some of the policies required by the Common Criteria
"Collaborative Protection Profile for Full Drive Encryption-Authorization Acquisition" (FDE-AA cPP) and "Collaborative
Protection Profile for Full Drive Encryption-Encryption Engine" documents. The feature can be enabled when the onboard key
manager is configured using the security key-manager setup command or after the onboard key manager is configured
using the security key-manager config modify command.
Examples
The following example displays the state of all key-manager configuration options:
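A representative invocation (the command name is inferred from the config modify sibling documented nearby; the prompt is illustrative):

```
cluster1::> security key-manager config show
```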
Related references
security key-manager setup on page 508
security key-manager config modify on page 513
Description
This command adds the key management servers at the given hosts and ports to the given Vserver's external key manager's list of up to four key management servers. This command is not supported when external key management is not enabled for the given Vserver.
Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver on which to add the key management servers.
-key-servers <Hostname and Port>, ... - External Key Management Servers
Use this parameter to specify the list of additional key management servers that the external key manager uses
to store keys.
Description
This command disables the external key manager associated with the given Vserver. If the key manager is in use by Data
ONTAP, you cannot disable it. This command is not supported when onboard key management is enabled for the given Vserver.
Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver on which the external key manager is to be disabled.
Examples
The following example removes the external key manager for Vserver cluster-1:
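A representative invocation, using the -vserver parameter documented above (the prompt is illustrative):

```
cluster-1::> security key-manager external disable -vserver cluster-1
```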
Warning: This command will permanently delete the external key management
configuration for Vserver "cluster-1".
Do you want to continue? {y|n}: y
Description
This command enables the external key manager associated with the given Vserver. This command is not supported when a key
manager for the given Vserver is already enabled.
Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver on which the external key manager is to be enabled.
-key-servers <Hostname and Port>, ... - List of External Key Management Servers
Use this parameter to specify the list of up to four key management servers that the external key manager uses
to store keys.
Examples
The following example enables the external key manager for Vserver cluster-1. The command includes three key
management servers. The first key server's hostname is ks1.local and is listening on port 15696. The second key server's
IP address is 10.0.0.10 and is listening on the default port 5696. The third key server's IPv6 address is
fd20:8b1e:b255:814e:32bd:f35c:832c:5a09, and is listening on port 1234.
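A representative invocation for the example described above. The port for the IPv6 address is appended directly after the address, matching the key-server format shown in this document's external show output; this formatting is an assumption:

```
cluster-1::> security key-manager external enable -vserver cluster-1
           -key-servers ks1.local:15696,10.0.0.10,fd20:8b1e:b255:814e:32bd:f35c:832c:5a09:1234
```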
Description
This command modifies the external key manager configuration associated with the given Vserver. This command is not
supported when external key management is not enabled for the given Vserver.
Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver on which the key manager to be modified is located.
[-client-cert <text>] - Name of the Client Certificate
Use this parameter to modify the name of the client certificate that the key management servers use to ensure
the identity of Data ONTAP. If the keys of the new certificate do not match the keys of the existing certificate,
or if the TLS connectivity with key-management servers fails with the new certificate, the operation fails.
Running this command in the diagnostic privilege mode ignores failures and allows the command to complete.
[-server-ca-certs <text>, ...] - Names of the Server CA Certificates
Use this parameter to modify the names of server-ca certificates that Data ONTAP uses to ensure the identity
of the key management servers. Note that the list provided completely replaces the existing list of certificates.
If the TLS connectivity with key-management servers fails with the new list of server-ca certificates, the
operation fails. Running this command in the diagnostic privilege mode ignores failures and allows the
command to complete.
Examples
The following example updates the client certificate used with the key management servers:
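A representative invocation, using the -client-cert parameter documented above (the certificate name is illustrative):

```
cluster-1::> security key-manager external modify -vserver datavs -client-cert newDatavsClientCert
```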
Description
This command modifies configuration information for configured key management servers. This command is supported only when an external key manager has been enabled for the given Vserver.
Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver on which to modify the key management server configuration.
-key-server <Hostname and Port> - External Key Server
Use this parameter to specify the key management server for which the command modifies the configuration.
[-timeout <integer>] - Key Server I/O Timeout
Use this parameter to specify the I/O timeout, in seconds, for the selected key management server.
[-username <text>] - Authentication User Name
Use this parameter to specify the username with which Data ONTAP authenticates with the key management
server.
Examples
The following example modifies the I/O timeout to 45 seconds for Vserver cluster-1, key server keyserver1.local:
The following example modifies the username and passphrase used to authenticate with key server keyserver1.local:
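Representative invocations for the two examples above, using the parameters documented for this command (the username is illustrative):

```
cluster-1::> security key-manager external modify-server -vserver cluster-1
           -key-server keyserver1.local -timeout 45

cluster-1::> security key-manager external modify-server -vserver cluster-1
           -key-server keyserver1.local -username ksuser1
```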
Description
This command removes the key management servers at the given hosts and ports from the given Vserver's external key
manager's list of key management servers. If any of the specified key management servers is the sole storage location for any
key that is in use by Data ONTAP, then you are unable to remove the key server. This command is not supported when external
key management is not enabled for the given Vserver.
Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver on which the external key manager is to be removed.
Examples
The following example removes the key management server keyserver1.local, listening on the default port 5696, and the key management server at IP address 10.0.0.20, listening on port 15696:
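A representative invocation for the example above (the -key-servers parameter name is assumed from the add-servers sibling command; the prompt is illustrative):

```
cluster-1::> security key-manager external remove-servers -vserver cluster-1
           -key-servers keyserver1.local,10.0.0.20:15696
```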
Description
This command retrieves and restores any current unrestored keys associated with the storage controller from the specified key
management servers. This command is not supported when external key management has not been enabled for the Vserver.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter specifies the name of the node that will load unrestored key IDs into its internal key table. If
not specified, all nodes retrieve unrestored keys into their internal key table.
[-vserver <vserver name>] - Vserver Name
This parameter specifies the Vserver for which to list the keys. If not specified, this command restores keys for all Vservers.
[-key-server <Hostname and Port>] - Key Server
If this parameter is specified, this command restores keys from the key management server identified by the
host and port. If not specified, this command restores keys from all available key management servers.
[-key-id <Hex String>] - Key ID
If you specify this parameter, then the command restores only the key IDs that match the specified value.
[-key-tag <text>] - Key Tag
If you specify this parameter, then the command restores only the key IDs that match the specified key-tag.
The key-tag for Volume Encryption Keys (VEKs) is set to the UUID of the encrypted volume. If not specified,
all key ID pairs for any key tags are restored.
Examples
The following command restores keys that are currently on a key server but are not stored within the key tables on the cluster. One key is missing for Vserver cluster-1 on node1, and another key is missing for Vserver datavs on node1 and node2:
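A representative invocation (the prompt is illustrative):

```
cluster-1::> security key-manager external restore
```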
Node: node1
Vserver: cluster-1
Key Server: 10.0.0.1:5696
Key ID
--------------------------------------------------------------------------------
00000000000000000200000000000100a04fc7303d9abd1e0f00896192fa9c3f0000000000000000
Node: node1
Vserver: datavs
Key Server: tenant.keysever:5696
Key ID
--------------------------------------------------------------------------------
00000000000000000200000000000400a05a7c294a7abc1e0911897132f49c380000000000000000
Node: node2
Vserver: datavs
Key Server: tenant.keysever:5696
Key ID
--------------------------------------------------------------------------------
00000000000000000200000000000400a05a7c294a7abc1e0911897132f49c380000000000000000
Description
This command displays the external key management servers configured on the cluster for a given Vserver. No entries are displayed when external key management is not enabled for the given Vserver.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
If you specify this parameter, then the command displays the key management servers for only the given
Vserver.
[-key-server <text>] - Key Server Name with port
If you specify this parameter, then the command displays only the given key management server with the
given host name or IP address listening on the given port.
[-client-cert <text>] - Name of the Client Certificate
If you specify this parameter, then the command displays only the key management servers using a client
certificate with the given name.
[-server-ca-certs <text>, ...] - Names of the Server CA Certificates
If you specify this parameter, then the command displays only the key management servers using server-ca
certificates with the given names.
Examples
The following example lists all configured key management servers for all Vservers:
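A representative invocation (the prompt is illustrative):

```
cluster-1::> security key-manager external show
```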
Vserver: datavs
Client Certificate: datavsClientCert
Server CA Certificates: datavsServerCaCert1, datavsServerCaCert2
Key Server
--------------------------------------------
keyserver.datavs.com:5696
Vserver: cluster-1
Client Certificate: AdminClientCert
Server CA Certificates: AdminServerCaCert
Key Server
--------------------------------------------
10.0.0.10:1234
fd20:8b1e:b255:814e:32bd:f35c:832c:5a09:1234
ks1.local:1234
4 entries were displayed.
The following example lists all configured key management servers with more detail, including timeouts and usernames:
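A representative invocation for the detailed view, assuming the -instance parameter documented above produces the per-server detail shown below:

```
cluster-1::> security key-manager external show -instance
```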
Vserver: datavs
Client Certificate: datavsClientCert
Server CA Certificates: datavsServerCaCert1, datavsServerCaCert2
Key Server: keyserver.datavs.com:5696
Timeout: 25
Username: datavsuser
Vserver: cluster-1
Client Certificate: AdminClientCert
Server CA Certificates: AdminServerCaCert
Key Server: 10.0.0.10:1234
Timeout: 25
Username:
Vserver: cluster-1
Client Certificate: AdminClientCert
Server CA Certificates: AdminServerCaCert
Key Server: fd20:8b1e:b255:814e:32bd:f35c:832c:5a09:1234
Timeout: 25
Username:
Vserver: cluster-1
Client Certificate: AdminClientCert
Server CA Certificates: AdminServerCaCert
Key Server: ks1.local:1234
Timeout: 45
Description
This command displays connectivity information between Data ONTAP nodes and configured external key management servers.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
If you specify this parameter, then the command displays the connectivity information for only the given node.
[-vserver <vserver name>] - Vserver Name
If you specify this parameter, then the command displays the key management servers for only the given
Vserver.
[-key-server <Hostname and Port>] - Key Server
If you specify this parameter, then the command displays the connectivity information for only the given key
management server with the given name listening on the given port.
[-key-server-status {available|not-responding|unknown}] - Key Server Status
If you specify this parameter, then the command displays the connectivity information for only the key
management servers with the given status.
Examples
The following example lists all configured key management servers for all Vservers:
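A representative invocation (the prompt is illustrative):

```
cluster-1::> security key-manager external show-status
```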
Description
This command enables cluster administrators to modify the IP address and route information that the external key manager uses
at boot time to restore keys from external key servers.
Parameters
-node {<nodename>|local} - Node
Use this parameter to modify information on the node that you specify.
-address-type {ipv4|ipv6|ipv6z} - Address Type
Use this parameter to modify information for the address-type that you specify.
[-address <IP Address>] - Local Interface Address
Use this parameter to modify the IP address that the system will use at boot time to restore keys from external
key servers. This parameter implies -override-default true.
{ [-netmask <IP Address>] - Network Mask
Use this parameter to modify the IP netmask that the system will use at boot time to restore keys from external
key servers. This parameter can be used only with address-type ipv4. This parameter implies -override-default true.
| [-netmask-length <integer>]} - Bits in Network Mask
Use this parameter to modify the IP netmask length that the system will use at boot time to restore keys from
external key servers. This parameter implies -override-default true.
[-gateway <IP Address>] - Gateway
Use this parameter to modify the IP gateway that the system will use at boot time to restore keys from external
key servers. This parameter implies -override-default true.
[-port {<netport>|<ifgrp>}] - Network Port
Use this parameter to modify the port that the system will use at boot time to restore keys from external key
servers. The value that you specify cannot be a vlan or ifgrp port. This parameter implies -override-default true.
[-override-default {true|false}] - Override Default Setting?
Use this parameter to modify the system's selection of boot time IP address and route information. When this
value is false, the system will use the information associated with a node management LIF. When this value
is true, then the administrator has chosen to override the defaults.
Examples
The following shows how to modify the port used by node "node2" at boot time to restore keys from external IPv4 key
servers. In the example, IPv6 is not enabled in the cluster, so the -address-type parameter defaults to ipv4.
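A representative invocation for the example above (the port name e0d is illustrative):

```
cluster-1::> security key-manager external boot-interfaces modify -node node2 -port e0d
```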
The following example shows how to modify the IP address and gateway parameters used by node "node1" at boot time to
restore keys from external IPv6 key servers.
cluster-1::*> security key-manager external boot-interfaces modify -node node1 -address-type ipv6 -
address fd20:8b1e:b255:814e:749e:11a3:3bff:5820 -gateway fd20:8b1e:b255:814e::1
Description
This command enables cluster administrators to view the IP address and route information that the external key manager uses at
boot time to restore keys from external key servers.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display information only about boot-time IP address and route information for the node
that you specify.
[-address-type {ipv4|ipv6|ipv6z}] - Address Type
Use this parameter to display information only about boot-time IP address and route information for the
address-type that you specify.
[-address <IP Address>] - Local Interface Address
Use this parameter to display information only about boot-time IP address and route information for the IP
address that you specify.
[-netmask <IP Address>] - Network Mask
Use this parameter to display information only about boot-time IP address and route information for the
network mask that you specify.
[-netmask-length <integer>] - Bits in Network Mask
Use this parameter to display information only about boot-time IP address and route information for the
network mask length that you specify.
[-gateway <IP Address>] - Gateway
Use this parameter to display information only about boot-time IP address and route information for the
gateway that you specify.
[-port {<netport>|<ifgrp>}] - Network Port
Use this parameter to display information only about boot-time IP address and route information for the port
that you specify.
Examples
The following example shows how to display the IP address and route information that the external key manager uses at
boot time to restore keys. In the example, IPv6 is not enabled in the cluster and, as a result, the command displays
information for only the IPv4 address-type. The override-default value is false for all rows, which indicates that the system automatically configured the values based on the node management LIF configuration on the nodes.
The following example shows how to display the IP address and route information that the external key manager uses at
boot time to restore keys. In the example, IPv6 is enabled in the cluster and, as a result, the command displays
information for both the IPv4 and IPv6 address-types. The override-default value is false for most rows, which indicates
that the system automatically configured the values based on the node management LIF configuration on the nodes. The
override-default value for node1 and address-type ipv4 is true, which indicates an administrator has used the security
key-manager external boot-interfaces modify command to override one or more fields, and that the values
may differ from the corresponding node management LIF.
Related references
security key-manager external boot-interfaces modify on page 522
Parameters
[-key-tag <text>] - Key Tag
This parameter specifies the key tag to associate with the new authentication key (AK). The default value is
the node name. This parameter can be used to help identify created authentication keys (AKs). For example,
the security key-manager key query command's key-tag parameter can be used to query for a specific
key-tag value.
[-prompt-for-key {true|false}] - Prompt for Authentication Passphrase
If you specify this parameter as true, then the command prompts you to enter an authentication passphrase
manually instead of generating it automatically. For security reasons, the authentication passphrase you enter is not displayed at the command prompt. You must enter the authentication passphrase a second time
for verification. To avoid errors, copy and paste authentication passphrases electronically instead of entering
them manually. Data ONTAP saves the resulting authentication key/key ID pair automatically on the
configured key management servers.
Examples
The following example creates an authentication key with the node name as the default key-tag value:
The following example creates an authentication key with a user-specified authentication passphrase:
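Representative invocations for the two examples above, assuming the security key-manager key create command name implied by this section's parameters and related references:

```
cluster-1::> security key-manager key create

cluster-1::> security key-manager key create -prompt-for-key true
```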
Related references
security key-manager key query on page 526
Description
This command removes an authentication key from the configured key management servers on the admin Vserver. The
command fails if the given key is currently in use by Data ONTAP. This command is not supported when external key
management is not enabled for the admin Vserver.
Parameters
-key-id <Hex String> - Authentication Key ID
Use this parameter to specify the key ID of the key that you want to remove.
Description
This command provides a mechanism to migrate the existing keys of a data Vserver from the admin Vserver's key manager to
their own key manager, or vice versa. The keys stay the same and the data is not rekeyed; only the keys are migrated from one
Vserver's key manager to another. After a successful migration to the new key manager, the data Vserver keys are deleted from
the previous key manager.
Note: This command currently only supports key migration from the Admin Vserver's onboard key manager to a Data
Vserver's external key manager and vice versa.
Parameters
-from-vserver <vserver name> - Vserver Name
Use this parameter to specify the name of the Vserver whose key manager the keys are migrated from.
-to-vserver <vserver name> - Vserver Name
Use this parameter to specify the name of the Vserver whose key manager the keys are migrated to.
Examples
The following example migrates the keys of "datavs" data Vserver from "cluster-1" admin Vserver's key manager to
"datavs" data Vserver's key manager:
The following example migrates the keys of "datavs" data Vserver from "datavs" data Vserver's key manager to
"cluster-1" admin Vserver's key manager:
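Representative invocations for the two examples above, using the -from-vserver and -to-vserver parameters documented for this command:

```
cluster-1::> security key-manager key migrate -from-vserver cluster-1 -to-vserver datavs

cluster-1::> security key-manager key migrate -from-vserver datavs -to-vserver cluster-1
```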
Description
This command displays the IDs of the keys that are stored in the configured key managers. This command does not update the
key tables on the node.
Examples
The following example shows all of the keys on all configured key servers, and whether or not those keys have been
restored for all nodes in the cluster:
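A representative invocation (the prompt is illustrative):

```
cluster-1::> security key-manager key query
```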
Vserver: cluster-1
Key Manager: onboard
Node: node1
Key Server: ""
Vserver: datavs
Key Manager: external
Node: node1
Key Server: keyserver.datavs.com:5965
Vserver: cluster-1
Key Manager: onboard
Node: node2
Key Server: -
Vserver: datavs
Key Manager: external
Node: node2
Key Server: keyserver.datavs.com:5965
Description
Note: This command is deprecated and might be removed in a future release. Use security key-manager key query
instead.
This command displays the key IDs of the authentication keys (NSE-AK) and SVM keys (SVM-KEK) that are available in
onboard key management. This command is not supported for an external key management configuration.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
Examples
The following example shows all keys stored in the onboard key manager:
Node: node1
Key Store: onboard
Used By
--------
NSE-AK
Key ID: 000000000000000002000000000001001bc4c708e2a89a312e14b6ce6d4d49d40000000000000000
NSE-AK
Key ID: 000000000000000002000000000001005e89099721f8817e65e3aeb68be1bfca0000000000000000
SVM-KEK
Key ID: 00000000000000000200000000000a0046df92864d4cece662b93beb7f5366100000000000000000
Node: node2
Key Store: onboard
Used By
--------
NSE-AK
Key ID: 000000000000000002000000000001001bc4c708e2a89a312e14b6ce6d4d49d40000000000000000
NSE-AK
Key ID: 000000000000000002000000000001005e89099721f8817e65e3aeb68be1bfca0000000000000000
The following example shows a detailed view of all keys stored in the onboard key manager:
Node: node1
Key Store: onboard
Key ID Key Tag Used By Stored In Restored
------ --------------- ---------- ------------------------------------ --------
000000000000000002000000000001001bc4c708e2a89a312e14b6ce6d4d49d40000000000000000
- NSE-AK local-cluster yes
000000000000000002000000000001005e89099721f8817e65e3aeb68be1bfca0000000000000000
- NSE-AK local-cluster yes
00000000000000000200000000000a0046df92864d4cece662b93beb7f5366100000000000000000
- SVM-KEK local-cluster yes
Node: node2
Key Store: onboard
Key ID Key Tag Used By Stored In Restored
------ --------------- ---------- ------------------------------------ --------
000000000000000002000000000001001bc4c708e2a89a312e14b6ce6d4d49d40000000000000000
- NSE-AK local-cluster yes
000000000000000002000000000001005e89099721f8817e65e3aeb68be1bfca0000000000000000
- NSE-AK local-cluster yes
00000000000000000200000000000a0046df92864d4cece662b93beb7f5366100000000000000000
- SVM-KEK local-cluster yes
6 entries were displayed.
Related references
security key-manager setup on page 508
security key-manager key query on page 526
Description
This command disables the onboard key manager associated with the admin Vserver and permanently deletes the onboard key
management configuration associated with the admin Vserver.
Examples
The following example disables the onboard key manager for the admin Vserver:
Warning: This command will permanently delete all keys from onboard key
management.
Do you want to continue? {y|n}: y
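The command line that triggers this prompt is missing from the extracted example; based on the command described here, it would be:
cluster1::> security key-manager onboard disable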
Description
This command enables the onboard key manager for the admin Vserver.
Parameters
[-cc-mode-enabled {yes|no}] - Enable Common Criteria Mode?
Use this parameter to specify whether Common Criteria (CC) mode should be enabled. When CC
mode is enabled, you are required to provide a cluster passphrase that is between 64 and 256 ASCII characters
long, and you are required to enter that passphrase each time a node reboots. CC mode cannot be enabled in a
MetroCluster configuration.
Examples
The following example enables the Onboard Key Manager for the admin Vserver cluster-1:
After configuring onboard key management, save the encrypted configuration data in a safe location
so that you can use it if you need to perform a manual recovery operation. To view the data, use
the "security key-manager onboard show-backup" command.
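The command line for this example is not shown in the extracted text; a representative interactive session (the exact prompt wording varies by release) is:
cluster1::> security key-manager onboard enable
Enter the cluster-wide passphrase for onboard key management:
Re-enter the cluster-wide passphrase: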
Description
This command displays the backup information for onboard key management for the admin Vserver, which can be used to
recover the cluster in case of catastrophic situations. The information displayed is for the cluster as a whole (not individual
nodes).
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example displays the onboard key management backup data for the admin Vserver:
Description
This command synchronizes missing onboard keys on any node in the cluster. For example, if you add a node to a cluster that
has onboard key management configured, you should then run this command to synchronize the keys. In a MetroCluster
configuration, if the security key-manager onboard enable command is used to enable onboard key management on
one site, then run the security key-manager onboard sync command on the partner site. In a MetroCluster
configuration, if the security key-manager onboard update-passphrase command is used to update the passphrase on
one site, then run this command with the new passphrase on the partner site before proceeding with any key management
operations.
Examples
The following example synchronizes the onboard key manager key database across all nodes in the cluster. In a
MetroCluster configuration, this command synchronizes nodes in the local site.
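The command line is omitted in the extracted text; based on the command described here, the invocation is (the passphrase prompt is representative):
cluster1::> security key-manager onboard sync
Enter the cluster-wide passphrase for onboard key management: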
Related references
security key-manager onboard enable on page 531
security key-manager onboard update-passphrase on page 533
Description
This command provides a way to update the cluster-wide passphrase that is used for onboard key management and initially
created by running the security key-manager onboard enable command. This command prompts for the existing
passphrase, and if that passphrase is correct then the command prompts for a new passphrase. When onboard key management
is enabled for the admin Vserver, run the security key-manager onboard show-backup command after updating the
passphrase and save the output for emergency recovery scenarios. When the security key-manager onboard update-
passphrase command is executed in a MetroCluster configuration, then run the security key-manager onboard sync
command with the new passphrase on the partner site before proceeding with any key-manager operations. This allows the
updated passphrase to be replicated to the partner site.
Examples
The following example updates the cluster-wide passphrase used for onboard key management:
Warning: This command will reconfigure the cluster passphrase for onboard
key management.
Do you want to continue? {y|n}: y
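The command line and passphrase prompts that accompany this warning are not shown; a representative session (prompt wording varies by release) is:
cluster1::> security key-manager onboard update-passphrase
Enter current passphrase:
Enter new passphrase: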
Related references
security key-manager onboard enable on page 531
security key-manager onboard show-backup on page 531
security key-manager onboard sync on page 532
Description
The security login create command creates a login method for the management utility. A login method consists of a user
name, an application (access method), and an authentication method. A user name can be associated with multiple applications.
It can optionally include an access-control role name. If an Active Directory, LDAP, or NIS group name is used, then the login method applies to all users in that group.
Parameters
-vserver <Vserver Name> - Vserver
This specifies the Vserver name of the login method.
-user-or-group-name <text> - User Name or Group Name
This specifies the user name or Active Directory, LDAP, or NIS group name of the login method. The Active
Directory, LDAP, or NIS group name can be specified only with the domain or nsswitch authentication
method and the ontapi and ssh applications. If the user is a member of multiple groups provisioned in the
security login table, then the user will get access to a combined list of the commands authorized for the
individual groups.
-application <text> - Application
This specifies the application of the login method. Possible values include console, http, ontapi, rsh, snmp,
service-processor, ssh, and telnet.
Setting this parameter to service-processor grants the user access to the Service Processor (SP). Because
the SP supports only password authentication, when you set this parameter to service-processor, you
must also set the -authentication-method parameter to password. Vserver user accounts cannot access the SP.
Therefore, you cannot use the -vserver parameter when you set this parameter to service-processor.
-authentication-method <text> - Authentication Method
This specifies the authentication method for login. Possible values include the following:
• password - Password
Examples
The following example illustrates how to create a login that has the user name monitor, the application ssh, the
authentication method password, and the access-control role guest for Vserver vs:
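The command line itself is missing from the extracted example; based on the parameters documented above, it would be:
cluster1::> security login create -vserver vs -user-or-group-name monitor -application ssh -authentication-method password -role guest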
The following example illustrates how to create a login that has the user name monitor, the application ontapi, the
authentication method password, and the access-control role vsadmin for Vserver vs:
The following example illustrates how to create a login that has the user name monitor, the application ssh, the
authentication method publickey, and the access-control role guest for Vserver vs:
The following example illustrates how to create a login that has the user name monitor, the application http, the
authentication method cert, and the access-control role admin for Vserver vs:
The following example illustrates how to create a login that has the Active Directory group name adgroup in DOMAIN1,
the application ssh, the authentication method domain, and the access-control role vsadmin for Vserver vs:
The following example illustrates how to create a login that has a group name nssgroup in the LDAP or NIS server, the
application ontapi, the authentication method nsswitch, and the access-control role vsadmin for Vserver vs. Here
is-ns-switch-group must be set to yes:
The following example illustrates how to create a login that has the user name monitor, the application ssh, the
authentication method password, the second authentication method none and the access-control role vsadmin for
Vserver vs:
Description
The security login delete command deletes a login method.
Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver name of the login method.
-user-or-group-name <text> - User Name or Group Name
This specifies the user name or Active Directory, LDAP, or NIS group name of the login method that is to be
deleted. A user name can be associated with multiple applications.
-application <text> - Application
This specifies the application of the login method. Possible values include console, http, ontapi, rsh, snmp,
service-processor, ssh, and telnet.
-authentication-method <text> - Authentication Method
This specifies the authentication method of the login method. Possible values include the following:
• password - Password
Examples
The following example illustrates how to delete a login that has the username guest, the application ssh, and the
authentication method password for Vserver vs:
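The command line itself is missing from the extracted example; based on the parameters documented above, it would be:
cluster1::> security login delete -vserver vs -user-or-group-name guest -application ssh -authentication-method password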
The following example illustrates how to delete a login that has the username guest, the application ontapi, and the
authentication method cert for Vserver vs:
The following example illustrates how to delete a login that has the Active Directory group name adgroup in DOMAIN1,
the application ssh, and the authentication method domain for Vserver vs:
The following example illustrates how to delete a login that has a group name nssgroup in the LDAP or NIS server, the
application ontapi, and the authentication method nsswitch for Vserver vs:
Description
The security login expire-password command expires a specified user account password, forcing the user to change
the password upon next login.
Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver to which the user account belongs.
-username <text> - Username
This specifies the user name of the account whose password you want to expire.
[-hash-function {sha512|sha256}] - Password Hash Function
This optionally specifies the password-hashing algorithm used for encrypting the passwords that you want to
expire. The supported values are as follows:
Examples
The following command expires the password of the 'jdoe' user account which belongs to the 'vs1' Vserver.
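The command line itself is missing from the extracted example; based on the parameters documented above, it would be:
cluster1::> security login expire-password -vserver vs1 -username jdoe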
The following command expires all user account passwords that are encrypted with the MD5 hash function.
The following command expires the password of any Vserver's user account named 'jdoe' that is encrypted with the MD5
hash function.
The following command expires the password of the 'vs1' Vserver user account named 'jdoe' that is encrypted with the
MD5 hash function.
cluster1::> security login expire-password -vserver vs1 -username jdoe -hash-function md5
The following command expires all user account passwords that are encrypted with the MD5 hash function and enforce
the new password hash policy after 180 days.
Description
The security login lock command locks a specified account, preventing it from accessing the management interface.
Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver to which the user account belongs.
-username <text> - Username
This specifies the user name of the account that is to be locked.
Examples
The following example locks a user account named 'jdoe' which belongs to the Vserver 'vs1'.
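The command line itself is missing from the extracted example; based on the parameters documented above, it would be:
cluster1::> security login lock -vserver vs1 -username jdoe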
Description
The security login modify command modifies the access-control role name of a login method. If the user is a member of
multiple groups provisioned in the security login table, then the user will get access to a combined list of the commands
authorized for the individual groups.
Parameters
-vserver <Vserver Name> - Vserver
This specifies the Vserver name of the login method.
-user-or-group-name <text> - User Name or Group Name
This specifies the user name, Active Directory, LDAP, or NIS group name of the login method that is to be
modified. A user name can be associated with multiple applications. If the user is a member of multiple groups
provisioned in the security login table, then the user will get access to a combined list of the commands
authorized for the individual groups.
-application <text> - Application
This specifies the application of the login method. Possible values include console, http, ontapi, rsh, snmp,
service-processor, ssh, and telnet.
-authentication-method <text> - Authentication Method
This specifies the authentication method of the login method. Possible values include the following:
• password - Password
• publickey - Public-key authentication
Examples
The following example illustrates how to modify a login method that has the user name guest, the application ontapi,
and the authentication method password to use the access-control role guest for Vserver vs:
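The command line itself is missing from the extracted example; based on the parameters documented above, it would be:
cluster1::> security login modify -vserver vs -user-or-group-name guest -application ontapi -authentication-method password -role guest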
The following example illustrates how to modify a login method that has the user name guest, the application ssh, and
the authentication method publickey to use the access-control role vsadmin for Vserver vs:
The following example illustrates how to modify a login method that has the group name nssgroup, the application
ontapi, and the authentication method nsswitch to use the access-control role readonly for Vserver vs. Here is-ns-
switch-group must be set to yes:
The following example illustrates how to modify a login method that has the user name guest, the application ssh, and
the authentication method publickey to use the second-authentication-method password for Vserver vs:
The following example illustrates how to modify a login method to have individual authentication methods that have the
user name guest, the application ssh, and the authentication method publickey to use the second-authentication-
method none for Vserver vs:
Description
The security login password command resets the password for a specified user. The command prompts you for the user's
old and new password.
Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver name of the login method.
-username <text> - Username
This optionally specifies the user name whose password is to be changed. If you do not specify a user, the
command defaults to the user name you are currently using.
Examples
The following command initiates a password change for the 'admin' user account of the 'vs' Vserver.
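The command line and interactive prompts are missing from the extracted example; a representative session (prompt wording varies by release) is:
cluster1::> security login password -vserver vs -username admin
Please enter your old password:
Please enter a new password:
Please enter it again: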
The following command initiates a password change for the 'vs' Vserver user account named 'admin'. The new password
will be encrypted by using the SHA512 password-hashing algorithm.
The following command initiates a password change for the 'vs' Vserver user account named 'admin'. The new password
will be encrypted by using the SHA256 password-hashing algorithm.
Description
If the password of the system administrator is not encrypted with an encryption type supported by releases earlier than ONTAP
9.0, this command prompts the administrator for a new password and encrypts it using a supported encryption type on each
cluster or at each site in a MetroCluster configuration. In a MetroCluster configuration, this command must be run on both sites.
The passwords for all other users are marked as "expired". This causes them to be re-encrypted using a compatible encryption
type. The expired passwords are changed to an internally generated password. The administrator must change the passwords
for all users before the users can log in. The users are prompted to change their password upon login. This command disables the
logging of unsuccessful login attempts. The command must be run by a user with the cluster admin role from a clustershell
session on the console device. This user must be unlocked. If you fail to run this command, the revert process fails.
Parameters
-disable-feature-set <downgrade version> - Data ONTAP Version
This parameter specifies the Data ONTAP version that introduced the password feature set.
Warning: This command will disable the MOTD feature that prints unsuccessful login
attempts.
Do you want to continue? {y|n}: y
cluster1::*>
The following command prompts the system administrator to enter a password and encrypts it with the hashing algorithm
supported by releases earlier than Data ONTAP 9.0.
cluster1::*>
Description
The security login show command displays the following information about user login methods:
• User name
• Role name
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
Examples
The example below illustrates how to display information about all user login methods:
Vserver: cluster1
Second
User/Group Authentication Acct Authentication
Name Application Method Role Name Locked Method
-------------- ----------- ------------- ---------------- ------ --------------
admin console password admin no none
admin http password admin no none
admin ontapi password admin no none
admin service-processor
password admin no none
admin ssh password admin no none
autosupport console password autosupport no none
Vserver: vs1.netapp.com
Second
User/Group Authentication Acct Authentication
Name Application Method Role Name Locked Method
-------------- ----------- ------------- ---------------- ------ --------------
vsadmin http password vsadmin yes none
vsadmin ontapi password vsadmin yes none
vsadmin ssh password vsadmin yes none
9 entries were displayed.
Description
The security login unlock command unlocks a specified account, enabling it to access the management interface.
Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver to which the user account belongs.
-username <text> - Username
This specifies the user name of the account that is to be unlocked.
Examples
The following command unlocks a user account named jdoe which belongs to the Vserver vs1.
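The command line itself is missing from the extracted example; based on the parameters documented above, it would be:
cluster1::> security login unlock -vserver vs1 -username jdoe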
Description
The security login whoami command displays the name and role of the user logged in at the current console session. It
takes no options or other parameters.
Examples
The following example shows that the current session is logged in by using the 'admin' user account:
cluster1::> whoami
(security login whoami)
User: admin
Role: admin
Related references
security login show on page 542
security login create on page 533
Description
The security login banner modify command modifies the login banner. The login banner is printed just before the
authentication step during the SSH and console device login process.
Parameters
-vserver <Vserver Name> - Vserver Name
Use this parameter to specify the Vserver whose banner will be modified. Use the name of the cluster admin
Vserver to modify the cluster-level message. The cluster-level message is used as the default for data Vservers
that do not have a message defined.
{ [-message <text>] - Login Banner Message
This optional parameter can be used to specify a login banner message. If the cluster has a login banner
message set, the cluster login banner will be used by all data Vservers as well. Setting a data Vserver's login
banner will override the display of the cluster login banner. To reset a data Vserver's login banner to use the
cluster login banner, use this parameter with the value "-".
Examples
This example shows how to enter a login banner interactively:
cluster1::>
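The interactive transcript was lost in extraction; a representative session is:
cluster1::> security login banner modify -vserver cluster1
(The CLI then prompts you to enter the banner text line by line; enter the message and terminate input as instructed on screen.)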
Description
The security login banner show command displays the login banner.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Selects login banners that match the specified value. Use the name of the admin Vserver to specify the cluster-
level login banner.
[-message <text>] - Login Banner Message
Selects login banners that match the specified value. By default, this command will not display unconfigured,
or empty, login banners. To display all banners, specify -message *.
Examples
The following shows sample output from this command:
cluster1::>
Description
This command establishes a gateway (tunnel) for authenticating Windows Active Directory (AD) domain users' access to the
cluster.
Before using this command to establish the tunnel, the following must take place:
• You must use the security login create command to create one or more AD domain user accounts that will be granted
access to the cluster.
◦ The -authmethod parameter of the security login create command must be set to 'domain'.
◦ The -username parameter of the security login create command must be set to a valid AD domain user account
that is defined in a Windows Domain Controller's Active Directory. The user account must be specified in the format of
<domainname>\<username>, where "domainname" is the name of the CIFS domain server.
• You must identify or create a CIFS-enabled data Vserver that will be used for Windows authentication with the Active
Directory server. This Vserver is the tunnel Vserver, and it must be running for this command to succeed.
Only one Vserver can be used as the tunnel. If you attempt to specify more than one Vserver for the tunnel, Data ONTAP
returns an error. If the tunnel Vserver is stopped or deleted, AD domain users' authentication requests to the cluster will fail.
Parameters
-vserver <vserver> - Authentication Tunnel Vserver
This parameter specifies a data Vserver that has been configured with CIFS. This Vserver will be used as the
tunnel for authenticating AD domain users' access to the cluster.
Examples
The following commands create an Active Directory domain user account ('DOMAIN1\Administrator') for the 'cluster1'
cluster, create a data Vserver ('vs'), create a CIFS server ('vscifs') for the Vserver, and specify 'vs' as the tunnel for
authenticating the domain user access to the cluster.
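The command lines for this example are missing from the extracted text. A hedged sketch of the two key steps follows (the Vserver and CIFS server setup commands are omitted, and the role name admin is illustrative, not taken from the original):
cluster1::> security login create -vserver cluster1 -user-or-group-name DOMAIN1\Administrator -application ssh -authentication-method domain -role admin
cluster1::> security login domain-tunnel create -vserver vs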
Description
The security login domain-tunnel delete command deletes the tunnel established by the security login
domain-tunnel create command. An error message will be generated if no tunnel exists.
Examples
The following command deletes the tunnel established by security login domain-tunnel create.
Related references
security login domain-tunnel create on page 547
Description
The security login domain-tunnel modify command modifies or replaces the tunnel Vserver. If a tunnel Vserver is not
already specified, it sets the current tunnel Vserver to the specified Vserver; otherwise, it replaces the current tunnel Vserver with the
Vserver that you specify. If the tunnel Vserver is changed, authentication requests via the previous Vserver will fail. See security
login domain-tunnel create for more information.
Parameters
[-vserver <vserver>] - Authentication Tunnel Vserver
This parameter specifies a Vserver that has been configured with CIFS and is associated with a Windows
Domain Controller's Active Directory authentication. This Vserver will be used as an authentication tunnel for
login accounts so that they can be used with administrative Vservers.
Examples
The following command modifies the tunnel Vserver for the administrative Vserver.
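The command line itself is missing from the extracted example; with 'vs' as an illustrative Vserver name, it would be:
cluster1::> security login domain-tunnel modify -vserver vs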
Related references
security login domain-tunnel create on page 547
Description
The security login domain-tunnel show command shows the tunnel Vserver that was specified by the security
login domain-tunnel create or security login domain-tunnel modify command.
Examples
The example below shows the tunnel Vserver, vs, that is currently used as an authentication tunnel. The output informs
you that the table is currently empty if a tunnel Vserver has not been specified.
Related references
security login domain-tunnel create on page 547
security login domain-tunnel modify on page 548
Description
The security login motd modify command updates the message of the day (MOTD).
There are two categories of MOTDs: the cluster-level MOTD and the data Vserver-level MOTD. A user logging in to a data
Vserver's clustershell will potentially see two messages: the cluster-level MOTD followed by the Vserver-level MOTD for that
Vserver. The cluster administrator can enable or disable the cluster-level MOTD on a per-Vserver basis. If the cluster
administrator disables the cluster-level MOTD for a Vserver, a user logging into the Vserver will not see the cluster-level
message. Only a cluster administrator can enable or disable the cluster-level message.
Parameters
-vserver <Vserver Name> - Vserver Name
Use this parameter to specify the Vserver whose MOTD will be modified. Use the name of the cluster admin
Vserver to modify the cluster-level message.
{ [-message <text>] - Message of the Day (MOTD)
This optional parameter can be used to specify a message. If you use this parameter, the MOTD cannot contain
newlines (also known as end of lines (EOLs) or line breaks). If you do not specify any parameter other than
the -vserver parameter, you will be prompted to enter the message interactively. Messages entered
interactively can contain newlines. Non-ASCII characters must be provided as Unicode UTF-8.
The message may contain dynamically generated content using the following escape sequences:
• \C - Cluster name.
• \m - Machine architecture.
• \O - DNS domain name of the node. Note that the output is dependent on the network configuration and
may be empty.
• \u - Number of active clustershell sessions on the local node. For the cluster admin: all clustershell users.
For the data Vserver admin: only active sessions for that data Vserver.
• \W - Active sessions across the cluster for the user logging in ('who').
Examples
This example shows how to enter a MOTD interactively:
cluster1::>
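The interactive transcript was lost in extraction; a representative session is:
cluster1::> security login motd modify -vserver cluster1
(The CLI then prompts you to enter the message interactively; messages entered this way can contain newlines.)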
Description
The security login motd show command displays information about the cluster-level and data Vserver clustershell
message of the day (MOTD).
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Selects the message of the day entries that match this parameter value. Use the name of the cluster admin
Vserver to see the cluster-level MOTD.
[-message <text>] - Message of the Day (MOTD)
Selects the message of the day entries that match this parameter value.
[-is-cluster-message-enabled {true|false}] - Is Cluster-level Message Enabled?
Selects the message of the day entries that match this parameter value.
Examples
The following example displays all message of the day entries:
Vserver: vs0
Is the Cluster MOTD Displayed?: true
Message
-----------------------------------------------------------------------------
Welcome to the Vserver!
Description
The security login publickey create command associates an existing public key with a user account. This command
requires that you enter a valid OpenSSH-formatted public key, a user name, an index number, and, optionally, a comment.
Parameters
-vserver <Vserver Name> - Vserver
This parameter optionally specifies the Vserver of the user for whom you are adding the public key.
-username <text> - Username
This parameter specifies the name of the user for whom you are adding the public key. If you do not specify a
user, the user named admin is specified by default.
[-index <integer>] - Index
This parameter specifies an index number for the public key. The default value is the next available index
value, starting with zero if it is the first public key created for the user.
-publickey <certificate> - Public Key
This specifies the OpenSSH public key, which must be enclosed in double quotation marks.
[-comment <text>] - Comment
This optionally specifies comment text for the public key. Note that comment text should be enclosed in
quotation marks.
Examples
The following command associates a public key with a user named tsmith for Vserver vs1. The public key is assigned
index number 5 and the comment text is “This is a new key”.
cluster1::> security login publickey create -vserver vs1 -username tsmith -index 5 -publickey
"ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAspH64CYbUsDQCdW22JnK6J
/vU9upnKzd2zAk9C1f7YaWRUAFNs2Qe5lUmQ3ldi8AD0Vfbr5T6HZPCixNAIza
FciDy7hgnmdj9eNGedGr/JNrftQbLD1hZybX+72DpQB0tYWBhe6eDJ1oPLob
ZBGfMlPXh8VjeU44i7W4+s0hG0E=tsmith@publickey.example.com"
-comment "This is a new key"
Description
The security login publickey delete command deletes a public key for a specific user. To delete a public key, you
must specify a user name and index number.
Parameters
-vserver <Vserver Name> - Vserver
This parameter optionally specifies the Vserver of the user whose public key you are deleting.
Examples
The following command deletes the public key for the user named tsmith with the index number 5.
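The command line itself is missing from the extracted example; based on the parameters documented above (the Vserver name vs1 is illustrative, as the example does not name one), it would be:
cluster1::> security login publickey delete -vserver vs1 -username tsmith -index 5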
Description
The security login publickey load-from-uri command loads one or more public keys from a Universal Resource
Identifier (URI). To load public keys from a URI, you must specify a user name, the URI from which to load them, and
optionally, whether you want to overwrite the existing public keys.
Parameters
-vserver <vserver name> - Vserver
This parameter optionally specifies the Vserver for the user associated with the public keys.
-username <text> - Username
This parameter specifies the username for the public keys. If you do not specify a username, the username
"admin" is used by default.
-uri {(ftp|http)://(hostname|IPv4 Address|'['IPv6 Address']')...} - URI to load from
This parameter specifies the URI from which the public keys will be loaded.
-overwrite {true|false} - Overwrite Entries
This parameter optionally specifies whether you want to overwrite existing public keys. The default value for
this parameter is false. If the value is true and you confirm to overwrite, then the existing public keys are
overwritten with the new public keys. If you use the value false or do not confirm the overwrite, then newly
loaded public keys are appended to the list of existing public keys using the next available index.
Examples
The following command shows how to load public keys for the user named tsmith from the URI ftp://ftp.example.com/
identity.pub. This user's existing public keys are not overwritten.
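The command line itself is missing from the extracted example; based on the parameters documented above (the Vserver name vs0 is taken from the overwrite warning shown later), it would be:
cluster1::> security login publickey load-from-uri -vserver vs0 -username tsmith -uri ftp://ftp.example.com/identity.pub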
The following command shows how to load public keys for the user named tsmith from the URI ftp://
ftp.example.com/identity.pub. This user's existing public keys are overwritten if the user enters 'y' or 'Y'. The user's
existing public keys are not overwritten if the user enters 'n' or 'N'; in that case, the newly loaded public keys are
appended to the list of existing public keys using the next available index. The user name and password credentials that
you provide when you use this command are the credentials for accessing the server specified by the URI.
Enter User:
Enter Password:
Warning: You are about to overwrite the existing publickeys for the user
"tsmith" in Vserver "vs0". Do you want to proceed? {y|n}:
Description
The security login publickey modify command modifies a public key and optionally its comment text.
Parameters
-vserver <Vserver Name> - Vserver
Specifies the Vserver for the user associated with the public key.
-username <text> - Username
Specifies the username for the public key. If you do not specify a username, the username 'admin' is used by
default.
-index <integer> - Index
Specifies the index number of the public key. The index number of the public key can be found by using the
security login publickey show command.
[-publickey <certificate>] - Public Key
Specifies the new public key. You must enclose the new public key in double quotation marks.
[-comment <text>] - Comment
Specifies the new comment text for the public key.
Examples
The following command modifies the public key at index number 10 for the user named tsmith of Vserver vs1.
cluster1::> security login publickey modify -vserver vs1 -username tsmith -index 10 -publickey
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDDD+pFzFgV/2dlowKRFgym9K910H/u+BVTGitCtHteHyo8thmaXT
1GLCzaoC/12+XXiYKMRhJ00S9Svo4QQKUXHdCPXFSgR5PnAs39set39ECCLzmduplJnkWtX96pQH/bg2g3upFcdC6z9
c37uqFtNVPfv8As1Si/9WDQmEJ2mRtJudJeU5GZwZw5ybgTaN1jxDWus9SO2C43F/vmoCKVT529UHt4/ePcaaHOGTiQ
O8+Qmm59uTgcfnpg53zYkpeAQV8RdYtMdWlRr44neh1WZrmW7x5N4nXNvtEzr9cvb9sJyqTX1CkQGfDOdb+7T7y3X7M
if/qKQY6FsovjvfZD"
Related references
security login publickey show on page 554
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver
Selects the public keys that match this parameter value.
[-username <text>] - Username
Selects the public keys that match this parameter value.
[-index <integer>] - Index
Selects the public keys that match this parameter value.
[-publickey <certificate>] - Public Key
Selects the public keys that match this parameter value.
[-fingerprint <text>] - Hex Fingerprint
Selects the public keys that match this parameter value.
[-bubblebabble <text>] - Bubblebabble Fingerprint
Selects the public keys that match this parameter value.
[-comment <text>] - Comment
Selects the public keys that match this parameter value.
Examples
The example below displays public key information for the user named tsmith.
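For context on the -fingerprint field above: for classic SSH fingerprints, the hex fingerprint is the MD5 digest of the base64-decoded key material, printed as colon-separated hex pairs. The sketch below assumes that format (the key line used is synthetic, not a real key):

```python
import base64
import hashlib

def hex_fingerprint(openssh_pubkey_line):
    # An OpenSSH public key line looks like: "ssh-rsa <base64-blob> [comment]".
    blob = base64.b64decode(openssh_pubkey_line.split()[1])
    digest = hashlib.md5(blob).hexdigest()
    # Classic MD5 fingerprint style: colon-separated hex byte pairs.
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Demo with synthetic key material (not a valid SSH key).
demo_line = "ssh-rsa " + base64.b64encode(b"example key material").decode()
fp = hex_fingerprint(demo_line)
```

Newer OpenSSH releases default to SHA-256 fingerprints; the MD5 form shown here matches the traditional colon-separated display.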
Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver name associated with the role.
-role <text> - Role Name
This specifies the role that is to be created.
-cmddirname <text> - Command / Directory
This specifies the command or command directory to which the role has access. To specify the default setting,
use the special value "DEFAULT".
[-access <Access>] - Access Level
This optionally specifies an access level for the role. Possible access level settings are none, readonly, and all.
The default setting is all.
[-query <query>] - Query
This optionally specifies the object that the role is allowed to access. The query object must be applicable to
the command or directory name specified by -cmddirname. The query object must be enclosed in double
quotation marks (""), and it must be a valid field name.
Examples
The following command creates an access-control role named "admin" for the vs1.example.com Vserver. The role has all
access to the "volume" command but only within the "aggr0" aggregate.
cluster1::> security login role create -role admin -cmddirname volume -query "-aggr aggr0" -access
all -vserver vs1.example.com
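One way to picture how a role's command-directory entries apply to a typed command is a longest-match lookup with "DEFAULT" as the fallback. This is an illustrative sketch under that assumption, not ONTAP internals; the entries shown are hypothetical:

```python
def access_for(role_entries, command):
    """role_entries: dict mapping command directory -> access level
    ('none', 'readonly', or 'all'). The most specific match wins."""
    best = role_entries.get("DEFAULT", "none")   # fallback entry
    best_len = -1
    for cmddir, access in role_entries.items():
        if cmddir == "DEFAULT":
            continue
        if command == cmddir or command.startswith(cmddir + " "):
            if len(cmddir) > best_len:           # prefer the longer (more specific) entry
                best, best_len = access, len(cmddir)
    return best

entries = {"DEFAULT": "readonly", "volume": "all", "volume snapshot": "none"}
```

Here "volume create" would resolve to all via the "volume" entry, while "volume snapshot create" would resolve to none via the more specific entry.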
Related references
security login modify on page 539
security login create on page 533
Description
The security login role delete command deletes an access-control role.
Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver name associated with the role.
-role <text> - Role Name
This specifies the role that is to be deleted.
Examples
The following command deletes an access-control role with the role name readonly and the command access "volume" for
Vserver vs.example.com.
cluster1::> security login role delete -role readonly -cmddirname volume -vserver vs.example.com
Description
The security login role modify command modifies an access-control role.
Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver name associated with the role.
-role <text> - Role Name
This specifies the role that is to be modified.
-cmddirname <text> - Command / Directory
This specifies the command or command directory to which the role has access. To specify the default setting
for a role, use the special value "DEFAULT". This value can be modified only for the roles created for the
admin Vserver.
[-access <Access>] - Access Level
This optionally specifies a new access level for the role. Possible access level settings are none, readonly, and
all. The default setting is all.
[-query <query>] - Query
This optionally specifies the object that the role is allowed to access. The query object must be applicable to
the command or directory name specified by -cmddirname. The query object must be enclosed in double
quotation marks (""), and it must be a valid field name.
Examples
The following command modifies an access-control role with the role name readonly and the command access "volume"
to have the access level readonly for Vserver vs.example.com:
cluster1::> security login role modify -role readonly -cmddirname volume -access readonly -vserver
vs.example.com
Examples
The following command restores the predefined roles of all Vservers running releases earlier than Data ONTAP 8.3.2.
Description
The security login role show command displays the following information about access-control roles:
• Role name
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver
Selects the roles that match this parameter value.
[-role <text>] - Role Name
Selects the roles that match this parameter value. If this parameter and the -cmddirname parameter are both
used, the command displays detailed information about the specified access-control role.
[-cmddirname <text>] - Command / Directory
Selects the roles that match this parameter value. If this parameter and the -role parameter are both used, the
command displays detailed information about the specified access-control role.
[-access <Access>] - Access Level
Selects the roles that match this parameter value.
[-query <query>] - Query
Selects the roles that match this parameter value.
Examples
The example below displays information about all access-control roles:
Description
The security login role show-ontapi command displays Data ONTAP APIs (ONTAPIs) and the CLI commands that
they are mapped to.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-ontapi <text>] - ONTAPI Name
Use this parameter to view the corresponding CLI command for the specified API.
[-command <text>] - CLI Command
Use this parameter to view the corresponding API or APIs for the specified CLI command.
Examples
The following command displays all Data ONTAP APIs and their mapped CLI commands:
The following example displays the CLI command that is mapped to the specified Data ONTAP API:
Description
The security login role config modify command modifies user account and password restrictions.
For the password character restrictions documented below (uppercase, lowercase, digits, etc.), the term "characters" refers to
ASCII-range characters only - not extended characters.
Parameters
-vserver <vserver name> - Vserver
This specifies the Vserver name associated with the profile configuration.
-role <text> - Role Name
This specifies the role whose account restrictions are to be modified.
[-username-minlength <integer>] - Minimum Username Length Required
This specifies the required minimum length of the user name. Supported values are 3 to 16 characters. The
default setting is 3 characters.
[-username-alphanum {enabled|disabled}] - Username Alpha-Numeric
This specifies whether a mix of alphabetic and numeric characters are required in the user name. If this
parameter is enabled, a user name must contain at least one letter and one number. The default setting is
disabled.
[-passwd-minlength <integer>] - Minimum Password Length Required
This specifies the required minimum length of a password. Supported values are 3 to 64 characters. The
default setting is 8 characters.
[-passwd-alphanum {enabled|disabled}] - Password Alpha-Numeric
This specifies whether a mix of alphabetic and numeric characters is required in the password. If this
parameter is enabled, a password must contain at least one letter and one number. The default setting is
enabled.
Examples
The following command modifies the user-account restrictions for an account with the role name admin for a Vserver
named vs. The minimum size of the password is set to 12 characters.
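The username and password restrictions documented above (minimum lengths, and the alphanumeric rule requiring at least one letter and one number when enabled) can be modeled as a simple validator. This is a minimal sketch, not ONTAP code; the defaults mirror the documented defaults:

```python
def validate(username, password,
             username_minlength=3, username_alphanum=False,
             passwd_minlength=8, passwd_alphanum=True):
    """Return a list of restriction violations (empty means acceptable)."""
    errors = []
    if len(username) < username_minlength:
        errors.append("username too short")
    if username_alphanum and not (any(c.isalpha() for c in username)
                                  and any(c.isdigit() for c in username)):
        errors.append("username needs a letter and a number")
    if len(password) < passwd_minlength:
        errors.append("password too short")
    if passwd_alphanum and not (any(c.isalpha() for c in password)
                                and any(c.isdigit() for c in password)):
        errors.append("password needs a letter and a number")
    return errors
```

Note this sketch checks only the restrictions listed in this section; the show command below documents further restrictions (change delay, lockout) not modeled here.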
Description
The security login role config reset command resets the following role-based access control (RBAC) characteristics
to their default values. The system prompts you to run this command if you revert to Data ONTAP 8.1.2 or earlier. If you do not
reset these characteristics, the revert process will fail.
• Maximum number of failed login attempts permitted before the account is locked out ("0")
• Number of days that the user account is locked out after the maximum number of failed login attempts is reached ("0")
Examples
The following command resets the above-mentioned RBAC characteristics of all cluster and Vserver roles to their default
values.
Description
The security login role config show command displays the following information about account restrictions for
management-utility user accounts:
You can display detailed information about the restrictions on a specific account by specifying the -role parameter. This adds
the following information:
• Minimum number of days that must elapse before users can change their passwords (-change-delay)
• Maximum number of failed login attempts permitted before the account is locked out (-max-failed-login-attempts)
• Number of days for which the user account is locked after the maximum number of failed login attempts is reached (-lockout-duration)
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver
Selects the profile configurations that match this parameter value
[-role <text>] - Role Name
If this parameter is specified, the command displays detailed information about restrictions for the specified
user account.
[-username-minlength <integer>] - Minimum Username Length Required
Selects the profile configurations that match this parameter value.
Examples
The example below displays restriction information about all user accounts:
Description
The security login rest-role create command creates an access-control role. An access-control role consists of a role
name and an API to which the role has access. It optionally includes an access level (none, readonly, or all) for the API. After
you create an access-control role, you can apply it to a management-utility login account by using the security login
modify or security login create commands.
Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver name associated with the role.
-role <text> - Role Name
This specifies the role that is to be created.
-api <text> - api path
This specifies the API to which the role has access.
-access <Access> - Access Level
This optionally specifies an access level for the role. Possible access level settings are none, readonly, and all.
The default setting is all.
Examples
The following command creates an access-control role named "admin" for the vs1.example.com Vserver. The role has all
access to the API "/api/storage/volume".
cluster1::> security login rest-role create -role admin -api /api/storage/volume -access all -
vserver vs1.example.com
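A natural way to think about how an api-path entry covers requests is a longest-prefix match over the REST path. The sketch below is an assumption-labeled illustration of that idea, not ONTAP's actual matching code; the entries are hypothetical:

```python
def rest_access_for(entries, path):
    """entries: dict mapping api path -> access level ('none'|'readonly'|'all').
    The longest matching path prefix wins; unmatched paths get 'none'."""
    best, best_len = "none", -1
    for api, access in entries.items():
        # Exact match, or the entry is a parent path of the request.
        if path == api or path.startswith(api.rstrip("/") + "/"):
            if len(api) > best_len:
                best, best_len = access, len(api)
    return best

entries = {"/api": "readonly", "/api/storage/volume": "all"}
```

Under this model, a request to /api/storage/volume gets all, while /api/network falls back to the broader /api entry.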
Related references
security login modify on page 539
security login create on page 533
Description
The security login rest-role delete command deletes an access-control role.
Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver name associated with the role.
-role <text> - Role Name
This specifies the role that is to be deleted.
-api <text> - api path
This specifies the API to which the role has access.
Examples
The following command deletes an access-control role with the role name readonly and the api "/api/storage/volume" for
Vserver vs.example.com.
cluster1::> security login rest-role delete -role readonly -api /api/storage/volume -vserver
vs.example.com
Description
The security login rest-role modify command modifies an access-control role.
Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver name associated with the role.
-role <text> - Role Name
This specifies the role that is to be modified.
-api <text> - api path
This specifies the API to which the role has access.
[-access <Access>] - Access Level
This optionally specifies a new access level for the role. Possible access level settings are none, readonly, and
all. The default setting is all.
Examples
The following command modifies an access-control role with the role name readonly and the API "/api/storage/
volume" to have the access level readonly for Vserver vs.example.com:
Description
The security login rest-role show command displays the following information about access-control roles:
• vserver
• Role name
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver
Selects the roles that match this parameter value.
[-role <text>] - Role Name
Selects the roles that match this parameter value. If this parameter and the -api parameter are both used, the
command displays detailed information about the specified access-control role.
[-api <text>] - api path
Selects the roles that match this parameter value. If this parameter and the -role parameter are both used, the
command displays detailed information about the specified access-control role.
[-access <Access>] - Access Level
Selects the roles that match this parameter value.
Examples
The example below displays information about all access-control roles:
Role Access
Vserver Name API Level
---------- ------------- ----------- -----------
vs vsadmin /api none
cluster1 readonly /api/storage none
Description
The security protocol modify command modifies the existing cluster-wide configuration of RSH and Telnet. Enable
RSH and Telnet in the cluster by setting the -enabled field to true.
Parameters
-application <text> - application
Selects the application. Supported values are rsh and telnet.
[-enabled {true|false}] - enabled
Enables or disables the corresponding application. The default value is false.
Examples
The following command enables RSH in the cluster. The default setting for RSH is false:
The following command enables Telnet in the cluster. The default setting for Telnet is false:
Description
The security protocol show command displays the cluster-wide configuration of RSH and Telnet in the cluster in
advanced privilege mode. RSH and Telnet are disabled by default. Use the security protocol modify command to change
the RSH and Telnet configuration that the cluster supports.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-application <text>] - application
Displays the insecure applications in the cluster.
Examples
The following example shows the default security protocol configurations for a cluster:
Application Enabled
------------ -------------
rsh false
telnet false
The following example shows the security protocol configuration after RSH and Telnet have been enabled:
Related references
security protocol modify on page 568
Description
The security protocol ssh modify command modifies the existing cluster-wide configuration of SSH
Parameters
[-per-source-limit <integer>] - Per-Source Limit
Modifies the maximum number of SSH instances per source IP address on a per-node basis.
[-max-instances <integer>] - Maximum Number of Instances
Modifies the maximum number of SSH instances that can be handled on a per-node basis.
[-connections-per-second <integer>] - Connections Per Second
Modifies the maximum number of SSH connections per second on a per-node basis.
Examples
The following example modifies cluster-wide SSH configuration:
Description
The security protocol ssh show command displays the cluster-wide SSH configuration in advanced privilege mode. Use
the security protocol ssh modify command to change the SSH configuration that the cluster supports.
Examples
The following example displays cluster-wide SSH configuration:
Per-Source Limit: 32
Maximum Number of Instances: 64
Connections Per Second: 10
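The connections-per-second limit above can be pictured as a sliding one-second window over accepted connections. This is an illustrative sketch of that throttling idea only (assumed semantics, not ONTAP internals):

```python
from collections import deque

class ConnectionRateLimiter:
    """Accepts at most `connections_per_second` connections in any one-second window."""

    def __init__(self, connections_per_second):
        self.limit = connections_per_second
        self.window = deque()          # timestamps of accepted connections

    def allow(self, now):
        # Expire timestamps older than one second.
        while self.window and now - self.window[0] >= 1.0:
            self.window.popleft()
        if len(self.window) >= self.limit:
            return False               # over the per-second cap; reject
        self.window.append(now)
        return True
```

The per-source and max-instances limits would be analogous counters keyed by client address and by node, respectively.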
Related references
security protocol ssh modify on page 569
Description
The security saml-sp create command configures ONTAP with a Security Assertion Markup Language (SAML) Service
Provider (SP) for single sign-on authentication. This command does not enable SAML SP; it only configures it. Configuring and
enabling SAML SP is a two-step process: first create the SP configuration with security saml-sp create, then enable it
with security saml-sp modify -is-enabled true.
After the SAML SP configuration is created, it cannot be modified. It must be deleted and created again to change any settings.
Note: This restarts the web server. Any HTTP/S connections that are active will be disrupted.
Parameters
-idp-uri {(ftp|http)://(hostname|IPv4 Address|'['IPv6 Address']')...} - Identity Provider (IdP)
Metadata Location
This is the URI of the desired identity provider's (IdP) metadata.
Examples
The following example configures ONTAP with SAML SP IdP information:
Related references
security saml-sp modify on page 572
Description
The security saml-sp delete command removes the Security Assertion Markup Language (SAML) Service
Provider (SP). Running this command frees resources used by the SP. SAML SP services are no longer available after the SP
is removed.
If the SAML SP is currently enabled, it is necessary to first run security saml-sp modify -is-enabled false before
security saml-sp delete. The security saml-sp modify -is-enabled false command must be issued by a
password-authenticated console application user or from a SAML-authenticated command interface.
Note: This restarts the web server. Any HTTP/S connections that are active will be disrupted.
Related references
security saml-sp modify on page 572
Description
The security saml-sp modify command modifies the Security Assertion Markup Language (SAML) Service Provider
(SP) configuration for single sign-on authentication. This command enables or disables an existing SAML SP with
security saml-sp modify -is-enabled {true|false}.
This command checks the validity of the current SAML SP configuration before enabling the SP. It is also necessary to use
this command with the -is-enabled false parameter before deleting an existing SAML SP configuration. SAML SP can
only be disabled in this way by a password-authenticated console application user or from a SAML-authenticated command
interface. Because only the -is-enabled parameter can be modified, the security saml-sp delete command must be
used if other SAML configuration settings are to be changed.
Note: This may restart the web server. Any HTTP/S connections that are active may be disrupted.
Parameters
[-is-enabled {true|false}] - SAML Service Provider Enabled
Use this parameter to enable or disable the SAML SP.
Examples
The following example enables SAML SP:
Description
The security saml-sp repair command attempts to repair a failed SAML SP configuration on a given node. The status of
the individual nodes can be viewed using the security saml-sp status show command.
Note: This restarts the web server. Any active HTTP/S requests to the web server will be disrupted.
Examples
The following example repairs a failed SAML SP configuration:
Warning: This restarts the web server. Any active HTTP/S requests to the web
server will be disrupted
cluster1:>
Related references
security saml-sp status show on page 574
Description
The security saml-sp show command displays the Security Assertion Markup Language (SAML) Service Provider (SP)
configuration.
The Identity Provider (IdP) URI indicates the URI of the desired IdP's metadata.
The Service Provider (SP) host indicates the IP address containing the SAML SP metadata.
The Certificate Common Name indicates the SAML SP certificate's common name.
The Certificate Serial indicates the SAML SP certificate's serial number.
Examples
The following example displays the SAML SP configuration:
Description
The security saml-sp status show command displays the SAML Service Provider (SP) status for all nodes in the cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This identifies the node in the cluster.
[-status {not-configured|config-in-progress|config-failed|config-success}] - Update Status
This identifies the SAML SP status on the specified node.
[-error-text <text>] - Error Text
This identifies the error text associated with the latest SAML SP update for this node.
[-is-enabled {true|false}] - SAML Service Provider Enabled
When this parameter is set to true it indicates that the SAML SP is enabled on this node. Similarly, when this
parameter is set to false, it indicates that the SAML SP is not enabled on this node.
Examples
The following example displays the SAML SP status information for all nodes in the cluster.
cluster::*>
Description
The security session kill-cli command is used to terminate CLI sessions. If the session being killed is actively
processing a non-read command, the kill will wait until the command is complete before terminating the session. If the session
being killed is actively processing a read (show) command, the kill will wait until the current row is returned before terminating
the session.
Parameters
-node {<nodename>|local} - Node
Selects the sessions that match this parameter value. This identifies the node that is processing the session.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
that is processing the session.
[-start-time <MM/DD HH:MM:SS>] - Start Time
Selects the sessions that match this parameter value. This identifies the start time of the current active session.
-session-id <integer> - Session ID
Selects the sessions that match this parameter value. This number uniquely identifies a management session
within a given node.
[-vserver <vserver>] - Vserver
Selects the sessions that match this parameter value. This identifies the Vserver associated with this
management session.
[-username <text>] - Username
Selects the sessions that match this parameter value. This identifies the authenticated user associated with this
management session.
[-application <text>] - Client Application
Selects the sessions that match this parameter value. This identifies the calling application by name.
[-location <text>] - Client Location
Selects the sessions that match this parameter value. This identifies the location of the calling client
application. This is typically the IP address of the calling client, or "console" or "localhost" for console or
localhost connections.
[-idle-seconds <integer>] - Idle Seconds
Selects the sessions that match this parameter value. When a session is not actively executing a command
request (the session is idle), this indicates the time (in seconds) since the last request completed.
[-state {pending|active|idle}] - Session State
Selects the sessions that match this parameter value. This identifies the state (pending, active, or idle) of the
session. The state is "pending" if it hit a session limit and the session is waiting for another session to end. The
state is "idle" for CLI sessions that are waiting at the command prompt. The state is "active" if the session is
actively working on a request.
[-request <text>] - Active Command
Selects the sessions that match this parameter value. This identifies the request (command) that is currently
being handled by the session.
cluster1::>
cluster1::>
The following example illustrates killing a CLI session by specifying the node and specifying a query on idle-seconds.
cluster1::>
Description
The security session show command displays all active management sessions across the cluster.
Examples
The following example illustrates displaying all active sessions across the cluster. In this example, we see one active
session on node node2 from the console application. We also see three active sessions on node node1. One is from the
console application and two are from the ssh application. Also one of the ssh sessions is from user diag and the other
ssh session is from user admin.
cluster1::>
The following example illustrates displaying all active sessions that have been idle for longer than 500 seconds.
cluster1::>
Description
This command allows creation of a default management session limit that does not yet exist. The default limits can be
overridden for specific values within each category by using advanced privilege level commands.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-category {application|location|request|user|vserver} - Category
The session type for this default limit. The following categories are supported: application, location, request,
user, Vserver.
-max-active-limit <integer> - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and category.
Examples
The following example illustrates creating a default limit for management sessions using the same application.
cluster1::> security session limit create -interface ontapi -category application -max-active-
limit 8
Description
This command allows deletion of a default management session limit.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-category {application|location|request|user|vserver} - Category
The session type for this default limit. The following categories are supported: application, location, request,
user, Vserver.
Description
This command allows modification of a default management session limit.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-category {application|location|request|user|vserver} - Category
The session type for this default limit. The following categories are supported: application, location, request,
user, Vserver.
[-max-active-limit <integer>] - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and category.
Examples
The following example illustrates modifying the default limit for CLI management sessions from the same location.
cluster1::> security session limit modify -interface cli -category location -max-active-limit 4
Description
This command shows the default management session limits that have been configured for each interface and category.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example illustrates displaying the default limits for management sessions.
Description
This command allows creation of a per-application management session limit that does not yet exist.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-application <text> - Application
The specified application to which this limit applies. The limit with the application name -default- is the
limit used for any application without a specific configured limit.
-max-active-limit <integer> - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and application.
Examples
The following example illustrates creating a limit for management sessions from a custom application.
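The -default- fallback documented above can be modeled as a simple lookup: use the application's own limit if one is configured, otherwise the -default- entry. This sketch assumes that semantics (it is not ONTAP code), and the limit values are hypothetical:

```python
def max_active_limit(app_limits, application):
    """app_limits: dict mapping application name -> max-active limit.
    The '-default-' entry covers any application without its own limit."""
    if application in app_limits:
        return app_limits[application]
    return app_limits.get("-default-")

limits = {"-default-": 8, "custom_app": 2}
```

The per-location limits later in this section follow the same pattern, with the -default- entry scoped to the Default IPspace.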
Description
This command allows deletion of a per-application management session limit.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-application <text> - Application
The specified application to which this limit applies. The limit with the application name -default- is the
limit used for any application without a specific configured limit.
Examples
The following example illustrates deleting a limit for management sessions from a custom application.
cluster1::*> security session limit application delete -interface ontapi -application "custom_app"
Description
This command allows modification of a per-application management session limit.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-application <text> - Application
The specified application to which this limit applies. The limit with the application name -default- is the
limit used for any application without a specific configured limit.
[-max-active-limit <integer>] - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and application.
Examples
The following example illustrates modifying management session limits for some custom applications.
Description
This command shows the per-application management session limits that have been configured for each interface and
application.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST) to
which the limit applies.
[-application <text>] - Application
Selects the sessions that match this parameter value. This identifies the application for the limit. The limit with
the application name -default- is the limit used for any application without a specific configured limit.
[-max-active-limit <integer>] - Max-Active Limit
Selects the sessions that match this parameter value. This identifies the configured limit that is used to throttle
or reject requests.
Examples
The following example illustrates displaying the per-application limits for ONTAPI management sessions.
Description
This command allows creation of a per-location management session limit that does not yet exist.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-location <text> - Location
The specified location to which this limit applies. The limit with the location name -default- (in the
Default IPspace) is the limit used for any location (in any IPspace) without a specific configured limit.
[-ipspace <IPspace>] - IPspace of Location
This identifies the IPspace of the client location. If not specified, changes are made in the Default IPspace.
-max-active-limit <integer> - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and location.
Examples
The following example illustrates creating a CLI limit for a specific location.
cluster1::*> security session limit location create -interface cli -location 10.98.16.164 -max-active-limit 1
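This edit intentionally left blank; see the command on the following lines.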
Description
This command allows deletion of a per-location management session limit.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-location <text> - Location
The specified location to which this limit applies. The limit with the location name -default- (in the
Default IPspace) is the limit used for any location (in any IPspace) without a specific configured limit.
[-ipspace <IPspace>] - IPspace of Location
This identifies the IPspace of the client location. If not specified, changes are made in the Default IPspace.
Examples
The following example illustrates deleting limits for management sessions from a specific set of locations.
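A wildcard query of the following form would delete the limits for a set of client addresses; the address pattern shown is illustrative:

cluster1::*> security session limit location delete -interface * -location 10.98.*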
Description
This command allows modification of a per-location management session limit.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-location <text> - Location
The specified location to which this limit applies. The limit with the location name -default- (in the
Default IPspace) is the limit used for any location (in any IPspace) without a specific configured limit.
[-ipspace <IPspace>] - IPspace of Location
This identifies the IPspace of the client location. If not specified, changes are made in the Default IPspace.
[-max-active-limit <integer>] - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and location.
Examples
The following example illustrates modifying management session limits for specific locations.
cluster1::*> security session limit location modify -interface * -location 10.98.* -max-active-limit 2
3 entries were modified.
Description
This command shows the per-location management session limits that have been configured for each interface and location.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example illustrates displaying the per-location limits for management sessions.
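Based on the parameters documented above, a command of this form would produce that display (a sketch, not verified output):

cluster1::*> security session limit location show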
Description
This command allows creation of a per-request management session limit that does not yet exist.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-request <text> - Request Name
The specified request to which this limit applies. The limit with the request name -default- is the limit used
for any request without a specific configured limit.
-max-active-limit <integer> - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and request.
Examples
The following example illustrates creating a limit for a specific ONTAPI request.
cluster1::*> security session limit request create -interface ontapi -request storage-disk-get-iter -max-active-limit 2
Description
This command allows deletion of a per-request management session limit.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-request <text> - Request Name
The specified request to which this limit applies. The limit with the request name -default- is the limit used
for any request without a specific configured limit.
Examples
The following example illustrates deleting custom limits that were configured for the volume commands and APIs.
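Assuming the request names follow the command and API naming used elsewhere in this section, a wildcard query such as the following would match them (the pattern is illustrative):

cluster1::*> security session limit request delete -interface * -request volume*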
Description
This command allows modification of a per-request management session limit.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-request <text> - Request Name
The specified request to which this limit applies. The limit with the request name -default- is the limit used
for any request without a specific configured limit.
[-max-active-limit <integer>] - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and request.
Examples
The following example illustrates modifying the limit for a specific ONTAPI request.
cluster1::*> security session limit request modify -interface ontapi -request storage-disk-get-iter -max-active-limit 4
Description
This command shows the per-request management session limits that have been configured for each interface and request.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST) to
which the limit applies.
[-request <text>] - Request Name
Selects the sessions that match this parameter value. This identifies the request (command or API) for the
limit. The limit with the request name -default- is the limit used for any request without a specific
configured limit.
[-max-active-limit <integer>] - Max-Active Limit
Selects the sessions that match this parameter value. This identifies the configured limit that is used to throttle
or reject requests.
Examples
The following example illustrates displaying the per-request limits for management sessions.
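Based on the parameters documented above, a command of this form would produce that display (a sketch, not verified output):

cluster1::*> security session limit request show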
Description
This command allows creation of a per-user management session limit that does not yet exist.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-vserver <vserver> - Vserver
The specified Vserver to which this limit applies. The "Cluster" Vserver is used to limit Vservers that do not
have a configured limit.
-user <text> - User
The specified user to which this limit applies. The limit with the user name -default- is the limit used for
any user without a specific configured limit.
-max-active-limit <integer> - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface, Vserver, and user.
Examples
The following example illustrates creating a per-user limit override for ONTAPI requests for the admin user in the admin
Vserver.
cluster1::*> security session limit user create -interface ontapi -vserver cluster1 -user admin -max-active-limit 16
Description
This command allows deletion of a per-user management session limit.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-vserver <vserver> - Vserver
The specified Vserver to which this limit applies. The "Cluster" Vserver is used to limit Vservers that do not
have a configured limit.
-user <text> - User
The specified user to which this limit applies. The limit with the user name -default- is the limit used for
any user without a specific configured limit.
Examples
The following example illustrates deleting all per-user limits for CLI management sessions other than the default limit.
cluster1::*> security session limit user delete -interface cli -user !"-default-"
2 entries were deleted.
Description
This command allows modification of a per-user management session limit.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-vserver <vserver> - Vserver
The specified Vserver to which this limit applies. The "Cluster" Vserver is used to limit Vservers that do not
have a configured limit.
-user <text> - User
The specified user to which this limit applies. The limit with the user name -default- is the limit used for
any user without a specific configured limit.
[-max-active-limit <integer>] - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface, Vserver, and user.
Examples
The following example illustrates modifying the admin user's limit for CLI management sessions.
cluster1::*> security session limit user modify -interface cli -vserver cluster1 -user admin -max-active-limit 30
Description
This command shows the per-user management session limits that have been configured for each interface, Vserver, and user.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
Examples
The following example illustrates displaying the per-user limits for CLI management sessions. In this example, there is a
default limit of 4 sessions for each user. That limit is expanded to 8 for the admin Vserver. That limit is further expanded
to 20 for the admin user in the admin Vserver.
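Based on the parameters documented above, a command of this form would produce that display (a sketch, not verified output):

cluster1::*> security session limit user show -interface cli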
Description
This command allows creation of a per-Vserver management session limit that does not yet exist.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-vserver <vserver> - Vserver
The specified Vserver to which this limit applies. The "Cluster" Vserver is used to limit Vservers that do not
have a configured limit.
-max-active-limit <integer> - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and Vserver.
Examples
The following example illustrates creating an ONTAPI limit for a specific Vserver.
cluster1::*> security session limit vserver create -interface ontapi -vserver cluster1 -max-active-limit 4
Description
This command allows deletion of a per-Vserver management session limit. The "Cluster" Vserver is used when the specific
Vserver does not have a configured limit.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-vserver <vserver> - Vserver
The specified Vserver to which this limit applies. The "Cluster" Vserver is used to limit Vservers that do not
have a configured limit.
Examples
The following example illustrates deleting all per-Vserver limits for management sessions except the default limit.
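Assuming the extended query syntax shown in other examples in this section, a command of this form would delete every per-Vserver limit except the default "Cluster" entry (the query value is an assumption):

cluster1::*> security session limit vserver delete -interface * -vserver !Cluster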
Description
This command allows modification of a per-Vserver management session limit.
Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-vserver <vserver> - Vserver
The specified Vserver to which this limit applies. The "Cluster" Vserver is used to limit Vservers that do not
have a configured limit.
[-max-active-limit <integer>] - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and Vserver.
Examples
The following example illustrates modifying the CLI limit for a specific Vserver.
cluster1::*> security session limit vserver modify -interface cli -vserver cluster1 -max-active-limit 40
Description
This command shows the per-Vserver management session limits that have been configured for each interface and Vserver.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST) to
which the limit applies.
[-vserver <vserver>] - Vserver
Selects the sessions that match this parameter value. This identifies the Vserver for the limit. The "Cluster"
Vserver is used to limit Vservers that do not have a configured limit.
[-max-active-limit <integer>] - Max-Active Limit
Selects the sessions that match this parameter value. This identifies the configured limit that is used to throttle
or reject requests.
Examples
The following example illustrates displaying the per-Vserver limits for management sessions.
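Based on the parameters documented above, a command of this form would produce that display (a sketch, not verified output):

cluster1::*> security session limit vserver show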
Description
The security session request-statistics show-by-application command shows historical statistics for
management session activity, categorized by application name. CLI sessions have an application name based on the
connection method: ssh, telnet, rsh, console, or ngsh. ONTAPI sessions extract the application name from the ZAPI
request. ONTAP looks for the application name in the following three locations, in the following order of precedence:
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the sessions that match this parameter value. This identifies the node that processed the session.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
that processed the session.
[-application <text>] - Application
Selects the sessions that match this parameter value. This identifies the calling application by name.
[-total <integer>] - Total Requests
Selects the sessions that match this parameter value. This identifies the total number of requests that have been
made on a session. The following commands are not counted: top, up, cd, rows, history, exit.
[-blocked <integer>] - Blocked Requests
Selects the sessions that match this parameter value. This identifies the number of requests that were blocked
due to configured limits.
[-failed <integer>] - Failed Requests
Selects the sessions that match this parameter value. This identifies the number of requests that failed for any
reason (including if they were blocked by configured limits).
[-max-time <integer>] - Maximum Time (ms)
Selects the sessions that match this parameter value. This identifies the maximum amount of time (in
milliseconds) that any request took.
[-last-time <integer>] - Last Time (ms)
Selects the sessions that match this parameter value. This identifies the amount of time (in milliseconds) that
the last request took.
[-active <integer>] - Number Active Now
Selects the sessions that match this parameter value. This identifies the number of currently active sessions.
Examples
The following example illustrates displaying historical statistics for all management session activity across the cluster,
categorized by application name.
cluster1::>
The following example illustrates displaying historical statistics for management session activity on a specific node and
for a specific application.
cluster1::>
Description
The security session request-statistics show-by-location command shows historical statistics for management
session activity, categorized by client location.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the sessions that match this parameter value. This identifies the node that processed the session.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
that processed the session.
[-location <text>] - Client Location
Selects the sessions that match this parameter value. This identifies the location of the calling client
application. This is typically the IP address of the calling client, or "console" or "localhost" for console or
localhost connections.
[-ipspace <IPspace>] - IPspace of Location
Selects the sessions that match this parameter value. This identifies the IPspace of the client location.
[-total <integer>] - Total Requests
Selects the sessions that match this parameter value. This identifies the total number of requests that have been
made on a session. The following commands are not counted: top, up, cd, rows, history, exit.
[-blocked <integer>] - Blocked Requests
Selects the sessions that match this parameter value. This identifies the number of requests that were blocked
due to configured limits.
[-failed <integer>] - Failed Requests
Selects the sessions that match this parameter value. This identifies the number of requests that failed for any
reason (including if they were blocked by configured limits).
Examples
The following example illustrates displaying historical statistics for all management session activity across the cluster,
categorized by location.
cluster1::>
The following example illustrates displaying historical statistics for management session activity on a specific node and
for a specific location.
cluster1::>
Description
The security session request-statistics show-by-request command shows historical statistics for management
session activity, categorized by request (command or API name).
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the sessions that match this parameter value. This identifies the node that processed the session.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
that processed the session.
[-request <text>] - Request Name
Selects the sessions that match this parameter value. This identifies the command associated with these
requests.
[-total <integer>] - Total Requests
Selects the sessions that match this parameter value. This identifies the total number of requests that have been
made on a session. The following commands are not counted: top, up, cd, rows, history, exit.
[-blocked <integer>] - Blocked Requests
Selects the sessions that match this parameter value. This identifies the number of requests that were blocked
due to configured limits.
Examples
The following example illustrates displaying historical statistics for all management session activity on a specific node,
with a specific request query.
cluster1::>
Description
The security session request-statistics show-by-user command shows historical statistics for management
session activity, categorized by username. Entries for username 'autosupport' reflect commands that are executed by the
AutoSupport OnDemand feature.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the sessions that match this parameter value. This identifies the node that processed the session.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
that processed the session.
[-vserver <vserver>] - Vserver
Selects the sessions that match this parameter value. This identifies the Vserver associated with this
management session.
[-username <text>] - Username
Selects the sessions that match this parameter value. This identifies the authenticated user associated with this
management session.
[-total <integer>] - Total Requests
Selects the sessions that match this parameter value. This identifies the total number of requests that have been
made on a session. The following commands are not counted: top, up, cd, rows, history, exit.
[-blocked <integer>] - Blocked Requests
Selects the sessions that match this parameter value. This identifies the number of requests that were blocked
due to configured limits.
[-failed <integer>] - Failed Requests
Selects the sessions that match this parameter value. This identifies the number of requests that failed for any
reason (including if they were blocked by configured limits).
[-max-time <integer>] - Maximum Time (ms)
Selects the sessions that match this parameter value. This identifies the maximum amount of time (in
milliseconds) that any request took.
Examples
The following example illustrates displaying historical statistics for all management session activity across the cluster,
categorized by username.
cluster1::>
The following example illustrates displaying historical statistics for management session activity on a specific node and
for a specific username.
cluster1::>
Description
The security session request-statistics show-by-vserver command shows historical statistics for management
session activity, categorized by Vserver.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the sessions that match this parameter value. This identifies the node that processed the session.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
that processed the session.
[-vserver <vserver>] - Vserver
Selects the sessions that match this parameter value. This identifies the Vserver associated with this
management session.
[-total <integer>] - Total Requests
Selects the sessions that match this parameter value. This identifies the total number of requests that have been
made on a session. The following commands are not counted: top, up, cd, rows, history, exit.
[-blocked <integer>] - Blocked Requests
Selects the sessions that match this parameter value. This identifies the number of requests that were blocked
due to configured limits.
[-failed <integer>] - Failed Requests
Selects the sessions that match this parameter value. This identifies the number of requests that failed for any
reason (including if they were blocked by configured limits).
Examples
The following example illustrates displaying historical statistics for all management session activity across the cluster,
categorized by Vserver.
cluster1::>
The following example illustrates displaying historical statistics for management session activity on a specific node, for a
specific Vserver.
cluster1::>
Description
The security ssh add command adds additional SSH key exchange algorithms or ciphers or MAC algorithms to the
existing configurations of the cluster or a Vserver. The added algorithms or ciphers or MAC algorithms are enabled on the
cluster or Vserver. If you change the cluster configuration settings, they are used as the default for all newly created Vservers. The
existing SSH key exchange algorithms, ciphers, and MAC algorithms remain unchanged in the configuration. If the SSH key
exchange algorithms or ciphers or MAC algorithms are already enabled in the current configuration, the command does not
fail. Data ONTAP supports the diffie-hellman-group-exchange-sha256 key exchange algorithm for SHA-2. Data
ONTAP also supports the diffie-hellman-group-exchange-sha1, diffie-hellman-group14-sha1, and diffie-
hellman-group1-sha1 SSH key exchange algorithms for SHA-1. The SHA-2 key exchange algorithm is more secure than the
SHA-1 key exchange algorithms. Data ONTAP also supports ecdh-sha2-nistp256, ecdh-sha2-nistp384, ecdh-sha2-
nistp521, and curve25519-sha256. Data ONTAP also supports the AES and 3DES symmetric encryptions (also known as
ciphers) of the following types: aes256-ctr, aes192-ctr, aes128-ctr, aes256-cbc, aes192-cbc, aes128-cbc,
aes128-gcm, aes256-gcm, and 3des-cbc. Data ONTAP supports MAC algorithms of the following types: hmac-sha1,
hmac-sha1-96, hmac-md5, hmac-md5-96, hmac-ripemd160, umac-64, umac-128, hmac-sha2-256, hmac-
sha2-512, hmac-sha1-etm, hmac-sha1-96-etm, hmac-sha2-256-etm, hmac-sha2-512-etm, hmac-md5-etm, hmac-
md5-96-etm, hmac-ripemd160-etm, umac-64-etm, and umac-128-etm.
Parameters
-vserver <Vserver Name> - Vserver
Identifies the Vserver to which you want to add additional SSH key exchange algorithms or ciphers.
Examples
The following command adds the diffie-hellman-group-exchange-sha256 and diffie-hellman-group-
exchange-sha1 key exchange algorithms for the cluster1 Vserver. It also adds the aes256-cbc and aes192-cbc
ciphers and the hmac-sha1 and hmac-sha2-256 MAC algorithms to the cluster1 Vserver.
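Using the parameter names documented for the related security ssh modify command, that example corresponds to a command of this form (a sketch, not verified output):

cluster1::> security ssh add -vserver cluster1 -key-exchange-algorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1 -ciphers aes256-cbc,aes192-cbc -mac-algorithms hmac-sha1,hmac-sha2-256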
Description
The security ssh modify command replaces the existing configurations of the SSH key exchange algorithms or ciphers or
MAC algorithms for the cluster or a Vserver with the configuration settings you specify. If you modify the cluster configuration
settings, it will be used as the default for all newly created Vservers. Data ONTAP supports the diffie-hellman-group-
exchange-sha256 key exchange algorithm for SHA-2. Data ONTAP also supports the diffie-hellman-group-
exchange-sha1, diffie-hellman-group14-sha1, and diffie-hellman-group1-sha1 SSH key exchange algorithms
for SHA-1. The SHA-2 key exchange algorithm is more secure than the SHA-1 key exchange algorithms. Data ONTAP also
supports the AES and 3DES symmetric encryptions (also known as ciphers) of the following types: aes256-ctr, aes192-
ctr, aes128-ctr, aes256-cbc, aes192-cbc, aes128-cbc, aes128-gcm, aes256-gcm, and 3des-cbc. Data ONTAP
supports MAC algorithms of the following types: hmac-sha1, hmac-sha1-96, hmac-md5, hmac-md5-96, hmac-ripemd160,
umac-64, umac-128, hmac-sha2-256, hmac-sha2-512, hmac-sha1-etm, hmac-sha1-96-etm, hmac-
sha2-256-etm, hmac-sha2-512-etm, hmac-md5-etm, hmac-md5-96-etm, hmac-ripemd160-etm, umac-64-etm, and
umac-128-etm.
Parameters
-vserver <Vserver Name> - Vserver
Identifies the Vserver for which you want to replace the existing SSH key exchange algorithm and cipher
configurations.
[-key-exchange-algorithms <algorithm name>, ...] - Key Exchange Algorithms
Enables the specified SSH key exchange algorithm or algorithms for the Vserver. This parameter also replaces
all existing SSH key exchange algorithms with the specified settings.
[-ciphers <cipher name>, ...] - Ciphers
Enables the specified cipher or ciphers for the Vserver. This parameter also replaces all existing ciphers with
the specified settings.
[-mac-algorithms <MAC name>, ...] - MAC Algorithms
Enables the specified MAC algorithm or algorithms for the Vserver. This parameter also replaces all existing
MAC algorithms with the specified settings.
Examples
The following command enables the diffie-hellman-group-exchange-sha256 and diffie-hellman-group14-
sha1 key exchange algorithms for the cluster1 Vserver. It also enables the aes256-ctr, aes192-ctr and aes128-ctr
ciphers, hmac-sha1 and hmac-sha2-256 MAC algorithms for the cluster1 Vserver. It also modifies the maximum
authentication retry count to 3 for the cluster1 Vserver:
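A command of the following form matches that description; the -max-authentication-retry-count parameter name is an assumption, as this excerpt names the feature but not the parameter:

cluster1::> security ssh modify -vserver cluster1 -key-exchange-algorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1 -ciphers aes256-ctr,aes192-ctr,aes128-ctr -mac-algorithms hmac-sha1,hmac-sha2-256 -max-authentication-retry-count 3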
Description
This command downgrades the SSH configurations of all Vservers and the cluster to settings compatible with releases earlier
than Data ONTAP 9.2.0. This command also disables the max-authentication-retry feature. You must run this command in
advanced privilege mode when prompted to do so during the release downgrade. Otherwise, the release downgrade process will
fail.
Examples
The following command downgrades the SSH security configurations of all Vservers and the cluster to settings
compatible with releases earlier than Data ONTAP 9.2.0.
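The command takes no parameters; its name as shown here is an assumption, since this excerpt describes the command without naming it:

cluster1::*> security ssh prepare-to-downgrade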
Description
The security ssh remove command removes the specified SSH key exchange algorithms or ciphers from the existing
configurations of the cluster or a Vserver. The removed algorithms or ciphers are disabled on the cluster or Vserver. If you
changed the cluster configuration settings, they are used as the default for all newly created Vservers. If the SSH key
exchange algorithms or ciphers that you specify with this command are not currently enabled, the command does not
fail. Data ONTAP supports the diffie-hellman-group-exchange-sha256 key exchange algorithm for SHA-2. Data
ONTAP also supports the diffie-hellman-group-exchange-sha1, diffie-hellman-group14-sha1, and diffie-
hellman-group1-sha1 SSH key exchange algorithms for SHA-1. The SHA-2 key exchange algorithm is more secure than the
SHA-1 key exchange algorithms. Data ONTAP also supports ecdh-sha2-nistp256, ecdh-sha2-nistp384, ecdh-sha2-
nistp521, and curve25519-sha256. Data ONTAP also supports the AES and 3DES symmetric encryption (also known as
ciphers) of the following types: aes256-ctr, aes192-ctr, aes128-ctr, aes256-cbc, aes192-cbc, aes128-cbc,
aes128-gcm, aes256-gcm and 3des-cbc. Data ONTAP supports MAC algorithms of the following types: hmac-sha1,
hmac-sha1-96, hmac-md5, hmac-md5-96, hmac-ripemd160, umac-64, umac-128, hmac-sha2-256, hmac-
sha2-512, hmac-sha1-etm, hmac-sha1-96-etm, hmac-sha2-256-etm, hmac-sha2-512-etm, hmac-md5-etm, hmac-
md5-96-etm, hmac-ripemd160-etm, umac-64-etm, and umac-128-etm.
Examples
The following command removes the diffie-hellman-group1-sha1 and diffie-hellman-group-exchange-
sha1 key exchange algorithms from the cluster1 Vserver. It also removes the aes128-cbc and 3des-cbc ciphers and
the hmac-sha1-96 and hmac-sha2-256 MAC algorithms from the cluster1 Vserver.
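Using the parameter names documented for the related security ssh modify command, that example corresponds to a command of this form (a sketch, not verified output):

cluster1::> security ssh remove -vserver cluster1 -key-exchange-algorithms diffie-hellman-group1-sha1,diffie-hellman-group-exchange-sha1 -ciphers aes128-cbc,3des-cbc -mac-algorithms hmac-sha1-96,hmac-sha2-256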
Description
The security ssh show command displays the configurations of the SSH key exchange algorithms, ciphers, MAC
algorithms and maximum authentication retry count for the cluster and Vservers. The SSH protocol uses a Diffie-Hellman based
key exchange method to establish a shared secret key during the SSH negotiation phase. The key exchange method specifies
how one-time session keys are generated for encryption and authentication and how the server authentication takes place. Data
ONTAP supports the diffie-hellman-group-exchange-sha256 key exchange algorithm for SHA-2. Data ONTAP also
supports the diffie-hellman-group-exchange-sha1, diffie-hellman-group14-sha1, and diffie-hellman-
group1-sha1 key exchange algorithms for SHA-1. Data ONTAP also supports ecdh-sha2-nistp256, ecdh-sha2-
nistp384, ecdh-sha2-nistp521, and curve25519-sha256. Data ONTAP also supports the AES and 3DES symmetric
encryptions (also known as ciphers) of the following types: aes256-ctr, aes192-ctr, aes128-ctr, aes256-cbc, aes192-
cbc, aes128-cbc, aes128-gcm, aes256-gcm and 3des-cbc. Data ONTAP supports MAC algorithms of the following types:
hmac-sha1, hmac-sha1-96, hmac-md5, hmac-md5-96, hmac-ripemd160, umac-64, umac-128, hmac-sha2-256,
hmac-sha2-512, hmac-sha1-etm, hmac-sha1-96-etm, hmac-sha2-256-etm, hmac-sha2-512-etm, hmac-md5-etm,
hmac-md5-96-etm, hmac-ripemd160-etm, umac-64-etm, and umac-128-etm.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver
Identifies the Vserver for which you want to display the SSH key exchange algorithm, cipher, and MAC
algorithm configurations.
Examples
The following command displays the enabled SSH key exchange algorithms, ciphers, MAC algorithms, and maximum
authentication retry count for the cluster and all Vservers. The cluster settings are used as the default for all
newly created Vservers:
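The command itself, as named in the description above, takes no required parameters:

cluster1::> security ssh show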
Parameters
-vserver <Vserver Name> - Vserver
Identifies a Vserver for hosting SSL-encrypted web services.
[-ca <text>] - Server Certificate Issuing CA
Identifies a Certificate Authority (CA) of a certificate to be associated with the instance of a given Vserver. If
this parameter, along with serial, is omitted during modification, a self-signed SSL certificate can be
optionally generated for that Vserver.
[-serial <text>] - Server Certificate Serial Number
Identifies a serial number of a certificate to be associated with the instance of a given Vserver. If this
parameter, along with ca, is omitted during modification, a self-signed SSL certificate can be optionally
generated for that Vserver.
[-common-name <FQDN or Custom Common Name>] - Server Certificate Common Name
Identifies the common name (CN) of a certificate to be associated with the instance of a given Vserver. This
parameter becomes optional if serial and ca are specified. You can use the security certificate create
and security certificate install commands to add new certificates to Vservers.
Note: The use of self-signed SSL certificates exposes users to man-in-the-middle security attacks. Where
possible, obtain a certificate that is signed by a reputable certificate authority (CA) and use the security
certificate install command to configure it before enabling SSL on a Vserver.
Examples
The following example enables SSL server authentication for a Vserver named vs0 with a certificate that has ca as
www.example.com and serial as 4F4EB629.
cluster1::> security ssl modify -vserver vs0 -ca www.example.com -serial 4F4EB629 -server-enabled true
The following example disables SSL server authentication for a Vserver named vs0.
The following example enables SSL client authentication for a Vserver named vs0.
The following example disables SSL client authentication for a Vserver named vs0.
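Reconstructed for illustration, the command lines for these examples would resemble the following; the -client-enabled parameter name is an assumption, inferred by analogy with the -server-enabled parameter shown above:
cluster1::> security ssl modify -vserver vs0 -server-enabled false
cluster1::> security ssl modify -vserver vs0 -client-enabled true
cluster1::> security ssl modify -vserver vs0 -client-enabled false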
Related references
security certificate create on page 471
security certificate install on page 476
vserver services web show on page 2255
Description
This command displays the configuration of encrypted HTTP (SSL) for Vservers in the cluster. Depending on the requirements
of the individual node's or cluster's web services (displayed by the vserver services web show command), this encryption
might or might not be used. If the Vserver does not have a certificate associated with it, SSL will not be available.
Examples
The following example displays the configured certificates for Vservers.
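The command for this example, shown as an illustration (output not reproduced), is:
cluster1::> security ssl show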
Related references
vserver services web show on page 2255
SnapLock Commands
Manages SnapLock attributes in the system
The snaplock commands manage compliance-related functionality in the system. A volume created using the volume create command becomes a SnapLock volume when it is created on a SnapLock aggregate. A SnapLock aggregate is created using the storage aggregate create command when the -snaplock-type parameter is specified as either compliance or enterprise.
The snaplock compliance-clock command can be used to manage the ComplianceClock in the system.
The snaplock log command can be used to manage SnapLock log infrastructure.
Related references
volume create on page 1451
snaplock compliance-clock on page 618
snaplock log on page 627
Description
The snaplock legal-hold abort command aborts an ongoing legal-hold operation. The types of legal-hold operations that can be aborted using this command are begin, end, and dump-files. This command only aborts operations that have not yet completed. Only a user with the security login role vsadmin-snaplock is allowed to perform this operation.
Examples
The following example aborts an ongoing legal-hold operation with operation-id 16842754:
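A sketch of the command line, assuming the command accepts an -operation-id parameter as the snaplock event-retention abort command does:
vs1::> snaplock legal-hold abort -operation-id 16842754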
Description
The snaplock legal-hold begin command is used to place specified file or files under legal-hold for a given litigation.
Only a user with security login role vsadmin-snaplock is allowed to perform this operation.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver which owns the volume. The specified file or files to be placed under legal-
hold reside on this volume.
-litigation-name <text> - Litigation Name
Specifies the name of the litigation for which the file or files have to be placed under legal-hold.
-volume <volume name> - Volume
Specifies the name of the SnapLock compliance volume on which the file or files to be placed under legal-hold
reside.
-path <text> - Path
Specifies a path relative to the volume root. The path can be either a file path of the single file to be placed
under legal-hold or a directory path where all regular files under it must be placed under legal-hold.
Examples
The following example starts a legal-hold begin operation on file file1 in volume slc_vol1:
vs1::> snaplock legal-hold begin -litigation-name litigation1 -volume slc_vol1 -path /file1
SnapLock legal-hold begin operation is queued. Run "snaplock legal-hold show -operation-id
16842773 -instance" to view the operation status.
The following example starts a legal-hold begin operation on all files in the volume slc_vol1:
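An illustrative command line for this case; specifying the volume root as the path places all regular files in the volume under legal-hold:
vs1::> snaplock legal-hold begin -litigation-name litigation1 -volume slc_vol1 -path /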
Description
The snaplock legal-hold dump-files command dumps the list of files under legal-hold for a given Vserver, volume, and litigation to an auto-generated file in the user-specified path. Only a user with the security login role vsadmin-snaplock is allowed to perform this operation.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver for which the list of files under legal-hold is to be dumped.
-litigation-name <text> - Litigation Name
Specifies the name of the litigation for which the list of files under legal-hold is to be dumped.
-volume <volume name> - Volume Name
Specifies the name of the SnapLock compliance volume for which the list of files under legal-hold is to be
dumped.
-output-volume <volume name> - Output Volume Name
Specifies the name of the output volume containing the output directory path where the list of files under
legal-hold is to be dumped. The output volume must be a regular read-write volume.
-output-directory-path <text> - Path Relative to Output Volume Root
Specifies the output directory path relative to the output volume root, where the list of files under legal-hold is
to be dumped. The output directory path should be of the form "/directory-path". If output needs to be dumped
on the volume root, specify the path as "/".
Examples
The following example starts a legal-hold dump-files operation:
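An illustrative command line built from the parameters documented above (the volume and path values are placeholders):
vs1::> snaplock legal-hold dump-files -litigation-name litigation1 -volume slc_vol1 -output-volume vol1 -output-directory-path /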
Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver for which the list of litigations is to be dumped.
[-volume <volume name>] - Volume Name
If this parameter is specified, the command displays the list of litigations for the volume that matches the specified value. The volume must be of type SnapLock compliance.
-output-volume <volume name> - Output Volume Name
Specifies the name of the output volume containing the output directory path where the list of litigations is to
be dumped. The output volume must be a regular read-write volume.
-output-directory-path <text> - Path Relative to Output Volume Root
Specifies the output directory path relative to the volume root, where the list of litigations is to be dumped.
The output directory path should be of the form "/directory-path". If output needs to be dumped to the volume
root, specify the path as "/".
Examples
The following example starts a legal-hold dump-litigations job:
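An illustrative command line built from the parameters documented above (the volume and path values are placeholders):
vs1::> snaplock legal-hold dump-litigations -output-volume vol1 -output-directory-path /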
Description
The snaplock legal-hold end command is used to release legal-hold on specified file or files for a given litigation. Only a
user with security login role vsadmin-snaplock is allowed to perform this operation.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver which owns the volume. The specified file or files to be released from legal-
hold reside on this volume.
-litigation-name <text> - Litigation Name
Specifies the name of the litigation for which the file or files have to be released from legal-hold.
Examples
The following example starts a legal-hold end operation on file file1 in volume slc_vol1:
vs1::> snaplock legal-hold end -litigation-name litigation1 -volume slc_vol1 -path /file1
SnapLock legal-hold end operation is queued. Run "snaplock legal-hold show -operation-id 16842773 -instance" to view the operation status.
The following example starts a legal-hold end operation on all files in the volume slc_vol1:
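An illustrative command line for this case; specifying the volume root as the path releases all files in the volume from legal-hold:
vs1::> snaplock legal-hold end -litigation-name litigation1 -volume slc_vol1 -path /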
Description
The snaplock legal-hold show command displays the status of a legal-hold operation. Information about completed operations is cleaned up one hour after completion. Only a user with the security login role vsadmin-snaplock is allowed to perform this operation.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
If this parameter is specified, the command displays all legal-hold operations that match the specified Vserver.
[-operation-id <integer>] - Operation ID
If this parameter is specified, the command displays all legal-hold operations that match the specified
operation ID.
[-volume <volume name>] - Volume Name
If this parameter is specified, the command displays all legal-hold operations that match the specified volume. The parameter specifies the volume on which the legal-hold operation is running or has completed.
Examples
The following examples show the status of legal-hold operations for Vserver vs1 and volume slc_vol1 and the status of
legal-hold operation for operation ID 16842786 respectively:
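Illustrative command lines for these two queries (output not reproduced):
cluster1::> snaplock legal-hold show -vserver vs1 -volume slc_vol1
cluster1::> snaplock legal-hold show -operation-id 16842786 -instance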
• System ComplianceClock
• Volume ComplianceClock
System ComplianceClock (SCC) is maintained per node. SCC is used to update the Volume ComplianceClock and to provide a
base value for Volume ComplianceClock for new SnapLock volumes. The SCC is initialized once by the user and takes the
initial base value from the system clock. snaplock compliance-clock show can be used to check the value of the System
ComplianceClock.
Volume ComplianceClock (VCC) is maintained per volume and is used as the time reference to calculate the expiry time of
SnapLock objects in the SnapLock volume, such as files and the expiry date of the volume. volume snaplock show can be
used to check the value of the Volume ComplianceClock.
Related references
snaplock compliance-clock show on page 619
volume snaplock show on page 1644
Description
snaplock compliance-clock initialize command is used to initialize System ComplianceClock from the system clock.
System ComplianceClock can be initialized only once by the user. Once initialized, user cannot make any changes to the System
ComplianceClock. Hence, user should ensure that system clock is correct before initializing the System ComplianceClock.
Parameters
-node {<nodename>|local} - Node
Specifies the name of the node on which System ComplianceClock needs to be initialized.
[-force [true]] - Forces Initialization
If you use this parameter, the warning message displayed during the snaplock compliance-clock initialize operation is suppressed.
Warning: You are about to initialize the secure ComplianceClock of the node
node1 to the current value of the node's system clock. This
procedure can be performed only once on a given node, so you should
ensure that the system time is set correctly before proceeding.
The current node's system clock is: Wed Nov 26 16:18:30 IST 2014
cluster-1::>
Related references
snaplock compliance-clock show on page 619
Description
The snaplock compliance-clock show command displays the System ComplianceClock of the nodes in the cluster. It displays the following information:
• Node name
• ComplianceClock Time
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays the ComplianceClock for the specified node only.
[-time <text>] - ComplianceClock Time of the Node
If this parameter is specified, the command displays nodes having the same -time value.
Examples
The following example shows the output when the session privilege level is "diagnostic".
Node: node1
ComplianceClock Time: Mon Jan 12 11:37:55 IST 2015 +05:30
Node ID: 4040216954
ID: 1418640203778
ComplianceClock Time (secs): 1419002188
ComplianceClock Skew (secs): 1014
Description
The snaplock compliance-clock ntp modify command modifies the option to enable or disable the SnapLock
ComplianceClock synchronization with the system time. The ComplianceClock is synchronized only when an NTP server has
been configured so that the system time follows the NTP time and the skew between the ComplianceClock time and the system
time is greater than 1 day.
Parameters
[-is-sync-enabled {true|false}] - Enable ComplianceClock sync to NTP system time
Specifies whether synchronization should be enabled or not. This is a cluster-wide option.
Examples
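An illustrative example using the documented -is-sync-enabled parameter:
cluster1::> snaplock compliance-clock ntp modify -is-sync-enabled true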
Description
The snaplock compliance-clock ntp show command displays the ComplianceClock synchronization setting. It displays the following information:
• is-sync-enabled - Displays whether the option to synchronize the ComplianceClock with the system time is enabled.
Examples
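An illustrative example (output not reproduced):
cluster1::> snaplock compliance-clock ntp show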
Description
The snaplock event-retention abort command aborts an ongoing Event Based Retention (EBR) operation. This command only aborts operations that have not yet completed. Only a user with the security login role vsadmin-snaplock is allowed to perform this operation.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver on which the EBR operation is running.
-operation-id <integer> - Operation ID
Specifies the operation ID of the EBR operation that needs to be aborted.
Examples
The following example aborts an ongoing EBR operation with operation-id 16842754:
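An illustrative command line built from the parameters documented above:
cluster1::> snaplock event-retention abort -vserver vs1 -operation-id 16842754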
Description
The snaplock event-retention apply command starts a new operation to apply the specified Event Based Retention
(EBR) policy to all files in the specified path. If a file is a regular file, it will be made a WORM file and retained for a retention-
period as defined by the specified policy name. If a file is already WORM, its retention time will be extended to a retention-
period as defined by the specified policy name, starting from the current time. The retention time of a file will be extended only
if the file's current retention time is less than the new retention time value to be set. Only a user with security login role
vsadmin-snaplock is allowed to perform this operation.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver which has the EBR policy defined to be applied on one or more files.
-policy-name <text> - Policy Name
Specifies the name of the EBR policy to be applied on one or more files.
-volume <volume name> - Volume
Specifies the name of the SnapLock volume containing a file path or a directory path as specified by the path
parameter. The specified EBR policy is applied to one or more files depending on the value of path.
-path <text> - Path
Specifies the path relative to the volume root, of the form "/path". The path can be a path to a file or a directory. The EBR policy is applied to all files under the specified path. To apply the EBR policy to all files in a volume, specify the path as "/".
Examples
The following example starts an EBR operation to apply a policy on files for the specified volume:
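An illustrative command line built from the parameters documented above (the policy, volume, and path values are placeholders):
cluster1::> snaplock event-retention apply -vserver vs1 -policy-name p1 -volume slc_vol1 -path /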
Description
The snaplock event-retention show command displays the status of an Event Based Retention (EBR) operation. Information about completed operations is cleaned up one hour after completion. Only a user with the security login role vsadmin-snaplock is allowed to perform this operation.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
Examples
The following examples show the status of EBR operations for Vserver "vs1" and volume "slc" and the status of the event-retention operation for operation ID 16842753, respectively.
Description
The snaplock event-retention show-vservers command displays the Vservers that have SnapLock Event Based Retention (EBR) policies created.
Parameters
[-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
Examples
The following example displays all Vservers that have SnapLock EBR policies:
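An illustrative example (output not reproduced):
cluster1::> snaplock event-retention show-vservers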
Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver for which a policy needs to be created.
-name <text> - Policy Name
Specifies the name of the EBR policy to be created.
-retention-period {{<integer> seconds|minutes|hours|days|months|years} | infinite} - Event
Retention Period
Specifies the retention period for an EBR policy.
Examples
The following example creates a new EBR policy "p1" for Vserver "vs1" with a retention period of "10 years":
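An illustrative command line for this example, built from the parameters documented above:
cluster1::> snaplock event-retention policy create -vserver vs1 -name p1 -retention-period "10 years"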
Description
The snaplock event-retention policy delete command deletes Event Based Retention (EBR) policies for a Vserver. Only a user with the security login role vsadmin-snaplock is allowed to perform this operation.
Parameters
-vserver <vserver name> - Vserver Name
If this parameter is specified, the command deletes all EBR policies that match the specified Vserver.
-name <text> - Policy Name
If this parameter is specified, the command deletes all EBR policies that match the specified name.
Examples
The following example deletes retention policy "p1" for Vserver "vs1":
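An illustrative command line for this example:
cluster1::> snaplock event-retention policy delete -vserver vs1 -name p1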
Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver for which retention period of a policy needs to be modified.
-name <text> - Policy Name
Specifies the name of the EBR policy for which the retention period needs to be modified.
[-retention-period {{<integer> seconds|minutes|hours|days|months|years} | infinite}] - Event
Retention Period
Specifies the new value of retention period.
Examples
The following example modifies the retention period of policy "p1" for Vserver "vs1" to "5 years":
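An illustrative command line for this example:
cluster1::> snaplock event-retention policy modify -vserver vs1 -name p1 -retention-period "5 years"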
Description
The snaplock event-retention policy show command displays Event Based Retention (EBR) policies for a Vserver. A policy consists of a policy-name and a retention-period. The command output depends on the parameter or parameters specified. If no parameters are specified, all policies for all Vservers are displayed. If one or more parameters are specified, only those entries matching the specified values are displayed. Only a user with the security login role vsadmin-snaplock is allowed to perform this operation.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
If this parameter is specified, the command displays all EBR policies that match the specified Vserver.
[-name <text>] - Policy Name
If this parameter is specified, the command displays all EBR policies that match the specified name.
[-retention-period {{<integer> seconds|minutes|hours|days|months|years} | infinite}] - Event
Retention Period
If this parameter is specified, the command displays all EBR policies that match the specified retention-
period.
The SnapLock log volume is a SnapLock Compliance volume. The SnapLock log infrastructure creates directories and files in
this volume to store the SnapLock log records.
The maximum log size specifies the maximum size of a log file that stores SnapLock log records. Once the file reaches this size,
it is archived and a new log file is created.
The default retention period is the time period for which the log file is retained, if the SnapLock log records that are stored in the
file do not carry any retention period.
Description
The snaplock log create command creates a SnapLock log configuration for the Vserver. A SnapLock log configuration consists of the volume to store the log, the maximum size of the log file, and the default period of time for which the log file should be retained.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver for which the configuration needs to be created.
-volume <volume name> - Log Volume Name
Specifies the name of the volume that is used for logging. This must be a SnapLock Compliance volume.
[-max-log-size {<integer>[KB|MB|GB|TB|PB]}] - Maximum Size of Log File
Specifies the maximum size of the log file. Once a log file reaches this limit, it is archived and a new log file is
created. This parameter is optional. The default value is 10MB.
Examples
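An illustrative example built from the parameters documented above (the log volume name is a placeholder):
cluster1::> snaplock log create -vserver vs1 -volume logvol1 -max-log-size 10MB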
Description
The snaplock log delete command deletes the SnapLock log configuration associated with the Vserver. This command closes all the active log files in the log volume and marks the volume as disabled for SnapLock logging.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver whose SnapLock log configuration is deleted.
Examples
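An illustrative example:
cluster1::> snaplock log delete -vserver vs1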
Description
The snaplock log modify command modifies the SnapLock log configuration of the Vserver. The log volume, maximum size of the log file, and default retention period can be modified. If the log volume is modified, the active log files in the existing log volume are closed and the log volume is marked as disabled for logging. The new log volume is enabled for logging.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver for which the SnapLock log configuration needs to be modified.
[-volume <volume name>] - Log Volume Name
Specifies the new log volume that is configured for this Vserver for logging.
[-max-log-size {<integer>[KB|MB|GB|TB|PB]}] - Maximum Size of Log File
Specifies the new value for maximum log file size.
Examples
cluster1::> snaplock log modify -volume vol1 -vserver vs1 -max-log-size 15MB
[Job 48] Job succeeded: SnapLock log modified for Vserver "vs1".
Description
The snaplock log show command displays the following information about the SnapLock log infrastructure:
• Vserver name
• Volume name
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
If this parameter is specified, the command displays the log information for Vservers that match the specified value.
[-volume <volume name>] - Log Volume Name
If this parameter is specified, the command displays the log configuration for volumes that match the specified
value.
[-max-log-size {<integer>[KB|MB|GB|TB|PB]}] - Maximum Size of Log File
If this parameter is specified, the command displays the log configuration with a matching -max-log-size
value.
[-default-retention-period {{<integer> seconds|minutes|hours|days|months|years} | infinite}]
- Default Log Record Retention Period
If this parameter is specified, the command displays the log configuration with a matching -default-
retention-period value.
Description
The snaplock log file archive command archives the currently active log file by closing it and creating a new active log
file. If base-name is not provided, the command archives all active log files associated with the Vserver. Otherwise, the
command archives the active log file associated with the base-name provided.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver for which active log files need to be archived.
[-base-name {privileged-delete | system | legal-hold}] - Base Name of Log File
Specifies the log base-name, whose active log file needs to be archived. The base-name is the name of the
source of log records. Valid base-names are system, privileged-delete and legal-hold. Each base-
name has its own directory in which log files containing log records generated by base-name are stored.
Examples
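An illustrative example built from the parameters documented above:
cluster1::> snaplock log file archive -vserver vs1 -base-name system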
• Vserver name
• Volume name
• File path
• File size
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
If this parameter is specified, then log files in the Vservers that match the specified value are displayed.
[-base-name {privileged-delete | system | legal-hold}] - Base Name of Log File
If this parameter is specified, then the log files having a matching -base-name are displayed.
[-volume <volume name>] - Log Volume Name
If this parameter is specified, then the log files in volumes that match the specified value are shown.
[-file-path <text>] - Log File Path
If this parameter is specified, then the log files that match the specified value are displayed.
[-expiry-time <text>] - Log File Expiry Time
If this parameter is specified, then the log files having a matching -expiry-time value are displayed.
[-file-size {<integer>[KB|MB|GB|TB|PB]}] - File Size
If this parameter is specified, then the log files having a matching -file-size value are displayed.
Examples
Vserver : vs1
Volume : vol1
Base Name : system
File Path : /vol/vol1/snaplock_log/system_logs/20160120_183756_GMT-present
Expiry Time : Wed Jul 20 18:37:56 GMT 2016
File Size : 560B
Related references
snapmirror show on page 683
snapmirror create on page 637
snapmirror abort
Abort an active transfer
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The snapmirror abort command stops SnapMirror transfers that might have started and not completed. A SnapMirror
transfer is an operation on a given SnapMirror relationship and the relationship is identified by its destination endpoint, which
can be a volume, a Vserver, or a non-Data ONTAP endpoint. You identify the SnapMirror relationship with this command and
the command aborts the transfer for the relationship. For load-sharing mirrors, the command also aborts transfers for other
relationships that are part of the same load-sharing set. For SolidFire destination endpoints, the snapmirror abort command
is only supported if the endpoint is in a SnapMirror relationship.
The use of wildcards in parameter values is not supported from the source Vserver or cluster for relationships with
"Relationship Capability" of "8.2 and above".
You can use this command from the source or the destination Vserver or cluster for FlexVol volume relationships.
For SnapMirror Synchronous relationships, this command aborts any ongoing transfer and takes the relationship OutOfSync.
This can result in primary client IO failure for relationships with a policy of type strict-sync-mirror. Instead, the best
practice recommendation is to use the snapmirror quiesce command.
For Vserver SnapMirror relationships, this command must be run only from the cluster containing the destination Vserver.
Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats. The
normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships
with an AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For
SnapMirror relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/
name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
[-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints, if
this parameter is specified, parameters -source-volume and for relationships with "Relationship
Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not supported
for relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, parameters -
source-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -source-
cluster must also be specified. This parameter is not supported for relationships with non-Data ONTAP
source endpoints.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>} - Destination Path
This parameter specifies the destination endpoint of the SnapMirror relationship in one of four path formats.
The normal format includes the names of the Vserver (vserver) and/or volume (volume). To support
Examples
To stop the active SnapMirror replication to the destination volume vs2.example.com:dept_eng_dp_mirror1, type
the following command:
For relationships with "Relationship Capability" of "Pre 8.2", to stop the active SnapMirror replication to the
destination volume cluster2://vs2.example.com/dept_eng_dp_mirror1, type the following command:
To stop the active SnapMirror replication to the destination Vserver dvs1.example.com, type the following command:
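Reconstructed for illustration, the command lines for the three cases above would resemble the following; the trailing colon in the Vserver destination path follows the conventional Vserver-path form and is an assumption here:
cluster2::> snapmirror abort -destination-path vs2.example.com:dept_eng_dp_mirror1
cluster2::> snapmirror abort -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror1
cluster2::> snapmirror abort -destination-path dvs1.example.com: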
Related references
job stop on page 149
snapmirror quiesce on page 663
snapmirror show on page 683
snapmirror break
Make SnapMirror destination writable
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The snapmirror break command breaks a SnapMirror relationship between a source and destination endpoint of a data
protection mirror. The destination endpoint can be a Vserver, volume or SolidFire endpoint. When Data ONTAP breaks the
relationship, if the endpoint is a volume or SolidFire endpoint, the destination is made read/write and can diverge from the
source volume, client redirection is turned off on the destination, the restart checkpoint is cleared, and the clients can see the
latest Snapshot copy. If the endpoint is a Vserver, the subtype of the destination Vserver is changed to default, volumes in the
destination Vserver are made read/write and the clients can now access the Vserver namespace for modifications. For SolidFire
destination endpoints, the snapmirror break command is only supported if the endpoint is in a SnapMirror relationship.
Subsequent manual or scheduled SnapMirror updates to the broken relationship will fail until the SnapMirror relationship is
reestablished using the snapmirror resync command.
This command applies to data protection mirrors. For vault relationships, this command is only intended for use when preparing
for a Data ONTAP revert operation (see the -delete-snapshots parameter in advanced privilege level). This command is not
intended for use with load-sharing mirrors.
For relationships with a policy of type strict-sync-mirror or sync-mirror, the relationship must be quiesced before running the snapmirror break command.
This command is supported for SnapMirror relationships with the field "Relationship Capability" showing as either
"8.2 and above" or "Pre 8.2" in the output of the snapmirror show command.
The snapmirror break command must be used from the destination Vserver or cluster.
Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats. The
normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships
with an AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For
SnapMirror relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/
name.
Examples
To stop the SnapMirror replication to the destination volume vs2.example.com:dept_eng_dp_mirror1, type the
following command:
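The command itself was dropped from this copy of the page; a plausible form, issued from the destination cluster (prompt illustrative), is:
cluster2::> snapmirror break -destination-path vs2.example.com:dept_eng_dp_mirror1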
For relationships with "Relationship Capability" of "Pre 8.2", to stop the SnapMirror replication to the
destination volume cluster2://vs2.example.com/dept_eng_dp_mirror1, type the following command:
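A likely form of the missing command, using the "Pre 8.2" path format (prompt illustrative), is:
cluster2::> snapmirror break -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror1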
To stop replication to the destination Vserver dvs1.example.com of a Vserver SnapMirror relationship, type the
following command:
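One plausible form of the missing command for the Vserver relationship (prompt illustrative) is:
cluster2::> snapmirror break -destination-path dvs1.example.com: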
Under PVR control to stop synchronous SnapMirror replication to the destination Consistency Group cg_dst in Vserver
vs2.example.com, type the following command:
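The command text was not preserved; assuming the Consistency Group path form vserver:/cg/name, it would resemble:
cluster2::> snapmirror break -destination-path vs2.example.com:/cg/cg_dst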
Related references
snapmirror resync on page 676
snapmirror show on page 683
snapmirror create
Create a new SnapMirror relationship
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The snapmirror create command creates a SnapMirror relationship between a source endpoint and a destination endpoint.
When a FlexGroup SnapMirror relationship is created, normally hidden relationships are also created for the constituent
volumes. These relationships can be seen by using the -expand parameter of the snapmirror show command. Source
information for these relationships can be seen using the -expand parameter of the snapmirror list-destinations
command. Other SnapMirror commands are disabled for FlexGroup constituent relationships and FlexGroup constituent
volumes.
If all systems involved are running Data ONTAP version 8.2 and later, a Vserver peering relationship must be set up using the
vserver peer create command between the source and the destination Vservers to create a relationship between the source
and destination volumes. To enable interoperability with Data ONTAP 8.1, if the source volume is on a storage system running
clustered Data ONTAP 8.1, the cluster administrator can create a data protection relationship between the source and destination
volumes without a Vserver peering relationship between the source and destination Vservers. These relationships are managed
the same way as on Data ONTAP 8.1 and the "Relationship Capability" field, as shown in the output of the
snapmirror show command, is set to "Pre 8.2".
Load-sharing mirrors must be confined to a single Vserver; they are not allowed to span Vservers. Load-sharing relationships
are created with the "Relationship Capability" field set to "Pre 8.2" even if both the source and destination volumes
are on a storage system running Data ONTAP version 8.2 and later. There is no "8.2 and above" implementation for load-
sharing relationships.
A set of load-sharing mirrors can have one or more destination volumes. You create separate SnapMirror relationships between
the common source volume and each destination volume to create the set of load-sharing mirrors.
The source or destination of a load-sharing relationship cannot be the source or destination of any other SnapMirror
relationship.
After creating the relationship, the destination volume can be initialized using the snapmirror initialize command. The
destination volumes in a set of load-sharing mirrors are initialized using the snapmirror initialize-ls-set command.
The snapmirror create command must be used from the destination Vserver or cluster.
Parameters
{ -source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>} - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats. The
normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships
with an AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For
SnapMirror relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/
name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
-source-vserver <vserver name> - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints, if
this parameter is specified, parameters -source-volume and for relationships with "Relationship
Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not supported
for relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, parameters -
source-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -source-
cluster must also be specified. This parameter is not supported for relationships with non-Data ONTAP
source endpoints.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>} - Destination Path
This parameter specifies the destination endpoint of the SnapMirror relationship in one of four path formats.
The normal format includes the names of the Vserver (vserver) and/or volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships
with AltaVault destinations, the destination endpoint is specified in the form hostip:/share/share-name.
Examples
To create an extended data protection relationship between the source endpoint vs1.example.com:dept_eng, and the
destination endpoint vs2.example.com:dept_eng_dp_mirror2, with the default policy of MirrorAndVault, type
the following command:
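The example command is missing here; a plausible form, relying on the default XDP policy (prompt illustrative), is:
cluster2::> snapmirror create -source-path vs1.example.com:dept_eng -destination-path vs2.example.com:dept_eng_dp_mirror2 -type XDP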
To create a synchronous SnapMirror relationship between the source Flexvol vs1.example.com:vol_log, and the
destination Flexvol vs2.example.com:vol_log_sync_dp when the source cluster is running ONTAP 9.5 or above,
type the following command:
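A likely form of the missing command, assuming the built-in synchronous policy is named Sync (prompt illustrative), is:
cluster2::> snapmirror create -source-path vs1.example.com:vol_log -destination-path vs2.example.com:vol_log_sync_dp -policy Sync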
To create a strict synchronous SnapMirror relationship between the source Flexvol vs1.example.com:vol_log, and
the destination Flexvol vs2.example.com:vol_log_sync_dp when the source cluster is running ONTAP 9.5 or
above, type the following command:
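A likely form of the missing command, assuming the built-in strict synchronous policy is named StrictSync (prompt illustrative), is:
cluster2::> snapmirror create -source-path vs1.example.com:vol_log -destination-path vs2.example.com:vol_log_sync_dp -policy StrictSync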
To create a data protection mirror between the source endpoint cluster1://vs1.example.com/dept_eng, and the
destination endpoint cluster2://vs2.example.com/dept_eng_dp_mirror2 when the source cluster is running
Data ONTAP 8.1 software, type the following command:
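The command text was lost in extraction; a plausible form, using the "Pre 8.2" path format (prompt illustrative), is:
cluster2::> snapmirror create -source-path cluster1://vs1.example.com/dept_eng -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror2 -type DP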
To create a load-sharing mirror between the source endpoint cluster1://vs1.example.com/mkt1, and the
destination endpoint cluster1://vs1.example.com/mkt1_ls1 with the schedule named 5min used to update the
relationship, type the following command:
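A plausible form of the missing command (prompt illustrative) is:
cluster1::> snapmirror create -source-path cluster1://vs1.example.com/mkt1 -destination-path cluster1://vs1.example.com/mkt1_ls1 -type LS -schedule 5min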
To create a SnapMirror relationship between the source Vserver vs1.example.com, and the destination Vserver
dvs1.example.com with the schedule named hourly used to update the relationship, type the following command:
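The command itself is missing; a likely form, using the trailing-colon Vserver path notation (prompt illustrative), is:
cluster2::> snapmirror create -source-path vs1.example.com: -destination-path dvs1.example.com: -schedule hourly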
To create an extended data protection (XDP) relationship between the Data ONTAP source endpoint
vs1.example.com:data_ontap_vol, and the AltaVault destination endpoint 10.0.0.11:/share/share1, type the
following command:
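One plausible form of the missing command (prompt and cluster context illustrative) is:
cluster1::> snapmirror create -source-path vs1.example.com:data_ontap_vol -destination-path 10.0.0.11:/share/share1 -type XDP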
To create an extended data protection (XDP) relationship between the SolidFire source endpoint 10.0.0.12:/lun/
0001, and the Data ONTAP destination endpoint vs2.example.com:data_ontap_vol2, type the following command:
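A likely form of the missing command (prompt illustrative) is:
cluster2::> snapmirror create -source-path 10.0.0.12:/lun/0001 -destination-path vs2.example.com:data_ontap_vol2 -type XDP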
Under PVR control to create a SnapMirror synchronous Consistency Group relationship with the following attributes:
• It is between the source Consistency Group cg_src in Vserver vs1.example.com, and the destination Consistency
Group cg_dst in Vserver vs2.example.com.
• It has item mappings between lun1 and lun2 on volume srcvol and lun1 and lun2 on volume dstvol.
• It uses a policy named SmgrSync that has a policy type of smgr-mirror that the user has previously created.
Under PVR control to create a new item mapping between lun3 on volume srcvol and lun3 on volume dstvol in the
existing SnapMirror synchronous Consistency Group relationship that was created above, type the following command:
Related references
snapmirror update on page 710
snapmirror update-ls-set on page 714
job schedule cron create on page 163
snapmirror policy on page 723
snapmirror policy create on page 725
vserver create on page 1675
vserver peer create on page 2049
snapmirror initialize on page 647
volume create on page 1451
snapmirror show on page 683
snapmirror list-destinations on page 653
lun create on page 170
snapmirror delete on page 644
snapmirror resync on page 676
snapmirror delete
Delete a SnapMirror relationship
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The snapmirror delete command removes the SnapMirror relationship between a source endpoint and a destination
endpoint. The destination endpoint can be a Vserver, a volume, or a non-Data ONTAP endpoint. The Vservers, volumes and
non-Data ONTAP destinations are not destroyed and Snapshot copies on the volumes are not removed.
For relationships with SolidFire endpoints, the SnapMirror source commands snapmirror release and snapmirror
list-destinations are not supported. Therefore, Snapshot copies that are locked by SnapMirror on the source container
cannot be cleaned up using the snapmirror release command. If the source container is a Data ONTAP volume, in order to
reclaim space captured in the base Snapshot copy on the volume, issue a snapshot delete command specifying the -
ignore-owners parameter in diag privilege level. To reclaim space captured in a Snapshot copy locked by SnapMirror on a
SolidFire system, refer to SolidFire documentation.
The snapmirror delete command fails if a SnapMirror transfer for the SnapMirror relationship is in progress for
relationships with "Relationship Capability" of "Pre 8.2". For relationships with "8.2 and above" capability the
delete will succeed even if a transfer is in progress and the transfer will ultimately stop.
A set of load-sharing mirrors can contain multiple destination volumes, each destination volume having a separate SnapMirror
relationship with the common source volume. When used on one of the SnapMirror relationships from the set of load-sharing
mirrors, the snapmirror delete command deletes the specified SnapMirror relationship from the set of load-sharing mirrors.
The snapmirror delete command preserves the read-write or read-only attributes of the volumes of a SnapMirror
relationship after the relationship is deleted. Therefore, a read-write volume that was the source of a SnapMirror relationship
retains its read-write attributes, and a data protection volume or a load-sharing volume that was a destination of a SnapMirror
relationship retains its read-only attributes. Similarly, the subtype attribute of source and destination Vservers is not modified
when a Vserver SnapMirror relationship is deleted.
Note: When a SnapMirror relationship from a set of load-sharing mirrors is deleted, the destination volume becomes a data
protection volume and retains the read-only attributes of a data protection volume.
For relationships with a policy of type strict-sync-mirror or sync-mirror, the relationship must be Quiesced before it
can be deleted.
This command is supported for SnapMirror relationships with the field "Relationship Capability" showing as either
"8.2 and above" or "Pre 8.2" in the output of the snapmirror show command.
For relationships with "Relationship Capability" of "8.2 and above", the snapmirror delete command must be
used from the destination Vserver or cluster. The SnapMirror relationship information is deleted from the destination Vserver,
but no cleanup or deletion is performed on the source Vserver. The snapmirror release command must be issued on the
source Vserver to delete the source relationship information.
For relationships with "Relationship Capability" of "Pre 8.2", you can use this command from the source or from the
destination cluster. When used from the destination cluster, the SnapMirror relationship information on the source and
destination clusters is deleted. When used from the source cluster, only the SnapMirror relationship information on the source
cluster is deleted.
Examples
To delete the SnapMirror relationship with the destination endpoint vs2.example.com:dept_eng_dp_mirror4, type
the following command:
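The command text was dropped from this copy; a plausible form (prompt illustrative) is:
cluster2::> snapmirror delete -destination-path vs2.example.com:dept_eng_dp_mirror4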
For relationships with "Relationship Capability" of "Pre 8.2", to delete the SnapMirror relationship with the
destination endpoint cluster2://vs2.example.com/dept_eng_dp_mirror4, type the following command:
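A likely form of the missing command, using the "Pre 8.2" path format (prompt illustrative), is:
cluster2::> snapmirror delete -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror4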
To delete the SnapMirror relationship with destination endpoint dvs1.example.com:, type the following command:
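One plausible form of the missing command (prompt illustrative) is:
cluster2::> snapmirror delete -destination-path dvs1.example.com: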
Under PVR control to delete the synchronous SnapMirror Consistency Group relationship with the destination
Consistency Group cg_dst in Vserver vs2.example.com and all item mappings, type the following command:
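The command text was not preserved; assuming the Consistency Group path form vserver:/cg/name, it would resemble:
cluster2::> snapmirror delete -destination-path vs2.example.com:/cg/cg_dst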
Under PVR control to delete the item mapping between lun3 on volume srcvol and lun3 on volume dstvol in the
SnapMirror synchronous Consistency Group relationship with the destination Consistency Group cg_dst in Vserver
vs2.example.com, type the following command:
Related references
snapmirror release on page 665
snapmirror list-destinations on page 653
snapmirror resync on page 676
snapmirror resume on page 674
snapmirror initialize
Start a baseline transfer
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The snapmirror initialize command initializes the destination Vserver, volume, or non-Data ONTAP endpoint of a
SnapMirror relationship. The command behaves differently for data protection (DP), extended data protection (XDP), and
load-sharing (LS) relationships.
If you specify a sync-mirror or strict-sync-mirror type policy, the snapmirror initialize command creates and
initializes a synchronous relationship and brings it InSync, providing zero RPO data protection.
For data protection (DP) and extended data protection (XDP) relationships, the snapmirror initialize command initializes
the destination volume.
For load-sharing (LS) relationships, the snapmirror initialize command initializes a new load-sharing mirror in an
existing set of load-sharing mirrors. If the command finishes before the start of a scheduled or manual transfer of the set of load-
sharing mirrors, the load-sharing mirror is up to date with the set of load-sharing mirrors; otherwise, the load-sharing mirror will
be brought up to date at the next scheduled or manual transfer of the set of load-sharing mirrors.
The initial transfer to an empty destination volume is called a baseline transfer. During a baseline transfer for a data protection
(DP) or extended data protection (XDP) relationship, the snapmirror initialize command takes a Snapshot copy on the
source volume to capture the current image of the source volume. For data protection relationships, the snapmirror
initialize command transfers all of the Snapshot copies up to and including the Snapshot copy created by it from the source
volume to the destination volume. For extended data protection (XDP) relationships, the behavior of the snapmirror
initialize command depends on the snapmirror policy associated with the relationship. If the policy type is
async-mirror, then, depending on the rules in the policy, the command transfers either all of the Snapshot copies up to and
including the Snapshot copy it created, or only the Snapshot copy it created, from the source volume to the destination volume.
For extended data protection (XDP) relationships with a policy type of vault or mirror-vault, the snapmirror
initialize command transfers only the Snapshot copy it created.
After the snapmirror initialize command successfully completes, the last Snapshot copy transferred is made the exported
Snapshot copy on the destination volume.
You can use the snapmirror initialize command to initialize a specific load-sharing mirror that is new to the set of load-
sharing mirrors. An initialize of the new load-sharing mirror should bring it up to date with the other up-to-date destination
volumes in the set of load-sharing mirrors.
Note: Using the snapmirror initialize command to initialize a set of load-sharing mirrors will not work. Use the
snapmirror initialize-ls-set command to initialize a set of load-sharing mirrors.
If a SnapMirror relationship does not already exist, that is, the relationship was not created using the snapmirror create
command, the snapmirror initialize command will implicitly create the SnapMirror relationship, with the same
behaviors as described for the snapmirror create command before initializing the relationship. This implicit create feature
is not supported for Vservers.
This command is supported for SnapMirror relationships with the field "Relationship Capability" showing as either
"8.2 and above" or "Pre 8.2" in the output of the snapmirror show command.
For relationships with "Relationship Capability" of "8.2 and above", you can track the progress of the operation
using the snapmirror show command.
For relationships with "Relationship Capability" of "Pre 8.2", a job will be spawned to operate on the SnapMirror
relationship, and the job id will be shown in the command output. The progress of the job can be tracked using the job show
and job history show commands.
Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats. The
normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships
with an AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For
SnapMirror relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/
name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
[-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints, if
this parameter is specified, parameters -source-volume and for relationships with "Relationship
Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not supported
for relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, parameters -
source-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -source-
cluster must also be specified. This parameter is not supported for relationships with non-Data ONTAP
source endpoints.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>} - Destination Path
This parameter specifies the destination endpoint of the SnapMirror relationship in one of four path formats.
The normal format includes the names of the Vserver (vserver) and/or volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships
with AltaVault destinations, the destination endpoint is specified in the form hostip:/share/share-name.
For relationships with SolidFire destinations, the destination endpoint is specified in the form hostip:/lun/
name.
| [-destination-cluster <Cluster name>] - Destination Cluster
Specifies the destination cluster of the SnapMirror relationship. If this parameter is specified, parameters -
destination-vserver and -destination-volume must also be specified. This parameter is only
applicable for relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be
specified when operating in a Vserver context on relationships with "Relationship Capability" of "8.2
and above".
-destination-vserver <vserver name> - Destination Vserver
Specifies the destination Vserver of the SnapMirror relationship. For relationships with volumes as endpoints,
if this parameter is specified, parameters -destination-volume and for relationships with
"Relationship Capability" of "Pre 8.2", -destination-cluster must also be specified. This
parameter is not supported for relationships with non-Data ONTAP destination endpoints.
Examples
To start the initial transfer for the SnapMirror relationship with the destination endpoint
vs2.example.com:dept_eng_dp_mirror2 after the relationship has been created with the snapmirror create
command, type the following command:
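The command itself was dropped from this copy; a plausible form (prompt illustrative) is:
cluster2::> snapmirror initialize -destination-path vs2.example.com:dept_eng_dp_mirror2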
For relationships with "Relationship Capability" of "Pre 8.2", to start the initial transfer for the SnapMirror
relationship with the destination endpoint cluster2://vs2.example.com/dept_eng_dp_mirror2 after the
relationship has been created with the snapmirror create command, type the following command:
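A likely form of the missing command, using the "Pre 8.2" path format (prompt illustrative), is:
cluster2::> snapmirror initialize -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror2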
To create a data protection mirror relationship between the source endpoint vs1.example.com:dept_mkt, and the
destination endpoint vs2.example.com:dep_mkt_dp_mirror, and start the initial transfer, type the following
command:
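The command text was lost in extraction; a plausible form that relies on the implicit-create behavior described above (prompt illustrative) is:
cluster2::> snapmirror initialize -source-path vs1.example.com:dept_mkt -destination-path vs2.example.com:dep_mkt_dp_mirror -type DP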
To create a data protection mirror relationship between the source endpoint cluster1://vs1.example.com/
dept_mkt, and the destination endpoint cluster2://vs2.example.com/dep_mkt_dp_mirror, and start the initial
transfer when the source cluster is running Data ONTAP 8.1 software, type the following command:
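A likely form of the missing command, using the "Pre 8.2" path format (prompt illustrative), is:
cluster2::> snapmirror initialize -source-path cluster1://vs1.example.com/dept_mkt -destination-path cluster2://vs2.example.com/dep_mkt_dp_mirror -type DP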
To create an extended data protection (XDP) relationship between the Data ONTAP source endpoint
vs1.example.com:data_ontap_vol, and the AltaVault destination endpoint 10.0.0.11:/share/share1, and start
the initial transfer, type the following command:
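One plausible form of the missing command (prompt and cluster context illustrative) is:
cluster1::> snapmirror initialize -source-path vs1.example.com:data_ontap_vol -destination-path 10.0.0.11:/share/share1 -type XDP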
To start the initial transfer for the Vserver SnapMirror relationship with destination endpoint dvs1.example.com: after
the relationship was created with the snapmirror create command, type the following command:
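A plausible form of the missing command (prompt illustrative) is:
cluster2::> snapmirror initialize -destination-path dvs1.example.com: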
To initialize the SnapMirror Synchronous relationship between FlexVols vol_log and vol_log_sync_dp and bring it to
InSync, after it is created using the snapmirror create command, type the following command:
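The command itself is missing here; a likely form (prompt illustrative) is:
cluster2::> snapmirror initialize -destination-path vs2.example.com:vol_log_sync_dp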
Under PVR control to create a SnapMirror synchronous Consistency Group relationship with the following attributes:
• It is between the source Consistency Group cg_src in Vserver vs1.example.com, and the destination Consistency
Group cg_dst in Vserver vs2.example.com.
• It has item mappings between lun1 and lun2 on volume srcvol and lun1 and lun2 on volume dstvol.
• It uses a policy named SmgrSync that has a policy type of smgr-mirror that the user has previously created.
Under PVR control to initialize the relationship with destination Consistency Group cg_dst in Vserver
vs2.example.com that has been created with the snapmirror create command and bring it InSync, type the
following command:
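The command text was not preserved; assuming the Consistency Group path form vserver:/cg/name, it would resemble:
cluster2::> snapmirror initialize -destination-path vs2.example.com:/cg/cg_dst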
Related references
snapmirror policy on page 723
snapmirror create on page 637
snapmirror policy create on page 725
snapmirror modify on page 657
snapmirror initialize-ls-set on page 651
snapmirror show on page 683
job show on page 142
job history show on page 150
snapmirror initialize-ls-set
Start a baseline load-sharing set transfer
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The snapmirror initialize-ls-set command initializes and updates a set of load-sharing mirrors. This command is
usually used after the snapmirror create command is used to create a SnapMirror relationship for each of the destination
volumes in the set of load-sharing mirrors. The initial transfers to empty load-sharing mirrors are baseline transfers done in
parallel.
This command is only supported for SnapMirror relationships with the field "Relationship Capability" showing as "Pre
8.2" in the output of the snapmirror show command.
Parameters
{ -source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>} - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats. The
normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships
with an AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For
SnapMirror relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/
name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
-source-vserver <vserver name> - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints, if
this parameter is specified, parameters -source-volume and for relationships with "Relationship
Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not supported
for relationships with non-Data ONTAP source endpoints.
-source-volume <volume name>} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, parameters -
source-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -source-
cluster must also be specified. This parameter is not supported for relationships with non-Data ONTAP
source endpoints.
[-foreground | -w [true]] - Foreground Process
This specifies whether the operation runs as a foreground process. If this parameter is specified, the default
setting is true (the operation runs in the foreground). When set to true, the command will not return until
the process completes. This parameter is only applicable to relationships with "Relationship
Capability" of "Pre 8.2".
Related references
snapmirror create on page 637
snapmirror initialize on page 647
snapmirror show on page 683
snapmirror list-destinations
Display a list of destinations for SnapMirror sources
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The snapmirror list-destinations command displays information including the destination endpoints, the relationship
status, and transfer progress, for SnapMirror relationships whose source endpoints are in the current Vserver if you are in a
Vserver context, or the current cluster if you are in a cluster context.
The command might display several relationships that have the same source and destination endpoints, but have different
relationship IDs. If this is the case, some of the information is stale. It corresponds to relationships that have been deleted on the
destination Vserver or cluster, and have not been released yet on the source Vserver or source cluster.
The relationships and the information displayed are controlled by the parameters that you specify. If no parameters are specified,
the command displays the following information associated with each SnapMirror relationship whose source endpoint is in the
current Vserver if you are in a Vserver context, or the current cluster if you are in a cluster context:
• Source path
• Relationship Type
• Destination Path
• Relationship Status
• Transfer Progress
• Relationship ID
Note the following limitations on the information displayed by the snapmirror list-destinations command:
• The "Relationship Status" field is not valid after the node hosting the source volume joins the cluster quorum, until at
least one transfer is performed on the SnapMirror relationship.
• "Transfer Progress" and "Progress Last Updated" fields are only valid if a Snapshot copy transfer is in progress.
• The "Relationship ID" field is not valid for Vserver SnapMirror relationships.
The -instance and -fields parameters are mutually exclusive and select the fields that are displayed. The -instance
parameter, if specified, displays detailed information about the relationships. The -fields parameter specifies which fields
should be displayed. The other parameters of the snapmirror list-destinations command select the SnapMirror
relationships for which the information is displayed.
This command is not supported for SnapMirror relationships with non-Data ONTAP endpoints.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command only displays the fields that you
have specified.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all relationships
selected.
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>}] - Source Path
Selects SnapMirror relationships that have a matching source path name.
| [-source-vserver <vserver name>] - Source Vserver
Selects SnapMirror relationships that have a matching source Vserver name.
[-source-volume <volume name>]} - Source Volume
Selects SnapMirror relationships that have a matching source volume name.
{ [-destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/
name>|<hostip:/share/share-name>}] - Destination Path
Selects SnapMirror relationships that have a matching destination path name.
| [-destination-vserver <vserver name>] - Destination Vserver
Selects SnapMirror relationships that have a matching destination Vserver name.
[-destination-volume <volume name>]} - Destination Volume
Selects SnapMirror relationships that have a matching destination volume name.
[-relationship-id <UUID>] - Relationship ID
Selects SnapMirror relationships that have a matching relationship identifier. This parameter is not supported
for Vserver SnapMirror relationships.
[-type <snapmirrorType>] - Relationship Type
Selects SnapMirror relationships that have a matching relationship type. Possible values are:
• DP
• XDP
• RST
• none
• vserver
• flexgroup
• async-mirror
• vault
• mirror-vault
[-status <mirror status>] - Relationship Status
Selects SnapMirror relationships that have a matching relationship status. Possible values are:
• Idle
• Transferring
This parameter is not supported for FlexGroup SnapMirror relationships, but it is supported for FlexGroup
constituent relationships.
[-transfer-progress {<integer>[KB|MB|GB|TB|PB]}] - Transfer Progress
Selects SnapMirror relationships that have a matching transfer progress. This parameter is not supported for
FlexGroup SnapMirror relationships, but it is supported for FlexGroup constituent relationships.
[-progress-last-updated <MM/DD HH:MM:SS>] - Timestamp of Last Progress Update
Selects SnapMirror relationships that have a matching transfer progress last updated timestamp. This
parameter is not supported for FlexGroup SnapMirror relationships, but it is supported for FlexGroup
constituent relationships.
[-source-volume-node <nodename>] - Source Volume Node Name
Selects SnapMirror relationships that have a matching source volume node name. For FlexGroup relationships,
it is the node which owns the root constituent source volume. This parameter is not supported for Vserver
SnapMirror relationships.
[-expand [true]] - Show Constituents of the Group
Specifies whether to display constituent relationships of Vserver and FlexGroup SnapMirror relationships. By
default, the constituents are not displayed.
Examples
To display summary information about all relationships whose source endpoints are in the current cluster, type the
following command:
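A command of the following form produces this output (the prompt shown is illustrative):

cluster1::> snapmirror list-destinations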
Progress
Source Destination Transfer Last Relationship
Path Type Path Status Progress Updated ID
----------- ----- ------------ ------- --------- --------- ----------------
vserver1.example.com:dp_s1
DP vserver2.example.com:dp_d1
Idle - - 06b4327b-954f-11e1-af65-123478563412
vserver1.example.com:xdp_s1
XDP vserver2.example.com:xdp_d1
Idle - - a9c1db0b-954f-11e1-af65-123478563412
vserver2.example.com:
DP dvserver2.example2.com:
Idle - - -
3 entries were displayed.
To display summary information about all relationships whose source endpoints are in the current Vserver, type the
following command:
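A command of the following form, issued in the Vserver context, produces this output (the prompt shown is illustrative):

vserver1::> snapmirror list-destinations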
Progress
Source Destination Transfer Last Relationship
Path Type Path Status Progress Updated ID
----------- ----- ------------ ------- --------- --------- ----------------
vserver1.example.com:dp_s1
DP vserver2.example.com:dp_d1
Idle - - 06b4327b-954f-11e1-af65-123478563412
vserver1.example.com:xdp_s1
XDP vserver2.example.com:xdp_d1
Idle - - a9c1db0b-954f-11e1-af65-123478563412
2 entries were displayed.
To display detailed information about SnapMirror relationships whose source endpoints are in the current Vserver, type
the following command:
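Based on the parameters described above, a command of this form can be used (the prompt shown is illustrative):

vserver1::> snapmirror list-destinations -instance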
To display summary information about all relationships including constituent relationships whose source endpoints are in
the current Vserver, type the following command:
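Using the -expand parameter described above, a command of this form can be used (the prompt shown is illustrative):

vserver1::> snapmirror list-destinations -expand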
Related references
snapmirror show on page 683
snapmirror modify
Modify a SnapMirror relationship
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The snapmirror modify command allows you to change one or more properties of SnapMirror relationships. The key
parameter that identifies any SnapMirror relationship is the destination endpoint. The destination endpoint can be a Vserver, a
volume, or a non-Data ONTAP endpoint.
For load-sharing mirrors, a change to a property affects all of the SnapMirror relationships in the set of load-sharing mirrors.
Destination volumes in a set of load-sharing mirrors do not have individual property settings.
Changes made by the snapmirror modify command do not take effect until the next manual or scheduled update of the
SnapMirror relationship. Changes do not affect updates that have started and have not finished yet.
This command is supported for SnapMirror relationships with the field "Relationship Capability" showing as either
"8.2 and above" or "Pre 8.2" in the output of the snapmirror show command.
The snapmirror modify command must be used from the destination Vserver or cluster.
Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats. The
normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships
with an AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For
SnapMirror relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/
name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
[-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints, if
this parameter is specified, the -source-volume parameter and, for relationships with "Relationship
Capability" of "Pre 8.2", the -source-cluster parameter must also be specified. This parameter is not
supported for relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, the -source-vserver
parameter and, for relationships with "Relationship Capability" of "Pre 8.2", the -source-cluster
parameter must also be specified. This parameter is not supported for relationships with non-Data ONTAP
source endpoints.
Note: You define and name a policy using the snapmirror policy create command.
Examples
To change the schedule to halfhour for the SnapMirror relationship with the destination endpoint
vs2.example.com:dept_eng_dp_mirror2, type the following command:
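The command is likely of this form (the prompt is illustrative; the path and schedule values are taken from the text above):

cluster2::> snapmirror modify -destination-path vs2.example.com:dept_eng_dp_mirror2 -schedule halfhour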
For relationships with "Relationship Capability" of "Pre 8.2", to change the schedule to halfhour for the
SnapMirror relationship with the destination endpoint cluster2://vs2.example.com/dept_eng_dp_mirror2, type
the following command:
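With the "Pre 8.2" cluster-qualified path format, the command is likely of this form (the prompt is illustrative):

cluster2::> snapmirror modify -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror2 -schedule halfhour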
To change the schedule to halfhour for the Vserver SnapMirror relationship with destination endpoint
dvs1.example.com:, type the following command:
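For the Vserver endpoint described above, the command is likely of this form (the prompt is illustrative):

cluster2::> snapmirror modify -destination-path dvs1.example.com: -schedule halfhour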
To change the policy associated with the synchronous SnapMirror Consistency Group relationship with the destination
Consistency Group cg_dst in Vserver vs2.example.com to the policy Sync2, type the following command:
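The command is likely of this form (the prompt is illustrative, and the /cg/ path form for Consistency Group endpoints is assumed):

cluster2::> snapmirror modify -destination-path vs2.example.com:/cg/cg_dst -policy Sync2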
snapmirror promote
Promote the destination to read-write
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The snapmirror promote command performs a failover to the destination volume of a SnapMirror relationship. This
command changes the destination volume from a read-only volume to a read-write volume and makes the destination volume
assume the identity of the source volume. The command then destroys the original source volume. The destination volume must
be a load-sharing volume. Note that you can promote a load-sharing volume that has been left in read-write state by a previously
failed promote operation.
Client accesses are redirected from the original source volume to the promoted destination volume. The view clients see on the
promoted destination volume is the latest transferred Snapshot copy, which might lag behind the view clients had of the original
source volume before the promote.
The SnapMirror relationship is always deleted as part of the promotion process.
It is possible that the original source volume is the source of multiple SnapMirror relationships. For such a configuration, the
promoted destination volume becomes the new source volume of the other SnapMirror relationships.
This command is only supported for SnapMirror relationships with the field "Relationship Capability" showing as "Pre
8.2" in the output of the snapmirror show command.
The snapmirror promote command fails if a SnapMirror transfer is in progress for any SnapMirror relationship with
"Relationship Capability" of "Pre 8.2" involving the original source volume. It does not fail if a SnapMirror transfer
is in progress for a relationship with "Relationship Capability" of "8.2 and above".
Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats. The
normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships
with an AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For
SnapMirror relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/
name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
Examples
To promote a mirror named dept_eng_ls_mirror1 to be the source read-write volume for mirroring and client access,
type the following command:
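Because snapmirror promote is supported only for "Pre 8.2" relationships, the destination path likely uses the cluster-qualified format (the prompt, cluster name, and Vserver name are illustrative):

cluster1::> snapmirror promote -destination-path cluster1://vs1.example.com/dept_eng_ls_mirror1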
snapmirror protect
Start protection for volumes
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The snapmirror protect command establishes SnapMirror protection for a list of volumes. For each volume, the command
creates a data protection destination volume in the Vserver specified by the -destination-vserver parameter, creates an
extended data protection (XDP) SnapMirror relationship, and starts the initialization of the SnapMirror relationship. This
command must be used from the destination Vserver or cluster. This command is not supported for FlexGroup volume
constituents, Vserver endpoints, or non-Data ONTAP endpoints.
Parameters
-path-list {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>}, ... - Path List
This parameter specifies the list of volumes to be protected. The list is a comma-separated list of paths of the
form vserver:volume, for example vs1.example.com:dept_eng1, vs1.example.com:dept_eng2.
[-destination-vserver <vserver name>] - Destination Vserver
This parameter specifies the Vserver in which to create the destination volumes of the SnapMirror
relationships.
[-schedule <text>] - SnapMirror Schedule
This optional parameter designates the name of the schedule which is used to update the SnapMirror
relationships.
-policy <sm_policy> - SnapMirror Policy
This parameter designates the name of the SnapMirror policy which is associated with the SnapMirror
relationships.
[-auto-initialize {true|false}] - Auto Initialize
This optional parameter specifies whether or not initialization of the SnapMirror relationships should be started
after the relationships are created. The default value for this parameter is true.
[-destination-volume-prefix <text>] - Destination Volume Name Prefix
This optional parameter designates the prefix for the destination volume name. For example, if the source path
is of the form vserver:volume and the destination-volume-prefix specified is prefix_ and no destination-
volume-suffix is specified, then the destination volume name will be prefix_volume_dst, or possibly
prefix_volume_1_dst if a name conflict is encountered. If both prefix and suffix are specified as prefix_
and _suffix, then the destination volume name will be prefix_volume_suffix, or
prefix_volume_1_suffix if a name conflict is encountered.
[-destination-volume-suffix <text>] - Destination Volume Name Suffix
This optional parameter designates the suffix for the destination volume name. If you do not designate a suffix,
a destination volume name with the suffix _dst will be used. For example, if the source path is of the form
vserver:volume and the suffix specified is _DP, the destination volume will be created with the name
volume_DP, or volume_1_DP if a name conflict is encountered. If both prefix and suffix are specified as
prefix_ and _suffix, then the destination volume name will be prefix_volume_suffix, or
prefix_volume_1_suffix if a name conflict is encountered.
• snapshot-only - This policy allows tiering of only the FlexGroup Snapshot copies not associated with the
active file system. The default minimum cooling period is 2 days. The -tiering-minimum-cooling-
days parameter can be used to override the default using the volume modify command after the
destination FlexGroup has been created.
• auto - This policy allows tiering of both snapshot and active file system user data to the capacity tier. The
default cooling period is 31 days. The -tiering-minimum-cooling-days parameter can be used to
override the default using the volume modify command after the destination FlexGroup has been
created.
• backup - On DP FlexGroup this policy allows all transferred user data blocks to start in the capacity tier.
Examples
To establish SnapMirror protection for the source volumes vs1.example.com:dept_eng1 and
vs1.example.com:dept_eng2 using destination-vserver vs2.example.com and policy MirrorAllSnapshots type
the following command:
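The command is likely of this form (the prompt is illustrative; the paths, Vserver, and policy are taken from the text above):

cluster2::> snapmirror protect -path-list vs1.example.com:dept_eng1,vs1.example.com:dept_eng2 -destination-vserver vs2.example.com -policy MirrorAllSnapshots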
snapmirror quiesce
Disable future transfers
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The snapmirror quiesce command disables future transfers for a SnapMirror relationship. If there is no transfer in progress,
the relationship becomes "Quiesced".
If there is a transfer in progress, it is not affected, and the relationship becomes "Quiescing" until the transfer completes. If
the current transfer aborts, it will be treated like a future transfer and will not restart.
If applied to a load-sharing (LS) SnapMirror relationship, all the relationships in the load-sharing set will be quiesced.
The snapmirror quiesce command must be used from the destination Vserver or cluster.
The relationship must exist on the destination Vserver or cluster. When issuing snapmirror quiesce, you must specify the
destination endpoint. The specification of the source endpoint of the relationship is optional.
Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats. The
normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships
with an AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For
SnapMirror relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/
name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
[-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints, if
this parameter is specified, the -source-volume parameter and, for relationships with "Relationship
Capability" of "Pre 8.2", the -source-cluster parameter must also be specified. This parameter is not
supported for relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, the -source-vserver
parameter and, for relationships with "Relationship Capability" of "Pre 8.2", the -source-cluster
parameter must also be specified. This parameter is not supported for relationships with non-Data ONTAP
source endpoints.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>} - Destination Path
This parameter specifies the destination endpoint of the SnapMirror relationship in one of four path formats.
The normal format includes the names of the Vserver (vserver) and/or volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships
with AltaVault destinations, the destination endpoint is specified in the form hostip:/share/share-name.
For relationships with SolidFire destinations, the destination endpoint is specified in the form hostip:/lun/
name.
Examples
To quiesce the SnapMirror relationship with the destination endpoint vs2.example.com:dept_eng_mirror2, type the
following command:
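The command is likely of this form (the prompt is illustrative; the destination path is taken from the text above):

cluster2::> snapmirror quiesce -destination-path vs2.example.com:dept_eng_mirror2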
For relationships with "Relationship Capability" of "Pre 8.2", to quiesce the SnapMirror relationship with the
destination endpoint cluster2://vs2.example.com/dept_eng_mirror2, type the following command:
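With the "Pre 8.2" path format, the command is likely of this form (the prompt is illustrative):

cluster2::> snapmirror quiesce -destination-path cluster2://vs2.example.com/dept_eng_mirror2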
To quiesce the Vserver SnapMirror relationship with the destination endpoint dvs1.example.com:, type the following
command:
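For the Vserver endpoint described above, the command is likely of this form (the prompt is illustrative):

cluster2::> snapmirror quiesce -destination-path dvs1.example.com: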
Under PVR control, to quiesce the synchronous SnapMirror Consistency Group relationship with the destination
Consistency Group cg_dst in Vserver vs2.example.com, type the following command:
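The command is likely of this form (the prompt is illustrative, and the /cg/ path form for Consistency Group endpoints is assumed):

cluster2::> snapmirror quiesce -destination-path vs2.example.com:/cg/cg_dst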
Related references
snapmirror show on page 683
snapmirror resume on page 674
snapmirror release
Remove source information for a SnapMirror relationship
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
This command is not supported for SnapMirror relationships with non-Data ONTAP endpoints.
The snapmirror release operation fails if a SnapMirror transfer for the SnapMirror relationship is in a data phase of the
transfer.
Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>}] - Source Path
Specifies the source endpoint of the SnapMirror relationship in one of two formats. The normal format
includes the names of the Vserver (vserver), and/or volume (volume). A format which also includes the name
of the cluster (cluster) is also provided for consistency with other snapmirror commands. The form of the
pathname which includes the cluster name cannot be used when operating in a Vserver context.
| [-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints, if
this parameter is specified, parameter -source-volume must also be specified.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, parameter -
source-vserver must also be specified.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>} - Destination Path
Specifies the destination endpoint of the SnapMirror relationship in one of two formats. The normal format
includes the names of the Vserver (vserver), and/or volume (volume). A format which also includes the name
of the cluster (cluster) is also provided for consistency with other snapmirror commands. The form of the
pathname which includes the cluster name cannot be used when operating in a Vserver context.
| -destination-vserver <vserver name> - Destination Vserver
Specifies the destination Vserver of the SnapMirror relationship. For relationships with volumes as endpoints,
if this parameter is specified, parameter -destination-volume must also be specified.
[-destination-volume <volume name>]} - Destination Volume
Specifies the destination volume of the SnapMirror relationship. If this parameter is specified, parameter -
destination-vserver must also be specified.
[-relationship-info-only [true]] - Remove relationship info only (skip cleanup of snapshots)
If this parameter is specified, the cleanup of Snapshot copies is bypassed and only the source relationship
information is removed. It is recommended to specify this parameter only when the source volume is not
accessible.
[-relationship-id <UUID>] - Relationship ID
This optional parameter specifies the relationship identifier of the relationship. It must be specified when
information for more than one relationship with the same source and destination paths is present. This
parameter is not supported for Vserver SnapMirror relationships.
Examples
To release the source information for the SnapMirror relationship with the destination endpoint
vs2.example.com:dept_eng_dp_mirror4, type the following command:
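The command is likely of this form (the prompt is illustrative; the destination path is taken from the text above):

cluster1::> snapmirror release -destination-path vs2.example.com:dept_eng_dp_mirror4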
To release the source information for the SnapMirror relationship with the destination endpoint
vs2.example.com:dept_eng_dp_mirror4, and relationship-id 5f91a075-6a72-11e1-b562-123478563412, type
the following command:
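Adding the relationship identifier given above, the command is likely of this form (the prompt is illustrative):

cluster1::> snapmirror release -destination-path vs2.example.com:dept_eng_dp_mirror4 -relationship-id 5f91a075-6a72-11e1-b562-123478563412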
To release the source information for the SnapMirror relationship with the destination endpoint dvs1.example.com:,
type the following command:
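For the Vserver endpoint described above, the command is likely of this form (the prompt is illustrative):

cluster1::> snapmirror release -destination-path dvs1.example.com: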
Under PVR control, to release the source information for the synchronous SnapMirror Consistency Group relationship
with the destination Consistency Group cg_dst in Vserver vs2.example.com, type the following command:
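The command is likely of this form (the prompt is illustrative, and the /cg/ path form for Consistency Group endpoints is assumed):

cluster1::> snapmirror release -destination-path vs2.example.com:/cg/cg_dst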
Under PVR control, to release just the source information, without removing the Snapshot copies that might be needed for a
subsequent resync, for the synchronous SnapMirror Consistency Group relationship with the destination Consistency
Group cg_dst in Vserver vs2.example.com, type the following command:
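Using the -relationship-info-only parameter described above, the command is likely of this form (the prompt is illustrative, and the /cg/ path form for Consistency Group endpoints is assumed):

cluster1::> snapmirror release -destination-path vs2.example.com:/cg/cg_dst -relationship-info-only true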
Related references
snapmirror list-destinations on page 653
snapmirror create on page 637
snapmirror show on page 683
snapmirror restore
Restore a Snapshot copy from a source volume to a destination volume
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
• the destination volume of a data protection (DP) relationship with "Relationship Capability" of "8.2 and above"
• a data-protection volume which is not the destination endpoint of any SnapMirror relationship
• a read-write volume.
• an AltaVault endpoint. In this case the destination must be an empty Data ONTAP volume.
The following cannot be used as either the source or destination volume of a restore:
• a volume that is the source or destination endpoint of a SnapMirror load-sharing relationship.
• a volume that is the destination endpoint of a SnapMirror relationship with the "Relationship Capability" of "Pre
8.2".
• a SolidFire endpoint.
A SnapMirror relationship of type RST is created from the source volume to the destination volume by the snapmirror
restore command. This relationship lasts for the duration of the restore operation and is deleted when the command completes
successfully.
The following paragraphs describe the behavior when restoring the entire contents of a Snapshot copy to a destination volume.
By default, the snapmirror restore command copies the latest Snapshot copy from the source volume to the destination volume. A
specific Snapshot copy can be selected with the -source-snapshot parameter.
Any quota rules defined for the destination volume are deactivated prior to restoring the entire contents of a Snapshot copy. Run
the command volume quota modify -vserver destination-volume-vserver -volume destination-volume-name
-state on to reactivate quota rules after the entire contents of the Snapshot copy have been restored.
If the destination volume is an empty data protection volume, the snapmirror restore command performs a baseline
restore. For a baseline restore the following steps are performed:
• The entire contents of the Snapshot copy selected to be restored are copied to the active file system of the destination
volume.
If the destination volume is a read-write volume, an incremental restore is performed. The incremental restore fails if it cannot
find a common Snapshot copy between the source and destination volumes. Restoring a Snapshot copy to an empty read-write
volume is not supported. Incremental restore from a non-Data ONTAP endpoint to a Data ONTAP volume is not supported.
An incremental restore preserves all Snapshot copies on the destination volume but does not preserve changes to the active file
system since the latest Snapshot copy. To preserve changes to the destination volume since the latest Snapshot copy use the
volume snapshot create command. Restore is a disruptive operation so client access of the destination volume is not
advised for the duration of the operation.
For an incremental restore the following steps are performed:
• This Snapshot copy is the exported Snapshot copy and it is the view to which clients are redirected when accessing the
destination volume.
• The contents of the Snapshot copy selected to be restored are copied to the active file system of the destination volume.
If snapmirror restore fails or is aborted, the RST relationship remains. Use the snapmirror show command with the
destination volume name to display the reason for the error. An EMS is also generated when a failure occurs. There are two
options to recover when restore fails or is aborted:
• Take corrective action suggested by the EMS and reissue the original command.
• Use snapmirror restore -clean-up-failure along with specifying the destination volume to cancel the request.
When specifying -clean-up-failure to cancel an incremental restore request, the following steps are performed:
• If the Snapshot copy has not been restored to the destination volume, all data copied to the active file system by
snapmirror restore to the destination volume is reverted.
When specifying -clean-up-failure to cancel a baseline restore request, the following steps are performed:
• If the Snapshot copy has been restored to the destination volume, the volume is made read-write.
The following paragraphs describe the behavior and requirements when restoring one or more files, LUNs or NVMe
namespaces to the destination volume.
The destination volume must be a read-write volume. Restoring files, LUNs or NVMe namespaces to a data protection volume
is not supported. When restoring files, LUNs or NVMe namespaces the source and destination volumes are not required to have
a common Snapshot copy. If a common Snapshot copy exists, an incremental restore is performed for those files, LUNs or
NVMe namespaces being restored which exist in the common Snapshot copy.
The contents of the files, LUNs or NVMe namespaces to which data is being restored on the destination volume are not
preserved by this command. To preserve the contents of the destination files, LUNs or NVMe namespaces, create a Snapshot
copy on the destination volume prior to running this command. Client I/O is not allowed to a file, LUN or NVMe namespace to
which data is being restored on the destination volume.
The -source-snapshot parameter is required when restoring files, LUNs or NVMe namespaces. It identifies the Snapshot
copy on the source volume from which the files, LUNs or NVMe namespaces to be restored are copied. If all files, LUNs or
NVMe namespaces to be restored do not exist in this Snapshot copy the command fails.
The source path for each file, LUN or NVMe namespace being restored is required. The source path of a file, LUN or NVMe
namespace is from the root of the source Snapshot copy of the source volume. The file is restored to the same path on the
destination volume unless an optional destination path is specified. The destination path is from the root of the destination
volume. If a file, LUN or NVMe namespace to which data is being restored on the destination volume does not exist, the file,
LUN or NVMe namespace is created. If any directory in the path of the file, LUN or NVMe namespace being restored does not
exist on the destination volume, the command fails. Overwriting the contents of an existing file with the contents of a different
file is supported. Similarly, overwriting the contents of an existing LUN or NVMe namespace with the contents of a different
LUN or NVMe namespace is supported. However, overwriting a file with the contents of a LUN or NVMe namespace is not
supported. Overwriting a LUN with the contents of a file or NVMe namespace is not supported. Overwriting an NVMe
namespace with the contents of a file or LUN is not supported.
• If any file, LUN or NVMe namespace being restored does not exist on the destination volume, create all such files, LUNs or
NVMe namespaces.
• Prevent client I/O to files, LUNs or NVMe namespaces to which data is being restored on the destination volume.
• Revoke locks and space reservations held by NAS clients for files being restored.
• Copy the contents of all source files, LUNs or NVMe namespaces to the corresponding file, LUN or NVMe namespace on
the destination volume.
• Allow client I/O to files, LUNs or NVMe namespaces to which data has been restored on the destination volume.
Note: Some file restore operations require a Snapshot copy to be created. This Snapshot copy is temporary; it is deleted
before the operation completes.
Since client I/O is not allowed to files, LUNs or NVMe namespaces being restored, client I/O to files, LUNs or NVMe
namespaces being restored should be quiesced. Mapped LUNs or NVMe namespaces remain mapped throughout the operation.
SAN clients do not need to rediscover a mapped LUN that has been restored. Restoring an NVMe namespace on top of another
NVMe namespace with a different attribute relevant to NVMe protocol accessibility (like size) is not supported.
If snapmirror restore fails or is aborted, the RST relationship remains. Use the snapmirror show command with the
destination volume to display the reason for the error. An EMS is also generated when a failure occurs. There are two options to
recover when restore fails or is aborted:
• Take corrective action suggested by the EMS and reissue the original command.
• Use snapmirror restore -clean-up-failure along with specifying the destination volume to cancel the request.
When specifying -clean-up-failure to cancel a file restore request, the following steps are performed:
• Any Snapshot copy created for use during a file restore operation is deleted.
Note: LUNs to which client I/O is not allowed remain. For LUNs to which client I/O is not allowed, do one of the following:
• Use the snapmirror restore command to restore data to the LUN. Once the command completes successfully, client I/O
to the LUN is allowed.
• Delete the LUN using the lun delete command with the -force-fenced parameter.
Note: Similarly, NVMe namespaces to which client I/O is not allowed remain. For NVMe namespaces to which client I/O is
not allowed, do one of the following:
• Use the snapmirror restore command to restore data to the NVMe namespace. Once the command completes
successfully, client I/O to the NVMe namespace is allowed.
• Delete the NVMe namespace using the vserver nvme namespace delete command with the -skip-mapped-check
parameter.
The snapmirror restore command must be used from the destination Vserver or cluster.
Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>}] - Source Path
Specifies the source endpoint in one of three formats. The basic format includes the names of the Vserver
(vserver) and volume (volume). A format which also includes the name of the cluster (cluster) is supported for
consistency with other snapmirror commands. The form of the pathname which includes the cluster name is
not valid when operating in a Vserver context. A non-Data ONTAP source endpoint (for example, AltaVault)
can be specified in the form hostip:/share/share-name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the cluster in which the source volume resides. This parameter is not needed; it is provided for
consistency with other snapmirror commands. If this parameter is specified, the -source-vserver and -
source-volume parameters must also be specified. This parameter is not valid when operating in a Vserver
context. This parameter is not supported if the source is a non-Data ONTAP endpoint.
[-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. If this parameter is specified, the -source-
volume parameter must also be specified. This parameter is not supported if the source is a non-Data ONTAP
endpoint.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, the -source-
vserver parameter must also be specified. This parameter is not supported if the source is a non-Data
ONTAP endpoint.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>} - Destination Path
Specifies the destination endpoint in one of two formats. The basic format includes the names of the Vserver
(vserver) and volume (volume). A format that also includes the name of the cluster (cluster) is supported for
consistency with other snapmirror commands. The form of the pathname which includes the cluster name is
not valid when operating in a Vserver context.
| [-destination-cluster <Cluster name>] - Destination Cluster
Specifies the cluster in which the destination volume resides. This parameter is not needed; it is provided for
consistency with other snapmirror commands. If this parameter is specified, the -destination-vserver
and -destination-volume parameters must also be specified. This parameter is not valid when operating in
a Vserver context. This parameter is only applicable for relationships with "Relationship Capability"
of "Pre 8.2".
-destination-vserver <vserver name> - Destination Vserver
Specifies the destination Vserver. If this parameter is specified, the -destination-volume parameter must
also be specified.
-destination-volume <volume name>} - Destination Volume
Specifies the destination volume. If this parameter is specified, the -destination-vserver parameter must
also be specified.
A file list consists of source paths, each optionally followed by ",@" and a destination path, as shown in the following
examples:
/dira/file1
/dira/file1,@/dirb/file2
/dira/file1,@/dirb/file2,/dirc/file3
Examples
The following example does an incremental restore between the restore source volume
vs2.example.com:dept_eng_dp_mirror2 and the restore destination volume vs1.example.com:dept_eng:
The following example restores /file3 from the source Snapshot copy snap3 on the source volume
vs2.example.com:dept_eng_dp_mirror2 to the active file system of the restore destination volume
vs1.example.com:dept_eng:
The following example restores /file3 from the source Snapshot copy snap3 on the source volume
vs2.example.com:dept_eng_dp_mirror2 to /file3.new in the active file system of the restore destination volume
vs1.example.com:dept_eng:
The following example restores /file1, /file2, and /file3 from the source Snapshot copy snap3 on the source volume
vs2.example.com:dept_eng_dp_mirror2 respectively to /file1.new, /file2, and /file3.new in the active file system of
the restore destination volume vs1.example.com:dept_eng:
The following example deletes data from an incomplete file restore, captured in a Snapshot copy which was later
promoted to the active file system of a volume.
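The command lines for the five examples above are not shown. Plausible reconstructions, based on the parameter descriptions in this section, follow; the -source-snapshot and -file-list parameter names and the cluster1::> prompt are assumptions, and the exact syntax may differ:

```
cluster1::> snapmirror restore -source-path vs2.example.com:dept_eng_dp_mirror2
             -destination-path vs1.example.com:dept_eng

cluster1::> snapmirror restore -source-path vs2.example.com:dept_eng_dp_mirror2
             -destination-path vs1.example.com:dept_eng -source-snapshot snap3
             -file-list /file3

cluster1::> snapmirror restore -source-path vs2.example.com:dept_eng_dp_mirror2
             -destination-path vs1.example.com:dept_eng -source-snapshot snap3
             -file-list /file3,@/file3.new

cluster1::> snapmirror restore -source-path vs2.example.com:dept_eng_dp_mirror2
             -destination-path vs1.example.com:dept_eng -source-snapshot snap3
             -file-list /file1,@/file1.new,/file2,/file3,@/file3.new

cluster1::> snapmirror restore -destination-path vs1.example.com:dept_eng
             -clean-up-failure true
```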
Related references
snapmirror on page 632
volume snapshot restore on page 1652
volume clone on page 1507
snapmirror break on page 635
volume quota modify on page 1608
volume snapshot create on page 1647
snapmirror show on page 683
snapmirror update on page 710
lun delete on page 173
vserver nvme namespace delete on page 2038
volume show-space on page 1500
snapmirror resume
Enable future transfers
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The snapmirror resume command enables future transfers for a SnapMirror relationship that has been quiesced.
If there is a scheduled transfer for the relationship, it will be triggered on the next schedule. If there is a restart checkpoint, it
will be re-used if possible.
If applied on a load-sharing (LS) SnapMirror relationship, it enables future transfers for all the relationships in the load-sharing
set.
If applied on a relationship with a policy of type strict-sync-mirror or sync-mirror, it enables future resync operations
and initiates an Auto Resync.
When a quiesced SnapMirror relationship is resumed, future transfers remain enabled across reboots and fail-overs.
This command is supported for SnapMirror relationships with the field "Relationship Capability" showing as either
"8.2 and above" or "Pre 8.2" in the output of the snapmirror show command.
The snapmirror resume command must be used from the destination Vserver or cluster.
The relationship must exist on the destination Vserver or cluster. When issuing snapmirror resume, you must specify the
destination endpoint. The specification of the source endpoint of the relationship is optional.
Examples
To re-enable future transfers for the SnapMirror relationship with the destination endpoint
vs2.example.com:dept_eng_dp_mirror2 that has been previously quiesced, type the following command:
To re-enable future transfers for the SnapMirror relationship with the destination endpoint cluster2://
vs2.example.com/dept_eng_dp_mirror2 that has been previously quiesced, type the following command:
To re-enable future transfers for the Vserver SnapMirror relationship with the destination endpoint dvs1.example.com:
that has been previously quiesced, type the following command:
Under PVR control to re-enable future transfers and initiate an Auto Resync of the synchronous SnapMirror Consistency
Group relationship with the destination Consistency Group cg_dst in Vserver vs2.example.com, type the following
command:
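The commands for these examples might take the following forms, reconstructed from the destination endpoints named above; the Consistency Group path format vserver:/cg/name is an assumption:

```
cluster1::> snapmirror resume -destination-path vs2.example.com:dept_eng_dp_mirror2

cluster1::> snapmirror resume -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror2

cluster1::> snapmirror resume -destination-path dvs1.example.com:

cluster1::> snapmirror resume -destination-path vs2.example.com:/cg/cg_dst
```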
Related references
snapmirror show on page 683
snapmirror quiesce on page 663
snapmirror resync
Start a resynchronize operation
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The snapmirror resync command establishes or reestablishes a mirroring relationship between a source and a destination
endpoint. The endpoints can be Vservers, volumes or non-Data ONTAP endpoints that support SnapMirror. snapmirror
resync for a SnapMirror relationship with volumes as endpoints is typically executed in the following cases:
• The destination mirror is broken (that is, the destination volume is a read-write volume and no longer a data protection
mirror). After the snapmirror resync command completes, the destination volume is made a data protection mirror and
the mirror can be manually updated or scheduled for updates.
• The volumes are the first and third endpoints in a cascade chain of relationships and they have a common Snapshot copy. In
this case, snapmirror resync might implicitly create the SnapMirror relationship between them.
Attention: The snapmirror resync command can cause data loss on the destination volume because the command can
remove the exported Snapshot copy on the destination volume.
The default behavior of the snapmirror resync command for volume relationships is defined as follows:
• Finds the most recent common Snapshot copy between the source and destination volumes, removes Snapshot copies on the
destination volume that are newer than the common Snapshot copy and mounts the destination volume as a DP volume with
the common Snapshot copy as the exported Snapshot copy.
• For data protection (DP) relationships, takes a Snapshot copy of the source volume to capture the current image and transfers
Snapshot copies that are newer than the common Snapshot copy from the source volume to the destination volume. For
extended data protection (XDP) relationships, transfers Snapshot copies newer than the common Snapshot copy according to
the relationship policy, i.e., Snapshot copies will match rules associated with the policy as defined by the snapmirror
policy commands. For relationships associated with snapmirror policy of type async-mirror and mirror-vault
the snapmirror resync first takes a Snapshot copy of the source volume and includes it in the Snapshot copies selected
for transfer.
• For a SnapLock Compliance volume in an XDP relationship with SnapMirror policy of type async-mirror, if SnapMirror
resync operation detects data divergence between the common Snapshot copy and the AFS on the destination volume, the
resync operation preserves the data changes in a locked Snapshot copy for the duration of the current volume expiry time. If
the volume expiry time is in the past or has not been set, then the Snapshot copy is locked for a duration of 30 days. The
common Snapshot copy is also locked for the same duration.
For Vserver SnapMirror relationships, a resync operation is typically executed when the relationship is broken-off, the subtype
of the destination Vserver is default and the destination volumes are of type read-write. Once the command is queued, the
subtype of the destination Vserver changes from default to dp-destination. A successful resync operation also makes the
destination Vserver's volumes data protection volumes.
If the resync command is executed on a Vserver SnapMirror relationship, and the corresponding source and destination Vservers
have volumes with volume level SnapMirror relationships, then the volume level SnapMirror relationships will be converted to
volumes under the Vserver SnapMirror relationship. This conversion is supported only for source and destination Vservers
which have been transitioned from a 7-Mode vFiler into a C-Mode Vserver. Some basic prerequisites for the conversion are that
the destination Vserver must be in a stopped state and all destination Vserver volumes, except the root volume, must be
in a volume level SnapMirror relationship with volumes of the source Vserver. The state of these volume level SnapMirror
relationships should be Snapmirrored and the status should be Idle.
snapmirror resync for a relationship with a policy of type strict-sync-mirror or sync-mirror is typically executed in
the following case:
• The destination mirror is broken (that is, the destination volume is read-write and no longer read-only). After the
snapmirror resync command completes, the destination volume changes to read-only and the relationship to InSync.
Note: The snapmirror resync command is typically not required to return a relationship that has fallen out of sync due to
an error condition to InSync, because SnapMirror has Auto Resync for synchronous relationships. When SnapMirror detects
that the relationship has fallen out of sync for any reason other than a snapmirror quiesce, snapmirror break or
snapmirror delete command being executed on the relationship, it will automatically initiate a resync operation.
The default behavior of the snapmirror resync command for relationships with a policy of type strict-sync-mirror or
sync-mirror is defined as follows:
• Creates a Snapshot copy on the destination of the current image of the destination file system. This Snapshot copy becomes
the exported Snapshot copy for the volume during the resync operation.
• Finds the most recent common Snapshot copy between the source and destination volumes. Performs a local rollback
transfer to give the active file system the same data as the common Snapshot copy. It then loops through a sequence of
creating and transferring Snapshot copies until the destination has caught up with the source.
• At the conclusion of the resync operation, the exported Snapshot copy on the destination is removed and the client will then
see the active file system on the destination volume. The relationship will be InSync and periodic creation of common
Snapshot copies will resume.
The snapmirror resync command supports an optional parameter "preserve". The parameter "preserve" is only
supported for extended data protection (XDP) relationships. It is not supported for relationships with a non-Data ONTAP
endpoint. It is not supported for relationships with a policy of type strict-sync-mirror and sync-mirror. When used, the
parameter "preserve" changes the behavior of the snapmirror resync command. The changed behavior of the command
can be described as follows:
• Finds the most recent common Snapshot copy between the source and destination volumes, preserves all Snapshot copies on
the destination volume that are newer than the common Snapshot copy, and mounts the destination volume as a DP volume
with the common Snapshot copy as the exported Snapshot copy.
• Performs a local rollback transfer to make a copy of the common Snapshot copy on the destination volume and establish it as
the latest Snapshot copy on the destination volume. The command then transfers all Snapshot copies that are newer than the
common Snapshot copy, from the source volume to the destination volume. The command only transfers Snapshot copies
that match the relationship's policy, i.e., Snapshot copies will match rules associated with the policy as defined by the
snapmirror policy commands.
If a SnapMirror relationship does not already exist, that is, the relationship was not created using the snapmirror create
command, the snapmirror resync command will implicitly create the SnapMirror relationship, with the same behaviors as
described for the snapmirror create command before resyncing it.
For Vservers, you must create SnapMirror relationships between Vservers by using the snapmirror create command before
you run the snapmirror resync command. The snapmirror resync command does not implicitly create the relationship.
This command is supported for SnapMirror relationships with the field "Relationship Capability" showing as either
"8.2 and above" or "Pre 8.2" in the output of the snapmirror show command.
For relationships with "Relationship Capability" of "8.2 and above", you can track the progress of the operation
using the snapmirror show command.
For relationships with "Relationship Capability" of "Pre 8.2", a job will be spawned to operate on the SnapMirror
relationship, and the job id will be shown in the command output. The progress of the job can be tracked using the job show
and job history show commands.
The snapmirror resync command fails if the destination volume does not have a Snapshot copy in common with the source
volume.
The snapmirror resync command does not work on load-sharing mirrors.
The snapmirror resync command must be used from the destination Vserver or cluster.
Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats. The
normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships
with an AltaVault source, the source endpoint is specified in the form hostip:/share/share-name.
Examples
To reestablish mirroring for the destination endpoint vs2.example.com:dept_mkt_mirror that has been previously broken
off with the snapmirror break command, type the following command:
For relationships with "Relationship Capability" of "Pre 8.2", to reestablish mirroring for the destination
endpoint cluster2://vs2.example.com/dept_mkt_mirror that has been previously broken off with the
snapmirror break command, type the following command:
To create a SnapMirror relationship and reestablish mirroring between the destination endpoint named
vs2.example.com:dept_eng_dp_mirror2 and the source endpoint named vs1.example.com:dept_eng, type the
following command:
To create a SnapMirror relationship and reestablish mirroring between the destination endpoint named cluster2://
vs2.example.com/dept_eng_dp_mirror2 and the source endpoint named cluster1://vs1.example.com/
dept_eng when the source cluster is running Data ONTAP 8.1 software, type the following command:
To create and reestablish an extended data protection (XDP) relationship between the Data ONTAP source endpoint
vs1.example.com:data_ontap_vol, and the non-Data ONTAP (for example, AltaVault) destination endpoint
10.0.0.11:/share/share1, and start the initial transfer, type the following command:
To reestablish mirroring for the destination endpoint dvs1.example.com: of a Vserver relationship that has been
previously broken off with the snapmirror break command, type the following command:
Under PVR control to create a SnapMirror synchronous Consistency Group relationship with the following attributes:
• It is between the source Consistency Group cg_src in Vserver vs1.example.com, and the destination Consistency
Group cg_dst in Vserver vs2.example.com.
• It uses a policy named Sync that has a policy type of smgr-mirror that the user has previously created.
Under PVR control to reestablish mirroring to the destination Consistency Group cg_dst in Vserver
vs2.example.com that has been previously broken off with the snapmirror break command, type the following
command:
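The commands for the resync examples above might look as follows, reconstructed from the endpoints named in each example; the Consistency Group path format and the omission of additional parameters (such as a relationship type or policy for the implicitly created relationships) are assumptions:

```
cluster1::> snapmirror resync -destination-path vs2.example.com:dept_mkt_mirror

cluster1::> snapmirror resync -destination-path cluster2://vs2.example.com/dept_mkt_mirror

cluster1::> snapmirror resync -source-path vs1.example.com:dept_eng
             -destination-path vs2.example.com:dept_eng_dp_mirror2

cluster1::> snapmirror resync -source-path cluster1://vs1.example.com/dept_eng
             -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror2

cluster1::> snapmirror resync -source-path vs1.example.com:data_ontap_vol
             -destination-path 10.0.0.11:/share/share1

cluster1::> snapmirror resync -destination-path dvs1.example.com:

cluster1::> snapmirror resync -source-path vs1.example.com:/cg/cg_src
             -destination-path vs2.example.com:/cg/cg_dst -policy Sync

cluster1::> snapmirror resync -destination-path vs2.example.com:/cg/cg_dst
```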
Related references
snapmirror create on page 637
snapmirror policy create on page 725
snapmirror modify on page 657
snapmirror update on page 710
snapmirror policy on page 723
snapmirror quiesce on page 663
snapmirror break on page 635
snapmirror delete on page 644
snapmirror show on page 683
job show on page 142
job history show on page 150
snapmirror set-options
Display/Set SnapMirror options
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The snapmirror set-options command can be used to display or set SnapMirror options.
Parameters
[-dp-source-xfer-reserve-pct {0|25%|50%|75%|100%}] - Percentage Reserved for DP Source Transfers
Specifies the percentage of maximum allowed concurrent transfers reserved for source DP transfers
[-xdp-source-xfer-reserve-pct {0|25%|50%|75%|100%}] - Percentage Reserved for XDP Source Transfers
Specifies the percentage of maximum allowed concurrent transfers reserved for source XDP transfers
[-dp-destination-xfer-reserve-pct {0|25%|50%|75%|100%}] - Percentage Reserved for DP Destination
Transfers
Specifies the percentage of maximum allowed concurrent transfers reserved for destination DP transfers
Examples
The following example displays SnapMirror options:
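The display invocation is the bare command, and an option can be set by naming it with a value; for example (prompt assumed, display output omitted):

```
cluster1::> snapmirror set-options

cluster1::> snapmirror set-options -dp-source-xfer-reserve-pct 25%
```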
snapmirror show
Display a list of SnapMirror relationships
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The snapmirror show command displays information associated with SnapMirror relationships. By default, the command
displays the following information:
• Source path
• Relationship Type
• Destination Path
• Mirror State
• Relationship Status
• Total Progress
• Healthy
For backward compatibility with clustered Data ONTAP 8.1, and to accommodate load-sharing relationships which are only
supported in a Data ONTAP 8.1 compatible way, SnapMirror relationships, which match one of the following conditions are
managed as on clustered Data ONTAP 8.1: (1) The relationship is of type load-sharing; (2) The source endpoint of the
relationship is on a remote Data ONTAP 8.1 cluster; (3) The local cluster was upgraded from clustered Data ONTAP 8.1, the
relationship was created before the upgrade, and the relationship has not yet been converted to one with Data ONTAP 8.2
capabilities. These relationships have the same limitations as on clustered Data ONTAP 8.1. Especially, they support the same
set of information fields. The "Relationship Capability" field is set to "Pre 8.2" for these relationships.
The snapmirror show command displays information for SnapMirror relationships whose destination endpoints are in the
current Vserver if you are in a Vserver context, or in the current cluster if you are in a cluster context, or on a non-Data ONTAP
endpoint that supports SnapMirror (for example, AltaVault). For backward compatibility with clustered Data ONTAP 8.1, the
command also displays information for SnapMirror relationships with the "Relationship Capability" of "Pre 8.2"
whose source endpoints are in the current Vserver or cluster and whose destination endpoints are in different Vservers or clusters.
The -instance and -fields parameters are mutually exclusive and select the information fields that are displayed. The other
parameters to the snapmirror show command select the SnapMirror relationships for which information is displayed.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
• To match all the SnapMirror relationships except Vserver SnapMirror relationships in the cluster, use: -
destination-path *
The -type parameter selects relationships with a matching relationship type. Possible values include:
• DP
• LS
• XDP
• TDP
• RST
The -relationship-group-type parameter selects relationships with a matching group type. Possible values include:
• none
• vserver
The -policy-type parameter selects relationships with a matching policy type. Possible values include:
• async-mirror
• vault
• mirror-vault
The -state parameter selects relationships with a matching mirror state. Possible values include:
• Uninitialized
• Snapmirrored
• Broken-off
The -status parameter selects relationships with a matching relationship status. Possible values include:
• Idle
• Queued
• Transferring
• Preparing
• Finalizing
• Aborting
• Quiesced
• Quiescing
• Checking
Status values Finalizing, Checking, Waiting and Preparing are not supported for Vserver SnapMirror
relationships.
The -healthy parameter selects relationships with a matching health value. Possible values are:
• true
• false
Examples
The snapmirror show command displays information for SnapMirror relationships whose destination endpoints are in
the current Vserver if you are in a Vserver context, or in the current cluster if you are in a cluster context, or on a non-
Data ONTAP endpoint that supports SnapMirror (for example, AltaVault). For backward compatibility with clustered
Data ONTAP 8.1, the command also displays information for SnapMirror relationships with the "Relationship
Capability" of "Pre 8.2", and whose source endpoints are in the current Vserver or cluster, and destination
endpoints are in different Vservers or clusters. You must use the snapmirror list-destinations command to
display information for SnapMirror relationships whose source endpoints are in the current Vserver or current cluster.
Some of the SnapMirror relationship information is cached. The snapmirror show command only returns the cached
information, therefore there is a delay after the information is changed before it is reflected in the snapmirror show
output. Other information, such as progress metrics during a transfer, is only updated periodically and can be very delayed
in the snapmirror show output.
The -instance and -fields parameters are mutually exclusive and select the information fields that are displayed. The
other parameters to the snapmirror show command select the SnapMirror relationships for which information is
displayed. The -instance displays detailed information fields including:
The following fields are available only under PVR control for relationships with policy type sync-mirror.
The following fields are available only at the diagnostic privilege level:
Last Transfer Error Codes: Set of Data ONTAP internal error codes providing information on the context of the previous
transfer failure. Only for relationships with "Relationship Capability" of "8.2 and above". This parameter is
not supported for Vserver SnapMirror relationships.
Source Vserver UUID: The unique identifier of the source Vserver. Only for relationships with "Relationship
Capability" of "8.2 and above".
Destination Vserver UUID: The unique identifier of the destination Vserver. Only for relationships with "Relationship
Capability" of "8.2 and above".
Source Endpoint UUID: The unique identifier of the non-Data ONTAP source endpoint of a relationship. Only for
relationships with a non-Data ONTAP source endpoint.
Destination Endpoint UUID: The unique identifier of the non-Data ONTAP destination endpoint of a relationship. Only
for relationships with a non-Data ONTAP destination endpoint.
The example below displays summary information for all SnapMirror relationships with destination endpoints in the
current cluster:
The example below displays detailed information for the SnapMirror relationship with the destination endpoint
cluster2-vs2.example.com:dp_dst1.
The example below displays detailed information for SnapMirror relationships with the Relationship Capability of
"Pre 8.2" source or destination endpoints in the current cluster.
The example below displays detailed information for the Vserver SnapMirror relationship with the destination endpoint
cluster2-dvs2.example2.com:.
The following example displays detailed information for the SnapMirror relationship with the AltaVault destination
endpoint 10.0.0.11:/share/share1:
The example below shows the usage of the -expand parameter to additionally display the constituents of Vserver SnapMirror
relationships with destination endpoints in the current cluster. Note that in the following example, since there is no
volume level relationship for the root volume of a Vserver, it is not shown in the output:
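The commands for the snapmirror show examples above might take the following forms; the use of -instance for the detailed displays and the -relationship-capability filter name are assumptions:

```
cluster1::> snapmirror show

cluster1::> snapmirror show -destination-path cluster2-vs2.example.com:dp_dst1 -instance

cluster1::> snapmirror show -relationship-capability "Pre 8.2" -instance

cluster1::> snapmirror show -destination-path cluster2-dvs2.example2.com: -instance

cluster1::> snapmirror show -destination-path 10.0.0.11:/share/share1 -instance

cluster1::> snapmirror show -expand
```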
Related references
snapmirror list-destinations on page 653
snapmirror policy create on page 725
snapmirror show-history
Displays history of SnapMirror operations.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The snapmirror show-history command displays the history of SnapMirror operations. This command is not supported for
relationships with non-Data ONTAP endpoints.
By default, the command displays the following information:
• Destination Path
• Source Path
• Operation
• Start Time
• End Time
• Result
The snapmirror show-history command displays, in reverse chronological order, the history of completed SnapMirror
operations whose destination endpoints are in the current Vserver for Vserver administrators, or the current cluster for cluster
administrators. This command does not return information on the following operations and relationships:
• Operations that happened prior to installing Data ONTAP 8.3.
• Relationships with the "Relationship Capability" field, as shown in the output of the snapmirror show command, set
to "Pre 8.2".
• Operations on FlexGroup relationships that happened prior to installing Data ONTAP 9.5.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
{ [-destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/
name>|<hostip:/share/share-name>}] - Destination Path
Select SnapMirror operations that have a matching destination path name.
| [-destination-vserver <vserver name>] - Destination Vserver
Select SnapMirror operations that have a matching destination Vserver name.
[-destination-volume <volume name>]} - Destination Volume
Select SnapMirror operations that have a matching destination volume name.
[-operation-id <UUID>] - Operation ID
Select SnapMirror operations that have a matching operation ID.
The -operation parameter selects SnapMirror operations of a matching type. Possible values include:
• create
• modify
• quiesce
• resume
• delete
• initialize
• manual-update
• scheduled-update
• break
• resync
• abort
• restore
The -relationship-group-type parameter selects operations on relationships with a matching group type. Possible
values include:
• none
• vserver
• flexgroup
The -result parameter selects SnapMirror operations with a matching result. Possible values are:
• success
• failure
Examples
The example below displays summary information for all SnapMirror operations on relationships with destination
endpoints in the current cluster:
The example below displays detailed information for the SnapMirror operation with operation ID
dc158715-0583-11e3-89bd-123478563412
The example below displays detailed information for all SnapMirror operations on relationships with the Result of
"success" and whose destination endpoints are in the current cluster.
The example below displays summary information, limited to one row per relationship by specifying -max-rows-per-
relationship 1, for all SnapMirror operations on relationships whose destination endpoints are in the current cluster.
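The commands for these show-history examples might look like the following; the -result parameter name is inferred from the success/failure values listed above and is an assumption:

```
cluster1::> snapmirror show-history

cluster1::> snapmirror show-history -operation-id dc158715-0583-11e3-89bd-123478563412 -instance

cluster1::> snapmirror show-history -result success -instance

cluster1::> snapmirror show-history -max-rows-per-relationship 1
```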
snapmirror update
Start an incremental transfer
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The snapmirror update command updates the destination volume or non-Data ONTAP endpoint of a SnapMirror
relationship. The snapmirror update command behaves differently for data protection (DP), extended data protection (XDP),
and load-sharing (LS) relationships, as described below.
For data protection SnapMirror relationships with volumes as endpoints, the snapmirror update command makes the
destination volume an up-to-date mirror of the source volume with the following steps:
• If the source volume is read-write, takes a Snapshot copy on the source volume to capture the current image of the source
volume
• Finds the most recent Snapshot copy on the destination volume and validates that the corresponding Snapshot copy is still
present on the source
• Incrementally transfers Snapshot copies that are newer than the corresponding Snapshot copy to the destination volume
You can use the snapmirror update command to update a specific load-sharing mirror that lags behind up-to-date
destination volumes in the set of load-sharing mirrors. An update to the lagging load-sharing mirror should bring it up to date
with the other up-to-date destination volumes in the set of load-sharing mirrors.
Note: Using the snapmirror update command to update a set of load-sharing mirrors will not work. Use the snapmirror
update-ls-set command to update a set of load-sharing mirrors.
For extended data protection (XDP) relationships with a snapmirror policy of type async-mirror, a snapmirror
update always creates a new Snapshot copy on the source volume. Depending on the rules in the policy, the command might
transfer just the newly created Snapshot copy or all Snapshot copies that are newer than the common Snapshot copy including
the newly created Snapshot copy to the destination volume.
For extended data protection (XDP) relationships with a snapmirror policy of type vault, a snapmirror update does
not create a new Snapshot copy on the source volume but transfers only selected Snapshot copies that are newer than the
common Snapshot copy to the destination volume. (Those older than the common copy can be transferred by using the -
source-snapshot parameter.) Snapshot copies are selected by matching the value of -snapmirror-label of a Snapshot
copy with the value of -snapmirror-label of one of the rules from the corresponding SnapMirror policy associated with the
SnapMirror relationship. All matching Snapshot copies are incrementally transferred to the destination volume.
For extended data protection (XDP) relationships with a snapmirror policy of type mirror-vault, a snapmirror
update always creates a new Snapshot copy on the source volume and transfers only selected Snapshot copies that are newer
than the common Snapshot copy. The newly created Snapshot copy is always selected.
For extended data protection (XDP) relationships with a snapmirror policy of type vault or mirror-vault, the
snapmirror update command also manages expiration of Snapshot copies on the destination volume. It does so by deleting
Snapshot copies that have exceeded the value of -keep for the matching rule from the corresponding SnapMirror policy
associated with the SnapMirror relationship. Snapshot copies that match the same -snapmirror-label will be deleted in
oldest-first order.
For relationships with a policy of type strict-sync-mirror or sync-mirror, this command creates a new common
Snapshot copy and designates it as the exported Snapshot copy on the destination volume. It updates the destination read-only
view because IO is redirected to the new exported Snapshot copy. Clients could experience a brief latency spike during this
process as the primary IO is temporarily fenced. This command is allowed only when the relationship-status is InSync. The
command retains two pairs of common Snapshot copies and deletes the older ones.
For data protection relationships, the parameter -source-snapshot is optional and only allows for the transfer of Snapshot
copies newer than the common Snapshot copy up to the specified -source-snapshot.
For extended data protection (XDP) relationships the parameter -source-snapshot is optional.
For extended data protection (XDP) relationships with a snapmirror policy of type vault or mirror-vault, the
parameter -source-snapshot allows transfer of a Snapshot copy that is older than the common Snapshot copy and/or might
not be selected for transfer based on policy-based selection of a scheduled update transfer.
If the snapmirror update does not finish successfully--for example, due to a network failure or because a snapmirror
abort command was issued--a restart checkpoint might be recorded on the destination volume. If a restart checkpoint is
recorded, the next update restarts and continues the transfer from the restart checkpoint. For extended data protection (XDP)
relationships, the next update will restart and continue the old transfer regardless of whether the Snapshot copy being transferred
is a matching Snapshot copy or not.
This command is supported for SnapMirror relationships with the field "Relationship Capability" showing as either
"8.2 and above" or "Pre 8.2" in the output of the snapmirror show command.
For relationships with "Relationship Capability" of "8.2 and above", you can track the progress of the operation
using the snapmirror show command.
For relationships with "Relationship Capability" of "Pre 8.2", a job will be spawned to operate on the SnapMirror
relationship, and the job id will be shown in the command output. The progress of the job can be tracked using the job show
and job history show commands.
For Vserver SnapMirror relationships, the snapmirror update command makes the destination Vserver an up-to-date mirror
of the source Vserver.
The snapmirror update command must be used from the destination Vserver or cluster.
Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats. The
normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships
with an AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For
SnapMirror relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/
name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
[-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints, if
this parameter is specified, parameters -source-volume and, for relationships with "Relationship
Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not supported
for relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, parameters
-source-vserver and, for relationships with "Relationship Capability" of "Pre 8.2",
-source-cluster must also be specified. This parameter is not supported for relationships with non-Data
ONTAP source endpoints.
Examples
To update the mirror relationship between the destination endpoint vs2.example.com:dept_eng_dp_mirror3 and its
source endpoint, type the following command:
For relationships with "Relationship Capability" of "Pre 8.2", to update the mirror relationship between the
destination endpoint cluster2://vs2.example.com/dept_eng_dp_mirror3 and its source endpoint, type the
following command:
To update the Vserver SnapMirror relationship between destination endpoint dvs1.example.com: and its source
endpoint, type the following command:
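The example commands themselves were lost in conversion; plausible reconstructions, based on the destination paths named above (prompts and the use of -destination-path are assumptions), are:

cluster2::> snapmirror update -destination-path vs2.example.com:dept_eng_dp_mirror3

cluster2::> snapmirror update -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror3

cluster2::> snapmirror update -destination-path dvs1.example.com: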
Related references
snapmirror create on page 637
snapmirror modify on page 657
snapmirror initialize on page 647
snapmirror initialize-ls-set on page 651
snapmirror update-ls-set on page 714
snapmirror policy on page 723
snapmirror abort on page 632
snapmirror show on page 683
job show on page 142
job history show on page 150
snapmirror update-ls-set
Start an incremental load-sharing set transfer
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
After an update using the snapmirror update-ls-set command successfully completes, the last Snapshot copy transferred
is made the new exported Snapshot copy on the destination volumes.
This command is only supported for SnapMirror relationships with the field "Relationship Capability" showing as "Pre
8.2" in the output of the snapmirror show command.
Parameters
{ -source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>} - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats. The
normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships
with an AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For
SnapMirror relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/
name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
-source-vserver <vserver name> - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints, if
this parameter is specified, parameters -source-volume and, for relationships with "Relationship
Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not supported
for relationships with non-Data ONTAP source endpoints.
-source-volume <volume name>} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, parameters
-source-vserver and, for relationships with "Relationship Capability" of "Pre 8.2",
-source-cluster must also be specified. This parameter is not supported for relationships with non-Data ONTAP
source endpoints.
[-foreground | -w [true]] - Foreground Process
This specifies whether the operation runs as a foreground process. If this parameter is specified, the default
setting is true (the operation runs in the foreground). When set to true, the command will not return until
the process completes. This parameter is only applicable to relationships with "Relationship
Capability" of "Pre 8.2".
Related references
snapmirror update on page 710
snapmirror show on page 683
snapmirror config-replication status show
Description
The snapmirror config-replication status show command displays the current SnapMirror configuration replication
status.
The command displays the following aspects of SnapMirror configuration replication:
• Storage Remarks: Prints the underlying root cause for non-healthy SnapMirror configuration storage.
• Vserver Streams: Verifies that SnapMirror configuration replication Vserver streams are healthy.
Additional information about the warnings (if any) and recovery steps can be viewed by running the command with the -
instance option.
Parameters
[-instance ]
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example shows the execution of the command:
Related references
snapmirror config-replication status show-communication on page 718
snapmirror config-replication status show-aggregate-eligibility
Description
The snapmirror config-replication status show-aggregate-eligibility command displays the SnapMirror
configuration replication aggregate eligibility.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Aggregate
Display only rows that have a matching aggregate name.
[-hosted-configuration-replication-volumes <volume name>, ...] - Currently Hosted Configuration
Replication Volumes
Display only rows that have matching configuration replication volumes hosted on this aggregate.
[-is-eligible-to-host-additional-volumes {true|false}] - Eligibility to Host Another Configuration
Replication Volume
Display only rows that have a matching eligibility of the aggregate to host additional configuration replication
volumes.
[-comment <text>] - Comment for Eligibility Status
Display only rows that have a matching comment regarding the eligibility of the aggregate to host
configuration replication volumes.
Examples
The following example shows the execution of the command in a SnapMirror configuration with thirteen aggregates in the
cluster:
Related references
snapmirror config-replication status show on page 716
snapmirror config-replication status show-communication
Description
The snapmirror config-replication status show-communication command displays the current SnapMirror
configuration replication communication status.
The command displays the following aspects of SnapMirror configuration replication for each peer cluster:
• Remote Heartbeat: Verifies that the SnapMirror configuration replication heartbeat with the remote cluster is healthy.
• Last Heartbeat Sent: Prints the timestamp of the last SnapMirror configuration replication heartbeat sent to the remote
cluster.
• Last Heartbeat Received: Prints the timestamp of the last SnapMirror configuration replication heartbeat received from the
remote cluster.
Additional information about the warnings (if any) and recovery steps can be viewed by running the command with the -
instance option.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-cluster-uuid <UUID>] - Remote Cluster
Display only rows that have a matching peer cluster UUID.
[-cluster <text>] - Peer Cluster Name
Display only rows that have matching peer cluster name.
[-remote-heartbeat {ok|warning|not-run|not-applicable}] - Remote Heartbeat
Display only rows that have a matching remote heartbeat status.
[-last-heartbeat-sent <MM/DD/YYYY HH:MM:SS>] - Last Heartbeat Sent Time
Display only rows that have a matching timestamp of the last heartbeat sent.
Examples
The following example shows the execution of the command in a SnapMirror configuration with two peer clusters:
Related references
snapmirror config-replication status show on page 716
snapmirror config-replication cluster-storage-configuration modify
Description
The snapmirror config-replication cluster-storage-configuration modify command modifies the
configuration of storage used for configuration replication.
Parameters
[-disallowed-aggregates <aggregate name>, ...] - Disallowed Aggregates
Use this parameter to set the list of storage aggregates that are not available to host storage for configuration
replication.
Examples
The following example disallows two aggregates named aggr1 and aggr2:
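The example command was lost in conversion; given the -disallowed-aggregates parameter described above, it would plausibly read:

cluster1::> snapmirror config-replication cluster-storage-configuration modify -disallowed-aggregates aggr1,aggr2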
Related references
snapmirror config-replication cluster-storage-configuration show on page 720
snapmirror config-replication cluster-storage-configuration show
Description
The snapmirror config-replication cluster-storage-configuration show command shows details of the
configuration of the storage used for configuration replication.
The information displayed is the following:
• Disallowed Aggregates - The list of storage aggregates that are configured as not allowed to host storage areas.
• Auto-Repair - Displays true if the automatic repair of storage areas used by configuration replication is enabled.
• Auto-Recreate - Displays true if the automatic recreation of storage volumes used by configuration replication is enabled.
• Use Mirrored Aggregate - Displays true if storage areas for configuration replication are to be hosted on a mirrored
aggregate.
Examples
The following is an example of the snapmirror config-replication cluster-storage-configuration show
command:
Disallowed Aggregates: -
Auto-Repair: true
Auto-Recreate: true
Use Mirrored Aggregate: true
Related references
snapmirror config-replication cluster-storage-configuration modify on page 719
snapmirror object-store profiler abort
Description
The snapmirror object-store profiler abort command will abort an ongoing object store profiler run. This
command requires two parameters - an object store configuration and a node on which the profiler is currently running.
Examples
The following example aborts the object store profiler:
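The example command was lost in conversion; a plausible form, with the node and configuration names (node1, my_store) assumed for illustration:

cluster1::> snapmirror object-store profiler abort -node node1 -object-store-name my_store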
Related references
storage aggregate object-store profiler abort on page 885
snapmirror object-store profiler show
Availability: This command is available to cluster administrators at the advanced privilege level.
Description
The snapmirror object-store profiler show command is used to monitor progress and results of the snapmirror
object-store profiler start command.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node Name
This parameter specifies the node on which the profiler was started.
[-object-store-name <text>] - ONTAP Name for this Object Store Configuration
This parameter specifies the object store configuration that describes the object store. The object store
configuration has information about the object store server name, port, access credentials, and provider type.
[-profiler-status <text>] - Profiler Status
Current status of the profiler.
[-start-time <MM/DD/YYYY HH:MM:SS>] - Profiler Start Time
Time at which profiler run started.
[-op-name <text>] - Operation Name - PUT/GET
Name of the operation. Possible values are PUT or GET.
[-op-size {<integer>[KB|MB|GB|TB|PB]}] - Size of Operation
Size of the PUT or GET operation.
[-op-count <integer>] - Number of Operations Performed
Number of operations issued to the object store.
Examples
The following example displays the results of snapmirror object-store profiler start:
Related references
storage aggregate object-store profiler show on page 886
snapmirror object-store profiler start on page 722
snapmirror object-store profiler start
Description
The snapmirror object-store profiler start command writes objects to an object store and reads those objects to
measure latency and throughput of an object store. This command requires two parameters - an object store configuration and
a node from which to send the PUT/GET/DELETE operations. This command verifies whether the object store is accessible
through the intercluster LIF of the node on which it runs. The command fails if the object store is not accessible. The command
will create a 10GB dataset by doing 2500 PUTs for a maximum time period of 60 seconds. Then it will issue GET operations of
different sizes - 4KB, 8KB, 32KB, 256KB for a maximum time period of 180 seconds. Finally it will delete the objects it
created. This command can result in additional charges to your object store account. This is a CPU-intensive command. It is
recommended to run this command when the system is under 50% CPU utilization.
Parameters
-node {<nodename>|local} - Node on Which the Profiler Should Run
This parameter specifies the node from which PUT/GET/DELETE operations are sent.
-object-store-name <text> - Object Store Configuration Name
This parameter specifies the object store configuration that describes the object store. The object store
configuration has information about the object store server name, port, access credentials, and provider type.
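As an illustration, a profiler run against a hypothetical object store configuration my_store from node node1 (both names assumed) could be started as:

cluster1::> snapmirror object-store profiler start -node node1 -object-store-name my_store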
Related references
storage aggregate object-store profiler start on page 887
snapmirror policy add-rule
Description
The snapmirror policy add-rule command adds a rule to a SnapMirror policy. Rules define which Snapshot copies are
protected by vault relationships or define the schedule at which Snapshot copies are created on the SnapMirror destination.
Rules which do not include a schedule are rules for protecting Snapshot copies. Rules which include a schedule are rules for
creating Snapshot copies on the SnapMirror destination. A rule with a schedule can only be added to SnapMirror policies of
type vault or mirror-vault. A rule must not be added to a policy that will be associated with a SnapMirror data protection
relationship. A policy that will be associated with a SnapMirror vault relationship must have at least one rule and at most ten
rules. A SnapMirror policy with rules must have at least one rule without a schedule.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver for the SnapMirror policy.
-policy <sm_policy> - SnapMirror Policy Name
Specifies the SnapMirror policy name.
-snapmirror-label <text> - Snapshot Copy Label
This parameter is primarily used for the purpose of Snapshot copy selection for extended data protection
(XDP) relationships. Only Snapshot copies that have a SnapMirror label that matches this parameter will be
transferred to the SnapMirror destination. However, when this parameter is associated with a rule containing a
schedule, Snapshot copies will be created on the SnapMirror destination using this snapmirror-label parameter.
The label can be 31 or fewer characters in length. SnapMirror policies of type async-mirror and mirror-
vault have a rule added for label sm_created at the time of policy creation. This rule cannot be removed or
modified by the user. This rule when coupled with create-snapshot field set to true indicates that the
SnapMirror relationship using this policy shall create a new Snapshot copy and transfer it as part of a
snapmirror update operation. SnapMirror policies of type async-mirror support one additional rule
with SnapMirror label all_source_snapshots. This rule along with the rule for SnapMirror label
sm_created indicates that all new Snapshot copies on the primary volume along with the newly created
Snapshot copy are transferred as a part of a snapmirror update or snapmirror initialize operation.
Rules with any other SnapMirror labels cannot be added to SnapMirror policies of type async-mirror. The
rule for label sm_created when added to a snapmirror policy of type vault indicates that all
SnapMirror created Snapshot copies of the primary volume are selected for transfer.
Examples
The following example adds a rule named nightly to the SnapMirror policy named TieredBackup on Vserver
vs0.example.com. The rule will retain a maximum of 5 nightly Snapshot copies.
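The example command was lost in conversion; given the rule name and retention count stated, it would plausibly read:

cluster1::> snapmirror policy add-rule -vserver vs0.example.com -policy TieredBackup -snapmirror-label nightly -keep 5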
The following example adds a rule named SyncProtectMe to the SnapMirror policy named Smgr on Vserver
vs0.example.com. The rule will retain the same SyncProtectMe snapshots on the destination as are present on the
source when the relationship is InSync.
Related references
snapmirror update on page 710
snapmirror initialize on page 647
snapmirror policy on page 723
snapmirror resync on page 676
snapmirror policy create
Description
The snapmirror policy create command creates a SnapMirror policy. When applied to a SnapMirror relationship, the
SnapMirror policy controls the behavior of the relationship and specifies the configuration attributes for that relationship. The
policies DPDefault, MirrorAllSnapshots, MirrorAndVault, MirrorLatest, Unified7year and XDPDefault are
created by the system for asynchronous replication. The policies Sync and StrictSync are created by the system for
synchronous replication.
Note: All SnapMirror policies have a field create-snapshot. This field specifies whether SnapMirror creates a new
Snapshot copy on the primary volume at the beginning of a snapmirror update or snapmirror resync operation.
Currently this field cannot be set or modified by the user. It is set to true for SnapMirror policies of type async-mirror
and mirror-vault at the time of creation. SnapMirror policies of type vault have create-snapshot set to false at the
time of creation.
Note: Use the snapmirror policy add-rule command to add a rule to a policy.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver for the SnapMirror policy.
-policy <sm_policy> - SnapMirror Policy Name
This parameter specifies the SnapMirror policy name. A policy name can be made up of the characters A to Z,
a to z, 0 to 9, ".", "-", and "_". The name can be up to 256 characters in length.
[-type {vault|async-mirror|mirror-vault|strict-sync-mirror|sync-mirror}] - Snapmirror Policy
Type
This parameter specifies the SnapMirror policy type. The supported values are async-mirror, vault,
mirror-vault, sync-mirror and strict-sync-mirror. Data protection (DP) relationships support only
async-mirror policy type, while extended data protection (XDP) relationships support all policy types.
If the type is set to async-mirror then the policy is for Disaster Recovery. When the policy type is
associated with extended data protection (XDP) relationships, snapmirror update and snapmirror
resync operations transfer selected Snapshot copies from the primary volume to the secondary volume. The
selection of Snapshot copies is governed by the rules in the policy. However snapmirror initialize and
snapmirror update operations on data protection (DP) relationships ignore the rules in the policy and
transfer all Snapshot copies of the primary volume which are newer than the common Snapshot copy on the
destination. For both data protection (DP) and extended data protection (XDP) relationships, the Snapshot
copies are kept on the secondary volume as long as they exist on the primary volume. Once a protected
Snapshot copy is deleted from the primary volume, it is deleted from the secondary volume as part of the next
transfer. The policy type supports rules with certain pre-defined label names only. Refer to the man page for
the snapmirror policy add-rule command for the details.
If the type is set to vault then the policy is used for Backup and Archive. The rules in this policy type
determine which Snapshot copies are protected and how long they are retained on the secondary volume. This
policy type is supported by extended data protection (XDP) relationships only.
If the type is set to mirror-vault then the policy is used for unified data protection which provides both
Disaster Recovery and Backup using the same secondary volume. This policy type is supported by extended
data protection (XDP) relationships only.
This parameter is supported only for policies of type async-mirror and applicable only for identity-preserve
Vserver SnapMirror relationships.
Examples
The following example creates a SnapMirror policy named TieredBackup on a Vserver named vs0.example.com.
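The example command was lost in conversion; a plausible reconstruction (the -type vault setting is an assumption suggested by the policy name) is:

cluster1::> snapmirror policy create -vserver vs0.example.com -policy TieredBackup -type vault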
The following example executed under PVR control creates a SnapMirror policy named Sync on a Vserver named
vs0.example.com with -always-replicate_snapshots set to true to be used for a relationship between items in
Consistency Groups.
Related references
snapmirror update on page 710
snapmirror resync on page 676
snapmirror initialize on page 647
snapmirror policy add-rule on page 723
snapmirror quiesce on page 663
snapmirror resume on page 674
snapmirror policy on page 723
job schedule cron create on page 163
snapmirror policy delete
Description
The snapmirror policy delete command deletes a SnapMirror policy. A policy that is to be deleted must not be
associated with any SnapMirror relationship. The built-in policies cannot be deleted.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver for the SnapMirror policy.
-policy <sm_policy> - SnapMirror Policy Name
Specifies the SnapMirror policy name.
Examples
The following example deletes a SnapMirror policy named TieredBackup on Vserver vs0.example.com:
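The example command was lost in conversion; given the parameters described above, it would plausibly read:

cluster1::> snapmirror policy delete -vserver vs0.example.com -policy TieredBackup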
snapmirror policy modify
Description
The snapmirror policy modify command can be used to modify the policy attributes.
Note: Use the snapmirror policy modify-rule command to modify a rule in a SnapMirror policy.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver for the SnapMirror policy.
-policy <sm_policy> - SnapMirror Policy Name
Specifies the SnapMirror policy name.
[-comment <text>] - Comment
Specifies a text comment for the SnapMirror policy. If the comment contains spaces, it must be enclosed
within quotes.
[-tries <unsigned32_or_unlimited>] - Tries Limit
Determines the maximum number of times to attempt each manual or scheduled transfer for a SnapMirror
relationship. The value of this parameter must be a positive integer or unlimited. The default value is 8.
[-transfer-priority {low|normal}] - Transfer Scheduling Priority
Specifies the priority at which a transfer runs. The supported values are normal or low. The normal transfers
are scheduled before the low priority transfers. The default is normal.
[-ignore-atime {true|false}] - Ignore File Access Time
This parameter applies only to extended data protection (XDP) relationships. It specifies whether incremental
transfers will ignore files which have only their access time changed. The supported values are true or
false. The default is false.
[-restart {always|never|default}] - Restart Behavior
This parameter applies only to data protection relationships. It defines the behavior of SnapMirror if an
interrupted transfer exists. The supported values are always, never, or default. If the value is set to
always, an interrupted SnapMirror transfer always restarts provided it has a restart checkpoint and the
conditions are the same as they were before the transfer was interrupted. In addition, a new SnapMirror
Snapshot copy is created which will then be transferred. If the value is set to never, an interrupted
SnapMirror transfer will never restart, even if a restart checkpoint exists. A new SnapMirror Snapshot copy
will still be created and transferred. Data ONTAP version 8.2 will interpret a value of default as being the
same as always. Vault transfers will always resume based on a restart checkpoint, provided the Snapshot copy
still exists on the source volume.
[-is-network-compression-enabled {true|false}] - Is Network Compression Enabled
Specifies whether network compression is enabled for transfers. The supported values are true or false. The
default is false.
This parameter is supported only for policies of type async-mirror and applicable only for identity-preserve
Vserver SnapMirror relationships.
Examples
The following example changes the "transfer-priority" and the "comment" text of a snapmirror policy named
TieredBackup on Vserver vs0.example.com:
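The example command was lost in conversion; a plausible reconstruction (the priority and comment values are assumptions) is:

cluster1::> snapmirror policy modify -vserver vs0.example.com -policy TieredBackup -transfer-priority low -comment "Use for tiered backups"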
Related references
snapmirror update on page 710
snapmirror initialize on page 647
snapmirror policy on page 723
snapmirror resync on page 676
job schedule cron create on page 163
snapmirror policy modify-rule on page 729
snapmirror policy modify-rule
Description
The snapmirror policy modify-rule command can be used to modify the retention count, preserve setting, warning
threshold count, schedule, and prefix for a rule in a SnapMirror policy. Reducing the retention count or disabling the preserve
setting for a rule in a SnapMirror policy might result in the deletion of Snapshot copies on the vault destination when the next
transfer by the snapmirror update command occurs or when the next scheduled Snapshot copy creation on the destination
for the rule occurs. Modifying a rule to add a schedule will enable creation of Snapshot copies on the SnapMirror destination.
Snapshot copies on the source that have a SnapMirror label matching this rule will not be selected for transfer. Schedule and
prefix can only be modified for rules associated with SnapMirror policies of type vault or mirror-vault. A SnapMirror
policy with rules must have at least one rule without a schedule.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver for the SnapMirror policy.
-policy <sm_policy> - SnapMirror Policy Name
Specifies the SnapMirror policy name.
-snapmirror-label <text> - Snapshot Copy Label
This parameter specifies the rule that is to be modified in a SnapMirror policy.
[-keep <text>] - Snapshot Copy Retention Count
Specifies the maximum number of Snapshot copies that are retained on the SnapMirror destination volume for
a rule. The total number of Snapshot copies retained for all the rules in a policy cannot exceed 1019. For all
the rules in SnapMirror policies of type async-mirror, this parameter must be set to 1.
[-preserve {true|false}] - Snapshot Copy Preserve Enabled
Specifies the behavior when the Snapshot copy retention count is reached on the SnapMirror vault destination
for the rule. The default value is false, which means that the oldest Snapshot copy will be deleted to make
room for new ones only if the number of Snapshot copies has exceeded the retention count specified in the
"keep" parameter. When set to true, and when the Snapshot copies have reached the retention count, then an
incremental SnapMirror vault update transfer will fail or if the rule has a schedule, Snapshot copies will no
longer be created on the SnapMirror destination. For all the rules in SnapMirror policies of type async-
mirror this parameter must be set to value false.
[-warn <integer>] - Warning Threshold Count
Specifies the warning threshold count for the rule. The default value is 0. When set to a value greater than
zero, an event is generated after the number of Snapshot copies (for the particular rule) retained on a
SnapMirror vault destination reaches the specified warn limit. The preserve parameter for the rule must be
true to set the warn parameter to a value greater than zero.
[-schedule <text>] - Snapshot Copy Creation Schedule
This optional parameter specifies the name of the schedule associated with a rule. This parameter is allowed
only for rules associated with SnapMirror policies of type vault or mirror-vault. When this parameter is
specified, Snapshot copies are directly created on the SnapMirror destination. The Snapshot copies created
will have the same content as the latest Snapshot copy already present on the SnapMirror destination.
Snapshot copies on the source that have a SnapMirror label matching this rule will not be selected for transfer.
The default value is -.
Note: You define and name a schedule using the job schedule cron create command.
Examples
The following example changes the retention count for nightly Snapshot copies to 6 for a rule named nightly on a
SnapMirror policy named TieredBackup on Vserver vs0.example.com:
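The command line for this example is missing from this extract; a reconstruction based on the parameters documented above (a sketch, not verbatim output) would be:

```
cluster1::> snapmirror policy modify-rule -vserver vs0.example.com -policy TieredBackup -snapmirror-label nightly -keep 6
```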
snapmirror policy remove-rule
Remove a rule from a SnapMirror policy
Description
The snapmirror policy remove-rule command removes a rule from a SnapMirror policy. On the destination of a
SnapMirror relationship with snapmirror policy of type vault or mirror-vault, all Snapshot copies with a SnapMirror
label matching the rule being removed are no longer processed by SnapMirror and might need to be deleted manually. A
snapmirror policy of type vault must have at least one rule if that policy is associated with a SnapMirror relationship. A
SnapMirror policy with rules must have at least one rule without a schedule.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver for the SnapMirror policy.
-policy <sm_policy> - SnapMirror Policy Name
Specifies the SnapMirror policy name.
-snapmirror-label <text> - Snapshot Copy Label
This parameter specifies the rule that is removed from the SnapMirror policy.
The rule for SnapMirror label sm_created cannot be removed from SnapMirror policies of type async-mirror or mirror-vault.
Examples
The following example removes a rule named nightly from a SnapMirror policy named TieredBackup on Vserver
vs0.example.com:
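The command line itself is not shown in this extract; reconstructed from the parameters documented above, it would be:

```
cluster1::> snapmirror policy remove-rule -vserver vs0.example.com -policy TieredBackup -snapmirror-label nightly
```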
Related references
snapmirror policy on page 723
snapmirror policy show
Show SnapMirror policies
Description
The snapmirror policy show command displays the following information about SnapMirror policies:
• Vserver Name
• Transfer Priority
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
Selects the policies that match this parameter value.
[-policy <sm_policy>] - SnapMirror Policy Name
Selects the policies that match this parameter value.
[-type {vault|async-mirror|mirror-vault|strict-sync-mirror|sync-mirror}] - Snapmirror Policy
Type
Selects the policies that match this parameter value. A policy can be of type async-mirror, vault or
mirror-vault.
[-owner {cluster-admin|vserver-admin}] - Owner of the Policy
Selects the policies that match this parameter value. A policy can be owned by either the "Cluster Admin"
or a "Vserver Admin".
[-comment <text>] - Comment
Selects the policies that match this parameter value.
[-tries <unsigned32_or_unlimited>] - Tries Limit
Selects the policies that match this parameter value.
[-transfer-priority {low|normal}] - Transfer Scheduling Priority
Selects the policies that match this parameter value.
[-ignore-atime {true|false}] - Ignore File Access Time
Selects the policies that match this parameter value.
[-restart {always|never|default}] - Restart Behavior
Selects the policies that match this parameter value.
[-is-network-compression-enabled {true|false}] - Is Network Compression Enabled
Selects the policies that match this parameter value.
[-create-snapshot {true|false}] - Create a New Snapshot Copy
Selects the policies that match this parameter value.
[-snapmirror-label <text>, ...] - Snapshot Copy Label
Selects the policies that match this parameter value.
[-keep <text>, ...] - Snapshot Copy Retention Count
Selects the policies that match this parameter value.
Examples
The following example displays information about all SnapMirror policies:
vs0.example.com
TieredBackup vault 0 8 normal Use for tiered backups
Snapmirror-label: - Keep: -
Total Keep: 0
cs XDPDefault vault 2 8 normal Vault policy with daily and weekly rules.
SnapMirror Label: daily Keep: 7
weekly 52
Total Keep: 59
The following example shows all the policies with the following fields - vserver (default), policy (default) and transfer-priority:
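The invocation is missing from this extract; based on the -fields parameter documented above, it would be:

```
cluster1::> snapmirror policy show -fields transfer-priority
```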
snapmirror snapshot-owner create
Add an owner to a Snapshot copy
Description
The snapmirror snapshot-owner create command adds an owner on the specified Snapshot copy. A Snapshot copy can
have at most one owner. An owner can only be added to a Snapshot copy on a read-write volume. The Snapshot copy must have
a valid SnapMirror label.
Note: Refer to the ONTAP Data Protection Guide for valid use cases to add an owner on a Snapshot copy.
Parameters
-vserver <vserver name> - Vserver Name
This parameter specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This parameter specifies the name of the volume.
-snapshot <snapshot name> - Snapshot Copy Name
This parameter specifies the name of the Snapshot copy.
Examples
The following example adds owner app1 on Snapshot copy snap1 on volume vol1 in Vserver vs0.example.com.
The following example adds a default owner on Snapshot copy snap2 on volume vol1 in Vserver vs0.example.com.
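Reconstructed invocations for the two examples above. Note that the -owner parameter is not listed in the parameter section of this extract (likely lost in extraction), so treat it as an assumption; omitting it assigns the system-generated default owner:

```
cluster1::> snapmirror snapshot-owner create -vserver vs0.example.com -volume vol1 -snapshot snap1 -owner app1
cluster1::> snapmirror snapshot-owner create -vserver vs0.example.com -volume vol1 -snapshot snap2
```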
snapmirror snapshot-owner delete
Delete an owner from a Snapshot copy
Description
The snapmirror snapshot-owner delete command removes an owner on the specified Snapshot copy, which was added
using the snapmirror snapshot-owner create command.
Parameters
-vserver <vserver name> - Vserver Name
This parameter specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This parameter specifies the name of the volume.
-snapshot <snapshot name> - Snapshot Copy Name
This parameter specifies the name of the Snapshot copy.
[-owner <owner name>] - Snapshot Copy Owner Name
This parameter specifies the name of the owner for the Snapshot copy. When not specified, the owner with the
system-generated default name will be removed.
Examples
The following example removes owner app1 on Snapshot copy snap1 on volume vol1 in Vserver vs0.example.com.
The following example removes the default owner on Snapshot copy snap2 on volume vol1 in Vserver
vs0.example.com.
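Reconstructed invocations for the two examples above (the second omits -owner, which removes the owner with the system-generated default name):

```
cluster1::> snapmirror snapshot-owner delete -vserver vs0.example.com -volume vol1 -snapshot snap1 -owner app1
cluster1::> snapmirror snapshot-owner delete -vserver vs0.example.com -volume vol1 -snapshot snap2
```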
snapmirror snapshot-owner show
Display Snapshot copies with owners
Description
The snapmirror snapshot-owner show command is used to list all Snapshot copies with owners that were added using the
snapmirror snapshot-owner create command.
Parameters
{ [-fields <fieldname>, ...]
If this parameter is specified, the command displays information about the specified fields.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all fields.
-vserver <vserver name> - Vserver Name
This parameter specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This parameter specifies the name of the volume.
[-snapshot <snapshot name>] - Snapshot Copy Name
If this parameter is specified, the command displays the owner name for the specified Snapshot copy.
Examples
The following example lists all Snapshot copies with owners on volume vol1 in Vserver vs0. The system-generated
default owner name is displayed as "-".
The following example displays the owner name for Snapshot copy snap1 on volume vol1 in Vserver
vs0.example.com.
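A reconstructed invocation for this example (the output that follows is from the original document):

```
cluster1::> snapmirror snapshot-owner show -vserver vs0.example.com -volume vol1 -snapshot snap1
```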
Vserver: vs0.example.com
Volume: vol1
Snapshot: snap1
Owner Names: app1
Related references
snapmirror snapshot-owner create on page 734
Description
The statistics-v1 nfs show-mount command displays the following statistics about the NFS mounts on each node in the
cluster:
This command is designed to be used to analyze performance characteristics and to help diagnose issues.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified result (success/failure/all).
Examples
The following example displays statistics about the NFS mounts for a node named node1:
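The invocation (reconstructed from the documented -node parameter; output not shown in this extract):

```
cluster1::> statistics-v1 nfs show-mount -node node1
```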
statistics-v1 nfs show-nlm
Display NLM statistics
Description
The statistics-v1 nfs show-nlm command displays the following statistics about the Network Lock Manager (NLM) on
each node in the cluster:
This command is designed to be used to analyze performance characteristics and to help diagnose issues.
Note: This command requires an effective cluster version earlier than Data ONTAP 9.0. Data for nodes running Data ONTAP
9.0 or later is not collected, and will not be displayed. Use the statistics show-object nlm command instead.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified result (success/failure/all).
Examples
The following example displays statistics about the NLM for a node named node1:
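The invocation (reconstructed; output not shown in this extract):

```
cluster1::> statistics-v1 nfs show-nlm -node node1
```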
Related references
statistics show on page 761
statistics-v1 nfs show-statusmon
Display status monitor statistics
Description
The statistics-v1 nfs show-statusmon command displays the following statistics about the Status Monitor on each
node in the cluster:
This command is designed to be used to analyze performance characteristics and to help diagnose issues.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays statistics only for the specified node.
Examples
The following example displays statistics about the status monitor for a node named node1:
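The invocation (reconstructed; output not shown in this extract):

```
cluster1::> statistics-v1 nfs show-statusmon -node node1
```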
statistics-v1 nfs show-v3
Display NFSv3 statistics
Description
The statistics-v1 nfs show-v3 command displays the following statistics about the NFSv3 operations on each node in
the cluster:
• Result of the operations (success or failure)
This command is designed to be used to analyze performance characteristics and to help diagnose issues.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays NFSv3 statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified result (success/failure/all).
[-null <Counter with Delta>] - Null Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of null operations.
[-gattr <Counter with Delta>] - GetAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of getattr operations.
Examples
The following example displays statistics about the NFSv3 operations for a node named node1:
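The invocation (reconstructed; output not shown in this extract):

```
cluster1::> statistics-v1 nfs show-v3 -node node1
```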
statistics-v1 nfs show-v4
Display NFSv4 statistics
Description
The statistics-v1 nfs show-v4 command displays the following statistics about the NFSv4 operations on each node in
the cluster:
• Result of the operations (success or failure)
This command is designed to be used to analyze performance characteristics and to help diagnose issues.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays NFSv4 statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified result (success/failure/all).
[-null <Counter with Delta>] - Null Procedure
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of null operations.
[-cmpnd <Counter with Delta>] - Compound Procedure
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of compound operations.
Examples
The following example displays statistics about the NFSv4 operations for a node named node1:
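The invocation (reconstructed; output not shown in this extract):

```
cluster1::> statistics-v1 nfs show-v4 -node node1
```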
Description
This command displays size statistics for CIFS and NFS protocol read and write requests. The output of the command includes
the following information:
• Node name
• Statistic type
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays statistics only for the specified node.
[-stat-type <Protocol Type>] - RW Request Stat Type
If this parameter is specified, the command displays only the statistics of the specified protocol type. Protocol
types include the following: cifs_read, cifs_write, nfs2_read, nfs2_write, nfs3_read, and nfs3_write.
[-total-req-count <Counter64 with Delta>] - Total Request Count
If this parameter is specified, the command displays only statistics with the specified total number of requests.
[-average-size <Counter64 with Delta>] - Average Request Size
If this parameter is specified, the command displays only statistics with the specified average request size.
[-histo08 <Counter64 with Delta>] - 0 - 511
If this parameter is specified, the command displays only statistics with the specified number of requests in
this size range.
[-histo09 <Counter64 with Delta>] - 512 - 1023
If this parameter is specified, the command displays only statistics with the specified number of requests in
this size range.
[-histo10 <Counter64 with Delta>] - 1024 - 2047
If this parameter is specified, the command displays only statistics with the specified number of requests in
this size range.
[-histo11 <Counter64 with Delta>] - 2048 - 4095
If this parameter is specified, the command displays only statistics with the specified number of requests in
this size range.
Examples
The following example displays the number of NFS v3 requests in each size range for only one node in the cluster.
Node: node0
Stat Type: nfs3_read
Value Delta
-------------- -------- ----------
Average Size: 6 -
Total Request Count:
465947409 -
0-511: 567023 -
512-1023: 4306 -
1K-2047: 175 -
2K-4095: 160404 -
4K-8191: 537576 -
8K-16383: 1742701 -
16K-32767: 1418620 -
32K-65535:
461516604 -
64K-131071: 0 -
128K - : 0 -
Node: node0
Stat Type: nfs3_write
Value Delta
-------------- -------- ----------
Average Size: 0 -
Total Request Count:
199294247 -
0-511: 36556 -
512-1023: 3683 -
1K-2047: 745 -
2K-4095: 1413 -
4K-8191: 28643 -
8K-16383:
199223207 -
16K-32767: 0 -
32K-65535: 0 -
64K-131071: 0 -
128K - : 0 -
statistics show
Display performance data for a time interval
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
Description
This command displays performance data for a period of time.
To display data for a period of time, collect a sample using the statistics start and statistics stop commands. The
displayed data is calculated from the samples the cluster collects. To view a specific sample, specify the -sample-id
parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-tab ]}
If this parameter is specified, the command displays performance data in tabular format.
[-object <text>] - Object
Selects the objects for which you want to display performance data. To view a list of valid object names, type
statistics show -object ? or statistics catalog object show. To specify multiple objects, use
"|" between each object.
Caution: You should limit the scope of this command to only a few objects at a time to avoid a potentially
significant impact on the performance of the system.
Examples
The following example starts collecting statistics and displays statistics for the sample named smpl_1 for counters:
avg_processor_busy and cpu_busy
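A reconstructed command sequence for this example. The -counter parameter and the -object value system are assumptions not documented in this extract:

```
cluster1::> statistics start -object system -counter avg_processor_busy|cpu_busy -sample-id smpl_1
cluster1::> statistics show -sample-id smpl_1 -counter avg_processor_busy|cpu_busy
```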
The following example starts and stops data collection and displays statistics for the sample named smpl_1 for counters:
avg_processor_busy and cpu_busy
Counter Value
-------------------------------- --------------------------------
avg_processor_busy 249876451
cifs_ops 0
cpu_busy 303355441
disk_data_read 51453952
disk_data_written 486117376
fcp_data_recv 0
fcp_data_sent 0
fcp_ops 0
hdd_data_read 51453952
hdd_data_written 486117376
hostname node-name1
http_ops 0
instance_name cluster
iscsi_ops 0
net_data_recv 35034112
net_data_sent 3177472
nfs_ops 0
node_name node-name1
node_uuid
[...]
The following example displays raw statistics for counters "avg_processor_busy" and "cpu_busy":
Counter Value
-------------------------------- --------------------------------
avg_processor_busy 249876451
cpu_busy 303355441
[...]
Related references
statistics catalog object show on page 773
statistics start on page 766
statistics stop on page 768
Description
This command continuously displays specified performance data at a regular interval. The command output displays data in the
following columns:
Note: This command has been deprecated and may be removed from a future version of Data ONTAP. Use the "statistics
show" command with the tabular format instead.
• cpu avg: Average processor utilization across all processors in the system.
• cpu busy: Overall system utilization based on CPU utilization and subsystem utilization. Examples of subsystems include
the storage subsystem and RAID subsystem.
• pkts recv: Number of packets received over physical ports per second.
• pkts sent: Number of packets sent over physical ports per second.
• total recv: Total network traffic received over physical ports per second (KBps).
• total sent: Total network traffic sent over physical ports per second (KBps).
• data busy: The percentage of time that data ports sent or received data.
• cluster busy: The percentage of time that cluster ports sent or received data.
Parameters
[-object <text>] - Object
Selects the object for which you want to display performance data. The default object is "cluster".
[-instance <text>] - Instance
Selects the instance for which you want to display performance data. This parameter is required if you specify
the -object parameter and enter any object other than "cluster". Multiple values for this parameter are not
supported.
Examples
The following example displays the "cluster" statistics for a node named node1. Because no number of iterations is
specified, this command will continue to run until you interrupt it by pressing Ctrl-C.
The following example displays the processor statistics for an instance named processor1 and counters "processor_busy"
and "sk_switches". This command will display only five iterations.
statistics start
Start data collection for a sample
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
Description
This command starts the collection of performance data. Use the statistics stop command to stop the collection. You view
the sample of performance data by using the statistics show command. You can collect more than one sample at a time.
Parameters
[-object <text>] - Object
Selects the objects for which you want to collect performance data. This parameter is required. To view a list
of valid object names, type statistics catalog object show at the command prompt. To specify
multiple objects, use "|" between each object.
Caution: You should limit the scope of this command to only a few objects at a time to avoid a potentially
significant impact on the performance of the system.
Examples
The following example starts statistics collection for sample "smpl_1":
The following example starts collecting statistics for the sample named smpl_1 for counters: avg_processor_busy and
cpu_busy
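Reconstructed invocations for the two examples above. The -object value system, the -counter parameter, and the -sample-id parameter are assumptions for this command not documented in this extract:

```
cluster1::> statistics start -object system -sample-id smpl_1
cluster1::> statistics start -object system -counter avg_processor_busy|cpu_busy -sample-id smpl_1
```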
statistics stop
Stop data collection for a sample
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
Description
This command stops the collection of performance data. You view the sample of performance data by using the statistics
show command.
Parameters
[-sample-id <text>] - Sample Identifier
Specifies the identifier of the sample for which you want to stop data collection. If you do not specify this
parameter, the command stops data collection for the last sample that you started by running the statistics
start command without the -sample-id parameter.
Examples
The following example stops data collection for sample "smpl_1":
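The invocation (reconstructed from the documented -sample-id parameter):

```
cluster1::> statistics stop -sample-id smpl_1
```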
Related references
statistics start on page 766
statistics show on page 761
Description
This command continuously displays performance data for aggregates at a regular interval. The command output displays data
in the following columns:
Parameters
[-aggregate <text>] - Aggregate
Selects the aggregate for which you want to display performance data.
[-node {<nodename>|local}] - Node
Selects the node for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the number
is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies maximum number of aggregates to display. The default setting is 25.
Examples
The following example displays aggregate statistics:
[...]
Description
This command continuously displays performance data for flash pool caches at a regular interval. The command output displays
data in the following columns:
Parameters
[-aggregate <text>] - Aggregate
Selects the aggregate for which you want to display performance data.
[-vserver <vserver name>] - Vserver
Selects the vserver for which you want to display performance data.
[-volume <text>] - Volume
Selects the volume for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the number
is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of flash pools to display. The default setting is 25.
Examples
The following example displays flash pool statistics:
statistics catalog counter show
Display the list of counters in an object
Description
This command displays the names and descriptions of counters. The displayed data is either node-specific or cluster-wide,
depending on the objects specified.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-describe ]}
Displays detailed information about each counter, including privilege level, label, and whether the counter is a
key counter.
-object <text> - Object
Selects the object for which you want to display the list of counters. This parameter is required. To view a list
of valid object names, type statistics catalog counter show -object ? or statistics catalog
object show.
[-counter <text>] - Counter
Selects the counters that match this parameter value. If you do not specify this parameter, the command
displays details for all counters.
[-filter <text>] - Filter Data
Selects the counters that match this parameter value. For example, to display counters from node1, specify
-filter "node_name=node1".
[-label <text>, ...] - Labels for Array Counters
Selects the counters that match this parameter value. A label is the name of the bucket to which an array
counter belongs.
[-description <text>] - Description
Selects the counters that match this parameter value.
[-privilege <text>] - Privilege Level
Selects the counters that match this parameter value.
[-is-key-counter {true|false}] - Is Key Counter
Selects the counters that are key counters (true) or are not key counters (false). A key counter is a counter that
uniquely identifies an instance across the cluster. The default setting is false. For example, "vserver_name" and
"node_name" are key counters because they identify the specific Vserver or node to which the instance
belongs.
[-is-deprecated {true|false}] - Is Counter Deprecated
Selects the counters that are deprecated (true) or are not deprecated (false).
[-replaced-by <text>] - Replaced By Counter If Deprecated
Selects all deprecated counters that are replaced by the counter provided to this parameter.
Related references
statistics catalog object show on page 773
statistics catalog instance show
Display the list of instances associated with an object
Description
This command displays the names of instances associated with the specified object. The displayed data is either node-specific or
cluster-wide, depending on the objects specified.
Parameters
[-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
-object <text> - Object
Selects the object for which you want to display the list of instances. This parameter is required. To view a list
of valid object names, type statistics catalog instance show -object ? or statistics catalog
object show.
[-instance <text>] - Instance Name
Selects the instances that match this parameter value. If you do not specify this parameter, the command
displays all the instances.
[-filter <text>] - Filter Data
Selects the instances that match this parameter value. For example, to display instances from vserver1, specify
-filter "vserver_name=vserver1".
[-vserver <vserver name>, ...] - Vserver Name
Selects the instances that match this parameter value. If you do not specify this parameter, the command
displays instances for all of the Vservers in the cluster.
Examples
The following example displays the list of instances associated with the processor object.
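The invocation (reconstructed from the documented -object parameter; output not shown in this extract):

```
cluster1::> statistics catalog instance show -object processor
```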
Related references
statistics catalog object show on page 773
statistics catalog object show
Display the list of objects
Description
This command displays the names and descriptions of objects from which you can obtain performance data. The displayed data
is either node-specific or cluster-wide, depending on the objects specified.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-describe ]}
Displays detailed information about each object, including privilege level.
[-object <text>] - Object
Selects the objects for which you want to display information. If you do not specify this parameter, the
command displays details for all of the objects.
[-privilege <text>] - Privilege Level
Selects the objects that match this parameter value.
[-is-deprecated {true|false}] - Is Object Deprecated
Selects the objects that are deprecated (true) or are not deprecated (false).
[-replaced-by <text>] - Replaced By Object If Deprecated
Selects all deprecated objects that are replaced by the object provided to this parameter.
[-is-statistically-tracked {true|false}] - Is Object Statistically Tracked
Specifies whether the object is statistically tracked.
Examples
The following example displays descriptions of all objects in the cluster:
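The invocation (reconstructed; with no parameters the command lists every object):

```
cluster1::> statistics catalog object show
```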
Description
This command continuously displays performance data for disks at a regular interval. The command output displays data in the
following columns:
• Busy (%) - percentage of time there was at least one outstanding request to the disk.
Parameters
[-disk <text>] - Disk
Selects the disk for which you want to display performance data.
[-node {<nodename>|local}] - Node
Selects the node for which you want to display performance data.
Examples
The following example displays disk statistics:
[...]
Description
This command continuously displays performance data for LIFs at a regular interval. The command output displays data in the
following columns:
Examples
The following example displays LIFs statistics:
[...]
Description
This command continuously displays performance data for LUNs at a regular interval. The command output displays data in the
following columns:
Parameters
[-vserver <vserver name>] - Vserver
Selects the vserver for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the number
is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of LUNs to display. The default setting is 25.
Examples
The following example displays LUN statistics:
[...]
Description
This command continuously displays performance data for Namespaces at a regular interval. The command output displays data
in the following columns:
Parameters
[-namespace <text>] - Namespace
Selects the Namespace for which you want to display performance data.
[-vserver <vserver name>] - Vserver
Selects the Vserver for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the number
is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of Namespaces to display. The default setting is 25.
Examples
The following example displays Namespace statistics:
[...]
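Assuming the lost heading for this section is statistics namespace show (as in ONTAP 9), an invocation using only the parameters documented above might look like this (values are illustrative):

cluster1::> statistics namespace show -vserver vs1 -interval 5 -iterations 3 -max 10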
Description
The statistics nfs show-mount command displays the following statistics about the NFS mounts on each node in the
cluster:
• Result of the operations (success or failure)
This command is designed to be used to analyze performance characteristics and to help diagnose issues.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified result (success/failure/all).
[-null <Counter with Delta>] - Null Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of null operations.
[-mount <Counter with Delta>] - Mount Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of mount operations.
[-dump <Counter with Delta>] - Dump Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of dump operations.
Examples
The following example displays statistics about the NFS mounts for a node named node1:
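A sketch of the invocation described above, using only parameters documented in this section:

cluster1::> statistics nfs show-mount -node node1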
Description
The statistics nfs show-nlm command displays the following statistics about the Network Lock Manager (NLM) on each
node in the cluster:
This command is designed to be used to analyze performance characteristics and to help diagnose issues.
Note: This command requires an effective cluster version earlier than Data ONTAP 9.0. Data for nodes running Data ONTAP
9.0 or later is not collected, and will not be displayed. Use the statistics show-object nlm command instead.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified result (success/failure/all).
[-null <Counter with Delta>] - Null Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of null operations.
[-test <Counter with Delta>] - Test Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of test operations.
[-lock <Counter with Delta>] - Lock Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of lock operations.
[-cancel <Counter with Delta>] - Cancel Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of cancel operations.
Examples
The following example displays statistics about the NLM for a node named node1:
Description
The statistics nfs show-statusmon command displays the following statistics about the Status Monitor on each node in
the cluster:
This command is designed to be used to analyze performance characteristics and to help diagnose issues.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified result (success/failure/all).
[-null <Counter with Delta>] - Null Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of null operations.
[-stat <Counter with Delta>] - Stat Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of stat operations.
[-monitor <Counter with Delta>] - Monitor Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of monitor operations.
Examples
The following example displays statistics about the status monitor for a node named node1:
Description
The statistics nfs show-v3 command displays the following statistics about the NFSv3 operations on each node in the
cluster:
This command is designed to be used to analyze performance characteristics and to help diagnose issues.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays NFSv3 statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified result (success/failure/all).
[-null <Counter with Delta>] - Null Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of null operations.
[-gattr <Counter with Delta>] - GetAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of getattr operations.
[-sattr <Counter with Delta>] - SetAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of setattr operations.
[-lookup <Counter with Delta>] - LookUp Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of lookup operations.
[-access <Counter with Delta>] - Access Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of access operations.
[-rsym <Counter with Delta>] - ReadSymlink Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of readsymlink operations.
Examples
The following example displays statistics about the NFSv3 operations for a node named node1:
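The example can be narrowed with the parameters above; for instance, to show only failed NFSv3 operations on node1 (a sketch):

cluster1::> statistics nfs show-v3 -node node1 -result failure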
Description
The statistics nfs show-v4 command displays the following statistics about the NFSv4 operations on each node in the
cluster:
This command is designed to be used to analyze performance characteristics and to help diagnose issues.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays NFSv4 statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified result (success/failure/all).
[-null <Counter with Delta>] - Null Procedure
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of null operations.
[-cmpnd <Counter with Delta>] - Compound Procedure
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of compound operations.
[-access <Counter with Delta>] - Access Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of access operations.
[-close <Counter with Delta>] - Close Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of close operations.
[-commit <Counter with Delta>] - Commit Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of commit operations.
[-create <Counter with Delta>] - Create Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of create operations.
[-delpur <Counter with Delta>] - Delegpurge Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have the
specified number of delegpurge operations.
Examples
The following example displays statistics about the NFSv4 operations for a node named node1:
Description
This command continuously displays performance data for nodes at a regular interval. The command output displays data in the
following columns:
Parameters
[-node {<nodename>|local}] - Node
Selects the node for which you want to display performance data.
Examples
The following example displays node statistics:
[...]
Description
Attention: This command is deprecated and will be removed in a future major release.
The statistics oncrpc show-rpc-calls command displays information about the Open Network Computing Remote
Procedure Call (ONC RPC) calls performed by the nodes of a cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display information only about the RPC calls performed by the node you specify.
Examples
Description
This command continuously displays performance data for FCP ports at a regular interval. The command output displays data in
the following columns:
Parameters
[-port <text>] - Port
Selects the port for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the number
is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of ports to display. The default setting is 25.
Examples
The following example displays port statistics:
[...]
Description
Deletes a performance preset configuration and all of its associated details.
Parameters
-preset <text> - Preset Name
Specifies the name of the performance preset that you want to delete.
Examples
Parameters
-preset <text> - Preset Name
Name of the performance preset to be modified.
[-new-name <text>] - New Preset Name
Set preset name to the given new name.
[-comment <text>] - Preset Description
Set comment to the given value.
[-privilege <PrivilegeLevel>] - Preset Privilege Level
Set privilege level at which this preset can be viewed or modified to the given value. Possible values: admin,
advanced, diagnostic.
Examples
Description
Displays information about performance preset configurations.
Parameters
{ [-fields <fieldname>, ...]
Selects which performance preset attributes to display.
| [-instance ]}
Shows details of all attributes of performance preset configuration.
[-preset <text>] - Preset Name
Selects the performance presets that match the specified preset name.
Examples
Description
Displays the specific details of each preset, including the objects sampled, the counter sample periods, and the counters
sampled.
Parameters
{ [-fields <fieldname>, ...]
Selects which performance preset detail attributes to display.
| [-instance ]}
Displays all of the performance preset detail attributes.
[-preset <text>] - Preset Name
Selects the performance preset details that match the specified preset name.
[-object <text>] - Performance Object
Selects the performance preset details that match the specified object name.
Examples
Description
This command continuously displays performance data for qtrees at a regular interval. The command output displays data in the
following columns:
Parameters
[-qtree <text>] - Qtree
Selects the qtree for which you want to display performance data. The qtree name must be given in the format
"vol_name/qtree_name".
[-vserver <vserver name>] - Vserver
Selects the Vserver for which you want to display performance data.
[-volume <text>] - Volume
Selects the volume for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the number
is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of qtrees to display. The default setting is 25.
Examples
The following example displays qtree statistics:
[...]
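Assuming the lost heading for this section is statistics qtree show (as in ONTAP 9), a sketch using the documented "vol_name/qtree_name" format (names are illustrative):

cluster1::> statistics qtree show -qtree vol01/qtree1 -vserver vs1 -interval 5 -iterations 1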
Description
This command deletes samples that you created using the statistics start command.
Examples
The following example deletes the sample "smpl_1":
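The invocation might look like this (a sketch; the sample ID is taken from the example text):

cluster1::> statistics samples delete -sample-id smpl_1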
Related references
statistics start on page 766
Description
This command displays information about the samples that you created using the statistics start command.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-describe ]}
Displays detailed information about each sample.
[-vserver <vserver name>] - Vserver
Selects the samples that match this parameter value. If you omit this parameter, the command displays details
for all samples.
[-sample-id <text>] - Sample Identifier
Selects the samples that match this parameter value. If you do not specify this parameter, the command will
display information about all the samples in the cluster.
Examples
The following example displays information for sample "smpl_1":
Vserver: vs1
Sample ID: smpl_1
Object: processor
Instance: -
Counter: -
Start Time: 09/13 18:06:46
Stop Time: -
Status: Ready - -
Privilege: admin
Related references
statistics start on page 766
Description
This command modifies the settings for all of the statistics commands.
Parameters
[-display-rates {true|false}] - Display Rates
Specifies whether the statistics commands display rate counters in rates/second. The default is true.
[-client-stats {enabled|disabled}] - Collect Per-Client Statistics
Specifies whether statistics commands display per-client information. The default is disabled.
Note: If you enable this setting, you might significantly impact system performance.
Examples
The following example sets the value of the -display-rates parameter to false:
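A sketch of that invocation:

cluster1::> statistics settings modify -display-rates false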
Related references
statistics on page 761
Description
This command displays the current settings for all of the statistics commands.
Examples
The following example displays the current settings for all statistics commands:
Related references
statistics on page 761
Description
This command continuously displays performance data for the cluster at a regular interval. The command output displays data in the
following columns:
Parameters
[-system <text>] - System
Selects the cluster for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
Examples
The following example displays system statistics:
[...]
Description
This command continuously displays performance data for the top NFS and CIFS clients at a regular interval. The command
output displays data in the following columns:
Parameters
[-node {<nodename>|local}] - Node
Selects the node for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
Examples
The following example displays top client statistics:
*Total Total
Client Vserver Node Ops (Bps)
------------------ --------- ------------- ------ -----
172.17.236.53:938 vserver01 cluster-node2 9 80
172.17.236.160:898 vserver02 cluster-node1 6 50
[...]
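The output above might be produced by an invocation such as the following (the command heading was not preserved; statistics top client show is assumed, as in ONTAP 9, and the sort-key value is illustrative):

cluster1::> statistics top client show -sort-key total_ops -interval 5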
Description
This command continuously displays performance data for top files at a regular interval. The command output displays data in
the following columns:
Parameters
[-node {<nodename>|local}] - Node
Selects the node for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
Examples
The following example displays top files statistics:
*Total Total
File Volume Vserver Aggregate Node Ops (Bps)
--------------------- ------- --------- --------- ------------- ------ -----
/vol/vol01/clus/cache vol01 vserver01 aggr1 cluster-node2 9 80
/vol/vol02 vol02 vserver02 aggr2 cluster-node1 6 50
[...]
Description
This command continuously displays performance data for volumes at a regular interval. The command output displays data in
the following columns:
Parameters
[-volume <text>] - Volume
Selects the volume for which you want to display performance data.
Examples
The following example displays volume statistics:
[...]
Description
This command continuously displays performance data for Vservers at a regular interval. The command output displays data in
the following columns:
Parameters
[-vserver <vserver name>] - Vserver
Selects the Vserver for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the number
is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of Vservers to display. The default setting is 25.
Examples
The following example displays Vserver statistics:
[...]
Description
This command continuously displays performance data for workloads at a regular interval. The command output displays data in
the following columns:
Parameters
[-workload <text>] - Workload
Selects the workload for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the number
is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of workloads to display. The default setting is 25.
Examples
The following example displays workload statistics:
[...]
storage-service commands
Manage Storage Services
storage-service show
Display the available storage services
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
Description
This command displays the available storage services.
Note: The available storage services are defined by the type of storage making up an aggregate.
Examples
Storage Commands
Manage physical storage, including disks, aggregates, and failover
The storage commands enable you to manage physical and logical storage, including disks and storage aggregates. They also
enable you to manage storage failover.
Description
The storage aggregate add-disks command adds disks to an existing aggregate. You must specify the number of disks or
provide a list of disks to be added. If you specify the number of disks without providing a list of disks, the system selects the
disks.
Parameters
-aggregate <aggregate name> - Aggregate
This parameter specifies the aggregate to which disks are to be added.
[-diskcount <integer>] - Disk Count
This parameter specifies the number of disks that are to be added to the aggregate.
{ [-disktype | -T {ATA | BSAS | FCAL | FSAS | LUN | MSATA | SAS | SSD | VMDISK | SSD-NVM}] - Disk Type
This parameter specifies the type of disk that is to be added. It must be specified with the -diskcount
parameter when adding disks to a Flash Pool.
Note: When this parameter is used, disk selection is not influenced by RAID options
raid.mix.hdd.disktype.capacity, raid.mix.hdd.disktype.performance, or
raid.mix.disktype.solid_state. Only disks of the specified type are considered eligible for
selection.
• capacity = Capacity-oriented, near-line disk types. Includes disk types FSAS, BSAS and ATA.
• performance = Performance-oriented, enterprise class disk types. Includes disk types FCAL and SAS.
• archive = Archive class SATA disks in multi-disk carrier storage shelves. Includes disk type MSATA.
• array = Logical storage devices backed by storage arrays and used by Data ONTAP as disks. Includes disk
type LUN.
• virtual = Virtual disks that are formatted and managed by the hypervisor. Includes disk type VMDISK.
In this example, an aggregate is converted to a Flash Pool aggregate using SSD capacity from a storage pool. The
aggregate was created using RAID-DP for the hard disks and the SSDs are added using RAID4.
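The conversion described in this example might be invoked as follows (a sketch; the aggregate and storage pool names are illustrative, and the -storage-pool and -allocation-units parameters are assumed from ONTAP 9 rather than documented in the truncated parameter list above):

cluster1::> storage aggregate add-disks -aggregate aggr1 -storage-pool sp1 -allocation-units 2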
Related references
storage aggregate modify on page 830
storage pool show-available-capacity on page 1053
storage raid-options on page 1062
Description
This command analyzes the available spare disks in the cluster and provides a recommendation for how spare disks should be used
to create aggregates according to best practices. The command prints a summary of the recommended aggregates, including their
names and usable sizes. It then prompts the user to confirm whether the aggregates should be created as recommended. On a
positive response, ONTAP creates the aggregates as described in the recommendation.
The command parameters allow you to restrict the command to certain nodes in the cluster, print more details about the
recommended aggregates, and skip the confirmation prompt.
Parameters
[-nodes {<nodename>|local}, ...] - List of Nodes
A comma-separated list of node names to which the command applies. If this parameter is not used, the
command applies to all nodes in the cluster.
[-verbose [true]] - Report More Details
Reports additional details about the recommended aggregates and spare disks. The per-node summary shows the
number and total size of the aggregates to create, the discovered spares, and the spare disks and partitions that
remain after aggregate creation. The RAID group layout shows how spare disks and partitions will be used in
the new data aggregates. The last table shows the spare disks and partitions that remain unused after aggregate
creation.
[-skip-confirmation [true]] - Skip the Confirmation and Create Recommended Aggregates
When this parameter is used, the command automatically creates the recommended aggregates. When this
parameter is not used, the command prompts you to confirm whether to proceed with aggregate creation.
Note: This command is not affected by the CLI session setting set -confirmations on/off.
RAID group layout showing how spare disks and partitions will be used
in new data aggregates to be created:
Details about spare disks and partitions remaining after aggregate creation:
Info: Aggregate auto provision has started. Use the "storage aggregate
show-auto-provision-progress" command to track the progress.
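Assuming the lost heading for this section is storage aggregate auto-provision (as in ONTAP 9), a sketch that reports details and skips the confirmation prompt:

cluster1::> storage aggregate auto-provision -verbose -skip-confirmation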
Related references
set on page 4
storage aggregate mirror on page 829
storage aggregate create on page 826
storage aggregate add-disks on page 821
storage disk assign on page 940
storage disk zerospares on page 969
storage aggregate modify on page 830
Description
The storage aggregate create command creates an aggregate. An aggregate consists of disks. You must specify the
number of disks or provide a list of disks to be added to the new aggregate. If you specify the number of disks without providing
a list of disks, the system selects the disks.
When creating an aggregate, you can optionally specify the aggregate's home node, the RAID type for RAID groups on the
aggregate, and the maximum number of disks that can be included in a RAID group.
Parameters
-aggregate <aggregate name> - Aggregate
This parameter specifies the name of the aggregate that is to be created.
[-chksumstyle <aggrChecksumStyle>] - Checksum Style
This parameter specifies the checksum style for the aggregate. The values are block for Block Checksum and
advanced_zoned for Advanced Zoned Checksum (AZCS).
-diskcount <integer> - Number Of Disks
This parameter specifies the number of disks that are to be included in the aggregate, including the parity
disks. The disks in this newly created aggregate come from the pool of spare disks. The smallest disks in this
pool are added to the aggregate first, unless you specify the -disksize parameter.
[-diskrpm | -R <integer>] - Disk RPM
This parameter specifies the RPM of the disks on which the aggregate is to be created. The possible values
include 5400, 7200, 10000, and 15000.
[-disksize <integer>] - Disk Size(GB)
This parameter specifies the size, in GB, of the disks on which the aggregate is to be created. Disks with a
usable size between 90% and 105% of the specified size are selected.
{ [-disktype | -T {ATA | BSAS | FCAL | FSAS | LUN | MSATA | SAS | SSD | VMDISK | SSD-NVM}] - Disk Type
This parameter specifies the type of disk on which the aggregate is to be created.
Note: When this parameter is used, disk selection is not influenced by RAID options
raid.mix.hdd.disktype.capacity, raid.mix.hdd.disktype.performance, or
raid.mix.disktype.solid_state. Only disks of the specified type are considered eligible for
selection.
• capacity = Capacity-oriented, near-line disk types. Includes disk types FSAS, BSAS and ATA.
• performance = Performance-oriented, enterprise class disk types. Includes disk types FCAL and SAS.
• archive = Archive class SATA disks in multi-disk carrier storage shelves. Includes disk type MSATA.
• virtual = Virtual disks that are formatted and managed by the hypervisor. Includes disk type VMDISK.
Examples
The following example creates an aggregate named aggr0 on a home node named node0. The aggregate contains 20 disks
and uses RAID-DP. The aggregate contains regular FlexVol volumes:
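That first example might be invoked as follows (a sketch; the -node and -raidtype parameters are not shown in the truncated parameter list above and are assumed from ONTAP 9):

cluster1::> storage aggregate create -aggregate aggr0 -node node0 -diskcount 20 -raidtype raid_dp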
The following example creates an aggregate named aggr0 on a home node named node0. The aggregate contains the disks
specified and uses RAID-DP:
The following example creates a mirrored aggregate named aggr0 on the local node. The aggregate contains 10 disks in
each plex:
Related references
storage raid-options on page 1062
Description
The storage aggregate delete command deletes a storage aggregate. The command fails if there are volumes present on
the aggregate. If the aggregate has an object store attached to it, then in addition to deleting the aggregate, the command also
deletes the objects in the object store. No changes are made to the object store configuration as part of this command.
Parameters
-aggregate <aggregate name> - Aggregate
This parameter specifies the aggregate that is to be deleted.
[-preserve-config-data [true]] - Delete Physical Aggregate but Preserve Configuration Data (privilege:
advanced)
Deletes the physical aggregate, but preserves the aggregate configuration data. The aggregate must not have
any disks associated with it. If the parameter -preserve-config-data is specified without a value, the
default value is true; if this parameter is not specified, the default value is false.
Examples
The following example deletes an aggregate named aggr1:
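That invocation is simply (sketch):

cluster1::> storage aggregate delete -aggregate aggr1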
Description
The storage aggregate mirror command adds a plex to an existing unmirrored aggregate. You can specify a list of disks
to be used for the mirrored plex. If you do not specify the disks, the system automatically selects the disks based on the
aggregate's existing plex.
Examples
The following example mirrors an unmirrored aggregate aggr1:
The following example mirrors an unmirrored aggregate aggr1. The specified disks are used for the new plex.
cluster1::> storage aggregate mirror -aggregate aggr1 -mirror-disklist 1.2.12, 1.2.14, 1.2.16
Description
The storage aggregate modify command can be used to modify attributes of an aggregate such as RAID type and
maximum RAID group size.
Changing the RAID type immediately changes the RAID group type for all RAID groups in the aggregate.
Changing the maximum RAID group size does not cause existing RAID groups to grow or shrink; rather, it affects the size of
RAID groups created in the future, and determines whether more disks can be added to the RAID group that was most recently
created.
• online - Immediately sets the aggregate online. All volumes on the aggregate are set to the state they were
in when the aggregate was taken offline or restricted. The preferred command to bring an aggregate online
is storage aggregate online.
• offline - Takes an aggregate offline. You cannot take an aggregate offline if any of its volumes are online.
The preferred command to take an aggregate offline is storage aggregate offline.
• restricted - Restricts the aggregate. You cannot restrict an aggregate if any of its volumes are online. The
preferred command to restrict an aggregate is storage aggregate restrict.
• high: Mirrored data aggregates with this priority value start resynchronization first.
• low: Mirrored data aggregates with this priority value start resynchronization only after all the other
aggregates have started resynchronization.
Examples
The following example changes all RAID groups on an aggregate named aggr0 to use RAID-DP:
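cluster1::> storage aggregate modify -aggregate aggr0 -raidtype raid_dp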
The following example changes all RAID groups with FSAS disks in an aggregate named aggr0 to use RAID-TEC:
cluster1::> storage aggregate modify -aggregate aggr0 -disktype FSAS -raidtype raid_tec
Related references
storage aggregate scrub on page 836
Description
The storage aggregate offline command takes an aggregate offline.
If you are taking a root aggregate offline, the node owning the aggregate must be in maintenance mode.
Parameters
-aggregate <aggregate name> - Aggregate
The name of the aggregate to be taken offline.
Examples
The following example takes an aggregate named aggr1 offline:
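cluster1::> storage aggregate offline -aggregate aggr1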
The following example takes an aggregate named aggr1 offline by unmounting its volumes:
Related references
storage aggregate online on page 834
Description
The storage aggregate online command brings an aggregate online if the aggregate is in offline or restricted state. If an
aggregate is in an inconsistent state, it must be brought to a consistent state before it can be brought online. If you have an
aggregate that is in an inconsistent state, contact technical support.
Parameters
-aggregate <aggregate name> - Aggregate
The name of the aggregate to be brought online.
Examples
The following example brings an aggregate named aggr1 online:
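cluster1::> storage aggregate online -aggregate aggr1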
Related references
storage aggregate offline on page 833
storage aggregate restrict on page 835
Description
The storage aggregate remove-stale-record command removes a stale storage aggregate record on disk. A stale
aggregate record refers to an aggregate that has been removed from the storage system, but whose information remains recorded
on disk. Stale aggregate records are displayed by the nodeshell aggr status -r command, but the storage aggregate show
command does not show the aggregate as hosted on that node.
Parameters
-aggregate <aggregate name> - Aggregate
This parameter specifies the aggregate that corresponds to the stale aggregate record that is to be deleted.
Examples
The following example removes a stale aggregate record that refers to aggregate "aggr1":
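cluster1::> storage aggregate remove-stale-record -aggregate aggr1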
Description
The storage aggregate rename command renames an aggregate.
Parameters
-aggregate <aggregate name> - Aggregate
This parameter specifies the aggregate to be renamed.
-newname <aggregate name> - New Name
This parameter specifies the new name for the aggregate.
Examples
The following example renames an aggregate named aggr5 as sales-aggr:
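cluster1::> storage aggregate rename -aggregate aggr5 -newname sales-aggr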
Description
The storage aggregate restrict command puts an aggregate in restricted state to make data in the aggregate's volumes
unavailable to clients. When an aggregate is in restricted state, data access is not allowed. However, a few operations, such as
aggregate copy, parity recomputation, scrub, and RAID reconstruction, are allowed. You can also use this command if you want
the aggregate to be the target of an aggregate copy or SnapMirror replication operation.
Parameters
-aggregate <aggregate name> - Aggregate
The name of the aggregate to be restricted.
Examples
The following example restricts an aggregate named aggr1:
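cluster1::> storage aggregate restrict -aggregate aggr1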
The following example restricts an aggregate named aggr2 by unmounting all the volumes within the aggregate:
Related references
storage aggregate show on page 837
Description
The storage aggregate scrub command scrubs an aggregate for media and parity errors. Parity scrubbing compares the
data disks to the parity disks in their RAID group and corrects the parity disks' contents, as required. If no aggregate name is
given, parity scrubbing is started on all online aggregates.
Note: By default, scrubs are scheduled to run for a specified time on a weekly basis. However, you can use this command to
run scrubs manually to check for errors and data inconsistencies.
Parameters
{ -aggregate <aggregate name> - Aggregate
This parameter specifies the aggregate to be scrubbed for errors.
[-plex <text>] - Plex
This parameter specifies the name of the plex to scrub. If this parameter is not specified, the command scrubs
the entire aggregate.
[-raidgroup <text>] - RAID Group
This parameter specifies the RAID group to be scrubbed. If this parameter is not specified, the command
scrubs the entire aggregate.
Note: This parameter is only applicable when the -plex parameter is used.
Examples
The following example starts a scrub on a RAID group named rg0 of plex named plex0 on an aggregate named aggr0:
cluster1::> storage aggregate scrub -aggregate aggr0 -raidgroup rg0 -plex plex0 -action start
The following example queries the status of that scrub:
cluster1::> storage aggregate scrub -aggregate aggr0 -raidgroup rg0 -plex plex0 -action status
The following example starts a scrub on a plex named plex1 of an aggregate named aggr1:
cluster1::> storage aggregate scrub -aggregate aggr1 -plex plex1 -action start
The following example queries the status of plex1 of an aggregate named aggr1:
cluster1::> storage aggregate scrub -aggregate aggr1 -plex plex1 -action status
The following example queries the status of all the plexes for an aggregate named aggr1:
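cluster1::> storage aggregate scrub -aggregate aggr1 -action status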
Description
The storage aggregate show command displays information about aggregates. The command output depends on the
parameter or parameters specified with the command. If no parameters are specified, the command displays the following
information about all aggregates:
• Aggregate name
• Size
• Available size
• Percentage used
• State
• Number of volumes
• RAID status
To display detailed information about a single aggregate, use the -aggregate parameter.
• Aggregate name
• Checksum status (active, off, reverting, none, unknown, initializing, reinitializing, reinitialized,
upgrading_phase1, upgrading_phase2)
| [-disk ]
If this parameter is specified, the command displays disk names for all aggregates in the cluster:
• Aggregate name
| [-raid-info ]
If this parameter is specified, the command displays information about RAID groups, RAID type, maximum
RAID size, checksum state, checksum style and whether the RAID status is inconsistent.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all aggregates in the cluster.
[-aggregate <aggregate name>] - Aggregate
If this parameter is specified, the command displays detailed information about the specified aggregate.
[-storage-type {hdd | hybrid | lun | ssd | vmdisk}] - Storage Type
If this parameter is specified, the command displays information only about the aggregates with the specified
storage type. The possible values are hdd, hybrid, lun, ssd and vmdisk.
[-chksumstyle <aggrChecksumStyle>] - Checksum Style
If this parameter is specified, the command displays information only about the aggregates that use the
specified checksum style.
[-diskcount <integer>] - Number Of Disks
If this parameter is specified, the command displays information only about the aggregates that have the
specified number of disks.
[-mirror | -m [true]] - Mirror
If this parameter is specified, the command displays information only about the aggregates that have the
specified mirrored value.
[-disklist | -d <disk path name>, ...] - Disks for First Plex
If this parameter is specified, the command displays information only about the aggregates that have the
specified disk or disks.
[-mirror-disklist <disk path name>, ...] - Disks for Mirrored Plex
If this parameter is specified, the command displays information only about the aggregates that have the
specified disk or disks present in the mirrored plex.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays information only about the aggregates that are located on
the specified node.
[-cache-raid-group-size <integer>] - Flash Pool SSD Tier Maximum RAID Group Size
If this parameter is specified, the command displays information about the maximum RAID group size for the
SSD tier for Flash Pools.
Note: This parameter is applicable only for Flash Pools.
• high(fixed): This value is reserved for Data ONTAP system aggregates, which cannot have any other value
for this field. It cannot be explicitly set on a data aggregate. These aggregates always start their
resynchronization operation at the first available opportunity.
• high: Mirrored data aggregates with this priority value start resynchronization first.
• low: Mirrored data aggregates with this priority value start resynchronization only after all the other
aggregates have started resynchronization.
Examples
The following example displays information about all aggregates that are owned by nodes in the local cluster:
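cluster1::> storage aggregate show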
The following example displays information about aggregates that are owned by nodes in cluster1:
cluster1:
Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0 6.04GB 3.13GB 48% online 2 cluster1-01 raid_dp,
mirrored,
normal
aggr1 53.24MB 12.59MB 76% online 2 cluster1-02 raid_dp,
mirrored,
normal
2 entries were displayed.
The following example displays information about aggregates that are owned by nodes in the remote cluster named
cluster2:
cluster2:
Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr2 - - - remote_cluster
- - -
aggr3 - - - remote_cluster
- - -
2 entries were displayed.
The following example displays information about aggregates that are owned by nodes in all the clusters:
cluster2:
Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr2 - - - remote_cluster
- - -
aggr3 - - - remote_cluster
- - -
cluster1:
Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0 6.04GB 3.14GB 48% online 2 cluster1-01 raid_dp,
mirrored,
normal
aggr1 53.24MB 12.59MB 76% online 2 cluster1-02 raid_dp,
mirrored,
normal
4 entries were displayed.
Description
The storage aggregate show-auto-provision-progress command displays the status of the most recent auto
provision operation. The command output displays the progress for all the aggregates included in the provisioning operation.
The command displays the following information about each aggregate:
• Aggregate
• Provisioning Progress
Examples
The following example displays the information about all aggregates that are provisioned during the aggregate auto
provision operation:
Info: Aggregate auto provision has started. Use the "storage aggregate
show-auto-provision-progress" command to track the progress.
Related references
storage aggregate auto-provision on page 824
Description
The storage aggregate show-cumulated-efficiency command displays information about the cumulated storage
efficiency of all the aggregates. The storage efficiency is displayed at four different levels:
• Total
• Aggregate
• Volume
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-details ]
Use this parameter to show additional Storage Efficiency Ratios.
| [-all-details ] (privilege: advanced)
Use this parameter to show additional Storage Efficiency Ratios and size values.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregates <aggregate name>, ...] - List of Aggregates to cumulate Storage Efficiency ratio
If this parameter is specified, the command calculates the cumulated storage efficiency of the specified list of
aggregates.
[-nodes {<nodename>|local}, ...] - List of Nodes to cumulate Storage Efficiency ratio
If this parameter is specified, the command calculates the cumulated storage efficiency of aggregates that are
located on the specified list of nodes.
[-total-logical-used {<integer>[KB|MB|GB|TB|PB]}] - Logical Size Used by volumes, clones, Snapshot
copies in the Aggregate (privilege: advanced)
Displays the total logical size used in all the specified aggregates. This includes Volumes, Clones and
Snapshots in all the specified aggregates. The logical size is computed based on physical usage and savings
obtained in all the specified aggregates.
[-total-physical-used {<integer>[KB|MB|GB|TB|PB]}] - Total Physical Used (privilege: advanced)
Displays the physical size used by all the specified aggregates.
[-total-storage-efficiency-ratio <text>] - Total Storage Efficiency Ratio
Displays the total storage efficiency ratio of the aggregate.
[-total-data-reduction-logical-used {<integer>[KB|MB|GB|TB|PB]}] - Total Data Reduction Logical
Used (privilege: advanced)
Displays the total logical size used in all the specified aggregates excluding Snapshot copies.
Examples
The following example displays information about all aggregates that are owned by nodes in the local cluster:
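cluster1::> storage aggregate show-cumulated-efficiency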
-------------Snapshot--------------------
Logical Physical Storage
Used Used Efficiency Ratio
----------- ----------- ----------------
0B 2.22MB 1.00:1
-------------FlexClone-------------------
Logical Physical Storage
Used Used Efficiency Ratio
----------- ----------- ----------------
0B 0B 1.00:1
Description
The storage aggregate show-efficiency command displays information about the storage efficiency of all the
aggregates. The storage efficiency is displayed at four different levels:
• Total
• Aggregate
• Volume
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
Examples
The following example displays information about all aggregates that are owned by nodes in the local cluster:
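cluster1::> storage aggregate show-efficiency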
Aggregate: aggr1
Node: node1
Aggregate: aggr2
Node: node1
Aggregate: aggr1
Node: node1
Aggregate: aggr2
Node: node1
• Aggregate Name
• Resyncing Percentage
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Aggregate
This parameter specifies the name of the aggregate.
[-plex <text>] - Plex Name
This parameter specifies the name of the plex.
[-status <text>] - Status
Displays plex status. Possible values are:
• normal
• failed
• empty
• invalid
• uninitialized
• failed assimilation
• limbo
• active
• inactive
• resyncing
These values may appear by themselves or in combination separated by commas; for example,
"normal,active".
[-is-online {true|false}] - Is Online
Indicates whether the plex is online.
[-in-progress {true|false}] - Resync is in Progress
Indicates whether the plex is currently resyncing.
[-resyncing-percent <percent>] - Resyncing Percentage
Displays the resynchronization completion percentage if the plex is currently being resynced, '-' otherwise.
[-resync-level <integer>] - Resync Level
Displays the resync level if the plex is currently being resynced, '-' otherwise.
Examples
The following example displays resynchronization status for all the aggregates:
Related references
storage aggregate plex show on page 889
Description
The storage aggregate show-scrub-status command displays the following information about the scrub status of
aggregates:
• Aggregate name
• RAID groups
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Aggregate
If this parameter is specified, the command displays detailed scrub-status information about the specified
aggregate.
[-raidgroup <text>] - RAID Group
If this parameter is specified, the command displays information only about the aggregate that contains the
specified RAID group.
Examples
The following example displays scrub-status information for all the aggregates:
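cluster1::> storage aggregate show-scrub-status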
The following example displays detailed information about the aggregate named aggr1:
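cluster1::> storage aggregate show-scrub-status -aggregate aggr1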
Related references
storage aggregate scrub on page 836
Description
The storage aggregate show-space command displays information about space utilization within aggregates and any
attached external capacity tier. The command output breaks down space usage in the specified aggregate by feature. If no
parameters are specified, the command displays this information about all aggregates. Note that the used percentage for an
external capacity tier is non-zero only if a size limit was set for that aggregate's attached tier.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
Examples
The following example displays information about all aggregates:
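cluster1::> storage aggregate show-space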
Aggregate : aggr0
Aggregate : aggr1
The following example displays information about all the aggregates in a system including the ones that have an object
store attached to them.
Aggregate : aggr0
Aggregate : aggr1
Performance Tier
Feature Used Used%
-------------------------------- ---------- ------
Volume Footprints 1.25GB 13%
Aggregate Metadata 540KB 0%
Snapshot Reserve 0B 0%
Total Used 1.25GB 13%
Aggregate : aggr1
Object Store: my-store
Feature Used Used%
-------------------------------- ---------- ------
Referenced Capacity 811.2MB 0%
Metadata 0B 0%
Unreclaimed Space 0B 0%
Space Saved by Storage Efficiency 0B 0%
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-partition-info ] (privilege: advanced)
Displays the following information about root-data and root-data1-data2 partitioned spares.
• Disk
• Type
• Class
• RPM
• Checksum
• Physical Size
• Status
| [-instance ]}
If this parameter is specified, the command displays detailed information about each spare disk.
[-original-owner <text>] - Original Owner
Selects the spare disks that match this parameter value.
[-disk <disk path name>] - Disk Name
Selects the spare disks that match this parameter value.
[-checksum-style {advanced_zoned | block | none}] - Checksum Style
Selects the spare disks that match this parameter value. Possible values are:
[-disk-type {ATA | BSAS | FCAL | FSAS | LUN | MSATA | SAS | SSD | VMDISK | SSD-NVM}] - Disk
Type
Selects the spare disks that match this parameter value.
[-effective-disk-type {ATA | BSAS | FCAL | FSAS | LUN | MSATA | SAS | SSD | VMDISK | SSD-
NVM}] - Effective Disk Type
Selects the spare disks that match this parameter value.
• capacity -- Capacity-oriented, near-line disk types. Includes disk types FSAS, BSAS and ATA.
• performance -- Performance-oriented, enterprise class disk types. Includes disk types FCAL and SAS.
• archive -- Archive class SATA disks in multi-disk carrier storage shelves. Includes disk type MSATA.
• array -- Logical storage devices backed by storage arrays and used by Data ONTAP as disks. Includes disk
type LUN.
• virtual -- Virtual disks that are formatted and managed by the hypervisor. Includes disk type VMDISK.
Disks with the same disk-class value are compatible for use in the same aggregate.
[-disk-rpm <integer>] - Disk RPM
Selects the spare disks that match this parameter value.
[-effective-disk-rpm <integer>] - Effective Disk RPM
Selects the spare disks that match this parameter value.
Hard disk drives with the same effective-disk-rpm value may be mixed together in the same aggregate
depending upon the system's raid.mix.hdd.rpm.capacity and raid.mix.hdd.rpm.performance
option settings.
[-syncmirror-pool <text>] - Pool Number
Selects the spare disks that match this parameter value.
[-owner-name {<nodename>|local}] - Current Owner
Selects the spare disks that match this parameter value.
[-home-owner-name {<nodename>|local}] - Home Owner
Selects the spare disks that match this parameter value.
[-dr-owner-name {<nodename>|local}] - DR Home Owner
Selects the spare disks that match this parameter value.
[-usable-size-blks <integer>] - Disk Usable Size in 4K blocks
Selects the spare disks that match this parameter value.
[-local-usable-data-size-blks <integer>] - Local Node Data Usable Size in 4K blocks
Selects the spare disks that match this parameter value.
Disks that have two partitions can be used for one root aggregate and one data aggregate.
Disks that have three partitions can be used for one root aggregate and one or two data aggregates.
This value describes the data partition size (of root-data partitioned disk) or the combined data1 + data2
partition size (of root-data1-data2 partitioned disk) in 4KB blocks.
Examples
Display spare disks owned by node node-b.
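cluster1::> storage aggregate show-spare-disks -owner-name node-b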
Usable Physical
Disk Type Class RPM Checksum Size Size Status
---------------- ----- ----------- ------ -------------- -------- -------- --------
1.1.13 BSAS capacity 7200 block 827.7GB 828.0GB zeroed
1.1.15 BSAS capacity 7200 block 413.2GB 414.0GB zeroed
Usable Physical
Disk Type Class RPM Checksum Size Size Status
---------------- ----- ----------- ------ -------------- -------- -------- --------
1.1.13 BSAS capacity 7200 block 827.7GB 828.0GB zeroing, 17% done
1.1.15 BSAS capacity 7200 block 413.2GB 414.0GB zeroing, 28% done
2 entries were displayed.
Related references
storage disk zerospares on page 969
storage raid-options on page 1062
Description
The storage aggregate show-status command displays the RAID layout and disk configuration of aggregates. The
command output depends on the parameter or parameters specified with the command. If no parameters are specified, the
command displays information about all aggregates in the cluster.
Note: This command does not use pagination. You can reduce the output by filtering with the parameters below.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
This parameter currently has no effect.
[-aggregate <text>] - Aggregate Name
Selects the aggregates that match this parameter value.
Examples
Display the RAID layout of a Flash Pool aggregate.
Description
The storage aggregate verify command verifies the two plexes of an aggregate. It compares the data in the two plexes to
ensure that the plexes are identical. It can be used whenever the administrator needs to ensure that the two plexes are completely
synchronized with each other. To view any discrepancies, use the following command:
Parameters
-aggregate <aggregate name> - Aggregate
This parameter specifies the aggregate to be verified. If no aggregate is specified then the action specified by
the parameter -action will be taken on all the aggregates.
-action {start|stop|resume|suspend|status} - Action
This parameter specifies the action to be taken. The possible actions are:
Examples
The following example queries the status of a verify on an aggregate named aggr1.
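cluster1::> storage aggregate verify -aggregate aggr1 -action status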
Description
The storage aggregate efficiency show command displays information about the storage efficiency of all the
aggregates. If no parameters are specified, the command displays the following information for all aggregates:
• Aggregate
• Node
Examples
The following example displays information about all aggregates that are owned by nodes in the local cluster:
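cluster1::> storage aggregate efficiency show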
Aggregate: aggr0
Node: vivek6-vsim2
Aggregate: aggr1
Node: vivek6-vsim2
Description
The storage aggregate cross-volume-dedupe revert-to command is used to revert cross volume deduplication
savings on an aggregate.
Examples
The following example displays information for reverting cross volume background deduplication on aggregate "aggr1":
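cluster1::> storage aggregate cross-volume-dedupe revert-to -aggregate aggr1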
Description
The storage aggregate efficiency cross-volume-dedupe show command displays detailed information about the
storage efficiency of all the aggregates. If no parameters are specified, the command displays the following information
for all aggregates:
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Aggregate
Displays the aggregate name. If this parameter is specified, the command displays detailed information about
the storage efficiency of the specified aggregate.
[-node {<nodename>|local}] - Node
Displays the node which owns the aggregate. If this parameter is specified, the command displays storage
efficiency information only about the aggregates that are located on the specified node.
[-background-progress <text>] - Progress
Displays the information for the aggregates that match the specified progress.
[-background-op-status <text>] - Operation Status
Displays the information for the aggregates that match the specified operation status.
[-background-last-op-state <text>] - Last Operation State
Displays the information for the aggregates that match the specified last operation state.
Examples
The following example displays information about all aggregates that are owned by nodes in the local cluster:
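cluster1::> storage aggregate efficiency cross-volume-dedupe show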
Aggregate: aggr0
Node: vivek6-vsim2
Description
The storage aggregate cross-volume-dedupe start command is used to start cross volume background deduplication
on an aggregate.
Parameters
-aggregate <aggregate name> - Aggregate
This specifies the aggregate on which cross volume background deduplication should be started. If no
aggregate is specified, the operation starts on all aggregates.
[-scan-old-data | -s [true]] - Scan Old Data
This option processes all the existing data on all volumes on the aggregate. It prompts for user confirmation
before proceeding. Default value is false.
Examples
The following example displays information for starting cross volume background deduplication on aggregate "aggr1":
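cluster1::> storage aggregate cross-volume-dedupe start -aggregate aggr1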
Parameters
-aggregate <aggregate name> - Aggregate
This specifies the aggregate on which cross volume background deduplication should be stopped. If no
aggregate is specified, the operation stops on all aggregates.
Examples
The following example displays information for stopping cross volume background deduplication on aggregate "aggr1":
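cluster1::> storage aggregate cross-volume-dedupe stop -aggregate aggr1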
Description
The storage aggregate encryption show-key-id command displays the key IDs of all NAE (NetApp Aggregate
Encryption) aggregates.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <text>] - Aggregate
If this parameter is specified, the command displays information only about the specific NAE (NetApp
Aggregate Encryption) aggregate.
[-aggrID <UUID>] - Aggregate UUID
If this parameter is specified, the command displays the key ID of the specified NAE (NetApp Aggregate
Encryption) aggregate ID.
[-keyid-index-zero <text>, ...] - 0th Index Keyid
If this parameter is specified, the command displays the 0th index key ID of NAE (NetApp Aggregate
Encryption) aggregates.
Description
The storage aggregate inode-upgrade resume command resumes a suspended inode upgrade process. The inode
upgrade process might have been suspended earlier due to performance reasons.
Parameters
-node {<nodename>|local} - Node Name
If this parameter is specified, the command resumes the upgrade process of an aggregate that is located on the
specified node.
-aggregate <aggregate name> - Aggregate Name
This specifies the aggregate for which the inode upgrade process is to be resumed.
Examples
The following example resumes an aggregate upgrade process:
Description
The storage aggregate inode-upgrade show command displays information about aggregates undergoing the inode
upgrade process. The command output depends on the parameter or parameters specified with the command. If no parameters
are specified, the command displays the default fields about all aggregates undergoing the inode upgrade process. The default
fields are:
• aggregate
• status
• scan-percent
• remaining-time
• space-needed
• scanner-progress
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example displays information about all aggregates undergoing the inode upgrade process:
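cluster1::> storage aggregate inode-upgrade show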
Description
The storage aggregate object-store attach command attaches an object store to an aggregate to create a FabricPool.
This command requires two parameters to create a FabricPool - an aggregate and a configuration to attach an object-store to the
aggregate. This command verifies whether the object store is accessible through the intercluster LIF both from the node on
which the aggregate is present as well as its High Availability (HA) partner node. The command fails if the object store is not
accessible. Once an object store is attached to an aggregate, it cannot be detached.
Examples
The following example attaches an object store to aggregate aggr1:
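cluster1::> storage aggregate object-store attach -aggregate aggr1 -object-store-name my-store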
Description
The storage aggregate object-store modify command is used to update one or more object store parameters.
Parameters
-aggregate <text> - Aggregate Name
This parameter identifies the aggregate to which the object store to be modified is attached.
-object-store-name <text> - ONTAP Name for this Object Store Config
This parameter identifies the configuration name of the object store to be modified.
[-unreclaimed-space-threshold <percent>] - Threshold for Reclaiming Unreferenced Space
This optional parameter specifies the usage threshold below which Data ONTAP reclaims unused space from
objects in the object store. When Data ONTAP writes data to the object store, it packages multiple file system
blocks into one object. Over time, blocks stored in an object can be freed, leaving part of the object unused.
When the percentage of used blocks in an object falls below this threshold, a background task moves the
blocks which are still used to a new object. Afterwards, Data ONTAP frees the original object to reclaim the
unused space. Valid values are between 0% and 99%. The default value depends on the object store's provider
type. It is 20% for AWS_S3, 15% for Azure_Cloud, 40% for SGWS, 14% for IBM_COS, 20% for AliCloud,
and 20% for GoogleCloud. Consult the FabricPool best practices guidelines for more information.
[-tiering-fullness-threshold <percent>] - Aggregate Fullness Threshold Required for Tiering
This optional parameter specifies the percentage of space in the performance tier which must be used before
data is tiered out to the capacity tier.
Examples
The following example modifies the unreclaimed space threshold of an object store attached to an aggregate named aggr1:
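cluster1::> storage aggregate object-store modify -aggregate aggr1 -object-store-name my-store -unreclaimed-space-threshold 25%
(The configuration name my-store and the threshold value are illustrative.)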
Description
The storage aggregate object-store show command displays information about all the object stores in the system.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <text>] - Aggregate Name
If this parameter is specified, the command displays information only about the object stores that are attached
to the specified aggregates.
[-object-store-name <text>] - ONTAP Name for this Object Store Config
If this parameter is specified, the command displays information only about object stores whose configuration
name matches the specified names.
[-object-store-availability <object Store Availability>] - Availability of the Object Store
If this parameter is specified, the command displays information only about object stores whose availability
status matches the specified value. Supported values for this parameter are available and unavailable.
[-provider-type <providerType>] - Type of the Object Store Provider
If this parameter is specified, the command displays information only about object store configurations whose
provider type matches the specified value.
[-license-used-percent <percent_no_limit>] - License Space Used Percent
If this parameter is specified, the command displays information only about object stores whose space used by
the aggregate as a percentage of the license limit matches the specified value.
[-unreclaimed-space-threshold <percent>] - Threshold for Reclaiming Unreferenced Space (privilege:
advanced)
If this parameter is specified, the command displays information only about object stores whose threshold for
reclaiming unused space from objects in the object store matches the specified value.
[-tiering-fullness-threshold <percent>] - Aggregate Fullness Threshold Required for Tiering (privilege:
advanced)
If this parameter is specified, the command displays information only about object stores whose performance
tier fullness threshold for tiering matches the specified value.
Examples
The following example displays all information about all object stores:
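cluster1::> storage aggregate object-store show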
Description
The storage aggregate object-store show-freeing-status command displays status information about the
background work that frees an aggregate's objects from an object store after a storage aggregate delete.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-bin-uuid <UUID>] - UUID of the Bin
If this parameter is specified, the command displays information only about the aggregate attached to the
specified bin UUID.
[-config-id <integer>] - Object Store Config ID
If this parameter is specified, the command displays information only about the aggregate attached to the object store with the specified config ID.
[-object-store-name <text>] - Object Store Configuration Name
If this parameter is specified, the command displays information only about object stores whose configuration
name matches the specified names.
[-aggregate-name <aggregate name>] - Aggregate
If this parameter is specified, the command displays information only about the specified aggregates that were
deleted.
[-request-state {queued|running|cleaning-up|finishing}] - Request State
If this parameter is specified, the command displays information only about the object stores that have the
specified object freeing request state.
[-num-objects-freed <integer>] - Num Objects Freed
If this parameter is specified, the command displays information only about the object stores that have the
specified number of objects that have been freed.
[-last-error <text>] - The Last Error Encountered
If this parameter is specified, the command displays information only about the object stores that have the
specified last error encountered.
Related references
storage aggregate delete on page 829
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <text>] - Aggregate Name
If this parameter is specified, the command displays space information only about object stores that are
attached to the specified aggregates.
[-object-store-name <text>] - ONTAP Name for this Object Store Config
If this parameter is specified, the command displays space information only about object stores whose
configuration name matches the specified names.
[-object-store-availability <object Store Availability>] - Availability of the Object Store
If this parameter is specified, the command displays space information only about object stores whose availability status matches the specified value. Supported values for this parameter are available and unavailable.
[-provider-type <providerType>] - Type of the Object Store Provider
If this parameter is specified, the command displays information only about object store configurations whose
provider type matches the specified value.
[-license-used-percent <percent_no_limit>] - License Space Used Percent
If this parameter is specified, the command displays space information only about object stores whose space
used by the associated aggregate as a percentage of the license limit matches the specified value. If the object
store does not require a license, then this field is not set.
[-unreclaimed-space-threshold <percent>] - Threshold for Reclaiming Unreferenced Space (privilege:
advanced)
If this parameter is specified, the command displays information only about object stores whose threshold for
reclaiming unused space from objects in the object store matches the specified value.
[-tiering-fullness-threshold <percent>] - Aggregate Fullness Threshold Required for Tiering (privilege:
advanced)
If this parameter is specified, the command displays information only about object stores whose performance
tier fullness threshold for tiering matches the specified value.
Examples
The following example displays space information about all object stores:
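cluster1::> storage aggregate object-store show-space
(The show-space subcommand name is assumed here.)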
Description
The storage aggregate object-store config create command is used by a cluster administrator to tell Data ONTAP
how to connect to an object store. The following prerequisites must be met before creating an object store configuration in Data
ONTAP.
• A valid data bucket or container must be created with the object store provider. This assumes that the user has valid account
credentials with the object store provider to access the data bucket.
• The Data ONTAP node must be able to connect to the object store.
Once created, an object store configuration must not be reassociated with a different object store or container. See the storage
aggregate object-store config modify command for more information. If neither the access-key nor the secret-
password is provided while setting up a configuration for an AWS_S3 object store in Cloud Volumes ONTAP, then the access
key (access key ID), the secret password (secret access key), and the session token are retrieved from EC2 instance metadata
for the AWS Identity and Access Management (IAM) role associated with the EC2 instance. If Data ONTAP is unable to create
an object store configuration, the command fails explaining the reason for failure.
Parameters
-object-store-name <text> - Object Store Configuration Name
This parameter specifies the name that will be used to identify the object store configuration. The name can
contain the following characters: "_", "-", A-Z, a-z, and 0-9. The first character must be one of the following:
"_", A-Z, or a-z.
-provider-type <providerType> - Type of the Object Store Provider
This parameter specifies the type of object store provider that will be attached to the aggregate. Valid options
are: AWS_S3 (Amazon S3 storage), Azure_Cloud (Microsoft Azure Cloud), SGWS (StorageGrid WebScale),
IBM_COS (IBM Cloud Object Storage), AliCloud (Alibaba Cloud Object Storage Service), and GoogleCloud
(Google Cloud Storage).
[-auth-type <object_store_auth_type>] - Authentication Used to Access the Object Store
This parameter specifies where the system obtains credentials for authentication to an object store. The
available choices depend on the platform (Cloud Volumes ONTAP or not) and provider-type (AWS_S3 or not).
The keys value is always applicable; if selected, it means that the access-key and secret-password are
provided by the system administrator. In Cloud Volumes ONTAP, the EC2-IAM value is also applicable. It
means that the IAM role is associated with the EC2 instance, and that the access-key, secret-password,
and session token are retrieved from EC2 instance metadata for this IAM role. Note that -use-iam-role
and -auth-type are mutually exclusive: -auth-type EC2-IAM is equivalent to -use-iam-role true,
and -auth-type key is equivalent to -use-iam-role false. For the AWS_S3 provider, the CAP (C2S
Authentication Portal) value is also applicable. This should only be used when accessing C2S (Commercial
Cloud Services). If the CAP value is specified, then -cap-url must also be specified. See -cap-url.
Examples
The following example creates an object store configuration in Data ONTAP:
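cluster1::> storage aggregate object-store config create -object-store-name my-store -provider-type AWS_S3 -server s3.amazonaws.com -container-name my-bucket -access-key <access-key>
(The configuration name, bucket name, and access key shown here are placeholders; supply the values for your own object store.)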
Related references
storage aggregate object-store config modify on page 881
Description
The storage aggregate object-store config delete command removes an existing object store configuration in
Data ONTAP. The configuration cannot be deleted if it is used by any aggregates or if the system is still freeing objects from the
object store from a previously executed storage aggregate delete command. The command storage aggregate
object-store show can be used to view the aggregates attached to the object store before issuing the delete command.
Note: The storage aggregate object-store show command will not display aggregates that have been previously
deleted but still have objects in the object store.
Parameters
-object-store-name <text> - Object Store Configuration Name
This parameter specifies the object store configuration to be deleted.
Examples
The following example deletes an object store configuration named my-store:
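cluster1::> storage aggregate object-store config delete -object-store-name my-store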
Related references
storage aggregate delete on page 829
storage aggregate object-store show on page 876
Description
The storage aggregate object-store config modify command is used to update one or more object store
configuration parameters. This command must not be used to reassociate an existing valid object store configuration with a
different object store or container.
Parameters
-object-store-name <text> - Object Store Configuration Name
This parameter identifies the configuration to be modified.
[-new-object-store-name <text>] - Object Store Configuration New Name
This optional parameter specifies the new name for the object store configuration.
[-auth-type <object_store_auth_type>] - Authentication Used to Access the Object Store
This optional parameter specifies where the system obtains credentials for authentication to an object store.
The available choices depend on the platform (Cloud Volumes ONTAP or not) and provider-type (AWS_S3 or
not). The keys value is always applicable; if selected, it means that the access-key and secret-
password are provided by the system administrator. In Cloud Volumes ONTAP, the EC2-IAM value is also
applicable. It means that the IAM role is associated with the EC2 instance, and that the access-key,
secret-password, and session token are retrieved from EC2 instance metadata for this IAM role. Note
that -use-iam-role and -auth-type are mutually exclusive: -auth-type EC2-IAM is equivalent to -
use-iam-role true, and -auth-type key is equivalent to -use-iam-role false. For the AWS_S3
provider, the CAP (C2S Authentication Portal) value is also applicable. This should only be used when
accessing C2S (Commercial Cloud Services). If the CAP value is specified, then -cap-url must also be
specified. See -cap-url.
[-cap-url <text>] - URL to Request Temporary Credentials for C2S Account
This parameter is available only when -auth-type is CAP. It specifies a full URL of the request to a CAP
server for retrieving temporary credentials (access-key, secret-password, and session token) for accessing the
object store server. The CAP URL may look like: https://123.45.67.89:1234/CAP/api/v1/
credentials?agency=myagency&mission=mymission&role=myrole
[-server <Remote InetAddress>] - Fully Qualified Domain Name of the Object Store Server
This optional parameter specifies the new Fully Qualified Domain Name (FQDN) of the same object store
server. For Amazon S3, server name must be an AWS regional endpoint in the format s3.amazonaws.com or
s3-<region>.amazonaws.com, for example, s3-us-west-2.amazonaws.com. The region of the server and the
bucket must match. For more information on AWS regions, refer to 'Amazon documentation on AWS regions
and endpoints'. For Azure, if -server is "blob.core.windows.net" or "blob.core.usgovcloudapi.net",
then the value of -azure-account followed by a period is prepended to the server name.
[-is-ssl-enabled {true|false}] - Is SSL/TLS Enabled
This optional parameter indicates whether a secured SSL/TLS connection will be used during data access to
the object store.
[-port <integer>] - Port Number of the Object Store
This optional parameter specifies a new port number to connect to the object store server indicated in the
-server parameter.
[-access-key <text>] - Access Key ID for S3 Compatible Provider Types
This optional parameter specifies a new access key (access key ID) for the AWS S3, SGWS and IBM COS
object stores.
[-secret-password <text>] - Secret Access Key for S3 Compatible Provider Types
This optional parameter specifies a new password (secret access key) for the AWS S3, SGWS, and IBM COS
object stores. For an Azure object store, see -azure-private-key. If -access-key is specified but
-secret-password is not, you are prompted to enter the secret password without echoing the input.
Examples
The following example modifies two parameters (-port and -is-ssl-enabled) of an object store configuration named
my-store:
cluster1::> storage aggregate object-store config modify -object-store-name my-store -port 1235 -is-ssl-enabled true
Description
The storage aggregate object-store config rename command is used to rename an existing object store
configuration.
Parameters
-object-store-name <text> - Object Store Configuration Name
This parameter identifies an existing object store configuration.
-new-object-store-name <text> - Object Store Configuration New Name
This parameter specifies the new object store configuration name.
Examples
The following example renames an object store configuration from my-store to ms1:
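cluster1::> storage aggregate object-store config rename -object-store-name my-store -new-object-store-name ms1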
Description
The storage aggregate object-store config show command displays information about all existing object store
configurations in the cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-object-store-name <text>] - Object Store Configuration Name
If this parameter is specified, the command displays information only about object store configurations whose
name matches the specified names.
[-object-store-uuid <UUID>] - UUID of the Object Store Configuration
If this parameter is specified, the command displays information only about object store configurations whose
UUID matches the specified UUID values.
[-provider-type <providerType>] - Type of the Object Store Provider
If this parameter is specified, the command displays information only about object store configurations whose
provider type matches the specified value.
[-auth-type <object_store_auth_type>] - Authentication Used to Access the Object Store
If this parameter is specified, the command displays information only about object store configurations whose
authentication type matches the specified value.
[-cap-url <text>] - URL to Request Temporary Credentials for C2S Account
If this parameter is specified, the command displays information only about object store configurations whose
CAP URL matches the specified value.
[-server <Remote InetAddress>] - Fully Qualified Domain Name of the Object Store Server
If this parameter is specified, the command displays information only about object store configurations whose
server name matches the specified value. The server name is specified as a Fully Qualified Domain Name
(FQDN).
[-is-ssl-enabled {true|false}] - Is SSL/TLS Enabled
If this parameter is specified, the command displays information only about object store configurations whose
status about the use of secured communication over the network matches the specified value.
[-port <integer>] - Port Number of the Object Store
If this parameter is specified, the command displays information only about object store configurations whose
port numbers matches the specified value.
[-container-name <text>] - Data Bucket/Container Name
If this parameter is specified, the command displays information only about object store configurations whose
container name matches the specified value. Data ONTAP uses this container name or object store data bucket
while accessing data from the object store.
Examples
The following example displays all available object store configuration in the cluster:
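cluster1::> storage aggregate object-store config show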
Description
The storage aggregate object-store profiler abort command aborts an ongoing object store profiler run. This
command requires two parameters: an object store configuration and the node on which the profiler is currently running.
Parameters
-node {<nodename>|local} - Node on Which the Profiler Should Run
This parameter specifies the node on which the object store profiler is running.
-object-store-name <text> - Object Store Configuration Name
This parameter specifies the object store configuration that describes the object store. The object store
configuration has information about the object store server name, port, access credentials, and provider type.
Description
The storage aggregate object-store profiler show command is used to monitor progress and results of the
storage aggregate object-store profiler start command.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node Name
This parameter specifies the node on which the profiler was started.
[-object-store-name <text>] - ONTAP Name for this Object Store Configuration
This parameter specifies the object store configuration that describes the object store. The object store
configuration has information about the object store server name, port, access credentials, and provider type.
[-profiler-status <text>] - Profiler Status
Current status of the profiler.
[-start-time <MM/DD/YYYY HH:MM:SS>] - Profiler Start Time
Time at which profiler run started.
[-op-name <text>] - Operation Name - PUT/GET
Name of the operation. Possible values are PUT or GET.
[-op-size {<integer>[KB|MB|GB|TB|PB]}] - Size of Operation
Size of the PUT or GET operation.
[-op-count <integer>] - Number of Operations Performed
Number of operations issued to the object store.
[-op-failed <integer>] - Number of Operations Failed
Number of operations that failed.
[-op-latency-minimum <integer>] - Minimum Latency for Operation in Milliseconds
Minimum latency of the operation in milliseconds, as measured from the filesystem layer.
[-op-latency-maximum <integer>] - Maximum Latency for Operation in Milliseconds
Maximum latency of the operation in milliseconds, as measured from the filesystem layer.
[-op-latency-average <integer>] - Average Latency for Operation in Milliseconds
Average latency of the operation in milliseconds, as measured from the filesystem layer.
Examples
The following example displays the results of storage aggregate object-store profiler start:
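cluster1::> storage aggregate object-store profiler show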
Related references
storage aggregate object-store profiler start on page 887
Description
The storage aggregate object-store profiler start command writes objects to an object store and reads those
objects to measure the latency and throughput of an object store. This command requires two parameters: an object store
configuration and a node from which to send the PUT/GET/DELETE operations. This command verifies whether the object store
is accessible through the intercluster LIF of the node on which it runs. The command fails if the object store is not accessible.
The command will create a 10GB dataset by doing 2500 PUTs for a maximum time period of 60 seconds. Then it will issue
GET operations of different sizes - 4KB, 8KB, 32KB, 256KB for a maximum time period of 180 seconds. Finally it will delete
the objects it created. This command can result in additional charges to your object store account. This is a CPU-intensive
command. It is recommended to run this command when the system is under 50% CPU utilization.
Parameters
-node {<nodename>|local} - Node on Which the Profiler Should Run
This parameter specifies the node from which PUT/GET/DELETE operations are sent.
-object-store-name <text> - Object Store Configuration Name
This parameter specifies the object store configuration that describes the object store. The object store
configuration has information about the object store server name, port, access credentials, and provider type.
Examples
The following example starts the object store profiler:
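cluster1::> storage aggregate object-store profiler start -object-store-name my-store -node node1
(The configuration name my-store and node name node1 are illustrative.)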
Description
The storage aggregate plex delete command deletes the specified plex. The aggregate specified with the -
aggregate parameter will be unmirrored and contain the remaining plex. The disks in the deleted plex become spare disks.
Parameters
-aggregate <aggregate name> - Aggregate
Name of an existing aggregate which contains the plex specified with the -plex parameter.
-plex <text> - Plex
Name of a plex which belongs to the aggregate specified with the -aggregate parameter.
Examples
The following example deletes plex0 of aggregate aggr1:
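cluster1::> storage aggregate plex delete -aggregate aggr1 -plex plex0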
Description
The storage aggregate plex offline command takes the specified plex offline. The aggregate specified with the -
aggregate parameter must be a mirrored aggregate and both plexes must be online. Prior to taking a plex offline, the system
will flush all internally-buffered data associated with the plex and create a snapshot that is written out to both plexes. The
snapshot allows for efficient resynchronization when the plex is subsequently brought back online.
Parameters
-aggregate <aggregate name> - Aggregate
Name of an existing aggregate which contains the plex specified with the -plex parameter.
-plex <text> - Plex
Name of a plex which belongs to the aggregate specified with the -aggregate parameter.
Examples
The following example takes plex0 of aggregate aggr1 offline:
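cluster1::> storage aggregate plex offline -aggregate aggr1 -plex plex0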
Parameters
-aggregate <aggregate name> - Aggregate
Name of an existing aggregate which contains the plex specified with the -plex parameter.
-plex <text> - Plex
Name of a plex which belongs to the aggregate specified with the -aggregate parameter.
Examples
The following example brings plex0 of aggregate aggr1 online:
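cluster1::> storage aggregate plex online -aggregate aggr1 -plex plex0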
Description
The storage aggregate plex show command displays information for the specified plex. By default, the command
displays the following information about all plexes:
• Aggregate Name
• Plex Name
• Is Online
• Is Resyncing
• Resyncing Percentage
• Plex Status
To display detailed information about a single plex, use the -aggregate and -plex parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Aggregate
Name of an existing aggregate which contains the plex specified with the -plex parameter.
[-plex <text>] - Plex Name
Name of a plex which belongs to the aggregate specified with the -aggregate parameter.
[-status <text>] - Status
Displays plex status. Possible values are:
• failed
• empty
• invalid
• uninitialized
• failed assimilation
• limbo
• active
• inactive
• resyncing
These values may appear by themselves or in combination separated by commas, for example,
"normal,active".
[-is-online {true|false}] - Is Online
Selects the plexes that match this parameter value.
[-in-progress {true|false}] - Resync is in Progress
Selects the plexes that match this parameter value.
[-resyncing-percent <percent>] - Resyncing Percentage
Selects the plexes that match this parameter value.
[-resync-level <integer>] - Resync Level
Selects the plexes that match this parameter value.
[-pool <integer>] - Pool
Selects the plexes that match this parameter value.
Examples
The following example displays information about all the plexes for all the aggregates:
Aggregate: aggr1
Plex Name: plex1
Status: normal,active
Is Online: true
Description
Temporarily stops any reallocation jobs that are in progress. When you use this command, the persistent state is saved. You can
use the storage aggregate reallocation restart command to restart a job that is quiesced.
There is no limit to how long a job can remain in the quiesced (paused) state.
Parameters
-aggregate <aggregate name> - Aggregate Name
Specifies the aggregate on which you want to temporarily pause the job.
Examples
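The following example pauses the reallocation job on an aggregate named aggr0 (the aggregate name is illustrative, and the quiesce subcommand name is assumed):
cluster1::> storage aggregate reallocation quiesce -aggregate aggr0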
Related references
storage aggregate reallocation restart on page 891
Description
Starts a reallocation job. Use this command to restart a quiesced (temporarily stopped) job or a scheduled scan that is idle for the
aggregate.
Parameters
-aggregate <aggregate name> - Aggregate Name
Specifies the aggregate on which you want to restart reallocation scans.
[-ignore-checkpoint | -i [true]] - Ignore Checkpoint
Restarts the job at the beginning when set to true. If you use this command without specifying this parameter,
its effective value is false and the job starts the scan at the point where it was stopped. If you specify this
parameter without a value, it is set to true and the scan restarts at the beginning.
Description
Schedules a reallocation scan for an existing reallocation job. If the reallocation job does not exist, use the storage
aggregate reallocation start command to define a reallocation job.
You can delete an existing reallocation scan schedule. However, if you do this, the job's scan interval reverts to the schedule that
was defined for it when the job was created with the storage aggregate reallocation start command.
Parameters
-aggregate <aggregate name> - Aggregate Name
Specifies the aggregate on which you want to schedule reallocation jobs.
[-del | -d [true]] - Delete
Deletes an existing reallocation schedule when set to true. If you use this command without specifying this
parameter, its effective value is false and the reallocation schedule is not deleted. If you specify this parameter
without a value, it is set to true and the reallocation schedule is deleted.
[-cron | -s <text>] - Cron Schedule
Specifies the schedule with the following four fields in sequence. Use a space between field values. Enclose
the values in double quotes.
• minute is a value from 0 to 59.
Use an asterisk "*" as a wildcard to indicate every value for that field. For example, an * in the day of month
field means every day of the month. You cannot use the wildcard in the minute field.
You can enter a number, a range, or a comma-separated list of values for a field.
Examples
cluster1::> storage aggregate reallocation schedule -aggregate aggr0 -cron "0 23 6 *"
Related references
storage aggregate reallocation start on page 894
Description
Displays the status of a reallocation scan, including the state, schedule, aggregate, and scan ID. If you do not specify the ID of a
particular reallocation scan, the command displays information about all existing reallocation scans.
Parameters
{ [-fields <fieldname>, ...]
Displays the values of the fields that you specify for the existing reallocation scans.
| [-v ]
Specify this parameter to display the output in a verbose format.
| [-instance ]}
Displays information about reallocation scans on aggregates in a list format.
[-id <integer>] - Job ID
Specify this parameter to display the reallocation scan that matches the reallocation job ID that you specify.
[-aggregate <aggregate name>] - Aggregate Name
Specify this parameter to display the reallocation scan that matches the aggregate that you specify.
[-description <text>] - Job Description
Specify this parameter to display reallocation scans that match the text description that you specify.
[-state {Initial|Queued|Running|Waiting|Pausing|Paused|Quitting|Success|Failure|Reschedule|
Error|Quit|Dead|Unknown|Restart|Dormant}] - Job State
Specify this parameter to display reallocation jobs that match the state that you specify.
[-progress <text>] - Execution Progress
Specify this parameter to list the running reallocation jobs whose progress indicator matches the text that you
provide. For example, if you specify "Starting ..." as the text string for the progress option, then the system
lists all the jobs that are starting.
[-schedule <job_schedule>] - Schedule Name
Specify this parameter to display reallocation scans that match the schedule name that you specify. If you want
a list of all job schedules, use the job schedule show command.
[-global-status <text>] - Global State of Scans
Specify this parameter to indicate if reallocation scans are on or off globally. You must type either of the
following text strings:
Examples
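The following example displays the status of all existing reallocation scans (the show subcommand name is assumed):
cluster1::> storage aggregate reallocation show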
Description
Begins a reallocation scan on a specified aggregate.
Before performing a reallocation scan, the reallocation job normally performs a check of the current layout optimization. If the
current layout optimization is less than the threshold, then the system does not perform a reallocation on the aggregate.
You can define the reallocation scan job so that it runs at a specific interval, or you can use the storage aggregate
reallocation schedule command to schedule reallocation jobs.
Parameters
-aggregate <aggregate name> - Aggregate Name
Specify this parameter to specify the target aggregate on which to start a reallocation scan.
{ [-interval | -i <text>] - Interval Schedule
Specifies the schedule in a single string with four fields:
Use an asterisk "*" as a wildcard to indicate every value for that field. For example, an * in the day of month
field means every day of the month. You cannot use the wildcard in the minute field.
You can enter a number, a range, or a comma-separated list of values for a field.
| [-once | -o [true]]} - Once
Specifies that the job runs once and then is automatically removed from the system when set to true. If you use
this command without specifying this parameter, its effective value is false and the reallocation scan runs as
scheduled. If you enter this parameter without a value, it is set to true and a reallocation scan runs once.
Examples
cluster1::> storage aggregate reallocation start -aggregate aggr0 -interval "0 23 * 6"
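The interval rules above (four whitespace-separated fields; an asterisk wildcard allowed everywhere except the minute field; each field a number, a range, or a comma-separated list) can be sketched as a validator. This assumes, based on the example "0 23 * 6", that the minute is the first field; the remaining field names are not listed in this excerpt.

```python
import re

# One field: a number, a range (a-b), or a comma-separated list of those;
# "*" is a wildcard.
_ITEM = r"\d+(-\d+)?"
_FIELD = rf"(\*|{_ITEM}(,{_ITEM})*)"

def is_valid_interval(interval: str) -> bool:
    """Check only the syntactic rules stated above for a four-field
    interval string; assumes the minute is the first field, where the
    wildcard is not allowed."""
    fields = interval.split()
    if len(fields) != 4:
        return False
    if fields[0] == "*":  # wildcard not allowed in the minute field
        return False
    return all(re.fullmatch(_FIELD, f) is not None for f in fields)
```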
Related references
storage aggregate reallocation schedule on page 892
Parameters
-aggregate <aggregate name> - Aggregate Name
This parameter specifies the target aggregate on which to stop and delete a reallocation scan.
Examples
Description
The storage aggregate relocation show command displays the status of the aggregates that were relocated in
the most recent relocation operation.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command only displays the fields that you
specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all entries.
[-node {<nodename>|local}] - Node
Selects aggregates from the specified source node.
[-aggregate <text>] - Aggregate Name
Selects the aggregates that match this parameter value.
[-relocation-status <text>] - Aggregates Relocation Status
Selects the aggregates whose relocation status matches this parameter value.
[-destination <text>] - Destination for Relocation
Selects the aggregates that are designated for relocation on the specified destination node.
Examples
The following example displays the relocation status of aggregates on all nodes in the cluster:
Description
The storage aggregate relocation start command initiates the relocation of the aggregates from one node to the
partner node in a high-availability (HA) pair.
Parameters
-node {<nodename>|local} - Name of the Node that currently owns the aggregate
This specifies the source node where the aggregates to be relocated reside.
-destination {<nodename>|local} - Destination node
This specifies the destination node where aggregates are to be relocated.
-aggregate-list <aggregate name>, ... - List of Aggregates to be relocated
This specifies the list of aggregate names to be relocated from source node to destination node.
[-override-vetoes {true|false}] - Override Vetoes
This specifies whether to override the veto checks for relocation operation. Initiating aggregate relocation with
vetoes overridden will result in relocation proceeding even if the node detects outstanding issues that would
make aggregate relocation dangerous or disruptive. The default value is false.
[-relocate-to-higher-version {true|false}] - Relocate To Higher Version
This specifies whether the aggregates are to be relocated to a node that is running a higher version of Data
ONTAP than the source node. If an aggregate is relocated to such a destination, it cannot be relocated back to
the source node until the source is upgraded to the same or a higher Data ONTAP version. This option is not
required if the destination node is running a higher minor version of the same major version. The default value
is false.
[-override-destination-checks {true|false}] - Override Destination Checks
This specifies whether the relocation operation should override the checks performed on the destination node.
This option can be used to force a relocation of aggregates even if the destination has outstanding issues. Note
that this could make the relocation dangerous or disruptive. The default value is false.
[-ndo-controller-upgrade {true|false}] - Relocate Aggregates for NDO Controller Upgrade (privilege:
advanced)
This specifies if the relocation operation is being done as a part of non-disruptive controller upgrade process.
Aggregate relocation will not change the home ownerships of the aggregates while relocating as part of
controller upgrade. The default value is false.
Examples
The following example relocates aggregates named aggr1 and aggr2 from source node node0 to destination node node1:
cluster1::> storage aggregate relocation start -node node0 -destination node1 -aggregate-list
aggr1, aggr2
Related references
storage aggregate resynchronization options on page 899
Description
The storage aggregate resynchronization modify command can be used to modify the resynchronization priority of
an aggregate.
When the number of aggregates pending resynchronization is higher than the maximum number of concurrent resynchronization
operations allowed on a node, the aggregates get resynchronized in the order of their "resync-priority" values.
For example, suppose that max-concurrent-resync under the storage aggregate resynchronization options directory
is set to two for a node. If three aggregates are waiting to be resynchronized, with resync-priority values of
high, medium, and low respectively, the third aggregate is not allowed to start resynchronization until one of the
first two aggregates has completed resynchronizing.
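The ordering in the example above can be modeled with a short sketch. The admission logic is illustrative only, not ONTAP's actual scheduler; the priority names come from the command's -resync-priority values, with "high(fixed)" (system aggregates) going first.

```python
# Priority order used for admission, as described in this section.
_ORDER = {"high(fixed)": 0, "high": 1, "medium": 2, "low": 3}

def resync_admission(waiting, max_concurrent):
    """Given (aggregate, resync-priority) pairs waiting to resynchronize,
    return the aggregate names admitted immediately on a node that allows
    max_concurrent concurrent resync operations. Illustrative sketch."""
    ordered = sorted(waiting, key=lambda pair: _ORDER[pair[1]])
    return [name for name, _priority in ordered[:max_concurrent]]
```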
Parameters
-aggregate <aggregate name> - Aggregate
This parameter specifies the aggregate that is to be modified.
[-resync-priority {high(fixed)|high|medium|low}] - Resynchronization Priority
This parameter specifies the new resynchronization priority value for the specified aggregate. This field cannot
be modified for unmirrored or Data ONTAP system aggregates.
Possible values for this parameter are:
• high: Mirrored data aggregates with this priority value start resynchronization first.
• medium: Mirrored data aggregates with this priority value start resynchronization after all the system
aggregates and data aggregates with 'high' priority value have started resynchronization.
• low: Mirrored data aggregates with this priority value start resynchronization only after all the other
aggregates have started resynchronization.
Related references
storage aggregate resynchronization options on page 899
Description
The storage aggregate resynchronization show command displays the relative resynchronization priority for each
aggregate in the cluster. When a particular node restricts how many resync operations can be active concurrently, these priorities
are used to prioritize the operations. The maximum concurrent resync operations for a node is displayed in the storage
aggregate resynchronization options show command. If no parameters are specified, the command displays the
following information about all the aggregates in the cluster:
• Aggregate name
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Aggregate
If this parameter is specified, the command displays the resynchronization priority only for the specified
aggregate.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays the resynchronization priority only for the aggregates
owned by the specified node.
[-resync-priority {high(fixed)|high|medium|low}] - Resynchronization Priority
If this parameter is specified, the command displays only the resynchronization priority that matches the
specified value. Possible values for this parameter are:
• high(fixed): This value is reserved for Data ONTAP system aggregates, which cannot have any other value
for this field. These aggregates always start their resynchronization operation at the first available
opportunity. This value cannot be assigned to a data aggregate.
• high: Mirrored data aggregates with this priority value start resynchronization first.
• low: Mirrored data aggregates with this priority value start resynchronization only after all the other
aggregates have started resynchronization.
When the number of aggregates waiting for resynchronization is higher than the maximum number of
resynchronization operations allowed on a node, then the resync-priority field is used to determine which
aggregate starts resynchronization first. This field is not set for unmirrored aggregates.
Examples
The following command displays the resynchronization priorities for all the aggregates in the cluster:
Related references
storage aggregate resynchronization options show on page 900
Description
The storage aggregate resynchronization options modify command can be used to modify the options that govern
the resynchronization of aggregates on a given cluster node.
Modifying the max-concurrent-resyncs option changes the number of aggregates that are allowed to resynchronize
concurrently. When the number of aggregates waiting for resynchronization is higher than this value, the aggregates are
resynchronized in the order of their "resync-priority" values. An aggregate's resync-priority value can be modified
using the storage aggregate resynchronization modify command with the -resync-priority parameter.
Parameters
-node {<nodename>|local} - Node
This parameter specifies the node for which the option is to be modified.
[-max-concurrent-resync <integer>] - Maximum Concurrent Resynchronizing Aggregates
This parameter specifies the new value for the maximum number of concurrent resync operations allowed on a
node. This option must be specified along with the -node parameter. When a node has active resync
operations, setting this parameter to a value that is lower than the number of currently resyncing aggregates
will trigger a user confirmation.
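The confirmation rule above is a simple comparison; a hedged sketch, illustrative rather than ONTAP's implementation:

```python
def needs_confirmation(active_resyncs: int, new_max: int) -> bool:
    """Model of the rule above: lowering -max-concurrent-resync below the
    number of currently resyncing aggregates triggers a user
    confirmation. Illustrative only."""
    return new_max < active_resyncs
```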
Related references
storage aggregate resynchronization modify on page 897
Description
The storage aggregate resynchronization options show command displays all the options that govern the
resynchronization of aggregates on a given cluster node. If no parameters are specified, the command displays the following
information about all nodes:
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays resynchronization options only for the specified node.
[-max-concurrent-resync <integer>] - Maximum Concurrent Resynchronizing Aggregates
If this parameter is specified, the command displays only the resynchronization option that matches the
specified value.
Examples
The following example displays the maximum number of concurrent resyncs allowed for each node in the cluster:
The following example displays the maximum number of concurrent resyncs allowed for a specified node:
The following example displays all the nodes that allow more than five concurrent resync operations:
Description
The storage array modify command lets the user change several array parameters.
Parameters
-name <text> - Name
Storage array name, either generated by Data ONTAP or assigned by the user.
[-prefix <text>] - Prefix
Abbreviation for the named array.
[-vendor <text>] - Vendor
Array manufacturer.
[-model <text>] - Model
Array model number.
[-options <text>] - options
Vendor specific array settings.
[-max-queue-depth <integer>] - Target Port Queue Depth (privilege: advanced)
The target port queue depth for all target ports on this array.
[-lun-queue-depth <integer>] - LUN Queue Depth (privilege: advanced)
The queue depth assigned to array LUNs from this array.
{ [-is-upgrade-pending {true|false}] - Upgrade Pending (privilege: advanced)
Set this parameter to true if the array requires additional Data ONTAP resilience for a pending firmware
upgrade. Keep this parameter false during normal array operation. This value cannot be set to true if
-path-failover-time is greater than zero.
Examples
This command changes the model to FastT.
Description
The storage array remove command discards array profile records for a particular storage array from the cluster database.
The command fails if a storage array is still connected to the cluster. Use the storage array config show command to
view the array connectivity status. The array target port can be removed using the storage array port remove command.
Parameters
-name <text> - Name
Name of the storage array you want to remove from the database.
Examples
Related references
storage array config show on page 905
storage array port remove on page 910
Description
The storage array rename command permits substitution of the array profile name which Data ONTAP assigned during
device discovery. By default, the name that Data ONTAP assigned to the storage array during discovery is shown in Data
ONTAP displays and command output.
Examples
Description
The storage array show command displays information about arrays visible to the cluster. If no parameters are specified,
the command displays the following information about all storage arrays:
• Prefix
• Name
• Vendor
• Model
• Options
To display detailed information about a single array, use the -name parameter. The detailed view adds the following
information:
• Serial Number
• Optimization Policy
• Affinity
• Errors
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-name <text>] - Name
Selects the arrays that match this parameter value.
[-prefix <text>] - Prefix
Abbreviation for the named array.
Examples
The following example displays information about all arrays.
Name: HITACHI_DF600F_1
Prefix: abc
Vendor: HITACHI
Model: DF600F
options:
Serial Number: 4291000000000000
Optimization Policy: iALUA
Affinity: aaa
Error Text:
Path Failover Timeout (sec): 30
Extend All Path Failure Event (secs): 50
Description
The storage array config show command displays information about how the storage arrays connect to the cluster, LUN
groups, number of LUNS, and more. Use this command to validate the configuration and to assist in troubleshooting.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-switch ]
If you specify this parameter, switch port information is shown.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Controller Name
Selects the arrays that match this parameter value.
[-group <integer>] - LUN Group
Selects the arrays that match this parameter value. A LUN group is a set of LUNs that shares the same path
set.
[-target-wwpn <text>] - Array Target Ports
Selects the arrays that match this parameter value (the World Wide Port Name of a storage array port).
[-initiator <text>] - Initiator
Selects the arrays that match this parameter value (the host bus adapter that the clustered node uses to connect
to storage arrays).
[-array-name <array name>] - Array Name
Selects the arrays that match this parameter value.
[-target-side-switch-port <text>] - Target Side Switch Port
Selects the arrays that match this parameter value.
[-initiator-side-switch-port <text>] - Initiator Side Switch Port
Selects the arrays that match this parameter value.
[-lun-count <integer>] - Number of array LUNs
Selects the arrays that match this parameter value.
[-ownership {all|assigned|unassigned}] - Ownership
Selects the arrays that match this parameter value.
Description
The storage array disk paths show command displays information about disks and array LUNs. Where it appears in the
remainder of this document, "disk" may refer to either a disk or an array LUN. By default, the command displays the following
information about all disks:
• Controller name
• Initiator Port
• LUN ID
• Target IQN
• TPGN
• Port speeds
To display detailed information about a single disk, use the -disk parameter.
Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields for all disks, in column style output.
| [-switch ]
Displays the switch port information for all disks, in column style output.
| [-instance ]}
Displays detailed disk information. If no disk path name is specified, this parameter displays the same detailed
information for all disks as does the -disk parameter. If a disk path name is specified, then this parameter
displays the same detailed information for the specified disks as does the -disk parameter.
[-uid <text>] - Disk Unique Identifier
Selects the disks whose unique id matches this parameter value. A disk unique identifier has the form:
20000000:875D4C32:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000
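The identifier format can be checked syntactically; this sketch is inferred from the documented example (ten colon-separated groups of eight hexadecimal digits), not from a formal specification of the identifier.

```python
import re

# Shape inferred from the example above: ten colon-separated groups of
# eight hex digits each.
_UID_RE = re.compile(r"[0-9A-Fa-f]{8}(:[0-9A-Fa-f]{8}){9}")

def looks_like_disk_uid(uid: str) -> bool:
    """Syntactic check only, based on the documented example."""
    return _UID_RE.fullmatch(uid) is not None
```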
[-disk <disk path name>] - Disk Name
Displays detailed information about the specified disks.
[-array-name <array name>] - Array Name
Selects information about the LUNs presented by the specified storage array.
[-diskpathnames <disk path name>, ...] - Path-Based Disk Names
Selects information about disks that have all of the specified path names.
[-nodelist {<nodename>|local}, ...] - Controller name
Selects information about disks that are visible to all of the specified nodes.
[-initiator <text>, ...] - Initiator Port
Selects information about disks that are visible to the initiator specified. Disks that are not currently in use by
that initiator are included.
[-lun <integer>, ...] - LUN ID
Selects information about the specified LUNs.
Examples
The following example displays information about all disks:
Description
The storage array port modify command lets the user change array target port parameters.
Parameters
-name <text> - Name
Selects the array ports that match this parameter value. The storage array name is either generated by Data
ONTAP or assigned by the user.
-wwnn <text> - WWNN
Selects the array ports that match this parameter value.
-wwpn <text> - WWPN
Selects the array ports that match this parameter value.
[-max-queue-depth <integer>] - Target Port Queue Depth
The target port queue depth for this target port.
[-utilization-policy {normal|defer}] - Utilization Policy
The policy used in automatically adjusting the queue depth of the target port based on its utilization.
Examples
This command changes the maximum queue depth for this target port to 32.
cluster1::> storage array port modify -name HITACHI_DF600F_1 -wwnn 50060e80004291c0 -wwpn
50060e80004291c0 -max-queue-depth 32
Parameters
-name <text> - Name
Selects the array ports that match this parameter value. The storage array name is either generated by Data
ONTAP or assigned by the user.
{ [-wwnn <text>] - WWNN
Selects the array ports that match this parameter value.
[-wwpn <text>] - WWPN
Selects the array ports that match this parameter value.
| [-target-iqn <text>] - Target IQN
Selects the array ports that match this parameter value.
[-tpgt <integer>]} - TPGT
Selects the array ports that match this parameter value.
Examples
This command removes a port record from the array profiles database.
cluster1::> storage array port remove -name HITACHI_DF600F_1 -wwnn 50060e80004291c0 -wwpn
50060e80004291c0
Description
The storage array port show command displays all the target ports known to the cluster for a given storage array (if an
array name is specified) or for all storage arrays if no storage array name is specified. Target ports remain in the database as part
of an array profile unless you explicitly remove them from the database.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-name <text>] - Name
Selects the array ports that match this parameter value. The storage array name is either generated by Data
ONTAP or assigned by the user.
[-wwnn <text>] - WWNN
Selects the array ports that match this parameter value.
[-wwpn <text>] - WWPN
Selects the array ports that match this parameter value.
• normal - This policy aggressively competes for target port resources, in effect competing with other hosts.
(default)
• defer - This policy does not aggressively compete for target port resources, in effect deferring to other
hosts.
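A hypothetical sketch of how such a policy might bias queue-depth adjustment; the thresholds and step size are invented for illustration, and ONTAP's actual algorithm is not documented here.

```python
def adjust_queue_depth(current_depth: int, utilization_pct: float,
                       policy: str) -> int:
    """Under "defer" the host backs off sooner when the target port is
    busy; under "normal" it keeps competing. Thresholds and step size
    are illustrative assumptions, not ONTAP internals."""
    busy_threshold = 70.0 if policy == "defer" else 90.0
    if utilization_pct > busy_threshold:
        return max(1, current_depth - 2)  # back off, but keep at least 1
    return current_depth
```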
Examples
The example below displays the port information for a single port.
Description
The automated-working-set-analyzer show command displays the Automated Working-set Analyzer running instances.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node Name
This parameter indicates the node name that the AWA instance runs on.
Examples
The following example shows a running instance of automated-working-set-analyzer on node node1 for aggregate
aggr0.
Description
The automated-working-set-analyzer start command enables the Automated Workload Analyzer that is capable of doing the
following:
• Workload monitoring
Examples
Description
The storage automated-working-set-analyzer stop command terminates one or more running Automated Workload
Analyzer instances.
Parameters
-node <nodename> - Node Name
This parameter indicates the node name that the AWA instance runs on.
[-flash-cache {true|false}] - Flash cache node-wide modeling
This parameter indicates whether the AWA is modeling flash-cache.
[-aggregate <aggregate name>] - Aggregate
This parameter indicates the aggregate name that the AWA instance runs on.
Examples
Description
The automated-working-set-analyzer volume show command displays the volume statistics reported by the corresponding
Automated Working-set Analyzer running instances.
Description
The storage bridge add command enables you to add FC-to-SAS bridges for SNMP monitoring in a MetroCluster
configuration.
Parameters
-address <IP Address> - Bridge Management Port IP Address
This parameter specifies the IP address of the bridge that is being added for monitoring.
[-managed-by {SNMP|in-band}] - Bridge Management Method
This parameter specifies whether the bridge uses the SNMP or in-band management method. The FibreBridge
6500N uses SNMP only; the FibreBridge 7500 may use either.
[-name <text>] - Bridge Name
This parameter identifies the bridge being added. It is required only when the managed-by parameter is set to
in-band.
[-snmp-community <text>] - SNMP Community
This parameter specifies the SNMP community set on the bridge that is being added for monitoring.
[-veto-backend-fabric-check {true|false}] - Veto Backend Fabric Check (privilege: advanced)
If specified, the storage bridge add command will not check if the bridge is present in the MetroCluster's
backend fabric. By default, it does not let you add bridges that are not present.
Examples
The following command adds a bridge with IP address '10.226.197.16' for monitoring:
cluster1::>
Description
The storage bridge modify command enables you to modify certain parameters for identifying and accessing the FC-to-SAS bridges
added for monitoring in a MetroCluster configuration.
Parameters
-name <text> - Bridge Name
This parameter specifies the name of the bridge.
[-address <IP Address>] - Bridge IP Address
This parameter specifies the IP address of the bridge.
[-snmp-community <text>] - SNMP Community Set on the Bridge
This parameter specifies the SNMP community set on the bridge.
[-managed-by {SNMP|in-band}] - Bridge Management Method
This parameter specifies whether the bridge uses the SNMP or in-band management method. The FibreBridge
6500N uses SNMP only; the FibreBridge 7500 may use either.
Examples
The following command modifies 'ATTO_10.226.197.16' bridge SNMP community to 'public':
cluster1::>
Description
The storage bridge refresh command triggers a refresh of the SNMP data for the MetroCluster FC switches and FC-to-
SAS bridges. It does nothing if a refresh is already in progress. The FC switches and FC-to-SAS bridges must have been
previously added for monitoring by using the storage switch add and storage bridge add commands, respectively.
Examples
The following command triggers a refresh for the SNMP data:
cluster1::*>
Related references
storage switch add on page 1105
storage bridge add on page 918
Description
The storage bridge remove command enables you to remove FC-to-SAS bridges that were previously added for SNMP monitoring.
Parameters
-name <text> - Bridge Name
This parameter specifies the name of the bridge added for monitoring.
Examples
The following command removes 'ATTO_10.226.197.16' bridge from monitoring:
Description
The storage bridge run-cli command enables you to execute an ATTO bridge command.
Examples
The following example executes a command on a bridge
Description
The storage bridge show command displays information about all the storage bridges in the MetroCluster configuration.
The bridges must have been previously added for monitoring using the storage bridge add command. If no parameters are
specified, the default command displays the following information about the storage bridges:
• Bridge
• Symbolic Name
• Vendor
• Model
• Bridge WWN
• Is Monitored
• Is Bridge Secure
• Managed By
• Monitor Status
To display detailed profile information about a single storage bridge, use the -name parameter.
Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields for all the storage bridges, in column style output.
| [-connectivity ]
Displays the following details about the connectivity from different entities to the storage bridge:
• Node
• Initiator
| [-cooling ]
Displays the following details about the chassis temperature sensor(s) on the storage bridge:
• Sensor Name
• Sensor Status
| [-error ]
Displays the errors related to the storage bridge.
| [-ports ]
Displays the following details about the storage bridge FC ports:
• Port number
Displays the following details about the storage bridge SAS ports:
• Port number
| [-power ]
Displays the status of the replaceable power supplies for the FibreBridge 7500 only:
| [-sfp ]
Displays the following details about the storage bridge FC ports Small Form-factor Pluggable (SFP):
• Port number
• SFP vendor
Displays the following details about the storage bridge SAS ports Quad Small Form-factor Pluggable (QSFP):
• Port number
• QSFP vendor
• QSFP type
Displays the following details about the storage bridge SAS ports Mini-SAS HD:
• Port number
• Mini-SAS HD vendor
• Mini-SAS HD type
| [-stats ]
Displays the following details about the storage bridge FC ports:
• Port number
Displays the following details about the storage bridge SAS ports:
• Port number
| [-instance ]}
Displays expanded information about all the storage bridges in the system. If a storage bridge is specified, then
this parameter displays the same detailed information for the storage bridge you specify as does the -name
parameter.
[-name <text>] - Bridge Name
Displays information only about the storage bridges that match the name you specify.
[-wwn <text>] - Bridge World Wide Name
Displays information only about the storage bridges that match the bridge wwn you specify.
[-model <text>] - Bridge Model
Displays information only about the storage bridges that match the bridge model you specify.
[-vendor {unknown|Atto}] - Bridge Vendor
Displays information only about the storage bridges that match the bridge vendor you specify.
[-fw-version <text>] - Bridge Firmware Version
Displays information only about the storage bridges that match the bridge firmware version you specify.
[-serial-number <text>] - Bridge Serial Number
Displays information only about the storage bridges that match the bridge serial number you specify.
[-address <IP Address>] - Bridge IP Address
Displays information only about the storage bridges that match the bridge IP address you specify.
[-is-monitoring-enabled {true|false}] - Is Monitoring Enabled for Bridge?
Displays information only about the storage bridges that match the bridge monitoring value you specify.
[-status {unknown|ok|error}] - Bridge Status
Displays information only about the storage bridges that match the bridge monitoring status you specify.
[-profile-data-last-successful-refresh-timestamp {MM/DD/YYYY HH:MM:SS [{+|-}hh:mm]}] - Bridge
Profile Data Last Successful Refresh Timestamp
Displays information only about the storage bridges that match the profile data last successful refresh
timestamp you specify.
[-symbolic-name <text>] - Bridge Symbolic Name
Displays information only about the storage bridges that match the symbolic name you specify.
[-snmp-community <text>] - SNMP Community Set on the Bridge
Displays information only about the storage bridges that match the bridge SNMP community you specify.
Examples
The following example displays information about all storage bridges:
cluster1::>
The following example displays connectivity (node to bridge) information about all storage bridges:
The following command displays cooling (temperature sensors) information about all storage bridges:
The following command displays the error information about all storage bridges:
The following command displays the detailed information about all the storage bridges:
The following command displays power supply information about all storage bridges:
The following command displays port information about all storage bridges:
FC Ports:
Admin Oper Neg
Ports Status Status Port Mode Speed WWPN
----- -------- ------- ------------- ------- ----------------
1 enabled online n-port 8gb 2100001086603824
2 enabled offline unknown unknown 2200001086603824
SAS Ports:
Neg Data
Data Rate PHY1 PHY2 PHY3 PHY4 Admin Oper
Ports Rate Cap Status Status Status Status Status Status WWPN
----- ---- ---- ------- ------- ------- ------- -------- ------- ------------
1     3Gbps 6Gbps online  online  online  online  enabled  online  5001086000603824
2     6Gbps 6Gbps offline offline offline offline disabled offline 0000000000000000
The following command displays port SFP information about all storage bridges:
FC SFP:
Speed
Ports Vendor Serial Number Part Number Capability
----- ----------------- ------------------ ------------------------ ----------
1 AVAGO AC1442J00L5 AFBR-57F5MZ 16Gbps
2 AVAGO AC1442J00L0 AFBR-57F5MZ 16Gbps
SAS QSFP:
Mini-SAS HD:
The following command displays port statistics information about all storage bridges:
FC Ports:
Oper Neg Link Sync CRC Rx Tx
Ports Status Port Mode Speed Failure Losses Error Words Words
----- ------- ------------- ------- ------- ------ ----- ---------- ----------
1 online n-port 8gb 0 0 0 2721271731 3049186605
2 offline unknown unknown 1 1 0 0 0
SAS Ports:
Invalid Disparity Sync PHY Link CRC
SAS PHY Neg Speed Dword Error Loss Reset Changed Error
Port Port Speed Capability Count Count Count Count Count Count
---- ---- ------- ---------- ------- --------- ----- ----- ------- -----
1 0 3Gbps 6Gbps 28262 26665 2 0 1 0
1 1 3Gbps 6Gbps 2110 1794 20 0 1 0
1 2 3Gbps 6Gbps 20435 18857 13 0 1 0
1 3 3Gbps 6Gbps 4573 3353 16 0 1 0
2 0 6Gbps 6Gbps 66 53 0 0 0 0
2 1 6Gbps 6Gbps 27478 25137 2 0 0 0
2 2 6Gbps 6Gbps 20537 17322 9 0 0 0
2 3 6Gbps 6Gbps 23629 21767 10 0 0 0
Related references
storage bridge add on page 918
Description
The storage bridge config-dump collect command retrieves a dumpconfiguration file from a storage bridge.
Parameters
-bridge <text> - Bridge Name
Use this parameter to retrieve a dumpconfiguration file from the specified storage bridge.
Examples
The following example retrieves a dumpconfiguration file from storage bridge ATTO_FibreBridge7500N_1:
cluster1::*>
Parameters
-node {<nodename>|local} - Node
Use this parameter to delete a dumpconfiguration file stored on the specified node.
-file <text> - Config File
Use this parameter to delete the dumpconfiguration file with the specified file name.
Examples
The following example deletes dsbridge_config.FB7500N100001.2017-04-28_14_49_30.txt from node1:
cluster1::*> storage bridge config-dump delete -node node1 -file dsbridge_config.FB7500N100001.2017-04-28_14_49_30.txt
Related references
storage bridge config-dump collect on page 934
storage bridge config-dump show
Description
The storage bridge config-dump show command displays information about all the dumpconfiguration files previously
retrieved with the storage bridge config-dump collect command. If no parameters are specified, the default command
displays the following information about the dumpconfiguration files:
• Node
• File Name
• Timestamp
• Bridge
To display detailed information about a single dumpconfiguration file, use the -node and -file parameters.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example displays information about all dumpconfiguration files:
cluster1::*> storage bridge config-dump show
Bridge: ATTO_FibreBridge7500N_1
cluster1::*>
The following example displays detailed information about all dumpconfiguration files:
cluster1::*> storage bridge config-dump show -instance
Node: node1
Bridge Name: ATTO_FibreBridge7500N_1
Filename: dsbridge_config.FB7500N100001.2017-05-01_09_53_53.txt
Timestamp: 5/1/2017 09:53:53
Bridge Serial Number: FB7500N100001
Node: node2
Bridge Name: ATTO_FibreBridge7500N_1
Filename: dsbridge_config.FB7500N100001.2017-04-28_14_48_35.txt
Timestamp: 4/28/2017 14:48:35
Bridge Serial Number: FB7500N100001
Node: node2
Bridge Name: ATTO_FibreBridge7500N_1
Filename: dsbridge_config.FB7500N100001.2017-04-28_15_50_20.txt
Timestamp: 4/28/2017 15:50:20
Bridge Serial Number: FB7500N100001
3 entries were displayed.
cluster1::*>
storage bridge coredump collect
Description
The storage bridge coredump collect command retrieves a core file from a storage bridge.
Parameters
-name <text> - Bridge Name
This parameter specifies the storage bridge name from which the coredump file is to be collected.
Examples
The following example retrieves a coredump from storage bridge ATTO_FibreBridge7500N_1:
cluster1::> storage bridge coredump collect -name ATTO_FibreBridge7500N_1
storage bridge coredump delete
Description
The storage bridge coredump delete command deletes a coredump file previously retrieved with the storage bridge
coredump collect command.
Parameters
-name <text> - Bridge Name
This parameter specifies the name of the bridge that the coredump file belongs to.
-corename <text> - Coredump Filename
This parameter specifies the name of the coredump file to be deleted.
Examples
The following example deletes coredump file core.FB7500N100018.1970-01-05.17_50_30.mem collected from bridge
ATTO_FibreBridge7500N_1:
cluster1::> storage bridge coredump delete -name ATTO_FibreBridge7500N_1 -corename core.FB7500N100018.1970-01-05.17_50_30.mem
Related references
storage bridge coredump collect on page 937
storage bridge coredump show
Description
The storage bridge coredump show command displays information about all the coredump files previously retrieved with
the storage bridge coredump collect command. If no parameters are specified, the default command displays the
following information about the coredump files:
• Bridge Name
• Coredump Filename
• Located on Node
• Panic Timestamp
• Panic String
To display detailed information about a single coredump file, use the -node and -corename parameters.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-serial-number <text>] - Bridge Serial Number
Use this parameter to select coredump files from the storage bridge with the specified serial number.
[-corename <text>] - Coredump Filename
Use this parameter to select the coredump files that match the specified file name.
[-name <text>] - Bridge Name
Use this parameter to select coredump files from the storage bridge with the specified name.
[-node <nodename>] - Located on Node
Use this parameter to select the coredump files that are located on the specified node.
[-panic-time <MM/DD/YYYY HH:MM:SS>] - Panic Timestamp
Use this parameter to select the coredump files that were collected at the specified time.
Examples
The following example displays information about all coredump files:
cluster1::> storage bridge coredump show
Related references
storage bridge coredump collect on page 937
storage bridge firmware update
Description
The storage bridge firmware update command downloads the firmware onto the bridge. The bridge needs to be
rebooted for the firmware update to occur. The firmware file to be used is specified by the -uri parameter.
Parameters
-bridge <text> - bridge name
This specifies the bridge whose firmware needs to be updated.
-uri <text> - URI
This parameter specifies the URI from which the firmware file is downloaded onto the bridge.
Examples
The following example updates the firmware on bridge ATTO_FibreBridge7500N_1.
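Based on the -bridge and -uri parameters described above, the invocation would take the following form; the URI here is a placeholder for the actual location of the firmware file:
cluster1::> storage bridge firmware update -bridge ATTO_FibreBridge7500N_1 -uri <firmware_file_URI>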
storage disk assign
Description
The storage disk assign command assigns ownership of an unowned disk or array LUN to a specific node. You
can also use this command to change the ownership of a disk or an array LUN to another node. You can designate disk
ownership by specifying disk names, array LUN names, wildcards, or all (for all disks or array LUNs visible to the node). For
disks, you can also set up disk ownership autoassignment, assign disks to a particular pool, or assign disks by copying
ownership from another disk.
Parameters
{ [-disk <disk path name>] - Disk Path
This specifies the disk or array LUN that is to be assigned. Disk names take one of the following forms:
• Virtual disks are named in the form <prefix>.<number>, where prefix is the storage array's prefix and
number is a unique ascending number.
Disk names take one of the following forms on clusters that are not yet fully upgraded to Data ONTAP 8.3:
• Disks that are not attached to a switch are named in the form <node>:<host_adapter>.<loop_ID>. For
disks with a LUN, the form is <node>:<host_adapter>.<loop_ID>L<LUN>. For instance, disk number
16 on host adapter 1a on a node named node0a is named node0a:1a.16. The same disk on LUN lun0 is
named node0a:1a.16Llun0.
Before the cluster is upgraded to Data ONTAP 8.3, the same disk can have multiple disk names, depending on
how the disk is connected. For example, a disk known to a node named alpha as alpha:1a.19 can be known to a
node named beta as beta:0b.37. All names are listed in the output of queries and are equally valid. To
determine a disk's unique identity, run a detailed query and look for the disk's universal unique identifier
(UUID) or serial number.
A subset of disks or array LUNs can be assigned using the wildcard character (*) in the -disk parameter.
Either the -owner, the -sysid, or the -copy-ownership-from parameter must be specified with the -disk
parameter. Do not use the -node parameter with the -disk parameter.
| -disklist <disk path name>, ... - Disk list
This specifies the list of disks to be assigned.
| -all [true] - Assign All Disks
This optional parameter causes assignment of all visible unowned disks or array LUNs to the node specified in
the -node parameter. The -node parameter must be specified with the -all parameter. When the
-copy-ownership-from parameter is specified with the -node parameter, disk ownership is assigned based on the
-copy-ownership-from parameter; otherwise, ownership of the disks is assigned based on the -node
parameter. Do not use the -owner or the -sysid parameter with the -all parameter.
Examples
The following example assigns ownership of an unowned disk named 1.1.16 to a node named node1:
cluster1::> storage disk assign -disk 1.1.16 -owner node1
The following example assigns all unowned disks or array LUNs visible to a node named node1 to itself:
cluster1::> storage disk assign -all true -node node1
The following example autoassigns all unowned disks (eligible for autoassignment) visible to a node named node1 to
itself:
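A representative invocation, assuming an -auto flag enables autoassignment (this flag is not shown in the parameter list above):
cluster1::> storage disk assign -auto true -node node1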
The following two examples demonstrate the behavior of the -force parameter with a spare disk that is already
owned by another system:
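A representative pair of invocations, with 1.1.22 standing in for the already-owned spare disk: the first attempt would normally fail, and the second succeeds because -force true overrides the existing ownership:
cluster1::> storage disk assign -disk 1.1.22 -owner node1
cluster1::> storage disk assign -disk 1.1.22 -owner node1 -force true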
The following example assigns ownership of the set of unowned disks on <stack> 1 to a node named node1:
The following example assigns ownership of unowned disk 1.1.16 by copying ownership from disk 1.1.18:
cluster1::> storage disk assign -disk 1.1.16 -copy-ownership-from 1.1.18
The following example assigns all unowned disks visible to a node named node1 by copying ownership from disk 1.1.18:
cluster1::> storage disk assign -all true -node node1 -copy-ownership-from 1.1.18
The following example assigns the root partition of disk 1.1.16 to node1.
cluster1::> storage disk assign -disk 1.1.16 -owner node1 -root true
-force true
The following example assigns the data partition of root-data partitioned disk 1.1.16 to node1.
cluster1::> storage disk assign -disk 1.1.16 -owner node1 -data true
-force true
The following example assigns the data1 partition of root-data1-data2 partitioned disk 1.1.24 to node1.
cluster1::> storage disk assign -disk 1.1.24 -owner node1 -data1 true
-force true
The following example assigns the data2 partition of root-data1-data2 partitioned disk 1.1.24 to node1.
cluster1::> storage disk assign -disk 1.1.24 -owner node1 -data2 true
-force true
storage disk fail
Description
The storage disk fail command can be used to manually force a file system disk to fail. It is used to remove a file system
disk that may be logging excessive errors and requires replacement. To unfail a disk, use the storage disk unfail
command.
Parameters
-disk <disk path name> - Disk Name
This parameter specifies the disk to be failed.
[-immediate | -i [true]] - Fail immediately
This parameter optionally specifies whether the disk is to be failed immediately. It is used to avoid Rapid
RAID Recovery and remove the disk from the RAID configuration immediately. Note that when a file system
disk has been removed in this manner, the RAID group to which the disk belongs enters degraded mode
(meaning a disk is missing from the RAID group). If a suitable spare disk is available, the contents of the disk
being removed are reconstructed onto that spare disk.
Examples
The following example fails a disk named 1.1.16 immediately:
cluster1::> storage disk fail -disk 1.1.16 -immediate true
Related references
storage disk unfail on page 967
storage disk reassign
Description
The storage disk reassign command is deprecated and may be removed in a future release of Data ONTAP. Disk reassignment is no
longer required as part of a controller replacement procedure. For further information, see the latest controller or NVRAM FRU
replacement flyer for your system. This command changes the ownership of all disks on a node to the ownership of another
node. Use this command only when a node has a complete failure (for instance, a motherboard failure) and is replaced by
another node. If the node's disks have already been taken over by its storage failover partner, use the -force parameter.
Parameters
-homeid | -s <nvramid> - Current Home ID
This specifies the serial number of the failed node.
-newhomeid | -d <nvramid> - New Home ID
This specifies the serial number of the node that is to take ownership of the failed node's disks.
[-force | -f [true]] - Force
This optionally specifies whether to force the reassignment operation. The default setting is false.
Examples
In the following example, a node named node0 with serial number 12345678 has failed. Its disks have not been
taken over by its storage failover partner. A replacement node with serial number 23456789 was installed and connected
to node0's disk shelves. To assign node0's disks to the new node, start the new node and run the following command:
cluster::*> storage disk reassign -homeid 12345678 -newhomeid 23456789
In the following example, a similar failure has occurred, except that node0's disks have been taken over by its storage
failover partner, node1. A new node with serial number 23456789 has been installed and configured. To assign the disks
that previously belonged to node0 to this new node, run the following command:
cluster::*> storage disk reassign -homeid 12345678 -newhomeid 23456789 -force true
node0's disks 1.1.11, 1.1.12, 1.1.13, 1.1.14, 1.1.15, 1.1.16, 1.1.23 and 1.1.24
were reassigned to new owner with serial number 23456789.
Parameters
[-node {<nodename>|local}] - Node
If this parameter is provided, the disk ownership information is updated only on the node specified.
Examples
The following example refreshes the disk ownership information for all the nodes in the cluster:
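Assuming this section describes the storage disk refresh-ownership command, the invocation for this example would be:
cluster1::> storage disk refresh-ownership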
storage disk remove
Description
The storage disk remove command removes the specified spare disk from the RAID configuration, spinning the disk down
when removal is complete.
This command does not remove disk ownership information from the disk. Therefore, if you plan to reuse the disk in a different
storage system, you should use the storage disk removeowner command instead. See the "Physical Storage Management
Guide" for the complete procedure.
NOTE: For systems with multi-disk carriers, it is important to ensure that none of the disks in the carrier are filesystem disks
before attempting removal. To convert a filesystem disk to a spare disk, see storage disk replace .
Parameters
-disk <disk path name> - Disk Name
This parameter specifies the disk to be removed.
Examples
The following example removes a spare disk named 1.1.16:
cluster1::> storage disk remove -disk 1.1.16
Related references
storage disk removeowner on page 946
storage disk replace on page 947
Parameters
-disk <disk path name> - Disk Name
This specifies the disk from which persistent reservation is to be removed.
Examples
The following example removes the persistent reservation from a disk named node1:switch01:port.126L1.
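Assuming the command name storage disk removereservation, the invocation for this example would be:
cluster1::> storage disk removereservation -disk node1:switch01:port.126L1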
storage disk removeowner
Description
The storage disk removeowner command removes ownership from a specified disk. The disk can then be reassigned to a
new owner.
Parameters
-disk <disk path name> - Disk Name
This specifies the disk whose ownership is to be removed.
{ [-root [true]] - Root Partition of Root-Data/Root-Data1-Data2 Partitioned Disk (privilege: advanced)
This optional parameter removes ownership of the root partition of a root-data/root-data1-data2 partitioned
disk. You cannot use this parameter with disks that are part of a storage pool. The default value is false.
| [-data [true]] - Data Partition of Root-Data Partitioned Disk (privilege: advanced)
This optional parameter removes ownership of the data partition of a root-data partitioned disk. You cannot
use this parameter with a root-data1-data2 partitioned disk or disks that are part of a storage pool. The default
value is false.
| [-data1 [true]] - Data1 Partition of a Root-Data1-Data2 Partitioned Disk (privilege: advanced)
This optional parameter removes ownership of the data1 partition of a root-data1-data2 partitioned disk. You
cannot use this parameter with a root-data partitioned disk or disks that are part of a storage pool. The default
value is false.
| [-data2 [true]]} - Data2 Partition of a Root-Data1-Data2 Partitioned Disk (privilege: advanced)
This optional parameter removes ownership of the data2 partition of a root-data1-data2 partitioned disk. You
cannot use this parameter with a root-data partitioned disk or disks that are part of a storage pool. The default
value is false.
Examples
The following example removes the ownership from a disk named 1.1.27.
cluster1::> storage disk removeowner -disk 1.1.27
The following example removes ownership of the root partition on disk 1.1.16.
cluster1::> storage disk removeowner -disk 1.1.16 -root true
The following example removes ownership of the data1 partition on disk 1.1.23.
cluster1::> storage disk removeowner -disk 1.1.23 -data1 true
The following example removes ownership of the data2 partition on disk 1.1.23.
cluster1::> storage disk removeowner -disk 1.1.23 -data2 true
storage disk replace
Description
The storage disk replace command starts or stops the replacement of a file system disk with a spare disk. When you start a
replacement, Rapid RAID Recovery begins copying data from the specified file system disk to a spare disk. When the process is
complete, the spare disk becomes the active file system disk and the file system disk becomes a spare disk. If you stop a
replacement, the data copy is halted, and the file system disk and spare disk retain their initial roles.
Parameters
-disk <disk path name> - Disk Name
This specifies the file system disk that is to be replaced. Disk names take one of the following forms:
• Virtual disks are named in the form <prefix>.<number>, where prefix is the storage array's prefix and
number is a unique ascending number.
Disk names take one of the following forms on clusters that are not yet fully upgraded to Data ONTAP 8.3:
• Disks that are not attached to a switch are named in the form <node>:<host_adapter>.<loop_ID>. For
disks with a LUN, the form is <node>:<host_adapter>.<loop_ID>L<LUN>. For instance, disk number
16 on host adapter 1a on a node named node0a is named node0a:1a.16. The same disk on LUN lun0 is
named node0a:1a.16Llun0.
Before the cluster is upgraded to Data ONTAP 8.3, the same disk can have multiple disk names, depending on
how the disk is connected. For example, a disk known to a node named alpha as alpha:1a.19 can be known to a
node named beta as beta:0b.37. All names are listed in the output of queries and are equally valid. To
determine a disk's unique identity, run a detailed query and look for the disk's universal unique identifier
(UUID) or serial number.
-action {start | stop} - Action
This specifies whether to start or stop the replacement process.
Examples
The following example begins replacing a file system disk named 1.0.16 with a spare disk named 1.1.14.
cluster1::> storage disk replace -disk 1.0.16 -replacement 1.1.14 -action start
storage disk set-foreign-lun
Description
The storage disk set-foreign-lun command sets or unsets a specified array LUN as foreign. This command enables or
disables importing data from the foreign LUN.
Parameters
-disk <disk path name> - Disk Name
This parameter specifies the array LUN which is to be set or unset as foreign.
-is-foreign-lun [true] - Is Foreign LUN
If this parameter is set to true, the array LUN is marked as foreign. If it is set to false, the foreign
designation on the array LUN is cleared.
Examples
The following example shows how to set an array LUN as foreign:
The following example shows how to mark an array LUN as not foreign:
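Representative invocations for the two examples above, with <array_LUN> standing in for the array LUN name:
cluster1::> storage disk set-foreign-lun -disk <array_LUN> -is-foreign-lun true
cluster1::> storage disk set-foreign-lun -disk <array_LUN> -is-foreign-lun false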
storage disk set-led
Description
The storage disk set-led command controls the LED of a specified disk.
You can turn an LED on or off, cause it to blink or stop blinking, or test it.
This command is useful for locating a disk in its shelf.
Parameters
-action {on|off|blink|blinkoff|testall|resetall} - Action
This parameter specifies the state to which the LED is to be set. Possible values include the following:
• testall - This tests the operation of every disk enclosure's hardware and drivers per node. Do not use this
value in normal operation.
• resetall - This resets the LED of every disk on the node and lights up the LED of disks with faults.
-disk <disk path name>, ... - List of Disks
This specifies the disk or disks whose LEDs are to be controlled. Disk names take one of the following forms:
• Virtual disks are named in the form <prefix>.<number>, where prefix is the storage array's prefix and
number is a unique ascending number.
Disk names take one of the following forms on clusters that are not yet fully upgraded to Data ONTAP 8.3:
• Disks that are not attached to a switch are named in the form <node>:<host_adapter>.<loop_ID>. For
disks with a LUN, the form is <node>:<host_adapter>.<loop_ID>L<LUN>. For instance, disk number
16 on host adapter 1a on a node named node0a is named node0a:1a.16. The same disk on LUN lun0 is
named node0a:1a.16Llun0.
Before the cluster is upgraded to Data ONTAP 8.3, the same disk can have multiple disk names, depending on
how the disk is connected. For example, a disk known to a node named alpha as alpha:1a.19 can be known to a
node named beta as beta:0b.37. All names are listed in the output of queries and are equally valid. To
determine a disk's unique identity, run a detailed query and look for the disk's universal unique identifier
(UUID) or serial number.
Examples
The following example causes the LEDs on all disks whose names match the pattern Cluster1* to turn on for 5 minutes:
The following example causes the LEDs on all disks attached to adapter 0b on Node2 to turn on for 1 minute:
The following example resets the LEDs on all disks on the local node and causes the LEDs of disks with faults to turn on:
The following example causes the LEDs on all disks whose names match the pattern Cluster1* to turn on for 2 minutes:
The following example tests the LEDs on all disks owned by the local node for 3 iterations:
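Using only the documented -action values, a representative invocation that resets the LEDs of every disk on the local node would be:
cluster1::> storage disk set-led -action resetall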
storage disk show
Description
The storage disk show command displays information about disks and array LUNs. Where it appears in the remainder of
this document "disk" may refer to either a disk or an array LUN. By default, the command displays the following information
about all disks in column style output:
• Disk name
• Shelf number
• Bay number
• Container type (aggregate, broken, foreign, labelmaint, maintenance, mediator, remote, shared, spare, unassigned, unknown,
volume, or unsupported)
• Position (copy, data, dparity, orphan, parity, pending, present, shared or tparity)
To display detailed information about a single disk, use the -disk parameter.
Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields for all disks, in column style output.
| [-broken ]
Displays the following RAID-related information about broken disks:
• Checksum compatibility
• Disk name
• Outage reason
• Shelf number
• Bay number
• Pool
• Disk type
| [-errors ]
Displays the following information about disks that have errors:
• Disk Name
• Error Type
| [-longop ]
Displays the following information about long-running disk operations, in column style output:
• Disk name
• Copy destination
| [-maintenance ]
Displays the following RAID-related information about disks in the maintenance center:
• Checksum compatibility
• Disk name
• Outage Reason
• Shelf number
• Bay number
• Pool
• Disk type
| [-ownership ]
Displays the following ownership-related information:
• Disk name
• Aggregate name
• SyncMirror pool
| [-partition-ownership ]
Displays the following ownership-related information for partitioned disks:
• Disk name
• Aggregate name
• Owner system id of data or data1 partition on a root-data or a root-data1-data2 partitioned disk respectively
| [-physical ]
Displays the following information about the disk's physical attributes, in column style output:
• Disk name
• Disk type
• Disk vendor
• Disk model
| [-port ]
Displays the following path-related information:
• Disk name and disk port associated with disk primary path
• Disk name and disk port associated with the disk secondary path, for a multipath configuration
| [-raid ]
Displays the following RAID-related information:
• Disk name
• Container type (aggregate, broken, labelmaint, maintenance, mediator, remote, shared, spare, unassigned,
unknown, or volume)
• Outage reason
• Position (copy, data, dparity, orphan, parity, pending, present, shared or tparity)
• Aggregate name
| [-raid-info-for-aggregate ]
Displays the following RAID-related information about the disks used in an aggregate:
• Aggregate name
• Position (copy, data, dparity, orphan, parity, pending, present, shared or tparity)
• Disk name
• Shelf number
• Bay number
• Pool
• Disk type
When this parameter is specified, RAID groups that use shared disks are not included. Use storage
aggregate show-status to show information for all RAID groups and aggregates.
| [-spare ]
Displays the following RAID-related information about available spare disks:
• Checksum compatibility
• Disk name
• Shelf number
• Bay number
• Pool
• Disk type
• Disk class
| [-ssd-wear ]
Displays the following wear life related information about solid state disks:
• Rated Life Used : An estimate of the percentage of device life that has been used, based on the actual
device usage and the manufacturer's prediction of device life. A value greater than 99 indicates that the
device has exceeded its rated life.
• Spare Blocks Consumed Limit : Spare blocks consumed percentage limit reported by the device. When the
Spare Blocks Consumed percentage for the device reaches this read-only value, Data ONTAP initiates a
disk copy operation to prepare to remove the device from service. Omitted if value is unknown.
• Spare Blocks Consumed : Percentage of device spare blocks that have been used. Each device has a
number of spare blocks that will be used when a data block can no longer be used to store data. This value
reports what percentage of the spares have already been consumed. Omitted if value is unknown.
| [-virtual-machine-disk-info ]
Displays information about Data ONTAP virtual disks, their mapped datastores and their specific backing
device attributes, such as: disk or LUN, adapter and initiator details (if applicable).
• Disk name.
• Name of the node.
• Name of the disk backing store. A backing store represents a storage location for virtual machine files. It
can be a VMFS volume, a directory on network-attached storage, or a local file system path.
• File name of the virtual disk used by the hypervisor. Each Data ONTAP disk is mapped to a unique VM
disk file.
• Type of the disk backing store. It can be a VMFS volume, a directory on network-attached storage, or a
local file system path.
• Full path to the backing store for network-attached storage. This field is valid only for NAS connections.
• Backing adapter PCI device ID for the virtual disk, for example "50:00.0".
• The iSCSI name of the disk backing target. This field is valid only for iSCSI connections.
• The iSCSI IP address of the disk backing target. This field is valid only for iSCSI connections.
• SCSI device name for the backing disk. It takes the form target-id:lun-id, for example "2:1".
• Backing disk partition number where the corresponding VM disk file resides.
| [-vmdisk-backing-info ]
Displays information about the backing disks on certain Data ONTAP-v models:
• Disk name
• Disk name
• Array name
• Capacity in sectors
• Capacity in mb
• Serial Number
• Disk name
• Container type
• Primary path
• Location
• Disk Name
• Shelf
• Bay
• Container Type
• Primary Path
| [-instance ]}
Displays detailed disk information. If no disk path name is specified, this parameter displays the same detailed
information for all disks as does the -disk parameter. If a disk path name is specified, then this parameter
displays the same detailed information for the specified disks as does the -disk parameter.
• Virtual disks are named in the form <prefix>.<number>, where prefix is the storage array's prefix and
number is a unique ascending number.
Disk names take one of the following forms on clusters that are not yet fully upgraded to Data ONTAP 8.3:
• Disks that are not attached to a switch are named in the form <node>:<host_adapter>.<loop_ID>. For
disks with a LUN, the form is <node>:<host_adapter>.<loop_ID>L<LUN>. For instance, disk number
16 on host adapter 1a on a node named node0a is named node0a:1a.16. The same disk on LUN lun0 is
named node0a:1a.16Llun0.
Before the cluster is upgraded to Data ONTAP 8.3, the same disk can have multiple disk names, depending on
how the disk is connected. For example, a disk known to a node named alpha as alpha:1a.19 can be known to a
node named beta as beta:0b.37. All names are listed in the output of queries and are equally valid. To
determine a disk's unique identity, run a detailed query and look for the disk's universal unique identifier
(UUID) or serial number.
[-owner {<nodename>|local}] - Owner
Selects information about disks that are owned by the specified node.
[-owner-id <nvramid>] - Owner System ID
Selects the disks that are owned by the node with the specified system ID.
[-is-foreign {true|false}] - Foreign LUN (privilege: advanced)
Selects information about array LUNs that have been declared to be foreign LUNs.
[-uid <text>] - Disk Unique ID
Selects the disks whose unique id matches this parameter value. A disk unique identifier has the form:
20000000:875D4C32:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000
[-aggregate <aggregate name>] - Aggregate
Selects information about disks that belong to the specified aggregate.
[-array-name <array name>] - Array Name
Selects information about the LUNs presented by the specified storage array.
[-average-latency <integer>] - Average I/O Latency Across All Active Paths
Selects information about disks that have the specified average latency.
[-bay <integer>] - Bay
Selects information about disks that are located in the carrier within the specified shelf bay.
[-bps <integer>] - Bytes Per Sector
Selects information about disks that have the specified number of bytes per sector. Possible settings are 512,
520, 4096, and 4160.
• capacity = Capacity-oriented, near-line disk types. Includes disk types FSAS, BSAS and ATA.
• performance = Performance-oriented, enterprise class disk types. Includes disk types FCAL and SAS.
• archive = Archive class SATA disks in multi-disk carrier storage shelves. Includes disk type MSATA.
• array = Logical storage devices backed by storage arrays and used by Data ONTAP as disks. Includes disk
type LUN.
• virtual = Virtual disks that are formatted and managed by the hypervisor. Includes disk type VMDISK.
• mediator = A mediator disk is hosted by an external node on non-shared HA systems and is used to
communicate the viability of storage failover between the non-shared HA nodes.
• onedomain = The array LUN is accessible only via a single fault domain.
• control = The array LUN cannot be used because it is a control device.
• foreign = The array LUN is marked as foreign and has some external SCSI reservations other than those
from Data ONTAP.
• toobig = The array LUN exceeds the maximum array LUN size that Data ONTAP supports.
• toosmall = The array LUN is less than the minimum array LUN size that Data ONTAP supports.
• targetasymmap = The array LUN is presented more than once on a single target port.
• unknown = The array LUN from a storage array that is not supported by this version of Data ONTAP.
• netapp = A SAN front-end LUN from one Data ONTAP system that is presented as external storage to
another Data ONTAP system.
• notallflashdisk = The disk does not match the All-Flash Optimized personality of the system.
The following example displays detailed information about a disk named 1.0.75:
cluster1::> storage disk show -disk 1.0.75
The following example displays RAID-related information about disks used in an aggregate:
cluster1::> storage disk show -raid-info-for-aggregate
The following example displays RAID-related information about disks in the maintenance center:
cluster1::> storage disk show -maintenance
Related references
storage aggregate show-status on page 865
storage disk unfail
Description
The storage disk unfail command can be used to unfail a broken disk.
If the attempt to unfail the disk is unsuccessful, the disk remains in the broken state.
The disk unfail command prompts for confirmation unless you specify the "-quiet" parameter.
Examples
The following example unfails a disk named 1.1.16 to become a spare disk:
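Assuming a -spare flag that returns the unfailed disk to the spare pool (suggested by the example description above), the invocation would be:
cluster1::> storage disk unfail -disk 1.1.16 -spare true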
storage disk updatefirmware
Description
Note: This command is deprecated and may be removed in a future release of Data ONTAP. Use the "storage disk
firmware update" command.
The storage disk updatefirmware command updates the firmware on one or more disks.
You can download the latest firmware by using the storage firmware download command.
You can specify a list of one or more disks whose firmware is to be updated by using the -disk
parameter, or you can update the firmware on all local disks by omitting the -disk parameter.
Parameters
[-disk <disk path name>, ...] - Disk
This specifies the disk or disks whose firmware is to be updated.
If you do not specify this option, all local disks' firmware is updated.
Examples
The following example updates the firmware on all disks:
cluster1::> storage disk updatefirmware
Related references
storage disk firmware update on page 974
storage firmware download on page 1025
storage disk zerospares
Description
The storage disk zerospares command zeroes all non-zeroed spare disks in all nodes or a specified node in the cluster. A
node must be online to zero disks. This operation must be done before a disk can be reused in another aggregate. This version of
ONTAP uses fast zeroing, which converts a spare disk from non-zeroed to zeroed without the long wait times required when
physically zeroing a disk.
Parameters
[-owner {<nodename>|local}] - Owner
If this parameter is specified, only non-zeroed spares assigned to the specified node will be zeroed. Otherwise,
all non-zeroed spares in the cluster will be zeroed.
Examples
The following example zeroes all non-zeroed spares owned by a node named node4, using fast zeroing:
cluster1::> storage disk zerospares -owner node4
The following example zeroes all non-zeroed spares owned by a node named node2 by physically writing zeros to the
entire disk:
Description
The storage disk error show command displays disk component and array LUN configuration errors.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-uid <text>] - UID
Displays error information for the disk whose unique ID matches the value you specify. A disk unique identifier has the form:
20000000:875D4C32:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000
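The identifier layout above can be checked with a small sketch. This is an illustrative helper, not an ONTAP function; it only assumes the format shown above, colon-separated groups of eight hexadecimal digits.

```python
import re

# Matches one eight-hex-digit group of a disk unique identifier.
UID_GROUP = re.compile(r"^[0-9A-Fa-f]{8}$")

def is_valid_disk_uid(uid: str) -> bool:
    # A valid UID is two or more colon-separated eight-hex-digit groups.
    groups = uid.split(":")
    return len(groups) > 1 and all(UID_GROUP.match(g) for g in groups)

example = ("20000000:875D4C32:00000000:00000000:00000000:"
           "00000000:00000000:00000000:00000000:00000000")
```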
• onedomain = The array LUN is accessible only via a single fault domain.
• control = The array LUN cannot be used because it is a control device.
• foreign = The array LUN is marked as foreign and has some external SCSI reservations other than those
from Data ONTAP.
• toobig = The array LUN exceeds the maximum array LUN size that Data ONTAP supports.
• toosmall = The array LUN is less than the minimum array LUN size that Data ONTAP supports.
• targetasymmap = The array LUN is presented more than once on a single target port.
• unknown = The array LUN is from a storage array that is not supported by this version of Data ONTAP.
• netapp = A SAN front-end LUN from one Data ONTAP system that is presented as external storage to
another Data ONTAP system.
• notallflashdisk = The disk does not match the All-Flash Optimized personality of the system.
Examples
The following example displays configuration errors seen in the system:
Description
The storage disk firmware revert command reverts firmware on all disks or a specified list of disks on a node.
You can specify a list of one or more disks whose firmware is to be reverted by using the -disk parameter.
You can revert the firmware on all the disks owned by a node by using the -node parameter.
This command can make the disks inaccessible for up to five minutes after the start of its execution. Therefore, the network
sessions that use the affected node must be terminated before running the storage disk firmware revert command.
This is particularly true for CIFS sessions, which might be terminated when this command is executed.
If you need to view the current firmware versions, use the storage disk show -fields firmware-revision command.
The following example displays partial output from the storage disk show -fields firmware-revision command,
where the firmware version for the disks is NA02:
The firmware files are stored in the /mroot/etc/disk_fw directory on the node. The firmware file name is in the form of "product-
ID.revision.LOD". For example, if the firmware file is for Seagate disks with product ID X225_ST336704FC and the firmware
version is NA01, the file name is X225_ST336704FC.NA01.LOD. If the node in this example contains disks with firmware
version NA02, the /mroot/etc/disk_fw/X225_ST336704FC.NA01.LOD file is downloaded to every disk when you execute this
command.
How to Revert the Firmware for an HA Pair in a Cluster
Use the following procedure to perform a revert on the disks in an HA environment:
• Make sure that the nodes are not in takeover or giveback mode.
• Download the latest firmware on both nodes by using the storage firmware download command.
• Revert the disk firmware on Node A's disks by entering the storage disk firmware revert -node node-A command.
• Wait until the storage disk firmware revert command completes on Node A, and then revert the firmware on Node
B's disks by entering the storage disk firmware revert -node node-B command.
Examples
If you need to view the current firmware versions, use the storage disk show -fields firmware-revision
command. The following example displays partial output from the storage disk show -fields firmware-
revision command, where the firmware version for the disks is NA02:
The following example reverts the firmware on all disks owned by cluster-node-01:
Warning: Disk firmware reverts can be disruptive to the system. Reverts involve
power cycling all of the affected disks, as well as suspending disk
I/O to the disks being reverted. This delay can cause client
disruption. Takeover/giveback operations on a high-availability (HA)
group will be delayed until the firmware revert process is complete.
Disk firmware reverts should only be done one node at a time. Disk
firmware reverts can only be performed when the HA group is healthy;
they cannot be performed if the group is in takeover mode.
The following example reverts the firmware on disk 1.5.0 which is owned by node cluster-node-04:
Warning: Disk firmware reverts can be disruptive to the system. Reverts involve
power cycling all of the affected disks, as well as suspending disk
I/O to the disks being reverted. This delay can cause client
disruption. Takeover/giveback operations on a high-availability (HA)
group will be delayed until the firmware revert process is complete.
Disk firmware reverts should only be done one node at a time. Disk
firmware reverts can only be performed when the HA group is healthy;
they cannot be performed if the group is in takeover mode.
Related references
storage disk show on page 950
storage firmware download on page 1025
Description
The storage disk firmware show-update-status command displays the state of the background disk firmware update
process.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node
Selects the node that matches this parameter value.
[-num-waiting-download <integer>] - The Number of Disks Waiting to Download
Selects the nodes whose number of disks waiting for firmware download by the BDFU process matches this
parameter value.
[-total-completion-estimate <integer>] - Estimated Duration to Completion (mins)
Selects the nodes whose Background Disk Firmware Update (BDFU) completion time estimate matches this
parameter value. This indicates the amount of estimated time required for BDFU to complete the firmware
update cycle.
[-average-duration-per-disk <integer>] - Average Firmware Update Duration per Disk (secs)
Selects the nodes for which the average time that the BDFU process requires to update a single disk matches
this parameter value. This indicates the average amount of time required to update each disk drive.
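The two duration fields above are related by simple arithmetic. The relationship below is an illustrative assumption, not an ONTAP formula: if BDFU updates disks sequentially, the completion estimate in minutes is roughly the number of disks still waiting times the average per-disk duration in seconds.

```python
# Illustrative arithmetic only (not an ONTAP API): estimate the BDFU
# completion time from the queue length and the average per-disk duration.
def estimated_completion_minutes(num_waiting_download: int,
                                 average_duration_per_disk_secs: int) -> int:
    total_secs = num_waiting_download * average_duration_per_disk_secs
    return -(-total_secs // 60)  # round up to whole minutes

# e.g. 12 disks waiting at about 120 seconds each -> 24 minutes
```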
[-unable-to-update <disk path name>, ...] - List of Disks with a Failed Update
Selects the nodes whose unable to update disk list matches this parameter value. This is a list of disks that
failed to update the firmware.
[-update-status {off|running|idle}] - Background Disk Firmware Update Status
Selects the nodes whose BDFU process status matches this parameter value. Possible values are off, running, and idle.
Examples
Description
Use the storage disk firmware update command to manually update firmware on all disks or a specified list of disks on
a node. However, the recommended way to update disk firmware in a cluster is to enable automatic background firmware update
by enabling the -bkg-firmware-update parameter for all of the nodes in the cluster. You can do this by entering the
storage disk option modify -node * -bkg-firmware-update on command.
You can download the latest firmware on the node by using the storage firmware download command.
You can specify a list of one or more disks whose firmware is to be updated by using the -disk parameter.
You can update the firmware on all the disks owned by a node by using the -node parameter.
This command can make the disks inaccessible for up to five minutes after the start of its execution. Therefore, the network
sessions that use the affected node must be terminated before running the storage disk firmware update command.
This is particularly true for CIFS sessions, which might be terminated when this command is executed.
The firmware is automatically downloaded to disks that report previous versions of the firmware. For information on
automatic firmware update downloads, see "Automatic versus Manual Firmware Download".
If you need to view the current firmware versions, use the storage disk show -fields firmware-revision command.
The following example displays partial output from the storage disk show -fields firmware-revision command,
where the firmware version for the disks is NA01:
The firmware files are stored in the /mroot/etc/disk_fw directory on the node. The firmware file name is in the form of "product-
ID.revision.LOD". For example, if the firmware file is for Seagate disks with product ID X225_ST336704FC and the firmware
version is NA02, the filename is X225_ST336704FC.NA02.LOD. The revision part of the file name is the number against
which the node compares each disk's current firmware version. If the node in this example contains disks with firmware version
NA01, the /mroot/etc/disk_fw/X225_ST336704FC.NA02.LOD file is used to update every eligible disk when you execute this
command.
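The naming convention described above can be sketched as follows. The parse_fw_filename and needs_update helpers are hypothetical, not ONTAP functions; they only model the documented "product-ID.revision.LOD" layout and the revision comparison the node performs.

```python
# Sketch of the /mroot/etc/disk_fw firmware file naming convention:
# "<product-ID>.<revision>.LOD", e.g. X225_ST336704FC.NA02.LOD.
def parse_fw_filename(name: str) -> tuple:
    stem, ext = name.rsplit(".", 1)
    if ext != "LOD":
        raise ValueError("not a firmware file: " + name)
    product_id, revision = stem.rsplit(".", 1)
    return product_id, revision

def needs_update(disk_revision: str, file_revision: str) -> bool:
    # The node compares each disk's current revision against the revision
    # embedded in the file name; e.g. a disk at NA01 is eligible for NA02.
    return disk_revision < file_revision

product, rev = parse_fw_filename("X225_ST336704FC.NA02.LOD")
```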
Automatic versus Manual Firmware Download
The firmware is automatically downloaded to those disks that report previous versions of firmware following a system boot or
disk insertion. Note that:
• A manual download is a disruptive operation that makes disks inaccessible for up to five minutes after the download is
started. Network sessions that use the node must be terminated before running the storage disk firmware update
command.
• The firmware is not automatically downloaded to the node's partner node in an HA pair.
• The firmware is not automatically downloaded to unowned disks on nodes configured to use software-based disk ownership.
• The bkg-firmware-update parameter controls how the automatic firmware download feature works:
◦ If the bkg-firmware-update parameter is set to on, then the storage disk firmware update will update spares and
filesystem disks in a nondisruptive manner in the background after boot. Firmware downloads for these disks will be
done sequentially by temporarily taking them offline one at a time for the duration of the download. After the firmware is
updated, the disk will be brought back online and restored to its normal operation.
During an automatic download to an HA environment, the firmware is not downloaded to the disks owned by the HA partner.
When you use the storage disk firmware update command:
• The firmware is updated on every disk regardless of whether it is on the A-loop, the B-loop, or in an HA environment.
• If the node is configured in a software-based disk ownership system, only disks owned by this node are updated.
During an automatic firmware download in a MetroCluster(TM) environment, the firmware is not downloaded to the disks
owned by the partner cluster. During both manual and automatic firmware download in a MetroCluster-over-IP environment, the
firmware is not downloaded to any remote disks located at the partner cluster while Disaster Recovery is in progress.
Follow the instructions in "How to Update the Firmware for an HA Pair in a Cluster" to ensure that the updating process is
successful. Data ONTAP supports redundant path configurations for disks in a non-HA configuration. The firmware is
automatically downloaded to disks on the A-loop or B-loop of redundant configurations that are not configured in an HA pair
and are not configured to use software-based disk ownership.
Automatic Background Firmware Update
The firmware can be updated in the background so that the firmware update process does not impact the clients. This
functionality is controlled with the bkg-firmware-update parameter. You can modify the parameter by using the CLI storage
disk option modify -node node_name -bkg-firmware-update on|off command. The default value for this parameter
is "on".
When this parameter is set to "off", storage disk firmware update updates the firmware in automated mode; all disks
with an older firmware revision are updated, regardless of whether they are spare or filesystem disks.
When this parameter is set to "on", background disk firmware update updates firmware in automated mode
only on disks that can be successfully taken offline from active filesystem RAID groups and from the spare pool. To ensure a
faster boot process, the firmware is not downloaded to spares and filesystem disks at boot time.
This provides the highest degree of safety available, without the cost of copying data from each disk in the system twice. Disks
are taken offline one at a time and then the firmware is updated on them. The disk is brought online after the firmware update
and a mini/optimized reconstruct happens for any writes that occurred while the disk was offline. Background disk firmware
update will not occur for a disk if its containing RAID group or the volume is not in a normal state (for example, if the volume/
plex is offline or the RAID group is degraded). However, due to the continuous polling nature of background disk firmware
update, firmware updates will resume after the RAID group/plex/volume is restored to a normal mode. Similarly, background
disk firmware updates are suspended for the duration of any reconstruction within the system.
How to Update the Firmware for an HA Pair in a Cluster
The best way to update the firmware in a cluster with HA pairs is to use automatic background firmware update by enabling the
-bkg-firmware-update parameter for each node. You can do this by
entering the storage disk option modify -node node_name -bkg-firmware-update on command. Alternatively, use
the following procedure to successfully perform a manual update on the disks in an HA environment:
• Make sure that the nodes are not in takeover or giveback mode.
• Download the latest firmware on both the nodes by using the storage firmware download command.
• Install the new disk firmware on Node A's disks by entering the storage disk firmware update -node node-A
command.
• Wait until the storage disk firmware update command completes on Node A, and then install the new disk firmware
on Node B's disks by entering the storage disk firmware update -node node-B command.
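The manual HA procedure above can be sketched as a simple command sequencer. The command strings mirror the CLI syntax in this section; ha_pair_update_commands is a hypothetical helper, not an ONTAP interface, and how the commands are actually delivered to the cluster shell (for example, over SSH) is left out.

```python
# Sketch of the manual disk firmware update sequence for an HA pair.
def ha_pair_update_commands(node_a: str, node_b: str) -> list:
    return [
        "storage firmware download",                     # run on both nodes first
        f"storage disk firmware update -node {node_a}",  # Node A; wait for completion
        f"storage disk firmware update -node {node_b}",  # then Node B
    ]
```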
Examples
If you need to view the current firmware versions, use the storage disk show -fields firmware-revision
command. The following example displays partial output from the storage disk show -fields firmware-
revision command, where the firmware version for the disks is NA01:
The following example updates the firmware on all disks owned by cluster-node-01:
Warning: Disk firmware updates can be disruptive to the system. Updates involve
power cycling all of the affected disks, as well as suspending disk
I/O to the disks being updated. This delay can cause client
disruption. Takeover/giveback operations on a high-availability (HA)
group will be delayed until the firmware update process is complete.
Disk firmware updates should only be done one node at a time. Disk
firmware updates can only be performed when the HA group is healthy;
they cannot be performed if the group is in takeover mode.
The following example updates the firmware on disk 1.5.0 which is owned by node cluster-node-04:
Warning: Disk firmware updates can be disruptive to the system. Updates involve
power cycling all of the affected disks, as well as suspending disk
I/O to the disks being updated. This delay can cause client
disruption. Takeover/giveback operations on a high-availability (HA)
group will be delayed until the firmware update process is complete.
Disk firmware updates should only be done one node at a time. Disk
firmware updates can only be performed when the HA group is healthy;
they cannot be performed if the group is in takeover mode.
Related references
storage disk option modify on page 977
storage firmware download on page 1025
storage disk show on page 950
Description
The storage disk option modify command modifies the background firmware update setting and the automatic copy
setting, controls automatic disk assignment of all disks assigned to a specified node, and modifies the policy for automatic
assignment of unowned disks.
Parameters
-node {<nodename>|local} - Node
This parameter specifies the node that owns the disks whose options are to be modified.
[-bkg-firmware-update {on|off}] - Background Firmware Update
This parameter specifies whether firmware updates run as a background process. The default setting is on,
which specifies that firmware updates to spare disks and file system disks are performed nondisruptively via a
background process. If the option is turned off, automatic firmware updates occur at system startup or during
disk insertion.
[-autocopy {on|off}] - Auto Copy
This parameter specifies whether data is to be automatically copied from a failing disk to a spare disk in the
event of a predictive failure. The default setting is on. It is sometimes possible to predict a disk failure based
on a pattern of recovered errors that have occurred. In such cases, the disk reports a predictive failure. If this
option is set to on, the system initiates Rapid RAID Recovery to copy data from the failing disk to an available
spare disk. When data is copied, the disk is marked as failed and placed in the pool of broken disks. If a spare
is not available, the node continues to use the disk until it fails. If the option is set to off, the disk is
immediately marked as failed and placed in the pool of broken disks. A spare is selected and data from the
missing disk is reconstructed from other disks in the RAID group. The disk does not fail if the RAID group is
already degraded or is being reconstructed. This ensures that a disk failure does not lead to the failure of the
entire RAID group.
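The -autocopy decision logic described above can be sketched as follows. This is a hedged illustration only; the function and its inputs are hypothetical, not an ONTAP interface.

```python
# Illustrative sketch of how a predictive disk failure is handled,
# depending on the -autocopy setting and system state.
def on_predictive_failure(autocopy_on: bool, spare_available: bool,
                          raid_group_degraded: bool) -> str:
    if raid_group_degraded:
        # The disk is not failed while its RAID group is already degraded
        # or reconstructing, so one more failure cannot take the group down.
        return "keep-using-disk"
    if autocopy_on:
        if spare_available:
            return "rapid-raid-recovery"   # copy data to a spare, then fail the disk
        return "keep-using-disk"           # no spare: use the disk until it fails
    return "fail-and-reconstruct"          # off: fail now, rebuild from the RAID group
```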
[-autoassign {on|off}] - Auto Assign
This parameter specifies whether automatic assignment of unowned disks is enabled or disabled. The default
setting is on. This parameter is used to set both a node-specific and a cluster-wide disk option.
[-autoassign-policy {default|bay|shelf|stack}] - Auto Assignment Policy
This parameter defines the granularity at which auto assignment should work. This option is ignored if the
-autoassign option is off. Auto assignment can be done at the stack/loop, shelf, or bay level. The possible
values for the option are default, stack, shelf, and bay. The default value is platform dependent: it is stack for
all non-entry platforms and single-node systems, and bay for entry-level platforms.
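The platform-dependent default above can be expressed as a one-line rule. This is illustrative only; the entry_level flag is a hypothetical stand-in for the platform classification that ONTAP determines internally.

```python
# Illustrative rule for the documented -autoassign-policy default.
def default_autoassign_policy(entry_level: bool) -> str:
    # bay for entry-level platforms; stack for all other platforms,
    # including single-node systems.
    return "bay" if entry_level else "stack"
```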
Examples
The following example sets the background firmware update setting to on for all disks belonging to a node named node0:
The following example shows how to enable auto assignment for the disks on node1:
The following example shows how to modify the auto assignment policy on node1:
Description
The storage disk option show command displays the settings of the following disk options: background firmware update, auto copy, auto assign, and auto assignment policy.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the node that owns the disks. If this parameter is not specified, the command displays information
about the disk options on all the nodes.
[-bkg-firmware-update {on|off}] - Background Firmware Update
Selects the disks that match this parameter value.
[-autocopy {on|off}] - Auto Copy
Selects the disks that match this parameter value.
[-autoassign {on|off}] - Auto Assign
Displays the auto assignment status of unowned disks. The default value is on.
[-autoassign-policy {default|bay|shelf|stack}] - Auto Assignment Policy
Selects the disks that match the automatic assignment policy value:
• default
• stack
• shelf
• bay
Examples
The following example displays disk-option settings for disks owned by all nodes in the cluster:
Description
The storage encryption disk destroy command cryptographically destroys a self-encrypting disk (SED), making it
incapable of performing I/O operations. This command performs the following operations:
• Employs the inherent erase capability of SEDs to cryptographically sanitize the disk
• Changes the data and FIPS authentication keys to random values that are not recorded except within the SED.
Use this command with extreme care. The only mechanism to restore the disk to usability (albeit without the data) is the
storage encryption disk revert-to-original-state operation that is available only on disks that have the physical
secure ID (PSID) printed on the disk label.
The destroy command requires you to enter a confirmation phrase before proceeding with the operation.
The command releases the cluster shell after launching the operation. Monitor the output of the storage encryption disk
show-status command for command completion.
Upon command completion, remove the destroyed SED from the system.
Parameters
-disk <disk path name> - Disk Name
This parameter specifies the name of the disk you want to cryptographically destroy. See the man page for the
storage disk modify command for information about disk-naming conventions.
Examples
The following command cryptographically destroys the disk 1.10.20:
If you do not enter the correct confirmation phrase, the operation is aborted:
The following command quickly cryptographically destroys all system disks, including those in active use in aggregates
and shared devices:
Related references
storage disk show on page 950
storage encryption disk revert-to-original-state on page 982
storage encryption disk show-status on page 987
Description
The storage encryption disk modify command changes the data protection parameters of self-encrypting disks (SEDs)
and FIPS-certified SEDs (FIPS SEDs); it also modifies the FIPS-compliance AK (FIPS AK) of FIPS SEDs. The current data
AK and FIPS AK of the device are required to effect changes to the respective AKs and FIPS compliance. The current and new
AKs must be available from the key servers or onboard key management.
The command releases the cluster shell after launching the operation. Monitor the output of the storage encryption disk
show-status command for command completion.
Note: To properly protect data at rest on a FIPS SED and place it into compliance with its FIPS certification requirements, set
both the Data and FIPS AKs to a value other than the device's default key; depending on the device type, the default may be
manufacture secure ID (MSID), indicated by a key ID with the special value 0x0, or a null key represented by a blank AK.
Verify the key IDs by using the storage encryption disk show and storage encryption disk show-fips
commands.
Parameters
-disk <disk path name> - Disk Name
This parameter specifies the name of the SED or FIPS SED that you want to modify.
{ [-data-key-id <text>] - Key ID of the New Data Authentication Key
This parameter specifies the key ID associated with the data AK that you want the SED to use for future
authentications. When the provided key ID is the MSID, data at rest on the SED is not protected from
unauthorized access. Setting this parameter to a non-MSID value automatically engages the power-on-lock
protections of the device, so that when the device is power-cycled, the system must authenticate with the
device using the AK to reenable I/O operations. You cannot specify the null default key; use MSID instead.
| [-fips-key-id <text>]} - Key ID of the New Authentication Key for FIPS Compliance
This parameter specifies the key ID associated with the FIPS AK that you want the FIPS SED to apply to SED
credentials other than the one that protects the data. When the value is not the MSID, these credentials are
changed to the indicated AK, and other security-related items are set to conform to the FIPS certification
requirements ("FIPS compliance mode") of the device. You may set the -fips-key-id to any one of the key
IDs known to the system. The FIPS key ID may, but does not have to, be the same as the data key ID
parameter. Setting -fips-key-id to the MSID key ID value disables FIPS compliance mode and restores the
FIPS-related authorities and other components as required (other than data) to their default settings. A
non-MSID FIPS-compliance key may be applied only to a FIPS SED.
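The key-ID rules above can be summarized in a short sketch. These helpers are hypothetical, not ONTAP functions; they assume only what this section states, namely that the MSID default is reported as the special key ID 0x0 and that a null default key appears as a blank key ID.

```python
# Illustrative classification of an SED's protection state from its key IDs.
MSID_KEY_ID = "0x0"   # default manufacture secure ID, not secret

def data_at_rest_protected(data_key_id: str) -> bool:
    # Data is protected only when the data AK is neither the MSID default
    # nor the null default key (shown as a blank key ID).
    return data_key_id not in (MSID_KEY_ID, "")

def fips_compliance_enabled(fips_key_id: str) -> bool:
    # A non-MSID FIPS AK places a FIPS SED into FIPS-compliance mode;
    # setting it back to the MSID disables that mode.
    return fips_key_id not in (MSID_KEY_ID, "")
```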
Examples
The following command changes both the AK and the power-cycle protection to values that protect the data at rest on the
disk. Note that the -data-key-id and -fips-key-id parameters require one of the key IDs that appear in the output of
the security key-manager query command.
Related references
storage encryption disk show-status on page 987
storage encryption disk show on page 984
security key-manager query on page 504
security key-manager create-key on page 501
Description
Some self-encrypting disks (SEDs) are capable of an operation that restores them as much as possible to their as-manufactured
state. The storage encryption disk revert-to-original-state command invokes this special operation that is
available only in SEDs that have the physical secure ID (PSID) printed on their labels.
The PSID is unique to each SED, meaning the command can revert only one SED at a time. The disk must be in a "broken" or
"spare" state as shown by the output of the storage disk show command.
The operation in the SED accomplishes the following changes:
• Sanitizes all data by changing the disk encryption key to a new random value
• Sets the data authentication key (AK) and FIPS AK to the default values
The command releases the cluster shell after launching the operation. Monitor the output of the storage encryption disk
show-status command for command completion.
When the operation is complete, it is possible to return the SED to service using the storage disk unfail command in
advanced privilege mode. To do so, you might also need to reestablish ownership of the SED using the storage disk
assign command.
Parameters
-disk <disk path name> - Disk Name
The name of the SED to be reverted to its as-manufactured state. See the man page for the storage disk
modify command for information about disk-naming conventions.
-psid <text> - Physical Secure ID
The PSID printed on the SED label.
Related references
storage disk show on page 950
storage encryption disk show-status on page 987
storage disk unfail on page 967
storage disk assign on page 940
Description
The storage encryption disk sanitize command cryptographically sanitizes one or more self-encrypting disks (SEDs),
making the existing data on the SED impossible to retrieve. This operation employs the inherent erase capability of SEDs to
perform all of the following changes:
• Sanitizes all data by changing the disk encryption key to a new random value
• Sets the data authentication key (AK) to the default AK (manufacture secure ID/MSID or null, depending on the device
type)
There is no method to restore the disk encryption key to its previous value, meaning that you cannot recover the data on the
SED. Use this command with extreme care.
The sanitize command requires you to enter a confirmation phrase before proceeding with the operation.
The command releases the cluster shell after launching the operation. Monitor the output of the storage encryption disk
show-status command for command completion.
When the operation is complete, it is possible to return the SED to service using the storage disk unfail command in
advanced privilege mode. To do so, you might also need to reestablish ownership of the SED using the storage disk
assign command.
Parameters
-disk <disk path name> - Disk Name
This parameter specifies the names of the SEDs that you want to cryptographically sanitize. See the man page for
the storage disk modify command for information about disk-naming conventions.
[-force-all-states [true]] - Sanitize All Matching Disks
When this parameter is false or not specified, the operation defaults to spare and broken disks only, as
reported in the output of the storage disk show command. When you specify this parameter as true, it
allows you to cryptographically sanitize all matching disk names regardless of their state, including those in
active use in aggregates. This allows a quick erasure of all system data if you use the -disk parameter with
the asterisk wildcard (*). If you sanitize active disks, the nodes might not be able to continue operation, and
might halt or panic.
Examples
If you do not enter the correct confirmation phrase, the operation is aborted:
The following command quickly cryptographically sanitizes all system disks, including those in active use in aggregates
and shared devices:
Related references
storage disk show on page 950
storage encryption disk show-status on page 987
storage disk unfail on page 967
storage disk assign on page 940
Description
The storage encryption disk show command displays information about encrypting drives. When no parameters are
specified, the command displays the following information about all encrypting drives:
• The key ID associated with the data authentication key ("data AK")
In MetroCluster systems, the information is valid from the cluster that owns the drive, or from the DR cluster when in
switchover mode. If information is not available, perform the show command from the cluster partner.
You can use the following parameters together with the -disk parameter to narrow the selection of displayed drives or the
information displayed about them.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-fips ]
If you specify this parameter, the command displays the key ID associated with the FIPS-compliance
authentication key ("FIPS AK") instead of the data key ID.
| [-instance ]}
If you specify this parameter, the command displays detailed disk information about all disks, or only those
specified by a -disk parameter.
[-disk <disk path name>] - Disk Name
If you specify this parameter, the command displays information about the specified disks. If you specify a
single disk path name, the output is the same as when you use the -instance parameter. See the man page for
the storage disk modify command for information about disk-naming conventions. Default is all self-
encrypting disks.
[-container-name <text>] - Container Name
This parameter specifies the container name associated with an encrypting drive. If you specify an aggregate
name or other container name, only the encrypting drives in that container are displayed. See the man page for
the storage disk show command for a description of the container name. Use the storage aggregate
show-status and storage disk show commands to determine which aggregates the drives are in.
[-container-type {aggregate | broken | foreign | labelmaint | maintenance | mediator |
remote | shared | spare | unassigned | unknown | unsupported}] - Container Type
This parameter specifies the container type associated with an encrypting drive. If you specify a container
type, only the drives with that container type are displayed. See the man page for the storage disk show
command for a description of the container type.
[-data-key-id <text>] - Key ID of the Current Data Authentication Key
This parameter specifies the key ID associated with the data AK that the encrypting drive requires for
authentication with its data-protection authorities. The special key ID 0x0 indicates that the current data AK of
the drive is the default manufacture secure ID (MSID) that is not secret. Some devices employ an initial null
default AK that appears as a blank data-key-id; you cannot specify a null data-key-id value. To properly
protect data at rest on the device, modify the data AK using a key ID that is not a default value (MSID or null).
When you modify the data AK with a non-MSID key ID, the system automatically sets the device's power-on
lock enable control so that authentication with the data AK is required after a device power-cycle. Use
storage encryption disk modify -data-key-id key-id to protect the data. Use storage
encryption disk modify -fips-key-id key-id to place the drives into FIPS-compliance mode.
[-fips-key-id <text>] - Key ID of the Current FIPS Authentication Key
This parameter specifies the key ID associated with the FIPS authentication key ("FIPS AK") that the system
must use to authenticate with FIPS-compliance authorities in FIPS-certified drives. This parameter may not be
set to a non-MSID value in drives that are not FIPS-certified.
[-type {ATA | BSAS | FCAL | FSAS | LUN | MSATA | SAS | SSD | VMDISK | SSD-NVM}] - Disk Type
This parameter selects the drive type to include in the output.
Examples
The following command displays information about all encrypting drives:
Note in the example that only disk 1.10.2 is fully protected, with FIPS-compliance mode set, power-on lock enabled, and an
AK that is not the default MSID.
The following command displays information about the protection mode and FIPS key ID for all encrypting drives:
Note again that only disk 1.10.2 is fully protected with FIPS-compliance mode set, power-on-lock enabled, and a data AK
that is not the default MSID.
The following command displays the individual fields for disk 1.10.2:
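The example outputs are not reproduced in this excerpt. The three invocations above might be sketched as follows (the -fips view used for the protection-mode display is an assumption; the -disk and -fields forms use only parameters documented in this section):
cluster1::> storage encryption disk show
cluster1::> storage encryption disk show -fips
cluster1::> storage encryption disk show -disk 1.10.2 -fields data-key-id,fips-key-id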
Related references
storage disk show on page 950
storage aggregate show-status on page 865
storage encryption disk modify on page 981
Description
The storage encryption disk show-status command displays the results of the latest destroy, modify, or sanitize
operation of the storage encryption disk command family. Use this command to view the progress of these operations on
self-encrypting disks (SEDs).
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node Name
If you specify this parameter, the command displays disk encryption status for the nodes that match this
parameter.
[-is-fips-support {true|false}] - Node Supports FIPS Disks
If you specify this parameter, the command displays disk encryption status for the nodes that match this
parameter (true means the node supports FIPS-certified self-encrypting drives).
[-latest-op <Storage Disk Encryption Operation>] - Latest Operation Requested
If you specify this parameter, the command displays disk encryption status for the nodes with a most recent
storage encryption disk operation that matches this parameter (one of destroy, modify, revert-
to-original-state, sanitize, or unknown).
[-op-start-time <MM/DD/YYYY HH:MM:SS>] - Operation Start Time
Selects the nodes with operation start times that match this parameter.
[-op-execute-time <integer>] - Execution Time in Seconds
If you specify this parameter, the command displays disk encryption status for the nodes with operation
execution time that matches this parameter. The operation may be partial or completed.
[-disk-start-count <integer>] - Number of Disks Started
If you specify this parameter, the command displays disk encryption status for the nodes that started this
number of SEDs in their latest operation.
[-disk-done-count <integer>] - Number of Disks Done
Selects the nodes that report this number of SEDs having completed the latest operation, successfully or not.
Examples
When no operation has been requested since node boot, the status for that node is empty. If you enter a node name, the
output is in the same format as for the -instance parameter.
Once an operation begins, the status is dynamic until all devices have completed. When disks are modified, sanitized, or
destroyed, sequential executions of storage encryption disk show-status appear as in this example that shows
the progress of a modify operation on three SEDs on each node of a two-node cluster:
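The output itself is not reproduced in this excerpt; the repeated invocation might be sketched as follows (parameters taken from this section):
cluster1::> storage encryption disk show-status
cluster1::> storage encryption disk show-status -node node1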
Related references
storage encryption disk on page 979
storage encryption disk destroy on page 979
storage encryption disk modify on page 981
storage encryption disk revert-to-original-state on page 982
storage encryption disk sanitize on page 983
Description
The storage failover giveback command returns storage that has failed over to a node's partner back to the home node.
This operation fails if other resource-intensive operations (for instance, system dumps) are running and make the giveback
operation potentially dangerous or disruptive. Some options are available only at the advanced privilege level and higher. Run
the storage failover show-giveback command to check the status of giveback operations.
Note:
• If the system ID of the partner has changed while the node is in takeover mode, the storage failover giveback
command updates the ownership of the partner's disks to the new system ID while giving back.
• If the giveback operation fails due to the operation being vetoed by a subsystem, check the syslog or EMS output for a
subsystem-specific reason for the abort. The corrective action is subsystem-specific and is detailed in the corrective action
portion of the message. Follow the corrective action specified by the subsystem and then reissue the storage failover
giveback command. If you cannot perform the corrective action, then use the override-vetoes option in the
storage failover giveback command to force the giveback.
• If the giveback operation fails because the node cannot communicate with its partner, check the EMS output for the
corrective action. Follow the corrective action and then reissue the storage failover giveback command. If you
cannot perform the corrective action, then use the -require-partner-waiting false option in the storage
failover giveback command to force the giveback. This parameter is available only at the advanced privilege level
and higher.
• If the node does not receive notification that the partner has brought online the given-back aggregate and its volumes, the
storage failover show-giveback command displays the giveback status for the aggregate as failed. A possible
reason for this failure is that the partner is overloaded and slow in bringing the aggregate online. Run the storage
aggregate show command to verify that the aggregate and its volumes are online on the partner node. The node will not
attempt the giveback operation for remaining aggregates. To force the giveback, use the -require-partner-waiting
false option in the storage failover giveback command. This parameter is available only at the advanced
privilege level and higher.
Parameters
{ -ofnode {<nodename>|local} - Node to which Control is Givenback
Specifies the node whose storage is currently taken over by its partner and will be given back by the giveback
operation.
| -fromnode {<nodename>|local}} - Node Initiating Giveback
Specifies the node that currently holds the storage that is to be returned to the partner node.
[-require-partner-waiting {true|false}] - Require Partner in Waiting (privilege: advanced)
If this optional parameter is used and set to false, the storage is given back regardless of whether the partner
node is available to take back the storage or not. If this parameter is used and set to true, the storage will not be
given back if the partner node is not available to take back the storage. If this parameter is not used, the behavior is the
same as setting it to true: the storage is given back only when the partner node is available to take it back.
Examples
The following example gives back storage that is currently held by a node named node1. The partner must be available for
the giveback operation to occur.
The following example gives back only the CFO aggregates to a node named node2 (the aggregates are currently held by
a node named node1). The partner must be available for the giveback operation to occur, and the veto-giveback process
can be overridden.
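Sketches of the two invocations (outputs omitted; -fromnode, -ofnode, and the override-vetoes option appear in this section, while the -only-cfo-aggregates flag is an assumption not documented in this excerpt):
cluster1::> storage failover giveback -fromnode node1
cluster1::> storage failover giveback -ofnode node2 -only-cfo-aggregates true -override-vetoes true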
Related references
storage failover modify on page 990
storage failover show-giveback on page 1008
storage aggregate show on page 837
Description
The storage failover modify command changes the storage-failover options for a node. Some options are available only
at the advanced privilege level and higher.
Parameters
-node {<nodename>|local} - Node
This specifies the node whose storage-failover options are to be modified.
{ [-enabled {true|false}] - Takeover Enabled
This optionally specifies whether storage failover is enabled. The default setting is true.
| [-mode {ha|non_ha}]} - HA Mode
This specifies whether the node is set up in high-availability mode or stand-alone mode. If the node is a
member of a high-availability configuration, set the value to ha. If the node is stand-alone, set the value to
non_ha. Before setting the HA mode, you must complete the platform dependent steps to set up the system in
a stand-alone or HA configuration as shown in the documentation for your platform.
Examples
The following example enables the storage-failover service on a node named node0:
The following examples enable storage-failover takeover on a short uptime of 30 seconds on a node named node0:
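Sketches of the two invocations (outputs omitted; only -node and -enabled are documented above, and the short-uptime parameter names are assumptions not documented in this excerpt):
cluster1::> storage failover modify -node node0 -enabled true
cluster1::> storage failover modify -node node0 -onshort-uptime true -short-uptime 30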
Description
The storage failover show command displays information about storage-failover configurations. By default, the command
displays the following information:
• Node name.
• The current state of storage failover. If takeover is disabled, the reason is displayed.
To display detailed information about storage failover on a specific node, run the command with the -node parameter. The
detailed view adds the following information:
• Node NVRAM ID.
• Partner State
◦ NVRAM_DOWN
◦ OPERATOR_DISABLE_NVRAM
◦ PARTNER_RESET
◦ FM_TAKEOVER
◦ NVRAM_MISMATCH
◦ OPERATOR_DENY
◦ CLUSTER_DISABLE
◦ VERSION
◦ SHELF_HOT
◦ REVERT_IN_PROGRESS
◦ HALT_NOTKOVER
◦ TAKEOVER_ON_PANIC
◦ DISABLED
◦ DEGRADED
◦ MBX_UNKNOWN
◦ FM_VERSION
◦ PARTNER_DISABLED
◦ OPERATOR_DENY
◦ NVRAM_MISMATCH
◦ VERSION
◦ IC_ERROR
◦ BOOTING
◦ SHELF_HOT
◦ PARTNER_REVERT_IN_PROGRESS
◦ LOCAL_REVERT_IN_PROGRESS
◦ PARTNER_TAKEOVER
◦ LOCAL_TAKEOVER
◦ HALT_NOTKOVER
◦ LOG_UNSYNC
◦ UNKNOWN
◦ WAITING_FOR_PARTNER
◦ LOW_MEMORY
◦ HALTING
◦ MBX_UNCERTAIN
◦ NO_AUTO_TKOVER
• Whether operator-initiated planned takeover will be optimized for performance by relocating SFO (non-root) aggregates
serially to the partner prior to takeover.
You can specify additional parameters to select the displayed information. For example, to display information only about
storage-failover configurations whose interconnect is down, run the command with -interconnect-up false.
• Node name
• Whether the node checks its partner's readiness before initiating a giveback operation
• The time, in seconds, that the node remains unresponsive before its partner initiates a takeover operation
• Whether the node automatically takes over for its partner if the partner fails
• Whether the node automatically takes over for its partner if the partner panics
• Whether the node automatically takes over for its partner if the partner reboots
• IP address on which the partner node listens for Hardware Assist alerts
• Port number on which the partner node listens for Hardware Assist alerts
• Whether operator-initiated planned takeover will be optimized for performance by relocating SFO (non-
root) aggregates serially to the partner prior to takeover
If this parameter is specified when the privilege level is set to advanced or higher, the command displays the
information in the previous list and the following additional information:
• Whether the node takes over for its partner if its partner fails after a period of time, which is listed in the
following field
• The number of seconds before the node takes over for its partner
• The number of times the node attempts an automatic giveback operation within a period of time
• The number of minutes in which the automatic giveback attempts can occur
• The interval at which the node reads its partner node's status from the mailbox disks
• The interval at which the node writes its status to the mailbox disks
| [-takeover-status ]
Displays the following information:
• Node name
• Partner name
• Takeover enabled
• Interconnect up
• State
• Node NVRAM ID
• Partner NVRAM ID
• Node name
• The time at which the last takeover or giveback operation occurred, in microseconds
• Node name
• Maximum resource-table index number
• Node name
• Fast timeout
• Slow timeout
• Mailbox timeout
• Connection timeout
• Operator timeout
• Firmware timeout
• Dump-core timeout
• Booting timeout
• Reboot timeout
• Node name
• Transit Timeout
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the nodes whose name matches this parameter value.
[-partner-name <text>] - Partner Name
Selects the nodes that have the specified partner-name setting.
[-nvramid <integer>] - Node NVRAM ID
Selects the nodes that have the specified NVRAM ID setting.
• NOT_INIT
• DISABLED
• DEGRADED
• MBX_UNKNOWN
• FM_VERSION
• PARTNER_DISABLED
• OPERATOR_DENY
• NVRAM_MISMATCH
• VERSION
• IC_ERROR
• BOOTING
• SHELF_HOT
• PARTNER_REVERT_IN_PROGRESS
• LOCAL_REVERT_IN_PROGRESS
• PARTNER_TAKEOVER
• LOCAL_TAKEOVER
• HALT_NOTKOVER
• LOG_UNSYNC
• UNKNOWN
• WAITING_FOR_PARTNER
• LOW_MEMORY
• HALTING
• MBX_UNCERTAIN
• NO_AUTO_TKOVER
• OPERATOR COMPLETED
• DEBUGGER COMPLETED
• PROGRESS COUNTER
• I/O ERROR
• BAD CHECKSUM
• RESERVED
• UNKNOWN
• INITIALIZING
• BOOTING
• BOOT FAILED
• WAITING
• KERNEL LOADED
• UP
• IN DEBUGGER
• DUMPING CORE
• HALTED
• REBOOTING
• DUMPING SPARECORE
• MULTI-DISK PANIC
• IN TAKEOVER
• MBX_STATUS_NODISKS
• MBX_STATUS_UNCERTAIN
• MBX_STATUS_STALE
• MBX_STATUS_CONFLICTED
• MBX_STATUS_OLD_VERSION
• MBX_STATUS_NOT_FOUND
• MBX_STATUS_WRONG_STATE
• MBX_STATUS_BACKUP
• MBX_UNKNOWN
• MBX_TAKEOVER_DISABLED
• MBX_TAKEOVER_ENABLED
• MBX_TAKEOVER_ACTIVE
• MBX_GIVEBACK_DONE
• UNINIT - Uninitialized
• CLOSED - Closed
• RESERVE_NO_DISKS - no disk reservations made during takeover, nor are disk reservations released
during giveback
• RESERVE_LOCK_DISKS_ONLY - only mailbox disks are released during takeover and released during
giveback
• RESERVE_ONLY_AT_TAKEOVER - reservations are issued only at takeover time. All disks are reserved.
All reservations are released at giveback
[-takeover-reason <FM Takeover Reason>] - Reason why takeover triggered (privilege: advanced)
Selects the nodes that have the specified takeover reason. Possible values are:
Examples
The following example displays information about all storage-failover configurations:
• Node name
• Giveback Status
You can specify additional parameters to display only the information that matches those parameters. For example, to display
information only about a particular aggregate, run the command with the -aggregate aggregate_name parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If this parameter is used, the command displays information about the giveback status of the aggregates
belonging to the HA partner of the specified node.
[-aggregate <text>] - Aggregate
If this parameter is used, the command displays information about the giveback status of the specified
aggregate.
[-giveback-status <text>, ...] - Aggregates Giveback State
If this parameter is used, the command displays information about the aggregates with the specified giveback
status.
[-destination <text>] - Destination for Giveback
If this parameter is used, the command displays information about the giveback status of the aggregates whose
destination after the giveback is the specified node.
Examples
The following example displays information about giveback status on all nodes:
• Node name
• Node takeover status - This contains descriptive information about the phase of takeover.
• Aggregate
• Aggregate takeover status - This contains the following information:
◦ Takeover status of the aggregate, such as "Done", "Failed", "In progress" and "Not attempted yet".
You can specify additional parameters to display only the information that matches those parameters. For example, to display
information only about a particular node, run the command with the -node node_name parameter.
Parameters
{ [-fields <fieldname>, ...]
If this parameter is specified, the command displays the specified fields for all nodes, in column style output.
| [-instance ]}
If this parameter is specified, the command displays the same detailed information as for the -node parameter,
but for all nodes.
[-node {<nodename>|local}] - Node Name
If this parameter is specified, the command displays information about the takeover status of the specified
node, and the takeover status of the aggregates being taken over.
[-node-takeover-status <text>] - Node's Takeover Status
If this parameter is specified, the command displays information about the takeover status of the nodes with
the specified node-takeover-status. The command also displays the takeover status of the aggregates belonging
to the node being taken over.
[-aggregate <text>] - Aggregate Being Taken Over
If this parameter is specified, the command displays information about the takeover status of the specified
aggregate, and the takeover status of the nodes containing the specified aggregate.
[-aggregate-takeover-status <text>] - Aggregate's Takeover Status
If this parameter is specified, the command displays information about the takeover status of the aggregates
with the specified aggregate takeover status, and the takeover status of the nodes containing those aggregates.
Examples
The following example shows the takeover status of two nodes, nodeA and nodeB, in a High Availability (HA) pair,
when both are in normal mode; neither node has taken over its HA partner. In this case, there is no takeover status for the
aggregates.
The following example shows the takeover status of two nodes, nodeA and nodeB, in an HA pair, when nodeA is in the
SFO phase of an optimized takeover of nodeB. In this case, nodeA does not have information about the takeover status of
nodeB's aggregates.
The following example shows the takeover status of two nodes, nodeA and nodeB, in an HA pair, when nodeA has
completed the SFO phase of an optimized takeover of nodeB (but has not completed the CFO phase of the optimized
takeover). In this case, nodeA has information about the takeover status of nodeB's aggregates.
The following example shows the takeover status of two nodes, nodeA and nodeB, in an HA pair, when nodeA has
completed the SFO and CFO phases of an optimized takeover of nodeB. In this case, nodeA has information about the
takeover status of nodeB's aggregates. Since nodeB is not operational, a Remote Procedure Call (RPC) error is indicated
in the command output.
The following example shows the takeover status of two nodes, nodeA and nodeB, in an HA pair, when nodeA has
aborted the SFO phase of an optimized takeover of nodeB. In this case, nodeA does not have information about the
takeover status of nodeB's aggregates.
Description
The storage failover takeover command initiates a takeover of the partner node's storage.
Parameters
{ -ofnode {<nodename>|local} - Node to Takeover
This specifies the node that is taken over. It is shut down and its partner takes over its storage.
| -bynode {<nodename>|local}} - Node Initiating Takeover
This specifies the node that is to take over its partner's storage.
[-option <takeover option>] - Takeover Option
This optionally specifies the style of takeover operation. Possible values include the following:
• normal - Specifies a normal takeover operation; that is, the partner is given the time to close its storage
resources gracefully before the takeover operation proceeds. This is the default value.
• immediate - Specifies an immediate takeover. In an immediate takeover, the takeover operation is initiated
before the partner is given the time to close its storage resources gracefully. The use of this option results
in an immediate takeover that does not perform a clean shutdown. During a nondisruptive upgrade (NDU), this can result
in an NDU failure.
Attention: If this option is specified, migration of data LIFs from the partner will be delayed even if the
-skip-lif-migration-before-takeover option is not specified. If possible, migrate the data LIFs
to another node prior to specifying this option.
• allow-version-mismatch - If this value is specified, the takeover operation is initiated even if the partner is
running a version of software that is incompatible with the version running on the node. In this case, the
partner is given the time to close its storage resources gracefully before the takeover operation proceeds.
However, the takeover operation will not be allowed if the partner has higher WAFL or RAID label
versions. Use this value as part of a nondisruptive upgrade or downgrade procedure.
• force - If this value is specified, the takeover operation is initiated even if the node detects an error that
normally prevents a takeover operation from occurring. This value is available only at the advanced
privilege level and higher.
Attention: If this option is specified, negotiated takeover optimization is bypassed even if the -bypass-
optimization option is set to false.
Caution: The use of this option can potentially result in data loss. If the HA interconnect is detached or
inactive, or the contents of the failover partner's NVRAM cards are unsynchronized, takeover is
normally disabled. Using the -force option enables a node to take over its partner's storage despite the
unsynchronized NVRAM, which can contain client data that can be lost upon storage takeover.
Caution: The use of this parameter can potentially result in client outage.
[-skip-lif-migration-before-takeover [true]] - Skip Migrating LIFs Away from Node Prior to Takeover
This parameter specifies that LIF migration prior to takeover is skipped. However, if LIFs on this node are
configured for failover, those LIFs may still failover after the takeover has occurred. Without this parameter,
the command attempts to synchronously migrate data and cluster management LIFs away from the node prior
to its takeover. If the migration fails or times out, the takeover is aborted.
[-ignore-quorum-warnings [true]] - Skip Quorum Check Before Takeover
If this parameter is specified, quorum checks will be skipped prior to the takeover. The operation will continue
even if there is a possible data outage due to a quorum issue.
[-override-vetoes [true]] - Override Vetoes
If this is an operator-initiated planned takeover, this parameter specifies whether vetoes should be overridden.
If this parameter is not specified, its value is set to false.
Attention: If this parameter is specified, negotiated takeover will override any vetoes to continue with
takeover.
Examples
The following example causes a node named node0 to initiate a negotiated optimized takeover of its partner's storage:
The following example causes a node named node0 to initiate an immediate takeover of its partner's storage:
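Sketches of the two invocations (outputs omitted; -bynode and -option immediate are documented in this section):
cluster1::> storage failover takeover -bynode node0
cluster1::> storage failover takeover -bynode node0 -option immediate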
Description
The storage failover hwassist show command displays information about the hardware-assisted storage failover status
on each node. By default, the command displays the following information:
• Node name.
Hardware-assisted failover establishes a notification channel from each respective node's service processor to the other (HA
partner) node. If a node becomes unresponsive, its service processor notifies the HA partner of this condition, accelerating
storage failover. By default, hwassist is enabled and configured automatically to use each node's node-mgmt LIF. To modify or
show the hardware-assisted storage failover configuration, use the storage failover modify and storage failover
show commands.
Examples
The following example displays the hardware-assisted failover information for node cluster1-01 and its HA partner node
cluster1-02:
Related references
storage failover modify on page 990
storage failover show on page 993
Description
The storage failover hwassist test command tests the hardware-assist hardware connectivity between the two nodes in an
HA pair. The test result can be one of the following:
• Resource is busy.
• Unexpected abort.
Parameters
-node {<nodename>|local} - Node
This specifies the node from which a test alert is initiated.
Examples
The following command issues a test alert from the node cluster1-01:
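A sketch of the invocation (output omitted; -node is documented above):
cluster1::> storage failover hwassist test -node cluster1-01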
Description
The storage failover hwassist stats clear command clears the statistics information maintained by Hardware
Assist functionality.
Parameters
-node {<nodename>|local} - Node
This specifies the node on which the statistics are to be cleared.
Examples
The following example clears the hwassist statistics on the node cluster1-01:
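A sketch of the invocation (output omitted; -node is documented above):
cluster1::> storage failover hwassist stats clear -node cluster1-01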
Description
The storage failover hwassist stats show command displays statistics about the hardware assist alerts processed by
a node. The command displays the following information for each alert:
• Locally enabled.
• Alert type.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the hwassist statistics for the specified node.
Node: ha1
Local Enabled: true
Partner Inactive Reason: -
The following example displays the hwassist statistics for the node ha1 where hardware assist hardware is not supported.
Node: ha1
Local Enabled: false
Partner Inactive Reason: HW assist is not supported on partner.
Description
The storage failover internal-options show command displays the following information about the storage failover
configuration:
• Node name
• IP address on which the partner node listens to the hardware-assisted takeover alerts
• Port on which the partner node listens to the hardware-assisted takeover alerts
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-more ]
This parameter displays the following additional information:
• Node name
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example displays detailed information about the internal options for storage failover on a node named
node2:
Node: node2
Auto Giveback Enabled: false
Check Partner Enabled: true
Takeover Detection Time (secs): 15
Takeover On Failure Enabled: true
Takeover On Panic Enabled: false
Takeover On Short Uptime Enabled: true
Short Uptime (secs): -
Number of Giveback Attempts: 3
Giveback Attempts Minutes: 10
Propagate Status Via Mailbox: true
Node Status Read Interval (secs): 5
Node Status Write Interval (secs): 5
Failover the Storage when Cluster Ports Are Down: -
Failover Interval when Cluster Ports Are Down (secs): -
Takeover on Reboot Enabled: true
Delay Before Auto Giveback (secs): 300
Hardware Assist Enabled: true
Partner's Hw-assist IP:
Partner's Hw-assist Port: 4444
Hw-assist Health Check Interval (secs): 180
Hw-assist Retry count: 2
HA mode: ha
Bypass Takeover Optimization Enabled: true
Description
The storage failover mailbox-disk show command lists the mailbox disks that are used by storage failover. The
command displays the following information:
• Node name
• Disk name
This command is available only at the advanced privilege level and higher.
Parameters
{ [-fields <fieldname>, ...]
If -fields <fieldname>,... is used, the command displays only the specified fields.
| [-instance ]}
If this parameter is used, the command displays detailed information about all entries.
[-node {<nodename>|local}] - Node
Selects the mailbox disks that are associated with the specified node.
[-location {local|partner|tertiary}] - Mailbox Owner
Selects the mailbox disks that have the specified relationship to the node.
[-diskindex <integer>] - Mailbox Disk Index
Selects the mailbox disk that has the specified index number.
[-diskname <text>] - Mailbox Disk Name
Selects the mailbox disks that match the specified disk name.
[-diskuuid <text>] - Mailbox Disk UUID
Selects the mailbox disks that match the specified UUID.
[-physical-location {local|partner|mediator}] - Mailbox Disk Physical Location
Selects the mailbox disks that match the specified physical location.
[-location-id <nvramid>] - System ID of the Node where the Disk is Attached
Selects the mailbox disks that match the specified location-id.
[-location-name <text>] - Mailbox Disk Location
Selects the mailbox disks that match the specified location-name.
Examples
The following example displays information about the mailbox disks on a node named node1:
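A sketch of the invocation (output omitted; as noted above, the command requires the advanced privilege level, so the sketch elevates privilege first):
cluster1::> set -privilege advanced
cluster1::*> storage failover mailbox-disk show -node node1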
Description
The storage failover progress-table show command displays status information about storage-failover operations. This
information is organized in a resource table. The command displays the following information:
• Node name
• Resource-entry name
• Resource-entry state
• Resource-entry failure code
This command is available only at the advanced privilege level and higher.
Parameters
{ [-fields <fieldname>, ...]
If -fields <fieldname>, ... is used, the command displays only the specified fields.
| [-instance ]}
If this parameter is used, the command displays detailed information about all entries.
[-node {<nodename>|local}] - Node
Selects the status information for the specified node.
[-index <integer>] - Resource Table Index
Selects the status information for the specified index number.
[-entryname <text>] - Resource Table Entry Name
Selects the status information for the specified entry name.
[-state <text>] - Resource Table Entry State
Selects the status information for the specified state. Possible values include UP, START_RUNNING,
START_DONE, START_FAILED, STOP_RUNNING, STOP_FAILED, TAKEOVER_BARRIER, and
ONLY_WHEN_INITD.
[-failurecode <text>] - Entry Failure Code
Selects the status information for the specified failure code. Possible values include OK, FAIL,
FAIL_ALWAYS, HANG, PANIC, and VETO.
[-timedelta <integer>] - Entry Time Delta
Selects the status information for the specified time delta.
Examples
The following example displays the entire storage-failover resource table:
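A sketch of the invocation (output omitted; as noted above, the command requires the advanced privilege level, so the sketch elevates privilege first):
cluster1::> set -privilege advanced
cluster1::*> storage failover progress-table show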
Description
The storage errors show command displays configuration errors with back end storage arrays.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-uid <text>] - UID
Selects the disks that match this parameter value.
[-array-name <array name>] - Array Name
Selects the disks that have the specified name for the storage array that is connected to the cluster.
[-node {<nodename>|local}] - Controller Name
Selects the disks that match this parameter value.
[-disk <disk path name>] - Disk
Selects the disks that match this parameter value.
[-serial-number <text>] - Serial Number
Selects the disks that match this parameter value.
[-error-id <integer>, ...] - Error ID
Selects the disks with error-id values that match this parameter value.
Examples
The following example displays configuration errors seen in the system:
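A sketch of the invocation (output omitted):
cluster1::> storage errors show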
Description
The storage firmware download command downloads the ACP processor, disk, and shelf firmware to a specified node.
This command can also be used to download the disk qualification package.
Use the storage disk firmware update command to install downloaded disk firmware.
Use the system node run local storage download shelf command to install downloaded disk shelf module
firmware.
Use the system node run local storage download acp command to install downloaded ACP processor firmware.
Parameters
-node {<nodename>|local} - Node
This specifies the node to which the firmware is to be downloaded.
-package-url <text> - Package URL
This specifies the path to the firmware package.
The packaged ACP processor, disk, and shelf firmware files need to have ".AFW", ".LOD", and ".SFW" file
extensions, respectively.
The following URL protocols are supported: ftp, http, tftp and file. The file URL scheme can be used to
specify the location of the package to be fetched from an external device connected to the storage controller.
Currently, only USB mass storage devices are supported. The USB device is specified as file://usb0/<filename>. The package must be present in the root directory of the USB mass storage device.
Examples
The following example downloads a disk firmware package with the path ftp://example.com/fw/disk-fw-1.2.LOD.zip to a node named node1:
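Based on the -node and -package-url parameters documented above, such an invocation would take the following form (the cluster prompt is illustrative):
cluster1::> storage firmware download -node node1 -package-url ftp://example.com/fw/disk-fw-1.2.LOD.zip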
Related references
storage disk firmware update on page 974
system node run on page 1272
Description
The storage firmware acp delete command deletes the specified ACP processor firmware file from all nodes that are
currently part of the cluster.
Parameters
-filename <text> - Firmware Filename
Specifies the firmware file to delete.
Examples
The following example deletes the ACP processor firmware file with the name ACP-IOM3.0150.AFW.FVF on each node:
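Based on the -filename parameter above, the command would resemble the following (the cluster prompt is illustrative):
cluster1::> storage firmware acp delete -filename ACP-IOM3.0150.AFW.FVF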
Related references
storage firmware acp show on page 1027
storage firmware acp rename on page 1026
Description
The storage firmware acp rename command renames the specified ACP processor firmware file on each node.
Parameters
-oldname <text> - Old Filename
This parameter specifies the firmware file to rename.
-newname <text> - New Filename
This parameter specifies the new name of the firmware file.
Related references
storage firmware acp show on page 1027
storage firmware acp delete on page 1026
Description
The storage firmware acp show command displays the ACP processor firmware files present on each node.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the files that match the specified node name.
[-filename <text>] - Storage Firmware File
Selects the files that match the specified filename.
Examples
The following example displays the ACP processor firmware files on each node:
Node: Node1
Node: Node2
Description
The storage firmware disk delete command deletes the specified disk firmware file on each node.
Parameters
-filename <text> - Storage Firmware Filename
Specifies the firmware file to delete.
Examples
The following example deletes the disk firmware file with the name X262_SMOOST25SSX.NA06.LOD on each node:
Related references
storage firmware disk show on page 1029
storage firmware disk rename on page 1028
Description
The storage firmware disk rename command renames the specified disk firmware file on each node.
Parameters
-oldname <text> - Old Filename
This parameter specifies the firmware file to rename.
-newname <text> - New Filename
This parameter specifies the new name of the firmware file.
Examples
The following example renames the disk firmware file with the name X262_SMOOST25SSX.NA06.LOD to
X262_SMOOST25SSX.LOD on each node:
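Using the -oldname and -newname parameters above, the command would be (the cluster prompt is illustrative):
cluster1::> storage firmware disk rename -oldname X262_SMOOST25SSX.NA06.LOD -newname X262_SMOOST25SSX.LOD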
Related references
storage firmware disk show on page 1029
Description
The storage firmware disk show command displays the disk firmware files present on each node.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the files that match the specified node name.
[-filename <text>] - Storage Firmware File
Selects the files that match the specified filename.
Examples
The following example displays the disk firmware files on each node:
Node: Node1
Node: Node2
Related references
storage firmware disk delete on page 1028
storage firmware disk rename on page 1028
Description
The storage firmware shelf delete command deletes the specified shelf firmware file from all nodes that are currently
part of the cluster.
Parameters
-filename <text> - Storage Firmware Filename
Specifies the firmware file to delete.
Examples
The following example deletes the shelf firmware file with the name IOM12.0210.SFW on each node:
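With the -filename parameter above, such a command would take this form (the cluster prompt is illustrative):
cluster1::> storage firmware shelf delete -filename IOM12.0210.SFW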
Related references
storage firmware shelf show on page 1031
storage firmware shelf rename on page 1030
Description
The storage firmware shelf rename command renames the specified shelf firmware file on each node.
Parameters
-oldname <text> - Old Filename
This parameter specifies the firmware file to rename.
-newname <text> - New Filename
This parameter specifies the new name of the firmware file.
Examples
The following example renames the shelf firmware file with the name IOM12.0210.SFW to IOM12.000.SFW on each
node:
Description
The storage firmware shelf show command displays the shelf firmware files present on each node.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the files that match the specified node name.
[-filename <text>] - Storage Firmware File
Selects the files that match the specified filename.
Examples
The following example displays the shelf firmware files on each node:
Node: Node1
Node: Node2
Related references
storage firmware shelf delete on page 1030
storage firmware shelf rename on page 1030
Description
The storage iscsi-initiator add-target command adds an iSCSI target to a node's list of targets. This command is
only supported on high-availability shared-nothing virtualized platforms.
Parameters
-node {<nodename>|local} - Node
Specifies the name of the Data ONTAP node to which the iSCSI target will be added.
-label <text> - User Defined Identifier
Specifies a label for the target to be added.
-target-type {external|mailbox|partner|partner2|dr_auxiliary|dr_partner} - Target Type
Specifies the type of the target. The node uses the target type to determine how to use the LUNs. The target types include:
• partner - The partner target should belong to the node's HA partner. This allows the node to access its
partner's disks.
• external - External targets' LUNs can be used by the node but do not play a role in HA.
• dr_auxiliary - The DR auxiliary target for MetroCluster over IP. Not a valid target type for the add-target
command.
• dr_partner - The DR partner target for MetroCluster over IP. Not a valid target type for the add-target
command.
Examples
The following example adds and connects to an iSCSI target from the specified node.
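A sketch of such an invocation, using the parameters documented above (the node name node1 and the label mytarget are hypothetical):
cluster1::> storage iscsi-initiator add-target -node node1 -label mytarget -target-type external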
Description
The storage iscsi-initiator connect command connects a node to the specified target. This command is only
supported on high-availability shared-nothing virtualized platforms.
Parameters
-node {<nodename>|local} - Node
Specifies the name of the Data ONTAP node to which the iSCSI target will be connected.
[-target-type {external|mailbox|partner|partner2|dr_auxiliary|dr_partner}] - Target Type
Selects targets with the specified target type.
-label <text> - User Defined Identifier
Specifies the label of the target to connect to.
Examples
The following example connects to an iSCSI target from the specified node.
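For example (the node name and label are hypothetical):
cluster1::> storage iscsi-initiator connect -node node1 -label mytarget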
Description
The storage iscsi-initiator disconnect command disconnects a node from the specified target. This command is
only supported on high-availability shared-nothing virtualized platforms.
Parameters
-node {<nodename>|local} - Node
Specifies the name of the Data ONTAP node from which the iSCSI target will be disconnected.
[-target-type {external|mailbox|partner|partner2|dr_auxiliary|dr_partner}] - Target Type
Selects targets with the specified target type.
-label <text> - User Defined Identifier
Specifies the label of the target to disconnect from.
Description
The storage iscsi-initiator remove-target command removes an iSCSI target from a node's list of targets. This
command is only supported on high-availability shared-nothing virtualized platforms.
Parameters
-node {<nodename>|local} - Node
Specifies the name of the Data ONTAP node from which the iSCSI target will be removed.
[-target-type {external|mailbox|partner|partner2|dr_auxiliary|dr_partner}] - Target Type
Selects targets with the specified target type.
-label <text> - User Defined Identifier
Specifies the label of the target to be removed.
Examples
The following example removes an iSCSI target from the specified node.
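A sketch of such an invocation (the node name and label are hypothetical):
cluster1::> storage iscsi-initiator remove-target -node node1 -label mytarget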
Description
The storage iscsi-initiator show command displays the list of iSCSI targets configured for each Data ONTAP node in the
cluster. This command is only supported on high-availability shared-nothing virtualized platforms.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example displays the list of iSCSI targets for each node in the cluster.
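The basic invocation takes no required parameters (the cluster prompt is illustrative):
cluster1::> storage iscsi-initiator show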
Description
This command is obsolete. I/O load is balanced automatically every five minutes.
Examples
Description
This command is obsolete. The storage load show command displays the load distribution of I/O on the cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-switch ]
The switch parameter adds switch information to the display.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Controller name
The name of the clustered node for which information is being displayed.
[-initiator-port <text>] - Initiator Port
The initiator port of the array LUN for which I/O stats are being displayed.
[-wwpn <text>] - Target Port WWPN
The World Wide Port Name of the array LUN for which I/O stats are being displayed.
[-serialnumber <text>] - Serial Number
The serial number of the array LUN for which I/O stats are being displayed.
[-lun <integer>] - LUN
The array LUN for which I/O stats are being displayed.
[-pct-io <text>] - %I/O
Percent of I/O bandwidth consumed by this array LUN.
[-io-blocks <integer>] - I/O (blocks)
Number of I/O blocks transferred.
[-switch-port <text>] - Switch Port
The initiator side switch port for this array LUN.
[-target-side-switch-port <text>] - Target Side Switch Port
The target side switch port for this array LUN.
Examples
Description
The storage path quiesce command quiesces I/O on one path to a LUN or on an entire path. It can quiesce the path immediately, or monitor the path and quiesce it only after an error threshold is reached. After the I/O has been quiesced, no new I/O is sent on the path
until the storage path resume command is issued to continue I/O.
Parameters
-node {<nodename>|local} - Node name
The name of the clustered node for which information is being displayed.
-initiator <initiator name> - Initiator Port
Initiator port that the clustered node uses.
-target-wwpn <wwpn name> - Target Port
Target World Wide Port Name. Port on the storage array that is being used.
Examples
The following example suspends I/O between node vbv3170f1b, port 0a and the array port 50001fe1500a8669, LUN 1.
node::> storage path quiesce -node vbv3170f1b -initiator 0a -target-wwpn 50001fe1500a8669 -lun-number 1
The following example suspends I/O immediately between node vbv3170f1b, port 0a and the array port
50001fe1500a8669.
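The command for this case would be the following, assuming (by analogy with storage path resume) that omitting -lun-number applies the operation to the entire path:
node::> storage path quiesce -node vbv3170f1b -initiator 0a -target-wwpn 50001fe1500a8669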
The following example suspends I/O between node vbv3170f1b, port 0a, and the array port 50001fe1500a8669 after 10 or more errors occur within a 5-minute interval.
node::> storage path quiesce -node vbv3170f1b -initiator 0a -target-wwpn 50001fe1500a8669 -path-failure-threshold 10 -wait-duration 5
Related references
storage path resume on page 1038
Description
The storage path resume command continues I/O flow to an array LUN on a path or the entire path that was previously
quiesced. It also disables the path failures monitoring feature, if it was enabled using the storage path quiesce -path-
failure-threshold count command.
Parameters
-node {<nodename>|local} - Node name
The name of the clustered node for which information is being displayed.
-initiator <initiator name> - Initiator Port
Initiator port that the clustered node uses.
-target-wwpn <wwpn name> - Target Port
Target World Wide Port Name. Port on the storage array that is being used.
[-lun-number <integer>] - LUN Number
Logical Unit number. The range is: [0...65535]. If this parameter is not specified, Data ONTAP resumes the
entire path to an array.
Examples
The following example resumes I/O between node vbv3170f1b, port 0a, and the array port 50001fe1500a8669, LUN 1:
node::> storage path resume -node vbv3170f1b -initiator 0a -target-wwpn 50001fe1500a8669 -lun-number 1
Related references
storage path quiesce on page 1037
Description
The storage path show command displays path based statistics. The default command shows:
• Node name
• Initiator port
• Target port
• Target IQN
• TPGN
• Port speeds
• IOPs
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-array ]
Using this option displays:
• Array name
• Target port
• Target IQN
• Initiator port
| [-by-target ]
Using this option displays the same information as the array option, but grouped by target port.
| [-detail ]
Using this option displays the same information as the array and by-target options, but adds the following:
• Target IOPs
• Target LUNs
• Path IOPs
• Path errors
• Path quality
• Path LUNs
• Initiator IOPs
• Initiator LUNs
| [-switch ]
Using this option adds switch port information to the default display.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Controller name
The name of the clustered node for which information is being displayed.
[-array-name <array name>] - Array Name
Name of the storage array that is connected to the cluster.
[-target-wwpn <text>] - Target Port
Target World Wide Port Name. Port on the storage array that is being used.
[-initiator <text>] - Initiator Port
Initiator port that the clustered node uses.
[-initiator-side-switch-port <text>] - Initiator Side Switch Port
Switch port connected to the clustered node.
[-tpgn <integer>] - Target Port Group Number
TPGN refers to the target port group to which the target port belongs. A target port group is a set of target
ports which share the same LUN access characteristics and failover behaviors.
[-port-speed <text>] - Port Speed
Port Speed of the specified port.
[-path-io-kbps <integer>] - Kbytes of I/O per second on Path (Rolling Average)
Rolling average of kilobytes of I/O per second on the path.
[-path-iops <integer>] - Number of IOPS on Path (Rolling Average)
Rolling average of I/O operations per second on the path.
Examples
The following example shows the default display.
The following example shows how the information is displayed when grouped by target.
The following example shows how the information is displayed with the switch option.
Description
The storage path show-by-initiator command displays path based statistics. The output is similar to the storage
path show command but the output is listed by initiator.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Controller name
The name of the clustered node for which information is being displayed.
[-initiator <text>] - Initiator Port
Initiator port that the clustered node uses.
[-target-wwpn <text>] - Target Port
Target World Wide Port Name. Port on the storage array that is being used.
[-initiator-side-switch-port <text>] - Initiator Side Switch Port
Switch port connected to the clustered node.
[-target-side-switch-port <text>] - Target Side Switch Port
Switch port connected to the array.
[-array-name <array name>] - Array Name
Name of the storage array that is connected to the cluster.
[-tpgn <integer>] - Target Port Group Number
TPGN refers to the target port group to which the target port belongs. A target port group is a set of target
ports which share the same LUN access characteristics and failover behaviors.
[-port-speed <text>] - Port Speed
Port Speed of the specified port.
[-path-io-kbps <integer>] - Kbytes of I/O per second on Path (Rolling Average)
Rolling average of kilobytes of I/O per second on the path.
[-path-iops <integer>] - Number of IOPS on Path (Rolling Average)
Rolling average of I/O operations per second on the path.
[-initiator-io-kbps <integer>] - Kbytes of I/O per second on Initiator (Rolling Average)
Rolling average of kilobytes of I/O per second on the initiator port.
[-initiator-iops <integer>] - Number of IOPS on Initiator (Rolling Average)
Rolling average of I/O operations per second on the initiator port.
Examples
Related references
storage path show on page 1039
In an HA configuration, each node takes ownership of two allocation units representing 50% of the total capacity. If desired, the
ownership of the allocation units can be adjusted using the storage pool reassign command before the capacity is used in
an aggregate.
Storage pools do not have an associated RAID type. The RAID type is determined when an allocation unit is added to an
aggregate using the storage aggregate add-disks command. A storage pool contains four allocation units, and they can be provisioned into different aggregates, each with its own RAID type.
All storage pool available capacity can be provisioned into aggregates. Available capacity within a storage pool is not used to
protect against a disk failure. In the case of an SSD failure or predicted failure, Data ONTAP moves a suitable whole spare SSD
from outside the storage pool into the storage pool and begins the recovery process (using either reconstruction or Rapid RAID
Recovery, whichever is appropriate).
Related references
storage pool create on page 1047
storage pool show-available-capacity on page 1053
storage pool reassign on page 1049
storage aggregate add-disks on page 821
storage pool add on page 1045
Description
The storage pool add command increases the total capacity of an existing storage pool by adding the specified SSDs to the
storage pool. The disks are split into four equal partitions and added to each of the allocation units of the storage pool. If any
allocation units from the storage pool have already been allocated to an aggregate, the cache or usable capacity of that aggregate
is increased depending on whether it is a Flash Pool or an All-Flash aggregate.
If capacity from a storage pool is already provisioned into a Flash Pool aggregate, the same storage pool cannot be used to
provision an All-Flash aggregate and vice-versa.
For provisioning storage pool capacity into All-Flash aggregates, the Vserver option raid.storagepool.data.enable must
be set to true. The storage pool data enabled mode of operation is not currently supported by OnCommand management
software.
For example, if an SSD with a usable size of 745 GB is added to a storage pool that is part of four aggregates, each aggregate
will grow its cache or usable capacity by 186.2 GB. If a different allocation is desired, create a new storage pool using the
storage pool create command.
Parameters
-storage-pool <storage pool name> - Storage Pool Name
This parameter specifies the storage pool to which disks are to be added.
{ [-disk-count <integer>] - Number of Disks to Add in Storage Pool
This parameter specifies the number of disks that are to be added to the storage pool. The disks to be added
come from the pool of spare disks.
[-nodes {<nodename>|local}, ...] - Nodes From Which Spares Should be Selected
This parameter specifies a list of nodes from which SSD disks are selected for addition to the storage pool. If
this parameter is not specified, disks to be added to the storage pool can be selected from either of the nodes
sharing the storage pool. Use this parameter to restrict the selection of spare disks to one particular node.
Examples
In this example, the user requests a report detailing the changes that would occur if a new disk is added to the storage pool
SP1. In this case, 186.2 GB of cache is added to the Flash Pool aggregates nodeA_flashpool_1 and nodeB_flashpool_1.
There are two unprovisioned allocation units in the storage pool and therefore the storage pool available capacity also
grows by 372.5 GB.
This operation will result in capacity being allocated in the following way:
Check via simulation whether there is available capacity within allocation units in storage pool SP1 for allocation units
that are provisioned into aggregates.
Info: This operation results in capacity being allocated in the following way:
The following example adds one disk to a storage pool named SP1. The spare disks are selected from the local node, its partner, or both, based on spare availability.
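Given the -disk-count parameter documented above, the command would be (the cluster prompt is illustrative):
cluster1::> storage pool add -storage-pool SP1 -disk-count 1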
Related references
storage pool create on page 1047
Description
The storage pool create command creates an SSD storage pool using a given list of spare SSDs.
When a storage pool is created, Data ONTAP splits the capacity provided by the SSDs into four equally-sized allocation units.
In an HA configuration, two allocation units (containing 50% of the total capacity) are assigned to each node in the HA pair.
This assignment can be modified using the storage pool reassign command.
After the storage pool is created, its allocation units can be provisioned into Flash Pool or All-Flash aggregates using the
storage aggregate add-disks command and the -storage-pool parameter.
If capacity from a storage pool is already provisioned into a Flash Pool aggregate, the same storage pool cannot be used to
provision an All-Flash aggregate and vice versa.
For provisioning storage pool capacity into All-Flash aggregates, the vserver option raid.storagepool.data.enable must
be set to true. The storage pool data enabled mode of operation is not currently supported by OnCommand management
software.
Parameters
-storage-pool <storage pool name> - Storage Pool Name
This parameter specifies the name of the storage pool that is to be created. The SSDs are partitioned and
placed into the new storage pool.
{ [-nodes {<nodename>|local}, ...] - Nodes Sharing the Storage Pool
This parameter specifies a list of nodes from which SSD disks are selected to create the storage pool. If two
nodes are specified, they must be in an HA configuration, and spare disks are selected from either node. If this
parameter is not specified, the storage pool is created by selecting disks from the node where the command is
run, its HA partner, or both.
-disk-count <integer> - Number of Disks in Storage Pool
This parameter specifies the number of disks that are to be included in the storage pool. The disks in this
newly created storage pool come from the pool of spare disks. The smallest disks in this pool are added to the
storage pool first, unless you specify the -disk-size parameter.
[-disk-size {<integer>[KB|MB|GB|TB|PB]}] - Disk Size
This parameter specifies the size of the disks on which the storage pool is to be created. Disks with a usable
size between 95% and 105% of the specified size are selected.
| -disk-list <disk path name>, ...} - Disk List for Storage Pool Creation
This parameter specifies a list of SSDs to be included in the new storage pool. The SSDs must be spare disks
and can be owned by either node in an HA pair.
Examples
The following example creates a storage pool named SP1. The storage pool contains three SSDs; the spare disks are selected from the local node, its partner, or both, based on spare availability.
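The corresponding command would be (the cluster prompt is illustrative):
cluster1::> storage pool create -storage-pool SP1 -disk-count 3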
The following example creates a storage pool named SP2. The storage pool contains three SSDs; the spare disks are selected from node0, its partner node1, or both, based on spare availability.
The following example creates a storage pool named SP3 from four SSDs using a disk list.
cluster1::> storage pool create -storage-pool SP3 -disk-list 1.0.13, 1.0.15, 1.0.17, 1.0.19
Related references
storage pool reassign on page 1049
storage aggregate add-disks on page 821
Description
The storage pool delete command deletes an existing SSD storage pool. At the end of the operation, the SSDs are
converted back to spare disks.
Parameters
-storage-pool <storage pool name> - Storage Pool Name
This parameter specifies the storage pool that you want to delete. You can delete the storage pool only if all of
the allocation units in the storage pool are available.
Examples
Verify that storage pool SP3 is ready for deletion by confirming it has four available allocation units and then delete it.
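A plausible session for this example, with output elided (the show step confirms the allocation units are available before deletion):
cluster1::> storage pool show -storage-pool SP3
cluster1::> storage pool delete -storage-pool SP3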
Description
The storage pool reassign command changes the ownership of unprovisioned (available) storage pool allocation units
from one HA partner to the other for an existing storage pool.
Parameters
-storage-pool <storage pool name> - Storage Pool Name
This parameter specifies the storage pool within which available capacity is reassigned from one node to
another.
-from-node {<nodename>|local} - Reassign Available Capacity from This Node
This parameter specifies the name of the node that currently owns the allocation units.
-to-node {<nodename>|local} - Reassign Available Capacity to This Node
This parameter specifies the name of the node that will now own the allocation units.
-allocation-units <integer> - Allocation Units
This parameter specifies the number of allocation units to be reassigned.
Examples
Move an available allocation unit from node-b to node-a in preparation for provisioning the allocation units on node-a.
cluster1::*> storage pool reassign -storage-pool SP2 -from-node node-b -to-node node-a -allocation-units 1
[Job 310] Job succeeded: storage pool reassign job for "SP2" completed successfully
Related references
storage pool show-available-capacity on page 1053
Description
The storage pool rename command changes the name of the storage pool.
Parameters
-storage-pool <storage pool name> - Storage Pool Name
This parameter specifies the storage pool name.
-new-name <storage pool name> - New Name of the Storage Pool
This parameter specifies the new name of the storage pool.
Examples
The following example renames storage pool "sp1" to "sp2":
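Based on the parameters documented above, the command would be (the cluster prompt is illustrative):
cluster1::> storage pool rename -storage-pool sp1 -new-name sp2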
Description
The storage pool show command displays information about SSD storage pools in the cluster. By default, the command
displays information about all storage pools in the cluster. You can specify parameters to limit the output to a specific set of
storage pools.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-storage-pool <storage pool name>] - Storage Pool Name
Selects the storage pools that match this parameter value.
• reassigning - allocation units are being reassigned from one node to another.
• growing - allocation units in the storage pool are expanding due to the addition of new capacity into the
storage pool.
Examples
Display the storage pools in the cluster.
The following example displays the details of a storage pool named SmallSP. Only one of its four allocation units has been
provisioned, so 75% of its size is available (usable).
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-storage-pool <storage pool name>] - Name of Storage Pool
Selects the storage pools that match this parameter value.
[-aggregate <aggregate name>] - Aggregate
Selects the storage pools that match this parameter value.
[-capacity {<integer>[KB|MB|GB|TB|PB]}] - Capacity
Selects the storage pools that match this parameter value.
Capacity includes space provided by data and parity portions of each allocation unit. Only the data portions of
each allocation unit contribute to the cache or usable capacity of Flash Pool or All-Flash aggregates
respectively.
[-allocated-unit-count <integer>] - Number of AU's Assigned to This Aggregate
Selects the storage pools that match this parameter value.
[-original-owner <text>] - Original Owner Name
Selects the storage pools that match this parameter value.
[-node {<nodename>|local}] - Node
Selects the storage pools that match this parameter value.
Examples
Display information about the aggregate or aggregates using a storage pool called SP2:
Note: All storage pool available capacity can be provisioned into aggregates. Available capacity within a storage pool is not
used to protect against a disk failure. In the case of an SSD failure or predicted failure, Data ONTAP moves a suitable whole
SSD spare disk from outside the storage pool into the storage pool and begins the recovery process (using either
reconstruction or Rapid RAID Recovery, whichever is appropriate).
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-storage-pool <storage pool name>] - Name of Storage Pool
Selects the available capacities that match this parameter value.
[-node {<nodename>|local}] - Node
Selects the available capacities that match this parameter value.
[-allocation-unit-size {<integer>[KB|MB|GB|TB|PB]}] - Allocation Unit Size
Selects the available capacities that match this parameter value.
Allocation units are the units of storage capacity that are available to be provisioned into aggregates.
[-storage-type <SSD>] - Type of Storage Pool
Selects the available capacities that match this parameter value. Only the SSD type is supported for this version
of Data ONTAP.
[-allocation-unit-count <integer>] - Number of Allocation Units Available
Selects the available capacities that match this parameter value.
Allocation units are the units of storage capacity that are available to be provisioned into aggregates. Each
allocation unit is one minimum unit of allocation (MUA) and its capacity is given as allocation-unit-
size.
Related references
storage aggregate add-disks on page 821
Description
The storage pool show-disks command displays information about disks in storage pools in the cluster. The command
output depends on the parameter or parameters specified with the command. If no parameters are specified, the command
displays information about all disks in all storage pools in the cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-storage-pool <storage pool name>] - Name of Storage Pool
Selects the storage pools that match this parameter value.
[-disk <disk path name>] - Name of the disk
Selects the storage pools with the disks that match this parameter value.
[-disk-type {ATA | BSAS | FCAL | FSAS | LUN | MSATA | SAS | SSD | VMDISK | SSD-NVM}] - Disk Type
Selects the storage pools with the disks that match this parameter value. Only the SSD type is supported for
this version of Data ONTAP.
[-usable-size {<integer>[KB|MB|GB|TB|PB]}] - Disk Usable Size
Selects the storage pools with the disks that match this parameter value.
In this command, usable-size refers to the sum of the capacities of all of the partitions on the disk.
[-total-size {<integer>[KB|MB|GB|TB|PB]}] - Total Size
Selects the storage pools with the disks that match this parameter value.
[-node-list <nodename>, ...] - List of Nodes
Selects the storage pools with the disks that are visible to all of the specified nodes.
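Examples
The following example displays information about the disks in all storage pools in the cluster:
cluster1::> storage pool show-disks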
         Storage
Disk     Type    Usable Size Total Size
-------- ------- ----------- -----------
1.0.16   SSD         745.0GB     745.2GB
1.0.18   SSD         745.0GB     745.2GB
1.0.20   SSD         745.0GB     745.2GB
1.0.22   SSD         745.0GB     745.2GB
Description
The storage port disable command disables a specified storage port.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the port resides.
-port <text> - Port
Use this parameter to specify the port that needs to be disabled.
[-force [true]] - Force (privilege: advanced)
Use this optional parameter to force the disabling of the storage port. The parameter can be used to disable the
specified port even if some devices can only be accessed using this port. Note that doing so might cause
multiple device failures.
Examples
The following example disables port 0a on node node1:
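cluster1::> storage port disable -node node1 -port 0a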
Description
The storage port enable command enables a specified storage port.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the port resides.
-port <text> - Port
Use this parameter to specify the port that needs to be enabled.
Examples
The following example enables port 0a on node node1:
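cluster1::> storage port enable -node node1 -port 0a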
Description
The storage port rescan command rescans a specified storage port. This command is not supported on Ethernet storage
ports (type = ENET).
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the port resides.
-port <text> - Port
Use this parameter to specify the port that needs to be rescanned.
Examples
The following example rescans port 0a on node node1:
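cluster1::> storage port rescan -node node1 -port 0a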
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the port resides.
-port <text> - Port
Use this parameter to specify the port that needs to be reset.
Examples
The following example resets port 0a on node node1:
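cluster1::> storage port reset -node node1 -port 0a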
Description
The storage port reset-device command resets a device behind a port. If the device is behind a SAS port, you need to
specify the shelf name and bay ID where the device resides. If the device is behind an FC port, you need to specify the loop ID of
the device. This command is not supported on Ethernet storage ports (type = ENET).
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the port resides.
-port <text> - Port
Use this parameter to specify the port used to reset the device.
{ -shelf-name <text> - Shelf Name
Use this parameter to specify the shelf where the device resides.
-bay-id <integer> - Bay ID
Use this parameter to specify the bay where the device resides.
| -loop-id <integer>} - Loop ID
Use this parameter to specify the loop ID of the device.
Examples
The following example resets a device behind SAS port 0a on node node1:
cluster1::> storage port reset-device -node node1 -port 0a -shelf-name 1.0 -bay-id 10
Description
The storage port show command displays information about the storage ports in the cluster. If no parameters are specified,
the default command displays the following information about the storage ports:
• Node
• Port
• Type
• Speed
• State
• Status
To display detailed profile information about a single storage port, use the -node and -port parameters.
Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields for all the storage ports, in column style output.
| [-errors ]
Displays the following error status information about the storage ports which have errors:
• Error type
• Error severity
• Error description
| [-instance ]}
Displays expanded information about all the storage ports in the system. If a storage port is specified, then this
parameter displays detailed information for that port only.
[-node {<nodename>|local}] - Node
Displays detailed information about the storage ports on the specified node.
[-port <text>] - Port
Selects the ports with the specified port name.
[-port-type {Unknown|SAS|FC|ENET}] - Port Type
Selects the ports of the specified type.
[-port-speed {0|1|1.5|2|2.5|3|4|5|6|8|10|12|14|16|25|32|40|100}] - Port Speed
Selects the ports with the specified speed.
[-state {enabled|disabled|enable-pending|disable-pending}] - Port State
Selects the ports with the specified state.
Examples
The following example displays information about all storage ports in the cluster:
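cluster1::> storage port show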
The following example displays detailed information about port e3a on node node1:
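cluster1::> storage port show -node node1 -port e3a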
Node: node1
Port: e3a
Port Type: ENET
Description: 40G/100G Ethernet Controller CX5
Firmware Revision: 16.23.1020
MAC Address: ec:0d:9a:65:e4:44
Is Dedicated: false
Serial Number: MT1730X00227
Connector Vendor: Molex Inc.
Connector Part Number: 112-00322
Connector Serial Number: 532120266
Port Speed: 40 Gb/s
Port State: disabled
Port Status: online
raid.background_disk_fw_update.enable
This option determines the behavior of automatic disk firmware updates. Valid values are on or off. The default value is on. If
the option is set to on, firmware updates to spares and file-system disks are performed in a non-disruptive manner in the
background. If the option is set to off, automatic firmware updates occur when the system is booted or a disk is inserted.
raid.disk.copy.auto.enable
This option determines the action taken when a disk reports a predictive failure. Valid values for this option are on or off. The
default value for this option is on except for capacity-optimized platforms, where the default value is off.
Sometimes, it is possible to predict that a disk will fail soon based on a pattern of recovered errors that have occurred on the
disk. In such cases, the disk reports a predictive failure to Data ONTAP. If this option is set to on, Data ONTAP initiates Rapid
RAID Recovery to copy data from the failing disk to a spare disk. After the data has been copied, the disk is marked as failed and
placed in the pool of broken disks. If a spare is not available, the node continues to use the prefailed disk until the disk fails.
If the option is set to off, the disk is immediately marked as failed and placed in the pool of broken disks. A spare disk is
selected and data from the missing disk is reconstructed from other disks in the RAID group. The disk does not fail if the RAID
group is already degraded or is being reconstructed. This ensures that a disk failure does not lead to the failure of the entire
RAID group.
raid.disktype.enable
raid.media_scrub.rate
This option sets the rate of media scrub on an aggregate. Valid values for this option range from 300 to 3000, where a rate of
300 represents a media scrub of approximately 512 MB per hour, and 3000 represents a media scrub of approximately 5 GB per
hour. The default value for this option is 600, which is a rate of approximately 1 GB per hour.
raid.min_spare_count
This option specifies the minimum number of spare drives required to avoid warnings about low spare drives. If every file-system
drive has at least the number of appropriate replacement spares specified by raid.min_spare_count, then no
warning is displayed for low spares. This option can be set from 0 to 4. The default setting is 0 for capacity-optimized platforms
and 1 for others. Setting this option to 0 means that no warnings will be displayed for low spares, even if there are no spares
available. This option can be set to 0 only on standalone or HA systems with 16 or fewer attached drives with RAID-DP
aggregates. A setting of 0 is not allowed on systems with RAID4 aggregates.
raid.mirror_read_plex_pref
This option specifies the plex preference when reading from a mirrored aggregate on a MetroCluster-configured system. There
are three possible values:
• local indicates that all reads are handled by the local plex (plex consisting of disks from Pool0).
• remote indicates that all reads are handled by the remote plex (plex consisting of disks from Pool1).
• alternate indicates that the handling of read requests is shared between the two plexes.
This option is ignored if the system is not in a MetroCluster configuration. The option setting applies to all aggregates on the
node.
raid.mix.hdd.disktype.capacity
Controls mixing of SATA, BSAS, FSAS and ATA disk types. The default value is on, which allows mixing.
When this option is set to on, SATA, BSAS, FSAS and ATA disk types are considered interchangeable for all aggregate
operations, including aggregate creation, adding disks to an aggregate, and replacing disks within an existing aggregate, whether
this is done by the administrator or automatically by Data ONTAP.
If you set this option to off, SATA, BSAS, FSAS and ATA disks cannot be combined within the same aggregate. If you have
existing aggregates that combine those disk types, those aggregates will continue to function normally and accept any of those
disk types.
Note: This option is ignored in storage aggregate create and storage aggregate add-disks commands, when
either of -disktype or -diskclass parameters are used. It is better to use the -disktype or -diskclass parameter
instead of relying on this option.
raid.mix.hdd.disktype.performance
Controls mixing of FCAL and SAS disk types. The default value is off, which prevents mixing.
If you set this option to on, FCAL and SAS disk types are considered interchangeable for all aggregate operations, including
aggregate creation, adding disks to an aggregate, and replacing disks within an existing aggregate, whether this is done by the
administrator or automatically by Data ONTAP.
When this option is set to off, FCAL and SAS disks cannot be combined within the same aggregate. If you have existing
aggregates that combine those disk types, those aggregates will continue to function normally and accept either disk type.
Note: This option is ignored in storage aggregate create and storage aggregate add-disks commands, when
either of -disktype or -diskclass parameter is used. It is better to use the -disktype or -diskclass parameter instead
of relying on this option.
raid.mix.disktype.solid_state
Controls mixing of SSD and SSD-NVM disk types. The default value is on, which allows mixing.
raid.mix.hdd.rpm.capacity
This option controls separation of capacity-based hard disk drives (ATA, SATA, BSAS, FSAS, MSATA) by uniform rotational
speed (RPM). If you set this option to off, Data ONTAP always selects disks with the same RPM when creating new
aggregates or when adding disks to existing aggregates using these disk types. If you set this option to on, Data ONTAP does
not differentiate between these disk types based on rotational speed. For example, Data ONTAP might use both 5400 RPM and
7200 RPM disks in the same aggregate. The default value is on.
raid.mix.hdd.rpm.performance
This option controls separation of performance-based hard disk drives (SAS, FCAL) by uniform rotational speed (RPM). If you
set this option to off, Data ONTAP always selects disks with the same RPM when creating new aggregates or when adding
disks to existing aggregates using these disk types. If you set this option to on, Data ONTAP does not differentiate between
these disk types based on rotational speed. For example, Data ONTAP might use both 10K RPM and 15K RPM disks in the
same aggregate. The default value is off.
raid.reconstruct.perf_impact
This option sets the overall performance impact of RAID reconstruction. When the CPU and disk bandwidth are not consumed
by serving clients, RAID reconstruction consumes as much bandwidth as it needs. If the serving of clients is already consuming
most or all of the CPU and disk bandwidth, this option allows control over the CPU and disk bandwidth that can be taken away
for reconstruction, and thereby enables control over the negative performance impact on the serving of clients. As the value of
this option is increased, the speed of reconstruction also increases. The possible values are low, medium, and high. The default
value is medium. When mirror resync and reconstruction are running at the same time, the system does not distinguish between
their separate resource consumption on shared resources (like CPU or a shared disk). In this case, the combined resource
utilization of these operations is limited to the maximum resource entitlement for individual operations.
raid.resync.num_concurrent_ios_per_rg
This option changes the duration of a resync by modifying the number of concurrent resync I/Os in progress for each resyncing
RAID group. The default (and legacy) value is 1. As the value of this option is increased, the speed of resync increases, which
can have a negative performance impact on the serving of clients.
raid.resync.perf_impact
This option sets the overall performance impact of RAID mirror resync (whether started automatically by the system or
implicitly by an operator-issued command). When the CPU and disk bandwidth are not consumed by serving clients, a resync
operation consumes as much bandwidth as it needs. If the serving of clients is already consuming most or all of the CPU and
disk bandwidth, this option allows control over the CPU and disk bandwidth that can be taken away for resync operations, and
thereby enables control over the negative performance impact on the serving of clients. As the value of this option is increased,
the speed of resync also increases. The possible values are low, medium, and high. The default value is medium. When RAID
mirror resync and reconstruction are running at the same time, the system does not distinguish between their separate resource
consumption on shared resources (like CPU or a shared disk). In this case, the combined resource utilization of these operations
is limited to the maximum resource entitlement for individual operations.
raid.rpm.ata.enable
raid.scrub.perf_impact
This option sets the overall performance impact of RAID scrubbing (whether started automatically or manually). When the CPU
and disk bandwidth are not consumed by serving clients, scrubbing consumes as much bandwidth as it needs. If the serving of
clients is already consuming most or all of the CPU and disk bandwidth, this option allows control over the CPU and disk
bandwidth that can be taken away for scrubbing, and thereby enables control over the negative performance impact on the
serving of clients. As the value of this option is increased, the speed of scrubbing also increases. The possible values for this
option are low, medium, and high. The default value is low. When scrub and mirror verify are running at the same time, the
system does not distinguish between their separate resource consumption on shared resources (like CPU or a shared disk). In
this case, the combined resource utilization of these operations is limited to the maximum resource entitlement for individual
operations.
raid.scrub.schedule
This option specifies the weekly schedule (day, time and duration) for scrubs started automatically. On a non-AFF system, the
default schedule is daily at 1 a.m. for the duration of 4 hours except on Sunday when it is 12 hours. On an AFF system, the
default schedule is weekly at 1 a.m. on Sunday for the duration of 6 hours. If an empty string ("") is specified as an argument, it
will delete the previous scrub schedule and add the default schedule. One or more schedules can be specified using this option.
The syntax is duration[h|m]@weekday@start_time,[duration[h|m]@weekday@start_time,...] where duration is the time period
for which scrub operation is allowed to run, in hours or minutes ('h' or 'm' respectively).
weekday is the day on which the scrub is scheduled to start. The valid values are sun, mon, tue, wed, thu, fri, sat.
start_time is the time when the scrub is scheduled to start. It is specified in 24-hour format. Only the hour (0-23) needs to be
specified.
For example, options raid.scrub.schedule 240m@tue@2,8h@sat@22 will cause scrub to start on every Tuesday at 2 a.m. for 240
minutes, and on every Saturday at 10 p.m. for 480 minutes.
raid.timeout
This option sets the time, in hours, that the system will run after a "single-disk failure in a RAID4 group", "two-disk failure in a
RAID-DP group" or "three-disk failure in a RAID-TEC group" condition has caused the system to go into "degraded mode",
"double-degraded mode" or "triple-degraded mode" respectively. This option also controls the shutdown timer after an
NVRAM/NVMEM battery failure has occurred. The default is 24, the minimum acceptable value is 0 and the largest acceptable
value is 4,294,967,295. If the raid.timeout option is specified when the system is in "degraded mode", "double-degraded
mode" or "triple-degraded mode", the timeout is set to the value specified and the timeout is restarted. If the value specified is 0,
automatic system shutdown is disabled.
Note: For capacity-optimized platforms, this option only applies to non-RAID failures (e.g. NVRAM battery). Automatic
system shutdown is disabled for degraded raidgroups, irrespective of the value of this option.
raid.verify.perf_impact
This option sets the overall performance impact of RAID mirror verify. When the CPU and disk bandwidth are not consumed by
serving clients, a verify operation consumes as much bandwidth as it needs. If the serving of clients is already consuming most
or all of the CPU and disk bandwidth, this option allows control over the CPU and disk bandwidth that can be taken away for
verify, and thereby enables control over the negative performance impact on the serving of clients. As you increase the value of
this option, the verify speed also increases. The possible values are low, medium, and high. The default value is low. When
scrub and mirror verify are running at the same time, the system does not distinguish between their separate resource
consumption on shared resources (like CPU or a shared disk). In this case, the combined resource utilization of these operations
is limited to the maximum resource entitlement for individual operations.
Related references
storage raid-options modify on page 1066
storage raid-options show on page 1066
storage aggregate create on page 826
storage aggregate add-disks on page 821
Description
The storage raid-options modify command is used to modify the available RAID options for each node in a cluster. The
options are described in the storage raid-options manual page.
Parameters
-node {<nodename>|local} - Node
This parameter specifies the node on which the RAID option is to be modified.
-name <text> - Option Name
This parameter specifies the RAID option to be modified. To see the list of RAID options that can be
modified, use the storage raid-options show command.
[-value <text>] - Option Value
This parameter specifies the value of the selected RAID option.
Related references
storage raid-options show on page 1066
storage raid-options on page 1062
Description
The storage raid-options show command displays information about all the RAID options in a cluster. The options are
described in the storage raid-options manual page.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information about all the RAID options on the specified node.
[-name <text>] - Option Name
Selects information about the RAID options that have the specified name.
[-value <text>] - Option Value
Selects information about all the RAID options that have the specified value.
[-constraint <text>] - Option Constraint
Selects information about all the RAID options that have the specified constraint. The 'constraint' field
indicates the expected setting for a RAID option across both nodes of an HA pair. The possible values are:
• same_preferred - the same value should be used on both nodes of an HA pair, otherwise the next
takeover may not function correctly
• same_required - the same value must be used on both nodes of an HA pair, otherwise the next takeover
will not function correctly
• only_one - the same value should be used on both nodes of an HA pair. If the values are different and a
takeover is in progress, the value of the RAID option on the node that is taking over will be used
Examples
The following example shows the raid scrub settings for a node named node1:
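cluster1::> storage raid-options show -node node1 -name raid.scrub*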
Related references
storage raid-options on page 1062
storage raid-options modify on page 1066
Description
The storage shelf show command displays information about all the storage shelves in the storage system. If no parameters
are specified, the default command displays the following information about the storage shelves:
• Shelf Name
• Shelf ID
• Serial Number
• Model
• Module Type
• Status
To display detailed profile information about a single storage shelf, use the -shelf parameter.
Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields for all the storage shelves, in column style output.
| [-connectivity ]
Displays the following details about the connectivity from the node to the storage shelf:
• Node name
| [-cooling ]
Displays the following details about the cooling elements and temperature sensors of the storage shelf:
• The current speed of the cooling fan in revolutions per minute (rpm)
| [-errors ]
Displays the following error status information about the storage shelves that have errors:
• Error type
• Error description
| [-module ]
Displays the following details about the I/O modules attached to the storage shelf:
• Module ID
• Number of times, since the last boot, that this module has been swapped
| [-port ]
Displays the following details about the storage shelf ports:
| [-power ]
Displays the following details about the power supplies, voltage sensors, and current sensors of the storage
shelf:
• PSU type
| [-instance ]}
Displays expanded information about all the storage shelves in the system.
[-node {<nodename>|local}] - Node
Displays information only about the storage shelves that are attached to the node you specify.
[-shelf <text>] - Shelf Name
Displays information only about the storage shelves that match the names you specify.
[-shelf-uid <text>] - Shelf UID
Displays information only about the storage shelf that matches the shelf UID you specify. Example:
50:05:0c:c0:02:10:64:26
[-stack-id {<integer>|-}] - Stack ID
Displays information only about the storage shelves that are attached to the stack that matches the stack ID
you specify.
[-shelf-id <text>] - Shelf ID
Displays information only about the storage shelves that match the shelf ID you specify.
[-module-type {unknown|atfcx|esh4|iom3|iom6|iom6e|iom12|iom12e|iom12f|nsm100|psm3e}] - Shelf
Module Type
Displays information only about the storage shelves that match the module-type you specify.
[-connection-type {unknown|fc|sas|nvme}] - Shelf Connection Type
Displays information only about the storage shelves that match the connection type you specify. Example: FC
or SAS.
[-is-local-attach {true|false}] - Is the Shelf Local to This Cluster?
Displays information only about the storage shelves that are local (TRUE) or remote (FALSE) to this cluster.
[-vendor <text>] - Shelf Vendor
Displays information only about the storage shelves that match the vendor you specify.
[-product-id <text>] - Shelf Product Identification
Displays information only about the storage shelves that match the product ID you specify.
[-serial-number <text>] - Shelf Serial Number
Displays information only about the storage shelf that matches the serial number you specify.
[-disk-count {<integer>|-}] - Disk Count
Displays information only about the storage shelves that have the disk count you specify.
[-state {unknown|no-status|init-required|online|offline|missing}] - Shelf State
Displays information only about the storage shelves that are in the state you specify.
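Examples
The following example displays information about all the storage shelves in the storage system:
cluster1::> storage shelf show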
                                                       Module Operational
Shelf Name        Shelf ID Serial Number   Model       Type   Status
----------------- -------- --------------- ----------- ------ -----------
1.1                      1 6000832415      DS2246      IOM6   Critical
1.2                      2 6000647652      DS2246      IOM6   Normal
1.3                      3 6000003844      DS2246      IOM6   Normal
1.4                      4 SHJ000000013A9E DS4246      IOM6   Normal
1.5                      5 SHJ000000013A84 DS4246      IOM6   Normal
1.6                      6 6000005555      DS2246      IOM6   Normal
6 entries were displayed.
cluster1::>
The following example displays expanded information about a storage shelf named 1.2:
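cluster1::> storage shelf show -shelf 1.2 -instance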
Modules:
                                Monitor   Is     Reporting FW Update     FW   Latest  Swap  Operational Module
ID Part No.     ES Serial No.   is Active Master Element   Progress      Rev. FW Rev. Count Status      Location
-- ------------ --------------- --------- ------ --------- ------------- ---- ------- ----- ----------- ----------------------------------
a  111-00190+A0 8006437891      true      false  false     not-available 0191 -       0     normal      rear of the shelf at the top left
b  111-00190+A0 8006435180      true      true   true      not-available 0191 -       0     normal      rear of the shelf at the top right
Paths:
                                                                                   Target                  Speed
Controller         Initiator Initiator Side Switch Port Target Side Switch Port    Port               TPGN Gb/s  I/O KB/s IOPS
------------------ --------- -------------------------- -------------------------- ------------------ ---- ----- -------- -------
stsw-8020-01       0a        -                          -                          -                  -    -     -        -
stsw-8020-01       2b        -                          -                          -                  -    -     -        -
stsw-8020-02       0a        -                          -                          -                  -    -     -        -
stsw-8020-02       2b        -                          -                          -                  -    -     -        -
Voltage Sensors:
Voltage Operational
ID (V) Status Sensor Location
--- ------- ---------------------- ------------------------
1 5.70 normal rear of the shelf on the lower left power supply
2 12.300 normal rear of the shelf on the lower left power supply
3 5.70 normal rear of the shelf on the lower right power supply
4 12.180 normal rear of the shelf on the lower right power supply
Current Sensors:
Current Operational
ID (mA) Status Sensor Location
--- ------- ---------------------- ------------------------
1 0 normal rear of the shelf on the lower left power supply
2 0 normal rear of the shelf on the lower left power supply
3 0 normal rear of the shelf on the lower right power supply
4 0 normal rear of the shelf on the lower right power supply
Fans:
Speed Operational
ID (RPM) Status Fan Location
--- ------ ----------------------- ----------------------
1 3000 normal rear of the shelf on the lower left power supply
2 2970 normal rear of the shelf on the lower left power supply
3 3000 normal rear of the shelf on the lower right power supply
4 2970 normal rear of the shelf on the lower right power supply
Temperature:
-- Thresholds °C --
Temp Is Low Low High High Operational
ID °C Ambient Crit Warn Crit Warn Status Sensor Location
--- ---- ------- ---- ---- ---- ---- ------------------ --------------------------------
1 23 true 0 5 42 40 normal front of the shelf on the left, on the
OPS panel
2 26 false 5 10 55 50 normal inside of the shelf on the midplane
3 24 false 5 10 55 50 normal rear of the shelf on the lower left
power supply
4 39 false 5 10 70 65 normal rear of the shelf on the lower left
power supply
5 25 false 5 10 55 50 normal rear of the shelf on the lower right
power supply
6 36 false 5 10 70 65 normal rear of the shelf on the lower right
power supply
7 25 false 5 10 60 55 normal rear of the shelf at the top left, on
shelf module A
8 26 false 5 10 60 55 normal rear of the shelf at the top right, on
shelf module B
SAS Ports:
-- Port Speeds Gb/s -- Power Port
Phy # IOM Port Type WWPN Operational Negotiated Status Status
----- --- --------- ----------------- ----------- ---------- ------ -----------
0 A Square 500a098004b063b0 6.0 - - Enabled
1 A Square 500a098004b063b0 6.0 - - Enabled
2 A Square 500a098004b063b0 6.0 - - Enabled
3 A Square 500a098004b063b0 6.0 - - Enabled
4 A Circle 500a09800569f03f 6.0 - - Enabled
5 A Circle 500a09800569f03f 6.0 - - Enabled
6 A Circle 500a09800569f03f 6.0 - - Enabled
7 A Circle 500a09800569f03f 6.0 - - Enabled
8 A Disk 500605ba00c1cb8d 6.0 6.0 on Enabled
9 A Disk 500605ba00c1ea8d 6.0 6.0 on Enabled
10 A Disk 500605ba00c1d111 6.0 6.0 on Enabled
11 A Disk 500605ba00c1bc49 6.0 6.0 on Enabled
12 A Disk 500605ba00c1cdfd 6.0 6.0 on Enabled
13 A Disk 500605ba00c1c531 6.0 6.0 on Enabled
14 A Disk 500605ba00c1eb05 6.0 6.0 on Enabled
15 A Disk 500605ba00c1ec29 6.0 6.0 on Enabled
16 A Disk 500605ba00c1bc29 6.0 6.0 on Enabled
17 A Disk 500605ba00c1c471 6.0 6.0 on Enabled
18 A Disk 500605ba00c039a9 6.0 6.0 on Enabled
19 A Disk 500605ba00c1c4dd 6.0 6.0 on Enabled
20 A Disk - - - - Empty
21 A Disk - - - - Empty
22 A Disk - - - - Empty
FC Ports:
Port
ID Port Type Status
--- --------- -----------
- - -
Bays:
Has Operational
ID Disk Bay Type Status
--- ----- ----------- -----------
0 true single-disk normal
1 true single-disk normal
2 true single-disk normal
3 true single-disk normal
4 true single-disk normal
5 true single-disk normal
6 true single-disk normal
7 true single-disk normal
8 true single-disk normal
9 true single-disk normal
10 true single-disk normal
11 true single-disk normal
12 false single-disk normal
13 false single-disk normal
14 false single-disk normal
15 false single-disk normal
16 false single-disk normal
17 false single-disk normal
cluster1::>
The following example displays information about the power supplies, voltage sensors and current sensors of the storage
shelf 1.1:
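cluster1::> storage shelf show -shelf 1.1 -power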
Voltage Sensors:
Voltage Operational
ID (V) Status
--- ------- ----------------------
1 5.70 normal
2 12.180 normal
3 5.70 normal
4 12.300 normal
Current Sensors:
Current Operational
ID (mA) Status
--- ------- ----------------------
1 0 normal
2 0 normal
3 3900 normal
4 0 normal
Errors:
------
Critical condition is detected in storage shelf power supply unit "1". The unit might fail.
Critical over temperature failure for temperature sensor "1". Current temperature: "75" C
("167" F).
cluster1::>
The following example displays information about the cooling elements and temperature sensors inside the storage shelf
1.2:
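cluster1::> storage shelf show -shelf 1.2 -cooling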
Fans:
Speed Operational
ID (RPM) Status
-- ----- -------------
1 3000 normal
2 3000 normal
3 3000 normal
4 2970 normal
Temperature:
-- Thresholds °C --
Temp Is Low Low High High Operational
ID °C Ambient Crit Warn Crit Warn Status
--- ---- ------- ---- ---- ---- ---- ------------------
1 23 true 0 5 42 40 normal
2 26 false 5 10 55 50 normal
3 24 false 5 10 55 50 normal
4 39 false 5 10 70 65 normal
5 25 false 5 10 55 50 normal
6 36 false 5 10 70 65 normal
7 25 false 5 10 60 55 normal
8 27 false 5 10 60 55 normal
Errors:
------
-
cluster1::>
The following example displays information about the connectivity from the node to the storage shelf 1.2:
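cluster1::> storage shelf show -shelf 1.2 -connectivity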
Paths:
                                                                                   Target
Controller         Initiator Initiator Side Switch Port Target Side Switch Port    Port               TPGN
------------------ --------- -------------------------- -------------------------- ------------------ ----
stsw-8020-01       0a        -                          -                          -                  -
stsw-8020-01       2b        -                          -                          -                  -
stsw-8020-02       0a        -                          -                          -                  -
Errors:
------
-
cluster1::>
The following example displays information about the disk bays of the storage shelf 1.2:
Bays:
Has Operational
ID Disk Bay Type Status
--- ----- ----------- -----------
0 true single-disk normal
1 true single-disk normal
2 true single-disk normal
3 true single-disk normal
4 true single-disk normal
5 true single-disk normal
6 true single-disk normal
7 true single-disk normal
8 true single-disk normal
9 true single-disk normal
10 true single-disk normal
11 true single-disk normal
12 false single-disk normal
13 false single-disk normal
14 false single-disk normal
15 false single-disk normal
16 false single-disk normal
17 false single-disk normal
18 false single-disk normal
19 false single-disk normal
20 false single-disk normal
21 false single-disk normal
22 false single-disk normal
23 false single-disk normal
Errors:
------
-
cluster1::>
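The Has Disk column above can be summarized to get shelf occupancy. A small illustrative sketch, using the bay values listed for shelf 1.2:

```python
# "Has Disk" values for bays 0-23 of shelf 1.2, as listed above.
has_disk = [True] * 12 + [False] * 12

populated = sum(has_disk)  # True counts as 1
print(f"{populated} of {len(has_disk)} bays populated")  # 12 of 24 bays populated
```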
The following example displays information about the ports of the storage shelf 1.2:
SAS Ports:
-- Port Speeds Gb/s -- Power Port
Phy # IOM Port Type WWPN Operational Negotiated Status Status
----- --- --------- ----------------- ----------- ---------- ------ -----------
0 A Square 500a098004b063b0 6.0 - - Enabled
1 A Square 500a098004b063b0 6.0 - - Enabled
2 A Square 500a098004b063b0 6.0 - - Enabled
3 A Square 500a098004b063b0 6.0 - - Enabled
4 A Circle 500a09800569f03f 6.0 - - Enabled
5 A Circle 500a09800569f03f 6.0 - - Enabled
6 A Circle 500a09800569f03f 6.0 - - Enabled
7 A Circle 500a09800569f03f 6.0 - - Enabled
8 A Disk 500605ba00c1cb8d 6.0 6.0 on Enabled
9 A Disk 500605ba00c1ea8d 6.0 6.0 on Enabled
10 A Disk 500605ba00c1d111 6.0 6.0 on Enabled
11 A Disk 500605ba00c1bc49 6.0 6.0 on Enabled
12 A Disk 500605ba00c1cdfd 6.0 6.0 on Enabled
13 A Disk 500605ba00c1c531 6.0 6.0 on Enabled
14 A Disk 500605ba00c1eb05 6.0 6.0 on Enabled
15 A Disk 500605ba00c1ec29 6.0 6.0 on Enabled
16 A Disk 500605ba00c1bc29 6.0 6.0 on Enabled
17 A Disk 500605ba00c1c471 6.0 6.0 on Enabled
18 A Disk 500605ba00c039a9 6.0 6.0 on Enabled
19 A Disk 500605ba00c1c4dd 6.0 6.0 on Enabled
20 A Disk - - - - Empty
21 A Disk - - - - Empty
22 A Disk - - - - Empty
23 A Disk - - - - Empty
24 A Disk - - - - Empty
25 A Disk - - - - Empty
26 A Disk - - - - Empty
27 A Disk - - - - Empty
28 A Disk - - - - Empty
29 A Disk - - - - Empty
30 A Disk - - - - Empty
31 A Disk - - - - Empty
32 A SIL - - - - Disabled
33 A SIL - - - - Disabled
34 A SIL - - - - Disabled
35 A SIL - - - - Disabled
0 B Square 500a098004af9e30 6.0 - - Enabled
1 B Square 500a098004af9e30 6.0 - - Enabled
2 B Square 500a098004af9e30 6.0 - - Enabled
3 B Square 500a098004af9e30 6.0 - - Enabled
4 B Circle 500a098005688dbf 6.0 - - Enabled
5 B Circle 500a098005688dbf 6.0 - - Enabled
6 B Circle 500a098005688dbf 6.0 - - Enabled
7 B Circle 500a098005688dbf 6.0 - - Enabled
8 B Disk 500605ba00c1cb8e 6.0 6.0 on Enabled
9 B Disk 500605ba00c1ea8e 6.0 6.0 on Enabled
10 B Disk 500605ba00c1d112 6.0 6.0 on Enabled
11 B Disk 500605ba00c1bc4a 6.0 6.0 on Enabled
12 B Disk 500605ba00c1cdfe 6.0 6.0 on Enabled
13 B Disk 500605ba00c1c532 6.0 6.0 on Enabled
14 B Disk 500605ba00c1eb06 6.0 6.0 on Enabled
15 B Disk 500605ba00c1ec2a 6.0 6.0 on Enabled
16 B Disk 500605ba00c1bc2a 6.0 6.0 on Enabled
17 B Disk 500605ba00c1c472 6.0 6.0 on Enabled
18 B Disk 500605ba00c039aa 6.0 6.0 on Enabled
19 B Disk 500605ba00c1c4de 6.0 6.0 on Enabled
20 B Disk - - - - Empty
21 B Disk - - - - Empty
22 B Disk - - - - Empty
23 B Disk - - - - Empty
24 B Disk - - - - Empty
25 B Disk - - - - Empty
26 B Disk - - - - Empty
27 B Disk - - - - Empty
FC Ports:
Port
ID Port Type Status
--- --------- -----------
- - -
Errors:
------
-
cluster1::>
The following example displays error information about the storage shelves that have errors:
Description
Configure the ACP connectivity on the cluster.
Parameters
-is-enabled {true|false} - Is Enabled?
Configures the connectivity to the specified state.
[-subnet <IP Address>] - Subnet
Configures the connectivity to the specified subnet.
Examples
The following example configures out-of-band ACP connectivity on each node:
cluster1::> storage shelf acp configure -is-enabled true -channel out-of-band -subnet 192.168.0.1 -netmask 255.255.255.0
Description
Displays information about the ACP connectivity on each node
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-errors ]
If you specify the -errors parameter, the command displays detailed information about all modules with errors.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the nodes that match this parameter value.
[-is-enabled {true|false}] - Is Enabled?
Selects the nodes that are enabled or disabled.
[-port <text>] - Port
Selects the nodes that match the specified port on which ACP is configured.
[-address <IP Address>] - IP Address
Selects the nodes with the specified IP address.
[-subnet <IP Address>] - Subnet
Selects the nodes with the specified subnet.
Examples
The following example displays ACP connectivity on each node (in-band):
The following example displays ACP connectivity on each node (out of band):
The following example displays the -instance output of the storage shelf acp show command for in-band connectivity. Use this command to display details on connectivity and configuration.
Node: fas2750-rtp-1b
Channel: in-band
Enable Status: true
Connection Status: active
2 entries were displayed.
The following example displays the -instance output of the storage shelf acp show command for out-of-band connectivity. Use this command to display details on connectivity and configuration.
Node: fas2750-rtp-1b
Channel: out-of-band
Enable Status: true
Port: e0P
IP Address: 192.168.1.75
Subnet: 192.168.0.1
Netmask: 255.255.252.0
Connection Status: full-connectivity
2 entries were displayed.
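In the out-of-band output above, the module's IP address should fall within the configured subnet. Python's standard ipaddress module can check this, using the addresses shown above (an illustrative sanity check, not an ONTAP tool):

```python
import ipaddress

# Subnet 192.168.0.1 with netmask 255.255.252.0 is the network 192.168.0.0/22.
network = ipaddress.ip_network("192.168.0.1/255.255.252.0", strict=False)
address = ipaddress.ip_address("192.168.1.75")  # the ACP IP address shown above

print(network)             # 192.168.0.0/22
print(address in network)  # True
```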
Description
Displays information about the modules connected to each node
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-errors ]
If you specify the -errors parameter, the command displays detailed information about all modules with errors.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the modules that match this parameter value.
[-mac-address <text>] - MAC Address
Selects the modules that match the specified MAC address.
[-module-name <text>] - Module name
Selects the modules that match the specified module name.
[-module-address <IP Address>] - IP Address
Selects the modules that match the specified IP address.
[-protocol-version <text>] - Protocol Version
Selects the modules that match the specified protocol version.
[-firmware-version <text>] - Firmware Version
Selects the modules that match the specified firmware version.
Examples
The following example displays the ACP modules connected to each node:
The following example displays the -instance output of the storage shelf acp module show command, which provides more details on each module.
Node: stor-v4-1a-1b-01
Module Name: 1.10.A
Mac Address: 00:a0:98:19:53:ee
IOM Type: IOM6E
Shelf Serial Number: SHJMS000000001A
IP Address: 192.168.3.239
Protocol Version: 2.1.1.21
Assigner ID: 2.1.1.21
State: Active
Last Contact: 203
Power Cycle Count: 0
Power Off Count: 0
Power On Count: 0
Expander Reset Count: 0
Expander Power Cycle Count: 0
Node: stor-v4-1a-1b-01
Module Name: 1.10.B
Mac Address: 00:a0:98:19:55:16
IOM Type: IOM6E
Shelf Serial Number: SHJMS000000001A
IP Address: 192.168.1.23
Protocol Version: 2.1.1.21
Assigner ID: 2.1.1.21
State: Active
Last Contact: 206
Power Cycle Count: 0
Power Off Count: 0
Power On Count: 0
Expander Reset Count: 0
Expander Power Cycle Count: 0
Node: stor-v4-1a-1b-01
Module Name: 1.254.B
Mac Address: 00:a0:98:32:d6:ac
IOM Type: IOM6
Shelf Serial Number: 6000368103
IP Address: 192.168.2.173
Protocol Version: 1.2.2.8
Assigner ID: 1.2.2.8
State: Active
Last Contact: 215
Power Cycle Count: 0
Power Off Count: 0
Power On Count: 0
Expander Reset Count: 0
Expander Power Cycle Count: 0
Node: stor-v4-1a-1b-01
Module Name: 1.254.A
Mac Address: 00:a0:98:32:d6:dc
IOM Type: IOM6
Shelf Serial Number: 6000368103
IP Address: 192.168.2.221
Protocol Version: 1.2.2.8
Assigner ID: 1.2.2.8
State: Active
Node: stor-v4-1a-1b-02
Module Name: 1.106.A
Mac Address: 00:a0:98:19:53:ee
IOM Type: IOM6E
Shelf Serial Number: SHJMS000000001A
IP Address: 192.168.3.239
Protocol Version: 2.1.1.21
Assigner ID: 2.1.1.21
State: Initializing
Last Contact: 206
Power Cycle Count: 0
Power Off Count: 0
Power On Count: 0
Expander Reset Count: 0
Expander Power Cycle Count: 0
Node: stor-v4-1a-1b-02
Module Name: 1.106.B
Mac Address: 00:a0:98:19:55:16
IOM Type: IOM6E
Shelf Serial Number: SHJMS000000001A
IP Address: 192.168.1.23
Protocol Version: 2.1.1.21
Assigner ID: 2.1.1.21
State: Initializing
Last Contact: 209
Power Cycle Count: 0
Power Off Count: 0
Power On Count: 0
Expander Reset Count: 0
Expander Power Cycle Count: 0
Node: stor-v4-1a-1b-02
Module Name: 1.10.B
Mac Address: 00:a0:98:32:d6:ac
IOM Type: IOM6
Shelf Serial Number: 6000368103
IP Address: 192.168.2.173
Protocol Version: 1.2.2.8
Assigner ID: 1.2.2.8
State: Initializing
Last Contact: 217
Power Cycle Count: 0
Power Off Count: 0
Power On Count: 0
Expander Reset Count: 0
Expander Power Cycle Count: 0
Node: stor-v4-1a-1b-02
Module Name: 1.10.A
Mac Address: 00:a0:98:32:d6:dc
IOM Type: IOM6
Shelf Serial Number: 6000368103
IP Address: 192.168.2.221
Protocol Version: 1.2.2.8
Assigner ID: 1.2.2.8
State: Initializing
Last Contact: 220
Power Cycle Count: 0
Power Off Count: 0
Power On Count: 0
Expander Reset Count: 0
Expander Power Cycle Count: 0
Description
The storage shelf drawer show command displays information for storage shelf drawers in the storage system. If no
parameters are specified, the default command displays the following information for the drawers:
• Shelf Name
• Drawer Number
• Status
• Closed/Open
• Disk Count
• Firmware
To display detailed information for a single drawer, use the -shelf and -drawer parameters.
Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields for all drawers, in column style output.
| [-errors ]
Displays the following error status information about the drawers that have errors:
• Status
• Error Description
| [-instance ]
Displays expanded information for all drawers in the system. If a shelf and drawer are specified, then this parameter displays the same detailed information for the specified drawer as do the -shelf and -drawer parameters.
[-shelf <text>] - Shelf Name
Displays the drawers in the storage shelf that matches the specified shelf name.
[-drawer <integer>] - Drawer Number
Displays the drawers that match the specified drawer number.
[-node {<nodename>|local}] - Node Name
Displays the drawers that are present for the specified node.
[-disk-count <integer>] - Drawer Disk Count
Displays the drawers that have the specified disk count.
[-part-number <text>] - Part Number
Displays the drawers that have the specified part number.
Examples
The following example displays information about all drawers:
Drawer Disk
Shelf Drawer Status A/B Closed? Count Firmware A/B
----- ------ ----------------- ------- ----- -----------------
2.5
1 normal/normal closed 4 00000634/00000634
2 normal/normal closed 4 00000634/00000634
3 normal/normal closed 4 00000634/00000634
4 normal/normal closed 5 00000634/00000634
5 normal/normal closed 4 00000634/00000634
5 entries were displayed.
cluster1::>
The following example displays expanded information about drawer 1 in shelf 2.5:
Shelf: 2.5
Drawer ID: 1
Part Number: 111-03071
Serial Number: 021604008153
Drawer is Closed?: closed
Disk Count: 4
Firmware A: 00000634
Firmware B: 00000634
Path A: ok
Path B: ok
Status A: normal
Status B: normal
Drawer is Supported?: yes
Vendor Name: NETAPP
Mfg. Date: 02/2016
FRU Type: SASDRWR
Error Description: -
cluster1::>
The following example displays error information about the drawers that have errors:
cluster1::>
Description
The storage shelf drawer show-phy command displays information for drawer PHYs in the storage system. If no
parameters are specified, the default command displays the following information about PHYs:
• Shelf Name
• Drawer Number
• PHY Number
• Type
• SAS Address
• State
To display detailed information about a single PHY, use the -shelf, -drawer, and -phy parameters.
Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields for all drawer PHYs, in column style output.
Examples
The following example displays information about all drawer PHYs:
The following example displays expanded information for PHY 0 of drawer 1 in shelf 2.5:
Shelf: 2.5
Drawer ID: 1
PHY Number: 0
Type: disk
Physical ID: 1
SAS Address: 00c5005079183f85
State A: enabled
State B: enabled
Status A: enabled-12gbs
Status B: enabled-12gbs
cluster1::>
Description
The storage shelf drawer show-slot command maps each drawer and slot number to the corresponding bay number.
Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields in column style output.
| [-instance ]
Displays all slot information.
[-shelf <text>] - Shelf Name
Displays the slots in the shelf that matches the specified shelf name.
[-bay <integer>] - Bay Number
Displays the slots that have the specified bay number.
[-node {<nodename>|local}] - Node Name
Displays the slots that are present for the specified node.
[-drawer <integer>] - Drawer Number
Displays the slots in the drawers that match the specified drawer number.
[-slot <integer>] - Slot Number
Displays the slots that match the specified slot number.
[-is-installed {yes|no}] - Is Disk Installed
Displays the slots that have a disk installed.
Examples
The following example displays the mapping from drawer and slot number to bay number:
Description
The storage shelf firmware show-update-status command displays the state of the Shelf Firmware Update process.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node
Selects the node that matches this parameter value.
[-update-status {running|idle}] - Disk Shelf Firmware Update Status
Selects the nodes whose SFU process status matches this parameter value. Possible values are running and idle.
Examples
Description
The storage shelf firmware update command updates the firmware on one or more shelves. You can download the latest firmware by using the storage firmware download command. Use the -shelf parameter to update the firmware on a specific shelf; if you omit this parameter, the firmware is updated on all shelves. To update all shelves of a specific module type, provide a value for the -module-type parameter.
Parameters
{ [-shelf <text>] - Shelf Name
This specifies the name of the shelf whose firmware is to be updated.
Examples
The following example updates the firmware on all the shelves in the cluster:
The following example updates the firmware on all shelves with the IOM6 module type:
The following example refreshes the firmware on all shelves with the IOM6 module type:
Related references
storage firmware download on page 1025
Description
The storage shelf location-led modify command modifies the on/off state of the shelf location LED.
Examples
The following example turns on the shelf location LED of the specified shelf.
cluster1::> storage shelf location-led modify -node node1 -shelf-name 1.0 -led-status on
The following example turns off the shelf location LED of the specified shelf.
cluster1::> storage shelf location-led modify -node node1 -shelf-name 1.0 -led-status off
Description
The storage shelf location-led show command displays the state of the shelf location LED.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]
If you specify the -instance parameter, the command displays detailed information about all fields.
[-shelf-name <text>] - Shelf Name
Selects the shelves whose shelf-name matches this parameter value.
[-node {<nodename>|local}] - Node Name
Selects the nodes that match this parameter value.
[-stack-id <integer>] - Stack ID
Selects the shelves whose stack-id matches this parameter value.
[-shelf-id <integer>] - Shelf ID
Selects the shelves whose shelf-id matches this parameter value.
[-led-status {on|off}] - Location LED
Shows the state of the shelf location LED.
Examples
The following example shows the state of the shelf location LED for each shelf.
Description
The storage shelf port show command displays information for storage shelf ports in the storage system. If no
parameters are specified, the default command displays the following information for the ports:
• Shelf Name
• ID
• Module
• State
• Internal?
To display detailed information for a single port, use the -shelf and -id parameters.
Parameters
{ [-fields <fieldname>, ...]
Displays output in column style about the specified fields for all shelf ports.
| [-cables ]
Displays information about all cables connected to the shelf ports.
| [-instance ]
Displays expanded information for all shelf ports in the system. If a shelf and ID are specified, then this parameter displays the same detailed information for the specified port as do the -shelf and -id parameters.
[-shelf <text>] - Shelf Name
Displays the ports in the storage shelf that matches the specified shelf name.
[-id <integer>] - Port ID
Displays the ports that match the specified ID.
[-node {<nodename>|local}] - Node Name
Displays the ports that are present for the specified node.
[-module-id {A|B}] - Module ID
Displays the ports from the specified shelf module ID.
Examples
The following example displays information about all shelf ports:
The following example displays expanded information about port 0 in shelf 1.4:
Shelf: 1.4
Description
The storage switch add command enables you to add FC switches for SNMP monitoring in a MetroCluster configuration. Front-end switches should not be added for monitoring; adding them results in a Monitor Status Error condition.
Parameters
-address <IP Address> - FC Switch Management IP Address
This parameter specifies the IP address of the back-end switch that is added for monitoring.
[-snmp-version {SNMPv1|SNMPv2c|SNMPv3}] - Supported SNMP Version
This parameter specifies the SNMP version that Data ONTAP uses to communicate with the back-end switch
that is added for monitoring. The default SNMP version is SNMPv2c.
{ [-snmp-community <text>] - (DEPRECATED)-SNMPv2c Community or SNMPv3 Username
Note: This parameter is deprecated and may be removed in a future release of Data ONTAP. Use -snmp-community-or-username instead.
This parameter specifies the SNMPv2c community set or SNMPv3 username on the switch that is added for
monitoring.
| [-snmp-community-or-username <text>]} - SNMPv2c Community or SNMPv3 Username
This parameter specifies the SNMPv2c community set or SNMPv3 username on the switch that is added for
monitoring.
[-veto-backend-fabric-check {true|false}] - Veto Back-end Fabric Check? (privilege: advanced)
If specified, the storage switch add command does not check whether the switch is present in the MetroCluster's back-end fabric. By default, switches that are not present in the fabric cannot be added.
[-blades <integer>, ...] - Cisco Director Class Switch Blades to Monitor
This parameter specifies the blades to monitor on the back-end switch that is added for monitoring. It is only
applicable to director-class switches.
Examples
The following command adds a back-end switch with IP Address 10.226.197.34 for monitoring:
cluster1::>
The following command adds a Cisco Director Class switch for monitoring. Data ONTAP uses SNMPv3 and 'snmpuser1'
username to communicate with this switch.
Description
The storage switch modify command enables you to modify certain parameters for identifying and accessing the FC back-end switches added for monitoring in a MetroCluster configuration.
Parameters
-switch-name <text> - FC Switch Name
This parameter specifies the name of the switch.
[-snmp-version {SNMPv1|SNMPv2c|SNMPv3}] - SNMP Version
This parameter specifies the SNMP version that Data ONTAP uses to communicate with the switch.
[-switch-ipaddress <IP Address>] - Switch IP Address
This parameter specifies the IP address of the switch.
{ [-snmp-community <text>] - (DEPRECATED)-SNMPv2c Community or SNMPv3 Username
Note: This parameter is deprecated and may be removed in a future release of Data ONTAP. Use -snmp-community-or-username instead.
This parameter specifies the SNMPv2c community set or SNMPv3 username on the switch.
| [-snmp-community-or-username <text>]} - SNMPv2c Community or SNMPv3 Username
This parameter specifies the SNMPv2c community set or SNMPv3 username on the switch.
[-blades <integer>, ...] - Director-Class Switch Blades to Monitor
This parameter specifies the blades to monitor on the switch. It is only applicable to director-class switches.
Examples
The following command modifies Cisco_10.226.197.34 switch SNMP community to 'public':
cluster1::>
cluster1::>
The following command modifies Brocade 6505 switch SNMP version to SNMPv3 and SNMPv3 username to
'snmpuser1':
Description
The storage switch refresh command triggers a refresh of the SNMP data for the MetroCluster FC switches and FC-to-SAS bridges. It has no effect if a refresh is already in progress. The FC switches and FC-to-SAS bridges must have been previously added for monitoring by using the storage switch add and storage bridge add commands, respectively.
Examples
The following command triggers a refresh for the SNMP data:
cluster1::*>
Related references
storage switch add on page 1105
storage bridge add on page 918
Description
The storage switch remove command enables you to remove FC back-end switches that were previously added for SNMP monitoring.
Examples
The following command removes 'Cisco_10.226.197.34' switch from monitoring:
cluster1::>
Description
The storage switch show command displays information about all the storage switches in the MetroCluster configuration.
The back-end switches must have been previously added for monitoring using the storage switch add command. If no
parameters are specified, the default command displays the following information about the storage switches:
• Switch
• Symbolic Name
• Vendor
• Switch WWN
• Is Monitored
• Monitor Status
To display detailed profile information about a single storage switch, use the -switch-name parameter.
Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields for all the storage switches, in column style output.
| [-connectivity ]
Displays the following details about the connectivity from the storage switch to connected entities:
• Port name
• Peer type
Displays the following details about the connectivity from the node to the storage switch:
• Node name
• Adapter name
• Adapter type
| [-cooling ]
Displays the following details about the fans and temperature sensors on the storage switch:
• Fan name
| [-error ]
Displays the errors related to the storage switch.
| [-port ]
Displays the following details about the storage switch ports:
• Port name
| [-power ]
Displays the following details about the storage switch power supplies:
| [-san-config ]
Displays the following details about the Virtual Storage Area Networks (VSANs) and Zones of the storage switch:
• VSAN identifier
• VSAN name
| [-sfp ]
Displays the following details about the Small Form-factor Pluggable (SFP) modules of the storage switch ports:
• Port name
• Type of SFP
• SFP vendor
| [-stats ]
Displays the following details about the storage switch ports:
• Port name
| [-instance ]}
Displays expanded information about all the storage switches in the system. If a storage switch is specified, then this parameter displays the same detailed information for the storage switch you specify as does the -switch-name parameter.
[-switch-name <text>] - FC Switch Name
Displays information only about the storage switches that match the name you specify.
[-switch-wwn <text>] - Switch World Wide Name
Displays information only about the storage switches that match the switch wwn you specify.
[-switch-symbolic-name <text>] - Switch Symbolic Name
Displays information only about the storage switches that match the switch symbolic name you specify.
[-switch-fabric-name <text>] - Fabric Name
Displays information only about the storage switches that match the switch fabric you specify.
[-domain-id <integer>] - Switch Domain ID
Displays information only about the storage switches that match the switch domain id you specify.
[-switch-role {unknown|primary|subordinate}] - Switch Role in Fabric
Displays information only about the storage switches that match the switch role you specify.
[-snmp-version {SNMPv1|SNMPv2c|SNMPv3}] - SNMP Version
Displays information only about the storage switches that match the switch SNMP version you specify.
[-switch-model <text>] - Switch Model
Displays information only about the storage switches that match the switch model you specify.
[-switch-vendor {unknown|Brocade|Cisco}] - Switch Vendor
Displays information only about the storage switches that match the switch vendor you specify.
[-fw-version <text>] - Switch Firmware Version
Displays information only about the storage switches that match the switch firmware version you specify.
[-serial-number <text>] - Switch Serial Number
Displays information only about the storage switches that match the switch serial number you specify.
[-switch-ipaddress <IP Address>] - Switch IP Address
Displays information only about the storage switches that match the switch IP address you specify.
[-switch-status {unknown|ok|error}] - Switch Status
Displays information only about the storage switches that match the switch status you specify.
Examples
The following example displays information about all storage switches:
cluster::>
The following example displays connectivity (switch to peer and node to switch) information about all storage switches:
Connectivity:
Port Name Port Mode Port WWN Peer Port WWN Peer Type Peer Info
--------- --------- ---------------- ---------------- ------------ ---------
fc1/1 F-port 2001547fee78efb0 2100001086607d34 unknown unknown
fc1/3 F-port 2003547fee78efb0 21000024ff3dd9cb unknown unknown
fc1/4 F-port 2004547fee78efb0 21000024ff3dda8d unknown unknown
fc1/5 F-port 2005547fee78efb0 500a0980009af880 unknown unknown
fc1/6 F-port 2006547fee78efb0 500a0981009af370 unknown unknown
fc1/11 TE-port 200b547fee78efb0 200b547fee78f088 switch Cisco_10.226.197.34:fc1/11
fc1/12 TE-port 200c547fee78efb0 200c547fee78f088 switch Cisco_10.226.197.34:fc1/12
fc1/13 F-port 200d547fee78efb0 2100001086609e22 unknown unknown
fc1/15 F-port 200f547fee78efb0 21000024ff3dd91b unknown unknown
fc1/16 F-port 2010547fee78efb0 21000024ff3dbef5 unknown unknown
fc1/17 F-port 2011547fee78efb0 500a0981009afda0 unknown unknown
fc1/18 F-port 2012547fee78efb0 500a0981009a9160 unknown unknown
fc1/25 F-port 2019547fee78efb0 21000010866037e8 bridge ATTO_10.226.197.17:1
fc1/27 F-port 201b547fee78efb0 21000024ff3dd9d3 fcvi-adapter dpg-mcc-3240-15-a1:fcvi_device_1
fc1/28 F-port 201c547fee78efb0 21000024ff3dbe3d fcvi-adapter dpg-mcc-3240-15-a2:fcvi_device_1
fc1/29 F-port 201d547fee78efb0 500a0980009ae0a0 fcp-adapter dpg-mcc-3240-15-a2:0c
Path:
Switch
Port
Node Adapter Switch Port Speed Adapter Type
-------------------- -------------- ----------- ------- --------------
dpg-mcc-3240-15-a1 0d fc1/30 4Gbps FCP-Initiator
dpg-mcc-3240-15-a1 fcvi_device_1 fc1/27 8Gbps FC-VI
dpg-mcc-3240-15-a2 0c fc1/29 4Gbps FCP-Initiator
dpg-mcc-3240-15-a2 fcvi_device_1 fc1/28 8Gbps FC-VI
The following command displays cooling (temperature sensors and fans) information about all storage switches:
Fans:
Fan RPM Status
------------ -------- ---------------
Fan Module-1 -        operational
Fan Module-2 -        operational
Fan Module-3 -        operational
Fan Module-4 -        operational
Temperature Sensors:
The following command displays the error information about all storage switches:
The following command displays the detailed information about all the storage switches:
Fabric:
The following command displays port information about all storage switches:
Ports:
Admin Oper SFP Speed BB
Port Name Port WWN Status Status Port Mode Present (Gbps) Credit PeerPortWWN
--------- -------- -------- ------- --------- ------- ------ ------ -----------
fc1/1 2001547fee78f088
enabled online F-port true 8 1 2100001086608b76
fc1/2 2002547fee78f088
enabled offline auto true 0 1
fc1/3 2003547fee78f088
enabled online F-port true 8 1 21000024ff48edd9
fc1/4 2004547fee78f088
enabled online F-port true 8 1 21000024ff3dd981
fc1/5 2005547fee78f088
enabled online F-port true 4 1 500a098001057f98
fc1/6 2006547fee78f088
enabled online F-port true 4 1 500a098101069778
fc1/7 2007547fee78f088
enabled offline auto true 0 1
fc1/8 2008547fee78f088
enabled offline auto true 0 1
fc1/9 2009547fee78f088
enabled offline auto true 0 1
fc1/10 200a547fee78f088
enabled offline auto true 0 32
fc1/11 200b547fee78f088
enabled offline TE-port true 8 32 200b547fee78efb0
fc1/12 200c547fee78f088
enabled offline TE-port true 8 32 200c547fee78efb0
fc1/13 200d547fee78f088
enabled online F-port true 8 32 2100001086609c2e
fc1/14 200e547fee78f088
The following command displays power supply unit information about all storage switches:
Power Supplies:
The following command displays san configuration (VSANs and Zones) information about all storage switches:
                                Oper
VSAN ID Vsan Name               Status Load Balancing       isIOD
------- ----------------------- ------ -------------------- -----
1       VSAN0001                up     src-id-dest-id       true
2       dpg_13_storage          up     src-id-dest-id-ox-id true
3       dpg_13_fcvi             down   src-id-dest-id-ox-id true
10      dpg_mcc_13_fab1_fcvi    up     src-id-dest-id       true
20      dpg_mcc_13_fab1_storage up     src-id-dest-id-ox-id true
VSAN Membership:
Zone Configuration:
                           Member              Member    Member
Zone Name          VSAN ID Switch Name         Port Name Port ID Member WWN
------------------ ------- ------------------- --------- ------- ----------
dpg_mcc_fcvi            30 Cisco_10.226.197.36 fc1/3     -
$default_zone$          30 Cisco_10.226.197.36 fc1/4
dpg_mcc_storage         40 Cisco_10.226.197.36 fc1/1
$default_zone$          40 Cisco_10.226.197.36 fc1/5
dpg_mcc_14_fcvi         70 Cisco_10.226.197.36 fc1/15
$default_zone$          70 Cisco_10.226.197.36 fc1/16
dpg_mcc_14_storage      80 Cisco_10.226.197.34 fc1/13
$default_zone$          80 Cisco_10.226.197.34 fc1/17
dpg_mcc_15_fcvi        110 Cisco_10.226.197.36 fc1/27
$default_zone$         110 Cisco_10.226.197.36 fc1/28
dpg_mcc_15_storage     120 Cisco_10.226.197.34 fc1/25
$default_zone$
The following command displays port SFP information about all storage switches:
SFP:
The following command displays port statistics information about all storage switches:
Port Statistics:
Rx Rx Tx Tx Error
Port Name Frames Octets Frames Octets Frames
------------ ------------ ------------ ------------ ------------ ------------
fc1/1 2116207233 3710682580 3906335374 859905888 0
fc1/2 1 208 1 208 0
fc1/3 3238899002 903116292 3079548736 4014304952 0
fc1/4 1888758418 1643379900 2434821325 2997002344 0
fc1/5 3719731908 1808138824 1878240211 3421335100 0
fc1/6 2644430347 1042009564 249190625 2003353056 0
fc1/7 1 228 1 228 0
fc1/8 1 156 1 156 0
fc1/9 1 148 1 148 0
fc1/10 1 224 1 224 0
fc1/11 3617142898 4129927136 39089396 2595464620 0
fc1/12 473603889 1560909460 2797562521 2833496016 0
fc1/13 1852255936 1091902804 180309704 1769859928 0
fc1/14 1 140 1 140 0
fc1/15 4997082 3519688264 4283938 3370856432 0
fc1/16 4995287 3519577592 4282173 3370732136 0
fc1/17 55146756 178045212 1733567096 3030415436 0
fc1/18 63005788 4287094736 1726651844 2640371212 0
fc1/19 1 200 1 200 0
fc1/20 1 104 1 104 0
fc1/21 1 108 1 108 0
fc1/22 1 108 1 108 0
fc1/23 1 164 1 164 0
fc1/24 1 216 1 216 0
fc1/25 2810698819 1611009260 471527156 1900246656 0
fc1/26 1 104 1 104 0
fc1/27 4165019838 887421780 3848122102 2581891136 0
fc1/28 58607737 1015197080 101621078 3482734024 0
fc1/29 4266270960 222242144 3766674764 2400640552 0
fc1/30 3984658378 1443835508 152597387 678837848 0
fc1/31 1 220 1 220 0
fc1/32 1 120 1 120 0
fc1/33 1 132 1 132 0
fc1/34 1 144 1 144 0
fc1/35 1 160 1 160 0
fc1/36 1 104 1 104 0
fc1/37 1 148 1 148 0
fc1/38 1 184 1 184 0
fc1/39 1 160 1 160 0
fc1/40 1 136 1 136 0
fc1/41 1 196 1 196 0
fc1/42 1 128 1 128 0
fc1/43 1 168 1 168 0
fc1/44 1 212 1 212 0
fc1/45 1 136 1 136 0
fc1/46 1 224 1 224 0
fc1/47 1 104 1 104 0
fc1/48 1 104 1 104 0
Description
This command takes the specified tape drive offline.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node to which the tape drive is attached.
{ -name <text> - Tape Drive Device Name
Use this parameter to specify the device name of the tape drive that needs to be taken offline. The format of
the device name includes a prefix to specify how the tape cartridge is handled and a suffix to describe
the density of the tape. The prefix is 'r', 'nr' or 'ur' for rewind, no rewind, or unload/reload, and the suffix
shows a density of 'l', 'm', 'h' or 'a'. For example, a tape device name for this operation might have the form
"nrst8m", where 'nr' is the 'no rewind' prefix, 'st8' is the alias name and 'm' is the tape density. You can use the
'storage tape show -device-names' command to find more information about device names of tape drives
attached to a node.
| -device-id <text>} - Tape Drive Device ID
Use this parameter to specify the device ID of the tape drive that needs to be taken offline.
Examples
The following example takes the tape drive with device name 'nrst8m' offline. This tape drive is attached to cluster1-01.
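A sketch of this invocation, assuming the storage tape offline command name implied by the description and the -node and -name parameters above:
cluster1::> storage tape offline -node cluster1-01 -name nrst8m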
Description
This command brings a specified tape drive online.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node to which the tape drive is attached.
Examples
The following example brings the tape drive with device id sw4:2.126L4 attached to the node, cluster1-01, online.
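A sketch of this invocation, assuming a -device-id parameter (not listed in the parameters shown above) analogous to the one in the offline command:
cluster1::> storage tape online -node cluster1-01 -device-id sw4:2.126L4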
Description
This command changes the tape drive cartridge position.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node to which the tape drive is attached.
-name <text> - Tape Drive Device Name
Use this parameter to specify the device name of the tape drive whose cartridge position needs to be changed.
The format of the device name includes a prefix to specify how the tape cartridge is handled and a suffix to
describe the density of the tape. The prefix is 'r', 'nr' or 'ur' for rewind, no rewind, or unload/reload, and the
suffix shows a density of 'l', 'm', 'h' or 'a'. For example, a tape device name for this operation might have the form
"nrst8m", where 'nr' is the 'no rewind' prefix, 'st8' is the alias name and 'm' is the tape density. You can use the
'storage tape show -device-names' command to find more information about device names of tape drives
attached to a node.
-operation {weof|fsf|bsf|fsr|bsr|rewind|erase|eom} - Tape Position Operation
Use this parameter to specify the tape positioning operation. The possible values for -operation are:
Examples
The following example specifies a rewind operation on a tape device. Note the -count parameter does not need to be
specified for this type of operation.
cluster1::> storage tape position -node cluster1-01 -name nrst8m -operation rewind
The following example specifies an fsf (forward space filemark) operation on a tape device. Note the -count parameter
specifies 5 forward space filemarks for this operation.
cluster1::> storage tape position -node cluster1-01 -name nrst1a -operation fsf -count 5
The following example specifies an eom (end-of-media) operation on a tape device. The 'eom' positions a tape at end of
data (end of media if full). Note the -count parameter does not need to be specified for this type of operation.
cluster1::> storage tape position -node cluster1-01 -name rst0h -operation eom
Description
This command resets a specified tape drive.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node to which the tape drive is attached.
-device-id <text> - Tape Drive Device ID
Use this parameter to specify the device ID of the tape drive that is to be reset.
Examples
The following example resets the tape drive with device ID sw4:2.126L3 attached to the node, cluster1-01.
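A sketch of this invocation, based on the -node and -device-id parameters described above:
cluster1::> storage tape reset -node cluster1-01 -device-id sw4:2.126L3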
Description
The storage tape show command displays information about tape drives and media changers attached to the cluster. Where
it appears in the remainder of this document, "device" may refer to either a tape drive or a media changer. By default, this
command displays the following information about all tape drives and media changers:
• Node to which the tape drive/media changer is attached
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-alias ]
Displays the tape drive/media changer alias with the following details:
| [-connectivity ]
Displays the connectivity from the node to the tape drive/media changer with the following details:
| [-device-names ]
Displays the tape drive device names used for tape positioning, with the following details: rewind, no rewind,
unload/reload, and density
| [-status ]
Displays the status of tape drive/media changer with the following details:
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-device-id <text>] - Device ID
Selects the tape drive/media changer with the specified device ID.
[-node {<nodename>|local}] - Node
Displays detailed information about tape drives or media changers on the specified node.
[-device-type <text>] - Device Type
Selects the devices with the specified type of tape drive or media changer.
[-description <text>] - Description
Selects the tape drives/media changers with the specified description.
[-alias-name <text>] - Alias Name
Selects the tape drive/media changer with the specified alias name.
Node: cluster1-01
Device ID Device Type Description Status
---------------------- -------------- ------------------------------ --------
sw4:10.11 tape drive HP LTO-3 error
Node: cluster1-01
Device ID Device Type Description Status
---------------------- -------------- ------------------------------ --------
sw4:10.11L1 media changer PX70-TL normal
The following example displays detailed information about a tape drive named sw4:10.11
Node: cluster1-01
Device ID Device Type Description Status
---------------------- -------------- ------------------------------ --------
sw4:10.11 tape drive HP LTO-3 error
Description
The storage tape show-errors command displays error information about tape drives attached to the cluster. By default,
this command displays the following information about all tape drives:
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays detailed information about tape drives on the specified node.
[-device-id <text>] - Device ID
Selects the tape drive with the specified device ID.
Examples
The following example shows error information for all tape drives attached to cluster1.
Node: node1
Device ID: 2d.0
Device Type: tape drive
Description: IBM LTO-6 ULT3580
Alias: st2
Errors: -
The following example shows error information for tape drive sw4:2.126L1 attached to the node, node1.
Node: node1
Device ID: sw4:2.126L1
Device Type: tape drive
Description: Hewlett-Packard LTO-3
Alias: st3
Errors: -
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-device-id <text>] - Device ID
Selects the media changer with the specified device ID.
[-node {<nodename>|local}] - Node
Displays detailed information about media changers on the specified node.
[-description <text>] - Description
Selects the media changers with the specified description.
[-alias-name <text>] - Alias Name
Selects the media changer with the specified alias name.
[-wwnn <text>] - World Wide Node Name
Selects the media changers with the specified World Wide Node Name.
[-wwpn <text>] - World Wide Port Name
Selects the media changer with the specified World Wide Port Name.
[-serial-number <text>] - Serial Number
Selects the media changer with the specified serial number.
[-device-if-type {unknown|fibre-channel|SAS|pSCSI}] - Device If Type
Selects the media changers with the specified interface type.
[-device-state {unknown|available|ready-write-enabled|ready-write-protected|offline|in-use|
error|reserved-by-another-host|normal}] - Operational State of Device
Selects the media changers with the specified operational state.
Examples
The following example displays information about all media changers attached to the cluster:
Errors: -
Paths:
Node Initiator Alias Device State Status
------------------------ --------- ------- ------------------------ --------
cluster1-01 2b mc0 in-use normal
Errors: -
Paths:
Node Initiator Alias Device State Status
------------------------ --------- ------- ------------------------ --------
cluster1-01 5a mc1 available normal
Description
This command displays the supported and qualification status of all tape drives recognized by Data ONTAP that are attached
to a node in the cluster. This includes nonqualified tape drives, which do not have a Tape Configuration File (TCF) on the
storage system. A nonqualified tape drive can be used if it emulates a qualified tape drive or if the appropriate TCF
for the nonqualified tape drive is downloaded from the NetApp Support Site to the storage system.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the tape drives that match this parameter value.
[-tape-drive <text>] - Tape Drive Name
Selects the tape drives that match this parameter value.
Examples
The following example displays support and qualification status of tape drives recognized by Data ONTAP. The command
also identifies tape drives attached to the node that are nonqualified (not supported).
Node: Node1
Is
Tape Drive Supported Support Status
------------------------- --------- ---------------------------------
sw4:2.126L6 false Nonqualified tape drive
Hewlett-Packard C1533A true Qualified
Hewlett-Packard C1553A true Qualified
Hewlett-Packard Ultrium 1 true Qualified
Sony SDX-300C true Qualified
Sony SDX-500C true Qualified
StorageTek T9840C true Dynamically Qualified
StorageTek T9840D true Dynamically Qualified
Tandberg LTO-2 HH true Dynamically Qualified
Node: Node1
Is
Tape Drives Supported Support Status
------------------------------ --------- -------------------------------
Sony SDX-300C true Qualified
Description
The storage tape show-tape-drive command displays information about tape drives attached to the cluster. By default,
this command displays the following information about all tape drives:
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-device-id <text>] - Device ID
Selects the tape drive with the specified device ID.
[-node {<nodename>|local}] - Node
Displays detailed information about tape drives on the specified node.
[-description <text>] - Description
Selects the tape drives with the specified description.
[-alias-name <text>] - Alias Name
Selects the tape drive with the specified alias name.
[-wwnn <text>] - World Wide Node Name
Selects the tape drives with the specified World Wide Node Name.
[-wwpn <text>] - World Wide Port Name
Selects the tape drive with the specified World Wide Port Name.
[-serial-number <text>] - Serial Number
Selects the tape drive with the specified serial number.
[-device-if-type {unknown|fibre-channel|SAS|pSCSI}] - Device If Type
Selects the tape drives with the specified interface type.
[-device-state {unknown|available|ready-write-enabled|ready-write-protected|offline|in-use|
error|reserved-by-another-host|normal}] - Operational State of Device
Selects the tape drives with the specified operational state.
[-error <text>] - Tape Drive Error Description
Selects the tape drives with the specified error string.
[-initiator <text>] - Initiator Port
Selects the tape drives with the specified initiator port.
[-resv-type {off|persistent|scsi}] - Reservation type for device
Selects the tape drives with the specified type.
Examples
The following example displays information about all tape drives attached to the cluster:
Paths:
Node Initiator Alias Device State Status
------------------------ --------- ------- ------------------------ --------
cluster1-01 2a st0 ready-write-enabled normal
Errors: -
Paths:
Node Initiator Alias Device State Status
------------------------ --------- ------- ------------------------ --------
cluster1-01 0b st1 ready-write-enabled normal
Description
This command enables or disables diagnostic tape trace operations for all tape drives attached to the node you have specified.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the tape trace feature is enabled or disabled.
[-is-trace-enabled {true|false}] - Tape Trace Enabled or Disabled
Use this parameter to enable or disable the tape trace feature. By default, the tape trace feature is enabled.
Examples
The following example enables tape trace operation on the node, cluster1-01.
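A sketch of this invocation, based on the parameters described above:
cluster1::> storage tape trace -node cluster1-01 -is-trace-enabled true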
Description
This command clears alias names for a tape drive or media changer.
Examples
The following example clears an alias name 'st3' attached to the node, cluster1-01.
The following example clears all tape drive alias names attached to the node, cluster1-01.
The following example clears all media changer alias names attached to the node, cluster1-01.
The following example clears both tape and media changer alias names attached to the node, cluster1-01.
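Sketches of these four invocations; the -clear-scope parameter is an assumption, since only the example descriptions are given above:
cluster1::> storage tape alias clear -node cluster1-01 -name st3
cluster1::> storage tape alias clear -node cluster1-01 -clear-scope tape
cluster1::> storage tape alias clear -node cluster1-01 -clear-scope media-changer
cluster1::> storage tape alias clear -node cluster1-01 -clear-scope all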
Description
This command sets an alias name for a tape drive or media changer.
Examples
The following example sets an alias name 'st3' for a tape drive with serial number SN[123456]L4 attached to the node,
node1.
cluster1::storage tape alias> set -node node1 -name st3 -mapping SN[123456]L4
The following example sets an alias name 'mc1' for a media changer with serial number SN[65432] attached to the node,
node1.
cluster1::storage tape alias> set -node node1 -name mc1 -mapping SN[65432]
Description
This command displays aliases of all tape drives and media changers attached to every node in the cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example shows the aliases of all tape drives and media changers attached to every node in the cluster:
Node: node1
Alias Mapping
-------------------------------------- ----------------------------------------
mc0 SN[00FRU7800000_LL0]L1
mc1 SN[00FRU7800000_LL1]L1
mc2 SN[aa6a64c69360a0980248c8]
Node: node2
Alias Mapping
-------------------------------------- ----------------------------------------
mc1 SN[c940982fc48c8]
st3 SN[ST456HT8N]L3
st2 SN[HG68000230]L2
st11 SN[aba673980248c8]L7
Description
The storage tape config-file delete command deletes the specified tape drive configuration file from all nodes that
are currently part of the cluster.
Parameters
-filename <text> - Config File Filename
This parameter specifies the name of the tape configuration file that will be deleted from all nodes that are
currently part of the cluster.
Examples
The following example deletes the specified tape drive configuration files on every node that is currently part of the
cluster:
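A sketch of this invocation; the filename tape_config.tcf is a hypothetical placeholder, since no filename is given above:
cluster1::> storage tape config-file delete -filename tape_config.tcf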
Description
The storage tape config-file get command uploads a specified tape drive configuration file to each node that is
currently part of the cluster.
Parameters
-url <text> - Config File URL
This parameter specifies the URL that provides the location of the package to be fetched. Standard URL
schemes, including HTTP and TFTP, are accepted.
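No example is shown for this command; a sketch, using a hypothetical HTTP URL:
cluster1::> storage tape config-file get -url http://webserver.example.com/tape_config.tcf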
Description
The storage tape config-file show command lists the tape drive configuration files loaded onto each node in the
cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information about tape drive configuration files for the specified node.
[-config-file <text>] - Tape Config File
Selects information about the tape drive configuration file specified.
Examples
The following example lists the tape drive config files loaded onto each node in the cluster:
Node: node1
Description
This command displays information such as how the storage tape libraries connect to the cluster, LUN groups, number of LUNs,
WWPN, and switch port information. Use this command to verify the cluster's storage tape library configuration or to assist in
troubleshooting.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-switch ]
If you specify this parameter, switch port information is shown.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Controller Name
The name of the clustered node for which information is being displayed.
[-group <integer>] - LUN Group
A LUN group is a set of LUNs that shares the same path set.
[-target-wwpn <text>] - Library Target Ports
The World Wide Port Name of a storage tape library port.
[-initiator <text>] - Initiator
The host bus adapter that the clustered node uses to connect to storage tape libraries.
[-array-name <array name>] - Library Name
Name of the storage tape library that is connected to the clustered node.
[-target-side-switch-port <text>] - Target Side Switch Port
This identifies the switch port that connects to the tape library's target port.
[-initiator-side-switch-port <text>] - Initiator Side Switch Port
This identifies the switch port that connects to the node's initiator port.
[-lun-count <integer>] - Number of LUNS
Selects the LUN groups that contain the specified number of LUNs.
Examples
The following example displays the storage tape library configuration information.
cluster1::>
Description
This command displays path information for a tape library. The following fields are displayed by default:
• Node name
• Initiator port
• Target port
• TPGN (Target Port Group Number)
• Port speeds
• IOPs
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-detail ]
Using this option displays the following:
• Target IOPs
• Target LUNs
• Path IOPs
• Path errors
• Path quality
• Path LUNs
• Initiator IOPs
• Initiator LUNs
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Controller name
The name of the clustered node for which information is being displayed.
Examples
The following example displays the path information for a storage tape library
Description
This command displays path information for every initiator port connected to a tape library. The output is similar to that of
the storage library path show command, but it is listed by initiator.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Controller name
The name of the clustered node for which information is being displayed.
[-initiator <text>] - Initiator Port
Initiator port that the clustered node uses.
[-target-wwpn <text>] - Target Port
Target World Wide Port Name. Port on the storage tape library that is being used.
[-initiator-side-switch-port <text>] - Initiator Side Switch Port
Switch port connected to the clustered node.
[-target-side-switch-port <text>] - Target Side Switch Port
Switch port connected to the tape library.
[-array-name <array name>] - Library Name
Name of the storage tape library that is connected to the cluster.
[-tpgn <integer>] - Target Port Group Number
TPGN refers to the target port group to which the target port belongs. A target port group is a set of target
ports which share the same LUN access characteristics and failover behaviors.
[-port-speed <text>] - Port Speed
Port Speed of the specified port.
[-path-io-kbps <integer>] - Kbytes of I/O per second on Path (Rolling Average)
Rolling average of Kbytes of I/O per second on the library path.
Examples
The following example displays the path information by initiator for a storage tape library.
Description
The storage tape load-balance modify command modifies the tape load balance setting for a specified node in the
cluster.
Parameters
-node {<nodename>|local} - Node
This parameter specifies the node on which the tape load balance setting is to be modified.
[-is-enabled {true|false}] - Is Tape Load Balance Enabled
This parameter specifies whether tape load balancing is enabled on the node. The default setting is false.
Examples
The following example modifies the tape load balance setting on node1 in the cluster:
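A sketch of this invocation; the value true is a hypothetical choice, since the example text does not say whether load balancing is being enabled or disabled:
cluster1::> storage tape load-balance modify -node node1 -is-enabled true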
Description
The storage tape load-balance show command displays tape load balance settings for each node in the cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information about tape load balancing for the specified node.
[-is-enabled {true|false}] - Is Tape Load Balance Enabled
Selects information about load balance configuration as specified by enabled or disabled setting.
Examples
The following example shows the load balance setting for each node in the cluster:
Node Enabled
--------------------------- ---------
node1 false
node2 false
2 entries were displayed.
System Commands
The system directory
The system commands enable you to monitor and control cluster nodes.
• Chassis ID
• Status
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information about all the chassis in the cluster.
[-chassis-id <text>] - Chassis ID
Selects information about the specified chassis.
[-member-nodes {<nodename>|local}, ...] - List of Nodes in the Chassis
Selects information about the chassis with the specified member node list.
[-num-nodes <integer>] - Number of Nodes in the Chassis
Selects information about the chassis with the specified number of nodes.
[-status {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Status
Selects information about the chassis with the specified status.
Examples
The following example displays information about all the chassis in the cluster:
• Chassis ID
• FRU name
• FRU type
• FRU state
• Nodes sharing the FRU
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information about FRUs.
[-node {<nodename>|local}] - Node
Specifies the node in the cluster on which the chassis health monitor is running.
[-serial-number <text>] - FRU Serial Number
Selects information about the FRU with the specified serial number.
[-fru-name <text>] - FRU Name
Selects information about the FRU with the specified FRU name.
[-type {controller|psu|fan|dimm|bootmedia|ioxm|nvram|nvdimm}] - FRU Type
Selects information about all the FRUs with the specified FRU type.
[-name <text>] - FRU ID
Selects information about the FRU with the specified FRU unique name.
[-state <text>] - FRU State
Selects information about all the FRUs with the specified state.
[-status {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Status
Selects information about all the FRUs with the specified status.
[-display-name <text>] - Display Name for the FRU
Selects information about all the FRUs with the specified FRU display name.
[-monitor {node-connect|system-connect|system|controller|chassis|cluster-switch|example}] -
Monitor Name
Selects information about all the FRUs with the specified monitor name.
[-model <text>] - Model Type
Selects information about all the FRUs with the specified FRU model.
[-shared {shared|not_shared}] - Shared Resource
Selects information about all the FRUs with the specified sharing type.
[-chassis-id <text>] - Chassis ID
Selects information about all the FRUs in the specified chassis.
Examples
The following example displays information about all major chassis specific FRUs in the cluster:
Node: node1
FRU Serial Number: XXT122737891
FRU Name: PSU1 FRU
FRU Type: psu
FRU ID: XXT122737891
FRU State: GOOD
Status: ok
Display Name for the FRU: PSU1 FRU
Monitor Name: chassis
Model Type: none
Shared Resource: shared
Chassis ID: 4591227214
Additional Information About the FRU: Part Number: 114-00065+A0
Revision: 020F
Manufacturer: NetApp
FRU Name: PSU
List of Nodes Sharing the FRU: node1,node2
Number of Nodes Sharing the FRU: 2
Description
The system cluster-switch configure-health-monitor command downloads a non-legacy cluster switch's no-ontap-
dependency (NOD) configuration file in zip format, which contains the XML file and a signed version of it. After download,
the switch health monitor processes the file.
Parameters
-node {<nodename>|local} - Node
This specifies the node or nodes on which the NOD configuration file is to be updated.
-package-url <text> - Package URL
This parameter specifies the URL that provides the location of the package to be fetched. Standard URL
schemes, including HTTP, HTTPS, FTP and FILE, are accepted.
Examples
The following example downloads NOD configuration file to node1 from a web server and enables cshmd to process it:
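A sketch of this invocation, using a hypothetical web server URL:
cluster1::> system cluster-switch configure-health-monitor -node node1 -package-url http://webserver.example.com/nod_config.zip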
Description
The system cluster-switch create command adds information about a cluster switch or management switch. The cluster
switch health monitor uses this information to monitor the health of the switch.
Use this command if ONTAP cannot automatically discover a cluster or management switch. ONTAP relies on the Cisco
Discovery Protocol (CDP) to discover the switches. By default, CDP is always enabled on all cluster ports of a node and
disabled on all non-cluster ports. If CDP is also enabled on your cluster switches, they are discovered automatically.
If you want ONTAP to discover and monitor management switches, the CDP must be enabled on non-cluster ports. To verify
whether the CDP is enabled or disabled, use the command system node run -node <node_name> -command options
cdpd.enable.
Use the system cluster-switch show command to identify switches that the cluster switch health monitor is monitoring.
Parameters
-device <text> - Device Name
Specifies the device name of the switch that you want to monitor. Data ONTAP uses the device name of the
switch to identify the SNMP agent with which it wants to communicate.
-address <IP Address> - IP Address
Specifies the IP address of the switch's management interface.
-snmp-version {SNMPv1|SNMPv2c|SNMPv3} - SNMP Version
Specifies the SNMP version that Data ONTAP uses to communicate with the switch. The default is SNMPv2c.
{ -community <text> - DEPRECATED-Community String or SNMPv3 Username
Note: This parameter is deprecated and may be removed in a future release of Data ONTAP. Use -
community-or-username instead.
Specifies the community string for SNMPv2 authentication or SNMPv3 user name for SNMPv3 security. The
default community string for SNMPv2 authentication is cshm1!.
Examples
cluster1::> system cluster-switch create -device SwitchA -address 1.2.3.4 -snmp-version SNMPv2c -
community-or-username cshm1! -model NX55596 -type cluster-network
cluster2::> system cluster-switch create -device SwitchB -address 5.6.7.8 -snmp-version SNMPv3 -
community-or-username snmpv3u1 -model CN1601 -type management-network
Related references
system node run on page 1272
system cluster-switch show on page 1156
Description
The system cluster-switch delete command disables switch health monitoring for a cluster or management switch.
Parameters
-device <text> - Device Name
Specifies the name of the switch.
[-force [true]] - Force Delete (privilege: advanced)
Specifies whether to force the deletion.
Examples
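A sketch of an invocation, reusing the SwitchA device name from the create example (a hypothetical choice):
cluster1::> system cluster-switch delete -device SwitchA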
Description
The system cluster-switch modify command modifies information about a cluster switch or management switch. The
cluster switch health monitor uses this information to monitor the switch.
Parameters
-device <text> - Device Name
Specifies the device name of the switch that you want to monitor.
[-address <IP Address>] - IP Address
Specifies the IP address of the switch's management interface.
[-snmp-version {SNMPv1|SNMPv2c|SNMPv3}] - SNMP Version
Specifies the SNMP version that Data ONTAP uses to communicate with the switch. The default is SNMPv2c.
{ [-community <text>] - DEPRECATED-Community String or SNMPv3 Username
Note: This parameter is deprecated and may be removed in a future release of Data ONTAP. Use -
community-or-username instead.
Specifies the community string for SNMPv2 authentication or SNMPv3 username for SNMPv3 security.
| [-community-or-username <text>]} - Community String or SNMPv3 Username
Specifies the community string for SNMPv2 authentication or SNMPv3 username for SNMPv3 security.
[-type {cluster-network|management-network}] - Switch Network
Specifies the switch type.
[-is-monitoring-enabled-admin {true|false}] - Enable Switch Monitoring
Specifies the switch admin monitoring status.
Examples
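A sketch of an invocation that disables monitoring for a switch; the device name SwitchA is hypothetical:
cluster1::> system cluster-switch modify -device SwitchA -is-monitoring-enabled-admin false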
Description
The system cluster-switch show command displays configuration details for the monitored cluster switches and
management switches.
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that have the specified name.
| [-snmp-config ]
Displays the following information about a switch:
• Device Name
• SNMP Version
| [-status ]
Displays the following status information about a switch:
• Is Discovered
• Model Number
• Switch Network
• Software Version
• Is Monitored ?
| [-instance ]}
Selects detailed information for all the switches.
[-device <text>] - Device Name
Selects the switches that match the specified device name.
[-address <IP Address>] - IP Address
Selects the switches that match the specified IP address.
[-snmp-version {SNMPv1|SNMPv2c|SNMPv3}] - SNMP Version
Selects the switches that match the specified SNMP version.
Examples
The example above displays the configuration of all cluster switches and management switches.
Description
The system cluster-switch show-all command displays configuration details for discovered monitored cluster switches
and management switches, including switches that are user-deleted. From the list of deleted switches, you can delete a switch
permanently from the database to re-enable automatic discovery of that switch.
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that have the specified name.
| [-instance ]}
Selects detailed information for all the switches.
[-device <text>] - Device Name
Selects the switches that match the specified device name.
[-address <IP Address>] - IP Address
Selects the switches that match the specified IP address.
[-snmp-version {SNMPv1|SNMPv2c|SNMPv3}] - SNMP Version
Selects the switches that match the specified SNMP version.
[-community <text>] - DEPRECATED-Community String or SNMPv3 Username
Note: This parameter is deprecated and may be removed in a future release of Data ONTAP. Use
-community-or-username instead.
Selects the switches that match the specified community string or SNMPv3 username.
[-community-or-username <text>] - Community String or SNMPv3 Username
Selects the switches that match the specified community string or SNMPv3 username.
[-discovered {true|false}] - Is Discovered
Selects the switches that match the specified discovery setting.
[-type {cluster-network|management-network}] - Switch Network
Selects the switches that match the specified switch type.
[-sw-version <text>] - Software Version
Selects the switches that match the specified software version.
Examples
Is Monitored: yes
Reason:
Software Version: Cisco IOS 4.1N1
Version Source: CDP
Description
The system cluster-switch log collect command initiates the collection of a cluster switch log for the specified
cluster switch.
Parameters
-device <text> - Switch Name
Specifies the cluster switch device for which the log collection is being made.
Examples
Description
The system cluster-switch log enable-collection command enables the collection of cluster switch logs.
Examples
Description
The system cluster-switch log modify command modifies the log request of the specified cluster switch.
Parameters
-device <text> - Switch Name
Specifies the cluster switch device for which the log request is being made. Note that the device must be one
of the devices listed as a cluster switch by the system cluster-switch show command. The full device name
from the output of that command must be used.
[-log-request {true|false}] - Requested Log
Specifies the initiation of a switch log retrieval for the specified cluster switch if set to true.
Examples
Description
The system cluster-switch log show command displays the status and requests for cluster switch logs.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
Specifies an instance of the cluster switch devices log status.
[-device <text>] - Switch Name
Specifies the name of the cluster switch device to display log status on.
[-log-request {true|false}] - Requested Log
Specifies the state of the log request for a cluster switch device. Values: true, false.
[-log-status <text>] - Log Status
Specifies the status of the log request for a cluster switch device.
[-log-timestamp <MM/DD/YYYY HH:MM:SS>] - Log Timestamp
Specifies the completion timestamp of the log request for a cluster switch device.
[-idx <integer>] - Index
Specifies the index of the cluster switch device.
[-filename <text>] - Filename
Specifies the full filename of the cluster switch log.
[-filenode <text>] - File Node
Specifies the name of the controller on which the cluster switch log resides.
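The -log-timestamp field uses the MM/DD/YYYY HH:MM:SS layout shown above. As a minimal sketch, a script consuming this output could parse the value as follows; the helper name is illustrative and not part of ONTAP:

```python
from datetime import datetime

# Layout used by the -log-timestamp field, per the parameter description above.
LOG_TS_FORMAT = "%m/%d/%Y %H:%M:%S"

def parse_log_timestamp(text):
    """Parse a switch-log timestamp such as '05/14/2019 13:07:42'."""
    return datetime.strptime(text, LOG_TS_FORMAT)

ts = parse_log_timestamp("05/14/2019 13:07:42")
print(ts.year, ts.month, ts.day)  # → 2019 5 14
```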
Examples
Description
The system cluster-switch polling-interval modify command modifies the interval in which the cluster switch
health monitor polls cluster and management switches.
Parameters
[-polling-interval <integer>] - Polling Interval
Specifies the interval in which the health monitor polls switches. The interval is in minutes. The default value
is 5. The allowed range of values is 2 to 120.
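A client-side check of the documented range (default 5 minutes, allowed 2 to 120) might look like this; the function is an illustrative sketch, not an ONTAP API:

```python
# Documented bounds for -polling-interval (minutes), per the description above.
MIN_INTERVAL, MAX_INTERVAL, DEFAULT_INTERVAL = 2, 120, 5

def validate_polling_interval(minutes=None):
    """Return a valid polling interval, applying the documented default."""
    if minutes is None:
        return DEFAULT_INTERVAL
    if not MIN_INTERVAL <= minutes <= MAX_INTERVAL:
        raise ValueError("polling interval must be 2-120 minutes")
    return minutes

print(validate_polling_interval(None))  # → 5
print(validate_polling_interval(30))    # → 30
```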
Examples
Description
The system cluster-switch polling-interval show command displays the polling interval used by the health
monitor.
Examples
Description
The system cluster-switch threshold show command displays thresholds used by health monitor alerts.
Examples
Description
The system configuration backup copy command copies a configuration backup from one node in the cluster to another
node in the cluster.
Use the system configuration backup show command to display configuration backups to copy.
Parameters
-from-node {<nodename>|local} - Source Node
Use this parameter to specify the name of the source node where the configuration backup currently exists.
-backup <text> - Backup Name
Use this parameter to specify the name of the configuration backup file to copy.
-to-node {<nodename>|local} - Destination Node
Use this parameter to specify the name of the destination node where the configuration backup copy is created.
Examples
The following example copies the configuration backup file node1.special.7z from the node node1 to the node
node2.
Related references
system configuration backup show on page 1166
Description
The system configuration backup create command creates a new configuration backup file.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which to create the backup file.
[-backup-name <text>] - Backup Name
Use this parameter to specify the name of the backup file to create. The backup name cannot contain a space
or any of the following characters: * ? /
[-backup-type {node|cluster}] - Backup Type
Use this parameter to specify the type of backup file to create.
Examples
The following example creates a new cluster configuration backup file called node1.special.7z on the node node1.
cluster1::*> system configuration backup create -node node1 -backup-name node1.special.7z -backup-
type cluster
[Job 194] Job is queued: Cluster Backup OnDemand Job.
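The -backup-name restriction above (no space and none of * ? /) can be checked on the client side before issuing the command. This is a minimal sketch; the helper name is illustrative and not part of ONTAP:

```python
# Characters disallowed in -backup-name, per the parameter description above.
FORBIDDEN = set(" *?/")

def is_valid_backup_name(name):
    """Check a proposed configuration-backup file name against the documented rules."""
    return bool(name) and not (FORBIDDEN & set(name))

print(is_valid_backup_name("node1.special.7z"))  # → True
print(is_valid_backup_name("bad name.7z"))       # → False
```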
Description
The system configuration backup delete command deletes a saved configuration backup.
Use the system configuration backup show command to display saved configuration backups.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the source node where the configuration backup currently exists.
-backup <text> - Backup Name
Use this parameter to specify the name of the configuration backup file to delete.
Examples
The following example shows how to delete the configuration backup file node1.special.7z from the node node1.
Related references
system configuration backup show on page 1166
Description
The system configuration backup download command copies a configuration backup from a source URL to a node in
the cluster.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the node to which the configuration backup is downloaded.
-source <text> - Source URL
Use this parameter to specify the source URL of the configuration backup to download.
[-backup-name <text>] - Backup Name
Use this parameter to specify a new local file name for the downloaded configuration backup.
Examples
The following example shows how to download a configuration backup file from a URL to a file named
exampleconfig.download.7z on the node node2.
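The naming behavior described above (use -backup-name when given, otherwise keep the file name from the source URL) can be sketched as follows; this is an assumption-laden illustration, not ONTAP's implementation:

```python
from urllib.parse import urlparse
from pathlib import PurePosixPath

def local_backup_name(source_url, backup_name=None):
    """Pick the local file name for a downloaded backup: use -backup-name
    when supplied, otherwise fall back to the URL's last path component."""
    if backup_name:
        return backup_name
    return PurePosixPath(urlparse(source_url).path).name

print(local_backup_name("ftp://www.example.com/config/exampleconfig.7z",
                        "exampleconfig.download.7z"))  # → exampleconfig.download.7z
```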
Description
The system configuration backup rename command changes the file name of a configuration backup file.
Use the system configuration backup show command to display configuration backups to rename.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the source node where the configuration backup currently exists.
-backup <text> - Backup Name
Use this parameter to specify the name of the configuration backup file to rename.
-new-name <text> - New Name
Use this parameter to specify a new name for the configuration backup file.
Examples
cluster1::*> system configuration backup rename -node node1 -backup download.config.7z -new-name
test.config.7z
Related references
system configuration backup show on page 1166
Description
The system configuration backup show command displays information about saved configuration backups.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects configuration backups that are saved on the node you specify.
[-backup <text>] - Backup Name
Selects configuration backups that have the backup name you specify.
[-backup-type {node|cluster}] - Backup Type
Selects configuration backups of the type you specify.
[-time <MM/DD HH:MM:SS>] - Backup Creation Time
Selects configuration backups that were saved on the date and time you specify.
[-cluster-name <text>] - Cluster Name
Selects configuration backups that were saved in the cluster that has the name you specify.
[-cluster-uuid <UUID>] - Cluster UUID
Selects configuration backups that were saved in the cluster that has the UUID you specify.
[-size {<integer>[KB|MB|GB|TB|PB]}] - Size of Backup
Selects configuration backups that have the file size you specify.
[-nodes-in-backup {<nodename>|local}, ...] - Nodes In Backup
Selects configuration backups that include the configuration of the nodes you specify.
[-version <text>] - Software Version
Selects configuration backups that have the software version you specify.
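Size values for the -size parameter combine an integer with one of the units listed above. A sketch of parsing such a string, assuming binary (power-of-two) multipliers, which is an assumption not stated in this reference:

```python
import re

# Units accepted by the -size parameter, per the syntax above.
UNITS = {"KB": 2**10, "MB": 2**20, "GB": 2**30, "TB": 2**40, "PB": 2**50}

def parse_size(text):
    """Convert a size such as '158MB' into bytes; a bare integer is bytes."""
    m = re.fullmatch(r"(\d+)(KB|MB|GB|TB|PB)?", text)
    if not m:
        raise ValueError("bad size: %r" % text)
    count, unit = m.groups()
    return int(count) * (UNITS[unit] if unit else 1)

print(parse_size("158MB"))  # → 165675008
```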
Examples
The following example shows typical output for this command.
Description
The system configuration backup upload command copies a configuration backup from a node in the cluster to a
remote URL.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the node from which the configuration backup is uploaded.
-backup <text> - Backup Name
Use this parameter to specify the file name of the configuration backup to upload.
-destination <text> - Destination URL
Use this parameter to specify the destination URL of the configuration backup.
Examples
The following example shows how to upload the configuration backup file testconfig.7z from the node node2 to a
remote URL.
cluster1::*> system configuration backup upload -node node2 -backup testconfig.7z -destination
ftp://www.example.com/config/uploads/testconfig.7z
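A destination URL like the one in the example carries a host, a remote directory, and a remote file name. A sketch of splitting it apart, for instance to feed a separate FTP client; the function is illustrative and not part of ONTAP:

```python
from urllib.parse import urlparse

def split_ftp_destination(url):
    """Split an upload destination such as
    ftp://www.example.com/config/uploads/testconfig.7z
    into (host, remote directory, remote file name)."""
    parts = urlparse(url)
    directory, _, filename = parts.path.rpartition("/")
    return parts.hostname, directory or "/", filename

print(split_ftp_destination("ftp://www.example.com/config/uploads/testconfig.7z"))
# → ('www.example.com', '/config/uploads', 'testconfig.7z')
```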
Parameters
[-destination <text>] - Backup Destination URL
Use this parameter to specify the destination URL for uploads of configuration backups. Use the value "" to
remove the destination URL.
[-username <text>] - Username for Destination
Use this parameter to specify the user name to use to log in to the destination system and perform the upload.
Use the system configuration backup settings set-password command to change the password
used with this user name.
[-numbackups1 <integer>] - Number of Backups to Keep for Schedule 1
Use this parameter to specify the number of backups created by backup schedule 1 to keep on the destination
system. If the number of backups exceeds this number, the oldest backup is removed.
[-numbackups2 <integer>] - Number of Backups to Keep for Schedule 2
Use this parameter to specify the number of backups created by backup schedule 2 to keep on the destination
system. If the number of backups exceeds this number, the oldest backup is removed.
[-numbackups3 <integer>] - Number of Backups to Keep for Schedule 3
Use this parameter to specify the number of backups created by backup schedule 3 to keep on the destination
system. If the number of backups exceeds this number, the oldest backup is removed.
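The retention behavior described for -numbackups1 through -numbackups3 (oldest backup removed once the count exceeds the limit) can be sketched as follows, assuming a list ordered oldest-first; this is an illustration, not ONTAP's implementation:

```python
def prune_backups(backups, keep):
    """Keep only the newest `keep` backups from an oldest-first list,
    mirroring the documented behavior of removing the oldest first."""
    return backups[-keep:] if keep > 0 else []

history = ["wk1.7z", "wk2.7z", "wk3.7z", "wk4.7z"]
print(prune_backups(history, 2))  # → ['wk3.7z', 'wk4.7z']
```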
Examples
The following example shows how to set the destination URL and user name used for uploads of configuration backups.
Related references
system configuration backup settings set-password on page 1168
Description
The system configuration backup settings set-password command sets the password used for uploads of
configuration backups. This password is used along with the username you specify using the system configuration
backup settings modify command to log in to the system and perform the upload. Enter the command without parameters.
The command prompts you for a password, and for a confirmation of that password. Enter the same password at both prompts.
The password is not displayed.
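The prompt-and-confirm behavior described above can be sketched in Python. getpass reads without echoing; the injectable reader parameter is an illustrative testing hook, not anything ONTAP provides:

```python
import getpass

def read_confirmed_password(prompt="Enter the password: ", reader=getpass.getpass):
    """Prompt twice, as set-password does, and require both entries to match."""
    first = reader(prompt)
    second = reader("Enter the password again: ")
    if first != second:
        raise ValueError("passwords do not match")
    return first

# Inject a fake reader so the sketch is runnable without a terminal.
answers = iter(["s3cret", "s3cret"])
print(read_confirmed_password(reader=lambda _: next(answers)))  # → s3cret
```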
Use the system configuration backup settings show command to display the destination URL for configuration
backups. Use the system configuration backup settings modify command to change the destination URL and
remote username for configuration backups.
Examples
The following example shows successful execution of this command.
Related references
system configuration backup settings modify on page 1167
system configuration backup settings show on page 1169
system configuration backup upload on page 1167
Description
The system configuration backup settings show command displays current settings for configuration backup. These
settings apply to backups created automatically by schedules. By default, the command displays the URL to which configuration
backups are uploaded, and the user name on the remote system used to perform the upload.
Use the system configuration backup settings set-password command to change the password used with the user
name on the destination. Use the system configuration backup settings modify command to change the destination
URL and username for uploads of configuration backups, and to change the number of backups to keep for each schedule.
Parameters
[-instance ]
Use this parameter to display detailed information about configuration backup settings, including the number
of backups to keep for each backup schedule.
Examples
The following example displays basic backup settings information.
The following example shows detailed output using the -instance parameter.
Related references
system configuration backup settings set-password on page 1168
system configuration backup settings modify on page 1167
Description
The system configuration recovery cluster modify command modifies the cluster recovery status. This command
should be used to end the cluster recovery after all recovery procedures are applied.
Parameters
[-recovery-status {complete|in-progress|not-in-recovery}] - Cluster Recovery Status
Use this parameter with the value complete to set the cluster recovery status after the cluster has been
recreated and all of the nodes have been rejoined to it. This enables each node to resume normal system
operations. The in-progress and not-in-recovery values are not applicable to this command.
Examples
The following example modifies the cluster recovery status.
Description
The system configuration recovery cluster recreate command re-creates a cluster, using either the current node
or a configuration backup as a configuration template. After you re-create the cluster, rejoin nodes to the cluster using the
system configuration recovery cluster rejoin command.
Parameters
-from {node|backup} - From Node or Backup
Use this parameter with the value node to re-create the cluster using the current node as a configuration
template. Use this parameter with the value backup to re-create the cluster using a configuration backup as a
configuration template.
[-backup <text>] - Backup Name
Use this parameter to specify the name of a configuration backup file to use as a configuration template. If you
specified the -from parameter with the value backup, you must use this parameter and specify a backup
name. Use the system configuration backup show command to view available configuration backup
files.
The following example shows how to re-create a cluster using the configuration backup siteconfig.backup.7z as a
configuration template.
cluster1::*> system configuration recovery cluster recreate -from backup -backup siteconfig.backup.
7z
Related references
system configuration backup show on page 1166
system configuration recovery cluster rejoin on page 1171
Description
The system configuration recovery cluster rejoin command rejoins a node to a new cluster created earlier using
the system configuration recovery cluster recreate command. Only use this command to recover a node from a
disaster. Because this synchronization can overwrite critical cluster information, and will restart the node you specify, you are
required to confirm this command before it executes.
Parameters
-node {<nodename>|local} - Node to Rejoin
Use this parameter to specify the node to rejoin to the cluster.
Examples
This example shows how to rejoin the node node2 to the cluster.
Warning: This command will rejoin node "node2" into the local cluster,
potentially overwriting critical cluster configuration files. This
command should only be used to recover from a disaster. Do not perform
any other recovery operations while this operation is in progress.
This command will cause node "node2" to reboot.
Do you want to continue? {y|n}: y
Related references
system configuration recovery cluster recreate on page 1170
Examples
The following example displays the cluster recovery status.
Related references
system configuration recovery cluster recreate on page 1170
system configuration recovery cluster rejoin on page 1171
Description
The system configuration recovery cluster sync command synchronizes a node with the cluster configuration.
Only use this command to recover a node from a disaster. Because this synchronization can overwrite critical cluster
information, and will restart the node you specify, you are required to confirm this command before it executes.
Parameters
-node {<nodename>|local} - Node to Synchronize
Use this parameter to specify the name of the node to synchronize with the cluster.
Examples
The following example shows the synchronization of the node node2 to the cluster configuration.
Warning: This command will synchronize node "node2" with the cluster
configuration, potentially overwriting critical cluster configuration
files on the node. This feature should only be used to recover from a
disaster. Do not perform any other recovery operations while this
operation is in progress. This command will cause all the cluster
applications on node "node2" to restart, interrupting administrative
CLI and Web interface on that node.
Do you want to continue? {y|n}: y
All cluster applications on node "node2" will be restarted. Verify that the cluster applications
go online.
Description
The system configuration recovery node restore command restores the configuration of the local node from a
configuration backup file.
Use the system configuration backup show command to view available configuration backup files.
Parameters
-backup <text> - Backup Name
Use this parameter to specify the name of a configuration backup file to use as the configuration template.
[-nodename-in-backup <text>] - Use Backup Identified by this Nodename
Use this parameter to specify a node within the configuration backup file to use as a configuration template.
Only specify this parameter if you are specifying a name other than the name of the local node.
[-force [true]] - Force Restore Operation
Use this parameter with the value true to force the restore operation and overwrite the current configuration
of the local node. This overrides all compatibility checks between the node and the configuration backup. The
configuration in the backup is installed even if it is not compatible with the node's software and hardware.
Use this parameter with the value false to be warned of the specific dangers of restoring and be prompted for
confirmation before executing the command. This value also assures that the command performs compatibility
checks between configuration stored in the backup and the software and hardware of the node. The default is
false.
Examples
The following example shows how to restore the configuration of the local node from the configuration backup of node3
that is stored in the configuration backup file example.backup.7z.
Related references
system configuration backup show on page 1166
• Controller name
• System ID
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information about all the controllers in the cluster.
[-node {<nodename>|local}] - Node
Selects information about the specified controller.
[-system-id <text>] - System ID
Selects information about the controller with the specified System ID.
[-model <text>] - Model Name
Selects information about the controllers with the specified model name.
[-part-number <text>] - Part Number
Selects information about the controllers with the specified part number.
[-revision <text>] - Revision
Selects information about the controllers with the specified revision.
[-serial-number <text>] - Serial Number
Selects information about the controller with the specified system serial number.
[-controller-type <text>] - Controller Type
Selects information about the controllers with the specified controller type.
[-status {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Status
Selects information about the controllers with the specified health monitor status.
[-chassis-id <text>] - Chassis ID
Selects information about the controllers with the specified chassis ID.
Examples
The example below displays information about all controllers in the cluster.
The following example displays detailed information about a specified controller in the cluster.
Description
The system controller bootmedia show command displays details of the bootmedia devices present in all the nodes in a
cluster. These commands are available for 80xx, 25xx and later systems. Earlier models are not supported. By default, the
command displays the following information about the bootmedia:
• Node name
• Display name
• Vendor ID
• Device ID
• Memory size
• Bootmedia state
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information for all the bootmedia devices.
[-node {<nodename>|local}] - Node
Selects the bootmedia device that is present on the specified node.
Examples
The following example displays the information of the bootmedia devices present in all the nodes in a cluster:
Size Bootmedia
Node Display Name Vendor ID Device ID (MB) State Status
------------- -------------------- --------- --------- ------- --------- ------
node1 Micron Technology 634 655 1929 good ok
0x655
node2 Micron Technology 634 655 1929 good ok
0x655
The example below displays the detailed information about the bootmedia present in a node.
Node: node1
Vendor ID: 634
Device ID: 655
Display Name: Micron Technology 0x655
Unique Name: Micron Technology 0x655 (ad.0)
Health Monitor Name: controller
USBMON Health Monitor: present
Bootmedia State: good
Max memory size(in MB): 1929
Status: ok
Description
The system controller bootmedia show-serial-number command displays the Boot Media Device serial number.
These commands are available for 80xx, 25xx and later systems. Earlier models are not supported. By default, the command
displays the following information about the bootmedia:
• Node name
• Display name
• Serial Number
• Size
• Bootmedia state
• Status
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information for all the bootmedia devices.
[-node {<nodename>|local}] - Node
Selects the bootmedia device that is present on the specified node.
[-serial-num <text>] - Serial Number
Selects the bootmedia devices with the specified serial number.
[-vendor-id <Hex Integer>] - Vendor ID
Selects the bootmedia devices with the specified vendor ID.
[-device-id <Hex Integer>] - Device ID
Selects the bootmedia devices with the specified device ID.
[-display-name <text>] - Display Name
Selects the bootmedia devices with the specified display name.
[-unique-name <text>] - Unique Name
Selects the bootmedia device with the specified unique name.
[-monitor {node-connect|system-connect|system|controller|chassis|cluster-switch|example}] -
Health Monitor Name
Selects the bootmedia devices with the specified health monitor.
[-usbmon-status {present|not-present}] - Bootmedia Health Monitor
Selects the bootmedia devices with the specified USBMON status.
[-device-state {good|warn|bad}] - Bootmedia State
Selects the bootmedia devices with the specified device state.
Examples
The following example displays the information of the bootmedia devices present in all the nodes in a cluster:
The following example displays the detailed information about the bootmedia present in a node:
Node: node1
Vendor ID: 8086
Device ID: 8d02
Display Name: TOSHIBA THNSNJ060GMCU
Unique Name: /dev/ad4s1 (TOSHIBA THNSNJ060GMCU)
Health Monitor Name: controller
Bootmedia Health Monitor: present
Bootmedia State: good
Max memory size(in MB): 16367
Status: ok
Serial number: Y4IS104FTNEW
Description
The system controller clus-flap-threshold show command displays the link-flap threshold for all nodes. This
threshold is the number of times the cluster port links on a given node can flap (go down) within a polling period
before an alert is triggered.
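The alert condition described above can be sketched as counting down-transitions within one polling period and comparing against the threshold; the helpers are illustrative and not part of ONTAP:

```python
def count_flaps(link_states):
    """Count down-transitions (flaps) in a sequence of observed link states."""
    return sum(1 for prev, cur in zip(link_states, link_states[1:])
               if prev == "up" and cur == "down")

def should_alert(link_states, threshold):
    """Alert when flaps within one polling period reach the threshold."""
    return count_flaps(link_states) >= threshold

states = ["up", "down", "up", "down", "up"]
print(count_flaps(states))      # → 2
print(should_alert(states, 2))  # → True
```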
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the nodes that match this parameter value.
[-device <text>] - Device
Selects the configuration information that matches the specified device.
[-slot <integer>] - Slot Number
Selects the configuration information that matches the specified slot.
[-subslot <integer>] - Subslot Number
Selects the configuration information that matches the specified subslot.
[-info <text>] - Device Info
Selects the configuration information that matches the specified device information.
Examples
The following example displays configuration information for slot 1 of the controller:
Node: node1
Sub- Device/
Slot slot Information
---- ---- --------------------------------------------------------------------
1 - NVRAM10 HSL
Device Name: Interconnect HBA: Generic OFED Provider
Port Name: ib1a
Default GID: fe80:0000:0000:0000:0000:0000:0000:0104
Base LID: 0x104
Active MTU: 8192
Data Rate: 0 Gb/s (8X)
Link State: DOWN
QSFP Vendor: Amphenol
QSFP Part Number: 112-00436+A0
QSFP Type: Passive Copper 1m ID:00
QSFP Serial Number: APF16130066875
QSFP Vendor: Amphenol
QSFP Part Number: 112-00436+A0
QSFP Type: Passive Copper 1m ID:00
QSFP Serial Number: APF16130066857
cluster1::>
• node
• description
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information for all the PCI devices.
[-node {<nodename>|local}] - Node
Displays configuration errors on the specified node.
[-verbose [true]] - Verbose Output?
The -verbose parameter enables verbose mode, resulting in the display of more detailed output.
[-description <text>] - Error Description
Displays the node with the specified configuration error.
Examples
The example below displays configuration errors on all the nodes in the cluster.
cluster1::>
• Node
• Model
• Type
• Slot
• Device
• Vendor
• Sub-device ID
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information about PCI devices.
[-node {<nodename>|local}] - Node
Selects the PCI devices that are present in the specified node.
[-model <text>] - Model String
Selects the PCI devices that are present on the system with the specified model name.
[-type <integer>] - Device Type
Selects the PCI devices with the specified device type.
[-slot-and-sub <text>] - PCI Slot Number
Selects the PCI devices present in the specified slot or slot-subslot combination.
[-device <text>] - Device
Selects the PCI devices with the specified device ID.
[-vendor <text>] - Vendor Number
Selects the PCI devices with the specified vendor ID.
[-sub-device-id <integer>] - Sub Device ID
Selects the PCI devices with the specified sub-device ID.
Examples
The example below displays information about PCI devices found in I/O expansion slots of all the nodes in the cluster.
cluster1::>
Description
The system controller config pci show-hierarchy command displays the PCI hierarchy of all PCI devices found in
a controller. The command displays the following information about the PCI devices:
• Node
• Level
• Device
• Link Capability
• Link Status
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information for PCI devices.
[-node {<nodename>|local}] - Node
Displays the PCI hierarchy of the specified node.
[-level <integer>] - PCI Device Level
Displays the PCI devices that match the specified level within the PCI hierarchy.
[-pci-device <text>] - PCI Device
Displays the PCI devices that match the specified device description.
[-link-cap <text>] - Link Capability
Displays the PCI devices that match the specified link capability.
[-link-status <text>] - Link Status
Displays the PCI devices that match the specified link status.
Examples
The example below displays the PCI hierarchy for all of the nodes in the cluster.
Node: cluster1-01
Node: cluster1-02
Description
The system controller environment show command displays information about all environment FRUs in the cluster. These
commands are available for 80xx, 25xx and later systems. Earlier models are not supported. By default, the command displays
the following information about the environment FRUs in the cluster:
• Node
• FRU name
• FRU state
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information about the environment FRUs.
[-node {<nodename>|local}] - Node
Selects information about all the environment FRUs that the specified node owns.
[-serial-number <text>] - FRU Serial Number
Selects information about all the environment FRUs with the specified serial number.
[-fru-name <text>] - FRU Name
Selects information about the environment FRU with the specified FRU name.
Examples
The following example displays information about all major environment FRUs in the cluster:
The following example displays detailed information about a specific environment FRU:
cluster1::> system controller environment show -node node1 -fru-name "PSU1 FRU" -instance
Node: node1
FRU Serial Number: XXT122737891
FRU Name: PSU1 FRU
FRU Type: psu
Name: XXT122737891
FRU State: GOOD
Status: ok
Display Name for the FRU: PSU1 FRU
Monitor Name: controller
Model Type: none
Shared Resource: shared
Chassis ID: 4591227214
Additional Information About the FRU: Part Number: 114-00065+A0
Description
The system controller flash-cache show command displays the current Flash Cache device information.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If this parameter is specified, only status information for the matching node is displayed.
[-slot <integer>] - Slot
If this parameter is specified, only status information for the matching slot is displayed.
[-subslot <integer>] - Subslot
If this parameter is specified, only status information for the matching subslot is displayed.
[-slot-string <text>] - Slot String
If this parameter is specified, only status information for the matching slot is displayed. Format can be slot or
slot-subslot.
[-device-state {ok|erasing|erased|failed|removed|online|offline_failed|degraded|
offline_threshold}] - Device State
If this parameter is specified, only status information for the matching device-state is displayed.
[-model-number <text>] - Model Number
If this parameter is specified, only status information for the matching model-number is displayed.
[-part-number <text>] - Part Number
If this parameter is specified, only status information for the matching part-number is displayed.
[-serial-number <text>] - Serial Number
If this parameter is specified, only status information for the matching serial-number is displayed.
[-firmware-version <text>] - Firmware Version
If this parameter is specified, only status information for the matching firmware-version is displayed.
[-hardware-revision <text>] - Hardware Revision
If this parameter is specified, only status information for the matching hardware-revision is displayed.
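Parameters such as -slot-string accept either a bare slot number or a slot-subslot pair. The split can be sketched with a small hypothetical helper (not part of the ONTAP CLI); this is only an illustration of the accepted formats:

```python
def parse_slot_string(slot_string):
    """Split a slot string into (slot, subslot).

    Accepts "slot" or "slot-subslot"; subslot is None when only a
    bare slot number is given. Hypothetical helper, not an ONTAP API.
    """
    parts = slot_string.split("-")
    if len(parts) == 1:
        return int(parts[0]), None
    if len(parts) == 2:
        return int(parts[0]), int(parts[1])
    raise ValueError("expected 'slot' or 'slot-subslot', got %r" % slot_string)

print(parse_slot_string("3"))     # (3, None)
print(parse_slot_string("3-1"))   # (3, 1)
```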
Examples
The following example displays the current state of all Flash Cache devices:
Description
The system controller flash-cache secure-erase run command securely erases the given Flash Cache device.
Parameters
-node {<nodename>|local} - Node
Selects the node of the specified Flash Cache devices.
-slot <text> - Slot
Selects the slot or slot-subslot of the specified Flash Cache devices.
Examples
The following example securely erases the selected Flash Cache device:
Description
The system controller flash-cache secure-erase show command displays the current Flash Cache device secure-
erase status.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If this parameter is specified, only status information for the matching node is displayed.
[-slot <text>] - Slot
If this parameter is specified, only status information for the matching slot is displayed. Slot can have a format
of slot or slot-subslot.
[-device-state {ok|erasing|erased|failed|removed}] - Device State
If this parameter is specified, only status information for the matching device-state is displayed.
Examples
The following example displays the current state of all the Flash Cache devices:
Description
The system controller ioxm show command displays information about the IOXMs that are connected to the nodes in the cluster. By default, the command displays the following information:
• Node name
• Display name
• Is IOXM present?
• Power status
• Health monitor status
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information for all the IOXMs.
[-node {<nodename>|local}] - Node
Selects the IOXM that is connected to the specified node.
[-chassis-config {c-i|c-c|c-b}] - Controller-IOXM or Controller-Controller or Controller-Blank
Selects the IOXMs with the specified chassis configuration.
[-is-present {present|not-present}] - IOXM Presence
Selects the IOXMs that are connected and detected (present) or connected but not detected (not-present).
[-power {good|bad}] - Power to IOXM
Selects the IOXMs with the specified power state.
[-display-name <text>] - Display Name
Selects the IOXMs with the specified display name.
[-unique-name <text>] - Unique Name
Selects the IOXM with the specified unique name.
[-monitor {node-connect|system-connect|system|controller|chassis|cluster-switch|example}] -
Health Monitor Name
Selects the IOXMs with the specified health monitor.
[-status {ok|ok-with-suppressed|degraded|unreachable|unknown}] - IOXM Health
Selects the IOXMs with the specified health monitor status.
Examples
The example below displays the information of all the IOXMs that are connected to the nodes in a cluster.
The example below displays detailed information of an IOXM that is connected to a node.
Node: node1
Controller-IOXM or Controller-Controller or Controller-Blank: c-i
IOXM Presence: present
Power to IOXM: good
Display Name: node1/IOXM
Unique Name: 8006459930
Health Monitor Name: controller
IOXM Health: ok
Description
The system controller fru show command displays information about all the controller specific Field Replaceable Units
(FRUs) in the cluster. These commands are available for 80xx, 25xx and later systems. Earlier models are not supported. By
default, the command displays the following information about all the FRUs in the cluster:
• Node
• FRU name
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information about the controller specific FRUs in the cluster.
[-node {<nodename>|local}] - Node
Selects information about the FRUs in the specified node.
[-subsystem <Subsystem>] - Subsystem
Selects information about the FRUs of the specified subsystem.
[-serial-number <text>] - FRU Serial Number
Selects information about the FRU with the specified serial number.
[-fru-name <text>] - Name of the FRU
Selects information about the FRU with the specified FRU name.
[-type {controller|psu|fan|dimm|bootmedia|ioxm|nvram|nvdimm}] - FRU Type
Selects information about the FRU with the specified FRU type.
[-name <text>] - FRU Name
Selects information about the FRU with the specified unique name.
Examples
The example below displays information about all controller specific FRUs in the cluster.
Description
The system controller fru show-manufacturing-info command displays manufacturing information for field
replaceable units (FRUs) installed in the system. The information includes FRU-description, serial number, part number, and
revision number. To display more details, use the -instance parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
Displays detailed information about the installed FRUs in the system.
[-node {<nodename>|local}] - Node
Selects a specific node's installed FRUs.
[-system-sn <text>] - System Serial Number
Selects information about installed FRUs with the specified system serial number.
[-model-name <text>] - Model Name
Selects information about installed FRUs with the specified model name.
[-system-id <text>] - System ID
Selects information about installed FRUs with the specified system ID.
[-kernel-version <text>] - Kernel Version
Selects information about installed FRUs with the specified kernel version.
[-firmware-release <text>] - Firmware Release
Selects information about installed FRUs with the specified firmware release.
[-description <text>] - FRU Description
Selects information about installed FRUs with the specified FRU description.
[-fru-subtype <text>] - FRU Sub-type
Selects information about the FRU with the specified FRU subtype.
[-serial-number <text>] - FRU Serial Number
Selects information about the FRU with the specified serial number.
[-part-number <text>] - FRU Part Number
Selects information about the FRU with the specified part number.
[-revision <text>] - FRU Revision of Part Number
Selects information about the FRU with the specified revision.
[-manufacturer <text>] - FRU Manufacturer
Selects information about the FRU with the specified manufacturer.
[-manufacture-date <text>] - FRU Manufacturing Date
Selects information about the FRU with the specified manufacture date.
Examples
The following example displays all installed FRUs in the system:
Node: platsw-lodi-1-01
System Serial Number: 791541000047
Model Name: FAS9040
System ID: 0537024373
Firmware release: 10.0X18
Kernel Version: NetApp Release sysMman_3887886_1608151712: Mon Aug 15
15:54:00 PDT 2016
FRU Description FRU Serial Number FRU Part Number FRU Rev.
------------------------ ------------------------ ------------------ ----------
Mother Board 031537000390 111-02419 40
Chassis 031536000252 111-02392 40
DIMM-1 CE-01-1510-02A8DC73 SHB722G4LML23P2-SB -
DIMM-3 CE-01-1510-02A8DCCC SHB722G4LML23P2-SB -
DIMM-8 CE-01-1510-02A8DE54 SHB722G4LML23P2-SB -
DIMM-9 CE-01-1510-02A8DE1C SHB722G4LML23P2-SB -
DIMM-11 CE-01-1510-02A8DF42 SHB722G4LML23P2-SB -
DIMM-16 CE-01-1510-02A8DD9B SHB722G4LML23P2-SB -
FAN1 031534001263 441-00058 40
FAN2 031534001292 441-00058 40
FAN3 031534001213 441-00058 40
PSU1 PSD092153200591 114-00146 40
PSU3 PSD092153200700 114-00146 40
mSATA boot0 1439100B02C3 - MU03
1/10 Gigabit Ethernet Controller IX4-T 031538000121 111-02399 40
QLogic 8324 10-Gigabit Ethernet Controller 031535000664 111-02397 40
NVRAM10 031537000846 111-02394 40
NVRAM10 BATT 31534000932 NetApp, Inc. 111-02591
NVRAM10 DIMM CE-01-1510-02A8DC03 SHB722G4LML23P2-SB -
PMC-Sierra PM8072 (111-02396) 031537000246 111-02396 41
PMC-Sierra PM8072 (111-02396) 031537000246 111-02396 41
PMC-Sierra PM8072 (111-02396) 031537000246 111-02396 41
PMC-Sierra PM8072 (111-02396) 031537000246 111-02396 41
PMC-Sierra PM8072 (111-02396) 031537000179 111-02396 41
Disk Serial Number PNHH1J0B X421_HCOBD450A10 -
Disk Serial Number PNHH2BKB X421_HCOBD450A10 -
Disk Serial Number PNHJPZ8B X421_HCOBD450A10 -
Disk Serial Number PNHG6SKB X421_HCOBD450A10 -
Disk Serial Number PNHKJYTB X421_HCOBD450A10 -
Disk Serial Number PNHSVMEY X421_HCOBD450A10 -
Disk Serial Number PNHT8KWY X421_HCOBD450A10 -
Disk Serial Number PNHSVL0Y X421_HCOBD450A10 -
Disk Serial Number PNHG5RHB X421_HCOBD450A10 -
Disk Serial Number PNHEXLWB X421_HCOBD450A10 -
Disk Serial Number PNHT8LJY X421_HCOBD450A10 -
Disk Serial Number PNHSVKZY X421_HCOBD450A10 -
PMC-Sierra PM8072 (111-02396) 031537000179 111-02396 41
PMC-Sierra PM8072 (111-02396) 031537000179 111-02396 41
PMC-Sierra PM8072 (111-02396) 031537000179 111-02396 41
DS2246 6000113106 0190 -
DS2246-Pwr-Supply XXT111825308 114-00065+A0 9C
DS2246-Pwr-Supply XXT111825314 114-00065+A0 9C
DS2246-MODULE 8000675532 111-00690+A3 23
DS2246-MODULE 8000751790 111-00690+A3 23
DS2246-CABLE 512130075 112-00430+A0 -
DS2246-CABLE - - -
DS2246-CABLE 512130118 112-00430+A0 -
DS2246-CABLE - - -
49 entries were displayed.
Description
The system controller fru led disable-all command turns off all the controller and IOXM FRU fault LEDs.
A FRU (Field Replaceable Unit) is any piece of the system that is designed to be easily and safely replaced by a field technician.
Both the controller and IOXM FRUs have a number of internal FRUs for which there are corresponding fault LEDs. In addition,
there is a summary FRU fault LED on the external face-plate of both the controller and IOXM; labeled with a "!". A summary
fault LED will be on when any of the internal FRU fault LEDs are on. Only the controller and IOXM internal FRU fault LEDs
can be controlled by the end-user. The summary fault LEDs are turned on and off based on the simple policy described above. If
you want to turn off the summary fault LED, you must turn off all internal FRU fault LEDs.
All FRU fault LEDs are amber in color. However, not all amber LEDs in the system are FRU fault LEDs; externally visible fault
LEDs are labeled with a "!". Internal FRU fault LEDs remain on even when the controller or IOXM is removed from the
chassis. In addition, internal FRU fault LEDs remain on until explicitly turned off by the end-user, even after a FRU has
been replaced.
FRUs are identified by a FRU ID and slot tuple. FRU IDs include: DIMMs, cards in PCI slots, boot media devices, NV batteries
and coin cell batteries. For each FRU ID, the FRUs are numbered 1 through N, where N is the number of FRUs of that particular
type that exist in the controller or IOXM. Both controller and IOXM have a FRU map label for use in physically locating
internal FRUs. The FRU ID/slot tuple used by the system controller fru led show command matches that specified on
the FRU map label.
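The summary-LED policy described above is a simple OR over the internal FRU fault LEDs, keyed by the same FRU ID/slot tuples that appear on the FRU map label. A minimal sketch of that policy; the FRU names and states below are illustrative, not actual system state:

```python
def summary_led(internal_leds):
    """The summary fault LED ("!") is lit whenever any internal FRU
    fault LED is lit; turning it off requires turning off every
    internal LED. Illustrative model of the policy described above."""
    return "on" if any(state == "on" for state in internal_leds.values()) else "off"

# Internal FRU fault LEDs, keyed by (fru-id, slot) as on the FRU map label.
leds = {("dimm", 3): "on", ("pci", 1): "off", ("bootmedia", 1): "off"}
print(summary_led(leds))   # on

leds[("dimm", 3)] = "off"  # the last lit internal LED is turned off...
print(summary_led(leds))   # off (...so the summary LED goes out)
```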
Examples
Turn off all FRU fault LEDs.
Related references
system controller fru led modify on page 1195
system controller fru led show on page 1196
Description
The system controller fru led enable-all command turns on all the controller and IOXM FRU fault LEDs.
A FRU (Field Replaceable Unit) is any piece of the system that is designed to be easily and safely replaced by a field technician.
Both the controller and IOXM FRUs have a number of internal FRUs for which there are corresponding fault LEDs. In addition,
there is a summary FRU fault LED on the external face-plate of both the controller and IOXM; labeled with a "!". A summary
Examples
Turn on all FRU fault LEDs.
Related references
system controller fru led modify on page 1195
system controller fru led show on page 1196
Description
The system controller fru led modify command modifies the current state of the controller and IOXM FRU fault
LEDs.
A FRU (Field Replaceable Unit) is any piece of the system that is designed to be easily and safely replaced by a field technician.
Both the controller and IOXM FRUs have a number of internal FRUs for which there are corresponding fault LEDs. In addition,
there is a summary FRU fault LED on the external face-plate of both the controller and IOXM; labeled with a "!". A summary
fault LED will be on when any of the internal FRU fault LEDs are on. Only the controller and IOXM internal FRU fault LEDs
can be controlled by the end-user. The summary fault LEDs are turned on and off based on the simple policy described above. If
you want to turn off the summary fault LED, you must turn off all internal FRU fault LEDs.
All FRU fault LEDs are amber in color. However, not all amber LEDs in the system are FRU fault LEDs; externally visible fault
LEDs are labeled with a "!". Internal FRU fault LEDs remain on even when the controller or IOXM is removed from the
chassis. In addition, internal FRU fault LEDs remain on until explicitly turned off by the end-user, even after a FRU has
been replaced.
FRUs are identified by a FRU ID and slot tuple. FRU IDs include: DIMMs, cards in PCI slots, boot media devices, NV batteries
and coin cell batteries. For each FRU ID, the FRUs are numbered 1 through N, where N is the number of FRUs of that particular
type that exist in the controller or IOXM. Both controller and IOXM have a FRU map label for use in physically locating
internal FRUs. The FRU ID/slot tuple used by the system controller fru led show command matches that specified on
the FRU map label.
Examples
Turn on DIMM 3's FRU fault LED.
cluster1::*> system controller fru led modify -node node1 -fru-id dimm -fru-slot 3 -fru-state on
Turn on the FRU fault LEDs for all cards in PCI slots.
cluster1::*> system controller fru led modify -node node1 -fru-id pci -fru-slot * -fru-state on
Related references
system controller fru led show on page 1196
Description
The system controller fru led show command displays information about the current state of the controller and IOXM
FRU fault LEDs.
A FRU (Field Replaceable Unit) is any piece of the system that is designed to be easily and safely replaced by a field technician.
Both the controller and IOXM FRUs have a number of internal FRUs for which there are corresponding fault LEDs. In addition,
there is a summary FRU fault LED on the external face-plate of both the controller and IOXM; labeled with a "!". A summary
fault LED will be on when any of the internal FRU fault LEDs are on.
All FRU fault LEDs are amber in color. However, not all amber LEDs in the system are FRU fault LEDs; externally visible fault
LEDs are labeled with a "!". Internal FRU fault LEDs remain on even when the controller or IOXM is removed from the
chassis.
FRUs are identified by a FRU ID and slot tuple. FRU IDs include: DIMMs, cards in PCI slots, boot media devices, NV batteries
and coin cell batteries. For each FRU ID, the FRUs are numbered 1 through N, where N is the number of FRUs of that particular
type that exist in the controller or IOXM. Both controller and IOXM have a FRU map label for use in physically locating
internal FRUs. The FRU ID/slot tuple used by the system controller fru led show command matches that specified on
the FRU map label.
Examples
List the current state of all FRU fault LEDs.
cluster1::*> system controller fru led show -node host1 -fru-id controller -fru-slot 1
Node FRU Type Bay Slot State Lit By
--------------------- ----------- --- ---- ------- -------
host1
controller A 1 off -
Description
The system controller location-led modify command modifies the current state of the location LED. When lit, the
location LED can help you find the controller in the data center.
There is a blue location LED on every controller and on the front of the chassis. When you turn on the location LED for either
controller, the chassis location LED automatically turns on. When both controller location LEDs are off, the chassis location
LED automatically turns off.
After the location LED is turned on, it stays illuminated for 30 minutes and then automatically shuts off.
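The chassis location LED behavior described above is again a simple OR policy over the two controllers. A small sketch of that rule; the node names are invented for illustration:

```python
def chassis_location_led(controller_leds):
    """The chassis location LED turns on when either controller's
    location LED is on, and turns off only when both are off.
    Illustrative model of the behavior described above."""
    return "on" if "on" in controller_leds.values() else "off"

leds = {"node1": "off", "node2": "off"}
print(chassis_location_led(leds))   # off

leds["node1"] = "on"                # either controller LED lights the chassis LED
print(chassis_location_led(leds))   # on
```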
Parameters
-node {<nodename>|local} - Node
Selects the location LED on the specified node.
[-state {on|off}] - LED State
Modifies the state of the location LED on the node.
Examples
The following example turns on the location LED:
Description
The system controller location-led show command shows the current state of the location LED. When lit, the
location LED can help you find the controller in the data center.
There is a blue location LED on every controller and on the front of the chassis. When you turn on the location LED for either
controller, the chassis location LED automatically turns on. When both controller location LEDs are off, the chassis location
LED automatically turns off.
After the location LED is turned on, it stays illuminated for 30 minutes and then automatically shuts off.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example lists the current state of the location LED:
Description
The system controller memory dimm show command displays information about the DIMMs in all the nodes in the
cluster. These commands are available for 80xx, 25xx and later systems. Earlier models are not supported. By default, the
command displays the following information about all the DIMMs in the cluster:
• Node
• DIMM name
• CPU socket
• Channel
• Slot number
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
Examples
The example below displays information about the DIMMs in all the nodes in the cluster.
The example below displays detailed information about a specific DIMM in a specific controller.
cluster1::> system controller memory dimm show -instance -node node1 -pds-id 1
Node: node1
DIMM ID: 1
Slot Name: DIMM-1
CPU Socket: 0
Channel: 0
Slot Number on a Channel: 0
Serial Number: AD-01-1306-2EA01E9A
Part Number: HMT82GV7MMR4A-H9
Correctable ECC Error Count: 0
Uncorrectable ECC Error Count: 0
Health Monitor Name: controller
Status: ok
Unique Name of DIMM: DIMM-1
Display Name for the DIMM: DIMM-1
CECC Alert Method: bucket
Replace DIMM: false
Description
The system controller nvram-bb-threshold show command displays the threshold for the NVRAM bad block counts
for a node.
Description
The system controller pci show command displays details of the PCI devices present in all of the nodes in a cluster.
These commands are available for 80xx, 25xx and later systems. Earlier models are not supported. By default, the command
displays the following information about the PCI devices:
• Node name
• Display name
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information for all of the PCI devices.
[-node {<nodename>|local}] - Node
Selects the PCI devices that are present in the specified node.
[-bus-number <integer>] - Bus Number
Selects the PCI devices with the specified bus number.
[-device-number <integer>] - Device Number
Selects the PCI devices with the specified device number.
[-function-number <integer>] - Function Number
Selects the PCI devices with the specified function number.
[-slot-number <integer>] - Slot Info
Selects the PCI devices with the specified slot number.
[-monitor {node-connect|system-connect|system|controller|chassis|cluster-switch|example}] -
Health Monitor Name
Selects the PCI devices monitored by the specified health monitor.
[-vendor-id <Hex Integer>] - Vendor ID
Selects the PCI devices with the specified vendor ID.
[-device-id <Hex Integer>] - Device ID
Selects the PCI devices with the specified device ID.
[-physical-link-width <integer>] - Physical Link Width
Selects the PCI devices with the specified physical link width.
[-functional-link-width <integer>] - Functional Link Width
Selects the PCI devices with the specified functional link width.
[-physical-link-speed <text>] - Physical Link Speed(GT/s)
Selects the PCI devices with the specified physical link speed.
[-functional-link-speed <text>] - Functional Link Speed(GT/s)
Selects the PCI devices with the specified functional link speed.
[-unique-name <text>] - Unique Name
Selects the PCI devices with the specified unique name.
[-corr-err-count <integer>] - Correctable Error Count
Selects the PCI devices with the specified correctable error count.
[-health {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Status
Selects the PCI devices with the specified health monitor status.
[-display-name <text>] - Display Name
Selects the PCI devices with the specified display name.
Examples
The example below displays the information about the PCIe devices present in all of the nodes in the cluster.
The example below displays detailed information about a PCIe device in a node.
Node: cluster1-01
Bus Number: 1
Device Number: 0
Function Number: 0
Slot Info: 0
Health Monitor Name: controller
Vendor ID: 11f8
Device ID: 8001
Physical Link Width: 4
Functional Link Width: 4
Physical Link Speed(GT/s): 5GT/s
Functional Link Speed(GT/s): 5GT/s
Unique Name: ontap0@pci0:1:0:0
Correctable Error Count: 0
Status: ok
Display Name: Ontap PCI Device 0
Correctable Error Difference: 0
Description
The system controller pcicerr threshold modify command modifies node-wide PCIe correctable error threshold
counts in the cluster.
Parameters
[-pcie-cerr-threshold <integer>] - Corr. Error Limit
The PCIe error threshold count that would trigger an alert if exceeded.
[-nvram-bb-threshold <integer>] - NVRAM Bad Block limit
The NVRAM bad block threshold count that would trigger an alert if exceeded.
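The two thresholds act as simple per-node limits: a counter that exceeds its configured limit triggers an alert. A sketch of the comparison; the counter keys and alert names are invented for illustration, and the real checks are internal to ONTAP's health monitors:

```python
def threshold_alerts(counts, pcie_cerr_limit, nvram_bb_limit):
    """Return an alert name for each per-node counter that has exceeded
    its configured limit; a count equal to the limit does not alert.
    Illustrative only."""
    alerts = []
    if counts.get("pcie-corr-errors", 0) > pcie_cerr_limit:
        alerts.append("pcie-corr-error-limit-exceeded")
    if counts.get("nvram-bad-blocks", 0) > nvram_bb_limit:
        alerts.append("nvram-bad-block-limit-exceeded")
    return alerts

print(threshold_alerts({"pcie-corr-errors": 12, "nvram-bad-blocks": 0},
                       pcie_cerr_limit=10, nvram_bb_limit=5))
# ['pcie-corr-error-limit-exceeded']
```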
Description
The system controller pcicerr threshold show command displays information about node-wide PCIe correctable
error threshold counts in the cluster.
Examples
The example below displays the information about node-wide PCIe error threshold count in the cluster:
Description
The system controller platform-capability show command displays information about all platform capabilities for
each controller in the cluster. By default, the command displays the following information about all controllers in the cluster:
• Controller Name
• Capability ID
• Capability Supported?
• Capability Name
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
Displays detailed information about all controllers in the cluster.
Examples
The following example displays platform capability information for the controller:
Description
The system controller replace cancel command cancels a controller replacement that is in a paused state
(paused-on-request, paused-on-error, or paused-for-intervention). The update cannot be canceled if it is not in a paused state.
Examples
The following example displays a cancel operation:
Examples
The following example displays pause operation:
Description
The system controller replace resume command resumes an update that is currently in one of the paused-on-
request, paused-on-error, or paused-for-intervention states. If the update is not paused, an error is returned.
Examples
The following example shows a resume operation:
Description
The system controller replace show command displays overall information about the currently running, or previously
run controller replacement operation. The command displays the following information:
• Operation Status
• Error message
• Recommended action
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Description
The system controller replace show-details command displays detailed information about the currently running and
previously run non-disruptive controller replacement operations. The command displays the following information:
• Phase
• Node
• Task name
• Task status
• Error message
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-operation-identifier {None|Controller-replacement}] - Operation Identifier
Specifies the NDO operation identifier.
[-task-identifier <integer>] - Task Identifier
Specifies the identification number of the task.
[-node <nodename>] - Node That Performs Operation
Specifies the node that is to be replaced.
Examples
The following example displays detailed information about the non-disruptive replacement operation:
Description
The system controller replace start command is used to initiate a controller-replacement operation. The update is
preceded by a validation of the HA pair to ensure that any issues that might affect the update are identified.
There are predefined points in the update when the update can be paused (either requested by the user or by the operation in case
of an error or for manual intervention).
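The start, pause, resume, and cancel semantics of the replacement operation can be modeled as a small state machine: resume and cancel are valid only while the update is paused. A hypothetical sketch under those assumptions, not the actual ONTAP implementation:

```python
class ControllerReplaceOp:
    """Minimal model of the controller-replacement operation states.
    Hypothetical; the state names follow the documentation above."""
    PAUSED = {"paused-on-request", "paused-on-error", "paused-for-intervention"}

    def __init__(self):
        self.state = "not-started"

    def start(self):
        # Preceded by a validation of the HA pair in the real operation.
        self.state = "in-progress"

    def pause(self, reason="paused-on-request"):
        if reason not in self.PAUSED:
            raise ValueError(reason)
        self.state = reason

    def resume(self):
        if self.state not in self.PAUSED:
            raise RuntimeError("update is not paused")
        self.state = "in-progress"

    def cancel(self):
        if self.state not in self.PAUSED:
            raise RuntimeError("update can be canceled only while paused")
        self.state = "canceled"
```

For example, calling resume() on an operation that is in-progress raises an error, matching the documented behavior.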
Examples
The following example shows the replacement operation:
2. Verify that the NVMEM or NVRAM batteries of the new nodes are charged, and charge them
if they are not. You must physically check the new nodes to see whether the NVMEM or
NVRAM batteries are charged. You can check the battery status by connecting to a console
or using SSH, logging in to the Service Processor (SP) for your system, and using the
system sensors command to see whether the battery has a sufficient charge.
Attention: Do not try to clear the NVRAM contents. If you need to clear the contents
of NVRAM, contact NetApp technical support.
3. If you are replacing a controller with a used one, ensure that you run wipeconfig
before the controller replacement.
Description
The system controller service-event delete command removes the service event from the list and extinguishes all
related FRU attention LEDs.
In some cases, where the underlying fault condition remains, the service event might be reported again, causing it to reappear in
the list. In such cases, it is necessary to remedy the underlying fault condition in order to clear the service event.
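The delete/re-report behavior can be sketched as follows: deleting removes the row, but a later monitor scan re-reports any event whose underlying fault condition still exists. This is a hypothetical model; the event IDs and fault strings are invented:

```python
class ServiceEvents:
    """Illustrative model of the service-event list. Deleting an event
    removes the row (and would extinguish the related FRU LEDs), but a
    rescan re-reports any fault condition that still exists."""
    def __init__(self):
        self.events = {}    # event-id -> fault description
        self.next_id = 1

    def _report(self, fault):
        self.events[self.next_id] = fault
        self.next_id += 1

    def delete(self, event_id):
        del self.events[event_id]

    def rescan(self, active_faults):
        # Faults that persist after a delete reappear under a new ID.
        for fault in active_faults:
            if fault not in self.events.values():
                self._report(fault)

ev = ServiceEvents()
ev.rescan(["DIMM-3 missing"])
print(ev.events)                 # {1: 'DIMM-3 missing'}
ev.delete(1)
ev.rescan(["DIMM-3 missing"])    # fault not remedied -> event reappears
print(ev.events)                 # {2: 'DIMM-3 missing'}
```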
Parameters
-node {<nodename>|local} - Node
Selects service events on the specified nodes.
-event-id <integer> - Service Event ID
Selects the service events that match the specified event identifier. Together with the node, this field uniquely
identifies the row to delete. Use the system controller service-event show command to find the
event identifier for the service event to delete.
Examples
The following example lists the currently active service events. Then, using the listed Service Event ID, the service event
is deleted:
Related references
system controller service-event show on page 1210
Description
The system controller service-event show command displays one or more events that have been detected by the
system for which a physical service action might be required. Physical service actions sometimes involve replacing or re-seating
misbehaving FRUs. In such cases FRU attention LEDs will be illuminated to assist in physically locating the FRU in need of
attention. When the FRU in question is contained within another FRU, both the inner and outer FRU attention LEDs will be lit.
It creates a path of LEDs that starts at the chassis level and leads to the FRU in question. For example, if a DIMM is missing
from the controller motherboard, the storage OS will detect this and log a service event whose location is the DIMM slot on the
controller. The DIMM slot LED, controller LED and chassis LED will all be lit to create a path of LEDs to follow.
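The "path of LEDs" described above can be derived by walking the FRU containment chain outward from the faulted FRU to the chassis. A sketch with an invented containment map; the FRU names are illustrative only:

```python
def led_path(containment, fru):
    """Collect the faulted FRU and every containing FRU whose attention
    LED would be lit, out to the chassis. The containment map and FRU
    names are invented for illustration."""
    path = [fru]
    while fru in containment:
        fru = containment[fru]
        path.append(fru)
    return path

containment = {"dimm-slot-3": "controller-A", "controller-A": "chassis"}
print(led_path(containment, "dimm-slot-3"))
# ['dimm-slot-3', 'controller-A', 'chassis']
```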
FRU attention LEDs that are not visible from outside of the system (for example, those on the controller motherboard, such as
DIMMs and boot devices) remain on for a few minutes, even after power is removed from the containing FRU.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects service events on the specified nodes.
[-event-id <integer>] - Service Event ID
Selects the service events that match the specified event identifier. Together with the node, this field uniquely
identifies the row for use with the system controller service-event delete command
[-event-loc <text>] - Location
Selects the service events that match the specified event location.
[-event-desc <text>] - Description
Selects the service events that match the specified event description.
[-event-timestamp <text>] - Timestamp
The time that the event occurred, recorded by the Service Processor
Examples
The following example lists the currently active service events.
Related references
system controller service-event delete on page 1210
Description
The system controller slot module insert command adds a module on the controller.
Parameters
-node {<nodename>|local} - Node
Selects the PCIe modules that are present in the specified node.
-slot <text> - Slot Number
Selects the PCIe modules present in the specified slot or slot-subslot combination.
Examples
The following example adds a module in the local node:
p2i030::>
Description
The system controller slot module remove command removes a module on the controller.
Parameters
-node {<nodename>|local} - Node
Selects the PCIe modules that are present in the specified node.
-slot <text> - Slot Number
Selects the PCIe modules present in the specified slot or slot-subslot combination.
Examples
The following example removes a module in the local node:
p2i030::>
Description
The system controller slot module replace command powers off a module on the controller for replacement.
Parameters
-node {<nodename>|local} - Node
Selects the PCIe modules that are present in the specified node.
-slot <text> - Slot Number
Selects the PCIe modules present in the specified slot or slot-subslot combination.
Examples
The following example powers off a module in the local node:
p2i030::>
Description
The system controller slot module show command displays hotplug status of a module on the controller. The
command displays the following information about the PCIe modules:
• Node
• Slot
• Module
• Status
Examples
The following example displays hotplug status of PCI modules found in the local node:
::>
Description
The system controller sp config show command displays the configuration information of the service
processor for all nodes in the cluster.
To display more details, use the -instance parameter. These commands are available for 80xx, 25xx and later systems. Earlier
models are not supported.
Parameters
{ [-fields <fieldname>, ...]
Selects the field that you specify.
| [-instance ]}
Displays detailed configuration information of the service processor.
[-node {<nodename>|local}] - Node
Use this parameter to list the service processor configuration of the specific node.
[-version <text>] - Firmware Version
Selects the service processor configuration with the specified firmware version.
[-boot-version {primary|backup}] - Booted Version
Selects the service processor configuration with the specified version of the currently booted partition.
[-monitor {node-connect|system-connect|system|controller|chassis|cluster-switch|example}] - Health Monitor Name
Selects the service processor configuration with the specified monitor name.
[-sp-status {online|offline|sp-daemon-offline|node-offline|degraded|rebooting|unknown|updating}] - SP Status
Selects the service processor configuration with the specified status of the service processor.
[-sp-config {true|false}] - Auto Update Configured
Selects information about the service processor with the specified configuration status of the service processor.
[-status {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Status
Selects the service processor configuration information with the specified service processor status.
[-link-status {up|down|disabled|unknown}] - Public Link Status
Selects the service processor configuration with the specified physical ethernet link status.
[-name <text>] - Display Name
Selects the service processor configuration with the specified unique name.
Examples
The example below displays configuration of the service processor in all the nodes in the cluster:
The example below displays configuration of the service processor of a particular node in detail:
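Representative invocations for these two examples might be (the node name node1 is illustrative):

```
cluster1::> system controller sp config show
cluster1::> system controller sp config show -node node1 -instance
```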
Description
The system controller sp upgrade show command displays the following information about the service processor
firmware of all the nodes in the cluster:
• Node name
• Is autoupdate enabled?
• Status of autoupdate
To display more details, use the -instance parameter. These commands are available for 80xx, 25xx and later systems. Earlier
models are not supported.
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed upgrade information of the service processor.
[-node {<nodename>|local}] - Node
Use this parameter to list the upgrade information of the service processor on the specified node.
[-new-fw-avail {true|false}] - New Firmware Available
Selects the information of the service processors which have new firmware available.
Examples
The example below displays service processor upgrade information for all nodes in the cluster:
The example below displays the detailed service processor upgrade information for a specific node:
Node: node1
New Firmware Available: false
New Firmware Version: Not Applicable
Auto Update: true
Auto Update Status: installed
Auto Update Start Time: Thu Oct 20 20:06:03 2012 Etc/UTC
Auto Update End Time: Thu Oct 20 20:09:19 2012 Etc/UTC
Auto Update Percent Done: 0
Auto Update Maximum Retries: 5
Auto Update Current Retries: 0
Previous AutoUpdate Status: passed
Description
Display feature usage information in the cluster on a per-node and per-week basis.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays feature usage information for the specified node name.
[-serial-number <Node Serial Number>] - Node Serial Number
Displays feature usage information for the specified serial number.
[-feature-name <Managed Feature>] - Feature Name
Displays feature usage information for the specified feature name.
[-week-number <Sequence Number>] - Week Number
Displays feature usage information for the specified week number.
[-usage-status {not-used|configured|in-use|not-available}] - Usage Status
Displays feature usage information that matches the specified usage status.
[-date-collected <MM/DD/YYYY HH:MM:SS>] - Collection Date
Displays feature usage information that is collected on the day matching the specified date.
[-owner <text>] - Owner
Displays feature usage information for the specified owner name.
[-feature-message <text>] - Feature Message
Displays feature usage information that contains the specified feature message.
Examples
The following example displays a usage output filtered by the serial number and feature name:
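The command name is not shown in this extract; assuming it is system feature-usage show, and using an illustrative serial number and feature name, the invocation might be:

```
cluster1::> system feature-usage show -serial-number 1-81-0000001 -feature-name NFS
```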
Description
Display usage summary information about features in the cluster on a per-node basis. The summary information includes
counter information such as the number of weeks the feature was in use and the last date and time the feature was used.
Additional information can also be displayed by using the -instance parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-serial-number <Node Serial Number>] - Node Serial Number
Displays usage summary information for the specified serial number.
[-feature-name <Managed Feature>] - Feature Name
Displays usage summary information for the specified feature name.
[-weeks-in-use <integer>] - Weeks In-Use
Displays usage summary information for features matching the number of weeks in use.
[-last-used <MM/DD/YYYY HH:MM:SS>] - Date last used
Displays usage summary information for features last used on the specified date.
[-owner <text>] - Owner
Displays usage summary information for the specified owner name.
[-weeks-not-used <integer>] - Weeks Not Used
Displays usage summary information for features matching the number of weeks not in use.
[-weeks-configured <integer>] - Weeks Configured
Displays usage summary information for features matching the number of weeks that the feature was in
configuration.
[-weeks-not-available <integer>] - Weeks Data Not Available
Displays usage summary information for features matching the number of weeks when usage data was not
available.
Examples
The following example displays a usage summary output for a cluster of two nodes:
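Assuming the command is system feature-usage show-summary (the name is not shown in this extract), a cluster-wide summary might be requested as:

```
cluster1::> system feature-usage show-summary
```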
Description
The system fru-check show command checks and displays the results of quick diagnostic tests performed on certain FRUs of
each controller in the cluster. The tests are not intended to be exhaustive; they provide a quick check of certain FRUs,
especially after replacement.
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that have the specified name.
| [-instance ]}
Selects detailed information (if available) for all the FRUs.
[-node {<nodename>|local}] - Node
Selects the FRUs that belong to the node that has the specified name.
[-serial-number <text>] - FRU Serial Number
Selects the FRU matching the specified serial number.
[-fru-name <text>] - FRU Name
Selects the FRU matching the specified fru-name.
[-fru-type {controller|dimm|bootmedia|nvram|nvdimm}] - FRU Type
Selects the FRUs of the specified type.
[-fru-status {pass|fail|unknown}] - Status
Selects the FRUs whose FRU check status matches that specified. "pass" indicates the FRU is operational.
"fail" indicates the FRU is not operating correctly. "unknown" indicates a failure to obtain FRU information
during the check.
[-display-name <text>] - Display Name
Selects the FRU matching the specified display name.
system ha commands
The ha directory
Related references
system ha interconnect status on page 1236
Description
The system ha interconnect config show command displays the high-availability interconnect device basic
configuration information.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that you
specify.
| [-instance ]}
Use this parameter to display all the fields from all nodes in the cluster.
[-node {<nodename>|local}] - Node
Use this parameter to display all the fields from the specified node in the cluster.
[-transport <text>] - Interconnect Type
Selects the nodes that match this HA interconnect transport type.
[-local-sysid <integer>] - Local System ID
Selects the nodes that match this local system unique identifier.
Examples
The following example displays the HA interconnect configuration information on FAS8000 series nodes in the cluster:
Node: ic-f8040-01
Interconnect Type: Infiniband (Mellanox ConnectX)
Local System ID: 536875713
Partner System ID: 536875678
Connection Initiator: local
Interface: backplane
Node: ic-f8040-02
Interconnect Type: Infiniband (Mellanox ConnectX)
Local System ID: 536875678
Partner System ID: 536875713
Connection Initiator: partner
Interface: backplane
The following example displays the HA interconnect configuration information on FAS2500 series nodes in the cluster:
Node: ic-f2554-03
Interconnect Type: Infiniband (Mellanox Sinai)
Local System ID: 1781036608
Partner System ID: 1780360209
Connection Initiator: local
Interface: backplane
Node: ic-f2554-04
Description
The system ha interconnect link off command turns off the specified link on the high-availability interconnect device.
For the nodes in the cluster with two external high-availability interconnect links, you must specify the link number (0-based) to
turn off the specified link. For the nodes in the cluster with interconnect links over the backplane, you must specify the link
number 1 to turn off the link.
Parameters
-node <nodename> - Node
This mandatory parameter specifies the node on which the interconnect link is to be turned off. The value
"local" specifies the current node.
-link {0|1} - Link
This mandatory parameter specifies the interconnect link number (0-based) to turn off.
Examples
The following example displays output of the command on the nodes with a single interconnect link or nodes with
interconnect links over the backplane:
The following example displays output of the command on the nodes with two interconnect links connected externally:
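Representative invocations for the two cases, with illustrative node names, might be:

```
cluster1::> system ha interconnect link off -node node1 -link 1
cluster1::> system ha interconnect link off -node node2 -link 0
```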
Description
The system ha interconnect link on command turns on the specified link on the high-availability interconnect device.
For the nodes in the cluster with two external high-availability interconnect links, you must specify the link number (0-based) to
turn on the specified link. For the nodes in the cluster with interconnect links over the backplane, you must specify the link
number 1 to turn on the link.
Parameters
-node <nodename> - Node
This mandatory parameter specifies the node on which the interconnect link is to be turned on. The value
"local" specifies the current node.
-link {0|1} - Link
This mandatory parameter specifies the interconnect link number (0-based) to turn on.
Examples
The following example displays output of the command on the nodes with a single interconnect link or nodes with
interconnect links over the backplane:
The following example displays output of the command on the nodes with two interconnect links connected externally:
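Representative invocations for the two cases, with illustrative node names, might be:

```
cluster1::> system ha interconnect link on -node node1 -link 1
cluster1::> system ha interconnect link on -node node2 -link 0
```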
Parameters
-node <nodename> - Node
This mandatory parameter specifies which node will have the error statistics cleared. The value "local"
specifies the current node.
Examples
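The Description for this Parameters block is missing from this extract; assuming it documents system ha interconnect ood clear-error-statistics, the invocation might be (the node name is illustrative):

```
cluster1::> system ha interconnect ood clear-error-statistics -node node1
```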
Description
The system ha interconnect ood clear-performance-statistics command enables you to clear all the
performance statistics collected for the out-of-order delivery-capable high-availability interconnect device. This command is
only supported on FAS2500 series nodes in the cluster.
Parameters
-node <nodename> - Node
This mandatory parameter specifies which node will have the performance statistics cleared. The value "local"
specifies the current node.
Examples
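A representative invocation, with an illustrative node name:

```
cluster1::> system ha interconnect ood clear-performance-statistics -node node1
```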
Description
The system ha interconnect ood disable-optimization command disables the optimization capability on the high-
availability interconnect device. The command is only supported on FAS2500 series nodes in the cluster.
Examples
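No Parameters block appears above for this command; assuming it takes the same -node parameter as its sibling ood commands, the invocation might be:

```
cluster1::> system ha interconnect ood disable-optimization -node node1
```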
Description
The system ha interconnect ood disable-statistics command disables collection of the statistics on the out-of-
order delivery-capable high-availability interconnect device. This command is only supported on FAS2500 series nodes in the
cluster.
Parameters
-node <nodename> - Node
This mandatory parameter specifies which node will have the statistics collection disabled. The value "local"
specifies the current node.
Examples
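A representative invocation, with an illustrative node name:

```
cluster1::> system ha interconnect ood disable-statistics -node node1
```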
Description
The system ha interconnect ood enable-optimization command enables you to turn on optimization (coalescing
out-of-order delivery requests) on the high-availability interconnect device. This command is only supported on FAS2500 series
nodes in the cluster.
Parameters
-node <nodename> - Node
This mandatory parameter specifies which node will have the optimization enabled. The value "local"
specifies the current node.
Description
The system ha interconnect ood enable-statistics command enables collection of the statistics on the out-of-order
delivery-capable high-availability interconnect device. This command is only supported on FAS2500 series nodes in the cluster.
Parameters
-node <nodename> - Node
This mandatory parameter specifies which node will have the statistics collection enabled. The value "local"
specifies the current node.
Examples
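A representative invocation, with an illustrative node name:

```
cluster1::> system ha interconnect ood enable-statistics -node node1
```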
Description
The system ha interconnect ood send-diagnostic-buffer command enables you to run a short out-of-order
delivery diagnostic test. The command sends a buffer to the partner controller over the high-availability interconnect. This
command is only supported on FAS2500 series nodes in the cluster.
Parameters
-node <nodename> - Node
This mandatory parameter specifies which node will send the diagnostic buffer to its partner. The value "local"
specifies the current node.
Examples
The following example demonstrates how to use this command to send a diagnostic buffer to the partner:
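A representative invocation, with an illustrative node name:

```
cluster1::> system ha interconnect ood send-diagnostic-buffer -node node1
```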
Description
The system ha interconnect ood status show command displays configuration information of the out-of-order
delivery-capable high-availability interconnect devices. This command is supported only on FAS2500 series nodes in the cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that you
specify.
| [-instance ]}
Use this parameter to display all the fields from all nodes in the cluster.
[-node {<nodename>|local}] - Node
Use this parameter to display all the fields from the specified node in the cluster.
[-is-ood-enabled {true|false}] - Is OOD Enabled
Selects the nodes that match this parameter value.
[-is-coalescing-enabled {true|false}] - Is Coalescing Enabled
Selects the nodes that match this parameter value.
Examples
The following example displays the HA interconnect device out-of-order delivery configuration information on FAS2500
series nodes in the cluster.
Node: ic-f2554-03
NIC Used: 0
Is OOD Enabled: true
Is Coalescing Enabled: true
Node: ic-f2554-04
NIC Used: 0
Is OOD Enabled: true
Is Coalescing Enabled: true
2 entries were displayed.
Description
The system ha interconnect port show command displays the high-availability interconnect device port physical layer
and link layer status information.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that you
specify.
| [-instance ]}
Use this parameter to display all the fields from all nodes in the cluster.
[-node {<nodename>|local}] - Node
Use this parameter to display all the fields from the specified node in the cluster.
[-link-monitor {on|off}] - Link Monitor Detection
Selects the nodes that match this parameter value.
[-port <integer>, ...] - Port Number
Selects the nodes that match this parameter value.
[-phy-layer-state {invalid|sleep|polling|disabled|port-configuration-testing|linkup|link-error-recovery|phytest|reserved}, ...] - Physical Layer State
Selects the nodes that match this parameter value.
[-link-layer-state {invalid|down|initialize|armed|active|reserved}, ...] - Link Layer State
Selects the nodes that match this parameter value.
[-phy-link-up-count <integer>, ...] - Physical Link Up Count
Selects the nodes that match this parameter value. The value is the total number of times the link on a given port has
transitioned up.
[-phy-link-down-count <integer>, ...] - Physical Link Down Count
Selects the nodes that match this parameter value. The value is the total number of times the link on a given port has
transitioned down.
[-is-active-link {true|false}, ...] - Is the Link Active
Selects the nodes that match this parameter value. The value true means the interconnect data channels are
established on this link.
Examples
The following example displays the HA interconnect device port information on FAS8000 series nodes in the cluster:
Related references
system ha interconnect statistics show-scatter-gather-list on page 1231
Description
The system ha interconnect statistics clear-port command clears the high-availability interconnect device port
statistics. This command is supported only on FAS2500 series and FAS8000 series nodes in the cluster.
Note: To display the high-availability interconnect device port statistics, use the statistics show -object
ic_hw_port_stats command.
Parameters
-node <nodename> - Node
Selects the nodes that match this parameter value.
Examples
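A representative invocation, with an illustrative node name:

```
cluster1::> system ha interconnect statistics clear-port -node node1
```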
Description
The system ha interconnect statistics clear-port-symbol-error command clears the high-availability
interconnect device port symbol errors. This command is supported only on FAS2500 series nodes in the cluster.
Note: To display the high-availability interconnect device port statistics, use the statistics show -object
ic_hw_port_stats command.
Parameters
-node <nodename> - Node
Selects the nodes that match this parameter value.
Description
The system ha interconnect statistics show-scatter-gather-list command displays the high-availability
interconnect device scatter-gather list entry statistics. Of the 32 possible entries in a scatter-gather list, the command displays
only the entries that contain valid data.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that you
specify.
| [-instance ]}
Use this parameter to display all the fields from all nodes in the cluster.
[-node {<nodename>|local}] - Node
Use this parameter to display all the fields from the specified node in the cluster.
[-sge <integer>, ...] - Scatter-Gather Entry
Selects the nodes that match this scatter-gather element index value.
[-total-count <integer>, ...] - Total Count
Selects the nodes that match this parameter value. The value is the total number of times a particular scatter-
gather list element is used.
[-total-size <integer>, ...] - Total Size
Selects the nodes that match this parameter value. The value is the total number of bytes written by the high-
availability interconnect device using a particular scatter-gather list element.
Examples
Node: ic-f8040-02
Entry Count Size
----- ----------------- ----------------
1 1544405 310004390
2 6217 16779908
3 1222 12003411
4 338606 5543436659
6 2 41980
Description
The system ha interconnect statistics performance show command displays the high-availability interconnect
device performance statistics.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that you
specify.
| [-instance ]}
Use this parameter to display all the fields from all nodes in the cluster.
[-node {<nodename>|local}] - Node
Use this parameter to display all the fields from the specified node in the cluster.
[-elapsed <integer>] - Elapsed Time (secs)
Selects the nodes that match this parameter value. Displays the total elapsed time between statistics collection
start time to end time. During the initialization stage, statistics collection starts when the partner node is up
and ready. After the initialization stage, the statistics collection start time is reset after every execution of this
command. This means that after the initialization stage, elapsed time represents the time between current
command execution and previous command execution.
[-qmax-wait <integer>] - Maximum Queue Wait Count
Selects the nodes that match this wait value. The queue maximum wait value is the total number of times the
interconnect device waited to post requests on the send queue.
[-qmax-wait-time <integer>] - Average Queue Wait Time (usecs)
Selects the nodes that match this average wait time value. The queue maximum wait time is the average
amount of time the interconnect device waited to post requests on the send queue.
[-qmax-timeout <integer>] - Maximum Queue Timeouts
Selects the nodes that match this parameter value. The queue maximum timeout value is the total number of
times the interconnect device timed out waiting to post requests on the send queue.
[-preempt-timeout <integer>] - Preempt Timeouts
Selects the nodes that match this parameter value. The timeout value is the total number of times polling on
the given transfer ID is preempted.
Examples
The following example displays the HA interconnect device performance statistics for FAS8000 series nodes in the
cluster:
Node: ic-f8040-02
Elapsed Time (secs): 12
Maximum Queue Wait Count: 29
Average Queue Wait Time (usecs): 68
Remote NV Messages Average Time (usecs): 1386
Total Remote NV Transfers: 19190
Remote NV Average Transfer Size: 375
Remote NV Transfers Average Time (usecs): 670
Total IC waits for Given ID: 304
Average IC Waitdone Time (usecs): 5
Total IC isdone Checks: 1409
Total IC isdone Checks Success: 1409
Total IC isdone Checks Failed: 0
IC Small Writes: 20964
IC 4K Writes: 5
IC 8K Writes: 99
IC 16K+ Writes: 229
IC XORDER Writes: 10261
IC XORDER Reads: 0
RDMA Read Count: 337
Average IC Waitdone RDMA-READ Time (usecs): 0
Average MB/s: 0.57080
Average Bytes per Transfer: 187
Total Transfers: 42883
Average Time for NVLOG Sync (msecs): 1009
Maximum Time for NVLOG Sync (msecs): 1009
Maximum Scatter-Gather Elements in a List: 32
Total Receive Queue Waits to Post Buffer: 0
The following example displays the HA interconnect device performance statistics for FAS2500 series nodes in the
cluster:
Node: ic-f2554-04
Elapsed Time (secs): 257
Maximum Queue Wait Count: 7
Average Queue Wait Time (usecs): 10172
Maximum Queue Timeouts: 0
Preempt Timeouts: 0
Non-Preempt Timeouts: 0
Notify Timeouts: 0
Remote NV Messages Average Time (usecs): 4237
Total Remote NV Transfers: 47134
Remote NV Average Transfer Size: 9559
Remote NV Transfers Average Time (usecs): 5463
Total IC waits for Given ID: 178
Average IC Waitdone Time (usecs): 1890
Total IC isdone Checks: 393191
Total IC isdone Checks Success: 47382
Total IC isdone Checks Failed: 345809
IC Small Writes: 78369
IC 4K Writes: 3815
IC 8K Writes: 6005
IC 16K+ Writes: 22993
IC XORDER Writes: 53529
IC XORDER Reads: 0
RDMA Read Count: 524
Average IC Waitdone RDMA-READ Time (usecs): 62
Average MB/s: 2.3682
Average Bytes per Transfer: 5143
Total Transfers: 111501
Average Time for NVLOG Sync (msecs): 822
Maximum Time for NVLOG Sync (msecs): 822
Maximum Scatter-Gather Elements in a List: 27
Description
The system ha interconnect status show command displays the high-availability interconnect connection status.
Connection status information displayed by this command varies by controller model. For nodes with two HA interconnect links
over the backplane or connected externally, this command displays the following information:
• Node
• Link 0 status
• Link 1 status
• Is link 0 active
• Is link 1 active
• Interconnect RDMA status
For nodes with a single HA interconnect link, this command displays the following information:
• Node
• Link status
• Interconnect RDMA status
Running the command with the -instance or -node parameter displays detailed information about the interconnect device
and its ports.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that you
specify.
| [-instance ]}
Use this parameter to display all the fields for the specified node or all the nodes.
[-node {<nodename>|local}] - Node
Use this parameter to display all the fields for the specified node.
[-link-status {up|down}] - Link Status
Selects the nodes that match this parameter value. The value up means the link is online.
[-link0-status {up|down}] - Link 0 Status
Selects the nodes that match this parameter value. The value up means the link is online.
[-link1-status {up|down}] - Link 1 Status
Selects the nodes that match this parameter value. The value up means the link is online.
[-ic-rdma {up|down}] - IC RDMA Connection
Selects the nodes that match this parameter value. The value up means there is an active interconnect connection with the
partner.
[-is-link0-active {true|false}] - Is Link 0 Active
Selects the nodes that match this parameter value. The value true means the interconnect data channels are
established on this link.
[-is-link1-active {true|false}] - Is Link 1 Active
Selects the nodes that match this parameter value. The value true means the interconnect data channels are
established on this link.
[-slot <integer>] - Slot Number
Selects the nodes that match this PCI slot number.
[-driver-name <text>] - Driver Name
Selects the nodes that match this interconnect device driver name.
[-firmware <text>] - Firmware Revision
Selects the nodes that match this firmware version.
[-version <text>] - Version Number
Selects the nodes that match this parameter value.
Examples
The following example displays status information about the HA interconnect connection on FAS8000 series nodes with
two HA interconnect links in the cluster:
Node: ic-f8040-01
Link 0 Status: up
Link 1 Status: up
Is Link 0 Active: true
Is Link 1 Active: false
IC RDMA Connection: up
Node: ic-f8040-02
Link 0 Status: up
Link 1 Status: up
Is Link 0 Active: true
Is Link 1 Active: false
IC RDMA Connection: up
2 entries were displayed.
The following example displays status information about the HA interconnect connection on FAS2500 series nodes with a
single HA interconnect link in the cluster:
Node: ic-f2554-01
Link Status: up
IC RDMA Connection: up
Node: ic-f2554-02
The following example displays detailed information about the HA interconnect link when a parameter such as -instance or
-node is used with the system ha interconnect status show command:
Node: ic-f8040-01
Link 0 Status: up
Link 1 Status: up
Is Link 0 Active: true
Is Link 1 Active: false
IC RDMA Connection: up
Slot: 0
Driver Name: IB Host Adapter i0 (Mellanox ConnectX MT27518 rev. 0)
Firmware: 2.11.534
Debug Firmware: no
Interconnect Port 0 :
Port Name: ib0a
GID: fe80:0000:0000:0000:00a0:9800:0030:33ec
Base LID: 0x3ec
MTU: 4096
Data Rate: 40 Gb/s (4X) QDR
Link Information: ACTIVE
Interconnect Port 1 :
Port Name: ib0b
GID: fe80:0000:0000:0000:00a0:9800:0030:33ed
Base LID: 0x3ed
MTU: 4096
Data Rate: 40 Gb/s (4X) QDR
Link Information: ACTIVE
Description
The system health alert delete command deletes all the alerts on the cluster with the specified input parameters.
Parameters
-node {<nodename>|local} - Node
Use this parameter to delete alerts generated on a cluster only on the node you specify.
-monitor <hm_type> - Monitor
Use this parameter to delete alerts generated on a cluster only on the monitor you specify.
Examples
This example shows how to delete an alert with the specified alert-id:
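A representative invocation matching the example text, assuming the command also accepts the -alert-id parameter documented for system health alert modify (the node name is illustrative; the alert ID is taken from the alert examples in this section):

```
cluster1::> system health alert delete -node node1 -monitor node-connect
           -alert-id DualPathToDiskShelf_Alert
```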
Description
The system health alert modify command suppresses alerts generated on the cluster and sets the acknowledgement state
for an alert.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which you want to change the state.
-monitor <hm_type> - Monitor
Use this parameter to specify the monitor name on which you want to change the state.
-alert-id <text> - Alert ID
Use this parameter to specify the alert ID on which you want to change the state.
-alerting-resource <text> - Alerting Resource
Use this parameter to specify the alerting resource name on which you want to change the state.
[-acknowledge {true|false}] - Acknowledge
Use this parameter to set the acknowledgement state to true or false.
[-suppress {true|false}] - Suppress
Use this parameter to set the suppress state to true or false.
[-acknowledger <text>] - Acknowledger
Use this parameter to set the acknowledger as the filter for setting state.
[-suppressor <text>] - Suppressor
Use this parameter to set the suppressor as the filter for setting state.
Examples
This example modifies the alert field states on the cluster:
cluster1::> system health alert modify -node * -alert-id DualPathToDiskShelf_Alert -suppress true
Description
The system health alert show command displays information about all the alerts generated on the system. Using
-instance adds detailed information.
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays the following additional information about each alert:
• Node name
• Resource name
Examples
The example below displays information about all the alerts generated in the cluster:
Node: node1
Resource: Shelf ID 2
Severity: Major
Suppress: false
Acknowledge: false
Tags: quality-of-service, nondisruptive-upgrade
Probable Cause: Disk shelf 2 does not have two paths to controller
node1.
Possible Effect: Access to disk shelf 2 via controller node1 will be
lost with a single hardware component failure (e.g.
cable, HBA, or IOM failure).
Corrective Actions: 1. Halt controller node1 and all controllers attached to disk shelf 2.
2. Connect disk shelf 2 to controller node1 via two paths following the rules
in the Universal SAS and ACP Cabling Guide.
3. Reboot the halted controllers.
4. Contact support personnel if the alert persists.
The example below displays additional information about a specific alert generated in the cluster:
Node: node1
Monitor: node-connect
Alert ID: DualPathToDiskShelf_Alert
Alerting Resource: 50:05:0c:c1:02:00:0f:02
Description
The system health alert definition show command displays information about the various alerts defined in the
system health monitor policy file. Using -instance will display additional details.
Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Use this parameter to display additional information on each alert definition.
• Node name
• Monitor name
• Subsystem identifier
• Alert ID
• Probable cause
Examples
The example below displays information about all the definitions in the alert definition file:
Node: krivC-01
Monitor: system-connect
Class of Alert: DualControllerNonHa_Alert
Severity of Alert: Major
Probable Cause: Configuration_error
Probable Cause Description: Disk shelf $(sschm_shelf_info.id) is connected to two controllers
($(sschm_shelf_info.connected-nodes)) that are not an HA pair.
Possible Effect: Access to disk shelf $(sschm_shelf_info.id) may be lost with a single
controller failure.
Corrective Actions: 1. Halt all controllers that are connected to disk shelf $(sschm_shelf_info.id).
2. Connect disk shelf $(sschm_shelf_info.id) to both HA controllers following the
rules in the Universal SAS and ACP Cabling Guide.
3. Reboot the halted controllers.
4. Contact support personnel if the alert persists.
Subsystem Name: SAS-connect
Additional Relevant Data: -
Additional Alert Tags: quality_of_service, nondisruptive-upgrade
Description
The system health autosupport trigger history show command displays all the alert triggers in the cluster that
generated the AutoSupport messages. The following fields are displayed in the output:
• Node name
• Monitor name
• Subsystem
• Alert identifier
• Alerting resource
• Severity
Examples
This example displays information about the AutoSupport trigger history:
This example displays detailed information about the AutoSupport trigger history:
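Representative invocations for these two examples might be:

```
cluster1::> system health autosupport trigger history show
cluster1::> system health autosupport trigger history show -instance
```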
Description
The system health config show command displays the configuration and status of each health monitor in the cluster. The
command shows a health status for each health monitor. The health status is an aggregation of the subsystem health for each
subsystem that the health monitor monitors. For example, if a health monitor monitors two subsystems and the health status of
one subsystem is "ok" and the other is "degraded", the health status for the health monitor is "degraded".
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to list the health monitors present on the specified node.
[-monitor <hm_type>] - Monitor
Use this parameter to display the health monitors with the specified monitor name.
[-subsystem <hm_subsystem>, ...] - Subsystem
Selects the health monitors with the specified subsystems.
[-health {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Health
Selects the health monitors with the specified health status.
[-mon-version <text>] - Monitor Version
Selects the health monitors with the specified monitor version.
[-pol-version <text>] - Policy File Version
Selects the health monitors with the specified health monitor policy version.
[-context {Node|Cluster}] - Context
Selects the health monitors with the specified running context.
Examples
The example below displays information about health monitor configuration:
The example below displays detailed information about health monitor configuration:
Node: node1
Monitor: node-connect
Subsystem: SAS-connect
Health: degraded
Monitor Version: 1.0
Policy File Version: 1.0
Context: node_context
Aggregator: system-connect
Resource: SasAdapter, SasDisk, SasShelf
Subsystem Initialization Status: initialized
Subordinate Policy Versions: 1.0 SAS, 1.0 SAS multiple adapters
Description
The system health policy definition modify command enables or disables health monitoring policies based on the
input parameters the user provides.
Examples
This example modifies policy state on the cluster:
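The transcript for this example is not reproduced in this excerpt, and the command's parameters are not listed here. A sketch of a plausible invocation follows; the -node, -monitor, -policy, and -enable parameter names are assumptions, and the policy name is taken from the policy definition show example in this section:

```
cluster1::> system health policy definition modify -node node1 -monitor node-connect
            -policy ControllerToShelfIomA_Policy -enable false
```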
Description
The system health policy definition show command lists the health monitor policy definitions as described by the
health monitor policy file. The command displays the following fields:
• Node name
• Monitor name
• Policy name
• Policy status
• Alert identifier
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The example below displays information about all the policy definitions present in the cluster:
The example below displays detailed information about all the policy definitions present in the cluster:
Node: node1
Monitor: node-connect
Policy: ControllerToShelfIomA_Policy
Rule Expression: nschm_shelf_info.num-paths == 2 && nschm_shelf_info.iomb-adapter == NULL
Variable Equivalence: -
Policy Status: true
Alert ID: ControllerToShelfIomA_Alert
Table and ID of Resource at Fault: nschm_shelf_info.name
Description
The system health status show command displays the health monitor status. The possible states are:
• ok
• ok-with-suppressed
• degraded
• unreachable
Examples
This example displays information about health monitoring status:
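The transcript for this example is not reproduced in this excerpt. Given the states listed above, a minimal sketch of the output might look like this (the status value is illustrative):

```
cluster1::> system health status show
Status
---------------
ok
```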
Description
The system health subsystem show command displays the health status of each subsystem for which health monitoring is
available. This command aggregates subsystem health status from each node in the cluster. A subsystem's health status changes
to "degraded" when a health monitor raises an alert. You can use the system health alert show command to display
information about generated alerts.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-subsystem <hm_subsystem>] - Subsystem
Selects the specified subsystem.
Examples
The example below displays the health status of each subsystem:
The example below displays detailed information about the health status of each subsystem:
Subsystem: SAS-connect
Health: degraded
Initialization State: initialized
Number of Outstanding Alerts: 0
Number of Suppressed Alerts: 0
Node: node1,node2
Subsystem Refresh Interval: 30m, 30m
Subsystem: Switch-Health
Health: ok
Initialization State: initialized
Number of Outstanding Alerts: 0
Number of Suppressed Alerts: 0
Node: node1
Subsystem Refresh Interval: 5m
Subsystem: CIFS-NDO
Health: ok
Initialization State: initialized
Number of Outstanding Alerts: 0
Number of Suppressed Alerts: 0
Node: node1
Subsystem Refresh Interval: 5m
Related references
system health alert show on page 1242
Description
This command adds a license to a cluster. To add a license you must specify a valid license key, which you can obtain from your
sales representative.
Parameters
-license-code <License Code V2>, ... - License Code V2
This parameter specifies the key of the license that is to be added to the cluster. The parameter accepts a list of
28 digit upper-case alphanumeric character keys.
Examples
The following example adds a list of licenses with the keys AAAAAAAAAAAAAAAAAAAAAAAAAAAA and
BBBBBBBBBBBBBBBBBBBBBBBBBBBB to the cluster:
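The transcript for this example is not reproduced in this excerpt. Assuming this section documents the system license add command (the command heading is not shown here), the invocation described above, using the placeholder keys from the example text, would be:

```
cluster1::> system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA,BBBBBBBBBBBBBBBBBBBBBBBBBBBB
```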
Description
This command removes licenses in the cluster that have no effect and can therefore be safely removed: licenses that have
expired and licenses that are not affiliated with any controller in the cluster. Licenses that cannot be deleted are displayed
along with the reasons they cannot be removed.
Parameters
[-unused [true]] - Remove unused licenses
If you use this parameter, the command removes licenses in the cluster that are not affiliated with any
controller in the cluster.
[-expired [true]] - Remove expired licenses
If you use this parameter, the command removes licenses in the cluster that have expired.
[-simulate | -n [true]] - Simulate Only
If you use this parameter, the command does not remove the licenses. Instead, it displays the licenses that
would be removed if this parameter were not provided.
Examples
The following example simulates and displays the licenses that can be cleaned up:
The following licenses are either expired or unused but cannot be safely deleted:
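The command line for this example is not reproduced in this excerpt. Assuming this section documents the system license clean-up command (the command heading is not shown here), the simulated run described above would be invoked as:

```
cluster1::> system license clean-up -unused -expired -simulate
```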
Description
This command deletes a license from a cluster.
Parameters
-serial-number <text> - Serial Number
This parameter specifies the serial number of the license that is to be deleted from the cluster. If this parameter
is not provided, the default value is the serial number of the cluster.
-package <Licensable Package> - Package
This parameter specifies the name of the package that is to be deleted from the cluster.
Examples
The following example deletes a license named CIFS and serial number 1-81-0000000000000000000123456 from the
cluster:
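The transcript for this example is not reproduced in this excerpt. Using the values from the example text, and assuming this section documents the system license delete command (the command heading is not shown here), the invocation would be:

```
cluster1::> system license delete -serial-number 1-81-0000000000000000000123456 -package CIFS
```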
Description
The system license show command displays the information about licenses in the system.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-serial-number <text>] - Serial Number
If you use this parameter, the command displays information only about the licenses that match the serial
number you specify.
[-package <Licensable Package>] - Package
If you use this parameter, the command displays information only about the specified package.
[-owner <text>] - Owner
If you use this parameter, the command displays information only about the packages that match the owner
name you specify.
[-expiration <MM/DD/YYYY HH:MM:SS>] - Expiration
If you use this parameter, the command displays information only about the licenses that have the expiration
date you specify.
[-description <text>] - Description
If you use this parameter, the command displays information only about the licenses that match the
description you specify.
[-type {license|site|demo|subscr|capacity|capacity-per-term}] - Type
If you use this parameter, the command displays information only about the licenses that have the license type
you specify.
[-legacy {yes|no}] - Legacy
If you use this parameter, the command displays information only about the licenses that match the legacy
field you specify.
[-customer-id <text>] - Customer ID
If you use this parameter, the command displays information only about the licenses that have the customer-id
you specify.
Examples
The following example displays default information about all licensed packages in the cluster:
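The transcript for this example is not reproduced in this excerpt. A sketch of the invocation and output follows; the serial number, owner, and package entries are illustrative, not from an actual system:

```
cluster1::> system license show
Serial Number: 1-81-0000000000000000000123456
Owner: cluster1
Package           Type     Description           Expiration
----------------- -------- --------------------- -------------------
Base              license  Cluster Base License  -
NFS               site     NFS License           -
2 entries were displayed.
```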
Description
This command displays the status of all ONTAP aggregates.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you use this parameter, the command displays information only about aggregates that match the given node.
[-aggr-name <text>] - Aggregate Name
If you use this parameter, the command displays information only about the aggregates that match the given
aggregate name.
[-aggr-size {<integer>[KB|MB|GB|TB|PB]}] - Aggregate Size
If you use this parameter, the command displays information only about aggregates that match the given
physical size of an aggregate.
[-licensed-size {<integer>[KB|MB|GB|TB|PB]}] - Licensed Size
If you use this parameter, the command displays information only about aggregates that match the given
licensed-size.
[-expiration <MM/DD/YYYY HH:MM:SS>] - Lease Expiration
If you use this parameter, the command displays information only about aggregates that match the given lease
expiration.
[-status <AggrLicStatus>] - Aggregate Status
If you use this parameter, the command displays information only about aggregates that match the given
status.
[-compliant {true|false}] - Is Aggregate Compliant
If you use this parameter, the command displays information only about aggregates that match the given state
of compliance.
[-aggr-uuid <UUID>] - Aggregate UUID
If you use this parameter, the command displays information only about the aggregates that match the given
aggregate UUID.
Examples
The following example displays the capacity license status of all aggregates in the cluster:
Licensed Physical
Node Aggregate Size Size Lease Expiration Status
--------- ------------------- -------- -------- ------------------ ------------
node1
root1 0B 2GB - lease-not-required
root2 (mirror) 0B 2GB - lease-not-required
aggr1 20GB 20GB 6/21/2018 18:10:00 lease-up-to-date
aggr2 (mirror) 10GB 10GB 6/21/2018 20:00:00 lease-up-to-date
node2
root1 (mirror) 0B 2GB - lease-not-required
root2 0B 2GB - lease-not-required
aggr1 (mirror) 20GB 20GB 6/21/2018 18:10:00 lease-up-to-date
aggr2 10GB 10GB 6/21/2018 20:00:00 lease-up-to-date
node3
root3 0B 2GB - lease-not-required
root4 (mirror) 0B 2GB - lease-not-required
aggr3 15GB 0B 6/21/2018 20:00:00 aggregate-deleted
aggr4 (mirror) 15GB 15GB 6/21/2018 12:00:00 lease-expired
aggr5 (mirror) 15GB 15GB 6/21/2018 21:00:00 lease-up-to-date
aggr6 15GB 15GB 6/21/2018 21:00:00 plex-deleted
aggr7 15GB 14GB 6/21/2018 21:00:00 aggregate-license-size-decreased
aggr8 (mirror) 0B 14GB - lease-missing
node4
root3 (mirror) 0B 2GB - lease-not-required
root4 0B 2GB - lease-not-required
aggr3 (mirror) 15GB 0B 6/21/2018 20:00:00 aggregate-deleted
aggr4 15GB 15GB 6/21/2018 12:00:00 lease-expired
aggr5 15GB 15GB 6/21/2018 21:00:00 lease-up-to-date
aggr6 (mirror) 15GB 0B 6/21/2018 21:00:00 plex-deleted
aggr7 (mirror) 15GB 14GB 6/21/2018 21:00:00 aggregate-license-size-decreased
aggr8 0B 14GB - lease-missing
Description
This command displays the status of all Data ONTAP licenses.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-status {not-compliant|eval|partially-installed|valid|not-installed|not-applicable|not-known}] - Current State
If you use this parameter, the command displays information only about licenses that match the given status.
[-license <Licensable Package>] - License
If you use this parameter, the command displays information only about licenses that match the given license.
[-scope {site|cluster|node|pool}] - License Scope
If you use this parameter, the command displays information only about licenses that match the given scope.
Examples
The following example displays the license status of the cluster:
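The transcript for this example is not reproduced in this excerpt. Assuming this section documents the system license show-status command (it is cross-referenced by that name elsewhere in this chapter), a sketch of the output, using only the fields documented above with illustrative values, might be:

```
cluster1::> system license show-status
Status         License     Scope
-------------- ----------- -------
valid          NFS         site
not-installed  CIFS        site
```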
Description
The system license update-leases command attempts to update (that is, renew) any capacity pool leases that have
expired.
Parameters
[-node {<nodename>|local}] - Nodes to Attempt Renewal
This optional parameter directs the system to update leases for only the specified nodes.
[-force {true|false}] - Force Renewal of Valid Leases
This optional parameter, if set with a value of "true", directs the system to update all leases for a node, not just
those that have expired.
Examples
The following example updates all leases on a node:
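The transcript for this example is not reproduced in this excerpt. Per the parameter descriptions above, updating all leases on a node (not just the expired ones) would be invoked as:

```
cluster1::> system license update-leases -node node1 -force true
```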
Description
Note: This command is deprecated and may be removed in a future release of Data ONTAP. Use the "system license
show-status" command.
The system license capacity show command displays the information about the licenses in the system that are
specifically related to storage capacity limits.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-serial-number <Node Serial Number>] - Serial Number
If you use this parameter, the command displays information only about the capacity-related licenses that
match the serial number you specify.
[-package <Licensable Package>] - Package
If you use this parameter, the command displays information only about the package you specify.
[-owner <text>] - Owner
If you use this parameter, the command displays information only about the capacity-related licenses that have
the owner you specify.
[-max-capacity {<integer>[KB|MB|GB|TB|PB]}] - Maximum Capacity
If you use this parameter, the command displays information only about the capacity-related licenses that have
the maximum amount of attached storage capacity you specify.
[-current-capacity {<integer>[KB|MB|GB|TB|PB]}] - Current Capacity
If you use this parameter, the command displays information only about the capacity-related licenses that
apply to the node with the current attached capacity you specify.
[-expiration <MM/DD/YYYY HH:MM:SS>] - Expiration Date
If you use this parameter, the command displays information only about the capacity-related licenses that have
the expiration date you specify.
[-reported-state {evaluation|warning|missing|enforcement|installed}] - Reported State
If you use this parameter, the command displays information only about the capacity-related licenses that have
the reported state you specify.
Examples
The following example displays information about all capacity-related licensed packages in the cluster, for a hypothetical
cluster of four nodes:
Note that for some nodes below, the maximum capacity is displayed as "-" (meaning "unlimited"). This happens when
there is no capacity license for the node; the node is operating with a limited-time temporary capacity license.
Node: node1
Serial Number: 1-81-0000000000001234567890123456
Max Current
Package Capacity Capacity Expiration
------------------------ -------- -------- -------------------
Select 2TB 15.81GB 4/11/2016 00:00:00
Node: node2
Serial Number: 1-81-0000000000000000000123456788
Max Current
Package Capacity Capacity Expiration
------------------------ -------- -------- -------------------
Select - 10.40TB 4/11/2016 00:00:00
Node: node3
Serial Number: 1-81-0000000000000000000123456789
Max Current
Package Capacity Capacity Expiration
------------------------ -------- -------- -------------------
Select - 10.40TB 4/11/2016 00:00:00
Node: node4
Serial Number: 1-81-0000000000001234567890123456
Max Current
Package Capacity Capacity Expiration
------------------------ -------- -------- -------------------
Select 2TB 15.81GB 4/11/2016 00:00:00
Related references
system license show-status on page 1258
Description
This command displays information about license entitlement risk of the cluster for each license package. The command
displays license package name, entitlement risk, corrective action to reduce the entitlement risk for each package, and the names
and serial numbers for the nodes that do not have a node-locked license for a given package. If the command is used with the
-detail parameter, the output displays the names and serial numbers for all nodes in the cluster instead of only the nodes missing
a node-locked license.
Parameters
{ [-fields <fieldname>, ...]
With this parameter, you can specify which fields should be displayed by the command. License package
names and node serial numbers are always displayed.
| [-detail ]
If you use this parameter, the command displays the license package name, entitlement risk, corrective action,
all nodes' names, their serial numbers, whether a node-locked license is present and whether a given license
package has been in use in the past week for each node in the cluster.
| [-instance ]}
If this parameter is used, the command displays values for all fields for each license package and each node in
the cluster individually.
[-package <Licensable Package>] - Package Name
If you use this parameter, the command displays information only for the specified license package.
[-serial-number <text>] - Node Serial Number
If you use this parameter, the command displays information only for the node with the specified serial
number. The displayed entitlement risk and corrective action apply to the entire cluster.
[-node-name <text>] - Node Name
If you use this parameter, the command displays information only for the node with the specified name. The
displayed entitlement risk and corrective action apply to the entire cluster.
[-risk {high|medium|low|unlicensed|unknown}] - Entitlement Risk
If you use this parameter, the command displays information only for the license packages that have the
specified license entitlement risk.
[-action <text>] - Corrective Action
If you use this parameter, the command displays information only for the license packages which need the
specified corrective action to reduce entitlement risk.
[-is-licensed {true|false}] - Is Node-Locked License Present
If you use this parameter, the command displays information only for the license packages for which at least
one node in the cluster has a node-locked license. It also displays the nodes in the cluster which do not have a
node-locked license.
[-in-use {true|false}] - Usage Status
If you use this parameter, the command displays information only for the license packages with corresponding
features in use.
[-missing-serial-numbers <text>, ...] - Serial Numbers Missing a Node-Locked License
If you use this parameter, the command displays the packages for which the node with the specified serial
number does not have a node-locked license.
Examples
The following example displays the information for license package NFS. NFS is unlicensed in the cluster, and no action
is necessary to reduce the entitlement risk. The nodes, cluster1-01 and cluster1-02, are missing a node-locked license. The
serial numbers for both nodes are also displayed.
The following example displays the information for license package CIFS. The cluster has high entitlement risk for CIFS.
The command displays serial numbers for all nodes in the cluster. Both nodes are missing a node-locked CIFS license.
Node with serial number 1-81-0000000000000004073806282 has used CIFS feature in the past week, and the node with
serial number 1-81-0000000000000004073806283 has not used this feature in the past week.
Description
The system license license-manager check command checks the connectivity status of a node to the License Manager that the
node was configured to use. The status of a node might indicate that the License Manager is inaccessible. If so, the status
message contains additional text in parentheses. The text options and descriptions are as follows:
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter directs the system to display results for the License Manager configured for the specified node.
[-status <text>] - Status
This parameter directs the system to display results for the given status message.
Examples
The following examples check the status of the configured License Manager, before and after its license has expired:
Node: node1
LM status: License Manager (1.2.3.4:5678) is accessible.
Node Status
--------------- ------------------------------------------------------
node1 License Manager (1.2.3.4:5678) is accessible.
node2 License Manager (1.2.3.4:5678) is accessible.
2 entries were displayed.
Node: node1
LM status: License Manager (1.2.3.4:5678) is inaccessible (license_expired).
Node Status
--------------- ------------------------------------------------------
node1 License Manager (1.2.3.4:5678) is inaccessible (license_expired).
node2 License Manager (1.2.3.4:5678) is inaccessible (license_expired).
2 entries were displayed.
Availability: This command is available to cluster administrators at the advanced privilege level.
Description
The system license license-manager modify command modifies the configuration information for the License
Manager the system is using.
Parameters
[-host <text>] - License Manager Host
Sets the specified host, which can either be a fully qualified domain name (FQDN) or an IP address.
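Examples
No example is shown in this excerpt. An invocation would look like the following (the host value is the illustrative address used in the check examples in this section):

```
cluster1::> system license license-manager modify -host 1.2.3.4
```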
Description
The system license license-manager show command displays the information about the current License Manager
configuration.
Examples
The following example displays information about current License Manager configuration:
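The transcript for this example is not reproduced in this excerpt. A minimal sketch follows; the host shown is the illustrative address used in the check examples in this section, and the actual output may include additional fields:

```
cluster1::> system license license-manager show
Host: 1.2.3.4
```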
Description
Note: This command is deprecated and may be removed in a future release of Data ONTAP. Use the "system license
show-status" command.
This command displays the list of licensable packages in the system and their current licensing status.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-package <Licensable Package>] - Package Name
If you use this parameter, the command displays information only about the specified package.
Examples
The following example displays the license status of the cluster:
Related references
system license show-status on page 1258
Description
The system node halt command stops all activity on a node. You may supply a reason for the shutdown, which will be
stored in the audit log. You may also keep partner nodes from performing storage takeover during the shutdown.
If you enter this command without using this parameter, its effective value is false and storage takeover is
allowed. If you enter this parameter without a value, it is automatically set to true and storage takeover is
disabled during reboot.
[-dump | -d [true]] - Create a Core Dump
If this parameter is set to true, it forces a dump of the kernel core when halting the node.
[-skip-lif-migration-before-shutdown [true]] - Skip Migrating LIFs Away from Node Prior to Shutdown
If this parameter is specified, LIF migration prior to the shutdown will be skipped. However if LIFs on this
node are configured for failover, those LIFs may still failover after the shutdown has occurred. The default is
to migrate LIFs prior to the shutdown. In the default case, the command attempts to synchronously migrate
data and cluster management LIFs away from the node prior to shutdown. If the migration fails or times out,
the shutdown will be aborted.
[-ignore-quorum-warnings [true]] - Skip Quorum Check Before Shutdown
If this parameter is specified, quorum checks will be skipped prior to the shutdown. The operation will
continue even if there is a possible data outage due to a quorum issue.
[-ignore-strict-sync-warnings [true]] - Skip SnapMirror Synchronous Strict Sync Check Before Reboot
If this parameter is specified, the check for volumes that are in SnapMirror Synchronous relationships with
policy of type strict-sync-mirror will be skipped. The operation will continue even if there is a possible data
outage due to not being able to fully sync data.
Examples
The following example shuts down the node named cluster1 for hardware maintenance:
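The transcript for this example is not reproduced in this excerpt. Per the description, the invocation would be along these lines; the -reason parameter name is an assumption, since the full parameter list is not shown here:

```
cluster1::> system node halt -node cluster1 -reason "hardware maintenance"
```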
Related references
storage failover show on page 993
Parameters
-node {<nodename>|local} - Node
Specifies the node that owns the root aggregate that you wish to migrate. The value local specifies the
current node.
{ -disklist <disk path name>, ... - List of Disks for New Root Aggregate
Specifies the list of disks on which the new root aggregate will be created. All disks must be spares and owned
by the same node. Minimum number of disks required is dependent on the RAID type.
-raid-type {raid_tec|raid_dp|raid4} - RAID Type for the New Root Aggregate
Specifies the RAID type of the root aggregate. The default value is raid_dp.
| -resume [true]} - Resume a Failed Migrate Operation
Resumes a failed migrate-root operation if the new_root aggregate is created and the old root aggregate is in
the restricted state.
Examples
The command in the following example starts the root aggregate migration on node node1:
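The transcript for this example is not reproduced in this excerpt. A sketch using the documented parameters follows; the full command name system node migrate-root is an assumption (the heading is not shown here), and the disk names are illustrative:

```
cluster1::> system node migrate-root -node node1 -disklist 1.0.10,1.0.11,1.0.12,1.0.13,1.0.14
            -raid-type raid_dp
```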
Description
The system node modify command sets the attributes of a node.
The owner, location, and asset tag attributes are informational only, and do not affect the operation of the node or the cluster.
The cluster eligibility attribute marks a node as eligible to participate in a cluster. The epsilon attribute marks a node as the tie-
breaker vote if the cluster has an even number of nodes.
Any field of type <text> may be set to any text value. However, if the value contains spaces or other special characters, you must
enter it using double-quotes as shown in the example below.
Use the system node show command to display the field values that this command modifies.
Parameters
-node {<nodename>|local} - Node
This mandatory parameter specifies which node will have its attributes modified. The value "local" specifies
the current node.
[-owner <text>] - Owner
This optional text string identifies the node's owner. Fill it in as needed for your organization.
Examples
The following example modifies the attributes of a node named node0. The node's owner is set to "IT" and its location to
"Data Center 2."
cluster1::> system node modify -node node0 -owner "IT" -location "Data Center 2"
Related references
system node show on page 1274
Description
The system node reboot command restarts a node. You can supply a reason for the reboot, which is stored in the audit log.
You can also keep partner nodes from performing storage takeover during the reboot and instruct the rebooted node to create a
core dump.
Parameters
-node {<nodename>|local} - Node
Specifies the node that is to be restarted. The value "local" specifies the current node.
Examples
The command in the following example restarts the node named cluster1 for a software upgrade:
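The transcript for this example is not reproduced in this excerpt. Per the description, the invocation would be along these lines; the -reason parameter name is an assumption, since this excerpt lists only the -node parameter:

```
cluster1::> system node reboot -node cluster1 -reason "software upgrade"
```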
Description
The system node rename command changes a node's name. Both the node to be modified and the new name of that node
must be specified with the following parameters. This command is best executed from the node that is being renamed, using the
-node local parameter.
Use the system node show command to display the names of all the nodes in the current cluster.
Parameters
-node {<nodename>|local} - Node
This parameter specifies which node you are renaming. The value local specifies the current node.
• The name must contain only the following characters: A-Z, a-z, 0-9, "-" or "_".
• The first character must be one of the following characters: A-Z or a-z.
• The last character must be one of the following characters: A-Z, a-z or 0-9.
• The system reserves the following names: "all", "cluster", "local" and "localhost".
Examples
The following example changes the name of the node named node3 to node4.
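The transcript for this example is not reproduced in this excerpt. A sketch of the invocation follows; the -newname parameter name is an assumption, since this excerpt lists only the -node parameter:

```
cluster1::> system node rename -node node3 -newname node4
```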
Related references
system node show on page 1274
Description
The system node restore-backup command restores the backup configuration file that is stored on the partner node to the
specified target node in an HA pair. The backup configuration file is restored after Data ONTAP has been installed on the target
node.
The backup configuration file is stored on the HA partner node while the target node is down. After the target node has been
installed, the partner node sends this backup configuration file to the target node through the management network by using the
system node restore-backup command to restore the original configuration. This procedure is commonly used when
replacing the target node's boot device.
The target IP address should be the address of the target node used for netboot installation.
Parameters
-node {<nodename>|local} - Node
Specifies the partner node that sends the backup configuration file to the target node. The value "local"
specifies the current node.
-target-address <Remote InetAddress> - HA Partner IP Address
Specifies the IP address for the target node.
Description
The system node revert-to command reverts a node's cluster configuration to the given version. After the system node
revert-to command has finished, the revert_to command must be run from the nodeshell. The revert_to command reverts
Parameters
-node {<nodename>|local} - Node
Specifies the node that is to be reverted. The value local specifies the current node.
-version <revert version> - Data ONTAP Version
Specifies the version of Data ONTAP to which the node is to be reverted.
[-check-only [true]] - Capability Check
If set to true, this parameter specifies that the cluster configuration revert should perform checks to verify all
of the preconditions necessary for revert-to to complete successfully. Setting the parameter to true does not
run through the actual revert process. By default this option is set to false.
Examples
The command in the following example reverts the cluster configuration of a node named node1 to Data ONTAP version 9.5:
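The transcript for this example is not reproduced in this excerpt. Using the documented parameters, the invocation would be:

```
cluster1::> system node revert-to -node node1 -version 9.5
```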
Description
Use the system node run command to run certain commands from the nodeshell CLI on a specific node in the cluster. You
can run a single nodeshell command from the clustershell that returns immediately, or you can start an interactive nodeshell
session from which you can run multiple nodeshell commands.
Nodeshell commands are useful for root volume management and system troubleshooting. Commands that are available through
the nodeshell are scoped to a single node in the cluster. That is, they affect only the node specified by the value of the -node
parameter and do not operate on other nodes in the cluster. To see a list of available nodeshell commands, type '?' at the
interactive nodeshell prompt. For more information on the meanings and usage of the available commands, use the man
command in the nodeshell.
Only one interactive nodeshell session at a time can be run on a single node. Up to 24 concurrent, non-interactive sessions can
be run at a time on a node.
When running the nodeshell interactively, exit the nodeshell and return to the clustershell by using the exit command. If the
nodeshell does not respond to commands, terminate the nodeshell process and return to the clustershell by pressing Ctrl-D.
The system node run command is not available from the GUI.
Note: An alternate way to invoke the system node run command is to type run as a single command.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the node on which you want to run the nodeshell command. If you
specify only this parameter, the command starts an interactive nodeshell session that lasts indefinitely. You can
exit the nodeshell to the clustershell by pressing Ctrl-D or by typing the exit command.
Examples
The following example runs the nodeshell command sysconfig -V on a node named node1:
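A sketch of that invocation, assuming the nodeshell command is passed via a -command parameter (output omitted):

```
cluster1::> system node run -node node1 -command sysconfig -V
```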
The following example starts a nodeshell session on a node named node2 and then runs the nodeshell sysconfig -V
command. The system remains in the nodeshell after running the sysconfig -V command.
The following example starts a nodeshell session on a node named node1 and then runs two nodeshell commands, aggr
status first and vol status second. Use quotation marks and semicolons when executing multiple nodeshell
commands with a single run command.
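The two sessions described above can be sketched as follows; the -command parameter and the node2> nodeshell prompt are assumptions, and output is elided:

```
cluster1::> system node run -node node2
node2> sysconfig -V
node2>

cluster1::> system node run -node node1 -command "aggr status; vol status"
```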
Description
This command allows you to access the console of any remote node on the same cluster. The remote access is helpful in
situations where the node cannot be booted up or has network issues. This command establishes an SSH session with the
Service Processor of a remote node and accesses that node's console over the serial channel. This command works even if Data
ONTAP is not booted up on the remote node. You can get back to the original node by pressing Ctrl+D. This command works
only on SSH sessions and not on physical console sessions.
Examples
The following example accesses the console of node2 in the same cluster.
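A sketch of the invocation, assuming the clustershell command name system node run-console:

```
cluster1::> system node run-console -node node2
```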
Description
The system node show command displays information about the nodes in a cluster. You can limit output to specific types of
information and specific nodes in the cluster, or filter output by specific field values.
To see a list of values that are in use for a particular field, use the -fields parameter of this command with the list of field
names you wish to view. Use the system node modify command to change some of the field values that this command
displays.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-inventory ]
Use this parameter to display inventory information such as serial numbers, asset tags, system identifiers, and
model numbers.
| [-messages ]
Use this parameter to display system messages for each node.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information for node names that match this parameter value.
[-owner <text>] - Owner
Selects nodes that have owner values that match this parameter value.
Examples
This example displays the locations and model numbers of all nodes that are in physical locations that have names
beginning with "Lab":
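A sketch of that query; the -location field name and wildcard form are assumptions based on this manual's conventions:

```
cluster1::> system node show -location Lab* -fields location,model
```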
Related references
system node modify on page 1268
Description
The system node show-discovered command displays information about all the detectable nodes on the local cluster
network. This includes both nodes in a cluster and nodes that do not belong to a cluster. You can filter the output to show only
nodes that do not belong to a cluster or nodes that are in a cluster.
To see a list of values that are in use for a particular field, use the -fields parameter of this command with the list of field
names you wish to view.
Examples
The following example displays information about all discovered nodes in the cluster network:
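A sketch of the invocation, with output omitted:

```
cluster1::> system node show-discovered
```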
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-verbose ]
The -verbose parameter enables verbose mode, resulting in the display of more detailed output.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
When provided, the -node parameter specifies the nodes for which the memory error statistics are to be
displayed. When -node is not provided, the command applies to all nodes in the cluster.
[-id <integer>] - DIMM ID
This parameter refers to the DIMM ID. It can be used to look at the correctable ECC error count on a specific
DIMM.
[-name <text>] - DIMM Name
This parameter specifies the DIMM name for which the memory error statistics are to be displayed.
[-cecc <integer>] - Correctable ECC Error Count
This parameter can be used to get all the DIMMs with the specified correctable ECC error count.
[-merr {true|false}] - Multiple Errors on Same Address
Use this parameter with the value true to select errors that were seen multiple times on the same
physical address. It can also be used to list all the DIMMs with multiple errors on the same address.
[-timestamp <text>, ...] - Error Time
This specifies the time at which the error was seen on the DIMM.
[-addr <text>, ...] - Error Address
This specifies the physical address on which the error was seen.
Examples
Node: localhost
Description
The system node autosupport invoke command sends an AutoSupport message from a node.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node from which the AutoSupport message is sent.
[-message <text>] - Message Included in the AutoSupport Subject
Use this parameter to specify text sent in the subject line of the AutoSupport message. This parameter is not
available when the -type parameter is set to performance.
-type {test|performance|all} - Type of AutoSupport Collection to Issue
Use this parameter to specify the type of AutoSupport collection to issue. There is no default; you must
specify a -type.
• test - The message contains basic information about the node. When the AutoSupport message is received
by technical support, an e-mail confirmation is sent to the system owner of record. This enables you to
confirm that the message is being received by technical support.
• all - The message contains all collected information about the node.
• performance - The message contains only performance information about the node. This parameter has
effect only if performance AutoSupport messages are enabled, which is controlled by the -perf parameter
of the system node autosupport modify command.
Examples
The following example sends a test AutoSupport message from a node named node0 with the text "Testing ASUP":
cluster1::> system node autosupport invoke -node node0 -type test -message "Testing ASUP"
Related references
system node autosupport modify on page 1282
Description
The system node autosupport invoke-core-upload command sends two AutoSupport messages. The first
AutoSupport message contains the content relevant to this core upload; its subject line has the prefix
"CORE INFO:". The second AutoSupport message contains the core file specified by the -core-filename option; its
subject line has the prefix "CORE UPLOAD:". The command requires that the specified file be present
while the AutoSupport message is being transmitted.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node from which the AutoSupport message is sent. Defaults to localhost.
[-message <text>] - Message Included in the AutoSupport Subject
Use this parameter to specify the text in the subject line of the AutoSupport message.
[-uri <text>] - Alternate Destination for This AutoSupport
Use this parameter to send the AutoSupport message to an alternate destination. Only "http" and "https"
protocols are supported. If this parameter is omitted, the message is sent to all the recipients defined by the
system node autosupport modify command.
[-force [true]] - Generate and Send Even if Disabled
Use this parameter to generate and send the AutoSupport message even if AutoSupport is disabled on the
node.
[-case-number <text>] - Case Number for This Core Upload
Use this parameter to specify the optional case number to be associated with this AutoSupport message.
-core-filename <text> - The Existing Core Filename to Upload
Use this parameter to specify the core file to be included in the AutoSupport message. Use the system node
coredump show command to list the core files by name.
Examples
Use this command to list the core files from a node:
Use this command to invoke an AutoSupport message with the corefile core.4073000068.2013-09-11.15_05_01.nz:
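Sketches of the two invocations described above (output omitted; the core file name is the one cited in the text, and the node value is illustrative):

```
cluster1::> system node coredump show
cluster1::> system node autosupport invoke-core-upload -node local -core-filename core.4073000068.2013-09-11.15_05_01.nz
```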
Related references
system node autosupport modify on page 1282
system node coredump show on page 1306
Description
The system node autosupport invoke-performance-archive command sends an AutoSupport message with the
performance archives from a node. The command requires that the performance archives in the specified date range be present
while the AutoSupport message is being transmitted.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node from which the AutoSupport message is sent. The default setting is
localhost.
[-message <text>] - Message Included in the AutoSupport Subject
Use this parameter to specify the text in the subject line of the AutoSupport message.
[-uri <text>] - Alternate Destination for This AutoSupport
Use this parameter to send the AutoSupport message to an alternate destination. Only "file," "http," and "https"
protocols are supported. If this parameter is omitted, the message is sent to all of the recipients defined by
the system node autosupport modify command.
[-force [true]] - Generate and Send Even if Disabled
Use this parameter to generate and send the AutoSupport message even if AutoSupport is disabled on the
node.
[-case-number <text>] - Case Number for This Performance Archive Upload
Use this parameter to specify the optional case number to be associated with this AutoSupport message.
-start-date <MM/DD/YYYY HH:MM:SS> - Start Date for Performance Archive Dataset
Use this parameter to specify the start date for the files in the performance archive dataset to be included in the
AutoSupport message.
{ -end-date <MM/DD/YYYY HH:MM:SS> - End Date for Performance Archive Dataset
Use this parameter to specify the end date for the files in the performance archive dataset to be included in the
AutoSupport message. The end date should be within six hours of the start date.
| -duration <[<integer>h][<integer>m][<integer>s]>} - Duration of Performance Archive Dataset
Use this parameter with start-date to specify the duration of the performance archive dataset to be included in
the AutoSupport message. The maximum duration limit is six hours from the start date.
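Examples
A sketch of an invocation using -start-date with -duration; the node name and timestamp are illustrative, and the date format follows the parameter syntax above:

```
cluster1::> system node autosupport invoke-performance-archive -node node1 -start-date "11/21/2015 13:42:09" -duration 6h
```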
Description
The system node autosupport invoke-splog command sends an AutoSupport message with the Service Processor log
files from a specified node in the cluster.
Parameters
-remote-node {<nodename>|local} - Node
Use this parameter to specify the node from which Service Processor log files are to be collected.
[-log-sequence <integer>] - Log File Sequence Number
Use this parameter to specify the sequence number of the Service Processor log files to be collected. If this
parameter is omitted, the latest Service Processor log files are collected.
[-uri <text>] - Alternate Destination for This AutoSupport
Use this parameter to send the AutoSupport message to an alternate destination. Only "file," "http," and "https"
protocols are supported. If this parameter is omitted, the message is sent to all of the recipients defined by
the system node autosupport modify command.
[-force [true]] - Generate and Send Even if Disabled
Use this parameter to generate and send the AutoSupport message even if AutoSupport is disabled on the
node.
Examples
Use this command to invoke an AutoSupport message to include the Service Processor log files collected from node
cluster1-02.
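A sketch of the invocation, using the -remote-node parameter documented above:

```
cluster1::> system node autosupport invoke-splog -remote-node cluster1-02
```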
Related references
system node autosupport modify on page 1282
Parameters
-node {<nodename>|local} - Node
Note: The AutoSupport configuration will be modified on all nodes in the cluster, even if this parameter is
specified.
This parameter is ignored, but retained for CLI backward compatibility.
[-state {enable|disable}] - State
Use this parameter to specify whether AutoSupport is enabled or disabled on the node. The default setting is
enable. When AutoSupport is disabled, messages are not sent to anyone, including the vendor's technical
support, your internal support organization, or partners.
[-mail-hosts <text>, ...] - SMTP Mail Hosts
Use this parameter to specify up to five SMTP mail hosts through which the node sends AutoSupport
messages. This parameter is required if you specify e-mail addresses in the -to, -noteto, or -partner-address
parameters or if you specify smtp in the -transport parameter. Separate multiple mail hosts with commas with
no spaces in between. The AutoSupport delivery engine attempts to use these hosts for delivery in the order
that you specify.
You can optionally specify a port value for each mail server. A port value can be specified on none, all, or
some of the mail hosts. The port specification for a mail host consists of a colon (":") and a decimal value
between 1 and 65535, and follows the mail host name (for example, mymailhost.example.com:5678). The
default port value is 25.
Also, you can optionally prepend a user name and password combination for authentication to each mail
server. The format of the user name and password pair is user1@mymailhost.example.com; you will be
prompted for the password. The user name and password can be specified on none, all, or some of the mail
hosts.
The default value for this parameter is mailhost.
[-from <mail address>] - From Address
Use this parameter to specify the e-mail address from which the node sends AutoSupport messages. The
default is Postmaster@xxx where xxx is the name of the system.
[-to <mail address>, ...] - List of To Addresses
Use this parameter to specify up to five e-mail addresses to receive AutoSupport messages that are most
relevant for your internal organization. Separate multiple addresses with commas with no spaces in between.
For this parameter to have effect, the -mail-hosts parameter must be configured correctly. Individual trigger
events can change the default usage of this parameter using the -to parameter of the system node
autosupport trigger modify command. By default, no list is defined.
[-noteto <mail address>, ...] - (DEPRECATED) List of Noteto Addresses
Note: This parameter has been deprecated and might be removed in a future version of Data ONTAP.
Use this parameter to specify up to five addresses to receive a short-note version of AutoSupport messages that
are most relevant for your internal organization. Short-note e-mails contain only the subject line of the
AutoSupport message, which is easier to view on a mobile device. For this parameter to have effect, the
-mail-hosts parameter must be configured correctly. Individual trigger events can change the default usage
of this parameter using the -noteto parameter of the system node autosupport trigger modify
command. By default, no list is defined.
[-partner-address <mail address>, ...] - List of Partner Addresses
Use this parameter to specify up to five e-mail addresses to receive all AutoSupport messages including
periodic messages. This parameter is typically used for support partners. For this parameter to have effect, the
-mail-hosts parameter must be configured correctly. By default, no list is defined.
Examples
The following example enables AutoSupport on all nodes in the cluster with the following settings:
cluster1::> system node autosupport modify -state enable -mail-hosts smtp.example.com -from
alerts@node3.example.com -to support@example.com -support enable -transport https -noteto
pda@example.com -retry-interval 23m
The following examples show how to modify AutoSupport URLs when using IPv6 address literals:
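A hypothetical sketch only: the address is invented, and whether ONTAP accepts a bracketed IPv6 literal with a port in this value is an assumption, not confirmed by the text above:

```
cluster1::> system node autosupport modify -mail-hosts [2001:db8::25]:25
```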
Related references
system node autosupport trigger modify on page 1301
Description
The system node autosupport show command displays the AutoSupport configuration of one or more nodes.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-config ]
Use this parameter to display the retry interval, retry count, throttle, and reminder settings of all nodes in the
cluster.
| [-nht-performance ]
Use this parameter to display NHT and performance information about all nodes in the cluster.
| [-recent ]
Use this parameter to display the subject and time of the last AutoSupport message generated by each node in
the cluster.
| [-support-http ]
Use this parameter to display whether HTTP support is enabled for each node in the cluster, and identify the
transport protocol and the support proxy URL used by each node.
| [-support-smtp ]
Use this parameter to display whether SMTP (e-mail) support is enabled for each node in the cluster, and
identify the transport protocol and the "to" mail address used by each node.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display detailed information about the node you specify.
Examples
The following example displays the AutoSupport configuration for a node named node3:
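A sketch of the invocation, using the -node and -instance parameters documented above:

```
cluster1::> system node autosupport show -node node3 -instance
```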
Related references
system node autosupport trigger show on page 1302
system node autosupport history show on page 1294
system node autosupport manifest show on page 1298
Description
The system node autosupport status check show command displays the overall status of the AutoSupport subsystem.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node
Selects the nodes that match this parameter value. This parameter specifies the node whose status is being
displayed.
[-http-status {ok|warning|failed|not-run}] - Overall Status of AutoSupport HTTP/HTTPS Destinations
Selects the nodes that match this parameter value. This parameter specifies whether connectivity to the
AutoSupport HTTP destination was established.
Examples
The following example displays the overall status of the AutoSupport subsystem on a node named node2:
                            On Demand
Node             HTTP/HTTPS Server     SMTP       Configuration
---------------- ---------- ---------- ---------- -------------
node2            ok         ok         ok         ok
Description
The system node autosupport check show-details command displays the detailed status of the AutoSupport
subsystem. This includes verifying connectivity to your vendor's AutoSupport destinations by sending test messages and
providing a list of possible errors in your AutoSupport configuration settings.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node
Selects the check results that match this parameter value. This parameter specifies the node whose status is
being displayed.
[-check-type <Type of AutoSupport Check>] - AutoSupport Check Type
Selects the check results that match this parameter value. This parameter specifies the type of AutoSupport
check being performed.
[-status {ok|warning|failed|not-run}] - Status of the Check
Selects the check results that match this parameter value. This parameter specifies the result of this
AutoSupport check.
Examples
The following example displays the detailed status of the AutoSupport subsystem for a node named node2:
Node: node2
Category: http-https
Component: http-put-destination
Status: ok
Detail: Successfully connected to "support.netapp.com/put".
Component: http-post-destination
Status: ok
Detail: Successfully connected to "support.netapp.com/post".
Category: smtp
Component: mail-server
Status: ok
Detail: Successfully connected to "mailhost.netapp.com".
Component: mail-server
Status: ok
Detail: Successfully connected to "sendmail.domain.com".
Component: mail-server
Status: ok
Detail: Successfully connected to "qmail.domain.com".
Category: on-demand
Component: ondemand-server
Status: ok
Detail: Successfully connected to "support.netapp.com/aods".
Category: configuration
Component: configuration
Status: ok
Detail: No configuration issues found.
Description
The system node autosupport destinations show command displays a list of all message destinations used by
AutoSupport. The command provides you with a quick summary of all addresses and URLs that receive AutoSupport messages
from all nodes in the cluster.
Examples
This example displays all of the destinations in use by the current cluster. Each node uses the same destination for HTTP
POST, HTTP PUT, and e-mail notifications.
node2
https://asuppost.example.com/cgi-bin/asup.cgi
https://asupput.example.com/cgi-bin/asup.cgi
support@example.com
Description
The system node autosupport history cancel command cancels an active AutoSupport transmission. This command
is used to pause or abandon a long running delivery of an AutoSupport message. The cancelled AutoSupport message remains
available for retransmission using the system node autosupport history retransmit command.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which to cancel the AutoSupport message. The default setting is
localhost.
-seq-num <Sequence Number> - AutoSupport Sequence Number
Use this parameter to specify the sequence number of the AutoSupport message you want to cancel.
[-destination {smtp|http|noteto|retransmit}] - Destination for This AutoSupport
Use this parameter to specify the destination type for the AutoSupport message you want to cancel.
Examples
Use this command to cancel the AutoSupport message delivery with seq-num 10 via HTTP only:
cluster1::> system node autosupport history cancel -node local -seq-num 10 -destination http
Related references
system node autosupport history retransmit on page 1293
Description
The system node autosupport history retransmit command retransmits a locally stored AutoSupport message.
Support personnel might ask you to run this command to retransmit an AutoSupport message. You might also retransmit an
AutoSupport message if you run the system node autosupport history show command and notice that a message was
not delivered.
If you retransmit an AutoSupport message, and if support already received that message, the support system will not create a
duplicate case. If, on the other hand, support did not receive that message, then the AutoSupport system will analyze the
message and create a case, if necessary.
Use the system node autosupport history show command to display the 50 most recent AutoSupport messages, which
are available for retransmission.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node from which the AutoSupport message is sent.
-seq-num <Sequence Number> - AutoSupport Sequence Number
Use this parameter to specify the sequence number of the AutoSupport message to retransmit.
-uri <text> - Destination to Send this AutoSupport
Use this parameter to specify the HTTP, HTTPS, FILE, or MAILTO uniform resource identifier (URI) to
which the AutoSupport message is sent.
[-size-limit {<integer>[KB|MB|GB|TB|PB]}] - Transmit Size Limit for this AutoSupport.
Use this parameter to specify a size limit for the retransmitted AutoSupport message. If the message
information exceeds this limit, it is trimmed to fit the limit you specify. Omit the size limit or set it to 0 to
disable it, which is useful for retransmitting an AutoSupport message that was truncated by a mail or Web
server due to the default size limits.
Examples
The following example retransmits the AutoSupport message with sequence number 45 on the node "node1" to a support
address by e-mail.
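A sketch of that invocation; the mailto address reuses the support address shown elsewhere in this manual and is illustrative:

```
cluster1::> system node autosupport history retransmit -node node1 -seq-num 45 -uri mailto:support@example.com
```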
Related references
system node autosupport history show on page 1294
Description
The system node autosupport history show command displays information about the 50 most recent AutoSupport
messages sent by nodes in the cluster. By default, it displays the following information:
• Attempt count
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-delivery ]
Use this parameter to display destination information about each AutoSupport message.
| [-detail ]
Use this parameter to display trigger and subject information about each AutoSupport message.
| [-instance ]}
Use this parameter to display the following detailed information about all entries:
• Trigger event
• Delivery URI
• Last error
• Compressed Size
• Decompressed Size
• collection-failed - The AutoSupport collection failed. View the 'Last Error' field of this message for more
information.
• ignore - The AutoSupport message was processed successfully, but the trigger event is not configured for
delivery to the current destination type.
• re-queued - The AutoSupport message transmission failed, has been re-queued, and will be retried.
• transmission-failed - The AutoSupport message transmission failed, and the retry limit was exceeded.
• ondemand-ignore - The AutoSupport message was processed successfully, but the AutoSupport On
Demand server chose to ignore it.
Examples
The following example shows the first three results output by the history command. Note that "q" was pressed at the
prompt.
Description
The system node autosupport history show-upload-details command displays upload details of the 50 most
recent AutoSupport messages sent by nodes in the cluster. By default, it displays the following information:
• Destination
• Compressed Size
• Percentage Complete
• Rate of upload
• Time Remaining
Examples
The following example shows the first three results output by the history show-upload-details command. Note that "q"
was pressed at the prompt.
Description
The system node autosupport manifest show command reports what information is contained in AutoSupport
messages. The name and size of each file collected for the message is reported, along with any errors.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-content ]
Use this parameter to display detailed information about the content of the files contained in the report.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display information about only AutoSupport messages sent from the node you specify.
[-seq-num <Sequence Number>] - AutoSupport Sequence Number
Use this parameter to display information about only AutoSupport message content with the sequence number
you specify. Sequence numbers are unique to a node. Use this parameter with the -node parameter to display
information about an individual message.
[-prio-num <integer>] - Priority Order of Collection
Use this parameter to display information about only AutoSupport message content with the collection priority
you specify. Content is collected in order, by priority number.
[-subsys <subsys1,subsys2,...>] - Subsystem
Use this parameter to display information about only AutoSupport message content collected by the
AutoSupport subsystem you specify.
[-cmd-tgt <Execution domain of AutoSupport content>] - Execution Domain for Command
Use this parameter to display information about only AutoSupport message content produced in the execution
domain you specify.
[-body-file <text>] - The AutoSupport Content Filename for this Data
Use this parameter to display information about only AutoSupport message content stored in the body file
with the file name you specify.
[-cmd <text>] - Actual Data Being Collected
Use this parameter to display information about only AutoSupport message content produced by the D-Blade
command, BSD command, file, or XML table you specify.
• requested - The AutoSupport request has been added to the queue and is waiting processing by the
collector.
• no-such-table - The AutoSupport collector was unable to find the requested SMF table.
• collection-truncated-size-limit - AutoSupport data was truncated due to size limits, but partial
data is available.
• collection-skipped-size-limit - AutoSupport data was skipped due to size limits, and no data is
available.
• collection-truncated-time-limit - AutoSupport data was truncated due to time limits, but partial
data is available.
• collection-skipped-time-limit - AutoSupport data was skipped due to time limits, and no data is
available.
• delivery-skipped-size-limit - AutoSupport data was skipped at delivery time due to size limits.
• general-error - AutoSupport data collection failed. Additional information (if any) is in the Error
String field.
• completed - AutoSupport data collection is complete, and the AutoSupport message is ready for delivery.
• content-empty - AutoSupport content was collected successfully, but the output was empty.
Examples
This example displays the content of AutoSupport message number 372 on the node "node1".
cluster1::> system node autosupport manifest show -node node1 -seq-num 372
AutoSupport Collected
Node Sequence Body Filename Size Status Error
----------------- --------- -------------- ----------- --------- -----------
node1 372
SYSCONFIG-A.txt 1.73KB completed
OPTIONS.txt 29.44KB completed
software_image.xml 7.56KB completed
CLUSTER-INFO.xml 3.64KB completed
autosupport.xml 12.29KB completed
autosupport_budget.xml
7.01KB completed
autosupport_history.xml
46.52KB completed
X-HEADER-DATA.TXT 717.00B completed
SYSTEM-SERIAL-NUMBER.TXT
35.00B completed
cluster_licenses.xml
3.29KB completed
cm_hourly_stats.gz 151.4KB completed
boottimes.xml 56.86KB completed
rdb_txn_latency_stats_hrly.xml
39.31KB completed
rdb_voting_latency_stats_hrly.xml
3.43KB completed
14 entries were displayed.
This example shows how you can use parameters to limit output to specific fields of a specific AutoSupport message. This
is helpful when troubleshooting.
cluster1::> system node autosupport manifest show -node node5 -seq-num 842 -fields
body-file,status,size-collected,time-collected,cmd,cmd-tgt,subsys
node seq-num prio-num subsys cmd-tgt body-file cmd size-collected time-
collected status
---------- ------- -------- --------- ------- --------------- -------------- --------------
-------------- ---------
node5 842 0 mandatory dblade SYSCONFIG-A.txt "sysconfig -a" 16.44KB
256 completed
node5 842 1 mandatory dblade OPTIONS.txt options 29.67KB
3542 completed
node5 842 2 mandatory smf_table software_image.xml software_image 8.68KB
33 completed
node5 842 3 mandatory smf_table CLUSTER-INFO.xml asup_cluster_info 4.75KB
7 completed
node5 842 4 mandatory smf_table autosupport.xml autosupport 12.32KB
10 completed
node5 842 5 mandatory smf_table autosupport_budget.xml autosupport_budget 7.03KB
29 completed
node5 842 6 mandatory smf_table autosupport_history.xml autosupport_history
62.77KB 329 completed
Description
Use the system node autosupport trigger modify command to enable and disable AutoSupport messages for
individual triggers, and to specify additional subsystem reports to include if an individual trigger sends an AutoSupport
message.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node whose AutoSupport trigger configuration is modified.
-autosupport-message <Autosupport Message> - EMS Message
Use this parameter to specify the AutoSupport trigger to modify. AutoSupport triggers are EMS messages
whose names begin with "callhome.". However, for the purposes of this command, "callhome." is implied,
does not need to be entered, and will not be displayed in command output.
[-to {enabled|disabled}] - Deliver to AutoSupport -to Addresses
Use this parameter with the value "enabled" to enable sending AutoSupport messages to the configured "to"
addresses.
[-noteto {enabled|disabled}] - (DEPRECATED) Deliver to AutoSupport -noteto Addresses
Note: This parameter has been deprecated and might be removed in a future version of Data ONTAP.
Use this parameter with the value "enabled" to enable sending short notes to the configured "noteto" addresses.
[-basic-additional <subsys1,subsys2,...>, ...] - Additional Subsystems Reporting Basic Info
Use this parameter to include basic content from the additional subsystems you specify. Content is collected
from these subsystems in addition to the default list of subsystems.
[-troubleshooting-additional <subsys1,subsys2,...>, ...] - Additional Subsystems Reporting
Troubleshooting Info
Use this parameter to include troubleshooting content from the additional subsystems you specify.
Content is collected from these subsystems in addition to the default list of subsystems.
Examples
The following example enables messages to the configured "to" addresses from the battery.low trigger on the node
node1.
cluster1::> system node autosupport trigger modify -node node1 -autosupport-message battery.low -
to enabled
Related references
system node autosupport manifest show on page 1298
Description
The system node autosupport trigger show command displays what system events trigger AutoSupport messages.
When a trigger event occurs, the node can send a full AutoSupport message to one predefined destination and a short note to another destination. The full AutoSupport message contains detail for troubleshooting; the short note is intended for pager or SMS text messages.
Use the system node autosupport destinations show command to view available destinations.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-basic ]
Use this parameter to display which subsystem information is included as basic content when the
AutoSupport message is triggered.
| [-troubleshooting ]
Use this parameter to display which subsystem information is included as troubleshooting content when
the AutoSupport message is triggered.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display AutoSupport triggers only on the node you specify.
[-autosupport-message <Autosupport Message>] - EMS Message
Use this parameter to display only AutoSupport triggers with the name you specify. AutoSupport triggers are
EMS messages whose names begin with "callhome.". However, for the purposes of this command, "callhome."
is implied, does not need to be entered, and will not be displayed in command output.
[-to {enabled|disabled}] - Deliver to AutoSupport -to Addresses
Use this parameter with the value "enabled" to display only AutoSupport messages that send full messages to
the "to" address when triggered. Use this parameter with the value "disabled" to display only AutoSupport
messages that do not send full messages.
Examples
This example shows the first page of output from the command. Note that "q" was pressed at the prompt to quit.
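The invocation itself takes the following form (the paginated output is not reproduced here; pressing "q" at the prompt quits):

cluster1::> system node autosupport trigger show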
Description
The system node coredump delete command deletes a specified core dump. If the command is issued while the specified
core dump is being saved, the command prompts you before stopping the save operation and deleting the core dump.
Parameters
-node {<nodename>|local} - Node That Owns the Coredump
This specifies the node from which core files are to be deleted.
[-type {kernel|ancillary-kernel-segment|application}] - Coredump Type
This specifies the type of core file to be deleted. If the type is kernel, the specified kernel core file will be
deleted. If the type is application, the specified application core file will be deleted.
-corename <text> - Coredump Name
This specifies the core file that is to be deleted.
Examples
The following example deletes a core dump named core.101268397.2010-05-30.19_37_31.nz from a node named node0:
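Based on the names given in the description, the command would be:

cluster1::> system node coredump delete -node node0 -corename core.101268397.2010-05-30.19_37_31.nz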
Description
The system node coredump delete-all command deletes either all unsaved core dumps or all saved core files on a node.
You can specify whether saved core files or unsaved core dumps are deleted by using the optional -saved parameter. If the
command is issued while a core dump is being saved, the command prompts you before stopping the save operation and
deleting the core dump.
Parameters
-node <nodename> - Node That Owns the Coredump
This specifies the node from which core files or core dumps are to be deleted.
[-type {unsaved-kernel|saved-kernel|kernel|application|all}] - Type of Core to delete
This parameter specifies the type of core file to be deleted. If the type is unsaved-kernel, all unsaved kernel core dumps will be deleted. If the type is saved-kernel, all saved kernel core files will be deleted. If the type is kernel, all kernel core files and kernel core dumps will be deleted. If the type is application, all application core files will be deleted. If the type is all, all core files and core dumps will be deleted.
Examples
The following example deletes all unsaved kernel core dumps on a node named node0:
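Based on the description, the command would be:

cluster1::> system node coredump delete-all -node node0 -type unsaved-kernel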
Description
The system node coredump save command saves a specified core dump. If the node has already attempted to save the core dump the number of times specified by the -save-attempts parameter, the command prompts you before continuing. The -save-attempts parameter is set by invoking the system node coredump config modify command. A saved core dump can be uploaded to a remote site for support analysis; see the system node coredump upload command man page for more information.
Parameters
-node {<nodename>|local} - Node That Owns the Coredump
This specifies the node on which the core dump is located.
-corename <text> - Coredump Name
This specifies the core dump that is to be saved.
Examples
The following example saves a core dump named core.101268397.2010-05-30.19_37_31.nz on a node named node0:
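Based on the names given in the description, the command would be:

cluster1::> system node coredump save -node node0 -corename core.101268397.2010-05-30.19_37_31.nz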
Related references
system node coredump config modify on page 1313
system node coredump upload on page 1312
system node coredump save-all on page 1305
Description
The system node coredump save-all command saves all unsaved core dumps on a specified node. If the node has already attempted to save a core dump the number of times specified by the -save-attempts parameter, the command prompts you before continuing. The -save-attempts parameter is set by invoking the system node coredump config modify command.
Parameters
-node <nodename> - Node That Owns the Coredump
This specifies the node on which unsaved core dumps are to be saved.
Related references
system node coredump save on page 1305
Description
The system node coredump show command displays basic information about core dumps, such as the core dump name,
time of panic that triggered the core dump and whether the core file is saved. You can specify optional parameters to display
information that matches only those parameters. For example, to display the list of kernel core files, run the command with -type
kernel.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-system ]
If you specify this parameter, the command displays the following information:
• Node name
• Core dump ID
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node That Owns the Coredump
If you specify both this parameter and the -corename parameter, the command displays detailed information
about the specified core. If you specify this parameter by itself, the command displays information about the
core files on the specified node.
[-type {kernel|ancillary-kernel-segment|application}] - Coredump Type
This parameter specifies the type of core files to be displayed. If the type is kernel and the system supports
segmented core files, the command displays information about primary kernel core segment files. If the type is
kernel and the system does not support segmented core files, the command displays information about full
core files. If the type is ancillary-kernel-segment, the command displays information about ancillary kernel
core segment files. If the type is application, the command displays information about application core files. If
no type is specified, the command displays information about core files of type kernel or application.
Examples
The following examples display information about the core files:
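A representative invocation (output omitted here) is:

cluster1::> system node coredump show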
Related references
system node coredump status on page 1308
Description
The system node coredump status command displays status information about core dumps. The command output
depends on the parameters specified with the command. If a core dump is in the process of being saved into a core file, the
command also displays its name, the total number of blocks that are to be saved, and the current number of blocks that are
already saved.
You can specify additional parameters to display only information that matches those parameters. For example, to display
coredump status information about the local node, run the command with the parameter -node local.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-disks ]
If you specify this parameter, the command displays the following information:
• Node name
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays the following information:
• Node name
Examples
The following example displays core dump information about the node named node0:
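The invocation that produces this output would be:

cluster1::> system node coredump status -node node0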
Node: node0
State: idle
Space Available On Internal Filesystem: 132.1GB
Name of Core Being Saved: -
Total Number of Blocks in Core Being Saved: -
Number of Blocks saved: -
Type of core dump: spray
Number of Unsaved Complete Cores: 0
Number of Unsaved Partial Cores: 1
Space Needed To Save All Unsaved Cores: 4.81GB
Minimum Free Bytes On Internal Filesystem: 250MB
Related references
system node coredump show on page 1306
Description
This command triggers a Non-maskable Interrupt (NMI) on the specified node via that node's Service Processor, causing a dirty shutdown of the node. This operation forces a dump of the kernel core when halting the node. LIF migration or storage takeover occurs as normal in a dirty shutdown. This command differs from the -dump parameter of the system node shutdown, system node halt, and system node reboot commands in that it uses a control flow through the Service Processor of the remote node, whereas the -dump parameter uses a communication channel between the Data ONTAP instances running on the nodes. This command is helpful in cases where Data ONTAP on the remote node is hung or otherwise unresponsive. After the panicked node boots back up, the generated coredump can be viewed by using the system node coredump show command. This command works for a single node only, and the full name of the node must be entered exactly.
Parameters
-node {<nodename>|local} - Node
This parameter specifies the node for which you want to trigger a coredump.
Examples
The following example triggers a NMI via the Service Processor and causes node2 to panic and generate a coredump.
Once node2 reboots back up, the command system node coredump show can be used to display the generated
coredump.
Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
Warning: The Service Processor is about to perform an operation that will cause
a dirty shutdown of node "node2". This operation can
cause data loss. Before using this command, ensure that the cluster
will have enough remaining nodes to stay in quorum. To reboot or halt
a node gracefully, use the "system node reboot" or "system node halt"
command instead. Do you want to continue? {yes|no}: yes
Warning: This operation will reboot the current node. You will lose this login
session. Do you want to continue? {y|n}: y
cluster1::*>
cluster1::> system coredump show
Node:Type Core Name Saved Panic Time
--------- ------------------------------------------- ----- -----------------
node2:kernel
core.1786429481.2013-10-04.11_18_37.nz false 10/4/2013 11:18:37
Partial Core: false
Number of Attempts to Save Core: 0
Space Needed To Save Core: 3.60GB
1 entries were displayed.
cluster1::>
Related references
system node halt on page 1266
system node reboot on page 1269
system node coredump show on page 1306
Description
Attention: This command is deprecated and might be removed in a future release of Data ONTAP. Use "system node
autosupport invoke-core-upload" instead.
The system node coredump upload command uploads a saved core file to a specified URL. You should use this command
only at the direction of technical support.
Parameters
-node {<nodename>|local} - Node That Owns the Coredump
This specifies the node on which the core file is located.
[-type {kernel|ancillary-kernel-segment|application}] - Coredump Type
This specifies the type of core files to be uploaded. If the type is kernel, kernel core files will be uploaded. If the type is application, application core files will be uploaded.
-corename <text> - Coredump Name
This specifies the name of the core file that is to be uploaded.
Examples
The following example uploads a core file named core.07142005145732.2010-10-05.19_03_41.nz on a node named
node0 to the default location. The support case number is 2001234567.
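Based on the description, the command would be (the -casenum parameter follows the analogous system node coredump reports upload example; it is not listed in the Parameters section above):

cluster1::> system node coredump upload -node node0 -corename core.07142005145732.2010-10-05.19_03_41.nz -casenum 2001234567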
Related references
system node coredump config modify on page 1313
system node autosupport invoke-core-upload on page 1280
Description
The system node coredump config modify command modifies the cluster's core dump configuration.
Parameters
-node {<nodename>|local} - Node
This parameter specifies the node whose coredump configuration you want to modify.
[-sparsecore-enabled {true|false}] - Enable Sparse Cores
If you set this parameter to true, the command enables sparse cores. A sparse core omits all memory buffers
that contain only user data.
[-min-free {<integer>[KB|MB|GB|TB|PB]}] - Minimum Free Bytes On Root Filesystem
This parameter specifies the number of bytes that must remain available in the root file system after the core dump is saved. If the minimum number of bytes cannot be guaranteed, core dumps are not generated. The default setting is 250 MB.
[-coredump-attempts <integer>] - Maximum Number Of Attempts to Dump Core
This parameter specifies the maximum number of times the system attempts to generate a core dump when encountering repeated disk failures. The default setting is 2.
[-save-attempts <integer>] - Maximum Number Attempts to Save Core
This parameter specifies the maximum number of times the system attempts to save a core dump. The default setting is 2.
Examples
The following example sets the maximum number of core dump attempts to 5 and the maximum number of save attempts
to 5:
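Based on the description, the command would be (run against the local node for illustration):

cluster1::> system node coredump config modify -node local -coredump-attempts 5 -save-attempts 5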
Related references
system node autosupport invoke-core-upload on page 1280
Description
The system node coredump config show command displays basic information about a cluster's core dump configuration,
such as whether sparse cores are enabled, minimum number of free bytes on the root volume file system that need to be
available after saving the core files, maximum number of times the process attempts to generate a core dump when encountering
repeated disk failures, maximum number of times the process attempts to save a core dump, the URL to which core dumps are
uploaded, and whether core dumps are automatically saved when a node restarts.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays the coredump configuration information of the specified
node.
[-sparsecore-enabled {true|false}] - Enable Sparse Cores
If you specify this parameter, the command displays only the coredump information that matches the specified sparse core setting. A sparse core omits all memory buffers that contain only user data.
[-min-free {<integer>[KB|MB|GB|TB|PB]}] - Minimum Free Bytes On Root Filesystem
If you specify this parameter, the command displays only the core dump information that matches the
specified number of bytes that need to be made available in the root file system after saving the core dump.
Examples
The following example displays information about the cluster's core dump configuration:
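A representative invocation (output omitted here) is:

cluster1::> system node coredump config show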
Related references
system node autosupport invoke-core-upload on page 1280
Description
The system node coredump external-device save command saves a specified core dump to an external USB device
plugged into the port specified by the -device parameter.
External USB device requirements:
• A device formatted as FAT32 can be used to save a core dump smaller than 4GB.
• The command system node coredump show can be used to determine the size of the core dump.
Parameters
-node {<nodename>|local} - Node That Owns the Coredump
This specifies the node on which the core dump is located.
-device {usb0|usb1} - Device
This specifies the port on the node to which the external USB device is connected, for example: usb0.
-corename <text> - Coredump Name
This specifies the core dump that is to be saved.
Examples
The following example saves a core dump named core.101268397.2010-05-30.19_37_31.nz on node1 to an external USB
device in port usb0:
cluster1::> system node coredump external-device save -node node1 -device usb0 -corename core.
101268397.2010-05-30.19_37_31.nz
Related references
system node coredump show on page 1306
Availability: This command is available to cluster administrators at the advanced privilege level.
Description
The system node coredump external-device show command displays basic information about files on an external USB
device, such as the filename and size.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
-node {<nodename>|local} - Node That Owns the Coredump
This parameter selects the node that has files that are to be displayed on the external USB device.
[-device {usb0|usb1}] - Device
This parameter specifies the name of the external device, for example: usb0.
[-corename <text>] - Coredump Name
This parameter specifies the core dump file for which the information is displayed.
[-size {<integer>[KB|MB|GB|TB|PB]}] - Size of Core
If specified, the command displays information only about the core files that are of the specified size.
Description
The system node coredump reports delete command deletes the specified application core report.
Parameters
-node {<nodename>|local} - Node That Owns the Coredump
This specifies the node from which reports are to be deleted.
-reportname <text> - Report Name
This specifies the report that is to be deleted.
Examples
The following example shows how a report named notifyd.1894.80335005.2011-03-25.09_59_43.ucore.report is deleted
from a node named node0:
cluster1::> system node coredump reports delete -node node0 -reportname notifyd.
1894.80335005.2011-03-25.09_59_43.ucore.report
Description
The system node coredump reports show command displays basic information about application core reports, such as
the report name and time of the panic that triggered the application core dump. You can specify optional parameters to display
information that matches only those parameters. For example, to display the list of reports in the local node, run the command
with -node local.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
Examples
The following example displays information about the reports:
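A representative invocation (output omitted here) is:

cluster1::> system node coredump reports show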
Description
Attention: This command is deprecated and might be removed in a future release of Data ONTAP. See core report
information in the SmartSoft tool.
The system node coredump reports upload command uploads an application report to a specified URL. You should use
this command only at the direction of technical support.
Parameters
-node {<nodename>|local} - Node That Owns the Coredump
This specifies the node on which the report is located.
-reportname <text> - Report Name
This specifies the name of the report that is to be uploaded.
Examples
The following example shows how a report named notifyd.1894.80335005.2011-03-25.09_59_43.ucore.bz2 is uploaded
on a node named node0 to the default location. The support case number is 2001234567.
cluster1::> system node coredump reports upload -node node0 -corename notifyd.
1894.80335005.2011-03-25.09_59_43.ucore.bz2 -casenum 2001234567
Description
This command deletes a core segment.
Parameters
-node {<nodename>|local} - Node
This specifies the node on which to delete the core segments.
-segment <text> - Core Segment
This specifies the core segment to delete. The pathname is relative to the coredump directory. If a directory is
specified, all core segment files within it are deleted. If the directory is empty, it is deleted.
[-owner-node <text>] - Node That Owns the Core Segment File
This specifies the node that owns the core segment. Use this parameter only in takeover mode to delete a
partner's coredump segment.
Examples
This deletes all core segments in the directory, core.151708240.2012-01-11.05_56_52.
cluster1::> system node coredump segment delete -node node1 -segment core.
151708240.2012-01-11.05_56_52
Description
This command deletes all the core segments on a node.
Examples
This deletes all the core segments for node1.
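Assuming a delete-all subcommand parallel to system node coredump delete-all, the command would be:

cluster1::> system node coredump segment delete-all -node node1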
Description
This command displays the following information about core segments:
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
Displays the following details:
Examples
The example below displays the core segments on node1.
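Based on the description, the command would be (output omitted here):

cluster1::> system node coredump segment show -node node1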
The example below displays detailed information about a specific core segment file on node1.
cluster1::> system node coredump segment show -node node1 -segment core.
118049106.2012-01-05.17_11_11.ontap.nz -instance
Node: node1
Core Segment: core.118049106.2012-01-05.17_11_11.ontap.nz
Node That Owns the Core Segment File: node1
System ID of Node That Generated Core: 118049106
Md5 Checksum of the Compressed Data of the Core Segment: 1a936d805dcd4fd5f1180fa6464fdee4
Name of the Core Segment: ontap
Number of Segments Generated: 2
Time of Panic That Generated Core: 1/5/2012 12:11:11
Description
The system node environment sensors show command displays the following information:
• Node name
• Sensor name
• Sensor state
• Sensor value
• Sensor units
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information about the sensors on the specified node. If this parameter is specified with the -name parameter, the command displays information only about the specified sensor.
[-name <text>] - Sensor Name
Selects information about the sensors that have the specified name. If this parameter is specified with the -node parameter, the command displays information only about the specified sensor.
[-fru <text>] - FRU
Selects information about the sensors associated with the specified Field Replaceable Unit (FRU).
[-type {fan|thermal|voltage|current|battery-life|discrete|fru|nvmem|counter|minutes|
percent|agent|unknown}] - Sensor Type
Selects information about the sensors that have the specified sensor type. Possible values vary among
platforms but may include fan, temperature, thermal and voltage.
[-units <text>] - Value Units
Selects information about the sensors that have readings displayed in the specified units of measure. Possible
values vary among platforms but may include RPM, C and mV.
Examples
The following example displays information about all sensors on a cluster named cluster1:
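The invocation would be (output omitted here):

cluster1::> system node environment sensors show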
Description
The system node external-cache modify command can be used to modify the following attributes of external-cache for
a node:
• is-enabled
• is-rewarm-enabled
• is-mbuf-inserts-enabled
• pcs-size
• is-hya-enabled
Parameters
-node {<nodename>|local} - Node
This specifies the node on which the modifications need to be made.
[-is-enabled {true|false}] - Is Enabled?
Enables external-cache module (Flash Cache Family) on the storage system. Valid values for this option are
true and false. If external-cache hardware is present, then this option will enable external-cache functionality
in WAFL. If no hardware is present, this option will enable external-cache pcs (Predictive Cache Statistics).
The default value for this option is false.
[-is-rewarm-enabled {true|false}] - Is Rewarm On?
Specifies whether an external-cache module should attempt to preserve data across reboots. Valid values for
this option are true and false. This option applies only to cache hardware with persistent media. It does not
apply to Predictive Cache Statistics (PCS). Enabling this option will marginally increase the duration of
system boot and shutdown, but it will reduce or eliminate the time required for cache warming. The default
value for this option is determined by the cache hardware type. The option is disabled by default.
[-is-mbuf-inserts-enabled {true|false}] - Is Mbuf Inserts On?
Specifies whether the external-cache module allows insert of mbuf data as part of a network write. In rare
cases, inserting mbuf data may cause excessive CPU usage. We provide this workaround to disable the
behavior, if necessary. Do not change the value of this option unless directed to do so by technical support.
The data from the mbuf network writes can still be stored in the external cache, but only after a subsequent
disk read of that data.
[-pcs-size <integer>] - PCS Size
Controls the size of the cache emulated by external-cache PCS. Valid values for this option are integers
between 16 and 16383. This option is only used when PCS is enabled. The default value for this option is
chosen automatically based on the amount of memory in the controller, and the upper limit is further restricted
on controllers with smaller amounts of memory.
[-is-hya-enabled {true|false}] - Is HyA Caching Enabled?
Specifies whether the external-cache module allows caching of blocks targeted for hybrid aggregates. This
option is set to true by default when the external-cache is enabled.
Description
The system node external-cache show command displays external-cache information for each of the nodes available.
Parameters
{ [-fields <fieldname>, ...]
Valid values for this option are {node|is-enabled|is-rewarm-enabled|is-mbuf-inserts-enabled|
pcs-size|is-hya-enabled}. Specifying the value will display all entries that correspond to it.
| [-instance ]}
This option does not need an input value. Specifying this option will display the information about all the
entries.
[-node {<nodename>|local}] - Node
Specify this parameter to display external-cache parameters that match the specified node.
[-is-enabled {true|false}] - Is Enabled?
Valid values for this option are true and false. Specifying the value will display all entries that correspond to it.
[-is-rewarm-enabled {true|false}] - Is Rewarm On?
Valid values for this option are true and false. Specifying the value will display all entries that correspond to it.
[-is-mbuf-inserts-enabled {true|false}] - Is Mbuf Inserts On?
Valid values for this option are true and false. Specifying the value will display all entries that correspond to it.
[-pcs-size <integer>] - PCS Size
Valid values for this option are integers between 16 and 16383. Specifying the value will display all entries
that correspond to it.
[-is-hya-enabled {true|false}] - Is HyA Caching Enabled?
Valid values for this option are true and false. Specifying the value will display all entries that correspond to it.
Examples
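The invocation that produces output of the following form would be (node name taken from the output shown):

cluster1::> system node external-cache show -node node1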
Node: node1
Is Enabled: false
Is rewarm on: false
Is Mbuf inserts on: true
PCS size: 256
Is hya caching enabled: true
Description
The system node firmware download command downloads new system firmware to the boot device. A reboot followed by
the 'update_flash' command at the firmware prompt is required for the firmware to take effect.
Parameters
-node {<nodename>|local} - Node
This specifies the node or nodes on which the firmware is to be updated.
-package <text> - Package URL
This parameter specifies the URL that provides the location of the package to be fetched. Standard URL
schemes, including HTTP, HTTPS, FTP and FILE, are accepted. The FILE URL scheme can be used to
specify location of the package to be fetched from an external device connected to the storage controller.
Currently, only USB mass storage devices are supported. The USB device is specified as file://usb0/
<filename>. Typically, the file name is image.tgz. The package must be present in the root directory of the
USB mass storage device. The HTTPS URL scheme requires that you install the HTTPS server certificate on
the system by using the command "security certificate install -type server-ca".
Examples
The following example downloads firmware to node-01 from a web server:
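A representative invocation (web server URL assumed for illustration) is:

cluster1::> system node firmware download -node node-01 -package http://webserver.example.com/firmware.tgz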
Description
The system node image abort-operation command aborts software installation ("update") or download ("get")
operation on the specified node.
Parameters
-node {<nodename>|local} - Node
This specifies the node on which to abort the operation.
Examples
The following example aborts the software installation operation on a node named node1.
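Based on the description, the command would be:

cluster1::> system node image abort-operation -node node1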
Description
This command fetches a file from the specified URL and stores it in the /mroot/etc/software directory.
Parameters
[-node {<nodename>|local}] - Node
This parameter specifies the node that fetches and stores the package.
-package <text> - Package URL
This parameter specifies the URL that provides the location of the package to be fetched. Standard URL
schemes, including HTTP, HTTPS, FTP and FILE, are accepted. The FILE URL scheme can be used to
specify the location of the package to be fetched from an external device connected to the storage controller.
Currently, only USB mass storage devices are supported. The USB device is specified as file://usb0/
<filename>. Typically, the file name is image.tgz. The package must be present in the root directory of the
USB mass storage device. The HTTPS URL scheme requires that you install the HTTPS server certificate on
the system by using the command "security certificate install -type server-ca".
[-replace-package [true]] - Replace the Local File
Specifies whether an existing package is deleted and replaced with a new package. If you enter this command
without using this parameter, its effective value is false and an existing package is not replaced with the new
one. If you enter this parameter without a value, it is set to true and an existing package is replaced with the
new one.
[-rename-package <text>] - Rename the File
Use this parameter to enter a package name that is different than the file name in the URL.
[-background [true]] - Run in the background
This parameter allows the operation to run in the background. The progress of the operation can be checked
with the command system node image show-update-progress. If this command is entered without using this
parameter, its effective value is false and the operation runs in the foreground. If this parameter is used without
a value, it is set to true.
Examples
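The following example fetches a package from a web server to the local node, replacing any existing copy; the URL and file name are illustrative:
cluster1::> system node image get -node node1 -package http://example.com/downloads/image.tgz -replace-package true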
Description
The system node image modify command sets the default software image on a specified node. The default software image
is the image that is run when the node is started. A node holds two software images; when you set one as the default image, the
other image is automatically unset as the default. Conversely, if you unset a software image as the default, the other image is
automatically set as the default.
Examples
The following example sets the software image named image2 as the default image on a node named node0.
node::> system node image modify -node node0 -image image2 -isdefault true
Default Image Changed.
Description
The system node image show command displays information about software images. By default, the command displays the
following information:
• Node name
• Image name
• Software version
• Installation date
To display detailed information about a specific software image, run the command with the -node and -image parameters. The
detailed view adds information about the kernel image path and the root file system image path.
You can specify additional parameters to select specific information. For example, to display information only about software
images that are currently running, run the command with the -iscurrent true parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information about the software images on the specified node. If this parameter and the -image
parameter are both used, the command displays detailed information about the specified software image.
Examples
The following example displays information about the software images on a node named node1:
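The command takes the following form (output omitted):
cluster1::> system node image show -node node1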
Description
The system node image show-update-progress command displays the progress of a software-image update initiated by
using the system node image update command. The command displays progress until the update completes; you can also
interrupt it by pressing Ctrl-C.
Parameters
-node {<nodename>|local} - Node
This optionally specifies the name of a node whose image-update progress is to be displayed.
[-follow [true]] - Follow the Progress in the Foreground
Do not use background processing for this command. If you do not use this parameter, the value is true.
Examples
The following example displays image-update progress:
Related references
system node image update on page 1331
Description
The system node image update command downloads the software image from a specified location and updates the
alternate software image (that is, the image that is not currently running on the node). By default, validation of the software
image is not performed. Use the "-validate-only" parameter to validate the software image first, before performing the update on
the cluster nodes.
At the advanced privilege level, you can specify whether to disable version-compatibility checking.
Parameters
-node {<nodename>|local} - Node
This specifies the node on which the software image is located.
-package <text> - Package URL
This specifies the location from which the software image is to be downloaded. The location can be specified
in any of the following ways:
Note: The HTTPS URL scheme requires that you install the HTTPS server certificate on the system by
using the command "security certificate install -type server-ca".
• As a filename of a package left behind by a previous installation, or a package fetched using system
node image get. For example, image.tgz. Available packages can be displayed using system node
image package show.
• The FILE URL scheme can be used to specify the location of the package to be fetched from an external
device connected to the storage controller. Currently, only USB mass storage devices are supported. The
Examples
The following example updates the software image on a node named node0 from a software package located at ftp://
ftp.example.com/downloads/image.tgz:
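The command line would be (output omitted):
cluster1::> system node image update -node node0 -package ftp://ftp.example.com/downloads/image.tgz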
Related references
system node image get on page 1328
system node image package show on page 1333
system node image show-update-progress on page 1330
Parameters
-node {<nodename>|local} - Node
The package will be deleted from the repository belonging to the node specified with this parameter. The local
node is used as the default if this parameter is omitted.
-package <text> - Package File Name
This parameter specifies the package to be deleted.
Examples
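The following example deletes a previously fetched package from the local node's repository; the file name is illustrative:
cluster1::> system node image package delete -package image.tgz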
Description
The package show command displays details of the software packages residing on the storage controller.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects which node's packages are displayed. The local node is the default if this parameter is omitted.
[-package <text>] - Package File Name
This parameter specifies which package's information will be displayed.
Examples
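The following command lists the packages residing on the local node (output omitted):
cluster1::> system node image package show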
Description
The delete command deletes the specified file on the external device.
Parameters
-node {<nodename>|local} - Node
The file is deleted from the external device of the node specified with this parameter. If this parameter is
omitted, then the local node is used as the default node.
-package <text> - File Name
This parameter specifies the file to be deleted.
-device {usb0|usb1} - Device
This parameter specifies the name of the external device. Currently, only usb0 is supported.
Examples
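The following example deletes a file from the USB device attached to the local node; the file name is illustrative:
cluster1::> system node image package external-device delete -package image.tgz -device usb0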
Description
The external-device show command displays files residing on the external storage device.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter selects the node whose external storage device holds the files to be displayed. If this
parameter is omitted, then the local node is the default node.
[-package <text>] - File Name
This parameter specifies the file for which the information is displayed.
[-device {usb0|usb1}] - Device
This parameter specifies the name of the external device. Currently, only usb0 is supported.
Description
The system node hardware nvram-encryption modify command configures the encryption feature for the NVRAM or
NVMEM data that is destaged to non-volatile flash storage.
Note: This feature might be restricted in some countries due to local regulations concerning encrypted data.
Parameters
-node {<nodename>|local} - Node
Specifies the node containing the NVRAM or NVMEM subsystem.
[-is-enabled {true|false}] - Is Encryption Enabled
Specifies whether the NVRAM or NVMEM encryption is disabled or enabled.
Examples
The following commands enable or disable the NVRAM encryption:
cluster1::> system node hardware nvram-encryption modify -node node1 -is-enabled false
cluster1::> system node hardware nvram-encryption modify -node node1 -is-enabled true
Description
The system node hardware nvram-encryption show command displays the configuration of the encryption feature for
the NVRAM or NVMEM data that is destaged to non-volatile flash storage.
Examples
The following example displays information about the NVRAM encryption configuration on all nodes of the cluster:
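The command itself takes no required parameters (output omitted):
cluster1::> system node hardware nvram-encryption show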
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays detailed information about tape drives on the specified node.
[-device-id <text>] - Device ID
Selects information about the tape drive that has the specified device ID.
[-description <text>] - Description
Selects information about the tape drive or drives that have the specified description.
[-wwn <text>] - World Wide Name
Selects information about the tape drive that has the specified worldwide name.
[-serial-number <text>] - Serial Number
Selects information about the tape drive that has the specified serial number.
[-ndmp-path <text>, ...] - NDMP Path
Selects information about the tape drive or drives that have the specified NDMP path.
Examples
The following example displays information about all tape drives in the cluster:
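Assuming the command documented here is system node hardware tape drive show, a bare invocation displays all drives (output omitted):
cluster1::> system node hardware tape drive show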
Description
This command displays the following information about tape libraries:
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays detailed information about tape libraries on the specified node.
[-device-id <text>] - Device ID
Selects information about the tape library that has the specified device ID.
[-description <text>] - Description
Selects information about the tape library or libraries that have the specified description.
[-wwn <text>] - World Wide Name
Selects information about the tape library that has the specified worldwide name.
[-serial-number <text>] - Serial Number
Selects information about the tape library that has the specified serial number.
[-ndmp-path <text>] - NDMP Path
Selects information about the tape library or libraries that have the specified NDMP path.
Examples
The following example displays information about all tape libraries attached to the cluster:
Description
The system node hardware unified-connect modify command changes the adapter configuration. Any changes to the
adapter mode or type will require a reboot for the changes to take effect. The adapter must also be offline before you can make
any changes.
The adapter argument is in the form Xy where X is an integer and y is a letter. For example: 4a
For a target adapter, use the network fcp adapter modify command to bring the adapter offline.
For an initiator adapter, use the system node run local storage disable adapter command to take the adapter
offline.
The -mode parameter refers to the mode of the adapter and can be either fc or cna.
The -type parameter refers to the FC-4 type of the adapter and can be initiator, target, or fcvi.
The -force parameter suppresses confirmation prompts.
Note: The adapter type fcvi is supported only on platforms with FCVI adapters.
Parameters
-node {<nodename>|local} - Node
Specifies the node of the adapter.
-adapter <text> - Adapter
Specifies the adapter.
[-mode | -m {fc|cna}] - Configured Mode
Specifies the mode.
[-type | -t {initiator|target|fcvi}] - Configured Type
Specifies the FC-4 type.
[-force | -f [true]] - Force
Suppresses warnings and confirmation prompts.
Examples
cluster1::> system node hardware unified-connect modify -node node1 -adapter 0d -mode cna
Related references
network fcp adapter modify on page 327
Description
This command manages Fibre Channel and converged networking adapters used by the storage subsystem. Use the command to
show the current mode and FC-4 type of adapters or the capabilities of adapters.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-capability ]
If this parameter is specified, the command displays the capabilities of the adapters.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays information about Fibre Channel and converged
networking adapters on the specified node.
[-adapter <text>] - Adapter
If this parameter is specified, the command displays information about the specified adapter.
[-current-mode {fc|cna}] - Current Mode
If this parameter is specified, the command displays adapters configured to the specified mode.
[-current-type {initiator|target|fcvi}] - Current Type
If this parameter is specified, the command displays adapters configured to the specified FC-4 type.
[-pending-mode {fc|cna}] - Pending Mode
If this parameter is specified, the command displays adapters configured to the specified mode on the next
reboot.
[-pending-type {initiator|target|fcvi}] - Pending Type
If this parameter is specified, the command displays adapters configured to the specified FC-4 type on the next
reboot.
[-status-admin <text>] - Administrative Status
If this parameter is specified, the command displays adapters with the specified status.
[-supported-modes {fc|cna}, ...] - Supported Modes
The list of modes that the adapter supports.
[-supported-fc-types {initiator|target|fcvi}, ...] - Supported FC Types
The list of FC-4 types the adapter supports when configured into fc mode.
[-supported-cna-types {initiator|target|fcvi}, ...] - Supported CNA Types
The list of FC-4 types the adapter supports when configured into cna mode.
Examples
The following example displays information about all Fibre Channel and converged networking adapters in the cluster:
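A bare invocation of the show counterpart of the modify command above displays all adapters (output omitted):
cluster1::> system node hardware unified-connect show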
Description
The system node internal-switch show command is used to display the internal switch state information and the link
status.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to specify the node the switch resides on.
[-switch-id <integer>] - Switch
Use this parameter to specify the switch ID. For example, 1.
[-port-id <integer>] - Port
Use this parameter to specify the port ID. For example, 0.
[-port-name <text>] - Port Name
Use this parameter to specify the port name. For example, e0M.
[-auto-admin <Auto-negotiation setting>] - Auto-Negotiation Administrative
Use this parameter to show the auto-negotiation administrative setting: 'enable' or 'disable'.
[-auto-op <Auto-negotiation setting>] - Auto-Negotiation Operational
Use this parameter to show the auto-negotiation operational setting: 'unknown', 'complete', 'incomplete',
'failed' or 'disabled'.
Examples
The example shows the attributes of the internal switch 0 on the node Node1.
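The command line would be (output omitted):
cluster1::> system node internal-switch show -node Node1 -switch-id 0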
Description
The system node internal-switch dump stat command is used to display the counter information of the internal
switch ports.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
Examples
The following example shows partial counter information of the internal switch 0 on Node1
Description
The system node nfs usage show command displays the NFS usage information in the local node. The display output
shows the number of RPC calls received per protocol on the local node. Usage is collected whenever there is any NFS traffic.
These values are not persistent and will reset when the node reboots. This command is used in conjunction with the system
node nfs usage reset command in the NFS licensing callback.
The following example displays the NFS usage information with v3 usage.
The following example displays the NFS usage information with v4 usage.
Description
This command switches on the power of the main controller of the specified node. This command works for a single node only
and the full name of the node must be entered exactly.
Parameters
-node {<nodename>|local} - Node
This parameter specifies the node whose power you want to switch on.
Examples
The following example switches on the power of node2.
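Assuming the command documented here is system node power on, the session begins as follows at the advanced privilege level:
cluster1::*> system node power on -node node2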
Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
cluster1::*>
cluster1::*>
Description
This command displays the power status of the main controller in each node across the cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This optional parameter specifies the name of a node for which information is to be displayed. If this
parameter is not specified, the command displays information about all nodes in the cluster.
[-status {on|off}] - Current Power Status
If the -status parameter is specified, the command only lists information about the node with the power status
value you enter.
Examples
The following example displays power status of all the nodes in cluster1.
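Assuming the command documented here is system node power show, the display is produced by:
cluster1::> system node power show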
Node Status
---------------- -----------
node1 on
node2 on
2 entries were displayed.
cluster1::>
Parameters
-node <nodename> - Owner of the Root-mount
The node name where the root-mount will be created.
-root-node <nodename> - Root-mount Destination Node
The node name that the root-mount will access.
Examples
The following example shows the creation of a root-mount from cluster1::nodeA to cluster1::nodeB and the
verification of the successful completion.
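The commands would take the following form (output omitted):
cluster1::> system node root-mount create -node nodeA -root-node nodeB
cluster1::> system node root-mount show -node nodeA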
Related references
system node root-mount show on page 1347
system node root-mount delete on page 1346
Description
The system node root-mount delete command removes a root-mount from one node in the cluster to another node's root
volume. The root-mount is marked for immediate deletion by a background task. Use the system node root-mount show
command to view the current status of root-mount or verify task completion.
Parameters
-node <nodename> - Owner of the Root-mount
The node which has the mount.
-root-node <nodename> - Root-mount Destination Node
The node accessed by the mount.
Examples
This example shows the deletion of a root-mount from cluster1::nodeA to cluster1::nodeB and the verification of
the command's successful completion.
Related references
system node root-mount show on page 1347
system node root-mount create on page 1345
Description
The system node root-mount show command displays the status of current root-mounts from any node to another node's
root volume. These root-mounts are used by cluster services to access data on other nodes in the cluster. These root-mounts are
not pre-created, but are created as they are needed. They can also be manually created or deleted.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Owner of the Root-mount
Selects information about root-mounts that exist on the specified node.
[-root-node <nodename>] - Root-mount Destination Node
Selects information about root-mounts that connect to the specified node.
[-create-time <MM/DD/YYYY HH:MM:SS>] - Mount Creation Time
Selects information about root-mounts that were created at the specified time.
[-state <Mount State>] - State of the Root-Mount
Selects information about root-mounts that have the specified state. The states are:
• initializing: A root-mount was found and needs testing to determine the correct state.
Examples
The following example shows the default state of the root-mounts on a cluster that is not using root-node services:
The following example displays the root-mounts that exist for a cluster that has nodeA mounted to nodeB, and nodeB
mounted to nodeA:
Related references
system node root-mount create on page 1345
system node root-mount delete on page 1346
Description
The system node upgrade-revert show command displays information about the status of upgrades or reversions. If an
upgrade has failed, this command enables you to determine which phase of the upgrade contains the failed upgrade task and the
reason for the failure.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
• stopped - Upgrade process stopped due to node or management application restart. Use the system node
upgrade-revert upgrade command to complete the upgrade manually.
• waiting - Upgrade process is waiting for the replication database to come online or for applications to be
stable. If the RDB is not online, check network connectivity using cluster show and cluster ping-
cluster to ensure that all nodes are healthy and in communication.
Examples
The following example shows typical output for a cluster with two nodes. Status messages for each phase display
information about the tasks in that phase.
Related references
system node upgrade-revert upgrade on page 1350
cluster show on page 26
cluster ping-cluster on page 20
Description
The system node upgrade-revert upgrade command manually executes an upgrade. Use this command to execute an
upgrade after issues that caused an upgrade failure are resolved. If the upgrade is successful, no messages display.
Before the command executes upgrades, it checks the configuration of the nodes in the cluster. If no upgrades are needed, the
command displays a message and does not execute any upgrades.
Parameters
-node {<nodename>|local} - Node
Specifies the node that is to be upgraded. The value local specifies the current node.
Examples
This example shows command output of a node named node0 if node configuration is current.
Description
The system node virtual-machine show-network-load-balancer command displays the list of network load
balancer probe ports for each ONTAP node in the cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node
Represents the name of the ONTAP node for which information is to be displayed. If this parameter is not
specified, the command displays information about all nodes in the cluster.
[-vserver-name <text>] - Vserver Name
Vserver name.
[-lif-name <text>] - ONTAP LIF Name
ONTAP logical interface name.
[-probe-port <integer>] - Probe Port
A TCP port which is regularly probed by the network load balancer. When the TCP port is healthy and open,
the network load balancer will continue sending traffic to an associated network route. When unhealthy, the
network load balancer will redirect all traffic intended for this route to an alternate route.
[-last-probe-time <MM/DD/YYYY HH:MM:SS>] - Last Probe Time
The timestamp of the most recent health probe request on this TCP port.
[-remove-listener {true|false}] - Remove listener for This LIF
Whether or not ONTAP has programmatically told the network load balancer to stop listening on the health
probe associated with this LIF.
[-active {true|false}] - Actively receiving Health Probes
Whether or not the network load balancer has received a health probe request on this TCP port, within the
expected timeframe.
Examples
The following example displays probe ports for each node in the cluster.
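The command takes no required parameters (output omitted):
cluster1::> system node virtual-machine show-network-load-balancer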
Description
The system node virtual-machine disk-object-store create command adds an object store container to a node's
configuration. All objects within the container will be added as disks to the specified node.
Parameters
-node <nodename> - ONTAP Node Name
Specifies the name of the ONTAP node to which the object store container will be added.
-object-store-name <object store name> - ONTAP Name for this Object Store Config
Specifies the name that will be used to identify the object store configuration.
-server <text> - Fully Qualified Domain Name of the Object Store Server
Specifies the object store server where the container is hosted.
-port <integer> - Port Number of the Object Store
Specifies the port number to connect to the object store server.
-container-name <text> - Container Name
Specifies the name of the container to be added.
-azure-account <text> - Azure Storage Account
Specifies the Azure storage account.
-azure-private-key <text> - Azure Storage Account Access Key
Specifies the access key required to authenticate requests to the Azure object store.
[-update-partner [true]] - Update HA Partner
Specify this parameter when the system is running in an HA configuration.
Examples
The following example adds a container to the specified node.
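A representative invocation follows; the server, container, account, and key values are illustrative placeholders:
cluster1::> system node virtual-machine disk-object-store create -node node1 -object-store-name store1 -server myaccount.blob.core.windows.net -port 443 -container-name container1 -azure-account myaccount -azure-private-key <access-key> -update-partner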
Description
The system node virtual-machine disk-object-store delete command removes an object store container from a
node's configuration.
Parameters
-node <nodename> - ONTAP Node Name
Specifies the name of the ONTAP node from which the object store container will be removed.
-object-store-name <object store name> - ONTAP Name for this Object Store Config
Specifies the name that will be used to identify the object store configuration.
[-update-partner {true|false}] - Update HA Partner
Specify this parameter when the system is running in an HA configuration.
Examples
The following example removes a container from the specified node.
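A representative invocation follows; the configuration name is illustrative:
cluster1::> system node virtual-machine disk-object-store delete -node node1 -object-store-name store1 -update-partner true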
Description
The system node virtual-machine disk-object-store modify command updates one or more object store
configuration parameters.
Parameters
-node <nodename> - ONTAP Node Name
Specifies the name of the ONTAP node for which the object store configuration will be modified.
-object-store-name <object store name> - ONTAP Name for this Object Store Config
Specifies the name that will be used to identify the object store configuration.
[-server <text>] - Fully Qualified Domain Name of the Object Store Server
This optional parameter specifies the new Fully Qualified Domain Name (FQDN) of the same object store
server.
Examples
The following example updates the stored private key for an Azure container on the specified node.
Description
The system node virtual-machine disk-object-store show command displays the list of object store containers
that contain each node's disks.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - ONTAP Node Name
Represents the name of the ONTAP node for which information is to be displayed. If this parameter is not
specified, the command displays information about all nodes in the cluster.
[-object-store-name <object store name>] - ONTAP Name for this Object Store Config
Selects object store configurations with the specified name.
[-server <text>] - Fully Qualified Domain Name of the Object Store Server
Selects containers on the specified server.
[-port <integer>] - Port Number of the Object Store
Selects containers attached on the specified port.
[-container-name <text>] - Container Name
Selects containers with the specified name.
[-azure-account <text>] - Azure Storage Account
Selects containers in the specified Azure storage account.
[-alive {true|false}] - Is Server Alive
Selects containers based on their aliveness state, as seen from the ONTAP node.
Description
The system node virtual-machine hypervisor modify-credentials command is used to set the IP address of
the hypervisor on which the node is running, or the vSphere credentials (that is, -new-username or -new-password).
Parameters
-node {<nodename>|local} - Node
Name of the Data ONTAP node running in a virtual machine. It is a required field and the node must exist in
the cluster.
[-new-server <text>] - New Hypervisor IP Address
New vSphere server controlling this virtual machine. It can be either an IP address or (if name resolution is
enabled) a hostname.
[-new-username <text>] - New Hypervisor Username
New vSphere username for the -new-server specified above.
[-new-password <text>] - New Hypervisor Password
New vSphere password for the -new-server specified above.
Examples
The following example sets the IP address and the credentials of the vSphere server on which the node is running.
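A representative invocation follows; the address and credentials are illustrative placeholders:
cluster1::> system node virtual-machine hypervisor modify-credentials -node node1 -new-server 192.0.2.10 -new-username admin -new-password <password>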
Description
The system node virtual-machine hypervisor show command displays information for each hypervisor that is
hosting a Data ONTAP virtual machine. The output contains the hypervisor-specific information, such as host name and IP
address, as well as network configuration details. The command only scans hypervisors on which Data ONTAP virtual machines
are installed. To filter command output, specify any number of optional fields listed below.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
The name of the Data ONTAP node running in a virtual machine for which information is to be displayed. If
this optional parameter is not specified, the command displays information about all nodes in the cluster.
[-vm-uuid <UUID>] - UUID of the Virtual Machine
The hypervisor-supplied unique ID for this virtual machine. This optional parameter selects information about
the hypervisor on which the Data ONTAP virtual machine with the specified UUID is running. Because the UUID is
unique per host, using the -node parameter is an easier way to select the same information.
[-vmhost-bios-release-date <text>] - Release Date for the Hypervisor BIOS
The release date for the currently running hypervisor BIOS. This optional parameter selects information about
the hypervisors that have the specified BIOS release date.
[-vmhost-bios-version <text>] - Current BIOS Version of the Hypervisor Physical Chassis
The current BIOS version of the hypervisor physical chassis. This optional parameter selects information
about the hypervisors that are running with the specified BIOS version.
[-vmhost-boot-time <text>] - Time When Hypervisor was Last Booted
The time when the hypervisor was last booted. This optional parameter selects information about the
hypervisors which were last booted at the specified boot time.
[-vmhost-cpu-clock-rate <integer>] - Speed of the Hypervisor CPU Cores (MHz)
The speed of the hypervisor CPU cores. This optional parameter selects information about the hypervisors that
are running with the specified CPU clock rate.
[-vmhost-cpu-core-count <integer>] - Number of Physical CPU Cores on the Hypervisor
The number of physical CPU cores on the hypervisor. Physical CPU cores are the processors contained by a
CPU package. This optional parameter selects information about the hypervisors that have the specified
number of CPU cores.
[-vmhost-cpu-socket-count <integer>] - Number of Physical CPU Packages on the Hypervisor
The number of physical CPU packages on the hypervisor. Physical CPU packages are chips that contain one or
more processors. Processors contained by a package are also known as CPU cores; for example, one dual-core
package comprises one chip that contains two CPU cores. This optional parameter selects information
about the hypervisors that have the specified number of CPU packages.
Hypervisor Info
--------------------
Hardware Vendor: VMware, Inc.
Model: VMware Virtual Platform
Software Vendor: Unknown
Hypervisor: VMware ESX 4.1.0 build-12345
Host Name: myesx.example.com
Last Boot Time: 2014-01-01T01:23:45.678901-23:45
Host UUID: 00000000-0000-0000-0000-0012a3456789
BIOS Version: S1234.5.6.7.8.901234567890
BIOS Release Date: 2013-01-01T00:00:00Z
CPU Packages: 2
CPU Cores: 12
CPU Threads: 24
Processor ID: 0000:0000:0000:0010:0010:0100:1100:0010
Processor Type: Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
CPU MHz: 2925
Memory Size: 4227858432
IPv4 Configuration: IP Address: 192.168.0.1
Netmask: 255.255.255.0
Gateway: 192.168.0.1
Hypervisor Info
--------------------
Hardware Vendor: VMware, Inc.
Model: VMware Virtual Platform
Software Vendor: Unknown
Error: ServerFaultCode:
InvalidLoginFault type='InvalidLogin'
Related references
system node virtual-machine instance show-system-disks on page 1362
Description
The system node virtual-machine hypervisor show-credentials command displays the current vSphere
authentication information (except the password). This consists of the vSphere server and username. The vSphere password is
not displayed for security reasons. vSphere authentication information is required by the system node virtual-machine
hypervisor show -node <node>, system node virtual-machine instance show-system-disks -node <node>,
and storage disk show -virtual-machine-disk-info -node <node> commands to gather information about the
hypervisor. The command output normally contains:
• Node
• vSphere Username
If the credential check fails or the credentials are incorrect, the command displays an additional Error field:
• Node
• vSphere Username
• Error
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This optional parameter represents the name of the Data ONTAP node running in a virtual machine for which
information is to be displayed. If this parameter is not specified, the command displays information about all
nodes in the cluster.
[-server <text>] - Hypervisor IP Address or Hostname
Use this parameter to only display the Data ONTAP nodes in the cluster whose vSphere server matches this
value.
[-username <text>] - Hypervisor Username
Use this parameter to only display the Data ONTAP nodes in the cluster whose vSphere username matches
this value.
[-are-credentials-correct {true|false}] - Credentials Correct?
Get a list of Data ONTAP nodes running with either incorrect (false) or correct (true) vSphere credentials.
[-error <text>] - Error
Get a list of nodes with the specified error. This parameter is most useful when entered with wildcards.
Examples
The following example shows the vSphere server and vSphere username. It also displays whether the server address or its
credentials are correct and displays an error if they are not.
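cluster1::> system node virtual-machine hypervisor show-credentials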
Related references
system node virtual-machine hypervisor show on page 1356
system node virtual-machine instance show-system-disks on page 1362
system node virtual-machine hypervisor modify-credentials on page 1355
Description
The system node virtual-machine instance show command displays virtual machine information. With this
information you can determine the relationship between a Data ONTAP node and its associated virtual machine instance
running within a cloud provider. Several other details about the virtual machine can be extracted as well, such as the cloud
provider account ID to which it belongs. To filter command output, specify any number of optional fields listed below.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This optional parameter represents the name of the Data ONTAP node running in a virtual machine for which
information is to be displayed. If this parameter is not specified, the command displays information about all
nodes in the cluster.
[-instance-id <text>] - ID of This Instance
Selects the nodes that match this parameter value. A cloud provider-supplied unique instance ID for this
virtual machine, for example "i-a9d42f89" or "db00a7755a5e4e8a8fe4b19bc3b330c3.1".
[-account-id <text>] - ID of This Account
Selects the nodes that match this parameter value. The cloud provider-associated account ID for this virtual
machine. This parameter is usually associated with a cloud provider login ID and password.
[-image-id <text>] - ID Of the Image in Use on This Instance
Selects the nodes that match this parameter value. The image ID installed on this virtual machine instance. It
identifies a pre-defined template of a computing device's software environment. It contains the operating system.
Examples
The following examples illustrate typical output from the system node virtual-machine instance show
command for a virtual machine running in a cloud provider environment.
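cluster1::> system node virtual-machine instance show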
Description
The system node virtual-machine instance show-system-disks command displays information about the system
disks (non-data disks) attached to the virtual machine. Data disk information is available using the command storage disk
show -virtual-machine-disk-info.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
Selects disk information for nodes that match this parameter.
Examples
The following example shows typical output from the command.
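cluster1::> system node virtual-machine instance show-system-disks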
Related references
storage disk show on page 950
Description
The system script delete command deletes files that contain CLI session records. Use the system script show
command to display saved CLI sessions.
Parameters
-username <text> - Log Owner Username
Use this parameter to specify the name of the user whose CLI session record files are deleted. The default is
the username of the logged-in user.
-filename <text> - Log Filename
Use this parameter to specify the names of CLI session record files to delete.
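Examples
The following example deletes a CLI session record file (the file name session1 is illustrative):
cluster1::> system script delete -filename session1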
Related references
system script show on page 1364
Description
The system script show command displays information about files that contain records of CLI sessions.
For security reasons, the command normally displays only the script files created by the logged in user. Administrative users can
display all log files using the -user parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-user ]
Use this parameter to display all script files created by all users, along with the username associated with each
file.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-username <text>] - Log Owner Username
Use this parameter to display information only about files saved by the user you specify. The default username
is that of the logged in user.
[-filename <text>] - Log Filename
Use this parameter to display information only about files that have the file name you specify.
[-size-limit {<integer>[KB|MB|GB|TB|PB]}] - Logfile Size Limit
Use this parameter to display information only about files that have the size limit you specify.
[-state <State of CLI session log>] - Current State
Use this parameter to display information only about files that have the state you specify. Valid values for this
parameter are open-and-logging, file-full, and file-closed.
[-size {<integer>[KB|MB|GB|TB|PB]}] - Current Logfile Size
Use this parameter to display information only about files that are the size you specify.
Examples
The following example displays typical system script information.
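cluster1::> system script show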
Description
The system script start command starts creating a record of your CLI session. The record is stored in a file. Use the
system script show -this-session yes command to display files that are recording the current CLI session. Use the
system script stop command to stop recording the current CLI session.
Parameters
-filename <text> - Filename to Log To
Use this parameter to specify the file name to which the CLI session record is saved.
-size-limit {<integer>[KB|MB|GB|TB|PB]} - Logfile Size Limit (Max: 2 GB)
Use this parameter to specify the maximum size of the file that contains the CLI session record. When the file
size reaches this limit, recording stops. The default file size limit is 1 MB. The maximum file size limit is 2
GB.
Examples
The following example shows how to start creating a record of the CLI session in a file named sessionlog3. The size
limit of this file is 20 MB.
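cluster1::> system script start -filename sessionlog3 -size-limit 20MB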
Related references
system script show on page 1364
system script stop on page 1366
Description
The system script stop command stops creating a record of your CLI session, if you started creating the record by using
the system script start command. Use the system script show -this-session yes command to display files that
are recording the current CLI session.
Examples
The following example shows how to stop creating a record of your CLI session.
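cluster1::> system script stop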
Related references
system script start on page 1365
system script show on page 1364
Description
The system script upload command uploads a CLI session record file to a remote location. Specify the remote location
using an FTP or HTTP URI. Use the system script show command to display saved CLI sessions. Use the system
script start command to record a CLI session and save it to a file.
Parameters
-username <text> - Username If Not Your Own
Use this parameter to specify the name of the user who owns the file to upload. By default, this is the user who
is logged in.
-filename <text> - Filename to Log To
Use this parameter to specify the name of a file to be uploaded.
-destination {(ftp|http)://(hostname|IPv4 Address|'['IPv6 Address']')...} - URI to Send File To
Use this parameter to specify the FTP or HTTP destination of the file.
Examples
The following example shows how to upload the file named sessionlog3 to the destination
ftp://now.example.com/cli_sessions.
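cluster1::> system script upload -filename sessionlog3 -destination ftp://now.example.com/cli_sessions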
Related references
system script show on page 1364
Description
The system service-processor reboot-sp command reboots the Service Processor of the specified node.
Parameters
-node {<nodename>|local} - Node
This parameter specifies the node whose Service Processor is to be rebooted.
[-image {primary|backup}] - Image to Boot with After Reboot
This parameter specifies the image that the Service Processor uses after the reboot. By default, the primary
image is used. Avoid booting the SP from the backup image. Booting from the backup image is reserved for
troubleshooting and recovery purposes only. It might require that the SP automatic firmware update be
disabled, which is not a recommended setting. You should contact technical support before attempting to boot
the SP from the backup image.
Examples
The following command reboots the Service Processor of node "node1" into the primary image.
cluster1::> system service-processor reboot-sp -node node1 -image primary
The following command reboots the Service Processors of all nodes. Since -image is not specified, the Service
Processors will boot into the primary image.
cluster1::> system service-processor reboot-sp -node *
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information for the Service Processor of the specified node.
[-type {SP|NONE|BMC}] - Type of Device
Selects information for the Service Processors of the specified type.
[-status {online|offline|sp-daemon-offline|node-offline|degraded|rebooting|unknown|
updating}] - Status
Selects information for the Service Processors whose status matches the specified value.
[-ip-configured {true|false}] - Is Network Configured
Selects information for the Service Processors whose network is configured (true) or not configured (false).
[-address <IP Address>, ...] - Public IP Address
Selects information for the Service Processors that use the specified IP address or addresses.
[-mac <MAC Address>] - MAC Address
Selects information for the Service Processors that use the specified MAC address.
[-fw-version <text>] - Firmware Version
Selects information for the Service Processors that are running the specified firmware version.
[-part-num <text>] - Part Number
Selects information for the Service Processors that have the specified part number.
[-serial-num <text>] - Serial Number
Selects information for the Service Processors that have the specified serial number.
[-dev-rev <text>] - Device Revision
Selects information for the Service Processors that have the specified device revision.
[-autoupdate-enabled {true|false}] - Is Firmware Autoupdate Enabled
Selects information for the Service Processors that have the specified status for firmware automatic update.
Examples
The following example displays basic information for the Service Processors of all the nodes.
cluster1::> system service-processor show
The following example displays all available information for the Service Processors of all the nodes.
Node: node1
Type of Device: SP
Status: online
Is Network Configured: true
Public IP Address: 192.168.1.201
MAC Address: ab:cd:ef:fe:ed:01
Firmware Version: 2.2X5
Part Number: Not Applicable
Serial Number: Not Applicable
Device Revision: Not Applicable
Is Firmware Autoupdate Enabled: true
Node: node2
Type of Device: SP
Status: online
Is Network Configured: true
Public IP Address: 192.168.1.202
MAC Address: ab:cd:ef:fe:ed:02
Firmware Version: 2.2X5
Part Number: Not Applicable
Serial Number: Not Applicable
Device Revision: Not Applicable
Is Firmware Autoupdate Enabled: true
2 entries were displayed.
cluster1::>
The following example displays only the type, status and firmware version for the Service Processors of all the nodes.
cluster1::> system service-processor show -fields type,status,fw-version
Description
This command disables user-installed certificates for secure communication with the service processor API service. Default
certificates are then auto-generated.
Description
This command enables user-installed certificates for secure communication with the service processor. Use the security
certificate install command to install client, server and CA certificates.
Parameters
-vserver <Vserver Name> - Vserver
Use this parameter to specify the Vserver on which certificates are installed.
-server-cert <text> - Name of the Server Certificate
Use this parameter to specify the unique name of the server certificate.
-client-cert <text> - Name of the Client Certificate
Use this parameter to specify the unique name of the client certificate.
-rootca-cert <text> - Names of the Root CA Certificates
Use this parameter to specify the unique names of the server-ca or client-ca certificates.
Examples
The following example installs server, client and rootca certificates and then enables those certificates for secure
communication with the service processor.
cluster1 1533F273AA311FDB
xxx-client client
Certificate Authority: xxx-ca
Expiration Date: Fri May 31 05:34:37 2019
cluster1 1533F1B321E55242
xxx-server server
Certificate Authority: xxx-ca
Expiration Date: Fri May 31 05:20:50 2019
Description
The system service-processor api-service modify command modifies SP API service configuration. The SP API is
a secure network API that enables Data ONTAP to communicate with the Service Processor over the network.
Parameters
[-is-enabled {true|false}] - Is SP API Service Enabled
This parameter enables or disables the API service of the Service Processor. When the API service is disabled,
features such as network-based firmware updates and network-based down-filer log collection will not be
available, and the slower serial interface will be used for firmware updates and down-filer log collection.
[-port <integer>] - SP API Service Port
This parameter specifies the port number on the Service Processor used for the API service. By default, port
50000 is used.
Examples
The following example modifies the port number used for the SP API service and then disables the SP API service.
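The port value shown is illustrative:
cluster1::> system service-processor api-service modify -port 50001
cluster1::> system service-processor api-service modify -is-enabled false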
Description
The system service-processor api-service renew-internal-certificates command generates the certificates
used for secure communication with the service processor API service. This command is not allowed if user-installed
certificates are enabled.
Examples
The following example generates new default host and root-ca certificates.
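cluster1::> system service-processor api-service renew-internal-certificates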
Description
The system service-processor api-service show command displays the service processor API service configuration.
Examples
The following example displays the service processor API service configuration:
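cluster1::> system service-processor api-service show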
Description
The system service-processor image modify command enables or disables automatic firmware update on the Service
Processor of specified node or nodes.
Parameters
-node {<nodename>|local} - Node
This parameter specifies the node on which automatic Service Processor firmware update is to be enabled or
disabled.
[-autoupdate {true|false}] - Firmware Autoupdate
Setting this parameter to true enables automatic firmware update. Setting this parameter to false disables
automatic firmware update. This is a mandatory parameter.
Examples
The following command enables automatic firmware update for the Service Processor on the local node.
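cluster1::> system service-processor image modify -node local -autoupdate true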
The following command enables automatic firmware update for the Service Processors on all the nodes.
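cluster1::> system service-processor image modify -node * -autoupdate true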
Description
The system service-processor image show command displays information about the currently installed firmware
images on the Service Processor of each node in a cluster. You can limit output to specific types of information and specific
nodes in the cluster, or filter output by specific field values.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects firmware image information for the Service Processor of the specified node.
[-image {primary|backup}] - Image
Selects firmware image information for the Service Processors that are running the primary or backup image
as specified.
[-type {SP|NONE|BMC}] - Type
Selects firmware image information for the Service Processors of the specified type.
[-status {installed|corrupt|updating|auto-updating|none}] - Image Status
Selects firmware image information for the Service Processors whose image status matches the specified
value.
[-is-current {true|false}] - Is Image Current
Selects firmware image information for the SP whose current image matches the specified status. This
parameter indicates the partition (primary or backup) that the SP is currently booted from, not whether the
installed firmware version is most current.
[-version <text>] - Firmware Version
Selects firmware image information for the Service Processors running the specified firmware version.
[-autoupdate {true|false}] - Firmware Autoupdate
Selects firmware image information for the Service Processors whose automatic update matches the specified
configuration.
Examples
The following command displays basic firmware information for the Service Processors of all the nodes.
cluster1::>
The following command displays all available firmware information for the Service Processors of all the nodes.
Node: node1
Image: primary
Type: SP
Image Status: installed
Is Image Current: true
Firmware Version: 2.2X8
Firmware Autoupdate: true
Last Update Status: passed
Node: node1
Image: backup
Type: SP
Image Status: installed
Is Image Current: false
Firmware Version: 2.2X5
Firmware Autoupdate: true
Last Update Status: passed
Node: node2
Image: primary
Type: SP
Image Status: installed
Is Image Current: true
Firmware Version: 2.2X8
Firmware Autoupdate: true
Last Update Status: passed
Node: node2
Image: backup
Type: SP
Image Status: installed
Is Image Current: false
Firmware Version: 2.2X5
Firmware Autoupdate: true
Last Update Status: passed
4 entries were displayed.
cluster1::>
Description
The system service-processor image update command installs a new firmware version on the Service Processor of
the specified node in a cluster. If this command fails, it displays and logs an appropriate error message and aborts. No automatic
command retries will be performed. This command also specifies which firmware image is to be installed on the Service
Processor and how.
You can use the command system service-processor image update-progress show to check the progress of the
update.
The following parameter combinations are not supported for this command:
Parameters
-node {<nodename>|local} - Node
This parameter specifies the node whose Service Processor's firmware is to be updated.
[-package <text>] - Firmware Package
This parameter specifies the package that will be installed. You can find the package file in the SP Update
Repository field of the system node image package show command. If you do not specify this
parameter, the Service Processor is updated to the most recent version of the firmware that is available in the
update repository. You must specify this parameter if baseline is false or omitted.
[-baseline {true|false}] - Install Baseline
If you set this parameter to true, the command installs the Service Processor firmware version that is bundled
with the currently running release of Data ONTAP. This is a safety mechanism that allows you to revert the SP
firmware to the version that was qualified and bundled with the currently running version of Data ONTAP on
your system. If not specified, this parameter defaults to false.
[-clear-logs {true|false}] - Clear Logs After Update
If you set this parameter to true, the command resets log settings to factory default and clears contents of all
logs maintained by the Service Processor, including:
• Event logs
• IPMI logs
• Forensics logs
Examples
The following command reverts the firmware on the Service Processor of the local node to the version that was packaged
with the currently running release of Data ONTAP. A complete install will be performed, clearing all logs maintained by
the Service Processor. The second command displays the status of the in-progress firmware install.
cluster1::> system service-processor image update -node local -update-type full -baseline true -
clear-logs true
cluster1::> system service-processor image update-progress show
Related references
system node image package show on page 1333
system service-processor image update-progress show on page 1376
Description
The system service-processor image update-progress show command displays the progress information of
firmware updates on the Service Processor of the specified nodes. The "in-progress" field displays "no" if no update is in
progress. This command does not display the progress of an SP firmware update that is triggered from the SP CLI.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter displays the status of Service Processor firmware update for the specified node.
[-start-time <MM/DD/YYYY HH:MM:SS>] - Latest SP Firmware Update Start Timestamp
This parameter displays the status of the Service Processor whose firmware update start time matches the
specified value.
[-percent-done <integer>] - Latest SP Firmware Update Percentage Done
This parameter displays the status of the Service Processor whose update completion percentage matches the
specified value.
[-end-time <MM/DD/YYYY HH:MM:SS>] - Latest SP Firmware Update End Timestamp
This parameter displays the status of the Service Processor whose firmware update end time matches the
specified value.
Examples
The following example starts a firmware update on the local node and then uses the command system service-
processor image update-progress show to display progress of firmware updates on Service Processors of all
nodes in the system.
cluster1::> system service-processor image update -node local -update-type full -baseline true -
clear-logs true
cluster1::>
cluster1::> system service-processor image update-progress show
Description
The system service-processor log show-allocations command displays the allocation map of the Service Processor
logs collected in the cluster. The Service Processor logs of a node are archived in the mroot directory of the collecting node.
This command displays the sequence numbers for the Service Processor log files that reside in each collecting node.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays the sequence numbers of Service Processor log files that
the specified node has collected.
[-remote-node <text>] - Remote Node
If you specify this parameter, the command displays the sequence numbers of Service Processor log files that
have been collected from the specified remote node.
Examples
The following example displays the allocation map of the Service Processor log files in the cluster.
cluster1::> system service-processor log show-allocations
Description
The system service-processor network modify command modifies the network configuration of the Service Processor
of specified node or nodes in a cluster.
If SP automatic network configuration has been enabled, the system service-processor network modify command
only allows you to enable or disable the SP IPv4 or IPv6 network interface.
Parameters
-node {<nodename>|local} - Node
This parameter specifies the node whose Service Processor's network configuration is to be modified.
-address-family {IPv4|IPv6} - Address Family
This parameter specifies whether the IPv4 or the IPv6 configuration is to be modified.
[-enable {true|false}] - Interface Enabled
This parameter enables or disables the underlying network interface for the specified address-family. This
is a mandatory parameter.
[-dhcp {v4|none}] - DHCP Status
If this parameter is set to v4, the Service Processor uses network configuration from the DHCP server.
Otherwise, the Service Processor uses the network address you specify. If this parameter is not set to v4 or is
not specified, you must specify the IP address, netmask, prefix-length, and gateway in the command. DHCP is
not supported for IPv6 configuration.
[-ip-address <IP Address>] - IP Address
This parameter specifies the public IP address for the Service Processor. You must specify this parameter when
the -dhcp parameter is not set to v4.
Examples
The following example enables the network interface for IPv4 on the Service Processor of the local node. It first displays
the current network configuration information of the local node to show the network interface is initially disabled, and
then enables it with IP address 192.168.1.202, netmask as 255.255.255.0 and gateway as 192.168.1.1. It displays the
interim state with SP Network Setup Status field showing "in-progress". It finally displays the network configuration
again to confirm the specified values took effect.
Node: node2
Address Family: IPv4
Interface Enabled: false
Type of Device: SP
Status: online
Link Status: disabled
DHCP Status: -
IP Address: -
MAC Address: ab:cd:ef:fe:ed:02
Netmask: -
Prefix Length of Subnet Mask: -
Router Assigned IP Address: -
Link Local IP Address: -
Gateway IP Address: -
Time Last Updated: Fri Jun 13 16:29:55 GMT 2014
Subnet Name: -
Enable IPv6 Router Assigned Address: -
SP Network Setup Status: succeeded
SP Network Setup Failure Reason: -
Node: node2
Address Family: IPv6
Interface Enabled: false
Type of Device: SP
Status: online
Link Status: disabled
DHCP Status: none
IP Address: -
MAC Address: ab:cd:ef:fe:ed:02
Netmask: -
Prefix Length of Subnet Mask: -
Router Assigned IP Address: -
Link Local IP Address: -
Gateway IP Address: -
Time Last Updated: Fri Jun 13 16:29:55 GMT 2014
Subnet Name: -
Enable IPv6 Router Assigned Address: -
SP Network Setup Status: not-setup
SP Network Setup Failure Reason: -
2 entries were displayed.
cluster1::>
cluster1::> system service-processor network modify -node local -address-family IPv4 -enable true -ip-address 192.168.1.202 -netmask 255.255.255.0 -gateway 192.168.1.1
cluster1::> system service-processor network show -instance -node local
Node: node2
Address Family: IPv4
Interface Enabled: false
Type of Device: SP
Status: online
Link Status: disabled
DHCP Status: -
IP Address: -
MAC Address: ab:cd:ef:fe:ed:02
Netmask: -
Prefix Length of Subnet Mask: -
Router Assigned IP Address: -
Link Local IP Address: -
Gateway IP Address: -
Time Last Updated: Fri Jun 13 16:29:55 GMT 2014
Subnet Name: -
Enable IPv6 Router Assigned Address: -
SP Network Setup Status: in-progress
SP Network Setup Failure Reason: -
Node: node2
Address Family: IPv6
Interface Enabled: false
Type of Device: SP
Status: online
Link Status: disabled
DHCP Status: none
IP Address: -
MAC Address: ab:cd:ef:fe:ed:02
Node: node2
Address Family: IPv4
Interface Enabled: true
Type of Device: SP
Status: online
Link Status: up
DHCP Status: none
IP Address: 192.168.1.202
MAC Address: ab:cd:ef:fe:ed:02
Netmask: 255.255.255.0
Prefix Length of Subnet Mask: -
Router Assigned IP Address: -
Link Local IP Address: -
Gateway IP Address: 192.168.1.1
Time Last Updated: Fri Jun 13 16:29:55 GMT 2014
Subnet Name: -
Enable IPv6 Router Assigned Address: -
SP Network Setup Status: succeeded
SP Network Setup Failure Reason: -
Node: node2
Address Family: IPv6
Interface Enabled: false
Type of Device: SP
Status: online
Link Status: disabled
DHCP Status: none
IP Address: -
MAC Address: ab:cd:ef:fe:ed:02
Netmask: -
Prefix Length of Subnet Mask: -
Router Assigned IP Address: -
Link Local IP Address: -
Gateway IP Address: -
Time Last Updated: Fri Jun 13 16:29:55 GMT 2014
cluster1::>
Description
The system service-processor network show command displays the network configuration of the Service Processor of
each node in a cluster. You can limit output to specific types of information and specific nodes in the cluster, or filter output by
specific field values.
If a node is offline or its Service Processor management daemon is down, the command displays only the last known IP
address of its Service Processor.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects network configuration information for the Service Processor of the specified node.
[-address-family {IPv4|IPv6}] - Address Family
Selects network configuration information for the Service Processors that have the specified IP address family.
[-enable {true|false}] - Interface Enabled
Selects network configuration information for the Service Processors whose network interface for the given
address-family is enabled or disabled as specified.
[-type {SP|NONE|BMC}] - Type of Device
Selects network configuration information for the Service Processors of the specified type.
[-status {online|offline|sp-daemon-offline|node-offline|degraded|rebooting|unknown|
updating}] - Status
Selects network configuration information for the Service Processors whose status matches the specified
value.
[-link-status {up|down|disabled|unknown}] - Link Status
Selects network configuration information for the Service Processors whose link status matches the specified
value.
[-dhcp {v4|none}] - DHCP Status
Selects network configuration information for the Service Processors whose DHCP status matches the
specified value.
[-ip-address <IP Address>] - IP Address
Selects network configuration information for the Service Processors that use the specified IP address.
Examples
The following example displays basic network configuration information for the Service Processors of all the nodes.
DHCP: v4
MAC Address: ab:cd:ef:fe:ed:01
Network Gateway: 192.168.1.1
Network Mask (IPv4 only): 255.255.255.0
Prefix Length (IPv6 only): -
IPv6 RA Enabled: -
Subnet Name: -
SP Network Setup Status: succeeded
DHCP: none
MAC Address: ab:cd:ef:fe:ed:01
Network Gateway: -
Network Mask (IPv4 only): -
Prefix Length (IPv6 only): -
IPv6 RA Enabled: -
Subnet Name: -
SP Network Setup Status: not-setup
DHCP: v4
MAC Address: ab:cd:ef:fe:ed:02
Network Gateway: 192.168.1.1
Network Mask (IPv4 only): 255.255.255.0
Prefix Length (IPv6 only): -
IPv6 RA Enabled: -
Subnet Name: -
SP Network Setup Status: succeeded
DHCP: none
MAC Address: ab:cd:ef:fe:ed:02
Network Gateway: -
Network Mask (IPv4 only): -
Prefix Length (IPv6 only): -
IPv6 RA Enabled: -
Subnet Name: -
SP Network Setup Status: not-setup
cluster1::>
The following example displays all available network configuration information for the Service Processors of all the
nodes.
Node: node1
Address Family: IPv4
Interface Enabled: true
Type of Device: SP
Status: online
Link Status: up
DHCP Status: v4
IP Address: 192.168.1.201
MAC Address: ab:cd:ef:fe:ed:01
Netmask: 255.255.255.0
Prefix Length of Subnet Mask: -
Router Assigned IP Address: -
Link Local IP Address: -
Gateway IP Address: 192.168.1.1
Time Last Updated: Fri Jun 13 17:03:59 GMT 2014
Subnet Name: -
Enable IPv6 Router Assigned Address: -
SP Network Setup Status: succeeded
SP Network Setup Failure Reason: -
Node: node1
Address Family: IPv6
Interface Enabled: false
Type of Device: SP
Status: online
Link Status: disabled
DHCP Status: none
IP Address: -
MAC Address: ab:cd:ef:fe:ed:01
Netmask: -
Node: node2
Address Family: IPv4
Interface Enabled: true
Type of Device: SP
Status: online
Link Status: up
DHCP Status: v4
IP Address: 192.168.1.202
MAC Address: ab:cd:ef:fe:ed:02
Netmask: 255.255.255.0
Prefix Length of Subnet Mask: -
Router Assigned IP Address: -
Link Local IP Address: -
Gateway IP Address: 192.168.1.1
Time Last Updated: Fri Jun 13 17:03:59 GMT 2014
Subnet Name: -
Enable IPv6 Router Assigned Address: -
SP Network Setup Status: succeeded
SP Network Setup Failure Reason: -
Node: node2
Address Family: IPv6
Interface Enabled: false
Type of Device: SP
Status: online
Link Status: disabled
DHCP Status: none
IP Address: -
MAC Address: ab:cd:ef:fe:ed:02
Netmask: -
Prefix Length of Subnet Mask: -
Router Assigned IP Address: -
Link Local IP Address: -
Gateway IP Address: -
Time Last Updated: Fri Jun 13 17:03:59 GMT 2014
Subnet Name: -
Enable IPv6 Router Assigned Address: -
SP Network Setup Status: not-setup
SP Network Setup Failure Reason: -
4 entries were displayed.
cluster1::>
Description
The system service-processor network auto-configuration disable command disables the SP's use of a subnet
resource for the automatic configuration of its network port. This is a cluster-wide configuration. When you disable SP
automatic network configuration, all SPs in the cluster are configured to use IPv4 DHCP, and any addresses previously
allocated from the subnet to the SPs are released. If an SP fails to obtain an IPv4 address from the DHCP server, an EMS
message warns you about the failure. The IPv6 interface is disabled.
Examples
The following example disables the automatic configuration for IPv4 on the SP. It first displays the current network
configuration and then disables the SP IPv4 automatic network configuration.
DHCP: v4
MAC Address: ab:cd:ef:fe:ed:01
Network Gateway: 192.168.1.1
Network Mask (IPv4 only): 255.255.255.0
Prefix Length (IPv6 only): -
IPv6 RA Enabled: -
Subnet Name: -
SP Network Setup Status: succeeded
Description
The system service-processor network auto-configuration enable command enables the automatic network
configuration for the SP. This is a cluster-wide configuration. Every node in the cluster will use the specified subnet to
allocate an IP address, subnet mask, and gateway address for the SP configuration. When the SP automatic network
configuration is
Parameters
-address-family {IPv4|IPv6} - Subnet Address Family
This parameter specifies whether the IPv4 or the IPv6 automatic configuration is to be enabled for the SP.
-subnet-name <text> - Subnet Name
This parameter specifies the network subnet that the SP will use for automatic network configuration.
Examples
The following example enables automatic network configuration for IPv4 on the SP. It first displays the current SP
network configuration, displays the available network subnets in the cluster, and then enables the SP to use the subnet for
IPv4 automatic configuration.
DHCP: v4
MAC Address: ab:cd:ef:fe:ed:01
Network Gateway: 192.168.1.1
Network Mask (IPv4 only): 255.255.255.0
Prefix Length (IPv6 only): -
IPv6 RA Enabled: -
Subnet Name: -
SP Network Setup Status: succeeded
IPspace: Default
Subnet Broadcast Avail/
Name Subnet Domain Gateway Total Ranges
--------- ---------------- --------- --------------- --------- ---------------
ipv4_test 192.168.1.0/24 Default 192.168.1.1 3/5 192.168.1.2-192.168.1.6
DHCP: none
MAC Address: ab:cd:ef:fe:ed:01
Network Gateway: 192.168.1.1
Network Mask (IPv4 only): 255.255.255.0
Prefix Length (IPv6 only): -
Description
The system service-processor network auto-configuration show command displays the names of the IPv4 and
IPv6 network subnet objects configured in the cluster that the SP uses for automatic configuration.
Examples
The following example shows that the SP is configured to use the "ipv4_test" IPv4 subnet in the cluster for the SP
automatic network configuration.
Description
The system service-processor ssh add-allowed-addresses command grants IP addresses access to the Service
Processor.
Parameters
-allowed-addresses <IP Address/Mask>, ... - Public IP Addresses
Use this parameter to specify one or more IP addresses with corresponding netmasks. The value should be
specified in the format of address/netmask, for example, 10.98.150.10/24, fd20:8b1e:b255:c09b::/64. Use
commas to separate multiple address/netmask pairs. If "0.0.0.0/0, ::/0" is specified in the parameter, any IP
address is allowed to access the Service Processor.
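The address/netmask semantics described above can be sketched with Python's standard ipaddress module. This is an illustrative model of how an allow list admits or rejects a client address, not ONTAP code; the function name and data layout are assumptions.

```python
# Illustrative sketch (not ONTAP code) of how an allow list of
# address/netmask entries admits or rejects a client address.
import ipaddress

def is_allowed(client_ip: str, allowed: list) -> bool:
    """Return True if client_ip falls inside any allowed address/mask entry."""
    ip = ipaddress.ip_address(client_ip)
    for entry in allowed:
        # strict=False accepts host addresses such as 10.98.150.10/24.
        net = ipaddress.ip_network(entry.strip(), strict=False)
        if ip in net:  # membership is False across mismatched IP versions
            return True
    return False

# "0.0.0.0/0, ::/0" admits every IPv4 and IPv6 client.
allow_all = ["0.0.0.0/0", "::/0"]
```

Note that a host address with a mask, such as 10.98.150.10/24, denotes the whole 10.98.150.0/24 subnet.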
Examples
The following examples grant the specified IP addresses access to the Service Processor and display the list of public IP
addresses that are allowed to access the Service Processor.
The following example enables all IP addresses to access the Service Processor.
cluster1::> system service-processor ssh add-allowed-addresses -allowed-addresses 0.0.0.0/0, ::/0
cluster1::>
Description
The system service-processor ssh remove-allowed-addresses command blocks the specified IP address from
accessing the Service Processor. If all IP addresses are removed from the access list, then the Service Processor is not accessible
from any IP address.
Parameters
-allowed-addresses <IP Address/Mask>, ... - Public IP Addresses
Use this parameter to specify one or more IP addresses with corresponding netmasks. The value should be
specified in the format of address/netmask, for example, 10.98.150.10/24, fd20:8b1e:b255:c09b::/64. Use
commas to separate multiple address/netmask pairs.
Examples
The following example prevents the specified IP addresses from accessing the Service Processor. It also displays the list
of public IP addresses that are allowed to access the Service Processor.
Warning: If all IP addresses are removed from the allowed address list, all IP
addresses will be denied access. To restore the "allow all" default,
use the "system service-processor ssh add-allowed-addresses
-allowed-addresses 0.0.0.0/0, ::/0" command. Do you want to continue?
{y|n}: y
cluster1::>
Examples
The following example displays SSH security information about the Service Processor.
cluster1::>
Description
The system services firewall modify command modifies a node's firewall configuration.
Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which to modify firewall configuration.
[-enabled {true|false}] - Service Enabled
Use this parameter to specify whether firewall protection is enabled ("true") or disabled ("false") for the
node's network ports. The default setting is true.
[-logging {true|false}] - (DEPRECATED)-Enable Logging
Note: This parameter is deprecated and may be removed in a future version of Data ONTAP.
Use this parameter to specify whether logging is enabled ("true") or disabled ("false") for the firewall
service. The default setting is false.
Examples
The following example enables firewall protection and logging for a node named node1:
cluster1::> system services firewall modify -node node1 -enabled true -logging true
Description
The system services firewall show command displays firewall configuration and logging information. If the command
is issued without any parameters, it displays information about all nodes in the cluster. You can also query specific nodes for
their firewall information by running the command with the -node parameter.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information about the firewall settings on the node you specify.
[-enabled {true|false}] - Service Enabled
Selects information about the nodes with the firewall enabled ("true") or disabled ("false").
[-logging {true|false}] - (DEPRECATED)-Enable Logging
Note: This parameter is deprecated and may be removed in a future version of Data ONTAP.
Selects information about the nodes with firewall logging enabled ("true") or disabled ("false").
Examples
The following example displays information about firewall configuration for all nodes in the cluster:
Description
The system services firewall policy clone command creates a new firewall policy that is an exact copy of an
existing policy, but has a new name.
Examples
This example creates a new firewall policy named "data2" on Vserver "vs0" from an existing firewall policy named "data"
on Vserver "vs1".
cluster1::> system services firewall policy clone -vserver vs0 -policy data -destination-vserver
vs1 -destination-policy data2
Description
The system services firewall policy create command creates a firewall policy entry with the specified name and
network service. This command is used both to create the first network service associated with a new firewall policy and
to add to an existing firewall policy by associating another network service with it. You can optionally specify one or
more IP addresses with corresponding netmasks that are allowed to use the firewall policy entry.
You can use the network interface modify command with the -firewall-policy parameter to put a firewall policy into
effect for a given logical interface by modifying that logical interface to use the specified firewall policy.
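As a mental model (an assumption about observable behavior, not ONTAP internals), a firewall policy can be pictured as a named map from services to allow lists, where each create call adds one service entry:

```python
# Hypothetical model of firewall policy entries:
# policy name -> service -> allow list.
policies: dict = {}

def policy_create(policy: str, service: str, allow_list: list) -> None:
    """Create a policy entry, or associate another service with an existing policy."""
    policies.setdefault(policy, {})[service] = list(allow_list)

# Mirrors the examples in this section: the first call creates the
# policy, the second associates an additional service with it.
policy_create("data", "ssh", ["192.0.2.128/25"])
policy_create("data", "https", ["192.0.2.128/25"])
```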
Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the name of the Vserver on which the policy is to be created.
-policy <textpolicy_name> - Policy
Use this parameter to specify the name of the policy that is to be created.
-service <service> - Service
Use this parameter to specify the network service that is associated with the policy. Possible values include:
• default - The default protocol or protocols for the port to which the firewall is applied
Examples
The following example creates a firewall policy named data that uses the SSH protocol and enables access from all IP
addresses on the 192.0.2.128/25 subnet:
cluster1::> system services firewall policy create -policy data -service ssh -allow-list
192.0.2.128/25
The following example adds an entry to the firewall policy named data, associating the HTTPS protocol with that policy
and enabling access from all IP addresses on the 192.0.2.128/25 subnet:
cluster1::> system services firewall policy create -policy data -service https -allow-list
192.0.2.128/25
Related references
network interface modify on page 340
Description
The system services firewall policy delete command deletes a firewall policy. You cannot delete a policy that is
being used by a logical interface. Use the network interface modify command with the -firewall-policy parameter
to change a network interface's firewall policy.
Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver of the policy to delete.
-policy <textpolicy_name> - Policy
Use this parameter to specify the name of the policy to delete.
-service <service> - Service
Use this parameter to specify the policy's network service to delete.
Examples
The following example deletes a firewall policy that uses the Telnet protocol on the policy named data:
cluster1::> system services firewall policy delete -policy data -service telnet
Use wildcards to delete entire policies at once, or particular services from every policy. This example deletes the entire
intercluster policy.
Related references
network interface modify on page 340
Description
The system services firewall policy modify command enables you to modify the list of IP addresses and netmasks
associated with a firewall policy.
Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver of the policy to modify.
-policy <textpolicy_name> - Policy
Use this parameter to specify the name of the policy to modify.
-service <service> - Service
Use this parameter to specify the policy's network service to modify.
[-allow-list <IP Address/Mask>, ...] - Allowed IPs
Use this parameter to specify one or more IP addresses with corresponding netmasks that are allowed by this
firewall policy. The correct format for this parameter is address/netmask, similar to "192.0.2.128/25". Multiple
address/netmask pairs should be separated with commas. Use the value 0.0.0.0/0 for "any".
Examples
The following example modifies the firewall policy named data that uses the SSH protocol to enable access from all
addresses on the 192.0.2.128/25 subnet:
cluster1::> system services firewall policy modify -policy data -service ssh -allow-list
192.0.2.128/25
Related references
system services firewall modify on page 1389
Description
The system services firewall policy show command displays information about firewall policies.
Examples
The following example displays information about all firewall policies:
Description
The system services manager install show command displays information about installed services.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-service <text>] - Service
Selects information about installed services that have the name you specify.
[-version <service version>] - Version
Selects information about installed services that have the version number you specify.
[-constituent <text>] - Constituent
Selects information about installed services that have the constituent process you specify.
[-nodes {<nodename>|local}, ...] - Nodes
Selects information about services that are installed on the nodes you specify.
[-description <text>] - Description
Selects information about installed services that match the description you specify.
Examples
The following example shows typical output from a two-node cluster.
Description
The system services manager policy add command adds a new service policy to the services manager. Policies
determine which versions of a service can run on the nodes of the cluster.
Examples
This example adds a service manager policy for version 1.0 of the diagnosis service.
cluster1::> system services manager policy add -service diagnosis -version 1.0
Description
The system services manager policy remove command removes a policy from the services manager. Policies
determine which versions of a service can run on the nodes of the cluster.
Parameters
-service <text> - Service
Use this parameter to specify the name of the service from which to remove a policy.
-version <service version> - Version
Use this parameter to specify the version number that is configured by the policy to remove.
Examples
The following example shows the removal of the service policy for version 1.0 of the diagnosis service.
cluster1::> system services manager policy remove -service diagnosis -version 1.0
Description
The system services manager policy setstate command enables or disables services manager policies. Use the
system services manager policy show command to display information about configured policies.
Parameters
-service <text> - Service
Use this parameter to set the state of the policy you specify.
-version <service version> - Version
Use this parameter to set the state of the policy with the version number you specify.
Examples
The following example sets the policy for version 1.0 of the diagnosis service to off.
cluster1::> system services manager policy setstate -service diagnosis -version 1.0 -state off
Related references
system services manager policy show on page 1397
Description
The system services manager policy show command displays information about policies that determine which versions
of a service can run on the nodes of the cluster.
Use the system services manager status show command to view information about services that are configured to run
in the cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-service <text>] - Service
Selects policies that apply to the service you specify.
[-version <service version>] - Version
Selects policies that have the version number you specify.
[-constituent <text>] - Constituent
Selects policies that have the constituent process you specify.
[-state {on|off}] - State
Use this parameter with the value "on" to select information about policies that are currently active. Use this
parameter with the value "off" to select information about policies that are not currently active.
[-num-active <integer>] - Number Active
Selects policies that have the number of active (running) instances you specify.
[-target-nodes <service affinity>, ...] - Target Nodes
Selects policies that are configured to run on the nodes you specify.
[-tag <UUID>] - Tag (privilege: advanced)
Selects policies that have the UUID you specify. Use this parameter with the -fields parameter to display a
list of the UUIDs of configured services.
Related references
system services manager status show on page 1398
Description
The system services manager status show command displays the status of system services that are configured to run in
the cluster.
System services run on the nodes of the cluster based on policies. Policies determine which versions of a service can run on the
nodes of the cluster. Use the system services manager policy show command to view existing policies.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-service <text>] - Service
Selects information about services that match the service name you specify.
[-version <service version>] - Version
Selects information about services that are configured to run the version number you specify. The configured
version is the minimum version that is allowed to run in the cluster according to a policy. Use the system
services manager policy show command to view information about service policies.
[-constituent <text>] - Constituent
Selects information about services that have the constituent process you specify.
[-actual-version <service version>] - Actual Version
Selects information about services that are running the version number you specify. This number can be higher
than the configured version if a more recent version is installed on the node that is running the service.
Examples
The example below shows typical output for a simple cluster.
Related references
system services manager policy show on page 1397
Description
The system services ndmp kill command is used to terminate a specific NDMP session on a particular node in the
cluster.
Parameters
<integer> - Session Identifier
Session ID of the NDMP session.
Examples
The following example shows how a specific NDMP session on the node named node1 can be terminated:
Description
The system services ndmp kill-all command is used to terminate all NDMP sessions on a particular node in the cluster.
Parameters
-node {<nodename>|local} - Node
The node on which all NDMP sessions need to be terminated.
Examples
The following example shows how all NDMP sessions on the node named node1 can be terminated:
Description
Note: This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed in a future
release of Data ONTAP. Use the Vserver-aware "vserver services ndmp modify" command.
The system services ndmp modify command allows you to modify the NDMP configurations for a node in the cluster.
One or more of the following configurations can be modified:
• Enable/disable sending the NDMP password in clear text. Note that MD5 authentication mode is always enabled.
• NDMP user ID
Parameters
-node {<nodename>|local} - Node
This specifies the node whose NDMP configuration is to be modified.
[-enable {true|false}] - NDMP Service Enabled
This optionally specifies whether NDMP is enabled on the node. The default setting is true.
[-clear-text {true|false}] - Allow Clear Text Password
This optionally specifies whether the NDMP password can be sent in clear text. The default setting is true.
[-user-id <text>] - NDMP User ID
This optionally specifies the ID of the NDMP user.
Examples
The following example modifies the NDMP configuration on a node named node1. The configuration enables NDMP,
disables sending the password in clear text, and specifies an NDMP user named ndmp:
Related references
vserver services ndmp modify on page 2226
Description
Note: This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed in a future
release of Data ONTAP. Use the Vserver-aware "vserver services ndmp off" command.
The system services ndmp off command is used to disable the NDMP service on any node in the cluster.
Parameters
-node {<nodename>|local} - Node
The specific node on which NDMP service is to be disabled.
Examples
The following example is used to turn off the NDMP service on node named node1:
Related references
vserver services ndmp off on page 2231
Description
Note: This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed in a future
release of Data ONTAP. Use the Vserver-aware "vserver services ndmp on" command.
The system services ndmp on command is used to enable the NDMP service on any node in the cluster.
Parameters
-node {<nodename>|local} - Node
The specific node on which the NDMP service is to be enabled.
Examples
The following example is used to turn on the NDMP service on node named node1:
Description
Note: This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed in a future
release of Data ONTAP. Use the Vserver-aware "vserver services ndmp generate-password" command.
The system services ndmp password command is used to change the NDMP password for a node in the cluster.
Parameters
-node {<nodename>|local} - Node
The specific node for which the password is to be changed.
Examples
The following example is used to change the NDMP password for the node named node1:
Related references
vserver services ndmp generate-password on page 2225
Description
The system services ndmp probe command displays diagnostic information about all the NDMP sessions in the cluster.
The following fields are displayed for each of the sessions:
• Node
• Session identifier
• NDMP version
• Session authorized
• Data state
• Data operation
• Mover state
• Mover mode
• Mover position
• Effective host
• SCSI device ID
• SCSI hostadapter
• SCSI target ID
• SCSI LUN ID
• Tape device
• Tape mode
• Data Path
Examples
The following example displays diagnostic information about all the sessions in the cluster:
Node: cluster1-01
Session identifier: 4952
NDMP version: 4
Session authorized: true
Data state: IDLE
Data operation: NOACTION
Data server halt reason: NA
Data server connect type: LOCAL
....
...
Node: cluster1-02
Session identifier: 5289
NDMP version: 4
Session authorized: true
Data state: IDLE
Data operation: NOACTION
Data server halt reason: NA
Data server connect type: LOCAL
....
...
The following example displays diagnostic information of sessions running on the node cluster1-01 only:
Node: cluster1-01
Session identifier: 4952
NDMP version: 4
Session authorized: true
Data state: IDLE
Data operation: NOACTION
Data server halt reason: NA
Data server connect type: LOCAL
....
...
Related references
system services ndmp status on page 1408
Description
Note: This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed in a future
release of Data ONTAP. Use the Vserver-aware "vserver services ndmp show" command.
The system services ndmp show command displays the following information about the NDMP configuration across all
the nodes in the cluster:
• Node name
• Whether sending the NDMP password in clear text is enabled on the node
• NDMP user ID
A combination of parameters can be optionally supplied to filter the results based on specific criteria.
Parameters
{ [-fields <fieldname>, ...]
If this parameter is specified, the command displays only the fields that you specify.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all entries.
[-node {<nodename>|local}] - Node
Selects information about the specified node.
[-enable {true|false}] - NDMP Service Enabled
Selects information about the nodes where NDMP is enabled/disabled.
[-clear-text {true|false}] - Allow Clear Text Password
Selects information about the nodes whose clear-text setting matches the specified value.
[-user-id <text>] - NDMP User ID
Selects information about the nodes that have the specified NDMP user ID.
Examples
The following example displays information about the NDMP configuration of all nodes in the cluster:
Related references
vserver services ndmp show on page 2236
Description
The system services ndmp status command lists all the NDMP sessions in the cluster. By default it lists the following
details about the active sessions:
• Node
• Session ID
A combination of parameters can optionally be supplied to list only those sessions that match specific conditions. A
short description of each parameter is provided in the Parameters section.
Parameters
{ [-fields <fieldname>, ...]
This optional parameter specifies which additional fields to display. Any combination of the following
fields is valid:
• ndmp-version
• session-authorized
• data-state
• data-operation
• data-halt-reason
• data-con-addr-type
• data-con-addr
• data-con-port
• data-bytes-processed
• mover-state
• mover-mode
• mover-pause-reason
• mover-halt-reason
• mover-record-size
• mover-record-num
• mover-bytes-moved
• mover-seek-position
• mover-bytes-left-to-read
• mover-window-offset
• mover-window-length
• mover-position
• mover-setwindow-flag
• mover-con-addr-type
• mover-con-addr
• mover-con-port
• eff-host
• client-addr
• client-port
• spt-device-id
• spt-ha
• spt-scsi-id
• spt-scsi-lun
• tape-device
• tape-modes
• is-secure-control-connection
• data-backup-mode
• data-path
• source-addr
| [-instance ]}
If this parameter is specified, the command displays detailed information about all the active sessions.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays information about the sessions running on the specified
node only. Node should be a valid node name.
[-session-id <integer>] - Session Identifier
If this parameter is specified, the command displays information about a specific NDMP session. A session-id is
a number that identifies a particular NDMP session.
[-ndmp-version <integer>] - NDMP Version
This parameter refers to the NDMP protocol version being used in the session.
[-session-authorized {true|false}] - Session Authorized
This field indicates whether an NDMP session is authenticated or not.
[-data-state <component state>] - Data State
This field identifies the current state of the data server's state machine.
[-data-operation <data operation>] - Data Operation
This field identifies the data server's current operation.
[-data-halt-reason <halt reason>] - Data Server Halt Reason
This field identifies the event that caused the data server state machine to enter the HALTED state.
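The query behavior of the parameters above can be sketched as a simple conjunctive filter: every parameter supplied narrows the session list. The field names mirror the parameters documented here; the sample session data is hypothetical.

```python
# Hypothetical sketch of "system services ndmp status" filtering:
# each supplied parameter (-node, -session-id, ...) narrows the list.
sessions = [
    {"node": "cluster1-01", "session-id": 4952, "ndmp-version": 4},
    {"node": "cluster1-02", "session-id": 5289, "ndmp-version": 4},
]

def status(sessions: list, **filters) -> list:
    """Return only the sessions whose fields match every supplied filter."""
    return [s for s in sessions
            if all(s.get(key.replace("_", "-")) == value
                   for key, value in filters.items())]
```

With no filters, all sessions are listed; `status(sessions, node="cluster1-01")` lists only that node's sessions.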
Examples
The following example displays all the NDMP sessions on the cluster:
The following example shows how to display only the sessions running on node-01:
Description
Note: This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed in a future
release of Data ONTAP. Use the Vserver-aware "vserver services ndmp log start" command.
This command is used to start logging on an active NDMP session on a node. You can start logging two different kinds of
sessions. The NDMP server session manages all NDMP tasks on the node. If you want to log information regarding the
NDMP server, use server with the -session-id parameter to enable logging. If you want to log information about a
particular NDMP session, for example a restore operation, then determine the session ID for the session using the "system
services ndmp status" command and use that ID with the -session-id parameter to enable logging.
Parameters
-node {<nodename>|local} - Node
This parameter specifies the node.
-session-id {<integer>|server} - Session Identifier
This parameter specifies the NDMP session-id on which logging needs to be started. The session-id is
associated with a unique NDMP session. Specify server to start logging on the NDMP server session.
-filter <text> - Level Filter
Use this parameter to specify the filter for a particular session ID. This parameter controls the NDMP modules
for which logging is to be enabled. This parameter can take one of five values: all, none,
normal, backend, or "filter-expression". The default value is none.
• normal is a shortcut that enables logging for all modules except verbose and io_loop. The
equivalent filter string is all-verbose-io_loop.
• backend is a shortcut that enables logging for all modules except verbose, io_loop, ndmps
and ndmpp. The equivalent filter string is all-verbose-io_loop-ndmps-ndmpp.
◦ - removes the given module from the list of modules specified in the filter string. For example, the
filter all-ndmpp enables logging for all modules except ndmpp.
◦ ^ adds the given module or modules to the list of modules specified in the filter string. For example,
the filter ndmpp^mover^data enables logging for ndmpp, mover, and data.
+---------------------------------------------------+
| Modules | Description |
+---------------------------------------------------+
| verbose | verbose message |
| io | I/O process loop |
| io_loop | I/O process loop verbose messages |
| ndmps | NDMP service |
| ndmpp | NDMP Protocol |
| rpc | General RPC service |
| fdc_rpc | RPC to FC driver service |
| auth | Authentication |
| mover | NDMP MOVER (tape I/O) |
| data | NDMP DATA (backup/restore) |
| scsi | NDMP SCSI (robot/tape ops) |
| bkup_rpc | RPC to Backup service client |
| bkup_rpc_s | RPC to Backup service server |
| cleaner | Backup/Mover session cleaner |
| conf | Debug configure/reconfigure |
| dblade | Dblade specific messages |
| timer | NDMP server timeout messages |
| vldb | VLDB service |
| smf | SMF Gateway messages |
| vol | VOL OPS service |
| sv | SnapVault NDMP extension |
| common | NDMP common state |
| ext | NDMP extensions messages |
| sm | SnapMirror NDMP extension |
| ndmprpc | NDMP Mhost RPC server |
+---------------------------------------------------+
Examples
The following example shows how to start logging on a specific NDMP session 33522, running on the node cluster1-01
with filter normal.
cluster1::*> system services ndmp log start -node cluster1-01 -session-id 33522 -filter normal
The following example shows how to start logging on the NDMP server session, on the node cluster1-01 with filter all.
cluster1::*> system services ndmp log start -session-id server -filter all -node cluster1-01
Related references
vserver services ndmp log start on page 2248
Description
Note: This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed in a future
release of Data ONTAP. Use the Vserver-aware "vserver services ndmp log stop" command.
This command is used to stop logging on an active NDMP session on a node. The NDMP server session manages all NDMP
tasks on the node. If you want to stop logging information regarding the NDMP server, use server with the -session-id
parameter to disable logging. If you want to stop logging information about a particular NDMP session, for example a restore
operation, then determine the session ID for the session using the "system services ndmp status" command and use that ID with
the -session-id parameter to disable logging.
Parameters
-node {<nodename>|local} - Node
This parameter specifies the node.
-session-id {<integer>|server} - Session Identifier
This parameter specifies the NDMP session-id on which logging needs to be stopped. The session-id is
associated with a unique NDMP session. Specify server to stop logging on the NDMP server session.
Examples
The following example shows how to stop logging on a specific NDMP session 35512, running on node cluster1-01.
cluster1::*> system services ndmp log stop -session-id 35512 -node cluster1-01
The following example shows how to stop logging on the NDMP server session, running on node cluster1-01.
cluster1::*> system services ndmp log stop -session-id server -node cluster1-01
Related references
vserver services ndmp log stop on page 2249
Related references
vserver services ndmp on page 2224
Examples
The following example shows how to disable the node-scope-mode of the NDMP server.
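The example transcript itself is not shown here; assuming the "system services ndmp node-scope-mode" command family this section belongs to, the invocation would resemble:

```
cluster1::> system services ndmp node-scope-mode off
```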
Related references
vserver services ndmp on page 2224
Description
Note: This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed in a future
release of Data ONTAP. Use the Vserver-aware "vserver services ndmp" command.
This command puts the NDMP server in node-scope-mode. In node-scope-mode, the NDMP server has the following
behavior:
• NDMP authentication falls back to the Data ONTAP 8.1 NDMP authentication scheme
Examples
The following example enables the node-scope-mode of operation:
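The example transcript itself is not shown here; assuming the "system services ndmp node-scope-mode" command family this section belongs to, the invocation would resemble:

```
cluster1::> system services ndmp node-scope-mode on
```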
Related references
vserver services ndmp on page 2224
Examples
The following example shows how to check the status of the NDMP server in a cluster.
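The example transcript itself is not shown here; assuming the "system services ndmp node-scope-mode" command family this section belongs to, the invocation would resemble:

```
cluster1::> system services ndmp node-scope-mode status
```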
Related references
vserver services ndmp on page 2224
Description
The system services ndmp service modify command allows you to modify the NDMP service configurations for a
node in the cluster. The following configuration can be modified:
Parameters
-node {<nodename>|local} - Node
This specifies the node whose NDMP configuration is to be modified.
[-common-sessions <integer>] - NDMP Common Sessions
This optional parameter specifies the number of extra common NDMP sessions supported, in addition to the
number of backup and restore sessions supported for a platform. The default value is 4 for all platforms. The
number of backup and restore sessions is platform dependent.
Caution:
Examples
The following example modifies the NDMP configuration on a node named node1. The configuration sets the NDMP
Common Sessions to 16:
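The example transcript itself is not shown here; based on the node name and value stated above, the invocation would resemble:

```
cluster1::> system services ndmp service modify -node node1 -common-sessions 16
```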
Description
The system services ndmp service show command displays the following information about the NDMP service
configuration across all the nodes in the cluster:
• Node name
A combination of parameters can be optionally supplied to filter the results based on specific criteria.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information about the specified node.
[-common-sessions <integer>] - NDMP Common Sessions
Selects information about the nodes that have the specified number of NDMP common sessions.
Examples
The following example displays information about the NDMP configuration of all nodes in the cluster:
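The example transcript itself is not shown here; the invocation described above would resemble:

```
cluster1::> system services ndmp service show
```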
Description
The system services ndmp service start command starts the NDMP service daemon for a node. This is different from
the system services ndmp on command. The system services ndmp on command enables the daemon to accept
NDMP requests. The NDMP service daemon starts automatically on a node when it boots up. Use this command to start the
NDMP service daemon that has been stopped by the system services ndmp service stop command.
Examples
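The example transcript is not shown here; a minimal invocation, assuming a node named node1 (hypothetical), would resemble:

```
cluster1::> system services ndmp service start -node node1
```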
Related references
system services ndmp on on page 1401
system services ndmp service stop on page 1418
Description
The system services ndmp service stop command stops the NDMP service daemon on a node. This is a disruptive
command and should not be used in normal scenarios. Processing of active sessions continues but the ability to view or kill
sessions is lost. This is different from the system services ndmp off command. The system services ndmp off
command disables new NDMP connections on the node but does not stop the NDMP service daemon.
Parameters
-node {<nodename>|local} - Node
The node on which the NDMP service needs to be stopped.
Examples
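The example transcript is not shown here; a minimal invocation, assuming a node named node1 (hypothetical), would resemble:

```
cluster1::> system services ndmp service stop -node node1
```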
Related references
system services ndmp off on page 1401
system services ndmp service start on page 1417
Description
The system services ndmp service terminate command terminates all active sessions on the node. This command
forcefully terminates all NDMP sessions without an opportunity for a graceful shutdown. Use system services ndmp
kill-all for a clean termination of all active sessions on a node.
Parameters
-node {<nodename>|local} - Node
The node on which the NDMP sessions need to be terminated.
Related references
system services ndmp kill-all on page 1400
Description
This command modifies the overall availability of web services in the cluster, including the core protocol configurations for
those services. In a pre-root or unclustered scenario, its scope applies to the local node.
Parameters
[-external {true|false}] - External Web Services
Defines whether remote clients can access HTTP or HTTPS service content. Along with the system
services firewall configuration, this parameter controls the visibility for client connections. The default
value for this parameter after installation is 'true', which exports web protocols for remote access. If no value is
provided during modification, its behavior does not change.
[-per-address-limit <integer>] - Per Address Limit (privilege: advanced)
Limits the number of connections that can be processed concurrently from the same remote address. If more
connections are accepted, those in excess of the limit are delayed and processed after the number of
connections being processed drops below the limit. The default value is 96.
[-http-enabled {true|false}] - HTTP Enabled (privilege: advanced)
Defines whether HTTP is enabled. The default value for this parameter is false.
Examples
The following command changes the maximum size of the wait queue:
Related references
system services firewall on page 1389
The External Web Services field indicates whether remote clients are allowed to access the HTTP or HTTPS service
content. Along with the system services firewall configuration, the External Web Services field indicates the
visibility for client connections.
The Status field describes the aggregated operational state of cluster-level web services as retrieved from the system
services web node command. The Status field does not reflect whether the protocols are externally visible, but whether
the server processes are running correctly. For detailed information about individual servers, use the system services web
node show command. The following are the possible values for the Status field:
• online, all web services are consistently configured and working correctly.
• partial, one or more nodes' web services are unavailable due to an error condition.
• mixed, the nodes in the cluster do not share the same web services configuration. This situation might occur if individual
nodes were reconfigured with the system services web node command.
• offline, all of the nodes' web services are unavailable due to an error condition.
Examples
The following example displays the availability of web services for the cluster.
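The example transcript itself is not shown here; the invocation described above would resemble:

```
cluster1::> system services web show
```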
Related references
system services firewall on page 1389
system services web node on page 1420
system services web node show on page 1420
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the nodes that match this parameter value. Identifies the node where the web server process is being
executed.
[-external {true|false}] - External Web Services
Selects the nodes that match this parameter value. Defines whether remote clients can access the HTTP or
HTTPS service content. Along with the system services firewall command configuration, this
parameter controls the visibility for client connections. The default value for this parameter after installation is
true, which exports web protocols for remote access.
[-http-port <integer>] - HTTP Port
Selects the nodes that match this parameter value. Defines the HTTP port for the node-level web services.
[-https-port <integer>] - HTTPS Port
Selects the nodes that match this parameter value. Defines the encrypted HTTP (HTTPS) port for the node-
level web services.
[-http-enabled {true|false}] - HTTP Enabled
Selects the nodes that match this parameter value. Defines whether HTTP is enabled.
[-per-address-limit <integer>] - Per Address Limit (privilege: advanced)
Selects the nodes that match this parameter value. Limits the number of connections that can be processed
concurrently from the same remote address. If more connections are accepted, those in excess of the limit are
delayed and processed after the number of connections being processed drops below the limit.
[-status {offline|partial|mixed|online|unclustered}] - Protocol Status
Selects the nodes that match this parameter value. Describes the operational state of node-level web services.
This parameter does not reflect whether protocols are externally visible, but whether the server processes are
running correctly. The following are the possible values that describe the service availability:
• offline, indicates that web services are unavailable due to an error condition.
• unclustered, indicates that the current node is not part of an active cluster.
Examples
The following example displays the status of web servers for nodes in the cluster.
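The example transcript itself is not shown here; the invocation described above would resemble:

```
cluster1::> system services web node show
```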
Related references
system services firewall on page 1389
system services web show on page 1419
SMTape Commands
Manage SMTape operations
smtape commands description
Description
This command aborts the backup or restore operations based on the session identifier. You can perform SMTape operations
using the system smtape backup or system smtape restore commands. A unique session identifier is assigned for each
new SMTape operation. This command aborts sessions that are in active and waiting states.
Parameters
-session <Sequence Number> - Session Identifier
Use this parameter to specify the session identifier for a backup or restore session.
Examples
Abort the SMTape session with the session identifier 20
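The example transcript itself is not shown here; based on the session identifier stated above, the invocation would resemble:

```
cluster1::> system smtape abort -session 20
```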
Related references
system smtape backup on page 1422
system smtape restore on page 1425
Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver name on which the volume is located. You need not specify this
parameter if only one cluster Vserver exists.
-volume <volume name> - Volume Name
Use this parameter to specify the name of the volume that needs to be backed up to tape.
-backup-snapshot <snapshot name> - Snapshot Name
Use this parameter to specify the name of the Snapshot copy while performing an SMTape backup operation.
-tape </node_name/tape_device> - Tape Name
Use this parameter to specify the name of the tape device which is used for this SMTape operation. The format
of the tape device name is /node_name/tape_device, where node_name is the name of the cluster node
owning the tape and tape_device is the name of the tape device.
[-tape-block-size <integer>] - Tape Record Size in KB
Use this parameter to specify the tape record size in KB for backup and restore operations. The tape record
size is in multiples of 4KB, ranging from 4KB to 256KB. If not specified, the tape record size defaults to
240KB.
Examples
The following example will start the backup of a volume datavol in a Vserver vserver0 to a tape rst0a. Both the
volume and tape reside on the same node cluster1-01. The Snapshot copy to be backed up is datavol_snapshot and
the tape record size has the value of 256KB.
The following example will start the backup of a volume datavol in a Vserver vserver0 to a tape rst0a. The volume
datavol is in a Vserver vserver0. Both the volume and tape reside on the same node cluster1-01. The Snapshot
copy to be backed up is datavol_snapshot and the tape record size has the default value of 240KB.
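The example transcripts themselves are not shown here; based on the names and values stated above, the two invocations would resemble (the second relies on the 240KB default record size):

```
cluster1::> system smtape backup -vserver vserver0 -volume datavol -backup-snapshot datavol_snapshot -tape /cluster1-01/rst0a -tape-block-size 256
cluster1::> system smtape backup -vserver vserver0 -volume datavol -backup-snapshot datavol_snapshot -tape /cluster1-01/rst0a
```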
Related references
system smtape status on page 1428
system smtape restore on page 1425
Description
This command breaks the relationship between the tape backup of a volume and a restored volume, changing the restored
volume from read-only to read/write.
Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver name on which the volume is located. You need not specify this
parameter if only one cluster Vserver exists.
-volume <volume name> - Volume Name
Use this parameter to specify the name of the read-only volume that needs to be made read/writeable after a
restore.
Examples
Make the read-only volume datavol on Vserver vserver0 writeable after a restore.
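The example transcript itself is not shown here; based on the names stated above, the invocation would resemble:

```
cluster1::> system smtape break -vserver vserver0 -volume datavol
```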
Related references
system smtape backup on page 1422
system smtape restore on page 1425
system node hardware tape drive show on page 1336
Description
This command continues the SMTape backup and restore operations using the specified tape device. You can use this command
when an SMTape operation has reached the end of current tape and is in the wait state to write to or read from a new tape.
If a tape device is not specified, the original tape device will be used.
You must make sure that the correct tape media is inserted in the device and positioned appropriately before issuing this
command.
Examples
Continues an SMTape session having session ID 20 on tape device rst0a on the node node1 in the cluster.
The following example continues session 40 on the same tape device that was being used by the session.
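The example transcripts themselves are not shown here; based on the session identifiers and tape device stated above, the two invocations would resemble:

```
cluster1::> system smtape continue -session 20 -tape /node1/rst0a
cluster1::> system smtape continue -session 40
```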
Description
This command restores a backup image, created using the system smtape backup command, from the specified tape
device to a destination volume. A new unique session ID is assigned for this operation; the status of the session can be
monitored using the system smtape status command. The volume and tape device must reside on the same
cluster node. The volume must be of type DP (Data Protection) and must be placed in restricted mode prior to a restore.
Any existing data on the volume is overwritten by the restore. The volume remains read-only and of type DP after
the restore. Use the system smtape break command to get read/write permissions on the volume. A restore can be
done to a non-root DP volume.
Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver name on which the volume is located. You need not specify this
parameter if only one cluster Vserver exists.
-volume <volume name> - Volume Name
Use this parameter to specify the volume name on which the tape content will be restored.
-tape </node_name/tape_device> - Tape Name
Use this parameter to specify the name of the tape device which is used for this SMTape operation. The format
of the tape device name is /node_name/tape_device, where node_name is the name of the cluster node
owning the tape and tape_device is the name of the tape device.
[-tape-block-size <integer>] - Tape Record Size in KB
Use this parameter to specify the tape record size in KB for backup and restore operations. The tape record
size is in multiples of 4KB, ranging from 4KB to 256KB. If not specified, the tape record size defaults to
240KB. Use the same record size that was used during the backup; if the record size differs from
the one used at the time of backup, system smtape restore will fail.
The following example will start the restore to a volume datavol from a tape rst0a. The volume datavol is in a
Vserver vserver0. Both vserver0 and rst0a reside on the same node cluster1-01. The default tape record size of
240KB was used during backup.
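The example transcript itself is not shown here; based on the names stated above, the invocation would resemble (the 240KB default record size needs no explicit parameter):

```
cluster1::> system smtape restore -vserver vserver0 -volume datavol -tape /cluster1-01/rst0a
```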
Related references
system smtape backup on page 1422
system smtape status on page 1428
system smtape break on page 1424
system smtape status show on page 1429
system smtape continue on page 1424
system node hardware tape drive show on page 1336
Description
This command displays the image header of a tape. The tape must have a valid backup of data. The following information about
the backup is displayed:
• Tape Number - the tape number if the backup spans multiple tape devices.
• WAFL Version - WAFL version of the storage system when the volume was backed up on tape.
• Source Storage System - the source storage system where the volume resided when the backup was performed.
• Source Volume Capacity - the capacity of the source volume that was backed up to tape.
• Source Volume Used Size - the used size of the source volume that was backed up to tape.
• Source Snapshot - name of the Snapshot copy used for the backup.
• Is SIS Volume - this field is true if the backed up volume was a SIS volume.
• Time of Previous Backup - the time at which the previous backup was performed; this information is displayed only if the
previous backup was an incremental backup.
• Snapshot Time - time at which the backup Snapshot copy was created.
• Snapshot Name - name of the Snapshot copy which was backed up to tape.
Parameters
-tape </node_name/tape_device> - Tape Name
Use this parameter to specify the name of the tape device which is used for this SMTape operation. The format
of the tape device name is /node_name/tape_device, where node_name is the name of the cluster node
owning the tape and tape_device is the name of the tape device.
[-tape-block-size <integer>] - Tape Record Size in KB
Use this parameter to specify the tape record size in KB for backup and restore operations. The tape record
size is in multiples of 4KB, ranging from 4KB to 256KB. If not specified, the tape record size defaults to
240KB.
Examples
The following example reads the image header from the tape nrst0l residing on the node cluster1-01 and displays
relevant tape header information.
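The example transcript itself is not shown here; based on the tape device and node stated above, the invocation would resemble:

```
cluster1::> system smtape showheader -tape /cluster1-01/nrst0l
```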
Description
This command clears SMTape sessions that are in the completed, failed, or unknown state.
Parameters
[-session <Sequence Number>] - Session Identifier
Use this parameter to clear the SMTape sessions with the specified session identifier.
[-node {<nodename>|local}] - Node Name
Use this parameter to clear the SMTape sessions related to the specified node.
[-type {backup|restore}] - Operation Type
Use this parameter to clear the SMTape sessions of the specified operation type. These can be either backup or
restore sessions.
[-status {COMPLETED|FAILED|UNKNOWN}] - Session Status
Use this parameter to clear the SMTape sessions which have the status as specified in the parameter.
[-path <text>] - Path Name
Use this parameter to clear the SMTape sessions which have path as specified in the parameter.
[-device <text>] - Device Name
Use this parameter to clear the SMTape sessions on a specific tape device.
[-backup-snapshot <snapshot name>] - Snapshot Name
Use this parameter to clear the SMTape sessions using the Snapshot copy name as specified in the parameter.
[-tape-block-size <integer>] - Tape Block Size
Use this parameter to clear the SMTape sessions with the tape block size as specified in the parameter.
Examples
The following example clears all the completed SMTape sessions in the cluster:
The following example clears the SMTape sessions on the node node1 in the cluster:
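The example transcripts themselves are not shown here; based on the filters described above, the two invocations would resemble:

```
cluster1::> system smtape status clear -status COMPLETED
cluster1::> system smtape status clear -node node1
```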
Description
This command lists the status of all SMTape sessions in the cluster. By default, this command lists the following information:
• Session
• Type
• Status
• Progress
• Path
• Device
• Node
Parameters
{ [-fields <fieldname>, ...]
Use this parameter to display additional fields about each session apart from the default entries. This
parameter is optional. Any combination of the following fields is valid:
• Session
• Node
• Type
• Status
• Path
• Device
• Progress
• Start-time
• End-time
• Update-time
• Backup-snapshot
• Tape-block-size
• Error
| [-instance ]}
Displays detailed information about the specified sessions.
[-session <Sequence Number>] - Session Identifier
Selects information about a specific SMTape session. A Session Identifier is a number that is used to identify a
particular SMTape session.
Examples
The following example displays the default entries for five SMTape sessions.
The following example shows the output with the -instance argument.
Session Identifier: 1
Node Name: node1
Operation Type: Backup
Status: COMPLETED
Description
Use this command to either enable or disable the standard SNMP authentication failure traps.
Parameters
[-authtrap <integer>] - Enables SNMP Authentication Trap
Enter the value of 1 to enable SNMP authentication failure traps. By default, the SNMP authentication trap is
disabled and the value is 0.
Examples
The following example demonstrates how to set the SNMP authtrap.
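The example transcript itself is not shown here; based on the parameter described above, the invocation would resemble:

```
cluster1::> system snmp authtrap -authtrap 1
```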
Parameters
[-contact <text>] - Contact
Specifies the contact name. Without any value specified, this command displays current setting of contact
name.
Examples
The following example sets the contact name for SNMP.
Description
The system snmp enable-snmpv3 command enables SNMPv3 server on the entire cluster. When this command is run,
SNMP users and SNMP traphosts that are non-compliant to FIPS will be deleted automatically, since cluster FIPS mode is
enabled. Any SNMPv1 user, SNMPv2c user or SNMPv3 user (with none or MD5 as authentication protocol or none or DES as
encryption protocol or both) is non-compliant to FIPS. Any SNMPv1 traphost or SNMPv3 traphost (configured with an
SNMPv3 user non-compliant to FIPS) is non-compliant to FIPS.
Examples
The following command enables SNMPv3 server on the entire cluster, within a cluster named cluster1:
Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
Warning: If you enable SNMPv3 using this command, any SNMP users and
SNMP traphosts that are non-compliant to FIPS will be
deleted automatically, since cluster FIPS mode is enabled.
Any SNMPv1 user, SNMPv2c user or SNMPv3 user (with none or
MD5 as authentication protocol or none or DES as encryption
protocol or both) is non-compliant to FIPS. Any SNMPv1
traphost or SNMPv3 traphost (configured with an SNMPv3 user
non-compliant to FIPS) is non-compliant to FIPS.
Do you want to continue? {y|n}: y
cluster1::*>
Description
Initializes or disables sending of traps by the SNMP daemon from the cluster.
Parameters
[-init <integer>] - Initialize Traps
Use the value of 1 to initialize SNMP daemon to send traps or use a value of 0 to stop sending traps from the
cluster. If no value is specified, this command displays the current setting of init. Traps are enabled by default.
Examples
The following command initializes SNMP daemon to send traps.
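The example transcript itself is not shown here; based on the parameter described above, the invocation would resemble:

```
cluster1::> system snmp init -init 1
```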
Description
Sets the location name as the System.sysLocation.0 MIB-II variable.
Parameters
[-location <text>] - Location
Specifies the location details. If no value is specified, this command displays the current setting of location.
Examples
This command sets the location name.
Description
The system snmp prepare-to-downgrade command prepares the SNMP subsystem for a downgrade or a revert. More
specifically, it prepares the SNMPv3 client feature for a downgrade or a revert. It deletes all storage switches that were explicitly
added for monitoring and are using SNMPv3 as the underlying protocol. It also deletes any cluster switches that are using
SNMPv3 for monitoring. Finally, it deletes any remote switch SNMPv3 users configured in ONTAP.
Examples
The following command prepares the SNMP subsystem for a downgrade or a revert, within a cluster named cluster1:
Description
Lists the current values of all the SNMP parameters.
Examples
The example below shows a typical command display.
Description
The system snmp community add command adds communities with the specified access control type. Only read-only
communities are supported. There is no limit on the number of communities supported.
Parameters
-vserver <Vserver Name> - Vserver
This parameter specifies the Vserver to which the community will be added. If no Vserver is specified, the
community is added to the admin Vserver.
-community-name <text> - Community
This parameter specifies the name of the community.
-type <ctype> - access type
This parameter specifies 'ro' for read-only community.
Examples
The following example adds the read-only community name 'private'.
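The example transcript itself is not shown here; based on the community name and access type stated above, the invocation would resemble:

```
cluster1::> system snmp community add -community-name private -type ro
```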
Description
The system snmp community delete command deletes communities with the specified access control type. Only read-only
communities are supported.
Parameters
-vserver <Vserver Name> - Vserver
This parameter specifies the Vserver from which you wish to delete the community. If no Vserver is specified,
the community is deleted from the admin Vserver.
Examples
The following example deletes the read-only community 'private':
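The example transcript itself is not shown here; based on the community name stated above, the invocation would resemble:

```
cluster1::> system snmp community delete -community-name private -type ro
```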
Description
Displays the current list of SNMP communities.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver
Selects the Vserver to which the SNMP community belongs.
[-community-name <text>] - Community
Selects the SNMP v1/v2c community string.
[-access <ctype>] - access
Selects the access type of the SNMP v1/v2c community. Read-only (ro) is the only access type supported.
Examples
cluster1::> system snmp community show
cluster1
   ro   private
Description
Adds the SNMP manager that receives the SNMP trap PDUs. The SNMP manager can be a hostname or IP address. There is no
limit on the number of traphosts supported.
Parameters
-peer-address <Remote InetAddress> - Remote IP Address
Specifies the IP address or hostname of the traphost. If the USM user is associated, then the SNMPv3 traps are
generated for this traphost using the associated USM user's authentication and privacy credentials. If no USM
user is associated, then the SNMP v1/v2c traps are generated for this traphost. For the SNMP v1/v2c traps, the
default community string is 'public', when no community is defined. When the community strings are defined,
then the first community string is chosen for the SNMP v1/v2c traps.
[-usm-username <text>] - USM User Name
Specifies a predefined SNMPv3 USM user. The SNMPv3 traps are generated using this USM user's
authentication and privacy credentials for the traphost identified by the peer-address parameter.
Examples
In the following example, the command adds a hostname 'yyy.example.com' for the SNMPv3 traps:
In the following example, the command adds a hostname 'xxx.example.com' for the SNMP v1/v2c traps:
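The example transcripts themselves are not shown here; based on the hostnames stated above, the two invocations would resemble the following (the USM user name is a placeholder, not from the original):

```
cluster1::> system snmp traphost add -peer-address yyy.example.com -usm-username SNMPv3User
cluster1::> system snmp traphost add -peer-address xxx.example.com
```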
Description
Deletes the SNMP manager that receives the SNMP trap PDUs. The SNMP manager can be a hostname or IP address. There is
no limit on the number of traphosts supported.
Parameters
-peer-address <Remote InetAddress> - Remote IP Address
Specifies the IP address or hostname of the traphost. If the USM user is associated, then specify the USM user
to delete the traphost.
Examples
In the following example, the command deletes the SNMPv3 traphost 'yyy.example.com' associated with the USM user:
In the following example, the command deletes the SNMP v1/v2c traphost 'xxx.example.com' associated with a
community string:
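The example transcripts themselves are not shown here; based on the hostnames stated above, the two invocations would resemble the following (the USM user name is a placeholder, not from the original):

```
cluster1::> system snmp traphost delete -peer-address yyy.example.com -usm-username SNMPv3User
cluster1::> system snmp traphost delete -peer-address xxx.example.com
```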
Description
Displays the list of SNMP v1/v2c and SNMPv3 managers that receive trap PDUs.
Examples
In the following example, the command displays all the host names or IP addresses that have been added:
Description
The system status show command displays information about the status of objects in Data ONTAP. You can limit output to
specific types of information and specific status in Data ONTAP, or filter output by specific field values.
To see a list of values that are in use for a particular field, use the -fields parameter of this command with the list of field
names you wish to view.
Examples
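A representative invocation (sketch; output omitted):

cluster1::> system status show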
Description
The system timeout modify command sets the timeout value for CLI sessions. If there is no CLI activity during the length
of the timeout interval, the logged in user is logged out. The default value is 30 minutes. To prevent CLI sessions from timing
out, specify a value of 0 (zero).
Parameters
[-timeout <integer>] - Timeout (in minutes)
Use this parameter to specify the timeout value, in minutes.
Examples
The following example shows how to modify the timeout value for CLI sessions to be 10 minutes:
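A representative command line, sketched from the parameters above:

cluster1::> system timeout modify -timeout 10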
The following example shows how to prevent CLI sessions from timing out:
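A representative command line, sketched from the parameters above:

cluster1::> system timeout modify -timeout 0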
Examples
The following example displays the timeout value for CLI sessions:
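A representative invocation (sketch; output omitted):

cluster1::> system timeout show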
Template Commands
The Templates directory
The template commands enable you to manage Templates and their parameters.
template copy
Copy a template
Availability: This command is available to cluster administrators at the admin privilege level.
Description
Use this command to copy an existing template. The copied template becomes a readwrite template and can be customized
using the template parameter family of commands.
Parameters
-name <template name> - Name of the template
This parameter specifies the name of the template.
-destination-name <template name> - Destination template
This parameter specifies the name of the destination template.
Examples
The following example copies template1 to template2. template2 will be a readwrite template:
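A representative command line, sketched from the parameters above:

cluster1::> template copy -name template1 -destination-name template2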
template delete
Delete an existing template
Availability: This command is available to cluster administrators at the admin privilege level.
Description
Use this command to delete an existing template.
Examples
The following example deletes a template named template1 from the cluster:
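A representative command line, sketched from the description above:

cluster1::> template delete -name template1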
template download
Download a template
Availability: This command is available to cluster administrators at the advanced privilege level.
Description
Use this command to download a template from an external server to the cluster.
Parameters
-uri {(ftp|http)://(hostname|IPv4 Address|'['IPv6 Address']')...} - URI of the template
This parameter specifies the URI from which the template will be downloaded.
[-name <template name>] - Name of the template
This parameter specifies the name that will be assigned to the template in the cluster.
Examples
The following example downloads the template specified in the -uri parameter value and names the template as template1:
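A representative command line, sketched from the parameters above (the server name and template file path are hypothetical):

cluster1::> template download -uri ftp://ftp.example.com/templates/vserver_setup.xml -name template1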
The following example downloads the template specified in the -uri parameter value:
template provision
Provision Data ONTAP resources using the template
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The template provision command provisions the ONTAP system based on the template that is passed as input through the
-name parameter. A wizard is presented that accepts the required inputs.
Parameters
-name <template name> - Name of the template
This parameter specifies the name of the template.
Examples
The following example provisions a Vserver with the required protocols using a template.
******************************
* Setup of network.interface *
******************************
Enter number of instances for object network.interface: 2
(1/2)LIF Protocol: nfs
(1/2)IP Addr: 1.1.1.1
(1/2)NetMask: 255.255.255.0
(1/2)Node Name: node1-vsim1
(1/2)Port: e0c
(2/2)LIF Protocol: nfs
(2/2)IP Addr: 1.1.1.1
(2/2)NetMask: 255.255.255.0
(2/2)Node Name: node1-vsim1
(2/2)Port: e0c
***************************
* Setup of network.routes *
***************************
Enter number of instances for object network.routes: 1
(1/1)Gateway: 1.1.1.1
***********************
* Setup of access.dns *
***********************
Search Domain: netapp.com
DNS IP Addresses List: 1.1.1.1
*************************
* Setup of security.nis *
*************************
NIS Domains: netapp.com
NIS IP Address: 1.1.1.1
*********************
* Setup of security *
*********************
LDAP Client Config: ldapconfig
LDAP Server IP: 1.1.1.1
LDAP Base DN: dc=examplebasedn
**********************
* Setup of protocols *
**********************
Protocols to configure: nfs
[Job 15] Configuring vserver for vs0 (100%)
Description
Use this command to rename an existing template.
Parameters
-name <template name> - Name of the template
This parameter specifies the name of the template.
-new-name <template name> - New name of the template
This parameter specifies the template's new name.
Examples
The following example renames a template template1 as template2:
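A representative command line, sketched from the parameters above:

cluster1::> template rename -name template1 -new-name template2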
template show
Display templates
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The template show command displays information about templates. The command output depends on the parameter or
parameters specified with the command. If no parameters are specified, then the command displays the following information
about all the templates:
• Template Name
To display detailed information about a single template, run the command with the -name parameter. The detailed view
provides all of the information in the previous list with the following additional information:
• Description
• Version
To display detailed information about all templates, run the command with the -instance parameter.
You can specify additional parameters to display information that matches only those parameters. For example, to display
information only about templates with readonly permissions, run the command with the -permission readonly parameter.
Examples
The following example displays information about all templates in the cluster:
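A representative invocation (sketch; output omitted):

cluster1::> template show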
The following example displays detailed information about a template named template1:
template show-permissions
Display Template Allowed and Disallowed System Objects
Availability: This command is available to cluster administrators at the admin privilege level.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-name <text>] - Name
If you specify this parameter, only permissions that match the specified name are displayed.
[-object-name <text>] - Object Name
If you specify this parameter, only permissions that match the specified object-name are displayed.
[-permission <text>] - Permission
If you specify this parameter, only permissions that match the specified permission are displayed.
[-command-name <text>] - Command Name
If you specify this parameter, only permissions that match the specified command-name are displayed.
Examples
The following example shows all the allowed and disallowed system objects:
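A representative invocation (sketch; output omitted):

cluster1::> template show-permissions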
template upload
Upload an existing template to an external server
Availability: This command is available to cluster administrators at the advanced privilege level.
Description
Use this command to upload an existing template to an external server.
Parameters
-name <template name> - Name of the template
This parameter specifies the name of the template.
-uri {(ftp|http)://(hostname|IPv4 Address|'['IPv6 Address']')...} - URI to upload the template
This parameter specifies the URI to which the template will be uploaded.
Examples
The following example uploads a template named template1 to the external server specified in the -uri parameter:
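A representative command line, sketched from the parameters above (the server name and path are hypothetical):

cluster1::> template upload -name template1 -uri ftp://ftp.example.com/templates/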
Description
The template parameter modify command can be used to modify the following attributes of a template parameter:
Parameters
-template <template name> - Name of the template
Name of the template.
-name <text> - Name of the parameter
This parameter specifies the name of the parameter.
[-default-value <text>] - Default value of the parameter
This parameter specifies the default value of the parameter. This value is used by the template provision
command when it provisions the system using this template.
Examples
The following example modifies the default value of the parameter param1 in template template1 to value1:
cluster1::> template parameter modify -template template1 -name param1 -default-value value1
Related references
template provision on page 1442
Description
The template parameter show command displays information about the parameters of a template. The command output
depends on the parameter or parameters specified with the command. If no parameters are specified, the command displays the
following information about all the parameters of all the templates in the system:
• Readonly
• Description
To display detailed information about a single parameter of the template run the command with the -name parameter. The
detailed view provides all of the information in the previous list with the following additional information:
• Recommended Value for the parameter
• Maximum Length
• Range Maximum
• Range Minimum
• Allowed Values
To display detailed information about all the parameters of the template, run the command with the -instance parameter.
You can specify additional parameters to display information that matches only those parameters. For example, to display
information about all the parameters of the templates with readonly permissions, run the command with the -permission
readonly parameter.
Parameters
{ [-fields <fieldname>, ...]
This parameter specifies the fields that need to be displayed.
| [-instance ]}
If this parameter is specified, the command displays information about all entries.
[-template <template name>] - Name of the template
Name of the template.
[-name <text>] - Name of the parameter
If this parameter is specified, the command displays information about the parameter of all the templates that
match the specified parameter name.
[-permission <text>] - Template permission
If this parameter is specified, the command displays information about the parameter or parameters of all the
templates that match the specified permission.
[-type <text>] - Type of the parameter
If this parameter is specified, the command displays information about the parameter or parameters of all the
templates that match the specified type.
[-description <text>] - Parameter description
If this parameter is specified, the command displays information about the parameter or parameters of all the
templates that match the specified description.
[-recommended-value <text>] - Recommended value of the parameter
If this parameter is specified, the command displays information about the parameter or parameters of all the
templates that match the specified recommended value.
Examples
The following example displays information about all the parameters of all the templates in the cluster:
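A representative invocation (sketch; output omitted):

cluster1::> template parameter show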
Volume Commands
Manage virtual storage, including volumes, snapshots, and mirrors
The volume commands enable you to manage volumes, mirrors, and Snapshot(tm) copies.
volume autosize
Set/Display the autosize settings of the flexible volume.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The volume autosize command allows the user to specify the maximum size that a volume will automatically grow to when
it is out of space or the minimum size that it will shrink to when the amount of used space is below a certain threshold. If only
the volume/Vserver name is specified then the current settings are displayed. This command is not supported on Infinite
Volumes.
• off - The volume will not grow or shrink in size in response to the amount of used space.
• grow - The volume will automatically grow when used space in the volume is above the grow threshold.
• grow_shrink - The volume will grow or shrink in size in response to the amount of used space.
By default, -mode is off for new FlexVol volumes, except for DP mirrors, for which the default value is
grow_shrink. The grow and grow_shrink modes work together with Snapshot autodelete to automatically
reclaim space when a volume is about to become full. The volume parameter -space-mgmt-try-first
controls the order in which these two space reclamation policies are attempted.
| [-reset [true]]} - Autosize Reset
This option allows the user to reset the values of autosize, max-autosize, min-autosize, autosize-grow-
threshold-percent, autosize-shrink-threshold-percent and autosize-mode to their default values based on the
current size of the volume. For example, the max-autosize value will be set to 120% of the current size of the
volume.
Examples
The following example sets the autosize settings on a volume named vol1. The maximum size to grow is 1TB and
autogrow is enabled.
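A representative command line (the Vserver name vs0 and the -maximum-size/-mode option names are assumptions for illustration):

cluster1::> volume autosize -vserver vs0 -volume vol1 -mode grow -maximum-size 1TB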
The following example shows the autosize settings on a volume named vol1. The maximum size to grow is 1TB and
autogrow is enabled.
volume create
Create a new volume
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The volume create command creates a volume on a specified Vserver and storage aggregates. You can optionally specify the
following attributes for the new volume:
• Size
• Export policy
• User ID
• Group ID
• Security style (All volume types: UNIX mode bits, CIFS ACLs, or mixed NFS and CIFS permissions.)
• Language
• Junction path
• Whether the junction path is active (advanced privilege level or higher only)
• Whether the volume is the root volume for its Vserver (advanced privilege level or higher only)
• Comment
• Snapshot policy
• Caching policy
• Encrypt
• Efficiency policy
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the volume is located. If only one data Vserver exists, you do not need to
specify this parameter.
-volume <volume name> - Volume Name
This specifies the name of the volume that is to be created. A volume's name must start with an alphabetic
character (a to z or A to Z) and be 197 or fewer characters in length for FlexGroups, and 203 or fewer
characters in length for all other volume types. Volume names must be unique within a Vserver.
{ -aggregate <aggregate name> - Aggregate Name
This specifies the storage aggregate on which the volume is to be created. This parameter only applies to
FlexVol volumes.
| -aggr-list <aggregate name>, ... - List of Aggregates for FlexGroup Constituents
Specifies an array of names of aggregates to be used for FlexGroup constituents. Each entry in the list will
create a constituent on the specified aggregate. An aggregate may be specified multiple times to have multiple
constituents created on it. This parameter only applies to FlexGroups.
[-aggr-list-multiplier <integer>] - Aggregate List Repeat Count
Specifies the number of times to iterate over the aggregates listed with the -aggr-list parameter when
creating a FlexGroup. The aggregate list will be repeated the specified number of times. For example,
specifying -aggr-list aggr1,aggr2 with -aggr-list-multiplier 2
will cause four constituents to be created in the order aggr1, aggr2, aggr1, aggr2.
The default value is 4.
This parameter only applies to FlexGroups
• off - The volume will not grow or shrink in size in response to the amount of used space.
• grow - The volume will automatically grow when used space in the volume is above the grow threshold.
• grow_shrink - The volume will grow or shrink in size in response to the amount of used space.
By default, -autosize-mode is off for new volumes, except for data protection mirrors, for which the
default value is grow_shrink. The grow and grow_shrink modes work together with Snapshot autodelete
to automatically reclaim space when a volume is about to become full. The volume parameter -space-mgmt-
try-first controls the order in which these two space reclamation policies are attempted.
[-maxdir-size {<integer>[KB|MB|GB|TB|PB]}] - Maximum Directory Size (privilege: advanced)
This optionally specifies the maximum directory size. The default maximum directory size is model-
dependent and optimized for the size of system memory.
{ [-space-slo {none|thick|semi-thick}] - Space SLO
This optionally specifies the Service Level Objective for space management (the space SLO setting) for the
volume. The space SLO value is used to enforce volume settings so that sufficient space is set aside to meet
the space SLO. The default setting is none. There are three supported values: none, thick and semi-thick.
• none: The value of none does not provide any guarantee for overwrites or enforce any restrictions. It
should be used if the admin plans to manually manage space consumption in the volume and aggregate,
and out of space errors.
• thick: The value of thick guarantees that the hole fills and overwrites to space-reserved files in this
volume will always succeed by reserving space. To meet this space SLO, the following volume-level
settings are automatically set and cannot be modified:
◦ Space Guarantee: volume - The entire size of the volume is preallocated in the aggregate. Changing the
volume's space-guarantee type is not supported.
◦ Fractional Reserve: 100 - 100% of the space required for overwrites is reserved. Changing the volume's
fractional-reserve setting is not supported.
• semi-thick: The value of semi-thick is a best-effort attempt to ensure that overwrites succeed by
restricting the use of features that share blocks and auto-deleting backups and Snapshot copies in the
volume. To meet this space SLO, the following volume-level settings are automatically set and cannot be
modified:
◦ Space Guarantee: volume - The entire size of the volume is preallocated in the aggregate. Changing the
volume's space-guarantee type is not supported.
◦ Fractional Reserve: 0 - No space will be reserved for overwrites by default. However, changing the
volume's fractional-reserve setting is supported. Changing the setting to 100 means that 100% of
the space required for overwrites is reserved.
◦ Snapshot Autodelete: enabled - Automatic deletion of Snapshot copies is enabled to reclaim space. To
ensure that the overwrites can be accommodated when the volume reaches threshold capacity, the
following volume Snapshot autodelete parameters are set automatically to the specified values and
cannot be modified:
⁃ enabled: true
⁃ trigger: volume
⁃ defer-delete: none
In addition, with a value of semi-thick, the following technologies are not supported for the volume:
◦ File Clones with autodelete disabled: Only full file clones of files, LUNs or NVMe namespaces that
can be autodeleted can be created in the volume. The use of autodelete for file clone create is
required.
◦ Partial File Clones: Only full file clones of files or LUNs that can be autodeleted can be created in the
volume. The use of range for file clone create is not supported.
◦ Volume Efficiency: Enabling volume efficiency is not supported to allow autodeletion of Snapshot
copies.
• random_read - Read caches all metadata and randomly read user data blocks.
• random_read_write - Read caches all metadata, randomly read and randomly written user data blocks.
• all_read - Read caches all metadata, randomly read and sequentially read user data blocks.
• all_read_random_write - Read caches all metadata, randomly read, sequentially read and randomly written
user data.
• all - Read caches all data blocks read and written. It does not do any write caching.
• noread-random_write - Write caches all randomly overwritten user data blocks. It does not do any read
caching.
• meta-random_write - Read caches all metadata and write caches randomly overwritten user data blocks.
• random_read_write-random_write - Read caches all metadata, randomly read and randomly written user
data blocks. It also write caches randomly overwritten user data blocks.
• all_read-random_write - Read caches all metadata, randomly read and sequentially read user data blocks. It
also write caches randomly overwritten user data blocks.
• all_read_random_write-random_write - Read caches all metadata, randomly read, sequentially read and
randomly written user data. It also write caches randomly overwritten user data blocks.
• all-random_write - Read caches all data blocks read and written. It also write caches randomly overwritten
user data blocks.
Note that in a caching-policy name, a hyphen (-) separates read and write policies. The default caching-policy is
auto.
[-cache-retention-priority {normal|low|high}] - Cache Retention Priority (privilege: advanced)
This optionally specifies the cache retention priority to apply to the volume. A cache retention priority defines
how long the blocks of a volume will be cached in flash pool once they become cold. If a cache retention
• snapshot-only - This policy allows tiering of only the volume Snapshot copies not associated with the
active file system. The default minimum cooling period is 2 days. The -tiering-minimum-cooling-
days parameter can be used to override the default.
• all - This policy allows tiering of both Snapshot copy data and active file system user data to the cloud tier
as soon as possible without waiting for a cooling period. On DP volumes, this policy allows all transferred
user data blocks to start in the cloud tier.
Examples
The following example creates a new volume named user_jdoe on a Vserver named vs0 and a storage aggregate named
aggr1. Upon its creation, the volume is placed in the online state. It uses the export policy named default_expolicy. The
owner of the volume's root is a user named jdoe whose primary group is named dev. The volume's junction path is /user/
jdoe. The volume is 250 GB in size, space for the entire volume is reserved on the aggregate, and the create operation runs
in the background.
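A representative command line, sketched from the description above (option names are assumptions for illustration):

cluster1::> volume create -vserver vs0 -volume user_jdoe -aggregate aggr1 -state online -policy default_expolicy -user jdoe -group dev -junction-path /user/jdoe -size 250GB -space-guarantee volume -foreground false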
The following example creates a new volume named vol_cached on a Vserver named vs0 and a Flash Pool storage
aggregate named aggr1. The newly created volume is placed online and uses auto as the caching policy.
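A representative command line, sketched from the description above:

cluster1::> volume create -vserver vs0 -volume vol_cached -aggregate aggr1 -state online -caching-policy auto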
The following example creates a new FlexGroup named media_vol on a Vserver named vs0 with four constituents on
aggregates aggr1 and aggr2. Upon its creation, the volume is placed in the online state. The volume's junction path is /
media. The volume is 200 TB in size, no space for the volume is reserved on the aggregates, and the create operation runs
in the background.
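A representative command line, sketched from the description above (option names are assumptions for illustration):

cluster1::> volume create -vserver vs0 -volume media_vol -aggr-list aggr1,aggr2 -aggr-list-multiplier 2 -state online -junction-path /media -size 200TB -space-guarantee none -foreground false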
The following example creates a new FlexGroup volume named fg on a Vserver named vs0 on aggregates selected by
Data ONTAP.
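A representative command line (the -auto-provision-as option is an assumption for illustration):

cluster1::> volume create -vserver vs0 -volume fg -auto-provision-as flexgroup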
Related references
volume modify on page 1463
volume show on page 1477
vserver export-policy create on page 1846
vserver create on page 1675
vserver modify on page 1678
volume delete
Delete an existing volume
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The volume delete command deletes the specified volumes. Before deleting a volume, the user is prompted to confirm the
operation unless the -force flag is specified. If this volume was associated with a policy group the underlying qos workload is
deleted.
Note:
• If there is a qtree or quota policy associated with a volume, it is deleted when you delete the volume.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the name of the Vserver from which the volume is to be deleted. If only one data Vserver exists,
you do not need to specify this parameter.
-volume <volume name> - Volume Name
This specifies the name of the volume that is to be deleted.
[-force [true]] - Force Delete (privilege: advanced)
If this parameter is specified, the user is not prompted to confirm each deletion operation. In addition, the
operation is run only on the local node, and several potential errors are ignored. By default, this setting is
false. This parameter is available only at the advanced privilege level and higher.
[-foreground {true|false}] - Foreground Process
This specifies whether the operation runs in the foreground. The default setting is true (the operation runs in
the foreground). When set to true, the command will not return until the operation completes.
Examples
The following example deletes a volume named vol1_old from a Vserver named vs0:
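A representative command line, sketched from the description above:

cluster1::> volume delete -vserver vs0 -volume vol1_old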
volume expand
Expand the size of a volume by adding constituents
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The volume expand command allows the user to increase the size of a FlexGroup by adding constituents. The size of the new
constituents is determined by the size of the smallest existing constituent. This command only applies to FlexGroups.
Parameters
-vserver <vserver name> - Vserver Name
This parameter can be used to specify the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This parameter specifies the volume that the user wants to expand.
-aggr-list <aggregate name>, ... - List of Aggregates for FlexGroup Constituents
Specifies an array of names of aggregates to be used for new FlexGroup constituents. Each entry in the list
will create a constituent on the specified aggregate. An aggregate may be specified multiple times to have
multiple constituents created on it.
[-aggr-list-multiplier <integer>] - Aggregate List Repeat Count
Specifies the number of times to iterate over the aggregates listed with the -aggr-list parameter when
expanding a FlexGroup. The aggregate list will be repeated the specified number of times. For example,
specifying -aggr-list aggr1,aggr2 with -aggr-list-multiplier 2
will cause four constituents to be created in the order aggr1, aggr2, aggr1, aggr2. The default value is 1.
[-foreground {true|false}] - Foreground Process
If false is specified for this parameter, the command runs as a job in the background. If true is specified,
the command will not return until the operation is complete. The default value is true.
Examples
The following example increases the size of a FlexGroup by adding 6 constituents using the -aggr-list-multiplier parameter:
cluster1::> volume expand -vserver vs1 -volume flexgroup -aggr-list aggr1,aggr2 -aggr-list-
multiplier 3
volume make-vsroot
Designate a non-root volume as a root volume of the Vserver
Availability: This command is available to cluster administrators at the advanced privilege level.
Description
The volume make-vsroot command promotes a non-root volume of the Vserver to be the Vserver's root volume. The
Vserver's root volume must be a FlexVol volume with a size of at least 1 GB.
For instance, if you run this command on a volume named user that is located on a Vserver named vs0, the volume user is
made the root volume of the Vserver vs0.
This command is available only at the advanced privilege level and higher.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which a non-root volume is to be made the root volume.
-volume <volume name> - Volume Name
This specifies the non-root volume that is to be made the root volume of its Vserver. This must be an existing
FlexVol volume. Using a SnapLock volume as the root volume for a Vserver is not supported.
Examples
The following example makes a volume named root_vs0_backup the root volume of its Vserver with FlexVol volumes,
which is named vs0.
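A representative command line, sketched from the description above:

cluster1::> volume make-vsroot -vserver vs0 -volume root_vs0_backup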
The following example makes a volume named root_vs1 the root volume of the Vserver with Infinite Volume vs1.
volume modify
Modify volume attributes
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The volume modify command can be used to modify the following attributes of a volume:
• Size
• Export policy
• User ID
• Group ID
• Security style (All volume types: UNIX mode bits, CIFS ACLs, or mixed NFS and CIFS permissions.)
• Comment
• Snapshot policy
• Convert ucode
• Caching policy
You can use the volume move command to change a volume's aggregate or node. You can use the volume rename command
to change a volume's name. You can use the volume make-vsroot command to make a volume the root volume of its
Vserver.
You can change additional volume attributes by using this command at the advanced privilege level and higher.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the volume is located. If only one data Vserver exists, you do not need to
specify this parameter. Although node Vservers are not displayed when using <Tab> completion, this
parameter supports node Vservers for modifying the root volume of the specified node Vserver.
-volume <volume name> - Volume Name
This specifies the volume that is to be modified.
[-size {<integer>[KB|MB|GB|TB|PB]}] - Volume Size
This optionally specifies the new size of the volume. The size is specified as a number followed by a unit
designation: k (kilobytes), m (megabytes), g (gigabytes), or t (terabytes). If the unit designation is not
specified, bytes are used as the unit, and the specified number is rounded up to the nearest 4 KB. A relative
rather than absolute size change can be specified by adding + or - before the given size: for example,
specifying +30m adds 30 megabytes to the volume's current size. The minimum size for a volume is 20 MB
(the default setting). The volume's maximum size is limited by the platform maximum. If the volume's
guarantee is set to volume, the volume's maximum size can also be limited by the available space in the
hosting aggregate. If the volume's guarantee is currently disabled, its size cannot be increased.
[-state {online|restricted|offline|force-online|force-offline|mixed}] - Volume State
This optionally specifies the volume's state. A restricted volume does not provide client access to data but is
available for administrative operations.
Note: The mixed state applies to FlexGroups only and cannot be specified as a target state.
• off - The volume will not grow or shrink in size in response to the amount of used space.
• grow - The volume will automatically grow when used space in the volume is above the grow threshold.
• grow_shrink - The volume will grow or shrink in size in response to the amount of used space.
By default, -autosize-mode is off for new volumes, except for DP mirrors, for which the default value is
grow_shrink. The grow and grow_shrink modes work together with Snapshot autodelete to automatically
reclaim space when a volume is about to become full. The volume parameter -space-mgmt-try-first
controls the order in which these two space reclamation policies are attempted.
• none: The value of none does not provide any guarantee for overwrites or enforce any restrictions. It
should be used if the admin plans to manually manage space consumption in the volume and aggregate,
and out of space errors.
• thick: The value of thick guarantees that the hole fills and overwrites to space-reserved files in this
volume will always succeed by reserving space. To meet this space SLO, the following volume-level
settings are automatically set and cannot be modified:
◦ Space Guarantee: volume - The entire size of the volume is preallocated in the aggregate. Changing the
volume's space-guarantee type is not supported.
◦ Fractional Reserve: 100 - 100% of the space required for overwrites is reserved. Changing the volume's
fractional-reserve setting is not supported.
• semi-thick: The value of semi-thick is a best-effort attempt to ensure that overwrites succeed by
restricting the use of features that share blocks and auto-deleting backups and Snapshot copies in the
volume. To meet this space SLO, the following volume-level settings are automatically set and cannot be
modified:
◦ Space Guarantee: volume - The entire size of the volume is preallocated in the aggregate. Changing the
volume's space-guarantee type is not supported.
◦ Fractional Reserve: 0 - No space will be reserved for overwrites by default. However, changing the
volume's fractional-reserve setting is supported. Changing the setting to 100 means that 100% of
the space required for overwrites is reserved.
◦ Snapshot Autodelete: enabled - Automatic deletion of Snapshot copies is enabled to reclaim space. To
ensure that the overwrites can be accommodated when the volume reaches threshold capacity, the
following volume Snapshot autodelete parameters are set automatically to the specified values and
cannot be modified:
⁃ enabled: true
⁃ commitment: destroy
⁃ defer-delete: none
In addition, with a value of semi-thick, the following technologies are not supported for the volume:
◦ File Clones with autodelete disabled: Only full file clones of files, LUNs or NVMe namespaces that
can be autodeleted can be created in the volume. The use of autodelete for file clone create is
required.
◦ Partial File Clones: Only full file clones of files or LUNs that can be autodeleted can be created in the
volume. The use of range for file clone create is not supported.
◦ Volume Efficiency: Enabling volume efficiency is not supported to allow autodeletion of Snapshot
copies.
[-caching-policy <text>] - Caching Policy (privilege: advanced)
This optionally specifies the caching policy to apply to the volume. A caching policy defines how the system caches the volume's data in Flash Pool aggregates. The available caching policies are:
• auto - Read caches all metadata and randomly read user data blocks, and write caches all randomly
overwritten user data blocks.
• random_read - Read caches all metadata and randomly read user data blocks.
• all_read - Read caches all metadata, randomly read and sequentially read user data blocks.
• all_read_random_write - Read caches all metadata, randomly read, sequentially read and randomly written
user data.
• all - Read caches all data blocks read and written. It does not do any write caching.
• noread-random_write - Write caches all randomly overwritten user data blocks. It does not do any read
caching.
• meta-random_write - Read caches all metadata and write caches randomly overwritten user data blocks.
• random_read_write-random_write - Read caches all metadata, randomly read and randomly written user
data blocks. It also write caches randomly overwritten user data blocks.
• all_read-random_write - Read caches all metadata, randomly read and sequentially read user data blocks. It
also write caches randomly overwritten user data blocks.
• all_read_random_write-random_write - Read caches all metadata, randomly read, sequentially read and
randomly written user data. It also write caches randomly overwritten user data blocks.
• all-random_write - Read caches all data blocks read and written. It also write caches randomly overwritten
user data blocks.
Note that in a caching-policy name, a hyphen (-) separates read and write policies. Default caching-policy is
auto.
[-cache-retention-priority {normal|low|high}] - Cache Retention Priority (privilege: advanced)
This parameter specifies the cache retention priority to apply to the volume. A cache retention priority defines
how long the blocks of a volume will be cached in the Flash Pool once they become cold. If a cache retention
priority is not assigned to this volume, the system uses the default policy. The available cache retention
priorities are normal, low, and high.
[-tiering-policy <Tiering Policy>] - Volume Tiering Policy
This parameter specifies the tiering policy to apply to the volume. The available tiering policies are:
• snapshot-only - This policy allows tiering of only the volume Snapshot copies not associated with the
active file system. The default cooling period is 2 days. The -tiering-minimum-cooling-days
parameter can be used to override the default.
• auto - This policy allows tiering of both Snapshot copy data and active file system user data to the cloud
tier. The default cooling period is 31 days. The -tiering-minimum-cooling-days parameter can be
used to override the default.
• all - This policy allows tiering of both Snapshot copy data and active file system user data to the cloud tier
as soon as possible without waiting for a cooling period. On DP volumes, this policy allows all transferred
user data blocks to start in the cloud tier.
Examples
The following example modifies a volume named vol4 on a Vserver named vs0. The volume's export policy is changed to
default_expolicy and its size is changed to 500 GB.
cluster1::> volume modify -vserver vs0 -volume vol4 -policy default_expolicy -size 500g
The following example modifies a volume named vol2. It enables autogrow and sets the maximum autosize to 500g.
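Based on the autosize parameters described above, the command would take this form (the Vserver name vs0 is illustrative, not part of the original example):

```console
cluster1::> volume modify -vserver vs0 -volume vol2 -autosize true -max-autosize 500g
```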
The following example modifies a volume named vol2 to have a space guarantee of none.
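A command of this form would match the example (vs0 is an assumed Vserver name):

```console
cluster1::> volume modify -vserver vs0 -volume vol2 -space-guarantee none
```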
The following example modifies all volumes in Vserver vs0 to have a fractional reserve of 30%.
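Using a wildcard for the volume name, the command would look like this:

```console
cluster1::> volume modify -vserver vs0 -volume * -fractional-reserve 30%
```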
The following example modifies a volume named vol2 to grow in size by 5 gigabytes.
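With the relative-size syntax, the command would be (the Vserver name vs0 is illustrative):

```console
cluster1::> volume modify -vserver vs0 -volume vol2 -size +5g
```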
The following example modifies a volume named vol2 to have a different caching policy. The volume must be on a Flash
Pool aggregate.
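A command of this form would apply a new caching policy (the Vserver name vs0 and the random_read policy are assumed examples):

```console
cluster1::> volume modify -vserver vs0 -volume vol2 -caching-policy random_read
```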
Related references
vserver export-policy create on page 1846
vserver create on page 1675
vserver modify on page 1678
volume move on page 1583
volume rename on page 1475
volume make-vsroot on page 1462
volume mount
Mount a volume on another volume with a junction-path
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The volume mount command mounts a volume at a specified junction path.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This specifies the volume that is to be mounted.
-junction-path <junction path> - Junction Path Of The Mounting Volume
This specifies the junction path of the mounted volume. The junction path name is case insensitive and must
be unique within a Vserver's namespace.
[-active {true|false}] - Activate Junction Path
This optionally specifies whether the mounted volume is accessible. The default setting is false. If the
mounted path is not accessible, it does not appear in the Vserver's namespace.
[-policy-override {true|false}] - Override The Export Policy
This optionally specifies whether the parent volume's export policy overrides the mounted volume's export
policy. The default setting is false.
Examples
The following example mounts a volume named user_tsmith on a Vserver named vs0. The junction path for the mounted
volume is /user/tsmith. The mounted volume is accessible, and the mounted volume's export policy is not overridden by
the parent volume's export policy.
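All values for this example are given above, so the command would be:

```console
cluster1::> volume mount -vserver vs0 -volume user_tsmith -junction-path /user/tsmith -active true -policy-override false
```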
volume offline
Take an existing volume offline
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The volume offline command takes the volume offline. If the volume is already in restricted or iron_restricted state, then it
is already unavailable for data access, and much of the following description does not apply. The current root volume may not
be taken offline. A number of operations being performed on the volume in question can prevent volume offline from
succeeding for various lengths of time. If such operations are required, the command may take additional time to complete. If
they do not, the command is aborted. The -force flag can be used to forcibly offline a volume.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the name of the Vserver from which the volume is to be taken offline. If only one data Vserver
exists, you do not need to specify this parameter.
-volume <volume name> - Volume Name
This specifies the name of the volume that is to be taken offline.
[-force | -f [true]] - Force Offline
This specifies whether the offline operation is forced. Using this option to force a volume offline can
potentially disrupt access to other volumes. The default setting is false.
[-foreground {true|false}] - Foreground Process
This specifies whether the operation runs in the foreground. The default setting is true (the operation runs in
the foreground). When set to true, the command will not return until the operation completes. This parameter
applies only to FlexGroups. For FlexVol volumes, the command always runs in the foreground.
[-disable-luns-check [true]] - Disable Check for Existing LUNs
Taking the volume offline will make all associated LUNs and NVMe namespaces unavailable, which normally
requires a user confirmation. If this parameter is specified, the command proceeds without a confirmation. The
default setting is false.
Examples
The following example takes the volume named vol1 offline:
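The command would take this form (the Vserver name vs0 is illustrative; it can be omitted if only one data Vserver exists):

```console
cluster1::> volume offline -vserver vs0 -volume vol1
```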
volume online
Bring an existing volume online
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Parameters
-vserver <vserver name> - Vserver Name
This parameter specifies the name of the Vserver from which the volume is to be brought online. If only one
data Vserver exists, you do not need to specify this parameter.
-volume <volume name> - Volume Name
This parameter specifies the name of the volume that is to be brought online.
[-force | -f [true]] - Force Online
When this parameter is used, the volume will be brought online even if there is not enough space available in
the aggregate to honor the volume's space guarantee.
[-foreground {true|false}] - Foreground Process
This parameter specifies whether the operation runs in the foreground. The default setting is true (the
operation runs in the foreground). When set to true, the command will not return until the operation completes.
This parameter applies only to FlexGroups. For FlexVol volumes, the command always runs in the foreground.
Examples
The following example brings a volume named vol1 online:
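The corresponding command would be (the Vserver name vs0 is illustrative):

```console
cluster1::> volume online -vserver vs0 -volume vol1
```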
volume prepare-for-revert
Preparing the volume for revert
Availability: This command is available to cluster administrators at the advanced privilege level.
Description
The volume prepare-for-revert command prepares volumes to be reverted to the previous version of ONTAP.
Parameters
[-node {<nodename>|local}] - Node Name
This specifies the name of the node on which all the volumes will be prepared for revert. If unspecified, the
command will execute on the local node.
Examples
The following example prepares all volumes on node node1 to be reverted.
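At the advanced privilege level, the command would be:

```console
cluster1::*> volume prepare-for-revert -node node1
```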
volume rehost
Rehost a volume from one Vserver into another Vserver
Description
The volume rehost command rehosts a volume from a source Vserver onto a destination Vserver. The volume name must be
unique among the other volumes on the destination Vserver.
Parameters
-vserver <vserver name> - Source Vserver name
This specifies the Vserver on which the volume is located.
-volume <volume name> - Target volume name
This specifies the volume that is to be rehosted.
-destination-vserver <vserver name> - Destination Vserver name
This specifies the destination Vserver where the volume is to be located after the rehost operation.
{ [-force-unmap-luns {true|false}] - Unmap LUNs in volume
This specifies whether the rehost operation should unmap LUNs present on the volume. The default setting is
false (the rehost operation does not unmap LUNs). When set to true, the command unmaps all mapped LUNs on
the volume.
| [-auto-remap-luns {true|false}]} - Automatic Remap of LUNs
This specifies whether the rehost operation should map the volume's LUNs at the destination Vserver as they
are mapped at the source Vserver. The default setting is false (the rehost operation does not map LUNs at
the destination Vserver). When set to true, the command creates initiator groups, along with their initiators
(if present), at the destination Vserver with the same names as at the source Vserver, and then maps the LUNs
on the volume to those initiator groups as they were mapped at the source Vserver.
Examples
The following example rehosts a volume named vol3 on Vserver named vs1 to a destination Vserver named vs2:
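All names for this example are given above, so the command would be:

```console
cluster1::> volume rehost -vserver vs1 -volume vol3 -destination-vserver vs2
```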
volume rename
Rename an existing volume
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The volume rename command renames a volume. The volume name must be unique among the other volumes in the same
Vserver.
Examples
The following example renames a volume named vol3_backup as vol3_save on a Vserver named vs0:
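Assuming the rename command's -newname parameter, the command would be:

```console
cluster1::> volume rename -vserver vs0 -volume vol3_backup -newname vol3_save
```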
volume restrict
Restrict an existing volume
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The volume restrict command puts the volume in restricted state. If the volume is online, then it will be made unavailable
for data access as described under volume offline.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the name of the Vserver from which the volume is to be restricted. If only one data Vserver
exists, you do not need to specify this parameter.
-volume <volume name> - Volume Name
This specifies the name of the volume that is to be restricted.
[-foreground {true|false}] - Foreground Process
This specifies whether the operation runs in the foreground. The default setting is true (the operation runs in
the foreground). When set to true, the command will not return until the operation completes. This parameter
applies only to FlexGroups. For FlexVol volumes, the command always runs in the foreground.
Examples
The following example restricts a volume named vol1:
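The command would take this form (vs0 is an assumed Vserver name; it can be omitted if only one data Vserver exists):

```console
cluster1::> volume restrict -vserver vs0 -volume vol1
```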
volume show
Display a list of volumes
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The volume show command displays information about volumes. The command output depends on the parameter or
parameters specified with the command. If no parameters are specified, the command displays the following information about
all volumes:
• Vserver name
• Volume name
• Aggregate name
• Size
• Available size
To display detailed information about a single volume, run the command with the -vserver and -volume parameters. The
detailed view provides all of the information in the previous list and the following additional information:
• Name ordinal
• User ID
• Group ID
• UNIX permissions
• Junction path
• Comment
• Filesystem size
• Used size
• Used percentage
• Volume nearly full threshold percent
• Autosize enabled
• Maximum autosize
• Minimum autosize
• Autosize mode
• Total files
• Files used
• Creation time
• Overwrite reserve
• Fractional reserve
• Language
• Concurrency level
• Optimization policy
• Volume UUID
• Failover state
• Extent option (deprecated)
• Consistency state
• Caching policy
• FlexGroup index
• FlexGroup UUID
• Application UUID
To display detailed information about all volumes, run the command with the -instance parameter.
You can specify additional parameters to display information that matches only those parameters. For example, to display
information only about data-protection volumes, run the command with the -type DP parameter.
Parameters
{ [-fields <fieldname>, ...]
This specifies the fields that need to be displayed. The fields Vserver and policy are the default fields (see
example).
| [-encryption ]
If this parameter is specified, the command displays the following information:
• Vserver name
• Volume name
• Aggregate name
• Volume state
• Encryption state
| [-junction ]
If this parameter is specified, the command displays the following information:
• Vserver name
• Volume name
• Junction path
| [-instance ]}
If this parameter is specified, the command displays information about all entries.
[-vserver <vserver name>] - Vserver Name
If this parameter and the -volume parameter are specified, the command displays detailed information about
the specified volume. If this parameter is specified by itself, the command displays information about volumes
on the specified Vserver.
[-volume <volume name>] - Volume Name
If this parameter and the -vserver parameter are specified, the command displays detailed information about
the specified volume. If this parameter is specified by itself, the command displays information about all
volumes matching the specified name.
[-aggregate <aggregate name>] - Aggregate Name
If this parameter is specified, the command displays information only about the volume or volumes that are
located on the specified storage aggregate. This field is displayed as "-" for FlexGroups.
[-aggr-list <aggregate name>, ...] - List of Aggregates for FlexGroup Constituents
If this parameter is specified, the command displays information only about the FlexGroup or FlexGroups that
are located on the specified list of storage aggregates. This parameter applies to FlexGroups only.
[-encryption-type {none|volume|aggregate}] - Encryption Type
If this parameter is specified, the command displays information about the type of encryption key used for
encrypting the volume. The possible values are undefined, none, volume, and aggregate. The value none
is used for non-encrypted volumes, the value volume is used for volumes encrypted with volume key and
aggregate is used for volumes encrypted with aggregate key.
[-nodes {<nodename>|local}, ...] - List of Nodes Hosting the Volume
If this parameter is specified, the command displays information only about the FlexGroup or FlexGroups that
are located on the specified list of storage systems. This parameter applies to FlexGroups only.
[-size {<integer>[KB|MB|GB|TB|PB]}] - Volume Size
If this parameter is specified, the command displays information only about the volume or volumes that have
the specified size. Size is the maximum amount of space a volume can consume from its associated
aggregate(s), including user data, metadata, Snapshot copies, and Snapshot reserve. Note that for volumes
without a -space-guarantee of volume, the ability to fill the volume to this maximum size depends on the
space available in the associated aggregate or aggregates.
[-name-ordinal <text>] - Name Ordinal (privilege: advanced)
If this parameter is specified, it denotes the ordinal assignment used in relation to this volume's name. Ordinals
are used to disambiguate volumes that have the same base name on the same controller. A value of "0"
indicates that the base volume name is unique on the controller. A value greater than zero indicates that the
volume's base name is used by two or more volumes on the same controller, and that appending "(n)" to this
volume's name uniquely identifies it on this controller.
[-dsid <integer>] - Volume Data Set ID
If this parameter is specified, the command displays information only about the volume or volumes that match
the specified data set ID. This field is displayed as "-" for FlexGroups.
[-msid <integer>] - Volume Master Data Set ID
If this parameter is specified, the command displays information only about the volume or volumes that match
the specified master data set ID.
• data-move: Volumes that are being moved from a system operating in 7-Mode.
• data-protection: Volumes that are being replicated from a system operating in 7-Mode for disaster recovery.
• 'read' ... Indicates that the volume cannot participate in write caching.
• 'read-write' ... Indicates that the volume can participate in read and write caching.
• random_read - Read caches all metadata and randomly read user data blocks.
• random_read_write - Read caches all metadata, randomly read and randomly written user data blocks.
• all_read - Read caches all metadata, randomly read and sequentially read user data blocks.
• all_read_random_write - Read caches all metadata, randomly read, sequentially read, and randomly written
user data.
• all - Read caches all data blocks read and written. It does not do any write caching.
• noread-random_write - Write caches all randomly overwritten user data blocks. It does not do any read
caching.
• meta-random_write - Read caches all metadata and write caches randomly overwritten user data blocks.
• all_read-random_write - Read caches all metadata, randomly read and sequentially read user data blocks. It
also write caches randomly overwritten user data blocks.
• all_read_random_write-random_write - Read caches all metadata, randomly read, sequentially read, and
randomly written user data. It also write caches randomly overwritten user data blocks.
• all-random_write - Read caches all data blocks read and written. It also write caches randomly overwritten
user data blocks.
Note that in a caching-policy name, a hyphen (-) separates read and write policies. Default caching-policy is
auto.
[-cache-retention-priority {normal|low|high}] - Cache Retention Priority (privilege: advanced)
If this parameter is specified, the command displays the volumes that match the specified cache retention
priority policy.
A cache retention priority defines how long the blocks of a volume will be cached in the Flash Pool once they
become cold.
[-tiering-policy <Tiering Policy>] - Volume Tiering Policy
If this parameter is specified, the command displays the volumes that match the specified tiering policy. The available tiering policies are:
• snapshot-only - Only the volume Snapshot copies not associated with the active file system are tiered to the
cloud tier.
• auto - Both Snapshot copy data and active file system user data are tiered to the cloud tier.
• all - Both Snapshot copy data and active file system user data are tiered to the cloud tier as soon as possible
without waiting for a cooling period. On DP volumes all transferred user data blocks start in the cloud tier.
Examples
The following example displays detailed information about a volume named vol1 on an SVM named vs1:
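The command would be as follows; the detailed output is omitted here:

```console
cluster1::> volume show -vserver vs1 -volume vol1
```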
volume show-footprint
Display a list of volumes and their data and metadata footprints in their associated aggregate.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver
If this parameter and the -volume parameter are specified, the command displays detailed information about
the specified volume. If this parameter is specified by itself, the command displays information about volumes
on the specified Vserver.
[-volume <volume name>] - Volume Name
If this parameter and the -vserver parameter are specified, the command displays detailed information about
the specified volume. If this parameter is specified by itself, the command displays information about all
volumes matching the specified name.
[-volume-msid <integer>] - Volume MSID
If this parameter is specified, the command displays information only about the volume that has the specified
MSID.
[-volume-dsid <integer>] - Volume DSID
If this parameter is specified, the command displays information only about the volume that has the specified
DSID.
[-vserver-uuid <UUID>] - Vserver UUID
If this parameter is specified, the command displays information only about the volume on the vserver which
has the specified UUID.
[-aggregate <aggregate name>] - Aggregate Name
If this parameter is specified, the command displays information only about the volumes that are associated
with the specified aggregate.
[-aggregate-uuid <UUID>] - Aggregate UUID
If this parameter is specified, the command displays information only about the volumes on the aggregate
which have the specified UUID.
[-hostname <text>] - Hostname
If this parameter is specified, the command displays information only about the volumes that belong to the
specified host.
[-tape-backup-metafiles-footprint {<integer>[KB|MB|GB|TB|PB]}] - Tape Backup Metadata Footprint
If this parameter is specified, the command displays information only about the volumes whose tape backup
metafiles use the specified amount of space in the aggregate.
[-tape-backup-metafiles-footprint-percent <percent>] - Tape Backup Metadata Footprint Percent
If this parameter is specified, the command displays information only about the volumes whose tape backup
metafiles use the specified percentage of space in the aggregate.
Examples
The following example displays information about all volumes in the system.
Vserver : nodevs
Volume : vol0
Vserver : thevs
Volume : therootvol
The following example displays information about all volumes in a system and highlights a scenario where the aggregates
associated with volumes have a cloud tier attached to them.
Vserver : vsim1
Volume : vol0
Vserver : vs1
Volume : svm_root
Vserver : vs1
Volume : vol1
Related references
volume show-space on page 1500
volume move on page 1583
volume show-space
Display space usage for volume(s)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The volume show-space command displays information about space usage within the volume. The command output depends
on the parameter or parameters specified with the command. If no parameters are specified, the command displays the following
information about all volumes.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver
If this parameter and the -volume parameter are specified, the command displays detailed information about
the specified volume. If this parameter is specified by itself, the command displays information about volumes
on the specified Vserver.
[-volume <volume name>] - Volume Name
If this parameter and the -vserver parameter are specified, the command displays detailed information about
the specified volume. If this parameter is specified by itself, the command displays information about all
volumes matching the specified name.
[-volume-msid <integer>] - Volume MSID
If this parameter is specified, the command displays information only about the volume that has the specified
MSID.
[-volume-dsid <integer>] - Volume DSID
If this parameter is specified, the command displays information only about the volume that has the specified
DSID.
[-vserver-uuid <UUID>] - Vserver UUID
If this parameter is specified, the command displays information only about the volume on the vserver which
has the specified UUID.
[-aggregate <aggregate name>] - Aggregate Name
If this parameter is specified, the command displays information only about the volumes that are associated
with the specified aggregate.
Examples
The following example shows how to display details for all volumes.
Vserver : nodevs
Volume : vol0
Vserver : thevs
Volume : rootvol
Vserver : vs1
Volume : vol1
The following example shows all Volumes that have a snap reserve greater than 2 MB:
Vserver : nodevs
Volume : vol0
Vserver : vs1
Volume : vol1
Related references
volume show on page 1477
volume size
Set/Display the size of the volume.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The volume size command allows the user to set or display the volume size. If new-size is not specified then the current
volume size is displayed.
Parameters
-vserver <vserver name> - Vserver Name
This parameter can be used to specify the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This parameter specifies the volume for which the user wants to set or display the size.
[-new-size <text>] - [+|-]<New Size>
This optional parameter specifies the size of the volume. It can be used to set the volume size to a particular
number or grow/shrink the size by a particular amount. The size is specified as a number (preceded with a sign
for relative growth/shrinkage) followed by a unit designation: k (kilobytes), m (megabytes), g (gigabytes), or t
(terabytes). If the unit designation is not specified, bytes are used as the unit, and the specified number is
rounded up to the nearest 4 KB. The minimum size for a flexible volume is 20 MB, and the maximum size
depends on hardware platform and free space in the containing aggregate. If the volume's space guarantee is
currently disabled, its size cannot be increased.
Examples
The following example shows the size of a volume called vol1.
The following example sets the size of a volume called vol1 to 1GB.
The following example increases the size of a volume called vol1 by 500MB.
The following example decreases the size of a volume called vol1 by 250MB.
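The four examples above would correspond to commands of this form (the Vserver name vs0 is illustrative):

```console
cluster1::> volume size -vserver vs0 -volume vol1
cluster1::> volume size -vserver vs0 -volume vol1 -new-size 1g
cluster1::> volume size -vserver vs0 -volume vol1 -new-size +500m
cluster1::> volume size -vserver vs0 -volume vol1 -new-size -250m
```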
volume transition-prepare-to-downgrade
Verifies that there are no volumes actively transitioning from 7-Mode to clustered Data ONTAP, and configures the transition
feature for downgrade.
Availability: This command is available to cluster administrators at the advanced privilege level.
Description
The volume transition-prepare-to-downgrade command is used to verify that a volume is not currently being
transitioned from 7-Mode to clustered Data ONTAP. This check must be done before reverting or downgrading a node.
Examples
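Since no parameters are listed for this command, an invocation at the advanced privilege level would presumably be simply:

```console
cluster1::*> volume transition-prepare-to-downgrade
```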
volume unmount
Unmount a volume
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The volume unmount command unmounts a volume from its parent volume. The volume can be remounted at the same or a
different location by using the volume mount command.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This specifies the volume that is to be unmounted.
Examples
The following example unmounts a volume named vol2 on a Vserver named vs0:
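Both names for this example are given above, so the command would be:

```console
cluster1::> volume unmount -vserver vs0 -volume vol2
```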
Related references
volume mount on page 1472
volume clone create
Create a FlexClone volume
Description
The volume clone create command creates a FlexClone volume on the aggregate containing the specified parent volume.
This command is only supported for flexible volumes. The maximum volume clone hierarchy depth is 500 and the default depth
is 60. You can optionally specify the following attributes for the new FlexClone volume:
• Comment
• Whether the volume clone create command runs as a foreground or background process
Parameters
-vserver <vserver name> - Vserver Name
This parameter specifies the Vserver on which the FlexClone volume is to be created. If only one data Vserver
exists, you do not need to specify this parameter.
-flexclone <volume name> - FlexClone Volume
This parameter specifies the name of the FlexClone volume. The name must be unique within the hosting
Vserver.
[-type {RW|DP}] - FlexClone Type
This parameter specifies the type of FlexClone volume. A read-only FlexClone volume is created if you
specify the type as DP; otherwise a read-write FlexClone volume is created.
[-parent-vserver <vserver name>] - FlexClone Parent Vserver
This parameter specifies the name of the Vserver to which the FlexClone parent volume belongs. If it is
different from the Vserver on which the FlexClone volume is to be created, then the FlexClone volume inherits
the export policies from the residing Vserver, and not from the FlexClone parent volume.
-parent-volume | -b <volume name> - FlexClone Parent Volume
This parameter specifies the name of the parent volume from which the FlexClone volume is derived.
[-parent-snapshot <snapshot name>] - FlexClone Parent Snapshot
This specifies the name of the parent snapshot from which the FlexClone volume is derived.
[-junction-path <junction path>] - Junction Path
This specifies the junction path at which the new FlexClone volume should be mounted.
[-caching-policy <text>] - Caching Policy (privilege: advanced)
This optionally specifies the caching policy to apply to the volume. A caching policy defines how the system caches the volume's data in Flash Pool aggregates. The available caching policies are:
• auto - Read caches all metadata and randomly read user data blocks, and write caches all randomly
overwritten user data blocks.
• random_read - Read caches all metadata and randomly read user data blocks.
• random_read_write - Read caches all metadata, randomly read and randomly written user data blocks.
• all_read - Read caches all metadata, randomly read, and sequentially read user data blocks.
• all_read_random_write - Read caches all metadata, randomly read, sequentially read and randomly written
user data.
• all - Read caches all data blocks read and written. It does not do any write caching.
• noread-random_write - Write caches all randomly overwritten user data blocks. It does not do any read
caching.
• meta-random_write - Read caches all metadata and write caches randomly overwritten user data blocks.
• random_read_write-random_write - Read caches all metadata, randomly read and randomly written user
data blocks. It also write caches randomly overwritten user data blocks.
• all_read-random_write - Read caches all metadata, randomly read, and sequentially read user data blocks.
It also write caches randomly overwritten user data blocks.
• all_read_random_write-random_write - Read caches all metadata, randomly read, sequentially read and
randomly written user data. It also write caches randomly overwritten user data blocks.
• all-random_write - Read caches all data blocks read and written. It also write caches randomly overwritten
user data blocks.
Note that in a caching-policy name, a hyphen (-) separates the read and write policies. The default caching-policy is
auto.
[-vserver-dr-protection {protected|unprotected}] - Vserver DR Protection
This optionally specifies whether the volume should be protected by Vserver level SnapMirror. This parameter
is applicable only if the Vserver is the source of a Vserver level SnapMirror relationship. By default the clone
volume will inherit this value from the parent volume.
[-uid <integer>] - Volume-Level UID
This parameter optionally specifies a volume-level user ID (UID). All files and directories in a FlexClone
volume will inherit this UID.
[-gid <integer>] - Volume-Level GID
This parameter optionally specifies a volume-level group ID (GID). All files and directories in a FlexClone
volume will inherit this GID.
Examples
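The following example (the Vserver, clone, and parent volume names are illustrative) creates a read-write FlexClone volume named vol1_clone from parent volume vol1:
cluster1::> volume clone create -vserver vs0 -flexclone vol1_clone -parent-volume vol1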
Description
The volume clone show command displays information about FlexClone clone volumes. This command is only supported
for flexible volumes. By default, the command displays the following information about all FlexClone volume clones:
• Vserver name
To display detailed information about all FlexClone volumes, run the command with the -instance parameter.
Parameters
{ [-fields <fieldname>, ...]
Selects the fields to be displayed.
| [-estimate ]
Displays an estimate of the free disk space required in the aggregate to split the indicated clone volume from
its underlying parent volume. The value reported may differ from the space actually required to perform the
split, especially if the clone volume is changing when the split is being performed.
• auto - Read caches all metadata and randomly read user data blocks, and write caches all randomly
overwritten user data blocks.
• random_read - Read caches all metadata and randomly read user data blocks.
• random_read_write - Read caches all metadata, randomly read and randomly written user data blocks.
• all_read - Read caches all metadata, randomly read, and sequentially read user data blocks.
• all_read_random_write - Read caches all metadata, randomly read, sequentially read and randomly written
user data.
• all - Read caches all data blocks read and written. It does not do any write caching.
• noread-random_write - Write caches all randomly overwritten user data blocks. It does not do any read
caching.
• meta-random_write - Read caches all metadata and write caches randomly overwritten user data blocks.
• random_read_write-random_write - Read caches all metadata, randomly read and randomly written user
data blocks. It also write caches randomly overwritten user data blocks.
• all_read-random_write - Read caches all metadata, randomly read, and sequentially read user data blocks.
It also write caches randomly overwritten user data blocks.
• all_read_random_write-random_write - Read caches all metadata, randomly read, sequentially read and
randomly written user data. It also write caches randomly overwritten user data blocks.
• all-random_write - Read caches all data blocks read and written. It also write caches randomly overwritten
user data blocks.
Examples
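The following example (the Vserver name is illustrative; output omitted) displays information about the FlexClone volumes on Vserver vs0:
cluster1::> volume clone show -vserver vs0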
Description
The volume clone split estimate command displays an estimate of the free disk space required in the aggregate to split
the indicated clone volume from its underlying parent volume. The value reported might differ from the space actually required
to perform the split, especially if the clone volume is changing when the split is being performed. This command is only
supported for flexible volumes.
Parameters
[-vserver <vserver name>] - Vserver Name
This specifies the estimates for free disk space required for splitting FlexClone volumes residing on this
Vserver. If the -flexclone option is also specified, then the command displays the free disk space estimate
only for the specified FlexClone volume residing on the specified Vserver.
[-flexclone <volume name>] - FlexClone Volume
This specifies the free disk space estimate for splitting this FlexClone volume.
[-type {RW|DP}] - FlexClone Type
This parameter specifies the type of FlexClone volume. A read-only FlexClone volume is created if you
specify the type as DP; otherwise a read-write FlexClone volume is created.
Examples
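The following example (names are illustrative; output omitted) displays the split estimate for FlexClone volume clone1 on Vserver vs1:
cluster1::> volume clone split estimate -vserver vs1 -flexclone clone1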
Related references
volume clone show on page 1509
Description
The volume clone split show command displays the progress information of all the active FlexClone volume splitting
jobs. This command is only supported for flexible volumes. By default, this command displays the following information:
If the -instance option is also specified, detailed information about all splitting jobs is displayed.
Parameters
{ [-fields <fieldname>, ...]
This specifies the fields to be displayed, for all the ongoing FlexClone splitting jobs.
| [-instance ]}
This specifies the command to display detailed information about the ongoing FlexClone volume splitting
jobs.
[-vserver <vserver name>] - Vserver Name
Selects information about the ongoing FlexClone volume splitting jobs for all FlexClone volumes on this
Vserver.
[-flexclone <volume name>] - FlexClone Volume
Selects information about ongoing FlexClone volume splitting jobs for this FlexClone volume.
[-block-percentage-complete <integer>] - Percentage Complete
Selects information about all the ongoing FlexClone splitting jobs that have the specified percentage of Block
processing completed.
[-blocks-scanned <integer>] - Blocks Scanned
Selects information about all the ongoing FlexClone splitting jobs that have the specified number of blocks
scanned.
[-blocks-updated <integer>] - Blocks Updated
Selects information about all the ongoing FlexClone splitting jobs that have the specified number of blocks
updated.
Examples
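The following example (the Vserver name is illustrative; output omitted) displays the ongoing FlexClone splitting jobs on Vserver vs1:
cluster1::> volume clone split show -vserver vs1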
Description
The volume clone split start command starts a job to separate the FlexClone volume from the underlying parent
volume. Both the parent and the FlexClone volume remain available for the duration of the split operation. After the job starts,
you can stop it using the volume clone split stop command. You can also stop the job using the job stop command.
You can monitor the current progress of the job using the volume clone split show and job show commands. This
command is only supported for flexible volumes. This command is not supported on volumes that are being protected as part of
a Vserver level SnapMirror.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver that the FlexClone volume exists on.
Examples
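The following example (names are illustrative) starts splitting FlexClone volume clone1 from its parent volume:
cluster1::> volume clone split start -vserver vs1 -flexclone clone1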
Related references
volume clone split stop on page 1516
job stop on page 149
volume clone split show on page 1514
job show on page 142
Description
The volume clone split stop command stops the process of separating the FlexClone volume from its underlying parent
volume, but does not lose any of the progress achieved while the split process was active. That is, all the clone volume blocks
already separated from the parent volume remain separated. If you restart the split operation, the splitting process begins from the
beginning because no information about previously achieved progress is saved, but previously split blocks are not re-split. This
command is only supported for flexible volumes.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver that the FlexClone volume exists on.
-flexclone <volume name> - FlexClone Volume
This specifies the FlexClone volume whose separation from the parent volume will be stopped.
Examples
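The following example (names are illustrative) stops the ongoing split of FlexClone volume clone1:
cluster1::> volume clone split stop -vserver vs1 -flexclone clone1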
Description
The volume clone sharing-by-split show command displays the split volumes with shared physical blocks. This
command is only supported for flexible volumes. By default, this command displays the following information:
• Node Name
• Vserver Name
• Aggregate Name
• Volume State
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
This parameter selects information about the split volumes with shared physical blocks on this node.
[-vserver <Vserver Name>] - Vserver Name
This parameter selects information about the split volumes with shared physical blocks on this Vserver.
[-volume <volume name>] - Volume Name
This parameter selects information about shared physical blocks for this volume.
[-aggregate <aggregate name>] - Aggregate Name
This parameter specifies the aggregate associated with the given volume.
Examples
The following example displays the split volumes with shared physical blocks in the node:
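A representative command (the node name is an assumption; output omitted):
cluster1::> volume clone sharing-by-split show -node node1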
The following example displays information about volume vol_clone1 residing on vserver vs1:
cluster1::> volume clone sharing-by-split show -node node1 -vserver vs1 -volume vol_clone1 -
instance
• Vserver name
• Volume name
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
This parameter selects information about the ongoing undo-sharing scan for all volumes on this Vserver.
[-volume <volume name>] - Volume Name
This parameter selects information about the ongoing undo-sharing scan on this volume.
[-blocks-scanned <integer>] - Scanned Blocks
This parameter selects information about the total number of blocks scanned by undo-sharing in the given
volume.
[-blocks-total <integer>] - Total Blocks
This parameter selects information about the total number of blocks for the undo-sharing to scan in the given
volume.
[-blocks-percentage-complete <integer>] - Blocks Percentage Complete
This parameter selects information about the percentage of block processing completed by undo-sharing in the
given volume.
Examples
The following example displays information about all the ongoing undo-sharing scan in the cluster:
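A representative command (output omitted):
cluster1::> volume clone sharing-by-split undo show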
The following example displays information about volume vol_clone1 residing on vserver vs1:
cluster1::> volume clone sharing-by-split undo show -vserver vs1 -volume vol_clone1 -instance
Description
The volume clone sharing-by-split undo start command starts a scan to undo the sharing of physical blocks in the
given volume. The volume will be available for the duration of the undo-sharing operation. After the scan starts, you can stop it
using the volume clone sharing-by-split undo stop command. You can monitor the current progress of the scan
using the volume clone sharing-by-split undo show command. This command is supported for flexible volumes that
were split from their parent volumes.
Parameters
-vserver <Vserver Name> - Vserver Name
This parameter specifies the vserver that the volume exists on.
-volume <volume name> - Volume Name
This parameter specifies the split volume with shared physical blocks, in which the sharing will be undone.
Examples
The following example starts the scan to undo the physical block sharing in volume vol_clone1 on vserver vs1:
cluster1::> volume clone sharing-by-split undo start -vserver vs1 -volume vol_clone1
Related references
volume clone sharing-by-split undo stop on page 1520
volume clone sharing-by-split undo show on page 1517
Description
The volume clone sharing-by-split undo start-all command starts a scan to undo the sharing of physical blocks in
all the relevant volumes across the cluster. The volumes will be available for the duration of the undo-sharing operation. You can
monitor the current progress of the scan using the volume clone sharing-by-split undo show command. This
command is supported for flexible volumes that were split from their parent volumes.
Examples
The following example starts the scan to undo the physical block sharing in all volumes across the cluster:
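cluster1::> volume clone sharing-by-split undo start-all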
Description
The volume clone sharing-by-split undo stop command stops the process of undoing the shared physical blocks
in the split volume. If you restart the undo-sharing operation, the scan begins from the beginning because no information about
previously achieved progress is saved, but previously unshared blocks are not processed again. This command is only supported
for flexible volumes.
Parameters
-vserver <Vserver Name> - Vserver Name
This parameter specifies the vserver that the volume exists on.
-volume <volume name> - Volume Name
This parameter specifies the volume whose unsharing of blocks will be stopped.
Examples
The following example stops an ongoing undo-sharing scan for volume vol_clone1 on vserver vs1:
cluster1::> volume clone sharing-by-split undo stop -vserver vs1 -volume vol_clone1
Description
This command verifies and updates the fingerprint database for the specified volume. This command is not supported on
FlexGroups or Infinite Volumes that are managed by storage services.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver on which the volume is located.
{ -volume <volume name> - Volume Name
Specifies the volume on which the verify operation needs to be started.
| -path </vol/volume>} - Volume Path
Specifies the volume path on which the verify operation needs to be started.
Examples
The following example runs volume efficiency check with the delete-checkpoint option turned on:
cluster1::> volume efficiency check -vserver vs1 -volume vol1 -delete-checkpoint true
Description
This command is used to set or modify the schedule, policy and various other efficiency configuration options on a volume.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the volume is located.
{ -volume <volume name> - Volume Name
This specifies the volume on which efficiency options need to be modified.
| -path </vol/volume>} - Volume Path
This specifies the volume path on which efficiency options need to be modified.
{ [-schedule <text>] - Schedule
This option is used to set and modify the schedule.
schedule is [day_list][@hour_list] or [hour_list][@day_list] or - or auto or manual
The day_list specifies the days of the week that an efficiency operation should run. It is a list of the first three
letters of the day (sun, mon, tue, wed, thu, fri, sat), separated by a comma. Day ranges such as mon-fri can
also be used. The default day_list is sun-sat. The names are not case sensitive.
The hour_list specifies the hours of each scheduled day that an efficiency operation should run. The hour_list
is from 0 to 23, separated by a comma. Hour ranges such as 8-17 are allowed. Step values can be used in
conjunction with ranges (For example, 0-23/2 means every two hours in a day). The default hour_list is 0, i.e.
at midnight of each scheduled day.
When efficiency is enabled on a volume for the first time, an initial schedule is assigned to the volume. This
initial schedule is sun-sat@0, which means run once every day at midnight.
If "-" is specified, no schedule is set on the volume. The auto schedule string triggers an efficiency operation
depending on the amount of new data written to the volume. The manual schedule string prevents SIS from
automatically triggering any operations and disables change-logging. This schedule string can only be used on
SnapVault destination volumes. The use of this schedule is mainly desirable when inline compression is
enabled on a SnapVault destination volume and background processing is not necessary.
Note that schedule and policy are mutually exclusive options.
| [-policy <text>]} - Efficiency Policy Name
This option is used to set an efficiency policy. The policy cannot be changed to the predefined inline-only
policy when there is an active background operation on the volume.
Note that schedule and policy are mutually exclusive options.
You can use the inline-only predefined efficiency policy to run inline compression without the need of any
background efficiency operations.
[-inline-dedupe {true|false}] - Inline Dedupe
This option is used to enable and disable inline deduplication. The default value is false.
You can use the inline-only predefined efficiency policy to run inline deduplication without the need of
any background efficiency operations.
[-data-compaction {true|false}] - Data Compaction
This option is used to enable and disable data compaction. The default value is false.
[-cross-volume-inline-dedupe {true|false}] - Cross Volume Inline Deduplication
This option is used to enable and disable cross volume inline deduplication. The default value is false.
[-cross-volume-background-dedupe {true|false}] - Cross Volume Background Deduplication
This option is used to enable and disable cross volume background deduplication. The default value is false.
Examples
The following examples modify efficiency options on a volume.
cluster1::> volume efficiency modify -vserver vs1 -volume vol1 -schedule sun-sat@12
cluster1::> volume efficiency modify -vserver vs1 -volume vol1 -policy policy1
cluster1::> volume efficiency modify -vserver vs1 -volume vol1 -compression true -inline-
compression true -inline-dedupe true -data-compaction true -cross-volume-inline-dedupe true -cross-
volume-background-dedupe true
Description
The volume efficiency off command disables efficiency on a volume.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver on which the volume is located.
{ -volume <volume name> - Volume Name
Specifies the name of the volume on which efficiency needs to be disabled.
Examples
The following examples disable efficiency on a volume:
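For instance (the Vserver and volume names are illustrative):
cluster1::> volume efficiency off -vserver vs1 -volume vol1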
volume efficiency on
Enable efficiency on a volume
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The volume efficiency on command enables efficiency on a volume. The specified volume must be online. Efficiency
operations will be started periodically according to a per volume schedule or policy. The volume efficiency modify
command can be used to modify schedule and the volume efficiency policy modify command can be used to modify
policy. You can also manually start an efficiency operation with the volume efficiency start command.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the volume is located.
{ -volume <volume name> - Volume Name
This specifies the name of the volume on which efficiency needs to be enabled.
| -path </vol/volume>} - Volume Path
This specifies the volume path on which efficiency needs to be enabled.
Examples
The following examples enable efficiency on a volume.
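For instance (the Vserver and volume names are illustrative):
cluster1::> volume efficiency on -vserver vs1 -volume vol1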
Related references
volume efficiency modify on page 1521
volume efficiency policy modify on page 1538
volume efficiency start on page 1530
Description
The volume efficiency prepare-to-downgrade command updates efficiency configurations and metadata to be
compatible with releases prior to ONTAP 9. This command also disables the use of incompatible efficiency features. This
command is not supported on FlexGroups.
Parameters
[-disable-feature-set <downgrade version>] - Data ONTAP Version
This parameter specifies the Data ONTAP version that introduced the new volume efficiency feature set.
Examples
The following example disables the features introduced in Data ONTAP 8.3.1.
The following example disables the features introduced in Data ONTAP 8.3.2.
The following example ignores offline volumes while disabling the features introduced in Data ONTAP 8.3.2.
The following example ignores offline volumes while disabling the features introduced in Data ONTAP 8.3.1.
Description
Use the volume efficiency promote command to promote a volume from deprioritized state back to auto state.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the volume is located.
{ -volume <volume name> - Volume Name
This specifies the name of the volume on which auto scheduling needs to be restarted.
Examples
The following examples promote a volume from deprioritized state back to auto state.
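For instance (the Vserver and volume names are illustrative):
cluster1::> volume efficiency promote -vserver vs1 -volume vol1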
Description
The volume efficiency revert-to command reverts the format of volume efficiency metadata for the volume to the given
version of Data ONTAP. This command is not supported on FlexGroups.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the volume is located.
{ -volume <volume name> - Volume Name
This specifies the name of the volume for which volume efficiency metadata needs to be reverted.
| -path </vol/volume>} - Volume Path
This specifies the volume path for which volume efficiency metadata needs to be reverted.
[-version <revert version>] - Revert to Version
Specifies the version of Data ONTAP to which the volume efficiency metadata needs to be formatted.
[-delete | -d {true|false}] - Delete Existing Metafile on Revert
If set to true, this parameter specifies that the volume efficiency metadata be deleted instead of reverting its
format. By default this parameter is set to false.
[-clean-up | -c {true|false}] - Delete Previously Downgraded Metafiles
If set to true, this parameter specifies that the volume efficiency metadata already reverted using volume
efficiency revert-to be deleted. By default this parameter is set to false.
[-revert-adaptive-compression {true|false}] - Downgrade to minor version
If set to true, this parameter specifies that the volume efficiency metadata needs to be reverted to a minor
version of Data ONTAP. By default this parameter is set to false.
[-check-snapshot {true|false}] - Revert ignore snapshots
If set to false, this parameter specifies that the volume efficiency revert will not check for Snapshot copies
created by previous releases of Data ONTAP. By default this parameter is set to true.
Examples
The following examples revert volume efficiency metadata on a volume named vol1 located in vserver vs1 to version 8.3.
cluster1::> volume efficiency revert-to -vserver vs1 -volume vol1 -version 8.3
cluster1::> volume efficiency revert-to -vserver vs1 -path /vol/vol1 -version 8.3
Description
The volume efficiency show command displays the information about storage efficiency of volumes. The command output
depends on the parameter or parameters specified. If no parameters are specified, the command displays the following
information for all volumes with efficiency:
• Status: Status of the efficiency on the volume. Following are the possible values:
◦ Downgrading: An efficiency operation necessary to downgrade the efficiency metafiles to a previous Data ONTAP
release is active.
• Progress: The progress of the current efficiency operation with information as to which stage of the efficiency process is
currently in progress and how much data is processed for that stage. For example: "25 MB Scanned", "20 MB Searched",
"500 KB (2%) Compressed", "40 MB (20%) Done", "30 MB Verified".
To display detailed information, run the command with the -l or -instance parameter. The detailed view provides all
information in the previous list and the following additional information:
• Inline Compression: Current state of inline compression on the volume (Enabled or Disabled).
• Minimum Blocks Shared: The minimum number of adjacent blocks in a file that can be shared.
• Blocks Skipped Sharing: Blocks skipped sharing because of the minimum block share value.
• Last Successful Operation Begin: The time and date at which the last successful operation began.
• Last Successful Operation End: The time and date at which the last successful operation ended.
• Last Operation End: The time and date at which the last operation ended.
• Change Log Usage: The percentage of the change log that is used.
• Logical Data: The total logical data in the volume, and how much is reached compared to the deduplication logical data
limit.
• Queued Job: The job that is queued. Following are the possible values:
◦ check: A job to eliminate stale data from the fingerprint database is queued.
◦ downgrading: An efficiency operation necessary to downgrade the efficiency metafiles to a previous Data ONTAP
release is queued.
• Stale Fingerprints: The percentage of stale entries in the fingerprint database. If this is greater than 20 percent, a subsequent
volume efficiency start operation triggers the verify operation, which might take a long time to complete.
• Inline Dedupe: Current state of inline deduplication on the volume (Enabled or Disabled).
• Cross Volume Inline Deduplication: Current state of cross volume inline deduplication on the volume (Enabled or Disabled).
• Cross Volume Background Deduplication: Current state of cross volume background deduplication on the volume (Enabled
or Disabled).
• Extended Compressed Data: Whether extended compressed data is present on the volume.
• Inline Adaptive Data Compaction: Whether Inline Adaptive Data Compaction is enabled or disabled on the volume. When
enabled, Data ONTAP combines data fragments to reduce on-disk block consumption.
You can specify additional parameters to display information that matches only those parameters. For example, to display
information only about volumes with efficiency in Vserver vs1, run the command with the -vserver vs1 parameter.
Parameters
{ [-fields <fieldname>, ...]
This specifies the fields that need to be displayed. The fields Vserver and volume name are the default fields.
| [-l ]
This option displays detailed information about the volumes with efficiency.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
Displays information only for those volumes that match the specified Vserver.
{ [-volume <volume name>] - Volume Name
Displays information only for those volumes that match the specified volume.
| [-path </vol/volume>]} - Volume Path
Displays information only for those volumes that match the specified volume path.
Examples
The following example displays information about all volumes with efficiency on the Vserver named vs1:
The following example displays detailed information about a volume named vol1 on a Vserver named vs1:
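Representative commands for these two examples (output omitted):
cluster1::> volume efficiency show -vserver vs1
cluster1::> volume efficiency show -vserver vs1 -volume vol1 -instance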
Related references
volume efficiency start on page 1530
Description
Use the volume efficiency start command to start an efficiency operation. The volume must be online and have
efficiency enabled. If there is an efficiency operation already active on the volume, this command fails.
When the volume efficiency start command is issued, a checkpoint is created at the end of each stage or sub-stage, or on
an hourly basis in the gathering phase. If at any point the volume efficiency start operation is stopped, the system can
restart the efficiency operation from the execution state saved in the checkpoint. The delete-checkpoint parameter can be
used to delete the existing checkpoint and restart a fresh efficiency operation. The checkpoint corresponding to gathering has a
validity period of 24 hours. If the user knows that significant changes have not been made on the volume, then such a gatherer
checkpoint whose validity has expired can be used with the help of the use-checkpoint parameter. There is no time
restriction for checkpoints of other stages.
When the volume is configured to use the inline-only efficiency policy, the system will stop monitoring changes to the data
for the purpose of running background efficiency operations. The background deduplication operations will be disabled. The
user can still execute compression specific efficiency operation with -scan-old-data and -compression parameters to
compress the existing data on the volume.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver on which the volume is located.
{ -volume <volume name> - Volume Name
Specifies the name of the volume.
| -path </vol/volume>} - Volume Path
Specifies the complete path of the volume.
[-scan-old-data | -s [true]] - Scan Old Data
This option scans the file system and processes all existing data. It prompts for user confirmation before
proceeding. Use the force option to suppress this confirmation.
{ [-use-checkpoint | -p [true]] - Use Checkpoint (if scanning old data)
Use the checkpoint when scanning existing data. Valid only if scan-old-data parameter is true.
Examples
The following examples start efficiency on a volume:
cluster1::> volume efficiency start -volume vol1 -vserver vs1 -queue -delete-checkpoint
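The following additional sketch (names are illustrative) scans the existing data on the volume, resuming from a saved checkpoint if one exists:
cluster1::> volume efficiency start -vserver vs1 -volume vol1 -scan-old-data true -use-checkpoint true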
Description
The volume efficiency stat command displays efficiency statistics. The output depends on the parameters specified with
the command. If no parameters are specified, the command displays the following efficiency statistics fields for all the volumes:
• Vserver: The Vserver that the volume belongs to.
• Inline Incompressible CGs: Number of compression groups that cannot be compressed by inline compression.
Parameters
{ [-fields <fieldname>, ...]
This specifies the fields that need to be displayed. The Vserver and volume name are the default fields.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
Displays statistics only for those volume(s) that match the specified Vserver.
{ [-volume <volume name>] - Volume Name
Displays statistics only for those volume(s) that match the specified volume name.
| [-path </vol/volume>]} - Volume Path
Displays statistics only for those volume(s) that match the specified volume path.
[-b [true]] - Display In Blocks
Displays usage size in 4k block counts.
[-num-compressed-inline <integer>] - Inline Compression Attempts
Displays statistics only for those volume(s) that match the specified number of Compression Groups attempted
inline.
Examples
The following example displays default efficiency statistics for all the volumes.
Vserver: vs1
Volume: vol3
Inline Compression Attempts: 0
Inline Incompressible CGs: 0
The following example shows the detailed statistics for vol1 in Vserver vs1.
Description
Use the volume efficiency stop command to stop an efficiency operation.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the volume is located.
{ -volume <volume name> - Volume Name
This specifies the name of the volume on which efficiency operation needs to be stopped.
| -path </vol/volume>} - Volume Path
This specifies the volume path on which efficiency operation needs to be stopped.
[-all | -a [true]] - Stop All Operations
This specifies that both active and queued efficiency operations are to be aborted.
Examples
The following examples stop efficiency on a volume.
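For instance (the Vserver and volume names are illustrative):
cluster1::> volume efficiency stop -vserver vs1 -volume vol1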
Description
The volume efficiency undo command removes volume efficiency on a volume by undoing compression, undoing
compaction, removing all the block sharing relationships, and cleaning up any volume efficiency specific data structures.
Any efficiency operations on the volume must be disabled before issuing this command. The volume efficiency configuration is
deleted when the undo process completes. The command is used to revert a volume to an earlier version of Data ONTAP where
volume efficiency features are not supported.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the volume is located.
{ -volume <volume name> - Volume Name
This specifies the volume name.
| -path </vol/volume>} - Volume Path
This specifies the volume path.
[-compression | -C [true]] - Decompress Data in the Volume
Undo the effects of compression. This requires efficiency to be disabled (by performing volume
efficiency off).
[-dedupe | -D [true]] - Undo Block Sharing in the Volume
Undo the effects of deduplication. This requires efficiency to be disabled (by performing volume
efficiency off).
[-inode | -i <integer>] - Inode Number to Undo Sharing
Remove the block sharings from a specified inode.
[-undo-type | -t {all|wrong}] - Selective Undo
This specifies whether to remove all or only invalid block sharing. When all is used, all block sharing is
removed. When wrong is used, only invalid sharing present in the volume is removed. When used together
with the -log parameter, information about all or invalid block sharing is logged without removing the sharing.
[-log | -d [true]] - Only Log Incorrect Savings
If specified, information about invalid block sharing relationships will only be logged. Invalid sharings will not
be removed. This parameter is only valid when the parameter -undo-type is specified as wrong.
[-data-compaction | -P [true]] - Undo Data Compaction in the Volume
Undo the effects of data compaction.
[-cross-volume-dedupe | -A [true]] - Undo Cross Volume Deduplication
Undo the effects of cross volume deduplication.
[-extended-compression | -X [true]] - Extended compression
Undo the effects of extended compression. This removes the compression savings for data that requires more
resources to compress.
Examples
The following are examples of how to use efficiency undo.
To undo deduplication and compression savings, but not compaction savings, in a volume named vol1 on a Vserver named
vs1:
To rewrite compressed blocks and undo compression savings in a volume named vol1 on a Vserver named vs1:
To rewrite compressed and deduplicated blocks without any efficiency in a volume named vol1 on a Vserver named vs1:
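Illustrative commands matching the scenarios above (placeholder names; flag combinations follow the parameter list):
cluster1::> volume efficiency undo -vserver vs1 -volume vol1 -dedupe true -compression true
cluster1::> volume efficiency undo -vserver vs1 -volume vol1 -compression true
cluster1::> volume efficiency undo -vserver vs1 -volume vol1 -dedupe true -compression true -data-compaction true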
Related references
volume efficiency off on page 1522
Description
The volume efficiency policy create command creates an efficiency policy.
Parameters
-vserver <vserver name> - Vserver
Specifies the Vserver on which the volume is located.
-policy <text> - Efficiency Policy Name
This specifies the policy name.
[-type <Efficiency policy type>] - Policy Type
This specifies the policy type. The policy type defines when the volume using this policy will start processing
a changelog. There are two possible values:
• threshold means changelog processing occurs when the changelog reaches a certain percentage.
• scheduled means changelog processing occurs according to the associated job schedule.
Examples
The following example creates an efficiency policy.
cluster1::> volume efficiency policy create -vserver vs1 -policy policy1 -schedule daily -duration
100
Related references
job schedule on page 161
Description
The volume efficiency policy delete command deletes an efficiency policy. An efficiency policy can be deleted only
when it is not associated with any volume. The pre-defined policies default and inline-only cannot be deleted.
Parameters
-vserver <vserver name> - Vserver
This specifies the Vserver on which the volume is located.
-policy <text> - Efficiency Policy Name
This specifies the policy name.
Examples
The following example deletes an efficiency policy:
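An illustrative command (placeholder names):
cluster1::> volume efficiency policy delete -vserver vs1 -policy policy1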
Description
The volume efficiency policy modify command can be used to modify the policy attributes.
The attributes of the inline-only predefined policy cannot be modified.
Examples
The following example modifies an efficiency policy.
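An illustrative command (placeholder names) that changes the duration of policy1 to 5 hours:
cluster1::> volume efficiency policy modify -vserver vs1 -policy policy1 -duration 5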
Related references
job schedule show on page 162
• Duration (Hours): The duration in hours that the efficiency operation can run.
• Enable: Whether the policy is enabled or not.
You can specify additional parameters to select the displayed information. For example, to display efficiency policies only with
duration 5 hours, run the command with the -duration 5 parameter.
The pre-defined policies default and inline-only are available when all the nodes in the cluster are running Data ONTAP
version 8.3 or later.
The inline-only pre-defined policy must be used when the user wants to use the inline compression feature without any
regularly scheduled or manually started background storage efficiency operations. When a volume is configured to use the
inline-only efficiency policy, the system will stop monitoring changes to the data for running the background efficiency
operations on the volume. Volumes cannot be configured with the inline-only policy if there is a currently active background
efficiency operation.
Parameters
{ [-fields <fieldname>, ...]
Selects the fields to be displayed. Vserver and policy are the default fields (see example).
| [-instance ]}
If this parameter is specified, the command displays detailed information about all entries.
[-vserver <vserver name>] - Vserver
Selects information about the policies that match the specified Vserver.
[-policy <text>] - Efficiency Policy Name
Selects information about the policies that match the specified policy name.
[-type <Efficiency policy type>] - Policy Type
Selects information about the policies that match the specified policy type. There are two possible values -
threshold and scheduled.
[-schedule <text>] - Job Schedule Name
Selects information about the policies that match the specified schedule.
[-duration <text>] - Duration
Selects information about the policies that match the specified duration hours.
[-start-threshold-percent <percent>] - Threshold Percentage
Selects information about the policies that match the specified start-threshold-percent. Valid only if -type
parameter is set as threshold.
[-qos-policy <Efficiency QoS policy>] - QoS Policy
Selects information about the policies that match the specified throttling method. The values can be
background or best-effort.
Examples
The following example shows all the efficiency policies with the matching Vserver vs1.
The following example shows all the policies with the following fields - Vserver (default), policy (default) and duration.
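Illustrative commands for the two cases above (placeholder names):
cluster1::> volume efficiency policy show -vserver vs1
cluster1::> volume efficiency policy show -fields duration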
Description
The volume encryption conversion pause command pauses the running encryption conversion operation on a volume.
Parameters
-vserver <vserver name> - Vserver Name
This parameter specifies the Vserver on which the volume is located.
Examples
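An illustrative command (placeholder names; the -volume parameter follows the pattern of the related conversion commands):
cluster1::> volume encryption conversion pause -vserver vs1 -volume vol1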
Description
The volume encryption conversion resume command resumes the paused encryption conversion operation on a volume.
Parameters
-vserver <vserver name> - Vserver Name
This parameter specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This parameter specifies the name of the volume being encrypted.
Examples
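An illustrative command (placeholder names):
cluster1::> volume encryption conversion resume -vserver vs1 -volume vol1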
Description
The volume encryption conversion show command displays information about volume encryption conversion in the
cluster. By default, with no parameters, it only shows volume encryption operations that have failed or are currently running.
The command display output depends on the parameters passed. If -vserver and -volume are specified, the following
information is displayed:
• Volume Name: The volume that is part of a completed or running volume encryption conversion operation.
• Start Time: The date and time when the volume encryption operation was started.
• Percentage Completed: The amount of work to encrypt the volume completed thus far in terms of percentage.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example shows a sample output for this command:
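An illustrative invocation (placeholder names; output varies by cluster):
cluster1::> volume encryption conversion show -vserver vs1 -volume vol1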
Description
The volume encryption conversion start command converts a non-encrypted volume to an encrypted volume.
Parameters
-vserver <vserver name> - Vserver Name
This parameter specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This parameter specifies the name of the volume being encrypted.
[-ignore-warning {true|false}] - Ignore Warning for Conversion Start
If this parameter is set, the command ignores the confirmation message.
Examples
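An illustrative command (placeholder names):
cluster1::> volume encryption conversion start -vserver vs1 -volume vol1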
Parameters
-vserver <vserver name> - Vserver Name
This parameter specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This parameter specifies the name of the volume being encrypted.
[-ignore-warning {true|false}] - Ignore Warning for Rekey Pause
If this parameter is set, the command ignores the confirmation message.
Examples
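An illustrative command (placeholder names):
cluster1::> volume encryption rekey pause -vserver vs1 -volume vol1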
Description
The volume encryption rekey resume command resumes the paused encryption rekey operation on a volume.
Parameters
-vserver <vserver name> - Vserver Name
This parameter specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This parameter specifies the name of the volume being encrypted.
Examples
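An illustrative command (placeholder names):
cluster1::> volume encryption rekey resume -vserver vs1 -volume vol1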
Description
The volume encryption rekey show command displays information about volume encryption rekey in the cluster. By
default, with no parameters, it only shows volume encryption rekey operations that have failed or are currently running. The
command display output depends on the parameters passed. If -vserver and -volume are specified, the following information
is displayed:
• Volume Name: The volume that is part of a completed or running volume encryption rekey operation.
• Start Time: The date and time when the volume encryption operation was started.
• Percentage Completed: The amount of work to encrypt the volume completed thus far in terms of percentage.
Examples
The following example shows a sample output for this command:
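An illustrative invocation (placeholder names; output varies by cluster):
cluster1::> volume encryption rekey show -vserver vs1 -volume vol1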
Description
The volume encryption rekey start command changes the encryption key of a volume.
Parameters
-vserver <vserver name> - Vserver Name
This parameter specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This parameter specifies the name of the volume being rekeyed.
[-ignore-warning {true|false}] - Ignore Warning for Rekey Start
If this parameter is set, the command ignores the confirmation message.
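Examples
An illustrative command (placeholder names):
cluster1::> volume encryption rekey start -vserver vs1 -volume vol1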
Description
The volume encryption secure-purge abort command aborts the secure purge operation on a volume.
Parameters
-vserver <vserver name> - Vserver
This parameter specifies the Vserver on which the volume is located.
-volume <volume name> - Volume
This parameter specifies the name of the volume being encrypted.
Examples
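An illustrative command (placeholder names):
cluster1::> volume encryption secure-purge abort -vserver vs1 -volume vol1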
Description
The volume encryption secure-purge show command displays information about the volume encryption secure-purge
operation in the cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver
This parameter specifies the Vserver on which the volume is located.
[-volume <volume name>] - Volume
This parameter specifies the name of the volume being secure purged.
[-status {invalid|initializing|snapshots-deleting|snapshots-deleted|zombies-draining|
zombies-drained|batched-free-log-draining|batched-free-log-drained|finishing-trash-purge|
finished-trash-purge|reencrypting|aborting|aborted|success|failure}] - Status
This parameter displays the status of the secure purge operation.
Examples
The following example shows a sample output for this command:
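An illustrative invocation (output varies by cluster):
cluster1::> volume encryption secure-purge show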
Description
The volume encryption secure-purge start command performs a secure purge of an encrypted volume.
Parameters
-vserver <vserver name> - Vserver
This parameter specifies the Vserver on which the volume is located.
-volume <volume name> - Volume
This parameter specifies the name of the volume being encrypted.
Examples
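An illustrative command (placeholder names):
cluster1::> volume encryption secure-purge start -vserver vs1 -volume vol1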
Description
The volume file compact-data command applies the Adaptive Data Compaction feature to the Snapshot copy of a file
such that partially filled blocks from that file will merge and consume less storage space.
Parameters
-node <nodename> - Node
This parameter specifies the node on which the target volume is located.
-vserver <vserver name> - Vserver Name
This specifies the Vserver in which the target file is located.
-file </vol/<volume name>/<file path>> - File Path
This specifies the complete file path. The Snapshot copy name can be specified as part of the path or by
specifying the -snapshot parameter.
[-volume <volume name>] - Volume Name
This specifies the volume in which the targeted file is located.
Examples
The following command applies the Adaptive Data Compaction feature to the Snapshot copy snap1 of the file /file1 in
volume vol1:
cluster1::> volume file compact-data -vserver vs1 -volume vol1 -file /vol/vol1/file1 -snapshot
snap1
Description
This command adds and removes files from QoS policy groups. QoS policy groups define measurable service level objectives
(SLOs) that apply to the storage objects with which the policy group is associated. A QoS policy group associated with this file
can be created, modified, and deleted. You cannot associate a file with a QoS policy group if a LUN was created from the file.
Parameters
-vserver <vserver name> - Vserver Managing Volume
This specifies the Vserver on which the volume (containing the file) resides.
-volume <volume name> - Volume Name
This specifies the name of the volume. The name must be unique within the hosting Vserver.
-file <text> - File Path
This specifies the actual path of the file with respect to the volume.
{ [-qos-policy-group <text>] - QoS Policy Group Name
This option associates the file with a QoS policy group. This policy group manages storage system resources
to deliver your desired level of service. If you do not assign a policy to a file, the system will not monitor and
control the traffic to it. To remove this file from a QoS policy group, enter the reserved keyword “none”.
| [-qos-adaptive-policy-group <text>]} - QoS Adaptive Policy Group Name
This optional parameter specifies which QoS adaptive policy group to apply to the file. This policy group
defines measurable service level objectives (SLOs) and Service Level Agreements (SLAs) that adjust based on
the file's allocated space or used space. To remove this file from an adaptive policy group, enter the reserved
keyword "none".
[-caching-policy <text>] - Caching Policy Name
This optionally specifies the caching policy to apply to the file. A caching policy defines how the system
caches this volume's data in Flash Cache modules. If a caching policy is not assigned to this file, the system
uses the caching policy that is assigned to the containing volume. If a caching policy is not assigned to the
containing volume, the system uses the caching policy that is assigned to the containing Vserver. If a caching
policy is not assigned to the containing Vserver, the system uses the default cluster-wide policy. The available
caching policies are:
• auto - Read caches all metadata and randomly read user data blocks, and write caches all randomly
overwritten user data blocks.
• random_read_write - Read caches all metadata, randomly read and randomly written user data blocks.
• all_read - Read caches all metadata, randomly read and sequentially read user data blocks.
• all_read_random_write - Read caches all metadata, randomly read, sequentially read, and randomly written
user data.
• all - Read caches all data blocks read and written. It does not do any write caching.
Examples
cluster1::> vol file modify -vserver vs0 -volume vs0_vol56 -file 1.txt -qos-policy-group fast
-caching-policy all-read
Description
The volume file privileged-delete command is used to perform a privileged-delete operation on unexpired WORM
files on a SnapLock enterprise volume. The only built-in role that has access to the command is "vsadmin-snaplock".
Parameters
-vserver <vserver name> - Vserver
Specifies the Vserver which hosts the SnapLock enterprise volume.
-file </vol/<volume name>/<file path>> - File Path
Specifies the absolute path of the file to be deleted. The value begins with /vol/<volumename>.
Examples
The following example deletes the unexpired WORM file "/vol/vol1/wormfile". The file wormfile is stored in
volume vol1 under Vserver vserver1.
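An illustrative command matching that description:
cluster1::> volume file privileged-delete -vserver vserver1 -file /vol/vol1/wormfile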
Description
The volume file reservation command can be used to query the space reservation settings for the named file, or to
modify those settings. With no further modifiers, the command will report the current setting of the space reservation flag for a
file. This tells whether or not space is reserved to fill holes in the file and to overwrite existing portions of the file that are also
stored in a snapshot. For symlinks, the link is followed and the command operates on the link target.
Examples
The following example enables the file reservation setting for the file named file1. The file file1 is stored in volume
testvol on Vserver vs0.
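An illustrative command matching that description (the trailing enable keyword follows the classic file reservation syntax and may vary):
cluster1::> volume file reservation -vserver vs0 /vol/testvol/file1 enable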
Description
This command requires a path to a file in a volume and displays the following information:
• Vserver name
If not logged in as Vserver administrator, the command also requires a Vserver name.
Note: The "-instance" option provides the same result as the default as there are no extra fields to display.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-h ]
If this parameter is specified, the command displays total bytes used by the file in human readable form.
| [-k ]
If this parameter is specified, the command displays total bytes used by the file in kilobytes.
| [-m ]
If this parameter is specified, the command displays total bytes used by the file in megabytes.
| [-u ]
If this parameter is specified, the command displays the unique bytes used by the file (bytes that are not shared
with any other file in the volume due to deduplication or FlexClone files) in kilobytes.
| [-uh ]
If this parameter is specified, the command displays the unique bytes used by the file in human readable form.
Examples
The following example displays the disk-usage of the file file1.txt in volume /vol/root_vs0.
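An illustrative command (the command name and positional path are reconstructed from the description):
cluster1::> volume file show-disk-usage -vserver vs0 /vol/root_vs0/file1.txt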
Description
This command requires a path to a file in a volume and displays the file handle information described below:
• Vserver name
• File ID
If not logged in as a Vserver administrator, the command also requires a Vserver name.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Managing Volume
This specifies the Vserver where the file resides.
[-path <text>] - Path to File
This specifies the path to the file.
Examples
The following example displays the file handle information of a file named file1.txt in the volume /vol/vol1.
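An illustrative command using the documented -path parameter:
cluster1::> volume file show-filehandle -vserver vs1 -path /vol/vol1/file1.txt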
Description
This command displays information about all the files having a given inode in a volume of a Vserver. If the
-snapshot-id or -snapshot-name parameter is specified, the command displays file information from the Snapshot copy;
otherwise, it displays the information from the active file system. The -vserver, -volume and -inode-number are
mandatory parameters.
If no optional parameter is specified, the command displays the following fields for all the files having the given inode:
• Vserver Name
• Volume Name
• Inode Number
• File Path
The volume file show-inode command is only supported on flexible volumes and FlexGroup constituents.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields parameter, the command output also includes the specified field or fields.
• Vserver Name
• Volume Name
• Inode Number
• Snapshot Name
• Snapshot ID
• File Path
| [-instance ]}
If this parameter is specified, the command displays detailed information about the files matching the specified
inode number. The following information is displayed:
• Vserver Name
• Volume Name
• Inode Number
• File Path
• Snapshot Name
• Snapshot ID
• File Name
Examples
The following example displays information about all the files having the inode number 96 in the active file system of a
volume named vol1 on a Vserver named vs1:
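An illustrative command for the active file system case (mirroring the Snapshot example below without the Snapshot parameter):
cluster1::> volume file show-inode -vserver vs1 -volume vol1 -inode-number 96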
The following example displays information about all the files with inode number 96 in a Snapshot copy named mysnap.
The Snapshot copy is present in a volume named vol1 on a Vserver named vs1:
cluster1::> volume file show-inode -vserver vs1 -volume vol1 -inode-number 96 -snapshot-name
mysnap
Inode Snapshot Snapshot
Vserver Volume Number Name ID File Path
--------- ------------ -------- -------- -------- -------------------------
vs1 vol1 96 mysnap 131 /vol/vol1/.snapshot/mysnap/file1
vs1 vol1 96 mysnap 131 /vol/vol1/.snapshot/mysnap/file2
2 entries were displayed.
The following example displays detailed information about all the files with inode number 96 in a Snapshot copy named
mysnap. The Snapshot copy is present in a volume named vol1 on a Vserver named vs1:
cluster1::> volume file show-inode -vserver vs1 -volume vol1 -inode-number 96 -snapshot-name
mysnap -instance
Description
The volume file clone autodelete command enables or disables the automatic deletion of file, LUN or NVMe
namespace clones. Newly created file, LUN and NVMe namespace clones are disabled for automatic deletion by default.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the volume resides. If only one data Vserver exists, you do not need to
specify this parameter.
[-volume <volume name>] - Volume Name
This specifies the name of the volume in which the file, LUN or NVMe namespace is present.
-clone-path <text> - Clone Path
This specifies the path where the clone resides. If you use the -volume parameter, then specify the relative path to
the file, LUN or NVMe namespace clone. Otherwise, specify the absolute path.
-enable {true|false} - Enable or Disable Autodelete
This parameter enables or disables the autodelete feature for the file, LUN or NVMe namespace clones in a
specified volume if the clones are already added for automatic deletion. If you set the parameter to true, the
specified file, LUN or NVMe namespace clones get automatically deleted in the 'try' or 'disrupt' mode. If the
value is false, the clones get automatically deleted only in the 'destroy' mode.
[-force [true]] - Force Enable or Disable Autodelete
If -enable is true then this parameter forces automatic deletion of a specified file, LUN or NVMe
namespace, or a file, LUN or NVMe namespace clone. If -enable is false then specifying this parameter
disables autodeletion on a file, LUN or NVMe namespace - or a file, LUN or NVMe namespace clone - even
if -commitment destroy is specified.
Examples
The following command enables automatic deletion of a LUN clone named lun_clone contained in a volume named
volume1. This volume is present on a Vserver named vs1.
cluster1::> volume file clone autodelete /vol/volume1/lun_clone -enable true -vserver vs1
The following command specifies the relative clone path when the volume parameter is specified in the command.
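An illustrative command (placeholder names; the clone path is relative because -volume is given):
cluster1::> volume file clone autodelete -vserver vs1 -volume volume1 -clone-path lun_clone -enable true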
• The option to avoid space reservations for the new file or LUN clone
• The option to assign a QoS policy group to the new file or LUN clone
• The option to assign a caching policy to the new file or LUN clone
• The option to mark the new file, LUN or NVMe namespace clone created for auto deletion
File, LUN or NVMe namespace clones create a duplicate copy of another file, LUN or NVMe namespace, but don't require
copying the data itself. This allows the clone operation to occur in constant time, taking the same amount of time to complete no
matter the size of the file being cloned. This also means that clones require only a small amount of additional storage space
because the clone shares the data with the source file, LUN or NVMe namespace.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver in which the parent volume resides. If only one data Vserver exists, you do not need
to specify this parameter.
[-volume <volume name>] - Volume
This specifies the name of volume in which a file, LUN or NVMe namespace is going to be cloned.
-source-path <text> - Source Path
This specifies the path to the file, LUN or NVMe namespace to be cloned relative to the specified volume.
-destination-path <text> - Destination Path
This specifies the path for the newly-created cloned file, LUN or NVMe namespace relative to the specified
volume. If the file, LUN or NVMe namespace clone to be created is a whole file, LUN or NVMe namespace,
the destination file, LUN or NVMe namespace must not exist. If the range parameter is specified, the
destination file or LUN must exist. If the snapshot-name parameter is specified, this option is mandatory.
[-snapshot-name | -s <snapshot name>] - Source Snapshot
The name of the Snapshot copy to use as the source for the clone operation. If this value is not specified, the
active filesystem will be used instead.
{ [-range | -r <<source start block>:<destination start block>:<block length>>, ...] - Block Range
This specifies the block range to be cloned. If the range is not specified, the entire file, LUN, or NVMe
namespace is cloned. The block range should be specified in the format s:d:n where s is the source start block
number, d is the destination start block number, and n is the length in blocks to be cloned. The range of n
should be from 1 to 32768 or 1 to 16777216 in case of clone from Active File System or Snapshot copy
respectively. If this parameter is used in the path provided by the destination-path, the parameter must
refer to a file, LUN, or NVMe namespace which already exists. If either the source or destination is a LUN or
NVMe namespace, then the block size is measured in LBA blocks. The source object block size and
destination block size must be equal. If neither the source nor destination is a LUN or NVMe namespace, then
the block size will be 4KB. If 512-byte sectors are used, the source and destination offsets must have the same
offset within 4KB blocks.
This option is most likely to be used by external automated systems in managing virtual disk configurations
and not by human administrators.
• auto - Read caches all metadata and randomly read user data blocks, and write caches all randomly
overwritten user data blocks.
• random_read - Read caches all metadata and randomly read user data blocks.
• random_read_write - Read caches all metadata, randomly read and randomly written user data blocks.
• all_read - Read caches all metadata, randomly read and sequentially read user data blocks.
• all_read_random_write - Read caches all metadata, randomly read, sequentially read, and randomly written
user data.
• all - Read caches all data blocks read and written. It does not do any write caching.
Examples
The following command creates a FlexClone file of the file named myfile contained in a volume named vol. The file
myfile is located in the root directory of that volume. The cloned file myfile_copy resides in the root directory of the
same volume.
cluster1::> volume file clone create -volume vol -source-path /myfile -destination-path /
myfile_copy
The following command optionally associates the FlexClone file named myfile_copy with the fast QoS policy group
and the caching policy named random-read.
cluster1::> volume file clone create -volume vol -source-path /myfile -destination-path /
myfile_copy -qos-policy-group fast -caching-policy random-read
Description
The volume file clone show-autodelete command displays the autodelete details of a file, LUN or NVMe namespace
clone. The command displays the following information about a file, LUN or NVMe namespace clone:
• Vserver Name
• Clone Path
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
Examples
The following example displays the autodelete information about a file, LUN or NVMe namespace clone.
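An illustrative command (placeholder clone path):
cluster1::> volume file clone show-autodelete -vserver vs1 -clone-path /vol/volume1/lun_clone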
Description
The volume file clone deletion add-extension command can be used to add new supported file extensions for clone
delete.
Parameters
-vserver <vserver name> - Vserver Name
Name of the vserver.
-volume <volume name> - Volume Name
Name of the volume.
-extensions <text> - Supported Extensions for Clone Delete
List of supported file extensions for clone delete.
Examples
The following example adds vmdk and vhd as supported file extensions for clone delete on volume vol1 of vserver vs1.
cluster1::> volume file clone deletion add-extension -vserver vs1 -volume vol1 -extensions vmdk,vhd
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Parameters
-vserver <vserver name> - Vserver Name
Name of the vserver.
-volume <volume name> - Volume Name
Name of the volume.
[-minimum-size {<integer>[KB|MB|GB|TB|PB]}] - Minimum Size Required for Clone delete
Minimum clone file size required for clone delete.
Examples
The following example changes the required minimum file size to 100M for volume vol1 of vserver vs1.
cluster1::> volume file clone deletion modify -volume vol1 -vserver vs1 -minimum-size 100M
Description
The volume file clone deletion remove-extension command can be used to remove the existing file extensions that
are no longer supported for clone delete.
Parameters
-vserver <vserver name> - Vserver Name
Name of the vserver.
-volume <volume name> - Volume Name
Name of the volume.
[-extensions <text>] - Unsupported Extensions for Clone Delete
List of unsupported file extensions for clone delete.
Examples
The following example removes the vmdk and vhd file extensions, which are no longer supported for clone delete, from volume vol1 of vserver vs1.
cluster1::> volume file clone deletion remove-extension -vserver vs1 -volume vol1 -extensions
vmdk,vhd
Description
The volume file clone deletion show command displays the following information for clone delete:
• Volume Name
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
Name of the vserver.
[-volume <volume name>] - Volume Name
Name of the volume.
[-extensions <text>, ...] - Supported Extensions for Clone Delete
List of supported file extensions for Clone Delete.
[-minimum-size {<integer>[KB|MB|GB|TB|PB]}] - Minimum Size Required for Clone delete
Minimum file size required for Clone Delete.
Examples
The following example displays the clone deletion information for all volumes of all vservers.
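An illustrative invocation (output varies by cluster):
cluster1::> volume file clone deletion show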
The following example displays the clone deletion information for volume testvol of vserver vs0.
cluster1::> volume file clone deletion show -vserver vs0 -volume testvol
Vserver Name: vs0
Volume Name: testvol
Supported Extensions for Clone Delete: vmdk, vhd, vhdx, vswp
Minimum Size Required for Clone delete: 100B
Description
The volume file clone split load modify command can be used to change the maximum split load (file, LUN or
NVMe namespace clones) of a node.
Parameters
-node {<nodename>|local} - Node Name
Node name on which the new maximum split load is being applied.
[-max-split-load {<integer>[KB|MB|GB|TB|PB]}] - Maximum Clone Split Load
This specifies the new maximum split load of a node. This is the maximum amount of clone create load that the
node can take at any point in time. If the load crosses this limit, new clone create requests are not allowed
until the split load drops below the maximum split load.
Examples
The following example changes the maximum split load limit to 10TB on node1.
cluster1::> volume file clone split load modify -node node1 -max-split-load 10TB
Description
The volume file clone split load show command displays the corresponding file, LUN or NVMe namespace clone
split loads on nodes. If no parameters are specified, the command displays the following information:
• Node
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
Node on which the file, LUN or NVMe namespace clone split load is displayed.
Examples
The following example displays the current and allowable file, LUN or NVMe namespace clone split load on a node.
cluster1::> volume file clone split load show -node clone-01 -instance
Description
The volume file fingerprint abort command aborts an in-progress fingerprint operation. This command only aborts the
fingerprint operations that have not yet completed. This command takes session-id as input and aborts the fingerprint operation
that is associated with that particular session-id.
Parameters
-session-id <integer> - Session ID of Fingerprint Operation
Specifies the session-id of the fingerprint operation that needs to be aborted. It is a unique identifier for the
fingerprint operation. This session-id is returned when the fingerprint operation is started on a file.
Examples
The following example aborts the fingerprint operation identified by 17039361:
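Following the parameter syntax documented above, the invocation would be (the cluster prompt is illustrative):

```
cluster1::> volume file fingerprint abort -session-id 17039361
```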
Description
The volume file fingerprint dump command displays the following information given the -session-id of the
fingerprint operation:
• Vserver:
The Vserver on which the file exists.
• Session-ID:
A unique identifier for the fingerprint operation. This session-id is returned when the fingerprint operation is started on a file.
The session-id of the fingerprint operation can be used to get the progress of an ongoing fingerprint operation as well as the
complete fingerprint output for the file once the operation is completed.
• Volume:
The name of the volume on which the file resides.
• Path:
The absolute path of the file on which the fingerprint is calculated. The value begins with /vol/<volumename>.
• Data Fingerprint:
The digest value of data of the file. The fingerprint is base64 encoded. This field is not included if the scope is metadata-
only.
• Metadata Fingerprint:
The digest value of metadata of the file. The metadata fingerprint is calculated for file size, file ctime, file mtime, file crtime,
file retention time, file uid, file gid, and file type. The fingerprint is base64 encoded. This field is not included if the scope is
data-only.
• Fingerprint Algorithm:
The digest algorithm which is used for the fingerprint computation. Fingerprint is computed using md5 or sha-256 digest
algorithm.
• Fingerprint Scope:
The scope of the file which is used for the fingerprint computation. Fingerprint is computed over data-only, metadata-
only, or data-and-metadata.
• Fingerprint Version:
The version of the fingerprint output format.
• SnapLock License:
The status of the SnapLock license.
• Vserver UUID:
A universal unique identifier for the Vserver on which the file exists.
• Volume MSID:
The master data set identifier of the volume where the file resides.
• Volume DSID:
The data set identifier of the volume where the file resides.
• Hostname:
The name of the storage system where the fingerprint operation is performed.
• Filer ID:
The NVRAM identifier of the storage system.
• Volume Containing Aggregate:
The name of the aggregate in which the volume resides.
• Aggregate ID:
A universal unique identifier for the aggregate containing the volume.
• Volume ComplianceClock:
The volume ComplianceClock time in seconds since 1 January 1970 00:00:00 in GMT timezone. This has a value only for
SnapLock volumes.
• Filesystem ID:
The filesystem identifier of the volume on which the file resides.
• File ID:
A unique number within the filesystem identifying the file.
• File Type:
The type of the file. Possible values include: worm, worm_appendable, worm_active_log, worm_log, and regular.
• File Size:
The size of the file in bytes.
• Modification Time:
The last modification time of the file in seconds since 1 January 1970 00:00:00 in GMT timezone.
• Changed Time:
The last changed time of the file attributes in seconds since 1 January 1970 00:00:00 in GMT timezone. Time is taken from
the system clock for regular files and from the volume ComplianceClock for WORM files when they are committed. The
changed time can be in wraparound format.
• Retention Time:
The retention time of the files committed to WORM on SnapLock volumes in seconds since 1 January 1970 00:00:00 in
GMT timezone. The retention time can be in wraparound format.
• Access Time:
The last access time of the regular files on SnapLock volumes and files on non-SnapLock volumes attributes in seconds
since 1 January 1970 00:00:00 in GMT timezone.
• Owner ID:
The integer identifier of the owner of the file.
• Group ID:
The integer identifier of the group owning the file.
• Owner SID:
The security identifier of the owner of the file when it has NTFS security style.
• Litigation Count:
The number of litigations on the file.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
-session-id <integer> - Session ID of Fingerprint Operation
Specifies the session-id of the fingerprint operation whose output is to be displayed. It is a unique identifier for
the fingerprint operation. This session-id is returned when the fingerprint operation is started on a file.
Examples
The following example displays the fingerprint information of the fingerprint session identified by session-id 17039367:
Vserver:vs1
Session-ID:17039367
Volume:nfs_slc
Path:/vol/nfs_slc/worm
Data Fingerprint:MOFJVevxNSJm3C/4Bn5oEEYH51CrudOzZYK4r5Cfy1g=
Metadata Fingerprint:8iMjqJXiNcqgXT5XuRhLiEwIrJEihDmwS0hrexnjgmc=
Fingerprint Algorithm:SHA256
Fingerprint Scope:data-and-metadata
Fingerprint Start Time:1460612586
Formatted Fingerprint Start Time:Thu Apr 14 05:43:06 GMT 2016
Fingerprint Version:3
SnapLock License:available
Vserver UUID:acf7ae64-00d6-11e6-a027-0050569c55ae
Volume MSID:2152884007
Volume DSID:1028
Hostname:cluster1
Filer ID:5f18eda2-00b0-11e6-914e-6fb45e537b8d
Volume Containing Aggregate:slc_aggr
Aggregate ID:c84634aa-c757-4b98-8f07-eefe32565f67
SnapLock System ComplianceClock:1460610635
Formatted SnapLock System ComplianceClock:Thu Apr 14 05:10:35 GMT 2016
Volume SnapLock Type:compliance
Volume ComplianceClock:1460610635
Formatted Volume ComplianceClock:Thu Apr 14 05:10:35 GMT 2016
Volume Expiry Date:1465880998
Is Volume Expiry Date Wraparound:false
Formatted Volume Expiry Date:Tue Jun 14 05:09:58 GMT 2016
Filesystem ID:1028
File ID:96
File Type:worm
File Size:1048576
Creation Time:1460612515
Formatted Creation Time:Thu Apr 14 05:41:55 GMT 2016
Modification Time:1460612515
Formatted Modification Time:Thu Apr 14 05:41:55 GMT 2016
Changed Time:1460610598
Is Changed Time Wraparound:false
Formatted Changed Time:Thu Apr 14 05:09:58 GMT 2016
Retention Time:1465880998
Is Retention Time Wraparound:false
Formatted Retention Time:Tue Jun 14 05:09:58 GMT 2016
Access Time:-
Formatted Access Time:-
Owner ID:0
Group ID:0
Description
The volume file fingerprint show command returns information for one or several fingerprint operations. This
command requires either -session-id or -vserver and -volume.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-session-id <integer>] - Session ID of Fingerprint Operation
If this parameter is specified, the command returns the progress of the fingerprint operation of the specified
session-id. The session-id is a unique identifier for the fingerprint operation that is returned when the
fingerprint operation is started on a file.
[-vserver <vserver name>] - Vserver
If this parameter is specified, -volume must also be specified. When queried with -vserver and -volume,
the command returns the progress of all the fingerprint operations running on that particular volume.
[-volume <volume name>] - Volume Name
If this parameter is specified, -vserver must also be specified. When queried with -vserver and -volume,
the command returns the progress of all the fingerprint operations running on that particular volume.
[-file </vol/<volume name>/<file path>>] - File Path
If this parameter is specified, the command returns the progress of all fingerprint operations on the specified
file.
[-operation-status {Unknown|In-Progress|Failed|Aborting|Completed}] - Operation Status
If this parameter is specified, the command returns the progress of all fingerprint operations with matching
status value.
[-progress-percentage <integer>] - Progress Percentage
If this parameter is specified, the command returns the progress of all fingerprint operations with matching
progress percentage value.
Examples
The following example displays the progress of all the fingerprint operations running on volume nfs_slc:
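A plausible invocation, using the Vserver and volume names from the fingerprint dump example above (names are illustrative):

```
cluster1::> volume file fingerprint show -vserver vs1 -volume nfs_slc
```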
Description
The volume file fingerprint start command starts the fingerprint computation on a file. The fingerprint computation is
started on the file, and a session-id is returned. This session-id is a unique identifier for the fingerprint operation and can be
used to get the progress of an ongoing fingerprint operation as well as the complete fingerprint output for the file once the
operation is completed.
Parameters
-vserver <vserver name> - Vserver
Specifies the name of the Vserver that owns the volume on which the file resides.
-file </vol/<volume name>/<file path>> - Path
Specifies the absolute path of the file on which fingerprint needs to be calculated. The value begins with /vol/
<volumename>.
[-algorithm {MD5|SHA256}] - Fingerprint Algorithm
Specifies the digest algorithm which is used for the fingerprint computation.
Fingerprint can be computed using one of the following digest algorithms:
• md5
• sha-256
[-scope {data-only|metadata-only|data-and-metadata}] - Fingerprint Scope
Specifies the scope of the file which is used for the fingerprint computation.
Fingerprint can be computed over one of the following scopes:
• data-only
• metadata-only
• data-and-metadata
Examples
The following example starts computing fingerprint over data and metadata for file /vol/nfs_slc/worm using md5 hash
algorithm. The file /vol/nfs_slc/worm is stored in volume nfs_slc on Vserver vs0.
cluster1::> volume file fingerprint start -vserver vs0 -scope data-and-metadata -algorithm md5 -
file /vol/nfs_slc/worm
File fingerprint operation is queued. Run "volume file fingerprint show -session-id 16973825" to
view the fingerprint session status.
Description
The volume file retention show command displays the retention time of a file protected by SnapLock given -vserver
and -file.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
Specifies the name of the Vserver which has the file.
[-file </vol/<volume name>/<file path>>] - File path
Specifies the absolute path of the file. The value begins with /vol/<volumename>.
[-retention-time <integer>] - Retention Time of the File
If this parameter is specified, the command returns the retention time of the file protected by SnapLock if its
retention time in seconds since 1 January 1970 00:00:00 matches the specified value.
[-formatted-retention-time <text>] - Formatted Retention Period
If this parameter is specified, the command returns the retention time of the file protected by SnapLock if its
expiry date matches the specified value. The expiry date format is <day> <month> <day of month>
<hour>:<min>:<sec> <year> in the GMT timezone, taking wraparound into account. A value of infinite indicates
that this file has infinite retention time. A value of indefinite indicates that this file has indefinite retention
time.
[-is-wraparound {true|false}] - Is Retention Time Wraparound
If this parameter is specified, the command returns the retention time of the file protected by SnapLock if it
has a matching -is-wraparound value. The value is true if the date represented in retention time is in
wraparound format. The wraparound format indicates that dates after 19 January 2038 are mapped from 1
January 1970 through 31 December 2002 to 19 January 2038 through 19 January 2071.
Examples
The following example displays the retention time of the file /vol/nfs_sle/f12:
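Based on the documented parameters, the command producing this output would be (the cluster prompt is illustrative):

```
cluster1::> volume file retention show -vserver vs0 -file /vol/nfs_sle/f12
```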
Vserver : vs0
Path : /vol/nfs_sle/f12
Retention Time (Secs from Epoch) : 1439111404
Formatted Retention Time : Sun Aug 9 09:10:04 GMT 2015
Is Retention Time Wraparound : false
Description
The volume flexcache config-refresh command is used to refresh the FlexCache configuration. It can be used to update
the configuration on either the FlexCache cluster or the origin cluster. This command only needs to be run if the automatic
refresh failed and the corresponding EMS message was generated.
Note: This command must be run on the peer cluster. For example, a refresh of the origin of a FlexCache volume must be run
from the FlexCache cluster, and a refresh of a FlexCache volume must be run from the origin cluster.
Parameters
-peer-vserver <vserver name> - Peer Vserver
Name of the Vserver for which the configuration is being refreshed.
-peer-volume <volume name> - Peer Volume
Name of the volume for which the configuration is being refreshed.
-peer-endpoint-type {cache|origin} - Origin/Cache Volume
The peer-endpoint-type specifies the FlexCache endpoint type of the peer volume. Possible values are cache
for FlexCache volumes and origin for origin of a FlexCache volumes.
Examples
The following example triggers config-refresh on origin of a FlexCache volume "origin1".
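A plausible invocation for this example (the peer Vserver name vs34 is assumed from the error example that follows):

```
cluster1::> volume flexcache config-refresh -peer-vserver vs34 -peer-volume origin1 -peer-endpoint-type origin
```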
The following example triggers config-refresh on FlexCache volume "fc1" with an incorrect peer-endpoint-type.
Error: command failed: Failed to store the configuration for peer volume "fc1"
in Vserver "vs34" on cluster "cluster1". Check the FlexCache
configuration on the local cluster and retry the operation.
Description
The volume flexcache create command is used to create a FlexCache volume. It also creates the relationship between the
FlexCache volume and the specified origin volume.
Note: If the vserver and origin-vserver are different, the Vservers must be in a peer relationship.
Parameters
-vserver <vserver name> - Vserver
This specifies the Vserver on which the FlexCache volume is to be created.
-volume <volume name> - Cache Volume Name
This specifies the name of the FlexCache volume that is to be created.
{ -aggr-list <aggregate name>, ... - List of Aggregates for FlexGroup Constituents
Specifies an array of names of aggregates to be used for creating the FlexCache volume.
[-aggr-list-multiplier <integer>] - Aggregate List Repeat Count
Specifies the number of times to iterate over the aggregates listed with the -aggr-list parameter when
creating a FlexGroup. The aggregate list will be repeated the specified number of times. For example,
-aggr-list aggr1,aggr2 -aggr-list-multiplier 2
will cause four constituents to be created in the order aggr1, aggr2, aggr1, aggr2.
The default value is 4.
| -auto-provision-as <FlexGroup>} - Automatically Provision as Volume of Type
Use this parameter to automatically select existing aggregates for provisioning the FlexCache volume.
[-size {<integer>[KB|MB|GB|TB|PB]}] - Volume Size
This optionally specifies the size of the FlexCache volume. The size is specified as a number followed by a
unit designation: k (kilobytes), m (megabytes), g (gigabytes), or t (terabytes). If the unit designation is not
specified, bytes are used as the unit, and the specified number is rounded up to the nearest 4 KB. If the size
parameter is not specified, it defaults to 10% of the origin volume size.
[-origin-vserver <vserver name>] - Origin Vserver Name
This specifies the name of the Vserver where the origin volume is located.
-origin-volume <volume name> - Origin Volume Name
This specifies the name of the origin volume.
[-junction-path <junction path>] - Cache Junction Path
This optionally specifies the FlexCache volume's junction path. The junction path name is case insensitive and
must be unique within a Vserver's namespace.
[-foreground {true|false}] - Foreground Process
This specifies whether the operation runs in the foreground. The default setting is true (the operation runs in
the foreground). When set to true, the command will not return until the operation completes.
cluster1::> flexcache create -vserver vs34 -volume fc1 -aggr-list aggr34,aggr43 -origin-volume
origin1 -size 400m
(volume flexcache create)
[Job 894] Job succeeded: Successful
cluster1::> flexcache create -vserver vs34 -volume fc3 -auto-provision-as flexgroup -origin-volume
origin1 -size 400m
(volume flexcache create)
[Job 898] Job succeeded: Successful
cluster1::> flexcache create -vserver vs34 -volume fc4 -aggr-list aggr34,aggr43 -origin-volume
origin1 -size 400m -junction-path /fc4
(volume flexcache create)
[Job 903] Job succeeded: Successful
Description
The volume flexcache delete command deletes the specified FlexCache volumes and their relationships.
Parameters
-vserver <vserver name> - Vserver
This specifies the name of the Vserver from which the FlexCache volume is to be deleted.
-volume <volume name> - Cache Volume Name
This specifies the name of the FlexCache volume that is to be deleted.
[-foreground {true|false}] - Foreground Process
This specifies whether the operation runs in the foreground. The default setting is true (the operation runs in
the foreground). When set to true, the command will not return until the operation completes.
Examples
The following example deletes FlexCache volume "fc1":
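Based on the documented parameters (the Vserver name vs34 is assumed from the create examples above):

```
cluster1::> volume flexcache delete -vserver vs34 -volume fc1
```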
Related references
volume offline on page 1473
Description
The volume flexcache show command displays information about FlexCache volumes. The command output depends on
the parameter or parameters specified with the command. If no parameters are specified, the command displays the following
information about all FlexCache volumes:
• Vserver name
• Volume name
• Size
• Origin Vserver
• Origin volume
• Origin cluster
To display detailed information about all FlexCache volumes, run the command with the -instance parameter.
Parameters
{ [-fields <fieldname>, ...]
This specifies the fields that need to be displayed.
| [-instance ]}
If this parameter is specified, the command displays information about all values.
Examples
The following example displays information about all FlexCache volumes on the Vserver named "vs34":
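A plausible invocation for this example (the -vserver query parameter is assumed, as the parameter list above is abridged):

```
cluster1::> volume flexcache show -vserver vs34
```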
The following example displays detailed information about a FlexCache volume named fc1 on an SVM named vs34:
Vserver: vs34
Cache Volume Name: fc1
List of Aggregates for FlexGroup Constituents: aggr34
Volume Size: 800MB
Cache Flexgroup MSID: 2155934574
Origin Vserver Name: vs34
Origin Vserver UUID: a8717aeb-2826-11e8-bf56-00505695f37a
Origin Volume Name: origin1
Origin Volume MSID: 2155934545
Origin Cluster Name: cluster-2
Cache Junction Path: -
FlexCache Create Time: Thu Aug 23 04:36:19 2018
Description
The volume flexcache sync-properties command is used to synchronize a FlexCache volume's properties with its origin
volume.
Note: This command only needs to be used when there was a failure to sync properties from the origin of a FlexCache
volume to the FlexCache volume.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the FlexCache volume is located.
-volume <volume name> - FlexCache Volume Name
This specifies the FlexCache volume whose properties need to be synchronized with its origin volume.
Examples
Description
The volume flexcache connection show command displays the connection status of FlexCache volumes and origins
of FlexCache volumes. If no parameters are specified, the command displays the following information:
• Node
• Vserver
• Volume
• Remote Vserver
• Remote Volume
• Remote Endpoint
• Connection Status
To display detailed information about all FlexCache volumes, run the command with the -instance parameter.
Parameters
{ [-fields <fieldname>, ...]
This specifies the fields that need to be displayed.
| [-instance ]}
If this parameter is specified, the command displays information about all values.
[-node {<nodename>|local}] - Node Name
If this parameter is specified, the command displays detailed information about the connection status of all
FlexCache volumes and origins of FlexCache volumes on this node.
[-vserver <vserver name>] - Vserver Name
If this parameter and the -volume parameter are specified, the command displays detailed information about
the specified volume. If this parameter is specified by itself, the command displays information about
FlexCache volumes and origin of FlexCache volumes on the specified Vserver.
[-volume <volume name>] - Local Volume
If this parameter is specified, the command displays detailed information about the specified FlexCache
volume and origin of FlexCache volumes.
[-remote-vserver <vserver name>] - Remote Vserver
If this parameter is specified, the command displays information only about the FlexCache volume or volumes
which have a relationship with the specified remote-vserver.
[-remote-volume <volume name>] - Remote Volume
If this parameter is specified, the command displays information only about the FlexCache volume or volumes
that have a relationship with the specified remote-volume.
[-remote-cluster <Cluster name>] - Remote Cluster
If this parameter is specified, the command displays information only about the FlexCache volume or volumes
which have a relationship with the specified remote-cluster.
[-conn-status {unknown|connected|disconnected|downrev}] - Connection Status
If this parameter is specified, the command displays information only about the FlexCache volume or volumes
that have connection status as specified by conn-status.
Examples
The following example displays connection status of all FlexCache volumes on the Vserver named "vs_1":
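A plausible invocation for this example (the command path is assumed from the parameters documented above):

```
cluster1::> volume flexcache connection show -vserver vs_1
```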
Node: node3-01
Description
The volume flexcache origin cleanup-cache-relationship command is used to clean up the cache relationship on
the origin of a FlexCache cluster.
Note: This command only needs to be run if volume flexcache delete fails on the FlexCache cluster and prompts you
to run this command.
Parameters
-origin-vserver <vserver name> - Origin Vserver
This specifies the name of the Vserver hosting the origin of a FlexCache volume.
-origin-volume <volume name> - Origin Volume
This specifies the name of the origin of a FlexCache volume.
-cache-vserver <vserver name> - Cache Vserver
This specifies the name of the Vserver hosting the FlexCache volume.
-cache-volume <volume name> - Cache Volume
This specifies the name of the FlexCache volume.
[-force-retry [true]] - Force Origin Cleanup Retry
This specifies whether the origin cleanup operation can be retried if it has previously failed. The default setting
is false.
Note: When the last FlexCache relationship is deleted, the origin cleanup operation needs to remove
metadata from the filesystem. This can take a long time, and it is possible the operation will take too long and fail. If that
happens, retry the command with the -force-retry parameter.
Examples
The following example cleans up the FlexCache relationship for the specified FlexCache and origin of a FlexCache
volume:
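Based on the documented parameters (all Vserver and volume names here are illustrative):

```
cluster1::> volume flexcache origin cleanup-cache-relationship -origin-vserver vs56 -origin-volume origin1 -cache-vserver vs56 -cache-volume fc1
```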
Related references
volume flexcache delete on page 1573
Description
The volume flexcache origin show-caches command displays FlexCache relationships for origin of a FlexCache
volumes on the origin cluster.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-origin-vserver <vserver name>] - Vserver
If this parameter and the origin-volume parameter are specified, the command displays the FlexCache
relationship for the specified origin of a FlexCache volume. If this parameter is specified by itself, the
command displays all the FlexCache relationships for all origin of a FlexCache volumes in the specified
Vserver.
[-origin-volume <volume name>] - Origin Volume
If this parameter is specified, the command displays all the FlexCache relationships for the specified origin of
a FlexCache volume.
[-cache-vserver <vserver name>] - Cache Vserver
If this parameter is specified then the command displays FlexCache relationships for the specified Vserver
hosting FlexCache volumes.
[-cache-volume <volume name>] - Cache Volume
If this parameter is specified then the command displays FlexCache relationships for the specified FlexCache
volume.
Examples
The following example displays information about all origin of a FlexCache volumes on the Vserver named vs56:
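A plausible invocation producing detailed output like that shown below (the Vserver name is taken from the output):

```
cluster1::> volume flexcache origin show-caches -origin-vserver vs56 -instance
```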
Vserver: vs56
Origin Volume: origin
Cache Vserver: vs56
Cache Volume: fc1
Cache Cluster: cluster-3
Cache Volume MSID: 2156002002
Relationship Create Time: Thu Aug 23 04:36:24 2018
Description
The volume flexgroup qtree-disable command disables qtree support on a FlexGroup.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver in which the FlexGroup is located.
-volume <volume name> - Volume Name
This specifies the FlexGroup on which qtree functionality is to be disabled.
Examples
The following example disables the qtree support on a FlexGroup named "fg" in Vserver "vs0":
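Based on the example text (the -volume parameter name is assumed), the invocation would look like this:

```
cluster1::> volume flexgroup qtree-disable -vserver vs0 -volume fg
```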
Description
The volume inode-upgrade prepare-to-downgrade command prepares volumes to downgrade to a release earlier than
Data ONTAP 9.0.0. It is used when there are still volumes in the middle of the inode upgrade process when a revert is issued.
Parameters
-node {<nodename>|local} - Node Name
This specifies the node on which the command will run. Default is the local node.
Examples
The following example prepares volumes to revert to an earlier release.
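Based on the documented parameter (the node name is illustrative):

```
cluster1::> volume inode-upgrade prepare-to-downgrade -node node1
```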
Description
The volume inode-upgrade resume command resumes a suspended inode upgrade process. The inode upgrade process may
have been suspended earlier for performance reasons.
Parameters
-vserver <vserver name> - VServer Name
This specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This specifies the volume for which the inode upgrade process is to be resumed.
Examples
The following example resumes a volume upgrade process.
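Based on the documented parameters (the Vserver and volume names are illustrative):

```
cluster1::> volume inode-upgrade resume -vserver vs0 -volume vol1
```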
Description
The volume inode-upgrade show command displays information about volumes in the middle of the inode upgrade
process. The command output depends on the parameter or parameters specified with the command. If no parameters are
specified, the command displays the default fields about all volumes in the middle of the inode upgrade process. Default fields
are vserver, volume, aggregate, status, scan-percent, remaining-time, space-needed, and scanner-progress.
Parameters
{ [-fields <fieldname>, ...]
This specifies the fields that need to be displayed.
| [-instance ]}
If this parameter is specified, the command displays information about all entries.
[-vserver <vserver name>] - Vserver
If this parameter and the -volume parameter are specified, the command displays detailed information about
the specified volume. If this parameter is specified by itself, the command displays information about volumes
on the specified Vserver.
[-volume <volume name>] - Volume
If this parameter and the -vserver parameter are specified, the command displays detailed information about
the specified volume. If this parameter is specified by itself, the command displays information about all
volumes that match the specified name.
[-node <nodename>] - Node Name
If this parameter is specified, the command displays information only about the volume or volumes that are
located on the specified storage system.
[-vol-dsid <integer>] - Volume DSID
If this parameter is specified, the command displays information only about the volume or volumes that match
the specified data set ID.
[-vol-uuid <UUID>] - Volume UUID
If this parameter is specified, the command displays information only about the volume or volumes that match
the specified UUID.
[-volume-msid <integer>] - Volume MSID
If this parameter is specified, the command displays information only about the volume or volumes that match
the specified master data set ID.
[-vserver-uuid <UUID>] - Vserver UUID
If this parameter is specified, the command displays information only about the volume on the Vserver that
has the specified UUID.
[-aggregate <aggregate name>] - Aggregate Name
If this parameter is specified, the command displays information only about the volume or volumes that are
located on the specified storage aggregate.
Examples
The following example displays information about all volumes in the middle of the inode upgrade process on the Vserver
named vs0:
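Based on the documented parameters, the invocation would be:

```
cluster1::> volume inode-upgrade show -vserver vs0
```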
Related references
vserver on page 1674
volume on page 1449
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This specifies the name of the volume being moved.
Examples
The following example aborts the running volume move operation on volume vol1:
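Based on the documented parameters, the abort invocation would be as follows (illustrative); the volume move show output below then confirms the result:

```
cluster1::*> volume move abort -vserver vs0 -volume vol1
```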
cluster1::> volume move show -vserver vs0 -volume vol1 -fields completion-status
vserver volume completion-status
------- ------ --------------------------
vs0 vol1 "Volume move job stopped."
The following example shows that the command fails to abort the operation on vol2 because the volume move operation has already completed.
Related references
job stop on page 149
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This specifies the name of the volume being moved.
[-cutover-action {abort_on_failure|defer_on_failure|force|wait|retry_on_failure}] - Specified
Action For Cutover
Specifies the action to be taken for cutover. If the effective cluster version is Data ONTAP 8.3 or later, the
default is retry_on_failure; otherwise the default is defer_on_failure. If the abort_on_failure
action is specified, the job will try to cut over until the cutover attempts are exhausted. If it fails to cut over, it will
clean up and end the operation. If the defer_on_failure action is specified, the job will try to cut over until
the cutover attempts are exhausted. If it fails to cut over, it will move into the "cutover_hard_deferred" state.
The volume move job then waits for a volume move trigger-cutover command to restart the cutover process.
If the force action is specified, the job will try to cut over until the cutover attempts are exhausted and then force
the cutover at the expense of disrupting the clients. If the wait action is specified, when the job reaches the
decision point, it will not go into cutover automatically; instead, it will wait for a volume move trigger-cutover
command as the signal to try the cutover.
[-cutover-window <integer>] - Specified Cutover Time Window
This specifies the time interval in seconds to completely cutover operations from the original volume to the
moved volume. The default value is 30 seconds. The range for valid input is from 30 to 300 seconds, inclusive.
Examples
The following example modifies the parameters for volume move operation on volume vol2.
cluster1::*> volume move modify -vserver vs0 -volume vol2 -cutover-action abort_on_failure -
cutover-window 50
The following example shows that the command fails to modify the operation on vol1 because the volume move operation has already completed.
cluster1::*> volume move modify -vserver vs0 -volume vol1 -cutover-action abort_on_failure -
cutover-window 40
Error: command failed: There is no volume move operation running on the specified volume.
Related references
volume move trigger-cutover on page 1595
volume move show on page 1587
Description
The volume move show command displays information about volume moves in the cluster. By default, with no parameters, it
only shows volume moves that have failed or are currently running. The command display output depends on the parameters
passed. If -vserver and -volume are specified, the following information is displayed:
• Volume Name: The volume that is part of a completed or running volume move operation.
• Actual Completion Time: The date and time in the cluster time zone when the volume move completed.
• Bytes Remaining: The number of bytes remaining to be sent during volume move. This is an approximate number and lags
the current status by a few minutes while the volume move is in operation.
• Specified Action for Cutover: The action to be taken for cutover or during cutover failure. This is the input given during the
start of volume move or the value specified during a volume move modification.
• Specified Cutover Time Window: The time window in seconds given as an input for the cutover phase of volume move. This
is the input given during the start of volume move or the value specified during a volume move modification.
• Destination Node: The name of the node where the destination aggregate is present.
• Source Node: The name of the node where the source aggregate is present.
• Prior Issues Encountered: The latest issues or transient errors that caused the move operation to retry the data copy
phase or the cutover phase.
• Move Initiated by Auto Balance Aggregate: The value "true" indicates the move is initiated by Auto Balance Aggregate
feature.
• Destination Aggregate: The name of the aggregate to which the volume is moved.
• Detailed Status: The detail about any warnings, errors, and state of the move operation.
• Estimated Time of Completion: The approximate date and time in the cluster time zone when the entire volume move
operation is expected to complete. Note that this time may keep increasing when the move goes into cutover-deferred mode.
In those cases where the input for cutover-action is wait, during the data copy phase, the estimated time of completion will
approximate the time to reach the cutover point and wait for user intervention.
• Managing Node: The node in the cluster on which the move job is or was running. This is usually on the node hosting the
volume to be moved.
• Percentage Complete: The percentage of the volume move work completed thus far.
• Estimated Remaining Duration: The approximate amount of time in terms of days, hours, minutes and seconds remaining to
complete the volume move.
• Replication Throughput: The current replication throughput of the move operation in terms of Kb/s, Mb/s or Gb/s.
• Duration of Move: The duration in days, hours and minutes for which the volume move was or is in progress.
• Source Aggregate: The name of the aggregate where the volume being moved originally resides or resided.
• Move State: The state of the volume move at the time of issuing the command and the system gathering up the information
about the move.
• Original Job ID: The job-id assigned when the job was first created. This value will only be populated when the original job-
id differs from the current job-id.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
This specifies the Vserver on which the volume is located. If this parameter and the -volume parameter are
specified, the command displays detailed information about the latest move performed on the specified volume.
If this parameter is specified by itself, the command displays information about the latest moves performed on
volumes of the specified Vserver.
[-volume <volume name>] - Volume Name
This specifies the volume that is part of a completed or running volume move operation. If this parameter and
the -vserver parameter are specified, the command displays detailed information about the latest move
performed on the specified volume. If this parameter is specified by itself, the command displays information
about the latest move on all volumes matching the specified name.
[-actual-completion-time <Date>] - Actual Completion Time
If this parameter is specified, the command displays move operations that match the specified date and time in
the cluster time zone when the volume move completed.
[-bytes-remaining {<integer>[KB|MB|GB|TB|PB]}] - Bytes Remaining
If this parameter is specified, the command displays move operations that match the specified number of bytes
remaining to be sent during volume move.
[-cutover-action {abort_on_failure|defer_on_failure|force|wait|retry_on_failure}] - Specified
Action For Cutover (privilege: advanced)
If this parameter is specified, the command displays move operations that match the specified action to be
taken for cutover or during cutover failure.
[-cutover-window <integer>] - Specified Cutover Time Window (privilege: advanced)
If this parameter is specified, the command displays move operations that match the specified time window in
seconds for the cutover phase of volume move.
[-destination-aggregate <aggregate name>] - Destination Aggregate
If this parameter is specified, the command displays move operations that match the specified name of the
aggregate to which the volume is being moved.
[-destination-node <nodename>] - Destination Node (privilege: advanced)
If this parameter is specified, the command displays move operations that match the specified name of the
node where the destination aggregate is present.
[-details <text>] - Detailed Status
If this parameter is specified, the command displays move operations that match the specified detail about any
warnings, errors and state of the move operation.
Examples
The following example lists the status of the volume move operation for a volume vol2 on a Vserver vs0.
The following example lists the status of the volume move operation for a volume vol2 on a Vserver vs0 in advanced mode.
The following example lists the status of the volume move operation for a volume vol2 on a Vserver vs0 in diagnostic mode.
The following example lists the status of running and failed volume move operations in the cluster.
The following example lists the status of all the volume move operations in the cluster.
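The commands for the examples above can be sketched as follows (the Vserver and volume names are illustrative, output is omitted, and the asterisk in the prompt indicates advanced or diagnostic mode):

```
cluster1::> volume move show -vserver vs0 -volume vol2
cluster1::*> volume move show -vserver vs0 -volume vol2
cluster1::> volume move show
```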
Description
The volume move start command moves a flexible volume from one storage aggregate to another. The destination aggregate
can be located on the same node as the original aggregate or on a different node. The move occurs within the context of the
same Vserver.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This specifies the volume that will be moved.
-destination-aggregate <aggregate name> - Destination Aggregate
This specifies the aggregate to which the volume will be moved.
[-cutover-window <integer>] - Cutover time window in seconds (privilege: advanced)
This specifies the time interval to completely cutover operations from the original volume to the moved
volume. The default value is 30 seconds. The range for valid input is from 30 to 300 seconds, inclusive.
[-cutover-action {abort_on_failure|defer_on_failure|force|wait|retry_on_failure}] - Action for
Cutover (privilege: advanced)
Specifies the action to be taken for cutover. If the effective cluster version is Data ONTAP 8.3 or later, the
default is retry_on_failure; otherwise the default is defer_on_failure.
• abort_on_failure - The job tries to cutover until cutover attempts are exhausted. If it fails to cutover, it
cleans up and ends the operation.
• defer_on_failure - The job tries to cutover until cutover attempts are exhausted. If it fails to cutover, it
moves into the "cutover deferred" state. The volume move job then waits for a volume move
trigger-cutover command to restart the cutover process.
• force - The job tries to cutover until cutover attempts are exhausted and then forces the cutover at the
expense of disrupting the clients.
• wait - When the job hits the decision point, it does not go into cutover automatically; instead, it waits for a
volume move trigger-cutover command as the signal to try the cutover. Once cutover is manually
triggered, the cutover action changes to defer_on_failure.
• retry_on_failure - The job retries the cutover indefinitely and never enters a "hard-deferred" state. After
exhausting cutover attempts, the move job waits one hour before trying to cutover again. Issue a volume
move trigger-cutover command at any time to restart the cutover process.
[-perform-validation-only [true]] - Performs validation checks only
This is a boolean option that allows you to perform pre-move validation checks for the intended volume. When set to
true, the command only performs the checks without creating a move job. The default value is false.
[-foreground {true|false}] - Foreground Process
This specifies whether the volume move operation runs as a foreground process. The default setting is false
(that is, the operation runs in the background). Note that using this parameter will not affect how long it takes
for the operation to complete.
[-encrypt-destination {true|false}] - Encrypt Destination Volume
This specifies whether the move operation should result in creating an encrypted volume on the destination
aggregate. When this option is set to true, the destination volume will be encrypted. When it is set to false,
the destination volume will be a plain-text volume. When this parameter is not specified, the destination
volume will be of the same type as the source volume.
[-tiering-policy <policy>] - Tiering Policy
This optionally specifies the tiering policy to apply to the destination volume. Among the supported values:
• auto - Both Snapshot copy data and active file system user data are tiered to the cloud tier.
• all - Both Snapshot copy data and active file system user data are tiered to the cloud tier as soon as possible
without waiting for a cooling period.
Examples
The following example performs pre-move validation checks for a volume named volume_test on a Vserver named vs0 to
determine whether it can be moved to a destination aggregate named dest_aggr.
The following example performs a volume move start operation to move a volume named volume_test on a Vserver named
vs0 to a destination-aggregate named dest_aggr.
The following example performs a volume move start operation to move a plain-text volume named vol1 to an encrypted
volume on destination-aggregate aggr1.
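The command lines for the examples above can be sketched as follows, using the volume, Vserver, and aggregate names from the descriptions; output is omitted:

```
cluster1::> volume move start -vserver vs0 -volume volume_test -destination-aggregate dest_aggr -perform-validation-only true
cluster1::> volume move start -vserver vs0 -volume volume_test -destination-aggregate dest_aggr
cluster1::> volume move start -vserver vs0 -volume vol1 -destination-aggregate aggr1 -encrypt-destination true
```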
Related references
volume move trigger-cutover on page 1595
Description
This command causes a replicating or deferred volume move job to attempt cutover. Unless the force option is set, cutover entry
is not guaranteed.
Parameters
-vserver <vserver name> - Vserver Name
The Vserver on which the volume is located.
-volume <volume name> - Volume Name
The volume that is being moved.
[-force [true]] - Force Cutover
If this parameter is specified, the cutover is done without confirming the operation - even if the operation
could cause client I/O disruptions.
Examples
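An illustrative invocation (the Vserver and volume names are placeholders) that asks a replicating or deferred move to attempt cutover, followed by a variant that forces cutover entry even at the risk of client disruption:

```
cluster1::> volume move trigger-cutover -vserver vs0 -volume vol2
cluster1::> volume move trigger-cutover -vserver vs0 -volume vol2 -force true
```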
Description
The volume move recommend show command displays moves that were recommended by the Auto Balance Aggregate
feature.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example displays information about the recommendations made by the Auto Balance Aggregate feature.
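A minimal invocation is sketched below; the output depends on the recommendations currently available:

```
cluster1::> volume move recommend show
```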
Description
The volume move target-aggr show command displays information about compatible target aggregates to which the
specified volume can be moved.
Examples
The following example lists target aggregates compatible for moving a volume vol1 on a Vserver vs1.
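The corresponding command can be sketched as follows (output omitted):

```
cluster1::> volume move target-aggr show -vserver vs1 -volume vol1
```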
Description
This command creates a qtree in the Vserver and volume you specify. You can create up to 4,994 qtrees per volume.
When you create a qtree, you can optionally specify the following attributes:
• Security style
• UNIX permissions
• Export policy
Parameters
-vserver <vserver name> - Vserver Name
This specifies the name of the Vserver to which the volume containing the qtree belongs.
{ -volume <volume name> - Volume Name
This specifies the name of the volume that will contain the qtree you are creating.
-qtree <qtree name> - Qtree Name
This specifies the name of the qtree you are creating.
A qtree name cannot contain a forward slash (/). The qtree name cannot be more than 64 characters long.
| -qtree-path <qtree path>} - Actual (Non-Junction) Qtree Path
The qtree path argument in the format /vol/<volume name>/<qtree name> can be specified instead of
specifying volume and qtree as separate arguments.
[-security-style <security style>] - Security Style
This optionally specifies the security style for the qtree, which determines how access to the qtree is
controlled. The supported values are unix (for UNIX uid, gid and mode bits), ntfs (for CIFS ACLs), and
mixed (for NFS and CIFS access). If you do not specify a security style for the qtree, it inherits the security
style of its containing volume.
[-oplock-mode {enable|disable}] - Oplock Mode
This optionally specifies whether oplocks are enabled for the qtree. If you do not specify a value for this
parameter, it inherits the oplock mode of its containing volume.
[-unix-permissions | -m <unix perm>] - Unix Permissions
This optionally specifies the UNIX permissions for the qtree when the -security-style is set to unix or
mixed. You can specify UNIX permissions either as a four-digit octal value (for example, 0700) or in the
style of the UNIX ls command (for example, -rwxr-x--- ). For information on UNIX permissions, see the
UNIX or Linux documentation. If you do not specify UNIX permissions for the qtree, it inherits the UNIX
permissions of its containing volume.
[-export-policy <text>] - Export Policy
This optional parameter specifies the name of the export policy associated with the qtree. For information on
export policies, see the documentation for the vserver export-policy create command. If you do not
specify a value for this parameter, it inherits the export policy of its containing volume.
Examples
The following example creates a qtree named qtree1. The Vserver name is vs0 and the volume containing the qtree is
named vol1. The qtree has a mixed security style. Its other attributes are inherited from volume vol1.
cluster1::> volume qtree create -vserver vs0 -volume vol1 -qtree qtree1 -security-style mixed
Related references
vserver export-policy create on page 1846
Description
This command deletes a qtree. The length of time that it takes to delete a qtree depends on the number of directories and files it
contains. You can monitor the progress of the delete operation by using the job show and job watch-progress commands,
respectively.
The automatically created qtree in the volume - qtree0, listed in CLI output as "" - cannot be deleted.
Note: Quota rules associated with this qtree in all the quota policies will be deleted when you delete this qtree. Qtree deletion
will not be allowed if Storage-level Access Guard (SLAG) is configured.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the name of the Vserver to which the volume containing the qtree belongs.
{ -volume <volume name> - Volume Name
This specifies the name of the volume containing the qtree to be deleted.
-qtree <qtree name> - Qtree Name
This specifies the name of the qtree to be deleted.
| -qtree-path <qtree path>} - Actual (Non-Junction) Qtree Path
The qtree path argument in the format /vol/<volume name>/<qtree name> can be specified instead of
specifying volume and qtree as separate arguments.
[-force [true]] - Force Delete (privilege: advanced)
This optionally forces the qtree delete operation to proceed when the qtree contains files. The default setting is
false (that is, the qtree will not be deleted if it contains files). This parameter is available only at the advanced
privilege and higher.
[-foreground [true]] - Foreground Process
This optionally specifies whether the qtree delete operation runs as a foreground process. The default setting is
false (that is, the operation runs in the background).
Examples
The following example deletes a qtree named qtree4. The Vserver name is vs0 and the volume containing the qtree is
named vol1.
cluster1::> volume qtree delete -vserver vs0 -volume vol1 -qtree qtree4
WARNING: Are you sure you want to delete qtree qtree4 in volume vol1 vserver vs0? {y|n}: y
[Job 38] Job is queued: Delete qtree qtree4 in volume vol1 vserver vs0.
Description
This command allows you to modify the following attributes of an existing qtree in the given Vserver and volume:
• Security style
• Export policy
Parameters
-vserver <vserver name> - Vserver Name
This specifies the name of the Vserver to which the volume containing the qtree belongs.
{ -volume <volume name> - Volume Name
This specifies the name of the volume containing the qtree to be modified.
-qtree <qtree name> - Qtree Name
This specifies the name of the qtree to be modified. You can modify the attributes of qtree0 (represented as ""
in the CLI) by omitting the -qtree parameter from the command or by specifying the value """" for the -
qtree parameter.
| -qtree-path <qtree path>} - Actual (Non-Junction) Qtree Path
The qtree path argument in the format /vol/<volume name>/<qtree name> can be specified instead of
specifying volume and qtree as separate arguments. The automatically created qtree0 can be represented
as /vol/<volume name>.
[-security-style <security style>] - Security Style
This optionally modifies the security style for the qtree. The supported values are unix (for UNIX uid, gid and
mode bits), ntfs (for CIFS ACLs), and mixed (for NFS and CIFS access). Modifying a qtree's security style
will not affect any of the files in the other qtrees of this volume.
[-oplock-mode {enable|disable}] - Oplock Mode
This optionally modifies whether oplocks are enabled for the qtree.
Modifying qtree0's oplock mode will not affect any of the files in the other qtrees of this volume.
[-unix-permissions <unix perm>] - Unix Permissions
This optionally modifies the UNIX permissions for the qtree. You can specify UNIX permissions either as a
four-digit octal value (for example, 0700) or in the style of the UNIX ls command (for example, -rwxr-
x---). For information on UNIX permissions, see the UNIX or Linux documentation.
The unix permissions can be modified only for qtrees with unix or mixed security style.
[-export-policy <text>] - Export Policy
This optional parameter modifies the export policy associated with the qtree. If you do not specify an export
policy name, the qtree inherits the export policy of the containing volume.For information on export policy,
see the documentation for the vserver export-policy create command.
Examples
The following example modifies a qtree named qtree1 to have the unix security style and enabled oplocks. The Vserver
name is vs0 and the volume containing the qtree is named vol1.
cluster1::> volume qtree modify -vserver vs0 -volume vol1 -qtree qtree1 -security-style unix -oplock-mode enable
Related references
vserver export-policy create on page 1846
Description
This command allows you to display or modify the opportunistic lock mode of a qtree.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the name of the Vserver to which the volume containing the qtree belongs.
{ -volume <volume name> - Volume Name
This specifies the name of the volume containing the qtree.
-qtree <qtree name> - Qtree Name
This specifies the name of the qtree for which the oplock mode is being displayed or modified.
| -qtree-path <qtree path>} - Actual (Non-Junction) Qtree Path
The qtree path argument in the format /vol/<volume name>/<qtree name> can be specified instead of
specifying volume and qtree as separate arguments. The automatically created qtree0 can be represented
as /vol/<volume name>.
[-oplock-mode {enable|disable}] - Oplock Mode
This specifies the new oplock mode of the qtree. If this parameter is not specified, then the current oplock
mode of the qtree is displayed.
Modifying qtree0's oplock mode will not affect any of the files in the other qtrees of this volume.
Examples
The following example displays the oplock mode of a qtree called qtree1. The Vserver name is vs0 and the volume
containing the qtree is named vol1.
cluster1::> volume qtree oplocks -vserver vs0 -volume vol1 -qtree qtree1
/vol/vol1/qtree1 has mixed security style and oplocks are disabled.
The following example modifies the oplock mode of a qtree called qtree2 to enabled. The Vserver name is vs0 and the
volume containing the qtree is named vol1.
cluster1::> volume qtree oplocks -vserver vs0 -volume vol1 -qtree qtree2 -oplock-mode enable
The following example uses a 7G-compatible command to display and modify the oplock mode of a qtree.
Description
This command allows you to rename an existing qtree.
The automatically created qtree in the volume - qtree0, listed in CLI output as "" - cannot be renamed.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the name of the Vserver to which the volume containing the qtree belongs.
{ -volume <volume name> - Volume Name
This specifies the name of the volume containing the qtree to be renamed.
-qtree <qtree name> - Qtree Name
This specifies the name of the qtree to be renamed.
| -qtree-path <qtree path>} - Actual (Non-Junction) Qtree Path
The qtree path argument in the format /vol/<volume name>/<qtree name> can be specified instead of
specifying volume and qtree as separate arguments.
-newname <qtree name> - Qtree New Name
This specifies the new name of the qtree. The new qtree name cannot contain a forward slash (/) and cannot be
more than 64 characters long.
Examples
The following example renames a qtree named qtree3 to qtree4. The Vserver name is vs0 and the volume containing the
qtree is named vol1.
cluster1::> volume qtree rename -vserver vs0 -volume vol1 -qtree qtree3 -newname qtree4
Description
This command allows you to display or modify the security style of a qtree.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the name of the Vserver to which the volume containing the qtree belongs.
Examples
The following example displays the security style of a qtree called qtree1. The Vserver name is vs0 and the volume
containing the qtree is named vol1.
cluster1::> volume qtree security -vserver vs0 -volume vol1 -qtree qtree1
/vol/vol1/qtree1 has mixed security style and oplocks are disabled.
The following example modifies the security style of a qtree called qtree2 to unix. The Vserver name is vs0 and the
volume containing the qtree is named vol1.
cluster1::> volume qtree security -vserver vs0 -volume vol1 -qtree qtree2 -security-style unix
The following example uses a 7G-compatible command to display and modify the security style of a qtree.
Description
This command displays information about qtrees for online volumes. By default, the command displays the following
information about all qtrees in the cluster:
• Vserver name
• Volume name
• Qtree name
• UNIX permissions
• Qtree ID
• Export policy
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-exports ]
Displays the following information about qtree exports:
• Policy Name - The name of the export policy assigned to the qtree
• Is Export Policy Inherited - Whether the export policy assigned to the qtree is inherited
| [-id ]
Displays qtree IDs in addition to the default output.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
Selects information about the qtrees in the specified Vserver.
{ [-volume <volume name>] - Volume Name
Selects information about the qtrees in the specified volume.
[-qtree <qtree name>] - Qtree Name
Selects information about the qtrees that have the specified name.
| [-qtree-path <qtree path>]} - Actual (Non-Junction) Qtree Path
Selects information about the qtrees that have the specified path.
[-security-style <security style>] - Security Style
Selects information about the qtrees that have the specified security style.
[-oplock-mode {enable|disable}] - Oplock Mode
Selects information about the qtrees that have the specified oplock mode.
Examples
The following example displays default information about all qtrees along with each qtree ID. Note that on vs0, no qtrees
have been manually created, so only the automatically created qtrees referred to as qtree 0 are shown. On vs1, the volume
named vs1_vol1 contains qtree 0 and two manually created qtrees, qtree1 and qtree2.
Description
Note: This command does not support FlexGroups and will be deprecated in a future release of Data ONTAP. Use the
statistics qtree show command to view qtree statistics.
This command displays NFS and CIFS operations statistics for qtrees. Note that qtree statistics are available only when the
volume containing the qtree is online.
Statistics are cumulative values from the time the volume is brought online or from when the statistics were last reset by
using the volume qtree statistics-reset command.
The command output depends on the parameters specified with the command. If no parameters are specified, the command
displays the following statistics information about all qtrees:
• Vserver name
• Volume name
• Qtree name
• CIFS operations
Note:
Qtree statistics are not persistent. If you restart a node, if a storage takeover and giveback occurs, or if the containing volume
is set to offline and then online, qtree statistics are set to zero.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-internal ] (privilege: advanced)
If this parameter is specified, the output will also include the internal operation statistics. Internal operation is
any operation on the qtree that originated within Data ONTAP software.
| [-no-reset ] (privilege: advanced)
If this parameter is specified, the output will display the NFS and CIFS op statistics since the time the volume
was online.
| [-no-reset-internal ] (privilege: advanced)
If this parameter is specified, the output will also include the internal op statistics since the time the volume
was online.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
If this parameter is specified, the command displays information about the qtrees on the specified Vserver.
{ [-volume <volume name>] - Volume Name
If this parameter is specified, the command displays information about the qtrees on the specified volume.
[-qtree <qtree name>] - Qtree Name
If this parameter is specified, the command displays information about the specified qtree.
| [-qtree-path <qtree path>]} - Actual (Non-Junction) Qtree Path
The qtree path argument in the format /vol/<volume name>/<qtree name> can be specified instead of
specifying volume and qtree as separate arguments. The automatically created qtree0 can be represented
as /vol/<volume name>.
[-nfs-ops <Counter64>] - NFS operations since reset
If this parameter is specified, the command displays information about qtrees that have the corresponding
cumulative number of NFS operations since the statistics were zeroed.
[-cifs-ops <Counter64>] - CIFS operations since reset
If this parameter is specified, the command displays information about qtrees that have the corresponding
cumulative number of CIFS operations since the statistics were zeroed.
[-internal-ops <Counter64>] - Internal operations since reset (privilege: advanced)
If this parameter is specified, the command displays information about qtrees that have the corresponding
cumulative number of internal operations since the statistics were zeroed.
[-no-reset-nfs-ops <Counter64>] - NFS operations since online (privilege: advanced)
If this parameter is specified, the command displays information about qtrees that have the corresponding
cumulative number of NFS operations since the volume was online.
Examples
The following example displays statistics information for all qtrees on the Vserver named vs0.
The following example displays statistics information for qtrees on Vserver vs0 that have NFS ops more than 15000.
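The commands for the examples above can be sketched as follows; the >15000 query on -nfs-ops uses the standard CLI range operator, and output is omitted:

```
cluster1::> volume qtree statistics -vserver vs0
cluster1::> volume qtree statistics -vserver vs0 -nfs-ops >15000
```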
Related references
statistics qtree show on page 808
volume qtree statistics-reset on page 1607
Description
Note: This command does not support FlexGroups and will be deprecated in a future release of Data ONTAP. Use the
statistics qtree show command to view qtree statistics.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the name of the Vserver to which the volume containing the qtree belongs.
-volume <volume name> - Volume Name
This specifies the name of the volume containing the qtrees whose statistics you want to reset.
Examples
The following example resets statistics for all qtrees on the volume named vol1 on the Vserver named vs0.
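The corresponding command can be sketched as follows:

```
cluster1::> volume qtree statistics-reset -vserver vs0 -volume vol1
```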
Description
This command allows you to modify the following quota attributes for one or more volumes:
• Quota state
Modifying the quota state for a volume creates a job to perform the quota state changes for that volume. You can monitor
the progress of the job by using the job show and job watch-progress commands.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the name of the Vserver on which the volume whose quota attributes you are modifying is
located.
-volume <volume name> - Volume Name
This specifies the name of the volume whose quota attributes you are modifying.
[-state <quota_state>] - Quota State
This parameter optionally modifies the quota state to one of the following:
• off - Deactivates quotas for the specified volume.
• resize - Resizes the quota limits according to the values specified in the quota policy assigned to the
Vserver. Note that quotas must first be activated for a volume before a resize operation can be
performed.
Both quota activation and quota resize operations apply the quota rules configured for the volume within the
quota policy that is currently assigned to the Vserver. These quota rules are managed by using the commands
in the volume quota policy rule menu. Quotas, when activated for a volume, go through an
initialization process. As part of the quota initialization, all the quota rules are applied to the volume. In
addition, a filesystem scanner is started to scan the entire filesystem within the volume to bring the quota
accounting and reporting up to date. The quota job finishes after the filesystem scanner is started on the
volume. The quota state for the volume is initializing until the filesystem scanner finishes scanning the
entire filesystem. After the scanning is complete, the quota state will be on.
When quotas are resized, the quota state is resizing until the resizing operation finishes. As part of this
operation, the quota limits for quotas currently in effect are resized to the limits currently configured for the
volume. After the quota resize operation finishes, the quota state will be on.
Examples
The following example activates quotas on the volume named vol1, which exists on Vserver vs0.
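A sketch of the activation command follows; note that the -state value on is assumed here, since only the off and resize values are described above (quotas can also be activated with the volume quota on command):

```
cluster1::> volume quota modify -vserver vs0 -volume vol1 -state on -foreground true
```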
The following example turns on quota message logging and sets the logging interval to 4 hours.
cluster1::> volume quota modify -vserver vs0 -volume vol1 -logging on -logging-interval 4h
The following example resizes quotas on the volume named vol1, which exists on Vserver vs0.
cluster1::> volume quota modify -vserver vs0 -volume vol1 -state resize -foreground true
[Job 80] Job succeeded:
Successful
Related references
volume quota policy rule on page 1625
volume quota on on page 1610
volume quota off on page 1610
Description
This command creates a job to deactivate quotas for the specified volume. You can monitor the progress of the job by using the
job show and job watch-progress commands.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the name of the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This specifies the name of the volume on which you are deactivating quotas.
[-foreground [true]] - Foreground Process
This optionally specifies whether the job created for deactivating quotas runs as a foreground process. The
default setting is false (that is, the operation runs in the background). When set to true, the command will
not return until the job completes.
Examples
The following example deactivates quotas on the volume named vol1, which exists on Vserver vs0.
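The corresponding command can be sketched as follows:

```
cluster1::> volume quota off -vserver vs0 -volume vol1
```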
The following example uses a 7G-compatible command to deactivate quotas on the volume named vol1, which exists on
Vserver vs0.
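A representative command for the first example, based on the parameters documented above, is:
cluster1::> volume quota off -vserver vs0 -volume vol1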
Related references
job show on page 142
job watch-progress on page 150
volume quota modify on page 1608
volume quota on
Turn on quotas for volumes
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the name of the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This specifies the name of the volume on which you are activating quotas.
[-foreground | -w [true]] - Foreground Process
This optionally specifies whether the job created for activating quotas runs as a foreground process. The
default setting is false (that is, the operation runs in the background). When set to true, the command will
not return until the job completes. The quota job finishes after the filesystem scanner is started. The quota state
for the volume is initializing until the filesystem scanner finishes scanning the entire filesystem. After the
scanning is complete, the quota state will be on.
Examples
The following example activates quotas on the volume named vol1, which exists on Vserver vs0.
The following example uses a 7G-compatible command to activate quotas on the volume named vol1, which exists on
Vserver vs0.
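A representative command for the first example, based on the parameters documented above, is:
cluster1::> volume quota on -vserver vs0 -volume vol1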
Related references
job show on page 142
job watch-progress on page 150
volume quota modify on page 1608
volume quota report
Description
This command displays the quota report for all volumes in each Vserver that are online and for which quotas are activated.
The quota report includes the quota rules (default, explicit, and derived) in effect and the associated resource usage (disk space
and files). If quotas are still initializing for a specific volume, that volume is not included.
This command displays the following information:
• Vserver name
• Index - This is a unique number within a volume assigned to each quota rule displayed in the quota report.
• Tree name - This field gives the name of the qtree if the quota rule is at the qtree level. It is empty if the quota rule is at the
volume level.
• Quota target - This field gives the name of the target of the quota rule. For tree quota rules, it will be the qtree ID of the
qtree. For user quota rules, it will be the UNIX user name or the Windows user name. For group quota rules, it will be the
UNIX group name. For default rules (tree or user or group), this will display "*". If the UNIX user identifier, UNIX group
identifier, or Windows security identifier no longer exists or if the identifier-to-name conversion fails, the target appears in
numeric form.
• Quota target ID - This field gives the target of the quota rule in numeric form. For tree quota rules, it will be the qtree ID of
the qtree. For group quota rules, it will be the UNIX group identifier. For UNIX user quota rules, it will be the UNIX user
identifier. For Windows user quota rules, it will be the Windows security identifier in its native format. For default rules (tree
or user or group), "*" will be displayed.
• File limit
• Quota specifier - For an explicit quota, this field shows how the quota target was configured by the administrator using the
volume quota policy rule command. For a default quota, the field shows "*". For a derived tree quota, this field shows the
qtree path. For a derived user and group quota, the field is either blank or "*".
The -soft, -soft-limit-thresholds, -target-id, -thresholds, -fields, and -instance parameters each
display a different set of the fields listed above. For example, -soft displays the soft disk space limit and soft file limit in
addition to the other information. Similarly, -target-id displays the target in numeric form.
A quota report is a resource-intensive operation. If you run it on many volumes in the cluster, it might take a long time to
complete. A more efficient approach is to view the quota report for a particular volume in a Vserver.
Depending upon the quota rules configured for a volume, the quota report for a single volume can be large. If you want to
monitor the quota report entry for a particular tree, user, or group repeatedly, find the index of that quota report entry and use
the -index field to view only that quota report entry. See the examples section for an illustration.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-soft ]
If this parameter is specified, the command display will include the soft disk space limit and the soft file limit.
| [-soft-limit-thresholds ]
If this parameter is specified, the command display will include the soft disk space limit, threshold for disk
space limit and soft file limit.
Examples
The following example displays the quota report for all the volumes.
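A representative command for this example is:
cluster1::> volume quota report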
The following example displays the quota report for the quota rules that are applicable for the given path to a file.
The following example displays the quota report with the target in the numeric form for the given path to a file.
The following example shows how to monitor the quota report for a particular user/tree/group. First, the quota report
command is issued with -instance to see the index field for the quota report entry we are interested in. Next, the quota
report is issued with the -index field specified to fetch only that particular quota report entry repeatedly to view the
usage over time.
cluster1::> volume quota report -vserver vs0 -volume vol1 -quota-type user -quota-target john -tree q1 -instance
Related references
volume quota show on page 1617
volume quota modify on page 1608
volume quota policy rule on page 1625
volume quota resize
Description
This command resizes the quota limits for the quotas currently in effect for the specified volume. It creates a job to resize
quotas. You can monitor the progress of the job by using the job show and job watch-progress commands.
Note: Quotas must be activated before quota limits can be resized.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the name of the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This specifies the name of the volume on which you are resizing the quota limits and threshold.
[-foreground [true]] - Foreground Process
This optionally specifies whether the job created for resizing quotas runs as a foreground process. The default
setting is false (that is, the operation runs in the background). When set to true, the command will not
return until the job completes.
Examples
The following example resizes quotas on the volume named vol1, which exists on Vserver vs0.
The following example uses a 7G-compatible command to resize quotas on the volume named vol1, which exists on
Vserver vs0.
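A representative command for the first example, assuming the command name volume quota resize, is:
cluster1::> volume quota resize -vserver vs0 -volume vol1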
Related references
job show on page 142
job watch-progress on page 150
volume quota modify on page 1608
volume quota show
Description
This command displays the following information about quotas:
• Vserver name
• Volume name
• Quota state - quota state for this volume. The possible values are as follows:
◦ off - this state indicates that quotas are deactivated for the volume.
◦ on - this state indicates that quotas are activated for the volume.
◦ initializing - this state indicates that quotas are being initialized for the volume.
◦ resizing - this state indicates that quota limits are being resized for the volume.
◦ corrupt - this state indicates that quotas are corrupt for this volume.
◦ mixed - this state may only occur for FlexGroups, and indicates that the constituent volumes are not all in the same state.
• Scan status - percentage of the files in the volume scanned by the quota scanner that runs as part of activating quotas.
• Last error - most recently generated error message as part of the last quota operation (on or resize). Present only if an error
has been generated.
To display detailed information about all volumes, run the command with the -instance parameter. The detailed view provides
all of the information in the previous list and the following additional information:
• Logging messages - whether quota exceeded syslog/EMS messages are logged. For volumes where the quota logging
parameter is set to on, quota exceeded messages are generated when an NFS/CIFS operation or any internal Data ONTAP
operation is prevented because the quota disk usage exceeds the disk limit or the quota file usage exceeds the
file limit. For volumes where the logging parameter is set to off, no quota exceeded messages are generated.
• Logging interval - frequency with which quota exceeded messages are logged. This parameter only applies to volumes that
have the logging parameter set to on.
• Sub status - additional status about quotas for this volume. Following are the possible values reported:
◦ upgrading - this indicates that the quota metadata format is being upgraded from an older version to a newer version for
the volume.
◦ setup - this indicates that quotas are being set up on the volume.
◦ transferring rules - this indicates that the quota rules are being transferred to the volume.
◦ scanning - this indicates that the quota filesystem scanner is currently running on the volume.
◦ finishing - this indicates that the quota on or resize operation is in the final stage of the operation.
• All errors - collection of all the error messages generated as part of the last quota operation (on or resize) since the volume
was online.
• User quota enforced (advanced privilege only) - indicates whether there are user quota rules being enforced.
• Group quota enforced (advanced privilege only) - indicates whether there are group quota rules being enforced.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-logmsg ]
If this parameter is specified, the command displays whether quota exceeded messages are logged and the
logging interval for the quota messages.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
If this parameter is specified, the command displays information for the volumes in the specified Vserver.
[-volume <volume name>] - Volume Name
If this parameter is specified, the command displays information for the specified volume.
[-state <quota_state>] - Quota State
If this parameter is specified, the command displays information for the volumes that have the specified quota
state.
[-scan-status <percent>] - Scan Status
If this parameter is specified, the command displays information about the volumes whose scan-status matches
the specified percentage value. The scan status is displayed in the format [0-100]%.
[-logging {on|off}] - Logging Messages
If this parameter is specified, the command displays information about the volumes that have the specified
logging setting.
[-logging-interval <text>] - Logging Interval
If this parameter is specified, the command displays information about the volumes that have the specified
quota logging interval.
[-sub-status <text>] - Sub Quota Status
If this parameter is specified, the command displays information about the volumes that have the specified
quota sub-status.
[-last-error <text>] - Last Quota Error Message
If this parameter is specified, the command displays information about the volumes whose last error matches
the specified error message.
[-errors <text>] - Collection of Quota Errors
If this parameter is specified, the command displays information about the volumes whose collection of errors
match the specified error message.
[-is-user-quota-enforced {true|false}] - User Quota enforced (privilege: advanced)
If this parameter is specified, the command displays information about the volumes that have the specified
value for this field.
[-is-group-quota-enforced {true|false}] - Group Quota enforced (privilege: advanced)
If this parameter is specified, the command displays information about the volumes that have the specified
value for this field.
[-is-tree-quota-enforced {true|false}] - Tree Quota enforced (privilege: advanced)
If this parameter is specified, the command displays information about the volumes that have the specified
value for this field.
Examples
The following example displays the logging information for all the volumes.
The following example displays detailed information in advanced privilege for a volume vol1, which exists on Vserver
vs0.
Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
The following example displays the detailed information when quotas are upgrading for volume vol1, which exists on
Vserver vs0.
The following example displays the "Last Quota Error Message" and the "Collection of Quota Errors" for volume vol1,
which exists on Vserver vs0.
volume quota policy copy
Description
This command copies a quota policy and the rules it contains. You must enter the following information to copy a quota policy:
• Vserver name
• Policy name
Examples
The following example copies a quota policy named quota_policy_0 on Vserver vs0. It is copied to quota_policy_1.
cluster1::> volume quota policy copy -vserver vs0 -policy-name quota_policy_0 -new-policy-name
quota_policy_1
volume quota policy create
Description
A quota policy is a collection of quota rules for all the volumes in a specific Vserver. This command creates a quota policy for a
specific Vserver. Multiple quota policies can be created for a Vserver, but only one of them can be assigned to the Vserver. A
Vserver can have a maximum of five quota policies. If five quota policies already exist, the command fails and a quota policy
must be deleted before another quota policy can be created.
When you turn on quotas for a volume, the quota rules to be enforced on that volume will be picked from the quota policy that is
assigned to the Vserver containing that volume. The quota policy for clustered volumes is equivalent to the /etc/quotas file in
7G.
You must enter the following information to create a quota policy:
• Vserver name
• Policy name
Parameters
-vserver <vserver name> - Vserver
This parameter specifies the Vserver for which you are creating the quota policy. You can create a quota policy
only for a data Vserver. Quota policies cannot be created for a node or admin Vserver.
-policy-name <text> - Policy Name
This parameter specifies the name of the quota policy you are creating. The quota policy name cannot be more
than 32 characters long and must be unique within the Vserver.
Examples
The following example creates a quota policy named quota_policy_0 on Vserver vs0.
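A representative command for this example is:
cluster1::> volume quota policy create -vserver vs0 -policy-name quota_policy_0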
volume quota policy delete
Description
This command deletes a quota policy and all the rules it contains. The policy can be deleted only when it is not assigned to the
Vserver. You must enter the following information to delete a quota policy:
• Vserver name
• Policy name
Parameters
-vserver <vserver name> - Vserver
This parameter specifies the Vserver containing the quota policy you want to delete.
-policy-name <text> - Policy Name
This parameter specifies the name of the quota policy you want to delete.
Examples
The following example deletes a quota policy named quota_policy_5 on Vserver vs0.
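A representative command for this example is:
cluster1::> volume quota policy delete -vserver vs0 -policy-name quota_policy_5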
volume quota policy rename
Description
This command renames a quota policy. You must enter the following information to rename a quota policy:
• Vserver name
• Policy name
Parameters
-vserver <vserver name> - Vserver
This parameter specifies the Vserver containing the quota policy you want to rename.
-policy-name <text> - Policy Name
This parameter specifies the name of the quota policy you are renaming.
-new-policy-name <text> - New Policy Name
This parameter specifies the new name of the quota policy. The new name cannot be more than 32 characters
long.
Examples
The following example renames a quota policy named quota_policy_0 on Vserver vs0. The policy's new name is
quota_policy_1.
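A representative command for this example is:
cluster1::> volume quota policy rename -vserver vs0 -policy-name quota_policy_0 -new-policy-name quota_policy_1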
volume quota policy show
Description
This command displays information about quota policies. The command displays the following information about all quota
policies:
• Vserver name
• Policy name
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver
If this parameter is specified, the command displays information about the quota policies for the specified
Vserver.
[-policy-name <text>] - Policy Name
If this optional parameter is specified, the command displays information about quota policies that match the
specified name.
[-last-modified <MM/DD/YYYY HH:MM:SS>] - Last Modified
If this optional parameter is specified, the command displays information about quota policies with the last
modified time that match the given time.
Examples
The following example displays information about all quota policies.
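A representative command for this example is:
cluster1::> volume quota policy show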
The following example displays information about all quota policies along with the policy ID at the diagnostic privilege level.
volume quota policy rule create
Description
This command creates a quota policy rule. You must enter the following information to create a quota policy rule:
• Vserver name
• Volume name
You can optionally specify the following additional attributes for the quota policy rule:
• User mapping
Note: For a new quota policy rule to get enforced on the volume, you should create the rule in the quota policy assigned to the
Vserver. Additionally, a quota off and on or a quota resize operation must be done using the "volume quota modify"
command.
Parameters
-vserver <vserver name> - Vserver
This parameter specifies the Vserver containing the quota policy for which you are creating a rule.
-policy-name <text> - Policy Name
This parameter specifies the name of the quota policy in which you are creating a rule.
-volume <volume name> - Volume Name
This parameter specifies the name of the volume for which you are creating a rule.
-type {tree|user|group} - Type
This parameter specifies the quota target type of the rule you are creating.
Examples
The following example creates a default tree quota rule for volume vol0 in Vserver vs0 and in the quota policy named
quota_policy_0. This quota policy applies to all qtrees on volume vol0.
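A representative command for this example, assuming a -target parameter that takes an empty string for a default rule, is:
cluster1::> volume quota policy rule create -vserver vs0 -policy-name quota_policy_0 -volume vol0 -type tree -target ""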
The following example creates a quota policy rule for volume vol0 in Vserver vs0 and in the quota policy named
quota_policy_0. This quota policy applies to the UNIX user myuser for a qtree named qtree1 on volume vol0 with
a disk limit of 20 Gigabytes, soft disk limit of 15.4 Gigabytes and threshold limit of 15.4 Gigabytes. User mapping is
turned on for this rule.
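A representative command for this example, assuming -target, -qtree, -disk-limit, -soft-disk-limit, -threshold, and -user-mapping parameters, is:
cluster1::> volume quota policy rule create -vserver vs0 -policy-name quota_policy_0 -volume vol0 -type user -target myuser -qtree qtree1 -disk-limit 20GB -soft-disk-limit 15.4GB -threshold 15.4GB -user-mapping on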
The following example creates a quota policy rule for volume vol0 in Vserver vs0 and in the quota policy named
quota_policy_0. This quota policy applies to the Windows user DOMXYZ\myuser for a qtree named qtree1 on
volume vol0 with a file limit of 40000 and a soft file limit of 26500. User mapping is turned on for this rule.
The following example creates a quota policy rule for volume vol0 in Vserver vs0 and in the quota policy named
quota_policy_0. This quota policy applies to the UNIX user identifier 12345 for a qtree named qtree1 on volume
vol0.
The following example creates a quota policy rule for volume vol0 in Vserver vs0 and in the quota policy named
quota_policy_0. This quota policy applies to the Windows Security Identifier S-123-456-789 for a qtree named
qtree1 on volume vol0.
The following example creates a quota policy rule for volume vol0 in Vserver vs0 and in the quota policy named
quota_policy_0. This quota policy applies to the UNIX group engr for a qtree named qtree1 on volume vol0.
The following example creates a quota policy rule for volume vol0 in Vserver vs0 and in the quota policy named
quota_policy_0. This quota policy applies to the user who is the owner of the file /vol/vol0/qtree1/file1.txt
for qtree qtree1 on volume vol0.
The following example creates a quota policy rule for volume vol0 in Vserver vs0 and in the quota policy named
quota_policy_0. This quota policy applies to the users specified in the target for qtree qtree1 on volume vol0.
Related references
vserver name-mapping on page 1985
volume quota modify on page 1608
volume quota policy rule delete
Description
The volume quota policy rule delete command deletes a quota policy rule. You must enter the following information
to delete a quota policy rule:
• Vserver name
• Volume name
Note: If the rule being deleted belongs to the quota policy that is currently assigned to the Vserver, enforcement of the rule on
the volume must be terminated by performing a quota off and on or a quota resize operation using the "volume quota
modify" command.
Examples
The following example deletes a quota policy rule on Vserver vs1 for the quota policy named quota_policy_1. This quota
policy applies to the group named engr for the qtree named qtree1 on volume vol1.
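A representative command for this example, assuming -target and -qtree parameters, is:
cluster1::> volume quota policy rule delete -vserver vs1 -policy-name quota_policy_1 -volume vol1 -type group -target engr -qtree qtree1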
Related references
volume quota modify on page 1608
volume quota policy rule modify
Description
This command can be used to modify the following attributes of a quota policy rule:
• User mapping
Note: If the rule being modified belongs to the quota policy that is currently assigned to the Vserver, rule enforcement on the
volume must be enabled by performing a quota off and on or a quota resize operation using the "volume quota modify"
command.
Related references
volume quota modify on page 1608
volume quota policy rule show
Description
This command displays the following information about quota policy rules by default:
• Vserver name
• Volume name
• Qtree name
• User mapping
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver
If this parameter is specified, the command displays information about quota rules for the quotas contained on
volumes on the specified Vserver.
Examples
The following example displays information about all the quota policy rules in a cluster. There is one user rule that exists
on Vserver vs0 for the quota policy named quota_policy_0. This quota policy applies to the user named myuser for qtree
named qtree0 on volume vol0.
Soft Soft
User Disk Disk Files Files
Type Target Qtree Mapping Limit Limit Limit Limit Threshold
----- ------ ------ ------- ------- ------- ------ ------- ---------
user  myuser qtree0 on      20GB    18GB    100000 80000   16GB
volume quota policy rule count show
Description
This command displays various counts of quota policy rules defined within a quota policy. By default, the subtotal for each
volume is displayed. Optionally, the command can provide the total rule count across the entire quota policy or detailed
subtotals organized by qtree and quota rule type.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-detail ]
Displays rule count subtotals for each quota rule type. The subtotals for each type are computed for a specific
volume and qtree.
| [-hierarchy ]
Displays rule count subtotals in hierarchical format with subtotals at the quota policy, volume, qtree, and quota
rule type levels.
| [-total ]
Displays the total rule count for each Vserver and quota policy.
| [-instance ]}
Displays detailed information about all fields.
[-vserver <vserver name>] - Vserver
Displays quota rule counts for the specified Vserver.
[-policy-name <text>] - Policy Name
Displays quota rule counts for the specified quota policy.
[-volume <volume name>] - Volume Name
Displays quota rule counts for the specified volume.
[-qtree <qtree name>] - Qtree Name
Displays quota rule counts for the specified qtree.
[-type {tree|user|group}] - Type
Displays quota rule counts for the specified quota rule type.
[-count-where-policy-volume-qtree-type <integer>] - Qtree/Type Subtotal
Subtotal of rules matching the given Vserver, quota policy, volume, qtree, and quota rule type. If specified as
input, only matching totals are displayed.
[-count-where-policy-volume-qtree <integer>] - Qtree Subtotal
Subtotal of rules matching the given Vserver, quota policy, volume, and qtree. All quota rule types are
included. If specified as input, only matching totals are displayed.
[-count-where-policy-volume-type <integer>] - Volume/Type Subtotal
Subtotal of rules matching the given Vserver, quota policy, volume, and quota rule type. All qtrees are
included. If specified as input, only matching totals are displayed.
Examples
The following example shows quota rule counts for Vserver vs0, quota policy default. The total number of rules in
quota policy default is 7500. There are two volumes with quota rules configured. Volume volume0 has a total of 1000
rules, and volume1 has a total of 6500 rules.
cluster1::> volume quota policy rule count show -vserver vs0 -policy-name default
Rule
Volume Count
-------------------------- --------
volume0 1000
volume1 6500
2 entries were displayed.
volume reallocation measure
Description
Performs a measure-only reallocation check on a LUN, NVMe namespace, file, or volume. At the end of each check, the system
logs the optimization results in the Event Message System (EMS). If you use the logfile, the system records detailed
information about the LUN, NVMe namespace, file, or volume layout in the log file. To view previous measure-only
reallocation checks, use the volume reallocation show command.
Note: This command is not supported for FlexGroups or FlexGroup constituents.
Parameters
-vserver <vserver name> - Vserver
Specifies the Vserver.
-path <text> - Path
Specifies the path of the reallocation for a LUN, NVMe namespace, file, or volume.
• m for minutes
• h for hours
• d for days
For example, 30m is a 30 minute interval. The countdown to the next scan begins after the first scan is
complete.
The default interval is 24 hours.
| [-once | -o [true]]} - Once
Specifies that the job runs once and then is automatically removed from the system when set to true. If you use
this command without specifying this parameter, its effective value is false and the reallocation scan runs as
scheduled. If you enter this parameter without a value, it is set to true and a reallocation scan runs once.
[-logpath | -l <text>] - Log Path
Specifies the path for reallocation logs.
[-threshold | -t <integer>] - Threshold
Specifies the threshold when a LUN, NVMe namespace, file, or volume is considered unoptimized and a
reallocation should be performed. Once the threshold is reached, the system creates a diagnostic message that
indicates that a reallocation might improve performance.
The threshold range is from 3 (the layout is moderately optimized) to 10 (the layout is not optimal). The
threshold default is 4.
Examples
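A representative command, based on the parameters documented above, is:
cluster1::> volume reallocation measure -vserver vs0 -path /vol/vol1 -once true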
Related references
volume reallocation show on page 1638
volume reallocation off
Description
Disables all reallocation jobs globally in a cluster. After you use this command, you cannot start or restart any reallocation jobs.
All jobs that are executing when you use this command are stopped. You must use the reallocate on command to enable or
restart reallocation jobs globally in a cluster.
Note: This command is not supported for FlexGroups or FlexGroup constituents.
Examples
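A representative command is:
cluster1::> volume reallocation off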
volume reallocation on
Enable reallocate jobs
Availability: This command is available to cluster administrators at the admin privilege level.
Description
Globally enables all reallocation jobs in a cluster. You must globally enable reallocation scans in the cluster before you can run a
scan or schedule regular scans. Reallocation scans are disabled by default.
Note: This command is not supported for FlexGroups or FlexGroup constituents.
Examples
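A representative command is:
cluster1::> volume reallocation on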
volume reallocation quiesce
Description
Temporarily stops any reallocation jobs that are in progress. When you use this command, the persistent state is saved. You can
use the volume reallocation restart command to restart a job that is quiesced.
There is no limit to how long a job can remain in the quiesced state.
Note: This command is not supported for FlexGroups or FlexGroup constituents.
Parameters
-vserver <vserver name> - Vserver
Specifies the Vserver.
-path <text> - Path
Specifies the file path of the LUN, NVMe namespace, file, or volume that you want to stop temporarily.
Examples
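A representative command, based on the parameters documented above, is:
cluster1::> volume reallocation quiesce -vserver vs0 -path /vol/vol1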
Related references
volume reallocation restart on page 1636
volume reallocation restart
Parameters
-vserver <vserver name> - Vserver
Specifies the Vserver.
-path <text> - Path
Specifies the file path of the LUN, NVMe namespace, file, or volume on which you want to restart
reallocation scans.
[-ignore-checkpoint | -i [true]] - Ignore Checkpoint
Restarts the job at the beginning when set to true. If you use this command without specifying this parameter,
its effective value is false and the job starts the scan at the point where it stopped. If you specify this parameter
without a value, it is set to true and the scan restarts at the beginning.
Examples
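A representative command, based on the parameters documented above, is:
cluster1::> volume reallocation restart -vserver vs0 -path /vol/vol1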
volume reallocation schedule
Description
Schedules a reallocation scan for an existing reallocation job. If the reallocation job does not exist, use the volume
reallocation start command to define a reallocation job.
You can delete an existing reallocation scan schedule. However, if you do this, the job's scan interval reverts to the schedule that
was defined for it when the job was created with the volume reallocation start command.
Note: This command is not supported for FlexGroups or FlexGroup constituents.
Parameters
-vserver <vserver name> - Vserver
Specifies the Vserver.
-path <text> - Path
Specifies the path of the reallocation for a LUN, NVMe namespace, file, or volume.
[-del | -d [true]] - Delete
Deletes an existing reallocation schedule when set to true. If you use this command without specifying this
parameter, its effective value is false and the reallocation schedule is not deleted. If you specify this parameter
without a value, it is set to true and the reallocation schedule is deleted.
[-cron | -s <text>] - Cron Schedule
Specifies the schedule with the following four fields in sequence. Use a space between field values. Enclose
the values in double quotes.
Use an asterisk "*" as a wildcard to indicate every value for that field. For example, an * in the day of month
field means every day of the month. You cannot use the wildcard in the minute field.
You can enter a number, a range, or a comma-separated list of values for a field.
Examples
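A representative command is shown below; the cron string is illustrative only (four space-separated fields, as described above):
cluster1::> volume reallocation schedule -vserver vs0 -path /vol/vol1 -cron "0 23 * 6"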
Related references
volume reallocation start on page 1639
volume reallocation show
Description
Displays the status of a reallocation scan, including the state, schedule, interval, optimization, and log files. If you do not specify
the path for a particular reallocation scan, then the command displays all the reallocation scans.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-v ]
Specify this parameter to display the output in a verbose format.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver
Specify this parameter to display reallocation scans that match the Vserver that you specify.
[-path <text>] - Path
Specify this parameter to display reallocation scans that match the path that you specify.
[-threshold | -t <integer>] - Threshold
Specify this parameter to display reallocation scans that match the threshold that you specify.
[-id <integer>] - Job ID
Specify this parameter to display reallocation scans that match the reallocation job ID that you specify.
[-description <text>] - Job Description
Specify this parameter to display reallocation scans that match the text description that you specify.
Examples
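For illustration (the Vserver name is hypothetical), the following might display all reallocation scans on a Vserver:
cluster1::> volume reallocation show -vserver vs0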
Related references
job schedule show on page 162
Description
Begins a reallocation scan on a LUN, NVMe namespace, file, or volume when you specify the path. If a volume has several
small files that would benefit from periodic optimization, specify the path in the form /vol/volname.
Before performing a reallocation scan, the reallocation job normally performs a check of the current layout optimization. If the
current layout optimization is less than the threshold, then the system does not perform a reallocation on the LUN, NVMe
namespace, file, or volume.
You can define the reallocation scan job so that it runs at a specific interval, or you can use the volume reallocation
schedule command to schedule reallocation jobs.
Parameters
-vserver <vserver name> - Vserver
Specifies the Vserver.
-path <text> - Path
Specifies the path of the reallocation for a LUN, NVMe namespace, file, or volume.
[-interval | -i <text>] - Interval
Specifies how often the reallocation scan runs. Specify the interval as a number followed by one of the
following suffixes:
• m for minutes
• h for hours
• d for days
For example, 30m is a 30 minute interval. The countdown to the next scan begins after the first scan is
complete.
The default interval is 24 hours.
| [-once | -o [true]] - Once
Specifies that the job runs once and then is automatically removed from the system when set to true. If you use
this command without specifying this parameter, its effective value is false and the reallocation scan runs as
scheduled. If you enter this parameter without a value, it is set to true and a reallocation scan runs once.
| [-force | -f [true]]} - Force
Performs a one-time full reallocation on a LUN, file, or volume when set to true. A forced reallocation
rewrites blocks on a LUN, file, or volume unless the reallocation would result in worse performance. If you
use this command without specifying this parameter, its effective value is false and a forced reallocation is not
performed. If you specify this parameter without a value, it is set to true, and a forced reallocation is
performed.
{ [-space-optimized | -p [true]] - Space Optimized
Specifies that snapshot blocks are not copied to save space when set to true. If you use this command without
specifying this parameter, its effective value is false and snapshot blocks are copied. If you specify this
parameter without a value, it is set to true and snapshot blocks are not copied; however, reads from
snapshots might then have a slightly higher latency. You cannot use the space-optimized option with the unshare option.
| [-unshare | -u [true]]} - Unshare Deduplicated Blocks
Specifies that blocks that are shared by deduplication will be unshared. This option can help remove
fragmentation caused on dense volumes. This may result in increased disk usage, especially for full
reallocation. You cannot use the unshare option with the space-optimized option.
{ [-threshold | -t <integer>] - Threshold
Specifies the threshold when a LUN, NVMe namespace, file, or volume is considered unoptimized and a
reallocation should be performed. Once the threshold is reached, the system creates a diagnostic message that
indicates that a reallocation might improve performance.
The threshold range is from 3 (the layout is moderately optimized) to 10 (the layout is not optimal). The
threshold default is 4.
| [-no-check | -n [true]]} - No Threshold Check
Does not check the current layout to determine if a reallocation is needed when set to true. If you use this
command without specifying this parameter, its effective value is false and the system does check the current
layout to determine if a reallocation is needed. If you specify this parameter without a value, it is set to true
and the system does not check the current layout to determine if a reallocation is needed.
Examples
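For illustration (the Vserver name and path are hypothetical), the following might start a one-time reallocation scan on a volume:
cluster1::> volume reallocation start -vserver vs0 -path /vol/vol1 -once true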
Description
Stops and deletes any reallocation scans on a LUN, NVMe namespace, file, or volume. This command stops and deletes in-
progress, scheduled, and quiesced scans.
Note: This command is not supported for FlexGroups or FlexGroup constituents.
Parameters
-vserver <vserver name> - Vserver
Specifies the Vserver.
-path <text> - Path
Specifies the path of the reallocation for a LUN, NVMe namespace, file, or volume.
Examples
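For illustration (the Vserver name and path are hypothetical), the following might stop and delete the reallocation scan on a volume:
cluster1::> volume reallocation stop -vserver vs0 -path /vol/vol1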
Description
This command disables the volume schedule-style feature and sets the schedule style to the default (create-time).
Examples
The following example prepares the schedule-style on all volumes for revert/downgrade.
Description
The volume snaplock modify command modifies one or more SnapLock attributes of a SnapLock volume.
Parameters
-vserver <vserver name> - Vserver
This specifies the Vserver that owns the required SnapLock volume.
-volume <volume name> - Volume
This specifies the SnapLock volume whose attributes are to be modified.
{ [-minimum-retention-period {{<integer> seconds|minutes|hours|days|months|years} |
infinite}] - Minimum Retention Period
Specifies the minimum allowed retention period for files committed to WORM state on the volume. Any file
committed with a retention period shorter than this minimum value is assigned this minimum value.
If this option value is infinite, then every file committed to the volume will have a retention period that
never expires.
Otherwise, the retention period is specified as a number followed by a suffix. The valid suffixes are seconds,
minutes, hours, days, months, and years. For example, a value of 6months represents a retention period of 6
months. The maximum allowed retention period is 70 years. This option is not applicable when extending the
retention period of an already committed WORM file.
[-default-retention-period {{<integer> seconds|minutes|hours|days|months|years} | min | max
| infinite}] - Default Retention Period
Specifies the default retention period that is applied to files while committing to WORM state without an
associated retention period.
If this option value is min, then minimum-retention-period is used as the default retention period. If this option
value is max, then maximum-retention-period is used as the default retention period. If this option value is
infinite, then a retention period that never expires will be used as the default retention period.
The retention period can also be explicitly specified as a number followed by a suffix. The valid suffixes are
seconds, minutes, hours, days, months, and years. For example, a value of 6months represents a retention
period of 6 months. The maximum valid retention period is 70 years.
[-maximum-retention-period {{<integer> seconds|minutes|hours|days|months|years} | infinite}]
- Maximum Retention Period
Specifies the maximum allowed retention period for files committed to WORM state on the volume. Any file
committed with a retention period longer than this maximum value is assigned this maximum value.
If this option value is infinite, then files that have retention period that never expires might be committed to
the volume.
Otherwise, the retention period is specified as a number followed by a suffix. The valid suffixes are seconds,
minutes, hours, days, months, and years. For example, a value of 6months represents a retention period of 6
months. The maximum allowed retention period is 70 years. This option is not applicable when extending the
retention period of an already committed WORM file.
[-autocommit-period {{<integer> minutes|hours|days|months|years} | none}] - Autocommit Period
Specifies the autocommit period for the SnapLock volume. All files that are not modified for a period greater
than the autocommit period of the volume are committed to WORM state.
Examples
The following commands enable privileged delete on a SnapLock volume and then display the resulting setting:
cluster1::> volume snaplock modify -vserver vs1 -volume vol_sle -privileged-delete enabled
[Job 38] Job succeeded: Privileged-delete Attribute Change for Volume "vs1:vol_sle" Completed.
cluster1::> volume snaplock show -vserver vs1 -volume vol_sle -fields privileged-delete
vserver volume privileged-delete
------- -------- -----------------
vs1     vol_sle  enabled
Related references
volume file privileged-delete on page 1549
Description
The volume snaplock prepare-to-downgrade command prepares nodes to downgrade to a release without SnapLock
volume append mode feature. Prior to disabling the feature, the command disables volume append mode on all SnapLock
volumes in the cluster.
Examples
The following example disables the SnapLock volume append mode feature in the local cluster:
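A minimal invocation (assuming no additional parameters are required) might look like:
cluster1::> volume snaplock prepare-to-downgrade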
Description
The volume snaplock show command displays the following information:
• Vserver name
• Volume name
• Volume ComplianceClock
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver
If this parameter is specified, the command displays information for all the SnapLock volumes that match the
specified -vserver value.
[-volume <volume name>] - Volume
If this parameter is specified, the command displays information for the specified -volume value.
[-type {non-snaplock|compliance|enterprise}] - SnapLock Type
If this parameter is specified, the command displays all the volumes that match the specified -type value.
[-minimum-retention-period {{<integer> seconds|minutes|hours|days|months|years} | infinite}]
- Minimum Retention Period
If this parameter is specified, the command displays all the volumes that match the specified -minimum-
retention-period value.
[-default-retention-period {{<integer> seconds|minutes|hours|days|months|years} | min | max
| infinite}] - Default Retention Period
If this parameter is specified, the command displays all the volumes that match the specified -default-
retention-period value.
[-maximum-retention-period {{<integer> seconds|minutes|hours|days|months|years} | infinite}]
- Maximum Retention Period
If this parameter is specified, the command displays all the volumes that match the specified -maximum-
retention-period value.
[-autocommit-period {{<integer> minutes|hours|days|months|years} | none}] - Autocommit Period
If this parameter is specified, the command displays all the volumes that match the specified -autocommit-
period value.
[-is-volume-append-mode-enabled {true|false}] - Is Volume Append Mode Enabled
If this parameter is specified, the command displays all the volumes that match the specified -is-volume-
append-mode-enabled value.
[-privileged-delete {disabled|enabled|permanently-disabled}] - Privileged Delete
If this parameter is specified, the command displays all the volumes that match the specified -privileged-
delete value.
[-expiry-time <text>] - Expiry Time
If this parameter is specified, the command displays all the volumes that match the specified -expiry-time
value.
[-compliance-clock-time <text>] - ComplianceClock Time
If this parameter is specified, the command displays all the volumes that match the specified -compliance-
clock-time value.
Examples
The following command shows a summary of SnapLock volumes on a Vserver:
cluster1::>
The following command lists the complete SnapLock attributes of two SnapLock volumes:
cluster1::>
Description
The volume snapshot compute-reclaimable command calculates the volume space that can be reclaimed if one or more
specified Snapshot copies are deleted.
This command makes heavy use of the system's computational resources and can reduce performance for client requests and
other system processes. Therefore, queries that use query operators (*, |, and so on) are disabled for this command.
You should not specify more than three Snapshot copies per query. Snapshot copies must be specified as a comma-separated list
with no spaces after the commas.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This specifies the volume for which reclaimable space is to be calculated.
-snapshots <snapshot name>, ... - List of Snapshots
This specifies one or more Snapshot copies that are to be considered for deletion. If you list more
than one Snapshot copy, specify a comma-separated list with no spaces after the commas.
Examples
The following example calculates the space that can be reclaimed if the Snapshot copy named hourly.2008-01-10_1505 is
deleted on a volume named vol3, which is a part of the Vserver named vs0:
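The command matching the scenario above would likely be (all names taken from the preceding sentence):
cluster1::> volume snapshot compute-reclaimable -vserver vs0 -volume vol3 -snapshots hourly.2008-01-10_1505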
Description
The volume snapshot create command creates a Snapshot copy of a specified volume.
Parameters
-vserver <vserver name> - Vserver
This specifies the Vserver that contains the volume on which the snapshot is to be created.
-volume <volume name> - Volume
This specifies the volume where a Snapshot copy is to be created.
-snapshot <snapshot name> - Snapshot
This specifies the name of the Snapshot copy that is to be created.
[-comment <text>] - Comment
This optionally specifies a comment for the Snapshot copy.
Examples
The following example creates a Snapshot copy named vol3_snapshot on a volume named vol3 on a Vserver named vs0. The
Snapshot copy is given the comment "Single snapshot" and the operation runs in the background.
cluster1::> volume snapshot create -vserver vs0 -volume vol3 -snapshot vol3_snapshot -comment
"Single snapshot" -foreground false
Description
The volume snapshot delete command deletes a Snapshot copy from a specified volume.
Parameters
-vserver <vserver name> - Vserver
This specifies the Vserver that contains the volume on which the specified Snapshot copy is saved.
-volume <volume name> - Volume
This specifies the volume from which a Snapshot copy is to be deleted.
-snapshot <snapshot name> - Snapshot
This specifies the Snapshot copy that is to be deleted.
[-foreground {true|false}] - Foreground Process
If you use this option and set it to false, the delete operation runs as a background process. If you specify
this option and set it to true, the operation runs as a foreground process. The default is true.
[-force [true]] - Force Delete (privilege: advanced)
If you use this switch, the Snapshot copy is immediately deleted without generating any confirmation
messages. If you do not use this option the operation generates confirmation messages and the operation is
disallowed on application tagged volumes. Passing in a value of true is supported, but not required. The
force switch is typically used for scripting applications where users cannot directly confirm the delete
operation.
[-ignore-owners [true]] - Ignore Snapshot Owners (privilege: advanced)
If you use this switch, the command ignores other processes that might be accessing the Snapshot copy. If you
do not use this option, the operation exhibits default behavior and checks the owner tags before allowing the
deletion to occur. Passing in a value of true is supported, but not required.
Examples
The following example deletes a Snapshot copy named vol3_daily from the volume vol3 on the Vserver vs0:
cluster1::> volume snapshot delete -vserver vs0 -volume vol3 -snapshot vol3_daily
Description
The volume snapshot modify command enables you to change the text comment associated with a Snapshot copy.
Parameters
-vserver <vserver name> - Vserver
This specifies the Vserver that contains the volume on which the specified Snapshot copy is saved.
-volume <volume name> - Volume
This specifies the volume whose Snapshot copy is to be modified.
-snapshot <snapshot name> - Snapshot
This specifies the Snapshot copy whose text comment is to be modified.
[-comment <text>] - Comment
This specifies the new comment for the Snapshot copy.
[-snapmirror-label <text>] - Label for SnapMirror Operations
This specifies the SnapMirror Label for the Snapshot copy. The SnapMirror Label is used by the Vaulting
subsystem when you back up Snapshot copies to the Vault Destination. If an empty label ("") is specified, the
existing label will be deleted.
[-expiry-time <MM/DD/YYYY HH:MM:SS>] - Expiry Time
This specifies the expiry time for the Snapshot copy. The expiry time indicates the time at which the Snapshot
copy becomes eligible for deletion. If an expiry time of ("0") is specified, the existing expiry time will be
deleted.
Examples
The following example modifies the comment of a Snapshot copy named vol3_snapshot of a volume named vol3 on a
Vserver named vs0. The comment is changed to "Pre-upgrade snapshot".
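The command matching the scenario above would likely be (all names taken from the preceding sentence):
cluster1::> volume snapshot modify -vserver vs0 -volume vol3 -snapshot vol3_snapshot -comment "Pre-upgrade snapshot"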
Description
The volume snapshot modify-snapshot-expiry-time command extends the SnapLock expiry time of an existing Snapshot copy.
Examples
The following example extends the retention period of a Snapshot copy snap1 to "03/03/2020 00:00:00":
The following example extends the retention period of a Snapshot copy snap2 to infinite:
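For illustration, the two scenarios above might be invoked as follows. The Vserver and volume names are hypothetical, and the -expiry-time parameter name is an assumption borrowed from the volume snapshot modify command:
cluster1::> volume snapshot modify-snapshot-expiry-time -vserver vs0 -volume vol1 -snapshot snap1 -expiry-time "03/03/2020 00:00:00"
cluster1::> volume snapshot modify-snapshot-expiry-time -vserver vs0 -volume vol1 -snapshot snap2 -expiry-time infinite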
Description
The volume snapshot partial-restore-file command enables you to restore a range of bytes in a file from the version
of the file saved in the Snapshot copy. This command is intended to be used to restore particular pieces of LUNs, NVMe
namespaces, and NFS or CIFS container files that are used by a host to store multiple sources of data. For example, a host might
be storing multiple user databases in the same LUN. A partial file restore can be used to restore one of those databases in the
LUN without touching other databases stored in the LUN. This command is not intended for restoring parts of normal user-level
files that are stored in the volume. Use the volume snapshot restore-file command to restore normal user-level
files. The volume must be online during the partial-restore operation.
For LUNs and NVMe namespaces, this command is supported across all LUN and NVMe namespace source and destination
objects with equal logical block sizes.
Examples
The following example restores the first 4096 bytes of the file foo.txt inside the volume vol3 from the Snapshot copy
vol3_snap:
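A command of roughly this form might be used; the -start-byte and -byte-count parameter names are assumptions, not confirmed by this page:
cluster1::> volume snapshot partial-restore-file -vserver vs0 -volume vol3 -snapshot vol3_snap -path /foo.txt -start-byte 0 -byte-count 4096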
Related references
volume snapshot restore-file on page 1653
Description
This command deletes all Snapshot copies that have the format used by the current version of Data ONTAP. It fails if any
Snapshot copy policies are enabled, or if any Snapshot copies have an owner.
Note: Snapshot policies must be disabled prior to running this command.
Parameters
-node <nodename> - Node
The name of the node.
Description
The volume snapshot rename command renames a Snapshot copy.
Note: You cannot rename a Snapshot copy that is created as a reference copy during the execution of the volume move
command.
Parameters
-vserver <vserver name> - Vserver
This specifies the Vserver that contains the volume on which the specified Snapshot copy is to be renamed.
-volume <volume name> - Volume
This specifies the volume that contains the Snapshot copy to be renamed.
-snapshot <snapshot name> - Snapshot
This specifies the Snapshot copy that is to be renamed.
-new-name <snapshot name> - Snapshot New Name
This specifies the new name for the Snapshot copy.
[-force [true]] - Force Rename (privilege: advanced)
If this parameter is specified, the Snapshot copy rename operation is allowed on application tagged volumes.
Otherwise, the operation is disallowed on application tagged volumes.
Examples
The following example renames a Snapshot copy named vol3_snap on a volume named vol3 and a Vserver named vs0.
The Snapshot copy is renamed to vol3_snap_archive.
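The command matching the scenario above would likely be (all names taken from the preceding sentence):
cluster1::> volume snapshot rename -vserver vs0 -volume vol3 -snapshot vol3_snap -new-name vol3_snap_archive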
Related references
volume move on page 1583
Parameters
-vserver <vserver name> - Vserver
This specifies the Vserver that contains the volume on which the specified Snapshot copy to be restored is
saved.
-volume <volume name> - Volume
This specifies the parent read-write volume whose Snapshot copy is to be restored to take its place.
-snapshot <snapshot name> - Snapshot
This specifies the Snapshot copy that is to be restored to be the read-write parent volume.
[-force [true]] - Force Restore
If you use this parameter, the Snapshot copy is restored even if the volume has one or more newer Snapshot
copies which are currently used as reference Snapshot copy by SnapMirror. If a restore is done in this
situation, this will cause future SnapMirror transfers to fail. The SnapMirror relationship may be repaired
using the snapmirror resync command if a common Snapshot copy is found between the source and
destination volume. If there is no common Snapshot copy between the source and the destination volume, a
baseline SnapMirror copy would be required. If you use this parameter, the operation is also allowed on
application tagged volumes.
[-preserve-lun-ids {true|false}] - Preserve LUN Identifiers
This option enables you to select whether the Snapshot copy restore needs to be non-disruptive to clients due
to LUN or NVMe namespace identifiers changing. If you use this option and set it to true, or choose to not
use this option at all, the volume snapshot restore command fails if the system determines that it cannot
be non-disruptive with regards to LUN or NVMe namespace identifiers. If you use this option and set it to
false, the restore operation proceeds even if this might cause client-visible effects. In this case,
administrators should take the LUNs or NVMe namespaces offline before proceeding.
Examples
The following example restores a Snapshot copy named vol3_snap_archive to be the parent read-write volume for the
volume family. The existing read-write volume is named vol3 and is located on a Vserver named vs0:
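The command matching the scenario above would likely be (all names taken from the preceding sentence):
cluster1::> volume snapshot restore -vserver vs0 -volume vol3 -snapshot vol3_snap_archive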
Related references
snapmirror resync on page 676
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver which contains the volume.
[-volume <volume name>] - Volume Name
This specifies the volume which contains the specified Snapshot copy.
-snapshot | -s <snapshot name> - Snapshot Name
This specifies the Snapshot copy from which the file is restored.
-path <text> - Filepath
This specifies the relative path to the file that is restored from the Snapshot copy. You should specify the -
volume option so that the file is searched for and restored from the Snapshot copy of the specified volume. If
you do not specify -volume, the file is searched for and restored from the Snapshot copy of the root volume.
[-restore-path | -r <text>] - Restore Filepath
This option specifies the destination location inside the volume where the file is restored. If you do not specify
this option, the file is restored at the same location referred to by the -path option. If you specify -restore-path
option, then it should refer to a relative path location within the same volume which contains the source file. If
you do not specify -volume along with the relative path, the file is restored in the root volume.
[-split-disabled [true]] - Disable Space Efficient LUN Splitting
If you use this option and set it to true, space efficient LUN or NVMe namespace clone split is not allowed
during the restore operation. If you use this option and set it to false or do not use this option, then space
efficient LUN or NVMe namespace clone split is allowed during the restore operation.
[-ignore-streams [true]] - Ignore Streams
If you use this parameter, the file is restored without its streams. By default, the streams are restored.
Examples
The following example restores a file foo.txt from the Snapshot copy vol3_snap inside the volume vol3 contained in
a Vserver vs0:
cluster1::> volume snapshot restore-file -vserver vs0 -volume vol3 -snapshot vol3_snap -path /
foo.txt
Description
The volume snapshot show command displays information about Snapshot copies. If no parameters are specified, the
command displays a table with the following information about all the available Snapshot copies:
• Vserver name
• Volume name
• Size
To display a detailed list view with additional information, run the command and select the -instance view. In addition to the
above mentioned information about the Snapshot copies, the detailed list view provides the following additional information:
• Creation time
• Snapshot busy
• 7-Mode Snapshot
• Constituent Snapshot
• Expiry Time
At the advanced or higher privilege level the detailed view provides the following additional information:
• Physical Snap ID
• Logical Snap ID
• Snapshot tags
• Instance UUID
• Version UUID
Note: For Snapshot copies whose parent volume is a FlexGroup, some information is not available and empty values will be
displayed. This information includes:
• State
• Size
All information is available for Snapshot copies whose parent volume is a FlexGroup Constituent.
At the admin and advanced privilege levels, Snapshot copies whose parent volume is a FlexGroup Constituent are not
displayed by default. To display them, run the command and set the -is-constituent parameter to true. At the diagnostic or
higher privilege level, all Snapshot copies are displayed by default.
The list view is automatically enabled if a single Snapshot copy is specified by using the -vserver, -volume and -snapshot
options together.
A preformatted query for displaying the time-related information is available by specifying the -time format specifier. This
displays a table that contains the following fields for all the available Snapshot copies:
• Vserver name
• Volume name
• Creation time
By using the -fields option you can choose to print only certain fields in the output. This presents the selected fields in a
table view. This is ideal when you want information that differs from the default table view, but in a format that is visually
easy to compare.
You can specify additional parameters to display the information that matches only those parameters. For example, to display
information only about Snapshot copies of the load-sharing volumes, run the command with the -volume-type LS parameter.
If you specify multiple filtering parameters, only those Snapshot copies that match all the specified parameters are displayed.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-time ]
If you specify the -time parameter, the command displays time-related information about all entries.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Examples
The following example displays default information about all Snapshot copies of a volume named vol1:
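The command for this first example might look like (the Vserver qualifier is omitted here, as the sentence names only the volume):
cluster1::> volume snapshot show -volume vol1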
The following example displays Snapshot copies which are older than 1 hour, limiting the output to wanted fields:
The following example displays detailed information about a specific Snapshot copy, using the 'snap' alias:
Vserver: cluster1
Volume: vol1
Snapshot: one
Snapshot Data Set ID: 4294968322
Snapshot Master Data Set ID: 6442451970
Creation Time: Mon Nov 17 10:23:42 2014
Snapshot Busy: false
List of Owners: -
Snapshot Size: 68KB
Percentage of Total Blocks: 0%
Percentage of Used Blocks: 33%
Consistency Point Count: 4
Comment: -
File System Version: 9.0
7-Mode Snapshot: false
Label for SnapMirror Operations: -
Constituent Snapshot: false
Node: node1
Snapshot Inofile Version: 3
Expiry Time: -
SnapLock Expiry Time: -
Description
The volume snapshot show-delta command returns the number of bytes that changed between two Snapshot copies or a
Snapshot copy and the active filesystem. This is calculated from the number of blocks that differ multiplied by the block size.
The command also shows the time elapsed between the Snapshot copies in seconds.
Queries that use query operators (*, |, etc.) are disabled for this command to avoid performance degradation for client requests.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This specifies the volume for which the delta is to be calculated.
-snapshot1 <snapshot name> - First Snapshot Name
This specifies the first Snapshot copy for the comparison.
[-snapshot2 <snapshot name>] - Second Snapshot Name
This specifies the second Snapshot copy for the comparison. If the field is not specified, it is assumed to be the
Active File System.
Examples
The following example shows the bytes changed and the time separating two Snapshot copies:
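For illustration (the Vserver, volume, and Snapshot copy names are hypothetical), such a comparison might be run as:
cluster1::> volume snapshot show-delta -vserver vs0 -volume vol3 -snapshot1 hourly.0 -snapshot2 hourly.1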
Description
The volume snapshot autodelete modify command enables you to modify Snapshot autodelete and LUN, NVMe
namespace or file clone autodelete policy settings. Based on the defined policy, automatic deletion of Snapshot copies and LUN,
NVMe namespace or file clones is triggered. Automatic deletion of Snapshot copies and LUN, NVMe namespace or file clones
is useful when you want to automatically reclaim space consumed by the Snapshot copies and LUN, NVMe namespace or file
clones from the volume when it is low in available space. LUN, NVMe namespace or file clone autodelete follows Snapshot
copy autodelete. This command works only on a read-write parent volume. You cannot set up automatic Snapshot copy deletion
and automatic LUN, NVMe namespace or file clone deletion for read-only volumes.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This specifies the volume whose autodelete policy has to be modified.
[-enabled {true|false}] - Enabled
This option specifies whether automatic deletion of Snapshot copies and LUN, NVMe namespace or file
clones is enabled or disabled. If set to true, automatic deletion of Snapshot copies and LUN, NVMe
namespace or file clones is enabled. If set to false, automatic deletion of Snapshot copies and LUN, NVMe
namespace or file clones is disabled.
[-commitment {try|disrupt|destroy}] - Commitment
This option specifies which Snapshot copies and LUN, NVMe namespace or file clones can be automatically
deleted to reclaim back space.
When set to try, the Snapshot copies which are not locked by any application and the LUN, NVMe
namespace or file clones which are not configured as preserved are deleted.
When set to disrupt, the Snapshot copies which are not locked by data backing functionalities (such as
volume clones, LUN clones, NVMe namespace clones and file clones) and LUN, NVMe namespace or file
clones which are not configured as preserved are deleted. In the disrupt mode, the Snapshot copies locked
by data protection utilities such as SnapMirror and Volume Move can be deleted. If such a locked Snapshot
copy is deleted during the data transfer, the transfer is aborted.
When set to destroy, the Snapshot copies locked by the data backing functionalities are deleted. In addition,
all the LUN, NVMe namespace or file clones in the volume are deleted.
[-defer-delete {scheduled|user_created|prefix|none}] - Defer Delete
This option determines the order in which Snapshot copies can be deleted.
Possible values are as follows:
• When set to scheduled, scheduled Snapshot copies are the last to be deleted.
• When set to user_created, user Snapshot copies are the last to be deleted.
• When set to prefix, Snapshot copies whose names match the prefix specified by the -defer-delete-prefix parameter are the last to be deleted.
• When set to none, no deletion order is deferred.
This option is not applicable for LUN, NVMe namespace or file clones.
[-delete-order {newest_first|oldest_first}] - Delete Order
This option specifies if the oldest Snapshot copy and the oldest LUN, NVMe namespace or file clone or the
newest Snapshot copy and the newest LUN, NVMe namespace or file clone are deleted first.
[-defer-delete-prefix <text>] - Defer Delete Prefix
This option specifies the prefix string for the -defer-delete prefix parameter. The option is not
applicable for LUN, NVMe namespace or file clones.
[-target-free-space <percent>] - Target Free Space
This option specifies the free space percentage at which the automatic deletion of Snapshot copies and LUN,
NVMe namespace or file clones must stop. Depending on the -trigger value, Snapshot copies and LUN, NVMe
namespace or file clones are deleted until the target free space percentage is reached.
[-trigger {volume|snap_reserve|(DEPRECATED)-space_reserve}] - Trigger
This option specifies the condition which starts the automatic deletion of Snapshot copies and LUN, NVMe
namespace or file clones.
Setting this option to volume triggers automatic deletion of Snapshot copies and LUN, NVMe namespace or
file clones when the volume reaches threshold capacity and the volume space reserved for Snapshot copies is
exceeded.
Setting the option to snap_reserve triggers automatic deletion of Snapshot copies and LUN, NVMe
namespace or file clones when the space reserved for Snapshot copies reaches threshold capacity.
Setting the option to (DEPRECATED)-space_reserve triggers automatic deletion of Snapshot copies when
reserved space in the volume reaches threshold capacity and the volume space reserved for Snapshot copies is
exceeded.
Note: The option space_reserve is deprecated.
The threshold capacity is determined by the size of the volume as follows:
• If the volume size is less than 20 GB, the autodelete threshold is 85%.
• If the volume size is equal to or greater than 20 GB and less than 100 GB, the autodelete threshold is 90%.
• If the volume size is equal to or greater than 100 GB and less than 500 GB, the autodelete threshold is
92%.
• If the volume size is equal to or greater than 500 GB and less than 1 TB, the autodelete threshold is 95%.
• If the volume size is equal to or greater than 1 TB, the autodelete threshold is 98%.
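The tiered thresholds above can be expressed as a small lookup. The following is a minimal Python sketch, not part of the ONTAP CLI; the function name is illustrative, and 1 TB is assumed to equal 1024 GB:

```python
def autodelete_threshold(volume_size_gb: float) -> int:
    """Return the autodelete threshold capacity (percent) for a volume,
    following the size tiers documented for the -trigger option.
    Illustrative helper only; 1 TB is taken as 1024 GB (assumption).
    """
    if volume_size_gb < 20:
        return 85
    if volume_size_gb < 100:
        return 90
    if volume_size_gb < 500:
        return 92
    if volume_size_gb < 1024:  # less than 1 TB
        return 95
    return 98
```

For example, a 50 GB volume falls in the second tier, so its autodelete threshold is 90%.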
Examples
The following example enables Snapshot autodelete and sets the trigger to snap_reserve for volume vol3 which is part
of the Vserver vs0:
cluster1::> volume snapshot autodelete modify -vserver vs0 -volume vol3 -enabled true -trigger
snap_reserve
The following example enables Snapshot autodelete and LUN, NVMe namespace or file clone autodelete for volume
vol3 which is part of the Vserver vs0:
cluster1::> volume snapshot autodelete modify -vserver vs0 -volume vol3 -enabled true -trigger
volume -commitment try -delete-order oldest_first -destroy-list lun_clone,file_clone
Description
The volume snapshot autodelete show command displays information about Snapshot autodelete policies. The
command output depends on the parameters specified with the command. If no parameters are specified, the command displays
a table with the following information about all the available Snapshot autodelete policies:
• Vserver name
• Volume name
• Option name
• Option value
To display a detailed list view with additional information, run the command and select the -instance view. The detailed list
view provides the following information:
• Vserver name
• Volume name
• Enabled
• Commitment
• Defer Delete
• Delete Order
• Trigger
• Destroy List
• Is Constituent Volume
By using the -fields option you can choose to print only certain fields in the output. This presents the selected fields in a
table view. This is useful when you want information different from what the default table view provides, but would like it in a
format that is visually easy to compare.
You can specify additional parameters to display the information that matches only those parameters. For example, to display
information only about Snapshot autodelete policies which are enabled, run the command with -enabled true parameter. If
you specify multiple filtering parameters, only those policies that match all the specified parameters are displayed.
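The filtering behavior described above is a simple AND over all specified parameters. The following Python sketch illustrates that semantics only; the function and field names are hypothetical and not part of the ONTAP CLI:

```python
def filter_policies(policies, **criteria):
    """Return only the policies matching ALL given field values,
    mirroring how multiple filtering parameters narrow the output
    of 'volume snapshot autodelete show'. Illustrative sketch:
    policies are plain dictionaries, criteria are field=value pairs.
    """
    return [
        policy for policy in policies
        if all(policy.get(field) == value for field, value in criteria.items())
    ]
```

With no criteria, every policy is returned, matching the command's default behavior of listing all policies.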
Parameters
{ [-fields <fieldname>, ...]
This option allows you to print only certain fields in the output.
| [-instance ]}
This option allows you to print a detailed list view.
[-vserver <vserver name>] - Vserver Name
If this parameter and the -volume parameter are specified, the command displays detailed autodelete policy
information about the specified volume. If this parameter is specified by itself, the command displays
autodelete policy information about volumes on the specified Vserver.
[-volume <volume name>] - Volume Name
If this parameter and the -vserver parameter are specified, the command displays detailed autodelete policy
information about the specified volume. If this parameter is specified by itself, the command displays
autodelete policy information about all volumes matching the specified name.
[-enabled {true|false}] - Enabled
If this parameter is specified, the command displays information about autodelete policies that match the
specified parameter value.
[-commitment {try|disrupt|destroy}] - Commitment
If this parameter is specified, the command displays information about autodelete policies that match the
specified commitment value.
[-defer-delete {scheduled|user_created|prefix|none}] - Defer Delete
If this parameter is specified, the command displays information about autodelete policies that match the
specified defer deletion criterion.
[-delete-order {newest_first|oldest_first}] - Delete Order
If this parameter is specified, the command displays information about autodelete policies that match the
specified deletion order.
[-defer-delete-prefix <text>] - Defer Delete Prefix
If this parameter is specified, the command displays information about autodelete policies that match the
prefix used for deferring deletion.
[-target-free-space <percent>] - Target Free Space
If this parameter is specified, the command displays information about autodelete policies that match the
specified target free space.
Examples
The following example displays Snapshot autodelete policy settings for volume vol3 which is inside the Vserver vs0:
Description
The volume snapshot policy add-schedule command adds a schedule to a Snapshot policy. You can create a schedule
by using the job schedule cron create or job schedule interval create commands.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which a Snapshot policy schedule is to be added.
-policy <snapshot policy> - Snapshot Policy Name
This specifies the Snapshot policy to which a schedule is to be added.
-schedule <text> - Schedule Name
This specifies the schedule that is to be added to the Snapshot policy.
-count <integer> - Maximum Snapshot Copies for Schedule
This specifies the maximum number of Snapshot copies that can be taken by the specified schedule. The total
count of all the Snapshot copies to be retained for the policy cannot be more than 1023.
Examples
The following example adds a schedule named midnight to the Snapshot policy named snappolicy_nightly on Vserver
vs0. The schedule can take a maximum of five Snapshot copies.
cluster1::> volume snapshot policy add-schedule -vserver vs0 -policy snappolicy_nightly -schedule
midnight -count 5
Related references
job schedule cron create on page 163
job schedule interval create on page 167
Description
The volume snapshot policy create command creates a Snapshot policy. A Snapshot policy includes at least one
schedule, up to a maximum of five schedules, and a maximum number of Snapshot copies per schedule. You can create a
schedule by using the job schedule cron create or job schedule interval create commands. When applied to a
volume, the Snapshot policy specifies the schedule on which Snapshot copies are taken and the maximum number of Snapshot
copies that each schedule can take. The total count of all the Snapshot copies to be retained for the policy cannot be more than
1023.
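The constraints above (one to five schedules, total retained count of at most 1023) can be checked mechanically. This is a minimal Python sketch of those documented rules; the function name is illustrative and not an ONTAP API:

```python
def validate_policy(schedule_counts):
    """Check the documented constraints for a Snapshot policy:
    at least one schedule, at most five, and a total retained
    Snapshot copy count of no more than 1023.
    Illustrative only; takes a list of per-schedule copy counts.
    """
    if not 1 <= len(schedule_counts) <= 5:
        return False
    return sum(schedule_counts) <= 1023
```

For example, five schedules retaining 300 copies each fail validation (1500 > 1023), while five schedules of 200 copies each pass.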
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the Snapshot policy is to be created.
-policy <snapshot policy> - Snapshot Policy Name
This specifies the Snapshot policy that is to be created.
-enabled {true|false} - Snapshot Policy Enabled
This specifies whether the Snapshot policy is enabled.
[-comment <text>] - Comment
This option specifies a text comment for the Snapshot policy.
-schedule1 <text> - Schedule1 Name
This specifies the name of the first schedule associated with the Snapshot policy.
Examples
The following example creates a Snapshot policy named snappolicy_4hrs on a Vserver named vs0. The policy runs on a
single schedule named 4hrs with a prefix every_4_hour and has a maximum number of five Snapshot copies.
Related references
job schedule cron create on page 163
job schedule interval create on page 167
Description
The volume snapshot policy delete command deletes a Snapshot policy.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the Snapshot policy is to be deleted.
-policy <snapshot policy> - Snapshot Policy Name
This specifies the Snapshot policy that is to be deleted.
Examples
The following example deletes a Snapshot policy named snappolicy_hourly on Vserver vs0:
Description
The volume snapshot policy modify command enables you to modify the description associated with a Snapshot policy
and whether the policy is enabled or disabled.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which the Snapshot policy is to be modified.
-policy <snapshot policy> - Snapshot Policy Name
This specifies the Snapshot policy that is to be modified.
[-enabled {true|false}] - Snapshot Policy Enabled
This optionally specifies whether the Snapshot policy is enabled.
[-comment <text>] - Comment
This specifies the comment text for the Snapshot policy.
[-snapmirror-labels <text>, ...] - Label for SnapMirror Operations
This optionally specifies a comma-separated list of SnapMirror labels that are applied to the schedules in the
Snapshot policy. Each label applies to one schedule: the first label to the first schedule, the second label to the
second schedule, and so on. You can specify a maximum of five SnapMirror labels, which corresponds to the
maximum number of schedules a Snapshot policy can have. If an empty string ("") is specified, the existing
labels are deleted from all the schedules.
Examples
The following example modifies the comment for the Snapshot policy snappolicy_wknd on Vserver vs0:
cluster1::> volume snapshot policy modify -vserver vs0 -policy snappolicy_wknd -comment "Runs only
on weekends"
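The positional pairing of labels with schedules described for -snapmirror-labels can be sketched as follows. This Python fragment is illustrative only; the function name and data shapes are assumptions, not ONTAP APIs:

```python
def apply_snapmirror_labels(schedules, labels):
    """Positionally pair up to five SnapMirror labels with a policy's
    schedules: first label to first schedule, and so on. A single
    empty string clears the labels from all schedules, as documented.
    Illustrative sketch; returns a schedule-to-label mapping.
    """
    if labels == [""]:
        return {schedule: None for schedule in schedules}
    if len(labels) > len(schedules):
        raise ValueError("more labels than schedules")
    return {
        schedule: (labels[i] if i < len(labels) else None)
        for i, schedule in enumerate(schedules)
    }
```

Schedules beyond the supplied labels keep no label, reflecting that each label in the list applies to exactly one schedule.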
Description
The volume snapshot policy modify-schedule command modifies the maximum number of Snapshot copies that can
be taken by a Snapshot policy's schedule.
Parameters
-vserver <vserver name> - Vserver Name
This specifies the Vserver on which a Snapshot policy schedule is to be modified.
-policy <snapshot policy> - Snapshot Policy Name
This specifies the Snapshot policy whose schedule is to be modified.
-schedule <text> - Schedule Name
This specifies the schedule that is to be modified.
[-newcount <integer>] - Maximum Snapshot Copies for Schedule
This specifies the maximum number of Snapshot copies that can be taken by the specified schedule. The total
count of all the Snapshot copies to be retained for the policy cannot be more than 1023.
[-newsnapmirror-label <text>] - Label for SnapMirror Operations
This specifies the SnapMirror Label identified with a Snapshot copy when it is created for the specified
schedule. The SnapMirror Label is used by the Vaulting subsystem when you back up Snapshot copies to the
Vault Destination. If an empty label ("") is specified, the existing label will be deleted.
Examples
The following example changes the maximum number of Snapshot copies from five to four for a schedule named
midnight on a Snapshot policy named snappolicy_nightly on Vserver vs0:
Description
The volume snapshot policy remove-schedule command removes a schedule from a Snapshot policy.
Examples
The following example removes a schedule named hourly from a Snapshot policy named snappolicy_daily on Vserver
vs0:
cluster1::> volume snapshot policy remove-schedule -vserver vs0 -policy snappolicy_daily -schedule
hourly
Description
The volume snapshot policy show command displays the following information about Snapshot policies:
• Vserver name
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-revert-incompatible ] (privilege: advanced)
If this parameter is specified, the command displays Snapshot policies that are not supported in Data ONTAP
8.2. The total Snapshot copy count in the policy needs to be reduced to be equal to or less than the supported
count for the revert operation to succeed.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
If this parameter is specified, the command displays Snapshot policies on the specified Vserver.
Examples
The following example displays information about all Snapshot policies:
Vserver: vs0
Number of Is
Policy Name Schedules Enabled Comment
------------------------ --------- ------- ----------------------------------
p1 1 false -
Schedule Count Prefix SnapMirror Label
---------------------- ----- ---------------------- -------------------
weekly 2 weekly -
p2 2 true -
Schedule Count Prefix SnapMirror Label
---------------------- ----- ---------------------- -------------------
hourly 6 hourly -
daily 2 daily -
Description
The volume transition-convert-dir show command displays information about ongoing directory copy conversion
operations.
Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
Displays summary information about the ongoing copy conversions of directories for the volumes in the
specified Vserver.
[-volume <volume name>] - Volume Name
Displays summary information about the ongoing copy conversions of directories that are occurring on the
specified volume.
[-path <text>] - Directory Being Converted
Displays summary information for the ongoing copy conversions of directories that have the specified
directory path to convert.
[-job-id <integer>] - Convert Job ID
Displays summary information for the ongoing copy conversions of directories that have the specified job ID.
Examples
The following example illustrates how to show directory conversions for a volume:
Description
The volume transition-convert-dir start command moves the directory entries in an existing directory to a new
temporary directory and then replaces the existing directory with the temporary directory. This command is useful only for
directories that were created in a non-Unicode format on a 7-Mode storage system and then transitioned to clustered Data
ONTAP by using a SnapMirror relationship of type TDP. It converts the directories to the Unicode format in a way
that is less likely to disrupt the operation of the Data ONTAP systems than the existing directory conversion mechanisms. The
temporary directory is visible to clients. Attempting to manipulate the directory being copied or the temporary directory
might result in unexpected side effects and should be avoided.
Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
Specifies the volume in which the directory to be converted is located.
-path <file path> - Directory Path
Specifies the path to the directory to be converted from the root of the volume specified with the -volume
parameter. The root directory of a volume might not be converted using this command. Also, the path must not
have a symbolic link as the last component in the path.
Examples
The following example shows how to start a 7-mode directory conversion for a given path in a volume:
cluster1::*> volume transition-convert-dir start -vserver vs0 -volume vol1 -path /data/large_dir
vserver add-aggregates
Add aggregates to the Vserver
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The vserver add-aggregates command adds aggregates to the Vserver.
Parameters
-vserver <vserver> - Vserver
Specifies the Vserver to which aggregates are to be added.
-aggregates <aggregate name>, ... - List of Aggregates to Be Added
Specifies the list of aggregates to add to the Vserver. Do not specify root aggregates in this list: although the
command returns success, volumes cannot be created on root aggregates. In a MetroCluster configuration, this
command does not honor the remote cluster's aggregates.
Examples
The following example illustrates how to add aggregates aggr1 and aggr2 to a Vserver named vs.example.com:
vserver add-protocols
Add protocols to the Vserver
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The vserver add-protocols command adds given protocols to a specified Vserver.
Parameters
-vserver <vserver> - Vserver
This specifies the Vserver that is to be modified.
-protocols {nfs|ci