NetappTS ExerciseGuide Answers
ONTAP Troubleshooting
COPYRIGHT
© 2017 NetApp, Inc. All rights reserved. Printed in the U.S.A. Specifications subject to change without notice.
No part of this document covered by copyright may be reproduced in any form or by any means—graphic, electronic, or
mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system—without prior written
permission of NetApp, Inc.
TRADEMARK INFORMATION
NETAPP, the NETAPP logo, and the marks listed at http://www.netapp.com/TM are trademarks of NetApp, Inc. Other
company and product names may be trademarks of their respective owners.
Exercise Equipment
The student lab environment consists of one vApp for each student.
The vApp is labeled OTS_X0Y, where X is the set number and Y is the student vApp number.
Objectives
This exercise focuses on enabling you to do the following:
Conduct a full and comprehensive health check of an ONTAP cluster
Access the cluster using OnCommand System Manager (System Manager)
Access cluster and node log files via HTTPS
This is an optional module. The instructor should decide whether any of the tasks here are useful, depending on the expertise of the students in the class.
1-2. Open the Remote Desktop Connection (RDC) application and connect to your access host.
1-5. Run the following commands to do a complete health check of the cluster.
cluster1::*> network interface show
cluster1::*> cluster show
cluster1::*> cluster ring show
cluster1::*> storage failover show
cluster1::*> event log show -severity WARNING
cluster1::*> event log show -severity EMERGENCY
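To capture the whole health check in one pass, here is a minimal sketch that runs the same commands over SSH from the access host and logs the output (the admin user name and the cluster address alias are assumptions for your lab):

for cmd in "network interface show" "cluster show" "cluster ring show" \
           "storage failover show" "event log show -severity WARNING" \
           "event log show -severity EMERGENCY"; do
    # Each clustershell command can be passed directly on the ssh command line.
    ssh admin@cluster1-mgmt "$cmd"
done | tee healthcheck_$(date +%Y%m%d).log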
NOTE: In ONTAP 8.3 software, you do not need to enable web services and HTTP.
6-3. Connect to the cluster1 management interface using the administrator account from the access
host. You might want to enable logging and save all your session output.
6-5. Access the URLs to view the log directory on each node. You must log in using the cluster
administration credentials.
https://<cluster-mgmt-ip>/spi/<node_name>/etc/log/
For example,
https://<cluster-mgmt-ip>/spi/node1/etc/log/
https://<cluster-mgmt-ip>/spi/node2/etc/log/
6-6. Access the URLs to view the directory where the core files are saved on each node.
https://<cluster-mgmt-ip>/spi/<node_name>/etc/crash/
For example,
https://<cluster-mgmt-ip>/spi/node1/etc/crash/
https://<cluster-mgmt-ip>/spi/node2/etc/crash/
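You can also fetch these directories from the access host with curl instead of a browser; a minimal sketch, assuming the cluster admin account and a self-signed certificate (hence -k):

curl -k -u admin https://<cluster-mgmt-ip>/spi/node1/etc/log/
curl -k -u admin https://<cluster-mgmt-ip>/spi/node1/etc/crash/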
End of Exercise
Objectives
This exercise focuses on enabling you to do the following:
Recover a replicated database (RDB) configuration
Resolve RDB replication problems
Perform cluster and node backups
Resolve an issue with /mroot
1-2. List two frequent reasons that could cause a cluster configuration backup to fail.
Common causes include:
Lack of space in /mroot
Missing or misnamed files in /mroot for the RDB and /var for the CDB
Failures in the job manager
1-4. Identify a knowledge base article to resolve the error message in Item 3.
Troubleshooting Workflow: Cluster Config Backup/Restore: Backup failure
KB Article Number: 000014430 (Former KB ID: 2017186)
https://kb.netapp.com/support/s/article/ka11A00000015ga/troubleshooting-workflow-cluster-config-backup-restore-backup-failure
List the command that verifies that the scheduled backups were created and distributed within
the cluster.
cluster1::*> system configuration backup show
List the command that you can use to recover a node’s configuration.
cluster1::*> system configuration recovery node restore
After the node restore, sync the node so that it gets the RDB configuration data.
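A minimal sketch of the restore-and-sync sequence (the backup name is a placeholder; use the name reported by system configuration backup show):

cluster1::*> system configuration recovery node restore -backup node1.cluster.7z
cluster1::*> system configuration recovery cluster sync -node <restored_node>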
2-3. On node1, start a job to create a system configuration backup of the entire cluster and note the
job ID number.
cluster1::*> system configuration backup create -node node1 -backup-type cluster -backup-name node1.cluster
2-4. Before the job finishes, review the job that you have created:
cluster1::*> job show
cluster1::*> job show -id <ID#>
(Use the job ID from the backup create command.)
cluster1::*> job show -id <ID#> -fields uuid
cluster1::*> job show -uuid UUID_from_the_previous_command
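For example, with a hypothetical job ID of 127 (use the ID that the backup create command printed):

cluster1::*> job show -id 127
cluster1::*> job show -id 127 -fields uuid
Then pass the reported UUID to job show -uuid.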
Manual Break:
Manual Fix:
4-5. Log in to the node management interface for node2 again, and answer these questions:
Are you able to log in?
Why or why not?
We are not able to log in because mgwd was not restarted; spmctl is no longer monitoring mgwd.
You can verify this from the systemshell as follows:
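A minimal check, assuming the node2 systemshell:

node2% ps -A | grep mgwd
No output indicates that mgwd is not running.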
4-7. From node2's systemshell, unmount mroot using the following commands:
% cd /etc
% sudo ./netapp_mroot_unmount
% exit
4-8. Log in to the cluster management session, and check the cluster health.
cluster1::*> cluster show
Node Health Eligibility Epsilon
-------------------- ------- ------------ ------------
node1 true true true
node2 true true false
node3 true true false
node4 true true false
4-11. Check the cluster health again, and answer these questions:
Do you see a difference?
If so, why?
What is nonoperational?
cluster1::*> cluster show
Node Health Eligibility Epsilon
-------- ----------- ------- ------------
node1 true true true
node2 false true false
node3 true true false
node4 true true false
Note: Restarting mgwd on this node does not remount mroot (as it did in previous versions of ONTAP).
The following was the solution in previous versions:
Remount /mroot by restarting the management gateway from the systemshell of node2. When mgwd restarts, it mounts mroot if it is not already mounted.
node2% ps -A | grep mgwd    (verifies that mgwd has been started)
node2% df | grep mroot
localhost:0x80000000,0xc3ffb35a 1881376 1601136 280240 85% /mroot
/mroot/etc/cluster_config/vserver 1881376 1601136 280240 85% /mroot/vserver_fs
End of Exercise
Objectives
This exercise focuses on enabling you to do the following:
Identify the network component and the data component interaction
Outline the networking implications of upgrading to ONTAP 8.3 software
Use network triage tools
Describe the implications of vifmgr going Out of Quorum (OOQ)
The customer says that these LIFs are normally home on the node cluster-01. Explain
which vifmgr behavior might explain why these LIFs are now on node cluster-03.
1-3. A customer calls in and provides the following output:
cluster::*> cluster ring show
Node      UnitName Epoch    DB Epoch DB Trnxs Master    Online
--------- -------- -------- -------- -------- --------- ---------
primary
            nfs_lif1     up/-     10.61.83.200/24    cluster-01    e0a     true
1-4. The customer says that the entire cluster is not serving data. The customer wants an explanation as to why the LIFs are home but not serving data.
2-2. View the current networking interface configuration for node4 by entering the following
command:
cluster1::> net int show -curr-node node4
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
cluster1-04_clus1
up/up 169.254.33.29/16 node4 e0a true
cluster1-04_clus2
up/up 169.254.33.30/16 node4 e0b true
cluster1
cluster1-04_mgmt1
up/up 192.168.6.34/24 node4 e0c true
nassvm1
nassvm1_data4
up/up 192.168.6.118/24 node4 e0d true
nassvm2
nassvm2_data4
up/up 192.168.6.128/24 node4 e0d true
sansvm1
sansvm1_data2
up/up 192.168.6.132/24 node4 e0d true
sansvm2
sansvm2_data2
up/up 192.168.6.136/24 node4 e0d true
7 entries were displayed.
2-4. From the systemshell prompt of node4, view the status of the network ports on the node
by running the following command:
node4% ifconfig -a
e0c: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=8009b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,LINKSTATE>
ether 00:50:56:01:21:1f
inet 192.168.6.34 netmask 0xffffff00 broadcast 192.168.6.255 NODEMGMTLIF Vserver ID: -1
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
e0d: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=8009b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,LINKSTATE>
ether 00:50:56:01:21:20
inet 192.168.6.118 netmask 0xffffff00 broadcast 192.168.6.255 DATALIF Vserver ID: 5
inet 192.168.6.128 netmask 0xffffff00 broadcast 192.168.6.255 DATALIF Vserver ID: 6
inet 192.168.6.132 netmask 0xffffff00 broadcast 192.168.6.255 DATALIF Vserver ID: 7
inet 192.168.6.136 netmask 0xffffff00 broadcast 192.168.6.255 DATALIF Vserver ID: 8
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
e0e: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=8009b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,LINKSTATE>
ether 00:50:56:01:21:21
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
e0f: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=8009b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,LINKSTATE>
ether 00:50:56:01:21:22
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
ipfw0: flags=8801<UP,SIMPLEX,MULTICAST> metric 0 mtu 65536
lo0: flags=80c9<UP,LOOPBACK,RUNNING,NOARP,MULTICAST> metric 0 mtu 8232
options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
inet 127.0.10.1 netmask 0xff000000 LOOPBACKLIF Vserver ID: -1
inet 127.0.20.1 netmask 0xff000000 LOOPBACKLIF Vserver ID: -1
inet 127.0.0.1 netmask 0xff000000 LOOPBACKLIF Vserver ID: -1
2-5. Correlate the output from Step 2 and Step 4 to determine whether the interface configuration, as
reported by the management component, agrees with the interface configuration of the FreeBSD
networking layer.
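One quick way to make the comparison, assuming you saved each output to a file from your logged session (the file names are placeholders):

grep -oE '192\.168\.6\.[0-9]+' net_int_show.txt | sort > mgmt_view.txt
grep -oE '192\.168\.6\.[0-9]+' ifconfig_a.txt | sort > freebsd_view.txt
diff mgmt_view.txt freebsd_view.txt    # no output means the two views agree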
2-8. Repeat Step 2 through Step 5 to observe that the action taken in Step 7 was correctly passed on to the
FreeBSD networking layer of node4.
cluster1::> net int show -curr-node node4
(network interface show)
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
cluster1-04_clus1
up/up 169.254.33.29/16 node4 e0a true
cluster1-04_clus2
up/up 169.254.33.30/16 node4 e0b true
cluster1
cluster1-04_mgmt1
up/up 192.168.6.34/24 node4 e0c true
nassvm1
nassvm1_data4
up/up 192.168.6.118/24 node4 e0d true
nassvm2
nassvm2_data4
down/down 192.168.6.128/24 node4 e0d true
sansvm1
sansvm1_data2
up/up 192.168.6.132/24 node4 e0d true
sansvm2
sansvm2_data2
up/up 192.168.6.136/24 node4 e0d true
7 entries were displayed.
node4% ifconfig -a
e0c: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=8009b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,LINKSTATE>
ether 00:50:56:01:21:1f
inet 192.168.6.34 netmask 0xffffff00 broadcast 192.168.6.255 NODEMGMTLIF Vserver ID: -1
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
e0d: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=8009b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,LINKSTATE>
ether 00:50:56:01:21:20
inet 192.168.6.118 netmask 0xffffff00 broadcast 192.168.6.255 DATALIF Vserver ID: 5
inet 192.168.6.132 netmask 0xffffff00 broadcast 192.168.6.255 DATALIF Vserver ID: 7
inet 192.168.6.136 netmask 0xffffff00 broadcast 192.168.6.255 DATALIF Vserver ID: 8
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
e0e: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=8009b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,LINKSTATE>
ether 00:50:56:01:21:21
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
e0f: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=8009b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,LINKSTATE>
ether 00:50:56:01:21:22
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
ipfw0: flags=8801<UP,SIMPLEX,MULTICAST> metric 0 mtu 65536
lo0: flags=80c9<UP,LOOPBACK,RUNNING,NOARP,MULTICAST> metric 0 mtu 8232
options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
inet 127.0.10.1 netmask 0xff000000 LOOPBACKLIF Vserver ID: -1
inet 127.0.20.1 netmask 0xff000000 LOOPBACKLIF Vserver ID: -1
inet 127.0.0.1 netmask 0xff000000 LOOPBACKLIF Vserver ID: -1
crtr0: flags=1<UP> metric 0 mtu 65536
The entry "inet 192.168.6.128 netmask 0xffffff00 broadcast 192.168.6.255 DATALIF Vserver ID: 6" is missing from e0d.
Task 3: Identify and Resolve Failures That Occur When You Cannot Create New LIFs
Your instructor prepares your lab environment for this exercise and notifies you when it is ready.
Run script Mod2_Task5_stop_vifmgr.pl to break the lab.
Manual Break:
cluster1::*> systemshell -node node2 -command "sudo spmctl -s -h vifmgr"
Manual Fix:
cluster1::*> systemshell -node node2 -command "sudo spmctl -e -h vifmgr"
Scenario: A customer has called to report that the command to create LIFs fails.
1) node2's net int show command will error out. All others should succeed.
cluster1::> net int show
(network interface show)
Error: show failed: RPC: Remote system error - Connection refused
3-4. Attempt the same command from another node, and then answer the following questions:
What do you see?
Is there any warning or error?
What might be wrong?
3-5. Verify your hypothesis on the systemshell using rdb_dump and using ps to check the running
processes, and check the logs from the clustershell.
3-6. You might need to include vifmgr and mgwd by using the following command:
cluster1::*> debug log files modify -incl-files vifmgr,mgwd,messages
cluster1::*> debug log show -node node2 -timestamp "Mon Oct 10*"
The logs might be verbose, so you might need to use debug log show and parse a
timestamp.
3-8.
3-9. Correct the problem using information that you learned in this module.
The students should log in to the systemshell on node2 and run the ps command to see whether vifmgr is running:
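A minimal check, assuming the node2 systemshell and the break described above:

node2% ps -A | grep vifmgr
No output means that vifmgr is not running; re-enable monitoring with the Manual Fix command (sudo spmctl -e -h vifmgr) so that spm restarts it.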
3-10. Log in to the cluster management interface, and again try to create the data LIF.
There should be no error messages, and the LIF should be created successfully.
cluster1::*> net int create -vserver nassvm1 -lif task6 -role data -data-protocol nfs,cifs,fcache -home-node node2 -home-port e0d -address 192.168.81.150 -netmask 255.255.255.0
End of Exercise
Objectives
This exercise focuses on enabling you to troubleshoot using the diag secd commands.
A customer calls in with a problem on its 4-node cluster. The customer states that the SVM vs3 is not serving data. The customer indicates that it is connecting to LIF_3, which is on node-3. The customer is trying to access the volume vol_cifs_homes using CIFS. The customer thinks that the volume, which has NTFS security style, is on node-3.
1-2. Answer the following questions:
Is this scenario an example of local or remote I/O?
Local I/O
Which node is doing the actual protocol work?
Node-3
Which node is doing the NetApp WAFL and storage work?
Node-3
Is multiprotocol processing involved?
No
A customer calls in with a problem on its 8-node cluster. The customer states that the SVM vs1 is not serving data. The customer indicates that it is connecting to LIF_5, which is on node-4. The customer is trying to access the volume vol_nfs_homes using NFS. The customer thinks that the volume, which has UNIX security style, is on an aggregate on node-3.
1-4. Answer the following questions:
Is this scenario an example of local or remote I/O?
remote I/O
Which node is doing the actual protocol work?
Node-4
Which node is doing the WAFL and storage work?
Node-3
Is multiprotocol processing involved?
No
2-3. Check for specific protocol connections by running the following commands:
cluster1::> network connections active show -service nfs
2-4. NOTE: The -service iscsi argument always returns empty results because the iSCSI service is not tracked here.
2-10. Display the properties of the selected CID by running the following command:
cluster1::*> network connections active show -cid <CID #> -instance
cluster1::*> network connections active show -cid 1656328807 -instance
Node: node1
Connection ID: 1656328807
Vserver: Cluster
Logical Interface Name: node1_clus2
Local IP address: 169.254.150.160
Local Port: 5007
Remote IP Address: 169.254.31.178
Remote Host: 169.254.31.178
Remote Port: 7700
Protocol: TCP
Logical Interface ID: 1023
Protocol Service: ctlopcp
Least Recently Used: no
Connection Blocks Load Balance Migrate: false
Context Id: 3
3-3. Identify the UNIX user that the Windows user student1 maps to, and use diag secd to
find this mapping.
cluster1::*> diag secd name-mapping show -node node4 -vserver nassvm1 -direction win-unix -name student1
'student1' maps to 'pcuser'
3-4. Explain how you query for a Windows security identifier (SID) of student1 using diag
secd.
cluster1::*> diag secd authentication translate -node node4 -vserver nassvm1 -win-name student1
S-1-5-21-2002460515-4267185084-3612797530-1104
3-5. Explain how you can test a CIFS login for the student1 user in diag secd.
cluster1::*> diag secd authentication login-cifs -node node4 -vserver nassvm1 student1
If this does not work, clear all caches and reset server discovery as shown in Step 6 and Step 7. Also set the NTP server as follows:
::> cluster time-service ntp server create -server 192.168.6.10
You are attempting to restart a process in charge of security services. Do not restart this
process unless the system has generated a "secd.config.updateFail" event or you have
been instructed to restart this process by support personnel.
This command can take up to 2 minutes to complete.
Are you sure you want to proceed? {y|n}: y
Restart successful! Security services are operating correctly.
3-7. List the equivalents of the Data ONTAP 7G operating system's cifs resetdc and cifs testdc.
cluster1::*> diag secd server-discovery reset -node node4 -vserver nassvm1
Discovery Reset succeeded for Vserver:
3-8. Explain how you show and set the current logging level in secd.
cluster1::*> diag secd log show -node local
Log Options
----------------------------------
Log level: Debug
Function enter/exit logging: OFF
3-9. Explain how you enable tracing in secd to capture the logging level that is specified.
cluster1::*> diag secd trace set -node local
3-10. Explain how you check the secd configuration for comparison with what is in the RDB.
cluster1::*> diag secd configuration query -node node4 -source-name secd-cache-config
3-11. Explain how you can view and clear active CIFS connections in secd.
cluster1::*> diag secd connections show -node node4 -vserver nassvm2
No cached connections found matching the query provided.
Debrief: Ask the students why the node name is a required parameter for every command.
End of Exercise
Objectives
This exercise focuses on enabling you to do the following:
Resolve frequently seen mount issues
Resolve access issues
Manual Fix:
cluster1::> nfs server modify -vserver nassvm1 -udp enabled -tcp enabled -v3 enabled -v4.0 enabled
1-2. Issue the following commands, and then answer the question.
[root@cats-cent ~]# mkdir /nassvm1
[root@catsp-cent ~]# mount -o nfsvers=3 192.168.6.115:/nassvm1_nfs /nassvm1
Does the command succeed?
mount.nfs: requested NFS version or transport protocol is not supported
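To confirm which versions and transports the NFS server currently allows, a quick check from the clustershell (a sketch; these are the same fields that the Manual Fix modifies):

cluster1::> nfs server show -vserver nassvm1 -fields udp,tcp,v3,v4.0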
1-4. From that node, capture a packet trace while repeating the previous mount command, and then answer the following questions:
Are you able to troubleshoot the issue using the packet trace?
What is the issue?
(A sample packet trace is on the desktop of the Access host. The Wireshark program is pinned to the taskbar of the Access host.)
2) Modify the SVM root volume's permissions to 700, change the volume owner to cmodeuser, and apply policy1.
cluster1::*> vol modify -vserver nassvm1 -volume nassvm1_root -security-style unix -unix-permissions 700 -user cmodeuser -policy policy1
Manual Fix:
1) Modify the root volume's attributes.
cluster1::> vol modify -vserver nassvm1 -volume nassvm1_root -security-style unix -unix-permissions 755 -user root -group root -policy default
2-2. Explain why the customer is denied access, and then fix the problem.
The following are the issues here:
1. The volume is not mounted to the SVM namespace.
2. The export-policy rules are not set properly.
3. The permissions on the volumes are incorrect.
2-3. If you can mount now, cd into the mount point, and then answer the following questions:
Can you cd into the mount point?
If nonoperational, how do you resolve the issue?
If you unmount and remount, does it still work?
[root@catsp-cent ~]# cd /nassvm1
[root@catsp-cent cmode]#
If the student has not changed anything other than the protocol and client match on the export policy, they should get permission denied. To resolve, change the anon user to the owner, modify the volume to allow access for all users, or change the user on the volume.
If you unmount and remount, it still works.
[root@catsp-cent cmode]# cd ..
[root@catsp-cent /]# umount /nassvm1
[root@catsp-cent /]# mount -o nfsvers=3 192.168.6.115:/nassvm1_nfs /nassvm1
[root@catsp-cent /]#
2-4. Try to write a file into the /nassvm1 directory, and then answer this question:
Are you able to write the file?
2-5. After the write succeeds, view the permissions using ls -la, and then answer the following questions:
What are the file permissions on the file that you wrote?
Why are the permissions and owner set the way that they are?
Permissions will be 644.
The owner will depend on the anon and superuser settings:
Superuser any / anon 0: file will be root:root
Superuser any / anon = any value: file will be root:root
Superuser none / anon 0: file will be root:bin
Superuser none / anon 65534: permission denied to write
Superuser none / anon 65535: cd /nassvm1 gives permission denied
[root@catsp-cent nassvm1]# ll
total 0
-rw-r--r-- 1 root root 0 Jun 2 17:22 f1
-rw-r--r-- 1 root root 0 Jun 2 17:22 f2
-rw-r--r-- 1 root bin 0 Jun 2 17:23 f3
2-7. Open a new Secure Shell (SSH) session to your Linux computer, log in as the user "cmodeuser" with the password "passwd," and then answer these questions:
Can you cd to the mount directory?
If successful, can you write files to the mount?
If you notice an issue, what is the reason?
How do you resolve this issue?
You may not be able to cd into the mount because the permissions are 700 (unless the volume permissions have been changed); only the owner/creator can cd into, list, or write files to this mount.
cluster1::> vol modify -vserver nassvm1 -volume nassvm1_nfs -unix-permissions 755
Change the permissions of the volume from the cluster side to levels that allow non-creators/owners to write, or change the owner to the ID of cmodeuser on the client box.
End of Exercise
Objectives
This exercise focuses on enabling you to do the following:
Identify LIFs that are involved in CIFS access
Troubleshoot using the diag secd commands
Troubleshoot domain controller login issues
Troubleshoot SMB user-authentication issues
Troubleshoot the export policy issues
Step Action
1-1. Try to access the share vol1 by using SMB and by mapping a network drive to the path
\\nassvm1\vol1 from the Windows host.
(You can set the password to P@ssw0rd. You must do this from the ONTAP side using the above command; it cannot be done from the client side when it prompts you to change the password.)
If you get an error while connecting to the share by hostname (not IP address) from the Windows host, check the event logs. You are most likely to see an error message similar to "secd.kerberos.tktnyv: Kerberos client ticket not yet valid (-1765328351) for vserver (nassvm1)", and the client reports "A device attached is not functioning". If this happens, check and correct the time on both the Windows client and the cluster.
https://technet.microsoft.com/en-us/library/cc780011(v=ws.10).aspx
C:\Users\Administrator>netstat -an
Active Connections
cluster1::>
node1
node2
Cluster cluster1-02_clus1 72
Cluster cluster1-02_clus2 72
node3
nassvm1 nassvm1_data3 1
node4
1-6. Explain whether the most efficient network path to the volume is being used.
No.
The client is connected via the nassvm1_data3 LIF hosted on e0d on node3.
The CIFS share lives on the volume nassvm1_cifs, in aggregate nassvm1 on node1.
2-3. Instead of using the host name, use the LIF IP to access the CIFS share, and answer this question:
Can you access the share?
Yes.
2-4. Analyze the issues, and use related commands to troubleshoot and fix the issues.
Hint: Use the command cifs session show -instance when you map using the vserver name and when you map using the IP address, and check the protocol that is being used for authentication.
When you use the vserver name for the mapping:
cluster1::*> cifs session show -instance
Vserver: nassvm1
Node: node4
Session ID: 3439342740427505666
Connection ID: 3950412208
Incoming Data LIF IP Address: 192.168.6.118
Workstation IP Address: 192.168.6.11
Authentication Mechanism: Kerberos
User Authenticated as: domain-user
Windows User: CATSP\student1
When you use the data LIF to map:
cluster1::*> cifs session show -instance
Vserver: nassvm1
Node: node2
Session ID: 15892640135037059074
Connection ID: 684290916
Incoming Data LIF IP Address: 192.168.6.115
Workstation IP Address: 192.168.6.11
Authentication Mechanism: NTLMv2
User Authenticated as: domain-user
Windows User: CATSP\student1
UNIX User: pcuser
The root cause of this issue is that when Kerberos is used, the time skew between the DC and the client cannot be more than 5 minutes.
The cause of this issue can be found in one of the following three ways:
- Run the diag secd command to test login. Authentication fails with a Kerberos error when the time lag is more than 5 minutes.
- The /mroot/etc/secd log shows the Kerberos error with the more-than-5-minute time lag.
- If you collect a packet trace on the corresponding node, a packet shows the same error.
cluster1::*> diag secd authentication login-cifs -node node3 -vserver nassvm1 -user catsp\student1
Enter the password:
Vserver: nassvm1 (internal ID: 5)
Error: User authentication procedure failed
[ 0 ms] Login attempt by domain user 'catsp\student1' using
NTLMv2 style security
[ 1] Successfully connected to ip 192.168.6.10, port 445 using
TCP
[ 9] Encountered NT error (NT_STATUS_MORE_PROCESSING_REQUIRED)
for SMB command SessionSetup
[ 11] Cluster and Domain Controller times differ by more than
the configured clock skew (KRB5KRB_AP_ERR_SKEW)
[ 11] Kerberos authentication failed with result: 7537.
[ 13] Unable to connect to NetLogon service on
catsp-win-1.catsp.csslp.netapp.com (Error:
RESULT_ERROR_SECD_NO_CONNECTIONS_AVAILABLE)
[ 14] No servers available for MS_NETLOGON, vserver: 5, domain:
catsp.csslp.netapp.com.
**[ 14] FAILURE: Unable to make a connection
** (NetLogon:CATSP.CSSLP.NETAPP.COM), result: 6940
[ 14] CIFS authentication failed
Error: command failed: Failed to authenticate user. Reason: "SecD Error: no server
available".
The error also shows up in the secd.log on node3 after enabling diag secd tracing.
4-2. Access Start > Run > \\nassvm1, and then describe the error message that you see.
Windows cannot access \\nassvm1
Error: command failed: Failed to convert Windows SID to a Unix ID. Reason:
"SecD Error: Name mapping does not exist".
cluster1::*> diag secd authentication show-creds -node node4 -vserver nassvm1 -win-name student1
Error: command failed: Failed to get user credentials. Reason: "SecD Error:
Name mapping does not exist".
cluster1::*> diag secd name-mapping show -node local -vserver nassvm1 -direction win-unix -name student1
ATTENTION: Mapping of Data ONTAP "admin" users to UNIX user "root" is enabled, but
the following information does not reflect this mapping.
Error: command failed: Failed to find mapping for the user. Reason: "SecD
Error: Name mapping does not exist".
If you set the default-unix-user option, you would still need to create the default PC user on the UNIX system. This is the same as in 7-Mode, where you need an entry in the /etc/passwd file on the UNIX system.
cluster1::> services unix-user create -vserver nassvm1 -user pcuser -id 65534 -primary-gid 65534 -full-name pcuser
4-7. If you still cannot access the share through SMB, check whether the user mapping is still a
problem.
No, you are not able to access the share yet.
Do the following:
- Enable debug logging for secd on the node that owns your data LIFs.
- Close the CIFS session on the Windows host, run net use * /delete from cmd to clear cached sessions, and retry the connection.
User mapping succeeds.
Other ways to fix the problem are to change the security style, change the owner, or change the permissions. The best way depends on the needs of the customer.
4-10. List the commands that are available to review security settings, such as permissions and
security style on volumes, shares, and so on.
cluster1::> vol show -instance
cluster1::> vserver security file-directory show
4-11. From the cluster shell, use vserver security file-directory show to view
permissions on the volumes you’re trying to access. Should the user have access to
these volumes?
                                    browsable
                                    changenotify
                                    show-previous-versions
nassvm1    ipc$   /                 browsable    -   -
nassvm1    vol1   /nassvm1_cifs     oplocks      -   Everyone / Full Control
                                    browsable
                                    changenotify
4 entries were displayed.
cluster1::*> vserver security file-directory show -vserver nassvm1 -path /nassvm1_cifs
Vserver: nassvm1
File Path: /nassvm1_cifs
File Inode Number: 64
Security Style: unix
Effective Style: unix
DOS Attributes: 10
DOS Attributes in Text: ----D---
Expanded Dos Attributes: -
UNIX User Id: 3500
UNIX Group Id: 3501
UNIX Mode Bits: 700
UNIX Mode Bits in Text: rwx------
ACLs: -
User student1 (the user that is logged in on the Windows client) has access to the root volume because the permissions on nassvm1_root are 755. But the user does not have access to the nassvm1_cifs volume because its UNIX permissions are set to 700.
4-13. Change the security style of the volume to NTFS, and see whether you can access the volume
now.
Yes
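One way to make the change from the clustershell (a sketch; the volume name is taken from the earlier output):

cluster1::*> volume modify -vserver nassvm1 -volume nassvm1_cifs -security-style ntfs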
4-2. Try to access \\nassvm1\vol1, and describe the error that you see.
Login prompt pop-up appears and access is denied.
Vserver: nassvm1
File Path: /nassvm1_cifs
File Inode Number: 64
Security Style: ntfs
Effective Style: ntfs
DOS Attributes: 10
DOS Attributes in Text: ----D---
Expanded Dos Attributes: -
UNIX User Id: 0
UNIX Group Id: 0
UNIX Mode Bits: 777
UNIX Mode Bits in Text: rwxrwxrwx
ACLs: NTFS Security Descriptor
Control:0x8004
Owner:BUILTIN\Administrators
Group:BUILTIN\Administrators
DACL - ACEs
ALLOW-Everyone-0x1f01ff
ALLOW-Everyone-0x10000000-OI|CI|IO
Modify the export-policy rule of policy1 to allow RW and RO access, and change the protocol to allow CIFS:
cluster1::*> export-policy rule modify -vserver nassvm1 -policyname policy1 -ruleindex 1 -protocol any -clientmatch 0.0.0.0/0 -rorule any -rwrule any
End of Exercise
Objectives
This exercise focuses on enabling you to do the following:
Use standard Linux commands to evaluate a Linux host in a NetApp scalable SAN environment
Use standard Linux commands to identify SAN disks in a NetApp scalable SAN environment
Use standard Linux commands to verify connectivity in a NetApp scalable SAN environment
Use standard Linux log files to evaluate the iSCSI subsystem in a NetApp scalable SAN environment
Troubleshoot a Linux host in a NetApp scalable SAN environment
Troubleshoot a Windows host in a NetApp scalable SAN environment
Restore LUN connectivity
This exercise is designed as a tutorial for students who have little or no Linux experience.
It is important that the students verify that the disks are alive and are not stale.
Step Action
1-1. Log in to the Linux system ots-cent as root, run the following commands to evaluate a Linux
host, and record the results in the space provided.
Determine the IP address of the host: # ifconfig eth0
Verify that the iSCSI initiator is installed: # rpm -qa | grep iscsi
Verify that the host is logged in to the iSCSI array (target): # iscsiadm -m session
The IP addresses and iSCSI Qualified Names (IQNs) that are listed belong to the targets.
tcp: [10] 192.168.6.131:3260,1037 iqn.1992-08.com.netapp:sn.140668517d5511e5ac18005056bf03f8:vs.16
List the IQN and IP addresses of the targets that are shown in the output of the previous
command from ONTAP:
::> net int show -vserver sansvm*
::> iscsi show -instance
[root@catsp-cent ~]# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:50:56:BF:5B:64
inet addr:192.168.6.20 Bcast:192.168.6.255 Mask:255.255.255.0
inet6 addr: fe80::250:56ff:febf:5b64/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:84324 errors:0 dropped:0 overruns:0 frame:0
TX packets:7448 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:7582697 (7.2 MiB) TX bytes:1140153 (1.0 MiB)
Vserver: sansvm1
Target Name: iqn.1992-08.com.netapp:sn.a2aa340f479411e7b0040050560120f9:vs.7
Vserver: sansvm2
Target Name: iqn.1992-08.com.netapp:sn.b106e9bc479411e7b0040050560120f9:vs.8
Target Alias: sansvm2
Administrative Status: up
Max Error Recovery Level: 0
RFC3720 DefaultTime2Retain Value (in sec): 20
Login Phase Duration (in sec): 15
Max Connections per Session: 4
Max Commands per Session: 128
TCP Receive Window Size (in bytes): 131400
2 entries were displayed.
1-4. Use the output of the service iscsi status command that is displayed in Step 3 to
answer the following questions:
List the Iface initiatorname: ____________________________________
List the iSCSI connection state: ________________________________
List the disks that are attached to SCSI12 Channel 00: ______________________
List the state of each disk: ____________________________________
List the current portal: ________________________________________
1-6. The state of the active internet connection between the host (local address) and target (foreign
address) is ESTABLISHED.
1-7. The Linux host records events about the iSCSI subsystem in the system messages file,
/var/log/messages.
This script adds a rule to the Linux iptables firewall to block all outgoing packets to TCP port 3260 on SANSVM1. Note that packets to SANSVM2 are allowed. Students may delete this rule or disable the firewall completely to resolve this issue.
Manual Fix:
Clear (flush) the Linux firewall rules: iptables -F
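If you prefer not to flush every rule, a narrower fix is to delete only the blocking rule (a sketch; the rule number is a placeholder taken from your own listing):

[root@catsp-cent ~]# iptables -L OUTPUT --line-numbers    # find the rule that drops TCP port 3260
[root@catsp-cent ~]# iptables -D OUTPUT 1                 # delete it by its number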
2-4. Type the following command to verify connectivity between the host and target:
[root@cats-cent ~]# netstat -pant | grep iscsi
Answer the following questions:
Do you see 4 connections in ESTABLISHED state?
If not, what could be the issue?
Fix the issue.
The bold connections in this example are in a SYN_SENT state. After some time passes, the connections are no longer displayed in the output.
Compare these netstat outputs to the netstat output of a fully functional Linux host in Step 5 of
Exercise 1.
The root cause is the local firewall rules of the host. This can be seen by observing the iptables rules (the native Linux firewall configuration).
Observe the bold entries: any packets to iscsi-target (port 3260) are dropped by the local firewall.
[root@catsp-cent ~]# iptables -L
Scenario: A customer reports that there are no visible SAN disks attached to the Windows host. Evaluate the
NetApp scalable SAN environment. Restore LUN connectivity.
Step Action
3-1. Log in to the Windows host, and check firewall configuration.
3-2. If the firewall is enabled, disable it to see whether the LUN connectivity can be restored.
3-3. Log in to the NetApp cluster as an administrator.
3-4. Verify the configuration of the NetApp cluster.
3-5. Log in to the Windows host as an administrator.
3-6. Verify the configuration of the Windows host.
3-7. Verify that the Windows IQN name is used in the SAN configurations of the cluster.
3-8. Restore the SAN disks.
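A minimal set of cluster-side checks for Steps 3-3 through 3-8 (a sketch; the vserver names are the ones used in this lab):

cluster1::> network interface show -vserver sansvm*
cluster1::> iscsi show
cluster1::> igroup show -vserver sansvm1
cluster1::> lun show -vserver sansvm1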
Task 4: The LUNs Are Not Visible Through All LIFs of an SVM
Lab Scenario: In your lab, MPIO is not set up on the Windows host.
Step Action
4-1. Log in to the Windows host. Click Start > Administrative Tools, and open iSCSI Initiator.
4-2. Disconnect from the vserver sansvm2 if it is connected. (This is the connection to the target whose IQN ends with vs.8.)
4-3. Select the target vserver sansvm1 (IQN ends with vs.7).
4-4. Click on Properties.
4-5. Note the target portal group of the session you see in the Properties window. Find the
corresponding LIF by using the following cluster shell command:
cluster1::> iscsi portal show
           Logical         Status                     Curr        Curr
Vserver    Interface  TPGT Admin/Oper IP Address      Node        Port Enabled
---------- ---------- ---- ---------- --------------- ----------- ---- -------
sansvm1 sansvm1_data1
1034 up/up 192.168.6.131 node3 e0d true
sansvm1 sansvm1_data2
1035 up/up 192.168.6.132 node4 e0d true
sansvm2 sansvm2_data1
1036 up/up 192.168.6.135 node3 e0d true
sansvm2 sansvm2_data2
1037 up/up 192.168.6.136 node4 e0d true
For sansvm1, the LUN is reported only through node3, so the LUN is visible only through the LIF that is local on node3.
For sansvm2, the LUN is reported only through node4, so the LUN is visible only through the LIF that is local on node4.
Check where the LIFs for vserver sansvm1 exist: on node3 and node4. Delete the LIF sansvm1_data2 from node4 and re-create it on node3. Because both LIFs are then on the node where the LUN is local, the LUN is visible as a disk through both LIFs.
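A sketch of that LIF move (the address, netmask, and ports are the ones shown in the iscsi portal output above; the LIF must be administratively down before it can be deleted):

cluster1::> network interface modify -vserver sansvm1 -lif sansvm1_data2 -status-admin down
cluster1::> network interface delete -vserver sansvm1 -lif sansvm1_data2
cluster1::> network interface create -vserver sansvm1 -lif sansvm1_data2 -role data -data-protocol iscsi -home-node node3 -home-port e0d -address 192.168.6.132 -netmask 255.255.255.0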
End of Exercise
Objectives
TO DO in the scripts:
Convert the initial set of commands in SuperLabSetup1 to use an SSH login and execute the commands in the clustershell instead of using the NMSDK API.
In SuperLabSetup2, the domain name in the CIFS server creation command will have to be changed.
The command that accepts the username and password to log in to the domain controller while creating a machine account needs to be fixed.
- Recover a broken environment using the skills you have learned in this course.
Scenario:
A customer calls to report that he cannot write to some mounts and shares.
cluster1::> vserver create -vserver happy -rootvolume happy_root -aggregate aggrsuper -ns-switch file -nm-switch file -rootvolume-security-style unix
cluster1::> vserver create -vserver grumpy -rootvolume grumpy_root -aggregate aggrsuper -ns-switch file -nm-switch file -rootvolume-security-style unix
cluster1::> vserver services dns create -vserver happy -domains cats.csslp.netapp.com -name-servers 192.168.6.10 -state enabled
cluster1::> vserver services dns create -vserver grumpy -domains cats.csslp.netapp.com -name-servers 192.168.6.10 -state enabled
cluster1::> network interface create -vserver happy -lif happy_data1 -role data -data-protocol nfs,cifs,fcache -home-node node4 -home-port e0d -address 192.168.6.160 -netmask 255.255.255.0
cluster1::> network interface create -vserver grumpy -lif grumpy_data1 -role data -data-protocol nfs,cifs,fcache -home-node node4 -home-port e0d -address 192.168.6.161 -netmask 255.255.255.0
cluster1::> export-policy rule create -vserver happy -policyname happy_policy -clientmatch 0.0.0.0/0 -rorule any -rwrule none -ruleindex 1 -protocol any -anon 65534 -superuser any
cluster1::> export-policy rule create -vserver grumpy -policyname grumpy_policy -clientmatch 0.0.0.0/0 -rorule any -rwrule none -ruleindex 1 -protocol any -anon 65534 -superuser any
cluster1::> export-policy rule create -vserver grumpy -policyname grumpy -clientmatch 0.0.0.0/0 -rorule any -rwrule none -ruleindex 1 -protocol any -anon 65534 -superuser any
cluster1::> export-policy rule create -vserver grumpy -policyname default -clientmatch 0.0.0.0/0 -rorule any -rwrule any -ruleindex 1 -protocol any -anon 65534 -superuser any
cluster1::> export-policy rule create -vserver happy -policyname default -clientmatch 0.0.0.0/0 -rorule any -rwrule any -ruleindex 1 -protocol any -anon 65534 -superuser any
cluster1::> volume create -vserver grumpy -volume grumpy_cifs -aggregate aggrsuper -size 200M -state online -type RW -unix-permissions 777 -junction-path /grumpy_cifs -policy default
cluster1::> volume create -vserver grumpy -volume grumpy_nfs -aggregate aggrsuper -size 200M -state online -type RW -unix-permissions 777 -junction-path /grumpy_nfs -policy default
cluster1::> volume create -vserver happy -volume happy_cifs -aggregate aggrsuper -size 200M -state online -type RW -unix-permissions 777 -junction-path /happy_cifs -policy default
cluster1::> volume create -vserver happy -volume happy_nfs -aggregate aggrsuper -size 200M -state online -type RW -unix-permissions 777 -junction-path /happy_nfs -policy default
cluster1::> nfs server create -access true -v3 enabled -vserver happy
cluster1::> cifs share create -vserver happy -share-name happy -path /happy_cifs -share-properties oplocks,browsable,changenotify
cluster1::> cifs share create -vserver grumpy -share-name grumpy -path /grumpy_cifs -share-properties oplocks,browsable,changenotify
cluster1::> network interface create -vserver happy -lif happy_iscsi -role data -data-protocol iscsi -home-node node4 -home-port e0d -address 192.168.6.164 -netmask 255.255.255.0
cluster1::> network interface create -vserver grumpy -lif grumpy_iscsi -role data -data-protocol iscsi -home-node node4 -home-port e0d -address 192.168.6.165 -netmask 255.255.255.0
cluster1::> vol create -vserver grumpy -volume grumpy_iscsi -aggregate aggrsuper -size 600m -state online -type RW -policy default -unix-permissions ---rwxr-xr-x
cluster1::> vol create -vserver happy -volume happy_iscsi -aggregate aggrsuper -size 600m -state online -type RW -policy default -unix-permissions ---rwxr-xr-x
cluster1::> lun create -vserver grumpy -path /vol/grumpy_iscsi/grumpy_win_lun -size 200m -ostype windows -space-reserve enabled
cluster1::> lun create -vserver happy -path /vol/happy_iscsi/happy_win_lun -size 200m -ostype windows -space-reserve enabled
cluster1::> lun create -vserver grumpy -path /vol/grumpy_iscsi/grumpy_lun -size 200m -ostype linux -space-reserve enabled
cluster1::> lun create -vserver happy -path /vol/happy_iscsi/happy_lun -size 200m -ostype linux -space-reserve enabled
TO BE MANUALLY DONE??
happy 192.168.6.160
grumpy 192.168.6.161
happy_iscsi 192.168.6.164
grumpy_iscsi 192.168.6.165
- iSCSI configuration
[root@cats-cent ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:cats-cent
[root@capt-cent grumpy]# iscsiadm --mode discovery --op update --type sendtargets --portal 192.168.6.164
Starting iscsid: [ OK ]
192.168.6.164:3260,1044 iqn.1992-08.com.netapp:sn.952455a27c9711e5ab27005056bf11fa:vs.16
[root@capt-cent grumpy]# iscsiadm --mode discovery --op update --type sendtargets --portal 192.168.6.165
192.168.6.165:3260,1045 iqn.1992-08.com.netapp:sn.b2bc17da7c9711e5ab27005056bf11fa:vs.17
cluster1::> igroup create -vserver grumpy -igroup linux_group -protocol iscsi -ostype linux -initiator iqn.1994-05.com.redhat:cats-cent
cluster1::> igroup create -vserver happy -igroup linux_group -protocol iscsi -ostype linux -initiator iqn.1994-05.com.redhat:cats-cent
cluster1::> igroup create -vserver happy -igroup win_group -protocol iscsi -ostype windows -initiator iqn.1991-05.com.microsoft:cats-win-2.cats.csslp.netapp.com
cluster1::> igroup create -vserver grumpy -igroup win_group -protocol iscsi -ostype windows -initiator iqn.1991-05.com.microsoft:cats-win-2.cats.csslp.netapp.com
cluster1::> lun map -vserver happy -path /vol/happy_iscsi/happy_win_lun -igroup win_group
cluster1::> lun map -vserver grumpy -path /vol/grumpy_iscsi/grumpy_win_lun -igroup win_group
C:\Windows> compmgmt.msc
Disk Management > Rescan Disks > Online Disks > Initialize Disks > Create Volumes on the disk
Disks F and G are created.
- Change the grumpy export policy to deny CIFS and NFS access and disallow superuser access.
cluster1::> network interface create -vserver nassvm1 -lif grumpy_data1 -role data -data-protocol nfs,cifs,fcache -home-node node4 -home-port e0d -address 192.168.6.161 -netmask 255.255.255.0
- Disable the account, remove it from Administrators, and make sure that it belongs only to the Domain Users group.
EnableSMB1Protocol : False
Step Action
1. At the end of this lab, you should have the following mounted on your Linux host.
2. On the Windows host, you should be able to map \grumpy_cifs and \happy_cifs.
3. You should be able to write to all the mounts and the mapped drives you recovered above.
Node: node1
Vserver: nassvm2
3. Cannot run the scripts to break labs. Able to ping the cluster management IP from the RDP machine, but the script is unable to run because it says that the target denied access.
Warning: Are you sure you want to reboot node "node2"? {y|n}: y
Error: command failed: Could not migrate LIFs away from node: Failed to migrate
one or more LIFs away from node "node2". Use the "network interface show
-curr-node node2" command to review the status of any remaining LIFs on
that node.