VNX


VNX/Clariion Block Architecture

VNX/Clariion Architecture
Note:
EMC introduced the VNX Series, designed to replace both the Clariion and Celerra
products.
Architecture

Clariion and VNX block-level storage share the same architecture, and the VNX
Unified Storage architecture includes both the SAN (block) and NAS (file) parts.
The main differences between models are in specifications such as capacity, ports,
drives, and connectivity.
Clariion/VNX is a mid-range storage array with an active/passive architecture.
It is built from the following modules:
SPE – Storage Processor Enclosure
DPE – Disk Processor Enclosure
DAE – Disk Array Enclosure
SPS – Stand-by Power Supply
Each SPE contains two storage processors, named SP A and SP B, which are connected
by the Clariion Messaging Interface (CMI).
Each SP has front-end ports, back-end ports, and cache memory.
Front-end ports serve host I/O requests; back-end ports communicate with the disks.
Cache is of two types: write cache, which is mirrored between the SPs, and read
cache, which is not mirrored.
The first DAE that is connected is known as the DAE OS.
Its first five drives are known as vault drives or code drives. They are used to
save critical data in case of a power failure and also hold data such as the SP A
and SP B boot information, which is mirrored.
All the drives are connected through the Link Control Cards (LCCs).
The FLARE operating environment is triple mirrored, and the Persistent Storage
Manager (PSM) is also triple mirrored.
Each DAE has primary and expansion ports, which are used to connect additional
DAEs.
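
As a quick sanity check, the hardware components described above (SPs, LCCs, fans, and SPS units) can be listed from the Navisphere/Unisphere CLI. This is only a sketch: the SP IP address is a placeholder, and it assumes a Navisphere security file is configured for authentication (otherwise add the -User, -Password, and -Scope switches).

naviseccli -h 10.XX.XX.X.XX getagent        (basic array and SP information such as model and revision)
naviseccli -h 10.XX.XX.X.XX getcrus         (state of the customer-replaceable units: SPs, LCCs, fans, SPS, etc.)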

VNX is offered in three main types of storage:


Block-level storage
File-level storage
Unified storage array

VNX 1 Series
The VNX 1 series includes five models that are available in block, file, and unified
configurations:

VNX 5100, VNX 5300, VNX 5500, VNX 5700 and VNX 7500.
The block configuration for the VNX 5500 & VNX 7500 is shown below:

Block configuration for VNX 5500 & VNX 7500


The file configuration for the VNX 5300 & VNX 5700 is shown below:

File configuration for VNX 5300 & VNX 5700

The unified configuration for the VNX 5300 & VNX 5700 is shown below:

Unified configuration for VNX 5300 & VNX 5700


The VNX series uses updated components that make it significantly denser than
earlier models.

Example of a Block dense configuration

Example of a Unified dense configuration


A 25-drive 2.5” SAS DAE

Front view of the 25-drive SAS DAE

Back view of the 25-drive SAS DAE


A close-up of the back view with the port names

A close up of the back view


A 15-drive DAE

Front view of a 15-drive DAE

Back view of a 15-drive DAE

A close-up of the back view with the port names

Close-up view of a 15-drive DAE

A picture of Link Control Card (LCC) connectivity

Link Control Cards


A picture of a cooling module

Cooling module

VNX 2 Series
The VNX 2 series includes six models that are available in block, file, and unified
configurations:
VNX 5200, VNX 5400, VNX 5600 , VNX 5800, VNX 7600 and VNX 8000.
There are two existing Gateway models:
VNX VG2 and VNX VG8
There are two VMAX® Gateway models:
VNX VG10 and VNX VG50
A model comparison chart for VNX 2 series.

A model comparison chart


The block configuration for the VNX 5600 & VNX 8000 is shown below:
Block configuration for VNX 5600 & VNX 8000

The file configuration for the VNX 5600 & VNX 8000 is shown below:

File configuration VNX 5600 & VNX 8000

The unified configuration for the VNX 5600 & VNX 8000 is shown below:

Unified configuration VNX 5600 & VNX 8000


As noted earlier, the VNX 2 series uses updated components that make it
significantly denser than earlier models.

Example of Block dense configuration

Example of Unified dense configuration

The below picture shows the back view of the DPE with SP A (on the right) and SP B
(on the left).

Back view of DPE


Picture shows a close-up of the back of the DPE-based storage processor.

Close up view of the back view of the DPE based Storage Processor

Power, fault, activity, link, and status LEDs

A picture of the storage processor management and base module ports

Storage processor management and base module ports

Registering Host Initiators in the VNX Unisphere


Now we will see how to register host initiators in the VNX; before that, let's
discuss why this is important.

Once the zoning task is completed for a newly deployed server in the data center,
we have to log in to VNX Unisphere and check whether the host initiators are logged
in. If the Unisphere Host Agent is already installed on the server, the initiators
are registered automatically in VNX Unisphere; if not, we have to register the
server initiators manually.

Procedure:
Log in to VNX Unisphere with authorized login credentials.

Select the Initiators tab under the Host tab on the menu bar.
Initiator Tab in Unisphere

Click the Create option located at the bottom left-hand side.

Initiator logged in

A popup will open; enter the details such as the host WWN, SP port, initiator type,
and failover mode, and then click OK.

Registering the Initiators

Repeat the same procedure for every host WWN that we used while performing the
zoning.

Once all the WWNs are registered, check the Registered and Logged In columns in the
main Initiators tab. If these two columns show "Yes-Yes", the host is able to see
the storage; in other words, the zoning we did earlier is correct. If they show
"Yes-No", the host is not able to see the storage; the zoning has to be checked and
other troubleshooting procedures have to be carried out.

Register and Logged in Tabs

This is the procedure to register the host initiators in the VNX Unisphere.
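
For reference, an initiator record can also be created from the Navisphere/Unisphere CLI. The sketch below is assumption-heavy: the SP IP, host name, host IP, and WWNN:WWPN pair are placeholders, and the exact switches should be verified against your FLARE/VNX OE release (naviseccli storagegroup -setpath -help).

naviseccli -h 10.XX.XX.X.XX storagegroup -setpath -o -hbauid 20:00:00:24:ff:0d:fc:fb:21:00:00:24:ff:0d:fc:fb -sp a -spport 0 -failovermode 4 -arraycommpath 1 -host server01 -ip 10.XX.XX.X.YY
naviseccli -h 10.XX.XX.X.XX port -list -hba          (verify the registered initiators and their logged-in state)

Repeat the -setpath command for each SP port that the initiator is zoned to.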

LUN Provisioning in VNX

When do we create a LUN?

Whenever there is a space requirement from the platform team or from the client, we
create a LUN.

Provision a LUN

The steps to create a LUN are as follows:

Log in to Unisphere with the array's IP address and authorized user credentials.
For example: 10.XX.XX.X.XX
The VNX Unisphere dashboard will look like this:

Dashboard page

Go to Storage Tab and select the LUN option.

Slide 1

Click the Create option; a popup window will open.

Slide 2

Fill in all the fields with the required information, click Apply, and then click OK.
Note: To create a thin LUN we have to select (tick) the Thin option; to create a
thick LUN we have to leave the Thin option unchecked.

Slide 3

Select the newly created LUN and then select the Add to Storage Group option at the
bottom right of the window.

Slide 4

Select the specific storage group (host) listed in the Available Storage Groups
column and click the right arrow; the selected storage group moves to the Selected
Storage Groups column. Then click OK.

Slide 5

Now we have to pass this information to the platform team so they can check the
visibility of the LUN.
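
The same LUN can also be created from the CLI. This is only a sketch under assumptions: the pool name "Pool 0", the LUN name, and the 100 GB size are placeholders, and the exact switches may vary slightly between VNX OE releases (check naviseccli lun -create -help).

naviseccli -h 10.XX.XX.X.XX lun -create -type Thin -capacity 100 -sq gb -poolName "Pool 0" -name "SERVER01_DATA_01" -sp a
naviseccli -h 10.XX.XX.X.XX lun -list -name "SERVER01_DATA_01"        (verify the new LUN and note its LUN ID)

Use -type nonThin instead of -type Thin to create a thick pool LUN.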

VNX LUN Trespassing


Brief description:
LUNs on a storage system are allocated to servers. A storage admin creates a LUN on
a RAID group or storage pool and assigns it to a server. The platform team
discovers this LUN, formats it, mounts it or assigns a drive letter, and starts to
use it. One important aspect is LUN ownership: which storage processor will process
the I/O for that specific LUN?
A newly created LUN is accessed through its default SP owner. We can change the
ownership from one SP to the other; this process is known as trespassing.
Failover
A procedure by which a system automatically transfers control to a duplicate system
when it detects a fault or failure.
The failover modes are as follows (modes 0 through 3 are described here; ALUA,
failover mode 4, is described later):
Failover Mode 0 – LUN Based Trespass Mode
This failover mode is the default and works in conjunction with the Auto-trespass
feature. Auto-trespass is a mode of operation that is set on a LUN-by-LUN basis.
If Auto-Trespass is enabled on the LUN, the non-owning SP will report that the LUN
exists and is available for access. The LUN will trespass to the SP where the I/O
request is sent. Every time the LUN is trespassed a Unit Attention message is
recorded. If Auto-trespass is disabled, the non-owning SP will report that the LUN
exists but it is not available for access.
Failover Mode 1 – Passive Not Ready Mode
In this mode of operation the non-owning SP will report that all non-owned LUNs
exist and are available for access. Any I/O request that is made to the non-owning
SP will be rejected.

Failover Mode 2 – DMP Mode


In this mode of operation the non-owning SP will report that all non-owned LUNs
exist and are available for access. This is similar to Failover Mode 0 with Auto-
trespass Enabled. Any I/O requests made to the non-owning SP will cause the LUN to
be trespassed to the SP that is receiving the request.
Failover Mode 3 – Passive Always Ready Mode
In this mode of operation the non-owning SP will report that all non-owned LUNs
exist and are available for access. Any I/O requests sent to the Non-owning SP
will be rejected. This is similar to Failover Mode 1. However, any Test Unit Ready
command sent from the server will return with a success message, even to the non-
owning SP.
How does trespassing work using ALUA (failover mode 4) on a VNX/CLARiiON storage
system?
Resolution:
Since FLARE 26, Asymmetric Active/Active has provided a new way for CLARiiON arrays
to present LUNs to hosts, eliminating the need for hosts to deal with the LUN
ownership model. Prior to FLARE 26, all CLARiiON arrays used the standard
active/passive presentation model, in which one SP "owns" the LUN and all I/O to
that LUN is sent only to that SP. If all paths to that SP fail, the ownership of the
LUN is 'trespassed' to the other SP and the host-based path management software
adjusts the I/O path accordingly.
Asymmetric Active/Active introduces a new initiator Failover Mode (Failover mode 4)
where initiators are permitted to send I/O to a LUN regardless of which SP actually
owns the LUN.
Manual trespass:
When a manual trespass is issued (using Navisphere Manager or CLI) to a LUN that is
accessed by a host with Failover Mode 1, subsequent I/O for that LUN is rejected by
the SP on which the manual trespass was issued. The failover software redirects the
I/O to the SP that now owns the LUN.

A manual trespass operation changes the ownership of a given LUN from one SP to the
other. If this LUN is accessed by an ALUA host (Failover Mode set to 4) and I/O is
sent to the SP that does not currently own the LUN, the I/O is redirected. In such a
situation, the array tracks how many I/Os the LUN processes on each SP and changes
the ownership of the LUN once the difference crosses a threshold of roughly 64,000
I/Os.
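
For illustration, a manual trespass can be issued from the CLI as sketched below. LUN 145 and the SP addresses are placeholders; the trespass command moves ownership of the LUN to the SP whose address the command is sent to.

naviseccli -h <SPB_IP> trespass lun 145        (trespass LUN 145 to SP B)
naviseccli -h <SPA_IP> getlun 145              (display the LUN's properties, including its current and default owner)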
Path, HBA, switch failure:
If a host is configured with Failover Mode 1 and all the paths to the SP that owns
a LUN fail, the LUN is trespassed to the other SP by the host’s failover software.

With Failover Mode 4, in the case of a path, HBA, or switch failure, when I/O
routes to the non-owning SP, the LUN may not trespass immediately (depending on the
failover software on the host). If the LUN is not trespassed to the owning SP,
FLARE will trespass the LUN to the SP that receives the most I/O requests to that
LUN. This is accomplished by the array keeping track of how many I/Os a LUN
processes on each SP. If the non-optimized SP processes 64,000 or more I/Os than
the optimal SP, the array will change the ownership to the non-optimal SP, making
it optimal.
SP failure:
In case of an SP failure for a host configured as Failover Mode 1, the failover
software trespasses the LUN to the surviving SP.
With Failover Mode 4, if an I/O arrives from an ALUA initiator on the surviving SP
(non-optimal), FLARE initiates an internal trespass operation. This operation
changes ownership of the target LUN to the surviving SP since its peer SP is dead.
Hence, the host (failover software) must have access to the secondary SP so that it
can issue an I/O under these circumstances.
Single backend failure:
Before FLARE Release 26, if the failover software was misconfigured (for example, a
single attach configuration), a single back-end failure (for example, an LCC or BCC
failure) would generate an I/O error since the failover software would not be able
to try the alternate path to the other SP with a stable backend.

With release 26 of FLARE, regardless of the Failover Mode for a given host, when
the SP that owns the LUN cannot access that LUN due to a back-end failure, I/O is
redirected through the other SP by the lower redirector. In this situation, the LUN
is trespassed by FLARE to the SP that can access the LUN. After the failure is
corrected, the LUN is trespassed back to the SP that previously owned the LUN. See
the “Enabler for masking back-end failures” section for more information.
Expand LUN in EMC Unisphere | Increasing the size of a LUN
LUN Expansion
Workflow Introduction
If we need to increase the size of an existing LUN assigned to a host, the best
option is LUN Expansion.
With this option we can increase the LUN space without any maintenance window.
Procedure
The platform team will request you to increase the space of an existing LUN.
The workflow follows the pattern below:
• If a request is raised to expand a LUN, we have to check prerequisites such as
the LUN naa ID and LUN name.
• Log in to VNX Unisphere with authorized credentials.
• Go to the Storage tab and select the LUN option.
• Select the specific LUN, click the Properties option, and verify the LUN naa
ID and LUN name.
• Once verified, right-click the specific LUN and select the Expand option.
• Enter the new capacity for the LUN and click OK.
Once we click the OK button, the additional space is added to the existing LUN.
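
The expansion can also be done from the CLI. The following is only a sketch under assumptions: LUN ID 145 and the new 200 GB capacity are placeholders, the LUN is a pool (thin or thick) LUN, and the switches should be confirmed with naviseccli lun -expand -help for your release.

naviseccli -h 10.XX.XX.X.XX lun -expand -l 145 -capacity 200 -sq gb        (grow LUN 145 to 200 GB of user capacity)
naviseccli -h 10.XX.XX.X.XX lun -list -l 145                               (confirm the new user capacity)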

VNX - How to create a Storage Group | What is a Storage Group (SG)


Storage Group
Introduction
A storage group is a collection of hosts and LUNs that are allowed to communicate
with each other, providing access control for the data.

For every host we have to create a storage group, and the provisioned LUNs have to
be added to that storage group to make them visible at the host end.

Whenever we bring a new server into our environment, after completing the zoning
and the creation of LUNs, we have to create a storage group.
Adding LUNs to an existing storage group is, in other words, masking.

Masking:
It means that a particular LUN is visible only to a particular host; in other
words, a LUN is presented only to the host(s) in the storage group it is added to.

The procedure for creating a Storage Group is as follows:

Log in to Unisphere.

Unisphere image

Go to the Host tab and select the Storage Group option.

Host Tab in Unisphere

Name the storage group and click OK.

Storage Group Creation Window
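
The same storage group can be created and connected to a registered host from the CLI. A minimal sketch, assuming the host was registered as server01 and using SERVER01_SG as an example group name; verify the syntax with naviseccli storagegroup -help.

naviseccli -h 10.XX.XX.X.XX storagegroup -create -gname SERVER01_SG                          (create the storage group)
naviseccli -h 10.XX.XX.X.XX storagegroup -connecthost -o -host server01 -gname SERVER01_SG   (attach the registered host)
naviseccli -h 10.XX.XX.X.XX storagegroup -list -gname SERVER01_SG                            (verify the group and its host)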

How to create a Storage Pool in VNX | Create and modify the Storage Pool in VNX /
Clariion
Creation of a Storage Pool in VNX
Introduction
A storage pool is a physical collection of disks on which logical units (LUNs) are
created and presented to the hosts as drives where they can write and read data.
Pools are dedicated for use by pool (thin or thick) LUNs. Whereas a RAID group can
only contain up to 16 disks, a pool can contain hundreds of disks.
First we will look at the difference between a RAID group and a storage pool.

Raid group Vs Storage pool

Drives in a storage pool

The steps to create a storage pool are as follows:


Log in to VNX Unisphere.

VNX Unisphere

Go to Storage Tab at the menu bar and select the “Storage Pool” option.

Storage pool option in unisphere


Click the Create button to create a new storage pool.

Storage pool creation option

A popup window will open; fill in all the fields, such as the name of the storage
pool, the ID, and the type of pool you need to create (Extreme Performance,
Performance, or Capacity), select the automatic disk selection option, and click OK.
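
A pool can also be created from the CLI by listing the disks to include. The example below is only a sketch with assumed values (pool name, disk IDs in Bus_Enclosure_Slot form, and RAID type); the exact switches vary by VNX OE release, so check naviseccli storagepool -create -help before using it.

naviseccli -h 10.XX.XX.X.XX storagepool -create -name "Pool 1" -rtype r_5 -disks 1_0_0 1_0_1 1_0_2 1_0_3 1_0_4
naviseccli -h 10.XX.XX.X.XX storagepool -list -name "Pool 1"        (verify the pool state, tier, and capacities)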

How to gather SP Collects in VNX Unisphere GUI | How to generate SP Collects in VNX
Unisphere
Gather SP Collects in VNX Unisphere
Today I am going to explain the procedure to gather or generate the SP Collects
Logs in VNX Unisphere GUI.

About SP Collects
At many points you may be asked to provide or upload the "SP Collects" for your EMC
array. This may be for an EMC support engineer, who may need to see how your
existing array is configured. Either way, SP Collect data is very useful and
relatively easy to generate.
With the release of Unisphere, the procedure to gather or generate the SP Collects
logs has been made simpler and easier than before.
The main purpose of collecting the SP Collects logs is to analyze the storage array
performance. We can also find errors such as disk soft media errors, hardware
failures, and so on.
The procedure to gather the SP Collects is as follows:
Log in to VNX Unisphere.
Click the System tab on the menu bar.

System tab in VNX Unisphere


On the right side of the main screen you can see the Wizards column.

Go to Diagnostic Files tab.


Select Generate Diagnostic Files – SP A to generate the logs for Storage
Processor A.

Diagnostic Files Column


Also select Generate Diagnostic Files – SP B to generate the logs for Storage
Processor B, as shown above.
Select Get Diagnostic Files – SP A to retrieve the logs.

Diagnostic Files Column


Also select Get Diagnostic Files – SP B to retrieve the logs, as shown above.
A page will open showing the logs being generated; sort the logs by date range, and
your log file will be shown as XXX.runlog.txt.

SP File Transfer Manager


It will take 10-15 minutes to gather the logs for each SP.
Once complete, the log file changes from runlog.txt to data.zip, as shown below.

SP File Transfer Manager


Transfer the file to your desired location and upload the logs to the EMC support
team to analyze the array performance.

This is how we gather or generate the SP Collects logs in the VNX Unisphere GUI.
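
SP Collects can also be triggered and retrieved with the Navisphere/Unisphere CLI. This is a sketch under assumptions: the SP addresses and the destination folder are placeholders, and the spcollect and managefiles options should be confirmed for your FLARE/VNX OE release.

naviseccli -h <SPA_IP> spcollect                                       (start SP Collect generation on SP A)
naviseccli -h <SPA_IP> managefiles -list                               (wait until the new *_data.zip file appears)
naviseccli -h <SPA_IP> managefiles -retrieve -path C:\SPcollects -file <filename>_data.zip

Repeat the same commands against the SP B address to collect the second set of logs.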

VNX - LUN Provisioning for a New host | LUN Provisioning in EMC Clariion
VNX LUN Provisioning for a New Host
Today I am going to explain LUN provisioning for a new host.
Introduction

Whenever a new server is deployed in the data center, the platform team (whether it
is the Windows, Linux, or Solaris team) will contact you for free ports on the
switch (for example, a Cisco switch) to connect the server to the storage.
We log in to the switch with authorized credentials via PuTTY.
Once logged in, we check the free port details by using the command:
Switch # sh interface brief
Note: As a storage admin, we have to know the server HBA details too. Based on that
information we have to identify free ports on the two switches for redundancy.

We update the platform team with the free port details, and the platform team then
contacts the data center folks to lay the physical cabling between the new server
and the switches.
Note: The storage ports are already connected to the switches.
Once the cable connectivity is complete, the platform team will inform us to do the
zoning.

Zoning:

Zoning is the grouping of host HBA WWPNs and storage front-end port WWPNs so that
they can speak to each other.
All the zoning commands should be run in configuration mode.
SwitchName # config t                              (enter configuration mode)
SwitchName(config) # zone name ABC vsan 2
SwitchName(config-zone) # member pwwn 50:06:01:60:08:60:37:fb
SwitchName(config-zone) # member pwwn 21:00:00:24:ff:0d:fc:fb
SwitchName(config-zone) # exit
SwitchName(config) # zoneset name XYZ vsan 2
SwitchName(config-zoneset) # member ABC
SwitchName(config-zoneset) # exit
SwitchName(config) # zoneset activate name XYZ vsan 2
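
To confirm that the new zone is part of the active zoneset after activation, the running configuration can be checked (VSAN 2 is the same example value used above):

SwitchName # show zoneset active vsan 2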
Once the zoning is completed, we have to check the initiator status by logging in to
Unisphere.
The procedure is as follows:
Go to the Host tab and select the Initiators tab.
Search for the host for which you have done the zoning activity.
Verify the host name, IP address, and host HBA WWPN and WWNN numbers.
In the Initiators window, check the Registered and Logged In columns. If "YES"
appears in both columns, then your zoning activity is correct and the host is
connected to the storage box.
If "YES" is in one column and "NO" in the other, then your zoning is not correct.
Check the zoning steps; if they are correct and the issue persists, then check the
host WWPN and WWNN and the cable connectivity.
Now we have to create a new storage group for the new host.
The procedure is as follows:
Go to the Host tab and select the Storage Group option.
Click the Create option to create a new storage group.
Name the storage group for your identification and click OK to complete the task.
Before creating a LUN of the specified size we have to check prerequisites such as:
Check the availability of free space in the storage pool from which you are going
to create the LUN.
If free space is not available in that specific storage pool, share the information
with your reporting manager or your seniors.
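
The free capacity of each pool can be checked quickly from the CLI before carving the LUN. A minimal sketch (the SP IP address is a placeholder):

naviseccli -h 10.XX.XX.X.XX storagepool -list        (shows each pool with its total, free, and consumed capacities)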
Now we will create a LUN with the specified size.
Log in to VNX Unisphere.
Go to the Storage tab and select the LUN option.
Fill in all the fields, such as the storage pool, LUN capacity, number of LUNs to be
created, and name of the LUN, and specify whether the LUN is thick or thin.
Then click OK to complete the LUN creation task.
Now we have to add the newly created LUN to the newly created Storage Group
(Masking).

On the LUN creation page, there is an option called "ADD TO STORAGE GROUP" at the
bottom of the page.
Click on it and a new page will open.
Two columns will appear on the page: "Available Hosts" and "Connected Hosts".
Select the new storage group in the Available Hosts column and click the right
arrow; the host will appear in the Connected Hosts column. Then click OK.
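
The equivalent masking step from the CLI adds the LUN to the storage group by mapping its array LUN number (ALU) to a host LUN number (HLU). A sketch with assumed values (storage group SERVER01_SG, ALU 145, HLU 0):

naviseccli -h 10.XX.XX.X.XX storagegroup -addhlu -gname SERVER01_SG -hlu 0 -alu 145
naviseccli -h 10.XX.XX.X.XX storagegroup -list -gname SERVER01_SG        (confirm the HLU/ALU mapping)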
Inform the platform team that the LUN has been assigned to the host, attaching a
screenshot of the page, and also ask them to rescan the disks at the platform level.
As mentioned above, these are the points to be considered when provisioning a LUN
to a newly deployed server in the data center.
VNX - LUN Migration | How to perform the LUN migration in VNX
Today I am going to discuss LUN migration in VNX or EMC Clariion.
LUN Migration

LUN migration permits data to be moved from one LUN to another LUN, no matter what
the RAID type, disk type, LUN type, speed, or number of disks in the RAID group or
storage pool, with some restrictions.

LUN migration is an internal migration within the storage array, whether VNX or
Clariion, from one location to another. It is a two-step process: first, a
block-by-block copy of the "source LUN" to its new location, the "destination LUN";
once the copy is complete, the destination LUN takes over the source LUN's place
and identity.

Procedure
Log in to Unisphere.

Go to the Storage tab and select the LUN which you want to migrate.
Right-click the source LUN (e.g. LUN 145) and choose the Migrate option.
Select the target destination LUN (e.g. LUN 149) from the Available Destination LUN
field.
Set the Migration Rate drop-down menu (for example, Low).

NOTE: The Migration Rate drop-down menu offers four levels of service: ASAP, High,
Medium, and Low.

As a rule of thumb: if the source LUN is larger than 150 GB, use Migration Rate
Medium; if it is smaller than 150 GB, use Migration Rate High.
Click OK and confirm with Yes; the migration starts.

The destination LUN assumes the identity of the source LUN.


The source LUN is unbound when the migration process is complete.
Note: During the migration, the destination LUN is shown under 'Private LUNs'.
Important: only one LUN migration should be run per array at any time, and the size
of the target LUN must be equal to or greater than that of the source LUN.
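
The same migration can be started and monitored from the CLI. A minimal sketch, assuming source LUN 145 and destination LUN 149 from the example above:

naviseccli -h 10.XX.XX.X.XX migrate -start -source 145 -dest 149 -rate low        (start the migration at the Low rate)
naviseccli -h 10.XX.XX.X.XX migrate -list                                         (monitor progress; the entry disappears once the migration completes)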
As discussed above, this is how we perform the LUN migration activity on VNX or EMC
Clariion storage arrays.
