EMC VPLEX
Architecture and Design
April 2010
Support: Education Services
© 2010 EMC Corporation. All rights reserved. These materials may not be copied without EMC's written consent.
Welcome to EMC VPLEX Architecture and Design. Click the play button in the lower right-hand corner of this screen to continue.

Copyright © 2010 EMC Corporation. All rights reserved. These materials may not be copied without EMC's written consent. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC, Symmetrix, CLARiiON, Invista, Navisphere, PowerPath, SRDF, TimeFinder, Connectrix, Enginuity, FLARE, MirrorView, SnapView, and numerous other EMC product names are trademarks of EMC Corporation. All other trademarks used herein are the property of their respective owners.
Course Overview

Description
This course provides detailed coverage of VPLEX in typical data center environments. It comprehensively addresses product architecture, host-to-virtual-storage implementation, system environment sizing, and management and monitoring of VPLEX environments.

Audience
This course is intended for audiences who are presently, or planning to be, engaged in positioning VPLEX and performing VPLEX solutions design.

Objectives
Upon successful completion of this course, you should be able to:
- Explain how VPLEX can be integrated into your customer's production environment
- Perform planning and design for VPLEX deployment
EMC believes the information in this course is accurate as of its publication date. It is based on pre-GA product information, which is subject to change without notice. For the most current information, see the EMC Support Matrix and product release notes in Powerlink.
Course Modules
- Module 1: VPLEX Technology and Positioning
- Module 2: Architecture - Physical and Logical Components
- Module 3: VPLEX Functionality and Management
- Module 4: Planning and Design Considerations
Module 1: VPLEX Technology and Positioning
- Articulate how VPLEX can enable EMC's vision of the journey to the private cloud
The introductory module briefly outlines EMC's vision on block storage virtualization, and positions VPLEX-enabled solutions within the broader context of that vision.
Transitioning to Private Cloud
- Manage at Scale: simplify and automate
- Optimize Service Levels: tier and consolidate
- Deliver Always On: 24xforever availability
EMC Vision: Virtual Storage

Capabilities that free information from physical storage:
- Move thousands of VMs over thousands of miles
- Batch process in low-cost energy locations
- Dynamic workload balancing and relocation
- Aggregate big data centers from separate ones
- 24xforever: run applications without restart. Ever!

Moving from physical storage to virtual storage that is automated, efficient, integrated, always on, on demand, and secure.
FAST + Federation + Storage Virtualization
For years, users have relied on physical storage to meet their information needs. New and evolving changes, such as virtualization and the adoption of Private Cloud computing, have placed new demands on how storage and information are managed.

To meet these new requirements, storage must evolve to deliver capabilities that free information from a physical element to a virtualized resource that is fully automated, integrated within the infrastructure, consumed on demand, cost-effective and efficient, always on, and secure. The technology enablers needed to deliver this combine unique EMC capabilities such as FAST, Federation, and storage virtualization.

The result is a next-generation Private Cloud infrastructure that allows users to:
- Move thousands of VMs over thousands of miles
- Batch process in low-cost energy locations
- Enable boundary-less workload balancing and relocation
- Aggregate big data centers
- Deliver 24xforever, and run or recover applications without ever having to restart
EMC VPLEX Architecture
- Local & Distributed Federation: next-generation data mobility and access
- Scale-Out Cluster Architecture: start small and grow big with predictable service levels
- AccessAnywhere
- Advanced Data Caching: improve I/O performance and reduce storage array contention
- Distributed Cache Coherence: automatic sharing, balancing and failover of storage domains within and across VPLEX Engines
- Works with EMC and non-EMC arrays
Available April 2010
EMC VPLEX Capabilities

Local Federation | Storage Virtualization | Distributed Federation
AccessAnywhere, across EMC and non-EMC arrays:
- Streamline storage refreshes, consolidations and migrations
- Simplify multi-array allocation, management, and provisioning
- Pool storage capacity to extend useful life for N-1 storage assets
VPLEX Local: Overview (Single Cluster)
- Simplify provisioning and volume management
  - Centralize management of block storage in the data center
  - Simplify storage provisioning, management and monitoring
  - Physical storage needs to be provisioned just once, to the virtualization layer
- Non-disruptive data mobility
  - Optimize performance, redistribute and balance workloads among arrays
- Workload resiliency
  - Improve reliability, scale out performance
- Storage pooling
  - Manage available capacity across multiple frames based on SLAs
Around 2003, storage virtualization was introduced as a viable solution. The primary value proposition of storage virtualization was moving data non-disruptively. Customers looked to this technology for transparent tiering, moving back-end storage data without having to disrupt hosts, simplified operations over multiple frames, as well as ongoing data moves for tech refreshes and lease rollovers.

Customers required tools that enabled storage moves to be made without forcing interaction and work at the host and database administration levels. The concept of a virtualization controller was introduced and took its place in the market. While EMC released its own version of this with the Invista split-path architecture, we also continued development on both Symmetrix and CLARiiON to integrate multiple tiers of storage within a single array. Today, we offer Flash, Fibre Channel and SATA within EMC arrays, and a very transparent method of moving data across different storage types and tiers with our virtual LUN capability. We found that providing both choices for customers allowed our products to meet a wider set of challenges than if we offered just one of the two options.

The challenges addressed by traditional storage virtualization, which can be broadly categorized as simplified storage management, still exist today. VPLEX local federation can solve this class of problems within the context of a single data center.

However, we've also seen these data center issues evolve. Newer, different problems have emerged that require new solutions, as we'll see next when we discuss distributed federation.
VPLEX Metro: Overview (Two Clusters)
- AccessAnywhere: block storage access within, between and across data centers
  - Within synchronous distances: approximately 60 miles or 100 kilometers
- Connects two VPLEX storage clusters (Cluster 1/Site A and Cluster 2/Site B) together over distance
  - Enables virtual volumes to be shared by both clusters
  - Provides unique distributed cache coherency for all reads and writes
  - Both clusters maintain the same identity for a volume, and preserve the same SCSI state for the logical unit
- Enables VMware VMotion over distance
[Diagram: a two-site customer environment. Domain 1/Site 1 and Domain 2/Site 2 run MS Exchange mail servers, a file and print server, SharePoint 2007 and SQL Server 2008 (SQL01, SQL02) on Windows 2008 Server, with VMFS volumes on SAN storage backed by Symmetrix, CLARiiON and third-party arrays at each site, separated by a synchronous distance of up to 100 km. The slide lists the customer's challenges.]
Proposed: VMotion Over Distance with VPLEX
[Diagram: Distance VMotion moves the Mail_1, Mail_2 and Mail_3 virtual machines across sites, from Domain 1/Site 1 to Domain 2/Site 2, joining the Mail_4, file and print, Excel and SharePoint 2007 workloads there.]
The proposed solution accomplishes this with Distance VMotion across sites, thereby addressing the customer's primary challenges. Distance VMotion also opens up other possibilities for this customer, as listed here.
VPLEX Local: Single Cluster
[Diagram: a single-cluster rack with a Management Server, redundant 8-port FC COM switches with UPSs, and engines, attached to SAN fabrics and the arrays supported at GA.]
Shown is a summary of the key characteristics of a VPLEX Local, or single-cluster, configuration. Among our key value propositions: you can start small and scale up, you can have centralized management, as well as predictable performance and availability.

The engines are arranged in a true cluster, which means I/O that enters the cluster from anywhere can be serviced from anywhere. The engines are arranged in an N+1 configuration, which means that as you add more engines, you increase the memory, ports and performance of the total cluster. The cluster can withstand the failure of any device and any component; it will continue to operate and provide storage services as long as just one device survives. You get transparent mobility across heterogeneous arrays. If you have a need to extend these capabilities out over distance, or across multiple failure domains within a single site, a VPLEX Metro configuration may be a more appropriate choice.
VPLEX Metro: Dual Cluster (MetroPlex)
- Up to 8 virtualization engines
- 16K total virtual devices (8K per cluster, or shared)
- Within or across data centers
- Synchronous distance support
Here is a brief synopsis of VPLEX Metro configurations, limits and key capabilities.

As we saw with VPLEX Local, each single cluster can support 8000 back-end Storage Volumes and 8000 Virtual Volumes, regardless of whether you specify 1, 2 or 4 engines. The number of engines influences the total number of FE/BE ports available, and thus the scalability and obtainable performance relative to the number of hosts and storage array ports to be serviced. A VPLEX Metro dual cluster can support a total of 16,000 front-end and 16,000 back-end devices. However, when creating distributed RAID 1 devices, remember that you are consuming 2 devices, 1 from each cluster in the Metro; so if all devices are DR1s, the limit is 8000 front-end devices.

One view of a MetroPlex is each cluster servicing a different physical site, with up to 100 km between sites. An equally useful alternate view is two joined clusters at a single site with shared LUNs between them. You may choose to implement these two clusters as two different targets within separate failure domains, for example, in the same data center.

At GA, VPLEX will support clustered host file systems including VMFS. With this deployment, multiple VMFS servers can read/write the same file system simultaneously, while individual virtual machine files are locked. We will also extend support over time to include SUN Cluster, HP Cluster, IBM Cluster and CXFS.

Currently there is a limitation for stretch host clusters over distance: if one site fails, you need to perform a manual restart of the application on the failed site.
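The device-count arithmetic above (a DR1 consumes one device slot in each cluster) can be sketched as follows. This is a minimal illustration, not a VPLEX API: the limits come from the course, but the function and variable names are ours, and for simplicity all local devices are placed in one cluster.

```python
# Sketch of the VPLEX Metro device-count arithmetic described above.
PER_CLUSTER_LIMIT = 8000   # virtual volumes per cluster, per the course

def frontend_capacity(dr1_devices, local_devices):
    """Check a proposed mix of DR1 and local devices against the limits.

    A distributed RAID 1 (DR1) device consumes one device slot in EACH
    cluster; a local device consumes a slot only in its own cluster.
    """
    cluster_a = dr1_devices + local_devices  # all local devices on one side
    cluster_b = dr1_devices
    return cluster_a <= PER_CLUSTER_LIMIT and cluster_b <= PER_CLUSTER_LIMIT

# All-DR1 configuration: the effective front-end limit collapses to 8000.
assert frontend_capacity(dr1_devices=8000, local_devices=0)
assert not frontend_capacity(dr1_devices=8001, local_devices=0)
```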
Module 2: Architecture - Physical and Logical Components
- Describe VPLEX hardware and software architecture at a high level
This module describes the physical components and logical components comprising a VPLEX system.
VPLEX Architecture
[Diagram: two clusters, Cluster 1/Site A and Cluster 2/Site B. At each site, hosts and a VPLEX Management Server connect over IP and the SAN to VPLEX Engines presenting Virtual Volumes; directors communicate over LCOM links within a cluster and over FC MAN links between clusters; EMC and non-EMC arrays provide the back-end storage.]
Let's look at a typical production SAN environment, and how VPLEX fits and works within it.

The basic building block of a VPLEX system is the Engine. Multiple engines can be configured to form a single VPLEX cluster for scalability. Each Engine includes two high-availability Directors with front-end and back-end Fibre Channel ports for integration with the customer's fabrics. VPLEX does not rely on (or require) any particular fabric intelligence. The Director FE and BE ports show up as standard F-ports on the fabrics. VPLEX technology can work equally well with Brocade or Cisco fabrics, with no dependency on switching hardware or firmware. Directors within a cluster communicate with each other via redundant, private Fibre Channel links called LCOM links.

Each cluster includes a 1U Management Server with a public IP port for system management and administration over the customer's network. The Management Server also has private, redundant IP network connections to each Director within the cluster.

VPLEX implementation fundamentally involves three tasks: presenting SAN volumes from back-end arrays to VPLEX engines via each Director's back-end ports; packaging these into sets of VPLEX Virtual Volumes with the desired configurations and protection levels; and presenting Virtual Volumes to production hosts in the SAN via the VPLEX front end.

Currently a VPLEX system can support a maximum of two clusters. A dual-cluster system is called a MetroPlex. For a dual-cluster implementation, the two sites must be less than 100 km apart, with a round-trip latency of 5 ms or less on the FC links. VPLEX clusters within a MetroPlex communicate via FC over the Directors' FC MAN ports.

VPLEX implements a VPN tunnel between the Management Servers of the two clusters. This enables each Management Server to communicate with Directors in either cluster via the private IP networks. With this design, it's possible to conveniently manage a MetroPlex from either of the two sites.
VPLEX Engine: Characteristics
- Dual HA Directors per engine
- GeoSynchrony software runs on each Director to provide VPLEX features and functionality
- 32 8 Gb/s Fibre Channel FE/BE ports, for fabric connectivity to hosts and storage arrays
[Diagram: each Director has a CPU complex with eight cores and global memory, 8 Gb/s Fibre Channel host and array ports, and a Fibre Channel interconnect between Directors.]
Distributed Cache Coherency
[Diagram: one host issues a new write to block 3 while another host reads block 3. The engine cache-coherency directory maps block addresses (1-13) to the caches that own them (e.g., Cache A, Cache C); per-engine cache directories (C through H) track each engine's cache contents.]
The VPLEX environment is dynamic and uses a hierarchy to keep track of where I/Os go.

An I/O request can come from anywhere and will be serviced by any available engine in the VPLEX cluster. VPLEX abstracts the ownership model into a high-level directory that is updated for every I/O and shared across all engines. The directory uses a small amount of metadata and tells all other engines in the cluster, in 4K blocks, which block of data is owned by which engine and at what time. The communication that actually occurs is much smaller than the 4K blocks that are actually being updated.

If a read request comes in, VPLEX automatically checks the directory for an owner. Once the owner is located, the read request goes directly to that engine.

Once a write is done and the table is modified, if another read request comes in from another engine, it checks the table and can then pull the read directly from that engine's cache. If it's still in cache, there is no need to go to the disk to satisfy the read. This model also enables VPLEX to stretch the cluster, as we can distribute this directory between clusters and therefore between sites. The design has minimal overhead, is very efficient, and enables effective communication over distance.
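The directory-lookup behavior above can be sketched as a toy model: a shared map from block address to the engine that last wrote the block, consulted on every read. Real VPLEX tracks ownership in 4K blocks with far richer metadata; the class and method names here are illustrative only.

```python
# Toy model of the per-volume cache-coherency directory described above.
class CoherencyDirectory:
    def __init__(self):
        self.owner = {}    # block address -> owning engine
        self.caches = {}   # engine -> {block: data}

    def write(self, engine, block, data):
        # A write updates the owner entry so every engine can find the data.
        self.caches.setdefault(engine, {})[block] = data
        self.owner[block] = engine

    def read(self, engine, block):
        # A read consults the directory; if another engine owns the block
        # and still caches it, the data is pulled from that engine's cache
        # instead of going to disk.
        owner = self.owner.get(block)
        if owner is not None and block in self.caches.get(owner, {}):
            return self.caches[owner][block]
        return None  # not cached anywhere: go to the back-end array

d = CoherencyDirectory()
d.write("engine-1", block=3, data=b"10110")
assert d.read("engine-2", block=3) == b"10110"  # served from engine-1's cache
assert d.read("engine-2", block=7) is None      # cache miss: read from disk
```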
VPLEX Hardware Components: Engine
[Diagram: engine front view showing Director B and Director A; engine back view.]
Directors:
- Front-end ports provide active/active access to virtual volumes
- Process Fibre Channel SCSI commands from hosts
VPLEX Hardware Components: I/O Modules
[Diagram: front-end and back-end I/O modules, plus COM and GigE modules.]
VPLEX Hardware Components: DAE
[Diagram: the internal DAE behind its screen, the DAE with the screen removed, and an SSD drive carrier.]
VPLEX Hardware Components: I/O Module Carrier
[Diagram: the I/O module carrier.]
VPLEX Hardware Components: I/O Module Types
- 4-port 8 Gbps Fibre Channel IOM
- Used for FC COM and FC WAN connectivity with an I/O Module carrier
VPLEX Hardware Components: Management and Power
- Power supplies
- Management modules
  - Allow for daisy-chain connection between engines within a cluster
  - USB port unused
VPLEX Hardware Components: VPLEX Management Server
- Central point of management
The VPLEX Management Server is the central point of management for a VPLEX Local and VPLEX Metro system. It ships with a dual-core Xeon processor, a 250 GB SATA near-line drive and 4 GB of memory. The Management Server interfaces between the customer network and the VPLEX cluster. It isolates the VPLEX internal management networks from the customer LAN, and communicates with VPLEX firmware layers within the directors over the private IP connections. A Management Server ships with each VPLEX cluster. Note that the loss of a Management Server does not impact host I/O to VPLEX-provided virtual storage.

Within a MetroPlex there are two Management Servers, one for each cluster. Both clusters can be controlled from either Management Server. A MetroPlex utilizes a secure management connection between the two Management Servers via VPN. A VPLEX cluster can be controlled through the Management Console, which runs on the Management Server.

The Management Server also enables remote support via an ESRS Gateway. With this functionality in place, VPLEX is able to send call-home events and system reports to the ESRS Gateway.
VPLEX Hardware Components: Fibre Channel COM Switches
- Connectrix DS-300B: creates a redundant Fibre Channel network for COM
VPLEX Local: Supported Configurations
[Diagram: single-engine, dual-engine and quad-engine racks. Each engine is backed by redundant SPSs; dual and quad configurations add redundant FC switches and UPSs alongside the Management Server.]
All supported VPLEX configurations ship in a standard, single rack.

The shipped rack contains the selected number of engines, one Management Server, redundant Standby Power Supplies (SPSs) for each Engine, and any other needed internal components. For the dual and quad configurations only, these include redundant internal FC switches for LCOM connection between the Directors. In addition, dual and quad configurations contain redundant Uninterruptible Power Supplies (UPSs) that service the FC switches and the Management Server.

The software is preinstalled, and the system is precabled and pretested.

Engines are numbered 1-4 from the bottom to the top. Any spare space in the shipped rack is to be preserved for potential engine upgrades in the future; the customer may not repurpose this space for unrelated uses. Since the engine number dictates its physical position in the rack, numbering will remain intact as engines get added during a cluster upgrade.
Configurations at a Glance

Single Engine | Dual Engine | Quad Engine
[Table only partially recovered in extraction; row labels were lost. Dual-engine column values: 4, Yes, 32, 32, 128 GB, 1, 2, 2. Quad-engine column values: 8, Yes, 64, 64, 256 GB, 1, 2, 2. The single-engine column was not recovered.]

This table provides a quick comparison of the three different VPLEX single-cluster configurations available at GA.
VPLEX Management: IP Infrastructure
[Diagram: a management client on the customer LAN connects via HTTPS or SSH to the Management Server, which bridges to the redundant internal IP networks of the EMC VPLEX cluster.]
VPLEX Management
- VPlexcli (CLI)
- VPLEX Management Console (GUI)
VPLEX provides two ways of management: the VPlexcli and the VPLEX Management Console. The VPlexcli can be accessed via a telnet session to TCP port 49500 on the Management Server. The VPLEX Management Console is accessed by pointing a browser at the Management Server IP using the HTTPS protocol. Currently the VPLEX CLI is the more mature interface, providing complete support for all documented features and functionality. The Management Console has known limitations in some areas; for example, mobility operations can only be performed using the CLI.

Every time the VPlexcli is accessed, it creates a session log in the /var/log/VPlex/cli/ directory. Logging in through the Management Console also creates a session file in /var/log/VPlex/cli.

VPLEX Management Console:
- Via HTTPS session to the Management Server
- Intuitive, easy-to-use interface for simplified storage management
- Incorporates comprehensive online help
VPLEX Federation: Constructs
[Diagram: Storage Volumes are carved into Extents, and Extents are combined into Devices.]
Let's examine the various types of managed storage objects within EMC VPLEX, their interrelationships, and how they relate to entities external to VPLEX, such as customer hosts and customer storage arrays.

Back-end storage arrays are configured to present LUNs to VPLEX back-end ports.

Each presented back-end LUN maps to one VPLEX Storage Volume. Storage Volumes are initially in the unclaimed state. Unclaimed storage volumes may not be used for any purpose within VPLEX other than to create metavolumes, which are for system-internal use only.

Once a Storage Volume has been claimed within VPLEX, it may be carved into one or more contiguous Extents. A single Extent may map to an entire Storage Volume; however, it cannot span multiple Storage Volumes.

A VPLEX Device is the entity that enables RAID implementation across multiple storage arrays. VPLEX supports RAID 0 for striping, RAID 1 for mirroring, and RAID C for concatenation. The simplest possible device is a single RAID 0 device comprising one extent, as shown here.

Shown next is a more complex device, for example a striped RAID 0 device across two extents. Note that the underlying extents could even be from multiple back-end storage arrays.
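The claiming and carving rules above can be sketched as a small model: a Storage Volume must be claimed before extents are carved from it, and a single extent can never span Storage Volumes. The class and method names are illustrative, not VPLEX CLI objects.

```python
# Minimal sketch of the claim-then-carve rules described above.
class StorageVolume:
    def __init__(self, name, blocks):
        self.name, self.blocks = name, blocks
        self.claimed = False
        self.carved = 0  # blocks already allocated to extents

    def claim(self):
        self.claimed = True

    def carve_extent(self, blocks):
        if not self.claimed:
            raise RuntimeError("unclaimed storage volumes cannot be carved")
        if self.carved + blocks > self.blocks:
            # An extent is contiguous within ONE storage volume; it can
            # never borrow capacity from a second volume.
            raise ValueError("extent cannot span storage volumes")
        self.carved += blocks
        return (self.name, self.carved - blocks, blocks)  # (volume, offset, length)

sv = StorageVolume("backend-lun-01", blocks=1000)
try:
    sv.carve_extent(100)          # rejected: must be claimed first
except RuntimeError:
    pass
sv.claim()
assert sv.carve_extent(600) == ("backend-lun-01", 0, 600)
assert sv.carve_extent(400) == ("backend-lun-01", 600, 400)  # fills the volume
```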
VPLEX Federation: Constructs (Cont'd)
[Diagram: a Storage View associates host initiators, VPLEX front-end ports and a Virtual Volume; the Virtual Volume maps to a top-level Device layered over other Devices, Extents and Storage Volumes.]
Devices may be layered on top of other devices. For example, we could create a RAID 1 mirrored device with two dissimilar mirror legs, as shown in this example. Only devices at the top level may have a front-end SCSI personality and be presented to hosts. These are called Top-Level Devices.

A Storage View is the masking construct that controls how virtual storage is exposed through the front end. An operational Storage View is configured with three sets of entities, as shown next.

First, any hosts that the Storage View must present storage to should have one or more initiator ports (HBAs) in the Storage View. Host initiators should be registered with one of several specifically recognized and supported host personality types within VPLEX, such as default (which corresponds to most open systems hosts: Windows and Linux), HP-UX, and VCS. A high-availability host should have a minimum of two registered initiator ports, each within its Storage View.

Second, one or more VPLEX front-end ports need to be configured as part of the Storage View. A typical high-availability configuration would use a minimum of one front-end port per fabric, each of them servicing a separate host initiator.

Third, a Virtual Volume that maps to the appropriate Top-Level Device needs to be created and then configured as part of the Storage View.

Once a Storage View is properly configured as described and operational, the host should be able to detect and use Virtual Volumes after initiating a bus scan on its HBAs. Every front-end path to a Virtual Volume is an active path, and the current version of VPLEX presents volumes with the product ID Invista. The host requires supported multipathing software in a typical high-availability implementation.
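The three-part Storage View rule above (initiators, front-end ports, virtual volumes) lends itself to a small validation sketch. The dictionary layout and function names are our own illustration; the high-availability thresholds come from the guidance in the course.

```python
# Sketch of the Storage View rules described above: a view is operational
# only when it has registered initiators, FE ports, and a virtual volume.
def view_is_operational(view):
    return all([view.get("initiators"), view.get("fe_ports"), view.get("volumes")])

def ha_warnings(view):
    # High-availability guidance from the course: at least two registered
    # host initiators, and a minimum of one front-end port per fabric.
    warnings = []
    if len(view["initiators"]) < 2:
        warnings.append("host should register at least two initiator ports")
    if len(view["fe_ports"]) < 2:
        warnings.append("use a minimum of one VPLEX FE port per fabric")
    return warnings

view = {
    "initiators": ["0x10000000c987422a"],   # only one HBA registered
    "fe_ports": ["FE-0", "FE-1"],           # one per fabric
    "volumes": ["vol_mail_01"],
}
assert view_is_operational(view)
assert ha_warnings(view) == ["host should register at least two initiator ports"]
```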
Module 3: VPLEX Functionality and Management
This module provides a detailed look at the core VPLEX capabilities that are available at GA.
Provisioning: Using the VPLEX Management Console
[Screenshot: the Management Console home page, with a Provision Storage link, a provisioning task diagram and a provisioning overview.]
This is the home section of the EMC VPLEX Management Console, and a good logical starting point for many VPLEX management operations.

On the right of the screen there are storage provisioning steps. These steps are also links that will redirect a person to the page to implement the step.

On the left of the screen there is a picture showing the task sequence to provision virtual volumes out of VPLEX. To the right of the Home button, there are two more links: Provision Storage and Help. The Provision Storage link will take the user to an alternative page from which provisioning can be implemented. The Help link will take the user to the VPLEX online help page.
Brownfield Implementation: Encapsulation
- Encapsulation: the process of converting existing production SAN volumes on hosts to VPLEX volumes, via one-for-one mapping
- EMC VPLEX maintains physical separation of metadata from host data
  - VPLEX metadata is stored separately on metadata volumes
  - Basis for simple data-in-place mobility
- High-level steps:
Encapsulation is basically data-in-place migration of existing production data into VPLEX, and therefore does not require any additional storage. Encapsulation is disruptive, since you cannot simultaneously present storage both through VPLEX and directly from the storage array without risking data corruption, due to read caching at the VPLEX level.

You have to cut over from direct array access to VPLEX virtualized access. This implies a period where all paths to storage are unavailable to the application. With proper planning and execution, this downtime can be minimized. When PowerPath Migration Enabler (PPME) support is put in place, it can help eliminate any disruption.

An alternative migration strategy for existing production hosts is to perform host-based replication from native array volumes to net-new VPLEX volumes. This is non-disruptive but requires additional storage. Host-based copy also consumes cycles on the host, and may need to be planned in a live production environment.
Encapsulation: Migrating a Host to VPLEX
[Diagram: a host attached to Fabric A and Fabric B. VPLEX detects unregistered host initiator ports (e.g., 0x10000000c987422a and 0x10000000c987422b) and array storage volumes by their VPD83T3 identifiers, and then presents virtual volumes to the host.]
This example illustrates the process of cutting over from native SAN volumes to VPLEX volumes via encapsulation. Observe the system state transitions as you step through this task sequence.

The basic idea is to logically integrate VPLEX into your production fabrics, between your hosts and storage arrays.

To do this, the back-end ports of VPLEX are first connected to the production fabrics. Via suitable zoning and LUN masking, VPLEX back-end ports, which are technically initiators, detect the back-end storage arrays and volumes. Native array volumes, or LUNs, are then claimed by VPLEX, allowing your storage administrator to layer VPLEX virtual volumes on them for presentation to hosts.

Front-end configuration is the next logical step. VPLEX front-end ports are connected to the fabrics, and the zoning configuration is modified to allow hosts to detect these ports as targets. Once this is done, VPLEX can detect the host initiators (HBAs), which should then be registered with the appropriate host personality.

At this point, by creating a suitable storage view within VPLEX, it becomes possible to present VPLEX volumes to the host initiators. Note that in this process, the original SAN volumes from the array are now repackaged as VPLEX volumes and presented via new FC targets (i.e., the VPLEX FE ports). The recommendation is to remove host access to the original SAN volumes before presenting the encapsulating VPLEX volumes.
RAID 0 - Striped VPLEX Device
- Ideal for encapsulated devices
- Consider stripe depth
- Avoid striping striped storage volumes

RAID C - Concatenated VPLEX Device
- Most flexible to grow

[Diagram: a RAID 0 device striped across extents, and a RAID C device concatenating multiple devices.]
The VPLEX device construct forms the basis of the core RAID capabilities supplied by VPLEX. The key value-add is that VPLEX can enable RAID functionality across storage arrays.

A RAID 1 VPLEX Device mirrors data to two extents or devices.

A RAID 0 VPLEX Device stripes data across multiple extents or devices. The simplest possible device is a RAID 0 device that uses one extent; this is typically what you'd configure during encapsulation.

A RAID C VPLEX Device concatenates multiple extents or devices.

Viewing these as building blocks allows you to consider an organized system of device nesting to meet your customer's specific needs.
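The three geometries above differ in how they combine the capacity of their legs, which can be sketched as follows. This is a simplified capacity model (RAID 1 assumes two legs and yields the smaller one; RAID 0 is limited by its smallest leg), not VPLEX sizing logic.

```python
# Sketch of how the three VPLEX device geometries combine leg capacity.
def device_capacity(raid_type, legs):
    if raid_type == "raid-1":    # mirroring: same data on both legs
        return min(legs)
    if raid_type == "raid-0":    # striping: limited by the smallest leg
        return min(legs) * len(legs)
    if raid_type == "raid-c":    # concatenation: capacities simply add up
        return sum(legs)
    raise ValueError(raid_type)

# Legs given in GB; dissimilar legs waste space except under RAID C,
# which is why RAID C is the most flexible geometry to grow.
assert device_capacity("raid-1", [100, 120]) == 100
assert device_capacity("raid-0", [100, 120]) == 200
assert device_capacity("raid-c", [100, 120]) == 220
```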
VPLEX Provisioning: Multipathing with EMC PowerPath
Extent Mobility
- Mobility of block data across extents, non-disruptive to the host
- Extent mobility can only be performed within a cluster
- Original extent is freed up for re-use
- Fundamental use: non-disruptive data mobility across heterogeneous storage arrays
[Diagram: a host accesses a Virtual Volume on a Device while its data is moved from an extent on one Storage Volume to an extent on another.]
Device Mobility
[Diagram: a host accesses a Virtual Volume while the underlying Device, with its Extents and Storage Volumes, is migrated to a target Device.]
VPLEX Mobility: Typical Task Sequence
1. dm migration start -n <name> -f <extent/device> -t <extent/device>
2. dm migration commit -m <name> --force
3. dm migration clean -m <name> --force
4. dm migration remove -m <name> --force
[Diagram: during the migration, the source device or extent and the target device or extent are joined as a temporary RAID 1 while data is copied.]
Batched Mobility
- Enables scripting of extent and device mobility
- A batch can process either extents or devices, but not a mix of both

Task sequence for batched mobility:
1. Create migration plan: batch-migrate create-plan plan.txt -f <source> -t <destination>
2. Check plan for errors: batch-migrate check-plan plan.txt
3. Start migration, copy data to targets: batch-migrate start plan.txt
4. Commit migration: batch-migrate commit plan.txt
5. Clean up migration: batch-migrate clean file plan.txt
6. Remove migration record: batch-migrate remove
Batched mobility provides the ability to script large-scale migrations without having to specify individual extent-by-extent or device-by-device migration jobs.
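The six-step sequence above could itself be wrapped in a script. The sketch below only pairs sources with targets and emits the documented commands in order; the plan-file format and the exact multi-device argument syntax for `create-plan` are not shown in the course, so they are assumptions here.

```python
# Sketch of scripting the batched-mobility sequence described above.
def batch_migrate_commands(plan_file, sources, targets):
    # One-for-one pairing: each source extent/device needs one target.
    if len(sources) != len(targets):
        raise ValueError("each source needs exactly one target")
    pairs = list(zip(sources, targets))
    # Comma-separated source/target lists are an ASSUMED syntax for
    # create-plan; the remaining commands follow the documented sequence.
    commands = [
        f"batch-migrate create-plan {plan_file} -f {','.join(sources)} -t {','.join(targets)}",
        f"batch-migrate check-plan {plan_file}",
        f"batch-migrate start {plan_file}",
        f"batch-migrate commit {plan_file}",
        f"batch-migrate clean file {plan_file}",
        "batch-migrate remove",
    ]
    return pairs, commands

pairs, cmds = batch_migrate_commands("plan.txt", ["dev_A1", "dev_A2"], ["dev_B1", "dev_B2"])
assert pairs[0] == ("dev_A1", "dev_B1")
assert cmds[2] == "batch-migrate start plan.txt"
```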
AccessAnywhere with VPLEX Metro
[Diagram, left - Distributed Device: hosts at Cluster 1/Site A and Cluster 2/Site B both access the same Virtual Volume, backed by a distributed device with a storage-array leg at each site. Right - Remote Device: a host at one cluster accesses a Virtual Volume whose underlying Device and storage array reside at the other cluster.]
Distributed Device: I/O Operation
[Diagram: a host at VPLEX Cluster 1/Site A writes to a Virtual Volume on a distributed device. The write is mirrored over the FC MAN links to Cluster 2/Site B across synchronous distance; the storage array at each site acknowledges its copy, and the host receives an acknowledgement once both legs are written.]
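The write path shown in the diagram can be modeled as a toy synchronous mirror: the write is applied to the leg at both sites, and the host is acknowledged only after every leg has acknowledged. Class names are illustrative, not VPLEX internals.

```python
# Toy model of the distributed-device write path described above.
class SiteLeg:
    def __init__(self, name):
        self.name, self.blocks = name, {}

    def write(self, lba, data):
        self.blocks[lba] = data
        return "ACK"   # the array at this site acknowledges its copy

class DistributedDevice:
    def __init__(self, leg_a, leg_b):
        self.legs = [leg_a, leg_b]

    def write(self, lba, data):
        # Synchronous mirroring: every leg must ACK before the host does.
        acks = [leg.write(lba, data) for leg in self.legs]
        return "ACK" if all(a == "ACK" for a in acks) else None

dd = DistributedDevice(SiteLeg("Site A"), SiteLeg("Site B"))
assert dd.write(0x10, b"10110") == "ACK"
# After the host ACK, both sites hold an identical copy of the block.
assert all(leg.blocks[0x10] == b"10110" for leg in dd.legs)
```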
Remote Device: I/O Operation
[Diagram: a host at VPLEX Cluster 1/Site A reads from a Virtual Volume on a remote device; the read is serviced over the FC MAN links from the Virtual Volume and storage array at Cluster 2/Site B, across synchronous distance.]
Distributed Device: Handling Split-brain

Consider a distributed system with two sites, Site A and Site B, connected by FC MAN links.

From Site A's perspective, the following two conditions are indistinguishable:
- Partition failure: the FC MAN links between the sites are down
- Site failure: Site B itself has failed

Addressing this is fundamental to the design of distributed applications. With a MetroPlex distributed device, it is handled with a configurable detach rule.
Distributed Device: Configuring Detach Rule
- Can specify a predefined rule-set or a customized rule-set
Distributed Devices: Supported Detach Options

Detach options currently supported with VPLEX distributed devices in a MetroPlex:
- Biased site detach
- Non-biased site detach
- Manual detach
  - Use with an automated script on production host(s) to activate read/write access from either site, after a failure event
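The detach-rule idea above can be sketched as a decision function: when the clusters lose contact, the rule determines which site may detach its mirror leg and continue I/O. This is a simplified illustration, not GeoSynchrony's logic; the course does not detail non-biased detach, so this sketch models only the biased and manual options.

```python
# Sketch of the detach-rule decision described above.
def surviving_site(rule, bias_site=None):
    """Return the site allowed to continue I/O after a partition, or None."""
    if rule == "biased":
        # The pre-configured bias site detaches its mirror leg and keeps
        # running; the other site suspends I/O, avoiding split-brain.
        return bias_site
    if rule == "manual":
        # No automatic winner: an administrator, or an automated script on
        # the production hosts, must activate read/write access.
        return None
    raise ValueError(f"rule {rule!r} not modeled in this sketch")

assert surviving_site("biased", bias_site="Site A") == "Site A"
assert surviving_site("manual") is None
```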
Monitoring: VPLEX Performance

Creating monitors:
monitor create --name <name> --period <time> --director <Director_Name> --stats <stat>

Listing monitors

Destroying monitors:
monitor destroy <monitor>
Performance data can be collected on the VPLEX system by creating monitors and sinks. Monitors collect performance statistics on various VPLEX components. These monitors are created within the VPlexcli using the monitor command. By default, monitors collect statistics every 30 seconds; this collection interval can be modified if desired. Once a monitor is created, it can be found in the /monitoring directory. Monitors only start collecting data when they have at least one associated sink, as we will see next. Monitors can be destroyed using the monitor destroy command.
49
Monitoring: VPLEX Performance (Cont'd)
Listing statistics available for monitoring:
monitor stat-list
monitor collect
  Updates a performance monitor immediately
  Ad-hoc manual collection of data
Removing a sink:
monitor remove-sink <sink>
50
Monitoring: Event Handling and Report Generation
[Diagram: events flow from the Engine to the Management Server, which hosts ConnectEMC, the CallHome Listener, SYR, the EMA_Adaptor, and VPlexcli, and onward to the ESRS Gateway.]
51
Shown is the high-level architecture of event handling and messaging flow from the Engine to the Management Server, and on to a properly configured ESRS gateway. VPlexcli, which runs on the Management Server, pulls events every second from a process on a Director. The CallHome Listener on the Management Server looks at the events and determines which events should initiate a call home. It then places those events into the /opt/emc/VPlex/Event_Msg_Folder directory as .txt files. The EMA_Adaptor's job is to take the text files from the Event_Msg_Folder directory and create the required XML files using the EMA API. The EMA_Adaptor then places those files into the /opt/emc/connectemc/poll directory. The ConnectEMC process picks up the XML event files and sends them to the ESRS Gateway. If the events are successfully sent to the gateway, they are also copied into the /opt/emc/connectemc/archive directory. If transmission fails for some reason, the corresponding events are placed into the /opt/emc/connectemc/failed directory. TCP ports 22, 9010, 443, and 5901 must be open between the Management Server and the ESRS Gateway. The ESRS Gateway classifies incoming events as belonging to this VPLEX instance via the Top-Level Assembly field within each event. The Top-Level Assembly is a cluster-unique identifier that is preset at the factory on all engines of a VPLEX cluster.
Generating System Reports: SYR
SYR generates a complete report of the VPLEX system

Task                Command
Configure SYR       scheduleSYR add -d <day> -t <hour> -m <minute>   (sends a weekly report to the ESRS Gateway)
List SYR            scheduleSYR list
Manually run SYR    syrcollect
52
Collecting VPLEX Log Files
collect-diagnostics
  Collects logs, cores, and configuration information from the Management Server and the directors
  Places a tar.gz file in /diag/collect-diagnostics-out
53
Scheduling: cron-style
schedule: manage and control the timing of specific tasks
54
Maintenance: Non-disruptive Code Upgrade (NDU)
NDU process for VPLEX: code upgrades with no disruption to production hosts performing I/O to VPLEX virtual volumes
  Requires best practices to be followed for host connectivity, and supported multipathing software
  Uses a notion of first upgraders and second upgraders
    First: Director A of every engine is upgraded, then rebooted
    Second: Director B of every engine is upgraded, then rebooted
  VPLEX Metro upgrade: both clusters are upgraded with a single ndu operation issued on one Management Server
55
Second upgraders: every engine's B directors are upgraded. The B directors' firmware is shut down during the upgrade, and I/O is automatically redirected to the A directors. Once upgraded, the B directors reboot and begin serving I/O again.
Module 4: Planning and Design Considerations
  Perform planning and design for a VPLEX deployment
  State and explain the rationale for recommended best practices with VPLEX implementations
56
This module covers planning and design considerations during deployment of a VPLEX solution.
VPLEX Physical Connectivity: SAN Best Practices
[Diagram: hosts reach VPLEX FE ports and arrays reach VPLEX BE ports, with both FE and BE ports distributed over redundant Fabric A and Fabric B; Volume 1 and Volume 2 are presented through the cluster.]
When deploying the VPLEX cluster, the general rule is to use a configuration that provides the best combination of simplicity and redundancy. In many instances connectivity can be configured to varying degrees of redundancy; however, there are some minimal requirements that should be met. Deploy mirrored fabrics: this is standard EMC practice. In addition, it is preferable to isolate the front-end fabrics from the back-end fabrics. This ensures clean separation of hosts from storage arrays, and is appropriate in environments where all encapsulation of existing production data is complete and any future provisioning to hosts will be exclusively from VPLEX. Connect every host and every storage array to both fabrics. Each Director should be assigned ports on both fabrics; otherwise, a fabric failure could reduce the paths and computing power of the VPLEX, doubling the workload for the surviving Directors. Distribute the FE ports of each director over both fabrics. Distribute the BE ports of each director over both fabrics. These two rules ensure that a complete outage on one fabric does not render a Director non-operational on either the front end or the back end; thus the processing power of the VPLEX system is not compromised by a fabric outage. Distribute the four ports of each I/O module over both fabrics. Again, this minimizes the loss of VPLEX efficiency and processing power in the event of a complete failure on one fabric.
57
VPLEX Logical Connectivity: Back-end
[Diagram: a VPLEX back-end volume reached via redundant Fabric A and Fabric B, through VMAX ports A0/A1/B0/B1 and a LUN on a CX4-960.]
Each director must be provided access to every BE volume in the cluster
  Active/Active array: for each director, provide at least one BE path to each volume via each fabric
  Active/Passive array: for each director, provide BE paths via both controllers to each LUN via each fabric
  VPLEX BE port initiator personality = open systems host; use failovermode=1 with CLARiiON arrays
Module 4: Planning and Design Considerations
58
VPLEX Logical Connectivity: Front-end
[Diagram: hosts access Volume 1 and Volume 2 through FE ports on Director A and Director B of Engine 1 and Engine 2, distributed over redundant Fabric A and Fabric B.]
Single Engine configuration: for each host, configure FE paths to both Director A and Director B
Dual Engine and Quad Engine configurations: for each host, configure FE paths to A and B directors of separate engines
59
60
Logging Volumes
Required capacity: 1 bit for every 4-KByte page of distributed device
One 10 GB logging volume can support 320 TB of distributed devices
General requirements for SAN volumes to be used for logging:
  Very high performance requirement
    No I/O activity on logging volumes under normal conditions
    High random, small-block write I/O rate during loss of connectivity
    High small-block read I/O rate during incremental resynchronization
  Highest possible availability
  Use striped and mirrored volumes to meet these requirements
61
Listed are the requirements and best practices for VPLEX logging volumes. A prerequisite for creating a distributed device, or a remote device, is that you must have a logging volume at each cluster. Single-cluster systems, and systems that do not have distributed devices, do not require logging volumes. Logging volumes keep track of changed blocks during an inter-cluster link failure. After a link is restored, the system uses the information in the logging volumes to synchronize the distributed devices by sending only changed block regions across the link. The logging volume must be large enough to contain one bit for every page of distributed storage space. So, for example, you only need about 10 GB of logging volume space for 320 TB of distributed devices in a Metro-Plex. The logging volume receives a large amount of I/O during and after link outages, so it must be able to handle I/O quickly and efficiently.
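The sizing rule above (one bit per 4 KB page) can be sanity-checked with a short calculation; this is an illustrative sketch, not a VPLEX tool, and the helper name is invented for the example:

```python
def logging_volume_bytes(distributed_capacity_bytes, page_size=4 * 1024):
    # One bit is needed for every 4 KB page of distributed device capacity;
    # divide the page count by 8 to convert bits to bytes.
    pages = distributed_capacity_bytes // page_size
    return pages // 8

TB = 1024 ** 4
GB = 1024 ** 3

# 320 TB of distributed devices -> 10 GB of logging volume, matching the slide.
print(logging_volume_bytes(320 * TB) / GB)  # -> 10.0
```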
Storage Views: Best Practices
Each storage view should have:
  At least two registered initiators (HBA ports) from each host
    Recommended: HBAs distributed over redundant fabrics
  At least two VPLEX FE ports: one from an A director, one from a B director
    Recommended: ports from different engines when possible, and distributed over redundant fabrics
Create one storage view for all the hosts that need access to the same storage

[Diagram: a storage view groups host initiators, VPLEX FE ports, and virtual volumes.]
62
When creating storage views, follow these best practices: Create one storage view for all hosts that need access to the same storage, and then add all required volumes to the view. Redundancy requirements are based on standard EMC guidelines for SAN configuration. Each host should have at least two registered initiators in the view. Access to the volumes should be enabled via at least two VPLEX front-end ports in the view. When selecting the front-end ports for a storage view, make sure to follow the previously discussed best practices: use ports from at least one A director and one B director and, whenever possible, from directors in separate engines.
Partition Alignment
  VPLEX page size = 4 KB
  VMAX track size = 32 KB
  Minimum recommended alignment = 64 KB
  Can't go wrong with 1 MB
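As a quick illustration of why 1 MB "can't go wrong": it is an integral multiple of the 4 KB page, the 32 KB track, and the 64 KB recommended boundary. A minimal sketch of the check (the helper is hypothetical, not part of any VPLEX or VMAX tool):

```python
KIB = 1024

def is_aligned(offset_bytes, boundary_bytes=64 * KIB):
    # A partition is aligned when its starting offset falls on the boundary.
    return offset_bytes % boundary_bytes == 0

# A 1 MiB starting offset satisfies the 4 KiB, 32 KiB, and 64 KiB boundaries.
for boundary in (4 * KIB, 32 * KIB, 64 * KIB):
    print(is_aligned(1024 * KIB, boundary))  # -> True for each boundary
```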
63
Limitation:
Capacity of the encapsulation target must be an integral multiple of 4 KB
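This limitation reduces to a simple modulo test; the sketch below is illustrative only, and it assumes the 4 KB figure is the VPLEX page size from the Partition Alignment slide:

```python
PAGE_BYTES = 4 * 1024  # VPLEX page size (assumed to be the "4 KB" above)

def can_encapsulate(capacity_bytes):
    # The encapsulation target's capacity must be an integral
    # multiple of the 4 KB page size.
    return capacity_bytes % PAGE_BYTES == 0

print(can_encapsulate(8 * 1024 ** 3))        # 8 GiB -> True
print(can_encapsulate(8 * 1024 ** 3 + 512))  # plus one 512-byte block -> False
```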
64
VPN and MAN-COM: Best Practices
[Diagram: the VPLEX Management Servers at Cluster 1/Site A and Cluster 2/Site B are connected by an IPsec tunnel over the WAN; the Director A and Director B of Engine 1 and Engine 2 at each site connect through redundant switches over inter-switch links ISL1 and ISL2.]
The diagram illustrates the requirements for IP and FC connectivity between the two clusters in a Metro-Plex. A fundamental requirement, without which the Metro-Plex cannot be installed, is IP connectivity between the VPLEX Management Servers. As part of the initial Metro-Plex install, a VPN tunnel is established for secure connection and interchange of configuration data between these servers. Additionally, the VPLEX Directors of each cluster need visibility to the Directors of the other cluster via their MAN-COM ports. Currently, distances of up to 100 km between clusters are supported. Round-trip latency on this link must be less than 5 milliseconds. The bandwidth requirement will obviously depend on the specific customer application; in general, a minimum of 45 Mb/s is the guideline. The FC MAN links can use either dark fibre or DWDM. When configuring a Metro-Plex it is best to make use of two fabrics for the FC MAN connection, allowing a Director to communicate with all the other Directors on either of the two fabrics. This provides the best possible performance and fault tolerance. If MAN traffic must share the same physical link as customer production traffic, then logical isolation must be implemented using VSANs or LSANs. Note that there are specific zoning practices to be followed when exposing Director FC MAN ports to each other. Refer to the product installation guide for details.
65
Mobility Recommendations
Device Mobility
  Mobility between dissimilar arrays
  Relocate hot devices from one array type to another
  Relocate devices across clusters in a Metro-Plex
Batch Mobility
  For non-disruptive tech refreshes and lease rollovers
  For non-disruptive cross-Plex device mobility
  Only 25 devices or extents can be in transit at one time
  Additional mobility jobs will be queued if greater than 25
Extent Mobility
  Load balance across storage volumes
66
Distributed Devices: Host Connect Topologies
Local Access
  Each host accesses the volume via FE ports on one cluster only
Spanned Access (NOT supported in V4.0)
  Each host accesses the volume via FE ports on both clusters
67
Scalability and Limits

Parameter                                Maximum
Virtual volumes                          8000 per cluster
Storage volumes                          8000 per cluster
Initiators (HBA ports)                   400
Extents                                  24000
Metavolume size                          78 GB
RAID-1 mirror legs                       2
Active intra-cluster rebuilds            25
Active inter-cluster rebuilds            25
Storage volume size                      Up to 32 TB
Virtual volume size                      Up to 32 TB
Total storage provisioned in a system    8 PB
68
Volume Limits in a Metro-Plex: Example
[Diagram: hosts at Cluster 1/Site A and Cluster 2/Site B share 2000 stretched volumes, layered on 2000 local devices within each cluster; in addition, each cluster presents 6000 local devices to its local hosts only.]
69
Here is an example to illustrate how the maximum limit of 8000 volumes per cluster can be effectively exploited in a Metro-Plex solution. In this scenario, we have 2000 distributed devices, with the corresponding 2000 "stretched" volumes that can be presented to hosts at both sites. These volumes can potentially be shared by hosts across sites, for example to accommodate distance VMotion or stretched host-clustering applications. Note that our 2000 top-level distributed devices (i.e. devices that are enabled for front-end presentation) are layered upon 2000 local devices within each cluster. In addition, you can configure up to 6000 more top-level local devices at each site that are presented to local hosts only. These would be suitable for data that doesn't need to be shared across sites. This example shows how to conform to the 8000-volumes-per-VPLEX-cluster limit, while also maximizing the benefit to the customer.
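The per-cluster accounting in this example can be verified with a few lines (a sketch; the variable names are illustrative, and the counts and the 8000-volume limit come from the slides):

```python
MAX_VOLUMES_PER_CLUSTER = 8000

stretched_volumes = 2000   # distributed devices, presented at both sites
local_only_devices = 6000  # additional top-level local devices per site

# Each stretched volume is layered on one local device within each cluster,
# so per cluster the total consumed is 2000 + 6000.
per_cluster_total = stretched_volumes + local_only_devices
print(per_cluster_total)                             # -> 8000
print(per_cluster_total <= MAX_VOLUMES_PER_CLUSTER)  # -> True
```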
HEAT
  Check for host compatibility with VPLEX
VPLEX Deployment Tool (VDT)
  Assists with VPLEX configurations, implementations, and modifications in VPLEX clusters
  Executable that runs on Windows
SVC Qualifier
70
These are the current VPLEX solution design tools in active development. A network quality and latency assessment is recommended.
VPLEX Sizing Tool
71
Simple Support Matrix (SSM)
The current VPLEX SSM is downloadable from:
https://elabnavigator.emc.com/emcpubs/elab/esm/pdf/EMC_VPLEX.pdf
72
Only 1:1 mapping between a VPLEX virtual volume and an array physical volume is supported, because the remote site (target/R2) won't be virtualized
Currently VPLEX supports only thick-to-thick data moves
Virtual provisioning and support for thick-to-thin non-disruptive mobility in VPLEX are planned to be added over time
RecoverPoint: not integrated and supported with VPLEX
73
Shown are some of the key interoperability limitations at launch time. In v4.0, TimeFinder/Clone/Snap is not supported. MirrorView/SRDF can be used on the VPLEX back end as long as the target or R2 site volumes are not virtualized with VPLEX. This also means that we can ONLY support 1:1 mapping between a VPLEX virtual volume and an array physical volume, because the remote site (target/R2) won't be virtualized. In v4.0, VPLEX will support only thick-to-thick data moves. Virtual provisioning and support for thick-to-thin non-disruptive mobility in VPLEX are planned to be added over time. RecoverPoint is not integrated and supported with v4.0. This functionality will be added over time.
Course Summary
EMC VPLEX represents innovative local and distributed federation technology. It is positioned to address non-disruptive workload relocation, distributed data access, workload resiliency, and simplified storage management.
VPLEX Local supports local federation, including consolidation, heterogeneous pooling, and non-disruptive mobility within a data center.
VPLEX Metro supports the above, as well as distributed federation across sites or failure domains, within synchronous distances (up to 100 km, latency < 5 ms).
VPLEX offers AccessAnywhere, with key enablers including distributed virtual volumes over distance, remote access, and mobility within and across clusters.
74