Introduction
Traditional physical solutions
    Issues with master/slave and clustered configurations
Virtualisation
    How VMware handles redundancy
    Why virtualisation is better for your solution
    Are there any limitations to virtualisation?
        How will my virtual server perform?
        Is my data safe?
    Does virtualisation offer any other features?
        Flexibility
            Easy upgrades and downgrades
            Live storage migrations
            Private internal networking
Technical diagrams
    Storage network
    Internet access
"#$%&'()$*&# vlrLuallzaLlon has been slowly revoluLlonlslng compuLlng for over 10 years now. When we underLook Lhe challenge of bulldlng a new vMware envlronmenL, we declded LhaL lL should encompass Lhe besL LhaL vlrLuallsaLlon had Lo offer whllsL rlvalllng Lhe performance of LradlLlonal physlcal server-based soluLlons. 1o help overcome Lhls challenge, we looked Lo vMware, who are arguably Lhe leaders ln Lhe vlrLuallsaLlon markeL. +%,'*$*&#,- ./01*),- 1&-($* hyslcal servers are flexlble and allow us Lo creaLe some excellenL soluLlons Lo varlous l1 challenges. Powever, all dedlcaLed soluLlons suffer from Lhe same drawback: hardware fallure. 1he only way Lo negaLe hardware fallure ls Lo use redundanL hardware. 1hls can be done ln masLer/slave conflguraLlons or, more commonly Lhese days, by clusLerlng servlces on mulLlple hardware plaLforms. "11(21 3*$/ 4,1$2%51-,62 ,#' )-(1$2%2' )*8(%,$* unforLunaLely, more hardware means Lwo Lhlngs: a more cosLly soluLlon and a more complex soluLlon. A masLer/slave seLup offers n+1 redundancy. lf you requlre more Lhan Lhls Lhen you wlll need Lo look aL complex clusLered soluLlons. WhaL all Lhls means ln real Lerms ls LhaL you end up paylng ouL good money for hardware LhaL, if youre lucky, you will never use. 1hese facLors are Lhe maln drlver behlnd Lhe developmenL of vlrLuallsaLlon Lechnology. 9*%$(,-*1,$*&# ln lLs slmplesL Lerms, vlrLuallsaLlon ls a way of Laklng a physlcal server and parLlLlonlng lL lnLo many vlrLual servers. 1hanks Lo Pypervlsor-based vlrLuallsaLlon Lechnology (vMware), each vM (vlrLual machlne) has lLs own dlsLlncL seL of resources LhaL ls accessed aL a hardware level. vMs are completely unaware of each other and do not contend for each others resources. !"#$ &#'" ()"*()+,-.) / 01 2-33 3##4 /56 7)8/.) 
How VMware handles redundancy

Virtual hosts are still physical servers, suffering from the same issues (such as hardware failure) as a traditional physical server environment. These hardware failures are dealt with in exactly the same way as in physical environments. Multiple components such as power supplies are standard across our VMware cloud environment, but it is clustering that really brings this product into its own.

Our VMware cloud environment is actually a large cluster of VMware ESXi servers, all managed by a product called vCenter. vCenter monitors the health of all the ESXi hosts. Your VM exists on one of these ESXi hosts, but any of the ESXi hosts is capable of running your VM. If vCenter detects an issue with any of the ESXi hosts, your VM is seamlessly moved from the host with the issue to one without. From your VM's perspective, there has been no change at all; your VM will just continue to run as normal. If the ESXi host that is running your machine has a total failure, vCenter will detect this and simply restart your VM on an ESXi host without issues. In this instance you will see nothing more than a reboot.
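The failover behaviour described above can be sketched in a few lines of Python. This is purely illustrative and uses none of VMware's actual APIs; the `Host` class, the `rebalance` helper and the host names are our own inventions. It captures the two behaviours the section describes: VMs are moved away from a degraded host, and restarted elsewhere when a host fails outright.

```python
# Illustrative sketch of cluster-level failover logic (hypothetical names,
# not the vCenter API). Degraded hosts have their VMs live-migrated away;
# failed hosts have their VMs restarted elsewhere, seen as a reboot.

class Host:
    def __init__(self, name, healthy=True, failed=False):
        self.name = name
        self.healthy = healthy   # False: degraded but still running
        self.failed = failed     # True: total hardware failure
        self.vms = []

def rebalance(hosts):
    """Move every VM off unhealthy hosts onto the least-loaded good host."""
    events = []
    good = [h for h in hosts if h.healthy and not h.failed]
    for host in hosts:
        if host.healthy and not host.failed:
            continue
        for vm in list(host.vms):
            target = min(good, key=lambda h: len(h.vms))
            host.vms.remove(vm)
            target.vms.append(vm)
            # A live migration is invisible to the VM; a restart is a reboot.
            action = "restarted" if host.failed else "migrated"
            events.append((vm, action, target.name))
    return events

cluster = [Host("esxi-01"), Host("esxi-02"),
           Host("esxi-03", healthy=False, failed=True)]
cluster[2].vms = ["customer-vm"]
print(rebalance(cluster))  # → [('customer-vm', 'restarted', 'esxi-01')]
```

In the real product the live migration and restart are performed by vSphere features (vMotion and HA); the sketch only captures the decision, not the mechanism.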
Why virtualisation is better for your solution

Our VMware cloud environment has at least N+1 redundancy on every single component in the cluster. This includes internet connections, storage, storage networks and power. Most components are actually N+2. Because our VMware environment is clustered and protected against hardware failure in this way, your applications and services don't need to be. You will no longer need to spend money on redundant hardware that you will never use, as the environment already has this built in.

Are there any limitations to virtualisation?

How will my virtual server perform?

In its early iterations, the overhead of virtualisation made it unsuitable for certain applications, but those days are long past, and major tech companies such as Microsoft and Oracle support virtual implementations for resource-hungry and complex packages such as SQL Server, Active Directory and Exchange as standard. This does not mean there are no performance differences: typically, a virtual machine will see a performance reduction of about 3% in comparison with a dedicated machine of exactly the same specification.

Is my data safe?

Many people have concerns regarding the safety and security of cloud solutions, and the question for a lot of people is: is my data secure? The simple answer is: yes, your data is as safe on our VMware cloud platform as on any dedicated server.

In a VMware clustered environment, data is not stored on the ESXi host but on a central storage platform, accessed via a storage network. However, this doesn't mean that someone could simply gain access to the storage platform and read your data. There are many systems, both physical and virtual, that prevent this:

- Your data is contained within a virtual hard disk; it is not just files in a folder.
- The only way to access the data on this disk is to mount it into a physical/virtual machine, in the same way that the only way to access a physical hard disk is to mount it into a server. To add an extra level of security, you can also encrypt a virtual hard disk, again using the same processes you would use for a physical one.
- There is no access to the storage platform from any virtual machine. In fact, VMs are completely unaware of its existence. As far as the operating system of the VM is concerned, the hard disk is attached via a SCSI card. It has no concept that its own hard disk is virtual, let alone that of other servers.
- There is no physical connection between the storage platform and the internet. Our 10GbE storage network exists only on a private internal range, and in order to access that range you need to be physically connected to our private management range.
- Our entire cluster is housed in a Tier 4 facility. This facility has 24-hour security, and no one can gain access without official photographic ID (passport or driving licence) and at least 24 hours' prior arrangement with the datacentre team.

As you can see, your data is as secure in a VM as it is on any dedicated server.
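The first two points above — data lives inside a virtual hard disk, and is only reachable by mounting that disk, optionally protected by encryption — can be modelled in a short conceptual Python sketch. Everything here is our own illustration (the class, the method names, and XOR as a toy stand-in for real disk encryption); it is not how VMware stores disks.

```python
# Conceptual model only: a virtual disk is an opaque blob that must be
# attached (mounted) before it is readable; an encrypted disk also needs
# its key. XOR stands in for real encryption purely for illustration.

class VirtualDisk:
    def __init__(self, data, key=None):
        self._key = key
        self._blob = self._xor(data, key) if key else data

    @staticmethod
    def _xor(data, key):
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def attach(self, key=None):
        """Mount the disk into a machine and return its contents."""
        if self._key is None:
            return self._blob
        if key != self._key:
            # Without the key, the blob is unreadable ciphertext.
            raise PermissionError("wrong or missing encryption key")
        return self._xor(self._blob, key)

disk = VirtualDisk(b"customer records", key=b"secret")
print(disk.attach(key=b"secret"))  # → b'customer records'
```

On a real VM the same property holds: until the virtual disk is mounted into a machine (and decrypted, if you chose to encrypt it), its contents are just an inaccessible file on the storage platform.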
Does virtualisation offer any other features?

Virtualisation allows us to approach the various challenges around hardware in brand new ways that would not be easy, or even possible, within a physical environment.

Flexibility

Easy upgrades and downgrades

Scaling hardware requirements is difficult, particularly in a physical environment, as it requires physically changing components in the servers. This usually involves a lot of downtime. In a virtual environment, the hardware specification is defined by a configuration file, which can be edited extremely quickly. Adding more RAM and CPUs can be done in a matter of minutes and committed with a simple reboot of the operating system. Additional hard disk space doesn't even require a reboot and can be added on the fly (depending on your operating system).

Live storage migrations

VMs in our VMware cloud environment are housed on an enterprise-level storage solution from EMC. We have 3 tiers of storage, each with its own features which better suit it to certain workflows, and each with its own costs. If your needs should change at any time, for whatever reason, we can easily move your VM between these tiers. This will change the performance characteristics of your VM instantly, without the need for any downtime at all; it doesn't even require a restart.

Private internal networking

As well as creating virtual servers in the VMware cloud, we can also create all manner of virtual devices such as switches, firewalls and routers.
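To illustrate how lightweight an upgrade of this kind is, the sketch below rewrites a VMX-style `key = "value"` configuration to raise a VM's RAM and vCPU count. The key names follow VMware's .vmx convention (`memsize`, `numvcpus`), but the `set_hardware` helper and the example file contents are our own illustration, not a VMware tool:

```python
# Illustrative only: bump RAM/vCPUs by rewriting the VM's config text.
# Keys follow the .vmx convention (memsize, numvcpus); the helper is ours.

def set_hardware(config_text, ram_mb, vcpus):
    """Return config_text with the memory and vCPU settings replaced."""
    updates = {"memsize": str(ram_mb), "numvcpus": str(vcpus)}
    lines = []
    for line in config_text.splitlines():
        key = line.split("=")[0].strip()
        if key in updates:
            line = f'{key} = "{updates[key]}"'
        lines.append(line)
    return "\n".join(lines)

before = 'displayName = "web01"\nmemsize = "4096"\nnumvcpus = "2"'
print(set_hardware(before, ram_mb=8192, vcpus=4))
```

A reboot of the guest then commits the new hardware, as described above; in practice such edits are made through the management tooling rather than by hand.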
Technical diagrams

The following diagrams illustrate the architecture of our VMware cloud environment. They are designed to give an indication of the technologies used to create this solution. They are not, however, 100% accurate: some devices/configurations have been purposely left out for security reasons.

Storage network
Key points

- All of our backup links are 10GbE.
- Each storage cluster has multiple 10GbE links to the main storage core.
- Each server has multiple 10GbE links to the storage core.
- Every component here has at least N+2 redundancy.
"#$2%#2$ ,))211
key olnLs Lvery LSxl hosL has mulLlple llnks Lo Lhe lnLerneL. Pardware redundancy ls n+1. 1here are mulLlple paLhs Lo Lhe lnLerneL. Plgh capaclLy llnks Lo reduce Lhe lmpacL of uuCS aLLacks.