Dragon Slayer Consulting

Marc Staimer


Why The Era of General Purpose Storage Is Coming to an End

Emergence of Application Engineered Storage (AES)

WHITE PAPER

Why The Era Of General Purpose Storage Is Coming To An End
Marc Staimer, President & CDS of Dragon Slayer Consulting

Introduction
Evolution has taught us that species that fail to adapt and evolve over time become extinct. Evolution occurs so that a given biological species can survive the changes it must face each and every day. Evolution is biological adaptation. Nowhere has this been more clearly evident than in the emergence of superbugs: bacteria and viruses that have evolved to survive ever more sophisticated antibiotics or antivirals. Evolution is not limited to biology. It can be seen playing out in the ongoing progression of information technologies both big and small. IT is loaded with application and computing evolutionary examples, including:
- Applications running on physical servers evolving into applications running on virtual servers to overcome physical server sprawl consuming floor/rack space, power, cooling, adapter cards, network switches, cables, transceivers, conduit, and management cycles. Of course, that problem has morphed into virtual server sprawl that will require further evolution.
- Physical servers evolving into bigger, more powerful machines to adequately support ever more virtual machines. They have also adapted to the escalating problem of power/cooling consumption by becoming smaller, with CPUs that consume less power and cooling.
- High availability evolving from redundant hardware into sophisticated software at the application and hypervisor layer. Server hardware HA is expensive. Application and data HA is not, and makes hardware HA irrelevant.
- Application data processing evolving into both smaller and larger data sets, adapting to a broader swath of computational platforms (smartphones, tablets, massive scale-out clusters, and more) as well as the increasing need for actionable information on enormous amounts of unstructured data growing geometrically.
- Users' compute devices evolving from desktops to laptops to smartphones and now tablets, as each generation demands more mobility and freedom.
- Data processors evolving from single cores, where clock rates kept increasing, to multiple cores, where the number of cores keeps increasing to overcome clocking constraints as the need for more and more power accelerates.
This technological evolution has not been limited to applications and processing. On the contrary, no technology has been changing as rapidly as data storage systems. For example:
- DAS has evolved to SAN or NAS, even unified (SAN and NAS) and/or object storage, to make storage systems a bit broader in the protocols, applications, and systems concurrently supported.
- Data storage systems have morphed from single tiers or pools of storage to multiple performance tiers, including DRAM, NV-RAM, solid state flash drives (SSDs), high performance hard disk drives (HDDs), and low performance, high capacity drives, to better match application performance requirements to data value and cost.
- Data storage systems have evolved from being highly capacity-inefficient to highly efficient through the use of thin provisioning, deduplication, compression, and virtualization technologies. This is an adaptation to exponential data growth while budgets grow in zero to low single digits.
- Data storage systems have also evolved beyond simple data targets to tackle the issues of data protection, disaster tolerance, disaster recovery, and continuous data availability, providing data insurance as the data has become so valuable.
Fig 1. Superbug MRSA
Fig 2. VMs
Fig 3. Mobility
- Data storage system RAID has evolved from physical disks to virtual disk pools; from protecting data against the loss of a single disk to protecting it when 2, 3, or even more disks fail; and from slow rebuilds to incredibly fast rebuilds. All of it is in response to the increased risk of data loss from a drive or drives (hard disk drives and/or solid state drives) failing, and they do fail. In fact, historically and statistically, they fail in batches.
Evolutionary rapid adaptations are necessary for long-term survival. And just like biological evolution, technical evolution never stops. It is continuous. Once evolutionary adaptation stops, the species or technology dies. It dies because the environment is also always changing, placing relentless pressure toward adaptation or extinction.
This has never been truer with regard to external shared data storage systems than it is today, right now. Shared storage has historically attempted to be all things to all applications. As the number of applications supported on a general-purpose data storage system increases, so does the total available market. It is in the interest of a data storage vendor to support as many different applications as possible for greater potential sales. In other words, external shared SAN, NAS, or unified storage is commonly positioned as general-purpose storage (i.e., jack of all trades, master of none) to expand its usefulness. The problem is that applications have become more diverse than similar. External shared data storage system requirements are quickly moving well beyond performance. VMware vSphere, virtual data center technologies, Oracle databases plus the business applications that run on Oracle servers and storage, backup and replication software, and Microsoft applications such as Hyper-V, Exchange, SharePoint, and SQL Server are all demanding a lot more from their storage systems than raw performance, capacity, and data protection. They are demanding that their attached external shared data storage rise up to be a peer of the application and not simply a resource. They are demanding that each have intimate knowledge of the other.

Those storage systems that can step up to meet this latest evolutionary challenge will survive. Those that cannot will not.

Fig 4. Application Chaos in General-Purpose Storage
Fig 5. Application & Storage Peers

Table of Contents

Introduction
Escalating Application Storage Issues
  - Lack of "On-Demand" Automation
  - Fixed Predetermined Application-Data Storage Interaction
  - Knowledge, Skills, and Experience Shortage
  - Why These Problems Are Quietly Reaching Critical Mass
Typical Application Storage Workarounds
  - VMware Storage API Integration
  - Microsoft Windows and Hyper-V VSS (Volume Shadow Copy Service) Integration
  - Storage Auto-Tiering or Caching with Flash SSDs
  - Software Defined Storage (SDS)
  - Workarounds Conclusion
Application Engineered Storage (AES)
Oracle ZFS Storage Appliances - The First AES
  - Management
  - Partitioning
  - Hybrid Columnar Compression (HCC)
  - Database Aware Data Protection
  - Plus Engineered "Workarounds"
Summary and Conclusion

Escalating Application Storage Issues

Applications traditionally required external shared data storage systems to supply them with capacity, performance, and data protection. That is what most data storage systems do, and do reasonably well. Their big problem is limited automation and an inability to adapt in real time to dynamically changing requirements from the applications.
Lack of "On-Demand" Automation

The times, they are a-changing. Applications today are demanding much more than storage capacity, performance, and data protection provisioning. Application datasets have rapidly ballooned in size as the data universe continues to expand exponentially (2.8 ZB in 2012, expected to grow to 40 ZB by 2020, per IDC). To manage the increasing data tsunami, applications automatically consume more of everything, including processing, IO, bandwidth, storage capacity, and storage performance. This is a problem for data storage systems.
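Those IDC figures are worth a quick back-of-the-envelope check, because they define the pressure every storage architecture is under. The implied compound annual growth rate from the numbers above is:

    \mathrm{CAGR} = \left(\frac{40\ \mathrm{ZB}}{2.8\ \mathrm{ZB}}\right)^{1/(2020-2012)} - 1 \approx 14.29^{1/8} - 1 \approx 0.394

In other words, capacity demand compounding at roughly 39% per year while, as noted earlier, storage budgets grow in zero to low single digits.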
Storage is not customarily designed to allocate capacity and performance resources on demand. Resources are typically allocated manually, in advance. As those storage resources are consumed and more are required, the storage admin manually allocates more. It is not a dynamic, automated process for the vast majority of storage systems. Those allocation tasks are labor intensive, requiring scheduling and, more often than not, scheduled downtime. Scheduled downtime is a rare commodity in today's 24/7 global economy. Yet far too many storage vendors still assume that scheduled downtime, disruptive as it is to any business, is okay.
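To make the contrast concrete, the minimal sketch below shows what "on-demand" capacity automation means versus the manual pre-allocation just described. Everything here is hypothetical and illustrative (the Volume and Pool types, the autogrow function, and the 80%/1.5x policy values are invented for this example, not any vendor's implementation):

    # A minimal sketch of "on-demand" capacity automation (all names hypothetical).
    # Conventional storage leaves these steps to a storage admin, scheduled in
    # advance; an automated system runs this check continuously, with no downtime.
    from dataclasses import dataclass

    @dataclass
    class Volume:
        name: str
        size_bytes: int
        used_bytes: int

    @dataclass
    class Pool:
        free_bytes: int

    GROW_THRESHOLD = 0.80  # grow when a volume is 80% full
    GROW_FACTOR = 1.5      # grow capacity by 50% each time

    def autogrow(volume: Volume, pool: Pool) -> None:
        """Grow the volume from pool free space once utilization crosses the threshold."""
        if volume.used_bytes / volume.size_bytes >= GROW_THRESHOLD:
            delta = int(volume.size_bytes * (GROW_FACTOR - 1))
            if pool.free_bytes >= delta:
                volume.size_bytes += delta   # no ticket, no scheduling, no downtime
                pool.free_bytes -= delta
            else:
                print(f"ALERT: pool exhausted; {volume.name} needs {delta} more bytes")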
Fixed Predetermined Application-Data Storage Interaction

Applications do not generally see or directly control data storage. That is usually accomplished via the operating system, hypervisor, or file system, although there are exceptions such as relational databases. In all cases that relationship is fixed or predetermined, meaning that the storage does what it is told to do within a very narrow set of pre-configured parameters. It serves up capacity in the amount that has been allocated. It provides RAID-based data protection on pre-arranged parameters. It delivers performance based on what was set up. In other words, it is an inflexible relationship that can only be altered with admin intervention.

Consider that application performance has peaks and valleys. And yet data storage cannot, for the most part, read or anticipate those peaks and valleys. It knows the IO and/or throughput demands at any given moment in time and will respond to them based on the performance pre-sets and the other application demands being placed on the data storage system at that moment. There is no integration of application and storage, no cooperative processing or communication, no dynamic adaptation to unexpected application needs, and no flexibility.
Knowledge, Skills, and Experience Shortage

Application and hypervisor administrators have become more specialized and narrower in their scope and depth. Storage knowledge is viewed through the lens of the application or hypervisor and is nominal at best. These admins commonly lack both the basic storage knowledge and the experience to set up, configure, manage, and operate data storage optimally for their applications, servers, or virtual machines.

Storage admins, on the other hand, are generalists with a dearth of application and/or hypervisor knowledge. They know how to tweak their storage to get the best performance or utilization out of it, but not how to optimize it for every application, server, and VM that connects to that storage. They commonly lack specific application tuning knowledge, skills, and experience. Even when they do have them for a specific application, their cycles are way too limited to constantly tune the storage for optimum application performance.

Fig 6. Limited "On-Demand" Resources
Fig 7. Inadequate Flexibility
Fig 8. App Admins Don't Know Storage
Why These Problems Are Quietly Reaching Critical Mass

IT organizations discovered over the past decade that general-purpose, one-size-fits-all servers did not work for all applications. Some apps preferred Windows or Linux on the x86 CISC architecture, whereas others preferred a RISC architecture and UNIX. It made little sense and created a lot of heartburn trying to force-fit all applications into one server architecture. This led to a surge in application-specific server deployment. That application surge is accelerating with the mainstream adoption of server virtualization. Server virtualization has made it incredibly easy to spin up a VM, which in turn has led to out-of-control VM sprawl. VM sprawl greatly worsens the problems previously discussed. More applications mean more demands on the attached data storage systems. Data storage systems have limited capabilities in providing differentiated service to these application-server combinations, regardless of whether they are physical or virtual. And every combination has its own storage performance and functional requirements. Yet the vast majority of them are connected to general-purpose storage. This has the feel of a misalignment.

There are three reasons why the vast majority of data storage systems are general purpose. First, general-purpose storage systems are just easier to manage and require less application knowledge, skill, and experience. The second reason is inertia: it is how it has always been done, the "if it's not broke, don't fix it" philosophy. Unfortunately, it is broken, or soon will be, badly broken. And the third, as previously discussed, is market reach. General-purpose storage enables data storage systems to connect to more applications, creating the largest possible potential available market. That market reach looks good on paper to many IT managers and storage administrators. And it is why there has been a lemming-like trend toward unified storage (SAN, NAS, and now object storage in the same storage system). But it does not work nearly as well in practice. To poorly paraphrase a line from Tolkien's "Lord of the Rings", unified storage is "one data storage system to rule them all". It is also described as a jack-of-all-trades, master-of-none philosophy. Regrettably, general-purpose unified storage requires too many compromises in performance and management. It aims at the "good enough" heart of the bell curve for performance and resource requirements.

The general-purpose approach did not work well for servers and applications, which evolved into application-specific servers and virtual machines. The general-purpose approach does not work well for storage either, and it is time for storage to evolve. Those IT organizations that have not yet discovered these problems soon will. It is only a matter of time.
Typical Application Storage Workarounds

There are four common workarounds. These include:
- VMware Storage API integration
- Microsoft Windows and Hyper-V VSS integration
- Storage auto-tiering or caching with flash SSDs
- Software defined storage (SDS)
VMware Storage API Integration

VMware has many storage APIs (vSphere API for Data Protection, vSphere API for Array Integration, vSphere API for Storage Awareness, T10 compliance, array-based thin provisioning, hardware acceleration for NAS, enhanced hardware-assisted locking, and vSphere API for Multipathing). By adhering to these APIs, storage vendors give VMware vSphere administrators the ability to manage the attached shared data storage. In some cases they enhance specific functions, such as data protection with VADP leveraging vSphere functionality (vSphere snapshots).
However, none of these APIs actually addresses application and storage issues and problems. The APIs assume the vSphere administrator will be storage knowledgeable, skilled, and experienced. They do not make the storage application aware or application engineered; these APIs make vSphere storage aware. This is useful and important while still not solving the problem. Storage does not dynamically react to the applications in any way. The vSphere API for Data Protection (VADP) is a good example. VADP quiesces (pauses) a VM, takes a snapshot, and then restarts the VM. Unfortunately, VADP is not application aware. It does not flush the application's cache and complete the writes in the correct order. Snapshots of applications that require an orderly shutdown are crash consistent at best, not application consistent, and may be corrupted. VADP also takes one VM snapshot at a time, making it resource-intensive and relatively slow.

Another example of vSphere's minimal application awareness is its Storage IO Control (SIOC). SIOC is a brute force approach to meeting applications' dynamic on-demand performance requirements at the VMware cluster level. SIOC monitors the IO latency of each VM datastore in the cluster. When IO latency reaches a threshold (30 ms by default), the datastore is regarded as congested and SIOC intervenes to redistribute available resources based on VM prioritization. In other words, high priority VMs are given storage IO resources taken from lower priority VMs. SIOC does not manage the storage resources; it manages the vSphere access to those resources. It also does not take into account application demand spikes, nor does it have any inherent knowledge or awareness of application requirements. The unspoken premise is that when storage uses the VMware vSphere APIs, application storage issues and problems are auto-magically resolved. The reality is that this helps treat the symptoms. It does not cure the problem.
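The SIOC behavior described above can be made concrete with a small sketch. This is purely illustrative, not VMware's implementation; the 30 ms default comes from the text above, while the function, the share values, and the IO-slot model are hypothetical:

    # Illustrative sketch of SIOC-style congestion control (not VMware code).
    # When datastore latency crosses the threshold, device-queue slots are
    # redistributed in proportion to per-VM shares.
    CONGESTION_THRESHOLD_MS = 30.0  # vSphere's default latency threshold

    def redistribute_io(datastore_latency_ms: float,
                        vm_shares: dict[str, int],
                        total_io_slots: int) -> dict[str, int]:
        """Return per-VM IO slot allocations for one scheduling interval."""
        if datastore_latency_ms < CONGESTION_THRESHOLD_MS:
            # Not congested: every VM may use the full device queue.
            return {vm: total_io_slots for vm in vm_shares}
        # Congested: carve up the queue by share ratio, throttling low-priority VMs.
        total_shares = sum(vm_shares.values())
        return {vm: max(1, total_io_slots * shares // total_shares)
                for vm, shares in vm_shares.items()}

    # Example: a latency spike to 42 ms; the high-share VM keeps most of the queue.
    print(redistribute_io(42.0, {"oracle-db": 2000, "test-vm": 500}, total_io_slots=64))

Notice that the storage system never participates in this decision; the hypervisor merely rations its own access to the device queue.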
Microsoft Windows and Hyper-V VSS (Volume Shadow Copy Service) Integration

Microsoft VSS integration is a more limited attempt to solve application and storage issues. Somewhat analogous to VMware's VADP, VSS pauses VMs and applications, takes a snapshot, and then resumes their operations. One key difference is that applications that are VSS integrated (Exchange, Oracle, SharePoint, SQL Server) are properly quiesced, with their caches flushed and writes properly completed in the correct order.

Once again, this is a partial solution that deals with only one aspect of applications' issues with storage. It definitely improves the dynamics between applications and storage, but only in a limited way.
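The ordering that VSS enforces, and that VADP alone does not, can be sketched as follows. The AppWriter interface and method names are hypothetical stand-ins for the much richer VSS writer/requestor protocol:

    # Sketch of the ordering that makes a VSS-style snapshot application
    # consistent (hypothetical interfaces, not the actual VSS API).

    class AppWriter:
        """Stand-in for a VSS-aware application, e.g. a database."""
        def freeze(self): ...  # quiesce: stop new writes, finish in-flight writes in order
        def flush(self): ...   # flush caches and logs to stable storage
        def thaw(self): ...    # resume normal operation

    def consistent_snapshot(writers: list[AppWriter], take_snapshot) -> None:
        frozen = []
        try:
            for w in writers:      # 1. quiesce every participating application
                w.freeze()
                frozen.append(w)
            for w in frozen:       # 2. flush caches so the on-disk image is complete
                w.flush()
            take_snapshot()        # 3. point-in-time copy of a now-consistent image
        finally:
            for w in frozen:       # 4. always resume, even if the snapshot failed
                w.thaw()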
Storage Auto-Tiering or Caching with Flash SSDs

Flash solid-state drives (SSDs) have proven to be a huge performance boost for applications. They provide up to 1,000 times the performance of spinning hard disk drives (HDDs). They do so at a price: SSDs cost significantly more than HDDs. This is why there are hybrid storage systems that combine the performance of SSDs with the low cost of HDDs. These systems use either storage auto-tiering or caching to make the most efficient use of those SSDs. Storage auto-tiering points the most mission critical, high performance applications at the SSD tier and then, based on policies, moves designated data to lower performing, lower cost HDD tiers. Policies can cover time since last access, frequency of access, time since creation, and more.

The problem with storage auto-tiering on flash SSDs is that it moves data between tiers based on historical trends, not real-time status. Every time data is moved between tiers, it consumes CPU cycles that cannot then be used for application IO or throughput. Storage auto-tiering decreases IO performance quite noticeably when there is constant movement, or thrashing, of data between tiers. And mixing SSDs and HDDs guarantees constant auto-tiering data movement. That thrashing can shorten the expensive flash SSDs' wear life, which is especially noticeable on multi-level cell (MLC) flash.
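A policy-driven placement decision might look like the sketch below; the tier names, ages, and access-count thresholds are invented for illustration. The limitation called out above is visible in the code: placement is computed from access history, so it always trails real-time application demand:

    # Sketch of policy-based auto-tiering placement (hypothetical policy values).
    import time

    POLICIES = [  # evaluated in order; first match wins
        {"tier": "ssd",      "max_age_s": 24 * 3600,     "min_accesses": 10},
        {"tier": "fast_hdd", "max_age_s": 7 * 24 * 3600, "min_accesses": 2},
    ]

    def place(last_access_ts: float, access_count: int) -> str:
        """Pick a tier for a data extent from its (historical) access statistics."""
        age = time.time() - last_access_ts
        for p in POLICIES:
            if age <= p["max_age_s"] and access_count >= p["min_accesses"]:
                return p["tier"]
        return "capacity_hdd"  # cold data sinks to the cheapest tier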
Flash SSD caching is the other, more common implementation. It is primarily write-through caching (a.k.a. read caching). Data is only put into the cache if it passes a specified policy threshold that registers it as hot data. Flash SSD caching is more popular than flash SSD auto-tiering because it is simpler, moves a lot less data, uses far fewer CPU cycles, and, most importantly, reacts in near real time to applications demanding more read IO performance. The biggest problem with flash SSD caching is the size of the cache. Too small a cache means not all of the "hot" data is placed in cache, leading to increased cache misses. Cache misses equal reduced performance.
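Both the admission threshold and the cost of an undersized cache show up in a toy sketch like the following (the threshold, the tiny capacity, and the LRU policy are illustrative choices, not any vendor's design):

    # Sketch of a read (write-through) flash cache with a hot-data admission
    # policy: a block is promoted only after it has been read several times.
    from collections import Counter, OrderedDict

    ADMIT_AFTER_READS = 3        # threshold that registers a block as "hot"
    CACHE_CAPACITY_BLOCKS = 4    # deliberately tiny to show capacity misses

    read_counts = Counter()
    cache: OrderedDict[int, bytes] = OrderedDict()  # LRU order: oldest first

    def read_block(lba: int, read_from_hdd) -> bytes:
        if lba in cache:                            # cache hit: flash-speed read
            cache.move_to_end(lba)
            return cache[lba]
        data = read_from_hdd(lba)                   # cache miss: HDD-speed read
        read_counts[lba] += 1
        if read_counts[lba] >= ADMIT_AFTER_READS:   # hot enough to admit
            if len(cache) >= CACHE_CAPACITY_BLOCKS:
                cache.popitem(last=False)           # evict LRU; a small cache evicts hot data
            cache[lba] = data
        return data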
Both of these methods aim at solving application storage performance issues. And they do, to a significant extent, even though they are brute force approaches. They may not have any inherent application awareness, but they try to make up for that by throwing massive amounts of performance at the problem. Higher performance is always good; however, it ultimately delivers only a temporary Band-Aid solution with more speed. It merely delays the problem from becoming acute, because most implementations do not resolve any of the aforementioned underlying issues, and there is a limit to the amount of performance any storage system can throw at the problem. In addition, these workarounds do not address the storage expertise requirements.
Software Defined Storage (SDS)

Software defined storage (SDS) is the latest hyped storage trend creating buzz. It abstracts the storage image and services from the physical storage. SDS provides policy-driven capacity provisioning, performance management, and transparent mapping between VM datastores and large volumes or file stores.

One of its ballyhooed functions is guaranteed SLAs. The guarantee is for storage performance, based on volume performance that is logically tied to an application. It is not application aware or knowledgeable; it assumes the storage admin has that expertise.

SDS could just as easily be described as the latest generation of storage virtualization. And just like storage virtualization, it is storage-centric, not application aware or application-centric. It does mask a lot of the storage expertise requirements via automation. However, it does not currently react to unexpected demands or to specific application requirements, because it is not application aware.
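As a sketch, SDS-style policy-driven provisioning amounts to translating a named service level into a volume specification. The service levels and values below are hypothetical:

    # Sketch of SDS-style policy-driven provisioning (all names hypothetical).
    SERVICE_LEVELS = {
        "gold":   {"min_iops": 20_000, "max_latency_ms": 1.0, "protection": "raid10"},
        "silver": {"min_iops": 5_000,  "max_latency_ms": 5.0, "protection": "raid6"},
    }

    def provision_volume(name: str, size_gb: int, service_level: str) -> dict:
        """Translate a policy into a concrete volume spec on some backend."""
        policy = SERVICE_LEVELS[service_level]
        return {"name": name, "size_gb": size_gb, **policy}

    print(provision_volume("erp-datastore", 2048, "gold"))

The admin still has to know the application well enough to choose "gold", and nothing in the policy reacts to what the application actually does, which is exactly the limitation described above.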
Workarounds Conclusion

Each of these workarounds has some value in ameliorating the application-storage problems. All of them address symptoms of the problem. None of them really addresses the underlying problem, which is the lack of application knowledge and integration. It is difficult to react to applications' variable demands without knowing, and integrating with, the applications.
Application Engineered Storage (AES)

General-purpose storage does not address this new era of application centricity. The workarounds treat symptoms, but not the underlying root cause of the problems. What is required is storage that is engineered together with the application: application engineered storage.
AES works with the application in a cooperative manner, taking on the processing load for functions that belong in the storage system, where they are more efficiently handled. This in turn optimizes the compute platform resources for compute processing and the storage resources for IO and throughput. In several cases this has a synergistic effect on performance, where the application performance gain is greater than the system IO alone would suggest. This is because application engineered storage attacks the root cause of application storage performance issues instead of merely treating the symptoms.

Other advantages of dealing with the root cause are also significant. For example: as the application scales, so does the AES; applications automatically take advantage of new technologies as they are introduced into the AES; and application expertise means AES expertise, since the AES appears as an extension of the application.
There are several essential underlying premises to AES. Effectual AES, as a prerequisite, must be able to:
- Take advantage of application information to reduce read and write latencies.
- Accelerate data transfers between the application and the AES to deliver the right information in the right place at the right time.
- Off-load low-level processing from the application server.
Implementing these requirements is nontrivial. It requires both storage system processing power and a software architecture capable of concurrently handling all storage functions in addition to the application integration functions. If it does not have both, it cannot execute application off-load functions fast enough to make a difference. This means the AES needs quite a bit of computing power as well as a highly scalable OS to competently support hundreds or thousands of application-specific IO requests in parallel.

Fig 11. SDS
Fig 12. AES
Delivering AES calls for cross-functional skills that most storage vendors simply do not have.
Oracle ZFS Storage Appliances - The First AES

Because Oracle provides the world's number one relational database, the applications that run on that RDBMS, storage systems, servers, the Zettabyte File System (ZFS), and the Solaris OS, it is distinctively suited to tackle the application-storage problem head-on. Oracle has engineered the industry's first application engineered storage systems with its ZFS Storage Appliances. ZFS Storage Appliances incorporate Oracle's latest generations of compute processing with extensive memory (up to terabytes), SSDs, HDDs, the Solaris OS, sophisticated storage functions, and ZFS to generate exceptional compute as well as storage performance. A ZFS Storage Appliance can handle over a hundred threads processing many thousands of IO requests in parallel, versus conventional storage systems that are limited to as few as 8 processing threads and gigabytes of memory. It is this unique architecture that enables ZFS Storage Appliances to directly integrate with Oracle databases and applications. The results are notably higher efficiencies, flexible on-demand interactions, and greater performance, at much lower costs.

Oracle's AES appears to be impressive, but the proof is in how it is executed.
Management

ZFS Storage Appliances are designed to eliminate three layers of management between the Oracle Database, the Solaris operating system, and the storage itself. This increased management automation places more of the expertise into the storage system instead of the administrator. It removes dozens to hundreds of redundant tasks, saving administrators vast amounts of time. The Edison Group management cost comparison study empirically verified this, with findings showing that ZFS Storage Appliances are generally 36% faster in administrative tasks, 36% faster in storage provisioning, and 44% faster in monitoring and troubleshooting issues than general purpose data storage systems such as NetApp's FAS filers. These savings convert into a full time equivalent (FTE) operating cost savings of approximately $27,000 per year.
Partitioning

ZFS Storage Appliances' tight integration with the Oracle Database increases performance and efficiency by at least 3 to 5x over general-purpose data storage systems (such as NetApp FAS, HP 3PAR, EMC VNX, Dell Compellent, and others) that do not offer deep Oracle Database application integration. One of the ways ZFS Storage Appliances do this is via database data mapping to multiple storage tiers within the same ZFS Storage Appliance. Large tables can be partitioned easily on a partition key, frequently range-partitioned on a key that represents a time component, with current "active" data located on the higher performance storage tier. As that data ages, becoming less active or "passive", it is automatically moved via the partitioning to the lower cost, lower performing storage tier. This built-in online data archiving is always active and available to the application. The data does not have to be recovered or migrated to or from another storage system, resulting in quantifiably measurable improvements in application data access. Any access to the active data that is based on the same partition key (such as sales date) will automatically benefit from the partition pruning performed by the database. In other words, as database tables grow in size, active data performance will not degrade.

Fig 13. Oracle ZFS Storage Appliances
Fig 14. Intuitive Management
Fig 15. Partitioning

The ZFS Storage Appliance with Oracle Database integration removes the requirement to regularly archive or purge data, which is typically needed to maintain the required performance of database applications. The database archive is always online, available at any time through the application, and it is maintained throughout database and application upgrades, unlike data in offline archiving software.
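The mechanism is easy to picture with a small model. The sketch below is plain Python rather than Oracle DDL, with invented partition names, dates, and tier labels; it shows why a query constrained on the partition key only touches the recent partitions that live on the fast tier:

    # Illustration (not Oracle syntax) of range partitioning and partition pruning.
    import datetime as dt

    # Each partition: (range_start, range_end, storage_tier, partition_name)
    PARTITIONS = [
        (dt.date(2011, 1, 1), dt.date(2012, 1, 1), "capacity_hdd", "sales_2011"),
        (dt.date(2012, 1, 1), dt.date(2013, 1, 1), "capacity_hdd", "sales_2012"),
        (dt.date(2013, 1, 1), dt.date(2014, 1, 1), "ssd",          "sales_2013"),
    ]

    def prune(query_from: dt.date, query_to: dt.date):
        """Return only the partitions a query on the partition key must scan."""
        return [p for p in PARTITIONS if p[0] < query_to and p[1] > query_from]

    # A query on current data never touches the archive partitions on slow disk:
    print(prune(dt.date(2013, 6, 1), dt.date(2013, 7, 1)))  # -> only sales_2013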
Contrasting the general-purpose storage user experience, ZFS Storage Appliances require much less of the following:
- administrator manual, labor-intensive tasks;
- high-end, higher-performance, more expensive storage systems;
- storage infrastructure (cables, transceivers, conduit, switches, power, cooling, rack space, and floor space);
- licensing for backup and archiving software.
ZFS Storage Appliances also enable a lot more of the data to be kept online at all times for much longer periods of time. This greatly improves performance of the applications that depend on and access those large Oracle databases.
Hybrid Columnar Compression (HCC)

Even greater efficiencies and performance gains can be seen in the integration of ZFS Storage Appliances with Oracle Database's Hybrid Columnar Compression (HCC). HCC is available only on Oracle ZFS Storage Appliances and provides as much as 50x data compression. HCC demonstrably reduces storage capacity requirements for an Oracle Database 3 to 5x more than any other vendor's best data reduction option, sharply decreasing storage footprint and the associated acquisition and operational costs. More importantly, because HCC is a cooperative, collaborative process between the Oracle Database and the ZFS Storage Appliance, compressed data does not have to be rehydrated when moving between the two. In addition, compressed data can be accessed directly, so there are none of the latency/response-time/performance degradation ramifications experienced with other data reduction technologies. In fact, 3x to 8x faster queries have been demonstrated in customer applications.

Fig 16. HCC
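The "hybrid columnar" idea itself can be sketched conceptually. The code below is an illustration using zlib on Python tuples, not Oracle's on-disk format: rows are batched into compression units, each unit is pivoted to column-major order, and each column is compressed separately, so repetitive column values compress extremely well:

    # Conceptual sketch of a hybrid columnar layout (not Oracle's format).
    import zlib
    from itertools import islice

    def compression_units(rows: list, unit_size: int = 1000):
        """Batch rows into units, pivot each unit to columns, compress per column."""
        it = iter(rows)
        while unit := list(islice(it, unit_size)):
            columns = list(zip(*unit))  # row-major -> column-major pivot
            yield [zlib.compress(repr(col).encode()) for col in columns]

    # Repetitive column values (dates, regions) shrink dramatically when stored
    # together, which is the intuition behind HCC's large compression ratios.
    rows = [("2013-06-01", "WEST", 19.99)] * 5000
    units = list(compression_units(rows))
    print(len(units), [len(c) for c in units[0]])  # 5 units, tiny per-column sizes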
Database Aware Data Protection

ZFS Storage Appliance integration with Oracle databases extends to data protection as well. Snapshots are application aware and make sure that the Oracle Database is properly quiesced (cache flushed and all writes completed in their proper order) before the ZFS Storage Appliance takes the snap.

Oracle also provides tightly engineered Oracle database backup on ZFS Storage Appliances. It ties directly into RMAN and can move backed-up data on disk to Oracle tape/tape libraries. None of this requires any additional backup or replication software. Nor does it require any database-performance-degrading agent software, or agent software of any kind. All backed-up data is deduplicated and compressed as well. Oracle database backup performance is exceptional at approximately 30 TB/hr. What is more impressive is the restore performance of approximately 10 TB/hr.

Oracle has further engineered the ZFS Storage Appliances to work with Exadata, connecting them via QDR (40 Gbps) InfiniBand. InfiniBand has built-in remote direct memory access (RDMA), which enables higher performance via much lower latencies and much reduced IO processing. No other storage system today integrates with Oracle RMAN and InfiniBand, making the ZFS Storage Appliance the most efficient and highest performing data protection system currently available.
Plus Engineered "Workarounds"

Not to be outdone, Oracle ZFS Storage Appliances also provide storage auto-tiering with extensive DRAM, SSDs, and various classes of capacity/performance high-density disks. There is value in making sure the latest and greatest storage technology is part of the solution. But it cannot solve the application storage issues by itself. That requires application-engineered storage.

Summary and Conclusion

Applications and servers have evolved. It is rare today to find more than one application on a physical server or in a virtual machine image; basically, it is one application per server. Storage systems have lagged in this new world of application machine proliferation. Most storage systems, including those from all major storage system suppliers in Oracle environments, remain blissfully application unaware. They are not able to cooperate or collaborate with applications and, therefore, cannot respond in real time to changing application dynamics. Nor can they improve application efficiencies or lower costs. They can and do attempt to solve the application storage problems with workarounds. These workarounds' help is analogous to the way ibuprofen reduces fevers: they treat the symptoms, but not the problem.

To solve these application storage problems requires application-engineered storage (AES). Oracle's ZFS Storage Appliances are the first instances of this evolutionary storage category, specifically architected to work together with business-critical enterprise applications. They will most likely not be the last.

About the author: Marc Staimer is the founder, senior analyst, and CDS of Dragon Slayer Consulting in Beaverton, OR. The consulting practice of 13 years has focused in the areas of strategic planning, product development, and market development. With over 33 years of marketing, sales, and business experience in infrastructure, storage, servers, software, databases, and virtualization, he is considered one of the industry's leading experts. Marc can be reached at marcstaimer@mac.com.
