Data Center Physical Design: Student Guide
NOTE: Please note that this Student Guide has been developed from an audio narration; therefore, it uses conversational English. The purpose of this transcript is to help you follow the online presentation, and you may need to refer to the presentation as you read.
Slide 1
2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 1
Slide 2
Juniper Networks
Data Center Design
Best Practices
2016 Juniper Networks, Inc. All rights reserved. | www.juniper.net | Proprietary and Confidential
Slide 3
Navigation
Throughout this module, you will find slides with valuable detailed information. You can stop any slide with the Pause
button to study the details. You can also read the notes by using the Notes tab. You can click the Feedback link at any
time to submit suggestions or corrections directly to the Juniper Networks eLearning team.
Slide 4
Course Objectives
Slide 5
This course consists of two sections. The two main sections are as follows:
Physical Design Considerations; and
Evolving Optical Requirements.
Slide 6
Juniper Networks
Data Center Design
Best Practices
This section will compare the various equipment rack layouts used in a data center along with their advantages and
disadvantages. The various types of cabling options will also be described.
Slide 7
Section Objectives
Slide 8
Edge
Access and Core
Aggregation
There are seven specific domains in the network: Access and Aggregation, Edge, Core, Data Center, WAN, Campus and Branch, and Consumer and Business Device.
In this course we will look at the challenges, requirements, and drivers for today's data center, as well as the Juniper Networks recommended solutions and designs for meeting those requirements and overcoming the challenges of data center deployments.
Slide 9
Physical Layout
Multiple physical divisions:
Referred to as segments,
zones, cells, or pods
Physical Considerations:
Placement of equipment
Cabling requirements and restrictions
Power and cooling requirements
Layout options:
Top of rack
Bottom of rack
Middle of row
End of row
Physical Layout
One of the first steps in data center design is planning the physical layout of the data center. Multiple physical
divisions exist within the data center that are usually referred to as segments, zones, cells, or pods. Each segment
consists of multiple rows of racks containing equipment that provides computing resources, data storage, networking,
and other services.
Physical considerations for the data center include placement of equipment, cabling requirements and restrictions,
and power and cooling requirements. Once you determine the appropriate physical layout, you can replicate the
design across all segments within the data center or in multiple data centers. Using a modular design approach
improves the scalability of the deployment while reducing complexity and easing data center operations.
The physical layout of networking devices in the data center must balance the need for efficiency in equipment
deployment with restrictions associated with cable lengths and other physical considerations. Pros and cons must be
considered between deployments in which network devices are consolidated in a single rack versus deployments in
which devices are distributed across multiple racks. Adopting an efficient solution at the rack and row levels ensures
efficiency of the overall design because racks and rows are replicated throughout the data center.
Slide 10
Top of Rack and Bottom of Rack Deployment (computing and storage devices with switches in each rack)
Pros:
Minimizes cable length
Copper 10-Gigabit Ethernet cable lengths use less power
Can provide switching redundancy on a per rack basis
Cabling runs can be simpler
Cons:
Legacy devices must be managed separately
More complicated topology and management
Uplinks are required for connection between the servers in adjacent racks, increasing latency
Typically, more devices are needed (Note that Virtual Chassis addresses these issues)
In a ToR and BoR deployment, network devices are deployed in each server rack. A single device (or pair of devices
for redundancy at the device level) provides switching for all of the servers in the same rack. To allow sufficient space
for servers, the general recommendation is that devices in the rack should be limited to a 1U or 2U form factor.
A ToR or BoR layout places high-performance devices within the server rack in a row of servers in the data center.
With devices in close proximity, cable run lengths are minimized. Cable lengths can be short enough to accommodate
1-Gigabit Ethernet, 10-Gigabit Ethernet, and future 40-Gigabit Ethernet connections. Potential also exists for
significant power savings for 10-Gigabit Ethernet connections when the cable lengths are short enough to allow the
use of copper, which operates at one-third the power of longer-run fiber cables. Note also that deploying switches in a
middle-of-rack deployment might offer the additional benefits of even shorter cable runs and smaller cable bundles.
With ToR and BoR layouts, you can easily provide switching redundancy on a per rack basis. However, each legacy device must be managed individually, which can complicate operations and add expense, because multiple discrete 24- or 48-port devices are required to meet connectivity needs. Both top of rack and bottom of rack deployments provide the same advantages with respect to cabling and switching redundancy. Cabling run lengths are minimized in this deployment and are simpler than MoR or EoR configurations. ToR deployments provide more convenient access to the network devices, while BoR deployments can be more efficient from an airflow and power perspective, because cool air from under-floor heating, ventilation, and air conditioning (HVAC) systems reaches the network devices in the rack before continuing to flow upward.
ToR and BoR deployments do have some disadvantages, however. Having many networking devices in a single row
complicates topology and management. Because the devices serve only the servers in a single rack, uplinks are
required for connection between the servers in adjacent racks, and the resulting increase in latency can affect overall
performance. Agility is limited because modest increases in server deployment must be matched by the addition of
new network devices. Finally, because each device manages only a small number of servers, more devices are
typically required than would otherwise be needed to support the server population. Juniper has developed a solution
that delivers the significant benefits of ToR and BoR deployments while addressing the previously mentioned issues.
The solution, Virtual Chassis, is described in a later chapter.
Slide 11
End of Row
End of Row Deployment (computing and storage devices with a high-density switch)
Pros:
A single access tier for an entire row of servers
Requires fewer uplinks
Simplifies network topology
Best for 1-Gigabit Ethernet deployments with relatively few servers
Cons:
Longer cable runs can exceed the length limits for 10-Gigabit Ethernet and 40-Gigabit Ethernet
Port utilization is not always optimal with chassis switches
Most chassis consume a great deal of power, cooling, and space, even when not fully populated
End of Row
If the physical cable layout does not support ToR or BoR deployment, or if the customer prefers a large chassis-based
solution, the other options would be an EOR or MOR deployment, where network switches are deployed in a
dedicated rack in the row.
In the EoR configuration, which is common in existing data centers with existing cabling, high-density switches are
placed at the end of a row of servers, providing a consolidated location for the networking equipment to support all of
the servers in the row. EoR configurations can support larger form factor devices than ToR and BoR rack
configurations, so you end up with a single access tier switch to manage an entire row of servers. EoR layouts also require fewer uplinks and simplify the network topology: inter-rack traffic is switched locally. Because EoR deployments require cabling over longer distances than ToR and BoR configurations, they are best for deployments that involve 1-Gigabit Ethernet connections and relatively few servers.
Disadvantages of the EoR layout include longer cable runs, which can exceed the length limits for 10-Gigabit Ethernet and 40-Gigabit Ethernet connections, so careful planning is required to accommodate high-speed network
connectivity. Device port utilization is not always optimal with traditional chassis-based devices, and most chassis-
based devices consume a great deal of power and cooling, even when not fully configured or utilized. In addition,
these large chassis-based devices can take up a great deal of valuable data center rack space.
Slide 12
Middle of Row
Middle of Row Deployment (high-density switch)
Middle of Row
An MoR deployment is similar to an EoR deployment, except that the devices are deployed in the middle of the row
instead of at the end. The MoR configuration provides some advantages over an EoR deployment, such as the ability
to reduce cable lengths to support 10-Gigabit Ethernet and 40-Gigabit Ethernet server connections. High-density,
large form-factor devices are supported, fewer uplinks are required in comparison with ToR and BoR deployments,
and a simplified network topology can be adopted.
You can configure an MoR layout so that devices with cabling limitations are installed in the racks that are closest to
the network device rack. While the MoR layout is not as flexible as a ToR or BoR deployment, the MoR layout
supports greater scalability and agility than the EoR deployment.
Although it minimizes the cable length disadvantage associated with EoR deployments, the MoR deployment still has the same port utilization, power, cooling, and rack space concerns associated with an EoR deployment.
Slide 13
SFP: 100BASE-X, 1000BASE-X
SFP+: 10GBASE-X, Direct Attach Copper
QSFP: 40GBASE-X, LC and MTP, Direct Attach Copper
CFP: 100GBASE-X, high-speed connectivity between core devices and data centers
Juniper supports a wide range of optics in our data center platforms from the listed form factors. These form factors provide low power consumption and high density to ensure greater port density in fewer rack units.
Before we discuss data center cabling, let's first take a look at the various optic form factors.
Juniper's data center platforms support a variety of form factors: SFP, SFP+, QSFP, and CFP. These form factors provide low power consumption and high density to ensure greater port density in fewer rack units. Depending on the deployment scenario, Juniper's data center platforms support different pluggable optic modules that can be selected based on distance, form factor, and wavelength.
Slide 14
Juniper data center platforms support a range of 1GbE SFP optics:
1GE-LX, SMF, 10km
1GE-SX, MMF, 500m
1GE-T, Cat5e, 100m
SFP transceivers provide support for 1-Gigabit Ethernet fiber-optic or copper cables. Juniper data center platforms
support a range of 1-Gigabit Ethernet SFP transceiver typessingle-mode fiber-optic (SMF), multimode fiber-optic
(MMF), and category 5 enhanced (Cat5e) copper.
Slide 15
Juniper utilizes SFP+ to provide dense pluggable optic support:
10GBASE-ZR, SMF, 80km
10GBASE-ER, SMF, 40km
10GBASE-LR, SMF, 10km
10GBASE-SR, MMF, 300m
10GBASE-USR, MMF, 100m
The SFP+ transceiver is an enhanced SFP transceiver that supports data rates up to 10 Gbps over fiber-optic or copper interfaces. Juniper utilizes SFP+ to provide dense pluggable optic support for SMF and MMF 10-Gigabit Ethernet interfaces.
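As a rough sketch of how the reach figures above drive optic selection, the following helper picks the shortest-reach SFP+ optic that covers a given cable run. The distances come from the slide; `pick_optic` itself is a hypothetical illustration, not a Juniper tool:

```python
# Reach figures from the slide, as (model, fiber type, reach in meters).
SFP_PLUS_OPTICS = [
    ("10GBASE-USR", "MMF", 100),
    ("10GBASE-SR",  "MMF", 300),
    ("10GBASE-LR",  "SMF", 10_000),
    ("10GBASE-ER",  "SMF", 40_000),
    ("10GBASE-ZR",  "SMF", 80_000),
]

def pick_optic(distance_m: int) -> str:
    """Return the shortest-reach SFP+ optic that covers the run."""
    for name, fiber, reach in SFP_PLUS_OPTICS:
        if distance_m <= reach:
            return f"{name} ({fiber})"
    raise ValueError("no listed SFP+ optic reaches that far")

print(pick_optic(250))     # → 10GBASE-SR (MMF)
print(pick_optic(15_000))  # → 10GBASE-ER (SMF)
```

In practice the choice also depends on the installed fiber type and cost, but the reach ordering above is the starting point.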
Slide 16
QSFP transceivers are quad (that is, four-channel) transceivers that provide support for fiber-optic or copper cables.
Juniper utilizes QSFP optics for 40-Gigabit Ethernet interfaces, or 10-Gigabit Ethernet interfaces when using a
breakout cable. QSFP transceivers are hot-insertable and hot-removable.
QSFP+ and QSFP28 are variations of QSFP that allow for higher data rates.
Slide 17
CFP transceivers are 100 Gbps transceivers that provide support for fiber-optic cables with built-in clock recovery circuits. Juniper utilizes CFP transceivers to provide 100-Gigabit Ethernet connectivity. The C stands for the Roman numeral C, which represents the number 100, since the CFP was primarily designed for 100-Gigabit Ethernet use.
Slide 18
Cable installation is a major cost in data centers due to the labor involved in pulling cables through conduits and cable
trays and, to varying degrees, the price of the cabling itself. Organizations install different types of cabling in different
parts of the data center based on factors such as the equipment being connected, the bandwidth required by a
particular device or link, and the distances between the connected devices. Organizations often try to install sufficient
cabling to accommodate future expansion. However, any major change to a data center, such as upgrading to higher-performance servers or moving to a higher-speed core, can result in the need to run new cabling.
Cabling runs basically everywhere in the data center, both within tiers and between them. Cabling runs within racks to
connect servers and other appliances to each other and to their networks, between racks and the access switches,
between the switches in the access tier, between access switches and switches in the aggregation or core tiers,
between devices in the core, and between core devices and edge equipment housed in the telecom room.
The table on the slide summarizes data center cabling infrastructure, showing the data center network tiers, the
devices connected within them (for example, WAN routers, storage area network [SAN] devices or network attached
storage [NAS] devices) and the type of cable typically used.
Slide 19
40GBASE-LR4: SMF, 10 km
100GBASE-LR4: SMF, 10 km
100GBASE-ER4: SMF, 40 km
Several trends are driving the need for higher bandwidth throughout the data center. For example, dual-port 10-
Gigabit Ethernet network interface cards (NICs) for servers are getting cheaper, so they are being deployed more
often. The use of 10-Gigabit Ethernet in the equipment distribution area is driving the need for 40-Gbps to 100-Gbps
links in the access, aggregation, and core tiers. To date, network equipment vendors have been delivering speeds of
40-Gbps and 100-Gbps by using variants of a technique called wavelength-division multiplexing (WDM). For example,
using WDM, vendors create a link that is essentially four 10-Gbps signals combined onto one optical medium.
Similarly, 100-Gbps links can be composed of four 25-Gbps or ten 10-Gbps channels.
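The lane arithmetic described above can be sketched as follows. This is a minimal illustration of how the aggregate speeds are built from channels; `lane_options` and `totals` are hypothetical names, not part of any standard or product:

```python
# Lane combinations described above, as (lanes, Gbps per lane):
# a 40-Gbps link is four 10-Gbps signals; a 100-Gbps link is
# four 25-Gbps or ten 10-Gbps channels combined via WDM.
lane_options = {
    "40GbE": [(4, 10)],
    "100GbE": [(4, 25), (10, 10)],
}

def totals(speed: str) -> list:
    """Aggregate bandwidth for each lane combination of a link speed."""
    return [lanes * per_lane for lanes, per_lane in lane_options[speed]]

print(totals("40GbE"))   # → [40]
print(totals("100GbE"))  # → [100, 100]
```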
In 2007, the Institute of Electrical and Electronics Engineers (IEEE) began the process of defining standards for 40-
Gigabit Ethernet and 100-Gigabit Ethernet communications, which were ratified in June 2010. These 40-Gigabit
Ethernet and 100-Gigabit Ethernet standards encompass a number of different physical layer specifications for
operation over single mode fiber (SMF), OM3 multi-mode fiber (MMF), copper cable assembly, and equipment
backplanes (see the table on the slide for more details). To achieve these high Ethernet speeds, the IEEE has
specified the use of ribbon cable, which means that organizations need to pull new cable in some or all parts of the
data center, depending on where 40-Gigabit Ethernet or 100-Gigabit Ethernet is needed.
Slide 20
The table on this slide gives you an overview of data center fiber cabling options. Coarse wavelength division
multiplexing (CWDM) and parallel optics SMF and MMF transceiver types are compared, showing the associated
costs and distance limitations.
Slide 21
The illustration on the slide shows the 10-Gigabit Ethernet, 40-Gigabit Ethernet, and 100-Gigabit Ethernet transceiver
and fiber cable types that are used in the data center. With advances in technology the data center is moving beyond
10-Gigabit Ethernet using MMF or SMF with small form-factor pluggable plus (SFP+) or 10-Gigabit small form-factor
pluggable (XFP) transceivers. The advent of 40-Gigabit Ethernet and 100-Gigabit Ethernet speeds has introduced
new connectivity options.
Mechanical transfer pull-off (MTP) is a special type of fiber-optic connector made by a company named US Conec. MTP is an improvement on the original multi-fiber push-on (MPO) connector designed by a company named NTT. The MTP connector is designed to terminate several fiber strands (up to 24) in a single ferrule. MTP connections are held in place by a push-on/pull-off fastener, and can also be identified by a pair of metal guide pins that project from the front of the connector.
Multi-mode transceiver types for 40- and 100-Gigabit Ethernet MTP connections include quad small form-factor pluggable plus (QSFP+) and CXP. CXP was designed for data centers where high-density 100-Gigabit connections will be needed in the future (the C stands for the Roman numeral for 100). 40- and 100-Gigabit Ethernet single-mode fiber connections use C form-factor pluggable (CFP) transceivers with LC connection fiber pairs.
Slide 22
Server Cabling
Table columns: Technology, Cable Type, Maximum Distance (meters), Power (per side, in watts), Latency (milliseconds), Bit Error Rate
Server Cabling
The slide illustrates that 10GBase-T is not acceptable, at this time, for converged (that is, lossless Ethernet) networks. Vendors in the industry are pushing for the Fibre Channel over Ethernet (FCoE) Data Center Bridging (DCB) standards body to approve 10GBase-T; however, it is currently not supported.
The bit error rate (BER) of 10^-15 is important, because that is the specification for Fibre Channel SANs.
Slide 23
Cabling is a high-cost item because it is labor intensive. To maximize their cabling dollar, the customer should try to future-proof the cable plant, which is possible because 40-Gigabit and 100-Gigabit Ethernet use the same cabling guidelines. The only difference is that 100-Gigabit requires twice as many fibers. Specify a minimum of OM3 fiber, and OM4 if extra reach is needed.
A data center should be designed with a maximum cabling distance of 100 to 150 meters between switches. The 100 to 150 meter length limit is part of the 40-Gigabit and 100-Gigabit specification for multi-mode fiber. 150 meters is the longest length supported, assuming the customer is using OM4 cabling and no more than two patch panels are in the path. If more than two patch panels are present, then OM4 is limited to 125 meters. If using OM3 fiber, use 100 meters as the maximum length.
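The OM3/OM4 distance rules above can be captured in a small decision helper. This is a sketch of the guideline as stated in this course, with `max_run_meters` as a hypothetical name:

```python
def max_run_meters(fiber: str, patch_panels: int) -> int:
    """Maximum multimode cable run between switches for 40G/100G,
    per the guidelines above (OM3: 100 m; OM4: 150 m with at most
    two patch panels in the path, otherwise 125 m)."""
    if fiber == "OM3":
        return 100
    if fiber == "OM4":
        return 150 if patch_panels <= 2 else 125
    raise ValueError("specify OM3 or OM4 multimode fiber")

print(max_run_meters("OM4", patch_panels=2))  # → 150
print(max_run_meters("OM4", patch_panels=3))  # → 125
print(max_run_meters("OM3", patch_panels=1))  # → 100
```

Always confirm the limits against the optic vendor's datasheet, since actual reach depends on the specific transceiver and link budget.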
A patch panel should be placed at the Main Distribution Area (MDA), with additional patch panels in each row. Structured cabling should be used between the patch panels. To maximize cost savings, we recommend running large fiber bundles between the patch panels.
It is important to obtain all cabling components from the same manufacturer and to have a lifetime or multi-year guarantee on the installation. Using the same manufacturer and having a guarantee in place is important because fiber plants have always run the risk of polarity issues, and with the increased speeds of 40-Gigabit Ethernet and 100-Gigabit Ethernet, polarity issues become more pronounced and troublesome. Manufacturers specifically design their components to work together to avoid these issues.
Slide 24
The slide illustrates an example of using structured cabling. It addresses both the LAN and the SAN. It consists of a patch panel for the MDA, with bulk fiber cable connecting to patch panels in each row, and patch cables to connect the active components. Cabling is not the place to cut corners. The customer should align themselves with a good cabling vendor that they can rely on for present and future needs.
Slide 25
The diagram shows a pod of eight rows of racks (Row 1 through Row 8) with a central Chassis Area. Sixteen 96-strand fiber cables terminate at the patch panel, with 12-strand cables within the rows and runs of approximately 20 meters (64 feet).
The slide shows an example of structured cabling for a pod that consists of 128 racks distributed over eight rows. This example is important because it happens to be exactly the maximum size of QFabric.
Slide 26
Overhead view: four rows of racks (Row 1 through Row 4) arranged with alternating aisles (Cold Aisle, Row 1, Hot Aisle, Row 2, Cold Aisle, Row 3, Hot Aisle, Row 4, Cold Aisle).
A critical element to minimizing power consumption in the data center is the concept of hot aisles and cold aisles. The
idea is to keep the cool air supplied to the equipment separate from the hot air exhausted from the equipment. Data
center devices are racked so that cool air is drawn into the equipment on a common cold aisle where the cool air is
delivered. The other side of the rows creates a common hot aisle into which the hot air from the equipment is
exhausted. The hot air can then be drawn into the air conditioning equipment, cooled, and redistributed into the cold
aisle.
It is desirable to have as much separation as possible between the cool air supplied to the devices and the hot air
exhausted from the devices. This separation makes the cooling process more efficient and provides more uniformity of
air temperature from the top to the bottom of the racks, preventing hot spots within the data center. Physical barriers
above and around racks can be used to help achieve the desired separation. The racks can also help achieve the
desired separation and air flow.
Slide 27
Commercial Cabinets Enable Hot Aisle/Cold Aisle Data Center Design
Products are available from several rack manufacturers that provide support for implementing hot aisle/cold aisle designs in data centers. For example, cabinets are available that take cold air in at the front of the rack, move it through the chassis with specially designed baffles, and then expel hot air at the rear of the cabinet.
Cool air is often forced through perforated tiles in raised floors as a way of delivering cool air to the cold aisle.
Plenums above the racks are then used to vent the hot air for re-cooling. More recently, delivering the cold air through
ducts and plenums above the rack cabinets and exhausting the hot air through separate ductwork and plenums has
been used to take advantage of the natural tendency of cold air to fall and warm air to rise.
Slide 28
Section Summary
Slide 29
A) EoR
B) ToR
C) MoR
D) BoR
Slide 29
A) OS1
B) RJ-45
C) Cat 6a
D) OM3
Slide 30
Juniper Networks
Data Center Design
Best Practices
This section will take a look at the evolving requirements for optics in the data center, and will describe Juniper's optical product offerings.
Slide 31
Section Objectives
Slide 32
Core/Aggregation: Nx10GbE, 40GbE, 100GbE, Nx100GbE
Let's drill down a little deeper into a concern that might sometimes be overlooked, and that's optics. On this slide, we are looking at the data center and our perception of evolving requirements: you see the core/aggregation layer, the access layer, and the compute layer (servers). At the core/aggregation layer we have seen an evolution from 10-Gigabit Ethernet to 40-Gigabit Ethernet, and that is progressing into 100-Gigabit Ethernet. At the compute layer, we have seen things grow exponentially from 1-Gigabit to 10-Gigabit interconnect from the server side to the access layer, or ToR switch. Already we are seeing bonded 10-Gigabit Ethernet for the purpose of achieving greater bandwidth and bigger trunk size. Sooner, rather than later, we will see 40-Gigabit Ethernet.
But let's look more specifically at the access layer. What we see here is a lot of movement, and it seems to be where a lot of the significant development is happening: where the actual fabric touches the compute node, and more so where the leaf and spine network touches the compute node or the compute network. At the access layer you see bonded 10-Gigabit Ethernet moving into 40-Gigabit Ethernet today, and in the not-so-distant future we are looking at bonded 40-Gigabit Ethernet, just as we did with 10-Gigabit Ethernet in the recent past. Looking ahead, we will see 100-Gigabit Ethernet in the data center. Again, looking at the access layer, there is quite a bit of movement. This is where we will focus for the remainder of this module.
Slide 33
The diagram compares two designs: on the left, a ToR switch with nx10Gbps uplinks to the core and 10Gbps links to the servers; on the right, a ToR switch with 40Gbps uplinks to the core and 10Gbps links to the servers.
Building on what we described on the previous slide, on this slide we are showing the use of N by 10-Gigabit bonding to achieve greater bandwidth coming out of a ToR switch. This ToR switch in turn services compute nodes or servers, as shown in the diagram on the left of the slide. Each server is running multiple virtual machines (VMs).
In this type of solution, there is a certain amount of oversubscription that is often acceptable for an access layer tier.
However, what needs to happen in a multi-link situation such as this, is that hashing has to occur for load balancing
and equalization of traffic across all of the links being used. As we add more links, not only do we add more cabling
and physical connections, but also we do not truly achieve the bandwidth that we had hoped for by bonding together
multiple 10-Gigabit links.
Comparing and contrasting this type of solution with where everything is going today, the oversubscription, as shown
by the diagram on the right, does not change. However, by virtue of reducing the number of links (two as opposed to four in our examples on the slide) and using 40-Gigabit uplinks instead of 10-Gigabit uplinks, we have optimized hashing and achieved a closer realization of true 40-Gigabit throughput coming from the ToR into the fabric, or into the core of the network. Fewer links are preferred over more links.
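As a rough illustration of the hashing behavior described above, the sketch below assigns each flow to one bonded uplink by hashing its 5-tuple. `pick_uplink` and the CRC32 hash are illustrative assumptions, not the hash any particular switch uses; the point is that all packets of a flow stay on one member link, so a single flow never exceeds that link's speed, which is one reason two 40-Gigabit uplinks behave better than four 10-Gigabit uplinks:

```python
import zlib

def pick_uplink(flow: tuple, n_links: int) -> int:
    """Hash a flow 5-tuple (src, dst, proto, sport, dport) onto one
    of the bonded uplinks. Deterministic per flow, so packets of one
    flow are never spread across multiple member links."""
    key = ":".join(map(str, flow)).encode()
    return zlib.crc32(key) % n_links

flow = ("10.1.1.5", "10.2.2.9", 6, 49152, 443)
print(pick_uplink(flow, 4))  # an index in 0-3, same every time
print(pick_uplink(flow, 2))  # same flow hashed over 2 links instead
```

With many flows of uneven sizes, the per-flow pinning is what keeps an n x 10G bundle from reaching its nominal aggregate bandwidth; fewer, faster links reduce this imbalance.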
Slide 34
Inter-rack requires:
Low oversubscription (OS) between racks
Reach up to <400m
High rack-to-rack traffic
Existing MMF
Juniper Offers Multiple Options
Table columns: Model, Cable, Connector, Reach (OM3/OM4), IEEE Standard, 100G Ready
In regard to data center use cases for 40-Gigabit Ethernet, there are various options that Juniper offers. When looking at inter-rack requirements, a lot of the requirements revolve around low oversubscription, moderate to short reach, and lots of rack-to-rack traffic. More importantly, multimode fiber (MMF) might have been utilized for the purpose of 10-Gigabit interconnect.
The table at the bottom of the slide shows three 40-Gigabit fiber options offered by Juniper Networks. With regard to
the 40G-SR4, and 40G-eSR4 (that is, extended SR4) optics, as you can see, they have moderate to long reach in the
data center, allowing for flexibility and choice. They use a specialized connector, but they are IEEE standardized, and
most importantly, they are 100-Gigabit ready. These are offerings that Juniper has today.
However, at the bottom of the table, the newest introduction to these options is the 40G-LX4 optic. The 40G-LX4 offers short reach within the data center, but as you can see with regard to cabling requirements, the 40G-LX4 is down to two fiber lines as opposed to the 12 fiber lines needed for the other two options. This is important because what we are effectively saying here is that this option uses MMF fiber, and by virtue of introducing these new connectors, the existing fiber plant can be retained. The ramifications are huge: there are cost savings for the customers, and there is the fact that you have interchangeability that allows for seamless upgrade to 40-Gigabit Ethernet by utilizing a simple modular connector.
The 40G-LX4 option is not currently IEEE standardized, or 100-Gigabit Ethernet ready. However, in the upcoming
slides we will discuss standardization and market adoption for this specific optic.
Slide 35
40G-LX4 Introduction
JNP-QSFP-40G-LX4 benefits: no fiber upgrade for 40G; duplex LC connectors on both the Tx and Rx sides
40G-LX4 Introduction
When you look at the new 40G-LX4 from Juniper, it is important to note specifically how it has been implemented. This
is a joint effort with a company called Finisar.
The new Juniper QSFP LX4 optic uses familiar duplex LC connectivity, thereby allowing you to retrofit these new 40-Gigabit connectors to existing fiber. Unlike other solutions, we are actually using four different wavelengths with 10-Gigabit lanes to achieve 40-Gigabit throughput. Again, this is done over two multimode fiber strands.
Slide 36
Aggregation Aggregation
In regard to how the LX4 optics would be implemented, it is as simple as retrofitting the LX4 optical modules where
10-Gigabit modules or transceivers are used. In the example shown on this slide, we would be doing so at the access
layer (on the fabric-facing side, in this example) and at the aggregation tier (or fabric interconnect). Again, it is as
simple as retrofitting the connectors on either side, leaving the cable in between.
Slide 37
Solution Comparison
BiDi versus LX4, SR4
Fiber Count: BiDi 2, LX4 2, SR4 12
Solution Comparison
A discussion of the Juniper LX4 optics would not be complete without comparing it to one of the more prominent 10- to
40-Gigabit solutions out there today, namely the Cisco bi-directional optic, also known as BiDi. This optic has been
available for a short while now, and there are some key points of comparison that we would like to point out.
The Juniper LX4 has slightly longer reach than the Cisco BiDi, and while the Cisco BiDi solution was intended to
service the same use case (10- to 40-Gigabit migration over existing fiber), the BiDi only works with Cisco-specific
solutions and equipment. Therefore, it is more of a proprietary solution that does not service the greater needs of the
industry. Comparing the Cisco BiDi to the Juniper LX4, Juniper is proud of the fact that it is the first company to
work with Finisar to develop an open standard, 40-Gigabit, migratable optic. Given the anticipation around the
Juniper LX4, we expect it to be adopted as an industry-wide, open standards-based optic. The LX4 is not proprietary in
nature, and while it still needs to be ratified, working in conjunction with Finisar avoids vendor lock-in and allows
for third-party integration.
Lastly, not to leave out the SR4: one of its biggest features is that it is 100-Gigabit ready. Forward-thinking
customers understand that if they plan their cabling plant accordingly today, they will have no problem reaching that
level of bandwidth as the actual switch interfaces and optics become less expensive. It is also worth noting that the
SR4 consumes less power than the other optics. In regard to reach, it comes in multiple varieties; you can go from
moderate to long reach with the SR4. Therefore, the Juniper SR4 is another valid offering to keep in mind for the
data center.
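The comparison points above can be collected into a small sketch. The table values follow the discussion in this section (fiber count, proprietary status, 100-Gigabit readiness); the helper function and all names are illustrative only:

```python
# Hedged sketch of the solution comparison discussed above.
# The dictionary keys and helper are illustrative, not a vendor API.

OPTICS = {
    #           strands needed, open (non-proprietary)?, 100G-ready?
    "BiDi": {"fiber_strands": 2,  "open_standard": False, "ready_100g": False},
    "LX4":  {"fiber_strands": 2,  "open_standard": True,  "ready_100g": False},
    "SR4":  {"fiber_strands": 12, "open_standard": True,  "ready_100g": True},
}

def reuses_duplex_plant(optic: str) -> bool:
    """True if the optic runs over an existing 2-strand (duplex LC) plant."""
    return OPTICS[optic]["fiber_strands"] <= 2

for name, props in OPTICS.items():
    print(name,
          "| reuses duplex plant:", reuses_duplex_plant(name),
          "| 100G-ready:", props["ready_100g"])
```

The check captures the migration argument made above: BiDi and LX4 fit an existing duplex plant, while SR4 trades a 12-strand cabling requirement for 100-Gigabit readiness.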
Slide 38
There are many inherent benefits to buying Juniper optics and transceivers as opposed to trying third-party optics:
Co-engineering support with a vendor and optics manufacturer
With Juniper branded optics, you have end-to-end support
There are many reasons why the optics discussion is critical to the data center. Essentially, whenever an opportunity
presents itself in a data center, or in any other engagement where you are introducing very dense switching, failing
to address the optical opportunity leaves money on the table, both the lucrative aspects of selling optics and the
peace of mind behind selling Juniper branded optics. Let's first wrap up with what these optics actually entail.
We have already discussed the SR4 and the LX4; we also have an intermediate reach 40-Gigabit optic (up to 2
kilometers), a long reach 40-Gigabit optic (up to 10 kilometers), and an extended reach 40-Gigabit optic (up to 40
kilometers). The reason to note the IR4 specifically is that, again, it is a good opportunity for your customers to
reduce their expense when they can justify a longer reach optic in the data center. An LR4 is better suited for inter
data center connectivity, or for reach requirements greater than the roughly 400 meters that an extended-reach SR4
optic offers. Offering flexibility is one thing, but this can also save tremendous CapEx when deploying multiple
switches with thousands of ports.
Rounding out the new optics family from Juniper are the multiple breakout optic options shown in the table on the
slide. By breakout, we mean 40-Gigabit to 10-Gigabit breakout solutions that allow our customers, for example, to
take QFX5100 switches with 40-Gigabit Ethernet interfaces and still utilize them for direct compute or server
attachment, or simply to break them out into 10-Gigabit cables or links to attach to other network elements.
We should point out that, beyond the benefits the LX4 specifically provides our customers when moving to 40-Gigabit,
there are many inherent benefits to buying Juniper optics and transceivers as opposed to trying third-party optics.
Foremost is the fact that Juniper, through co-engineering support with an optics manufacturer such as Finisar, can
often catch systemic problems or quality assurance issues with the optics and stop them before they get out into the
field. In contrast, if you are dealing with an optics manufacturer on your own, you have less of a relationship and
less chance of avoiding such issues.
Again, with Juniper branded optics you have end-to-end support, whereas with third-party optics, if a problem is
deemed to exist at that level, it will not be supported. As you can see, this is money on the table that you do not
want to leave behind. Some of our competition is instituting very interesting strategies with regard to optics in
general. You can read about certain competitors that are actually instituting licensing on optics, which throttles
the optic's bandwidth down after a certain period of time if those licenses are not purchased. Imagine how disruptive
and how broken your network could become if those licenses were not procured in time: although you bought a full-rate
optic interface, after a certain time it drops down to a lesser rate. We note these strategies because, when it comes
to optics, you should always consider the fact that you have a true vendor like Juniper, with solution support behind
every optic we sell.
Slide 39
Juniper Networks supports a variety of 100-Gigabit Ethernet interfaces. The use of 100-Gigabit Ethernet is expected
to increase dramatically in the next few years. The growth of 100-Gigabit Ethernet in the data center can be traced
back to cloud applications, which require higher-bandwidth interfaces. Not only will 100-Gigabit Ethernet be a leading
solution for data center spine applications in cloud-scale deployments, it will also grow in popularity for enterprise
deployments as it is highly cost-efficient and its bigger pipes provide better application performance.
Juniper offers a truly seamless path for data center upgrades that enables customers to choose 10-Gigabit Ethernet or
40-Gigabit Ethernet in the spine today and then migrate to 100-Gigabit Ethernet as bandwidth needs grow. This is
accomplished using Juniper Networks QFX10000 spine switches, which offer up to thirty QSFP28 100-Gigabit
Ethernet ports in a compact, single rack unit form factor.
Juniper offers exceptionally cost-optimized 100-Gigabit Ethernet optics and cables that complement the spine and leaf
architectures with open and flexible connectivity options. These options are backward compatible with 40-Gigabit
Ethernet speeds, which establishes a path to 100-Gigabit Ethernet deployments. Juniper solutions offer true
investment protection and the ability to easily move to higher speeds, enabling 100-Gigabit Ethernet deployments in
the most seamless fashion.
Slide 40
Section Summary
Slide 41
Slide 42
Course Summary
Slide 43
Additional Resources
For additional resources or to contact the Juniper Networks eLearning team, click the links on the screen.
Slide 44
You have reached the end of this Juniper Networks eLearning module. You should now return to your Juniper
Learning Center to take the assessment and the student survey. After successfully completing the assessment, you
will earn credits that will be recognized through certificates and non-monetary rewards. The survey will allow you to
give feedback on the quality and usefulness of the course.
Slide 45
All rights reserved. JUNIPER NETWORKS, the Juniper Networks logo, JUNOS, QFABRIC, NETSCREEN, and
SCREENOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. All other
trademarks, service marks, registered trademarks, or registered service marks are the property of their respective
owners. Juniper Networks reserves the right to change, modify, transfer or otherwise revise this publication without
notice.
Slide 46
Corporate and Sales Headquarters
Juniper Networks, Inc.
1194 North Mathilda Avenue
Sunnyvale, CA 94089 USA
Phone: 888.JUNIPER (888.586.4737) or 408.745.2000
Fax: 408.745.2100
www.juniper.net

APAC Headquarters
Juniper Networks (Hong Kong)
26/F, Cityplaza One
1111 Kings Road
Taikoo Shing, Hong Kong
Phone: 852.2332.3636
Fax: 852.2574.7803

EMEA Headquarters
Juniper Networks Ireland
Airside Business Park
Swords, County Dublin, Ireland
Phone: 35.31.8903.600
EMEA Sales: 00800.4586.4737
Fax: 35.31.8903.601

Copyright 2010 Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, Junos,
NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries.
All other trademarks, service marks, registered marks, or registered service marks are the property of their
respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks
reserves the right to change, modify, transfer, or otherwise revise this publication without notice.