Cisco ACI: Zero to Hero: A Comprehensive Guide to Cisco ACI Design, Implementation, Operation, and Troubleshooting, 1st Edition, Jan Janovic
Prague, Czech Republic
About the Author
Jan Janovic, 2x CCIE #5585 (R&S|DC) and CCSI #35493, is an IT enthusiast with more than 10 years of experience in network design, implementation, and support for customers from a wide variety of industry sectors. Over the past few years, he has focused on datacenter networking with solutions based mainly, but not exclusively, on Cisco Nexus platforms: traditional vPC architectures, VXLAN BGP EVPN network fabrics, and Cisco ACI software-defined networking, all with an emphasis on mutual technology integration, automation, and analytics tools. Another significant part of his job is the delivery of professional training for customers all around Europe.
He holds a master’s degree in Applied Network Engineering – Network Infrastructure
from the Faculty of Management Science and Informatics, University of Zilina, Slovakia.
During his university studies, he led a group of students in successfully developing
the world’s first open-source EIGRP implementation for the Quagga Linux package
(development currently continued under the name FRRouting). He also contributed to
OSPF features there. His technical focus additionally expands to system administration
of Windows and Linux stations plus public cloud topics related to the design and
deployment of AWS solutions.
He maintains the following certifications: Cisco Certified Internetwork Expert in Routing and Switching, Cisco Certified Internetwork Expert in Data Center, Cisco Certified DevNet Professional, Cisco Certified Systems Instructor, AWS Certified Solutions Architect – Associate, AWS Certified Developer – Associate, AWS Certified SysOps Administrator – Associate, and ITILv4.
About the Technical Reviewer
David Samuel Peñaloza Seijas works as a Principal
Engineer at Verizon Enterprise Solutions in the Czech
Republic, focusing on Cisco SD-WAN, ACI, OpenStack,
and NFV. Previously, he worked as a Data Center Network
Support Specialist in the IBM Client Innovation Center in
the Czech Republic. As an expert networker, David has a
wide range of interests; his favorite topics include data
centers, enterprise networks, and network design, including
software-defined networking (SDN).
Acknowledgments
First and foremost, I would like to again express my deep and sincere gratitude to my family and friends, who are one of the most important pillars to achieving anything in
life, even writing a book like this. Thank you, my darlings Simona and Nela. You were
the best support in this challenging feat, throughout all those endless days and nights
when writing on the road, in the hospital, or even during a rock concert. Thank you, my
parents, Vladimír and Libuša, and brother, Vladimir, for giving me the best possible
upbringing, supporting my higher education, being my positive idols, and always
encouraging me to follow my dreams.
Sometimes there are moments in life that completely change its direction and
become a foundation for a lifelong positive experience. For me, this happened when I
first enrolled in Peter Paluch’s networking classes during my university studies in 2011.
Thank you, Peter, for sparking a huge networking enthusiasm in me and for being my
mentor and closest friend ever since. You gave me unprecedented knowledge and wisdom
to become professionally as well as personally who I am now.
I cannot be more grateful to the Apress team, especially Aditee Mirashi, for giving
me the opportunity of a lifetime to put my knowledge and practical skills gained over the
years into this book and for guiding me through the whole publishing process.
It was an honor to cooperate with my friend David Samuel Peñaloza Seijas, who
as a technical reviewer always provided spot-on comments and helped me bring the
content of this book to the highest possible level. Additionally, thanks a lot to Luca Berton
for your help with automation content and language enhancements.
Big appreciation goes to my employer ALEF NULA, a. s. and my coworkers for their
constant support and for providing me with all the necessary equipment and significant
opportunities to grow personally as well as professionally and gain the knowledge and
skills which ended up in this book.
I cannot forget to express my gratitude to my alma mater and its pedagogical
collective, Faculty of Management Science and Informatics, University of Zilina,
Slovakia, for giving me rock-solid knowledge and a lot of practical skills, thus thoroughly
preparing me for my professional career.
Introduction
Dear reader, whether you are a network architect, engineer, administrator, developer,
student, or any other IT enthusiast interested in modern datacenter technologies and
architectures, you are in the right place. Welcome!
In this book, you will explore the ongoing datacenter network evolution driven by
modern application requirements, leading to the next-generation networking solution
called Cisco Application Centric Infrastructure (ACI). Whether you are completely new to ACI or already have some experience with the technology, my goal in this book is to guide you through the whole implementation lifecycle. I will
show you how to simplify the deployment, operation, and troubleshooting of your
multi-datacenter networking environment using ACI, how to integrate it effectively
with L4-L7 devices, virtualization and containerization platforms, and how to provide
unprecedented visibility and security, all with an emphasis on programmability and
automation.
You will start "from zero" and discover the story behind ACI. You will build strong fundamental knowledge about its main components and explore the advantages of the
hardware-based Leaf-Spine architecture composed of Nexus 9000 switches. During the
first chapters, I describe all of ACI’s design options with their specifics, followed by a
detailed guide of how to deploy and initially configure the whole solution according to
best practices. You will then assemble all the necessary “access policies” for connecting
end hosts to ACI and utilize its multi-tenancy capabilities to create logical application
models and communication rules on top of the common physical network. You will
learn about the control plane and data plane mechanisms running under the hood, resulting
in correct forwarding decisions, and I will help you with troubleshooting any potential
problems with end host connectivity in ACI. I will also describe both Layer 2 and
Layer 3 external connectivity features to expose datacenter applications and services
to the outside world. I will cover integration capabilities with L4-L7 security devices
or loadbalancers as well as virtualization solutions. At the end, you will learn how to
effectively use ACI’s REST API to further automate the whole solution using the most
common tools: Postman, Python, Ansible, and Terraform.
Many times throughout the book I provide my own recommendations and views
based on knowledge and experience gained from real-world implementations,
combined with best practices recommended directly by Cisco. Wherever possible, I support each topic with practical examples, GUI/CLI verification tools, and troubleshooting tips to ultimately build your confidence in handling any ACI-related task.
Although this book is not directly related to any Cisco certification, I hope it
will become a valuable resource for you if you are preparing for the CCNP or CCIE
Datacenter exams and that you will use it as a reference later for your own ACI projects.
Let’s now dive into the first chapter, which is related to network evolution from
legacy architecture to Cisco ACI.
CHAPTER 1
Introduction: Datacenter
Network Evolution
What is a datacenter (or “data center,” which will be used interchangeably throughout
the book)? It’s a fairly simple question at first blush to many IT professionals, but let’s
start with a proper definition of this term.
From a physical perspective, a datacenter is a facility used by various organizations
to host their mission-critical data. Companies can invest in their own hardware and
software resources, which will stay completely under their control, maintenance, and
operation, or consume services from public clouds in the form of XaaS (Infrastructure/
Platform/Software as a Service).
Internally, we can divide a datacenter into a complex multi-layered architecture
of compute, storage, and networking equipment, interconnected to provide fast and
reliable access to shared application resources (see Figure 1-1). The networking layer
of each datacenter is especially important for the whole system to work correctly. Keep
in mind that you can have the most powerful blade server chassis with petabytes of
storage space, but without robust, reliable, and secure networking, you won’t change
the world. Some IT architects often underestimate the importance of this “datacenter
undercarriage.” My goal in this publication is to give you all the necessary knowledge
and skills needed to design, configure, and operate one of the industry-leading data
center SDN networking solutions, Cisco Application Centric Infrastructure (ACI).
© Jan Janovic 2023
J. Janovic, Cisco ACI: Zero to Hero, https://doi.org/10.1007/978-1-4842-8838-2_1
[Figure 1-1. Datacenter layered architecture: Enterprise Applications (Monitoring, Management, Automation, Development; Information Systems, Content Delivery); Middleware (Web Servers, Application Servers, Content Management); Operating Systems (Windows, Linux, Virtualization Hypervisors); Server and Storage Equipment (Rackmount Servers, Blade Chassis, Virtualized HW); Datacenter Network Equipment (LAN and SAN Infrastructure to Interconnect All Resources); Electricity, Cooling, Power Backup (Redundant PDUs, Backup Electricity Generators); Datacenter Physical Building/Facility (Physical Access Security, Location)]
[Figure 1-2. Traditional three-tier datacenter design: Core Layer (L2/L3 boundary), Aggregation Layer, and Access Layer with blade and rack servers]
In a datacenter, each part of such a network design had its specific purpose:
• Core layer
• Aggregation layer
• Access layer
• Admission control
The traditional design taken from campus networks was based on Layer 2
connectivity between all network parts, segmentation was implemented using VLANs,
and the loop-free topology relied on the Spanning Tree Protocol (STP). All switches
and their interfaces were implemented as individuals, without the use of virtualization
technologies. Because of this, STP had to block redundant uplinks between core and
aggregation, and we lost part of the potentially available bandwidth in our network.
Scaling such an architecture implies growth of broadcast and failure domains as well,
which is definitely not beneficial for the resulting performance and stability. Imagine
each STP Topology Change Notification (TCN) message causing MAC table aging across
the whole datacenter for a particular VLAN, followed by excessive BUM (Broadcast,
Unknown Unicast, Multicast) traffic flooding, until all MACs are relearned. The impact
of any change or disruption could be unacceptable in such a network. I’ve seen multiple
times how a simple flapping interface in a DC provider L2 network caused constant
traffic disruptions. Additionally, the 12-bit VLAN ID field in the 802.1Q header limits the number of possible isolated network segments for datacenter customers to 4,096 (often even fewer in practice due to internally reserved switch VLANs).
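The scale gap that motivates the overlay encapsulation introduced later in this chapter is easy to quantify. The following sketch, illustrative Python rather than anything from the book, simply computes the segment counts implied by the 12-bit 802.1Q VLAN ID and the 24-bit VXLAN VNI field:

```python
# Segment capacity implied by header field widths:
# 802.1Q carries a 12-bit VLAN ID, VXLAN a 24-bit VNI.
VLAN_ID_BITS = 12
VNI_BITS = 24

vlan_segments = 2 ** VLAN_ID_BITS    # 4,096 (fewer usable in practice,
                                     # since switches reserve some VLANs)
vxlan_segments = 2 ** VNI_BITS       # 16,777,216

print(f"802.1Q VLAN IDs: {vlan_segments:,}")
print(f"VXLAN VNIs:      {vxlan_segments:,}")
```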
One of the solutions recommended and implemented in the past for the mentioned STP problems was to move the L3 boundary from the core layer to the aggregation layer and divide one continuous switched domain into multiple smaller segments. As shown
in Figure 1-3, by eliminating STP you can use ECMP load balancing between aggregation
and core devices.
[Figure 1-3. L3 boundary moved to the aggregation layer: routed links between aggregation and core, Layer 2 only below aggregation; blade and rack servers at the access layer]
Let’s Go Virtual
In 1999, VMware introduced the first x86 virtualization hypervisor, and the network
requirements changed. Instead of connecting many individual servers, we started to
aggregate multiple applications/operating systems on common physical hardware.
Access layer switches had to provide faster interfaces and their uplinks had to scale
accordingly. For new features like vMotion, which allows live migration of virtual
machines between hosts, the network had to support L2 connectivity between all
physical servers and we ended up back at the beginning again (see Figure 1-4).
[Figure 1-4. Layer 2 extended across the aggregation and access layers to support virtualization, with Layer 3 at the core; blade and rack servers at the access layer]
To overcome the already-mentioned STP drawbacks and to keep support for server
virtualization, we started to virtualize network infrastructure too. As you can see in
Figure 1-5, switches can be joined together into switch stacks, VSS, or vPC architectures.
Multiple physical interfaces can be aggregated into logical port channels, virtual port
channels, or multi-chassis link aggregation groups (MLAG). From the STP point of view,
such an aggregated interface is considered to be an individual, therefore not causing
any L2 loop. In a virtualized network, STP does not block any interface but runs in the
background as a failsafe mechanism.
Virtualization is deployed on higher networking levels as well. Let’s consider
virtual routing and forwarding instances (VRFs): they are commonly used to address
network multi-tenancy needs by separating the L3 address spaces of network consumers
(tenants). Their IP ranges can then overlap without any problems.
These various types of virtualizations are still implemented in many traditional
datacenters, and they serve well for smaller, not-so-demanding customers.
[Figure 1-5. Virtualized network infrastructure: Core, Aggregation, and Access Layers with stacked/vPC switches and aggregated links; blade and rack servers at the access layer]
Server virtualization also introduced a new layer of virtual network switches inside
the hypervisor (see Figure 1-6). This trend increased the overall administrative burden,
as it added new devices to configure, secure, and monitor. Their management was often
the responsibility of a dedicated virtualization team and any configuration changes
needed to be coordinated inside the company. Therefore, the next requirement for a
modern datacenter networking platform is to provide tools for consistent operation
of both the physical network layer and the virtual one. We should be able to manage
encapsulation used inside the virtual hypervisor, implement microsegmentation, or
easily localize any connected endpoint.
[Figure 1-6. Virtual switching layer inside hypervisors: Core, Aggregation, and Access Layers above physical servers acting as virtualization hosts]
Along with continuous changes in IT and network infrastructures, we can see another
significant trend in datacenter networking: a higher adoption of containerization,
microservices, cloud technologies, and deployment of data-intensive applications is causing
the transition from a north-south traffic pattern to a majority of east-west traffic. According
to the Cisco Global Cloud Index, more than 86% of traffic stayed inside datacenters in 2020.
[Figure 1-7. Leaf-spine architecture: spine switches and leaf switches]
In order to use all the available bandwidth between leaves and spines, their interconnections need to be Layer 3 interfaces. This eliminates the need for STP and provides support for ECMP load balancing. However, to support virtualization and various other application features, we still need to ensure Layer 2 connectivity somehow. The solution is to encapsulate any traffic (Layer 2 or Layer 3) at the edge of the network into a new, common protocol called Virtual eXtensible Local Area Network (VXLAN). See Figure 1-8.
[Figure 1-8. VXLAN encapsulation of the original Ethernet frame]
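As a concrete illustration of the encapsulation, the VXLAN header itself is only 8 bytes: a flags field marking the VNI as valid, the 24-bit VNI, and reserved bits (RFC 7348). The sketch below builds that header in Python; it is a generic protocol illustration, not ACI-specific code:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header defined in RFC 7348.

    Byte 0 carries the flags (0x08 = "VNI present"); bytes 4-6 carry
    the 24-bit VNI, followed by one reserved byte.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!I", 0x08 << 24) + struct.pack("!I", vni << 8)

# The encapsulated packet on the wire looks like:
#   outer Ethernet / outer IP / outer UDP (dst port 4789) / VXLAN / original frame
hdr = vxlan_header(10001)
print(hdr.hex())  # 0800000000271100
```

The 24-bit VNI is what lifts the segment limit from 4,096 VLANs to roughly 16 million virtual networks.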
So why are we even discussing Cisco ACI in this book? All of the previously described
requirements can be solved by implementing this centrally managed, software-defined
networking in your datacenter.
Summary
In subsequent chapters, I will cover each ACI component in detail and you will learn
how to design and implement this solution while following best practices and many of
my personal practical recommendations, collected during multiple ACI deployments for
various customers. They won’t always necessarily match according to the situation they
are considered in, but the goal is the same: to provide you with a comprehensive toolset
to become confident with various ACI-related tasks.
CHAPTER 2
ACI Fundamentals:
Underlay Infrastructure
In this chapter, you will focus on establishing a proper understanding of the main ACI
components: the underlay network infrastructure and APIC controllers with their
architectural deployment options. In ACI, hardware-based underlay switching offers a
significant advantage over various software-only solutions due to specialized forwarding
chips. Thanks to Cisco’s own ASIC development, ACI brings many advanced features
including security policy enforcement, microsegmentation, dynamic policy-based
redirect (to insert external L4-L7 service devices into the data path) or detailed flow
analytics—besides the huge performance and flexibility.
• Multi-speed 100M/1/10/25/40/50/100G/400G
• Intelligent buffering
• Encryption technologies
• Interface/queue info
• Flow latency
• Along with flows, you can also gather the current state of the
whole switch, utilization of its interface buffers, CPU, memory,
various sensor data, security status, and more.
[Figure 2-2. CloudScale ASIC slice structure: paired ingress and egress slices (slice 2 through slice n) connected internally]
Let’s dig deeper and look at the packet processing workflow inside the individual
CloudScale slice, as shown in Figure 2-3. On the ingress part of a slice, we receive the
incoming frame through the front switch interface using the Ingress MAC module. It’s
forwarded into the Ingress Forwarding Controller, where the Packet Parser identifies
what kind of packet is inside it. It can be IPv4 unicast, IPv6 unicast, multicast, VXLAN
packet, and so on. Based on its type, just the necessary header fields needed for the
forwarding decision are sent in the form of a lookup key to the lookup pipeline. The
switch doesn’t need to process all of the payload inside the packet. However, it will
still add some metadata to the lookup process (e.g., source interface, VLAN, queue
information).
Once we have a forwarding result, the ASIC sends the information together with the packet payload via the slice interconnect to the egress part. There, if needed, the packet is replicated, buffered in a specific queue according to the Quality of Service configuration, and, in the case of ACI, a defined security policy can be applied. After that, the ASIC rewrites the header fields and puts the frame on the wire.
Each result of this complex forwarding decision for a packet can be analyzed in
detail using advanced troubleshooting tools like the Embedded Logic Analyzer Module
(ELAM). By using it, you can easily monitor if the packet even arrived in the interface,
and if so, what happened inside the ASIC while doing these forwarding actions. You will
find more about ELAM in Chapter 6.
[Figure 2-3. Packet processing within a CloudScale slice: Ingress MAC, Packet Parser, Lookup Key and Lookup Pipeline producing a Lookup Result, with the packet payload carried over the Slice Interconnect to the egress slice for replication, queuing, and rewrite]
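The ingress workflow described above (parse the frame, build a lookup key from selected header fields plus metadata, run it through the lookup pipeline, and hand the result and untouched payload to the egress slice) can be sketched as a toy model. Every name and structure below is illustrative only and does not correspond to actual CloudScale internals:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LookupKey:
    # Only the header fields needed for the forwarding decision,
    # plus metadata the ASIC attaches (source interface, VLAN).
    dst_mac: str
    vlan: int
    src_port: str

def ingress_slice(frame: dict, mac_table: dict) -> dict:
    """Toy model of the ingress path: parse -> key -> lookup."""
    key = LookupKey(frame["dst_mac"], frame["vlan"], frame["src_port"])
    # The lookup pipeline never touches the payload, only the key.
    egress_port = mac_table.get((key.dst_mac, key.vlan), "flood")
    # Result + untouched payload cross the slice interconnect; the
    # egress slice then queues, applies policy, and rewrites headers.
    return {"egress_port": egress_port, "payload": frame["payload"]}

mac_table = {("aa:bb:cc:dd:ee:ff", 10): "eth1/1"}
frame = {"dst_mac": "aa:bb:cc:dd:ee:ff", "vlan": 10,
         "src_port": "eth1/5", "payload": b"\x00" * 64}
print(ingress_slice(frame, mac_table)["egress_port"])  # eth1/1
```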
Now, let’s have a look at the wide palette of device options when choosing from the
Nexus 9000 family of switches to build the ACI underlay fabric.
Each chassis consists of multiple components, and while some of them are common
for all chassis models, others are device-specific.
Chassis-Specific Components
Based on the size of the chassis, you have to choose specific fabric and fan modules. In
the following section, I will describe them in more detail.
Fabric Module
Internally, the Nexus 9500 chassis is itself designed as a leaf-spine topology.
Each fabric module (FM) inserted from the rear of the chassis comes with one or
multiple CloudScale ASICs and creates a spine layer for the internal fabric. On the
other hand, line cards with their set of ASICs act as leaf layer connected with all fabric
modules. Together they provide non-blocking any-to-any fabric and all traffic between
line cards is load balanced across fabric modules, providing optimal bandwidth
distribution within the chassis (see Figure 2-5). If a fabric module is lost during the
switch operation, all others continue to forward the traffic; they just provide less
total bandwidth for the line cards. There are five slots for fabric modules in total. If
you combine different line cards inside the same chassis, the fifth fabric module is
automatically disabled and unusable. This is due to the lack of a physical connector for
the fifth module on all line cards (at the time of writing) except for X9736C-FX, which
supports all five of them. Generally, you can always use four fabric modules to achieve
maximal redundancy and overall throughput.
[Figure 2-5. Fabric module slots FM 2 through FM 6 (the fifth fabric module is optional) and their connectivity to X9732C-EX and X9736C-FX line cards]
• FM-G for 4-slot and 8-slot chassis with LS6400GX ASIC that provides
up to 1.6 Tbps capacity per line card slot.
• FM-E2 for 8-slot and 16-slot chassis with S6400 ASIC that provides up
to 800 Gbps capacity per line card slot.
• FM-E for 4-slot, 8-slot, and 16-slot chassis with older ALE2 ASIC that
provides up to 800 Gbps capacity per line card slot.
Note All fabric modules installed in the chassis should be of the same type.
Fan Module
The Nexus 9500 chassis uses three fan modules to ensure proper front-to-back airflow.
Their speed is dynamically driven by temperature sensors inside the chassis. If you
remove one of the fan trays, the others will speed up to 100% to compensate for the loss of cooling power.
There are two types of fan modules available that are compatible with specific fabric
modules:
System Controller
As you already know, the supervisor is responsible for handling all control plane protocols and operations. A redundant pair of system controllers further helps to increase the overall system resiliency and scale by offloading the internal non-datapath switching and device management functions from the main CPU in the supervisor engines. The system controllers act as multiple internal switches, providing three main communication paths:
• Ethernet Out of Band Channel (EOBC): 1 Gbps switch for intra-node
control plane communication. It provides a switching path via its
own switch chipset, which interconnects all HW modules, including
supervisor engines, fabric modules, and line cards.
• Ethernet Protocol Channel (EPC): 1 Gbps switch for intra-node data protocol communication. Compared to EOBC, EPC only connects
fabric modules to the supervisor engines. If any protocol packets
need to be sent to the supervisor, line card ASICs utilize the internal
data path to transfer the packets to fabric modules. They then
redirect the packet through EPC to the supervisor.
Line Cards
Several line card options are available and universally compatible with any chassis size. For the purposes of this book, I will list only models related to the Cisco ACI architecture. As I've already described, they need to be based on CloudScale ASICs, and you can deploy them only as ACI spine switches. The choice depends on the number of interfaces, their speeds, or, for example, the CloudSec DCI encryption support needed in your final design. Internal TCAM scalability for the purposes of ACI is consistent across all models and allows you to handle 365,000 database entries. A TCAM entry stores the MAC, IPv4, or IPv6 address of an endpoint visible in the fabric, for any host connected to ACI. See Table 2-2.
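To put the 365,000-entry figure in perspective, a rough capacity check is straightforward. The per-endpoint entry costs below are simplifying assumptions for illustration (one entry for the MAC plus one per learned IP address), not Cisco's exact TCAM accounting:

```python
SPINE_TCAM_ENTRIES = 365_000  # capacity quoted above for these line cards

def entries_needed(endpoints: int, ipv4_per_ep: int = 1,
                   ipv6_per_ep: int = 0) -> int:
    # Assumption: one entry for the endpoint MAC, plus one entry
    # per learned IPv4/IPv6 address of that endpoint.
    return endpoints * (1 + ipv4_per_ep + ipv6_per_ep)

demand = entries_needed(endpoints=100_000, ipv4_per_ep=1)
print(demand, demand <= SPINE_TCAM_ENTRIES)  # 200000 True
```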
Note *Inside one chassis you can combine different line cards, but in such
a case, the fifth fabric module will be disabled and you won’t get maximum
throughput for X9736C-FX.
93360YC-FX2: 96x 1/10/25G SFP28 and 12x 40/100G QSFP28, 7.2 Tbps, Leaf only, Yes
93180YC-FX3: 48x 1/10/25G SFP28 and 6x 40/100G QSFP28, 3.6 Tbps, Leaf only, Yes
93108TC-FX: 48x 100M/1/10G BASE-T and 6x 40/100G QSFP28, 2.16 Tbps, Leaf only, Yes
9348GC-FXP: 48x 100M/1G BASE-T, 4x 1/10/25G SFP28, and 2x 40/100G QSFP28, 969 Gbps, Leaf only, Yes
93108TC-FX3P: 48x 100M/1/2.5/5/10G BASE-T and 6x 40/100G QSFP28, 2.16 Tbps, Leaf only, Yes
Now, with all this knowledge about the Nexus 9000 hardware platform, let's take a closer look at how it's used for ACI.
The leaf-spine topology offers simple scalability. The overall bandwidth capacity of the whole ACI fabric is based on the number of spine switches used. Each spine adds another concurrent path for traffic between any pair of leaf switches. For a production ACI fabric, the general recommendation is to use at least two spines. This preserves the necessary redundancy in case of failure and allows you to upgrade them one by one without affecting production traffic. Adding a new spine switch is just a matter of connecting it correctly to the fabric; after registration in APIC, it will automatically become a fabric member without disrupting already deployed devices.
On the other hand, if you need to scale your access interfaces, all you need to do
is deploy a new leaf switch (or pair), similar to the previous case. Each new leaf will
become part of the fabric and the control plane will converge automatically, all without
affecting already running devices.
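The bandwidth effect of adding a spine can be quantified with simple arithmetic: each leaf has one uplink per spine, and ECMP spreads traffic across all of them. The numbers below are illustrative, not an ACI sizing rule:

```python
def leaf_uplink_capacity_gbps(num_spines: int, link_speed_gbps: int) -> int:
    # One uplink from the leaf to every spine, all active thanks to ECMP.
    return num_spines * link_speed_gbps

# Two 100G-connected spines give each leaf 200 Gbps toward the fabric;
# adding a third spine raises that to 300 Gbps on every leaf at once,
# without touching any existing link.
print(leaf_uplink_capacity_gbps(2, 100))  # 200
print(leaf_uplink_capacity_gbps(3, 100))  # 300
```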
This simplicity applies equally when removing any device from ACI. To make sure you won't cause any service disruption in your production network, ACI offers a maintenance mode (also commonly known as the graceful insertion and removal feature). If it is enabled on an ACI switch, that switch is no longer considered part of the underlay network and all control plane protocols are gracefully brought down. Additionally, all host interfaces are shut down.
[Figure: ACI leaf-spine fabric; minimum 40G fabric links, IS-IS/COOP control plane]
Another random document with
no related content on Scribd:
one of “Four Garrisons,” 255, 257;
Christianity and Zoroastrianism in, 256;
Arab raids reach, 257;
Ali Arslan defeated, 94, 261;
Turkish rule, 261;
Sadi visits, 262;
Marco Polo in, 265;
Timur’s capital, 266;
under Dughlat Amirs, 267-8;
Khojas established in, 269;
Hazrat Apak rules, 68, 270-71;
Chinese masters of, 271-3;
attempts of Khojas to regain, 272, 273-4, 277;
Yakub Beg rules, 277-8, 279-91;
Russian designs on, 283, 285;
British mission in, 287-90;
Chinese overthrow Yakub Beg, 97, 290-92;
Revolutionary disturbances in, 295-8
Kashgari:
costumes, 29, 58-9, 83, 193;
cruelty to animals, 100-101;
dirtiness, 44;
farmers, 303-7;
games, 323;
health, 99, 173;
horsemanship, 80, 99;
lying, 100;
medical mission and, 52-53;
music, 63-4, 323;
peasants, 60;
pleasure expeditions, 92, 95;
religious observances, 68-9, 92, 95, 314-315;
superstitions, 69, 318-21;
women, 58-61, 64-5, 83, 92-3, 122, 310-15;
workmen, 29, 31, 58
Kashmir, 269;
Maharaja of, 293
Kasia Mountains, 236
Katta Dawan, 128-31, 133, 327
Katta Tura, 273
Kaufmann, General, 18, 285
Kaufmann, Mt., 134
Kaulbars, Baron, 284
Kazis, 243, 245
Keen Lung, Emperor, 271
Keraits, 263
Khanates of the Sir Darya, 275, 283, 299
Khargush Pamir, 134-9
Khiva, 275
Khojas, 269, 270-71, 272-3, 274, 277-8, 280
Khokand, 20-21;
submits to China, 272, 273;
Russian advance on, 276, 277, 279;
Russian rule established, 282, 283, 299;
revolt, 283, 285
Khotan:
ancient cities of, 217-19;
carpets, 82, 116, 146-7, 213;
history, 94, 253, 255, 261, 279, 286, 292;
jade, 215-17;
oasis, 323;
population, 240, 246;
products, 213-17, 246;
women, 211-12
journey to:
the caravan, 175-8;
Yangi Hissar, 178, 180;
Yarkand, 182-90;
Posgam, 192-3;
crossing the desert, 195-7, 201-208;
arrival, 209-11;
Merket, 221-5;
return to Kashgar, 225-9
Khudadad, Amir, 268
Khudayar Khan, 276, 277
Kirghiz, 240-41, 308;
akhois, 30, 114-17, 125, 146, 149;
character and customs, 113-14, 118, 123-5, 164-5, 331;
features, 57, 113;
fishing, 152-3;
goat game, 122, 150-51, 165;
headgear, 25, 118, 133, 148;
hunter, 330-31;
Karakoram captured by, 259;
marriages, 119-22, 323;
ponies, 302;
religion and superstition, 125, 163;
submit to China, 272;
women, 23, 114, 118-22, 133
Kizil Art, 236
Kizil Su River, 35, 55, 238
Korla, 292
Koumiss, 122
Kucha, 280, 292
Kuen-lun range, 205, 236
Kuli, 165
Kulja, 271, 293
Kungur, Mt., 40, 164
Kuntigmas, 166
Kuropatkin, 291
Kutass riding, 162-4, 166-8
Kutayba ibn Muslim, 257
Kvass, 9
Pamirs:
British mission in, 289;
Russian authority in, 293-4, 299, 325
travel in:
preparations, 103-5;
journey begun, 105-7;
Gez River crossing, 107-11, 166;
daily routine, 112-13;
among the Kirghiz, 113-27;
Katta Dawan crossed, 129-31, 133, 327;
Karakul Lake, 130, 132, 134, 327;
ovis poli stalking, 133, 135, 327-8;
Khargush Pamir, 134-9;
fossils found, 141;
Pamirsky Post, 141-3, 328;
Uchak Valley, stalking in, 144, 329-31;
with the Sarikoli, 148-60;
Ulughat Pass, 166-8
Pamirsky Post, 139, 141-3, 328
Pan Ghao, General, 67, 250-51
Paper manufacture, 198
Parthia, Chinese missions to, 249, 251
Persia:
agriculture, 301;
character of people, 190;
China and, 253, 256;
deserts, 236
Peter the Great, 270
Petrograd, 3, 9-10
Philip of France, 178
Pigeon Shrine, 205-7
Polo, Marco, 129, 196, 205, 265, 324-5
Posgam, 192-3
Potais, 178-9, 196, 204
Pottery, 82
Prester John, 264
Przemyzl, capture of, 17
Ptolemy, 236
Sadi, 262
Sadik Beg, 277, 278
Safdar Ali Khan, 187
Said, Sultan, 268
Sakas, 249
Samara, 13-14
Samarkand, 216, 269
Sandstorms, 56, 239-40
Sanjar, Sultan, 262
Sarikol annexed, 280
Sarikolis, 131, 149, 153, 155, 157-160, 177, 308, 323
Sarts, 18-21
Satok Boghra Khan, 260
Sattur, 41-3, 50, 88, 113, 139, 158, 160-61, 166, 171, 172, 176
Sayyids, 92
Schlagintweit, Adolph, 274, 286
Semirechia, 299
Shah Murad Khan, 277
Shah Rukh, 268
Shakir Padshah, Imam, 206
Shamshir, 120-21
Shaw, Robert, 286-7
Sheep:
fat-tailed, 26, 44, 303;
Osh district, 26, 44;
Yarkand, 185-6;
wolves and, 114, 124-5, 331;
wild, of Marco Polo, 131, 143, 146, 324-32
Shirin, 156
Shrines, 68-72, 92-5, 205-7, 310, 320
Sigm shrine, 320
Silk industry of Khotan, 213-15
Sir Daria, the, 20, 275-6
Snakes, 88
Snow-throwing custom, 321-2
Spiders, 88-9
Stein, Sir Aurel:
researches of, 76, 84, 94, 98, 107, 218-19, 242, 259;
returns from desert, 101-2;
helps in Pamir preparations, 105, 106;
bound for Persia, 166
Steppes, 15-16, 22
Stockholm, 5
“Stone Sheep-folds,” 114
Sturgeon, 8
Subashi, 162
Superstitions of Kashgari, 69, 318-320
Swedes, 5, 7-8
Swedish missionaries, 37, 44, 51, 52-53, 180, 184, 188-9, 190
THE END
FOOTNOTES:
[1] Tashkent, Kashgar, Cossacks.
[2] The monarch in question was actually Khusru Parviz.
[3] Iusce is Yu-shih or Jade stone.
[4] To offer tea is a symbol of apology.
[5] Si-Yu-Ki, by S. Beal, ii. pp. 324, 325.
[6] Tarikh-i-Rashidi, p. 303.
[7] La Haute Asie, ii. 308.
[8] Most of these terms are new, the old titles having been
abolished after the Revolution.
[9] There is considerable uncertainty about this date, which good
authorities give as some decades earlier.
[10] Cathay and the Way Thither, vol. i. p. 101 (Cordier edition).
[11] Boghra signifies a male camel—names of animals being used
by Turks as tribal names. It is an interesting form of totemism;
vide “La Légende de Satok Boghra et l’histoire,” Journ. Asiat.,
Jan.-Fév. 1900, pp. 24 et seq.
[12] This province generally signifies the country lying to the north
of the Tian Shan, but, as used in the text, the term includes the
whole of the eastern division of the Khanate.
[13] The writer was Mirza Haider, Gurkhan, who wrote a history of
his ancestors, and also graphically described the events in which
he sometimes played a leading part.
[14] “Journey to Ilchi Khotan (1866),” J.R.G.S. vol. 37 (1867).
[15] The text of the treaty is given in The Life of Yakub Beg, by D.
G. Boulger.
[16] Zir is the mark for the sound “i,” and zabar for the sound “a”;
their inclusion is apparently unmeaning.
[17] Vide The Eastern Turkestan Dialect, by G. Raquette.
[18] Vide The Sheep and its Cousins, by the late R. Lydekker,
who termed the poli, Ovis ammon poli, and the karelini, Ovis
ammon littledalei.
[19] “The Amir Yakub Khan and Eastern Turkestan in Mid-
Nineteenth Century,” by Colonel Sir Henry Trotter, K.C.M.G., C.B.
(Journal of the Central Asian Society, vol. iv., 1917, Part IV.).
Transcriber’s Notes:
New original cover art included with this eBook is granted to the public
domain.
The Index entry for Chinese Turkestan: taxes, has been corrected to page
306 from page 366 which does not exist.
Variations in spelling and hyphenation are retained.
Perceived typographical errors have been changed.
*** END OF THE PROJECT GUTENBERG EBOOK THROUGH DESERTS AND OASES OF CENTRAL ASIA ***