

5. ACI Hardware Installation

APRIL 9, 2021

ACI Hardware Installation Basics Explained
The Application Centric Infrastructure (ACI) Fabric
hardware includes an Application Policy Infrastructure
Controller (APIC) appliance (a cluster of three
controllers), one or more leaf switches, and one or more
spine switches connected to each other.

The ACI fabric topology includes the following major components
Application Policy Infrastructure Controller (APIC) appliance (a cluster of APICs)
Switches that are used as Leaf switches: Cisco Nexus 93108TC-EX, 93108TC-FX, 93120TX, 93128TX, 93180LC-EX, 93180YC-EX, 93180YC-FX, 9332PQ, 9348GC-FXP, 9372PX, 9372PX-E, 9372TX, 9372TX-E, 9396PX, and 9396TX switches.
Switches that are used as Spine switches: Cisco Nexus 9336PQ, 9364C, 9504, 9508, and 9516 switches.

The following figure shows how the APIC, spine, and leaf switches are connected to form the ACI fabric.
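Since every leaf uplinks to every spine and the APICs attach only to leaf ports, the topology can also be sketched in text form; the device names and counts below are illustrative:

Spine-1 <--> Leaf-1, Leaf-2, Leaf-3
Spine-2 <--> Leaf-1, Leaf-2, Leaf-3
Leaf-1  <--> APIC-1
Leaf-2  <--> APIC-2
Leaf-3  <--> APIC-3
(There are no spine-to-spine or leaf-to-leaf links, and APICs never connect to spines.)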
To set up the APIC, you can use a cluster of second-generation Cisco UCS 220 M4 servers, or a cluster of first-generation Cisco UCS 220 M3 servers. The image on these servers is secured with certificates, an APIC Product ID (PID), and a Trusted Platform Module (TPM).
For a Large cluster: A cluster of three second-generation Cisco APIC controllers with large-size CPU, hard drive, and memory configurations, for more than 1000 edge ports.
For a Medium cluster: A cluster of three second-generation Cisco APIC controllers with medium-size CPU, hard drive, and memory configurations, for up to 1000 edge ports.

Connection Features on a Third-Generation APIC Controller

1. It has a Virtual Interface Card (VIC) for either optical (VIC1455) connections or 10/25GBASE-T connections.
2. It has two fiber-optic or 10GBASE-T ports that connect to downlink ports on the ToR switches.
3. It has a 1-Gigabit Ethernet port for the Cisco Integrated Management Controller (CIMC).
4. It has a console port for a direct connection if you have to configure the device via the console.
5. It has out-of-band management ports for OOB connections; these cannot be used as CIMC ports except when using the Shared_LOM mode.

[Figure: Second-generation APIC appliance]

[Figure: First-generation APIC appliance]

Connecting a Leaf Switch to an APIC
You must downlink one or two (recommended for
redundancy) Cisco Nexus 93128TX, 9332PQ, 9372PX,
9372TX, 9396PX, or 9396TX leaf switches running in
ACI mode to each Application Policy Infrastructure
Controller (APIC) in your ACI fabric (each leaf switch
can be connected to multiple APICs). The type of
interface cables and leaf switches that you connect to is
determined by the type of virtual interface card (VIC)
installed on the APIC as follows:
• The VIC1225 module supports optical transceivers,
optical cables, and the Cisco Nexus 9396PX leaf switch.
• The VIC1225T module supports copper connectors,
copper cables, and the Cisco Nexus 93128TX and
9396TX leaf switches.

In order to create the ACI fabric, ACI-mode leaf switches must be connected to each APIC. The cable used to connect a leaf switch to an APIC depends on the type of virtual interface card (VIC) installed on the APIC. The VIC capabilities of the APIC controllers are as follows:

The VIC1225 module supports optical SFP/SFP+ transceivers, and optical fiber cables are used to connect it.

Switches that have fiber ports: Cisco Nexus 93180LC-EX, 93180YC-EX, 93180YC-FX, 9332PQ, 9348GC-FXP, 9372PX, 9372PX-E, and 9396PX.

The VIC1225T module has copper connection capability, and copper connectors and copper cables are used to provide connectivity.

Switches that have copper ports: Cisco Nexus 93108TC-EX, 93128TX, 9348GC-FXP, 9372TX, 9372TX-E, 93108TC-FX, 93120TX, and 9396TX switches.

Connecting Leaf Switches to Spine Switches

For optimal forwarding between endpoints, you must connect each leaf switch (Cisco Nexus 93108TC-EX, 93108TC-FX, 93120TX, 93128TX, 93180LC-EX, 93180YC-EX, 93180YC-FX, 9332PQ, 9372PX, 9372PX-E, 9372TX, 9372TX-E, 9396PX, or 9396TX) to every spine switch (Cisco Nexus 9336PQ, 9504, 9508, or 9516) in the same ACI fabric. The following table provides the number of ports that can be used to connect leaf switches and the supported speeds for those ports.

The leaf and spine switches in the fabric must be fully installed in their racks, grounded, and powered on.
Setting up the APIC
There are a few options available to configure the APIC:
CIMC: You can use the CIMC address that you have set up on the APIC to connect to it over the network.
Console: via a serial connection.
Here we will discuss how to configure the APIC via the console. When the APIC is started for the first time, you will see a series of initial setup options on the APIC console. For some options, you can press Enter to choose the default setting, and at any point during the setup you can restart the setup dialog by pressing Ctrl-C.
If you have connected an RJ-45 cable to the console port, you should first connect to the CIMC using SSH and enable the Serial over LAN (SoL) port by using the following parameters:
Set Enabled to Yes
Commit
Exit
Once SoL has been enabled, use the command “connect host” to get access to the console. If you have connected the serial port, disconnect it or make sure the connected device has the proper configuration.
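Put together, the CIMC session typically looks like the following minimal sketch; the management address is a placeholder and the prompts may differ slightly between CIMC versions:

ssh admin@<cimc-ip>
Server# scope sol
Server /sol # set enabled yes
Server /sol *# commit
Server /sol # exit
Server# connect host

After “connect host”, the CIMC attaches your SSH session to the APIC’s serial console, where the initial setup dialog shown later in this post appears.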

Setup for Active and Standby APIC

In a large DC infrastructure, we should have at least three APIC controllers in an active cluster and one additional APIC controller in standby mode. The cluster can be scaled up to 5 or 7 APIC controllers depending on the APIC software version. When any one of the active controllers fails, the standby controller is promoted to take its place in the remaining active controller cluster. If all three active controllers are working properly, the standby controller participates only in automatic software upgrades and does not hold any configuration. To move a standby APIC into the active role, the admin needs to promote it with a valid APIC ID number. This function is available from the APIC 2.2 release onwards.
The following options are used when configuring an active APIC via the console.
Fabric name: ACI Fabric1
Fabric ID: 1
Number of active controllers: 3

When setting up APICs in an active-standby mode, you must have at least 3 active APICs in a cluster.
POD ID: 1
Standby controller: No (for a standby APIC this will be Yes).
Controller ID: A unique ID number for the active APIC instance (valid range: 1-19).
Active controller name: apic1
IP address pool for tunnel endpoint addresses:
10.0.0.0/16
The subnet allocated here will be used for the infrastructure virtual routing and forwarding (VRF) instance only. This subnet should not overlap with any other routed subnets in your network, to avoid a subnet clash. A /23 is the minimum supported subnet for a 3-APIC cluster; if you are using Release 2.0(1), the minimum supported subnet is /22.
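As a quick sanity check on these sizes, this is plain subnet arithmetic rather than anything from the installation guide:

/23 -> 2^(32-23) = 512 addresses (minimum for a 3-APIC cluster)
/22 -> 2^(32-22) = 1,024 addresses (minimum on Release 2.0(1))
/16 -> 2^(32-16) = 65,536 addresses (the 10.0.0.0/16 default)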
VLAN ID for infrastructure network: This is the infrastructure VLAN for APIC-to-switch communication, including virtual switches. You should reserve this VLAN for APIC use only. This infrastructure VLAN ID must not be used elsewhere in your environment and must not overlap with any other reserved VLANs on other platforms.
IP address pool for bridge domain multicast addresses (GIPo): 225.0.0.0/15.
Valid range for the GIPo pool: 225.0.0.0/15 to 231.254.0.0/15; the prefix length must be 15 (128k IPs).
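The “128k IPs” figure is just the prefix-length arithmetic: a /15 spans 2^(32-15) = 2^17 = 131,072 multicast addresses.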
Management interface speed/duplex mode: auto
Strong password check: Y (enabled by default).

The following is a sample of the initial setup dialog as displayed on the console:
Cluster configuration …
Enter the fabric name [ACI Fabric1]:
Enter the fabric ID (1-128) [1]:
Enter the number of active controllers in the fabric (1-9) [3]:
Enter the POD ID (1-9) [1]:
Is this a standby controller? [NO]:
Enter the controller ID (1-3) [1]:
Enter the controller name [apic1]: sec-ifc5
Enter address pool for TEP addresses [10.0.0.0/16]:
Enter the VLAN ID for infra network (2-4094): 4093
Enter address pool for BD multicast addresses (GIPO) [225.0.0.0/15]:
Out-of-band management configuration …
Enable IPv6 for Out of Band Mgmt Interface? [N]:
Enter the IPv4 address [192.168.10.1/24]: 172.23.142.29/21
Enter the IPv4 address of the default gateway [None]: 172.23.136.1
Enter the interface speed/duplex mode [auto]:
admin user configuration …
Enable strong passwords? [Y]:
Enter the password for admin:
Reenter the password for admin:
Cluster configuration …
Fabric Name: ACI Fabric1
Fabric ID: 1
Number of controllers: 3
Controller name: sec-ifc5
POD ID: 1
Controller ID: 1
TEP address pool: 10.0.0.0/16
Infra VLAN ID: 4093
Multicast address pool: 225.0.0.0/15
Out-of-band management configuration …
Management IP address: 172.23.142.29/21
Default gateway: 172.23.136.1
Interface speed/duplex mode: auto
admin user configuration …
Strong Passwords: Y
User name: admin
Password:
The above configuration will be applied …
Warning: The TEP address pool, infra VLAN ID, and multicast address pool cannot be changed later; they are permanent until the fabric is wiped.
Would you like to edit the configuration? (y/n) [n]: (Press Enter if you do not want to edit the configuration.)
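Once the dialog has completed on each controller, it is worth confirming that the cluster has actually formed. A minimal sketch using the APIC CLI (the apic1# prompt is illustrative, and output formats vary by release):

apic1# acidiag avread
apic1# acidiag fnvread

The first command summarizes APIC cluster membership and health; the second lists the leaf and spine nodes registered with the fabric.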

Spine & Leaf Fabric failures:


If one spine is down: bandwidth degradation.
If both spines are down: the complete ACI fabric is down.
If one leaf fails: bandwidth degradation for vPC-attached devices and no connectivity for single-attached devices.
If an uplink of a leaf fails: bandwidth degradation.
