GC Architecture and OpenStack 2


Agenda:

• Rakuten Network Deployment Architecture
• Altiostar Application (vCU/vDU/EMS) Architecture
• Port redundancy & Auto Healing
• Infra-related Issues:
  • FH connection failure
  • Stretch issue
  • Packet loss/leakage issue
  • RTD packets not coming to vDU FH
• Debugging:
  • How & what to do to debug the issues
• Open Suggestions
• Open questions from team
Workload Connectivity Schematic
SKU1 Server with 4 vDU Example
Reference Architecture @GC Edge to Cell Site

[Diagram: GC edge to cell site. The GC ring (100G, or 10G rural) and MBH routers uplink two racks containing Edge TORs, FH TORs, SKU2 servers (vCUs, 25G F1 ethernet), SKU1 servers (vDUs, 25G eCPRI) and OOB management switches; the FH TORs reach cell-site RIUs over 10G BiDi links split by PLC splitters, with CPRI from the RIU to the RRHs.]

Uplink
• 100G GC Ring, with Transport (for span budget)
• 2x25G to each Edge TOR (criss-cross design)
• 1x25G to FH TOR (square design)

Edge TOR
• xx ports of 25G → towards SKU1
• xx ports of 25G → towards SKU2, Management Nodes
• 4 ports of 25G → connection to MBH router (criss-cross design)
• 2 ports of 100G → Edge TOR interconnection

FH TOR
• 48 ports of 10G → 10G BiDi, PLC splitter from Cell Sites
• xx ports of 25G breakout (QSFP-100G-SR4-S) → towards SKU1 (FH NICs)
• 1x25G breakout (QSFP-100G-SR4-S) → connection to MBH router (square design)
• 2 ports of 100G → FH TOR interconnection

Management Switch (1 per Rack)
• 1x1G for each server (only SKU3 has 2x1G connection)
• 1x1G for each ToR switch
• 2xSFP 1G uplink to MBH router for management network connectivity across WAN
• 1 or 2x1GE SFP for Mgmt switch interconnect

[Stack diagram, top to bottom:]
• Virtual Machines – vCU/vDU applications running over CentOS
• CVIM OpenStack over Red Hat
• Quanta hardware
SRIOV (Single Root I/O Virtualization)

 The SR-IOV specification defines a standardized mechanism to virtualize PCIe devices. This mechanism can make a
single PCIe Ethernet controller appear as multiple PCIe devices. Each device can be directly assigned to an
instance, bypassing the hypervisor and virtual switch layer. As a result, users are able to achieve low latency and
near line-rate speed.

 SR-IOV uses physical functions (PFs) and virtual functions (VFs) to manage global functions for the SR-IOV devices.
 PFs are full PCIe functions that are capable of configuring and managing the SR-IOV functionality.
 VFs are lightweight PCIe functions that support data flow but have a restricted set of configuration resources.

How SRIOV Works: https://www.youtube.com/watch?v=hRHsk8Nycdg&t=92s
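
As an illustration of how an SR-IOV VF is consumed from OpenStack, the sketch below uses the openstacksdk Python bindings to create a Neutron port with vnic_type 'direct' (so Nova binds it to a free VF, bypassing the virtual switch) and boots an instance on it. The cloud name "cvim" and the network/flavor/image/server names are placeholder assumptions, not values taken from this deployment.

```python
# Minimal sketch, assuming openstacksdk is installed, a clouds.yaml entry "cvim"
# exists, and an SR-IOV provider network "fh-net" is mapped to the FH NIC.
import openstack

conn = openstack.connect(cloud="cvim")          # credentials come from clouds.yaml

net = conn.network.find_network("fh-net")       # provider network backed by the FH NIC

# vnic_type "direct" asks Neutron/Nova to bind the port to a free SR-IOV VF.
port = conn.network.create_port(
    network_id=net.id,
    name="vdu-fh-port",
    binding_vnic_type="direct",
)

# Boot an instance attached to the VF-backed port (flavor/image names are placeholders).
server = conn.compute.create_server(
    name="vdu-1",
    flavor_id=conn.compute.find_flavor("vdu-flavor").id,
    image_id=conn.image.find_image("centos-vdu").id,
    networks=[{"port": port.id}],
)
conn.compute.wait_for_server(server)
```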


vDU Flavor properties

• hw:cpu_l3_cachelines='4'
• hw:cpu_policy='dedicated'
• hw:cpu_realtime='yes', hw:cpu_realtime_mask='^0-1'
• hw:cpu_thread_policy='require'
• hw:emulator_threads_policy='share'
• hw:mem_page_size='1048576'
• hw:numa_mempolicy='strict'
• hw:numa_nodes='1'
• pci_passthrough:alias='vc_fpga:1'

Notes:
 CPU pinning for an instance (hw:cpu_policy='dedicated') together with hw:cpu_thread_policy='require' places each vCPU on thread siblings.
 Hugepage support is required for the large memory pool allocation used for packet buffers.
 By using hugepage allocations, performance is increased since fewer pages are needed, and therefore fewer Translation Lookaside Buffer (TLB) lookups are required.
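
Below is a sketch of how such a flavor could be created and the extra specs applied through openstacksdk. The flavor name and the RAM/vCPU/disk sizing are placeholder assumptions, the extra specs are copied from the list above, and the extra-spec helper name follows recent openstacksdk releases (older installs can use `openstack flavor set --property ...` instead).

```python
# Sketch only: flavor name and sizing are placeholders; extra specs as listed above.
import openstack

VDU_EXTRA_SPECS = {
    "hw:cpu_l3_cachelines": "4",
    "hw:cpu_policy": "dedicated",
    "hw:cpu_realtime": "yes",
    "hw:cpu_realtime_mask": "^0-1",
    "hw:cpu_thread_policy": "require",
    "hw:emulator_threads_policy": "share",
    "hw:mem_page_size": "1048576",      # 1 GiB hugepages
    "hw:numa_mempolicy": "strict",
    "hw:numa_nodes": "1",
    "pci_passthrough:alias": "vc_fpga:1",
}

conn = openstack.connect(cloud="cvim")           # cloud name is an assumption

flavor = conn.compute.create_flavor(
    name="vdu-flavor", ram=65536, vcpus=16, disk=100,   # placeholder sizing
)
# Attach the extra specs; helper name per recent openstacksdk releases.
conn.compute.create_flavor_extra_specs(flavor, VDU_EXTRA_SPECS)
```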
Redundancy for FH Interface
[Diagram: Router 1 and Router 2, each terminating an SVI, are joined by a Layer 3 routed link. Server 1 and Server 2 each run a vDU with its own vMAC (Mac1, Mac2) and connect to TOR 1 and TOR 2 via P ports. Y-cables fan out from the FH TOR ports to RIU 1-4. The FH VLANs 500 and 720 are carried both as normal VLANs and as isolated (Isol) VLANs. Reachability between port types: P to P – OK, P to I – OK, I to I – NO.]


Auto Healing Call Flow
NFV Architecture
