
DAY-1

UNDERSTANDING THE
EVO-C 8200
Typical hardware structure and connections of Evo Controller 8200/RNC.

The MS (Main Subrack) includes all functions required by the EvoC 8200/RNC. ESs (Extension Subracks) can be added for increased traffic capacity and connectivity. ESs are connected to the MS with 1+1 internal link redundancy.
The processing capacity of the EvoC 8200/RNC scales with the number of ESs and the number of EPB blades within the subracks.
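The scaling statement above can be sketched as a toy capacity model. This is an illustrative assumption, not from the material itself: the 72-EPB ceiling is stated later in this course, while `CONTROL_EPBS` and the per-subrack blade count are hypothetical parameters.

```python
# Toy model: EvoC 8200/RNC capacity scales with the number of traffic EPBs.
MAX_EPBS = 72            # node ceiling stated later in this material
CONTROL_EPBS = 4         # 1+1 EPB C1 pair + 1+1 EPB C2 pair (assumption)

def traffic_epbs(num_subracks: int, epbs_per_subrack: int) -> int:
    """Count EPB blades available for traffic, capped at the node maximum."""
    total = min(num_subracks * epbs_per_subrack, MAX_EPBS)
    return max(total - CONTROL_EPBS, 0)

# MS only, minimum configuration: C1/C2 pairs plus 3 mandatory traffic EPBs
print(traffic_epbs(1, 7))    # 3
# Fully built out, the cap dominates
print(traffic_epbs(4, 20))   # 68
```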
Simple traffic (CP, UP) flow in the RNC:

[Figure: the Iub, Iu/Iur CP, Iu PS UP, and Iu CS/Iur UP interfaces terminate on APP boards and are switched through CMXB(MS) and CMXB(ES) toward EPB(MS)/EPB(ES).]

Used HW configuration: main subrack slot layout (28 slots)

› 11 mandatory boards, including 3 EPBs for traffic:
  2 x SCXB, 2 x CMXB, 2 x EPB C1 (1+1), 2 x EPB C2 (1+1), 3 x EPB (traffic)
› 17 optional board positions for scalability and flexibility, each taking an EPB or an EvoET
› The SCXB, CMXB, EPB C1, and EPB C2 pairs sit mirrored at the two ends of the subrack; the slots between them hold the traffic EPBs and the optional EPB/EvoET boards.
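The slot rules above can be expressed as a small validity check. This is a sketch of the stated rule (11 mandatory boards plus 17 optional positions); the board-type strings and the `None`-means-empty convention are illustrative.

```python
# Validate a main-subrack board list against the stated rules:
# 11 mandatory boards + 17 optional positions = 28 slots.
from collections import Counter

MANDATORY = {"SCXB": 2, "CMXB": 2, "EPB C1": 2, "EPB C2": 2, "EPB": 3}
OPTIONAL_SLOTS = 17

def valid_subrack(boards: list) -> bool:
    """boards: one entry per slot; None marks an empty optional slot."""
    counts = Counter(b for b in boards if b is not None)
    # every mandatory board type must be present in at least the stated number
    if any(counts[k] < v for k, v in MANDATORY.items()):
        return False
    return len(boards) == sum(MANDATORY.values()) + OPTIONAL_SLOTS

config = (["SCXB"] * 2 + ["CMXB"] * 2 + ["EPB C1"] * 2 +
          ["EPB C2"] * 2 + ["EPB"] * 3 + [None] * 17)
print(valid_subrack(config))   # True
```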
Evo processing board, EPB1 1(2)

EPB C1 (1+1):
› 2 x SCTP Front End
› O&M
› RFN server (moved from TUB)

EPB C2 (1+1):
› 2 x SCTP Front End
› Central device handling
› UE register

EPB (3-68) for traffic:

Primary processor:
› CPP
› IP termination
› 3 x Module Controller
› 2 x DC device
› 1 x CC device
› RNSAP
› RANAP
› PCAP

Secondary processor:
› CPP
› IP termination
› 6 x DC device
› 1 x PDR device

Up to 72 EPBs:
1+1 for C1, 1+1 for C2, and 3-68 for traffic
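As a quick arithmetic check of the figures above: positions 3 to 68 give 66 traffic blades, and the two 1+1 pairs add 4 more. The slide's "up to 72" total presumably covers two further positions not itemized here; that gap is left as stated.

```python
# Count EPBs from the stated ranges: C1 and C2 are 1+1 pairs,
# positions 3..68 carry traffic.
c1_pair = 2
c2_pair = 2
traffic = len(range(3, 68 + 1))   # positions 3..68 inclusive
total = c1_pair + c2_pair + traffic
print(traffic, total)             # 66 70
```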
Evo processing board, EPB1 2(2)

[Figure: EPB (W11B) core map, showing the MC/DC/CC/IP/PDR/CPP tasks spread over the 16 cores.]

› Every "traffic" EPB is a blade
› One individual call is handled within the same board
› 2 processors with 8 cores each, 16 cores in total (2 x 8)
› 6 different tasks for the EPB blades:
  › MC: Module Control
  › DC: Dedicated Channel Handling
  › CC: Common Channel Handling
  › PDR: Packet Data Router
  › IP: IP termination
  › CPP: Common Packet Platform
› Common HW for all tasks
› Call handled within the same board
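The 2 x 8 core layout in the figure above can be sketched as two task lists. The per-task core counts are read off the flattened diagram and may not be exact; treat them as assumptions.

```python
# Assumed per-core task assignment on one traffic EPB blade (2 x 8 cores).
PRIMARY   = ["MC"] * 3 + ["DC"] * 2 + ["CC"] + ["IP"] * 2      # 8 cores
SECONDARY = ["DC"] * 6 + ["PDR", "CPP"]                        # 8 cores

cores = PRIMARY + SECONDARY
print(len(cores))          # 16 cores in total (2 x 8)
print(len(set(cores)))     # 6 distinct task types
```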
Evo Controller Provides 10G and 1G Ethernet to all Slot Positions

White fronts and new LEDs

[Figure: SCXB and CMXB3 front views, with 1GE connections toward APP, ExtA, Sync, and RPBS, plus a Dbg port.]

› SCXB provides 24 x 1GE backplane ports and 2 x 1GE front ports
› CMXB3 provides 24 x 10GE backplane ports
› CMXB3 front ports: 4 x 10GE + 4 x 40GE + 4 x 1GE
CMXB3

• The CMXB3 provides the Ethernet switching infrastructure, with a total of 960 Gbps Ethernet switching capacity per EGEM2 subrack.
• The CMXB3 supports 10 Gbps connections between EGEM2 subracks for user-plane traffic; the board is HW-prepared for 40 Gbps connections. Node-external traffic uses up to 8 x 10 Gbps.
• CMXB3 boards work in pairs for 1+1 redundancy.
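One plausible accounting for the 960 Gbps per-subrack figure, stated here as an assumption rather than the source's own derivation: a 1+1 pair of CMXB3 boards, each with 24 x 10GE backplane ports, counted full duplex.

```python
# Assumed breakdown of the 960 Gbps per-subrack switching capacity.
boards_per_subrack = 2   # CMXB3 works in 1+1 pairs
backplane_ports = 24     # 10GE backplane ports per CMXB3
port_gbps = 10
duplex = 2               # count both transmit and receive directions

capacity = boards_per_subrack * backplane_ports * port_gbps * duplex
print(capacity)   # 960
```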
Evo Controller 8200 / RNC internal and external connections
• END OF DAY -1

• Thank You
