
8 Private Network-to-Network Interface (PNNI)

The Passport Private Network-to-Network Interface (PNNI) routing system simplifies the configuration of large networks for ATM switched connections. The protocol implements a powerful dynamic routing mechanism that eliminates manual configuration and ensures automatic updates of routing tables in the case of topology changes and network resource shortages. It is based on shortest-path algorithms that are distributed across the network for greater reliability. PNNI provides standardized signaling/routing messages and interworking functions, establishes and maintains clear point-to-point and point-to-multipoint connections, and provides a variety of route selection criteria and performance constraints that are especially valuable in large WANs. In addition, the Passport PNNI implementation supports the new edge-based rerouting capabilities necessary for advanced restoration mechanisms, also known as route optimization and route recovery. This is complemented by value-added route balancing algorithms, route caching and configurable multi-path variance that maximize the overall performance of the network.

This chapter assumes that the reader is familiar with PNNI routing and signaling; while it is not
intended as a tutorial, the basic concepts and terms of PNNI are herein discussed.

8.1 Overview
The PNNI protocol is standardized by the ATM Forum to provide reliable, highly scalable and
dynamic routing to ATM networks. PNNI supports different ATM switched services such as
SVCs, SVPs, SPVCs and SPVPs. The value of PNNI is best illustrated by its ability to scale to
thousands of nodes with minimal operational overhead while still maintaining loop-free routing, QoS
guarantees and other powerful routing capabilities. Although the major benefits of PNNI are
realized in large networks, the protocol is now mature enough to be well suited to smaller
networks as well. In the past, IISP static routing or proprietary protocols were used for routing in
small networks. However, the problems inherent to static routing protocols (manual
configuration, possible routing loops, no dynamic topology updates or changes) are reason
enough for most networks to introduce or migrate to PNNI.

In addition to the benefits inherent to the protocol, Passport offers many value-added optional
features and a wide selection of proprietary route selection, route optimization, load balancing
and performance-enhancing functionality.



8.2 Routing in a hierarchical PNNI domain


Using peer groups and a hierarchy, PNNI networks can scale to enormous sizes. The protocol
imposes few restrictions on how to form and build such a hierarchical network, nor does it
offer recommendations on how to do so. This section discusses some considerations and guidelines on
when it is appropriate to build a hierarchical network, and how to engineer one.

PNNI is a source routing protocol. This implies that a single node calculates the “hierarchically
complete” route, thereby eliminating the possibility of routing loops. A common characteristic of
source routing protocols is that every node has an identical view of the topology. This
requirement enforces consistent routing in the network, and forces each separate node to learn
the topology and characteristics of the other nodes and links in the network.

The PNNI routing protocol consists of procedures and sub-protocols that control the distribution
of the topology information to each PNNI node. Because each node maintains its own database
reflecting the network topology, the database grows as the topology grows. Ultimately, the
switch resources required to support such a database will become exhausted, limiting the size
and growth of the network. PNNI addresses this by defining smaller sub-networks where each
node in the sub-network only has to learn and maintain a database of the peers in its sub-network.
These sub-networks are called "peer groups" and can be groups of up to 300 contiguous
Passport nodes.

To maintain the connectivity of nodes in different peer groups, PNNI defines special roles for
switches that effectively form a logical hierarchy. The function of the hierarchy is to achieve
network scalability and decrease call setup times. To achieve this, mechanisms are put in place to
distribute a minimal amount of routing information between nodes while still maintaining
network integrity. The hierarchy is constructed such that nodes closer to the apex represent entire
peer groups below them in a parent/child relationship. Groups of parent nodes form higher level
peer groups. The position of each node in the hierarchy is defined by its peer group id and level.
The peer group id defines the peer group borders while the level specifies where in the hierarchy
this peer group resides.

Reachability information is exchanged between nodes in the same peer group. Starting at the lowest
level of the hierarchy, this information is summarized and passed upwards to the parent node.
The parent node in turn summarizes its peer group information and passes it upward until the
top-level peer group is reached. This information is exchanged among the parent nodes, and can then be
passed down the hierarchy to other child peer groups. It is through this mechanism that each
child peer group learns about other child peer groups, and which parent peer group claims
reachability to them. Each node in the lowest level peer group now has the information to
complete routing to every other node in the hierarchy. The local nodes no longer require the link
and topology information of the remote peer groups. Each node relinquishes memory in its own
topology database by purging the link state information from the remote peer groups, which
allows PNNI to achieve scalability.

Because topology information is localized on a peer group basis, in a hierarchical network the
entry border node of each peer group is responsible for completing the routing for connections
transiting this peer group. This implies that a single connection will be routed as many times as it
enters a new lowest level peer group.

To facilitate the construction of the hierarchy, PNNI defines four special switch roles:
• Peer Group Leader (PGL)
• Logical Group Node (LGN)
• Border Node (BN)
• Interior Node (IN)
A PGL is elected in each peer group and is configured with the policies that will represent its
peer group in the higher levels of the hierarchy.

LGNs are virtual nodes created on the same system as the PGL, and represent the child peer
group by broadcasting the policies configured on the corresponding PGL to other LGNs.

Border nodes are nodes that have a PNNI connection to a node outside their own peer group. Border
nodes have the special job of logically connecting the lower layers of the hierarchy to the higher
layers with virtual links (called uplinks) that can be routed on. Border nodes are also responsible
for routing connections through their peer group.

INs are any other nodes in the hierarchy: they are defined by not being a PGL or LGN, and by
having links only to nodes inside their own peer group.

Passport release PCR1.3 supports all four types of PNNI switch roles and is fully capable of
supporting a PNNI hierarchy at any level.

Introducing hierarchical PNNI means that the source node cannot generate the complete route for
inter-PG connections because it does not have full information about the destination PG and
tandem PG(s). Instead, the source node uses its knowledge of its own PG and all its ancestor PGs
to calculate a “hierarchically complete” source route.

Section 8.2.1 Example: flat vs hierarchical PNNI shows two examples. The first is a simple
network illustrating the differences in routing between flat and hierarchical networks with
identical physical topologies. The second is a more realistic example of a network configuration
where a number of PG selections are compared to illustrate the trade-offs between flat and
hierarchical topologies.

8.2.1 Example: flat vs hierarchical PNNI


Figure 8-1 shows the difference in routing in a flat versus hierarchical network.


[Figure: a) Flat PNNI topology (same as the physical network), showing nodes A.1, A.2, A.3, B.1, B.2, B.3 and C; b) Hierarchical PNNI topology, with logical nodes A, B and C in the level 20 peer group and the physical nodes in level 80 peer groups; c) Peer Group A's view of the network. Legend: horizontal link - there is database exchange across horizontal links; outside link - there is no database exchange across outside links; uplink - border nodes advertise their uplinks in PTSEs flooded in their respective peer groups; node C - logical node ID is "C".]
Figure 8-1 Source routes in flat versus hierarchical PNNI

Diagram B shows logical group nodes (LGN) A, B and C in the level 20 PG. Diagram A
shows the network as flat PNNI where every node has the entire view of the network. In
order to make a connection from A.1 to C, source node A.1 has up to six routes to
reach C:

A.1->A.3->C
A.1->A.2->A.3->C
A.1->A.2->B.1->B.3->C
A.1->A.2->B.1->B.2->B.3->C
A.1->A.3->A.2->B.1->B.3->C
A.1->A.3->A.2->B.1->B.2->B.3->C

Routes with insufficient bandwidth are pruned, and the best route is chosen
by A.1 according to Aw (or CTD/CDV). In hierarchical PNNI, A.1 has three choices to
reach C; they are hierarchically complete source routes, since A.1 only has a view of
its own PG and ancestor PGs, as in Diagram C:

A.1->A.3->C
A.1->A.2->A.3->C
A.1->A.2->B->C

A.1 does not know whether B or C are physical nodes or LGNs. Passport uses simple node
representation, which summarizes an entire PG to a single node that is connected to
other nodes via the outside links to the PG. Note that the internal structure of PG B is not
visible to A.1, so if the route A.1->A.2->B->C is chosen as the best route, then the
outside link A.2->B.1 will be used, and node B.1 will have to route across PG B in order
to get to C. B.1 is the entry border node and must calculate whether B.1->B.3->C or
B.1->B.2->B.3->C is indeed the best route. This illustrates how flat PNNI is entirely
source routed, while hierarchical PNNI routing is shared between the source node and
all entry border nodes en route to the destination.
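
To make the comparison concrete, the following sketch (Python, with invented Aw and available cell rate values that are not taken from any Passport configuration) prunes candidate routes that cannot carry the requested bandwidth and then picks the lowest cumulative Aw, which is essentially the generic selection A.1 performs when building its DTL:

    # Illustrative only: the link metrics below are hypothetical.
    # Each link carries an administrative weight (Aw) and an available cell rate (AvCR).
    links = {
        frozenset({"A.1", "A.3"}): {"aw": 100, "avcr": 50_000},
        frozenset({"A.3", "C"}):   {"aw": 100, "avcr": 20_000},
        frozenset({"A.1", "A.2"}): {"aw": 100, "avcr": 80_000},
        frozenset({"A.2", "A.3"}): {"aw": 100, "avcr": 80_000},
        frozenset({"A.2", "B.1"}): {"aw": 100, "avcr": 80_000},
        frozenset({"B.1", "B.3"}): {"aw": 100, "avcr": 80_000},
        frozenset({"B.1", "B.2"}): {"aw": 100, "avcr": 80_000},
        frozenset({"B.2", "B.3"}): {"aw": 100, "avcr": 80_000},
        frozenset({"B.3", "C"}):   {"aw": 100, "avcr": 80_000},
    }

    candidate_routes = [
        ["A.1", "A.3", "C"],
        ["A.1", "A.2", "A.3", "C"],
        ["A.1", "A.2", "B.1", "B.3", "C"],
        ["A.1", "A.2", "B.1", "B.2", "B.3", "C"],
        ["A.1", "A.3", "A.2", "B.1", "B.3", "C"],
        ["A.1", "A.3", "A.2", "B.1", "B.2", "B.3", "C"],
    ]

    def route_links(route):
        return [links[frozenset({a, b})] for a, b in zip(route, route[1:])]

    def best_route(routes, required_cr):
        # Prune routes that cannot carry the requested cell rate, then choose the
        # lowest cumulative Aw (CTD/CDV checks are omitted from this sketch).
        usable = [r for r in routes
                  if all(l["avcr"] >= required_cr for l in route_links(r))]
        return min(usable, key=lambda r: sum(l["aw"] for l in route_links(r)),
                   default=None)

    print(best_route(candidate_routes, required_cr=30_000))
    # -> ['A.1', 'A.2', 'B.1', 'B.3', 'C']: both routes via A.3->C are pruned
    #    because that link cannot carry 30,000 cells/s in this example.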

8.2.2 Routing in a hierarchical domain


In the hierarchical case A.1 has less information than in the flat case. Because simple
node representation is used, A.1 does not know the internal structure of PG B, and
so the associated cost of traversing PG B is zero. For example, in route A.1->A.3->C
versus A.1->A.2->B->C, the links B.1->B.3, B.1->B.2 and B.2->B.3 all have zero cost
(e.g. Aw), so A.1 is really comparing the cost of route A.1->A.3->C with the cost of
A.1->A.2->B.1 plus the route B.3->C.

Connections that traverse a PG boundary may be routed sub-optimally because there is
no way for the connection to know ahead of time which outside link will result in the best
route. There is no general rule that can be applied to assess if or how sub-optimal the
resulting route will be. One method of adjusting for the cost of PG traversal is to increase
the Aw of the outside link to account for the PG traversal cost. Great care should be
exercised, however, as there may be unwanted side-effects such as the link in question
being bypassed by other connections despite being part of their optimal route.
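
The effect can be shown with a small worked example. The Aw values below are hypothetical, chosen only to illustrate why the zero-cost view of PG B can mislead A.1 and how inflating the outside link partially compensates:

    # Hypothetical Aw values for illustration; not taken from this guideline.
    aw = {"A.1-A.3": 250, "A.3-C": 100,      # route via A.3, fully visible to A.1
          "A.1-A.2": 100, "A.2-B.1": 100,    # outside link into PG B
          "B.1-B.3": 150, "B.3-C": 100}      # interior of PG B, invisible to A.1

    seen_by_a1 = aw["A.1-A.2"] + aw["A.2-B.1"] + 0 + aw["B.3-C"]   # PG B appears free
    actual     = aw["A.1-A.2"] + aw["A.2-B.1"] + aw["B.1-B.3"] + aw["B.3-C"]
    via_a3     = aw["A.1-A.3"] + aw["A.3-C"]

    print(seen_by_a1, actual, via_a3)   # 300 450 350
    # A.1 prefers the route through PG B (300 < 350) even though its true cost is 450.
    # One mitigation is to inflate the outside link, e.g. aw["A.2-B.1"] += 150, at the
    # risk of that link being bypassed by other connections.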

8.2.3 Complex node representation


Complex node representation is used to represent the aggregation of a PG's topology
information in the parent PG more richly than simple node representation does.

Complex node representation is a symmetric star topology with a uniform radius. The
center of the star is the interior reference point of the logical node, and is referred to as
the nucleus. The logical connectivity between the nucleus and a port of the logical node is
referred to as a spoke. The concatenation of two spokes represents traversal of the
peer group.

Complex node representation attempts to provide more information to the LGN so that a
better choice of outside links can be made for PG traversal. The topology of PGs in
real-world networks is typically much more complex than a simple hub-and-spoke
topology, so other representations, such as a minimum spanning tree, may be superior.

Furthermore, complex node representation adds no value unless the PG is traversed.

Routing to an internal reachable address in a logical group node corresponds to choosing
a spoke. However, the destination node may be the entry border node, in which case
adding the cost of the "spoke" to reach the nucleus of the PG is misleading.


In summary, routing considerations in hierarchical PNNI depend much more on
topology and PG selection than on complex versus simple node representation.
Other considerations include outside link placement, the implications of hierarchical
crankback, and the relative benefits of two-level versus three-level routing.

Without complex node representation, it is important to scale each peer group with
relatively the same diameter and size as each of the other peer groups. Failure to do this
may cause sub-optimal routing because the zero cost of each LGN will not be a true
reflection of the cost of each peer group. If each peer group has the same diameter and
roughly the same QoS metrics, then the equal cost of each peer group negates any large
implications of choosing one particular peer group over another.

Also, equally sized peer groups allow the EBR feature to choose an optimal route
throughout the PNNI domain.

8.3 PNNI and ATM Addressing


An addressing plan is one of the first steps in designing a PNNI network. A new plan can be
designed around a growing network, taking into account the individual needs of that network and
its future growth. Alternatively, an existing plan can be reused. Either option impacts the design
of a PNNI network, as the addressing plan directly affects the design of PNNI peer groups and
hierarchical levels.

Developing an addressing plan is a fundamental step to offering signaled connections (e.g. ATM
SVCs or SPVCs) in a network. The addressing plan is used to uniquely identify ATM endpoints
where connections are terminated. Network nodes, other networks or Customer Premise
Equipment (CPE) are examples of connection endpoints. An effective addressing plan will
identify each endpoint, and provide summarization of groups of endpoints to reduce the size of
network routing tables (summarization is discussed in section 8.3.3 Address Summarization) and
allow for flexibility and portability of each of the addresses. Furthermore, an addressing plan
must also consider the future growth of the network and prepare for possible PNNI peer group
boundaries where they are currently not required. Section 8.3.1 Network Growth and Peer Group
Boundaries discusses the relationship between peer groups and NSAP addresses.

8.3.1 Network Growth and Peer Group Boundaries


PNNI defines a peer group as a collection of contiguous nodes that share the same
topology database, all identified by an identical peer group identifier (PGID). The PGID
is a 14 octet string composed of a one octet PNNI level indicator and 13 octets
from the switch NSAP prefix. All nodes within a peer group have the same PGID;
therefore all nodes in a peer group must have some number of bits of their NSAP address
in common. The number of common bits in the NSAP addresses of nodes in a peer group
should be planned carefully in the addressing scheme. Often future peer group locations
are planned before the actual need for them. Please refer to section 8.5 Peer Group
Boundaries for a discussion on how peer group boundaries can be formed.
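
As a minimal illustration of the PGID structure described above, the following sketch builds a 14-octet PGID from a level and a 13-octet prefix by keeping only the top 'level' bits of the prefix (the prefix value is invented for the example):

    # Illustrative sketch: one level octet followed by the prefix masked to
    # 'level' significant bits.  The sample prefix is a made-up value.
    def pgid(level: int, prefix13: bytes) -> bytes:
        if not (0 <= level <= 104) or len(prefix13) != 13:
            raise ValueError("level must be 0..104 and the prefix exactly 13 octets")
        value = int.from_bytes(prefix13, "big")
        mask = ((1 << level) - 1) << (104 - level)       # keep the top 'level' bits
        return bytes([level]) + (value & mask).to_bytes(13, "big")

    prefix = bytes.fromhex("47234567891122334455667788")
    print(pgid(80, prefix).hex())    # 5047234567891122334455000000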

8.3.2 Addressing and PNNI Levels


The number of bits that the NSAP node prefixes (and PGIDs) have in common is used to
define the PNNI hierarchical level. The level is determined by specifying how many of
the common bits (most to least significant) in the PGID are to be considered significant.
Figure 8-2 illustrates Address A and Address B with 10 octets of their respective switch
prefixes in common. Therefore, we can define these two switches to be in the same peer
group at any level ranging from 1 to 80. At level 1 (e.g. the first bit), the two addressing
plans have the value 0 (binary) in common. This is because the hexadecimal
value 4 (the most significant nibble of both addresses) is represented in binary as
'0100'. This commonality extends to the end of the 10th octet, at which point these
two switches have 80 bits in common.

In this example, the level can be arbitrarily set to any value between 1 and 80. When
migrating a flat PNNI network to a hierarchy, a "top down" approach is recommended
(see section 8.9 Migrating to a PNNI Hierarchical Network). If the top down approach
is applied, the actual value of the level is inconsequential. The only requirement is that
nodes in the same geographic area have additional common bits beyond the 80th bit.
These bits will be used to uniquely identify child peer groups, similarly to how the single
peer group was represented.

[Figure: Address A (hexadecimal, beginning 4 7 2 3 4 5 6 7 8 9 ... 1 1 ...) and Address B (hexadecimal, beginning 4 7 2 3 4 5 6 7 8 9 ... F F ...) have the first 80 bits (10 octets) of their addresses in common. The first hexadecimal digit of both addresses is 4, which is binary 0100, so bit 1 has the value 0 in both. A level ranging between 1 and 80 is therefore supported.]
Figure 8-2 Sample NSAP addresses
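
A small sketch of this calculation follows; the two 13-octet prefixes are invented values that, like Addresses A and B of Figure 8-2, agree in their first 10 octets:

    # Count the identical most-significant bits shared by two equal-length prefixes.
    def common_prefix_bits(a: bytes, b: bytes) -> int:
        bits = 0
        for x, y in zip(a, b):
            diff = x ^ y
            if diff == 0:
                bits += 8
            else:
                bits += 8 - diff.bit_length()   # matching leading bits of this octet
                break
        return bits

    addr_a = bytes.fromhex("47234567891234567890" "111111")
    addr_b = bytes.fromhex("47234567891234567890" "ffffff")
    print(common_prefix_bits(addr_a, addr_b))   # 80, so any level from 1 to 80 works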

Although the PNNI protocol supports up to 104 levels (13 octets of prefix × 8 bits), typically
only a two level hierarchy is required and recommended for most networks, if a hierarchy
is required at all. Whenever possible, a single lowest level peer group is recommended.
The major benefits of hierarchy are only realized once the single peer group has grown to
the point where it exhausts the switch resources needed to support routing efficiently.
Efficiency is based on network stability, call setup performance and route convergence
times in combination with the specific services offered in the network. For more
information on deciding how many levels are appropriate, please refer to section 8.7
PNNI Multi-level Hierarchical Configurations.


Note: If a single lowest level peer group is chosen, PNNI level 1 can always be used to
create the peer group id, regardless of the addressing plan and format.

A two level hierarchy is typically defined by one parent level peer group composed of
logical group nodes and at least two lower level child peer groups. The child (lowest
level) peer groups do not have to be at the exact same level, but must define the same
parent level peer group and must themselves have a lower level than the parent group. (If
address scoping with a value other than zero is used, there could be reachability
implications.) Few network topologies require three levels of hierarchy (e.g. grandparent,
parent and child peer groups). Justification of such a hierarchy would include
international networks with tens of thousands of nodes, or special migration scenarios.

In most instances, networks today only require a single lowest level peer group. The
operator is encouraged to change the node NSAP addresses as little as possible because
doing so on a Passport switch requires a reboot. In order to offer switched services now
and avoid unnecessary reboots, a single addressing plan should be able to identify the
endpoints in a single peer group, but also be flexible enough to be reused in a
hierarchical environment. Even when using a single peer group, the network
addressing plan must be aligned with the future growth patterns of the network in order to
effectively partition future peer groups.

[Figure: a two level PNNI hierarchy. The parent peer group (Peer Group 4) at level 32 with PGID 12345678 contains LGN 1, LGN 2 and LGN 3. Below it are the child peer groups PG1, PG2 and PG3, at levels 80 and 72, with PGIDs 1234567890, 1234567822 and 123456783 respectively.]
Figure 8-3 Two level PNNI hierarchy

8.3.3 Address Summarization


Address summarization is used to encapsulate many reachable addresses by the
advertisement of a single address to represent them. This is accomplished by advertising
the minimum number of common bits (most significant to least) of each of the addresses
that uniquely represent them in the network.


At the nodal level, Passport supports this concept by offering "summary" addresses.
Summary addresses are used to summarize UNI endpoints on a node by advertising
only the common bits of the end addresses. The simplest form of this is done via Interim
Local Management Interface (ILMI) registration of UNI addresses. This process appends
the CPE's ESI and selector (the ESI is typically the 6 octet IEEE MAC address) to the
13 octet node prefix, forming a 20 octet NSAP address. Because all the ILMI registered
addresses have the same (unique) 13 octet prefix, only that prefix needs to be advertised
in place of all the addresses.
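
A minimal sketch of the idea, using an invented 13-octet node prefix and invented ESI values in place of real ILMI-registered MAC addresses:

    # Three ILMI-registered 20-octet addresses on one node (hex strings).
    node_prefix = "47009912345678901234567890"          # made-up 13-octet prefix
    registered = [
        node_prefix + "00a0c9112233" + "00",             # prefix + 6-octet ESI + SEL
        node_prefix + "00a0c9445566" + "00",
        node_prefix + "00a0c9778899" + "00",
    ]

    # A summary address advertises only the common high-order part.
    assert all(addr.startswith(node_prefix) for addr in registered)
    summary_advertisement = node_prefix                  # one prefix replaces all three
    print(summary_advertisement)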

Address summarization also applies to a PNNI hierarchical network. It is the role of each
Peer Group Leader (PGL) to summarize all the reachable addresses in its peer group,
such that nodes in other peer groups can reach them. The summarized list of addresses is
effectively broadcast to the rest of the network.

To facilitate effective summarization, the addressing plan must place the most general
information about the address in the most significant bits. Usually, the generalizations are
based on geographical regions. Using geographical regions is not a requirement, but for
simplicity they are used in the examples and discussions that follow.

Using a geographical example, more general areas would imply larger, more expansive
geographical regions. For instance, the state of Texas is more general than the city of
Dallas. Both imply specific places, but Dallas provides a finer granularity than Texas, as
Dallas is a city in Texas. By advertising Texas, the city of Dallas (as well as every other
city in Texas) is implied.

Consider Figure 8-4 to help further illustrate. Here there are 3 major fields: city, block,
and house. Using this format, the multiple blocks and houses in a city can be effectively
summarized by advertising the single city address. In a network routing scenario, the city
address would be provisioned on a node with reachability to this particular city. Any calls
terminating in this city should route to the node advertising the city address. Advertising
the city address implies the individual city blocks and houses.

[Figure: an address laid out as CITY, BLOCK and HOUSE fields, ordered from the most significant field to the least significant field.]

Figure 8-4 Geographical regions example
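
In routing terms, a call is matched against the longest advertised prefix it falls under. The sketch below uses hypothetical prefixes and node names standing in for the "city" and "block" summaries:

    # Longest-prefix match against advertised summary addresses (hex strings).
    advertised = {
        "4700991111":   "node-city",       # "city" level summary
        "470099111122": "node-city-blk22", # more specific "block" level summary
    }

    def select_next_hop(called_address: str):
        matches = [p for p in advertised if called_address.startswith(p)]
        return advertised[max(matches, key=len)] if matches else None   # longest wins

    print(select_next_hop("47009911112233445566778899aabbccddeeff00"))  # -> node-city-blk22
    print(select_next_hop("47009911113344556677889900aabbccddeeff00"))  # -> node-city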

The major drawback to address summarization is that it promotes routing ambiguity.
Summarization reduces the information in the network routing tables; however, the
information available is less exact and more ambiguous. If the city address is advertised
to a group of nodes that wish to connect to a house in that city, a "guess" is made that the
house (destination address) actually exists in that city. It may be that the house does not
exist, but this is impossible to tell with only the city address advertised. In terms of
routing, this leads to more calls set up to destinations that don’t exist, unnecessarily
wasting network resources.

8.4 Addressing Formats


Passport supports the following NSAP addressing formats: International Code Designator (ICD),
Data Country Code (DCC), X.121, and native and embedded E.164. To ensure global scalability,
ATMSP networks must register their addresses with a standards organization. If private networks
do not intend to connect to ATMSP networks, then private unregistered addresses can be used.
However, if a private network should ever plan to connect to an ATMSP network, then the
private network would have to register its addresses.

8.4.1 Selecting an Address Format


DCC NSAPs are intended for organizations that operate networks within a country and
wish to interconnect to other networks outside their jurisdiction that also support NSAP
addressing. However, there is no restriction preventing a DCC address registered in a specific
country from being used outside of that country. The use of such an address should be
considered on a case by case basis. The standards organization in the country where the
DCC was registered should be consulted. Furthermore, if the address will be crossing an
ATMSP/private network boundary, a bilateral agreement should be negotiated to ensure
that the ATMSP network will accept such an address. ATMSP networks are mostly
concerned about scalability, and these addresses may not be capable of being summarized
in the ATMSP network.

ICD NSAPs are designed for organizations that are international in scope. As such these
organizations do not wish to be tied to any country in the hierarchically structured
address scheme and require globally unique ATM addresses for network interworking.
NSAP E.164 addresses are intended for organizations that own blocks of E.164 numbers
and are willing to administer their assignment according to the ITU-T recommendations.

Choosing among the previous three is essentially a non-technical issue. However, the
following advantages can be gained from using NSAP addresses over E.164 addresses:
• ease of interworking between public and private ATM networks, since the NSAP
address format is recommended in both environments by the ATMF
• the ILMI protocol, which allows private ATM switches to automatically register the
terminals' (hosts') IEEE MAC addresses with the high order part of the ATM NSAP
node addresses in order to form complete and unique 20 byte NSAP addresses
• ISO NSAPs are longer than E.164 addresses; this allows more hierarchy and the
capability to address any ATM UNI even in a very large ATM network without a
need for sub-addressing (the larger NSAP addresses allow considerably greater
flexibility and scalability)

• the PNNI 1.0 dynamic routing protocol makes use of NSAP addresses in its topology
database

8.4.2 Network Service Access Point (NSAP) Address Structure


The syntax of OSI Network Service Access Point (NSAP) addresses is defined in the
ATMF UNI 4.0 specification with reference to the ISO 8348 and ITU-T X.213 documents.
Figure 8-5 illustrates the ATM NSAP address formats. They are classified as Data Country
Code (DCC), International Code Designator (ICD), X.121 or E.164 encapsulated.

They start with a one octet Authority and Format Identifier (AFI) followed by an
Initial Domain Identifier (IDI): two octets for DCC and ICD addresses, and eight octets
for an embedded E.164 number. The AFI determines the format of the remainder of the
address. The hexadecimal values 0x39, 0x47 and 0x45 indicate that the NSAP addressing
structure conforms to the ISO DCC, ICD and E.164 addressing schemes respectively.
The IDI identifies the addressing authority (the authority responsible for allocating the values
and the syntax of the HO-DSP field). The IDI can be a DCC code, ICD code or E.164
number.

The remainder of the address consists of three fields: the high-order Domain Specific
Part (HO-DSP), the end-system identifier (ESI), and a selector (SEL) field. The authority
identified by the IDI field decides the structure of the HO-DSP. For example, a national
ISO member body formats the HO-DSP of DCC addresses, while the HO-DSP of
ICD addresses is decided by the international organization.

The ESI identifies an end-system and must be unique within a particular value of AFI +
IDI + HO-DSP.

The ESI is usually an IEEE MAC address. The SEL field is not used to deliver calls to
end-systems, but can be used within an end-system to differentiate between
applications or processes. It is usually set to the hexadecimal value 00. For purposes of route
determination, end systems are identified by the 19 most significant octets of the ATM
NSAP address.


[Figure: ATM end-system address (AESA) formats, each 20 octets:
DCC format - AFI 39, DCC, HO-DSP, ESI, SEL (00)
ICD format - AFI 47, ICD, HO-DSP, ESI, SEL (00)
X.121 format - AFI 37, X.121, HO-DSP, ESI, SEL (00)
E.164 format - AFI 45, E.164, HO-DSP, ESI, SEL (00)
AFI = authority and format identifier; DCC = data country code; ICD = international code designator; HO-DSP = high-order domain specific part; ESI = end-system identifier (for example, IEEE MAC address); SEL = selector.]

Figure 8-5 NSAP formats
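
An illustrative parser for the structure just described is sketched below; it handles only the DCC and ICD formats (two-octet IDI), and the sample address is an invented value:

    # Parse a 20-octet DCC or ICD ATM end-system address (AESA).
    def parse_aesa(addr: bytes) -> dict:
        if len(addr) != 20:
            raise ValueError("an AESA is exactly 20 octets")
        afi = addr[0]
        if afi not in (0x39, 0x47):
            raise ValueError("this sketch handles only DCC (0x39) and ICD (0x47)")
        return {
            "afi":    f"{afi:02x}",
            "idi":    addr[1:3].hex(),    # DCC or ICD code
            "ho_dsp": addr[3:13].hex(),   # authority-defined high-order DSP
            "esi":    addr[13:19].hex(),  # end-system identifier, e.g. an IEEE MAC
            "sel":    addr[19:20].hex(),  # selector, ignored for route determination
        }

    sample = bytes.fromhex("47009912345678901234567890" "00a0c9112233" "00")
    print(parse_aesa(sample))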

8.4.3 Designing an Addressing Plan


The general rule when designing an ATM addressing plan is to organize the fields into a
hierarchical format, where higher levels of the hierarchy can summarize lower levels of
the hierarchy. The hierarchy is usually defined by geographical areas, where larger, more
general areas reside at the higher levels of the hierarchy and the lower levels designate
more specific and detailed information.

Planning by geographic region applies to the ATMSP network as well as to managed
private networks. Managed private networks are private networks that adopt the
addressing scheme of the ATMSP network. Whether a managed network is connected
via IISP, AINI, public UNI or PNNI, smart addressing will reduce the number of routing
addresses in the ATMSP network.

Many factors can influence how the hierarchical nature of the address is formed. The
remainder of this section, 8.4 Addressing Formats, discusses these factors.

8.4.4 Required Network Services


The services that are offered by the public network directly impact the addressing plan,
especially when the public network connects to other private networks. Calls terminating
in the adjacent private networks have implications on how they are identified in the
public network addressing plan.

With respect to addressing, all switched services are broken into two categories:


• Connections that terminate inside this routing domain (type 1)
• Connections that terminate outside this routing domain (type 2)

The domain is defined as the ATMSP network that the addressing plan is designed for.
Examples of connections that terminate outside of the domain are SPVCs or SVCs that
originate in or pass through the ATMSP network but terminate in an adjacent private
network.

The biggest reason for the distinction is the use of the ESI portion of an address. Type 2
connections should not include the 7 octet End System Indicator (ESI) portion of the
address in any addressing plan. The ESI fields are typically reserved by private networks
that ILMI-register their end customer addresses. The ESI portion is not important for
routing the call in the ATMSP network, as that level of granularity only matters in the
private network. The public network must route to the highest (most general) address
uniquely identifying the private network. If the private network ESI addresses were
advertised in the ATMSP network, the ATMSP network would be flooded with addresses
identifying the private network endpoints.

[Figure: an ATMSP cloud connected via IISP to a private cloud. CPE addresses "13octetPrivPrefix:7octetESI_1" and "13octetPrivPrefix:7octetESI_2" are provisioned (ILMI registered) in the private network. An SVC sourced in the ATMSP network uses the called address 13octetPrivPrefix:7octetESI_1. The private network address "13octetPrivPrefix" is provisioned on the egress ATMSP interface towards the private network; the private network ESI values are not provisioned in the ATMSP network.]

Figure 8-6 Private network addressing

8.4.5 Number of Addressing Plans


Depending on the services offered by the ATMSP network, two addressing plans may be
desirable. Having two addressing plans is not a necessity, but there are benefits to be had,
as outlined in this section.

An SVC addressing plan (type 2) cannot use the ESI portion of the address, and as such
any subscriber information should be kept in the first 13 octets of the address. In this
instance, the subscriber information would share the same space as the node id, region
identifiers and other geographic information that has already been encoded. If only a
single addressing plan is in place, then the addressing plan only considers the first 13
octets for both SVCs and SPVCs. This means that connections terminating inside the
ATMSP network (e.g. SPVCs) as well as connections terminating outside the network are
sharing the 13 octet space. The consequence is that the subscriber information
space reserved in the 13 octets is shared between SVC addresses and SPVC
addresses. Under normal circumstances this would suffice if the geographic
regionalization fields do not require an excessive number of bits. However, it may not be
possible to satisfy both the geographic regionalization and the expected number of SVCs
and/or SPVCs in the first 13 octets. In this case, two separate addressing plans can be
used for SVCs and SPVCs respectively.

An SVC (type 2) addressing plan would mirror the one described above but would not
have to share its subscriber space with the SPVC connections. The SPVCs (type 1) would
have a separate plan that could use the ESI portion of the NSAP. For SPVCs, the call
terminates in the ATMSP network, and the ESI is a well-known and distinguished routable
entity.

Any SPVC terminating outside the ATMSP network would follow the SVC addressing
plan, because it will enter the private network as an SVC and will be routed as one.

8.4.6 End System Indicator (ESI) Portability


Currently, each Passport ATM interface has a default 20 octet NSAP address. The first
13 octets are commonly referred to as the node prefix; the last 7 are referred to as the
End System Indicator (ESI) fields. Passport provides default NSAP addresses for each
ATM interface. These addresses can be used to terminate SPVCs or transit SVCs. Each
address (by default) contains a three digit hex representation of the ATM interface
identifier. This is a Passport defined value used in various functions, including
identifying a "card/port" relationship.

Currently, the card/port numbering scheme is restricted to three digits. As the number of
ports and cards supported on Passport increases, the numbering scheme fails (due to the
three digit limit). Using the default addressing scheme for the card/port relationship
therefore represents a short-term and non-scalable addressing plan: it limits the identification
of ports and cards to three digits only. It is possible for a software enhancement to
increase the number of digits, but such a change would change the
default addressing scheme of the network and might cause massive
connection/application reconfigurations to correct it. All existing connections would have
to be rerouted to the new (modified) ATM address. (The connection would remain
despite the address being deleted or changed, because the connection has already been
routed. However, in the case of a failure, subsequent reroutes will fail until the called party
address is updated to the new addressing plan.)

Furthermore, the default addresses are NOT portable. That is, they cannot be deleted or
moved between ATM interfaces. This may cause problems if an operator needs to move
an address from one card or port to another. In this case, all the connections terminating
or transiting on this interface would have to be reconfigured to the new destination or
transit address. The relationship from source to destination is many to one, so the changes
described here may be significant. The alternative, and the recommended addressing plan,
uses a user-supplied ESI field. The ESI field can be provisioned to identify a particular
customer, service, reference number, etc., at the discretion of the ATMSP network
planners. Such an address is portable and allows the flexibility of any
values inside the ESI. Note that if the card and port numbers are included in the ESI, then
the portability is reduced to the card/port level; however, the limit of three hex digits to
represent the card/port is lifted, as six digits (not including the SEL field) can now be used.
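
One possible (purely hypothetical) layout for a user-supplied ESI is sketched below: three octets of customer id, one octet of service id and two octets of card/port. The actual split is entirely at the planner's discretion.

    # Hypothetical ESI layout for illustration only (6 octets = 12 hex digits).
    def build_esi(customer_id: int, service_id: int, card: int, port: int) -> str:
        if not (0 <= customer_id < 2**24 and 0 <= service_id < 2**8
                and 0 <= card < 2**8 and 0 <= port < 2**8):
            raise ValueError("field out of range for this example layout")
        return f"{customer_id:06x}{service_id:02x}{card:02x}{port:02x}"

    node_prefix = "47009912345678901234567890"          # made-up 13-octet prefix
    esi = build_esi(customer_id=0x001234, service_id=2, card=3, port=17)
    print(node_prefix + esi + "00")                      # full 20-octet address with SEL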

Should an ESI be applied manually, the following considerations must be made:


• The ESI number must be unique. The number should not be a duplication of an
existing or default ESI address.
• The selector field should not be used. This field is reserved for applications, and is
not used for routing purposes.

8.4.7 Managing Private Address Plans


The ATMSP network is mostly concerned about scalability. Using a private network
addressing plan usually implies including a foreign address in the ATMSP network.
Foreign addresses are not summarizable, and are flooded explicitly and individually in
the ATMSP network. This is the major drawback to this approach. Both ATMSP and
private network authorities should agree upon including such addresses.

Private networks realize certain advantages to using their own addressing plan, such as
ATMSP portability. If a new ATMSP network is required for the private network, such a
change will not impact their own addressing plan. If they used the ATMSP plan,
changing the ATMSP network may imply changing the addressing plan. Currently, there
are no rules governing which plan to use.

If the private network wishes to adopt the addressing plan of the ATMSP network, this
implies that the ATMSP numbering plan will number each node in the private network.
The goal is to offer the private network a plan that can be easily summarized in the ATMSP
network, but that is flexible enough to satisfy the private network's requirements.

Typically, a private network id is allocated that uniquely identifies
the private network. In addition, some fields can be reserved for use within the private
network, which the private network can allocate at its discretion. Typically, these bits are
used to identify the individual nodes in the private network.


[Figure: the 13-octet address prefix divided into geographical region, node id and private id fields.]
Figure 8-7 Private network identifier

There are two special requirements that need to be considered in this scenario: portability
and flexibility. Portability refers to the ability of the private network to change its
point of access to the ATMSP network. The point of access can be a port, a node or a
general geographical location. The level of portability must be negotiated between both
parties but, generally, it is not a wise idea to bind a private network to a single access
port.

Considering the example in Figure 8-7, if the private network id is appended to the node
id, then the private id is bound to that node identifier. This means that the private id
cannot be ported to another node without changing the numbering scheme of the private network
nodes: all private network node ids have the ATMSP node id as part of their
numbering scheme. For this reason, the node id (or anything more specific, such as a card
or port id) should not be used to address SVCs terminating in a private network. Instead,
the amount of portability required for the private network should first be determined, and
the private network identifier then placed appropriately.

In Figure 8-8, a region and sub-region have been defined in the addressing plan. A
private network has connected to the sub-region and wishes to adopt the ATMSP
numbering plan. The nodes inside the ATMSP network use their node ids to terminate
calls. Calls transiting into the private network have addresses provisioned on the ATMSP
access nodes that do not specify the node id. The private network is now portable to any
node in sub-region 1. In this case a unique value is used to distinguish private node
ids from ATMSP node ids.

[Figure: two ATMSP nodes in sub-region 1, addressed Region+subregion1+nodeid1 and Region+subregion1+nodeid2. A static address Region+subregion1+privNet1 is provisioned towards private network 1, which attaches via IISP. The private network nodes are addressed Region+subregion1+privNet1+privNode1 and Region+subregion1+privNet1+privNode2.]
Figure 8-8 Private network connecting to an ATMSP network

To accomplish this, the addressing plan must distinguish a node id from a private
network id. This can be done by using distinguished values for private network ids, or by
setting unique bit patterns to signify either a node or a network. An example of the
value based method is to limit all node ids to a numeric range (e.g. 0-99
is reserved for node ids, and 100-999 is reserved for private networks). Alternatively, the
plan can designate that no node ids begin with 0, so that any field beginning with 0
represents a private network; in this case the leading 0 distinguishes a
private network. It may also happen that a private network customer requests
multiple networks within a sub-region. In this case, the addressing plan may have a
requirement to identify one unique customer but two distinct networks. A
recommendation here would be to divide the private network portion of the address into a
customer id field and a local network field. This reduces the customer id space, but
allows sub-networks to be uniquely identified.

[Figure: two ATMSP nodes addressed Region+subregion1+nodeid1 and Region+subregion1+nodeid2. Two static addresses, Region+subregion1+privNet1Area1 and Region+subregion1+privNet1Area2, are provisioned towards two private networks belonging to the same organization; their nodes are addressed Region+subregion1+privNet1Area1+privNode1 and Region+subregion1+privNet1Area2+privNode1.]
Figure 8-9 Two private networks owned by a single organization attaching to the ATMSP Network
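
A minimal sketch of the value-based distinction described above (the ranges are those of the example, not a recommendation):

    # Value-based field classification: 0-99 = node ids, 100-999 = private networks.
    def classify(field_value: int) -> str:
        if 0 <= field_value <= 99:
            return "ATMSP node id"
        if 100 <= field_value <= 999:
            return "private network id"
        raise ValueError("value outside the example ranges")

    print(classify(42), classify(250))    # ATMSP node id  private network id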

8.5 Peer Group Boundaries


The formation of peer group boundaries depends on the specific network topology, network
growth, and possibly the network services being offered. A peer group is defined by PNNI 1.0 as
"a set of logical nodes which are grouped for purposes of creating a routing hierarchy. PTSEs are
exchanged among all members of the group."

For optimal network performance, the number of peer groups in a network should be minimized.
The more peer groups that exist, the more routing ambiguity exists, due to the address
summarization and topology hiding inherent in PNNI peer groups. As previously discussed, the
major benefits of multiple peer groups are only realized after a single peer group is no longer
suitable for the network in question.


8.5.1 Criteria for defining peer groups


Defining PGs and PG boundaries depends on the criteria for migration. In some cases the
PG boundaries are implicit:
• PNNI is being used for firewall purposes
• merging of two separate, flat PNNI domains
• PG boundaries that favour address summarization
The less obvious cases include partitioning due to network growth and/or the desire for
near-optimal routing, both of which are discussed in the remainder of this section.

Expected future growth


As the single lowest level peer group grows over time, network forecasters can predict
where growth will concentrate and where it will remain static. When the growth exceeds the single
peer group limit, multiple peer groups can be intelligently formed to minimize the further
splitting of peer groups.

Proximity between nodes


The outside link joining two peer groups may be transcontinental or transoceanic. Dividing
such peer groups along country boundaries may be easier to manage from an operations perspective.
This could be the case in a corporate merger, where two separate network
management systems are already in place. In some such cases, there is no benefit to
directing PNNI routing control messages over such long-haul facilities.

Communities of interest (COI)


There is a high call concentration among the nodes within specific sub regions. Defining
a single peer group around these nodes offers the most optimal routing, as the entire
topology from source to destination would be known in that peer group.

Software and hardware capabilities


The hierarchy-capable software, or the CPU capacity to run it, may be limited to
a select group of nodes within the network. The location of these nodes could
dictate where the peer group boundaries are formed. The number of
supported nodes in a peer group, hierarchical capabilities, convergence times and routing
performance are a few of the major factors to consider with respect to the PNNI nodes in
the network.

8.5.2 Examples of peer group configurations


The examples illustrated in Figure 8-10, Figure 8-11, Figure 8-12, Figure 8-13 and Figure
8-14 reflect the possible issues arising in a real network environment with respect to
network topology and configuration of peer groups. The topology depicted consists of a
full mesh core of relatively few nodes (typically four to eight nodes) with a larger number
of edge nodes which are dually homed to the core nodes.


Figure 8-10 Network topology with no link aggregation and a flat PNNI configuration

This reflects a good compromise: low link cost and full redundancy, with a maximum of
three hops per connection. A worldwide carrier network might consist of a number of
these clusters connected by transoceanic links, similar to the topology of the first
example, except that every peer group is a cluster such as the one in Figure 8-10.


Figure 8-11 Network topology with two core nodes per peer group

Whereas Figure 8-10 depicts a flat PNNI configuration, Figure 8-11 and Figure 8-12
illustrate a two level hierarchy configuration.



Figure 8-12 Network topology with one core node per peer group

Figure 8-13, on the other hand, shows the core network in a separate peer group (PG), illustrating
a degenerate case where peer groups have been partitioned. Such a topology implies many
crankbacks; it prevents intra-PG routing, and as a result intra-PG destinations may not be
reachable. Edge nodes in each PG must be connected to each other, and in order to avoid using edge
nodes for tandem traffic, a full mesh in each edge PG is required.


Figure 8-13 Network topology with core network in a separate peer group

Another possible approach to handling network growth is to avoid partitioning a cluster in
the first place and instead to build another cluster in parallel, linking the two clusters by a
relatively small number of core links, as in Figure 8-14. Note that the clusters are
connected by core links between their core nodes.


Figure 8-14 Network topology with parallel clusters (one PG per cluster) and core nodes connected

8.5.3 Peer group configurations versus routing trade-offs


Table 8-1 compares the peer group configurations in section 8.5.2 with routing trade-offs.
Other factors to consider also include crankback, route selection load on the entry border
nodes, and compatibility with the address plan.

Table 8-1 Peer group selection versus routing trade-offs

Flat PNNI
  Eligible PGLs: not applicable.
  Routing benefits: optimal routing.
  Routing disadvantages: scalability limited to 300 nodes.

2 core nodes per PG
  Eligible PGLs: two*. Robust; if one core node fails, the PG is not partitioned.
  Routing benefits: near-optimal routing in the network core if the Aw of the core links > Aw of the access links.***
  Routing disadvantages: relatively few PGs will somewhat limit scalability; routing goes through access node links if a core link inside the PG fails; some connections will require an extra hop (see the description of this behaviour in the next section).

1 core node per PG
  Eligible PGLs: one. Single point of failure; all nodes in the PG are isolated if the core node fails.**
  Routing benefits: optimal routing in the network core.
  Routing disadvantages: many outside links in each PG; many partitions if a core node fails; some connections will require an extra hop (see the description of this behaviour in the next section).

Core network in a separate PG
  Eligible PGLs: any node in the core PG can be a PGL; other PGs depend on how they are connected.
  Routing benefits: optimal in the core.
  Routing disadvantages: many PGs are required unless the edge nodes are highly inter-connected; too many PGs means excessive control traffic; extra hops are required if edge PGs are not fully meshed.

Parallel clusters
  Eligible PGLs: there is only one PG per cluster, so any core node can be a PGL.
  Routing benefits: optimal in each cluster, plus one hop to reach the other cluster if necessary.
  Routing disadvantages: scalability limited to 300 times the number of parallel clusters (e.g. 600 nodes for two clusters).

* if the two core nodes are not interconnected within the PG, then this design is not feasible
** node isolation can be avoided if edge nodes are all eligible PGLs, but this is not recommended due to impacts to the higher level and prohibitive operational complexity
*** calls have a 50% chance of 2 core hops should the destination (edge) node be connected to only one core node

8.5.4 Recommendations

8.5.4.1 Selecting a PG configuration


Every real network environment will have other factors to be considered,
including geography, politics, expected network growth and future service
offerings. The following guidelines do not take these issues into account and
are listed in order of priority to the network planner.

1) Flat network configuration (Figure 8-10): Should not be migrated to a
PNNI hierarchy unless truly necessary.

2) Parallel clusters (Figure 8-14): Select a cluster configuration if it is
feasible to build a second core network co-located with the first core. This
option typically requires significant pre-planning of network topology
growth before migrating to hierarchical PNNI.

3) Two core nodes per peer group (Figure 8-11): Implement this
configuration for reliability, scalability and near optimal routing in the
core.

4) One core node per peer group (Figure 8-12): Do not implement unless all
core nodes are absolutely bullet-proof. Otherwise partitioned PGs and
isolation of all nodes in the PG will result.

5) Core network in a separate peer group (Figure 8-13): Do not implement
this configuration, as too many edge PGs and too many edge node-to-edge
node connections will be required in a typical network. Note that the
PNNI standard, Appendix D Multi-Peer Group Systems, is exploratory
material which describes how a single physical node can be a member of
multiple PGs acting as a “border node” much like Area Border Routers in
OSPF. It is recommended to implement the configuration illustrated in
Figure 8-13 only if such functionality is available.

In all the above cases, the Aw of core links should be similar in order to avoid
extra hops in the core. Refer to section 8.5.4.4 for more details.

8.5.4.2 Outside links


The number of outside links in every PG should be minimized. A PG with one
outside link will always be routed optimally. A PG with two outside links will
always be traversed optimally. As the number of PGs becomes larger, the
chances of a sub-optimal outside link being chosen become greater. Complex
node representation only helps if the PG is traversed.

8.5.4.3 Metrics
Metrics must always favour core outside links over access outside links if routing
through the core is required.

8.5.4.4 Call Blocking


If call blocking is preferable to routing through the edge nodes, use restricted
transit at a higher level, which restricts the entire PG to restricted transit;
setting only the entry border node to restricted transit will only affect intra-PG
calls. The transit attribute is not communicated across outside links, so if the
PG accepts the call setup, then the call setup will reach the entry border node.
Regardless of the restricted transit attribute value, the entry border node will then:
• remove the top-level DTL from the higher level PG
• select the best route through the PG, and add the DTL
• forward the call setup
In cases where a PG is defined with either
• some border nodes as “core” nodes with transit properties; or,
• some border nodes as “edge” nodes without transit properties

then setting the entire PG to “restrict transit” is not feasible, and the
recommendation is to use the Aw attribute (setting the Aw of edge links > Aw
of core links). This will cause the edge nodes to be used as transit nodes only
as a last resort (e.g. when the links to the core nodes do not have sufficient
AvCr).

The point to emphasize is that “restrict transit” has only a limited context (this
only applies to nodes that the source node can “see”).

8.5.4.5 Partitioning
There should be at least two eligible PGLs that are connected to all other
nodes in the PG to avoid partitioning a PG. For more details on partitioned
PGs, refer to section 8.6.

8.6 Peer Group Partitions


A partitioned peer group is a peer group which has been unintentionally fragmented into two or
more non-contiguous pieces. This can be caused by an equipment failure (such as the failure of links
or switches). In such instances, each partition will elect a peer group leader (PGL). The PGL
election algorithm ensures that, should a peer group become partitioned, each partition will
elect its own PGL (assuming that there is a node capable of acting as a PGL). Similarly, if a
partitioned peer group becomes re-attached, then only one node will remain as PGL. The PGL is
responsible for determining the identity of the parent peer group, and for representing the partition in
this parent peer group.

In some cases, partitions may have no PGL; for example, a small partition might not contain any
nodes capable of acting as PGL. In this case the partition operates as a “leaderless peer group”
and is effectively isolated from the rest of the PNNI routing domain.

Given that each partition of a peer group is treated as a separate peer group, routing within the
parent peer group naturally routes calls around and across the various partitions. Similarly, calls
originated within the partitioned peer group to destinations outside of the peer group are
correctly routed.

A number of problems can arise when routing calls to destinations within the partitioned peer
group:
• Addressing implications — If more than one PGL is elected from the same partitioned
PG, then they must be uniquely identifiable in the parent peer group with a unique node
ID. This can be achieved by including the peer group leader’s 48-bit ESI into the “flat
ID” part of the node ID, although it is not recommended. There is the potential that the
ESIs will not be unique, especially in higher-level peer groups that span countries.
• Lost calls — It is likely that some nodes will be isolated from the rest of the PNNI
domain. Note that at the time of PGL failure, many existing calls may be severed, and they
will not be re-established even though the isolated nodes are still connected to a core
node, because that core node is not in the same PG.
• Crankbacks — These will occur until the PGL is restored because all PG partitions will
be advertising the same summary address, but only a subset of the destination addresses
are actually reachable in each partition. In this case, the call will be cranked back until a
path is selected to the correct PG partition, which could result in many crankbacks
depending on the topology.
In a PG where two core nodes are both eligible PGLs, both can be elected if they become isolated
from each other. Fortunately, partitioned PGs are unlikely because the core link and all access
links between the two core nodes in the same PG would have to be severed simultaneously for the
PG to become non-contiguous. The bottom line is that both the PGL and the core links between
eligible PGLs must be very reliable.

8.6.1 Choosing a Peer Group Leader


A PGL is the same as all other nodes in the group, but its specific role is to aggregate and
distribute information for maintaining the PNNI hierarchy. Intra-PG connections do not
require a PGL, and no PGL is required for the highest-level PG. For other PGs, the PGL
election is based on leadership priority, and a priority elevation (+50) occurs after election
to avoid toggling. Note that if a consistent PGL is desired, its leadership priority should
be set more than 50 above all other eligible PGLs; the PGL role will then always revert to
that node. A logical group node (LGN) is created on every PGL node to represent the PG
at the next higher level of the hierarchy.
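
As a rough illustration of the priority rules above, the following Python sketch (hypothetical,
not Passport behaviour in detail; real PNNI also breaks ties on node ID) shows why a preferred
node must be configured more than 50 above every other eligible PGL to win consistently:

    ELEVATION = 50  # boost an elected PGL adds to its advertised priority

    def elect_pgl(configured, incumbent=None):
        """configured: {node: leadership priority}; returns the election winner."""
        advertised = dict(configured)
        if incumbent in advertised:
            advertised[incumbent] += ELEVATION   # current PGL advertises priority + 50
        return max(advertised, key=advertised.get)

    # "A" is configured more than 50 above "B", so it wins even against an elevated incumbent.
    print(elect_pgl({"A": 160, "B": 100}, incumbent="B"))   # -> A
    print(elect_pgl({"A": 140, "B": 100}, incumbent="B"))   # -> B (A is not >50 above B)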

PGLs should be chosen using the following guidelines:


• There should be enough resources (CPU, memory) for the node to act as a PGL.
Passport does not currently place extra limitations on PGL nodes.
• There should be at least two eligible PGLs per PG for redundancy.
• Do not choose PGLs whose failure would cause the PG to be split into non-
contiguous pieces causing further complications. See the example in the next section.
• Carefully consider the load on a node if it is acting as both a border node and PGL, as
border nodes do a portion of the routing for all calls that enter the PG.
If possible, avoid having any one physical node be a member of more than two PGs. For
example, in a three-level hierarchy, if a physical node is the PGL of the next two PG
levels, then it is a member of three PGs. Failure of this node means that two PGs will
hold a PGL election, with possible partitioning and more PTSE propagation.


[Diagram: peer group PG(A) containing nodes A.1 to A.5 with attached CPEs (CPE1, CPE2), peer
group PG(B) containing B.1 and B.2, and peer group PG(C) containing C.1 and C.2; the legend
identifies customer premise equipment, peer group leader nodes and PNNI nodes.]

Figure 8-15 Example of peer group partitioning

8.7 PNNI Multi-level Hierarchical Configurations


The PNNI routing system can organize a network into a variety of different hierarchical (or flat)
configurations. Network planners should balance future network growth against optimal routing
when designing a network. This section discusses the benefits offered by different hierarchical
configurations.

It is recommended that a single peer group or hierarchically flat configuration be used wherever
possible. This configuration offers the most precise routing available, as the status of all nodes
and links in the network is known, providing the routing system with every possible
routing combination from source to destination. Where multiple peer groups are used, the
scalability of the network is increased at the expense of routing accuracy. Each peer group is
only represented by the addresses reachable in it, as its link constraints, weights or QoS metrics
are not known by nodes outside of the peer group. Therefore, routing to a peer group using
address summarization forces the calling node to “trust” that the destination address does indeed
exist, and that the QoS requirements of the connection can indeed be met by a remote peer
group.

If a hierarchy is indeed required (e.g. more than 300 nodes in the network), many decisions must
be made on how to partition the hierarchy and how many levels to build.

8.7.1 Routing ambiguity


Passport supports up to 10 levels of hierarchy, but typically only 2 levels are required for
most network topologies. A two-level configuration reduces the amount of routing
ambiguity and limits the complexity in engineering a hierarchical network. Figure 8-16
illustrates the potential problems caused by routing ambiguity.


[Diagram: network XYZ as a single flat peer group, including nodes A.1 and B.1.]

Figure 8-16 Network XYZ using a single peer group configuration

In Figure 8-16 and Figure 8-17, network XYZ is illustrated in both a single peer group
configuration and a possible hierarchical configuration. In the single peer group
configuration, each node is aware of every other node and every other link in this peer
group (by definition of a peer group). Routing in this configuration is optimal because a
definitive decision can be made from source to destination, as all route combinations can
be examined.

[Diagram: network XYZ split into peer group A, containing A.1, and peer group B, containing B.1;
the peer groups are connected by outside links 1 and 2, with corresponding uplinks 1 and 2 to
the parent-level LGNs A and B.]

Figure 8-17 Network XYZ using a hierarchical configuration

In the hierarchical configuration illustrated in Figure 8-17, routing ambiguities are caused
by address summarization and hidden topology inside the remote peer groups. Peer group
A has no knowledge of the topology, link or nodes states of peer group B. Instead, it has
only the understanding that some set of addresses is reachable in peer group B, via the
logical group node B.

Assuming that every link in Figure 8-17 has an identical cost, then the shortest path that
routes a connection between A.1 and B.1 is the one with the least number of hops. In the flat
peer group shown in Figure 8-16, the shortest path is easily calculated because optimal routing
is possible. In the hierarchical model in Figure 8-17, the route is calculated towards LGN B via
uplink 1, because that appears to be the shortest path to B. In reality, the shortest global path
from A.1 to B.1 is via uplink 2 (and outside link 2). Because the topology of peer group B is
hidden, uplink 1 is taken and the less optimal route is used to reach B.1.

This is further compounded if the entry border node in peer group B (B.2) cannot find a
suitable route from B.2 to B.1 that satisfies the requested connection QoS requirements. In
this case the call must be released and re-routed to use uplink 2 (and outside link 2). If the
QoS can be satisfied on the alternate route, the connection will be successfully
established but with the following consequences:
• bandwidth is unnecessarily reserved on the initial setup attempt on all links up to B.2
• the connection experiences more delays due to routing the connection twice
• the source node (DTL originator) must support alternate routing for the connection to
be completed
Such routing ambiguity is amplified with every new level of hierarchy (using address
summarization) introduced in the network. A comparison of a single, two-level and three-
level hierarchy is shown in Table 8-2.

Table 8-2 Comparison of hierarchical network configurations

                       Flat peer group         2-level hierarchy          3-level hierarchy
Effect of a single     Low                     An LGN failure causes      A top-level LGN failure
node failure           A single node only      the corresponding peer     leaves multiple peer
                       represents itself       group to disappear until   groups unreachable until
                                               a new LGN is established   a new LGN is established
Scalability            Low                     High                       High
                       300 node maximum        Up to 40 000 nodes         100 000s of nodes
                                               supported
Engineering            Low                     Medium                     High
complexity                                     2-level LGN redundancy     3-level LGN redundancy;
                                               needed                     induced uplinks
Number of              High                    Low                        Low
advertised addresses   Only nodal address      Worst case for any peer    Maximal address
                       summarization is        group is the peer group    summarization
                       possible                addresses plus LGN-
                                               summarized ones


Unless the number of nodes immediately justifies a three-level hierarchy, there is no benefit
to implementing such a network configuration. If a network is expected to grow to a size where
a third level would be required, a two-level hierarchy should first be established, with a
contingency plan to migrate to a three-level configuration when and if it becomes necessary.

8.8 Configuring PNNI for Bandwidth Changes


PNNI allows for the configuration of a node's responsiveness to changes in bandwidth. This
responsiveness determines how accurately each topology database reflects the actual link
bandwidth states. The appropriate configuration depends greatly on the expected service
offerings. For example, a network offering delay-sensitive real-time applications requires more
frequent updates than a network offering non-real-time services. A predominantly UBR network
does not care how much bandwidth is allocated in the network, because UBR does not reserve any
bandwidth. Alternatively, a network with a high percentage of CBR traffic requires a fairly
accurate view of the network in order to maximize its chances of routing connections on the
first attempt.

The cost of more precise routing information in the topology databases is higher CPU usage. In
order to maintain an accurate depiction of the network, more PNNI routing control messages
must be sent and processed by every node in the peer group thereby increasing the CPU required
to process them. Setting these parameters too aggressively could negatively affect call setup
rates and other switch functions.

8.8.1 Bandwidth update parameters


PNNI defines two parameters that control how often bandwidth updates are sent between
switches: the available cell rate proportional multiplier (avcrpm) and the available cell
rate minimum threshold (avcrmt).

8.8.1.1 Available Cell Rate Proportional Multiplier (avcrpm)


The avcrpm is defined on a per node basis and is used to define what
bandwidth changes are considered significant. A significant bandwidth change
implies that the new bandwidth should be broadcast to the peers in this peer
group. The broadcast message is used to update the peers’ topology databases
view of the available cell rate of this link, allowing for more precise routing
decisions.

The avcrpm is defined as a percentage of the available cell rate on a PNNI


link. On Passport, a minor variation is used because a single ATM interface
can be divided into a variable number of bandwidth pools (up to 5 bandwidth
pools on the ATM IP function processor). A significant bandwidth change is
broadcast if the bandwidth change (from the last connection) is greater than the product of the
available cell rate of the bandwidth pool from which the connection bandwidth was allocated and
the avcrpm percentage.

8.8.1.2 Available Cell Rate Minimum Threshold (avcrmt)


The avcrmt is the second parameter used to determine whether bandwidth
change is significant. As the available cell rate decreases inside of a PNNI
link, the avcrpm percentage of the available cell rate also decreases. As a
result the avcrpm produces significant change messages when insignificant
bandwidth changes occur. This is compounded by links operating near
saturation, where every bandwidth change might be considered significant by
the avcrpm procedures. As a counteractive measure, PNNI defines the avcrmt
as the minimum amount of bandwidth change required before a significant
change message can be broadcast. This check blocks avcrpm-triggered messages
until the bandwidth change exceeds the avcrmt threshold in cells per second,
regardless of a shrinking available cell rate within a link.

The avcrmt is a percentage of the maximum cell rate of a link. In Passport, a


separate maxCR value is defined for each bandwidth pool created on that link.
maxCR represents the overbooked cell rate of the bandwidth pool, regardless of the
service categories associated with the pool. An avcrmt significant change
message may be broadcast if the amount of changed bandwidth is greater than
the product of maxCR and avcrmt, where the maxCR value is taken from the
same bandwidth pool from which the connection bandwidth was allocated.
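
The interaction of the two thresholds can be summarized numerically. The sketch below is an
approximation of the behaviour described above for a single bandwidth pool; the exact Passport
algorithm and the OC-3 figures used in the example are illustrative assumptions only.

    def is_significant_change(delta_cps, pool_avcr_cps, pool_maxcr_cps,
                              avcrpm_pct=50, avcrmt_pct=3):
        """Approximate significant-change test for one bandwidth pool.
        delta_cps: bandwidth change (cells/s) since the last advertisement."""
        proportional_threshold = pool_avcr_cps * avcrpm_pct / 100.0   # avcrpm test
        minimum_threshold = pool_maxcr_cps * avcrmt_pct / 100.0       # avcrmt floor
        return abs(delta_cps) > max(proportional_threshold, minimum_threshold)

    # Example: an OC-3 pool (maxCR ~ 353 208 cells/s) with only 5 000 cells/s still available.
    # avcrpm alone would call a 3 000 cells/s change significant (> 2 500), but the
    # avcrmt floor (3% of 353 208 ~ 10 596 cells/s) suppresses the update.
    print(is_significant_change(3000, pool_avcr_cps=5000, pool_maxcr_cps=353208))   # False
    print(is_significant_change(12000, pool_avcr_cps=5000, pool_maxcr_cps=353208))  # True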

8.8.1.3 Recommendations
In most networks defaults of 3% avcrmt and 50% avcrpm are suitable for
normal operating conditions. In the case of a predominantly real-time (or non
real-time) traffic profile, a network operating at near capacity or a network
offering extremely high bandwidth services, these values will not suffice.
Specific values are difficult to determine without examining the entire
network profile on an individual basis. Table 8-3 summarizes the benefits and
consequences of changing these values, along with the circumstances that would
justify such changes.

Table 8-3 Effect of changes to avcrpm value on network performance

                             Low (~25%)    Normal (~50%)   High (~75%)   Comments
90% bandwidth utilization    optimal       effective       effective     If possible, networks operating at near
                                                                         capacity should install alternate routes
                                                                         and investigate Passport load balancing
                                                                         schemes
High bandwidth applications  optimal       effective       sub-optimal   An OC-12 link offering OC-3 services may
                                                                         wish to decrease the avcrpm value
75% real-time traffic        optimal       effective       sub-optimal   Lowering the avcrpm is most beneficial in
                                                                         networks similar to those in rows 1 and 2
75% non real-time traffic    sub-optimal   effective       optimal       UBR does not require bandwidth reservation

8.8.2 PNNI Routing Control Channel (RCC)


The PNNI routing control channel (RCC) is effectively a PVC automatically
set up between PNNI routing peers on a reserved channel for the purpose of exchanging
routing information. In a lowest level peer group, the RCC resides on VPI 0 VCI 18, but
it can be provisioned to occupy any VPI/VCI space. In higher level peer groups, the RCC
is created automatically as an SVC between two logical group nodes, where the SVC
reserves bandwidth and VPI/VCI space as a regular switched connection.

ATM Forum PNNI 1.0 specifies the default RCC service category to be non real-time
VBR, with PCR=906, SCR=453 and MBS=171. Because of the low emission priority of
nrt-VBR, Passport PNNI raises the default service category of the RCC to real-time VBR (the
recommended cell rates are still used). However, any service category and cell rate can be
manually provisioned as desired by the operator.

In lowest level peer groups, it is possible that PNNI peers implement different RCC
default values or even different default service categories. In addition to cell rates and
service categories, the RCC channel can also be subject to GCRA (UPC) policing and
traffic shaping. Also, in multi vendor PNNI networks, the RCC default parameters might
not align perfectly to each other. In all situations, the operator is encouraged to engineer
the RCC channel with respect to the following considerations:

Traffic Shaping
Traffic Shaping on the RCC should be disabled. Shaping provides more uniform traffic
flows intended to conform to network policers. However, for bursty traffic types (i.e.
VBR), traffic shaping increases the delay of the traffic, prolonging the transfer of critical
routing information. Across large networks, this increased delay may become significant.

GCRA Policing
GCRA policing on the RCC should be disabled. It is difficult to characterize traffic bursts
on the RCC channel in different routing situations. Also, shaping is not recommended on
the RCC for the reasons stated above. Policing the control traffic may result in dropping
critical routing information that is non-conforming for only very brief periods of time.

Reserved Bandwidth
As a result of policing being disabled, the exact amount of bandwidth reserved for the
RCC channel within a link becomes less important. However, the RCC should reserve
enough bandwidth such that, under high link utilization, control traffic does not greatly
affect user data sharing the link. Typically, the default settings specified in the ATM
Forum PNNI 1.0 should be used.

RCC Parameters
If the default RCC parameters between two switches are not identical, the following
guidelines are given:
• The preferred service category for the RCC on Passport is rt-VBR. The increased
emission priority of rt-VBR compared to the nrt-VBR decreases the chance of control
traffic delays due to competition with higher priority traffic. The service category of
the RCC channel must be consistent for both directions of the PNNI link. If desired,
to prevent starvation of any service category on Passport, the ATM IP FP minimum
bandwidth guarantee (MBG) mechanisms can be employed. Please refer to the ATM
traffic management section for more details on MBG.
• To maintain uniform bandwidth reservation on the link, the same RCC cell rates
should be provisioned for both sides of the PNNI link. Also, the operator should
ensure that the resulting bandwidth reserved for the RCC is equal in both directions of
the link. Any differences in the amount of bandwidth reserved can be compensated
using the Passport overbooking parameters. If the bandwidth reservation of
connections on this link is not symmetrical, ACAC may prematurely reject
connections due to the forward or reverse direction of the link bandwidth being
saturated.
As mentioned earlier, in hierarchical networks the RCC is automatically set up between
adjacent LGNs. If Passport cannot establish an rt-VBR RCC, it automatically attempts a
CBR RCC SVC. If CBR cannot be established, an nrt-VBR RCC is attempted, and if nrt-VBR
is not available, a UBR RCC SVC is attempted. A service category is considered
unavailable either if no route can be found with that service category, or if the call
setup with that service category fails due to problems with the service category, traffic
parameters, or QoS parameters. Passport does not change the cell rates of the RCC SVC
when attempting the next lower service category.

8.9 Migrating to a PNNI Hierarchical Network


This section describes the issues to be considered in migrating to hierarchical PNNI. The focus is
on the network design in general using examples, with some Passport 6000/7000/15000
implementation specific coverage.

These migration procedures assume that the network is already configured as a flat PNNI
network; a flat PNNI network is a PNNI domain that has only one Peer Group (PG). Other
migration scenarios such as migration from IISP to flat or hierarchical PNNI should be
considered separately. Where possible, interworking information from other ATM vendors has
been included. This is not a tutorial on hierarchical PNNI; the reader is assumed to already be
familiar with the PNNI standard, ATM addressing and hierarchical concepts such as LGNs and
outside links.

8.9.1 Network Considerations


A migration to hierarchical PNNI should be considered in light of the following (listed
from highest to lowest in priority):

Network Growth
The number of nodes/links exceeds the capacity of network nodes. Passport supports 300
nodes in a single PG. PTSE volume varies by O(NLR), N = number of nodes, L =
number of links, R = rate of flooding of PTSE; also note R = f(N,L). For simplicity, the
number of nodes is the only constraint since there are practical limitations to the
“meshedness” of the network, especially as the size of the network increases. Other
vendor equipment scalability may vary, and the PG size is bounded by the capacity of the
smallest node.

Excessive Control Traffic


In a large network with unreliable links or frequent changes in available bandwidth, the
amount of control traffic to advertise link availability and bandwidth may increase to the
point where it becomes a significant bandwidth consumer. Partitioning into multiple peer
groups will reduce the amount of control traffic.

Merging of Two PNNI Domains


As an example, a merger of two ATM service providers, especially if the address plans
differ significantly.

Firewall
The PNNI standard describes how PGs can be used to “hide” topology information of
parts of the network. This approach uses PNNI as a firewall so that nodes do not have a
full view of the network. Nortel Networks does not recommend using hierarchical PNNI
for this purpose.

Overall, actual or expected network growth is the primary reason for migration. The
decision to migrate should not be taken lightly as there are trade-offs to consider which
will be explained in further detail. The following are the prevailing factors to consider,
some of which may even preclude the migration:
• sub-optimal routing which may increase cell delay, and increase link utilisation
• compatibility with the ATM address plan; e.g. having enough bits for partitioning
address bits by level and maintaining efficient address summarization


• change in routing overhead from source node only routing to distributed routing
(hierarchical PNNI routing is done by source node plus all intermediate entry border
nodes); which has crankback and re-routing implications
• additional operational complexity
• additional configuration of PG levels, PGL, link aggregation
• failure modes and PG partitioning when a PGL node fails
• topological constraints; some topologies are not well-suited to hierarchical PNNI as is
demonstrated in subsequent sections
Careful network design will mitigate these effects and provide the benefits of hierarchical
PNNI. The following sections will explore the above issues by considering all the steps
of migration, including assessing if hierarchical PNNI is suitable at all.
Note: Call setup performance is the major consequence of large PNNI peer groups on
Passport. Passport can easily support 300 nodes in a peer group when only
memory and CPU are considered. However, due to the large number of routing
combinations that exist in large networks, call setup performance will degrade as
the network size increases. The level of degradation depends upon the size of
the network and the exact topological architecture (e.g. ring topology, full mesh,
partial mesh, etc.). By reducing the number of links and route combinations when
forming peer groups, call setup time can be reduced.

8.9.2 Summary of Migration Steps


The following is a summary of steps for migrating to hierarchical PNNI:

1. PNNI Software and 3rd Party compatibility


Ensure that all nodes are running the appropriate software level.

2. Levels and ATM Address Plan


Decide on how many levels you will need and the number of address prefix bits for each
level. Confirm that the NSAP addresses configured on each node and the remaining
address space is compatible with this plan.

3. Peer Groups
Decide on the number of PGs and the PG boundaries. Impact of simple versus complex
representation.

4. Peer Group Leaders and Peer Group Partitioning

How to choose eligible PGLs and the leadership level; how to avoid isolating nodes when a
PGL or link fails.

5. Top Down or Bottom Up?


A one-node-at-a-time view of how to migrate from a single PG to multiple PGs, and a
comparison of the approaches.


6. Nodal Provisioning Procedures


Considerations for Nodal Configuration: comments, procedures, and nodal provisioning;
PNNI outage numbers.

7. Crankback and Call Blocking


What are the considerations for crankbacks in a hierarchical network compared to a flat
network.

8.9.3 Top Down versus Bottom Up Migration


There are two ways to migrate to hierarchical PGs from a single PG:

1. A single peer group is split into multiple peer groups by creating a new higher-level
peer group for the LGNs. This is Bottom Up migration.

2. Replace existing nodes with LGNs and add new nodes to create new peer groups at the
lower levels. This is Top Down migration.

To facilitate understanding of the migration steps, consider a simple migration example.


Figure 8-18 shows both the original PNNI peer group and the final PNNI network after
the migration. Refer to the Hierarchical PNNI Functional Specification for details on Top
Down versus Bottom Up procedures.

[Diagram: before migration, a single peer group containing nodes A.1 through A.5; after
migration, two lowest-level peer groups represented by LGNs A and B in a parent peer group,
with the PGL nodes identified.]

Figure 8-18 PNNI network before and after hierarchical migration

8.9.3.1 Top Down Migration


1. Select the node that is to be replaced by a peer group (Node A.2). Modify
this node such that it exists as an LGN at the higher level.


[Diagram: node A.2 now appears as LGN A (PGL) at the higher level, alongside the remaining
lowest-level nodes A.1, A.3, A.4 and A.5.]

Figure 8-19 Node A.2 to be replaced by a peer group

2. Move remaining lowest level nodes to the new peer group as required.

[Diagram: LGN A now represents a new lowest-level peer group containing A.1 and A.2 (the PGL);
A.3, A.4 and A.5 remain at the original level.]

Figure 8-20 Lowest level nodes moved to new peer group

3. Repeat with other peer groups.


Note: If the network is being built with hierarchical PNNI from day one, enough
peer groups should be defined in the initial network rollout, with room for
growth in each peer group.

8.9.3.2 Bottom Up Migration


1. Select a node that would appear at the highest level in the resulting
network (Node A.1). Modify this node such that it becomes a leader at this
level and has an LGN at the highest level.

[Diagram: node A.1 becomes the PGL of the existing peer group and LGN A is instantiated at the
highest level; nodes A.1 through A.5 remain in the lowest-level peer group.]

Figure 8-21 Selecting a node at the highest level in the resulting network

2. Using the following criteria, select a node that has yet to be migrated:


• Selection must avoid partitioning an existing peer group.


• If possible select a peer group leader to move first.
• Nodes that are within the same peer group as the node selected in step
1 should be considered last (Although A.4 will be a peer group leader,
selecting it now would cause A.5 to be disconnected from A.1, A.2 and
A.3. So A.5 is selected first).
3. Provision the selected node with all the hierarchy information for that
node. If the migration of this node would create a new peer group, ensure that
it is provisioned to be an LGN at the higher level (Although A.5 may not be a
PGL in the final hierarchy, we have to temporarily make it a PGL with LGN
at level 80 to avoid disconnecting it from the network).

[Diagram: LGNs A and B at the higher level, with A.5 moved into the new peer group B as its PGL;
A.1 through A.4 remain in peer group A. Moving A.5 first and making it the PGL is necessary to
avoid isolating nodes during the migration; this is only necessary during Bottom Up migration.]

Figure 8-22 Avoid isolating nodes during migration

4. Repeat step 2 until all the nodes have been moved (A.4 will be selected
next, then A.3, finishing the migration), and move the PGL if necessary.

8.9.3.3 Comparison
The Top Down approach is better, especially if there are many physical nodes
at the higher level. It is also advantageous since downward migration can stop
when Peer Groups are sized appropriately.

The Bottom Up approach suffers from the characteristic described in step 2.


The migration of nodes should start with the PGL first, but in cases where
nodes would be isolated, one or more temporary PGL nodes have to be
created in order to avoid node isolation. This adds to PNNI outage (since
PNNI needs to be re-started if the node prefix changes), and operational
complexity because nodes have to be changed to PGL, then changed back to
non PGL node.


8.9.4 Migrating IISP Networks to PNNI


This section discusses two different IISP migration scenarios: IISP networks to a single
lowest level PNNI peer group, and two PNNI networks migrating to a two level PNNI
hierarchical network.

[Diagram: before migration, nodes A.1 through A.4 interconnected by IISP links; after migration,
the same nodes form a new peer group interconnected by PNNI links.]

Figure 8-23 Flat IISP network migration

In Figure 8-23, the network is running IISP between all nodes. The migration is to
convert the IISP links to PNNI. After the migration has completed, the resulting network
is a single lowest level PNNI peer group.

Despite full interworking between IISP/UNI/PNNI, this flavour of migration is difficult
in large networks due to the possibility of routing loops. Routing loops are an inherent
problem of IISP networks that PNNI solves. However, until the network is
completely PNNI, routing loops are still possible. For this reason, it is recommended that
large networks perform a “flash cut” method of introducing PNNI.

The basic steps are as follows:

1. Configure PNNI routing on all switches (PNNI level, PNNI peer group ids etc). It
is assumed that the NSAP addresses have already been established by the IISP network.
Assuming a top down migration, the value of the PNNI level should be hierarchically
high enough (closer to 1) to allow for the proper formation of future child peer groups.
This step can be performed without any service interruption to existing or new
connections.

2. Block new connections from establishing. This step is necessary to ensure no calls
are attempted before the network can guarantee loop free routing.

3. Introduce PNNI interfaces one at a time until the network is completely migrated,
OR a stable IISP/PNNI routing environment (i.e. no routing loops) is achieved

In smaller networks, it is possible to intelligently determine a sequence of switches to
migrate to PNNI that effectively eliminates routing loops. Under such circumstances, step
2 as described above can be eliminated.


[Diagram: before migration, PNNI PG 1 (A.1 to A.4) and PNNI PG 2 (B.1 to B.4), both at level N
where N > 0, are interconnected by IISP links; after migration, LGNs A and B form a new PNNI
peer group at level N-M, where M >= 1, and the links between the peer groups are PNNI links.]

Figure 8-24 Hierarchical IISP network migration

In the second migration scenario shown in Figure 8-24, a two-level hierarchy can be
achieved by provisioning a separate hierarchy in each peer group, before changing the
IISP interface between the peer groups to PNNI. This migration technique adds a new
higher level peer group. The basic procedure for this migration is:

1. Choose a node to represent each peer group as its respective Peer Group Leader
(PGL).

2. Configure the LGNs of each PGL to co-exist in the same parent level peer group.

3. After the PGL election has completed in both peer groups, convert the IISP link(s)
to PNNI.

If multiple IISP links connect the two peer groups, it is still possible that the IISP links
would be favoured over the fully functional PNNI link. For instance, if the upnode
advertising the destination address is using a summarized destination address whereas the
IISP link does not perform any summarization, the IISP link will advertise a longer prefix
match and will always route any subsequent connections. In fact, even if the addresses
are identical in length (same number of significant bits advertised), the IISP link may still
be used. If an earlier release of software than PCR1.3 is used, then calls will alternate
between the PNNI link and IISP link(s). If PCR1.3 (or later) is used, then calls will route
to whichever node represents the shortest AW, CTD or CDV cost from the source node.
The shortest route may still include the IISP link.

Maintaining the IISP links until the hierarchy has fully established ensures that new
connections can reach the other peer group even while the PNNI hierarchy is not yet fully
established. Once the hierarchy has correctly established, the existing IISP links can be
converted to PNNI without disruption to any connections.

After the PNNI link is established, the PNNI protocol will automatically establish the
higher level peer group, including the instantiation of the uplinks and the SVCC RCC
between each logical group node. The conversion of the outside link from IISP to PNNI
should not affect any existing connections transiting that link.

New connections spanning both peer groups will be blocked until the establishment of
the SVCC RCC between the LGNs.

8.10 Passport PNNI Routing Considerations


Passport route computation is accomplished by calculating a suitable path from source to
destination. A path is considered “suitable” if it satisfies the requirements of the connection and
is formed by the concatenation of the links and nodes between the source node and the
destination. Each individual link and sometimes the cumulative costs of multiple links in a path
must satisfy the requested class and quality of service requested by the connection. Typical
connection requirements that need to be satisfied by might be cell rates (bandwidth), cell transfer
delays (CTD), cell delay variation (CDV), cell loss ratio (CLR) or supported service categories.
The attributes on the links are often referred to as “link constraints”. Passport must consider the
link constraints, link metrics and employ load balancing techniques to effectively route
connections through the network. The remainder of this section details how Passport
accomplishes this.

8.10.1 Passport Link Constraints


Passport PNNI supports:
• up to seven provisionable attributes per trunk, all of which are static except the
bandwidth pools
• bandwidth overbooking and partitioning
• control over maximum number of UBR connections
Table 8-4 QoS-sensitive routing attributes on Passport function processors

PNNI trunk
attribute   Description                   CBR           rt-VBR        nrt-VBR        UBR
CLR0        Cell loss ratio for CLP=0     Cbr Clr       rt-VBR Clr    nrt-VBR Clr    n/a
CLR01       Cell loss ratio for CLP=0+1   Cbr Clr       rt-VBR Clr    nrt-VBR Clr    n/a
AvCR        Available cell rate           poolAvailBw   poolAvailBw   poolAvailBw    n/a
MaxCR       Maximum cell rate             n/a           n/a           n/a            ubrMaxConnections

CLR0 and CLR01 are the maximum cell loss ratio (CLR) objectives for CLP=0 traffic
and CLP=0+1 traffic, respectively. CLR is defined as the ratio of the number of cells that
do not make it across the link to the number of cells transmitted across the link. For any
given Passport ATM service category, the CLR0 and CLR01 are advertised with the
same value because they are derived from the existing single CLR attribute provisionable
under each ATM interface. In other words, the meaning of the CLR attribute on Passport
is associated with the compliance definition of the connection. For constant bit rate
(CBR.1) and variable bit rate (VBR.1), the CLR applies for the CLP=0+1 aggregate flow.
For VBR.2 and VBR.3, the CLR applies to the CLP=0 cell flow.

The AvCR is a measure of the effective available capacity for the entire link or for each
specific service category. It is expressed in units of cells per second. The value of this
attribute is derived from the respective bandwidth pool(s). Port capacity can be divided
into the different pools (for example, CBR, rt-VBR, and nrt-VBR). Each service category
is mapped to a given pool through provisioning. By default, all traffic is assigned to a
common pool (pool 1). Each pool is assigned a percentage of the link capacity that may
vary between 0% and 2000% (large percentages are used for oversubscription). The CQC
Passport FPs support 3 bandwidth pools per interface; the PQC supports 5 pools.

MaxCR is an optional attribute that represents the maximum capacity usable by


connections belonging to the specified service category. MaxCR is expressed in units of
cells per second. MaxCR is not directly supported for CBR and VBR connections on
Passport because the bandwidth pools provide the same results. Note that MaxCR=0 for
the unspecified bit rate (UBR) service category is a distinguished value, used to indicate
the inability to accept new UBR connections. With Passport, network operators can
control the number of UBR connections that are accepted on a per-trunk basis with a
provisionable attribute called ubrMaxConnections. This UBR attribute is used to trigger
the setting of MaxCR=0, indicating that no UBR connections should be routed through
this link.

If a link does not satisfy the requirements of the connection, that link is ineligible to
route the connection. Once all the unsuitable links are pruned from the calculation, it is
very likely that more than one path will still exist from source to destination. Some of
these paths may require fewer hops, offer a shorter expected cell transfer delay or a
smaller expected delay variation, or provide more available bandwidth.


Of the many possible routes from source to destination, it is up to the source node to
determine which policy it will apply to find the most optimal route. Route optimality is
evaluated as an additive function of the link metrics: from source to destination, the most
optimal route is the one with the smallest sum of the link metrics along that path.
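
A much-simplified sketch of this two-stage process (prune the links that fail the constraints,
then select the path with the smallest additive metric) is shown below. The link records, the
single AvCR constraint and the Dijkstra-style search are illustrative assumptions and do not
reflect the actual Passport implementation.

    import heapq

    def best_path(links, src, dst, required_cps, metric="aw"):
        """links: dicts with 'a', 'b', 'avcr' (cells/s) and metric values such as 'aw'."""
        graph = {}
        for link in links:
            if link["avcr"] < required_cps:          # prune links that fail the constraint
                continue
            graph.setdefault(link["a"], []).append((link["b"], link[metric]))
            graph.setdefault(link["b"], []).append((link["a"], link[metric]))
        heap, seen = [(0, src, [src])], set()        # Dijkstra: smallest metric sum wins
        while heap:
            cost, node, path = heapq.heappop(heap)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nbr, weight in graph.get(node, []):
                if nbr not in seen:
                    heapq.heappush(heap, (cost + weight, nbr, path + [nbr]))
        return None                                   # no suitable path: the call is blocked

    links = [{"a": "A", "b": "B", "avcr": 100000, "aw": 5040},
             {"a": "A", "b": "C", "avcr": 2000,   "aw": 5040},   # pruned for a 10 000 cells/s call
             {"a": "C", "b": "B", "avcr": 100000, "aw": 5040}]
    print(best_path(links, "A", "B", required_cps=10000))         # -> (5040, ['A', 'B'])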

8.10.2 Passport Link Metrics


Passport PNNI supports three metrics: Administrative Weight (AW), Cell Transfer Delay
(CTD) and Cell Delay Variation (CDV).

Administrative weight is a dimensionless integer applied by the operator. It is used to


reflect the relative attractiveness of the trunk for routing purposes. The AW value can be
used to reflect the amount of bandwidth, cell delay or any other criteria (or combination
of) desired by the operator. AW is applied individually to all service categories: CBR, rt-
VBR, nrt-VBR and UBR. The default AW value (as specified by the ATM Forum) is 5040.

Cell Delay Variation is the maximum expected delay from the egress queuing buffers.
This value represents the worst case scenario, that is the difference in microseconds from
a cell entering the queue with no cells ahead of it, to the case where the queue is full.
CDV only applies to service categories with real time requirements: CBR and rt-VBR. It
does not apply to nrt-VBR, ABR or UBR service categories. The default value for CDV
is dependent on the FP and buffer sizes being used.

Cell Transfer Delay is used to reflect the maximum expected cell delay for using this
trunk. The value should encompass the expected CDV and the propagation time for a cell
to traverse this trunk. Therefore, the CTD should never be less than the CDV value.
Similarly to CDV, CTD only applies to service categories with real time requirements
(CBR and rt-VBR). CTD does not apply to nrt-VBR, ABR or UBR service categories.
The default CTD value is dependent on the FP being used.

Passport PNNI routing metrics include:


• up to eight provisionable static metrics are provided per trunk
• provisionable optimization criteria on a per-node, per-QoS basis (CBR and rt-VBR)
Table 8-5 QoS-sensitive routing metrics

PNNI trunk
attribute   Description                   CBR          rt-VBR          nrt-VBR          UBR
AW          Administered weight           CBR weight   rt-VBR weight   nrt-VBR weight   UBR weight
MaxCTD      Maximum cell transfer delay   CBR maxCtd   rt-VBR maxCtd   n/a              n/a
CDV         Cell delay variation          Cbr Cdv      rt-VBR cdv      n/a              n/a


Note that changes to the AW metric are not service affecting. However, any
modifications to the CDV and MaxCTD metrics will reset the PNNI links, and
connections will be rerouted (SPVCs and SPVPs) or cleared (SVCs and SVPs).

Passport allows the operator to optimize routes based on service category and supported
metric. Therefore, CBR service category can be optimized on a different metric than rt-
VBR.

There are two basic methods for engineering the Passport PNNI metrics:

Method A
• Optimize CBR traffic based on CDV where the CDV = minimal queueing delay (e.g.
10 cells) since peak rate reservation is provided by Passport ACAC
• Optimize rt-VBR traffic based on MaxCTD where the MaxCTD = propagation delay

Method B
• Route all service categories based on AW
• Cost = hierarchical structure based on tariffs, distance, and/or link speed
For instance, in a high-speed trunking environment (e.g. OC-3c), the MaxCTD can be
approximated to be equal to the propagation delay because it is assumed to be the
dominant delay component across the network. CDV can also be assumed to be less than 10
cell slots of the link scheduler (i.e. fewer than 10 cells waiting in the common queue).
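
As a rough worked example of Method A, the short calculation below assumes an OC-3c cell slot
of about 2.8 microseconds and a fibre propagation delay of roughly 5 microseconds per kilometre;
both figures are generic assumptions, not Passport defaults.

    OC3_CELL_RATE = 353208                      # approximate OC-3c payload rate, cells/s
    CELL_SLOT_US = 1e6 / OC3_CELL_RATE          # ~2.83 microseconds per cell slot
    FIBRE_DELAY_US_PER_KM = 5.0                 # typical propagation delay in fibre

    def method_a_metrics(link_length_km, queue_cells=10):
        cdv_us = queue_cells * CELL_SLOT_US                            # CDV ~ 10 cell slots
        maxctd_us = link_length_km * FIBRE_DELAY_US_PER_KM + cdv_us    # CTD is never < CDV
        return round(cdv_us, 1), round(maxctd_us, 1)

    # A 400 km OC-3c trunk: CDV ~ 28 us, MaxCTD ~ 2028 us (propagation dominates).
    print(method_a_metrics(400))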

8.10.3 Extended QoS Routing


PNNI supports extended QoS routing, where a connection can optionally specify its
end-to-end routing requirements.

Under PNNI 1.0 signaling, two new optional QoS IEs are introduced in the Setup and
Connect messages to supplement the QoS parameter IE that has already been defined in
UNI 3.1.

The new QoS IEs include:

Extended QoS parameter IE


The extended QoS parameter IE specifies individual forward/backward QoS parameter
requirements on a per-call basis for the CBR, rt-VBR, or nrt-VBR service categories. The
acceptable peak-to-peak CDV (determined by users/operators) indicates the calling user’s
highest acceptable (least desired) peak-to-peak cell delay variation value, expressed in
units of microseconds. The cumulative peak-to-peak CDV (determined by PNNI
switches) includes the measured cumulative CDV value from network boundary to
network boundary. The acceptable CLR parameter (determined by users/operators)
indicates the calling user's highest acceptable (least desired) CLR value. The CLR is
expressed as an order of magnitude n, where the CLR takes the value 10^-n.

End-to-end transit delay IE


The end-to-end transit delay IE specifies the maximum end-to-end transit delay
requirements on a per call basis for the CBR and rt-VBR service categories. The PNNI
acceptable forward maximum cell transfer delay parameter (determined by
users/operators) indicates the calling user’s highest acceptable (least desired) maximum
cell transfer delay value. It is expressed in units of microseconds. The PNNI cumulative
forward maximum cell transfer delay (determined by PNNI switches) includes the
measured cumulative MaxCTD value from network boundary to network boundary.

The QoS parameters values included in the extended QoS parameters IE, together with
those included in the end-to-end transit delay IE (if present), specify a QoS capability at a
UNI 4.0 interface.

Regarding provisionable QoS parameters for SPVCs/SPVPs: if the next hop is PNNI,
the call is routed according to the acceptable values; if the next hop is IISP/UNI,
UNI 3.1 signaling is used without the new IEs.

If the call originates from an interface that does not support the extended QoS IEs, then
by specification of ATMF PNNI1.0, they must be inserted into the signalling stream
before the next PNNI interface is encountered. On Passport, these IEs are always inserted
without specifying any hard timing, delay or cell loss ratio requirements. Therefore, with
respect to timing, delay or cell loss ratio, this connection will not have any requirements
for the network to meet.

8.10.4 Overview of Edge-based re-routing (EBR)


The PNNI 1.0 standard specification describes the procedures for the routing and
progressing of call establishments across a PNNI routing domain. It also provides
rerouting capabilities for SPVCs and SPVPs initiated and terminated within a PNNI
domain. However, the PNNI 1.0 standard specification does not specify the procedures
for route recovery (i.e. rerouting) in the case of SVCs, and of SPVCs/SPVPs initiated
outside the PNNI domain.

If a connection is dynamically established outside a PNNI domain, the PNNI network
operator must release the connections back to the owner under failure scenarios. The
PNNI 1.0 standard also omits route optimization of active connections when shorter paths
become available. Previously, network operators had to manually tear down connections and
re-establish them in order to optimize the route selection over the shortest path; this is
inherent to connection-oriented routing systems. Furthermore, in the past, network operators
had no direct means to determine which SVCs, SVPs, SPVCs and SPVPs were impacted by a
network failure, or whether connection recovery was successful.


In order to improve the rerouting procedures in the ATMF PNNI protocol, Nortel
Networks has recommended an edge-based rerouting mechanism for point-to-point
connections. In the proposal, an “edge-to-edge” protocol would allow the source node
(DTL originator) and the destination node (DTL terminator) to participate in the control
of the rerouting operation within a PNNI 1.0 network.

8.10.4.1 EBR functionality


PNNI Edge-based Rerouting (EBR) introduces new capabilities that are
defined in the pre-standard ATM Forum specification, “Baseline document of
edge-based rerouting for point-to-point connections”. The support of first
phase EBR capabilities on Passport include:
• PNNI network-initiated rerouting procedures upon failure of an active
point-to-point "SVC/SVP" connection (a.k.a. EBR route recovery)
• Manual triggering of route optimization procedures for active
SVC/SVP/SPVC/SPVP connections with reduced cell loss (a.k.a. EBR
route optimization)
• Operational procedures used to identify EBR-capable connections which
have been successfully recovered after outages and could benefit from route
optimization, i.e. interface alarm indication and statistics

8.10.4.2 EBR benefits


EBR allows network operators to:
• achieve more efficient use of network resources; a PNNI network with
route optimization capability will have a lower bandwidth consumption
across the network, freeing up capacity for other traffic sources
• reduce the number of connection hops thus improving the end-to-end
delay
• provide faster rerouting time for SVCs and greater availability for
established connections impacted by network failures when traversing
multiple PNNI domains

8.10.5 EBR route recovery


For a given connection, edge-based re-routing defines a single re-routing segment spanning
the entire PNNI domain (i.e. the re-routing node is always the DTL originator node and the
rendezvous node is always the DTL terminator). This approach permits network operators to
better understand the implications associated with route recovery and route optimization
mechanisms in a flat or hierarchical PNNI network. A segmented or peer group-based rerouting
approach is a subject for further study and will eventually run in conjunction with the
edge-based re-routing protocol.


[Diagram: two message-flow panels across PNNI Domains 1, 2 and 3 connected by UNI, IISP and
AINI links, showing the RELEASE, SETUP and CONNECT flows for route recovery without edge-based
rerouting (recovery spans all domains back to the source) and with edge-based rerouting
(recovery is confined to the affected PNNI domain).]

Figure 8-25 Benefits of EBR route recovery

The EBR route recovery procedures are required when networks are responsible for
quickly restoring connections impacted by outages. EBR allows PNNI network-initiated
fault recovery of point-to-point “SVC/SVP” connections without the intervention of the
connection owner. It also minimizes the network resources required to re-instate active
connections, confining the rerouting to a single PNNI domain without the assistance of
multiple PNNI network providers.

The route recovery mechanism uses the cumulative QoS information of the original
connection (also known as the incumbent connection) as the criterion to determine whether
to reroute the call. As long as a viable alternate path provides as good or better
cumulative CDV and MaxCTD parameters, the PNNI network should be able to reroute. This
also means that the current route recovery procedure will not reroute onto "longer" paths.
This is a generic mechanism used by both the route optimization and route recovery
procedures. The cumulative QoS parameters (CDV and MaxCTD) are calculated between the two
PNNI edge nodes implementing the EBR protocol.
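
The recovery criterion above amounts to a simple comparison of cumulative QoS values. The
fragment below is a hypothetical illustration, not the EBR protocol itself; the dictionaries
of cumulative CDV and MaxCTD values are assumed inputs.

    def can_recover_on(alternate, incumbent):
        """Reroute only if the alternate path is as good as or better than the incumbent."""
        return (alternate["cdv"] <= incumbent["cdv"] and
                alternate["maxctd"] <= incumbent["maxctd"])

    incumbent = {"cdv": 250, "maxctd": 4000}                        # microseconds
    print(can_recover_on({"cdv": 200, "maxctd": 3500}, incumbent))  # True: as good or better
    print(can_recover_on({"cdv": 300, "maxctd": 3500}, incumbent))  # False: "longer" path rejected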

In general, EBR route recovery capabilities are not required for SPVCs/SPVPs
originating and terminating within the same PNNI domain since it is already provided by
the base PNNI 1.0 protocol.


8.10.6 EBR route optimization


Connection-oriented routing systems like PNNI can reserve more aggregate network
bandwidth when routes are non-optimal (e.g. alternate path after routing around network
failure). Longer routes decrease the total capacity of the network and degrade the quality
of service of the end users. EBR route optimization provides more efficient use of
network resources in terms of bandwidth utilization, transit delay and the number of
nodes a call needs to traverse.

An important characteristic of route optimization is that the process uses on-demand route
calculation with minimally disruptive procedures. This is accomplished in the following ways:
• existing connections are never dropped, as a new optimal path is first established before
the traffic is swapped
• cell ordering is guaranteed and no cells are duplicated
• pacing of a single route optimization attempt per ATM signaling interface ensures that
network resources are never over-utilized during optimization cycles
• route optimization attempts can be interrupted by the establishment of new calls and by
rerouting procedures (i.e. these take priority over optimization)

[Diagram: within a single PNNI domain, the DTL originator and DTL terminator maintain both the
incumbent connection and the new rerouting connection between source and destination; traffic
is swapped from the incumbent connection to the rerouting connection.]

Figure 8-26 Benefits of EBR route optimization

During route optimization, there is a transition state where two connections coexist. The
original connection that was established for the call is referred to as the incumbent
connection, while the new "optimal" connection that will be established by the route
optimization mechanisms is referred to as the re-routing connection.

Note that successful route optimization of a connection will incur cell loss for the
duration of the swap interval. The period of cell loss can vary and is characterized by the
time it takes to clear the incumbent connection. The DTL originator initiates the tearing
down of the incumbent connection. It is performed through call control procedures that
involve the propagation and processing of a Release PDU across all of the PNNI nodes
along the incumbent connection towards the DTL terminator. In comparison, the service
interruption caused by the current EBR route optimization is on the order of 300% less
than the loss of data triggered by a hard re-route.

In the first phase, route optimization is initiated by the network operator through the
optimize command issued to an “edge” (or ingress) ATM signaling interface which
maintains point-to-point connections (typically the UNI or IISP interfaces). The optimize
command initiates route optimization procedures for all of the applicable connections
associated with the ATM signaling interface. Only ATM point-to-point connections
traversing PNNI nodes fulfilling the role of DTL originators are optimized.

ATM point-to-point connections traversing PNNI tandem nodes or nodes performing the
role of DTL terminators are unaffected by route optimization commands. It is important
to understand that the optimize command will only move connections if there is a new route
that offers an improvement compared to the existing route. If Passport finds an eligible
path in the topology database with a lower sum of link metrics (even by as little as 1 metric
unit), then the connection is considered a candidate for route optimization. Otherwise, the
connection is not considered for route optimization and the cycle continues with the
remaining EBR-capable connections on the ATM signaling interface. Note that the route
optimization procedure will not consider shorter paths if the incumbent connection is
"within" the load balancing variance.

The first phase of the EBR implementation is characterized by the following limitations:
• route optimization is service affecting
• route recovery is limited to paths providing as good or better end-to-end QoS
guarantees (e.g. cumulative CTD) than the incumbent connection
• EBR route recovery and route optimization mechanisms don’t support preemption
(calls are rerouted and optimized in random order)
• EBR route optimization does not prevent “temporary” double booking of network
resources (ACAC) when the rerouting connection overlaps the same intermediate
PNNI nodes that are already part of the incumbent connection
• automatic real-time triggering of the EBR route optimization procedures initiated by
released bandwidth and shorter path availability is for future releases
• the automatic process periodically evaluates the current path characteristics against
the updated topology database
• EBR capabilities over PNNI logical paths (VP associated signaling) are not supported
• EBR route optimization will not move connections to an equivalent path in order to
balance the bandwidth reservation among multiple equal best paths and multiple
PNNI links in a link group (links that are used to interconnect two neighbor nodes)
When connections are routed across multiple PNNI networks interconnected with UNI,
IISP or AINI links, each intermediate PNNI network independently coordinates its two
edge switches along the path in order to synchronize the rerouting and optimization
procedures. As it stands, it is the responsibility of the PNNI networks to provision the
ingress UNI interfaces if route recovery is desired. Path optimization is always initiated
independently by each PNNI organization. Adding EBR to a UNI/IISP signaling
interface enables by default both route recovery and route optimization for all switched
connections originating from this interface.

It is also important to understand that the current edge-based rerouting capability is
limited to failures inside a PNNI domain. In other words, when a call/connection fails
outside the PNNI network boundaries (for instance, an IISP link or destination node
outage), neither route recovery (except for SPVCs and SPVPs) nor route optimization
procedures are attempted. This is because the re-routing protocol only operates
between two PNNI nodes for a given connection. So far, the standardized route
optimization procedure cannot involve another PNNI destination node in a multi-homed
environment.

As a result, a specific indication is provided to the source node not to trigger the edge-
based rerouting operation when the failure is external to the PNNI network.

For a connection to be eligible for EBR capabilities, the DTL originator and DTL
terminator must support EBR procedures. Supporting EBR procedures means that the
nodes both implement route recovery and route optimization procedures. The
intermediate nodes within the PNNI network transparently transport these information
elements across the network. This is accomplished by using the “Pass along/No pass
along request” described in the existing PNNI 1.0 specification. This procedure is part of
the PNNI 1.0 minimum function and all PNNI vendors must be compliant to it.

Passport supports different subscription options that allow for increased flexibility during
feature deployment and allow for the differentiation of ATM service based on connection
recovery capabilities. Addition of the EBR component only impacts new call
establishments. New connections are subscribed to the specified EBR options.
Furthermore, changes to the provisioned attributes do not affect already
established connections. Existing connections retain the old subscription options and new
connections are given the new options.

To provide route optimization and route recovery capabilities, PNNI connections must be
registered for EBR capabilities when the call is established. This means that during initial
deployment of EBR in an existing PNNI network, connections must be cleared after the
software upgrade in order to initiate the EBR capabilities during call establishment. Once
the call has been established, the connection maintains EBR capabilities until the
connection owner clears the call or the network no longer has the route diversity to
perform route recoveries for failed connections.


8.10.7 EBR in Hierarchical PNNI Networks


In a hierarchical domain, special consideration must be taken to account for hierarchical
routing. Recall that under simple node representation, each higher level logical node
represents an entire peer group, but appears in the routing tables as a single node. By
default, the single instance of the peer group does not accurately reflect the true cost of
traversing that peer group. As a result of this, EBR path optimization may incorrectly
calculate a hierarchically more optimal route based on the inaccurate information of the
logical group node costs. To address this, if EBR is used in a hierarchy, emphasis should
be placed on forming peer groups of relatively the same size (number of nodes) and
diameter (number of nodes across the peer group). Doing so will minimize the effect of
simplex node representation. Also, ensure that care is taken to derive the aggregate
outside link costs between peer groups. This cost is reflected as the link cost between
higher level nodes. Dramatically increasing or lowering a single outside link cost may
skew the total aggregate cost of the higher level horizontal link, inadvertently making
some routes appear metrically better (more optimal) than they really are.

To further address this issue, the outside links (and consequently the uplinks and higher
level horizontal links) can have their routing costs increased to reflect routing across a
peer group rather than across a single node, as it might otherwise appear.

8.10.8 Multi-path variance (MPV)


Multi-path Variance (MPV) allows Passport path computation to perform load balancing
techniques on a controlled set of paths that includes the optimal paths and some paths that
are sub-optimal. Historically, load balancing was only done on equal best cost paths from
source to destination. If longer or sub-optimal paths also existed, they were not
considered because it was impossible to determine when a path was too sub-optimal to be
acceptable.

MPV solves this problem by allowing an operator to define exactly to what degree paths
are considered optimal, acceptable or sub-optimal. Up to three optimal or
acceptable paths will be eligible for load balancing (up to a maximum of four alternate
paths, as defined by MaxAlternateRoutes).

MPV adds benefits to the Passport routing system by more efficiently using the network
bandwidth and providing more connection reliability. If load balancing is done on only
paths of optimal and equal cost, then connections would always be routed on the same
small number of paths. Using this small set of paths would continue until a significant
change on one of the intermediate links of such paths occurred (e.g. link down or
bandwidth saturation). Until then, other non-optimal but acceptable paths would be under-
utilized. Furthermore, due to the large number of connections routed on only a few
paths, a failure of one of these paths causes massive rerouting as it affects the service of
many connections. MPV can be used to alleviate all these problems by spreading
connections over optimal and acceptable paths in the network.


8.10.8.1 Acceptable variance


The acceptable variance defines the amount a route can be sub optimal, but
still be acceptable for load balancing purposes. It is an integer value that
defines a range between the cost of the optimal path and the cost of a path that
is higher than the optimal cost, but still acceptable. Any routes in the network
that have a cost that falls between the two boundaries may be eligible for load
balancing.

The value of the minimum variance should represent the average cost of a
single link in this routing domain.

The acceptable variance is defined by the following equation:

Acceptable Variance = [VarianceFactor × Route(OptimalCost)] + Minimum Variance

The Variance Factor is defined as a percentage that is applied to the optimal
cost of a route from source to destination. For instance, if the best cost route
from source to destination is 20000 AW, then a variance factor of 10% would
produce an acceptable variance of 2000 AW. Therefore, any route
representing a cost between 20000 and 22000 would be considered acceptable
by this part of the MPV equation.
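
A minimal numerical sketch of the acceptable variance calculation follows, using the same figures as the example above; the helper name is purely illustrative.

```python
def acceptable_variance(optimal_cost, variance_factor, minimum_variance):
    """Acceptable Variance = (VarianceFactor x Route(OptimalCost)) + Minimum Variance."""
    return variance_factor * optimal_cost + minimum_variance

optimal = 20000                                    # best cost route, in AW
variance = acceptable_variance(optimal, 0.10, 0)   # 10% variance factor, no minimum variance
print(variance)                                    # 2000.0 AW
# Any path whose cost falls in [20000, 20000 + 2000] is acceptable for load balancing.
print(optimal + variance)                          # 22000.0
```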

The second part of the equation represents the constant amount of variance
that must always be considered. As the optimal cost of a path in the network grows
smaller, the variance contributed by the variance factor (a percentage of the
optimal cost) also grows smaller. It is foreseeable, in small one-hop networks,
that the variance factor would produce a result that is too small to include any
other routes or links to load balance on. In these situations, the minimum
variance defines the smallest amount of variance that will always be applied.

The maximum number of links returned from MPV is defined by the
parameter MaxAlternateRoutes, which has a maximum value of 3. In some
networks, it is possible to have more than three paths that fall in the defined
“acceptable” range of variance. To prune these extra paths, MPV employs
other techniques to return the optimal set of paths, namely route diversity.

8.10.9 Route diversity


Route diversity on Passport is a mechanism to ensure that paths with many common links
can be distinguished. To maximize network resources, it is desirable to load balance on
paths that have as few links in common as possible. Doing so spreads
connections over more diverse paths, minimizing the number of connection reroutes in
the case of a link failure, while maximizing the available network resources for other
purposes.


To accomplish route diversity, Passport path computation internally remembers the
individual links used for the best cost path. If multiple paths are required (via multi-path
variance), then each subsequent acceptable path (up to a maximum of three) is compared
to the best cost path for diversity.

A path is considered to be “diverse” (and therefore acceptable to load balance on) if the
number of common links in the two paths is strictly less than a configured percentage of
the total links in the best cost path. By default, the diversity percentage is 50%.

[Figure: nodes 1 through 8 connected by links L1, L2, L3.1, L3.2, L4, L5, L6 and L7, where
L3.1–L4 and L3.2–L5 form two parallel branches between the common segments L1–L2 and
L6–L7]

Figure 8-27 Routing where all links have an equal cost

As an example of route diversity, consider Figure 8-27. In this network, if all links have
an equal cost, routing could choose two paths from node 1 to node 7:

(L1, L2, L3.1, L4, L6, L7) and (L1, L2, L3.2, L5, L6, L7)

both of which are optimal. The problem here is that for the most part, this is almost the
same path. The paths are identical for 4/6 of the hops involved. Other sub-optimal paths
may exist that might not use any of these links and would provide better load spreading
than choosing strictly on the optimal paths. For instance, consider a new link that
directly joins node 1 to node 7. Such a link may have a higher routing cost than either
(L1, L2, L3.1, L4, L6, L7) or (L1, L2, L3.2, L5, L6, L7) but would be desirable to use for
diversity purposes.

To better quantify the diversity of two paths, we define the diversity degree,
D(path1,path2), the diversity degree of path1 relative to path2, as follows:

D(path1,path2) = 1 − [(number of common links) ÷ (number of links in path2)]

Note that since we always normalize to the number of links in path2, the diversity degree
is not a symmetric relationship between path1 and path2, which means that

D(path1,path2) != D(path2,path1)


In the on-demand PNNI routing algorithm, the diversity of all the paths is calculated
relative to the best path (the optimal path). Among the paths computed in all three
steps of the algorithm, only the paths that have a diversity degree relative to the best
path greater than 0.5 are considered as alternate paths. Note that no sorting is done based
on the diversity degree. Specifically, if the algorithm computes more than lbMaxPaths
paths, it is possible that paths with a higher diversity degree are not considered as
alternate paths, while paths with a smaller diversity degree (but still greater than 50%) are considered.
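
The following sketch, under the assumption that paths are modelled as simple lists of link identifiers, applies the diversity-degree formula and the 0.5 threshold described above; names such as `lb_max_paths` are illustrative stand-ins for the provisioned attributes.

```python
def diversity_degree(path1, path2):
    """D(path1, path2) = 1 - (number of common links) / (number of links in path2)."""
    common = len(set(path1) & set(path2))
    return 1 - common / len(path2)

def select_alternates(best_path, candidate_paths, lb_max_paths=3, threshold=0.5):
    """Keep candidates whose diversity relative to the best path exceeds the threshold.

    Candidates are taken in the order they were computed; no sorting by diversity
    degree is performed, mirroring the behaviour described in the text.
    """
    alternates = []
    for path in candidate_paths:
        if diversity_degree(path, best_path) > threshold:
            alternates.append(path)
        if len(alternates) == lb_max_paths:
            break
    return alternates

# The two optimal paths from Figure 8-27 share L1, L2, L6 and L7 (4 of 6 links),
# so their mutual diversity degree is only 1 - 4/6 = 0.33 and neither would be
# retained as an alternate for the other.
p1 = ["L1", "L2", "L3.1", "L4", "L6", "L7"]
p2 = ["L1", "L2", "L3.2", "L5", "L6", "L7"]
print(round(diversity_degree(p2, p1), 2))   # 0.33
print(select_alternates(p1, [p2]))          # []
```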

8.10.10 Engineering Multi-path Variance (MPV)


The default values for MPV are 0 for minimum variance and 0 for the variance factor.
These values effectively disable the feature and provide the default behaviour of load
balancing on equal optimal cost paths only. Changing the values to non-zero effectively
enables the MPV mechanism. The value of each MPV parameter will depend on the
network topology and architecture where it will be used. The remainder of this section
discusses some guidelines on how to choose suitable MPV values. Engineering MPV is
the same for most typical network architectures: ring topologies, fully meshed or partially
meshed. However, a slight distinction is made between engineering MPV in flat PNNI
networks and hierarchical PNNI networks.

8.10.10.1 Flat Networks


In flat PNNI networks, the entire source-to-destination route can always be
computed optimally (all link states/costs are known to the source node). It is
possible to know beforehand the general costs of the optimal routes in the
network (at least between a few major communities of interest). It is also
possible to estimate what suitable alternate routes are available, and their
respective costs.

Best cost routes can always change in dynamic networks due to bandwidth
changes or link failures. The MPV formula accounts for such changes by basing
the variance on the current best cost path, deriving a percentage of the optimal
cost. Therefore, it is recommended that the significant part of the MPV
equation defining the majority of the acceptable variance be the Variance
Factor. Figure 8-28 illustrates how the acceptable variance increases as the
cost of the optimal path increases.


[Figure: plot of the acceptable variance ∆ = C + V ⋅ OptMetric(OP) against the optimization
metric of the optimal path, where C is the minimum variance and V is the variance factor; the
acceptable variance starts at C and grows linearly with the cost of the optimal path]

Figure 8-28 Acceptable variance versus cost of optimal path

The Variance factor should be set large enough to include some number of
acceptable paths (as desired by the operator). The value of the variance factor
will be influenced by the cost of the expected optimal paths and the variance
between the optimal path and suitable alternative paths. As a guideline, the
worst case (largest expected cost) optimal path and variance deviation should
be used to determine the value of the variance factor. It is important to
engineer the variance factor to always find suitable paths in the worst case
situation. Doing so guarantees variance in the entire network. In cases where
the optimal cost is lower, or the suitable paths are “closer” to the optimal path,
MPV will be inclined to include too many routes, as the variance factor is set
to the higher worst-case value. However, as the optimal cost decreases, so does
the acceptable variance. Furthermore, MPV is limited to include a maximum
of 4 alternate paths that all must satisfy the diversity criteria. In other words,
despite aggressive values of the variance factor, MPV limits itself to returning
only a small (provisionable) number of routes.

Once a suitable value for the variance factor is found, the minimum
variance can be defined. Again, referencing the worst-case scenario
previously discussed, the worst-case acceptable variance will already
have been estimated. A guideline for choosing the minimum variance is the average
cost of a single link (or a couple of links). If the source and destination are very
close in proximity, then the variance factor becomes incidental. The minimum
variance should represent the cost of one or two links to facilitate load
spreading over short distances.
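
As a hedged illustration of the worst-case approach described above, the following sketch derives a variance factor from an estimated worst-case optimal cost and the cost of the most expensive alternate that should still be acceptable; the figures and helper names are assumptions, not recommended values.

```python
def derive_variance_factor(worst_case_optimal, worst_case_alternate):
    """Smallest variance factor that still admits the worst-case alternate path."""
    return (worst_case_alternate - worst_case_optimal) / worst_case_optimal

def derive_minimum_variance(average_link_cost, links=2):
    """Constant variance representing one or two typical links, for short routes."""
    return average_link_cost * links

# Assumed figures for a flat network: the most distant community-of-interest pair
# has an optimal route of 30000 AW, and its best alternate costs 33000 AW.
vf = derive_variance_factor(30000, 33000)
print(round(vf, 2))                   # 0.1 -> a 10% variance factor
print(derive_minimum_variance(1000))  # 2000 AW constant term for short routes
```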

8.10.10.2 Hierarchical Networks


The same engineering guidelines that apply to flat networks also apply to
hierarchical networks. However, in hierarchical networks, an extra challenge
is presented due to calculating hierarchically complete source routes with
simplex node representation. Load spreading in a hierarchical network can
cause multiple connections to the same destination (from the same source) to
follow very different paths, traversing many different peer groups. In
some cases, due to simplex node representation, the paths may be too diverse,
and represent very sub-optimal paths through misrepresented peer groups.
Without complex node representation, a true representation of the size of each
peer group is not known, and is only represented by the cost of the horizontal
link joining the higher level nodes.

To minimize this effect, the cost of the upper level horizontal links connecting
two logical nodes should be priced higher than normal inside links to account
for the extra (and unknown) peer group costs. Doing so distinguishes the case
where the horizontal link cost reflects the cost to the next node, from a higher
level horizontal link reflecting the cost of a peer group. As a result of
increasing the outside link costs, the network routing systems will not change
hierarchical routes including different peer groups due to small topology
changes in the local peer group. If the cost system for both inside and outside
links is equal, then small changes in the local peer group topology may have
larger and adverse effects on the global hierarchical route. Differentiating the
inside and outside link costs minimizes this effect.

Also, the variance factor should not be set high enough to include many
horizontal link combinations from source to destination. If possible, try to
limit the number of suitable routes at higher levels of the hierarchy. For more
information on how to cost links in a Passport network, refer to section 8.11
Passport Hierarchical Routing.

8.10.11 Load balancing


Load balancing is a technique used to ensure that the resources in a distributed system are
equally loaded. In a network environment, load balancing is used to achieve a more
balanced utilization of the network resources, therefore minimizing the risk of congestion
and increasing the stability of the network in the case of link or node failures.

In order to achieve load balancing in a network, there are two requirements that have to
be met. First, the routing algorithm must determine not one, but multiple diverse
acceptable routing paths. Passport utilizes the Multi Path Variance feature to accomplish
this. Second, out of these acceptable paths, the load balancing scheme has to select the
routing path used by the connection, ensuring that a balanced utilization of the network
bandwidth is achieved.

Passport supports four load balancing techniques that spread connections based on
random selection, the available cell rate of a path and/or the optimization cost of different
acceptable paths. The different services offered in the network coupled with the specific
network topology will define what load balancing technique is most suitable.


Uniform Load Balancing


Uniform load balancing is the default Passport load balancing technique. It selects each
acceptable path with equal probability. Uniform load balancing does not
consider the available cell rate or the optimization metric of any path it chooses. As a
result of this, uniform load balancing is best suited for networks designed with many
equal cost routes.

If the costs of the acceptable routes are roughly equivalent, then random selection is an
efficient method of choosing one. It also offers the benefit, without added complexity,
of distributing all the connections over all the paths evenly. If uniform load balancing is
used in networks with large variations in best cost and available cell rate, sub-optimal
network utilization could occur. It is possible for some paths to become over-utilized, and
for CBR and rt-VBR traffic to take unnecessary extra hops. The one exception to these
guidelines is UBR traffic, which, because it does not require network timing or reserve any
trunk capacity, is well suited to uniform balancing.

If n denotes the number of acceptable routing paths, then the probability of choosing a
path pi as the routing path is:

Pr[ pi ] = 1 ÷ n

Widest Load Balancing


Widest load balancing will always select the path with the most amount of available cell
rate. The maximum available cell rate of a path is determined by the smallest amount of
available cell rate on any link of that path. The link with the smallest available cell rate in
any path represents the maximum amount of bandwidth that can be transmitted on the
path. Regardless of how much more bandwidth is available on other links in the path,
exceeding the link with the smallest amount will cause congestion.

Widest load balancing is best suited for networks with equal-sized trunks of equal
utilization. The widest technique may be well suited to load balancing connections
effectively over backbone links between core nodes. For instance, in a peer group
consisting entirely of core backbone nodes, the widest technique could effectively
distribute connections over the parallel backbone links.
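
The sketch below illustrates how the bottleneck (maximum usable) cell rate of a path is determined and how the widest technique would pick among acceptable paths; the data structures and figures are assumptions for illustration only.

```python
def bottleneck_acr(path_link_acrs):
    """The usable cell rate of a path is limited by its smallest link AvCR."""
    return min(path_link_acrs)

def widest_path(paths):
    """Select the acceptable path with the largest bottleneck available cell rate."""
    return max(paths, key=bottleneck_acr)

# Each path is given as the list of available cell rates (cells/s) of its links.
paths = [
    [353207, 96000, 353207],   # bottleneck 96000
    [176603, 176603, 125000],  # bottleneck 125000  <- widest
]
print(widest_path(paths))
```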

8.10.11.1 Proportional to Available Cell Rate


This stochastic load balancing technique considers only the available cell rate
criterion when selecting the routing path. Specifically, it selects a path with a
probability computed based on the following equation:

Pr[ pi | AvCr ] = AvCr( pi ) ÷ Σ(k=1..n) AvCr( pk )


Since the probability of selecting a path as the routing path is directly
proportional to its available cell rate, a path with a higher available cell rate is
chosen more frequently than the other paths.

8.10.11.2 Proportional to Optimization Metric


In contrast to the previous technique, this load balancing technique selects the
routing path based on the optimization metric values. Specifically, the
probability of selecting an acceptable path is inversely proportional to its
optimization metric, therefore the paths with lower optimization metrics are
chosen more frequently than the rest of the paths. If OptMetric (pi) denotes the
value of the optimization metric of the path pi, the probability of choosing the
path pi as the routing path is equal to:

Pr[ pi | OptMetric ] = OptMetric⁻¹( pi ) ÷ Σ(k=1..n) OptMetric⁻¹( pk )

8.10.11.3 Proportional to Available Cell Rate and Optimization Metric


The load balancing techniques presented in the previous sections use either
the available cell rate or the optimization metric criteria in determining the
routing path. By considering these criteria individually, these techniques do
not take advantage of the benefits that the other criterion introduces. For
example, if the routing path is chosen proportionally to the available cell rate,
it is possible that the most sub-optimal acceptable path is chosen
predominantly. Similarly, if the routing path is chosen only depending on the
optimization metric, the PNNI routing algorithm does not consider the
bandwidth available on different paths, which could, in time, diminish the
number of available routing paths in the network.

To overcome these limitations, this load balancing technique selects the
routing path based on both the available cell rate and the optimization metric.
The probability of selecting the path pi as routing path is given by the
following formula:

Pr[ pi ] = 0.6 ⋅ Pr[ pi | AvCr ] + 0.4 ⋅ Pr[ pi | OptMetric ]


Although this technique uses both criteria in selecting the routing path,
the equation above shows that more importance is given to the available cell
rate, the resulting probability being biased towards this criterion.
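
Assuming the acceptable paths are described only by their available cell rate and optimization metric, the following sketch computes the selection probabilities for the stochastic techniques above. The 0.6/0.4 weighting is the one stated in the text; all other names and figures are illustrative.

```python
def pr_uniform(paths):
    """Uniform load balancing: every acceptable path is equally likely."""
    n = len(paths)
    return [1 / n] * n

def pr_avcr(avcrs):
    """Probability proportional to the available cell rate of each path."""
    total = sum(avcrs)
    return [a / total for a in avcrs]

def pr_optmetric(metrics):
    """Probability inversely proportional to the optimization metric of each path."""
    inverse = [1 / m for m in metrics]
    total = sum(inverse)
    return [i / total for i in inverse]

def pr_combined(avcrs, metrics, w_avcr=0.6, w_metric=0.4):
    """Weighted combination biased towards the available cell rate."""
    pa = pr_avcr(avcrs)
    pm = pr_optmetric(metrics)
    return [w_avcr * a + w_metric * m for a, m in zip(pa, pm)]

# Three acceptable paths, described by (available cell rate, optimization metric).
avcrs   = [100000, 50000, 25000]
metrics = [2000, 2200, 2600]
print(pr_uniform(avcrs))            # equal thirds
print(pr_avcr(avcrs))               # biased towards the widest path
print(pr_optmetric(metrics))        # biased towards the cheapest path
print(pr_combined(avcrs, metrics))  # compromise weighted 0.6/0.4
```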


8.11 Passport Hierarchical Routing


One limitation of simplex node representation is that the true cost of traversing an LGN is not
known. The size of the peer group the LGN represents is hidden from the DTL originator. This is
how PNNI supports large scalability.

To compensate for the limitations of simplex node representation, the cost of the outside links
(and consequently the uplinks and upper level horizontal links) can be engineered to reflect a
more accurate picture of the topology. Increasing the cost between two logical group nodes has
the following benefits:
• Small topology changes (i.e. adding or deleting links, or introducing /removing nodes) in
the DTL originators peer group will not affect the hierarchically complete routing.
• Features like Edge Based Routing (EBR) path optimization and multi-path variance
(MPV) can determine more accurate optimal paths. The increased higher level link costs
will better represent the cost of traversing many peer groups.
However, changing the value of the outside links has a great effect on the end-to-end routing of
connections, and care must be taken to not adversely affect call flows.

Due to such complexities, it is difficult to give general guidelines that can be simply applied to
every network. Instead of attempting such a daunting task, this section outlines the various cost
considerations that must be taken into account when designing Passport networks. For more
specific case scenarios or assistance in engineering a specific network topology, please consult
your Data Network Engineering representative.

8.11.1 Costing Outside Links


As the hierarchy increases in levels, the link costs in those levels should also increase to
combat the amplified routing ambiguities. Consider a node in the highest level peer group.
For routing purposes, this node represents a single point that advertises a reachable
destination address. In reality, this node may represent many successor peer groups or
groups of peer groups. The cost of the links attaching to this node should be increased to
reflect some of the hidden complexity.

The cost of an outside link should represent multiple factors: the cost of actually using
the outside link, the cost of traversing the connecting peer group and possibly a cost
considering its position in the hierarchy. The cost of the outside link should be in line with
how the remainder of the network links are engineered (i.e. based on available
bandwidth, delay, etc.). The cost of traversing the connecting peer group should be based on
the expected number of hops to exit that peer group. This number does not represent the
total width of the peer group, rather the expected number of hops. Take, for example, a
peer group with 2 backbone nodes inside it. Any connections transiting through this
network should be engineered to use the backbone links to enter and exit the peer
group. In this case, the typical transit is one or two hops (one for each backbone node).


Regardless of how many other nodes exist in this peer group, only the backbone nodes
are used to transit the peer group.

If link costs are increased to reflect the amplified summarization, then the guidelines
summarized in sections 8.11.1.1, 8.11.1.2 and 8.11.1.3 should be followed.

8.11.1.1 Cost considerations based on MPV and EBR


When engineering upper level link costs, it is imperative to ensure that the
costs of the horizontal links in the entire hierarchy are distinguishable between
levels, but also remain relative to each other. If the costs of links increase at a
rate that makes the lowest level links insignificant, then features like MPV
have little value. The inflated cost of the hierarchically optimal route would
skew the MPV calculations such that too many paths in the local peer group
are included as acceptable. Recall that MPV calculates acceptable routes using
the formula:

Acceptable Variance = [VarianceFactor × Route(OptimalCost)] + Minimum Variance

As the cost of the optimal route grows larger, the determining factor in the
equation is the product of the variance factor and the cost of the optimal route.
If the higher level link costs are very large compared to the cost of the lowest
level links, then it is possible that even the most conservative values for
variance factor might include too many routes in the lowest level peer group.

Conversely, setting the link costs at values too low might cause features such
as EBR to use unnecessary peer group hops to re-optimize connections. If the
upper level links are roughly equal in cost to the lowest level links, then the
cost of a single hop anywhere in the network is also equal. This scenario does
not accurately reflect the true cost of an LGN hop compared to a lowest level
node hop. In this situation, EBR may incorrectly optimize a connection to a
sub optimal path traversing extra peer groups.
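
To make this interaction concrete, the following sketch (with purely assumed cost figures) shows how an inflated hierarchically optimal cost widens the acceptable variance enough to admit many purely local detours.

```python
def acceptable_variance(optimal_cost, variance_factor, minimum_variance=0):
    """Acceptable Variance = (VarianceFactor x OptimalCost) + MinimumVariance."""
    return variance_factor * optimal_cost + minimum_variance

# Assumed costs: lowest level links ~500 AW, upper level horizontal links ~20000 AW.
local_link_cost = 500
hierarchical_optimal = 3 * 20000 + 4 * local_link_cost   # 62000 AW end to end

band = acceptable_variance(hierarchical_optimal, 0.05)   # even a "conservative" 5% factor
print(band)                          # 3100.0 AW of acceptable variance
print(band / local_link_cost)        # ~6 extra local hops fit inside the band
```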

8.11.1.2 Cost considerations based on Generic CAC


Passport allows connections to be optimized on Aw, CTD or CDV. Unlike
CTD and CDV, Aw is not considered in PNNI GCAC or ACAC algorithms.
Therefore, Aw can be changed without regard to inadvertently blocking calls
from CAC failures. If CTD and CDV costs are increased, then GCAC
algorithms may block connections due to inflated artificial costs. GCAC
cannot distinguish between the artificially inflated costs and the true cost of the
links; it must assume the artificial cost is the true cost. If connections are
optimized on CTD or CDV, then the values must be engineered to be
optimistic, and reflect the best cost through the outside link and connecting
peer group.


This problem becomes amplified as the number of levels in the hierarchy
increases. For instance, a node at the highest level of the hierarchy may
represent many child/grandchild peer groups. If Aw is used, the cost of this
link can be changed to any value (following the guidelines outlined in this
section). However, if CTD or CDV is used, then the value must be engineered
to allow connections to route to the best case peer group (destination is the
next or current peer group) and also the worst case peer group (destination is
the furthest peer group), without GCAC failures. The cost of an outside link
connecting to a peer group does not have to reflect the “true” cost of
traversing it. It does, however, need to have a higher value than links at
lower levels.
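
A minimal sketch of why inflated CTD values can block calls at GCAC follows; it simply accumulates per-link CTD along a path (CTD is additive) and compares the result with the delay requested by the connection. The figures are assumptions for illustration.

```python
def path_ctd(link_ctds_us):
    """Cumulative cell transfer delay of a path, in microseconds."""
    return sum(link_ctds_us)

def gcac_ctd_ok(link_ctds_us, requested_max_ctd_us):
    """GCAC accepts the path only if the advertised cumulative CTD meets the request."""
    return path_ctd(link_ctds_us) <= requested_max_ctd_us

true_ctds     = [400, 400, 400]    # realistic per-hop CTD values
inflated_ctds = [400, 5000, 400]   # one uplink CTD artificially inflated for routing

print(gcac_ctd_ok(true_ctds, 2000))      # True  -> call would route
print(gcac_ctd_ok(inflated_ctds, 2000))  # False -> blocked by the artificial cost
```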

Modifying the Aw of a trunk does not affect any existing connections on that
trunk. However, modifying CTD or CDV causes connections on the trunk to
be released.

8.11.1.3 Cost considerations based on Peer Groups with LGNs


The value of the outside links should only be changed if the higher level peer
groups do not have any lowest level nodes in them. It is quite likely that a
higher level peer group is only partially through a top-down migration, and some
logical group nodes directly connect to lower level nodes in the same peer
group.

If the higher level peer group is partially meshed, such that a source node at
the lowest level can see both the logical group node and the lowest level nodes
in the higher peer group, then sub optimal routing is possible. Consider a
connection routing to an address advertised by the logical group node. The
upper level horizontal link connecting to the logical node has an inflated cost
used to represent the traversal of the peer group. However, in this instance, the
call is terminating in the peer group represented by this logical node. The
inflated cost of the outside link may cause the path to route around the inflated
link (which probably lies on the optimal path) and use the other lowest level
nodes in the higher peer group. In this case, a sub-optimal route has been
chosen.

The operator has two options to alleviate this problem: leave all the link costs
(inside and out) as the defaults and ensure to the best of their ability that the
peer groups are relatively equal in size (diameter), or change the link costs of the lowest
level nodes in the higher level peer group to an increased value. The latter
quickly becomes unsuitable in large networks with high connectivity,
especially considering the costs would have to be re-examined every time a
node migrated to a new peer group. Despite the inaccurate routing for EBR
and MPV, the former is the recommended approach.


8.11.2 Setting AW and Link Aggregation


Considering Figure 8-29, assume that all connections terminate at edge nodes. Using a
hierarchical configuration, links 1, 2 and 3 are the outside links connecting PG(A) with PG(B).
Consider the example of a connection from A.2 to B.2 as outlined in Table 8-6.

[Figure: two peer groups PG(A) (nodes A.1, A.2, A.3) and PG(B) (nodes B.1, B.2, B.3, B.4)
joined by outside links 1, 2 and 3 from A.1; node C.2 and the distinction between core links
and edge links are also shown]

Figure 8-29 Partial PNNI hierarchical network configuration

This is the compromise of hierarchical PNNI and hierarchical routing in general; should
the initial choice of outside link not be the “correct” one, then extra hops will be
encountered, thereby increasing delay and bandwidth utilization. Note that in this
particular example, as long as all core link Aw’s are set to the same value (which would
be a lower value compared to the edge links) then the following observations hold true:

1. Without a hierarchy, the connections would take two or three hops: two hops if
there is a core node connected to both source and destination, and three hops otherwise.

2. Using a hierarchy, there will be at most one extra hop in the core. When the
source node calculates the hierarchically complete source route, including the LGN of the
destination PG, the favoured outside links are always the core links, and the exit border
node and entry border node will always be core nodes, and so the connection will
always use one core link and therefore require three hops in total. In migrating from flat
to hierarchical PNNI, the extra bandwidth on each core link required is the bandwidth
required by all the connections that would have made optimal two hop routes (e.g. the
aggregate bandwidth of all the connections between two access nodes that are in
different PGs and that both have a connection to the same core node).

3. Since the higher level peer group includes all the core nodes, there is always
optimal routing within the core.

4. If there are N edge nodes in the destination PG that are also connected to the
source PG, and the core link between them fails, then the probability of using access
nodes as a tandem is (N−1)/N, and the chance of using the optimal route is 1/N. So, the
criticality of the core failing is related to the number of destination access nodes that are
also connected to the source PG core node. Additional redundancy of the core link in this
case is recommended if N is large (e.g. over 3-5) in order to avoid at all costs the
possibility of using access nodes as tandems.

Table 8-6 Comparison of routing depending on the AW of outside links 1, 2, 3

Outside link with   Physical Path Taken        Number of Hops   Comment
the lowest Aw
1                   A.2->A.1->B.2              2                Optimal
2                   A.2->A.1->B.4->B.1->B.2    4                Uses access node as a
                                                                 tandem; undesirable
3                   A.2->A.1->B.1->B.2         3                Routes all connections
                                                                 through neighbouring
                                                                 core nodes

Note that complex node representation only helps in routing through PGs; there is no
difference if there are no tandem PGs. This network has no PGs which would be
traversed, so complex node representation is irrelevant in this case.

If the semantics of the network Aw link values include link rate, then usually the
behaviour described above will be consistent in the network. The link rate on core links is
typically larger than on edge links, driving the Aw cost down and making the core links
more attractive.

8.11.3 MPV and Load Balancing in Hierarchical Networks


MPV and load balancing will be applied by every node that is involved in route
computation in a PNNI domain. This means that the DTL originator and every entry
border node in a PNNI domain can be provisioned with local variance and load balancing
policies. This allows Passport to customize its route computation to the local topology
and nuances of that peer group.

8.12 Route Cache


Dijkstra’s shortest path algorithm is currently executed every time a connection needs to be
routed. As the size of the topology database increases, so too does the complexity and time to
compute shortest path routes. The increased time to compute routes may be unsuitable for some
networks demanding extremely high call setup rates. The route cache is introduced to address
this problem by storing recently used routes for re-use by subsequent calls. As a result of the
cache, Dijkstra’s algorithm need not be executed as frequently, freeing the CPU to perform other
tasks, while increasing the call setup performance of the switch.


8.12.1 Cache Architecture


The route cache stores routes according to service category, optimization metric, QoS
profile and destination address. The QoS profile characterizes the bandwidth
requirements and acceptable cell delays and variations expected for connections on the
cached routes. Each profile is defined by the following attributes:

Minimum Forward cell rate


Guarantees the reserved cell rate in the forward direction for a cached route

Minimum Backward cell rate


Guarantees the reserved cell rate in the reverse direction for a cached route

Cell Transfer Delay


Characterizes the maximum cell transfer delay of the particular cached route

Cell Delay Variation


Characterizes the maximum expected cell delay variation of the particular cached route

Cell Loss Ratio


Characterizes the worst case cell loss ratio of the particular cached route

The cache can define a maximum size by limiting the number of cached routes it stores.
The default is 10000 routes.

8.12.2 QoS Profile Matching


To maximize the number of cache “hits”, a profile matching scheme is implemented to
map specific connection routing requests to previously computed cached routes. A
cached route is considered to satisfy the connection requirements if the following criteria
are met (see the sketch after this list):

1. The forward cell rate of the cache is greater than the forward cell rate required by the
connection. However, the cached forward cell rate should not be significantly
superior to the connection cell rate. Such a match would limit the utilization of other
lower cell rate cached routes. Passport uses a matching scheme that promotes matches
that are “close” in advertised rates and required rates. The variance allowed between
the two rates is based on the actual amount of CR in the cached route. The variance
increases proportionally as the CR of the cached route increases.

2. The reverse cell rate of the cache is greater than the reverse cell rate required by the
connection.

3. The cached route satisfies the connection’s acceptable CTD and CDV.


4. The cached route satisfies the connection’s acceptable CLR.
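
The sketch below captures the four matching criteria as a single predicate. The tolerance applied to the forward cell rate is shown as an assumed proportional band, since the exact Passport matching scheme is not specified here; all names are illustrative.

```python
def cached_route_matches(cache, request, fwd_tolerance=0.25):
    """Return True if a cached QoS profile can serve a new connection request.

    cache and request are dicts with keys: fwd_cr, bwd_cr, ctd, cdv, clr.
    fwd_tolerance is an assumed proportional band limiting how much larger the
    cached forward cell rate may be than the requested one.
    """
    # 1. Forward cell rate: the cached rate must cover the request, but not exceed it
    #    by too much, so that lower cell rate cached routes remain usable elsewhere.
    if cache["fwd_cr"] < request["fwd_cr"]:
        return False
    if cache["fwd_cr"] > request["fwd_cr"] * (1 + fwd_tolerance):
        return False
    # 2. Reverse cell rate must simply cover the request.
    if cache["bwd_cr"] < request["bwd_cr"]:
        return False
    # 3. Delay and delay variation of the cached route must satisfy the connection.
    if cache["ctd"] > request["ctd"] or cache["cdv"] > request["cdv"]:
        return False
    # 4. Cell loss ratio of the cached route must satisfy the connection.
    if cache["clr"] > request["clr"]:
        return False
    return True

request = {"fwd_cr": 4000, "bwd_cr": 4000, "ctd": 1000, "cdv": 250, "clr": 1e-6}
cache   = {"fwd_cr": 4500, "bwd_cr": 6000, "ctd": 800,  "cdv": 200, "clr": 1e-7}
print(cached_route_matches(cache, request))   # True -> cache "hit"
```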

8.12.3 Load Balancing and Cached Routes


If multi-path variance is employed in combination with route caching, the result is that
the acceptable routes defined by MPV are also cached. Subsequent calls satisfied by the
QoS profile can utilize any of Passport’s load balancing mechanisms on the cached
acceptable routes.

8.12.4 Cache Maintenance Procedures


Routes in the cache must be reactive to network changes that affect where connections
can be routed. This section describes the mechanisms in place to ensure that the cache
only holds valid paths for connections to use.

PTSE Updates
Network changes are reflected in the topology database by way of PTSE updates. Any
change that updates the database is also applied to the route cache. Any cached route is
purged from the cache if the characteristics of a link no longer satisfy its QoS profile.
If multi path variance is enabled and some acceptable paths are cached for a particular
destination, then a PTSE update that disqualifies an acceptable path only removes that
individual path. The remaining acceptable routes can be used for load balancing.

Route Cache Aging


Every cached route is subject to an aging process, controlled by the configurable attribute
“aging period”. By default, the aging period is set to 30 minutes. The cached route is
purged regardless of how many “hits” were registered. This mechanism allows for newer
or more optimal routes to be cached and route subsequent connections.

Crankback Updates
If MPV is employed, then any crankback messages are used to prune links and routes
from the set of acceptable paths. Crankback messages will identify the link or node that
has failed or is determined to be unacceptable. Any of the remaining acceptable paths
that does not include the offending link or node can be used to reroute the connection,
avoiding an on-demand calculation.

Complete Purge
Passport has the ability to purge all the information in the route cache by virtue of an
operator command.


8.12.5 Performance Trade-off


As is the case with other performance enhancements, the trade-off is an
increased memory requirement. In a worst case scenario, which assumes 10000
individual cached routes, each with 20 hops (the maximum allowable as specified by
PNNI 1.0) and three acceptable load balancing paths per route, the memory
consumption is estimated to be 11.6 MB. Recommendations on physical resources should
be made on a per-network basis, in consultation with your data network engineering representative.
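
The worst-case memory figure can be reproduced with simple arithmetic; the per-hop storage size below is inferred from the stated total and is an assumption, not a documented value.

```python
routes          = 10000   # maximum cached routes
paths_per_route = 3       # acceptable load balancing paths per route
hops_per_path   = 20      # PNNI 1.0 maximum DTL length
bytes_per_hop   = 20      # assumed per-hop entry size, chosen to land near 11.6 MB

total_bytes = routes * paths_per_route * hops_per_path * bytes_per_hop
print(total_bytes / 2**20)   # ~11.4 MB, in line with the 11.6 MB estimate above
```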

8.13 Multi-homed Routing


A multi-homed address is defined as an address that is being advertised by at least two PNNI
nodes. The most common situation occurs when, for redundancy, the same end device is
connected to at least two Passport nodes.

Previous support for multi-homed addresses included a round robin approach, where connections
were routed serially to each multi-homed address in turn. Routing was done without
regard to any other routing constraints.

This feature eliminates the Round Robin scheme of the pre PCR1.3 Passport release, and ensures
that the PNNI routing scheme calculates the optimal route to the destination address. If a PNNI
address is advertised by two or more nodes in the PNNI domain, then with the old software, each
node would be routed to in a round robin fashion for each subsequent connection specifying that
address. The problem with this method is that no regard was given to any metrics when the call
was routed: one node advertising the address may be only one hop away, yet the connection could
be routed to a node across the network.

The enhanced multi-homed routing feature now allows metrics to be considered, providing the
best cost path from source to destination node. Network resources are better utilized, as
connections will reserve less network bandwidth and possibly experience lower cell transfer delays.

8.13.1 Multi-path Variance and Multi-homed Routing


The primary advantage of the Round Robin scheme is in ensuring that multiple
connections to the destination address are uniformly distributed between the different
non-PNNI links to the destination address. The same behaviour is achieved by the
enhanced multi-homed routing if MPV is engineered to generate acceptable paths to
different nodes that advertise the destination address. This implies that the MPV variance
must be set wide enough to encompass paths to all desired nodes advertising the same
destination address.

Once completed, any load balancing scheme can be used to distribute connections on the
different paths to any of the destination nodes. If MPV is not set wide enough to include
other nodes advertising the destination address, then load balancing is accomplished only
between the source node and the single destination node. If MPV is disabled, then only
the best cost path to a node advertising the destination address is routed to.

8.13.2 Crankback
Passport does not crankback/re-route on unequal addresses. Therefore, in crankback
scenarios Passport will not complete alternate routing to nodes advertising any other
prefix than the original call destination address. It is not possible to configure more
general addresses on a “backup” node for Passport to use in case of failure on the original
route.

With the older PNNI software, connections are always routed in a round robin fashion. In
crankback scenarios, the second address is always used if the first one becomes
unreachable.

With the multi-homed routing feature, it is possible to (optionally) ignore the second
route completely until the first route becomes unavailable. If the primary route becomes
unreachable, then, regardless of how the MPV was engineered, the secondary route will
now be used.
