LGW 2e Chapter 7 Presentation

Chapter 7

Packet-Switching
Networks
Network Services and Internal Network
Operation
Packet Network Topology
Datagrams and Virtual Circuits
Routing in Packet Networks
Shortest Path Routing
ATM Networks
Traffic Management
Chapter 7
Packet-Switching
Networks
Network Services and Internal
Network Operation
Network Layer
z Network Layer: the most complex layer
z Requires the coordinated actions of multiple,
geographically distributed network elements
(switches & routers)
z Must be able to deal with very large scales
z Billions of users (people & communicating devices)
z Biggest Challenges
z Addressing: where should information be directed to?
z Routing: what path should be used to get information
there?
Packet Switching

[Figure: packets enter the network at t0 and emerge at t1]

• Transfer of information as payload in data packets
• Packets undergo random delays & possible loss
• Different applications impose differing requirements on the
  transfer of information
Network Service

[Figure: end system α and end system β exchange messages; the transport
layers exchange segments using the network service, which is provided by
the network layer operating over data link and physical layers at the end
systems and at intermediate nodes]

• Network layer can offer a variety of services to transport layer
  • Connection-oriented service or connectionless service
  • Best-effort or delay/loss guarantees
Network Service vs. Operation

Network Service
• Connectionless
  • Datagram transfer
• Connection-Oriented
  • Reliable and possibly constant bit rate transfer

Internal Network Operation
• Connectionless
  • IP
• Connection-Oriented
  • Telephone connection
  • ATM

Various combinations are possible
• Connection-oriented service over connectionless operation
• Connectionless service over connection-oriented operation
• Context & requirements determine what makes sense

Complexity at the Edge or in the Core?

[Figure: end systems α and β communicate across a network; the end
systems run entities at layers 1–4, while network nodes A, B, C run only
physical (1), data link (2), and network (3) layer entities]

Legend: 1 = physical layer entity, 2 = data link layer entity,
3 = network layer entity, 4 = transport layer entity
The End-to-End Argument for
System Design
z An end-to-end function is best implemented at a
higher level than at a lower level
z End-to-end service requires all intermediate components to
work properly
z Higher-level better positioned to ensure correct operation
z Example: stream transfer service
z Establishing an explicit connection for each stream across
network requires all network elements (NEs) to be aware of
connection; All NEs have to be involved in re-
establishment of connections in case of network fault
z In connectionless network operation, NEs do not deal with
each explicit connection and hence are much simpler in
design
Network Layer Functions
Essential
z Routing: mechanisms for determining the
set of best paths for routing packets; requires
the collaboration of network elements
z Forwarding: transfer of packets from NE
inputs to outputs
z Priority & Scheduling: determining order of
packet transmission in each NE
Optional: congestion control, segmentation &
reassembly, security
Chapter 7
Packet-Switching
Networks
Packet Network Topology
End-to-End Packet Network
z Packet networks very different from telephone
networks
z Individual packet streams are highly bursty
z Statistical multiplexing is used to concentrate streams
z User demand can undergo dramatic change
z Peer-to-peer applications stimulated huge growth in traffic
volumes
z Internet structure highly decentralized
z Paths traversed by packets can go through many networks
controlled by different organizations
z No single entity responsible for end-to-end service
Access Multiplexing

[Figure: traffic from individual users enters an access multiplexer,
which feeds an aggregated stream into the packet network]

• Packet traffic from users multiplexed at access to network into
  aggregated streams
• DSL traffic multiplexed at DSL Access Mux
• Cable modem traffic multiplexed at Cable Modem Termination System
Oversubscription

[Figure: N subscriber lines feed an access multiplexer whose link into
the packet network has capacity C = nc bps]

• Access multiplexer
  • N subscribers connected @ c bps to mux
  • Each subscriber active r/c of time
  • Mux has C = nc bps to network
  • Oversubscription rate: N/n
  • Find n so that there is at most 1% overflow probability
• Feasible oversubscription rate increases with size

N     r/c    n    N/n
10    0.01   1    10     10 extremely lightly loaded users
10    0.05   3    3.3    10 very lightly loaded users
10    0.1    4    2.5    10 lightly loaded users
20    0.1    6    3.3    20 lightly loaded users
40    0.1    9    4.4    40 lightly loaded users
100   0.1    18   5.5    100 lightly loaded users

(The calculation of n is sketched in code below.)
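To make the oversubscription numbers concrete, here is a minimal sketch
(my own illustration, not from the slides) that treats each subscriber as
independently active with probability r/c and finds the smallest n for
which the probability that more than n subscribers are active at once is
at most 1%. It reproduces the smaller rows of the table; the N = 100 row
sits almost exactly at the 1% boundary, so its value depends on the exact
criterion used.

```python
from math import comb

def min_trunks(N: int, p: float, overflow: float = 0.01) -> int:
    """Smallest n with P(more than n of N subscribers active) <= overflow,
    assuming each subscriber is independently active with probability p."""
    for n in range(N + 1):
        tail = sum(comb(N, k) * p**k * (1 - p)**(N - k)
                   for k in range(n + 1, N + 1))
        if tail <= overflow:
            return n
    return N

for N, p in [(10, 0.01), (10, 0.05), (10, 0.1), (20, 0.1), (40, 0.1)]:
    n = min_trunks(N, p)
    print(f"N={N:3d}  r/c={p:4.2f}  n={n:2d}  oversubscription N/n={N/n:.1f}")
```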
Home LANs

[Figure: home devices connect over WiFi or Ethernet to a home router,
which connects to the packet network]

• Home Router
  • LAN access using Ethernet or WiFi (IEEE 802.11)
  • Private IP addresses in home (192.168.0.x) using Network Address
    Translation (NAT)
  • Single global IP address from ISP issued using Dynamic Host
    Configuration Protocol (DHCP)
LAN Concentration

[Figure: many LAN ports feed into a switch/router]

• LAN hubs and switches in the access network also aggregate packet
  streams that flow into switches and routers
Campus Network

[Figure: campus network with a gateway router to the Internet or wide
area network; organization servers attach to a backbone of routers (R)
joined by a high-speed campus backbone net; departmental servers and
hosts sit behind switches (s) and departmental routers]

• Servers have redundant connectivity to backbone
• Only outgoing packets leave the LAN through the router
• High-speed campus backbone net connects departmental routers
Connecting to Internet Service Provider

[Figure: the campus network connects through its border routers to the
border routers of an Internet service provider]

• Interdomain level: between autonomous systems or domains
• Intradomain level: within the network administered by a single
  organization
Internet Backbone

[Figure (a): national service providers A, B, and C interconnect at
Network Access Points (NAPs) and through private peering.
Figure (b): inside a NAP, provider routers RA, RB, RC attach to a shared
LAN together with a route server]

• Network Access Points: set up during the original commercialization of
  the Internet to facilitate exchange of traffic
• Private peering points: two-party inter-ISP agreements to exchange
  traffic
Key Role of Routing
How to get packet from here to there?
z Decentralized nature of Internet makes
routing a major challenge
z Interior gateway protocols (IGPs) are used to
determine routes within a domain
z Exterior gateway protocols (EGPs) are used to
determine routes across domains
z Routes must be consistent & produce stable flows
z Scalability required to accommodate growth
z Hierarchical structure of IP addresses essential to
keeping size of routing tables manageable
Chapter 7
Packet-Switching
Networks
Datagrams and Virtual Circuits
The Switching Function
z Dynamic interconnection of inputs to outputs
z Enables dynamic sharing of transmission resource
z Two fundamental approaches:
z Connectionless
z Connection-Oriented: Call setup control, Connection
control
[Figure: switches in a backbone network interconnect access networks]
Packet Switching Network

[Figure: users attach by transmission lines to packet switches inside the
network]

• Transfers packets between users
• Transmission lines + packet switches (routers)
• Origin in message switching
• Two modes of operation:
  • Connectionless
  • Virtual circuit
Message Switching

[Figure: entire messages travel from source to destination, stored and
forwarded at each switch]

• Message switching invented for telegraphy
• Entire messages multiplexed onto shared lines, stored & forwarded
• Headers carry source & destination addresses
• Routing at message switches
• Connectionless
Message Switching Delay

[Figure: timing diagram of a message of transmission time T crossing
source, switch 1, switch 2, and destination; each hop adds propagation
delay τ and a full store-and-forward time T]

• Minimum delay = 3τ + 3T
• Additional queueing delays possible at each link
Long Messages vs. Packets

A 1 Mbit message is sent from source to destination over two hops, each
with bit error rate p = 10^-6. How many bits need to be transmitted to
deliver the message?

• Approach 1: send one 1 Mbit message
  • Probability the message arrives correctly:
    Pc = (1 - 10^-6)^(10^6) ≈ e^-1 ≈ 1/3
  • On average it takes about 3 transmissions/hop
  • Total # bits transmitted ≈ 6 Mbits
• Approach 2: send ten 100-kbit packets
  • Probability a packet arrives correctly:
    Pc' = (1 - 10^-6)^(10^5) ≈ e^-0.1 ≈ 0.9
  • On average it takes about 1.1 transmissions/hop
  • Total # bits transmitted ≈ 2.2 Mbits

(The arithmetic is sketched in code below.)
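As a quick check of the figures above, here is a minimal sketch (my own
illustration) of the expected number of transmitted bits under each
approach, assuming each block is simply retransmitted on a hop until it
gets through error-free (expected tries per hop = 1/Pc):

```python
p = 1e-6      # bit error rate per hop
hops = 2      # source -> switch -> destination

def expected_bits(block_bits: int, total_bits: int) -> float:
    """Expected bits sent when each block is retransmitted per hop until it
    arrives error-free (success probability (1 - p) ** block_bits)."""
    p_ok = (1 - p) ** block_bits
    return total_bits * (1 / p_ok) * hops

print(expected_bits(1_000_000, 1_000_000))  # ~5.4e6: ~2.7 tries/hop; the slide rounds to 3, giving ~6 Mbits
print(expected_bits(100_000, 1_000_000))    # ~2.2e6: ~1.1 tries/hop, ~2.2 Mbits
```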
Packet Switching – Datagram

[Figure: packets of the same message may take different paths through the
network and arrive out of order]

• Messages broken into smaller units (packets)
• Source & destination addresses in packet header
• Connectionless; packets routed independently (datagram)
• Packets may arrive out of order
• Pipelining of packets across the network can reduce delay, increase
  throughput
• Lower delay than message switching, suitable for interactive traffic
Packet Switching Delay

Assume three packets corresponding to one message traverse the same path.

[Figure: timing diagram of the three packets pipelined across three hops]

• Minimum delay = 3τ + 5(T/3) (single path assumed)
• Additional queueing delays possible at each link
• Packet pipelining enables the message to arrive sooner
Delay for k-Packet Message over L Hops

[Figure: timing diagram of three packets crossing source, switch 1,
switch 2, and destination]

3 hops                             L hops
3τ + 2(T/3)  first bit received    Lτ + (L-1)P        first bit received
3τ + 3(T/3)  first bit released    Lτ + LP            first bit released
3τ + 5(T/3)  last bit released     Lτ + LP + (k-1)P   last bit released

where T = kP
Routing Tables in Datagram Networks

Destination address   Output port
0785                  7
1345                  12
1566                  6
2458                  12

• Route determined by table lookup
• Routing decision involves finding the next hop in the route to a given
  destination
• Routing table has an entry for each destination specifying the output
  port that leads to the next hop
• Size of table becomes impractical for very large numbers of
  destinations
Example: Internet Routing
z Internet protocol uses datagram packet
switching across networks
z Networks are treated as data links
z Hosts have a two-part IP address:
z Network address + Host address
z Routers do table lookup on network address
z This reduces size of routing table
z In addition, network addresses are assigned
so that they can also be aggregated
z Discussed as CIDR in Chapter 8
Packet Switching – Virtual Circuit

[Figure: all packets of a connection follow the same virtual circuit
through the network]

• Call set-up phase sets up pointers in a fixed path along the network
• All packets for a connection follow the same path
• Abbreviated header identifies the connection on each link
• Packets queue for transmission
• Variable bit rates possible, negotiated during call set-up
• Delays variable; cannot be less than circuit switching
Connection Setup

[Figure: a connect request propagates from switch 1 through switch 2 to
switch n; connect confirms return along the same path]

• Signaling messages propagate as the route is selected
• Signaling messages identify the connection and set up tables in the
  switches
• Typically a connection is identified by a local tag, the Virtual
  Circuit Identifier (VCI)
• Each switch only needs to know how to relate an incoming tag on an
  input to an outgoing tag on the corresponding output
• Once tables are set up, packets can flow along the path
Connection Setup Delay

[Figure: timing diagram — the connect request (CR) crosses each hop, the
connect confirm (CC) returns, and only then are packets 1, 2, 3
transferred; a release message later tears the connection down]

• Connection setup delay is incurred before any packet can be transferred
• Delay is acceptable for sustained transfer of a large number of packets
• This delay may be unacceptably high if only a few packets are being
  transferred
Virtual Circuit Forwarding Tables

Input VCI   Output port   Output VCI
12          13            44
15          15            23
27          13            16
58          7             34

• Each input port of a packet switch has a forwarding table
• Look up the entry for the VCI of the incoming packet
• Determine output port (next hop) and insert VCI for the next link
• Very high speeds are possible
• Table can also include priority or other information about how the
  packet should be treated
Cut-Through Switching

[Figure: timing diagram — each switch begins forwarding a packet as soon
as its header has been received and processed]

• Minimum delay = 3τ + T
• Some networks perform error checking on the header only, so a packet
  can be forwarded as soon as the header is received & processed
• Delays are reduced further with cut-through switching
Message vs. Packet Minimum Delay

• Message:
  Lτ + LT = Lτ + (L – 1)T + T
• Packet:
  Lτ + LP + (k – 1)P = Lτ + (L – 1)P + T
• Cut-through packet (immediate forwarding after header):
  Lτ + T

The above neglect header processing delays.
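A minimal sketch (my own illustration, with hypothetical numbers)
comparing the three minimum-delay formulas above:

```python
def message_delay(L, tau, T):
    return L * tau + L * T                 # store-and-forward of whole message

def packet_delay(L, tau, T, k):
    P = T / k                              # k packets of transmission time P
    return L * tau + L * P + (k - 1) * P   # pipelined packets

def cut_through_delay(L, tau, T):
    return L * tau + T                     # forward as soon as header is checked

L, tau, T, k = 3, 1e-3, 3e-3, 3            # hypothetical values
print(message_delay(L, tau, T))            # 3τ + 3T   = 0.012 s
print(packet_delay(L, tau, T, k))          # 3τ + 5T/3 = 0.008 s
print(cut_through_delay(L, tau, T))        # 3τ + T    = 0.006 s
```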


Example: ATM Networks
z All information mapped into short fixed-length
packets called cells
z Connections set up across network
z Virtual circuits established across networks
z Tables setup at ATM switches
z Several types of network services offered
z Constant bit rate connections
z Variable bit rate connections
Chapter 7
Packet-Switching
Networks
Datagrams and Virtual Circuits
Structure of a Packet Switch
Packet Switch: Intersection where Traffic Flows Meet

[Figure: N input lines and N output lines meet at a packet switch]

• Inputs contain multiplexed flows from access muxes & other packet
  switches
• Flows demultiplexed at input, routed and/or forwarded to output ports
• Packets buffered, prioritized, and multiplexed on output lines
Generic Packet Switch
“Unfolded” View of Switch

[Figure: N ingress line cards and N egress line cards joined by an
interconnection fabric; a controller exchanges control information with
the line cards (data path vs. control path)]

• Ingress line cards
  • Header processing
  • Demultiplexing
  • Routing in large switches
• Controller
  • Routing in small switches
  • Signalling & resource allocation
• Interconnection fabric
  • Transfers packets between line cards
• Egress line cards
  • Scheduling & priority
  • Multiplexing
Line Cards

[Figure: each line card contains transceivers and framers toward the
physical ports, a network processor, and backplane transceivers toward
the interconnection fabric and the other line cards]

Folded View
• One circuit board serves as both ingress and egress line card
• Physical layer processing
• Data link layer processing
• Network header processing
• Physical layer across fabric + framing
Shared Memory Packet Switch

[Figure: N inputs pass through ingress processing into a shared memory;
queue control, connection control, and output buffering deliver packets
to the N outputs]

• Small switches can be built by reading/writing into shared memory
Crossbar Switches

[Figure: (a) crossbar switch with buffering at the inputs; (b) crossbar
switch with buffering at the outputs]

• Large switches built from crossbar & multistage space switches
• Requires centralized controller/scheduler (who sends to whom when)
• Can buffer at input, output, or both (performance vs. complexity)
Self-Routing Switches

[Figure: 8×8 three-stage self-routing (banyan) fabric; inputs 0–7 on the
left, outputs 0–7 on the right]

• Self-routing switches do not require a controller
• Output port number determines the route
• 101 → (1) lower port, (2) upper port, (3) lower port
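A minimal sketch (my own illustration) of the self-routing rule: each 2×2
stage examines one bit of the destination port number, most significant
bit first, and takes the upper output on 0 or the lower output on 1.

```python
def self_route(output_port: int, stages: int = 3):
    """Per-stage decisions for an 8x8 (3-stage) self-routing fabric."""
    decisions = []
    for s in range(stages):
        bit = (output_port >> (stages - 1 - s)) & 1   # MSB first
        decisions.append("lower" if bit else "upper")
    return decisions

print(self_route(0b101))   # ['lower', 'upper', 'lower'], as in the slide
```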
Chapter 7
Packet-Switching
Networks
Routing in Packet Networks
Routing in Packet Networks

[Figure: six-node network; each node is a switch or router]

• Three possible (loop-free) routes from 1 to 6:
  • 1-3-6, 1-4-5-6, 1-2-5-6
• Which is “best”?
  • Min delay? Min hop? Max bandwidth? Min cost? Max reliability?
Creating the Routing Tables
z Need information on state of links
z Link up/down; congested; delay or other metrics
z Need to distribute link state information using a
routing protocol
z What information is exchanged? How often?
z Exchange with neighbors; Broadcast or flood
z Need to compute routes based on information
z Single metric; multiple metrics
z Single route; alternate routes
Routing Algorithm Requirements
z Responsiveness to changes
z Topology or bandwidth changes, congestion
z Rapid convergence of routers to consistent set of routes
z Freedom from persistent loops
z Optimality
z Resource utilization, path length
z Robustness
z Continues working under high load, congestion, faults,
equipment failures, incorrect implementations
z Simplicity
z Efficient software implementation, reasonable processing
load
Centralized vs Distributed Routing
z Centralized Routing
z All routes determined by a central node
z All state information sent to central node
z Problems adapting to frequent topology changes
z Does not scale
z Distributed Routing
z Routes determined by routers using distributed
algorithm
z State information exchanged by routers
z Adapts to topology and other changes
z Better scalability
Static vs Dynamic Routing
z Static Routing
z Set up manually, do not change; requires administration
z Works when traffic predictable & network is simple
z Used to override some routes set by dynamic algorithm
z Used to provide default router
z Dynamic Routing
z Adapt to changes in network conditions
z Automated
z Calculates routes based on received updated network
state information
Routing in Virtual-Circuit Packet Networks

[Figure: hosts A, B, C, D attach to a network of switches/routers 1–6;
the numbers on the links are the VCIs assigned to the connections]

• Route determined during connection setup
• Tables in switches implement forwarding that realizes the selected
  route
Routing Tables in VC Packet Networks

Node 1                          Node 2
Incoming    Outgoing            Incoming    Outgoing
Node VCI    Node VCI            Node VCI    Node VCI
A    1      3    2              C    6      4    3
A    5      3    3              4    3      C    6
3    2      A    1
3    3      A    5

Node 3                          Node 4
Incoming    Outgoing            Incoming    Outgoing
Node VCI    Node VCI            Node VCI    Node VCI
1    2      6    7              2    3      3    2
1    3      4    4              3    4      5    5
4    2      6    1              3    2      2    3
6    7      1    2              5    5      3    4
6    1      4    2
4    4      1    3

Node 5                          Node 6
Incoming    Outgoing            Incoming    Outgoing
Node VCI    Node VCI            Node VCI    Node VCI
4    5      D    2              3    7      B    8
D    2      4    5              3    1      B    5
                                B    5      3    1
                                B    8      3    7

• Example: VCI from A to D
  • From A & VCI 5 → node 3 & VCI 3 → node 4 & VCI 4 → node 5 & VCI 5
    → D & VCI 2
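A minimal sketch (my own illustration) of how these per-node tables
forward a packet: each switch maps (incoming node, incoming VCI) to
(outgoing node, outgoing VCI). Only the entries needed for the A-to-D
example are included.

```python
# Subset of the per-node VC tables above (as reconstructed from the slide).
tables = {
    1: {("A", 5): (3, 3), ("A", 1): (3, 2)},
    3: {(1, 3): (4, 4), (1, 2): (6, 7)},
    4: {(3, 4): (5, 5), (2, 3): (3, 2)},
    5: {(4, 5): ("D", 2)},
}

def trace(node, prev, vci):
    """Follow a packet entering `node` from `prev` with the given VCI."""
    path = [(prev, vci)]
    while isinstance(node, int):              # stop once a host is reached
        nxt, vci = tables[node][(prev, vci)]
        path.append((nxt, vci))
        prev, node = node, nxt
    return path

print(trace(1, "A", 5))   # [('A', 5), (3, 3), (4, 4), (5, 5), ('D', 2)]
```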
Routing Tables in Datagram Packet Networks

Node 1                  Node 2                  Node 3
Dest.  Next node        Dest.  Next node        Dest.  Next node
2      2                1      1                1      1
3      3                3      1                2      4
4      4                4      4                4      4
5      2                5      5                5      6
6      3                6      5                6      6

Node 4                  Node 5                  Node 6
Dest.  Next node        Dest.  Next node        Dest.  Next node
1      1                1      4                1      3
2      2                2      2                2      5
3      3                3      4                3      3
5      5                4      4                4      3
6      3                6      6                5      5
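A minimal sketch (my own illustration) of datagram forwarding with these
tables: each node independently looks up the destination and hands the
packet to the listed next node.

```python
# Per-node next-hop tables, as reconstructed from the slide above.
next_hop = {
    1: {2: 2, 3: 3, 4: 4, 5: 2, 6: 3},
    2: {1: 1, 3: 1, 4: 4, 5: 5, 6: 5},
    3: {1: 1, 2: 4, 4: 4, 5: 6, 6: 6},
    4: {1: 1, 2: 2, 3: 3, 5: 5, 6: 3},
    5: {1: 4, 2: 2, 3: 4, 4: 4, 6: 6},
    6: {1: 3, 2: 5, 3: 3, 4: 3, 5: 5},
}

def route(src: int, dst: int):
    """Hop-by-hop path produced by the per-node table lookups."""
    path = [src]
    while path[-1] != dst:
        path.append(next_hop[path[-1]][dst])
    return path

print(route(1, 6))   # [1, 3, 6]
print(route(2, 6))   # [2, 5, 6]
```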
Non-Hierarchical Addresses and Routing

[Figure: four groups of four hosts (e.g. 0000, 0111, 1010, 1101 in one
group) are scattered across the network; routers R1 and R2 must list
every individual address, e.g. R1: 0000 → port 1, 0111 → port 1,
1010 → port 1, … and R2: 0001 → port 4, 0100 → port 4, 1011 → port 4, …]

• No relationship between addresses & routing proximity
• Routing tables require 16 entries each
Hierarchical Addresses and Routing

[Figure: hosts 0000–0011 form one network, 0100–0111 a second, 1000–1011
a third, and 1100–1111 a fourth; routers R1 and R2 need only one entry
per 2-bit prefix]

R1 table:  00 → 1,  01 → 3,  10 → 2,  11 → 3
R2 table:  00 → 3,  01 → 4,  10 → 3,  11 → 5

• Prefix indicates the network where the host is attached
• Routing tables require 4 entries each
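A minimal sketch (my own illustration) of prefix-based forwarding with
R1's table above: the router matches the longest prefix of the
destination address instead of storing the full address of every host.

```python
r1_table = {"00": 1, "01": 3, "10": 2, "11": 3}   # prefix -> output port

def forward(addr: str, table: dict) -> int:
    """Longest-prefix match (all prefixes happen to be 2 bits here)."""
    for plen in range(len(addr), 0, -1):
        if addr[:plen] in table:
            return table[addr[:plen]]
    raise KeyError("no matching prefix")

print(forward("0111", r1_table))   # host 0111 is in network 01 -> port 3
print(forward("1010", r1_table))   # host 1010 is in network 10 -> port 2
```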
Flat vs Hierarchical Routing
z Flat Routing
z All routers are peers
z Does not scale
z Hierarchical Routing
z Partitioning: Domains, autonomous systems, areas...
z Some routers part of routing backbone
z Some routers only communicate within an area
z Efficient because it matches typical traffic flow patterns
z Scales
Specialized Routing
z Flooding
z Useful in starting up network
z Useful in propagating information to all nodes

z Deflection Routing
z Fixed, preset routing procedure
z No route synthesis
Flooding
Send a packet to all nodes in a network
z No routing tables available

z Need to broadcast packet to all nodes (e.g. to


propagate link state information)

Approach
z Send packet on all ports except one where it
arrived
z Exponential growth in packet transmissions
[Figure: flooding initiated from node 1 in the six-node network — hop 1,
hop 2, and hop 3 transmissions shown on successive diagrams]

Limited Flooding
z Time-to-Live field in each packet limits
number of hops to certain diameter
z Each switch adds its ID before flooding;
discards repeats
z Source puts sequence number in each
packet; switches record source address and
sequence number and discard repeats
Deflection Routing
z Network nodes forward packets to preferred port
z If preferred port busy, deflect packet to another port
z Works well with regular topologies
z Manhattan street network
z Rectangular array of nodes
z Nodes designated (i,j)
z Rows alternate as one-way streets
z Columns alternate as one-way avenues
z Bufferless operation is possible
z Proposed for optical packet networks
z All-optical buffering currently not viable
[Figure: 4×4 Manhattan street network with nodes (0,0) through (3,3);
rows alternate as one-way streets and columns as one-way avenues, with
wrap-around ("tunnel") links from the last column/row back to the first]

• Example: node (0,2) sends to node (1,0); when the preferred output port
  is busy, the packet is deflected onto the other port and takes a longer
  route through the grid

Chapter 7
Packet-Switching
Networks
Shortest Path Routing
Shortest Paths & Routing
z Many possible paths connect any given
source to any given destination
z Routing involves the selection of the path to
be used to accomplish a given transfer
z Typically it is possible to attach a cost or
distance to a link connecting two nodes
z Routing can then be posed as a shortest path
problem
Routing Metrics
Means for measuring desirability of a path
z Path Length = sum of costs or distances

z Possible metrics
z Hop count: rough measure of resources used
z Reliability: link availability; BER
z Delay: sum of delays along path; complex & dynamic
z Bandwidth: “available capacity” in a path
z Load: Link & router utilization along path
z Cost: $$$
Shortest Path Approaches

Distance Vector Protocols


z Neighbors exchange list of distances to destinations

z Best next-hop determined for each destination

z Ford-Fulkerson (distributed) shortest path algorithm

Link State Protocols


z Link state information flooded to all routers

z Routers have complete topology information

z Shortest path (& hence next hop) calculated

z Dijkstra (centralized) shortest path algorithm


Distance Vector
Do you know the way to San Jose?

[Figure: signposts along different roads give the direction and distance
to San Jose, e.g. 392 miles one way and 596 miles another]
Distance Vector

Local signpost
• Direction
• Distance

Routing table
For each destination list:
• Next node
• Distance

Table synthesis
• Neighbors exchange table entries
• Determine current best next hop
• Inform neighbors
  • Periodically
  • After changes
Shortest Path to SJ

Focus on how nodes find their shortest path to a given destination node,
i.e. SJ (San Jose).

[Figure: node i with distance Di to SJ, neighbor j with distance Dj, and
link cost Cij between them]

If Di is the shortest distance to SJ from i, and if j is a neighbor on
the shortest path, then Di = Cij + Dj.

But We Don’t Know the Shortest Paths

• Node i only has local info from its neighbors

[Figure: node i has several neighbors j, j', j" with reported distances
Dj, Dj', Dj" over links of cost Cij, Cij', Cij"]

• Pick the current shortest path: Di = min over neighbors j of (Cij + Dj)
Why Distance Vector Works

[Figure: San Jose at the center, surrounded by rings of nodes 1 hop,
2 hops, and 3 hops away from SJ]

• SJ sends accurate info
• Hop-1 nodes calculate their current (next hop, dist) & send it to
  their neighbors
• Accurate info about SJ ripples across the network; the shortest paths
  converge
Bellman-Ford Algorithm
z Consider computations for one destination d
z Initialization
z Each node table has 1 row for destination d
z Distance of node d to itself is zero: Dd=0
z Distance of other node j to d is infinite: Dj=∝, for j≠ d
z Next hop node nj = -1 to indicate not yet defined for j ≠ d
z Send Step
z Send new distance vector to immediate neighbors across local link
z Receive Step
z At node i, find the neighbor j that gives the minimum distance to d:
z Di = minj { Cij + Dj }
z Replace old (nj, Dj(d)) by new (nj*, Dj*(d)) if new next node or distance
z Go to send step
Bellman-Ford Algorithm
z Now consider parallel computations for all destinations d
z Initialization
z Each node has 1 row for each destination d
z Distance of node d to itself is zero: Dd(d)=0
z Distance of other node j to d is infinite: Dj(d)= ∝ , for j ≠ d
z Next node nj = -1 since not yet defined
z Send Step
z Send new distance vector to immediate neighbors across local link
z Receive Step
z For each destination d, find the neighbor j that gives the minimum
distance to d:
z Di(d) = minj { Cij + Dj(d) }   (a code sketch follows below)
z Replace old (nj, Di(d)) by new (nj*, Dj*(d)) if new next node or distance
found
z Go to send step
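Below is a minimal sketch (my own illustration, not the book's code) of
these Bellman-Ford iterations for a single destination. The link costs
are reconstructed from the example figures on the surrounding slides
(destination node 6, "San Jose"), so treat the cost matrix as an
assumption.

```python
import math

def bellman_ford(cost: dict, dest):
    """Distance-vector iterations: D_i = min over neighbours j of (C_ij + D_j)."""
    D = {i: (0 if i == dest else math.inf) for i in cost}
    nxt = {i: (dest if i == dest else None) for i in cost}
    for _ in range(len(cost) - 1):            # at most |V|-1 rounds to converge
        for i in cost:
            if i == dest:
                continue
            d_best, j_best = min((cost[i][j] + D[j], j) for j in cost[i])
            if d_best < D[i]:
                D[i], nxt[i] = d_best, j_best
    return D, nxt

# Six-node example network; costs reconstructed from the slides (assumption).
cost = {
    1: {2: 3, 3: 2, 4: 5},
    2: {1: 3, 4: 1, 5: 4},
    3: {1: 2, 4: 2, 6: 1},
    4: {1: 5, 2: 1, 3: 2, 5: 3},
    5: {2: 4, 4: 3, 6: 2},
    6: {3: 1, 5: 2},
}
D, nxt = bellman_ford(cost, dest=6)
print(D)    # distances to node 6: {1: 3, 2: 4, 3: 1, 4: 3, 5: 2, 6: 0}
print(nxt)  # next hops:           {1: 3, 2: 4, 3: 6, 4: 3, 5: 6, 6: 6}
```

These (next hop, distance) pairs match the converged table in the example
slides that follow.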
Iteration   Node 1    Node 2    Node 3    Node 4    Node 5
Initial     (-1, ∞)   (-1, ∞)   (-1, ∞)   (-1, ∞)   (-1, ∞)
1
2
3

[Figure: six-node example network with destination San Jose (node 6);
each table entry is (next node, distance) to SJ, e.g. the entries at
node 1 and node 3]
Iteration   Node 1    Node 2    Node 3    Node 4    Node 5
Initial     (-1, ∞)   (-1, ∞)   (-1, ∞)   (-1, ∞)   (-1, ∞)
1           (-1, ∞)   (-1, ∞)   (6, 1)    (-1, ∞)   (6, 2)
2
3

[Figure: D3 = D6 + 1 = 1 with next node n3 = 6, and D5 = D6 + 2 = 2 with
next node n5 = 6, since D6 = 0]
Iteration   Node 1    Node 2    Node 3    Node 4    Node 5
Initial     (-1, ∞)   (-1, ∞)   (-1, ∞)   (-1, ∞)   (-1, ∞)
1           (-1, ∞)   (-1, ∞)   (6, 1)    (-1, ∞)   (6, 2)
2           (3, 3)    (5, 6)    (6, 1)    (3, 3)    (6, 2)
3

[Figure: nodes 1 and 4 now reach SJ through node 3 at distance 3; node 2
reaches SJ through node 5 at distance 6]
Iteration   Node 1    Node 2    Node 3    Node 4    Node 5
Initial     (-1, ∞)   (-1, ∞)   (-1, ∞)   (-1, ∞)   (-1, ∞)
1           (-1, ∞)   (-1, ∞)   (6, 1)    (-1, ∞)   (6, 2)
2           (3, 3)    (5, 6)    (6, 1)    (3, 3)    (6, 2)
3           (3, 3)    (4, 4)    (6, 1)    (3, 3)    (6, 2)

[Figure: node 2 improves its route to SJ through node 4 at distance 4;
the algorithm has converged]
Iteration   Node 1    Node 2    Node 3    Node 4    Node 5
Initial     (3, 3)    (4, 4)    (6, 1)    (3, 3)    (6, 2)
1           (3, 3)    (4, 4)    (4, 5)    (3, 3)    (6, 2)
2
3

[Figure: the link between node 3 and SJ fails; node 3 now believes it can
reach SJ through node 4 at distance 5]

Network disconnected; loop created between nodes 3 and 4
Iteration   Node 1    Node 2    Node 3    Node 4    Node 5
Initial     (3, 3)    (4, 4)    (6, 1)    (3, 3)    (6, 2)
1           (3, 3)    (4, 4)    (4, 5)    (3, 3)    (6, 2)
2           (3, 7)    (4, 4)    (4, 5)    (5, 5)    (6, 2)
3

[Figure: updated distances shown on the network]

• Node 4 could have chosen 2 as next node because of a tie
Iteration   Node 1    Node 2    Node 3    Node 4    Node 5
Initial     (3, 3)    (4, 4)    (6, 1)    (3, 3)    (6, 2)
1           (3, 3)    (4, 4)    (4, 5)    (3, 3)    (6, 2)
2           (3, 7)    (4, 4)    (4, 5)    (5, 5)    (6, 2)
3           (3, 7)    (4, 6)    (4, 7)    (5, 5)    (6, 2)

[Figure: updated distances shown on the network]

• Node 2 could have chosen 5 as next node because of a tie
Iteration   Node 1    Node 2    Node 3    Node 4    Node 5
1           (3, 3)    (4, 4)    (4, 5)    (3, 3)    (6, 2)
2           (3, 7)    (4, 4)    (4, 5)    (2, 5)    (6, 2)
3           (3, 7)    (4, 6)    (4, 7)    (5, 5)    (6, 2)
4           (2, 9)    (4, 6)    (4, 7)    (5, 5)    (6, 2)

[Figure: updated distances shown on the network]

• Node 1 could have chosen 3 as next node because of a tie
Counting to Infinity Problem
[Figure: (a) chain topology 1–2–3–4 with unit link costs; (b) the link
between 3 and 4 breaks]

Nodes believe the best path is through each other
(Destination is node 4)
Update Node 1 Node 2 Node 3

Before break (2,3) (3,2) (4, 1)


After break (2,3) (3,2) (2,3)
1 (2,3) (3,4) (2,3)
2 (2,5) (3,4) (2,5)
3 (2,5) (3,6) (2,5)
4 (2,7) (3,6) (2,7)
5 (2,7) (3,8) (2,7)
… … … …
Problem: Bad News Travels
Slowly
Remedies
z Split Horizon
z Do not report route to a destination to the
neighbor from which route was learned
z Poisoned Reverse
z Report route to a destination to the neighbor
from which route was learned, but with infinite
distance
z Breaks erroneous direct loops immediately
z Does not work on some indirect loops
Split Horizon with Poison Reverse
[Figure: (a) chain topology 1–2–3–4 with unit link costs; (b) the link
between 3 and 4 breaks]

Nodes believe the best path is through each other
Update Node 1 Node 2 Node 3


Before break (2, 3) (3, 2) (4, 1)
After break (2, 3) (3, 2) (-1, ∞) Node 2 advertizes its route to 4 to
node 3 as having distance infinity;
node 3 finds there is no route to 4

1 (2, 3) (-1, ∞) (-1, ∞) Node 1 advertizes its route to 4 to


node 2 as having distance infinity;
node 2 finds there is no route to 4

2 (-1, ∞) (-1, ∞) (-1, ∞) Node 1 finds there is no route to 4


Link-State Algorithm
z Basic idea: two step procedure
z Each source node gets a map of all nodes and link metrics
(link state) of the entire network
z Find the shortest path on the map from the source node to
all destination nodes
z Broadcast of link-state information
z Every node i in the network broadcasts to every other node
in the network:
z IDs of its neighbors: Ni = set of neighbors of i
z Distances to its neighbors: {Cij | j ∈ Ni}
z Flooding is a popular method of broadcasting packets
Dijkstra Algorithm: Finding Shortest Paths in Order

Find shortest paths from source s to all other destinations.

[Figure: node s and its neighbors w, w', w"; the closest node to s is one
hop away; the 2nd closest node is one hop away from s or from w"; the 3rd
closest node is one hop away from s, w", or x]
Dijkstra’s Algorithm
• N: set of nodes for which shortest path already found
• Initialization: (start with source node s)
  • N = {s}, Ds = 0, “s is distance zero from itself”
  • Dj = Csj for all j ≠ s, distances of directly-connected neighbors
• Step A: (find next closest node i)
  • Find i ∉ N such that Di = min Dj for j ∉ N
  • Add i to N
  • If N contains all the nodes, stop
• Step B: (update minimum costs)
  • For each node j ∉ N:
    Dj = min (Dj, Di + Cij)   (minimum distance from s to j through node i in N)
• Go to Step A
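A minimal sketch (my own illustration) of the algorithm above, using a
binary heap for Step A. The graph is the six-node example network with
link costs reconstructed from the slides (an assumption); the source is
node 1.

```python
import heapq

def dijkstra(cost: dict, s):
    dist = {s: 0}
    heap = [(0, s)]
    while heap:
        d, i = heapq.heappop(heap)
        if d > dist.get(i, float("inf")):
            continue                        # stale heap entry
        for j, c in cost[i].items():
            if d + c < dist.get(j, float("inf")):
                dist[j] = d + c             # Step B: D_j = min(D_j, D_i + C_ij)
                heapq.heappush(heap, (dist[j], j))
    return dist

cost = {
    1: {2: 3, 3: 2, 4: 5},
    2: {1: 3, 4: 1, 5: 4},
    3: {1: 2, 4: 2, 6: 1},
    4: {1: 5, 2: 1, 3: 2, 5: 3},
    5: {2: 4, 4: 3, 6: 2},
    6: {3: 1, 5: 2},
}
print(dijkstra(cost, 1))   # D2=3, D3=2, D4=4, D5=5, D6=3 — final row of the table on the next slide
```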
Execution of Dijkstra’s Algorithm

[Figure: six-node example network; the shortest-path tree from node 1
grows as nodes are added to N]

Iteration   N                 D2   D3   D4   D5   D6
Initial     {1}               3    2    5    ∞    ∞
1           {1,3}             3    2    4    ∞    3
2           {1,2,3}           3    2    4    7    3
3           {1,2,3,6}         3    2    4    5    3
4           {1,2,3,4,6}       3    2    4    5    3
5           {1,2,3,4,5,6}     3    2    4    5    3
Shortest Paths in Dijkstra’s Algorithm

[Figure: six snapshots of the example network showing the shortest-path
tree from node 1 growing by one node per iteration]
Reaction to Failure
z If a link fails,
z Router sets link distance to infinity & floods the
network with an update packet
z All routers immediately update their link database &
recalculate their shortest paths
z Recovery very quick
z But watch out for old update messages
z Add time stamp or sequence # to each update
message
z Check whether each received update message is new
z If new, add it to database and broadcast
z If older, send update message on arriving link
Why is Link State Better?
z Fast, loopless convergence
z Support for precise metrics, and multiple
metrics if necessary (throughput, delay, cost,
reliability)
z Support for multiple paths to a destination
z algorithm can be modified to find best two paths
Source Routing
z Source host selects path that is to be followed by a
packet
z Strict: sequence of nodes in path inserted into header
z Loose: subsequence of nodes in path specified
z Intermediate switches read next-hop address and
remove address
z Source host needs link state information or access
to a route server
z Source routing allows the host to control the paths
that its information traverses in the network
z Potentially the means for customers to select what
service providers they use
Example

[Figure: source host A inserts the explicit route 1,3,6,B into the packet
header; switch 1 removes its own address and forwards 3,6,B, switch 3
forwards 6,B, and switch 6 delivers the packet to destination host B]
Chapter 7
Packet-Switching
Networks
ATM Networks
Asynchronous Tranfer Mode
(ATM)
z Packet multiplexing and switching
z Fixed-length packets: “cells”
z Connection-oriented
z Rich Quality of Service support
z Conceived as end-to-end
z Supporting wide range of services
z Real time voice and video
z Circuit emulation for digital transport
z Data traffic with bandwidth guarantees
z Detailed discussion in Chapter 9
ATM Networking

[Figure: voice, video, and packet traffic enter the ATM Adaptation Layer
at each end and are carried as cells across the ATM network]

• End-to-end information transport using cells
• 53-byte cells provide low delay and fine multiplexing granularity
• Support for many services through the ATM Adaptation Layer
TDM vs. Packet Multiplexing

          Variable bit rate   Delay        Bursty traffic   Processing
TDM       Multirate only      Low, fixed   Inefficient      Minimal, very high speed
Packet    Easily handled      Variable     Efficient        Header & packet processing required*

* In the mid-1980s packet processing was mainly in software and hence
slow; by the late 1990s very high speed packet processing became
possible.
ATM: Attributes of TDM & Packet Switching

[Figure: voice, data packets, and images from four sources enter a
multiplexer; in TDM, slots 1–4 recur in fixed positions and unused slots
waste bandwidth; in ATM, cells carrying a packet header are placed in
synchronous slots on demand]

• Packet structure gives flexibility & efficiency
• Synchronous slot transmission gives high speed & density
ATM Switching

Switch carries out table translation and routing

[Figure: voice, video, and data streams arrive on input ports with VCIs
such as 32, 61, 67; the switch looks up each VCI, translates it, and
forwards the cell to the proper output port]

• ATM switches can be implemented using shared memory, shared backplanes,
  or self-routing multi-stage fabrics
ATM Virtual Connections
z Virtual connections setup across network
z Connections identified by locally-defined tags
z ATM Header contains virtual connection information:
z 8-bit Virtual Path Identifier
z 16-bit Virtual Channel Identifier
z Powerful traffic grooming capabilities
z Multiple VCs can be bundled within a VP
z Similar to tributaries with SONET, except variable bit rates possible

[Figure: a physical link carries several virtual paths, each of which
bundles several virtual channels]
VPI/VCI Switching & Multiplexing

[Figure: connections a, b, c enter ATM switch 1 and are bundled into a
virtual path (VPI 3); an ATM crossconnect switches the path to VPI 5
toward ATM switch 2; connections d and e travel on VPI 1 and VPI 2 via
ATM switch 4]

• Connections a, b, c bundled into a VP at switch 1
• Crossconnect switches the VP without looking at VCIs
• VP unbundled at switch 2; VC switching thereafter
• VPI/VCI structure allows creation of virtual networks
MPLS & ATM
z ATM initially touted as more scalable than packet
switching
z ATM envisioned speeds of 150-600 Mbps
z Advances in optical transmission proved ATM to be
the less scalable: @ 10 Gbps
z Segmentation & reassembly of messages & streams into
48-byte cell payloads difficult & inefficient
z Header must be processed every 53 bytes vs. 500 bytes
on average for packets
z Delay due to 1250 byte packet at 10 Gbps = 1 μsec; delay
due to 53 byte cell @ 150 Mbps ≈ 3 μsec
z MPLS (Chapter 10) uses tags to transfer packets
across virtual circuits in Internet
Chapter 7
Packet-Switching
Networks
Traffic Management
Packet Level
Flow Level
Flow-Aggregate Level
Traffic Management

Vehicular traffic management
• Traffic lights & signals control the flow of traffic in a city street
  system
• Objective is to maximize flow with tolerable delays
• Priority services
  • Police sirens
  • Cavalcade for dignitaries
  • Bus & high-usage lanes
  • Trucks allowed only at night

Packet traffic management
• Multiplexing & access mechanisms control the flow of packet traffic
• Objective is to make efficient use of network resources & deliver QoS
• Priority
  • Fault-recovery packets
  • Real-time traffic
  • Enterprise (high-revenue) traffic
  • High bandwidth traffic
Time Scales & Granularities
z Packet Level
z Queueing & scheduling at multiplexing points
z Determines relative performance offered to packets over a
short time scale (microseconds)
z Flow Level
z Management of traffic flows & resource allocation to
ensure delivery of QoS (milliseconds to seconds)
z Matching traffic flows to resources available; congestion
control
z Flow-Aggregate Level
z Routing of aggregate traffic flows across the network for
efficient utilization of resources and meeting of service
levels
z “Traffic Engineering”, at scale of minutes to days
End-to-End QoS

[Figure: a packet crosses multiplexing points 1, 2, …, N–1, N, each with
its own packet buffer]

• A packet traversing the network encounters delay and possible loss at
  various multiplexing points
• End-to-end performance is the accumulation of per-hop performances
Scheduling & QoS
z End-to-End QoS & Resource Control
z Buffer & bandwidth control → Performance

z Admission control to regulate traffic level

z Scheduling Concepts
z fairness/isolation

z priority, aggregation,

z Fair Queueing & Variations


z WFQ, PGPS

z Guaranteed Service
z WFQ, Rate-control

z Packet Dropping
z aggregation, drop priorities
FIFO Queueing

[Figure: arriving packets from all flows enter a single packet buffer
that feeds the transmission link; packets are discarded when the buffer
is full]

• All packet flows share the same buffer
• Transmission discipline: First-In, First-Out
• Buffering discipline: discard arriving packets if the buffer is full
  (alternatives: random discard; push out the head-of-line, i.e. oldest,
  packet)
FIFO Queueing
z Cannot provide differential QoS to different packet flows
z Different packet flows interact strongly

z Statistical delay guarantees via load control


z Restrict number of flows allowed (connection admission control)

z Difficult to determine performance delivered

z Finite buffer determines a maximum possible delay


z Buffer size determines loss probability
z But depends on arrival & packet length statistics

z Variation: packet enqueueing based on queue thresholds


z some packet flows encounter blocking before others

z higher loss, lower delay


FIFO Queueing with Discard Priority

[Figure: (a) a single FIFO buffer with packet discard when full;
(b) the same buffer with a threshold: Class 2 packets are discarded when
the threshold is exceeded, Class 1 packets only when the buffer is full]
HOL Priority Queueing

[Figure: high-priority and low-priority packets wait in separate buffers;
the low-priority buffer is served only when the high-priority queue is
empty; each buffer discards packets when full]

• High priority queue serviced until empty
• High priority queue has lower waiting time
• Buffers can be dimensioned for different loss probabilities
• Surge in high priority queue can cause low priority queue to saturate
HOL Priority Features

[Figure: delay versus per-class load for the two classes]

• Provides differential QoS
• Pre-emptive priority: lower classes invisible to higher classes
• Non-preemptive priority: lower classes impact higher classes through
  residual service times
• High-priority classes can hog all of the bandwidth & starve lower
  priority classes
• Need to provide some isolation between classes
Earliest Due Date Scheduling

[Figure: a tagging unit assigns a due date to each arriving packet;
packets wait in a buffer sorted by due date; discard when full]

• Queue in order of “due date”
• Packets requiring low delay get earlier due dates
• Packets without delay requirements get indefinite or very long due
  dates
Fair Queueing / Generalized Processor Sharing

[Figure: packet flows 1 through n each have their own queue; the queues
share a transmission link of C bits/second under approximated bit-level
round-robin service]

• Each flow has its own logical queue: prevents hogging; allows
  differential loss probabilities
• C bits/sec allocated equally among non-empty queues
  • Transmission rate = C / n(t), where n(t) = # non-empty queues
• Idealized system assumes fluid flow from the queues
• Implementation requires approximation: simulate the fluid system; sort
  packets according to their completion time in the ideal system
[Example: buffers 1 and 2 each hold a one-unit packet at t = 0.
Fluid-flow system: both packets are served at rate 1/2 and both complete
service at t = 2.
Packet-by-packet system: the packet from buffer 1 is served first at
rate 1 (finishing at t = 1) while the packet from buffer 2 waits; the
packet from buffer 2 is then served at rate 1, finishing at t = 2.]
[Example: at t = 0 buffer 1 holds a one-unit packet and buffer 2 a
two-unit packet.
Fluid-flow system: both are served at rate 1/2; buffer 1's packet
completes at t = 2, after which buffer 2's packet is served at rate 1 and
completes at t = 3.
Packet-by-packet fair queueing: buffer 1's packet (earlier fluid
finishing time) is transmitted first at rate 1 while buffer 2's packet
waits; buffer 2's packet is then transmitted, completing at t = 3.]
[Example (weighted fair queueing, buffer 2 given three times the weight
of buffer 1): at t = 0 each buffer holds a one-unit packet.
Fluid-flow system: buffer 1's packet is served at rate 1/4 and buffer 2's
at rate 3/4; buffer 2's packet completes first, after which buffer 1's
packet is served at rate 1 and completes at t = 2.
Packet-by-packet weighted fair queueing: buffer 2's packet is served
first at rate 1 while buffer 1's packet waits; buffer 1's packet is then
served at rate 1, completing at t = 2.]
Packetized GPS/WFQ

[Figure: a tagging unit computes a finish tag for each arriving packet;
packets wait in a buffer sorted by tag; discard when full]

• Compute each packet's completion time in the ideal (fluid) system
  • Add the tag to the packet
  • Sort packets in the queue according to tag
  • Serve according to HOL
Bit-by-Bit Fair Queueing

• Assume n flows, n queues
• 1 round = 1 cycle serving all n queues
• If each queue gets 1 bit per cycle, then the duration of 1 round =
  # active queues
• Round number = number of cycles of service that have been completed

[Figure: the current round number advances as buffers 1 through n are
served; number of rounds = number of bit transmission opportunities]

• If a packet arrives to an idle queue:
  Finishing time = round number + packet size in bits
• If a packet arrives to an active queue:
  Finishing time = finishing time of last packet in queue + packet size
• A packet of length k bits that begins transmission now completes
  transmission k rounds later

Differential service:
If a traffic flow is to receive twice as much bandwidth as a regular
flow, then its packet completion time would be half
Computing the Finishing Time

• F(i,k,t) = finish time of kth packet that arrives at time t to flow i
• P(i,k,t) = size of kth packet that arrives at time t to flow i
• R(t) = round number at time t
  • Generalize so R(t) is continuous, not discrete: R(t) grows at a rate
    inversely proportional to n(t)

• Fair queueing:
  F(i,k,t) = max{F(i,k-1,t), R(t)} + P(i,k,t)
• Weighted fair queueing:
  F(i,k,t) = max{F(i,k-1,t), R(t)} + P(i,k,t)/wi
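A minimal sketch (my own illustration) of the tag computation above; the
round number R(t) is passed in directly here, whereas a real scheduler
would track it as a piecewise-linear function of time:

```python
def finish_tag(prev_tag: float, round_now: float, pkt_bits: int,
               weight: float = 1.0) -> float:
    """F(i,k) = max{F(i,k-1), R(t)} + P(i,k)/w_i"""
    return max(prev_tag, round_now) + pkt_bits / weight

# Two flows with packets arriving at R(t) = 0: flow 1 has a 1000-bit packet,
# flow 2 has a 500-bit packet and weight 2, so flow 2 finishes (is served) first.
f1 = finish_tag(0.0, 0.0, 1000, weight=1.0)   # 1000.0
f2 = finish_tag(0.0, 0.0, 500, weight=2.0)    # 250.0
print(sorted([("flow1", f1), ("flow2", f2)], key=lambda x: x[1]))
```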
WFQ and Packet QoS
z WFQ and its many variations form the basis
for providing QoS in packet networks
z Very high-speed implementations available,
up to 10 Gbps and possibly higher
z WFQ must be combined with other
mechanisms to provide end-to-end QoS (next
section)
Buffer Management
z Packet drop strategy: Which packet to drop when buffers full
z Fairness: protect behaving sources from misbehaving
sources
z Aggregation:
z Per-flow buffers protect flows from misbehaving flows
z Full aggregation provides no protection
z Aggregation into classes provides intermediate protection
z Drop priorities:
z Drop packets from buffer according to priorities
z Maximizes network utilization & application QoS
z Examples: layered video, policing at network edge
z Controlling sources at the edge
Early or Overloaded Drop

Random early detection:


z drop pkts if short-term avg of queue exceeds threshold

z pkt drop probability increases linearly with queue length

z mark offending pkts

z improves performance of cooperating TCP sources

z increases loss probability of misbehaving sources


Random Early Detection (RED)
z Packets produced by TCP will reduce input rate in response
to network congestion
z Early drop: discard packets before buffers are full
z Random drop causes some sources to reduce rate before
others, causing gradual reduction in aggregate input rate

Algorithm:
z Maintain running average of queue length
z If Qavg < minthreshold, do nothing
z If Qavg > maxthreshold, drop packet
z If in between, drop packet according to probability
z Flows that send more packets are more likely to have
packets dropped
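A minimal sketch (my own illustration) of the drop decision described
above; maxp is a hypothetical parameter for the maximum drop probability
at the upper threshold, and a full RED implementation also maintains the
exponentially weighted average queue length and a count since the last
drop:

```python
import random

def red_drop(avg_queue: float, min_th: float, max_th: float,
             max_p: float = 0.1) -> bool:
    """Return True if the arriving packet should be dropped."""
    if avg_queue < min_th:
        return False                    # below min threshold: do nothing
    if avg_queue >= max_th:
        return True                     # above max threshold: drop
    # in between: drop probability grows linearly with the average queue
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p

drops = sum(red_drop(30, min_th=20, max_th=40) for _ in range(10_000))
print(drops / 10_000)   # ~0.05 for an average queue halfway between the thresholds
```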
Packet Drop Profile in RED

[Figure: probability of packet drop versus average queue length — zero
below minth, increasing linearly between minth and maxth, and 1 beyond
maxth]
Chapter 7
Packet-Switching
Networks
Traffic Management at the Flow
Level
Congestion occurs when a surge of traffic overloads network resources

[Figure: traffic from several sources converges on one link of the
network, causing congestion]

Approaches to congestion control:
• Preventive approaches: scheduling & reservations
• Reactive approaches: detect & throttle/discard

Ideal effect of congestion control:
resources used efficiently up to the capacity available

[Figure: throughput versus offered load — a controlled network levels off
at the available capacity, while an uncontrolled network's throughput
collapses as the offered load grows]
Open-Loop Control
z Network performance is guaranteed to all
traffic flows that have been admitted into the
network
z Initially for connection-oriented networks
z Key Mechanisms
z Admission Control
z Policing
z Traffic Shaping
z Traffic Scheduling
Admission Control

[Figure: typical bit rate demanded by a variable bit rate information
source over time, with its peak rate and average rate marked]

• Flows negotiate a contract with the network
• Specify requirements:
  • Peak, average, minimum bit rate
  • Maximum burst size
  • Delay, loss requirement
• Network computes resources needed
  • “Effective” bandwidth
• If the flow is accepted, the network allocates resources to ensure QoS
  is delivered as long as the source conforms to the contract
Policing
z Network monitors traffic flows continuously to
ensure they meet their traffic contract
z When a packet violates the contract, network can
discard or tag the packet giving it lower priority
z If congestion occurs, tagged packets are discarded
first
z Leaky Bucket Algorithm is the most commonly used
policing mechanism
z Bucket has specified leak rate for average contracted rate
z Bucket has specified depth to accommodate variations in
arrival rate
z Arriving packet is conforming if it does not result in overflow
The Leaky Bucket algorithm can be used to police the arrival rate of a
packet stream

[Figure: water is poured irregularly into a leaky bucket; the water
drains at a constant rate of 1 packet per unit time]

• Leak rate corresponds to the long-term (average contracted) rate
• Bucket depth corresponds to the maximum allowable burst
• Assume constant-length packets, as in ATM

• Let X = bucket content at the arrival of the last conforming packet
• By the next arrival at time ta, the bucket has drained by ta minus the
  last conforming packet arrival time
Leaky Bucket Algorithm

Variables: X = value of the leaky bucket counter; X’ = auxiliary
variable; LCT = last conformance time; I = increment per arrival
(nominal interarrival time); L + I = bucket depth. Depletion rate:
1 packet per unit time.

On arrival of a packet at time ta:
• X’ = X − (ta − LCT)   (drain the bucket since the last conforming packet)
• If X’ < 0, set X’ = 0   (bucket empty)
• If X’ > L, the packet is nonconforming: admitting it would cause the
  bucket to overflow, so X and LCT are left unchanged
• Otherwise the packet is conforming: X = X’ + I and LCT = ta

(A code sketch of this policer appears below.)
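A minimal sketch (my own illustration) of the algorithm above as a
policing function. The arrival times in the demo are hypothetical, with
I = 4 and L = 6 as in the example on the next slide:

```python
def make_policer(I: float, L: float):
    """Leaky-bucket policer: returns a function that labels each arrival
    (at non-decreasing times ta) as conforming (True) or not (False)."""
    state = {"X": 0.0, "LCT": None}
    def conforming(ta: float) -> bool:
        if state["LCT"] is None:                  # first packet always conforms
            state["X"], state["LCT"] = I, ta
            return True
        Xp = max(0.0, state["X"] - (ta - state["LCT"]))  # drain since last conformance
        if Xp > L:                                # would overflow the bucket
            return False                          # nonconforming; X and LCT unchanged
        state["X"], state["LCT"] = Xp + I, ta
        return True
    return conforming

police = make_policer(I=4, L=6)
arrivals = [0, 1, 2, 3, 4, 12, 13, 14, 15]        # hypothetical arrival times
print([police(t) for t in arrivals])
# [True, True, True, False, False, True, True, True, False]
```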
Leaky Bucket Example

[Figure: a packet arrival pattern policed with I = 4 and L = 6; the
bucket content rises by I at each conforming arrival and drains between
arrivals; an arrival that would push the content above L + I is
nonconforming]

• Non-conforming packets are not allowed into the bucket & hence not
  included in the calculations
Policing Parameters

T = 1 / peak rate
MBS = maximum burst size
I = nominal interarrival time = 1 / sustainable rate

MBS = 1 + ⌊ L / (I − T) ⌋

[Figure: a maximum-size burst of MBS packets spaced T apart, followed by
packets at the sustainable spacing I]
Dual Leaky Bucket

Dual leaky bucket to police PCR, SCR, and MBS:

[Figure: incoming traffic passes through leaky bucket 1 (SCR and MBS),
which tags or drops nonconforming packets; the untagged traffic then
passes through leaky bucket 2 (PCR and CDVT), which again tags or drops
nonconforming packets before admitting the untagged traffic]

PCR = peak cell rate
CDVT = cell delay variation tolerance
SCR = sustainable cell rate
MBS = maximum burst size
Traffic Shaping

[Figure: traffic is shaped at point 1 in network A, policed at point 2 on
entry to network B, shaped again at point 3, and policed at point 4 on
entry to network C]

• Networks police the incoming traffic flow
• Traffic shaping is used to ensure that a packet stream conforms to
  specific parameters
• Networks can shape their traffic prior to passing it to another network
Leaky Bucket Traffic Shaper

[Figure: incoming packets enter a buffer of size N; a server reads them
out periodically as shaped traffic]

• Buffer incoming packets
• Play out periodically to conform to parameters
• Surges in arrivals are buffered & smoothed out
• Possible packet loss due to buffer overflow
• Too restrictive, since conforming traffic does not need to be
  completely smooth
Token Bucket Traffic Shaper

[Figure: tokens arrive periodically into a token bucket of size K;
incoming packets wait in a buffer of size N and are released by the
server only when a token is available]

• An incoming packet must have sufficient tokens before admission into
  the network
• Token rate regulates transfer of packets
• If sufficient tokens are available, packets enter the network without
  delay
• K determines how much burstiness is allowed into the network
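A minimal sketch (my own illustration) of such a shaper, assuming one
token per fixed-size packet (as with ATM cells); the arrival times and
parameters in the demo are hypothetical:

```python
def shape(arrivals, r: float, K: float):
    """Departure times for packets arriving at the given times (one token
    per packet, tokens arriving at rate r, bucket holds at most K)."""
    tokens, last, prev_depart, departures = float(K), 0.0, 0.0, []
    for ta in arrivals:
        t = max(ta, prev_depart)                   # FIFO: wait behind earlier packets
        tokens = min(K, tokens + (t - last) * r)   # tokens accumulated meanwhile
        if tokens < 1:                             # not enough tokens: wait for one
            t += (1 - tokens) / r
            tokens = 1.0
        tokens -= 1                                # consume one token, release packet
        last, prev_depart = t, t
        departures.append(t)
    return departures

# Burst of four packets at t = 0, token rate 1 per second, bucket size K = 2:
print(shape([0.0, 0.0, 0.0, 0.0], r=1.0, K=2))
# [0.0, 0.0, 1.0, 2.0] — two packets pass at once, the rest are spaced 1/r apart
```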
Token Bucket Shaping Effect

• The token bucket constrains the traffic from a source to be limited to
  b + r·t in an interval of length t

[Figure: cumulative traffic bounded by the line b + r·t — b may be sent
instantly, then traffic continues at rate r]
Packet Transfer with Delay Guarantees

[Figure: (a) a token-shaped flow A(t) = b + r·t passes through two
multiplexers, each giving it rate R > r (e.g. using WFQ); (b) buffer
occupancy at multiplexer 1 never exceeds b and drains at rate R − r,
while the buffer at multiplexer 2 stays empty]

• Assume fluid flow of information
• Token bucket allows a burst of b bytes at once and then r bytes/second
• Since R > r, buffer content at the first multiplexer is never greater
  than b bytes
• Thus delay at the first multiplexer < b/R
• Rate into the second multiplexer is r < R, so bytes are never delayed
  there
Delay Bounds with WFQ / PGPS

• Assume
  • traffic shaped to parameters b & r
  • schedulers give the flow at least rate R > r
  • H hop path
  • m is the maximum packet size for the given flow
  • M is the maximum packet size in the network
  • Rj is the transmission rate on the jth hop
• Maximum end-to-end delay that can be experienced by a packet from
  flow i is:

  D ≤ b/R + (H − 1)·m/R + Σj=1..H M/Rj
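A minimal sketch (my own illustration) evaluating this bound for a
hypothetical flow; all parameter values below are made up:

```python
def wfq_delay_bound(b, R, m, M, rates):
    """D <= b/R + (H-1)*m/R + sum over the H hops of M/R_j."""
    H = len(rates)
    return b / R + (H - 1) * m / R + sum(M / Rj for Rj in rates)

# Hypothetical flow: token bucket b = 80 kbits, guaranteed rate R = 1 Mb/s,
# max packet sizes m = 8 kbits (flow) and M = 12 kbits (network),
# three hops each at 10 Mb/s.
print(wfq_delay_bound(b=80e3, R=1e6, m=8e3, M=12e3, rates=[10e6] * 3))  # ~0.0996 s
```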
Scheduling for Guaranteed
Service
z Suppose guaranteed bounds on end-to-end
delay across the network are to be provided
z A call admission control procedure is required
to allocate resources & set schedulers
z Traffic flows from sources must be
shaped/regulated so that they do not exceed
their allocated resources
z Strict delay bounds can be met
Current View of Router Function

[Figure: control path — routing agent, reservation agent, and management
agent maintain the routing database and, via admission control, the
traffic control database; data path — the input driver hands packets to
a classifier and the Internet forwarder, and a packet scheduler feeds the
output driver]
Closed-Loop Flow Control
z Congestion control
z feedback information to regulate flow from sources into
network
z Based on buffer content, link utilization, etc.
z Examples: TCP at transport layer; congestion control at
ATM level
z End-to-end vs. Hop-by-hop
z Delay in effecting control
z Implicit vs. Explicit Feedback
z Source deduces congestion from observed behavior
z Routers/switches generate messages alerting to
congestion
End-to-End vs. Hop-by-Hop Congestion Control

[Figure: (a) end-to-end control — feedback information flows from the
destination back to the source along the packet flow; (b) hop-by-hop
control — each node feeds back to its upstream neighbor]
Traffic Engineering
z Management exerted at flow aggregate level
z Distribution of flows in network to achieve efficient
utilization of resources (bandwidth)
z Shortest path algorithm to route a given flow not
enough
z Does not take into account requirements of a flow, e.g.
bandwidth requirement
z Does not take into account interplay between different flows
z Must take into account aggregate demand from all
flows
[Figure: (a) shortest path routing sends all flows over the link from
node 4 to node 8 and congests it; (b) a better flow allocation
distributes the flows more uniformly across the network]