Lec - 3 - Packet Delay - Reference Model
▪ packets held in buffers incur queueing delay
▪ arriving packets are dropped (loss) if no free (available) buffers remain
Host: sends packets of data
host sending function:
▪ takes application message
▪ breaks it into smaller chunks, known as packets, of length L bits each
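The packetization step above can be sketched as follows (a minimal illustration; the function name and bit-string representation are assumptions, not part of any real protocol stack):

```python
def packetize(message_bits: str, L: int) -> list:
    """Break an application message into packets of at most L bits each."""
    return [message_bits[i:i + L] for i in range(0, len(message_bits), L)]

# a 24-bit "message" split into L = 10-bit packets -> chunks of 10, 10, and 4 bits
packets = packetize("0" * 24, L=10)
```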
• packet transmission delay: takes L/R seconds to transmit (push out) an L-bit packet into a link at R bps
One-hop numerical example:
▪ L = 10 Kbits
▪ R = 100 Mbps
▪ one-hop transmission delay = L/R = 0.1 msec
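The one-hop example works out as a direct division; a small sketch reproducing the slide's numbers:

```python
def transmission_delay(L_bits: float, R_bps: float) -> float:
    """Time to push an L-bit packet onto a link of rate R bps (seconds)."""
    return L_bits / R_bps

# slide's example: L = 10 Kbits, R = 100 Mbps -> 1e4 / 1e8 = 1e-4 s = 0.1 msec
d_trans = transmission_delay(10e3, 100e6)
```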
2. Propagation delay:
After the packet has been pushed onto the transmission medium, it must travel through the medium to reach the destination. The time taken by the last bit of the packet to travel across the medium is called propagation delay.
Distance: the longer the medium, the more time the packet takes to reach the destination.
Velocity: the higher the signal velocity (speed), the sooner the packet is received.
Tp = Distance / Velocity
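Applying Tp = Distance / Velocity in code; the link length and signal speed below are illustrative assumptions, not values from the slides:

```python
def propagation_delay(distance_m: float, velocity_mps: float = 2e8) -> float:
    """Time for a bit to travel the physical medium (seconds).
    2e8 m/s is a commonly assumed signal speed in copper/fiber (about 2/3 c)."""
    return distance_m / velocity_mps

# e.g. a 1000 km link: 1e6 m / 2e8 m/s = 5 ms (assumed example numbers)
d_prop = propagation_delay(1e6)
```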
Note:
• In general, we can't calculate queueing delay exactly, because there is no fixed formula for it; it varies with the congestion at the router.
4. Processing delay:
• Next the packet is taken up for processing; the time this takes is called processing delay.
• It is the time required by intermediate routers to process the packet: deciding where to forward it, updating the TTL, and performing header checksum calculations.
• It also has no fixed formula, since it depends on the speed of the processor, which varies from computer to computer.
Note: Neither queueing delay nor processing delay has a fixed formula: processing delay depends on the speed of the processor, and queueing delay depends on the congestion at the router.
Packet delay: four sources
▪ nodal processing
▪ queueing
▪ transmission
▪ propagation
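The four sources add up per hop, giving the standard per-node delay d_nodal = d_proc + d_queue + d_trans + d_prop. A small sketch with illustrative values (only transmission and propagation delay have closed-form expressions; the other two numbers are assumed):

```python
def nodal_delay(d_proc: float, d_queue: float, d_trans: float, d_prop: float) -> float:
    """Total per-hop delay: processing + queueing + transmission + propagation."""
    return d_proc + d_queue + d_trans + d_prop

# illustrative values in seconds: 2 us processing, 1 ms queueing,
# 0.1 ms transmission (from the earlier example), 5 ms propagation
total = nodal_delay(d_proc=2e-6, d_queue=1e-3, d_trans=1e-4, d_prop=5e-3)
```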
Real Internet delays and routes
traceroute: gaia.cs.umass.edu to www.eurecom.fr (3 probes per router, so 3 delay measurements per hop)
1 cs-gw (128.119.240.254) 1 ms 1 ms 2 ms
2 border1-rt-fa5-1-0.gw.umass.edu (128.119.3.145) 1 ms 1 ms 2 ms
3 cht-vbns.gw.umass.edu (128.119.3.130) 6 ms 5 ms 5 ms
4 jn1-at1-0-0-19.wor.vbns.net (204.147.132.129) 16 ms 11 ms 13 ms
5 jn1-so7-0-0-0.wae.vbns.net (204.147.136.136) 21 ms 18 ms 18 ms
6 abilene-vbns.abilene.ucaid.edu (198.32.11.9) 22 ms 18 ms 22 ms
7 nycm-wash.abilene.ucaid.edu (198.32.8.46) 22 ms 22 ms 22 ms
8 62.40.103.253 (62.40.103.253) 104 ms 109 ms 106 ms (trans-oceanic link)
9 de2-1.de1.de.geant.net (62.40.96.129) 109 ms 102 ms 104 ms
10 de.fr1.fr.geant.net (62.40.96.50) 113 ms 121 ms 114 ms
11 renater-gw.fr1.fr.geant.net (62.40.103.54) 112 ms 114 ms 112 ms
12 nio-n2.cssi.renater.fr (193.51.206.13) 111 ms 114 ms 116 ms (looks like delays decrease! Why?)
13 nice.cssi.renater.fr (195.220.98.102) 123 ms 125 ms 124 ms
14 r3t2-nice.cssi.renater.fr (195.220.98.110) 126 ms 126 ms 124 ms
15 eurecom-valbonne.r3t2.ft.net (193.48.50.54) 135 ms 128 ms 133 ms
16 194.214.211.25 (194.214.211.25) 126 ms 128 ms 126 ms
17 * * *
18 * * * (* means no response: probe lost, router not replying)
19 fantasia.eurecom.fr (193.55.113.142) 132 ms 128 ms 136 ms
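Each hop line above carries three probe RTTs; a small sketch for extracting them from one line of this output (the parsing function and regular expression are my own illustration, not part of traceroute):

```python
import re

def parse_rtts(line: str) -> list:
    """Extract the probe RTTs (in ms) from one traceroute output line;
    returns [] for '* * *' lines where no probe was answered."""
    return [float(x) for x in re.findall(r"([\d.]+) ms", line)]

rtts = parse_rtts("8  62.40.103.253 (62.40.103.253)  104 ms  109 ms  106 ms")
avg = sum(rtts) / len(rtts)  # average of the three probes, about 106.3 ms
```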
Packet loss
▪ queue (aka buffer) preceding link has finite capacity
▪ packet arriving to full queue dropped (aka lost)
▪ lost packet may be retransmitted by previous node, by source end
system, or not at all
[figure: packets wait in a buffer (waiting area) while one packet is being transmitted; a packet arriving to a full buffer is lost]
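The drop-on-full behavior can be sketched as a queue with a capacity check (a minimal illustration; function and variable names are assumptions):

```python
from collections import deque

def enqueue(buffer: deque, capacity: int, packet) -> bool:
    """Admit a packet if a buffer slot is free; drop it (return False) otherwise."""
    if len(buffer) < capacity:
        buffer.append(packet)
        return True
    return False

# with room for 2 packets, the third arrival finds a full buffer and is lost
buf = deque()
results = [enqueue(buf, capacity=2, packet=p) for p in ("p1", "p2", "p3")]
```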
Throughput
▪ throughput: rate (bits/time unit) at which bits are being sent from
sender to receiver
• instantaneous: rate at given point in time
• average: rate over longer period of time
server sends bits (fluid) into pipe
file of F bits to send to client
pipe that can carry fluid at rate Rs bits/sec (link capacity)
pipe that can carry fluid at rate Rc bits/sec (link capacity)
Throughput
▪ Rs < Rc: what is average end-end throughput? Rs bits/sec
▪ Rs > Rc: what is average end-end throughput? Rc bits/sec
bottleneck link
link on end-end path that constrains end-end throughput
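The bottleneck rule can be expressed directly; the file size and link rates below are assumed example numbers:

```python
def avg_throughput(Rs: float, Rc: float) -> float:
    """End-end throughput of a two-link path is set by the bottleneck link."""
    return min(Rs, Rc)

def transfer_time(F_bits: float, Rs: float, Rc: float) -> float:
    """Approximate time to move an F-bit file, ignoring per-packet delays."""
    return F_bits / avg_throughput(Rs, Rc)

# assumed example: Rs = 2 Mbps server link, Rc = 1 Mbps client link
# -> throughput 1 Mbps, so an 8 Mbit file takes 8 seconds
t = transfer_time(8e6, Rs=2e6, Rc=1e6)
```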
Throughput: network scenario
▪ 10 connections (fairly) share a backbone bottleneck link of rate R bits/sec
▪ per-connection end-end throughput: min(Rc, Rs, R/10)
▪ in practice: Rc or Rs is often the bottleneck
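The shared-backbone formula in code form (a sketch with assumed example rates):

```python
def per_connection_throughput(Rs: float, Rc: float, R: float, n: int = 10) -> float:
    """n connections fairly share the backbone link R; each connection is
    limited by the slower of its access links and its fair share R/n."""
    return min(Rs, Rc, R / n)

# assumed example: Rs = 1 Mbps, Rc = 2 Mbps, R = 5 Mbps shared by 10 connections
# -> fair share R/10 = 500 kbps is the limiting term
x = per_connection_throughput(1e6, 2e6, 5e6)
```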
Why layering?
Approach to designing/discussing complex systems:
▪ explicit structure allows identification,
relationship of system’s pieces
• layered reference model for discussion
▪ modularization eases maintenance,
updating of system
• change in layer's service implementation:
transparent to rest of system
• e.g., change in gate procedure doesn’t
affect rest of system
The TCP/IP and OSI Networking Models
▪ vendors formalized and published their networking protocols, enabling other vendors to create
products that could communicate with their computers. For instance, IBM published its Systems
Network Architecture (SNA) networking model in 1974. After SNA was published, you could buy
computers from other vendors as well as IBM, and they could communicate—as long as they
supported IBM’s proprietary SNA.
• A better solution was to create a standardized networking model that all vendors would support. The
International Organization for Standardization (ISO) took on this task starting as early as the late 1970s,
beginning work on what would become known as the Open Systems Interconnection (OSI) networking
model. The ISO had a noble goal for the OSI: to standardize data networking protocols to allow communication
between all computers across the entire planet. The OSI worked toward this ambitious and noble goal, with
participants from most of the technologically developed nations on Earth participating in the process.
• A second, less formal effort to create a standardized, public networking model sprouted forth from a U.S.
Defense Department contract. Researchers at various universities volunteered to help further develop the
protocols surrounding the original department's work. These efforts resulted in a competing networking
model called TCP/IP.
• The world now had many competing vendor networking models and two competing standardized networking
models. So what happened? TCP/IP won the war. Proprietary protocols are still in use today in many networks,
but much less so than in the 1980s and 1990s. OSI, whose development suffered in part because of the slow
formal standardization processes of the ISO, never succeeded in the marketplace. And TCP/IP, the networking
model created almost entirely by a bunch of volunteers, has become the most prolific set of data networking
protocols ever.
Data Encapsulation: The term encapsulation describes the process of putting headers and trailers around
some data. A computer that needs to send data encapsulates the data in headers of the correct format so that the
receiving computer will know how to interpret the received data.
• Step 1: Create the application data and headers. This simply means that the application has data to send.
• Step 2: Package the data for transport. The transport layer (TCP or UDP) creates the transport header and places the data behind it.
• Step 3: Add the destination and source network layer addresses to the data. The network layer creates the network header, which includes the network layer addresses, and places the data behind it.
• Step 4: Add the destination and source data link layer addresses to the data. The data link layer creates the data link header, places the data behind it, and places the data link trailer at the end.
• Step 5: Transmit the bits. The physical layer encodes a signal onto the medium to transmit the frame.
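Steps 2 through 4 can be sketched as nested wrapping; the header and trailer strings below are placeholders, not real protocol formats:

```python
def encapsulate(app_data: bytes) -> bytes:
    """Sketch of encapsulation: each layer prepends its header, and the
    link layer also appends a trailer. Contents are placeholders only."""
    segment = b"TCP|" + app_data            # step 2: transport header
    packet = b"IP|" + segment               # step 3: network header
    frame = b"ETH|" + packet + b"|FCS"      # step 4: link header + trailer
    return frame

frame = encapsulate(b"hello")
# frame is b"ETH|IP|TCP|hello|FCS": the application data sits innermost
```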
Chapter 1: roadmap
• What is the Internet?
• What is a protocol?
• Network edge: hosts, access
network, physical media
• Network core: packet/circuit
switching, internet structure
• Performance: loss, delay,
throughput
• Protocol layers, service models
• History