
3.1 DATA LINK LAYER DESIGN ISSUES

 The data link layer uses the services of the physical layer to send and receive bits over communication channels. It has a number of functions, including:
1. Providing a well-defined service interface to the network layer.
2. Dealing with transmission errors.
3. Regulating the flow of data so that slow receivers are not swamped by fast senders.
 To accomplish these goals, the data link layer takes the packets it gets from the network layer and encapsulates them into frames for transmission.
 Each frame contains a frame header, a payload field for holding the packet, and a frame trailer, as illustrated in Fig. 3-1.

3.1.1 Services Provided to the Network Layer

 The function of the data link layer is to provide services to the network layer. The principal service is transferring data from the network layer on the source machine to the network layer on the destination machine.
 On the source machine is an entity, call it a process, in the network layer that hands some bits to the data link layer for transmission to the destination. The job of the data link layer is to transmit the bits to the destination machine so they can be handed over to the network layer there, as shown in Fig. 3-2(a). The actual transmission follows the path of Fig. 3-2(b).
 The data link layer can be designed to offer various services. The actual services that are offered vary from protocol to protocol. Three reasonable possibilities that we will consider in turn are:
1. Unacknowledged connectionless service.
2. Acknowledged connectionless service.
3. Acknowledged connection-oriented service.

3.1.2 Framing

 The usual approach is for the data link layer to break up the bit stream into discrete frames, compute a short token called a checksum for each frame, and include the checksum in the frame when it is transmitted.
 When a frame arrives at the destination, the checksum is recomputed. If the newly computed checksum is different from the one contained in the frame, the data link layer knows that an error has occurred and takes steps to deal with it.
 There are four framing methods:
1. Byte count.
2. Flag bytes with byte stuffing.
3. Flag bits with bit stuffing.
4. Physical layer coding violations.
 The first framing method uses a field in the header to specify the number of bytes in the frame. When the data link layer at the destination sees the byte count, it knows how many bytes follow and hence where the end of the frame is. This technique is shown in Fig. 3-3(a) for four small example frames of sizes 5, 5, 8, and 8 bytes, respectively.
 The trouble with this algorithm is that the count can be garbled by a transmission error. For example, if the byte count of 5 in the second frame of Fig. 3-3(b) becomes a 7 due to a single bit flip, the destination will get out of synchronization. It will then be unable to locate the correct start of the next frame.
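 To make the byte-count idea concrete, here is a minimal Python sketch (an illustration only; the function names are invented, not taken from the text). It builds a stream in which each frame begins with a one-byte count that covers the count byte itself, then shows how a single garbled count throws every later frame boundary off.

def frame_with_byte_count(packets):
    """Build a byte stream where each frame starts with a 1-byte count."""
    stream = bytearray()
    for payload in packets:
        stream.append(len(payload) + 1)   # count byte covers itself plus the payload
        stream.extend(payload)
    return bytes(stream)

def deframe_by_byte_count(stream):
    """Recover frames by trusting each count byte."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]
        if count == 0:                    # a zero count would never terminate
            break
        frames.append(stream[i + 1:i + count])
        i += count
    return frames

stream = frame_with_byte_count([b"ABCD", b"EFGH"])
print(deframe_by_byte_count(stream))      # [b'ABCD', b'EFGH']
corrupted = bytes([7]) + stream[1:]       # a single error in the first count byte
print(deframe_by_byte_count(corrupted))   # the receiver loses frame alignment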
 The second framing method gets around the problem of resynchronization after an error by having each frame start and end with special bytes. Often the same byte, called a flag byte, is used as both the starting and ending delimiter. This byte is shown in Fig. 3-4(a) as FLAG.
 Of course, the next question is: what happens if an escape byte occurs in the middle of the data? The answer is that it, too, is stuffed with an escape byte. At the receiver, the first escape byte is removed, leaving the data byte that follows it (which might be another escape byte or the flag byte). Some examples are shown in Fig. 3-4(b).
 The third method of delimiting the bit stream gets around a disadvantage of byte stuffing, which is that it is tied to the use of 8-bit bytes.
 If the user data contain the flag pattern, 01111110, this flag is transmitted as 011111010 but stored in the receiver's memory as 01111110. Figure 3-5 gives an example of bit stuffing.
 With bit stuffing, the boundary between two frames can be unambiguously recognized by the flag pattern.
 The last method of framing is to use a shortcut from the physical layer: the encoding of bits as signals often includes redundancy to help the receiver.

3.1.3 Error Control

 The usual way to ensure reliable delivery is to provide the sender with some feedback about what is happening at the other end of the line. Typically, the protocol calls for the receiver to send back special control frames bearing positive or negative acknowledgements about the incoming frames.
 If the sender receives a positive acknowledgement about a frame, it knows the frame has arrived safely. On the other hand, a negative acknowledgement means that something has gone wrong and the frame must be transmitted again.
 If the acknowledgement frame is lost, the sender will not know how to proceed. It should be clear that a protocol in which the sender transmits a frame and then waits for an acknowledgement, positive or negative, will hang forever if a frame is ever lost.
 This possibility is dealt with by introducing timers into the data link layer. When the sender transmits a frame, it generally also starts a timer. The timer is set to expire after an interval long enough for the frame to reach the destination, be processed there, and have the acknowledgement propagate back to the sender. Normally, the frame will be correctly received and the acknowledgement will get back before the timer runs out, in which case the timer will be canceled.

3.1.4 Flow Control

 Another important design issue that occurs in the data link layer (and higher layers as well) is what to do with a sender that systematically wants to transmit frames faster than the receiver can accept them. This situation can occur when the sender is running on a fast, powerful computer and the receiver is running on a slow, low-end machine.
 Clearly, something has to be done to prevent this situation. Two approaches are commonly used. In the first one, feedback-based flow control, the receiver sends back information to the sender giving it permission to send more data, or at least telling the sender how the receiver is doing.
 In the second one, rate-based flow control, the protocol has a built-in mechanism that limits the rate at which senders may transmit data, without using feedback from the receiver.
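 As an illustration of the feedback-based approach, the sketch below uses a hypothetical credit scheme: the receiver hands the sender permission for a couple of frames at a time, and the sender stops whenever its credit runs out. The class and method names are invented for this example rather than taken from any particular protocol.

from collections import deque

class Receiver:
    """Grants credit (permission to send) a few frames at a time."""
    def __init__(self, credit_per_grant=2):
        self.credit_per_grant = credit_per_grant
        self.delivered = []

    def accept(self, frame):
        self.delivered.append(frame)       # the slow receiver processes the frame here

    def grant_credit(self):
        return self.credit_per_grant       # feedback sent back to the sender

def send_all(frames, receiver):
    pending, credit = deque(frames), 0
    while pending:
        if credit == 0:                    # out of permission: wait for feedback
            credit = receiver.grant_credit()
        receiver.accept(pending.popleft())
        credit -= 1
    return receiver.delivered

print(send_all(["f0", "f1", "f2", "f3", "f4"], Receiver()))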
3.3 ELEMENTARY DATA LINK PROTOCOLS

 The physical layer, data link layer, and network layer are independent processes that communicate by passing messages back and forth. A common implementation is shown in Fig. 3-10.
 The physical layer process and some of the data link layer process run on dedicated hardware called a NIC (Network Interface Card).
 The rest of the link layer process and the network layer process run on the main CPU as part of the operating system, with the software for the link layer process often taking the form of a device driver.

3.3.1 A Utopian Simplex Protocol

 The protocol consists of two distinct procedures, a sender and a receiver. The sender runs in the data link layer of the source machine, and the receiver runs in the data link layer of the destination machine. No sequence numbers or acknowledgements are used here, so MAX_SEQ is not needed. The only event type possible is frame_arrival.
 The sender is in an infinite while loop just pumping data out onto the line as fast as it can. The body of the loop consists of three actions: go fetch a packet from the (always obliging) network layer, construct an outbound frame using the variable s, and send the frame on its way.
 Only the info field of the frame is used by this protocol, because the other fields have to do with error and flow control and there are no errors or flow control restrictions here. The receiver is equally simple. Initially, it waits for something to happen, the only possibility being the arrival of an undamaged frame.
 Eventually, the frame arrives and the procedure wait_for_event returns, with event set to frame_arrival (which is ignored anyway). The call to from_physical_layer removes the newly arrived frame from the hardware buffer and puts it in the variable r, where the receiver code can get at it.

FIGURE 3-12 A utopian simplex protocol
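 Protocol 1 is given in C in the textbook; the Python sketch below only mirrors the structure described above, with an in-memory list standing in for the physical layer and helper names that merely imitate the textbook ones.

# Utopian simplex protocol: no errors, no acknowledgements, no flow control.
channel = []                       # stands in for the physical layer

def from_network_layer():
    from_network_layer.n += 1
    return f"packet-{from_network_layer.n}"   # the always-obliging network layer
from_network_layer.n = 0

def sender1(count=3):
    for _ in range(count):         # the real sender loops forever
        s = {"info": from_network_layer()}    # build the outbound frame in variable s
        channel.append(s)          # to_physical_layer(s)

def receiver1():
    while channel:                 # wait_for_event(frame_arrival)
        r = channel.pop(0)         # from_physical_layer(r)
        print("to_network_layer:", r["info"])

sender1()
receiver1()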
3.3.2 A Simplex Stop-and-Wait Protocol for an Error-Free Channel

 Protocols in which the sender sends one frame and then waits for an acknowledgement before proceeding are called stop-and-wait. Figure 3-13 gives an example of a simplex stop-and-wait protocol.
 This protocol entails a strict alternation of flow: first the sender sends a frame, then the receiver sends a frame, then the sender sends another frame, then the receiver sends another one, and so on. A half-duplex physical channel would suffice here.
 As in protocol 1, the sender starts out by fetching a packet from the network layer, using it to construct a frame, and sending it on its way. But now, unlike in protocol 1, the sender must wait until an acknowledgement frame arrives before looping back and fetching the next packet from the network layer.
 The sending data link layer need not even inspect the incoming frame as there is only one possibility. The incoming frame is always an acknowledgement.
 The only difference between receiver1 and receiver2 is that after delivering a packet to the network layer, receiver2 sends an acknowledgement frame back to the sender before entering the wait loop again.

FIGURE 3-13 A simplex stop-and-wait protocol

3.3.3 A Simplex Stop-and-Wait Protocol for a Noisy Channel

 Consider the following scenario:
1. The network layer on A gives packet 1 to its data link layer. The packet is correctly received at B and passed to the network layer on B. B sends an acknowledgement frame back to A.
2. The acknowledgement frame gets lost completely. It just never arrives at all. Life would be a great deal simpler if the channel mangled and lost only data frames and not control frames, but sad to say, the channel is not very discriminating.
3. The data link layer on A eventually times out. Not having received an acknowledgement, it (incorrectly) assumes that its data frame was lost or damaged and sends the frame containing packet 1 again.
4. The duplicate frame also arrives intact at the data link layer on B and is unwittingly passed to the network layer there. If A is sending a file to B, part of the file will be duplicated (i.e., the copy of the file made by B will be incorrect and the error will not have been detected). In other words, the protocol will fail.
 Protocols in which the sender waits for a positive acknowledgement before advancing to the next data item are often called ARQ (Automatic Repeat reQuest) or PAR (Positive Acknowledgement with Retransmission). Like protocol 2, this one also transmits data only in one direction.
 Protocol 3 differs from its predecessors in that both sender and receiver have a variable whose value is remembered while the data link layer is in the wait state. The sender remembers the sequence number of the next frame to send in next_frame_to_send; the receiver remembers the sequence number of the next frame expected in frame_expected.
 Each protocol has a short initialization phase before entering the infinite loop. After transmitting a frame and starting the timer, the sender waits for something exciting to happen.
 Only three possibilities exist: an acknowledgement frame arrives undamaged, a damaged acknowledgement frame staggers in, or the timer expires. If a valid acknowledgement comes in, the sender fetches the next packet from its network layer and puts it in the buffer, overwriting the previous packet.
 It also advances the sequence number. If a damaged frame arrives or the timer expires, neither the buffer nor the sequence number is changed so that a duplicate can be sent. In all cases, the contents of the buffer (either the next packet or a duplicate) are then sent.

FIGURE 3-14 A positive acknowledgement with retransmission protocol
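 The behaviour described above can be sketched in a few lines of Python. This is not the textbook's protocol 3 code, just an illustration under simplifying assumptions: a 1-bit sequence number, a lossy in-memory channel, and a timeout modelled simply as "no acknowledgement came back this round".

import random
random.seed(1)

def unreliable_send(queue, frame, loss_prob=0.3):
    """Deliver a frame unless the 'channel' loses it."""
    if random.random() > loss_prob:
        queue.append(frame)

def par_transfer(packets):
    delivered = []
    frame_expected = 0                       # receiver state
    next_frame_to_send = 0                   # sender state
    for packet in packets:
        while True:                          # retransmit until acknowledged
            data_q, ack_q = [], []
            unreliable_send(data_q, (next_frame_to_send, packet))
            if data_q:                       # data frame arrived undamaged
                seq, info = data_q.pop()
                if seq == frame_expected:    # not a duplicate: pass it up
                    delivered.append(info)
                    frame_expected = 1 - frame_expected
                unreliable_send(ack_q, seq)  # the acknowledgement may also be lost
            if ack_q and ack_q.pop() == next_frame_to_send:
                next_frame_to_send = 1 - next_frame_to_send
                break                        # valid ACK: fetch the next packet
            # otherwise the timer "expires" and the same frame is sent again
    return delivered

print(par_transfer(["p0", "p1", "p2"]))      # ['p0', 'p1', 'p2'] despite losses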
4.1 THE CHANNEL ALLOCATION PROBLEM

 The central theme of this chapter is how to allocate a single broadcast channel among competing users. The channel might be a portion of the wireless spectrum in a geographic region, or a single wire or optical fiber to which multiple nodes are connected.
 It does not matter. In both cases, the channel connects each user to all other users, and any user who makes full use of the channel interferes with other users who also wish to use the channel.

4.1.1 Static Channel Allocation

 The traditional way of allocating a single channel, such as a telephone trunk, among multiple competing users is to chop up its capacity by using one of the multiplexing schemes. If there are N users, the bandwidth is divided into N equal-sized portions, with each user being assigned one portion.
 Since each user has a private frequency band, there is now no interference among users. When there is only a small and constant number of users, each of which has a steady stream or a heavy load of traffic, this division is a simple and efficient allocation mechanism.
 A wireless example is FM radio stations. Each station gets a portion of the FM band and uses it most of the time to broadcast its signal. The poor performance of static FDM can easily be seen with a simple queueing theory calculation. Let us start by finding the mean time delay, T, to send a frame onto a channel of capacity C bps.
 We assume that the frames arrive randomly with an average arrival rate of λ frames/sec, and that the frames vary in length with an average length of 1/μ bits. With these parameters, the service rate of the channel is μC frames/sec.
 A standard queueing theory result is T = 1/(μC − λ).
 Precisely the same arguments that apply to FDM also apply to other ways of statically dividing the channel. If we were to use time division multiplexing (TDM) and allocate each user every Nth time slot, a slot that its user does not need simply goes unused.
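 The result can be checked numerically. In the sketch below all the numbers are made up for the example: a 100-Mbps channel carrying 10,000-bit frames at 5,000 frames/sec, first treated as one shared channel and then split statically into N = 10 subchannels, each carrying 1/N of the traffic. The FDM delay comes out N times larger.

def mean_delay(mu, C, lam):
    """T = 1 / (mu*C - lam): mean delay to send a frame onto the channel."""
    return 1.0 / (mu * C - lam)

C   = 100e6        # channel capacity, bits/sec
mu  = 1 / 10000    # 1/mu = 10,000 bits mean frame length
lam = 5000         # arrival rate, frames/sec
N   = 10           # number of static FDM subchannels

T_shared = mean_delay(mu, C, lam)          # one big shared channel
T_fdm = mean_delay(mu, C / N, lam / N)     # each user gets capacity C/N and load lam/N
print(f"shared: {T_shared*1e6:.0f} us, FDM: {T_fdm*1e6:.0f} us, "
      f"ratio: {T_fdm/T_shared:.0f}")      # the FDM delay is N times worse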
4.1.2 Assumptions for Dynamic Channel Allocation

1. Independent Traffic: The model consists of N independent stations (e.g., computers, telephones), each with a program or user that generates frames for transmission. The expected number of frames generated in an interval of length Δt is λΔt, where λ is a constant (the arrival rate of new frames). Once a frame has been generated, the station is blocked and does nothing until the frame has been successfully transmitted.
2. Single Channel: A single channel is available for all communication. All stations can transmit on it and all can receive from it. The stations are assumed to be equally capable, though protocols may assign them different roles (e.g., priorities).
3. Observable Collisions: If two frames are transmitted simultaneously, they overlap in time and the resulting signal is garbled. This event is called a collision. All stations can detect that a collision has occurred. A collided frame must be transmitted again later. No errors other than those generated by collisions occur.
4. Continuous or Slotted Time: Time may be assumed continuous, in which case frame transmission can begin at any instant. Alternatively, time may be slotted or divided into discrete intervals (called slots). Frame transmissions must then begin at the start of a slot. A slot may contain 0, 1, or more frames, corresponding to an idle slot, a successful transmission, or a collision, respectively.
5. Carrier Sense or No Carrier Sense: With the carrier sense assumption, stations can tell if the channel is in use before trying to use it. No station will attempt to use the channel while it is sensed as busy. If there is no carrier sense, stations cannot sense the channel before trying to use it. They just go ahead and transmit. Only later can they determine whether the transmission was successful.

4.2.1 ALOHA

 In this case, "pristine" can be interpreted as "not having a working telephone system." This did not make life more pleasant for researcher Norman Abramson and his colleagues at the University of Hawaii who were trying to connect users on remote islands to the main computer in Honolulu. Stringing their own cables under the Pacific Ocean was not in the cards, so they looked for a different solution.
 The one they found used short-range radios, with each user terminal sharing the same upstream frequency to send frames to the central computer.
 It included a simple and elegant method to solve the channel allocation problem. Their work has been extended by many researchers since then (Schwartz and Abramson, 2009). Although Abramson's work, called the ALOHA system, used ground-based radio broadcasting, the basic idea is applicable to any system in which uncoordinated users are competing for the use of a single shared channel.

Pure ALOHA

 The basic idea of an ALOHA system is simple: let users transmit whenever they have data to be sent. There will be collisions, of course, and the colliding frames will be damaged. Senders need some way to find out if this is the case.
 In the ALOHA system, after each station has sent its frame to the central computer, this computer rebroadcasts the frame to all of the stations. A sending station can thus listen for the broadcast from the hub to see if its frame has gotten through. In other systems, such as wired LANs, the sender might be able to listen for collisions while transmitting.
 If the frame was destroyed, the sender just waits a random amount of time and sends it again. The waiting time must be random or the same frames will collide over and over, in lockstep. Systems in which multiple users share a common channel in a way that can lead to conflicts are known as contention systems.
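 The "wait a random time and try again" rule is the heart of pure ALOHA. The toy simulation below is only an illustration: all parameters are invented, and transmissions that overlap the earliest pending one are grouped into a single collision, which is a simplification of the real model. It lets a few stations transmit whenever they are ready and back off randomly after every collision.

import random
random.seed(7)

FRAME_TIME = 1.0          # time to transmit one frame
SIM_TIME   = 2000.0
stations = [{"next_try": random.uniform(0, 50)} for _ in range(5)]

successes, collisions = 0, 0
while True:
    t = min(s["next_try"] for s in stations)       # next transmission attempt
    if t >= SIM_TIME:
        break
    # no carrier sensing: anyone whose transmission overlaps this one collides with it
    overlapping = [s for s in stations if t <= s["next_try"] < t + FRAME_TIME]
    if len(overlapping) == 1:
        successes += 1
    else:
        collisions += 1
    for s in overlapping:                          # wait a random time, then retry
        s["next_try"] = t + FRAME_TIME + random.uniform(0, 20)

print(f"successful frames: {successes}, collisions: {collisions}")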
Slotted ALOHA

 Soon after ALOHA came onto the scene, Roberts (1972) published a method for doubling the capacity of an ALOHA system. His proposal was to divide time into discrete intervals called slots, each interval corresponding to one frame.
 In Roberts' method, which has come to be known as slotted ALOHA, in contrast to Abramson's pure ALOHA, a station is not permitted to send whenever the user types a line. Instead, it is required to wait for the beginning of the next slot. Thus, the continuous-time ALOHA is turned into a discrete-time one. This halves the vulnerable period.
 The probability that it will avoid a collision is e^−G, which is the probability that all the other stations are silent in that slot. The probability of a collision is then just 1 − e^−G. The probability of a transmission requiring exactly k attempts (i.e., k − 1 collisions followed by one success) is Pk = e^−G (1 − e^−G)^(k−1).
 The expected number of transmissions, E, per line typed at a terminal is then E = Σ k·Pk (summed over k ≥ 1) = e^G.
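 These formulas are easy to evaluate. The short sketch below tabulates, for a few illustrative values of the offered load G, the probability of getting through on a given attempt, the probability of needing exactly three attempts, and the expected number of transmissions E = e^G.

import math

def p_no_collision(G):
    return math.exp(-G)                     # all other stations silent in the slot

def p_exactly_k_attempts(G, k):
    # k - 1 collisions followed by one success
    return math.exp(-G) * (1 - math.exp(-G)) ** (k - 1)

def expected_transmissions(G):
    return math.exp(G)                      # E = sum over k of k * Pk = e^G

for G in (0.5, 1.0, 2.0):
    print(f"G={G}: P(success)={p_no_collision(G):.3f}, "
          f"P(3 attempts)={p_exactly_k_attempts(G, 3):.3f}, "
          f"E={expected_transmissions(G):.2f}")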

4.2.3 COLLISION-FREE PROTOCOLS

 In the protocols to be described, we assume that there are exactly N stations, each programmed with a unique address from 0 to N − 1. It does not matter that some stations may be inactive part of the time. We also assume that propagation delay is negligible.
 Most of these protocols are not currently used in major systems, but in a rapidly changing field, having some protocols with excellent properties available for future systems is often a good thing.

A Bit-Map Protocol

 In our first collision-free protocol, the basic bit-map method, each contention period consists of exactly N slots. If station 0 has a frame to send, it transmits a 1 bit during slot 0.
 No other station is allowed to transmit during this slot. Regardless of what station 0 does, station 1 gets the opportunity to transmit a 1 bit during slot 1, but only if it has a frame queued. Protocols like this in which the desire to transmit is broadcast before the actual transmission are called reservation protocols because they reserve channel ownership in advance and prevent collisions.
 Let us briefly analyze the performance of this protocol. For convenience, we will measure time in units of the contention bit slot, with data frames consisting of d time units.
 The channel efficiency at low load is easy to compute. The overhead per frame is N bits and the amount of data is d bits, for an efficiency of d/(d + N).

Token Passing

 The essence of the bit-map protocol is that it lets every station transmit a frame in turn in a predefined order. Another way to accomplish the same thing is to pass a small message called a token from one station to the next in the same predefined order.
 The token represents permission to send. If a station has a frame queued for transmission when it receives the token, it can send that frame before it passes the token to the next station. If it has no queued frame, it simply passes the token.
 In a token ring protocol, the topology of the network is used to define the order in which stations send. The stations are connected one to the next in a single ring.
 The performance of token passing is similar to that of the bit-map protocol, though the contention slots and frames of one cycle are now intermingled.
 The channel connecting the stations might instead be a single long bus. Each station then uses the bus to send the token to the next station in the predefined sequence. Possession of the token allows a station to use the bus to send one frame, as before. This protocol is called token bus.
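 A minimal sketch of the token idea described above (the station count, queue contents, and function name are made up for the illustration): the token circulates around the ring, and a station holding it may send at most one queued frame before passing it on.

from collections import deque

def token_ring(queues, rounds=3):
    """queues[i] holds the frames station i has waiting to send."""
    sent_order = []
    for _ in range(rounds):                 # the token circulates the ring
        for station, q in enumerate(queues):
            if q:                           # holding the token permits one frame
                sent_order.append((station, q.popleft()))
            # otherwise the station just passes the token on
    return sent_order

queues = [deque(["A1", "A2"]), deque(), deque(["C1"]), deque(["D1", "D2"])]
print(token_ring(queues))
# [(0, 'A1'), (2, 'C1'), (3, 'D1'), (0, 'A2'), (3, 'D2')]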
Binary Countdown

 A problem with the basic bit-map protocol, and by extension token passing, is that the overhead is 1 bit per station, so it does not scale well to networks with thousands of stations. We can do better than that by using binary station addresses with a channel that combines transmissions.
 All addresses are assumed to be the same length. The bits in each address position from different stations are BOOLEAN ORed together by the channel when they are sent at the same time. We will call this protocol binary countdown.
 It implicitly assumes that the transmission delays are negligible so that all stations see asserted bits essentially instantaneously.
 To avoid conflicts, an arbitration rule must be applied: as soon as a station sees that a high-order bit position that is 0 in its address has been overwritten with a 1, it gives up.
 The protocol is illustrated in Fig. 4-8. It has the property that higher-numbered stations have a higher priority than lower-numbered stations, which may be either good or bad, depending on the context.
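 The arbitration rule lends itself to a short simulation. In the sketch below (the station addresses are chosen arbitrarily, in the spirit of Fig. 4-8), the channel ORs together the current address bit of every station still competing, and a station drops out as soon as a 0 bit of its own address is overwritten by a 1.

def binary_countdown(addresses, width=4):
    """Return the winning address; the channel ORs bits, so the highest address wins."""
    competing = set(addresses)
    for bit in range(width - 1, -1, -1):           # high-order bit first
        channel = 0
        for a in competing:
            channel |= (a >> bit) & 1              # the wire ORs simultaneous bits
        if channel == 1:
            # stations that sent a 0 here see their bit overwritten and give up
            competing = {a for a in competing if (a >> bit) & 1}
    return competing.pop()

print(binary_countdown([0b0010, 0b0100, 0b1001, 0b1010]))   # 10 (binary 1010) wins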