
Nodes and Links

❑ Although communication at the application, transport, and network layers is end-to-end, communication at the data-link layer is node-to-node.
❑ As we have learned in the previous chapters, a data unit from one point in the Internet needs to pass through many networks (LANs and WANs) to reach another point.
❑ These LANs and WANs are connected by routers. It is customary to refer to the two end hosts and the routers as nodes and the networks in between as links.
Two functionality-oriented sublayers in the DATA LINK LAYER
❑ Logical Link Control (LLC) – responsible for error control and flow control.
❑ Medium Access Control (MAC) – responsible for framing, MAC addressing, and multiple access control.


THE MEDIUM ACCESS CONTROL SUBLAYER
❑ In the OSI protocol stack, channel allocation is addressed in the Medium Access Control (MAC) sublayer. This is a sublayer of the Data Link Layer, considered to be below the Logical Link Control (LLC) sublayer.

❑ The MAC layer provides an unreliable connectionless service; if required, the LLC layer can convert this into a reliable service by using an ARQ (Automatic Repeat reQuest) protocol.

❑ The protocols used to determine who goes next on a multiaccess channel belong to a sublayer of the data link layer called the MAC (Medium Access Control) sublayer.

❑ The MAC sublayer is especially important in LANs, particularly wireless ones, because wireless is naturally a broadcast channel.
THE CHANNEL ALLOCATION PROBLEM:
❑ Links are classified as either broadcast or point-to-point. With a broadcast link, more than two users share the same transmission medium. At times an entire network, in particular a LAN, will consist of a single broadcast link connecting a group of users. In this case we refer to the network itself as a broadcast network. A broadcast link is also called a multiaccess channel. In such a channel:
1. A transmitter can be heard by multiple receivers.
2. A receiver can hear multiple transmitters.
❑ In any broadcast network, the key issue is how to determine who gets to use the channel when there is competition for it. To make this point, consider a conference call in which six people, on six different telephones, are all connected so that each one can hear and talk to all the others. It is very likely that when one of them stops speaking, two or more will start talking at once, leading to chaos.

❑ In a face-to-face meeting, chaos is avoided by external means. For example, at a meeting, people raise their hands to request permission to speak. When only a single channel is available, it is much harder to determine who should go next.
How do we share a common medium among many users, that is, how do we allocate a single broadcast channel among competing users? The channel might be a portion of the wireless spectrum in a geographic region, or a single wire or optical fiber to which multiple nodes are connected.

In both cases, the channel connects each user to all other users, and any user who makes full use of the channel interferes with other users who also wish to use the channel.
❑ When two or more nodes transmit at the same time, their frames will collide and the link bandwidth is wasted during the collision.
CHANNEL ALLOCATION APPROACHES:
❑ How to allocate a multiaccess channel among
competing users. In other words, we need a set of
rules (i.e. a protocol) to allow each user to
communicate and avoid interference. There are a
variety of solutions to this problem that are used in
practice. These solutions can be classified as either
static or dynamic.
❑ With a static approach, the channel's capacity is
essentially divided into fixed portions; each user is
then allocated a portion for all time. If the user has no
traffic to use in its portion, then it goes unused.
❑ With a dynamic approach, the allocation of the
channel changes based on the traffic generated by the
users. Generally, a static allocation performs better
when the traffic is predictable. A dynamic channel
allocation tries to get better utilization and lower delay
on a channel when the traffic is unpredictable.
STATIC CHANNEL ALLOCATION TECHNIQUES:

❑ Time Division Multiple Access (TDMA) – With TDMA the time axis is divided into time slots of a fixed length. Each user is allocated a fixed set of time slots in which it can transmit. TDMA requires that users be synchronized to a common clock. Typically, extra overhead bits are required for synchronization.

❑ Frequency Division Multiple Access (FDMA) – With FDMA the available frequency bandwidth is divided into disjoint frequency bands. A fixed band is allocated to each user. FDMA requires a guard band between user frequency bands to avoid cross-talk.

❑ Another static allocation technique is Code Division Multiple Access (CDMA); this technique is used in many wireless networks.
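The fixed-split idea behind static TDMA/FDMA allocation can be sketched in a few lines of Python (the `static_share` helper name is ours, not from the text):

```python
def static_share(total_capacity_bps, n_users):
    """Static allocation: the channel capacity is split into
    n equal, fixed portions, one per user, for all time."""
    return total_capacity_bps / n_users

# A 10 Mbps channel statically divided among 10 users gives each
# user 1 Mbps, even in moments when only one user has traffic,
# which is why static allocation wastes capacity under bursty load.
per_user_bps = static_share(10_000_000, 10)
```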
DYNAMIC CHANNEL ALLOCATION TECHNIQUES :
Many different dynamic allocation strategies have been
developed. They can be broadly classified as:
❑ Contention resolution approaches – users transmit a packet when they have data to send; if multiple users transmit at the same time, a collision occurs and the packets must be retransmitted according to some rule.
❑ Perfectly scheduled approaches – users transmit contention-free according to a schedule that is dynamically formed based on which users have data to send, e.g. polling, reservations. Various combinations of these approaches also exist. We will look at several specific examples in the next few lectures, beginning with a basic contention resolution approach.
❑ As we look at different approaches keep in mind the
following two performance criteria:
1. The delay at low load.
2. The throughput (channel efficiency) at high load.
Dynamic Channel Allocation in LANs and MANs
1. Station Model.
• Independent stations generate frames.
• Once a frame has been generated, the station is blocked until the frame has been transmitted.
2.Single Channel Assumption. A single channel for all communication
(send and receive), and all stations are equivalent.
3.Collision Assumption. If the transmission of two frames overlap in
time, a collision occurs. All stations can detect collisions. A collided
frame must be retransmitted.
4.Time assumption.
(a) Continuous Time.
(b) Slotted Time.
5.Sense assumption.
(a) Carrier sense. Stations can tell if the channel is in use before
trying to use it.
(b) No carrier sense. Stations cannot sense the channel before
trying to use it.
RANDOM ACCESS PROTOCOLS
In random access or contention methods, no station is
superior to another station and none is assigned the
control over another. No station permits, or does not
permit, another station to send. At each instance, a
station that has data to send uses a procedure defined
by the protocol to make a decision on whether or not
to send.

❑ ALOHA ( PURE AND SLOTTED ALOHA )


❑ Carrier Sense Multiple Access (CSMA)
❑ Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
❑ Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)
ALOHA Protocol
• ALOHA was developed in the 1970s at the University of Hawaii for wireless LAN and can be used for any shared medium.
• It has two variants:
  • PURE ALOHA
  • SLOTTED ALOHA
• The basic idea is simple:
  • Let users transmit whenever they have data to be sent.
  • If two or more users send their frames at the same time, a collision occurs and the frames are destroyed.
Pure ALOHA
How does a sender know that there was a collision?
- Due to the feedback property of broadcasting, a sender can always find out whether its frame was destroyed by listening to the channel, the same way other users do. With a LAN, the feedback is immediate; with a satellite, there is a delay of 270 msec before the sender knows if the transmission was successful.
- If listening while transmitting is not possible for some reason, acknowledgements are needed.
Pure ALOHA
❑ In pure ALOHA, frames are transmitted at completely arbitrary times.
❑ The throughput of ALOHA systems is maximized by having a uniform frame size rather than by allowing variable-length frames.
Critical (Vulnerable) time for the pure ALOHA protocol
Under what conditions will the frame from station A arrive undamaged?

If the frame transmission time is Tfr seconds, then the vulnerable time is 2 × Tfr seconds.
This means no station should start sending during the Tfr seconds before this station starts its transmission, and no station should start sending during the Tfr-second period in which this station is sending.
Maximum propagation delay (Tp):
• The time it takes for a bit of a frame to travel between the two most widely separated stations.
[Figure: station B, the farthest station, receives the first bit of the frame at time t = Tp]
Collision mechanism in ALOHA
[Figure: nodes 1, 2 and 3 each transmit packets; transmissions that overlap in time collide, and the collided packets are retransmitted after each node waits a random time]
Throughput of Pure ALOHA
Efficiency or throughput (S) of ALOHA is defined as the average number of frames successfully transmitted per frame transmission time.
• The probability that n frames are generated during two frame transmission times is given by the Poisson distribution:
  P(n) = ((2G)^n · e^(−2G)) / n!
  where G is the average number of frames generated by the system (all stations) during one frame transmission time.
• The probability P(0) that a frame is successfully received without collision is calculated by letting n = 0 in the above equation. We get P(0) = e^(−2G).
• We can calculate the throughput S with a traffic load G as follows: S = G × P(0) = G × e^(−2G).

The throughput (S) for pure ALOHA is
  S = G × e^(−2G)
The maximum throughput occurs at G = 1/2: S_max = 1/(2e) ≈ 0.184.
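The throughput formula above is easy to evaluate numerically; this minimal Python sketch confirms the peak at G = 1/2:

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G): successful frames per frame transmission time,
    given G attempted frames per frame transmission time."""
    return G * math.exp(-2 * G)

# The maximum is at G = 1/2: S_max = 1/(2e), about 0.184 (18.4%).
s_max = pure_aloha_throughput(0.5)
```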
Example

The stations on a wireless ALOHA network are a maximum of 600 km apart. If we assume that signals propagate at 3 × 10^8 m/s, we find
Tp = (600 × 10^3) / (3 × 10^8) = 2 ms.
Now we can find the value of the backoff time TB for different values of K (the number of collisions).

a. For K = 1, the range is {0, 1}. The station needs to generate a random number with a value of 0 or 1. This means that TB is either 0 ms (0 × 2) or 2 ms (1 × 2), based on the outcome of the random variable.
Example (continued)

b. For K = 2, the range is {0, 1, 2, 3}. This means that TB can be 0, 2, 4, or 6 ms, based on the outcome of the random variable.

c. For K = 3, the range is {0, 1, 2, 3, 4, 5, 6, 7}. This means that TB can be 0, 2, 4, . . . , 14 ms, based on the outcome of the random variable.

d. We need to mention that if K > 10, it is normally set to 10.
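The backoff choices in parts (a)–(d) can be generated mechanically; a small sketch (helper names `backoff_choices_ms` and `pick_backoff_ms` are ours):

```python
import random

def backoff_choices_ms(K, Tp_ms=2.0):
    """All possible backoff times TB = R * Tp for R in {0, ..., 2^K - 1};
    K is capped at 10, as noted in part (d)."""
    K = min(K, 10)
    return [r * Tp_ms for r in range(2 ** K)]

def pick_backoff_ms(K, Tp_ms=2.0):
    """Draw one backoff time uniformly at random from the allowed choices."""
    return random.choice(backoff_choices_ms(K, Tp_ms))
```

For example, `backoff_choices_ms(3)` reproduces the list 0, 2, 4, …, 14 ms from part (c).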
Example

A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is the requirement to make this frame collision-free?

Solution
The average frame transmission time Tfr is 200 bits / 200 kbps, or 1 ms.
The vulnerable time is 2 × 1 ms = 2 ms.
This means no station should start sending during the 1 ms before this station starts its transmission, and no station should start sending during the 1-ms period in which this station is sending.
Example
A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is the throughput if the system (all stations together) produces
a. 1000 frames per second b. 500 frames per second
c. 250 frames per second?
Solution
The frame transmission time Tfr = 200 bits / 200 kbps, or 1 ms.
a. If the system creates 1000 frames per second, this is 1 frame per millisecond. The load G is 1. In this case
S = G × e^(−2G), or S = 0.135 (13.5 percent). This means that the throughput is 1000 × 0.135 = 135 frames. Only 135 frames out of 1000 will probably survive.
Example (continued)
b. If the system creates 500 frames per second, this is 1/2 frame per millisecond. The load G is 1/2. In this case S = G × e^(−2G), or S = 0.184 (18.4 percent). This means that the throughput is 500 × 0.184 = 92 and that only 92 frames out of 500 will probably survive.

c. If the system creates 250 frames per second, this is 1/4 frame per millisecond. The load G is 1/4. In this case S = G × e^(−2G), or S = 0.152 (15.2 percent). This means that the throughput is 250 × 0.152 = 38. Only 38 frames out of 250 will probably survive.
Pure ALOHA Throughput
[Plot: throughput S versus average number of frames per frame time G; S rises to a peak of about 0.184 near G = 0.5 and falls off for larger G]
Slotted ALOHA protocol
❑ The method: divide time into discrete intervals, each interval corresponding to one frame.

❑ A special station emits a pip at the start of each interval, like a clock.

❑ A user is not permitted to send as soon as a frame is ready (for example, as soon as a carriage return is typed). Instead, it is required to wait for the beginning of the next slot.
Slotted ALOHA protocol (contd.)
❑ Enhancement of pure ALOHA: stations can only start to transmit a frame so that it arrives at the destination at the beginning of defined time slots of duration T.
❑ The “Critical / Vulnerable” period for this system is only the T seconds prior to the start of the station’s frame, and thus S = G·e^(−G).
❑ For this system, optimum throughput occurs at G = 1.
Slotted ALOHA
• Time is divided into slots equal to one frame transmission time (Tfr).
• A station can transmit at the beginning of a slot only.
• If a station misses the beginning of a slot, it has to wait until the beginning of the next time slot.
• A central clock or station informs all stations about the start of each slot.
Vulnerable time for the slotted ALOHA protocol
Slotted ALOHA
• The vulnerable period is now reduced to half (Tfr instead of 2·Tfr).
• The probability that no other packet is generated during the vulnerable period is:
  P(0) = e^(−G)
• Hence, S = G·e^(−G)
• Here S_max = 1/e ≈ 0.368 at G = 1.
• So the maximum channel utilization is 36.8%.
Slotted ALOHA Throughput
[Plot: throughput S versus average number of frames per frame time G; S rises to a peak of about 0.368 at G = 1 and falls off for larger G]
Performance Comparison of ALOHA
Slotted ALOHA can double the throughput of pure ALOHA. Slotted ALOHA peaks at G = 1, with S = 1/e ≈ 0.368, twice that of pure ALOHA. The main reason for the poor channel utilization of ALOHA (pure or slotted) is that all stations can transmit at will, without paying attention to what the other stations are doing.

Throughput versus offered traffic for ALOHA systems


Numericals based on ALOHA
Question
A slotted ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is the throughput if the system (all stations together) produces
a. 1000 frames per second b. 500 frames per second
c. 250 frames per second?
Solution
The frame transmission time is 200 bits / 200 kbps, or 1 ms.
a. If the system creates 1000 frames per second, this is 1 frame per millisecond. The load is 1. In this case
S = G × e^(−G), or S = 0.368 (36.8 percent). This means that the throughput is 1000 × 0.368 = 368 frames. Only 368 frames out of 1000 will probably survive.
(continued)

b. If the system creates 500 frames per second, this is 1/2 frame per millisecond. The load is 1/2. In this case S = G × e^(−G), or S = 0.303 (30.3 percent). This means that the throughput is 500 × 0.303 = 151. Only 151 frames out of 500 will probably survive.

c. If the system creates 250 frames per second, this is 1/4 frame per millisecond. The load is 1/4. In this case S = G × e^(−G), or S = 0.195 (19.5 percent). This means that the throughput is 250 × 0.195 = 49. Only 49 frames out of 250 will probably survive.
Consider the delay of pure ALOHA versus slotted ALOHA at low load. Which one is less? Explain your answer.
Solution
At low load, no collisions are likely. In pure ALOHA a station can transmit immediately, but in slotted ALOHA it still needs to wait for the beginning of the next slot, so the delay of slotted ALOHA is higher.

Ten thousand airline reservation stations are competing for the use of a single slotted ALOHA channel. The average station makes 18 requests/hour. A slot is 125 μs. What is the approximate total channel load?
Solution
Each terminal makes one request every 200 seconds, for a total load of 50 requests/second. With 125-μs slots there are 8000 slots/second. The channel load is the attempt rate per slot (not just original transmissions but the retransmissions as well). Thus G = 50/8000 = 1/160.
Measurements of a slotted ALOHA channel with an infinite number of users show that 10 percent of the slots are idle.
(a) What is the channel load G?
(b) What is the throughput?
(c) Is the channel underloaded or overloaded?
Solution
(a) A slot is idle when 0 frames are generated in that frame time, so
P(idle) = e^(−G) = 0.1;
−G = ln(0.1);
G = 2.303.
(b) S = G·e^(−G) = 2.303 × 0.1 = 0.2303.
(c) Slotted ALOHA obtains its optimal throughput at G = 1. With G > 1, too many frames are generated per slot: an overloaded situation. Here G = 2.303 and S = 0.2303 < S_max = 0.368, with G > S. Therefore the channel is overloaded.
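The inference pattern used in this problem (idle fraction → load → throughput) can be sketched in Python (helper names are ours):

```python
import math

def load_from_idle_fraction(p_idle):
    """For slotted ALOHA, P(idle slot) = e^(-G), so G = -ln(P(idle))."""
    return -math.log(p_idle)

def slotted_aloha_throughput(G):
    """S = G * e^(-G)."""
    return G * math.exp(-G)

# 10% idle slots:
G = load_from_idle_fraction(0.10)   # ≈ 2.303 -> overloaded, since G > 1
S = slotted_aloha_throughput(G)     # ≈ 0.2303
```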
Suppose measurements made on a slotted ALOHA channel for a very large number of users show that on average 20% of the slots are idle.
a) What is the channel load G?
b) What is the throughput?
c) Is the channel underloaded or overloaded?
Solution
(a) A slot is idle when 0 frames are generated in that frame time, so
P(idle) = e^(−G) = 0.2;
−G = ln(0.2);
G = 1.61.
(b) S = G·e^(−G) = 1.61 × 0.2 = 0.32.
(c) Slotted ALOHA is at its optimum (maximum utilization / throughput) at G = 1.0, and here G > 1. Therefore the channel is overloaded.
CSMA (Carrier-Sense Multiple Access)
The poor efficiency of the ALOHA scheme can be attributed to the fact that a node starts transmission without paying any attention to what others are doing.
In situations where the propagation delay of the signal between two nodes is small compared to the transmission time of a packet, all other nodes will know very quickly when a node starts transmission. This observation is the basis of the carrier-sense multiple-access (CSMA) protocol.
In this scheme, a node having data to transmit first listens to the medium to check whether another transmission is in progress or not. The node starts sending only when the channel is free, that is, when there is no carrier. That is why the scheme is also known as listen-before-talk.
Vulnerable time in CSMA
Behavior of three persistence methods
There are three variations of this basic scheme, as outlined below.

(i) 1-persistent CSMA: In this case, a node having data to send starts sending if the channel is sensed free. If the medium is busy, the node continues to monitor until the channel is idle. Then it starts sending data.

(ii) Non-persistent CSMA: If the channel is sensed free, the node starts sending the packet. Otherwise, the node waits for a random amount of time and then monitors the channel again.

(iii) p-persistent CSMA: If the channel is sensed free, the node sends with probability p and defers to the next slot with probability 1 − p. If the channel is busy, the node continues to monitor until the channel becomes free.
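The three decision rules above can be sketched as small Python functions (a toy model: the string return values and function names are ours, chosen only to make the rules explicit):

```python
import random

def one_persistent(channel_idle):
    """Transmit the moment the channel is sensed idle; else keep sensing."""
    return "transmit" if channel_idle else "keep sensing"

def non_persistent(channel_idle):
    """Transmit if idle; if busy, back off for a random time, then sense again."""
    return "transmit" if channel_idle else "wait random time, sense again"

def p_persistent(channel_idle, p):
    """If idle, transmit with probability p, else defer to the next slot;
    if busy, keep sensing until the channel becomes free."""
    if not channel_idle:
        return "keep sensing"
    return "transmit" if random.random() < p else "defer to next slot"
```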
CSMA with Collision Detection (CSMA/CD)
Persistent and non-persistent CSMA protocols improve on ALOHA by ensuring that no station begins to transmit when it senses the channel busy.

The CSMA/CD (Carrier Sense Multiple Access with Collision Detection) protocol improves further by aborting transmissions as soon as a collision is detected.

The conceptual model:
• To send data, a station first listens to the channel to see if anyone else is transmitting.
• If so, the station waits until the end of the transmission (1-persistent) or waits a random period of time and repeats the algorithm (non-persistent). Otherwise, it transmits a frame.
• If a collision occurs, the station detects the collision, aborts its transmission, waits a random amount of time, and starts all over again.

CSMA/CD can be in one of three states: contention, transmission, or idle.

• When two stations both begin transmitting at exactly the same time, how long will it take them to realize that there has been a collision?
The minimum time to detect the collision is the time it takes the signal to propagate from one station to the other.
• How long before the transmitting station can be sure it has seized the network?
• It is worth noting that no MAC-sublayer protocol guarantees reliable delivery. Even in the absence of collisions, the receiver may not have copied the frame correctly due to various reasons (e.g., lack of buffer space or a missed interrupt).
Collision and abortion in CSMA/CD
Question

A network using CSMA/CD has a bandwidth of 10 Mbps. If the maximum propagation time (including the delays in the devices and ignoring the time needed to send a jamming signal, as we see later) is 25.6 μs, what is the minimum size of the frame?
Solution
The minimum frame transmission time is Tfr = 2 × Tp = 51.2 μs.
This means that, in the worst case, a station needs to transmit for a period of 51.2 μs to detect the collision.
The minimum size of the frame is 10 Mbps × 51.2 μs = 512 bits or 64 bytes. This is actually the minimum frame size for Standard Ethernet.
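The calculation above generalizes directly; a minimal sketch (the helper name `min_frame_bits` is ours):

```python
def min_frame_bits(bandwidth_bps, max_prop_time_s):
    """A station must still be transmitting when the collision signal
    returns, i.e. for 2 * Tp, so the frame needs at least
    bandwidth * 2 * Tp bits."""
    return bandwidth_bps * 2 * max_prop_time_s

# 10 Mbps, Tp = 25.6 us -> 512 bits = 64 bytes, the Standard Ethernet minimum.
bits = min_frame_bits(10_000_000, 25.6e-6)
```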
Binary exponential backoff
• Random waiting period, but consecutive collisions increase the mean waiting time.
• The mean waiting time doubles on each of the first 10 retransmission attempts.
• After the first collision, a station waits 0 or 1 slot times (selected at random).
• If it collides again (a second time), it waits 0, 1, 2 or 3 slots (at random).
• If it collides for the i-th time, it waits 0, 1, …, or 2^i − 1 slots (at random).
• The randomization interval is frozen at 0 … 1023 after the 10th collision.
• A station tries a total of 16 times and then gives up if it cannot transmit.
• Low delay with a small number of waiting stations.
• Large delay with a large number of waiting stations.

One slot time = maximum round-trip delay ≈ 50 μs in 10-Mbps Ethernet (see next slide for details of this value).
Binary exponential back off algorithm used in CSMA/CD
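The backoff rule can be sketched as a short Python function (the function name is ours; the constants 10 and 16 come from the rules above):

```python
import random

def backoff_slots(collision_count):
    """Ethernet binary exponential backoff: after the i-th collision,
    wait a random number of slots in {0, ..., 2^k - 1} where
    k = min(i, 10); after 16 failed attempts the frame is discarded."""
    if collision_count > 16:
        raise RuntimeError("excessive collisions: frame discarded")
    k = min(collision_count, 10)
    return random.randrange(2 ** k)
```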
The CSMA/CA protocol is used in wireless networks because they cannot detect collisions, so the only solution is collision avoidance.
• CSMA/CA avoids collisions using three basic techniques:
(i) Interframe space
(ii) Contention window
(iii) Acknowledgements

1. Interframe Space (IFS)
• Whenever the channel is found idle, the station does not transmit immediately. It waits for a period of time called the interframe space (IFS).
• When the channel is sensed to be idle, it may be possible that some distant station has already started transmitting and the signal of that distant station has not yet reached other stations.
• Therefore the purpose of the IFS time is to allow this transmitted signal to reach other stations.
• If after this IFS time the channel is still idle, the station can send, but it still needs to wait a time equal to the contention time.
• The IFS variable can also be used to define the priority of a station or a frame.
2. Contention Window
• The contention window is an amount of time divided into slots.
• A station that is ready to send chooses a random number of slots as its wait time.
• The number of slots in the window changes according to the binary exponential backoff strategy. It means that it is set to one slot the first time and then doubles each time the station cannot detect an idle channel after the IFS time.
• This is very similar to the p-persistent method except that a random outcome defines the number of slots taken by the waiting station.
• In the contention window the station needs to sense the channel after each time slot.
• If the station finds the channel busy, it does not restart the process. It just stops the timer and restarts it when the channel is sensed as idle.

3. Acknowledgement
• Despite all the precautions, collisions may occur and destroy the data.
• The positive acknowledgment and the time-out timer help guarantee that the receiver has received the frame.
Timing in CSMA/CA
Flow diagram for CSMA/CA
Performance Comparison of all protocols
[Plot: throughput S versus offered load G for ALOHA, slotted ALOHA, 1-persistent, 0.5-persistent, 0.1-persistent, 0.01-persistent, and non-persistent CSMA; the more conservative protocols reach higher peak throughput, with 0.01-persistent CSMA approaching S = 1]
CONTROLLED ACCESS METHODS
In controlled access, the stations consult one another
to find which station has the right to send. A station
cannot send unless it has been authorized by other
stations. We discuss three popular controlled-access
methods.
❑Reservation

❑Polling

❑Token Passing
RESERVATION ACCESS METHOD
POLLING
Select and poll functions in polling access method
Logical ring and physical topology in Token-passing Access Method
CHANNELIZATION

Channelization is a multiple-access method in which the available bandwidth of a link is shared, in time, frequency, or through code, between different stations.

❑ Frequency-Division Multiple Access (FDMA)
❑ Time-Division Multiple Access (TDMA)
❑ Code-Division Multiple Access (CDMA)
In FDMA, the available bandwidth
of the common channel is divided into bands that are separated
by guard bands.
Frequency-division multiple access (FDMA)
In TDMA, the bandwidth is just one
channel that is timeshared between
different stations.
Time-division multiple access (TDMA)
In CDMA, one channel carries all
transmissions simultaneously.
Simple idea of communication with code using CDMA
Data representation in CDMA

Chip sequences
Sharing channel in CDMA
General rule and examples of creating Walsh tables
The number of sequences in a Walsh table needs to be N = 2^m.

Walsh codes are the most common orthogonal codes used in CDMA applications. A set of Walsh codes of length n consists of the n rows of an n×n Walsh matrix. The matrix is defined recursively in the previous diagram, where n is the dimension of the matrix and the overscore denotes the logical NOT of the bits in the matrix. The Walsh matrix has the property that every row is orthogonal to every other row and to the logical NOT of every other row.

It is easily seen that the inner product of any two distinct rows is 0.
For example, in the 8×8 matrix, row 3 multiplied by row 4 equals 1+(−1)+1+(−1)+1+(−1)+1+(−1) = 0.
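The recursive construction and the orthogonality property can be sketched in a few lines of Python (function names are ours; entries are written as +1/−1 rather than bits):

```python
def walsh(n):
    """Build the n x n Walsh matrix recursively (n a power of 2):
    W(2m) = [[W(m), W(m)], [W(m), -W(m)]], with +1/-1 entries."""
    if n == 1:
        return [[1]]
    w = walsh(n // 2)
    return [row + row for row in w] + [row + [-x for x in row] for row in w]

def inner_product(a, b):
    return sum(x * y for x, y in zip(a, b))

W4 = walsh(4)
# The rows of W4 are the four-station chip sequences:
# [+1 +1 +1 +1], [+1 -1 +1 -1], [+1 +1 -1 -1], [+1 -1 -1 +1].
# Any two distinct rows have inner product 0 (orthogonality).
```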
Find the chips for a network with
a. two stations b. four stations

Solution
We can use the rows of W2 and W4 in the previous figure:
a. For a two-station network, we have
[+1 +1] and [+1 −1].
b. For a four-station network, we have
[+1 +1 +1 +1], [+1 −1 +1 −1],
[+1 +1 −1 −1], and [+1 −1 −1 +1].
What is the number of sequences if we have 90 stations in our network?

Solution
The number of sequences needs to be N = 2^m ≥ 90. We need to choose m = 7, and N = 2^7 = 128. We can then use 90 of the sequences as the chips.
FDMA vs TDMA vs CDMA

Approach:            TDMA | FDMA | CDMA
Idea:                Divide sending time into disjoint time slots, demand-driven or fixed patterns. | Divide the frequency band into disjoint sub-bands. | Spread the spectrum using orthogonal codes.
Terminals:           All terminals are active for short periods of time on the same frequency. | Every terminal has its own frequency, uninterrupted. | All terminals can be active at the same place at the same moment, uninterrupted.
Signal separation:   Synchronization in the time domain. | Filtering in the frequency domain. | Code plus special receivers.
Transmission scheme: Discontinuous | Continuous | Continuous
Advantages:          Established, fully digital, flexible. | Simple, established, robust. | Flexible, less frequency planning needed, soft handover.
Disadvantages:       Guard space needed (multipath propagation), synchronization difficult. | Inflexible, frequencies are a limited resource. | Complex receivers, needs more complicated power control for senders.
IEEE 802 Standards
❑In 1985, the Computer Society of the IEEE (The
Institute of Electrical and Electronics Engineers) started a
project, called Project 802, to set standards to
enable intercommunication among equipment
from a variety of manufacturers.

❑ Project 802 is a way of specifying functions of the physical layer and the data link layer of major LAN protocols.
IEEE standard for LANs

• IEEE has standardized a number of LANs and MANs under the name of IEEE 802. A few have survived but many have not.
• The most important of the survivors are 802.3 (Ethernet) and 802.11 (wireless LAN).
• For 802.15 (Bluetooth) and 802.16 (wireless MAN), it is too early to tell.
• Both 802.3 and 802.11 have different physical layers and different MAC sublayers but converge on the same logical link control sublayer (defined in 802.2), so they have the same interface to the network layer.
Notable IEEE 802 LAN/MAN Standards

IEEE 802.1 – Standards for LAN/MAN bridging and management and remote media access control (MAC) bridging.
IEEE 802.2 – Standards for Logical Link Control (LLC) connectivity.
IEEE 802.3 – Ethernet standards for Carrier Sense Multiple Access with Collision Detection (CSMA/CD).
IEEE 802.4 – Standards for token-passing bus access.
IEEE 802.5 – Standards for token ring access and for communications between LANs and MANs.
IEEE 802.6 – Standards for information exchange between systems.
IEEE 802.7 – Standards for broadband LAN cabling.
IEEE 802.8 – Fiber-optic connection.
IEEE 802.9 – Standards for integrated services, like voice and data.
IEEE 802.10 – Standards for LAN/MAN security implementations.
IEEE 802.11 – Wireless networking – "Wi-Fi".
IEEE 802.12 – Standards for demand-priority access method.
IEEE 802.14 – Standards for cable television broadband communications.
IEEE 802.15.1 – Bluetooth.
IEEE 802.15.4 – Wireless sensor/control networks – "ZigBee".
IEEE 802.15.6 – Wireless Body Area Network (BAN) – (e.g. Bluetooth Low Energy).
IEEE 802.16 – Wireless networking – "WiMAX".
IEEE 802.3 (Wired LANs: Ethernet)

❑ IEEE 802.3 is a working group and a collection of IEEE standards produced by the working group defining the physical layer and the data link layer's media access control (MAC) of wired Ethernet. This is generally a local area network technology with some wide area network applications.

❑ Physical connections are made between nodes and/or infrastructure devices (hubs, switches, routers) by various types of copper or fiber cable.

❑ 802.3 also defines the LAN access method using CSMA/CD.


What is Ethernet?
❑ Ethernet is a family of computer networking technologies for local area networks (LANs) and metropolitan area networks (MANs). It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3, and has since been refined to support higher bit rates and longer link distances.

❑ Over time, Ethernet has largely replaced competing wired LAN technologies such as token ring, FDDI, and ARCNET.
Ethernet evolution through four generations
802.3 MAC Frame Format
Ethernet frame format (more details of terms used)
• Preamble bits + SFD:
  (Preamble: a sequence of 10101010 bytes, 7 bytes long.)
  (SFD, Start-of-Frame Delimiter, for compatibility with 802.4 and 802.5.)
• Addresses: 2 or 6 bytes.
  • High-order bit of the destination address: 0 for ordinary addresses, 1 for group addresses.
  • Bit 46 distinguishes global from local addresses.
• Type: specifies which process to give the frame to. (Any number ≤ 1500 is treated as a length; otherwise as a type.)
• Data: up to 1500 bytes.
• Pad: (optional) the frame must be at least 64 bytes in total!
• Checksum: CRC based on this polynomial:
  x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1
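For illustration, Python's standard-library `zlib.crc32` computes a CRC-32 with this same generator polynomial (note that Ethernet additionally specifies the bit ordering of the FCS field on the wire, which this sketch does not model):

```python
import zlib

# zlib.crc32 computes CRC-32 using the generator polynomial listed
# above (x^32 + x^26 + ... + x + 1), the one used for the Ethernet FCS.
fcs = zlib.crc32(b"123456789")

# 0xCBF43926 is the well-known CRC-32 check value for the ASCII
# string "123456789".
assert fcs == 0xCBF43926
```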
Minimum and maximum lengths
Example of an Ethernet address in hexadecimal notation

❑ The least significant bit of the first byte defines the type of address.
If the bit is 0, the address is unicast; otherwise, it is multicast.
❑ The broadcast destination address is a special case of the multicast address
in which all bits are 1s.
Define the type of the following destination addresses:
a. 4A:30:10:21:10:1A b. 47:20:1B:2E:08:EE
c. FF:FF:FF:FF:FF:FF

Solution
To find the type of the address, we need to look at the
second hexadecimal digit from the left. If it is even, the
address is unicast. If it is odd, the address is multicast. If all
digits are F’s, the address is broadcast. Therefore, we have
the following:
a. This is a unicast address because A in binary is 1010.
b. This is a multicast address because 7 in binary is 0111.
c. This is a broadcast address because all digits are F’s.
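The rule used in the solution can be sketched as a short function (a hypothetical helper written for illustration, not part of any standard API); it tests the least significant bit of the first byte, which is equivalent to checking whether the second hexadecimal digit is odd:

```python
def address_type(mac: str) -> str:
    """Classify an Ethernet destination address by the least
    significant bit of its first byte."""
    octets = [int(part, 16) for part in mac.split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"        # all 48 bits are 1s
    if octets[0] & 0x01:          # LSB of first byte is 1 -> group address
        return "multicast"
    return "unicast"

print(address_type("4A:30:10:21:10:1A"))  # unicast   (0x4A = 01001010)
print(address_type("47:20:1B:2E:08:EE"))  # multicast (0x47 = 01000111)
print(address_type("FF:FF:FF:FF:FF:FF"))  # broadcast
```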
Categories of Standard Ethernet (802.3)
Ethernet Cabling
Name      Cable               Max seg. (m)  Nodes/segment  Advantages

10Base5   thick coax          500           100            the original
10Base2   thin coax           185           30             no hub
10Base-T  twisted pair (UTP)  100           1024           cheapest
10Base-F  fiber               2000          1024           long distance
Three kinds of Ethernet cabling

10Base5 10Base2 10Base-T

Term Used:
A transceiver is a device comprising both a transmitter and a receiver that are
combined and share common circuitry or a single housing.
10Base5 implementation

10Base2 implementation
10Base-F implementation
10Base-T implementation
Ethernet-802.3
The following table compares the different 10 Mbps Ethernet variants: 10BASE5, 10BASE2, 10BASE-F and 10BASE-T.

Specification            10BASE5             10BASE2             10BASE-F            10BASE-T
Maximum segment length   500 m               185 m               400 m to 2000 m     100 m
Topology                 Bus                 Bus                 Star                Star
Medium                   50-ohm thick coax   50-ohm thin coax    multimode fiber     100-ohm UTP
Connector                DB15 (on NIC)       BNC                 ST                  RJ-45
Medium attachment        MAU bolted to coax  external or on NIC  external or on NIC  external or on NIC
Stations/cable segment   100                 30                  N/A                 2 (NIC, repeater)
Maximum segments         5                   5                   5                   5
Fast Ethernet-802.3u
Ethernet operating at a speed of 100 Mbps is referred to as Fast Ethernet. The IEEE 802.3u Fast Ethernet/100BASE-T standard was specified in May 1995. The features of this type of Fast Ethernet are as follows:
• Includes multiple physical layers.
• Uses the original Ethernet MAC, but operates at 10 times the speed.
• Needs a star-wired configuration with a central hub.
The MAC parameters are the same as described for Ethernet above. There are three physical layers for Fast Ethernet:
• 100BASE-TX: needs 2 pairs of Cat. 5 UTP or Type 1 STP cable
• 100BASE-FX: needs 2 strands of multimode fiber
• 100BASE-T4: needs 4 pairs of Cat. 3 cable

The following table summarizes the different versions of 100BASE-T physical layers -
Fast Ethernet-802.3u

Specification          100BASE-TX        100BASE-FX                      100BASE-T4

IEEE standard          802.3u-1995       802.3u-1995                     802.3u-1995
Cabling                UTP Cat.5 or STP  multimode or single-mode fiber  UTP Cat.3/4/5
Signal frequency       125 MHz           125 MHz                         25 MHz
No. of pairs needed    2                 2                               4
No. of transmit pairs  1                 1                               3
Distance               100 m             150/412/2000 m                  100 m
Full-duplex capable    Yes               Yes                             No
Gigabit Ethernet-802.3z
Ethernet operating at speeds of 1000 Mbps (i.e., 1 Gbps) and above is referred to as Gigabit Ethernet.

❑ Gigabit Ethernet uses the same 802.3 frame format as 10 Mbps Ethernet and 100 Mbps Fast Ethernet. It also operates in half-duplex and full-duplex modes. There are various Gigabit Ethernet versions, which operate at 1 Gigabit, 10 Gigabit, 40 Gigabit and 100 Gigabit per second speeds.

❑ There are various versions of 10 Gbps Ethernet, such as 10GBASE-T, 10GBASE-R, 10GBASE-X and 10GBASE-W.

❑ The MAC parameters for Gigabit Ethernet are the same as mentioned above for Ethernet, except the slot time, which is 512 byte times.
Token Bus – IEEE 802.4
• A network which implements the modified Token Ring
protocol over a "virtual ring" on a coaxial cable with a
bus topology.
• It is mainly used for industrial applications.
Token Bus – IEEE 802.4 (contd.)
• The IEEE 802.4 is a popular standard for bus-based token-passing LANs. In a token bus LAN, the physical medium is a bus, and a logical ring, as shown in the figure, is created. The token is passed from one user to another in a sequence decided by the logical ring. The IEEE 802.4 frame is as shown in the figure -
Token Bus – IEEE 802.4 (contd.)
• The preamble is used to synchronize the receiver's clock; unlike in 802.3, it may be as short as 1 byte. The start and end delimiter fields are used to mark the frame boundaries. The frame control field is used to distinguish data frames from control frames. For data frames, it carries the frame's priority. The token bus defines four priority classes - 0, 2, 4 and 6 - for traffic, with 0 the lowest and 6 the highest. To understand the priority classes, each station can be thought of as being internally divided into four substations, one at each priority level.
• When the token comes into the station over the cable, it is passed internally to the priority 6 substation, which may begin transmitting frames, if it has any. When it is done, or when its timer expires, the token is passed internally to the priority 4 substation, and so on down to priority 0; four timers are thus used to control the internal flow of the token at a station. By setting the timers properly, a guaranteed fraction of the token holding time can be allocated to priority 6 traffic.
• For control frames, the frame control field is used to specify the frame type, which includes token passing and various ring maintenance frames. The data field may be up to 8182 bytes long when 2-byte addresses are used, and up to 8174 bytes long when 6-byte addresses are used. The checksum is used to detect transmission errors; it uses the same polynomial algorithm as in 802.3.
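The internal priority scheme described above can be sketched as follows. This is a simplified toy model: each substation is bounded by a per-visit frame budget standing in for its real token-holding timer, and the queue contents are made-up names.

```python
def hold_token(queues, budget):
    """Offer the token internally to priority substations 6, 4, 2, 0
    in turn; 'budget' bounds how many frames each substation may send
    per token visit (a stand-in for its token-holding timer)."""
    sent = []
    for priority in (6, 4, 2, 0):
        q = queues.get(priority, [])
        while q and budget.get(priority, 0) > 0:
            sent.append((priority, q.pop(0)))
            budget[priority] -= 1
    return sent  # afterwards the token is passed to the next station

queues = {6: ["v1", "v2", "v3"], 2: ["d1"], 0: ["d2"]}
order = hold_token(queues, {6: 2, 4: 2, 2: 1, 0: 1})
print(order)  # priority 6 traffic gets its guaranteed share first
```

Setting the priority-6 budget high relative to the others is what gives high-priority traffic a guaranteed fraction of the token holding time.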
Control Token or Token Passing
• Another way of controlling access to a shared transmission
medium is by a control token (Token Passing)
• The Control Token technique uses a control or permission token
to share the communication resource between a number of
nodes. The technique can be applied to both bus and ring
network topologies.
• This token is passed from one station to another according to a
defined set of rules
• A station may transmit a frame only when it has possession of
the token and after it has transmitted the frame, it passes the
token on, to allow another station to access the transmission
medium.
Token passing network
A token always circulates around a ring net.
A user grabs a token to transmit data
Control Token procedure
Token Ring – IEEE 802.5
• A ring topology network developed in the 1970s. Supported
mainly by IBM.
• A LAN protocol which resides at the data link layer (DLL) of the
OSI model.
• In cases of heavy traffic, the token ring network has higher throughput than Ethernet due to the deterministic (non-random) nature of the medium access.

• Is used in applications in which delay when sending data must


be predictable
• Is a robust network i.e. it is fault tolerant through fault
management mechanisms
• Can support data rates of around 16 Mbps.
• Typically uses twisted pair
Token Ring – IEEE 802.5
Token Ring Operation
• When nobody is transmitting, a token circulates around the ring.
• When a station needs to transmit data, it converts the token into
a data frame.
• When the sender receives its own data frame, it converts the
frame back into a token.
• If an error occurs and no token frame, or more than one, is
present, a special station (“Active Monitor”) detects the problem
and removes and/or reinserts tokens as necessary.
• The Abort frame: used to abort transmission by the sending
station.
Active and Standby monitors
• Every station in a token ring network is either an Active monitor (AM) or
Standby monitor (SM) station. There can be only one active monitor on a
ring at a time. The active monitor is chosen through an election or monitor
contention process.

• The monitor contention process is initiated when


➢ A loss of signal on the ring is detected,
➢ An AM station is not detected by other stations, or
➢ When a timer on an end station expires (the station hasn't seen a token in the
past 7 seconds).
• When any of the above conditions takes place, a station decides that a new AM is needed.
• The AM performs a number of ring management functions and roles:
❑ Master clock for the ring, synchronization.
❑ Inserts a 24-bit delay into the ring for sufficient buffering.
❑ Ensures that exactly one token circulates when no frame is being transmitted.
❑ Detects a broken ring.
❑ Removes circulating frames from the ring.
Token Ring frame formats
Data or command frame
SD AC FC DA SA PDU from LLC (IEEE 802.2) CRC ED FS
8 bits 8 bits 8 bits 48 bits 48 bits up to 18200x8 bits 32 bits 8 bits 8 bits

Token frame
SD AC ED
8 bits 8 bits 8 bits

Abort frame
SD ED
8 bits 8 bits

SD: Starting delimiter
AC: Access control
FC: Frame control
DA: Destination address
SA: Source address
PDU: data (from LLC)
CRC: checksum
ED: Ending delimiter
FS: Frame status
Token Ring frame formats
The starting and ending delimiter fields mark the beginning and ending of the
frame. Each contains invalid differential Manchester patterns to distinguish
them from data bytes. The access control byte contains the token bit, and also
the monitoring bit, priority bits and reservation bits. The frame control byte,
distinguishes data frames from various possible control frames. The frame
status byte contains A and C bits. When a frame arrives at the interface of a
station with the destination address, the interface turns on the A bit as it passes
through. If the interface copies the frame to the station, it also turns on the C
bit. A station might fail to copy a frame due to lack of buffer space or other
reasons.

The ending delimiter contains an E bit which is set if any interface detects an
error.
Question:
An 8-Mbps token ring has a token holding timer value of
10 msec. What is the longest frame (assume header bits
are negligible) that can be sent on this ring?

Answer:
At 8 Mbps, a station can transmit 80,000 bits or 10,000 bytes in 10
msec.
This is an upper bound on frame length.
From this amount, some overhead must be subtracted, giving a
slightly lower limit for the data portion.
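The arithmetic in the answer can be checked in a few lines:

```python
rate_bps = 8_000_000       # 8 Mbps ring
holding_time_ms = 10       # token holding timer

max_bits = rate_bps * holding_time_ms // 1000   # bits sendable in 10 ms
max_bytes = max_bits // 8
print(max_bits, max_bytes)  # 80000 10000
```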
FDDI (Fiber Distributed Data Interface)
• FDDI is a standard developed by the American National
Standards Institute (ANSI) for transmitting data on
optical fibers
• Supports transmission rates of up to 200 Mbps
• Uses a dual ring
❑First ring used to carry data at 100 Mbps
❑Second ring used for primary backup in case first ring fails
❑If no backup is needed, second ring can also carry data,
increasing the data rate up to 200 Mbps
• Supports up to 1000 nodes
• Has a range of up to 200 km
FDDI (Fiber Distributed Data Interface)
Differences between 802.5 and FDDI

Token Ring:
• Shielded twisted pair
• 4, 16 Mbps
• No reliability specified
• Differential Manchester encoding
• Centralized clock
• Priority and reservation bits
• New token after receive

FDDI:
• Optical fiber
• 100 Mbps
• Reliability specified (dual ring)
• 4B/5B encoding
• Distributed clocking
• Timed Token Rotation Time
• New token after transmit
IEEE 802.11 standards
for Wi-Fi and WLAN applications
IEEE 802.11 is a set of media access control (MAC) and physical layer (PHY)
specifications for implementing wireless local area network (WLAN) computer
communication in the 900 MHz and 2.4, 3.6, 5, and 60 GHz frequency bands. They
are the world's most widely used wireless computer networking standards, used in
most home and office networks to allow laptops, printers, and smartphones to talk
to each other and access the Internet without connecting wires. They are created
and maintained by the Institute of Electrical and Electronics Engineers (IEEE)
LAN/MAN Standards Committee (IEEE 802).

The base version of the standard was released in 1997, and has had subsequent
amendments. The standard and amendments provide the basis for wireless
network products using the Wi-Fi brand. While each amendment is officially
revoked when it is incorporated in the latest version of the standard, the corporate
world tends to market to the revisions because they concisely denote capabilities
of their products. As a result, in the marketplace, each revision tends to become
its own standard.
LOGICAL LINK CONTROL SUBLAYER
Logical Link Control Design Issues
 Services Provided to the Network Layer
• The network layer wants to be able to send packets to its neighbors without worrying about the details of getting them there in one piece.

 Framing
• Group the physical layer bit stream into units called frames. Frames are
nothing more than "packets" or "messages". By convention, we use the
term "frames" when discussing DLL.

 Flow Control
• Prevent a fast sender from overwhelming a slower receiver.

Error Control
• Sender checksums the frame and transmits checksum together with
data. Receiver re-computes the checksum and compares it with the
received value.
Services provided to the network layer
• The function of the data link layer is to provide services to the
network layer.
• The principal service is transferring data from the network
layer on the source machine to the network layer on the
destination machine.

• The data link layer can be designed to offer various services. The
actual services offered can vary from system to system. Three
reasonable possibilities that are commonly provided are-

❑ Unacknowledged Connectionless service


❑ Acknowledged Connectionless service
❑ Acknowledged Connection-Oriented service
HDLC (High level data link control)
• HDLC is a bit-oriented protocol.
• It was developed by the International Organization for
Standardization (ISO). It falls under the ISO standards ISO
3309 and ISO 4335. It specifies a packetization standard for
serial links. It has found itself being used throughout the
world.
• It has been so widely implemented because it supports both
half-duplex and full-duplex communication lines, point-to-
point (peer to peer) and multi-point networks, and switched
or non-switched channels.
• HDLC supports several modes of operation, including a simple sliding-window mode for reliable delivery. Since the Internet provides retransmission at higher levels (i.e., TCP), most Internet applications use HDLC's unreliable delivery mode, Unnumbered Information.
HDLC (High level data link control) - contd.

• Other benefits of HDLC are that the control information is always in the
same position, and specific bit patterns used for control differ
dramatically from those in representing data, which reduces the chance
of errors. It has also led to many subsets.

• Two subsets widely in use are Synchronous Data Link Control (SDLC) and
Link Access Procedure-Balanced (LAP-B).

• HDLC Stations and Configurations


HDLC specifies the following three types of stations for data link control:
❑ Primary Station
❑ Secondary Station
❑ Combined Station
HDLC (High level data link control) - contd.
• Primary Station
It is used as the controlling station on the link. It has the responsibility
of controlling all other stations on the link (usually secondary
stations). It is also responsible for the organization of data flow on the
link. It also takes care of error recovery at the data link level.

• Secondary Station
The secondary station is under the control of the primary station. It
has no ability, or direct responsibility for controlling the link. It is only
activated when requested by the primary station. It can only send
response frames when requested by the primary station.

• Combined Station
A combined station is a combination of a primary and secondary
station. On the link, all combined stations are able to send and receive
commands and responses without any permission from any other
stations on the link.
HDLC (High level data link control) - contd.
• Following are the three configurations defined by HDLC:
• Unbalanced Configuration
The unbalanced configuration in an HDLC link consists of a primary station
and one or more secondary stations. The unbalanced condition arises
because one station controls the other stations.
• Balanced Configuration
The balanced configuration in an HDLC link consists of two or more combined stations. Each of the stations has equal and complementary responsibility compared to each other.
• Symmetrical Configuration
This third type of configuration is not widely in use today. It consists of two
independent point-to-point, unbalanced station configurations as shown in Fig. below.
In this configuration, each station has a primary and secondary status. Each station is
logically considered as two stations as shown below -
HDLC (High level data link control) - contd.
• Operational Modes:
A mode in HDLC is the relationship between two devices involved in an
exchange; the mode describes who controls the link. Exchanges over
unbalanced configurations are always conducted in normal response mode.
Exchanges over symmetric or balanced configurations can be set to a specific mode using a frame designed to deliver the command. HDLC offers three different modes of operation.

These three modes of operations are:


1) Normal Response Mode (NRM): The primary station initiates transfers to the secondary
station. The secondary station can only transmit a response when, and only when, it is
instructed to do so by the primary station
2) Asynchronous Response Mode (ARM): The primary station doesn't initiate transfers to
the secondary station. In fact, the secondary station does not have to wait to receive
explicit permission from the primary station to transfer any frames. The frames may be
more than just acknowledgment frames.
3) Asynchronous Balanced Mode (ABM): This mode uses combined stations. There is no
need for permission on the part of any station in this mode. This is because combined
stations do not require any sort of instructions to perform any task on the link.
HDLC (High level data link control) - contd.

HDLC Non-Operational Modes


HDLC also defines three non-operational modes. These three
non-operational modes are:
• Normal Disconnected Mode (NDM)
• Asynchronous Disconnected Mode (ADM)
• Initialization Mode (IM)
The two disconnected modes (NDM and ADM) differ from the operational modes in that the secondary station is logically disconnected from the link (note that the secondary station is not physically disconnected from the link). The IM mode differs from the operational modes in that the secondary station's data link control program is in need of regeneration, or in need of an exchange of the parameters to be used in an operational mode.
HDLC (High level data link control) - contd.
HDLC Frame Structure
There are three different types of frames as shown in Fig below with the
size of different fields-
Framing
• DLL translates the physical layer's raw bit stream into
discrete units (messages) called frames.
• How can frame be transmitted so the receiver can detect frame
boundaries? That is, how can the receiver recognize the start and end of a
frame?
1) Character Count
2) Starting and ending characters, with character stuffing
3) Starting and ending flag with bit stuffing
4) Physical layer coding violations
Transport Protocol Data Unit (TPDU)
Payload – User Data / Actual Data for transmission
Framing – Character Count
• The first framing method uses a field in the header to specify the
number of characters in the frame. When the data link layer at the
destination sees the character count, it knows how many characters
follow and hence where the end of the frame is.

Disadvantage –
If the count is corrupted by a transmission error, the destination will lose synchronization and will
be unable to locate the start of the next frame. So, this method is rarely used.
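A minimal sketch of character-count framing, assuming a one-byte count that includes itself; the comments mark exactly where a corrupted count would desynchronize the receiver:

```python
def frame_with_count(messages):
    """Encode each message as a one-byte count (which includes itself)
    followed by the message bytes."""
    stream = bytearray()
    for msg in messages:
        stream.append(len(msg) + 1)  # header counts itself plus the data
        stream.extend(msg)
    return bytes(stream)

def deframe_with_count(stream):
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]  # if this byte is corrupted, every frame
        frames.append(stream[i + 1:i + count])  # after it is misparsed
        i += count
    return frames

msgs = [b"abc", b"hello"]
assert deframe_with_count(frame_with_count(msgs)) == msgs
```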
Framing – Byte Stuffing (character stuffing)
• In character-oriented protocol, we add special characters (called flag)
to distinguish beginning and end of a frame. Usually flag has 8-bit
length.
• While using a character-oriented protocol, another problem arises: the pattern used for the flag may also be part of the data to be sent. If this happens, the destination node, when it encounters this pattern in the middle of the data, assumes it has reached the end of the frame.
• To deal with this problem, a byte stuffing (also known as character stuffing) approach was added to the character-oriented protocol. In byte stuffing a special byte, known as the escape character (ESC), is added before any flag byte in the data part. The escape character has a predefined pattern. The receiver removes the escape character and keeps the data part. This causes another problem if the text contains escape characters as part of the data. To deal with this, an escape character is prefixed with another escape character.
Byte stuffing and unstuffing
The following figure explains everything we discussed about character stuffing.

a) A frame delimited by flag bytes
b) Four examples of byte sequences before and after stuffing
Disadvantage: character is the smallest unit that can be operated on; not all
architectures are byte oriented.
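A minimal sketch of byte stuffing and unstuffing; the concrete FLAG and ESC values are assumptions chosen for illustration:

```python
FLAG, ESC = b"\x7E", b"\x7D"  # assumed flag and escape byte values

def byte_stuff(data: bytes) -> bytes:
    """Frame = FLAG + stuffed data + FLAG. Any flag or escape byte in
    the data is prefixed with ESC so the receiver never mistakes it
    for a real delimiter."""
    out = bytearray(FLAG)
    for b in data:
        if bytes([b]) in (FLAG, ESC):
            out.extend(ESC)
        out.append(b)
    out.extend(FLAG)
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    body = frame[1:-1]  # strip the two FLAG delimiters
    out, i = bytearray(), 0
    while i < len(body):
        if bytes([body[i]]) == ESC:
            i += 1      # drop the ESC; keep the next byte as plain data
        out.append(body[i])
        i += 1
    return bytes(out)

payload = b"A" + FLAG + b"B" + ESC + b"C"   # data containing both specials
assert byte_unstuff(byte_stuff(payload)) == payload
```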
Framing – Bit Stuffing
• The third method allows data frames to contain an arbitrary number of bits
and allows character codes with an arbitrary number of bits per character.
• At the start and end of each frame is a flag byte consisting of the special
bit pattern 01111110 .
• Whenever the sender's data link layer encounters five consecutive
1s in the data, it automatically stuffs a zero bit into the outgoing bit
stream. This technique is called bit stuffing.
• When the receiver sees five consecutive 1s in the incoming data stream,
followed by a zero bit, it automatically destuffs the 0 bit. The boundary
between two frames can be determined by locating the flag pattern.
Bit stuffing example
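The stuffing and destuffing rules can be sketched on a string of bits (a string is used here only for readability):

```python
def bit_stuff(bits: str) -> str:
    """After five consecutive 1s, insert a 0 into the outgoing stream."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")  # the stuffed bit
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    """Remove the 0 that follows any five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            i += 1           # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

data = "0111111111100111110"
stuffed = bit_stuff(data)
assert "111111" not in stuffed       # flag pattern can never appear in data
assert bit_destuff(stuffed) == data
```

Because the stuffed stream never contains six consecutive 1s, the flag 01111110 is guaranteed to appear only at frame boundaries.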
Framing – Physical layer coding violations
• The final framing method is physical layer coding
violations and is applicable to networks in which the
encoding on the physical medium contains some
redundancy.

• Example:
A 1 bit is a high-low pair and a 0 bit is a low-high pair. The combinations low-low and high-high, which are not used for data, may be used for marking frame boundaries.
Flow Control
• Flow control deals with throttling the speed of the sender to
match that of the receiver.
• Two Approaches:
• rate-based flow control, the protocol has a built-in mechanism that
limits the rate at which senders may transmit data, without using
feedback from the receiver.
• feedback-based flow control, the receiver sends back information to
the sender giving it permission to send more data or at least telling
the sender how the receiver is doing

• Various flow control schemes use a common protocol that contains well-defined rules about when a sender may transmit the next frame. These rules often prohibit frames from being sent until the receiver has granted permission, either implicitly or explicitly.
Flow Control Protocols
Consider a situation in which the sender transmits frames faster than the
receiver can accept them. If the sender keeps pumping out frames at high
rate, at some point the receiver will be completely swamped and will start
losing some frames. This problem may be solved by introducing flow
control.
Most flow control protocols contain a feedback mechanism to inform the
sender when it should transmit the next frame.
An unrestricted simplex protocol
(Noiseless Environment)
• In order to appreciate the step by step development of efficient and
complex protocols we will begin with a simple but unrealistic protocol. In
this protocol: Data are transmitted in one direction only
• The transmitting (A) and receiving (B) hosts are always ready
• Processing time can be ignored
• Infinite buffer space is available
• No errors occur; i.e. no damaged frames and no lost frames (perfect
channel)
A simplex stop-and-wait protocol
(Noiseless Environment)
• In this protocol we assume that Data are transmitted in one direction
only. No errors occur (perfect channel)
• The receiver can only process the received information at a finite rate
• These assumptions imply that the transmitter cannot send frames at
a rate faster than the receiver can process them.
The problem here is how to prevent the sender from flooding the
receiver.
• A general solution to this problem is to have the receiver provide some
sort of feedback to the sender.
• The process could be as follows: The receiver sends an acknowledgement frame back to the sender telling it that the last received frame has been processed and passed to the host; permission to send the next frame is granted. The sender, after having sent a frame, must wait for the acknowledgement frame from the receiver before sending another frame.
Stop-and-Wait ARQ

➢ It is the simplest flow and error control mechanism. A transmitter sends a frame, then stops and waits for an acknowledgment.
➢ Stop-and-Wait ARQ has the following features:
✓The sending device keeps a copy of the transmitted frame until it receives an acknowledgment (ACK)
✓The sender starts a timer when it sends a frame. If an ACK is not received within an allocated time period, the sender resends the frame
✓Both frames and acknowledgments (ACK) are numbered alternately 0 and 1 (two sequence numbers only)
✓This numbering allows for identification of frames in case of duplicate transmission
Stop-and-Wait ARQ
➢ The acknowledgment number defines the number of the next expected frame. (frame 0 received, ACK 1 is sent)
➢ A damaged or lost frame is treated in the same manner by the receiver
➢ If the receiver detects an error in the received frame, or receives a frame out of order, it simply discards the frame
➢ The receiver sends only positive ACKs for frames received safely; it is silent about frames that are damaged or lost.
➢ The sender has a control variable S that holds the number of the most recently sent frame (0 or 1). The receiver has a control variable R that holds the number of the next frame expected (0 or 1)
Stop-and-Wait ARQ

Cases of Operations:

1.Normal operation
2.The frame is lost
3.The Acknowledgment (ACK) is lost
4.The Ack is delayed
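The receiver side of these cases can be sketched as a toy model; the alternating sequence number R is what lets the receiver discard the duplicate that a lost or delayed ACK provokes:

```python
R = 0           # receiver's control variable: next frame expected
delivered = []

def accept(seq, data):
    """Receiver rule: deliver only the expected frame; discard a
    duplicate, but acknowledge it again so the sender can move on."""
    global R
    if seq == R:
        delivered.append(data)
        R = 1 - R
    return 1 - seq   # ACK carries the number of the next expected frame

# Case 1, normal operation: frames 0 and 1 arrive in order.
assert accept(0, "a") == 1
assert accept(1, "b") == 0

# Case 3, lost ACK: frame 0 ("c") is delivered but its ACK is lost, so
# the sender times out and retransmits frame 0. The duplicate is
# discarded, not delivered twice.
assert accept(0, "c") == 1
assert accept(0, "c") == 1
assert delivered == ["a", "b", "c"]
```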
Stop-and-Wait ARQ
Normal operation

➢ The sender will not send the next frame until it is sure that the current one has been correctly received
➢ A sequence number is necessary to check for duplicated frames
Stop-and-Wait ARQ
2. Lost or damaged frame

➢ A damaged or lost frame is treated in the same manner by the receiver.

➢ No NAK is sent when a frame is corrupted or duplicated
Stop-and-Wait ARQ

3. Lost ACK frame

➢ Importance of frame
numbering
In Stop-and-Wait ARQ, numbering frames prevents the retention of duplicate frames.
Stop-and-Wait ARQ
4. Delayed ACK and lost frame

➢ Importance of frame
numbering
Piggybacking
• Temporarily delaying transmission of outgoing acknowledgement so that they
can be hooked onto the next outgoing data frame.
• Combining data to be sent with control information is called
piggybacking. Thus, piggybacking means combining data to be
sent and acknowledgement of the frame received in one single
frame for higher channel bandwidth utilization.
• Complication:
• How long to wait for a packet to piggyback?
• If longer than sender timeout period then sender retransmits
 Purpose of acknowledgement is lost
• Solution for the timing complication
• If a new packet arrives quickly
 Piggybacking
• If no new packet arrives after a receiver ack timeout
 Sending a separate acknowledgement frame
Sliding Window Protocols
In spite of the use of timers, the Stop-and-Wait ARQ protocol still suffers from a few drawbacks.

❑ Firstly, if the receiver had the capacity to accept more than one frame,
its resources are being underutilized.
❑ Secondly, if the receiver was busy and did not wish to receive any more
packets, it may delay the acknowledgement. However, the timer on the
sender's side may go off and cause an unnecessary retransmission. These
drawbacks are overcome by the sliding window protocols.

▪ In sliding window protocols the sender's data link layer maintains a


'sending window' which consists of a set of sequence numbers
corresponding to the frames it is permitted to send.
▪ Similarly, the receiver maintains a 'receiving window' corresponding to
the set of frames it is permitted to accept. The window size is dependent
on the retransmission policy and it may differ in values for the receiver's
and the sender's window.
Sliding Window Protocols

❑ The sequence numbers within the sender's window represent the frames sent or
can be sent but as yet not acknowledged. Whenever a new packet arrives from
the network layer, the upper edge of the window is advanced by one. When an
acknowledgement arrives from the receiver the lower edge is advanced by one.
❑ The receiver's window corresponds to the frames that the receiver's data link
layer may accept. When a frame with sequence number equal to the lower edge
of the window is received, it is passed to the network layer, an acknowledgement
is generated and the window is rotated by one. If however, a frame falling outside
the window is received, the receiver's data link layer has two options. It may
either discard this frame and all subsequent frames until the desired frame is
received or it may accept these frames and buffer them until the appropriate
frame is received and then pass the frames to the network layer in sequence.
Sliding Window Protocol
Sliding window protocols apply pipelining: the sender is allowed to transmit multiple contiguous frames (say, up to W frames) before it receives an acknowledgement. Two such protocols are:
❖ Go-Back-N ARQ
❖ Selective Repeat ARQ

❑ Sliding window protocols improve the efficiency.
❑ Multiple frames can be in transit while waiting for an ACK; more than one frame is allowed to be outstanding.
❑ Outstanding frames: frames sent but not acknowledged.
❑ We can send up to W frames and keep a copy of these (outstanding) frames until the ACKs arrive.
❑ This procedure requires an additional feature: the sliding window.
Sliding window Protocols ( Terms Used )
Go_Back _N ARQ Sliding Window Protocol
Sender sliding window

If m = 3:
sequence numbers = 2^m = 8 (0 to 7) and
window size = 2^m - 1 = 7

Acknowledged frames
Go_Back _N ARQ Sliding Window Protocol
Receiver sliding window
➢ The receive window is an abstract concept defining
an imaginary box of size 1 with one single variable
Rn.
➢ The window slides when a correct frame has arrived;
sliding occurs one slot at a time.
Go_Back _N ARQ Sliding Window Protocol
control variables
Outstanding frames: frames sent but not
acknowledged

❑ S: holds the sequence number of the most recently sent frame
❑ SF: holds the sequence number of the first frame in the window
❑ SL: holds the sequence number of the last frame in the window
❑ R: the sequence number of the frame expected to be received
Go_Back _N ARQ Sliding Window Protocol
Go_Back _N ARQ Sliding Window Protocol
In Go-Back-N ARQ, we use one timer for the first outstanding frame.
➢ The receiver sends a positive ACK if a frame has arrived safe and in order.
➢ If a frame is damaged or out of order, the receiver is silent and will discard all subsequent frames.
➢ When the timer of an unacknowledged frame at the sender site expires, the sender goes back and resends all frames, beginning with the one whose timer expired (that is why the protocol is called Go-Back-N ARQ).
➢ The receiver doesn't have to acknowledge each frame received. It can send a cumulative ACK for several frames.
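A toy sketch of the receiver's behavior described above, using an unbounded sequence number instead of modulo-2^m arithmetic for simplicity; `make_gbn_receiver` is a made-up helper name:

```python
def make_gbn_receiver():
    """Go-Back-N receiver: accepts only in-order frames and returns a
    cumulative ACK (the sequence number it expects next)."""
    state = {"Rn": 0, "delivered": []}
    def on_frame(seq, data):
        if seq == state["Rn"]:            # in order: deliver, slide window
            state["delivered"].append(data)
            state["Rn"] += 1
        # out-of-order frames are silently discarded
        return state["Rn"]                # cumulative ACK
    return on_frame, state

on_frame, state = make_gbn_receiver()

on_frame(0, "f0")
# frame 1 is lost; frames 2 and 3 arrive out of order and are discarded
on_frame(2, "f2")
on_frame(3, "f3")
# the sender times out and goes back to frame 1, resending 1, 2 and 3
for seq, data in [(1, "f1"), (2, "f2"), (3, "f3")]:
    ack = on_frame(seq, data)

assert state["delivered"] == ["f0", "f1", "f2", "f3"]
assert ack == 4   # one cumulative ACK covers all four frames
```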
Go_Back _N ARQ Sliding Window Protocol
Normal operation

➢ How many frames can be


transmitted without
acknowledgment?
➢ ACK1 is not necessary if
ACK2 is sent:
Cumulative ACK
Go_Back _N ARQ Sliding Window Protocol
Damaged or Lost Frame

Correctly received out-of-order packets are not buffered.

What is the disadvantage of this?
Go_Back _N ARQ Sliding Window Protocol
Selective Repeat ARQ
Go-Back-N ARQ is inefficient on a noisy link.

➢ On a noisy link, frames have a higher probability of damage, which means the resending of multiple frames.
➢ This resending consumes the bandwidth and slows down the transmission.

Solution:
➢ Selective Repeat ARQ protocol: resend only the damaged frame
➢ It defines a negative acknowledgement (NAK) that reports the sequence number of a damaged frame before the timer expires
➢ It is more efficient for a noisy link, but the processing at the receiver is more complex
In Selective Repeat ARQ, the size of the sender and receiver windows must be at most one-half of 2^m.
Selective Repeat ARQ
Selective Repeat ARQ
Lost Frame
m = 3, so 2^m = 8
Sequence numbers: 0, 1, 2, 3, 4, 5, 6, 7
Window size = 2^m / 2 = 8/2 = 4
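Why at most one-half of 2^m? If every ACK for a full window is lost, the sender retransmits its old window while the receiver has already advanced to the next one; the old and new windows must not overlap in sequence-number space, or a retransmitted old frame would be mistaken for a new one. A quick check with m = 3:

```python
def windows_overlap(w, m=3):
    """Does the sender's stuck window (after total ACK loss) share any
    sequence numbers with the receiver's already-advanced window?"""
    seq = 2 ** m                                    # 8 sequence numbers
    sender_old = set(range(w))                      # frames being resent
    receiver_new = {(w + i) % seq for i in range(w)}  # receiver's window
    return bool(sender_old & receiver_new)

assert not windows_overlap(4)  # w = 2^m / 2: retransmissions unambiguous
assert windows_overlap(5)      # w > 2^m / 2: old frame 0 looks new
```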
Selective Repeat ARQ
Notice that only two ACKs are sent here.
❑ The first one acknowledges only the first frame;
❑ the second one acknowledges three frames.
❑ In Selective Repeat, ACKs are sent when data are delivered to the network layer.
❑ If the data belonging to n frames are delivered in one shot, only one ACK is sent for all of them.
Selective Repeat ARQ
Performance Issues of Stop and Wait ARQ
Performance Issues of Sliding Window (contd.)
Review Questions
Consider the use of 1000-bit frames on a 1Mbps satellite channel
with a 270ms delay. What is the maximum link utilization for
a) Stop-and-wait flow control?
b) Sliding window flow control with a window size of 7?
c) Sliding window flow control with a window size of 127?
d) Sliding window flow control with a window size of 255?
It is given that:
Frame = 1000 bits, channel data rate = 1 Mbps, propagation delay tprop = 270 ms

(a) Maximum link utilization with stop-and-wait flow control:
U = 1/(1+2a), where a = tprop/tframe
Since tprop = 270 ms, in order to find the value of U we need to calculate tframe.
Since frame = 1000 bits and max bit rate = channel data rate = 1 Mbps,
tframe = 1000/10^6 s = 1 ms.
So, a = tprop/tframe = 270/1 = 270
So, U = 1/(1+2a) = 1/(1+2×270) = 1.85×10^-3 = 0.185%
Review Questions
(b) Maximum link utilization with window flow control of window size 7:
Maximum link utilization for window flow control is
U = 1 for W ≥ 2a+1
U = W/(2a+1) for W < 2a+1
Since W = 7 and a = 270, (2a+1) = 541, which means that W < 2a+1.
So, U = W/(2a+1) = 7/541 = 0.013 = 1.3%

(c) Maximum link utilization with window flow control of window size 127:
Since W = 127 and (2a+1) = 541, W < 2a+1.
So, U = W/(2a+1) = 127/541 = 0.235 = 23.5%

(d) Maximum link utilization with window flow control of window size 255:
Since W = 255 and (2a+1) = 541, W < 2a+1.
So, U = W/(2a+1) = 255/541 = 0.471 = 47.1%
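The four answers above can be reproduced with a short script (a sketch; a window size of W = 1 reduces to stop-and-wait, U = 1/(1+2a)):

```python
def utilization(w, a):
    """Maximum link utilization for sliding-window flow control.
    w: window size in frames; a = t_prop / t_frame."""
    return 1.0 if w >= 2 * a + 1 else w / (2 * a + 1)

a = 270 / 1   # t_prop = 270 ms; t_frame = 1000 bits / 1 Mbps = 1 ms
for w in (1, 7, 127, 255):
    print(f"W = {w:3d}: U = {utilization(w, a):.3%}")
```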
Review Questions
A channel has a data rate of 4 kbps and a propagation delay of 20 ms.
For what range of frame sizes does stop-and-wait give an efficiency of
at least 50%?
It is given that data rate = 4 kbps, hence bit duration = 1/4000 s = 0.25 ms.
Time to transmit the frame is
tframe = Frame_Size/Bit_rate = Frame_Size × Bit_duration
Also, tprop = 20 ms.
For stop-and-wait flow control, efficiency is
U = 1/(1+2a), where a = tprop/tframe
Rewriting the equation yields a = 0.5[(1/U) − 1]
For U ≥ 50% = 0.5,
a ≤ 0.5[(1/0.5) − 1], i.e. a ≤ 0.5
Since a = tprop/tframe, then tprop/tframe ≤ 0.5, i.e. tframe ≥ 2tprop
But Frame_size = tframe/bit_duration, i.e.
Frame_size ≥ 2tprop/bit_duration = 2 × 20 ms / 0.25 ms = 160 bits
So, in order to have an efficiency of at least 50%, the frame size must be at least 160 bits
long.
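The same derivation in code (a sketch; rearranging U = 1/(1+2a) gives t_frame ≥ 2·t_prop·U/(1−U)):

```python
def min_frame_bits(data_rate_bps, t_prop_s, target_u):
    """Smallest frame size (bits) for stop-and-wait efficiency >= target_u."""
    t_frame = 2 * t_prop_s * target_u / (1 - target_u)   # seconds
    return t_frame * data_rate_bps

print(min_frame_bits(4000, 0.020, 0.5))   # -> 160.0 bits
```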
Review Questions
Consider the use of 10-Kbit frames on a 10 Mbps satellite
channel with 270 ms delay. What is the link utilization for the stop-and-
wait ARQ technique, assuming P = 10^-3?

If the value of P (the frame error probability) is given, the previous formula is updated as:
Link utilization = (1−P)/(1+2a)
where a = (Propagation Time)/(Transmission Time)
Propagation time = 270 ms
Transmission time = (frame length)/(data rate) = (10 Kbit)/(10 Mbps) = 1 ms
Hence, a = 270/1 = 270
Link utilization = 0.999/(1+2×270) ≈ 0.0018 = 0.18%
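A one-function sketch of the updated formula:

```python
def sw_arq_utilization(frame_bits, rate_bps, t_prop_s, p_err):
    """Stop-and-wait ARQ utilization U = (1 - P) / (1 + 2a)."""
    a = t_prop_s / (frame_bits / rate_bps)   # a = t_prop / t_frame
    return (1 - p_err) / (1 + 2 * a)

u = sw_arq_utilization(10_000, 10_000_000, 0.270, 1e-3)
print(f"{u:.2%}")   # -> 0.18%
```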
Error control
• Error control is concerned with ensuring that all frames are eventually
delivered (possibly in order) to a destination. How? Three items are required:
• Acknowledgements: Typically, reliable delivery is achieved using the
"acknowledgments with retransmission" paradigm, whereby the receiver returns a
special acknowledgment (ACK) frame to the sender indicating the correct receipt of a
frame.
• In some systems, the receiver also returns a negative acknowledgment (NACK) for
incorrectly received frames. This is nothing more than a hint to the sender so that it can
retransmit a frame right away without waiting for a timer to expire.
• Timers: One problem that simple ACK/NACK schemes fail to address is recovering from
a frame that is lost and, as a result, fails to solicit an ACK or NACK. What happens if an
ACK or NACK becomes lost?
• Retransmission timers are used to resend frames that don't produce an ACK. When sending a
frame, schedule a timer to expire at some time after the ACK should have been returned. If
the timer goes off, retransmit the frame.
• Sequence Numbers: Retransmissions introduce the possibility of duplicate frames. To
suppress duplicates, add sequence numbers to each frame, so that a receiver can
distinguish between new frames and old copies.
Error Correction and Detection
• It is physically impossible for any data recording or transmission
medium to be 100% perfect 100% of the time over its entire
expected useful life.
o In data communication, line noise is a fact of life (e.g., signal
attenuation, natural phenomenon such as lightning, and the
telephone repairman).
• As more bits are packed onto a square centimeter of disk storage and
communications transmission speeds increase, the likelihood of
error increases, sometimes geometrically.
• Thus, error detection and correction is critical to accurate data
transmission, storage and retrieval.
• Detecting and correcting errors requires redundancy --
sending additional information along with the data.
Types of Errors
• There are two main types of errors in transmissions:
1. Single-bit error: only one bit of the data unit is changed from 1 to 0
or from 0 to 1.
2. Burst error: two or more bits in the data unit are changed from 1 to 0
or from 0 to 1. In a burst error, it is not necessary that only consecutive bits are
changed. The length of a burst error is measured from the first changed bit to the last
changed bit.
Error Detection vs Error Correction
There are two strategies for dealing with errors:
❑ Error Detecting Codes: Include enough redundancy bits to
detect errors and use ACKs and retransmissions to recover from the
errors.
❑ Error Correcting Codes: Include enough redundancy to detect
and correct errors. The use of error-correcting codes is often referred
to as forward error correction.
Error Detection Techniques
Error detection means deciding whether the received
data is correct or not without having a copy of the
original message.
Error detection uses the concept of redundancy,
which means adding extra bits for detecting errors at
the destination.
Vertical Redundancy Check (VRC)
• Append a single bit at the end of a data block such that the number of
ones is even
→ Even parity (odd parity is similar)
0110011 → 01100110
0110001 → 01100011
• VRC is also known as a parity check. It detects all odd-number errors in a data
block.
• The problem with parity is that it can only detect odd numbers of
bit substitution errors, i.e. 1-bit, 3-bit, 5-bit, etc. errors. If there are two,
four, six, etc. bits transmitted in error, VRC will not
be able to detect the error.
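The even-parity rule above in code (a minimal sketch; the last line shows why a two-bit error slips through undetected):

```python
def add_even_parity(bits: str) -> str:
    """Append one bit so the total number of 1s is even."""
    return bits + str(bits.count("1") % 2)

def check_even_parity(codeword: str) -> bool:
    """Valid iff the total number of 1s is even."""
    return codeword.count("1") % 2 == 0

print(add_even_parity("0110011"))      # -> 01100110
print(check_even_parity("01100111"))   # -> False (single-bit error caught)
print(check_even_parity("10100110"))   # -> True  (two-bit error missed!)
```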
Longitudinal Redundancy Check (LRC)
• Longitudinal Redundancy Checks (LRC) seek to overcome the weakness of simple,
bit-oriented, one-directional parity checking.
• LRC adds a new character (instead of a bit) called the Block Check Character (BCC)
to each block of data. It is determined like parity, but counted longitudinally through
the message (as well as vertically).
• It has better performance than VRC, as it detects 98% of burst errors (>10
errors), but it is less capable of detecting single errors.
• If two bits in one data unit are damaged and two bits in exactly the same positions
in another data unit are also damaged, the LRC checker will not detect the error.

Original data: 11100111 11011101 00111001 10101001

11100111
11011101
00111001
10101001
--------
10101010  ← LRC (column-wise even parity)

Transmitted: 11100111 11011101 00111001 10101001 10101010
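Column-wise even parity is just the XOR of the data bytes, so the BCC from the example can be computed as (a sketch):

```python
from functools import reduce

def lrc(block):
    """Block Check Character: XOR (column-wise even parity) of all bytes."""
    return reduce(lambda acc, byte: acc ^ byte, block, 0)

data = [0b11100111, 0b11011101, 0b00111001, 0b10101001]
print(f"{lrc(data):08b}")   # -> 10101010
```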
Cyclic Redundancy Check (CRC)
• A cyclic redundancy check (CRC) is a non-secure hash function designed to
detect accidental changes to raw computer data, and is commonly used in
digital networks and storage devices such as hard disk drives.
• CRCs are so called because the check (data verification) code is
a redundancy (it adds zero information) and the algorithm is based on cyclic
codes.
• The term CRC may refer to the check code or to the function that calculates it,
which accepts data streams of any length as input but always outputs a fixed-
length code.
• The divisor in a cyclic code is normally called the generator polynomial or
simply the generator. A good generator should satisfy the following:
• 1. It should have at least two terms.
• 2. The coefficient of the term x^0 should be 1.
• 3. It should not divide x^t + 1, for t between 2 and n − 1.
• 4. It should have the factor x + 1.
Cyclic Redundancy Check
Let M(x) be the message polynomial.
Let P(x) be the generator polynomial:
P(x) is fixed for a given CRC scheme;
P(x) is known by both sender and receiver.
• Sending
1. Multiply M(x) by x^n
2. Divide M(x)·x^n by P(x)
3. Ignore the quotient and keep the remainder C(x)
4. Form and send F(x) = M(x)·x^n + C(x)
• Receiving
1. Receive F′(x)
2. Divide F′(x) by P(x)
3. Accept if the remainder is 0, reject otherwise
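The sending and receiving steps can be sketched with bitwise modulo-2 division (values taken from Example 1 further down: message 10011101, generator x^3 + 1, i.e. divisor 1001, n = 3):

```python
def crc_remainder(bits: str, divisor: str) -> str:
    """Modulo-2 division; returns the remainder (len(divisor)-1 bits)."""
    n = len(divisor) - 1
    work = [int(b) for b in bits]
    div = [int(b) for b in divisor]
    for i in range(len(work) - n):
        if work[i]:                       # leading 1: XOR-subtract divisor
            for j in range(len(div)):
                work[i + j] ^= div[j]
    return "".join(map(str, work[-n:]))

def crc_send(msg: str, divisor: str) -> str:
    """Append n zeros, divide, replace the zeros with the remainder C(x)."""
    n = len(divisor) - 1
    return msg + crc_remainder(msg + "0" * n, divisor)

frame = crc_send("10011101", "1001")
print(frame)                                  # -> 10011101100
print(crc_remainder(frame, "1001"))           # -> 000 (accepted)
print(crc_remainder("10111101100", "1001"))   # -> 100 (error detected)
```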
CRC encoder and decoder
Division in CRC encoder
Division in the CRC decoder for two cases: the no-error case and the error case
CRC Example 1
A bit stream 10011101 is transmitted using the standard CRC method.
The generator polynomial is x^3 + 1.
Show the actual bit string transmitted.
Suppose the third bit from the left is inverted during transmission.
Show that this error is detected at the receiver's end.

Our generator G(x) = x^3 + 1 is encoded as 1001.
Because the generator polynomial is of degree three, we append three
zeros to the lower end of the frame to be transmitted.
Hence, after appending the 3 zeros, the bit stream is
10011101000.
On dividing the message by the generator after appending three
zeros to the frame, we get a remainder of 100.
We then do modulo-2 subtraction of the remainder
from the bit stream with the three zeros appended.
The actual frame transmitted is 10011101100.
Now suppose the third bit from the left is garbled and the frame is received as
10111101100. On dividing this by the generator polynomial we get a remainder of
100, which shows that an error has occurred. Had the received frame been error-free, we
would have got a remainder of zero. See below -
CRC Example 2
Checksum
• Checksum is the error detection scheme used in IP, TCP & UDP.
• Here, the data is divided into k segments, each of m bits. At the sender's end,
the segments are added using 1's complement arithmetic to get the sum.
The sum is complemented to get the checksum. The checksum segment is
sent along with the data segments.
• At the receiver's end, all received segments are added using 1's complement
arithmetic to get the sum. The sum is complemented. If the result is zero, the
received data is accepted; otherwise it is discarded.
• The checksum detects all errors involving an odd number of bits. It also
detects most errors involving an even number of bits.
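A sketch of the scheme with 8-bit segments (the key detail is the end-around carry of 1's complement addition):

```python
def ones_complement_sum(segments, bits=8):
    """Add segments in 1's complement: carries wrap around."""
    mask = (1 << bits) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> bits)   # end-around carry
    return total

def make_checksum(segments, bits=8):
    """Sender: complement of the 1's complement sum."""
    return ones_complement_sum(segments, bits) ^ ((1 << bits) - 1)

def accept(segments_plus_checksum, bits=8):
    """Receiver: complement of the sum must be zero (sum is all ones)."""
    return ones_complement_sum(segments_plus_checksum, bits) == (1 << bits) - 1

data = [0b10101001, 0b00111001]
ck = make_checksum(data)
print(f"{ck:08b}")           # -> 00011101
print(accept(data + [ck]))   # -> True
```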
Checksum
Checksum Example
Practice Question:
For the pattern 10101001 00111001 00011101, find out whether any
transmission errors have occurred.
Error Correction
• Messages (frames) consist of d data (message) bits and r redundancy
bits, yielding an n = (d + r)-bit codeword.
• Hamming Distance: Given any two codewords, we can
determine how many of the bits differ: simply exclusive-OR (XOR) the
two words and count the number of 1 bits in the result.
• Significance? If two codewords are d bits apart, d errors are required
to convert one to the other.
• A code's Hamming distance is defined as the minimum Hamming
distance between any two of its legal codewords (from all possible
codewords).
• To detect d 1-bit errors requires a Hamming distance of
at least d + 1.
• To correct d errors requires a Hamming distance of at least
2d + 1. Intuitively, after d errors, the garbled message is still closer
to the original message than to any other legal codeword.
Let us find the Hamming distance between two pairs of
words.
1. The Hamming distance d(000, 011) is 2 because
000 ⊕ 011 = 011 (two 1s).
2. The Hamming distance d(10101, 11110) is 3 because
10101 ⊕ 11110 = 01011 (three 1s).
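The XOR-and-count method in code (a sketch for equal-length bit strings):

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions in which a and b differ."""
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("000", "011"))       # -> 2
print(hamming_distance("10101", "11110"))   # -> 3
```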
Hamming Code
Hamming code is a set of error-correction codes that can be used
to detect and correct the errors that can occur when data is moved
or stored from the sender to the receiver. It is a technique developed by
R.W. Hamming for error correction.
Redundant bits –
Redundant bits are extra binary bits that are generated and added to the
information-carrying bits of a data transfer to ensure that no bits were lost during
the data transfer.

The number of redundant bits can be calculated using the following formula:
2^r ≥ m + r + 1, where r = number of redundant bits, m = number of data bits.
Suppose the number of data bits is 7. The smallest r that satisfies
2^r ≥ 7 + r + 1 is r = 4 (since 2^4 = 16 ≥ 12).
Thus, the number of redundant bits = 4.
Hamming Code
Parity bits –
A parity bit is a bit appended to a group of binary bits to ensure that the
total number of 1's in the data is even or odd. Parity bits are used for
error detection. There are two types of parity bits:

❑ Even parity bit –
In the case of even parity, for a given set of bits, the number of
1's is counted. If that count is odd, the parity bit value is set to
1, making the total count of occurrences of 1's an even number.
If the total number of 1's in a given set of bits is already even,
the parity bit's value is 0.
❑ Odd parity bit –
In the case of odd parity, for a given set of bits, the number of
1's is counted. If that count is even, the parity bit value is set
to 1, making the total count of occurrences of 1's an odd
number. If the total number of 1's in a given set of bits is already
odd, the parity bit's value is 0.
Hamming Code
Create the code word as follows:
1) Mark all bit positions that are powers of two as parity bits. (positions 1, 2, 4, 8, 16, 32, 64, etc.)
2) All other bit positions are for the data to be encoded. (positions 3, 5, 6, 7, 9, 10, 11, 12, 13, 14,
15, 17, etc.)
3) Each parity bit calculates the parity for some of the bits in the code word. The position of the
parity bit determines the sequence of bits that it alternately checks and skips.
Position 1: check 1 bit, skip 1 bit, check 1 bit, skip 1 bit, etc. (1,3,5,7,9,11,13,15,...)
Position 2: check 2 bits, skip 2 bits, check 2 bits, skip 2 bits, etc. (2,3,6,7,10,11,14,15,...)
Position 4: check 4 bits, skip 4 bits, check 4 bits, skip 4 bits, etc.
(4,5,6,7,12,13,14,15,20,21,22,23,...)
Position 8: check 8 bits, skip 8 bits, check 8 bits, skip 8 bits, etc. (8-15,24-31,40-47,...)
Position 16: check 16 bits, skip 16 bits, check 16 bits, skip 16 bits, etc. (16-31,48-63,80-95,...)
Position 32: check 32 bits, skip 32 bits, check 32 bits, skip 32 bits, etc. (32-63,96-127,160-191,...)
etc.
4) Set a parity bit to 1 if the total number of ones in the positions it checks is odd. Set a parity bit
to 0 if the total number of ones in the positions it checks is even.
Hamming Code
Hamming Code
A byte of data: 10011010
Create the data word, leaving spaces for the parity bits: _ _ 1 _ 0 0 1 _ 1 0 1 0
Calculate the parity for each parity bit (a ? represents the bit position being set):
Position 1 checks bits 1,3,5,7,9,11:
? _ 1 _ 0 0 1 _ 1 0 1 0. Even parity so set position 1 to a 0: 0 _ 1 _ 0 0 1 _ 1 0 1 0
Position 2 checks bits 2,3,6,7,10,11:
0 ? 1 _ 0 0 1 _ 1 0 1 0. Odd parity so set position 2 to a 1: 0 1 1 _ 0 0 1 _ 1 0 1 0
Position 4 checks bits 4,5,6,7,12:
0 1 1 ? 0 0 1 _ 1 0 1 0. Odd parity so set position 4 to a 1: 0 1 1 1 0 0 1 _ 1 0 1 0
Position 8 checks bits 8,9,10,11,12:
0 1 1 1 0 0 1 ? 1 0 1 0. Even parity so set position 8 to a 0: 0 1 1 1 0 0 1 0 1 0 1 0
Code word: 011100101010.
Finding and fixing a bad bit

The above example created a code word of 011100101010.
Suppose the word that was received was 011100101110 instead. Then the receiver could calculate
which bit was wrong and correct it. The method is to verify each check bit. Write down all the
incorrect parity bits. Doing so, you will discover that parity bits 2 and 8 are incorrect. It is not an
accident that 2 + 8 = 10, and that bit position 10 is the location of the bad bit. In general, check each
parity bit, and add the positions that are wrong; this will give you the location of the bad bit.
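The worked example can be reproduced in code (a sketch hard-wired to 8 data bits with parity bits at positions 1, 2, 4 and 8, even parity throughout):

```python
def hamming_encode(data: str) -> str:
    """Place 8 data bits at non-power-of-two positions, fill parity bits."""
    n = len(data) + 4                  # 8 data bits need 4 parity bits
    word = [0] * (n + 1)               # index 0 unused; positions 1..n
    it = iter(data)
    for pos in range(1, n + 1):
        if pos & (pos - 1):            # not a power of two -> data position
            word[pos] = int(next(it))
    for p in (1, 2, 4, 8):             # even parity over positions with bit p
        word[p] = sum(word[i] for i in range(1, n + 1)
                      if i & p and i != p) % 2
    return "".join(map(str, word[1:]))

def hamming_syndrome(code: str) -> int:
    """0 if all parity checks pass; otherwise the position of the bad bit."""
    word = [0] + [int(b) for b in code]
    return sum(p for p in (1, 2, 4, 8)
               if sum(word[i] for i in range(1, len(code) + 1) if i & p) % 2)

cw = hamming_encode("10011010")
print(cw)                                  # -> 011100101010
print(hamming_syndrome("011100101110"))    # -> 10 (bit 10 is bad)
```

`hamming_syndrome` adds up the positions of the failing parity bits, which is exactly the "add the positions that are wrong" rule stated above.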