
DATA COMMUNICATION AND

COMPUTER NETWORKS (ECE-4308)


Overview of Data link layer

Firew Tadele
Outline

 Overview of Data link layer protocols
 Frame structure, flow control, error control, DLC protocols, & MAC addressing
Data Link Layer Responsibilities

• Framing
• Media access control
• Flow control
• Error control
i. Framing
 Framing separates a message going from one source to a destination from other messages going to other destinations by adding a sender address and a destination address.

 The destination address defines where the packet is to go; the sender address
helps the recipient acknowledge the receipt.

 Although the whole message could be packed in one frame, that is not normally
done.

Reason:
 a frame can be very large, making flow and error control very inefficient.
 When a message is carried in one very large frame, even a single-bit error
would require the retransmission of the whole message.
Cont.
Frames can be of fixed or variable size

 Fixed-size framing: there is no need to define the boundaries of the frames; the size itself can be used as a delimiter.
 An example of this type of framing is the ATM wide-area network, which uses
frames of fixed size called cells.
 Variable-size framing: prevalent in local area networks. Here we need a way to define the end of one frame and the beginning of the next.
 Historically, two approaches were used for this purpose: a character-oriented
(byte) approach and a bit-oriented approach.
Cont.
Character-Oriented (byte-oriented) Protocols

 The data to be carried are 8-bit characters from a coding system such as ASCII.
 The header, which normally carries the source and destination addresses and
other control information, and the trailer, which carries error detection or error
correction redundant bits, are also multiples of 8 bits.
 To separate one frame from the next, an 8-bit (1-byte) flag is added at the
beginning and the end of a frame.
 Character-oriented framing was popular when only text was exchanged by the data
link layers. The flag could be selected to be any character not used for text
communication.
Cont.
Bit-Oriented Protocols

 In a bit-oriented protocol, the data section of a frame is a sequence of bits to be interpreted by the upper layer as text, graphics, audio, video, and so on.
 However, in addition to headers (and possible trailers), we still need a delimiter to separate one frame from the other. Most protocols use a special 8-bit pattern flag, 01111110, as the delimiter to define the beginning and the end of the frame.
Cont.

 This flag can create the same type of problem we saw in the byte-oriented protocols.
 That is, if the flag pattern appears in the data, we need to somehow inform the receiver
that this is not the end of the frame.
 We do this by stuffing 1 single bit (instead of 1 byte) to prevent the pattern from
looking like a flag.
 The strategy is called bit stuffing.
 In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an extra 0 is
added. This extra stuffed bit is eventually removed from the data by the receiver.
 Note that the extra bit is added after one 0 followed by five 1s regardless of the
value of the next bit.
 This guarantees that the flag field sequence does not appear in the frame.
Cont.

Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s follow a 0 in the data, so that the receiver does not mistake the data for the flag 01111110.
Cont.

 Note that even if we have a 0 after the five 1s, we still stuff a 0. The 0 will be removed by the receiver.
 This means that if the flag-like pattern 01111110 appears in the data, it will change to 011111010 (stuffed) and will not be mistaken for a flag by the receiver.
 The real flag 01111110 is not stuffed by the sender and is recognized by the receiver as a flag.
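As a rough illustration (not part of the original slides), the bit-stuffing and de-stuffing rules can be sketched in Python on strings of '0'/'1' characters. The function names are made up for this example, and the rule used is the standard HDLC one (insert a 0 after every run of five 1s), which matches the behavior described above.

```python
FLAG = "01111110"

def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s in the payload."""
    out = []
    ones = 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:          # five 1s in a row: stuff a 0 and restart the count
            out.append("0")
            ones = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out = []
    ones = 0
    skip = False
    for b in bits:
        if skip:               # this is the stuffed 0; drop it
            skip = False
            ones = 0
            continue
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            skip = True
    return "".join(out)

payload = "0111111001111101"                    # contains flag-like runs of 1s
frame = FLAG + bit_stuff(payload) + FLAG        # flags delimit the stuffed payload
assert bit_unstuff(bit_stuff(payload)) == payload
assert FLAG not in bit_stuff(payload)           # the stuffed payload can never contain the flag
```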
ii. Media Access control

 The data link layer can further be divided into two sub-layers:


 the upper sub-layer that is responsible for flow and error control is called the logical link
control (LLC) layer;
 the lower sub-layer that is mostly responsible for multiple access resolution is called the media
access control (MAC) layer

 When nodes or stations are connected and use a common link, called a multipoint or
broadcast link, we need a multiple-access protocol to coordinate access to the link.
iii. Flow Control

 Flow control refers to a set of procedures used to restrict the amount of data that the
sender can send before waiting for acknowledgment.
 Flow control coordinates the amount of data that can be sent before receiving an
acknowledgment and is one of the most important duties of the data link layer.
 In most protocols, flow control is a set of procedures that tells the sender how much
data it can transmit before it must wait for an acknowledgment from the receiver.
 The flow of data must not be allowed to overwhelm the receiver. Any receiving device
has a limited speed at which it can process incoming data and a limited amount of
memory in which to store incoming data.
 The receiving device must be able to inform the sending device before those limits are
reached and to request that the transmitting device send fewer frames or stop temporarily.
 Incoming data must be checked and processed before they can be used.
iv. Error control

 We need to build systems that are robust to errors in data.


 There is no way to guarantee that all bits will be sent uncorrupted.
 One way to cope with this is to detect errors and request that corrupted data
should be retransmitted.
 Detecting errors cannot be guaranteed either.
 We can at least make it extremely unlikely that errors will go undetected.
Cont.
 Detecting Errors

 Usually, noise levels are fairly low and most of the bits are received correctly by
the receiver.
 The question is, how can the receiver know when an error has occurred?
 Because errors occur randomly, there is no way of knowing with complete
certainty if the data is correct.
 The best we can do is detect most errors.
 Even when we detect an error, the next question is: what to do about it?
Cont.
Error Detection Process

 For a given frame of bits, additional bits that constitute an error-detecting code are
added by the transmitter.
 For a data block of k bits, the error-detecting algorithm yields an error-detecting code of n – k bits, where (n – k) < k.
 The error-detecting code, also referred to as the check bits, is appended to the data
block to produce a frame of n bits,
 The receiver separates the incoming frame into the k bits of data and (n – k) bits of
the error-detecting code.
 The receiver performs the same error-detecting calculation on the data bits and
compares this value with the value of the incoming error-detecting code.
 A detected error occurs if and only if there is a mismatch.
Cont.
Types of Error

 single bit errors


 only one bit altered
 caused by white noise
 burst errors
 a contiguous sequence of B bits in which the first, last, and any number of intermediate bits are in error
 caused by impulse noise or by fading in wireless channels
 effect is greater at higher data rates
1. Parity Checking (Vertical Redundancy Check (VRC))

 One of the most common ways of checking to see if an error occurs is to count the
bits in a character to see if there is an even or odd number.

 Before transmission, an extra bit (parity bit) is appended to the character to force
the number of bits to be even (or odd).

 If the received character does not have an even (or odd) number of bits then an
error must have occurred.

 Both the sender and receiver must know which form of parity to use.
Cont.

 A character such as 0110001 would be transmitted as:

Odd Parity: 0110001 0 (There are an odd number of 1s)

Even Parity: 0110001 1 (There are an even number of 1s)

 Parity checking will detect a single error in a character (in fact, any odd number of bit errors) but not double errors.

7 bits of data (count of 1 bits)    8 bits including parity
                                    Even parity         Odd parity
0000000 (0)                         00000000 (0)        10000000 (1)
1010001 (3)                         11010001 (4)        01010001 (3)
1101001 (4)                         01101001 (4)        11101001 (5)
1111111 (7)                         11111111 (8)        01111111 (7)
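A small illustrative sketch of parity generation and checking in Python (not from the slides); here the parity bit is appended to the 7 data bits, as in the 0110001 example above, while the table prepends it — the position of the parity bit does not matter.

```python
def add_parity(data_bits: str, even: bool = True) -> str:
    """Append one parity bit so the total number of 1s is even (or odd)."""
    ones = data_bits.count("1")
    parity = ones % 2 if even else (ones + 1) % 2
    return data_bits + str(parity)

def parity_ok(codeword: str, even: bool = True) -> bool:
    """A single flipped bit changes the overall parity and is detected."""
    ones = codeword.count("1")
    return (ones % 2 == 0) if even else (ones % 2 == 1)

assert add_parity("0110001", even=True)  == "01100011"   # even parity
assert add_parity("0110001", even=False) == "01100010"   # odd parity
assert parity_ok("01100011") is True
assert parity_ok("01100001") is False     # one bit flipped: detected
assert parity_ok("01000001") is True      # two bits flipped: missed
```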
2. Checksum
 Another simple way of checking if there has been an error in a block of data is to
find a checksum.
 Imagine we send the data 121, 17, 29 and 47. Adding these numbers up, we get
214.
 We actually send 121,17,29,47 and 214.
 The receiver can total up the first four numbers and compare the result with the last one.
 A difference means an error has occurred.
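A minimal sketch of this sum-and-append idea (illustrative only; practical checksums such as the Internet checksum also fold carries and complement the sum, which this toy version does not).

```python
def make_checksum(values):
    """Sender: append the sum of the data values to the block."""
    return list(values) + [sum(values)]

def checksum_ok(block):
    """Receiver: re-add the data values and compare with the transmitted sum."""
    *data, received_sum = block
    return sum(data) == received_sum

block = make_checksum([121, 17, 29, 47])          # -> [121, 17, 29, 47, 214]
assert block[-1] == 214
assert checksum_ok(block)
assert not checksum_ok([121, 17, 29, 48, 214])    # a corrupted value is detected
```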
3. Cyclic Redundancy Check

 A far more effective way of detecting errors in a block of data is to use a Cyclic
Redundancy Code.
 In CRC, a number is mathematically calculated for a packet by its source
computer, and then recalculated by the destination computer.
 For a block of k bits, the transmitter generates an (n – k)-bit frame check sequence (FCS).
 It transmits the resulting n bits, which are exactly divisible by some predetermined number.
 The receiver then divides the incoming frame by that number and, if there is no
remainder, assumes there was no error.
 Can state this procedure in three equivalent ways: modulo 2 arithmetic,
polynomials, and digital logic
1. Modulo 2 Arithmetic
 uses binary addition with no carries, which is just the exclusive-OR (XOR) operation; binary subtraction with no carries is also interpreted as the XOR operation.
 Define T as the n-bit frame to be transmitted, D as the k-bit block of data, F as the (n – k)-bit FCS (the last n – k bits of T), and P as a predetermined pattern of n – k + 1 bits (the divisor). Then T = 2^(n-k) D + F.
 We would like T/P to have no remainder. By multiplying D by 2^(n-k) we have in effect shifted it to the left by n – k bits and padded out the result with zeros; adding F concatenates the FCS.
 F is chosen as the remainder of 2^(n-k) D divided by P, which makes T exactly divisible by P.
2. Polynomials

 A second way of viewing the CRC process is to express all values as polynomials in a
dummy variable X, with binary coefficients.
 The coefficients correspond to the bits in the binary number. Thus, for D = 110011 we have D(X) = X^5 + X^4 + X + 1, and for P = 11001, we have P(X) = X^4 + X^3 + 1.
 Arithmetic operations are again modulo 2.
 The CRC process can now be described as: divide X^(n-k) D(X) by P(X) to obtain a quotient Q(X) and a remainder R(X), i.e. X^(n-k) D(X) / P(X) = Q(X) + R(X)/P(X), and then transmit T(X) = X^(n-k) D(X) + R(X), which is exactly divisible by P(X).
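A short illustrative sketch of the modulo-2 division in Python (function names are made up): it appends n – k zeros to D, divides by P using XOR, uses the remainder as the FCS, and checks that the received frame leaves no remainder. D and P are the values from the polynomial example above.

```python
def mod2_remainder(dividend: str, divisor: str) -> str:
    """Modulo-2 (XOR) long division; returns the remainder as len(divisor)-1 bits."""
    bits = list(dividend)
    for i in range(len(bits) - len(divisor) + 1):
        if bits[i] == "1":                        # leading bit is 1: subtract (XOR) the divisor
            for j, p in enumerate(divisor):
                bits[i + j] = "0" if bits[i + j] == p else "1"
    return "".join(bits[-(len(divisor) - 1):])

def make_frame(data: str, p: str) -> str:
    """Sender: append the FCS, the remainder of 2^(n-k) * D divided by P."""
    fcs = mod2_remainder(data + "0" * (len(p) - 1), p)
    return data + fcs

def frame_ok(frame: str, p: str) -> bool:
    """Receiver: divide the incoming frame by P; no remainder means no detected error."""
    return mod2_remainder(frame, p) == "0" * (len(p) - 1)

D, P = "110011", "11001"        # D and P from the polynomial example above
T = make_frame(D, P)            # -> "1100111001": data followed by the 4-bit FCS 1001
assert frame_ok(T, P)
corrupted = T[:3] + ("1" if T[3] == "0" else "0") + T[4:]
assert not frame_ok(corrupted, P)   # a single-bit error leaves a nonzero remainder
```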
3. Digital Logic
Protocols
o Data link layer can combine framing, flow control, and error control to achieve the delivery of data
from one node to another.
o Data link protocols can be divided into those that can be used for noiseless (error-free) channels and those that can be used for noisy (error-creating) channels.
Noiseless channels
o channel in which no frames are lost, duplicated, or corrupted
Simplest Protocol
o is one that has no flow or error control
o a unidirectional protocol in which data frames travel in only one direction, from the sender to the receiver.
Stop-and-wait Protocol
 Sender sends one frame, stops until it receives confirmation from the receiver
(okay to go ahead), and then sends the next frame.
 We still have unidirectional communication for data frames, but auxiliary ACK frames (simple tokens of acknowledgment) travel in the other direction.
Noisy channels

 We can either ignore the error (as we sometimes do) or add error control to our protocols.
Stop-and-Wait Automatic Repeat Request (Stop and wait ARQ)
 Error correction in Stop-and-Wait ARQ is done by keeping a copy of the sent frame and retransmitting the frame when the timer expires.
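A toy sketch of the Stop-and-Wait ARQ sender logic (illustrative; send_frame, wait_for_ack, and the timeout value are placeholders, and the ACK is assumed to carry the number of the next expected frame).

```python
def stop_and_wait_send(frames, send_frame, wait_for_ack, timeout=1.0):
    """Send one frame at a time; keep a copy and retransmit when the timer expires."""
    seq = 0                                   # a 1-bit sequence number is enough
    for data in frames:
        copy = (seq, data)                    # keep a copy of the sent frame
        while True:
            send_frame(*copy)                 # (re)transmit the stored copy
            ack = wait_for_ack(timeout)       # ACK number, or None if the timer expired
            if ack == (seq + 1) % 2:          # the expected ACK: move to the next frame
                break
            # timer expired (or unexpected ACK): loop and retransmit the copy
        seq = (seq + 1) % 2
```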
Go-Back-N Automatic Repeat Request (Go-Back-N ARQ)
 Multiple frames must be in transition while waiting for acknowledgment.
 We let more than one frame be outstanding to keep the channel busy while the sender is waiting for acknowledgment.
 The sender can send several frames before receiving acknowledgments; it keeps a copy of these frames until the acknowledgments arrive.
Sliding Window
• is an abstract concept that defines the range of sequence numbers that is the concern
of the sender and receiver.
• In other words, the sender and receiver need to deal with only part of the possible
sequence numbers.
• The range which is the concern of the sender is called the send sliding window
• The range that is the concern of the receiver is called the receive sliding window
 The send window can slide one or more slots when a valid acknowledgment
arrives.
 The receive window slides when a correct frame has arrived; sliding occurs one slot at a time.
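A toy sketch (illustrative names, cumulative-ACK semantics assumed) of a Go-Back-N send window: outstanding frames are buffered, the window slides when an acknowledgment arrives, and a timeout resends every outstanding frame.

```python
from collections import deque

class GoBackNSender:
    """Toy Go-Back-N send window; send_frame is a placeholder for the real transmit call."""
    def __init__(self, window_size: int, send_frame):
        self.window_size = window_size
        self.send_frame = send_frame           # callable that actually transmits a frame
        self.base = 0                          # oldest unacknowledged sequence number
        self.next_seq = 0                      # next sequence number to use
        self.outstanding = deque()             # copies kept until acknowledged

    def can_send(self) -> bool:
        return self.next_seq - self.base < self.window_size

    def send(self, data) -> None:
        if not self.can_send():
            raise RuntimeError("send window full; wait for an ACK")
        self.outstanding.append((self.next_seq, data))
        self.send_frame(self.next_seq, data)
        self.next_seq += 1

    def on_ack(self, ack_no: int) -> None:
        """Cumulative ACK: slide the window past every frame numbered below ack_no."""
        while self.outstanding and self.outstanding[0][0] < ack_no:
            self.outstanding.popleft()
        self.base = max(self.base, ack_no)

    def on_timeout(self) -> None:
        """Go back N: resend every frame that is still outstanding."""
        for seq, data in self.outstanding:
            self.send_frame(seq, data)

sender = GoBackNSender(window_size=4, send_frame=lambda seq, d: print("send", seq, d))
for i in range(4):
    sender.send(f"frame-{i}")   # window is now full
sender.on_ack(2)                # frames 0 and 1 acknowledged; window slides by two
sender.send("frame-4")
```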
HDLC
o High-level Data Link Control (HDLC) is a bit-oriented protocol for communication over point-to-point and multipoint links. It implements the ARQ mechanisms discussed above.
We will discuss some concepts of:
 Configurations and Transfer Modes
 Frames
 Control Field
Configurations and Transfer Modes
o HDLC provides two common transfer modes that can be used in different
configurations:
 normal response mode (NRM) and
 asynchronous balanced mode (ABM)

Normal response mode

o The station configuration is unbalanced


Asynchronous balanced mode
o The station configuration is balanced

Frames
o HDLC defines three types of frames:
 information frames (I-frames)
 supervisory frames (S-frames) and
 unnumbered frames (U-frames)
o Each type of frame serves as an envelope for the transmission of a different type
of message.
o I-frames are used to transport user data and control information relating to user data
o S-frames are used only to transport control information
o U-frames are reserved for system management.
 Information carried by U-frames is intended for managing the link itself

Frame Format
Each frame in HDLC may contain up to six fields:
 a beginning flag field
 an address field
 a control field
 an information field
 a frame check sequence (FCS) field and
 an ending flag field.
HDLC frames
Control field

o The control field determines the type of frame and defines its functionality
o Control field format for the different frame types

Error Correction

 Correction of errors using an error-detecting code requires that the block of data be retransmitted.
 For wireless applications this approach is inadequate for two reasons:
1. The bit error rate on a wireless link can be quite high, which would result in a large number of
retransmissions.
2. In some cases, especially satellite links, the propagation delay is very long compared to the
transmission time of a single frame. With a long data link, an error in a single frame necessitates
retransmitting many frames.

 Instead, it would be desirable to enable the receiver to correct errors in an incoming transmission on the basis of the bits in that transmission.
 Error correction operates in a fashion similar to error detection but is capable of
correcting certain errors in a transmitted bit stream.
Error Correction Process
How Error Correction Works

 Error correction works by adding redundancy to the transmitted message.


 The redundancy makes it possible for the receiver to deduce what the original
message was, even in the face of a certain level of error rate.
 If we wish to transmit blocks of data of length k bits, we map each k-bit sequence into a unique n-bit code-word; the valid code-words differ significantly from one another.
 Typically, each valid code-word reproduces the original k data bits and
adds to them (n – k) check bits to form the n-bit code-word.
Cont.

 Then if an invalid code-word is received, assume the valid code-word is the one that
is closest to it, and use the input bit sequence associated with it.
 The ratio of redundant bits to data bits, (n – k)/k, is called the redundancy of the
code, and the ratio of data bits to total bits, k/n, is called the code rate.
 The code rate is a measure of how much additional bandwidth is required to carry
data at the same data rate as without the code.
 For example, a code rate of 1/2 requires double the transmission capacity of an
uncoded system to maintain the same data rate.
Hamming Distance

 The Hamming distance between two bit patterns is the number of dissimilar
bits.
 It measures the minimum number of substitutions required to change one string
into the other, or the number of errors that transform one string into the other.
 For example, the Hamming distance between 01000001 (‘A’) and 01000011
(‘C’) is 1 because there is only one dissimilar bit.
 One error in the wrong place can turn an ‘A’ into a ‘C’.
Cont.

 The Hamming distance between 01000001 (‘A’) and 01000010 (‘B’) is 2 because there are two dissimilar bits.

 It would take two errors in the wrong place to turn an ‘A’ into a ‘B’.

 Adding a parity bit ensures that there is at least a Hamming distance of 2 between any two code words.
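The definition can be written directly as a short function (illustrative):

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which the two bit patterns differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

assert hamming_distance("01000001", "01000011") == 1   # 'A' vs 'C'
assert hamming_distance("01000001", "01000010") == 2   # 'A' vs 'B'
```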
Hamming ECC

‘use of extra parity bits to allow the position identification of a single error’
1. Mark all bit positions that are powers of 2 as parity bits. (positions 1, 2, 4, 8, 16,
etc.)
2. All other bit positions are for the data to be encoded. (positions 3, 5, 6, 7, 9, 10,
11, 12, 13, 14, 15, etc.)
3. Each parity bit calculates the parity for some of the bits in the code word. The
position of the parity bit determines the sequence of bits that it checks.
 Position 1: checks bits 1, 3, 5, 7, 9, 11, ... (check 1 bit, skip 1 bit)
 Position 2: checks bits 2, 3, 6, 7, 10, 11, 14, 15, ... (check 2 bits, skip 2 bits)
 Position 4: checks bits 4, 5, 6, 7, 12, 13, 14, 15, 20, 21, 22, 23, ... (check 4 bits, skip 4 bits)
 Position 8: checks bits 8-15, 24-31, 40-47, ... (check 8 bits, skip 8 bits)
4. Set the parity bit to create even parity.
A Layout of Data and Check Bits that Achieves Our Design Criteria:

Bit position    12    11    10     9     8     7     6     5     4     3     2     1
Binary        1100  1011  1010  1001  1000  0111  0110  0101  0100  0011  0010  0001
Data bit        D8    D7    D6    D5    --    D4    D3    D2    --    D1    --    --
Check bit       --    --    --    --    C8    --    --    --    C4    --    C2    C1

C1 is a parity check on every data bit whose position is xxx1
  C1 = D1 exor D2 exor D4 exor D5 exor D7
C2 is a parity check on every data bit whose position is xx1x
  C2 = D1 exor D3 exor D4 exor D6 exor D7
C4 is a parity check on every data bit whose position is x1xx
  C4 = D2 exor D3 exor D4 exor D8
C8 is a parity check on every data bit whose position is 1xxx
  C8 = D5 exor D6 exor D7 exor D8

Why this ordering? Because we want the syndrome, the Hamming test word,
to yield the address of the error.
Example:
Data stored = 00111001
Check bits: C8 C4 C2 C1 = 0 1 1 1

Word fetched: the stored word with one bit in error; recomputing the check bits on the fetched data gives C8 C4 C2 C1 = 0 0 0 1.

Comparing:
                  C8 C4 C2 C1
Orig check bits    0  1  1  1
New check bits     0  0  0  1
Syndrome           0  1  1  0

Syndrome 0110 = 6, so bit position 6 is wrong, i.e. data bit D3 is wrong.
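The check-bit equations and the example above can be reproduced with a short sketch (illustrative; the fetched word shown here is an assumption consistent with the slide's syndrome, namely the stored word with data bit D3 flipped).

```python
def check_bits(data8: str) -> str:
    """Compute C8 C4 C2 C1 for 8 data bits given as D8..D1 (left to right)."""
    d = {8 - i: int(bit) for i, bit in enumerate(data8)}   # d[8]..d[1]
    c1 = d[1] ^ d[2] ^ d[4] ^ d[5] ^ d[7]
    c2 = d[1] ^ d[3] ^ d[4] ^ d[6] ^ d[7]
    c4 = d[2] ^ d[3] ^ d[4] ^ d[8]
    c8 = d[5] ^ d[6] ^ d[7] ^ d[8]
    return f"{c8}{c4}{c2}{c1}"

stored  = "00111001"          # example data from the slide
fetched = "00111101"          # assumed: the same word with data bit D3 flipped

orig = check_bits(stored)     # -> "0111"
new  = check_bits(fetched)    # -> "0001"
syndrome = int(orig, 2) ^ int(new, 2)
assert syndrome == 6          # bit position 6 holds D3, which is the bit in error
```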
Random Access

In random access or contention methods, no station is superior to another station and none is assigned control over another.

No station grants or denies another station permission to send.

At each instance, a station that has data to send uses a procedure defined by the
protocol to make a decision on whether or not to send.

This decision depends on the state of the medium (idle or busy). In other words,
each station can transmit when it desires on the condition that it follows the
predefined procedure, including the testing of the state of the medium.
Cont.

 In a random access method, each station has the right to the medium without being
controlled by any other station.
 However, if more than one station tries to send, there is an access conflict (a collision) and the frames will be either destroyed or modified.
 To avoid access conflict or to resolve it when it happens, each station follows a
procedure that answers the following questions:
 When can the station access the medium?

 What can the station do if the medium is busy?

 How can the station determine the success or failure of the transmission?

 What can the station do if there is an access conflict?


Cont.

The random access methods have evolved from a very interesting protocol known
as ALOHA, which used a very simple procedure called multiple access (MA).

The method was improved with the addition of a procedure that forces the station to
sense the medium before transmitting.
 This was called carrier sense multiple access.

 This method later evolved into two parallel methods: carrier sense multiple access with collision
detection (CSMA/CD) and carrier sense multiple access with collision avoidance (CSMA/CA).

 CSMA/CD tells the station what to do when a collision is detected.

 CSMA/CA tries to avoid the collision.


1. Pure ALOHA

ALOHA is the simplest multiple-access technique.

The basic idea of this mechanism is that a station can transmit data whenever it wants.

If the data is transmitted successfully, there is no problem.

But if a collision occurs, the station will transmit the frame again.

The sender can detect the collision if it does not receive an acknowledgment from the receiver.
Procedure for pure ALOHA protocol
• Under what conditions will the shaded packet arrive undamaged?
Vulnerable Period
2. Slotted ALOHA

• Divide time up into discrete intervals (slots), each corresponding to one packet.
• A station may start transmitting only at the beginning of a slot.
• The vulnerable period is now reduced to half: one frame transmission time instead of two.
Figure: Vulnerable time for the slotted ALOHA protocol
3. Carrier Sense Multiple Access (CSMA)

Protocols that listen for a carrier and act accordingly are called carrier sense protocols.
Carrier sensing allows the station to detect whether the medium is currently being used.
There are two variants of CSMA: CSMA/CD and CSMA/CA
The simplest CSMA scheme is for a station to sense the medium, sending packets
immediately if the medium is idle.
If the station waits for the medium to become idle, it is called persistent; otherwise it is called non-persistent.
Assumptions with CSMA Networks

1. Constant length packets
2. No errors, except those caused by collisions
3. Each host can sense the transmissions of all other hosts
4. The propagation delay is small compared to the transmission time

There are several types of CSMA protocols:


1. 1-Persistent CSMA
2. Non-Persistent CSMA
3. P-Persistent CSMA
1-Persistent CSMA
• Sense the channel.
– If busy, keep listening to the channel and transmit immediately when
the channel becomes idle.
– If idle, transmit a packet immediately.
• If collision occurs,
– Wait a random amount of time and start over again.
The protocol is called 1-persistent because the host transmits
with a probability of 1 whenever it finds the channel idle.
Non-Persistent CSMA
• Sense the channel.
– If busy, wait a random amount of time and sense the channel again
– If idle, transmit a packet immediately
• If collision occurs
– wait a random amount of time and start all over again

Tradeoff between 1-Persistent and Non-Persistent CSMA


• If B and C become ready in the middle of A’s transmission,
– 1-Persistent: B and C collide
– Non-Persistent: B and C probably do not collide
• If only B becomes ready in the middle of A’s transmission,
– 1-Persistent: B succeeds as soon as A ends
– Non-Persistent: B may have to wait
P-Persistent CSMA
• Optimal strategy: use P-Persistent CSMA
• Assume channels are slotted
• One slot = contention period (i.e., one round trip propagation delay)

1. Sense the channel


– If channel is idle, transmit a packet with probability p
• if a packet was transmitted, go to step 2
• if a packet was not transmitted, wait one slot and go to step 1
– If channel is busy, wait one slot and go to step 1.
2. Detect collisions
– If a collision occurs, wait a random amount of time and go to step 1
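The p-persistent rule above can be sketched as a small sender loop (illustrative; channel_idle, transmit, wait_slot, and backoff are placeholders for the real channel operations).

```python
import random

def p_persistent_send(channel_idle, transmit, wait_slot, p=0.1, backoff=lambda: None):
    """Toy p-persistent CSMA sender loop for a single packet.

    channel_idle() -> bool, transmit() -> bool (True if no collision),
    wait_slot() waits one contention slot; all are placeholders.
    """
    while True:
        if not channel_idle():
            wait_slot()                 # channel busy: wait one slot and sense again
            continue
        if random.random() < p:         # channel idle: transmit with probability p
            if transmit():
                return                  # success
            backoff()                   # collision: wait a random time, start over
        else:
            wait_slot()                 # with probability 1 - p, defer one slot
```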
Figure: Behavior of the three persistence methods
3.1 Carrier Sense Multiple Access/Collision Detection (CSMA/CD)

CSMA/CD is a multiple-access technique in which a station senses the medium before transmitting.


If no transmission is taking place at the time, the particular station can transmit.

 If two stations attempt to transmit simultaneously, this causes a collision, which is


detected by all participating stations.

After a random time interval, the stations that collided attempt to transmit again.

If another collision occurs, the time intervals from which the random waiting time is
selected are increased step by step.
 This is known as exponential backoff.
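A sketch of the exponential backoff rule (illustrative; the cap of 10 doublings follows classic Ethernet and is an assumption, since the slides do not give a specific limit).

```python
import random

def backoff_slots(attempt: int, max_exponent: int = 10) -> int:
    """After the k-th consecutive collision, wait a random number of slots in [0, 2^k - 1]."""
    k = min(attempt, max_exponent)
    return random.randint(0, 2 ** k - 1)

# first collision: 0-1 slots, second: 0-3 slots, third: 0-7 slots, ...
print([backoff_slots(k) for k in range(1, 6)])
```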
Flow diagram for CSMA/CD
3.2 Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)

 We need to avoid collisions on wireless networks because they cannot be detected.


 Collisions are avoided by deferring transmission even if the channel is found idle.
 When an idle channel is found, the station does not send immediately. It waits for a period of time called the interframe space (IFS).
 In CSMA/CA, the interframe space can be used to define the priority of a station or a frame.
 In CSMA/CA, if the station finds the channel busy, it does not restart the timer of
the contention window (amount of time divided in time slots); it stops the timer and
restarts it when the channel becomes idle.
Controlled access

In controlled access, the stations consult one another to find which station has the right to send.

A station cannot send unless it has been authorized by the other stations.
1. Reservation

 In the reservation method, a station needs to make a reservation before sending data.
 Time is divided into intervals: In each interval, a reservation frame precedes the data
frames sent in that interval.
 If there are N stations in the system, there are exactly N reservation minislots in the reservation frame.
 Each minislot belongs to a station. When a station needs to send a data frame, it makes a reservation in its own minislot.
 The stations that have made reservations can send their data frames after the reservation
frame.
Cont.

 Consider, for example, a situation with five stations and a five-minislot reservation frame.
 In the first interval, only stations 1, 3, and 4 have made reservations. In
the second interval, only station 1 has made a reservation.
2. Polling

Polling works with topologies in which one device is designated as a primary station
and the other devices are secondary stations.

All data exchanges must be made through the primary device even when the ultimate
destination is a secondary device.

The primary device controls the link; the secondary devices follow its instructions.

It is up to the primary device to determine which device is allowed to use the channel
at a given time.

The primary device, therefore, is always the initiator of a session.


Cont.

If the primary wants to receive data, it asks the secondary devices if they have anything to send; this is called the poll function.

If the primary wants to send data, it tells the secondary to get ready to receive; this is called the select function.
3. Token Passing

In this method, a special packet called a token circulates through the ring.
The possession of the token gives the station the right to access the channel
and send its data.
When a station has some data to send, it waits until it receives the token from
its predecessor.
It then holds the token and sends its data.
When the station has no more data to send, it releases the token, passing it to
the next logical station in the ring.
The station cannot send data until it receives the token again in the next round.
Channelization protocols

 FDMA (frequency-division multiple access)
 the available bandwidth of the common channel is divided into bands that are separated by guard bands.
 TDMA (time-division multiple access)
 the bandwidth is just one channel that is timeshared among the different stations.
 CDMA (code-division multiple access)
 one channel carries all transmissions simultaneously.


THANK YOU!

QUESTIONS?
