Data Link Layer Design Issues: Services To The Network Layer


Data Link Layer Design Issues

The data link layer in the OSI (Open Systems Interconnection) model lies between the physical layer and the
network layer. This layer converts the raw transmission facility provided by the physical layer into a reliable,
error-free link.

The main functions and design issues of this layer are:

 Providing services to the network layer
 Framing
 Error Control
 Flow Control

Services to the Network Layer

In the OSI Model, each layer uses the services of the layer below it and provides services to the layer above it.
The data link layer uses the services offered by the physical layer. The primary function of this layer is to
provide a well-defined service interface to the network layer above it.

The data link layer can provide three types of services −

 Unacknowledged connectionless service
 Acknowledged connectionless service
 Acknowledged connection-oriented service

Framing

The data link layer encapsulates each data packet from the network layer into frames that are then transmitted.

A frame has three parts, namely −

 Frame Header
 Payload field that contains the data packet from the network layer
 Trailer
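
As an illustration only, the three parts of a frame can be modeled as a small data structure. The field names and
contents below are placeholders, not a specific protocol format.

from dataclasses import dataclass

@dataclass
class Frame:
    """Illustrative frame layout: header, payload, trailer."""
    header: bytes    # e.g. addressing and control information (assumed fields)
    payload: bytes   # the network-layer packet being carried
    trailer: bytes   # e.g. an error-detection checksum

    def to_bytes(self) -> bytes:
        # Concatenate the three parts for transmission over the physical layer.
        return self.header + self.payload + self.trailer
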
Types of Errors

There may be three types of errors:


 Single bit error

Only a single bit, anywhere in the frame, is corrupted.


 Multiple bits error

The frame is received with more than one bit corrupted.


 Burst error

The frame contains more than one consecutive corrupted bit.


The error control mechanism may involve two approaches:
 Error detection
 Error correction

Error Detection

Errors in the received frames are detected by means of Parity Check and Cyclic Redundancy Check (CRC). In
both cases, a few extra bits are sent along with the actual data to confirm that the bits received at the other end
are the same as those that were sent. If the check at the receiver's end fails, the bits are considered corrupted.
Parity Check or Vertical Redundancy Check (VRC)
One extra bit is sent along with the original bits to make the number of 1s either even in the case of even parity,
or odd in the case of odd parity.
The sender, while creating a frame, counts the number of 1s in it. For example, if even parity is used and the
number of 1s is even, a bit with value 0 is added; this way the number of 1s remains even. If the number of 1s is
odd, a bit with value 1 is added to make it even.

The receiver simply counts the number of 1s in a frame. If the count of 1s is even and even parity is used, the
frame is considered not corrupted and is accepted. Likewise, if the count of 1s is odd and odd parity is used, the
frame is considered not corrupted.
If a single bit flips in transit, the receiver can detect it by counting the number of 1s. But when more than one
bit is erroneous, detection becomes unreliable; in particular, an even number of flipped bits leaves the parity
unchanged and goes undetected.
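
The parity mechanism described above can be sketched in a few lines of Python; the bit strings are purely
illustrative.

def add_parity_bit(bits: str, even: bool = True) -> str:
    """Append a parity bit so the total count of 1s is even (or odd)."""
    ones = bits.count("1")
    parity = ones % 2 if even else (ones % 2) ^ 1
    # For even parity: append 1 only if the current count of 1s is odd.
    return bits + str(parity)

def parity_ok(frame: str, even: bool = True) -> bool:
    """Receiver-side check: count the 1s in the whole frame."""
    ones = frame.count("1")
    return ones % 2 == 0 if even else ones % 2 == 1

# A single flipped bit is detected; an even number of flipped bits is not.
sent = add_parity_bit("1011001")          # even parity
assert parity_ok(sent)
assert not parity_ok("0" + sent[1:])      # one bit flipped -> detected
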
Cyclic Redundancy Check (CRC)
CRC is a different approach to detect whether the received frame contains valid data. This technique involves
binary division of the data bits being sent. The divisor is generated using a polynomial. The sender performs a
division operation on the bits being sent and calculates the remainder. Before sending the actual bits, the sender
appends the remainder to the end of the actual bits. The actual data bits plus the remainder form a codeword.
The sender transmits the data bits as codewords.

At the other end, the receiver performs a division operation on the codeword using the same CRC divisor. If the
remainder contains all zeros, the data bits are accepted; otherwise, it is assumed that some data corruption
occurred in transit.
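
A minimal sketch of the CRC procedure in Python follows. The generator "1011" (x^3 + x + 1) is only an
example chosen for brevity; real protocols use standardized polynomials such as CRC-32.

def mod2_div(dividend: str, divisor: str) -> str:
    """Bitwise modulo-2 (XOR) division; returns the remainder as a bit string."""
    bits = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if bits[i] == "1":
            for j, d in enumerate(divisor):
                bits[i + j] = "1" if bits[i + j] != d else "0"
    return "".join(bits[-(len(divisor) - 1):])

def make_codeword(data: str, divisor: str) -> str:
    # Sender: divide the data (padded with zeros) and append the remainder.
    remainder = mod2_div(data + "0" * (len(divisor) - 1), divisor)
    return data + remainder

def accept(codeword: str, divisor: str) -> bool:
    # Receiver: dividing the codeword by the same divisor must leave an all-zero remainder.
    return set(mod2_div(codeword, divisor)) <= {"0"}

codeword = make_codeword("100100", "1011")
assert accept(codeword, "1011")
assert not accept("0" + codeword[1:], "1011")   # a corrupted bit is detected
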
Error Correction
Error-correcting codes are used to detect and correct errors when data is transmitted from the sender to the
receiver.

Error Correction can be handled in two ways:

o Backward error correction: Once the error is discovered, the receiver requests the sender to retransmit
the entire data unit.
o Forward error correction: In this case, the receiver uses the error-correcting code which automatically
corrects the errors.

A single additional bit can detect the error, but cannot correct it.

For correcting errors, one has to know the exact position of the error. For example, to correct a single-bit error
in a seven-bit data unit, the error-correction code must determine which one of the seven bits is in error. To
achieve this, some additional redundant bits have to be added.

To determine the position of the erroneous bit, R.W. Hamming developed a technique known as the Hamming
code, which can be applied to data units of any length and uses the relationship between the data bits and the
redundant bits.

Hamming code
Hamming code is a linear code that can detect up to two simultaneous bit errors and correct single-bit errors.

In Hamming code, the source encodes the message by adding redundant bits to it. These redundant bits are
generated and inserted at specific positions in the message to enable error detection and correction.

History of Hamming code

 Hamming code is a technique built by R.W. Hamming to detect errors.
 Hamming code can be applied to data units of any length and uses the relationship between data and
redundancy bits.
 He worked on the problem of error correction and developed an increasingly powerful family of
algorithms known as Hamming codes.
 In 1950, he published the Hamming code, which is widely used today in applications like ECC memory.

Application of Hamming code

Here are some common applications of Hamming code:

 Satellites
 Computer Memory
 Modems
 PlasmaCAM
 Open connectors
 Shielding wire
 Embedded Processor

Advantages of Hamming code

 The Hamming code method is effective on channels where the data streams are prone to single-bit
errors.
 Hamming code not only detects a bit error but also identifies the bit containing the error so that it
can be corrected.
 The simplicity of Hamming codes makes them well suited for use in computer memory and single-
error correction.

Disadvantages of Hamming code

 It is a single-bit error detection and correction code. However, if multiple bits are in error, the
correction process may change a bit that was actually correct, leaving the data further corrupted.
 The Hamming code algorithm can correct only single-bit errors.

Process of Encoding a message using Hamming Code

The process used by the sender to encode the message includes the following three steps:

 Calculation of the total number of redundant bits.
 Placing the redundant bits at their correct positions.
 Lastly, calculating the values of these redundant bits.

Once the redundant bits are embedded within the message, it is sent to the receiver.

Step 1) Calculation of the total number of redundant bits.

Let us assume that the message contains:

 n - the number of data bits
 p - the number of redundant bits added to it, so that the p bits can indicate at least (n + p + 1) different
states

Here, (n + p) accounts for the location of an error in each of the (n + p) bit positions, and one extra state
indicates no error. Since p bits can indicate 2^p states, 2^p must be at least equal to (n + p + 1).
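
As a small sketch, the smallest p satisfying this relationship can be found with a short loop in Python:

def redundant_bits(n: int) -> int:
    """Smallest p such that 2**p >= n + p + 1 (n data bits, p check bits)."""
    p = 0
    while 2 ** p < n + p + 1:
        p += 1
    return p

# For example, 4 data bits need 3 redundant bits (the classic Hamming(7,4) code).
assert redundant_bits(4) == 3
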

Step 2) Placing the redundant bits in their correct position.

The p redundant bits should be placed at bit positions that are powers of 2, for example 1, 2, 4, 8, 16, etc. They
are referred to as p1 (at position 1), p2 (at position 2), p3 (at position 4), etc.

Step 3) Calculation of the values of the redundant bit.

The redundant bits are parity bits, which make the number of 1s either even or odd.

The two types of parity are −

 Even parity − the total number of 1s in the message is made even.
 Odd parity − the total number of 1s in the message is made odd.

Each redundant bit is calculated as a parity bit over a specific group of bit positions, determined by the binary
representation of those positions.

P1 is the parity bit for every data bit in positions whose binary representation includes a 1 in the least significant
position, not including position 1 itself, i.e. positions 3, 5, 7, 9, ...

P2 is the parity bit for every data bit in positions whose binary representation includes a 1 in the second position
from the right, not including position 2 itself, i.e. positions 3, 6, 7, 10, 11, ...

P3 is the parity bit for every bit in positions whose binary representation includes a 1 in the third position from
the right, not including position 4 itself, i.e. positions 5-7, 12-15, ...
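
The three encoding steps can be sketched in Python as follows, assuming even parity. The function is an
illustration of the procedure above, not a production encoder.

def hamming_encode(data: str) -> str:
    """Insert even-parity check bits at positions 1, 2, 4, 8, ... (1-based)."""
    n = len(data)
    p = 0
    while 2 ** p < n + p + 1:          # Step 1: number of redundant bits
        p += 1
    code = [None] * (n + p)            # 1-based position i is stored at index i - 1
    data_bits = iter(data)
    for i in range(1, n + p + 1):      # Step 2: data bits skip the powers of two
        if i & (i - 1) != 0:           # not a power of two -> data position
            code[i - 1] = next(data_bits)
    for j in range(p):                 # Step 3: fill each parity position 2**j
        pos = 2 ** j
        ones = sum(1 for i in range(1, n + p + 1)
                   if i != pos and (i & pos) and code[i - 1] == "1")
        code[pos - 1] = str(ones % 2)  # even parity over the covered positions
    return "".join(code)

# Example: the 4 data bits 1011 become the 7-bit codeword 0110011.
assert hamming_encode("1011") == "0110011"
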

Process of Decoding a Message in Hamming Code

The receiver gets the incoming message and performs recalculations to detect and correct errors.

The recalculation process is done in the following steps:

 Counting the number of redundant bits.
 Correctly positioning all the redundant bits.
 Parity check.

Step 1) Counting the number of redundant bits

You can use the same formula as for encoding to find the number of redundant bits:

2^p ≥ n + p + 1

Here, n is the number of data bits and p is the number of redundant bits.

Step 2) Correctly positioning all the redundant bits

Here, the redundant bits are located at bit positions that are powers of 2, for example 1, 2, 4, 8, etc.

Step 3) Parity check

The parity bits are recalculated based on the data bits and the redundant bits.

p1 = parity (1, 3, 5, 7, 9, 11…)

p2 = parity (2, 3, 6, 7, 10, 11… )

p3 = parity (4-7, 12-15, 20-23… )
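
A matching decoding sketch, again assuming even parity: recomputing the parity groups yields a syndrome
whose value is the 1-based position of a single-bit error (0 means no error was detected).

def hamming_syndrome(code: str) -> int:
    """Add up the positions of the failing parity checks (each a distinct power of 2)."""
    m = len(code)
    syndrome = 0
    p = 1
    while p <= m:
        ones = sum(1 for i in range(1, m + 1) if (i & p) and code[i - 1] == "1")
        if ones % 2 != 0:              # the even-parity check for this group failed
            syndrome += p
        p *= 2
    return syndrome

def hamming_correct(code: str) -> str:
    """Flip the bit indicated by a non-zero syndrome (single-bit errors only)."""
    pos = hamming_syndrome(code)
    if pos == 0:
        return code
    bits = list(code)
    bits[pos - 1] = "1" if bits[pos - 1] == "0" else "0"
    return "".join(bits)

# Example: flip bit 5 of the codeword 0110011 and recover the original.
assert hamming_correct("0110111") == "0110011"
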

Error control
Error control in the data link layer is the process of detecting and correcting data frames that have been
corrupted or lost during transmission.
When frames are lost or corrupted, the receiver does not receive the correct data frame and the sender is
unaware of the loss. The data link layer follows a technique to detect transit errors and take the necessary action,
namely retransmission of frames whenever an error is detected or a frame is lost. The process is called
Automatic Repeat Request (ARQ).

Phases in Error Control


The error control mechanism in data link layer involves the following phases:
 Detection of Error − Transmission error, if any, is detected by either the sender or the receiver.
 Acknowledgment − The acknowledgment may be positive or negative.
o Positive ACK − On receiving a correct frame, the receiver sends a positive acknowledgment.
o Negative ACK − On receiving a damaged frame or a duplicate frame, the receiver sends a
negative acknowledgment back to the sender.
 Retransmission − The sender maintains a clock and sets a timeout period. If an acknowledgment of a
data-frame previously transmitted does not arrive before the timeout, or a negative acknowledgment is
received, the sender retransmits the frame.

Error Control Techniques


There are three main techniques for error control:

 Stop and Wait ARQ


This protocol works as follows (a sender-side sketch appears after this list):
o A timeout counter is maintained by the sender, which is started when a frame is sent.
o If the sender receives an acknowledgment of the sent frame within the timeout, it is assured of
successful delivery of the frame. It then transmits the next frame in the queue.
o If the sender does not receive the acknowledgment within the timeout, it assumes that either
the frame or its acknowledgment was lost in transit. It then retransmits the frame.
o If the sender receives a negative acknowledgment, it retransmits the frame.
 Go-Back-N ARQ
The working principle of this protocol is:
o The sender has a buffer called the sending window.
o The sender sends multiple frames, based upon the sending-window size, without waiting for the
acknowledgment of the previous ones.
o The receiver receives frames one by one. It keeps track of the incoming frames' sequence numbers
and sends the corresponding acknowledgment frames.
o After the sender has sent all the frames in the window, it checks up to what sequence number it has
received positive acknowledgments.
o If the sender has received positive acknowledgments for all the frames, it sends the next set of frames.
o If the sender receives a NACK, or has not received an ACK for a particular frame, it retransmits that
frame and all the frames following it.
 Selective Repeat ARQ
o Both the sender and the receiver have buffers called the sending window and the receiving window,
respectively.
o The sender sends multiple frames, based upon the sending-window size, without waiting for the
acknowledgment of the previous ones.
o The receiver also receives multiple frames within the receiving-window size.
o The receiver keeps track of the incoming frames' sequence numbers and buffers the frames in memory.
o It sends an ACK for every successfully received frame and a NACK only for the frames which are
missing or damaged.
o In this case, the sender retransmits only the frames for which a NACK is received.
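
The Stop-and-Wait sender logic referred to above can be sketched as follows. The functions send_frame and
wait_for_ack are hypothetical placeholders standing in for the real channel, and the simulated 20% loss rate is
purely illustrative.

import random

TIMEOUT = 1.0  # seconds the sender waits for an acknowledgment (illustrative value)

def send_frame(seq: int, payload: bytes) -> None:
    # Placeholder for the physical transmission of a frame.
    print(f"sending frame {seq}")

def wait_for_ack(seq: int, timeout: float) -> bool:
    # Placeholder: True if a positive ACK for this frame arrives before the timeout.
    return random.random() > 0.2       # simulate 20% loss or corruption

def stop_and_wait_send(frames: list) -> None:
    """Send frames one at a time, retransmitting on timeout or negative ACK."""
    for seq, payload in enumerate(frames):
        while True:
            send_frame(seq, payload)
            if wait_for_ack(seq, TIMEOUT):
                break                  # delivery confirmed, move to the next frame
            # timeout or NACK: retransmit the same frame

stop_and_wait_send([b"frame-a", b"frame-b", b"frame-c"])
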

Flow Control
o It is a set of procedures that tells the sender how much data it can transmit before the data overwhelms
the receiver.
o The receiving device has limited speed and limited memory to store the data. Therefore, the receiving
device must be able to inform the sending device to stop the transmission temporarily before the limits
are reached.
o It requires a buffer, a block of memory for storing the information until it is processed.

Two methods have been developed to control the flow of data:

o Stop-and-wait
o Sliding window

Stop-and-wait

o In the Stop-and-wait method, the sender waits for an acknowledgement after every frame it sends.
o Only when the acknowledgement is received is the next frame sent. This process of alternately sending a
frame and waiting continues until the sender transmits the EOT (End of Transmission) frame.
Advantage of Stop-and-wait

The Stop-and-wait method is simple as each frame is checked and acknowledged before the next frame is sent.

Disadvantage of Stop-and-wait

The Stop-and-wait technique is inefficient because each frame must travel all the way to the receiver, and the
acknowledgement must travel all the way back, before the next frame can be sent. Each frame therefore
occupies the link for an entire round trip.

Sliding Window

o The Sliding Window is a method of flow control in which the sender can transmit several frames
before getting an acknowledgement.
o In sliding window control, multiple frames can be sent one after another, so the capacity of the
communication channel can be utilized efficiently.
o A single ACK can acknowledge multiple frames.
o Sliding Window refers to imaginary boxes at both the sender and receiver ends.
o The window can hold frames at either end, and it provides the upper limit on the number of frames
that can be transmitted before an acknowledgement.
o Frames can be acknowledged even when the window is not completely filled.
o The window has a specific size n, and the frames are numbered modulo-n, which means they are
numbered from 0 to n-1. For example, if n = 8, the frames are numbered
0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1........
o The size of the window is n-1. Therefore, a maximum of n-1 frames can be sent before an
acknowledgement.
o When the receiver sends an ACK, it includes the number of the next frame it expects to receive. For
example, to acknowledge the string of frames ending with frame number 4, the receiver sends an
ACK containing the number 5. When the sender sees the ACK with the number 5, it knows that
frames 0 through 4 have been received.
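
A rough sketch of the sender-side bookkeeping described above (window size n-1, modulo-n numbering, and an
ACK carrying the number of the next expected frame); the class and method names are illustrative only.

class SenderWindow:
    """Sliding-window sender bookkeeping with modulo-n sequence numbers."""

    def __init__(self, n: int = 8):
        self.n = n                 # sequence numbers run from 0 to n-1
        self.base = 0              # oldest unacknowledged sequence number
        self.next_seq = 0          # next sequence number to use
        self.in_flight = 0         # frames sent but not yet acknowledged

    def can_send(self) -> bool:
        return self.in_flight < self.n - 1   # at most n-1 unacknowledged frames

    def send(self) -> int:
        assert self.can_send()
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) % self.n
        self.in_flight += 1
        return seq

    def ack(self, next_expected: int) -> None:
        # The ACK carries the number of the next frame the receiver wants,
        # so every frame before it is acknowledged and the window reopens.
        acked = (next_expected - self.base) % self.n
        self.base = next_expected
        self.in_flight -= acked

# Example: with n = 8, an ACK numbered 5 confirms frames 0 through 4.
w = SenderWindow(8)
for _ in range(5):
    w.send()
w.ack(5)
assert w.in_flight == 0 and w.can_send()
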

Sender Window

o At the beginning of a transmission, the sender window contains n-1 frames; as frames are sent out,
the left boundary moves inward, shrinking the size of the window. For example, if the size of the window
is w and three frames are sent out, then the number of frames left in the sender window is w-3.
o Once an ACK arrives, the sender window expands by a number equal to the
number of frames acknowledged by that ACK.
o For example, suppose the size of the window is 7, and frames 0 through 4 have been sent out with no
acknowledgement arriving; then the sender window contains only two frames, i.e., 5 and 6. Now, if an
ACK arrives with the number 4, which means that frames 0 through 3 have arrived undamaged, the
sender window is expanded to include the next four frames. Therefore, the sender window contains six
frames (5, 6, 7, 0, 1, 2).
Receiver Window

o At the beginning of transmission, the receiver window does not contain n frames, but it contains n-1
spaces for frames.
o When the new frame arrives, the size of the window shrinks.
o The receiver window does not represent the number of frames received, but the number of
frames that can be received before an ACK is sent. For example, if the size of the window is w and three
frames are received, then the number of spaces available in the window is (w-3).
o Once the acknowledgement is sent, the receiver window expands by a number equal to the number of
frames acknowledged.
o Suppose the size of the window is 7, meaning the receiver window contains seven spaces for seven
frames. If one frame is received, the receiver window shrinks, moving the boundary from 0 to 1. In this
way, the window shrinks one space at a time, so it now contains six spaces. If frames 0 through 4 have
been received, then the window contains two spaces before an acknowledgement is sent.

SLIP and PPP


SLIP and PPP are two distinct, independent serial-link encapsulation protocols. The significant difference
between SLIP and PPP is that SLIP is the earlier protocol, while PPP is the later variant, which offers
several advantages over SLIP, such as detection and prevention of misconfiguration. Furthermore, PPP
provides a stronger built-in security mechanism.

These protocols involve just two devices, between which straightforward communication takes place. They
provide layer-two connectivity for TCP/IP implementations.

Definition of SLIP

SLIP (Serial Line Internet Protocol) mainly serves the purpose of framing IP packets over serial lines, mostly
on dial-up connections where the line transmission rate ranges between 1200 bps and 19.2
kbps. It makes no provision for addressing, packet type identification, compression or error
detection/correction mechanisms, but it is easily implemented.
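
For illustration, SLIP framing as defined in RFC 1055 simply delimits each packet with an END byte and
escapes any END or ESC bytes occurring inside the packet. A minimal encoder sketch:

# SLIP special bytes as defined in RFC 1055.
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_frame(packet: bytes) -> bytes:
    """Wrap an IP packet for a serial line: escape special bytes, append END."""
    out = bytearray([END])             # many implementations also begin with END
    for b in packet:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)                    # END marks the frame boundary
    return bytes(out)
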

SLIP was first introduced in 1984 and implemented on the 4.2 Berkeley Unix and Sun Microsystems
Unix platforms. The development of SLIP was stimulated by the availability of Unix workstations with
TCP/IP capabilities. Later, SLIP moved to personal computers as they evolved to support TCP/IP.

A SLIP connection lets a PC communicate using the native Internet Protocol and turns it into an Internet
host. It eliminated the need to connect the PC user to an Internet-connected central computer, so SLIP
provided Internet services to personal computers directly.

Now, how are these PCs connected to the Internet? To establish the connection between a PC and an
Internet router (able to transfer TCP/IP protocols), telephone lines are used along with SLIP support. In practice,
these Internet routers can be Internet hosts with routing functions enabled.

Hence, SLIP users physically connect to the central computer through dial-up. After initiating the
protocol, the users can access other Internet hosts transparently, with the central computer acting as part of the
Internet infrastructure.

Definition of PPP

The PPP (Point-to-Point Protocol) provides a standard method for transferring multiprotocol datagrams
(packets) over a point-to-point link. The main elements of PPP are a mechanism for encapsulating multi-
protocol datagrams, the LCP (Link Control Protocol), and a group of NCPs (Network Control Protocols). The
LCP sets up, configures and tests the connection, while the NCPs are responsible for establishing and
configuring the distinct network layer protocols.

PPP was developed by the IETF (Internet Engineering Task Force) in November 1989. Its
antecedent, the non-standard SLIP, did not support features such as error detection and correction or
compression, which gave rise to the development of PPP. The earlier existing standards only supported
datagram encapsulation for popular local area networks, not for serial connections.

PPP has emerged as an Internet standard that facilitates the encapsulation and transfer of datagrams over a
point-to-point serial link. A datagram is very similar to a packet in the context of a packet-switched network,
but it does not rely on the physical network and does not contain a packet-switching node number or PSN
destination ports.

Key Differences between SLIP and PPP

1. SLIP expands to Serial Line Internet Protocol, while PPP stands for Point-to-Point Protocol.
2. SLIP is an outdated protocol, though it is still used in some places. It simply bridges the gap
between IP at layer 3 and the serial link at layer 1. On the other hand, PPP is the newer protocol used for
the same purpose as SLIP, but it offers several new features.
3. SLIP encapsulates IP packets, while PPP encapsulates datagrams.
4. IP is the only protocol supported by SLIP. On the contrary, PPP also provides support for other
layer-three protocols.
5. PPP offers authentication, error detection, error correction, compression, and encryption whereas SLIP
does not have these features.
6. In SLIP, the IP addresses are statically allocated. Conversely, PPP performs dynamic assignment.
7. SLIP transfers data only over asynchronous serial links. In contrast, PPP supports both synchronous and
asynchronous modes of data transfer.
Advantages of PPP over SLIP

 Multiplexing of network protocols – PPP can accommodate several other networking protocols, rather
than being restricted to the Internet and TCP/IP.
 Link configuration – It employs a negotiation mechanism for setting up communication parameters
between two PPP peers.
 Error detection – At the receiving end, it discards the corrupted packets.
 Value added communication characteristics – It also supports data compression and encryption.
 Establishing network addresses – It sets network addresses required for the datagram routing.
 Authentication – Before initiating the communication, the two end users are authenticated first.

Conclusion

The SLIP and PPP protocols are used to provide point-to-point serial communication between two hosts.
Since PPP is the later and more advanced protocol, it offers several additional features beyond basic
point-to-point service.
