• The Data Link layer is the second layer from the bottom in the OSI (Open Systems Interconnection) network architecture model.
• It is responsible for node-to-node delivery of data.
• Its major role is to ensure error-free transmission of information.
• The DLL is also responsible for encoding, decoding and organizing the outgoing and incoming data.
• It is considered the most complex layer of the OSI model because it hides all the underlying complexities of the hardware from the layers above it.
• Sub-layers of the Data Link Layer:
• The data link layer is further divided into two sub-layers, as follows:
• Logical Link Control (LLC):
• This sublayer deals with multiplexing and the flow of data among applications and other services; LLC is also responsible for providing error messages and acknowledgments.
• Media Access Control (MAC):
• The MAC sublayer manages the device’s interaction with the medium, is responsible for addressing frames, and controls access to the physical media.
• 1. Framing: The unit of data handled by the Data Link layer is known as a frame; it is built around the packet received from the Network layer.
• At the sender’s side, the DLL receives packets from the Network layer, divides them into small frames, and then sends each frame bit-by-bit to the Physical layer.
• It also attaches some special bits (for error control and addressing) in the header and trailer of the frame.
• At the receiver’s end, the DLL takes bits from the Physical layer, organizes them into frames, and sends them to the Network layer.
• 2. Addressing: The data link layer encapsulates the source and destination MAC (physical) addresses in the header of each frame to ensure node-to-node delivery.
• The MAC address is a unique hardware address assigned to the device at the time of manufacturing.
• 3. Error Control: Data can get corrupted due to various reasons such as noise, attenuation, etc.
• So, it is the responsibility of the data link layer to detect errors in the transmitted data and correct them using error detection and correction techniques.
• The DLL adds error detection bits to the frame (typically in its trailer) so that the receiver can check whether the received data is correct.
• 4. Flow Control: If the receiver’s receiving speed is lower than the sender’s sending speed, the receiver’s buffer may overflow and some frames may be lost.
• So, it is the responsibility of the DLL to synchronize the sender’s and receiver’s speeds and establish flow control between them.
• 5. Access Control: When multiple devices share the same communication channel, there is a high probability of collision. It is therefore the responsibility of the DLL to determine which device has control over the channel; protocols such as CSMA/CD and CSMA/CA can be used to avoid collisions and loss of frames on the channel.
Data link layer
• Data-link layer takes the packets from the Network Layer and
encapsulates them into frames.
• If the frame size becomes too large, then the packet may be divided into smaller frames. Smaller frames make flow control and error control more efficient.
• Then, it sends each frame bit-by-bit on the hardware. At receiver’s end,
data link layer picks up signals from hardware and assembles them into
frames.
Data link layer
• Parts of a Frame
• A frame has the following parts −
• Frame Header − It contains the source and the destination addresses of the frame.
• Payload field − It contains the message to be delivered.
• Trailer − It contains the error detection and error correction bits.
• Flag − It marks the beginning and end of the frame.
Data link layer
Explain different techniques used by data
link layer for framing (UQ)
• Methods of Framing :
There are basically four methods of framing as given below –

• 1. Character Count
• 2. Flag Byte with Character Stuffing
• 3. Starting and Ending Flags, with Bit Stuffing
• 4. Encoding Violations
• Character Count:
This method is rarely used.
• It uses a field in the frame header to specify the total number of characters in the frame.
• The character count tells the data link layer at the receiver (destination) how many characters follow, and hence where the frame ends.
• The disadvantage of this method is that if the character count is disturbed or distorted by an error during transmission, the destination (receiver) may lose synchronization.
• The destination may then be unable to locate the beginning of the next frame.
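As a rough illustration of the character count method, here is a minimal Python sketch (the function names and the 1-byte count field are assumptions made for the example, not part of any standard):

def frame_with_count(messages):
    """Prefix each message with a 1-byte count that includes the count byte itself."""
    stream = bytearray()
    for msg in messages:
        stream.append(len(msg) + 1)   # header: total number of characters in this frame
        stream.extend(msg)
    return bytes(stream)

def deframe_with_count(stream):
    """Split the byte stream back into frames by reading each count header."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]             # a corrupted count here desynchronizes everything after it
        frames.append(bytes(stream[i + 1:i + count]))
        i += count
    return frames

frames = [b"hello", b"dll", b"frame"]
assert deframe_with_count(frame_with_count(frames)) == frames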
Bit stuffing
• Bit stuffing is also known as bit-oriented framing or bit-oriented approach.
• In bit stuffing, extra bits are added by network protocol designers to the data stream.
• It is the insertion of extra bits into the transmission unit or message to be transmitted, as a simple way to provide signaling information to the receiver and to avoid the appearance of unintended control sequences within the data.
• Bit stuffing is an essential part of the transmission process in network and communication protocols.
• Bit stuffing is the insertion of non information bits into data.
• Note that stuffed bits should not be confused with overhead bits.
• Overhead bits are non-data bits that are necessary for transmission
(usually as part of headers, checksums etc.).
• Applications of Bit Stuffing –
1. synchronize several channels before multiplexing
2. rate-match two single channels to each other
3. run length limited coding
• Run length limited coding – To limit the number of consecutive bits of
the same value(i.e., binary value) in the data to be transmitted.
• Example of bit stuffing –
Bit sequence: 110101111101011111101011111110 (without bit stuffing)
Bit sequence: 110101111100101111101010111110110 (with bit stuffing)
• After every 5 consecutive 1-bits, a 0-bit is stuffed; three such 0-bits are inserted in the stuffed sequence above.
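The following short Python sketch implements the bit stuffing rule described above and reproduces the example sequence (the function names are illustrative only):

def bit_stuff(bits, run_len=5):
    """Insert a '0' after every run of run_len consecutive '1's."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == run_len:            # five 1s in a row: stuff a 0
            out.append("0")
            run = 0
    return "".join(out)

def bit_destuff(bits, run_len=5):
    """Remove the stuffed '0' that follows every run of run_len consecutive '1's."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                      # this bit is the stuffed 0: drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == run_len:
            skip, run = True, 0
    return "".join(out)

data = "110101111101011111101011111110"
stuffed = bit_stuff(data)             # '110101111100101111101010111110110'
assert bit_destuff(stuffed) == data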
Character Stuffing
• Character stuffing is also known as byte stuffing or character-oriented framing. It is similar to bit stuffing, but byte stuffing operates on whole bytes whereas bit stuffing operates on individual bits.
• In byte stuffing, a special byte known as ESC (escape character), which has a predefined pattern, is added to the data section of the data stream or frame whenever a character in the data has the same pattern as the flag byte.
• The receiver removes this ESC byte and keeps the data byte that follows it; an ESC byte that itself appears in the data must therefore also be preceded by an extra ESC.
• In simple words, character stuffing is the addition of one extra byte whenever an ESC or flag byte is present in the text.
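A minimal Python sketch of byte stuffing and de-stuffing, assuming example FLAG and ESC values (0x7E and 0x7D are used here only as placeholders):

FLAG = 0x7E   # frame delimiter (assumed example value)
ESC  = 0x7D   # escape character (assumed example value)

def byte_stuff(payload):
    """Precede every FLAG or ESC byte in the payload with an ESC byte."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)          # stuff an extra escape byte
        out.append(b)
    return bytes(out)

def byte_destuff(stuffed):
    """Drop every stuffed ESC byte and keep the byte that follows it."""
    out, skip = bytearray(), False
    for b in stuffed:
        if not skip and b == ESC:
            skip = True              # this ESC was stuffed: drop it
            continue
        out.append(b)
        skip = False
    return bytes(out)

payload = bytes([0x01, FLAG, 0x02, ESC, 0x03])
assert byte_destuff(byte_stuff(payload)) == payload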
4. Physical Layer Coding Violations
• Encoding violation is a method that can be used only in networks where the encoding on the physical medium includes some redundancy, i.e., more than one physical signal element is used to represent one bit of data; invalid (violating) signal patterns can then be used to mark frame boundaries.
• Network redundancy is process of providing multiple paths for traffic, so
that data can keep flowing even in the event of a failure.
• The data link layer uses an error control mechanism to ensure that frames (data bit streams) are transmitted with a certain level of accuracy. But to understand how errors are controlled, it is essential to know what types of errors may occur.
• Types of Errors
• There may be three types of errors:
• Single bit error
• Multiple bit error
• Burst error
• Error control mechanism may involve two possible ways:
• Error detection
• Error correction
Error detection and correction code – Hamming code
• Hamming code is a set of error-correction codes that can be used to detect and correct errors that can occur when data is moved or stored from the sender to the receiver.
• It is a technique developed by R. W. Hamming for error correction.
• Redundant bits – Redundant bits are extra binary bits that are generated
and added to the information-carrying bits of data transfer to ensure that
no bits were lost during the data transfer.
• The number of redundant bits can be calculated using the following formula:
• 2^r ≥ m + r + 1
• where r = number of redundant bits, m = number of data bits
• Suppose the number of data bits is 7; then the number of redundant bits is 4, since 2^4 = 16 ≥ 7 + 4 + 1 = 12.
• Parity bits: A parity bit is a bit appended to a set of binary bits to make the total number of 1’s in the data either even or odd. Parity bits are used for error detection. There are two types of parity bits:
1. Even parity bit: In the case of even parity, for a given set of bits, the number of 1’s are
counted. If that count is odd, the parity bit value is set to 1, making the total count of
occurrences of 1’s an even number. If the total number of 1’s in a given set of bits is already
even, the parity bit’s value is 0.
2. Odd Parity bit – In the case of odd parity, for a given set of bits, the number of 1’s are counted.
If that count is even, the parity bit value is set to 1, making the total count of occurrences of 1’s
an odd number. If the total number of 1’s in a given set of bits is already odd, the parity bit’s
value is 0.
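A two-line Python sketch of the even and odd parity rules above (function names are illustrative only):

def even_parity_bit(bits):
    """Parity bit that makes the total number of 1s even."""
    return bits.count("1") % 2        # 1 if the count of 1s is odd, else 0

def odd_parity_bit(bits):
    """Parity bit that makes the total number of 1s odd."""
    return 1 - bits.count("1") % 2

data = "1011010"                      # four 1s
assert even_parity_bit(data) == 0     # count is already even
assert odd_parity_bit(data) == 1      # one more 1 is needed for an odd total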
• Working of Hamming Code
• To encode and check data with the Hamming code method, the following steps are followed:
• Step 1 - Determine the positions of the data bits and the number of redundant bits for the original data. The number of redundant bits is deduced from the expression [2^r >= d+r+1].
• Step 2 - Place the redundant (parity) bits at the positions that are powers of two, i.e., positions 2^p where p = 0, 1, 2, ..., fill in the data bits at the remaining positions, and compute each parity bit’s value.
• Step 3 - Insert the parity bits obtained into the original data and transmit the data to the receiver side.
• Step 4 - Check the received data using the parity bits to detect any error in the data; if damage is present, use the parity bit values to locate and correct the error.
• To better understand the working of the hamming code, the following example is to be solved:
• The data bit to be transmitted is 1011010, to be solved using the hamming code method.
• Determining the Number of Redundant Bits and Position in the Data,
• The data bits = 7
• The redundant bit,
• 2^r >= d+r+1
• 2^4 >= 7+4+1
• 16 >= 12, [So, the value of r = 4.]
• Position of the redundant bit, applying the 2^p expression:
• 2^0 - P1
• 2^1 - P2
• 2^2 - P4
• 2^3 - P8
Applying the data bits as in Fig. 1: the data bits 1011010 occupy positions 3, 5, 6, 7, 9, 10 and 11 (with the last bit of the data at position 3), and the parity bits P1, P2, P4, P8 occupy positions 1, 2, 4 and 8.
• Finding the Parity Bits, using even parity:
• 1. P1 is deduced by checking all the bit positions whose binary representation has a 1 in the least significant place.
• P1: positions 1, 3, 5, 7, 9, 11
• P1 - P1, 0, 1, 1, 1, 1
• P1 - 0
• 2. P2 is deduced by checking all the bit positions whose binary representation has a 1 in the second place.
• P2: positions 2, 3, 6, 7, 10, 11
• P2 - P2, 0, 0, 1, 0, 1
• P2 - 0
• 3. P4 is deduced by checking all the bit positions whose binary representation has a 1 in the third place.
• P4: positions 4, 5, 6, 7
• P4 - P4, 1, 0, 1
• P4 - 0
• 4. P8 is deduced by checking all the bit positions whose binary representation has a 1 in the fourth place.
• P8: positions 8, 9, 10, 11
• P8 - P8, 1, 0, 1
• P8 - 0
• So, the code word transmitted to the receiver side (positions 1 to 11) is 0 0 0 0 1 0 1 0 1 0 1, i.e., the data bits with P1 = P2 = P4 = P8 = 0 inserted at positions 1, 2, 4 and 8.
• Error Detecting and Correction of the Data Received,
• Assume that during transmission, the data bit at position 7 is changed
from 1 to 0. Then by applying the parity bit technique, we can identify the
error:
• Parity values obtained in the above deduction vary from the originally
deduced parity values, proving that an error occurred during data
transmission.
• To identify the position of the error bit, combine the new parity check values (P8 P4 P2 P1 = 0111) as:
• [0·2^3 + 1·2^2 + 1·2^1 + 1·2^0]
• = 7, i.e., the same as the assumed error position.
• To correct the error, simply reverse the error bit to its complement, i.e., for
this case, change 0 to 1, to obtain the original data bit.
• geeksforgeeks.org/hamming-code-in-computer-network/
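The worked example above can be reproduced with the following compact Python sketch of the (11,7) even-parity Hamming scheme (bit position 1 is the lowest position, data bits occupy positions 3, 5, 6, 7, 9, 10, 11 with the last character of the data string at position 3, exactly as in the example; the function names are illustrative):

def hamming_encode(data_bits):                       # e.g. data_bits = "1011010"
    code = [0] * 12                                  # indices 1..11 are used
    for pos, bit in zip((3, 5, 6, 7, 9, 10, 11), reversed(data_bits)):
        code[pos] = int(bit)
    for p in (1, 2, 4, 8):                           # even parity over the covered positions
        code[p] = sum(code[i] for i in range(1, 12) if i & p and i != p) % 2
    return code[1:]                                  # 11-bit codeword, position 1 first

def hamming_syndrome(codeword):                      # list of 11 bits, position 1 first
    code = [0] + list(codeword)
    return sum(p for p in (1, 2, 4, 8)
               if sum(code[i] for i in range(1, 12) if i & p) % 2)

code = hamming_encode("1011010")                     # all four parity bits come out 0
assert hamming_syndrome(code) == 0                   # no error
code[7 - 1] ^= 1                                     # flip position 7, as assumed above
error_pos = hamming_syndrome(code)                   # syndrome = 7 locates the error
code[error_pos - 1] ^= 1                             # correct it by complementing that bit
assert hamming_syndrome(code) == 0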
• Rules to Determine the Parity Sequence
• Position 1 − Check 1 bit, then skip 1 bit, check 1 bit, then skip 1 bit, and so on (Ex − 1, 3, 5, 7, 9, 11, etc.)
• Position 2 − Check 2 bits, then skip 2 bits, check 2 bits, then skip 2 bits (Ex − 2, 3, 6, 7, 10, 11, 14, 15, etc.)
• Position 4 − Check 4 bits, then skip 4 bits, check 4 bits, then skip 4 bits (Ex − 4, 5, 6, 7, 12, 13, 14, 15, etc.)
• Position 8 − Check 8 bits, then skip 8 bits, check 8 bits, then skip 8 bits (Ex − 8, 9, 10, 11, 12, 13, 14, 15, 24, 25, 26, 27, 28, 29, 30, 31).
Features of Hamming Code
• Redundancy
• Efficiency
• Widely used
• Single error correction
• Error detection and correction
• Example problem 2
• Let us assume the even parity Hamming code from the above example (111001101) is transmitted and the received code is (110001101). Now, from the received code, let us detect and correct the error.
• Checking the parity bits (bit position 1 is the rightmost bit):
• For P1: Check the locations 1, 3, 5, 7, 9. There are three 1s in this group, which is wrong for even parity. Hence the check value for P1 is 1.
• For P2: Check the locations 2, 3, 6, 7. There is one 1 in this group, which is wrong for even parity. Hence the check value for P2 is 1.
• For P3 (position 4): Check the locations 4, 5, 6, 7. There is one 1 in this group, which is wrong for even parity. Hence the check value for P3 is 1.
• For P4 (position 8): Check the locations 8, 9. There are two 1s in this group, which is correct for even parity. Hence the check value for P4 is 0.
• The resultant binary word is 0111. It corresponds to bit location 7 in the above table. The error is detected in data bit D4. The received bit is 0 and it should be changed to 1. Thus the corrected code is 111001101.
CRC
• The Cyclic Redundancy Check (CRC) is the most powerful method for error detection.
• Qualities of the CRC (the appended check bits):
• It should have exactly one bit less than the divisor.
• Joining it to the end of the data unit should make the resulting bit sequence exactly divisible by the divisor.
• CRC uses a generator polynomial which is available on both the sender and receiver side. An example generator polynomial is of the form x^3 + x + 1. This generator polynomial represents the key 1011. Another example is x^2 + 1, which represents the key 101.
• n : Number of bits in the data to be sent from the sender side.
• k : Number of bits in the key obtained from the generator polynomial.
• Sender Side (generation of encoded data from the data and the generator polynomial, or key):
1. The binary data is first augmented by appending k-1 zeros at the end of the data.
2. Use modulo-2 binary division to divide the augmented data by the key, and store the remainder of the division.
3. Append the remainder to the end of the (original) data to form the encoded data, and send it.
• Receiver Side (check whether errors were introduced in transmission):
Perform modulo-2 division again on the received code word; if the remainder is 0, there are no errors.
• Illustration:
• Example 1 (no error in transmission):
• Data word to be sent - 100100
• Key - 1101 [or generator polynomial x^3 + x^2 + 1]
• Sender Side:
• The data word is augmented to 100100000 and divided by 1101 using modulo-2 division. The remainder is 001, and hence the encoded data sent is 100100001.
• Receiver Side:
• Code word received at the receiver side: 100100001. Dividing it by 1101 gives a remainder of all zeros. Hence, the data received has no error.
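A short Python sketch of modulo-2 division that reproduces Example 1 (and checks the erroneous code word of Example 2); the function names are illustrative only:

def mod2_remainder(dividend, key):
    """Remainder of modulo-2 (XOR) division of the bit string dividend by key."""
    buf, k = list(dividend), len(key)
    for i in range(len(dividend) - k + 1):
        if buf[i] == "1":                            # XOR the key under the current leading 1
            for j in range(k):
                buf[i + j] = str(int(buf[i + j]) ^ int(key[j]))
    return "".join(buf[-(k - 1):])

def crc_encode(data, key):
    """Append k-1 zeros, divide, and append the remainder to the original data."""
    return data + mod2_remainder(data + "0" * (len(key) - 1), key)

key = "1101"                                         # generator x^3 + x^2 + 1
codeword = crc_encode("100100", key)                 # '100100001', as in Example 1
assert mod2_remainder(codeword, key) == "000"        # receiver: remainder all zeros, no error
assert mod2_remainder("100000001", key) != "000"     # Example 2: the error is detected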
• Example 2: (Error in transmission)

• Data word to be sent - 100100


• Key - 1101

• Sender Side:
• Therefore, the remainder is 001 and hence the
• code word sent is 100100001.

• Receiver Side:
• Let there be an error in the transmission medium.
• Code word received at the receiver side - 100000001. Dividing it by 1101 gives a non-zero remainder, so the receiver detects an error.
• 2. Find the CRC for 1111010101001 with divisor x^4 + x^3 + x^2 + 1.
• Types of Data Link Protocols
• Data link protocols can be broadly divided into two categories, depending on whether the transmission channel is noiseless or noisy.
Simplex stop and wait protocol
• Stop and wait means that the sender sends one frame of data to the receiver, then stops and waits until it receives the acknowledgment from the receiver before sending the next one.
• The stop and wait protocol is a flow control protocol; flow control is one of the services of the data link layer.
• It is a data-link layer protocol used for transmitting data over noiseless channels.
• It provides unidirectional data transmission, which means that either sending or receiving of data takes place at a time.
• It provides a flow-control mechanism but does not provide any error control mechanism.
• The idea behind this protocol is that after the sender sends a frame, it waits for the acknowledgment before sending the next frame.
• Simplest Protocol is a protocol that neither has flow control nor has error control( as we have already told you that it lies under
the category of Noiseless channels).
• The simplest protocol is basically a unidirectional protocol in which data frames only travel in one direction; from the sender
to the receiver.
• In this, the receiver can immediately handle the frame it receives whose processing time is small enough to be considered as
negligible.
• Basically, the data link layer of the receiver immediately removes the header from the frame and then hands over the data packet to the network layer, which also accepts the data packet immediately.
• We can also say that in the case of this protocol the receiver never gets overwhelmed with the incoming frames from the
sender.
Design of the Simplest Protocol

• The flow control is not needed by the Simplest Protocol.


• The data link layer at the sender side mainly gets the data from the
network layer and then makes the frame out of data and sends it.
• On the receiver side, the data link layer receives the frame from the physical layer, extracts the data from the frame, and then delivers the data to its network layer.
• Primitives of Stop and Wait Protocol
• The primitives of stop and wait protocol are:
• Sender side
• Rule 1: Sender sends one data packet at a time.
• Rule 2: Sender sends the next packet only when it receives the acknowledgment of the
previous packet.
• Therefore, the idea of stop and wait protocol in the sender's side is very simple, i.e., send
one packet at a time, and do not send another packet before receiving the
acknowledgment.
• Receiver side
• Rule 1: Receive and then consume the data packet.
• Rule 2: When the data packet is consumed, the receiver sends the acknowledgment to the sender.
• Therefore, the idea of stop and wait protocol in the receiver's side is also
very simple, i.e., consume the packet, and once the packet is consumed,
the acknowledgment is sent. This is known as a flow control mechanism.
• Problems:
• 1. Lost Data
• 2. Lost Acknowledgement (ACK)
• 3. Delayed Acknowledgement/Data: After a timeout on the sender side, a long-delayed acknowledgement might be wrongly considered as the acknowledgement of some other recent packet.
Stop and Wait ARQ (Automatic Repeat Request)
• Used in connection-oriented communication.
• It offers error control and flow control.
• It is used in the Data Link and Transport layers.
• Stop and Wait ARQ mainly implements the Sliding Window Protocol concept with a window size of 1.
• Useful Terms:
• Propagation Delay: Amount of time taken by a packet to make a physical journey from one router to another.
• Propagation Delay = (Distance between routers) / (Velocity of propagation)
• Round Trip Time (RTT) = Amount of time taken by a packet to reach the receiver + time taken by the acknowledgement to reach the sender
• TimeOut (TO) = 2 * RTT
• Time To Live (TTL) = 2 * TimeOut (maximum TTL is 255 seconds)
• Sequence Number (Data): each data frame carries a sequence number so that the receiver can recognize and discard duplicate (retransmitted) frames.
• Delayed Acknowledgement: this is resolved by introducing sequence numbers for acknowledgements as well.
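A quick numeric sketch of the timing terms above, using made-up example values (a 2000 km link and a propagation speed of 2e8 m/s; transmission and queuing delays are ignored for simplicity):

distance = 2_000_000                             # metres (assumed)
velocity = 2e8                                   # metres per second (assumed)

propagation_delay = distance / velocity          # 0.01 s = 10 ms
rtt = 2 * propagation_delay                      # ignoring transmission/queuing delay
timeout = 2 * rtt

print(f"Tp = {propagation_delay*1e3:.1f} ms, RTT = {rtt*1e3:.1f} ms, TO = {timeout*1e3:.1f} ms")
# Tp = 10.0 ms, RTT = 20.0 ms, TO = 40.0 ms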
Working of Stop and Wait ARQ:
• 1) Sender A sends a data frame or packet with sequence number 0.
• 2) Receiver B, after receiving the data frame, sends an acknowledgement with sequence number 1 (the sequence number of the next expected data frame or packet).
• There is only a one-bit sequence number, which implies that both sender and receiver have a buffer for one frame or packet only.
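The behaviour described above can be sketched in a small single-process Python simulation with a 1-bit sequence number and a lossy channel (the loss model, probability and names are assumptions for illustration, not part of the protocol definition):

import random

def unreliable(item, loss_prob=0.3):
    """Model a channel that drops the item with probability loss_prob."""
    return None if random.random() < loss_prob else item

def stop_and_wait(frames):
    send_seq, expected, delivered = 0, 0, []
    for payload in frames:
        acked = False
        while not acked:                               # resend until the ACK arrives
            frame = unreliable((send_seq, payload))    # data frame may be lost
            ack = None
            if frame is not None:
                rseq, rdata = frame
                if rseq == expected:                   # new frame: deliver it once
                    delivered.append(rdata)
                    expected = 1 - expected
                ack = unreliable(expected)             # ACK = next expected seq, may be lost
            if ack == 1 - send_seq:                    # correct ACK received
                send_seq = 1 - send_seq
                acked = True
            # otherwise: timeout, retransmit the same frame
    return delivered

random.seed(1)
assert stop_and_wait(["f0", "f1", "f2"]) == ["f0", "f1", "f2"]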
• The Sliding Window Protocol is actually a theoretical concept in which we have only talked about what the sender window size should be (1 + 2a) in order to increase the efficiency of Stop and Wait ARQ. Now we will talk about the practical implementations, in which we also take care of what the size of the receiver window should be. Practically, it is implemented in two protocols, namely:
1. Go-Back-N (GBN)
2. Selective Repeat (SR)
• What is Go-Back-N ARQ?
• In Go-Back-N ARQ, N is the sender's window size.
• Suppose we say that Go-Back-3, which means that the three frames can be sent at a
time before expecting the acknowledgment from the receiver.
• It uses the principle of protocol pipelining in which the multiple frames can be sent
before receiving the acknowledgment of the first frame.
• If we have five frames and the concept is Go-Back-3, then three frames, i.e., frame no 1, frame no 2 and frame no 3, can be sent before expecting the acknowledgment of frame no 1.
• In Go-Back-N ARQ, the frames are numbered sequentially, since Go-Back-N ARQ sends multiple frames at a time and requires a numbering approach to distinguish one frame from another; these numbers are known as sequence numbers.
• Working of Go-Back-N ARQ
• Suppose there are a sender and a receiver, and let's assume that there are 11 frames to
be sent.
• These frames are represented as 0,1,2,3,4,5,6,7,8,9,10, and these are the sequence
numbers of the frames. Mainly, the sequence number is decided by the sender's
window size.
• But, for better understanding, we take running sequence numbers, i.e., 0,1,2,3,4,5,6,7,8,9,10. Let's consider the window size as 4, which means that four frames can be sent at a time before expecting the acknowledgment of the first frame.
• Step 1: Firstly, the sender will send the first four frames to the receiver,
i.e., 0,1,2,3, and now the sender is expected to receive the
acknowledgment of the 0th frame.
• After receiving the ACK for frame 0, the sender sends the next frame, i.e., frame 4, and the window slides to contain four frames (1, 2, 3, 4).
• The receiver then sends the acknowledgment for frame no 1. After receiving that acknowledgment, the sender sends the next frame, and the window slides again.
• Now, let's assume that the receiver is not acknowledging the frame no 2,
either the frame is lost, or the acknowledgment is lost.
• Instead of sending frame no 6, the sender goes back to frame 2, which is the first frame of the current window, and retransmits all the frames in the current window, i.e., 2, 3, 4, 5.
• In Go-Back-N, N determines the sender's window size, and the size of the
receiver's window is always 1.
• It does not consider the corrupted frames and simply discards them.
• It does not accept the frames which are out of order and discards them.
• If the sender does not receive the acknowledgment, it leads to the retransmission of all the current window frames, as sketched in the code below.
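As a rough illustration of the "go back" behaviour, the following Python sketch counts transmissions under a deliberately simplified model: the sender transmits its whole window, and if some frame in it was lost, it goes back to that frame and resends from there (in a real implementation the loss is only discovered after a timeout, so actual traces can contain more transmissions). The function name and the loss specification are hypothetical.

def gbn_transmissions(num_frames, window, lost):
    """Count transmissions when `lost` holds (frame, attempt) pairs that are dropped."""
    attempts = [0] * num_frames
    total, base = 0, 0
    while base < num_frames:
        end = min(base + window, num_frames)
        failed = None
        for f in range(base, end):                 # send the current window
            attempts[f] += 1
            total += 1
            if failed is None and (f, attempts[f]) in lost:
                failed = f                         # receiver discards f and everything after it
        base = end if failed is None else failed   # slide the window, or go back to the lost frame
    return total

# 6 frames, window 4, frame 2 lost on its first attempt:
# first window sends 0,1,2,3; then the sender goes back and sends 2,3,4,5 -> 8 transmissions
print(gbn_transmissions(6, 4, {(2, 1)}))           # 8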
• Let's understand the Go-Back-N ARQ through an example.
• Example 1: In GB4, if every 6th packet being transmitted is lost and we have to send 10 packets, then how many transmissions are required?
• Selective Repeat ARQ is also known as Selective Repeat Automatic Repeat Request. It is a data link layer protocol that uses a sliding window method.
• In this protocol, the size of the sender window is always equal to the size of the receiver window. The size of the sliding window is always greater than 1.
• If the receiver receives a corrupt frame, it does not silently discard it; it sends a negative acknowledgment to the sender.
• The sender retransmits that frame as soon as it receives the negative acknowledgment.
• There is no waiting for any time-out to send that frame. The design of the Selective Repeat ARQ protocol is shown below.
PPP
• Point-to-Point Protocol (PPP) is a communication protocol of the data link layer that is used to transmit multiprotocol data between two directly connected (point-to-point) computers.
• It is a byte-oriented protocol that is widely used in broadband communications having heavy loads and high speeds.
Services of the PPP protocol:
• Defines the frame format
• Defines the method for establishment of the link
• Defines the procedure for encapsulation
• Defines the rules for authentication
• Supports connection over multiple links
• Supports other protocols
• Flag − 1 byte that marks the beginning and the end of the frame. The bit pattern of the flag is
01111110.
• Address − 1 byte which is set to 11111111 in case of broadcast.
• Control − 1 byte set to a constant value of 11000000.
• Protocol − 1 or 2 bytes that define the type of data contained in the payload field.
• Payload − This carries the data from the network layer. The maximum length of the payload
field is 1500 bytes. However, this may be negotiated between the endpoints of communication.
• FCS − It is a 2-byte or 4-byte frame check sequence for error detection. The standard code used is CRC (cyclic redundancy code).
High-Level Data Link Control (HDLC) Encapsulation
• HDLC basically provides reliable delivery of data frames over a network or communication link.
• HDLC provides various operations such as framing, data transparency, error detection and correction, and even flow control.
HDLC Frame
• HDLC is a bit-oriented protocol where each frame contains up to six fields. The structure varies according to the type of frame. The fields of an HDLC frame are −
• Flag − It is an 8-bit sequence that marks the beginning and the end of the frame. The bit pattern of the
flag is 01111110.
• Address − It contains the address of the receiver. If the frame is sent by the primary station, it contains
the address(es) of the secondary station(s). If it is sent by the secondary station, it contains the address of
the primary station. The address field may be from 1 byte to several bytes.
• Control − It is 1 or 2 bytes containing flow and error control information.
• Payload − This carries the data from the network layer. Its length may vary from one network to another.
• FCS − It is a 2-byte or 4-byte frame check sequence for error detection. The standard code used is CRC (cyclic redundancy code).
Multiple access protocols - ALOHA, CSMA, CSMA/CA and CSMA/CD
• What is a multiple access protocol?
• When a sender and receiver have a dedicated link to transmit data packets, the data link control is enough to handle the channel. Suppose there is no dedicated path to communicate or transfer the data between two devices.
• In that case, multiple stations access the channel and simultaneously transmit the data over the channel. This may create collisions and crosstalk. Hence, a multiple access protocol is required to reduce collisions and avoid crosstalk between the channels.
A. Random Access Protocol
• In this protocol, all stations have equal priority to send data over the channel. In a random access protocol, no station depends on or is controlled by any other station. Depending on the channel's state (idle or busy), each station transmits its data frame. However, if more than one station sends data over the channel at the same time, there may be a collision or data conflict. Due to the collision, the data frame packets may be lost or changed, and hence they are not received correctly by the receiver.
• Following are the different methods of random-access protocols for broadcasting frames on the channel:
• Aloha
• CSMA
• CSMA/CD
• CSMA/CA
• ALOHA Random Access Protocol
• It was designed for wireless LANs (Local Area Networks) but can also be used on any shared medium to transmit data. Using this method, any station can transmit data across the network whenever a data frame is available for transmission.
• Aloha Rules
1. Any station can transmit data on the channel at any time.
2. It does not require any carrier sensing.
3. Collisions may occur and data frames may be lost when multiple stations transmit at the same time.
4. Acknowledgment of the frames exists in Aloha; since there is no collision detection, the sender relies on acknowledgments.
5. It requires retransmission of data after some random amount of time.
• Pure Aloha is used whenever data is available for sending over the channel at a station.
• In pure Aloha, each station transmits data on the channel without checking whether the channel is idle or busy, so collisions may occur and data frames can be lost.
• When a station transmits a data frame on the channel, it waits for the receiver's acknowledgment.
• If the acknowledgment does not arrive from the receiver end within the specified time, the station assumes the frame has been lost or destroyed and waits for a random amount of time, called the backoff time (Tb).
• It then retransmits the frame, and repeats this until all the data is successfully transmitted to the receiver.
• Slotted ALOHA is an improved version of the pure ALOHA protocol that
aims to make communication networks more efficient.
• In this version, the channel is divided into small, fixed-length time slots
and users are only allowed to transmit data at the beginning of each time
slot.
• This synchronization of transmissions reduces the chances of collisions
between devices, increasing the overall efficiency of the network.
Carrier Sense Multiple Access (CSMA)
• This method was developed to decrease the chances of collisions when two or more stations start sending their signals over the data link layer. Carrier Sense Multiple Access requires that each station first check the state of the medium before sending.
• 1. Carrier Sense Multiple Access with Collision Detection
(CSMA/CD):
• In this method, a station monitors the medium after it sends a frame to see if the transmission was successful. If successful, the transmission is finished; if not, the frame is sent again.
• Process: The entire process of collision detection can be explained as follows:
• 2. Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA):
• The basic idea behind CSMA/CA is that the station should be able to receive while transmitting in order to detect a collision from other stations. In wired networks, if a collision occurs, the energy of the received signal almost doubles, and the station can sense the possibility of a collision. In the case of wireless networks, most of the energy is used for transmission, and the energy of the received signal increases by only 5-10% if a collision occurs, so it cannot be used by the station to sense a collision.
• Therefore, CSMA/CA has been specially designed for wireless networks.
• These are the three types of strategies:
1. InterFrame Space (IFS): When a station finds the channel busy, it senses the channel again; when the station finds the channel idle, it waits for a period of time called the IFS time. IFS can also be used to define the priority of a station or a frame: the higher the IFS, the lower the priority.
2. Contention Window: It is the amount of time divided into slots. A station that is
ready to send frames chooses a random number of slots as wait time.
3. Acknowledgments: The positive acknowledgments and time-out timer can help
guarantee a successful transmission of the frame.
• Characteristics of CSMA/CA :
1. Carrier Sense: The device listens to the channel before transmitting, to ensure that it is not
currently in use by another device.
2. Multiple Access: Multiple devices share the same channel and can transmit simultaneously.
3. Collision Avoidance: If two or more devices attempt to transmit at the same time, a collision
occurs. CSMA/CA uses random backoff time intervals to avoid collisions.
4. Acknowledgment (ACK): After successful transmission, the receiving device sends an ACK to
confirm receipt.
5. Fairness: The protocol ensures that all devices have equal access to the channel and no single
device monopolizes it.
6. Binary Exponential Backoff: If a collision occurs, the device waits for a random period of time before attempting to retransmit. The backoff time increases exponentially with each retransmission attempt (see the sketch after this list).
7. Interframe Spacing: The protocol requires a minimum amount of time between transmissions to allow the channel to be clear and reduce the likelihood of collisions.
8. RTS/CTS Handshake: In some implementations, a Request-To-Send (RTS) and Clear-To-Send (CTS) handshake is used to reserve the channel before transmission. This reduces the chance of collisions and increases efficiency.
9. Wireless Network Quality: The performance of CSMA/CA is greatly influenced by the quality of the wireless network, such as the strength of the signal, interference, and network congestion.
10. Adaptive Behavior: CSMA/CA can dynamically adjust its behavior in response to changes in network conditions, ensuring the efficient use of the channel and avoiding congestion.
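A hedged Python sketch of binary exponential backoff; the slot time and the cap on the exponent are assumed example values, not taken from any particular standard:

import random

SLOT_TIME = 51.2e-6        # seconds per contention slot (assumed value)
MAX_EXPONENT = 10          # cap so the waiting range does not grow forever (assumed)

def backoff_delay(attempt):
    """After the n-th collision, wait a random number of slots in [0, 2^min(n, cap) - 1]."""
    k = min(attempt, MAX_EXPONENT)
    return random.randint(0, 2**k - 1) * SLOT_TIME

for attempt in range(1, 5):
    print(f"collision #{attempt}: wait {backoff_delay(attempt) * 1e6:.1f} us")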
Comparison of random access protocols (columns: Protocol | Transmission behavior | Collision detection method | Efficiency | Use cases):
• Pure ALOHA | Sends frames immediately | No collision detection | Low | Low-traffic networks
• Slotted ALOHA | Sends frames at specific time slots | No collision detection | Better than pure ALOHA | Low-traffic networks
• CSMA/CD | Monitors medium after sending a frame, retransmits if necessary | Collision detection by monitoring transmissions | High | Wired networks with moderate to high traffic
• CSMA/CA | Monitors medium while transmitting, adjusts behavior to avoid collisions | Collision avoidance through random backoff time intervals | High | Wireless networks with moderate to high traffic and high error rates
Controlled Access Protocols in Computer Networks
• In controlled access, the stations seek information from one another to find which station has the right to send. It allows only one node to send at a time, to avoid the collision of messages on a shared medium. The three controlled-access methods are:
1. Reservation
2. Polling
3. Token Passing
• Reservation
• In the reservation method, a station needs to make a reservation before sending data.
• The timeline has two kinds of periods:
• Reservation interval of fixed time length
• Data transmission period of variable frames.
• If there are N stations, the reservation interval is divided into N slots, and each station has one slot.
• Suppose station 1 has a frame to send; it transmits a 1 bit during slot 1. No other station is allowed to transmit during this slot.
• In general, the i-th station may announce that it has a frame to send by inserting a 1 bit into the i-th slot. After all N slots have been checked, each station knows which stations wish to transmit.
• The stations which have reserved their slots transfer their frames in that
order.
• After data transmission period, next reservation interval begins.
• Since everyone agrees on who goes next, there will never be any
collisions.
Polling

• Polling process is similar to the roll-call performed in class. Just like the teacher, a controller
sends a message to each node in turn.
• In this, one acts as a primary station(controller) and the others are secondary stations. All data
exchanges must be made through the controller.
• The message sent by the controller contains the address of the node being selected for
granting access.
• Although all nodes receive the message, only the addressed one responds to it and sends data, if any. If there is no data, usually a “poll reject” (NAK) message is sent back.
• Problems include high overhead of the polling messages and high dependence on the
reliability of the controller.
• Token Passing
• In token passing scheme, the stations are connected logically to each other in form of ring and access to stations is governed by tokens.
• A token is a special bit pattern or a small message, which circulate from one station to the next in some predefined order.
• In a token ring, the token is passed from one station to the adjacent station in the ring, whereas in the case of a token bus, each station uses the bus to send the token to the next station in some predefined order.
• In both cases, the token represents permission to send. If a station has a frame queued for transmission when it receives the token, it can send that frame before it passes the token to the next station. If it has no queued frame, it simply passes the token on.
• After sending a frame, each station must wait for all N stations (including itself) to send the token to their neighbours and the other N - 1 stations to send a frame, if they have one.
• There exist problems such as duplication of the token, loss of the token, insertion of a new station and removal of a station, which need to be tackled for correct and reliable operation of this scheme.
• The performance of a token ring can be described by two parameters:
• Delay, which is a measure of the time between when a packet is ready and when it is delivered; the average time (delay) required to send the token to the next station = a/N.
• Throughput, which is a measure of successful traffic:
• Throughput, S = 1/(1 + a/N) for a < 1
• (where N is the number of stations and a = Tp/Tt, the ratio of propagation delay to transmission delay).
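A quick numeric sketch of these two formulas, with assumed example values for a and N:

N, a = 10, 0.5                   # assumed: 10 stations, a = Tp/Tt = 0.5

delay = a / N                    # average delay to pass the token to the next station
throughput = 1 / (1 + a / N)     # formula valid for a < 1

print(f"delay = {delay:.3f}, throughput S = {throughput:.3f}")   # delay = 0.050, S = 0.952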
Channelization
• Channelization is a way to provide multiple access by sharing the available bandwidth in time, in frequency, or through codes between source and destination nodes.
• Channelization protocols can be classified as:
• FDMA (Frequency Division Multiple Access)
• TDMA (Time Division Multiple Access)
• CDMA (Code Division Multiple Access)
• FDMA (Frequency Division Multiple Access)
• In this technique, the bandwidth is divided into frequency bands, and each frequency band is allocated to a particular station to transmit its data. The frequency band distributed to a station is reserved for it. Each station uses a band-pass filter to confine its data transmission to its assigned frequency band. Each frequency band has some gap in between to prevent interference between adjacent bands; these gaps are called guard bands.
• Advantages of FDMA
• The FDMA system is easy to implement and is not very complex.
• Frequency bands ensure continuous transmission, saving the bits used for synchronization and framing.
• When the traffic is uniform, FDMA becomes very efficient due to its separate frequency band for each station.
• All stations can run simultaneously at all times without waiting for their turn.
• There is no restriction regarding the baseband or modulation.
• Disadvantages of FDMA
• The bandwidth of each channel is narrow.
• The planning of the network and spectrum is very time-consuming.
• If a channel is not being used, it sits idle and its bandwidth is wasted.
• The presence of guard bands reduces the bandwidth available for use.
• Bandwidth is assigned permanently to each station, which reduces its flexibility.
• TDMA (Time Division Multiple Access)
• TDMA is another technique to enable multiple access on a shared medium. In this, the stations share the channel's bandwidth time-wise. Every station is allocated a fixed time slot to transmit its signal.
• The data link layer tells its physical layer to use the allotted time. TDMA requires
synchronization between stations.
• There is a time gap between the time intervals, called guard time, which is assigned for
the synchronization between stations.
• The rate of data in TDMA is greater than FDMA but lesser than CDMA.
• CDMA (Code Division Multiple Access)
• In the CDMA technique, communication happens using codes. Using this technique, different stations can transmit their signals on the same channel at the same time using different codes.
• There is only one channel in CDMA that carries all the signals. CDMA is based on a coding technique where each station is assigned a code (a sequence of numbers called chips).
• It differs from TDMA in that all the stations can transmit simultaneously on the channel, as there is no time sharing. And it differs from FDMA in that only one channel occupies the whole bandwidth.
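A tiny Python sketch of the chip-sequence idea: four stations with orthogonal (Walsh-style) codes transmit simultaneously, the channel adds their signals, and each receiver recovers its own bit with an inner product (the codes and station names are illustrative assumptions):

codes = {
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
    "C": [+1, +1, -1, -1],
    "D": [+1, -1, -1, +1],
}

def encode(station, bit):                     # bit is +1 or -1
    return [bit * c for c in codes[station]]

def decode(station, channel):
    code = codes[station]
    return sum(x * c for x, c in zip(channel, code)) // len(code)   # normalized inner product

# stations A, B, C, D transmit +1, -1, +1, +1 at the same time on one channel
signals = [encode("A", +1), encode("B", -1), encode("C", +1), encode("D", +1)]
channel = [sum(chips) for chips in zip(*signals)]                   # the medium adds the signals

assert decode("A", channel) == +1
assert decode("B", channel) == -1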
