
Programme Name and Semester: B.Tech. (AI-ML), 5th Semester
Course Name (Course Code): Computer Network (PCC-CSM502)
Class:
Academic Session: 2023-2024

Study Material
(Computer Network (PCC-CSM502))
_____________________________________________________________________________________________

Table of Contents

Error Detection and Error Correction-Fundamentals

CRC

Flow Control

Error control protocols

Stop and Wait

Go-Back-N ARQ

Selective Repeat ARQ

Sliding Window

Piggybacking

Random Access

Block coding

Hamming Distance

Data Link Layer and Medium Access Sub Layer: Multiple access protocols

Pure ALOHA

Slotted ALOHA

CSMA/CD

CSMA/CA


Module 2
When data is transmitted from one device to another, there is no guarantee that the data received by the receiving device is identical to the data transmitted by the sending device. An error is a situation in which the message received at the receiver end is not identical to the message transmitted.

Types of Errors

Errors can be classified into two categories:


o Single-Bit Error
o Burst Error

Single-Bit Error: Only one bit of a given data unit is changed from 1 to 0 or from 0 to 1.


In the figure above, the transmitted message is corrupted by a single-bit error, i.e., a 0 bit is changed to a 1.

A single-bit error is the least likely type of error in serial data transmission. For example, if the sender sends data at 10 Mbps, each bit lasts only 0.1 µs; for a single-bit error to occur, the noise must last no longer than 0.1 µs, which is very rare, because noise normally lasts much longer than this.

Single-bit errors mainly occur in parallel data transmission. For example, if eight wires are used to send the eight bits of a byte and one of the wires is noisy, then one bit is corrupted in each byte.

Burst Error: When two or more bits are changed from 0 to 1 or from 1 to 0, it is known as a burst error. The length of a burst error is measured from the first corrupted bit to the last corrupted bit.

The duration of the noise in a burst error is longer than in a single-bit error. Burst errors are most likely to occur in serial data transmission. The number of affected bits depends on the duration of the noise and the data rate.

Redundancy
The central concept in detecting or correcting errors is redundancy. To be able to detect or correct errors,
we need to send some extra bits with our data. These redundant bits are added by the sender and removed
by the receiver. Their presence allows the receiver to detect or correct corrupted bits.

Detection versus Correction


The correction of errors is more difficult than the detection. In error detection, we are looking only to see
if any error has occurred. The answer is a simple yes or no. We are not even interested in the number of
errors. A single-bit error is the same for us as a burst error.
In error correction, we need to know the exact number of bits that are corrupted and more importantly,
their location in the message. The number of the errors and the size of the message are important factors.

Forward Error Correction versus Retransmission


There are two main methods of error correction. Forward error correction is the process in which the
receiver tries to guess the message by using redundant bits. This is possible, as we see later, if the number
of errors is small. Correction by retransmission is a technique in which the receiver detects the occurrence
of an error and asks the sender to resend the message. Resending is repeated until a message arrives that
the receiver believes is error-free.

Coding
Redundancy is achieved through various coding schemes. The sender adds redundant bits through a
process that creates a relationship between the redundant bits and the actual data bits. The receiver checks
the relationships between the two sets of bits to detect or correct the errors. The ratio of redundant bits to
the data bits and the robustness of the process are important factors in any coding scheme.

Error Detecting Techniques

The most popular Error Detecting Techniques are:


o Single parity check or Vertical Redundancy Check (VRC)
o Two-dimensional parity check or Longitudinal Redundancy Check (LRC)
o Checksum
o Cyclic redundancy check

Block Coding Techniques in Error Detection and Correction

In block coding, we divide our message into blocks, each of k bits, called data words. We add r redundant
bits to each block to make the length n = k + r. The resulting n-bit blocks are called code words.

For example, we have a set of data words, each of size k, and a set of code words, each of size n. With k bits, we can create a combination of 2^k data words; with n bits, we can create a combination of 2^n code words. Since n > k, the number of possible code words is larger than the number of possible data words.

The block coding process is one-to-one; the same data word is always encoded as the same code word. This means that we have 2^n − 2^k code words that are not used. We call these code words invalid or illegal. The following figure shows the situation.


Error Detection
If the following two conditions are met, the receiver can detect a change in the original code word by using the block coding technique.
1. The receiver has (or can find) a list of valid code words.
2. The original code word has changed to an invalid one.

The sender creates code words out of data words by using a generator that applies the rules and
procedures of encoding. Each code word sent to the receiver may change during transmission. If the
received code word is the same as one of the valid code words, the word is accepted; the corresponding
data word is extracted for use.

If the received code word is not valid, it is discarded. However, if the code word is corrupted during
transmission but the received word still matches a valid code word, the error remains undetected. This
type of coding can detect only single errors. Two or more errors may remain undetected.

For example, consider the following table of data words and code words:

Assume the sender encodes the data word 01 as 011 and sends it to the receiver. Consider the following
cases:
1. The receiver receives 011. It is a valid code word. The receiver extracts the data word 01 from it.
2. The code word is corrupted during transmission, and 111 is received (the leftmost bit is corrupted).
This is not a valid code word and is discarded.


3. The code word is corrupted during transmission, and 000 is received (the right two bits are corrupted).
This is a valid code word. The receiver incorrectly extracts the data word 00. Two corrupted bits have
made the error undetectable.
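The three cases above can be reproduced with a short sketch. The codebook used here is an assumed k = 2, r = 1 even-parity code (the original table is not reproduced in this material), consistent with the encoding of data word 01 as code word 011 in the example.

```python
# Assumed k = 2, r = 1 even-parity block code; the table is an assumption
# consistent with the example above (data word 01 -> code word 011).
CODEBOOK = {"00": "000", "01": "011", "10": "101", "11": "110"}
VALID = {cw: dw for dw, cw in CODEBOOK.items()}   # code word -> data word

def receive(codeword: str):
    """Return the extracted data word, or None if the code word is invalid."""
    return VALID.get(codeword)

print(receive("011"))   # case 1: valid, data word '01' extracted
print(receive("111"))   # case 2: invalid (one corrupted bit), frame discarded
print(receive("000"))   # case 3: valid but wrong -- the 2-bit error goes undetected
```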

Hamming Distance in Error Control

The central concept in coding for error control is the Hamming distance. The Hamming distance between two words (of the same size) is the number of differences between the corresponding bits. We show the Hamming distance between two words x and y as d(x, y).
The Hamming distance can easily be found if we apply the XOR operation on the two words and count the number of 1s in the result. Note that the Hamming distance between two different words is a value greater than zero.
1. The Hamming distance d(000, 011) is 2 because 000 ⊕ 011 is 011 (two 1s).
2. The Hamming distance d(10101, 11110) is 3 because 10101 ⊕ 11110 is 01011 (three 1s).
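A minimal sketch of this XOR-and-count procedure, checking the two distances above (the function name is illustrative):

```python
def hamming_distance(x: str, y: str) -> int:
    """Hamming distance between two equal-length binary strings:
    XOR the words bit by bit and count the 1s in the result."""
    assert len(x) == len(y), "words must be the same size"
    return sum(bx != by for bx, by in zip(x, y))

print(hamming_distance("000", "011"))       # 2  (000 XOR 011 = 011)
print(hamming_distance("10101", "11110"))   # 3  (XOR result = 01011)
```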

Minimum Hamming Distance:


The minimum Hamming distance is the smallest Hamming distance between all possible pairs. We use
"dmin" to define the minimum Hamming distance in a coding scheme. To find this value, we find the
Hamming distances between all words and select the smallest one.
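A short sketch of this search over all pairs, using the assumed even-parity code from the earlier example:

```python
from itertools import combinations

def distance(a: str, b: str) -> int:
    """Hamming distance between two equal-length binary strings."""
    return sum(x != y for x, y in zip(a, b))

codewords = ["000", "011", "101", "110"]   # assumed even-parity code

d_min = min(distance(a, b) for a, b in combinations(codewords, 2))
print(d_min)   # 2 for this code: it can detect single-bit errors
```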

Hamming Distance and Error


When a code word is corrupted during transmission, the Hamming distance between the sent and received
code words is the number of bits affected by the error. In other words, the Hamming distance between the
received code word and the sent code word is the number of bits that are corrupted during transmission.
For example, if the code word 00000 is sent and 01101 is received, 3 bits are in error and the Hamming
distance between the two is d(00000, 01101) =3.

Minimum Distance for Error Detection:


To be able to detect up to s errors, we need to find the required minimum Hamming distance of the code. If s errors occur during transmission, the Hamming distance between the sent code word and the received code word is s. If our code is to detect up to s errors, the minimum distance between the valid code words must be s + 1, so that the received code word cannot accidentally match another valid code word.

Minimum Distance for Error Correction:


Error correction is more complex than error detection because a decision is involved. When a received code word is not a valid code word, the receiver needs to decide which valid code word was actually sent. The decision is based on the concept of territory, an exclusive area surrounding the code word. Each valid code word has its own territory. We use a geometric approach to define each territory. We assume that each valid code word has a circular territory with a radius of t and that the valid code word is at the center.
For example, suppose a code word x is corrupted by t bits or less. Then this corrupted code word is located either inside or on the perimeter of this circle. If the receiver receives a code word that belongs to this territory, it decides that the original code word is the one at the center. Note that we assume that only up to t errors have occurred; otherwise, the decision is wrong. The following figure shows this geometric interpretation. Some texts use a sphere to show the distance between all valid block codes. For the territories of different valid code words not to overlap, the minimum distance of the code must be dmin = 2t + 1 if we want to correct up to t errors.


Cyclic Redundancy Check (CRC)

CRC is a redundancy-based error detection technique used to determine whether an error has occurred.


Following are the steps used in CRC for error detection:
o In the CRC technique, a string of n 0s is appended to the data unit, where n is one less than the number of bits in a predetermined binary number, known as the divisor, which is n + 1 bits long.
o Secondly, the newly extended data is divided by the divisor using a process known as binary (modulo-2) division. The remainder generated from this division is known as the CRC remainder.
o Thirdly, the CRC remainder replaces the appended 0s at the end of the original data. This newly generated unit is sent to the receiver.
o The receiver receives the data followed by the CRC remainder. The receiver treats this whole unit as a single unit, and it is divided by the same divisor that was used to find the CRC remainder.
If the result of this division is zero, the data has no detected error and is accepted.
If the result of this division is not zero, the data contains an error and is therefore discarded.

Let's understand this concept through an example:


Suppose the original data is 11100 and divisor is 1001.

CRC Generator
o A CRC generator uses modulo-2 division. Firstly, three zeros are appended at the end of the data, as the length of the divisor is 4 and the length of the string of 0s to be appended is always one less than the length of the divisor.
o Now, the string becomes 11100000, and this resultant string is divided by the divisor 1001.


o The remainder generated from the binary division is known as CRC remainder. The generated
value of the CRC remainder is 111.
o CRC remainder replaces the appended string of 0s at the end of the data unit, and the final string
would be 11100111 which is sent across the network.

CRC Checker
o The functionality of the CRC checker is similar to the CRC generator.
o When the string 11100111 is received at the receiving end, the CRC checker performs the modulo-2 division.
o A string is divided by the same divisor, i.e., 1001.
o In this case, CRC checker generates the remainder of zero. Therefore, the data is accepted.
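The generator and checker above can be sketched directly as modulo-2 (XOR) long division; the numbers below reproduce the worked example (data 11100, divisor 1001, CRC 111), and the function name is illustrative.

```python
def mod2_div(dividend: str, divisor: str) -> str:
    """Binary (modulo-2) long division; returns the remainder,
    which is one bit shorter than the divisor."""
    rem = list(dividend[:len(divisor)])
    for i in range(len(divisor), len(dividend) + 1):
        if rem[0] == "1":                     # XOR with the divisor only when the MSB is 1
            rem = [str(int(a) ^ int(b)) for a, b in zip(rem, divisor)]
        rem.pop(0)                            # drop the processed bit
        if i < len(dividend):
            rem.append(dividend[i])           # bring down the next bit
    return "".join(rem)

data, divisor = "11100", "1001"
appended = data + "0" * (len(divisor) - 1)    # append n zeros (n = divisor length - 1)
crc = mod2_div(appended, divisor)             # CRC remainder
codeword = data + crc                         # unit sent across the network
print(crc, codeword)                          # 111 11100111
print(mod2_div(codeword, divisor))            # 000 -> no error detected, data accepted
```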

Flow Control

o It is a set of procedures that tells the sender how much data it can transmit before the data overwhelms the receiver.
o The receiving device has limited speed and limited memory to store the data. Therefore, the receiving device must be able to inform the sending device to stop the transmission temporarily before those limits are reached.
o It requires a buffer, a block of memory for storing the information until it is processed.


Two methods have been developed to control the flow of data:


o Stop-and-wait
o Sliding window

Error control in Data Link Layer

Error control in the data link layer is the process of detecting and correcting data frames that have been corrupted or lost during transmission.
In the case of lost or corrupted frames, the receiver does not receive the correct data frame, and the sender is unaware of the loss. The data link layer follows a technique to detect transit errors and take the necessary action, which is retransmission of frames whenever an error is detected or a frame is lost. This process is called Automatic Repeat Request (ARQ).

Phases in Error Control


The error control mechanism in data link layer involves the following phases
● Detection of Error

● Acknowledgment
o Positive ACK

o Negative ACK

● Retransmission

Error Control Techniques


There are three main techniques for error control

Stop-and-wait Protocol


In the stop-and-wait method, the sender waits for an acknowledgement after every frame it sends. Only when the acknowledgement is received is the next frame sent. The process of alternately sending a frame and waiting continues until the sender transmits the EOT (End of Transmission) frame.
When the ACK arrives, the sender sends the next frame. It is called the Stop-and-Wait Protocol because the sender sends one frame, stops until it receives confirmation from the receiver (okay to go ahead), and then sends the next frame. We still have unidirectional communication for data frames, but auxiliary ACK frames (simple tokens of acknowledgment) travel in the other direction.

Primitives of Stop and Wait Protocol


The primitives of stop and wait protocols are:

Sender side
Rule 1: Sender sends one data packet at a time.
Rule 2: Sender sends the next packet only when it receives the acknowledgment of the previous packet.
Therefore, the idea of stop and wait protocol in the sender's side is very simple, i.e., send one packet at a
time, and do not send another packet before receiving the acknowledgment.
Receiver side
Rule 1: Receive and then consume the data packet.
Rule 2: When the data packet is consumed, receiver sends the acknowledgment to the sender.
Therefore, the idea of stop and wait protocol in the receiver's side is also very simple, i.e., consume the
packet, and once the packet is consumed, the acknowledgment is sent. This is known as a flow control
mechanism.

Working of Stop and Wait protocol

The figure above shows the working of the stop-and-wait protocol. The sender sends a packet, known as a data packet, and will not send the second packet without receiving the acknowledgment of the first one. The receiver sends an acknowledgment for each data packet it receives. Once the acknowledgment is received, the sender sends the next packet. This process continues until all the packets have been sent.
The main advantage of this protocol is its simplicity, but it also has some disadvantages. For example, if there are 1000 data packets to be sent, all 1000 packets cannot be sent at once, because in the Stop-and-Wait protocol only one packet is sent at a time.

Disadvantages of Stop and Wait protocol


1. Problems occur due to lost data

Suppose the sender sends the data and the data is lost. The receiver waits for the data for a long time. Since the data is not received, the receiver does not send any acknowledgment, and since the sender does not receive any acknowledgment, it will not send the next packet. This problem occurs due to the lost data.
In this case, two problems occur:
o The sender waits an infinite amount of time for an acknowledgment.
o The receiver waits an infinite amount of time for the data.
2. Problems occur due to lost acknowledgment

Suppose the sender sends the data and it has also been received by the receiver. On receiving the packet,
the receiver sends the acknowledgment. In this case, the acknowledgment is lost in a network, so there is
no chance for the sender to receive the acknowledgment. There is also no chance for the sender to send
the next packet as in stop and wait protocol, the next packet cannot be sent until the acknowledgment of
the previous packet is received.
In this case, one problem occurs:
o Sender waits for an infinite amount of time for an acknowledgment.
3. Problem due to the delayed data or acknowledgment


Suppose the sender sends the data and it is received by the receiver. The receiver then sends the acknowledgment, but the acknowledgment arrives after the timeout period on the sender's side. Because the acknowledgment arrives late, it can be wrongly taken as the acknowledgment of some other data packet.

Network Performance

Network performance is defined by the overall quality of service provided by a network. This
encompasses numerous parameters and measurements that must be analyzed collectively to assess a given
network. Network performance measurement is therefore defined as the overall set of processes and tools
that can be used to quantitatively and qualitatively assess network performance and provide actionable
data to remediate any network performance issues.

Why Measure Network Performance


The demands on networks are increasing every day, and the need for proper network performance
measurement is more important than ever before. Effective network performance translates into improved
user satisfaction, whether that be internal employee efficiencies, or customer-facing network components
such as an e-commerce website, making the business rationale for performance testing and monitoring
self-evident.
The performance of a network can never be fully modeled, so measuring network performance before,
during, and after updates are made and monitoring performance on an ongoing basis are the only valid
methods to fully ensure network quality. While measuring and monitoring network performance
parameters are essential, the interpretation and actions stemming from these measurements are equally
important.

Network Performance Measurement Parameters


To ensure optimized network performance, the most important criterion should be selected for
measurement. Many of the parameters included in a comprehensive network performance measurement
solution focus on data speed and data quality. Both of these broad categories can significantly impact end
user experience and are influenced by several factors.

Throughput and Bandwidth


Throughput is a metric often associated with the manufacturing industry and is most commonly defined
as the amount of material or items passing through a particular system or process. A common question in
the manufacturing industry is how many of product X were produced today, and did this number meet
expectations. For network performance measurement, throughput is defined in terms of the amount of
data or number of data packets that can be delivered in a pre-defined time frame.
Bandwidth, usually measured in bits per second, is a characterization of the amount of data that can be
transferred over a given time period. Bandwidth is therefore a measure of capacity rather than speed. For
example, a bus may be capable of carrying 100 passengers (bandwidth), but the bus may actually only
transport 85 passengers (throughput).

Latency
With regards to network performance measurement, latency is simply the amount of time it takes for data
to travel from one defined location to another. This parameter is sometimes referred to as delay. Ideally,
the latency of a network is as close to zero as possible. The absolute limit or governing factor for latency
is the speed of light, but packet queuing in switched networks and the refractive index of fiber optic
cabling are examples of variables that can increase latency.

● Propagation delay
Propagation delay is the time it takes for a bit to travel from one end of a link to the other. The delay depends on the distance (D) between the sender and the receiver and the propagation speed (S) of the wave signal. It is calculated as D / S.

● Transmission delay
Transmission delay refers to the time it takes to push a data packet onto the outgoing link. The delay is determined by the size of the packet and the capacity of the outgoing link. If a packet consists of L bits and the link has a capacity of B bits per second, then the transmission delay is equal to L / B (a short worked calculation follows this list).

● Queuing delay
Queuing delay refers to the time that a packet waits to be processed in the buffer of a switch. The
delay is dependent on the arrival rate of the incoming packets, the transmission capacity of the
outgoing link, and the nature of the network’s traffic.

● Processing delay
Processing delay is the time taken by a switch to process the packet header. The delay depends on
the processing speed of the switch.
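A small worked calculation of the first two formulas under assumed numbers (2,000 km of fibre at roughly 2 × 10^8 m/s, and a 1,500-byte packet on a 100 Mbps link); the values are illustrative only.

```python
# Assumed example values -- not taken from the course material.
D = 2_000_000      # distance in metres (2,000 km)
S = 2e8            # propagation speed in fibre, in m/s (approximate)
L = 1500 * 8       # packet size in bits (1,500 bytes)
B = 100e6          # link capacity in bits per second (100 Mbps)

propagation_delay = D / S       # 0.01 s  = 10 ms
transmission_delay = L / B      # 0.00012 s = 0.12 ms

print(f"propagation delay:  {propagation_delay * 1e3:.2f} ms")
print(f"transmission delay: {transmission_delay * 1e3:.2f} ms")
```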

Packet Loss
With regards to network performance measurement, packet loss refers to the number of packets transmitted from one destination to another that fail to arrive. This metric can be quantified by capturing traffic data on both ends and then identifying missing packets and/or retransmissions of packets.
Packet loss can be caused by network congestion, router performance and software issues, among other
factors.


Jitter
Jitter is defined as the variation in time delay for the data packets sent over a network. This variable
represents an identified disruption in the normal sequencing of data packets. Jitter is related to latency,
since the jitter manifests itself in increased or uneven latency between data packets, which can disrupt
network performance and lead to packet loss and network congestion. Although some level of jitter is to
be expected and can usually be tolerated, quantifying network jitter is an important aspect of
comprehensive network performance measurement.

Latency vs Throughput
While the concepts of throughput and bandwidth are sometimes misunderstood, the same confusion is
common between the terms latency and throughput. Although these parameters are closely related, it is
important to understand the difference between the two.
In relation to network performance measurement, throughput is a measurement of actual system
performance, quantified in terms of data transfer over a given time.
Latency is a measurement of the delay in transfer time, meaning it will directly impact the throughput, but
is not synonymous with it. The latency might be thought of as an unavoidable bottleneck on an assembly
line, such as a test process, measured in units of time. Throughput, on the other hand, is measured in units
completed which is inherently influenced by this latency.

Factors Affecting Network Performance


Network performance management includes monitoring and optimization practices for key network
performance metrics such as application down time and packet loss. Increased network availability and
minimized response time when problems occur are two of the logical outputs for a successful network
management program.
1. Infrastructure: The overall network infrastructure includes network hardware, such as routers,
switches and cables, networking software, including security and operating systems as well as
network services such as IP addressing and wireless protocols. From the infrastructure
perspective, it is important to characterize the overall traffic and bandwidth patterns on the
network. This network performance measurement will provide insight into which flows are most
congested over time and could become potential problem areas.
2. Network Issues: Performance limitations inherent to the network itself are often a source of
significant emphasis. Multiple facets of the network can contribute to performance, and
deficiencies in any of these areas can lead to systemic problems. Since hardware requirements are
essential to capacity planning, these elements should be designed to meet all anticipated system
demands. Network congestion, on either the active devices or physical links (cabling) of the
network can lead to decreased speeds, if packets are queued, or packet loss if no queuing system is
in place.
3. Applications: While network hardware and infrastructure issues can directly impact user
experience for a given application, it is important to consider the impact of the applications
themselves as important cogs in the overall network architecture. Poor performing applications
can over-consume bandwidth and diminish user experience. As applications become more
complex over time, diagnosing and monitoring application performance gains importance.
4. Security Issues: Network security is intended to protect privacy, intellectual property, and data integrity. Thus, the need for robust cyber security is never in question. Managing and mitigating network security issues requires device scanning, data encryption, virus protection, authentication and intrusion detection, all of which consume valuable network bandwidth and can impact performance.

Bandwidth Delay Product

Round-trip delay (RTD) or round-trip time (RTT) is the amount of time it takes for a signal to be sent plus the amount of time it takes for an acknowledgement of that signal to be received. This time delay includes the propagation times for the paths between the two communication endpoints. In the context of computer networks, the signal is typically a data packet. RTT is also known as ping time and can be determined with the ping command.

End-to-end delay is the length of time it takes for a signal to travel in one direction and is often approximated as half the RTT.

The bandwidth delay product is a measurement of how many bits can fill up a network link. It gives the maximum amount of data that can be transmitted by the sender at a given time before waiting for acknowledgment; thus it is the maximum amount of unacknowledged data.

Measurement
The bandwidth delay product is calculated as the product of the link capacity of the channel and the round-trip delay time of the transmission.
The link capacity of a channel is the number of bits transmitted per second; hence its unit is bps, i.e. bits per second.
The round-trip delay time is the sum of the time taken for a signal to travel from the sender to the receiver and the time taken for its acknowledgment to reach the sender from the receiver. The round-trip delay includes all propagation delays in the links between the sender and the receiver.
The unit of the bandwidth delay product is bits or bytes.
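A quick sketch of this calculation with assumed figures (a 10 Mbps link and a 40 ms round-trip time); the numbers are illustrative only.

```python
# Assumed example link -- the values are illustrative only.
link_capacity_bps = 10e6     # 10 Mbps
rtt_seconds = 0.040          # 40 ms round-trip delay

bdp_bits = link_capacity_bps * rtt_seconds                 # bandwidth delay product
print(f"{bdp_bits:.0f} bits = {bdp_bits / 8:.0f} bytes")   # 400000 bits = 50000 bytes
```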

Stop-and-Wait Automatic Repeat Request

To detect and correct corrupted frames, we need to add redundancy bits to our data frame. When the
frame arrives at the receiver site, it is checked and if it is corrupted, it is silently discarded. The detection
of errors in this protocol is manifested by the silence of the receiver.
Lost frames are more difficult to handle than corrupted ones. In our previous protocols, there was no way
to identify a frame. The received frame could be the correct one, or a duplicate, or a frame out of order.
The solution is to number the frames. When the receiver receives a data frame that is out of order, this
means that frames were either lost or duplicated
The lost frames need to be resent in this protocol. If the receiver does not respond when there is an error,
how can the sender know which frame to resend? To remedy this problem, the sender keeps a copy of the
sent frame. At the same time, it starts a timer. If the timer expires and there is no ACK for the sent frame,
the frame is resent, the copy is held, and the timer is restarted.
Since the protocol uses the stop-and-wait mechanism, there is only one specific frame that needs an ACK at any time.
Error correction in Stop-and-Wait ARQ is done by keeping a copy of the sent frame and retransmitting the frame when the timer expires.
In Stop-and-Wait ARQ, we use sequence numbers to number the frames. The sequence numbers are based on modulo-2 arithmetic.
In Stop-and-Wait ARQ, the acknowledgment number always announces, in modulo-2 arithmetic, the sequence number of the next frame expected.
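The rules above can be illustrated with a compact simulation sketch: modulo-2 sequence numbers, one outstanding frame, a copy kept for retransmission, and a timeout that triggers resending. The lossy channel, the timeout handling and the names are simplifying assumptions for illustration, not the exact protocol machinery.

```python
import random

def send_stop_and_wait(frames, loss_prob=0.3, max_tries=10):
    """Stop-and-Wait ARQ sender sketch: keep a copy of the outstanding frame,
    retransmit on timeout, and number frames modulo 2."""
    seq = 0
    for payload in frames:
        for attempt in range(max_tries):
            print(f"send frame seq={seq} ({payload!r}), attempt {attempt + 1}")
            # Simulated channel: the frame or its ACK may be lost.
            if random.random() > loss_prob:
                print(f"  ACK {1 - seq} received (next frame expected)")
                break
            print("  timeout: resending the saved copy of the frame")
        else:
            raise RuntimeError("giving up after too many retransmissions")
        seq = 1 - seq                      # modulo-2 sequence numbering

random.seed(1)
send_stop_and_wait(["frame-A", "frame-B", "frame-C"])
```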

Sliding Window Protocol

The sliding window is a technique for sending multiple frames at a time. It controls the flow of data packets between two devices where reliable and sequential delivery of data frames is needed. In this technique, each frame is sent with a sequence number. The sequence numbers are used to find the missing data at the receiver end. The sliding window technique also avoids duplicate data by using these sequence numbers.

Go-Back-N Automatic Repeat Request

To improve the efficiency of transmission (filling the pipe), multiple frames must be in transition while waiting for acknowledgment. In other words, we need to let more than one frame be outstanding to keep the channel busy while the sender is waiting for acknowledgment. The first such protocol is called Go-Back-N Automatic Repeat Request (ARQ). In this protocol we can send several frames before receiving acknowledgments; we keep a copy of these frames until the acknowledgments arrive.
In the Go-Back-N protocol, the sequence numbers are modulo 2^m, where m is the size of the sequence number field in bits.
The sequence numbers range from 0 to 2^m − 1. For example, if m is 4, the only sequence numbers are 0 through 15 inclusive.


The sender window at any time divides the possible sequence numbers into four regions. The first region,
from the far left to the left wall of the window, defines the sequence numbers belonging to frames that are
already acknowledged. The sender does not worry about these frames and keeps no copies of them.
The second region, colored in Figure (a), defines the range of sequence numbers belonging to the frames
that are sent and have an unknown status. The sender needs to wait to find out if these frames have been
received or were lost. We call these outstanding frames.
The third range, white in the figure, defines the range of sequence numbers for frames that can be sent;
however, the corresponding data packets have not yet been received from the network layer.
Finally, the fourth region defines sequence numbers that cannot be used until the window slides.

Timers
Although there can be a timer for each frame that is sent, in our protocol we use only one. The reason is
that the timer for the first outstanding frame always expires first; we send all outstanding frames when
this timer expires.
Acknowledgment
The receiver sends a positive acknowledgment if a frame has arrived safe and sound and in order. If a
frame is damaged or is received out of order, the receiver is silent and will discard all subsequent frames
until it receives the one it is expecting. The silence of the receiver causes the timer of the
unacknowledged frame at the sender side to expire. This, in turn, causes the sender to go back and resend
all frames, beginning with the one with the expired timer. The receiver does not have to acknowledge
each frame received. It can send one cumulative acknowledgment for several frames.
Resending a Frame
When the timer expires, the sender resends all outstanding frames. That is why the protocol is called Go-
Back-N ARQ.
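A minimal sketch of this resending rule: the sender keeps copies of all outstanding frames, acknowledges them cumulatively, and on a timeout goes back and resends everything from the first unacknowledged frame. The class name, window size and frame contents are assumptions for illustration.

```python
from collections import deque

class GoBackNSender:
    """Minimal Go-Back-N sender sketch (sequence numbers modulo 2**m)."""

    def __init__(self, m=3, window=4):
        self.modulus = 2 ** m            # sequence numbers 0 .. 2**m - 1
        self.window = window             # must be at most 2**m - 1
        self.next_seq = 0
        self.outstanding = deque()       # copies of sent, unacknowledged frames

    def send(self, payload):
        if len(self.outstanding) >= self.window:
            raise RuntimeError("window full: wait for an acknowledgment")
        frame = (self.next_seq, payload)
        self.outstanding.append(frame)                     # keep a copy
        self.next_seq = (self.next_seq + 1) % self.modulus
        return frame

    def ack(self, ack_no):
        """Cumulative ACK: ack_no is the next frame the receiver expects."""
        while self.outstanding and self.outstanding[0][0] != ack_no:
            self.outstanding.popleft()                     # acknowledged, discard copy

    def timeout(self):
        """Timer for the first outstanding frame expired: resend all of them."""
        return list(self.outstanding)

sender = GoBackNSender()
for p in ["f0", "f1", "f2", "f3"]:
    sender.send(p)
sender.ack(2)              # frames 0 and 1 acknowledged cumulatively
print(sender.timeout())    # [(2, 'f2'), (3, 'f3')] are resent on timeout
```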


Selective Repeat Automatic Repeat Request

In Go-Back-N ARQ, the receiver keeps track of only one variable, and there is no need to buffer out-of-order frames; they are simply discarded. However, this protocol is very inefficient for a noisy link.

In a noisy link a frame has a higher probability of damage, which means the resending of multiple frames. This
resending uses up the bandwidth and slows down the transmission.

For noisy links, there is another mechanism that does not resend N frames when just one frame is damaged; only
the damaged frame is resent. This mechanism is called Selective Repeat ARQ.

It is more efficient for noisy links, but the processing at the receiver is more complex.


Differences between Go-Back N & Selective Repeat

One main difference is the number of timers. In Selective Repeat, each frame sent or resent needs its own timer, which means that the timers need to be numbered (0, 1, 2, 3, ...). The timer for frame 0 starts at the first request but stops when the ACK for this frame arrives.

There are two conditions for the delivery of frames to the network layer: First, a set of consecutive frames must
have arrived. Second, the set starts from the beginning of the window. After the first arrival, there was only one
frame and it started from the beginning of the window. After the last arrival, there are three frames and the first
one starts from the beginning of the window.

Another important point is that a NAK is sent.

The next point is about the ACKs. Notice that only two ACKs are sent here.

The first one acknowledges only the first frame; the second one acknowledges three frames. In Selective Repeat,
ACKs are sent when data are delivered to the network layer. If the data belonging to n frames are delivered in one
shot, only one ACK is sent for all of them.

Problem: In the SR protocol, suppose frames 0 through 4 have been transmitted. Now, imagine that frame 0 times out, frame 5 (a new frame) is transmitted, frame 1 times out, frame 2 times out, and frame 6 (another new frame) is transmitted.


At this point, what will be the outstanding packets in sender’s window?

1. 3, 4, 1, 5, 2, 6

2. 3, 4, 0, 5, 1, 2, 6

3. 0, 1, 2, 3, 4, 5, 6

4. 6, 5, 4, 3, 2, 1

Solution: In the SR protocol, only the required frame is retransmitted, not the entire window.

Step 01: Frames 0 through 4 have been transmitted: 4, 3, 2, 1, 0

Step 02: Frame 0 times out, so the sender retransmits it: 0, 4, 3, 2, 1

Step 03: Frame 5 (a new frame) is transmitted: 5, 0, 4, 3, 2, 1

Step 04: Frame 1 times out, so the sender retransmits it: 1, 5, 0, 4, 3, 2

Step 05: Frame 2 times out, so the sender retransmits it: 2, 1, 5, 0, 4, 3

Step 06: Frame 6 (another new frame) is transmitted: 6, 2, 1, 5, 0, 4, 3

Thus, option (2) is correct.

Data Link Layer

Purpose of the Data Link Layer


The data link layer is actually divided into two sublayers:

Logical Link Control (LLC): This upper sublayer defines the software processes that provide services to the
network layer protocols. It places information in the frame that identifies which network layer protocol is being
used for the frame. This information allows multiple Layer 3 protocols, such as IPv4 and IPv6, to utilize the same
network interface and media.

Media Access Control (MAC): This lower sublayer defines the media access processes performed by the
hardware. It provides data link layer addressing and delimiting of data according to the physical signaling
requirements of the medium and the type of data link layer protocol in use.

Separating the data link layer into sublayers allows for one type of frame defined by the upper layer to access
different types of media defined by the lower layer. Such is the case in many LAN technologies, including
Ethernet.

The figure illustrates how the data link layer is separated into the LLC and MAC sublayers. The LLC communicates
with the network layer while the MAC sublayer allows various network access technologies. For instance, the
MAC sublayer communicates with Ethernet LAN technology to send and receive frames over copper or fiber-optic
cable. The MAC sublayer also communicates with wireless technologies such as Wi-Fi and Bluetooth to send and
receive frames wirelessly.

Framing in Data Link Layer


Frames are the units of digital transmission, particularly in computer networks and telecommunications. Frames are comparable to the packets of energy called photons in the case of light energy. A frame is also the unit used repeatedly in the time-division multiplexing process.

In framing, a point-to-point connection between two computers or devices consists of a wire in which data is transmitted as a stream of bits. However, these bits must be framed into discernible blocks of information. Framing is a function of the data link layer. It provides a way for a sender to transmit a set of bits that are meaningful to the receiver. Ethernet, token ring, frame relay, and other data link layer technologies have their own frame structures. Frames have headers that contain information such as error-checking codes.

At the data link layer, the message from the sender is encapsulated and delivered to the receiver by adding the sender's and receiver's addresses. The advantage of using frames is that the data is broken up into recoverable chunks that can easily be checked for corruption.

Problems in Framing –

● Detecting start of the frame: When a frame is transmitted, every station must be able to detect it. Station
detects frames by looking out for a special sequence of bits that marks the beginning of the frame i.e. SFD
(Starting Frame Delimiter).

● How does the station detect a frame: Every station listens to the link for the SFD pattern through a sequential circuit. If the SFD is detected, the sequential circuit alerts the station. The station then checks the destination address to accept or reject the frame.

● Detecting end of frame: When to stop reading the frame.


Types of Framing

Framing can be of two types, fixed sized framing and variable sized framing.

Fixed-sized Framing: Here the size of the frame is fixed, and so the frame length itself acts as the delimiter of the frame. Consequently, it does not require additional boundary bits to identify the start and end of the frame. Example − ATM cells.

Variable-sized Framing: Here, the size of each frame to be transmitted may be different, so additional mechanisms are needed to mark the end of one frame and the beginning of the next frame. It is used in local area networks.

Parts of a Frame

A frame has the following parts −

● Frame Header − It contains the source and the destination addresses of the frame.

● Payload field − It contains the message to be delivered.

● Trailer − It contains the error detection and error correction bits.

● Flag − It marks the beginning and end of the frame.

Framing is a Data Link layer function whereby the packets from the Network Layer are encapsulated into frames.
The data frames can be of fixed length or variable length. In variable - length framing, the size of each frame to be
transmitted may be different. So, a pattern of bits is used as a delimiter to mark the end of one frame and the
beginning of the next frame.

The two types of variable - sized framing are −

● Byte or Character-oriented framing

● Bit - oriented framing


Character - Oriented Framing

In character - oriented framing, data is transmitted as a sequence of bytes, from an 8-bit coding system like ASCII.
The parts of a frame in a character - oriented framing are −

● Frame Header − It contains the source and the destination addresses of the frame in form of bytes.

● Payload field − It contains the message to be delivered. It is a variable sequence of data bytes.

● Trailer − It contains the bytes for error detection and error correction.

● Flags − Flags are the frame delimiters signalling the start and end of the frame. Each flag is a 1-byte, protocol-dependent special character.

Character-oriented protocols are suited for the transmission of text. The flag is chosen as a character that is not used for text encoding. However, if the protocol is used for transmitting multimedia messages, there is a chance that the pattern of the flag byte is present in the message byte sequence. So that the receiver does not consider this pattern as the end of the frame, a byte stuffing mechanism is used. Here, a special byte called the escape character (ESC) is stuffed before every byte in the message that has the same pattern as the flag byte. If the ESC character itself appears in the message, another ESC byte is stuffed before it.
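A minimal sketch of this byte stuffing mechanism; the FLAG and ESC byte values are assumptions chosen for illustration, not mandated by any particular protocol.

```python
FLAG, ESC = b"~", b"}"    # assumed delimiter and escape bytes, for illustration only

def byte_stuff(payload: bytes) -> bytes:
    """Stuff an ESC before every FLAG or ESC byte appearing in the payload."""
    out = bytearray()
    for b in payload:
        if bytes([b]) in (FLAG, ESC):
            out += ESC                     # escape character stuffed first
        out.append(b)
    return FLAG + bytes(out) + FLAG        # frame delimited by flag bytes

def byte_unstuff(frame: bytes) -> bytes:
    """Receiver side: drop the flags and the stuffed ESC bytes."""
    body = iter(frame[1:-1])
    out = bytearray()
    for b in body:
        if bytes([b]) == ESC:
            b = next(body)                 # the byte after ESC is always data
        out.append(b)
    return bytes(out)

message = b"data ~ with } special bytes"
framed = byte_stuff(message)
assert byte_unstuff(framed) == message
print(framed)
```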

A problem with character-oriented framing is that it adds too much overhead to the message, thus increasing the total size of the frame. Another problem is that the coding systems used in recent times have 16-bit or 32-bit characters, which conflict with 8-bit encoding.

Bit-oriented framing

In bit-oriented framing, data is transmitted as a sequence of bits that can be interpreted in the upper layers both as text and as multimedia data.

The parts of a frame in bit-oriented framing are −

● Frame Header − It contains bits denoting the source and the destination addresses of the frame.


● Payload field − It contains the message to be delivered. It is a variable sequence of bits.

● Trailer − It contains the error detection and error correction bits.

● Flags − Flags are a bit pattern that acts as the frame delimiter, signalling the start and end of the frame. The flag is generally 8 bits long and contains six consecutive 1s. Most protocols use the 8-bit pattern 01111110 as the flag.

Bit-oriented protocols are suited for transmitting any sequence of bits, so there is a chance that the flag bit pattern is present in the message itself. So that the receiver does not consider this as the end of the frame, a bit-stuffing mechanism is used. Whenever a 0 bit is followed by five consecutive 1 bits in the message, an extra 0 bit is stuffed after the five 1s. When the receiver receives the message, it removes the stuffed 0 after each sequence of five 1s. The un-stuffed message is then sent to the upper layers.
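A short sketch of the bit-stuffing rule described above, operating on bit strings for readability; the standard 01111110 flag is used as the delimiter.

```python
FLAG = "01111110"   # flag pattern that delimits the frame

def bit_stuff(bits: str) -> str:
    """After every run of five consecutive 1s in the data, stuff an extra 0."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")     # stuffed bit
            run = 0
    return FLAG + "".join(out) + FLAG

def bit_unstuff(frame: str) -> str:
    """Receiver side: remove the 0 stuffed after each run of five 1s."""
    body = frame[len(FLAG):-len(FLAG)]
    out, run, skip = [], 0, False
    for b in body:
        if skip:                # this is the stuffed 0 -- drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip, run = True, 0
    return "".join(out)

data = "011111101111110"        # contains runs of five or more 1s
framed = bit_stuff(data)
assert bit_unstuff(framed) == data
print(framed)
```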

Multiple access protocol

The data link layer is used in a computer network to transmit data between two devices or nodes. It is divided into two parts: data link control and multiple access resolution/protocol. The upper sublayer is responsible for flow control and error control in the data link layer, and hence it is termed logical (data) link control. The lower sublayer is used to handle and reduce collisions when multiple stations access a channel, and hence it is termed media access control or multiple access resolution.

Data Link Control

A data link control is a reliable channel for transmitting data over a dedicated link using various techniques such
as framing, error control and flow control of data packets in the computer network.

What is a multiple access protocol?

When a sender and receiver have a dedicated link to transmit data packets, data link control is enough to handle the channel. Suppose, however, that there is no dedicated path to communicate or transfer data between two devices. In that case, multiple stations access the channel and may transmit data over it simultaneously, which can create collisions and crosstalk. Hence, a multiple access protocol is required to reduce collisions and avoid crosstalk between the channels.


For example, suppose there is a classroom full of students. When a teacher asks a question, all the students (small channels) in the class start answering at the same time (transferring data simultaneously). Because all the students respond at once, the answers overlap and information is lost. Therefore, it is the responsibility of the teacher (the multiple access protocol) to manage the students and make them answer one at a time.
The types of multiple access protocol, subdivided into different categories, are as follows:

Random Access Protocol

In this protocol, all stations have equal priority to send data over the channel. In a random access protocol, no station depends on or is controlled by another station. Depending on the channel's state (idle or busy), each station transmits its data frame. However, if more than one station sends data over the channel at the same time, there may be a collision or data conflict. Due to the collision, the data frame packets may be lost or changed and hence may not be received correctly by the receiver.

Following are the different methods of random-access protocols for broadcasting frames on the channel.

o Aloha

o CSMA

o CSMA/CD

o CSMA/CA

Controlled Access Protocols

Controlled access protocols allow only one node to send data at a given time. Before initiating transmission, a node seeks information from other nodes to determine which station has the right to send. This avoids collision of messages on the shared channel.

The station can be assigned the right to send by the following three methods−

● Reservation

● Polling


● Token Passing

Channelization Protocols

Channelization is a set of methods by which the available bandwidth is divided among the different nodes for simultaneous data transfer.

The three channelization methods are−

● Frequency division multiple access (FDMA)

● Time division multiple access (TDMA)

● Code division multiple access (CDMA)

ALOHA Random Access Protocol

It is designed for wireless LANs (Local Area Networks) but can also be used on any shared medium to transmit data. Using this method, any station can transmit data across the network whenever a data frame is available for transmission, even while other stations are transmitting.

Aloha Rules

1. Any station can transmit data to a channel at any time.

2. It does not require any carrier sensing.

3. Collisions may occur and data frames may be lost when multiple stations transmit data at the same time.

4. Aloha relies on acknowledgment of the frames; it has no collision detection.

5. It requires retransmission of data after some random amount of time.

Pure Aloha

Whenever data is available for sending at a station, we use pure Aloha. In pure Aloha, each station transmits data to the channel without checking whether the channel is idle or not, so collisions may occur and the data frame can be lost. When a station transmits a data frame to the channel, pure Aloha waits for the receiver's acknowledgment. If the acknowledgment does not arrive from the receiver end within the specified
time, the station waits for a random amount of time, called the backoff time (Tb), and assumes that the frame has been lost or destroyed. It therefore retransmits the frame, and this continues until all the data are successfully transmitted to the receiver.

1. The total vulnerable time of pure Aloha is 2 × Tfr.

2. Maximum throughput occurs when G = 1/2, that is, 18.4%.

3. The throughput for successful transmission of a data frame is S = G × e^(−2G).

As we can see in the figure above, there are four stations accessing a shared channel and transmitting data frames. Some frames collide because most stations send their frames at the same time. Only two frames, frame 1.1 and frame 2.2, are successfully transmitted to the receiver end; the other frames are lost or destroyed. Whenever two frames occupy the shared channel simultaneously, a collision occurs and both suffer damage. Even if only the first bit of a new frame enters the channel before the last bit of an almost-finished frame has left it, both frames are destroyed and both stations must retransmit their data frames.

Slotted Aloha

Slotted Aloha is designed to overcome the low efficiency of pure Aloha, because pure Aloha has a very high probability of frame collision. In slotted Aloha, the shared channel is divided into fixed time intervals called slots. If a station wants to send a frame over the shared channel, the frame can only be sent at the beginning of a slot, and only one frame is allowed to be sent in each slot. If a station is unable to send its data at the beginning of a slot, it has to wait until the beginning of the next slot. However, the possibility of a collision remains when two or more stations try to send a frame at the beginning of the same time slot.

Faculty of CSE Dept.


Designation and Department: - Teaching Assistant, CSE
Brainware University, Kolkata 28
Programme Name and Semester: B.Tech.(AI-ML),5th Semester
Course Name (Course Code): Computer Network (PCC-CSM502)
Class:
Academic Session :2023-2024

1. Maximum throughput occurs in slotted Aloha when G = 1, that is, about 36.8%.

2. The probability of successfully transmitting a data frame in slotted Aloha is S = G × e^(−G).

3. The total vulnerable time required in slotted Aloha is Tfr.

(A short calculation sketch comparing the pure and slotted Aloha throughputs follows.)
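A small sketch evaluating the two throughput formulas at their optimal offered loads (S = G·e^(−2G) for pure Aloha, S = G·e^(−G) for slotted Aloha):

```python
import math

def pure_aloha_throughput(G: float) -> float:
    return G * math.exp(-2 * G)       # S = G * e^(-2G)

def slotted_aloha_throughput(G: float) -> float:
    return G * math.exp(-G)           # S = G * e^(-G)

print(f"pure Aloha,    G = 0.5: S = {pure_aloha_throughput(0.5):.3f}")     # ~0.184
print(f"slotted Aloha, G = 1.0: S = {slotted_aloha_throughput(1.0):.3f}")  # ~0.368
```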

CSMA (Carrier Sense Multiple Access)

Carrier Sense Multiple Access (CSMA) is a media access protocol in which a station senses the traffic on a channel (idle or busy) before transmitting data. If the channel is idle, the station can send data to the channel; otherwise, it must wait until the channel becomes idle. This reduces the chance of a collision on the transmission medium.

CSMA Access Modes

1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel, and if the channel is idle, it immediately sends the data. Otherwise, it keeps sensing the channel continuously and transmits the frame unconditionally as soon as the channel becomes idle.


Non-Persistent: In this access mode of CSMA, before transmitting data each node must sense the channel, and if the channel is idle, it immediately sends the data. Otherwise, the station waits for a random amount of time (it does not sense continuously), and when the channel is then found to be idle, it transmits the frame.

P-Persistent: This is a combination of the 1-persistent and non-persistent modes. In the p-persistent mode, each node senses the channel, and if the channel is idle, it sends a frame with probability p. If the frame is not transmitted (with probability q = 1 − p), the station waits for the next time slot and repeats the process (a rough sketch of this decision loop follows after these access modes).

O-Persistent: In the O-persistent method, a transmission order (priority) is assigned to the stations before transmission on the shared channel. If the channel is found to be idle, each station waits for its assigned turn to transmit its data.
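The following is a rough sketch of the p-persistent decision loop described above; the channel-sensing and transmit functions are caller-supplied stand-ins, not a real API.

```python
import random

def p_persistent_send(channel_idle, transmit, p=0.3, max_slots=100):
    """p-persistent CSMA sketch: when the channel is idle, transmit with
    probability p; otherwise defer to the next time slot with probability 1 - p."""
    for slot in range(max_slots):
        if not channel_idle():
            continue                  # channel busy: keep sensing in later slots
        if random.random() < p:
            transmit()
            return slot               # slot in which the frame was sent
        # with probability q = 1 - p, wait for the next slot and sense again
    raise RuntimeError("no transmission within the allowed number of slots")

# Usage with trivial stand-ins for the channel and the transmitter:
slot = p_persistent_send(channel_idle=lambda: random.random() < 0.7,
                         transmit=lambda: print("frame transmitted"))
print("sent in slot", slot)
```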

CSMA/CD


CSMA/CD (carrier sense multiple access with collision detection) is a network protocol for transmitting data frames that operates within the medium access control (MAC) layer.

A station first senses the shared channel before broadcasting a frame. If the channel is idle, it transmits the frame while monitoring the channel to check whether the transmission is successful; if the frame is received successfully, the station may send its next frame.

If a collision is detected, the station sends a jam signal on the shared channel to make all stations stop transmitting, and terminates its own data transmission. It then waits for a random back-off time before attempting to send the frame again (a sketch of a typical back-off rule follows the points below).

● Although this algorithm detects collisions, it does not reduce the number of collisions.

● It is not appropriate for large networks; performance degrades significantly as more stations are added.
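
The random back-off mentioned above is commonly realised as binary exponential backoff. The sketch below uses the classic 10 Mbps Ethernet constants, which are given here only for illustration and are not taken from the text above:

    import random

    SLOT_TIME = 51.2e-6   # classic 10 Mbps Ethernet slot time, in seconds
    MAX_ATTEMPTS = 16     # the frame is dropped after 16 failed attempts

    def backoff_delay(collision_count):
        """Random back-off delay after the given number of collisions."""
        if collision_count >= MAX_ATTEMPTS:
            raise RuntimeError("too many collisions: frame dropped")
        # Pick k uniformly from 0 .. 2^n - 1, where n is capped at 10.
        k = random.randint(0, 2 ** min(collision_count, 10) - 1)
        return k * SLOT_TIME

    # Example: after the 3rd collision the delay lies between 0 and 7 slot times.
    print(backoff_delay(3))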

Advantages of CSMA/CD:

1. It detects a collision on the shared channel within a very short time.

2. CSMA/CD improves on plain CSMA because it detects collisions rather than only sensing the carrier before transmission.

3. CSMA/CD avoids wasted transmissions: as soon as a collision is detected, the transmission is aborted.

4. It allows each station to use or share the available bandwidth of the channel whenever necessary.

5. It has lower overhead than CSMA/CA.

Disadvantages of CSMA/CD

1. It is not suitable for long-distance networks, because CSMA/CD's efficiency decreases as the distance increases.

2. It can detect collisions only up to about 2500 metres; beyond this range, collisions cannot be detected (a worked example of this limit follows this list).

3. When multiple devices are added to a CSMA/CD network, collision-detection performance degrades.
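
This distance limit comes from the requirement that a sender must still be transmitting when news of a collision at the far end propagates back to it, so a frame must last at least one round-trip time. The following is an added worked example using the classic 10 Mbps Ethernet figures; the bandwidth and propagation speed are assumptions, not values from the text above:

    # Minimum frame size so that the sender is still transmitting when a
    # collision at the far end of the segment propagates back to it.
    bandwidth = 10e6     # 10 Mbps classic Ethernet (assumed)
    distance = 2500      # maximum segment span in metres (from the text)
    speed = 2e8          # assumed propagation speed in copper, about 2/3 of c (m/s)

    round_trip = 2 * distance / speed          # seconds
    min_frame_bits = bandwidth * round_trip    # bits that must still be "on the wire"

    print(f"round trip time    = {round_trip * 1e6:.1f} microseconds")
    print(f"minimum frame size = {min_frame_bits:.0f} bits (about {min_frame_bits / 8:.0f} bytes)")

    # Classic Ethernet rounds this up (allowing for repeater delays and the jam
    # signal) to 512 bits, i.e. the familiar 64-byte minimum frame size.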

CSMA/CA

CSMA/CA (carrier sense multiple access with collision avoidance) is a network protocol for the transmission of data frames that operates within the medium access control (MAC) layer. When a station sends a data frame on the channel, it listens for what arrives on the channel to determine whether the transmission was successful. If the station receives only a single signal (its own), the data frame has been transmitted successfully to the receiver.

If it receives two signals (its own and a second frame that has collided with it), a collision of frames has occurred on the shared channel. In practice, the sender recognises such a collision (or loss) when it does not receive an acknowledgment for the transmitted frame.


Following are the methods used in CSMA/CA to avoid collisions (a brief sketch of the contention-window back-off follows this list):

● Interframe space (IFS): In this method, even when a station finds the channel idle, it does not send the data immediately. Instead, it waits for a short period called the interframe space, or IFS. The length of the IFS is also used to assign priority: a station with a shorter IFS gets access to the channel earlier.

● Contention window: In the contention-window method, the waiting time is divided into slots. When the station is ready to transmit a data frame, it chooses a random number of slots as its waiting time. If the channel becomes busy during the countdown, the station does not restart the entire process; it simply pauses the timer and resumes counting when the channel becomes idle again, sending the frame once the timer expires.

● Acknowledgment: In the acknowledgment method, the sender retransmits the data frame if a positive acknowledgment is not received before its timer expires.
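
The sketch below (added here as an illustration, not part of the original text) shows how a station's contention-window timer counts down only during idle slots and how the window doubles after an unacknowledged transmission; the window bounds are assumed, 802.11-style values:

    import random

    CW_MIN, CW_MAX = 15, 1023   # assumed contention-window bounds (802.11-style)

    def csma_ca_backoff(channel_idle, cw=CW_MIN):
        """Count down a random back-off, decrementing only during idle slots."""
        remaining = random.randint(0, cw)   # random number of idle slots to wait
        while remaining > 0:
            if channel_idle():              # the timer runs only while the channel is idle
                remaining -= 1
            # otherwise the timer stays frozen until the channel becomes idle again
        return True                         # back-off finished: ready to transmit

    def on_missing_ack(cw):
        """Double the contention window after an unacknowledged transmission."""
        return min(2 * cw + 1, CW_MAX)

    # usage: csma_ca_backoff(channel_idle=lambda: random.random() < 0.8)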

Advantages of CSMA/CA

1. When the data packets are large, the chance of collision in CSMA/CA is low.

2. It controls the data packets and sends the data only when the receiver is ready to receive them.

3. It aims to prevent collisions rather than merely detect them on the shared channel.

4. CSMA/CA avoids wasted transmission of data over the channel.

5. It is best suited for wireless transmission in a network.

6. It avoids unnecessary data traffic on the network with the help of the RTS/CTS extension.

Disadvantages of CSMA/CA

1. CSMA/CA can introduce a long waiting time before a data packet is transmitted.

2. It consumes more bandwidth at each station (for example, for acknowledgments and RTS/CTS exchanges).

3. Its efficiency is lower than that of CSMA/CD.

MCQ questions (10 x 1)

Question 1: Which of the following is the primary purpose of the Data Link Layer in the OSI model?

a) Packet routing

b) Error detection and correction

c) End-to-end communication

d) Addressing and identification


Answer: b) Error detection and correction

Question 2: What is the basic unit of data at the Data Link Layer called?

a) Segment

b) Packet

c) Frame

d) Datagram

Answer: c) Frame

Question 3: Which sublayer of the Data Link Layer is responsible for MAC (Media Access Control) addressing and
frame delivery within a local network?

a) Logical Link Control (LLC)

b) Media Access Control (MAC)

c) Physical Layer

d) Network Interface Layer

Answer: b) Media Access Control (MAC)

Question 4: In Ethernet networks, what is the most common method used for resolving collisions?

a) CSMA/CD (Carrier Sense Multiple Access with Collision Detection)

b) CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance)

c) Token passing

d) Circuit switching

Answer: a) CSMA/CD (Carrier Sense Multiple Access with Collision Detection)

Question 5: Which field in an Ethernet frame is used to indicate the type of protocol being carried in the data
field?

a) Source MAC address

b) Destination MAC address

c) Frame Check Sequence (FCS)

d) EtherType

Answer: d) EtherType

Question 6: What is the purpose of the Frame Check Sequence (FCS) in a Data Link Layer frame?

a) To identify the source and destination devices



b) To indicate the type of data carried in the frame

c) To detect errors in the frame's contents

d) To specify the frame's priority level

Answer: c) To detect errors in the frame's contents

Question 7: Which Data Link Layer protocol is commonly used for point-to-point serial communication between
network devices?

a) HDLC (High-Level Data Link Control)

b) PPP (Point-to-Point Protocol)

c) VLAN (Virtual Local Area Network)

d) STP (Spanning Tree Protocol)

Answer: b) PPP (Point-to-Point Protocol)

Question 8: What is the main function of flow control at the Data Link Layer?

a) To manage network congestion

b) To ensure reliable data transmission

c) To establish connections between devices

d) To encrypt data for secure transmission

Answer: b) To ensure reliable data transmission

Question 9: In the Data Link Layer addressing, which address is used to identify the physical network interface
card of a device?

a) IP address

b) MAC address

c) Port number

d) URL (Uniform Resource Locator)

Answer: b) MAC address

Question 10: Which Data Link Layer protocol is used to create a loop-free logical topology in Ethernet networks?

a) ARP (Address Resolution Protocol)

b) RIP (Routing Information Protocol)

c) STP (Spanning Tree Protocol)

d) DHCP (Dynamic Host Configuration Protocol)

Answer: c) STP (Spanning Tree Protocol)



Short questions (10 x 3)

1. What is the primary function of the Data Link Layer?

2. What is the basic unit of data at the Data Link Layer?

3. What are the two primary sublayers of the Data Link Layer, and what do they do?

4. What is the purpose of MAC (Media Access Control) addresses?

5. How does the Data Link Layer handle errors in data transmission?

6. What is CSMA/CD, and in which network technology is it commonly used?

7. What does ARP (Address Resolution Protocol) do at the Data Link Layer?

8. How does the Data Link Layer facilitate point-to-point communication?

9. What is the role of flow control in the Data Link Layer?

10. How does the Data Link Layer address devices in a local network?

Long questions (5 x 5)

Question 1: Explain the concept of framing in the Data Link Layer. How does the Data Link Layer divide data from
the Network Layer into frames for transmission over the network? Describe the structure of a typical frame,
including the purpose of frame delimiters, addressing information, and error-checking mechanisms. Provide a
step-by-step illustration of how framing works and its significance in reliable data transmission.

Question 2: Compare and contrast the Media Access Control (MAC) sublayer and the Logical Link Control (LLC)
sublayer within the Data Link Layer. Discuss their respective functions and responsibilities. How do these
sublayers interact with higher layers of the OSI model, and why is it essential to have separate sublayers for MAC
and LLC functionalities? Provide real-world examples to demonstrate situations where the MAC and LLC sublayers
play distinct roles in network communication.

Question 3: Discuss the significance of error detection and correction in the Data Link Layer. Explain the role of
the Frame Check Sequence (FCS) in identifying and handling errors in frames. How does the receiver verify the
integrity of received frames using the FCS? Describe scenarios where error detection is essential, and elaborate
on how the Data Link Layer deals with erroneous frames to ensure data integrity during transmission.

Question 4: Explore the process of collision detection in the context of shared network media and the
mechanisms employed by the Data Link Layer to manage collisions. Provide a detailed explanation of the Carrier
Sense Multiple Access with Collision Detection (CSMA/CD) protocol. How does CSMA/CD help devices contend for
access to the network medium and detect collisions? Discuss the limitations of CSMA/CD and situations where it
is commonly used.

Question 5: Illustrate the operation of switches and bridges at the Data Link Layer. How do these devices facilitate
efficient data transmission within local area networks (LANs)? Describe the role of MAC address tables in switches
and how they optimize frame forwarding. Explain the purpose of bridges in connecting separate LAN segments
and how they reduce collision domains. Provide a real-world example to showcase the advantages of using
switches and bridges in a network environment.
