Study Material COMPUTER NETWORK PCC-CSM502 Module2
Programme Name and Semester: B.Tech. (AI-ML), 5th Semester
Course Name (Course Code): Computer Network (PCC-CSM502)
Class:
Academic Session: 2023-2024
Study Material
(Computer Network (PCC-CSM502))
_____________________________________________________________________________________________
Table of Contents
CRC
Flow Control
Go-Back-N ARQ
Sliding Window
Piggybacking
Random Access
Block coding
Hamming Distance
Data Link Layer and Medium Access Sub Layer: Multiple access protocols
Pure ALOHA
Slotted ALOHA
CSMA/CD
CSMA/CA
Module 2
When data is transmitted from one device to another, the system cannot guarantee that the data received is identical to the data that was sent. An error is the situation in which the message received at the receiver end is not identical to the message transmitted.
Types of Errors
Single-Bit Error: Only one bit of a given data unit is changed from 1 to 0 or from 0 to 1.
In the above figure, the transmitted message is corrupted in a single bit, i.e., a 0 bit is changed to 1.
Single-bit errors are unlikely in serial data transmission. For example, if the sender transmits at 10 Mbps, each bit lasts only 0.1 μs, so for just one bit to be corrupted the noise must last no longer than 0.1 μs, which is rare; noise normally lasts much longer than this.
Single-bit errors mainly occur in parallel data transmission. For example, if eight wires are used to send the eight bits of a byte and one of the wires is noisy, one bit per byte is corrupted.
Burst Error: When two or more bits are changed from 0 to 1 or from 1 to 0, it is known as a burst error. A burst error is measured from the first corrupted bit to the last corrupted bit.
The duration of noise causing a burst error is longer than that causing a single-bit error. Burst errors are most likely to occur in serial data transmission. The number of affected bits depends on the duration of the noise and the data rate.
Redundancy
The central concept in detecting or correcting errors is redundancy. To be able to detect or correct errors,
we need to send some extra bits with our data. These redundant bits are added by the sender and removed
by the receiver. Their presence allows the receiver to detect or correct corrupted bits.
There are two main methods of error correction. Forward error correction is the process in which the
receiver tries to guess the message by using redundant bits. This is possible, as we see later, if the number
of errors is small. Correction by retransmission is a technique in which the receiver detects the occurrence
of an error and asks the sender to resend the message. Resending is repeated until a message arrives that
the receiver believes is error-free.
Coding
Redundancy is achieved through various coding schemes. The sender adds redundant bits through a
process that creates a relationship between the redundant bits and the actual data bits. The receiver checks
the relationships between the two sets of bits to detect or correct the errors. The ratio of redundant bits to
the data bits and the robustness of the process are important factors in any coding scheme.
In block coding, we divide our message into blocks, each of k bits, called data words. We add r redundant
bits to each block to make the length n = k + r. The resulting n-bit blocks are called code words.
For example, suppose we have a set of data words, each of size k, and a set of code words, each of size n. With k bits we can create 2^k data words; with n bits we can create 2^n code words. Since n > k, the number of possible code words is larger than the number of possible data words.
The block coding process is one-to-one; the same data word is always encoded as the same code word. This means that 2^n − 2^k code words are not used. We call these code words invalid or illegal.
The following figure shows the situation.
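The counts above can be verified directly. A minimal sketch, using k = 2 and r = 1 as assumed illustrative values (not a specific code from this text):

```python
# Illustrative block-coding parameters (assumed).
k, r = 2, 1            # data bits and redundant bits per block
n = k + r              # code word length

datawords = 2 ** k     # 4 possible data words
codewords = 2 ** n     # 8 possible code words
invalid = codewords - datawords  # 2^n - 2^k = 4 invalid (unused) code words
print(datawords, codewords, invalid)
```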
Error Detection
If the following two conditions are met, the receiver can detect a change in the original code word by using the block coding technique.
1. The receiver has (or can find) a list of valid code words.
2. The original code word has changed to an invalid one.
The sender creates code words out of data words by using a generator that applies the rules and
procedures of encoding. Each code word sent to the receiver may change during transmission. If the
received code word is the same as one of the valid code words, the word is accepted; the corresponding
data word is extracted for use.
If the received code word is not valid, it is discarded. However, if the code word is corrupted during
transmission but the received word still matches a valid code word, the error remains undetected. This
type of coding can detect only single errors. Two or more errors may remain undetected.
For example, consider the following table of data words and code words (an even-parity code with k = 2 and n = 3):

Data word    Code word
00           000
01           011
10           101
11           110
Assume the sender encodes the data word 01 as 011 and sends it to the receiver. Consider the following
cases:
1. The receiver receives 011. It is a valid code word. The receiver extracts the data word 01 from it.
2. The code word is corrupted during transmission, and 111 is received (the leftmost bit is corrupted).
This is not a valid code word and is discarded.
3. The code word is corrupted during transmission, and 000 is received (the right two bits are corrupted).
This is a valid code word. The receiver incorrectly extracts the data word 00. Two corrupted bits have
made the error undetectable.
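The three cases can be replayed with a short sketch. The code table is the even-parity code implied by the examples above (01 → 011, 00 → 000); everything else follows from it:

```python
# Even-parity code assumed from the examples: data word -> code word.
code = {"00": "000", "01": "011", "10": "101", "11": "110"}
valid = set(code.values())
decode = {c: d for d, c in code.items()}

def receive(word: str) -> str:
    """Accept a valid code word and extract its data word; discard otherwise."""
    return decode[word] if word in valid else "discarded"

print(receive("011"))  # '01'        -- case 1: accepted correctly
print(receive("111"))  # 'discarded' -- case 2: single error detected
print(receive("000"))  # '00'        -- case 3: two errors slip through undetected
```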
A central concept in coding for error control is the Hamming distance. The Hamming distance between two words (of the same size) is the number of differences between the corresponding bits. We show the Hamming distance between two words x and y as d(x, y).
The Hamming distance can easily be found by applying the XOR operation on the two words and counting the number of 1s in the result. Note that the Hamming distance is a value greater than or equal to zero.
1. The Hamming distance d(000, 011) is 2 because 000 ⊕ 011 is 011 (two 1s).
2. The Hamming distance d(10101, 11110) is 3 because 10101 ⊕ 11110 is 01011 (three 1s).
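The XOR-and-count rule above translates directly into code; a minimal sketch:

```python
def hamming_distance(x: str, y: str) -> int:
    """Count positions where the bits differ (equivalent to XOR, then count 1s)."""
    assert len(x) == len(y), "Hamming distance needs words of equal length"
    return sum(a != b for a, b in zip(x, y))

print(hamming_distance("000", "011"))      # 2
print(hamming_distance("10101", "11110"))  # 3
```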
CRC Generator
o A CRC generator uses modulo-2 division. First, three 0s are appended at the end of the data word (here 11100), because the string of appended 0s is always one bit shorter than the divisor (here 1001, which is 4 bits long).
o The string becomes 11100000, and this string is divided by the divisor 1001.
o The remainder generated from the binary division is known as the CRC remainder. Here the CRC remainder is 111.
o The CRC remainder replaces the appended string of 0s at the end of the data unit, and the final string, 11100111, is sent across the network.
CRC Checker
o The functionality of the CRC checker is similar to that of the CRC generator.
o When the string 11100111 is received at the receiving end, the CRC checker performs modulo-2 division.
o The string is divided by the same divisor, i.e., 1001.
o In this case, the CRC checker generates a remainder of zero, so the data is accepted.
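Both sides of this example can be sketched with plain modulo-2 (XOR) division; the data word 11100 and divisor 1001 are the values used above:

```python
def mod2_div(dividend: str, divisor: str) -> str:
    """XOR-divide a bit string by the divisor; return the remainder bits."""
    rem = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if rem[i] == "1":
            for j, d in enumerate(divisor):
                rem[i + j] = str(int(rem[i + j]) ^ int(d))
    return "".join(rem[-(len(divisor) - 1):])

def crc_generate(data: str, divisor: str) -> str:
    """Sender side: append len(divisor)-1 zeros, divide, append the remainder."""
    return data + mod2_div(data + "0" * (len(divisor) - 1), divisor)

frame = crc_generate("11100", "1001")
print(frame)                    # '11100111' -- remainder 111 appended
print(mod2_div(frame, "1001"))  # '000' -- checker sees zero remainder, accepts
```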
Flow Control
o It is a set of procedures that tells the sender how much data it can transmit before the data
overwhelms the receiver.
o The receiving device has limited speed and limited memory to store the data. Therefore, the
receiving device must be able to inform the sending device to stop the transmission temporarily
before the limits are reached.
o It requires a buffer, a block of memory for storing the information until it is processed.
Error control in data link layer is the process of detecting and correcting data frames that have been
corrupted or lost during transmission.
In case of lost or corrupted frames, the receiver does not receive the correct data-frame and sender is
ignorant about the loss. Data link layer follows a technique to detect transit errors and take necessary
actions, which is retransmission of frames whenever error is detected or frame is lost. The process is
called Automatic Repeat Request (ARQ).
● Acknowledgment
o Positive ACK
o Negative ACK
● Retransmission
Stop-and-wait Protocol
In the Stop-and-Wait method, the sender waits for an acknowledgement after every frame it sends. Only when an acknowledgement is received is the next frame sent. This process of alternately sending a frame and waiting continues until the sender transmits an EOT (End of Transmission) frame.
When the ACK arrives, the sender sends the next frame. It is Stop-and-Wait Protocol because the sender
sends one frame, stops until it receives confirmation from the receiver (okay to go ahead), and then sends
the next frame. We still have unidirectional communication for data frames, but auxiliary ACK frames
(simple tokens of acknowledgment) travel from the other direction.
Sender side
Rule 1: Sender sends one data packet at a time.
Rule 2: Sender sends the next packet only when it receives the acknowledgment of the previous packet.
Therefore, the idea of stop and wait protocol in the sender's side is very simple, i.e., send one packet at a
time, and do not send another packet before receiving the acknowledgment.
Receiver side
Rule 1: Receive and then consume the data packet.
Rule 2: When the data packet is consumed, receiver sends the acknowledgment to the sender.
Therefore, the idea of stop and wait protocol in the receiver's side is also very simple, i.e., consume the
packet, and once the packet is consumed, the acknowledgment is sent. This is known as a flow control
mechanism.
The above figure shows the working of the Stop-and-Wait protocol. The sender sends a data packet, and will not send the second packet without receiving the acknowledgment of the first. The receiver sends an acknowledgment for each data packet it receives. Once the acknowledgment is received, the sender sends the next packet. This process continues until all the packets are sent.
The main advantage of this protocol is its simplicity, but it has some disadvantages as well. For example, if there are 1000 data packets to be sent, they cannot all be sent at once: in Stop-and-Wait, one packet is sent at a time.
Faculty of CSE Dept.
Designation and Department: Teaching Assistant, CSE
Brainware University, Kolkata
1. Problems occur due to lost data
Suppose the sender sends the data and the data is lost. The receiver waits a long time for the data, and since the data never arrives, it does not send any acknowledgment. Since the sender does not receive any acknowledgment, it will not send the next packet. This problem occurs due to the lost data.
In this case, two problems occur:
o Sender waits for an infinite amount of time for an acknowledgment.
o Receiver waits for an infinite amount of time for the data.
2. Problems occur due to lost acknowledgment
Suppose the sender sends the data and it is received by the receiver. On receiving the packet, the receiver sends an acknowledgment, but the acknowledgment is lost in the network, so the sender never receives it. The sender therefore cannot send the next packet, since in Stop-and-Wait the next packet cannot be sent until the acknowledgment of the previous packet is received.
In this case, one problem occurs:
o Sender waits for an infinite amount of time for an acknowledgment.
3. Problem due to the delayed data or acknowledgment
Suppose the sender sends the data and it is received by the receiver. The receiver then sends the acknowledgment, but the acknowledgment arrives after the timeout period on the sender's side. Because it arrives late, the acknowledgment can be wrongly taken as the acknowledgment of some other data packet.
Network Performance
Network performance is defined by the overall quality of service provided by a network. This
encompasses numerous parameters and measurements that must be analyzed collectively to assess a given
network. Network performance measurement is therefore defined as the overall set of processes and tools
that can be used to quantitatively and qualitatively assess network performance and provide actionable
data to remediate any network performance issues.
Throughput is a metric often associated with the manufacturing industry and is most commonly defined
as the amount of material or items passing through a particular system or process. A common question in
the manufacturing industry is how many of product X were produced today, and did this number meet
expectations. For network performance measurement, throughput is defined in terms of the amount of
data or number of data packets that can be delivered in a pre-defined time frame.
Bandwidth, usually measured in bits per second, is a characterization of the amount of data that can be
transferred over a given time period. Bandwidth is therefore a measure of capacity rather than speed. For
example, a bus may be capable of carrying 100 passengers (bandwidth), but the bus may actually only
transport 85 passengers (throughput).
Latency
With regards to network performance measurement, latency is simply the amount of time it takes for data
to travel from one defined location to another. This parameter is sometimes referred to as delay. Ideally,
the latency of a network is as close to zero as possible. The absolute limit or governing factor for latency
is the speed of light, but packet queuing in switched networks and the refractive index of fiber optic
cabling are examples of variables that can increase latency.
● Propagation delay
Propagation delay is the time that it takes for a bit to reach from one end of a link to the other. The
delay depends on the distance (D) between the sender and the receiver, and the propagation speed
(S) of the wave signal. It is calculated as: D/S
● Transmission delay
Transmission delay refers to the time it takes to transmit a data packet onto the outgoing link. The
delay is determined by the size of the packet and the capacity of the outgoing link. If a packet
consists of L bits and the link has a capacity of B bits per second, then the transmission delay is
equal to: L/B
● Queuing delay
Queuing delay refers to the time that a packet waits to be processed in the buffer of a switch. The
delay is dependent on the arrival rate of the incoming packets, the transmission capacity of the
outgoing link, and the nature of the network’s traffic.
● Processing delay
Processing delay is the time taken by a switch to process the packet header. The delay depends on
the processing speed of the switch.
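The propagation and transmission formulas above can be put to work on a small example; the link length, signal speed, packet size, and capacity below are assumed illustrative values, not figures from this text:

```python
# Assumed values for illustration only.
D = 2_000_000      # link length in metres (2,000 km)
S = 2e8            # propagation speed in metres/second (~2/3 the speed of light)
L = 1000 * 8       # packet size in bits (1,000 bytes)
B = 1_000_000      # link capacity in bits/second (1 Mbps)

propagation_delay = D / S   # D/S = 0.01 s
transmission_delay = L / B  # L/B = 0.008 s
print(propagation_delay, transmission_delay)
```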
Packet Loss
With regards to network performance measurement, packet loss refers to the number of packets transmitted from one point to another that fail to arrive. This metric can be quantified by capturing traffic data on both ends, then identifying missing packets and/or retransmissions of packets.
Packet loss can be caused by network congestion, router performance and software issues, among other
factors.
Jitter
Jitter is defined as the variation in time delay for the data packets sent over a network. This variable
represents an identified disruption in the normal sequencing of data packets. Jitter is related to latency,
since the jitter manifests itself in increased or uneven latency between data packets, which can disrupt
network performance and lead to packet loss and network congestion. Although some level of jitter is to
be expected and can usually be tolerated, quantifying network jitter is an important aspect of
comprehensive network performance measurement.
Latency vs Throughput
While the concepts of throughput and bandwidth are sometimes misunderstood, the same confusion is
common between the terms latency and throughput. Although these parameters are closely related, it is
important to understand the difference between the two.
In relation to network performance measurement, throughput is a measurement of actual system
performance, quantified in terms of data transfer over a given time.
Latency is a measurement of the delay in transfer time, meaning it will directly impact the throughput, but
is not synonymous with it. The latency might be thought of as an unavoidable bottleneck on an assembly
line, such as a test process, measured in units of time. Throughput, on the other hand, is measured in units
completed which is inherently influenced by this latency.
Networks also carry overhead traffic for functions such as intrusion detection, all of which consume valuable network bandwidth and can impact performance.
Round-trip delay (RTD) or round-trip time (RTT) is the amount of time it takes for a signal to be sent plus the amount of time it takes for an acknowledgement of that signal to be received. This time delay includes the propagation times for the paths between the two communication endpoints. In the context of computer networks, the signal is typically a data packet. RTT is also known as ping time, and can be determined with the ping command.
End-to-end delay is the length of time it takes for a signal to travel in one direction, and is often approximated as half the RTT.
Bandwidth-delay product is a measure of how many bits can fill up a network link. It gives the maximum amount of data that the sender can transmit before waiting for an acknowledgment; thus it is the maximum amount of unacknowledged data.
Measurement
The bandwidth-delay product is calculated as the product of the link capacity of the channel and the round-trip delay time of transmission.
The link capacity of a channel is the number of bits transmitted per second; hence its unit is bps (bits per second).
The round-trip delay time is the sum of the time taken for a signal to travel from the sender to the receiver and the time taken for its acknowledgment to reach the sender from the receiver. The round-trip delay includes all propagation delays in the links between the sender and the receiver.
The unit of bandwidth delay product is bits or bytes.
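A worked example of the product; the 10 Mbps capacity and 40 ms round-trip time are assumed illustrative values:

```python
link_capacity = 10_000_000   # 10 Mbps link capacity (assumed)
rtt_ms = 40                  # 40 ms round-trip delay (assumed)

# Bandwidth-delay product = capacity x round-trip time.
bdp_bits = link_capacity * rtt_ms // 1000   # 400,000 bits can be in flight
bdp_bytes = bdp_bits // 8                   # 50,000 bytes of unacknowledged data
print(bdp_bits, bdp_bytes)
```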
To detect and correct corrupted frames, we need to add redundancy bits to our data frame. When the
frame arrives at the receiver site, it is checked and if it is corrupted, it is silently discarded. The detection
of errors in this protocol is manifested by the silence of the receiver.
Lost frames are more difficult to handle than corrupted ones. In our previous protocols, there was no way
to identify a frame. The received frame could be the correct one, or a duplicate, or a frame out of order.
The solution is to number the frames. When the receiver receives a data frame that is out of order, this means that frames were either lost or duplicated.
The lost frames need to be resent in this protocol. If the receiver does not respond when there is an error,
how can the sender know which frame to resend? To remedy this problem, the sender keeps a copy of the
sent frame. At the same time, it starts a timer. If the timer expires and there is no ACK for the sent frame,
the frame is resent, the copy is held, and the timer is restarted.
Since the protocol uses the stop-and-wait mechanism, there is only one specific frame that needs an ACK at any time. Error correction in Stop-and-Wait ARQ is done by keeping a copy of the sent frame and retransmitting it when the timer expires.
In Stop-and-Wait ARQ, we use sequence numbers to number the frames. The sequence numbers
are based on modulo-2 arithmetic.
In Stop-and-Wait ARQ, the acknowledgment number always announces in modulo-2 arithmetic the
sequence number of the next frame expected.
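The mechanism can be sketched in a few lines. This is a simulation, not a real protocol stack: losses are modelled by a fixed set of "lost attempts", and the function name and loss pattern are illustrative assumptions:

```python
def stop_and_wait_arq(n_frames, lost_attempts):
    """Sketch of Stop-and-Wait ARQ: one outstanding frame, modulo-2 sequence
    numbers, and retransmission of the kept copy on (simulated) timeout."""
    seq, frame, attempt = 0, 0, 0
    trace = []
    while frame < n_frames:
        trace.append(f"send frame {frame} seq={seq}")
        if attempt in lost_attempts:    # frame (or its ACK) was lost
            trace.append("timeout -> resend")
        else:                           # ACK arrives, announcing next expected seq
            seq = 1 - seq               # modulo-2 arithmetic: 0, 1, 0, 1, ...
            frame += 1
        attempt += 1
    return trace

for line in stop_and_wait_arq(3, lost_attempts={1}):
    print(line)
```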
The sliding window is a technique for sending multiple frames at a time. It controls the flow of data packets between two devices where reliable and sequential delivery of data frames is needed. In this technique, each frame is assigned a sequence number. The sequence numbers are used to find missing data at the receiver end, and to avoid accepting duplicate data.
To improve the efficiency of transmission (filling the pipe), multiple frames must be in transit while waiting for acknowledgment. In other words, we need to let more than one frame be outstanding to keep the channel busy while the sender is waiting for acknowledgment. The first such protocol is called Go-Back-N Automatic Repeat Request (ARQ). In this protocol we can send several frames before receiving acknowledgments; we keep a copy of these frames until the acknowledgments arrive.
In the Go-Back-N protocol, the sequence numbers are modulo 2^m, where m is the size of the
sequence number field in bits.
The sequence numbers range from 0 to 2^m − 1. For example, if m is 4, the only sequence numbers
are 0 through 15 inclusive.
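The wrap-around of sequence numbers is just modular arithmetic, as a quick sketch shows:

```python
m = 4                       # size of the sequence-number field in bits
seq_space = 2 ** m          # 2^m = 16 sequence numbers: 0 .. 15

# Frame numbers wrap around modulo 2^m:
seqs = [frame % seq_space for frame in range(20)]
print(seqs)   # 0..15, then the numbers wrap back to 0, 1, 2, 3
```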
The sender window at any time divides the possible sequence numbers into four regions. The first region,
from the far left to the left wall of the window, defines the sequence numbers belonging to frames that are
already acknowledged. The sender does not worry about these frames and keeps no copies of them.
The second region, colored in Figure (a), defines the range of sequence numbers belonging to the frames
that are sent and have an unknown status. The sender needs to wait to find out if these frames have been
received or were lost. We call these outstanding frames.
The third range, white in the figure, defines the range of sequence numbers for frames that can be sent;
however, the corresponding data packets have not yet been received from the network layer.
Finally, the fourth region defines sequence numbers that cannot be used until the window slides.
Timers
Although there can be a timer for each frame that is sent, in our protocol we use only one. The reason is
that the timer for the first outstanding frame always expires first; we send all outstanding frames when
this timer expires.
Acknowledgment
The receiver sends a positive acknowledgment if a frame has arrived safe and sound and in order. If a
frame is damaged or is received out of order, the receiver is silent and will discard all subsequent frames
until it receives the one it is expecting. The silence of the receiver causes the timer of the
unacknowledged frame at the sender side to expire. This, in turn, causes the sender to go back and resend
all frames, beginning with the one with the expired timer. The receiver does not have to acknowledge
each frame received. It can send one cumulative acknowledgment for several frames.
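The receiver's side of this rule can be sketched as follows; the frame numbers and the lost frame are illustrative, and the function name is an assumption of this sketch:

```python
def gbn_receiver(arriving, expected=0):
    """Go-Back-N receiver sketch: accept only the in-order frame; silently
    discard everything else (no buffering of out-of-order frames).
    A cumulative ACK would announce `expected`, the next frame wanted."""
    accepted = []
    for f in arriving:
        if f == expected:
            accepted.append(f)
            expected += 1
        # damaged or out-of-order frames are discarded without acknowledgment
    return accepted, expected

# Frame 1 was lost in transit, so frames 2 and 3 arrive out of order:
print(gbn_receiver([0, 2, 3]))  # ([0], 1) -- sender must go back and resend 1, 2, 3
```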
Resending a Frame
When the timer expires, the sender resends all outstanding frames. That is why the protocol is called Go-
Back-N ARQ.
In Go-Back-N ARQ, the receiver keeps track of only one variable, and there is no need to buffer out-of-order
frames; they are simply discarded. However, this protocol is very inefficient over a noisy link.
On a noisy link a frame has a higher probability of damage, which means the resending of multiple frames. This
resending uses up bandwidth and slows down the transmission.
For noisy links, there is another mechanism that does not resend N frames when just one frame is damaged; only
the damaged frame is resent. This mechanism is called Selective Repeat ARQ.
It is more efficient for noisy links, but the processing at the receiver is more complex.
One main difference from Go-Back-N is the number of timers. Here, each frame sent or resent needs its own
timer, which means that the timers need to be numbered (0, 1, 2, and 3). The timer for frame 0 starts at the
first request but stops when the ACK for this frame arrives.
There are two conditions for the delivery of frames to the network layer: First, a set of consecutive frames must
have arrived. Second, the set starts from the beginning of the window. After the first arrival, there was only one
frame and it started from the beginning of the window. After the last arrival, there are three frames and the first
one starts from the beginning of the window.
The next point is about the ACKs. Notice that only two ACKs are sent here.
The first one acknowledges only the first frame; the second one acknowledges three frames. In Selective Repeat,
ACKs are sent when data are delivered to the network layer. If the data belonging to n frames are delivered in one
shot, only one ACK is sent for all of them.
Problem: In the SR protocol, suppose frames 0 through 4 have been transmitted. Now imagine that frame 0 times
out, frame 5 (a new frame) is transmitted, frame 1 times out, frame 2 times out, and frame 6 (another new
frame) is transmitted. In what order do the outstanding frames sit in the sender's buffer, from the earliest
sent to the most recently sent?
1. 341526
2. 3405126
3. 0123456
4. 654321
Solution: In the SR protocol, only the required frame is retransmitted, not the entire window. The buffer
states below list frames from the most recently sent to the earliest sent; a timed-out frame is retransmitted
and therefore moves to the front.
Step-01: 4, 3, 2, 1, 0
Step-02: 0, 4, 3, 2, 1
Step-03: 5, 0, 4, 3, 2, 1
Step-04: 1, 5, 0, 4, 3, 2
Step-05: 2, 1, 5, 0, 4, 3
Step-06: 6, 2, 1, 5, 0, 4, 3
Reading Step-06 from the earliest sent to the most recent gives 3, 4, 0, 5, 1, 2, 6, i.e., option 2.
Logical Link Control (LLC): This upper sublayer defines the software processes that provide services to the
network layer protocols. It places information in the frame that identifies which network layer protocol is being
used for the frame. This information allows multiple Layer 3 protocols, such as IPv4 and IPv6, to utilize the same
network interface and media.
Media Access Control (MAC): This lower sublayer defines the media access processes performed by the
hardware. It provides data link layer addressing and delimiting of data according to the physical signaling
requirements of the medium and the type of data link layer protocol in use.
Separating the data link layer into sublayers allows for one type of frame defined by the upper layer to access
different types of media defined by the lower layer. Such is the case in many LAN technologies, including
Ethernet.
The figure illustrates how the data link layer is separated into the LLC and MAC sublayers. The LLC communicates
with the network layer while the MAC sublayer allows various network access technologies. For instance, the
MAC sublayer communicates with Ethernet LAN technology to send and receive frames over copper or fiber-optic
cable. The MAC sublayer also communicates with wireless technologies such as Wi-Fi and Bluetooth to send and
receive frames wirelessly.
Frames are the units of digital transmission, particularly in computer networks and telecommunications. Frames
are comparable to the packets of energy called photons in the case of light energy. Frames are also used
continuously in the time-division multiplexing process.
A point-to-point connection between two computers or devices consists of a wire in which data is transmitted
as a stream of bits. However, these bits must be framed into discernible blocks of information; framing is the
function that does this.
Framing is a function of the data link layer. It provides a way for a sender to transmit a set of bits that are
meaningful to the receiver. Ethernet, token ring, frame relay, and other data link layer technologies have their
own frame structures. Frames have headers that contain information such as error-checking codes.
The data link layer extracts the message from the sender and provides it to the receiver by attaching the
sender's and receiver's addresses. The advantage of using frames is that data is broken up into recoverable
chunks that can easily be checked for corruption.
Problems in Framing –
● Detecting the start of a frame: When a frame is transmitted, every station must be able to detect it. A
station detects a frame by looking for a special sequence of bits that marks the beginning of the frame, i.e.,
the SFD (Starting Frame Delimiter).
● How a station detects a frame: Every station listens to the link for the SFD pattern through a sequential
circuit. If the SFD is detected, the sequential circuit alerts the station, which then checks the destination
address to accept or reject the frame.
Types of Framing
Framing can be of two types, fixed sized framing and variable sized framing.
Fixed-sized Framing: Here the size of the frame is fixed, so the frame length itself acts as the delimiter of
the frame. Consequently, it does not require additional boundary bits to identify the start and end of the
frame. Example − ATM cells.
Variable-sized Framing: Here, the size of each frame to be transmitted may be different, so additional
mechanisms are needed to mark the end of one frame and the beginning of the next. It is used in local area
networks.
Framing is a Data Link layer function whereby the packets from the Network Layer are encapsulated into frames.
The data frames can be of fixed length or variable length. In variable - length framing, the size of each frame to be
transmitted may be different. So, a pattern of bits is used as a delimiter to mark the end of one frame and the
beginning of the next frame.
In character - oriented framing, data is transmitted as a sequence of bytes, from an 8-bit coding system like ASCII.
The parts of a frame in a character - oriented framing are −
● Frame Header − It contains the source and the destination addresses of the frame in form of bytes.
● Payload field − It contains the message to be delivered. It is a variable sequence of data bytes.
● Trailer − It contains the bytes for error detection and error correction.
● Flags − Flags are the frame delimiters signalling the start and end of the frame. Each flag is a 1-byte,
protocol-dependent special character.
Character-oriented protocols are suited for transmission of text. The flag is chosen as a character that is not
used for text encoding. However, if the protocol is used for transmitting multimedia messages, there is a
chance that the pattern of the flag byte is present in the message byte sequence. So that the receiver does not
consider the pattern as the end of the frame, a byte-stuffing mechanism is used. Here, a special byte called
the escape character (ESC) is stuffed before every byte in the message that has the same pattern as the flag
byte. If the ESC pattern itself appears in the message, another ESC byte is stuffed before it.
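Byte stuffing can be sketched in a few lines. The concrete values 0x7E (flag) and 0x7D (ESC) are assumptions borrowed from HDLC-style framing; the protocol in question may use other characters:

```python
FLAG, ESC = 0x7E, 0x7D   # assumed flag/escape byte values (HDLC-style)

def byte_stuff(payload: bytes) -> bytes:
    """Stuff ESC before any flag or ESC byte, then wrap the frame in flags."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Drop the flags, and drop each ESC while keeping the byte it protects."""
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1           # skip the stuffing byte
        out.append(body[i])
        i += 1
    return bytes(out)

msg = bytes([0x41, 0x7E, 0x7D, 0x42])        # payload containing flag and ESC
assert byte_unstuff(byte_stuff(msg)) == msg  # round-trip is lossless
```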
A problem with character-oriented framing is that it adds considerable overhead to the message, thus increasing
the total size of the frame. Another problem is that the coding systems used in recent times have 16-bit or
32-bit characters that conflict with 8-bit encoding.
Bit-oriented framing
In bit-oriented framing, data is transmitted as a sequence of bits that can be interpreted in the upper layers both
as text as well as multimedia data.
● Frame Header − It contains bits denoting the source and the destination addresses of the frame.
● Flags − Flags are a bit pattern that acts as the frame delimiter, signalling the start and end of the frame.
It is generally 8 bits containing six consecutive 1s. Most protocols use the 8-bit pattern 01111110 as the
flag.
Bit-oriented protocols are suited for transmitting any sequence of bits, so there is a chance that the pattern
of the flag bits is present in the message. So that the receiver does not consider this as the end of the
frame, a bit-stuffing mechanism is used. Whenever five consecutive 1 bits follow a 0 bit in the message, an
extra 0 bit is stuffed at the end of the five 1s. When the receiver receives the message, it removes the
stuffed 0 after each sequence of five 1s. The un-stuffed message is then passed to the upper layers.
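A minimal sketch of bit stuffing on '0'/'1' strings, following the standard rule of stuffing a 0 after every run of five consecutive 1s so the flag 01111110 cannot appear in the payload:

```python
def bit_stuff(bits: str) -> str:
    """After five consecutive 1s, stuff a 0 so the flag 01111110 cannot appear."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # the stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        i += 1
        if run == 5:
            i += 1            # skip the stuffed 0
            run = 0
    return "".join(out)

data = "0111111101111110"     # contains runs that would mimic the flag
print(bit_stuff(data))
assert bit_unstuff(bit_stuff(data)) == data   # round-trip is lossless
```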
The data link layer is used in a computer network to transmit data between two devices or nodes. It is divided
into two sublayers: data link control and multiple access resolution/protocol. The upper sublayer is
responsible for flow control and error control, and hence is termed logical link control. The lower sublayer is
used to handle and reduce collisions caused by multiple access on a channel, and hence is termed media access
control or multiple access resolution.
A data link control is a reliable channel for transmitting data over a dedicated link using various techniques such
as framing, error control and flow control of data packets in the computer network.
When a sender and receiver have a dedicated link for transmitting data packets, data link control is enough to
handle the channel. If there is no dedicated path between the two devices, however, multiple stations access the
channel and transmit data over it simultaneously, which may create collisions and crosstalk. Hence, a multiple
access protocol is required to reduce collisions and avoid crosstalk on the channel.
For example, suppose there is a classroom full of students. When a teacher asks a question, all the students
(small channels) in the class start answering at the same time (transferring data simultaneously). Because they
all respond at once, the answers overlap and information is lost. It is therefore the responsibility of the teacher
(the multiple access protocol) to manage the students so that only one answers at a time.
Multiple access protocols are subdivided into the following categories: random access, controlled access, and
channelization.
Random Access Protocol
In this protocol, all stations have equal priority to send data over the channel. In a random access protocol, no
station depends on another station, nor does any station control another. Depending on the channel's state (idle
or busy), each station transmits its data frame. However, if more than one station sends data over the channel at
the same time, a collision or data conflict may occur. Due to the collision, data frames may be lost or corrupted,
and so they are not received correctly at the receiver end.
Following are the different methods of random-access protocols for broadcasting frames on the channel.
o Aloha
o CSMA
o CSMA/CD
o CSMA/CA
Controlled Access Protocols
Controlled access protocols allow only one node to send data at a given time. Before initiating transmission, a
node seeks information from other nodes to determine which station has the right to send. This avoids collision
of messages on the shared channel.
The station can be assigned the right to send by the following three methods −
● Reservation
● Polling
● Token Passing
Channelization Protocols
Channelization is a set of methods by which the available bandwidth is divided among the different nodes for
simultaneous data transfer.
Aloha
ALOHA was originally designed for wireless LANs (Local Area Networks) but can also be used on any shared
medium to transmit data. Using this method, any station can transmit data across the network whenever it has a
data frame available for transmission.
Aloha Rules
3. Data frames may collide and be lost when multiple stations transmit at the same time.
Pure Aloha
Pure ALOHA is used whenever a station has data available to send over the channel. In pure ALOHA, each
station transmits its data on the channel without checking whether the channel is idle, so collisions may occur
and data frames can be lost. After transmitting a data frame on the channel, the station waits for the
receiver's acknowledgment. If no acknowledgment arrives within the specified
time, the station assumes the frame has been lost or destroyed and waits for a random amount of time, called
the backoff time (Tb), before sending again. It retransmits the frame until the data is successfully delivered to
the receiver.
As we can see in the figure above, four stations access a shared channel and transmit data frames. Some
frames collide because most stations send their frames at the same time: only frame 1.1 and frame 2.2 are
successfully delivered to the receiver, while the other frames are lost or destroyed. Whenever two frames
occupy the shared channel at the same time, a collision occurs and both frames are damaged. Even if only the
first bit of a new frame overlaps with the last bit of a frame that is almost finished, both frames are destroyed
and both stations must retransmit their data frames.
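The overlap rule described above can be sketched directly: a frame sent at time t occupies the channel for one frame time T, so it collides with any other frame whose start time lies within T of its own (a vulnerable period of 2T). The start times and frame time below are hypothetical values for illustration:

```python
def colliding_frames(start_times, frame_time):
    """Return the indices of frames that suffer a collision in pure ALOHA.

    A frame sent at time t occupies the channel during [t, t + frame_time);
    any overlap with another frame damages both frames."""
    collided = set()
    for i, ti in enumerate(start_times):
        for j, tj in enumerate(start_times):
            if i != j and abs(ti - tj) < frame_time:
                collided.update((i, j))
    return collided

# Frames 0 and 1 overlap; frame 2 starts well clear of both.
print(colliding_frames([0.0, 0.5, 3.0], frame_time=1.0))  # → {0, 1}
```

Even a tiny overlap (for example start times 0.0 and 0.99 with frame time 1.0) destroys both frames, matching the "first bit overlaps last bit" case in the text.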
Slotted Aloha
Slotted ALOHA was designed to improve on pure ALOHA's efficiency, because pure ALOHA has a very high
probability of frame collision. In slotted ALOHA, the shared channel is divided into fixed time intervals called
slots. A station that wants to send a frame on the shared channel may begin transmitting only at the beginning
of a slot, and only one frame may be sent in each slot. If a station misses the beginning of a slot, it must wait
until the beginning of the next slot. However, a collision can still occur when two or more stations try to send a
frame at the beginning of the same time slot.
2. The probability of successfully transmitting a data frame in slotted ALOHA is
S = G * e ^ -G (maximum throughput ≈ 36.8% at G = 1), compared with S = G * e ^ -2G for pure ALOHA
(maximum ≈ 18.4% at G = 0.5).
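These two throughput formulas can be evaluated directly; G is the average number of frames generated per frame time:

```python
import math

def throughput_pure(G):
    """Pure ALOHA: the vulnerable time is two frame times, so S = G * e^(-2G)."""
    return G * math.exp(-2 * G)

def throughput_slotted(G):
    """Slotted ALOHA: the vulnerable time is one slot, so S = G * e^(-G)."""
    return G * math.exp(-G)

# Pure ALOHA peaks at G = 0.5 (~18.4%); slotted ALOHA at G = 1 (~36.8%).
print(round(throughput_pure(0.5), 3))    # → 0.184
print(round(throughput_slotted(1.0), 3)) # → 0.368
```

Halving the vulnerable period is exactly what doubles the maximum throughput of slotted ALOHA over pure ALOHA.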
CSMA
Carrier sense multiple access (CSMA) is a media access protocol in which a station senses the traffic on the
channel (idle or busy) before transmitting data. If the channel is idle, the station can send its data on the
channel; otherwise, it must wait until the channel becomes idle. Sensing the carrier in this way reduces the
chance of a collision on the transmission medium.
1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel; if the channel is
idle, it immediately sends the data. Otherwise, it keeps sensing the channel continuously and transmits the
frame as soon as the channel becomes idle.
Non-Persistent: In this access mode of CSMA, each node senses the channel before transmitting; if the channel
is idle, it immediately sends the data. Otherwise, the station waits for a random time (rather than sensing
continuously), senses the channel again, and transmits the frame when the channel is found idle.
P-Persistent: This mode combines the 1-persistent and non-persistent modes. Each node senses the channel,
and if the channel is idle, it sends a frame with probability p. With probability q = 1 − p it defers, waits for the
next time slot, and repeats the process.
O-Persistent: In the O-persistent method, a transmission order (priority) is assigned to each station before any
frame is sent on the shared channel. When the channel is found idle, each station waits for its assigned turn to
transmit its data.
CSMA/ CD
CSMA/CD (carrier sense multiple access with collision detection) is a network protocol for transmitting data
frames that operates in the medium access control layer. A station first senses the shared channel before
broadcasting; if the channel is idle, it transmits a frame while monitoring whether the transmission succeeds. If
the frame is received successfully, the station can send its next frame.
If a collision is detected, the station sends a jam (stop) signal on the shared channel to terminate the data
transmission. It then waits for a random time before sending the frame on the channel again.
● Though this algorithm detects collisions, it does not reduce the number of collisions.
● It is not appropriate for large networks; performance degrades exponentially as more stations are
added.
Advantages of CSMA/CD
1. It detects collisions on a shared channel within a very short time.
4. When necessary, it allows each station to use or share the same amount of bandwidth.
Disadvantages of CSMA/CD
1. It is not suitable for long-distance networks, because CSMA/CD's efficiency decreases as the distance
increases.
2. It can detect collisions only up to about 2500 meters; beyond this range, collisions cannot be detected.
3. When multiple devices are added to a CSMA/CD network, collision detection performance is reduced.
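The random wait after a collision is commonly implemented as truncated binary exponential backoff; the sketch below assumes the classic 51.2 µs slot time of 10 Mbps Ethernet, which is an assumption, not something stated in the text:

```python
import random

def backoff_time(attempt: int, slot_time: float = 51.2e-6, max_exp: int = 10) -> float:
    """Truncated binary exponential backoff used by CSMA/CD after a collision.

    After the n-th collision the station waits k * slot_time, where k is
    drawn uniformly from 0 .. 2^min(n, max_exp) - 1. The 51.2 us slot time
    is the classic 10 Mbps Ethernet value (an illustrative assumption here)."""
    k = random.randint(0, 2 ** min(attempt, max_exp) - 1)
    return k * slot_time

# After the first collision k is 0 or 1; after the third, k is 0..7.
print(backoff_time(1) in (0.0, 51.2e-6))  # → True
```

Doubling the window after every collision spreads the retransmissions of competing stations apart, which is why repeated collisions between the same pair of stations quickly become unlikely.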
CSMA/ CA
CSMA/CA (carrier sense multiple access with collision avoidance) is a network protocol for the carrier
transmission of data frames that works in the medium access control layer. When a station sends a data frame
on the channel, it listens for feedback to check whether the channel is clear. If the station receives only a single
(its own) signal, the data frame has been successfully transmitted to the receiver. But if it detects two signals
(its own and another station's), the frames have collided on the shared channel. The sender thus detects a
collision from the signal it receives back.
Following are the methods used in the CSMA/ CA to avoid the collision:
● Interframe space: In this method, the station waits for the channel to become idle, and when it finds the
channel idle, it does not send the data immediately. Instead, it waits for a period of time called the
interframe space (IFS). The length of the IFS is often used to define the priority of a station.
● Contention window: In the contention window method, the total time is divided into slots. When the
sending station is ready to transmit a data frame, it chooses a random number of slots as its wait time. If
the channel becomes busy during the wait, the station does not restart the entire process; it only pauses
its timer and resumes it when the channel becomes idle again, transmitting when the timer expires.
● Acknowledgment: In the acknowledgment method, the sender station retransmits the data frame on the
shared channel if an acknowledgment is not received before its timer expires.
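Combining the IFS and contention-window steps above, the total deferral before a CSMA/CA transmission can be sketched as follows; the IFS, contention-window size, and slot time below are illustrative values, not taken from any specific standard:

```python
import random

def csma_ca_wait(ifs: float, cw: int, slot_time: float) -> float:
    """Total deferral before a CSMA/CA transmission (simplified sketch).

    The station first waits the interframe space (IFS), then a random
    backoff of 0 .. cw-1 slots drawn from the contention window.
    All parameter values are assumptions for illustration."""
    backoff_slots = random.randint(0, cw - 1)
    return ifs + backoff_slots * slot_time

wait = csma_ca_wait(ifs=50e-6, cw=16, slot_time=20e-6)
print(50e-6 <= wait <= 50e-6 + 15 * 20e-6)  # → True
```

A shorter IFS gives a station effectively higher priority, since it may begin its backoff countdown before stations that must wait a longer IFS.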
Advantage of CSMA/ CA
1. When the size of data packets is large, the chance of collision in CSMA/CA is smaller.
2. It regulates data packets so that data is sent only when the channel is free.
3. It is used to prevent collisions rather than to detect them on the shared channel.
6. It avoids unnecessary data traffic on the network with the help of the RTS/CTS extension.
Disadvantages of CSMA/CA
1. CSMA/CA can sometimes involve a long waiting time before a data packet is transmitted.
MCQ questions (10x1) -
Question 1: Which of the following is the primary purpose of the Data Link Layer in the OSI model?
a) Packet routing
c) End-to-end communication
Question 2: What is the basic unit of data at the Data Link Layer called?
a) Segment
b) Packet
c) Frame
d) Datagram
Answer: c) Frame
Question 3: Which sublayer of the Data Link Layer is responsible for MAC (Media Access Control) addressing and
frame delivery within a local network?
c) Physical Layer
Question 4: In Ethernet networks, what is the most common method used for resolving collisions?
c) Token passing
d) Circuit switching
Question 5: Which field in an Ethernet frame is used to indicate the type of protocol being carried in the data
field?
d) EtherType
Answer: d) EtherType
Question 6: What is the purpose of the Frame Check Sequence (FCS) in a Data Link Layer frame?
Question 7: Which Data Link Layer protocol is commonly used for point-to-point serial communication between
network devices?
Question 8: What is the main function of flow control at the Data Link Layer?
Question 9: In the Data Link Layer addressing, which address is used to identify the physical network interface
card of a device?
a) IP address
b) MAC address
c) Port number
Question 10: Which Data Link Layer protocol is used to create a loop-free logical topology in Ethernet networks?
Short questions (10x3) -
3. What are the two primary sublayers of the Data Link Layer, and what do they do?
5. How does the Data Link Layer handle errors in data transmission?
7. What does ARP (Address Resolution Protocol) do at the Data Link Layer?
10. How does the Data Link Layer address devices in a local network?
Long questions (5x5) -
Question 1: Explain the concept of framing in the Data Link Layer. How does the Data Link Layer divide data from
the Network Layer into frames for transmission over the network? Describe the structure of a typical frame,
including the purpose of frame delimiters, addressing information, and error-checking mechanisms. Provide a
step-by-step illustration of how framing works and its significance in reliable data transmission.
Question 2: Compare and contrast the Media Access Control (MAC) sublayer and the Logical Link Control (LLC)
sublayer within the Data Link Layer. Discuss their respective functions and responsibilities. How do these
sublayers interact with higher layers of the OSI model, and why is it essential to have separate sublayers for MAC
and LLC functionalities? Provide real-world examples to demonstrate situations where the MAC and LLC sublayers
play distinct roles in network communication.
Question 3: Discuss the significance of error detection and correction in the Data Link Layer. Explain the role of
the Frame Check Sequence (FCS) in identifying and handling errors in frames. How does the receiver verify the
integrity of received frames using the FCS? Describe scenarios where error detection is essential, and elaborate
on how the Data Link Layer deals with erroneous frames to ensure data integrity during transmission.
Question 4: Explore the process of collision detection in the context of shared network media and the
mechanisms employed by the Data Link Layer to manage collisions. Provide a detailed explanation of the Carrier
Sense Multiple Access with Collision Detection (CSMA/CD) protocol. How does CSMA/CD help devices contend for
access to the network medium and detect collisions? Discuss the limitations of CSMA/CD and situations where it
is commonly used.
Question 5: Illustrate the operation of switches and bridges at the Data Link Layer. How do these devices facilitate
efficient data transmission within local area networks (LANs)? Describe the role of MAC address tables in switches
and how they optimize frame forwarding. Explain the purpose of bridges in connecting separate LAN segments
and how they reduce collision domains. Provide a real-world example to showcase the advantages of using
switches and bridges in a network environment.