
Computer Networks Important Questions :-

1> Compare and contrast the Go-Back-N ARQ protocol with the Selective Repeat
ARQ protocol, with diagrams.
Solution :- Both the Go-Back-N protocol and the Selective Repeat protocol are types
of sliding window protocols. The main difference between the two is that when a sent
frame is found to be damaged or lost, the Go-Back-N protocol retransmits all frames
from that frame onward, whereas the Selective Repeat protocol retransmits only the
frame that is damaged.

Go-Back-N Protocol:

The Go-Back-N protocol is a sliding window protocol used for reliable data transfer
in computer networks. It is a sender-based protocol that allows the sender to transmit
multiple packets without waiting for an acknowledgement for each packet. The
receiver sends a cumulative acknowledgement for a sequence of packets, indicating
the last correctly received packet. If any packet is lost, the receiver sends a negative
acknowledgement (NACK) for the lost packet, and the sender retransmits all the
packets in the window starting from the lost packet. The sender also maintains a
timer for each packet, and if an acknowledgement is not received within the timer’s
timeout period, the sender retransmits all packets in the window.
The key features of the Go-Back-N (GBN) protocol include:
 Sliding window mechanism
 Sequence numbers
 Cumulative acknowledgements
 Timeout mechanism
 NACK mechanism
 Simple implementation.
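
To make the retransmission behaviour concrete, here is a minimal, illustrative sketch of a
Go-Back-N sender loop. The transport primitives send_frame() and recv_ack() (a cumulative
ACK number, or None on timeout) are hypothetical; this is a sketch of the idea under those
assumptions, not a definitive implementation.

```python
# Minimal Go-Back-N sender sketch (illustrative only).
# send_frame() and recv_ack() are assumed/hypothetical transport primitives:
# recv_ack(timeout=...) returns the highest cumulative ACK number, or None on timeout.
import time

WINDOW_SIZE = 4        # N: sender window size
TIMEOUT = 1.0          # seconds before the whole window is resent

def gbn_send(frames, send_frame, recv_ack):
    base = 0           # oldest unacknowledged frame
    next_seq = 0       # next frame to transmit
    timer_start = None

    while base < len(frames):
        # Fill the window: keep sending while there is room.
        while next_seq < base + WINDOW_SIZE and next_seq < len(frames):
            send_frame(seq=next_seq, data=frames[next_seq])
            if timer_start is None:
                timer_start = time.time()
            next_seq += 1

        ack = recv_ack(timeout=TIMEOUT)       # cumulative ACK, or None
        if ack is not None and ack >= base:
            base = ack + 1                    # slide the window forward
            timer_start = time.time() if base < next_seq else None
        elif timer_start is not None and time.time() - timer_start >= TIMEOUT:
            # Timeout: "go back N" and retransmit every unacknowledged frame.
            next_seq = base
            timer_start = None
```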

Selective Repeat Protocol:

The Selective Repeat protocol is another sliding window protocol used for reliable
data transfer in computer networks. It is a receiver-based protocol that allows the
receiver to acknowledge each packet individually, rather than sending a cumulative
acknowledgement for a sequence of packets. The sender sends packets in a window
and waits for acknowledgements for each packet in the window. If a packet is lost,
the receiver sends a NACK for the lost packet, and the sender retransmits only that
packet. The sender also maintains a timer for each packet, and if an
acknowledgement is not received within the timer’s timeout period, the sender
retransmits only that packet.
Key features include:
 Receiver-based protocol
 Each packet is individually acknowledged by the receiver
 Only lost packets are retransmitted, reducing network congestion
 Maintains a buffer to store out-of-order packets
 Requires more memory and processing power than Go-Back-N
 Provides efficient transmission of packets.
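
As a companion to the Go-Back-N sketch above, the following is a minimal, illustrative
sketch of a Selective Repeat receiver that buffers out-of-order frames. recv_frame(),
send_ack() and deliver() are assumed/hypothetical primitives.

```python
# Minimal Selective Repeat receiver sketch (illustrative only).
# recv_frame(), send_ack() and deliver() are assumed/hypothetical primitives.

WINDOW_SIZE = 4

def sr_receive(recv_frame, send_ack, deliver):
    expected = 0       # lowest sequence number not yet delivered
    buffer = {}        # out-of-order frames held until the gap is filled

    while True:
        seq, data = recv_frame()            # blocks until a frame arrives
        send_ack(seq)                       # every frame is ACKed individually
        if expected <= seq < expected + WINDOW_SIZE and seq not in buffer:
            buffer[seq] = data
            # Deliver the in-order run that is now complete.
            while expected in buffer:
                deliver(buffer.pop(expected))
                expected += 1
```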

Similarities between the two protocols are:

 Both protocols use a sliding window mechanism to allow the sender to transmit
multiple packets without waiting for an acknowledgement for each packet.
 Both protocols use sequence numbers to ensure the correct order of packets.
 Both protocols use a timer mechanism to handle lost or corrupted packets.
 Both protocols can retransmit packets that are not acknowledged by the receiver.
 Both protocols can reduce network congestion by only retransmitting lost packets.
 Both protocols are widely used in modern communication networks.
Now, we shall see the difference between them:
1. In Go-Back-N, if a sent frame is found to be damaged or lost, all frames from that
frame up to the last transmitted packet are retransmitted. In Selective Repeat, only the
frames found to be damaged are retransmitted.

2. The sender window size of Go-Back-N is N. The sender window size of Selective
Repeat is also N.

3. The receiver window size of Go-Back-N is 1. The receiver window size of Selective
Repeat is N.

4. Go-Back-N is less complex. Selective Repeat is more complex.

5. In Go-Back-N, neither the sender nor the receiver needs sorting. In Selective Repeat,
the receiver side needs sorting to order the frames.

6. In Go-Back-N, the type of acknowledgement is cumulative. In Selective Repeat, the
type of acknowledgement is individual.

7. In Go-Back-N, out-of-order packets are not accepted (they are discarded) and the
entire window is re-transmitted. In Selective Repeat, out-of-order packets are accepted.

8. In Go-Back-N, if a corrupt packet is received, the entire window is re-transmitted. In
Selective Repeat, if a corrupt packet is received, a negative acknowledgement is sent
immediately and only that packet is retransmitted.

9. The efficiency of Go-Back-N is N/(1 + 2a). The efficiency of Selective Repeat is also
N/(1 + 2a).

Difference between Go-Back-N and Selective Repeat Protocol

The following points highlight the major differences between the Go-Back-N
and the Selective Repeat protocols −

Definition: In Go-Back-N, if a sent frame is found to be damaged or lost, all frames
from that frame up to the last transmitted packet are retransmitted. In Selective Repeat,
only the suspected or damaged frames are retransmitted.

Sender Window Size: In Go-Back-N, the sender window is of size N. In Selective
Repeat, the sender window size is also N.

Receiver Window Size: In Go-Back-N, the receiver window size is 1. In Selective
Repeat, the receiver window size is N.

Complexity: Go-Back-N is easier to implement. In Selective Repeat, the receiver
window needs to sort the frames, which makes it more complex.

Efficiency: Efficiency of Go-Back-N = N / (1 + 2a), where "a" is the ratio of
propagation delay to transmission delay and "N" is the number of packets sent.
Efficiency of Selective Repeat = N / (1 + 2a) as well.

Acknowledgement: In Go-Back-N, the acknowledgement type is cumulative. In
Selective Repeat, the acknowledgement type is individual.

Sorting: In Go-Back-N, neither the sender nor the receiver requires sorting. In Selective
Repeat, sorting is needed on the receiver side to order the frames.

Out-of-order Packets: In Go-Back-N, out-of-order packets are rejected and the entire
window is re-transmitted. In Selective Repeat, out-of-order packets are accepted.

Minimum Sequence Numbers: In Go-Back-N, the minimum number of sequence
numbers required is N + 1, where "N" is the number of packets sent. In Selective
Repeat, the minimum number of sequence numbers required is 2N.
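
As a quick worked illustration of the efficiency and sequence-number rows (the numbers
below are assumed purely for the example):

$$a = \frac{T_p}{T_t} = \frac{2\ \text{ms}}{1\ \text{ms}} = 2, \qquad N = 1 + 2a = 5$$

$$\eta_{\text{Stop-and-Wait}} = \frac{1}{1+2a} = 20\%, \qquad \eta_{\text{GBN/SR}} = \frac{N}{1+2a} = \frac{5}{5} = 100\%$$

With a window of N = 5, Go-Back-N needs at least N + 1 = 6 sequence numbers (3 bits),
while Selective Repeat needs 2N = 10 sequence numbers (4 bits).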

Conclusion

Go-Back-N consumes more bandwidth because it retransmits an entire window even if
a single packet is lost. If the error rate is high, Go-Back-N will therefore consume a lot
of bandwidth. Selective Repeat is the better option when bandwidth is a concern, as it
resends only the defective or missing packets and not the entire window.

Go-Back-N uses cumulative acknowledgements, which can reduce traffic; however,
there is always a risk of losing a cumulative acknowledgement. If that happens, the
acknowledgements of all the corresponding packets are lost.

2> Differentiate between Pure Aloha and Slotted Aloha with an example and a
diagram.

Solution :- The Aloha Protocol allows several stations to send data frames
over the same communication channel at the same time. This protocol is a
straightforward communication method in which each network station is
given equal priority and works independently.
Aloha is a medium access control (MAC) protocol for transmission of data via a
shared network channel. Using this protocol, several data streams originating
from multiple nodes are transferred through a multi-point transmission
channel.
There are two types of Aloha protocols − Pure Aloha and Slotted Aloha.
 In Pure Aloha, the time of transmission is continuous. Whenever a station has an
available frame, it sends the frame. If there is collision and the frame is destroyed, the
sender waits for a random amount of time before retransmitting it.
 In Slotted Aloha, time is divided into discrete intervals called slots, corresponding to a
frame.

In this article, we will highlight the major differences between Pure Aloha and
Slotted Aloha.

What is Pure Aloha?

Pure Aloha is the basic form of the Aloha contention mechanism, in which demand-driven
data frames from numerous VSATs are sent to the satellite through a shared channel. It
was first used at the University of Hawaii in 1970, under the direction of Norman
Abramson.

 In Pure Aloha, the time of transmission is continuous. Whenever a station has an
available frame, it sends the frame.
 A collision occurs if more than one frame tries to occupy the channel at the same time. If
there is a collision and the frame is destroyed, the sender waits for a random amount of
time before retransmitting it.
 After transmitting a frame, a station waits for a finite period of time to receive an
acknowledgement. If the acknowledgement is not received within this time, the station
assumes that the frame has been destroyed due to a collision and resends the frame.

Due to the bursty nature of traffic inside a network, the chances of data frames colliding
are quite high when using the Pure Aloha protocol. No station checks whether another
station is transmitting at the same time; as a result, when several data packets are
broadcast over the same channel, they collide.

What is Slotted Aloha?

Slotted Aloha was introduced in 1972 by Roberts as an improvement over Pure Aloha.

 In slotted aloha, successful data transmission occurs only when each slot sends just one
data frame at a time. The chance of a collision is considerably reduced by doing so.
 Here, time is divided into discrete intervals called slots, corresponding to a frame. The
communicating stations agree upon the slot boundaries.
 Any station can send only one frame in each slot. Also, the stations cannot transmit at any
time whenever a frame is available. They should wait for the beginning of the next slot.
 A slot stays idle if no data packet is sent in it. It should be noted that if a packet does
not receive an acknowledgement after a collision, it is deemed lost and is retransmitted
in a different slot after a back-off time.

However, there still can be collisions. If more than one frame transmits at
the beginning of a slot, collisions occur.

What is Aloha?
Aloha was designed for wireless LANs (Local Area Networks) but can also be used on any
shared medium to transmit data. In Aloha, any station can transmit data on the channel
at any time. It does not require any carrier sensing.

Pure Aloha
Pure Aloha is used when a station has data ready to send over the channel. In Pure
Aloha, each station transmits data onto the channel without checking whether the
channel is idle or not, so collisions can occur and data frames can be lost.

The station expects an acknowledgement from the receiver. If the acknowledgement of
the frame is received within the specified time, all is well; otherwise, the station
assumes that the frame has been destroyed. The station then waits for a random amount
of time and retransmits the frame, and this continues until the data is successfully
delivered to the receiver.

Slotted Aloha
There is a high possibility of frame collisions in pure Aloha, so slotted Aloha was
designed to overcome it. Unlike pure Aloha, slotted Aloha does not allow a station to
transmit data whenever it wants to.

In slotted Aloha, the shared channel is divided into fixed time intervals called slots. If a
station wants to send a frame on the shared channel, the frame can only be sent at the
beginning of a slot, and only one frame is allowed to be sent in each slot. If a station
fails to send its data in a slot, it has to wait until the next slot.

However, there is still a possibility of collision, for example if two stations each try to
send a frame at the beginning of the same time slot.

Pure aloha v/s slotted aloha


Now, let's see the comparison chart between pure aloha and slotted aloha. We are
comparing both terms on the basis of characteristics to make the topic more clear
and understandable.

1. Basic: In pure Aloha, data can be transmitted at any time by any station. In slotted
Aloha, data can be transmitted only at the beginning of a time slot.

2. Introduced by: Pure Aloha was introduced under the leadership of Norman Abramson
in 1970 at the University of Hawaii. Slotted Aloha was introduced by Roberts in 1972
to improve the capacity of pure Aloha.

3. Time: In pure Aloha, time is not synchronized; it is continuous. In slotted Aloha,
time is globally synchronized; it is discrete.

4. Number of collisions: Pure Aloha does not decrease the number of collisions.
Slotted Aloha enhances the efficiency of pure Aloha and decreases the number of
collisions to half.

5. Vulnerable time: In pure Aloha, the vulnerable time is 2 x Tt. In slotted Aloha, the
vulnerable time is Tt.

6. Successful transmission: In pure Aloha, the probability of successful transmission of
a frame is S = G * e^(-2G). In slotted Aloha, it is S = G * e^(-G).

7. Throughput: The maximum throughput of pure Aloha is about 18%. The maximum
throughput of slotted Aloha is about 37%.
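
These two throughput formulas can be checked numerically. The short Python sketch below
(illustrative only, standard library) evaluates S for a few values of the offered load G:

```python
import math

def pure_aloha_throughput(G):
    # S = G * e^(-2G): probability of success per frame time in pure Aloha
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    # S = G * e^(-G): probability of success per slot in slotted Aloha
    return G * math.exp(-G)

for G in (0.25, 0.5, 1.0, 2.0):
    print(f"G={G:>4}: pure={pure_aloha_throughput(G):.3f}, "
          f"slotted={slotted_aloha_throughput(G):.3f}")

# Pure Aloha peaks at G = 0.5 -> 1/(2e) ~= 0.184 (about 18%)
# Slotted Aloha peaks at G = 1 -> 1/e   ~= 0.368 (about 37%)
```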

Conclusion
From the above discussion, it can be said that slotted aloha is somewhat better than
pure aloha. It is because there is less possibility of collision in slotted aloha.
Definition Of Pure ALOHA

Pure ALOHA was introduced by Norman Abramson and his associates at the University
of Hawaii in early 1970. Pure ALOHA simply allows every station to transmit data
whenever it has data to send. Since every station transmits without checking whether
the channel is free or not, there is always the possibility of collision of data frames. If
an acknowledgement arrives for the transmitted frame, all is well; otherwise, if two
frames collide (overlap), they are damaged.

If a frame is damaged, the station waits for a random amount of time and retransmits
the frame until it is transmitted successfully. The waiting time of each station must be
random and must not be the same, precisely to avoid the frames colliding again and
again.

The throughput of Pure ALOHA is maximized when the frames are of uniform length.
The formula for the throughput of Pure ALOHA is S = G * e^(-2G); the throughput is
maximum when G = 1/2, which gives about 18% of the total transmitted data frames.
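
For completeness, the maximum follows from a standard calculus step (added here as a
supporting derivation, not part of the original notes):

$$S = G e^{-2G}, \qquad \frac{dS}{dG} = e^{-2G}(1 - 2G) = 0 \;\Rightarrow\; G = \frac{1}{2}, \qquad S_{\max} = \frac{1}{2e} \approx 0.184 \approx 18\%$$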

Definition Of Slotted ALOHA

After pure ALOHA in 1970, Roberts introduced another method to improve the
capacity of Pure ALOHA, called Slotted ALOHA. He proposed dividing time into
discrete intervals called time slots, with each time slot corresponding to the length of a
frame.
In contrast to Pure ALOHA, Slotted ALOHA does not allow a station to transmit data
whenever it has data to send. Slotted ALOHA makes the station wait until the next
time slot begins, and each data frame is transmitted in a new time slot.

Synchronization can be achieved in Slotted ALOHA with the help of a special station
that emits a pip at the beginning of every time slot, like a clock. The formula for the
throughput of Slotted ALOHA is S = G * e^(-G); the throughput is maximum when
G = 1, which gives about 37% of the total transmitted data frames. In Slotted ALOHA,
37% of the time slots are empty, 37% are successes and 26% are collisions.
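
These percentages follow from the Poisson model of frame arrivals at the operating point
G = 1 (a standard result, shown here only as a supporting calculation):

$$P(0) = e^{-G} = e^{-1} \approx 0.368, \qquad P(1) = G e^{-G} = e^{-1} \approx 0.368, \qquad P(\ge 2) = 1 - 2e^{-1} \approx 0.264$$

So roughly 37% of slots are idle, 37% carry exactly one frame (a success), and 26%
suffer a collision.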

Note :- make the diagram from GeeksforGeeks and take the example from the YouTube
video.
3> How does the data link layer provide error control and flow control? Explain with
examples.
Overview

The Data Link Layer is responsible for reliable point-to-point data transfer over a physical
medium. To implement this, the data link layer provides three functions:

 Line Discipline:
Line discipline is the functionality used to establish coordination between link systems. It
decides which device sends data and when.
 Flow Control:
Flow control is an essential function that coordinates the amount of data the sender can
send before waiting for acknowledgment from the receiver.
 Error Control:
Error control is functionality used to detect erroneous transmissions in data frames and
retransmit them.

What is Flow Control in the Data Link Layer?

Flow control is a set of procedures that restrict the amount of data a sender should send
before it waits for some acknowledgment from the receiver.

 Flow Control is an essential function of the data link layer.


 It determines the amount of data that a sender can send.
 It makes the sender wait until an acknowledgment is received from the receiver’s end.
 Methods of Flow Control are Stop-and-wait , and Sliding window.

Purpose of Flow Control

The device on the receiving end has a limited amount of memory (to store incoming data)
and limited speed (to process incoming data). The receiver might get overwhelmed if the rate
at which the sender sends data is faster or the amount of data sent is more than its capacity.

Buffers are blocks in the memory that store data until it is processed. If the buffer is
overloaded and there is more incoming data, then the receiver will start losing frames.

The flow control mechanism was devised to avoid this loss and wastage of frames.
Following this mechanism, the receiver, as per its capacity, sends an acknowledgment to send
fewer frames or temporarily halt the transmission until it can receive again.

Thus, flow control is the method of controlling the rate of transmission of data to a value that
the receiver can handle.

Methods to Control the Flow of Data

Stop-and-wait Protocol

Stop-and-wait protocol works under the assumption that the communication channel
is noiseless and transmissions are error-free.

Working:

 The sender sends data to the receiver.


 The sender stops and waits for the acknowledgment.
 The receiver receives the data and processes it.
 The receiver sends an acknowledgment for the above data to the sender.
 The sender sends data to the receiver after receiving the acknowledgment of previously sent
data.
 The process is unidirectional and continues until the sender sends the End of Transmission
(EoT) frame.
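
A minimal sketch of this exchange, assuming hypothetical channel_send()/channel_recv()
primitives over a noiseless, in-order channel (consistent with the assumption stated above):

```python
# Minimal stop-and-wait flow control sketch (illustrative only).
# channel_send() and channel_recv() are assumed/hypothetical primitives
# over a noiseless, in-order channel.

def stop_and_wait_sender(frames, channel_send, channel_recv):
    for frame in frames:
        channel_send(frame)      # send exactly one frame
        channel_recv()           # block until its acknowledgement arrives
    channel_send("EOT")          # End of Transmission frame

def stop_and_wait_receiver(channel_recv, channel_send, deliver):
    while True:
        frame = channel_recv()
        if frame == "EOT":
            break
        deliver(frame)           # receive and process the data
        channel_send("ACK")      # only then allow the sender to continue
```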
Sliding Window Protocol

The sliding window protocol is the flow control protocol for noisy channels that allows the
sender to send multiple frames even before acknowledgments are received. It is called
a Sliding window because the sender slides its window upon receiving the acknowledgments
for the sent frames.

Working:

 The sender and receiver have a “window” of frames. A window is a space that consists of
multiple bytes. The size of the window on the receiver side is always 1.
 Each frame is sequentially numbered from 0 to n - 1, where n is the window size at the
sender side.
 The sender sends as many frames as would fit in a window.
 After receiving the desired number of frames, the receiver sends an acknowledgment. The
acknowledgment (ACK) includes the number of the next expected frame.

Example:

1. The sender sends the frames 0 and 1 from the first window (because the window size
is 2).
2. The receiver after receiving the sent frames, sends an acknowledgment for frame 2 (as
frame 2 is the next expected frame).
3. The sender then sends frames 2 and 3. Since frame 2 is lost on the way, the receiver
sends back a “NAK” signal (a non-acknowledgment) to inform the sender that
frame 2 has been lost. So, the sender retransmits frame 2.

What is Error Control in the Data Link Layer?

Error Control is a combination of both error detection and error correction. It ensures that
the data received at the receiver end is the same as the one sent by the sender.

Error detection is the process by which the receiver informs the sender about any erroneous
frame (damaged or lost) sent during transmission.

Error correction refers to the retransmission of those frames by the sender.

Purpose of Error Control

Error control is a vital function of the data link layer that detects errors in transmitted
frames and retransmits all the erroneous frames. Error discovery and amendment deal with
data frames damaged or lost in transit and the acknowledgment frames lost during
transmission. The method used in noisy channels to control these errors
is ARQ or Automatic Repeat Request.
Categories of Error Control

Stop-and-wait ARQ

 In the case of stop-and-wait ARQ after the frame is sent, the sender maintains a timeout
counter.
 If acknowledgment of the frame comes in time, the sender transmits the next frame in the
queue.
 Else, the sender retransmits the frame and restarts the timeout counter.
 In case the receiver receives a negative acknowledgment, the sender retransmits the frame.

Sliding Window ARQ

To deal with the retransmission of lost or damaged frames, a few changes are made to the
sliding window mechanism used in flow control.

Go-Back-N ARQ :

In Go-Back-N ARQ, if the sent frames are suspected or damaged, all the frames are re-
transmitted from the lost packet to the last packet transmitted.

Selective Repeat ARQ:

Selective repeat ARQ/ Selective Reject ARQ is a type of Sliding Window ARQ in which
only the suspected or damaged frames are re-transmitted.

Differences between Flow Control and Error Control


1. Flow control refers to the transmission of data frames from sender to receiver. Error
control refers to the transmission of error-free and reliable data frames from sender to
receiver.

2. Approaches for flow control are feedback-based flow control and rate-based flow
control. Approaches for error detection are Checksum, Cyclic Redundancy Check, and
Parity Checking; approaches for error correction are Hamming code, Binary
Convolution codes, Reed-Solomon code, and Low-Density Parity-Check codes.

3. Flow control focuses on the proper flow of data and the prevention of data loss. Error
control focuses on the detection and correction of errors.

4. Examples of flow control techniques are the Stop-and-Wait protocol and the Sliding
Window protocol. Examples of error control techniques are Stop-and-Wait ARQ and
Sliding Window ARQ.
Conclusion

 Data frames are transmitted from the sender to the receiver.


 For the transmission to be reliable, error-free, and efficient, flow control and error
control techniques are implemented.
 Both these techniques are implemented in the Data Link Layer.
 Flow Control is used to maintain the proper flow of data from the sender to the receiver.
 Error Control is used to find whether the data delivered to the receiver is error-free and
reliable.

Note :- make diagram from scaler.in

1. Flow Control :
It is an important function of the Data Link Layer. It refers to a set of procedures that
tells the sender how much data it can transmit before waiting for acknowledgment
from the receiver.
Purpose of Flow Control :
Any receiving device has a limited speed at which it can process incoming data and
also a limited amount of memory to store incoming data. If the source is sending the
data at a faster rate than the capacity of the receiver, there is a possibility of the
receiver being swamped. The receiver will keep losing some of the frames simply
because they are arriving too quickly and the buffer is also getting filled up.
This will generate waste frames on the network. Therefore, the receiving device
must have some mechanism to inform the sender to send fewer frames or stop
transmission temporarily. In this way, flow control will control the rate of frame
transmission to a value that can be handled by the receiver.
Example – Stop & Wait Protocol
2. Error Control :
The error control function of the data link layer detects the errors in transmitted
frames and re-transmits all the erroneous frames.
Purpose of Error Control :
The error control function of the data link layer helps in dealing with data frames that
are damaged in transit, data frames lost in transit, and acknowledgement frames that are
lost in transmission. The method used for error control is called
Automatic Repeat Request (ARQ), which is used for noisy channels.
Example – Stop & Wait ARQ and Sliding Window ARQ
Difference between Flow Control and Error Control :

1. Flow control is meant only for the transmission of data from sender to receiver.
Error control is meant for the transmission of error-free data from sender to receiver.

2. For flow control there are two approaches: feedback-based flow control and rate-
based flow control. To detect errors in data, the approaches are Checksum, Cyclic
Redundancy Check and Parity Checking; to correct errors in data, the approaches are
Hamming code, Binary Convolution codes, Reed-Solomon code and Low-Density
Parity-Check codes.

3. Flow control prevents the loss of data and avoids overrunning of the receive buffers.
Error control is used to detect and correct errors that occur in the data.

4. Examples of flow control techniques are the Stop & Wait protocol and the Sliding
Window protocol. Examples of error control techniques are Stop & Wait ARQ and
Sliding Window ARQ (Go-Back-N ARQ, Selective Repeat ARQ).

Stop and Wait protocol, its problems and solutions




It is the simplest flow control method: the sender sends a packet and then waits for an
acknowledgement from the receiver that it has received the packet, and only then sends
the next packet.
The stop and wait protocol is very easy to implement.
The total time taken to send one packet is:

Ttotal = Tt(data) + Tp + Tq + Tprocess + Tt(ack) + Tp

(assuming the queuing and processing times Tq and Tprocess are 0)

Ttotal = Tt(data) + 2*Tp + Tt(ack)

Ttotal = Tt(data) + 2*Tp

(when Tt(ack) is negligible)

Efficiency
= useful time / total cycle time
= Tt / (Tt + 2*Tp)
= 1 / (1 + 2a)      [a = Tp/Tt]
Note: Stop and wait performs better over short distances, hence it is a good protocol for
LANs. Stop and wait is also favourable for bigger packets.
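
A small worked example of why distance matters here (the numbers are assumed purely for
illustration): suppose a 1000-bit packet is sent over a 1 Mbps link, so Tt = 1 ms. On a LAN
with Tp = 0.05 ms, versus a long-distance link with Tp = 10 ms:

$$\eta_{\text{LAN}} = \frac{1}{1 + 2(0.05/1)} \approx 91\%, \qquad \eta_{\text{long distance}} = \frac{1}{1 + 2(10/1)} \approx 4.8\%$$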
What if the data packet is lost in between?
1. According to the sender, the receiver is busy, but actually the data is lost.
2. The receiver will assume that no packet has been sent by the sender.
3. Both will keep waiting for each other, and there will be a deadlock.
Need for a timeout timer:
A timer is started when the packet is sent; if no acknowledgement arrives before the
timeout expires, the sender concludes that the data has been lost and retransmits it.
What if the acknowledgement has been lost?
After the timeout timer expires, the sender will assume that the data is lost, although
actually the acknowledgement was lost. It will therefore send the data packet again,
but to the receiver this looks like a new data packet, which gives rise to the duplicate
packet problem.
To eliminate the duplicate packet problem, a sequence number is added to the data
packet. Using these packet numbers, the receiver can easily detect duplicate packets.
What if there is a delay in receiving the acknowledgement?

According to the sender, the acknowledgement of packet 1 was delayed and packet 2 was
lost; when the delayed acknowledgement finally arrives, the sender wrongly assumes it
is the acknowledgement for packet 2. This problem is called the missing packet problem.
The missing packet problem can be solved if the acknowledgements also carry numbers.
Stop and Wait ARQ


Characteristics

 Used in connection-oriented communication.
 It offers error control and flow control.
 It is used in the Data Link and Transport layers.
 Stop and Wait ARQ mainly implements the Sliding Window Protocol concept with a
window size of 1.

Useful Terms:

 Propagation Delay: amount of time taken by a packet to make a physical journey
from one router to another router.
Propagation Delay = (Distance between routers) / (Velocity of propagation)
 Round Trip Time (RTT) = time taken by a packet to reach the receiver + time taken
by the acknowledgement to reach the sender
 TimeOut (TO) = 2 * RTT
 Time To Live (TTL) = 2 * TimeOut (maximum TTL is 255 seconds)
Simple Stop and Wait
Sender:
Rule 1) Send one data packet at a time.
Rule 2) Send the next packet only after receiving acknowledgement for the previous.

Receiver:
Rule 1) Send an acknowledgement after receiving and consuming a data packet.
Rule 2) The acknowledgement is sent only after the packet has been consumed (flow control).
Problems :
1. Lost Data
2. Lost Acknowledgement:

3. Delayed Acknowledgement/Data: after a timeout on the sender side, a long-delayed
acknowledgement might be wrongly considered as the acknowledgement of some other
recent packet.
Stop and Wait ARQ (Automatic Repeat Request)
The above three problems are resolved by Stop and Wait ARQ (Automatic Repeat
Request), which provides both error control and flow control.

1. Time Out: lost data (or lost acknowledgements) are handled by retransmitting the frame
after the timeout timer expires.

2. Sequence Number (Data): duplicate packets caused by retransmission are detected
using the sequence number carried in each data frame.

3. Delayed Acknowledgement:
This is resolved by introducing sequence numbers for the acknowledgements as well.

Working of Stop and Wait ARQ:


1) Sender A sends a data frame or packet with sequence number 0.
2) Receiver B, after receiving the data frame, sends an acknowledgement with
sequence number 1 (the sequence number of the next expected data frame or packet)
There is only a one-bit sequence number that implies that both sender and receiver
have a buffer for one frame or packet only.
Characteristics of Stop and Wait ARQ:
 It uses a link between sender and receiver as a half-duplex link
 Throughput = 1 Data packet/frame per RTT
 If the bandwidth-delay product is very high, the stop and wait protocol is not very
useful: the sender has to keep waiting for acknowledgements before sending the
next packet.
 It is an example of a “closed loop” or connection-oriented protocol.
 It is a special case of the sliding window protocol where the window size is 1.
 Irrespective of the number of packets the sender has, stop and wait ARQ requires
only 2 sequence numbers, 0 and 1.
Constraints:
Stop and Wait ARQ has very low efficiency; it can be improved by increasing the
window size. For better efficiency, the Go-Back-N and Selective Repeat protocols
are used.
The Stop and Wait ARQ solves the main three problems but may cause big
performance issues as the sender always waits for acknowledgement even if it has the
next packet ready to send. Consider a situation where you have a high bandwidth
connection and propagation delay is also high (you are connected to some server in
some other country through a high-speed connection). To solve this problem, we can
send more than one packet at a time with a larger sequence number. We will be
discussing these protocols in the next articles.
So Stop and Wait ARQ may work fine where propagation delay is very less for
example LAN connections but performs badly for distant connections like satellite
connections.
Advantages of Stop and Wait ARQ :
 Simple Implementation: Stop and Wait ARQ is a simple protocol that is
easy to implement in both hardware and software. It does not require
complex algorithms or hardware components, making it an inexpensive and
efficient option.
 Error Detection: Stop and Wait ARQ detects errors in the transmitted data
by using checksums or cyclic redundancy checks (CRC). If an error is
detected, the receiver sends a negative acknowledgment (NAK) to the
sender, indicating that the data needs to be retransmitted.
 Reliable: Stop and Wait ARQ ensures that the data is transmitted reliably
and in order. The receiver cannot move on to the next data packet until it
receives the current one. This ensures that the data is received in the correct
order and eliminates the possibility of data corruption.
 Flow Control: Stop and Wait ARQ can be used for flow control, where the
receiver can control the rate at which the sender transmits data. This is
useful in situations where the receiver has limited buffer space or
processing power.
 Backward Compatibility: Stop and Wait ARQ is compatible with many
existing systems and protocols, making it a popular choice for
communication over unreliable channels.
Disadvantages of Stop and Wait ARQ :
 Low Efficiency: Stop and Wait ARQ has low efficiency as it requires the
sender to wait for an acknowledgment from the receiver before sending the
next data packet. This results in a low data transmission rate, especially for
large data sets.
 High Latency: Stop and Wait ARQ introduces additional latency in the
transmission of data, as the sender must wait for an acknowledgment before
sending the next packet. This can be a problem for real-time applications
such as video streaming or online gaming.
 Limited Bandwidth Utilization: Stop and Wait ARQ does not utilize the
available bandwidth efficiently, as the sender can transmit only one data
packet at a time. This results in underutilization of the channel, which can
be a problem in situations where the available bandwidth is limited.
 Limited Error Recovery: Stop and Wait ARQ has limited error recovery
capabilities. If a data packet is lost or corrupted, the sender must retransmit
the entire packet, which can be time-consuming and can result in further
delays.
 Vulnerable to Channel Noise: Stop and Wait ARQ is vulnerable to channel
noise, which can cause errors in the transmitted data. This can result in
frequent retransmissions and can impact the overall efficiency of the
protocol.

Sliding Window Protocol | Set 1 (Sender Side)





Prerequisite: Stop and Wait ARQ. Stop and Wait ARQ offers error and flow control, but
may cause big performance issues, as the sender always waits for an acknowledgement
even if it has the next packet ready to send. Consider a situation where you have a high-
bandwidth connection and the propagation delay is also high (you are connected to some
server in another country through a high-speed connection); you cannot use the full
speed due to the limitations of stop and wait. The Sliding Window protocol handles this
efficiency issue by sending more than one packet at a time, using a larger range of
sequence numbers. The idea is the same as pipelining in computer architecture.
Few Terminologies :
Transmission Delay (Tt) – Time to transmit the packet from host to the outgoing
link. If B is the Bandwidth of the link and D is the Data Size to transmit
Tt = D/B
Propagation Delay (Tp) – It is the time taken by the first bit transferred by the host
onto the outgoing link to reach the destination. It depends on the distance d and the
wave propagation speed s (depends on the characteristics of the medium).
Tp = d/s
Efficiency – It is defined as the ratio of total useful time to the total cycle time of a
packet. For stop and wait protocol,
Total cycle time = Tt(data) + Tp(data) +
Tt(acknowledgement) + Tp(acknowledgement)
= Tt(data) + Tp(data) + Tp(acknowledgement)
= Tt + 2*Tp
Since acknowledgements are very less in size, their transmission delay can be
neglected.
Efficiency = Useful Time / Total Cycle Time
= Tt/(Tt + 2*Tp) (For Stop and Wait)
= 1/(1+2a) [ Using a = Tp/Tt ]
Effective Bandwidth(EB) or Throughput – Number of bits sent per second.
EB = Data Size(D) / Total Cycle time(Tt + 2*Tp)
Multiplying and dividing by Bandwidth (B),
= (1/(1+2a)) * B [ Using a = Tp/Tt ]
= Efficiency * Bandwidth
Capacity of link – If a channel is Full Duplex, then bits can be transferred in both the
directions and without any collisions. Number of bits a channel/Link can hold at
maximum is its capacity.
Capacity = Bandwidth(B) * Propagation delay(Tp)

For Full Duplex channels,
Capacity = 2 * Bandwidth(B) * Propagation delay(Tp)
Concept Of Pipelining
In Stop and Wait protocol, only 1 packet is transmitted onto the link and then sender
waits for acknowledgement from the receiver. The problem in this setup is that
efficiency is very less as we are not filling the channel with more packets after 1st
packet has been put onto the link. Within the total cycle time of Tt + 2*Tp units, we
will now calculate the maximum number of packets that sender can transmit on the
link before getting an acknowledgement.
In Tt units          ---->  1 packet is transmitted.
In 1 unit            ---->  1/Tt packets can be transmitted.
In Tt + 2*Tp units   ---->  (Tt + 2*Tp)/Tt packets can be transmitted
                            = 1 + 2a        [using a = Tp/Tt]
So the maximum number of packets that can be transmitted in one total cycle time is
1 + 2a. Let us take an example: consider Tt = 1 ms and Tp = 1.5 ms. In the picture given
below, after the sender has transmitted packet 0, it will immediately transmit packets 1, 2
and 3. The acknowledgement for packet 0 will arrive after 2 * 1.5 = 3 ms. In stop and
wait, in the time 1 + 2 * 1.5 = 4 ms, we were transferring only one packet. Here we keep
a window of packets that we have transmitted but not yet acknowledged.
After we have received the Ack for packet 0, window slides and the next packet can
be assigned sequence number 0. We reuse the sequence numbers which we have
acknowledged so that header size can be kept minimum as shown in the diagram
given below.

Minimum Number Of Bits For Sender window (Very Important For GATE)
As we have seen above,
Maximum window size = 1 + 2*a where a = Tp/Tt

Minimum sequence numbers required = 1 + 2*a.


All the packets in the current window are given a sequence number. The number of bits
required to represent the sender window = ceil(log2(1 + 2a)). Sometimes, however, the
number of bits in the protocol header is pre-defined: the size of the sequence number
field in the header also determines the maximum number of packets that can be sent in a
total cycle time. If N is the size of the sequence number field in the header (in bits), then
we can have 2^N sequence numbers, and the window size is ws = min(1 + 2a, 2^N). To
calculate the minimum bits required to represent the sequence numbers/sender window,
use ceil(log2(ws)). This section discusses the sending window only; for the receiving
window, there are two protocols, Go-Back-N and Selective Repeat, which are used to
implement pipelining in practice.
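
A tiny helper (illustrative only; the function name and defaults are hypothetical) that
reproduces the window-size and bit calculations above, using the Tt = 1 ms, Tp = 1.5 ms
example from the text:

```python
import math

def sender_window_bits(Tt_ms, Tp_ms, seq_field_bits=None):
    """Optimal window size and bits needed to represent the sender window."""
    a = Tp_ms / Tt_ms
    max_window = 1 + 2 * a                      # packets per total cycle time
    if seq_field_bits is not None:
        ws = min(max_window, 2 ** seq_field_bits)   # header field may cap the window
    else:
        ws = max_window
    return ws, math.ceil(math.log2(ws))

# Example from the text: Tt = 1 ms, Tp = 1.5 ms  ->  window = 1 + 2*1.5 = 4
print(sender_window_bits(1, 1.5))               # (4.0, 2): 4 packets, 2 bits
```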

Advantages:

Efficiency: The sliding window protocol is an efficient method of transmitting data


across a network because it allows multiple packets to be transmitted at the same time.
This increases the overall throughput of the network.
Reliable: The protocol ensures reliable delivery of data, by requiring the receiver to
acknowledge receipt of each packet before the next packet can be transmitted. This
helps to avoid data loss or corruption during transmission.
Flexibility: The sliding window protocol is a flexible technique that can be used with
different types of network protocols and topologies, including wireless networks,
Ethernet, and IP networks.
Congestion Control: The sliding window protocol can also help control network
congestion by adjusting the size of the window based on the network conditions,
thereby preventing the network from becoming overwhelmed with too much traffic.

Disadvantages:

Complexity: The sliding window protocol can be complex to implement and can
require a lot of memory and processing power to operate efficiently.
Delay: The protocol can introduce a delay in the transmission of data, as each packet
must be acknowledged before the next packet can be transmitted. This delay can
increase the overall latency of the network.
Limited Bandwidth Utilization: The sliding window protocol may not be able to
utilize the full available bandwidth of the network, particularly in high-speed
networks, due to the overhead of the protocol.
Window Size Limitations: The maximum size of the sliding window can be limited
by the size of the receiver’s buffer or the available network resources, which can
affect the overall performance of the protocol.

Error Detection in Computer Networks




Error is a condition when the receiver’s information does not match the sender’s
information. During transmission, digital signals suffer from noise that can introduce
errors in the binary bits traveling from sender to receiver. That means a 0 bit may
change to 1 or a 1 bit may change to 0.
Data (Implemented either at the Data link layer or Transport Layer of the OSI Model)
may get scrambled by noise or get corrupted whenever a message is transmitted. To
prevent such errors, error-detection codes are added as extra data to digital messages.
This helps in detecting any errors that may have occurred during message
transmission.
Types of Errors

Single-Bit Error
A single-bit error refers to a type of data transmission error that occurs when one bit
(i.e., a single binary digit) of a transmitted data unit is altered during transmission,
resulting in an incorrect or corrupted data unit.


Multiple-Bit Error

A multiple-bit error is an error type that arises when more than one bit in a data
transmission is affected. Although multiple-bit errors are relatively rare when
compared to single-bit errors, they can still occur, particularly in high-noise or high-
interference digital environments.


Burst Error
When several consecutive bits are flipped mistakenly in digital transmission, it creates
a burst error. This error causes a sequence of consecutive incorrect values.


To detect errors, a common technique is to introduce redundancy bits that provide
additional information. Various techniques for error detection include:
1. Simple Parity Check
2. Two-dimensional Parity Check
3. Checksum
4. Cyclic Redundancy Check (CRC)

Error Detection Methods

Simple Parity Check

Simple parity is a basic error detection method that involves adding an extra bit to a
data transmission. It works as follows:
 1 is added to the block if it contains an odd number of 1s, and
 0 is added if it contains an even number of 1s.
This scheme makes the total number of 1s even, which is why it is called even parity
checking.
Disadvantages
 A single parity check is not able to detect an even number of bit errors.
 For example, suppose the data to be transmitted is 101010. The codeword transmitted
to the receiver is 1010101 (using even parity).
Assume that during transmission two bits of the codeword are flipped, giving 1111101.
On receiving the codeword, the receiver finds the number of 1s to be even and hence
detects no error, which is a wrong conclusion.
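
The scheme (and its blind spot for an even number of flipped bits) is easy to demonstrate
with a few lines of Python (illustrative sketch):

```python
def add_even_parity(bits: str) -> str:
    """Append an even-parity bit so the codeword has an even number of 1s."""
    parity = bits.count("1") % 2          # 1 if the count of 1s is odd
    return bits + str(parity)

def check_even_parity(codeword: str) -> bool:
    """True if the codeword still has an even number of 1s (no error detected)."""
    return codeword.count("1") % 2 == 0

print(add_even_parity("101010"))      # '1010101'
print(check_even_parity("1010101"))   # True  (no error)
print(check_even_parity("1011101"))   # False (single-bit error detected)
print(check_even_parity("1111101"))   # True  (two flipped bits go undetected)
```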

Two-dimensional Parity Check

In a two-dimensional parity check, parity bits are calculated for each row, which is
equivalent to a simple parity check. Parity check bits are also calculated for all columns,
and both are sent along with the data. At the receiving end, these are compared with the
parity bits calculated on the received data.
Checksum

Checksum error detection is a method used to identify errors in transmitted data. The
process involves dividing the data into equally sized segments and using a 1’s
complement to calculate the sum of these segments. The calculated sum is then sent
along with the data to the receiver. At the receiver’s end, the same process is repeated
and if all zeroes are obtained in the sum, it means that the data is correct.
Checksum – Operation at Sender’s Side
 Firstly, the data is divided into k segments each of m bits.
 On the sender’s end, the segments are added using 1’s complement
arithmetic to get the sum. The sum is complemented to get the checksum.
 The checksum segment is sent along with the data segments.
Checksum – Operation at Receiver’s Side
 At the receiver’s end, all received segments are added using 1’s
complement arithmetic to get the sum. The sum is complemented.
 If the result is zero, the received data is accepted; otherwise discarded.
Disadvantages
 If one or more bits of a segment are damaged and the corresponding bit or bits of
opposite value in a second segment are also damaged, the sum remains the same and
the errors go undetected.
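
The sender/receiver procedure described above can be sketched as follows (illustrative
Python; 8-bit segments are an assumption made just for this example):

```python
def ones_complement_sum(segments, bits=8):
    """1's complement addition of fixed-width segments."""
    mask = (1 << bits) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> bits)   # wrap the carry around
    return total

def make_checksum(segments, bits=8):
    """Complement of the 1's complement sum."""
    return ~ones_complement_sum(segments, bits) & ((1 << bits) - 1)

# Sender: data split into 8-bit segments, checksum appended.
data = [0b10011001, 0b11100010, 0b00100100, 0b10000100]
checksum = make_checksum(data)

# Receiver: complement of the sum over data + checksum must be zero.
received = data + [checksum]
ok = make_checksum(received) == 0
print(bin(checksum), ok)   # checksum value, True -> data accepted
```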

Cyclic Redundancy Check (CRC)

 Unlike the checksum scheme, which is based on addition, CRC is based on


binary division.
 In CRC, a sequence of redundant bits, called cyclic redundancy check bits,
are appended to the end of the data unit so that the resulting data unit
becomes exactly divisible by a second, predetermined binary number.
 At the destination, the incoming data unit is divided by the same number. If
at this step there is no remainder, the data unit is assumed to be correct and
is therefore accepted.
 A remainder indicates that the data unit has been damaged in transit and
therefore must be rejected.
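
A compact sketch of CRC generation and verification by modulo-2 (XOR) division is shown
below (illustrative only; the generator 1011, i.e. x^3 + x + 1, is chosen arbitrarily for
the example):

```python
def mod2_divide(dividend: str, generator: str) -> str:
    """Modulo-2 (XOR) long division; returns the remainder as a bit string."""
    data = list(dividend)
    for i in range(len(dividend) - len(generator) + 1):
        if data[i] == "1":
            for j, g in enumerate(generator):
                data[i + j] = str(int(data[i + j]) ^ int(g))
    return "".join(data[-(len(generator) - 1):])

generator = "1011"                        # example generator (x^3 + x + 1)
data = "100100"

# Sender: append (len(generator) - 1) zeros, divide, and append the remainder.
crc = mod2_divide(data + "000", generator)
codeword = data + crc
print(crc, codeword)                      # '101' and '100100101' for this example

# Receiver: divide the received codeword; a zero remainder means no error detected.
print(mod2_divide(codeword, generator) == "000")   # True -> accepted
```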
4. Based on the single-bit parity error detection code, devise a new code to detect and
correct a single-bit error in 4 bytes of data. How many parity bits do you require? You
may assume that the parity bits themselves are error-free.

Solution :- refer to the YouTube video solution; the standard counting argument is
sketched below.
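
A hedged outline of the usual reasoning (the video remains the primary reference):
4 bytes = 32 data bits, and since the parity bits are assumed error-free, they only have to
identify which of the 32 data bits is in error, or indicate that there is no error, i.e. 33
distinct outcomes:

$$2^r \ge 32 + 1 = 33 \;\Rightarrow\; r = \lceil \log_2 33 \rceil = 6$$

So 6 parity bits are enough: number the data bits 1 to 32 and let parity bit p_i cover every
data bit whose index has a 1 in binary position i; the 6-bit syndrome then spells out the
index of the erroneous bit (or 0 if there is no error), which can be flipped to correct it.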
