DCN - Unit 4
Before learning about the difference between transport layer and network layer,
let us first learn about computer networks and the OSI model in brief.
The OSI model stands for Open Systems Interconnection model. The OSI model is
also known as the ISO-OSI model, as it was developed by the ISO (International
Organization for Standardization). It is a conceptual reference model that describes
the entire flow of information from one computer to another. Since the OSI model
has seven layers, it is also known as a 7-layered architecture model.
The basic idea behind the layered architecture is to divide the design into smaller
pieces. To reduce the design complexity, most networks are organized in a series
of layers. The network layer and transport layer are two of the seven layers of the
OSI model.
Now, let us discuss the transport layer briefly so that we can get a better
understanding of the topic, i.e., the difference between the transport layer and the
network layer.
The transport layer is the fourth layer of the OSI model and is responsible for
process-to-process delivery of data. The main aim of the transport layer is to
maintain ordering, so that data is received in the same sequence in which it was
sent by the sender. The transport layer provides two types of services, namely
connection-oriented and connectionless.
Refer to the image below to see the basic transmission of data and the working of
the transport layer.
(Figure: working of the transport layer. Devices: segments, load balancers, etc.)
The network layer is the third layer of the OSI model and provides
communication between hosts of different networks. The network layer divides the
data received from the transport layer into packets. The network layer
provides two ways of communication, namely connection-oriented and
connectionless.
Note:
Logical Addressing: The network layer adds the logical address i.e. IP
address (Internet Protocol address) if the packet crosses the network
boundary. It helps in the proper identification of devices on the network.
Hence, the network layer adds the source and destination address to the
header of the packet.
Routing: Routing simply means determining the best (optimal) path out of
multiple paths from the source to the destination. So the network layer must
choose the best routing path for the data to travel.
If many devices are connected to the same router, then there is a chance of
packet drops, as the router may not be able to handle all the requests. So,
the network layer also controls congestion on the network.
Refer to the image below to see the basic transmission of data and the working of
the network layer.
(Figure: working of the network layer. Devices: routers, brouters, etc.)
Note: The network layer does not guarantee the delivery of packets to the
destination, nor does it provide any reliability guarantee.
Before learning the difference between transport layer and network layer, let us
understand how both these layers are interrelated.
As we know, the layers of the OSI model are interrelated, and the data is
transformed from one form to another as it passes through the seven layers. The
transport layer is located just above the network layer. The network layer, being
the third layer, takes the data from the transport layer (i.e., the fourth layer) and
adds the network layer header (as discussed above) to it. After adding the network
layer header, the data gets forwarded to the data link layer.
Note:- The transport layer, as well as the network layer, provides two types of
services namely - connection-oriented and connection-less.
Having discussed the transport layer, the network layer, and their relationship, let
us now look at some of the differences between the transport layer and the
network layer.
Transport Layer | Network Layer
The transport layer is the fourth layer of the OSI model. | The network layer is the third layer of the OSI model.
The transport layer mainly deals with logical communication between the processes running on different hosts. | The network layer mainly deals with logical communication between the hosts present on the same or different networks.
The transport layer focuses on process-to-process delivery of data. | The network layer provides communication between hosts of different networks.
The transport layer receives the data from the upper layer and converts it into smaller parts known as segments. | The network layer divides the data received from the transport layer into packets.
The transport layer performs port addressing, i.e., the addition of a port number to the header of the data. | The network layer adds the logical address, i.e., the IP address (Internet Protocol address), if the packet crosses the network boundary.
The transport layer maintains the order of data. | The network layer does not focus on maintaining the order of the data packets.
The transport layer deals with process-to-process or port-to-port communication. | The network layer deals with host-to-host communication.
The various protocols used in the transport layer are TCP (Transmission Control Protocol), UDP (User Datagram Protocol), etc. | The various protocols used in the network layer are IPv4, IPv6, ICMP (Internet Control Message Protocol), etc.
The various devices used in the transport layer are segments, load balancers, etc. | The various devices used in the network layer are routers, brouters, etc.
Overview of the Transport Layer in the Internet
Multiplexing –
Gathering data from multiple application processes of the sender, enveloping that
data with a header, and sending them as a whole to the intended receiver is called
multiplexing.
Demultiplexing –
Delivering received segments at the receiver side to the correct app layer
processes is called demultiplexing.
Figure – Abstract view of multiplexing and demultiplexing
Multiplexing and demultiplexing are the services facilitated by the transport layer
of the OSI model.
Now the messages from both the apps will be wrapped up along with appropriate
headers (viz. source IP address, destination IP address, source port number,
destination port number) and sent as a single message to the receiver.
This process is called multiplexing.
At the destination, the received message is unwrapped and the constituent
messages (viz. the messages from the Hike and WhatsApp applications) are sent
to the appropriate application by looking at the destination port number.
This process is called demultiplexing. Similarly, B can also transfer
messages to A.
Figure – Message transfer using the WhatsApp and Hike messaging applications
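As a rough illustration of the idea (this is not a real socket API; every name here is made up for the sketch), demultiplexing by destination port can be modeled as a lookup from port number to the process "inbox" bound to it:

```python
def demultiplex(segments, inboxes):
    """Deliver each (dst_port, payload) segment to the inbox bound to dst_port."""
    for dst_port, payload in segments:
        if dst_port in inboxes:
            inboxes[dst_port].append(payload)
        # segments addressed to unbound ports are silently dropped in this sketch

# Two application processes "bound" to ports 5000 and 6000.
inboxes = {5000: [], 6000: []}

# Multiplexed stream arriving from the network, messages interleaved.
segments = [(5000, "hi from A"), (6000, "hello"), (5000, "more from A")]

demultiplex(segments, inboxes)
print(inboxes[5000])  # ['hi from A', 'more from A']
print(inboxes[6000])  # ['hello']
```

Each application sees only its own messages, in arrival order, even though they shared one channel.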
Along with the data that needs to be sent, the sender uses an algorithm to calculate
the checksum of the data and sends it along. When the receiver gets the data, it
calculates the checksum of the received data using the same algorithm and
compares it with the transmitted checksum. If they both match, it means the
transmission was error-free.
Examples
1) UDP Checksum
UDP is a transport layer protocol that enables applications to send and receive
data, especially when it is time-sensitive. UDP uses a checksum to detect whether
the received data has been altered.
The data being sent is divided into 16-bit chunks. These chunks are then added
together; any carry generated is wrapped around and added back to the sum. Then,
the 1's complement of the sum is taken and placed in the checksum field of the
UDP segment.
Suppose the data we want to send consists of the following three 16-bit words:
0110011001100000
0101010101010101
1000111100001100
Adding the first two words gives:
  0110011001100000
+ 0101010101010101
____________________________
  1011101110110101
Adding the third word to this sum gives:
  1011101110110101
+ 1000111100001100
____________________________
1 0100101011000001
However, there is a carry out, which we need to add back to the final sum:
0100101011000001 + 1 = 0100101011000010
Finally, we take the 1's complement of the final sum, which in this case becomes
the checksum:
1011010100111101
At the receiver side, all the 16-bit data chunks are added again, with any overflow
being wrapped around. The checksum is also added to the final result. The answer
at the receiver’s side should consist of all ones. If a single bit is zero, it means an
error occurred during transmission.
In the example given above, the data would be added at the receiver’s side to get
0100101011000010, considering the data was transferred correctly. Then, the
receiver would add the checksum to it, which was 1011010100111101
0100101011000010
1011010100111101
____________________________
1111111111111111
All ones in the final result indicate that there were no problems. If, however,
during transmission, even a single bit of the data was altered, the sum would be
different; hence, the final result would consist of at least one zero. This way, the
error would be detected.
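The checksum procedure above can be sketched in a few lines of Python (a minimal illustration of the 1's complement arithmetic, not a full UDP implementation; the function names are our own):

```python
def ones_complement_sum(words):
    """Add 16-bit words, wrapping any carry back into the low 16 bits."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # wrap the carry around
    return total

def udp_checksum(words):
    """The checksum is the 1's complement of the wrapped sum."""
    return ~ones_complement_sum(words) & 0xFFFF

# The three 16-bit words from the example above.
words = [0b0110011001100000, 0b0101010101010101, 0b1000111100001100]

checksum = udp_checksum(words)
print(format(checksum, "016b"))  # 1011010100111101

# Receiver side: summing the data words plus the checksum yields all ones.
total = ones_complement_sum(words + [checksum])
print(format(total, "016b"))     # 1111111111111111
```

Flipping any single bit of the data changes the receiver's sum, so the final result would no longer be all ones and the error would be caught.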
2) MD5 Algorithm
MD5 is an algorithm used to compute a 128-bit hash value of the data being sent.
This can also be used to check data integrity, because changing the data even very
slightly completely changes the hash value. For example, hashing the sentence
"The quick brown fox jumped over the lazy dog" gives one value; if the data is
altered, e.g., a period added at the end ("The quick brown fox jumped over the
lazy dog."), the resulting hash is entirely different.
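Using Python's standard hashlib module, we can confirm that the two sentences hash to completely different 128-bit values (the exact digests are not reproduced here; only the fact that they differ matters):

```python
import hashlib

# Hash the sentence with and without the trailing period.
original = b"The quick brown fox jumped over the lazy dog"
altered = b"The quick brown fox jumped over the lazy dog."

h1 = hashlib.md5(original).hexdigest()
h2 = hashlib.md5(altered).hexdigest()

print(h1)
print(h2)
print(h1 != h2)  # True: a one-character change yields a completely different hash
```

Each digest is 32 hex characters (128 bits); a single added character changes roughly half the output bits.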
Transport layer protocols are a central piece of layered architectures; they provide
logical communication between application processes. These processes use this
logical communication to transfer data between the transport layer and the
network layer, and this transfer of data should be reliable and secure. The data is
transferred in the form of packets, but the problem lies in the reliable transfer of
that data.
The problem of transferring data reliably occurs not only at the transport layer but
also at the application layer as well as in the link layer. This problem occurs when
a reliable service runs on top of an unreliable one. For example, TCP
(Transmission Control Protocol) is a reliable data transfer protocol implemented
on top of an unreliable layer, namely the Internet Protocol (IP), an end-to-end
network layer protocol.
Figure: Study of Reliable Data Transfer
In this model, we design the sender and receiver sides of a protocol over a
reliable channel. In reliable data transfer, a layer receives the data from the layer
above, breaks the message into segments, puts a header on each segment, and
transfers them. The layer below receives the segments and makes each one a
packet by adding its own header.
If none of the transferred data bits are corrupted or lost, and all are delivered in
the same sequence in which they were sent to the layer below, we have a reliable
data transfer protocol. This is the service model offered by TCP to the Internet
applications that invoke this transfer of data.
Similarly, over an unreliable channel we design the sending and receiving sides.
On the sending side, the protocol is invoked from the layer above via rdt_send(),
which passes the data that is to be delivered to the application layer at the
receiving side (here rdt_send() is the function for sending data, where rdt stands
for reliable data transfer protocol and _send() denotes the sending side).
On the receiving side, rdt_rcv() (the function for receiving data, where _rcv()
denotes the receiving side) will be called when a packet arrives from the
receiving side of the unreliable channel. When the rdt protocol wants to deliver
data to the application layer, it will do so by calling deliver_data() (where
deliver_data() is a function for delivering data to the upper layer).
In our reliable data transfer protocol, we only consider the case of unidirectional
data transfer, that is, transfer of data from the sending side to the receiving side
(i.e., in one direction only). The case of bidirectional (full-duplex) data transfer,
with data flowing in both directions, is conceptually more difficult. Although we
only consider unidirectional data transfer, it is important to note that the sending
and receiving sides of our protocol will still need to transmit packets in both
directions, as shown in the figure above.
In addition to exchanging packets containing the data to be transferred, the
sending and receiving sides of rdt also need to exchange control packets in both
directions (i.e., back and forth). Both sides of rdt send packets to the other side by
a call to udt_send() (the function used for sending data to the other side, where
udt stands for unreliable data transfer).
The speed-of-light round-trip propagation delay between these two end systems,
RTT, is approximately 30 milliseconds.
"Go-Back-N (GBN)" Figure 1(a) illustrates that with our stop-and-wait protocol,
if the sender begins sending the packet at t = 0, then at t = L/R = 8
microseconds, the last bit enters the channel at the sender side. The packet then
makes its 15-msec cross-country journey, with the last bit of the packet
emerging at the receiver at t = RTT/2 + L/R = 15.008 msec. Assuming for
simplicity that ACK packets are very small (so that we can ignore their
transmission time) and that the receiver can send an ACK as soon as the last bit
of a data packet is received, the ACK emerges back at the sender at t = RTT +
L/R = 30.008 msec. At this point, the sender can now transmit the next message.
Therefore, in 30.008 msec, the sender was sending for only 0.008 msec. If we
define the utilization of the sender (or the channel) as the fraction of time the
sender is actually busy sending bits into the channel, the analysis in "Go-Back-N
(GBN)" Figure 1(a) illustrates that the stop-and-wait protocol has a rather dismal
sender utilization, Usender, of
Usender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027
That is, the sender was busy only 2.7 hundredths of one percent of the time.
Viewed another way, the sender was able to send only 1,000 bytes in 30.008
milliseconds, an effective throughput of only 267 kbps - even though a 1 Gbps
link was available. Imagine the unhappy network manager who just paid a fortune
for a gigabit capacity link but manages to get a throughput of only 267 kilobits
per second. This is a graphic example of how network protocols can limit the
capabilities provided by the underlying
network hardware. Also, we have neglected lower-layer protocol-processing
times at the sender and receiver, as well as the processing and queuing delays
that would take place at any intermediate routers between the sender and
receiver. Including these effects would serve only to further increase the delay
and further accentuate the poor performance.
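The utilization and throughput figures above can be reproduced with a few lines of arithmetic, using the values from the example (a 1,000-byte packet, a 1 Gbps link, and a 30 ms RTT):

```python
# Stop-and-wait sender utilization for the example above.
L = 1000 * 8          # packet size in bits (1,000 bytes)
R = 1e9               # link rate in bits per second (1 Gbps)
RTT = 0.030           # round-trip time in seconds (30 ms)

transmit_time = L / R                       # 8 microseconds to push the packet out
utilization = transmit_time / (RTT + transmit_time)
throughput = L / (RTT + transmit_time)      # effective bits delivered per second

print(f"U_sender   = {utilization:.5f}")            # 0.00027
print(f"throughput = {throughput / 1e3:.0f} kbps")  # 267 kbps
```

One packet per round trip caps the sender at roughly 267 kbps regardless of the gigabit link, which is exactly the motivation for pipelining.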
You should implement the Pipelined Reliable Data Transfer Protocols by using:
1. Go-Back-N and
2. Selective repeat.
GO – BACK N
Go-Back-N protocol, also called Go-Back-N Automatic Repeat reQuest, is a data
link layer protocol that uses a sliding window method for reliable and sequential
delivery of data frames. It is a case of the sliding window protocol with a sending
window size of N and a receiving window size of 1.
Working Principle
Go-Back-N ARQ provides for sending multiple frames before receiving the
acknowledgment for the first frame. The frames are sequentially numbered, and a
finite number of frames can be outstanding. The maximum number of frames that
can be sent depends upon the size of the sending window. If the acknowledgment
of a frame is not received within an agreed-upon time period, all frames starting
from that frame are retransmitted.
The size of the sending window determines the sequence numbers of the
outbound frames. If the sequence number of the frames is an n-bit field, then the
range of sequence numbers that can be assigned is 0 to 2^n − 1. Consequently, the
size of the sending window is 2^n − 1. Thus, in order to accommodate a sending
window size of 2^n − 1, an n-bit sequence number is chosen.
The sequence numbers are numbered modulo 2^n. For example, with a 2-bit
sequence number field, the binary sequence 00, 01, 10, 11 is generated, and the
sequence numbers run 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, and so on.
begin
    frame s;                       // s denotes the frame to be sent
    frame t;                       // t is a temporary frame
    S_window = power(2,m) – 1;     // maximum window size (m = sequence number bits)
    SeqFirst = 0;                  // sequence number of first frame in window
    SeqN = 0;                      // sequence number of next frame to send
    while (true)                   // check repeatedly
    do
        Wait_For_Event();          // wait for availability of packet
        if ( Event(Request_For_Transfer) ) then
            if ( SeqN – SeqFirst >= S_window ) then
                doNothing();       // window is full, do not send
            else
                Get_Data_From_Network_Layer();
                s = Make_Frame();
                s.seq = SeqN;
                Store_Copy_Frame(s);
                Send_Frame(s);
                Start_Timer(s);
                SeqN = SeqN + 1;
            end if;
        end if;
        if ( Event(Frame_Arrival) ) then
            r = Receive_Acknowledgement();
            AckNo = r.ackNo;
            if ( AckNo > SeqFirst && AckNo < SeqN ) then
                while ( SeqFirst <= AckNo )    // discard copies of acknowledged frames
                    Remove_Copy_Frame(s.seq(SeqFirst));
                    SeqFirst = SeqFirst + 1;
                end while
                Stop_Timer(s);
            end if
        end if
        // Resend all outstanding frames if an acknowledgement hasn't been received
        if ( Event(Time_Out) ) then
            TempSeq = SeqFirst;
            while ( TempSeq < SeqN )
                t = Retrieve_Copy_Frame(s.seq(TempSeq));
                Send_Frame(t);
                Start_Timer(t);
                TempSeq = TempSeq + 1;
            end while
        end if
    end while
end
begin
    frame f;
    RSeqNo = 0;                    // sequence number of next expected frame
    while (true)                   // check repeatedly
    do
        Wait_For_Event();          // wait for arrival of frame
        if ( Event(Frame_Arrival) ) then
            f = Receive_Frame_From_Physical_Layer();
            if ( Corrupted(f) ) then
                doNothing();       // discard the corrupted frame
            else if ( f.SeqNo == RSeqNo ) then
                Extract_Data();
                Deliver_Data_To_Network_Layer();
                RSeqNo = RSeqNo + 1;
                Send_ACK(RSeqNo);  // cumulative acknowledgement
            end if
        end if
    end while
end
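The sender and receiver pseudocode above can be condensed into a small simulation. This is a simplified sketch (timers and ACK traffic are abstracted into a single retransmit-from-base step, and the function and parameter names are our own), but it shows the defining Go-Back-N behaviour: a single lost frame forces retransmission of every frame from that point onward.

```python
def gbn_simulate(num_frames, window, lost_frames):
    """Simulate Go-Back-N over a channel that drops each listed frame once.
    Returns the frames delivered, in order, at the receiver."""
    lost = set(lost_frames)   # frames to drop on their next transmission
    base = 0                  # SeqFirst: oldest unacknowledged frame
    expected = 0              # receiver's RSeqNo
    delivered = []
    while base < num_frames:
        # Sender transmits every frame currently inside the window.
        for seq in range(base, min(base + window, num_frames)):
            if seq in lost:
                lost.discard(seq)      # lose it this one time
                continue
            # Receiver accepts only the in-order frame and discards the rest,
            # exactly like the receiving-window-size-1 rule above.
            if seq == expected:
                delivered.append(seq)
                expected += 1
        # Cumulative ACK slides the window; after a loss, the timeout
        # resends everything from base (hence "go back N").
        base = expected
    return delivered

print(gbn_simulate(num_frames=8, window=3, lost_frames=[2, 5]))
# [0, 1, 2, 3, 4, 5, 6, 7]
```

Even though frames 3, 6, and 7 arrived intact the first time, they are discarded and resent because they followed a lost frame; Selective Repeat, discussed next, avoids exactly this waste.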
SELECTIVE REPEAT
The selective repeat protocol allows the receiver to accept and buffer the
frames following a damaged or lost one. Selective Repeat attempts to
retransmit only those packets that are actually lost (due to errors):
Receiver must be able to accept packets out of order.
Since the receiver must release packets to the higher layer in order, the receiver
must be able to buffer some packets.
Retransmission requests:
Implicit – The receiver acknowledges every good packet; packets that are
not ACKed before a time-out are assumed lost or in error. Notice that this
approach must be used to be sure that every packet is eventually received.
Explicit – An explicit NAK (selective reject) can request retransmission of
just one packet. This approach can expedite the retransmission but is not
strictly needed.
One or both approaches are used in practice.
Selective Repeat Protocol (SRP): This protocol (SRP) is mostly identical to the
GBN protocol, except that buffers are used and the receiver and the sender
each maintain a window of the same size. SRP works better when the link is very
unreliable, because in this case retransmission tends to happen more
frequently, and selectively retransmitting frames is more efficient than
retransmitting all of them. SRP also requires a full-duplex link, since backward
acknowledgements are in progress at the same time.
Sender's window (Ws) = Receiver's window (Wr).
The window size should be less than or equal to half the sequence number space
in the SR protocol. This is to avoid packets being recognized incorrectly: if the
size of the window is greater than half the sequence number space, then when
an ACK is lost, the sender may send new packets that the receiver believes
are retransmissions.
The sender can transmit new packets as long as their sequence numbers are
within Ws of all unACKed packets.
The sender retransmits unACKed packets after a timeout, or upon a NAK if
NAKs are employed.
The receiver ACKs all correct packets.
The receiver stores correct packets until they can be delivered in order to the
higher layer.
In Selective Repeat ARQ, the size of the sender and receiver windows must
be at most one-half of 2^m.
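The receiver-side buffering described above can be sketched as follows (an illustrative model only; corruption, ACKs, and window bookkeeping are omitted, and all names are our own):

```python
def sr_receive(arrivals):
    """Selective Repeat receiver sketch: accept every correct frame,
    buffer out-of-order ones, and release frames to the higher layer in order."""
    buffer = {}          # out-of-order frames held back
    next_expected = 0    # lowest sequence number not yet delivered
    delivered = []
    for seq in arrivals:
        buffer[seq] = f"frame-{seq}"   # accept and buffer the frame (and ACK it)
        # Slide forward, delivering every consecutive buffered frame.
        while next_expected in buffer:
            delivered.append(buffer.pop(next_expected))
            next_expected += 1
    return delivered

# Frame 1 is delayed: frames 2 and 3 arrive first and are buffered,
# then all three are released in order once frame 1 finally shows up.
print(sr_receive([0, 2, 3, 1, 4]))
# ['frame-0', 'frame-1', 'frame-2', 'frame-3', 'frame-4']
```

Unlike the Go-Back-N receiver, frames 2 and 3 are not discarded while waiting for frame 1, so only frame 1 would need to be retransmitted.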
Handshake refers to the process of establishing a connection between the client
and the server. A handshake is simply defined as the process of establishing a
communication link. Before it starts sending data, TCP needs a three-way
handshake. Reliable communication in TCP is termed PAR (Positive
Acknowledgement with Re-transmission). When a sender sends data to the
receiver, it requires a positive acknowledgement from the receiver confirming the
arrival of the data. If the acknowledgement does not reach the sender, it needs to
resend that data. The positive acknowledgement from the receiver establishes a
successful connection.
Here, the client is the sender and the server is the receiver. The above diagram
shows the 3 steps for a successful connection. A 3-way handshake is commonly
known as SYN-SYN-ACK and requires both the client and server responses to
exchange the data. SYN means synchronize sequence number and ACK
means acknowledgment. Each step is a type of handshake between the sender
and the receiver.
Step 1: SYN
Step 2: SYN-ACK
Step 3: ACK
After these three steps, the client and server are ready for the data communication
process. TCP connection and termination are full-duplex, which means that the
data can travel in both the directions simultaneously.
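A minimal sketch of the handshake in practice, using Python's standard socket module on the local machine: the operating system carries out the SYN, SYN-ACK, ACK exchange inside connect() and accept(), so by the time connect() returns, the connection is established and ready for full-duplex data transfer.

```python
import socket
import threading

# Server side: listen on an OS-assigned port on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

received = []

def serve():
    conn, _ = server.accept()        # completes the server side of the handshake
    received.append(conn.recv(1024))
    conn.close()

t = threading.Thread(target=serve)
t.start()

# Client side: connect() performs SYN -> SYN-ACK -> ACK under the hood.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello after handshake")
client.close()

t.join()
server.close()
print(received[0])  # b'hello after handshake'
```

No application code sends the SYN or ACK segments explicitly; the three-way handshake is entirely the kernel's job.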
The header of a TCP segment can range from 20 to 60 bytes, of which up to 40
bytes are for options. If there are no options, the header is 20 bytes; otherwise it
can be at most 60 bytes.
Header fields:
Acknowledgement Number –
A 32-bit field that holds the acknowledgement number, i.e, the byte number
that the receiver expects to receive next. It is an acknowledgement for the
previous bytes being received successfully.
Control flags –
These are six 1-bit flags that control connection establishment,
connection termination, connection abortion, flow control, mode of transfer,
etc. Their functions are:
URG: Urgent pointer is valid
ACK: Acknowledgement number is valid (used in case of
cumulative acknowledgement)
PSH: Request for push
RST: Reset the connection
SYN: Synchronize sequence numbers
FIN: Terminate the connection
Window size –
This field tells the window size of the sending TCP in bytes.
Checksum –
This field holds the checksum for error control. It is mandatory in TCP as
opposed to UDP.
Urgent pointer –
This field (valid only if the URG control flag is set) points to urgent data
that needs to reach the receiving process at the earliest. The value of this
field is added to the sequence number to get the byte number of the last
urgent byte.
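To make the header layout concrete, here is a sketch that packs and unpacks a minimal 20-byte TCP header with Python's struct module (the field values are arbitrary, and the checksum is left at zero, which a real implementation must compute over the segment and pseudo-header):

```python
import struct

# Flag bits within the 16-bit "data offset + reserved + flags" field.
FLAG_SYN = 0x02
FLAG_ACK = 0x10

offset_and_flags = (5 << 12) | FLAG_SYN | FLAG_ACK  # offset 5 => 5*4 = 20-byte header

header = struct.pack(
    "!HHLLHHHH",   # network byte order: 2+2+4+4+2+2+2+2 = 20 bytes
    12345,         # source port
    80,            # destination port
    1000,          # sequence number
    2000,          # acknowledgement number
    offset_and_flags,
    65535,         # window size in bytes
    0,             # checksum (left at zero in this sketch)
    0,             # urgent pointer (only meaningful when URG is set)
)

src, dst, seq, ack, off_flags, window, checksum, urg = struct.unpack("!HHLLHHHH", header)
header_len = (off_flags >> 12) * 4

print(len(header), header_len)                                  # 20 20
print(bool(off_flags & FLAG_SYN), bool(off_flags & FLAG_ACK))   # True True
```

This is the 20-byte minimum header; options, when present, extend it up to the 60-byte maximum and raise the data offset accordingly.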
Round Trip Time
RTT (Round Trip Time), also called round-trip delay, is a crucial tool in
determining the health of a network. It is the time between a request for data
and the display of that data, typically measured in milliseconds.
RTT can be analyzed and determined by pinging a certain address. It refers to
the time taken by a network request to reach a destination and to return to the
original source. In this scenario, the source is the computer and the
destination is a system that captures the arriving signal and sends it back.
Advantages of RTT
Calculation of RTT is advantageous because:
1. It allows users and operators to identify how long a signal will take to
complete the transmission.
2. It also determines how fast a network can work and the reliability of the
network.
Example: Let us assume there are two users, one of which wants to contact the
other one. One of them is located in California while the other one is situated in
Germany. When the one in California makes the request, the network traffic is
transferred across many routers before reaching the server located in Germany.
Once the request reverts back to California, a rough estimation of the time taken
for this transmission could be made. This time taken by the transmitted request
is referred to as RTT. The Round Trip Time is a mere estimate: the path
between the two locations can change, and network congestion can come into
play, affecting the total period of transmission.
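A rough way to estimate RTT programmatically, using Python's standard socket module, is to time a TCP connection setup, since connect() takes one round trip of the handshake. The demo below uses a local listener, so the measured value is near zero; against the server in Germany from the example it would include all the propagation and queuing delays described above.

```python
import socket
import time

def measure_rtt(host, port):
    """Rough RTT estimate: time one TCP connection setup to (host, port).
    Returns the elapsed time in seconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass   # connect() returns after the SYN / SYN-ACK round trip
    return time.perf_counter() - start

# Demo against a local listener on an OS-assigned port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

rtt = measure_rtt("127.0.0.1", port)
listener.close()
print(f"RTT = {rtt * 1000:.3f} ms")   # near zero on the loopback interface
```

Tools like ping do the same thing with ICMP echo packets instead of a TCP handshake; either way, the number obtained is an estimate for that moment and that path only.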
(RDT) 1.0 works on a perfectly reliable channel, that is, it assumes that the
underlying channel has:
1. No bit errors and
2. No loss of packets
This transfer of data is shown by using FSM (finite state machine). In RDT
1.0, there is only one state each for sender and receiver.
Sender Side: When the sender sends data from the application
layer, RDT simply accepts data from the upper layer via
the rdt_send(data) event. Then it puts the data into a packet (via
the make_pkt(packet,data) event) and sends the packet into the channel using
the udt_send(packet) event.
RDT1.0: Sender side FSM
Receiving Side: On receiving data from the channel, RDT simply accepts data
via the rdt_rcv(data) event. Then it extracts the data from the packet (via the
extract(packet, data)) and sends the data to the application layer using
the deliver_data(data) event.
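The two one-state FSMs can be sketched in a few lines of Python (an illustrative model of the events above, not a real protocol stack; the channel is just a list, reflecting RDT 1.0's perfectly reliable assumption):

```python
channel = []     # the perfectly reliable channel: no bit errors, no loss
app_layer = []   # data handed up to the receiving application

def make_pkt(data):
    return {"payload": data}

def udt_send(packet):
    channel.append(packet)            # channel delivers everything, in order

def rdt_send(data):                   # sender-side event
    udt_send(make_pkt(data))

def deliver_data(data):
    app_layer.append(data)

def rdt_rcv(packet):                  # receiver-side event
    deliver_data(packet["payload"])   # extract(packet, data) + deliver_data(data)

for msg in ["seg-0", "seg-1", "seg-2"]:
    rdt_send(msg)
while channel:
    rdt_rcv(channel.pop(0))

print(app_layer)  # ['seg-0', 'seg-1', 'seg-2']
```

Because the channel is perfect, neither side needs sequence numbers, acknowledgements, or timers; those mechanisms only appear in the later RDT versions and in GBN and Selective Repeat.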
The other side performs a CONNECT primitive, specifying the IP address and
port to which it wants to connect, the maximum TCP segment size it is willing to
accept, and optionally some user data (for example, a password).
The CONNECT primitive transmits a TCP segment with the SYN bit on and the
ACK bit off and waits for a response.
The sequence of TCP segments sent in the typical case, as shown in the figure
below −
When the segment sent by Host-1 reaches the destination, i.e., Host-2, the
receiving server checks to see if there is a process that has done a LISTEN on the
port given in the destination port field. If not, it sends a response with the RST
bit on to refuse the connection. Otherwise, it directs the TCP segment to the
listening process, which can accept or reject the connection (for example, if it
does not like the look of the client).
Call Collision
If two hosts try to establish a connection simultaneously between the same two
sockets, then the sequence of events is as demonstrated in the figure. Under such
circumstances, only one connection is established; the two attempts cannot result
in separate connections, because connections are identified by their endpoints.
Suppose the first setup results in a connection identified by (x, y), and the second
setup does so as well. In that case, only one table entry will be made, i.e., for
(x, y). For the initial sequence number, a clock-based scheme is used, with a
clock pulse coming every 4 microseconds. For additional safety, when a host
crashes, it may not reboot for the maximum packet lifetime. This is to make sure
that no packets from previous connections are still roaming around.
TCP congestion control is a method used by the TCP protocol to manage data
flow over a network and prevent congestion. TCP uses a congestion window
and congestion policy that avoids congestion. Previously, we assumed that only
the receiver could dictate the sender’s window size. We ignored another entity
here, the network. If the network cannot deliver the data as fast as it is created
by the sender, it must tell the sender to slow down. In other words, in addition
to the receiver, the network is a second entity that determines the size of the
sender's window.
Slow Start Phase
Exponential increment: In this phase, after every RTT the congestion window
size increases exponentially.
Example:- If the initial congestion window size is 1 segment, and the first
segment is successfully acknowledged, the congestion window size becomes 2
segments. If the next transmission is also acknowledged, the congestion window
size doubles to 4 segments. This exponential growth continues as long as all
segments are successfully acknowledged.
Initially cwnd = 1
After 1 RTT, cwnd = 2^(1) = 2
2 RTT, cwnd = 2^(2) = 4
3 RTT, cwnd = 2^(3) = 8
Congestion Avoidance Phase
Additive increment: This phase starts after the threshold value, also denoted as
ssthresh, is reached. The size of cwnd (congestion window) increases additively:
after each RTT, cwnd = cwnd + 1.
Example:- if the congestion window size is 20 segments and all 20 segments
are successfully acknowledged within an RTT, the congestion window size
would be increased to 21 segments in the next RTT. If all 21 segments are again
successfully acknowledged, the congestion window size would be increased to
22 segments, and so on.
Initially cwnd = i
After 1 RTT, cwnd = i+1
2 RTT, cwnd = i+2
3 RTT, cwnd = i+3
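Both phases can be traced with a short sketch (a simplified model that assumes no loss and updates cwnd once per RTT; real TCP updates cwnd per ACK received):

```python
def tcp_cwnd_growth(rounds, ssthresh, initial=1):
    """Trace cwnd (in segments) per RTT: exponential growth below ssthresh
    (slow start), additive growth at or above it (congestion avoidance)."""
    cwnd = initial
    trace = [cwnd]
    for _ in range(rounds):
        if cwnd < ssthresh:
            cwnd *= 2        # slow start: double every RTT
        else:
            cwnd += 1        # congestion avoidance: +1 segment every RTT
        trace.append(cwnd)
    return trace

print(tcp_cwnd_growth(rounds=7, ssthresh=8))
# [1, 2, 4, 8, 9, 10, 11, 12]
```

The window doubles until it reaches the threshold of 8 segments, then grows by one segment per RTT, matching the two phases described above.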
There are some approaches for congestion control over a network which are
usually applied on different time scales to either prevent congestion or react to it
once it has occurred.
Step 1 − The basic way to avoid congestion is to build a network that is well
matched to the traffic that it carries. If more traffic is directed at a link than its
bandwidth can handle, congestion will definitely occur.
Step 2 − Sometimes resources such as routers and links can be added dynamically
when there is serious congestion. This is called provisioning, and it happens on a
timescale of months, driven by long-term trends.
Step 3 − Some local radio stations have helicopters flying around their cities
to report on road congestion, making it possible for their mobile listeners to route
their packets (cars) around hotspots. This is called traffic-aware routing.
Step 4 − Sometimes it is not possible to increase capacity. The only way to reduce
the congestion is then to decrease the load. In a virtual-circuit network, new
connections can be refused if they would cause the network to become congested.
This is called admission control.
Step 5 − Routers can monitor the average load, queueing delay, or packet loss. In
all these cases, a rising number indicates growing congestion. When the network
is forced to discard packets that it cannot deliver, this is called load shedding. A
good technique for choosing which packets to discard can help to prevent
congestion collapse.