
Computer Networks (CS-577)

Mr. Ali bin Tahir Email: ali@biit.edu.pk

Week # 11 – Lecture # 21 – Network layer (Layer 3)

Recommended Reading:

Book: Computer Networking: A Top-Down Approach, 6th edition. Authors: James F. Kurose, Keith W. Ross

The Network Layer

We are studying the topic of routing in computer networks. In the last lecture, we learned about the distance vector routing algorithm. In this lecture, we will study the link-state routing algorithm, another well-known routing algorithm that takes a completely different approach from distance vector. The outline for this lecture is as follows:

• Categories of Routing Algorithms


• Distance Vector Routing Algorithm
• Link-State Routing Algorithm
• Distance Vector vs. Link-State Routing Algorithm

Classification of Routing Algorithms


We have learned so far that the job of a routing algorithm is to compute the least-cost path. The next question is how this objective is achieved. Different routing algorithms compute the least-cost path differently and can be classified in different ways. Broadly, one way to classify routing algorithms is according to whether they are global (centralized) or decentralized.

Global routing algorithm


A global routing algorithm computes the least-cost path between a source and
destination using complete, global knowledge about the network. That is, the algorithm
takes the connectivity between all nodes and all link costs as inputs. This then requires
that the algorithm somehow obtain this information before actually performing the
calculation. In practice, algorithms with global state information are often referred to
as link-state (LS) algorithms, since the algorithm must be aware of the cost of each
link in the network. We'll study LS algorithms in the following sections.

Decentralized routing algorithm


In a decentralized routing algorithm, the calculation of the least-cost path is carried out
in an iterative, distributed manner. No node has complete information about the costs
of all network links. Instead, each node begins with only the knowledge of the costs of
its own directly attached links. Then, through an iterative process of calculation and
exchange of information with its neighboring nodes (that is, nodes that are at the other
end of links to which it itself is attached), a node gradually calculates the least-cost
path to a destination or set of destinations. The decentralized routing algorithm we’ll
study is called a distance-vector (DV) algorithm, because each node maintains a
vector of estimates of the costs (distances) to all other nodes in the network.

The distance vector (DV) routing algorithm


The distance vector (DV) algorithm is an iterative and distributed routing algorithm. It
is distributed in that each node receives some information from one or more of its
directly attached neighbors, performs a calculation, and then distributes the results of
its calculation back to its neighbors. It is iterative in that this process continues until no more information is exchanged between neighbors. We will explain these points through an example, but before that, it is important to introduce the Bellman-Ford equation, which is used to compute the least-cost path in the distance vector routing algorithm.

Bellman-Ford Equation
Let us say we want to compute the least-cost path from a source x to a destination y. The Bellman-Ford equation states:

dx(y) = min over all neighbors v of x { c(x,v) + dv(y) }

• Notice that there are two parts to this equation.
• The first part is c(x,v), the cost to reach neighbor v from source x.


• The second part is dv(y), the cost of the least-cost path from neighbor v to destination y.
• By adding the first and second parts (the cost of reaching the neighbor plus the cost of reaching the destination from that neighbor), we get the total cost from source x to destination y via neighbor v.
• Since a source may have more than one neighbor, this computation is performed for every neighbor.
• We get a total cost via each neighbor, and finally the minimum-cost neighbor is selected as the next hop in order to reach destination y.

Let us take an example to explain Bellman-Ford equation.

Example

Let us take u as the source and z as the destination in Figure 4.27. Node u has three neighbors (v, w, and x).

Bellman-Ford equation says:

du(z) = min { c(u,v) + dv(z),
              c(u,x) + dx(z),
              c(u,w) + dw(z) }

It can be noticed from figure 4.27 that

• c(u,v) = 2, c(u,x) = 1, c(u,w) = 5 (costs from source u to its neighbors)

• dv(z) = 5, dx(z) = 3, dw(z) = 3 (least-cost path from each neighbor to destination z)

By putting values in Bellman-Ford equation:

du(z) = min { 2 + 5,
              1 + 3,
              5 + 3 }


du(z) = min {7, 4, 8 }

du(z) = 4

By using the Bellman-Ford equation, source node u knows that destination z is reachable via next hop x, and that the total cost to reach z is 4. This information is used to add an entry to the forwarding table at node u: for the destination IP address of z, the outgoing interface is the one that connects node u to node x.
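To make this calculation concrete, below is a minimal Python sketch of the same computation, using the link costs and neighbor estimates from the example above (the variable names are chosen only for this illustration).

# Cost from source u to each directly attached neighbor (Figure 4.27 example).
c_u = {"v": 2, "x": 1, "w": 5}

# Each neighbor's least-cost estimate to destination z.
d_z = {"v": 5, "x": 3, "w": 3}

# du(z) = min over neighbors n of { c(u,n) + dn(z) }
cost_via = {n: c_u[n] + d_z[n] for n in c_u}        # {'v': 7, 'x': 4, 'w': 8}
next_hop = min(cost_via, key=cost_via.get)          # 'x'
print(f"du(z) = {cost_via[next_hop]} via next hop {next_hop}")   # du(z) = 4 via next hop x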

The distance vector (DV) routing algorithm


We have just seen that the Bellman-Ford equation is used to compute the least-cost path at a source node. One input to the Bellman-Ford equation is the cost to reach the destination via a neighbor, which means this information must be shared among neighbors in order to calculate least-cost paths. This information sharing is part of the distance vector routing algorithm, which we discuss next.
The basic idea is as follows. Each node x begins with Dx(y), an estimate of the cost of
the least-cost path from itself to node y, for all nodes in N. Let Dx = [Dx(y): y in N] be
node x’s distance vector, which is the vector of cost estimates from x to all other nodes,
y, in N. With the DV algorithm, each node x maintains the following routing information:
• For each neighbor v, the cost c(x,v) from x to directly attached neighbor, v
• Node x’s distance vector, that is, Dx = [Dx(y): y in N], containing x’s estimate of
its cost to all destinations, y, in N
• The distance vectors of each of its neighbors, that is, Dv = [Dv(y): y in N] for
each neighbor v of x
In the distributed, asynchronous algorithm, from time to time, each node sends a copy of its distance vector to each of its neighbors. When a node x receives a new distance vector from any of its neighbors v, it saves v's distance vector, and then uses the Bellman-Ford equation to update its own distance vector as follows:

Dx(y) = min over neighbors v { c(x,v) + Dv(y) }   for each node y in N

If node x’s distance vector has changed as a result of this update step, node x will then
send its updated distance vector to each of its neighbors, which can in turn update
their own distance vectors. It is important to remember that distance vector information
is shared only with neighbors (directly connected nodes) in this algorithm.
Furthermore, a node x updates its distance-vector estimate when it either sees a cost


change in one of its directly attached links or receives a distance vector update from
some neighbor.
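The update step just described can be sketched as a small Python function. This is a simplified illustration rather than a full protocol implementation; the function name dv_update and its argument layout are invented for this sketch.

INF = float("inf")

def dv_update(link_cost, neighbor_dvs, destinations):
    # link_cost:    {v: c(x, v)} for x's directly attached neighbors v
    # neighbor_dvs: {v: {y: Dv(y)}} most recent vector received from each neighbor
    # destinations: all other nodes y in the network
    # Returns {y: Dx(y)} where Dx(y) = min over v of c(x,v) + Dv(y).
    new_dv = {}
    for y in destinations:
        candidates = [link_cost[v] + neighbor_dvs.get(v, {}).get(y, INF)
                      for v in link_cost]
        if y in link_cost:
            # a directly attached destination is always reachable over its own link
            candidates.append(link_cost[y])
        new_dv[y] = min(candidates)
    return new_dv

If the returned vector differs from the node's previous vector, the node sends the new vector to each of its neighbors, which is what drives the next iteration.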
Example Scenario

The cost is 1 for each edge, i.e., the least-cost path is the shortest path (the one with the fewest hops). The distance vector routing algorithm runs on each node in this network, and the resulting routing information is used to add forwarding table entries at each node.
Distance vector routing algorithm at node A:

Iteration 1: Cost to reach neighbors is known

Initial Forwarding table (routing table) at node A:

Destination Cost Next-hop


B 1 B
C 1 C
D ∞ -
E 1 E
F 1 F
G ∞ -

Iteration 2: DV update received from neighbors

Final Forwarding table (routing table) at node A:


Destination Cost Next hop


B 1 B
C 1 C
D 2 C
E 1 E
F 1 F
G 2 F

Similarly, forwarding tables can be shown for the other nodes (B, C, D, E, F, and G) in this network. Let us take another example by considering the forwarding table at node G.

Distance vector routing algorithm at node G:

Iteration 1: Cost to reach neighbors is known

Initial Forwarding table (routing table) at node G:

Destination Cost Next hop


A ∞ -
B ∞ -
C ∞ -
D 1 D
E ∞ -
F 1 F

Iteration 2: DV update received from neighbors

Forwarding table (routing table) at node G:

Destination Cost Next hop


A 2 F
B ∞ -
C 2 D
D 1 D
E ∞ -
F 1 F

Iteration 3: DV update received from neighbors

Final forwarding table (routing table) at node G:

Destination Cost Next hop


A 2 F
B 3 F
C 2 D
D 1 D
E 3 F
F 1 F

Notice that the process of receiving updated distance vectors from neighbors, recomputing routing table entries, and informing neighbors of changed costs of the least-cost path to a destination continues until no update messages are sent. At this point, since no update messages are sent, no further routing table calculations will occur and the algorithm enters a rest state, waiting until a link cost changes.
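The following Python sketch simulates these exchange-and-recompute rounds until convergence. Because the original figure is not reproduced here, the unit-cost topology below is an assumption chosen to be consistent with the forwarding tables shown above.

INF = float("inf")

# Assumed unit-cost topology consistent with the tables above (hypothetical edges).
edges = [("A", "B"), ("A", "C"), ("A", "E"), ("A", "F"), ("B", "C"),
         ("C", "D"), ("D", "G"), ("F", "G")]
nodes = sorted({n for e in edges for n in e})
cost = {n: {} for n in nodes}
for a, b in edges:
    cost[a][b] = cost[b][a] = 1

# Initially each node knows only the costs of its own attached links.
dv = {x: {y: (0 if y == x else cost[x].get(y, INF)) for y in nodes} for x in nodes}

changed = True
while changed:                                    # repeat rounds until nothing changes
    changed = False
    snapshot = {x: {v: dict(dv[v]) for v in cost[x]} for x in nodes}   # exchange DVs
    for x in nodes:
        for y in nodes:
            if y == x:
                continue
            best = min(cost[x][v] + snapshot[x][v][y] for v in cost[x])
            if best < dv[x][y]:
                dv[x][y] = best
                changed = True

print(dv["G"])   # expected: cost 2 to A and C, 3 to B and E, 1 to D and F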

Handling link failure in distance vector


Routing protocols react dynamically to topological changes in the network. To understand this concept, suppose the link between A and B fails in the given example. The forwarding table entry at node G to reach B via F (path G-F-A-B) is now no longer valid. We can see that this path should be replaced by an alternate path via D (path G-D-C-B). It means the forwarding table entry for destination B at node G should be replaced as follows:


Destination Cost Next hop


B 3 D

But how long will node G take to detect this failure and update its forwarding table accordingly? This process takes some time; the steps are explained below (a small sketch follows the list).
• First of all, node A will detect the (A-B) link failure and set the cost to ∞ for destination B (before the link failure, the cost from A to B was 1).
• A will send this DV update to its neighbors, so F will receive this update and also set the cost to ∞ for destination B.
• Similarly, in the next iteration, G will get this DV update from F and set the cost to ∞ for destination B.
• Later, G will get a DV update from D saying that B is reachable at a cost of 2.
• Using the Bellman-Ford equation, G will recompute its cost towards B, which is (1 + 2) = 3.
• G will update its forwarding table entry for destination B (i.e., cost 3, next hop D).
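The last two steps can be sketched directly from the values in this walkthrough: after the failure propagates, F advertises an infinite cost to B while D advertises a cost of 2.

INF = float("inf")

cost_G = {"F": 1, "D": 1}                         # G's directly attached link costs
dv_of = {"F": {"B": INF}, "D": {"B": 2}}          # neighbors' advertised costs to B

candidates = {v: cost_G[v] + dv_of[v]["B"] for v in cost_G}   # {'F': inf, 'D': 3}
next_hop = min(candidates, key=candidates.get)
print(f"G reaches B at cost {candidates[next_hop]} via {next_hop}")   # cost 3 via D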

Count to Infinity Problem


For a routing protocol to work properly, if the cost of a link changes, every other router should become aware of it immediately, but in distance vector routing this takes some time. It may result in route instability, and this problem is referred to as count to infinity. Count to infinity simply means that the distance vector algorithm does not enter the rest state even after many iterations. An example explaining the count-to-infinity problem is given in the book (Figure 4.31) and is left as an exercise.

The distance vector algorithm relies on its neighbors' information in order to reach a destination: first a neighbor learns how to reach a destination, and then that information is shared with others. This step-by-step (iterative) learning takes time and results in slow route

propagation, which causes problems such as count to infinity. Next, we will look at another routing algorithm (link-state routing) in which routing messages propagate quickly.

The Link-State Routing Algorithm

Recall that the link-state algorithm is an example of a global routing algorithm. This means that the network topology and all link/edge costs are known, that is, available as input to the LS algorithm. The link-state algorithm executes on each node, and every node knows the cost of every edge in the network. On a node, these link costs are provided as input to the route calculation algorithm, whose job is then to compute the least-cost path from that source node to every destination in the network. The algorithm used for this route calculation is Dijkstra's algorithm, named after its inventor; it is indeed a link-state routing algorithm.

Link-State Packet Broadcast


We will study Dijkstra's algorithm in detail. But before that, it is important to understand the concept of link state. As we said, each node should know the cost of all links in the network; in other words, a node knows the state of every link in the network. But how is this link-state information collected by a node in the network? In
practice this is accomplished by having each node broadcast link-state
packets/messages to all other nodes in the network. These link-state packets contain
the node identity and costs of its attached links. The result of the nodes’ broadcast is
that all nodes have an identical and complete view of the network. Each node can then
run the LS algorithm (Dijkstra for example) and compute the same set of least-cost
paths as every other node.

Example


A link-state message contains the following information: source, destination, and cost.
Suppose u is the source node; u will receive link-state messages from all other nodes in the network (i.e., nodes v, w, x, y, and z). Let's take them one by one.
What link-state messages will u receive from v?
In the link-state algorithm, a node shares the costs of its directly attached links with all other nodes in the network.
Directly attached links of v along with their costs:
• (v,u,2)
• (v,w,3)
• (v,x,2)
Similarly, the link-state messages advertised by node x are:
• (x,u,1)
• (x,v,2)
• (x,w,3)
• (x,y,1)
Link-state messages broadcast by the other nodes can be written in a similar fashion. You can now imagine that a source node (let us say u) will receive a list of link costs, which is used as input to compute least-cost paths to all destinations in the network, as sketched below.
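Here is a minimal sketch of how a node could rebuild the weighted graph from the received link-state advertisements, using the messages of v and x listed above (the tuple format is an illustration, not a real packet format).

from collections import defaultdict

# Link-state advertisements as (source, destination, cost) tuples.
lsas = [("v", "u", 2), ("v", "w", 3), ("v", "x", 2),
        ("x", "u", 1), ("x", "v", 2), ("x", "w", 3), ("x", "y", 1)]

# Every node collects all advertisements and rebuilds the same weighted graph.
graph = defaultdict(dict)
for src, dst, c in lsas:
    graph[src][dst] = c
    graph[dst][src] = c          # links are bidirectional in this example

print(dict(graph)["u"])          # u's links learned purely from broadcasts: {'v': 2, 'x': 1}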

Dijkstra’s algorithm
Dijkstra’s algorithm computes the least-cost path from one node (the source, which
we will refer to as u) to all other nodes in the network (destinations, which we will refer
to as v). Dijkstra’s algorithm is iterative and has the property that after the kth iteration


of the algorithm, the least-cost paths are known to k destination nodes. Let us explain how Dijkstra's algorithm works to compute these least-cost paths:
1. From the source node u, the algorithm starts by computing the costs to u's neighbor nodes.
2. The cost is set to infinity for those destinations that are not neighbors.
3. Next, the least-cost node among those not yet processed is selected, and costs are recomputed via that node.
4. If a newly computed cost is smaller than the earlier computed cost, the new cost replaces the earlier one.
5. If the earlier computed cost is smaller than the new cost, it is not changed.
6. The algorithm continues iteratively in the same fashion (steps 3-5) until least-cost paths have been computed for all destinations.
Let us define the following notation and explain the algorithm with the help of an
example:
• D(v): Cost of the least-cost path from the source node to destination v as of this
iteration of the algorithm.
• p(v): Previous node (neighbor of destination v) along the current least-cost path
from the source to v. As an example, let’s consider the network in Figure 4.27
and compute the least-cost paths from u to all possible destinations. A tabular
summary of the algorithm’s computation is shown in Table below.
• In the initialization step, the currently known least-cost paths from u to its
directly attached neighbors, v, x, and w, are initialized to 2, 1, and 5,
respectively. Note in particular that the cost to w is set to 5 (even though we will
soon see that a lesser-cost path does indeed exist) since this is the cost of the
direct (one hop) link from u to w. The costs to y and z are set to infinity because
they are not directly connected to u.
• In the first iteration, we look among those nodes and find that node with the
least cost as of the end of the previous iteration. That node is x, with a cost of
1, and thus paths are recomputed via x. The results are shown in the second
line (Step 1) in the table. The cost of the path to v is unchanged, since the earlier computed cost to reach v is 2, whereas it is 3 via x. The cost of the path to w
(which was 5 at the end of the initialization) through node x is found to have a
cost of 4. Hence this lower-cost path is selected and w’s predecessor along the


shortest path from u is set to x. Similarly, the cost to y (through x) is computed to be 2, and the table is updated accordingly.

Table for Figure 4.27 (source node = u); the step-by-step values below are reconstructed from the description in this section:

Step  N'        D(v),p(v)  D(w),p(w)  D(x),p(x)  D(y),p(y)  D(z),p(z)
0     u         2,u        5,u        1,u        ∞          ∞
1     ux        2,u        4,x        -          2,x        ∞
2     uxy       2,u        3,y        -          -          4,y
3     uxyv      -          3,y        -          -          4,y
4     uxyvw     -          -          -          -          4,y
5     uxyvwz    -          -          -          -          -

• In the second iteration, nodes v and y are found to have the least-cost paths (2); we break the tie arbitrarily and select y for this iteration. The costs to nodes w and z are updated, yielding the results shown in the third row of the table.
• And so on. . ..

When the LS algorithm terminates, we have, for each node, its predecessor along the
least-cost path from the source node. For each predecessor, we also have its


predecessor, and so in this manner we can construct the entire path from the source
to all destinations. The forwarding table in a node, say node u, can then be constructed
from this information. Figure 4.28 shows the resulting least-cost paths and forwarding
table in u for the example network.
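To tie the pieces together, here is a compact Python sketch of Dijkstra's algorithm run from source u on link costs taken from the Figure 4.27 example used throughout this lecture (the function name and graph encoding are choices made for this sketch).

import heapq

# Link costs of the Figure 4.27 example network.
graph = {
    "u": {"v": 2, "x": 1, "w": 5},
    "v": {"u": 2, "x": 2, "w": 3},
    "x": {"u": 1, "v": 2, "w": 3, "y": 1},
    "w": {"u": 5, "v": 3, "x": 3, "y": 1, "z": 5},
    "y": {"x": 1, "w": 1, "z": 2},
    "z": {"w": 5, "y": 2},
}

def dijkstra(graph, source):
    # D(v): best known cost so far; p(v): predecessor on the least-cost path
    D = {n: float("inf") for n in graph}
    p = {n: None for n in graph}
    D[source] = 0
    pq = [(0, source)]
    done = set()
    while pq:
        d, n = heapq.heappop(pq)
        if n in done:
            continue
        done.add(n)
        for nbr, c in graph[n].items():
            if d + c < D[nbr]:                  # a cheaper path to nbr goes through n
                D[nbr] = d + c
                p[nbr] = n
                heapq.heappush(pq, (D[nbr], nbr))
    return D, p

D, p = dijkstra(graph, "u")
print(D)   # {'u': 0, 'v': 2, 'x': 1, 'w': 3, 'y': 2, 'z': 4}
# The forwarding table entry for each destination is the first hop on its path,
# obtained by following the predecessors p(.) back towards u.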

Distance Vector vs. Link-State Routing Algorithm


The DV and LS algorithms take different approaches toward computing routes. In the
DV algorithm, each node talks to only its directly connected neighbors, but it provides
its neighbors with least-cost estimates from itself to all the nodes (that it knows about)
in the network. In the LS algorithm, each node talks with all other nodes (via
broadcast), but it tells them only the costs of its directly connected links. Both
approaches have their own advantages and disadvantages. As we have seen, DV is slow in route propagation, whereas LS is quicker because of its broadcast messages. However, such broadcasting consumes more bandwidth. Furthermore, the LS routing algorithm is computationally intensive compared to DV, where the computational load is distributed among the nodes in the network.


Week # 11 – Lecture # 22 – Transport layer (Layer 4)

Recommended Reading:

Book: Computer Networking: A Top-Down Approach, 6th edition. Authors: James F. Kurose, Keith W. Ross

Transport Layer

In this lecture we are going to start a new chapter on transport layer. We will cover
following topics in this lecture:

• Introduction to Transport Layer


• Transport Layer Protocols (TCP and UDP)
• Connection oriented vs connectionless

Introduction to Transport Layer


Logical Communication between processes
Residing between the application and network layers, the transport layer is a central
piece of the layered network architecture. A transport-layer protocol provides logical communication between application processes running on different hosts. By logical
communication, we mean that from an application’s perspective, it is as if the hosts
running the processes were directly connected; but in reality, the hosts may be on
opposite sides of the planet, connected via numerous routers and a wide range of link
types. Application processes use the logical communication provided by the transport
layer to send messages to each other, free from the worry of the details of the physical
infrastructure used to carry these messages. Figure 3.1 illustrates the notion of logical
communication.

Role of Transport Layer at sender side


As shown in Figure 3.1, transport-layer protocols are implemented in the end systems
but not in network routers. On the sending side, the transport layer converts the
application-layer messages it receives from a sending application process into
transport-layer packets, known as transport-layer segments in Internet terminology.
This is done by breaking the application messages into smaller chunks and adding a
transport-layer header to each chunk to create the transport-layer segment. The
transport layer then passes the segment to the network layer at the sending end


system, where the segment is encapsulated within a network-layer packet (a datagram) and sent to the destination.

Role of Transport Layer at receiver side


It’s important to note that network routers act only on the network-layer fields of the
datagram; that is, they do not examine the fields of the transport-layer segment encapsulated within the datagram. On the receiving side, the network layer extracts the
transport-layer segment from the datagram and passes the segment up to the
transport layer. The transport layer then processes the received segment, making the
data in the segment available to the receiving application.

Transport Layer Protocols


More than one transport-layer protocol may be available to network applications. For
example, the Internet has two protocols—TCP and UDP. Each of these protocols
provides a different set of transport-layer services to the application.


Transport Layer Services


Before proceeding with our brief introduction of UDP and TCP, it will be useful to say
a few words about the Internet’s network layer i.e., IP. IP provides logical
communication between hosts. The IP service model is a best-effort delivery service.
This means that IP makes its “best effort” to deliver segments between communicating
hosts, but it makes no guarantees. In particular, it does not guarantee segment
delivery, it does not guarantee orderly delivery of segments, and it does not guarantee
the integrity of the data in the segments. For these reasons, IP is said to be an
unreliable service.

Multiplexing and Demultiplexing


The most fundamental responsibility of Transport layer protocols (UDP and TCP) is to
extend IP’s delivery service between two end systems to a delivery service between
two processes running on the end systems. Extending host-to-host delivery to
process-to-process delivery is called transport-layer multiplexing and demultiplexing.
We’ll discuss transport-layer multiplexing and demultiplexing in the next section.

Bit Error Detection


UDP and TCP also provide integrity checking by including error detection fields in their
segments’ headers. These two minimal transport-layer services—process-to-process
data delivery and error checking—are the only two services that UDP provides! In
particular, like IP, UDP is an unreliable service—it does not guarantee that data sent
by one process will arrive intact (or at all!) at the destination process. UDP is discussed in detail in Section 3.3 of the book.
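The error-detection field both protocols carry is a checksum. Below is a minimal sketch of the Internet checksum (a 16-bit one's-complement sum) over a byte string; real UDP/TCP implementations also cover a pseudo-header and the segment header, which is omitted here.

def internet_checksum(data: bytes) -> int:
    # 16-bit one's-complement sum, as used by the UDP and TCP checksum fields.
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # add each 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back in
    return ~total & 0xFFFF

segment = b"example payload"
print(hex(internet_checksum(segment)))
# The receiver sums the received words together with the checksum;
# a result of 0xFFFF means no bit errors were detected.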

Reliable Data Delivery


TCP, on the other hand, offers several additional services to applications. First and
foremost, it provides reliable data transfer. Using flow control, sequence numbers,
acknowledgments, and timers (techniques we’ll explore in detail in this chapter), TCP
ensures that data is delivered from sending process to receiving process, correctly
and in order. TCP thus converts IP’s unreliable service between end systems into a
reliable data transport service between processes.


Congestion Control
TCP also provides congestion control. Congestion means that packets are dropped at routers due to an excessive amount of traffic. Congestion control is not so much a service provided to the application as it is a service for the Internet as a whole. TCP congestion control prevents any one TCP connection from overloading the links and routers between communicating hosts with an excessive amount of traffic. TCP tries to give each connection traversing a congested link an equal share of the link bandwidth. This is
done by regulating the rate at which the sending sides of TCP connections can send
traffic into the network. UDP traffic, on the other hand, is unregulated. An application
using UDP transport can send traffic at any rate.
In this chapter, we will explain how these aforementioned services are provided by
transport layer protocols. We will start our discussion with the most basic service of
Multiplexing and Demultiplexing.

Multiplexing and Demultiplexing


At the destination host, the transport layer receives segments from the network layer
just below. The transport layer has the responsibility of delivering the data in these
segments to the appropriate application process running in the host. On a host,
we generally run multiple processes at a time. Let's take a look at an example. Suppose you are sitting in front of your computer and downloading two files from two different web pages. The application-layer protocol responsible for handling web communication is the Hypertext Transfer Protocol (HTTP); two different web pages mean you have two processes running under the web application. Similarly, the application-layer protocol responsible for handling file downloads from a remote server is the File Transfer Protocol (FTP); two file downloads in progress mean two processes belonging to FTP. So we have multiple processes running on the same host in parallel. When the transport layer in your computer receives data from the network layer below, it needs to direct the received data to one of these processes. The question is: how does it know which process is the correct destination?

Socket [IP + Port number]


Before explaining the answer of this question, it is important to introduce related
concepts of socket and port number. Each process in a host can have one or more


associated sockets. A socket can be considered a reserved memory area where a process writes data to be delivered to the network stack, or reads data received from the network stack. In other words, sockets can be imagined as doors through which data passes from the network to the process and through which data passes from the process to the network. Thus, as shown in Figure 3.2, the transport layer in the receiving host does not actually deliver data directly to a process, but instead to an intermediary socket. Because at any given time there can be more than one socket in a host, each socket has a unique identifier, known as a port number.

Demultiplexing
Now let’s consider how a receiving host directs an incoming transport-layer segment
to the appropriate socket. Each transport-layer segment has a set of header fields for this purpose. At the receiving end, the transport layer examines these fields to
identify the receiving socket and then directs the segment to that socket. This job of
delivering the data in a transport-layer segment to the correct socket is called
demultiplexing.

Multiplexing
The job of gathering data chunks at the source host from different sockets,
encapsulating each data chunk with header information (that will later be used
in demultiplexing) to create segments, and passing the segments to the network
layer is called multiplexing.


Note that the transport layer in the middle host in Figure 3.2 must demultiplex
segments arriving from the network layer below to either process P1 or P2 above; this
is done by directing the arriving segment’s data to the corresponding process’s socket.
The transport layer in the middle host must also gather outgoing data from these
sockets, form transport-layer segments, and pass these segments down to the
network layer.
Source and Destination port number
Now that we understand the roles of transport-layer multiplexing and demultiplexing,
let us examine how it is actually done in a host. From the discussion above, we know
that transport-layer multiplexing requires (1) that sockets have unique identifiers, and
(2) that each segment have special header fields that indicate the socket to which the
segment is to be delivered. These special fields, illustrated in Figure 3.3, are the
source port number field and the destination port number field. (The UDP and TCP
headers have other fields as well, as discussed in the subsequent sections of this
chapter.) Each port number is a 16-bit number, ranging from 0 to 65535.
Well-known port numbers
The port numbers ranging from 0 to 1023 are called well-known port numbers and are
restricted, which means that they are reserved for use by well-known application
protocols such as HTTP (which uses port number 80) and FTP (which uses port
number 21). The list of well-known port numbers is given in RFC 1700 and is updated
at http://www.iana.org [RFC 3232].
When we develop a new network application, we must assign the application a port
number. It should now be clear how the transport layer could implement the
demultiplexing service: Each socket in the host could be assigned a port number, and
when a segment arrives at the host, the transport layer examines the destination port
number in the segment and directs the segment to the corresponding socket. The
segment’s data then passes through the socket into the attached process.
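The following minimal Python sketch illustrates port-based demultiplexing with UDP sockets on one machine; the port number 9999 and the message are arbitrary choices for this illustration.

import socket

# Receiving process: bind a socket to a port. Segments carrying this
# destination port number are delivered to this socket.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 9999))

# Sending process: the OS picks a source port automatically; the destination
# port (9999) tells the receiving host which socket should get the data.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", 9999))

data, (src_ip, src_port) = server.recvfrom(1024)
print(f"received {data!r} from source port {src_port}")
server.close()
client.close()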

