
CHAPTER 5: TRANSPORT LAYER

Functions:
 Error handling
 Flow control
 Multiplexing
 Connection set-up and release
 Congestion handling
 Segmentation and reassembly
 Addressing (port addressing)

Services:
 Reliable, in-order unicast delivery (TCP)
 Unreliable, unordered unicast or multicast delivery (UDP)

Port services:
20  FTP data
21  FTP control
22  SSH (Secure Shell)
23  Telnet
25  SMTP (Simple Mail Transfer Protocol)
53  DNS
80  HTTP (web server)
110 POP3 (Post Office Protocol)
143 IMAP (Internet Message Access Protocol)

Port ranges:
 Well-known ports (0-1023): assigned and controlled by IANA.
 Registered ports (1024-49151): not assigned and controlled by IANA, but registered with it to prevent duplication.
 Dynamic ports (49152-65535): can be used by any process.

Socket = IP address : port number (different hosts may use the same port number, but the resulting socket addresses will still be unique).

TCP (Transmission Control Protocol):
 A protocol that runs in the transport layer.
 Offers reliable, connection-oriented service between source and destination.
 Acts as if it connects the two end points directly, giving a point-to-point connection between the two parties.
 Does not support multicasting (because it is connection oriented).
 A unit of data in TCP is called a segment.
 Segments are obtained by breaking big files into small pieces.
 Assists in flow control.
 Provides a buffer for each connection.

TCP segment format (field widths in bits):

Source port # (16) | Dest. port # (16)
Sequence number (32)
Acknowledgement number (32)
Hlen (4) | Res. (6) | U A P R S F | Window size (16)
Checksum (16) | Urgent ptr. (16)
Options (32)
Data

Fig: TCP segment format
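The layout above can be sketched in code. This is a minimal illustration (not a full TCP implementation) that unpacks the fixed 20-byte header using only the standard library; field names follow the figure.

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Parse the fixed part of a TCP header (first 20 bytes, network byte order)."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent_ptr) = struct.unpack(
        "!HHIIHHHH", segment[:20])
    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "seq": seq,
        "ack": ack,
        "hlen": (offset_flags >> 12) * 4,   # Hlen counts 32-bit words
        "flags": {name: bool(offset_flags & bit)
                  for name, bit in [("U", 0x20), ("A", 0x10), ("P", 0x08),
                                    ("R", 0x04), ("S", 0x02), ("F", 0x01)]},
        "window": window,
        "checksum": checksum,
        "urgent_ptr": urgent_ptr,
    }

# Example: a hand-built SYN segment from port 49200 to port 80
# (Hlen = 5 words = 20 bytes, only the S flag set).
hdr = struct.pack("!HHIIHHHH", 49200, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
parsed = parse_tcp_header(hdr)
print(parsed["src_port"], parsed["dst_port"], parsed["hlen"], parsed["flags"]["S"])
```

Note how the 4-bit Hlen, 6 reserved bits, and the six flag bits U A P R S F share one 16-bit field, exactly as in the figure.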


 Source port #: port number of the application in the host that is sending the segment.
 Dest. port #: port number at which the receiver receives the segment.
 Sequence number: number assigned to the first byte of data in the segment.
 Acknowledgement number: number of the next byte a party expects to receive.
 Hlen: length of the TCP header.
 Res.: reserved for future use.
 U: urgent pointer valid.
 A: acknowledgement valid.
 P: data push is valid.
 R: reset valid.
 S: synchronization valid.
 F: final valid.
 Window size: number of bytes that the receiver is willing to accept.
 Checksum: used to detect errors in the segment.
 Urgent ptr.: pointer to urgent data.
 Options: used when sender and receiver negotiate the maximum segment size.
 Data: the application data carried by the segment.

UDP (User Datagram Protocol):
 Used in the transport layer.
 Offers unreliable, connectionless service.
 Provides faster service than TCP.
 Offers a minimum error-checking mechanism.
 Supports multicasting, because it is connectionless.
 Offers a minimum flow-control mechanism.
 Also used by SNMP (Simple Network Management Protocol).

UDP segment header (field widths in bits):

Source port number (16) | Destination port number (16)
Length (16) | UDP checksum (16)
Data

Fig: UDP segment header

Q. Why is UDP faster than TCP?
 Small header size
 Minimum error-checking mechanism
 Fewer header fields

Connection Establishment and Connection Release:

Steps of connection establishment (between ports P1 and P2):
1. P1 sends a connection request to P2.
2. P2 sends an acknowledgement to P1.
3. P1 sends data to P2.
4. P2 sends a connection request to P1.
5. P1 sends an acknowledgement to P2.
6. P2 sends data to P1.

Steps of connection establishment using piggybacking:
1. P1 sends a connection request to P2.
2. P2 sends an acknowledgement along with its own connection request to P1.
3. P1 sends an acknowledgement along with data to P2.
Flow control and buffering:
A TCP connection sets aside a receive buffer for the connection. When the TCP connection receives bytes that are correct and in sequence, it places the data in the receive buffer. The associated application process will read data from this buffer, but not necessarily at the instant the data arrives. Indeed, the receiving application may be busy with some other task and may not even attempt to read the data until long after it has arrived. If the application is relatively slow at reading data, the sender can very easily overflow the receive buffer by sending too much data too quickly. TCP provides a flow control service to its applications to eliminate the possibility of the sender overflowing the receive buffer. Flow control is thus a speed-matching service: it matches the rate at which the sender is sending against the rate at which the receiving application is reading.

Multiplexing and Demultiplexing:
 Multiplexing/demultiplexing extends the host-to-host delivery service provided by the network layer to a process-to-process delivery service for the applications running on the hosts.
 Consider how a receiving host directs an incoming transport-layer segment to the appropriate socket. Each transport-layer segment has a set of fields for this purpose. At the receiving end, the transport layer examines these fields to identify the receiving socket and then directs the segment to that socket. This job of delivering the data in a transport-layer segment to the correct socket is called demultiplexing. The job of gathering data chunks at the source host from the different sockets, encapsulating each data chunk with header information to create segments, and passing the segments to the network layer is called multiplexing. At the destination host, the transport layer receives segments from the network layer just below it and has the responsibility of delivering the data in these segments to the appropriate application process running in the host.

Fig: Multiplexing and Demultiplexing in the Transport Layer (processes attach to ports; a MUX on the sending host and a DeMUX on the receiving host sit between the ports and the hosts' IP addresses, connected through the network service provider)
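A minimal sketch of demultiplexing, assuming a simplified UDP-style model in which a "socket" is just a queue keyed by the header field the transport layer examines (here only the destination port):

```python
from collections import defaultdict

sockets = defaultdict(list)  # destination port -> queue of delivered payloads

def demultiplex(segment: dict) -> None:
    """Deliver a segment's data to the socket identified by its header fields."""
    sockets[segment["dst_port"]].append(segment["data"])

# Two processes listening on ports 53 and 80 receive interleaved segments:
for seg in [{"dst_port": 53, "data": "query"},
            {"dst_port": 80, "data": "GET /"},
            {"dst_port": 53, "data": "reply"}]:
    demultiplex(seg)

print(sockets[53])  # ['query', 'reply']
print(sockets[80])  # ['GET /']
```

Real TCP demultiplexing keys on the full 4-tuple (source IP, source port, destination IP, destination port), but the lookup idea is the same.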
Problems in connection establishment:
1. Jamming
2. Congestion

Jamming:
 A connection request may be delayed at intermediate nodes.
 After a round-trip time, the sender resends the request.
 After some time, the receiver may therefore get two or more similar requests, which occupy buffer space unnecessarily.
 To minimize this problem, each request is sent with an individual and unique sequence number.
 For each incoming request, the receiver compares sequence numbers and accepts only one request when two or more are similar.

Congestion:
When too many packets are present in a subnet (or a part of it), performance degrades. This situation is called congestion. When the number of packets dumped into the subnet by the hosts is within its carrying capacity, they are all delivered (except for a few that contain transmission errors), and the number delivered is proportional to the number sent. However, as traffic increases too far, the routers are no longer able to cope, and they begin losing packets. At very high traffic, performance collapses completely, and almost no packets are delivered.

Causes of congestion:
 When packets arriving on several input lines all need the same output line (or fewer output lines).
 When routers are slow, i.e. when routers' CPUs are slow.
 When a router has no free buffers, i.e. insufficient memory to hold the queue of packets.
 When the components used in the subnet (links, routers, switches, etc.) have different traffic-carrying and switching capacities.
 When the bandwidth of the lines is low, so they cannot carry a large volume of packets.

Congestion cannot be eradicated, but it can be controlled.

Congestion control mechanisms:
a) Open-loop control: no feedback is considered.
b) Closed-loop control: feedback is considered.

Congestion control algorithms:

1. Traffic shaping:
 Also called 'Service Level Management'.
 Bursty (abrupt) traffic is smoothed without affecting the performance of the application layer.
 Traffic is shaped as per the negotiation between source and destination.
 The sliding window protocol, at some level, helps in traffic shaping.
 Example: a bursty flow might deliver traffic for one millisecond and then nothing for the next millisecond while averaging 1 Mbps overall. A router can reshape the burst by temporarily queuing incoming datagrams and sending them out at a steady rate of 1 Mbps.
2. Leaky Bucket Algorithm:
Imagine a bucket with a small hole in the bottom. No matter at what rate water enters the bucket, the outflow is at a constant rate whenever there is any water in the bucket, and zero when the bucket is empty. Also, once the bucket is full, any additional water entering it spills over the sides and is lost.
The same idea can be applied to packets. Conceptually, each host is connected to the network by an interface containing a leaky bucket, i.e. a finite internal queue. If a packet arrives at the queue when it is full, the packet is discarded. If one or more processes within the host try to send a packet when the maximum number are already in the queue, the new packet is discarded.

3. Token Bucket Algorithm:
The leaky bucket algorithm enforces a rigid output pattern at the average rate, no matter how bursty the traffic is. In the leaky bucket there is also a chance of packet loss, since packets are discarded whenever the bucket is full. To overcome these limitations of the leaky bucket, the token bucket algorithm was introduced.
The bucket holds tokens, not packets. Tokens are generated by a clock at the rate of one token per ΔT sec. For a packet to be transmitted, it must capture and destroy one token. The token bucket algorithm allows saving up permission to send a large burst later, up to the maximum size of the bucket, which the leaky bucket does not allow. This property means that bursts of packets can be sent at once, allowing some burstiness in the output stream and giving a faster response to sudden bursts of input.
Another difference between the token bucket and the leaky bucket is that the token bucket throws away tokens when the bucket fills up, but never discards packets. The token bucket also allows sending on a byte basis for variable-size packets: a packet can be transmitted only if enough tokens are available to cover its length in bytes. Fractional tokens are kept for future use.
The implementation of the basic token bucket algorithm is just a variable that counts tokens. The counter is incremented by one every ΔT sec and decremented by one whenever a packet is sent. When the counter hits zero, no packet can be sent.
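The counter implementation described above can be sketched as follows, assuming discrete time ticks of ΔT and one token per packet (a simplified model; a backlog queue holds packets that could not get a token yet):

```python
def token_bucket(bucket_size: int, arrivals: list) -> list:
    """arrivals[t] = packets wanting to go out at tick t; returns packets sent per tick."""
    tokens, backlog, sent = bucket_size, 0, []   # bucket starts full
    for a in arrivals:
        tokens = min(bucket_size, tokens + 1)    # one token generated per ΔT,
                                                 # extra tokens spill when full
        backlog += a                             # newly arriving packets queue up
        n = min(backlog, tokens)                 # each sent packet destroys a token
        tokens -= n
        backlog -= n
        sent.append(n)
    return sent

# A burst of 5 packets at tick 0, then a trickle: the saved-up tokens let 3
# packets of the burst go out at once, and the rest drain at one per tick.
print(token_bucket(3, [5, 1, 0, 0]))
```

This shows the key contrast with the leaky bucket: the initial burst is not flattened to one packet per tick, because unused permission was saved up beforehand.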
TCP Connection Management:

Opening a connection (3-way handshake):
Step 1: client end system sends TCP SYN control segment to server.
Step 2: server end system receives SYN, replies with SYN-ACK
– allocates buffers
– ACKs the received SYN
Step 3: client receives SYN-ACK
– connection is now set up
– client starts the "real work"

Closing a connection:
Step 1: client end system sends TCP FIN control segment to server.
Step 2: server receives FIN, replies with ACK; closes the connection and sends its own FIN.
Step 3: client receives FIN, replies with ACK
– enters "timed wait": will respond with ACK to any further FINs received
Step 4: server receives ACK. Connection closed.

Fig: TCP connection management (opening: client sends open while server listens, both reach "established"; closing: both sides pass through "close" and end in "closed")
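The client-side steps above can be sketched as a tiny state machine. The state and event names here are simplified labels for this sketch (real TCP defines more states, e.g. FIN_WAIT_2, and this omits the server side):

```python
CLIENT_TRANSITIONS = {
    ("CLOSED",      "open/send SYN"):          "SYN_SENT",
    ("SYN_SENT",    "recv SYN-ACK/send ACK"):  "ESTABLISHED",
    ("ESTABLISHED", "close/send FIN"):         "FIN_WAIT",
    ("FIN_WAIT",    "recv FIN/send ACK"):      "TIMED_WAIT",
    ("TIMED_WAIT",  "timeout"):                "CLOSED",
}

def run(events):
    """Replay a sequence of events from the CLOSED state and return the final state."""
    state = "CLOSED"
    for ev in events:
        state = CLIENT_TRANSITIONS[(state, ev)]
    return state

# The 3-way handshake leaves the client established:
print(run(["open/send SYN", "recv SYN-ACK/send ACK"]))
# A full open-then-close sequence returns to CLOSED after the timed wait:
print(run(["open/send SYN", "recv SYN-ACK/send ACK",
           "close/send FIN", "recv FIN/send ACK", "timeout"]))
```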
