
Lecture 6

Article 3.5
UDP packet Discussion:
Connection-Oriented Transport: TCP
• TCP—the Internet’s transport-layer, connection-oriented, reliable transport protocol
• TCP is defined in RFC 793, RFC 1122, RFC 2018, RFC 5681, and RFC 7323.
The TCP Connection
1: TCP is said to be connection-oriented because before one application process can begin to send
data to another, the two processes must first “handshake” with each other;
• that is, they must send some preliminary segments to each other to establish the parameters of
the ensuing data transfer.
• As part of TCP connection establishment, both sides of the connection will initialize many TCP
state variables associated with the TCP connection.
2: The TCP “connection” is not an end-to-end TDM or FDM circuit as in a circuit switched network.
• Instead, the “connection” is a logical one, with common state residing only in the TCPs in the two
communicating end systems.
• The TCP protocol runs only in the end systems and not in the intermediate network elements
(routers and link-layer switches).
• The intermediate network elements do not maintain TCP connection state. In fact, the
intermediate routers are completely oblivious to TCP connections; they see datagrams, not
connections.
The TCP Connection
3: A TCP connection provides a full-duplex service: if there is a TCP connection between Process A on one
host and Process B on another host, then application-layer data can flow from A to B at the same time as
application-layer data flows from B to A.
4: A TCP connection is also always point-to-point, that is, between a single sender and a single
receiver.
So-called “multicasting” —the transfer of data from one sender to many receivers in a single
send operation—is not possible with TCP.
With TCP, two hosts are company and three are a crowd!
How is a TCP connection established?
• The connection-establishment procedure is often referred to as a three-way handshake.
• Client first sends a special TCP segment; the server responds with a second special TCP segment;
and finally the client responds again with a third special segment.
• The first two segments carry no payload, that is, no application-layer data; the third of these
segments may carry a payload; a total of three segments are sent between the two hosts.
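A minimal Python socket sketch (assuming a server is listening on 127.0.0.1:12345; the address, port, and data are made up): the client’s connect() call is what triggers the operating system to perform the three-way handshake, and the application never builds the SYN, SYN-ACK, or ACK segments itself.

    # Hypothetical client: connect() causes the kernel's TCP stack to perform
    # the SYN / SYN-ACK / ACK exchange with the server before any data is sent.
    import socket

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP socket
    client.connect(("127.0.0.1", 12345))   # three-way handshake happens here
    client.sendall(b"hello")               # application data follows the handshake
    client.close()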
How is a TCP connection established?
• Once a TCP connection is established, the two application
processes can send data to each other.
• Let’s consider the sending of data from the client process
to the server process.
• The client process passes a stream of data through the
socket (the door of the process)
• Once the data passes through the door, the data is in the
hands of TCP running in the client.
• TCP directs this data to the connection’s send buffer, which
is one of the buffers that is set aside during the initial
three-way handshake.
• From time to time, TCP will grab chunks of data from the
send buffer and pass the data to the network layer.
How is a TCP connection established?
• TCP pairs each chunk of client data with a TCP header,
thereby forming TCP segments.
• The segments are passed down to the network layer, where
they are separately encapsulated within network-layer IP
datagrams.
• The IP datagrams are then sent into the network. When TCP
receives a segment at the other end, the segment’s data is
placed in the TCP connection’s receive buffer.
• The application reads the stream of data from this buffer.
Each side of the connection has its own send buffer and its
own receive buffer.
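A short sketch of this byte-stream view (the destination host, port, and request are illustrative, not from the lecture): the application only writes into and reads from the socket, while TCP manages the send and receive buffers underneath.

    # The application sees only a byte stream; TCP decides when to grab chunks
    # from the send buffer, segment them, and hand them to the network layer.
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("example.org", 80))
    s.sendall(b"GET / HTTP/1.0\r\nHost: example.org\r\n\r\n")   # into the send buffer
    reply = b""
    while True:
        chunk = s.recv(4096)    # drains the connection's receive buffer
        if not chunk:           # empty read: the other side closed the connection
            break
        reply += chunk
    s.close()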
How is a TCP connection established?
• The maximum amount of data that can be grabbed and placed in a TCP segment is limited by the
maximum segment size (MSS).
• The MSS is the maximum amount of application-layer data in the segment, not the maximum size
of the TCP segment including headers.
• The MSS is typically set by first determining the length of the largest link-layer frame that can be
sent by the local sending host (the so-called maximum transmission unit, MTU),
• Setting the MSS this way ensures that a TCP segment (when encapsulated in an IP datagram) plus the
TCP/IP header length (typically 40 bytes) will fit into a single link-layer frame.
• Both Ethernet and PPP link-layer protocols have an MTU of 1,500 bytes. Thus, a typical value of
MSS is 1,460 bytes.
• Approaches have also been proposed for discovering the path MTU—the largest link-layer frame
that can be sent on all links from source to destination [RFC 1191]—and setting the MSS based on
the path MTU value.
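The arithmetic behind the typical 1,460-byte MSS is simple; a tiny sketch (the constants mirror the bullets above):

    # MSS = MTU minus the typical 20-byte TCP header and 20-byte IP header.
    ETHERNET_MTU = 1500
    TCP_HEADER_BYTES = 20
    IP_HEADER_BYTES = 20

    mss = ETHERNET_MTU - TCP_HEADER_BYTES - IP_HEADER_BYTES
    print(mss)   # 1460 bytes of application data per segment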
TCP Segment Structure
Let’s examine the TCP segment structure.
The TCP segment consists of header fields and a data field.
Data Field:
• The data field contains a chunk of application data.
• As mentioned above, the MSS limits the maximum size of a segment’s data
field.
• When TCP sends a large file, such as an image as part of a Web page, it
typically breaks the file into chunks of size MSS (except for the last chunk,
which will often be less than the MSS).
• Interactive applications, however, often transmit data chunks that are
smaller than the MSS;
• For example, with remote login applications such as Telnet and ssh, the data
field in the TCP segment is often only one byte.
• Because the TCP header is typically 20 bytes, segments sent by Telnet and ssh
may be only 21 bytes in length.
TCP Segment Structure
TCP Header:
TCP header is typically 20 bytes. The header includes
• source and destination port numbers
• checksum field.
A TCP segment header also contains the following fields:
• The 32-bit sequence number field and the 32-bit acknowledgment number field are used by the
TCP sender and receiver in implementing a reliable data transfer service.
• The 16-bit receive window field is used for flow control.
• The 4-bit header length field specifies the length of the TCP header in 32-bit words. The TCP
header can be of variable length due to the TCP options field. (Typically, the options field is
empty, so that the length of the typical TCP header is 20 bytes.)
• The optional and variable-length options field is used when a sender and receiver negotiate the
maximum segment size (MSS) or a window scaling factor for use in high-speed networks.
TCP Segment Structure
The flag field contains 8 bits.
• The ACK bit is used to indicate that the value carried in the acknowledgment field is valid; that is,
the segment contains an acknowledgment for a segment that has been successfully received.
• The RST, SYN, and FIN bits are used for connection setup and teardown,
• The CWR and ECE bits are used in explicit congestion notification,
• Setting the PSH bit indicates that the receiver should pass the data to the upper layer immediately.
• Finally, the URG bit is used to indicate that there is data in this segment that the sending-side
upper layer entity has marked as “urgent.”
• The location of the last byte of this urgent data is indicated by the 16-bit urgent data pointer field.
• TCP must inform the receiving-side upper-layer entity when urgent data exists and pass it a
pointer to the end of the urgent data.
• (In practice, the PSH, URG, and the urgent data pointer are not used. However, we mention these
fields for completeness.)
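To make the field layout concrete, here is a rough sketch (not part of the lecture) that unpacks the 20-byte fixed part of a TCP header with Python’s struct module; the function and field names are ours.

    # Unpack the 20-byte fixed TCP header (all fields in network byte order).
    import struct

    def parse_tcp_header(segment: bytes) -> dict:
        (src_port, dst_port, seq, ack,
         offset_flags, window, checksum, urg_ptr) = struct.unpack("!HHIIHHHH", segment[:20])
        return {
            "src_port": src_port,
            "dst_port": dst_port,
            "seq": seq,                              # 32-bit sequence number
            "ack": ack,                              # 32-bit acknowledgment number
            "header_len": (offset_flags >> 12) * 4,  # 4-bit length field, counted in 32-bit words
            "flags": offset_flags & 0x00FF,          # CWR, ECE, URG, ACK, PSH, RST, SYN, FIN
            "window": window,                        # 16-bit receive window
            "checksum": checksum,
            "urgent_pointer": urg_ptr,
        }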
TCP Segment Structure
Sequence Numbers and Acknowledgment Numbers
• Two of the most important fields in the TCP segment header are the sequence number field and
the acknowledgment number field.
• These fields are a critical part of TCP’s reliable data transfer service. But before discussing how
these fields are used to provide reliable data transfer, let us first explain what exactly TCP puts in
these fields.
TCP Segment Structure
Sequence Numbers
• TCP views data as an unstructured, but ordered, stream of bytes.
• TCP’s use of sequence numbers reflects this view in that sequence numbers are over the stream of
transmitted bytes and not over the series of transmitted segments.
• The sequence number for a segment is therefore the byte-stream number of the first byte in the
segment.
• Example:
• Suppose that a process in Host A wants to send a stream of data to a process in Host B over a TCP
connection. Suppose that the data stream consists of a file of 500,000 bytes, that the MSS is 1,000 bytes,
and that the first byte of the data stream is numbered 0.
• TCP constructs 500 segments out of the data stream. The first segment gets assigned sequence number 0,
the second segment sequence number 1,000, the third segment sequence number 2,000, and so on.
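The arithmetic of this example, written out as a quick sketch:

    # 500,000-byte file, MSS = 1,000 bytes, first byte numbered 0: the sequence
    # number of each segment is the byte-stream number of its first byte.
    FILE_SIZE = 500_000
    MSS = 1_000

    sequence_numbers = list(range(0, FILE_SIZE, MSS))
    print(len(sequence_numbers))    # 500 segments
    print(sequence_numbers[:3])     # [0, 1000, 2000]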
TCP Segment Structure
Sequence Numbers

• In the example above, we assumed that the initial sequence number was zero.
• In truth, both sides of a TCP connection randomly choose an initial sequence number.
• This is done to minimize the possibility that a segment that is still present in the network from an
earlier, already-terminated connection between two hosts is mistaken for a valid segment in a
later connection between these same two hosts
TCP Segment Structure
Acknowledgment Numbers
• These are a little trickier than sequence numbers.
• Recall that TCP is full-duplex,
• so that Host A may be receiving data from Host B while it sends data to Host B (as part of the
same TCP connection). Each of the segments that arrive from Host B has a sequence number for
the data flowing from B to A.
• The acknowledgment number that Host A puts in its segment is the sequence number of the next
byte Host A is expecting from Host B.
Example 1:
Suppose that Host A has received all bytes numbered 0 through 535 from B and suppose that it is about to
send a segment to Host B. Host A is waiting for byte 536 and all the subsequent bytes in Host B’s data
stream. So Host A puts 536 in the acknowledgment number field of the segment it sends to B.
TCP Segment Structure
Example 2:
As another example, suppose that Host A has received one segment from Host B containing bytes 0 through 535
and another segment containing bytes 900 through 1,000. For some reason Host A has not yet received bytes 536
through 899. In this example, Host A is still waiting for byte 536 (and beyond) in order to re-create B’s data stream.
Thus, A’s next segment to B will contain 536 in the acknowledgment number field. Because TCP only acknowledges
bytes up to the first missing byte in the stream, TCP is said to provide cumulative acknowledgments.
• Following the previous example, what does a host do when it receives out-of-order segments in a TCP connection?
• Interestingly, the TCP RFCs do not impose any rules here and leave the decision up to the programmers
implementing TCP. There are basically two choices:
(1) the receiver immediately discards out-of-order segments
(2) The receiver keeps the out-of-order bytes and waits for the missing bytes to fill in the gaps.
• Clearly, the latter choice is more efficient in terms of network bandwidth, and is the approach taken in
practice.
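A small sketch of the cumulative-acknowledgment rule used in Examples 1 and 2 (the helper next_ack is ours for illustration, not a real TCP API):

    # The ACK number is always the first byte not yet received in order,
    # regardless of any out-of-order data the receiver may have buffered.
    def next_ack(received_ranges, start=0):
        """received_ranges: list of (first_byte, last_byte) pairs already received."""
        expected = start
        for first, last in sorted(received_ranges):
            if first > expected:        # gap found: acknowledge only up to the gap
                break
            expected = max(expected, last + 1)
        return expected

    print(next_ack([(0, 535)]))                 # 536  (Example 1)
    print(next_ack([(0, 535), (900, 1000)]))    # 536  (Example 2: bytes 536-899 missing)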
Round-Trip Time Estimation and Timeout
• TCP uses a timeout/retransmit mechanism to recover from lost segments.
• Perhaps the most obvious question is the length of the timeout intervals.
• Clearly, the timeout should be larger than the connection’s round-trip time (RTT), that is, the time
from when a segment is sent until it is acknowledged. Otherwise, unnecessary retransmissions
would be sent. But
how much larger?
How should the RTT be estimated in the first place?
Should a timer be associated with each and every unacknowledged segment?
So many questions!
• Our discussion in this section is about the current IETF recommendations for managing TCP
timers [RFC 6298].
Estimating the Round-Trip Time
This is accomplished as follows
1st step: Calculate the SampleRTT.
• The sample RTT for a segment is the amount of time between when the segment is sent and when an
acknowledgment for the segment is received.
• Instead of measuring a SampleRTT for every transmitted segment, most TCP implementations take
only one SampleRTT measurement at a time.
• That is, at any point in time, the SampleRTT is being estimated for only one of the transmitted but
currently unacknowledged segments, leading to a new value of SampleRTT approximately once every
RTT.
• TCP never computes a SampleRTT for a segment that has been retransmitted.
• It only measures SampleRTT for segments that have been transmitted once.
• The SampleRTT values will fluctuate from segment to segment due to congestion in the routers and
to the varying load on the end systems.
• It is therefore natural to take some sort of average of the SampleRTT values.
Estimating the Round-Trip Time
2nd step: Calculate the EstimatedRTT.

    EstimatedRTT = (1 - α) * EstimatedRTT + α * SampleRTT        (recommended value of α: 0.125)

• EstimatedRTT is a weighted average of the SampleRTT values.
• This weighted average puts more weight on recent
samples than on old samples. This is natural, as the
more recent samples better reflect the current
congestion in the network.
Estimating the Round-Trip Time
3rd step: Calculate the DevRTT.
In addition to having an estimate of the RTT, it is also valuable to have a measure of the variability of
the RTT.

    DevRTT = (1 - β) * DevRTT + β * | SampleRTT - EstimatedRTT |

• Note that DevRTT is an EWMA (exponentially weighted moving average) of the difference between
SampleRTT and EstimatedRTT.
• If the SampleRTT values have little fluctuation, then DevRTT will be small; on the other hand, if
there is a lot of fluctuation, DevRTT will be large.
• The recommended value of β is 0.25.
Estimating the Round-Trip Time
Given values of EstimatedRTT and DevRTT, what value should be used for TCP’s timeout
interval?
• Clearly, the interval should be greater than or equal to EstimatedRTT.
• But the timeout interval should not be too much larger than EstimatedRTT; otherwise,
when a segment is lost, TCP would not quickly retransmit the segment, leading to large
data transfer delays.
• It is therefore desirable to set the timeout equal to the EstimatedRTT plus some margin.
• The margin should be large when there is a lot of fluctuation in the SampleRTT values; it
should be small when there is little fluctuation.
• The value of DevRTT should thus come into play here.
• All of these considerations are taken into account in TCP’s method for determining the
retransmission timeout interval
Estimating the Round-Trip Time

    TimeoutInterval = EstimatedRTT + 4 * DevRTT

• An initial TimeoutInterval value of 1 second is recommended.
• Also, when a timeout occurs, the value of TimeoutInterval is doubled to
avoid a premature timeout occurring for a subsequent segment that will
soon be acknowledged.
• However, as soon as a segment is received and EstimatedRTT is updated, the
TimeoutInterval is again computed using the formula above.
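Pulling the three steps together, a minimal sketch of the RFC 6298 estimator (α = 0.125, β = 0.25, initial values as recommended; the SampleRTT values at the bottom are made up):

    # EstimatedRTT and DevRTT are exponentially weighted moving averages;
    # TimeoutInterval = EstimatedRTT + 4 * DevRTT.
    ALPHA, BETA = 0.125, 0.25

    estimated_rtt = None
    dev_rtt = 0.0
    timeout_interval = 1.0          # recommended initial value (seconds)

    def on_sample_rtt(sample_rtt):
        """Update the estimator with one SampleRTT (never taken from a retransmitted segment)."""
        global estimated_rtt, dev_rtt, timeout_interval
        if estimated_rtt is None:                       # first measurement (per RFC 6298)
            estimated_rtt, dev_rtt = sample_rtt, sample_rtt / 2
        else:
            dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - estimated_rtt)
            estimated_rtt = (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt
        timeout_interval = estimated_rtt + 4 * dev_rtt

    for rtt in (0.10, 0.12, 0.30, 0.11):                # fabricated SampleRTT values
        on_sample_rtt(rtt)
        print(round(estimated_rtt, 3), round(timeout_interval, 3))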
Reliable Data Transfer
Preface:
Internet’s network-layer service (IP service) is unreliable. IP does not guarantee datagram
delivery, does not guarantee in-order delivery of datagrams, and does not guarantee the
integrity of the data in the datagrams.
With IP service, datagrams can overflow router buffers and never reach their destination,
datagrams can arrive out of order, and bits in the datagram can get corrupted.
Because transport-layer segments are carried across the network by IP datagrams, transport-
layer segments can suffer from these problems as well.
• TCP creates a reliable data transfer service on top of IP’s unreliable service.
• TCP’s reliable data transfer service ensures that the data stream that a process reads out of
its TCP receive buffer is uncorrupted, without gaps, without duplication, and in sequence;
that is, the byte stream is exactly the same byte stream that was sent by the end system on
the other side of the connection
Reliable Data Transfer
• We discuss how TCP provides reliable data transfer in two incremental steps.
• We first present a highly simplified description of a TCP sender that uses only timeouts to recover
from lost segments; we then present a more complete description that uses duplicate
acknowledgments in addition to timeouts.
• In the ensuing discussion, we suppose that data is being sent in only one direction, from Host A to
Host B, and that Host A is sending a large file.
Reliable Data Transfer
• Figure 3.33 presents a highly simplified
description of a TCP sender.
• We see that there are three major events
related to data transmission and
retransmission in the TCP sender: data
received from application above; timer
timeout; and ACK.
• Read the full explanation in the book (Chapter 3, page 240); a rough
sketch of the three sender events follows below.
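A minimal, simulation-style sketch of the simplified sender in Figure 3.33 (the class name and the print-based “sending” are ours, for illustration only): one retransmission timer is associated with the oldest unacknowledged segment.

    # Simplified TCP sender: three events drive it (data from the application,
    # timer timeout, ACK arrival). Segment transmission is simulated with prints.
    class SimpleTcpSender:
        def __init__(self):
            self.send_base = 0        # oldest not-yet-acknowledged byte
            self.next_seq_num = 0     # byte-stream number of the next byte to send
            self.timer_running = False

        def on_data_from_application(self, data: bytes):
            """Event 1: data received from the application above."""
            print(f"send segment: seq={self.next_seq_num}, {len(data)} bytes")
            if not self.timer_running:        # timer covers the oldest unACKed segment
                self.timer_running = True
            self.next_seq_num += len(data)

        def on_timeout(self):
            """Event 2: retransmit the oldest unacknowledged segment, restart the timer."""
            print(f"retransmit segment with smallest seq: seq={self.send_base}")
            self.timer_running = True

        def on_ack(self, ack_number: int):
            """Event 3: a cumulative ACK may advance send_base."""
            if ack_number > self.send_base:
                self.send_base = ack_number
                self.timer_running = self.send_base < self.next_seq_num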
Reliable Data Transfer
• A Few Interesting Scenarios
Congestion Control
• The timer expiration is most likely caused by congestion in the network,
• That is, too many packets arriving at one (or more) router queues in the path between the source
and destination, causing packets to be dropped and/or long queuing delays.
• In times of congestion, if the sources continue to retransmit packets persistently, the congestion
may get worse.
• Instead, TCP acts more politely, with each sender retransmitting after longer and longer intervals.
• This modification provides a limited form of congestion control.
Congestion Control
Doubling the Timeout Interval
• Whenever the timeout event occurs, TCP retransmits the not-yet-acknowledged segment with the
smallest sequence number.
• But each time TCP retransmits, it sets the next timeout interval to twice the previous value.
• Suppose the TimeoutInterval associated with the oldest not-yet-acknowledged segment is 0.75 sec
when the timer first expires. TCP will then retransmit this segment and set the new expiration
time to 1.5 sec.
• If the timer expires again 1.5 sec later, TCP will again retransmit this segment, now setting the
expiration time to 3.0 sec.
• Thus, the intervals grow exponentially after each retransmission.
Note: whenever the timer is restarted after either of the other two events (data received from the application
above, or ACK received), the TimeoutInterval is derived from the most recent values of EstimatedRTT and DevRTT.
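A quick sketch of the doubling rule using the numbers from the example above:

    # Each retransmission doubles TimeoutInterval, so intervals grow exponentially.
    timeout_interval = 0.75                  # value when the timer first expires (seconds)
    for retransmission in (1, 2, 3):
        timeout_interval *= 2
        print(f"after retransmission {retransmission}: {timeout_interval} s")
    # prints 1.5 s, 3.0 s, 6.0 s; a newly measured SampleRTT later resets
    # TimeoutInterval to EstimatedRTT + 4 * DevRTT.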
