
QUIC Protocol

Contents

 Versions of HTTP
 What is the QUIC protocol
 What's QUIC used for
 What advantages does QUIC offer
 Packets and frames
 QUIC streams
 QUIC recovery and flow control
 Load balancing QUIC
HTTP Version 1.0

 In this context, version 1.0 of HTTP was released in 1996, about five years after version 0.9.
 Version 1.0 of HTTP brings several new utilities. Let’s see some of them:
 Header: only the method and the resource name composed an HTTP 0.9 request. HTTP 1.0, in turn,
introduced the HTTP header, thus allowing the transmission of metadata that made the protocol
flexible and extensible
 Versioning: each HTTP request now explicitly states the protocol version, appended to the request
line

 Status code: HTTP responses now contain a status code, thus enabling the receiver to check the
request processing status (successful or failed)
 Content-type: thanks to the HTTP header, specifically the Content-Type field, HTTP can transmit
document types other than plain HTML files
 New methods: besides GET, HTTP 1.0 provides two new methods (POST and HEAD)
 In summary, HTTP 1.0 became much more robust than version 0.9, and the new header mechanism is the
feature most responsible for the protocol's flexibility. A raw HTTP 1.0 request is sketched below.
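To make the new request format concrete, here is a minimal sketch of an HTTP 1.0 exchange over a raw socket in Python; the host example.com, the path, and the header values are placeholders chosen for illustration.

    import socket

    # HTTP 1.0: the request line carries the version, and headers follow it.
    request = (
        "GET /index.html HTTP/1.0\r\n"
        "Host: example.com\r\n"
        "Accept: text/html\r\n"
        "\r\n"
    )

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request.encode("ascii"))
        response = sock.recv(4096).decode("ascii", errors="replace")

    # The first line of the response carries the status code, e.g. "HTTP/1.0 200 OK".
    print(response.splitlines()[0])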
HTTP Version 1.1

 Version 1.1 of HTTP was released in 1997, only one year after the previous version
1.0. HTTP 1.1 is an enhancement of HTTP 1.0, providing several extensions.
 Among the most relevant enhancements, we can cite the following:
 Host header: HTTP 1.0 does not officially require the Host header, while HTTP 1.1 requires it by
specification. The Host header is especially important for routing messages through proxy
servers, since it allows servers to distinguish between domains that resolve to the same IP address
 Persistent connections: in HTTP 1.0, each request/response pair requires opening a new
connection. In HTTP 1.1, several requests can be executed over a single connection (see the sketch after this list)
 Continue status: to avoid servers rejecting unprocessable requests, clients can now first send
only the request headers and wait for a 100 (Continue) status code before sending the body
 New methods: besides the methods already available in HTTP 1.0, version 1.1 added PUT,
DELETE, CONNECT, TRACE, and OPTIONS (PATCH was standardized later, in RFC 5789)
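The sketch below illustrates persistent connections and the mandatory Host header using Python's standard library; the host and paths are placeholders.

    from http.client import HTTPConnection

    # One TCP connection is opened and reused for several request/response pairs.
    conn = HTTPConnection("example.com", 80)

    for path in ("/", "/about", "/contact"):
        # http.client adds the mandatory Host header automatically.
        conn.request("GET", path)
        resp = conn.getresponse()
        print(path, resp.status)
        resp.read()  # drain the body so the connection can be reused

    conn.close()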
HTTP Version 2.0

 HTTP version 2.0 was officially released in 2015, about eighteen years after HTTP 1.1. In particular, HTTP
2.0 focused on improving the protocol's performance.
 To do that, HTTP 2.0 implemented several features to improve connections and data exchange. Let’s see some of
them:
 Request multiplexing: HTTP 1.1 is a sequential protocol, so we can send only a single request at a time. HTTP 2.0, in
turn, allows requests to be sent and responses to be received asynchronously. In this way, we can make multiple requests at the
same time over a single connection (a sketch follows this list)
 Request prioritization: with HTTP 2.0, we can assign a numeric priority to requests in a batch. Thus, we can be
explicit about the order in which we expect the responses, such as receiving a page's CSS before its JS files
 Header compression: in the previous version of HTTP (1.1), compression must be negotiated explicitly.
HTTP 2.0, however, automatically compresses request and response headers using the HPACK format
 Stream reset: a functionality that allows either side to immediately cancel an individual exchange (stream) for any reason,
without tearing down the whole connection
 Server push: to reduce the number of requests a server receives, HTTP 2.0 introduced a server push functionality. With
it, the server tries to predict the resources that will be requested soon and proactively pushes those
resources into the client cache
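As a rough illustration of request multiplexing, the sketch below issues several requests over one HTTP/2 connection. It assumes the third-party httpx library (installed with its optional http2 extra) and an HTTP/2-capable server; the URLs are placeholders.

    import asyncio
    import httpx

    async def main() -> None:
        async with httpx.AsyncClient(http2=True) as client:
            urls = ["https://example.com/" + p for p in ("style.css", "app.js", "logo.png")]
            # The requests share one connection and complete asynchronously,
            # rather than strictly one after another as in HTTP 1.1.
            responses = await asyncio.gather(*(client.get(u) for u in urls))
            for r in responses:
                print(r.url, r.http_version, r.status_code)

    asyncio.run(main())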
HTTP Version 3.0

 HTTP 3.0 began as an Internet-Draft, unlike the previous HTTP versions,
which were published as Request For Comments (RFC) documents of the Internet
Engineering Task Force (IETF). Its first draft was published in 2020, and it was later
standardized as RFC 9114 in 2022.
 The main difference between HTTP 2.0 and HTTP 3.0 is the underlying
transport layer protocol. HTTP 2.0 runs over TCP connections, with or
without TLS (HTTPS or HTTP). HTTP 3.0, in turn, is designed over QUIC (Quick
UDP Internet Connections).
 QUIC, in short, is a transport layer protocol with native multiplexing and
built-in encryption. QUIC provides a quick handshake process, besides being
able to mitigate latency problems in lossy and slow connections.
 QUIC, as defined by the Internet Engineering Task Force (IETF), is an
encrypted connection-oriented protocol that operates at the Transport Layer,
or Layer 4, in the OSI model. While only formally adopted as a standard by
the IETF in May 2021, its roots date back nearly a decade.
 In 2012, engineers at Google originally developed the Quick UDP Internet
Connections protocol as an experiment to improve the performance of
Google’s web applications. While QUIC was originally an acronym, the official
standard in RFC 9000 describes QUIC as a name, not an acronym. Personally, I
think Quick UDP Internet Connections is helpful for immediate context, but I
have to remind myself that the name is, in fact, just QUIC.
 In one word, the motivation behind the development of QUIC is speed. In
contrast to HTTPS leveraging TLS, which is built on top of the TCP protocol,
QUIC is built on top of UDP. This comes with one clear advantage: the time to
the first valuable communication drops significantly.
 Since QUIC uses UDP, there is no need to complete a complex handshake to
establish the first connection. The protocol folds encryption setup and key
exchange into the initial handshake, so it takes only one round trip to
establish a path for communication (a back-of-the-envelope comparison follows below).
 And while UDP itself is a connectionless protocol, and therefore technically
unreliable (meaning that packets can get lost), QUIC handles identifying lost
data and completing re-transmissions to ensure a seamless user experience.
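For a rough sense of what this saves, here is a back-of-the-envelope comparison in Python, assuming an illustrative 50 ms network round-trip time.

    # Illustrative setup-cost comparison; the 50 ms RTT is an assumed value.
    rtt_ms = 50

    tcp_handshake = 1 * rtt_ms    # TCP three-way handshake
    tls13_handshake = 1 * rtt_ms  # TLS 1.3 adds one more round trip on top of TCP
    quic_handshake = 1 * rtt_ms   # QUIC combines transport and crypto setup

    print("TCP + TLS 1.3 before first request:", tcp_handshake + tls13_handshake, "ms")
    print("QUIC before first request:         ", quic_handshake, "ms")
    # On resumption, QUIC can send application data in its very first flight
    # (0-RTT), reducing the setup cost to effectively zero.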
What’s QUIC used for?

 The need for a protocol like QUIC is obvious, at least in hindsight. When TCP,
HTTP, and SSL / TLS became the standards for web traffic, they were the best
protocols available at that time. They were, however, all built independently
of each other: HTTP was designed to run over TCP, yet the two layers operate
independently, and TCP does not care what type of traffic it carries.
What advantages does QUIC offer?

 Faster connections
 QUIC starts a connection with a single packet (or two packets if it
is the very first connection), and even transmits all necessary TLS or
HTTPS parameters in that exchange. In most cases, a client can send data to a
server without waiting for a response, while TCP must first obtain and
process the server's acknowledgement.
 Multiplex connection options
 The QUIC protocol solves this in a different way: it uses connection
IDs (64-bit in Google's original design) and multiple 'streams' to transport data
within a connection. Therefore, a QUIC connection is not necessarily
bound to a specific UDP port, an IP address, or
a specific endpoint. As a consequence, port and IP changes are both
viable, as is the multiplexing described earlier.
 Assignment of unique sequence numbers
 Each data segment of a QUIC connection receives its own
sequence number, regardless of whether it is
an original or a retransmitted segment. Continuously numbering
the packets is advantageous because it allows a
more accurate round-trip time (RTT) estimate (a smoothed-RTT sketch follows this list).
 Forward error correction
 Lost packets do not present a big problem when
transporting data over QUIC. Thanks to a simple XOR-based
error correction scheme, the corresponding data does not always
have to be retransmitted. (This feature was part of Google's experimental
QUIC and was not carried over into the IETF standard.)
 Overload control (packet pacing)
 The QUIC protocol counteracts load peaks with 'packet pacing'.
This procedure ensures that the transmission rate is automatically
limited, so even low-bandwidth connections are not
overloaded. This is not a new technique, however: some Linux kernels
also use packet pacing for the TCP protocol.
 Authentication and encryption
 Security has been a key aspect of the planning and design of QUIC right from
the very beginning. The developers also prioritized solving one
of TCP's biggest issues: the header of a sent packet is in plain text and can
be read without prior authentication, which makes man-in-the-middle attacks
far from uncommon. QUIC packets, however, are always authenticated
and largely encrypted (including the payload). The parts of the header that are
not encrypted are protected from injection and tampering
by authentication on the receiver's end.
 Hardware independence
 QUIC support is only required at the application level. It is up to individual
software vendors to integrate it; they are not dependent on
hardware manufacturers. To date, it is mainly Google applications such as Google's
servers and Google Chrome that have QUIC implemented. However, third-party
software such as the Opera browser, the Caddy server, and LiteSpeed Technologies'
load-balancing and web-server products already enables connections
over the new transport protocol.
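As a sketch of how the continuously numbered packets feed an RTT estimate, the following uses the smoothing approach described in RFC 9002; the class name and sample values are made up for illustration.

    class RttEstimator:
        """Exponentially weighted RTT estimate, in the spirit of RFC 9002."""

        def __init__(self) -> None:
            self.smoothed_rtt = None  # weighted mean of RTT samples
            self.rtt_var = None       # mean deviation of RTT samples

        def on_ack(self, rtt_sample: float) -> None:
            # One sample is the time between sending a packet and receiving its ACK.
            if self.smoothed_rtt is None:
                self.smoothed_rtt = rtt_sample
                self.rtt_var = rtt_sample / 2
            else:
                self.rtt_var = 0.75 * self.rtt_var + 0.25 * abs(self.smoothed_rtt - rtt_sample)
                self.smoothed_rtt = 0.875 * self.smoothed_rtt + 0.125 * rtt_sample

    est = RttEstimator()
    for sample in (0.052, 0.048, 0.090, 0.051):
        est.on_ack(sample)
    print(f"smoothed RTT ~ {est.smoothed_rtt * 1000:.1f} ms")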
Packets and frames

 The QUIC protocol sends packets along the connection. Packets are
individually numbered in a 62-bit number space, and a numbered packet is
never retransmitted as such. If data is to be retransmitted, it is carried
in a new packet with the next packet number in the sequence. That way
there is a clear distinction between the reception of an original packet and a
retransmission of the data payload (see the sketch at the end of this section).
 Multiple QUIC packets can be loaded into a single UDP datagram. QUIC UDP
datagrams must not be fragmented, and unless the endpoint performs path MTU
discovery, QUIC assumes that the path can support a 1,200-byte UDP
payload.
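A minimal sketch of the never-reuse-a-packet-number rule: retransmitted data is bundled into a brand-new packet with the next number in sequence. The class and method names are invented for illustration.

    class PacketSender:
        MAX_PACKET_NUMBER = 2**62 - 1  # packet numbers live in a 62-bit space

        def __init__(self) -> None:
            self.next_packet_number = 0
            self.in_flight = {}  # packet number -> frames it carried

        def send(self, frames: list) -> int:
            pn = self.next_packet_number
            assert pn <= self.MAX_PACKET_NUMBER
            self.next_packet_number += 1
            self.in_flight[pn] = frames
            return pn

        def on_loss(self, lost_pn: int) -> int:
            # The lost packet itself is never retransmitted; its frames are
            # placed into a new packet with a fresh number.
            frames = self.in_flight.pop(lost_pn)
            return self.send(frames)

    sender = PacketSender()
    first = sender.send(["STREAM data"])
    print(first, "->", sender.on_loss(first))  # 0 -> 1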
QUIC streams

 A QUIC connection is further broken into streams. Each QUIC stream provides
an ordered byte-stream abstraction to an application similar in nature to a
TCP byte-stream. QUIC allows for an arbitrary number of concurrent streams
to operate over a connection. Applications may indicate the relative priority
of streams.
 Each QUIC stream is identified by a unique stream ID, whose two least
significant bits indicate which endpoint initiated the stream and
whether the stream is bidirectional or unidirectional (a decoding sketch follows this list).
 QUIC endpoints can decide how to allocate bandwidth between different
streams, and how to prioritize the transmission of different stream frames
based on information from the application
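A small sketch of decoding those two bits, following the stream-ID layout in RFC 9000:

    def describe_stream(stream_id: int) -> str:
        # Bit 0: initiator (0 = client, 1 = server); bit 1: 0 = bidirectional, 1 = unidirectional.
        initiator = "server" if stream_id & 0x1 else "client"
        direction = "unidirectional" if stream_id & 0x2 else "bidirectional"
        return f"stream {stream_id}: {initiator}-initiated, {direction}"

    for sid in (0, 1, 2, 3):
        print(describe_stream(sid))
    # stream 0: client-initiated, bidirectional
    # stream 1: server-initiated, bidirectional
    # stream 2: client-initiated, unidirectional
    # stream 3: server-initiated, unidirectional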
QUIC datagrams

 In addition to reliable streams, QUIC also supports an unreliable but
secured data delivery service with DATAGRAM frames, which are not
retransmitted upon loss detection (RFC 9221). When an application
sends a datagram over a QUIC connection, QUIC generates a
DATAGRAM frame and sends it in the first available packet. When a QUIC
endpoint receives a valid DATAGRAM frame, it is expected to
deliver the data to the application immediately. These DATAGRAM
frames are not associated with any stream. A sketch of a DATAGRAM
frame's wire encoding follows.
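The sketch below builds a DATAGRAM frame as described in RFC 9221: frame type 0x31 carries an explicit length (encoded as a QUIC variable-length integer from RFC 9000) so other frames can follow it in the same packet. The payload is a placeholder.

    def encode_varint(value: int) -> bytes:
        # QUIC varints use the top two bits of the first byte to signal the length.
        if value < 2**6:
            return value.to_bytes(1, "big")
        if value < 2**14:
            return (value | (0b01 << 14)).to_bytes(2, "big")
        if value < 2**30:
            return (value | (0b10 << 30)).to_bytes(4, "big")
        return (value | (0b11 << 62)).to_bytes(8, "big")

    def datagram_frame(payload: bytes) -> bytes:
        # Frame type 0x31 = DATAGRAM with an explicit length field.
        return bytes([0x31]) + encode_varint(len(payload)) + payload

    frame = datagram_frame(b"unreliable but encrypted application data")
    print(frame.hex())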
QUIC frames

 Each packet contains a sequence of frames. Frames have a frame type field
and type-dependent data. The QUIC standard defines 20 different frame
types. They serve a purpose analogous to the TCP flags, carrying control
signals about the state of streams and of the connection itself.
 Frame types include padding, ping (or keepalive), ACK frames for received
packet numbers (which themselves contain ECN counts and ACK ranges), as
well as stream data frames and DATAGRAM frames (a few of the type codes are sketched below).
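For orientation, here is a small sketch naming a handful of the frame type codes defined in RFC 9000 and RFC 9221; parsing of the type-dependent fields is deliberately omitted.

    # A few of the frame type codes from RFC 9000 / RFC 9221.
    FRAME_NAMES = {
        0x00: "PADDING",
        0x01: "PING",
        0x02: "ACK",               # 0x03 additionally carries ECN counts
        0x06: "CRYPTO",
        0x08: "STREAM",            # 0x08-0x0f; low bits flag FIN/length/offset
        0x1c: "CONNECTION_CLOSE",  # 0x1c-0x1d
        0x30: "DATAGRAM",          # 0x31 adds an explicit length field
    }

    def frame_name(frame_type: int) -> str:
        return FRAME_NAMES.get(frame_type, f"type 0x{frame_type:02x}")

    for t in (0x01, 0x02, 0x08, 0x30):
        print(frame_name(t))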
QUIC recovery and flow control

 QUIC packets contain one or more frames. QUIC performs loss detection based
on these packets, not on individual frames. For each packet that is
acknowledged by the receiver, all frames carried in that packet are
considered received. A packet is considered lost if it is still
unacknowledged when a later-sent packet has been acknowledged and
a loss threshold is met.
 QUIC uses two thresholds to determine loss. The first is a packet reordering
threshold k: when packet x is acknowledged, all unacknowledged
packets with a number less than x - k are considered lost. The second is
a time threshold derived from the QUIC-measured RTT: a waiting time w,
computed as a weight factor applied to the current estimated RTT.
If the most recent acknowledgement arrived at time t, then all
unacknowledged packets sent before time t - w are considered lost
(a sketch of both rules appears at the end of this section).
 For recovery, all frames in lost packets where the associated stream requires
retransmission will be placed into new packets for retransmission. The lost
packet itself is not retransmitted.
 As with TCP’s advertised receiver window, QUIC contains a mechanism to
enable a QUIC receiver to control the maximum amount of data that a sender
can send on an individual stream and the maximum amount on all streams at
any time. Also, as with TCP, the flow control algorithm to be used by reliable
streams is not specified by QUIC, although one such sender-side congestion
controller is defined in RFC 9002. This is an algorithm similar to TCP’s New
Reno (RFC 6582).
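The following is a minimal sketch of those two loss-detection rules, assuming a packet reordering threshold of 3 and a time threshold of 9/8 of the RTT (the constants suggested in RFC 9002); the function name and data shapes are invented for illustration.

    PACKET_THRESHOLD = 3           # reordering threshold (k)
    TIME_THRESHOLD_FACTOR = 9 / 8  # waiting time w = 9/8 * RTT

    def detect_lost(unacked: dict, largest_acked: int, ack_time: float, rtt: float) -> list:
        """unacked maps packet number -> send time; returns packet numbers deemed lost."""
        waiting = TIME_THRESHOLD_FACTOR * rtt
        lost = []
        for pn, sent_at in unacked.items():
            if pn >= largest_acked:
                continue  # only packets older than the acknowledged one can be declared lost
            reordering_exceeded = largest_acked - pn >= PACKET_THRESHOLD
            too_old = sent_at <= ack_time - waiting
            if reordering_exceeded or too_old:
                lost.append(pn)
        return lost

    unacked = {1: 0.00, 2: 0.01, 5: 0.20}
    print(detect_lost(unacked, largest_acked=6, ack_time=0.25, rtt=0.05))  # [1, 2]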
