The Internet Protocol (IP) is the principal communications protocol used for relaying

datagrams (packets) across an internetwork using the Internet Protocol Suite. Responsible for
routing packets across network boundaries, it is the primary protocol that establishes the Internet.

IP is the primary protocol in the Internet Layer of the Internet Protocol Suite and has the task of
delivering datagrams from the source host to the destination host solely based on their addresses.
For this purpose, IP defines addressing methods and structures for datagram encapsulation.

Historically, IP was the connectionless datagram service in the original Transmission Control
Program introduced by Vint Cerf and Bob Kahn in 1974, the other being the connection-oriented
Transmission Control Protocol (TCP). The Internet Protocol Suite is therefore often referred to
as TCP/IP.

The first major version of IP, now referred to as Internet Protocol Version 4 (IPv4), is the
dominant protocol of the Internet, although its successor, Internet Protocol Version 6 (IPv6), is
in active, growing deployment worldwide.

Services provided
The Internet Protocol is responsible for addressing hosts and routing datagrams (packets) from a
source host to the destination host across one or more IP networks. For this purpose the Internet
Protocol defines an addressing system that has two functions. Addresses identify hosts and
provide a logical location service. Each packet is tagged with a header that contains the
metadata for the purpose of delivery. This process of tagging is also called encapsulation.
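
As a rough illustration, encapsulation can be sketched by packing the fixed 20-byte IPv4 header
in front of a payload. This is a simplified sketch in Python, not a full implementation: the field
values chosen here (a TTL of 64, protocol number 17 for UDP) are illustrative, and the header
checksum field is left at zero.

```python
import socket
import struct

def encapsulate(payload: bytes, src: str, dst: str, proto: int = 17) -> bytes:
    """Prepend a minimal 20-byte IPv4 header (checksum left at zero) to a payload."""
    version_ihl = (4 << 4) | 5          # version 4, header length 5 * 32-bit words
    total_length = 20 + len(payload)
    header = struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl,                    # version + IHL
        0,                              # type of service
        total_length,                   # length of header plus data
        0,                              # identification
        0,                              # flags + fragment offset
        64,                             # time to live
        proto,                          # protocol carried (17 = UDP)
        0,                              # header checksum (computed separately)
        socket.inet_aton(src),          # source address
        socket.inet_aton(dst),          # destination address
    )
    return header + payload

packet = encapsulate(b"hello", "192.168.0.1", "10.0.0.1")
version = packet[0] >> 4                # first nibble carries the version: 4
```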

IP is a connectionless protocol and does not need circuit setup prior to transmission.

Reliability
The design principles of the Internet protocols assume that the network infrastructure is
inherently unreliable at any single network element or transmission medium and that it is
dynamic in terms of availability of links and nodes. No central monitoring or performance
measurement facility exists that tracks or maintains the state of the network. For the benefit of
reducing network complexity, the intelligence in the network is purposely mostly located in the
end nodes of each data transmission, cf. end-to-end principle. Routers in the transmission path
simply forward packets to the next known local gateway matching the routing prefix for the
destination address.

As a consequence of this design, the Internet Protocol only provides best effort delivery and its
service can also be characterized as unreliable. In network architectural language it is a
connectionless protocol, in contrast to so-called connection-oriented modes of transmission. The
lack of reliability allows any of the following fault events to occur:

- data corruption
- lost data packets
- duplicate arrival
- out-of-order packet delivery; that is, if packet 'A' is sent before packet 'B', packet 'B'
may arrive before packet 'A'. Since routing is dynamic and the network keeps no memory of the
path of prior packets, the first packet sent may take a longer path to its destination.

The only assistance that the Internet Protocol provides in Version 4 (IPv4) is to ensure that the IP
packet header is error-free through computation of a checksum at the routing nodes. This has the
side-effect of discarding packets with bad headers on the spot. In this case no notification is
required to be sent to either end node, although a facility exists in the Internet Control Message
Protocol (ICMP) to do so.
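
The header checksum referred to here is the Internet checksum of RFC 1071: the one's-complement of
the one's-complement sum of the header's 16-bit words. A minimal sketch in Python, applied to a
commonly cited sample header (with its checksum field zeroed):

```python
def ip_checksum(header: bytes) -> int:
    """RFC 1071 Internet checksum: one's complement of the 16-bit one's-complement sum."""
    if len(header) % 2:
        header += b"\x00"               # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total > 0xFFFF:               # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Sample 20-byte header with the checksum field (7th 16-bit word) zeroed.
header = bytes.fromhex("450000730000400040110000c0a80001c0a800c7")
print(hex(ip_checksum(header)))         # 0xb861
```

A router verifies a received header by running the same sum over the header with the checksum in
place; a correct header then yields zero.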

IPv6, on the other hand, has abandoned the use of IP header checksums for the benefit of rapid
forwarding through routing elements in the network.

The resolution or correction of any of these reliability issues is the responsibility of an upper
layer protocol. For example, to ensure in-order delivery the upper layer may have to cache data
until it can be passed to the application.
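
For instance, an upper layer that must present data in order can be sketched as a buffer keyed by
sequence number, releasing each contiguous run as it completes. The sequence numbers here are
hypothetical, supplied by the upper-layer protocol; IP itself carries none.

```python
def deliver_in_order(packets):
    """Buffer out-of-order (seq, data) packets and release them in sequence order."""
    buffer, next_seq, delivered = {}, 0, []
    for seq, data in packets:
        buffer[seq] = data
        while next_seq in buffer:       # flush every contiguous packet we now hold
            delivered.append(buffer.pop(next_seq))
            next_seq += 1
    return delivered

arrived = [(2, "C"), (0, "A"), (1, "B")]    # packets arriving out of order
print(deliver_in_order(arrived))            # ['A', 'B', 'C']
```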

In addition to issues of reliability, this dynamic nature and the diversity of the Internet and its
components provide no guarantee that any particular path is actually capable of, or suitable for,
performing the data transmission requested, even if the path is available and reliable. One of the
technical constraints is the size of data packets allowed on a given link. An application must
ensure that it uses proper transmission characteristics. Some of this responsibility lies also in the
upper layer protocols between application and IP. Facilities exist to examine the maximum
transmission unit (MTU) size of the local link, as well as for the entire projected path to the
destination when using IPv6. The IPv4 internetworking layer has the capability to automatically
fragment the original datagram into smaller units for transmission. In this case, IP does provide
re-ordering of fragments delivered out-of-order.[1]

Transmission Control Protocol (TCP) is an example of a protocol that will adjust its segment size
to be smaller than the MTU. User Datagram Protocol (UDP) and Internet Control Message
Protocol (ICMP) disregard MTU size thereby forcing IP to fragment oversized datagrams.[2]
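
The fragmentation described above can be sketched as follows. This is a simplification that tracks
only the fragment offset (in the 8-byte units IPv4 uses) and the more-fragments flag, ignoring the
identification field and header copying that a real implementation performs.

```python
def fragment(payload: bytes, mtu_data: int):
    """Split a payload into (offset, more_fragments, chunk) triples.

    IPv4 expresses offsets in 8-byte units, so every fragment's data size
    except the last must be a multiple of 8.
    """
    step = (mtu_data // 8) * 8          # largest multiple of 8 that fits the link
    frags = []
    for pos in range(0, len(payload), step):
        chunk = payload[pos:pos + step]
        more = pos + step < len(payload)    # more-fragments flag
        frags.append((pos // 8, more, chunk))
    return frags

def reassemble(frags):
    """Order fragments by offset and concatenate their data."""
    return b"".join(chunk for _, _, chunk in sorted(frags))

data = bytes(range(50))
frags = fragment(data, 20)              # link accepts at most 20 data bytes
print([(off, more, len(chunk)) for off, more, chunk in frags])
```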

IP addressing and routing


Perhaps the most complex aspects of IP are IP addressing and routing. Addressing refers to how end
hosts are assigned IP addresses and how subnetworks of IP host addresses are divided and grouped
together. IP routing is performed by all hosts, but most importantly by internetwork routers, which
typically use either interior gateway protocols (IGPs) or exterior gateway protocols (EGPs) to help
make IP datagram forwarding decisions across IP-connected networks.
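
The core of such a forwarding decision is a longest-prefix match against the routing table: of all
prefixes that contain the destination address, the most specific one wins. A minimal sketch using
Python's ipaddress module, with hypothetical next-hop names:

```python
import ipaddress

def lookup(routes, destination):
    """Return the next hop of the longest routing prefix matching the destination."""
    dest = ipaddress.ip_address(destination)
    best = None
    for prefix, next_hop in routes.items():
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, next_hop)    # more specific prefix wins
    return best[1] if best else None

routes = {
    "10.0.0.0/8": "router-a",       # broad prefix
    "10.1.0.0/16": "router-b",      # more specific prefix inside it
    "0.0.0.0/0": "default-gw",      # default route matches everything
}
print(lookup(routes, "10.1.2.3"))   # router-b
```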

Version history
In May 1974, the Institute of Electrical and Electronics Engineers (IEEE) published a paper
entitled "A Protocol for Packet Network Interconnection."[3] The paper's authors, Vint Cerf and
Bob Kahn, described an internetworking protocol for sharing resources using packet-switching
among the nodes. A central control component of this model was the "Transmission Control
Program" (TCP) that incorporated both connection-oriented links and datagram services between
hosts. The monolithic Transmission Control Program was later divided into a modular
architecture consisting of the Transmission Control Protocol at the connection-oriented layer and
the Internet Protocol at the internetworking (datagram) layer. The model became known
informally as TCP/IP, although formally it was henceforth referenced as the Internet Protocol
Suite.

The Internet Protocol is one of the determining elements that define the Internet. The dominant
internetworking protocol in the Internet Layer in use today is IPv4, with the number 4 assigned as
the formal protocol version number carried in every IP datagram. IPv4 is described in RFC 791
(1981).

The successor to IPv4 is IPv6. Its most prominent modification from version 4 is the addressing
system. IPv4 uses 32-bit addresses (c. 4 billion, or 4.3×10^9, addresses) while IPv6 uses 128-bit
addresses (c. 340 undecillion, or 3.4×10^38, addresses). Although adoption of IPv6 has been slow,
as of June 2008, all United States government systems have demonstrated basic infrastructure
support for IPv6 (if only at the backbone level).[4]
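
The two address-space sizes quoted above follow directly from the address widths and can be
checked with a few lines of arithmetic:

```python
ipv4_space = 2 ** 32      # number of distinct 32-bit addresses
ipv6_space = 2 ** 128     # number of distinct 128-bit addresses

print(ipv4_space)             # 4294967296, i.e. about 4.3 x 10^9
print(f"{ipv6_space:.1e}")    # 3.4e+38
```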

Version numbers 0 through 3 were development versions of IPv4 used between 1977 and 1979.
Version number 5 was used by the Internet Stream Protocol, an experimental streaming protocol.
Version numbers 6 through 9 were proposed for various protocol models designed to replace IPv4:
SIPP (Simple Internet Protocol Plus, now known as IPv6), TP/IX (RFC 1475), PIP (RFC 1621) and
TUBA (TCP and UDP with Bigger Addresses, RFC 1347). Version number 6 was eventually chosen as the
official assignment for the successor Internet protocol, subsequently standardized as IPv6.

A humorous Request for Comments that made an IPv9 protocol the center of its storyline was
published on April 1, 1994, by the IETF.[5] It was intended as an April Fools' Day joke. Other
protocol proposals named "IPv9" and "IPv8" have also briefly surfaced, though these came with
little or no support from the wider industry and academia.[6]

Reference diagrams
- Sample encapsulation of application data from UDP to a Link protocol frame
- Internet Protocol Suite in operation between two hosts connected via two routers, and the
corresponding layers used at each hop

Advantages and disadvantages


Computers that are connected to each other create a network. These networks are often configured
with "public" Internet Protocol (IP) addresses -- that is, the devices on the network are "visible" to
devices outside the network (from the Internet or another network). Networks can also be configured
as "private" -- meaning that devices outside the network cannot "see" or communicate directly with
them.
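
Whether an address belongs to one of the designated private ranges (such as 10.0.0.0/8 or
192.168.0.0/16) can be checked with Python's ipaddress module; the sample addresses below are
arbitrary:

```python
import ipaddress

for addr in ["192.168.1.10", "10.0.0.5", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    # is_private is True for the reserved private-use ranges
    print(addr, "private" if ip.is_private else "public")
```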

Computers on a public network have the advantage (and disadvantage) that they are completely visible
to the Internet. As such, they have no boundaries between themselves and the rest of the Internet
community. This advantage oftentimes becomes a distinct disadvantage since this visibility can lead to a
computer vulnerability exploit -- a.k.a., a "hack" -- if the devices on the public network are not properly
secured.

Most likely, your computer at work is on the Medical Center's private network. A public/private
network like the Medical Center's has the advantage that the majority of the network is
"privatized," and therefore unseen directly from the Internet. Only a limited number of computers,
such as those hosting our public Web sites, are on the public network and are therefore accessible
from the Internet. We typically set these Web servers into a protected area known as a DMZ. By
minimizing exposure to the Internet, the Medical Center's network attracts less attention for
malicious network attacks.

The disadvantage of a private network is that it entails more configuration and administration to
maintain usability. At times, not being fully visible on the Internet can cause some difficulty in
connecting to certain services, such as streaming audio/video, chat/instant messaging programs, or
some secure Web sites.

Maintaining most computers on a private network, with only an IDS/IPS and/or firewall visible to
the public Internet, helps maintain a highly secure environment for the computers on the private
network, while at the same time keeping them connected to the public Internet.

Types of Internet Protocols

There's more to the Internet than the World Wide Web


When we think of the Internet we often think only of the World Wide Web. The Web is only one of
several ways to retrieve information from the Internet. These different methods of exchanging
information are known as protocols. You could use a separate software application to access the
Internet with each of these protocols, though you probably wouldn't need to: many Web browsers
allow users to access files using most of the protocols. Following are three categories of
Internet services and examples of types of services in each category.

File retrieval protocols


This type of service was one of the earliest ways of retrieving information from computers
connected to the Internet. You could view the names of the files stored on the serving computer,
but you didn't have any type of graphics and sometimes no description of a file's content. You
would need to have advanced knowledge of which files contained the information you sought.

FTP (File Transfer Protocol)


This was one of the first Internet services developed, and it allows users to move files from one
computer to another. Using the FTP program, a user can log on to a remote computer, browse
through its files, and either download or upload files (if the remote computer allows). These can
be any type of file, but the user is only allowed to see the file name; no description of the file
content is included. You might encounter the FTP protocol if you try to download software
applications from the World Wide Web, since many sites that offer downloadable applications use
the FTP protocol.
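
A typical FTP session of this kind can be sketched with Python's standard ftplib module. The host
and file names below are hypothetical, so the function is defined but not run here:

```python
from ftplib import FTP

def fetch_file(host: str, remote_name: str, local_name: str) -> None:
    """Log in anonymously, list the server's file names, and download one file."""
    with FTP(host) as ftp:              # connect on the standard FTP port (21)
        ftp.login()                     # no arguments means anonymous login
        print(ftp.nlst())               # file names only -- no content descriptions
        with open(local_name, "wb") as out:
            ftp.retrbinary(f"RETR {remote_name}", out.write)

# fetch_file("ftp.example.com", "README", "README")   # hypothetical server
```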
Telnet
You can connect to and use a remote computer program by using the telnet protocol. Generally
you would telnet into a specific application housed on a serving computer, which would allow you
to use that application as if it were on your own computer. Again, using this protocol requires
special software.

There are two versions of the Internet Protocol (IP) in use: IPv4, which is currently the most
widely used, and IPv6, which is being rolled out but is not yet widely deployed.
