
1. Communication Protocol
Communication protocols define the manner in which peer processes
communicate between computer hardware devices. The protocols give the
rules for such things as the passing of messages, the exact formats of the
messages and how to handle error conditions.
If two computers are communicating and they both follow the protocol(s)
properly, the exchange is successful, regardless of what types of machines
they are and what operating systems are running on the machines. As long as
the machines have software that can manage the protocol, communication is
possible.
Essentially, therefore, a computer protocol is a set of rules that coordinates
the exchange of information.
2. Packet
The term packet is used often in data communications, sometimes
incorrectly.
To transfer data effectively, it is usually better to transfer uniform chunks of
data than to send characters singly or in widely varying sized groups.
Usually these chunks of data have some information ahead of them (called
the header) and sometimes an indicator at the end (called the trailer). These
chunks of data are loosely called packets. In some data communications
systems, "packets" refer to the units of data passed between two specific
layers in a protocol hierarchy e.g. the Data Link Layer and the Network
Layer of the OSI 7 layer model.
The amount of data in a packet and the composition of the header or trailer
may vary depending on the communications protocol as well as some
system parameters, but the concept of a packet always refers to the entire
chunk of data (including header and trailer).
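The header/payload/trailer layout described above can be sketched with Python's `struct` module. The field sizes and the one-byte checksum below are illustrative choices for this sketch, not any real protocol's format.

```python
import struct

# A toy packet: a 4-byte header (2-byte payload length, 2-byte sequence
# number), the payload itself, and a 1-byte trailer holding a simple checksum.
def build_packet(payload: bytes, seq: int) -> bytes:
    header = struct.pack("!HH", len(payload), seq)  # "!" = network byte order
    trailer = bytes([sum(payload) % 256])           # toy checksum, not a real CRC
    return header + payload + trailer

def parse_packet(packet: bytes):
    length, seq = struct.unpack("!HH", packet[:4])
    payload = packet[4:4 + length]
    checksum = packet[4 + length]
    assert checksum == sum(payload) % 256, "corrupted payload"
    return seq, payload

pkt = build_packet(b"hello", seq=1)
print(parse_packet(pkt))  # (1, b'hello')
```

Both sides must agree on this layout in advance; that shared agreement is exactly what a protocol specifies.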
3. A Host
A network host is a computer or other device connected to a computer
network. A network host may offer information resources, services, and
applications to users or other nodes on the network. A network host is a
network node that is assigned a network layer host address.
Computers participating in networks that use the Internet Protocol Suite may
also be called IP hosts. Specifically, computers participating in the Internet
are called Internet hosts, sometimes Internet nodes. Internet hosts and other
IP hosts have one or more IP addresses assigned to their network interfaces.

The addresses are configured either manually by an administrator,
automatically at start-up by means of the Dynamic Host Configuration
Protocol (DHCP), or by stateless address autoconfiguration methods.
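Python's standard `ipaddress` module can illustrate the relationship between a host's configured address and the network it belongs to. The address below is an arbitrary private-range example.

```python
import ipaddress

# A host interface is configured with an address plus a prefix length (as an
# administrator or DHCP would assign); ipaddress splits host and network parts.
iface = ipaddress.ip_interface("192.168.1.42/24")

print(iface.ip)                   # 192.168.1.42
print(iface.network)              # 192.168.1.0/24
print(iface.ip in iface.network)  # True: the host lies inside its own subnet
```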
4. Gateways
In a communications network, a gateway is a network node equipped for
interfacing with another network that uses different protocols.
A gateway may contain devices such as protocol translators, impedance
matching devices, rate converters, fault isolators, or signal translators as
necessary to provide system interoperability. It also requires the
establishment of mutually acceptable administrative procedures between
both networks.
A protocol translation/mapping gateway interconnects networks with
different network protocol technologies by performing the required protocol
conversions.
Loosely, the term also refers to a computer or computer program configured
to perform the tasks of a gateway; the default gateway is a specific case.
Gateways, also called protocol converters, can operate at any network layer.
The activities of a gateway are more complex than those of a router or
switch, as it communicates using more than one protocol.
Both the computers of Internet users and the computers that serve pages to
users are host nodes, while the nodes that connect the networks in between
are gateways. For example, the computers that control traffic between
company networks or the computers used by internet service providers
(ISPs) to connect users to the internet are gateway nodes.
5. A Router
A router is a device that forwards data packets along networks. A router is
connected to at least two networks, commonly two LANs or WANs or a
LAN and its ISP's network. Routers are located at gateways, the places
where two or more networks connect.
Routers use headers and forwarding tables to determine the best path for
forwarding the packets, and they use routing protocols such as RIP, OSPF,
and BGP to exchange information with each other and determine the best
route between any two hosts.

Very little filtering of data is done through routers.


6. Routing
Routing is the process of selecting the best paths in a network. In the past,
the term routing was also used to mean forwarding network traffic among
networks. However, this latter function is much better described as simply
forwarding. Routing is performed for many kinds of networks, including the
telephone network (circuit switching), electronic data networks (such as the
Internet), and transportation networks. This section is concerned primarily
with routing in electronic data networks using packet switching technology.
In packet switching networks, routing directs packet forwarding (the transit
of logically addressed network packets from their source toward their
ultimate destination) through intermediate nodes. Intermediate nodes are
typically network hardware devices such as routers, bridges, gateways,
firewalls, or switches. General-purpose computers can also forward packets
and perform routing, though they are not specialized hardware and may
suffer from limited performance. The routing process usually directs
forwarding on the basis of routing tables which maintain a record of the
routes to various network destinations. Thus, constructing routing tables,
which are held in the router's memory, is very important for efficient routing.
Most routing algorithms use only one network path at a time. Multipath
routing techniques enable the use of multiple alternative paths.
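The routing-table lookup described above can be sketched as a longest-prefix match: among all table entries that contain the destination, the most specific one wins. Real routers use specialized structures (tries, TCAMs); the prefixes and next-hop addresses below are invented for illustration.

```python
import ipaddress

ROUTES = {                       # destination prefix -> next hop (illustrative)
    "0.0.0.0/0":      "10.0.0.1",   # default route, matches everything
    "192.168.0.0/16": "10.0.0.2",
    "192.168.5.0/24": "10.0.0.3",
}

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    # Among all prefixes containing the destination, pick the longest one.
    best = max((ipaddress.ip_network(p) for p in ROUTES
                if addr in ipaddress.ip_network(p)),
               key=lambda n: n.prefixlen)
    return ROUTES[str(best)]

print(next_hop("192.168.5.7"))   # 10.0.0.3  (most specific match wins)
print(next_hop("8.8.8.8"))       # 10.0.0.1  (falls back to the default route)
```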
7. Network protocol
In networking, a communications protocol or network protocol is the
specification of a set of rules for a particular type of communication.
Multiple protocols often describe different aspects of a single
communication. A group of protocols designed to work together are known
as a protocol suite; when implemented in software they are a protocol stack.
The terms are often intermingled; people may use the term "protocol" to
refer to a software implementation, or use "protocol stack" to refer to the
specification.
Most recent protocols are designed by the IETF for Internet communications,
and by the IEEE or ISO for other types. The ITU-T handles
telecommunications protocols and formats for the PSTN. As the PSTN and
Internet converge, the two sets of standards are also being driven towards
convergence.

8. Protocol layering
In modern protocol design, protocols are "layered". Layering is a design
principle which divides the protocol design into a number of smaller parts,
each of which accomplishes a particular sub-task, and interacts with the
other parts of the protocol only in a small number of well-defined ways.
For example, one layer might describe how to encode text (with ASCII, say),
while another describes how to inquire for messages (with the Internet's
simple mail transfer protocol, for example), while another may detect and
retry errors (with the Internet's transmission control protocol), another
handles addressing (say with IP, the Internet Protocol), another handles the
encapsulation of that data into a stream of bits (for example, with the
point-to-point protocol), and another handles the electrical encoding of the
bits (with a V.42 modem, for example).
Layering allows the parts of a protocol to be designed and tested without a
combinatorial explosion of cases, keeping each design relatively simple.
Layering also permits familiar protocols to be adapted to unusual
circumstances. For example, the mail protocol above can be adapted to send
messages to aircraft. Just change the V.42 modem protocol to the INMARSAT
LAPD data protocol used by the international marine radio satellites.
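Layering works by encapsulation: each lower layer wraps the data handed down from the layer above with its own header, and the receiver peels those wrappings off in reverse order. The layer names and bracket "headers" below are made up for illustration.

```python
LAYERS = ["TCP", "IP", "PPP"]   # illustrative stack, upper to lower

def encapsulate(data: str) -> str:
    # Each layer wraps the data from the layer above with its own header.
    for layer in LAYERS:
        data = f"[{layer}|{data}]"
    return data

def decapsulate(frame: str) -> str:
    # The receiver strips one layer's wrapping at a time.
    while frame.startswith("["):
        frame = frame[frame.index("|") + 1:-1]
    return frame

frame = encapsulate("hello")
print(frame)               # [PPP|[IP|[TCP|hello]]]
print(decapsulate(frame))  # hello
```

Swapping one layer (say, replacing PPP in the list) leaves the other layers untouched, which is exactly the adaptability the mail-to-aircraft example relies on.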
9. Internet
If you wish to expose information to everyone in the world, then you would
build an Internet-type application. An Internet-type application uses Internet
protocols such as HTTP, FTP, or SMTP and is available to persons
anywhere on the Internet. We use the Internet and web applications as ways
to extend who the application can reach. For example, I no longer need to
go to the bank to transfer funds. Because the bank has built a web site on
the Internet, I can do that from the comfort of my own home.
10. Circuit Switching
The concept of circuit switching works very much like common telephone
networks today. To establish a data connection from point A to point Z, a
person must work out a direct path over a number of connection routes to the
destination. Once a route has been determined, the person needs to set aside
resources on that line to establish his connection, after which he may start
transmitting data. While resources have been allocated for that connection,
no one else may use that line until the first user has disconnected his host.
This raises some questions as to how people can share a circuit-switched
connection, two methods of which are outlined below.

One of the cons of a circuit-switched network is that it relies on a user
reserving and allocating resources for himself in order to use the network.
This adds to connection time and can create significant overhead in
establishing connections. In the event the user decides to hop off and get a
coffee without disconnecting, the resources remain reserved and
unavailable for use by others.
11. Time Division
The circuit is divided into time slots, each of which is allocated to a user
wanting to use the network. In each time slot available to a user, the user has
the full amount of bandwidth provided by the circuit. The more people using
the circuit, the more time it takes for each frame to be passed from sender
to receiver.
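Round-robin slot rotation, as described above, can be sketched in a few lines; the user names and slot count are arbitrary.

```python
from itertools import cycle

# Time-division sharing: slots on the circuit rotate among users in a fixed
# round-robin order; during its slot, a user gets the circuit's full bandwidth.
def assign_slots(users, n_slots):
    turn = cycle(users)
    return [next(turn) for _ in range(n_slots)]

print(assign_slots(["A", "B", "C"], 6))  # ['A', 'B', 'C', 'A', 'B', 'C']
```

Note how adding a fourth user would stretch the gap between any one user's slots, which is the delay growth the text describes.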
12. Frequency Division
The circuit is divided into a number of frequencies, each of which is
allocated to a user. The user has full control of that connection, limited only
by the bandwidth his frequency provides. Naturally, the more people
utilising the circuit, the less bandwidth each person has.
13. Packet Switching
This form of switching works on the concept of data being sent as discrete
chunks known as packets. When a person has data to send, a stream of
packets is sent into the network, and routers forward the packets to their
destination. There is no notion of reserved bandwidth, and as a result if too
many packets are sent into the network, the network may become congested
and packet loss will occur. This leads to the problem of congestion control.
The main idea behind packet-switched networks is that no single user needs
all the bandwidth of a connection at any one time, so packet switching cuts
out the waste of bandwidth that usually accompanies circuit-switched
networks. This results in greater efficiency and is the main reason why
packet-switched networks form the backbone of today's computer networks.
14. Delay
The transmission of information from one host to another is not
instantaneous. The amount of time it takes for data to be transmitted from
one end to the other depends on a number of factors, namely:
Nodal Processing
Queuing
Transmission Delay
Propagation Delay
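The four components listed above simply add up per hop. The sketch below uses illustrative numbers: a 1500-byte packet on a 1 Mb/s link over 100 km of fiber (signal speed roughly 2e8 m/s); the processing and queuing figures are made up.

```python
# End-to-end delay per hop is the sum of the four components listed above.
def nodal_delay_ms(processing, queuing, transmission, propagation):
    """All arguments and the result are in milliseconds."""
    return processing + queuing + transmission + propagation

transmission_ms = (1500 * 8) / 1e6 * 1000  # time to push all bits onto the link
propagation_ms = 100_000 / 2e8 * 1000      # time for one bit to cross the link

print(round(nodal_delay_ms(0.05, 2.0, transmission_ms, propagation_ms), 2))
# 14.55
```

Note that on this slow link the transmission delay (12 ms) dominates; on a gigabit link it would shrink a thousandfold while propagation delay stayed the same.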
15. Bridges
A network bridge connects and filters traffic between two network segments
at the data link layer (layer 2) of the OSI model to form a single network.
This breaks the network's collision domain but maintains a unified broadcast
domain. Network segmentation breaks down a large, congested network into
an aggregation of smaller, more efficient networks.
Bridges come in three basic types:
Local bridges: Directly connect LANs
Remote bridges: Can be used to create a wide area network (WAN) link
between LANs. Remote bridges, where the connecting link is slower than
the end networks, largely have been replaced with routers.
Wireless bridges: Can be used to join LANs or connect remote devices to
LANs.
16. Switches
A network switch is a device that forwards and filters OSI layer 2 datagrams
between ports based on the MAC addresses in the packets.[9] A switch is
distinct from a hub in that it only forwards the frames to the physical ports
involved in the communication rather than all ports connected. It can be
thought of as a multi-port bridge.[10] It learns to associate physical ports to
MAC addresses by examining the source addresses of received frames. If an
unknown destination is targeted, the switch broadcasts to all ports but the
source. Switches normally have numerous ports, facilitating a star topology
for devices, and cascading additional switches.
Multi-layer switches are capable of routing based on layer 3 addressing or
additional logical levels. The term switch is often used loosely to include
devices such as routers and bridges, as well as devices that may distribute
traffic based on load or based on application content (e.g., a Web URL
identifier).
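The learn-then-forward behaviour described above can be sketched as a table keyed by MAC address. The addresses and port numbers below are made up.

```python
# A sketch of a switch's MAC learning: record the source address/port of each
# frame, then forward out the learned port, or flood when the destination is
# still unknown.
table = {}  # MAC address -> port

def handle_frame(src, dst, in_port, all_ports):
    table[src] = in_port                      # learn where src is reachable
    if dst in table:
        return [table[dst]]                   # forward out the one known port
    return [p for p in all_ports if p != in_port]  # flood all other ports

ports = [1, 2, 3, 4]
print(handle_frame("aa:aa", "bb:bb", 1, ports))  # [2, 3, 4]  (flooded)
print(handle_frame("bb:bb", "aa:aa", 2, ports))  # [1]        (learned)
```

After the first two frames, the switch has learned both hosts and no longer floods traffic between them.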

17. Repeaters and hubs
A repeater is an electronic device that receives a network signal, cleans it of
unnecessary noise, and regenerates it. The signal is retransmitted at a higher
power level, or to the other side of an obstruction, so that the signal can
cover longer distances without degradation. In most twisted pair Ethernet
configurations, repeaters are required for cable that runs longer than 100
meters. With fiber optics, repeaters can be tens or even hundreds of
kilometers apart.
A repeater with multiple ports is known as a hub. Repeaters work on the
physical layer of the OSI model. Repeaters require a small amount of time to
regenerate the signal. This can cause a propagation delay that affects
network performance. As a result, many network architectures limit the
number of repeaters that can be used in a row, e.g., the Ethernet 5-4-3 rule.
Hubs have been mostly obsoleted by modern switches; but repeaters are
used for long distance links, notably undersea cabling.
18. Network interfaces
A network interface controller (NIC) is computer hardware that provides a
computer with the ability to access the transmission media, and has the
ability to process low-level network information. For example the NIC may
have a connector for accepting a cable, or an aerial for wireless transmission
and reception, and the associated circuitry.
The NIC responds to traffic addressed to a network address for either the
NIC or the computer as a whole.
19. A media access control address (MAC address)
It is a unique identifier assigned to network interfaces for communications
on the physical network segment. MAC addresses are used as a network
address for most IEEE 802 network technologies, including Ethernet and
WiFi. Logically, MAC addresses are used in the media access control
protocol sublayer of the OSI reference model.
MAC addresses are most often assigned by the manufacturer of a network
interface controller (NIC) and are stored in its hardware, such as the card's
read-only memory or some other firmware mechanism. If assigned by the
manufacturer, a MAC address usually encodes the manufacturer's registered
identification number and may be referred to as the burned-in address (BIA).

It may also be known as an Ethernet hardware address (EHA), hardware
address, or physical address. This can be contrasted to a programmed
address, where the host device issues commands to the NIC to use an
arbitrary address.
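The structure described above is easy to pick apart: the first three octets form the OUI (the manufacturer's registered identifier), and the two low bits of the first octet flag multicast and locally administered addresses. The address below is an arbitrary example.

```python
def parse_mac(mac: str):
    # Split a colon-separated MAC address into its meaningful parts.
    octets = [int(part, 16) for part in mac.split(":")]
    return {
        "oui": ":".join(mac.split(":")[:3]),            # vendor identifier
        "multicast": bool(octets[0] & 0x01),            # I/G bit
        "locally_administered": bool(octets[0] & 0x02), # U/L bit
    }

print(parse_mac("00:1a:2b:3c:4d:5e"))
# {'oui': '00:1a:2b', 'multicast': False, 'locally_administered': False}
```

A locally administered (programmed) address sets the U/L bit, which is how software-assigned addresses avoid colliding with burned-in ones.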
20. Ethernet
It is a family of computer networking technologies for local area networks
(LANs) and metropolitan area networks (MANs). It was commercially
introduced in 1980 and first standardized in 1983 as IEEE 802.3,[1] and has
since been refined to support higher bit rates and longer link distances. Over
time, Ethernet has largely replaced competing wired LAN technologies such
as token ring, FDDI, and ARCNET. The primary alternative for
contemporary LANs is not a wired standard, but instead a wireless LAN
standardized as IEEE 802.11 and also known as Wi-Fi.
Systems communicating over Ethernet divide a stream of data into shorter
pieces called frames. Each frame contains source and destination addresses
and error-checking data so that damaged data can be detected and
retransmitted. As per the OSI model, Ethernet provides services up to and
including the data link layer.
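Framing with error detection can be sketched in a few lines, loosely modeled on an Ethernet frame: destination address, source address, payload, then a CRC-32 frame check sequence. The 6-byte addresses here are arbitrary illustration values, and real Ethernet framing has more fields (type/length, preamble).

```python
import zlib

def make_frame(dst: bytes, src: bytes, payload: bytes) -> bytes:
    body = dst + src + payload
    fcs = zlib.crc32(body).to_bytes(4, "big")  # frame check sequence
    return body + fcs

def frame_ok(frame: bytes) -> bool:
    # Recompute the CRC over the body and compare with the trailing FCS.
    body, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == fcs

frame = make_frame(b"\x00" * 6, b"\x01" * 6, b"data")
print(frame_ok(frame))                  # True
print(frame_ok(b"\xff" + frame[1:]))    # False: the corruption is detected
```

A receiver that sees a CRC mismatch simply drops the frame; recovery (retransmission) is left to higher layers.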
21. Data Link Layer
In the seven-layer OSI model of computer networking, the data link layer is
layer 2; in the TCP/IP reference model, it is part of the link layer. The data
link layer is the protocol layer that transfers data between adjacent network
nodes in a wide area network or between nodes on the same local area
network segment.[1] The data link layer provides the functional and
procedural means to transfer data between network entities and might
provide the means to detect and possibly correct errors that may occur in the
physical layer.
Examples of data link protocols are Ethernet for local area networks
(multi-node), the Point-to-Point Protocol (PPP), HDLC and ADCCP for
point-to-point (dual-node) connections.
22. Protocol Data Units
In telecommunications, the term protocol data unit (PDU) has the following
meanings:
Information that is delivered as a unit among peer entities of a network and
that may contain control information, such as address information, or user

data.
In a layered system, a unit of data which is specified in a protocol of a
given layer and which consists of protocol-control information and possibly
user data of that layer. For example: Bridge PDU or iSCSI PDU.
23. HDLC
High-Level Data Link Control (HDLC) is a bit-oriented, code-transparent,
synchronous data link layer protocol developed by the International
Organization for Standardization (ISO).
The current standard for HDLC is ISO 13239, which replaces the earlier
standards.
HDLC provides both connection-oriented and connectionless service.
HDLC can be used for point to multipoint connections, but is now used
almost exclusively to connect one device to another, using what is known as
Asynchronous Balanced Mode (ABM). The original master-slave modes
Normal Response Mode (NRM) and Asynchronous Response Mode (ARM)
are rarely used.
24. MAN
A metropolitan area network (MAN) is a computer network larger than a
local area network, covering an area of a few city blocks to the area of an
entire city, possibly also including the surrounding areas.
A MAN is optimized for a larger geographical area than a LAN, ranging
from several blocks of buildings to entire cities. MANs can also depend on
communications channels of moderate-to-high data rates. A MAN might be
owned and operated by a single organization, but it usually will be used by
many individuals and organizations. MANs might also be owned and
operated as public utilities. They will often provide means for
internetworking of local networks.
25. Wi-Fi
Wi-Fi (or WiFi) is a local area wireless technology that allows an electronic
device to participate in computer networking using 2.4 GHz UHF and 5 GHz
SHF ISM radio bands.
Wi-Fi can be less secure than wired connections, such as Ethernet, because
an intruder does not need a physical connection. Web pages that use TLS are
secure, but unencrypted internet access can easily be detected by intruders.


Because of this, Wi-Fi has adopted various encryption technologies. The
early encryption WEP proved easy to break. Higher quality protocols (WPA,
WPA2) were added later. An optional feature added in 2007, called Wi-Fi
Protected Setup (WPS), had a serious flaw that allowed an attacker to
recover the router's password.[2] The Wi-Fi Alliance has since updated its
test plan and certification program to ensure all newly certified devices resist
attacks.
26. Modems
Modems (MOdulator-DEModulator) are used to connect network nodes via
wire not originally designed for digital network traffic, or for wireless. To do
this, one or more carrier signals are modulated by the digital signal to
produce an analog signal that can be tailored to give the required properties
for transmission. Modems are commonly used for telephone lines, using
Digital Subscriber Line technology.
27. Network structure
Network topology is the layout or organizational hierarchy of interconnected
nodes of a computer network. Different network topologies can affect
throughput, but reliability is often more critical. With many technologies,
such as bus networks, a single failure can cause the network to fail entirely.
In general, the more interconnections there are, the more robust the network
is, but the more expensive it is to install.
28. Nanoscale Network
A nanoscale communication network has key components implemented at
the nanoscale, including message carriers, and leverages physical principles
that differ from macroscale communication mechanisms. Nanoscale
communication extends communication to very small sensors and actuators,
such as those found in biological systems, and also tends to operate in
environments that would be too harsh for classical communication.
29. Personal area network

A personal area network (PAN) is a computer network used for
communication among computers and different information technology
devices close to one person. Some examples of devices that are used in a
PAN are personal computers, printers, fax machines, telephones, PDAs,
scanners, and even video game consoles. A PAN may include wired and
wireless devices. The reach of a PAN typically extends to 10 meters.[17] A
wired PAN is usually constructed with USB and FireWire connections while
technologies such as Bluetooth and infrared communication typically form a
wireless PAN.
31. Home area network
A home area network (HAN) is a residential LAN used for communication
between digital devices typically deployed in the home, usually a small
number of personal computers and accessories, such as printers and mobile
computing devices. An important function is the sharing of Internet access,
often a broadband service through a cable TV or digital subscriber line
(DSL) provider.
32. Storage area network
A storage area network (SAN) is a dedicated network that provides access to
consolidated, block level data storage. SANs are primarily used to make
storage devices, such as disk arrays, tape libraries, and optical jukeboxes,
accessible to servers so that the devices appear like locally attached devices
to the operating system. A SAN typically has its own network of storage

devices that are generally not accessible through the local area network by
other devices. The cost and complexity of SANs dropped in the early 2000s
to levels allowing wider adoption across both enterprise and small to
medium-sized business environments.
33. Campus area network
A campus area network (CAN) is made up of an interconnection of LANs
within a limited geographical area. The networking equipment (switches,
routers) and transmission media (optical fiber, copper plant, Cat5 cabling,
etc.) are almost entirely owned by the campus tenant / owner (an enterprise,
university, government, etc.).
For example, a university campus network is likely to link a variety of
campus buildings to connect academic colleges or departments, the library,
and student residence halls.
34. Backbone network
A backbone network is part of a computer network infrastructure that
provides a path for the exchange of information between different LANs or
sub-networks. A backbone can tie together diverse networks within the same
building, across different buildings, or over a wide area.
For example, a large company might implement a backbone network to
connect departments that are located around the world. The equipment that
ties together the departmental networks constitutes the network backbone.
When designing a network backbone, network performance and network
congestion are critical factors to take into account. Normally, the backbone
network's capacity is greater than that of the individual networks connected
to it.
Another example of a backbone network is the Internet backbone, which is
the set of wide area networks (WANs) and core routers that tie together all
networks connected to the Internet.
35. Wide area network
A wide area network (WAN) is a computer network that covers a large
geographic area, such as a city or country, or even spans intercontinental
distances. A WAN uses a communications channel that combines many types
of media such as telephone lines, cables, and air waves. A WAN often makes
use of transmission facilities provided by common carriers, such as
telephone companies. WAN technologies generally function at the lower
three layers of the OSI reference model: the physical layer, the data link
layer, and the network layer.
36. Virtual private network
A virtual private network (VPN) is an overlay network in which some of the
links between nodes are carried by open connections or virtual circuits in
some larger network (e.g., the Internet) instead of by physical wires. The
data link layer protocols of the virtual network are said to be tunneled
through the larger network when this is the case. One common application is
secure communications through the public Internet, but a VPN need not
have explicit security features, such as authentication or content encryption.
VPNs, for example, can be used to separate the traffic of different user
communities over an underlying network with strong security features.
A VPN may have best-effort performance, or may have a defined service level
agreement (SLA) between the VPN customer and the VPN service provider.
Generally, a VPN has a topology more complex than point-to-point.
37. Intranets
An intranet is a set of networks that are under the control of a single
administrative entity. The intranet uses the IP protocol and IP-based tools
such as web browsers and file transfer applications. The administrative
entity limits use of the intranet to its authorized users. Most commonly, an
intranet is the internal LAN of an organization. A large intranet typically has
at least one web server to provide users with organizational information. An
Loosely, an intranet can also refer to anything behind the router on a local
area network.
38. Extranet
An extranet is a network that is also under the administrative control of a
single organization, but supports a limited connection to a specific external
network. For example, an organization may provide access to some aspects
of its intranet to share data with its business partners or customers. These

other entities are not necessarily trusted from a security standpoint. Network
connection to an extranet is often, but not always, implemented via WAN
technology.
39. Darknet
A Darknet is an overlay network, typically running on the internet, that is
only accessible through specialized software. A darknet is an anonymizing
network where connections are made only between trusted peers, sometimes
called "friends" (F2F),[21] using non-standard protocols and
ports.
Darknets are distinct from other distributed peer-to-peer networks as sharing
is anonymous (that is, IP addresses are not publicly shared), and therefore
users can communicate with little fear of governmental or corporate
interference.
40. Network service
Network services are applications hosted by servers on a computer network,
to provide some functionality for members or users of the network, or to
help the network itself to operate.
The World Wide Web, E-mail, printing and network file sharing are
examples of well-known network services. Network services such as DNS
(Domain Name System) map names to IP addresses (people remember
names like nm.lan better than numbers like 210.121.67.18), while DHCP
ensures that the equipment on the network has a valid IP address.
Services are usually based on a service protocol that defines the format and
sequencing of messages between clients and servers of that network service.
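The name-to-address service just described can be sketched as a simple lookup, in the spirit of DNS. The zone data below reuses the section's own example name and address; a real resolver follows a hierarchy of servers rather than one local table.

```python
# A toy name-service lookup: clients ask a resolver to map a human-friendly
# name to an address. The zone data is a stand-in for real DNS records.
ZONE = {
    "nm.lan":      "210.121.67.18",
    "printer.lan": "192.168.1.50",   # invented second entry
}

def resolve(name: str) -> str:
    try:
        return ZONE[name.lower()]    # DNS names are case-insensitive
    except KeyError:
        raise LookupError(f"NXDOMAIN: {name}") from None

print(resolve("NM.LAN"))  # 210.121.67.18
```

The failure case mirrors DNS's NXDOMAIN response: the service reports that no record exists rather than returning a guess.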
41. Wireless networks
A wireless network is any type of computer network that uses wireless data
connections for connecting network nodes.
Wireless networking is a method by which homes, telecommunications
networks and enterprise (business) installations avoid the costly process of
introducing cables into a building, or as a connection between various
equipment locations.[1] Wireless telecommunications networks are generally
implemented and administered using radio communication. This
implementation takes place at the physical level (layer) of the OSI model
network structure.[2]
Examples of wireless networks include cell phone networks, Wi-Fi local
networks and terrestrial microwave networks.
42. Wireless PAN
Wireless personal area networks (WPANs) interconnect devices within a
relatively small area, that is generally within a person's reach.[3] For
example, both Bluetooth radio and invisible infrared light provide a WPAN
for interconnecting a headset to a laptop. ZigBee also supports WPAN
applications.[4] Wi-Fi PANs are becoming commonplace (2010) as
equipment designers start to integrate Wi-Fi into a variety of consumer
electronic devices. Intel "My WiFi" and Windows 7 "virtual Wi-Fi"
capabilities have made Wi-Fi PANs simpler and easier to set up and
configure.
43. Wireless LAN
Wireless LANs are often used for connecting to local resources and to the
Internet.
A wireless local area network (WLAN) links two or more devices over a
short distance using a wireless distribution method, usually providing a
connection through an access point for Internet access. The use of
spread-spectrum or OFDM technologies may allow users to move around
within a local coverage area and still remain connected to the network.
Products using the IEEE 802.11 WLAN standards are marketed under the
Wi-Fi brand name. Fixed wireless technology implements point-to-point
links between computers or networks at two distant locations, often using
dedicated microwave or modulated laser light beams over line of sight paths.
It is often used in cities to connect networks in two or more buildings
without installing a wired link.
44. Wireless WAN
Wireless wide area networks are wireless networks that typically cover large
areas, such as between neighboring towns and cities, or city and suburb.

These networks can be used to connect branch offices of business or as a
public internet access system. The wireless connections between access
points are usually point to point microwave links using parabolic dishes on
the 2.4 GHz band, rather than omnidirectional antennas used with smaller
networks. A typical system contains base station gateways, access points and
wireless bridging relays. Other configurations are mesh systems where each
access point acts as a relay also. When combined with renewable energy
systems such as photovoltaic solar panels or wind systems, they can be
stand-alone systems.
45. A cellular network or mobile network
It is a radio network distributed over land areas called cells, each served by
at least one fixed-location transceiver, known as a cell site or base station. In
a cellular network, each cell characteristically uses a different set of radio
frequencies from its immediate neighbouring cells to avoid interference.
When joined together these cells provide radio coverage over a wide
geographic area. This enables a large number of portable transceivers (e.g.,
mobile phones, pagers, etc.) to communicate with each other and with fixed
transceivers and telephones anywhere in the network, via base stations, even
if some of the transceivers are moving through more than one cell during
transmission.
46. GSM
Global System for Mobile Communications (GSM): The GSM network is
divided into three major systems: the switching system, the base station
system, and the operation and support system. The cell phone connects to
the base system station which then connects to the operation and support
station; it then connects to the switching station where the call is transferred
to where it needs to go. GSM is the most common standard and is used for
the majority of cell phones.

47. Personal Communications Service or PCS
It describes a set of wireless communications capabilities that allows some
combination of terminal mobility, personal mobility, and service profile
management.[1] More specifically, PCS refers to any of several types of
wireless voice and/or wireless data communications systems, typically
incorporating digital technology, providing services similar to advanced
cellular mobile or paging services. In addition, PCS can also be used to
provide other wireless communications services, including services that
allow people to place and receive communications while away from their
home or office, as well as wireless communications to homes, office
buildings and other fixed locations.[2] Described in more commercial terms,
PCS is a generation of wireless-phone technology that combines a range of
features and services surpassing those available in analog- and digital-cellular phone systems, providing a user with an all-in-one wireless phone,
paging, messaging, and data service.
48. GSM
(Global System for Mobile Communications, originally Groupe Spécial
Mobile), is a standard developed by the European Telecommunications
Standards Institute (ETSI) to describe protocols for second-generation (2G)
digital cellular networks used by mobile phones. As of 2014 it has become
the default global standard for mobile communications, with over 90%
market share, operating in over 219 countries and territories.[2]
2G networks developed as a replacement for first generation (1G) analog
cellular networks, and the GSM standard originally described a digital,
circuit-switched network optimized for full duplex voice telephony. This
expanded over time to include data communications, first by circuit-switched transport, then by packet data transport via GPRS (General Packet
Radio Services) and EDGE (Enhanced Data rates for GSM Evolution or
EGPRS).
Subsequently, the 3GPP developed third-generation (3G) UMTS standards
followed by fourth-generation (4G) LTE Advanced standards, which do not
form part of the ETSI GSM standard.
"GSM" is a trademark owned by the GSM Association. It may also refer to
the (initially) most common voice codec used, Full Rate.
49. Subscriber Identity Module (SIM)
One of the key features of GSM is the Subscriber Identity Module,
commonly known as a SIM card. The SIM is a detachable smart card
containing the user's subscription information and phone book. This allows
the user to retain his or her information after switching handsets.
Alternatively, the user can also change operators while retaining the handset
simply by changing the SIM. Some operators will block this by allowing the
phone to use only a single SIM, or only a SIM issued by them; this practice
is known as SIM locking.

50. WiMAX
It refers to interoperable implementations of the IEEE 802.16 family of
wireless-networks standards ratified by the WiMAX Forum. (Similarly, Wi-Fi refers to interoperable implementations of the IEEE 802.11 Wireless LAN
standards certified by the Wi-Fi Alliance.) WiMAX Forum certification
allows vendors to sell fixed or mobile products as WiMAX certified, thus
ensuring a level of interoperability with other certified products, as long as
they fit the same profile.
The original IEEE 802.16 standard (now called "Fixed WiMAX") was
published in 2001. WiMAX adopted some of its technology from WiBro, a
service marketed in Korea.[3]
Mobile WiMAX (originally based on 802.16e-2005) is the revision that was
deployed in many countries, and is the basis for future revisions such as
802.16m-2011.
WiMAX is sometimes referred to as "Wi-Fi on steroids"[4] and can be used
for a number of applications including broadband connections, cellular
backhaul, hotspots, etc. It is similar to Wi-Fi, but it can enable usage at much
greater distances.
51. General packet radio service (GPRS)
It is a packet oriented mobile data service on the 2G and 3G cellular
communication system's global system for mobile communications (GSM).
GPRS was originally standardized by European Telecommunications
Standards Institute (ETSI) in response to the earlier CDPD and i-mode
packet-switched cellular technologies. It is now maintained by the 3rd
Generation Partnership Project (3GPP).[1][2]
GPRS usage is typically charged based on volume of data transferred,
contrasting with circuit-switched data, which is usually billed per minute of
connection time. Usage above the bundle cap is either charged per megabyte
or disallowed.
GPRS is a best-effort service, implying variable throughput and latency that
depend on the number of other users sharing the service concurrently, as
opposed to circuit switching, where a certain quality of service (QoS) is
guaranteed during the connection. In 2G systems, GPRS provides data rates
of 56–114 kbit/s.
52. Internet Protocol suite

The Internet protocol suite is the computer networking model and set of
communications protocols used on the Internet and similar computer
networks. It is commonly known as TCP/IP, because its most important
protocols, the Transmission Control Protocol (TCP) and the Internet
Protocol (IP), were the first networking protocols defined in this standard.
Often also called the Internet model, it was originally also known as the
DoD model, because the development of the networking model was funded
by DARPA, an agency of the United States Department of Defense.
TCP/IP provides end-to-end connectivity specifying how data should be
packetized, addressed, transmitted, routed and received at the destination.
This functionality is organized into four abstraction layers which are used to
sort all related protocols according to the scope of networking involved.[1]
[2] From lowest to highest, the layers are the link layer, containing
communication technologies for a single network segment (link); the
internet layer, connecting hosts across independent networks, thus
establishing internetworking; the transport layer handling host-to-host
communication; and the application layer, which provides process-to-process application data exchange.
53. Downlink
In the context of satellite communications, a downlink (DL) is the link from
a satellite to a ground station.
Pertaining to cellular networks, the radio downlink is the transmission
path from a cell site to the cell phone. Traffic and signalling flows within the
base station subsystem (BSS) and network switching subsystem (NSS) may
also be identified as uplink and downlink.
Pertaining to computer networks, a downlink is a connection from data
communications equipment towards data terminal equipment. This is also
known as a downstream connection.
54. Uplink
Pertaining to satellite communications, an uplink (UL or U/L) is the portion
of a communications link used for the transmission of signals from an Earth
terminal to a satellite or to an airborne platform. An uplink is the inverse of a
downlink. An uplink or downlink is distinguished from reverse link or
forward link.
Pertaining to GSM and cellular networks, the radio uplink is the
transmission path from the mobile station (cell phone) to a base station (cell
site). Traffic and signalling flows within the BSS and NSS may also be
identified as uplink and downlink.


Pertaining to computer networks, an uplink is a connection from data
communications equipment toward the network core. This is also known as
an upstream connection
55. Telecommunications link
In telecommunications a link is a communications channel that connects two
or more communicating devices. This link may be an actual physical link or
it may be a logical link that uses one or more actual physical links.
A telecommunications link is generally one of several types of information
transmission paths such as those provided by communication satellites,
terrestrial radio communications infrastructure and computer networks to
connect two or more points.
The term link is widely used in computer networking (see data link) to refer
to the communications facilities that connect nodes of a network. When the
link is a logical link the type of physical link should always be specified
(e.g., data link, uplink, downlink, fiber optic link, point-to-point link, etc.)
56. Client-server
The client-server model of computing is a distributed application structure
that partitions tasks or workloads between the providers of a resource or
service, called servers, and service requesters, called clients.[1] Often clients
and servers communicate over a computer network on separate hardware,
but both client and server may reside in the same system. A server host runs
one or more server programs which share their resources with clients. A
client does not share any of its resources, but requests a server's content or
service function. Clients therefore initiate communication sessions with
servers which await incoming requests.
Examples of computer applications that use the client-server model are
email, network printing, and the World Wide Web.
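The roles above can be sketched with a tiny TCP exchange: the server owns a listening socket and waits for requests, while the client initiates the session. This is a minimal illustration on localhost, not a production server; the uppercasing "service" and the message text are invented for the demo.

```python
# Minimal client-server sketch: a one-shot TCP server and a client on
# localhost. The server waits for a request; the client initiates.
import socket
import threading

def run_server(listener):
    conn, _addr = listener.accept()      # server awaits an incoming request
    with conn:
        data = conn.recv(1024)           # read the client's request
        conn.sendall(data.upper())       # "serve" it: reply with a result

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=run_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))      # client initiates the session
client.sendall(b"hello server")
reply = client.recv(1024)
client.close()
server.close()
print(reply)
```

Note that client and server here run in one process purely for illustration; in practice they usually sit on separate hosts connected by a network.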
57. Peer-to-peer
Peer-to-peer (P2P) computing or networking is a distributed application
architecture that partitions tasks or work loads between peers. Peers are
equally privileged, equipotent participants in the application. They are said
to form a peer-to-peer network of nodes.
Peers make a portion of their resources, such as processing power, disk
storage or network bandwidth, directly available to other network
participants, without the need for central coordination by servers or stable
hosts.[1] Peers are both suppliers and consumers of resources, in contrast to
the traditional client-server model in which the consumption and supply of
resources is divided. Emerging collaborative P2P systems are moving beyond
the era of peers doing similar things while sharing resources, and are looking
for diverse peers that can bring unique resources and capabilities to a virtual
community, enabling it to take on greater tasks than individual peers could
accomplish alone, to the benefit of all the peers.
58. Secure Shell
Secure Shell, or SSH, is a cryptographic (encrypted) network protocol for
initiating text-based shell sessions on remote machines in a secure way.
This allows a user to run commands on a machine's command prompt
without them being physically present near the machine. It also allows a user
to establish a secure channel over an insecure network in a client-server
architecture, connecting an SSH client application with an SSH server.[1]
Common applications include remote command-line login and remote
command execution, but any network service can be secured with SSH. The
protocol specification distinguishes between two major versions, referred to
as SSH-1 and SSH-2.
59. The File Transfer Protocol (FTP)
It is a standard network protocol used to transfer computer files from one
host to another host over a TCP-based network, such as the Internet.
FTP is built on a client-server architecture and uses separate control and data
connections between the client and the server. FTP users may authenticate
themselves using a clear-text sign-in protocol, normally in the form of a
username and password, but can connect anonymously if the server is
configured to allow it. For secure transmission that protects the username
and password, and encrypts the content, FTP is often secured with SSL/TLS
(FTPS). SSH File Transfer Protocol (SFTP) is sometimes also used instead,
but is technologically different.
The first FTP client applications were command-line applications developed
before operating systems had graphical user interfaces, and are still shipped
with most Windows, Unix, and Linux operating systems. Many FTP clients
and automation utilities have since been developed for desktops, servers,
mobile devices, and hardware, and FTP has been incorporated into
productivity applications, such as Web page editors.
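The separate control and data connections mentioned above are negotiated in-band: in passive mode the server tells the client where to open the data connection. A sketch with Python's standard `ftplib` follows; the host and credentials are hypothetical, so the network calls are commented out, and a small helper shows how the data port is decoded from a `227` reply.

```python
# FTP sketch: control connection via ftplib (commented out, the host is
# hypothetical), plus a helper that decodes the data-connection address
# from a "227 Entering Passive Mode" control reply.
import re
from ftplib import FTP  # used by the commented-out session below

# ftp = FTP("ftp.example.com")          # control connection, port 21
# ftp.login("anonymous", "guest@")      # clear-text sign-in
# ftp.retrlines("LIST")                 # listing arrives on the data connection
# ftp.quit()

def pasv_data_port(reply):
    """Extract (host, port) from a '227 Entering Passive Mode' reply."""
    h1, h2, h3, h4, p1, p2 = map(
        int, re.search(r"\((.*?)\)", reply).group(1).split(","))
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2   # port = p1*256 + p2

host, port = pasv_data_port("227 Entering Passive Mode (192,168,1,2,19,137)")
print(host, port)
```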
60. Simple Mail Transfer Protocol
(SMTP) is an Internet standard for electronic mail (e-mail) transmission.
First defined by RFC 821 in 1982, it was last updated in 2008 with the
Extended SMTP additions by RFC 5321, which is the protocol in
widespread use today.
SMTP by default uses TCP port 25. The protocol for mail submission is the
same, but uses port 587. SMTP connections secured by SSL, known as
SMTPS, default to port 465 (nonstandard, but sometimes used for legacy
reasons).
Although electronic mail servers and other mail transfer agents use SMTP to
send and receive mail messages, user-level client mail applications typically
use SMTP only for sending messages to a mail server for relaying. For
receiving messages, client applications usually use either POP3 or IMAP.
Although proprietary systems (such as Microsoft Exchange and Lotus
Notes/Domino) and webmail systems (such as Hotmail, Gmail and Yahoo!
Mail) use their own non-standard protocols to access mail box accounts on
their own mail servers, all use SMTP when sending or receiving email from
outside their own systems.
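The submission path described above can be sketched with the standard library: a client builds a message and hands it to a relay on the submission port. The relay host `mail.example.com` and the credentials are hypothetical, so the network calls are shown but commented out.

```python
# SMTP submission sketch: build a message locally, then (hypothetically)
# submit it to a relay on port 587 with STARTTLS.
import smtplib  # used by the commented-out submission below
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "SMTP demo"
msg.set_content("Submitted to a relay on port 587, then relayed onward.")

# with smtplib.SMTP("mail.example.com", 587) as s:   # submission port
#     s.starttls()                                   # upgrade to TLS
#     s.login("alice", "secret")
#     s.send_message(msg)                            # relay takes over

print(msg["Subject"])
```

Retrieval on the receiving side would then use POP3 or IMAP, as the text notes; SMTP's job ends once the message reaches the recipient's mail server.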

61. An application layer
It is an abstraction layer that specifies the shared protocols and interface
methods used by hosts in a communications network. The application layer
abstraction is used in both of the standard models of computer networking:
the Internet Protocol Suite (TCP/IP) and the Open Systems Interconnection
model (OSI model).
Although both models use the same term for their respective highest level
layer, the detailed definitions and purposes are different.

In TCP/IP, the application layer contains the communications protocols and
interface methods used in process-to-process communications across an
Internet Protocol (IP) computer network. The application layer only
standardizes communication and depends upon the underlying transport
layer protocols to establish host-to-host data transfer channels and manage
the data exchange in a client-server or peer-to-peer networking model.
Though the TCP/IP application layer does not describe specific rules or data
formats that applications must consider when communicating, the original
specification (in RFC 1123) does rely on and recommend the robustness
principle for application design.
62. The Domain Name System
(DNS) is a hierarchical distributed naming system for computers, services,
or any resource connected to the Internet or a private network. It associates
various information with domain names assigned to each of the participating
entities. Most prominently, it translates domain names, which can be easily
memorized by humans, to the numerical IP addresses needed for the purpose
of computer services and devices worldwide. The Domain Name System is
an essential component of the functionality of most Internet services because
it is the Internet's primary directory service.
The Domain Name System also specifies the technical functionality of the
database service which is at its core. It defines the DNS protocol, a detailed
specification of the data structures and data communication exchanges used
in DNS, as part of the Internet Protocol Suite. Historically, other directory
services preceding DNS were not scalable to large or global directories as
they were originally based on text files, prominently the HOSTS.TXT
resolver. DNS has been in wide use since the 1980s.
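The name-to-address translation above is carried in small binary messages. As an illustration of the DNS wire format (RFC 1035), this sketch packs the header and question section for an A-record query by hand; the transaction ID is an arbitrary example value.

```python
# Build a minimal DNS query packet for an A-record lookup, following
# the RFC 1035 wire format: 12-byte header, then the question section.
import struct

def dns_query(name, qtype=1, qclass=1):          # qtype 1 = A, qclass 1 = IN
    header = struct.pack(">HHHHHH",
                         0x1234,  # transaction ID (arbitrary example)
                         0x0100,  # flags: standard query, recursion desired
                         1,       # QDCOUNT: one question
                         0, 0, 0) # no answer/authority/additional records
    # QNAME: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, qclass)

packet = dns_query("example.com")
print(len(packet))   # 12-byte header + 13-byte QNAME + 4 bytes type/class
```

Sending this packet over UDP to a resolver on port 53 would return the numerical IP address; the sketch stops at constructing the query.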

63. Internet Message Access Protocol
(IMAP) is a protocol for e-mail retrieval and storage developed by Mark
Crispin in 1986 at Stanford University as an alternative to POP. IMAP,
unlike POP, specifically allows multiple clients simultaneously connected to
the same mailbox, and through flags stored on the server, different clients
accessing the same mailbox at the same or different times can detect state
changes made by other clients.
The Internet Message Access Protocol (commonly known as IMAP) is an
Application Layer Internet protocol that allows an e-mail client to access
e-mail on a remote mail server. The current version, IMAP version 4 revision 1
(IMAP4rev1), is defined by RFC 3501. An IMAP server typically listens on
well-known port 143. IMAP over SSL (IMAPS) is assigned well-known port
number 993.
IMAP supports both on-line and off-line modes of operation. E-mail clients
using IMAP generally leave messages on the server until the user explicitly
deletes them. This and other characteristics of IMAP operation allow
multiple clients to manage the same mailbox. Most e-mail clients support
IMAP in addition to Post Office Protocol (POP) to retrieve messages;
however, fewer e-mail services support IMAP.[1] IMAP offers access to the
mail storage. Clients may store local copies of the messages, but these are
considered to be a temporary cache.
64. The Lightweight Directory Access Protocol
LDAP is an open, vendor-neutral, industry standard application protocol for
accessing and maintaining distributed directory information services over an
Internet Protocol (IP) network. Directory services play an important role in
developing intranet and Internet applications by allowing the sharing of
information about users, systems, networks, services, and applications
throughout the network. As examples, directory services may provide any
organized set of records, often with a hierarchical structure, such as a
corporate email directory. Similarly, a telephone directory is a list of
subscribers with an address and a phone number.
LDAP is specified in a series of Internet Engineering Task Force (IETF)
Standard Track publications called Request for Comments (RFCs), using the
description language ASN.1. The latest specification is Version 3, published
as RFC 4511. For example, here is an LDAP search translated into plain
English: "Search in the company email directory for all people located in
Nashville whose name contains 'Jesse' and who have an email address. Please
return their full name, email, title, and description."
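The plain-English search above maps onto an LDAP filter string plus a list of requested attributes. This sketch only constructs those strings; the attribute names `l` (locality), `cn` (common name), `mail`, `title`, and `description` are standard LDAP schema attributes, and no directory server is contacted.

```python
# Translate the plain-English directory search into LDAP terms:
# a filter expression plus the list of attributes to return.
def build_filter(city, name_fragment):
    # & = AND of all conditions; * = wildcard; (mail=*) = "has an email"
    return f"(&(l={city})(cn=*{name_fragment}*)(mail=*))"

ldap_filter = build_filter("Nashville", "Jesse")
attributes = ["cn", "mail", "title", "description"]   # what to return
print(ldap_filter)
```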
65. The Media Gateway Control Protocol
(MGCP) is an implementation of the Media Gateway Control Protocol
architecture for controlling media gateways on Internet Protocol (IP)
networks connected to the public switched telephone network (PSTN).[1]
The protocol architecture and programming interface is described in RFC
2805 and the current specific MGCP definition is RFC 3435, which obsoletes
RFC 2705. It is a successor to the Simple Gateway Control Protocol (SGCP)
which was developed by Bellcore and Cisco. In November 1998, the Simple
Gateway Control Protocol (SGCP) was combined with Level 3
Communications' Internet Protocol Device Control (IPDC) to form the
Media Gateway Control Protocol (MGCP).


MGCP uses the Session Description Protocol (SDP) for specifying and
negotiating the media streams to be transmitted in a call session and the
Real-time Transport Protocol (RTP) for framing of the media streams.
66. Network News Transfer Protocol
The Network News Transfer Protocol (NNTP) is an application protocol
used for transporting Usenet news articles (netnews) between news servers
and for reading and posting articles by end user client applications. Brian
Kantor of the University of California, San Diego and Phil Lapsley of the
University of California, Berkeley authored RFC 977, the specification for
the Network News Transfer Protocol, in March 1986. Other contributors
included Stan O. Barber from the Baylor College of Medicine and Erik Fair
of Apple Computer.
Usenet was originally designed based on the UUCP network, with most
article transfers taking place over direct point-to-point telephone links
between news servers, which were powerful time-sharing systems. Readers
and posters logged into these computers reading the articles directly from the
local disk.
As local area networks and Internet participation proliferated, it became
desirable to allow newsreaders to be run on personal computers connected to
local networks. The resulting protocol resembled the Simple Mail Transfer
Protocol (SMTP), but was tailored for exchanging newsgroup articles.
67. Network Time Protocol
Network Time Protocol (NTP) is a networking protocol for clock
synchronization between computer systems over packet-switched, variable-latency data networks. In operation since before 1985, NTP is one of the
oldest Internet protocols in current use. NTP was originally designed by
David L. Mills of the University of Delaware, who still oversees its
development.
NTP is intended to synchronize all participating computers to within a few
milliseconds of Coordinated Universal Time (UTC). It uses a modified
version of Marzullo's algorithm to select accurate time servers and is
designed to mitigate the effects of variable network latency. NTP can usually
maintain time to within tens of milliseconds over the public Internet, and can
achieve better than one millisecond accuracy in local area networks under
ideal conditions. Asymmetric routes and network congestion can cause
errors of 100 ms or more.
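The core of NTP's latency mitigation is a simple calculation over four timestamps: client send (t1), server receive (t2), server send (t3), client receive (t4). The sketch below applies the standard offset and delay formulas to made-up timestamp values in seconds.

```python
# NTP clock-offset and round-trip-delay estimation from the four
# timestamps exchanged in one request/response (values in seconds).
def ntp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated clock offset
    delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
    return offset, delay

# Illustrative values: the client's clock is behind the server's.
offset, delay = ntp_offset_delay(t1=100.000, t2=100.110,
                                 t3=100.112, t4=100.020)
print(offset, delay)
```

The offset formula assumes the outbound and return paths take equal time; the asymmetric-route errors mentioned above arise exactly when that assumption fails.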

68. Post Office Protocol
In computing, the Post Office Protocol (POP) is an application-layer Internet
standard protocol used by local e-mail clients to retrieve e-mail from a
remote server over a TCP/IP connection.[1] POP has been developed
through several versions, with version 3 (POP3) being the current standard.
Virtually all modern e-mail clients and servers support POP3, and it and
IMAP (Internet Message Access Protocol) are the two most prevalent
Internet standard protocols for e-mail retrieval,[2] with many webmail
service providers such as Gmail, Outlook.com and Yahoo! Mail also
providing support.
69. Real-time Transport Protocol
The Real-time Transport Protocol (RTP) is a network protocol for delivering
audio and video over IP networks. RTP is used extensively in
communication and entertainment systems that involve streaming media,
such as telephony, video teleconference applications, television services and
web-based push-to-talk features.
RTP is used in conjunction with the RTP Control Protocol (RTCP). While
RTP carries the media streams (e.g., audio and video), RTCP is used to
monitor transmission statistics and quality of service (QoS) and aids
synchronization of multiple streams. RTP is one of the technical foundations
of Voice over IP and in this context is often used in conjunction with a
signaling protocol such as the Session Initiation Protocol (SIP) which
establishes connections across the network.
RTP was developed by the Audio-Video Transport Working Group of the
Internet Engineering Task Force (IETF) and first published in 1996 as RFC
1889, superseded by RFC 3550 in 2003.
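RTP frames media by prefixing each chunk with a fixed 12-byte header carrying a sequence number and timestamp, which receivers use to reorder packets and synchronize playback. This sketch packs one such header per RFC 3550; the payload type, timestamp, and SSRC values are illustrative.

```python
# Pack the fixed 12-byte RTP header (RFC 3550). Payload type 0 is PCMU
# audio; the SSRC is an arbitrary identifier for this media stream.
import struct

def rtp_header(seq, timestamp, ssrc, payload_type=0, marker=0):
    version = 2                          # current RTP version
    byte0 = version << 6                 # padding/extension/CSRC bits all 0
    byte1 = (marker << 7) | payload_type
    return struct.pack(">BBHII", byte0, byte1, seq, timestamp, ssrc)

hdr = rtp_header(seq=1, timestamp=160, ssrc=0xDEADBEEF)
print(len(hdr), hdr[0] >> 6)             # header length and version field
```

In a real call, this header would precede each audio chunk in a UDP datagram, while RTCP reports travel separately to carry the statistics mentioned above.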
70. The Real Time Streaming Protocol
(RTSP) is a network control protocol designed for use in entertainment and
communications systems to control streaming media servers. The protocol is
used for establishing and controlling media sessions between end points.
Clients of media servers issue VCR-style commands, such as play and
pause, to facilitate real-time control of playback of media files from the
server.

The transmission of streaming data itself is not a task of the RTSP protocol.
Most RTSP servers use the Real-time Transport Protocol (RTP) in
conjunction with the Real-time Control Protocol (RTCP) for media stream
delivery; however, some vendors implement proprietary transport protocols.
The RTSP server software from RealNetworks, for example, also used
RealNetworks' proprietary Real Data Transport (RDT).
71. Routing Information Protocol
The Routing Information Protocol (RIP) is one of the oldest distance-vector
routing protocols, which employs the hop count as a routing metric. RIP
prevents routing loops by implementing a limit on the number of hops
allowed in a path from the source to a destination. The maximum number of
hops allowed for RIP is 15. This hop limit, however, also limits the size of
networks that RIP can support. A hop count of 16 is considered an infinite
distance; in other words, the route is considered unreachable. RIP implements
the split horizon, route poisoning and holddown mechanisms to prevent
incorrect routing information from being propagated.
RIP uses the User Datagram Protocol (UDP) as its transport protocol, and is
assigned the reserved port number 520.
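The distance-vector rule above can be sketched in a few lines: a route learned from a neighbour costs the advertised hop count plus one, and 16 hops means unreachable. The router and network names below are invented for the example.

```python
# RIP-style distance-vector update: cost = advertised hops + 1,
# capped at 16 (treated as infinity / unreachable).
INFINITY = 16

def rip_update(table, neighbour, advertised):
    """Merge a neighbour's advertised routes into our routing table."""
    for dest, hops in advertised.items():
        cost = min(hops + 1, INFINITY)           # one hop to reach the neighbour
        _next_hop, best_cost = table.get(dest, (None, INFINITY))
        if cost < best_cost:                     # adopt only strictly better routes
            table[dest] = (neighbour, cost)
    return table

table = {"net-a": ("r1", 3)}                     # known route: 3 hops via r1
table = rip_update(table, "r2", {"net-a": 1,     # better: 2 hops via r2
                                 "net-b": 15,    # 15 + 1 = 16 = unreachable
                                 "net-c": 16})   # already infinite
print(table)
```

Note how the route advertised at 15 hops is not installed: adding the final hop pushes it to 16, the "infinite distance" the text describes.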
72. The Session Initiation Protocol
(SIP) is a communications protocol for signaling and controlling
multimedia communication sessions. The most common applications of SIP
are in Internet telephony for voice and video calls, as well as instant
messaging, over Internet Protocol (IP) networks.
The protocol defines the messages that are sent between endpoints, which
govern establishment, termination and other essential elements of a call. SIP
can be used for creating, modifying and terminating sessions consisting of
one or several media streams. SIP is an application layer protocol designed
to be independent of the underlying transport layer. It is a text-based
protocol, incorporating many elements of the Hypertext Transfer Protocol
(HTTP) and the Simple Mail Transfer Protocol (SMTP).
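Because SIP is text-based and HTTP-like, a request can be assembled as plain lines of text. This sketch builds a minimal INVITE; the addresses, Call-ID, tag, and branch values are made-up examples, and several headers a real user agent would add (e.g. Contact, Max-Forwards) are omitted for brevity.

```python
# Assemble a minimal SIP INVITE request. Like HTTP, SIP uses a text
# start-line followed by header lines, separated by CRLF.
def sip_invite(caller, callee):
    lines = [
        f"INVITE sip:{callee} SIP/2.0",           # request line
        "Via: SIP/2.0/UDP client.example.com;branch=z9hG4bK776",
        f"From: <sip:{caller}>;tag=1928",
        f"To: <sip:{callee}>",
        "Call-ID: a84b4c76e66710@client.example.com",
        "CSeq: 1 INVITE",
        "Content-Length: 0",                      # no SDP body in this sketch
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

request = sip_invite("alice@example.com", "bob@example.org")
print(request.splitlines()[0])
```

In a real call setup the body would carry an SDP session description, and the media itself would then flow over RTP, independently of SIP.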
73. Simple Network Management Protocol
(SNMP) is an "Internet-standard protocol for managing devices on IP
networks". Devices that typically support SNMP include routers, switches,
servers, workstations, printers, modem racks and more. SNMP is used
mostly in network management systems to monitor network-attached
devices for conditions that warrant administrative attention. SNMP is a
component of the Internet Protocol Suite as defined by the Internet
Engineering Task Force (IETF). It consists of a set of standards for network
management, including an application layer protocol, a database schema,
and a set of data objects.
SNMP exposes management data in the form of variables on the managed
systems, which describe the system configuration. These variables can then
be queried (and sometimes set) by managing applications.
74. Telnet
It is an application protocol used on the Internet or local area networks to
provide a bidirectional interactive text-oriented communication facility using
a virtual terminal connection. User data is interspersed in-band with Telnet
control information in an 8-bit byte oriented data connection over the
Transmission Control Protocol (TCP).
The term telnet may also refer to the software that implements the client part
of the protocol. Telnet client applications are available for virtually all
computer platforms. Telnet is also used as a verb. To telnet means to
establish a connection with the Telnet protocol, either with command line
client or with a programmatic interface. For example, a common directive
might be: "To change your password, telnet to the server, log in and run the
passwd command." Most often, a user will be telnetting to a Unix-like server
system or a network device (such as a router) and obtaining a login prompt
to a command line text interface or a character-based full-screen manager.
75. Transport Layer Security
(TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic
protocols designed to provide communications security over a computer
network. They use X.509 certificates and hence asymmetric cryptography to
authenticate the counterparty with whom they are communicating, and to
negotiate a symmetric key. This session key is then used to encrypt data
flowing between the parties. This allows for data/message confidentiality,
and message authentication codes for message integrity and, as a by-product,
message authentication. Several versions of the protocols are in widespread
use in applications such as web browsing,
electronic mail, Internet faxing, instant messaging, and voice-over-IP (VoIP).
An important property in this context is forward secrecy, which ensures that
the short-term session key cannot be derived from the long-term asymmetric
secret key.
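In Python, the certificate verification and protocol negotiation described above are configured through an `ssl` context. This sketch sets one up for a client; the actual connection is commented out because `example.com` stands in for a real server here.

```python
# TLS client setup sketch with the standard library's ssl module.
# create_default_context() enables certificate and hostname checking.
import socket  # used by the commented-out connection below
import ssl

ctx = ssl.create_default_context()               # loads trusted CA certificates
ctx.minimum_version = ssl.TLSVersion.TLSv1_2     # refuse older protocol versions

# with socket.create_connection(("example.com", 443)) as tcp:
#     with ctx.wrap_socket(tcp, server_hostname="example.com") as tls:
#         print(tls.version(), tls.cipher())     # negotiated protocol and cipher

print(ctx.check_hostname, ctx.verify_mode == ssl.CERT_REQUIRED)
```

The handshake behind `wrap_socket` performs the X.509-based authentication and symmetric-key negotiation the entry describes; application data then flows over the encrypted channel.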

76. XMPP
Extensible Messaging and Presence Protocol (XMPP) is a communications
protocol for message-oriented middleware based on XML (Extensible
Markup Language).The protocol was originally named Jabber, and was
developed by the Jabber open-source community in 1999 for near real-time,
instant messaging (IM), presence information, and contact list maintenance.
Designed to be extensible, the protocol has also been used for publish-subscribe systems, signalling for VoIP, video, file transfer, gaming, Internet
of Things (IoT) applications such as the smart grid, and social networking
services.
Unlike most instant messaging protocols, XMPP is defined in an open
standard and uses an open systems approach of development and
application, by which anyone may implement an XMPP service and
interoperate with other organizations' implementations. Because XMPP is an
open protocol, implementations can be developed using any software
license; although many server, client, and library implementations are
distributed as free and open-source software, numerous freeware and
commercial software implementations also exist.
77. Wireless Communications Transfer Protocol
Wireless Communications Transfer Protocol (WCTP) is the method used to
send messages to wireless devices such as pagers on NPCS (Narrowband
PCS) networks. It uses HTTP as a transport layer over the World Wide Web.
Development of WCTP was initiated by the Messaging Standards
Committee and submitted to the Radio Paging Community. When the first
proposal was received, a sub-committee was established to improve the
protocol and issue it as a specification. The sub-committee was moved into
the PTC (Paging Technical Committee) which is a volunteer committee
composed of industry representatives. The PCIA (Personal Communications
Industry Association) accepted the first full release and adopted the protocol as a PCIA
standard. The current version is WCTP 1.3.
78. The Dynamic Host Configuration Protocol
(DHCP) is a standardized network protocol used on Internet Protocol (IP)
networks for dynamically distributing network configuration parameters,
such as IP addresses for interfaces and services. With DHCP, computers
request IP addresses and networking parameters automatically from a DHCP
server, reducing the need for a network administrator or a user to configure
these settings manually.
Computers use the Dynamic Host Configuration Protocol for requesting
Internet Protocol parameters from a network server, such as an IP address.
The protocol operates based on the client-server model. DHCP is very
common in all modern networks ranging in size from home networks to
large campus networks and regional Internet service provider networks.
Most residential network routers receive a globally unique IP address within
the provider network. Within a local network, DHCP assigns a local IP
address to devices connected to the local network.
79. The protocol stack
It is an implementation of a computer networking protocol suite. The terms
are often used interchangeably. Strictly speaking, the suite is the definition
of the protocols, and the stack is the software implementation of them.
Individual protocols within a suite are often designed with a single purpose
in mind. This modularization makes design and evaluation easier. Because
each protocol module usually communicates with two others, they are
commonly imagined as layers in a stack of protocols. The lowest protocol
always deals with "low-level", physical interaction of the hardware. Every
higher layer adds more features. User applications usually deal only with the
topmost layers (see also OSI model).
In practical implementation, protocol stacks are often divided into three
major sections: media, transport, and applications. A particular operating
system or platform will often have two well-defined software interfaces: one
between the media and transport layers, and one between the transport layers
and applications.

80 Transport layer
In computer networking, a transport layer provides end-to-end or host-to-host
communication services for applications within a layered architecture of
network components and protocols. The transport layer provides services
such as connection-oriented data stream support, reliability, flow control,
and multiplexing.
Transport layer implementations are contained in both the TCP/IP model
(RFC 1122), which is the foundation of the Internet, and the Open Systems
Interconnection (OSI) model of general networking; however, the definitions
of details of the transport layer are different in these models. In the Open
Systems Interconnection model the transport layer is most often referred to
as Layer 4 or L4.
81 Transmission Control Protocol
The Transmission Control Protocol (TCP) is a core protocol of the Internet
Protocol Suite. It originated in the initial network implementation in which it
complemented the Internet Protocol (IP). Therefore, the entire suite is
commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets between applications running on hosts
communicating over an IP network. TCP is the protocol that major Internet
applications such as the World Wide Web, email, remote administration and
file transfer rely on. Applications that do not require reliable data stream
service may use the User Datagram Protocol (UDP), which provides a
connectionless datagram service that emphasizes reduced latency over
reliability.
The Transmission Control Protocol provides a communication service at an
intermediate level between an application program and the Internet Protocol.
It provides host-to-host connectivity at the Transport Layer of the Internet
model. An application does not need to know the particular mechanisms for
sending data via a link to another host, such as the required packet
fragmentation on the transmission medium. At the transport layer, the
protocol handles all handshaking and transmission details and presents an
abstraction of the network connection to the application.
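The reliable byte-stream abstraction TCP presents to applications can be illustrated with a loopback echo exchange in Python; the kernel performs the handshaking and transmission details mentioned above. The echo server is a minimal sketch, and the port is chosen ephemerally.

```python
import socket
import threading

def run_echo_server(server_sock):
    # Accept one connection and echo bytes back until the peer closes.
    conn, _ = server_sock.accept()
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)

# Listening socket on an ephemeral loopback port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()

# The client connects (the three-way handshake happens inside the kernel),
# sends a byte stream, and reads the reliable, ordered reply.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello over TCP")
reply = client.recv(1024)
client.close()
print(reply.decode())
```
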
82 Datagram Congestion Control Protocol
The Datagram Congestion Control Protocol (DCCP) is a message-oriented
transport layer protocol. DCCP implements reliable connection setup,
teardown, Explicit Congestion Notification (ECN), congestion control, and
feature negotiation. DCCP was published as RFC 4340, a proposed standard,
by the IETF in March, 2006. RFC 4336 provides an introduction. FreeBSD
had an implementation for version 5.1. Linux also had an implementation of
DCCP first released in Linux kernel version 2.6.14 (released October 28,
2005).
DCCP provides a way to gain access to congestion control mechanisms
without having to implement them at the application layer. It allows for
flow-based semantics like in Transmission Control Protocol (TCP), but does
not provide reliable in-order delivery. Sequenced delivery within multiple
streams as in the Stream Control Transmission Protocol (SCTP) is not
available in DCCP.
DCCP is useful for applications with timing constraints on the delivery of
data. Such applications include streaming media, multiplayer online games
and Internet telephony. The primary feature of these applications is that old
messages quickly become stale so that getting new messages is preferred to
resending lost messages. Currently such applications have often either
settled for TCP, used User Datagram Protocol (UDP) and implemented
their own congestion control mechanisms, or used no congestion control at
all.
83 Stream Control Transmission Protocol
In computer networking, the Stream Control Transmission Protocol (SCTP)
is a transport-layer protocol (protocol number 132), serving in a similar role
to the popular protocols Transmission Control Protocol (TCP) and User
Datagram Protocol (UDP). It provides some of the same service features of
both: it is message-oriented like UDP and ensures reliable, in-sequence
transport of messages with congestion control like TCP.
The IETF Signaling Transport (SIGTRAN) working group defined the
protocol in 2000,[2] and the IETF Transport Area (TSVWG) working group
maintains it. RFC 4960 defines the protocol. RFC 3286 provides an
introduction.
In the absence of native SCTP support in operating systems it is possible to
tunnel SCTP over UDP,[3] as well as mapping TCP API calls to SCTP ones.
84 User Datagram Protocol
The User Datagram Protocol (UDP) is one of the core members of the
Internet protocol suite. The protocol was designed by David P. Reed in 1980
and formally defined in RFC 768.
UDP uses a simple connectionless transmission model with a minimum of
protocol mechanism. It has no handshaking dialogues, and thus exposes any
unreliability of the underlying network protocol to the user's program. There
is no guarantee of delivery, ordering, or duplicate protection. UDP provides
checksums for data integrity, and port numbers for addressing different
functions at the source and destination of the datagram.

With UDP, computer applications can send messages, in this case referred to
as datagrams, to other hosts on an Internet Protocol (IP) network without
prior communications to set up special transmission channels or data paths.
UDP is suitable for purposes where error checking and correction is either
not necessary or is performed in the application, avoiding the overhead of
such processing at the network interface level. Time-sensitive applications
often use UDP because dropping packets is preferable to waiting for delayed
packets, which may not be an option in a real-time system.[1] If error
correction facilities are needed at the network interface level, an application
may use the Transmission Control Protocol (TCP) or Stream Control
Transmission Protocol (SCTP) which are designed for this purpose.
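The connectionless model can be illustrated in Python: no handshake precedes the send, and each datagram is an independent message. The loopback addresses below are chosen for illustration.

```python
import socket

# Receiver: bind a datagram socket to an ephemeral loopback port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

# Sender: no connection setup -- each sendto() is an independent datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram one", addr)

# The receiver gets the message whole, with the source address attached.
data, src = receiver.recvfrom(1024)
print(data.decode())
sender.close()
receiver.close()
```
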
85 Resource Reservation Protocol
The Resource Reservation Protocol (RSVP) is a Transport Layer protocol
designed to reserve resources across a network for an integrated services
Internet. RSVP operates over an IPv4 or IPv6 Internet Layer and provides
receiver-initiated setup of resource reservations for multicast or unicast data
flows with scaling and robustness. It does not transport application data but
is similar to a control protocol, like Internet Control Message Protocol
(ICMP) or Internet Group Management Protocol (IGMP). RSVP is described
in RFC 2205.
RSVP can be used by either hosts or routers to request or deliver specific
levels of quality of service (QoS) for application data streams or flows.
RSVP defines how applications place reservations and how they can
relinquish the reserved resources once the need for them has ended. RSVP
operation will generally result in resources being reserved in each node
along a path.
RSVP is not a routing protocol and was designed to interoperate with current
and future routing protocols.
86 Wireless Datagram Protocol
It defines the movement of information from sender to receiver and
resembles the User Datagram Protocol in the Internet protocol suite.
The Wireless Datagram Protocol (WDP), a protocol in WAP architecture,
covers the Transport Layer Protocols in the Internet model. As a general
transport service, WDP offers to the upper layers an invisible interface
independent of the underlying network technology used. In consequence of
the interface common to transport protocols, the upper layer protocols of the
WAP architecture can operate independent of the underlying wireless
network. By letting only the transport layer deal with physical
network-dependent issues, global interoperability can be achieved using
mediating gateways.
87 The Internet Group Management Protocol
(IGMP) is a communications protocol used by hosts and adjacent routers on
IPv4 networks to establish multicast group memberships. IGMP is an
integral part of IP multicast.
IGMP can be used for one-to-many networking applications such as online
streaming video and gaming, and allows more efficient use of resources
when supporting these types of applications.
IGMP is used on IPv4 networks. Multicast management on IPv6 networks is
handled by Multicast Listener Discovery (MLD) which uses ICMPv6
messaging in contrast to IGMP's bare IP encapsulation.
88 Explicit Congestion Notification
Explicit Congestion Notification (ECN) is an extension to the Internet
Protocol and to the Transmission Control Protocol and is defined in RFC
3168 (2001). ECN allows end-to-end notification of network congestion
without dropping packets. ECN is an optional feature that may be used
between two ECN-enabled endpoints when the underlying network
infrastructure also supports it.
Conventionally, TCP/IP networks signal congestion by dropping packets.
When ECN is successfully negotiated, an ECN-aware router may set a mark
in the IP header instead of dropping a packet in order to signal impending
congestion. The receiver of the packet echoes the congestion indication to
the sender, which reduces its transmission rate as though it detected a
dropped packet.
Rather than responding properly or ignoring the bits, some outdated or faulty
network equipment drops packets that have ECN bits set.
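The marking behaviour described above can be sketched with the ECN codepoints, which occupy the two least-significant bits of the IPv4 TOS / IPv6 Traffic Class byte (RFC 3168). The helper function is a simplified illustration, not router firmware.

```python
# ECN codepoints in the low two bits of the TOS / Traffic Class byte.
NOT_ECT = 0b00   # endpoint not ECN-capable
ECT_1   = 0b01   # ECN-capable transport
ECT_0   = 0b10   # ECN-capable transport
CE      = 0b11   # Congestion Experienced, set by a router

def mark_congestion(tos: int) -> int:
    """What an ECN-aware router does instead of dropping: set CE,
    but only if the packet advertises an ECN-capable transport."""
    if tos & 0b11 in (ECT_0, ECT_1):
        return (tos & ~0b11) | CE
    return tos  # not ECN-capable: a real router would drop instead

tos = 0b0000_0010            # DSCP 0, ECT(0)
marked = mark_congestion(tos)
print(bin(marked & 0b11))    # the CE codepoint
```
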
89 Internet Control Message Protocol version 6
(ICMPv6) is the implementation of the Internet Control Message Protocol
(ICMP) for Internet Protocol version 6 (IPv6) defined in RFC 4443.[1]
ICMPv6 is an integral part of IPv6 and performs error reporting and
diagnostic functions (e.g., ping), and has a framework for extensions to
implement future changes.
Several extensions have been published, defining new ICMPv6 message
types as well as new options for existing ICMPv6 message types. Neighbor
Discovery Protocol (NDP) is a node discovery protocol in IPv6 which
replaces and enhances functions of ARP.[2] Secure Neighbor Discovery
Protocol (SEND) is an extension of NDP with extra security. Multicast
Router Discovery (MRD) allows discovery of multicast routers.
90 The Internet Control Message Protocol
(ICMP) is one of the main protocols of the Internet Protocol Suite. It is used
by network devices, like routers, to send error messages indicating, for
example, that a requested service is not available or that a host or router
could not be reached. ICMP can also be used to relay query messages.[1] It
is assigned protocol number 1.[2] ICMP[3] differs from transport protocols
such as TCP and UDP in that it is not typically used to exchange data
between systems, nor is it regularly employed by end-user network
applications (with the exception of some diagnostic tools like ping and
traceroute).
The Internet Control Message Protocol is part of the Internet Protocol Suite,
as defined in RFC 792. ICMP messages are typically used for diagnostic or
control purposes or generated in response to errors in IP operations (as
specified in RFC 1122). ICMP errors are directed to the source IP address of
the originating packet.[1]
For example, every device (such as an intermediate router) forwarding an IP
datagram first decrements the time to live (TTL) field in the IP header by
one. If the resulting TTL is 0, the packet is discarded and an ICMP Time To
Live exceeded in transit message is sent to the datagram's source address.
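Every ICMP message carries a 16-bit ones'-complement checksum (RFC 1071) over its header and data. The sketch below builds an Echo Request, the message that ping sends, and shows the receiver-side check; actually transmitting it would require a raw socket and privileges, and the identifier and payload are arbitrary.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"          # pad odd-length data
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

# Build an ICMP Echo Request (type 8, code 0) with the checksum filled in.
ident, seq, payload = 0x1234, 1, b"ping"
header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # checksum field zeroed
csum = internet_checksum(header + payload)
packet = struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

# A receiver verifies by checksumming the whole message: the result is 0.
print(internet_checksum(packet))
```
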
91 The Internet Protocol
(IP) is the principal communications protocol in the Internet protocol suite
for relaying datagrams across network boundaries. Its routing function
enables internetworking, and essentially establishes the Internet.
IP, as the primary protocol in the Internet layer of the Internet protocol suite,
has the task of delivering packets from the source host to the destination host
solely based on the IP addresses in the packet headers. For this purpose, IP
defines packet structures that encapsulate the data to be delivered. It also
defines addressing methods that are used to label the datagram with source
and destination information.

Historically, IP was the connectionless datagram service in the original
Transmission Control Program introduced by Vint Cerf and Bob Kahn in
1974; the other component was the connection-oriented Transmission
Control Protocol
(TCP). The Internet protocol suite is therefore often referred to as TCP/IP.
The first major version of IP, Internet Protocol Version 4 (IPv4), is the
dominant protocol of the Internet. Its successor is Internet Protocol Version 6
(IPv6).
92 IPv4
Internet Protocol version 4 (IPv4) is the fourth version in the development of
the Internet Protocol (IP), and routes most traffic on the Internet.[1]
However, a successor protocol, IPv6, has been defined and is in various
stages of production deployment. IPv4 is described in IETF publication RFC
791 (September 1981), replacing an earlier definition (RFC 760, January
1980).
IPv4 is a connectionless protocol for use on packet-switched networks. It
operates on a best effort delivery model, in that it does not guarantee
delivery, nor does it assure proper sequencing or avoidance of duplicate
delivery. These aspects, including data integrity, are addressed by an upper
layer transport protocol, such as the Transmission Control Protocol (TCP).
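The fixed part of the packet structure IPv4 defines can be unpacked with Python's struct module; the sample header below is hand-built for illustration (checksum left zero, options not handled).

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header (options not handled)."""
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, csum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl": ver_ihl & 0x0F,      # header length in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,          # 6 = TCP, 17 = UDP, 1 = ICMP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-built sample: version 4, IHL 5, TTL 64, protocol 6 (TCP),
# 10.0.0.1 -> 10.0.0.2, checksum left as 0 for illustration.
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
print(parse_ipv4_header(sample))
```
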
93 Internet Protocol version 6
(IPv6) is the most recent version of the Internet Protocol (IP), the
communications protocol that provides an identification and location system
for computers on networks and routes traffic across the Internet. IPv6 was
developed by the Internet Engineering Task Force (IETF) to deal with the
long-anticipated problem of IPv4 address exhaustion. IPv6 is intended to
replace IPv4.
IPv6 provides other technical benefits in addition to a larger addressing
space. In particular, it permits hierarchical address allocation methods that
facilitate route aggregation across the Internet, and thus limit the expansion
of routing tables. The use of multicast addressing is expanded and
simplified, and provides additional optimization for the delivery of services.
Device mobility, security, and configuration aspects have been considered in
the design of the protocol.
94 IPsec
Internet Protocol Security (IPsec) is a protocol suite for securing Internet
Protocol (IP) communications by authenticating and encrypting each IP
packet of a communication session. IPsec includes protocols for establishing
mutual authentication between agents at the beginning of the session and
negotiation of cryptographic keys to be used during the session. IPsec can be
used in protecting data flows between a pair of hosts (host-to-host), between
a pair of security gateways (network-to-network), or between a security
gateway and a host (network-to-host).
Internet Protocol security (IPsec) uses cryptographic security services to
protect communications over Internet Protocol (IP) networks. IPsec supports
network-level peer authentication, data origin authentication, data integrity,
data confidentiality (encryption), and replay protection.
95 the link layer
In computer networking, the link layer is the lowest layer in the Internet
Protocol Suite, commonly known as TCP/IP, the networking architecture of
the Internet. It is described in RFC 1122 and RFC 1123. The link layer is the
group of methods and communications protocols that only operate on the
link that a host is physically connected to. The link is the physical and
logical network component used to interconnect hosts or nodes in the
network and a link protocol is a suite of methods and standards that operate
only between adjacent network nodes of a local area network segment or a
wide area network connection.
Despite the different semantics of layering in TCP/IP and OSI, the link layer
is sometimes described as a combination of the data link layer (layer 2) and
the physical layer (layer 1) in the OSI model. However, the layers of TCP/IP
are descriptions of operating scopes (application, host-to-host, network, link)
and not detailed prescriptions of operating procedures, data semantics, or
networking technologies.
96 The Address Resolution Protocol
(ARP) is a telecommunication protocol used for resolution of network layer
addresses into link layer addresses, a critical function in multiple-access
networks. ARP was defined by RFC 826 in 1982. It is Internet Standard STD
37. It is also the name of the program for manipulating these addresses in
most operating systems.
ARP is used to convert a network address (e.g. an IPv4 address) to a
physical address such as an Ethernet address (also known as a MAC
address). ARP has been implemented with many combinations of network
and data link layer technologies, such as IPv4, Chaosnet, DECnet and Xerox
PARC Universal Packet (PUP) using IEEE 802 standards, FDDI, X.25,
Frame Relay and Asynchronous Transfer Mode (ATM). IPv4 over IEEE
802.3 and IEEE 802.11 is the most common case.
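The resolution step ARP performs can be modelled as a cached lookup: a host broadcasts a request only when its cache lacks a mapping. The dictionary standing in for the network, and all addresses below, are invented for illustration.

```python
# Toy ARP model: the network dict stands in for hosts answering
# "who has <ip>?" broadcasts; all addresses are made up.
network = {
    "192.168.1.1": "aa:bb:cc:dd:ee:01",
    "192.168.1.2": "aa:bb:cc:dd:ee:02",
}

arp_cache = {}   # the host's IP -> MAC cache

def resolve(ip: str) -> str:
    if ip in arp_cache:           # cache hit: no traffic needed
        return arp_cache[ip]
    mac = network.get(ip)         # broadcast the who-has request
    if mac is None:
        raise LookupError("no ARP reply for " + ip)
    arp_cache[ip] = mac           # cache the reply for later sends
    return mac

print(resolve("192.168.1.1"))
print(resolve("192.168.1.1"))    # second call served from the cache
```
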
97 The Neighbor Discovery Protocol
(NDP) is a protocol in the Internet protocol suite used with Internet Protocol
Version 6 (IPv6). It operates in the Link Layer of the Internet model (RFC
1122) and is responsible for address autoconfiguration of nodes, discovery
of other nodes on the link, determining the link layer addresses of other
nodes, duplicate address detection, finding available routers and Domain
Name System (DNS) servers, address prefix discovery, and maintaining
reachability information about the paths to other active neighbor nodes (RFC
4861).
The protocol defines five different ICMPv6 packet types to perform
functions for IPv6 similar to the Address Resolution Protocol (ARP) and
Internet Control Message Protocol (ICMP) Router Discovery and Router
Redirect protocols for IPv4. However, it provides many improvements over
its IPv4 counterparts.
The Inverse Neighbor Discovery (IND) protocol extension (RFC 3122)
allows nodes to determine and advertise an IPv6 address corresponding to a
given link-layer address, similar to Reverse ARP for IPv4. The Secure
Neighbor Discovery Protocol (SEND) is a security extension of NDP that
uses Cryptographically Generated Addresses (CGA) and the Resource
Public Key Infrastructure (RPKI) to provide an alternate mechanism for
securing NDP with a cryptographic method that is independent of IPsec.
Neighbor Discovery Proxy (ND Proxy) (RFC 4389) provides a service
similar to IPv4 Proxy ARP and allows bridging multiple network segments
within a single subnet prefix when bridging cannot be done at the link layer.
98 Open Shortest Path First
Open Shortest Path First (OSPF) is a routing protocol for Internet Protocol
(IP) networks. It uses a link state routing algorithm and falls into the group
of interior routing protocols, operating within a single autonomous system
(AS). It is defined as OSPF Version 2 in RFC 2328 (1998) for IPv4. The
updates for IPv6 are specified as OSPF Version 3 in RFC 5340 (2008).
OSPF is perhaps the most widely used interior gateway protocol (IGP) in
large enterprise networks. IS-IS, another link-state dynamic routing protocol,
is more common in large service provider networks. The most widely used
exterior gateway protocol is the Border Gateway Protocol (BGP), the
principal routing protocol between autonomous systems on the Internet.

OSPF detects changes in the topology, such as link failures, and converges
on a new loop-free routing structure within seconds. It computes the shortest
path tree for each route using a method based on Dijkstra's algorithm, a
shortest path first algorithm.
99 Tunneling protocol
In computer networks, a tunneling protocol allows a network user to access
or provide a network service that the underlying network does not support or
provide directly. One important use of a tunneling protocol is to allow a
foreign protocol to run over a network that does not support that particular
protocol; for example, running IPv6 over IPv4. Another important use is to
provide services that are impractical or unsafe to be offered using only the
underlying network services; for example, providing a corporate network
address to a remote user whose physical network address is not part of the
corporate network. Because tunneling involves repackaging the traffic data
into a different form, perhaps with encryption as standard, a third use is to
hide the nature of the traffic that is run through the tunnel.
The tunneling protocol works by using the data portion of a packet (the
payload) to carry the packets that actually provide the service. Tunneling
uses a layered protocol model such as those of the OSI or TCP/IP protocol
suite, but usually violates the layering when using the payload to carry a
service not normally provided by the network. Typically, the delivery
protocol operates at an equal or higher level in the layered model than the
payload protocol.
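The payload-carrying idea can be reduced to a minimal sketch: the inner protocol's entire packet, header included, becomes the data portion of an outer packet. The 4-byte header tags below are invented and are not a real wire format.

```python
# Minimal tunneling sketch; headers are invented 4-byte tags.
def encapsulate(delivery_hdr: bytes, inner_packet: bytes) -> bytes:
    # Tunnel entry: the whole inner packet becomes the outer payload.
    return delivery_hdr + inner_packet

def decapsulate(outer_packet: bytes) -> bytes:
    # Tunnel exit: strip the outer header, recover the inner packet.
    return outer_packet[4:]

inner = b"IPv6" + b"application data"   # the foreign protocol's packet
outer = encapsulate(b"IPv4", inner)     # e.g. IPv6-over-IPv4 tunneling
print(decapsulate(outer) == inner)
```
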
100 Point-to-Point Protocol
In computer networking, Point-to-Point Protocol (PPP) is a data link
protocol used to establish a direct connection between two nodes. It can
provide connection authentication, transmission encryption and
compression.
PPP is used over many types of physical networks including serial cable,
phone line, trunk line, cellular telephone, specialized radio links, and fiber
optic links such as SONET. PPP is also used over Internet access
connections. Internet service providers (ISPs) have used PPP for customer
dial-up access to the Internet, since IP packets cannot be transmitted over a
modem line on their own, without some data link protocol. Two derivatives
of PPP, Point-to-Point Protocol over Ethernet (PPPoE) and Point-to-Point
Protocol over ATM (PPPoA), are used most commonly by Internet Service
Providers (ISPs) to establish a Digital Subscriber Line (DSL) Internet
service connection with customers.
101 Media access control
In the seven-layer OSI model of computer networking, the media access
control (MAC) data communication protocol is a sublayer of the data link
layer (layer 2). The MAC sublayer provides addressing and channel access
control mechanisms that make it possible for several terminals or network
nodes to communicate within a multiple access network that incorporates a
shared medium, e.g. Ethernet. The hardware that implements the MAC is
referred to as a media access controller.
The MAC sublayer acts as an interface between the logical link control
(LLC) sublayer and the network's physical layer. The MAC layer emulates a
full-duplex logical communication channel in a multi-point network. This
channel may provide unicast, multicast or broadcast communication service.
The primary functions performed by the MAC layer are:
Frame delimiting and recognition
Addressing of destination stations (both as individual stations and as groups
of stations)
Conveyance of source-station addressing information
Transparent data transfer of LLC PDUs, or of equivalent information in the
Ethernet sublayer
Protection against errors, generally by means of generating and checking
frame check sequences
Control of access to the physical transmission medium
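The frame-check-sequence function in the list above can be sketched with CRC-32, the check Ethernet uses; Python's zlib.crc32 applies the same polynomial, though this is an illustration rather than a bit-exact frame encoder.

```python
import zlib

def add_fcs(frame: bytes) -> bytes:
    # Generate the frame check sequence and append it (little-endian,
    # as on the Ethernet wire).
    return frame + zlib.crc32(frame).to_bytes(4, "little")

def fcs_ok(frame_with_fcs: bytes) -> bool:
    # Recompute the CRC over the frame body and compare with the FCS.
    frame, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return zlib.crc32(frame).to_bytes(4, "little") == fcs

frame = add_fcs(b"destination|source|payload")
print(fcs_ok(frame))                 # intact frame passes the check
corrupted = b"X" + frame[1:]
print(fcs_ok(corrupted))             # a flipped byte is detected
```
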

102 Layer 2 Tunneling Protocol


In computer networking, Layer 2 Tunneling Protocol (L2TP) is a tunneling
protocol used to support virtual private networks (VPNs) or as part of the
delivery of services by ISPs. It does not provide any encryption or
confidentiality by itself. Rather, it relies on an encryption protocol that it
passes within the tunnel to provide privacy.
The entire L2TP packet, including payload and L2TP header, is sent within a
User Datagram Protocol (UDP) datagram. It is common to carry PPP
sessions within an L2TP tunnel. L2TP does not provide confidentiality or

strong authentication by itself. IPsec is often used to secure L2TP packets by


providing confidentiality, authentication and integrity. The combination of
these two protocols is generally known as L2TP/IPsec
103 Circuit Switching
Circuit switching is a methodology of implementing a telecommunications
network in which two network nodes establish a dedicated communications
channel (circuit) through the network before the nodes may communicate.
The circuit guarantees the full bandwidth of the channel and remains
connected for the duration of the communication session. The circuit
functions as if the nodes were physically connected as with an electrical
circuit.
The defining example of a circuit-switched network is the early analog
telephone network. When a call is made from one telephone to another,
switches within the telephone exchanges create a continuous wire circuit
between the two telephones, for as long as the call lasts.
Circuit switching contrasts with packet switching which divides the data to
be transmitted into packets transmitted through the network independently.
In packet switching, instead of being dedicated to one communication
session at a time, network links are shared by packets from multiple
competing communication sessions, resulting in the loss of the quality of
service guarantees that are provided by circuit switching.
104 Virtual Circuit
A virtual circuit (VC) is a means of transporting data over a packet switched
computer network in such a way that it appears as though there is a
dedicated physical layer link between the source and destination end systems
of this data. The term virtual circuit is synonymous with virtual connection
and virtual channel. Before a connection or virtual circuit may be used, it has
to be established, between two or more nodes or software applications, by
configuring the relevant parts of the interconnecting network. After that, a
bit stream or byte stream may be delivered between the nodes; hence, a
virtual circuit protocol allows higher level protocols to avoid dealing with
the division of data into segments, packets, or frames.
Virtual circuit communication resembles circuit switching, since both are
connection oriented, meaning that in both cases data is delivered in correct
order, and signalling overhead is required during a connection establishment
phase. However, circuit switching provides a constant bit rate and latency,
while these may vary in a virtual circuit service due to factors such as:
varying packet queue lengths in the network nodes,
varying bit rate generated by the application,
varying load from other users sharing the same network resources by means
of statistical multiplexing, etc.

105 Multiplexing
In telecommunications and computer networks, multiplexing (sometimes
contracted to muxing) is a method by which multiple analog message signals
or digital data streams are combined into one signal over a shared medium.
The aim is to share an expensive resource. For example, in
telecommunications, several telephone calls may be carried using one wire.
Multiplexing originated in telegraphy in the 1870s, and is now widely
applied in communications. In telephony, George Owen Squier is credited
with the development of telephone carrier multiplexing in 1910.
The multiplexed signal is transmitted over a communication channel, which
may be a physical transmission medium (e.g. a cable). The multiplexing
divides the capacity of the low-level communication channel into several
high-level logical channels, one for each message signal or data stream to be
transferred. A reverse process, known as demultiplexing, can extract the
original channels on the receiver side.
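Time-division multiplexing, one common scheme, can be sketched by interleaving fixed-size units from several streams onto one shared channel and recovering them by position on the receiver side; the three "calls" below are invented data.

```python
def multiplex(streams):
    """Round-robin one character per stream per frame onto one channel."""
    return "".join("".join(chars) for chars in zip(*streams))

def demultiplex(signal, n):
    """Recover the n original channels by their positions in each frame."""
    return [signal[i::n] for i in range(n)]

calls = ["AAAA", "BBBB", "CCCC"]   # three equal-rate message signals
line = multiplex(calls)            # the one shared channel
print(line)
print(demultiplex(line, 3))
```
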
106 Message Switching
In telecommunications, message switching was the precursor of packet
switching, where messages were routed in their entirety, one hop at a time. It
was first built by Collins Radio Company, Newport Beach, California during
the period 1959-1963 for sale to large airlines, banks and railroads. Message
switching systems are nowadays mostly implemented over packet-switched
or circuit-switched data networks. Each message is treated as a separate
entity. Each message contains addressing information, and at each switch
this information is read and the transfer path to the next switch is decided.
Depending on network conditions, a conversation of several messages may
not be transferred over the same path. Each message is stored (usually on
hard drive due to RAM limitations) before being transmitted to the next
switch. Because of this it is also known as a 'store-and-forward' network.
Email is a common application for Message Switching. A delay in delivering
email is allowed unlike real time data transfer between two computers.
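The store-and-forward behaviour can be modelled as hop-by-hop forwarding driven by per-switch routing tables; the switch names and tables below are invented for illustration.

```python
# Toy store-and-forward model: at each switch, destination -> next hop.
routes = {
    "SW1": {"hostB": "SW2"},
    "SW2": {"hostB": "hostB"},
}

def deliver(message, destination, first_switch):
    """Carry the whole message one hop at a time, recording the path."""
    hops = [first_switch]
    switch = first_switch
    while True:
        next_hop = routes[switch][destination]  # read address, pick next hop
        hops.append(next_hop)                   # message stored, then forwarded
        if next_hop == destination:
            return hops
        switch = next_hop

print(deliver("mail for B", "hostB", "SW1"))
```
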

107 Optical Transport Network


Optical Transport Network (OTN) is a large, complex network of server hubs
at different locations on the ground, connected by optical fiber cable or
optical network carriers, to transport data across different nodes. The server
hubs are also known as head-ends, nodes or simply sites. OTNs are the
backbone of Internet Service Providers and are often daisy-chained and
cross-connected to provide network redundancy. Such a setup facilitates
uninterrupted services and fail-over capabilities during maintenance
windows, equipment failure or accidents.
The devices used to transport data are known as network transport
equipment.
The capacity of a network is mainly dependent on the type of signalling
scheme employed at the transmitting and receiving ends. In earlier days, a
single-wavelength light beam was used to transmit data, which limited the
bandwidth to the maximum operating frequency of the transmitting and
receiving end equipment. With the application of wavelength division
multiplexing (WDM), the bandwidth of OTN has risen to 100 Gbit/s
(OTU4 signal), by emitting light beams of different wavelengths. Lately,
AT&T, Verizon, and Rogers Communication have been able to employ these
100G "pipes" in their metro network. Large field areas are mostly serviced
by 40G pipes (OC192/STM-64).

108 An Internet area network


(IAN) is a concept for a communications network that connects voice and
data endpoints within a cloud environment over IP, replacing an existing
local area network (LAN), wide area network (WAN) or the public switched
telephone network (PSTN).
Seen by proponents as the networking model of the future, an IAN securely
connects endpoints through the public Web, so that they can communicate
and exchange information and data without being tied to a physical location.
Hosted in the cloud by a managed services provider, an IAN platform offers
users secure access to information from anywhere, at any time, via an
Internet connection. Users also have access to telephony, voicemail, e-mail,
and fax services from any connected endpoint. For businesses, the hosted
model reduces IT and communications expenses and protects against loss of
data and disaster downtime, while realizing a greater return on invested
resources through increased employee productivity and reduced telecom
costs.

109 Near field communication


Near field communication (NFC) is a set of ideas and technology that
enables smartphones and other devices to establish radio communication
with each other by touching them together or bringing them into proximity,
typically a distance of 10 cm (3.9 in) or less. Each full NFC device can work
in 3 modes: NFC target (acting like a credential), NFC initiator (acting as a
reader), and NFC peer-to-peer. Most of the first business models, like
advertisement tags or other industrial applications, have not been successful.
They have always been overtaken by other technologies, such as 2D
barcodes or UHF tags.
The main advantage of NFC is that NFC devices are often cloud connected.
Connected credentials can be provisioned over the air unlike a standard card.
All connected NFC-enabled smartphones can be provisioned with dedicated
apps, which gives applications huge benefits (as opposed to the traditional
dedicated infrastructure of ticket, access control, or payment readers). All
NFC peers can connect a third-party NFC device
with a server for any action or reconfiguration.
110 Body Area Network
A body area network (BAN), also referred to as a wireless body area
network (WBAN) or a body sensor network (BSN), is a wireless network of
wearable computing devices. BAN devices may be implants embedded inside the
body, may be surface-mounted on the body in a fixed position (wearable
technology), or may be accompanied devices that humans can carry in
different positions: in clothes pockets, by hand, or in various bags.
While there is a trend towards the miniaturization of devices, in particular
towards networks consisting of several miniaturized body sensor units (BSUs)
together with a single body central unit (BCU), larger decimeter-sized (tab
and pad) smart devices still play an important role as a data hub and data
gateway, and in providing a user interface to view and manage BAN
applications in situ.
The development of WBAN technology started around 1995 around the idea
of using wireless personal area network (WPAN) technologies to implement

communications on, near, and around the human body.


111 Near-me Area Network
A near-me area network (NAN) is a logical communication network that
focuses on communication among wireless devices in close proximity.
Unlike local area networks (LANs), in which the devices are in the same
network segment and share the same broadcast domain, the devices in a
NAN can belong to different proprietary network infrastructures (for
example, different mobile carriers). So, even though two devices are
geographically close, the communication path between them might, in fact,
traverse a long distance, going from a LAN, through the Internet, and to
another LAN. NAN applications focus on two-way communications among
people within a certain proximity to each other, but do not generally
concern themselves with those people's exact locations.

112 Interplanetary Internet


The interplanetary internet (based on IPN, also called InterPlaNet) is a
conceived computer network in space, consisting of a set of network nodes
that can communicate with each other. Communication would be greatly
delayed by the great interplanetary distances, so the IPN needs a new set of
protocols and technology that are tolerant to large delays and errors.
Although the Internet as it is known today tends to be a busy network of
networks with high traffic, negligible delay and errors, and a wired
backbone, the interplanetary Internet is a store-and-forward network of
internets that is often disconnected, has a wireless backbone fraught with
error-prone links, and suffers delays ranging from tens of minutes to even
hours, even when there is a connection.

113 The public switched telephone network


The public switched telephone network (PSTN) is the aggregate of the
world's circuit-switched telephone networks that are operated by national,
regional, or local telephony operators, providing infrastructure and services
for public telecommunication. The PSTN consists of telephone lines, fiber
optic cables, microwave transmission links, cellular networks,

communications satellites, and undersea telephone cables, all interconnected


by switching centers, thus allowing any telephone in the world to
communicate with any other. Originally a network of fixed-line analog
telephone systems, the PSTN is now almost entirely digital in its core
network and includes mobile and other networks, as well as fixed
telephones.
The technical operation of the PSTN adheres to the standards created by the
ITU-T. These standards allow different networks in different countries to
interconnect seamlessly. The E.163 and E.164 standards provide a single
global address space for telephone numbers. The combination of the
interconnected networks and the single numbering plan allow telephones
around the world to dial each other.

114 OSI model


The Open Systems Interconnection model (OSI Model) is a conceptual
model that characterizes and standardizes the internal functions of a
communication system by partitioning it into abstraction layers. The model
is a product of the Open Systems Interconnection project at the International
Organization for Standardization (ISO), maintained by the identification
ISO/IEC 7498-1.
An open system is a set of protocols that allows any two different systems to
communicate regardless of their underlying structure. The purpose of the OSI
model is to show how to facilitate communication between different systems
without requiring changes to the logic of the underlying hardware and
software. The OSI model is not a protocol; it is a model for understanding
and designing a network architecture that is flexible, robust, and
interoperable.
115 Layer 1: Physical Layer of the OSI model
The physical layer has the following major functions:
It defines the electrical and physical specifications of the data connection. It
defines the relationship between a device and a physical transmission
medium (e.g., a copper or fiber optical cable). This includes the layout of
pins, voltages, line impedance, cable specifications, signal timing, hubs,
repeaters, network adapters, host bus adapters (HBA used in storage area
networks) and more.
It defines the protocol to establish and terminate a connection between

two directly connected nodes over a communications medium.


It may define the protocol for flow control.
It defines the transmission mode, i.e. simplex, half-duplex, or full-duplex.
It defines the topology.
116 FTAM
FTAM, ISO standard 8571, is the OSI Application layer protocol for File
Transfer, Access and Management.
The goal of FTAM is to combine into a single protocol both file transfer,
similar in concept to the Internet FTP, as well as remote access to open files,
similar to NFS. However, like the other OSI protocols, FTAM has not been
widely adopted,[1] and the TCP/IP based Internet has become the dominant
global network.
The FTAM protocol was used in the German banking sector to transfer
clearing information. The Banking Communication Standard (BCS) over
FTAM access (short BCS-FTAM) was standardized in the DFÜ-Abkommen
(EDI agreement) enacted in Germany on 15 March 1995. The BCS-FTAM
transmission protocol was supposed to be replaced by the Electronic
Banking Internet Communication Standard (EBICS) in 2010. The obligatory
support for BCS over FTAM was to cease on 31 December 2010.
RFC 1415 provides an FTP-FTAM gateway specification but attempts to
define an Internet-scale file transfer protocol have instead focused on
Server Message Block, NFS, or the Andrew File System as models.
118 The Common Management Information Protocol
(CMIP) is the OSI-specified network management protocol, defined in ITU-T
Recommendation X.711 and ISO/IEC International Standard 9596-1.
It provides an implementation for the services defined by the
Common Management Information Service (CMIS) specified in ITU-T
Recommendation X.710, ISO/IEC International Standard 9595, allowing
communication between network management applications and management
agents. CMIS/CMIP is the network management protocol specified by the
ISO/OSI Network management model and is further defined by the ITU-T in
the X.700 series of recommendations.
CMIP models management information in terms of managed objects and
allows both modification and performing actions on managed objects.

Managed objects are described using GDMO (Guidelines for the Definition
of Managed Objects), and can be identified by a distinguished name (DN),
from the X.500 directory.
CMIP also provides good security (support authorization, access control, and
security logs) and flexible reporting of unusual network conditions.
119 FDDI
It provides a 100 Mbit/s optical standard for data transmission in local area
network that can extend in range up to 200 kilometers (120 mi). Although
FDDI logical topology is a ring-based token network, it did not use the IEEE
802.5 token ring protocol as its basis; instead, its protocol was derived from
the IEEE 802.4 token bus timed token protocol. In addition to covering large
geographical areas, FDDI local area networks can support thousands of
users. FDDI offers both a Dual-Attached Station (DAS), counter-rotating
token ring topology and a Single-Attached Station (SAS), token bus passing
ring topology.
120 SONET
Synchronous Optical Networking (SONET) and Synchronous Digital
Hierarchy (SDH) are standardized protocols that transfer multiple digital bit
streams synchronously over optical fiber using lasers or highly coherent
light from light-emitting diodes (LEDs). At low transmission rates data can
also be transferred via an electrical interface. The method was developed to
replace the Plesiochronous Digital Hierarchy (PDH) system for transporting
large amounts of telephone calls and data traffic over the same fiber without
synchronization problems. SONET generic criteria are detailed in Telcordia
Technologies Generic Requirements document GR-253-CORE. Generic
criteria applicable to SONET and other transmission systems (e.g.,
asynchronous fiber optic systems or digital radio systems) are found in
Telcordia GR-499-CORE.
SONET and SDH, which are essentially the same, were originally designed
to transport circuit mode communications (e.g., DS1, DS3) from a variety of
different sources, but they were primarily designed to support real-time,
uncompressed, circuit-switched voice encoded in PCM format. The primary
difficulty in doing this prior to SONET/SDH was that the synchronization
sources of these various circuits were different. This meant that each circuit
was actually operating at a slightly different rate and with different phase.
SONET/SDH allowed for the simultaneous transport of many different
circuits of differing origin within a single framing protocol. SONET/SDH is
not a communications protocol in itself, but a transport protocol.

121 A communication channel


In telecommunications and computer networking, a communication channel
or channel, refers either to a physical transmission medium such as a wire, or
to a logical connection over a multiplexed medium such as a radio channel.
A channel is used to convey an information signal, for example a digital bit
stream, from one or several senders (or transmitters) to one or several
receivers. A channel has a certain capacity for transmitting information,
often measured by its bandwidth in Hz or its data rate in bits per second.
Communicating data from one location to another requires some form of
pathway or medium. These pathways, called communication channels, use
two types of media: cable (twisted-pair wire, cable, and fiber-optic cable)
and broadcast (microwave, satellite, radio, and infrared). Cable or wire line
media use physical wires or cables to transmit data and information.
Twisted-pair wire and coaxial cables are made of copper, and fiber-optic
cable is made of glass.
122 A point-to-multipoint channel
It is also known as a broadcasting medium (not to be confused with a
broadcasting channel): in this channel, a single sender transmits multiple
messages to different destination nodes. All wireless channels except radio
links can be considered as broadcasting media, but may not always provide
broadcasting service. The downlink of a cellular system can be considered as
a point-to-multipoint channel, if only one cell is considered and inter-cell
co-channel interference is neglected. However, the communication service of a
phone call is unicasting.
123 Frequency Division Multiple Access (FDMA)
The frequency-division multiple access (FDMA) channel-access scheme is
based on the frequency-division multiplexing (FDM) scheme, which
provides different frequency bands to different data-streams. In the FDMA
case, the data streams are allocated to different nodes or devices. An
example of FDMA systems were the first-generation (1G) cell-phone
systems, where each phone call was assigned to a specific uplink frequency
channel, and another downlink frequency channel. Each message signal
(each phone call) is modulated on a specific carrier frequency. A related
technique is wavelength division multiple access (WDMA), based on
wavelength-division multiplexing (WDM), where different data streams get

different colors in fiber-optical communications. In the WDMA case,


different network nodes in a bus or hub network get a different color.
124 Time division multiple access (TDMA)
The time division multiple access (TDMA) channel access scheme is based
on the time-division multiplexing (TDM) scheme, which provides different
time-slots to different data-streams (in the TDMA case to different
transmitters) in a cyclically repetitive frame structure. For example, node 1
may use time slot 1, node 2 time slot 2, etc. until the last transmitter. Then it
starts all over again, in a repetitive pattern, until a connection is ended and
that slot becomes free or assigned to another node. An advanced form is
Dynamic TDMA (DTDMA), where the scheduling may change from frame to frame:
node 1 may use time slot 1 in the first frame and a different time slot in
the next frame.
As an example, 2G cellular systems are based on a combination of TDMA
and FDMA. Each frequency channel is divided into eight timeslots, of which
seven are used for seven phone calls, and one for signalling data.
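The cyclically repeating slot structure described above can be sketched in a few lines. The 8-slot frame with one signalling slot mirrors the 2G example, but the function and call names are illustrative assumptions, not part of any standard.

```python
# Toy TDMA scheduler: a cyclically repeating frame of time slots,
# as in the 2G example (7 slots for calls, 1 for signalling).
def build_frame(calls, num_slots=8, signalling_slot=0):
    """Map each call to a fixed slot in every frame; one slot carries signalling."""
    frame = {signalling_slot: "signalling"}
    data_slots = [s for s in range(num_slots) if s != signalling_slot]
    if len(calls) > len(data_slots):
        raise ValueError("more calls than available slots")
    for slot, call in zip(data_slots, calls):
        frame[slot] = call
    return frame

frame = build_frame([f"call-{i}" for i in range(1, 8)])
# All 8 slots are now occupied: 1 signalling slot + 7 calls.
```

A Dynamic TDMA scheduler would simply rebuild this mapping for every frame instead of keeping it fixed.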
125 code division multiple access (CDMA)/Spread spectrum multiple access
(SSMA)
The code division multiple access (CDMA) scheme is based on spread
spectrum, meaning that a wider radio spectrum in Hertz is used than the data
rate of each of the transferred bit streams, and several message signals are
transferred simultaneously over the same carrier frequency, utilizing
different spreading codes. The wide bandwidth makes it possible to send
with a very poor signal-to-noise ratio of much less than 1 (less than 0 dB)
according to the Shannon-Hartley formula, meaning that the transmission
power can be reduced to a level below the level of the noise and co-channel
interference (cross talk) from other message signals sharing the same
frequency.
One form is direct sequence spread spectrum (DS-CDMA), used for
example in 3G cell phone systems. Each information bit (or each symbol) is
represented by a long code sequence of several pulses, called chips. The
sequence is the spreading code, and each message signal (for example, each
phone call) uses a different spreading code.
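The chips-and-spreading-codes idea can be demonstrated numerically. This is a minimal sketch assuming 4-chip orthogonal codes and ±1 bit values, far shorter than the spreading codes real systems use.

```python
# Toy DS-CDMA: two users share the channel using orthogonal spreading codes.
code_a = [+1, +1, -1, -1]   # user A's spreading code (chips)
code_b = [+1, -1, +1, -1]   # user B's spreading code, orthogonal to A's

def spread(bits, code):
    """Represent each data bit (+1/-1) as a run of chips."""
    return [b * c for b in bits for c in code]

def despread(signal, code):
    """Correlate the summed channel with one code to recover that user's bits."""
    n = len(code)
    bits = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        bits.append(1 if corr > 0 else -1)
    return bits

bits_a, bits_b = [+1, -1], [-1, -1]
# Both spread signals are transmitted simultaneously and simply add up.
channel = [a + b for a, b in zip(spread(bits_a, code_a), spread(bits_b, code_b))]
```

Because the codes are orthogonal, despreading with each user's code separates the two superimposed signals again.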
126 Carrier Sense Multiple Access With Collision Detection

(CSMA/CD) is a media access control method used most notably in local


area networking using early Ethernet technology. It uses a carrier sensing
scheme in which a transmitting data station detects other signals while
transmitting a frame, and stops transmitting that frame, transmits a jam
signal, and then waits for a random time interval before trying to resend the
frame.[1]
CSMA/CD is a modification of pure carrier sense multiple access (CSMA).
CSMA/CD is used to improve CSMA performance by terminating
transmission as soon as a collision is detected, thus shortening the time
required before a retry can be attempted.
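The "waits for a random time interval" step is classically a truncated binary exponential backoff. A minimal sketch, assuming the classic Ethernet cap of 10 doublings:

```python
import random

# Sketch of CSMA/CD truncated binary exponential backoff: the random wait
# after a collision. The cap of 10 follows classic Ethernet, but this is an
# illustration, not a full MAC implementation.
def backoff_slots(attempt, rng=random):
    """After the n-th collision, wait a random number of slot times in
    [0, 2^k - 1], where k = min(n, 10)."""
    k = min(attempt, 10)
    return rng.randrange(2 ** k)
```

Doubling the range after each collision spreads retries out quickly, so repeated collisions between the same stations become increasingly unlikely.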
127 Automatic Packet Reporting System
(APRS) is an amateur radio-based system for real time tactical digital
communications of information of immediate value in the local area.[1] In
addition, all such data are ingested into the APRS Internet System (APRS-IS)
and distributed globally for ubiquitous and immediate access. Along with
messages, alerts, announcements, and bulletins, the most visible aspect of
APRS is its map display. Anyone may place any object or information on his
or her map, and it is distributed to all maps of all users in the local RF
network or monitoring the area via the Internet. Any station, radio, or object
that has an attached GPS is automatically tracked. Other prominent map
features are weather stations, alerts and objects and other map-related
amateur radio volunteer activities including Search and Rescue and signal
direction finding.
128 Orthogonal frequency-division multiplexing
(OFDM) is a method of encoding digital data on multiple carrier
frequencies. OFDM has developed into a popular scheme for wideband
digital communication, used in applications such as digital television and
audio broadcasting, DSL Internet access, wireless networks, powerline
networks, and 4G mobile communications.
OFDM is a frequency-division multiplexing (FDM) scheme used as a digital
multi-carrier modulation method. A large number of closely spaced
orthogonal sub-carrier signals are used to carry data[1] on several parallel
data streams or channels. Each sub-carrier is modulated with a conventional
modulation scheme (such as quadrature amplitude modulation or phase-shift
keying) at a low symbol rate, maintaining total data rates similar to
conventional single-carrier modulation schemes in the same bandwidth.
The primary advantage of OFDM over single-carrier schemes is its ability to
cope with severe channel conditions.
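The orthogonality of the closely spaced sub-carriers can be checked numerically: summed over one symbol period of N samples, two different sub-carriers have zero inner product. The 64-sample symbol below is an arbitrary illustrative choice.

```python
import cmath

# OFDM relies on sub-carriers spaced so that they are orthogonal over one
# symbol period: the inner product of sub-carriers k and m over N samples
# is 0 unless k == m.
def subcarrier(k, n, N):
    """Sample n of sub-carrier k over an N-sample symbol."""
    return cmath.exp(2j * cmath.pi * k * n / N)

def correlation(k, m, N=64):
    """Inner product of sub-carriers k and m over one symbol period."""
    return sum(subcarrier(k, n, N) * subcarrier(m, n, N).conjugate()
               for n in range(N))
```

Identical sub-carriers correlate to N, while distinct ones cancel to (numerically) zero, which is what lets each one be demodulated without interference from its neighbors.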

129 ISI
In telecommunication, intersymbol interference (ISI) is a form of distortion
of a signal in which one symbol interferes with subsequent symbols. This is
an unwanted phenomenon, as the previous symbols have a similar effect to
noise, thus making the communication less reliable. ISI is usually caused by
multipath propagation or the inherent non-linear frequency response of a
channel causing successive symbols to "blur" together.
The presence of ISI in the system introduces errors in the decision device at
the receiver output. Therefore, in the design of the transmitting and receiving
filters, the objective is to minimize the effects of ISI, and thereby deliver the
digital data to its destination with the smallest error rate possible.
Ways to fight intersymbol interference include adaptive equalization and
error-correcting codes.
130 Cyclic redundancy checks (CRCs)
A cyclic redundancy check (CRC) is a single-burst-error-detecting cyclic
code and non-secure hash function designed to detect accidental changes to
digital data in computer networks. It is not suitable for detecting maliciously
introduced errors. It is characterized by specification of a so-called generator
polynomial, which is used as the divisor in a polynomial long division over a
finite field, taking the input data as the dividend, and where the remainder
becomes the result.
Cyclic codes have favorable properties in that they are well suited for
detecting burst errors. CRCs are particularly easy to implement in hardware,
and are therefore commonly used in digital networks and storage devices
such as hard disk drives.
Even parity is a special case of a cyclic redundancy check, where the
single-bit CRC is generated by the divisor x + 1.
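The polynomial long division over a finite field can be sketched directly on bit lists. The 3-bit generator x^3 + x + 1 below is a textbook example, not a standardized CRC; the same routine with divisor x + 1 yields the even-parity special case mentioned above.

```python
# Bitwise CRC as polynomial long division over GF(2).
def crc_remainder(data_bits, generator_bits):
    """Divide the data (padded with len(generator)-1 zeros) by the generator
    polynomial; the remainder is the CRC."""
    bits = data_bits + [0] * (len(generator_bits) - 1)
    for i in range(len(data_bits)):
        if bits[i]:  # XOR the generator in wherever the leading bit is 1
            for j, g in enumerate(generator_bits):
                bits[i + j] ^= g
    return bits[-(len(generator_bits) - 1):]

data = [1, 0, 1, 1, 0, 1]
crc = crc_remainder(data, [1, 0, 1, 1])   # generator x^3 + x + 1
parity = crc_remainder(data, [1, 1])      # divisor x + 1 -> even parity bit
```

The sender appends the remainder to the data; the receiver repeats the division and flags an error if the result differs from the transmitted CRC.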
131 Error-correcting memory
DRAM memory may provide increased protection against soft errors by
relying on error correcting codes. Such error-correcting memory, known as
ECC or EDAC-protected memory, is particularly desirable for highly
fault-tolerant applications, such as servers, as well as deep-space
applications, due to increased radiation.
Error-correcting memory controllers traditionally use Hamming codes,
although some use triple modular redundancy.
Interleaving allows distributing the effect of a single cosmic ray potentially
upsetting multiple physically neighboring bits across multiple words by
associating neighboring bits to different words. As long as a single event
upset (SEU) does not exceed the error threshold (e.g., a single error) in any
particular word between accesses, it can be corrected (e.g., by a single-bit
error correcting code), and the illusion of an error-free memory system may
be maintained.
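A Hamming(7,4) code, the classic single-error-correcting scheme behind ECC memory, can be sketched as follows. The bit ordering follows the usual positions-1-to-7 convention; this is an illustration, not a memory-controller implementation.

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits at positions 1, 2, 4.
def hamming_encode(d):
    """Encode 4 data bits as a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def hamming_correct(c):
    """Recompute the parity checks; their binary value points at the bad bit."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3       # 0 means no error detected
    if syndrome:
        c[syndrome - 1] ^= 1              # flip the erroneous bit
    return [c[2], c[4], c[5], c[6]]       # extract the data bits

word = [1, 0, 1, 1]
code = hamming_encode(word)
code[5] ^= 1                              # simulate a single soft error
```

Any single flipped bit produces a nonzero syndrome equal to its position, so the decoder can repair it and return the original data word.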
133 Memory scrubbing
It consists of reading from each computer memory location, correcting bit
errors (if any) with an error-correcting code (ECC), and writing the corrected
data back to the same location.
Due to the high integration density of contemporary computer memory
chips, the individual memory cell structures became small enough to be
vulnerable to cosmic rays and/or alpha particle emission. The errors caused
by these phenomena are called soft errors. This can be a problem for DRAM
and SRAM based memories. The probability of a soft error at any individual
memory bit is very small. However, together with the large amount of
memory modern computers (especially servers) are equipped with, and

together with extended periods of uptime, the probability of soft errors in the
total memory installed is significant.
The information in an ECC memory is stored redundantly enough to correct a
single-bit error per memory word. Hence, an ECC memory can support the
scrubbing of the memory content. Namely, if the memory controller scans
systematically through the memory, the single bit errors can be detected, the
erroneous bit can be determined using the ECC checksum, and the corrected
data can be written back to the memory.
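A scrubbing pass can be sketched as a loop that reads each location, corrects any single error, and writes the corrected word back. To keep the sketch self-contained, the "ECC" here is triple modular redundancy (a bitwise majority vote over three stored copies) rather than the per-word checksum a real controller would use.

```python
# Sketch of a scrubbing pass over toy "memory" protected by triple modular
# redundancy: each word is stored three times and repaired by majority vote.
def majority(a, b, c):
    return (a & b) | (a & c) | (b & c)    # bitwise 2-out-of-3 vote

def scrub(memory):
    """memory is a list of [copy0, copy1, copy2] triples; repair in place
    and return how many locations needed correction."""
    corrected = 0
    for triple in memory:
        word = majority(*triple)
        if triple != [word] * 3:
            triple[:] = [word] * 3        # write the corrected data back
            corrected += 1
    return corrected

mem = [[0b1010] * 3 for _ in range(4)]
mem[2][1] ^= 0b0100                       # a soft error flips one stored bit
fixed = scrub(mem)
```

Running the pass periodically keeps single-bit errors from accumulating into multi-bit errors that the code could no longer correct.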
134 network congestion
In data networking and queueing theory, network congestion occurs when a
link or node is carrying so much data that its quality of service deteriorates.
Typical effects include queueing delay, packet loss or the blocking of new
connections. A consequence of the latter two effects is that an incremental
increase in offered load leads either only to a small increase in network
throughput, or to an actual reduction in network throughput.
Network protocols which use aggressive retransmissions to compensate for
packet loss tend to keep systems in a state of network congestion, even after
the initial load has been reduced to a level which would not normally have
induced network congestion. Thus, networks using these protocols can
exhibit two stable states under the same level of load. The stable state with
low throughput is known as congestive collapse
135 Signal-to-noise ratio
( SNR) is a measure used in science and engineering that compares the level
of a desired signal to the level of background noise. It is defined as the ratio
of signal power to the noise power, often expressed in decibels. A ratio
higher than 1:1 (greater than 0 dB) indicates more signal than noise. While
SNR is commonly quoted for electrical signals, it can be applied to any form
of signal (such as isotope levels in an ice core or biochemical signaling
between cells).
The signal-to-noise ratio, the bandwidth, and the channel capacity of a
communication channel are connected by the Shannon-Hartley theorem.
Signal-to-noise ratio is sometimes used informally to refer to the ratio of
useful information to false or irrelevant data in a conversation or exchange.
For example, in online discussion forums and other online communities,
off-topic posts and spam are regarded as "noise" that interferes with the "signal"
of appropriate discussion.
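Both the decibel definition and the Shannon-Hartley connection can be computed directly; the 1000:1 power ratio and 3 kHz bandwidth below are made-up illustrative figures.

```python
import math

# SNR in decibels and the Shannon-Hartley capacity it implies.
def snr_db(signal_power, noise_power):
    """10 * log10 of the signal-to-noise power ratio."""
    return 10 * math.log10(signal_power / noise_power)

def shannon_capacity(bandwidth_hz, snr_linear):
    """C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

ratio = 1000                               # signal power 1000x the noise power
db = snr_db(1000, 1)                       # 30 dB
capacity = shannon_capacity(3000, ratio)   # a 3 kHz voice-grade channel
```

Note that capacity grows only logarithmically with SNR but linearly with bandwidth, which is why wideband schemes can tolerate very low per-hertz SNR.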

136 noise
In communication systems, noise is an error or undesired random
disturbance of a useful information signal in a communication channel. The
noise is a summation of unwanted or disturbing energy from natural and
sometimes man-made sources. Noise is, however, typically distinguished
from interference, (e.g. cross-talk, deliberate jamming or other unwanted
electromagnetic interference from specific transmitters), for example in the
signal-to-noise ratio (SNR), signal-to-interference ratio (SIR), and
signal-to-noise-plus-interference ratio (SNIR) measures. Noise is also typically
distinguished from distortion, which is an unwanted systematic alteration of
the signal waveform by the communication equipment, for example in the
signal-to-noise and distortion ratio (SINAD). In a carrier-modulated
passband analog communication system, a certain carrier-to-noise ratio
(CNR) at the radio receiver input would result in a certain signal-to-noise
ratio in the detected message signal. In a digital communications system, a
certain Eb/N0 (normalized signal-to-noise ratio) would result in a certain bit
error rate (BER).

137 frequency modulation


(FM) is the encoding of information in a carrier wave by varying the
instantaneous frequency of the wave. (Compare with amplitude modulation,
in which the amplitude of the carrier wave varies, while the frequency
remains constant.)
In analog signal applications, the difference between the instantaneous and
the base frequency of the carrier is directly proportional to the instantaneous
value of the input-signal amplitude.
Digital data can be encoded and transmitted via a carrier wave by shifting
the carrier's frequency among a predefined set of frequencies, a technique
known as frequency-shift keying (FSK). FSK is widely used in modems and
fax modems, and can also be used to send Morse code.[1] Radioteletype also
uses FSK.
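FSK can be sketched as a simple bit-to-frequency mapping. The 1070/1270 Hz pair below is the originate-channel pair of the old Bell 103 modem standard, used here only as a familiar example.

```python
# Binary FSK: each bit selects one of two carrier frequencies.
MARK_HZ, SPACE_HZ = 1270, 1070    # frequency for a 1 bit and a 0 bit

def fsk_modulate(bits):
    """Return the frequency transmitted during each bit period."""
    return [MARK_HZ if b else SPACE_HZ for b in bits]

def fsk_demodulate(freqs):
    """Decide each bit by which of the two frequencies is closer."""
    return [1 if abs(f - MARK_HZ) < abs(f - SPACE_HZ) else 0 for f in freqs]

tones = fsk_modulate([1, 0, 1, 1])
```

Because the receiver only has to decide between two well-separated tones, FSK is robust against amplitude noise, which is one reason it suited early telephone-line modems.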
138 Amplitude modulation (AM)
It is a modulation technique used in electronic communication, most
commonly for transmitting information via a radio carrier wave. In
amplitude modulation, the amplitude (signal strength) of the carrier wave is

varied in proportion to the waveform being transmitted. That waveform may,


for instance, correspond to the sounds to be reproduced by a loudspeaker, or
the light intensity of television pixels. This technique contrasts with
frequency modulation, in which the frequency of the carrier signal is varied,
and phase modulation, in which its phase is varied.
AM was the earliest modulation method used to transmit voice by radio.

139 Encapsulation
It is the packing of data and functions into a single component. The features
of encapsulation are supported using classes in most object-oriented
programming languages, although other alternatives also exist. It allows
selective hiding of properties and methods in an object by building an
impenetrable wall to protect the code from accidental corruption.
In programming languages, encapsulation is used to refer to one of two
related but distinct notions, and sometimes to the combination thereof:
A language mechanism for restricting access to some of the object's
components.
A language construct that facilitates the bundling of data with the methods
(or other functions) operating on that data
Some programming language researchers and academics use the first
meaning alone or in combination with the second as a distinguishing feature
of object-oriented programming, while other programming languages which
provide lexical closures view encapsulation as a feature of the language
orthogonal to object orientation.
The second definition is motivated by the fact that in many OOP languages
hiding of components is not automatic or can be overridden; thus,
information hiding is defined as a separate notion by those who prefer the
second definition.
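Both notions above, bundling data with its methods and restricting access to components, appear in this small sketch (Python is assumed; its name mangling signals, rather than enforces, the restriction).

```python
# Encapsulation in both senses: the balance is bundled with the methods
# that operate on it, and direct access to it is restricted.
class Account:
    def __init__(self, balance=0):
        self.__balance = balance          # "hidden" component (name-mangled)

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.__balance += amount

    @property
    def balance(self):                    # read-only view of the hidden state
        return self.__balance

acct = Account()
acct.deposit(50)
```

Because callers can only reach the balance through `deposit` and the read-only property, the class's invariant (no non-positive deposits) cannot be bypassed accidentally.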
140 fragmentation
It is a phenomenon in which storage space is used inefficiently, reducing
capacity or performance and often both. The exact consequences of
fragmentation depend on the specific system of storage allocation in use and
the particular form of fragmentation. In many cases, fragmentation leads to
storage space being "wasted", and in that case the term also refers to the
wasted space itself. For other systems (e.g. the FAT file system) the space

used to store given data (e.g. files) is the same regardless of the degree of
fragmentation (from none to extreme).
There are three different but related forms of fragmentation: external
fragmentation, internal fragmentation, and data fragmentation, which can be
present in isolation or conjunction. Fragmentation is often accepted in return
for improvements in speed or simplicity.
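Internal fragmentation, the wasted tail of each file's last allocation block, can be computed directly; the block and file sizes below are arbitrary illustrative numbers.

```python
# Internal fragmentation in a block-based allocator: a file occupies whole
# blocks, so the unused tail of its last block is wasted space.
def internal_fragmentation(file_sizes, block_size):
    """Bytes allocated but unused, summed over all files."""
    wasted = 0
    for size in file_sizes:
        blocks = -(-size // block_size)   # ceiling division
        wasted += blocks * block_size - size
    return wasted

waste = internal_fragmentation([100, 4096, 5000], block_size=4096)
```

A 100-byte file in a 4096-byte block wastes 3996 bytes on its own, which illustrates why the choice of block size trades capacity against allocation speed and simplicity.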
141 Thinnet
This refers to RG-58 cabling, which is a flexible coaxial cable about ¼ inch
thick. Thinnet is used for short-distance communication and is flexible
enough to facilitate routing between workstations. Thinnet connects directly
to a workstation's network adapter card using a British Naval Connector
(BNC) and uses the network adapter card's internal transceiver. The
maximum length of thinnet is 185 meters.
142 Thicknet
This coaxial cable, also known as RG-8, gets its name from being a thicker
cable than thinnet. Thicknet cable is about ½ inch thick and can support data
transfer over longer distances than thinnet. Thicknet has a maximum cable
length of 500 meters and usually is used as a backbone to connect several
smaller thinnet-based networks. Due to its ½-inch thickness, this cable is
harder to work with than thinnet cable. A transceiver often is connected
directly to the thicknet cable using a connector known as a vampire tap.
Connection from the transceiver to the network adapter card is made using a
drop cable to connect to the attachment unit interface (AUI) port connector.
143 10base2
The 10Base2 Ethernet architecture is a network that runs at 10 Mbps and
uses baseband transmissions. 10Base2 typically is implemented as a bus
topology, but it could be a mix of a bus and a star topology. The cable type
that we use is determined by the character at the end of the name of the
architecture, in this case a 2. The 2 implies 200 meters. Now, what type of
cable is limited to approximately 200 m? You got it; thinnet is limited to
approximately 200 m (185 m, to be exact). The only characteristic we have
not mentioned is the access method that is used. All Ethernet environments
use CSMA/CD as a way to put data on the wire.
144 10baset

The 10BaseT Ethernet architecture runs at 10 Mbps and uses baseband


transmission. It uses a star topology with a hub or switch at the center,
allowing all systems to connect to one another. The cable it uses is CAT 3
UTP, which is the UTP cable type that runs at 10 Mbps. Keep in mind that
most cable types are backward compatible, so you could have CAT 5 UTP
cabling in a 10BaseT environment. But because the network cards and hubs
are running at 10 Mbps, that is the maximum transfer speed you will get,
even though the cable supports more. Like all Ethernet environments,
10BaseT uses CSMA/CD as the access method.
145
token ring
A big competitor to Ethernet in the past was Token Ring, which runs at 4
Mbps or 16 Mbps. Token Ring is a network architecture that uses a star ring
topology (a hybrid, looking physically like a star but logically wired as a
ring) and can use many forms of cables. IBM Token Ring has its own
proprietary cable types, while more modern implementations of Token Ring
can use CAT 3 or CAT 5 UTP cabling. Token Ring uses the token-passing
access method. Looking at Token Ring networks today, you may wonder
where the ring topology is, because the network appears to have a star
topology. The reason this network architecture appears to use a star topology
is that all hosts are connected to a central device that looks similar to a hub,
but with Token Ring, this device is called a multistation access unit (MAU
or MSAU). An example is shown in Figure 1-30. The ring is the internal
communication path within the wiring. Token Ring uses token passing; it is
impossible to have collisions in a token-passing environment, because the
MAUs do not have collision lights as an Ethernet hub does (remember that
Ethernet uses CSMA/CD and there is potential for collisions)
146 Cryptography
Cryptography is a technique to encrypt plain-text data, making it
difficult to understand and interpret. Several cryptographic
algorithms are available today, as described below.
Secret Key Encryption
Both sender and receiver have one secret key. This secret key is used to
encrypt the data at the sender's end. After the data is encrypted, it is sent
over the public domain to the receiver. Because the receiver knows and has
the Secret Key, the encrypted data packets can easily be decrypted.
Example of secret key encryption is Data Encryption Standard (DES). In

Secret Key encryption, it is required to have a separate key for each host on
the network making it difficult to manage.
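The symmetric pattern, one shared key used for both encryption and decryption, can be sketched with a toy XOR cipher. This is an illustration only; DES and its successors are far stronger and should always come from a vetted cryptography library:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR each byte with the repeating key.
    # The same call both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"secret"                                 # known to both parties
ciphertext = xor_cipher(b"hello network", shared_key)  # sender encrypts
plaintext = xor_cipher(ciphertext, shared_key)         # receiver decrypts
assert plaintext == b"hello network"
```

Anyone who obtains the shared key can decrypt all traffic, which is exactly the key-distribution problem the text describes.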
Public Key Encryption
In this encryption system, every user has their own secret key, and it is not
in the shared domain; the secret key is never revealed publicly. Along with
the secret key, every user also has a public key. The public key is made
public and is used by senders to encrypt the data. When users receive
encrypted data, they can easily decrypt it by using their own secret key.
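The public/secret key split can be illustrated with textbook RSA on deliberately tiny primes (real deployments use keys of 2048 bits or more, generated by a vetted cryptography library):

```python
# Textbook RSA on tiny primes; illustration only.
p, q = 61, 53
n = p * q                  # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # secret exponent (modular inverse of e)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert recovered == message
```

The public pair (e, n) can be freely published; without d, recovering the message requires factoring n, which is infeasible at real key sizes.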
147 Internetwork
A network of networks is called an internetwork, or simply the internet. It is
the largest network in existence on this planet. The Internet connects all
WANs and can have connections to LANs and home networks. The Internet
uses the TCP/IP protocol suite and IP as its addressing protocol. Today the
Internet is widely implemented using IPv4; because of the shortage of
address space, it is gradually migrating from IPv4 to IPv6.
The Internet enables its users to share and access enormous amounts of
information worldwide. It offers WWW, FTP, e-mail, audio and video
streaming, and other services. At a high level, the Internet works on the
client/server model.
The Internet uses a very high-speed fiber-optic backbone. To interconnect
the continents, fiber is laid under the sea in what are known as submarine
communication cables.
The Internet is widely deployed as World Wide Web services using
HTML-linked pages accessed by client software known as Web browsers.
When a user requests, through a Web browser, a page located on a Web
server anywhere in the world, the server responds with the proper HTML
page, and the communication delay is very low.
The Internet serves many purposes and is involved in many aspects of life.
Some of them are:
Web sites
E-mail
Instant Messaging
Blogging
Social Media
Marketing

Networking
Resource Sharing
Audio and Video Streaming
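The IPv4-to-IPv6 migration mentioned above is ultimately about address size: IPv4 addresses are 32 bits, while IPv6 addresses are 128 bits. A quick check with Python's standard ipaddress module (the addresses shown are documentation examples):

```python
import ipaddress

v4 = ipaddress.ip_address("203.0.113.9")   # 32-bit IPv4 address
v6 = ipaddress.ip_address("2001:db8::9")   # 128-bit IPv6 address

assert v4.version == 4 and v6.version == 6
assert v4.max_prefixlen == 32 and v6.max_prefixlen == 128
# IPv6 offers 2**128 addresses versus IPv4's roughly 4.3 billion.
print(v4, v6)
```

The jump from 2**32 to 2**128 possible addresses is what relieves the exhaustion driving the migration.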
148 Multilayer switching
It is an evolution of LAN and internetworking technologies. Multilayer
devices combine aspects of OSI layer 2 (the data link layer) and OSI layer 3
(the network layer) into hybrid switches that can route packets at wire speed.
A basic switch is a multiport bridge. These switches were developed to allow
microsegmentation of LANs into large broadcast domains with small
collision domains. See "Switching and Switched Networks" for an overview
of the evolution of switches.
As the technology developed, hardware-based routing functions were also
added, then higher-level functions such as the ability to look deep inside
packets for information that could aid in the packet-forwarding process.
Thus, multilayer switches are devices that examine layer 2 through layer 7
information.
149 Distributed applications
Distributed applications allow users to interact with other systems on a
network. A distributed application is traditionally divided into two parts:
the front-end client and the back-end server. This is the client/server model,
a model that balances processing loads between client and server. See
"Client/Server Computing." Distributed means that clients may interact with
many different servers all over the network.
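The front-end/back-end split can be demonstrated with a minimal TCP client and server on localhost, using Python's standard socket and threading modules (a sketch, not a production server):

```python
import socket
import threading

def serve_once(server_sock):
    # Back-end server: accept one client and answer its request.
    conn, _ = server_sock.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"reply to " + request)

# Back end bound to localhost; the OS picks a free port.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]
threading.Thread(target=serve_once, args=(server_sock,), daemon=True).start()

# Front-end client: connect, send a request, read the reply.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"request")
reply = client.recv(1024)
client.close()
server_sock.close()
assert reply == b"reply to request"
```

In a real distributed application, the same client-side code could just as easily connect to many different servers across the network.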
Application/groupware suites like Microsoft Exchange, Novell GroupWise,
Lotus Notes/Domino, and Netscape SuiteSpot are designed for distributed
networks. Management applications that use SNMP can collect information
from remote distributed systems and report it back to management systems.
150 An MSP
It is a service provider that offers system and network management tools and
expertise. An MSP typically has its own data center that runs advanced
network management software such as HP OpenView or Tivoli. It uses these
tools to actively monitor and provide reports on aspects of its customers'
networks, including communication links, network bandwidth, servers, and
so on. The MSP may host the customer's Web servers and application servers

at its own site. The services provided by MSPs have been called "Web
telemetry" services. The MSP Association defines MSPs as follows:
Management Service Providers deliver information technology (IT)
infrastructure management services to multiple customers over a network on
a subscription basis. Like Application Service Providers (ASPs),
Management Service Providers deliver services via networks that are billed
to their clients on a recurring fee basis. Unlike ASPs, which deliver business
applications to end users, MSPs deliver system management services to IT
departments and other customers who manage their own technology assets.
151 A data center or NOC
(network operations center) is a place to consolidate application servers,
Web servers, communications equipment, security systems, system
administrators, support personnel, and anything or anybody else that
provides data services. A data center benefits from centralized management,
support, backup control, power management, security, and so on. It may be
housed in a single room or fill an entire building. It may be within a carrier's
PoP (point of presence). Special equipment is usually installed to protect
against power outages, natural disasters, and security breaches.
Enterprise and public data centers
Internet data center role in outsourcing
Facilities management, managed services, and colocation services
High availability, reliability, and scalability advantages
Data center features, including power systems, temperature controls, fire
detection and suppression systems, physical security, cages, racks, and
vaults.
Interconnection systems, including new technologies such as InfiniBand
152 Mobile Computing
Most computer users are connected to networks, and have access to data and
devices on those networks. They connect to the Internet and communicate
with other users via electronic mail. They work in collaborative groups in
which they share schedules and other information. However, when users hit
the road, they can lose contact with the people and resources they are
accustomed to working with. Fortunately, there is plenty of support for
mobile users:

Operating systems like Microsoft Windows support mobile users with a
host of features, including dial-up networking, docking station support, data
synchronization with desktop systems and network servers, deferred printing
and faxing, and wireless support such as infrared connections. See
"Microsoft Windows."
153 Security
"Security" is an all-encompassing term that describes all the concepts,
techniques, and technologies to protect information from unauthorized
access. There are several requirements for information security:
Confidentiality
Hiding data, usually with encryption, to prevent
unauthorized viewing and access.
Authenticity
The ability to know that the person or system you are
communicating with is who or what you think it is.
Access control
Once a person or system has been authenticated, their
ability to access data and use systems is determined by access controls.
Data Integrity
Providing assurance that an information system or data is
genuine.
Availability
Making sure that information is available to users in a secure
way.
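Of these requirements, data integrity is the easiest to sketch in code: a cryptographic hash acts as a fingerprint of the data, so any change is detectable by comparing digests. A minimal example with Python's standard hashlib:

```python
import hashlib

def digest(data: bytes) -> str:
    # A cryptographic hash is a fingerprint of the data: any change
    # to the data changes the digest, exposing tampering.
    return hashlib.sha256(data).hexdigest()

original = b"transfer $100 to Alice"
stored_digest = digest(original)

tampered = b"transfer $900 to Alice"
assert digest(original) == stored_digest   # intact data verifies
assert digest(tampered) != stored_digest   # modification is detected
```

Real systems combine such hashes with keys (HMACs or digital signatures) so that an attacker cannot simply recompute the digest after tampering.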

154 Thin clients
Thin clients inhabit the feature space somewhere between dumb terminals
and smart, full-featured desktop computers. They have features of
X-terminals and diskless workstations. Devices that fall into the thin-client
category include desktop (and kitchen top) Internet terminals, handheld
devices, wireless PDAs, and even smart phones. A common characteristic is
that thin clients are in sealed cases with no expansion slots, no hard drives,
and limited upgrade capabilities. This helps cut costs.
Categories of thin clients
Characteristics of thin clients
Web/e-mail appliances
Network terminals
WBT (Windows-Based Terminal) and Microsoft's Terminal Server products
ICA (Independent Computing Architecture) thin-client communications protocol
Network computers or NCs
155 Network Appliances
The general description of a network appliance is a single-purpose device in
which all nonessential functions are stripped away. What remains is an
inexpensive device with a simplified embedded operating system and an
Ethernet network interface.
Perhaps the best example of a network appliance that has been around for
years, but never really called a network appliance, is the printer, specifically
a network-attached printer. It does one thing really well, attaches to the
network with ease, and is ready to use immediately. It also does not need to
be connected to a server, which is often overburdened with excess tasks.
156 Distributed computer networks
They consist of clients and servers connected in such a way that any system can
potentially communicate with any other system. The platform for distributed
systems has been the enterprise network linking workgroups, departments,
branches, and divisions of an organization. Data is not located in one server,
but in many servers. These servers might be at geographically diverse areas,
connected by WAN links.
A distributed environment has interesting characteristics. It takes advantage
of client/server computing and multitiered architectures. It distributes
processing to inexpensive systems and relieves servers of many tasks. Data
may be accessed from a diversity of sites over wired or wireless networks.
Data may be replicated to other systems to provide fault tolerance and place
data close to users. Distributing data provides protection from local
disasters.
157 NAS
It is a category of storage devices that attaches directly to a network,
allowing clients to access the storage as if it were directly attached to their
system. The technique bypasses traditional server attached storage. Storage
becomes accessible to users directly across the network and much of the
overhead imposed by server and operating system intervention is removed to
improve performance.
NAS description and NAS as "filers"
Network appliances
Use in data centers and at the department level
Comparison to SANs
CIFS (Common Internet File System) and NFS (Network File System)
NAS importance in the bandwidth explosion and peer-to-peer trend
Network Appliance's WAFL (Write Anywhere File Layout)
Comparison to IP storage solutions such as iSCSI and VI
Architecture/DAFS (Direct Access File System) solutions
Block-level storage and access protocols

158 The Virtual Interface (VI)
The VI Architecture specification defines a high-bandwidth, low-latency
networking architecture that was designed for creating clusters of servers
and SANs (Storage Area Networks). Clustering is a technique of linking
systems together in a way that makes them appear as a single system. In the
past, clustering was achieved through proprietary solutions. The VI
Architecture is an attempt to standardize the interface for high-performance
clustering. The interface specifies logical and physical components, as well
as connection setup and data transfer operations.
DAFS (Direct Access File System) uses VI Architecture as its underlying
transport mechanism. DAFS is a file transfer protocol that provides a
consistent view of files to a heterogeneous environment of servers that may
be running different operating systems.
159 DAFS
It is a new fast-and-simple way of accessing data from file servers. It is
designed for use in SANs (Storage Area Networks) environments and NAS
(Network Attached Storage) devices. DAFS is tied to the Virtual Interface
architecture (VI architecture), which provides fast data transport in a local
environment, such as data centers. VI architecture was originally designed
for SANs. DAFS provides a performance gain that makes NAS (Network
Attached Storage) devices suitable for high-volume transactions.

DAFS was developed by Network Appliance, Intel, Oracle and Seagate
Technology. The DAFS Collaborative Web site (listed on the related entries
page) notes that application servers can benefit from using DAFS. An
example is a set of diskless Web servers connected to one or more file
servers that store Web information. Another example is a cluster of diskless
servers running a highly available shared database that uses a file server to
store database information. DAFS is primarily designed for clustered,
shared-file network environments, in which a limited number of server-class
clients connect to a set of file servers via a dedicated high-speed network.
160 IP storage
It refers to technology for transporting block-mode data across IP/Ethernet
networks. Block-mode is the raw mechanism used by SCSI and other disk
drivers to directly access data on disks. Most applications go through
higher-level file access protocols such as NFS, CIFS, and FTP to access disk
information. However, file-mode access is slow and requires many
operations compared to block-mode access. Block mode is usually
performed between a computer and its directly attached storage devices and
is the preferred access method for database applications.
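The difference between file-mode and block-mode access can be hinted at with Python's os module: block-mode addresses fixed-size blocks directly by byte offset, much as a SCSI driver does. A rough sketch on a temporary file standing in for a disk (os.pread is available on Unix-like systems):

```python
import os
import tempfile

BLOCK_SIZE = 512  # a typical disk sector size

# Create a scratch "disk" of four 512-byte blocks.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(bytes(range(256)) * 8)   # 2048 bytes total
    path = f.name

# Block-mode style: read block #2 directly by byte offset,
# with no file-position bookkeeping in between.
fd = os.open(path, os.O_RDONLY)
block2 = os.pread(fd, BLOCK_SIZE, 2 * BLOCK_SIZE)
os.close(fd)
os.unlink(path)

assert len(block2) == BLOCK_SIZE
```

Skipping the file abstraction and addressing raw blocks is what lets databases and IP storage protocols avoid per-operation filesystem overhead.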
The goal of IP storage is to run block-mode data calls over networks. Doing
so can reduce complex file-mode data access, improve disk performance
over existing networks, and reduce the need for secondary storage networks
such as SANs (storage area networks). Many NAS (network attached
storage) devices also work at the file level using protocols such as NFS and
CIFS. The performance of these devices improves with support for
block-mode data calls over existing networks.
161 Voice and data networking
It is about the trends and technologies for merging voice and data
communication on a single network. It is part of a broader "multiservice
networking" concept that combines all types of communication onto a single
network. Practically speaking, the single network is a packet-based IP
intranet, the Internet, or special service provider networks that provide
Internet-like services. Some other methods for combining voice and data are
mentioned here, but the general trend is toward IP-based voice and data
networking.
Goals and benefits of integrating voice and data networks
Voice and data models
PBX (Private Branch Exchange) and Integrated Access Devices
Voice over ATM (VoATM)
Voice over Frame Relay (VoFR)
Voice over DSL (Digital Subscriber Line)
Voice over Cable

162 A mainframe
It is a central processing computer system that gets its name from the large
frame or rack that holds the electronics. Mainframes are based on the central
processing model of computing, in which all processing and data storage is
done at a central system and users connect to that system via "dumb
terminals." The most common mainframes were made by IBM, although
major systems were also made by Sperry Rand, Burroughs, NCR,
Honeywell, and others.
163 SNA
SNA is an IBM architecture that defines a suite of communication protocols
for IBM systems and networks. SNA is an architecture like the OSI model.
There is a protocol stack and various architectural definitions about how
communication takes place at the various levels of the protocol stack.
SNA was originally designed for IBM mainframe systems. One could refer
to this original SNA as "legacy SNA." The "new SNA" is APPN (Advanced
Peer-to-Peer Networking). Legacy SNA is based on the older concept of
centralized processing, where the mainframe was the central computing
node. Dumb terminals attached to the central processor and did two things:
accepted keyboard input from users, and displayed calculated results or
query replies from the mainframe.
164 Gigabit Ethernet
It is a 1-gigabit/sec (1,000-Mbit/sec) extension of the IEEE 802.3 Ethernet
networking standard. Its primary niches are corporate LANs, campus
networks, and service provider networks where it can be used to tie together
existing 10-Mbit/sec and 100-Mbit/sec Ethernet networks. Gigabit Ethernet
can replace 100-Mbit/sec FDDI (Fiber Distributed Data Interface) and Fast
Ethernet backbones, and it competes with ATM (Asynchronous Transfer
Mode) as a core networking technology. Many ISPs use Gigabit Ethernet in
their data centers.
Gigabit Ethernet provides an ideal upgrade path for existing Ethernet-based
networks. It can be installed as a backbone network while retaining the
existing investment in Ethernet hubs, switches, and wiring plants. In
addition, management tools can be retained, although network analyzers will
require updates to handle the higher speed.
165 Network access services
It provide businesses with communication links to carrier and service
provider wide area networks. A telephone is connected via twisted-pair
copper wire (the local loop) to the public telephone network where switches
connect calls. Internet users can connect to the Internet over the same local
loop or use a variety of other services, including cable TV connections,
wireless connections, and fiber-optic connections. This topic surveys
traditional and emerging network access methods.
The traditional access method is the public-switched telephone network. The
earliest IBM mainframe systems communicated over standard phone lines
using protocols such as BSC (Binary Synchronous Communication) and
later SDLC (Synchronous Data Link Control). The early Internet (then
called ARPANET) was connected with a mesh of AT&T long-distance
telephone lines that provided 50-Kbit/sec throughput.
166 Network Core Technologies
A core network is a central network into which other networks feed. It must
have the bandwidth to support the aggregate bandwidth of all the networks
feeding into it. Traditionally, the core network has been the circuit-oriented
telephone system. More recently, alternative optical networks bypass the
traditional core and implement packet-oriented technologies. Figure N-6 (see
book) provides a timeline of the development of core networks, starting with
the early telephone system.
167 Fast Ethernet
It is traditional CSMA/CD (carrier sense multiple access/collision
detection) access control at 100 Mbits/sec over twisted-pair wire. The
original Ethernet data rate was 10 Mbits/sec.
During the early development of Fast Ethernet, two different groups worked
out standards proposals, and both were finally approved, but under different

IEEE committees. One standard became IEEE 802.3u Fast Ethernet and the
other became 100VG-AnyLAN, which is now governed by the IEEE 802.12
committee. The latter uses the "demand priority" medium access method
instead of CSMA/CD.
168 The TIA/EIA
Its structured cabling standards define how to design, build, and manage a
cabling system that is structured, meaning that the system is designed in
blocks that have very specific performance characteristics. The blocks are
integrated in a hierarchical manner to create a unified communication
system. For example, workgroup LANs represent a block with lower-performance
requirements than the backbone network block, which requires
high-performance fiber-optic cable in most cases. The standard defines the
use of fiber-optic cable (single and multimode), STP (shielded twisted pair)
cable, and UTP (unshielded twisted pair) cable.
The initial TIA/EIA 568 document was followed by several updates and
addenda, as outlined below. A major standard update released in 2000
incorporates the previous changes.
169
