Netcom Document 1
1. Communication Protocol
Communication protocols define the manner in which peer processes
communicate between computer hardware devices. The protocols give the
rules for such things as the passing of messages, the exact formats of the
messages and how to handle error conditions.
If two computers are communicating and they both follow the protocol(s)
properly, the exchange is successful, regardless of what types of machines
they are and what operating systems are running on the machines. As long as
the machines have software that can manage the protocol, communication is
possible.
Essentially, therefore, a computer protocol is a set of rules that coordinates
the exchange of information.
2. Packet
The term packet is used often in data communications, sometimes
incorrectly.
To transfer data effectively, it is usually better to transfer uniform chunks of data than to send characters singly or in groups of widely varying size.
Usually these chunks of data have some information ahead of them ( called
the header ) and sometimes an indicator at the end (called the trailer). These
chunks of data are loosely called packets. In some data communications
systems, "packets" refer to the units of data passed between two specific
layers in a protocol hierarchy e.g. the Data Link Layer and the Network
Layer of the OSI 7 layer model.
The amount of data in a packet and the composition of the header or trailer
may vary depending on the communications protocol as well as some
system parameters, but the concept of a packet always refers to the entire
chunk of data (including header and trailer).
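As an illustrative sketch of the header/payload/trailer structure described above (the field layout here is invented for the example, not taken from any real protocol), a packet can be built and parsed like this:

```python
import struct
import zlib

def build_packet(payload: bytes) -> bytes:
    """Toy packet: 3-byte header (version, payload length) + payload + 4-byte CRC trailer."""
    header = struct.pack("!BH", 1, len(payload))            # version=1, length of payload
    trailer = struct.pack("!I", zlib.crc32(header + payload))  # checksum over header+payload
    return header + payload + trailer

def parse_packet(packet: bytes) -> bytes:
    """Strip header and trailer, verify the checksum, and return the payload."""
    version, length = struct.unpack("!BH", packet[:3])
    payload = packet[3:3 + length]
    (crc,) = struct.unpack("!I", packet[-4:])
    if crc != zlib.crc32(packet[:-4]):
        raise ValueError("corrupt packet")
    return payload

roundtrip = parse_packet(build_packet(b"hello"))
```

The receiver recomputes the checksum over everything except the trailer, which is how a trailer typically lets the far end detect corruption.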
3. A Host
A network host is a computer or other device connected to a computer
network. A network host may offer information resources, services, and
applications to users or other nodes on the network. A network host is a
network node that is assigned a network layer host address.
Computers participating in networks that use the Internet Protocol Suite may
also be called IP hosts. Specifically, computers participating in the Internet
are called Internet hosts, sometimes Internet nodes. Internet hosts and other
IP hosts have one or more IP addresses assigned to their network interfaces.
8. Protocol layering
In modern protocol design, protocols are "layered". Layering is a design
principle which divides the protocol design into a number of smaller parts,
each of which accomplishes a particular sub-task, and interacts with the
other parts of the protocol only in a small number of well-defined ways.
For example, one layer might describe how to encode text (with ASCII, say), while another describes how to transfer messages (with the Internet's Simple Mail Transfer Protocol, for example), while another may detect and retry errors (with the Internet's Transmission Control Protocol), another handles addressing (say with IP, the Internet Protocol), another handles the encapsulation of that data into a stream of bits (for example, with the Point-to-Point Protocol), and another handles the electrical encoding of the bits (with a V.42 modem, for example).
Layering allows the parts of a protocol to be designed and tested without a
combinatorial explosion of cases, keeping each design relatively simple.
Layering also permits familiar protocols to be adapted to unusual
circumstances. For example, the mail protocol above can be adapted to send messages to aircraft: just change the V.42 modem protocol to the INMARSAT LAPD data protocol used by the international maritime radio satellites.
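The layering idea above can be sketched as nested encapsulation, where each layer wraps the data handed down by the layer above with its own header. The layer names and header formats below are toy stand-ins for ASCII, TCP, IP, and PPP, not real wire formats:

```python
# Each layer wraps the data from the layer above with its own header/framing.
def app_layer(text: str) -> bytes:          # encode text (stand-in for ASCII)
    return text.encode("ascii")

def transport_layer(data: bytes) -> bytes:  # add a sequence number (stand-in for TCP)
    return b"SEQ0|" + data

def network_layer(segment: bytes) -> bytes: # add addresses (stand-in for IP)
    return b"SRC=A;DST=B|" + segment

def link_layer(packet: bytes) -> bytes:     # frame the packet for the wire (stand-in for PPP)
    return b"FLAG|" + packet + b"|FLAG"

# Sending means passing data down the stack; each layer only touches its own part.
frame = link_layer(network_layer(transport_layer(app_layer("hi"))))
```

Because each layer only interacts with the ones directly above and below, any single layer can be swapped (the aircraft example above swaps only the bottom layers) without changing the rest.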
9. Internet
If you wish to expose information to everyone in the world, then you would
build an Internet-type application. An Internet-type application uses Internet
protocols such as HTTP, FTP, or SMTP and is available to persons
anywhere on the Internet. We use the Internet and web applications as ways
to extend who the application can reach. For example, I no longer need to
go to the bank to transfer funds. Because the bank has built a web site on
the Internet, I can do that from the comfort of my own home.
10. Circuit Switching
The concept of circuit switching works very much like common telephone
networks today. To establish a data connection from point A to point Z, a
person must work out a direct path over a number of connection routes to the
destination. Once a route has been determined, the person needs to set aside
resources on that line to establish his connection, after which he may start
transmitting data. While resources have been allocated for that connection,
no one else may use that line until the first user has disconnected his host.
This raises the question of how multiple users can share a circuit-switched connection; two such methods are outlined below.
Nodal Processing
Queuing
Transmission Delay
Propagation Delay
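Two of the delay components listed above have simple closed forms: transmission delay is packet size divided by link rate (L / R), and propagation delay is link length divided by the signal's propagation speed (d / s). A quick sketch with example numbers chosen for illustration:

```python
def transmission_delay(packet_bits: float, link_rate_bps: float) -> float:
    """Time to push all of the packet's bits onto the link: L / R."""
    return packet_bits / link_rate_bps

def propagation_delay(distance_m: float, speed_mps: float = 2e8) -> float:
    """Time for one bit to travel the link: d / s (roughly 2e8 m/s in copper or fiber)."""
    return distance_m / speed_mps

# A 12,000-bit packet on a 1 Mbit/s link over 1,000 km:
total = transmission_delay(12_000, 1e6) + propagation_delay(1_000_000)
# 0.012 s to serialize + 0.005 s to propagate
```

Nodal processing and queuing delay, by contrast, depend on router load and have no such fixed formula.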
15. Bridges
A network bridge connects and filters traffic between two network segments
at the data link layer (layer 2) of the OSI model to form a single network.
This breaks the network's collision domain but maintains a unified broadcast
domain. Network segmentation breaks down a large, congested network into
an aggregation of smaller, more efficient networks.
Bridges come in three basic types:
Local bridges: Directly connect LANs
Remote bridges: Can be used to create a wide area network (WAN) link
between LANs. Remote bridges, where the connecting link is slower than
the end networks, largely have been replaced with routers.
Wireless bridges: Can be used to join LANs or connect remote devices to
LANs.
16. Switches
A network switch is a device that forwards and filters OSI layer 2 datagrams (frames) between ports based on the destination MAC addresses in the frames.[9] A switch is
distinct from a hub in that it only forwards the frames to the physical ports
involved in the communication rather than all ports connected. It can be
thought of as a multi-port bridge.[10] It learns to associate physical ports to
MAC addresses by examining the source addresses of received frames. If an
unknown destination is targeted, the switch broadcasts to all ports but the
source. Switches normally have numerous ports, facilitating a star topology
for devices, and cascading additional switches.
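The learning behaviour described above can be sketched as a small table mapping source MAC address to ingress port (a toy model for illustration, not a real switch implementation):

```python
class LearningSwitch:
    """Toy MAC-learning switch: learn source MAC -> ingress port, flood unknowns."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # learn from the frame's source address
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}     # forward out the learned port only
        return self.ports - {in_port}            # unknown destination: flood all but source

sw = LearningSwitch(ports=[1, 2, 3, 4])
first = sw.receive(1, "aa:aa", "bb:bb")   # bb:bb unknown, so the frame is flooded
sw.receive(2, "bb:bb", "aa:aa")           # the switch learns bb:bb lives on port 2
second = sw.receive(1, "aa:aa", "bb:bb")  # now forwarded to port 2 only
```

This is exactly what distinguishes a switch from a hub: after learning, traffic reaches only the ports involved in the conversation.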
Multi-layer switches are capable of routing based on layer 3 addressing or
additional logical levels. The term switch is often used loosely to include
devices such as routers and bridges, as well as devices that may distribute
traffic based on load or based on application content (e.g., a Web URL
identifier).
22 Protocol data unit
In a layered system, a protocol data unit (PDU) is a unit of data which is specified in a protocol of a given layer and which consists of protocol-control information and possibly user data of that layer. For example: Bridge PDU or iSCSI PDU.
23 HDLC
High-Level Data Link Control (HDLC) is a bit-oriented code-transparent
synchronous data link layer protocol developed by the International
Organization for Standardization (ISO).
The current standard for HDLC is ISO 13239, which replaces the earlier HDLC-related ISO standards.
HDLC provides both connection-oriented and connectionless service.
HDLC can be used for point to multipoint connections, but is now used
almost exclusively to connect one device to another, using what is known as
Asynchronous Balanced Mode (ABM). The original master-slave modes
Normal Response Mode (NRM) and Asynchronous Response Mode (ARM)
are rarely used.
24 MAN
A metropolitan area network (MAN) is a computer network larger than a
local area network, covering an area of a few city blocks to the area of an
entire city, possibly also including the surrounding areas.
A MAN is optimized for a larger geographical area than a LAN, ranging
from several blocks of buildings to entire cities. MANs can also depend on
communications channels of moderate-to-high data rates. A MAN might be
owned and operated by a single organization, but it usually will be used by
many individuals and organizations. MANs might also be owned and
operated as public utilities. They will often provide means for internetworking of local networks.
25 WiFi
Wi-Fi (or WiFi) is a local area wireless technology that allows an electronic
device to participate in computer networking using 2.4 GHz UHF and 5 GHz
SHF ISM radio bands.
Wi-Fi can be less secure than wired connections, such as Ethernet, because an intruder does not need a physical connection. Web pages that use TLS are secure, but unencrypted Internet access can easily be detected by intruders.
32
Storage area network
A storage area network (SAN) provides access to consolidated, block-level data storage, typically over a dedicated network of storage devices that are generally not accessible through the local area network by other devices. The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments.
33
Campus area network
A campus area network (CAN) is made up of an interconnection of LANs
within a limited geographical area. The networking equipment (switches,
routers) and transmission media (optical fiber, copper plant, Cat5 cabling,
etc.) are almost entirely owned by the campus tenant / owner (an enterprise,
university, government, etc.).
For example, a university campus network is likely to link a variety of
campus buildings to connect academic colleges or departments, the library,
and student residence halls.
34
Backbone network
A backbone network is part of a computer network infrastructure that
provides a path for the exchange of information between different LANs or
sub-networks. A backbone can tie together diverse networks within the same
building, across different buildings, or over a wide area.
For example, a large company might implement a backbone network to
connect departments that are located around the world. The equipment that
ties together the departmental networks constitutes the network backbone.
When designing a network backbone, network performance and network
congestion are critical factors to take into account. Normally, the backbone
network's capacity is greater than that of the individual networks connected
to it.
Another example of a backbone network is the Internet backbone, which is
the set of wide area networks (WANs) and core routers that tie together all
networks connected to the Internet.
35
Wide area network
A wide area network (WAN) is a computer network that covers a large geographical area, such as a city, a country, or even intercontinental distances, often using leased telecommunication lines.
38 Extranet
An extranet is a network under the administrative control of a single organization that supports limited connectivity to specific external networks, such as those of business partners or customers; these other entities are not necessarily trusted from a security standpoint. Network connection to an extranet is often, but not always, implemented via WAN technology.
39 Darknet
A Darknet is an overlay network, typically running on the internet, that is
only accessible through specialized software. A darknet is an anonymizing
network where connections are made only between trusted peers
sometimes called "friends" (F2F)[21] using non-standard protocols and
ports.
Darknets are distinct from other distributed peer-to-peer networks as sharing
is anonymous (that is, IP addresses are not publicly shared), and therefore
users can communicate with little fear of governmental or corporate
interference
40 Network service
Network services are applications hosted by servers on a computer network,
to provide some functionality for members or users of the network, or to
help the network itself to operate.
The World Wide Web, E-mail, printing and network file sharing are
examples of well-known network services. Network services such as DNS
(Domain Name System) give names for IP and MAC addresses (people
remember names like nm.lan better than numbers like 210.121.67.18),
and DHCP to ensure that the equipment on the network has a valid IP
address.
Services are usually based on a service protocol that defines the format and
sequencing of messages between clients and servers of that network service.
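As a small illustration of the DNS service mentioned above, Python's standard library can ask the system's configured resolver to translate a name into an IP address:

```python
import socket

# Resolve a host name to an IPv4 address via the system's configured resolver.
# "localhost" is used here because it resolves on any machine without network access.
addr = socket.gethostbyname("localhost")
print(addr)  # a dotted-quad IPv4 address such as 127.0.0.1
```

Applications normally never see this step explicitly; libraries perform the lookup whenever they are handed a name rather than an address.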
41 Wireless networks
A wireless network is any type of computer network that uses wireless data
connections for connecting network nodes.
Wireless networking is a method by which homes, telecommunications networks and enterprise (business) installations avoid the costly process of introducing cables into a building, or as a connection between various equipment locations.
50. WiMAX
It refers to interoperable implementations of the IEEE 802.16 family of wireless-network standards ratified by the WiMAX Forum. (Similarly, Wi-Fi refers to interoperable implementations of the IEEE 802.11 wireless LAN standards certified by the Wi-Fi Alliance.) WiMAX Forum certification
allows vendors to sell fixed or mobile products as WiMAX certified, thus
ensuring a level of interoperability with other certified products, as long as
they fit the same profile.
The original IEEE 802.16 standard (now called "Fixed WiMAX") was
published in 2001. WiMAX adopted some of its technology from WiBro, a
service marketed in Korea.[3]
Mobile WiMAX (originally based on 802.16e-2005) is the revision that was
deployed in many countries, and is the basis for future revisions such as
802.16m-2011.
WiMAX is sometimes referred to as "Wi-Fi on steroids"[4] and can be used
for a number of applications including broadband connections, cellular
backhaul, hotspots, etc. It is similar to Wi-Fi, but it can enable usage at much
greater distances.
51. General packet radio service (GPRS)
It is a packet oriented mobile data service on the 2G and 3G cellular
communication system's global system for mobile communications (GSM).
GPRS was originally standardized by European Telecommunications
Standards Institute (ETSI) in response to the earlier CDPD and i-mode
packet-switched cellular technologies. It is now maintained by the 3rd
Generation Partnership Project (3GPP).[1][2]
GPRS usage is typically charged based on volume of data transferred,
contrasting with circuit switched data, which is usually billed per minute of
connection time. Usage above the bundle cap is either charged per megabyte
or disallowed.
GPRS is a best-effort service, implying variable throughput and latency that
depend on the number of other users sharing the service concurrently, as
opposed to circuit switching, where a certain quality of service (QoS) is
guaranteed during the connection. In 2G systems, GPRS provides data rates of 56–114 kbit/s.
52. Internet Protocol suite
The Internet protocol suite is the computer networking model and set of
communications protocols used on the Internet and similar computer
networks. It is commonly known as TCP/IP, because its most important
protocols, the Transmission Control Protocol (TCP) and the Internet
Protocol (IP), were the first networking protocols defined in this standard.
Often also called the Internet model, it was originally also known as the
DoD model, because the development of the networking model was funded
by DARPA, an agency of the United States Department of Defense.
TCP/IP provides end-to-end connectivity specifying how data should be
packetized, addressed, transmitted, routed and received at the destination.
This functionality is organized into four abstraction layers which are used to
sort all related protocols according to the scope of networking involved.[1]
[2] From lowest to highest, the layers are the link layer, containing
communication technologies for a single network segment (link); the
internet layer, connecting hosts across independent networks, thus
establishing internetworking; the transport layer handling host-to-host
communication; and the application layer, which provides process-to-process application data exchange.
53 Downlink
In the context of satellite communications, a downlink (DL) is the link from
a satellite to a ground station.
Pertaining to cellular networks, the radio downlink is the transmission
path from a cell site to the cell phone. Traffic and signalling flows within the
base station subsystem (BSS) and network switching subsystem (NSS) may
also be identified as uplink and downlink.
Pertaining to computer networks, a downlink is a connection from data
communications equipment towards data terminal equipment. This is also
known as a downstream connection.
54 Uplink
Pertaining to satellite communications, an uplink (UL or U/L) is the portion
of a communications link used for the transmission of signals from an Earth
terminal to a satellite or to an airborne platform. An uplink is the inverse of a
downlink. An uplink or downlink is distinguished from reverse link or
forward link.
Pertaining to GSM and cellular networks, the radio uplink is the
transmission path from the mobile station (cell phone) to a base station (cell
site). Traffic and signalling flows within the BSS and NSS may also be identified as uplink and downlink.
59 File Transfer Protocol
The File Transfer Protocol (FTP) is a standard network protocol used to transfer computer files between a client and a server on a computer network. For secure transmission, FTP is often secured with SSL/TLS (FTPS). SSH File Transfer Protocol (SFTP) is sometimes also used instead, but is technologically different.
The first FTP client applications were command-line applications developed
before operating systems had graphical user interfaces, and are still shipped
with most Windows, Unix, and Linux operating systems. Many FTP clients
and automation utilities have since been developed for desktops, servers,
mobile devices, and hardware, and FTP has been incorporated into
productivity applications, such as Web page editors.
60 Simple Mail Transfer Protocol
(SMTP) is an Internet standard for electronic mail (e-mail) transmission.
First defined by RFC 821 in 1982, it was last updated in 2008 with the Extended SMTP additions in RFC 5321, which is the protocol in widespread use today.
SMTP by default uses TCP port 25. The protocol for mail submission is the
same, but uses port 587. SMTP connections secured by SSL, known as
SMTPS, default to port 465 (nonstandard, but sometimes used for legacy
reasons).
Although electronic mail servers and other mail transfer agents use SMTP to
send and receive mail messages, user-level client mail applications typically
use SMTP only for sending messages to a mail server for relaying. For
receiving messages, client applications usually use either POP3 or IMAP.
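A sketch of mail submission as described above, using Python's standard smtplib. The host name and addresses are placeholders, so the submit function is shown but not run here (it would need a reachable mail server):

```python
import smtplib
from email.message import EmailMessage

# Build a message; the addresses below are illustrative placeholders.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "hello"
msg.set_content("Sent via SMTP submission on port 587 after STARTTLS.")

def submit(message: EmailMessage, host: str = "mail.example.com") -> None:
    """Submit mail to a relaying server: port 587, upgraded to TLS with STARTTLS."""
    with smtplib.SMTP(host, 587) as server:
        server.starttls()                 # upgrade the connection before sending
        server.send_message(message)
```

Note the division of labour the section describes: the client only submits via SMTP; it would use POP3 or IMAP, not SMTP, to read Bob's reply.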
Although proprietary systems (such as Microsoft Exchange and Lotus
Notes/Domino) and webmail systems (such as Hotmail, Gmail and Yahoo!
Mail) use their own non-standard protocols to access mail box accounts on
their own mail servers, all use SMTP when sending or receiving email from
outside their own systems.
61 An application layer
It is an abstraction layer that specifies the shared protocols and interface
methods used by hosts in a communications network. The application layer
abstraction is used in both of the standard models of computer networking;
the Internet Protocol Suite (TCP/IP) and the Open Systems Interconnection
model (OSI model).
Although both models use the same term for their respective highest level
layer, the detailed definitions and purposes are different.
63 Internet Message Access Protocol
The Internet Message Access Protocol (IMAP) is an Internet standard protocol used by e-mail clients to retrieve messages from a mail server. An IMAP server typically listens on well-known port 143. IMAP over SSL (IMAPS) is assigned well-known port number 993.
IMAP supports both on-line and off-line modes of operation. E-mail clients
using IMAP generally leave messages on the server until the user explicitly
deletes them. This and other characteristics of IMAP operation allow
multiple clients to manage the same mailbox. Most e-mail clients support
IMAP in addition to Post Office Protocol (POP) to retrieve messages;
however, fewer e-mail services support IMAP.[1] IMAP offers access to the
mail storage. Clients may store local copies of the messages, but these are
considered to be a temporary cache.
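A sketch of the usual IMAP client pattern, leaving messages on the server, using Python's standard imaplib. The host and credentials are placeholders, so the function is defined but not executed here:

```python
import imaplib

def list_unseen(host: str, user: str, password: str) -> list[bytes]:
    """Return the IDs of unseen messages, leaving them on the server (typical IMAP usage)."""
    with imaplib.IMAP4_SSL(host, imaplib.IMAP4_SSL_PORT) as conn:  # IMAPS, port 993
        conn.login(user, password)
        conn.select("INBOX")                        # work against the server-side mailbox
        status, data = conn.search(None, "UNSEEN")  # query state held on the server
        return data[0].split() if status == "OK" else []
```

Because the mailbox state lives on the server, several clients can run this concurrently against the same account, which is the multi-client behaviour described above.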
64 The Lightweight Directory Access Protocol
LDAP is an open, vendor-neutral, industry standard application protocol for
accessing and maintaining distributed directory information services over an
Internet Protocol (IP) network. Directory services play an important role in
developing intranet and Internet applications by allowing the sharing of
information about users, systems, networks, services, and applications
throughout the network. As examples, directory services may provide any
organized set of records, often with a hierarchical structure, such as a
corporate email directory. Similarly, a telephone directory is a list of
subscribers with an address and a phone number.
LDAP is specified in a series of Internet Engineering Task Force (IETF)
Standard Track publications called Request for Comments (RFCs), using the
description language ASN.1. The latest specification is Version 3, published
as RFC 4511. For example, here is an LDAP search translated into plain English: "Search in the company email directory for all people located in Nashville whose name contains 'Jesse' and who have an email address. Please return their full name, email, title, and description."
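The plain-English query above can be written as a standard LDAP search filter. The attribute names cn, l, and mail are the conventional schema names; the base DN below is a placeholder:

```python
# Where to search (placeholder DN), what to match, and which attributes to return.
base_dn = "ou=people,dc=example,dc=com"
ldap_filter = "(&(l=Nashville)(cn=*Jesse*)(mail=*))"   # AND of: locality, name substring, has mail
attributes = ["cn", "mail", "title", "description"]
```

The prefix notation is part of the LDAP filter grammar: "&" ANDs the three conditions, and "*" is a wildcard, so "(mail=*)" means "has any email address".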
65 The Media Gateway Control Protocol
(MGCP) is an implementation of the Media Gateway Control Protocol
architecture for controlling media gateways on Internet Protocol (IP)
networks connected to the public switched telephone network (PSTN).[1]
The protocol architecture and programming interface is described in RFC
2805 and the current specific MGCP definition is RFC 3435 which overrides
RFC 2705. It is a successor to the Simple Gateway Control Protocol (SGCP)
which was developed by Bellcore and Cisco. In November 1998, the Simple
Gateway Control Protocol (SGCP) was combined with Level 3 Communications' Internet Protocol Device Control (IPDC) to form the Media Gateway Control Protocol (MGCP).
70 Real Time Streaming Protocol
The Real Time Streaming Protocol (RTSP) is a network control protocol designed to control streaming media servers in entertainment and communications systems. The transmission of streaming data itself is not a task of the RTSP protocol.
Most RTSP servers use the Real-time Transport Protocol (RTP) in
conjunction with Real-time Control Protocol (RTCP) for media stream
delivery, however some vendors implement proprietary transport protocols.
The RTSP server software from RealNetworks, for example, also used
RealNetworks' proprietary Real Data Transport (RDT).
71 Routing Information Protocol
The Routing Information Protocol (RIP) is one of the oldest distance-vector
routing protocols, which employs the hop count as a routing metric. RIP
prevents routing loops by implementing a limit on the number of hops
allowed in a path from the source to a destination. The maximum number of
hops allowed for RIP is 15. This hop limit, however, also limits the size of
networks that RIP can support. A hop count of 16 is considered an infinite
distance, in other words the route is considered unreachable. RIP implements
the split horizon, route poisoning and holddown mechanisms to prevent
incorrect routing information from being propagated.
RIP uses the User Datagram Protocol (UDP) as its transport protocol, and is assigned the reserved port number 520.
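The hop-count behaviour described above can be sketched as a distance-vector update: a route learned from a neighbor costs the neighbor's metric plus one hop, and anything reaching 16 is treated as unreachable. This is a toy model of the metric logic, not the RIP wire protocol:

```python
INFINITY = 16  # RIP treats a hop count of 16 as unreachable

def rip_update(table: dict, neighbor: str, neighbor_routes: dict) -> dict:
    """Bellman-Ford style update: cost via a neighbor is its advertised metric + 1 hop."""
    for dest, metric in neighbor_routes.items():
        candidate = min(metric + 1, INFINITY)
        current_hops, _ = table.get(dest, (INFINITY, None))
        if candidate < current_hops:            # only install strictly better routes
            table[dest] = (candidate, neighbor)
    return table

routes = {"net1": (1, "eth0")}
routes = rip_update(routes, "routerB", {"net2": 3, "net3": 15})
# net2 is installed at 4 hops via routerB; net3 would cost 15 + 1 = 16,
# i.e. infinity, so it is not installed at all.
```

The 15-hop ceiling is what bounds RIP network diameter, and it is also why advertising a dead route at metric 16 (route poisoning) effectively withdraws it.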
72. The Session Initiation Protocol
(SIP) is a communications protocol for signaling and controlling
multimedia communication sessions. The most common applications of SIP
are in Internet telephony for voice and video calls, as well as instant messaging, all over Internet Protocol (IP) networks.
The protocol defines the messages that are sent between endpoints, which
govern establishment, termination and other essential elements of a call. SIP
can be used for creating, modifying and terminating sessions consisting of
one or several media streams. SIP is an application layer protocol designed
to be independent of the underlying transport layer. It is a text-based
protocol, incorporating many elements of the Hypertext Transfer Protocol
(HTTP) and the Simple Mail Transfer Protocol (SMTP).
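To show the text-based, HTTP-like format just mentioned, here is a minimal illustrative SIP INVITE assembled as a string. The addresses, branch parameter, and Call-ID are placeholders:

```python
# A minimal SIP INVITE request: a request line, then header fields,
# then a blank line, mirroring the layout of an HTTP request.
invite = "\r\n".join([
    "INVITE sip:bob@example.com SIP/2.0",
    "Via: SIP/2.0/UDP alicepc.example.com;branch=z9hG4bK776asdhds",
    "From: Alice <sip:alice@example.com>;tag=1928301774",
    "To: Bob <sip:bob@example.com>",
    "Call-ID: a84b4c76e66710@alicepc.example.com",
    "CSeq: 1 INVITE",
    "Content-Length: 0",
    "", "",
])
```

Being plain text, such messages can be read, logged, and debugged with ordinary tools, one of the practical benefits SIP inherits from HTTP and SMTP.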
73 Simple Network Management Protocol
(SNMP) is an "Internet-standard protocol for managing devices on IP
networks". Devices that typically support SNMP include routers, switches,
servers, workstations, printers, modem racks and more. SNMP is used mostly in network management systems to monitor network-attached devices for conditions that warrant administrative attention.
76 XMPP
Extensible Messaging and Presence Protocol (XMPP) is a communications
protocol for message-oriented middleware based on XML (Extensible Markup Language). The protocol was originally named Jabber, and was developed by the Jabber open-source community in 1999 for near-real-time instant messaging (IM), presence information, and contact list maintenance.
Designed to be extensible, the protocol has also been used for publish-subscribe systems, signalling for VoIP, video, file transfer, gaming, Internet
of Things (IoT) applications such as the smart grid, and social networking
services.
Unlike most instant messaging protocols, XMPP is defined in an open
standard and uses an open systems approach of development and
application, by which anyone may implement an XMPP service and
interoperate with other organizations' implementations. Because XMPP is an
open protocol, implementations can be developed using any software
license; although many server, client, and library implementations are
distributed as free and open-source software, numerous freeware and
commercial software implementations also exist.
77 Wireless Communications Transfer Protocol
Wireless Communications Transfer Protocol (WCTP) is the method used to
send messages to wireless devices such as pagers on NPCS (Narrowband
PCS) networks. It uses HTTP as a transport layer over the World Wide Web.
Development of WCTP was initiated by the Messaging Standards
Committee and submitted to the Radio Paging Community. When the first
proposal was received, a sub-committee was established to improve the
protocol and issue it as a specification. The sub-committee was moved into
the PTC (Paging Technical Committee) which is a volunteer committee
composed of industry representatives. The Personal Communications Industry Association (PCIA) accepted the first full release and adopted the protocol as a PCIA standard. The current version is WCTP 1.3.
78 The Dynamic Host Configuration Protocol
(DHCP) is a standardized network protocol used on Internet Protocol (IP)
networks for dynamically distributing network configuration parameters,
such as IP addresses for interfaces and services. With DHCP, computers request IP addresses and networking parameters automatically from a DHCP server, reducing the need for a network administrator or a user to configure these settings manually.
80 Transport layer
In computer networking, a transport layer provides end-to-end or host-to-host communication services for applications within a layered architecture of network components and protocols. The transport layer provides services
such as connection-oriented data stream support, reliability, flow control,
and multiplexing.
Transport layer implementations are contained in both the TCP/IP model
(RFC 1122), which is the foundation of the Internet, and the Open Systems
Interconnection (OSI) model of general networking; however, the definitions
of details of the transport layer are different in these models. In the Open
Systems Interconnection model the transport layer is most often referred to
as Layer 4 or L4.
81 Transmission Control Protocol
The Transmission Control Protocol (TCP) is a core protocol of the Internet
Protocol Suite. It originated in the initial network implementation in which it
complemented the Internet Protocol (IP). Therefore, the entire suite is
commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets between applications running on hosts
communicating over an IP network. TCP is the protocol that major Internet
applications such as the World Wide Web, email, remote administration and
file transfer rely on. Applications that do not require reliable data stream
service may use the User Datagram Protocol (UDP), which provides a
connectionless datagram service that emphasizes reduced latency over
reliability.
The Transmission Control Protocol provides a communication service at an
intermediate level between an application program and the Internet Protocol.
It provides host-to-host connectivity at the Transport Layer of the Internet
model. An application does not need to know the particular mechanisms for
sending data via a link to another host, such as the required packet
fragmentation on the transmission medium. At the transport layer, the
protocol handles all handshaking and transmission details and presents an
abstraction of the network connection to the application.
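The stream abstraction described above can be seen in a minimal loopback echo using Python's standard socket module: the application just writes and reads bytes, while TCP performs the handshaking, ordering, and retransmission underneath:

```python
import socket
import threading

# Set up a listening TCP socket on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def echo_once():
    """Accept one connection and echo whatever bytes arrive."""
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))

threading.Thread(target=echo_once).start()

# The client sees only a connected byte stream: connect, write, read.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello over TCP")
    reply = client.recv(1024)
server.close()
```

Nothing in the application code mentions segments, acknowledgements, or retransmission; that is exactly the abstraction TCP presents to the application.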
82 Datagram Congestion Control Protocol
The Datagram Congestion Control Protocol (DCCP) is a message-oriented
transport layer protocol. DCCP implements reliable connection setup,
teardown, Explicit Congestion Notification (ECN), congestion control, and
feature negotiation. DCCP was published as RFC 4340, a proposed standard,
by the IETF in March, 2006. RFC 4336 provides an introduction. FreeBSD
had an implementation for version 5.1. Linux also had an implementation of
DCCP first released in Linux kernel version 2.6.14 (released October 28,
2005).
DCCP provides a way to gain access to congestion control mechanisms
without having to implement them at the application layer. It allows for
flow-based semantics like in Transmission Control Protocol (TCP), but does not provide TCP's reliable in-order delivery.
84 User Datagram Protocol
The User Datagram Protocol (UDP) is one of the core members of the Internet protocol suite. With UDP, computer applications can send messages, in this case referred to
as datagrams, to other hosts on an Internet Protocol (IP) network without
prior communications to set up special transmission channels or data paths.
UDP is suitable for purposes where error checking and correction is either
not necessary or is performed in the application, avoiding the overhead of
such processing at the network interface level. Time-sensitive applications
often use UDP because dropping packets is preferable to waiting for delayed
packets, which may not be an option in a real-time system.[1] If error
correction facilities are needed at the network interface level, an application
may use the Transmission Control Protocol (TCP) or Stream Control
Transmission Protocol (SCTP) which are designed for this purpose.
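A minimal loopback sketch of the connectionless model described above, using Python's standard socket module: each sendto() is an independent datagram, with no handshake or acknowledgement:

```python
import socket

# A UDP "receiver" is just a bound datagram socket; there is no listen/accept.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
addr = receiver.getsockname()

# The sender needs no connection setup: it simply fires a datagram at an address.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", addr)

datagram, source = receiver.recvfrom(1024)
sender.close()
receiver.close()
```

Over a real network the datagram could be lost, duplicated, or reordered and neither side would be told; any checking the application needs, it must do itself.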
85 Resource Reservation Protocol
The Resource Reservation Protocol (RSVP) is a Transport Layer protocol
designed to reserve resources across a network for an integrated services
Internet. RSVP operates over an IPv4 or IPv6 Internet Layer and provides
receiver-initiated setup of resource reservations for multicast or unicast data
flows with scaling and robustness. It does not transport application data but
is similar to a control protocol, like Internet Control Message Protocol
(ICMP) or Internet Group Management Protocol (IGMP). RSVP is described
in RFC 2205.
RSVP can be used by either hosts or routers to request or deliver specific
levels of quality of service (QoS) for application data streams or flows.
RSVP defines how applications place reservations and how they can
relinquish the reserved resources once the need for them has ended. RSVP
operation will generally result in resources being reserved in each node
along a path.
RSVP is not a routing protocol and was designed to interoperate with current
and future routing protocols.
86 Wireless Datagram Protocol
It defines the movement of information from receiver to the sender and
resembles the User Datagram Protocol in the Internet protocol suite.
The Wireless Datagram Protocol (WDP), a protocol in WAP architecture,
covers the Transport Layer Protocols in the Internet model. As a general
transport service, WDP offers to the upper layers an invisible interface
independent of the underlying network technology used. In consequence of
the interface common to transport protocols, the upper layer protocols of the
WAP architecture can operate independent of the underlying wireless
network. By letting only the transport layer deal with physical network-dependent issues, global interoperability can be achieved.
PARC Universal Packet (PUP) using IEEE 802 standards, FDDI, X.25,
Frame Relay and Asynchronous Transfer Mode (ATM). IPv4 over IEEE
802.3 and IEEE 802.11 is the most common case.
97 The Neighbor Discovery Protocol
(NDP) is a protocol in the Internet protocol suite used with Internet Protocol
Version 6 (IPv6). It operates in the Link Layer of the Internet model (RFC
1122) and is responsible for address autoconfiguration of nodes, discovery
of other nodes on the link, determining the link layer addresses of other
nodes, duplicate address detection, finding available routers and Domain
Name System (DNS) servers, address prefix discovery, and maintaining
reachability information about the paths to other active neighbor nodes (RFC
4861)
The protocol defines five different ICMPv6 packet types to perform
functions for IPv6 similar to the Address Resolution Protocol (ARP) and
Internet Control Message Protocol (ICMP) Router Discovery and Router
Redirect protocols for IPv4. However, it provides many improvements over
its IPv4 counterparts.
The Inverse Neighbor Discovery (IND) protocol extension (RFC 3122)
allows nodes to determine and advertise an IPv6 address corresponding to a
given link-layer address, similar to Reverse ARP for IPv4. The Secure
Neighbor Discovery Protocol (SEND) is a security extension of NDP that
uses Cryptographically Generated Addresses (CGA) and the Resource
Public Key Infrastructure (RPKI) to provide an alternate mechanism for
securing NDP with a cryptographic method that is independent of IPsec.
Neighbor Discovery Proxy (ND Proxy) (RFC 4389) provides a service
similar to IPv4 Proxy ARP and allows bridging multiple network segments
within a single subnet prefix when bridging cannot be done at the link layer.
98 Open Shortest Path First
Open Shortest Path First (OSPF) is a routing protocol for Internet Protocol
(IP) networks. It uses a link state routing algorithm and falls into the group
of interior routing protocols, operating within a single autonomous system
(AS). It is defined as OSPF Version 2 in RFC 2328 (1998) for IPv4. The
updates for IPv6 are specified as OSPF Version 3 in RFC 5340 (2008).
OSPF is perhaps the most widely used interior gateway protocol (IGP) in
large enterprise networks. IS-IS, another link-state dynamic routing protocol,
is more common in large service provider networks. The most widely used
exterior gateway protocol is the Border Gateway Protocol (BGP), the
principal routing protocol between autonomous systems on the Internet.
OSPF detects changes in the topology, such as link failures, and converges
on a new loop-free routing structure within seconds. It computes the shortest
path tree for each route using a method based on Dijkstra's algorithm, a
shortest path first algorithm.
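The shortest-path computation that OSPF's route calculation is based on can be sketched with Dijkstra's algorithm. The four-router topology and its interface costs below are hypothetical:

```python
import heapq

def shortest_path_tree(graph, root):
    """Dijkstra's shortest-path-first algorithm over link costs.
    `graph` maps node -> {neighbor: cost}."""
    dist = {root: 0}
    parent = {root: None}
    heap = [(0, root)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                parent[neighbor] = node
                heapq.heappush(heap, (nd, neighbor))
    return dist, parent

# Hypothetical four-router topology with interface costs
topology = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 5, "R4": 20},
    "R4": {"R2": 1, "R3": 20},
}
dist, parent = shortest_path_tree(topology, "R1")
```

From R1, the tree reaches R4 through R2 (cost 11) rather than directly through R3 (cost 25), which is exactly the loop-free structure OSPF converges on after a topology change.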
99 Tunneling protocol
In computer networks, a tunneling protocol allows a network user to access
or provide a network service that the underlying network does not support or
provide directly. One important use of a tunneling protocol is to allow a
foreign protocol to run over a network that does not support that particular
protocol; for example, running IPv6 over IPv4. Another important use is to
provide services that are impractical or unsafe to be offered using only the
underlying network services; for example, providing a corporate network
address to a remote user whose physical network address is not part of the
corporate network. Because tunneling involves repackaging the traffic data
into a different form, perhaps with encryption as standard, a third use is to
hide the nature of the traffic that is run through the tunnel.
The tunneling protocol works by using the data portion of a packet (the
payload) to carry the packets that actually provide the service. Tunneling
uses a layered protocol model such as those of the OSI or TCP/IP protocol
suite, but usually violates the layering when using the payload to carry a
service not normally provided by the network. Typically, the delivery
protocol operates at an equal or higher level in the layered model than the
payload protocol.
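The payload-carrying idea can be sketched with a 6in4-style tunnel (IPv6 carried as the payload of an IPv4 packet, IP protocol number 41). The packet structures here are simplified illustrations, not real wire formats:

```python
# Toy illustration of tunneling: the delivery protocol's data portion
# (the payload) carries the entire passenger packet.

def encapsulate(outer_header: dict, inner_packet: bytes) -> dict:
    # The whole foreign packet becomes the payload of the delivery packet.
    return {"header": outer_header, "payload": inner_packet}

def decapsulate(outer_packet: dict) -> bytes:
    # The far tunnel endpoint strips the delivery header and forwards
    # the original passenger packet unchanged.
    return outer_packet["payload"]

ipv6_packet = b"\x60" + b"rest-of-ipv6-packet"  # version nibble 6
tunneled = encapsulate(
    {"src": "192.0.2.1", "dst": "198.51.100.7",
     "protocol": 41},  # IP protocol 41 = IPv6 encapsulated in IPv4
    ipv6_packet)
assert decapsulate(tunneled) == ipv6_packet
```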
100 Point-to-Point Protocol
In computer networking, Point-to-Point Protocol (PPP) is a data link
protocol used to establish a direct connection between two nodes. It can
provide connection authentication, transmission encryption and
compression.
PPP is used over many types of physical networks including serial cable,
phone line, trunk line, cellular telephone, specialized radio links, and fiber
optic links such as SONET. PPP is also used over Internet access
connections. Internet service providers (ISPs) have used PPP for customer
dial-up access to the Internet, since IP packets cannot be transmitted over a
modem line on their own, without some data link protocol. Two derivatives
of PPP, Point-to-Point Protocol over Ethernet (PPPoE) and Point-to-Point
Protocol over ATM (PPPoA), are used most commonly by Internet Service
Providers (ISPs) to establish a Digital Subscriber Line (DSL) Internet service connection with customers.
105 Multiplexing
In telecommunications and computer networks, multiplexing (sometimes
contracted to muxing) is a method by which multiple analog message signals
or digital data streams are combined into one signal over a shared medium.
The aim is to share an expensive resource. For example, in
telecommunications, several telephone calls may be carried using one wire.
Multiplexing originated in telegraphy in the 1870s, and is now widely
applied in communications. In telephony, George Owen Squier is credited
with the development of telephone carrier multiplexing in 1910.
The multiplexed signal is transmitted over a communication channel, which
may be a physical transmission medium (e.g. a cable). The multiplexing
divides the capacity of the low-level communication channel into several
high-level logical channels, one for each message signal or data stream to be
transferred. A reverse process, known as demultiplexing, can extract the
original channels on the receiver side.
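The divide-and-recombine idea can be sketched as a byte-interleaved time-division multiplexer; the three input streams below are illustrative:

```python
def multiplex(streams, frame_count):
    """Byte-interleaved time-division multiplexing: each frame carries
    one time slot per input stream, in a fixed order."""
    signal = []
    for i in range(frame_count):
        for stream in streams:
            signal.append(stream[i])
    return signal

def demultiplex(signal, stream_count):
    """Reverse process: slot k of every frame belongs to stream k."""
    return [signal[k::stream_count] for k in range(stream_count)]

a = list(b"AAAA")
b = list(b"BBBB")
c = list(b"CCCC")
muxed = multiplex([a, b, c], 4)            # one signal on a shared medium
assert demultiplex(muxed, 3) == [a, b, c]  # original channels recovered
```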
106 Message Switching
In telecommunications, message switching was the precursor of packet
switching, where messages were routed in their entirety, one hop at a time. It
was first built by Collins Radio Company, Newport Beach, California during
the period 1959-1963 for sale to large airlines, banks and railroads. Message
switching systems are nowadays mostly implemented over packet-switched
or circuit-switched data networks. Each message is treated as a separate
entity. Each message contains addressing information, and at each switch
this information is read and the transfer path to the next switch is decided.
Depending on network conditions, a conversation of several messages may
not be transferred over the same path. Each message is stored (usually on
hard drive due to RAM limitations) before being transmitted to the next
switch. Because of this it is also known as a 'store-and-forward' network.
Email is a common application of message switching: a delay in delivering email is acceptable, unlike real-time data transfer between two computers.
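The store-and-forward behavior can be sketched as follows; the switches and their routing tables are hypothetical:

```python
# Toy store-and-forward simulation: each switch stores the whole message,
# reads its addressing information, and decides the next hop per message.

next_hop = {  # hypothetical per-switch routing decisions for destination "D"
    "A": {"D": "B"},
    "B": {"D": "C"},
    "C": {"D": "D"},
}

def route_message(message, source):
    node, path = source, [source]
    while node != message["dest"]:
        stored = dict(message)                 # message held in full at the switch
        node = next_hop[node][stored["dest"]]  # path decided one hop at a time
        path.append(node)
    return path

msg = {"dest": "D", "body": "hello"}
assert route_message(msg, "A") == ["A", "B", "C", "D"]
```

Because each switch must hold the entire message before forwarding it, two messages of the same conversation could take different paths if a routing table changed between them.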
data and disaster downtime, while realizing a greater return on their invested
resources through increased employee productivity and reduction in telecom
costs.
Managed objects are described using GDMO (Guidelines for the Definition
of Managed Objects), and can be identified by a distinguished name (DN),
from the X.500 directory.
CMIP also provides good security (supporting authorization, access control, and security logs) and flexible reporting of unusual network conditions.
119 FDDI
It provides a 100 Mbit/s optical standard for data transmission in local area
network that can extend in range up to 200 kilometers (120 mi). Although
FDDI logical topology is a ring-based token network, it did not use the IEEE
802.5 token ring protocol as its basis; instead, its protocol was derived from
the IEEE 802.4 token bus timed token protocol. In addition to covering large
geographical areas, FDDI local area networks can support thousands of
users. FDDI offers both a Dual-Attached Station (DAS) counter-rotating token ring topology and a Single-Attached Station (SAS) token bus passing ring topology.
120 SONET
Synchronous Optical Networking (SONET) and Synchronous Digital
Hierarchy (SDH) are standardized protocols that transfer multiple digital bit
streams synchronously over optical fiber using lasers or highly coherent
light from light-emitting diodes (LEDs). At low transmission rates data can
also be transferred via an electrical interface. The method was developed to
replace the Plesiochronous Digital Hierarchy (PDH) system for transporting
large amounts of telephone calls and data traffic over the same fiber without
synchronization problems. SONET generic criteria are detailed in Telcordia
Technologies Generic Requirements document GR-253-CORE. Generic
criteria applicable to SONET and other transmission systems (e.g.,
asynchronous fiber optic systems or digital radio systems) are found in
Telcordia GR-499-CORE.
SONET and SDH, which are essentially the same, were originally designed
to transport circuit mode communications (e.g., DS1, DS3) from a variety of
different sources, but they were primarily designed to support real-time,
uncompressed, circuit-switched voice encoded in PCM format. The primary
difficulty in doing this prior to SONET/SDH was that the synchronization
sources of these various circuits were different. This meant that each circuit
was actually operating at a slightly different rate and with different phase.
SONET/SDH allowed for the simultaneous transport of many different
circuits of differing origin within a single framing protocol. SONET/SDH is
not a communications protocol in itself, but a transport protocol.
129 ISI
In telecommunication, intersymbol interference (ISI) is a form of distortion
of a signal in which one symbol interferes with subsequent symbols. This is
an unwanted phenomenon, as the previous symbols have an effect similar to
noise, thus making the communication less reliable. ISI is usually caused by
multipath propagation or the inherent non-linear frequency response of a
channel causing successive symbols to "blur" together.
The presence of ISI in the system introduces errors in the decision device at
the receiver output. Therefore, in the design of the transmitting and receiving
filters, the objective is to minimize the effects of ISI, and thereby deliver the
digital data to its destination with the smallest error rate possible.
Ways to fight intersymbol interference include adaptive equalization and
error correcting codes
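The "blurring" of successive symbols can be illustrated by convolving a symbol sequence with a channel impulse response; the tap values below are illustrative:

```python
# A channel with memory smears each transmitted symbol into its
# successors: every received sample is a weighted sum of the current
# symbol and echoes of the previous ones.

def channel(symbols, taps):
    """Discrete convolution of the symbol sequence with the channel
    impulse response; every tap after the first one creates ISI."""
    received = []
    for n in range(len(symbols)):
        sample = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                sample += h * symbols[n - k]
        received.append(sample)
    return received

symbols = [1, -1, 1, 1]                  # ideal bipolar symbols
clean = channel(symbols, [1.0])          # ideal channel: no ISI
blurred = channel(symbols, [1.0, 0.5])   # echo of the previous symbol
assert clean == [1.0, -1.0, 1.0, 1.0]
assert blurred == [1.0, -0.5, 0.5, 1.5]  # previous symbols bleed in
```

An adaptive equalizer, in essence, estimates the taps and applies an inverse filter so the decision device again sees values near the ideal levels.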
130 Cyclic redundancy checks (CRCs)
A cyclic redundancy check (CRC) is a single-burst-error-detecting cyclic
code and non-secure hash function designed to detect accidental changes to
digital data in computer networks. It is not suitable for detecting maliciously
introduced errors. It is characterized by specification of a so-called generator
polynomial, which is used as the divisor in a polynomial long division over a
finite field, taking the input data as the dividend, and where the remainder
becomes the result.
Cyclic codes have favorable properties in that they are well suited for
detecting burst errors. CRCs are particularly easy to implement in hardware,
and are therefore commonly used in digital networks and storage devices
such as hard disk drives.
Even parity is a special case of a cyclic redundancy check, where the single-bit CRC is generated by the divisor x + 1.
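The polynomial long division over GF(2) can be sketched bit by bit. The generator polynomial x^3 + x + 1 used here is illustrative, and the x + 1 divisor reproduces the even-parity special case:

```python
def crc_remainder(data_bits, poly_bits):
    """Polynomial long division over GF(2): the input data (the dividend),
    padded with len(poly)-1 zero bits, is divided by the generator
    polynomial, and the remainder becomes the CRC."""
    bits = data_bits + [0] * (len(poly_bits) - 1)
    for i in range(len(data_bits)):
        if bits[i]:  # leading bit set: subtract (XOR) the divisor
            for j, p in enumerate(poly_bits):
                bits[i + j] ^= p
    return bits[len(data_bits):]  # what is left is the remainder

data = [1, 0, 1, 1, 0, 1]

# CRC-3 with generator x^3 + x + 1  ->  bit pattern 1011
crc = crc_remainder(data, [1, 0, 1, 1])

# Divisor x + 1 (bit pattern 11) yields a single-bit CRC: even parity.
parity = crc_remainder(data, [1, 1])
assert parity == [sum(data) % 2]
```

The receiver repeats the same division over data plus received CRC; a nonzero remainder signals that the data changed in transit.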
131 Error-correcting memory
DRAM memory may provide increased protection against soft errors by
relying on error correcting codes. Such error-correcting memory, known as
ECC or EDAC-protected memory, is particularly desirable for highly fault-tolerant applications such as servers. With large installed memory capacities together with extended periods of uptime, the probability of soft errors in the total memory installed is significant.
The information in an ECC memory is stored redundantly enough to correct a
single-bit error per memory word. Hence, an ECC memory can support the
scrubbing of the memory content. Namely, if the memory controller scans
systematically through the memory, the single bit errors can be detected, the
erroneous bit can be determined using the ECC checksum, and the corrected
data can be written back to the memory.
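The single-bit correction described above can be sketched with a textbook (7,4) Hamming code; real ECC memory uses wider SECDED codes, but the mechanism is the same:

```python
# Toy (7,4) Hamming code: how ECC can detect and correct one flipped bit
# in a stored word. The data word below is illustrative.

def encode(d):  # d = [d1, d2, d3, d4] data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):
    """Recompute the parity checks; the syndrome is the 1-based position
    of a single-bit error (0 means no error detected)."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:                      # scrubbing: fix the bad bit in place
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]  # extract the corrected data bits

word = [1, 0, 1, 1]
stored = encode(word)
stored[4] ^= 1                  # a soft error flips one stored bit
assert correct(stored) == word  # the ECC recovers the original data
```

A memory controller scrubbing systematically through memory performs exactly this decode-correct-rewrite cycle on each word.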
134 network congestion
In data networking and queueing theory, network congestion occurs when a
link or node is carrying so much data that its quality of service deteriorates.
Typical effects include queueing delay, packet loss or the blocking of new
connections. A consequence of the latter two effects is that an incremental
increase in offered load leads either only to a small increase in network
throughput, or to an actual reduction in network throughput.
Network protocols which use aggressive retransmissions to compensate for
packet loss tend to keep systems in a state of network congestion, even after
the initial load has been reduced to a level which would not normally have
induced network congestion. Thus, networks using these protocols can
exhibit two stable states under the same level of load. The stable state with
low throughput is known as congestive collapse
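The queueing-delay side of congestion can be illustrated with the textbook M/M/1 result, mean delay T = 1/(mu - lam) for arrival rate lam and service rate mu; the link rate used below is illustrative:

```python
# As offered load approaches link capacity, queueing delay explodes while
# throughput gains flatten; beyond capacity the queue grows without bound.

def mm1_delay(lam, mu):
    """Mean time in an M/M/1 system: T = 1 / (mu - lam)."""
    if lam >= mu:
        return float("inf")  # overload: the queue is unstable
    return 1.0 / (mu - lam)

mu = 1000.0  # hypothetical link serving 1000 packets/s
for load in (0.5, 0.9, 0.99):
    lam = load * mu
    print(f"utilization {load:.0%}: mean delay {mm1_delay(lam, mu) * 1000:.1f} ms")
```

At 50% load the mean delay is 2 ms; at 99% load it is 100 ms, fifty times worse for roughly twice the throughput, which is the "small increase in throughput, large increase in delay" behavior described above.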
135 Signal-to-noise ratio
Signal-to-noise ratio (SNR) is a measure used in science and engineering that compares the level
of a desired signal to the level of background noise. It is defined as the ratio
of signal power to the noise power, often expressed in decibels. A ratio
higher than 1:1 (greater than 0 dB) indicates more signal than noise. While
SNR is commonly quoted for electrical signals, it can be applied to any form
of signal (such as isotope levels in an ice core or biochemical signaling
between cells).
The signal-to-noise ratio, the bandwidth, and the channel capacity of a
communication channel are connected by the Shannon-Hartley theorem.
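A worked example of these definitions; the bandwidth and power figures are illustrative:

```python
import math

def snr_db(signal_power, noise_power):
    """SNR expressed in decibels: 10 * log10(S/N)."""
    return 10 * math.log10(signal_power / noise_power)

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 100                       # signal power 100x the noise power
assert snr_db(100, 1) == 20.0   # a ratio of 100:1 is 20 dB

# A hypothetical 1 MHz channel at 20 dB SNR: roughly 6.66 Mbit/s
capacity = shannon_capacity(1_000_000, snr)
```

Doubling the bandwidth doubles the capacity, while doubling the SNR adds only about one extra bit per symbol, which is why capacity is said to grow linearly in bandwidth but only logarithmically in SNR.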
Signal-to-noise ratio is sometimes used informally to refer to the ratio of
useful information to false or irrelevant data in a conversation or exchange.
For example, in online discussion forums and other online communities, off-topic posts and spam are regarded as "noise" that interferes with the "signal"
of appropriate discussion
136 noise
In communication systems, noise is an error or undesired random
disturbance of a useful information signal in a communication channel. The
noise is a summation of unwanted or disturbing energy from natural and
sometimes man-made sources. Noise is, however, typically distinguished
from interference, (e.g. cross-talk, deliberate jamming or other unwanted
electromagnetic interference from specific transmitters), for example in the
signal-to-noise ratio (SNR), signal-to-interference ratio (SIR) and signal-to-noise-plus-interference ratio (SNIR) measures. Noise is also typically
distinguished from distortion, which is an unwanted systematic alteration of
the signal waveform by the communication equipment, for example in the
signal-to-noise and distortion ratio (SINAD). In a carrier-modulated
passband analog communication system, a certain carrier-to-noise ratio
(CNR) at the radio receiver input would result in a certain signal-to-noise
ratio in the detected message signal. In a digital communications system, a
certain Eb/N0 (normalized signal-to-noise ratio) would result in a certain bit
error rate (BER).
139 Encapsulation
It is the packing of data and functions into a single component. The features
of encapsulation are supported using classes in most object-oriented
programming languages, although other alternatives also exist. It allows
selective hiding of properties and methods in an object by building an
impenetrable wall to protect the code from accidental corruption.
In programming languages, encapsulation is used to refer to one of two
related but distinct notions, and sometimes to the combination thereof:
A language mechanism for restricting access to some of the object's
components.
A language construct that facilitates the bundling of data with the methods
(or other functions) operating on that data
Some programming language researchers and academics use the first
meaning alone or in combination with the second as a distinguishing feature
of object-oriented programming, while other programming languages which
provide lexical closures view encapsulation as a feature of the language
orthogonal to object orientation.
The second definition is motivated by the fact that in many OOP languages
hiding of components is not automatic or can be overridden; thus,
information hiding is defined as a separate notion by those who prefer the
second definition.
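Both notions can be sketched in a few lines; the Counter class below is a made-up example:

```python
# Encapsulation in both senses: bundling data with the methods that
# operate on it, and restricting access to an object's components.

class Counter:
    def __init__(self):
        self._count = 0      # leading underscore: internal by convention
        self.__secret = 42   # name-mangled: harder to reach from outside

    def increment(self):     # behavior bundled with the data it touches
        self._count += 1

    @property
    def count(self):         # read-only access; no direct mutation
        return self._count

c = Counter()
c.increment()
assert c.count == 1
# The hiding is not absolute: name mangling can be bypassed, which is
# why "information hiding" is treated as a notion distinct from bundling.
assert c._Counter__secret == 42
```

Python illustrates the second definition well: hiding of components is conventional and can be overridden, so bundling and information hiding really are separate ideas.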
140 Fragmentation
It is a phenomenon in which storage space is used inefficiently, reducing
capacity or performance and often both. The exact consequences of
fragmentation depend on the specific system of storage allocation in use and
the particular form of fragmentation. In many cases, fragmentation leads to
storage space being "wasted", and in that case the term also refers to the
wasted space itself. For other systems (e.g. the FAT file system) the space
used to store given data (e.g. files) is the same regardless of the degree of
fragmentation (from none to extreme).
There are three different but related forms of fragmentation: external
fragmentation, internal fragmentation, and data fragmentation, which can be
present in isolation or conjunction. Fragmentation is often accepted in return
for improvements in speed or simplicity.
141 Thinnet
This refers to RG-58 cabling, which is a flexible coaxial cable about 1/4 inch thick. Thinnet is used for short-distance communication and is flexible enough to facilitate routing between workstations. Thinnet connects directly to a workstation's network adapter card using a British Naval Connector (BNC) and uses the network adapter card's internal transceiver. The maximum length of a thinnet segment is 185 meters.
142 Thicknet
This coaxial cable, also known as RG-8, gets its name from being a thicker cable than thinnet. Thicknet cable is about 1/2 inch thick and can support data transfer over longer distances than thinnet. Thicknet has a maximum cable length of 500 meters and usually is used as a backbone to connect several smaller thinnet-based networks. Because of its thickness, this cable is harder to work with than thinnet cable. A transceiver often is connected directly to the thicknet cable using a connector known as a vampire tap. Connection from the transceiver to the network adapter card is made using a drop cable to connect to the attachment unit interface (AUI) port connector.
143 10Base2
The 10Base2 Ethernet architecture is a network that runs at 10 Mbps and uses baseband transmissions. 10Base2 typically is implemented as a bus topology, but it could be a mix of a bus and a star topology. The cable type we use is indicated by the character at the end of the name of the architecture; in this case, a 2. The 2 implies 200 meters. Now, what type of cable is limited to approximately 200 m? You got it; thinnet is limited to approximately 200 m (185 m, to be exact). The only characteristic we have not mentioned is the access method: all Ethernet environments use CSMA/CD as the way to put data on the wire.
144 10BaseT
In Secret Key encryption, a separate key is required for each host on the network, making key management difficult.
Public Key Encryption
In this encryption system, every user has a secret key of their own that is kept out of the shared domain; the secret key is never revealed publicly. Along with the secret key, every user also has a public key. The public key is always made public and is used by senders to encrypt data. When a user receives the encrypted data, it can easily be decrypted using that user's own secret key.
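The exchange can be sketched with textbook RSA using deliberately tiny primes (the classic p = 61, q = 53 example); real systems use keys thousands of bits long:

```python
# Toy textbook RSA, purely to illustrate the key relationship: anyone may
# encrypt with the public key, but only the holder of the secret key can
# decrypt. Never use numbers this small in practice.

p, q = 61, 53
n = p * q                  # 3233, part of both keys
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # secret exponent: modular inverse of e mod phi

public_key = (n, e)        # made public, used by senders
secret_key = (n, d)        # never revealed

message = 65
ciphertext = pow(message, e, n)   # sender encrypts with the public key
recovered = pow(ciphertext, d, n) # receiver decrypts with the secret key
assert recovered == message
```

Note that unlike the secret-key scheme above, one public/secret key pair serves all of a user's correspondents, which removes the per-host key-management burden.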
147 Internetwork
A network of networks is called an internetwork, or simply the internet. It is the largest network in existence on this planet. The internet connects all WANs, and it can have connections to LANs and home networks. The internet uses the TCP/IP protocol suite and uses IP as its addressing protocol. At present, the internet is widely implemented using IPv4; because of a shortage of address space, it is gradually migrating from IPv4 to IPv6.
The internet enables its users to share and access an enormous amount of information worldwide. It offers WWW, FTP, email services, audio and video streaming, and more. At a large scale, the internet works on the client-server model.
The internet uses a very high-speed backbone of fiber optics. To interconnect the continents, fibers are laid under the sea, known as submarine communication cables.
The internet is widely deployed on World Wide Web services using HTML-linked pages and is accessed by client software known as web browsers. When a user requests a page from a web server located anywhere in the world using a web browser, the web server responds with the proper HTML page. The communication delay is very low.
The internet serves many purposes and is involved in many aspects of life. Some of them are:
Web sites
E-mail
Instant Messaging
Blogging
Social Media
Marketing
Networking
Resource Sharing
Audio and Video Streaming
148 Multilayer switching
It is an evolution of LAN and internetworking technologies. Multilayer
devices combine aspects of OSI layer 2 (the data link layer) and OSI layer 3
(the network layer) into hybrid switches that can route packets at wire speed.
A basic switch is a multiport bridge. These switches were developed to allow
microsegmentation of LANs into large broadcast domains with small
collision domains. See "Switching and Switched Networks" for an overview
of the evolution of switches.
As the technology developed, hardware-based routing functions were also
added, then higher-level functions such as the ability to look deep inside
packets for information that could aid in the packet-forwarding process.
Thus, multilayer switches are devices that examine layer 2 through layer 7
information.
149 Distributed applications
These allow users to interact with other systems on a network. A distributed application is traditionally divided into two parts: the front-end client and the back-end server. This is the client/server model, a model that balances processing loads between client and server. See "Client/Server Computing."
Distributed means that clients may interact with many different servers all
over the network.
Application/groupware suites like Microsoft Exchange, Novell GroupWise,
Lotus Notes/Domino, and Netscape SuiteSpot are designed for distributed
networks. Management applications that use SNMP can collect information
from remote distributed systems and report it back to management systems.
150 An MSP
It is a service provider that offers system and network management tools and
expertise. An MSP typically has its own data center that runs advanced
network management software such as HP OpenView or Tivoli. It uses these
tools to actively monitor and provide reports on aspects of its customers' networks, including communication links, network bandwidth, servers, and
so on. The MSP may host the customer's Web servers and application servers
at its own site. The services provided by MSPs have been called "Web
telemetry" services. The MSP Association defines MSPs as follows:
Management Service Providers deliver information technology (IT)
infrastructure management services to multiple customers over a network on
a subscription basis. Like Application Service Providers (ASPs),
Management Service Providers deliver services via networks that are billed
to their clients on a recurring fee basis. Unlike ASPs, which deliver business
applications to end users, MSPs deliver system management services to IT
departments and other customers who manage their own technology assets.
151 A data center or NOC
(network operations center) is a place to consolidate application servers,
Web servers, communications equipment, security systems, system
administrators, support personnel, and anything or anybody else that
provides data services. A data center benefits from centralized management,
support, backup control, power management, security, and so on. It may be
housed in a single room or fill an entire building. It may be within a carrier's
PoP (point of presence). Special equipment is usually installed to protect
against power outages, natural disasters, and security breaches.
Enterprise and public data centers
Internet data center role in outsourcing
Facilities management, managed services, and colocation services
High availability, reliability, and scalability advantages
Data center features, including power systems, temperature controls, fire
detection and suppression systems, physical security, cages, racks, and
vaults.
Interconnection systems, including new technologies such as InfiniBand
152 Mobile Computing
Most computer users are connected to networks, and have access to data and
devices on those networks. They connect to the Internet and communicate
with other users via electronic mail. They work in collaborative groups in
which they share schedules and other information. However, when users hit
the road, they can lose contact with the people and resources they are
accustomed to working with. Fortunately, there is plenty of support for
mobile users:
improve performance.
NAS description and NAS as "filers"
Network appliances
Use in data centers and at the department level
Comparison to SANs
CIFS (Common Internet File System) and NFS (Network File System)
NAS importance in the bandwidth explosion and peer-to-peer trend
Network Appliance's WAFL (Write Anywhere File Layout)
Comparison to IP storage solutions such as iSCSI and VI
Architecture/DAFS (Direct Access File System) solutions
Block-level storage and access protocols
162 A mainframe
It is a central processing computer system that gets its name from the large
frame or rack that holds the electronics. Mainframes are based on the central
processing model of computing, in which all processing and data storage is
done at a central system and users connect to that system via "dumb
terminals." The most common mainframes were made by IBM, although
major systems were also made by Sperry Rand, Burroughs, NCR,
Honeywell, and others.
163 SNA
SNA is an IBM architecture that defines a suite of communication protocols
for IBM systems and networks. SNA is an architecture like the OSI model.
There is a protocol stack and various architectural definitions about how
communication takes place at the various levels of the protocol stack.
SNA was originally designed for IBM mainframe systems. One could refer
to this original SNA as "legacy SNA." The "new SNA" is APPN (Advanced
Peer-to-Peer Networking). Legacy SNA is based on the older concept of
centralized processing, where the mainframe was the central computing
node. Dumb terminals attached to the central processor and did two things:
accepted keyboard input from users, and displayed calculated results or
query replies from the mainframe.
164 Gigabit Ethernet
It is a 1-gigabit/sec (1,000-Mbit/sec) extension of the IEEE 802.3 Ethernet
networking standard. Its primary niches are corporate LANs, campus
networks, and service provider networks where it can be used to tie together
existing 10-Mbit/sec and 100-Mbit/sec Ethernet networks. Gigabit Ethernet
can replace 100-Mbit/sec FDDI (Fiber Distributed Data Interface) and Fast
Ethernet backbones, and it competes with ATM (Asynchronous Transfer Mode).
IEEE committees. One standard became IEEE 802.3u Fast Ethernet and the
other became 100VG-AnyLAN, which is now governed by the IEEE 802.12
committee. The latter uses the "demand priority" medium access method
instead of CSMA/CD.
168 The TIA/EIA
Its structured cabling standards define how to design, build, and manage a
cabling system that is structured, meaning that the system is designed in
blocks that have very specific performance characteristics. The blocks are
integrated in a hierarchical manner to create a unified communication
system. For example, workgroup LANs represent a block with lower-performance requirements than the backbone network block, which requires
high-performance fiber-optic cable in most cases. The standard defines the
use of fiber-optic cable (single and multimode), STP (shielded twisted pair)
cable, and UTP (unshielded twisted pair) cable.
The initial TIA/EIA 568 document was followed by several updates and
addendums as outlined below. A major standard update was released in 2000
that incorporates previous changes.
169