Q. What does FDDI stand for?
FDDI stands for Fiber Distributed Data Interface.
Q. What is FDDI?
FDDI is a 100 Mbps local area network, defined by ANSI and ISO standards. It was originally
designed to operate over fiber optic cabling, but now also supports standard copper media for
interconnection. FDDI uses a token ring media access control protocol.
Most of the standardization of FDDI was done in Accredited Standards Committee X3T9.5. In
1995, X3T9.5 became X3T12. FDDI standards are approved by both ANSI (American National
Standards Institute) and ISO (the International Organization for Standardization). ISO approval
usually occurs after ANSI approval.
This information may be superseded by either the FDDI document overview or the FDDI
document summary pages.
Q. Where can I get copies of the standards documents?
There are no on-line copies of the final standards documents. The standards are available in
print, and now also on CD-ROM.
Q. I've heard that FDDI uses a token passing scheme for access arbitration; how does this
work?
A token is a special three-octet FDDI frame. A station waits until a token comes by, grabs the
token, transmits one or more frames, and then releases the token. The number of frames that can
be transmitted is determined by timers in the MAC protocol chips.
FDDI uses a timed token protocol, while 802.5 uses a priority/reservation token access method.
Therefore there are some differences in frame formats, and significant differences in how a
station's traffic is handled (queuing, etc.). Management of the rings is also very different.
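A rough feel for the timed-token rule can be had from a small sketch. The TTRT value and frame times below are assumed for illustration, and the synchronous bandwidth allocation is omitted; real MACs implement this with hardware timers.

```python
# Simplified sketch of FDDI's timed-token rule (illustrative only).
TTRT = 8.0  # negotiated target token rotation time, in ms (assumed value)

def frames_sent_on_token_arrival(time_since_last_token, queued_frame_times):
    """Return how many queued frames a station may send on this token visit.

    If the token arrives late (the last rotation took >= TTRT), the station
    may not send asynchronous traffic at all; otherwise it may keep
    transmitting until its token holding time (TTRT minus the measured
    rotation time) is used up.
    """
    tht = TTRT - time_since_last_token  # token holding time budget, in ms
    sent = 0
    for frame_time in queued_frame_times:
        if tht <= 0:
            break
        tht -= frame_time  # a frame already started may finish transmission
        sent += 1
    return sent

# Token arrives after 3 ms; four queued frames of 2 ms each. The 5 ms budget
# lets the station start three of them before the budget runs out.
print(frames_sent_on_token_arrival(3.0, [2.0, 2.0, 2.0, 2.0]))  # -> 3
```

The key property this models is self-regulation: a heavily loaded ring makes the token late, which shrinks every station's holding time on the next pass.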
Q. I've heard that FDDI is a counter-rotating ring; what does this mean?
FDDI is a dual ring technology: each ring runs in the direction opposite to the other, which
improves fault recovery.
[Figure: dual counter-rotating ring connecting stations A, B, and C]
A concentrator provides for connection of either another concentrator or station into one of the
dual rings. This allows a tree to extend from each of the master ports of the concentrator.
[Figure: tree of concentrators and stations (A-C, D-F, H-I) extending from the dual ring]
Q. What is dual homing?
When a DAS is connected to two concentrator ports, this is called dual homing. One port is the
active link, where data is transmitted, and the other port is a hot standby. The hot-standby link is
constantly tested and takes over if the active link fails or is disconnected. The B-port of a DAS
is usually the active port and the A-port the hot standby.
Q. What is a DAS?
A DAS (Dual Attachment Station) is a station with two peer ports (A-port and B-port). The A-port
connects to the B-port of another DAS, and the B-port connects to the A-port of yet another
DAS, i.e.:
[Figure: three DASs chained around the ring, each A-port connected to the next station's B-port]
Q. What is a SAS?
A SAS (Single Attachment Station) is a station with one slave port (S-port). It is usually connected
to a master port (M-port) of a concentrator.
Q. What is a wrapped ring?
When a link in the dual ring is broken or not connected, the two ports adjacent to the broken
link are removed from the ring and both stations enter the wrap state. This joins the two
counter-rotating rings into one ring.
[Figure: wrapped ring; the stations on either side of the break (Wrap A and Wrap B) join the two rings into one]
Q. Do I need a concentrator port for each workstation, or can workstations be chained
together?
Usually you will need a concentrator port (M-port) to connect each SAS. A DAS can be
connected into the dual ring or to concentrator port(s). FDDI also allows two S-ports to be
connected (a two-station ring), or an S-port to be connected to an A- or B-port of a DAS, causing
a wrapped dual ring. If more than one DAS is used, ring redundancy is lost. (Not all equipment
vendors allow S-to-A, -B, or -S connections without special configuration.)
Q. What are the advantages and disadvantages of connecting through a concentrator?
Advantages: fault tolerance. When a link breaks, the ring can be segmented; a concentrator can
simply bypass the problem port and avoid most segmentations. It also gives you better physical
planning: people usually prefer a physical tree topology, and the star configuration of a
concentrator system is generally easier to troubleshoot. Stations can be powered off without
serious effects on the ring. Disadvantages: a concentrator represents a single point of failure,
and a concentrator configuration may also be more costly.
Q. Can concentrators be cascaded into multiple levels?
Yes, and you can build a tree as deep as you want. Many users have multiple levels of
concentrators. The only limit is FDDI's limit of 500 stations.
Q. What is a bypass and what are the issues in having or not having one?
A bypass relay is a device used to skip a station on the dual ring when it is turned off, without
causing the ring to wrap. One problem with bypasses is that they attenuate the light in the fiber,
so you can't have too many of them. (The maximum number in a row depends on the bypass
loss and on how the cable plant is constructed. When a bypass joins two fiber links, the number
of connectors between the optical transmitter and receiver usually increases.)
Q. What are the minimum and maximum distances between stations?
No minimum; 2 km maximum for stations with the original multimode interface (PMD).
40-60 km maximum is possible over single mode fiber (SMF-PMD). An optical attenuator will
be necessary if high-power (Category II) lasers are used on a short link.
No minimum; 500 m maximum for the newer Low Cost Fiber on multimode (LCF-PMD).
Q. What types of fiber can be used?
Multimode (62.5/125 micron graded index multimode fiber) is standard; other fibers such as
50/125, 85/125, and 100/140 are allowed. Single mode fiber (8-10 micron) is used with SMF-PMD.
All the FDDI optical PMDs use the same wavelength (1300nm), so they can be connected
together. For example, PMD can be connected to LCF-PMD as long as you stick to LCF-PMD
configuration rules. If you don't understand this optical stuff, don't attempt to mix SMF-PMD
with PMD or LCF-PMD devices without advice.
Too much optical power can permanently damage someone's eyes. This is generally not a
problem with PMD and LCF-PMD, but it can be with SMF-PMD, especially Category II
SMF-PMD. Inspecting the end of any fiber (e.g., with magnification) without knowing what is
at the other end is not a smart thing to do.
Q. I've heard of FDDI over copper; what type of cable does this scheme use?
FDDI over copper is standardized as TP-PMD (often marketed as CDDI) and runs over
Category 5 unshielded twisted pair; shielded twisted pair is also used.
Q. Is there any advantage to separating the fiber pairs (will the ring work better if only one
strand is broken on a DAS connection)?
Not in most applications. Some military applications anticipate physical damage to the cable and
therefore separate it to improve survivability.
Q. Can FDDI be bridged to Ethernet and other LANs?
Yes.
Any problems that arise are generally with the transport protocol. Frame fragmentation is
standardized for TCP/IP. It should be noted that frame fragmentation will not work for
DECnet, IPX, LAT, AppleTalk, NetBEUI, etc.; IP is the only protocol that has a standard
method of fragmenting. Frames in other protocols destined for Ethernet LANs must stay below
the 1500-byte MTU.
Q. I've heard that there is a frame length difference, what are the issues and problems
here?
FDDI frames have a maximum size of 4500 bytes, while Ethernet frames carry only 1500 bytes.
Therefore your bridge or router needs to be smart enough to fragment the packets (e.g., into
smaller IP fragments), or you need to reduce your frame size to 1500 bytes of data.
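A sketch of how a bridge might size the IP fragments is below. Header sizes are assumed (a 20-byte IP header with no options), and real devices must also copy options and set the offset and more-fragments fields; the 4470-byte example payload is likewise an assumed figure, since the exact IP payload of a maximum FDDI frame depends on the framing overhead.

```python
def fragment_lengths(payload_len, mtu=1500, ip_header=20):
    """Split an IP payload into per-fragment payload sizes for a given MTU.

    Each fragment carries its own IP header, and every fragment except the
    last must carry a payload that is a multiple of 8 bytes, because the
    fragment-offset field counts 8-byte units. Simplified: no IP options.
    """
    per_frag = (mtu - ip_header) // 8 * 8  # 1480 bytes for a 1500-byte MTU
    frags = []
    while payload_len > 0:
        take = min(per_frag, payload_len)
        frags.append(take)
        payload_len -= take
    return frags

# A large FDDI-sized payload splits into three full Ethernet fragments
# plus a small tail:
print(fragment_lengths(4470))  # -> [1480, 1480, 1480, 30]
```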
[Figure: FDDI frame format - PA, SD, FC, DA, SA, INFO, ED, FS]
Q. What throughput can FDDI actually deliver?
Depends. You can get aggregate usage greater than 95 Mbps in most installations. As with any
LAN, high utilization corresponds to longer queuing times; many systems run fine at 75 Mbps.
The maximum for a single station is implementation dependent: some can't do more than
20 Mbps, while others can sustain more than 90 Mbps.
Q. What happens when I bridge between a 100 Mbps FDDI and a 10Mbps ethernet if the
FDDI traffic destined for the ethernet gets above 8 Mbps? 10 Mbps?
After the buffers fill, frames start dropping. This is not a problem unique to FDDI, however;
consider Ethernet to T1, or multiple Ethernets to a single Ethernet, or a lightly loaded Ethernet
to a heavily loaded Ethernet.
Q. What is the latency across a bridge/router? (Yes, I know that different vendors are
different, but what is the window?)
This is implementation dependent; check with your vendor.
Q. Are there repeaters for FDDI?
Yes. The FDDI Physical Layer Repeater Protocol (PHY-REP) standard is in the final stages of
approval. Repeaters are sometimes used to convert between single mode and multimode fiber.
Q. What type of test and troubleshooting equipment is available for FDDI?
FDDI protocol analyzers of varying capability are available from multiple manufacturers. Many
FDDI equipment vendors also have management software that implements ring mapping,
inspection of other stations' operational parameters, general ring state monitoring, etc.
An optical time domain reflectometer (OTDR) and optical power meters are used for testing
optical fiber. There are also FDDI link testers that measure power and low level FDDI protocol
response.
Q. Is there an SNMP MIB for FDDI?
Yes. The IETF has an FDDI SNMP MIB; the current MIB is RFC 1512. The previous FDDI
SNMP MIB (RFC 1285) was based on X3T9.5 SMT 6.2.
The IETF has not standardized an RMON MIB for FDDI, but many RMON vendors have done
their own.
Q. What is a beacon?
A beacon is a special frame that the FDDI MAC sends when something is wrong. When
beaconing persists, SMT kicks in to detect and try to solve the problem. A few FDDI
implementations beacon on initial entry into the ring, but this is a short-term condition.
Q. How about interoperability; does one manufacturer's equipment work with another's?
As with any networking products (Ethernet, Token Ring, FDDI, ATM), there is a possibility that
one vendor's equipment does not work with another's, but most of the equipment shipping today
is tested for interoperability. There are test labs like UNH and ANTC; ask the vendor what type
of testing they did.
Q. Can I interface FDDI to a PC (ISA Bus), PC (EISA Bus), PC (Micro channel Bus),
Macintosh, Sun workstation, DECstation 5000, NEXT computer, Silicon Graphics, Cisco
router, WellFleet router, SNA gateway (McData), other?
Yes.
I am not sure whether NeXT has any FDDI adapter software, but there are roughly five different
NuBus FDDI cards on the market. FDDI adapters are available for all the other buses and
vendors listed.
Q. What is the maximum time a station has to wait for media access. What type of
applications care?
Even at maximum load, this is unlikely. To get this worst case, the ring basically must go from
zero offered load to maximum offered load (all stations enqueueing ~T_neg of frames) virtually
instantaneously. If this doesn't happen, the ~T_neg of available bandwidth is split between
multiple stations, with each fraction of the bandwidth being passed along the ring with each
token rotation. This results in a given station having multiple transmit opportunities within
MaxTime. To reduce MaxTime, change the T_req of a station to some lower value (e.g., 8 ms).
This is done through the MIB parameter fddiPATHMaxT-Req.
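The relationship between T_req and the worst-case wait can be sketched as follows. This is a simplified statement of the timed-token bound (the token rotation time never exceeds twice the negotiated TTRT); real rings add ring latency and per-station overheads, and the T_req values here are assumed.

```python
def negotiated_ttrt(t_req_values):
    """The claim process selects the lowest requested TTRT on the ring."""
    return min(t_req_values)

def worst_case_token_wait(t_req_values):
    """Upper bound on the asynchronous access delay: the timed-token
    protocol guarantees the token rotation time never exceeds twice the
    negotiated TTRT (simplified; ignores ring latency)."""
    return 2 * negotiated_ttrt(t_req_values)

# With all stations requesting 165 ms, the bound is ~330 ms; lowering one
# station's T_req to 8 ms pulls the bound down for the whole ring.
print(worst_case_token_wait([165.0, 165.0, 165.0]))  # -> 330.0
print(worst_case_token_wait([165.0, 8.0, 165.0]))    # -> 16.0
```

Note the design consequence: one station's request constrains everyone, which is why lowering T_req trades aggregate efficiency for bounded latency.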
Q. Can I bridge/route TCP/IP, SNA, Novell's IPX, Sun protocols, DecNet, Banyan Vines,
Appletalk, X windows, LAT?
Yes, but some equipment may not support bridging of all of these protocols. Check with the
bridge vendor for your required protocols.
Q. What applications benefit from FDDI?
Basically anything will be at least a bit faster, from NFS to image transmission. Even if a single
station cannot take advantage of the full 100 Mbps, the aggregate bandwidth will help a lot if
your Ethernet is saturated. Note, however, that though FDDI has higher bandwidth than
Ethernet, the signals travel at the same speed: the propagation of a signal on the transmission
line is the same for Ethernet, Token Ring, and FDDI.
Q. What are the effects of powering off a workstation on a DAS or SAS connection?
Depends. Let's do SAS first, it is easier. If a SAS is connected to a concentrator, then the
concentrator will bypass the SAS connection using an internal data path. If a DAS is connected
to a concentrator, then the concentrator will also bypass the DAS. If a DAS is connected to the
trunk rings without using an optical bypass switch, then the trunk ring will wrap. If multiple
stations power off on the trunk rings, then the ring will be segmented. Now if a DAS is using an
optical bypass switch, the switch will kick in and prevent the ring from wrapping.
Q. How should I design an FDDI backbone?
Connect concentrators and other equipment that is powered on all the time (e.g., bridges,
routers, ring monitors) into the trunk ring. In a large enterprise, workgroup concentrators and
user stations are then connected to the backbone concentrators. Some sites connect bridges,
routers, and critical servers to backbone concentrators using dual homing.
Q. What is Graceful Insertion?
Graceful Insertion is a method of inserting a station (or a tree) into a ring while minimizing
disruption. Graceful Insertion is not standard, and therefore differs by vendor. Some
implementations are both "frame friendly" (they don't corrupt frames) and "token friendly"
(they don't destroy the token). The theory goes that Graceful Insertion can minimize ring
non-op time and lost frames, thereby avoiding transmission timeouts in upper layer protocols
(e.g., TCP).
The counter argument: Graceful Insertion can use more ring bandwidth (by holding the token)
than is consumed by a ring recovery; upper layer protocols are designed to perform frame
recovery and retransmission anyway; and no vendor can guarantee 100% Graceful Insertion.
Q. Is there a graceful way to remove a station?
Not really, but on some concentrators, if you know a station is to be removed from the ring, it
can be removed gracefully by sending a command to the concentrator.
Q. What does SMT stand for? What does it do? Do I need it?
Station ManagemenT (SMT) is the part of the ANSI FDDI standards that provides link-level
management for FDDI. SMT is a low-level protocol that addresses the management of the FDDI
functions provided by the MAC, PHY, and PMD. It performs functions like ring recovery,
frame-level management, and link control. Every station on FDDI needs to have SMT.
Q. What versions of SMT are there, and do they interoperate?
A lot of FDDI equipment was shipped before the SMT standard was completed. Most of this
equipment was shipped with SMT software based on SMT Revision 6.2, and earlier shipments
were even made with SMT 5.1. SMT 7.3 was the final working document in the standards
committee; it is functionally identical to SMT 7.2 and the approved SMT standard. All of these
versions of SMT work together on a ring, but they will look different to a management station.
(The SMT 6.2 and SMT 7.2 MIBs are very different, as are the frame protocols related to the
MIB.)
Q. Can I connect two Single attach stations together and form a two station ring without a
concentrator?
Yes. You can do that if both stations support the S-S port connection. Most vendors support the
S-S connections.
Q. What is a port?
A port is the connector and supporting logic for one end of an FDDI link. Each port has a
transmitter and a receiver. Ports are given names descriptive of their position in FDDI
topologies; SMT defines four types of ports (A, B, M, S). A dual attachment station has two
ports, one A-port and one B-port. A single attachment station has only one port (S-port). A
concentrator has many M-ports for connecting to other stations' A-, B-, or S-ports.
Q. What are the port connection rules?
When connecting DASs, one should connect the A-port of one station to the B-port of another.
The S-port on a SAS connects to an M-port on a concentrator. The A- and B-ports on a DAS can
also connect to the M-port of a concentrator. M-ports of concentrators will not connect to each
other. In more detail, SMT suggests rules for each pairing of port types:
[Table: the SMT connection-rule matrix (each port type A, B, S, M against the other port's type) did not survive conversion.]
Q. What is FDDI-II?
Both FDDI and FDDI-II run at 100 Mbps on the fiber. FDDI can transport both asynchronous
and synchronous types of frames. FDDI-II adds a new mode of operation called Hybrid Mode,
which uses a 125 µs cycle structure to transport isochronous traffic in addition to sync/async
frames. FDDI and FDDI-II stations can operate in the same ring only in Basic (FDDI) mode.
FDDI (Fiber Distributed Data Interface) is a standard for data transmission on fiber optic lines in a local area
network that can extend in range up to 200 km (124 miles). The FDDI protocol is based on the token ring protocol.
In addition to covering a large geographic area, an FDDI network can support thousands of users.
An FDDI network contains two token rings, one for possible backup in case the primary ring fails. The primary ring
offers up to 100 Mbps capacity. If the secondary ring is not needed for backup, it can also carry data, extending
capacity to 200 Mbps. The single ring can extend the maximum distance; a dual ring can extend 100 km (62 miles).
FDDI is a product of American National Standards Committee X3-T9 and conforms to the Open Systems
Interconnection (OSI) model of functional layering. It can be used to interconnect LANs using other protocols.
FDDI-II is a version of FDDI that adds the capability for circuit-switched service on the network so that voice
signals can also be handled. Work is underway to connect FDDI networks to the developing Synchronous Optical
Network (SONET).
Function of FDDI
Background
The Fiber Distributed Data Interface (FDDI) specifies a 100-Mbps token-passing, dual-ring LAN using fiber-optic
cable. FDDI is frequently used as high-speed backbone technology because of its support for high bandwidth and
greater distances than copper. It should be noted that relatively recently, a related copper specification, called
Copper Distributed Data Interface (CDDI), has emerged to provide 100-Mbps service over copper. CDDI is the
implementation of FDDI protocols over twisted-pair copper wire. This chapter focuses mainly on FDDI
specifications and operations, but it also provides a high-level overview of CDDI.
FDDI uses a dual-ring architecture with traffic on each ring flowing in opposite directions (called counter-rotating).
The dual-rings consist of a primary and a secondary ring. During normal operation, the primary ring is used for data
transmission, and the secondary ring remains idle. The primary purpose of the dual rings, as will be discussed in
detail later in this chapter, is to provide superior reliability and robustness. Figure 1 shows the counter-rotating
primary and secondary rings.
Figure 1: FDDI uses counter-rotating primary and secondary rings.
FDDI Specifications
FDDI specifies the physical and media-access portions of the OSI reference model. FDDI is not actually a single
specification, but a collection of four separate specifications, each with a specific function. Combined, these
specifications have the capability to provide high-speed connectivity between upper-layer protocols such as TCP/IP
and the media used to carry them.
FDDI's four specifications are the Media Access Control (MAC), Physical Layer Protocol (PHY), Physical-Medium
Dependent (PMD), and Station Management (SMT). The MAC specification defines how the medium is accessed,
including frame format, token handling, addressing, algorithms for calculating cyclic redundancy check (CRC)
value, and error-recovery mechanisms. The PHY specification defines data encoding/decoding procedures, clocking
requirements, and framing, among other functions. The PMD specification defines the characteristics of the
transmission medium, including fiber-optic links, power levels, bit-error rates, optical components, and
connectors. The SMT specification defines FDDI station configuration, ring configuration, and ring control features,
including station insertion and removal, initialization, fault isolation and recovery, scheduling, and statistics
collection.
FDDI is similar to IEEE 802.3 Ethernet and IEEE 802.5 Token Ring in its relationship with the OSI model. Its primary
purpose is to provide connectivity between upper OSI layers of common protocols and the media used to connect
network devices. Figure 2 illustrates the four FDDI specifications and their relationship to each other and to the
IEEE-defined Logical Link Control (LLC) sublayer. The LLC sublayer is a component of Layer 2, the MAC layer, of
the OSI reference model.
Figure 2: FDDI specifications map to the OSI hierarchical model.
One of the unique characteristics of FDDI is that multiple ways actually exist by which to connect FDDI devices.
FDDI defines three types of devices: single-attachment station (SAS), dual-attachment station (DAS), and a
concentrator.
An SAS attaches to only one ring (the primary) through a concentrator. One of the primary advantages of
connecting devices with SAS attachments is that the devices will not have any effect on the FDDI ring if they are
disconnected or powered off. Concentrators will be discussed in more detail in the following discussion.
Each FDDI DAS has two ports, designated A and B. These ports connect the DAS to the dual FDDI ring. Therefore,
each port provides a connection for both the primary and the secondary ring. As you will see in the next section,
devices using DAS connections will affect the ring if they are disconnected or powered off. Figure 3 shows FDDI
DAS A and B ports with attachments to the primary and secondary rings.
An FDDI concentrator (also called a dual-attachment concentrator [DAC]) is the building block of an FDDI network.
It attaches directly to both the primary and secondary rings and ensures that the failure or power-down of any SAS
does not bring down the ring. This is particularly useful when PCs, or similar devices that are frequently powered
on and off, connect to the ring. Figure 4 shows the ring attachments of an FDDI SAS, DAS, and concentrator.
FDDI provides a number of fault-tolerant features. In particular, FDDI's dual-ring environment, the implementation
of the optical bypass switch, and dual-homing support make FDDI a resilient media technology.
Dual Ring
FDDI's primary fault-tolerant feature is the dual ring. If a station on the dual ring fails or is powered down, or if the
cable is damaged, the dual ring is automatically wrapped (doubled back onto itself) into a single ring. When the
ring is wrapped, the dual-ring topology becomes a single-ring topology. Data continues to be transmitted on the
FDDI ring without performance impact during the wrap condition. Figure 5 and Figure 6 illustrate the effect of a
ring wrapping in FDDI.
When a single station fails, as shown in Figure 5, devices on either side of the failed (or powered down) station
wrap, forming a single ring. Network operation continues for the remaining stations on the ring. When a cable
failure occurs, as shown in Figure 6, devices on either side of the cable fault wrap. Network operation continues
for all stations.
It should be noted that FDDI truly provides fault-tolerance against a single failure only. When two or more failures
occur, the FDDI ring segments into two or more independent rings that are unable to communicate with each
other.
An optical bypass switch provides continuous dual-ring operation if a device on the dual ring fails. This is used both
to prevent ring segmentation and to eliminate failed stations from the ring. The optical bypass switch performs
this function through the use of optical mirrors that pass light from the ring directly to the DAS device during
normal operation. In the event of a failure of the DAS device, such as a power-off, the optical bypass switch will
pass the light through itself by using internal mirrors and thereby maintain the ring's integrity. The benefit of this
capability is that the ring will not enter a wrapped condition in the event of a device failure. Figure 7 shows the
functionality of an optical bypass switch.
Figure 7: The optical bypass switch uses internal mirrors to maintain a network.
Dual Homing
Critical devices, such as routers or mainframe hosts, can use a fault-tolerant technique called dual homing to
provide additional redundancy and to help guarantee operation. In dual-homing situations, the critical device is
attached to two concentrators. Figure 8 shows a dual-homed configuration for devices such as file servers and
routers.
One pair of concentrator links is declared the active link; the other pair is declared passive. The passive link stays
in back-up mode until the primary link (or the concentrator to which it is attached) is determined to have failed.
The FDDI frame format is similar to the format of a Token Ring frame. This is one of the areas where FDDI borrows
heavily from earlier LAN technologies, such as Token Ring. FDDI frames can be as large as 4,500 bytes. Figure 9
shows the frame format of an FDDI data frame and token.
The following descriptions summarize the FDDI data frame and token fields illustrated in Figure 9.
Preamble---A unique sequence that prepares each station for an upcoming frame.
Start Delimiter---Indicates the beginning of a frame by employing a signaling pattern that differentiates it from the
rest of the frame.
Frame Control---Indicates the size of the address fields and whether the frame contains asynchronous or
synchronous data, among other control information.
Destination Address---Contains a unicast (singular), multicast (group), or broadcast (every station) address. As with
Ethernet and Token Ring addresses, FDDI destination addresses are 6 bytes long.
Source Address---Identifies the single station that sent the frame. As with Ethernet and Token Ring addresses, FDDI
source addresses are 6 bytes long.
Frame Check Sequence (FCS)---Filled by the source station with a calculated cyclic redundancy check value
dependent on frame contents (as with Token Ring and Ethernet). The destination station recalculates the value to
determine whether the frame was damaged in transit. If so, the frame is discarded.
End Delimiter---Contains unique symbols, which cannot be data symbols, that indicate the end of the frame.
Frame Status---Allows the source station to determine whether an error occurred and whether the frame was
recognized and copied by a receiving station.
PA - Preamble: 16 symbols
FCS - Frame Check Sequence: 8 symbols, covering the FC, DA, SA, and INFO fields
Preamble
The token issuer transmits, at a minimum, a preamble of 16 symbols of Idle. The Physical Layers of subsequent
repeating stations can change the length of the Idle pattern according to their Physical Layer requirements, so each
repeating station may see a preamble whose length differs from the original. Tokens will be recognized as long as
their preamble length is greater than zero. If a valid token is received and cannot be processed (repeated), due to
expiration of ring timing or latency constraints, the station will issue a new token.
A given MAC implementation is not required to be capable of copying frames received with fewer than 12 symbols
of preamble. If the preamble cannot be repeated, the rest of the frame is not repeated either.
Starting Delimiter
This field denotes the start of the frame. It can contain only the symbols 'J' and 'K', which will not appear
anywhere else in the frame.
Frame Control
The Frame Control field describes what type of data the frame is carrying in the INFO field. Please note that only
the most common values are covered here.
Destination Address
The Destination Address field contains 12 symbols (48 bits) identifying the station that is to receive this particular
frame. When an FDDI ring is first set up, each station is given a unique address that distinguishes it from the
others. When a frame passes by a station, the station compares its own address against the DA field of the frame.
If there is a match, the station copies the frame into its buffer area to await processing. There is no restriction on
the number of stations that a frame can reach at a time. If the first bit of the DA field is set to '1', the address is
called a group or global address; if the first bit is '0', the address is called an individual address. As the name
suggests, a frame with a group address can be received by multiple stations on the network. If the frame is
intended for everyone on the network, the address bits are set to all 1s; therefore, the broadcast address contains
all 'F' symbols. There are also two different ways of administering these addresses: local and universal. The second
bit of the address field determines which applies. If the second bit is '1', the address is locally administered; if it
is '0', the address is universally administered. Locally administered addresses are assigned by the network
administrator, while universally administered addresses are pre-assigned by the manufacturer from its OUI.
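The two address bits can be decoded as in this sketch. It assumes the first byte of the address is given in transmission order, with the first bit on the wire as the most significant bit; note that FDDI's bit ordering differs from Ethernet's canonical (LSB-first) form, a detail glossed over here.

```python
def classify_address(first_byte):
    """Classify an FDDI address from its first byte.

    Assumes the byte is written in transmission order: the first bit on the
    wire is the most significant bit of the byte.
    """
    group = bool(first_byte & 0x80)  # first bit: 1 = group/global address
    local = bool(first_byte & 0x40)  # second bit: 1 = locally administered
    return ("group" if group else "individual",
            "local" if local else "universal")

# The broadcast address is all 1s (all 'F' symbols), so its first byte is 0xFF:
print(classify_address(0xFF))  # -> ('group', 'local')
print(classify_address(0x00))  # -> ('individual', 'universal')
```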
Source Address
The Source Address identifies the station that created the frame. This field is used to remove frames from the
ring. Each time a frame is sent, it travels around the ring, visiting each station, and eventually (hopefully) comes
back to the station that originally sent it. If a station's address matches the SA field in a passing frame, the
station strips the frame off the ring. Each station is responsible for removing its own frames from the ring.
Information Field
The INFO field is the heart and soul of the frame: every other component of the frame is designed around this
field (whom to send it to, where it is coming from, how it is received, and so on). The type of information in the
INFO field can be found by looking at the FC field of the frame. For example, '50' (hex) denotes an LLC frame, so
the INFO field will contain an LLC header followed by other upper-layer headers (e.g., SNAP, ARP, IP, TCP, SNMP);
'41' (hex) or '4F' (hex) denotes an SMT (Station Management) frame, so an SMT header will appear in the INFO
field.
Frame Check Sequence
The Frame Check Sequence field is used to verify the traversing frame for bit errors. The FCS is generated by the
station that sends the frame, over the bits in the FC, DA, SA, and INFO fields. FDDI uses an 8-symbol (32-bit) CRC
(Cyclic Redundancy Check) to detect any bit errors in the frame.
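For illustration, the 32-bit CRC can be computed with a stock CRC-32 routine: FDDI uses the same generator polynomial as Ethernet, though the bit ordering of the real symbol stream is glossed over here, so this is a sketch of the computation rather than a wire-accurate FCS.

```python
import zlib

def fddi_fcs(fc, da, sa, info):
    """Illustrative FCS: a CRC-32 over the FC, DA, SA, and INFO fields.

    fc is the one-byte Frame Control value; da and sa are 6-byte addresses;
    info is the payload. Real FDDI symbol/bit ordering is not modeled.
    """
    covered = bytes([fc]) + da + sa + info
    return zlib.crc32(covered) & 0xFFFFFFFF

# Sanity check: the well-known CRC-32 "check" value for b"123456789":
print(hex(zlib.crc32(b"123456789")))  # -> 0xcbf43926
```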
End Delimiter
As the name suggests, the End Delimiter denotes the end of the frame. It consists of a 'T' symbol, which indicates
that the frame is complete. A data sequence that does not end with a 'T' symbol is not treated as a complete
frame.
Frame Status
Frame Status (FS) contains three indicators that report the condition of the frame. Each indicator can have two
values, Set ('S') or Reset ('R'); a corrupted indicator is neither 'S' nor 'R'. All indicators are initially transmitted as
'R'. The three indicators are as follows. Error (E): set if a station detects an error in the frame (a CRC failure or
other cause); a frame with its E indicator set is discarded by the first station that encounters it. Acknowledge (A):
sometimes called 'address recognized'; set whenever a frame is properly received, meaning the frame has reached
its destination address. Copy (C): set whenever a station is able to copy the received frame into its buffers; thus
the Copy and Acknowledge indicators are usually set together. Sometimes, however, a station receiving too many
frames cannot copy all of them; in that case it repeats the frame with the 'A' indicator set and the 'C' indicator
left reset, telling the sender that the frame arrived but was not copied.
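The indicator logic above can be summarized in two helper predicates. This is a simplified reading of the rules; real stacks leave the retry decision to upper-layer protocols.

```python
def frame_delivered(e, a, c):
    """A returning frame counts as delivered when no error was flagged,
    the address was recognized, and the frame was copied.

    e, a, c: True if the indicator came back Set ('S'), False if Reset ('R').
    """
    return (not e) and a and c

def receiver_congested(e, a, c):
    """'A' set with 'C' reset is the congestion cue: the destination saw its
    address but had no buffer space to copy the frame."""
    return a and not c

print(frame_delivered(False, True, True))    # -> True
print(receiver_congested(False, True, False))  # -> True
```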
ALOHA PROTOCOL
Slotted Aloha
Pure Aloha
Aloha, also called the Aloha method, refers to a simple communications scheme in which each source (transmitter)
in a network sends data whenever there is a frame to send. If the frame successfully reaches the destination
(receiver), the next frame is sent. If the frame fails to be received at the destination, it is sent again. This
protocol was originally developed at the University of Hawaii for use with satellite communication systems in the
Pacific.
In a wireless broadcast system or a half-duplex two-way link, Aloha works perfectly. But as networks become more
complex, for example in an Ethernet system involving multiple sources and destinations in which data travels many
paths at once, trouble occurs because data frames collide (conflict). The heavier the communications volume, the
worse the collision problems become. The result is degradation of system efficiency, because when two frames
collide, the data contained in both frames is lost.
To minimize the number of collisions, thereby optimizing network efficiency and increasing the number of
subscribers that can use a given network, a scheme called slotted Aloha was developed. This system employs
signals called beacons that are sent at precise intervals and tell each source when the channel is clear to send a
frame. Further improvement can be realized by a more sophisticated protocol called Carrier Sense Multiple Access
(CSMA).
In the 1970s, Norman Abramson and his colleagues at the University of Hawaii devised a new and elegant method
to solve the channel allocation problem. Many researchers have extended their work since then. Although
Abramson's work, called the Aloha System, used ground-based radio broadcasting, the basic idea is applicable to
any system in which uncoordinated users are competing for the use of a single shared channel.
The two variants of ALOHA are:
PURE ALOHA
SLOTTED ALOHA
Throughput S is the number of packets successfully (without collision) transmitted per unit time. For pure ALOHA,
with offered load G,
S = G * e^(-2G)
while for slotted ALOHA
S = G * e^(-G)
[Figure: Aloha throughput versus offered load]
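The two throughput expressions can be checked numerically. A small sketch (Python; function names are illustrative) evaluates each at its optimal offered load:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """Throughput S of pure ALOHA at offered load G: S = G * e^(-2G)."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    """Throughput S of slotted ALOHA at offered load G: S = G * e^(-G)."""
    return G * math.exp(-G)

# Pure ALOHA peaks at G = 0.5 (S = 1/2e ~ 0.184);
# slotted ALOHA peaks at G = 1.0 (S = 1/e ~ 0.368).
print(round(pure_aloha_throughput(0.5), 3))     # 0.184
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368
```

Slotting the channel doubles the peak throughput because a frame can only collide with frames sent in the same slot, not with frames overlapping from either side.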
Comparative analysis - TCP - UDP
TCP
Abbreviation of Transmission Control Protocol, pronounced as separate letters. TCP is one of the main
protocols in TCP/IP networks. Whereas the IP protocol deals only with packets, TCP enables two hosts to establish
a connection and exchange streams of data. TCP guarantees delivery of data and also guarantees that packets will
be delivered in the same order in which they were sent.
TCP stands for Transmission Control Protocol. It is described in STD-7/RFC-793. TCP is a connection-oriented
protocol that is responsible for reliable communication between two end processes. The unit of data transferred is
a continuous stream of bytes.
Being connection-oriented means that before actually transmitting data, you must open the connection between
the two end points. The data can be transferred in full duplex (send and receive on a single connection). When the
transfer is done, you have to close the connection to free system resources. Both ends know when the session is
opened (begin) and is closed (end). The data transfer cannot take place before both ends have agreed upon the
connection. The connection can be closed by either side; the other is notified. Provision is made to close
the connection gracefully, so that no data in transit is lost.
Being stream oriented means that the data is an anonymous sequence of bytes. There is nothing to make data
boundaries apparent. The receiver has no means of knowing how the data was actually transmitted. The sender
can send many small data chunks and the receiver receive only one big chunk, or the sender can send a big chunk,
the receiver receiving it in a number of smaller chunks. The only thing that is guaranteed is that all data sent will
be received without any error and in the correct order. Should any error occur, it will automatically be corrected
by retransmission, transparently to the application.
At the program level, the TCP stream looks like a flat file. When you write data to a flat file and read it back
later, you cannot tell whether the data was written in one chunk or in several chunks.
Unless you write something special to identify record boundaries, there is nothing you can do to learn it afterward.
You can, for example, use CR or CR LF to delimit your records just like a flat text file.
At the programming level, TWSocket is fairly simple to use. To send data, you just need to call the Send method
(or any variation such as SendStr) to give the data to be transmitted. TWSocket will put it in a buffer until it can
be actually transmitted. Eventually the data will be sent in the background (the Send method returns immediately
without waiting for the data to be transmitted) and the OnDataSent event will be generated once the buffer is
emptied.
To receive data, a program must wait until it receives the OnDataAvailable event. This event is triggered each
time a data packet comes from the lower level. The application must call the Receive method to actually get the
data from the low-level buffers. You have to Receive all the data available, or your program will enter an endless
loop, because TWSocket will trigger OnDataAvailable again if you didn't Receive all the data.
As the data is a stream of bytes, your application must be prepared to receive data as sent from the sender,
fragmented in several chunks or merged in bigger chunks. For example, if the sender sent "Hello " and then
"World!", it is possible to get only one OnDataAvailable event and receive "Hello World!" in one chunk, or to get
two events, one for "Hello " and the other for "World!". You can even receive several smaller chunks such as "Hel",
"lo wo" and "rld!". What happens depends on traffic load, router algorithms, random errors and many other factors.
On the subject of client/server applications, most applications need to know command boundaries before being
able to process data. As data boundaries are not always preserved, you cannot suppose your server will receive a
single complete command in one OnDataAvailable event. You may receive only part of a request, or two or
more requests merged in one chunk. To overcome this difficulty, you must use delimiters.
Most TCP/IP protocols, like SMTP, POP3, FTP and others, use CR/LF pair as command delimiter. Each client
request is sent as is with a CR/LF pair appended. The server receives the data as it arrives, assembles it in a
receive buffer, scans for CR/LF pairs to extract commands from the received stream, and removes them from the
receive buffer.
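The buffer-and-scan technique can be sketched in a few lines (Python; the function name is illustrative, not part of any protocol library). Incoming data is appended to a buffer, and every complete CR/LF-terminated command is split out, leaving any partial command behind:

```python
def extract_commands(buffer: bytes, data: bytes):
    """Append newly received data to the buffer, split out every complete
    CR/LF-terminated command, and return (commands, remaining_buffer)."""
    buffer += data
    commands = []
    while b"\r\n" in buffer:
        line, buffer = buffer.split(b"\r\n", 1)
        commands.append(line)
    return commands, buffer

buf = b""
# First chunk carries one full command plus the start of a second one.
cmds, buf = extract_commands(buf, b"USER alice\r\nPA")
# The next chunk completes the second command.
cmds2, buf = extract_commands(buf, b"SS secret\r\n")
print(cmds, cmds2)   # [b'USER alice'] [b'PASS secret']
```

This is exactly why an SMTP or POP3 server works regardless of how the stream was fragmented in transit: the delimiter, not the chunking, defines the command boundary.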
UDP
Short for User Datagram Protocol, a connectionless protocol that, like TCP, runs on top of IP networks. Unlike
TCP/IP, UDP/IP provides very few error recovery services, offering instead a direct way to send and receive
datagrams over an IP network. It's used primarily for broadcasting messages over a network.
UDP stands for User Datagram Protocol. It is described in STD-6/RFC-768 and provides a connectionless host-to-
host communication path. UDP has minimal overhead: each packet on the network is composed of a small header
plus the user data.
UDP preserves datagram boundaries between the sender and the receiver. It means that the receiver socket will
receive an OnDataAvailable event for each datagram sent and the Receive method will return a complete
datagram for each call. If the buffer is too small, the datagram will be truncated. If the buffer is too large, only
one datagram is returned; the remaining buffer space is left untouched.
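This boundary-preserving behaviour is easy to observe. In the sketch below (Python; a socket sends two datagrams to itself over the loopback interface), each recvfrom() returns exactly one datagram, so "Hello " and "World!" never merge the way they can in a TCP stream:

```python
import socket

# Create a UDP socket and let the OS pick a free loopback port (port 0).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))
addr = sock.getsockname()

# Send two separate datagrams to ourselves.
sock.sendto(b"Hello ", addr)
sock.sendto(b"World!", addr)

# Each receive call returns exactly one datagram: boundaries are kept.
first, _ = sock.recvfrom(4096)
second, _ = sock.recvfrom(4096)
sock.close()
print(first, second)
```

On a TCP socket the same two sends could arrive as one chunk, two chunks, or three; with UDP the datagram is the unit of delivery.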
UDP is connectionless. It means that a datagram can be sent at any moment without prior advertising, negotiation
or preparation. Just send the datagram and hope the receiver is able to handle it.
UDP is an unreliable protocol. There is absolutely no guarantee that the datagram will be delivered to the
destination host. But to be honest, the failure rate is very low on the Internet and nearly null on a LAN unless the
bandwidth is full.
Not only can a datagram go undelivered, it can also be delivered out of order: you can receive one packet before
another, even if the second was sent before the first you just received. You can also receive the same datagram twice.
Your application must be prepared to handle all those situations: missing datagrams, duplicate datagrams, or
datagrams in the incorrect order. You must program error detection and correction yourself. For example, if you need
reliable delivery, you must add sequence numbers and acknowledgements at the application level.
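A minimal sketch of one such application-level mechanism (Python; the helper name is illustrative): tagging each datagram with a sequence number lets the receiver drop duplicates and restore order, though detecting outright loss would additionally need acknowledgements and timeouts:

```python
def deliver_in_order(datagrams):
    """Given (seq, payload) pairs that may arrive duplicated or out of
    order, return the payloads in sequence order, duplicates dropped."""
    seen = {}
    for seq, payload in datagrams:
        seen.setdefault(seq, payload)   # first copy wins; duplicates ignored
    return [seen[s] for s in sorted(seen)]

# Arrival order: #2, #1, #2 again (duplicate), #3.
arrived = [(2, b"wo"), (1, b"hello "), (2, b"wo"), (3, b"rld")]
print(deliver_in_order(arrived))        # [b'hello ', b'wo', b'rld']
```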
The main advantages for UDP are that datagram boundaries are respected, you can broadcast, and it is fast.
The main disadvantage is unreliability, and therefore the added complexity of programming around it at the application level.
ADDRESSING
TCP and UDP use the same addressing scheme: an IP address (a 32-bit number, always written as four 8-bit values
expressed as unsigned decimal numbers separated by dots, such as 193.174.25.26) and a port number (a 16-bit number).
The IP address is used by the low-level protocol (IP) to route the datagram to the correct host on the specified
network. Then the port number is used to route the datagram to the correct host process (a program on the host).
For a given protocol (TCP or UDP), only a single host process at a time can be bound to a given port to receive the data sent to it.
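The one-process-per-port rule can be demonstrated directly (Python sketch): once one socket owns an (address, port) endpoint for a protocol, a second bind to the same endpoint is refused by the OS:

```python
import socket

# Bind one UDP socket to an OS-chosen loopback port.
s1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s1.bind(("127.0.0.1", 0))
endpoint = s1.getsockname()          # e.g. ('127.0.0.1', 54321)

# A second socket cannot bind the same (address, port) for the same protocol.
s2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    s2.bind(endpoint)
    in_use = False
except OSError:
    in_use = True                    # bind refused: port already taken
print(in_use)                        # True
s1.close(); s2.close()
```

This is precisely how the port number routes a datagram "to the correct host process": the binding is exclusive, so the kernel always knows which process to hand the data to.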
i think tcp is an overused protocol; i think udp is an underused protocol. this is an argument i've been having quite
a bit with people lately, so i've decided i'll lay out my reasoning here so i don't have to type or recite it at people
since i first wrote this (jan 1998 or so), tcp extensions that fix many of my complaints (selective acknowledgment,
large windows, tcp for transactions) have seen more widespread implementation. while they are a step in the right
direction (well, t/tcp was a terrible security blunder...), tcp will always be a stream protocol, and thus will never
preserve the boundaries of the records you send.
advantages of tcp
the operating system does all the work. you just sit back and watch the show. no need to have the same
bugs in your code that everyone else did on their first try; it's all been figured out for you.
since it's in the os, handling incoming packets has fewer context switches from kernel to user space and
back; all the reassembly, acking, flow control, etc is done by the kernel.
tcp guarantees three things: that your data gets there, that it gets there in order, and that it gets there
without duplication. (the truth, the whole truth, and nothing but the truth...)
routers may notice tcp packets and treat them specially. they can buffer and retransmit them, and in
general handle them more carefully than udp.
disadvantages of tcp
the operating system may be buggy, and you can't escape it. it may be inefficient, and you have to put up
with it. it may be optimized for conditions other than the ones you are facing, and you may not be able to
retune it.
tcp makes it very difficult to try harder; you can set a few socket options, but beyond that you have to
live with whatever the stack gives you.
tcp may have lots of features you don't need. it may waste bandwidth, time, or effort on ensuring things
you don't care about.
routers on the internet today are short on memory. they can't pay much attention to tcp flying by, and can't
try to do anything clever with it.
tcp cannot conclude a transmission without all data in motion being explicitly acked.
disadvantages of udp
there are no guarantees with udp. a packet may not be delivered, or delivered twice, or delivered out of
order; you get no indication of this unless the listening program at the other end decides to say
something. tcp is really working in the same environment; you get roughly the same services from ip and
udp. however, tcp makes up for it fairly well, and in a standardized manner.
routers are quite careless with udp. they never retransmit it if it collides, and it seems to be the first
thing dropped when a router is short on memory. udp suffers from worse packet loss than tcp.
advantages of udp
it doesn't restrict you to a connection-based communication model, so startup latency in distributed
applications is much lower.
all flow control, acking, transaction logging, etc is up to user programs; a broken os implementation is not
going to get in your way. additionally, you only need to implement and use the features you need.
the recipient of udp packets gets them unmangled, including block boundaries.
startup latency is significant. it takes at least twice rtt to start getting data back.
tcp allows a window of at most 64k, and the acking mechanism means that packet loss is misdetected. tcp
stalls easily under packet loss. tcp is more throttled by rtt than bandwidth.
tcp transfer servers have to maintain a separate socket (and often separate thread) for each client.
load balancing is crude and approximate. especially on local networks that allow collisions, two
simultaneous tcp transfers have a tendency to fight with each other, even if the sender is the same.
flow control is up to user space; windows can be infinite, artificial stalls nonexistent, latency well
tolerated, and maximum speeds enforced only by real network bandwidth, with actual speeds chosen by
the application.
receiving an image simultaneously from multiple hosts is much easier with udp, as is sending one to
multiple hosts, especially if they happen to be part of the same broadcast or multicast group.
a single sending host with multiple transfers proceeding can balance them with excellent precision.
--------------------------------------------------------------------------------
The Internet runs on a hierarchical protocol stack; a simplified version of this is shown in figure 1. The layer
common to all Internet applications is the IP (Internet Protocol) layer. This layer provides a connectionless,
unreliable, packet-based delivery service. It is connectionless because each packet is treated
independently of all others. The service is unreliable because there is no guarantee of delivery: packets may be
silently dropped, duplicated or delayed, and may arrive out of order. The service is also called a best-effort
service, because all attempts to deliver a packet will be made, with failures caused only by hardware faults or
exhausted resources.
As there is no sense of a connection at the IP level there are no simple methods to provide a quality of service
(QoS). QoS is a request from an application to the network to provide a guarantee on the quality of a connection.
This allows an application to request a fixed amount of bandwidth from the network, and assume it will be
provided, once the QoS request has been accepted. Also a fixed delay, i.e. no jitter and in order delivery can be
assumed. A network that supports QoS will be protected from congestion problems, as the network will refuse
connections that request larger resources than can be supplied. An example of a network that supports QoS is the
current telephone network, where every call is guaranteed the bandwidth for the call. Most users at some point
have heard the overloaded signal where the network cannot provide the requested resource required to make a
call.
The application decides which transport protocol is used. The two protocols shown here, TCP and UDP are the
most commonly used ones. TCP provides a reliable connection and is used by the majority of current Internet
applications. TCP, besides being responsible for error checking and correcting, is also responsible for controlling
the speed at which this data is sent. TCP is capable of detecting congestion in the network and will back off
transmission speed when congestion occurs. These features protect the network from congestion collapse.
As discussed in the introduction, VoIP is a real-time service. For real-time properties to be guaranteed to be met,
a network with QoS must be used to provide fixed delay and bandwidth. It has already been said that IP cannot
provide this. This then presents a choice: if IP is a requirement, which transport layer should be used to provide
the best possible service for real-time data?
As TCP provides features such as congestion control, it would be the preferred protocol to use. Unfortunately due
to the fact that TCP is a reliable service, delays are introduced whenever a bit error or packet loss occurs. This
delay is caused by retransmission of the broken packet, along with any successive packets that may have already
arrived but cannot be delivered until the retransmission fills the gap.
TCP uses a combination of four algorithms to provide congestion control: slow start, congestion avoidance, fast
retransmit, and fast recovery. These algorithms all use packet loss as an indication of congestion, and all alter the
number of packets TCP will send before waiting for acknowledgments of those packets. These alterations affect
the bandwidth available and also change delays seen on a link, providing another source of jitter.
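The window behaviour behind those algorithms can be illustrated with a simplified model (Python; this ignores fast retransmit and fast recovery and just shows the two growth regimes). The congestion window doubles each round-trip during slow start, then grows by one segment per round-trip once it reaches the slow-start threshold:

```python
def cwnd_trace(rounds: int, ssthresh: int = 8):
    """Congestion window per RTT: exponential growth during slow start,
    linear growth (one segment per RTT) during congestion avoidance."""
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2        # slow start: double every RTT
        else:
            cwnd += 1        # congestion avoidance: additive increase
    return trace

print(cwnd_trace(7))   # [1, 2, 4, 8, 9, 10, 11]
```

Because the sending rate swings with this window, the delay seen by each packet swings too, which is the extra source of jitter the text refers to.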
Combined, these effects raise jitter to an unacceptable level, rendering TCP unusable for real-time services. Voice
communication has the advantage of not requiring a completely reliable transport: the loss of a packet or a bit
error will often only introduce a click or a minor break into the output.
For these reasons most VoIP applications use UDP for the voice data transmission. UDP is a thin layer on top of IP
that provides a way to distinguish among multiple programs running on a single machine. UDP also inherits all of
the properties of IP that TCP attempts to hide. UDP is therefore also a packet based, connectionless, best-effort
service. It is up to the application to split data into packets, and provide any necessary error checking that is
required.
Because of this, UDP allows the fastest and simplest way of transmitting data to the receiver: there is no
interference in the stream of data that can possibly be avoided. This lets an application get as close to the raw
capacity of the network as possible.
UDP however provides no congestion control systems. A congested link that is only running TCP will be
approximately fair to all users. When UDP data is introduced into this link, there is no requirement for the UDP
data rates to back off, forcing the remaining TCP connections to back off even further. This can be thought of as
the UDP data not being a ``good citizen''. The aim of this project is to characterise the extent of this drop-off in TCP
performance.
TCP                              UDP
Connection-oriented (stateful)   Connectionless (stateless)

Typical socket call sequences:

TCP server:  socket() -> bind() -> listen() -> accept() -> recv()/send()
TCP client:  socket() -> connect() -> send()/recv()
UDP server:  socket() -> bind() -> recvfrom()/sendto()
UDP client:  socket() -> sendto()/recvfrom()
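The TCP call sequence can be exercised end to end on the loopback interface. The sketch below (Python; a one-shot server runs in a background thread) walks through socket() -> bind() -> listen() -> accept() on the server side and socket() -> connect() -> send() on the client side:

```python
import socket
import threading

# Server side: socket() -> bind() -> listen()
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: OS picks a free port
server.listen(1)
addr = server.getsockname()

received = []

def serve():
    # accept() blocks until a client connects, then recv() reads its data
    conn, _ = server.accept()
    received.append(conn.recv(1024))
    conn.close()

t = threading.Thread(target=serve)
t.start()

# Client side: socket() -> connect() -> send()
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)
client.sendall(b"ping")
client.close()

t.join()
server.close()
print(received[0])
```

The UDP sequence is shorter precisely because there is no listen/accept/connect handshake: a bound socket can receive a sendto() from anyone at any time.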
When an application process wishes to setup a connection to a remote application process, it must specify which
one to connect to. The method normally used is to define transport addresses to which processes can listen for
connection requests. In the internet, these end points are (IP address, port) pairs. In ATM networks, they are
AAL-SAPs. We will use the neutral term TSAP (Transport Service Access Point). The analogous end points in the
network layer (i.e., network layer addresses) are then called NSAPs. IP addresses are examples of NSAPs.
The format of a transport protocol data unit (TPDU) is shown below. Each TPDU consists of four general fields:
Length : The length field occupies the first byte and indicates the total number of bytes (excluding the length
field itself) in the TPDU.
Fixed Parameters : The fixed parameters field contains parameters, or control fields, that are commonly
present in all transport layer packets. It consists of five parts: code, source reference, destination reference,
sequence number, and credit allocation.
Code : The code identifies the type of the data unit; for example, CR for connection request or DT for data.
CR : Connection request
CC : Connection confirm
DR : Disconnect request
DC : Disconnect Confirm
DT : Data
ED : Expedited data
RJ : Reject
ER : Error
Source and destination reference : The source and destination reference fields contain the addresses
of the original sender and the ultimate destination of the packet.
Sequence Number : As a transmission is divided into smaller packets for transport, each segment is given a
number that identifies its place in the sequence. Sequence numbers are used for acknowledgement, flow control,
and error control.
Credit Allocation : Credit allocation enables a receiving station to tell the sender how many more data units
may be sent before the sender must wait for an acknowledgement.
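A toy model of credit allocation (Python; names are illustrative): the sender may transmit at most `credit` queued data units, then must stop and wait for the receiver to grant more credit via an acknowledgement:

```python
def send_with_credit(pending, credit):
    """Transmit at most `credit` of the `pending` data units; the rest
    must wait until the receiver grants a new credit allocation."""
    sent = pending[:credit]
    remaining = pending[credit:]
    return sent, remaining

queue = ["DT1", "DT2", "DT3", "DT4", "DT5"]
sent, queue = send_with_credit(queue, credit=3)
print(sent)    # ['DT1', 'DT2', 'DT3'] -- sender now stalls until more credit arrives
print(queue)   # ['DT4', 'DT5']
```

This is the receiver-driven flow control described above: the receiver, not the sender, decides how much data may be in flight.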
Variable parameters : This field contains parameters that occur infrequently. These control codes are used
mostly for management.
Data : It may contain regular data or expedited data coming from the upper layers. Expedited data consist of a
high priority message that must be handled out of sequence. An urgent request can supersede the incoming queue
of the receiver and be processed ahead of packets that have been received before it.
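To make the four-field layout concrete, here is a hypothetical on-the-wire packing of the fixed part (Python's struct module). The field widths chosen here, the one-byte length and code, two-byte references and sequence number, one-byte credit, are illustrative only, not taken from any specific standard:

```python
import struct

# Hypothetical fixed-part layout (network byte order):
#   length (1 byte, excludes itself), code (1 byte),
#   source reference (2 bytes), destination reference (2 bytes),
#   sequence number (2 bytes), credit (1 byte).
FIXED = struct.Struct("!BBHHHB")

def pack_tpdu(code, src_ref, dst_ref, seq, credit, data=b""):
    # The length field counts every byte of the TPDU except itself.
    length = FIXED.size - 1 + len(data)
    return FIXED.pack(length, code, src_ref, dst_ref, seq, credit) + data

DT = 0xF0   # illustrative numeric code for a "data" TPDU
tpdu = pack_tpdu(DT, 0x0001, 0x0002, seq=7, credit=4, data=b"hello")
print(len(tpdu), tpdu[0])   # 14 13  (14 bytes total; length field says 13)
```

Packing it out like this makes the "length excludes the length field itself" rule from above easy to verify byte by byte.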
Another way of looking at the transport layer is to regard its primary function as enhancing the QoS (Quality of
Service) provided by the network layer. If the network service is impeccable, the transport layer has an easy job.
If, however, the network service is poor, the transport layer has to bridge the gap between what the transport
users want and what the network layer provides.
The transport service may allow the user to specify preferred, acceptable, and minimum values for various service
parameters.
The connection establishment delay is the amount of time elapsing between a transport connection being
requested and the confirmation being received by the user of the transport service. It includes the processing
delay in the remote transport entity. As with all parameters measuring a delay, the shorter the delay, the better
the service.
The connection establishment failure probability is the chance of a connection not being established with in the
maximum establishment delay time, for example, due to network congestion, lack of table space somewhere, or
other internal problems.
The Throughput parameter measures the number of bytes of user data transferred per second, measured over
some time interval. The throughput is measured separately for each direction.
The Transit delay measures the time between a message being sent by the transport user on the source machine
and its being received by the transport user on the destination machine. As with throughput, each direction is
handled separately.
The Residual error ratio measures the number of lost or garbled messages as a fraction of the total sent. In theory,
the residual error rate should be zero, since it is the job of the transport layer to hide all network layer errors. In
practice, it may have some small finite value.
The Protection parameter provides a way for the transport user to specify interest in having the transport layer
provide protection against unauthorized third parties (wire tapers) reading or modifying the transmitted data.
The priority parameter provides a way for a transport user to indicate that some of its connections are more
important than others, so that in the event of congestion the high-priority connections get serviced first.
Finally, the Resilience parameter gives the probability of the transport layer itself spontaneously terminating a
connection due to internal problems or congestion.
The QoS parameters are specified by the transport user when a connection is requested. Both the desired and
minimum acceptable values can be given. In some cases, upon seeing the QoS parameters, the transport layer may
immediately realize that some of them are unachievable.
IPv6
IPv6 is short for "Internet Protocol Version 6". IPv6 is the "next generation" protocol designed by the IETF (the
Internet Engineering Task Force) to replace the current version of the Internet Protocol, IP Version 4 ("IPv4"). The
IPv6 specifications are in RFC 2460.
Most of today's internet uses IPv4, which is now nearly twenty years old. IPv4 has been remarkably resilient in spite
of its age, but it is beginning to have problems. Most importantly, there is a growing shortage of IPv4 addresses,
which are needed by all new machines added to the Internet.
IPv6 fixes a number of problems in IPv4, such as the limited number of available IPv4 addresses. It also adds many
improvements to IPv4 in areas such as routing and network autoconfiguration. IPv6 is expected to gradually
replace IPv4, with the two coexisting for a number of years during a transition period.
Contents
1. Introduction
2. Key Issues
3. History of the IPng Effort
4. IPng Overview
5. IPng Header Format
6. IPng Extensions
7. IPng Addressing
8. IPng Routing
9. IPng Quality-of-Service Capabilities
10. IPng Security
11. IPng Transition Mechanisms
12. Why IPng?
1. Introduction
This paper presents an overview of the Next Generation Internet Protocol (IPng). IPng was recommended by the
IPng Area Directors of the Internet Engineering Task Force at the Toronto IETF meeting on July 25, 1994, and
documented in RFC 1752, "The Recommendation for the IP Next Generation Protocol" [1]. The recommendation
was approved by the Internet Engineering Steering Group on November 17, 1994 and made a Proposed Standard.
The formal name of this protocol is IPv6 (where the "6" refers to it being assigned version number 6). The current
version of the Internet Protocol is version 4 (referred to as IPv4). This overview is intended to give the reader an
overview of the IPng protocol. For more detailed information the reader should consult the documents listed in the
reference section.
IPng is a new version of IP which is designed to be an evolutionary step from IPv4. It is a natural increment to IPv4.
It can be installed as a normal software upgrade in internet devices and is interoperable with the current IPv4. Its
deployment strategy was designed to not have any "flag" days. IPng is designed to run well on high performance
networks (e.g., ATM) and at the same time is still efficient for low bandwidth networks (e.g., wireless). In
addition, it provides a platform for new internet functionality that will be required in the near future.
This paper describes the work of the IETF IPng working group. Several individuals deserve specific recognition. These
include Paul Francis, Bob Gilligan, Dave Crocker, Ran Atkinson, Jim Bound, Ross Callon, Bill Fink, Ramesh
Govindan, Christian Huitema, Erik Nordmark, Tony Li, Dave Katz, Yakov Rekhter, Bill Simpson, and Sue Thompson.
2. Key Issues
There are a number of key issues to consider in the selection of a new protocol. Some are very straightforward. For
example, the new protocol must be able to support large global internetworks. Others are less obvious. There must
be a clear way to transition the current large installed base of IPv4 systems. It doesn't matter how good a new
protocol is if there isn't a practical way to transition the current systems running IPv4 to the new protocol.
2.1 Growth
Growth is the basic issue which caused there to be a need for a next generation IP. If anything is
to be learned from our experience with IPv4, it is that the addressing and routing must be capable of handling
reasonable scenarios of future growth. It is important that we have an understanding of the past growth and where
future growth will come from.
Currently IPv4 serves what could be called the computer market. The computer market has been the driver of the
growth of the Internet. It comprises the current Internet and countless other smaller internets which are not
connected to the Internet. Its focus is to connect computers together in the large business, government, and
university education markets. This market has been growing at an exponential rate. One measure of this is that
the number of networks in the current Internet (40,073 as of 10/4/94) is doubling approximately every 12 months. The
computers which are used at the endpoints of internet communications range from PC's to Supercomputers. Most
are attached to Local Area Networks (LANs) and the vast majority are not mobile.
The next phase of growth will probably not be driven by the computer market. While the computer market will
continue to grow at significant rates due to expansion into other areas such as schools (elementary through high
school) and small businesses, it is doubtful it will continue to grow at an exponential rate. What is likely to happen
is that other kinds of markets will develop. These markets will fall into several areas. They all have the
characteristic that they are extremely large. They also bring with them a new set of requirements which were not
as evident in the early stages of IPv4 deployment. The new markets are also likely to happen in parallel with one
another. It may turn out that we will look back on the last ten years of Internet growth as the time when the
Internet was small and only doubling every year. The challenge for an IPng is to provide a solution which solves
today's problems and matches the requirements of these emerging markets.
Nomadic personal computing devices seem certain to become ubiquitous as their prices drop and their capabilities
increase. A key capability is that they will be networked. Unlike the majority of today's networked computers, they
will support a variety of types of network attachments. When disconnected they will use RF wireless networks,
when used in networked facilities they will use infrared attachment, and when docked they will use physical wires.
This makes them an ideal candidate for internetworking technology as they will need a common protocol which
can work over a variety of physical networks. These types of devices will become consumer devices and will
replace the current generation of cellular phones, pagers, and personal digital assistants. In addition to the
obvious requirement of an internet protocol which can support large scale routing and addressing, they will require
an internet protocol which imposes a low overhead and supports auto configuration and mobility as a basic
element. The nature of nomadic computing requires an internet protocol to have built in authentication and
confidentiality. It also goes without saying that these devices will need to communicate with the current
generation of computers. The requirement for low overhead comes from the wireless media. Unlike LAN's which
will be very high speed, the wireless media will be several orders of magnitude slower due to constraints on
available frequencies, spectrum allocation, error rates, and power consumption.
Another market is networked entertainment. The first signs of this emerging market are the proposals being
discussed for 500 channels of television, video on demand, etc. This is clearly a consumer market. The possibility
is that every television set will become an Internet host. As the world of digital high definition television
approaches, the differences between a computer and a television will diminish. As in the previous market, this
market will require an Internet protocol which supports large scale routing and addressing, and auto configuration.
This market also requires a protocol suite which imposes the minimum overhead to get the job done. Cost will be
a critical factor.
Another market which could use the next generation IP is device control. This consists of the control of everyday
devices such as lighting equipment, heating and cooling equipment, motors, and other types of equipment which
are currently controlled via analog switches and in aggregate consume considerable amounts of electrical power.
The size of this market is enormous and requires solutions which are simple, robust, easy to use, and very low
cost. The potential pay-back is that networked control of devices will result in cost savings which are extremely
large.
The challenge the IETF faced in the selection of an IPng is to pick a protocol which meets today's requirements and
also matches the requirements of these emerging markets. These markets will happen with or without an IETF
IPng. If the IETF IPng is a good match for these new markets it is likely to be used. If not, these markets will
develop something else. They will not wait for an IETF solution. If this should happen it is probable that because of
the size and scale of the new markets the IETF protocol would be supplanted. If the IETF IPng is not appropriate
for use in these markets, it is also probable that they will each develop their own protocols, perhaps proprietary.
These new protocols would not interoperate with each other. The opportunity for the IETF is to select an IPng
which has a reasonable chance of being used in these emerging markets. This would have the very desirable outcome
of creating an immense, interoperable, world-wide information infrastructure created with open protocols.
2.2 Transition
At some point in the next three to seven years the Internet will require a deployed new
version of the Internet protocol. Two factors are driving this: routing and addressing. Global internet routing based
on the 32-bit addresses of IPv4 is becoming increasingly strained. IPv4 addresses do not provide enough flexibility
to construct efficient hierarchies which can be aggregated. The deployment of Classless Inter-Domain Routing [2]
is extending the lifetime of IPv4 routing by a number of years, but the effort to manage the routing will continue to
increase. Even if IPv4 routing can be scaled to support a full IPv4 Internet, the Internet will eventually run out
of network numbers. There is no question that an IPng is needed, but only a question of when.
The challenge for an IPng is for its transition to be complete before IPv4 routing and addressing break. The
transition will be much easier if IPv4 addresses are still globally unique. The two transition requirements which are
the most important are flexibility of deployment and the ability for IPv4 hosts to communicate with IPng hosts.
There will be IPng- only hosts, just as there will be IPv4-only hosts. The capability must exist for IPng-only hosts to
communicate with IPv4-only hosts globally while IPv4 addresses are globally unique.
The deployment strategy for an IPng must be as flexible as possible. The Internet is too large for any kind of
controlled roll out to be successful. The importance of flexibility in an IPng and the need for interoperability
between IPv4 and IPng was well stated in a message to the sipp mailing list by Bill Fink, who is responsible for a
large operational network:
"Being a network manager and thereby representing the interests of a significant number of users, from my
perspective it's safe to say that the transition and interoperation aspects of any IPng is *the* key first element,
without which any other significant advantages won't be able to be integrated into the user's network
environment. I also don't think it wise to think of the transition as just a painful phase we'll have to endure en
route to a pure IPng environment, since the transition/coexistence period undoubtedly will last at least a decade
and may very well continue for the entire lifetime of IPng, until it's replaced with IPngng and a new transition. I
might wish it was otherwise but I fear they are facts of life given the immense installed base.
"Given this situation, and the reality that it won't be feasible to coordinate all the infrastructure changes even at
the national and regional levels, it is imperative that the transition capabilities support the ability to deploy the
IPng in a piecemeal fashion... with no requirement to coordinate local changes with other changes.
"I realize that support for the transition and coexistence capabilities may be a major part of the IPng effort and
may cause some headaches for the designers and developers, but I think it is a duty that can't be shirked and the
necessary price that must be paid to provide as seamless an environment as possible to the end user and his basic network service.
"The bottom line for me is that we must have interoperability during the extended transition period for the base
IPv4 functionality..."
Another way to think about the requirement for compatibility with IPv4 is to look at other product areas. In the
product world, backwards compatibility is very important. Vendors who do not provide backward compatibility for
their customers usually find they do not have many customers left. For example, chip makers put considerable
effort into making sure that new versions of their processor always run all of the software that ran on the previous
model. It is unlikely that Intel would develop a new processor in the X86 family that did not run DOS and the tens of thousands of applications that run on it.
Operating system vendors go to great lengths to make sure new versions of their operating systems are binary
compatible with their old versions. For example, the labels on most PC or Mac software indicate that they
require OS version XX or greater. It would be foolish for Microsoft to come out with a new version of Windows which
did not run the applications which ran on the previous version. Microsoft even provides the ability for Windows
applications to run on its new operating system, Windows NT, because it understood that it was very important
to make sure that the applications which run on Windows also run on NT.
The same requirement is also true for IPng. The Internet has a large installed base. Features need to be designed
into an IPng to make the transition as easy as possible. As with processors and operating systems, it must be
backwards compatible with IPv4. Other protocols have tried to replace TCP/IP, for example XTP and OSI. One
element in their failure to reach widespread acceptance was that neither had any transition strategy other than
running in parallel (sometimes called dual stack). New features alone are not adequate to motivate users to deploy
new protocols. IPng must have a great transition strategy and new features.
The IETF effort to develop an IPng represents over three years of work focused on this topic. A brief history follows:
By the Winter of 1992 the Internet community had developed four separate proposals for IPng. These were "CNAT",
"IP Encaps", "Nimrod", and "Simple CLNP". By December 1992 three more proposals followed; "The P Internet
Protocol" (PIP), "The Simple Internet Protocol" (SIP) and "TP/IX". In the Spring of 1992 the "Simple CLNP" evolved
into "TCP and UDP with Bigger Addresses" (TUBA) and "IP Encaps" evolved into "IP Address Encapsulation" (IPAE).
By the fall of 1993, IPAE merged with SIP while still maintaining the name SIP. This group later merged with PIP
and the resulting working group called themselves "Simple Internet Protocol Plus" (SIPP). At about the same time
the TP/IX Working Group changed its name to "Common Architecture for the Internet" (CATNIP).
The IPng area directors made a recommendation for an IPng in July of 1994. This recommendation, from [1], included the following elements:
- That the "Simple Internet Protocol Plus (SIPP) Spec. (128 bit ver)" [3] be adopted as the basis for IPng.
- That an IPng Working Group be formed, chaired by Steve Deering and Ross Callon.
- That an Address Autoconfiguration Working Group be formed, chaired by Dave Katz and Sue Thomson.
- That an IPng Transition Working Group be formed, chaired by Bob Gilligan and TBA.
- That recommendations about the use of non-IPv6 addresses in IPv6 environments and IPv6 addresses in non-IPv6 environments be developed.
- That the IESG commission a review of all IETF standards documents for IPng implications.
- That the IESG task current IETF working groups to take IPng into account.
- That the IESG charter new working groups where needed to revise old standards documents.
- That the IPng Area and Area Directorate continue until the main documents are offered as Proposed Standards in late 1994.
IPng was designed to take an evolutionary step from IPv4. It was not a design goal to take a radical step away from
IPv4. Functions which work in IPv4 were kept in IPng; functions which didn't work were removed. The changes from IPv4 to IPng fall primarily into the following categories:
Expanded Routing and Addressing Capabilities
IPng increases the IP address size from 32 bits to 128 bits, to support more levels of addressing hierarchy,
a much greater number of addressable nodes, and simpler auto-configuration of addresses.
The scalability of multicast routing is improved by adding a "scope" field to multicast addresses.
A new type of address called an "anycast address" is defined, to identify sets of nodes where a packet sent
to an anycast address is delivered to one of the nodes. The use of anycast addresses in the IPng source
route allows nodes to control the path over which their traffic flows.
Header Format Simplification
Some IPv4 header fields have been dropped or made optional, to reduce the common-case processing cost
of packet handling and to keep the bandwidth cost of the IPng header as low as possible despite the
increased size of the addresses. Even though the IPng addresses are four times longer than the IPv4
addresses, the IPng header is only twice the size of the IPv4 header.
Improved Support for Options
Changes in the way IP header options are encoded allow for more efficient forwarding, less stringent
limits on the length of options, and greater flexibility for introducing new options in the future.
Quality-of-Service Capabilities
A new capability is added to enable the labeling of packets belonging to particular traffic "flows" for
which the sender requests special handling, such as non-default quality of service or "real-time" service.
Authentication and Privacy Capabilities
IPng includes the definition of extensions which provide support for authentication, data integrity, and
confidentiality. This is included as a basic element of IPng and will be included in all implementations.
The IPng protocol consists of two parts, the basic IPng header and IPng extension headers.
(Figure: IPng header layout. Fields in order: Ver, Prio, Flow Label, Payload Length, Next Hdr, Hop Limit, Source Address, Destination Address.)
Ver
4-bit Internet Protocol version number = 6.
Prio
4-bit Priority value. See IPng Priority section.
Flow Label
24-bit field. See IPng Quality of Service section.
Payload Length
16-bit unsigned integer. Length of payload, i.e., the rest of the packet following the IPng header, in octets.
Next Hdr
8-bit selector. Identifies the type of header immediately following the IPng header. Uses the same values as the IPv4 Protocol field.
Hop Limit
8-bit unsigned integer. Decremented by 1 by each node that forwards the packet. The packet is discarded if Hop Limit is decremented to zero.
Source Address
128 bits. The address of the initial sender of the packet. See [7] for details.
Destination Address
128 bits. The address of the intended recipient of the packet (possibly not the ultimate recipient, if an optional Routing Header is present).
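The field layout above can be sketched as a small pack/parse routine. This is an illustrative sketch, not code from any specification: it follows the early IPng header described here (4-bit Prio, 24-bit Flow Label, which differ from the final IPv6 header), and the function names are invented for the example.

```python
import struct

def pack_ipng_header(prio, flow_label, payload_len, next_hdr, hop_limit, src, dst):
    """Pack a 40-octet IPng base header: Ver(4) Prio(4) Flow Label(24),
    Payload Length(16), Next Hdr(8), Hop Limit(8), then two 128-bit addresses."""
    first_word = (6 << 28) | ((prio & 0xF) << 24) | (flow_label & 0xFFFFFF)
    return struct.pack("!IHBB16s16s", first_word, payload_len, next_hdr, hop_limit, src, dst)

def parse_ipng_header(data):
    """Unpack the 40-octet base header back into its fields."""
    first_word, payload_len, next_hdr, hop_limit, src, dst = struct.unpack(
        "!IHBB16s16s", data[:40])
    return {
        "ver": first_word >> 28,              # always 6
        "prio": (first_word >> 24) & 0xF,
        "flow_label": first_word & 0xFFFFFF,
        "payload_len": payload_len,
        "next_hdr": next_hdr,
        "hop_limit": hop_limit,
        "src": src,
        "dst": dst,
    }
```

Note that the fixed 40-octet size (versus IPv4's variable 20-60 octets) is what makes the "only twice the size" claim above work despite the four-times-longer addresses.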
TOP
IPng includes an improved option mechanism over IPv4. IPng options are placed in separate extension headers that
are located between the IPng header and the transport-layer header in a packet. Most IPng extension headers are
not examined or processed by any router along a packet's delivery path until it arrives at its final destination. This
facilitates a major improvement in router performance for packets containing options. In IPv4 the presence of any
options requires the router to examine all options.
The other improvement is that, unlike IPv4 options, IPng extension headers can be of arbitrary length and the total
amount of options carried in a packet is not limited to 40 bytes. This feature, plus the manner in which they are
processed, permits IPng options to be used for functions which were not practical in IPv4. A good example of this is
the IPng Authentication and Security Encapsulation options.
In order to improve performance when handling subsequent option headers and the transport protocol which
follows, IPng options are always an integer multiple of 8 octets long, retaining this alignment for subsequent
headers.
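The daisy-chain structure described above can be sketched as a loop that follows each header's "next header" value. This is a hedged sketch: the two-octet prefix (a next-header type, then a length counted in additional 8-octet units) and the type codes 43 (routing), 6 (TCP), and 17 (UDP) follow the eventual IPv6 specification and are assumptions here, not details given in the text above.

```python
def walk_extension_headers(first_next_hdr, payload, transport_types=(6, 17)):
    """Yield (header_type, offset) for each header in the chain, stopping
    once a transport-layer header (e.g., TCP=6, UDP=17) is reached."""
    hdr_type, offset = first_next_hdr, 0
    while hdr_type not in transport_types:
        next_hdr = payload[offset]       # first octet: type of the following header
        ext_len = payload[offset + 1]    # second octet: additional 8-octet units
        yield (hdr_type, offset)
        offset += 8 * (ext_len + 1)      # every extension header is a multiple of 8 octets
        hdr_type = next_hdr
    yield (hdr_type, offset)

# One 16-octet extension header (assumed routing type 43), then TCP:
payload = bytes([6, 1]) + b"\x00" * 14
chain = list(walk_extension_headers(43, payload))  # [(43, 0), (6, 16)]
```

This illustrates why intermediate routers can skip options entirely: they never need to parse past the fixed-size base header unless a header is addressed to them.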
The current specification of IPng defines the following extension headers:
Routing: extended routing (like IPv4 loose source route).
Fragmentation: fragmentation and reassembly.
Authentication: integrity and authentication.
Encapsulation: confidentiality.
Hop-by-Hop Option: special options which require hop-by-hop processing.
End-to-End Option: optional information to be examined by the destination node.
7.0 IPng Addressing
IPng addresses are 128-bits long and are identifiers for individual interfaces and sets of interfaces. IPng Addresses
of all types are assigned to interfaces, not nodes. Since each interface belongs to a single node, any of that node's
interfaces' unicast addresses may be used as an identifier for the node. A single interface may be assigned multiple IPng addresses of any type.
There are three types of IPng addresses. These are unicast, anycast, and multicast. Unicast addresses identify a
single interface. Anycast addresses identify a set of interfaces such that a packet sent to an anycast address will be
delivered to one member of the set. Multicast addresses identify a group of interfaces, such that a packet sent to
a multicast address is delivered to all of the interfaces in the group. There are no broadcast addresses in IPv6; their function is superseded by multicast addresses.
IPng supports addresses which are four times the number of bits as IPv4 addresses (128 vs. 32). This is 4 billion
times 4 billion times 4 billion (2^^96) times the size of the IPv4 address space (2^^32). This works out to be:
340,282,366,920,938,463,463,374,607,431,768,211,456
addresses in total, or about 665,570,793,348,866,943,898,599 addresses per square meter of the surface of the planet Earth (assuming the earth surface is 511,263,971,197,990 square meters).
In more practical terms, the assignment and routing of addresses requires the creation of hierarchies, which reduces
the efficiency of the usage of the address space. Christian Huitema performed an analysis in [8] which evaluated
the efficiency of other addressing architectures (including the French telephone system, USA telephone systems,
the current internet using IPv4, and IEEE 802 nodes). He concluded that 128-bit IPng addresses could accommodate
between 8x10^^17 and 2x10^^33 nodes assuming efficiency in the same ranges as the other addressing
architectures. Even his most pessimistic estimate would provide 1,564 addresses for each square meter of the
surface of the planet Earth. The optimistic estimate would allow for 3,911,873,538,269,506,102 addresses for each square meter of the surface of the planet Earth.
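The raw arithmetic above is easy to check directly. The Earth-surface figure (roughly 5.11x10^14 square meters) is an input assumption of the calculation, not part of the address format:

```python
# Total size of a 128-bit address space.
total = 2 ** 128
assert total == 340_282_366_920_938_463_463_374_607_431_768_211_456

# 2^128 is 2^96 times the 2^32 IPv4 address space.
assert total == (2 ** 96) * (2 ** 32)

# Addresses per square meter of the Earth's surface
# (assumed surface area, as in the text: 511,263,971,197,990 m^2).
per_m2 = total // 511_263_971_197_990
assert 6 * 10 ** 23 < per_m2 < 7 * 10 ** 23   # about 6.7e23 addresses per m^2
```

These raw numbers are the theoretical ceiling; as the text notes, hierarchical assignment consumes most of this space, which is why Huitema's efficiency-adjusted estimates are many orders of magnitude lower.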
The specific type of IPng address is indicated by the leading bits in the address. The variable-length field
comprising these leading bits is called the Format Prefix (FP). The initial allocation of these prefixes is as follows:
(Table: initial Format Prefix allocations, covering Reserved, NSAP, IPX, Provider-Based Unicast, Neutral-Interconnect-Based Unicast, Link-Local-Use, Site-Local-Use, and Multicast prefixes.)
This allocation supports the direct allocation of provider addresses, local use addresses, and multicast addresses.
Space is reserved for NSAP addresses, IPX addresses, and neutral-interconnect addresses. The remainder of the
address space is unassigned for future use. This can be used for expansion of existing use (e.g., additional provider
addresses, etc.) or new uses (e.g., separate locators and identifiers). Note that Anycast addresses are not shown
here because they are allocated out of the unicast address space.
Approximately fifteen percent of the address space is initially allocated. The remaining 85% is reserved for future
use.
7.1 Unicast Addresses There are several forms of unicast address assignment in IPv6. These are the global
provider based unicast address, the neutral-interconnect unicast address, the NSAP address, the IPX hierarchical
address, the site-local-use address, the link-local-use address, and the IPv4-capable host address. Additional address formats may be defined in the future.
7.2 Provider Based Unicast Addresses Provider based unicast addresses are used for global
communication. They are similar in function to IPv4 addresses under CIDR. Provider based unicast addresses have the following format:
+---+-----------+-----------+-------------+---------+----------+
| 3 |REGISTRY ID|PROVIDER ID|SUBSCRIBER ID|SUBNET ID| INTF. ID |
+---+-----------+-----------+-------------+---------+----------+
The first 3 bits identify the address as a provider-oriented unicast address. The next field (REGISTRY ID) identifies
the internet address registry which assigns provider identifiers (PROVIDER ID) to internet service providers, which
then assign portions of the address space to subscribers. This usage is similar to assignment of IP addresses under
CIDR. The SUBSCRIBER ID distinguishes among multiple subscribers attached to the internet service provider
identified by the PROVIDER ID. The SUBNET ID identifies a specific physical link. There can be multiple subnets on
the same physical link. A specific subnet can not span multiple physical links. The INTERFACE ID identifies a single interface on the subnet.
7.3 Local-Use Addresses A local-use address is a unicast address that has only local routability scope (within
the subnet or within a subscriber network), and may have local or global uniqueness scope. They are intended for
use inside of a site for "plug and play" local communication and for bootstrapping up to the use of global addresses
[11].
There are two types of local-use unicast addresses defined. These are Link-Local and Site-Local. The Link-Local-Use
is for use on a single link and the Site-Local-Use is for use in a single site. Link-Local-Use addresses have the
following format:
| 10 bits  |         n bits          |         118-n bits         |
+----------+-------------------------+----------------------------+
|1111111010|            0            |        INTERFACE ID        |
+----------+-------------------------+----------------------------+
Link-Local-Use addresses are designed to be used for addressing on a single link for purposes such as auto-address
configuration.
Site-Local-Use addresses have the following format:
| 10 bits  | n bits  |    m bits     |        118-n-m bits        |
+----------+---------+---------------+----------------------------+
|1111111011|    0    |   SUBNET ID   |        INTERFACE ID        |
+----------+---------+---------------+----------------------------+
For both types of local-use addresses the INTERFACE ID is an identifier which must be unique in the domain in
which it is being used. In most cases these will use a node's IEEE 802 48-bit address. The SUBNET ID identifies a
specific subnet in a site. The combination of the SUBNET ID and the INTERFACE ID to form a local use address
allows a large private internet to be constructed without any other address allocation.
Local-use addresses allow organizations that are not (yet) connected to the global Internet to operate without the
need to request an address prefix from the global Internet address space. Local-use addresses can be used instead.
If the organization later connects to the global Internet, it can use its SUBNET ID and INTERFACE ID in combination
with a global prefix (e.g., REGISTRY ID + PROVIDER ID + SUBSCRIBER ID) to create a global address. This is a
significant improvement over IPv4 which requires sites which use private (non-global) IPv4 address to manually
renumber when they connect to the Internet. IPng does the renumbering automatically.
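The prefix-plus-local-parts combination described above can be sketched in a few lines. Everything specific in this example is hypothetical: the text does not fix the field widths, so a 48-bit global prefix, 16-bit SUBNET ID, and 64-bit INTERFACE ID are assumed purely for illustration, and the prefix value is an arbitrary example.

```python
import ipaddress

# Hypothetical field widths (not fixed by the text above):
# 48-bit global prefix (registry + provider + subscriber),
# 16-bit SUBNET ID, 64-bit INTERFACE ID.
SUBNET_BITS, INTERFACE_BITS = 16, 64

def make_global(prefix48, subnet_id, interface_id):
    """Combine a provider-assigned global prefix with the site's existing
    SUBNET ID and INTERFACE ID, keeping the local parts unchanged."""
    value = ((prefix48 << (SUBNET_BITS + INTERFACE_BITS))
             | (subnet_id << INTERFACE_BITS)
             | interface_id)
    return ipaddress.IPv6Address(value)

# A site that numbered itself locally later plugs its subnet/interface
# IDs under a provider prefix without renumbering hosts:
addr = make_global(0x20010DB80000, 0x0001, 0xAA)
print(addr)  # 2001:db8:0:1::aa
```

The point of the sketch is the one the text makes: only the prefix changes when the site connects, so the locally assigned low-order bits survive intact.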
7.4 IPv6 Addresses with Embedded IPV4 Addresses The IPv6 transition mechanisms include a
technique for hosts and routers to dynamically tunnel IPv6 packets over IPv4 routing infrastructure. IPv6 nodes
that utilize this technique are assigned special IPv6 unicast addresses that carry an IPv4 address in the low-order
32-bits. This type of address is termed an "IPv4-compatible IPv6 address" and has the format:
|                80 bits               | 16 |      32 bits        |
+--------------------------------------+--------------------------+
|0000..............................0000|0000|    IPV4 ADDRESS     |
+--------------------------------------+----+---------------------+
A second type of IPv6 address which holds an embedded IPv4 address is also defined. This address is used to
represent the addresses of IPv4-only nodes (those that *do not* support IPv6) as IPv6 addresses. This type of
address is termed an "IPv4-mapped IPv6 address" and has the format:
|                80 bits               | 16 |      32 bits        |
+--------------------------------------+--------------------------+
|0000..............................0000|FFFF|    IPV4 ADDRESS     |
+--------------------------------------+----+---------------------+
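Both embedded forms can be demonstrated with Python's standard `ipaddress` module, which post-dates this text but implements exactly these two layouts:

```python
import ipaddress

# "IPv4-compatible" form: 96 zero bits followed by the IPv4 address,
# used by nodes that tunnel IPv6 over the IPv4 infrastructure.
compat = ipaddress.IPv6Address("::192.0.2.1")
assert int(compat) == int(ipaddress.IPv4Address("192.0.2.1"))

# "IPv4-mapped" form (::ffff:a.b.c.d): represents an IPv4-only node.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
assert mapped.ipv4_mapped == ipaddress.IPv4Address("192.0.2.1")
```

The only difference between the two forms is the 16-bit field before the IPv4 address: all zeros for a node that speaks IPv6, all ones for a node that does not.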
7.5 Anycast Addresses An IPng anycast address is an address that is assigned to more than one interface (typically belonging to different
nodes), with the property that a packet sent to an anycast address is routed to the "nearest" interface having that address, according to the routing protocols' measure of distance.
Anycast addresses, when used as part of a route sequence, permit a node to select which of several internet
service providers it wants to carry its traffic. This capability is sometimes called "source selected policies". This
would be implemented by configuring anycast addresses to identify the set of routers belonging to internet service
providers (e.g., one anycast address per internet service provider). These anycast addresses can be used as
intermediate addresses in an IPv6 routing header, to cause a packet to be delivered via a particular provider or
sequence of providers. Other possible uses of anycast addresses are to identify the set of routers attached to a
particular subnet, or the set of routers providing entry into a particular routing domain.
Anycast addresses are allocated from the unicast address space, using any of the defined unicast address formats.
Thus, anycast addresses are syntactically indistinguishable from unicast addresses. When a unicast address is
assigned to more than one interface, thus turning it into an anycast address, the nodes to which the address is
assigned must be explicitly configured to know that it is an anycast address.
7.6 Multicast Addresses An IPng multicast address is an identifier for a group of interfaces. An interface may
belong to any number of multicast groups. Multicast addresses have the following format:
| 8 | 4| 4| 112 bits |
+--------+----+----+---------------------------------------------+
|11111111|FLGS|SCOP| GROUP ID |
+--------+----+----+---------------------------------------------+
11111111 at the start of the address identifies the address as being a multicast address.
FLGS is a set of 4 flags: |0|0|0|T|. The high-order 3 flags are reserved and must be initialized to zero.
T=0 indicates a permanently assigned ("well-known") multicast address, assigned by the global internet numbering
authority. T=1 indicates a non-permanently assigned ("transient") multicast address.
SCOP is a 4-bit multicast scope value used to limit the scope of the multicast group. The values include 1 (node-local), 2 (link-local), 5 (site-local), 8 (organization-local), and E (global); 0 and F are reserved.
GROUP ID identifies the multicast group, either permanent or transient, within the given scope.
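The fixed positions of the FLGS and SCOP nibbles make the format easy to decode. A minimal sketch (function name invented for the example):

```python
import ipaddress

def multicast_parts(addr):
    """Split an IPng multicast address into its FLGS, SCOP and GROUP ID fields."""
    a = int(ipaddress.IPv6Address(addr))
    assert a >> 120 == 0xFF, "not a multicast address"   # leading 11111111
    flags = (a >> 116) & 0xF        # |0|0|0|T|
    scope = (a >> 112) & 0xF
    group_id = a & ((1 << 112) - 1)
    return flags, scope, group_id

# ff02::1 : permanently assigned (T=0), scope nibble 2, group 1.
flags, scope, group = multicast_parts("ff02::1")
```

Because the scope is carried in the address itself, a router can drop an out-of-scope multicast packet by looking at one nibble, which is the scalability improvement mentioned earlier.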
8.0 IPng Routing
Routing in IPng is almost identical to IPv4 routing under CIDR, except that the addresses are 128-bit IPng addresses
instead of 32-bit IPv4 addresses. With very straightforward extensions, all of IPv4's routing algorithms (OSPF, RIP,
IDRP, ISIS, etc.) can be used to route IPng.
IPng also includes simple routing extensions which support powerful new routing functionality. These capabilities
include:
Provider Selection (based on policy, performance, cost, etc.)
Host Mobility (route to current location)
Auto-Readdressing (route to new address)
The new routing functionality is obtained by creating sequences of IPng addresses using the IPng Routing option.
The routing option is used by an IPng source to list one or more intermediate nodes (or topological groups) to be
"visited" on the way to a packet's destination. This function is very similar in function to IPv4's Loose Source and
Record Route option.
In order to make address sequences a general function, IPng hosts are required, in most cases, to reverse the routes in a
packet they receive (if the packet was successfully authenticated using the IPng Authentication Header) containing
address sequences in order to return the packet to its originator. This approach is taken to make IPng host
implementations support the handling and reversal of source routes from the start. This is the key to allowing
them to work with hosts which implement the new features such as provider selection or extended addresses.
Three examples show how the address sequences can be used. In these examples, address sequences are shown as
a list of individual addresses separated by commas, where the first address is the source address, the last address is the destination address, and the middle addresses
are the intermediate nodes to be visited.
For these examples assume that two hosts, H1 and H2, wish to communicate. Assume that H1 and H2's sites are
both connected to providers P1 and P2. A third wireless provider, PR, is connected to both providers P1 and P2.
----- P1 ------
/ | \
/ | \
H1 PR H2
\ | /
\ | /
----- P2 ------
The simplest case (no use of address sequences) is when H1 wants to send a packet to H2 containing the
addresses: H1, H2
When H2 replies, it reverses the addresses and constructs a packet containing the addresses: H2, H1
In this example either provider could be used, and H1 and H2 would not be able to select which provider carries their traffic.
If H1 decides that it wants to enforce a policy that all communication to/from H2 can only use provider P1, it
would construct a packet containing the address sequence: H1, P1, H2
This ensures that when H2 replies to H1, it will reverse the route and the reply would also travel over P1. The
reversed address sequence would be: H2, P1, H1
If H1 became mobile and moved to provider PR, it could maintain (not breaking any transport connections)
communication with H2, by sending packets that contain the address sequence: H1, PR, P1, H2
This would ensure that when H2 replied it would enforce H1's policy of exclusive use of provider P1 and send the
packet to H1's new location on provider PR. The reversed address sequence would be: H2, P1, PR, H1
The address sequence facility of IPng can be used for provider selection, mobility, and readdressing. It is a simple but powerful capability.
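The reversal rule underlying all three examples is mechanical and can be sketched in one line; the host names are the ones used in the examples above.

```python
def reverse_route(addresses):
    """Reverse an address sequence [source, *intermediates, destination]
    so a reply retraces the same intermediate nodes in the opposite order."""
    return list(reversed(addresses))

# H1, mobile on provider PR, enforcing a P1-only policy toward H2:
outbound = ["H1", "PR", "P1", "H2"]
reply = reverse_route(outbound)   # ["H2", "P1", "PR", "H1"]
```

Note how the mobility and policy cases compose for free: because H2 simply reverses whatever sequence it received, H1 can change its attachment point (PR) without H2 needing to know anything about providers or mobility.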
9.0 IPng Quality-of-Service Capabilities
The Flow Label and the Priority fields in the IPng header may be used by a host to identify those packets for which
it requests special handling by IPng routers, such as non-default quality of service or "real-time" service. This
capability is important in order to support applications which require some degree of consistent throughput, delay,
and/or jitter. These types of applications are commonly described as "multi-media" or "real-time" applications.
9.1 Flow Labels The 24-bit Flow Label field in the IPv6 header may be used by a source to label those
packets for which it requests special handling by the IPv6 routers, such as non-default quality of service or "real-
time" service.
This aspect of IPv6 is, at the time of writing, still experimental and subject to change as the requirements for flow
support in the Internet become clearer. Hosts or routers that do not support the functions of the Flow Label field
are required to set the field to zero when originating a packet, pass the field on unchanged when forwarding a packet, and ignore the field when receiving a packet.
A flow is a sequence of packets sent from a particular source to a particular (unicast or multicast) destination for
which the source desires special handling by the intervening routers. The nature of that special handling might be
conveyed to the routers by a control protocol, such as a resource reservation protocol, or by information within
the flow's packets themselves, e.g., in a hop-by-hop option.
There may be multiple active flows from a source to a destination, as well as traffic that is not associated with any
flow. A flow is uniquely identified by the combination of a source address and a non-zero flow label. Packets that do not belong to a flow carry a flow label of zero.
A flow label is assigned to a flow by the flow's source node. New flow labels must be chosen (pseudo-)randomly
and uniformly from the range 1 to FFFFFF hex. The purpose of the random allocation is to make any set of bits
within the Flow Label field suitable for use as a hash key by routers, for looking up the state associated with the
flow.
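The random-allocation rule above is trivially small in code; a sketch, with the function name invented for the example:

```python
import random

def new_flow_label(rng=random):
    """Pick a flow label uniformly from 1..FFFFFF hex.
    Zero is excluded because it marks packets that belong to no flow."""
    return rng.randrange(1, 0x1000000)

label = new_flow_label()
assert 1 <= label <= 0xFFFFFF
```

Uniform randomness is what lets a router use any convenient slice of the label bits as a hash key without worrying that sources cluster their label values.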
All packets belonging to the same flow must be sent with the same source address, same destination address, and
same non-zero flow label. If any of those packets includes a Hop-by-Hop Options header, then they all must be
originated with the same Hop-by-Hop Options header contents (excluding the Next Header field of the Hop-by-Hop
Options header). If any of those packets includes a Routing header, then they all must be originated with the same
contents in all extension headers up to and including the Routing header (excluding the Next Header field in the
Routing header). The routers or destinations are permitted, but not required, to verify that these conditions are
satisfied. If a violation is detected, it should be reported to the source by an ICMP Parameter Problem message,
Code 0, pointing to the high-order octet of the Flow Label field (i.e., offset 1 within the IPv6 packet) [12].
Routers are free to "opportunistically" set up flow- handling state for any flow, even when no explicit flow
establishment information has been provided to them via a control protocol, a hop-by-hop option, or other means.
For example, upon receiving a packet from a particular source with an unknown, non-zero flow label, a router may
process its IPv6 header and any necessary extension headers as if the flow label were zero. That processing would
include determining the next-hop interface, and possibly other actions, such as updating a hop-by-hop option,
advancing the pointer and addresses in a Routing header, or deciding on how to queue the packet based on its
Priority field. The router may then choose to "remember" the results of those processing steps and cache that
information, using the source address plus the flow label as the cache key. Subsequent packets with the same
source address and flow label may then be handled by referring to the cached information rather than examining
all those fields that, according to the requirements of the previous paragraph, can be assumed unchanged from
the first packet seen in the flow.
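The opportunistic caching described above amounts to a lookup table keyed on (source address, flow label). The sketch below is illustrative only; the class and method names are invented, and the "next hop" value stands in for whatever forwarding state a real router would cache.

```python
class FlowCache:
    """Cache of per-flow forwarding decisions, keyed on (source, flow label)."""

    def __init__(self):
        self._cache = {}

    def lookup(self, src, flow_label):
        if flow_label == 0:
            return None                      # label 0 means "no flow": never cached
        return self._cache.get((src, flow_label))

    def remember(self, src, flow_label, next_hop):
        """Record the result of full header processing for later packets."""
        if flow_label != 0:
            self._cache[(src, flow_label)] = next_hop

cache = FlowCache()
cache.remember("fe80::1", 0xABCDE, "if2")    # first packet: full processing
hit = cache.lookup("fe80::1", 0xABCDE)       # later packets: cache hit, "if2"
```

A real implementation would also time out cached state, since the paragraph's invariants only hold while the source keeps the flow's headers unchanged.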
9.2 Priority The 4-bit Priority field in the IPv6 header enables a source to identify the desired delivery priority
of its packets, relative to other packets from the same source. The Priority values are divided into two ranges:
Values 0 through 7 are used to specify the priority of traffic for which the source is providing congestion control,
i.e., traffic that "backs off" in response to congestion, such as TCP traffic. Values 8 through 15 are used to specify
the priority of traffic that does not back off in response to congestion, e.g., "real-time" packets being sent at a
constant rate.
For congestion-controlled traffic, the following Priority values are recommended for particular application
categories:
0 Uncharacterized traffic
1 "Filler" traffic (e.g., netnews)
2 Unattended data transfer (e.g., email)
3 (Reserved)
4 Attended bulk transfer (e.g., FTP, NFS)
5 (Reserved)
6 Interactive traffic (e.g., telnet, X)
7 Internet control traffic (e.g., routing protocols, SNMP)
For non-congestion-controlled traffic, the lowest Priority value (8) should be used for those packets that the
sender is most willing to have discarded under conditions of congestion (e.g., high-fidelity video traffic), and the
highest value (15) should be used for those packets that the sender is least willing to have discarded (e.g., low-
fidelity audio traffic). There is no relative ordering implied between the congestion-controlled priorities and the
non-congestion-controlled priorities.
10.0 IPng Security
The current Internet has significant security problems and lacks effective privacy and authentication mechanisms
below the application layer. IPng remedies these shortcomings by having two integrated options that provide
security services [13]. These two options may be used singly or together to provide differing levels of security to
different users. This is very important because different user communities have different security needs.
The first mechanism, called the "IPng Authentication Header", is an extension header which provides
authentication and integrity (without confidentiality) to IPng datagrams [14]. While the extension is algorithm-
independent and will support many different authentication techniques, the use of keyed MD5 is proposed to help
ensure interoperability within the worldwide Internet. This can be used to eliminate a significant class of network
attacks, including host masquerading attacks. The use of the IPng Authentication Header is particularly important
when source routing is used with IPng because of the known risks in IP source routing. Its placement at the
internet layer can help provide host origin authentication to those upper layer protocols and services that
currently lack meaningful protections. This mechanism should be exportable by vendors in the United States and
other countries with similar export restrictions because it only provides authentication and integrity, and
specifically does not provide confidentiality. The exportability of the IPng Authentication Header encourages its widespread deployment and use.
The second security extension header provided with IPng is the "IPng Encapsulating Security Header" [15]. This
mechanism provides integrity and confidentiality to IPng datagrams. It is simpler than some similar security
protocols (e.g., SP3D, ISO NLSP) but remains flexible and algorithm-independent. To achieve interoperability
within the global Internet, DES CBC is specified as the standard algorithm for use with the IPng Encapsulating Security Header.
11.0 IPng Transition Mechanisms
A key objective of the IPng transition is to allow IPv6 hosts and routers to be deployed in the Internet in a highly diffuse and incremental fashion, with few
interdependencies. Another objective is that the transition should be as easy as possible for end-users, system administrators, and network operators.
The IPng transition mechanisms are a set of protocol mechanisms implemented in hosts and routers, along with
some operational guidelines for addressing and deployment, designed to make the transition of the Internet to IPv6 work with as little disruption as possible. Features of the transition mechanisms include:
Incremental upgrade and deployment. Individual IPv4 hosts and routers may be upgraded to IPv6 one at a
time without requiring any other hosts or routers to be upgraded at the same time. New IPv6 hosts and
routers can be installed at any time.
Minimal upgrade dependencies. The only prerequisite to upgrading hosts to IPv6 is that the DNS server
must first be upgraded to handle IPv6 address records. There are no prerequisites to upgrading routers.
Easy addressing. When existing installed IPv4 hosts or routers are upgraded to IPv6, they may continue to
use their existing address. They do not need to be assigned new addresses. Administrators do not need to
draft new addressing plans.
Low start-up costs. Little or no preparation work is needed in order to upgrade existing IPv4 systems to
IPv6, or to deploy new IPv6 systems.
The mechanisms employed by the IPng transition mechanisms include:
An IPv6 addressing structure that embeds IPv4 addresses within IPv6 addresses, and encodes other information used by the transition mechanisms.
A model of deployment where all hosts and routers upgraded to IPv6 in the early transition phase are
"dual" capable (i.e. implement complete IPv4 and IPv6 protocol stacks).
The technique of encapsulating IPv6 packets within IPv4 headers to carry them over segments of the end-
to-end path where the routers have not yet been upgraded to IPv6.
The header translation technique to allow the eventual introduction of routing topologies that route only
IPv6 traffic, and the deployment of hosts that support only IPv6. Use of this technique is optional, and would only come into play late in the transition.
The IPng transition mechanisms ensure that IPv6 hosts can interoperate with IPv4 hosts anywhere in the Internet
up until the time when IPv4 addresses run out, and allow IPv6 and IPv4 hosts within a limited scope to
interoperate indefinitely after that. This feature protects the huge investment users have made in IPv4 and
ensures that IPv6 does not render IPv4 obsolete. Hosts that need only a limited connectivity range (e.g., printers)
need never be upgraded to IPv6.
The incremental upgrade features of the IPng transition mechanisms allow the host and router vendors to integrate
IPv6 into their product lines at their own pace, and allow the end users and network operators to deploy IPng on
their own schedules.
There are a number of reasons why IPng is appropriate for the next generation of the Internet Protocol. It solves
the Internet scaling problem, provides a flexible transition mechanism for the current Internet, and was designed
to meet the needs of new markets such as nomadic personal computing devices, networked entertainment, and
device control. It does this in an evolutionary way which reduces the risk of architectural problems.
Ease of transition is a key point in the design of IPng. It is not something that was added in at the end. IPng is designed
to interoperate with IPv4. Specific mechanisms (embedded IPv4 addresses, pseudo-checksum rules, etc.) were
built into IPng to support transition and compatibility with IPv4. It was designed to permit a gradual and piecemeal
deployment with a minimum of dependencies.
IPng supports large hierarchical addresses which will allow the Internet to continue to grow and provide new
routing capabilities not built into IPv4. It has anycast addresses which can be used for policy route selection and
has scoped multicast addresses which provide improved scalability over IPv4 multicast. It also has local use address
mechanisms which provide the ability for "plug and play" installation.
The address structure of IPng was also designed to support carrying the addresses of other internet protocol suites.
Space was allocated in the addressing plan for IPX and NSAP addresses. This was done to facilitate migration of
these internet protocols to IPng.
IPng provides a platform for new Internet functionality. This includes support for real-time flows, provider
selection, host mobility, end-to-end security, auto-configuration, and auto-reconfiguration.
In summary, IPng is a new version of IP. It can be installed as a normal software upgrade in internet devices. It is
interoperable with the current IPv4. Its deployment strategy was designed to not have any "flag" days. IPng is
designed to run well on high performance networks (e.g., ATM) and at the same time is still efficient for low
bandwidth networks (e.g., wireless). In addition, it provides a platform for new internet functionality that will be required in the near future.
Telnet
The Telnet protocol is often thought of as simply providing a facility for remote logins to a computer via the
Internet. This was its original purpose although it can be used for many other purposes. It is best understood in the
context of a user with a simple terminal using the local telnet program (known as the client program) to run a
login session on a remote computer where his communications needs are handled by a telnet server program. It
should be emphasised that the telnet server can pass on the data it has received from the client to many other
types of process including a remote login server. It is described in RFC854 and was first published in 1983.
Commands
The telnet protocol also specifies various commands that control the method and various details of the interaction
between the client and server. These commands are incorporated within the data stream. The commands are
distinguished by the use of various characters with the most significant bit set. Commands are always introduced
by a character with the decimal code 255 known as an Interpret as command (IAC) character. The complete set of
special characters is:
DM (242): Data mark. Indicates the position of a Synch event within the data stream. This should always be accompanied by a TCP urgent notification.
BRK (243): Break. Indicates that the "break" or "attention" key was hit.
IP (244): Interrupt process. Suspend, interrupt or abort the process to which the NVT is connected.
AO (245): Abort output. Allows the current process to run to completion but does not send its output to the user.
AYT (246): Are you there. Send back to the NVT some visible evidence that the AYT was received.
EC (247): Erase character. The receiver should delete the last preceding undeleted character from the data stream.
EL (248): Erase line. Delete characters from the data stream back to, but not including, the previous CRLF.
GA (249): Go ahead. Used, under certain circumstances, to tell the other end that it can transmit.
WILL (251): Indicates the desire to begin performing, or confirmation that you are now performing, the indicated option.
WONT (252): Indicates the refusal to perform, or continue performing, the indicated option.
DO (253): Indicates the request that the other party perform, or confirmation that you are expecting the other party to perform, the indicated option.
DONT (254): Indicates the demand that the other party stop performing, or confirmation that you are no longer expecting the other party to perform, the indicated option.
There are a variety of options that can be negotiated between a telnet client and server using commands at any
stage during the connection. They are described in detail in separate RFCs. The following are the most important.
Option 1: echo (RFC 857)
Option 5: status (RFC 859)
Option 34: linemode (RFC 1184)
Options are agreed by a process of negotiation which results in the client and server having a common view of
various extra capabilities that affect the interchange and the operation of applications. Either end of a telnet
dialogue can enable or disable an option either locally or remotely. The initiator sends a three byte command of the form IAC, <type of operation>, <option>.
Associated with each of these there are various possible responses:
Sender sends WILL, receiver responds DO: the sender would like to use a certain facility if the receiver can handle it. The option is now in effect.
Sender sends WILL, receiver responds DONT: the receiver says it cannot support the option. The option is not in effect.
Sender sends DO, receiver responds WILL: the sender says it can handle traffic from the receiver if the receiver wishes to use a certain option. The option is now in effect.
Sender sends DO, receiver responds WONT: the receiver says it cannot support the option. The option is not in effect.
For example, if the sender wants the other end to suppress go-ahead, it would send the byte sequence
255 (IAC), 251 (WILL), 3. The final byte of the three byte sequence identifies the required option. For some of
the negotiable options, values need to be communicated once support of the option has been agreed. This is done
using sub-option negotiation.
Values are communicated via an exchange of value query commands and responses in the following manner.
For example if the client wishes to identify the terminal type to the server the following exchange might take
place
Client 255(IAC),251(WILL),24
Server 255(IAC),253(DO),24
Server 255(IAC),250(SB),24,1,255(IAC),240(SE)
Client 255(IAC),250(SB),24,0,'V','T','2','2','0',255(IAC),240(SE)
The first exchange establishes that terminal type (option number 24) will be handled, the server then enquires of
the client what value it wishes to associate with the terminal type. The sequence SB,24,1 implies sub-option
negotiation for option type 24, value required (1). The IAC,SE sequence indicates the end of this request. The
response IAC,SB,24,0,'V'... implies sub-option negotiation for option type 24, value supplied (0), and the IAC,SE
sequence indicates the end of the response (and the supplied value).
The encoding of the value is specific to the option but a sequence of characters, as shown above, is common.
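The negotiation rules above are simple enough to sketch in a few lines of C. The following is a minimal illustration, not a complete telnet implementation; the function name and structure are this document's invention rather than part of any standard API. It scans a received byte stream and picks out the three-byte negotiation sequences, skipping ordinary data.

```c
#include <stddef.h>

/* Telnet command bytes (from the command table above). */
enum { IAC = 255, DONT = 254, DO = 253, WONT = 252, WILL = 251 };

/* Scan a received byte stream and count option-negotiation triples
 * (IAC, verb, option), where the verb is WILL, WONT, DO or DONT.
 * Ordinary data bytes are skipped.  The option number of the last
 * triple seen is stored in *last_option (-1 if none was found). */
static int count_negotiations(const unsigned char *buf, size_t len,
                              int *last_option)
{
    size_t i = 0;
    int count = 0;

    *last_option = -1;
    while (i < len) {
        if (buf[i] == IAC && i + 2 < len &&
            buf[i + 1] >= WILL && buf[i + 1] <= DONT) {
            *last_option = buf[i + 2];
            count++;
            i += 3;          /* consume the whole three-byte triple */
        } else {
            i++;             /* plain data byte (or other command) */
        }
    }
    return count;
}
```

Feeding it the bytes 255, 251, 3 followed by plain text would report one negotiation, for option 3 (suppress go-ahead).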
SMTP Protocol
The SMTP Protocol
Simple Mail Transfer Protocol
SMTP (Simple Mail Transfer Protocol) is a TCP/IP protocol used in sending and receiving e-mail. However, since it's
limited in its ability to queue messages at the receiving end, it's usually used with one of two other protocols,
POP3 or Internet Message Access Protocol, that let the user save messages in a server mailbox and download them
periodically from the server. In other words, users typically use a program that uses SMTP for sending e-mail and
either POP3 or IMAP for receiving messages that have been received for them at their local server. Most mail
programs such as Eudora let you specify both an SMTP server and a POP server. On UNIX-based systems, sendmail is
the most widely-used SMTP server for e-mail. A commercial package, Sendmail, includes a POP3 server and also
comes in a version for Windows NT.
SMTP usually is implemented to operate over Transmission Control Protocol port 25.
Simple Mail Transfer Protocol (SMTP), documented in RFC 821, is the Internet's standard host-to-host mail transport
protocol and traditionally operates over TCP, port 25. In other words, a UNIX user can type telnet hostname 25 and
connect with an SMTP server, if one is present.
SMTP uses a style of asymmetric request-response protocol popular in the early 1980s, and still seen occasionally,
most often in mail protocols. The protocol is designed to be equally useful to either a computer or a human,
though not too forgiving of the human. From the server's viewpoint, a clear set of commands is provided and well-
documented in the RFC. For the human, all the commands are clearly terminated by newlines and a HELP
command lists all of them. From the sender's viewpoint, the command replies always take the form of text lines,
each starting with a three-digit code identifying the result of the operation, a continuation character to indicate
that more lines follow, and then arbitrary text information designed to be informative to a human.
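The reply format just described (a three-digit code plus a continuation character) is easy to parse mechanically. Below is a minimal, illustrative C helper; the function name is made up for this sketch, and the only assumption beyond the text above is the SMTP convention that a '-' after the code means more reply lines follow, while a space means this is the final line.

```c
#include <ctype.h>

/* Parse one SMTP reply line, e.g. "250 OK" or "250-ENHANCEDSTATUSCODES".
 * Returns the three-digit reply code, or -1 if the line does not start
 * with three digits.  *more is set to 1 when the continuation
 * character '-' indicates that further reply lines follow, else 0. */
static int parse_smtp_reply(const char *line, int *more)
{
    if (!isdigit((unsigned char)line[0]) ||
        !isdigit((unsigned char)line[1]) ||
        !isdigit((unsigned char)line[2]))
        return -1;
    *more = (line[3] == '-');
    return (line[0] - '0') * 100 + (line[1] - '0') * 10 + (line[2] - '0');
}
```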
If mail delivery fails, sendmail (the most important SMTP implementation) will queue mail messages and retry
delivery later. However, a backoff algorithm is used, and no mechanism exists to poll all Internet hosts for mail,
nor does SMTP provide any mailbox facility, or any special features beyond mail transport. For these reasons, SMTP
isn't a good choice for hosts situated behind highly unpredictable lines (like modems). A better-connected host can
be designated as a DNS mail exchanger, and then arrange for a relay scheme. Currently, there are two main configurations
that can be used. One is to configure POP mailboxes and a POP server on the exchange host, and let all users use
POP-enabled mail clients. The other possibility is to arrange for a periodic SMTP mail transfer from the exchange
host to another, local SMTP exchange host which has been queuing all the outbound mail. Of course, since this
solution does not allow full-time Internet access, it is not generally preferred.
RFC 1869 defined the capability for SMTP service extensions, creating Extended SMTP, or ESMTP. ESMTP is by
definition extensible, allowing new service extensions to be defined and registered with IANA. Probably the most
important extension currently available is Delivery Status Notification (DSN), defined in RFC 1891.
TCP
Design
This program implements a simple client-server program. The client connects to the server via TCP and sends it
any ASCII text message. The server replies to the client with the same message. The client then outputs the reply message.
The message sent from the client to the server consists of two parts. The first byte sent contains the length of the
message, not including itself. The rest of the message is the text to be echoed back from the server to the
client. The reply message is in the same format. The length is passed in order to allow the server to be sure it has received the entire message.
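The framing described above (a one-byte length prefix that does not count itself) can be sketched as a small C helper. This is an illustration of the encoding only; the function name is an invention for this sketch and is not taken from the client.c/server.c sources themselves.

```c
#include <string.h>
#include <stddef.h>

/* Frame a message as described above: one length byte (which does not
 * count itself) followed by the message body.  Returns the total
 * number of bytes written to out, or 0 if the message cannot be
 * framed (longer than 255 bytes, or the buffer is too small). */
static size_t frame_message(const char *msg, unsigned char *out,
                            size_t outsize)
{
    size_t len = strlen(msg);

    if (len > 255 || outsize < len + 1)
        return 0;
    out[0] = (unsigned char)len;     /* length prefix  */
    memcpy(out + 1, msg, len);       /* message body   */
    return len + 1;
}
```

Framing "hello" this way produces six bytes on the wire: the length byte 5 followed by the five message characters.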
To compile this program, go to any Solaris machine. Copy the following files to any subdirectory in your Unix home
directory:
Makefile
client.c
server.c
common.h
Type "make" in the directory where the source code is located. Two binaries should be produced: "client" and
"server". Telnet to another machine. Run "server" on one machine and "client" on the other machine.
Usage
Server: The server is started with a single command line argument, the port number. The server will wait on
this port until contact is made by the client. Once the server replies to the message sent by the client, it exits.
Example: #server 2000 (note: the pound sign indicates the Unix prompt)
Client: The client is started with three command line arguments. These arguments include the host, port
number, and message to send. The client will then send the given message to the given host and port number and
wait for a reply. Once the reply is received, the client exits.
Design Tradeoffs
The program is limited to transmitting a message containing 255 characters. This could be improved by separating
the message into smaller chunks; the server would continue to receive chunks until the entire message has been sent. This could
be done with minimal modification of the code. All that is needed is some sort of "all done" message sent between the client and the server.
NOTE: There is a quick script called "Test" that will run a simple test case.
OSI
The OSI (Open Systems Interconnection) model is a layered framework for facilitating the transfer of information on a network. The OSI model is made up of seven layers, with each layer
providing a distinct network service. By segmenting the tasks that each layer performs, it is possible to change one
of the layers with little or no impact on the others. For example, you can change the technology used at one layer without redesigning the layers above and below it.
The OSI model was specifically made for connecting open systems. These systems are designed to be open for
communication with almost any other system. The model was made to break down each functional layer so that
overall design complexity could be lessened. The model was constructed with several precepts in mind.
Application layer--Provides a means for the user to access information on the network through an
application. This layer is the main interface for the user to interact with the application and therefore the
network. Examples include file transfer (FTP), DNS, the virtual terminal (Telnet), and electronic mail (SMTP).
Presentation layer--Manages the presentation of the information in an ordered and meaningful manner.
This layer's primary function is the syntax and semantics of the data transmission. It converts local host computer
data representations into a standard network format for transmission on the network. On the receiving side, it
changes the network format into the appropriate host computer's format so that data can be utilized independent
of the host computer. ASCII and EBCDIC conversions, cryptography, and the like are handled here.
Session layer--Coordinates dialogue/session/connection between devices over the network. This layer
manages communications between connected sessions. Examples of this layer are token management (the session
layer manages who has the token) and network time synchronization.
Transport layer--Responsible for the reliable transmission of data and service specification between hosts.
The major responsibility of this layer is data integrity--that data transmitted between hosts is reliable and timely.
Upper layer datagrams are broken down into network-sized datagrams if needed and then implemented using the
appropriate transmission control. The transport layer creates one or more than one network connection,
depending on conditions. This layer also handles what type of connection will be created. Two major transport
protocols are TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
Network layer--Responsible for the routing of data (packets) to a system on the network; handles the
addressing and delivery of data. This layer provides for congestion control, accounting information for the
network, routing, addressing, and several other functions. IP (Internet Protocol) is a good example of a network
layer interface.
Data link layer--Provides for the reliable delivery of data across a physical network. This layer guarantees
that the information has been delivered, but not that it has been routed or accepted. This layer deals with issues
such as flow regulation, error detection and control, and frames. This layer has the important task of creating and
managing what frames are sent out on the network. The network data frame, or packet, is made up of checksum,
source address, destination address, and the data itself. The largest packet size that can be sent defines the maximum transmission unit (MTU) of the network.
Physical layer--Handles the bit-level electrical/light communication across the network channel. The major
concern at this level is what physical access method to use. The physical layer deals with four very important
characteristics of the network: mechanical, electrical, functional, and procedural. It also defines the hardware
characteristics needed to transmit the data (voltage/current levels, signal strength, connector, and media).
Basically, this layer ensures that a bit sent on one side of the network is received correctly on the other side.
Data travels from the application layer of the sender, down through the levels, across the nodes of the network
service, and up through the levels of the receiver. Not all of the levels are needed for all types of data; certain applications use only a subset of the layers.
To keep track of the transmission, each layer "wraps" the preceding layer's data and header with its own header. A
small chunk of data will be transmitted with multiple layers attached to it. On the receiving end, each layer strips off its own header before passing the data up to the layer above.
The OSI model should be used as a guide for how data is transmitted over the network. It is an abstract model; real protocol implementations do not always map neatly onto its seven layers.
Physical Layer:
The physical layer is concerned with transmitting raw bits over a communication channel. The design issues have
to do with making sure that when one side sends a 1 bit, it is received by the other side as a 1 bit, not as a 0 bit.
Typical questions here are how many volts should be used to represent a 1 and how many for a 0, how many
microseconds a bit lasts, whether transmission may proceed simultaneously in both directions, how the initial
connection is established and how it is torn down when both sides are finished, and how many pins the network
connector has and what each pin is used for. The design issues here deal largely with mechanical, electrical, and
procedural interfaces, and the physical transmission medium, which lies below the physical layer. Physical layer
design can properly be considered to be within the domain of the electrical engineer.
Mathematical analysis of signal behavior:
1. Fourier analysis
2. Bandwidth-limited signals
Bandwidth = the range of frequencies, from 0 up to some cutoff frequency, that is transmitted with amplitudes essentially undiminished; it is measured in Hertz (cycles/sec).
Baud = One signal change per second, a measure of data transmission speed. Named after the French engineer
and telegrapher Jean-Maurice-Emile Baudot and originally used to measure the transmission speed of telegraph
equipment, the term now most commonly refers to the data transmission speed of a modem.
Baud Rate = The speed at which a modem can transmit data. The baud rate is the number of events, or signal
changes, that occur in one second--not the number of bits per second (bps) transmitted. In high-speed digital
communications, one event can actually encode more than one bit, and modems are more accurately described in
terms of bits per second than baud rate. For example, a so-called 9,600-baud modem actually operates at 2,400
baud but transmits 9,600 bits per second by encoding 4 bits per event (2,400 × 4 = 9,600) and thus is a 9,600-bps
modem.
Nyquist’s Theorem
(For noiseless channels)
Maximum data rate = 2H log2 V bits/sec, where H is the bandwidth in Hz and the signal consists of V discrete levels.
The amount of noise is measured by the signal-to-noise ratio. If S is the signal power and N is the noise power, then the signal-to-noise ratio is S/N, usually quoted in decibels as 10 log10 (S/N).
Claude Shannon's theorem (for noisy channels):
Maximum number of bits/sec = H log2 (1 + S/N)
Transmission Media
1. Magnetic Media
Tapes, Hard disks and Floppies
2. Twisted Pair
Oldest and the most common method of data transmission
3. Coaxial Cable
Coaxial cables are available in two varieties: 50-ohm (baseband, digital) and 75-ohm (broadband, analog).
4. Broadband Coaxial Cable
Broadband amplifiers transmit signals in only one direction, so dual cable systems are used. If single cable systems are used, then different frequency bands are allocated to inbound and outbound traffic.
Technically broadband is inferior to baseband, but its advantage is that it is already in place, as it has been widely installed for cable television.
5. Fiber Optics
There are three components: the light source, the transmission medium and the detector. (A pulse of light indicates a 1 bit and
the absence of light indicates a 0 bit.)
The transmission medium is a fiber optic cable made of a center glass core, surrounded by a glass cladding and
protected by a plastic jacket. (The glass cladding has a lower index of refraction than the core so that the light is kept within the core.)
Normally a transmission medium would leak light and transmission would not be possible, but due to the physics of total internal reflection, light that strikes the core-cladding boundary at a shallow enough angle is reflected back into the core.
A multimode fiber will have different rays of light bouncing around at different angles. In the case of a single-mode
fiber, the fiber's diameter is reduced to a few wavelengths of light and the light is propagated in a straight line
only. This kind of fiber is faster but more expensive. Attenuation (weakening of transmitted power) in decibels = 10 log10 (transmitted power / received power).
Fibers can be connected in three ways:
1. Connectors can be plugged into fiber sockets (easy, but connectors lose 10% to 20% of the light).
2. Two fibers can be spliced mechanically (a mechanical splice loses about 10% of the light).
3. Two fibers can be fused (melted) to form a solid connection (very small attenuation of light).
In all three methods, refracted energy at the point of the join can interfere with the signal.
There are two kinds of light sources that can be used with a fiber:
LED light (data rate, distance and cost low; long life; multimode only; not temperature sensitive)
Laser light (data rate, distance and cost high; short life; multimode and single mode; temperature sensitive)
Advantages of Fiber Optic Cables :
1. Higher Bandwidth
2. Low attenuation (Repeaters needed every 30 kms compared to every 5 kms in case of copper cables)
4. Not affected by corrosive chemicals in the air, ideal for harsh factory environment.
5. They don’t leak light hence quite difficult to tap, giving excellent security.
6. Electrons in a copper wire are affected by one another and by stray electrons outside the wire, while in
fiber optic cables photons are not affected by one another (as they have no electric charge) nor by stray photons outside the fiber.
Disadvantages :
1. Requires skilled engineers.
2. Fiber optic cables are unidirectional, hence two cables or two frequency bands are required.
3. Very costly
Wireless Transmission
The Electromagnetic Spectrum (eg. Wireless)
Telephone System
Problems with transmission media:
1. Attenuation
2. Delay distortion
3. Noise
Modem
A modem modulates outgoing digital signals from a computer or other digital device into analog signals for a
conventional copper twisted pair telephone line, and demodulates the incoming analog signal, converting it to a
digital signal for the digital device. The 56 Kbps modems were temporary landing places on the way to the much
higher bandwidth devices and carriers of tomorrow. From early 1998, most new personal computers came with 56 Kbps modems. By comparison, using a
digital Integrated Services Digital Network (ISDN) adapter instead of a conventional modem, the same telephone wire can
now carry up to 128 Kbps. Digital Subscriber Line (DSL) systems, now being deployed in a number of areas, carry even higher rates.
RS-232-C connector
Multiplexing
1. Frequency Division Multiplexing & Wavelength Division Multiplexing (For Fiber optics) --- Analog
Switching
2. Packet Switching
Switch Hierarchy
1. Crossbar switches
Narrowband ISDN
Integrated Services Digital Network (ISDN) is a set of CCITT/ITU standards for digital transmission over ordinary
telephone copper wire as well as over other media. Home and business users who install an ISDN adapter (in place
of a modem) can see highly-graphic Web pages arriving very quickly (up to 128 Kbps). ISDN requires adapters at
both ends of the transmission so your access provider also needs an ISDN adapter. ISDN is generally available from
your phone company in most urban areas in the United States and Europe.
There are two levels of service: the Basic Rate Interface (BRI), intended for the home and small enterprise, and
the Primary Rate Interface (PRI), for larger users. Both rates include a number of B-channels and a D-channel.
Each B-channel carries data, voice, and other services. Each D-channel carries control and signaling information.
The Basic Rate Interface consists of two 64 Kbps B-channels and one 16 Kbps D-channel. Thus, a Basic Rate user
can have up to 128 Kbps service. The Primary Rate consists of 23 B-channels and one 64 Kbps D-channel in the
United States, or 30 B-channels and one D-channel in Europe.
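The channel arithmetic above is simple enough to express directly. The helper below encodes only the fact, stated above, that each B-channel carries 64 Kbps of user data while the D-channel carries signalling:

```c
/* User-data capacity of an ISDN interface in Kbps, assuming each
 * B-channel carries 64 Kbps (the D-channel is signalling only). */
static int isdn_user_kbps(int b_channels)
{
    return b_channels * 64;
}
```

Two B-channels (Basic Rate) give 128 Kbps of user capacity; 23 B-channels (North American Primary Rate) give 1472 Kbps, with the D-channel's signalling bandwidth on top.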
Integrated Services Digital Network in concept is the integration of both analog or voice data together with digital
data over the same network. Although the ISDN you can install is integrating these on a medium designed for
analog transmission, broadband ISDN (BISDN) will extend the integration of both services throughout the rest of the
end-to-end path using fiber optic and radio media. Broadband ISDN will encompass frame relay service for high-
speed data that can be sent in large bursts, the Fiber Distributed-Data Interface (FDDI), and the Synchronous
Optical Network (SONET). BISDN will support transmission from 2 Mbps up to much higher, but as yet unspecified,
rates.
BISDN is both a concept and a set of services and developing standards for integrating digital transmission services
in a broadband network of fiber optic and radio media.
BISDN is the broadband counterpart to Integrated Services Digital Network, which provides digital transmission
over ordinary telephone company copper wires on the narrowband local loop.
ATM (Asynchronous Transfer Mode) is a dedicated-connection switching technology that organizes digital data into
53-byte cell units and transmits them over a physical medium using digital signal technology. Individually, a cell is
processed asynchronously relative to other related cells and is queued before being multiplexed over the
transmission path.
Because ATM is designed to be easily implemented by hardware (rather than software), faster processing and
switch speeds are possible. The prespecified bit rates are either 155.520 Mbps or 622.080 Mbps. Speeds on ATM
networks can reach 10 Gbps. Along with Synchronous Optical Network (SONET) and several other technologies, ATM
is a key component of broadband ISDN (BISDN).
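Because every 53-byte cell spends part of its length on a header, only a fraction of the line rate carries user data. The calculation below assumes the standard ATM cell layout of a 5-byte header and 48 bytes of payload (the split is not stated in the text above):

```c
/* Usable payload rate on an ATM link: each 53-byte cell carries a
 * 5-byte header, leaving 48 bytes of payload. */
static double atm_payload_mbps(double line_mbps)
{
    return line_mbps * 48.0 / 53.0;
}
```

At the 155.52 Mbps rate mentioned above, roughly 140.8 Mbps is available for payload; the rest is cell-header overhead.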
ATM switches
Knockout switch
Batcher-Banyan switch
Cellular Radio
Paging
Cordless telephones
Communication satellites
1. Geo-synchronous
2. Low orbit
X.21
A digital signaling interface called X.21 was recommended by the CCITT in 1976. The recommendation specifies
how the customer's computer, the DTE, sets up and clears calls by exchanging signals with the carrier's equipment,
the DCE.
The names and functions of the eight wires defined by X.21 are given in the following figure. The physical
connector has 15 pins, but not all of them are used. The DTE uses the T and C lines to transmit data and control
information, respectively. The DCE uses the R and I lines for data and control. The S line contains a signal stream
emitted by the DCE to provide timing information, so the DTE knows when each bit interval starts and stops. At the
carrier's option, a B line may also be provided to group the bits into 8-bit frames. If this option is provided, the
DTE must begin each character on a frame boundary. If the option is not provided, both DTE and DCE must begin
every control sequence with at least two SYN characters, to enable the other one to deduce the implied frame
boundaries.
Although X.21 is a long and complicated document, the simple example of the next figure illustrates the main
features. In this example it is shown how the DTE places a call to a remote DTE, and how the originating DTE
clears the call when it is finished. To make the explanation clearer, the calling and clearing procedures can be likened to placing an ordinary telephone call.