
Cable modem

A cable modem is a type of modem that provides access to a data signal sent over the
cable television infrastructure. Cable modems are primarily used to deliver broadband
Internet access in the form of cable Internet, taking advantage of unused bandwidth on
a cable television network. They are commonly found in Australia, New Zealand,
Canada, Europe, the United Kingdom, Costa Rica, and the United States. In the US alone
there were 22.5 million cable modem users during the first quarter of 2005, up from 17.4
million in the first quarter of 2004.

Cable modems in the OSI model or TCP/IP model

In network topology, a cable modem is a network bridge that conforms to IEEE 802.1D
for Ethernet networking (with some modifications). The cable modem bridges Ethernet
frames between a customer LAN and the coax cable network.

With respect to the OSI model, a cable modem is a data link layer (or layer 2) forwarder,
rather than simply a modem.

A cable modem also supports functionality at other layers. At the physical layer (layer 1),
the cable modem supports the Ethernet PHY on its LAN interface, and a DOCSIS-
defined cable-specific PHY on its HFC cable interface. It is to this cable-specific PHY
that the name cable modem refers. At the network layer (layer 3), the cable modem is
an IP host in that it has its own IP address, used by the network operator to manage and
troubleshoot the device. At the transport layer (layer 4) the cable modem supports
UDP in association with its own IP address, and it supports filtering based on TCP and
UDP port numbers to, for example, block forwarding of NetBIOS traffic out of the
customer's LAN. At the application layer (layer 5 or layer 7), the cable modem supports
certain protocols that are used for management and maintenance, notably DHCP,
SNMP, and TFTP.
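The port-based filtering described above can be sketched as follows. The NetBIOS port numbers are real, but the function and its arguments are illustrative stand-ins, not an actual DOCSIS filter configuration:

```python
# Sketch of the kind of port-based filter a cable modem applies before
# forwarding traffic out of the customer's LAN. Ports 137-139 are the real
# NetBIOS name/datagram/session services; everything else here is a
# simplified illustration.

NETBIOS_PORTS = {137, 138, 139}

def should_forward(protocol: str, dst_port: int) -> bool:
    """Return False for traffic the modem blocks from leaving the LAN."""
    if protocol in ("TCP", "UDP") and dst_port in NETBIOS_PORTS:
        return False
    return True

assert should_forward("TCP", 80) is True      # web traffic passes
assert should_forward("UDP", 137) is False    # NetBIOS name service blocked
```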

Some cable modem devices may incorporate a router along with the cable modem
functionality, to provide the LAN with its own IP network addressing. From a data
forwarding and network topology perspective, this router functionality is typically kept
distinct from the cable modem functionality (at least logically) even though the two may
share a single enclosure and appear as one unit. So, the cable modem function will
have its own IP address and MAC address as will the router.

History

Hybrid Networks

Hybrid Networks developed, demonstrated and patented the first high speed,
asymmetrical cable modem systems in 1990. A key Hybrid Networks insight was that
highly asymmetrical communications would be sufficient to satisfy consumers connected
remotely to an otherwise completely symmetric high speed data communications
network. This was important because it was very expensive to provide high speed in the
upstream direction, while the CATV systems already had substantial broadband
capacity in the downstream direction. Another key insight was that the upstream and
downstream communications could be on the same or different communications media,
using different protocols in each direction, to establish a closed loop
communications system. The speeds and protocols used in each direction would be
very different. The earliest systems used the public switched telephone network (PSTN)
for the return path since very few cable systems were bi-directional. Later systems used
cable for the upstream as well as the downstream path.

Initially, there was extreme skepticism about this approach. In fact, many technical people
doubted that it could work at all. Hybrid's system architecture is the way most cable
modem systems operate today.

LANcity

LANcity was an early pioneer in cable modems, developing a proprietary system that
saw fairly wide deployment in the US. LANcity was sold to Bay Networks which was
then acquired by Nortel, which eventually spun the cable modem business off as
ARRIS. ARRIS continues to make cable modems and CMTS equipment compliant with
the DOCSIS standard.

CDLP

CDLP was a proprietary system that was made by Motorola. CDLP CPE was capable of
both PSTN (telephone network) and RF (cable network) return paths. The PSTN return
path cable modem service was considered 'one way cable' and had many of the same
drawbacks as satellite Internet service, and as a result it quickly gave way to two way
cable. Cable modems that used the RF cable network for the return path were
considered 'two way cable', and were better able to compete with DSL which was
bidirectional. The standard is more or less defunct now with new providers using, and
existing providers having changed over to, the DOCSIS standard. Motorola's
proprietary CyberSURFR is an example of a modem built to the CDLP
standard, capable of a peak 10 Mbit/s downstream and 1.532 Mbit/s upstream. (CDLP
supported a maximum downstream bandwidth of 30 Mbit/s, which could be reached by
using several modems.)

The Australian ISP BigPond employed this system when it started cable modem trials in
1996. For a number of years cable Internet access was only available in Sydney,
Melbourne and Brisbane via CDLP. This network ran parallel to the newer DOCSIS
system for a number of years. In 2004 the CDLP network was switched off, and the
service is now exclusively DOCSIS.

IEEE 802.14

In the mid-1990s the IEEE 802 committee formed a subcommittee (802.14) to develop a
standard for cable modem systems. While significant progress was made, the group
was disbanded when North American MSOs instead backed the fledgling DOCSIS
specification.

DOCSIS

In the late 1990s, a consortium of US cable operators known as "MCNS" formed to
quickly develop an open and interoperable cable modem specification. The group
essentially combined technologies from the two dominant proprietary systems at the
time, taking the physical layer from the Motorola CDLP system and the MAC layer from
the LANcity system. When the initial specification had been drafted, the MCNS
consortium handed over control of it to CableLabs. CableLabs took on maintenance of
the specification, promoted it in various standards organizations (notably SCTE and
ITU), developed a certification testing program for cable modem equipment, and has
since drafted multiple extensions to the original specification. Virtually all cable modems
operating in the field today are compliant with one version or another of DOCSIS.

Cable modems and VoIP

With the advent of Voice over IP telephony, cable modems can also be used to provide
telephone service. Many people who have cable modems have opted to eliminate their
Plain Old Telephone Service (POTS). Because most telephone companies do not offer
naked DSL (DSL service without a POTS line), VoIP use is higher amongst cable
modem users.

A cable modem subscriber can make use of VoIP telephony by subscribing to a third
party service (e.g. Vonage or Skype). As an alternative, many cable operators offer a
VoIP service based on PacketCable. PacketCable allows MSOs to offer both High
Speed Internet and VoIP through a single piece of customer premises equipment, known
as an Embedded Multimedia Terminal Adapter (EMTA or E-MTA). An EMTA is basically
a cable modem and a VoIP adapter (known as a Multimedia Terminal Adapter) bundled
into a single device. PacketCable service has a significant technical advantage over
third-party providers in that voice packets are given guaranteed Quality of Service
across their entire path so that call quality can be assured.

Inside the Cable Modem: MAC


The MAC sits between the upstream and downstream portions of the cable modem, and
acts as the interface between the hardware and software portions of the various network
protocols. All computer network devices have MACs, but in the case of a cable modem
the tasks are more complex than those of a normal network interface card. For this
reason, in most cases, some of the MAC functions will be assigned to a central
processing unit (CPU) -- either the CPU in the cable modem or the CPU of the user's
system.

Modem, Ethernet and Cable Modem

Modem

A modem connection is about 50 kbit/s, and is used point-to-point. The distance is
virtually unlimited, including multiple satellite hops etc.

Ethernet

An Ethernet (LAN) connection is 10 Mbit/s or 100 Mbit/s, and is used to connect many
computers that can all "talk" directly to each other. Normally they will all talk with a few
servers and printers, but the network is all-to-all. The distance is normally limited to
below 1 km.

Cable Modem

A Cable Modem connection is something in-between. The speed is typically 3-50 Mbit/s
and the distance can be 100 km or even more. The Cable Modem Termination System
(CMTS) can talk to all the Cable Modems (CMs), but the Cable Modems can only talk to
the CMTS. If two Cable Modems need to talk to each other, the CMTS will have to relay
the messages.

The OSI layer stackup for a DOCSIS Cable Modem looks like this. For further
explanation of the various acronyms please see the other sections of this tutorial or refer
to www.whatis.com (which has many short, concise explanations, especially of the
network terms).

OSI               DOCSIS
Higher Layers     Applications
Transport Layer   TCP/UDP, DOCSIS Control Messages
Network Layer     IP
Data Link Layer   IEEE 802.2
Physical Layer    Upstream:   TDMA (mini-slots), QPSK/16-QAM, 5 - 42(65) MHz
                  Downstream: TDM (MPEG), 64/256-QAM, ITU-T J.83 Annex B(A),
                              42(65) - 850 MHz

Items in parentheses refer to EuroDOCSIS, which is a version of DOCSIS with a
modified physical layer targeted at the more DVB-centric European market.

External box cable modems with an Ethernet interface normally act as either MAC-layer
bridges (low-end models) or as routers (high-end SOHO models).

Cable Modem types

A number of different Cable Modem configurations are possible. These three
configurations are the main products we see now. Over time more systems will arrive.

External Cable Modem

The external Cable Modem is a small external box that connects to your computer,
normally through an ordinary Ethernet connection. The downside is that you need to add
a (cheap) Ethernet card to your computer before you can connect the Cable Modem. A
plus is that you can connect more computers to the Ethernet. Also the Cable Modem
works with most operating systems and hardware platforms, including Mac, UNIX,
laptop computers etc.

Another interface for external Cable Modems is USB, which has the advantage of
installing much faster (something that matters, because the cable operators are
normally sending technicians out to install each and every Cable Modem). The
downside is that you can only connect one PC to a USB based Cable Modem.

Internal Cable Modem

The internal Cable Modem is typically a PCI bus add-in card for a PC. This might be the
cheapest implementation possible, but it has a number of drawbacks. The first problem is
that it can only be used in desktop PCs; Macs and laptops are possible, but require a
different design. The second problem is that the cable connector is not galvanically
isolated from the AC mains. This may pose a problem in some CATV networks, requiring a more
expensive upgrade of the network installations. Some countries and/or CATV networks
may not be able to use internal cable modems at all for technical and/or regulatory
reasons.

Interactive Set-Top Box

The interactive set-top box is really a cable modem in disguise. The primary function of
the set-top box is to provide more TV channels on the same limited number of
frequencies. This is possible with the use of digital television encoding (DVB). An
interactive set-top box provides a return channel - often through the ordinary plain old
telephone system (POTS) - that allows the user access to web-browsing, email etc.
directly on the TV screen.

Typical Cable Modem installation

When installing a Cable Modem, a power splitter and a new cable are usually required.
The splitter divides the signal for the "old" installations and the new segment that
connects the Cable Modem. No TV-sets are accepted on the new string that goes to the
Cable Modem.

The transmitted signal from the Cable Modem can be so strong that any TV sets
connected on the same string might be disturbed. The isolation of the splitter may not be
sufficient, so an extra high-pass filter may be needed in the string that goes to the TV-
sets. The high-pass filter allows only the TV-channel frequencies to pass, and blocks the
upstream frequency band. The other reason for the filter is to block ingress in the low
upstream frequency range from the in-house wiring. Noise injected at each individual
residence accumulates in the upstream path towards the head-end, so it is essential to
keep it at a minimum at every single residence that needs Cable Modem service.

Data-interface

On any kind of external cable modem (the majority of what is in use today), you
obviously need some kind of data-interface to connect the computer and the cable
modem.

Ethernet

On most external modems, the data-port interface is 10 Mbps Ethernet. Some might
argue that you need 100 Mbps Ethernet to keep up with the max. 27-56 Mbps
downstream capability of a cable modem, but this is not true. Even in a very good
installation, a cable modem cannot keep up with a 10 Mbps Ethernet, as the
downstream is shared by many users.

The first version of the MCNS standard, which dominates the US market, specified 10
Mbps Ethernet as the only allowable data-interface. The DVB/DAVIC standard is totally
open, allowing any type of interface. Other types of interfaces are being incorporated into
the MCNS standard to allow for a wider range of cable modem configurations.

USB (Universal Serial Bus)


Among others, Intel recently announced that they are working with Broadcom on cable
modems with USB interface. This is expected to bring down the installation hassle for
the many users with less computer skills. Obviously you do not need to open the box to
install an Ethernet card if the computer has a USB interface. If the computer does not
have a USB interface, you will need to install one (and you are back to about the same
hassle-level as with the Ethernet interface).

Cost

The installation cost is a significant issue, as this is something that needs to be done in
the house of every subscriber. The CATV operators and equipment manufacturers need
to try really hard to push down the installation cost to keep the whole operation
profitable.

What is Downstream?

Downstream is the term used for the signal received by the Cable Modem. The electrical
characteristics are outlined in the table below. Notice that most CATV networks in
Europe allow 8 MHz bandwidth TV channels, whereas the US CATV networks allow
only 6 MHz. Again, Europe runs a little faster...

Frequency    42 - 850 MHz in the USA; 65 - 850 MHz in Europe

Bandwidth    6 MHz in the USA; 8 MHz in Europe

Modulation   64-QAM with 6 bits per symbol (normal)
             256-QAM with 8 bits per symbol (faster, but more sensitive to noise)

The raw data-rate depends on the modulation and bandwidth as shown below:

  64-QAM 256-QAM

6 MHz 31.2 Mbit/s 41.6 Mbit/s

8 MHz 41.4 Mbit/s 55.2 Mbit/s

Note: A symbol rate of 6.9 Msym/s is used for 8 MHz bandwidth and 5.2 Msym/s is
used for 6 MHz bandwidth in the above calculations. Raw bit-rate is somewhat higher
than the effective data-rate due to error-correction, framing and other overhead.
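The raw rates in the table follow directly from symbol rate times bits per symbol. A quick check (the function name is ours, for illustration):

```python
# Raw downstream bit rate = symbol rate x bits per symbol.
# Symbol rates from the text: 5.2 Msym/s (6 MHz channel) and 6.9 Msym/s (8 MHz).

BITS_PER_SYMBOL = {"64-QAM": 6, "256-QAM": 8}

def raw_rate_mbit(symbol_rate_msym: float, modulation: str) -> float:
    """Raw bit rate in Mbit/s, rounded to one decimal place."""
    return round(symbol_rate_msym * BITS_PER_SYMBOL[modulation], 1)

assert raw_rate_mbit(5.2, "64-QAM") == 31.2    # 6 MHz channel
assert raw_rate_mbit(6.9, "256-QAM") == 55.2   # 8 MHz channel
```

The other two table entries follow the same way: 6.9 x 6 = 41.4 Mbit/s and 5.2 x 8 = 41.6 Mbit/s.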

Since the downstream data is received by all Cable Modems, the total bandwidth is
shared between all active Cable Modems on the system. This is similar to an Ethernet,
except that the wasted bandwidth on an Ethernet is much higher. Each Cable Modem filters out
the data it needs from the stream of data.

What is Upstream?

Upstream is the term used for the signal transmitted by the Cable Modem. The upstream
always consists of bursts, so that many modems can transmit on the same frequency. The
frequency range is typically 5-65 MHz or 5-42 MHz. The bandwidth per channel may be
e.g. 2 MHz for a 3 Mbit/s QPSK channel.

The modulation forms are QPSK (2 bits per symbol) and 16-QAM (4 bits per symbol),
with the latter being the faster, but also the more sensitive to ingress. One downstream is
normally paired with a number of upstream channels to achieve the balance in data
bandwidths required.

Each modem transmits bursts in time slots that may be marked as reserved,
contention or ranging slots.

Reserved slots
A reserved slot is a time slot that is reserved to a particular Cable Modem. No other
Cable Modem is allowed to transmit in that time slot. The CMTS (Head-End) allocates
the time slots to the various Cable Modems through a bandwidth allocation algorithm
(notice: this algorithm is vendor specific, and may differentiate vendors considerably).

Reserved slots are normally used for longer data transmissions.

Contention slots

Time slots marked as contention slots are open for all Cable Modems to transmit in. If
two Cable Modems decide to transmit in the same time slot, the packets collide and the
data is lost. The CMTS (Head-End) will then signal that no data was received, to make
the Cable Modems try again at some other (random) time.

Contention slots are normally used for very short data transmissions (such as a request
for a number of reserved slots to transmit more data in).
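The collision behaviour in contention slots can be modelled with a toy simulation. The slot model and names here are ours, and real DOCSIS backoff windows are controlled by the CMTS; this only illustrates the idea that two modems picking the same slot both lose:

```python
import random

# Toy model of contention slots: each waiting modem picks a random contention
# slot; if two pick the same slot the bursts collide and both must retry later.

def transmit_in_contention(num_modems: int, num_slots: int, rng: random.Random):
    """Return the set of modems whose request got through without collision."""
    choices = {m: rng.randrange(num_slots) for m in range(num_modems)}
    slot_users = {}
    for modem, slot in choices.items():
        slot_users.setdefault(slot, []).append(modem)
    return {users[0] for users in slot_users.values() if len(users) == 1}

# Two modems, one slot: guaranteed collision, nobody gets through.
assert transmit_in_contention(2, 1, random.Random(0)) == set()
# One modem never collides.
assert transmit_in_contention(1, 5, random.Random(0)) == {0}
```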

Ranging slots

Due to the physical distance between the CMTS (Head-End) and the Cable Modem, the
time delay varies quite a lot and can be in the milliseconds range. To compensate for
this, all Cable Modems employ a ranging protocol that effectively moves the "clock" of the
individual Cable Modem forth or back to compensate for the delay.

To do this a number (normally 3) of consecutive time-slots are set aside for ranging
every now and then. The Cable Modem is commanded to try transmitting in the 2nd
time-slot. The CMTS (Head-End) measures this, and tells the Cable Modem a small
positive or negative correction value for its local clock. The two time slots before and
after are the "gap" required to ensure that the ranging burst does not collide with other
traffic.

The other purpose of the ranging is to make all Cable Modems transmit at a power level
that makes all upstream bursts from all Cable Modems arrive at the CMTS at the same
level. This is essential for detecting collisions, but also required for optimum
performance of the upstream demodulator in the CMTS. The attenuation from the
Cable Modem to the CMTS can vary by more than 15 dB.
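To see why the delays reach the millisecond range, a rough one-way delay calculation helps. The propagation speed of roughly two-thirds the speed of light is a typical assumption for coax/fibre plant, not a figure from the text:

```python
# Rough one-way propagation delay on the cable plant, assuming signals travel
# at about 2/3 the speed of light (~2e8 m/s; assumed, typical for coax/fibre).

PROPAGATION_SPEED = 2.0e8  # metres per second

def one_way_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds."""
    return round(distance_km * 1000 / PROPAGATION_SPEED * 1000, 3)

assert one_way_delay_ms(100) == 0.5   # a 100 km plant adds 0.5 ms each way
```

A modem 100 km from the head-end therefore sees its bursts arrive a full millisecond (round trip) later than a modem next door, which is exactly the offset ranging corrects.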

Downstream data format

Downstream data is framed according to the MPEG-TS (transport stream) specification.
This is a simple 188/204 byte block format with a single fixed sync byte in front of each
block. Reed-Solomon error correction adds 16 parity bytes per block (204 bytes on the
wire decode to 188 bytes), leaving 187 bytes for the MPEG header and payload.

This is where the various standards differ quite a lot. Some standards even allow
various formatting of data within the MPEG-TS payload.
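The fixed-sync framing described above can be sketched as a minimal parser; the function name is ours:

```python
# Minimal sketch: split a downstream byte stream into 188-byte MPEG-TS
# packets, checking the fixed sync byte 0x47 at the start of each packet.

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def split_ts_packets(stream: bytes):
    packets = []
    for off in range(0, len(stream) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        packet = stream[off:off + TS_PACKET_SIZE]
        if packet[0] != SYNC_BYTE:
            raise ValueError(f"lost sync at offset {off}")
        packets.append(packet)
    return packets

packet = bytes([SYNC_BYTE]) + bytes(187)          # one all-zero TS packet
assert len(split_ts_packets(packet * 3)) == 3
```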

For the DVB/DAVIC standard, the framing inside the MPEG-TS payload is simply a
stream of ATM cells.

Distributed-queue dual-bus

In telecommunication, a distributed-queue dual-bus network (DQDB) is a distributed
multi-access network that (a) supports integrated communications using a dual bus and
distributed queuing, (b) provides access to local or metropolitan area networks, and (c)
supports connectionless data transfer, connection-oriented data transfer, and
isochronous communications, such as voice communications.

IEEE 802.6 is an example of a network providing DQDB access methods.

DQDB Concept of Operation

The DQDB Medium Access Control (MAC) algorithm is generally credited to Robert
Newman, who developed it in his PhD thesis in the 1980s at the University of Western
Australia. To appreciate the innovative value of the DQDB MAC algorithm, it must be
seen against the background of LAN protocols at the time, which were based on
broadcast (such as Ethernet, IEEE 802.3) or a ring (like Token Ring, IEEE 802.5, and
FDDI). The DQDB may be thought of as two token rings, one carrying data in each
direction around the ring. The ring is broken between two of the nodes in the ring. (An
advantage of this is that if the ring breaks somewhere else, the broken link can be
closed to form a ring with only one break again. This gives reliability which is important

in Metropolitan Area Networks (MAN), where repairs may take longer than in a LAN
because the damage may be inaccessible.)

The DQDB standard IEEE 802.6 was developed while ATM (Broadband ISDN) was still
in early development, but there was strong interaction between the two standards. ATM
cells and DQDB frames were harmonized. They both settled on essentially a 48-byte
data frame with a 5-byte header. In the DQDB algorithm, a distributed queue was
implemented by communicating queue state information via the header. Each node in a
DQDB network maintains a pair of state variables which represent its position in the
distributed queue and the size of the queue. The headers on the reverse bus
communicated requests to be inserted in the distributed queue so that upstream nodes
would know that they should allow DQDB cells to pass unused on the forward bus. The
algorithm was remarkable for its extreme simplicity.

Distributed Queue Dual Bus (DQDB)

The distributed queue dual bus (DQDB) network uses a different kind of MAC method
based on the use of a distributed queuing algorithm called queued-packet distributed-
switch (QPSX) and a slotted ring arrangement. It uses two unconnected unidirectional
buses, which are normally implemented as a series of point-to-point segments. DQDB
also expects the use of optical fibre links. Some of the characteristics of DQDB are
given in Table .

Table:   Some characteristics of DQDB

The DQDB LAN or MAN transports data in fixed size cells, which happen to look very
much like ATM cells (Figures and ). However, as DQDB offers a MAC service, the
MAC frame may need to be segmented into several cells before transmission (Figure
). The cell structure for DQDB is almost identical to that of an ATM cell. This similarity
between the cell structures is deliberate, so that DQDB will be compatible with B-
ISDN.

Figure:   A DQDB cell

Figure:   A DQDB cell header

Figure:   MAC frame segmentation in DQDB

A DQDB cell differs from an ATM cell in its header and its payload. In the header of the
DQDB cell there is no virtual path identifier (VPI) and the virtual channel identifier (VCI)
has an additional 4 bits. The first 8 bits of the DQDB header form an access control field
(ACF) (in ATM this is the first 8 bits of the VPI). Also, the next 4 bits (the final 4 bits of
the VPI in ATM) form the first 4 bits of the VCI.

The cell payload is structured as follows:


 Segment type (ST): identifies the cell as one of the following:
o single segment: the only cell (no MAC fragmentation was required);
o first segment: the first cell of a segmented MAC frame;
o intermediate segment: an intermediate cell in a fragmented MAC frame;
o last segment: the final cell of a segmented MAC frame.
 Message identifier (MID): the MID is the same for all DQDB cells from the same
MAC frame. This allows the identification of intermediate segments.
 Information: (part of) the MAC frame contents.
 Length (LEN): the length of the information field.
 CRC: covers the whole cell payload.

The cell header contains the following information:

 Access control field (ACF): this contains the BUSY and REQUEST bits that are
used in the operation of the QPSX mechanism. The BUSY bit indicates that the
slot is in use. The REQUEST bit is set in a slot by a node that is waiting to
transmit.
 Virtual channel identifier (VCI): This is not used in a DQDB MAN as there are no
logical connections which require multiplexing -- the ST and MID fields in the
payload are used instead. In a DQDB LAN with a private UNI, there is the
possibility for applications to make use of this field.
 Payload type (PT): same as ATM.
 Cell loss priority (CLP): same as ATM.
 Header error control (HEC): CRC for the header.

A DQDB network consists of two buses that transmit cells in opposite directions, and
each node is connected to both buses. The connections are normally point-to-point, but
are often depicted in the tapped-bus type configuration as shown in Figure . Both
these buses always have a constant number of slots circulating on them. A slot on one
bus is copied to the other.

Figure:   The operation of a DQDB network

The way that the DQDB network operates is by setting up a notion of a distributed
queue at each node. This is done by using a counter -- REQUEST counter (RC) -- that
records how many nodes are waiting to transmit ahead of this node in the queue.
Consider Figure which shows a node on the DQDB network. Only the inputs from the
two buses to the node are shown, for simplicity. By ahead in the queue we mean any
node that joined the distributed queue before this one. RC is incremented for each set
REQUEST bit seen in passing slots on Bus B, and is decremented for each empty
(non-BUSY) slot passing on Bus A, since each empty slot will be used by a node
ahead in the queue.

For this node, we will consider what happens when it wishes to transmit on Bus A.
Firstly, it waits for a passing slot on Bus B with an available REQUEST bit, which it then
sets to indicate it is waiting to transmit. It then transfers the RC value to a down counter
(DC). DC now contains the number of nodes ahead of this node in the queue. DC is
decremented by each empty slot passing on Bus A, as each such slot is used by a node
ahead in the queue. When DC becomes 0, this node can transmit in the next empty slot
on Bus A. Although we have only considered Bus A, an identical procedure is followed if
the node wishes to transmit on Bus B.
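The counter mechanism can be sketched in code. This is a simplified single-bus-direction model under the standard DQDB rules (REQUEST bits on Bus B increment RC; empty slots on Bus A serve queued requests); the class and method names are ours:

```python
# Toy model of one DQDB node's counters for transmission on Bus A.
# RC counts requests queued ahead of us; DC counts down once we join.

class DQDBNode:
    def __init__(self):
        self.rc = 0        # REQUEST counter
        self.dc = None     # down counter; None while we have nothing to send

    def request_on_bus_b(self):
        """A set REQUEST bit passed on the reverse bus (Bus B)."""
        self.rc += 1

    def want_to_send(self):
        """Join the distributed queue: copy RC into DC, reset RC."""
        self.dc = self.rc
        self.rc = 0

    def slot_on_bus_a(self, busy: bool) -> bool:
        """Process a slot on Bus A; return True if we transmit in it."""
        if busy:
            return False               # slot already in use, counters untouched
        if self.dc is None:
            self.rc = max(self.rc - 1, 0)   # empty slot serves a queued request
            return False
        if self.dc == 0:
            self.dc = None             # our turn: take this empty slot
            return True
        self.dc -= 1                   # empty slot taken by a node ahead of us
        return False

node = DQDBNode()
node.request_on_bus_b()                          # one node already waiting
node.want_to_send()                              # we queue behind it (DC = 1)
assert node.slot_on_bus_a(busy=False) is False   # first empty slot is theirs
assert node.slot_on_bus_a(busy=False) is True    # second empty slot is ours
```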

Figure:   DQDB node collecting queue information about Bus A

Figure:   DQDB node waiting to send on Bus A

DQDB also has 4 priority transmission levels, which are also based around a distributed
queue system using counters, i.e. each node effectively has a separate queue, with an
RC and DC pair, for each priority level. The priority levels are indicated by 4 further
REQUEST bits in the ACF in the cell header, labelled R1, R2, R3 and R4. The highest
priority is R4. To cater for the priority levels, the behaviour of RC and DC must be
modified as follows:

 RC must count all request bits of its priority level or higher.


 DC decrements if an empty cell passes on Bus A, as this will be used by a waiting
node with a higher priority further downstream.
 DC increments if it sees a REQUEST bit for a higher priority queue on Bus B.

High Performance Parallel Interface (HIPPI)

The high performance parallel interface (HIPPI) is a star-hub, switch-based
technology for providing very high speed connectivity over short distances. The original
HIPPI protocol specified the use of 50 copper twisted pairs to provide a uni-directional

point-to-point link for a maximum length of 25 metres. HIPPI uses connections set up at
very high speed through local switches. There is also work in progress to specify the
use of HIPPI over fibre channel (an optical fibre based technology). Some of the
characteristics of HIPPI are given in Table , with information about the fibre channel
flavour of HIPPI given in brackets. The normal operating bit rate of HIPPI is 800Mb/s.

Table:   Some characteristics of HIPPI

A protocol suite is defined for the various functions to be performed in a HIPPI network
(Figure ).

Figure:   HIPPI protocol suite

The HIPPI physical layer (HIPPI-PH) is responsible for the electrical, mechanical and
signalling aspects of HIPPI. The data is transferred in parallel normally as 32 bit words,
but the cable can be doubled up (often called double wide HIPPI) to transfer 64 bit
words and so offer 1600Mb/s. As HIPPI has unidirectional links, up to four separate
wires are required to support duplex 1600Mb/s data exchange. HIPPI uses a block
parity based error control scheme at transmission time.
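As a toy illustration of parity-based error control, a parity bit over each 32-bit word can be computed as below. The real HIPPI block parity scheme's details differ; this only shows the principle:

```python
# Compute an even-parity bit over a 32-bit word: 1 if the word has an odd
# number of set bits. A flipped bit in transmission changes the parity and
# is therefore detectable (though not correctable).

def parity32(word: int) -> int:
    word &= 0xFFFFFFFF
    p = 0
    while word:
        p ^= word & 1
        word >>= 1
    return p

assert parity32(0b1011) == 1   # three set bits -> odd
assert parity32(0b1001) == 0   # two set bits -> even
```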

The HIPPI switching control (HIPPI-SC) function is responsible for setting up
connections between HIPPI switches. The HIPPI network can be composed of many
connections between HIPPI switches. The HIPPI network can be composed of many
HIPPI switches (Figure ). The boxes marked as HIPPI device would be high
performance machines (such as supercomputers), interworking units (for instance to B-
ISDN) or intelligent peripheral devices. The switch control function uses information in
the I-field of a HIPPI frame (see below). A HIPPI source requests a connection giving
the relevant information in the I-field and the switch will try and make the connection to
the destination, then data transfer can proceed, after which the connection is released.
The control bits in the I-field and their effect on connection establishment are explained
below. It is possible to get switches that can set up connections in less than one
microsecond!

Figure:   A LAN constructed using HIPPI switches

HIPPI specifies a framing protocol for transmitting information (Figure ). The unit of
transmission for data is a burst of 256 words (32 bits or 64 bits). The I-field contains
information that is used for switching by the hub(s) (Figure ). The various bits of the I-
field are:

 Locally defined (L): if this bit is set to one, it signifies that the rest of the I-field
has locally (privately) defined format and not the standard format.
 Width (W): if this bit is set it indicates that a double wide HIPPI connection (64 bit
words) should be attempted. If the source or the destination can not support this,
the connection request is rejected.
 Direction (D): if this bit is set it indicates that the source and destination address
bits should be swapped round. This provides a method for devices to simply route
return messages, but is not useful when a LAN type arrangement is in operation
such as that depicted in Figure .
 Path selection (PS): determines how the I-field address information is treated. It
can be treated as a 24 bit block which contains a series of port numbers that
effectively defines a route through a set of switches, or it can be used as two 12 bit
addresses, as depicted in Figure . The former mode is called source route
mode and the latter logical address mode.
 Camp on (C): if set, this bit instructs the switch not to reject the connection if the
destination is busy, but to wait until the destination is free and then re-attempt the
connection.
 Source/Destination address: this information in logical address mode is often
encoded as a 6 bit switch number and a 6 bit port number.
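The flags above can be illustrated with a small encoder. The bit positions chosen here are assumptions for illustration only, not the normative HIPPI-SC I-field layout:

```python
# Illustrative packing of the I-field control bits described in the text.
# The flag meanings follow the text; the exact bit positions are assumed.

def build_i_field(local: bool, double_wide: bool, swap_addresses: bool,
                  logical_addressing: bool, camp_on: bool,
                  address_bits: int) -> int:
    word = address_bits & 0xFFFFFF       # 24-bit routing/address field
    word |= camp_on << 24                # C: wait for a busy destination
    word |= logical_addressing << 25     # PS: logical vs source-route mode
    word |= swap_addresses << 26         # D: swap source/destination
    word |= double_wide << 27            # W: request 64-bit words
    word |= local << 28                  # L: locally defined format
    return word

# Logical address mode: a 12-bit address as 6-bit switch + 6-bit port.
dest = (3 << 6) | 5                      # switch 3, port 5
i_field = build_i_field(False, True, False, True, False, dest)
assert i_field & 0xFFFFFF == dest        # address bits preserved
assert (i_field >> 27) & 1 == 1          # W bit set
```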
Figure:   HIPPI data framing hierarchy

Figure:   HIPPI data frame I-field

The HIPPI link encapsulation (HIPPI-LE) offers an interface that supports IEEE 802.2
Logical Link Control PDUs, i.e. it looks like a CSMA/CD or token-bus/ring LAN but 80/160 times
as fast! This provides a way for other standard technologies to be used over HIPPI, e.g.
the Internet Protocol suite, IP, UDP and TCP.

The HIPPI fibre channel (HIPPI-FC) interface is still under development but, when
completed, should offer increased capabilities, as shown by the bracketed
information in Table .

The HIPPI intelligent peripheral interface (HIPPI-IPI) will allow high speed computer
controlled peripherals (such as optical disc drives) to be attached, and the HIPPI-FC
developments intend to add support for the small computer systems interface (SCSI)
command set.

High Speed Networks

Use of LANs is now commonplace in commercial, research, government and
educational establishments. The evolution of hardware over the past decade has
resulted in inexpensive solutions to interfacing computer systems to networks such as
Ethernet and Token Ring. Such systems can typically offer around 10Mb/s on the
shared media. Further, the ISDN is available widely in Britain and in many places in

Europe and it is now possible to buy hardware that will allow access to the ISDN from a
desktop machine.

The technology explosion has two important consequences:

 Increase in the processing power available on the desktop. It is possible to go
to a high street shop and buy a PC with hardware that can offer facilities for real-
time conferencing across a LAN, or perform simulations in real-time.
 A diversity of software wanting to make use of the increased processing
power and available network capacity. As the hardware has become more
capable, so the application designers have begun to try to offer increasingly
sophisticated services. Whereas LANs were initially used to provide access to,
say, mailboxes or maybe a shared printer, we now find a host of services
requiring the use of the network to provide real-time distributed services, for
instance a distributed database or a network file service. Additionally, applications
that make greater demands on the network are also becoming very popular -- for
example multimedia conferencing and computer supported cooperative work
(CSCW).

As well as network-hungry applications resulting in increased network load, we find that
there are now few computer installations that are not networked -- the days of a
stand-alone PC used by a single person are virtually gone. So not only has the demand
for network capacity increased per user, but the number of users all contending for the
same piece of LAN has increased.

To try to rationalise the use of the available network capacity within an office or a
site-wide LAN, sites will often split the network up into segments. These segments are
connected to a backbone network by use of bridges. The use of segments and bridges
helps to localise traffic, so there is effectively less contention for the media compared to
the case where there is only one segment. Further organisational changes, such as
replication of data, can also help to improve services for the user, but such schemes
bring their own problems! While segmenting the network helps to create more
localised traffic flows, the backbone may still need to contend with much inter-segment
traffic.

Sometimes a user would simply like to have more network capacity available to him/her
than the 10Mb/s or so that, say, an Ethernet can offer; and if segments and bridges are
used, the backbone needs to offer a greater capacity to cope with the many segments
attached to it.

In this section we discuss some of the mechanisms of various high speed technologies
that can bring 100Mb/s and more to the LAN environment. Some of the technologies
discussed here can be used to interconnect LANs over large public areas such as a
town or city, and so are also referred to as metropolitan area networks (MANs).
Fibre Distributed Data Interface (FDDI)

The fibre distributed data interface (FDDI) is a token based technology using fibre
optical links. The FDDI specification defines the use of OSI layers 1 and 2 (Physical
layer and Data Link layer). Some characteristics of FDDI are given in Table . FDDI's
operation is similar to that of token ring.

Table:   Some characteristics of FDDI

Dual counter-rotating rings are used to improve reliability. The rings are labelled the
primary ring and the secondary ring. Stations attached to the FDDI may be connected
to both rings -- dual attach stations (DASs) -- or only to the primary ring -- single
attach stations (SASs). Although the stations are logically attached in a ring, the
physical connection is more conveniently realised in a hub-star fashion by using wiring
concentrators.

The FDDI may be used as a LAN, but is more often used as a backbone and so most of
the attached stations will be bridges that are dual attached.

In a token ring, the single active ring monitor encodes the clock signal into the token that
it generates, using Manchester encoding. However, at a 100Mb/s data rate, Manchester
encoding requires a 200Mbaud signalling rate. So in FDDI, NRZI coding is used (signal
transition when a 1 is transmitted and no transition when a 0 is transmitted) and all
stations have their own local clock which is used in data transmission. When receiving
frames, stations synchronise using the incoming signal. The physical interface is
depicted in Figure .

Figure:   Schematic of FDDI physical interface

The 4B/5B encoder takes each group of 4 bits and replaces it with a 5-bit symbol
(Table ), and the 4B/5B decoder performs the reverse operation. The use of 4B/5B
coding with NRZI ensures frequent signal transitions, since the code groups are chosen
to avoid long runs of zeros. The latency buffer, which holds two 5-bit symbols, is used
to give correct symbol boundary alignment.
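The two encoding steps can be sketched as follows. The symbol table is the standard FDDI 4B/5B data-symbol mapping; the function names are illustrative.

```python
# FDDI 4B/5B data symbols: each data nibble maps to a 5-bit code group
# chosen so the NRZI line signal never goes long without a transition.
FOUR_B_FIVE_B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def encode_4b5b(data: bytes) -> str:
    """Replace each 4-bit nibble (high nibble first) with its 5-bit symbol."""
    out = []
    for byte in data:
        out.append(FOUR_B_FIVE_B[byte >> 4])
        out.append(FOUR_B_FIVE_B[byte & 0xF])
    return "".join(out)

def nrzi(bits: str, level: int = 0) -> str:
    """NRZI: a 1 toggles the line level, a 0 leaves it unchanged."""
    out = []
    for b in bits:
        if b == "1":
            level ^= 1
        out.append(str(level))
    return "".join(out)
```

Note the 25% overhead: 4 data bits become 5 line bits, which is why 100Mb/s of data needs only 125Mbaud on the fibre, rather than the 200Mbaud Manchester would require.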

Table:   FDDI 4B/5B data symbols

Table:   FDDI control code symbols

The FDDI frame formats are shown in Figure . The various fields are as follows (the
various control symbols are given in Table ):

 Preamble (PA): 16 (or more) IDLE symbols. Causes line signal changes every bit
to ensure receiver clock synchronisation at the beginning of a frame.
 Start delimiter (SD): the 2 symbols J and K are used to show the start of the
frame and also to allow interpretation of correct symbol boundaries.
 Frame control (FC): 2 symbols indicating whether or not this is an information
frame or a MAC frame (e.g. the token), with some additional control information for
the station identified by the DA.
 Destination address (DA): 4 or 12 symbols identifying the destination station. 12
symbols are used for a full 48-bit MAC address, 4 symbols for a 16-bit local
addressing mechanism. If the first bit of the (decoded) address is a 1 then this
identifies a group address.
 Source address (SA): 4 or 12 symbols identifying the source station.
 Information: this is limited to about 9000 symbols (4500 decoded octets) in
length, the limit being determined by the maximum length of time that a station
can hold the token.
 Frame check sequence (FCS): 8 symbols containing a 32-bit CRC. The FCS
covers the fields FC, DA, SA, information and FCS.
 End delimiter (ED): 1 or 2 T control symbols.
 Frame status (FS): 3 symbols which are a combination of R and S symbols
indicating if the frame has been seen by the destination station and if it has been
copied by the destination station.

Figure:   FDDI frame formats

The operation of FDDI is much the same as token ring. A station must be in possession
of a token before it can transmit an information frame. Once it has seen the frame go
around the ring it can then regenerate the token, allowing someone else to transmit.
However, the potentially large size of the FDDI ring means that it has a higher latency
than token ring, and so more than one frame may be circulating around the ring at a
given time. A station's physical interface will therefore repeat the PA, SD, FC and DA
fields before it knows whether this is its own frame, which it should remove from the
ring. If this occurs, the station stops sending any more of the frame and instead sends
out IDLE symbols until it receives an SD indicating another frame. This leads to many
frame fragments around the ring, which are removed by receiving stations.

All stations also keep a note of the token rotation time (TRT), which is the time elapsed
since the station last saw the token. As the load on the FDDI network increases, this
time will increase. The TRT can be compared to a preset value called the target token
rotation time (TTRT) to allow a priority operation scheme: only priority frames can be
transmitted if the TRT is greater than the TTRT.

A timer called the token hold timer (THT) also controls the normal transmission of data.
When a station receives the token it transfers the TRT to the THT, which starts to count
down. The station can then continue to transmit frames as long as the THT remains
greater than the TTRT. In effect the THT determines the maximum number of
octets/symbols that can be sent in one FDDI frame, as the THT determines the
maximum time that a station can remain transmitting data.
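The timed-token rule above is often summarised as: a station may keep sending ordinary (asynchronous) frames only for as long as the token arrived early. A minimal sketch using that common formulation, with illustrative names and units:

```python
def token_hold_budget(trt: float, ttrt: float) -> float:
    """How long a station may keep transmitting asynchronous frames
    after capturing the token: the token's earliness, TTRT - TRT,
    or zero if the token arrived late."""
    return max(0.0, ttrt - trt)

def may_transmit_async(trt: float, ttrt: float) -> bool:
    """Ordinary frames may be sent only while some budget remains."""
    return token_hold_budget(trt, ttrt) > 0.0
```

This is why a heavily loaded ring (large TRT) squeezes out asynchronous traffic first, leaving capacity for the priority frames mentioned above.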

FDDI is a connectionless system and is designed for data purposes only. FDDI II is an
evolution of FDDI which can be seen as a superset of FDDI. It was developed as a
standards project that proposed adding isochronous transmission services to FDDI so
that it would be possible to support real-time applications such as multimedia. The
proposal includes a mechanism for dividing the 100Mb/s into 16 channels, each of
which would be allocated for a certain use, e.g. video, bursty data, etc. However,
because existing FDDI hardware would be unable to have plug-in upgrades, and with
the advent of ATM LANs and DQDB, this option of FDDI may not be so heavily
deployed.

IPv6

Internet Protocol version 6 (IPv6) is a network layer protocol for packet-switched
internetworks. It is designated as the successor of IPv4, the current version of the
Internet Protocol, for general use on the Internet.

The main change brought by IPv6 is a much larger address space that allows greater
flexibility in assigning addresses. The extended address length eliminates the need to
use network address translation to avoid address exhaustion, and also simplifies
aspects of address assignment and renumbering when changing providers. It was not
the intention of IPv6 designers, however, to give permanent unique addresses to every
individual and every computer.

It is common to see examples that attempt to show that the IPv6 address space is
extremely large. For example, IPv6 supports 2^128 (about 3.4×10^38) addresses, or
approximately 5×10^28 addresses for each of the roughly 6.5 billion (6.5×10^9) people
alive today.[1] In a different perspective, this is 2^52 addresses for every star in the
known universe[2] -- a million times as many addresses per star as IPv4 supported for
our single planet.
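The figures quoted above are easy to check with integer arithmetic; the star count used here (about 7×10^22) is a rough assumption, which is why the per-star figure only approximately matches 2^52.

```python
total = 2 ** 128                      # size of the IPv6 address space
per_person = total // (65 * 10**8)    # roughly 6.5 billion people
per_star = total // (7 * 10**22)      # assumed ~7x10^22 stars

print(f"{total:.4e}")                 # about 3.4028e+38
```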

The large number of addresses allows a hierarchical allocation of addresses that may
make routing and renumbering simpler. With IPv4, complex CIDR techniques were
developed to make the best possible use of a restricted address space. Renumbering,
when changing providers, can be a major effort with IPv4, as discussed in RFC 2071
and RFC 2072. With IPv6, however, renumbering becomes largely automatic, because
the host identifiers are decoupled from the network provider identifier. Separate address
spaces exist for ISPs and for hosts, which are "inefficient" in address space bits but are
extremely efficient for operational issues such as changing service providers.

Introduction

By the early 1990s, it was clear that the address-allocation changes introduced during
the preceding decade were not enough to prevent IPv4 address exhaustion and that
further changes to IPv4 were needed.[3] By the beginning of 1992, several proposed
systems were being circulated and by the end of 1992, the IETF announced a call for
white papers (RFC 1550) and the creation of the "IP, the Next Generation" (IPng) area
of working groups.[3][4]

IPng was adopted by the Internet Engineering Task Force on July 25, 1994, with the
formation of several "IP Next Generation" (IPng) working groups.[3] By 1996, a series of
RFCs was released defining IPv6, starting with RFC 1883. (Incidentally, IPv5 was not a
successor to IPv4, but an experimental flow-oriented streaming protocol intended to
support video and audio.)

It is expected that IPv4 will be supported alongside IPv6 for the foreseeable future. IPv4-
only nodes (clients or servers) will not be able to communicate directly with IPv6 nodes,
and will need to go through an intermediary; see Transition mechanisms below.

Features and differences from IPv4

To a great extent, IPv6 is a conservative extension of IPv4. Most transport- and
application-layer protocols need little or no change to work over IPv6; exceptions are
application protocols that embed network-layer addresses (such as FTP or NTPv3).

Applications, however, usually need small changes and a recompile in order to run over
IPv6.

Larger address space

The main feature of IPv6 that is driving adoption today is the larger address space:
addresses in IPv6 are 128 bits long versus 32 bits in IPv4.

The larger address space avoids the potential exhaustion of the IPv4 address space
without the need for network address translation (NAT) and other devices that break the
end-to-end nature of Internet traffic. It also makes administration of medium and large
networks simpler, by avoiding the need for complex subnetting schemes. Subnetting
will, ideally, revert to its purpose of logical segmentation of an IP network for optimal
routing and access.

The drawback of the large address size is that IPv6 carries some bandwidth overhead
over IPv4, which may hurt regions where bandwidth is limited (header compression can
sometimes be used to alleviate this problem). IPv6 addresses are also very difficult to
remember; use of the Domain Name System (DNS) is necessary.

Stateless address autoconfiguration

IPv6 hosts can be configured automatically when connected to a routed IPv6 network
using ICMPv6 router discovery messages. When first connected to a network, a host
sends a link-local multicast router solicitation request for its configuration parameters; if
configured suitably, routers respond to such a request with a router advertisement
packet that contains network-layer configuration parameters. [5]

If IPv6 autoconfiguration is not suitable, a host can use stateful configuration
(DHCPv6) or be configured manually. Stateless autoconfiguration is only suitable for
hosts: routers must be configured manually or by other means.[6]

Multicast

Multicast is part of the base specifications in IPv6, unlike IPv4, where it was introduced
later.

IPv6 does not have a link-local broadcast facility; the same effect can be achieved by
multicasting to the all-hosts group (FF02::1).

Most environments, however, do not currently have their network infrastructures
configured to route multicast: multicast on a single subnet will work, but global multicast
might not.

Link-local addresses

IPv6 interfaces have link-local addresses in addition to the global addresses that
applications usually use. These link-local addresses are always present and never
change, which simplifies the design of configuration and routing protocols.

Jumbograms

In IPv4, packets are limited to 64 KiB of payload. When used between capable
communication partners and on communication links with a maximum transmission unit
(MTU) larger than 65,575 octets (65,535 + 40 for the header), IPv6 has optional support
for packets over this limit, referred to as jumbograms, which can be as large as 4 GiB.
The use of jumbograms may improve performance over high-MTU networks.

Network-layer security

IPsec, the protocol for IP network-layer encryption and authentication, is an integral part
of the base protocol suite in IPv6; this is unlike IPv4, where it is optional (but usually
implemented). IPsec, however, is not widely used at present except for securing traffic
between IPv6 Border Gateway Protocol routers.

Mobility

Unlike mobile IPv4, Mobile IPv6 (MIPv6) avoids triangular routing and is therefore as
efficient as normal IPv6. This advantage is mostly hypothetical, as neither MIPv4 nor
MIPv6 are widely deployed today.

Simpler processing by routers

IPv4 has a checksum field that covers all of the packet header. Since certain fields
(such as the TTL field) change during forwarding, the checksum must be recomputed by
every router. IPv6 has no checksum at the network layer but instead relies on link-layer
and transport protocols to perform error checking, which should make forwarding
faster.
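To illustrate why IPv4 forwarding costs a checksum recomputation, here is a minimal sketch of the Internet checksum over a hand-built 20-byte header. The addresses are documentation-range placeholders and this is not a full header builder.

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """Internet checksum: one's-complement sum of 16-bit words,
    computed with the checksum field itself zeroed."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A minimal 20-byte header, TTL=64, checksum field (offset 10) zeroed.
hdr = bytearray(struct.pack("!BBHHHBBH4s4s",
                            0x45, 0, 20, 0, 0, 64, 6, 0,
                            bytes([192, 0, 2, 1]), bytes([192, 0, 2, 2])))
before = ipv4_checksum(bytes(hdr))
hdr[8] -= 1                        # a router decrements TTL...
after = ipv4_checksum(bytes(hdr))  # ...and must recompute the checksum
```

A header carrying its correct checksum sums to zero under the same routine, which is how receivers verify it.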

Deployment status

As of November 2007, IPv6 accounts for a minuscule percentage of the live addresses
in the publicly-accessible Internet, which is still dominated by IPv4.

With the notable exceptions of stateless auto-configuration, more flexible addressing
and Secure Neighbor Discovery (SEND), many of the features of IPv6 have been ported
to IPv4 in a more or less elegant manner. Thus IPv6 deployment is primarily driven by
IPv4 address space exhaustion, which has been slowed by the introduction of classless
inter-domain routing (CIDR) and the extensive use of network address translation
(NAT).

IPv4 exhaustion

Estimates as to when the pool of available IPv4 addresses will be exhausted vary
widely, and should be taken with caution. In 2003, Paul Wilson (director of APNIC)
stated that, based on then-current rates of deployment, the available space would last
until 2023.[7] In September 2005 a report by Cisco Systems reported that the pool of
available addresses would be exhausted in as little as 4 to 5 years. [8] As of November
2007, a daily updated report projected that the IANA pool of unallocated addresses
would be exhausted in May 2010, with the various Regional Internet Registries using up
their allocations from IANA in April 2011. [9] This report also argues that, if assigned but
unused addresses were reclaimed and used to meet continuing demand, allocation of
IPv4 addresses could continue until 2017.

Government incentives

A number of governments, however, are starting to require support for IPv6 in new
equipment. The U.S. Government, for example, has specified that the network
backbones of all federal agencies must be capable of deploying IPv6 by 2008,[10] and
spent the money to acquire a /16 block of address space (enough for about 281 trillion
/64 networks) to start the deployment.[11][12][13]

The People's Republic of China has a five-year plan for deployment of IPv6, called the
China Next Generation Internet.

Current deployment

In February 1999, the IPv6 Forum,[14] a worldwide consortium of leading Internet
vendors, industry subject-matter experts, and research and education networks, was
founded to promote IPv6 technology and raise market and industry awareness.

To drive the deployment of IPv6, regional and local IPv6 Task Forces were created. [15]
On 20 July 2004 ICANN announced that the root DNS servers for the Internet had been
modified to support both IPv6 and IPv4. The current integration of IPv6 on existing
network infrastructures can be monitored from different sources, for example:

 Regional Internet Registries (RIR) IPv6 Prefix Allocation [16]


 IPv6 Transit services[17]
 Japan ISP IPv6 services[18]

In addition, modern operating systems have IPv6 turned on by default.

Addressing

128-bit length

The primary change from IPv4 to IPv6 is the length of network addresses. IPv6
addresses are 128 bits long (as defined by RFC 4291), whereas IPv4 addresses are 32
bits; where the IPv4 address space contains roughly 4 billion addresses, IPv6 has
enough room for 3.4×10^38 unique addresses.

IPv6 addresses are typically composed of two logical parts: a 64-bit (sub-)network
prefix, and a 64-bit host part, which is either automatically generated from the
interface's MAC address or assigned sequentially. Because globally unique MAC
addresses offer an opportunity to track user equipment, and so users, across time and
IPv6 address changes, RFC 3041 was developed to reduce the prospect of a user's
identity being permanently tied to an IPv6 address, thus restoring some of the
possibilities of anonymity that exist in IPv4. RFC 3041 specifies a mechanism by which
time-varying random bit strings can be used as interface identifiers, replacing
unchanging and traceable MAC addresses.
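A much-simplified sketch of the idea follows. RFC 3041 itself derives successive identifiers from an MD5-hashed history value; here we merely draw random bits and clear the universal/local ("u") bit, since a locally generated identifier must not claim global (EUI-64-derived) uniqueness.

```python
import secrets

def random_interface_id() -> int:
    """Draw a random 64-bit interface identifier with the
    universal/local ("u") bit cleared, marking it locally generated."""
    iid = secrets.randbits(64)
    return iid & ~(1 << 57)   # the "u" bit is bit 0x02 of the first octet

iid = random_interface_id()
```

The host then combines such an identifier with the 64-bit network prefix to form a temporary address, and periodically repeats the process.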

Network notation

IPv6 networks are written using CIDR notation.

An IPv6 network (or subnet) is a contiguous group of IPv6 addresses the size of which
must be a power of two; the initial bits of addresses, which are identical for all hosts in
the network, are called the network's prefix.

A network is denoted by the first address in the network and the size in bits of the prefix
(in decimal), separated with a slash. For example, 2001:0db8:1234::/48 stands for the
network with addresses 2001:0db8:1234:0000:0000:0000:0000:0000 through
2001:0db8:1234:ffff:ffff:ffff:ffff:ffff.

Because a single host can be seen as a network with a 128-bit prefix, you will
sometimes see host addresses written followed with /128.
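Python's standard ipaddress module follows this notation and can be used to check the example above:

```python
import ipaddress

net = ipaddress.ip_network("2001:db8:1234::/48")
print(net[0])              # 2001:db8:1234::
print(net[-1])             # 2001:db8:1234:ffff:ffff:ffff:ffff:ffff
print(net.num_addresses)   # 2**80 (128 - 48 = 80 free bits)

host = ipaddress.ip_network("2001:db8:1234::1/128")  # a single host
```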
Kinds of IPv6 addresses

IPv6 addresses are divided into 3 categories:[19]

 Unicast Addresses
 Multicast Addresses
 Anycast Addresses

A unicast address identifies a single network interface. A packet sent to a unicast
address is delivered to that specific computer. The following types of addresses are
unicast IPv6 addresses:

 Global unicast addresses
 Link-local addresses
 Site-local addresses
 Unique local IPv6 unicast addresses
 Special addresses

Multicast addresses are used to define a set of interfaces that typically belong to
different nodes instead of just one. When a packet is sent to a multicast address, the
protocol delivers the packet to all interfaces identified by that address. Multicast
addresses begin with the prefix FF00::/8, and their second octet identifies the
addresses' scope, i.e. the range over which the multicast address is propagated.
Commonly used scopes include link-local (0x2), site-local (0x5) and global (0xE).
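The scope field can be read straight out of the address bytes, as this sketch using the standard ipaddress module shows:

```python
import ipaddress

def multicast_scope(addr: str) -> int:
    """Return the 4-bit scope field of an IPv6 multicast address:
    the low nibble of the second octet (the high nibble holds flags)."""
    a = ipaddress.IPv6Address(addr)
    if not a.is_multicast:
        raise ValueError(f"{addr} is not a multicast address")
    return a.packed[1] & 0x0F
```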

Anycast addresses are also assigned to more than one interface, belonging to different
nodes. However, a packet sent to an anycast address is delivered to just one of the
member interfaces, typically the “nearest” according to the routing protocol’s idea of
distance. Anycast addresses cannot be identified easily: they have the structure of
normal unicast addresses, and differ only by being injected into the routing protocol at
multiple points in the network.

Special addresses

There are a number of addresses with special meaning in IPv6:

Link local
 ::/128 — the address with all zeros is an unspecified address, and is to be used
only in software.
 ::1/128 — the loopback address is a localhost address. If an application in a host
sends packets to this address, the IPv6 stack will loop these packets back to the
same host (corresponding to 127.0.0.1 in IPv4).
 fe80::/10 — The link-local prefix specifies that the address only is valid in the local
physical link. This is analogous to the Autoconfiguration IP address
169.254.0.0/16 in IPv4.
Site local
 fc00::/7 — unique local addresses (ULA) are routable only within a set of
cooperating sites. They were defined in RFC 4193 as a replacement for site-local
addresses (see below). The addresses include a 40-bit pseudorandom number
that minimizes the risk of conflicts if sites merge or packets somehow leak out.

IPv4
 ::ffff:0:0/96 — this prefix is used for IPv4 mapped addresses (see Transition
mechanisms below).
 2002::/16 — this prefix is used for 6to4 addressing.

Multicast
 ff00::/8 — the multicast prefix is used for multicast addresses, as defined in
"IP Version 6 Addressing Architecture" (RFC 4291).[20]

Used in examples, deprecated, or obsolete
 ::/96 — the zero prefix was used for IPv4-compatible addresses; it is now
obsolete.
 2001:db8::/32 — this prefix is used in documentation (RFC 3849). Anywhere
where an example IPv6 address is given, addresses from this prefix should be
used.
 fec0::/10 — The site-local prefix specifies that the address is valid only inside the
local organisation. Its use was deprecated in September 2004 by RFC 3879, and
new implementations must not support this special type of address.

Teredo
 2001::/32 — typically used for Teredo_tunneling addresses

There are no address ranges reserved for broadcast in IPv6 — applications use
multicast to the all-hosts group instead. IANA maintains the official list of the IPv6
address space. Global unicast assignments can be found at the various RIRs or at the
GRH DFP pages.
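Most of the special ranges listed above can be recognised with the standard ipaddress module:

```python
import ipaddress

assert ipaddress.IPv6Address("::1").is_loopback          # loopback
assert ipaddress.IPv6Address("fe80::1").is_link_local    # fe80::/10
assert ipaddress.IPv6Address("fc00::1").is_private       # ULA, fc00::/7
assert ipaddress.IPv6Address("ff02::1").is_multicast     # multicast
assert ipaddress.IPv6Address("fec0::1").is_site_local    # deprecated fec0::/10

# 6to4 addresses (2002::/16) expose their embedded IPv4 address:
v4 = ipaddress.IPv6Address("2002:c000:201::1").sixtofour
print(v4)  # 192.0.2.1
```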

IP version 6 (IPv6) is a new version of the Internet Protocol, designed as the successor
to IP version 4 (IPv4) [RFC-791]. The changes from IPv4 to IPv6 fall primarily into the
following categories:

o Expanded Addressing Capabilities

IPv6 increases the IP address size from 32 bits to 128 bits, to support more levels of
addressing hierarchy, a much greater number of addressable nodes, and simpler auto-
configuration of addresses. The scalability of multicast routing is improved by adding a

"scope" field to multicast addresses. And a new type of address called an "anycast
address" is defined, used to send a packet to any one of a group of nodes.

o Header Format Simplification

Some IPv4 header fields have been dropped or made optional, to reduce the common-
case processing cost of packet handling and to limit the bandwidth cost of the IPv6
header.

o Improved Support for Extensions and Options

Changes in the way IP header options are encoded allow for more efficient forwarding,
less stringent limits on the length of options, and greater flexibility for introducing new
options in the future.

o Flow Labeling Capability

A new capability is added to enable the labeling of packets belonging to particular traffic
"flows" for which the sender requests special handling, such as non-default quality of
service or "real-time" service.

o Authentication and Privacy Capabilities

Extensions to support authentication, data integrity, and (optional) data confidentiality
are specified for IPv6.

This document specifies the basic IPv6 header and the initially defined IPv6 extension
headers and options. It also discusses packet size issues, the semantics of flow labels
and traffic classes, and the effects of IPv6 on upper-layer protocols. The format and
semantics of IPv6 addresses are specified separately in [ADDRARCH]. The IPv6
version of ICMP, which all IPv6 implementations are required to include, is specified in
[ICMPv6].

Terminology

node - a device that implements IPv6.

router - a node that forwards IPv6 packets not explicitly addressed to itself.

host - any node that is not a router.

upper layer - a protocol layer immediately above IPv6. Examples are transport
protocols such as TCP and UDP, control protocols such as ICMP, routing protocols
such as OSPF, and internet or lower-layer protocols being "tunneled" over (i.e.,
encapsulated in) IPv6 such as IPX, AppleTalk, or IPv6 itself.

link - a communication facility or medium over which nodes can communicate at the
link layer, i.e., the layer immediately below IPv6. Examples are Ethernets (simple or
bridged); PPP links; X.25, Frame Relay, or ATM networks; and internet (or higher)
layer "tunnels", such as tunnels over IPv4 or IPv6 itself.

neighbors - nodes attached to the same link.

interface - a node's attachment to a link.

address - an IPv6-layer identifier for an interface or a set of interfaces.

packet - an IPv6 header plus payload.

link MTU - the maximum transmission unit, i.e., maximum packet size in octets, that
can be conveyed over a link.

path MTU - the minimum link MTU of all the links in a path between a source node
and a destination node.

IPv6 and the Domain Name System

IPv6 addresses are represented in the Domain Name System by AAAA records (so-
called quad-A records) for forward lookups; reverse lookups take place under ip6.arpa
(previously ip6.int), where address space is delegated on nibble boundaries. This
scheme, which is a straightforward adaptation of the familiar A record and in-addr.arpa
schemes, is defined in RFC 3596.
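Delegation on nibble boundaries means the reverse name is simply the 32 hexadecimal digits of the address, reversed, one DNS label each. A sketch follows; Python's ipaddress module provides the same string as the reverse_pointer attribute.

```python
import ipaddress

def ip6_arpa_name(addr: str) -> str:
    """Build the ip6.arpa reverse-lookup name for an IPv6 address."""
    nibbles = ipaddress.IPv6Address(addr).exploded.replace(":", "")
    return ".".join(reversed(nibbles)) + ".ip6.arpa"

print(ip6_arpa_name("2001:db8::1"))
```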

The AAAA scheme was one of two proposals at the time the IPv6 architecture was
being designed. The other proposal, designed to facilitate network renumbering, would
have had A6 records for the forward lookup and a number of other innovations such as
bit-string labels and DNAME records. It is defined in the experimental RFC 2874 and its
references (with further discussion of the pros and cons of both schemes in RFC 3364).

AAAA record fields

NAME      Domain name
TYPE      AAAA (28)
CLASS     Internet (1)
TTL       Time to live in seconds
RDLENGTH  Length of RDATA field
RDATA     String form of the IPv6 address as described in RFC 3513

RFC 3484 specifies how applications should select an IPv6 or IPv4 address for use,
including addresses retrieved from DNS.

IPv6 and DNS RFCs

 DNS Extensions to support IP version 6 - RFC 1886


 DNS Extensions to Support IPv6 Address Aggregation and Renumbering - RFC
2874
 Tradeoffs in Domain Name System (DNS) Support for Internet Protocol version 6
(IPv6) - RFC 3364
 Default Address Selection for Internet Protocol version 6 (IPv6) - RFC 3484
 Internet Protocol Version 6 (IPv6) Addressing Architecture - RFC 3513
 DNS Extensions to Support IP Version 6 (Obsoletes 1886 and 3152) - RFC 3596

Transition mechanisms

Until IPv6 completely supplants IPv4, which is not likely to happen in the foreseeable
future, a number of so-called transition mechanisms are needed to enable IPv6-only
hosts to reach IPv4 services and to allow isolated IPv6 hosts and networks to reach the
IPv6 Internet over the IPv4 infrastructure. [22] contains an overview of the transition
mechanisms mentioned below.

Dual stack

Since IPv6 is a conservative extension of IPv4, it is relatively easy to write a network
stack that supports both IPv4 and IPv6 while sharing most of the code. Such an
implementation is called a dual stack, and a host implementing a dual stack is called a
dual-stack host. This approach is described in RFC 4213.

Most current implementations of IPv6 use a dual stack. Some early experimental
implementations used independent IPv4 and IPv6 stacks. There are no known
implementations that implement IPv6 only.

Tunneling

In order to reach the IPv6 Internet, an isolated host or network must be able to use the
existing IPv4 infrastructure to carry IPv6 packets. This is done using a technique known
as tunneling which consists of encapsulating IPv6 packets within IPv4, in effect using
IPv4 as a link layer for IPv6.

IPv6 packets can be directly encapsulated within IPv4 packets using protocol number
41. They can also be encapsulated within UDP packets e.g. in order to cross a router or
NAT device that blocks protocol 41 traffic. They can of course also use generic
encapsulation schemes, such as AYIYA or GRE.
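Protocol-41 encapsulation can be sketched as prepending a minimal IPv4 header, with the Protocol field set to 41, to the IPv6 packet. The header checksum is left at zero for brevity (a real stack must compute it) and the addresses are documentation-range placeholders.

```python
import struct

PROTO_IPV6 = 41  # IANA protocol number for IPv6-in-IPv4 (6in4)

def encapsulate_6in4(ipv6_packet: bytes, src: bytes, dst: bytes) -> bytes:
    """Wrap an IPv6 packet in a minimal IPv4 header (Protocol = 41)."""
    header = struct.pack("!BBHHHBBH4s4s",
                         0x45, 0,                 # version/IHL, TOS
                         20 + len(ipv6_packet),   # total length
                         0, 0x4000,               # ID, flags (DF set)
                         64, PROTO_IPV6, 0,       # TTL, protocol, checksum
                         src, dst)
    return header + ipv6_packet

# A dummy 40-byte IPv6 header as payload, with documentation addresses.
inner = b"\x60" + b"\x00" * 39
outer = encapsulate_6in4(inner, bytes([192, 0, 2, 1]), bytes([198, 51, 100, 1]))
```

The receiving tunnel endpoint simply strips the outer 20 bytes and hands the payload to its IPv6 stack, which is why a NAT or firewall that drops protocol 41 breaks this scheme and forces UDP-based alternatives.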

Automatic tunneling

Automatic tunneling refers to a technique where the tunnel endpoints are automatically
determined by the routing infrastructure. The recommended technique for automatic
tunneling is 6to4 tunneling, which uses protocol 41 encapsulation. [23] Tunnel endpoints
are determined by using a well-known IPv4 anycast address on the remote side, and
embedding IPv4 address information within IPv6 addresses on the local side. 6to4 is
widely deployed today.

Another automatic tunneling mechanism is ISATAP.[24] This protocol treats the IPv4
network as a virtual IPv6 local link, with mappings from each IPv4 address to a link-local
IPv6 address.

Teredo is an automatic tunneling technique that uses UDP encapsulation and is claimed
to be able to cross multiple NAT boxes. [25] Teredo is not widely deployed today, but an
experimental version of Teredo is installed with the Windows XP SP2 IPv6 stack. IPv6,
6to4 and Teredo are enabled by default in Windows Vista and Mac OS X Leopard and
Apple's AirPort Extreme.[26]

Configured tunneling

Configured tunneling is a technique where the tunnel endpoints are configured explicitly,
either by a human operator or by an automatic service known as a tunnel broker.[27]
Configured tunneling is usually more deterministic and easier to debug than automatic
tunneling, and is therefore recommended for large, well-administered networks.

Configured tunneling uses protocol 41 in the Protocol field of the IPv4 packet. This
method is also better known as 6in4.

Proxying and translation

When an IPv6-only host needs to access an IPv4-only service (for example a web
server), some form of translation is necessary. One form of translation is the use of a
dual-stack application-layer proxy, for example a web proxy.

NAT-like techniques for application-agnostic translation at the lower layers have also
been proposed. Most have been found to be too unreliable in practice because of the
wide range of functionality required by common application-layer protocols, and are
considered by many to be obsolete.
