Data Link Layer - Computer Networks


DATA LINK LAYER

Data link layer:


The data link layer is responsible for transferring frames
from one node to an adjacent node over a link.
 DATA LINK LAYER DESIGN ISSUES:
 The data link layer has a number of specific functions to carry
out. These functions include:
 Providing a well-defined service interface to the network layer.
 Determining how the bits of the physical layer are grouped into
frames (framing).
 Dealing with transmission errors.
 Regulating the flow of frames.
Data link layer-Introduction
 The data link layer takes the packets it gets from the network layer
and encapsulates them into frames for transmission.
 Each frame contains a frame header, a payload field for holding the
packet, and a frame trailer.

Figure: relationship between packets and frames – on both the sending
and the receiving machine, the packet sits in the payload field of a
frame, between the frame header and the frame trailer.


Data link layer-Introduction
 Services Provided to the Network Layer:
 The function of the data link layer is to provide services to
the network layer.
 The principal service is transferring data from the network
layer on the source machine to the network layer on the
destination machine.

Figure: (a) virtual communication, (b) actual communication


Types of services provided to the Network Layer

 Unacknowledged Connectionless service

 Acknowledged Connectionless service

 Acknowledged Connection-Oriented service


Unacknowledged Connectionless service:
 The source machine sends independent frames to the
destination machine without having the destination
acknowledge them.
 No connection is established beforehand or released
afterward.
 If a frame is lost due to noise on the line, no attempt is
made to recover it in the data link layer.
 Appropriate for voice, where late data are worse than bad
data.
 Most LANs use unacknowledged connectionless service in
the data link layer.
 Acknowledged Connectionless service
 When this service is offered, there are still no
logical connections used, but each frame sent is
individually acknowledged.
 In this way, the sender knows whether a frame has
arrived correctly.
 If it has not arrived within a specified time interval,
it can be sent again.
 This service is useful over unreliable channels,
such as wireless systems.
 Acknowledged Connection-Oriented service
 The most sophisticated service the data link layer can
provide to the network layer is connection-oriented service.
 The source and destination machines establish a
connection before any data are transferred.
 Each frame sent over the connection is numbered, and the
data link layer guarantees that each frame is received
exactly once and that all frames are received in the right
order.
 When connection-oriented service is used, transfers go
through three distinct phases:
 1) Connection established
 2) Data transferred
 3) Connection released
Figure: the transport layer provides end-to-end acknowledgement (ACK/NAK)
between the source and destination hosts, whereas the data link layer
acknowledges each frame hop by hop, on every link along the path.

Services Provided to the Network Layer

Figure: placement of the data link protocol


Framing:
 In order to provide service to the network layer, the data link
layer must use the service provided to it by the physical layer.
 The physical layer accepts a raw bit stream and attempts
to deliver it to the destination.
 The bit stream is not guaranteed to be error free.
 The number of bits received may be less than, equal to, or
more than the number of bits transmitted, and they may have
different values.
 It is up to the data link layer to detect and, if necessary,
correct errors.
 The usual approach is for the data link layer to break the bit
stream up into discrete frames and compute a checksum for
each frame.
Framing:
 When a frame arrives at the destination, the checksum is
recomputed.
 If the newly computed checksum differs from the one
contained in the frame, the data link layer knows that an error
has occurred and takes steps to deal with it
(e.g., discarding the bad frame and possibly also sending
back an error report).
 Breaking the bit stream up into frames is more difficult than it
at first appears. One way to achieve this framing is to insert
time gaps between frames, much like the spaces between words
in ordinary text.
Methods:

1) Character count

2) Flag bytes with byte stuffing

3) Starting and ending flags, with bit stuffing


Character count:

Figure: a character stream (a) without errors, (b) with one error

• The first framing method uses a field in the header to specify
the number of characters in the frame.
• When the data link layer at the destination sees the character
count, it knows how many characters follow and hence where the
end of the frame is.
Character count:
 The trouble with this algorithm is that the count can be
garbled by a transmission error.
 For example, if the character count of 5 in the second frame
becomes a 7, the destination gets out of synchronization and
is unable to locate the start of the next frame.
 Disadvantages:
 The count can be garbled by a transmission error.
 Resynchronization is not possible: even with a checksum, the
receiver only knows that the frame is bad; there is no way to tell
where the next frame starts.
 Asking for retransmission does not help either, because the
start of the retransmitted frame is not known.
 This method is no longer used.
Character Stuffing/Byte Stuffing

Figure: (a) a frame delimited by flag bytes;
(b) four examples of byte sequences before and after stuffing

 The second framing method gets around the
problem of resynchronization; it is called character
(byte) stuffing.
 Most protocols use the same byte, called the flag byte,
as both the starting and ending delimiter (FLAG).
 Two consecutive flag bytes indicate the end of one
frame and the start of the next one.
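
As an illustration, here is a minimal Python sketch of byte stuffing, assuming the common escape-byte convention in which an ESC byte is stuffed in front of any FLAG or ESC that happens to occur in the data. The FLAG and ESC values and the function names are illustrative, not taken from any particular protocol.

FLAG = 0x7E  # frame delimiter (illustrative value)
ESC  = 0x7D  # escape byte (illustrative value)

def byte_stuff(payload: bytes) -> bytes:
    """Insert an ESC before any FLAG or ESC byte inside the payload,
    then wrap the result in starting and ending FLAG bytes."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Strip the delimiters and remove the stuffed ESC bytes."""
    body = frame[1:-1]          # drop the starting and ending FLAG
    out, escaped = bytearray(), False
    for b in body:
        if not escaped and b == ESC:
            escaped = True      # the next byte is data, even if it looks like FLAG/ESC
            continue
        out.append(b)
        escaped = False
    return bytes(out)

data = bytes([0x41, FLAG, 0x42, ESC, 0x43])
assert byte_unstuff(byte_stuff(data)) == data

Stuffing at the sender and destuffing at the receiver are exact inverses, so the payload is delivered unchanged even when it contains the flag value.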
Starting and ending flags, with bit stuffing
 This technique allows data frames to contain an
arbitrary number of bits and allows character codes with an
arbitrary number of bits per character.
 Each frame begins and ends with a special bit pattern, 01111110,
called the flag byte.
 Whenever the sender's data link layer encounters five
consecutive 1s in the data, it automatically stuffs a 0 bit into
the outgoing bit stream.
 Whenever the receiver sees five consecutive incoming 1 bits
followed by a 0 bit, it automatically destuffs (i.e. deletes) the 0
bit.
 If the user data contain the flag pattern 01111110, it is transmitted
as 011111010 but stored in the receiver's memory as 01111110.
Starting and ending flags, with bit stuffing

Figure: bit stuffing. (a) The original data. (b) The data as they appear
on the line. (c) The data as they are stored in the receiver's memory
after destuffing.
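
The stuffing rule translates directly into code. The sketch below operates on strings of '0'/'1' characters purely for readability; that representation and the function names are assumptions made for the example.

FLAG = "01111110"  # flag pattern that delimits frames

def bit_stuff(bits: str) -> str:
    """After every run of five consecutive 1s in the data, stuff a 0."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit, removed again by the receiver
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    """Delete the 0 that follows any run of five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                i += 1        # skip the stuffed 0 that must follow five 1s
                run = 0
        else:
            run = 0
        i += 1
    return "".join(out)

data = "0110111111111100"
framed = FLAG + bit_stuff(data) + FLAG
assert bit_destuff(framed[len(FLAG):-len(FLAG)]) == data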
Error Control:
 The next problem is how to make sure all frames are eventually
delivered to the network layer at the destination, and in the
proper order.

 The usual way to ensure reliable delivery is to provide the
sender with some feedback about what is happening at the other
end of the line.

 The protocol calls for the receiver to send back special control
frames bearing positive or negative acknowledgements about
the incoming frames.
Error Control:
 If the sender receives a positive acknowledgement about a
frame, it knows the frame has arrived safely.
 On the other hand, a negative acknowledgement means that
something has gone wrong, and the frame must be transmitted
again.
 An additional complication comes from hardware troubles
(e.g. a frame that vanishes completely, so that no
acknowledgement ever comes back).
 Timers and sequence numbers are managed so as to ensure
that each frame is ultimately passed to the network layer at the
destination exactly once.
Error Control:
 Maintaining timers for error control: when a sender
transmits a frame, it generally also starts a timer.
 The timer is set to expire after an interval long enough for the
frame to reach the destination, be processed there, and have
the acknowledgement propagate back to the sender.
 Normally, the frame will be correctly received and the
acknowledgement will get back before the timer runs out, in
which case the timer is cancelled.
 However, if either the frame or the acknowledgement is lost,
the timer will go off, alerting the sender to a potential
problem. The obvious solution is to just transmit the frame
again.
 Sequence numbers are used to recognize duplicate frames.
Flow control:
 The next issue is a sender that systematically wants to transmit
frames faster than the receiver can accept them.
 This happens when the sender is running on a fast (or lightly
loaded) computer and the receiver is running on a
slow (or heavily loaded) machine.
 The sender keeps pumping frames out at a high
rate until the receiver is completely swamped.
 Even if the transmission is error free, at a certain
point the receiver will simply be unable to handle the
frames as they arrive and will start to lose some.
To prevent this situation, two approaches are used:
• Feedback-based flow control
• Rate-based flow control

In feedback-based flow control, the receiver sends
back information to the sender giving it permission to
send more data.
In rate-based flow control, the protocol has a built-in
mechanism that limits the rate at which senders may
transmit data, without using feedback from the receiver.
Error Detection and Correction methods

Error Detection and Correction:
 In some cases it is sufficient to detect an error; in others, the
errors must also be corrected.
 For example:
 On a reliable medium, where the error rate is low, error
detection followed by a request for retransmission works
efficiently.
 In contrast, on an unreliable medium, retransmission after
error detection may result in another error, and still
another, and so on. Hence error correction is desirable.
• Data can be corrupted during transmission; some
applications require that errors be detected and
corrected.
• ERROR: whenever bits flow from one point to
another, they are subject to unpredictable changes
because of interference; this interference can change
the shape of the signal.
• Types of errors:
1) Single-bit error
2) Burst error
Single Bit Error
• The term single-bit error means that only one bit of a
given data unit (such as a byte, character, or data
unit) is changed from 1 to 0 or from 0 to 1.
Burst Error
• The term burst error means that two or more bits in the data
unit have changed from 0 to 1 or vice versa.
• Note that a burst error does not necessarily mean that the errors
occur in consecutive bits.
• The length of the burst error is measured from the first
corrupted bit to the last corrupted bit. Some bits in between
may not be corrupted.
Error Detection:
• These are codes used to detect errors, not to prevent errors
from occurring or to correct errors that have occurred.
• Solution: a short group of bits is appended to the end of the data
unit. This technique is called redundancy.
Error Detecting Codes
• The basic approach used for error detection is the use
of redundancy, where additional bits are added to
facilitate detection and correction of errors.
Popular techniques are:
• Simple Parity check
• Two-dimensional Parity check
• Checksum
• Cyclic redundancy check
Simple Parity Check/One-dimensional Parity
Check/Vertical Redundancy Check (VRC):
• It is the most common and least expensive mechanism for
error detection.
• It is also called a parity check.
• Append a single bit at the end of the data block such that
the number of ones is even.
 Even parity (odd parity is similar):
0110011  01100110
0110001  01100011
• Performance: detects all errors that affect an odd number of
bits in a data block.
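
A minimal sketch of even parity in Python; the function names are my own, and the bits are handled as strings of '0'/'1' characters for clarity.

def add_even_parity(bits: str) -> str:
    """Append one parity bit so the total number of 1s becomes even."""
    parity = bits.count("1") % 2          # 0 if already even, 1 otherwise
    return bits + str(parity)

def even_parity_ok(block: str) -> bool:
    """A received block passes the check when its number of 1s is even."""
    return block.count("1") % 2 == 0

assert add_even_parity("0110011") == "01100110"   # the first example above
assert add_even_parity("0110001") == "01100011"   # the second example above
assert even_parity_ok("01100110")
assert not even_parity_ok("01100100")             # a single flipped bit is caught

An even number of flipped bits leaves the parity unchanged, which is exactly why only odd numbers of bit errors are detected.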
Two-Dimensional Parity Check
• Performance can be improved by using a two-
dimensional parity check, which organizes the block
of bits in the form of a table.
• A parity check bit is calculated for each row, which
is equivalent to a simple parity check bit. Parity
check bits are also calculated for all columns; then
both are sent along with the data.
• At the receiving end these are compared with the
parity bits calculated on the received data.
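
The table construction can be sketched as follows; the row/column layout mirrors the description above, and the helper name is an assumption made for the example.

def two_d_parity(rows):
    """rows: equal-length bit strings. Appends an even-parity bit to each
    row, then adds a final row of column parities (computed over the
    data columns and the row-parity column)."""
    with_row_parity = [r + str(r.count("1") % 2) for r in rows]
    width = len(with_row_parity[0])
    column_parity = "".join(
        str(sum(int(r[c]) for r in with_row_parity) % 2) for c in range(width)
    )
    return with_row_parity + [column_parity]

# The whole block (rows plus parity row) is what gets transmitted; the
# receiver recomputes both sets of parities and compares.
for row in two_d_parity(["1100111", "1011101", "0111001"]):
    print(row)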
CRC (Cyclic Redundancy Check)
• It is based on binary division.
• Polynomial codes are used for generating the check bits in
the form of a cyclic redundancy check.
 When the polynomial code method is employed, the sender
and receiver must agree upon a generator polynomial G(x)
in advance; both the highest- and lowest-order bits of the
generator must be 1.
 To compute the checksum for a frame with m bits,
corresponding to the polynomial M(x), the frame must be
longer than the generator polynomial.
 The idea is to append a checksum to the end of the frame in
such a way that the polynomial represented by the check-
summed frame is divisible by G(x).
 When the receiver gets the checksummed frame, it tries
dividing it by G(x). If there is a remainder, there has been a
transmission error; otherwise there is no error.
Algorithm:
 Let r be the degree of G(x).

 Append r zero bits to the low-order end of the frame,
so it now contains m+r bits and corresponds to the
polynomial x^r M(x).

 Divide the bit string corresponding to x^r M(x) by the
bit string corresponding to G(x), using modulo-2
division.
 Subtract the remainder (which is always r or fewer
bits) from the bit string corresponding to x^r M(x), using
modulo-2 subtraction.
 The result is the checksummed frame to be
transmitted. Call its polynomial T(x).
Calculation of the polynomial code checksum:

Example: frame 1001, generator 1011

Figure: division in the CRC decoder for two cases (frame accepted /
frame rejected)
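
Working through the example above: appending r = 3 zero bits to the frame 1001 gives 1001000, and modulo-2 division by the generator 1011 leaves the remainder 110, so the transmitted checksummed frame is 1001110. The sketch below implements the algorithm on strings of '0'/'1' characters; that representation and the function names are assumptions made for readability.

def mod2_div_remainder(bits: str, generator: str) -> str:
    """Modulo-2 (XOR) division of `bits` by `generator`; returns the remainder."""
    work = list(bits)
    for i in range(len(bits) - len(generator) + 1):
        if work[i] == "1":                    # subtract (XOR) only when the leading bit is 1
            for j, g in enumerate(generator):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return "".join(work[-(len(generator) - 1):])

def crc_encode(frame: str, generator: str) -> str:
    """Append r = degree-of-G zero bits, divide, and replace them with the remainder."""
    r = len(generator) - 1
    remainder = mod2_div_remainder(frame + "0" * r, generator)
    return frame + remainder

def crc_check(received: str, generator: str) -> bool:
    """The checksummed frame is accepted when it is exactly divisible by the generator."""
    return set(mod2_div_remainder(received, generator)) <= {"0"}

assert crc_encode("1001", "1011") == "1001110"   # remainder 110, as in the example above
assert crc_check("1001110", "1011") is True
assert crc_check("1001100", "1011") is False     # a corrupted bit leaves a nonzero remainder

crc_check simply divides the received frame by the generator and accepts it only when the remainder is all zeros, exactly as described above.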
Performance of CRC:
• CRC is a very effective error detection technique. If the divisor
is chosen according to the previously mentioned rules, its
performance can be summarized as follows:
• CRC can detect all single-bit errors.
• CRC can detect all double-bit errors (provided the generator has
at least three 1s).
• CRC can detect any odd number of errors (provided the generator
has x+1 as a factor).
• CRC can detect all burst errors shorter than the degree of the
polynomial.
• CRC detects most longer burst errors with a high
probability.
• For example, CRC-12 detects 99.97% of error bursts of length 12
or more.
Checksum
• Used by upper-layer protocols.
• It is based on the concept of redundancy.
• It uses one's complement arithmetic.
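
A sketch of the 16-bit one's-complement checksum idea used by upper-layer Internet protocols. The word size, the big-endian word layout, and the zero-padding of an odd final byte are assumptions stated here for the example.

def ones_complement_checksum(data: bytes) -> int:
    """Sum the data as 16-bit words, fold any carries back into the low
    16 bits (one's-complement addition), and return the complement."""
    if len(data) % 2:
        data += b"\x00"                     # assumed padding rule for an odd final byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                      # end-around carry
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

segment = b"networking"                     # even length keeps the demo simple
cksum = ones_complement_checksum(segment)
# Receiver: re-summing the data together with the checksum word gives 0.
assert ones_complement_checksum(segment + cksum.to_bytes(2, "big")) == 0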
Error Correcting Codes
• The techniques that we have discussed so far can
detect errors, but do not correct them.
Error correction can be handled in two ways:
• In one, when an error is discovered, the receiver can
have the sender retransmit the entire data unit. This
is known as backward error correction.
• In the other, the receiver can use an error-correcting
code, which automatically corrects certain errors.
This is known as forward error correction.
Error correcting codes:
 Forward Error Correction: FEC is the only error-control
scheme that both detects and corrects transmission
errors at the receiving end, without calling for
retransmission.
 Example: Hamming code
 The number of bits in the Hamming code depends on the
number of bits in the data character, through the relation
 2^n >= m + n + 1
 where n = number of Hamming (check) bits
 m = number of bits in the data character
Error correcting codes:
 Number of bits in the Hamming code (for m = 12 data bits):
 2^n >= m + n + 1
 Try n = 4:

 2^4 = 16, but m + n + 1 = 12 + 4 + 1 = 17, and 16 < 17,
so 4 check bits are not enough.
 Try n = 5:

 2^5 = 32 >= 12 + 5 + 1 = 18,

 so 5 bits are required for the Hamming code.

 12 bits of data + 5 bits of code = 17 bits in the data stream.
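
The same relation can be evaluated mechanically; a small sketch (the function name is my own):

def hamming_check_bits(m: int) -> int:
    """Smallest n satisfying 2**n >= m + n + 1, for m data bits."""
    n = 0
    while 2 ** n < m + n + 1:
        n += 1
    return n

assert hamming_check_bits(12) == 5   # the case worked out above
assert hamming_check_bits(7) == 4    # 7 data bits need 4 check bits (next slide)
assert hamming_check_bits(4) == 3    # 4 data bits need 3 check bits (later example)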


Error correcting codes:
Hamming Code
Calculation of redundancy bits for 7-bit data:
• r1 is the parity bit for positions 1, 3, 5, 7, 9, 11 (r1, d3, d5, d7, d9, d11)
• r2 is the parity bit for positions 2, 3, 6, 7, 10, 11 (r2, d3, d6, d7, d10, d11)
• r4 is the parity bit for positions 4, 5, 6, 7 (r4, d5, d6, d7)
• r8 is the parity bit for positions 8, 9, 10, 11 (r8, d9, d10, d11)
Note: these positions are fixed.
Hamming Code
• The figure above shows how a Hamming code is used for correction
of 4-bit data (d4 d3 d2 d1) with the help of three redundant
bits (r4 r2 r1).
• For the example data 1010, first r1 (= 0) is calculated
from the parity of bit positions 1, 3, 5 and 7. Then
the parity bit r2 is calculated from bit positions 2, 3, 6
and 7. Finally, the parity bit r4 is calculated from bit
positions 4, 5, 6 and 7, as shown.
• If any corruption occurs in the transmitted codeword
1010010, the bit position in error can be found by
recomputing the check bits at the receiving end.
• For example, if the received codeword is 1110010, the
recomputed check bits are 110, which indicates that the bit
position in error is 6, the decimal value of 110.
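
The 4-bit example above can be reproduced with a short sketch. It assumes even parity and check bits placed at positions 1, 2 and 4 of the 7-bit codeword, matching the description; the function names and the string representation are my own.

def hamming74_encode(d: str) -> str:
    """d = 'd4d3d2d1'. Returns the codeword written as positions 7..1,
    with even-parity check bits r1, r2, r4 at positions 1, 2 and 4."""
    d4, d3, d2, d1 = (int(b) for b in d)
    r1 = d1 ^ d2 ^ d4            # even parity over positions 1, 3, 5, 7
    r2 = d1 ^ d3 ^ d4            # even parity over positions 2, 3, 6, 7
    r4 = d2 ^ d3 ^ d4            # even parity over positions 4, 5, 6, 7
    # positions:                    7   6   5   4   3   2   1
    return "".join(str(b) for b in (d4, d3, d2, r4, d1, r2, r1))

def hamming74_correct(code: str) -> str:
    """Recompute the three check bits at the receiver; read together they
    give the position of a single corrupted bit (0 means no error)."""
    bits = [None] + [int(b) for b in reversed(code)]   # bits[1..7], indexed by position
    c1 = bits[1] ^ bits[3] ^ bits[5] ^ bits[7]
    c2 = bits[2] ^ bits[3] ^ bits[6] ^ bits[7]
    c4 = bits[4] ^ bits[5] ^ bits[6] ^ bits[7]
    error_position = c4 * 4 + c2 * 2 + c1
    if error_position:
        bits[error_position] ^= 1                      # flip the corrupted bit back
    return "".join(str(b) for b in reversed(bits[1:]))

assert hamming74_encode("1010") == "1010010"           # the transmitted codeword above
assert hamming74_correct("1110010") == "1010010"       # check bits 110 -> error at bit 6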
Flow and Error Control Techniques

Figure: frame header of the data link layer – the network layer hands a
packet down from its buffer; the data link layer places it in the frame's
info field, and the frame header carries kind, seq and ack fields.
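
A minimal sketch of the frame object that the elementary protocols below pass between the two data link layers. The field names follow the figure; the Python types and the string values for kind are assumptions.

from dataclasses import dataclass

@dataclass
class Frame:
    kind: str     # "data", "ack" or "nak" -- lets the receiver tell frame types apart
    seq: int      # sequence number of this frame
    ack: int      # sequence number of the frame being acknowledged (piggybacked)
    info: bytes   # the packet handed down by the network layer

outgoing = Frame(kind="data", seq=0, ack=1, info=b"packet from the network layer")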


Elementary Data Link Protocols:
 Protocols:
 1) An unrestricted simplex protocol
 2) A simplex stop-and-wait protocol
 3) A simplex protocol for a noisy channel
Unrestricted Simplex Protocol
 Data are transmitted in one direction only.
 Both the transmitting and receiving network layers are always
ready.
 Processing time can be ignored.
 Infinite buffer space is available.
 The protocol consists of two distinct procedures, a sender and
a receiver. The sender runs in the data link layer of the source
machine and the receiver runs in the data link layer of the
destination machine.
 MAX_SEQ is not used because no sequence numbers or
acknowledgements are used here.
 The only event type possible is frame_arrival (i.e. the arrival of an
undamaged frame).
A Simplex Stop-and-Wait Protocol:
 If data frames arrive at the receiver faster than
they can be processed, the frames must be stored until
they can be used.
 Normally, the receiver does not have enough storage
space, especially if it is receiving data from many
sources.
 This may result in frames being discarded.
 To prevent this, there must be feedback from the
receiver to the sender.
A simplex stop-and-wait protocol:
 Definition: a protocol in which the sender sends one frame
and then waits for an acknowledgement before
proceeding is called stop-and-wait.

Figure: flow diagram – A sends a frame for each request from its network
layer and then waits; B acknowledges each frame on arrival, and only
after the ack arrives does A send the next frame.
A simplex stop-and-wait protocol:
 The difference in receiver 2 (the stop-and-wait receiver) is that,
after delivering a packet to the network layer, it sends an
acknowledgement frame back to the sender before entering the wait
loop again. Only the arrival of that frame back at the sender
matters, so the receiver need not put any particular information in it.
 Points:
 1) The sender starts out by fetching a packet from the network layer,
using it to construct a frame and sending it on its way.
 2) The sender must wait until an acknowledgement frame
arrives before looping back and fetching the next packet
from the network layer.
Ambiguities with Stop-and-Wait
[unnumbered frames]

Figure: (a) frame 1 lost – A times out and retransmits frame 1, then
sends frame 2; (b) the ACK for frame 1 lost – A times out and
retransmits frame 1, then sends frame 2.
In parts (a) and (b) transmitting station A acts the same way,
but in part (b) receiving station B accepts frame 1 twice.
A simplex protocol for a Noisy Channel
 The channel is noisy: frames may be damaged or lost.
 Good case: the data frame arrives intact, the ack is sent back and
received, and the next frame is sent.
 Bad cases:
 The data frame is damaged or lost, hence no ack; the sender
times out and resends. No problem.
 The data frame arrives intact but the ack is lost; the sender
times out and resends, and the receiver gets a duplicate frame.
Problem.
 Solution: keep a sequence number in each frame to
distinguish a new frame from a duplicate.
A simplex protocol for a Noisy Channel
 Protocols in which the sender waits for a positive
acknowledgement before advancing to the next data
item are often called PAR (Positive Acknowledgement with
Retransmission) or ARQ (Automatic Repeat reQuest) protocols.
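
A toy simulation of the PAR idea: a 1-bit sequence number, retransmission on timeout, and duplicate discarding at the receiver. The lossy channel, the loss probability and the dictionary frame format are assumptions made purely for the illustration; a real implementation would use actual timers over a physical link.

import random

LOSS = 0.3                                    # assumed probability that a transmission is lost

def lossy(frame):
    """Return the frame, or None if the channel 'loses' it."""
    return None if random.random() < LOSS else frame

def receive(expected_seq, frame):
    """Deliver a new frame, silently discard a duplicate, ack either way."""
    if frame["seq"] == expected_seq:
        print("receiver: delivering", frame["info"], "to the network layer")
        expected_seq ^= 1                     # 1-bit sequence number: 0, 1, 0, 1, ...
    return expected_seq, {"kind": "ack", "ack": frame["seq"]}

def par_send(packets):
    seq, expected = 0, 0
    for info in packets:
        while True:                           # retransmit until the frame is acknowledged
            frame = lossy({"kind": "data", "seq": seq, "info": info})
            if frame is None:
                print("sender: timeout, resending seq", seq)
                continue
            expected, ack = receive(expected, frame)
            if lossy(ack) is None:
                print("sender: ack lost, timeout, resending seq", seq)
                continue                      # the duplicate will be discarded above
            break
        seq ^= 1                              # next frame uses the other sequence number

par_send(["packet-0", "packet-1", "packet-2"])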
Sliding Window Protocols
 Must be able to transmit data in both directions.
 Choices for utilization of the reverse channel:
 Mix DATA frames with ACK frames.

 Piggyback the ACK:

 The receiver waits for DATA traffic in the opposite direction.

 The ACK field in the frame header carries the sequence
number of the frame being ACKed.

 Result: better use of the channel capacity.
Sliding window protocols:
 In the previous protocols, data frames were transmitted in one
direction only.
 In most practical situations, there is a need for transmitting
data in both directions.
 One way of achieving full-duplex data transmission is to have
two separate communication channels, each one for
simplex data traffic (in different directions).
 We then have two separate physical circuits, each with a
"forward" channel (for data) and a "reverse" channel (for
acknowledgements).

Figure: data flow from source to destination on one channel;
acknowledgements flow back on the other.
Sliding window protocols:
 Disadvantage: the bandwidth of the reverse channel is almost
entirely wasted.
 A better idea is to use the same circuit for data in both
directions.
 In this model the data frames from A to B are intermixed
with the acknowledgement frames from A to B.
 Kind: by looking at the kind field in the header of an incoming
frame, the receiver can tell whether the frame is data or an
acknowledgement.
Sliding window protocols:
 Piggybacking:

Figure: frame 1 is sent from source to destination; the destination
returns the ack for frame 1 attached to (piggybacked on) its own frame 2.

• When a data frame arrives, instead of immediately sending a separate
control frame, the receiver restrains itself and waits until the network
layer passes it the next packet.
• The acknowledgement is then attached to the outgoing data frame.
• Disadvantage: this technique temporarily delays outgoing
acknowledgements.
• If the data link layer waits longer than the sender's timeout period,
the frame will be retransmitted.
Sliding window protocols:
 Rule: the sender waits a fixed number of milliseconds. If a
new packet arrives quickly, the acknowledgement is
piggybacked onto it; otherwise, if no new packet has arrived
by the end of this time period, the data link layer just sends a
separate acknowledgement frame.
 In a sliding window protocol each frame contains a sequence
number ranging from 0 up to some maximum.
 The maximum is usually 2^n - 1, so the sequence number fits
nicely in an n-bit field.
 The stop-and-wait sliding window protocol uses n = 1,
restricting the sequence numbers to 0 and 1.
 The sender must keep all unacknowledged frames in its memory for
possible retransmission.
Sliding window protocols:
 Thus if the maximum window size is n, the sender needs n
buffers to hold the unacknowledged frames.
 Example: a 3-bit sequence field gives the values 000, 001,
010, ..., 111; here n = 3, so the maximum sequence number is
2^3 - 1 = 7 and sequence numbers run from 0 to 7.
Sliding window: the sender has a window of frames and maintains a
list of consecutive sequence numbers for frames that it is
permitted to send without waiting for ACKs.
The receiver has a window that is a list of frame sequence numbers it is
permitted to accept.
Note: the sending and receiving windows do NOT have to be the same
size.
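
Because sequence numbers wrap around after reaching the maximum, window bookkeeping uses circular arithmetic. A small sketch for the 3-bit case (the inc and between helper names follow common textbook usage; the code itself is an illustrative assumption):

SEQ_BITS = 3
MAX_SEQ = 2 ** SEQ_BITS - 1            # sequence numbers 0..7 for a 3-bit field

def inc(seq):
    """Advance a sequence number circularly: 0, 1, ..., 7, 0, 1, ..."""
    return (seq + 1) % (MAX_SEQ + 1)

def between(a, b, c):
    """True when a <= b < c circularly; used to decide whether an
    acknowledgement falls inside the sender's current window."""
    return (a <= b < c) or (c < a <= b) or (b < c < a)

assert inc(7) == 0
assert between(6, 7, 2)                # a window that has wrapped around past 0
assert not between(2, 5, 4)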
Sliding window protocols:

Figure: a sliding window of size 1, with a 3-bit sequence number.
(a) Initially. (b) After the first frame has been sent. (c) After the
first frame has been received. (d) After the first acknowledgement has
been received.
Sliding window protocols:
 Sliding window protocols: 3 methods
 1) 1-bit sliding window protocol
 2) Go-Back-N
 3) Selective Repeat
 1-bit sliding window protocol:
 Window size is 1.
 Stop-and-wait (ARQ).
 The sender must get an ack before it can send the next frame.
 Both machines are sending and receiving.
1-bit sliding window protocol:
 Example:
 A is trying to send its frame 0 to B;
B is trying to send its frame 0 to A.
 Imagine A's timeout is too short. A repeatedly times out and
sends multiple copies to B, all with seq=0, ack=1.
 When the first of these gets to B, it is accepted and B sets
expected=1. B sends its own frame with seq=0, ack=0.
 All subsequent copies of A's frame are rejected, since seq=0 does
not equal the expected value. All of them still carry ack=1.
 B repeatedly sends its frame with seq=0, ack=0, but A is not
getting it because it keeps timing out too soon.
 Eventually A gets one of these frames; it now has its ack
(and B's frame). A sends its next frame and acks B's frame.
1-bit sliding window protocol:

Figure: two scenarios for protocol 4. (a) Normal case. (b) Abnormal
case. The notation is (seq, ack, packet number). An asterisk
indicates where a network layer accepts a packet.
Sliding window protocols:
 A protocol using Go-Back-N:
 The receiver's window size is 1: it discards all frames after an
error and sends no ack for them, so they will eventually be re-sent.
This can mean a lot of re-sends and wastes a lot of bandwidth if the
error rate is high.
A protocol using Go-Back-N and Selective Repeat:
A protocol using Go-Back-N:
In Go-Back-N, the receiver simply discards all subsequent frames after
an error, sending no acknowledgements for the discarded frames.
This strategy corresponds to a receive window of size 1.
The data link layer refuses to accept any frame except the next one
it must give to the network layer.
Points, fig (a):
1) Frames 0 and 1 are correctly received and acknowledged.
2) Frame 2 is damaged or lost. The sender continues to send
frames until the timer for frame 2 expires.
3) Then it backs up to frame 2 and starts all over with it, sending
2, 3, 4, etc. all over again.
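
A toy sketch of the Go-Back-N sender's loop: keep up to a full window of unacknowledged frames buffered, drop them on a cumulative acknowledgement, and resend the whole window on a timeout. The callback interface (send_frame, wait_event), the window size and the driver at the bottom are assumptions made for the illustration.

from collections import deque

MAX_SEQ = 7                       # 3-bit sequence numbers: window holds up to 7 frames
WINDOW = MAX_SEQ

def go_back_n_send(packets, send_frame, wait_event):
    """send_frame(seq, packet) transmits one frame; wait_event() returns
    ('ack', n) for a cumulative acknowledgement or ('timeout', None)."""
    base = 0                      # oldest unacknowledged sequence number
    next_seq = 0                  # next sequence number to use
    buffered = deque()            # copies kept for possible retransmission
    packets = iter(packets)

    while True:
        # Fill the window with new frames while there is room.
        while len(buffered) < WINDOW:
            try:
                pkt = next(packets)
            except StopIteration:
                break
            buffered.append((next_seq, pkt))
            send_frame(next_seq, pkt)
            next_seq = (next_seq + 1) % (MAX_SEQ + 1)

        if not buffered:
            return                # everything sent and acknowledged

        event, n = wait_event()
        if event == "ack":
            # Cumulative ack: drop every buffered frame up to and including n.
            while buffered and buffered[0][0] != (n + 1) % (MAX_SEQ + 1):
                buffered.popleft()
            base = (n + 1) % (MAX_SEQ + 1)
        else:                     # timeout: go back and resend the whole window
            for seq, pkt in buffered:
                send_frame(seq, pkt)

# Minimal driver: a perfectly reliable channel that acknowledges every frame at once.
log = []
go_back_n_send(
    ["p0", "p1", "p2"],
    send_frame=lambda seq, pkt: log.append((seq, pkt)),
    wait_event=lambda: ("ack", log[-1][0]),
)
print(log)                        # [(0, 'p0'), (1, 'p1'), (2, 'p2')]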
Selective Repeat:
 When selective repeat is used, a bad frame that is received is
discarded, but good frames received after it are buffered.
 When the sender times out, only the oldest unacknowledged
frame is retransmitted.
 If that frame arrives correctly, the receiver can deliver to the
network layer, in sequence, all the frames it has buffered.
 Fig (b):
 1) Frames 0 and 1 are correctly received and acknowledged.
 2) Frame 2 is lost. When frame 3 arrives at the receiver, the data
link layer notices that it has missed a frame, so it sends back a NAK
for 2 but buffers 3.
 3) When frames 4 and 5 arrive, they are buffered by the data link
layer instead of being passed to the network layer.
 4) The NAK for 2 gets back to the sender, which immediately resends
frame 2.
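
A toy sketch of the selective-repeat receiver described above: in-order frames go straight up to the network layer, out-of-order frames are buffered, and a NAK names the missing frame. The window constant, the deliver callback and the return convention are assumptions made for the illustration.

MAX_SEQ = 7
WINDOW = (MAX_SEQ + 1) // 2        # selective repeat needs a window of at most 2**(n-1)
                                   # (window bound not enforced in this sketch)

def make_receiver(deliver):
    expected = 0                   # lower edge of the receive window
    buffer = {}                    # out-of-order frames kept until the gap is filled

    def on_frame(seq, payload):
        nonlocal expected
        if seq == expected:
            deliver(payload)                       # in order: hand straight to the network layer
            expected = (expected + 1) % (MAX_SEQ + 1)
            while expected in buffer:              # flush buffered frames now in sequence
                deliver(buffer.pop(expected))
                expected = (expected + 1) % (MAX_SEQ + 1)
            return ("ack", seq)
        buffer[seq] = payload                      # out of order: buffer it
        return ("nak", expected)                   # ask for the missing frame

    return on_frame

delivered = []
rx = make_receiver(delivered.append)
rx(0, "f0"); rx(1, "f1")
rx(3, "f3")                        # frame 2 lost: f3 is buffered, NAK 2 goes back
rx(2, "f2")                        # retransmitted frame 2 releases f2 and f3 in order
assert delivered == ["f0", "f1", "f2", "f3"]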
HDLC
HDLC – High-Level Data Link Control

 SDLC (Synchronous Data Link Control) was

 submitted to ANSI and ISO.
 ANSI modified it to become ADCCP (Advanced
Data Communication Control Procedure).
 ISO modified it to become HDLC.
 All are bit-oriented protocols.
• HDLC is a bit-oriented protocol that supports both half-duplex
and full-duplex communication over point-to-point and
multipoint links.
• For any HDLC communications session, one station is
designated primary and the other secondary. A session can use
one of the following connection modes, which determine how
the primary and secondary stations interact:
• Normal (unbalanced): the secondary station responds only to
the primary station.
• Asynchronous: the secondary station can initiate a message.
• Asynchronous balanced: both stations send and receive over
their part of a duplex line.
• This last mode is used for X.25 packet-switching networks.
Example Data Link Protocols:
 HDLC: High-Level Data Link Control
 The Data Link Layer in the Internet

 HDLC: High-Level Data Link Control:

 HDLC is a bit-oriented protocol.
 HDLC uses bit stuffing.
 Frame format for bit-oriented protocols:

Bits:    8         8        8        >= 0   16        8
Fields:  01111110  Address  Control  Data   Checksum  01111110
High-Level Data Link Control:
 Frame formats (control field):
 3 types:
 I-frame (Information frame)
 S-frame (Supervisory frame)
 U-frame (Unnumbered frame)
High-Level Data Link Control:
 I-frame: used to carry user data, with piggybacked
flow control and error control information.
 S-frame: S-frames are used only to transport
control information,
e.g. acknowledgements.
 U-frame: U-frames are used for control
purposes but can also carry data when unreliable
connectionless service is called for.
Figure: control field of (a) an information frame, (b) a supervisory
frame, (c) an unnumbered frame.

• Bit number 5 in the control field is the P/F (Poll/Final) bit and
is used for these two purposes. It has meaning only
when it is set, i.e. when P/F = 1. It can represent the
following two cases:
• (i) It means poll when the frame is sent by a primary
station to a secondary (when the address field contains the
address of the receiver).
• (ii) It means final when the frame is sent by a secondary to
a primary (when the address field contains the
address of the sender).
Various kinds of supervisory frames, based on the
type field:
 Type field:
Type 0 is an acknowledgement frame
 RR: indicates that the receiver is ready to receive the
next expected frame.
Type 1 is a NAK frame
 REJ: indicates that a transmission error has been
detected; the Next field gives the first frame in the
sequence not received correctly.
Type 2 is Receive Not Ready

 RNR: the receiver is not ready to accept further frames
from the sender.

Type 3 is Selective Reject

 Selective Reject: calls for retransmission of only the
frame specified.
ATM (Asynchronous Transfer Mode)
• Asynchronous Transfer Mode (ATM) is a switching
technique used by telecommunication networks that
uses asynchronous time-division multiplexing to
encode data into small, fixed-size cells.
• This is different from Ethernet or the Internet, which use
variable-size packets or frames.
• ATM networks are connection-oriented.
• ATM is a cell-switching and multiplexing technology
that combines the benefits of circuit switching
(guaranteed capacity and constant transmission
delay) with those of packet switching (flexibility and
efficiency for intermittent traffic).
• ATM transfers information in fixed-size units called cells.
Each cell consists of 53 octets, or bytes. The first 5 bytes
contain cell-header information, and the remaining 48 contain
the payload (user information); a byte-layout sketch follows below.
• Small, fixed-length cells are well suited to transferring voice and
video traffic because such traffic is intolerant of the delays that
result from having to wait for a large data packet to download,
among other things.
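
A minimal sketch of the fixed 53-octet cell layout (5-octet header plus 48-octet payload); the zero-padding of short payloads and the function name are assumptions made for the example.

HEADER_LEN = 5
PAYLOAD_LEN = 48
CELL_LEN = HEADER_LEN + PAYLOAD_LEN          # always 53 octets

def make_cell(header: bytes, payload: bytes) -> bytes:
    """Build one fixed-size cell; short payloads are zero-padded (assumed rule)."""
    assert len(header) == HEADER_LEN and len(payload) <= PAYLOAD_LEN
    return header + payload.ljust(PAYLOAD_LEN, b"\x00")

cell = make_cell(b"\x00" * HEADER_LEN, b"one voice sample")
assert len(cell) == CELL_LEN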
Because of its asynchronous nature, ATM is more efficient than
synchronous technologies such as time-division multiplexing
(TDM).
With TDM, each user is assigned a time slot, and no other
station can send in that time slot.
• Because ATM is asynchronous, time slots are available on
demand, with information identifying the source of the
transmission contained in the header of each ATM cell.
ATM Devices
An ATM network is made up of:
• ATM switches
• ATM endpoints
An ATM switch is responsible for cell transit through an ATM
network. The job of an ATM switch is well defined: it accepts
an incoming cell from an ATM endpoint or another ATM
switch, then reads and updates the cell-header information
and quickly switches the cell to an output interface toward its
destination.
An ATM endpoint (or end system) contains an ATM network
interface adapter. Examples of ATM endpoints are
workstations, routers, digital service units (DSUs), LAN
switches, and video coder-decoders (codecs).
ATM Network Interface
• An ATM network consists of a set of ATM switches
interconnected by point-to-point ATM links or
interfaces.
• ATM switches support two primary types of
interfaces:
• UNI (User-Network Interface)
• NNI (Network-Network Interface)
• The UNI connects ATM end systems (such as hosts
and routers) to an ATM switch. The NNI connects
two ATM switches.

Figure: UNI and NNI interfaces of an ATM network
ATM Switches

• In ATM, switches act as network hubs and routers:
– They provide interconnection between network
nodes.
– They establish a path between source and
destination.
• The difference between switches and routers is
that switches create this path once, not once
for each message.
Virtual Circuits
• Virtual circuits are also often called virtual paths.
• To reduce the bandwidth used for path identification,
these are numbered:
– A virtual path identifier is used to indicate which virtual
path a cell is intended for.
• Within virtual circuits, a number of channels is
available:
– Nodes can share virtual paths.
– Virtual paths can be used to communicate between more
than one pair of nodes.
Creating Virtual Circuits
• Creating a virtual circuit has been compared to
making a telephone call.
• A network node sends a request to the ATM switch
specifying the destination.
• The switch interacts with any other switches
necessary to align themselves and form a complete path.
• When communication is complete, the node sends a
disconnect message to the switch.
• The switch then notifies all switches involved to
release the connection.
ATM Efficiency
• ATM switches do not suffer from router-like
slowdowns, where cells would be buffered and
processed:
– Hub-like duplication is also avoided.
• The fixed size of ATM cells allows for
hardware optimizations:
– The time to transmit a 53-octet cell is constant.
– Assembling cells can also be done more quickly in
hardware.
ATM Networking Drawbacks
• Small, fixed-size cells provide faster transmission
speeds:
– However, 53-octet cells are incompatible with other
technologies in widespread use.
• ATM addressing also differs significantly from other
forms of addressing:
– For example, the TCP/IP protocol suite is the most common
network protocol system, and
• most Internet applications are based on TCP/IP.
• ATM networks are not broadcast networks:
– Each cell arrives only at its intended destination.
– Broadcasting and multicasting are not directly supported.
