DCN - 9 - Two Sub Layers of Data Link Layer
• As we already know, the data link layer is divided into two sublayers: the Data Link Control (DLC) sublayer and the Media Access Control (MAC) sublayer.
McGraw-Hill ©The McGraw-Hill Companies, Inc., 2000
First, we will discuss the Data Link Control (DLC) sublayer.
• The 64-bit preamble and SFD consist of alternating ones and zeros, allowing the receiver to synchronise with the incoming signal; they are followed by a header consisting of a 48-bit destination address, a 48-bit source address, and a 16-bit frame type.
• The payload can vary in length from a minimum of 46 bytes to a maximum of 1,500 bytes.
• The payload is followed by a 32-bit CRC.
• The Ethernet standard specifies the values that can be used in the header fields; for example:
– the 48-bit address FF:FF:FF:FF:FF:FF (hexadecimal), i.e. all 1s, is reserved for broadcast
– the frame type value of 0800 (hexadecimal) is reserved for IPv4 traffic
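The header layout described above can be illustrated with a minimal Python sketch that splits a raw frame into its destination address, source address, frame-type, and payload fields. The function name and return format are my own; the field sizes, the broadcast address, and the IPv4 frame-type value come from the bullets above, and the preamble/SFD are assumed to have been stripped by the hardware.

```python
import struct

BROADCAST = b"\xff" * 6   # FF:FF:FF:FF:FF:FF, all 1s, reserved for broadcast
ETHERTYPE_IPV4 = 0x0800   # frame type reserved for IPv4 traffic

def parse_ethernet_header(frame: bytes) -> dict:
    """Split a raw Ethernet frame (preamble and SFD already removed)
    into destination MAC, source MAC, frame type, and payload."""
    # Network byte order: 6-byte dst, 6-byte src, 16-bit frame type.
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return {
        "dst": dst.hex(":"),
        "src": src.hex(":"),
        "type": ethertype,
        "broadcast": dst == BROADCAST,
        "payload": frame[14:],
    }
```

For example, a broadcast frame carrying a minimum 46-byte payload would report `broadcast` as true and `type` as 0x0800 when so constructed.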
Example: Ethernet Frame
• Preamble
– It consists of a 56-bit (seven-byte) pattern of alternating
1 and 0 bits, allowing devices on the network to easily
synchronize their receiver clocks, providing bit-level
synchronization.
• SFD:
– The SFD provides byte-level synchronization and marks the start of a new
incoming frame.
– SFD is the eight-bit (one-byte) value that marks the end of the
preamble, which is the first field of an Ethernet packet, and
indicates the beginning of the Ethernet frame.
– SFD is designed to break the bit pattern of the preamble and
signal the start of the actual frame
– SFD is immediately followed by the destination MAC address,
which is the first field in an Ethernet frame.
Serial Transmission
[Figure: serial transmission. The sender transmits the bits 1 0 0 1 1 0 0 1 one at a time over a single line, and the receiver receives them in the same order.]
Serial Transmission
• When you send in parallel you must sample all of the lines at exactly the same
moment; as you go faster, the sampling window gets smaller and smaller, until
eventually some of the wires may still be stabilizing while others have already
finished.
• By sending in serial you no longer need to worry about all of the lines stabilizing,
just one line. And it is more cost-efficient to make one line stabilize 10 times faster
than to add 10 lines at the same speed.
• Some interfaces, such as PCI Express, combine the best of both worlds: they use a
parallel set of serial connections (the x16 port on your motherboard has 16 serial
lanes). Each lane does not need to be in perfect sync with the other lanes, as long
as the controller at the other end can reorder the "packets" of data into the correct
order as they come in.
• The How Stuff Works page for PCI Express gives a good in-depth explanation of
how serial PCI Express can be faster than parallel PCI or PCI-X.
Timing Methodology (also called Synchronization)
– The transmission of a stream of bits from one device to another
across a transmission link involves a great deal of cooperation and
agreement between the two sides.
• Timing problems require a mechanism to synchronize the
transmitter and receiver.
Two solutions to synchronize clocks
• A fundamental requirement of digital data
communications is that the receiver knows the
starting time and the duration of each bit.
• The two solutions are Asynchronous and Synchronous
Transmission.
Asynchronous transmission
• In asynchronous transmission, each character of data is treated
independently.
• Each character begins with a start bit that alerts the receiver
that a character is arriving. The receiver samples each bit in the
character and then looks for the beginning of the next character.
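The start-bit/stop-bit framing of a single character can be sketched in Python. The function names are hypothetical, and the LSB-first bit order is an assumption (it matches common UART practice but is not stated in the text):

```python
def frame_char(byte: int) -> list:
    """Wrap one 8-bit character in a start bit (0) that alerts the
    receiver and a stop bit (1), as in asynchronous transmission."""
    data = [(byte >> i) & 1 for i in range(8)]   # LSB first (assumed)
    return [0] + data + [1]

def deframe_char(bits: list) -> int:
    """Receiver side: check the start/stop bits, then reassemble
    the 8 data bits into a character."""
    assert bits[0] == 0 and bits[-1] == 1, "bad start/stop bit"
    return sum(b << i for i, b in enumerate(bits[1:9]))
```

Note that each 8-bit character costs 10 transmitted bits here, which is the source of the 20% overhead figure discussed later.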
Asynchronous transmission
This technique would not work well for long blocks of data
because the receiver's clock might eventually drift (slowly move)
out of synchronization with the transmitter's clock.
Asynchronous Transmission
– For example, for an 8-bit character with no parity bit, using a 1-bit-long stop
element, two out of every ten bits convey no information but are there merely
for synchronization; thus the overhead is 20%.
• However, the larger the block of bits, the greater the cumulative
timing error. To achieve greater efficiency, a different form of
synchronization, known as synchronous
transmission, is used.
Asynchronous File Transfer
• Used on
– Point-to-point asynchronous circuits
– Typically over phone lines via modem
– Computer to computer for transfer of data files
Synchronous Transmission
• To achieve synchronous transmission, each block begins with a preamble bit pattern
and generally ends with a postamble bit pattern.
• A typical frame format for synchronous transmission starts with a preamble called a
flag, which is 8 bits long. The same flag is used as a postamble.
• This is followed by some number of control fields (containing data link control
protocol information), then a data field (variable length for most protocols), more
control fields, and finally the flag is repeated.
Synchronous Transmission
Comparison of Synchronous versus Asynchronous
Transmissions for Overhead
• Asynchronous transmission: two start and stop bits for every 8-bit
character, so the overhead is (2/(2+8))*100% = 20%.
• Synchronous transmission
• A frame in one of the standard schemes contains 48 bits of
control, preamble, and postamble. Thus, for a 1000 character
block of data, each frame consists of 48 bits of overhead and
1000*8=8000 bits of data, for a percentage overhead of only
(48/(8000+48))*100%=0.6%.
• Result: synchronous transmission is far more efficient; the overhead drops from 20% to 0.6%.
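The two overhead figures above can be checked with a few lines of Python (the function names and defaults are illustrative; the arithmetic matches the slide):

```python
def async_overhead(data_bits: int = 8, framing_bits: int = 2) -> float:
    """Start + stop bits per character: 2 / (2 + 8) * 100 = 20%."""
    return framing_bits / (framing_bits + data_bits) * 100

def sync_overhead(chars: int = 1000, control_bits: int = 48) -> float:
    """48 control/preamble/postamble bits per 1000-character frame:
    48 / (8000 + 48) * 100 ~= 0.6%."""
    data_bits = chars * 8
    return control_bits / (control_bits + data_bits) * 100
```

Running both confirms the 20% versus 0.6% comparison made in the slide.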
Now we come to the data link layer and its feature of variable-size framing.
Revision: Header, Data Area, Trailer
Frame Size
• In fixed-size framing
– there is no need for defining the boundaries of the frames; the
size itself can be used as a delimiter.
– ATM (Asynchronous Transfer Mode) WAN uses frames of fixed
size called cells.
• Variable-size framing
– The data is divided into variable-size frames.
– Used in LANs.
– We need a way to define the end of one frame and the beginning of
the next.
– Two approaches are used for this purpose:
• a character-oriented approach
• a bit-oriented approach
BUT:
• This causes another problem if the text contains escape characters
as part of the data. To deal with this, an escape character in the data
is prefixed with another escape character. The following figure illustrates
character stuffing.
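A minimal sketch of character (byte) stuffing, assuming the common 0x7E flag and 0x7D escape byte values; these particular values are an assumption (they match PPP conventions), since the text only states that an escape character is prefixed with another escape character:

```python
FLAG = b"\x7e"   # assumed frame delimiter byte
ESC = b"\x7d"    # assumed escape byte

def byte_stuff(payload: bytes) -> bytes:
    """Prefix every flag or escape byte found in the data with an
    extra escape byte, so it is not mistaken for a delimiter."""
    out = bytearray()
    for b in payload:
        if bytes([b]) in (FLAG, ESC):
            out += ESC
        out.append(b)
    return bytes(out)

def byte_unstuff(data: bytes) -> bytes:
    """Receiver side: drop each escape byte and keep the byte after it."""
    out = bytearray()
    escaped = False
    for b in data:
        if not escaped and bytes([b]) == ESC:
            escaped = True
            continue
        out.append(b)
        escaped = False
    return bytes(out)
```

Stuffing followed by unstuffing always returns the original payload, which is the property the receiver relies on.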
Bit-Oriented Protocols
• Two main reasons for Bit stuffing (by many network and
communications protocols ):
1. To prevent data being interpreted as control information.
• In a bit-oriented protocol, the data to send is a series of bits.
• In order to distinguish frames, most protocols use a bit pattern of 8-bit length
(01111110) as flag at the beginning and end of each frame.
• This also raises the problem of the flag pattern appearing in the data part. To deal
with this, an extra bit is added; the method is called bit stuffing. In bit stuffing, if a 0
and five successive 1 bits are encountered, an extra 0 is added. The receiving node
removes the extra stuffed zero.
• Example:
– Data bits: 011111111
– Transmitted bits with stuffing: 0111110111 (a 0 is inserted after the five consecutive 1s)
2. To enable clock recovery:
• Bit stuffing guarantees that the receiver of synchronous data can recover the
sender's clock. When the data stream contains a long run of adjacent
bits that cause no transition of the signal, the receiver cannot adjust its clock to
maintain properly synchronised reception.
• The receiver follows the same protocol and removes the stuffed bit after the
specified number of transitionless bits, but can use the stuffed bit to recover the
sender's clock.
Summary
• Problem: it is possible that the flag byte's bit pattern occurs in the
data.
• Two popular solutions:
– Byte stuffing
• The sender inserts a special byte (e.g., ESC) just before each “accidental” flag byte
in the data.
• The receiver’s link layer removes this special byte before the data are given to the
network layer.
– Bit stuffing: each frame starts with a flag byte “01111110”. {Note this is
7E in hex}
• Whenever the sender encounters five consecutive 1s in the data, it automatically
stuffs a 0 bit into the outgoing bit stream.
• When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it
automatically deletes / de-stuffs the 0 bit.
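The bit-stuffing rule in the summary (the sender stuffs a 0 after five consecutive 1s; the receiver de-stuffs it) can be sketched as follows; a string of '0'/'1' characters is used purely for readability:

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s, so the
    flag pattern 01111110 can never appear inside the data."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    """Receiver side: delete the 0 that follows five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this bit is the stuffed 0; drop it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)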
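(placeholder)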
2. Ethernet (IEEE 802.3)
Frame Formats
4. Serial Line Internet Protocol (SLIP)
5. Point-to-Point Protocol (PPP)
Flow and Error Control (just a revision)
• Flow control in this case can be feedback from the receiving node to
the sending node to stop or slow down the pushing of frames.
Flow and Error Control
• Buffers
– Flow control can be implemented in several ways.
– One of the solutions is normally to use two buffers
• one at the sending data-link layer and the other at the receiving data-link
layer.
– A buffer is a set of memory locations that can hold packets at the
sender and receiver.
– The flow control communication can occur by sending signals
from the consumer to the producer.
– When the buffer of the receiving data-link layer is full, it informs
the sending data-link layer to stop pushing frames.
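A toy sketch of the two-buffer feedback idea described above. The class and function names are hypothetical, and a real data-link layer would signal the sender with control frames rather than a method call, but the consumer-informs-producer logic is the same:

```python
from collections import deque

class ReceiverBuffer:
    """Bounded set of memory locations holding frames at the receiver.
    When full, it tells the sender to stop pushing frames."""
    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        self.frames = deque()

    def can_accept(self) -> bool:
        """The feedback signal from consumer to producer."""
        return len(self.frames) < self.capacity

    def push(self, frame) -> None:
        if not self.can_accept():
            raise OverflowError("receiver buffer full; sender must stop")
        self.frames.append(frame)

def send_frames(frames, buffer: ReceiverBuffer):
    """Sender checks the feedback signal before pushing each frame;
    frames that cannot be accepted yet are deferred."""
    sent, deferred = [], []
    for f in frames:
        if buffer.can_accept():
            buffer.push(f)
            sent.append(f)
        else:
            deferred.append(f)
    return sent, deferred
```

With a capacity-2 buffer, sending three frames delivers the first two and defers the third until the receiver drains its buffer.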
• In a connectionless protocol
– Frames are sent from one node to the next without any
relationship between the frames; each frame is independent.
– Note that the term connectionless here does not mean that there
is no physical connection (transmission medium) between the
nodes; it means that there is no connection between frames.
– The frames are not numbered and there is no sense of ordering.
– Most of the data-link protocols for LANs are connectionless
protocols.
• In a connection-oriented protocol
– A logical connection should first be established between the two
nodes (setup phase).
– After all frames that are somehow related to each other are
transmitted (transfer phase), the logical connection is terminated
(teardown phase).
– Connection oriented protocols are rare in wired LANs, but we can
see them in some point-to-point protocols, some wireless LANs,
and some WANs.
• Traditionally four protocols have been defined for the data-link layer
to deal with flow and error control:
– Simple
– Stop-and-Wait
– Go-Back-N
– Selective-Repeat.
• Although the first two protocols are still used at the data-link layer,
the last two have disappeared.
Finished (in the last lecture): Channel Partitioning
Now: ALOHA
1. The station checks to make sure the medium is idle. This is called carrier
sense. This is analogous to the rules in an assembly. If a person wants to
speak, he must first listen to make sure no one else is talking.
2. If the medium is idle, the station can send data.
3. Even though steps 1 and 2 are followed, there is still a potential for
collision.
• For example: two stations may check the medium at the same time;
neither senses that the medium is in use, and both send at the same time.
• To avoid collision, the sending stations can make a reservation for use of
the medium.
• To detect a collision (and send the data again), the station needs to continue
monitoring the medium.
Data Collision
• A data collision is the simultaneous presence of signals
from two nodes on the network.
Data Collision
• Ethernet uses CSMA/CD as its method of allowing devices to "take
turns" using the signal carrier line.
• When a device wants to transmit, it checks the signal level of the line
to determine whether someone else is already using it.
– If it is already in use, the device waits and tries again, perhaps a few seconds later.
– If it isn't in use, the device transmits.
• However, two devices can transmit at the same time in which case a
collision occurs and both devices detect it.
• Each device then waits a random amount of time and retries until
successful in getting the transmission sent.
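The "wait a random amount of time and retry" step is classically implemented in Ethernet as truncated binary exponential backoff. The sketch below assumes the standard truncation at 2^10 slots, a detail the text does not spell out:

```python
import random

def backoff_slots(attempt: int, max_exp: int = 10) -> int:
    """After the n-th collision, wait a random number of slot times
    drawn uniformly from 0 .. 2**min(n, max_exp) - 1. Each extra
    collision doubles the range, spreading out the retries."""
    k = min(attempt, max_exp)
    return random.randrange(2 ** k)
```

The doubling range makes two colliding stations increasingly unlikely to pick the same slot again, which is why repeated collisions eventually resolve.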
Carrier Sense Multiple Access - Collision Detection (CSMA/CD)
• If there is a collision
– the data transmission will be corrupted
– stations have to wait for a collision to take place and then solve
the problem.
Token Passing
• Token passing prevents collisions: it ensures that only one station can
transmit at any one time.
• The token (still containing the data) is passed on round the loop
until it returns to the sender.
• The sender is responsible for removing the data from the token and
passing the empty token on to the next station.
• Token-passing characteristics:
– performs well under heavy loads
– suitable for applications that require uniform response times
– slower transmission speed (up to 100 Mbps)
– complex software is required
– more expensive to implement
CONTROLLED ACCESS:
The stations consult one another to find which
station has the right to send.
A station cannot send unless it has been
authorized by other stations.
Reservation method: reserve, then send
If there are N stations in the system, there are exactly N
reservation mini-slots in the reservation frame.
The stations that have made reservations can send their data
frames after the reservation frame.
All data exchanges must be made through the primary device even
when the ultimate destination is a secondary device.
Select
The select function is used whenever the primary device
has something to send.
If the primary is neither sending nor receiving data, it
knows the link is available.
If it has something to send, the primary device sends it.
The primary must alert the secondary to the upcoming
transmission and wait for an acknowledgment of the
secondary’s ready status.
Before sending data, the primary creates and transmits
a select (SEL) frame, one field of which includes the
address of the intended secondary.
Poll
The poll function is used by the primary device to solicit (ask) transmissions
from the secondary devices.
When the primary is ready to receive data, it must ask (poll) each device in
turn if it has anything to send.
When the first secondary is approached, it responds either with a NAK frame
if it has nothing to send or with data (in the form of a data frame) if it does.
If the response is negative (a NAK frame), then the primary polls the next
secondary in the same manner until it finds one with data to send.
When the response is positive (a data frame), the primary reads the frame
and returns an acknowledgment (ACK frame), verifying its receipt.
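The poll procedure above can be sketched as a simple loop. The names are illustrative, and `None` stands in for a NAK response while any other value stands in for a data frame:

```python
def poll_round(secondaries):
    """Primary polls each secondary in turn. A secondary answers with
    a data frame (any value) or None (i.e. a NAK, nothing to send).
    Positive responses are read and collected; in a real protocol the
    primary would also return an ACK frame for each one."""
    received = []
    for name, respond in secondaries:
        frame = respond()          # poll this secondary device
        if frame is not None:      # positive response: a data frame
            received.append((name, frame))
    return received
```

For example, polling three secondaries of which only one has data yields a single (name, frame) pair, mirroring the NAK/data behaviour described above.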
3. Token Passing
Token-passing procedure
Logical Ring
In a token-passing network, stations do not have to be physically
connected in a ring; the ring can be a logical one. Figure below shows
four different physical topologies that can create a logical ring.
Channel Partitioning
• Divide the channel into smaller "pieces" (time slots, frequency bands, codes)
• Allocate a piece to a node for exclusive use
Media Access Control Protocols: CHANNELIZATION
Analogy
CDMA simply means communication with different codes.
Example:
In a large room with many people, two people can talk in English if
nobody else understands English.
Another two people can talk in Chinese if they are the only ones
who understand Chinese, and so on.
The common channel, the space of the room in this case, can
easily allow communication between several couples, but in
different languages (codes).