
Data Communication & Networking

Unit 3
Data Link Layer
2
3
Data Link Layer

4
Functionality of Data-link Layer
 Framing
 Data-link layer takes packets from the Network Layer and encapsulates them into frames.
Then it sends each frame bit-by-bit on the hardware. At the receiver's end, the data link
layer picks up signals from the hardware and assembles them into frames.

 Addressing
 Data-link layer provides a layer-2 hardware addressing mechanism. The hardware address is
assumed to be unique on the link. It is encoded into the hardware at the time of
manufacturing.

 Synchronization
 When data frames are sent on the link, both machines must be synchronized in order
for the transfer to take place.

 Error Control
 Sometimes signals encounter problems in transit and bits get flipped. These errors
are detected, and an attempt is made to recover the actual data bits. The layer also
provides an error reporting mechanism to the sender.

5
 Flow Control
 Stations on the same link may have different speeds or capacities. The data-link layer
provides flow control, which enables both machines to exchange data at the same
speed.

 Multi-Access
 When hosts on a shared link try to transfer data, there is a high
probability of collision. The data-link layer provides mechanisms such as
CSMA/CD to give multiple systems the capability of accessing a shared
medium.

6
Purpose of the Data Link Layer
• The Data Link layer is responsible for communications between end-device
network interface cards.
• It allows upper layer protocols to access the physical layer media and
encapsulates Layer 3 packets (IPv4 and IPv6) into Layer 2 Frames.
• It also performs error detection and rejects corrupted frames.

7
The Data Link Layer consists of two sublayers: Logical Link Control (LLC) and
Media Access Control (MAC).
• The LLC sublayer communicates between the networking software at the
upper layers and the device hardware at the lower layers.
• The MAC sublayer is responsible for data encapsulation and media access
control.

Packets exchanged between nodes may experience numerous data link layers
and media transitions.

 At each hop along the path, a router performs four basic Layer 2 functions:
• Accepts a frame from the network medium.
• De-encapsulates the frame to expose the encapsulated packet.
• Re-encapsulates the packet into a new frame.
• Forwards the new frame on the medium of the next network segment.

8
Data link layer protocols are defined
by engineering organizations:
• Institute of Electrical and Electronics Engineers (IEEE).
• International Telecommunication Union (ITU).
• International Organization for Standardization (ISO).
• American National Standards Institute (ANSI).

9
Error Control
 Detecting errors
 Correcting errors
 Forward error correction
 Automatic repeat request

10
Types of Errors
 Single-bit errors

 Burst errors
Redundancy
 To detect or correct errors, redundant bits
of data must be added

12
Detection Versus Correction
 The correction of errors is more difficult than the
detection.
 In error detection, we are looking only to see if any
error has occurred. The answer is a simple yes or
no.
 In error correction, we need to know the exact
number of bits that are corrupted and more
importantly, their location in the message.
 If we need to correct one single error in an 8-bit
data unit, we need to consider eight possible error
locations.
13
Forward Error Correction Versus
Retransmission
 There are two main methods of error correction.
 Forward error correction is the process in which the receiver
tries to guess the message by using redundant bits.
 This is possible, if the number of errors is small.
 Correction by retransmission is a technique in which the
receiver detects the occurrence of an error and asks the
sender to resend the message.
 Resending is repeated until a message arrives that the
receiver believes is error-free.

14
Coding
 Redundancy is achieved through various coding schemes.
The sender adds redundant bits through a process that
creates a relationship between the redundant bits and the
actual data bits.
 The receiver checks the relationships between the two sets
of bits to detect or correct the errors.
 Coding is the process of adding redundancy for error
detection or correction.

15
 Two types:
 Block codes
 Divides the data to be sent into a set of blocks
 Extra information attached to each block
 Convolutional codes
 Treats data as a series of bits, and computes a code over a continuous series
 The code computed for a set of bits depends on the current and previous input
16
The structure of encoder and decoder

17
Block Coding
 Message is divided into k-bit blocks
 Known as datawords
 r redundant bits are added
 Blocks become n=k+r bits
 Known as codewords

18
Example: 4B/5B Block Coding

Data   Code     Data   Code
0000   11110    1000   10010
0001   01001    1001   10011
0010   10100    1010   10110
0011   10101    1011   10111
0100   01010    1100   11010
0101   01011    1101   11011
0110   01110    1110   11100
0111   01111    1111   11101

k = ?   r = ?   n = ?
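Reading the table: each dataword has k = 4 bits, each codeword n = 5 bits, so r = n - k = 1. A minimal Python sketch of the encoder, using the table values above:

```python
# 4B/5B: k = 4 data bits per block, n = 5 code bits, so r = 1.
FOUR_B_FIVE_B = {
    "0000": "11110", "0001": "01001", "0010": "10100", "0011": "10101",
    "0100": "01010", "0101": "01011", "0110": "01110", "0111": "01111",
    "1000": "10010", "1001": "10011", "1010": "10110", "1011": "10111",
    "1100": "11010", "1101": "11011", "1110": "11100", "1111": "11101",
}

def encode_4b5b(bits: str) -> str:
    """Encode a bit string whose length is a multiple of 4, block by block."""
    return "".join(FOUR_B_FIVE_B[bits[i:i + 4]] for i in range(0, len(bits), 4))

print(encode_4b5b("00010010"))  # 01001 + 10100 = 0100110100
```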

19
Error Detection in Block Coding

20
 How can errors be detected by using block
coding? If the following two conditions are met,
the receiver can detect a change in the original
codeword:
 1. The receiver has (or can find) a list of valid
codewords.
 2. The original codeword has changed to an
invalid one.

21
 The sender creates codewords out of datawords by using a
generator that applies the rules and procedures of encoding.
 Each codeword sent to the receiver may change during
transmission.
 If the received codeword is the same as one of the valid
codewords, the word is accepted; the corresponding
dataword is extracted for use.
 If the received codeword is not valid, it is discarded.
 However, if the codeword is corrupted during transmission
but the received word still matches a valid codeword, the
error remains undetected.
 This type of coding can detect only single-bit errors.

22
23
Error Correction

24
Example: Error Correction Code

k, r, n = ?
The receiver receives 01001, what is the original dataword?

25
26
Notes
 An error-detecting code can detect
only the types of errors for which it is
designed
 Other types of errors may remain undetected.
 There is no way to detect every possible
error

27
Hamming Distance
 The Hamming distance between two strings of equal length is the number
of positions at which the corresponding symbols are different.

 Hamming distance is a metric for comparing two binary data strings.
When comparing two binary strings of equal length, the Hamming distance
is the number of bit positions in which the two strings differ.
 The Hamming distance between two strings a and b is denoted d(a, b).
 It is used for error detection and error correction when data is transmitted
over computer networks. It is also used in coding theory for comparing
equal-length datawords.
 The Hamming distance can easily be found by applying the XOR operation
to the two words and counting the number of 1s in the result.
 Note that the Hamming distance between two distinct words is always
greater than 0.

28
Calculation of Hamming Distance

 In order to calculate the Hamming distance between two
strings a and b, we perform their XOR operation, (a ⊕ b),
and then count the total number of 1s in the resultant
string.
 Example 1:
 Suppose there are two strings 1101 1001 and 1001 1101.
 11011001 ⊕ 10011101 = 01000100. Since this contains
two 1s, the Hamming distance d(11011001, 10011101)
= 2.
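The XOR-and-count method can be sketched in a few lines of Python:

```python
def hamming_distance(a: str, b: str) -> int:
    """XOR the two equal-length bit strings and count the 1s in the result."""
    if len(a) != len(b):
        raise ValueError("strings must be of equal length")
    return bin(int(a, 2) ^ int(b, 2)).count("1")

print(hamming_distance("11011001", "10011101"))  # 2
```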

29
Minimum Hamming Distance

 In a set of strings of equal length, the minimum
Hamming distance is the smallest Hamming distance
between all possible pairs of strings in that set.
 Example 2:
 Suppose there are four strings 010, 011, 101 and 111.
 010 ⊕ 011 = 001, d(010, 011) = 1.
 010 ⊕ 101 = 111, d(010, 101) = 3.
 010 ⊕ 111 = 101, d(010, 111) = 2.
 011 ⊕ 101 = 110, d(011, 101) = 2.
 011 ⊕ 111 = 100, d(011, 111) = 1.
 101 ⊕ 111 = 010, d(101, 111) = 1.
 Hence, the minimum Hamming distance dmin = 1.
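A small Python sketch that checks the result above by taking the minimum over all pairs of the same four strings:

```python
from itertools import combinations

def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which the two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def minimum_hamming_distance(words) -> int:
    """Smallest Hamming distance over all pairs of words in the set."""
    return min(hamming_distance(a, b) for a, b in combinations(words, 2))

print(minimum_hamming_distance(["010", "011", "101", "111"]))  # 1
```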

30
Common Detection Methods
 Parity check
 Cyclic Redundancy Check
 Checksum

31
Parity Check
 Most common, least complex
 Single bit is added to a block
 Two schemes:
 Even parity – maintain an even number of 1s
 E.g., 1011 → 10111
 Odd parity – maintain an odd number of 1s
 E.g., 1011 → 10110

32
Example: Parity Check
Suppose the sender wants to send the word "world". In
ASCII, the five characters are coded (with even parity) as
1110111 1101111 1110010 1101100 1100100
The following shows the actual bits sent:
11101110 11011110 11100100 11011000 11001001

The receiver receives this sequence of words:
11111110 11011110 11101100 11011000 11001001
Which blocks are accepted? Which are rejected?
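The question can be checked with a short Python sketch of even-parity encoding and checking (character codes taken from the example above):

```python
def add_even_parity(bits7: str) -> str:
    """Append one parity bit so each 8-bit block has an even number of 1s."""
    return bits7 + ("1" if bits7.count("1") % 2 else "0")

def check_even_parity(block: str) -> bool:
    """A block is accepted only if its number of 1s is even."""
    return block.count("1") % 2 == 0

# The five ASCII characters of "world" from the example above
chars = ["1110111", "1101111", "1110010", "1101100", "1100100"]
sent = [add_even_parity(c) for c in chars]
print(sent)

received = ["11111110", "11011110", "11101100", "11011000", "11001001"]
print([check_even_parity(b) for b in received])
# [False, True, False, True, True]: the 1st and 3rd blocks are rejected.
```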
33
Note

A simple parity-check code is a single-bit error-detecting
code in which n = k + 1 with dmin = 2.
Even parity ensures that a codeword has an even number
of 1s; odd parity ensures that there are an odd number
of 1s in the codeword.

Table 10.3 Simple parity-check code C(5, 4)

Figure 10.10 Encoder and decoder for simple parity-check code

Example 10.12

Let us look at some transmission scenarios. Assume the
sender sends the dataword 1011. The codeword created
from this dataword is 10111, which is sent to the receiver.
We examine five cases:

1. No error occurs; the received codeword is 10111. The
syndrome is 0. The dataword 1011 is created.
2. One single-bit error changes a1. The received codeword
is 10011. The syndrome is 1. No dataword is created.
3. One single-bit error changes r0. The received codeword
is 10110. The syndrome is 1. No dataword is created.
Example 10.12 (continued)

4. An error changes r0 and a second error changes a3.
The received codeword is 00110. The syndrome is 0.
The dataword 0011 is created at the receiver. Note that
here the dataword is wrongly created due to the
syndrome value.
5. Three bits (a3, a2, and a1) are changed by errors.
The received codeword is 01011. The syndrome is 1.
The dataword is not created. This shows that the simple
parity check, guaranteed to detect one single error, can
also find any odd number of errors.

Note

A simple parity-check code can detect an
odd number of errors.

Cyclic Redundancy Check
 In a cyclic code, rotating a codeword
always results in another codeword
 Example:

40
CRC Encoder/Decoder

41
CRC Generator

42
Checking CRC

43
Polynomial Representation
 More common representation than binary form
 Easy to analyze
 Divisor is commonly called generator polynomial

44
Division Using Polynomial

45
CRC Examples
Q2. Solve the following using the Cyclic Redundancy Check (CRC):
A. Given the dataword 1010011010 and the divisor 10111
i. Show the generation of the codeword at the sender side.
ii. Show the checking of the codeword at the receiver side to detect
errors (assume no error).

B. Given the dataword x^6 + x^4 + x^3 + x + 1 and the divisor x^3 + x^2 + 1
i. Show the generation of the codeword at the sender side.
ii. Show the checking of the codeword at the receiver side to detect
errors (assume a one-bit error at the 4th bit (x^3) position).
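Part A can be checked with a minimal Python sketch of modulo-2 division; the sketch is generic, and the dataword and divisor come from the exercise:

```python
def xor_div(dividend: str, divisor: str) -> str:
    """Modulo-2 long division; returns the remainder (len(divisor) - 1 bits)."""
    d = list(dividend)
    n = len(divisor)
    for i in range(len(d) - n + 1):
        if d[i] == "1":                     # divide only when the lead bit is 1
            for j in range(n):
                d[i + j] = "0" if d[i + j] == divisor[j] else "1"
    return "".join(d[len(d) - n + 1:])

def crc_encode(dataword: str, divisor: str) -> str:
    """Append r zeros, divide, and attach the remainder as the CRC."""
    r = len(divisor) - 1
    remainder = xor_div(dataword + "0" * r, divisor)
    return dataword + remainder

codeword = crc_encode("1010011010", "10111")
print(codeword)
print(xor_div(codeword, "10111"))  # all zeros: the receiver detects no error
```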

Mr. Vivek V. Kheradkar


46
Error Correction
 Two methods
 Retransmission after detecting error
 Forward error correction (FEC)

47
Number of Redundant Bits
Number of     Number of           Total
data bits k   redundancy bits r   bits k + r
1             2                   3
2             3                   5
3             3                   6
4             3                   7
5             4                   9
6             4                   10
7             4                   11
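The r column follows the relation 2^r >= k + r + 1; a minimal Python sketch that reproduces the table:

```python
def redundant_bits(k: int) -> int:
    """Smallest r satisfying 2^r >= k + r + 1 (single-error correction)."""
    r = 0
    while 2 ** r < k + r + 1:
        r += 1
    return r

print([(k, redundant_bits(k), k + redundant_bits(k)) for k in range(1, 8)])
# [(1, 2, 3), (2, 3, 5), (3, 3, 6), (4, 3, 7), (5, 4, 9), (6, 4, 10), (7, 4, 11)]
```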

48
Hamming Code
 Simple, powerful FEC
 Widely used in computer memory
 Known as ECC (error-correcting code) memory

49
Redundant Bit Calculation

50
Example: Hamming Code

51
Example: Correcting Error
 Receiver receives 10010100101

52
Hamming Code
 SENDER-
 Determine the number of redundant/parity bits to be
added using the formula 2^r >= m + r + 1, where r =
redundant bits and m = data bits.
e.g., 4 data bits require 3 redundancy bits, giving a
7-bit Hamming code.
 Place the parity bits and data bits in position order to form
the Hamming code.

 Find value of parity bit using even parity/ odd parity


 For checking parity bit P1, use check one and skip one method


53
Hamming Code (cont…)
 For checking parity bit P2, use check two and skip two method

 For checking parity bit P4, use check four and skip four method

 Find hamming code for transmission
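A minimal Python sketch of these sender-side steps for a 7-bit code, using the check-one/skip-one coverage pattern with even parity; the dataword 1011 is an illustrative value, not from the slide:

```python
def hamming74_encode(data: str) -> str:
    """Encode 4 data bits as p1 p2 d1 p4 d2 d3 d4 with even parity.

    p1 covers positions 1, 3, 5, 7 ("check one, skip one"),
    p2 covers positions 2, 3, 6, 7 ("check two, skip two"),
    p4 covers positions 4, 5, 6, 7 ("check four, skip four").
    """
    d1, d2, d3, d4 = (int(b) for b in data)
    p1 = (d1 + d2 + d4) % 2
    p2 = (d1 + d3 + d4) % 2
    p4 = (d2 + d3 + d4) % 2
    return "".join(str(b) for b in (p1, p2, d1, p4, d2, d3, d4))

print(hamming74_encode("1011"))  # 0110011
```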


54
Hamming Code
 RECEIVER-
 Place the received Hamming code's data bits and parity bits
into their positions, in order

 Detecting Error
 Analyzing parity bit P4,

 Analyzing parity bit P2,

 Analyzing parity bit P1,


55
Hamming Code (cont…)
 Find the error word E:
 The decimal value of this error word gives the position of
the bit in error

 Correcting Error
 Change error bit value from 0->1 or 1->0
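The receiver-side steps (syndrome P4 P2 P1; its decimal value is the position of the bit in error) can be sketched for the same illustrative 7-bit layout with even parity:

```python
def hamming74_correct(code: str) -> str:
    """Recompute the three parity checks; the syndrome locates the bad bit."""
    b = [int(x) for x in code]                # positions 1..7 at index 0..6
    c1 = (b[0] + b[2] + b[4] + b[6]) % 2      # P1 check: positions 1, 3, 5, 7
    c2 = (b[1] + b[2] + b[5] + b[6]) % 2      # P2 check: positions 2, 3, 6, 7
    c4 = (b[3] + b[4] + b[5] + b[6]) % 2      # P4 check: positions 4, 5, 6, 7
    error_pos = c4 * 4 + c2 * 2 + c1          # decimal value of error word E
    if error_pos:
        b[error_pos - 1] ^= 1                 # flip the bit in error
    return "".join(str(x) for x in b)

print(hamming74_correct("0110011"))  # syndrome 0: no error, returned unchanged
print(hamming74_correct("0100011"))  # bit 3 corrupted: corrected to 0110011
```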


56
Home Work
 Generate the Hamming code for the message
1110

 A 7-bit Hamming code is received as
1011011. Assume even parity and state
whether the received code is correct or
wrong; if wrong, locate the bit in error.


57
58
59
Data Link Control

11.60
11-1 FRAMING

The data link layer needs to pack bits into frames, so
that each frame is distinguishable from another. Our
postal system practices a type of framing. The simple
act of inserting a letter into an envelope separates one
piece of information from another; the envelope serves
as the delimiter.
Topics discussed in this section:
Fixed-Size Framing
Variable-Size Framing

11.61
Figure 11.1 A frame in a character-oriented protocol

11.62
Figure 11.2 Byte stuffing and unstuffing

11.63
Note
Byte stuffing is the process of adding 1 extra byte whenever there is a flag or
escape character in the text.
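A minimal Python sketch of byte stuffing and unstuffing; the flag (0x7E) and escape (0x7D) values are illustrative HDLC-style choices, not specified on the slide:

```python
FLAG, ESC = 0x7E, 0x7D  # illustrative delimiter and escape byte values

def byte_stuff(payload: bytes) -> bytes:
    """Add an ESC byte before every flag or escape byte inside the payload."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Strip the two flags and remove the inserted ESC bytes."""
    body, out, esc = frame[1:-1], bytearray(), False
    for b in body:
        if not esc and b == ESC:
            esc = True          # drop the ESC itself; keep the next byte
            continue
        out.append(b)
        esc = False
    return bytes(out)

data = b"ab\x7ecd"
assert byte_unstuff(byte_stuff(data)) == data
```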

11.64
Figure 11.3 A frame in a bit-oriented protocol

11.65
Note
Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s
follow a 0 in the data, so that the receiver does not mistake
the pattern 0111110 for a flag.
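Bit stuffing and unstuffing can be sketched as:

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit: data can never mimic the flag
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1            # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

print(bit_stuff("0111111"))   # 01111101
```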

11.66
Figure 11.4 Bit stuffing and unstuffing

11.67
11-2 FLOW AND ERROR CONTROL

The most important responsibilities of the data link
layer are flow control and error control. Collectively,
these functions are known as data link control.

Topics discussed in this section:


Flow Control
Error Control

11.68
Note
Flow control refers to a set of procedures used to restrict the amount of data
that the sender can send before
waiting for acknowledgment.

11.69
Note
Error control in the data link layer is based on automatic repeat request,
which is the retransmission of data.

11.70
11-3 PROTOCOLS

Now let us see how the data link layer can combine
framing, flow control, and error control to achieve the
delivery of data from one node to another. The protocols
are normally implemented in software by using one of
the common programming languages. To make our
discussions language-free, we have written in
pseudocode a version of each protocol that concentrates
mostly on the procedure instead of delving into the
details of language rules.

11.71
Figure 11.5 Taxonomy of protocols discussed in this chapter

11.72
11-4 NOISELESS CHANNELS

Let us first assume we have an ideal channel in which
no frames are lost, duplicated, or corrupted. We
introduce two protocols for this type of channel.

Topics discussed in this section:


Simplest Protocol
Stop-and-Wait Protocol

11.73
1. Simplest Protocol

 We assume that the receiver can immediately handle any frame it
receives with a processing time that is small enough to be negligible.
 In other words, the receiver can never be overwhelmed with
incoming frames.
 Hence there is no need for flow control in this scheme.
Figure 11.6 The design of the simplest protocol with no flow or error control

11.75
Example 11.1

Figure 11.7 shows an example of communication using this
protocol. It is very simple. The sender sends a sequence of
frames without even thinking about the receiver. To send
three frames, three events occur at the sender site and three
events at the receiver site. Note that the data frames are
shown by tilted boxes; the height of the box defines the
transmission time difference between
the first bit and the last bit in the frame.

11.76
Figure 11.7 Flow diagram for Example 11.1

11.77
2. Stop-and-Wait Protocol

 If data frames arrive at the receiver site faster than they can be processed,
the frames must be stored until used.
 Normally, the receiver does not have enough storage space, especially if it
is receiving data from many sources.
 Hence, to prevent the receiver from becoming overwhelmed with incoming
frames, we somehow need to tell the sender to slow down. There must
therefore be feedback from the receiver to the sender. In other words, we
need to employ a flow control mechanism in the protocol.
 Acknowledgement (ACK) frames, which are auxiliary frames, help in this
regard.

 The protocol is called the Stop-and-Wait protocol because the sender sends
one frame, stops until it receives confirmation from the receiver (okay to
go ahead), and then sends the next frame.
Figure 11.8 Design of Stop-and-Wait Protocol

11.79
Example 11.2

Figure 11.9 shows an example of communication using this
protocol. It is still very simple. The sender sends one frame
and waits for feedback from the receiver. When the ACK
arrives, the sender sends the next frame. Note that sending
two frames in the protocol involves the sender in four
events and the receiver in two events.

11.80
Figure 11.9 Flow diagram for Example 11.2

11.81
11-5 NOISY CHANNELS

Although the Stop-and-Wait Protocol gives us an idea of
how to add flow control to its predecessor, noiseless
channels are nonexistent. We discuss three protocols in
this section that use error control.

Topics discussed in this section:


Stop-and-Wait Automatic Repeat Request
Go-Back-N Automatic Repeat Request
Selective Repeat Automatic Repeat Request

11.82
Note

Error correction in Stop-and-Wait ARQ is done by
keeping a copy of the sent frame and retransmitting
the frame when the timer expires.

11.83
Stop-and-Wait: lost frame and lost ACK frame
 To overcome this problem, the receiver must be able to
distinguish a frame that it is seeing for the first time from a
retransmission. One way to achieve this is to have the sender put a
sequence number in the header of each frame it sends. The receiver
can then check the sequence number of each arriving frame to see if it is
a new frame or a duplicate to be discarded.

 The receiver needs to distinguish only 2 possibilities: a new frame or a
duplicate; a 1-bit sequence number is sufficient. At any instant the
receiver expects a particular sequence number. Any frame arriving with
the wrong sequence number is rejected as a duplicate. A correctly
numbered frame arriving at the receiver is accepted, passed to
the host, and the expected sequence number is incremented by 1
(modulo 2).
Note

In Stop-and-Wait ARQ, we use sequence numbers to
number the frames. The sequence numbers are based on
modulo-2 arithmetic.

11.86
Note

In Stop-and-Wait ARQ, the acknowledgment number
always announces, in modulo-2 arithmetic, the
sequence number of the next frame expected.

11.87
Figure 11.10 Design of the Stop-and-Wait ARQ Protocol

11.88
Example 11.3

Figure 11.11 shows an example of Stop-and-Wait ARQ.
Frame 0 is sent and acknowledged. Frame 1 is lost and
resent after the time-out. The resent frame 1 is
acknowledged and the timer stops. Frame 0 is sent and
acknowledged, but the acknowledgment is lost. The sender
has no idea if the frame or the acknowledgment is lost, so
after the time-out, it resends frame 0, which is
acknowledged.

11.89
Figure 11.11 Flow diagram for Example 11.3

11.90
4. Go-Back-N ARQ Protocol

 To improve the efficiency of transmission, we should have a
protocol in which multiple frames can be in transit while
waiting for acknowledgement.
 In other words, we need to let more than one frame be
outstanding to keep the channel busy while the sender is
waiting for acknowledgement.
 Go-Back-N is one such protocol. In this protocol we
can send several frames before receiving acknowledgements;
we keep a copy of these frames until the
acknowledgements arrive.
i) Sequence Numbers

In the Go-Back-N Protocol, the sequence
numbers are modulo 2^m, where m is the
size of the sequence number field in bits.

For example,
~ if m = 2, the sequence numbers range from 0 to 3
( 0, 1, 2, 3 )
~ if m = 4, the sequence numbers range from 0 to 15
( 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 )
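A tiny Python sketch of modulo-2^m sequence numbering and the maximum send-window size:

```python
# m-bit sequence numbers wrap around modulo 2**m; the send window
# holds at most 2**m - 1 outstanding frames.
def gbn_sequence_numbers(m: int, count: int) -> list:
    return [i % 2 ** m for i in range(count)]

print(gbn_sequence_numbers(2, 10))  # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
print(2 ** 3 - 1)                   # maximum send-window size for m = 3
```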
Figure 11.12 Send window for Go-Back-N ARQ

11.93
Note

The send window is an abstract concept
defining an imaginary box of size 2^m - 1
with three variables: Sf, Sn, and Ssize.

11.94
Note

The send window can slide one
or more slots when a valid
acknowledgment arrives.

11.95
Figure 11.13 Receive window for Go-Back-N ARQ

11.96
Note

The receive window is an abstract
concept defining an imaginary box
of size 1 with one single variable Rn.
The window slides
when a correct frame has arrived;
sliding occurs one slot at a time.

11.97
Figure 11.14 Design of Go-Back-N ARQ

11.98
Figure 11.15 Window size for Go-Back-N ARQ

11.99
Note

In Go-Back-N ARQ, the size of the send
window must be less than 2^m;
the size of the receive window
is always 1.

11.100
Example 11.6

Figure 11.16 shows an example of Go-Back-N. This is an
example of a case where the forward channel is reliable,
but the reverse is not. No data frames are lost, but some
ACKs are delayed and one is lost. The example also shows
how cumulative acknowledgments can help if
acknowledgments are delayed or lost. After initialization,
there are seven sender events. Request events are triggered
by data from the network layer; arrival events are triggered
by acknowledgments from the physical layer. There is no
time-out event here because all outstanding frames are
acknowledged before the timer expires. Note that although
ACK 2 is lost, ACK 3 serves as both ACK 2 and ACK 3.

11.101
Figure 11.16 Flow diagram for Example 11.6

11.102
Example 11.7

Figure 11.17 shows what happens when a frame is lost.
Frames 0, 1, 2, and 3 are sent. However, frame 1 is lost.
The receiver receives frames 2 and 3, but they are
discarded because they are received out of order. The
sender receives no acknowledgment about frames 1, 2, or
3. Its timer finally expires. The sender sends all
outstanding frames (1, 2, and 3) because it does not know
what is wrong. Note that the resending of frames 1, 2, and
3 is the response to one single event. When the sender is
responding to this event, it cannot accept the triggering of
other events. This means that when ACK 2 arrives, the
sender is still busy with sending frame 3.
11.103
Example 11.7 (continued)

The physical layer must wait until this event is completed
and the data link layer goes back to its sleeping state. We
have shown a vertical line to indicate the delay. It is the
same story with ACK 3; but when ACK 3 arrives, the
sender is busy responding to ACK 2. It happens again when
ACK 4 arrives. Note that before the second timer expires,
all outstanding frames have been sent and the timer is
stopped.

11.104
Figure 11.17 Flow diagram for Example 11.7

11.105
Note

Stop-and-Wait ARQ is a special case of
Go-Back-N ARQ in which the size of the
send window is 1.

11.106
5. Selective Repeat ARQ Protocol

 Go-Back-N ARQ simplifies the process at the receiver site.
The receiver keeps track of only one variable, and there is no
need to buffer out-of-order frames; they are simply
discarded.
 However, this protocol is very inefficient for a noisy link. In a
noisy link a frame has higher probability of damage, which
means the resending of multiple frames. This resending uses
up the bandwidth and slows down the transmission.
 For noisy links, there is another mechanism that does not
resend N frames when just one frame is damaged; only the
damaged frame is resent. This mechanism is called Selective
Repeat ARQ. It is more efficient for noisy links, but
processing at the receiver is more complex.
Windows

a) Send window for Selective Repeat ARQ

b) Receive window for Selective Repeat ARQ


Figure 11.20 Design of Selective Repeat ARQ

11.109
Figure 11.21 Selective Repeat ARQ, window size

11.110
Note

In Selective Repeat ARQ, the size of the
sender and receiver windows
must be at most one-half of 2^m.

11.111
Figure 11.22 Delivery of data in Selective Repeat ARQ

11.112
Example 11.8

This example is similar to Example 11.3 in which frame 1 is
lost. We show how Selective Repeat behaves in this case.
Figure 11.23 shows the situation. One main difference is
the number of timers. Here, each frame sent or resent needs
a timer, which means that the timers need to be numbered
(0, 1, 2, and 3). The timer for frame 0 starts at the first
request, but stops when the ACK for this frame arrives. The
timer for frame 1 starts at the second request, restarts when
a NAK arrives, and finally stops when the last ACK arrives.
The other two timers start when the corresponding frames
are sent and stop at the last arrival event.

11.113
Example 11.8 (continued)

At the receiver site we need to distinguish between the
acceptance of a frame and its delivery to the network layer.
At the second arrival, frame 2 arrives and is stored and
marked, but it cannot be delivered because frame 1 is
missing. At the next arrival, frame 3 arrives and is marked
and stored, but still none of the frames can be delivered.
Only at the last arrival, when finally a copy of frame 1
arrives, can frames 1, 2, and 3 be delivered to the network
layer. There are two conditions for the delivery of frames to
the network layer: First, a set of consecutive frames must
have arrived. Second, the set starts from the beginning of
the window.
11.114
Example 11.8 (continued)

Another important point is that a NAK is sent after the
second arrival, but not after the third, although both
situations look the same. The reason is that the protocol
does not want to crowd the network with unnecessary
NAKs and unnecessary resent frames. The second NAK
would still be NAK1, to inform the sender to resend frame 1
again; this has already been done. The first NAK sent is
remembered (using the nakSent variable) and is not sent
again until the window slides. A NAK is sent once for each
window position and defines the first slot in the window.

11.115
Example 11.8 (continued)

The next point is about the ACKs. Notice that only two ACKs
are sent here. The first one acknowledges only the first
frame; the second one acknowledges three frames. In
Selective Repeat, ACKs are sent when data are delivered to
the network layer. If the data belonging to n frames are
delivered in one shot, only one ACK is sent for all of them.

11.116
Figure 11.23 Flow diagram for Example 11.8

11.117
Media Access Control
(MAC)
introduction
 In the protocols we described, we assumed that there is an available
dedicated link (or channel) between the sender and the receiver.
 This assumption may or may not be true.
 If we use our cellular phone to connect to another cellular phone, the
channel (the band allocated to the vendor company) is not dedicated.
 A person a few feet away from us may be using the same channel to
talk to her friend.
 We can consider the data link layer as two sublayers.
 The upper sublayer is responsible for data link control, and the lower
sublayer is responsible for resolving access to the shared media.
 If the channel is dedicated, we do not need the lower sublayer.
 The upper sublayer that is responsible for flow and error control is called
the logical link control (LLC) layer; the lower sublayer that is mostly
responsible for multiple-access resolution is called the media access
control (MAC) layer.

119
Taxonomy of multiple-access protocols

12.120
RANDOM ACCESS
 In random access or contention methods, no station is
superior to another station and none is assigned the control
over another.
 No station permits, or does not permit, another station to
send.
 At each instance, a station that has data to send uses a
procedure defined by the protocol to make a decision on
whether or not to send.
 Two features give this method its name.
• First, there is no scheduled time for a station to transmit.
Transmission is random among the stations
• Second, no rules specify which station should send next.
Stations compete with one another to access the medium

12.121
ALOHA
- the earliest random access method
- developed at the University of Hawaii in the
early 1970s.
- It was designed for a radio (wireless) LAN, but
it can be used on any shared medium.
- It is obvious that there are potential collisions in
this arrangement.
- The medium is shared between the stations.
When a station sends data, another station may
attempt to do so at the same time.
- The data from the two stations collide and
become garbled.
12.122
Pure ALOHA
 The original ALOHA protocol is called pure
ALOHA. This is a simple, but elegant protocol.
 The idea is that each station sends a frame
whenever it has a frame to send.
 However, since there is only one channel to
share, there is the possibility of collision
between frames from different stations.

123
 The pure ALOHA protocol relies on acknowledgments from the receiver.
 When a station sends a frame, it expects the receiver to send an
acknowledgment.
 If the acknowledgment does not arrive after a time-out period, the
station assumes that the frame (or the acknowledgment) has been
destroyed and resends the frame.
 A collision involves two or more stations. If all these stations try to
resend their frames after the time-out, the frames will collide again.
 Pure ALOHA dictates that when the time-out period passes, each station
waits a random amount of time before resending its frame.
 The randomness will help avoid more collisions. We call this time the
back-off time TB.
 Pure ALOHA has a second method to prevent congesting the channel
with retransmitted frames: after a maximum number of retransmission
attempts Kmax, a station must give up and try later.

124
Frames in a pure ALOHA network

12.125
Procedure for pure ALOHA protocol

126
Vulnerable time
 The vulnerable time is the length of time in which there is a possibility
of collision. We assume that the stations send fixed-length frames, with
each frame taking Tfr seconds to send.
 The following figure shows the vulnerable time for station A.

127
 Station A sends a frame at time t. Now imagine station B has
already sent a frame between t - Tfr and t. This leads to a collision
between the frames from station A and station B. The end of B's
frame collides with the beginning of A's frame. On the other hand,
suppose that station C sends a frame between t and t + Tfr . Here,
there is a collision between frames from station A and station C.
The beginning of C's frame collides with the end of A's frame.

128
slotted ALOHA
 Pure ALOHA has a vulnerable time of 2 x Tfr. This is so
because there is no rule that defines when the station
can send. A station may send soon after another station
has started or soon before another station has finished.
Slotted ALOHA was invented to improve the efficiency of
pure ALOHA.

 In slotted ALOHA we divide the time into slots of Tfr s


and force the station to send only at the beginning of
the time slot. The following figure shows an example of
frame collisions in slotted ALOHA.

129
Frames in a slotted
ALOHA network

12.130
 Because a station is allowed to send only at the
beginning of the synchronized time slot, if a station
misses this moment, it must wait until the beginning of
the next time slot.
 This means that the station which started at the
beginning of this slot has already finished sending its
frame.
 But, still there is the possibility of collision if two stations
try to send at the beginning of the same time slot.
However, the vulnerable time is now reduced to one-half,
equal to Tfr. The following figure shows the situation.
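The two vulnerable times, together with the classical ALOHA throughput formulas S = G * e^(-2G) for pure and S = G * e^(-G) for slotted (standard results not derived on these slides), can be sketched as:

```python
import math

# Vulnerable time as stated above: 2 x Tfr for pure ALOHA, Tfr for slotted.
def vulnerable_time(t_fr: float, slotted: bool) -> float:
    return t_fr if slotted else 2 * t_fr

# Classical throughput results, where G is the average number of frames
# generated per frame time (assumption: Poisson arrivals).
def throughput(G: float, slotted: bool) -> float:
    return G * math.exp(-G if slotted else -2 * G)

print(vulnerable_time(0.001, slotted=False))      # 0.002
print(round(throughput(0.5, slotted=False), 3))   # pure ALOHA peak, about 0.184
print(round(throughput(1.0, slotted=True), 3))    # slotted peak, about 0.368
```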

131
132
CSMA
- The chance of collision can be reduced if a
station senses the medium before trying to use
it.
- Carrier sense multiple access (CSMA) requires
that each station first listen to the medium (or
check the state of the medium) before sending.

- In other words, CSMA is based on the principle


“sense before transmit” or “listen before talk.”

133
Space/time model of a
collision in CSMA

134
 Persistence Methods
 What should a station do if the channel is busy? What should
a station do if the channel is idle? Three methods have been
devised to answer these questions: the 1-persistent method,
the nonpersistent method, and the p-persistent method.

135
 1-Persistent: In the 1-persistent mode of CSMA, each node first senses
the shared channel, and if the channel is idle, it immediately sends the
data. Otherwise, it keeps monitoring the channel and transmits the frame
as soon as the channel becomes idle (i.e., it transmits with probability 1).

 Non-Persistent: In this access mode of CSMA, each node senses the
channel before transmitting, and if the channel is idle, it immediately
sends the data. Otherwise, the station waits for a random amount of time
(rather than sensing continuously), then senses the channel again and
transmits if it is found idle.

 P-Persistent: This mode combines the 1-persistent and non-persistent
approaches. Each node senses the channel, and if the channel is idle, it
sends a frame with probability p. Otherwise (with probability q = 1 - p),
it defers to the next time slot and repeats the process.
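The p-persistent decision loop can be sketched as follows (a simplified sketch, not from the slides; the always-idle channel model is an assumption for illustration):

```python
# Sketch (assumption: simplified decision loop with an idealized channel): a
# p-persistent station transmits with probability p in each slot where the channel
# is sensed idle, and defers to the next slot with probability q = 1 - p.
import random

def p_persistent_send(p: float, channel_idle, max_slots: int = 1000):
    """Return the slot number in which the station transmits, or None if it never does."""
    for slot in range(max_slots):
        if channel_idle():              # sense the channel at this slot
            if random.random() < p:     # transmit with probability p
                return slot
            # with probability q = 1 - p: defer and re-sense at the next slot
    return None

random.seed(42)
slot = p_persistent_send(p=0.3, channel_idle=lambda: True)
print(f"transmitted in slot {slot}")
```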
136
Behavior of three
persistence methods

137
CSMA/CD
- It is a carrier sense multiple access/collision detection network
protocol used to transmit data frames.
- The CSMA/CD protocol works with a medium access control layer. A
station first senses the shared channel before broadcasting a frame,
and if the channel is idle, it transmits the frame while monitoring
whether the transmission is successful.
- If the frame is transmitted successfully, the station can send its
next frame. If any collision is detected, the station sends a jam
signal on the shared channel to inform all stations and terminate the
data transmission.
- After that, it waits for a random time before attempting to resend
the frame.
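In classic Ethernet that "random time" is chosen by binary exponential backoff, a standard detail the slide does not spell out. A minimal sketch (the slot time and attempt limit are the classic 10 Mbps Ethernet values):

```python
# Sketch (assumption: classic Ethernet binary exponential backoff; the slide only
# says "a random time"). After the n-th consecutive collision, a station picks a
# random number of slot times k in [0, 2^min(n, 10) - 1] and waits k slot times.
import random

SLOT_TIME = 51.2e-6   # 51.2 microseconds for 10 Mbps Ethernet
MAX_ATTEMPTS = 16     # classic Ethernet gives up after 16 attempts

def backoff_delay(collision_count: int) -> float:
    """Random wait (in seconds) after the given number of consecutive collisions."""
    if collision_count > MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: abort transmission")
    k = random.randint(0, 2 ** min(collision_count, 10) - 1)
    return k * SLOT_TIME

random.seed(0)
for n in (1, 2, 3):
    print(f"after collision {n}: wait {backoff_delay(n) / SLOT_TIME:.0f} slot times")
```

Note how the window doubles with each collision, spreading stations out over time when the network is congested.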

138
Collision of the first
bits in CSMA/CD

139
Collision and
abortion in
CSMA/CD

140
Flow diagram for CSMA/CD

141
CSMA/CA
- It is a carrier sense multiple access/collision
avoidance network protocol for carrier transmission of data
frames.
- It is a protocol that works with a medium access control
layer.
- When a data frame is sent on a channel, the station listens
to the channel to check whether it is clear.
- If the station receives only a single signal (its own), the
data frame has been successfully transmitted to the
receiver.
- But if it receives two signals (its own and a second one
with which it collided), a collision of the frames has
occurred on the shared channel.
- In practice, the sender infers a collision of the frame when
it does not receive an acknowledgment signal.

142
 Following are the methods used in the CSMA/ CA to avoid the collision:

 Interframe space: In this method, the station waits for the channel to
become idle, and if it gets the channel is idle, it does not immediately
send the data. Instead of this, it waits for some time, and this time period
is called the Interframe space or IFS. However, the IFS time is often
used to define the priority of the station.
 Contention window: In the Contention window, the total time is divided
into different slots. When the station/ sender is ready to transmit the data
frame, it chooses a random slot number of slots as wait time. If the
channel is still busy, it does not restart the entire process, except that it
restarts the timer only to send data packets when the channel is inactive.
 Acknowledgment: In the acknowledgment method, the sender station
sends the data frame to the shared channel if the acknowledgment is not
received ahead of time.
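The pause-don't-restart behavior of the contention window can be sketched as follows (a simplified model, not from the slides; channel states are supplied as a list of booleans for illustration):

```python
# Sketch (assumption: simplified contention-window countdown; one list entry per
# slot, True = channel idle). The backoff timer counts down only on idle slots and
# is frozen -- not reset -- while the channel is busy.
import random

def contention_countdown(channel_idle_by_tick, window_size: int, seed: int = 1):
    """Pick a random backoff in [0, window_size); return the tick at which the
    frame is sent, or None if the tick list runs out first."""
    rng = random.Random(seed)
    backoff = rng.randrange(window_size)
    for tick, idle in enumerate(channel_idle_by_tick):
        if idle:
            if backoff == 0:
                return tick        # countdown finished on an idle tick: transmit
            backoff -= 1           # decrement only while the channel is idle
        # busy tick: the timer is frozen, not restarted
    return None

# an idle tick, a busy tick (countdown pauses), then idle ticks again
ticks = [True, False, True, True, True, True]
print(contention_countdown(ticks, window_size=4))
```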

143
Contention
window

144
Ethernet
 What is Ethernet?
 Ethernet is the traditional technology for connecting devices in a wired
local area network (LAN) or wide area network (WAN). It enables devices
to communicate with each other via a protocol, which is a set of rules or
common network language.

 Ethernet describes how network devices format and transmit data so
other devices on the same LAN or campus network can recognize,
receive and process the information. An Ethernet cable is the physical,
encased wiring over which the data travels.

145
Ethernet Frames
Ethernet Encapsulation

• Ethernet operates in the
data link layer and the
physical layer.
• It is a family of networking
technologies defined in the
IEEE 802.2 and 802.3
standards.
Ethernet Frames
Data Link Sublayers

The 802 LAN/MAN standards, including
Ethernet, use two separate sublayers
of the data link layer to operate:
• LLC Sublayer: (IEEE 802.2) Places
information in the frame to identify
which network layer protocol is used for
the frame.
• MAC Sublayer: (IEEE 802.3, 802.11, or
802.15) Responsible for data
encapsulation and media access control,
and provides data link layer addressing.
Ethernet Frames
MAC Sublayer

The MAC sublayer is responsible for data encapsulation and accessing the media.

Data Encapsulation
IEEE 802.3 data encapsulation includes the following:
1. Ethernet frame - This is the internal structure of the Ethernet frame.
2. Ethernet Addressing - The Ethernet frame includes both a source and destination MAC
address to deliver the Ethernet frame from Ethernet NIC to Ethernet NIC on the same LAN.
3. Ethernet Error detection - The Ethernet frame includes a frame check sequence (FCS) trailer
used for error detection.
Ethernet Frames
MAC Sublayer
Media Access
• The IEEE 802.3 MAC sublayer includes the
specifications for different Ethernet
communications standards over various types
of media including copper and fiber.
• Legacy Ethernet using a bus topology or
hubs, is a shared, half-duplex medium.
Ethernet over a half-duplex medium uses a
contention-based access method, carrier
sense multiple access/collision detection
(CSMA/CD).
• Ethernet LANs of today use switches that
operate in full-duplex. Full-duplex
communications with Ethernet switches do
not require access control through CSMA/CD.
Ethernet Frames
Ethernet Frame Fields

• The minimum Ethernet frame size is 64 bytes and the maximum is 1518 bytes. The
preamble field is not included when describing the size of the frame.
• Any frame less than 64 bytes in length is considered a “collision fragment” or “runt frame”
and is automatically discarded. Frames with more than 1500 bytes of data are considered
“jumbo” or “baby giant frames”.
• If the size of a transmitted frame is less than the minimum, or greater than the maximum,
the receiving device drops the frame. Dropped frames are likely to be the result of
collisions or other unwanted signals. They are considered invalid. Jumbo frames are
usually supported by most Fast Ethernet and Gigabit Ethernet switches and NICs.
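The size checks above can be sketched directly from the quoted limits (a sketch only; real NICs apply these rules in hardware, and the exact jumbo-frame limit varies by device):

```python
# Sketch (assumption: size checks only, using the limits quoted above; the
# preamble is excluded from the measured length, as the slide notes).
MIN_FRAME = 64      # bytes; anything smaller is a runt / collision fragment
MAX_FRAME = 1518    # bytes; standard maximum (without jumbo-frame support)

def classify_frame(length: int) -> str:
    if length < MIN_FRAME:
        return "runt (discard)"
    if length > MAX_FRAME:
        return "oversized (discard unless jumbo frames are supported)"
    return "valid"

for size in (60, 64, 1518, 9000):
    print(size, "->", classify_frame(size))
```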
Ethernet MAC Address

© 2016 Cisco and/or its affiliates. All rights reserved. Cisco Confidential 151
Ethernet MAC Addresses
MAC Address and Hexadecimal

• An Ethernet MAC address consists of a 48-bit binary value, expressed using
12 hexadecimal digits.
• Given that 8 bits (one byte) is a common binary grouping, binary 00000000
to 11111111 can be represented in hexadecimal as the range 00 to FF.
• When using hexadecimal, leading zeroes are always displayed to complete
the 8-bit representation. For example the binary value 0000 1010 is
represented in hexadecimal as 0A.
• Hexadecimal numbers are often represented by the value preceded
by 0x (e.g., 0x73) to distinguish between decimal and hexadecimal values in
documentation.
• Hexadecimal may also be represented by a subscript 16, or the hex number
followed by an H (e.g., 73H).
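The conversions and notations above can be checked in a few lines (a sketch; the MAC address shown is a hypothetical example, not a real assigned address):

```python
# Sketch (assumption: hypothetical example address). Leading zeroes are kept so
# each byte is always two hex digits, and a 48-bit MAC prints as 12 hex digits.
value = 0b00001010                      # binary 0000 1010
print(format(value, "02X"))             # -> "0A": leading zero completes the byte

mac_bytes = bytes([0x00, 0x1B, 0x44, 0x11, 0x3A, 0xB7])   # hypothetical address
print("-".join(f"{b:02X}" for b in mac_bytes))            # -> "00-1B-44-11-3A-B7"

print(f"0x{value:02X}")                 # 0x0A notation
print(f"{value:02X}H")                  # 0AH notation
```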
Ethernet MAC Addresses
Ethernet MAC Address

• In an Ethernet LAN, every network device is connected to the same, shared media.
MAC addressing provides a method for device identification at the data link layer of the
OSI model.
• An Ethernet MAC address is a 48-bit address expressed using 12 hexadecimal digits.
Because a byte equals 8 bits, we can also say that a MAC address is 6 bytes in length.
• All MAC addresses must be unique to the Ethernet device or Ethernet interface. To
ensure this, all vendors that sell Ethernet devices must register with the IEEE to obtain
a unique 6 hexadecimal (i.e., 24-bit or 3-byte) code called the organizationally unique
identifier (OUI).
• An Ethernet MAC address consists of a 6 hexadecimal vendor OUI code followed by a 6
hexadecimal vendor-assigned value.
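The OUI/vendor split described above can be sketched as follows (the address is hypothetical; real OUI-to-vendor lookups require the IEEE registry, which is not reproduced here):

```python
# Sketch (assumption: hypothetical address). A MAC address is 6 bytes: the first
# 3 bytes (6 hex digits) are the IEEE-assigned OUI, the last 3 bytes are the
# vendor-assigned value.
def split_mac(mac: str):
    digits = mac.replace("-", "").replace(":", "").upper()
    assert len(digits) == 12, "a MAC address is 12 hex digits (48 bits / 6 bytes)"
    return digits[:6], digits[6:]       # (OUI, vendor-assigned value)

oui, vendor_part = split_mac("00-1b-44-11-3a-b7")
print(oui, vendor_part)                 # -> 001B44 113AB7
```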
Ethernet MAC Addresses
Frame Processing

• When a device is forwarding a message to an
Ethernet network, the Ethernet header includes a
source MAC address and a destination MAC address.
• When a NIC receives an Ethernet frame, it examines
the destination MAC address to see if it matches the
physical MAC address that is stored in RAM. If there
is no match, the device discards the frame. If there
is a match, it passes the frame up the OSI layers,
where the de-encapsulation process takes place.
Note: Ethernet NICs will also accept frames if the
destination MAC address is a broadcast or a multicast
group of which the host is a member.
• Any device that is the source or destination of an
Ethernet frame, will have an Ethernet NIC and
therefore, a MAC address. This includes
workstations, servers, printers, mobile devices, and
routers.
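The acceptance rule above (own address, broadcast, or a joined multicast group) can be sketched in software (a simplified model; real filtering happens in NIC hardware, and the addresses are hypothetical):

```python
# Sketch (assumption: simplified software model of the NIC filtering rule; real
# NICs do this in hardware). Accept a frame if it is addressed to this NIC, is a
# broadcast, or targets a multicast group this host has joined.
BROADCAST = "FF-FF-FF-FF-FF-FF"

def nic_accepts(frame_dst: str, my_mac: str, my_multicast_groups=()) -> bool:
    dst = frame_dst.upper()
    return (dst == my_mac.upper()
            or dst == BROADCAST
            or dst in {g.upper() for g in my_multicast_groups})

MY_MAC = "00-1B-44-11-3A-B7"            # hypothetical unicast address
print(nic_accepts("00-1B-44-11-3A-B7", MY_MAC))   # True: addressed to us
print(nic_accepts(BROADCAST, MY_MAC))             # True: broadcast
print(nic_accepts("00-1B-44-99-99-99", MY_MAC))   # False: some other NIC
```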
Ethernet MAC Addresses
Unicast MAC Address

In Ethernet, different MAC addresses are
used for Layer 2 unicast, broadcast, and
multicast communications.
• A unicast MAC address is the unique
address that is used when a frame is sent
from a single transmitting device to a
single destination device.
• The process that a source host uses to
determine the destination MAC address
associated with an IPv4 address is known
as Address Resolution Protocol (ARP). The
process that a source host uses to
determine the destination MAC address
associated with an IPv6 address is known
as Neighbor Discovery (ND).
Note: The source MAC address must always
be a unicast.
Ethernet MAC Addresses
Broadcast MAC Address

An Ethernet broadcast frame is received and
processed by every device on the Ethernet LAN.
The features of an Ethernet broadcast are as
follows:
• It has a destination MAC address of FF-FF-FF-
FF-FF-FF in hexadecimal (48 ones in binary).
• It is flooded out all Ethernet switch ports
except the incoming port. It is not forwarded
by a router.
• If the encapsulated data is an IPv4 broadcast
packet, this means the packet contains a
destination IPv4 address that has all ones (1s)
in the host portion. This numbering in the
address means that all hosts on that local
network (broadcast domain) will receive and
process the packet.
Ethernet MAC Addresses
Multicast MAC Address

An Ethernet multicast frame is received and processed by a
group of devices that belong to the same multicast group.
• There is a destination MAC address of 01-00-5E when the
encapsulated data is an IPv4 multicast packet and a
destination MAC address of 33-33 when the encapsulated
data is an IPv6 multicast packet.
• There are other reserved multicast destination MAC
addresses for when the encapsulated data is not IP, such
as Spanning Tree Protocol (STP).
• It is flooded out all Ethernet switch ports except the
incoming port, unless the switch is configured for
multicast snooping. It is not forwarded by a router, unless
the router is configured to route multicast packets.
• Because multicast addresses represent a group of
addresses (sometimes called a host group), they can only
be used as the destination of a packet. The source will
always be a unicast address.
• As with the unicast and broadcast addresses, the
multicast IP address requires a corresponding multicast
MAC address.
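The address classes above, and the IPv4-multicast-to-MAC mapping behind the 01-00-5E prefix, can be sketched as follows (the classification uses the standard I/G bit of the first octet, a detail not stated on the slide; the low 23 bits of the IPv4 multicast address are copied into the MAC):

```python
# Sketch (assumption: standard IEEE/IPv4 mapping rules as summarized above).
import ipaddress

def classify_mac(mac: str) -> str:
    """Broadcast, multicast (I/G bit set in the first octet), or unicast."""
    if mac.upper().replace(":", "-") == "FF-FF-FF-FF-FF-FF":
        return "broadcast"
    first_octet = int(mac.replace("-", "").replace(":", "")[:2], 16)
    return "multicast" if first_octet & 1 else "unicast"

def ipv4_multicast_mac(ip: str) -> str:
    """Derive the 01-00-5E multicast MAC for an IPv4 multicast address."""
    low23 = int(ipaddress.IPv4Address(ip)) & 0x7FFFFF   # low 23 bits of the IP
    mac_int = 0x01005E000000 | low23                    # prefix 01-00-5E
    return "-".join(f"{b:02X}" for b in mac_int.to_bytes(6, "big"))

print(classify_mac("00-1B-44-11-3A-B7"))   # unicast (hypothetical address)
print(classify_mac("FF-FF-FF-FF-FF-FF"))   # broadcast
print(ipv4_multicast_mac("224.1.1.1"))     # -> 01-00-5E-01-01-01
```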
