Unit 2
Medium Access Control (MAC)
❑ Time Division Multiple Access (TDMA) – With TDMA the time axis is
divided into time slots of a fixed length. Each user is allocated a fixed
set of time slots at which it can transmit. TDMA requires that users be
synchronized to a common clock. Typically extra overhead bits are
required for synchronization.
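The fixed-slot allocation described above can be sketched in a few lines of Python (an illustrative sketch only; the function name and the allocation layout are assumptions, not part of any standard):

```python
# Sketch of fixed-allocation TDMA: time is divided into equal slots and
# each user owns a fixed set of slots within a repeating frame.

def slot_owner(t, slot_len, allocation):
    """Return the user that owns the slot containing time t.

    allocation: list mapping slot index within a frame to a user id.
    """
    frame_len = slot_len * len(allocation)
    slot_index = int(t % frame_len // slot_len)
    return allocation[slot_index]

# Three users; user 0 is allocated two slots per 4-slot frame.
alloc = [0, 1, 0, 2]
print(slot_owner(0.5, slot_len=1.0, allocation=alloc))   # slot 0 -> user 0
print(slot_owner(3.2, slot_len=1.0, allocation=alloc))   # slot 3 -> user 2
```

Because the schedule is fixed, every station must agree on where slot boundaries lie, which is why TDMA requires synchronization to a common clock.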
[Figure: the farthest station, Station B, receives the first bit of the frame at time t = tp.]
[Figure: frames in a pure ALOHA network. Nodes 1, 2, and 3 transmit packets; overlapping packets collide, and each node retransmits after waiting a random time.]
Solution
Average frame transmission time Tfr is 200 bits/200 kbps
or 1 ms.
The vulnerable time is 2 × 1 ms = 2 ms.
This means no station should send later than 1 ms before this station starts transmission, and no station should start sending during the 1-ms period that this station is sending.
Example
A pure ALOHA network transmits 200-bit frames on a
shared channel of 200 kbps. What is the throughput if the
system (all stations together) produces
a. 1000 frames per second b. 500 frames per second
c. 250 frames per second.
Solution
The frame transmission time Tfr = 200 bits/200 kbps or 1 ms.
a. If the system creates 1000 frames per second, this is 1
frame per millisecond. The load G is 1.
In this case S = G × e^−2G, or S = 0.135 (13.5 percent). This means that the throughput is 1000 × 0.135 = 135 frames. Only 135 frames out of 1000 will probably survive.
Example (continued)
b. If the system creates 500 frames per second, this is 1/2 frame per millisecond. The load G is 1/2. In this case S = G × e^−2G, or S = 0.184 (18.4 percent). This means that the throughput is 500 × 0.184 = 92, and only 92 frames out of 500 will probably survive.
c. If the system creates 250 frames per second, this is 1/4 frame per millisecond. The load G is 1/4. In this case S = G × e^−2G, or S = 0.152 (15.2 percent). This means that the throughput is 250 × 0.152 = 38, and only 38 frames out of 250 will probably survive.
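The throughput figures in this example can be checked with a short script. Since Tfr = 1 ms here, the frames-per-second rate maps directly to G (frames per frame time). The function names are my own:

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G) for pure ALOHA."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G) for slotted ALOHA."""
    return G * math.exp(-G)

# Rates from the example: 1000, 500, and 250 frames per second.
for rate in (1000, 500, 250):
    G = rate / 1000          # frames per 1-ms frame time
    S = pure_aloha_throughput(G)
    print(f"{rate} frames/s: G = {G}, S = {S:.3f}, surviving about {rate * S:.0f} frames")
```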
[Figure: throughput S versus average number of frames per unit time (G) for pure ALOHA; S peaks at about 0.184 when G = 0.5.]
Slotted ALOHA protocol
❑ The method: Divide time into discrete intervals,
each interval corresponding to one frame.
[Figure: throughput S versus average number of frames per unit time (G) for slotted ALOHA; S peaks at about 0.368 when G = 1.]
Performance Comparison of ALOHA
Slotted ALOHA can double the throughput of pure ALOHA: it peaks at G = 1 with S = 1/e ≈ 0.368, twice the peak of pure ALOHA. The main reason for the poor channel utilization of ALOHA (pure or slotted) is that all stations can transmit at will, without paying attention to what the other stations are doing.
Ten thousand airline reservation stations are competing for the use of a single slotted ALOHA channel. The average station makes 18 requests/hour. A slot is 125 μs. What is the approximate total channel load?
Solution
Each terminal makes one request every 200 seconds, so the 10,000 terminals together generate 50 requests/second. A 125-μs slot gives 8000 slots/second. The attempt rate per slot (counting retransmissions as well as original transmissions) is the quantity that determines channel load. Thus G = 50/8000 = 1/160.
Measurements of a slotted ALOHA channel with an infinite number of users
show that 10 percent of the slots are idle.
(a) What is the channel load G?
(b) What is the throughput?
(c) Is the channel underloaded or overloaded?
Solution
(a) What is the channel load, G?
Ans: When a slot is idle, zero frames are generated in that frame time.
Therefore P[idle] = 0.1.
P[idle] = e^−G = 0.1;
−G = ln(0.1);
G = 2.303.
(b) What is the throughput?
Ans: S = Ge^−G = 2.303 × 0.1 = 0.2303.
(c) Is the channel underloaded or overloaded?
Ans: When G = 1, slotted ALOHA obtains its optimal throughput. With G > 1, too many frames are generated per slot; this is an overloaded situation.
Here G = 2.303 and S = 0.2303 < Smax = 0.368, so G > S. Therefore the channel is overloaded.
Suppose measurements made on a slotted ALOHA channel for a very large
number of users shows that on average 20% of the slots are idle.
a) What is the channel load G ?
b) What is the throughput?
c) Is the channel underloaded or overloaded?
Solution
(a) What is the channel load, G?
Ans: When a slot is idle, zero frames are generated in that frame time.
Therefore P[idle] = 0.2.
P[idle] = e^−G = 0.2;
−G = ln(0.2);
G = 1.61.
(b) What is the throughput?
Ans: S = Ge^−G = 1.61 × 0.2 = 0.32.
(c) Is the channel underloaded or overloaded?
Ans: Slotted ALOHA is at its optimum (maximum utilization/throughput) at G = 1.0. Since G = 1.61 > 1, the channel is overloaded.
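Both idle-slot problems above follow the same two steps, which can be sketched as follows (function names are my own):

```python
import math

def load_from_idle_fraction(p_idle):
    """For slotted ALOHA, P[idle] = e^(-G), so G = -ln(P[idle])."""
    return -math.log(p_idle)

def slotted_throughput(G):
    """S = G * e^(-G) for slotted ALOHA."""
    return G * math.exp(-G)

for p_idle in (0.1, 0.2):
    G = load_from_idle_fraction(p_idle)
    S = slotted_throughput(G)            # note e^(-G) is the idle fraction itself
    state = "overloaded" if G > 1 else "underloaded"
    print(f"idle = {p_idle}: G = {G:.3f}, S = {S:.4f}, channel is {state}")
```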
CSMA (Carrier-Sense Multiple-Access )
The poor efficiency of the ALOHA scheme can be attributed to the fact that a node starts transmission without paying any attention to what others are doing.
In situations where propagation delay of the signal between two
nodes is small compared to the transmission time of a packet,
all other nodes will know very quickly when a node starts
transmission. This observation is the basis of the carrier-sense
multiple-access (CSMA) protocol.
In this scheme, a node having data to transmit first listens to the
medium to check whether another transmission is in progress or
not. The node starts sending only when the channel is free, that is, when there is no carrier. That is why the scheme is also known as listen-before-talk.
Vulnerable time in CSMA
Behavior of three persistence methods
There are three variations of this basic scheme as outlined below.
• When two stations both begin transmitting at exactly the same time, how long will it
take them to realize that there has been a collision ?
The minimum time to detect the collision is the time it takes the signal to
propagate from one station to the other.
• How long before the transmitting station can be sure it has seized the channel?
• It is worth noting that no MAC-sublayer protocol guarantees reliable delivery. Even
in the absence of collisions, the receiver may not have copied the frame correctly
due to various reasons (e.g., lack of buffer space or a missed interrupt).
Collision and abortion in CSMA/CD
Question
3. Acknowledgement
• Despite all the precautions, collisions may occur and destroy the data.
• A positive acknowledgment and a time-out timer can help guarantee that the receiver has received the frame.
Timing in CSMA/CA
Flow diagram for CSMA/CA
Performance Comparison of all protocols
[Figure: throughput S versus offered load G for 1-persistent CSMA, slotted ALOHA, and pure ALOHA; 1-persistent CSMA achieves the highest peak, followed by slotted ALOHA (0.368) and pure ALOHA (0.184).]
CONTROLLED ACCESS METHODS
In controlled access, the stations consult one another
to find which station has the right to send. A station
cannot send unless it has been authorized by other
stations. We discuss three popular controlled-access
methods.
❑Reservation
❑Polling
❑Token Passing
RESERVATION ACCESS METHOD
POLLING
Select and poll functions in polling access method
Logical ring and physical topology in Token-passing Access Method
CHANNELIZATION
Chip sequences
Sharing channel in CDMA
General rule and examples of creating Walsh tables
The number of sequences in a Walsh table needs to be N = 2^m.
Walsh codes are the most common orthogonal codes used in CDMA
applications. A set of Walsh codes of length n consists of the n rows
of an n×n Walsh matrix. The matrix is defined recursively in
previous diagram where n is the dimension of the matrix and the
overscore denotes the logical NOT of the bits in the matrix. The
Walsh matrix has the property that every row is orthogonal
to every other row and to the logical NOT of every other row.
Solution
We can use the rows of W2 and W4 in previous figure:
a. For a two-station network, we have
[+1 +1] and [+1 −1].
Solution
The number of sequences needs to be 2^m. We need to choose m = 7, and N (number of stations) = 2^7 = 128.
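The recursive rule for building Walsh tables described above can be sketched directly in code (a minimal sketch; the function name is my own, and +1/−1 entries stand for the bits and their logical NOT):

```python
def walsh(n):
    """Build an n x n Walsh matrix (n a power of 2) with +1/-1 entries,
    using the recursion W(2k) = [[W(k), W(k)], [W(k), not W(k)]]."""
    if n == 1:
        return [[1]]
    half = walsh(n // 2)
    top = [row + row for row in half]                      # [W, W]
    bottom = [row + [-x for x in row] for row in half]     # [W, not W]
    return top + bottom

W4 = walsh(4)
print(W4)
# The defining property: any two distinct rows are orthogonal.
print(sum(a * b for a, b in zip(W4[1], W4[2])))   # 0
```

For a two-station network the rows of W2 give the sequences [+1 +1] and [+1 −1], matching the solution above.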
❑ TDMA: All terminals are active for short periods of time on the same frequency.
❑ FDMA: Every terminal has its own frequency, uninterrupted.
❑ CDMA: All terminals can be active at the same place at the same moment, uninterrupted.
❑ The least significant bit of the first byte defines the type of address.
If the bit is 0, the address is unicast; otherwise, it is multicast.
❑ The broadcast destination address is a special case of the multicast address
in which all bits are 1s.
Define the type of the following destination addresses:
a. 4A:30:10:21:10:1A b. 47:20:1B:2E:08:EE
c. FF:FF:FF:FF:FF:FF
Solution
To find the type of the address, we need to look at the
second hexadecimal digit from the left. If it is even, the
address is unicast. If it is odd, the address is multicast. If all
digits are F’s, the address is broadcast. Therefore, we have
the following:
a. This is a unicast address because A in binary is 1010.
b. This is a multicast address because 7 in binary is 0111.
c. This is a broadcast address because all digits are F’s.
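The address-classification rule above is mechanical enough to code up (an illustrative sketch; the function name is my own):

```python
def address_type(mac):
    """Classify a colon-separated MAC destination address.

    Unicast if the least significant bit of the first byte is 0,
    multicast if it is 1, broadcast if all 48 bits are 1.
    """
    octets = [int(part, 16) for part in mac.split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"
    return "multicast" if octets[0] & 1 else "unicast"

print(address_type("4A:30:10:21:10:1A"))   # unicast   (second digit A = 1010, even)
print(address_type("47:20:1B:2E:08:EE"))   # multicast (second digit 7 = 0111, odd)
print(address_type("FF:FF:FF:FF:FF:FF"))   # broadcast
```

Checking the low bit of the first byte is the same test as checking whether the second hexadecimal digit from the left is even or odd.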
Categories of Standard Ethernet (802.3)
Ethernet Cabling
[Table: Ethernet cabling: name, cable type, maximum segment length (m), nodes per segment, advantages.]
Term Used:
A transceiver is a device comprising both a transmitter and a receiver that are
combined and share common circuitry or a single housing.
10Base5 implementation
10Base2 implementation
10Base-F implementation
10Base-T implementation
Ethernet-802.3
Following table mentions different 10 Mbps Ethernet such as 10BASE5, 10BASE2,
10BASE-F and 10BASE-T.
Specification            10BASE5           10BASE2          10BASE-F                      10BASE-T
Maximum segment length   500 m             185 m            varies from 400 m to 2000 m   100 m
Topology                 Bus               Bus              Star                          Star
Medium                   50-Ω thick coax.  50-Ω thin coax.  multimode fiber               100-Ω UTP
Maximum segments         5                 5                5                             5
Fast Ethernet-802.3u
Ethernet working at a speed of 100 Mbps is referred to as Fast Ethernet. The IEEE 802.3u Fast Ethernet/100BASE-T standard was specified in May 1995. The features of this type of Fast Ethernet are as follows:
• Includes multiple Physical layers.
• It uses original ethernet MAC but operates at 10 times higher speed.
• It needs a star-wired configuration with a central hub.
The MAC parameters are the same as described for Ethernet above. There are three physical layers for Fast Ethernet:
• 100BASE-TX: Needs 2 pairs of cat.5 UTP/Type1 STP cables
• 100BASE-FX: Needs 2 strands of multimode fiber
• 100BASE-T4: Needs 4 pairs of cat.3
Full-duplex capability: 100BASE-TX: Yes; 100BASE-FX: Yes; 100BASE-T4: No.
Gigabit Ethernet-802.3z
Ethernet working at a speed of 1000 Mbps (i.e., 1 Gbps) and above is referred to as Gigabit Ethernet.
The ending delimiter contains an E bit which is set if any interface detects an
error.
Question:
An 8-Mbps token ring has a token holding timer value of 10 msec. What is the longest frame (assume header bits are negligible) that can be sent on this ring?
Answer:
At 8 Mbps, a station can transmit 80,000 bits or 10,000 bytes in 10
msec.
This is an upper bound on frame length.
From this amount, some overhead must be subtracted, giving a
slightly lower limit for the data portion.
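The arithmetic in this answer is just rate times holding time, which a one-line helper makes explicit (names are my own):

```python
def max_frame_bits(data_rate_bps, token_holding_time_s):
    """Upper bound on frame length: bits transmittable within the
    token holding time at the ring's data rate."""
    return int(data_rate_bps * token_holding_time_s)

bits = max_frame_bits(8_000_000, 0.010)
print(bits, "bits =", bits // 8, "bytes")   # 80000 bits = 10000 bytes
```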
FDDI (Fiber Distributed Data Interface)
• FDDI is a standard developed by the American National
Standards Institute (ANSI) for transmitting data on
optical fibers
• Supports transmission rates of up to 200 Mbps
• Uses a dual ring
❑First ring used to carry data at 100 Mbps
❑Second ring used for primary backup in case first ring fails
❑If no backup is needed, second ring can also carry data,
increasing the data rate up to 200 Mbps
• Supports up to 1000 nodes
• Has a range of up to 200 km
FDDI (Fiber Distributed Data Interface)
Differences between 802.5 and FDDI
The base version of the standard was released in 1997, and has had subsequent
amendments. The standard and amendments provide the basis for wireless
network products using the Wi-Fi brand. While each amendment is officially
revoked when it is incorporated in the latest version of the standard, the corporate
world tends to market to the revisions because they concisely denote capabilities
of their products. As a result, in the marketplace, each revision tends to become
its own standard.
LOGICAL LINK CONTROL SUBLAYER
Logical Link Control Design Issues
Services Provided to the Network Layer
• The network layer wants to be able to send packets to its neighbors without worrying about the details of getting them there in one piece.
Framing
• Group the physical layer bit stream into units called frames. Frames are
nothing more than "packets" or "messages". By convention, we use the
term "frames" when discussing DLL.
Flow Control
• Prevent a fast sender from overwhelming a slower receiver.
Error Control
• Sender checksums the frame and transmits checksum together with
data. Receiver re-computes the checksum and compares it with the
received value.
Services provided to the network layer
• The function of the data link layer is to provide services to the
network layer.
• The principal service is transferring data from the network
layer on the source machine to the network layer on the
destination machine.
• The data link layer can be designed to offer various services. The
actual services offered can vary from system to system. Three
reasonable possibilities that are commonly provided are-
• Other benefits of HDLC are that the control information is always in the
same position, and specific bit patterns used for control differ
dramatically from those in representing data, which reduces the chance
of errors. It has also led to many subsets.
• Two subsets widely in use are Synchronous Data Link Control (SDLC) and
Link Access Procedure-Balanced (LAP-B).
• Secondary Station
The secondary station is under the control of the primary station. It has no ability or direct responsibility for controlling the link. It is activated only when requested by the primary station, and it can send response frames only when requested by the primary station.
• Combined Station
A combined station is a combination of a primary and secondary
station. On the link, all combined stations are able to send and receive
commands and responses without any permission from any other
stations on the link.
HDLC (High level data link control) - contd.
• Following are the three configurations defined by HDLC:
• Unbalanced Configuration
The unbalanced configuration in an HDLC link consists of a primary station
and one or more secondary stations. The unbalanced condition arises
because one station controls the other stations.
• Balanced Configuration
The balanced configuration in an HDLC link consists of two or more combined stations. Each station has equal and complementary responsibility.
• Symmetrical Configuration
This third type of configuration is not widely in use today. It consists of two
independent point-to-point, unbalanced station configurations as shown in Fig. below.
In this configuration, each station has a primary and secondary status. Each station is
logically considered as two stations as shown below -
HDLC (High level data link control) - contd.
• Operational Modes:
A mode in HDLC is the relationship between two devices involved in an
exchange; the mode describes who controls the link. Exchanges over
unbalanced configurations are always conducted in normal response mode.
Exchanges over symmetrical or balanced configurations can be set to a specific mode using a frame designed to deliver the command. HDLC offers three different modes of operation.
Disadvantage –
If the count is corrupted by a transmission error, the destination will lose synchronization and will
be unable to locate the start of the next frame. So, this method is rarely used.
Framing – Byte Stuffing (character stuffing)
• In character-oriented protocol, we add special characters (called flag)
to distinguish beginning and end of a frame. Usually flag has 8-bit
length.
• While using a character-oriented protocol another problem arises: the pattern used for the flag may also be part of the data to send. If this happens, the destination node, when it encounters this pattern in the middle of the data, assumes it has reached the end of the frame.
• To deal with this problem, a byte stuffing (also known as character stuffing) approach was added to character-oriented protocols. In byte stuffing a special byte, known as the escape character (ESC), is added to the data part. The escape character has a predefined pattern. The receiver removes the escape character and keeps the data part. This causes another problem if the text contains escape characters as part of the data; to deal with it, an escape character is prefixed with another escape character.
Byte stuffing and unstuffing
The following figure explains everything we discussed about character stuffing.
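The stuffing and unstuffing steps can be sketched as follows. This is an illustrative sketch only: the single-character FLAG and ESC values are made up for the example, and real protocols define their own byte values and scan for unescaped flags rather than assuming they sit only at the frame ends.

```python
FLAG, ESC = b"F", b"E"   # illustrative marker bytes, not from any real protocol

def stuff(data: bytes) -> bytes:
    """Frame the payload with FLAG bytes, escaping FLAG/ESC in the data."""
    out = bytearray(FLAG)
    for b in data:
        if bytes([b]) in (FLAG, ESC):
            out += ESC            # prefix any flag/escape byte with ESC
        out.append(b)
    out += FLAG
    return bytes(out)

def unstuff(frame: bytes) -> bytes:
    """Strip the delimiting flags and remove escape characters."""
    body = frame[1:-1]
    out = bytearray()
    escaped = False
    for b in body:
        if not escaped and bytes([b]) == ESC:
            escaped = True        # next byte is literal data
            continue
        out.append(b)
        escaped = False
    return bytes(out)

payload = b"dataFmoreEstuff"
frame = stuff(payload)
print(frame)
print(unstuff(frame) == payload)   # True: stuffing is reversible
```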
• Example:
In Manchester encoding, a 1 bit is a high-low pair and a 0 bit is a low-high pair. The combinations low-low and high-high, which are not used for data, may be used for marking frame boundaries.
Flow Control
• Flow control deals with throttling the speed of the sender to
match that of the receiver.
• Two Approaches:
• In rate-based flow control, the protocol has a built-in mechanism that limits the rate at which senders may transmit data, without using feedback from the receiver.
• In feedback-based flow control, the receiver sends back information to the sender, giving it permission to send more data or at least telling the sender how the receiver is doing.
Cases of Operations:
1.Normal operation
2.The frame is lost
3.The Acknowledgment (ACK) is lost
4.The Ack is delayed
Stop-and-Wait ARQ
Normal operation
➢ Importance of frame
numbering
In Stop and-Wait ARQ, numbering frames prevents the retaining of
duplicate frames.
Stop-and-Wait ARQ
4. Delayed ACK and lost frame
➢ Importance of frame
numbering
Piggybacking
• Temporarily delaying transmission of outgoing acknowledgement so that they
can be hooked onto the next outgoing data frame.
• Combining data to be sent with control information is called
piggybacking. Thus, piggybacking means combining data to be
sent and acknowledgement of the frame received in one single
frame for higher channel bandwidth utilization.
• Complication:
• How long to wait for a packet to piggyback on?
• If longer than the sender's timeout period, the sender retransmits and the purpose of the acknowledgement is lost.
• Solution for the timing complication:
• If a new packet arrives quickly, piggyback the acknowledgement on it.
• If no new packet arrives before a receiver ACK timeout, send a separate acknowledgement frame.
Sliding Window Protocols
In spite of the use of timers, the stop-and-wait ARQ protocol still suffers from a few drawbacks.
❑ Firstly, if the receiver had the capacity to accept more than one frame,
its resources are being underutilized.
❑ Secondly, if the receiver was busy and did not wish to receive any more
packets, it may delay the acknowledgement. However, the timer on the
sender's side may go off and cause an unnecessary retransmission. These
drawbacks are overcome by the sliding window protocols.
❑ The sequence numbers within the sender's window represent frames that have been sent, or can be sent, but are as yet unacknowledged. Whenever a new packet arrives from
the network layer, the upper edge of the window is advanced by one. When an
acknowledgement arrives from the receiver the lower edge is advanced by one.
❑ The receiver's window corresponds to the frames that the receiver's data link
layer may accept. When a frame with sequence number equal to the lower edge
of the window is received, it is passed to the network layer, an acknowledgement
is generated and the window is rotated by one. If however, a frame falling outside
the window is received, the receiver's data link layer has two options. It may
either discard this frame and all subsequent frames until the desired frame is
received or it may accept these frames and buffer them until the appropriate
frame is received and then pass the frames to the network layer in sequence.
Sliding Window Protocol
Sliding window protocols apply pipelining: the sender is allowed to transmit multiple contiguous frames (say, up to W frames) before it receives an acknowledgement. Two such protocols are:
❖ Go-Back-N ARQ
❖ Selective Repeat ARQ
If m = 3, the number of sequence numbers is 2^3 = 8 and the window size is 7.
Acknowledged frames
Go_Back _N ARQ Sliding Window Protocol
Receiver sliding window
➢ The receive window is an abstract concept defining
an imaginary box of size 1 with one single variable
Rn.
➢ The window slides when a correct frame has arrived;
sliding occurs one slot at a time.
Go_Back _N ARQ Sliding Window Protocol
control variables
Outstanding frames: frames sent but not
acknowledged
What is the
disadvantage of this?
Go_Back _N ARQ Sliding Window Protocol
Selective Repeat ARQ
Go-Back-N ARQ is inefficient in a noisy link.
Solution:
➢ Selective Repeat ARQ protocol : resend only the damaged frame
➢ It defines a negative acknowledgement (NAK) that reports the sequence number of a damaged frame before the timer expires.
➢ It is more efficient for a noisy link, but the processing at the receiver is more complex.
In Selective Repeat ARQ, the size of the sender and receiver windows must be at most one-half of 2^m.
Selective Repeat ARQ
Selective Repeat ARQ
m = 3, so 2^m = 8
Lost frame. Sequence numbers: 0, 1, 2, 3, 4, 5, 6, 7
Window size = 2^m / 2 = 8/2 = 4
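The window limits for both sliding window protocols follow directly from the sequence number size m, as this small sketch shows (function name and dictionary keys are my own):

```python
def window_sizes(m):
    """Window limits for an m-bit sequence number.

    Go-Back-N: sender window up to 2^m - 1 (receiver window is 1).
    Selective Repeat: sender and receiver windows at most 2^(m-1).
    """
    seq_numbers = 2 ** m
    return {"sequence numbers": seq_numbers,
            "go_back_n_sender": seq_numbers - 1,
            "selective_repeat": seq_numbers // 2}

print(window_sizes(3))
# {'sequence numbers': 8, 'go_back_n_sender': 7, 'selective_repeat': 4}
```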
Selective Repeat ARQ
Notice that only two ACKs are sent here.
Since frame = 1000 bits and max bit rate = channel data rate = 1 Mbps, tframe = 1000/10^6 s = 1 ms.
(c) Maximum link utilization with window flow control of window size 127:
Since W = 127 and (2a + 1) = 541, we have W < 2a + 1.
So U = W/(2a + 1) = 127/541 = 0.235 = 23.5%.
(d) Maximum link utilization with window flow control of window size 255:
Since W = 255 and (2a + 1) = 541, we have W < 2a + 1.
So U = W/(2a + 1) = 255/541 = 0.471 = 47.1%.
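The utilization formula used in parts (c) and (d) can be captured in one function (an illustrative sketch; the value a = 270 is back-computed from the example's 2a + 1 = 541):

```python
def utilization(W, a):
    """Link utilization with window flow control:
    U = 1 if W >= 2a + 1, otherwise U = W / (2a + 1)."""
    return min(1.0, W / (2 * a + 1))

a = 270   # so 2a + 1 = 541, as in the worked example
print(f"W = 127: U = {utilization(127, a):.3f}")   # 0.235
print(f"W = 255: U = {utilization(255, a):.3f}")   # 0.471
```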
Review Questions
A channel has a data rate of 4 kbps and a propagation delay of 20 ms. For what range of frame sizes does stop-and-wait give an efficiency of at least 50%?
It is given that the data rate = 4 kbps, hence the bit duration = 1/4000 s = 0.25 ms.
Time to transmit the frame is
tframe= Frame_Size/Bit rate = Frame_Size x Bit_duration
Also, tprop=20ms.
For stop-and-wait flow control, efficiency is equal to -
U = 1/(1+2a) where a = tprop / tframe
Rewriting the equation yields a= 0.5 [(1/U) – 1]
For U>= 50%=0.5,
then a <=0.5[(1/0.5) –1] i.e. a <= 0.5
Since a = tprop/tframe then tprop/tframe <= 0.5 i.e. tframe >= 2tprop
But Frame_size = tframe / bit_duration, i.e.
Frame_size >= 2tprop / bit_duration = 2 × 20 ms / 0.25 ms = 160 bits
So, in order to have an efficiency of at least 50%, the frame size must be at least 160 bits long.
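The same derivation, U = 1/(1 + 2a) with a = tprop/tframe, can be inverted in code to find the minimum frame size (function and parameter names are my own):

```python
def min_frame_bits(data_rate_bps, t_prop_s, min_utilization):
    """Smallest frame size giving U = 1/(1 + 2a) >= min_utilization,
    where a = t_prop / t_frame and t_frame = frame_bits / data_rate."""
    a_max = 0.5 * (1 / min_utilization - 1)   # largest a that still meets U
    t_frame_min = t_prop_s / a_max            # so t_prop / t_frame <= a_max
    return t_frame_min * data_rate_bps

print(round(min_frame_bits(4000, 0.020, 0.5)))   # 160
```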
Review Questions
2. Burst error: It means two or more bits in the data unit are changed from 1 to 0 or from 0 to 1. In a burst error, it is not necessary that only consecutive bits are changed. The length of a burst error is measured from the first changed bit to the last changed bit.
Error Detection vs Error Correction
There are two types of attacks against errors:
❑ Error Detecting Codes: Include enough redundancy bits to
detect errors and use ACKs and retransmissions to recover from the
errors.
❑ Error Correcting Codes: Include enough redundancy to detect
and correct errors. The use of error-correcting codes is often referred
to as forward error correction.
Error Detection Techniques
Error detection means to decide whether the received
data is correct or not without having a copy of the
original message.
• Receiving
1. Receive F’(x)
2. Divide F’(x) by P(x)
3. Accept if remainder is 0, reject otherwise
CRC encoder and decoder
Division in CRC encoder
Division in the CRC decoder for two cases:
• At the receiver's end, all received segments are added using 1's complement arithmetic to get the sum. The sum is complemented. If the result is zero, the received data is accepted; otherwise it is discarded.
• The checksum detects all errors involving an odd number of bits. It also detects most errors involving an even number of bits.
Checksum
Checksum Example
Practice Question:
For the pattern 10101001 00111001 00011101, find out whether any transmission errors have occurred.
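The receiver-side check described above (add with end-around carry, then complement) can be applied to the practice pattern with a short sketch (function name is my own):

```python
def ones_complement_sum(words, bits=8):
    """Add words using 1's complement arithmetic (end-around carry)."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)   # fold carry back in
    return total

segments = [0b10101001, 0b00111001, 0b00011101]
s = ones_complement_sum(segments)
complement = s ^ 0xFF
print(f"sum = {s:08b}, complement = {complement:08b}")
print("accepted" if complement == 0 else "discarded")
```

For this pattern the sum works out to 11111111, whose complement is zero, so the receiver would accept the data (no error detected).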
Error Correction
• Messages (frames) consist of d data (message) bits and r redundancy
bits, yielding an n = (d+r) bit codeword.
The number of redundant bits can be calculated using the following formula:
2^r ≥ m + r + 1, where r = number of redundant bits, m = number of data bits
Suppose the number of data bits is 7; then the smallest r satisfying the formula is 4, since 2^4 = 16 ≥ 7 + 4 + 1 = 12.
Thus, the number of redundant bits = 4.
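The formula above is easy to evaluate by searching for the smallest r that satisfies it (a minimal sketch; the function name is my own):

```python
def redundant_bits(m):
    """Smallest r with 2^r >= m + r + 1 (Hamming bound for
    single-error correction with m data bits)."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

print(redundant_bits(7))   # 4
print(redundant_bits(4))   # 3
```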
Parity bits –
Hamming Code
A parity bit is a bit appended to a group of binary bits to ensure that the total number of 1's in the data is even or odd. Parity bits are used for error detection. There are two types of parity bits: