Unit 3: Computer Networks


Data Link Layer

 The data link layer is used in a computer network to transmit data between two devices.
 It is divided into two sub-layers.
 The upper sub-layer is responsible for flow control and error control, and hence it is
termed logical link control (LLC). The lower sub-layer handles and reduces collisions,
or multiple access, on a shared channel, and hence it is termed media access control
(MAC).

What is a multiple access protocol?

 When a sender and receiver have a dedicated link for transmitting data packets, data
link control alone is enough to handle the channel.
 Suppose there is no dedicated path between two devices. In that case, multiple stations
access the channel and may transmit data over it simultaneously, which can create
collisions and crosstalk. Hence, a multiple access protocol is required to reduce
collisions and avoid crosstalk between the channels.

Types of multiple access protocol

A. Random Access Protocol

 In this protocol, all stations have equal priority to send data over the channel.
 No station depends on another station, and no station controls another.
 Each station transmits a data frame depending on the channel's state (idle or busy).
 However, if more than one station sends data over the channel at the same time, a
collision or data conflict may occur. Due to the collision, data frames may be lost or
corrupted, and thus never reach the receiver.
The different methods of random-access protocols are:

1. Aloha
2. CSMA
3. CSMA/CD
4. CSMA/CA

1. ALOHA Random Access Protocol

 It was designed for wireless LANs but can also be used on any shared medium to
transmit data.
 Under this method, any station can transmit data across the network whenever it has a
data frame available for transmission.

Aloha Rules:
1. Any station can transmit data on the channel at any time.
2. It does not require any carrier sensing.
3. Collisions may occur, and data frames may be lost, when multiple stations transmit
at the same time.
4. Aloha has no collision detection; instead, senders rely on acknowledgments of the
frames to learn whether a transmission succeeded.
5. It requires retransmission of data after a random amount of time.

Pure Aloha

 We use pure Aloha whenever a station has data available to send over the channel. Each
station transmits to the channel without first checking whether the channel is idle, so in
pure Aloha there is a risk of collision and of the data frame being lost.

 After transmitting a data frame to the channel, a pure Aloha station waits for the
receiver's acknowledgment. If no acknowledgment arrives from the receiver within the
specified time (Tb), the station assumes the frame has been lost or destroyed, waits for a
random amount of time, called the back-off time, and retransmits. The frame is
retransmitted until all of the data is successfully delivered to the receiver.
 Example: Suppose there are four stations accessing a shared channel and transmitting
data frames. Because most stations send their frames at the same time, some frames
collide; only two frames, frame 1.1 and frame 2.2, are successfully transmitted to the
receiver, while the other frames are lost or destroyed.

 A collision occurs whenever two frames overlap on the shared channel, even partially: if
the first bit of a new frame enters the channel before the last bit of an earlier frame has
finished, both frames are completely destroyed and both stations must retransmit their
data frames.

Vulnerable Time = 2 × (Frame Transmission Time)

Maximum Throughput = 18.4% (at G = 1/2)
Probability of successful transmission of a data frame, S = G × e^(−2G)

Slotted Aloha

 Slotted Aloha was designed to improve on pure Aloha's efficiency, because pure Aloha
has a very high probability of frame collision.
 In slotted Aloha, time on the shared channel is divided into fixed intervals called slots. A
station may begin sending a frame only at the beginning of a slot, and only one frame
can be sent in each slot. If a station misses the beginning of a slot, it must wait for the
beginning of the next one. However, a collision is still possible if two or more stations
try to send a frame at the beginning of the same time slot.

Vulnerable Time = Frame Transmission Time

Maximum Throughput = 36.8% (at G = 1)
Probability of successful transmission of a data frame, S = G × e^(−G)
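The throughput formulas for both Aloha variants can be checked numerically. A small Python sketch (the function names are illustrative):

```python
import math

def pure_aloha_throughput(G):
    """Pure Aloha: S = G * e^(-2G); maximum at G = 1/2."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """Slotted Aloha: S = G * e^(-G); maximum at G = 1."""
    return G * math.exp(-G)

print(round(pure_aloha_throughput(0.5), 3))    # 0.184 -> 18.4%
print(round(slotted_aloha_throughput(1.0), 3)) # 0.368 -> 36.8%
```

Halving the vulnerable time (one frame time instead of two) is what doubles the achievable throughput of slotted Aloha over pure Aloha.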
2. CSMA (Carrier Sense Multiple Access)

 Carrier sense multiple access is a media-access protocol in which a station senses the
traffic on the channel (idle or busy) before transmitting data. If the channel is idle, the
station can send data on the channel; otherwise, it must wait until the channel becomes
idle. This sensing reduces the chance of a collision on the transmission medium.

CSMA Access Modes

1-Persistent:

 In the 1-persistent mode of CSMA, each node first senses the shared channel and, if the
channel is idle, immediately sends the data. Otherwise, it continuously monitors the
channel and broadcasts the frame as soon as the channel becomes idle.

Non-Persistent:

 It is the access mode of CSMA in which, before transmitting data, each node senses the
channel and, if the channel is inactive, immediately sends the data. Otherwise, the
station waits for a random time (rather than sensing continuously), and when the channel
is then found idle, it transmits the frame.

P-Persistent:

 It is the combination of 1-Persistent and Non-persistent modes.


 The P-persistent mode defines that each node senses the channel and, if the channel is
inactive, sends a frame with probability p. Otherwise (with probability q = 1 − p), the
node defers to the next time slot and repeats the process.
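The difference between the persistence modes is easiest to see as code. A minimal sketch of one sensing step of p-persistent CSMA (the function and its channel-state argument are illustrative, not from any real API):

```python
import random

def p_persistent_attempt(channel_idle, p, rng=random.random):
    """One sensing step of p-persistent CSMA (illustrative sketch).

    Returns 'transmit', 'defer' (wait for the next slot), or 'busy'
    (keep sensing). channel_idle is supplied by the caller's channel
    model.
    """
    if not channel_idle:
        return "busy"       # keep monitoring the channel
    if rng() < p:
        return "transmit"   # send with probability p
    return "defer"          # with probability q = 1 - p, wait a slot
```

Setting p = 1 reproduces 1-persistent behaviour; non-persistent mode replaces the "busy" case with a random wait before sensing again.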

CSMA/CD (Carrier Sense Multiple Access with Collision Detection)

 Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a network
protocol that operates in the Medium Access Control (MAC) layer.

 A station senses whether the shared channel is busy and does not transmit until the
channel is free.

 The collision detection technology detects collisions by sensing transmissions from other
stations. On detection of a collision, the station stops transmitting, sends a jam signal, and
then waits for a random time interval before retransmission.
The algorithm of CSMA/CD is:
 When a frame is ready, the transmitting station checks whether the channel is idle or
busy.
 If the channel is busy, the station waits until the channel becomes idle.
 If the channel is idle, the station starts transmitting and continually monitors the channel
to detect collision.
 If a collision is detected, the station starts the collision resolution algorithm.
 The station resets the retransmission counters and completes frame transmission.

The algorithm of Collision Resolution is:


 The station continues transmission of the current frame for a specified time along with a
jam signal, to ensure that all the other stations detect collision.
 The station increments the retransmission counters.
 If the maximum number of retransmission attempts is reached, then the station aborts
transmission.
 Otherwise, the station waits for a back-off period, which is generally a function of the
number of collisions, and restarts the main algorithm.
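The two procedures above can be combined into a single transmit loop. The sketch below assumes a hypothetical `channel` object with `is_idle()`, `transmit()` (returning False on collision), `jam()` and `wait()`; the 16-attempt limit mirrors classic Ethernet:

```python
import random

MAX_ATTEMPTS = 16  # classic Ethernet aborts after 16 tries

def csma_cd_send(channel, frame):
    """Sketch of the CSMA/CD transmit loop plus collision resolution.

    `channel` is a hypothetical model, not a real driver API.
    """
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while not channel.is_idle():      # wait until the channel is idle
            pass
        if channel.transmit(frame):       # transmit, monitoring for collision
            return True                   # success: done
        channel.jam()                     # collision: enforce the jam signal
        k = random.randint(0, 2 ** min(attempt, 10) - 1)
        channel.wait(k)                   # back off for k slot times
    return False                          # too many attempts: abort
```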
Advantages of CSMA/CD:
 Simple and widely used: CSMA/CD is a widely used protocol for Ethernet networks, and
its simplicity makes it easy to implement and use.
 Fairness: In a CSMA/CD network, all devices have equal access to the transmission
medium, which ensures fairness in data transmission.
 Efficiency: CSMA/CD allows for efficient use of the transmission medium by preventing
unnecessary collisions and reducing network congestion.

Disadvantages of CSMA/CD:
 Limited scalability: CSMA/CD has limitations in terms of scalability, and it may not be
suitable for large networks with a high number of devices.
 Vulnerability to collisions: While CSMA/CD can detect collisions, it cannot prevent them
from occurring. Collisions can lead to data corruption, retransmission delays, and reduced
network performance.
 Inefficient use of bandwidth: CSMA/CD uses a random back-off algorithm that can result
in inefficient use of network bandwidth if a device continually experiences collisions.
 Susceptibility to security attacks: CSMA/CD does not provide any security features, and
the protocol is vulnerable to security attacks such as packet sniffing and spoofing.

CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance)

 It is a carrier sense multiple access with collision avoidance network protocol for carrier
transmission of data frames.
 When a station sends a data frame on the channel, it listens for feedback to check
whether the channel was clear. If the station detects only a single signal (its own), the
data frame has been transmitted successfully. But if it detects two signals (its own and
one from another station whose frame collided with it), a collision has occurred on the
shared channel.
 The sender thus detects a collision indirectly, when it fails to receive an
acknowledgment for the frame.

Following are the methods used in the CSMA/ CA to avoid the collision:

1. Interframe space:

 In this method, the station waits for the channel to become idle, and once it finds the
channel idle, it does not send the data immediately. Instead, it waits for a period of time
called the interframe space (IFS).

2. Contention window:

 In the contention window method, the total time is divided into slots. When a station is
ready to transmit a data frame, it chooses a random number of slots as its wait time. If
the channel turns busy during the countdown, the station does not restart the entire
process; it merely pauses the timer and resumes the countdown when the channel is idle
again, sending the data packet once the countdown completes.

3. Acknowledgment:

 In the acknowledgment method, the sender station retransmits the data frame on the
shared channel if an acknowledgment is not received within the expected time.
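The three avoidance mechanisms fit together into one send routine. This is only a sketch: the `channel` model (`is_idle()`, `wait()`, `send()`, `ack_received()`) is hypothetical, not a real driver API:

```python
import random

def csma_ca_send(channel, frame, ifs_slots=2, cw=4, max_tries=8):
    """Sketch: interframe space, contention window, then ACK check."""
    for _ in range(max_tries):
        while not channel.is_idle():
            pass
        channel.wait(ifs_slots)          # 1. interframe space (IFS)
        k = random.randint(0, cw - 1)    # 2. pick a contention slot
        while k > 0:
            if channel.is_idle():
                k -= 1                   # timer counts down only while idle;
            # a busy channel pauses (does not restart) the countdown
        channel.send(frame)
        if channel.ack_received():       # 3. acknowledgment
            return True
        cw *= 2                          # widen the window and retry
    return False
```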
B. Channelization Protocols:

 Channelization protocols allow the total usable bandwidth of a shared channel to be
shared among multiple stations, partitioned by time, frequency or code.
 All stations can access the channel at the same time to send their data frames.

Following are the various methods of accessing the channel, based on time, frequency and code:
1. FDMA (Frequency Division Multiple Access)
2. TDMA (Time Division Multiple Access)
3. CDMA (Code Division Multiple Access)

1. FDMA

 Frequency division multiple access (FDMA) divides the available bandwidth into equal
frequency bands so that multiple users can send data simultaneously, each through its
own sub-channel.
 Each station is reserved a particular band to prevent crosstalk between the channels and
interference between stations.
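A small Python sketch of the band split (the guard-band parameter is an added assumption, used here to model the separation that prevents crosstalk):

```python
def fdma_bands(total_bandwidth_hz, n_stations, guard_hz=0.0):
    """Split a shared band into equal sub-bands, one per station,
    optionally separated by guard bands (a simple illustration,
    not a real channel plan)."""
    usable = total_bandwidth_hz - guard_hz * (n_stations - 1)
    width = usable / n_stations
    bands, start = [], 0.0
    for _ in range(n_stations):
        bands.append((start, start + width))
        start += width + guard_hz
    return bands

# e.g. 1 MHz shared among 4 stations with 10 kHz guard bands
print(fdma_bands(1_000_000, 4, 10_000))
```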
2. TDMA [Time Division Multiple Access]

 Time Division Multiple Access (TDMA) is a channel access method.

 It allows the same frequency band to be shared among multiple stations. To avoid
collisions on the shared channel, it divides time into slots and allocates the slots to the
stations for transmitting their data frames; the same frequency band is thus shared by
dividing the signal in time. However, TDMA has a synchronization overhead: to mark
each station's time slot, synchronization bits must be added to each slot.
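A round-robin slot assignment is enough to sketch the idea (synchronization bits are omitted):

```python
def tdma_schedule(stations, n_frames=2):
    """Round-robin TDMA: each frame gives every station one time
    slot, in a fixed order."""
    return [s for _ in range(n_frames) for s in stations]

print(tdma_schedule(["A", "B", "C"]))  # ['A', 'B', 'C', 'A', 'B', 'C']
```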

3. CDMA

 Code division multiple access (CDMA) is a channel access method.

 In CDMA, all stations can send data over the same channel simultaneously: each station
may transmit data frames with the full frequency of the shared channel at all times.
 It does not require dividing the bandwidth of the shared channel into time slots.
 When multiple stations send data on the channel simultaneously, their data frames are
separated by unique code sequences; each station has its own unique code for
transmitting over the shared channel. As an analogy, consider a room full of people all
speaking at once: two people can still understand each other if they converse in a
language only the two of them share. Similarly, stations on the network can
communicate simultaneously, each pair using a different code.
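The "unique code" idea can be made concrete with orthogonal chip sequences. The sketch below uses two 4-chip Walsh codes; the helper names are illustrative:

```python
def cdma_transmit(bits_by_station, codes):
    """Each station multiplies its data bit (+1/-1) by its chip code;
    the shared channel carries the element-wise sum of all signals."""
    n = len(next(iter(codes.values())))
    channel = [0] * n
    for station, bit in bits_by_station.items():
        for i, chip in enumerate(codes[station]):
            channel[i] += bit * chip
    return channel

def cdma_decode(channel, code):
    """Recover one station's bit via the inner product with its code."""
    return round(sum(s * c for s, c in zip(channel, code)) / len(code))

# Orthogonal (Walsh) chip sequences -- the unique code per station
codes = {"A": [+1, +1, +1, +1], "B": [+1, -1, +1, -1]}
channel = cdma_transmit({"A": +1, "B": -1}, codes)
print(cdma_decode(channel, codes["A"]), cdma_decode(channel, codes["B"]))  # 1 -1
```

Because the codes are orthogonal, each receiver's inner product cancels every other station's contribution exactly.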
C. Controlled Access Protocol

 It is a method of reducing data frame collisions on a shared channel.

 In the controlled access technique, the stations consult one another to determine which
station has the right to send data.
 Controlled access protocols grant permission to send to only one node at a time, in order
to avoid collisions on the shared medium.
 No station can send data unless it has been authorized by the other stations.

The protocols under the category of controlled access are as follows:

1. Reservation
2. Polling
3. Token Passing

1. Reservation

In this method, a station needs to make a reservation before sending the data.

 Time is divided into intervals.

 In each interval, a reservation frame precedes the data frames sent in that interval.
 If there are 'N' stations in the system, there are exactly 'N' reservation mini-slots in the
reservation frame, where each mini-slot belongs to one station.
 Whenever a station needs to send the data frame, then the station makes a reservation in its
own minislot.
 Then the stations that have made reservations can send their data after the reservation frame.

Example:

Let us take an example of 5 stations and a 5-minislot reservation frame. In the first interval,
stations 2, 3 and 5 have made reservations. In the second interval, only station 2 has made a
reservation.
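The first interval of this example can be reproduced directly (the function name is illustrative):

```python
def reservation_interval(requests, n_stations):
    """Build the N-minislot reservation frame and the resulting
    data transmission order for one interval."""
    frame = [1 if s in requests else 0 for s in range(1, n_stations + 1)]
    order = [s for s in range(1, n_stations + 1) if s in requests]
    return frame, order

# first interval of the example: stations 2, 3 and 5 reserve
print(reservation_interval({2, 3, 5}, 5))  # ([0, 1, 1, 0, 1], [2, 3, 5])
```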
2. Polling

The polling method mainly works with those topologies where one device is designated as the
primary station and the other device is designated as the secondary station.

 All the exchange of data must be made through the primary device even though the final
destination is the secondary device.
 Thus, to impose order on a network of independent users, one station in the network is
established as a controller that periodically polls all the other stations; this is simply
referred to as polling.
 The Primary device mainly controls the link while the secondary device follows the
instructions of the primary device.
 The responsibility is on the primary device in order to determine which device is allowed to
use the channel at a given time.
 Therefore the primary device is always an initiator of the session.

Poll Function

If the primary device wants to receive data, it asks the secondary devices whether they have
anything to send. This is commonly known as the poll function.

 There is a poll function that is mainly used by the primary devices in order to solicit
transmissions from the secondary devices.
 When the primary device is ready to receive the data then it must ask (poll) each secondary
device in turn if it has anything to send.
 If the secondary device has data to transmit then it sends the data frame, otherwise, it sends
a negative acknowledgment (NAK).
 On a negative response, the primary then polls the next secondary in the same manner,
until it finds one with data to send. When the primary device receives a positive
response (a data frame), it reads the frame and returns an acknowledgment (ACK)
frame.
Select Function:
If the primary device wants to send data, it tells the secondary device to get ready to receive it.
This is commonly known as the select function.

 Thus the select function is used by the primary device when it has something to send.
 As noted above, the primary device always controls the link.
 Before sending the data frame, a select (SEL) frame is created and transmitted by the
primary device, and one field of the SEL frame includes the address of the intended
secondary.
 The primary device alerts the secondary device of the upcoming transmission and then
waits for an acknowledgment (ACK) from the secondary device.
Advantages of Polling
1. The minimum and maximum access times and data rates on the channel are predictable
and fixed.
2. Priorities can be assigned to ensure faster access for some secondaries.

Drawbacks
 High dependency on the reliability of the controller.
 An increase in turnaround time reduces the data rate of the channel under low loads.

3. Token Passing
In the token passing methods, all the stations are organized in the form of a logical ring. We can
also say that for each station there is a predecessor and a successor.

 The predecessor is the station that is logically before the station in the ring; while the
successor is the station that is after the station in the ring. The station that is accessing the
channel now is the current station.
 Basically, a special bit pattern or a small message that circulates from one station to the next
station in some predefined order is commonly known as a token.
 Possessing the token mainly gives the station the right to access the channel and to send its
data.
 When any station has some data to send, then it waits until it receives a token from its
predecessor. After receiving the token, it holds it and then sends its data. When any station
has no more data in order to send then it releases the token and then passes the token to the
next logical station in the ring.
 Also, the station cannot send the data until it receives the token again in the next round.
 In Token passing, when a station receives the token and has no data to send then it just passes
the token to the next station.
 The problem that occurs due to the Token passing technique is the duplication of tokens or
loss of tokens. The insertion of the new station, removal of a station, also needs to be tackled
for correct and reliable operation of the token passing technique.

The performance of a token ring is governed by 2 parameters:

1. Delay is a measure of time: the difference between the instant a packet is ready for
transmission and the instant it is transmitted. The average time required to pass the token to
the next station is a/N.

2. Throughput is a measure of the successful traffic in the communication channel.

Throughput, S = 1 / (1 + a/N) for a < 1

S = 1 / [a(1 + 1/N)] for a > 1

where N = number of stations, a = Tp/Tt,
Tp = propagation delay, and Tt = transmission delay
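The two throughput expressions can be evaluated directly; a worked example with illustrative parameter values:

```python
def token_ring_throughput(a, N):
    """Token ring throughput S, where a = Tp/Tt and N = number of
    stations."""
    if a < 1:
        return 1 / (1 + a / N)        # transmission-dominated regime
    return 1 / (a * (1 + 1 / N))      # propagation-dominated regime

# short ring (a = 0.1) vs long ring (a = 2), both with 10 stations
print(round(token_ring_throughput(0.1, 10), 3))  # 0.99
print(round(token_ring_throughput(2.0, 10), 3))  # 0.455
```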

Example:

In the diagram below, when station 1 possesses the token, it starts transmitting all the data
frames in its queue. After transmission, station 1 passes the token to station 2, and so on.
Station 1 can transmit data again only when all the stations in the network have transmitted
their data and passed the token.

Note: A token works only on the channel for which it was generated, and not on any other.

Back-off Algorithm for CSMA/CD


 Back-off algorithm is a collision resolution mechanism which is used in random access
MAC protocols (CSMA/CD).
 This algorithm is generally used in Ethernet to schedule re-transmissions after collisions.
 If a collision takes place between 2 stations, they may both restart transmission as soon
as they can after the collision. This will always lead to another collision, forming an
infinite loop of collisions and a deadlock. The back-off algorithm is used to prevent this
scenario.

 Let us consider a scenario of 2 stations A and B transmitting some data:


 After a collision, time is divided into discrete slots (Tslot) whose length is equal to 2t, where
t is the maximum propagation delay in the network. The stations involved in the collision
randomly pick an integer from the set K i.e {0, 1}. This set is called the contention
window.
 If the sources collide again because they picked the same integer, the contention window
size is doubled and it becomes {0, 1, 2, 3}. Now the sources involved in the second
collision randomly pick an integer from the set {0, 1, 2, 3} and wait for that number of time
slots before trying again. Before they try to transmit, they listen to the channel and transmit
only if the channel is idle. This causes the source which picked the smallest integer in the
contention window to succeed in transmitting its frame. So, the Back-off algorithm defines
a waiting time for the stations involved in collision, i.e. for how much time the station
should wait to re-transmit.

Waiting time = back–off time


Let n = collision number (or retransmission serial number).
Then,
Waiting time = K × Tslot
where K is picked randomly from {0, 1, ..., 2^n − 1}.
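The waiting-time rule amounts to a couple of lines of code (the helper name and unit slot time are illustrative):

```python
import random

def backoff_wait(n, t_slot=1.0):
    """After the n-th collision, pick K uniformly from
    {0, 1, ..., 2**n - 1} and wait K * Tslot."""
    k = random.randint(0, 2 ** n - 1)
    return k * t_slot

# after the first collision (n = 1), the wait is either 0 or Tslot
samples = {backoff_wait(1) for _ in range(200)}
print(samples <= {0.0, 1.0})  # True
```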

Example – Case-1: Suppose 2 stations A and B start transmitting data (packet 1) at the same
time; a collision occurs. So the collision number n for both their data (packet 1) is 1, and both
stations randomly pick an integer from the set K, i.e. {0, 1}.
 When both A and B choose K = 0 –> waiting time for A = 0 × Tslot = 0 and waiting
time for B = 0 × Tslot = 0. Both stations transmit at the same time, and a collision
occurs.
 When A chooses K = 0 and B chooses K = 1 –> waiting time for A = 0, waiting time for
B = Tslot. A transmits the packet while B waits for Tslot before transmitting; A wins.
 When A chooses K = 1 and B chooses K = 0 –> waiting time for A = Tslot, waiting time
for B = 0. B transmits the packet while A waits for Tslot before transmitting; B wins.
 When both A and B choose K = 1 –> both wait for the same time Tslot and then
transmit. Hence, a collision occurs.

Probability that A wins = 1/4


Probability that B wins = 1/4
Probability of collision = 2/4

Case-2: Assume that A won in Case 1 and transmitted its data (packet 1). As soon as B
retransmits its packet 1, A transmits its packet 2, and a collision occurs. The collision number n
now becomes 1 for packet 2 of A and 2 for packet 1 of B. So for packet 2 of A, K = {0, 1}; for
packet 1 of B, K = {0, 1, 2, 3}.
Probability that A wins = 5/8
Probability that B wins = 1/8
Probability of collision = 2/8
So, the probability of collision decreases as compared to Case 1.
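Both cases can be verified by enumerating the equally likely (K_A, K_B) pairs:

```python
from itertools import product

def outcome_probs(k_a, k_b):
    """Enumerate all equally likely (K_A, K_B) pairs; the smaller K
    transmits first, and equal picks mean another collision."""
    total = len(k_a) * len(k_b)
    a = sum(1 for x, y in product(k_a, k_b) if x < y)
    b = sum(1 for x, y in product(k_a, k_b) if y < x)
    coll = total - a - b
    return a / total, b / total, coll / total

print(outcome_probs([0, 1], [0, 1]))        # (0.25, 0.25, 0.5)   Case 1
print(outcome_probs([0, 1], [0, 1, 2, 3]))  # (0.625, 0.125, 0.25) Case 2
```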
Advantage –
 Collision probability decreases exponentially.
 Improves network performance: Back-off algorithm reduces the number of collisions
and retransmissions, thus improving the overall network performance.
 Increases channel utilization: By reducing the number of collisions, back-off algorithm
increases the channel utilization, leading to better use of network resources.
 Reduces delays: By reducing the number of collisions, back-off algorithm reduces the
waiting time between transmission attempts, resulting in lower delays.
 Fairness: The back-off algorithm ensures that all nodes in the network have an equal
chance to access the channel, which promotes fairness in the distribution of network
resources.
 Adaptability: The back-off algorithm is adaptable to changing network conditions. When
the network is busy, it increases the waiting time before retransmission attempts, and when
the network is idle, it reduces the waiting time, thus ensuring efficient use of network
resources.
 Scalability: The back-off algorithm is scalable, as it can be used in networks of any size,
from small local networks to large wide-area networks.
 Energy Efficiency: By reducing the number of collisions and retransmissions, the back-off
algorithm reduces the energy consumption of network nodes, which is particularly
important in battery-powered devices such as mobile phones and IoT devices.
 Robustness: The back-off algorithm is robust, as it can handle a high number of nodes
trying to access the channel simultaneously without causing the network to crash or
degrade in performance.

Disadvantages –
 Capture effect: the station that wins once tends to keep winning.
 The simple analysis above considers only 2 stations or hosts.
 Increased overhead: Back-off algorithm adds additional overhead to the network due to
the need to wait for a random time before retransmitting a packet.
 Complexity: Back-off algorithm is a complex algorithm that requires a high level of
implementation complexity.
 Vulnerability to attacks: Back-off algorithm is vulnerable to certain attacks, such as
denial-of-service (DoS) attacks, where an attacker can manipulate the random time
intervals to cause the network to stop functioning.
 Limited Performance: While the back-off algorithm can improve network performance in
low- to medium-load scenarios, it may not be effective in high-load scenarios where the
number of nodes trying to access the network is very high.
 Inefficient in Time-Sensitive Applications: The back-off algorithm may not be suitable
for time-sensitive applications, such as real-time video streaming or voice over IP (VoIP),
where even a slight delay can cause significant problems.
 Limited Scalability: While the back-off algorithm can be used in networks of any size, it
may become less effective in larger networks where the number of nodes trying to access
the channel simultaneously is very high.
 Sensitivity to Distance and Interference: The back-off algorithm is sensitive to distance
and interference, as nodes that are physically closer to each other may have a higher
probability of colliding and may require longer random waiting times.
 Limited Support for Quality of Service: The back-off algorithm does not provide support
for quality of service (QoS), which is an important feature for many networking
applications, such as video streaming, voice over IP (VoIP), and online gaming.

Collision-Free Protocols in Computer Network


 In CSMA/CD, almost all collisions can be avoided, but they can still occur during the
contention period. Collisions during the contention period adversely affect system
performance; this happens when the cable is long and the packets are short. The problem
became serious as fiber-optic networks came into use. Here we shall discuss some
protocols that resolve collisions during the contention period.
1. Bit-map Protocol
2. Binary Countdown
3. Limited Contention Protocols
4. The Adaptive Tree Walk Protocol
Pure and slotted Aloha, CSMA and CSMA/CD are contention-based protocols:
 Try; if a collision occurs, retry
 No guarantee of performance
 What happens if the network load is high?

Collision Free Protocols:


 Pay constant overhead to achieve performance guarantee
 Good when network load is high

1. Bit-map Protocol:

 The bit-map protocol is a collision-free protocol. In the bit-map method, each
contention period consists of exactly N slots. If a station has a frame to send, it
transmits a 1 bit in the corresponding slot; for example, if station 2 has a frame to send,
it transmits a 1 bit in slot 2.
 In general, station j announces that it has a frame to send by inserting a 1 bit into slot j.
In this way, each station has complete knowledge of which stations wish to transmit,
and there will never be any collisions because everyone agrees on who goes next.
Protocols like this, in which the desire to transmit is broadcast before the actual
transmission, are called reservation protocols.

Bit Map Protocol fig (1.1)

 To analyze the performance of this protocol, we measure time in units of the contention
bit slot, with a data frame consisting of d time units. Under low-load conditions, the
bitmap is simply repeated over and over, for lack of data frames. Under high load, when
all stations have something to send all the time, the N-bit contention period is prorated
over N frames, yielding an overhead of only 1 bit per frame.
 Under low load, high-numbered stations wait on average half a scan (N/2 bit slots)
before starting to transmit, while low-numbered stations wait on average 1.5N slots.
2. Binary Countdown:

 The binary countdown protocol is used to overcome the bit-map protocol's overhead of
1 bit per station. In binary countdown, binary station addresses are used: a station
wanting to use the channel broadcasts its address as a binary bit string, starting with the
high-order bit. All addresses are assumed to be of the same length. The following
example illustrates the working of binary countdown.
 In this method, the broadcast address bits are read together on the channel, and the
result decides the transmission priority. Suppose stations 0001, 1001, 1100 and 1011 are
all trying to seize the channel for transmission. The stations first broadcast their most
significant address bits: 0, 1, 1, 1 respectively, which are read together. Station 0001
sees a 1 from another station's address, knows that a higher-numbered station is
competing for the channel, and gives up for the current round.
 The other three stations, 1001, 1100 and 1011, continue. In the next bit position, only
station 1100 has a 1, so stations 1001 and 1011 give up because their second bit is 0.
Station 1100 then starts transmitting a frame, after which another bidding cycle starts.

Binary Countdown fig (1.2)
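The bidding round can be simulated in a few lines, treating the channel as a wired OR of the address bits broadcast in each round (a sketch; the function name is illustrative):

```python
def binary_countdown(addresses, width=4):
    """Return the winner of one arbitration round: stations broadcast
    their addresses MSB-first, and a station drops out as soon as it
    sent a 0 while the channel (wired OR) carried a 1."""
    contenders = list(addresses)
    for bit in range(width - 1, -1, -1):
        bus = max((a >> bit) & 1 for a in contenders)  # wired-OR channel
        if bus == 1:
            contenders = [a for a in contenders if (a >> bit) & 1]
    return contenders[0]

# the example above: stations 0001, 1001, 1100, 1011
print(format(binary_countdown([0b0001, 0b1001, 0b1100, 0b1011]), "04b"))  # 1100
```

The highest address always wins, which is why binary countdown gives higher-numbered stations implicit priority.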

3. Limited Contention Protocols:

 Collision based protocols (pure and slotted ALOHA, CSMA/CD) are good when the
network load is low.
 Collision free protocols (bitmap, binary Countdown) are good when load is high.
 How about combining their advantages :

1. Behave like the ALOHA scheme under light load


2. Behave like the bitmap scheme under heavy load.

4. Adaptive Tree Walk Protocol:

 Partition the group of stations and limit the contention for each slot.
 Under light load, every station can try for each slot, as in Aloha.
 Under heavy load, only one group can try for each slot.
 How do we do it?

1. Treat every station as a leaf of a binary tree.

2. In the first slot (after a successful transmission), all stations
can try to get the slot (all nodes under the root).
3. If there is no conflict, fine.
4. Otherwise, in case of conflict, only the nodes under one subtree get to try for the next
slot (depth-first search).

Adaptive Tree Walk Protocol fig (1.3)

Slot-0 : C*, E*, F*, H* (all nodes under node 0 that want to send can try), conflict
Slot-1 : C* (all nodes under node 1 can try), C sends
Slot-2 : E*, F*, H* (all nodes under node 2 can try), conflict
Slot-3 : E*, F* (all nodes under node 5 can try), conflict
Slot-4 : E* (all nodes under E can try), E sends
Slot-5 : F* (all nodes under F can try), F sends
Slot-6 : H* (all nodes under node 6 can try), H sends.
Limited Contention Protocols:

Under conditions of light load, contention is preferable due to its low delay. As the load
increases, contention becomes increasingly less attractive, because the overhead associated with
channel arbitration becomes greater. Just the reverse is true for contention-free protocols: at low
load they have high delay, but as the load increases the channel efficiency improves rather than
getting worse, as it does for contention protocols. Clearly, the probability of some station
acquiring the channel can only be increased by decreasing the amount of competition. The limited
contention protocols do exactly that. They first divide the stations into (not necessarily disjoint)
groups. Only the members of group 0 are permitted to compete for slot 0. The competition for
acquiring the slot within a group is contention based. If one of the members of that group
succeeds, it acquires the channel and transmits a frame. If there is a collision, or no node of that
group wants to send, then the members of the next group compete for the next slot. The
transmission probability of each node is set to an optimum value. The adaptive tree walk protocol
described above is one such limited contention protocol.
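The trade-off described above can be made concrete with the standard slotted-contention analysis: if k stations each transmit in a slot independently with probability p, the chance that exactly one succeeds is k·p·(1−p)^(k−1). A small sketch (the values printed are purely illustrative):

```python
# Probability that exactly one of k contending stations transmits in a
# slot, when each transmits independently with probability p.

def p_success(k, p):
    return k * p * (1 - p) ** (k - 1)

# The success probability is maximised at p = 1/k, but even then it
# decays toward 1/e (about 0.368) as k grows -- which is why limiting
# the number of contenders per slot pays off under heavy load.
for k in (2, 8, 32):
    print(k, round(p_success(k, 1 / k), 3))
```

Decreasing the group size (the amount of competition per slot) is what keeps this success probability high.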

IEEE Standards 802 series & their variant


Various IEEE 802 standards are as
IEEE 802.1 High Level Interface
IEEE 802.2 Logical Link Control(LLC)
IEEE 802.3 Ethernet
IEEE 802.4 Token Bus
IEEE 802.5 Token Ring
IEEE 802.6 Metropolitan Area Networks
IEEE 802.7 Broadband LANs
IEEE 802.8 Fiber Optic LANs
IEEE 802.9 Integrated Data and Voice Network
IEEE 802.10 Security
IEEE 802.11 Wireless Networks

Ethernet – Ethernet is a 10 Mbps LAN that uses the Carrier Sense Multiple Access with
Collision Detection (CSMA/CD) protocol to control access to the network. When an end station
(network device) transmits data, every end station on the LAN receives it. Each end station
checks the data packet to see whether the destination address matches its own address. If the
addresses match, the end station accepts and processes the packet. If they do not match, it
disregards the packet. If two end stations transmit data simultaneously, a collision occurs and the
result is a composite, garbled message. All end stations on the network, including the
transmitting end stations, detect the collision and ignore the message. Each end station that wants
to transmit waits a random amount of time and then attempts to transmit again. This method is
usually used for traditional Ethernet LAN.
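The "random amount of time" mentioned above is classic Ethernet's binary exponential backoff. A hedged sketch (slot-time handling simplified to a slot count):

```python
import random

# Sketch of CSMA/CD binary exponential backoff: after the n-th
# successive collision a station waits a random number of slot times
# in 0 .. 2^n - 1, with the window capped after 10 collisions; after
# 16 collisions the frame is discarded.

def backoff_slots(attempt, rng=random):
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame discarded")
    k = min(attempt, 10)              # window stops growing at 2^10 - 1 = 1023
    return rng.randrange(2 ** k)      # uniform choice of 0 .. 2^k - 1 slots
```

Doubling the window on each collision makes repeated collisions between the same stations increasingly unlikely.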

Fast Ethernet – This is an extension of the 10 Mbps Ethernet standard and supports speeds up to
100 Mbps. The access method used is CSMA/CD. For physical connections, a star wiring topology
is used. Fast Ethernet has become very popular, as upgrading a 10 Mbps Ethernet LAN
to a Fast Ethernet LAN is quite easy.

Token Bus
Token Bus (IEEE 802.4) is a standard for implementing token passing over a virtual ring in LANs.
The physical medium has a bus or a tree topology and uses coaxial cable. A virtual ring is created
from the nodes/stations, and the token is passed from one node to the next in sequence along this
virtual ring. Each node knows the address of its preceding station and its succeeding station. A
station can only transmit data when it holds the token. The working principle of Token Bus is
similar to that of Token Ring.

Token Passing Mechanism in Token Bus:

A token is a small message that circulates among the stations of a computer network providing
permission to the stations for transmission. If a station has data to transmit when it receives a
token, it sends the data and then passes the token to the next station; otherwise, it simply passes
the token to the next station. This is depicted in the following diagram −
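A minimal sketch of this mechanism, assuming three stations on an illustrative virtual ring (the station names and queue contents are made up):

```python
# Token passing on a Token Bus virtual ring: each station knows its
# successor, and only the current token holder may transmit, sending at
# most one queued frame before forwarding the token.

successor = {"A": "C", "C": "F", "F": "A"}          # the logical ring order
queues = {"A": ["frame1"], "C": [], "F": ["frame2"]}

def circulate(start, passes):
    """Hand the token on 'passes' times; return the frames sent, in order."""
    sent, holder = [], start
    for _ in range(passes):
        if queues[holder]:                           # transmit only while holding the token
            sent.append((holder, queues[holder].pop(0)))
        holder = successor[holder]                   # pass the token to the successor
    return sent
```

Calling `circulate("A", 3)` lets A and F each send one frame, while C, having nothing queued, simply forwards the token.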

Token Ring

Token ring (IEEE 802.5) is a communication protocol in a local area network (LAN) where all
stations are connected in a ring topology and pass one or more tokens for channel acquisition. A
token is a special frame of 3 bytes that circulates along the ring of stations. A station can send
data frames only if it holds a token. The tokens are released on successful receipt of the data
frame.
Token Passing Mechanism in Token Ring:

If a station has a frame to transmit when it receives a token, it sends the frame and then passes
the token to the next station; otherwise it simply passes the token to the next station. Passing the
token means receiving the token from the preceding station and transmitting to the successor
station. The data flow is unidirectional in the direction of the token passing. In order that tokens
are not circulated infinitely, they are removed from the network once their purpose is completed.
This is shown in the following diagram −
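The unidirectional flow and frame removal described above can be sketched as follows (the ring membership is illustrative):

```python
# Token-ring data flow: frames travel one way around the ring and are
# removed ("drained") by the sender once they return, after which the
# token is released so it does not circulate indefinitely.

ring = ["S1", "S2", "S3", "S4"]           # stations in ring order

def frame_path(sender):
    """Stations a frame visits, ending back at the sender, who removes it."""
    i = ring.index(sender)
    return [ring[(i + k) % len(ring)] for k in range(1, len(ring) + 1)]
```

For example, a frame sent by S2 visits S3, S4 and S1 before returning to S2, which takes it off the ring.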

Differences between Token Ring and Token Bus

Token Ring:
 The token is passed over the physical ring formed by the stations and the coaxial cable
network.
 The stations are connected in a ring topology, or sometimes a star topology.
 It is defined by the IEEE 802.5 standard.
 The maximum time for a token to reach a station can be calculated.

Token Bus:
 The token is passed along a virtual ring of stations connected to a LAN.
 The underlying topology that connects the stations is either a bus or a tree topology.
 It is defined by the IEEE 802.4 standard.
 It is not feasible to calculate the time for token transfer.

The 802.11 standard is defined through several specifications of WLANs. It defines an over-the-air
interface between a wireless client and a base station, or between two wireless clients.
There are several specifications in the 802.11 family –

 802.11 − This pertains to wireless LANs and provides 1- or 2-Mbps transmission in the
2.4-GHz band using either frequency-hopping spread spectrum (FHSS) or direct-
sequence spread spectrum (DSSS).
 802.11a − This is an extension to 802.11 that pertains to wireless LANs and goes as fast
as 54 Mbps in the 5-GHz band. 802.11a employs the orthogonal frequency division
multiplexing (OFDM) encoding scheme as opposed to either FHSS or DSSS.
 802.11b − The 802.11 high rate WiFi is an extension to 802.11 that pertains to wireless
LANs and yields a connection as fast as 11 Mbps transmission (with a fallback to 5.5, 2,
and 1 Mbps depending on strength of signal) in the 2.4-GHz band. The 802.11b
specification uses only DSSS. Note that 802.11b was actually an amendment to the
original 802.11 standard added in 1999 to permit wireless functionality to be analogous to
hard-wired Ethernet connections.
 802.11g − This pertains to wireless LANs and provides 20+ Mbps in the 2.4-GHz band.

Here is a technical comparison between the major WiFi standards.

Feature               WiFi (802.11b)                   WiFi (802.11a/g)
Primary Application   Wireless LAN                     Wireless LAN
Frequency Band        2.4 GHz ISM                      2.4 GHz ISM (g) / 5 GHz U-NII (a)
Channel Bandwidth     25 MHz                           20 MHz
Half/Full Duplex      Half                             Half
Radio Technology      Direct Sequence Spread Spectrum  OFDM (64 channels)
Bandwidth Efficiency  ≤0.44 bps/Hz                     ≤2.7 bps/Hz
Modulation            QPSK                             BPSK, QPSK, 16-QAM, 64-QAM
FEC                   None                             Convolutional Code
Encryption            Optional: RC4 (AES in 802.11i)   Optional: RC4 (AES in 802.11i)
Mobility              In development                   In development
Mesh                  Vendor Proprietary               Vendor Proprietary
Access Protocol       CSMA/CA                          CSMA/CA
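The CSMA/CA access protocol listed in the last row can be sketched as a backoff countdown that freezes while the medium is busy. This is a simplified timing model; the contention window and the medium schedule below are illustrative:

```python
import random

# CSMA/CA contention sketch: pick a random backoff from the contention
# window, then count it down one idle slot at a time; the countdown
# freezes during busy slots. Transmission starts when it reaches zero.

def csma_ca_start_slot(medium_busy, cw=15, rng=random):
    """Return the slot index at which transmission begins.
    'medium_busy(slot)' reports whether the medium is busy in that slot."""
    backoff = rng.randrange(cw + 1)
    slot = 0
    while True:
        if not medium_busy(slot):
            if backoff == 0:
                return slot               # medium idle and backoff expired
            backoff -= 1                  # decrement only on idle slots
        slot += 1
```

Unlike CSMA/CD, the station avoids collisions up front rather than detecting them mid-transmission, which suits radios that cannot listen while sending.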
