
DATA LINK LAYER:

 The Data Link Layer is the second layer of the OSI layered model.


 The data link layer hides the details of the underlying hardware and presents itself to the
upper layer as the medium of communication.
 The data link layer is responsible for converting the data stream into signals bit by bit and
sending them over the underlying hardware. At the receiving end, the data link layer picks up
the data from the hardware (in the form of electrical signals), assembles it into a
recognizable frame format, and hands it over to the upper layer.

Data link layer has two sub-layers:

 Logical Link Control: It deals with protocols, flow control, and error control.
 Media Access Control: It deals with control of access to the medium.

Services of Data-link Layer

1. Framing:
The data link layer takes packets from the network layer and encapsulates them into frames. It
then sends each frame bit by bit over the hardware. At the receiver's end, the data link layer picks
up the signals from the hardware and assembles them into frames.
2. Addressing
The data link layer provides a layer-2 hardware addressing mechanism. The hardware address is
assumed to be unique on the link. It is encoded into the hardware at the time of manufacturing.
3. Synchronization
When data frames are sent on the link, both machines must be synchronized for the transfer to
take place.
4. Error Control
Signals may encounter problems in transit and bits may be flipped. Such errors are detected, and
an attempt is made to recover the actual data bits. The data link layer also provides an error
reporting mechanism to the sender.
5. Flow Control
Stations on the same link may have different speeds or capacities. The data link layer provides
flow control, which enables both machines to exchange data at the same speed.
6. Multi-Access
When hosts on a shared link try to transfer data, there is a high probability of collision. The data
link layer provides mechanisms such as CSMA/CD to give multiple systems the capability of
accessing a shared medium.

DATA LINK LAYER: FRAMING


What is Framing in the Data Link Layer?
 Frames are the units of digital transmission in computer networks and
telecommunications. A frame is analogous to a packet of energy, just as a photon
is a packet of light. Frames are used continuously in the time-division
multiplexing method.

 Framing operates over a point-to-point connection between two computers or devices,


in which data is transmitted over a wire as a stream of bits. These bits must be
framed into recognizable blocks of data. Framing is an operation of the data link
layer. It provides a way for a sender to transmit a set of bits that are meaningful
to the receiver. Ethernet, Token Ring, Frame Relay, and other data link layer
technologies have their own frame structures. Frames have headers that contain
information such as error-checking codes.

Parts of a Frame
Header: It consists of the frame's source and destination addresses. A frame header holds the
destination address, the source address, and three control fields (kind, seq, and ack) serving
the following purposes:
kind: This field indicates whether the frame is a data frame or is used for control
functions such as error control, flow control, or link management.
Seq: This holds the sequence number of the frame, used for reordering out-of-sequence
frames and for generating acknowledgments at the receiver.
Ack: This contains the acknowledgment number of a frame, particularly when
piggybacking is employed.
Payload: It contains the message to be delivered, i.e., the actual message or data that the
sender wants to transmit to the destination machine.
Trailer: It contains the error detection and correction bits.
Flag: It marks the starting and the ending of the frame.
Problems in Framing

Locating the beginning of the frame: When a frame is transmitted, every station should be
able to detect it. A station detects a frame by looking for a special sequence of bits that
marks the start of the frame, the SFD (Start Frame Delimiter).

How does a station detect a frame: Every station listens to the link for the SFD pattern through
a sequential circuit. If the SFD is detected, the sequential circuit alerts the station. The station
then checks the destination address to accept or reject the frame.

Locating the end of the frame: When should the receiver stop reading the frame? For example,
the sending device transmits a hundred bits of data, of which fifty bits belong to frame one. The
receiving device receives all the bits. But how can the receiver know that only the first fifty bits
belong to the frame? Solution: some protocol is agreed between the devices before the
communication and data exchange begin. Suppose both devices agree that frames begin and
end with the pattern 11011 (added as the header and trailer). The frame received by the
receiving device would then look like this: 1101101101101011011.

If you look carefully, there is another problem: the received frame contains a region that
resembles the beginning and the end patterns, 1101101101101011011. Hence, the receiver
considers only the bits 1101101011011, i.e., everything after the look-alike pattern is not
treated as part of the frame. This is called a framing error. To understand the solutions to
this problem, we first study the types of framing.

Types of Framing in Data Link Layer

There are two types of framing in the data link layer. The frame can be of fixed or variable
size. Based on the size, the following are the types of framing in the data link layer in
computer networks:

1. Fixed Size Framing


 In fixed-size framing, the size of the frame is always fixed, so the frame
length itself acts as the delimiter of the frame. No additional boundary bits
are required to identify the start and end of the frame. For example, this kind of
framing in the data link layer is used in ATM networks and wide area networks (WANs),
which use fixed-size frames called cells.

2. Variable Size Framing


 The size of the frame is variable in this form of framing. In variable-size
framing, we need a way to define the end of one frame and the beginning
of the next. This is used in local area networks (LANs).

There are two different methods of defining the frame boundaries: a length field and an end
delimiter.

Length field–To determine the length of the frame, a length field is used. It is used in
Ethernet (IEEE 802.3).

End Delimiter–To determine the end of the frame, a special pattern is used as a delimiter.
This method is used in Token Ring. In short, it is referred to as ED. If the delimiter pattern
happens to occur within the message itself, two different methods are used to avoid this
condition:
1. Character-Oriented Approach
2. Bit-Oriented Approach

Framing Approaches in Computer Network


Talking about framing approaches in computer networking, there are three
different kinds of approaches to framing in the data link layer:

1. Bit-Oriented Framing
Most protocols use a special 8-bit pattern, the flag 01111110, as the delimiter to mark
the beginning and the end of the frame. Bit stuffing is done at the sender's end and bit
removal at the receiver's end.

Whenever the data contains five consecutive 1s, a 0 is stuffed after them, even if the
next data bit is already a 0. The receiver then removes the stuffed 0. Bit stuffing is
also referred to as bit-oriented framing.
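The stuffing rule above can be sketched in code. This is an illustrative Python sketch of the generic rule (stuff a 0 after every five consecutive 1s; the receiver removes it), not any particular protocol's implementation:

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")  # stuffed bit, removed again by the receiver
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        i += 1
        if run == 5:
            i += 1  # skip the stuffed 0
            run = 0
    return "".join(out)
```

After stuffing, the payload can never contain six 1s in a row, so the flag 01111110 can only appear at frame boundaries.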

2. Byte-Oriented Framing
Byte stuffing is a method of adding an extra byte whenever a flag or escape character
appears within the text. Consider the illustration of byte stuffing shown in the given
diagram.

The sender sends the frame after adding three additional ESC bytes, and the destination
machine receives the frame and removes the extra bytes to recover the original message.
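A minimal sketch of the same idea in Python. The flag and escape values 0x7E and 0x7D are illustrative choices (they happen to be the octets PPP uses, though real PPP additionally XORs each escaped byte with 0x20, which is omitted here):

```python
FLAG, ESC = 0x7E, 0x7D  # illustrative flag and escape byte values

def byte_stuff(payload: bytes) -> bytes:
    """Escape any flag/ESC byte in the payload, then wrap it in flags."""
    out = bytearray([FLAG])              # opening flag
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)              # escape special bytes in the data
        out.append(b)
    out.append(FLAG)                     # closing flag
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Strip the flags and remove each ESC, keeping the byte after it."""
    out, esc = bytearray(), False
    for b in frame[1:-1]:                # drop the two flag bytes
        if esc:
            out.append(b)
            esc = False
        elif b == ESC:
            esc = True
        else:
            out.append(b)
    return bytes(out)
```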
3. Clock-Based Framing

This framing in the data link layer is used mainly for optical networks like SONET. In this
approach, a sequence of repetitive pulses maintains a constant bit rate and keeps the
digital bits aligned within the data stream.

Methods of Framing

There are four different methods of framing as listed below –

1. Character Count

In this method, a field in the frame header records the total number of characters in the frame.
Using this character count, the receiver at the other end knows how many characters follow
and, therefore, where the frame ends. The disadvantage of this method is that if the character
count is garbled by an error occurring during transmission, the receiver loses
synchronization and is then unable to locate the start of the next frame.
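The character-count method can be sketched as follows, assuming a byte-granular stream and a one-byte count field that (as in the classic textbook example) includes itself:

```python
def frame_by_count(messages):
    """Prefix each message with a one-byte count that includes itself."""
    stream = bytearray()
    for msg in messages:
        stream.append(len(msg) + 1)  # count byte + payload length
        stream += msg
    return bytes(stream)

def deframe_by_count(stream):
    """Walk the stream, jumping ahead by each frame's count field."""
    frames, i = [], 0
    while i < len(stream):
        n = stream[i]                # total frame length, count included
        frames.append(bytes(stream[i + 1:i + n]))
        i += n
    return frames
```

Note how a single corrupted count byte would make every subsequent jump land in the wrong place, which is exactly the loss of synchronization described above.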

2. Flag Byte with Character Stuffing

Flag byte with character stuffing is also called byte stuffing or character-oriented
framing. It is the byte analogue of bit stuffing: byte stuffing works on whole bytes,
while bit stuffing works on individual bits. In byte stuffing, a special byte called
ESC (escape character), which has a fixed pattern, is inserted into the data section
of the frame whenever the data contains a pattern matching the flag byte.

The receiver removes the ESC bytes and delivers the original data; without this, the
flag-like bytes in the data would cause difficulties.

3. Starting and Ending Flags, with Bit Stuffing

Starting and ending flags with bit stuffing is also called bit-oriented framing or the bit-
oriented approach. In bit stuffing, extra bits are inserted into the data stream by the
protocol. Stuffing these extra bits into the transmission unit is a simple way of conveying
control information to the recipient while ensuring that data patterns are not mistaken
for control sequences. In effect, it is a form of protocol control that breaks up any bit
pattern in the data which could otherwise throw the communication out of
synchronization.

4. Encoding Violations

Encoding violation is a framing technique usable only in networks where the encoding on
the physical medium contains some redundancy, that is, where more than one physical
signal pattern is available to encode one bit of information. A signal pattern that is never
used for data can then mark the frame boundaries.

Advantages of Framing in Data Link Layer

 Frames are used continuously in the process of time-division multiplexing.
 Framing provides a way for the sender to transmit a group of valid bits to a receiver.
 Frames also contain headers that include information such as error-checking codes.
 Frame Relay, Token Ring, Ethernet, and other data link layer technologies have their
own frame structures.
 Frames allow the data to be divided into multiple recoverable parts that can be inspected
further for corruption.

Flow Control

When a data frame (Layer-2 data) is sent from one host to another over a single medium, the
sender and receiver must work at the same speed. That is, the sender sends at a speed at
which the receiver can process and accept the data. What if the speed (hardware/software)
of the sender and receiver differ? If the sender sends too fast, the receiver may be
overloaded (swamped) and data may be lost.

Two types of mechanisms can be deployed to control the flow:

1. Stop-and-Wait
2. Sliding Window

1.What is Stop and Wait protocol?


Here, stop and wait means that whatever data the sender wants to send, it sends to the
receiver; after sending the data, it stops and waits until it receives the acknowledgment
from the receiver.

The stop-and-wait protocol is a flow control protocol; flow control is one of the services of
the data link layer.

Working of Stop and Wait protocol

The above figure shows the working of the stop-and-wait protocol.

Given a sender and a receiver, the sender sends a packet, known as a data packet. The
sender will not send the second packet without receiving the acknowledgment of the
first packet.

The receiver sends an acknowledgment for each data packet it receives. Once the
acknowledgment is received, the sender sends the next packet. This process continues until
all the packets are sent.

The main advantage of this protocol is its simplicity, but it also has disadvantages. For
example, if there are 1000 data packets to be sent, they cannot all be sent at once: in the
stop-and-wait protocol, only one packet is sent at a time.
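The behaviour described above can be sketched as a small simulation. This is a toy model (a hypothetical loss rate, no real timers), and it omits the one-bit sequence number a real implementation needs so the receiver can discard duplicates after a lost acknowledgment:

```python
import random

def stop_and_wait(packets, loss_rate=0.3, seed=1):
    """Simulate stop-and-wait over a channel that drops frames and ACKs."""
    rng = random.Random(seed)
    delivered, transmissions = [], 0
    for pkt in packets:
        while True:                        # retransmit until an ACK arrives
            transmissions += 1
            if rng.random() < loss_rate:   # data frame lost: timeout, resend
                continue
            # Frame arrived; a real receiver uses a sequence number here
            # to recognise and discard duplicate copies of the same frame.
            if rng.random() < loss_rate:   # ACK lost: timeout, resend
                continue
            delivered.append(pkt)
            break
    return delivered, transmissions
```

Every packet eventually gets through, but the transmission count grows with the loss rate, since each lost frame or lost ACK costs a full round trip.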
Disadvantages of Stop and Wait protocol

The following are the problems associated with a stop and wait protocol:

1. Problems occur due to lost data

Suppose the sender sends the data and the data is lost. The receiver waits for the data for a
long time. Since the data is not received, the receiver does not send any acknowledgment;
and since the sender does not receive any acknowledgment, it will not send the next
packet. These problems occur due to the lost data.

In this case, two problems occur:

o Sender waits for an infinite amount of time for an acknowledgment.


o Receiver waits for an infinite amount of time for data.

2. Problems occur due to lost acknowledgment


Suppose the sender sends the data and it has also been received by the receiver. On receiving the
packet, the receiver sends the acknowledgment. In this case, the acknowledgment is lost in a
network, so there is no chance for the sender to receive the acknowledgment. There is also no
chance for the sender to send the next packet as in stop and wait protocol, the next packet cannot
be sent until the acknowledgment of the previous packet is received.

In this case, one problem occurs:

o Sender waits for an infinite amount of time for an acknowledgment.

3. Problem due to the delayed data or acknowledgment

Suppose the sender sends the data and it is received by the receiver. The receiver then
sends the acknowledgment, but the acknowledgment arrives after the timeout period on the
sender's side. Because the acknowledgment arrives late, it can be wrongly taken as the
acknowledgment of some other data packet.
2.Sliding Window Protocol

The sliding window is a technique for sending multiple frames at a time. It controls the flow
of data packets between two devices where reliable, in-order delivery of data frames is
needed. It is also used in TCP (Transmission Control Protocol).

In this technique, each frame is sent with a sequence number. The sequence numbers are
used to find missing data at the receiver's end. The sliding window technique also uses the
sequence numbers to avoid duplicate data.
Types of Sliding Window Protocol

Sliding window protocol has two types:

1. Go-Back-N ARQ
2. Selective Repeat ARQ

1.Go-Back-N ARQ

 Go-Back-N ARQ protocol is also known as Go-Back-N Automatic Repeat Request.


 It is a data link layer protocol that uses a sliding window method. In this protocol, if any
frame is corrupted or lost, all subsequent frames have to be sent again.
 The size of the sender window is N in this protocol. For example, in Go-Back-8 the size of
the sender window is 8. The receiver window size is always 1.
 If the receiver receives a corrupted frame, it discards it; the receiver does not accept
corrupted frames. When the timer expires, the sender sends the correct frame again.
 An example of Go-Back-N ARQ is shown below.
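A rough sketch of the Go-Back-N behaviour. The simulation is deliberately simplified (ACKs are never lost, and frames listed as lost are dropped only on their first transmission); it records the order in which frames go out on the wire, showing how one loss forces correctly received successors to be resent:

```python
def go_back_n(num_frames, window=4, lose_once=(2,)):
    """Simulate Go-Back-N with sender window `window`. Frames listed in
    `lose_once` are dropped on their first transmission only. Returns the
    sequence numbers in the order they were (re)transmitted on the wire."""
    dropped = set(lose_once)
    base, next_seq, wire_log = 0, 0, []
    while base < num_frames:
        # Sender: transmit every frame that fits in the window.
        while next_seq < min(base + window, num_frames):
            wire_log.append(next_seq)
            next_seq += 1
        # Receiver: accept in-order frames until one is missing;
        # out-of-order frames after a loss are simply discarded.
        for seq in range(base, next_seq):
            if seq in dropped:
                dropped.discard(seq)  # lost this time; resend will succeed
                next_seq = seq        # timeout: go back and resend from seq
                break
            base = seq + 1            # in-order frame delivered and ACKed
    return wire_log
```

With a window of 4 and frame 2 lost once, frame 3 is retransmitted even though its first copy arrived intact; Selective Repeat would resend only frame 2.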

2. Selective Repeat ARQ

Selective Repeat ARQ is also known as the Selective Repeat Automatic Repeat Request. It is a
data link layer protocol that uses a sliding window method. The Go-Back-N ARQ protocol works
well if errors are few, but if the frames suffer many errors, a lot of bandwidth is lost in
resending them. So, we use the Selective Repeat ARQ protocol. In this protocol, the size of the
sender window is always equal to the size of the receiver window, and the size of the sliding
window is always greater than 1.

If the receiver receives a corrupt frame, it does not simply discard it. It sends a negative
acknowledgment to the sender, and the sender resends that frame as soon as it receives the
negative acknowledgment. There is no waiting for any timeout to resend that frame.

Example of the Selective Repeat ARQ protocol is shown below:

Difference between the Go-Back-N ARQ and Selective Repeat ARQ

1. Go-Back-N ARQ: If a frame is corrupted or lost, all subsequent frames have to be sent
again. Selective Repeat ARQ: Only the corrupted or lost frame is sent again.
2. Go-Back-N ARQ: If the error rate is high, it wastes a lot of bandwidth. Selective Repeat
ARQ: Comparatively little bandwidth is lost.
3. Go-Back-N ARQ: It is less complex. Selective Repeat ARQ: It is more complex because it
has to do sorting and searching as well, and it also requires more storage.
4. Go-Back-N ARQ: It does not require sorting. Selective Repeat ARQ: Sorting is done to get
the frames in the correct order.
5. Go-Back-N ARQ: It does not require searching. Selective Repeat ARQ: A search operation
is performed.
6. Go-Back-N ARQ: It is used more often. Selective Repeat ARQ: It is used less often because
it is more complex.

HYBRID ARQ

 HARQ is the use of conventional ARQ together with an error correction
technique called 'soft combining', which no longer discards the received bad data
(data with errors).

 With soft combining, data packets that are not properly decoded are no longer
discarded. The received signal is stored in a 'buffer' and is combined with the
next retransmission.
 That is, two or more packets, each received with insufficient SNR to allow
individual decoding, can be combined in such a way that the total signal can be
decoded!
 The following image explains this procedure. The transmitter sends packet [1]. Packet
[1] arrives and is 'OK'. Since packet [1] is 'OK', the receiver sends an
'ACK'.

The transmission continues, and packet [2] is sent. Packet [2] arrives, but let us suppose that
it arrives with errors. Since packet [2] arrives with errors, the receiver sends a 'NACK'.
Unlike in conventional ARQ, this bad packet [2] is not thrown away. Instead, it is stored
in a 'buffer'.

Continuing, the transmitter sends another copy, packet [2.1], which (let us suppose) also
arrives with errors.

We then have in the buffer the bad packet [2] and another packet [2.1], which is also bad.
Does combining these two packets ([2] + [2.1]) give us the complete
information? If yes, we send an 'ACK'.
But if the combination of these two packets still does not give us the complete
information, the process must continue, and another 'NACK' is sent.

So there is another retransmission. The transmitter now sends a third copy, packet [2.2]. Let us
suppose that this one is 'OK', and the receiver sends an 'ACK'.
Note that, along with the received packet [2.2], the receiver also has packets [2] and
[2.1], which were not dropped and are stored in the buffer.

In our example, the packet arrived 'wrong' twice. What is the limit on these
retransmissions? Up to 4; that is, we can have up to 4 retransmissions in each
process. This is the maximum number supported by the buffer.

Error Detection in Computer Networks


What is Error
 Error is a condition in which the receiver’s information does not match the sender’s
information.
 During transmission, digital signals suffer from noise that can introduce errors into the
binary bits traveling from sender to receiver. That means a 0 bit may change to 1 or a 1
bit may change to 0.
 Data may get corrupted whenever a message is transmitted. To detect such errors, error-
detection codes (implemented at either the data link layer or the transport layer of the
OSI model) are added as extra data to digital messages. This helps in detecting any
errors that may have occurred during message transmission.

Types of Errors

1. Single-Bit Error
2. Multiple-Bit Error
3. Burst Error

1. Single-Bit Error
 A single-bit error is a type of data transmission error that occurs when one bit
(i.e., a single binary digit) of a transmitted data unit is altered during transmission,
resulting in a corrupted data unit.
2.Multiple-Bit Error

 A multiple-bit error is an error type that arises when more than one bit in a data
transmission is affected. Although multiple-bit errors are relatively rare when compared
to single-bit errors, they can still occur, particularly in high-noise or high-interference
digital environments.

3.Burst Error
 When several consecutive bits are flipped mistakenly in digital transmission, it creates a
burst error. This error causes a sequence of consecutive incorrect values.
 To detect errors, a common technique is to introduce redundancy bits that provide
additional information.
Various techniques for error detection include:
1. Simple Parity Check
2. Two-dimensional Parity Check
3. Checksum
4. Cyclic Redundancy Check (CRC)

Error Detection Methods

1.Simple Parity Check

 Simple-bit parity is a simple error detection method that involves adding an extra
bit to a data transmission. It works as follows:
 A 1 is added to the block if it contains an odd number of 1’s, and
 a 0 is added if it contains an even number of 1’s.
 This scheme makes the total number of 1’s even, which is why it is called even parity
checking.
Disadvantages
 A single parity check cannot detect an even number of bit errors.
 For example, suppose the data to be transmitted is 101010. The codeword transmitted to the
receiver is 1010101 (using even parity).
Assume that during transmission, two of the bits of the codeword flip, giving 1111101.
On receiving the codeword, the receiver finds the number of ones to be even and hence
concludes there is no error, which is a wrong conclusion.
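The scheme, and its blind spot for an even number of flipped bits, can be verified with a few lines of Python:

```python
def add_even_parity(data: str) -> str:
    """Append a parity bit so the total number of 1s is even."""
    return data + ("1" if data.count("1") % 2 else "0")

def check_even_parity(codeword: str) -> bool:
    """True when the count of 1s is even, i.e. no error is detected."""
    return codeword.count("1") % 2 == 0
```

A single flipped bit makes the count odd and is caught; the two-bit corruption from the example above keeps the count even and slips through.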

2.Two-dimensional Parity Check


 In a two-dimensional parity check, a parity bit is calculated for each row; this is
equivalent to a simple parity check.
 Parity check bits are also calculated for all columns, and then both are sent along with
the data.
 At the receiving end, these are compared with the parity bits calculated on the received
data.
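As a sketch, with even parity and the data rows given as bit strings:

```python
def two_d_parity(rows):
    """rows: equal-length bit strings. Append an even-parity bit to each
    row, then append a final row of even column parities."""
    with_row = [r + ("1" if r.count("1") % 2 else "0") for r in rows]
    width = len(with_row[0])
    parity_row = "".join(
        "1" if sum(int(r[i]) for r in with_row) % 2 else "0"
        for i in range(width)
    )
    return with_row + [parity_row]
```

At the receiving end, recomputing the matrix and comparing parities locates a single-bit error at the intersection of the failing row and the failing column.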

3.Checksum
 Checksum error detection is a method used to identify errors in transmitted data. The
process involves dividing the data into equally sized segments and using a 1’s
complement to calculate the sum of these segments. The calculated sum is then sent
along with the data to the receiver.
 At the receiver’s end, the same process is repeated; if the complemented sum comes out
to be all zeros, it means that the data is correct.
Checksum – Operation at Sender’s Side
 Firstly, the data is divided into k segments each of m bits.
 On the sender’s end, the segments are added using 1’s complement arithmetic to get the
sum. The sum is complemented to get the checksum.
 The checksum segment is sent along with the data segments.
Checksum – Operation at Receiver’s Side
 At the receiver’s end, all received segments are added using 1’s complement arithmetic to
get the sum. The sum is complemented.
 If the result is zero, the received data is accepted; otherwise discarded.

Disadvantages
 If one or more bits of a segment are damaged, and the corresponding bit or bits of opposite
value in a second segment are also damaged, the errors cancel out and go undetected.
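The sender and receiver steps above can be sketched with 8-bit segments (the segment width here is an illustrative choice; the Internet checksum, for instance, uses 16-bit words):

```python
def ones_complement_sum(segments, bits=8):
    """Add segments with end-around carry (1's complement arithmetic)."""
    mask = (1 << bits) - 1
    total = 0
    for s in segments:
        total += s
        total = (total & mask) + (total >> bits)  # wrap the carry around
    return total

def make_checksum(segments, bits=8):
    """Sender: the checksum is the complement of the 1's-complement sum."""
    return ones_complement_sum(segments, bits) ^ ((1 << bits) - 1)

def verify(segments, checksum, bits=8):
    """Receiver: the sum of data + checksum must be all 1s, so that its
    complement is zero."""
    return ones_complement_sum(list(segments) + [checksum], bits) == (1 << bits) - 1
```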

4.Cyclic Redundancy Check (CRC)

Sender Side (Generation of Encoded Data from Data and Generator Polynomial):

1. The binary data is first augmented by appending n-1 zeros at the end of the data, where
n is the number of bits in the key (generator).
2. Use modulo-2 binary division to divide the augmented data by the key and store the
remainder of the division.
3. Append the remainder at the end of the original data to form the encoded data, and send it.

Receiver Side (Check if errors were introduced in transmission):

1. Perform modulo-2 division again; if the remainder is 0, then there are no errors.

What is Modulo-2 Division:

The process of modulo-2 binary division is the same as the familiar division process we use for
decimal numbers, except that instead of subtraction, we use XOR.
1. In each step, a copy of the divisor (the key) is XORed with the topmost n bits of the
dividend (the data).
2. The result of the XOR operation is the running remainder of (n-1) bits, which is used for
the next step after 1 extra bit is pulled down to make it n bits long.
3. When there are no bits left to pull down, we have the result: the (n-1)-bit remainder,
which is appended at the sender side.

Example 1 (No error in transmission):


Data word to be sent - 100100
Key - 1101 [or generator polynomial x^3 + x^2 + 1]

Step 1: At Sender Side:

Therefore, the remainder is 001 and hence the encoded


data sent is 100100001.
Step 2: At Receiver Side:
Code word received at the receiver side: 100100001
Therefore, the remainder is all zeros. Hence, the data received has no error.
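The worked example can be reproduced with a short modulo-2 division routine:

```python
def crc_remainder(dividend: str, key: str) -> str:
    """Modulo-2 division: XOR the key into the dividend wherever the
    current leading bit is 1; the last len(key)-1 bits are the remainder."""
    n = len(key)
    bits = list(dividend)
    for i in range(len(dividend) - n + 1):
        if bits[i] == "1":
            for j in range(n):
                bits[i + j] = str(int(bits[i + j]) ^ int(key[j]))
    return "".join(bits[len(dividend) - n + 1:])

def crc_encode(data: str, key: str) -> str:
    """Append n-1 zeros, divide, and append the remainder to the data."""
    return data + crc_remainder(data + "0" * (len(key) - 1), key)

def crc_check(codeword: str, key: str) -> bool:
    """A remainder of all zeros means no error was detected."""
    return crc_remainder(codeword, key) == "0" * (len(key) - 1)
```

With data 100100 and key 1101, this yields the codeword 100100001 from the example, and any single-bit corruption of that codeword produces a non-zero remainder.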
Advantages of Error Detection
Increased Data Reliability: Error detection ensures that the data transmitted over the network
is reliable, accurate, and free from errors. This ensures that the recipient receives the same data
that was transmitted by the sender.
Improved Network Performance: Error detection mechanisms can help to identify and
isolate network issues that are causing errors. This can help to improve the overall performance
of the network and reduce downtime.
Enhanced Data Security: Error detection can also help to ensure that the data transmitted
over the network is secure and has not been tampered with.
Disadvantages of Error Detection
Overhead: Error detection requires additional resources and processing power, which can lead
to increased overhead on the network. This can result in slower network performance and
increased latency.
False Positives: Error detection mechanisms can sometimes generate false positives, which
can result in unnecessary retransmission of data. This can further increase the overhead on the
network.
Limited Error Correction: Error detection can only identify errors but cannot correct them.
This means that the recipient must rely on the sender to retransmit the data, which can lead to
further delays and increased network overhead.

Error Correction Techniques

Error Detection and correction using Hamming Code

Hamming code is a set of error-correction codes that can be used to detect and correct the
errors that can occur when the data is moved or stored from the sender to the receiver. It is a
technique developed by R.W. Hamming for error correction.
What are Redundant Bits?

Redundant bits are extra binary bits that are generated and added to the information-carrying
bits of data transfer to ensure that no bits were lost during the data transfer. The number of
redundant bits can be calculated using the following formula:
2^r ≥ m + r + 1
Suppose the number of data bits m is 7. Then the number of redundant bits r is found
from 2^r ≥ m + r + 1, i.e., 2^4 ≥ 7 + 4 + 1. Thus, the number of redundant bits = 4.
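The smallest r satisfying 2^r ≥ m + r + 1 can be found by direct search:

```python
def redundant_bits(m: int) -> int:
    """Smallest r with 2**r >= m + r + 1 (m data bits, r parity bits)."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r
```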

Types of Parity bits

A parity bit is a bit appended to a group of binary bits to make the total number of 1’s in
the data even or odd. Parity bits are used for error detection. There are two types of parity
bits:
1. Even parity bit: In the case of even parity, for a given set of bits, the number of 1’s are
counted. If that count is odd, the parity bit value is set to 1, making the total count of
occurrences of 1’s an even number. If the total number of 1’s in a given set of bits is
already even, the parity bit’s value is 0.
2. Odd Parity bit: In the case of odd parity, for a given set of bits, the number of 1’s are
counted. If that count is even, the parity bit value is set to 1, making the total count of
occurrences of 1’s an odd number. If the total number of 1’s in a given set of bits is already
odd, the parity bit’s value is 0.

Algorithm of Hamming Code

Hamming Code is simply the use of extra parity bits to allow the identification of an error.
 Write the bit positions starting from 1 in binary form (1, 10, 11, 100, etc).
 All the bit positions that are a power of 2 are marked as parity bits (1, 2, 4, 8, etc).
 All the other bit positions are marked as data bits.
 Each data bit is included in a unique set of parity bits, as determined by its bit position in
binary form.
a. Parity bit 1 covers all the bit positions whose binary representation includes a 1 in
the least significant position (1, 3, 5, 7, 9, 11, etc.).
b. Parity bit 2 covers all the bit positions whose binary representation includes a 1 in
the second position from the least significant bit (2, 3, 6, 7, 10, 11, etc.).
c. Parity bit 4 covers all the bit positions whose binary representation includes a 1 in
the third position from the least significant bit (4–7, 12–15, 20–23, etc.).
d. Parity bit 8 covers all the bit positions whose binary representation includes a 1 in
the fourth position from the least significant bit (8–15, 24–31, 40–47, etc.).
e. In general, each parity bit covers all bits where the bitwise AND of the parity
position and the bit position is non-zero.
 Since we check for even parity, set a parity bit to 1 if the total number of ones in the
positions it checks is odd.
 Set a parity bit to 0 if the total number of ones in the positions it checks is even.

Determining the Position of Redundant Bits

Redundant bits are placed at positions that correspond to powers of 2. As in the above
example:
 The number of data bits = 7
 The number of redundant bits = 4
 The total number of bits = 7 + 4 = 11
 The redundant bits are placed at positions corresponding to powers of 2: 1, 2, 4, and 8

 Suppose the data to be transmitted is 1011001 from sender to receiver, the bits will be
placed as follows:

Determining the Parity bits According to Even Parity

 The R1 bit is calculated using a parity check on all the bit positions whose binary
representation includes a 1 in the least significant position. R1: bits 1, 3, 5, 7, 9, 11.

 To find the redundant bit R1, we check for even parity. Since the total number of 1’s in all
the bit positions corresponding to R1 is even, the value of R1 (the parity bit’s value)
= 0.
 The R2 bit is calculated using a parity check on all the bit positions whose binary
representation includes a 1 in the second position from the least significant bit. R2: bits 2, 3, 6, 7, 10, 11.

 To find the redundant bit R2, we check for even parity. Since the total number of 1’s in all
the bit positions corresponding to R2 is odd, the value of R2 (the parity bit’s value) = 1.
 The R4 bit is calculated using a parity check on all the bit positions whose binary
representation includes a 1 in the third position from the least significant bit. R4: bits 4, 5, 6, 7.

 To find the redundant bit R4, we check for even parity. Since the total number of 1’s in all
the bit positions corresponding to R4 is odd, the value of R4 (the parity bit’s value) = 1.
 The R8 bit is calculated using a parity check on all the bit positions whose binary
representation includes a 1 in the fourth position from the least significant bit. R8: bits 8, 9, 10, 11.

 To find the redundant bit R8, we check for even parity. Since the total number of 1’s in all
the bit positions corresponding to R8 is even, the value of R8 (the parity bit’s
value) = 0. Thus, the data transferred is:
Error Detection and Correction

Suppose in the above example the 6th bit is changed from 0 to 1 during data transmission. The
parity checks then give new values, which form a binary number:

For each parity bit we recheck the number of 1's in its respective bit positions.
 For R1 (bits 1, 3, 5, 7, 9, 11): the number of 1's in these positions is 4, which is even, so
this check gives 0.
 For R2 (bits 2, 3, 6, 7, 10, 11): the number of 1's is 5, which is odd, so this check gives 1.
 For R4 (bits 4, 5, 6, 7): the number of 1's is 3, which is odd, so this check gives 1.
 For R8 (bits 8, 9, 10, 11): the number of 1's is 2, which is even, so this check gives 0.
 Read together (R8 R4 R2 R1), the checks give the binary number 0110, whose decimal
value is 6. Thus, bit 6 contains the error; to correct it, the 6th bit is changed back from
1 to 0.
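The parity computation and error location above can be sketched in Python. The placement convention follows the worked example (rightmost data bit of "1011001" at position 3; parity bits at positions 1, 2, 4 and 8); function names are illustrative.

```python
def hamming_encode(data_bits, n=11):
    """Place data bits, then fill parity bits using even parity."""
    code = [0] * (n + 1)                      # 1-indexed codeword, slot 0 unused
    # data positions are the non-powers-of-two: 3, 5, 6, 7, 9, 10, 11
    data_positions = [p for p in range(1, n + 1) if p & (p - 1)]
    for pos, bit in zip(data_positions, reversed(data_bits)):
        code[pos] = int(bit)
    for r in (1, 2, 4, 8):                    # each parity bit covers positions p with p & r
        code[r] = sum(code[p] for p in range(1, n + 1) if p & r and p != r) % 2
    return code[1:]                           # drop the dummy 0th slot

def hamming_syndrome(code, n=11):
    """Re-run each parity check; the failing checks, read as a binary number,
    give the 1-indexed position of a single-bit error (0 = no error)."""
    syndrome = 0
    for r in (1, 2, 4, 8):
        if sum(code[p - 1] for p in range(1, n + 1) if p & r) % 2:
            syndrome += r
    return syndrome
```

Encoding "1011001" reproduces the parity values R1=0, R2=1, R4=1, R8=0 computed above, and flipping bit 6 of the codeword yields syndrome 6, matching the worked error-correction step.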

Features of Hamming Code


Error Detection and Correction: Hamming code is designed to detect and correct single-bit
errors that may occur during the transmission of data. This ensures that the recipient receives
the same data that was transmitted by the sender.
Redundancy: Hamming code uses redundant bits to add additional information to the data
being transmitted. This redundancy allows the recipient to detect and correct errors that may
have occurred during transmission.
Efficiency: Hamming code is a relatively simple and efficient error-correction technique that
does not require a lot of computational resources. This makes it ideal for use in low-power and
low-bandwidth communication networks.
Widely Used: Hamming code is a widely used error-correction technique and is used in a
variety of applications, including telecommunications, computer networks, and data storage
systems.
Single Error Correction: Hamming code is capable of correcting a single-bit error, which
makes it ideal for use in applications where errors are likely to occur due to external factors
such as electromagnetic interference.
Limited Multiple Error Correction: standard Hamming code corrects only single-bit errors
(an extended version can additionally detect, but not correct, double-bit errors). In
applications where multiple errors are likely, more powerful error-correction techniques
may be required.

What is a Finite State Machine?


 A Finite State Machine (finite automaton) is a model of computation based on a
hypothetical machine made of one or more states. Only one state of the machine can be
active at a time, so the machine must transition from one state to another in order to
perform different actions.
 Finite state machines are used to recognize patterns.
 A finite automaton takes a string of symbols as input and changes its state
accordingly. When a desired symbol is read from the input, a transition occurs.
 During a transition, the automaton can either move to a new state or stay in the same
state.
 An FA has accepting and non-accepting states. If the automaton is in an accepting
(final) state when the input string has been fully processed, the string is accepted;
otherwise it is rejected.

Real world examples

Let’s see what could be a Finite State Machine in the real world:

1.Coin-operated turnstile:

States: locked, unlocked


Transitions: putting a coin in the slot will unlock the turnstile; pushing the arm of the unlocked
turnstile will let the customer pass and lock the turnstile again
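The turnstile can be sketched as a tiny state machine in Python; the state and event names below are illustrative.

```python
# Transition table for the coin-operated turnstile:
# (current state, event) -> next state
TRANSITIONS = {
    ("locked",   "coin"): "unlocked",  # a coin unlocks the turnstile
    ("locked",   "push"): "locked",    # pushing a locked turnstile does nothing
    ("unlocked", "push"): "locked",    # the customer passes; it locks again
    ("unlocked", "coin"): "unlocked",  # an extra coin changes nothing
}

def run(events, state="locked"):
    """Feed a sequence of events to the machine; only one state is active at a time."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state
```

Running `run(["coin", "push"])` walks the machine from locked to unlocked and back to locked, exactly the cycle described above.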

2.Traffic Light

States: Red, Yellow, Green

Transitions: After a given time, Red will change to Green, Green to Yellow, and Yellow to Red

Mathematically,

A finite automaton consists of the following:

Q: finite set of states

∑: finite set of input symbols

q0: initial state

F: set of final states

δ: transition function

The transition function can be defined as

δ: Q x ∑ → Q

FA is classified in two ways:


1. DFA (deterministic finite automata)
2. NDFA (non-deterministic finite automata)

1. DFA (Deterministic Finite Automata)

DFA stands for Deterministic Finite Automata. Deterministic refers to the uniqueness of the
computation: in a DFA, each input character takes the machine to exactly one state. A DFA does
not accept the null move, which means it cannot change state without reading an input character.

A DFA is a five-tuple {Q, ∑, q0, F, δ}

Q: set of all states


∑: finite set of input symbols
q0: initial state
F: set of final states
δ: transition function, δ: Q x ∑ → Q
Example: deterministic finite automaton:
Q = {q0, q1, q2}
∑ = {0, 1}
q0 = q0
F = {q2}
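The example above lists Q, ∑, the start state and the final-state set but omits the transition table, so here is one illustrative choice: a three-state DFA over {0, 1} that accepts binary strings ending in "01", with q2 as the final state.

```python
# Transition function delta: Q x Sigma -> Q (exactly one next state per pair)
DELTA = {
    ("q0", "0"): "q1", ("q0", "1"): "q0",
    ("q1", "0"): "q1", ("q1", "1"): "q2",
    ("q2", "0"): "q1", ("q2", "1"): "q0",
}

def dfa_accepts(s, start="q0", finals=frozenset({"q2"})):
    """Run the DFA on string s; accept iff it halts in a final state."""
    state = start
    for ch in s:
        state = DELTA[(state, ch)]   # deterministic: a single next state
    return state in finals
```

Because every (state, symbol) pair maps to exactly one next state, the computation on any input is unique, which is the defining property of a DFA.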

2. NDFA (Non-Deterministic Finite Automata)

NDFA stands for Non-Deterministic Finite Automata. For a particular input, an NDFA can
transition to any number of states. An NDFA accepts the NULL move, which means it can
change state without reading a symbol.

An NDFA is also a five-tuple, the same as a DFA, but it has a different transition function.

The transition function of an NDFA is defined as:

δ: Q x ∑ → 2^Q

Example: Non deterministic finite automata:


Q = {q0, q1, q2}
∑ = {0, 1}
q0 = q0
F = {q2}
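Since δ maps to a *set* of states, simulating an NDFA means tracking every state the machine could currently be in. As an illustration (the transition table below is an assumption, not given in the example), here is an NDFA accepting binary strings ending in "01": on reading a 0 it "guesses" whether the final "01" has started.

```python
# Transition function delta: Q x Sigma -> 2^Q (a set of next states)
DELTA = {
    ("q0", "0"): {"q0", "q1"},   # nondeterministic choice on input 0
    ("q0", "1"): {"q0"},
    ("q1", "1"): {"q2"},         # missing pairs mean the empty set
}

def nfa_accepts(s, start="q0", finals=frozenset({"q2"})):
    """Accept iff some sequence of choices ends in a final state."""
    current = {start}            # all states the NFA could be in
    for ch in s:
        current = set().union(*(DELTA.get((q, ch), set()) for q in current))
    return bool(current & finals)
```

This subset-tracking loop is exactly the idea behind the subset construction that converts an NDFA into an equivalent DFA.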

Petri Nets Model


 Petri Nets were developed originally by Carl Adam Petri [Pet62] in 1962 in his research.
 Petri Nets and their concepts have been extended and developed, and applied in a variety
of areas like Office automation, work-flows, flexible manufacturing, programming
languages, protocols and networks, hardware structures, real-time systems, performance
evaluation, operations research, embedded systems, defense systems,
telecommunications, Internet, e-commerce and trading, railway networks, biological
systems.

The Basics:

 A Petri Net is a collection of directed arcs connecting places and transitions. Places may
hold tokens. The state or marking of a net is its assignment of tokens to places.
 Here is a simple net containing all components of a Petri Net:

 Arcs have capacity 1 by default; if other than 1, the capacity is marked on the arc. Places
have infinite capacity by default, and transitions have no capacity, and cannot store
tokens at all. With the rule that arcs can only connect places to transitions and vice versa,
we have all we need to begin using Petri Nets. A few other features and considerations
will be added as we need them.

 A transition is enabled when the number of tokens in each of its input places is at least
equal to the arc weight going from the place to the transition. An enabled transition may
fire at any time. When fired, the tokens in the input places are moved to output places,
according to arc weights and place capacities. This results in a new marking of the net, a
state description of all places.
 When arcs have different weights, we have what might at first seem confusing behavior.
Here is a similar net, ready to fire:

and here it is after firing:

 When a transition fires, it takes the tokens that enabled it from the input places; it then
distributes tokens to output places according to arc weights. If the arc weights are all the
same, it appears that tokens are moved across the transition. If they differ, however, it
appears that tokens may disappear or be created. That, in fact, is what happens; think of
the transition as removing its enabling tokens and producing output tokens according to
arc weight.
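The firing rule above can be sketched in a few lines of Python. A marking is a dict from place names to token counts; place names and weights below are illustrative.

```python
def enabled(marking, inputs):
    """A transition is enabled when every input place holds at least
    as many tokens as the weight of the arc from that place."""
    return all(marking.get(p, 0) >= w for p, w in inputs.items())

def fire(marking, inputs, outputs):
    """Fire the transition: consume enabling tokens, produce output tokens."""
    if not enabled(marking, inputs):
        raise ValueError("transition is not enabled")
    m = dict(marking)
    for p, w in inputs.items():      # remove the enabling tokens
        m[p] -= w
    for p, w in outputs.items():     # produce tokens per output arc weight
        m[p] = m.get(p, 0) + w
    return m                         # the new marking of the net
```

With an input arc of weight 2 and an output arc of weight 3, firing on the marking {"P1": 2, "P2": 0} yields {"P1": 0, "P2": 3}: as the text notes, tokens are not moved but consumed and created according to arc weights.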

 A special kind of arc, the inhibitor arc, is used to reverse the logic of an input place. With
an inhibitor arc, the absence of a token in the input place enables, not the presence:
 This transition cannot fire, because the token in P2 inhibits it.

 Here is a collection of primitive structures that occur in real systems, and thus we find in
Petri Nets.

 Sequence is obvious - several things happen in order. Conflict is not so obvious. The
token in P4 enables three transitions; but when one of them fires, the token is removed,
leaving the remaining two disabled. Unless we can control the timing of firing, we don't
know how this net is resolved. Concurrency, again, is obvious; many systems operate
with concurrent activities, and this models it well. Synchronization is also modeled well
using Petri Nets; when the processes leading into P8, P9 and P10 are finished, all three
are synchronized by starting P11.

 Confusion is another not-so-obvious construct. It is a combination of conflict and
concurrency. P12 enables both T11 and T12, but if T11 fires, T12 is no longer enabled.

 Merging is not quite the same as synchronization, since there is nothing requiring that the
three transitions fire at the same time, or that all three fire before T17; this simply merges
three parallel processes. The priority/inhibit construct uses the inhibit arc to control T19;
as long as P16 has a token, T19 cannot fire.

 Very sophisticated logic and control structures can be developed using these primitives.

ARP Protocol
 ARP stands for “Address Resolution Protocol”. It is a network protocol used to determine
the MAC address (hardware address) from any IP address.
 In other words, ARP is used to translate IP Address into MAC Address. When one device
wants to communicate with another device in a LAN (local area network) network, the
ARP protocol is used.
 This protocol is used when a device wants to communicate with another device over a
local area network or Ethernet.
 ARP operates between the data link layer and the network layer. It is a very important
protocol in the TCP/IP protocol suite. It was developed in the early 80s and defined in
RFC 826 in 1982.
 ARP is used with technologies such as IPv4 over Ethernet, X.25, Frame Relay, and ATM.
 The ARP protocol finds the MAC address based on an IP address. The IP address is used to
communicate with a device at the network layer, but to communicate with a device
at the data link layer, or to send data to it, a MAC address is required.
 When data is sent to a local host, it travels between networks via the IP address. But to
reach that host inside the LAN, the sender needs the MAC address of that host. In this
situation the Address Resolution Protocol plays an important role.

Important ARP Terms

1. ARP Cache :- After receiving a MAC address, ARP hands it to the sender, where it is
stored in a table for future reference. This table is called the ARP cache and is later
used to obtain the MAC address without sending another request.
2. ARP Cache Timeout :- The time for which a MAC address can remain in the ARP
cache before the entry expires.
3. ARP Request :- A packet broadcast over the network asking which host owns a given
IP address, so that host's MAC address can be learned.
4. ARP Response/Reply :- The reply carrying the MAC address, which the sender receives
from the receiver and which enables further communication of data.

Types of ARP
There are four types of ARP protocol they are as follows:-

1. Proxy ARP
2. Gratuitous ARP
3. Reverse ARP
4. Inverse ARP

1. Proxy ARP
 This is a technique through which proxy ARP in a network can answer ARP queries of IP
addresses that are not in that network. That is, if we understand it in simple language, the
Proxy server can also respond to queries of IP-address of other networks.
 Through this we can fool the other person because instead of the MAC address of the
destination device, the MAC address of the proxy server is used and the other person
does not even know.

2. Gratuitous ARP
 This is an arp request of a host, which we use to check duplicate ip-address. And we can
also use it to update the arp table of other devices. That is, through this we can check
whether the host is using its original IP-address, or is using a duplicate IP-address.
 This is a very important ARP. Which proves to be very helpful in protecting us from the
wrong person, and by using it we can check the ip-address.

3. Reverse ARP
 This is also a networking protocol, which we can use through client computer. That is, it
is used to obtain information about one’s own network from the computer network. That
is, if understood in simple language, it is a TCP/IP protocol which we use to obtain
information about the IP address of the computer server.
 That is, to know the IP address of our computer server, we use Reverse ARP, which
works under a networking protocol.

4. Inverse ARP (InARP)


 Inverse ARP is the opposite of ARP: we use it to learn the IP address of a device from its
MAC address. That is, it is a networking technique that translates a known layer-2
address into an IP address. It is mainly used in Frame Relay and ATM networks.

How Does the ARP Protocol Work?


Fig: Working flow diagram of the ARP protocol

The working of the Address Resolution Protocol is explained in the following steps:
 When a sender wants to communicate with a receiver, the sender first checks its ARP
cache to see whether the receiver's MAC address is already present.
 If the receiver's MAC address is already present in the ARP cache, the sender
communicates with the receiver using that MAC address.
 If the MAC address of the receiver device is not in the ARP cache, the sender device
prepares an ARP request message. This message contains the MAC address of the
sender, the IP address of the sender, and the IP address of the receiver. The field for the
MAC address of the receiver is left blank because that is the value being searched for.
 Sender device broadcasts this ARP request message in the LAN. Because this is a
broadcast message, every device connected to the LAN receives this message.
 All devices match the receiver IP address of this request message with their own IP
address. Devices whose IP address does not match drop this request message.
 The device whose IP address matches the receiver IP address of this request message
receives this message and prepares an ARP reply message. This is a unicast message which
is sent only to the sender.
 The ARP reply message is addressed using the sender's IP address and MAC address taken
from the request. In this message, the receiver supplies its own IP address and MAC address.
 As soon as the sender device receives this ARP reply message, it updates its ARP cache
with the new information (Receiver’s MAC address). Now the MAC address of the
receiver is present in the ARP cache of the sender. The sender can send and receive data
without any problem.
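The steps above can be sketched as a toy simulation: a cache lookup, a broadcast that every host inspects, and a unicast "reply" that updates the sender's cache. Class and function names, hosts and addresses are all made up for illustration.

```python
class Host:
    def __init__(self, ip, mac):
        self.ip, self.mac = ip, mac
        self.arp_cache = {}                      # IP -> MAC learned from replies

def arp_resolve(sender, lan, target_ip):
    """Resolve target_ip to a MAC address on the given LAN (list of Hosts)."""
    if target_ip in sender.arp_cache:            # step 1: cache hit, no request needed
        return sender.arp_cache[target_ip]
    for host in lan:                             # step 2: broadcast reaches every host
        if host.ip == target_ip:                 # step 3: only the target replies (unicast)
            sender.arp_cache[target_ip] = host.mac   # step 4: cache the reply
            return host.mac
    return None                                  # no host on the LAN owns that IP
```

Hosts whose IP does not match simply fall through the loop, mirroring the "drop the request" step in the description above.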

Message Format of ARP Protocol


Messages are sent to find the MAC address through ARP (Address Resolution Protocol). These
requests are broadcast to all the devices in the LAN. The format of this message is shown in
the diagram below:

Fig: ARP packet format

All the fields given in ARP message format are being explained in detail below:-
 Hardware Type: The size of this field is 2 bytes. This field defines what type of Hardware
is used to transmit the message. The most common Hardware type is Ethernet. The value of
Ethernet is 1.
 Protocol Type: This field tells which protocol has been used to transmit the message.
Usually the value of this field is 2048 (0x0800), which indicates IPv4.
 Hardware Address Length: It shows the length of the hardware address in bytes. The size
of an Ethernet MAC address is 6 bytes.
 Protocol Address Length: It shows the size of the IP address in bytes. The size of an IPv4
address is 4 bytes.
 Opcode: This field tells the type of message. If the value of this field is 1 it is a request
message, and if the value is 2 it is a reply message.
 Sender Hardware Address: This field contains the MAC address of the device sending
the message.
 Sender Protocol Address: This field contains the IP address of the device sending the
message.
 Target Hardware Address: This field is empty in the request message. In the reply it
contains the MAC address of the receiving device.
 Target Protocol Address: This field contains the IP address of the receiving device.
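The field layout above can be packed byte-for-byte with Python's `struct` module; this is a minimal sketch of an ARP request payload (the function name is illustrative, and the MAC/IP values are expected as raw bytes).

```python
import struct

def build_arp_request(sender_mac, sender_ip, target_ip):
    """Pack the 28-byte ARP request payload described above.

    htype=1 (Ethernet), ptype=0x0800 (IPv4), hlen=6, plen=4, opcode=1
    (request). The target MAC field is left as zeros because it is the
    value the request is trying to discover.
    """
    return struct.pack("!HHBBH6s4s6s4s",       # network byte order
                       1, 0x0800, 6, 4, 1,
                       sender_mac, sender_ip,
                       b"\x00" * 6, target_ip)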

Advantages of ARP Protocol


The ARP protocol has many advantages; some of the important ones are listed below.
 Using this protocol, we can easily find the MAC address of a device.
 End nodes do not need any special configuration for their MAC addresses to be resolved
through this protocol.
 Through this protocol we can easily translate an IP address into a MAC address.
 The protocol has four main variants, which can be used in different ways and prove very
helpful.

What is RARP
 The Reverse Address Resolution Protocol (RARP) is a networking protocol that is used
to map a physical (MAC) address to an Internet Protocol (IP) address. It is the reverse
of the more commonly used Address Resolution Protocol (ARP), which maps an IP
address to a MAC address.
 RARP was developed in the early days of computer networking as a way to provide IP
addresses to diskless workstations or other devices that could not store their own IP
addresses. With RARP, the device would broadcast its MAC address and request an IP
address, and a RARP server on the network would respond with the corresponding IP
address.
 While RARP was widely used in the past, it has largely been replaced by newer
protocols such as DHCP (Dynamic Host Configuration Protocol), which provides more
flexibility and functionality in assigning IP addresses dynamically. However, RARP is
still used in some specialized applications, such as booting embedded systems and
configuring network devices with pre-assigned IP addresses.
 RARP is specified in RFC 903 and operates at the data link layer of the OSI model. It
has largely been superseded by BOOTP and DHCP in modern networks, but it played an
important role in the development of computer networking protocols and continues to
be used in certain contexts.
 RARP is an abbreviation of Reverse Address Resolution Protocol, a computer networking
protocol employed by a client computer to request its IP address from a gateway server's
Address Resolution Protocol table or cache. The network administrator creates a table in
the gateway router which maps each MAC address to its corresponding IP address.
How does Reverse Address Resolution Protocol work?

The functioning of RARP can be summarized as follows:


 When a device needs an IP address, it broadcasts a RARP request packet containing its
MAC address in both the sender and receiver hardware address fields.
 A special host known as a RARP server configured on the network receives the RARP
request packet. After that, it checks its table of MAC addresses and IP addresses.
 If the RARP server finds a matching entry for the MAC address, it sends back an RARP
reply packet that includes both the MAC address and the corresponding IP address.
 The device that initiated the RARP request packet receives the RARP reply packet. After
getting the reply, it then extracts the IP address from it. This IP address is then used for
communication on the network.
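The server's side of this exchange can be sketched as a simple table lookup: the administrator-maintained table maps MAC addresses to IPs, and the server answers requests from it. The entries below are illustrative.

```python
# Static MAC -> IP table maintained by the network administrator (illustrative)
RARP_TABLE = {
    "aa:bb:cc:dd:ee:01": "192.168.1.10",
    "aa:bb:cc:dd:ee:02": "192.168.1.11",
}

def rarp_lookup(mac):
    """Answer a RARP request: return the matching IP, or None when the
    server has no entry for that MAC address (the request goes unanswered)."""
    return RARP_TABLE.get(mac)
```

This static, manually maintained table is exactly the limitation that BOOTP and DHCP later removed by assigning addresses dynamically.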

Uses of RARP :

RARP is used to convert an Ethernet (MAC) address to an IP address. It is available for LAN
technologies such as Ethernet, FDDI, and Token Ring.

Disadvantages of RARP:

 The Reverse Address Resolution Protocol has a few disadvantages which eventually led
to its replacement by BOOTP and DHCP.
 Some of the disadvantages are listed below:

 The RARP server must be located within the same physical network.
 The computer sends the RARP request at the lowest layer of the network, so a router
cannot forward the packet beyond the local network.
 RARP cannot handle subnetting because no subnet masks are sent. If the network is
split into multiple subnets, a RARP server must be available in each of them.
 It is not possible to configure a PC's full network settings with RARP alone in a modern
network.
 It does not fully utilize the potential of a network technology like Ethernet.

RARP is now an obsolete protocol because it operates at a low level. Due to this, it
requires direct access to the network, which makes it difficult to build a server.
Issues in RARP :
The Reverse Address Resolution Protocol (RARP) has several issues that limit its usefulness in
modern networks:
1. Limited scalability: RARP does not scale well in large networks as it relies on broadcasting
to communicate with clients. This can result in network congestion and performance issues,
especially in networks with a high number of clients.
2. Security concerns: RARP does not provide any security mechanisms, such as
authentication or encryption, which makes it vulnerable to attacks such as spoofing and
denial of service.
3. Lack of flexibility: RARP provides only basic functionality and cannot be used to provide
advanced network services such as assigning DNS server information or network gateway
addresses.
4. Limited support: RARP is not supported by many modern operating systems and network
devices, which makes it difficult to implement and maintain in modern networks.
5. Compatibility issues: RARP may not be compatible with newer networking protocols, such
as IPv6, which can limit its usefulness in networks that require advanced features and
functionality.

RARP Packet Format

The RARP packet format is the same as the ARP packet format, except that the operation field
is either 3 (RARP request) or 4 (RARP reply).

Fig: RARP packet format

The fields are further explained below:


 Hardware Address Type: It is a 2-byte field giving the type of hardware (MAC) address
present in the packet. For Ethernet, the value of this field is 1.
 Protocol Address Type (PTYPE): It is a 2-byte field giving the type of protocol address
requested for the MAC address. For an IPv4 address, the value of this field is 0x0800 (2048).
 Hardware length(HLEN): It is 1–byte field. It indicates the size of the hardware MAC
address. For Ethernet, the value of this field is 6.
 Protocol length(PLEN): It is 1 byte field. It indicates the size of the protocol address. For
IP, the value of this field is 4.
 Operation : It is a 2-byte field. It indicates the type of operation being performed. The
value of this field can be 3 (RARP request) or 4 (RARP reply).
 Sender Hardware Address: It is 6-byte field. In a RARP request packet, this is the
hardware MAC address of the source host. In a RARP reply packet, this is the hardware
MAC address of the RARP server sending the RARP reply.
 Sender Protocol Address : It is 4 byte field. In a RARP request packet, this is undefined.
In a RARP reply packet, this is the IP address of the RARP server sending the RARP reply.
 Target Hardware Address : It is 6–byte field. In a RARP request packet, this is the
hardware MAC address of the source host. In a RARP reply packet, this is the hardware
MAC address of the host, that sent the RARP request packet.
 Target Protocol Address : It is 4 byte field. In a RARP request packet, this is undefined.
In a RARP reply packet, this is the IP address of the host that sent the RARP request
packet.
How is RARP different from ARP ?

 RARP stands for Reverse Address Resolution Protocol; ARP stands for Address
Resolution Protocol.
 RARP is used to map a physical (MAC) address to an IP address; ARP is used to map an
IP address to a physical (MAC) address.
 RARP obtains the IP address of a network device when only its MAC address is known;
ARP obtains the MAC address of a network device when only its IP address is known.
 In RARP, the client broadcasts its MAC address and requests an IP address, and the
server responds with the corresponding IP address; in ARP, the client broadcasts a
request carrying an IP address, and the owner of that address responds with its MAC
address.
 RARP is rarely used in modern networks, as most devices obtain a pre-assigned IP
address; ARP is widely used in modern networks to resolve IP addresses to MAC
addresses.
 RARP is standardized in RFC 903; ARP is standardized in RFC 826.
 In RARP, the MAC address is known and the IP address is requested; in ARP, the IP
address is known and the MAC address is requested.
 RARP uses the value 3 for requests and 4 for responses; ARP uses the value 1 for
requests and 2 for responses.

What is Gratuitous ARP(GARP)

GARP (Gratuitous ARP) is an ARP message sent without a request. It is mainly used to notify
other hosts on the network of a MAC address assignment change. When a host receives a
GARP, it either adds a new entry to its ARP cache table or modifies an existing one.

GARP messages:

GARP Request: A regular ARP request that contains the source IP address as both sender and
target address, the source MAC address as sender, and the broadcast MAC address
(ff:ff:ff:ff:ff:ff) as target. There will be no reply to this request.

GARP Reply: The source/destination IP addresses and MAC addresses are all set to the
sender's addresses. This message is sent without any preceding request.

GARP Probe: When an interface goes up with a configured IP address, it sends a probe to make
sure no other host is using the same IP; hence, preventing IP conflicts. A probe has the sender IP
set to zeros (0.0.0.0), the target IP is the IP being probed, the sender MAC is the source MAC,
and the target MAC address is set to all zeros.
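The probe's field values can be illustrated by packing the 28-byte ARP payload directly; this is a hedged sketch (the function name is illustrative, and addresses are expected as raw bytes), following the description above: sender IP all zeros, target MAC all zeros, target IP set to the address being probed.

```python
import struct

def build_garp_probe(sender_mac, probed_ip):
    """Pack a GARP probe: opcode 1 (request), sender IP 0.0.0.0,
    target MAC all zeros, target IP = the address being probed."""
    return struct.pack("!HHBBH6s4s6s4s",
                       1, 0x0800, 6, 4, 1,     # htype, ptype, hlen, plen, opcode
                       sender_mac, b"\x00" * 4,  # sender MAC, sender IP zeroed
                       b"\x00" * 6, probed_ip)   # target MAC zeroed, probed IP
```

If any host replies to this probe, the probed IP address is already in use, so the interface can detect the conflict before claiming the address.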
GARP use cases:
 Making sure there are no IP conflicts on the segment (GARP Probe).
 In failover systems such as clusters, HSRP, and VRRP, a virtual MAC address points to
the active device; when that device fails, the failover control sends a gratuitous ARP
asking other systems to update their ARP table entries and point the cluster's vMAC to
the new active device (GARP Request/Reply).
 When an Ethernet interface comes up, the host sends a GARP to notify all systems of its
IP address, instead of waiting for them to ask and replying to each one (GARP
Request/Reply).
 Extensively used with data center technologies such as vMotion and load balancers:
whenever a machine is moved or a virtual interface is created on a load balancer, GARP
notifies other devices of the change.
