Unit 2 CN
The data link layer has two sublayers:
Logical Link Control (LLC): deals with protocols, flow control, and error control.
Media Access Control (MAC): deals with actual control of the transmission medium.
1. Framing:
The data link layer takes packets from the network layer and encapsulates them into frames. It then sends each frame bit-by-bit over the hardware. At the receiver's end, the data link layer picks up signals from the hardware and assembles them into frames.
2. Addressing
The data link layer provides a layer-2 hardware addressing mechanism. The hardware address is assumed to be unique on the link and is encoded into the hardware at the time of manufacturing.
3. Synchronization
When data frames are sent on the link, both machines must be synchronized for the transfer to take place.
4. Error Control
Signals may encounter problems in transit and bits may get flipped. Such errors are detected, and an attempt is made to recover the actual data bits. The data link layer also provides an error-reporting mechanism to the sender.
5. Flow Control
Stations on the same link may have different speeds or capacities. The data link layer provides flow control, which enables both machines to exchange data at the same rate.
6. Multi-Access
When a host on a shared link tries to transfer data, there is a high probability of collision. The data link layer provides mechanisms such as CSMA/CD to give multiple systems the capability of accessing the shared medium.
Parts of a Frame
Header: It consists of the frame's source and destination addresses. A frame header holds the destination address, the source address, and three control fields (kind, seq, and ack) serving the following purposes:
kind: This field indicates whether the frame is a data frame or is used for control functions such as error and flow control or link management.
seq: This holds the sequence number of the frame, used for reordering out-of-sequence frames and for generating acknowledgments at the receiver.
ack: This contains the acknowledgment number of a frame, particularly when piggybacking is used.
Payload: It contains the actual message or data that the sender wants to transmit to the destination machine.
Trailer: It contains the error detection and correction bits.
Flag: It marks the starting and the ending of the frame.
Problems in Framing
Locating the beginning of the frame: When a frame is transmitted, every station should be able to detect it. A station detects a frame by searching for a special sequence of bits that marks the start of the frame, the SFD (Starting Frame Delimiter).
How will the station detect a frame: Each station listens on the link for the SFD pattern through a sequential circuit. If the SFD is detected, the sequential circuit alerts the station. The station then checks the destination address to accept or reject the frame.
Locating the end of the frame: When should the receiver stop reading the frame? For example, the sending device transmits a hundred bits of data, of which fifty bits belong to the first frame. The receiving device receives all the bits; but how can the receiver know that only the first fifty bits belong to the frame? Solution: some protocol is defined between the devices before communication and data exchange begin. Suppose both devices agree that each frame begins and ends with 11011 (in addition to the header and trailer). The frame received by the receiving device would then look like this: 1101101101101011011.
If you look carefully, there is another problem: the received frame contains a region that resembles the start and end pattern of the frame, 1101101101101011011. Hence, the receiver would consider only these bits, 1101101011011, i.e., everything after the resembling part would not be treated as part of the frame. This is called a framing error. To understand the solutions to this problem, we first study the types of framing.
There are two types of framing in the data link layer: the frame can be of fixed or variable size. Based on the size, the following are the types of framing used in computer networks.
There are two different methods to define the frame boundaries: a length field and an end delimiter.
Length field: A length field is used to determine the length of the frame. It is used in Ethernet (IEEE 802.3).
End Delimiter: A special pattern used as a delimiter marks the end of the frame. This method is used in Token Ring; in short, the delimiter is referred to as ED. Two different methods, the character-oriented approach and the bit-oriented approach, are used to avoid trouble when the delimiter pattern happens to occur within the message.
1. Bit-Oriented Framing
Most protocols use a special 8-bit flag pattern, 01111110, as the delimiter to mark the beginning and the end of the frame. Bit stuffing is done at the sender's end and bit removal at the receiver's end.
Whenever five consecutive 1s appear in the data, a 0 is stuffed after them, even if the next data bit is already a 0. The receiver removes the stuffed 0. Bit-oriented framing is therefore also referred to as bit stuffing.
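The stuffing rule above can be sketched in a few lines of Python (a toy illustration; the function names are mine, only the 01111110 flag comes from the text):

```python
FLAG = "01111110"

def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s in the payload."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")      # stuffed bit, removed again by the receiver
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1               # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

payload = "011111101111100"
frame = FLAG + bit_stuff(payload) + FLAG   # stuffed payload can never contain FLAG
assert bit_unstuff(bit_stuff(payload)) == payload
```

Because a stuffed payload can never contain six consecutive 1s, the flag pattern is guaranteed to appear only at the frame boundaries.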
2. Byte-Oriented Framing
Byte stuffing is a method of adding an extra byte whenever a flag or escape character occurs within the text. Consider the illustration of byte stuffing shown in the given diagram.
The sender sends the frame after adding the extra ESC bytes, and the destination machine receives the frame and removes the extra bytes to recover the original message.
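A minimal Python sketch of byte stuffing (the flag and escape byte values 0x7E and 0x7D are common conventions, e.g. in HDLC-style framing, not taken from the text; function names are illustrative):

```python
FLAG, ESC = b"\x7e", b"\x7d"

def byte_stuff(payload: bytes) -> bytes:
    """Prefix every flag or escape byte in the data with an ESC byte."""
    out = bytearray()
    for b in payload:
        if bytes([b]) in (FLAG, ESC):
            out += ESC           # escape accidental flag/ESC occurrences
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Drop each ESC byte and keep the byte that follows it literally."""
    out = bytearray()
    it = iter(stuffed)
    for b in it:
        if bytes([b]) == ESC:
            b = next(it)         # the byte after ESC is literal data
        out.append(b)
    return bytes(out)

data = b"\x01\x7e\x7d\x02"
assert byte_unstuff(byte_stuff(data)) == data
```

After stuffing, the raw flag byte can only occur at frame boundaries, so the receiver can locate the frame unambiguously.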
3. Clock Based Framing
This type of framing is used mainly for optical networks such as SONET. In this approach, a series of repetitive pulses maintains a constant bit rate and keeps the digital bits aligned within the data stream.
Methods of Framing
1. Character Count
This method is rarely used; it relies on counting the total number of characters present in the frame, which is done through a field in the header. The character count method tells the receiver how many characters the frame contains and therefore where the frame ends. The method also has a disadvantage: if the character count is corrupted or altered by an error occurring during transmission, the receiver may lose synchronization and be unable to locate the start of the next frame.
2. Flag Byte with Character Stuffing
The flag byte with character stuffing method is also called byte stuffing or character-oriented framing. It is similar to bit stuffing, except that byte stuffing works on bytes whereas bit stuffing works on bits. In byte stuffing, a particular byte, usually called ESC (escape character), with a fixed pattern is added to the data section of the frame whenever the data contains a pattern matching the flag byte.
The receiver removes the ESC bytes from the data before passing it up; mishandling them can cause difficulties.
3. Starting and Ending Flags with Bit Stuffing
Starting and ending flags with bit stuffing is also called bit-oriented framing or the bit-oriented approach. In bit stuffing, extra bits are added to the data stream by the protocol. These additional bits are inserted into the transmission unit as a simple way to convey control information to the recipient while preventing data patterns from being mistaken for control sequences. In short, this is a form of protocol control performed to break up bit patterns that would otherwise throw the communication out of synchronization.
4. Encoding Violations
Encoding violation is a framing technique usable only on networks whose physical-layer encoding contains redundancy, i.e., where more than one signal pattern can represent a single bit of information. A signal pattern that is invalid for ordinary data can then be used to mark frame boundaries.
Flow Control
When a data frame (Layer-2 data) is sent from one host to another over a single medium, it is
required that the sender and receiver work at the same speed. That is, the sender sends at a speed at which the receiver can process and accept the data. What if the speeds (hardware/software) of the sender and receiver differ? If the sender sends too fast, the receiver may be overloaded (swamped) and data may be lost.
1. Stop and Wait Protocol
The stop-and-wait protocol is a flow control protocol; flow control is one of the services of the data link layer.
The above figure shows the working of the stop and wait protocol.
If there is a sender and a receiver, the sender sends a packet, known as a data packet. The sender will not send the second packet without receiving the acknowledgment of the first packet.
The receiver sends an acknowledgment for the data packet it has received. Once the acknowledgment is received, the sender sends the next packet. This process continues until all the packets have been sent.
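The send-wait-resend loop can be modeled with a toy simulation (an illustrative sketch only: the single `loss_rate` parameter stands in for both data loss and ACK loss, and every timeout is assumed to trigger an immediate retransmission):

```python
import random

def stop_and_wait(packets, loss_rate=0.3, seed=1):
    """Toy stop-and-wait sender: resend each packet until its ACK arrives."""
    rng = random.Random(seed)
    delivered, attempts = [], 0
    for pkt in packets:
        while True:
            attempts += 1
            if rng.random() < loss_rate:   # data or ACK lost -> timeout
                continue                    # retransmit the same packet
            delivered.append(pkt)           # ACK received
            break                           # move on to the next packet
    return delivered, attempts

delivered, attempts = stop_and_wait([b"p1", b"p2", b"p3"], loss_rate=0.5, seed=42)
assert delivered == [b"p1", b"p2", b"p3"]
```

The simulation makes the protocol's cost visible: with losses, `attempts` exceeds the number of packets, and at most one packet is ever outstanding at a time.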
The main advantage of this protocol is its simplicity, but it also has some disadvantages. For example, if there are 1000 data packets to be sent, they cannot all be sent at once, because in the Stop and Wait protocol only one packet is sent at a time.
Disadvantages of Stop and Wait protocol
The following are the problems associated with a stop and wait protocol:
Suppose the sender sends the data and the data is lost. The receiver waits for the data for a long time. Since the receiver does not receive the data, it does not send any acknowledgment. Since the sender does not receive any acknowledgment, it will not send the next packet. This problem occurs due to lost data.
Suppose the sender sends the data and it is received by the receiver. The receiver then sends the acknowledgment, but the acknowledgment arrives after the timeout period on the sender's side. As the acknowledgment arrives late, it can be wrongly considered the acknowledgment of some other data packet.
2.Sliding Window Protocol
The sliding window is a technique for sending multiple frames at a time. It controls the data
packets between the two devices where reliable and gradual delivery of data frames is needed. It
is also used in TCP (Transmission Control Protocol).
In this technique, each frame is sent with a sequence number. The sequence numbers are used to find missing data at the receiver's end. The sliding window technique also uses the sequence numbers to avoid duplicate data.
Types of Sliding Window Protocol
1. Go-Back-N ARQ
2. Selective Repeat ARQ
1. Go-Back-N ARQ
Go-Back-N ARQ is a sliding window protocol in which the sender can transmit several frames before receiving an acknowledgment, but on detecting an error the receiver discards that frame and all subsequent frames, and the sender retransmits from the damaged frame onward. The Go-Back-N ARQ protocol works well if there are few errors, but if many frames are in error, a lot of bandwidth is lost in sending the frames again. In that case we use the Selective Repeat ARQ protocol.
2. Selective Repeat ARQ
Selective Repeat ARQ, also known as Selective Repeat Automatic Repeat Request, is a data link layer protocol that uses the sliding window method and retransmits only the damaged or lost frames. In this protocol, the size of the sender window is always equal to the size of the receiver window, and the size of the sliding window is always greater than 1.
If the receiver receives a corrupt frame, it does not simply discard it. It sends a negative acknowledgment to the sender, and the sender resends that frame as soon as the negative acknowledgment is received. There is no waiting for a time-out to send that frame.
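The Go-Back-N behavior described above can be sketched with a much-simplified simulation (illustrative only: it assumes the sender learns of a loss immediately, that each retransmission succeeds, and it models cumulative acknowledgments; all names are mine):

```python
def go_back_n(num_frames, window, lost_once):
    """Toy Go-Back-N sender: after a lost frame, transmission resumes
    from that frame, re-sending it and everything after it."""
    lost = set(lost_once)        # frames lost on their first transmission
    base, log = 0, []
    while base < num_frames:
        acked = base
        for seq in range(base, min(base + window, num_frames)):
            log.append(seq)                  # transmit frame seq
            if seq in lost:
                lost.discard(seq)            # its retransmission will succeed
                break                        # receiver discards later frames
            acked = seq + 1                  # cumulative ACK advances
        base = acked                         # go back to first unacked frame
    return log

# frame 1 is lost once: 0 is ACKed, then the sender goes back to 1
assert go_back_n(4, 3, {1}) == [0, 1, 1, 2, 3]
```

Selective Repeat would instead resend only frame 1, which is exactly the bandwidth saving the text describes.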
HYBRID ARQ
HARQ is the use of conventional ARQ along with an error-correction technique called "soft combining", which no longer discards received bad data (data with errors).
With soft combining, data packets that are not properly decoded are no longer discarded. The received signal is stored in a buffer and is combined with the next retransmission.
That is, two or more packets, each received with insufficient SNR to allow individual decoding, can be combined in such a way that the total signal can be decoded.
The following image explains this procedure. The transmitter sends packet [1]. Packet [1] arrives and is OK, so the receiver sends an ACK.
The transmission continues, and packet [2] is sent. Suppose packet [2] arrives with errors. The receiver then sends a NACK.
Unlike in conventional ARQ, this bad packet [2] is not thrown away. It is stored in a buffer.
Continuing, the transmitter sends a retransmission [2.1] which, let us assume, also arrives with errors.
We then have in the buffer the bad packet [2] and the retransmission [2.1], which is also bad.
By adding (combining) these two packets ([2] + [2.1]), do we have the complete information? If yes, an ACK is sent.
But if the combination of these two packets still does not give us the complete information, the process continues and another NACK is sent.
Then there is another retransmission: the transmitter sends a third copy, [2.2]. Suppose it is now OK; the receiver sends an ACK.
Note that along with the received packet [2.2], the receiver also has packets [2] and [2.1], which were not dropped but stored in the buffer.
In our example, the packet arrived twice with errors. What is the limit on these retransmissions? Up to 4; i.e., we can have up to 4 retransmissions in each process. This is the maximum number supported by the buffer.
Types of Errors
1. Single-Bit Error
2. Multiple-Bit Error
3. Burst Error
1. Single-Bit Error
A single-bit error refers to a type of data transmission error that occurs when one bit (i.e., a single binary digit) of a transmitted data unit is altered during transmission, resulting in a corrupted data unit.
2.Multiple-Bit Error
A multiple-bit error is an error type that arises when more than one bit in a data
transmission is affected. Although multiple-bit errors are relatively rare when compared
to single-bit errors, they can still occur, particularly in high-noise or high-interference
digital environments.
3.Burst Error
When several consecutive bits are flipped mistakenly in digital transmission, it creates a
burst error. This error causes a sequence of consecutive incorrect values.
To detect errors, a common technique is to introduce redundancy bits that provide
additional information.
Various techniques for error detection include:
1. Simple Parity Check
2. Two-dimensional Parity Check
3. Checksum
4. Cyclic Redundancy Check (CRC)
Simple parity check is a simple error detection method that involves adding an extra bit to a data transmission. It works as follows:
a 1 is added to the block if it contains an odd number of 1's, and
a 0 is added if it contains an even number of 1's.
This scheme makes the total number of 1's even, which is why it is called even parity checking.
Disadvantages
A single parity check is not able to detect an even number of bit errors.
For example, suppose the data to be transmitted is 101010. The codeword transmitted to the receiver is 1010101 (using even parity).
Assume that during transmission, two bits of the codeword are flipped, giving 1111101.
On receiving the codeword, the receiver finds the number of ones to be even and hence assumes there is no error, which is a wrong conclusion.
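The 101010 example above can be checked directly in Python (function names are illustrative; the data values come from the text):

```python
def even_parity_encode(data: str) -> str:
    """Append one parity bit so the total number of 1's is even."""
    return data + ("1" if data.count("1") % 2 else "0")

def parity_ok(codeword: str) -> bool:
    """The receiver accepts the codeword if its count of 1's is even."""
    return codeword.count("1") % 2 == 0

cw = even_parity_encode("101010")   # "1010101", as in the example
assert parity_ok(cw)
assert not parity_ok("0010101")     # a single flipped bit is detected
assert parity_ok("1111101")         # two flipped bits slip through undetected
```

The last assertion reproduces the weakness described above: any even number of flips leaves the parity unchanged.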
3.Checksum
Checksum error detection is a method used to identify errors in transmitted data. The
process involves dividing the data into equally sized segments and using a 1’s
complement to calculate the sum of these segments. The calculated sum is then sent
along with the data to the receiver.
At the receiver's end, the same process is repeated, and if the complemented sum comes out as all zeros, it means that the data is correct.
Checksum – Operation at Sender’s Side
Firstly, the data is divided into k segments each of m bits.
On the sender’s end, the segments are added using 1’s complement arithmetic to get the
sum. The sum is complemented to get the checksum.
The checksum segment is sent along with the data segments.
Checksum – Operation at Receiver’s Side
At the receiver’s end, all received segments are added using 1’s complement arithmetic to
get the sum. The sum is complemented.
If the result is zero, the received data is accepted; otherwise discarded.
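The sender and receiver steps above can be sketched in Python (an illustrative sketch with 8-bit segments assumed; function names are mine):

```python
def ones_complement_sum(words, bits=8):
    """Add segments with 1's complement arithmetic: wrap carries around."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        while total >> bits:                       # fold end-around carry
            total = (total & mask) + (total >> bits)
    return total

def checksum(words, bits=8):
    """Complement of the 1's complement sum of the segments."""
    return ones_complement_sum(words, bits) ^ ((1 << bits) - 1)

segments = [0b10011001, 0b11100010]
cs = checksum(segments)                # sender appends this to the data
# receiver: sum of segments plus checksum, complemented, must be zero
assert checksum(segments + [cs]) == 0
```

Adding the checksum segment back in makes the 1's complement sum all ones, so its complement at the receiver is zero exactly when no detectable error occurred.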
Disadvantages
If one or more bits of a segment are damaged and the corresponding bit or bits of opposite value in a second segment are also damaged, the sums cancel out and the error goes undetected.
4. Cyclic Redundancy Check (CRC)
Sender side (generation of encoded data from the data and the generator polynomial):
1. The binary data is first augmented by appending n-1 zeros at the end, where n is the number of bits in the key (generator).
2. Use modulo-2 binary division to divide the augmented data by the key, and store the remainder of the division.
3. Append the remainder to the end of the data to form the encoded data, and send it.
Receiver side (checking for errors in the received data):
1. Perform the modulo-2 division again; if the remainder is 0, there are no errors.
The process of modulo-2 binary division is the same as the familiar division process we use for
decimal numbers. Just that instead of subtraction, we use XOR here.
1. In each step, a copy of the divisor (the key) is XORed with the leading n bits of the dividend (the data).
2. The result of the XOR operation (the remainder) is n-1 bits, which is used for the next step after one extra bit is pulled down to make it n bits long.
3. When there are no bits left to pull down, we have the result: the (n-1)-bit remainder, which is appended to the data at the sender's side.
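The modulo-2 division above can be written out directly (a sketch with illustrative names; the sample data 100100 and key 1101 are a common textbook example, not taken from this text):

```python
def crc_remainder(data: str, key: str) -> str:
    """Append n-1 zeros, then do modulo-2 (XOR) long division by the key."""
    n = len(key)
    padded = list(data + "0" * (n - 1))
    for i in range(len(data)):
        if padded[i] == "1":                 # XOR the key in when leading bit is 1
            for j in range(n):
                padded[i + j] = str(int(padded[i + j]) ^ int(key[j]))
    return "".join(padded[-(n - 1):])

def crc_check(codeword: str, key: str) -> bool:
    """Receiver: divide the received codeword itself; remainder 0 means OK."""
    n = len(key)
    bits = list(codeword)
    for i in range(len(codeword) - n + 1):
        if bits[i] == "1":
            for j in range(n):
                bits[i + j] = str(int(bits[i + j]) ^ int(key[j]))
    return "1" not in bits[-(n - 1):]

codeword = "100100" + crc_remainder("100100", "1101")   # "100100001"
assert crc_check(codeword, "1101")
assert not crc_check("100101001", "1101")   # a single flipped bit is caught
```

Because the codeword is constructed to be an exact multiple of the key in modulo-2 arithmetic, any single-bit error changes the remainder and is detected.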
Hamming code is a set of error-correction codes that can be used to detect and correct the
errors that can occur when the data is moved or stored from the sender to the receiver. It is a
technique developed by R.W. Hamming for error correction.
What are Redundant Bits?
Redundant bits are extra binary bits that are generated and added to the information-carrying
bits of data transfer to ensure that no bits were lost during the data transfer. The number of
redundant bits can be calculated using the following formula:
2^r ≥ m + r + 1
Suppose the number of data bits is 7; then the number of redundant bits can be calculated from 2^4 ≥ 7 + 4 + 1. Thus, the number of redundant bits = 4.
A parity bit is a bit appended to a data of binary bits to ensure that the total number of 1’s in
the data is even or odd. Parity bits are used for error detection. There are two types of parity
bits:
1. Even parity bit: In the case of even parity, for a given set of bits, the number of 1’s are
counted. If that count is odd, the parity bit value is set to 1, making the total count of
occurrences of 1’s an even number. If the total number of 1’s in a given set of bits is
already even, the parity bit’s value is 0.
2. Odd Parity bit: In the case of odd parity, for a given set of bits, the number of 1’s are
counted. If that count is even, the parity bit value is set to 1, making the total count of
occurrences of 1’s an odd number. If the total number of 1’s in a given set of bits is already
odd, the parity bit’s value is 0.
Hamming Code is simply the use of extra parity bits to allow the identification of an error.
Write the bit positions starting from 1 in binary form (1, 10, 11, 100, etc).
All the bit positions that are a power of 2 are marked as parity bits (1, 2, 4, 8, etc).
All the other bit positions are marked as data bits.
Each data bit is included in a unique set of parity bits, as determined by its bit position in binary form.
a. Parity bit 1 covers all the bits positions whose binary representation includes a 1 in
the least significant position (1, 3, 5, 7, 9, 11, etc).
b. Parity bit 2 covers all the bits positions whose binary representation includes a 1 in
the second position from the least significant bit (2, 3, 6, 7, 10, 11, etc).
c. Parity bit 4 covers all the bits positions whose binary representation includes a 1 in
the third position from the least significant bit (4–7, 12–15, 20–23, etc).
d. Parity bit 8 covers all the bits positions whose binary representation includes a 1 in
the fourth position from the least significant bit bits (8–15, 24–31, 40–47, etc).
e. In general, each parity bit covers all bits where the bitwise AND of the parity
position and the bit position is non-zero.
Since we check for even parity, set a parity bit to 1 if the total number of ones in the positions it checks is odd.
Set a parity bit to 0 if the total number of ones in the positions it checks is even.
The redundant bits are placed at positions that correspond to powers of 2. As in the above example:
The number of data bits = 7
The number of redundant bits = 4
The total number of bits = 7 + 4 = 11
The redundant bits are placed at positions corresponding to powers of 2: 1, 2, 4, and 8.
Suppose the data to be transmitted is 1011001 from sender to receiver, the bits will be
placed as follows:
The R1 bit is calculated using a parity check at all the bit positions whose binary representation includes a 1 in the least significant position. R1: bits 1, 3, 5, 7, 9, 11.
To find the redundant bit R1, we check for even parity. Since the total number of 1's in all the bit positions corresponding to R1 is an even number, the value of R1 (the parity bit's value) = 0.
The R2 bit is calculated using a parity check at all the bit positions whose binary representation includes a 1 in the second position from the least significant bit. R2: bits 2, 3, 6, 7, 10, 11.
To find the redundant bit R2, we check for even parity. Since the total number of 1's in all the bit positions corresponding to R2 is odd, the value of R2 (the parity bit's value) = 1.
The R4 bit is calculated using a parity check at all the bit positions whose binary representation includes a 1 in the third position from the least significant bit. R4: bits 4, 5, 6, 7.
To find the redundant bit R4, we check for even parity. Since the total number of 1's in all the bit positions corresponding to R4 is odd, the value of R4 (the parity bit's value) = 1.
The R8 bit is calculated using a parity check at all the bit positions whose binary representation includes a 1 in the fourth position from the least significant bit. R8: bits 8, 9, 10, 11.
To find the redundant bit R8, we check for even parity. Since the total number of 1's in all the bit positions corresponding to R8 is an even number, the value of R8 (the parity bit's value) = 0. Thus, the data transferred (positions 11 down to 1) is 10101001110.
Error Detection and Correction
Suppose in the above example the 6th bit is changed from 0 to 1 during data transmission; recomputing the parity checks then gives new values:
For all the parity bits we will check the number of 1’s in their respective bit positions.
For R1: bits 1, 3, 5, 7, 9, 11. The number of 1's in these bit positions is 4, which is even, so we get a 0 for this check.
For R2: bits 2, 3, 6, 7, 10, 11. The number of 1's in these bit positions is 5, which is odd, so we get a 1.
For R4: bits 4, 5, 6, 7. The number of 1's in these bit positions is 3, which is odd, so we get a 1.
For R8: bits 8, 9, 10, 11. The number of 1's in these bit positions is 2, which is even, so we get a 0.
These bits give the binary number 0110 (R8 R4 R2 R1), whose decimal representation is 6. Thus, bit 6 contains an error. To correct the error, the 6th bit is changed from 1 back to 0.
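The whole worked example, encoding the data 1011001, flipping bit 6, and recovering the error position, can be reproduced in Python (a sketch; function names are mine, the bit placement follows the example above with the MSB of the data at position 11):

```python
def hamming_encode(data: str):
    """Place 7 data bits at positions 3,5,6,7,9,10,11 (MSB at 11) and
    fill even-parity bits at the power-of-2 positions 1,2,4,8."""
    code = [0] * 12                       # 1-indexed; index 0 unused
    data_pos = [11, 10, 9, 7, 6, 5, 3]    # non-power-of-2 positions, MSB first
    for pos, bit in zip(data_pos, data):
        code[pos] = int(bit)
    for p in (1, 2, 4, 8):
        covered = [i for i in range(1, 12) if i & p and i != p]
        code[p] = sum(code[i] for i in covered) % 2   # even parity
    return code

def hamming_syndrome(code) -> int:
    """0 means no error; otherwise the 1-indexed position of the flipped bit."""
    s = 0
    for p in (1, 2, 4, 8):
        if sum(code[i] for i in range(1, 12) if i & p) % 2:
            s += p
    return s

code = hamming_encode("1011001")
assert (code[1], code[2], code[4], code[8]) == (0, 1, 1, 0)  # R1,R2,R4,R8
assert hamming_syndrome(code) == 0
code[6] ^= 1                              # flip bit 6 in transit
assert hamming_syndrome(code) == 6        # syndrome points at the bad bit
```

The syndrome is simply the failed parity checks read as the binary number R8 R4 R2 R1, which is why it equals the position of the corrupted bit.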
Let’s see what a Finite State Machine could be in the real world:
1.Coin-operated turnstile:
2.Traffic Light
Transitions: After a given time, Red will change to Green, Green to Yellow, and Yellow to Red
Mathematically, a finite state machine is described by a set of states Q, an input alphabet ∑, a start state, a set of final states F, and a transition function δ:
F: set of final states
δ: transition function
δ: Q × ∑ → Q
DFA stands for Deterministic Finite Automaton. "Deterministic" refers to the uniqueness of the computation: for each input character, the DFA goes to exactly one state. A DFA does not accept the null move, which means it cannot change state without an input character.
NDFA refers to the Nondeterministic Finite Automaton. For a particular input it may transition to any number of states. An NDFA accepts the null move, which means it can change state without reading a symbol.
An NDFA also has five components, the same as a DFA, but it has a different transition function:
δ: Q × ∑ → 2^Q
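A DFA's transition function δ: Q × ∑ → Q is just a lookup table, which makes it easy to simulate (a hypothetical example of my own: a two-state DFA accepting binary strings with an even number of 1s; all names are illustrative):

```python
def run_dfa(transitions, start, accepting, inp):
    """Simulate a DFA: transitions maps (state, symbol) -> next state."""
    state = start
    for sym in inp:
        state = transitions[(state, sym)]   # exactly one successor per input
    return state in accepting

# states track the parity of 1s seen so far
T = {("even", "0"): "even", ("even", "1"): "odd",
     ("odd",  "0"): "odd",  ("odd",  "1"): "even"}

assert run_dfa(T, "even", {"even"}, "1001")      # two 1s: accepted
assert not run_dfa(T, "even", {"even"}, "111")   # three 1s: rejected
```

An NDFA simulation would instead track a *set* of current states, mirroring the 2^Q codomain of its transition function.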
The Basics:
A Petri Net is a collection of directed arcs connecting places and transitions. Places may
hold tokens. The state or marking of a net is its assignment of tokens to places.
Here is a simple net containing all components of a Petri Net:
Arcs have capacity 1 by default; if other than 1, the capacity is marked on the arc. Places
have infinite capacity by default, and transitions have no capacity, and cannot store
tokens at all. With the rule that arcs can only connect places to transitions and vice versa,
we have all we need to begin using Petri Nets. A few other features and considerations
will be added as we need them.
A transition is enabled when the number of tokens in each of its input places is at least
equal to the arc weight going from the place to the transition. An enabled transition may
fire at any time. When fired, the tokens in the input places are moved to output places,
according to arc weights and place capacities. This results in a new marking of the net, a
state description of all places.
When arcs have different weights, we have what might at first seem confusing behavior.
Here is a similar net, ready to fire:
When a transition fires, it takes the tokens that enabled it from the input places; it then
distributes tokens to output places according to arc weights. If the arc weights are all the
same, it appears that tokens are moved across the transition. If they differ, however, it
appears that tokens may disappear or be created. That, in fact, is what happens; think of
the transition as removing its enabling tokens and producing output tokens according to
arc weight.
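The enabling and firing rules above can be captured with dictionaries mapping places to token counts and arcs to weights (a minimal sketch; place names, weights, and function names are illustrative assumptions):

```python
def enabled(marking, inputs):
    """A transition is enabled when every input place holds at least
    as many tokens as the arc weight from that place."""
    return all(marking.get(p, 0) >= w for p, w in inputs.items())

def fire(marking, inputs, outputs):
    """Remove the enabling tokens, then produce output tokens per arc weight.
    Tokens are not conserved when input and output weights differ."""
    if not enabled(marking, inputs):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, w in inputs.items():
        m[p] -= w
    for p, w in outputs.items():
        m[p] = m.get(p, 0) + w
    return m                     # the new marking of the net

# hypothetical net: T needs 2 tokens from P1 and 1 from P2, puts 1 in P3
m = fire({"P1": 2, "P2": 1}, inputs={"P1": 2, "P2": 1}, outputs={"P3": 1})
assert m == {"P1": 0, "P2": 0, "P3": 1}
```

Note that three tokens were consumed and only one produced, which is exactly the "tokens may disappear or be created" behavior described above.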
A special kind of arc, the inhibitor arc, is used to reverse the logic of an input place. With an inhibitor arc, the transition is enabled by the absence of a token in the input place, not by its presence:
This transition cannot fire, because the token in P2 inhibits it.
Here is a collection of primitive structures that occur in real systems, and thus we find in
Petri Nets.
Sequence is obvious - several things happen in order. Conflict is not so obvious. The
token in P4 enables three transitions; but when one of them fires, the token is removed,
leaving the remaining two disabled. Unless we can control the timing of firing, we don't
know how this net is resolved. Concurrency, again, is obvious; many systems operate
with concurrent activities, and this models it well. Synchronization is also modeled well
using Petri Nets; when the processes leading into P8, P9 and P10 are finished, all three
are synchronized by starting P11.
Merging is not quite the same as synchronization, since there is nothing requiring that the
three transitions fire at the same time, or that all three fire before T17; this simply merges
three parallel processes. The priority/inhibit construct uses the inhibit arc to control T19;
as long as P16 has a token, T19 cannot fire.
Very sophisticated logic and control structures can be developed using these primitives.
ARP Protocol
ARP stands for “Address Resolution Protocol”. It is a network protocol used to determine
the MAC address (hardware address) from any IP address.
In other words, ARP is used to translate an IP address into a MAC address. It is used when one device wants to communicate with another device over a local area network (LAN) or Ethernet.
ARP is a network layer protocol and a very important protocol in the TCP/IP suite. It was developed in the early 1980s and defined in RFC 826 in 1982.
ARP is used with important technologies like IPv4, X.25, Frame Relay, and ATM.
The ARP protocol finds the MAC address corresponding to an IP address. The IP address is used to communicate with a device at the network layer, but to communicate with a device at the data link layer, i.e., to actually deliver data to it, a MAC address is required.
When data is sent to a local host, it travels between networks via IP addresses. But to reach that host within the LAN, the sender needs the MAC address of that host. This is where the Address Resolution Protocol plays its important role.
1. ARP Cache: After the MAC address is resolved, it is passed to the sender, where it is stored in a table for future reference. This table is called the ARP cache and is later used to look up the MAC address directly.
2. ARP Cache Timeout: This is the time for which a MAC address can remain in the ARP cache.
3. ARP Request: A packet broadcast over the network to discover the MAC address belonging to the destination IP address.
4. ARP Response/Reply: The MAC address response that the sender receives from the receiver, which enables further communication of data.
Types of ARP
There are four types of ARP protocol they are as follows:-
1. Proxy ARP
2. Gratuitous ARP
3. Reverse ARP
4. Inverse ARP
1. Proxy ARP
This is a technique through which a proxy (typically a router) in a network can answer ARP queries for IP addresses that are not in that network. In simple terms, the proxy can also respond to queries for IP addresses of other networks.
Through this, the querying host is effectively fooled: instead of the MAC address of the destination device, the MAC address of the proxy is used, and the host does not even know.
2. Gratuitous ARP
This is an ARP request that a host sends for its own IP address. It is used to check for duplicate IP addresses, and it can also be used to update the ARP tables of other devices. That is, through a gratuitous ARP a host can check whether it is the only device using its IP address or whether some other device is using a duplicate.
This is a very useful kind of ARP: it helps detect address conflicts and keeps ARP tables up to date.
3. Reverse ARP
This is also a networking protocol, used by a client computer to obtain its own IP address from the network. In simple terms, it is a TCP/IP protocol that a machine uses to ask a server for the IP address corresponding to its own hardware address.
4. Inverse ARP
Inverse ARP works in the opposite direction to ARP: it is used to find the IP address of a device whose data link (hardware) address is already known. It is mainly used in Frame Relay and ATM networks.
Below, the working of the Address Resolution Protocol is explained in a few steps:
When a sender wants to communicate with a receiver, the sender first checks its ARP cache to see whether the receiver's MAC address is already present there.
If the receiver’s MAC address is already present in the ARP cache, the sender will
communicate with the receiver using that MAC address.
If the MAC address of the receiver device is not already present in the ARP cache, the sender prepares an ARP request message. This message contains the MAC address of the sender, the IP address of the sender, and the IP address of the receiver. The field for the MAC address of the receiver is left blank, because that is what is being searched for.
Sender device broadcasts this ARP request message in the LAN. Because this is a
broadcast message, every device connected to the LAN receives this message.
All devices match the receiver IP address of this request message with their own IP
address. Devices whose IP address does not match drop this request message.
The device whose IP address matches the receiver IP address of this request message
receives this message and prepares an ARP reply message. This is a unicast message which
is sent only to the sender.
In the ARP reply message, the sender's IP address and MAC address are used to address the reply. In this message, the receiver also sends its own IP address and MAC address.
As soon as the sender device receives this ARP reply message, it updates its ARP cache
with the new information (Receiver’s MAC address). Now the MAC address of the
receiver is present in the ARP cache of the sender. The sender can send and receive data
without any problem.
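The steps above can be sketched as a small simulation. This is a toy model, not a real network API: the LAN is a list of (IP, MAC) pairs, and the `resolve` helper and its parameter names are illustrative assumptions.

```python
# Minimal sketch of ARP resolution over a toy in-memory LAN.
def resolve(sender_cache, sender_ip, sender_mac, target_ip, lan_devices):
    # Steps 1-2: check the ARP cache first.
    if target_ip in sender_cache:
        return sender_cache[target_ip]

    # Step 3: build an ARP request; the target MAC is left blank
    # because it is the address being searched for.
    request = {"sender_ip": sender_ip, "sender_mac": sender_mac,
               "target_ip": target_ip, "target_mac": None}

    # Step 4: "broadcast" the request to every device on the LAN.
    for ip, mac in lan_devices:
        # Step 5: devices whose IP does not match drop the request.
        if ip != request["target_ip"]:
            continue
        # Steps 6-7: the matching device unicasts an ARP reply
        # carrying its own MAC address.
        # Step 8: the sender updates its ARP cache with the reply.
        sender_cache[target_ip] = mac
        return mac
    return None  # no device on the LAN owns that IP


cache = {}
devices = [("192.168.1.10", "aa:bb:cc:dd:ee:01"),
           ("192.168.1.20", "aa:bb:cc:dd:ee:02")]
mac = resolve(cache, "192.168.1.1", "aa:bb:cc:dd:ee:ff",
              "192.168.1.20", devices)
print(mac)    # aa:bb:cc:dd:ee:02
print(cache)  # the cache now holds the learned mapping
```

A second call for the same IP would be answered from the cache without the broadcast, which is exactly why real hosts keep an ARP cache.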
All the fields of the ARP message format are explained in detail below:
Hardware Type: The size of this field is 2 bytes. This field defines what type of Hardware
is used to transmit the message. The most common Hardware type is Ethernet. The value of
Ethernet is 1.
Protocol Type: This field tells which protocol is used to transmit the message.
Generally the value of this field is 2048, which indicates IPv4.
Hardware Address Length: It shows the length of the hardware address in bytes. The size of
an Ethernet MAC address is 6 bytes.
Protocol Address Length: It shows the size of the IP address in bytes. The size of IP
address is 4 bytes.
Opcode: This field tells the type of message. If the value of this field is 1, it is a request
message, and if the value of this field is 2, it is a reply message.
Sender Hardware Address: This field contains the MAC address of the device sending the
message.
Sender Protocol Address: This field contains the IP address of the device sending the
message.
Target Hardware Address: This field is empty in the request message. In the reply it contains
the MAC address of the receiving device.
Target Protocol Address: This field contains the IP address of the receiving device.
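The field sizes listed above can be packed into the on-wire layout with only the standard library. This is a sketch under the values stated in the text (hardware type 1 for Ethernet, protocol type 2048 for IPv4, 6-byte MAC, 4-byte IP, opcode 1 for a request); the `build_arp_request` name is illustrative.

```python
import struct
import socket

def build_arp_request(sender_mac, sender_ip, target_ip):
    htype = 1        # Hardware Type: Ethernet (2 bytes)
    ptype = 0x0800   # Protocol Type: 2048 = IPv4 (2 bytes)
    hlen = 6         # Hardware Address Length in bytes
    plen = 4         # Protocol Address Length in bytes
    opcode = 1       # Opcode: 1 = request, 2 = reply
    target_mac = b"\x00" * 6  # left blank: this is what we ask for
    # Network byte order: 2+2+1+1+2 header bytes, then the four addresses.
    return struct.pack("!HHBBH6s4s6s4s",
                       htype, ptype, hlen, plen, opcode,
                       bytes.fromhex(sender_mac.replace(":", "")),
                       socket.inet_aton(sender_ip),
                       target_mac,
                       socket.inet_aton(target_ip))

pkt = build_arp_request("aa:bb:cc:dd:ee:ff", "192.168.1.1", "192.168.1.20")
print(len(pkt))  # 28: the fixed ARP payload size for Ethernet/IPv4
```

Adding the byte sizes from the field list (2 + 2 + 1 + 1 + 2 + 6 + 4 + 6 + 4) gives the 28-byte ARP payload.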
What is RARP
The Reverse Address Resolution Protocol (RARP) is a networking protocol that is used
to map a physical (MAC) address to an Internet Protocol (IP) address. It is the reverse
of the more commonly used Address Resolution Protocol (ARP), which maps an IP
address to a MAC address.
RARP was developed in the early days of computer networking as a way to provide IP
addresses to diskless workstations or other devices that could not store their own IP
addresses. With RARP, the device would broadcast its MAC address and request an IP
address, and a RARP server on the network would respond with the corresponding IP
address.
While RARP was widely used in the past, it has largely been replaced by newer
protocols such as DHCP (Dynamic Host Configuration Protocol), which provides more
flexibility and functionality in assigning IP addresses dynamically. However, RARP is
still used in some specialized applications, such as booting embedded systems and
configuring network devices with pre-assigned IP addresses.
RARP is specified in RFC 903 and operates at the data link layer of the OSI model. It
has largely been superseded by BOOTP and DHCP in modern networks, but it played an
important role in the development of computer networking protocols and continues to
be used in certain contexts.
RARP is the abbreviation of Reverse Address Resolution Protocol, a computer networking
protocol employed by a client computer to request its IP address from a gateway server's
Address Resolution Protocol table or cache. The network administrator creates a table in
the gateway router that maps each MAC address to its corresponding IP address.
How does Reverse Address Resolution Protocol work?
The client broadcasts a RARP request containing its own MAC address. A RARP server on
the same network looks up that MAC address in its mapping table and sends back a RARP
reply containing the corresponding IP address, which the client then adopts as its own.
Uses of RARP :
RARP is used to convert an Ethernet address to an IP address. It is also available for LAN
technologies such as FDDI, Token Ring, etc.
Disadvantages of RARP:
The Reverse Address Resolution Protocol had few disadvantages which eventually led
to its replacement by BOOTP and DHCP.
Some of the disadvantages are listed below:
The RARP server must be located within the same physical network.
The computer sends the RARP request at the lowest layer of the network. Because of this,
it is impossible for a router to forward the packet beyond the local network.
RARP cannot handle subnetting because no subnet masks are sent. If the network is split
into multiple subnets, a RARP server must be available on each of them.
It is not possible to fully configure a PC in a modern network using RARP alone.
It does not fully utilize the potential of a network like Ethernet.
RARP has now become an obsolete protocol since it operates at a low level. Because of
this, it requires direct access to the network, which makes it difficult to build a server.
Issues in RARP :
The Reverse Address Resolution Protocol (RARP) has several issues that limit its usefulness in
modern networks:
1. Limited scalability: RARP does not scale well in large networks as it relies on broadcasting
to communicate with clients. This can result in network congestion and performance issues,
especially in networks with a high number of clients.
2. Security concerns: RARP does not provide any security mechanisms, such as
authentication or encryption, which makes it vulnerable to attacks such as spoofing and
denial of service.
3. Lack of flexibility: RARP provides only basic functionality and cannot be used to provide
advanced network services such as assigning DNS server information or network gateway
addresses.
4. Limited support: RARP is not supported by many modern operating systems and network
devices, which makes it difficult to implement and maintain in modern networks.
5. Compatibility issues: RARP may not be compatible with newer networking protocols, such
as IPv6, which can limit its usefulness in networks that require advanced features and
functionality.
The RARP packet format is the same as the ARP packet, except for the operation field,
which is either 3 or 4: 3 for a RARP request and 4 for a RARP reply.
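Since only the operation field distinguishes the four message types sharing this packet layout, the mapping can be sketched in a few lines (the `classify` helper is an illustrative name, not a real API):

```python
def classify(opcode):
    # Operation field values shared by the ARP/RARP packet format.
    return {1: "ARP request", 2: "ARP reply",
            3: "RARP request", 4: "RARP reply"}.get(opcode, "unknown")

print(classify(3))  # RARP request
print(classify(4))  # RARP reply
```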
RARP vs ARP:
- Full form: RARP stands for Reverse Address Resolution Protocol; ARP stands for Address
Resolution Protocol.
- Mapping: RARP maps a physical (MAC) address to an IP address; ARP maps an IP address
to a physical (MAC) address.
- Purpose: RARP obtains the IP address of a network device when only its MAC address is
known; ARP obtains the MAC address of a network device when only its IP address is known.
- Operation: With RARP, the client broadcasts its MAC address and requests an IP address,
and the server responds with the corresponding IP address; with ARP, the client broadcasts
the target IP address and requests the corresponding MAC address, and the owning device
responds with its MAC address.
- Usage: RARP is rarely used in modern networks, as most devices have a pre-assigned IP
address; ARP is widely used in modern networks to resolve IP addresses to MAC addresses.
- Standardization: RARP is standardized in RFC 903; ARP is standardized in RFC 826.
- Known vs. requested: In RARP, the MAC address is known and the IP address is requested;
in ARP, the IP address is known and the MAC address is requested.
- Operation field values: RARP uses 3 for requests and 4 for replies; ARP uses 1 for
requests and 2 for replies.
GARP (Gratuitous ARP): An ARP message sent without a request. It is mainly used to notify
other hosts on the network of a MAC address assignment change. When a host receives a
GARP, it either adds a new entry to its cache table or modifies an existing one.
GARP messages:
GARP Request: A regular ARP request that contains the source IP address as both sender
and target address, the source MAC address as sender, and the broadcast MAC address
(ff:ff:ff:ff:ff:ff) as target. There will be no reply to this request.
GARP Reply: The source/destination IP addresses and MAC addresses are all set to the
sender's addresses. This message is sent without any preceding request.
GARP Probe: When an interface comes up with a configured IP address, it sends a probe to
make sure no other host is using the same IP, thereby preventing IP conflicts. A probe has
the sender IP set to zeros (0.0.0.0), the target IP set to the IP being probed, the sender
MAC set to the source MAC, and the target MAC address set to all zeros.
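The field values that distinguish the three GARP message types described above can be summarized in a short sketch. The helper names are illustrative, and each message is modeled as a plain dictionary rather than a real packet.

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"
ZERO_MAC = "00:00:00:00:00:00"

def garp_request(ip, mac):
    # Sender and target IP are both the host's own IP; the target MAC
    # is broadcast, and no reply is expected.
    return {"sender_ip": ip, "target_ip": ip,
            "sender_mac": mac, "target_mac": BROADCAST}

def garp_reply(ip, mac):
    # Both IP addresses and both MAC addresses are set to the sender's.
    return {"sender_ip": ip, "target_ip": ip,
            "sender_mac": mac, "target_mac": mac}

def garp_probe(probed_ip, mac):
    # Sender IP is all zeros, target IP is the address being probed,
    # target MAC is all zeros: used to detect IP conflicts.
    return {"sender_ip": "0.0.0.0", "target_ip": probed_ip,
            "sender_mac": mac, "target_mac": ZERO_MAC}

print(garp_probe("10.0.0.5", "aa:bb:cc:dd:ee:01"))
```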
GARP use cases:
- Making sure there are no IP conflicts on the segment (GARP Probe).
- In failover systems like clusters, HSRP, VRRP, etc., a virtual MAC address is used to
point to the active device. When this device fails, the failover control sends a gratuitous
ARP asking other systems to update their ARP table entries so that the cluster's vMAC
points to the new active device (GARP Request/Reply).
- When an Ethernet interface comes up, the host sends a GARP to notify all systems of its
IP address, instead of waiting for each of them to ask and replying to each one
(GARP Request/Reply).
- Extensively used with data center technologies like vMotion and load balancers: whenever
a machine is moved or a virtual interface is created on a load balancer, GARP is used to
notify other devices of the changes.