
COMPUTER NETWORKS

Data Communication Components:


Computer Network:
A network is a set of devices (often referred to as nodes) connected by communication
links. A node can be a computer, printer, or any other device capable of sending and/or receiving data
generated by other nodes on the network.
The term “computer network” means a collection of autonomous computers interconnected by a single
technology. Two computers are said to be interconnected if they are able to exchange information.
o The connection need not be via a copper wire; fibre optics, microwaves, infrared, and
communication satellites can also be used.

Data Communication:
Data communications (DC) is the process of using computing and communication technologies to
transfer data from one place to another, or between participating parties.

DC enables the movement of electronic or digital data between two or more network nodes,
regardless of geographical location, technological medium or data contents.

The effectiveness of a data communications system depends on four fundamental
characteristics: delivery, accuracy, timeliness, and jitter.
Delivery: - The system must deliver data to the correct destination. Data must be received by the
intended device or user and only by that device or user.

Accuracy: - The system must deliver the data accurately. Data that have been altered in transmission
and left uncorrected are unusable.

Timeliness: - The system must deliver data in a timely manner. Data delivered late are useless. In the
case of video and audio, timely delivery means delivering data as they are produced, in the same order
that they are produced, and without significant delay. This kind of delivery is called real-time
transmission.

Jitter: - Jitter refers to the variation in the packet arrival time. It is the uneven delay in the delivery
of audio or video packets.
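Jitter can be made concrete with a small sketch (illustrative only, not from the notes): given hypothetical packet arrival timestamps, measure how much the inter-arrival gaps deviate from their average. With perfectly even delivery the jitter is zero.

```python
def inter_arrival_gaps(arrival_times):
    """Return the gaps between consecutive packet arrival times."""
    return [b - a for a, b in zip(arrival_times, arrival_times[1:])]

def jitter(arrival_times):
    """Mean absolute deviation of the gaps from their average gap."""
    gaps = inter_arrival_gaps(arrival_times)
    mean_gap = sum(gaps) / len(gaps)
    return sum(abs(g - mean_gap) for g in gaps) / len(gaps)

# Packets sent every 20 ms (times in ms):
print(jitter([0, 20, 40, 60]))   # evenly spaced -> 0.0
print(jitter([0, 25, 40, 70]))   # uneven spacing -> jitter > 0
```

A real-time receiver would compute this over a sliding window of recent packets rather than the whole stream.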

A data communications system has five components:


Message: - The message is the information (data) to be communicated. Popular forms of information
include text, numbers, pictures, audio, and video.

Sender: -The sender is the device that sends the data message. It can be a computer, workstation,
telephone handset, video camera, and so on.

Receiver: - The receiver is the device that receives the message. It can be a computer, workstation,
telephone handset, television, and so on.
Transmission medium: - The transmission medium is the physical path by which a message travels
from sender to receiver. Some examples of transmission media include twisted-pair wire, coaxial
cable, fibre-optic cable, and radio waves.

Protocol: - A protocol is a set of rules that govern data communications. It represents an agreement
between the communicating devices. Without a protocol, two devices may be connected but not
communicating, just as a person speaking French cannot be understood by a person who speaks only
Japanese.

Data Representation:
Data Representation refers to the form in which data is stored, processed, and transmitted.
Information is represented in various forms such as text, numbers, images, audio and video.

Text: In data communication, text is represented as a bit pattern, a sequence of bits (0s and 1s). Different
sets of bit patterns, called codes, have been designed to represent text symbols, and the process
of representing them is called coding. The prevalent coding system today is Unicode, whose first 128
characters are identical to the American Standard Code for Information Interchange (ASCII).

Numbers: They are also represented by bit patterns. To simplify mathematical operations the
number is directly converted to a binary number.

Images: They are also represented by bit patterns. In its simplest form, an image is composed of a
matrix of pixels, where each pixel is a small dot. Resolution determines the size of the pixel.

Audio: It refers to the recording or broadcasting of sound or music. Audio is by nature different from
text, numbers, images. It is not discrete, but continuous.

Video: It refers to the recording or broadcasting of a picture or a movie. It can either be produced as
a continuous entity, or can be a combination of images, each a discrete entity, arranged to convey the
idea of motion.
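The idea of representing text as bit patterns can be sketched in code (illustrative only): each ASCII character becomes an 8-bit sequence, and the receiver reverses the process.

```python
def text_to_bits(text):
    """Encode a text string as a string of 0s and 1s (8 bits per ASCII char)."""
    return ''.join(format(byte, '08b') for byte in text.encode('ascii'))

def bits_to_text(bits):
    """Decode an 8-bit-per-character bit string back to text."""
    chars = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return ''.join(chr(int(b, 2)) for b in chars)

bits = text_to_bits("Hi")
print(bits)                # 0100100001101001
print(bits_to_text(bits))  # Hi
```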
Data Flow:
Communication between two devices can be simplex, half-duplex, or full-duplex as shown in Figure:

Simplex: - In simplex mode, the communication is unidirectional, as on a one-way street. Only one of
the two devices on a link can transmit; the other can only receive (Figure a). Keyboards and traditional
monitors are examples of simplex devices.

Half-Duplex: - In half-duplex mode, each station can both transmit and receive, but not at the same
time. When one device is sending, the other can only receive, and vice versa (Figure b). Walkie-talkies
and CB (citizens band) radios are both half-duplex systems.

Full-Duplex: - In full-duplex, both stations can transmit and receive simultaneously (Figure c). One
common example of full-duplex communication is the telephone network. When two people are
communicating by a telephone line, both can talk and listen at the same time. The full-duplex mode
is used when communication in both directions is required all the time.

Various Connection Topology:


A network is two or more devices connected through links. A link is a communications pathway that
transfers data from one device to another. There are two possible types of connections: point-to-
point and multipoint.

Point-to-Point: - A point-to-point connection provides a dedicated link between two devices. The
entire capacity of the link is reserved for transmission between those two devices.

o When you change television channels by infrared remote control, you are
establishing a point-to-point connection between the remote control and the
television's control system.

Multipoint: - A multipoint (also called multi-drop) connection is one in which more than two specific
devices share a single link. In a multipoint environment, the capacity of the channel is shared, either
spatially or temporally.

o If several devices can use the link simultaneously, it is a spatially shared connection.
If users must take turns, it is a timeshared connection.
Physical Topology:
The term physical topology refers to the way in which a network is laid out physically. Two or more
devices connect to a link; two or more links form a topology. The topology of a network is the
geometric representation of the relationship of all the links and linking devices (usually called nodes)
to one another. There are four basic topologies possible: mesh, star, bus, and ring.

MESH:

• A mesh topology is the one where every node is connected to every other node in the
network.

• A mesh topology can be a full mesh topology or a partially connected mesh topology.

➢ In a full mesh topology, every computer in the network has a connection to each of
the other computers in that network. The number of connections in this network can
be calculated using the following formula (n is the number of computers in the
network): n(n-1)/2
➢ In a partially connected mesh topology, at least two of the computers in the network
have connections to multiple other computers in that network. It is an inexpensive
way to implement redundancy in a network. In the event that one of the primary
computers or connections in the network fails, the rest of the network continues to
operate normally.
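The full-mesh link count formula n(n-1)/2 given above can be checked with a short sketch, which also shows how quickly the number of dedicated links grows:

```python
def full_mesh_links(n):
    """Number of dedicated point-to-point links in a full mesh of n devices."""
    return n * (n - 1) // 2

for n in (2, 5, 10):
    print(n, "devices ->", full_mesh_links(n), "links")
# 5 devices already need 10 links; 10 devices need 45.
```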

Advantages of a mesh topology:

• Can handle high amounts of traffic, because multiple devices can transmit data
simultaneously.
• A failure of one device does not cause a break in the network or transmission of data.
• Adding additional devices does not disrupt data transmission between other devices.

Disadvantages of a mesh topology

• The cost to implement is higher than other network topologies, making it a less desirable
option.
• Building and maintaining the topology is difficult and time consuming.
• The chance of redundant connections is high, which adds to the high costs and potential for
reduced efficiency.
STAR:
A star network, or star topology, is one of the most common network setups. In this configuration, every
node connects to a central network device, such as a hub, switch, or computer. The central network
device acts as a server and the peripheral devices act as clients. Depending on the type of network
card used in each computer of the star topology, a coaxial cable or an RJ-45 network cable is used to
connect the computers together.

Advantages of star topology:

• Centralized management of the network, through the use of the central computer, hub, or
switch.
• Easy to add another computer to the network.
• If one computer on the network fails, the rest of the network continues to function normally.

Disadvantages of star topology:

• Can have a higher cost to implement, especially when using a switch or router as the central
network device.
• If the central computer, hub, or switch fails, the entire network goes down and all computers
are disconnected from the network.

BUS:
Also known as a line topology, a bus topology is a network setup in which each computer and network
device is connected to a single cable or backbone.
Advantages of bus topology:
• It works well when you have a small network.
• It's the easiest network topology for connecting computers or peripherals in a linear fashion.
• It requires less cable length than a star topology.

Disadvantages of bus topology:

• It can be difficult to identify the problems if the whole network goes down.
• It can be hard to troubleshoot individual device issues.
• Bus topology is not great for large networks.
• Terminators are required for both ends of the main cable.
• Additional devices slow the network down.
• If a main cable is damaged, the network fails or splits into two.

RING:
A ring topology is a network configuration in which device connections create a circular data path. In
a ring network, packets of data travel from one device to the next until they reach their destination.

➢ Most ring topologies allow packets to travel only in one direction; this is called a unidirectional
ring network.
➢ Others permit data to move in either direction; these are called bidirectional.

The major disadvantage of a ring topology is that if any individual connection in the ring is broken, the
entire network is affected.

o Ring topologies may be used in either local area networks (LANs) or wide area
networks (WANs).

Advantages of ring topology:


• All data flows in one direction, reducing the chance of packet collisions.
• A network server is not needed to control network connectivity between each workstation.
• Data can transfer between workstations at high speeds.
• Additional workstations can be added without impacting performance of the network.

Disadvantages of ring topology:


• All data being transferred over the network must pass through each workstation on the
network, which can make it slower than a star topology.
• The entire network will be impacted if one workstation shuts down.
HYBRID TOPOLOGY:
A hybrid topology is a type of network topology that uses two or more differing network topologies.
These topologies can include a mix of bus topology, mesh topology, ring topology and star topology.

For example, we can have a main star topology with each branch connecting several stations in a bus
topology as shown in Figure:

Protocols and Standard:


Protocol:
In order to make communication between devices successful, some rules and procedures should be
agreed upon at the sending and receiving ends of the system. Such rules and procedures are called
protocols. Different types of protocols are used for different types of communication.

Standards:
Standards are sets of rules for data communication that are needed for the exchange of information
among devices. It is important to follow standards, which are created by various standards
organizations such as IEEE, ISO, and ANSI.

Types of Standards:
Standards are of two types:

• De Facto Standard.
• De Jure Standard.
De Facto Standard: The meaning of “de facto” is “by fact” or “by convention”.
These are the standards that have not been approved by any organization but have been adopted
as standards because of their widespread use. Sometimes these standards are established by
manufacturers.

For example: Apple and Google each established their own, differing rules for their products, yet they
also follow some of the same standard manufacturing rules.

De Jure Standard: The meaning of “de jure” is “by law” or “by regulation”.
Thus, these are the standards that have been approved by an officially recognized body such as ANSI,
ISO, or IEEE. These standards must be followed wherever they are required.

For example: Standard data communication protocols such as SMTP, TCP, IP, and UDP must be
followed whenever they are used.

OSI MODEL:
OSI stands for Open Systems Interconnection. It was developed by the ISO (International
Organization for Standardization) in the year 1984. The Open Systems Interconnection (OSI)
model is a conceptual framework that describes networking or telecommunications systems
as seven layers, each with its own function. All seven layers work collaboratively to transmit
data from one person to another across the globe.
PHYSICAL LAYER:
The lowest layer of the OSI reference model is the physical layer. It is responsible for the actual
physical connection between the devices. The physical layer contains information in the form
of bits. It is responsible for transmitting individual bits from one node to the next. When
receiving data, this layer will get the signal received and convert it into 0s and 1s and send
them to the Data Link layer, which will put the frame back together.
The functions of the physical layer are:

• Bit synchronization: The physical layer provides the synchronization of the bits by
providing a clock. This clock controls both sender and receiver thus providing
synchronization at bit level.

• Bit rate control: The Physical layer also defines the transmission rate i.e. the number
of bits sent per second.

• Physical topologies: The physical layer specifies the way in which the different
devices/nodes are arranged in a network, i.e. bus, star, or mesh topology.

• Transmission mode: Physical layer also defines the way in which the data flows
between the two connected devices. The various transmission modes possible are
Simplex, half-duplex and full-duplex.
Fundamental principles of Physical Layer:
• One of the major functions of physical layer is to move data in the form of
electromagnetic signals across a transmission medium.
• The data usable to a person or an application are not in a form that can be transmitted
over a network.
• For example, an image must first be changed to a form that transmission media can
accept.
• To be transmitted, data must be transformed to electromagnetic signals.

LAN Technologies (Ethernet):


Local Area Network (LAN) is a data communication network connecting various terminals or
computers within a building or limited geographical area. The connection among the devices
could be wired or wireless. Ethernet, Token Ring and Wireless LAN using IEEE 802.11 are
examples of standard LAN technologies.
Ethernet: -
➢ Ethernet is the most widely used LAN technology, which is defined under IEEE
standards 802.3.
➢ The reason behind its wide usability is that Ethernet is easy to understand, implement,
and maintain, and allows low-cost network implementation. Ethernet also offers
flexibility in terms of the topologies that are allowed.

➢ Ethernet generally uses Bus Topology. Ethernet operates in two layers of the OSI
model, Physical Layer, and Data Link Layer.

Ethernet LANs consist of network nodes and interconnecting media or links. The network
nodes can be of two types:
1. Data Terminal Equipment (DTE): -
o Generally, DTEs are the end devices that convert the user information into
signals or reconvert the received signals.
o DTE devices include personal computers, workstations, file servers, and print
servers, also referred to as end stations. These devices are either the source
or the destination of data frames.
o The data terminal equipment may be a single piece of equipment or multiple
pieces of equipment that are interconnected and perform all the required
functions to allow the user to communicate. A user can interact with DTE or
DTE may be a user.

2. Data Communication Equipment (DCE): -


o DCEs are the intermediate network devices that receive and forward frames
across the network.
o They may be standalone devices such as repeaters, network switches, and
routers, or communications interface units such as interface cards and
modems.
o The DCE performs functions such as signal conversion and coding, and may be
a part of the DTE or intermediate equipment.

Transmission Media:
The media over which the information between two computer systems is sent, called
transmission media. Transmission media comes in two forms.
Guided Media: - All communication wires/cables are guided media, such as UTP, coaxial
cables, and fibre Optics. In this media, the sender and receiver are directly connected and
the information is sent (guided) through it.
Unguided Media: - Wireless or open-air space is said to be unguided media, because there
is no physical connection between the sender and receiver. Information is spread over the air,
and anyone, including the actual recipient, may collect it.
Multiplexing:
Multiplexing is a technique to mix and send multiple data streams over a single medium. This
technique requires system hardware called multiplexer (MUX) for multiplexing the streams
and sending them on a medium, and de-multiplexer (DMUX) which takes information from
the medium and distributes to different destinations.

Why Multiplexing?

o The transmission medium is used to send the signal from sender to receiver. The
medium can only have one signal at a time.
o If there are multiple signals to share one medium, then the medium must be divided
in such a way that each signal is given some portion of the available bandwidth. For
example, if there are 10 signals and the bandwidth of the medium is 100 units, then
each signal is given 10 units.
o When multiple signals share the common medium, there is a possibility of collision.
Multiplexing concept is used to avoid such collision.

Concept of Multiplexing:

o The 'n' input lines are fed into a multiplexer, which combines the signals to form a
composite signal.
o The composite signal is passed through a demultiplexer, which separates it into its
component signals and transfers them to their respective destinations.
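The MUX/DMUX idea above can be sketched with a toy round-robin time-division model (an assumption for illustration, not from the notes): equal-length input streams are interleaved onto one composite stream, then separated again at the receiving side.

```python
def mux(streams):
    """Interleave equal-length input streams into one composite signal."""
    return [unit for slot in zip(*streams) for unit in slot]

def demux(composite, n):
    """Separate a composite signal back into its n component streams."""
    return [composite[i::n] for i in range(n)]

inputs = [["a1", "a2"], ["b1", "b2"], ["c1", "c2"]]
composite = mux(inputs)
print(composite)            # ['a1', 'b1', 'c1', 'a2', 'b2', 'c2']
print(demux(composite, 3))  # recovers the three input streams
```

Each position in the composite stream is a "time slot" permanently assigned to one input line, which is exactly why the demultiplexer can recover the streams by position alone.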

Switching:
Switching is a mechanism by which data/information is sent from a source towards a
destination that is not directly connected to it. Networks have interconnecting devices, which
receive data from directly connected sources, store it, analyse it, and then forward it to
the next interconnecting device closest to the destination.
Switching can be categorized as:

Circuit Switching:

Circuit switching is a connection-oriented network switching technique. Here, a dedicated
route is established between the source and the destination, and the entire message is
transferred through it.

Phases of Circuit Switch Connection:

• Circuit Establishment: In this phase, a dedicated circuit is established from the
source to the destination through a number of intermediate switching centres. The
sender and receiver transmit communication signals to request and acknowledge
establishment of the circuit.
• Data Transfer: Once the circuit has been established, data and voice are transferred
from the source to the destination. The dedicated connection remains as long as the
end parties communicate.
• Circuit Disconnection: When data transfer is complete, the connection is
relinquished. The disconnection is initiated by any one of the users. Disconnection
involves removal of all intermediate links from the sender to the receiver.
Advantages
• It is suitable for long continuous transmission, since a continuous transmission route
is established, that remains throughout the conversation.
• The dedicated path ensures a steady data rate of communication.
• No intermediate delays are found once the circuit is established. So, they are suitable
for real time communication of both voice and data transmission.
Disadvantages
• Bandwidth requirement is high even in cases of low data volume.
• There is underutilization of system resources. Once resources are allocated to a
particular connection, they cannot be used for other connections.
• Time required to establish connection may be high.
Packet Switching:
Packet switching is a connectionless network switching technique. Here, the message is
divided into a number of units called packets that are individually routed from
the source to the destination. At the destination, all the packets belonging to the same
message have to be reassembled. There is no need to establish a dedicated circuit for
communication.

Process:
Each packet in a packet switching technique has two parts: a header and a payload. The
header contains the addressing information of the packet and is used by the intermediate
routers to direct it towards its destination. The payload carries the actual data.
➢ Packet switching uses the store-and-forward technique while switching packets:
each hop first stores a packet and then forwards it.
➢ This technique is very beneficial because packets may get discarded at any hop for
some reason.
➢ More than one path is possible between a pair of sources and destinations.
➢ Each packet contains Source and destination address using which they independently
travel through the network.
➢ In other words, packets belonging to the same file may or may not travel through the
same path. If there is congestion at some path, packets are allowed to choose different
paths possible over an existing network.
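The header/payload split and independent travel of packets can be sketched as follows (a toy model; the field names and addresses are hypothetical, not a real protocol format): a message is cut into payloads, each tagged with a sequence number plus source and destination addresses, and the destination reassembles them even if they arrive out of order.

```python
def packetize(message, payload_size, src, dst):
    """Split a message into packets of the form {header fields, payload}."""
    return [
        {"src": src, "dst": dst, "seq": i, "payload": message[i:i + payload_size]}
        for i in range(0, len(message), payload_size)
    ]

def reassemble(packets):
    """Reorder packets by sequence number and rebuild the message."""
    return ''.join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

pkts = packetize("HELLO WORLD", 4, src="A", dst="B")
pkts.reverse()            # simulate out-of-order arrival over different paths
print(reassemble(pkts))   # HELLO WORLD
```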

Advantages:
• More efficient in terms of bandwidth, since the concept of reserving circuit is not
there.
• Minimal transmission latency.
• More reliable as a destination can detect the missing packet.

Disadvantages:
• They are unsuitable for applications that cannot afford delays in communication like
high quality voice calls.
• Packet switching has high installation costs.
• They require complex protocols for delivery.

Message Switching:

Message switching is a connectionless network switching technique where the entire
message is routed from the source node to the destination node, one hop at a time. It was a
precursor of packet switching.
➢ In message switching, end-users communicate by sending and
receiving messages that include the entire data to be shared. Messages are the
smallest individual unit.

➢ Also, the sender and receiver are not directly connected. There are a number of
intermediate nodes that transfer data and ensure that the message reaches its
destination. Message switched data networks are hence called hop-by-hop systems.

They provide two distinct and important characteristics:

1. Store and forward – The intermediate nodes have the responsibility of transferring
the entire message to the next node. Hence, each node must have storage capacity. A
message will only be delivered if the next hop and the link connecting it are both
available, otherwise, it’ll be stored indefinitely. A store-and-forward switch forwards
a message only if sufficient resources are available and the next hop is accepting data.
This is called the store-and-forward property.

2. Message delivery – This implies wrapping the entire information in a single message
and transferring it from the source to the destination node. Each message must have
a header that contains the message routing information, including the source and
destination.

Advantages:
• As message switching can store a message when a communication channel is not
available, it helps reduce traffic congestion in the network.
• In message switching, the data channels are shared by the network devices.
• It makes traffic management efficient by assigning priorities to the messages.

Disadvantages:
• Message switching cannot be used for real-time applications as storing messages
causes delay.
• In message switching, messages have to be stored, so every intermediate device in
the network requires a large storage capacity.
DATA LINK LAYER:
Data Link Layer is second layer of OSI Layered Model. This layer is one of the most complicated layers
and has complex functionalities and liabilities. Data link layer hides the details of underlying hardware
and represents itself to upper layer as the medium to communicate.

Data link layer works between two hosts which are directly connected in some sense. This direct
connection could be point to point or broadcast. Systems on broadcast network are said to be on
same link. The work of data link layer tends to get more complex when it is dealing with multiple
hosts on single collision domain.
Data link layer is responsible for converting data stream to signals bit by bit and to send that over the
underlying hardware. At the receiving end, Data link layer picks up data from hardware which are in
the form of electrical signals, assembles them in a recognizable frame format, and hands over to
upper layer.
Data link layer has two sub-layers:
• Logical Link Control: It deals with protocols, flow-control, and error control
• Media Access Control: It deals with actual control of media.

Functionality of Data-link Layer:


Data link layer does many tasks on behalf of upper layer. These are:
• Framing: - The data-link layer takes packets from the Network Layer and encapsulates them into
frames. Then, it sends each frame bit-by-bit on the hardware. At the receiver's end, the data link
layer picks up signals from the hardware and assembles them into frames.
• Addressing: - Data-link layer provides layer-2 hardware addressing mechanism. Hardware
address is assumed to be unique on the link. It is encoded into hardware at the time of
manufacturing.
• Synchronization: - When data frames are sent on the link, both machines must be
synchronized in order for the transfer to take place.
• Error Control: - Sometimes signals encounter problems in transit and bits get flipped. Such
errors are detected, and an attempt is made to recover the actual data bits. The layer also
provides an error-reporting mechanism to the sender.
• Flow Control: - Stations on the same link may have different speeds or capacities. The data-link
layer ensures flow control so that both machines can exchange data at the same speed.
• Multi-Access: - When hosts on a shared link try to transfer data, there is a high probability
of collision. The data-link layer provides mechanisms such as CSMA/CD to let multiple
systems access a shared medium.

Data-link layer is responsible for implementation of point-to-point flow and error control
mechanism.
Flow Control:
When a data frame (Layer-2 data) is sent from one host to another over a single medium, it is required
that the sender and receiver work at the same speed; that is, the sender sends at a speed at
which the receiver can process and accept the data. What if the speeds (hardware/software) of the
sender and receiver differ? If the sender sends too fast, the receiver may be overloaded (swamped)
and data may be lost.
Two types of mechanisms can be deployed to control the flow:
Stop and Wait
This flow control mechanism forces the sender, after transmitting a data frame, to stop and wait until
the acknowledgement of that data frame is received.

Sliding Window
In this flow control mechanism, both sender and receiver agree on the number of data-frames after
which the acknowledgement should be sent. Since the stop and wait flow control mechanism wastes
resources, this protocol tries to make use of underlying resources as much as possible.
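The resource argument can be made concrete with a rough count (a simplified model with assumed numbers, not from the notes): stop-and-wait needs one acknowledgement round trip per frame, while a sliding window of size W keeps up to W frames in flight per acknowledgement cycle.

```python
import math

def stop_and_wait_rounds(n_frames):
    """Round trips needed when every frame waits for its own ACK."""
    return n_frames

def sliding_window_rounds(n_frames, window):
    """Round trips needed when up to `window` frames are sent per ACK cycle."""
    return math.ceil(n_frames / window)

print(stop_and_wait_rounds(100))      # 100
print(sliding_window_rounds(100, 7))  # 15
```

The model ignores transmission time and losses, but it shows why a larger window keeps the link busier on long-delay paths.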

Error Control:

When a data frame is transmitted, there is a probability that it may be lost in transit or received
corrupted. In both cases, the receiver does not receive the correct data frame and the sender
knows nothing about the loss. In such cases, both sender and receiver are equipped with
protocols that help them detect transit errors such as the loss of a data frame. Then either
the sender retransmits the data frame or the receiver requests that the previous data frame be resent.
Requirements for error control mechanism:
• Error detection - The sender and receiver (either or both) must be able to ascertain that
an error has occurred in transit.
• Positive ACK - When the receiver receives a correct frame, it should acknowledge it.
• Negative ACK - When the receiver receives a damaged frame or a duplicate frame, it sends
a NACK back to the sender and the sender must retransmit the correct frame.
• Retransmission - The sender maintains a clock and sets a timeout period. If an
acknowledgement of a previously transmitted data frame does not arrive before the timeout,
the sender retransmits the frame, assuming that the frame or its acknowledgement was lost in
transit.
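The error-detection requirement can be illustrated with the simplest scheme, a single even-parity bit (one of many possible detection codes, chosen here only as a sketch): the sender appends one bit so the total count of 1s is even, and the receiver flags any frame whose count of 1s is odd.

```python
def add_parity(bits):
    """Append an even-parity bit to a list of data bits."""
    return bits + [sum(bits) % 2]

def check_parity(frame):
    """Return True if the frame (data + parity bit) passes the even-parity check."""
    return sum(frame) % 2 == 0

frame = add_parity([1, 0, 1, 1])
print(check_parity(frame))   # True: frame arrived intact

frame[1] ^= 1                # flip one bit "in transit"
print(check_parity(frame))   # False: single-bit error detected
```

Note that a parity bit detects any odd number of flipped bits but misses an even number; real links use stronger codes such as CRCs for this reason.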
There are three techniques that the data-link layer may deploy to control errors using Automatic
Repeat Request (ARQ):

Stop-and-wait ARQ:

The following transitions may occur in Stop-and-Wait ARQ:

o The sender maintains a timeout counter.
o When a frame is sent, the sender starts the timeout counter.
o If acknowledgement of the frame comes in time, the sender transmits the next frame in
the queue.
o If acknowledgement does not come in time, the sender assumes that either the frame or
its acknowledgement was lost in transit. The sender retransmits the frame and restarts
the timeout counter.
o If a negative acknowledgement is received, the sender retransmits the frame.
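The steps above can be sketched as a toy simulation (illustrative only): the channel is scripted with whether each transmission attempt is acknowledged in time, and a missing acknowledgement triggers a retransmission of the same frame.

```python
def stop_and_wait_send(frames, ack_outcomes):
    """Transmit frames one at a time. `ack_outcomes` scripts whether each
    transmission attempt is acknowledged before the timeout (True) or not
    (False). Returns the log of (frame, attempt) transmissions."""
    log = []
    outcomes = iter(ack_outcomes)
    for frame in frames:
        attempt = 1
        while True:
            log.append((frame, attempt))
            if next(outcomes):   # ACK arrived before the timeout expired
                break
            attempt += 1         # timeout: retransmit the same frame

    return log

# Frame 'F1' is acknowledged first try; 'F2' needs one retransmission.
print(stop_and_wait_send(["F1", "F2"], [True, False, True]))
# [('F1', 1), ('F2', 1), ('F2', 2)]
```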

Go-Back-N ARQ:
The Stop-and-Wait ARQ mechanism does not utilize resources at their best: while waiting for the
acknowledgement, the sender sits idle and does nothing. In the Go-Back-N ARQ method, both
sender and receiver maintain a window.
The sending-window size enables the sender to send multiple frames without receiving the
acknowledgement of the previous ones. The receiving window enables the receiver to receive
multiple frames and acknowledge them. The receiver keeps track of the incoming frames' sequence
numbers.
When the sender has sent all the frames in the window, it checks up to what sequence number it has
received positive acknowledgements. If all frames are positively acknowledged, the sender sends the
next set of frames. If the sender finds that it has received a NACK, or has not received any ACK for a
particular frame, it retransmits that frame and all the frames sent after it.
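The go-back rule can be sketched in a toy simulation (illustrative, with a scripted channel): the receiver accepts only in-order frames, a cumulative ACK reports how far it got, and a loss forces the sender to resend the lost frame and everything after it.

```python
def go_back_n(n_frames, window, drop_attempts):
    """Simulate Go-Back-N. `drop_attempts` is a set of transmission indices
    (0-based, counted over every frame put on the wire) that the channel
    drops. Returns the list of sequence numbers actually transmitted."""
    sent = []       # every sequence number put on the wire, in order
    expected = 0    # receiver: next in-order sequence number wanted
    base = 0        # sender: first unacknowledged frame
    tx = 0          # global transmission counter
    while base < n_frames:
        # sender transmits the whole window starting at base
        for seq in range(base, min(base + window, n_frames)):
            sent.append(seq)
            delivered = tx not in drop_attempts
            tx += 1
            # receiver accepts only the next in-order frame, discards the rest
            if delivered and seq == expected:
                expected += 1
        # the cumulative ACK tells the sender how far the receiver got
        base = expected
    return sent

# Frame 1's first copy is dropped, so frames 1 and 2 are sent again:
print(go_back_n(4, 3, drop_attempts={1}))  # [0, 1, 2, 1, 2, 3]
```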
Selective Repeat ARQ:
In Go-Back-N ARQ, it is assumed that the receiver does not have any buffer space for its window size
and has to process each frame as it comes. This forces the sender to retransmit all the frames which
are not acknowledged.

In Selective Repeat ARQ, the receiver, while keeping track of sequence numbers, buffers the frames
in memory and sends a NACK only for the frame which is missing or damaged.

The sender, in this case, sends only the packet for which the NACK is received.
Multiple access protocols:
When a sender and receiver have a dedicated link to transmit data packets, data link control is
enough to handle the channel. But suppose there is no dedicated path to communicate or transfer the
data between two devices. In that case, multiple stations access the channel and transmit data
over it simultaneously, which may create collisions and crosstalk. Hence, a multiple access
protocol is required to reduce collisions and avoid crosstalk between the channels.

The types of multiple access protocols, subdivided into their different categories, are as follows:

Random Access Protocol:

In this protocol, all stations have equal priority to send data over the channel. In a random access
protocol, no station depends on another station, nor does any station control another. Depending
on the channel's state (idle or busy), each station transmits its data frame. However, if more than
one station sends data over the channel at the same time, there may be a collision or data
conflict. Due to the collision, the data frame packets may be lost or changed, and hence not
received correctly at the receiver end.

Following are the different methods of random-access protocols for broadcasting frames on the
channel:
o Aloha
o CSMA
o CSMA/CD
o CSMA/CA

ALOHA Random Access Protocol:

It is designed for wireless LAN (Local Area Network) but can also be used on a shared wired medium
to transmit data. With this method, any station can transmit data across the network at any time,
whenever a data frame is available for transmission.

Aloha Rules:
1. Any station can transmit data to a channel at any time.
2. It does not require any carrier sensing.
3. Collision and data frames may be lost during the transmission of data through multiple
stations.
4. There is no collision detection; instead, the sender relies on acknowledgment of the frames.
5. It requires retransmission of data after some random amount of time.

Pure Aloha:
Whenever data is available for sending at a station, we use Pure
Aloha. In pure Aloha, each station transmits data on the channel without checking whether the
channel is idle or busy, so collisions may occur and data frames may be lost. After transmitting a
data frame, the station waits for the receiver's acknowledgment. If no acknowledgment arrives within
the specified time, the station assumes the frame has been lost or destroyed, waits a random amount
of time, called the backoff time (Tb), and retransmits the frame. This repeats until all the data
are successfully delivered to the receiver.

1. The total vulnerable time of pure Aloha is 2 * Tfr.
2. Maximum throughput occurs when G = 1/2 and is 18.4%.
3. The probability of successful transmission of a data frame is S = G * e^(-2G).

As we can see in the figure above, there are four stations accessing a shared channel and
transmitting data frames. Some frames collide because most stations send their frames at the same
time. Only two frames, frame 1.1 and frame 2.2, are successfully transmitted to the receiver end;
the other frames are lost or destroyed. Whenever two frames occupy the shared channel
simultaneously, a collision occurs and both frames are damaged. Even if only the first bit of a new
frame overlaps with the last bit of a frame that is almost finished, both frames are completely
destroyed, and both stations must retransmit their data frames.
Slotted Aloha:
Slotted Aloha is designed to improve on pure Aloha's efficiency, because
pure Aloha has a very high probability of frame collision. In slotted Aloha, the shared channel is
divided into fixed time intervals called slots. If a station wants to send a frame on the shared
channel, the frame can be sent only at the beginning of a slot, and only one frame may be sent in
each slot. If a station misses the beginning of a slot, it must wait until the beginning of the next
slot. However, a collision is still possible when two or more stations try to send a frame at the
beginning of the same time slot.

1. Maximum throughput occurs in slotted Aloha when G = 1, and is about 37%.
2. The probability of successfully transmitting a data frame in slotted Aloha is S = G * e^(-G).
3. The total vulnerable time required in slotted Aloha is Tfr.
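The two throughput formulas can be checked numerically, using the standard expressions S = G·e^(−2G) for pure Aloha and S = G·e^(−G) for slotted Aloha:

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G): expected successful frames per frame time."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G)."""
    return G * math.exp(-G)

# maxima quoted in the text: 18.4% at G = 1/2 and about 37% at G = 1
print(round(pure_aloha_throughput(0.5), 3))    # 0.184
print(round(slotted_aloha_throughput(1.0), 3)) # 0.368
```

The maxima work out to 1/(2e) ≈ 18.4% for pure Aloha and 1/e ≈ 36.8% for slotted Aloha, matching the rounded figures above.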

CSMA (Carrier Sense Multiple Access):

CSMA is a media access protocol in which a station senses the traffic on the channel (idle or busy)
before transmitting data. If the channel is idle, the station can send data on the channel;
otherwise, it must wait until the channel becomes idle. This reduces the chance of a collision on
the transmission medium.

CSMA Access Modes:

1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel and, if
the channel is idle, sends the data immediately. Otherwise, it continuously monitors the channel and
transmits the frame unconditionally as soon as the channel becomes idle.
Non-Persistent: In this access mode of CSMA, each node must sense the channel before transmitting,
and if the channel is idle, it sends the data immediately. Otherwise, the station waits for a random
time (it does not monitor continuously), and when the channel is then found idle, it transmits the
frame.

P-Persistent: This mode combines the 1-persistent and non-persistent modes. Each node senses the
channel and, if the channel is idle, sends a frame with probability p. With probability q = 1 - p,
it defers to the next time slot and repeats the process.

O-Persistent: This method defines a priority order among the stations before transmission on the
shared channel. If the channel is found idle, each station waits for its assigned turn to transmit
its data.

CSMA/ CD:
It is a carrier sense multiple access / collision detection network protocol for transmitting
data frames. The CSMA/CD protocol works at the medium access control layer. A station first
senses the shared channel before broadcasting and, if the channel is idle, transmits a frame while
monitoring whether the transmission succeeds. If the frame is successfully received, the
station sends the next frame. If a collision is detected, the station sends a jam/stop
signal on the shared channel to terminate the data transmission, then waits a random time
before resending the frame.
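The random wait after a collision is classically implemented as binary exponential backoff; a sketch (the 51.2 µs slot time is the classic 10 Mbps Ethernet value, given here purely for illustration):

```python
import random

def backoff_delay(collisions, slot_time=51.2e-6):
    """Binary exponential backoff, as classic Ethernet CSMA/CD uses:
    after the n-th collision, wait k slot times, with k drawn
    uniformly from 0 .. 2^min(n, 10) - 1.  The 51.2 microsecond slot
    time is the classic 10 Mbps Ethernet value (an assumption here)."""
    k = random.randrange(2 ** min(collisions, 10))
    return k * slot_time
```

Doubling the range after each collision spreads the retransmissions of contending stations further apart, making a repeat collision progressively less likely.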
CSMA/ CA:
It is a carrier sense multiple access / collision avoidance network protocol for
carrier transmission of data frames. It also works at the medium access control layer.
When a station sends a data frame on the channel, it listens to the channel to check whether the
transmission is clear. If the station receives only a single signal (its own), the data frame has
been successfully transmitted to the receiver. But if it receives two signals (its own and another
with which its frame has collided), a collision has occurred on the shared channel. Thus the sender
detects a collision of the frame from the signal it receives back.

Following are the methods used in the CSMA/ CA to avoid the collision:

Interframe space: In this method, the station waits for the channel to become idle, and when it
finds the channel idle, it does not send the data immediately. Instead, it waits for a period of
time called the interframe space, or IFS. The IFS duration is also used to define the priority of a
station.

Contention window: In the contention window method, the total time is divided into slots. When the
station/sender is ready to transmit a data frame, it chooses a random number of slots as its wait
time. If the channel becomes busy again before the timer expires, it does not restart the entire
process; it merely pauses the timer and resumes it when the channel becomes idle, then sends the
data packet.

Acknowledgment: In the acknowledgment method, the sender retransmits the data frame on the shared
channel if the acknowledgment is not received within the timeout period.

The data link layer uses error control techniques to ensure that frames, i.e. bit streams of data,
are transmitted from the source to the destination with a certain extent of accuracy.

Errors:
When bits are transmitted over the computer network, they are subject to get corrupted due to
interference and network problems. The corrupted bits lead to spurious data being received by the
destination and are called errors.

Types of Errors:
Errors can be of three types, namely single bit errors, multiple bit errors, and burst errors.

• Single bit error − In the received frame, only one bit has been corrupted, i.e. either changed
from 0 to 1 or from 1 to 0.
• Multiple bits error − In the received frame, more than one bit is corrupted.

• Burst error − In the received frame, multiple consecutive bits are corrupted.
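A small sketch that classifies an error by comparing the sent and received bit strings (the function name and the string representation are illustrative):

```python
def classify_error(sent, received):
    """Classify the error in `received` relative to `sent`
    (equal-length bit strings).  Follows the three types above."""
    flipped = [i for i, (a, b) in enumerate(zip(sent, received)) if a != b]
    if not flipped:
        return "none"
    if len(flipped) == 1:
        return "single bit"
    # corrupted bits all consecutive -> burst, otherwise multiple bits
    if flipped == list(range(flipped[0], flipped[-1] + 1)):
        return "burst"
    return "multiple bits"

print(classify_error("1010", "0010"))  # single bit
print(classify_error("1010", "0110"))  # burst
```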

Error Control:
Error control can be done in two ways

• Error detection − Error detection involves checking whether any error has occurred or not.
The number of error bits and the type of error does not matter.

• Error correction − Error correction involves ascertaining the exact number of bits that has
been corrupted and the location of the corrupted bits.
For both error detection and error correction, the sender needs to send some additional bits along
with the data bits. The receiver performs necessary checks based upon the additional redundant bits.
If it finds that the data is free from errors, it removes the redundant bits before passing the message
to the upper layers.

Redundancy:
• The central concept in detecting or correcting errors is redundancy. To be able to detect
or correct errors, we need to send some extra bits with our data. These redundant bits
are added by the sender and removed by the receiver.
• Their presence allows the receiver to detect or correct corrupted bits.

Coding:
o Redundancy is achieved through various coding schemes. The sender adds
redundant bits through a process that creates a relationship between
the redundant bits and the actual data bits.
o The receiver checks the relationships between the two sets of bits to detect or
correct the errors. The ratio of redundant bits to the data bits and the
robustness of the process are important factors in any coding scheme.

BLOCK CODING
• In block coding, we divide our message into blocks, each of k bits, called datawords.
We add r redundant bits to each block to make the length n = k + r. The resulting n-bit
blocks are called codewords.
• Now we have a set of datawords, each of size k, and a set of codewords, each of size n.
With k bits, we can create 2^k possible datawords; with n bits, we can create 2^n possible
codewords.

• Since n > k, the number of possible codewords is larger than the number of possible
datawords. The block coding process is one-to-one; the same dataword is always encoded
as the same codeword. This means that 2^n - 2^k codewords are not used. We call these
codewords invalid or illegal.

How can errors be detected by using block coding?

If the following two conditions are met, the receiver can detect a change in the original codeword:

➢ The receiver has (or can find) a list of valid codewords.

➢ The original codeword has changed to an invalid one.

• The sender creates codewords out of datawords by using a generator that applies the rules
and procedures of encoding (discussed later). Each codeword sent to the receiver may
change during transmission.
• If the received codeword is the same as one of the valid codewords, the word is accepted;
the corresponding dataword is extracted for use. If the received codeword is not valid, it is
discarded.
• However, if the codeword is corrupted during transmission but the received word still
matches a valid codeword, the error remains undetected. This type of coding can detect
only single errors. Two or more errors may remain undetected.
Error Correction using Block Coding:
• In error detection, the receiver needs to know only that the received codeword is invalid; in
error correction the receiver needs to find (or guess) the original codeword sent.
• We can say that we need more redundant bits for error correction than for error detection

Example of Error correction using Block coding


• We add 3 redundant bits to the 2-bit dataword to make 5-bit codewords.
• Assume the dataword is 01. The sender consults the table (or uses an algorithm) to create
the codeword 01011. The codeword is corrupted during transmission, and 01001 is
received (error in the second bit from the right).
• First, the receiver finds that the received codeword is not in the table. This means an error
has occurred. (Detection must come before correction.) The receiver, assuming that there
is only 1 bit corrupted, uses the following strategy to guess the correct dataword.

Datawords Codewords

00 00000

01 01011

10 10101

11 11110

• Comparing the received codeword with the first codeword in the table (01001 versus
00000), the receiver decides that the first codeword is not the one that was sent because
there are two different bits.
• By the same reasoning, the original codeword cannot be the third or fourth one in the table.
• The original codeword must be the second one in the table because this is the only one
that differs from the received codeword by 1 bit. The receiver replaces 01001 with 01011
and consults the table to find the dataword 01.
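This minimum-distance guessing strategy can be expressed directly in code, using the table above:

```python
# dataword -> codeword table from the example above
CODE = {"00": "00000", "01": "01011", "10": "10101", "11": "11110"}

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def correct(received):
    """Decode by minimum Hamming distance: choose the valid codeword
    closest to the received word (assumes at most one corrupted bit)."""
    best = min(CODE.values(), key=lambda cw: hamming_distance(cw, received))
    dataword = next(d for d, cw in CODE.items() if cw == best)
    return best, dataword

print(correct("01001"))   # ('01011', '01'), as in the example
```

The received word 01001 is at distance 2, 1, 3, and 4 from the four codewords, so 01011 is the unique nearest codeword and 01 is recovered.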

Cyclic Redundancy Check (CRC):


• The Cyclic Redundancy Check (CRC) is one of the most powerful methods for error detection.
o Given a k-bit message, the transmitter creates an (n - k)-bit sequence
called the frame check sequence (FCS).
o The resulting frame, consisting of n bits, is exactly divisible by some predetermined
number.
o Modulo-2 arithmetic is used: binary addition with no carries, just like the XOR
operation.
• The redundant bits used by CRC are derived by dividing the data unit by a predetermined
divisor; the remainder is the CRC.
Qualities of CRC
• It should be exactly one bit shorter than the divisor.
• Appending it to the end of the data unit should make the resulting bit sequence exactly
divisible by the divisor.

CRC generator and checker:

Process:
• A string of n 0s is appended to the data unit, where n is one less than the number of
bits in the predetermined divisor.
• The new data unit is divided by the divisor using binary (modulo-2) division;
the remainder resulting from the division is the CRC.
• The n-bit CRC obtained in step 2 replaces the appended 0s at the end of the data unit.

Example:
Message D = 1010001101 (10 bits)
Predetermined divisor P = 110101 (6 bits)
FCS R = to be calculated (5 bits)
Hence n = 15, k = 10, and (n - k) = 5.
The message is multiplied by 2^5 (shifted left by 5 bits), giving
101000110100000.
This product is divided by P.

❖ The remainder R = 01110 is added to 2^5 D to give T =
101000110101110, which is sent.

Suppose that there are no errors and the receiver gets T intact. The received frame is divided
by P.

❖ Because there is no remainder, there are no errors.
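The whole example can be verified with a short modulo-2 division routine (a sketch; real implementations use table-driven or bitwise-register CRCs):

```python
def crc_remainder(bits, divisor):
    """Modulo-2 (XOR) long division on bit strings; returns the
    final remainder, i.e. the last len(divisor) - 1 bits."""
    data = list(bits)
    n = len(divisor)
    for i in range(len(data) - n + 1):
        if data[i] == "1":
            for j in range(n):
                data[i + j] = "0" if data[i + j] == divisor[j] else "1"
    return "".join(data[-(n - 1):])

D, P = "1010001101", "110101"
R = crc_remainder(D + "0" * (len(P) - 1), P)   # append 5 zeros, divide
T = D + R                                      # transmitted frame
print(R, T)                 # 01110 101000110101110
print(crc_remainder(T, P))  # 00000 -> the receiver sees no remainder
```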

Hamming Code:
• Hamming code is a linear code that can detect up to two simultaneous bit errors and
correct single-bit errors.
• In Hamming code, the source encodes the message by adding redundant bits. These
redundant bits are inserted and generated at certain positions in the message to
accomplish the error detection and correction process.

Parity bits:

➢ The bit which is appended to the original data of binary bits so that the total number of
1s is even or odd.

Even parity:

➢ To check for even parity, if the total number of 1s is even, then the value of the parity bit
is 0. If the total number of 1s occurrences is odd, then the value of the parity bit is 1.

Odd Parity:

➢ To check for odd parity, if the total number of 1s is even, then the value of parity bit is 1.
If the total number of 1s is odd, then the value of parity bit is 0.
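Both parity rules can be captured in one small helper (illustrative naming):

```python
def parity_bit(data, odd=False):
    """Parity bit to append to `data` (a bit string) so that the total
    number of 1s is even (odd=False) or odd (odd=True)."""
    ones = data.count("1")
    if odd:
        return "1" if ones % 2 == 0 else "0"
    return "1" if ones % 2 == 1 else "0"

print(parity_bit("1011"))            # 1 (even parity: 3 ones -> add 1)
print(parity_bit("1011", odd=True))  # 0
```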
Algorithm of Hamming code:

• A message of d data bits is combined with r redundant bits to form a block of d + r bits.
• The location of each of the (d + r) digits is assigned a decimal position value.
• The r bits are placed at the positions numbered by powers of 2: 1, 2, 4, ..., 2^(r-1).
• At the receiving end, the parity bits are recalculated. The decimal value of the
recalculated parity bits gives the position of the error.

Example:

Suppose the original data is 1010 which is to be sent.

Total number of data bits 'd' = 4


Number of redundant bits r: 2^r >= d + r + 1
2^r >= 4 + r + 1
Therefore, r = 3 is the smallest value that satisfies the above relation.
Total number of bits = d + r = 4 + 3 = 7.
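The relation 2^r ≥ d + r + 1 can be solved by simple search:

```python
def redundant_bits(d):
    """Smallest r satisfying 2^r >= d + r + 1 (the Hamming bound
    for single-error correction)."""
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r

print(redundant_bits(4))   # 3, as in the example
```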

Determining the position of the redundant bits:

The number of redundant bits is 3. The three bits are represented by r1, r2, r4. The positions of
the redundant bits correspond to powers of 2; therefore, their positions are 2^0, 2^1, 2^2, i.e.,
positions 1, 2, and 4.

➢ The position of r1 = 1
➢ The position of r2 = 2
➢ The position of r4 = 4

Representation of Data on the addition of parity bits:

Determining the Parity bits:

✓ Determining the r1 bit

o The r1 bit is calculated by performing a parity check on the bit positions whose
binary representation includes a 1 in the first (least significant) position.
We observe from the above figure that the bit positions that include a 1 in the first position are
1, 3, 5, 7. Now, we perform the even-parity check at these bit positions. The total number of
1s at the bit positions covered by r1 is even; therefore, the value of the r1 bit is 0.

✓ Determining r2 bit

o The r2 bit is calculated by performing a parity check on the bit positions whose
binary representation includes a 1 in the second position.

We observe from the above figure that the bit positions that include a 1 in the second position
are 2, 3, 6, 7. Now, we perform the even-parity check at these bit positions. The total number
of 1s at the bit positions covered by r2 is odd; therefore, the value of the r2 bit is 1.

✓ Determining r4 bit

o The r4 bit is calculated by performing a parity check on the bit positions whose
binary representation includes a 1 in the third position.

We observe from the above figure that the bit positions that include a 1 in the third position
are 4, 5, 6, 7. Now, we perform the even-parity check at these bit positions. The total number
of 1s at the bit positions covered by r4 is even; therefore, the value of the r4 bit is 0.

Data transferred is given below:


Suppose the 4th bit is changed from 0 to 1 at the receiving end, then parity bits are
recalculated.

R1 bit:

The bit positions of the r1 bit are 1,3,5,7

We observe from the above figure that the bits at positions 7, 5, 3, 1 are 1100. Performing the
even-parity check, the total number of 1s at the r1 bit positions is even. Therefore, the value
of r1 is 0.

R2 bit:

The bit positions of r2 bit are 2,3,6,7.

We observe from the above figure that the bits at positions 7, 6, 3, 2 are 1001. Performing the
even-parity check, the total number of 1s at the r2 bit positions is even. Therefore, the value
of r2 is 0.

R4 bit:

The bit positions of r4 bit are 4,5,6,7.

We observe from the above figure that the bits at positions 7, 6, 5, 4 are 1011. Performing the
even-parity check, the total number of 1s at the r4 bit positions is odd. Therefore, the value of
r4 is 1.
o The binary representation of redundant bits, i.e., r4 r2 r1 is 100, and its corresponding
decimal value is 4. Therefore, the error occurs in a 4th bit position. The bit value must be
changed from 1 to 0 to correct the error.
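The complete worked example, encoding data 1010 into a 7-bit codeword and locating the flipped bit from the recalculated parity bits, can be reproduced as follows (bit layout d7 d6 d5 r4 d3 r2 r1 as above; function names are illustrative):

```python
def hamming7_encode(data):
    """Encode 4 data bits (string, read as d7 d6 d5 d3) into the 7-bit
    codeword laid out as positions 7..1 = d7 d6 d5 r4 d3 r2 r1,
    using even parity as in the worked example."""
    d7, d6, d5, d3 = (int(b) for b in data)
    r1 = (d7 + d5 + d3) % 2     # covers positions 1, 3, 5, 7
    r2 = (d7 + d6 + d3) % 2     # covers positions 2, 3, 6, 7
    r4 = (d7 + d6 + d5) % 2     # covers positions 4, 5, 6, 7
    bits = {7: d7, 6: d6, 5: d5, 4: r4, 3: d3, 2: r2, 1: r1}
    return "".join(str(bits[p]) for p in range(7, 0, -1))

def error_position(code):
    """Recalculate the parity checks; the value r4 r2 r1 read as a
    binary number is the position of a single-bit error (0 = none)."""
    b = {p: int(code[7 - p]) for p in range(1, 8)}
    c1 = (b[1] + b[3] + b[5] + b[7]) % 2
    c2 = (b[2] + b[3] + b[6] + b[7]) % 2
    c4 = (b[4] + b[5] + b[6] + b[7]) % 2
    return 4 * c4 + 2 * c2 + c1

codeword = hamming7_encode("1010")
print(codeword)                   # 1010010
corrupted = "1011010"             # 4th bit flipped, as in the example
print(error_position(corrupted))  # 4
```

Flipping the bit back at the reported position recovers the original codeword, which is exactly the correction step described above.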
NETWORK LAYER
Layer-3 in the OSI model is called Network layer. Network layer manages options pertaining
to host and network addressing, managing sub-networks, and internetworking.
The network layer takes responsibility for routing packets from source to destination within
or outside a subnet. Two different subnets may have different addressing schemes or
incompatible addressing types. Similarly, two different subnets may operate on different
protocols that are not compatible with each other. The network layer has the responsibility
to route packets from source to destination, mapping between the different addressing
schemes and protocols.
The main functions performed by the network layer are:
o Routing: When a packet arrives at a router's input link, the router moves the
packet to the appropriate output link. For example, a packet travelling from source S1 to
destination S2 must be forwarded at each router, such as R1, to the next router on the path to S2.
o Logical Addressing: The data link layer implements the physical addressing and
network layer implements the logical addressing. Logical addressing is also used to
distinguish between source and destination system. The network layer adds a header
to the packet which includes the logical addresses of both the sender and the receiver.
o Internetworking: This is the main role of the network layer that it provides the logical
connection between different types of networks.
o Fragmentation: The fragmentation is a process of breaking the packets into the
smallest individual data units that travel through different networks.

Forwarding & Routing:


In Network layer, a router is used to forward the packets. Every router has a forwarding
table. A router forwards a packet by examining a packet's header field and then using the
header field value to index into the forwarding table. The value stored in the forwarding
table corresponding to the header field value indicates the router's outgoing interface link
to which the packet is to be forwarded.
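The lookup described above is, in real routers, a longest-prefix match; a sketch using Python's ipaddress module with a made-up three-entry table (prefixes and interface names are purely illustrative):

```python
import ipaddress

# illustrative forwarding table: prefix -> outgoing interface
TABLE = {
    ipaddress.ip_network("200.23.16.0/21"): "if0",
    ipaddress.ip_network("200.23.18.0/24"): "if1",
    ipaddress.ip_network("0.0.0.0/0"): "if2",   # default route
}

def forward(dst):
    """Longest-prefix match: of all table entries containing dst,
    choose the most specific (longest) prefix."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net in TABLE if addr in net),
               key=lambda net: net.prefixlen)
    return TABLE[best]

print(forward("200.23.18.5"))   # if1 (the /24 wins over the /21)
print(forward("8.8.8.8"))       # if2 (only the default route matches)
```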

Internetworking Devices:

➢ Repeater – A repeater operates at the physical layer. Its job is to regenerate the
signal over the same network before the signal becomes too weak or corrupted so as
to extend the length to which the signal can be transmitted over the same network.
o An important point to be noted about repeaters is that they do not amplify
the signal. When the signal becomes weak, they copy the signal bit by bit and
regenerate it at the original strength. It is a 2-port device.

➢ Hub – A hub is basically a multiport repeater. A hub connects multiple wires coming
from different branches, for example, the connector in star topology which connects
different stations.
o Hubs cannot filter data, so data packets are sent to all connected
devices. Also, they do not have the intelligence to find out the best path for
data packets which leads to inefficiencies and wastage.
Types of Hub:
• Active Hub: - These are the hubs that have their own power supply and can
clean, boost, and relay the signal along with the network. It serves both as a
repeater as well as a wiring centre. These are used to extend the maximum
distance between nodes.
• Passive Hub: - These are the hubs that collect wiring from nodes and power
supply from the active hub. These hubs relay signals onto the network without
cleaning and boosting them and can’t be used to extend the distance between
nodes.
• Intelligent Hub: - It works like active hubs and includes remote management
capabilities. They also provide flexible data rates to network devices. It also
enables an administrator to monitor the traffic passing through the hub and to
configure each port in the hub.

➢ Bridge – A bridge operates at the data link layer. A bridge is a repeater with the added
functionality of filtering content by reading the MAC addresses of the source and
destination. It is also used for interconnecting two LANs working on the same
protocol. It has a single input and a single output port, making it a 2-port device.
Types of Bridges:
• Transparent Bridges: - These are the bridge in which the stations are
completely unaware of the bridge’s existence i.e. whether or not a bridge is
added or deleted from the network, reconfiguration of the stations is
unnecessary. These bridges make use of two processes i.e. bridge forwarding
and bridge learning.
• Source Routing Bridges: - In these bridges, routing operation is performed by
the source station and the frame specifies which route to follow. The host can
discover the frame by sending a special frame called the discovery frame,
which spreads through the entire network using all possible paths to the
destination.

➢ Switch – A switch is a multiport bridge with a buffer and a design that can boost its
efficiency (a large number of ports imply less traffic) and performance. A switch is a
data link layer device.
o The switch can perform error checking before forwarding data, which makes
it very efficient as it does not forward packets that have errors and forward
good packets selectively to the correct port only.
o In other words, the switch divides the collision domain of hosts, but broadcast
domain remains the same.

➢ Routers– A router is a device like a switch that routes data packets based on their
IP addresses. The router is mainly a Network Layer device.
o Routers normally connect LANs and WANs together and have a dynamically
updating routing table based on which they make decisions on routing the
data packets.
o Routers divide the broadcast domains of the hosts connected through them.

➢ Gateway – A gateway, as the name suggests, is a passage to connect two networks
together that may work upon different networking models. They basically work as the
messenger agents that take data from one system, interpret it, and transfer it to
another system.
o Gateways are also called protocol converters and can operate at any network
layer.
o Gateways are generally more complex than switches or routers.

➢ Brouter – Also known as a bridging router, it is a device that combines the features
of both a bridge and a router. It can work either at the data link layer or at the network layer.
o Working as a router, it is capable of routing packets across networks, and
working as the bridge, it is capable of filtering local area network traffic.

➢ NIC – NIC or network interface card is a network adapter that is used to connect the
computer to the network. It is installed in the computer to establish a LAN.
o It has a unique id that is written on the chip, and it has a connector to connect
the cable to it. The cable acts as an interface between the computer and
router or modem.
o NIC card is a layer 2 device which means that it works on both physical and
data link layer of the network model.

IP Addressing and Subnetting:


A subnet mask is used to divide an IP address into two parts. One part identifies the host
(computer), the other part identifies the network to which it belongs.
IP addresses:
An IP address is used globally to refer to the logical address in the network layer of the TCP/IP
protocol. Internet addresses are 32 bits in length; this gives us a maximum of 2^32
addresses. These addresses are referred to as IPv4 (IP version 4) addresses or popularly as IP
addresses.
IPV4 addresses

• An IPv4 address is a 32-bit address that uniquely and universally defines the
connection of a device (for example, a computer or a router) to the Internet.
✓ They are unique so that each address defines only one connection to the
Internet. Two devices on the Internet can never have the same IPV4 address
at the same time.
• On the other hand, if a device operating at the network layer has m connections to
the Internet, it needs to have m addresses, for example, a router.
• The IPv4 addresses are universal in the sense that the addressing system must be
accepted by any host that wants to be connected to the Internet. That means global
addressing.
Address Space

• IPv4 has a certain address space. An address space is the total number of addresses
used by the protocol. If a protocol uses N bits to define an address, the address space
is 2^N.
✓ IPv4 uses a 32-bit address format, which means that the address space is 2^32, or
4,294,967,296 addresses.
Notations:
There are two notations to show an IPv4 address:
1. Binary notation
2. Dotted decimal notation
1) Binary Notation
In binary notation, the IPv4 address is displayed as 32 bits. Each octet is often referred to as
a byte, so it is common to hear an IPv4 address referred to as a 4-byte address. The following is
an example of an IPv4 address in binary notation: 01110111 10010101 00000001 00000011
2) Dotted-Decimal Notation
IPv4 addresses are usually written in decimal form with a decimal point (dot) separating the
bytes, since this is easier to read. The following is an example: 119.149.1.3 (this is the same
address as the binary example above, just in a different notation).
N.B: Each number in dotted-decimal notation is a value ranging from 0 to 255.
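Converting between the two notations is a one-liner per octet (illustrative helper name):

```python
def to_dotted_decimal(binary):
    """Convert a 32-bit address written as four space-separated
    8-bit octets into dotted-decimal notation."""
    return ".".join(str(int(octet, 2)) for octet in binary.split())

print(to_dotted_decimal("01110111 10010101 00000001 00000011"))
# 119.149.1.3
```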
Classful Addressing:
The 32-bit IP address is divided into five sub-classes. These are:
1. Class A: IP address belonging to class A are assigned to the networks that contain a
large number of hosts.

2. Class B: IP address belonging to class B are assigned to the networks that ranges from
medium-sized to large-sized networks.

3. Class C: IP address belonging to class C are assigned to small-sized networks.

4. Class D: IP address belonging to class D are reserved for multi-casting. The higher order
bits of the first octet of IP addresses belonging to class D are always set to 1110. The
remaining bits are for the address that interested hosts recognize.
✓ Class D does not possess any sub-net mask. IP addresses belonging to class D
ranges from 224.0.0.0 – 239.255.255.255.

5. Class E: IP addresses belonging to class E are reserved for experimental and research
purposes. IP addresses of class E ranges from 240.0.0.0 – 255.255.255.254.
✓ This class doesn’t have any sub-net mask. The higher order bits of first octet of
class E are always set to 1111.

Each of these classes has a valid range of IP addresses. Classes D and E are reserved for
multicast and experimental purposes respectively. The order of bits in the first octet
determines the class of an IP address. An IPv4 address is divided into two parts:
a) Network ID
b) Host ID
The class of IP address is used to determine the bits used for network ID and host ID and the
number of total networks and hosts possible in that particular class.
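The class of an address can be determined from its first octet (a sketch; the range boundaries follow the classful rules above):

```python
def address_class(ip):
    """Classful category of a dotted-decimal IPv4 address,
    decided by the leading bits of the first octet."""
    first = int(ip.split(".")[0])
    if first < 128:
        return "A"      # leading bit 0
    if first < 192:
        return "B"      # leading bits 10
    if first < 224:
        return "C"      # leading bits 110
    if first < 240:
        return "D"      # leading bits 1110
    return "E"          # leading bits 1111

print(address_class("200.23.49.1"))   # C
```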
Subnetting:
IP Subnetting is a process of dividing a large IP network into smaller IP networks. In
Subnetting we create multiple small manageable networks from a single large IP network.

• Subnetting allows us to create smaller networks from a single large network which not
only fulfil our hosts’ requirement but also offer several other networking benefits.

Consider a network:
200.23.49.0
Divide it into two sub-networks:
200.23.49.XXXXXXXX → 200.23.49.0XXXXXXX and 200.23.49.1XXXXXXX
First subnet: 200.23.49.0 to 200.23.49.127
subnet id – 200.23.49.0, broadcast address – 200.23.49.127
Second subnet: 200.23.49.128 to 200.23.49.255
subnet id – 200.23.49.128, broadcast address – 200.23.49.255
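The same split can be reproduced with Python's ipaddress module:

```python
import ipaddress

# split 200.23.49.0/24 into two /25 subnets, as in the example above
net = ipaddress.ip_network("200.23.49.0/24")
for subnet in net.subnets(prefixlen_diff=1):
    print(subnet, subnet.network_address, subnet.broadcast_address)
# 200.23.49.0/25 200.23.49.0 200.23.49.127
# 200.23.49.128/25 200.23.49.128 200.23.49.255
```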
Network Layer Protocols:
Internet Protocol Version 4 (IPV4):
The Internet Protocol version 4 (IPv4) is the delivery mechanism used by the TCP/IP protocols.
Following figure shows the position of IPv4 in the suite:

Characteristics of IPv4 protocol


• IPv4 is an unreliable and connectionless datagram protocol, providing a best-effort
delivery service.
• The term best-effort means that IPv4 provides no error control or flow control
(except for error detection on the header).
• IPv4 assumes the unreliability of the underlying layers and does its best to get a
transmission through to its destination, but with no guarantees.
• If reliability is important, IPv4 must be paired with a reliable protocol such as TCP
• IPv4 is also a connectionless packet-switching network that uses the datagram
approach.
✓ This means that each datagram is handled independently, and each datagram
can follow a different route to the destination.
✓ This implies that datagrams sent by the same source to the same destination
could arrive out of order. Also, some could be lost or corrupted during
transmission.
✓ IPv4 relies on a higher-level protocol like TCP to take care of all these problems.

Internet Protocol Version 6 (IPv6)


Exhaustion of IPv4 addresses gave birth to the next-generation Internet Protocol, version 6. IPv6
addresses its nodes with a 128-bit-wide address, providing plenty of address space for the future,
for use across the entire planet and beyond.
• IPv6 has introduced Anycast addressing but has removed the concept of
broadcasting.
• IPv6 enables devices to self-acquire an IPv6 address and communicate within their
subnet. This auto-configuration removes the dependency on Dynamic Host
Configuration Protocol (DHCP) servers. This way, even if the DHCP server on a
subnet is down, the hosts can still communicate with each other.
• IPv6 provides new feature of IPv6 mobility. Mobile IPv6 equipped machines can roam
around without the need of changing their IP addresses.
• IPv6 is still in a transition phase and is expected to replace IPv4 completely in the coming
years. At present, only some networks run on IPv6.
• There are some transition mechanisms available that allow IPv6-enabled networks to
communicate and roam easily across different IPv4 networks. These are:
▪ Dual stack implementation
▪ Tunneling
▪ NAT-PT

Internet Control Message Protocol (ICMP):


Since IP does not have an inbuilt mechanism for sending error and control messages, it
depends on the Internet Control Message Protocol (ICMP) to provide error control.
• ICMP is mainly used to determine whether or not data is reaching its intended
destination in a timely manner.
• It is used for reporting errors and management queries.
• It is a supporting protocol used by network devices like routers for sending
error messages and operational information, e.g. that the requested service is not
available or that a host or router could not be reached.
• ICMP is crucial for error reporting and testing, but it can also be used in distributed
denial-of-service (DDoS) attacks.

What is ICMP used for?

• The primary purpose of ICMP is for error reporting.

o When two devices connect over the Internet, the ICMP generates errors to
share with the sending device in the event that any of the data did not get to
its intended destination. For example, if a packet of data is too large for a
router, the router will drop the packet and send an ICMP message back to the
original source for the data.

• A secondary use of the ICMP protocol is to perform network diagnostics; the commonly
used terminal utilities traceroute and ping both operate using ICMP.
o The traceroute utility is used to display the routing path between two
Internet devices. The routing path is the actual physical path of connected
routers that a request must pass through before it reaches its destination. The
journey between one router and another is known as a ‘hop,’ and a traceroute
also reports the time required for each hop along the way. This can be useful
for determining sources of network delay.
o The ping utility is a simplified version of traceroute. A ping will test the speed
of the connection between two devices and report exactly how long it takes a
packet of data to reach its destination and come back to the sender’s device.
The ICMP echo-request and echo-reply messages are commonly used for the
purpose of performing a ping.
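As an illustration of the echo-request format that ping uses, here is a minimal Python sketch that builds an ICMP echo-request message (type 8, code 0) with its RFC 1071 Internet checksum. The identifier, sequence number, and payload below are arbitrary; actually sending the message would require a raw socket and elevated privileges, which this sketch avoids.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold any carry back in
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    """ICMP echo request: type=8, code=0; checksum covers the whole message."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # checksum field = 0
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_echo_request(0x1234, 1, b"ping")
# A receiver validates by checksumming the whole message: the result must be 0
print(internet_checksum(pkt))   # 0
```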

How does ICMP work?


Unlike the Internet Protocol (IP), ICMP is not associated with a transport layer protocol such
as TCP or UDP. This makes ICMP a connectionless protocol: one device does not need to open
a connection with another device before sending an ICMP message.
✓ Normal IP traffic is sent using TCP, which means any two devices that exchange data
will first carry out a TCP handshake to ensure both devices are ready to receive data.
✓ ICMP does not open a connection in this way. The ICMP protocol also does not allow
for targeting a specific port on a device.

How is ICMP used in DDoS attacks?

➢ ICMP flood attack:


o A ping flood or ICMP flood is when the attacker attempts to overwhelm a
targeted device with ICMP echo-request packets. The target has to process
and respond to each packet, consuming its computing resources until
legitimate users cannot receive service.
➢ Ping of death attack
o A ping of death attack is when the attacker sends a ping larger than the
maximum allowable size for a packet to a targeted machine, causing the
machine to freeze or crash.
o The packet gets fragmented on the way to its target, but when the target
reassembles the packet into its original maximum-exceeding size, the size of
the packet causes a buffer overflow.

➢ Smurf attack
o In a Smurf attack, the attacker sends an ICMP packet with a spoofed source IP
address.
o Networking equipment replies to the packet, sending the replies to the
spoofed IP and flooding the victim with unwanted ICMP packets.

Address Mapping:
Address mapping is the process of determining the logical address of a device when its
physical address is known, and determining the physical address when its logical address is
known.
✓ Address mapping is required when a packet is routed from source host to destination
host in the same or different network.
✓ This address information can be achieved through static and dynamic mapping.

Static mapping: In static mapping, a table is created that associates a logical
address with a physical address.

Dynamic mapping: When a machine knows one of the two addresses (logical or physical), it can
use a dynamic mapping protocol to find the other one. Two protocols have been designed
for dynamic mapping:
1. Address Resolution Protocol (ARP)
2. Reverse Address Resolution Protocol (RARP)

Address Resolution Protocol (ARP):

It is a dynamic mapping protocol that is used to find out the physical address associated with
a logical address and then pass it to the data link layer. The working of ARP is shown below
in the figure:
• Firstly, the client broadcasts the ARP request packet to all the hosts in the network.
• This ARP request packet contains the logical and physical addresses of the client
and the IP address of the intended receiver.
• Each host receives this ARP request packet, but only the host whose IP address
matches (the authorized host) completes the ARP service.
• Finally, the authorized host sends the ARP response packet to the client, in which its
physical address is stored.
Note: ARP request is broadcast, and ARP response is unicast.
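The broadcast-request/unicast-response exchange can be mimicked with a toy simulation. All IP and MAC addresses below are invented for illustration; real ARP operates on Ethernet frames, not Python dictionaries.

```python
# Toy ARP exchange: a broadcast request that only the target host answers
hosts = {                      # IP -> MAC for every host on the LAN
    "10.0.0.1": "aa:aa:aa:aa:aa:01",
    "10.0.0.2": "aa:aa:aa:aa:aa:02",
    "10.0.0.3": "aa:aa:aa:aa:aa:03",
}

def arp_request(target_ip):
    """Broadcast: every host sees the request; only the owner replies (unicast)."""
    for ip, mac in hosts.items():      # every host receives the broadcast
        if ip == target_ip:            # the authorized host completes the service
            return mac                 # unicast ARP response with its MAC
    return None                        # no reply: address not on this network

print(arp_request("10.0.0.2"))   # aa:aa:aa:aa:aa:02
```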

Reverse Address Resolution Protocol (RARP):


It is a dynamic mapping protocol that is the opposite of ARP. It is used to find out the logical
address of the machine associated with the physical address. The working of RARP is
shown below in the figure:
• Firstly, the client broadcasts the RARP request packet to all the hosts in the network.
• The physical address of the client is stored in this RARP request packet.
• Each host receives this RARP request packet, but only the authorized host completes
the RARP service. This authorized host is called the RARP server.
• The RARP server sends the RARP response packet to the client, in which the client's
logical address is stored.
RARP is not used nowadays; it was replaced by BOOTP (Bootstrap Protocol), and BOOTP in turn
has been replaced by DHCP (Dynamic Host Configuration Protocol).

Dynamic Host Configuration Protocol (DHCP):


The Dynamic Host Configuration Protocol has been devised to provide static and dynamic
address allocation that can be manual or automatic as required.
• When a DHCP client sends a DHCP request to a DHCP server, the server first checks its
static database. If an entry with the requested physical address exists in the static
database, the permanent IP address of the client is returned.
• On the other hand, if the entry does not exist in the static database, the server selects
an IP address from the available pool, assigns the address to the client, and adds the
entry to the dynamic database.
• The dynamic aspect of DHCP is needed when a host moves from network to network
or is connected and disconnected from a network (as is a subscriber to a service
provider).
• DHCP provides temporary IP addresses for a limited time. The addresses assigned from
the pool are temporary addresses.
• The DHCP server issues a lease for a specific time. When the lease expires, the client
must either stop using the IP address or renew the lease.
• The server has the option to agree or disagree with the renewal. If the server
disagrees, the client stops using the address.

DHCP Operation:
DHCP provides an automated way to distribute and update IP addresses and other
configuration information on a network. A DHCP server provides this information to a DHCP
client through the exchange of a series of messages, known as the DHCP conversation or the
DHCP transaction displayed in Figure given below:
• A DHCP client sends a broadcast packet (DHCP Discover) to discover DHCP servers on
the LAN segment.
• The DHCP servers receive the DHCP Discover packet and respond with DHCP Offer
packets, offering IP addressing information.
• If the client receives the DHCP Offer packets from multiple DHCP servers, the first
DHCP Offer packet is accepted. The client responds by broadcasting a DHCP Request
packet, requesting network parameters from a single server.
• The DHCP server approves the lease with a DHCP Acknowledgement (DHCP Ack)
packet. The packet includes the lease duration and other configuration information.
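The server-side allocation logic described above can be sketched as follows: the static database is checked first, then an existing unexpired lease is renewed, and only then is a fresh address taken from the dynamic pool. The MAC addresses, IP addresses, and lease time below are made-up values, and a real DHCP server would carry out the full Discover/Offer/Request/Ack exchange over UDP ports 67 and 68.

```python
import time

class DhcpServer:
    """Toy DHCP allocator: static entries first, then a dynamic pool with leases."""
    def __init__(self, static_db, pool, lease_time=3600):
        self.static_db = static_db      # MAC -> permanent IP (static database)
        self.pool = list(pool)          # free dynamic addresses
        self.leases = {}                # MAC -> (IP, lease expiry time)
        self.lease_time = lease_time

    def request(self, mac, now=None):
        now = now if now is not None else time.time()
        if mac in self.static_db:       # static database checked first
            return self.static_db[mac]
        ip, expiry = self.leases.get(mac, (None, 0))
        if ip and expiry > now:         # renew an existing, unexpired lease
            self.leases[mac] = (ip, now + self.lease_time)
            return ip
        ip = self.pool.pop(0)           # allocate from the available pool
        self.leases[mac] = (ip, now + self.lease_time)
        return ip

server = DhcpServer({"aa:bb": "10.0.0.5"}, ["10.0.0.100", "10.0.0.101"])
print(server.request("aa:bb"))    # 10.0.0.5  (permanent, from the static database)
print(server.request("cc:dd"))    # 10.0.0.100 (temporary, leased from the pool)
```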

Routing algorithms:
Routing is the process of establishing the routes that data packets must follow to reach the
destination. In this process, a routing table is created which contains information regarding
the routes that data packets follow. Various routing algorithms are used for the purpose of
deciding which route an incoming data packet needs to be transmitted on to reach the
destination efficiently.
Classification of Routing Algorithms:
The routing algorithms can be classified as follows:
1. Adaptive Algorithms – These are the algorithms which change their routing decisions
whenever network topology or traffic load changes.
✓ Routing decisions change to reflect changes in the topology as well as the traffic of
the network.
✓ Also known as dynamic routing, these make use of dynamic information such as
current topology, load, delay, etc. to select routes.
✓ Optimization parameters are distance, number of hops and estimated transit time.

Further these are classified as follows:

a) Isolated – In this method, each node makes its routing decisions using the information it has,
without seeking information from other nodes. The sending nodes don't have information
about the status of a particular link.
o Disadvantage is that packet may be sent through a congested network which may
result in delay.
o Examples: Hot potato routing, backward learning.

b) Centralized – In this method, a centralized node has the entire information about the network
and makes all the routing decisions.
o The advantage is that only one node is required to keep the information of the entire
network; the disadvantage is that if the central node goes down, routing for the entire
network fails.

c) Distributed – In this method, the node receives information from its neighbours and then takes
the decision about routing the packets.
o The disadvantage is that a packet may be delayed if the topology changes in the
interval between receiving information and sending the packet.

2. Non-Adaptive Algorithms – These are the algorithms which do not change their routing decisions
once they have been selected.
✓ This is also known as static routing as route to be taken is computed in advance and
downloaded to routers when router is booted.

Further these are classified as follows:

a) Flooding – This uses the technique in which every incoming packet is sent out on every outgoing
line except the one it arrived on.
o One problem with this is that packets may loop, as a result of which a node
may receive duplicate packets.
o These problems can be overcome with the help of sequence numbers, hop counts and
spanning trees.

b) Random walk – In this method, packets are sent host by host or node by node to one of its
neighbours chosen at random. This is a highly robust method, usually implemented by sending
packets onto the link that is least queued.

Distance vector routing:


A distance-vector routing protocol in data networks determines the best route for data packets based
on distance. Distance-vector routing protocols measure the distance by the number of routers a
packet has to pass, one router counts as one hop.

• The distance vector routing protocol is easy to implement in small networks. Debugging is
very easy in the distance vector routing protocol. This protocol has very limited
dependency in a small network.
• Distance vector routing is a dynamic routing algorithm in which each router computes
the distance between itself and each possible destination, starting from its immediate neighbours.

Distance-Vector Routing with example –


Each node constructs a one-dimensional array containing the "distances"(costs) to all other nodes
and distributes that vector to its immediate neighbours.
o The starting assumption for distance-vector routing is that each node knows the cost of the
link to each of its directly connected neighbours.
o A link that is missing is assigned an infinite cost.

Note that each node only knows the information in one row of the table.

Every node sends a message to its directly connected neighbours containing its personal list of
distances. After every node has exchanged a few updates with its directly connected neighbours, all
nodes will know the least-cost path to all the other nodes. When they receive updates, the nodes need
to keep track of which node told them about the path used to calculate the cost, so that they
can create their forwarding tables.
In practice, each node's forwarding table consists of a set of triples of the form:

(Destination, Cost, NextHop).

For example, Table 3 shows the complete routing table maintained at node B for the network in
figure1.

It uses Bellman Ford Algorithm for making routing tables.
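The Bellman-Ford-style computation can be sketched compactly: each node starts with the costs of its direct links, then repeatedly relaxes its vector using its neighbours' vectors until nothing changes, recording the (Destination, Cost, NextHop) triples along the way. The four-node topology and link costs below are invented for illustration.

```python
import math

graph = {                      # cost of each direct link (invented topology)
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

# initial vectors: 0 to self, link cost to direct neighbours, infinity otherwise
dist = {u: {v: (0 if u == v else graph[u].get(v, math.inf)) for v in graph}
        for u in graph}
nexthop = {u: {v: (v if v in graph[u] else None) for v in graph} for u in graph}

changed = True
while changed:                 # keep exchanging vectors until no table changes
    changed = False
    for u in graph:
        for n, cost_un in graph[u].items():   # u hears neighbour n's vector
            for dest in graph:
                via = cost_un + dist[n][dest]
                if via < dist[u][dest]:       # a cheaper path through n
                    dist[u][dest] = via
                    nexthop[u][dest] = n
                    changed = True

print(dist["A"]["D"], nexthop["A"]["D"])   # 4 B  (A-B-C-D = 1+2+1)
```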

Link State Routing:


It is a dynamic routing algorithm in which each router shares knowledge of its neighbours with every
other router in the network. A router sends information about its neighbours to all the routers through
flooding. Information sharing takes place only whenever there is a change.

• It makes use of Dijkstra's Algorithm for making routing tables.


• Distance-vector routers use a distributed algorithm to compute their routing tables, whereas
link-state routers exchange messages that allow each router to learn the
entire network topology. Based on this learned topology, each router is then able to compute
its routing table by using a shortest-path computation.

Features of link state routing protocols –


o Link state packet – A small packet that contains routing information.
o Link state database – A collection of information gathered from link state packets.
o Shortest path first algorithm (Dijkstra's algorithm) – A calculation performed on the database
that results in the shortest paths.
o Routing table – A list of known paths and interfaces.

Link State Routing has two phases:


1. Reliable Flooding:
o Initial state: Each node knows the cost of its neighbours.
o Final state: Each node knows the entire graph.
2. Route Calculation:
o Each node uses Dijkstra’s algorithm on the graph to calculate the optimal routes to
all nodes, which is actually used to find the shortest path from one node to every
other node in the network.
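The route-calculation phase can be sketched with Dijkstra's algorithm run over the learned topology (the same kind of graph a link-state database describes; the topology below is invented for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path-first computation over the link-state topology."""
    dist = {source: 0}
    prev = {}                              # predecessor, for building routes
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):  # stale queue entry, skip
            continue
        for v, w in graph[u].items():      # relax each outgoing link
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return dist, prev

graph = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2, "D": 5},
         "C": {"A": 4, "B": 2, "D": 1}, "D": {"B": 5, "C": 1}}
dist, prev = dijkstra(graph, "A")
print(dist["D"])   # 4  (A-B-C-D)
```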

Link State protocols, in comparison to Distance Vector protocols:

• Require a large amount of memory.
• Shortest path computations require many CPU cycles.
• Use little bandwidth and react quickly to topology changes.
• All items in the database must be sent to neighbours to form link state packets.
• All neighbours must be trusted in the topology.
• Authentication mechanisms can be used to avoid undesired adjacencies and problems.
• No split horizon techniques are possible in link state routing.

Differences between Distance Vector Routing and Link State Routing -


Module 04

TRANSPORT LAYER
Transport Layer is the second layer in the TCP/IP model and the fourth layer in the OSI
model. It is an end-to-end layer used to deliver messages to a host.

✓ It is termed an end-to-end layer because it provides a point-to-point connection,
rather than hop-to-hop, between the source host and destination host to
deliver the services reliably.
✓ The unit of data encapsulation in the Transport Layer is a segment.

Working of Transport Layer:


The transport layer takes services from the Network layer and provides services to
the Application layer

At the sender’s side: The transport layer receives data (message) from the Application layer
and then performs Segmentation, divides the actual message into segments, adds source and
destination’s port numbers into the header of the segment, and transfers the message to the
Network layer.

At the receiver’s side: The transport layer receives data from the Network layer, reassembles
the segmented data, reads its header, identifies the port number, and forwards the message
to the appropriate port in the Application layer.

Services provided by the Transport Layer:


The services provided by the transport layer are similar to those of the data link layer. The data
link layer provides the services within a single network while the transport layer provides the
services across an internetwork made up of many networks. The data link layer controls the
physical layer while the transport layer controls all the lower layers.

The services provided by the transport layer protocols can be divided into five
categories:
End-to-end delivery:
The transport layer transmits the entire message to the destination. Therefore, it ensures the
end-to-end delivery of an entire message from a source to the destination.

Reliable delivery:
The transport layer provides reliability services by retransmitting the lost and damaged
packets.

The reliable delivery has four aspects:


o Error control
o Sequence control
o Loss control
o Duplication control

Error Control:

o The primary role of reliability is error control. In reality, no transmission will be 100
percent error-free. Therefore, transport layer protocols are designed to
provide error-free transmission.
o The data link layer also provides the error handling mechanism, but it ensures only
node-to-node error-free delivery. However, node-to-node reliability does not ensure
the end-to-end reliability.
o The data link layer checks for errors on each individual link. If an error is introduced
inside one of the routers, this error will not be caught by the data link layer, which only
detects errors introduced between the beginning and end of each
link.
o Therefore, the transport layer performs the checking for the errors end-to-end to
ensure that the packet has arrived correctly.
Sequence Control
o The second aspect of the reliability is sequence control which is implemented at the
transport layer.
o At the sending end, the transport layer is responsible for ensuring that the packets
received from the upper layers can be used by the lower layers.
o At the receiving end, it ensures that the various segments of a transmission can be
correctly reassembled.

Loss Control
o Loss Control is a third aspect of reliability. The transport layer ensures that all the
fragments of a transmission arrive at the destination, not some of them.
o At the sending end, all the fragments of transmission are given sequence numbers by
a transport layer. These sequence numbers allow the receiver’s transport layer to
identify the missing segment.

Duplication Control
o Duplication Control is the fourth aspect of reliability. The transport layer guarantees
that no duplicate data arrive at the destination.
o Sequence numbers are used to identify the lost packets; similarly, it allows the
receiver to identify and discard duplicate segments.

Flow Control:
• Flow control is used to prevent the sender from overwhelming the receiver. It ensures that
the data is sent at a speed that the receiver can handle.

• If the receiver is overloaded with too much data, then the receiver discards the packets
and asks for retransmission of the packets. This increases network congestion and thus
reduces the system performance.

• The transport layer is responsible for flow control. It uses the stop-and-wait protocol and
the sliding window protocol, which make data transmission more efficient and
control the flow of data so that the receiver does not become overwhelmed.

The Stop-and-Wait Protocol-


In the stop-and-wait protocol, the sender sends one frame and then waits for an
acknowledgment (or ACK) from the receiver before sending another frame. Since the
sender cannot send a new frame until the receiver has issued an ACK for the previous
frame, it is obvious that the protocol ensures that the sender is transmitting at a rate that
the receiver can handle.
One of the drawbacks of the scheme is its poor channel utilization. A single frame travels
from the source to the destination, and the associated acknowledgment travels from the
destination to the source. This round-trip delay, which is the time for a frame to go from
the source to the destination and for the ACK to be received at the source, can be
appreciable, particularly if the distance between the source and the destination is long.

Sliding Window Protocol


• Sliding window protocol is byte oriented rather than frame oriented.

• In the sliding window flow control, each outgoing frame contains a sequence number that
ranges from zero to a predefined maximum number.
✓ The maximum number is usually 2^n − 1, which means that the sequence
numbers are represented by an n-bit field in the frame.

• The essence of the window scheme is that at any time instant, the sender maintains a set
of sequence numbers corresponding to the frames that it is allowed to send. These frames
are said to fall within the sending window.

• Similarly, the receiver maintains a receiving window, which corresponds to the set of frames
it is permitted to receive.

Here RR denotes – Receive Ready

• The sliding window flow control is a major improvement over the stop-and-wait protocol
in terms of the channel utilization. The channel utilization increases as the window size
increases.
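The utilization difference can be made concrete with a small calculation. Assuming example values of a 1 Mbps link, 1 KB frames, and a 20 ms one-way propagation delay, stop-and-wait achieves Tt / (Tt + 2·Tp) (one frame per round trip), while a window of w frames achieves roughly w·Tt / (Tt + 2·Tp), capped at 100%:

```python
# Assumed example values: 1 KB frames on a 1 Mbps link, 20 ms one-way delay
frame_bits = 8_000               # 1 KB frame = 8000 bits
bandwidth = 1_000_000            # 1 Mbps
Tt = frame_bits / bandwidth      # transmission time = 8 ms
Tp = 0.020                       # one-way propagation delay = 20 ms

stop_and_wait = Tt / (Tt + 2 * Tp)          # one frame per round trip
print(f"stop-and-wait: {stop_and_wait:.2%}")   # 16.67%

for w in (1, 2, 4, 8):           # utilization grows with the window size
    u = min(1.0, w * Tt / (Tt + 2 * Tp))    # capped at full utilization
    print(f"window={w}: {u:.2%}")
```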
Multiplexing
The transport layer uses the multiplexing to improve transmission efficiency.

Multiplexing can occur in two ways:

o Upward multiplexing: Upward multiplexing means multiple transport layer connections
use the same network connection. To make transmission more cost-effective, the
transport layer sends several transmissions bound for the same destination along the
same path; this is achieved through upward multiplexing.

o Downward multiplexing: Downward multiplexing means one transport layer connection
uses multiple network connections. Downward multiplexing allows the transport layer
to split a connection among several paths to improve the throughput. This type of
multiplexing is used when networks have a low or slow capacity.

Addressing
o Data generated by an application on one machine must be transmitted to the correct
application on another machine. In this case, addressing is provided by the transport
layer.

o The transport layer provides the user address which is specified as a station or port.
The port variable represents a particular TS user of a specified station known as a
Transport Service access point (TSAP). Each station has only one transport entity.

o The transport layer protocols need to know which upper-layer protocols are
communicating.
Process to process delivery:
While Data Link Layer requires the MAC address (48 bits address contained inside the Network
Interface Card of every host machine) of source-destination hosts to correctly deliver a frame
and the Network layer requires the IP address for appropriate routing of packets, in a similar
way Transport Layer requires a Port number to correctly deliver the segments of data to the
correct process amongst the multiple processes running on a particular host.

✓ A port number is a 16-bit address used to identify any client-server program
uniquely.

Transport Layer protocols:

The transport layer is represented by two protocols: TCP and UDP.

TCP (Transmission Control Protocol):


TCP is a layer 4 protocol which provides acknowledgement of the received packets and is
also reliable as it resends the lost packets. It is used by application protocols like HTTP and
FTP.

o It is a connection-oriented protocol, which means a connection is established between
both ends of the transmission. For creating the connection, TCP generates a virtual
circuit between sender and receiver for the duration of the transmission.

Features of TCP protocol:


o Stream data transfer: The TCP protocol transfers the data in the form of a contiguous
stream of bytes. TCP groups the bytes into TCP segments and then passes them
to the IP layer for transmission to the destination. TCP itself segments the data and
forwards it to IP.
o Reliability: TCP assigns a sequence number to each byte transmitted and expects a
positive acknowledgement from the receiving TCP. If ACK is not received within a
timeout interval, then the data is retransmitted to the destination.

✓ The receiving TCP uses the sequence number to reassemble the segments if
they arrive out of order or to eliminate the duplicate segments.

o Flow Control: The receiving TCP sends an acknowledgement back to the sender
indicating the number of bytes it can receive without overflowing its internal buffer.
This number of bytes is sent in the ACK in the form of the highest sequence number that
it can receive without any problem. This mechanism is also referred to as a window
mechanism.

o Multiplexing: Multiplexing is a process of accepting the data from different


applications and forwarding to the different applications on different computers. At
the receiving end, the data is forwarded to the correct application. This process is
known as demultiplexing. TCP transmits the packet to the correct application by using
the logical channels known as ports.

o Logical Connections: The combination of sockets, sequence numbers, and window
sizes is called a logical connection. Each connection is identified by the pair of sockets
used by the sending and receiving processes.

o Full Duplex: TCP provides Full Duplex service, i.e., the data flow in both the directions
at the same time. To achieve Full Duplex service, each TCP should have sending and
receiving buffers so that the segments can flow in both the directions. TCP is a
connection-oriented protocol. Suppose the process A wants to send and receive the
data from process B. The following steps occur:

o Establish a connection between two TCPs.


o Data is exchanged in both the directions.
o The Connection is terminated.
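These three steps can be observed with ordinary sockets over the loopback interface; a minimal sketch (the uppercase echo is just an arbitrary payload transformation, and port 0 asks the OS for any free port):

```python
import socket
import threading

def server(listener):
    conn, _ = listener.accept()       # complete the incoming connection
    data = conn.recv(1024)            # receive from the client
    conn.sendall(data.upper())        # reply in the other direction
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))       # port 0: the OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))   # 1. establish
client.sendall(b"hello")                                 # 2. exchange data
reply = client.recv(1024)                                #    ...both ways
print(reply)                                             # b'HELLO'
client.close()                                           # 3. terminate
```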

UDP (User Datagram Protocol):


UDP is also a layer 4 protocol but unlike TCP it doesn’t provide acknowledgement of the sent
packets. Therefore, it isn’t reliable and depends on the higher layer protocols for the same. But
on the other hand, it is simple, scalable and comes with lesser overhead as compared to TCP.
It is used in video and voice streaming.

o UDP is an end-to-end transport level protocol that adds transport-level addresses,


checksum error control, and length information to the data from the upper layer.
o The packet produced by the UDP protocol is known as a user datagram.
User Datagram Format:
The user datagram has an 8-byte header which is shown below:

Where,

o Source port address: It defines the address of the application process that has
delivered the message. It is a 16-bit field.
o Destination port address: It defines the address of the application process that will
receive the message. It is a 16-bit field.
o Total length: It defines the total length of the user datagram in bytes. It is a 16-bit
field.
o Checksum: The checksum is a 16-bit field which is used in error detection.
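Because the four fields are each 16 bits, the 8-byte header is trivial to pack and unpack; a sketch using Python's struct module (the port numbers and length below are invented for illustration):

```python
import struct

def parse_udp_header(segment: bytes) -> dict:
    """Unpack the fixed 8-byte UDP header: four 16-bit big-endian fields."""
    src, dst, length, checksum = struct.unpack("!HHHH", segment[:8])
    return {"src_port": src, "dst_port": dst,
            "length": length, "checksum": checksum}

# A hypothetical DNS query: ephemeral source port 53000, destination port 53,
# total length = 8-byte header + 32-byte payload, checksum left as 0 here
header = struct.pack("!HHHH", 53000, 53, 40, 0)
print(parse_udp_header(header))
```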

Disadvantages of UDP protocol:


o UDP provides only the basic functions needed for the end-to-end delivery of a transmission.
o It does not provide any sequencing or reordering functions and does not specify the
damaged packet when reporting an error.
o UDP can discover that an error has occurred, but it does not specify which packet has
been lost as it does not contain an ID or sequencing number of a particular data
segment.

Differences b/w TCP & UDP

Basis for Comparison | TCP | UDP

Definition | TCP establishes a virtual circuit before transmitting the data. | UDP transmits the data directly to the destination computer without verifying whether the receiver is ready to receive or not.

Connection Type | It is a connection-oriented protocol. | It is a connectionless protocol.

Speed | Slow | High

Reliability | It is a reliable protocol. | It is an unreliable protocol.

Header size | 20 bytes | 8 bytes

Acknowledgement | It waits for the acknowledgement of data and has the ability to resend the lost packets. | It neither takes the acknowledgement, nor retransmits the damaged frame.

Congestion Control and Quality of Service:

Congestion Control:
Congestion is a state occurring in network layer when the message traffic is so heavy that it
slows down network response time.

Congestion control is a network layer issue, concerned with what happens when there is more
data in the network than can be sent with reasonable packet delays and without lost packets.

Causes of Congestion:
The main cause of congestion is huge amount of data traffic. But other factors are equally
important for making congestion as given below:

• Sudden arrival of large data (called burst data) from many input lines and trying to
access a single output line of a router. In this case, the particular output line is blocked if
its bandwidth isn’t sufficiently high.

• A low-bandwidth line will produce congestion even if the data rate isn't too high.
• Mismatch between the speeds of different components of the system may also
produce congestion.

Principle of Congestion Control:


Congestion can be controlled in either of two ways:

1. Open-Loop Control
2. Closed-Loop Control
➢ Open-Loop Control: Open loop congestion control policies are applied to prevent
congestion before it happens. The congestion control is handled either by the source or
the destination.

Policies adopted by open loop congestion control –

• Retransmission Policy: It is the policy in which retransmission of the packets is taken
care of. If the sender feels that a sent packet is lost or corrupted, the packet needs to
be retransmitted. This retransmission may increase the congestion in the network.
To prevent this, retransmission timers must be designed to avoid adding congestion
while still optimizing efficiency.

• Window Policy: The type of window at the sender’s side may also affect the
congestion. Several packets in the Go-back-n window are re-sent, although some
packets may be received successfully at the receiver side.

✓ This duplication may increase the congestion in the network and make it
worse. Therefore, Selective repeat window should be adopted as it sends the
specific packet that may have been lost.

• Discarding Policy: A good discarding policy allows the routers to prevent congestion
by partially discarding corrupted or less-sensitive packets while still maintaining
the quality of the message.

✓ In the case of audio file transmission, routers can discard less-sensitive packets
to prevent congestion while maintaining the quality of the audio file.

• Acknowledgment Policy: Since acknowledgements are also the part of the load in
the network, the acknowledgment policy imposed by the receiver may also affect
congestion. Several approaches can be used to prevent congestion related to
acknowledgment.

✓ The receiver should send acknowledgement for N packets rather than sending
acknowledgement for a single packet. The receiver should send an
acknowledgment only if it has to send a packet or a timer expires.

• Admission Policy: In admission policy a mechanism should be used to prevent


congestion. Switches in a flow should first check the resource requirement of a
network flow before transmitting it further. If there is a chance of a congestion or there
is a congestion in the network, router should deny establishing a virtual network
connection to prevent further congestion.

All the above policies are adopted to prevent congestion before it happens in the network.
➢ Closed-Loop Control: Closed loop congestion control techniques are used to treat or
alleviate congestion after it happens. Several techniques are used by different protocols;
some of them are:

Backpressure:
• Backpressure is a technique in which a congested node stops receiving packets
from its upstream node. This may cause the upstream node or nodes to become
congested, and they in turn reject data from the nodes above them.

• Backpressure is a node-to-node congestion control technique that propagates in
the opposite direction of the data flow.

• The backpressure technique can be applied only to virtual circuits, where each
node has information about its upstream node.

For example, if the third node along a path becomes congested and stops
receiving packets, the second node may become congested because its output flow
slows down. Similarly, the first node may become congested and inform the source
to slow down.

Choke Packet Technique:

• The choke packet technique is applicable to virtual-circuit networks as well as
datagram subnets.

• A choke packet is a packet sent by a node to the source to inform it of
congestion. Each router monitors its resources and the utilization at each of
its output lines.

• Whenever the resource utilization exceeds the threshold value which is set by the
administrator, the router directly sends a choke packet to the source giving it a
feedback to reduce the traffic.

• The intermediate nodes through which the packets have travelled are not warned
about congestion.
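The threshold test that triggers a choke packet can be sketched in a few lines. This is a hypothetical illustration, not a real router API: the function name, packet format, and threshold value are all made up for the example.

```python
# Hypothetical sketch of the choke-packet idea: a router tracks the
# utilization of an output line and, once it exceeds an administrator-set
# threshold, sends a choke packet directly to the source as feedback.

THRESHOLD = 0.8  # utilization above which the router reacts (illustrative)

def check_line(utilization, source):
    """Return a choke packet addressed to `source` if the line is too busy."""
    if utilization > THRESHOLD:
        return {"type": "CHOKE", "to": source, "advice": "reduce rate"}
    return None  # below threshold: no feedback needed

print(check_line(0.95, "10.0.0.7"))  # overloaded line -> choke packet
print(check_line(0.40, "10.0.0.7"))  # normal load -> None
```

Note that, as the text says, only the source is warned; the intermediate nodes the packets travelled through receive no copy of the choke packet.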
Implicit Signalling:
In implicit signalling, there is no communication between the congested nodes and
the source. The source guesses that there is congestion in the network. For
example, when a sender sends several packets and no acknowledgement arrives for
a while, one assumption is that the network is congested.

Explicit Signalling:
• In explicit signalling, if a node experiences congestion it can explicitly send
a packet to the source or destination to inform it about the congestion.

✓ The difference between choke packet and explicit signaling is that the signal is
included in the packets that carry data rather than creating a different packet
as in case of choke packet technique.

Explicit signaling can occur in either forward or backward direction.

Forward Signaling: In forward signaling, a signal is sent in the direction of the
congestion. The destination is warned about congestion, and the receiver in this
case adopts policies to prevent further congestion.

Backward Signaling: In backward signaling, a signal is sent in the opposite
direction of the congestion. The source is warned about congestion and needs to
slow down.

Congestion control algorithms:


1) Leaky Bucket Algorithm
Let us consider an example to understand:

Imagine a bucket with a small hole in the bottom. No matter at what rate water
enters the bucket, the outflow is at a constant rate. When the bucket is full,
any additional water entering spills over the sides and is lost.
Similarly, each network interface contains a leaky bucket and the following steps are involved
in leaky bucket algorithm:

i. When a host wants to send a packet, the packet is thrown into the bucket.

ii. The bucket leaks at a constant rate, meaning the network interface transmits packets
at a constant rate.

iii. Bursty traffic is converted to a uniform traffic by the leaky bucket.

iv. In practice the bucket is a finite queue that outputs at a finite rate.
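The four steps above can be sketched as a small simulation. This is a minimal illustration, assuming per-tick packet counts and made-up bucket parameters; real implementations work on bytes and timers rather than abstract ticks.

```python
from collections import deque

# Minimal leaky-bucket sketch: a finite queue drained at a constant rate.
def leaky_bucket(arrivals, bucket_size, out_rate):
    """arrivals[i] = packets arriving in tick i; returns (sent, dropped)."""
    queue, sent, dropped = deque(), 0, 0
    for n in arrivals:
        for _ in range(n):                    # packets thrown into the bucket
            if len(queue) < bucket_size:
                queue.append(1)
            else:
                dropped += 1                  # bucket full: packet spills over
        for _ in range(min(out_rate, len(queue))):
            queue.popleft()                   # constant-rate leak
            sent += 1
    return sent, dropped

# A burst of 10 packets is smoothed to 2 packets/tick; only 5 fit in the bucket:
print(leaky_bucket([10, 0, 0, 0, 0], bucket_size=5, out_rate=2))  # -> (5, 5)
```

The burst is converted into uniform output, but at the cost of dropping whatever does not fit in the finite queue.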

2) Token bucket Algorithm


Need of token bucket Algorithm: -

The leaky bucket algorithm enforces a rigid output pattern at the average rate,
no matter how bursty the traffic is. So, to deal with bursty traffic, we need a
more flexible algorithm so that data is not lost. One such algorithm is the
token bucket algorithm.

Steps of this algorithm can be described as follows:

• At regular intervals, tokens are added to the bucket.
• The bucket has a maximum capacity.
• If there is a ready packet, a token is removed from the bucket, and the packet is sent.
• If there is no token in the bucket, the packet cannot be sent.
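The steps above can be sketched as a small simulation. This is an illustrative model, assuming per-tick arrival counts and made-up parameters; it shows how idle ticks bank tokens that later let a burst through at once.

```python
# Token-bucket sketch: tokens accumulate each tick (capped at the bucket
# capacity); a packet is sent only by consuming a token, so a burst can
# pass immediately as long as enough tokens have been saved up.
def token_bucket(arrivals, capacity, tokens_per_tick):
    tokens, sent, waiting = 0, 0, 0
    for n in arrivals:
        tokens = min(capacity, tokens + tokens_per_tick)  # refill, capped
        waiting += n
        burst = min(tokens, waiting)   # send as many packets as tokens allow
        tokens -= burst
        waiting -= burst
        sent += burst
    return sent, waiting               # delivered vs. still queued

# Two idle ticks bank tokens, so 3 of a burst of 4 go out immediately:
print(token_bucket([0, 0, 4], capacity=3, tokens_per_tick=1))  # -> (3, 1)
```

Contrast with the leaky bucket: output is no longer forced to a constant rate, yet the long-term rate is still bounded by the token arrival rate.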

Ways in which token bucket is superior to leaky bucket:

• The leaky bucket algorithm controls the rate at which the packets are introduced in the
network, but it is very conservative in nature.

• Some flexibility is introduced in the token bucket algorithm. In the token
bucket algorithm, tokens are generated at each tick (up to a certain limit).

• For an incoming packet to be transmitted, it must capture a token, after which
the transmission takes place.

• Hence bursty packets are transmitted immediately as long as tokens are
available, which introduces some flexibility into the system.
Formula:

M × S = C + ρ × S

Where,

o S – burst length (time)
o M – maximum output rate
o ρ – token arrival rate
o C – capacity of the token bucket in bytes
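Rearranging the formula gives the maximum burst length S = C / (M − ρ). A quick worked example (the numbers are illustrative only, not from the text):

```python
# Worked example of M*S = C + rho*S, i.e. S = C / (M - rho).
C   = 250_000      # bucket capacity: 250 KB of tokens
M   = 25_000_000   # maximum output rate: 25 MB/s
rho = 2_000_000    # token arrival rate: 2 MB/s

S = C / (M - rho)  # longest burst the host can send at full speed
print(round(S * 1000, 2), "ms")   # about 10.87 ms
```

So even a large bucket only permits full-rate bursts for a bounded interval, after which output falls back to the token arrival rate ρ.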

Let’s understand with an example,

In figure (A) we see a bucket holding three tokens, with five packets waiting to
be transmitted. For a packet to be transmitted, it must capture and destroy one
token. In figure (B) we see that three of the five packets have gotten through,
but the other two are stuck waiting for more tokens to be generated.

Quality of Service (QoS):


Quality of Service (QoS) is basically the ability to provide different priorities
to different applications, users, or data flows, or to guarantee a certain level
of performance for a data flow.

QoS is basically the overall performance of the computer network. Mainly the performance of
the network is seen by the user of the Network.

With the increasing use of networks, assuring good Quality of Service (QoS) has
become another important requirement. Four characteristic parameters contribute
to the quality of service:

Reliability:

• It is one of the main characteristics that the flow needs. If there is a lack of reliability
then it simply means losing any packet or losing an acknowledgement due to which
retransmission is needed.
• Reliability becomes more important for electronic mail, file transfer, and for internet
access.

Delay:

• Another characteristic of the flow is the delay in transmission between the source and
destination. During audio conferencing, telephony, video conferencing, and remote
conferencing there should be a minimum delay.
Jitter:

• It is basically the variation in the delay for packets that belong to the same
flow. Thus, jitter is the variation in packet delay: high jitter means the
variation in delay is large, while low jitter means the variation is small.

Bandwidth:

• The different applications need different bandwidth.

Techniques for Achieving Good QoS:


A single technique can’t provide good quality of service, rather a combination of techniques
is implemented by the system designers to get better QoS. Some of the techniques are given
below:

➢ Over Provisioning:

• It is the easiest solution. A sufficient number of routers with sufficient capacity, buffer
space and bandwidth are used in the network such that it can always provide good
QoS. But it is an expensive solution.

➢ Buffering:
• In this technique, packets are buffered before delivery and are then delivered
at regular intervals of time. This method doesn’t improve reliability or
bandwidth, but it reduces jitter at the cost of increased delay.

➢ Traffic Shaping:
• In the previous technique, hosts are designed to transmit at a uniform rate. But
sometimes it’s not possible to meet regularity in transmission because of various
reasons. Traffic shaping smoothens out the traffic on the server-side rather than on the
client side. The steps are given below:

a) Service level agreement is done between user and subnet.


b) As long as the user sends packets according to the agreed contract, the
carrier promises to deliver them all in a timely fashion.
c) It is highly important for real-time data transfer.
d) Monitoring the traffic flow is called Traffic Policing.
e) Leaky Bucket and Token Bucket Algorithm is used for traffic shaping.

➢ Resource Reservation:

• Traffic shaping improves QoS, but resource reservation further requires that a
predefined path be set before sending data packets. For a virtual-circuit
connection, the path is set before data transfer, and resources can be reserved
for the specific circuit. The kinds of resources that can be reserved are:
a) Bandwidth: High Bandwidth can handle a large data rate. So, it provides high
bandwidth that will ensure good quality of service.
b) Buffer Space: High Buffer Space is reserved for accepting and temporarily
storing packets such that traffic shaping can be done and good QoS is provided.
c) CPU Cycles: For timely processing of packets CPU cycles are also reserved
thereby providing good QoS.

There are 2 types of Quality of Service Solutions:


1. Stateless solution: Here, the server is not required to keep or store the server
information or session details to itself.

• The routers maintain no fine-grained state about traffic; one positive factor
of this is that it is scalable and robust.
• But it also offers weaker services, as there is no guarantee about the kind of
performance or delay a particular application will encounter.
• In the stateless solution, the server and client are loosely coupled.

2. Stateful solution: Here, the server is required to maintain the current state
and session information.

• The routers maintain per-flow state. Because this per-flow state is central to
providing Quality of Service, a stateful solution can offer powerful services
such as guaranteed service, high resource utilization, and protection, but it
is much less scalable and robust.
• Here, the server and client are tightly bounded.
Module 05

APPLICATION LAYER
The Application Layer is the topmost layer in the Open System Interconnection
(OSI) model. This layer provides several ways of manipulating data (information),
which enables any type of user to access the network with ease.

• This layer also makes requests to the layer below it, the presentation layer,
to receive various types of information from it.

• The Application Layer interface directly interacts with application and provides
common web application services.

• This layer is basically the highest level of the open system, providing
services directly to the application process.

• The application layer programs are based on client and servers.

The Application layer includes the following functions:


• Identifying communication partners: The application layer identifies the
availability of communication partners for an application with data to transmit.

• Determining resource availability: The application layer determines whether
sufficient network resources are available for the requested communication.

• Synchronizing communication: All communication between applications requires
cooperation, which is managed by the application layer.

Services of Application Layers:


o Network Virtual terminal: An application layer allows a user to log on to a remote host.
To do so, the application creates a software emulation of a terminal at the remote host.
The user's computer talks to the software terminal, which in turn, talks to the host. The
remote host thinks that it is communicating with one of its own terminals, so it allows the
user to log on.

o File Transfer, Access, and Management (FTAM): An application allows a user to access
files in a remote computer, to retrieve files from a computer and to manage files in a
remote computer.

✓ FTAM defines a hierarchical virtual file in terms of file structure, file attributes and
the kind of operations performed on the files and their attributes.
o Addressing: To obtain communication between client and server, there is a need for
addressing. When a client makes a request to the server, the request contains the
server address and its own address. The server’s response to the client contains
the destination address, i.e., the client address. To achieve this kind of addressing, DNS is used.

o Mail Services: An application layer provides Email forwarding and storage.

o Directory Services: An application contains a distributed database that provides
access to global information about various objects and services.

o Authentication: It authenticates the sender or receiver's message or both.

Working of Application Layer in the OSI model:


In the OSI model, this application layer is narrower in scope.

The application layer in the OSI model generally acts only as the interface
responsible for communicating with host-based and user-facing applications. This
is in contrast with the TCP/IP suite, in which the session and presentation
layers are clubbed together with the application layer into a single layer
responsible for functions such as controlling the dialogues between computers,
establishing, maintaining, and ending a particular session, and providing data
compression and data encryption.

• At first, the client sends a command to the server and, when the server
receives that command, it allocates a port number to the client.
• Thereafter, the client sends an initiation connection request to the server
and, when the server receives the request, it gives an acknowledgement (ACK) to
the client.
• The client then successfully establishes a connection with the server, through
which it may either ask the server to send any type of file or document, or
upload files or documents to the server itself.
Application Layer Protocols:
The application layer provides several protocols which allow any software to easily send and
receive information and present meaningful data to its users.

The following are some of the protocols which are provided by the application layer:

1. DNS:
o DNS stands for Domain Name System.
o Domain Name System (DNS) Server is a standard protocol that helps Internet users
discover websites using human readable addresses.
o DNS is a service that translates the domain name into IP addresses. This allows the
users of networks to utilize user-friendly names when looking for other hosts instead
of remembering the IP addresses.
o For example, suppose the FTP site at EduSoft had an IP address of 132.147.165.50;
most people would reach this site by specifying ftp.EduSoft.com. The domain
name is therefore far easier to remember and use than the IP address.

The domain name space is divided into three different sections: generic domains, country
domains, and inverse domain.

A. Generic Domains
o It defines the registered hosts according to their generic behavior.
o Each node in a tree defines the domain name, which is an index to the DNS
database.
o It uses three-character labels, and these labels describe the organization type.
o .com(commercial) .edu (educational) .mil(military) .org (non-profit organization)
.net (similar to commercial) all these are generic domain.

B. Country Domain

o The format of a country domain is the same as that of a generic domain, but it
uses two-character country abbreviations (e.g., us for the United States) in
place of three-character organizational abbreviations.

C. Inverse Domain

o The inverse domain is used for mapping an address to a name. For example, when
a server receives a request from a client and holds the files of authorized
clients only, it sends a query to the DNS server asking it to map the address
to a name, in order to determine whether the client is on the authorized list.
Working of DNS
o DNS is a client/server network communication protocol. DNS clients send requests to
the server while DNS servers send responses to the client.
o A client request containing a name that is converted into an IP address is
known as a forward DNS lookup, while a request containing an IP address that is
converted into a name is known as a reverse DNS lookup.
o DNS implements a distributed database to store the name of all the hosts available on
the internet.
o If a client such as a web browser sends a request containing a hostname, then a
piece of software such as a DNS resolver sends a request to the DNS server to
obtain the IP address of the hostname. If the DNS server does not contain the
IP address associated with the hostname, it forwards the request to another DNS
server. Once the IP address arrives at the resolver, the resolver completes the
request over the Internet Protocol.
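Forward and reverse lookups can be illustrated with a toy in-memory resolver. This is only a sketch of the idea; a real resolver queries DNS servers over the network, and the second hostname/address pair below is made up for the example.

```python
# Toy resolver: forward lookup (name -> IP) and reverse lookup (IP -> name)
# against an in-memory table instead of a real DNS server.
records = {
    "ftp.EduSoft.com": "132.147.165.50",
    "www.example.com": "93.184.216.34",   # illustrative entry
}
reverse = {ip: name for name, ip in records.items()}  # inverse-domain idea

def forward_lookup(name):
    """Forward DNS lookup: domain name -> IP address (None if unknown)."""
    return records.get(name)

def reverse_lookup(ip):
    """Reverse DNS lookup: IP address -> domain name (None if unknown)."""
    return reverse.get(ip)

print(forward_lookup("ftp.EduSoft.com"))   # 132.147.165.50
print(reverse_lookup("132.147.165.50"))    # ftp.EduSoft.com
```

A real client would use something like Python's `socket.gethostbyname`, which performs the forward lookup through the system's configured DNS servers.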

2. FTP:
• FTP stands for File transfer protocol.
• FTP is a standard internet protocol provided by TCP/IP used for transmitting the files
from one host to another.
• It is mainly used for transferring the web page files from their creator to the computer
that acts as a server for other computers on the internet.
• It is also used for downloading the files to computer from other servers.

Objectives of FTP
o It provides the sharing of files.
o It is used to encourage the use of remote computers.
o It transfers the data more reliably and efficiently.

Why FTP?

Although transferring files from one system to another seems simple and
straightforward, it can sometimes cause problems. For example, two systems may
have different file conventions, different ways to represent text and data, or
different directory structures.

• The FTP protocol overcomes these problems by establishing two connections
between hosts. One connection is used for data transfer, and the other is used
as the control connection.
There are two types of connections in FTP:

Control Connection: The control connection uses very simple rules for
communication. Through control connection, we can transfer a line of command or
line of response at a time.
o The control connection is made between the control processes.
o The control connection remains connected during the entire interactive FTP
session.

Data Connection: The Data Connection uses very complex rules as data types may
vary. The data connection is made between data transfer processes. The data
connection opens when a command comes for transferring the files and closes when
the file is transferred.
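The two-connection model maps directly onto Python's standard `ftplib` module. The sketch below is not run here since it needs a reachable FTP server; the host, credentials, and filename in the usage comment are placeholders.

```python
from ftplib import FTP

def download(host, user, password, filename):
    """Fetch one file over FTP, illustrating the two connections."""
    ftp = FTP(host)                      # opens the control connection
    ftp.login(user, password)            # USER/PASS travel on the control connection
    with open(filename, "wb") as fh:
        # RETR opens a separate data connection that carries the file itself
        # and closes again once the transfer completes.
        ftp.retrbinary("RETR " + filename, fh.write)
    ftp.quit()                           # close the control connection

# Usage (requires a live server; placeholders only):
# download("ftp.example.com", "anonymous", "guest@", "readme.txt")
```

Note how the control connection stays open for the whole session while the data connection exists only for the duration of the transfer, exactly as described above.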

Advantages of FTP:
o Speed: One of the biggest advantages of FTP is speed. The FTP is one of the fastest
ways to transfer the files from one computer to another computer.

o Efficient: It is more efficient as we do not need to complete all the operations to get
the entire file.

o Security: To access the FTP server, we need to login with the username and password.
Therefore, we can say that FTP is more secure.

o Back & forth movement: FTP allows us to transfer the files back and forth. Suppose
you are a manager of the company, you send some information to all the employees,
and they all send information back on the same server.

Disadvantages of FTP:
o FTP serves two operations, i.e., sending and receiving large files on a network.
However, the maximum size of a file that can be sent is 2 GB. It also doesn't
allow you to run simultaneous transfers to multiple receivers.

o Passwords and file contents are sent in clear text that allows unwanted eavesdropping.

o It is not compatible with every system.


3. SMTP:

• SMTP stands for Simple Mail Transfer Protocol.


• SMTP is a set of communication guidelines that allows software to transmit
electronic mail over the internet.
• It provides a mail exchange between users on the same or different computers,
and it also supports:
o It can send a single message to one or more recipients.
o Sending message can include text, voice, video or graphics.
o It can also send the messages on networks outside the internet.

Working of SMTP

1. Composition of Mail: A user sends an e-mail by composing an electronic mail
message using a Mail User Agent (MUA).

✓ Mail User Agent is a program which is used to send and receive mail.
✓ The message contains two parts: body and header. The body is the main part of
the message while the header includes information such as the sender and recipient
address.
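The header/body split described above can be shown with Python's standard `email.message` module, which is what many MUAs build on. The addresses and text are placeholders for illustration.

```python
from email.message import EmailMessage

# Composing a message: headers (sender, recipient, subject) plus a body.
msg = EmailMessage()
msg["From"] = "alice@example.com"      # header: sender address (placeholder)
msg["To"] = "bob@example.com"          # header: recipient address (placeholder)
msg["Subject"] = "Meeting"
msg.set_content("See you at 10 am.")   # body: the main part of the message

print(msg["To"])                       # header part
print(msg.get_content().strip())       # body part
```

An MUA would now hand this message to the SMTP server (typically via `smtplib.SMTP(...).send_message(msg)`) on TCP port 25, which is the submission step described next.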

2. Submission of Mail: After composing an email, the mail client then submits the
completed e-mail to the SMTP server by using SMTP on TCP port 25.

3. Delivery of Mail: E-mail addresses contain two parts: username of the recipient and
domain name.

✓ If the domain name of the recipient's email address is different from the
sender's domain name, then the Mail Submission Agent (MSA) will send the mail
to the Mail Transfer Agent (MTA).

✓ To relay the email, the MTA will find the target domain. It checks the MX record
from Domain Name System to obtain the target domain. The MX record contains
the domain name and IP address of the recipient's domain.

✓ Once the record is located, MTA connects to the exchange server to relay the
message.

4. Receipt and Processing of Mail: Once the incoming message is received, the exchange
server delivers it to the incoming server (Mail Delivery Agent) which stores the e-mail where
it waits for the user to retrieve it.

5. Access and Retrieval of Mail: The stored email in MDA can be retrieved by using MUA
(Mail User Agent). MUA can be accessed by using login and password.
4. POP:
The POP protocol stands for Post Office Protocol. As we know, SMTP is used as a
message transfer agent: when a message is sent, SMTP delivers it from the client
to the sender's server and then to the recipient's server. The message is then
delivered from the recipient's mail server to the actual recipient with the help
of a Message Access Agent. The Message Access Agent uses two types of protocols,
i.e., POP3 and IMAP.

How is mail transmitted?

Suppose sender wants to send the mail to receiver.

➢ First mail is transmitted to the sender's mail server. Then, the mail is transmitted
from the sender's mail server to the receiver's mail server over the internet.
➢ On receiving the mail at the receiver's mail server, the mail is then sent to the
user.
➢ The whole process is done with the help of Email protocols. The transmission of
mail from the sender to the sender's mail server and then to the receiver's mail
server is done with the help of the SMTP protocol.
At the receiver's mail server, the POP or IMAP protocol takes the data and transmits
to the actual user.
• Since SMTP is a push protocol so it pushes the message from the client to the
server.
• The third stage of email communication requires a pull protocol, and POP is a
pull protocol. The mail is transmitted from the recipient's mail server to the
client, which means that the client pulls the mail from the server.
What is POP3?
POP3 is a simple protocol with very limited functionality. In the case of the
POP3 protocol, the POP3 client is installed on the recipient's system while the
POP3 server is installed on the recipient's mail server.

Working of POP3 Protocol:


• To establish the connection between the POP3 server and the POP3 client, the POP3
server asks for the user name to the POP3 client. If the username is found in the POP3
server, then it sends the ok message.

• It then asks for the password from the POP3 client; then the POP3 client sends the
password to the POP3 server. If the password is matched, then the POP3 server sends
the OK message, and the connection gets established.

• After the establishment of a connection, the client can see the list of mails on the POP3
mail server. In the list of mails, the user will get the email numbers and sizes from the
server. Out of this list, the user can start the retrieval of mail.

Once the client retrieves all the emails from the server, all the emails from the server
are deleted. Therefore, we can say that the emails are restricted to a particular machine,
so it would not be possible to access the same mails on another machine. This situation
can be overcome by configuring the email settings to leave a copy of mail on the mail
server.
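The session just described (authenticate, list, retrieve, then delete from the server) can be sketched as a toy in-memory simulation. This is not the real POP3 wire protocol or Python's `poplib`; the class, credentials, and mails are invented purely to mirror the steps above.

```python
# Toy POP3-style session: USER/PASS-style login, mail listing with sizes,
# and the default "download then delete from server" behaviour.
class ToyPopServer:
    def __init__(self, user, password, mails):
        self.user, self.password, self.mails = user, password, mails

    def login(self, user, password):
        """Return +OK on matching credentials, -ERR otherwise."""
        return "+OK" if (user, password) == (self.user, self.password) else "-ERR"

    def list_mail(self):
        """List (email number, size) pairs, as the client sees them."""
        return [(i + 1, len(m)) for i, m in enumerate(self.mails)]

    def retrieve_all(self):
        """Hand over every mail; the server's copies are then deleted."""
        out, self.mails = self.mails, []
        return out

server = ToyPopServer("bob", "secret", ["hi", "lunch?"])
print(server.login("bob", "secret"))   # +OK
print(server.list_mail())              # [(1, 2), (2, 6)]
print(server.retrieve_all())           # ['hi', 'lunch?']
print(server.list_mail())              # [] -- mails are gone from the server
```

The final empty listing is exactly the POP3 limitation discussed below: once downloaded, the mails live only on that one client machine.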

Advantages of POP3 protocol:

o It provides easy and fast access to the emails as they are already stored on our
PC.
o There is no limit on the size of the email which we receive or send.
o It requires less server storage space as all the mails are stored on the local
machine.
o It is easy to configure and use.

Disadvantages of POP3 protocol:

o If the emails are downloaded from the server, then all the mails are deleted from
the server by default. So, mails cannot be accessed from other machines unless
they are configured to leave a copy of the mail on the server.
o The email folder which is downloaded from the mail server can also become
corrupted.
o The mails are stored on the local machine, so anyone who sits on your machine
can access the email folder.

5. HTTP:
• HTTP stands for Hyper Text Transfer Protocol.
• It is a protocol used to access the data on the World Wide Web (www).
• The HTTP protocol can be used to transfer the data in the form of plain text, hypertext,
audio, video, and so on.
• This protocol is known as Hyper Text Transfer Protocol because of its efficiency
in a hypertext environment, where there are rapid jumps from one document to
another.

Features of HTTP:

o Connectionless protocol: HTTP is a connectionless protocol. The HTTP client
initiates a request and waits for a response from the server. When the server
receives the request, it processes the request and sends back the response to
the HTTP client, after which the client disconnects the connection. The
connection between client and server exists only for the duration of the
current request and response.

o Media independent: HTTP is media independent, as data can be sent as long as
both the client and server know how to handle the data content. Both the client
and the server are required to specify the content type in the MIME-type header.

o Stateless: HTTP is a stateless protocol as both the client and server know each other
only during the current request. Due to this nature of the protocol, both the client and
server do not retain the information between various requests of the web pages.
HTTP Transactions:
The given figure shows the HTTP transaction between client and server. The client initiates a
transaction by sending a request message to the server. The server replies to the request
message by sending a response message.

HTTP Messages

HTTP messages are of two types: request and response. Both the message types follow
the same message format.

Request Message: The request message is sent by the client that consists of a request line,
headers, and sometimes a body.

Response Message: The response message is sent by the server to the client that consists of
a status line, headers, and sometimes a body
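The two message types share a common shape, which can be shown by building a request by hand and parsing a sample response. The host, path, and response below are illustrative only, not real traffic.

```python
# An HTTP request message: request line, headers, then a blank line.
request = (
    "GET /index.html HTTP/1.1\r\n"   # request line: method, path, version
    "Host: www.example.com\r\n"      # header (placeholder host)
    "\r\n"                           # blank line ends the header section
)

# An HTTP response message: status line, headers, blank line, then the body.
response = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html></html>"
status_line = response.split("\r\n")[0]
version, code, reason = status_line.split(" ", 2)

print(request.splitlines()[0])   # GET /index.html HTTP/1.1
print(code, reason)              # 200 OK
```

In practice a client library such as Python's `http.client` builds and parses these messages for you, but the wire format is the same.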
Basics of Wi-Fi:

Wi-Fi stands for Wireless Fidelity. It is a technology for wireless local area networking with
devices based on IEEE 802.11 standards.

Wi-Fi compatible devices can connect to the internet via WLAN network and a wireless access
point abbreviated as AP. Every WLAN has an access point which is responsible for receiving
and transmitting data from/to users.

Access Point (AP) is a wireless LAN base station that can connect one or many
wireless devices simultaneously to the internet.

The architecture of this standard has 2 kinds of services:

1. BSS (Basic Service Set)


2. ESS (Extended Service Set)

BSS (Basic Service Set):

• BSS is the basic building block of WLAN. It is made of wireless mobile stations and an
optional central base station called Access Point.

• Stations can form a network without an AP and can agree to be a part of a BSS.

• A BSS without an AP cannot send data to other BSSs and defines a standalone network.
It is called Ad-hoc network or Independent BSS(IBSS) i.e., A BSS without AP is an ad-
hoc network.

• A BSS with AP is infrastructure network.

The figure below depicts an IBSS, BSS with the green coloured box depicting an AP:
ESS (Extended Service Set):

ESS is made up of 2 or more BSSs with APs. BSSs are connected to the distribution system via
their APs. The distribution system can be any IEEE LAN such as Ethernet.

ESS has 2 kinds of stations:

1. Mobile – stations inside the BSS
2. Stationary – AP stations that are part of the wired LAN.

The topmost green box represents the distribution system and the other 2 green boxes
represent the APs of 2 BSSs.
Module 06

NETWORK SECURITY
Network Security refers to the measures taken by any enterprise or organization to secure its
computer network and data using both hardware and software systems. This aims at securing
the confidentiality and accessibility of the data and network.

• The most basic example of Network Security is password protection which the user
of the network oneself chooses.

Network Security: Working


The basic principle of network security is protecting stored data and networks in
layers that embed rules and regulations which must be acknowledged before any
activity can be performed on the data. These levels are:

1. Physical
2. Technical
3. Administrative

➢ Physical Network Security:


• This is the most basic level that includes protecting the data and network from
unauthorized personnel acquiring control over the confidentiality of the network.
These include external peripherals and routers that might be used for cable
connections. The same can be achieved by using devices like biometric systems.

➢ Technical Network Security:


• It primarily focuses on protecting the data stored in the network or data involved in
transitions through the network. This type serves two purposes. One is protection from
unauthorized users, and the other is protection from malicious activities.

➢ Administrative Network Security:


• This level of network security protects user behavior like how the permission has been
granted and how the authorization process takes place. This also ensures the level of
sophistication the network might need for protecting it from all the attacks. This level
also suggests necessary amendments that have to be done to the infrastructure.

Aspects of Network Security:


Privacy: Privacy means both the sender and the receiver expect confidentiality.
The transmitted message should be sent only to the intended receiver and should
be opaque to other users. Only the sender and receiver should be able to
understand the transmitted message.

Message Integrity: Data integrity means that the data must arrive at the receiver
exactly as it was sent. There must be no changes to the data content during
transmission, whether malicious or accidental, while in transit.

End-point authentication: Authentication means that the receiver is sure of the sender’s
identity, i.e., no imposter has sent the message.

Non-Repudiation: Non-Repudiation means that the receiver must be able to prove
that the received message came from a specific sender. The sender must not be
able to deny sending a message that he or she sent.

AUTHENTICATION:

Authentication is the process of verifying the identity of a user or of
information. User authentication is the process of verifying the identity of a
user when that user logs in to a computer system.

There are different types of authentication systems which are: –

1. Single-Factor authentication: – This was the first method of security that was
developed. In this authentication system, the user has to enter a username and
password to confirm their identity. If the username or password is wrong, the
user is not allowed to log in or access the system.

Advantage of the Single-Factor Authentication System: –


• It is a very simple and straightforward system to use.
• It is not at all costly.
• The user does not need any great technical skill.

The disadvantage of the Single-Factor Authentication


• It is not very secure on its own.
• Its security depends on the strength of the password entered by the user.
• The protection level of Single-Factor Authentication is quite low.

2. Two-factor Authentication: – In this authentication system, the user has to
give a username, a password, and additional information. Various types of second
factors are used for securing the system, some of which are wireless tokens,
virtual tokens, OTPs, and more.
Advantages of the Two-Factor Authentication
• The Two-Factor Authentication System provides better security than the Single-
Factor Authentication system.
• It improves productivity and flexibility.
• It prevents the loss of trust.

Disadvantages of Two-Factor Authentication


• It is time-consuming.

3. Multi-Factor Authentication system: – In this type of authentication, more than one
factor of authentication is needed. This gives better security to the user. Multi-Factor
Authentication makes keylogger and phishing attacks much harder to succeed, assuring
the user that their information is far less likely to be stolen.

The advantages of the Multi-Factor Authentication System are: –


• Greatly reduced security risk.
• Information is much less likely to be stolen.
• Little risk from key-logger activity.
• Little risk of data being captured.

The disadvantages of the Multi-Factor Authentication System are: –


• It is time-consuming.
• It can rely on third parties.

The main objective of authentication is to allow authorized users to access the computer and
to deny access to unauthorized users. Operating systems generally identify/authenticate
users in the following three ways: passwords, physical identification, and biometrics. These
are explained below:

➢ Passwords: Password verification is the most popular and commonly used authentication
technique. A password is a secret text that is supposed to be known only to a user.

o In a password-based system, each user is assigned a valid username and password
by the system administrator. The system stores all usernames and passwords.

o When a user logs in, their user name and password are verified by comparing them
with the stored login name and password.

o If the contents are the same then the user is allowed to access the system otherwise
it is rejected.
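The password check described above can be sketched as follows. Storing a salted hash instead of the raw password is an extra precaution assumed here, beyond what the text describes; the names `credential_store`, `register`, and `login` are illustrative:

```python
import hashlib
import os

# username -> (salt, hashed password); populated by the administrator
credential_store = {}

def register(username, password):
    salt = os.urandom(16)  # random salt per user
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    credential_store[username] = (salt, digest)

def login(username, password):
    if username not in credential_store:
        return False  # unknown user: reject
    salt, stored = credential_store[username]
    attempt = hashlib.sha256(salt + password.encode()).hexdigest()
    return attempt == stored  # same contents -> allow access

register("alice", "s3cret")
print(login("alice", "s3cret"))   # True  -> access granted
print(login("alice", "wrong"))    # False -> rejected
```

Because only the salted hash is stored, an attacker who reads the credential table still cannot recover the original passwords.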

➢ Physical Identification: This technique includes machine-readable badges (symbols),
cards, or smart cards. In some companies, employees must show a badge to gain access
through the organization’s gate.
o In many systems, identification is combined with the use of a password i.e., the user
must insert the card and then supply his /her password. This kind of authentication
is commonly used with ATMs.

o Smart cards can enhance this scheme by keeping the user password within the
card itself. This allows authentication without the storage of passwords in the
computer system. The loss of such a card can be dangerous.

➢ Biometrics: This method of authentication is based on the unique biological
characteristics of each user, such as fingerprints, voice or face recognition, signatures, and
eyes. It requires:
o A scanner or other devices to gather the necessary data about the user.

o Software to convert the data into a form that can be compared and stored.

o A database that stores information for all authorized users.

How it works?

Facial Characteristics – Humans are differentiated on the basis of facial characteristics such
as eyes, nose, lips, eyebrows, and chin shape.

Fingerprints – Fingerprints are believed to be unique across the entire human population.

Hand Geometry – Hand geometry systems identify features of the hand, including the
shape, length, and width of the fingers.

Retinal pattern – It is concerned with the detailed structure of the eye.

Signature – Every individual has a unique style of handwriting, and this feature is reflected in
the signatures of a person.

Voice – This method records the frequency pattern of the voice of an individual speaker.
CRYPTOGRAPHY:

Cryptography is the technique of securing information and communications through the use
of codes, so that only those persons for whom the information is intended can understand
and process it, thus preventing unauthorized access to the information. The prefix “crypt”
means “hidden” and the suffix “graphy” means “writing”.

Through cryptography, we convert our data into an unreadable secret code, called cipher text;
only those who hold the secret key can decrypt and read it. The decrypted data is called plain
text. Cryptography maintains the security and integrity of the data.

In cryptography, encryption and decryption are two processes. It is used to protect the
Messages, Credit/Debit Card details, and other relevant information. In encryption, plain text
is converted to ciphertext, and in decryption, the ciphertext is converted to plain text.

Encryption:
In cryptography, encryption is a process in which information is converted into a secret
code called ciphertext. Ciphertext cannot be understood without the corresponding key. The
main purpose of encryption is to secure digital data or information transmitted over the
internet.

Decryption:
Decryption is a process in which encrypted data is converted back to original data. The
encrypted data is called ciphertext, and the original data is called plain text, and the conversion
of ciphertext to plain text is called decryption.

Types of Keys
There are three different types of the key used in cryptography: a secret key, public key, and
private key.
• The secret key is used in the symmetric-key cryptography, and the other two keys
are used in the asymmetric key cryptography.
There are two types of cryptography

1. Symmetric key cryptography


2. Asymmetric key cryptography

Symmetric key cryptography:


• Symmetric key cryptography is cryptography in which the same key (only one key) is
used for both encryption of plain text and decryption of ciphertext. Symmetric key
cryptography is also known as secret-key cryptography.
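A toy illustration of the idea, using a repeating-key XOR cipher. This is not a secure algorithm; it only shows that one shared key performs both encryption and decryption:

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same function both encrypts
    # and decrypts when given the same key
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared-secret"          # the single key both parties hold
plaintext = b"HELLO"
ciphertext = xor_cipher(plaintext, key)   # sender encrypts
recovered = xor_cipher(ciphertext, key)   # receiver decrypts, same key
print(recovered)  # b'HELLO'
```

Real symmetric ciphers such as AES use far stronger transformations, but the key-handling pattern is the same: both sides must share the one secret key.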

Asymmetric key cryptography


• Asymmetric key cryptography is cryptography in which encryption and decryption use
different keys. The public key is used for encryption, and the private key is used for
decryption. It is also called public-key cryptography.

For example, if Paul sends a message to Bob, Paul will use Bob's public key to
encrypt the message, and then Bob will decrypt that message with his own private key.
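The Paul-and-Bob exchange can be sketched with an RSA-style key pair built from deliberately tiny primes. Real deployments use keys of 2048 bits or more; this is only an illustration of public-key encryption and private-key decryption:

```python
# Bob's toy key pair
p, q = 61, 53
n = p * q                  # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                     # public exponent (coprime with phi)
d = pow(e, -1, phi)        # private exponent, modular inverse (Python 3.8+)

def encrypt(m, public=(e, n)):
    exp, mod = public
    return pow(m, exp, mod)    # anyone with the public key can encrypt

def decrypt(c, private=(d, n)):
    exp, mod = private
    return pow(c, exp, mod)    # only the private-key holder can decrypt

# Paul encrypts with Bob's public key; Bob decrypts with his private key
message = 42
cipher = encrypt(message)
print(decrypt(cipher))  # 42
```

The two exponents are related through the modulus arithmetic above, which is why text encrypted with one key can only be recovered with the other.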
Difference between public and private key:

• Definition: Public key encryption uses two different keys for encryption and decryption;
private key encryption uses a single shared key (secret key) to encrypt and decrypt the
message.
• Known as: Public key encryption is also called asymmetric key encryption; private key
encryption is also called symmetric key encryption, because the same secret key is used
in bidirectional communication.
• Efficiency: Public key encryption is inefficient and used only for short messages; private
key encryption is efficient and recommended for large amounts of text.
• Speed: Public key encryption is slower, as it uses two different keys related to each other
through a complicated mathematical process; private key encryption is faster, as it uses
a single key for both encryption and decryption.
• Secrecy: The public key is free to use; the private key is kept secret between the sender
and receiver and is not disclosed to anyone else.
• Purpose: The main purpose of public key algorithms is to share keys securely; the main
purpose of secret key algorithms is to transmit bulk data.
• Loss of key: With a public key there is less possibility of key loss, as the key is held
publicly; losing the private key renders the system void.
DIGITAL SIGNATURES and CERTIFICATES:

The Digital Signature is a technique which is used to validate the authenticity and integrity of
the message. The basic idea behind the Digital Signature is to sign a document. When we send
a document electronically, we can also sign it. We can sign a document in two ways: to sign a
whole document and to sign a digest.

➢ Signing the Whole Document:

o In Digital Signature, a public key encryption technique is used to sign a document.
However, the roles of the public key and private key are different here. The sender
uses a private key to encrypt the message, while the receiver uses the public key of
the sender to decrypt the message.

o In Digital Signature, the private key is used for encryption while the public key is
used for decryption.

o Digital Signature cannot be achieved by using secret key encryption.

➢ Signing the Digest:

o Public key encryption is efficient if the message is short. If the message is long, a public
key encryption is inefficient to use. The solution to this problem is to let the sender
sign a digest of the document instead of the whole document.

o The sender creates a miniature version (digest) of the document and then signs it, the
receiver checks the signature of the miniature version.

o The hash function is used to create a digest of the message. The hash function creates
a fixed-size digest from the variable-length message.

o The two most common hash functions used are MD5 (Message Digest 5) and SHA-1
(Secure Hash Algorithm 1). The first produces a 128-bit digest while the second
produces a 160-bit digest.
o A hash function must have two properties to ensure success:

o First, the digest must be one-way, i.e., the digest can only be created from the
message, not vice versa.

o Second, hashing must be collision-resistant, i.e., two different messages should
not create the same digest.
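The fixed-size digest property can be observed directly with Python's hashlib (the specific inputs below are illustrative):

```python
import hashlib

short = b"hi"
longer = b"a much longer message " * 100

# The digest size is constant, regardless of the message length
print(len(hashlib.md5(short).digest()) * 8)    # 128 bits
print(len(hashlib.sha1(longer).digest()) * 8)  # 160 bits

# Collision resistance in practice: near-identical messages
# still produce completely unrelated digests
print(hashlib.sha1(b"message").hexdigest() ==
      hashlib.sha1(b"messagf").hexdigest())    # False
```

Note that MD5 and SHA-1 are both considered broken for collision resistance today; modern systems use SHA-256 or stronger, but the fixed-size property works the same way.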

[Figures: the signing process at the sender’s side and the verification process at the
receiver’s side.]
Following are the steps taken to ensure security:
o The miniature version (digest) of the message is created by using a hash function.

o The digest is encrypted by using the sender's private key.

o After the digest is encrypted, then the encrypted digest is attached to the original
message and sent to the receiver.

o The receiver receives the original message and the encrypted digest and separates the
two. The receiver applies the hash function to the original message to create a second
digest, and also decrypts the received digest using the public key of the sender. If both
digests are the same, then all the aspects of security are preserved.
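The steps above can be sketched end-to-end with a SHA-256 digest and a toy RSA-style key pair. The tiny primes are for illustration only, and reducing the digest mod n is a simplification so the toy key can sign it:

```python
import hashlib

# Toy RSA-style key pair for the sender (tiny primes; not secure)
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                       # sender's public exponent
d = pow(e, -1, phi)          # sender's private exponent (Python 3.8+)

def digest_of(message: bytes) -> int:
    # Fixed-size digest of a variable-length message, reduced mod n
    # only so the toy key can sign it (a simplification)
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

# Sender: create the digest and encrypt it with the private key
message = b"wire transfer: 100"
signature = pow(digest_of(message), d, n)

# Receiver: recompute the digest from the received message, then
# decrypt the received signature with the sender's public key;
# equality means the security aspects are preserved
received_digest = pow(signature, e, n)
print(received_digest == digest_of(message))  # True
```

If an attacker altered the message in transit, the recomputed digest would no longer match the decrypted signature, and verification would fail.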

DIGITAL CERTIFICATES:

A Digital Certificate is an electronic "password" that allows a person or organization to
exchange data securely over the Internet using the public key infrastructure (PKI). A Digital
Certificate is also known as a public key certificate or identity certificate.

✓ It is a certificate issued by a Certificate Authority (CA) to verify the identity of the
certificate holder.
✓ The CA issues an encrypted digital certificate containing the applicant’s public key and
a variety of other identification information.
✓ Digital certificate is used to attach public key with a particular individual or an entity.

Digital certificate contains: -

▪ Name of the certificate holder.
▪ Serial number, which uniquely identifies the certificate and the individual or entity
identified by it.
▪ Expiration date.
▪ Copy of the certificate holder’s public key (used for encrypting messages and verifying
digital signatures).
▪ Digital signature of the certificate-issuing authority.

➔ Digital certificate is also sent with the digital signature and the message.
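For illustration only, the certificate fields listed above could be represented as a simple record. The field names here are assumptions; real certificates follow the X.509 standard and are encoded in DER/PEM form, not as language objects:

```python
from dataclasses import dataclass

@dataclass
class DigitalCertificate:
    holder_name: str        # name of the certificate holder
    serial_number: int      # uniquely identifies this certificate
    expiration_date: str    # end of the validity period
    public_key: str         # copy of the holder's public key
    ca_signature: str       # digital signature of the issuing CA

# Hypothetical certificate instance
cert = DigitalCertificate(
    holder_name="example.org",
    serial_number=103945,
    expiration_date="2026-01-01",
    public_key="<holder's public key>",
    ca_signature="<CA signature over the fields above>",
)
print(cert.serial_number)  # 103945
```

A verifier checks the CA's signature over all the other fields before trusting the public key inside.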
FIREWALL:

A firewall is a network security device, either hardware or software-based, which monitors all
incoming and outgoing traffic and based on a defined set of security rules it accepts, rejects
or drops that specific traffic.

• Accept : allow the traffic


• Reject : block the traffic but reply with an “unreachable error”
• Drop : block the traffic with no reply.

A firewall establishes a barrier between secured internal networks and outside untrusted
networks, such as the Internet.

History and Need for Firewall


Before Firewalls, network security was performed by Access Control Lists (ACLs) residing on
routers.

o ACLs are rules that determine whether network access should be granted or denied to
specific IP addresses.
o But an ACL cannot determine the nature of the packet it is blocking. Also, ACLs alone
do not have the capacity to keep threats out of the network. Hence, the firewall was
introduced.

Connectivity to the Internet is no longer optional for organizations. However, while accessing
the Internet provides benefits to the organization, it also enables the outside world to interact
with the organization's internal network. This creates a threat to the organization. In order to
secure the internal network from unauthorized traffic, we need a firewall.
How Firewall Works
A firewall matches the network traffic against the rule set defined in its table. Once a rule is
matched, the associated action is applied to the network traffic.

✓ For example, a rule may state that no employee from the HR department can access
data from the code server, while another rule allows the system administrator to
access data from both the HR and technical departments.
✓ Rules can be defined on the firewall based on the needs and security policies of the
organization.

From the perspective of a server, network traffic can be either outgoing or incoming. The
firewall maintains a distinct set of rules for each case.

o Most outgoing traffic, originating from the server itself, is allowed to pass. Still,
setting rules on outgoing traffic is always better in order to achieve more security and
prevent unwanted communication.
o Incoming traffic is treated differently.

Most traffic that reaches the firewall uses one of three major protocols: TCP, UDP, or ICMP
(strictly speaking, ICMP operates at the network layer rather than the transport layer). All of
these carry a source address and a destination address. TCP and UDP also have port numbers;
ICMP uses a type code instead of a port number, which identifies the purpose of the packet.

Default policy: It is very difficult to explicitly cover every possible rule on the firewall. For this
reason, the firewall must always have a default policy.

✓ Default policy only consists of action (accept, reject or drop).


✓ Suppose no rule is defined about SSH connections to the server on the firewall. Then
the firewall will follow the default policy.
✓ If the default policy on the firewall is set to accept, then any computer outside your
office could establish an SSH connection to the server. Therefore, setting the default
policy to drop (or reject) is always a good practice.
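A minimal sketch of rule matching with a default policy, in the spirit of the description above. The rule fields, addresses, and first-match-wins ordering are assumptions for illustration, modeled on typical rule tables:

```python
import ipaddress

# Ordered rule table: first match wins, as in a typical firewall.
# Each rule: (source network, destination port, protocol, action).
RULES = [
    (ipaddress.ip_network("10.0.5.0/24"), 22, "TCP", "drop"),    # block SSH from this subnet
    (ipaddress.ip_network("0.0.0.0/0"),   80, "TCP", "accept"),  # allow HTTP from anywhere
]
DEFAULT_POLICY = "drop"  # unmatched traffic is silently blocked

def filter_packet(src_ip: str, dst_port: int, proto: str) -> str:
    addr = ipaddress.ip_address(src_ip)
    for network, port, protocol, action in RULES:
        if addr in network and dst_port == port and proto == protocol:
            return action          # matched rule: apply its action
    return DEFAULT_POLICY          # no explicit rule: default policy

print(filter_packet("10.0.5.9", 22, "TCP"))   # drop   (blocked SSH)
print(filter_packet("8.8.8.8", 80, "TCP"))    # accept (allowed HTTP)
print(filter_packet("8.8.8.8", 443, "TCP"))   # drop   (default policy)
```

With the default policy set to drop, the unlisted HTTPS packet in the last call is blocked even though no rule mentions port 443, which is exactly the safety property argued for above.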

Types of Firewall:
Firewalls are generally of two types: Host-based and Network-based.

1. Host-based Firewalls: A host-based firewall is installed on each network node and
controls each incoming and outgoing packet. It is a software application, or suite of
applications, that comes as part of the operating system.
o Host-based firewalls are needed because network firewalls cannot provide
protection inside a trusted network. A host firewall protects each host from attacks
and unauthorized access.
2. Network-based Firewalls: Network firewalls function at the network level. In other
words, these firewalls filter all incoming and outgoing traffic across the network.
o They protect the internal network by filtering the traffic using rules defined on the
firewall.
o A network firewall might have two or more network interface cards (NICs).
o A network-based firewall is usually a dedicated system with proprietary software
installed.
