Chapter – 1

Introduction to computer networks, applications and uses, classification of networks based on topologies, geographical distribution and communication techniques
Introduction to Computer Networks: Computer networks are collections of interconnected
devices, such as computers, servers, switches, routers, and other networking equipment, that are
used to facilitate communication and data sharing among them. They play a crucial role in
modern society and are used in various applications.

Applications and Uses of Computer Networks: Computer networks are used in a wide range of
applications, including:

1. Communication: Computer networks enable communication among users through various means, such as email, instant messaging, video conferencing, and social media.
2. Data Sharing and File Transfer: Computer networks allow users to share and transfer data,
files, and resources, such as printers and storage devices, over the network, facilitating
efficient information sharing.
3. Resource Sharing: Computer networks facilitate sharing of hardware and software resources,
such as servers, databases, and applications, leading to efficient utilization of resources.
4. Collaboration: Computer networks support collaborative work environments, where multiple
users can work on the same documents or projects in real-time, enhancing productivity and
teamwork.
5. Internet Access: Computer networks provide access to the internet, allowing users to browse
the web, search for information, and use online services and applications.
6. E-commerce: Computer networks facilitate online buying and selling, enabling e-commerce
activities, such as online shopping, online banking, and online payments.
7. Entertainment: Computer networks are used for streaming audio and video content, online
gaming, and other forms of digital entertainment.

Classification of Networks based on Topologies: Computer networks can be classified based on their topologies, which refer to the physical or logical arrangement of devices in a network. Some common network topologies are:

1. Bus Topology: In a bus topology, all devices are connected to a single communication line, called a bus, which acts as a shared backbone for the network; data sent by one device travels along the bus and is seen by all other devices.
2. Star Topology: In a star topology, all devices are connected to a central device, such as a hub or a switch, which acts as the central point through which all traffic passes.
3. Ring Topology: In a ring topology, devices are connected in a circular manner, where each device is connected to its adjacent devices and data travels around the ring from one device to the next.
4. Mesh Topology: In a mesh topology, devices are connected to each other in a fully interconnected manner, where each device has a direct link to every other device, providing redundancy and fault tolerance.
5. Tree/Hierarchical Topology: In a tree or hierarchical topology, devices are arranged in a hierarchical structure, resembling a tree, with a root node connected to child nodes, which in turn can have their own child nodes. This topology is commonly used in large networks, such as enterprise networks.

Classification of Networks based on Geographical Distribution: Computer networks can also be classified based on their geographical distribution, which refers to the geographical area that the network covers. Some common types of networks based on geographical distribution are:

1. Local Area Network (LAN): A LAN is a network that covers a small geographical area, such as a
single building or a campus. LANs are typically used in homes, offices, schools, and other small-
scale environments.
2. Metropolitan Area Network (MAN): A MAN is a network that covers a larger geographical
area, such as a city or a town. MANs are used to connect multiple LANs within a city or town.
3. Wide Area Network (WAN): A WAN is a network that covers a large geographical area, such as a
country or even the entire world.

Reference models: OSI model, TCP/IP model, Overview of Connecting devices (Hub, Repeaters, Switches, Bridges, Routers, Gateways)
Reference Models: Reference models are conceptual frameworks that provide a standard way of
understanding and organizing the various components and protocols of a computer network.
Two commonly used reference models are the OSI (Open Systems Interconnection) model and
the TCP/IP (Transmission Control Protocol/Internet Protocol) model.

1. OSI Model: The OSI model is a theoretical model developed by the International Organization
for Standardization (ISO) that consists of seven layers, each representing a specific function of
communication in a network. The seven layers of the OSI model are:
 Physical Layer: Deals with the physical connection between devices, such as cables, connectors,
and network interfaces. It defines the electrical and mechanical characteristics of the network.
 Data Link Layer: Provides error-free communication between directly connected devices over a physical link. It is responsible for framing data into frames and for detecting and correcting transmission errors.
 Network Layer: Handles the routing of data packets between different networks. It establishes
the logical path for data transfer and determines the best route for data to reach its destination.
 Transport Layer: Ensures reliable and efficient data transfer between end-to-end applications. It
provides error checking, flow control, and congestion control mechanisms.
 Session Layer: Manages the communication sessions between applications. It establishes,
maintains, and terminates connections between devices.
 Presentation Layer: Deals with the translation and formatting of data exchanged between applications. It handles tasks such as data compression, encryption, and data format conversion.
 Application Layer: Provides services directly to the end-user applications, such as file transfer,
email, and web browsing.
2. TCP/IP Model: The TCP/IP model is a widely used reference model that is based on the protocols
used in the Internet. It consists of four layers, which are:
 Network Interface Layer: Deals with the physical connection between devices and the
transmission of data at the physical and data link layers of the OSI model.
 Internet Layer: Handles the routing of data packets between networks, similar to the network
layer of the OSI model.
 Transport Layer: Provides reliable data transfer between end-to-end applications, similar to the
transport layer of the OSI model.
 Application Layer: Provides services directly to the end-user applications, similar to the
application layer of the OSI model.

Overview of Connecting Devices: Connecting devices are essential components of a computer network that facilitate the exchange of data between devices. Some common types of connecting devices are:

1. Hub: A hub is a simple device that operates at the physical layer of the OSI model. It receives
data from one port and broadcasts it to all other ports, allowing devices to communicate with
each other. However, hubs are not intelligent and do not filter or manage network traffic,
which can lead to collisions and reduced network performance.
2. Repeaters: A repeater is a device that amplifies and regenerates signals to extend the range of
a network. It operates at the physical layer and can be used to extend the length of a network
segment.
3. Switches: A switch is a more advanced connecting device that operates at the data link layer of
the OSI model. It uses MAC (Media Access Control) addresses to forward data packets only to
the intended recipient, reducing collisions and improving network performance.
4. Bridges: A bridge is a connecting device that operates at the data link layer and can connect two
or more network segments to filter and forward data packets based on their MAC addresses.
Bridges help to reduce collisions and improve network performance by segmenting the network
into smaller collision domains.
5. Routers: A router is a sophisticated connecting device that operates at the network layer of the OSI model. It uses IP (Internet Protocol) addresses to route data packets between different networks, such as LANs and WANs. Routers can make intelligent decisions based on network topology, traffic load, and other factors to optimize data routing and overall network performance.


6. Gateways: A gateway is a device that connects two or more networks that use different communication protocols. It operates at the application layer of the OSI model and performs protocol translation to enable communication between networks with different protocols. Gateways can also provide security features, such as firewall functionality, to protect the network from unauthorized access.

In summary, connecting devices such as hubs, repeaters, switches, bridges, routers, and gateways play crucial roles in enabling communication within and across networks. Understanding the functionalities and characteristics of these connecting devices is essential for designing and maintaining efficient, reliable networks.

Transmission Media: wired, wireless, Multiplexing techniques - FDM, TDM

Transmission Media: Transmission media are the physical channels that are used to transmit data in a computer network. They can be classified into two main types: wired and wireless.

1. Wired Media: Wired media refer to physical cables or wires that are used to transmit
data between devices in a network. Some common types of wired media are:
 Coaxial Cable: Coaxial cable consists of a central conductor, an insulating layer, a metal shield,
and an outer insulating layer. It is commonly used for cable television (CATV) and broadband
internet connections.
 Twisted Pair Cable: Twisted pair cable consists of pairs of copper wires twisted together to
reduce interference. It is widely used for Ethernet networks and telephone systems.
 Fiber Optic Cable: Fiber optic cable uses light signals to transmit data through glass or plastic
fibers. It offers high data transfer rates, low signal loss, and immunity to electromagnetic
interference. Fiber optic cable is commonly used for long-distance and high-bandwidth
applications, such as in backbone networks.
2. Wireless Media: Wireless media refer to the transmission of data without the use of physical
cables or wires. It uses electromagnetic waves to transmit data through the air. Some common
types of wireless media are:
 Radio Waves: Radio waves are used for wireless communication, such as Wi-Fi networks,
Bluetooth, and cellular networks. They have a wide range and can penetrate obstacles, but
their performance can be affected by interference and signal degradation.
 Infrared Waves: Infrared waves are used for short-range wireless communication, such as
remote controls and IrDA (Infrared Data Association) for data transfer between devices. Infrared
waves have a limited range and require line-of-sight communication.

Multiplexing Techniques: Multiplexing is a technique used to combine multiple data streams into
a single transmission medium, allowing multiple devices to share the same communication
channel. There are two common multiplexing techniques:
1. Frequency Division Multiplexing (FDM): FDM is a technique that divides the frequency bandwidth
of a transmission medium into multiple non-overlapping frequency bands, with each band
assigned to a different data stream. Each data stream is modulated onto a carrier signal at a
specific frequency band, and the modulated signals are combined for transmission. At the
receiving end, the carrier signals are demodulated to extract the original data streams. FDM is
commonly used in analog communication systems, such as traditional analog phone lines.
2. Time Division Multiplexing (TDM): TDM is a technique that divides the time slots of a
transmission medium into multiple time intervals, with each interval assigned to a different data
stream. Each data stream is transmitted in its allocated time slot, and the time slots are repeated
in a continuous cycle. At the receiving end, the data streams are demultiplexed based on their
time slots to extract the original data streams. TDM is commonly used in digital communication
systems, such as digital phone lines and Ethernet networks.
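
To make the idea concrete, the following minimal Python sketch simulates synchronous TDM: units from several streams are interleaved into fixed, repeating time slots on one shared channel, then recovered at the receiver by reading every n-th slot. The streams and slot layout are illustrative, not tied to any real transmission system.

# A minimal sketch of synchronous Time Division Multiplexing (illustrative only):
# three data streams share one channel by taking turns in fixed time slots.

def tdm_multiplex(streams):
    """Interleave one unit from each stream per cycle (one frame per cycle)."""
    frames = []
    for slot_group in zip(*streams):   # one unit from every stream per cycle
        frames.extend(slot_group)
    return frames

def tdm_demultiplex(frames, n_streams):
    """Recover the original streams by reading every n-th slot."""
    return [frames[i::n_streams] for i in range(n_streams)]

streams = [list("AAAA"), list("BBBB"), list("CCCC")]
line = tdm_multiplex(streams)          # ['A','B','C','A','B','C',...]
print("on the wire:", "".join(line))
print("recovered  :", ["".join(s) for s in tdm_demultiplex(line, 3)])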

In conclusion, transmission media play a crucial role in computer networks, enabling the
transmission of data between devices. Wired media, such as coaxial cables, twisted pair cables,
and fiber optic cables, use physical cables or wires to transmit data, while wireless media, such as
radio waves and infrared waves, use electromagnetic waves. Multiplexing techniques, such as
FDM and TDM, are used to combine multiple data streams into a single transmission medium,
allowing multiple devices to share the same communication channel.

Chapter – 2

Data Link Layer Functions, Framing, Error Control - Error Correction codes (Hamming code), Error Detection codes (Parity Bit, CRC)
Data Link Layer:

The Data Link Layer is the second layer of the OSI (Open Systems Interconnection) model; in the TCP/IP model its functions fall within the network interface layer. It is responsible for providing reliable and error-free communication between two directly connected devices over a physical link, which involves framing data packets, error control, and error detection.

Functions of the Data Link Layer:


1. Framing: The Data Link Layer breaks the data received from the upper layer into smaller units
called frames. Frames are then transmitted over the physical link. Framing helps in identifying the
boundaries of data packets, allowing the receiver to correctly reassemble the original data.
2. Error Control: The Data Link Layer ensures reliable communication by providing error control
mechanisms. Error control techniques are used to detect and correct errors that may occur
during data transmission. Error control techniques include error correction codes and error
detection codes.
3. Access Control: The Data Link Layer is responsible for controlling the access to the shared
communication medium in case of shared media networks, such as Ethernet. It ensures
that multiple devices connected to the same communication medium can transmit data
without interference.

Framing: Framing is the process of dividing the data received from the upper layer into smaller
units called frames. A frame typically consists of a header, data field, and a trailer. The header
contains control information, such as the source and destination addresses, to identify the
frames. The data field contains the actual data being transmitted. The trailer contains error
detection codes or error correction codes, which help in detecting or correcting errors during
transmission.

Error Control: Error control techniques are used to ensure reliable communication in the presence
of errors that may occur during data transmission. Two common error control techniques are:

1. Error Correction Codes (Hamming Code): Error correction codes are used to correct errors that may occur during data transmission. Hamming Code is a popular error correction code that adds redundant parity bits to the data at specific positions (the powers of two). These redundant bits are used to detect and correct single-bit errors. Hamming Code is widely used in systems where data integrity is critical, such as computer memory, data storage, and digital communication (a minimal sketch follows this list).
2. Error Detection Codes (Parity Bit, CRC): Error detection codes are used to detect errors that may
occur during data transmission. Parity Bit is a simple error detection code that adds an additional
bit (parity bit) to the data. The parity bit can be even parity or odd parity, and it is used to detect
single-bit errors. Cyclic Redundancy Check (CRC) is a more sophisticated error detection code
that uses polynomial division to generate a checksum, which is added to the data. The receiver
calculates the checksum again and compares it with the received checksum to detect errors.
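
The following minimal Python sketch illustrates the Hamming(7,4) code described above: three parity bits placed at positions 1, 2, and 4 protect four data bits, and the syndrome computed at the receiver gives the position of a single-bit error directly. It is an illustrative toy, not a production codec.

# A minimal sketch of Hamming(7,4) single-bit error correction (illustrative only).
# Bit positions are 1..7; parity bits sit at positions 1, 2, and 4.

def hamming74_encode(d):                      # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]                   # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                   # covers positions 2,3,6,7
    p4 = d[1] ^ d[2] ^ d[3]                   # covers positions 4,5,6,7
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]

def hamming74_decode(c):                      # c = 7-bit codeword, possibly corrupted
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4           # 0 = no error; else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1                  # flip the erroneous bit
    return [c[2], c[4], c[5], c[6]]           # extract the four data bits

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                                  # simulate a single-bit error in transit
print(hamming74_decode(code))                 # -> [1, 0, 1, 1], error corrected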

In conclusion, the Data Link Layer is responsible for framing data packets, error control, and error
detection in a computer network. Framing divides the data into smaller units called frames, error
correction codes such as Hamming Code are used to correct errors, and error detection codes
such as Parity Bit and CRC are used to detect errors during data transmission. These mechanisms
ensure reliable communication between directly connected devices over a physical link.
Flow Control - Stop and Wait Protocol, Sliding Window - Go-Back-N and Selective Repeat (ARQ)
Flow Control:

Flow control is a mechanism used in computer networks to manage the rate at which data is
transmitted between sender and receiver, preventing the receiver from being overwhelmed with
data it cannot process. Flow control ensures that data is transmitted at a rate that the receiver
can handle, preventing data loss or buffer overflow.

Stop and Wait Protocol: Stop and Wait Protocol is a simple form of flow control used in
communication systems. In this protocol, the sender transmits a single data frame and waits for
an acknowledgment (ACK) from the receiver before sending the next frame. The receiver
acknowledges the receipt of each frame by sending an ACK or a negative acknowledgment (NAK)
if the frame is found to be corrupted or lost. The sender retransmits the frame in case of a NAK or
if no ACK is received within a timeout period.

Sliding Window Protocol: Sliding Window Protocol is a more advanced form of flow control that
allows the sender to transmit multiple frames without waiting for individual acknowledgments.
The sender maintains a "window" of allowed sequence numbers, and the receiver acknowledges
the receipt of frames within the window. The window "slides" as new acknowledgments are
received, allowing the sender to transmit more frames. There are two main types of Sliding
Window Protocol:

1. Go-Back-N (GBN): In the Go-Back-N protocol, the sender can transmit multiple frames without waiting for individual acknowledgments. The receiver, however, accepts frames only in order: if a frame is lost or corrupted, all subsequent frames are discarded until the missing frame arrives. The sender then goes back and retransmits all frames from the last acknowledged frame onwards (a simulation sketch follows this list).
2. Selective Repeat (SR ARQ): In the Selective Repeat protocol, the sender can also transmit multiple frames without waiting for acknowledgments, but the receiver acknowledges and buffers each correctly received frame individually. The sender retransmits only the frames that are not acknowledged, allowing more efficient recovery of just the lost or corrupted frames instead of retransmitting the entire window as in Go-Back-N.
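
The following self-contained Python sketch simulates the Go-Back-N behaviour described in item 1, with no real network I/O: the sender keeps a window of unacknowledged frames, a simulated loss stalls in-order delivery, and the sender resends from the lost frame onwards. The window size and loss pattern are illustrative, and for brevity the frames a real sender would transmit after the loss (and the receiver would discard) are not simulated.

WINDOW = 4
frames = list(range(10))              # frame sequence numbers to deliver
lose_once = {3}                       # simulate losing frame 3 on its first transmission

base = 0                              # oldest unacknowledged frame
sent, delivered = [], []
while base < len(frames):
    # transmit every frame the window currently allows
    for seq in range(base, min(base + WINDOW, len(frames))):
        sent.append(seq)
        if seq in lose_once:
            lose_once.discard(seq)    # frame lost in transit this time
            break                     # receiver sees nothing in order past the loss
        delivered.append(seq)         # receiver accepts only in-order frames
        base = seq + 1                # cumulative ACK slides the window forward

print("transmissions:", sent)         # frame 3 appears twice: lost once, then resent
print("delivered    :", delivered)    # always in order, with no gaps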

In conclusion, flow control is an important mechanism in computer networks to manage the rate
of data transmission between sender and receiver. Stop and Wait Protocol is a simple form of
flow control where the sender waits for acknowledgments before sending the next frame, while
Sliding Window Protocol, including Go-Back-N and Selective Repeat, allows for the transmission
of multiple frames without waiting for individual acknowledgments, improving the efficiency of
data transmission.
MAC Sub-layer Protocols: ALOHA, CSMA, CSMA/CD protocols, IEEE Standards 802.3, 802.4, 802.5
MAC (Media Access Control) is a sub-layer of the Data Link Layer in the OSI model that is
responsible for managing access to the shared media in a computer network. MAC sub-layer
protocols determine how devices on a network share the available bandwidth and avoid
collisions when multiple devices try to transmit data simultaneously.

ALOHA Protocol: ALOHA is a random access protocol used for media access control in wireless
networks. In the ALOHA protocol, a device can transmit data whenever it has data to send,
without checking for the availability of the channel. If a collision occurs due to multiple devices
transmitting simultaneously, the devices wait for a random time and then retry the transmission.
ALOHA protocol is simple, but it has a higher probability of collisions and lower efficiency
compared to other protocols.

CSMA (Carrier Sense Multiple Access): CSMA is a protocol used for media access control in
Ethernet networks. In CSMA, devices first listen for the carrier signal on the shared medium to
check if it is idle before attempting to transmit data. If the medium is busy, devices wait until it
becomes idle before trying to transmit. However, collisions can still occur if multiple devices start
transmitting at the same time, leading to retransmissions and decreased network efficiency.

CSMA/CD (Carrier Sense Multiple Access with Collision Detection): CSMA/CD is a protocol used
for media access control in Ethernet networks that also includes collision detection. In CSMA/CD,
devices listen for the carrier signal while transmitting data, and if a collision is detected (i.e., two
or more devices transmit data simultaneously and their signals collide), the devices stop
transmitting and wait for a random time before retrying the transmission. CSMA/CD helps to
reduce collisions and improve network efficiency compared to basic CSMA.
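
The random wait in CSMA/CD is typically implemented as binary exponential backoff. The sketch below shows the classic rule for 10 Mbps Ethernet: after the n-th collision a station waits a random number of 51.2-microsecond slot times drawn from 0 to 2^min(n,10) - 1. The constants are the standard ones, but the surrounding code is only illustrative.

# A minimal sketch of CSMA/CD-style binary exponential backoff (illustrative only).

import random

SLOT_TIME_US = 51.2        # classic 10 Mbps Ethernet slot time, in microseconds

def backoff_delay(collision_count, max_exponent=10):
    k = min(collision_count, max_exponent)
    slots = random.randrange(2 ** k)          # uniform in [0, 2^k - 1]
    return slots * SLOT_TIME_US

for attempt in range(1, 6):
    print(f"after collision {attempt}: wait {backoff_delay(attempt):.1f} us")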

IEEE Standards 802.3, 802.4, 802.5: These are IEEE (Institute of Electrical and Electronics Engineers)
standards that define specific protocols for media access control in different types of networks:

1. IEEE 802.3: Also known as Ethernet, this standard defines protocols for wired local area networks
(LANs) based on CSMA/CD. Ethernet is the most widely used LAN technology and supports
data rates ranging from 10 Mbps to multi-gigabit speeds.
2. IEEE 802.4: Also known as Token Bus, this standard defines protocols for a token-passing bus
network, where devices take turns transmitting data using a token that circulates on the bus.
Token Bus is a legacy LAN technology that is not widely used today.
3. IEEE 802.5: Also known as Token Ring, this standard defines protocols for a token-passing ring
network, where devices take turns transmitting data using a token that circulates on a physical
ring topology. Token Ring is also a legacy LAN technology that is not widely used today.

In conclusion, MAC sub-layer protocols, such as ALOHA, CSMA, and CSMA/CD, are used for
managing media access in computer networks. Additionally, IEEE standards such as 802.3
(Ethernet), 802.4 (Token Bus), and 802.5 (Token Ring) define specific protocols for media access
control in different types of networks.
Chapter – 3

Network Layer Design issues, IPv4 addressing basics and Header format, CIDR, subnetting and subnet masking
The Network Layer is the third layer in the OSI model and is responsible for routing and
forwarding data packets across different networks. It is responsible for establishing and
maintaining end-to-end communication between source and destination devices in a computer
network. Some of the design issues in the Network Layer include addressing, routing, and
congestion control.

IPv4 Addressing Basics and Header Format: IPv4 (Internet Protocol version 4) is the most widely used protocol for addressing devices in a computer network. IPv4 addresses are 32-bit addresses written in dotted-decimal notation as four numbers separated by periods (e.g., 192.168.0.1). The
Network Layer uses IPv4 addresses to uniquely identify devices in a network and route data
packets to their destinations.

The header format of an IPv4 packet consists of several fields, including the source and
destination IP addresses, version number, header length, type of service (TOS), total length,
identification, flags, fragment offset, time-to-live (TTL), protocol, header checksum, and options.
The source and destination IP addresses are the most critical fields, as they are used to determine
the source and destination devices for routing purposes.
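
As a concrete illustration of this header layout, the Python sketch below packs and parses the fixed 20-byte IPv4 header with the standard struct module. The sample field values (addresses, zeroed checksum) are made up for demonstration, and any options beyond the first 20 bytes are ignored.

# A minimal sketch of parsing the fixed 20-byte IPv4 header (illustrative only).

import struct, socket

def parse_ipv4_header(raw):
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "header_len_bytes": (ver_ihl & 0x0F) * 4,
        "tos": tos,
        "total_length": total_len,
        "identification": ident,
        "flags": flags_frag >> 13,
        "fragment_offset": flags_frag & 0x1FFF,
        "ttl": ttl,
        "protocol": proto,                    # e.g. 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# Hand-built sample header: version 4, IHL 5, TTL 64, protocol TCP,
# 192.168.0.1 -> 192.168.0.2 (checksum left as 0 for illustration).
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("192.168.0.1"), socket.inet_aton("192.168.0.2"))
print(parse_ipv4_header(sample))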

CIDR (Classless Inter-Domain Routing): CIDR is a method used for IP address allocation and
routing. It allows for more efficient use of IP addresses by eliminating the limitations of the older
classful IP addressing scheme. In CIDR, IP addresses are represented in the form of a prefix
followed by a slash (/) and a number, which represents the number of bits used for the network
portion of the address. For example, 192.168.0.0/24 represents a network with a 24-bit network
prefix, which allows for 256 host addresses.

Subnetting and Subnet Masking: Subnetting is the process of dividing a large IP network into
smaller, more manageable subnets. Subnets are created by borrowing bits from the host portion
of an IP address and using them to create a network prefix for the subnet. Subnetting helps in
optimizing network performance, improving security, and simplifying network management.

A subnet mask is a 32-bit value that is used to determine the network and host portions of an IP
address. It is applied to both the source and destination IP addresses in a data packet to
determine whether they are on the same local network or need to be routed to a different
network. The subnet mask is used in conjunction with the IP address to determine the network
address and the host address.
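
Python's standard ipaddress module makes these ideas easy to experiment with. The sketch below takes the 192.168.0.0/24 example, shows its mask and address count, borrows two host bits to split it into four /26 subnets, and tests which subnet a given address belongs to. The specific addresses are illustrative.

# A minimal sketch of CIDR and subnetting with the standard ipaddress module.

import ipaddress

net = ipaddress.ip_network("192.168.0.0/24")
print(net.netmask)              # 255.255.255.0
print(net.num_addresses)        # 256 (254 usable hosts + network + broadcast)
print(net.broadcast_address)    # 192.168.0.255

# Borrow 2 host bits to split the /24 into four /26 subnets of 64 addresses each.
for subnet in net.subnets(prefixlen_diff=2):
    print(subnet, "->", subnet.network_address, "-", subnet.broadcast_address)

# Membership test: the mask is ANDed with the address for us.
addr = ipaddress.ip_address("192.168.0.130")
print(addr in ipaddress.ip_network("192.168.0.128/26"))   # True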

In conclusion, the Network Layer of the OSI model deals with issues such as addressing, routing,
and congestion control. IPv4 is the most widely used protocol for addressing devices in a
network, and CIDR and subnetting/subnet masking are methods used for efficient IP address
allocation and routing. Understanding IPv4 addressing basics, header format, CIDR, and
subnetting/subnet masking is crucial for network administrators to effectively manage and optimize their networks.

Routing, Optimality Principle, Routing protocols - Shortest path, flooding, distance vector routing, link state routing
Routing is the process of determining the best path for data packets to travel from a source
device to a destination device across a network. The optimality principle is a fundamental
concept in routing: it states that if router J is on the optimal path from router I to router K,
then the optimal path from J to K also falls along that same route. A consequence is that the
set of optimal routes from all sources to a given destination forms a tree, called a sink tree,
rooted at the destination.

Routing protocols are algorithms or sets of rules used by routers to determine the best path for
data packets to reach their destination. Some commonly used routing protocols include:

1. Shortest Path Routing: This type of routing protocol determines the shortest path from the source to the destination based on the number of hops or the distance between nodes. Examples of shortest path algorithms include Dijkstra's algorithm and the Bellman-Ford algorithm (a Dijkstra sketch follows this list).
2. Flooding: In this type of routing, a data packet is broadcasted to all neighboring nodes in the
network, and each node forwards the packet to all its neighboring nodes, except the one from
which it received the packet. Flooding is simple but can result in network congestion and
unnecessary data duplication.
3. Distance Vector Routing: Distance vector routing protocols use metrics, such as hop count or
cost, to determine the best path for data packets. Each node maintains a table that contains the
distance or cost to reach all possible destinations in the network. Examples of distance vector
routing protocols include Routing Information Protocol (RIP) and Interior Gateway Routing
Protocol (IGRP).
4. Link State Routing: Link state routing protocols involve each node in the network sending
information about its directly connected links to all other nodes in the network. This information
is used to build a complete map of the network, which is then used to determine the best path
for data packets. Examples of link state routing protocols include Open Shortest Path First
(OSPF) and Intermediate System to Intermediate System (IS-IS).
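
As a concrete example of shortest path routing, here is a minimal Python implementation of Dijkstra's algorithm over a small graph of routers; the topology and link costs are invented for illustration.

# A minimal sketch of Dijkstra's shortest path algorithm (illustrative graph).

import heapq

def dijkstra(graph, source):
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry, skip it
        for v, cost in graph[u].items():
            if d + cost < dist[v]:
                dist[v] = d + cost        # found a shorter path to v
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 2, 'C': 3, 'D': 4}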

The choice of routing protocol depends on factors such as the size and complexity of the network, the required level of scalability and convergence speed, and the characteristics of the network environment. Network administrators need to carefully select and configure routing protocols to ensure efficient and reliable packet delivery.

Congestion control - Leaky Bucket, Token Bucket, jitter control


Congestion control is an important aspect of network management that aims to prevent or
mitigate congestion, which is the phenomenon of excessive data traffic overwhelming a network
or network segment, leading to degraded performance or complete network failure. Two
commonly used congestion control mechanisms are the Leaky Bucket and Token Bucket
algorithms. Additionally, jitter control is another important aspect of network management that
aims to minimize the variation in packet arrival times, which can impact the quality of real-time
applications such as voice over IP (VoIP) or video streaming.

1. Leaky Bucket: The Leaky Bucket algorithm is a simple congestion control mechanism that
regulates the rate at which data packets are transmitted from a source. It can be implemented in
a network device, such as a router, to ensure that data is transmitted at a controlled rate. The
basic idea is that incoming packets are added to the "bucket" or buffer, and if the bucket
overflows, the excess packets are dropped or delayed. This helps in controlling the rate of data
transmission and preventing congestion.
2. Token Bucket: The Token Bucket algorithm is similar to the Leaky Bucket algorithm, but it uses a token-based approach. Tokens are generated at a fixed rate and represent the availability of network resources. A data packet may be transmitted only if enough tokens are available in the bucket; otherwise the packet is dropped or delayed. Unlike the leaky bucket, this permits short bursts up to the bucket's capacity while still controlling the average rate (a sketch follows this list).
3. Jitter Control: Jitter is the variation in packet arrival times, which can result in uneven playback
or performance issues in real-time applications. Jitter control mechanisms aim to minimize the
variation in packet arrival times to ensure smooth playback or performance of real-time
applications. Some common jitter control techniques include buffering, queuing, and packet
scheduling algorithms that prioritize packets based on their time sensitivity or importance.
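
The following minimal Python sketch implements the token bucket idea from item 2: tokens accumulate at a fixed rate up to a burst capacity, and a packet is admitted only if a token is available. The rate and capacity values are illustrative.

# A minimal sketch of the token bucket algorithm (illustrative only).

import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate                  # tokens added per second
        self.capacity = capacity          # maximum bucket size (burst limit)
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost=1):
        now = time.monotonic()
        # refill tokens for the elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True                   # enough tokens: transmit
        return False                      # bucket empty: drop or delay the packet

bucket = TokenBucket(rate=5, capacity=10)        # 5 packets/s, bursts of up to 10
sent = sum(bucket.allow() for _ in range(20))
print(f"{sent} of 20 back-to-back packets admitted")   # roughly the burst size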

Congestion control mechanisms, such as the Leaky Bucket and Token Bucket algorithms, help regulate the rate of data transmission and prevent congestion, while jitter control techniques smooth out variations in packet arrival times. Together, these techniques are important for ensuring efficient and reliable network performance in modern networks.

Chapter – 4

Transport Layer: Need of transport layer with its services, Quality of Service, connection-oriented and connectionless
The transport layer is responsible for providing reliable communication between applications
running on different devices in a network. It ensures that data is delivered accurately, efficiently,
and in the correct order. The transport layer also provides services such as flow control, error
control, congestion control, and quality of service (QoS) management to ensure reliable and
efficient communication.

1. Need of Transport Layer: The transport layer is necessary because it enables communication between different applications running on different devices, providing process-to-process delivery on top of the host-to-host delivery offered by the network layer.
2. Services of Transport Layer: The transport layer provides several important services, including:
 Reliable delivery: The transport layer ensures that data sent from the sender is received accurately and completely by the receiver.
 Flow control: The transport layer manages the rate of data flow between the sender and the receiver to prevent overwhelming the receiver.
 Error control: The transport layer detects and corrects errors in data transmission to ensure the integrity of the data being exchanged.
 Congestion control: The transport layer manages network traffic to prevent congestion, which can degrade network performance.
3. Quality of Service (QoS): Quality of Service refers to the level of service quality that can be guaranteed by a network for a given application. It involves prioritizing traffic, allocating bandwidth, and managing network resources to ensure smooth and efficient communication for critical applications.
4. Connection-oriented and Connectionless Protocols: Transport layer protocols can be classified into two types: connection-oriented and connectionless.
 Connection-oriented protocols establish a reliable connection between the sender and receiver before transmitting data; TCP is the primary example.
 Connectionless protocols do not establish a dedicated connection before transmitting data. Each packet or datagram is routed independently; UDP is the primary example.

In summary, the transport layer is essential for providing reliable communication between
applications running on different devices in a network. It offers services such as reliable delivery,
flow control, error control, and congestion control. It also manages Quality of Service (QoS) and
can use either connection-oriented or connectionless protocols to establish communication
between applications.

Transmission Control Protocol: Segment structure and header format, TCP Connection
Management, Flow Control
Transmission Control Protocol (TCP) is a widely used transport layer protocol in computer
networks. It provides reliable, connection-oriented communication between applications running
on different devices. TCP uses segments to divide data into manageable units for transmission
over the network.

1. Segment Structure and Header Format: A TCP segment consists of a header and a payload.
The header contains information necessary for the delivery of the segment, including:
 Source and destination port numbers: Port numbers uniquely identify applications running on the
sender and receiver devices.
 Sequence number: It indicates the position of the segment in the stream of data
being transmitted.
 Acknowledgment number: It indicates the next expected sequence number that the receiver is
expecting to receive.
 Flags: Flags are used to control various aspects of TCP communication, such as establishing,
maintaining, and terminating connections, as well as handling data flow, error control, and
congestion control.
 Window size: It indicates the amount of data that can be sent by the sender before receiving an
acknowledgment from the receiver.
 Checksum: It is used for error detection to ensure the integrity of the segment.
2. TCP Connection Management: TCP uses a three-way handshake to establish a connection
between the sender and receiver devices. The three steps in the TCP connection
establishment process are:
 SYN (Synchronize): The sender sends a SYN segment to the receiver, indicating its intention
to establish a connection.
 SYN-ACK (Synchronize-Acknowledge): The receiver responds with a SYN-ACK segment,
acknowledging the receipt of the SYN segment and indicating its readiness to establish
a connection.
 ACK (Acknowledge): The sender sends an ACK segment to the receiver, confirming the
establishment of the connection.

TCP also uses connection termination, known as a four-way handshake, to gracefully close a TCP
connection when both the sender and receiver have finished exchanging data.
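
In practice, both handshakes are performed by the operating system's TCP implementation rather than by application code. The Python sketch below makes that visible: connect() triggers the SYN/SYN-ACK/ACK exchange, accept() returns once the handshake completes, and closing the sockets triggers the termination exchange. The loopback address, port number, and the crude sleep are illustrative only.

# A minimal sketch of TCP connection establishment with standard sockets.

import socket, threading, time

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 50007))       # illustrative port
        srv.listen(1)
        conn, addr = srv.accept()            # returns once the 3-way handshake is done
        with conn:
            conn.sendall(b"hello over TCP")

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                              # crude wait for the server (sketch only)

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 50007))        # SYN -> SYN-ACK -> ACK happens here
    print(cli.recv(1024))
# leaving the with-blocks closes both ends (FIN/ACK termination exchange)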

3. Flow Control: TCP implements flow control to prevent the sender from overwhelming the receiver
with data. TCP uses a sliding window mechanism to control the flow of data. The receiver
advertises a window size in its acknowledgments, indicating the amount of buffer space it has
available to receive data. The sender adjusts its transmission rate based on the window size,
ensuring that data is transmitted at a rate that the receiver can handle. This prevents congestion
and ensures efficient data transfer between the sender and receiver.

In summary, TCP uses segments with a header format that includes important information such
as port numbers, sequence numbers, acknowledgment numbers, flags, window size, and
checksum. TCP also uses a three-way handshake for connection establishment and a four-way
handshake for connection termination. Additionally, TCP implements flow control to manage the
rate of data transmission and prevent congestion.
TCP congestion control, Internet Congestion Control Algorithm, Overview of User Datagram
Protocol (UDP)
TCP Congestion Control:

TCP congestion control is a mechanism used by the Transmission Control Protocol (TCP) to
manage and avoid network congestion, which can occur when the demand for network resources
exceeds their availability. Congestion can lead to degraded network performance, increased
packet loss, and decreased throughput. TCP employs various congestion control algorithms to
mitigate these issues and maintain efficient data transfer over the network.

Some popular TCP congestion control algorithms include:

1. Slow Start: This algorithm is used during the initial phase of a TCP connection to gradually increase the sending rate until congestion is detected. The congestion window (cwnd) starts small and roughly doubles every round-trip time, growing exponentially until it reaches a threshold known as the slow start threshold (ssthresh) or until packet loss is detected (a small simulation follows this list).
2. Congestion Avoidance: Once the slow start phase is completed, TCP switches to the congestion
avoidance phase. In this phase, the sending rate is increased linearly, rather than exponentially, to
avoid triggering congestion.
3. Fast Retransmit and Fast Recovery: These algorithms are used to quickly recover from lost packets without waiting for a timeout. When multiple duplicate acknowledgments are received, indicating that a packet has been lost, TCP triggers fast retransmit to resend the missing packet immediately, and fast recovery then halves the congestion window rather than dropping back to a full slow start.
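
The toy Python loop below shows the resulting congestion window trajectory in units of segments: exponential doubling during slow start up to ssthresh, then linear growth in congestion avoidance. The starting values are arbitrary, and real TCP stacks are considerably more involved.

# A minimal sketch of cwnd growth under slow start and congestion avoidance.

cwnd, ssthresh = 1, 16        # illustrative starting values, in segments
history = []
for rtt in range(12):
    history.append(cwnd)
    if cwnd < ssthresh:
        cwnd *= 2             # slow start: exponential growth per RTT
    else:
        cwnd += 1             # congestion avoidance: linear growth per RTT
print(history)                # [1, 2, 4, 8, 16, 17, 18, 19, 20, 21, 22, 23]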

Internet Congestion Control Algorithm:

One of the widely used Internet congestion control algorithms is the TCP Reno algorithm, which
is a combination of slow start, congestion avoidance, fast retransmit, and fast recovery. It is
widely deployed in modern TCP implementations and is designed to respond to network
congestion in a conservative and fair manner.

Overview of User Datagram Protocol (UDP):

User Datagram Protocol (UDP) is a transport layer protocol that provides a connectionless and
unreliable communication service. Unlike TCP, which ensures reliable delivery of data with flow
control and congestion control mechanisms, UDP does not guarantee reliable delivery or provide
built-in mechanisms for flow control or congestion control. UDP is often used in scenarios where
real-time or near-real-time communication is required, and some data loss or delay can be
tolerated, such as in streaming multimedia, online gaming, or voice over IP (VoIP) applications.

UDP is a simple protocol that operates at the transport layer and provides only basic
functionality, including source and destination port numbers for identifying applications, and
length and checksum fields for basic error detection. It does not include features such as
acknowledgment, retransmission, or sequencing of packets, which are present in TCP. However,
the lack of these features also makes UDP more lightweight and faster compared to TCP, as it
does not incur the overhead associated with reliability mechanisms.
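
The Python sketch below demonstrates UDP's connectionless style with the standard socket module: no handshake and no acknowledgments, just sendto() and recvfrom(). The loopback address and port are illustrative.

# A minimal sketch of UDP's connectionless datagram service.

import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 50008))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello via UDP", ("127.0.0.1", 50008))   # no handshake, no ACK

data, addr = receiver.recvfrom(1024)
print(data, "from", addr)

sender.close()
receiver.close()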

In summary, TCP congestion control algorithms are used to manage network congestion and
ensure reliable data transfer, while UDP is a connectionless and unreliable transport layer
protocol that is used in scenarios where real-time or near-real-time communication is required and some data loss can be tolerated.

Chapter – 5

Application Layer: Domain Name System (DNS), HTTP, FTP, SMTP


Domain Name System (DNS):

The Domain Name System (DNS) is a hierarchical and distributed naming system used to
translate human-readable domain names, such as www.example.com, into IP addresses, which
are numerical addresses used by computers to identify each other on the Internet. DNS plays a
critical role in the functioning of the Internet by providing a decentralized and scalable means of
resolving domain names to IP addresses.

DNS operates at the application layer of the TCP/IP protocol stack and is essential for web
browsing, email, and other Internet-based applications. It uses a client-server model, where DNS
clients (also known as resolvers) query DNS servers (also known as name servers) to resolve
domain names to IP addresses. DNS servers are organized in a hierarchical manner, with the root
servers at the top of the hierarchy, followed by top-level domain (TLD) servers, and then
authoritative name servers for individual domain names.
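
From an application's point of view, DNS resolution is usually a single library call. The Python sketch below performs forward lookups through the system's configured resolvers; the host name is an example, and the printed addresses depend on live DNS data.

# A minimal sketch of DNS resolution from Python's standard library.

import socket

print(socket.gethostbyname("www.example.com"))   # name -> one IPv4 address

# getaddrinfo is the more general interface (IPv4 and IPv6):
for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 80):
    print(family, sockaddr)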

HTTP (Hypertext Transfer Protocol):

HTTP is the foundation of the World Wide Web and is used for transferring hypermedia
documents, such as web pages, over the Internet. It is a client-server protocol, where web
browsers act as clients that request web resources from web servers, which serve the requested
resources back to the clients. HTTP operates at the application layer of the TCP/IP protocol stack
and is based on a request-response model, where clients send HTTP requests and servers
respond with HTTP responses.
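
The request-response model is visible in a few lines of Python using the standard urllib library; the URL is illustrative.

# A minimal sketch of an HTTP GET request and its response.

from urllib.request import urlopen

with urlopen("http://example.com/") as response:
    print(response.status)                    # e.g. 200
    print(response.headers["Content-Type"])   # e.g. text/html; charset=UTF-8
    body = response.read()
    print(len(body), "bytes of HTML")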

FTP (File Transfer Protocol):

FTP is a standard network protocol used for transferring files between a client and a server over a
TCP-based network, such as the Internet. It provides a reliable and efficient means of transferring
files, with features such as file listing, file uploading, file downloading, and file deletion. FTP uses
a client-server model, where FTP clients initiate FTP commands and FTP servers respond to these
commands. FTP operates at the application layer of the TCP/IP protocol stack and uses two
separate connections: a control connection for sending FTP commands and a data connection for transferring the actual file data.
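
A minimal FTP session using Python's standard ftplib looks like this; the host name is a placeholder, and the directory listing travels over a separate data connection, as described above.

# A minimal sketch of an FTP session (illustrative host; anonymous login).

from ftplib import FTP

ftp = FTP("ftp.example.com")   # control connection, port 21
ftp.login()                    # anonymous login
ftp.retrlines("LIST")          # data connection opened for the listing
ftp.quit()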

SMTP (Simple Mail Transfer Protocol):

SMTP is a standard network protocol used for sending email messages over the Internet (message retrieval by end users is typically handled by protocols such as POP3 or IMAP). It is a client-server protocol that operates at the application layer of the TCP/IP stack, where mail clients and mail servers exchange SMTP commands to relay a message toward the recipient's mail server.

In summary, DNS is used for translating domain names to IP addresses, HTTP is used for transferring web resources, FTP is used for transferring files, and SMTP is used for delivering email messages.

Network Security services, cryptography, Symmetric versus Asymmetric cryptographic algorithms - DES and RSA
Network security services:

Network security refers to the measures and techniques used to protect the integrity,
confidentiality, and availability of data and services in a computer network. Network security
services are the various mechanisms and protocols used to ensure that data transmitted over a
network is secure and protected from unauthorized access, tampering, or interception. Some of
the key network security services include:

1. Authentication: Authentication is the process of verifying the identity of users, devices, or systems
accessing a network. It ensures that only authorized users and devices are granted access to the
network.
2. Authorization: Authorization is the process of granting or denying access to specific resources or
actions based on the authenticated user's privileges or permissions. It ensures that users are
only allowed to perform actions or access resources for which they have appropriate
permissions.
3. Confidentiality: Confidentiality ensures that data transmitted over a network is protected from
unauthorized access or interception. It involves using encryption techniques to ensure that data is
only accessible to authorized recipients.
4. Integrity: Integrity ensures that data transmitted over a network is not tampered with or
modified during transit. It involves using techniques such as checksums or digital signatures to
detect any unauthorized modifications to data.
5. Availability: Availability ensures that network resources and services are accessible and usable when needed. It involves implementing measures to prevent network downtime due to failures, attacks (such as denial-of-service), or overload.
Cryptography:

Cryptography is the science and art of securing communication and information by converting it
into a form that is not readily understandable, known as ciphertext, using mathematical
algorithms and techniques. Cryptography plays a crucial role in network security as it provides
the foundation for many security mechanisms, including confidentiality, integrity, and
authentication.

Symmetric vs. Asymmetric Cryptographic Algorithms:

Symmetric cryptography and asymmetric cryptography are two types of cryptographic algorithms used for different purposes in network security.

1. Symmetric Cryptography: In symmetric cryptography, the same key is used for both encryption
and decryption. The sender and the receiver use the same secret key to encrypt and decrypt
data, respectively. Examples of symmetric cryptographic algorithms include Data Encryption
Standard (DES), Advanced Encryption Standard (AES), and Rivest Cipher (RC4).
2. Asymmetric Cryptography: In asymmetric cryptography, also known as public-key cryptography,
a pair of keys, consisting of a public key and a private key, is used for encryption and decryption.
The public key is shared with others, while the private key is kept secret by the owner. Data
encrypted with the public key can only be decrypted with the corresponding private key, and
vice versa. Examples of asymmetric cryptographic algorithms include Rivest-Shamir-Adleman
(RSA), Digital Signature Algorithm (DSA), and Elliptic Curve Cryptography (ECC).
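
To show the public/private key relationship concretely, here is a deliberately tiny and insecure RSA example in pure Python, using the textbook primes 61 and 53; real RSA keys are 2048 bits or longer and use padding schemes.

# A minimal, insecure toy RSA sketch (educational only; requires Python 3.8+).

p, q = 61, 53
n = p * q                      # 3233: the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # 2753: private exponent (modular inverse of e)

message = 65
ciphertext = pow(message, e, n)        # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)      # decrypt with the private key (d, n)
print(ciphertext, recovered)           # 2790 65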

DES (Data Encryption Standard):

DES is a symmetric cryptographic algorithm used for encryption and decryption of data. It was developed in the 1970s at IBM and adopted as a U.S. federal standard in 1977. DES operates on 64-bit blocks using a 56-bit key; its short key length makes it vulnerable to brute-force attacks today, and it has largely been superseded by AES.

RSA (Rivest-Shamir-Adleman):

RSA is an asymmetric cryptographic algorithm used for encryption and digital signatures. It was invented in 1977 by Rivest, Shamir, and Adleman, and the computational difficulty of factoring large composite numbers forms the basis of its security. RSA is known for its flexibility in key management, as it allows for secure communication without a pre-shared secret key.

In summary, network security services include authentication, authorization, confidentiality, integrity, and availability. Cryptography underpins these services: symmetric algorithms such as DES use a single shared key, while asymmetric algorithms such as RSA use public/private key pairs.

Application of Security in Networks: Digital signature


Digital signatures are a cryptographic technique used in network security to verify the
authenticity and integrity of digital documents, messages, or transactions. A digital signature is a
mathematical scheme that uses a private key to sign a document and a public key to verify the
signature. Digital signatures provide a way to ensure that a document or message has not been
tampered with during transmission and that it originates from the claimed sender.
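
A deliberately tiny, insecure sketch of the sign/verify idea, reusing the toy RSA numbers from the previous section: the signer raises a hash of the message to the private exponent, and anyone holding the public exponent can check the result. Real schemes add padding and far larger keys.

# A minimal, insecure toy digital-signature sketch (educational only).

import hashlib

n, e, d = 3233, 17, 2753                   # toy key pair from the RSA sketch above

def toy_sign(message: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)               # only the private-key holder can do this

def toy_verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest  # anyone with the public key can check

sig = toy_sign(b"transfer 100 to Bob")
print(toy_verify(b"transfer 100 to Bob", sig))   # True
print(toy_verify(b"transfer 999 to Bob", sig))   # False: message was altered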

The application of digital signatures in networks can have several benefits, including:

1. Authentication: Digital signatures provide a way to authenticate the sender of a document or message. The use of private and public keys ensures that the signature can only be created by the owner of the private key, which serves as a form of identification for the sender. This helps prevent impersonation or unauthorized access to sensitive information.
2. Integrity: Digital signatures ensure the integrity of a document or message. Any alteration or
modification of the document or message after it has been signed will result in the digital
signature becoming invalid. This helps detect any tampering or unauthorized changes to the
data during transmission or storage.
3. Non-repudiation: Digital signatures provide non-repudiation, which means that the sender
cannot later deny sending a signed document or message. Once a digital signature is applied, it
serves as proof of the sender's intent and cannot be repudiated, providing a strong legal and
evidentiary basis in case of disputes or legal proceedings.
4. Trust and Confidentiality: Digital signatures help establish trust in network communications by
ensuring that the sender is verified and the data is not tampered with. This helps establish a
secure communication channel, which is especially important in sensitive transactions or
communications. Digital signatures can also be used in combination with other encryption
techniques to provide confidentiality, ensuring that only authorized recipients can decrypt and
access the signed data.
5. Compliance and Regulatory Requirements: Many industries and jurisdictions have regulatory
requirements for data security, privacy, and authenticity. The use of digital signatures can help
organizations comply with these requirements, such as in financial transactions, legal documents,
or electronic health records.

Overall, the application of digital signatures in networks provides a secure and reliable way to
verify the authenticity and integrity of digital documents or messages, establishing trust, ensuring
compliance, and protecting against tampering or repudiation.
