
GGS COLLEGE OF MODERN TECHNOLOGY

KHARAR, PUNJAB

COMPUTER NETWORKS
ASSIGNMENT – III
BTCS 504 – 18
B. TECH 5TH SEMESTER
2023-24

Submitted By: Eshan Padhiar
Roll No: 2103769
Course: B. Tech
Semester: 5th
Department: CSE

Submitted To: Mr. Munish Kumar
Question 1: What is Flow control?
Flow control is a fundamental concept in data communication and networking. It refers to
the mechanisms and techniques used to manage the rate of data transmission between a sender
and a receiver so that data is sent and received at an appropriate and sustainable pace. Its
primary purpose is to prevent the sender from overwhelming the receiver with data, especially
when the sender's transmission speed exceeds the receiver's processing or buffer capacity.
Flow control helps to avoid data loss, data corruption, and network congestion.
There are two main types of flow control:

1. Automatic Flow Control:
 Automatic flow control, also known as end-to-end flow control, is controlled
by the devices at the two ends of the communication link (sender and
receiver).
 The receiver communicates its readiness to accept data to the sender. This is
typically done using flow control signals or acknowledgments (ACKs).
 Examples of automatic flow control include the use of the Transmission
Control Protocol (TCP) in computer networks, where the receiver
acknowledges the receipt of data, and the sender adjusts its transmission rate
accordingly.
2. Explicit Flow Control:
 Explicit flow control is controlled by additional control signals sent over a
separate channel or a separate path within the communication link.
 Common techniques for explicit flow control include the use of dedicated
control lines (e.g., Request to Send/Clear to Send in RS-232 serial
communication) or specific characters or commands in the data stream.
 In this approach, the receiver explicitly informs the sender when it is ready to
receive more data or when it needs the sender to pause data transmission.
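
To make the second approach concrete, here is a minimal sketch of XON/XOFF-style explicit flow control in Python. The Receiver and Sender classes, buffer size, and thresholds are invented for illustration; only the XON (0x11) and XOFF (0x13) control bytes follow the usual serial-communication convention.

# Minimal sketch of XON/XOFF-style explicit flow control (illustrative only).
XON, XOFF = 0x11, 0x13          # conventional control bytes

class Receiver:
    def __init__(self, capacity=8):
        self.buffer = []
        self.capacity = capacity

    def accept(self, byte):
        """Store a byte; return XOFF when the buffer is full."""
        self.buffer.append(byte)
        return XOFF if len(self.buffer) >= self.capacity else None

    def drain(self, n=4):
        """Consume some buffered bytes; return XON once space is available again."""
        del self.buffer[:n]
        return XON if len(self.buffer) <= self.capacity // 2 else None

class Sender:
    def __init__(self):
        self.paused = False

    def handle_signal(self, signal):
        if signal == XOFF:
            self.paused = True
        elif signal == XON:
            self.paused = False

# Usage: the sender transmits only while it is not paused.
rx, tx = Receiver(), Sender()
for byte in range(20):
    while tx.paused:                 # on a real link the sender would simply wait here
        tx.handle_signal(rx.drain())
    tx.handle_signal(rx.accept(byte))
print("bytes still buffered at the receiver:", len(rx.buffer))

In a real serial link the control bytes would travel over the wire (or RTS/CTS would be signalled on dedicated lines); here the method calls simply stand in for that exchange.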

Flow control can be important in various scenarios, including:


 Mismatches in Data Rates: When the sender and receiver operate at different speeds
or have varying processing capabilities, flow control prevents data loss due to buffer
overflows or underruns.
 Network Congestion: In situations where multiple senders are sharing a network or a
communication medium, flow control helps prevent network congestion and ensures
fair access to resources.
 Error Recovery: Flow control mechanisms can be used in combination with error
recovery to ensure that lost or corrupted data is retransmitted correctly.
 Buffer Management: In computer systems and network devices, flow control is
crucial for managing the use of buffers or memory space used to temporarily store
incoming data.
In data communication, one of the most common examples of flow control is TCP in the
Transmission Control Protocol/Internet Protocol (TCP/IP) suite. Here, the sender monitors
acknowledgments and the window size advertised by the receiver and adjusts its transmission
rate based on this feedback, ensuring reliable and efficient data transfer over the network.
Flow control mechanisms vary based on the specific application and technology but play a
vital role in maintaining the integrity and efficiency of data communication.
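
As a rough illustration of this receiver-driven behaviour, the following simplified sketch (not real TCP; the function name and parameters are made up) never lets the sender have more unprocessed data outstanding than the receiver's advertised window:

# Simplified sliding-window flow control (illustrative, not real TCP).
def send_with_window(data, recv_window=4, process_per_tick=2):
    acked = 0            # bytes acknowledged so far
    sent = 0             # bytes handed to the network so far
    buffered = 0         # bytes sitting in the receiver's buffer
    while acked < len(data):
        # Sender: transmit only what fits in the advertised window.
        can_send = min(len(data) - sent, recv_window - buffered)
        sent += can_send
        buffered += can_send
        # Receiver: process part of its buffer, then acknowledge it.
        processed = min(buffered, process_per_tick)
        buffered -= processed
        acked += processed
        print(f"sent={sent} acked={acked} window_free={recv_window - buffered}")
    return acked

send_with_window(b"0123456789")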

Question 2: Write a short note on the following:

i) Error control
Error control, in the context of data communication and networking, is a set of
techniques and mechanisms designed to detect, correct, and prevent errors in data
transmission. Errors can occur during the transmission of data due to various factors,
including noise, interference, signal attenuation, and other impairments in the
communication channel. Error control is essential to ensure the integrity and reliability
of data as it travels from a sender to a receiver. Here's a brief overview of error control:

1. Error Detection:
- Error detection techniques involve adding extra bits (e.g., checksums, cyclic
redundancy checks) to the transmitted data, creating redundancy in the data stream
(a short checksum sketch follows this list).
- The receiver uses these extra bits to detect the presence of errors in the received
data.
- If an error is detected, the receiver requests retransmission of the erroneous data.

2. Error Correction:
- Error correction techniques go beyond error detection and allow the receiver to
correct errors in the received data.
- Error-correcting codes, such as Hamming codes and Reed-Solomon codes, are used
to encode the data with additional redundancy.
- The receiver can use this redundancy to not only detect errors but also reconstruct
the original, error-free data.
3. Automatic Repeat Request (ARQ):
- ARQ is a common error control mechanism that involves requesting retransmission
of data when errors are detected.
- The sender and receiver engage in a communication cycle where the sender
retransmits the data upon request.
- ARQ may use mechanisms like Stop-and-Wait, Go-Back-N, or Selective Repeat for
managing retransmissions (a Stop-and-Wait sketch appears at the end of this note).

4. Forward Error Correction (FEC):
- FEC is a proactive error control technique that adds redundant information to the
data stream to allow the receiver to correct errors without requesting retransmission.
- This technique is commonly used in scenarios where latency is a critical factor, such
as streaming media.

5. Acknowledgments (ACKs) and Negative Acknowledgments (NAKs):
- In some error control protocols, the receiver sends ACKs to confirm successful data
reception and NAKs to request retransmission when errors are detected.
- The sender uses these acknowledgments to determine which data needs to be
retransmitted.

6. Sliding Window Protocols:
- Sliding window protocols, such as Selective Repeat and Go-Back-N, are used to
manage the flow of data and acknowledgments.
- They allow the sender to transmit multiple frames before receiving
acknowledgments and provide error control by selectively retransmitting only the
frames with errors.
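
To make the error-detection idea in point 1 concrete, the sketch below computes the 16-bit ones'-complement checksum used by IPv4, TCP and UDP headers; the sample payload is arbitrary.

# Sketch of the 16-bit Internet checksum used by IPv4/TCP/UDP (illustrative).
def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum of 16-bit words; odd-length data is zero-padded."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back into 16 bits
    return ~total & 0xFFFF

payload = b"NETWORKS"                              # arbitrary even-length sample
checksum = internet_checksum(payload)
# The receiver recomputes the checksum over payload + checksum field:
# a result of 0 means no error was detected.
assert internet_checksum(payload + checksum.to_bytes(2, "big")) == 0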

Error control is critical for ensuring the reliability of data transmission, particularly in
scenarios where data accuracy and integrity are essential, such as in computer
networks, telecommunication systems, and file transfers. The choice of error control
methods depends on factors like the type of data being transmitted, the communication
medium, and the acceptable trade-offs between error detection, correction, and the
associated overhead.
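
The ARQ behaviour described in point 3 can also be sketched as a small Stop-and-Wait simulation. The lossy_channel function below is a random stand-in for an unreliable link, and the frame and ACK formats are invented for illustration only.

import random

# Stop-and-Wait ARQ sketch over a randomly lossy channel (illustrative only).
def lossy_channel(frame, loss_rate=0.3):
    """Return the frame, or None if the channel 'loses' it."""
    return None if random.random() < loss_rate else frame

def stop_and_wait(frames, max_tries=20):
    delivered = []
    expected = 0                                  # receiver's expected sequence bit
    seq = 0                                       # sender's current sequence bit
    for payload in frames:
        for _ in range(max_tries):
            data = lossy_channel((seq, payload))  # sender transmits DATA(seq)
            if data is None:
                continue                          # frame lost -> timeout, retransmit
            if data[0] == expected:               # receiver: new frame, deliver it
                delivered.append(data[1])
                expected ^= 1
            ack = lossy_channel(data[0])          # receiver ACKs what it received
            if ack == seq:                        # sender: correct ACK -> advance
                seq ^= 1
                break
        # if max_tries is exhausted the sender gives up on this frame (simplified)
    return delivered

print(stop_and_wait(["frame-A", "frame-B", "frame-C"]))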
ii) IPv4
IPv4, or Internet Protocol version 4, is one of the foundational protocols of the internet and
network communication, and it serves as the basis for most internet communication. Here's a
brief overview of IPv4:

1. Addressing: IPv4 uses 32-bit addresses, represented as four decimal numbers separated by
periods (e.g., 192.168.1.1). Each IPv4 address is unique and identifies a specific device or
node on a network. The addressing scheme is hierarchical, with classes of addresses and
subnets to efficiently manage IP allocations.
2. Packet Structure: IPv4 data is transmitted in packets called datagrams. Each datagram
includes a header with control information and a payload containing the actual data. The
header includes fields for the source and destination IP addresses, Time-to-Live (TTL), the
version and protocol fields, and a header checksum, among others.
3. Routing: IPv4 relies on routing tables and algorithms to determine the best path for data to
traverse a network. Routers use these tables to forward packets toward their destination. The
Internet is a collection of interconnected networks, and IPv4 is crucial for ensuring data
reaches its intended recipient across these networks.
4. Subnetting: Subnetting is a technique that allows organizations to divide their IP address
space into smaller, more manageable segments. It helps improve network management and
security. Subnet masks determine how IP addresses are divided between network and host
portions (a short example follows this list).
5. NAT (Network Address Translation): NAT is a technique used to conserve IPv4 addresses.
It allows multiple devices within a private network to share a single public IP address. NAT
translates private IP addresses to the public address when data is sent outside the private
network.
6. IPv4 Exhaustion: The biggest challenge with IPv4 is address exhaustion. The limited pool
of available IPv4 addresses has been largely depleted due to the rapid growth of the internet
and the proliferation of connected devices. This has led to the development and adoption of
IPv6, which uses 128-bit addresses and offers an enormous number of unique addresses to
meet the growing demands of the internet.
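
As a small illustration of the subnetting described in point 4, the sketch below uses Python's standard ipaddress module; the addresses and prefix lengths are arbitrary examples.

import ipaddress

# Subnetting sketch: inspect the network/host split and divide a block into subnets.
iface = ipaddress.ip_interface("192.168.1.77/26")      # arbitrary host + mask
net = iface.network

print("network address:", net.network_address)         # 192.168.1.64
print("subnet mask:    ", net.netmask)                 # 255.255.255.192
print("broadcast:      ", net.broadcast_address)       # 192.168.1.127
print("usable hosts:   ", net.num_addresses - 2)       # 62

# Divide a /24 into four /26 subnets.
block = ipaddress.ip_network("192.168.1.0/24")
for subnet in block.subnets(new_prefix=26):
    print(subnet)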

While IPv6 adoption is increasing, IPv4 is still widely used, and the transition to IPv6 is a
gradual process. Many devices and networks continue to rely on IPv4, and mechanisms like
NAT have helped extend its usability. However, as the need for IP addresses continues to
grow, IPv6 is becoming increasingly important to ensure the continued expansion and
scalability of the internet.
Question 3: What is Address mapping?
Address mapping is a fundamental concept in computer networks and operating systems, and
it involves the translation of one type of address or identifier to another. Address mapping is
typically used to facilitate communication and data transfer between different layers of a
networking protocol stack or between hardware and software components. Here are some
common examples of address mapping:

1. Network Address Translation (NAT):
 NAT is a widely used address mapping technique in networking. It translates
private IP addresses used within a local network to a single public IP address
when communicating with external networks like the internet. This allows
multiple devices within a local network to share a single public IP address,
helping conserve public IP address space.
2. MAC Address to IP Address Mapping:
 In local area networks (LANs), devices are identified by both Media Access
Control (MAC) addresses and IP addresses. Address mapping is necessary
when a device with an IP address needs to communicate with a device
identified by its MAC address within the same network.
3. Domain Name System (DNS):
 DNS is a system that maps human-readable domain names (e.g.,
www.example.com) to IP addresses. It allows users to access websites using
domain names, while the internet relies on IP addresses for routing. DNS
serves as an address mapping system that connects user-friendly domain
names to the underlying IP addresses.
4. Host-to-Host Mapping:
 Address mapping can involve translating between different identifiers used in
a network. For example, mapping between a host name and an IP address or
between a uniform resource locator (URL) and an IP address.
5. Physical-to-Logical Address Mapping:
 In operating systems and networking, address mapping can involve translating
between physical memory addresses and logical memory addresses. This
mapping helps manage memory allocation and isolation.
6. Port Address Translation (PAT):
 PAT is a form of NAT used in networking to map multiple private IP addresses
to a single public IP address. It involves translating not only the IP addresses
but also the port numbers to allow multiple internal devices to use the same
public IP address.
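
The PAT idea above can be sketched as a simple translation table. The addresses, ports, and the PatTable class below are made up for illustration; a real NAT device would also rewrite packet headers and track connection state.

# Toy Port Address Translation table (illustrative only).
PUBLIC_IP = "203.0.113.5"            # example public address (documentation range)

class PatTable:
    def __init__(self):
        self.next_port = 40000
        self.out = {}                # (private_ip, private_port) -> public_port
        self.back = {}               # public_port -> (private_ip, private_port)

    def outbound(self, private_ip, private_port):
        """Map an internal address/port pair to the shared public address."""
        key = (private_ip, private_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.out[key]

    def inbound(self, public_port):
        """Translate a reply arriving on the public port back to the internal host."""
        return self.back.get(public_port)    # None if no mapping exists

nat = PatTable()
print(nat.outbound("192.168.1.10", 51000))   # ('203.0.113.5', 40000)
print(nat.outbound("192.168.1.11", 51000))   # ('203.0.113.5', 40001)
print(nat.inbound(40001))                    # ('192.168.1.11', 51000)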
Address mapping plays a crucial role in ensuring efficient and accurate data transfer between
different layers of communication protocols and between hardware and software components.
It simplifies and streamlines the complex addressing schemes used in networking and
operating systems, making it easier for data to be routed and processed correctly across
various layers and devices in a network.

Question 4: What is DNS?


DNS stands for Domain Name System, and it is a hierarchical and distributed naming system
used to translate human-friendly domain names, such as "www.example.com," into the
numeric IP addresses that computers use to identify each other on the internet. In essence,
DNS acts as the "phone book" of the internet, enabling users to access websites and other
online services by simply typing a domain name into their web browser, rather than needing
to remember the corresponding IP address.
Here are the key components and functions of the Domain Name System (DNS):
1. Domain Names: Domain names are user-friendly and human-readable labels used to
identify websites, servers, and other resources on the internet. They consist of a series
of labels separated by dots (periods) and are organized hierarchically, with the top-
level domain (TLD) on the right and the specific domain or subdomain on the left.
Examples of TLDs include .com, .org, and .net.

2. DNS Servers: DNS operates as a distributed system with a network of DNS servers.
These servers are organized into a hierarchy, and they work together to resolve
domain name queries. The hierarchy includes root servers, top-level domain servers,
authoritative name servers, and caching resolvers.

3. DNS Resolution Process: When a user enters a domain name into a web browser, the
browser sends a DNS query to a DNS resolver (usually provided by the Internet
Service Provider, ISP). The resolver then follows a series of steps to resolve the
domain name (a small lookup example follows this list):
 It checks its local cache for a previously resolved domain name.
 If not found, the resolver queries a root DNS server for information about the
TLD's authoritative name server.
 The root server directs the resolver to the authoritative name server for the
specific domain.
 The authoritative name server returns the IP address associated with the
domain name.
 The resolver caches this information for future use and returns the IP address
to the user's device.
4. Caching: To reduce the load on DNS servers and improve query response times, DNS
resolvers cache domain name information for a specified period (Time to Live or
TTL). This allows the resolver to respond quickly to subsequent queries for the same
domain.

5. Resource Records: DNS servers store resource records (RRs) that contain
information associated with domain names. Common types of RRs include A records
(for IPv4 addresses), AAAA records (for IPv6 addresses), MX records (for mail
servers), and CNAME records (for aliasing one domain to another).
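
A minimal lookup using Python's standard library shows this name-to-address mapping in practice. The system's configured stub resolver (and its cache) does the actual DNS work, and example.com is just a sample name; the addresses returned will vary.

import socket

# Resolve a host name to its IP addresses via the system's stub resolver.
def resolve(name):
    results = socket.getaddrinfo(name, None, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr); keep the address part.
    return sorted({entry[4][0] for entry in results})

print(resolve("example.com"))    # prints the A/AAAA addresses returned by DNS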

DNS is a crucial component of the internet infrastructure, enabling users to access websites
and services without needing to know the underlying IP addresses. It plays a vital role in
making the internet user-friendly and accessible. Additionally, DNS provides essential
functionality for email delivery, voice-over-IP (VoIP), and various other internet-based
services.

Question 5: Explain Cryptography.


Cryptography is the science and practice of securing communication and information by
encoding it in a way that only authorized parties can access and understand. It involves the
use of mathematical algorithms and keys to transform plaintext data into ciphertext (encoded
data) and back to plaintext when needed. Cryptography is essential for maintaining the
confidentiality, integrity, and authenticity of data in various applications, including secure
communication, data storage, and digital transactions.

Key components and concepts of cryptography include:

1. Encryption and Decryption:
• Encryption is the process of converting plaintext data into ciphertext using an
encryption algorithm and a secret key.
• Decryption is the reverse process, where ciphertext is converted back into
plaintext using a decryption algorithm and the corresponding key.
• Common encryption methods include symmetric-key encryption (the same key for
both encryption and decryption) and asymmetric-key encryption (public and
private key pairs); a toy symmetric example appears after this list.
2. Keys:
 Keys are critical to cryptography. They are secret values used in encryption
and decryption algorithms.
 In symmetric-key cryptography, the same key is used for both encryption and
decryption.
 In asymmetric-key cryptography, there are two keys: a public key (used for
encryption) and a private key (used for decryption).
 Key management is vital for the security of cryptographic systems, as the
compromise of keys can lead to data breaches.

3. Ciphers:
 Ciphers are algorithms or methods used for encryption and decryption.
 There are various types of ciphers, including substitution ciphers (replacing
one symbol with another) and transposition ciphers (rearranging symbols).
 Modern ciphers, such as the Advanced Encryption Standard (AES) and RSA,
are complex mathematical algorithms that provide strong security.

4. Hash Functions:
• Hash functions are cryptographic algorithms that transform input data of arbitrary
length into a fixed-length output called a hash or digest.
• Hashes are used to verify data integrity and to generate digital signatures.
• A good hash function is irreversible (it should be computationally infeasible to
derive the original input from the hash) and should make it infeasible to find two
different inputs that produce the same hash (a short example appears at the end of
this note).

5. Digital Signatures:
 Digital signatures use asymmetric-key cryptography to provide authentication
and integrity for digital messages or documents.
 The sender uses their private key to sign a message, and the recipient can
verify the signature using the sender's public key. If the signature is valid, it
confirms the message's origin and integrity.
6. Secure Protocols:
 Cryptography is used in various secure communication protocols, such as
Secure Sockets Layer (SSL) and its successor, Transport Layer Security
(TLS), for secure web communication.
 Secure communication protocols ensure the confidentiality and integrity of
data exchanged over a network.
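
To illustrate the symmetric-key idea in point 1, here is a deliberately simple XOR-based sketch in Python: the same shared key both encrypts and decrypts. It is for illustration only and offers no real security; practical systems use vetted ciphers such as AES.

from itertools import cycle

# Toy symmetric "cipher": XOR each byte with a repeating shared key.
# Illustration only -- NOT secure; use a vetted cipher such as AES in practice.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

shared_key = b"secret-key"                          # both parties hold this key
plaintext  = b"attack at dawn"
ciphertext = xor_cipher(plaintext, shared_key)      # encryption
recovered  = xor_cipher(ciphertext, shared_key)     # decryption uses the same key

assert recovered == plaintext
print(ciphertext.hex())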

Cryptography has numerous applications, including securing online transactions, protecting
sensitive data in databases, enabling secure messaging and email, and ensuring the
authenticity of software and digital content. As technology evolves, so does the field of
cryptography, with ongoing efforts to develop stronger encryption techniques to counter
emerging threats to data security.
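
Finally, the hash-function behaviour described in point 4 can be observed with Python's standard hashlib module: the digest length is fixed regardless of input size, and a tiny change in the input yields a completely different hash. The sample messages are arbitrary.

import hashlib

# Fixed-length digests and the avalanche effect of a cryptographic hash.
h1 = hashlib.sha256(b"transfer 100 to alice").hexdigest()
h2 = hashlib.sha256(b"transfer 900 to alice").hexdigest()

print(len(h1), h1)      # always 64 hex characters (256 bits)
print(len(h2), h2)      # a small input change gives an unrelated digest
assert h1 != h2         # the receiver can recompute a hash to detect tampering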
