CN Report - 5-2


RN SHETTY TRUST®

RNS INSTITUTE OF TECHNOLOGY


Autonomous Institution Affiliated to VTU, Recognized by GOK, Approved by AICTE
(NAAC ‘A+ Grade’ Accredited, NBA Accredited (UG - CSE, ECE, ISE, EIE and EEE))
Channasandra, Dr. Vishnuvardhan Road, Bengaluru - 560 098
Ph: (080) 28611880, 28611881 URL: www.rnsit.ac.in
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

Subject Activity-Report
Computer Networks (21CS52)
2023-2024

5th Semester B.E

NAME USN : 1RN21CS008

NAME: Bharath R Sindhe USN : 1RN21CS039

NAME USN : 1RN21CS043

NAME USN : 1RN21CS058

NAME USN : 1RN21CS061


VISION AND MISSION OF INSTITUTION
Vision
Building RNSIT into a World Class Institution

Mission
To impart high quality education in Engineering, Technology and
Management with a Difference, Enabling Students to Excel in their Career by
1. Attracting quality Students and preparing them with a strong foundation in
fundamentals so as to achieve distinctions in various walks of life leading to outstanding
contributions.
2. Imparting value based, need based, choice based and skill based professional education to
the aspiring youth and carving them into disciplined, World class Professionals with social
responsibility.
3. Promoting excellence in Teaching, Research and Consultancy that galvanizes
academic consciousness among Faculty and Students.
4. Exposing Students to emerging frontiers of knowledge in various domains and making them
suitable for Industry, Entrepreneurship, Higher studies, and Research & Development.
5. Providing freedom of action and choice for all the Stakeholders with better visibility.

VISION AND MISSION OF CSE DEPARTMENT
Vision
Preparing better computer professionals for a real world

Mission
The Department of Computer Science and Engineering will make every effort to
promote an intellectual and an ethical environment in which the strengths and skills of
Computer Professionals will flourish by
1. Imparting Solid foundations and Applied aspects in both Computer Science Theory and
Programming practices.
2. Providing Training and encouraging R&D and Consultancy Services in frontier areas of
Computer Science with a Global outlook.
3. Fostering the highest ideals of Ethics, Values and creating Awareness on the role of
Computing in Global Environment.
4. Educating and preparing the graduates, highly Sought-after, Productive, and Well-respected
for their work culture.
5. Supporting and inducing Lifelong Learning practice.
Table of Contents

SL. NO.  Subject Activities                          PAGE NO.  Sign

1   Checksum algorithm                               4-5
2   Hamming code error detection technique           6-7
3   Dijkstra's routing algorithm                     8-9
4   Stop and Wait protocol                           10-12
5   Go-Back-N protocol                               13-14
6   Selective Repeat protocol                        15-16
7   Socket programming using TCP                     17-18
8   Socket programming using UDP                     19-21
9   DDoS attack simulation using NS3                 22-23
10  HTTP protocol                                    24-25



1. Checksum Algorithm in Computer Networks

In the realm of computer networks, maintaining the integrity of transmitted
data stands as a fundamental priority. Data faces the risk of corruption
during transmission, attributed to factors like noise, interference, or
hardware malfunctions. To mitigate this risk, checksum algorithms hold
vital importance. A checksum represents a computed value derived from a
data packet, facilitating error detection for receivers and safeguarding data
integrity. This discourse aims to delve into the basics of checksum
algorithms in computer networks, their practical application, and their
pivotal role in upholding data integrity.

1. Understanding Checksum Algorithms

A checksum algorithm produces a consistent numerical value, often a sum
or hash, based on the data packet's content. This value is added to the
packet and sent alongside the data. When the packet reaches its
destination, the recipient recalculates the checksum using the received data
and compares it to the transmitted checksum. If the calculated checksum
matches the transmitted one, the data is considered intact. Any discrepancy
signifies an error, indicating potential data corruption.

2. Implementation of Checksum Algorithms

Multiple checksum algorithms find application in computer networks, each
possessing distinct implementations and features. Among these algorithms,
the Internet Checksum stands out as a commonly utilized one, extensively
integrated into protocols like IPv4, UDP, and ICMP.

Internet Checksum:
The Internet Checksum algorithm works with 16-bit data packet words.
The packet is segmented into 16-bit segments, with the sum of these
segments being computed.
The resulting sum is then complemented using the ones' complement
method to derive the checksum.
The checksum value is included in the packet header and sent along with
the data during transmission.

Upon reception of the packet, the receiver conducts a similar computation
and contrasts the computed checksum with the transmitted checksum. If
they coincide, the data is deemed valid; if not, it signifies potential data
corruption.
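A minimal Python sketch of these steps is given below; the function name and the sample payload are illustrative assumptions, not drawn from any particular protocol implementation.

python
def internet_checksum(data: bytes) -> int:
    # Pad to an even length so the data divides into 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in (ones' complement sum)
    return ~total & 0xFFFF                         # complement the sum to obtain the checksum

packet = b"NETWORKS"                               # illustrative 8-byte payload
checksum = internet_checksum(packet)
# Receiver-side check: summing the data together with the transmitted
# checksum yields zero when no corruption has occurred.
assert internet_checksum(packet + checksum.to_bytes(2, "big")) == 0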

3. Significance of Checksum Algorithms

Checksum algorithms play a critical role in guaranteeing data integrity
within computer networks for numerous reasons:

 Error Detection: Checksum algorithms facilitate the prompt detection
of errors that may arise during data transmission. Through the
comparison of checksum values, recipients can swiftly identify
corrupted data packets.
 Reliability: Incorporating checksums into network protocols
improves the reliability of data transmission by enabling receivers to
authenticate the integrity of received data and request retransmission
in case of detected errors.
 Performance: Checksum algorithms present a lightweight approach
to error detection without imposing substantial overhead. They strike
a balance between computational complexity and error detection
capabilities, rendering them appropriate for real-time communication
within networks.
4. Conclusion

To summarize, checksum algorithms serve as vital elements within
computer networks, offering a robust means of error detection and
safeguarding data integrity during transmission. Through the generation
and comparison of checksum values at the receiver's end, these algorithms
play a pivotal role in error detection. Familiarity with
checksum algorithms is essential for network engineers and developers,
enabling them to craft efficient and dependable communication protocols.
As technology progresses, checksum algorithms remain indispensable in
preserving the integrity of exchanged data across networks, thereby
supporting the smooth functioning of contemporary communication
systems.

2. Hamming Code Error Detection Technique in Computer Networks

In computer networks, guaranteeing the precision and integrity of
transmitted data is crucial. Errors may arise during data transmission due
to factors like noise, interference, or hardware malfunctions. Efficiently
identifying and rectifying these errors is vital for preserving data integrity.
One notable method employed for error detection in computer networks is
the Hamming Code. This discussion will explore the basics of Hamming
Code, its application, and its importance in error detection within computer
networks.

1. Understanding Hamming Code

The Hamming Code, devised by Richard Hamming in the 1940s, serves as
a method for both detecting and correcting errors in digital data
transmission. This technique finds extensive application in diverse
communication systems, including computer networks. The fundamental
concept of Hamming Code involves supplementing the transmitted data
with extra parity bits, enabling the detection and correction of errors.

2. Implementation of Hamming Code

Incorporating Hamming Code entails appending redundant parity bits to
the original data bits prior to transmission. These parity bits are computed
according to predefined rules, enabling receivers to identify errors in the
received data.

Key Steps in Hamming Code Implementation:

Calculating Parity Bits: The quantity and placement of parity bits are
established in accordance with the quantity of data bits. Parity bits are
positioned at locations corresponding to powers of 2 (1, 2, 4, 8, etc.).

Incorporating Parity Bits: The computed parity bits are appended to the
original data bits to construct the Hamming Code word. Subsequently, this
resultant Hamming Code word is transmitted across the network.
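The Python sketch below illustrates these steps for the common Hamming(7,4) arrangement, with parity bits at positions 1, 2 and 4; the function names, bit ordering and sample word are illustrative assumptions.

python
def hamming74_encode(d1, d2, d3, d4):
    # Each parity bit covers the codeword positions whose index includes its power of two.
    p1 = d1 ^ d2 ^ d4            # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # covers positions 3, 6, 7
    p4 = d2 ^ d3 ^ d4            # covers positions 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]      # codeword positions 1..7

def hamming74_syndrome(code):
    p1, p2, d1, p4, d2, d3, d4 = code
    s1 = p1 ^ d1 ^ d2 ^ d4       # recheck parity group 1
    s2 = p2 ^ d1 ^ d3 ^ d4       # recheck parity group 2
    s4 = p4 ^ d2 ^ d3 ^ d4       # recheck parity group 4
    return s1 + (s2 << 1) + (s4 << 2)        # 0 means no error, otherwise the error position

word = hamming74_encode(1, 0, 1, 1)
word[4] ^= 1                     # simulate a single-bit error at position 5
pos = hamming74_syndrome(word)
if pos:
    word[pos - 1] ^= 1           # correct the flipped bit at the receiver
print(pos)                       # prints 5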

3. Significance of Hamming Code

Hamming Code offers numerous benefits in error detection and correction
within computer networks:
• Single Error Detection: Hamming Code can identify and rectify
single-bit errors in transmitted data. Through the utilization of parity
bits, errors within individual bits can be pinpointed and rectified at
the receiving end.
• Efficiency: Hamming Code ensures efficient error detection and
correction with minimal overhead. By incorporating only a small
number of additional bits to the original data, it achieves reliable
error detection capabilities.
• Versatility: Hamming Code is adaptable and can be implemented
across various communication systems and protocols. This versatility
makes it a valuable technique for error detection within computer
networks. Its widespread usage spans critical applications such as
telecommunications and data storage systems, where data integrity is
paramount.

4. Conclusion

In summary, Hamming Code stands as a robust error detection technique
extensively employed in computer networks to uphold data integrity.
Through the addition of redundant parity bits to transmitted data,
Hamming Code empowers receivers to efficiently identify and rectify
errors, thus enhancing the reliability of data transmission. Grasping the
principles and application of Hamming Code is imperative for network
engineers and developers engaged in crafting resilient communication
systems. As technological progress unfolds, Hamming Code remains an
invaluable asset for error mitigation and the preservation of data integrity
across computer networks.

3. Dijkstra's Algorithm

Dijkstra's algorithm holds a pivotal role in computer science, particularly
within networking, as it efficiently determines the shortest paths between
nodes in a graph, representing various networks like road or
communication networks. This algorithm was devised by computer
scientist Edsger W. Dijkstra in 1956 and subsequently published three
years later.

• Initialization: The algorithm commences by initializing distances. The
distance to the source node is designated as zero, while all other distances
are set to infinity. The source node is identified as the current node.
• Relaxation: For the current node, the algorithm assesses all unvisited
neighbors, computing tentative distances through the current node. If this
distance proves shorter than the previously recorded distance, the
algorithm updates the distance accordingly.
• Selection of the Next Node: Following distance updates, the algorithm
chooses the unvisited node with the smallest tentative distance to serve as
the subsequent "current node," repeating the relaxation process for it.
• Termination: The algorithm concludes once all nodes have been visited.
The outcome is a depiction of the shortest distances from the source node
to all other nodes in the graph.
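The Python sketch below mirrors these four steps using a binary-heap priority queue from the standard heapq module; the example graph and its edge weights are illustrative.

python
import heapq

def dijkstra(graph, source):
    """graph maps each node to a list of (neighbour, weight) pairs."""
    dist = {node: float("inf") for node in graph}    # initialization
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)       # unvisited node with the smallest tentative distance
        if d > dist[u]:
            continue                     # stale heap entry; this node is already settled
        for v, w in graph[u]:            # relaxation of every neighbour of u
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist                          # termination: shortest distance to every node

graph = {"A": [("B", 4), ("C", 1)],
         "B": [("D", 1)],
         "C": [("B", 2), ("D", 5)],
         "D": []}
print(dijkstra(graph, "A"))              # {'A': 0, 'B': 3, 'C': 1, 'D': 4}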
Properties:
• Greedy Approach: Dijkstra's algorithm adopts a greedy strategy, as it
consistently selects the most advantageous node, which is typically the one
with the smallest tentative distance, at each step.
• Computational Complexity: The time complexity of Dijkstra's algorithm
varies based on the implementation. Utilizing a min-priority queue realized
through a binary heap, the complexity can be expressed as O((V+E) log
V), where V represents the number of vertices and E signifies the number
of edges within the graph.
Applications in Networking:
• Routing: Deployed within protocols such as OSPF (Open Shortest Path
First) and IS-IS (Intermediate System to Intermediate System) to ascertain
the optimal path for data packets to traverse within a network.
• Traffic Engineering: Facilitates the optimization of traffic flow to
mitigate congestion and uphold dependable data transmission.

Variations and Extensions
1. Bidirectional Dijkstra's Algorithm: This variant conducts two
simultaneous searches, originating one forward from the source and the
other backward from the target. These searches converge midway, often
notably diminishing the search area and enhancing runtime, particularly in
dense graphs.
2. A* Search Algorithm: Extending Dijkstra's algorithm, A* incorporates
a heuristic to direct the search more efficiently toward the target,
potentially reducing the number of explored nodes. A* finds particular
utility in game development and robotics for pathfinding within a
predictable environment.
3. Dynamic Graphs: In scenarios where graphs undergo changes over time,
such as edge or node additions or removals, or alterations in weights,
maintaining the shortest path efficiency presents a challenge. Incremental
or dynamic iterations of Dijkstra's algorithm are tailored to update shortest
paths without necessitating a complete recomputation.

4. Stop-and-Wait Protocol:
The Stop-and-Wait protocol represents a fundamental approach to flow
control and error control within data communication, primarily utilized in
scenarios where the communication channel's reliability is questionable. It
serves as a cornerstone in computer networks, ensuring the dependable
transfer of data.
Operation:
• Sender Side: The sender dispatches a frame (data packet) and then halts,
awaiting an acknowledgment (ACK) from the receiver before proceeding
to transmit the subsequent frame.
• Receiver Side: Upon receipt of a frame, the receiver dispatches an ACK
if the frame is devoid of errors. In cases where the frame is received
erroneously (e.g., due to data errors), the receiver may issue a negative
acknowledgment (NACK) or opt not to acknowledge the frame altogether,
contingent on the specific protocol implementation.
• Timeouts: Each frame transmitted by the sender is accompanied by a
timeout. If the timeout elapses before an ACK is received, the sender
initiates a retransmission of the frame.
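A minimal sender-side sketch of this behaviour over UDP is shown below; the destination address, timeout value and 1-bit alternating sequence number are illustrative assumptions, not a complete protocol implementation.

python
import socket

def stop_and_wait_send(frames, dest=("127.0.0.1", 9000), timeout=1.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)                  # retransmission timer for each frame
    seq = 0
    for payload in frames:
        frame = bytes([seq]) + payload        # prepend the sequence number
        while True:
            sock.sendto(frame, dest)          # send one frame, then stop and wait
            try:
                ack, _ = sock.recvfrom(16)
                if ack and ack[0] == seq:     # ACK carries the outstanding sequence number
                    break                     # proceed to the next frame
            except socket.timeout:
                pass                          # timer expired: retransmit the same frame
        seq ^= 1                              # alternate the 1-bit sequence number
    sock.close()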
Characteristics:
• Simplicity: While its implementation is uncomplicated, its efficiency
diminishes, particularly across channels characterized by high latency, as
the sender is compelled to await acknowledgment before transmitting
additional data.
• Reliability: Ensures a dependable delivery mechanism by mandating
acknowledgment for each frame before progression. Despite its ease of
comprehension and analysis, this simplicity renders it less efficient
compared to more intricate protocols such as sliding window protocols.
Applications:
• Employed in early telecommunication protocols and educational settings
to demonstrate fundamental concepts of dependable data transfer within
computer networks.
• Although seldom utilized in contemporary high-speed networks owing to
its inefficiency, the principles underpinning Stop-and-Wait serve as the
cornerstone for more sophisticated protocols proficient in managing flow
and error control more effectively, such as the sliding window protocols
employed in TCP (Transmission Control Protocol).
Advanced Aspects of Stop-and-Wait Protocol:

 Efficiency in Low-latency Networks: Although typically deemed
inefficient because it necessitates the sender to await
acknowledgment before transmitting the subsequent frame, Stop-
and-Wait can exhibit relatively high efficiency in environments
characterized by very low round-trip time (RTT). In such
circumstances, the simplicity of Stop-and-Wait renders it an
attractive option for guaranteeing reliability without the added
complexity of more advanced sliding window protocols.
 Role in Historical and Educational Contexts: In addition to its
practical utility, the Stop-and-Wait protocol holds significance as a
valuable educational resource within computer networking courses. It
familiarizes students with fundamental concepts of dependable data
transfer, encompassing error detection, acknowledgments (ACKs),
negative acknowledgments (NACKs), and timeouts. This
foundational understanding forms an essential basis for
comprehending more intricate protocols.
 Implementation Simplicity: Stop-and-Wait ARQ continues to be a
subject of discussion, partly due to its simplicity of implementation.
Requiring minimal buffering at both the sender and receiver, it
emerges as a straightforward option for uncomplicated or resource-
constrained systems, particularly in scenarios where memory
resources are scarce and data transmission volumes are modest.
 Timeout Calculation: The effectiveness of Stop-and-Wait is greatly
influenced by the method of calculating the timeout. A timeout set
too briefly could result in unnecessary retransmissions, whereas one
set too long might decrease network throughput. Determining the
optimal timeout often requires dynamic adjustment algorithms that
account for the network's present conditions, including congestion
levels and fluctuating round-trip times.
 Use in Satellite and Deep Space Communication: Although
inefficient over links with high latency, adaptations of the Stop-and-
Wait protocol have been utilized in satellite and deep space
communications. In these contexts, the protocol's simplicity and
reliability are prioritized over latency concerns. Given the high cost
and complexity of communication in such scenarios, ensuring highly
reliable transmission methods takes precedence, even if they may not
offer optimal throughput efficiency.
 Protocol Enhancements: Over time, numerous enhancements have
been suggested to boost the efficiency of the Stop-and-Wait protocol
without substantially raising its complexity. These enhancements
encompass employing selective acknowledgments to handle out-of-
order frames and integrating error correction codes, enabling the
receiver to rectify specific errors without necessitating
retransmissions.
 Integration with Other Protocols: In practical applications, Stop-
and-Wait ARQ is often integrated with additional protocols to
achieve a harmonious balance between simplicity and efficiency. For
example, it may be coupled with higher-level protocols tasked with
managing session establishment, data segmentation, and reassembly.
This combined approach facilitates dependable data transmission
while simultaneously optimizing for both throughput and latency.

5. Go Back-N

Go-Back-N represents a form of automatic repeat request (ARQ) protocol
utilized in computer networks to ensure dependable data transmission,
particularly in environments prone to errors, such as wireless mediums or
noisy connections. It operates as a sliding window protocol, permitting the
simultaneous transit of multiple frames.

Here's a breakdown of how the Go-Back-N protocol functions:

Fundamental Principle:
Sender Side: The sender manages a sliding window comprising frames
destined for transmission. It dispatches multiple frames successively
without pausing for individual acknowledgments.

Receiver Side: Correctly received frames are acknowledged by the
receiver. It monitors the anticipated sequence number and disregards any
out-of-order or duplicate frames. In the event of an out-of-order frame, the
receiver still issues an acknowledgment but delays advancing the expected
sequence number until the missing frame is received.

Sliding Window: Both the sender and receiver maintain a window of
frames. The size of this window dictates the number of frames that can be
dispatched or received before awaiting acknowledgments.

Operation:

Sender Behavior:
The sender initiates transmission with a window of frames starting from
the base sequence number.
It continues sending frames within the window until reaching the window's
end or exhausting available frames for transmission.
Following transmission, it awaits acknowledgments.
Upon receiving acknowledgments, it slides the window forward, shifting the
base sequence number according to the cumulative acknowledgment received.

Receiver Behavior:
The receiver accepts frames sequentially within its receiving window.
It dispatches cumulative acknowledgments, indicating the highest in-order frame received.
Out-of-order frames are disregarded, but an acknowledgment for the last in-order frame is
dispatched to notify the sender of any gaps.
Frames with errors are discarded, and no acknowledgment is issued.

Retransmission:
Should the sender's timer expire before receiving acknowledgments for
frames within the window, it infers potential frame loss or damage and
proceeds to retransmit all frames within the window, commencing from
the base sequence number.
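The sketch below captures the sender-side window logic just described; the channel callbacks (send_frame, wait_for_ack, timer_expired) are illustrative placeholders standing in for a real link and timer, not library functions.

python
def go_back_n_sender(frames, window_size, send_frame, wait_for_ack, timer_expired):
    base = 0                                  # oldest unacknowledged frame
    next_seq = 0                              # next frame to transmit
    while base < len(frames):
        # Fill the window: transmit frames without waiting for individual ACKs.
        while next_seq < base + window_size and next_seq < len(frames):
            send_frame(next_seq, frames[next_seq])
            next_seq += 1
        ack = wait_for_ack()                  # cumulative ACK: highest in-order frame received
        if ack is not None and ack >= base:
            base = ack + 1                    # slide the window forward
        elif timer_expired():
            next_seq = base                   # go back: resend every frame in the window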

Advantages:
Efficiency: Go-Back-N enables frame pipelining, enhancing efficiency by
obviating the need for the sender to await individual acknowledgments
before sending subsequent frames.

Simplicity: Its implementation is relatively straightforward compared to
other ARQ protocols like Selective Repeat.

Limitations:
 Potential Bandwidth Inefficiency: In cases of a single lost frame, all
subsequent frames within the window necessitate retransmission,
potentially leading to inefficient bandwidth utilization.
 Increased Buffering at the Sender: The sender must retain every
unacknowledged frame in its window for possible retransmission,
potentially augmenting memory requirements; the receiver, by contrast,
simply discards out-of-order frames.
In summary, the Go-Back-N protocol offers a direct approach to reliable
data transmission in error-prone networks, facilitating the efficient
transmission of multiple frames before awaiting acknowledgments.
Nonetheless, it is subject to certain limitations concerning bandwidth
efficiency and buffer requirements.

6. Selective Repeat Protocol
Selective Repeat represents another form of automatic repeat request
(ARQ) protocol utilized in computer networks to ensure reliable data
transmission, akin to Go-Back-N. However, unlike Go-Back-N, Selective
Repeat exclusively retransmits individual lost or corrupted frames, rather
than retransmitting an entire window of frames.

Basic Concept:
Sender Behavior: The sender manages a window of frames intended for
transmission and dispatches them sequentially. It keeps track of the
acknowledgment for each frame individually and need not wait for one
frame's acknowledgment before transmitting the next frame within its window.

Receiver Behavior: Correctly received frames are acknowledged
individually by the receiver. It monitors the anticipated sequence number
and buffers out-of-order frames until the missing frames are received.

Sliding Window: Both the sender and receiver maintain a window of
frames. The window size dictates the number of frames that can be
dispatched or received before awaiting acknowledgments.

Operation:
Sender Behavior:
The sender transmits frames sequentially within its designated window,
maintaining a separate timer for each outstanding frame.
In the event that acknowledgment for a frame is not received within a
specified timeout interval, the sender solely retransmits that particular
frame.

Receiver Behavior:
The receiver accepts any frame that falls within its designated receiving
window, even if it arrives out of order.
Acknowledgments are dispatched individually for each accurately received
frame.
Out-of-order frames are buffered until the missing frames arrive, after
which they are delivered in order.
Frames received with errors are discarded, with no acknowledgment
issued, thereby prompting the sender to retransmit the specific frame.
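A simplified sender-side sketch of this behaviour is given below; as in the Go-Back-N sketch, the callbacks (send_frame, collect_acks, timer_expired) are illustrative placeholders for a real channel and per-frame timers.

python
def selective_repeat_sender(frames, window_size, send_frame, collect_acks, timer_expired):
    base = 0
    acked = [False] * len(frames)
    sent = [False] * len(frames)
    while base < len(frames):
        for seq in range(base, min(base + window_size, len(frames))):
            # Transmit new frames; retransmit only those whose own timer has expired.
            if not acked[seq] and (not sent[seq] or timer_expired(seq)):
                send_frame(seq, frames[seq])
                sent[seq] = True
        for seq in collect_acks():            # frames are acknowledged individually
            if 0 <= seq < len(frames):
                acked[seq] = True
        while base < len(frames) and acked[base]:
            base += 1                         # slide past the acknowledged in-order prefix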
Advantages:
Efficiency: Selective Repeat has the potential for greater efficiency in
terms of bandwidth utilization compared to Go-Back-N since it retransmits
only the lost frames rather than an entire window of frames.
Reduced Retransmission Overhead: Because Selective Repeat solely
retransmits individual lost frames, it has the potential to diminish
retransmission overhead in contrast to Go-Back-N, particularly in
situations characterized by sporadic errors.
Limitations:
Complexity: Implementing Selective Repeat is more intricate than Go-
Back-N because it necessitates individual acknowledgments and the
buffering of out-of-order frames.

Heightened Memory Demands: The receiver is required to buffer out-of-
order frames until the missing ones are received, potentially demanding
more memory than Go-Back-N.

In conclusion, Selective Repeat provides more efficient bandwidth
utilization and reduced retransmission overhead compared to Go-Back-N.
However, this advantage comes at the expense of increased complexity
and potentially greater memory requirements. Selective Repeat is well-
suited for environments where occasional errors occur, and optimal
bandwidth utilization is paramount.

Selective Repeat Protocol

7. Socket Programming Using TCP in Computer Networks

TCP (Transmission Control Protocol) socket programming stands as a pivotal
element of network communication within contemporary computing systems. It
empowers applications to forge dependable, connection-oriented communication
channels across networks, facilitating data exchange among diverse endpoints. This
discourse delves into the rudiments of TCP socket programming, its execution, and
its significance in computer networks.
1. Understanding TCP and Sockets

TCP, a cornerstone protocol within the Internet Protocol Suite, furnishes
reliable, stream-oriented communication between two hosts over IP-based
networks. It ensures data integrity, sequencing, and flow control, rendering
it apt for applications necessitating steadfast data transmission.
Sockets, conversely, serve as communication endpoints in networks. In
socket programming, applications spawn sockets to establish connections
and swap data with other applications over the network. TCP sockets
furnish a reliable, connection-oriented communication conduit for data
transmission among processes running on distinct hosts.

2. Implementation of Socket Programming Using TCP

The implementation of socket programming using TCP involves several
key steps:

Socket Creation: Applications forge TCP sockets employing system calls or
programming interfaces proffered by the operating system. The socket interface
furnishes functions for socket creation, binding, connection establishment, listening,
acceptance of connections, and data transmission over TCP connections.
Connection Establishment: In TCP socket programming, a connection is established
between a client and a server. The server engenders a passive socket, awaiting
incoming connection solicitations. The client instigates a connection by spawning an
active socket and designating the server's address and port.
Data Exchange: Upon connection establishment, data interchange becomes feasible
between the client and server leveraging the socket interface. Applications may
employ functions such as send() and recv() to dispatch and retrieve data across the
TCP connection. TCP guarantees dependable delivery of data in the correct sequence.
Connection Termination: Following data exchange culmination, the TCP connection
may be terminated. Both client and server can kickstart the connection termination
procedure employing the close() function or its equivalent.
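The Python sketch below walks through these four steps (creation, establishment, exchange, termination) with a minimal server and client, in the same style as the UDP examples in the next section; the address, port and messages are illustrative.

python
import socket

def run_server():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # TCP socket creation
    server.bind(("0.0.0.0", 9090))
    server.listen(1)                          # passive socket awaiting connections
    conn, addr = server.accept()              # connection establishment
    data = conn.recv(1024)                    # data exchange
    conn.sendall(b"Hello, client!")
    conn.close()                              # connection termination
    server.close()

def run_client():
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # active socket
    client.connect(("127.0.0.1", 9090))       # connect to the server's address and port
    client.sendall(b"Hello, server!")
    reply = client.recv(1024)
    client.close()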

3. Significance of Socket Programming with TCP

Socket programming using TCP offers several advantages in computer
networks:

Reliability: TCP guarantees dependable, error-checked data transmission,
ensuring accurate and orderly delivery of data. This reliability is essential
for applications like file transfer, email, and web browsing.
Compatibility: TCP enjoys broad support across various operating
systems and network devices, rendering socket programming with TCP
compatible with an extensive array of systems and platforms.
Versatility: TCP socket programming is flexible and applicable to diverse
network scenarios, encompassing client-server applications, peer-to-peer
communication, and distributed systems.
Scalability: TCP facilitates concurrent connections between numerous
clients and a server, enabling scalable and effective communication within
networked environments.
4. Conclusion
In summary, TCP socket programming stands as a foundational method for
network communication in computer networks. By utilizing TCP sockets,
applications can establish dependable, connection-oriented channels for
data exchange across IP-based networks. Grasping the principles and
execution of socket programming with TCP is imperative for network
engineers and developers engaged in crafting and deploying networked
applications. As technology progresses, TCP socket programming persists
as a fundamental element of network communication, fostering smooth
data exchange and empowering the creation of resilient and scalable
networked systems.

8. Socket Programming using UDP

Socket programming utilizing the User Datagram Protocol (UDP) serves
as a technique for establishing communication between two endpoints
within a network. Differing from the Transmission Control Protocol
(TCP), UDP operates without connections and lacks assurance regarding
packet delivery or sequencing. This characteristic renders it suitable for
applications prioritizing real-time communication and speed over
reliability, such as video streaming, online gaming, and Voice over IP
(VoIP).

At the core of UDP socket programming lies the concept of sockets, which
serve as endpoints for communication between two machines. A socket
comprises an IP address and a port number. In UDP, a socket facilitates
both the transmission and reception of datagrams, which represent discrete
units of data.
The steps involved in UDP socket programming include:

1. Creating a Socket: In order to initiate communication, both the client
and server must create sockets. This is achieved through the socket()
system call, which requires parameters defining the address family (e.g.,
IPv4 or IPv6) and the socket type (in this instance, UDP). For example, in
Python:

python
import socket
udp_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

2. Binding the Socket (Server): After the socket is created on the server
side, it must be associated with a particular IP address and port number.
This enables the server to await incoming datagrams on the specified port.

python
udp_socket.bind(("0.0.0.0", 8080))

3. Sending Data (Client): The client transmits data by generating a
datagram and indicating the destination address and port. This action is
accomplished through the utilization of the sendto() function.

python
udp_socket.sendto(b"Hello, server!", ("server_ip", 8080))

4. Receiving Data (Server): The server awaits incoming datagrams
utilizing the recvfrom() function. Upon data arrival, it is stored in a buffer,
and concurrently, the sender's address is captured.
python
data, client_address = udp_socket.recvfrom(1024)

5. Handling Data (Server): Upon receiving the data, the server proceeds
to process it accordingly. This may entail parsing the data, conducting
calculations, or executing designated actions depending on the content of
the datagram.
6. Closing the Socket: Upon completion of the communication process,
both the client and server are advised to terminate their sockets utilizing
the close() function to relinquish allocated resources.
python
udp_socket.close()

UDP socket programming offers several advantages:


1. Low Overhead: UDP incurs minimal overhead in contrast to TCP as it
does not encompass handshakes, acknowledgments, or connection
management.
2. Real-time Communication: UDP is optimal for real-time applications
wherein low latency holds paramount importance, examples of which
include online gaming and video streaming.
3. Broadcasting: UDP permits broadcasting messages to multiple
recipients concurrently, a capability that proves beneficial in specific
scenarios.

However, UDP also has limitations:


1. Unreliable: UDP offers no assurance regarding the delivery or
sequencing of packets. Consequently, there exists the possibility of some
packets being lost, duplicated, or arriving out of order.
2. No Congestion Control: Due to the absence of built-in congestion
control mechanisms, UDP may not be well-suited for applications
necessitating reliable delivery, like file transfers or web browsing.

3. Limited Packet Size: UDP packets are constrained in size, typically
reaching up to 64 KB, a limitation that may prove inadequate for extensive
data transfers.

In conclusion, UDP socket programming offers a streamlined and effective
approach to communication for applications prioritizing speed and real-
time interaction. Through a comprehensive grasp of UDP socket
fundamentals and their application, developers can craft swift and
responsive networked applications tailored to their precise needs.

9. Distributed Denial of Service

Distributed Denial of Service (DDoS) attacks constitute a form of
cyberattack wherein numerous compromised systems, frequently infected
with malware, inundate a target system with an excessive volume of
traffic, thereby rendering it inaccessible to legitimate users. The simulation
of such attacks is essential for comprehending their dynamics and devising
robust defense strategies. NS3 (Network Simulator 3) stands as a widely
employed discrete-event network simulator empowering researchers to
model and simulate diverse network scenarios, encompassing DDoS
attacks among them.
In simulating a DDoS attack using NS3, several key components and steps
are involved:

1. Network Topology Setup: The initial phase involves delineating the
network topology, encompassing the determination of the number of
nodes, their connections, and attributes like bandwidth, latency, and packet
loss rates. NS3 offers an array of pre-existing models for network
components such as routers, switches, and hosts, enabling users to
construct intricate network topologies.
2. Attack Scenario Definition: When simulating a DDoS attack, it's
imperative to outline the attack scenario. This involves pinpointing the
target of the attack, such as a web server, specifying the type of attack,
such as UDP flood or SYN flood, and detailing the attributes of the
attacking nodes, including the quantity of nodes and the intensity of the
attack.
3. Traffic Generation: NS3 furnishes utilities for generating network
traffic, which can be tailored to simulate various traffic patterns. In the
scenario of a DDoS attack simulation, it's imperative to configure traffic
generation models to produce malevolent traffic emanating from the
attacking nodes directed towards the target node(s). This may entail
generating a substantial influx of packets endowed with specific attributes
(e.g., source IP addresses, packet payloads) to emulate the actions
observed in genuine DDoS attacks.
4. Attack Execution: Upon defining the network topology, attack
scenario, and traffic generation parameters, the simulation can commence.
Throughout the simulation, NS3 encapsulates the behavior of every node
within the network, encompassing packet transmission, routing
determinations, and protocol engagements. As the attack progresses, the
designated target node(s) will encounter a notable surge in traffic,
potentially culminating in service degradation or denial of service.
5. Performance Analysis: Once the simulation concludes, an analysis
of diverse performance metrics can be conducted to assess the impact of
the DDoS attack. These metrics might encompass network throughput,
rates of packet loss, end-to-end delay, and resource utilization. Through
comparing these metrics in normal scenarios versus during the attack,
researchers can evaluate the efficacy of various defense mechanisms and
mitigation strategies.
6. Mitigation Strategies: NS3-based simulations of DDoS attacks can
serve as a means to gauge the efficiency of different mitigation strategies.
These strategies encompass rate limiting, traffic filtering, intrusion
detection systems, and distributed defense mechanisms. Through
simulating diverse attack scenarios and mitigation methods, researchers
can acquire insights into the advantages and drawbacks of each tactic,
thereby identifying optimal practices for safeguarding real-world networks
against DDoS attacks.

To sum up, utilizing NS3 for simulating DDoS attacks empowers
researchers to scrutinize the behavior of these attacks within a controlled
setting, assess the efficacy of defense mechanisms, and formulate
strategies to lessen their impact. Leveraging NS3's capabilities to simulate
intricate network scenarios and produce lifelike traffic patterns allows
researchers to glean valuable insights into the intricacies of DDoS attacks.
This, in turn, contributes to the enhancement of more resilient and robust
network infrastructures.

10. HTTP Protocol

1. Introduction:
The Hypertext Transfer Protocol (HTTP) is a fundamental protocol
used for communication on the World Wide Web. It serves as the
foundation for data communication in the form of text, images, videos,
and other multimedia content across the internet. Understanding HTTP
is crucial for anyone working with web technologies, as it governs the
exchange of information between clients and servers.

2. Overview:
HTTP operates as a request-response protocol within the client-
server computing model. It facilitates the transfer of hypertext, which
includes HTML documents, images, and other resources. The protocol
operates on top of the TCP/IP protocol suite, utilizing TCP port 80 by
default for communication.

3. Key Concepts:
1. Request-Response Cycle: HTTP follows a simple request-response
cycle. A client, typically a web browser, sends an HTTP request to a
server, which then processes the request and returns an HTTP
response containing the requested resource or an error message.

2. Uniform Resource Identifier (URI): URIs are used to identify
resources on the web. They consist of a scheme (e.g., "http://" for
HTTP), a hostname, and a path to the resource.

3. Methods: HTTP defines several request methods, including GET,
POST, PUT, DELETE, etc. Each method specifies the action to be
performed on the identified resource.

4. Status Codes: HTTP responses include status codes indicating the
outcome of the request. Common status codes include 200 (OK), 404
(Not Found), and 500 (Internal Server Error).
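The short Python sketch below shows one full request-response cycle using the standard http.client module, illustrating a method, a URI path and a status code; the host name is illustrative.

python
import http.client

conn = http.client.HTTPConnection("example.com", 80)  # TCP port 80, the HTTP default
conn.request("GET", "/")                               # request line: method and path
response = conn.getresponse()
print(response.status, response.reason)                # status code, e.g. 200 OK
body = response.read()                                 # the requested resource
conn.close()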

4. Versions:
Over the years, several versions of HTTP have been developed, with
each iteration introducing improvements and enhancements. The most
widely used versions include HTTP/1.1 and HTTP/2. The latter introduces
features like multiplexing, header compression, and server push, aimed at
improving performance and efficiency.

5. Security Considerations:
Security is a critical aspect of web communication. HTTP does not
inherently provide encryption or data integrity mechanisms, making it
vulnerable to various attacks, such as eavesdropping and tampering. To
address these concerns, HTTPS (HTTP Secure) employs encryption via
SSL/TLS protocols to secure data transmission.

6. Conclusion:
In conclusion, HTTP is the backbone of communication on the
World Wide Web, facilitating the exchange of resources between clients
and servers. Its simplicity and versatility have contributed to the growth
and evolution of the internet. As technology advances, HTTP continues to
adapt to meet the demands of modern web applications, emphasizing
performance, security, and reliability.

This report provides a comprehensive overview of the HTTP protocol,
covering its fundamental concepts, key features, versions, and security
considerations.

Team pictures:

