TYPES OF CSMA

1-PERSISTENT CSMA
- The station senses the channel; if it is idle, it transmits immediately (with probability 1).
- If the channel is busy, the station keeps sensing continuously and transmits as soon as the channel becomes idle.

NON-PERSISTENT CSMA
- The station senses the channel; if it is idle, it transmits.
- If the channel is busy, the station waits a random amount of time before sensing again, instead of sensing continuously.

P-PERSISTENT CSMA
- Applies to slotted channels.
- When a station becomes ready to send, it senses the channel.
- P-Persistent CSMA addresses collisions with a probabilistic approach. The probability value, denoted p, determines the chance of transmission in each time slot: when the channel is sensed idle, the device transmits with probability p and defers to the next slot with probability (1 - p). If the channel is busy, the device continues to listen and waits for the next opportunity.
// Higher values of p give higher channel utilization but also increase the chance of collisions; lower values of p reduce the collision probability but may underutilize the channel.
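A minimal sketch of the p-persistent decision rule described above; the channel model, slot loop, and the idle() helper are illustrative assumptions, not part of any real driver API.

import random

def p_persistent_send(channel_idle, p, max_slots=100):
    """Decide, slot by slot, when a station may start transmitting.
    channel_idle: function returning True when the medium is sensed idle.
    p: probability of transmitting in a slot where the channel is idle.
    Returns the slot number in which transmission starts, or None."""
    for slot in range(max_slots):
        if not channel_idle():
            continue                      # busy: wait for the next slot and sense again
        if random.random() < p:
            return slot                   # transmit with probability p
        # with probability (1 - p), defer to the next slot and sense again
    return None

# Example: a channel that is idle about 70% of the time, station using p = 0.3
idle = lambda: random.random() < 0.7
print("transmitted in slot:", p_persistent_send(idle, p=0.3))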
O-PERSISTENT CSMA
- In this method of CSMA, a supervisory node assigns a transmission order to each node in the network.
- When the channel is idle, instead of sending data immediately, each node waits for its assigned transmission turn.
- In this mode, every station transmits its data in its turn.

CSMA/CD (collision detection)


In this method, a station monitors the medium after it sends a frame to see whether the transmission was successful. If it was successful, the transmission is finished; if not, the frame is sent again.

// Propagation time: the time required for one bit to reach its destination.
// Backoff time: the time period for which the node waits before sending the data again.
// After a station detects a collision, it aborts its transmission, waits a random period of time, and then tries again.
// The frame transmission time (Tfr) should be at least twice the maximum propagation time (Tp), so that a collision can be detected before the frame finishes sending.
// It works in wired networks.
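A minimal sketch of the detect-abort-backoff behaviour noted above, using binary exponential backoff; the slot time, the attempt limit of 15 and the transmit() collision model are illustrative Ethernet-style assumptions.

import random, time

SLOT_TIME = 51.2e-6   # seconds; classic 10 Mb/s Ethernet slot time (illustrative value)

def csma_cd_send(transmit, max_attempts=15):
    """transmit() is assumed to return True on success, False if a collision was detected."""
    for attempt in range(max_attempts + 1):
        if transmit():
            return True                              # transmission finished successfully
        # Collision detected: abort, wait a random backoff period, then try again.
        k = min(attempt + 1, 10)                     # cap the exponent at 10 (Ethernet convention)
        backoff = random.randint(0, 2 ** k - 1) * SLOT_TIME
        time.sleep(backoff)                          # a simulator would advance a virtual clock instead
    return False                                     # give up after too many collisions

# Example: a medium where roughly 30% of attempts collide
print(csma_cd_send(lambda: random.random() > 0.3))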
CSMA/CA (COLLISION AVOIDANCE)

// Nodes attempt to avoid collisions by beginning transmission only after the channel is sensed to be "idle".
// They do so by receiving acknowledgements from the other nodes.
// A Request-To-Send (RTS) and Clear-To-Send (CTS) handshake is used to reserve the channel before transmission. This reduces the chance of collisions and increases efficiency.
(RTS = a short control frame the sender transmits to ask for the channel.
CTS = the response to the RTS, sent by the receiving device, indicating the line is free for transmission.)
// IFS (inter-frame space) = the protocol requires a minimum amount of time between transmissions to allow the channel to be clear and reduce the likelihood of collisions (for example, the gap between the RTS and CTS exchanges).
// It works in wireless networks.
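A minimal sketch of the sense / wait-IFS / RTS-CTS / send sequence described above; the medium, receiver and timing values are illustrative assumptions, not a real Wi-Fi driver API.

import random, time

def csma_ca_send(channel_idle, send_rts, wait_for_cts, send_frame,
                 ifs=50e-6, max_tries=5):
    """Try to transmit one frame using collision avoidance."""
    for attempt in range(max_tries):
        while not channel_idle():
            time.sleep(ifs)                   # 1. keep sensing until the channel is idle
        time.sleep(ifs)                       # 2. wait the inter-frame space so the channel is really clear
        send_rts()                            # 3. ask to reserve the channel
        if wait_for_cts():
            send_frame()                      # 4. channel reserved: transmit the data frame
            return True
        # No CTS arrived: assume a collision, back off for a random time and retry.
        time.sleep(random.uniform(0, ifs * 2 ** attempt))
    return False

# Example with a channel that is usually idle and a receiver that always answers the RTS
print(csma_ca_send(lambda: random.random() < 0.9, lambda: None, lambda: True, lambda: None))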

NETWORK LAYER
NETWORK LAYER AT SOURCE
# NETWORK LAYER DESIGN ISSUES

a) STORE AND FORWARD PACKET SWITCHING:

- The host sends the packet to the nearest router. The packet is stored there until it has fully arrived and its checksum has been verified; it is then forwarded to the next router, and so on until it reaches the destination. This mechanism is called "Store and Forward packet switching."

b) Service provided to the transport layer:


The network layer provides services to the transport layer at the network layer/transport layer interface. An important question is what kind of services the network layer provides to the transport layer. The network layer services have been designed with the following goals in mind:
1. The services should be independent of the router technology.
2. The transport layer should be shielded from the number, type, and topology of the routers present.
3. The network addresses made available to the transport layer should use a uniform numbering plan, even
across LANs and WANs.

c) Implementation of Connectionless Service:[datagram]


Connectionless service is a communication service the network layer offers to the transport layer. In connectionless service, data is transmitted as individual packets or datagrams without establishing a dedicated connection between the sender and receiver. Each packet is treated independently and can take a different path through the network. Examples of connectionless protocols include UDP (User Datagram Protocol) in the Internet Protocol Suite.

d) Implementation of Connection-oriented Service:[virtual connection]


To use a connection-oriented service, a connection is first established, then used, and finally released. In connection-oriented services, the data packets are delivered to the receiver in the same order in which they were sent by the sender. It can be done in either of two ways:
- Circuit Switched Connection: a dedicated physical path or circuit is established between the communicating nodes, and then the data stream is transferred.
- Virtual Circuit Switched Connection: the data stream is transferred over a packet-switched network in such a way that it seems to the user that there is a dedicated path from the sender to the receiver. A virtual path is established here, while other connections may also be using the same physical path.

Comparison of virtual circuit and datagram services: Virtual circuit and datagram are two approaches
used in network communication.

Virtual circuit: Virtual circuit service is provided by connection-oriented protocols like TCP. In virtual
circuit switching, a dedicated logical path or circuit is established between the sender and receiver before
data transmission. This path remains intact throughout the communication session. Each packet of data
carries a virtual circuit identifier, allowing the network to forward packets along the predefined path.
Virtual circuits provide reliability, ordered delivery, and flow control. However, establishing and
maintaining the circuit incurs overhead and requires additional resources.
Datagram: Datagram service is provided by connectionless protocols like UDP. In datagram switching, each
packet is treated independently and carries the necessary addressing information to reach its destination.
Packets can take different paths through the network and may arrive out of order or be lost. Datagram
service is simpler and has lower overhead compared to virtual circuit service, but it lacks reliability
guarantees and flow control mechanisms. It is commonly used for applications that prioritize low latency
and are tolerant of occasional packet loss, such as real-time streaming or gaming.

ROUTING ALGORITHMS

o In order to transfer the packets from source to the destination, the network layer must
determine the best route through which packets can be transmitted.
o The routing protocol/algorithm performs this job.

The Routing algorithm is divided into two categories:

o Adaptive Routing algorithm


o Non-adaptive Routing algorithm
1. ADAPTIVE ROUTING ALGORITHM
o also known as dynamic routing algorithm.
o This algorithm makes the routing decisions based on the topology and network traffic.
o The main parameters related to this algorithm are hop count (the number of nodes a packet passes through), distance, and estimated transit time.

An adaptive routing algorithm can be classified into three parts:

o Centralized algorithm: It is also known as global routing algorithm. It computes the least-cost
path between source and destination by using complete and global knowledge about the network. This
algorithm takes the connectivity between the nodes and the cost of each link as input to calculate the best path.
EX: Link state algorithm- aware of the cost of each link in the network.
o Isolation algorithm: It is an algorithm that obtains the routing information by using local information
rather than gathering information from other nodes. It only knows about its directly connected links
and uses that information to determine the best path.
o Distributed algorithm: It is also known as decentralized algorithm as it computes the least-cost path
between source and destination in an iterative and distributed manner. In the decentralized algorithm,
no node has the knowledge about the cost of all the network links. Each node starts with limited
knowledge about its nearby links and iteratively shares and updates information with its neighbors.
EX: Distance Vector routing algorithm.
2) NON-ADAPTIVE ROUTING ALGORITHM

o Also known as a static routing algorithm.


o They do not take the routing decision based on the network topology or network traffic.

The Non-Adaptive Routing algorithm is of two types:

o Flooding: In case of flooding, every incoming packet is sent out on all the outgoing links except the one on which it arrived. The disadvantage of flooding is that a node may receive several copies of the same packet.
o Random walks: In case of random walks, a packet is sent by the node to one of its neighbours chosen at random. An advantage of using random walks is that it uses alternative routes very efficiently.

#DISTANCE VECTOR ROUTING PROTOCOL


The Distance Vector algorithm is iterative, asynchronous and distributed.
// Asynchronous: it does not require that all of its nodes operate in lockstep with each other.
1. The Distance Vector algorithm is a dynamic algorithm.
2. It is mainly used in RIP.
3. Each router maintains a distance table known as a vector.
4. It is based on the Bellman-Ford algorithm.
5. In this, the router sends its knowledge about the network only to those routers to which it has direct links. The router sends whatever it knows about the network through its ports; the receiving router uses this information to update its own routing table.
o Each router should keep at least three pieces of information for each entry:

a) DESTINATION
b) COST
c) NEXT HOP (i.e., which neighbour the packet is passed to next)
STEPS:

(1) INITIALIZATION: Each node can know only the distance between itself and its immediate neighbours, those directly connected to it. So for the moment, we assume that each node can send a message to its immediate neighbours and find the distance between itself and these neighbours.

// The distance from A to A itself is 0.
// The distance from A to any node that is not directly connected to it is infinity, e.g., A to E = infinity.

(2) SHARING: The whole idea of distance vector routing is the sharing of information between neighbours. Although node A does not know about node E, node C does. So if node C shares its routing table with A, node A can also learn how to reach node E. On the other hand, node C does not know how to reach node D, but node A does. If node A shares its routing table with node C, node C also learns how to reach node D. In other words, nodes A and C, as immediate neighbours, can improve their routing tables if they help each other.
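A minimal sketch of this sharing/update step (the Bellman-Ford relaxation): when a node receives a neighbour's table, it keeps any route that is cheaper via that neighbour. The node names and costs below are illustrative, loosely following the A-E example above, not values from the original figure.

INF = float("inf")

def update_table(my_table, neighbour, neighbour_table, link_cost):
    """my_table / neighbour_table: {destination: (cost, next_hop)}."""
    changed = False
    for dest, (n_cost, _) in neighbour_table.items():
        new_cost = link_cost + n_cost                 # cost to reach dest via this neighbour
        old_cost = my_table.get(dest, (INF, None))[0]
        if new_cost < old_cost:
            my_table[dest] = (new_cost, neighbour)    # better route found: record cost and next hop
            changed = True
    return changed

# A initially knows only its direct neighbours; C then shares its table over the A-C link (cost 2)
a = {"A": (0, "-"), "B": (5, "B"), "C": (2, "C")}
c = {"A": (2, "A"), "C": (0, "-"), "E": (4, "E")}
update_table(a, "C", c, link_cost=2)
print(a)    # A now reaches E through C with cost 6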

PROBLEM: Count to infinity. In the distance vector algorithm, the information about a failed link takes time to propagate throughout the network. During this propagation delay, routers may still consider the failed route valid and try to send packets through it. The problem arises when routers continue to advertise the failed route to each other and get stuck in a routing loop.
Each time a router receives such an advertisement, it increases the cost of the route and passes it back to its neighbours. The cost keeps counting up, step by step, until it reaches the value defined as infinity, at which point the route is finally considered unreachable.

SOLUTION TO THE COUNT-TO-INFINITY PROBLEM:

- DEFINING INFINITY: redefine infinity as a smaller number, such as 100, so that the counting stops much sooner.

#LINK STATE ROUTING


1) It is primarily associated with link-state routing protocols like OSPF (Open Shortest Path First).
2) The routing table is built using Dijkstra's algorithm.
3) Dynamic + has global knowledge of the topology.
@STEPS:
a) Each router in the network collects information about its directly connected links and creates a packet called a "link state packet" (LSP).
b) FLOODING = each router sends its LSP to every other router on the internetwork by flooding: a received LSP is forwarded out of every interface except the one it arrived on.
c) Formation of a shortest path tree for each node.
d) Calculation of a routing table based on the shortest path tree.

a) CREATION OF LSPs
- A link state packet can carry a large amount of data, unlike distance vector routing. For example, apart from destination and cost, it also carries a sequence number, age, etc.

// R1, R2, ... are the LSPs for each node; each contains all the necessary information about that node's neighbouring devices.

b) FLOODING
-Each router floods its LSPs to all other routers in the network. This means that every router receives the LSPs
from all other routers, allowing them to build a complete and accurate network map. The node that receives an
LSP compares it with the copy it may already have. If the newly arrived LSP is older than the one it has (found by
checking the sequence number), it discards the LSP.
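A minimal sketch of this flooding rule: keep only the newest LSP per originating router (judged by sequence number), discard older duplicates, and forward fresh LSPs on every interface except the one they arrived on. The data structures are illustrative assumptions.

def handle_lsp(lsp_db, lsp, arrived_on, interfaces):
    """lsp_db: {origin_router: highest sequence number seen}.
    lsp: dict with 'origin' and 'seq' keys.
    Returns the interfaces on which to forward the LSP ([] if it is discarded)."""
    newest = lsp_db.get(lsp["origin"], -1)
    if lsp["seq"] <= newest:
        return []                                        # older or duplicate LSP: discard it
    lsp_db[lsp["origin"]] = lsp["seq"]                   # remember the newer LSP
    return [i for i in interfaces if i != arrived_on]    # flood on every other interface

db = {}
print(handle_lsp(db, {"origin": "R2", "seq": 7}, "if1", ["if1", "if2", "if3"]))   # ['if2', 'if3']
print(handle_lsp(db, {"origin": "R2", "seq": 7}, "if3", ["if1", "if2", "if3"]))   # [] (duplicate)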

c) Formation of the shortest path tree (using Dijkstra)

- From R1, R2 and R3 can be reached directly, so the final values for R2 and R3 are 6 and 9.
- R1 cannot reach R4 directly; it has to go through R2 or R3. The cost of the path from R1 to R4 via R2 is 13, but via R3 it is 12, so we choose the path through R3.
- The same steps are repeated for R5 and R6.
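A minimal sketch of this shortest-path-tree computation using Dijkstra's algorithm. The link costs below only mirror the R1-R4 numbers quoted above (6, 9, 13 via R2, 12 via R3) and are otherwise assumed, since the original figure is not reproduced here.

import heapq

def dijkstra(graph, source):
    """graph: {node: {neighbour: cost}}. Returns {node: (cost, previous_hop)}."""
    dist = {source: (0, None)}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u][0]:
            continue                          # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if v not in dist or nd < dist[v][0]:
                dist[v] = (nd, u)             # shorter path to v found; remember the predecessor
                heapq.heappush(pq, (nd, v))
    return dist

# Illustrative topology: R1-R2 = 6, R1-R3 = 9, R2-R4 = 7, R3-R4 = 3
graph = {
    "R1": {"R2": 6, "R3": 9},
    "R2": {"R1": 6, "R4": 7},
    "R3": {"R1": 9, "R4": 3},
    "R4": {"R2": 7, "R3": 3},
}
print(dijkstra(graph, "R1"))    # R4 is reached via R3 with cost 12 rather than via R2 with cost 13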

d) Creating routing tables

- Based on the calculated shortest paths, each router builds its routing table, which contains the next-hop router and the cost associated with each destination. For example, R1's routing table contains the cost and next hop for reaching every other node.

IPV4 ADDRESSING

- IPv4 is version 4 of IP. It is the current and most commonly used version of IP addressing.
- It is a 32-bit address written as four numbers separated by dots, i.e., periods.
- This address is unique for each device.
- For example, 66.94.29.13
- In the example above, each group of numbers separated by periods is called an octet. Each number in an octet is in the range 0-255.
- POSSIBLE ADDRESS SPACE: the total number of addresses usable by the protocol. IPv4 can produce 2 raised to the power 32 (2^32) = 4,294,967,296 possible unique addresses.

// An IPv4 address can be represented in two formats:
- Binary
- Decimal

CLASSFUL ADDRESSING, SUBNET MASK:

// NETID = used to identify a network. Each network has a unique id.
- To determine the net id and host id we take help from the subnet mask.
- Each class has a unique default subnet mask that determines how many bits represent the net id and how many represent the host id.

// BROADCAST ADDRESS = the directed broadcast address is the last address of the subnet and can be heard by all hosts in the subnet.
// HOST ADDRESSES = all addresses between the network address and the directed broadcast address are host addresses for the subnet.
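A minimal sketch of how the subnet mask splits an address into net id and host id, and where the directed broadcast address comes from. It reuses the 66.94.29.13 example from above; the 255.255.255.0 (/24) mask is an assumption made only for illustration.

import ipaddress

iface = ipaddress.ip_interface("66.94.29.13/24")    # address plus assumed /24 subnet mask
net = iface.network

print("binary form:        ", format(int(iface.ip), "032b"))
print("net id (network):   ", net.network_address)                    # 66.94.29.0
print("directed broadcast: ", net.broadcast_address)                  # 66.94.29.255
print("host id portion:    ", int(iface.ip) & int(net.hostmask))      # 13
print("usable host addrs:  ", net.num_addresses - 2)                  # 254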

DRAWBACKS OF CLASSFUL ADDRESSING


IPV4 DATAGRAM PACKET

IPV4 TO IPV6
- Address length: IPv4 is a 32-bit address; IPv6 is a 128-bit address.
- Fields: IPv4 is a numeric address that consists of 4 fields separated by dots (.); IPv6 is an alphanumeric address that consists of 8 fields separated by colons (:).
- Classes: IPv4 has 5 different classes of IP address (Class A, Class B, Class C, Class D and Class E); IPv6 does not contain classes of IP addresses.
- Number of IP addresses: IPv4 has a limited number of IP addresses; IPv6 has a very large number of IP addresses.
- Address space: IPv4 generates about 4 billion unique addresses; IPv6 generates about 340 undecillion unique addresses.
- End-to-end connection integrity: in IPv4 it is unachievable; in IPv6 it is achievable.
- Address representation: in IPv4 the IP address is represented in decimal; in IPv6 it is represented in hexadecimal.
- Checksum field: the checksum field is available in IPv4 but not in IPv6.
- Transmission scheme: IPv4 uses broadcasting; IPv6 uses multicasting, which provides more efficient network operations.
- Number of octets: IPv4 consists of 4 octets; IPv6 consists of 8 fields of 2 octets each, i.e., 16 octets in total.

CONGESTION CONTROL
// Backpressure = a node-to-node congestion control technique in which a congested node stops receiving data from its upstream node(s); the effect propagates backwards towards the source.

// Choke packet: here, a signal is transmitted directly from the node where congestion has occurred to the source node, asking it to reduce its sending rate.

QoS

LEAKY BUCKET ALGORITHM
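A minimal sketch of the leaky bucket idea, assuming a finite bucket (queue) capacity and a constant output rate per time tick; packets that arrive when the bucket is full are dropped.

from collections import deque

def leaky_bucket(arrivals, capacity, out_rate):
    """arrivals: packets arriving in each time tick.
    capacity: maximum number of packets the bucket can hold.
    out_rate: packets released per tick at a constant rate.
    Returns (packets sent per tick, total packets dropped)."""
    bucket = deque()
    sent_per_tick, dropped = [], 0
    for arriving in arrivals:
        for _ in range(arriving):
            if len(bucket) < capacity:
                bucket.append(1)               # packet accepted into the bucket
            else:
                dropped += 1                   # bucket full: packet is discarded
        sent = min(out_rate, len(bucket))      # leak at a constant rate
        for _ in range(sent):
            bucket.popleft()
        sent_per_tick.append(sent)
    return sent_per_tick, dropped

# A bursty input of 0-5 packets per tick is smoothed to at most 2 packets per tick
print(leaky_bucket([5, 0, 4, 0, 0, 3], capacity=4, out_rate=2))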


TRANSPORT LAYER

TCP, which stands for Transmission Control Protocol, is a widely used transport layer protocol in computer networks. It provides reliable, connection-oriented communication between hosts or endpoints. Here are some key features and characteristics of TCP:

1. Connection-oriented: TCP
establishes a reliable and
ordered connection between the
sender and receiver before data
transfer begins. This is achieved
through a three-way handshake
process, where the sender and
receiver exchange control packets to synchronize their initial sequence numbers and establish a connection.
2. Reliable data delivery: TCP ensures reliable data delivery by implementing mechanisms such as acknowledgments, sequence numbers, and retransmissions. The sender keeps track of the packets it sends and waits for acknowledgments from the receiver. If an acknowledgment is not received within a certain timeout period, the sender retransmits the packet (a simplified sketch of this idea appears after this list).
3. Flow control: It uses a sliding window protocol to regulate the transmission rate based on the receiver's available
buffer space. The receiver sends its window size to the sender, who adjusts the transmission rate accordingly.
4. Congestion control: TCP incorporates congestion control mechanisms to prevent network congestion. It monitors
the network for signs of congestion, such as packet loss or increased latency. TCP dynamically adjusts its
transmission rate based on network conditions, reducing the sending rate to avoid overloading the network.
5. Packet ordering: TCP guarantees in-order delivery of packets to the receiver. It assigns a sequence number to each
packet and reorders out-of-order packets at the receiver before passing them to the upper layers of the network
stack.
6. Full-duplex communication: data can flow in both directions at the same time over the same connection.
7. Connection termination: TCP provides a reliable connection termination process to gracefully close the connection.
This involves a four-way handshake process, where both the sender and receiver exchange control packets to
confirm the termination of the connection.
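A minimal, deliberately simplified sketch of the acknowledgement/retransmission idea from point 2: a stop-and-wait sender that retransmits a segment whenever its acknowledgement does not arrive. This is not real TCP; the loss probability, retry limit, and channel model are illustrative assumptions.

import random

def reliable_send(segments, loss_prob=0.2, max_retries=10):
    """Send numbered segments; retransmit any segment whose ACK does not arrive."""
    def delivered():
        return random.random() > loss_prob            # stand-in for segment or ACK loss

    received = []
    for seq, segment in enumerate(segments):
        for attempt in range(max_retries):
            if delivered() and delivered():           # segment reaches receiver, ACK reaches sender
                received.append((seq, segment))
                break
            # otherwise: the retransmission timeout expires and the same segment is resent
        else:
            raise RuntimeError(f"segment {seq} not acknowledged after {max_retries} tries")
    return received

print(reliable_send(["hello", "world", "!"]))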
UDP, which stands for User Datagram Protocol, is a transport layer protocol in computer networks. It provides a
connectionless, unreliable communication service between hosts or endpoints. Unlike TCP, UDP does not
guarantee reliable delivery or ordering of data. Here are some key features and characteristics of UDP:

1. Connectionless communication: UDP operates in a connectionless manner, meaning it does not establish a
dedicated connection before data transfer (a datagram socket sketch follows this list).
2. Unreliable data delivery: Unlike TCP, UDP does not provide mechanisms for ensuring reliable data delivery. It does
not use acknowledgments, sequence numbers, or retransmissions. This means that UDP packets may be lost,
duplicated, or delivered out of order.
3. Low overhead: UDP has a minimal overhead compared to TCP. It has a smaller header size, as it does not include
mechanisms like sequence numbers or acknowledgments. This makes UDP more lightweight and efficient in terms
of network resources.
4. No congestion control: UDP does not include built-in congestion control mechanisms like TCP. It does not actively
monitor or react to network congestion. Instead, it allows applications to send data at their desired rate, which can
potentially lead to network congestion if not managed properly.
5. Fast and low latency: Due to its connectionless nature and minimal overhead, UDP offers lower latency and faster
transmission compared to TCP. This makes UDP suitable for real-time applications that require fast data transfer,
such as multimedia streaming, video conferencing, online gaming, and VoIP (Voice over IP).
6. Broadcasting and multicasting: UDP supports broadcasting and multicasting, allowing a single UDP packet to be
sent to multiple recipients simultaneously. This makes it useful for applications that require one-to-many or many-
to-many communication, such as multimedia distribution or network discovery protocols.
7. Simple and flexible: UDP is a simple and straightforward protocol, providing a basic mechanism for data transfer. It
allows applications to have more control over the communication process, as they can define their own error
detection, retransmission, or ordering mechanisms if needed.
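A minimal sketch of connectionless communication with UDP datagram sockets from Python's standard library; the loopback address and port 9999 are arbitrary choices for the example.

import socket

# Receiver: bind a datagram socket; no connection is ever established
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 9999))

# Sender: each datagram carries the destination address itself
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello over UDP", ("127.0.0.1", 9999))

data, addr = recv_sock.recvfrom(2048)    # no ordering or delivery guarantee in general
print(data, "from", addr)

send_sock.close()
recv_sock.close()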
The transport layer is responsible for providing end-to-end communication services for applications running on
different hosts or endpoints in a network. The two most commonly used transport layer protocols are the
Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). Here are the key elements of the
transport layer:

1. Port Numbers: The transport layer uses port numbers to identify specific services or processes running on a host.
Port numbers are used for both source and destination endpoints to multiplex and demultiplex data across
different applications. Well-known port numbers are standardized for specific services (e.g., HTTP uses port 80),
while ephemeral port numbers are dynamically assigned by the operating system for temporary connections.
2. Segmentation and Reassembly: The transport layer breaks application data into smaller units called segments (in
the case of TCP) or datagrams (in the case of UDP) to facilitate transmission over the network. The segments or
datagrams include header information such as source and destination port numbers, sequence numbers, and
checksums. At the receiving end, the transport layer reassembles the received segments or datagrams into
complete application data.
3. Connection-Oriented and Connectionless Communication: The transport layer can operate in either a connection-
oriented or connectionless mode. TCP provides connection-oriented communication, where a reliable and ordered
connection is established between the sender and receiver before data transfer. UDP, on the other hand, offers
connectionless communication, where data is sent without establishing a connection, and each datagram is
handled independently.
4. Flow Control: The transport layer implements flow control mechanisms to ensure that the sender does not
overwhelm the receiver with data. It regulates the rate at which data is transmitted based on the receiver's
capacity to process and receive data. This prevents buffer overflow and data loss.
5. Congestion Control: Congestion control mechanisms are implemented in transport layer protocols to manage
network congestion. The transport layer monitors the network conditions, such as packet loss and delays, and
adjusts the transmission rate accordingly. This helps to prevent congestion and maintain the stability and fairness
of the network.
6. Reliable Data Delivery: Transport layer protocols like TCP provide reliable data delivery. They ensure that data
sent by the sender is received correctly and in the same order by the receiver. This is achieved through
mechanisms such as acknowledgments, sequence numbers, retransmissions, and error detection. If any data is lost
or corrupted during transmission, TCP retransmits the data to ensure its successful delivery.
7. Error Detection and Correction: The transport layer performs error detection by adding checksums to the segments or datagrams. The receiver checks the checksum to identify and discard any corrupted or invalid data (a sketch of the Internet checksum appears after this list). However, error correction is not typically provided at the transport layer and is left to higher layers of the protocol stack.
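A minimal sketch of the checksum idea from point 7: the Internet checksum used by TCP and UDP is the 16-bit one's complement of the one's complement sum of the data (real TCP/UDP also cover a pseudo-header, which is omitted here; the payload string is just an example).

def internet_checksum(data: bytes) -> int:
    """Compute the 16-bit Internet checksum of a byte string."""
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        word = (data[i] << 8) | data[i + 1]          # combine two bytes into a 16-bit word
        total += word
        total = (total & 0xFFFF) + (total >> 16)     # fold the carry back in (one's complement sum)
    return ~total & 0xFFFF                           # one's complement of the sum

segment = b"transport layer payload."
cksum = internet_checksum(segment)
print(hex(cksum))
# Receiver check: the checksum computed over the data plus its own checksum comes out as 0
assert internet_checksum(segment + cksum.to_bytes(2, "big")) == 0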
PDH (PLESIOCHRONOUS DIGITAL HIERARCHY)

// In this, all the systems are almost synchronized but not completely synchronized (hence "plesiochronous").
// PDH is an older technology used in telecommunications networks. It was used to transmit digital voice and data signals before newer technologies arrived.
