

R-20 III-I IT-CN


Unit-4 Network Layer

1. The network layer is concerned with getting packets from the source all the way to
the destination. Getting to the destination may require travelling through intermediate
routers along the way.

NOTE: DLL’s goal is just moving frames from one end of a wire to the other.

Network Layer Design Issues:


2. Store-and-Forward Packet Switching: The context in which network layer
protocols operate is shown in the figure below.

The major components of the network are the ISP’s routers inside the oval, and the
customers’ equipment, outside the oval. Host H1 is directly connected to one of the
routers, A, perhaps as a home computer.

H2 is on a LAN, which might be an office Ethernet, with a router, F, operated by the customer. This router has a leased line to the ISP’s equipment. F is being shown outside the oval because it does not belong to the ISP.

This equipment is used as follows. A host with a packet to send transmits it to the nearest router, either on its own LAN or over a point-to-point link to the ISP. The packet is stored there until it has fully arrived and the link has finished its processing by verifying the checksum. Then it is forwarded to the next router along the path until it reaches the destination. This mechanism is store-and-forward packet switching.

3. Services provided to the Transport Layer: The network layer provides services to
the transport layer at their interface. The goals here can be classified as:

 The services should be independent of router technology



 The transport layer should be shielded from the number, type and topology of the
routers
 The network addresses made available for the transport layer should use a
uniform numbering plan

4. Implementation of Connectionless Service: Two different organizations of the network layer are possible, depending on the type of service offered.

If connectionless service is offered, packets are injected into the network individually and routed independently of each other. No advance setup is needed. The packets are called datagrams and the network is called a datagram network.

If connection-oriented service is used, a path from the source router to the destination router must be established before any data packets can be sent. This is called a VC (virtual circuit), and the network is called a virtual-circuit network.

5. Working of Datagram Networks:

Suppose that process P1 has a message for P2. It hands the message to the
transport layer, with instructions to deliver it to process P2 on host H2.

The transport layer code runs on H1, typically within the operating system. It
prepends (adds) a transport header to the front of the message and hands the result
to the network layer.

Assume that the message is four times larger than the maximum packet size and
hence has to be split into 4 packets 1, 2, 3, and 4. Each of them is sent to the router

A using a point-to-point protocol (PPP). Next, the Internet Service Provider (ISP)
takes over.

Every router has an internal table telling it where to send packets for each of the
possible destinations. Each table entry is a pair consisting of a destination and the
outgoing line to use for that destination. Only directly connected lines can be used.

Example: A has only two outgoing lines—to B and to C—so every incoming packet
must be sent to one of these routers, even if the ultimate destination is to some other
router. A’s initial routing table is shown in the figure.

At A, packets 1, 2, and 3 are stored briefly, having arrived on the incoming link and
had their checksums verified. Then each packet is forwarded according to A’s table,
onto the outgoing link to C within a new frame. Packet 1 is then forwarded to E and
then to F. When it gets to F, it is sent within a frame over the LAN to H2. Packets 2
and 3 follow the same route.

When packet 4 gets to A, it is sent to B, even though it was destined for F. The likely reason is heavy traffic along the A–C–E path, which caused A to update its table. The algorithm that manages the tables and makes the routing decisions is called the routing algorithm.
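The per-destination table lookup described above can be sketched in a few lines of Python. The entries below are illustrative, not the exact entries of the figure:

```python
# Sketch of a datagram router's table: destination -> outgoing line.
# Entries are illustrative; A's real table is in the figure above.

initial_table = {"B": "B", "C": "C", "D": "C", "E": "C", "F": "C"}

# After A learns of heavy traffic on the A-C-E path, it updates the
# entries so that later packets (such as packet 4) leave via B:
updated_table = dict(initial_table, D="B", E="B", F="B")

def forward(table, destination):
    """Look up the outgoing line for a packet's destination."""
    return table[destination]

print(forward(initial_table, "F"))  # C -- packets 1-3 leave toward C
print(forward(updated_table, "F"))  # B -- packet 4 leaves toward B
```

Note that the table is indexed by destination only; every packet to the same destination takes whatever route the current table gives.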

6. Implementation of Connection Oriented Service: For connection-oriented service, we need a virtual-circuit network. The idea behind virtual circuits is to avoid having to choose a new route for every packet sent.

When a connection is established, a route from the source to the destination is chosen as part of the connection and is stored in tables inside the routers. That route is used for all traffic. When the connection is released, the virtual circuit is terminated.

NOTE: Each packet carries an identifier telling which virtual circuit it belongs to.

Example: Consider the situation shown in the figure below. Here, host H1 has
established connection 1 with host H2. This connection becomes the first entry in
each of the routing tables.
The first line of A’s table says that if a packet bearing connection identifier 1 comes
in from H1, it is to be sent to router C and given connection identifier 1. Similarly, the
first entry at C routes the packet to E, with connection identifier 1.

7. If H3 also wants to establish a connection to H2, it chooses identifier 1 and tells the
network to establish the virtual circuit. We have a conflict here because although A
can easily distinguish connection 1 packets from H1 from connection 1 packets from
H3, C cannot do this.

For this reason, A assigns a different connection identifier to the outgoing traffic for
the second connection. This process is called label switching.
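The virtual-circuit tables and the label rewrite A performs for H3's connection can be sketched as follows (line and label names are illustrative):

```python
# Each router's VC table maps (incoming line, incoming label) to
# (outgoing line, outgoing label). Names are illustrative.

vc_table_A = {
    ("H1", 1): ("C", 1),  # H1's connection 1 to H2
    ("H3", 1): ("C", 2),  # H3 also chose label 1; A rewrites it to 2
}
vc_table_C = {
    ("A", 1): ("E", 1),   # C can now tell the two connections apart
    ("A", 2): ("E", 2),
}

def switch(table, in_line, in_label):
    """Label switching: look up and return (outgoing line, outgoing label)."""
    return table[(in_line, in_label)]

print(switch(vc_table_A, "H3", 1))  # ('C', 2)
```

Because A rewrites H3's label from 1 to 2, C's table never sees two different connections arriving with the same (line, label) pair.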

8. Comparison of Virtual Circuit and Datagram Networks:



9. Routing Algorithms: In most networks, packets will require multiple hops to make
the journey. The algorithms that choose the routes and the data structures that they
use are a major area of network layer.

The routing algorithm is that part of the network layer software responsible for deciding which output line an incoming packet should be sent on. If datagrams are used, this decision must be made anew for every packet (since the best route might have changed since the last one).

If the network uses virtual circuits, routing decisions are made only when a new virtual circuit is being set up. Thereafter, data packets just follow the established route.

NOTE: This is also known as Session Routing since the route remains in place for
the whole session (login session etc).

Routing -> deciding which routes are to be used, as computed by the routing algorithm and recorded in the routing table

Forwarding -> handling each arriving packet by looking up its outgoing line in the routing table and sending it on

10. Properties that are desirable in Routing Algorithms:


 Correctness & simplicity
 Robustness: During transmission period there will be hardware and software
failures of all kinds. The routing algorithm should be able to cope with changes in
the topology and traffic without requiring any of the jobs to be aborted.
 Stability: The algorithm should converge quickly to a fixed set of routes and keep them stable for as long as the topology and traffic do not change.
 Fairness and Efficiency: Suppose that there is enough traffic between A and A′,
between B and B′, and between C and C′ to saturate the horizontal links. To
maximize the total flow, the X to X′ traffic should be shut off. But X and X′ may
not see it in that way.

 Networks attempt to minimize the distance a packet must travel, or reduce the
number of hops a packet must make. Both of these options should reduce the
delay and also reduce the amount of bandwidth consumed per packet.

11. Types of Routing Algorithms:


 Non-adaptive Algorithms: They do not base their routing decisions on any
measurements or estimates of the current traffic or topology. The route is chosen
in advance and downloaded to the routers when the network is booted. This is
also known as static routing.
 Adaptive Algorithms: These change their routing decisions to reflect changes
in the topology and traffic. These dynamic routing algorithms differ in where
they get their information (e.g., from adjacent routers, or from all routers), when
they change the routes (e.g., when the topology changes), and what metric is
used for optimization (e.g., distance, number of hops, or estimated transit time).

12. The Optimality Principle: It states that if router J is on the optimal/best path from
router I to router K, then the optimal path from J to K also falls along the same route.

As a direct consequence of the optimality principle, we can see that the set of
optimal routes from all sources to a given destination form a tree rooted at the
destination. Such a tree is called a sink tree and is illustrated in Fig. 5-6(b), where
the distance metric is the number of hops.

Note that a sink tree is not necessarily unique; other trees with the same path
lengths may exist. If we allow all of the possible paths to be chosen, the tree
becomes a more general structure called a DAG (Directed Acyclic Graph). DAGs
have no loops.

Assumption for Sink Trees and DAGs: The paths do not interfere with each other and hence, a traffic jam on one path will not disturb another.

Advantages: Since a sink tree does not contain any loops, each packet will be
delivered within a finite number of hops.

Disadvantages: Links and routers can go down and come back during operation.
Different routers may have different ideas about the current topology.

Also, there exists the issue of whether each router has to individually acquire the
information on which router is the root of the sink tree.

13. Shortest Path Algorithm: There exists a simple technique for computing optimal paths given a complete picture of the network. These paths are the ones that a distributed routing algorithm should aim to find.

The idea is to build a graph of the network, with each node of the graph representing
a router and each edge of the graph representing a link. To choose a route between
a given pair of routers, the algorithm finds the shortest path between them on the
graph.

One way of measuring path length is the number of hops. Using this metric, the
paths ABC and ABE in Fig. 5-7 are equally long. Another metric is the geographic
distance in km, in which case ABC is clearly much longer than ABE.

Many other metrics exist besides those stated above. For example, each
edge could be labelled with the mean delay of a standard test packet, as measured
by hourly runs. With this graph labelling, the shortest path is the fastest path
rather than the path with the fewest edges or km.

The labels on the edges could be computed as a function of the distance, bandwidth, average traffic, communication cost, measured delay, and others.

By changing the weighting function, the algorithm would then compute the ‘‘shortest’’
path measured according to any one of the stated criteria or a combination of
criteria.

14. Dijkstra’s Shortest Path Algorithm: It finds the shortest paths between a source
and all the destinations in the network.

Each node is labelled with its distance from the source node along the best known
path (in parentheses). The distances must be non-negative. Initially, no paths are
known, so all nodes are labelled with infinity. As the algorithm proceeds and paths
are found, the labels may change, reflecting better paths.

Initially, all labels are tentative. When it is discovered that a label represents the
shortest possible path from the source to that node, it is made permanent and never
changed thereafter.

Ex: Look at the weighted, undirected graph of Fig. 5-7(a), where the weights
represent, for example, distance. We want to find the shortest path from A to D.

We start out by marking node A as permanent, indicated by a filled-in circle. Then we examine, in turn, each of the nodes adjacent to A (the working node), relabelling each one with the distance to A.

Whenever a node is relabelled, we also label it with the node from which the
probe was made so that we can reconstruct the final path later. If the network had
more than one shortest path from A to D, we need to remember all of the probe
nodes that could reach a node with the same distance.

Having examined each of the nodes adjacent to A, we examine all the tentatively
labelled nodes in the whole graph and make the one with the smallest label
permanent, as shown in Fig. 5-7(b). This one becomes the new working node.

We now start at B and examine all nodes adjacent to it. If the sum of the label on B
and the distance from B to the node being considered is less than the existing label
of that node, we have a shorter path, so the node is relabelled.

After all the nodes adjacent to the working node have been inspected and the
tentative labels changed, the entire graph is searched for the node with the smallest
value. This node is made permanent and becomes the working node for the next
round.

Figure 5-7 shows the first six steps of the algorithm.
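The labelling procedure above is exactly Dijkstra's algorithm; here is a compact Python sketch of it. The graph weights are illustrative, not Fig. 5-7's exact values:

```python
import heapq

def dijkstra(graph, source):
    """Shortest paths from source. graph: node -> {neighbour: weight},
    weights non-negative. Returns (distance, previous-node) dicts."""
    dist = {node: float("inf") for node in graph}   # all labels start at infinity
    prev = {node: None for node in graph}
    dist[source] = 0
    heap = [(0, source)]            # tentative labels (distance, node)
    permanent = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in permanent:
            continue                # stale heap entry; label already permanent
        permanent.add(node)         # smallest tentative label becomes permanent
        for neighbour, weight in graph[node].items():
            alt = d + weight
            if alt < dist[neighbour]:
                dist[neighbour] = alt       # relabel with the shorter path
                prev[neighbour] = node      # remember the probe node
                heapq.heappush(heap, (alt, neighbour))
    return dist, prev

# Illustrative undirected graph (weights are made up, not Fig. 5-7's):
g = {"A": {"B": 2, "G": 6}, "B": {"A": 2, "C": 7, "E": 2},
     "C": {"B": 7, "D": 3}, "D": {"C": 3, "E": 2, "H": 2},
     "E": {"B": 2, "D": 2, "F": 2}, "F": {"E": 2, "H": 2},
     "G": {"A": 6, "H": 4}, "H": {"D": 2, "F": 2, "G": 4}}
dist, prev = dijkstra(g, "A")
print(dist["D"])  # 6, via A -> B -> E -> D
```

The `prev` dictionary records the probe nodes, so the final path is reconstructed by walking backwards from the destination to the source.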

15. Flooding: When a routing algorithm is implemented, each router must make
decisions based on local knowledge, not the complete network. A simple local
technique is flooding, in which every incoming packet is sent out on every outgoing
line except the one it arrived on.

Flooding obviously generates vast numbers of duplicate packets, unless some control measures are applied. One such measure is to have a hop counter contained in the header of each packet that is decremented at each hop, with the packet being discarded when the counter reaches zero.

The hop counter should be initialized to the length of the path from source to
destination. If the sender doesn’t know the length of the path, the initial value can be
set as the full diameter of the network.

A better technique is to have routers keep track of which packets have been flooded, to avoid sending them out a second time. The source router places a sequence number in each packet it receives from its hosts.

Each router then keeps a list per source router telling which sequence numbers originating at that source have already been seen. If an incoming packet is on the list, it is not flooded again.

Advantages:

 Flooding ensures that a packet is delivered to every node in the network. For a
single destination, it is wasteful, but is effective for broadcasting.
 Flooding is robust. If some routers are missing and the expected path is not
active, flooding can discover another path to get a packet to its destination.
 Flooding needs no knowledge of the entire network – the routers only need to
know their neighbours.
 Flooding always finds the shortest path, because it tries every possible path in parallel; the first copy of a packet to arrive has necessarily taken the shortest one.
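The hop counter and duplicate-suppression measures described above can be sketched together (field names are illustrative):

```python
# Sketch: flooding with a hop counter and duplicate suppression by
# (source, sequence) pairs. Field names are illustrative.

seen = set()   # (source, seq) pairs this router has already flooded

def flood(packet, arrival_line, lines):
    """Forward on every line except the arrival line, unless the packet
    is a duplicate or its hop counter has run out. Returns lines used."""
    key = (packet["source"], packet["seq"])
    if key in seen or packet["hops"] == 0:
        return []                  # duplicate or expired: discard
    seen.add(key)
    packet["hops"] -= 1            # decremented at each hop
    return [l for l in lines if l != arrival_line]

pkt = {"source": "A", "seq": 7, "hops": 3}
print(flood(pkt, "line1", ["line1", "line2", "line3"]))  # ['line2', 'line3']
print(flood(pkt, "line2", ["line1", "line2", "line3"]))  # [] (duplicate)
```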

16. Distance Vector Routing: It is a dynamic routing algorithm that operates by having each router maintain a vector giving the best-known distance to each destination and which link to use to get there.

These tables are updated by exchanging information with the neighbours. Eventually, every router knows the best link to reach each destination. It is also known as the Bellman-Ford routing algorithm.

Each router maintains a routing table indexed by, and containing one entry for, each router in the network. The entry has two parts: the preferred outgoing line to use for that destination and an estimate of the distance to that destination. The distance might be measured as the number of hops or by another metric such as delay.

If the metric is hop count, the distance to a neighbour is just one hop. If the metric is propagation delay, the router can measure it directly with special ECHO packets that the receiver timestamps and sends back immediately.

17. Example: Assume that delay is used as a metric and that the router knows the delay
to each of its neighbours. Once every T msec, each router sends to each neighbour
a list of its estimated delays to each destination. It also receives a similar list from
each neighbour.

Imagine that one of these tables has just come in from neighbour X, with Xi being X’s estimate of how long it takes to get to router i. If the router knows that the delay to X is m msec, it also knows that it can reach router i through X in Xi + m msec.

By performing this calculation for each neighbour, a router can find out which
estimate seems the best and uses that estimate and the corresponding link in its
new routing table.

This updating process is illustrated in Fig. 5-9. Part (a) shows a network.

The first four columns of part (b) show the delay vectors received from the
neighbours of router J. A claims to have a 12-msec delay to B, a 25-msec delay to C,
a 40-msec delay to D, etc.

Suppose that J has measured or estimated its delay to its neighbours, A, I, H, and K,
as 8, 10, 12, and 6 msec, respectively. Consider how J computes its new route to
router G. It knows that it can get to A in 8 msec, and A can get to G in 18 msec.

So J knows it can count on a delay of 26 msec (8 + 18) to G if it forwards packets bound for G to A. Similarly, it computes the delay to G via I, H, and K as 41 (31 + 10), 18 (6 + 12), and 37 (31 + 6) msec, respectively.

The best of these values is 18, so it makes an entry in its routing table that the delay
to G is 18 msec and that the route to use is via H. The same calculation is performed

for all the other destinations, with the new routing table shown in the last column of
the figure.
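J's computation for destination G, using the delays quoted above, can be checked in a few lines:

```python
# Checking J's new routing-table entry for destination G.

link_delay = {"A": 8, "I": 10, "H": 12, "K": 6}    # J's delay to each neighbour
vector_to_G = {"A": 18, "I": 31, "H": 6, "K": 31}  # neighbours' estimates to G

# Delay via neighbour X = delay(J, X) + X's own estimate to G
via = {x: link_delay[x] + vector_to_G[x] for x in link_delay}
best = min(via, key=via.get)

print(via)               # {'A': 26, 'I': 41, 'H': 18, 'K': 37}
print(best, via[best])   # H 18 -> J's new entry: G is 18 msec via H
```

The same minimization is repeated for every destination to fill in the new routing table.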

18. Link State Routing: The idea behind link state routing can be stated as five parts:

 Discover the router’s neighbours and learn their network addresses.
 Set the distance/cost metric to each of the router’s neighbours.
 Construct a packet describing all the information that it has learnt.
 Send this packet to and receive packets from all other routers.
 Compute the shortest path to every other router.

Thus, the complete topology is distributed to every other router. Then Dijkstra’s
algorithm can be run at each router to find the shortest path to every other router.

19. Learning about the Neighbours: This can be done by sending a HELLO packet on
each point-to-point line. The router at the other end is expected to send back a reply
giving its unique name.

When two or more routers are connected by a broadcast link (e.g., a switch, ring, or
Ethernet), as shown in part (a) above, complexity increases. (three routers, A, C, and
F, are directly connected).

A better way to model the LAN is to consider an artificial node, N, to which A, C, and
F are connected. A router on the LAN is selected to play the role of N in the routing
protocol. Part (b) above depicts this idea.

20. Setting Link Costs: This can be done by making the cost inversely proportional to the bandwidth of the link. Another way is to base the cost on the measured delay of the link: send an ECHO packet, which the other side returns immediately, use half the round-trip time as the delay estimate, and assign lower costs to links with lower delay.

21. Building Link State Packets (LSPs): The next step is for each router to build a packet containing all the data that it has learnt. The packet consists of

 The identity of the sender
 A sequence number
 Age (the lifetime of the packet in seconds)
 A list of neighbours.
 The cost to each neighbour

Example:

In this example, costs are shown as labels on the links. The corresponding LSPs for
all six routers are shown in Fig. 5-12(b).

NOTE: The LSPs are built periodically or when a significant event occurs (a
neighbour going down or links changing etc.)

22. Distributing the LSPs: All of the routers must get all of the LSPs quickly and reliably.
Basic idea is to use flooding to distribute the LSPs to all routers. Each packet
contains a sequence number that is incremented for each new packet sent.

Routers keep track of all the (source router, sequence) pairs they see. When a new LSP comes in, it is checked against the list of packets that have been seen already. If it is new, it is forwarded on all lines except the one it arrived on. If it is a duplicate, it is discarded.

If a packet with a sequence number lower than the highest one seen so far ever
arrives, it is rejected as being obsolete as the router has more recent data.

Disadvantages:

 If the sequence numbers wrap around, old packets may look newer than recent ones, much like a race car that has been lapped by the leader. (Using a 32-bit sequence number means wrap-around would take many years.)
 If a router crashes, it will lose track of its sequence number.
 If a sequence number is corrupted, all the concerned packets with numbers below
it will be rejected as obsolete.

The solution to these problems is to include the age of each packet after the
sequence number and decrement it once per second. When the age hits zero, the
information from that router is discarded.

The Age field is also decremented by each router during the initial flooding process,
to make sure no packet can get lost and live for an indefinite period.
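The acceptance rules above (newer sequence numbers win, expired information is dropped) can be sketched as:

```python
# Sketch: deciding what to do with an incoming LSP, based on its
# (source, sequence, age) versus what this router has stored.

stored = {}   # source -> {"seq": highest sequence seen, "age": seconds left}

def accept_lsp(source, seq, age):
    """Return 'flood' for new information, 'discard' otherwise."""
    if age <= 0:
        return "discard"                    # information has expired
    entry = stored.get(source)
    if entry is None or seq > entry["seq"]:
        stored[source] = {"seq": seq, "age": age}
        return "flood"                      # new: send on all other lines
    return "discard"                        # duplicate or obsolete sequence

print(accept_lsp("A", 5, 60))  # flood
print(accept_lsp("A", 5, 60))  # discard (duplicate)
print(accept_lsp("A", 3, 60))  # discard (lower sequence: obsolete)
```

A real implementation would also decrement each stored age once per second and purge entries that reach zero.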

23. Computing the New Routes: Dijkstra’s algorithm can be run locally to construct the
shortest paths to all possible destinations.

Compared to distance vector routing, link state routing requires more memory and
computation. For a network with n routers, each of which has k neighbours, the
memory required to store the input data is proportional to kn, which is at least as
large as a routing table listing all the destinations.

Link state routing works well because it does not suffer from slow convergence problems. IS-IS and OSPF (Open Shortest Path First) are the link state protocols used by many ISPs.

24. Hierarchical Routing: As the network grows, the routing tables grow with it: more CPU time is needed to scan them and more bandwidth is needed to send status reports about them. At some point it is no longer feasible for every router to have an entry for every other router. Hence routing has to be done hierarchically.

In hierarchical routing, the routers are divided into regions. Each router knows all
the details about how to route packets to destinations within its own region but
knows nothing about the internal structure of other regions.

For huge networks, a two-level hierarchy is insufficient. It may be necessary to group the regions into clusters, the clusters into zones, the zones into groups, and so on.

Kamoun and Kleinrock (1979) discovered that the optimal number of levels for an N
router network is ln N, requiring a total of e ln N entries per router. They have also
shown that the increase in effective mean path length caused by hierarchical routing
is small and acceptable.
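The Kamoun–Kleinrock figures quoted above are easy to check; for example, for a 720-router network:

```python
import math

# Checking the Kamoun-Kleinrock result quoted above for N = 720 routers.
N = 720
flat_entries = N                              # non-hierarchical: one entry each
optimal_levels = math.log(N)                  # ln N, about 6.6 levels
hierarchical_entries = math.e * math.log(N)   # e * ln N entries per router

print(flat_entries)                 # 720
print(round(hierarchical_entries))  # 18
```

So hierarchy shrinks each router's table from 720 entries to roughly 18, at the cost of slightly longer mean paths.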

25. Congestion Control Algorithms: Too many packets present in (a part of) the
network causes packet delay and loss that degrades performance. This situation is
called congestion. The network and transport layers share the responsibility for
handling congestion.

The most effective way to control congestion is to reduce the load on the network.

Figure 5-21 depicts the onset of congestion. When the number of packets hosts
send into the network is within its capacity, the number delivered is proportional to
the number sent.
As the load approaches the carrying capacity, traffic occasionally fills up the buffers inside routers and some packets are lost. These lost packets consume some of the capacity, and the number of delivered packets falls below the ideal curve. The network is now congested: its performance drops, and retransmitted duplicates add yet more load.
NOTE: Goodput is the rate at which useful packets are delivered by the network.

Congestion cannot be avoided totally. Increasing memory (RAM) might help up to a point, but it has been found that if routers possess an infinite amount of memory, congestion gets worse, not better.

This is because by the time packets get to the front of the queue, they have already
timed out (repeatedly) and duplicates have been sent. This makes matters worse,
and leads to a collapse of the network.

26. Approaches to Congestion Control: The presence of congestion means that the load is temporarily greater than the resources in the network can handle. Two solutions come to mind: increase the resources or decrease the load.

As shown below, these solutions are applied on different time scales to prevent
congestion or react to it.

27. Network Provisioning: The basic way to avoid congestion is to build a network that is well matched to the traffic it carries. If there is a low-bandwidth link on the path, congestion is likely to occur. If serious congestion arises, resources can be added dynamically (using spare routers, backup lines, etc.).

Also, links and routers that are heavily utilized are upgraded regularly. This is called
provisioning.

28. Traffic-Aware Routing:


Traffic-aware routing aims to improve network performance by considering both the fixed characteristics of network links (such as bandwidth and propagation delay) and their variable characteristics (such as current load or queuing delay) when determining the best routes for traffic. This approach helps to prevent congestion and ensures that traffic is distributed more evenly across the network.

#### The Problem

Traditional routing schemes adapt to changes in network topology but do not account for changes in traffic load. This can lead to congestion, particularly on links that become hotspots due to heavy traffic. To address this, traffic-aware routing includes load as a factor in route calculations.

#### How Traffic-Aware Routing Works

1. **Link Weight Calculation**: In traffic-aware routing, the weight of a link is calculated based on:
- **Fixed metrics**: Link bandwidth and propagation delay.
- **Variable metrics**: Measured load or average queuing delay.

The combined weight is used to find the least-weight (optimal) path, which
tends to favor less loaded links.

#### Example Scenario

Consider a network divided into two parts, East and West, connected by two
links, CF and EI. If most traffic between East and West is routed through CF,
this link becomes heavily loaded, leading to long delays. Including queuing
delay in the link weight calculation can make EI appear more attractive for
routing.

- **Initial State**: Most East-West traffic uses link CF.
- **Load-Based Adjustment**: Due to high load on CF, EI becomes the preferred route in the next routing update.
- **New State**: Most traffic switches to EI, making EI heavily loaded.
- **Subsequent Update**: CF now appears to be the less loaded path again.

This can result in routing oscillations where traffic constantly shifts between
CF and EI, causing instability and erratic routing behavior.

#### Solution Techniques



1. **Multipath Routing**:
- **Definition**: Allows traffic to be distributed across multiple paths rather
than a single path.
- **Implementation**: Traffic between East and West can be spread over
both CF and EI.
- **Benefit**: Reduces the likelihood of any single link becoming a hotspot,
balancing the load more effectively.

2. **Gradual Traffic Shifting**:
- **Definition**: Adjusts routing decisions slowly rather than abruptly to allow the network to stabilize.
- **Implementation**: Instead of immediately switching all traffic from CF to
EI, the routing algorithm incrementally adjusts the proportion of traffic on each
link.
- **Benefit**: Prevents dramatic oscillations and allows the network to
converge to a stable state, ensuring smoother transitions and consistent
performance.
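Gradual traffic shifting can be sketched as moving only a small fraction of traffic per routing update toward the less loaded link (the step size and load values are illustrative):

```python
# Sketch: shift only a small fraction of traffic per routing update.
# `share` is the fraction of East-West traffic routed over CF; the rest
# uses EI. Step size and loads are illustrative.

def adjust_share(share, load_cf, load_ei, step=0.1):
    """Move a fraction `step` of the traffic toward the less loaded link."""
    if load_cf > load_ei:
        return max(0.0, share - step)   # push some traffic from CF to EI
    if load_ei > load_cf:
        return min(1.0, share + step)   # push some traffic from EI to CF
    return share                        # balanced: converged, stop moving

share = 0.9                             # initially most traffic uses CF
share = adjust_share(share, load_cf=0.8, load_ei=0.2)
print(round(share, 2))  # 0.8 -- a small shift toward EI, not a wholesale switch
```

Because only 10% of the traffic moves per update, the loads converge instead of flip-flopping between CF and EI.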

### Summary

- **Traffic-Aware Routing**: Incorporates both fixed and variable link characteristics to dynamically adjust routes based on current load.
- **Problem**: Rapid changes in routing due to load can cause oscillations
and instability.
- **Solutions**:
1. **Multipath Routing**: Distributes traffic across multiple links to avoid
overloading any single path.
2. **Gradual Traffic Shifting**: Adjusts routes slowly to allow the network to
stabilize, preventing erratic behavior.

By implementing these techniques, traffic-aware routing can more effectively manage network load, reduce congestion, and improve overall network performance.

Previous routing algorithms used fixed link weights based on distance, cost, time, or bandwidth. Changes in load were not considered or adapted to by those algorithms.

In traffic-aware routing, routes may be changed at different times to shift traffic from heavily used routes to lightly used ones, for example as usage patterns shift across time zones during the day.

In this procedure, we set the link weight to be a function of the link bandwidth
and propagation delay plus the measured load or average queuing delay.

Least-weight paths will then favour paths that are more lightly loaded, all else being
equal.
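As a sketch, the weight function just described might combine a fixed part with the measured queuing delay. The units and the inverse-bandwidth term below are illustrative assumptions, not a prescribed formula:

```python
# Sketch: a traffic-aware link weight = fixed part (propagation delay plus
# an inverse-bandwidth term) + variable part (measured queuing delay).
# Units and coefficients are illustrative.

def link_weight(propagation_delay_ms, bandwidth_mbps, queuing_delay_ms):
    """Lower weight = more attractive link for shortest-path routing."""
    fixed = propagation_delay_ms + 1000.0 / bandwidth_mbps
    return fixed + queuing_delay_ms

# Two links with identical fixed characteristics; the lightly loaded one wins:
print(link_weight(5, 100, queuing_delay_ms=40))  # 55.0 (heavily loaded)
print(link_weight(5, 100, queuing_delay_ms=2))   # 17.0 (lightly loaded)
```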

Consider the network of Fig. 5-23, which is divided into two parts, East and West,
connected by two links, CF and EI.

Suppose that most of the traffic between East and West is using link CF, and, as a
result, this link is heavily loaded with long delays. Including queueing delay in the
weight used for the shortest path calculation will make EI more attractive.

After the new routing tables have been installed, most of the East-West traffic will
now go over EI, loading this link. Consequently, in the next update, CF will appear to
be the shortest path. As a result, the routing tables may oscillate wildly, leading to
erratic routing and many potential problems.

NOTE: If load is ignored and only bandwidth and propagation delay are considered,
this problem does not occur.

Hence, the Internet routing protocols do not generally adjust their routes depending
on the load. Instead, adjustments are made outside the routing protocol by slowly
changing its inputs. This is called Traffic Engineering.

29. Admission Control: The idea is simple: do not set up a new virtual circuit unless the network can carry the added traffic without becoming congested. Thus, attempts to set up a virtual circuit may fail. This is better than admitting the traffic and letting the network become congested.

Ex: In the telephone system, when a switch gets overloaded it practices admission
control by not giving dial tones.

Traffic is often described in terms of its rate and shape. The problem of how to
describe it in a simple way is difficult, because traffic is typically bursty.

For example, traffic that varies while browsing the Web is more difficult to handle
than a streaming movie because the bursts of Web traffic are more likely to congest
routers in the network.

A commonly used descriptor that captures this effect is the leaky bucket or token
bucket. A leaky bucket has two parameters that bound the average rate and the
instantaneous burst size of traffic.

With all this data, the network can use traffic descriptions for admission control. The
task is then to estimate how many circuits will fit within the carrying capacity of the
network without congestion.

Ex: Suppose that virtual circuits that may blast traffic at rates up to 10 Mbps all pass through the same 100-Mbps physical link. How many circuits should be admitted? Clearly, 10 circuits can be admitted without risking congestion, but this is normally wasteful, since it is rare for all 10 to be transmitting at full rate at the same time.
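The leaky/token bucket descriptor mentioned above, with its two bounding parameters, can be sketched as follows (rates and sizes are illustrative):

```python
# Sketch: a token bucket with two parameters bounding the average rate
# and the instantaneous burst size, used to check traffic conformance.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate        # tokens added per second (average-rate bound)
        self.burst = burst      # bucket capacity (burst-size bound)
        self.tokens = burst
        self.last = 0.0

    def conforms(self, now, packet_size):
        """True if the packet fits the declared traffic description."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_size <= self.tokens:
            self.tokens -= packet_size
            return True
        return False

tb = TokenBucket(rate=1000, burst=500)   # 1000 bytes/s average, 500-byte bursts
print(tb.conforms(0.0, 400))  # True: within the burst allowance
print(tb.conforms(0.0, 400))  # False: burst exhausted, must wait for tokens
print(tb.conforms(1.0, 400))  # True: tokens refilled after 1 second
```

Given such descriptors, the network can estimate how many circuits fit within a link's capacity before admitting a new one.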

Admission control can also be combined with traffic-aware routing by considering routes around traffic hotspots as part of the setup procedure.

Suppose that a host attached to router A wants to set up a connection to a host attached to router B. Normally, this connection would pass through one of the
congested routers. To avoid this situation, we can redraw the network as shown in
Fig. 5-24(b), omitting the congested routers and all of their lines. The dashed line
shows a possible route for the virtual circuit that avoids the congested routers.

30. Traffic Throttling:


**Traffic throttling** is a network management technique used to control the amount of data that a network can handle during a given time period. By regulating the flow of traffic, it helps prevent network congestion, ensures fair distribution of bandwidth among users, and maintains network performance and stability.

#### Key Concepts

1. **Bandwidth Management**:
- Traffic throttling helps manage bandwidth by setting limits on data transfer
rates. This ensures that no single user or application consumes excessive
bandwidth, which could degrade the network performance for others.

2. **Congestion Control**:
- By controlling the rate at which data packets are sent, throttling can
prevent congestion, which occurs when the network is overwhelmed by too
much data traffic. Congestion can lead to packet loss, increased latency, and
reduced overall network performance.

3. **Fairness**:
- Throttling ensures a fair allocation of network resources. It prevents
scenarios where some users or applications monopolize the bandwidth,
thereby providing a more balanced and equitable network experience for all
users.

#### How Traffic Throttling Works

1. **Rate Limiting**:
- **Fixed Rate Limiting**: This sets a maximum data transfer rate for a
specific period (e.g., 1 Mbps per user). Once the limit is reached, data transfer
is slowed down or paused.
- **Dynamic Rate Limiting**: The limits can adjust based on network
conditions, user priority, or time of day.

2. **Shaping**:
- **Traffic Shaping**: Modifies the flow of data packets to conform to a
specified profile. This can smooth out bursts of traffic, ensuring a more
consistent flow and avoiding sudden spikes that can cause congestion.

3. **Priority Queuing**:
- Traffic is categorized and assigned different priority levels. Higher priority
traffic (e.g., VoIP, streaming) is allowed to pass through more quickly, while
lower priority traffic (e.g., file downloads) is throttled more aggressively.
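The priority-queuing idea above can be sketched with a strict-priority scheduler. This is only an illustration: the class names and numeric priority values below are invented for the example, not part of any standard.

```python
import heapq
from itertools import count

class PriorityScheduler:
    """Strict priority queuing: packets tagged with a priority class are
    dequeued in priority order (0 = highest, e.g. VoIP)."""

    def __init__(self):
        self._heap = []
        self._seq = count()  # tie-breaker keeps FIFO order within a class

    def enqueue(self, packet, priority):
        heapq.heappush(self._heap, (priority, next(self._seq), packet))

    def dequeue(self):
        if not self._heap:
            return None
        _, _, packet = heapq.heappop(self._heap)
        return packet

sched = PriorityScheduler()
sched.enqueue("file-chunk", priority=2)   # low priority (bulk download)
sched.enqueue("voip-frame", priority=0)   # high priority (VoIP)
sched.enqueue("web-page", priority=1)
print(sched.dequeue())  # voip-frame leaves first despite arriving later
```

A real router would also need to guard against starvation of low-priority classes, e.g. with weighted fair queuing instead of strict priority.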

#### Applications of Traffic Throttling

1. **Internet Service Providers (ISPs)**:


- ISPs use throttling to manage bandwidth on their networks, ensuring that
high-demand periods do not degrade service quality. They may throttle heavy
users to maintain overall network performance.

2. **Corporate Networks**:
- Enterprises implement throttling to ensure critical applications (e.g., video
conferencing, CRM systems) have sufficient bandwidth and to control non-
essential traffic (e.g., personal video streaming).

3. **Cloud Services**:
- Cloud providers use throttling to manage the resources used by different
tenants, ensuring fair access and preventing any single user from overloading
the system.

#### Benefits of Traffic Throttling

1. **Prevents Network Congestion**:
- By controlling the rate of data flow, throttling helps avoid congestion,
reducing packet loss and latency.

2. **Ensures Fair Bandwidth Allocation**:
- Provides a fair share of network resources to all users, preventing
scenarios where a few users consume the majority of the bandwidth.

3. **Enhances Network Stability and Performance**:
- Smoothing out traffic spikes and managing bandwidth usage contributes to
a more stable and predictable network performance.

4. **Improves Quality of Service (QoS)**:
- By prioritizing critical traffic, throttling helps maintain high QoS for
important applications, ensuring they perform reliably even during peak usage
times.

#### Potential Drawbacks

1. **Reduced Performance for Throttled Users**:
- Users or applications subjected to throttling may experience slower data
transfer rates, impacting performance.

2. **Complex Configuration**:
- Implementing effective throttling policies can be complex and requires
careful planning to balance performance and fairness.

3. **User Dissatisfaction**:
- If users are unaware of or do not understand the reasons for throttling, it
can lead to dissatisfaction and complaints.

### Conclusion

Traffic throttling is a crucial tool for network management, allowing administrators to
control bandwidth usage, prevent congestion, and ensure a fair distribution of
network resources. While it has significant benefits in maintaining network
performance and stability, it must be carefully implemented to avoid negative
impacts on user experience.

In many networks, senders adjust their transmissions to match the capacity of the
network. In this setting, the network operates just before the onset of congestion.
When congestion is imminent, the network must tell the senders to slow down. This
concept is known as congestion avoidance.

31. Choke Packets: The direct way to notify a sender of congestion is to tell it directly.
The router selects a congested packet and sends a choke packet back to the
source host, giving it the destination found in the packet.

The original packet may be tagged so that it will not generate any more choke
packets along the path and then forwarded. To avoid increasing load on the network
during a time of congestion, the router may only send choke packets at a low rate.

When the source host gets the choke packet, it reduces the traffic sent to the
destination.

32. Explicit Congestion Notification: Instead of generating additional packets to warn
of congestion, a router can tag any packet it forwards to signal that it is experiencing
congestion.

Through this process, the destination can note that there is congestion and inform
the sender when it sends a reply packet. The sender can then throttle its
transmissions as before. This design is called ECN (Explicit Congestion Notification)
and is used in the Internet.
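The ECN mechanism can be sketched as follows. The codepoint values (ECT(0), ECT(1), CE in the two low-order bits of the former TOS byte) are the standardized ones for IP; the marking function itself is only an illustration.

```python
# ECN uses the two low-order bits of the IPv4 TOS/DiffServ byte.
NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11

def mark_congestion(tos_byte: int) -> int:
    """A congested router sets the CE (Congestion Experienced) codepoint,
    but only if the sender declared ECN support (ECT(0) or ECT(1))."""
    ecn = tos_byte & 0b11
    if ecn in (ECT0, ECT1):
        return (tos_byte & ~0b11) | CE   # mark the packet instead of dropping it
    return tos_byte                      # non-ECN flow: the router must drop instead

tos = 0b10                 # DSCP 0, ECT(0): the sender supports ECN
print(bin(mark_congestion(tos)))  # -> 0b11 (CE set)
```

The destination echoes the CE mark back to the sender in its replies, and the sender then throttles its transmissions, as described above.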

33. Hop-by-Hop Backpressure:



At high speeds or over long distances, many new packets may be transmitted after
congestion has been signalled because of the delay before the signal takes effect.

An alternative approach is to have the choke packet take effect at every hop it
passes through, as shown in Fig. 5-26(b). Here, as soon as the choke packet
reaches F, F is required to reduce the flow to D. Doing so will require F to devote
more buffers to the connection, since the source is still sending away at full blast, but
it gives D immediate relief.

The choke packet then propagates to E, which is told to reduce its flow to F. This
action puts a greater demand on E's buffers but gives F immediate relief. Finally, the
choke packet reaches A and the flow genuinely slows down.

34. Load Shedding: When routers are being flooded by packets that they cannot
handle, they simply throw them away. The term 'load shedding' is borrowed from
electrical power generation, where certain areas are deliberately blacked out for
some time to save the total grid from collapsing.

It is important to decide which packets are to be dropped. The choice may depend
on the type of applications that use the network. For file transfer, an old packet is
worth more than a new one, so new packets are dropped; for real-time multimedia, a
new packet is worth more than an old one, so old packets are dropped.

A better approach is for applications to mark their packets to indicate how important
they are. Then, when packets have to be discarded, routers can first drop packets
from the least important class, then the next most important class, and so on.

35. Random Early Detection (RED): This algorithm tells routers when to start
discarding packets. Routers maintain an average queue length for each link; when
the average queue length on some link exceeds a threshold, the link is deemed
congested and a small fraction of arriving packets is dropped at random.
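A minimal sketch of RED follows. The threshold, weight, and drop-probability values are illustrative placeholders, not tuned parameters.

```python
import random

class RedQueue:
    """Random Early Detection sketch: keep an exponentially weighted
    moving average (EWMA) of the queue length and drop a random fraction
    of arriving packets once it crosses a threshold."""

    def __init__(self, min_th=5, max_th=15, weight=0.2, max_p=0.1):
        self.queue = []
        self.avg = 0.0
        self.min_th, self.max_th = min_th, max_th
        self.weight, self.max_p = weight, max_p

    def enqueue(self, packet):
        # EWMA of the instantaneous queue length
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg >= self.max_th:
            return False                       # heavy congestion: drop
        if self.avg >= self.min_th:
            # drop probability ramps up linearly between the two thresholds
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            if random.random() < p:
                return False                   # early random drop
        self.queue.append(packet)
        return True
```

Dropping a few packets early causes TCP senders to slow down before the queue overflows, which is exactly the congestion-avoidance signal described above.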

36. NOTE: Traffic shaping is a technique for regulating the average rate and
burstiness of data flow entering a network. The goal is to allow applications to
transmit the traffic that suits their needs, while having a simple way to describe the
traffic patterns to the network.

37. Through sliding window protocols we have already seen how to limit the amount
of data in transit at any given time. The leaky bucket and token bucket are traffic-
control algorithms that generalize this idea and provide an equivalent result.

38. Leaky Bucket Algorithm: Imagine a bucket with a small hole in the bottom, as
illustrated in Fig. 5-28(b). The capacity of the bucket is ‘B’.

 The outflow is at a constant rate, R, when there is any water in the bucket.
 The outflow is zero when the bucket is empty.
 Once the bucket is full, any additional water that enters the bucket overflows and
is lost.

This bucket can be used to monitor packets entering the network, as shown in Fig. 5-
28(a). Conceptually, each host is connected to the network by an interface
containing a leaky bucket. To send a packet into the network, it must be possible to
put more water into the bucket.

If a packet arrives when the bucket is full, either:

- The packet is queued until enough water leaks out (e.g., a host regulating its
own traffic), or
- The packet is discarded (e.g., a network interface that is policing the traffic).

This technique was proposed by Turner (1986) and is called the leaky bucket
algorithm.
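A minimal sketch of a leaky bucket used for policing, with illustrative units (bytes and seconds) and made-up parameter values:

```python
class LeakyBucket:
    """Leaky bucket policing sketch: water (bytes) drains at a constant
    rate R; a packet that would overflow the bucket of capacity B is
    discarded."""

    def __init__(self, rate_bps, capacity_bytes):
        self.rate = rate_bps            # R: constant leak rate, bytes/sec
        self.capacity = capacity_bytes  # B: bucket size, bytes
        self.water = 0.0
        self.last = 0.0                 # time of the last update, seconds

    def admit(self, packet_bytes, now):
        # Drain whatever leaked out since the last packet arrived
        self.water = max(0.0, self.water - (now - self.last) * self.rate)
        self.last = now
        if self.water + packet_bytes > self.capacity:
            return False                # bucket full: discard (policing)
        self.water += packet_bytes
        return True

bucket = LeakyBucket(rate_bps=1000, capacity_bytes=1500)
print(bucket.admit(1000, now=0.0))   # True: fits in the empty bucket
print(bucket.admit(1000, now=0.0))   # False: would overflow
print(bucket.admit(1000, now=1.0))   # True: 1000 bytes leaked out meanwhile
```

A host regulating its own traffic would queue the rejected packet instead of discarding it, as noted above.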

39. Token Bucket Algorithm: In another formulation, imagine the network interface as
a bucket that is being filled, as shown in Fig. 5-28(c). The tap is running at rate R
and the bucket has a capacity of B, as before.

To send a packet, we must be able to take water, or tokens, out of the bucket (rather
than putting water into the bucket).

The maximum number of tokens that can accumulate in the bucket is B, and if the
bucket is empty, we must wait until more tokens arrive to send another packet. This
algorithm is called the token bucket algorithm.
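A matching sketch of the token bucket; the rate and capacity values are again illustrative:

```python
class TokenBucket:
    """Token bucket sketch: tokens (bytes of sending credit) accumulate
    at rate R up to capacity B; sending a packet consumes tokens, so
    bursts of up to B bytes can pass unshaped."""

    def __init__(self, rate_bps, capacity_bytes):
        self.rate = rate_bps
        self.capacity = capacity_bytes
        self.tokens = capacity_bytes    # start with a full bucket
        self.last = 0.0

    def try_send(self, packet_bytes, now):
        # Add tokens accumulated since the last call, capped at B
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                 # enough credit: send immediately
        return False                    # must wait for more tokens

tb = TokenBucket(rate_bps=100, capacity_bytes=500)
print(tb.try_send(500, now=0.0))  # True: the initial burst of B bytes passes
print(tb.try_send(100, now=0.0))  # False: the bucket is now empty
print(tb.try_send(100, now=1.0))  # True: 100 tokens arrived in 1 second
```

Note the symmetry with the leaky bucket: there, a packet adds water and the bucket drains; here, time adds tokens and a packet drains them.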

Leaky and token buckets limit the long-term rate of a flow but allow short-term bursts
up to a maximum length to pass through the bucket without any changes or delays.

Large bursts will be regulated by a leaky bucket to reduce congestion in the network.

Ex: Imagine that a computer can produce data at up to 1000 Mbps (125 million
bytes/sec) and that the first link of the network also runs at this speed. The pattern of
traffic the host generates is shown in Fig. 5-29(a). The average rate over 1 second is
200 Mbps.

Suppose that the routers can accept data at the maximum speed only for short
intervals, until their buffers fill up. The buffer size is 9600 KB.

For long intervals, the routers work best at rates not exceeding 200 Mbps (say,
because this is the bandwidth given to the customer). The problem here is that if
traffic is sent in this pattern, some of it will be dropped in the network because it does
not fit into the buffers at the routers (overflow).

To avoid this packet loss, we can control the traffic at the host with a token bucket. If
we use a rate, R, of 200 Mbps and a capacity, B, of 9600 KB, the traffic will fall within
the data range that the network can handle. The output of this token bucket is shown
in Fig. 5-29(b).

Let burst length = S sec.,

maximum output rate = M bytes/sec,

token bucket capacity = B bytes, and

token arrival rate = R bytes/sec.

Hence, an output burst contains a maximum of B + RS bytes.


We also know that the number of bytes in a maximum-speed burst of length S
seconds is MS.

Therefore, B + RS = MS.

We can solve this equation to get S = B / (M − R).
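The numbers from the example can be plugged in directly. This assumes decimal units, i.e. 1 KB = 1000 bytes and 1 Mbps = 10^6 bits/sec; other conventions shift the answer slightly.

```python
# Worked numbers from the example above (decimal units assumed):
M = 125_000_000          # maximum output rate, bytes/sec (1000 Mbps)
R = 25_000_000           # token arrival rate, bytes/sec  (200 Mbps)
B = 9_600_000            # token bucket capacity, bytes   (9600 KB)

S = B / (M - R)          # burst length: solve B + RS = MS for S
print(f"S = {S * 1000:.0f} ms")   # -> S = 96 ms
```

So with these parameters, the host may burst at the full 1000 Mbps for roughly a tenth of a second before the bucket empties and the rate drops to R.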

40. Internetworking: Until now, we have assumed that there is a single network, with
each machine using the same protocol in each layer. In practice this assumption
rarely holds: many different networks exist, including LANs, MANs, and WANs.

The issues that arise when two or more networks are connected to form an internet
will be examined in this section.

41. How Networks Differ:

42. How Networks can be Connected:

This passage explains how different networks can be interconnected using a
common network layer. Here's a breakdown:
The Challenge:
 Connecting networks that use different technologies and addressing schemes
can be complex.

Two Approaches:
1. Translation Devices: Build devices that convert packets from one network
format to another. This can be cumbersome and inefficient.
2. Common Network Layer: Introduce a layer that sits on top of the individual
networks and uses a common addressing system. This is the preferred
approach.
Example with 802.11, MPLS, and Ethernet:
 Imagine a network with three different network types: Wi-Fi (802.11), a special
network called MPLS, and wired Ethernet.
 A device on the Wi-Fi network wants to send data to a device on the Ethernet
network.
Challenges and Solutions:
1. Different Network Types: Wi-Fi and MPLS have different service models
(connectionless vs. connection-oriented). This requires setting up a virtual
circuit for the data to travel through MPLS.
2. Addressing: Each network might have its own addressing format. The
common network layer uses a universal addressing system (e.g., IP
addresses) to identify any device across all three networks.
3. Packet Size: Wi-Fi frames are larger than Ethernet frames. The packet is split
into smaller fragments for transmission on the Ethernet network and then
reassembled at the destination.
The Protocol Journey:
1. Source creates a packet with the common network layer header (e.g., IP
header).
2. The packet is encapsulated in a Wi-Fi frame and sent to the first router.
3. The router extracts the packet from the Wi-Fi frame, checks the IP address,
and routes it through the MPLS network using a virtual circuit.
4. MPLS headers are added and removed as the packet travels.
5. At the edge of the MPLS network, the IP address is used again to determine
the next hop (destination).
6. If the packet is too large for Ethernet, it's fragmented into smaller pieces.
7. Each fragment is wrapped in an Ethernet frame and sent to the destination's
Ethernet address.

8. At the destination, the Ethernet headers are removed, and the fragments are
reassembled into the original packet.
Benefits of a Common Network Layer:
 Enables communication between different network types.
 Simplifies routing by using a universal addressing system.
 Provides a standardized way to handle packet size differences.
Overall, the concept of a common network layer allows for seamless
communication across diverse network infrastructures.
=============================================================

There are two basic choices for connecting different networks:


 We can build devices that convert packets from each kind of network into packets
for each other network.
 Add a layer of indirection and build a common layer on top of the different
networks.

Cerf and Kahn (1974) argued for a common layer to hide the differences of existing
networks. This approach has been successful, and the layer they proposed was
eventually separated into the TCP and IP protocols.

The source accepts data from the transport layer and generates a packet with the
common network layer header, which is IP in this example. The network header
contains the ultimate destination address, which is used to determine that the packet
should be sent via the first router.

So, the packet is encapsulated in an 802.11 frame whose destination is the first
router and transmitted. At the router, the packet is removed from the frame’s data
field and the 802.11 frame header is discarded.

The router now examines the IP address in the packet and looks up this address in
its routing table. Based on this address, it decides to send the packet to the second
router next. For this part of the path, a virtual circuit must be established to the
second router and the packet must be encapsulated with headers that travel this
circuit. At the far end, the header is discarded and the network address is again
consulted to find the next network layer hop. It is the destination.

Since the packet is too long, it is split into two portions. Each of these portions is put
into the data field of an Ethernet frame and sent to the Ethernet address of the
destination. At the destination, the contents are reassembled.

43. Tunnelling: Consider a special case where source and destination hosts are on the
same type of network, but there is a different type of network in between. The
solution to this problem is a technique called tunnelling.

To send an IP packet to the London office, the Paris office constructs an IPv6 packet
addressed to a host in London and sends it to the multiprotocol router that connects
the Paris IPv6 network to the IPv4 Internet.

When this router gets the IPv6 packet, it encapsulates the packet with an IPv4
header addressed to the IPv4 side of the multiprotocol router that connects to the
London IPv6 network.

That is, the router puts a (IPv6) packet inside a (IPv4) packet. When this packet
arrives, the London router recovers the original IPv6 packet and sends it forward.

44. Internetwork Routing: The networks may internally use different routing algorithms.
For example, one network may use link state routing and another distance vector
routing. Since link state algorithms need to know the topology but distance vector
algorithms do not, it is unclear how to find the shortest paths across the internet.

All of these considerations lead to a two-level routing algorithm. Within each network,
an intra-domain or interior gateway protocol is used for routing; across the networks
that make up the internet, an inter-domain or exterior gateway protocol is used.

45. Packet Fragmentation: In networking, each network segment has a maximum
packet size, influenced by factors like hardware limitations, operating system buffer
sizes, protocol constraints, standards, and considerations for error handling and
channel occupancy. When a packet exceeds the maximum size of a network it must
traverse, it needs to be fragmented into smaller packets.

#### Causes for Packet Size Limitations


1. **Hardware Constraints**: For example, Ethernet frames have a maximum
size limit.
2. **Operating System Limits**: Buffer sizes might be fixed, such as 512
bytes.
3. **Protocol Constraints**: Packet length fields in protocols limit size (e.g.,
16-bit length fields).
4. **Standards Compliance**: Adherence to specific network standards.
5. **Error Handling**: Smaller packets reduce the need for large
retransmissions.
6. **Channel Occupancy**: Preventing long occupancy of the network channel
by a single packet.

### Fragmentation Strategies

#### Transparent Fragmentation


- **Description**: Fragments are recombined at the exit gateway of the small-
packet network, making fragmentation transparent to subsequent networks.
- **Process**:
1. **Gateway Fragmentation**: The gateway fragments the large packet.
2. **Passage through Network**: Fragments traverse the small-packet
network.
3. **Reassembly at Exit Gateway**: Fragments are recombined at the exit
gateway before proceeding.
- **Advantages**:
- Simplifies handling for subsequent networks.
- **Disadvantages**:
- Exit gateway needs a mechanism to detect the completion of fragment
reception (e.g., count field or end-of-packet bit).
- All packets must exit via the same gateway, potentially reducing
performance.
- Overhead for reassembly and fragmentation can be significant when
traversing multiple small-packet networks.

#### Nontransparent Fragmentation


- **Description**: Fragments are not recombined at intermediate gateways but
only at the destination.
- **Process**:
1. **Initial Fragmentation**: The original packet is fragmented once.
2. **Intermediate Handling**: Each fragment is treated as an independent
packet.
3. **Final Reassembly**: Fragments are recombined only at the destination
host.
- **Advantages**:
- Allows for multiple exit gateways, improving performance.
- Reduces the reassembly and re-fragmentation overhead at intermediate
gateways.
- **Disadvantages**:
- Each fragment requires a header, increasing overall overhead.
- Every host must be capable of reassembling fragments.
- Increased total overhead due to header requirements for each fragment.

### Fragmentation Handling

#### Fragment Numbering


- **Method**: Use a hierarchical numbering system to identify fragments and
ensure correct reassembly, even if fragments are further fragmented.
- **Example**:
- Original packet: 0
- First-level fragments: 0.0, 0.1, 0.2, etc.
- Second-level fragments (if further fragmented): 0.0.0, 0.0.1, etc.
- **Handling Losses**: If a fragment is lost (e.g., 0.1), the source may timeout
and retransmit the original packet, with each fragment maintaining unique
numbering.

#### Header Information


- **Internet Header Fields**:
- **Original Packet Number**: Identifies the original packet.
- **Fragment Number**: Identifies the specific fragment.
- **End-of-Packet Bit**: Indicates the final fragment of the original packet.

### Practical Example


Consider a packet of 1024 bits, fragmented into four 256-bit fragments:
1. **Initial Fragmentation**: Packet 0 becomes fragments 0.0, 0.1, 0.2, 0.3.
2. **Intermediate Network Handling**: Each fragment might be further
fragmented if required by smaller packet size limits in the intermediate
network.
3. **Reassembly**:
- Destination receives fragments and reassembles them based on the
fragment numbers.

- If fragment 0.1 is lost, retransmission from the source is triggered.

### Visual Representation

#### Fragmentation Process

1. **Original Packet**:
```
[Original Packet]
```

2. **Fragmentation through Smaller Network**:


- Max packet size: 8 bytes (plus header)
```
[8-byte Fragment 1] [8-byte Fragment 2] ...
```

3. **Further Fragmentation**:
- Max packet size: 5 bytes (plus header)
```
[5-byte Fragment 1] [5-byte Fragment 2] ...
```

4. **Reassembly at Destination**:
- Recombined based on fragment numbers.

### Conclusion

Fragmentation is essential for handling packets that exceed the size limits of
intermediate networks. The choice between transparent and nontransparent
fragmentation depends on factors such as network performance, overhead
management, and the capabilities of network devices and protocols. Transparent
fragmentation simplifies handling but adds overhead at exit gateways, while
nontransparent fragmentation allows for flexible routing but increases total overhead
due to additional headers for each fragment.

Each network or link imposes some maximum size on its packets. These limits have
various causes, among them
1. Hardware (size of an Ethernet frame).
2. Operating system (all buffers are 512 bytes).
3. Protocols (the number of bits in the packet length field).
4. Compliance with some international standards.
5. Desire to reduce errors.
6. Desire to prevent one packet from occupying the channel too long.

Network designers are not free to choose any maximum packet size they wish.
Hosts usually prefer to transmit large packets because this amortizes per-packet
overheads (headers, processing) over more data. Problems appear when a large
packet wants to travel through a network whose maximum packet size is too small.

Solution 1: Send packets small enough to fit every network between the source and
destination. This is difficult because the host does not know in advance which path
will be used, and therefore does not know the smallest maximum size along it. This
packet size is called the Path MTU (Path Maximum Transmission Unit).

Solution 2: Allow routers to break up packets into fragments, sending each fragment
as a separate packet. The difficulty here is recombining the fragments into the
original packet.

46. Transparent Fragmentation: In this approach, when an oversized packet arrives at
G1, the router breaks it up into fragments. Each fragment is addressed to the same
exit router, G2, where the pieces are recombined.

In this way, passage through the small-packet network is made transparent.
Subsequent networks are not even aware that fragmentation has occurred.

Problems with transparent fragmentation:

- The exit router must know when it has received all the pieces, so either a count
field or an end-of-packet bit must be provided.
- Because all fragments must exit through the same router, the choice of routes is
constrained.

47. Non-Transparent Fragmentation: The strategy here is not to recombine fragments
at any intermediate routers. After fragmentation, each fragment is treated as though
it were an original packet. The routers pass the fragments, as shown in Fig. 5-42(b),
and reassembly is performed only at the destination host.

The advantage here is that it requires routers to do less work.

Example: A complete design requires that the fragments be numbered in such a way
that the original data stream can be reconstructed.

The design used by IP is to give every fragment a packet number (carried on all
packets), an absolute byte offset within the packet, and a flag indicating whether it is
the end of the packet.
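The design just described can be sketched as follows. This simplified version carries the offset directly in bytes, whereas real IP encodes the offset in 8-byte units; field names are illustrative.

```python
def fragment(payload: bytes, ident: int, mtu_payload: int):
    """IP-style fragmentation sketch: every fragment carries the original
    packet's Identification, a byte offset, and a More Fragments flag."""
    frags = []
    for off in range(0, len(payload), mtu_payload):
        chunk = payload[off:off + mtu_payload]
        more = off + mtu_payload < len(payload)   # last fragment has MF = False
        frags.append({"id": ident, "offset": off, "mf": more, "data": chunk})
    return frags

def reassemble(frags):
    frags = sorted(frags, key=lambda f: f["offset"])   # order by offset
    assert frags[-1]["mf"] is False                    # the final piece is present
    return b"".join(f["data"] for f in frags)

pieces = fragment(b"A" * 1000, ident=42, mtu_payload=400)
print([(f["offset"], f["mf"]) for f in pieces])  # [(0, True), (400, True), (800, False)]
print(reassemble(pieces) == b"A" * 1000)         # True
```

Because each fragment records its absolute offset, a fragment can itself be re-fragmented at a later router and reassembly at the destination still works.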

48. The Network Layer in the Internet: In the network layer, the Internet can be viewed
as a collection of networks or Autonomous Systems that are interconnected. There is
no real structure, but several major backbones exist.

These are constructed from high-bandwidth lines and fast routers. The biggest of
these backbones are called Tier 1 networks.

ISPs (Internet Service Providers) are attached to these backbones that provide
Internet access to homes, businesses, data centres, servers, and regional networks.

The data centres serve much of the content that is sent over the Internet. A sketch of
this quasi-hierarchical organization is given in Fig. 5-45.

The glue that holds the whole Internet together is the network layer protocol, IP
(Internet Protocol).

49. The IP Version 4 Protocol: An IPv4 datagram consists of a header part and a
payload part. The header has a 20-byte fixed part and a variable-length optional part.
The header format is shown in Fig. 5-46.

The bits are transmitted from left to right and top to bottom, with the high-order bit of
the Version field going first.

The Version field keeps track of which version of the protocol the datagram belongs
to. Version 4 dominates the Internet today. By including the version at the start of
each datagram, it becomes possible to have a transition between versions over a
long period of time.

IHL is provided to tell how long the header is, in 32-bit words. The minimum value is
5, and maximum value is 15, which limits the header to 60 bytes, and thus the
Options field to 40 bytes.

The Differentiated services field is intended to distinguish between different classes
of service. Various combinations of reliability and speed are possible. For digitized
voice, fast delivery is considered more appropriate than accurate delivery.

The Total length includes everything in the datagram—both header and data. The
maximum length is 65,535 bytes.

When a new fragment arrives, the destination host determines which packet it
belongs to using the Identification field. All the fragments of a packet contain the
same Identification value.

Next comes an unused bit; an April Fools' RFC once jokingly proposed using it to flag malicious traffic.

Then come two 1-bit fields related to fragmentation. DF stands for Don’t Fragment. It
is an order to the routers not to fragment the packet.

MF stands for More Fragments. All fragments except the last one have this bit set. It
is needed to know when all fragments of a datagram have arrived.

The Fragment offset tells where in the original datagram this fragment belongs; it is measured in units of 8 bytes.

The Time to live field is a counter used to limit packet lifetimes. It was originally
supposed to count time in seconds, giving a maximum lifetime of 255 sec. In
practice, it is decremented on each hop, and when it hits zero, the packet is
discarded.

When the network layer has assembled a complete packet, it needs to know what to
do with it. This is identified by The Protocol field. The numbering of protocols is
global across the entire Internet.

The Options field was designed

- To allow subsequent versions of the protocol to include information that was
not present in the original design,
- To let experimenters try out new ideas, and
- To avoid allocating header bits in every packet to information that is rarely
needed.

NOTE: The options are of variable length.
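The fixed 20-byte header described above can be unpacked field by field. The field layout is the standard IPv4 format; the sample header at the end is hand-built purely for illustration.

```python
import struct

def parse_ipv4_header(raw: bytes):
    """Parse the 20-byte fixed part of an IPv4 header (Fig. 5-46)."""
    (ver_ihl, dscp, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBHII", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl_bytes": (ver_ihl & 0x0F) * 4,        # IHL counts 32-bit words
        "total_length": total_len,
        "identification": ident,
        "df": bool(flags_frag & 0x4000),          # Don't Fragment
        "mf": bool(flags_frag & 0x2000),          # More Fragments
        "frag_offset": (flags_frag & 0x1FFF) * 8, # offset in 8-byte units
        "ttl": ttl,
        "protocol": proto,                        # e.g. 6 = TCP, 17 = UDP
    }

# A hand-built sample header: version 4, IHL 5, DF set, TTL 64, protocol 6 (TCP)
hdr = struct.pack("!BBHHHBBHII", 0x45, 0, 40, 1234, 0x4000,
                  64, 6, 0, 0x0A000001, 0x0A000002)
print(parse_ipv4_header(hdr))
```

The `!` in the format string selects network (big-endian) byte order, matching the left-to-right bit transmission order mentioned above.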



50. IP Addresses: A defining feature of IPv4 is its 32-bit addresses. Every host and
router on the Internet has an IP address that can be used in the Source and
Destination fields of IP packets. Note that an IP address really refers to a network
interface, not to a host.

Hosts are usually on one network and thus have one IP address. Routers have
multiple interfaces and thus multiple IP addresses.

51. Prefixes: IP addresses are hierarchical. Each 32-bit address is comprised of a
network portion in the top bits and a host portion in the bottom bits. The network
portion has the same value for all hosts on a single network. This block of addresses
is called a prefix.

IP addresses are written in dotted decimal notation. In this format, each of the 4
bytes is written in decimal, from 0 to 255. For example, the 32-bit hexadecimal
address 80D00297 is written as 128.208.2.151.
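The conversion in the example can be reproduced in a couple of lines:

```python
def dotted_decimal(addr32: int) -> str:
    """Convert a 32-bit address to dotted decimal, one byte at a time."""
    return ".".join(str((addr32 >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(dotted_decimal(0x80D00297))  # -> 128.208.2.151, matching the example
```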

The length of the prefix corresponds to a binary mask of 1s in the network portion.
This is called a subnet mask.

The advantage of prefixes is that routers can forward packets based on only the
network portion of the address. The host portion does not matter to the routers,
because packets for all hosts on the same network are sent in the same direction.

52. Subnets: To avoid conflicts of IP addresses, we should allow the block of addresses
to be split into several parts for internal use as multiple networks. The set of
addresses still acts like a single network to the outside world.

This is called subnetting and the networks (such as Ethernet LANs) that result from
dividing up a larger network are called subnets.

In this case, half of the block (a /17) is allocated to the Computer Science Dept, a
quarter is allocated to the Electrical Engineering Dept. (a /18), and one eighth (a /19)
to the Art Dept.

The remaining eighth is unallocated. A different way to see how the block was
divided is to look at the resulting prefixes when written in binary notation:
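The CS/EE/Art split can be reproduced with Python's ipaddress module. The /16 block used here borrows the 128.208 prefix from the earlier dotted-decimal example and is only a stand-in for the block in the figure.

```python
import ipaddress

# A hypothetical /16 block standing in for the one in the figure:
block = ipaddress.ip_network("128.208.0.0/16")

# Split off a /17, a /18, and a /19, as in the CS/EE/Art example
cs, rest = list(block.subnets(new_prefix=17))     # two /17 halves
ee, rest2 = list(rest.subnets(new_prefix=18))     # /18 quarter
art, unused = list(rest2.subnets(new_prefix=19))  # /19 eighth; the rest stays free

for name, net in [("CS", cs), ("EE", ee), ("Art", art), ("free", unused)]:
    print(f"{name:>4}: {net}  ({net.num_addresses} addresses)")
```

Printing the subnets in binary would show that the first 17, 18, and 19 bits, respectively, identify each department's prefix, which is what the routers match on.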

53. CIDR—Classless Inter-Domain Routing: CIDR is a networking technique that
overcomes the limitations of the traditional IP addressing system, which was based
on classes.

CIDR is based on the idea that IP addresses can be allocated and routed based on
their network prefix rather than their class, which was the traditional way of IP
address allocation.

CIDR addresses are represented using a slash notation, which specifies the number
of bits in the network prefix.

For example, an IP address of 192.168.1.0 with a prefix length of 24 would be
represented as 192.168.1.0/24. This notation indicates that the first 24 bits of the IP
address are the network prefix and the remaining 8 bits are the host identifier.
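The /24 example can be checked with Python's ipaddress module:

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")   # the example prefix above
print(net.netmask)        # 255.255.255.0: 24 one-bits cover the network portion
print(net.num_addresses)  # 256: the remaining 8 bits identify hosts

# Routers forward on the prefix alone: membership is a masked comparison
print(ipaddress.ip_address("192.168.1.77") in net)   # True
print(ipaddress.ip_address("192.168.2.77") in net)   # False
```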

54. NAT—Network Address Translation: The scarcity of IP addresses is caused by
the ever-increasing number of systems connecting to the network. One approach to
deal with this is to dynamically assign an IP address to a computer when it is using
the network, and to take the address back when it becomes inactive.

This strategy works well in dialup networking and mobiles that may be temporarily
powered off. It does not work well for PCs in businesses, which are expected to be
on continuously.

The problem of running out of IP addresses is happening right now. The solution is
to migrate the Internet to IPv6, which has 128-bit addresses. This transition will take
some years before the process is complete.

Meanwhile, a quick fix was needed. The quick fix that is widely used today came in
the form of NAT (Network Address Translation).

The basic idea behind NAT is for the ISP to assign each home or business a single
IP address for Internet traffic. Just before a packet exits the customer network and
goes to the ISP, an address translation from the unique internal IP address to the
shared public IP address takes place.

Within the customer premises, every machine has a unique address of the form
10.x.y.z. However, before a packet leaves the customer premises, it passes through
a NAT box that converts the internal IP source address, 10.0.0.1 in the figure, to the
customer’s true IP address, 198.60.42.12.

55. IP Version 6: IP is close to running out of addresses. The only long-term solution is
to move to larger addresses. IPv6 (IP version 6) is a replacement design that uses
128-bit addresses. A shortage of these addresses is not likely in the near future.

However, IPv6 has proved difficult to install. It is a different network layer protocol
that does not interwork with IPv4.

56. The Main IPv6 Header:


