Computer Networking Notes


Common Reference Material BSBC 603 [Computer Network] BCA 6th

Section A
ANALOG AND DIGITAL
Analog and Digital Data
Data can be analog or digital. The term analog data refers to information that is continuous; digital
data refers to information that has discrete states.
Digital data take on discrete values. For example, data are stored in computer memory in the form of
0s and 1s. They can be converted to a digital signal or modulated into an analog signal for
transmission across a medium.
Data can be analog or digital. Analog data are continuous and take continuous values. Digital data
have discrete states and take discrete values.

Analog and Digital Signals


Like the data they represent, signals can be either analog or digital. An analog signal has infinitely
many levels of intensity over a period of time. As the wave moves from value A to value B, it passes
through and includes an infinite number of values along its path. A digital signal, on the other hand,
can have only a limited number of defined values. Although each value can be any number, it is often
as simple as 1 and 0. The simplest way to show signals is by plotting them on a pair of perpendicular
axes. The vertical axis represents the value or strength of a signal. The horizontal axis represents
time.

Periodic and Non-periodic Signals


Both analog and digital signals can take one of two forms: periodic or non-periodic (sometimes
referred to as aperiodic, because the prefix a in Greek means "non"). A periodic signal completes a
pattern within a measurable time frame, called a period, and repeats that pattern over subsequent
identical periods. The completion of one full pattern is called a cycle. A non-periodic signal changes
without exhibiting a pattern or cycle that repeats over time.
Both analog and digital signals can be periodic or non-periodic. In data communications, we
commonly use periodic analog signals.

PERIODIC ANALOG SIGNALS


Periodic analog signals can be classified as simple or composite. A simple periodic analog
signal, a sine wave, cannot be decomposed into simpler signals. A composite periodic analog
signal is composed of multiple sine waves.
Sine Wave


The sine wave is the most fundamental form of a periodic analog signal. When we visualize it as
a simple oscillating curve, its change over the course of a cycle is smooth and consistent, a
continuous, rolling flow.

A sine wave can be represented by three parameters: the peak amplitude, the frequency,
and the phase. These three parameters fully describe a sine wave.

Peak Amplitude
The peak amplitude of a signal is the absolute value of its highest intensity, proportional
to the energy it carries. For electric signals, peak amplitude is normally measured
in volts.

Period and Frequency


Period refers to the amount of time, in seconds, a signal needs to complete 1 cycle.
Frequency refers to the number of periods in 1 s. Note that period and frequency are just
one characteristic defined in two ways. Period is the inverse of frequency, and frequency
is the inverse of period, as the following formulas show: f = 1/T and T = 1/f.
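As a quick illustration of these two formulas (not part of the original notes), the short Python sketch below converts between period and frequency; the 60 Hz power-line figure and the 1 ms period are just assumed example values.

    # Illustrative sketch: period and frequency are reciprocals (f = 1/T, T = 1/f).
    def frequency_from_period(period_s: float) -> float:
        return 1.0 / period_s          # hertz

    def period_from_frequency(freq_hz: float) -> float:
        return 1.0 / freq_hz           # seconds

    print(period_from_frequency(60))       # 60 Hz mains power has a period of ~0.0167 s
    print(frequency_from_period(0.001))    # a 1 ms period corresponds to 1000 Hz (1 kHz)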

Wavelength
Wavelength is another characteristic of a signal traveling through a transmission
medium. Wavelength binds the period or the frequency of a simple sine wave to the
propagation speed of the medium.
The wavelength is the distance between one peak of a wave and the next corresponding peak (or between any
two adjacent corresponding points); it equals the propagation speed divided by the frequency
(wavelength = propagation speed / frequency = propagation speed x period).
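The following small Python sketch works the wavelength formula with assumed example values (a propagation speed of 2 x 10^8 m/s, a common assumed figure for signals in cable, and a 1 MHz sine wave); the numbers are illustrative only.

    # Assumed example values: wavelength = propagation speed / frequency.
    propagation_speed = 2.0e8     # m/s, a typical assumed value for signals in cable
    frequency = 1.0e6             # Hz (a 1 MHz sine wave, arbitrary example)

    wavelength = propagation_speed / frequency   # same as propagation_speed * period
    print(wavelength)             # 200.0 -> each cycle occupies about 200 m of the medium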


It is obvious that the frequency domain is easy to plot and conveys the information that one can find
in a time domain plot. The advantage of the frequency domain is that we can immediately see the
values of the frequency and peak amplitude. A complete sine wave is represented by one spike. The
position of the spike shows the frequency; its height shows the peak amplitude.

Composite Signals
A composite signal is a signal that is composed of other signals. The individual element signals originate
separately and join to form the composite signal. You can extract individual signals from the composite
signal downstream and use the extracted signal as if it never was part of a composite signal.

The bandwidth of a composite signal is the difference between the highest and the lowest frequencies contained in that
signal.
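As a small worked example (the component frequencies are made up, not from the notes), the bandwidth of a composite signal follows directly from this definition:

    # Hypothetical component frequencies of a composite signal, in hertz.
    component_frequencies = [100, 300, 500, 700, 900]

    bandwidth = max(component_frequencies) - min(component_frequencies)
    print(bandwidth)              # 800 -> the bandwidth of this composite signal is 800 Hz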

Digital Signals
A digital signal is a sequence of voltage pulses that may be transmitted over a wire medium. For
example, a constant positive voltage level may represent binary 1 and a constant negative voltage level
may represent binary 0.
Generally, digital signals are used to represent digital data such as text.
Digital technology generates, stores and processes data in terms of two states (0s and 1s). Each of
these two states is referred to as a bit (8 bits = 1 byte).

Most digital signals are nonperiodic, and thus period and frequency are not appropriate
characteristics. Another term, bit rate (instead of frequency), is used to describe digital signals; the
bit rate is the number of bits sent in 1 second, expressed in bits per second (bps). In digital technology,
the bit interval (instead of period) is used to refer to the time required to send one single bit.

Bit Rate
Most digital signals are nonperiodic, and thus period and frequency are not appropriate
characteristics. Another term, bit rate (instead of frequency), is used to describe digital signals. The bit
rate is the number of bits sent in 1 s, expressed in bits per second (bps).
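A short worked example of the bit-rate definition (the byte count and time are arbitrary assumed values):

    # Assumed example: 125,000 bytes of data sent in one second.
    bits_sent = 125_000 * 8       # 8 bits per byte
    seconds = 1.0

    bit_rate = bits_sent / seconds
    print(bit_rate)               # 1,000,000.0 bps, i.e. 1 Mbps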

Bit Length
We discussed the concept of the wavelength for an analog signal: the distance one cycle occupies
on the transmission medium. We can define something similar for a digital signal: the bit length.
The bit length is the distance one bit occupies on the transmission medium.
Bit length = propagation speed x bit duration
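The sketch below applies this formula with assumed values for propagation speed and bit rate; neither figure comes from the notes.

    # Assumed values: bit length = propagation speed x bit duration,
    # where bit duration = 1 / bit rate.
    propagation_speed = 2.0e8     # m/s (assumed speed on a copper medium)
    bit_rate = 1.0e6              # bits per second (assumed 1 Mbps link)

    bit_duration = 1.0 / bit_rate              # seconds each bit lasts on the wire
    bit_length = propagation_speed * bit_duration
    print(bit_length)             # 200.0 -> each bit occupies about 200 m of the medium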

Modem (Modulator/Demodulator)

A modem (modulator/demodulator) is used to convert digital data to analog signals for propagation
over telephone lines; at the other end, the demodulator converts the analog
signal back to digital data.


Transmission of Digital Signals


The previous discussion asserts that a digital signal, periodic or nonperiodic, is a composite analog
signal with frequencies between zero and infinity. For the remainder of the discussion, let us consider
the case of a nonperiodic digital signal, similar to the ones we encounter in data communications.
The fundamental question is, How can we send a digital signal from point A to point B? We can
transmit a digital signal by using one of two different approaches: baseband transmission or
broadband transmission (using modulation).

Baseband Transmission
Baseband transmission means sending a digital signal over a channel without changing
the digital signal to an analog signal.
A digital signal is a composite analog signal with an infinite bandwidth.
Baseband transmission requires that we have a low-pass channel, a channel with a
bandwidth that starts from zero. This is the case if we have a dedicated medium with a
bandwidth constituting only one channel. For example, the entire bandwidth of a cable
connecting two computers is one single channel. As another example, we may connect
several computers to a bus, but not allow more than two stations to communicate at a
time. Again we have a low-pass channel, and we can use it for baseband communication.

Serial and Parallel Transmission


Digital data transmission can occur in two basic modes: serial or parallel. Data within a computer
system is transmitted via parallel mode on buses with the width of the parallel bus matched to
the word size of the computer system. Data between computer systems is usually transmitted in
bit serial mode . Consequently, it is necessary to make a parallel-to-serial conversion at a
computer interface when sending data from a computer system into a network and a serial-to-
parallel conversion at a computer interface when receiving information from a network. The type
of transmission mode used may also depend upon distance and required data rate.

Parallel Transmission
In parallel transmission, multiple bits (usually 8 bits or a byte/character) are sent simultaneously
on different channels (wires, frequency channels) within the same cable, or radio path, and
synchronized to a clock. Parallel devices have a wider data bus than serial devices and can
therefore transfer data in words of one or more bytes at a time. As a result, there is a speedup in
parallel transmission bit rate over serial transmission bit rate. However, this speedup is a tradeoff
versus cost since multiple wires cost more than a single wire, and as a parallel cable gets longer,
the synchronization timing between multiple channels becomes more sensitive to distance. The
timing for parallel transmission is provided by a constant clocking signal sent over a separate
wire within the parallel cable; thus parallel transmission is considered synchronous .

Serial Transmission
In serial transmission, bits are sent sequentially on the same channel (wire) which reduces costs
for wire but also slows the speed of transmission. Also, for serial transmission, some overhead
time is needed since bits must be assembled and sent as a unit and then disassembled at the
receiver.


Serial transmission can be either synchronous or asynchronous. In synchronous transmission,
groups of bits are combined into frames and frames are sent continuously with or without data to
be transmitted. In asynchronous transmission, groups of bits are sent as independent units with
start/stop flags and no data link synchronization, to allow for arbitrary size gaps between frames.
However, start/stop bits maintain physical bit level synchronization once detected.
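To make the idea of start/stop framing concrete, here is a minimal Python sketch (a toy illustration, not a real UART or line protocol) that frames a single byte with one start bit and one stop bit and then recovers it:

    # Toy framing of one character for asynchronous serial transmission:
    # a start bit (0), eight data bits (least significant bit first), a stop bit (1).
    def frame_byte(value: int) -> list:
        data_bits = [(value >> i) & 1 for i in range(8)]
        return [0] + data_bits + [1]

    def unframe_byte(frame: list) -> int:
        assert frame[0] == 0 and frame[-1] == 1, "bad start/stop bits"
        return sum(bit << i for i, bit in enumerate(frame[1:9]))

    frame = frame_byte(ord("A"))
    print(frame)                     # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
    print(chr(unframe_byte(frame)))  # A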

Synchronous transmission is a data transfer method characterized by a continuous
stream of data in the form of signals accompanied by regular timing signals,
generated by some external clocking mechanism, which ensure that both the sender and
receiver are synchronized with each other.

Data are sent as frames or packets in fixed intervals.

Asynchronous transmission is the transmission of data in which each character is a self-contained
unit with its own start and stop bits and an uneven interval between them. Asynchronous
transmission is also referred to as start/stop transmission.


Transmission Mode
A given transmission on a communications channel between two machines can occur in several
different ways. The transmission is characterized by:

 the direction of the exchanges


 the transmission mode: the number of bits sent simultaneously
 synchronization between the transmitter and receiver

Types of Transmission mode


 Simplex
 Half Duplex
 Full Duplex

Simplex
A simplex connection is a connection in which the data flows in only one
direction, from the transmitter to the receiver. This type of connection is useful if
the data do not need to flow in both directions (for example, from your computer to
the printer or from the mouse to your computer...).
Half Duplex
A half-duplex connection (sometimes called an alternating connection or semi-
duplex) is a connection in which the data flows in one direction or the other, but
not both at the same time. With this type of connection, each end of the connection
transmits in turn. This type of connection makes it possible to have bidirectional
communications using the full capacity of the line.
Full Duplex
A full-duplex connection is a connection in which the data flow in both directions
simultaneously. Each end of the line can thus transmit and receive at the same
time, which means that the bandwidth is divided in two for each direction of data
transmission if the same transmission medium is used for both directions of
transmission.

Line Configuration

Line configuration refers to the way two or more communication devices are attached to a link. Line
configuration is also referred to as connection. A link is the physical communication pathway
that transfers data from one device to another. For communication to occur, two devices must be
connected in some way to the same link at the same time.
There are two possible line configurations:
       1. Point-to-Point.
       2. Multipoint.

Point-to-Point

A point-to-point line configuration provides a dedicated link between two devices.
The entire capacity of the channel is reserved for transmission between those two devices. Most
point-to-point line configurations use an actual length of wire or cable to connect the two ends,
but other options, such as microwave or satellite links, are also possible. An infrared remote
control and a TV, for example, form a point-to-point connection.
Point-to-point is considered one of the easiest and most conventional
network topologies. It is also the simplest to establish and understand. To visualize it, one can
think of two phones connected end to end for two-way communication.


Multipoint Configuration
A multipoint configuration (also known as a multidrop line configuration) is one in which more than two
specific devices share a single link, so the capacity of the channel is shared.

With shared capacity, there can be two possibilities in a multipoint line configuration:

 Spatial sharing: if several devices can share the link simultaneously, it is called a spatially
shared line configuration.

 Temporal (time) sharing: if users must take turns using the link, it is called a
temporally shared or time-shared line configuration.


Types of Network
One way to categorize the different types of computer network designs is by their scope or scale.
For historical reasons, the networking industry refers to nearly every type of design as some kind
of area network. Common examples of area network types are:
 LAN - Local Area Network
 WLAN - Wireless Local Area Network
 WAN - Wide Area Network
 MAN - Metropolitan Area Network

Local Area Network


A LAN connects network devices over a relatively short distance. A networked office building,
school, or home usually contains a single LAN, though sometimes one building will contain a
few small LANs (perhaps one per room), and occasionally a LAN will span a group of nearby
buildings. In TCP/IP networking, a LAN is often but not always implemented as a single IP
subnet. In addition to operating in a limited space, LANs are also typically owned, controlled,
and managed by a single person or organization. They also tend to use certain connectivity
technologies, primarily Ethernet and Token Ring.

Wireless Local Area Network


A WLAN is a LAN whose devices connect over wireless links (typically Wi-Fi) instead of cables; its
scope is otherwise similar to that of a wired LAN, such as a home, an office, or a campus building.

Wide Area Network


As the term implies, a WAN spans a large physical distance. The Internet is the largest WAN,
spanning the Earth. A WAN is a geographically dispersed collection of LANs. A network device
called a router connects LANs to a WAN. In IP networking, the router maintains both a LAN
address and a WAN address.

A WAN differs from a LAN in several important ways. Most WANs (like the Internet) are not
owned by any one organization but rather exist under collective or distributed ownership and
management. WANs tend to use technology like ATM, Frame Relay and X.25 for connectivity
over the longer distances.


A WAN spans more than one geographical location, often connecting separated LANs. WANs are
slower than LANs and often require additional, costly hardware such as routers and dedicated
leased lines, as well as more complicated implementation procedures.

Metropolitan Area Network


A MAN is a network spanning a physical area larger than a LAN but smaller than a WAN, such as a city.
A MAN is typically owned and operated by a single entity such as a government body or large
corporation.

Network Topology
The term “topology” refers to the way in which the end points, or stations/computer systems,
attached to the network are interconnected. A topology is essentially a stable
geometric arrangement of computers in a network. When selecting a topology for a network,
pay attention to the following points:
  Application software and protocols.
  Types of data-communicating devices.
  Geographic scope of the network.
  Cost.
  Reliability.


Depending on the requirement there are different Topologies to construct a network.


(1) Mesh topology.
(2) Star topology.
(3) Tree (Hierarchical) topology.
(4) Bus topology.
(5) Ring topology.
(6) Cellular topology.
 Ring and mesh topologies are well suited to peer-to-peer transmission.
 Star and tree topologies are more convenient for client-server communication.
 Bus topology is equally convenient for either of them.

Mesh Topology
In a mesh topology, every device has a dedicated point-to-point link to every other device. The value of a
fully meshed network grows very rapidly with the number of subscribers, since communicating groups of
any size, from any two endpoints up to all of the endpoints, can form; this is approximated by Reed's Law.
The number of connections in a full mesh = n(n - 1) / 2
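A tiny Python helper (illustrative only) that evaluates the n(n - 1)/2 link-count formula for a few network sizes:

    # Number of dedicated links in a full mesh of n devices: n(n - 1) / 2.
    def full_mesh_links(n: int) -> int:
        return n * (n - 1) // 2

    for n in (2, 5, 10, 100):
        print(n, full_mesh_links(n))   # 2->1, 5->10, 10->45, 100->4950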

Star Topology
In a star topology, cables run from every computer to a centrally located device called a hub.
Star topology networks require a central point of connection between media segments; these
central points are referred to as hubs.
Hubs are special repeaters that overcome the electromechanical limitations of a medium. Each
computer on a star network communicates with a central hub that resends the message either to
all the computers (in a broadcast network) or only to the destination computer (in a switched
network).
Ethernet 10Base-T is a popular network based on the star topology.

Tree (Hierarchical) topology


It is similar to the star network, but the nodes are connected to the secondary hub that in turn is
connected to the central hub.


The central hub is the active hub.


The active hub contains a repeater, which regenerates the bit patterns it receives before sending
them out.
The secondary hub can be either active or passive.
A passive hub provides a simple physical connection between the attached devices.

Bus topology
A bus topology connects computers linearly along a single cable. A network
that uses a bus topology is referred to as a "bus network"; this was the original form of Ethernet
networks. Ethernet 10Base2 (also known as thinnet) is used for bus topology.

Ring topology
In a ring topology, each device has a dedicated point-to-point line configuration only with the two
devices on either side of it.
A signal is passed along the ring in one direction, from device to device, until it reaches its
destination.
Each device in the ring has a repeater. When a device receives a signal intended for another
node, it simply regenerates the bits and passes them along.
A ring network passes a token.
A token is a short message with the electronic address of the receiver.
Each network interface card is given a unique electronic address, which is used to identify the
computer on the network.


Multiplexing
Whenever the bandwidth of a medium linking two devices is greater than the bandwidth needs of the
devices, the link can be shared. Multiplexing is the set of techniques that allows the simultaneous
transmission of multiple signals across a single data link. As data and telecommunications use
increases, so does traffic. We can accommodate this increase by continuing to add individual links
each time a new channel is needed; or we can install higher-bandwidth links and use each to carry
multiple signals.

In a multiplexed system, n lines share the bandwidth of one link. The lines on one side direct their
transmission streams to a multiplexer (MUX), which combines them into a single stream (many-to-one).
At the receiving end, that stream is fed into a demultiplexer (DEMUX), which separates the stream back
into its component transmissions (one-to-many) and directs them to their corresponding lines. The word
link refers to the physical path; the word channel refers to the portion of a link that carries a transmission
between a given pair of lines. One link can have many (n) channels.

There are three basic multiplexing techniques: frequency-division multiplexing, wavelength-division
multiplexing, and time-division multiplexing. The first two are techniques designed for
analog signals, the third for digital signals.

Frequency-Division Multiplexing
Frequency-division multiplexing (FDM) is an analog technique that can be applied when the
bandwidth of a link (in hertz) is greater than the combined bandwidths of the signals to be
transmitted. In FDM, signals generated by each sending device modulate different carrier
frequencies. These modulated signals are then combined into a single composite signal that can
be transported by the link. Carrier frequencies are separated by sufficient bandwidth to
accommodate the modulated signals. These bandwidth ranges are the channels through which the
various signals travel. Channels can be separated by strips of unused bandwidth (guard bands) to
prevent signals from overlapping. In addition, carrier frequencies must not interfere with the
original data frequencies. For example, a transmission path can be divided into three parts,
each representing a channel that carries one transmission.

We consider FDM to be an analog multiplexing technique; however, this does not mean that
FDM cannot be used to combine sources sending digital signals. A digital signal can be
converted to an analog signal before FDM is used to multiplex them.
FDM is an analog multiplexing technique that combines analog signals.

Multiplexing Process
Conceptually, each source generates a signal of a
similar frequency range. Inside the multiplexer, these similar signals modulate different carrier
frequencies (f1, f2, f3). The resulting modulated signals are then combined into a single
composite signal that is sent out over a media link that has enough bandwidth to accommodate it.
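The sketch below is a simplified, assumed model of this allocation step: given a lower band edge, a channel width, and a guard-band width, it computes where each carrier frequency would sit. The numeric values (4 kHz voice channels, 1 kHz guard bands, a band starting at 20 kHz) are illustrative, not taken from the notes.

    # Simplified FDM channel plan: equal-width channels separated by guard bands.
    def fdm_carriers(lower_edge_hz, channel_bw_hz, guard_bw_hz, num_channels):
        carriers = []
        edge = lower_edge_hz
        for _ in range(num_channels):
            carriers.append(edge + channel_bw_hz / 2)   # carrier sits in the middle of its channel
            edge += channel_bw_hz + guard_bw_hz         # move past the channel and its guard band
        return carriers

    # Three 4 kHz voice channels with 1 kHz guard bands, band starting at 20 kHz (all assumed).
    print(fdm_carriers(20_000, 4_000, 1_000, 3))        # [22000.0, 27000.0, 32000.0]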

De-multiplexing Process
The demultiplexer uses a series of filters to decompose the multiplexed signal into its constituent
component signals. The individual signals are then passed to a demodulator that separates them from
their carriers and passes them to the output lines.


FDM can be implemented very easily. In many cases, such as radio and television
broadcasting, there is no need for a physical multiplexer or demultiplexer. As long as
the stations agree to send their broadcasts to the air using different carrier frequencies,
multiplexing is achieved. In other cases, such as the cellular telephone system, a base
station needs to assign a carrier frequency to the telephone user. There is not enough
bandwidth in a cell to permanently assign a bandwidth range to every telephone user.
When a user hangs up, her or his bandwidth is assigned to another caller.


Wavelength-Division Multiplexing
Wavelength-division multiplexing (WDM) is designed to use the high-data-rate capability of
fiber-optic cable. The optical fiber data rate is higher than the data rate of metallic transmission
cable. Using a fiber-optic cable for one single line wastes the available bandwidth. Multiplexing
allows us to combine several lines into one.
WDM is conceptually the same as FDM, except that the multiplexing and demultiplexing
involve optical signals transmitted through fiber-optic channels. The idea is the same: We are
combining different signals of different frequencies. The difference is that the frequencies are
very high. Conceptually, very narrow bands of light from different sources are combined to make a
wider band of light. At the receiver, the signals are separated by the demultiplexer.

Although WDM technology is very complex, the basic idea is very simple. We want to combine
multiple light sources into one single light at the multiplexer and do the reverse at the
demultiplexer. The combining and splitting of light sources are easily handled by a prism. Recall
from basic physics that a prism bends a beam of light based on the angle of incidence and the
frequency. Using this technique, a multiplexer can be made to combine several input beams of
light, each containing a narrow band of frequencies, into one output beam of a wider band of
frequencies. A demultiplexer can also be made to reverse the process.

Synchronous Time-Division Multiplexing


Time-division multiplexing (TDM) is a digital process that allows several connections to share the high
bandwidth of a link. Instead of sharing a portion of the bandwidth as in FDM, time is shared. Each
connection occupies a portion of time in the link. Conceptually, the same link is used as in FDM; here,
however, the link is sectioned by time rather than by frequency, so portions of signals 1, 2, 3, and 4 occupy
the link sequentially.

We also need to remember that TDM is, in principle, a digital multiplexing technique. Digital data from
different sources are combined into one time-shared link. However, this does not mean that the sources
cannot produce analog data; analog data can be sampled, changed to digital data, and then multiplexed by
using TDM.

Time Slots and Frames


In synchronous TDM, the data flow of each input connection is divided into units, where each input
occupies one input time slot. A unit can be 1 bit, one character, or one block of data. Each input unit
becomes one output unit and occupies one output time slot. However, the duration of an output time slot
is n times shorter than the duration of an input time slot. If an input time slot is T s, the output time slot is
T/n s, where n is the number of connections. In other words, a unit in the output connection has a shorter
duration; it travels faster. As an example, consider a synchronous TDM system where n is 3.

In synchronous TDM, a round of data units from each input connection is collected into a frame.
If we have n connections, a frame is divided into n time slots and one slot is allocated for each unit,
one for each input line. If the duration of the input unit is T, the duration of each slot is T/n and the
duration of each frame is T (unless a frame carries some extra information).
The data rate of the output link must be n times the data rate of a connection to guarantee the
flow of data. With n = 3, the data rate of the link is 3 times the data rate of a connection;
likewise, the duration of a unit on a connection is 3 times that of a time slot (the duration of a unit
on the link). In other words, each unit is 3 times longer in duration before multiplexing than after.
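The arithmetic above can be checked with a short sketch; the connection rate and unit size below are assumed example values, not figures from the notes.

    # Assumed figures: three 1 kbps connections multiplexed with 1-bit units.
    n = 3                          # number of connections
    input_rate = 1_000             # bits per second on each connection (assumed)
    unit_bits = 1                  # bits per time slot (assumed)

    output_rate = n * input_rate            # link must run n times faster: 3000 bps
    input_slot = unit_bits / input_rate     # 0.001 s: duration of a unit on a connection
    output_slot = input_slot / n            # ~0.000333 s: duration of a slot on the link
    frame_duration = n * output_slot        # 0.001 s: one frame per round of input units

    print(output_rate, input_slot, output_slot, frame_duration)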

Interleaving
TDM can be visualized as two fast-rotating switches, one on the multiplexing side and the other
on the demultiplexing side. The switches are synchronized and rotate at the same speed, but in
opposite directions. On the multiplexing side, as the switch opens in front of a connection, that
connection has the opportunity to send a unit onto the path. This process is called interleaving. On
the demultiplexing side, as the switch opens in front of a connection, that connection has the
opportunity to receive a unit from the path.
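A minimal Python sketch of round-robin interleaving and de-interleaving (the stream contents are arbitrary labels; real TDM hardware obviously does not operate on Python lists):

    # Round-robin interleaving: take one unit from each input stream per frame,
    # then undo it at the receiving side.
    def multiplex(streams):
        frames = zip(*streams)                       # one unit from every stream per frame
        return [unit for frame in frames for unit in frame]

    def demultiplex(link_units, n):
        return [link_units[i::n] for i in range(n)]  # every n-th unit belongs to stream i

    streams = [["A1", "A2"], ["B1", "B2"], ["C1", "C2"]]
    link = multiplex(streams)
    print(link)                   # ['A1', 'B1', 'C1', 'A2', 'B2', 'C2']
    print(demultiplex(link, 3))   # recovers the three original streams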

Data Rate Management


One problem with TDM is how to handle a disparity in the input data rates. In all our
discussion so far, we assumed that the data rates of all input lines were the same. However,
if data rates are not the same, three strategies, or a combination of them, can be used. We call
these three strategies multilevel multiplexing, multiple-slot allocation, and pulse stuffing.

Communication Channel
A path through which information is transmitted from one place to another is called a
communication channel. It is also referred to as a communication medium or link. Twisted pair
wire, coaxial cable, fiber optic cable, microwave, satellite, etc. are examples of communication
channels.

In a communication channel, data is transmitted in the form of signals. The transmission capacity of a
channel is measured as bandwidth: the bandwidth measures the amount of information that can be
transmitted through the medium within a given period of time. For analog signals, bandwidth is
expressed in hertz (Hz), the range of frequencies the channel can carry. For digital transmission,
capacity is expressed in bits per second (bps). Different transmission media have different bandwidths;
the higher the bandwidth of the transmission medium, the more information can be transmitted.

Types of Communication Channel:

The communication channel or media is divided into two types.

1. Guided Media.
2. Unguided Media.

1. Guided Media:

In guided communication media, communication devices are directly linked with each other via
cables or other physical media for transmission of data. The data signals are bound to a cabling
medium; therefore, guided media are also called bounded media. Guided media are usually
used in LANs. Examples of guided or bounded media are:

a) Twisted pair wire.


b) Coaxial cable.
c) Fiber optic cable.

Twisted Pair Cable: Twisted pair cable is one of the most commonly used communication
media. It is used in local area network (LAN) for data communication between different
computers. It is also used in telephone lines to carry voice and data signals.


A twisted pair cable consists of a pair of thin copper wires. These wires are covered by
insulating material (such as plastic). The pair of wires is twisted together to form a cable. The
wires are twisted around each other to minimize (or reduce) interference from other twisted pairs
in the cable.

Copper wires are the most common wires used for transmitting signals because of their good
performance at low cost. They are most commonly used in telephone lines. However, if two or
more wires lie together, they can interfere with each other's signals. To reduce this
electromagnetic interference, pairs of copper wires are twisted together in a helical shape, like a
DNA molecule. Such twisted copper wires are called a twisted pair. To reduce interference
between nearby twisted pairs, the twist rates are different for each pair.

Advantages of twisted pair cable

Twisted pair cables are the oldest and most popular cables all over the world. This is due to the
many advantages that they offer −

 Trained personnel easily available due to shallow learning curve


 Can be used for both analog and digital transmissions
 Least expensive for short distances
 Entire network does not go down if a part of network is damaged

Disadvantages of twisted pair cable

With its many advantages, twisted pair cables offer some disadvantages too −

 Signal cannot travel long distances without repeaters


 High error rate for distances greater than 100m
 Very thin and hence breaks easily
 Not suitable for broadband connections

The data transmission speed through twisted pair cable is about 9600 bits per second over a
distance of 100 meters. This transmission speed is lower than that of coaxial cable or optical
fiber.

The twisted pair cable has long been the standard communication channel for voice and data
communication. But now its use is declining because more reliable communication media
are available today, such as coaxial cable, fiber optic cable, microwave and satellite.

Coaxial Cable: Coaxial cable is also referred to as Coax. It carries signals of higher frequency
ranges than twisted-pair cable. Coaxial cable consists of a single solid copper wire, which is
called the inner conductor.


Coaxial cables are copper cables with better shielding than twisted pair cables, so that
transmitted signals may travel longer distances at higher speeds. A coaxial cable consists of these
layers, starting from the innermost −

 Stiff copper wire as core


 Insulating material surrounding the core
 Closely woven braided mesh of conducting material surrounding the insulator
 Protective plastic sheath encasing the wire

Coaxial cables are widely used for cable TV connections and LANs.

Coaxial cable can be used for telephone lines for voice and data transmission with very high
frequency. The bandwidth of coaxial cable is 80 times greater than that of twisted pair media.
Coaxial cable is also widely used in local area network (LAN). It is more expensive than twisted-
pair wire.

Advantages of Coaxial Cables

These are the advantages of coaxial cables −

 Excellent noise immunity


 Signals can travel longer distances at higher speeds, e.g. 1 to 2 Gbps for 1 Km cable
 Can be used for both analog and digital signals
 Inexpensive as compared to fibre optic cables
 Easy to install and maintain

Disadvantages of Coaxial Cables

These are some of the disadvantages of coaxial cables −

 Expensive as compared to twisted pair cables


 Not compatible with twisted pair cables


Fiber-Optic Cable: In twisted-pair cable and coaxial cable, data is transmitted in the form of
electrical signals. A fiber optic cable uses light to transmit data, so the data transmission speed
is very high, up to billions of bits per second. Today, most telephone companies and cable TV operators
are using fiber optic cables in their networks.

Thin glass or plastic threads used to transmit data using light waves are called optical fibre.
Light Emitting Diodes (LEDs) or Laser Diodes (LDs) emit light waves at the source, which is
read by a detector at the other end. Optical fibre cable has a bundle of such threads or fibres
bundled together in a protective covering. Each fibre is made up of these three layers, starting
with the innermost layer −

 Core made of high quality silica glass or plastic


 Cladding made of high quality silica glass or plastic, with a lower refractive index than
the core
 Protective outer covering called buffer

Note that both core and cladding are made of similar material. However, as refractive index of
the cladding is lower, any stray light wave trying to escape the core is reflected back due to total
internal reflection.

Optical fiber is rapidly replacing copper wires in telephone lines, internet communication and
even cable TV connections because transmitted data can travel very long distances without
weakening. Single node fiber optic cable can have maximum segment length of 2 kms and
bandwidth of up to 100 Mbps. Multi-node fiber optic cable can have maximum segment length
of 100 kms and bandwidth up to 2 Gbps.

Advantages of Optical Fiber

Optical fiber is fast replacing copper wires because of these advantages that it offers −

 High bandwidth
 Immune to electromagnetic interference


 Suitable for industrial and noisy areas


 Signals carrying data can travel long distances without weakening

Disadvantages of Optical Fiber

Despite long segment lengths and high bandwidth, using optical fibre may not be a viable option
for every one due to these disadvantages −

 Optical fibre cables are expensive


 Sophisticated technology required for manufacturing, installing and maintaining optical fibre
cables
 Light waves are unidirectional, so two frequencies are required for full duplex transmission

2. Unguided Media:

In unguided communication media, data is communicated between communication devices in the
form of waves. Unguided media provide means to transmit data signals but do not guide them
along a specific path. The data signals are not bound to a cabling medium; therefore, unguided
media are also called unbounded media.

This transmission medium is used when it is impossible to install the cables. The data can be
transmitted all over the world through this medium. The examples of unguided or unbounded
media are:

a) Microwave
b) Satellite
c) Radio Broadcast
d) Cellular Radio

Microwaves: In microwave transmission, data is transmitted through air or space instead of
through cables or wires. Microwaves are high-frequency radio waves. These waves can only
travel in straight lines.

The data is transmitted and received through a microwave station. A microwave station is also
called a relay station or booster. A microwave station contains an antenna, transmitter, receiver,
and other equipment required for microwave transmission. Microwave antennas are
placed on high towers or buildings, and these are placed within 20 to 30 miles of each other.
There may be many microwave stations between the sender and receiver. Data is transmitted
from one microwave station to another; each station receives signals from the previous
station and transmits them to the next. In this way, data is transmitted over large
distances.

The data transmission speed of microwave transmission is up to 150 Mbps. Microwave
transmission is used in environments where installing physical transmission media is impossible
and where line-of-sight transmission is available, such as in wide-open areas. Today, it is used
by telephone companies, cable television providers, universities, etc.

Satellite Communication: A communication satellite is a space station. It receives microwave
signals (or messages) from earth stations. A satellite transmission station that can send and receive
messages is known as an earth station. The earth-based stations often are microwave stations; other
devices, such as PDAs and GPS receivers, also function as earth-based stations.

Satellites orbit approximately 22,300 miles above the earth in precise locations. A
communication satellite consists of a solar-powered transceiver that receives and sends signals.
The signals are transmitted from one earth station to the satellite. The satellite receives and
amplifies the signals and sends them to another earth station. This entire process takes only a
fraction of a second. In this way, data or messages are transferred from one location to another.
Transmitting a signal from a ground or earth station to a satellite in space is called uplinking, and
the reverse is called downlinking. The data transmission speed of a communication satellite is
very high, up to 1 Gbps.

Different communication satellites are used to carry different kinds of information such as
telephone calls, television transmissions, military communications, and weather data; even radio
stations use them for broadcasting. Global positioning systems and the Internet also use
communication satellites.
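Using the altitude quoted above and the speed of light, a rough back-of-the-envelope calculation (assumed figures, not from the notes) shows that the up-link plus down-link propagation alone takes roughly a quarter of a second:

    # Back-of-the-envelope propagation delay for a satellite 22,300 miles up.
    altitude_m = 22_300 * 1609.34          # miles converted to metres
    speed_of_light = 3.0e8                 # m/s

    one_way = altitude_m / speed_of_light  # ~0.12 s from earth station to satellite
    round_trip = 2 * one_way               # ~0.24 s for up-link plus down-link
    print(one_way, round_trip)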

Radio Broadcast: This is a wireless transmission medium that is used to communicate information
through radio signals in air over long distances, such as between cities and countries.

In this medium, a transmitter is required to send messages (signals) and a receiver is required to
receive them. To receive the radio signal, the receiver has an antenna that is located in the range
of the signal. Some networks use a special device called a transceiver to send and to receive
messages in the form of radio signals. The data transmission speed of radio broadcast is up to 54
Mbps.

Transmission of data using radio frequencies is called radio-wave transmission. We all are
familiar with radio channels that broadcast entertainment programs. Radio stations transmit radio
waves using transmitters, which are received by the receiver installed in our devices.

Both transmitters and receivers use antennas to radiate or capture radio signals. These radio
frequencies can also be used for direct voice communication within the allocated range. This
range is usually 10 miles.

Advantages of Radio Wave

These are some of the advantages of radio wave transmissions −

 Inexpensive mode of information exchange


 No land needs to be acquired for laying cables
 Installation and maintenance of devices is cheap


Disadvantages of Radio Wave

These are some of the disadvantages of radio wave transmissions −

 Insecure communication medium


 Prone to weather changes like rain, thunderstorms, etc.

Cellular Radio: Cellular radio is a form of radio broadcast that is used for mobile
communications such as cellular telephones and wireless modems. A cellular telephone is a
telephone device that uses high-frequency radio waves to transmit voice and digital messages.
Some mobile users connect their laptop computers or other mobile devices to a cellular telephone
to access the Web, send and receive e-mail, etc., while away from a standard telephone line.

Infrared
Low frequency infrared waves are used for very short distance communication like TV remote,
wireless speakers, automatic doors, hand held devices etc. Infrared signals can propagate within
a room but cannot penetrate walls. However, due to such short range, it is considered to be one
of the most secure transmission modes.


Section B

Switching Techniques
A network is a set of connected devices. Whenever we have multiple devices, we have the problem
of how to connect them to make one-to-one communication possible. One solution is to make a
point-to-point connection between each pair of devices (a mesh topology) or between a central
device and every other device (a star topology). These methods, however, are impractical and
wasteful when applied to very large networks. The number and length of the links require too much
infrastructure to be cost-efficient, and the majority of those links would be idle most of the time.
Other topologies employing multipoint connections, such as a bus, are ruled out because the
distances between devices and the total number of devices increase beyond the capacities of the
media and equipment.
A better solution is switching. A switched network consists of a series of interlinked nodes, called
switches. Switches are devices capable of creating temporary connections between two or more
devices linked to the switch. In a switched network, some of these nodes are connected to the end
systems (computers or telephones, for example). Others are used only for routing.

Traditionally, three methods of switching have been important: circuit switching, packet
switching, and message switching. The first two are commonly used today. The third has been
phased out in general communications but still has networking applications. We can then divide
today's networks into three broad categories: circuit-switched networks, packet-switched
networks, and message-switched networks. Packet-switched networks can further be divided into two
subcategories: virtual-circuit networks and datagram networks.


CIRCUIT-SWITCHED NETWORKS
A circuit-switched network consists of a set of switches connected by physical links.
A connection between two stations is a dedicated path made of one or more links. However,
each connection uses only one dedicated channel on each link.
In circuit switching the routing decision is made when the path is set up across the given
network. After the link has been set up between the sender and the receiver, the information
is forwarded continuously over the provided link.
In circuit switching a dedicated link/path is established between the sender and the receiver and
is maintained for the entire duration of the conversation.

When two nodes communicate with each other over a dedicated communication path, it is called
circuit switching. There is a need for a pre-specified route through which the data will travel, and no
other data is permitted on it. In circuit switching, to transfer the data, the circuit must be established so
that the data transfer can take place.

Circuits can be permanent or temporary. Applications which use circuit switching may have to
go through three phases:

 Establish a circuit
 Transfer the data
 Disconnect the circuit

Circuit switching was designed for voice applications. The telephone is the most suitable example of circuit
switching. Before a user can make a call, a dedicated path between caller and callee is established over the
network.
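A minimal sketch of the three phases as they might appear to an application, using a purely hypothetical Circuit class (this is not a real signalling API, just an illustration of establish / transfer / disconnect):

    # Hypothetical Circuit class illustrating the three phases of circuit switching.
    class Circuit:
        def __init__(self, caller, callee):
            self.caller, self.callee, self.connected = caller, callee, False

        def establish(self):               # setup phase: reserve the dedicated path
            self.connected = True
            print(f"circuit {self.caller} <-> {self.callee} established")

        def transfer(self, data):          # data-transfer phase: use the reserved path
            assert self.connected, "no circuit established"
            print(f"sending {data!r} over the dedicated path")

        def disconnect(self):              # teardown phase: release channel and bandwidth
            self.connected = False
            print("circuit released")

    call = Circuit("caller", "callee")
    call.establish()
    call.transfer("hello")
    call.disconnect()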

Benefits or advantages of circuit switching

Following are the benefits or advantages of circuit switching type:


➨As there is very little delay while the call is being established and during the conversation, circuit-
switched networks have been widely used for real-time voice services throughout the world for
years. There is almost no waiting time at the voice switches used for the call.
➨Voice communication sounds realistic, and the person speaking is easily
identified, due to the higher sampling rates used.
➨Once the connection is established between two parties, it remains available until the end of the
conversation. This guarantees a reliable connection in terms of constant data rate and availability
of resources (bandwidth, channels, etc.). Hence it is used for long-distance and long-duration
calls without any degradation.
➨There is no loss of packets or out-of-order packets, as this is a connection-oriented network, unlike a
packet-switched network.
➨The forwarding of information is based on time or frequency slot assignments, and hence there
is no need to examine a header as in a packet-switched network. As there is no
header requirement, there is low overhead in a circuit-switched network.


Drawbacks or disadvantages of circuit switching

Following are the disadvantages of circuit switching type:


➨As it is designed for voice traffic, it is not well suited to data transmission.
➨The channels and bandwidth used in the connection are not available until the conversation or
call is finished. Due to this, even if they are not being utilized, they cannot be used for any other
purpose (e.g. other connections). Hence circuit switching is inefficient in terms of resource utilization
(i.e. channels, bandwidth, etc.). Moreover, if there are more users than available
channels, this leads to dropped calls or calls not being established.
➨The connection requires call setup delay and is not instantaneous. This means there is no
communication until the connection is established and resources are available.
➨It is more expensive compared to other techniques due to the dedicated path requirement.
Consequently, the call rates are also higher.

Message Switching
This technique falls somewhere in the middle between circuit switching and packet switching. In message
switching, the whole message is treated as a data unit and is switched/transferred in its entirety.

A switch working on message switching first receives the whole message and buffers it until
there are resources available to transfer it to the next hop. If the next hop does not have enough
resources to accommodate a large message, the message is stored and the switch waits.

This technique was considered a substitute for circuit switching, in which the whole
path is blocked for two entities only. Message switching has since been replaced by packet switching.
Message switching has the following drawbacks:

 Every switch in the transit path needs enough storage to accommodate the entire message.
 Because of the store-and-forward technique and the waits until resources are available,
message switching is very slow.
 Message switching was not a solution for streaming media and real-time applications.


In message switching, each router waits until it
receives the entire message. Once it receives the complete message, it transmits it over the
next link, and so on; all the routers along the route do the same. It is used for message
transmission between two parties not requiring real-time information sharing.

➨Message switching overhead is lower compared to packet switching.
➨Message switching has higher reliability and lower complexity: in message switching, one
single datagram is either received or lost, and one single network path is used for it.
➨As explained above, message switching takes more time compared to packet switching, as the entire
message is stored at each of the hop points until it is completely received.

Benefits or advantages of Message switching

Following are the benefits or advantages of Message switching type:


➨As more devices share the same channel for message transfer, it has higher
channel efficiency compared to circuit switching.
➨In this type, messages are stored temporarily en route, and hence congestion can be reduced to a
great extent.
➨It is possible to assign priorities to the messages, as they use the store-and-forward technique
for delivery.
➨It supports message lengths of unlimited size.
➨It does not require a physical connection between source and destination devices, unlike circuit
switching.
➨Broadcasting of a single message to multiple receivers can be done by appending a broadcast
address to the message. This is used by advertisement agencies for transmission of ads to
particular regions served by cell sites in cities, and by government agencies to
transmit warnings and other security-related messages to the people.
➨Due to its storage mechanism, stored messages can be retrieved later, which has been used by
police in criminal cases.

Drawbacks or disadvantages of Message switching

Following are the disadvantages of Message switching type:


➨This switching type is not suitable for interactive applications such as voice and video,
because of the long message delivery time.
➨The method is costly, as store-and-forward devices are expensive: large storage disks are
required to hold long messages for long durations.


➨It can lead to security issues if intercepted by intruders. Hence vital information such as banking
account details and official or personal passwords should not be included in the messages.
➨Because the system is complex, senders are often not aware whether their messages have been
transferred successfully, which can cause confusion between the communicating parties.
➨Message switching does not establish a dedicated path between the devices. As there is no
direct link between sender and receiver, the communication is not guaranteed to be reliable.

Packet Switching
The shortcomings of message switching gave birth to the idea of packet switching. The entire
message is broken down into smaller chunks called packets. Switching information is added
in the header of each packet, and each packet is transmitted independently.

It is easier for intermediate networking devices to store small packets, and they do not consume
many resources either on the carrier path or in the internal memory of switches.

Packet switching enhances line efficiency, as packets from multiple applications can be
multiplexed over the carrier. The Internet uses the packet switching technique. Packet switching
enables the user to differentiate data streams based on priorities: packets are stored and
forwarded according to their priority to provide quality of service.

Unlike a circuit-switched network, a packet-switched network does not require a connection to be
established initially. The channel is available for use by many users, but when the load or the number
of users increases, it leads to congestion in the network. Packet-switched networks are
mainly used for data and for voice applications that do not require real-time delivery.


In packet switching, the station or IP device breaks a long message into small packets and inserts
headers as per the predefined formats of the TCP/IP stack. These headers contain much useful
information, including the source address, destination address, packet length, port number,
protocol field, checksum, etc. Packets are transmitted sequentially and reassembled at the
receiver with the help of sequence numbers. In packet switching, packets are handled in two
ways, viz. datagram and virtual circuit. In the datagram approach, each packet is treated independently:
packets can take any route between the source and destination, may arrive out of order,
and may even fail to reach the destination in some cases. In the virtual circuit approach, a pre-planned
route is established before the transmission of packets; call request and call accept messages are
used as the handshake mechanism. Each packet then carries a VCI (virtual circuit identifier)
instead of the destination IP address, and a routing decision is not needed for each packet, as it
is made only once for all packets on the virtual circuit.
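To make the datagram idea concrete, here is a minimal Python sketch of packetizing and reassembly using sequence numbers. The header fields and payload size are illustrative only; real TCP/IP headers carry many more fields than shown here.

```python
# Simplified sketch of packetizing and reassembly (illustrative header fields).

def packetize(message: bytes, payload_size: int, src: str, dst: str):
    """Break a message into packets, each carrying a small header."""
    packets = []
    for seq, start in enumerate(range(0, len(message), payload_size)):
        payload = message[start:start + payload_size]
        packets.append({"src": src, "dst": dst, "seq": seq, "data": payload})
    return packets

def reassemble(packets):
    """Rebuild the original message from the sequence numbers,
    even if the packets arrived out of order."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["data"] for p in ordered)

msg = b"Packet switching breaks a message into independent packets."
pkts = packetize(msg, payload_size=16, src="10.0.0.1", dst="10.0.0.2")
pkts.reverse()                       # simulate out-of-order arrival
assert reassemble(pkts) == msg
```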

Benefits or advantages of Packet switching

Following are the benefits or advantages of Packet switching type:


➨As packets have a bounded maximum length, they can be stored in main memory rather than on
disk. This reduces access delay. Moreover, because packet size is limited, the network has
improved delay characteristics, as no long messages sit in the queue.
➨As switching devices do not require massive secondary storage, costs are minimized to a great
extent. Hence packet switching is a very cost-effective technique.
➨Packets are rerouted in case of problems (e.g. busy or disabled links). This makes communication
more reliable.
➨It is more efficient for data transmission, as no path needs to be established between
the sender and receiver and data can be transmitted immediately.
➨Many users can share the same channel simultaneously. Hence packet switching makes efficient
use of the available bandwidth.
➨With improved protocols, packet switching is widely used for video and voice calls in
applications such as WhatsApp, Skype and Google Talk.
➨Due to competition among telecom carriers and the availability of modern wireless standards such as
LTE and LTE-Advanced, packet switching is widely used by Internet users.

Drawbacks or disadvantages of Packet switching

Following are the disadvantages of Packet switching type:


➨A packet-switched network cannot be used in applications requiring very low delay and high
quality of service, e.g. high-reliability voice calls.
➨The protocols used in packet switching are complex and require high initial implementation
costs.
➨If the network becomes overloaded, packets are delayed or dropped, which forces the sender to
retransmit the lost packets. If errors are not recovered, this can lead to the loss of critical
information.
➨It is not secure if security protocols (e.g. IPsec) are not used during packet transmission.


Difference

Circuit Switching | Packet Switching (Datagram type) | Packet Switching (Virtual Circuit type)
Dedicated path | No dedicated path | No dedicated path
Path is established for entire conversation | Route is established for each packet | Route is established for entire conversation
Call setup delay | Packet transmission delay | Call setup delay as well as packet transmission delay
Overload may block call setup | Overload increases packet delay | Overload may block call setup and increases packet delay
Fixed bandwidth | Dynamic bandwidth | Dynamic bandwidth
No overhead bits after call setup | Overhead bits in each packet | Overhead bits in each packet

THE OSI MODEL


Established in 1947, the International Standards Organization (ISO) is a multinational body
dedicated to worldwide agreement on international standards. An ISO standard that covers all
aspects of network communications is the Open Systems Interconnection model. It was first
introduced in the late 1970s. An open system is a set of protocols that allows any two different
systems to communicate regardless of their underlying architecture. The purpose of the OSI
model is to show how to facilitate communication between different systems without requiring
changes to the logic of the underlying hardware and software. The OSI model is not a protocol; it
is a model for understanding and designing a network architecture that is flexible, robust, and
interoperable.

The OSI model is a layered framework for the design of network systems that allows
communication between all types of computer systems. It consists of seven separate but related
layers, each of which defines a part of the process of moving information across a network (see


Figure 2.2). An understanding of the fundamentals of the OSI model provides a solid basis for
exploring data communications.

Layered Architecture
The OSI model is composed of seven ordered layers: physical (layer 1), data link (layer 2), network
(layer 3), transport (layer 4), session (layer 5), presentation (layer 6), and application (layer 7). Figure
2.3 shows the layers involved when a message is sent from device A to device B. As the message
travels from A to B, it may pass through many intermediate nodes. These intermediate nodes usually
involve only the first three layers of the OSI model.
In developing the model, the designers distilled the process of transmitting data to its most
fundamental elements. They identified which networking functions had related uses and collected
those functions into discrete groups that became the layers. Each layer defines a family of functions
distinct from those of the other layers. By defining and localizing functionality in this fashion, the
designers created an architecture that is both comprehensive and flexible. Most importantly, the OSI
model allows complete interoperability between otherwise incompatible systems.
Within a single machine, each layer calls upon the services of the layer just below it. Layer 3, for
example, uses the services provided by layer 2 and provides services for layer 4. Between machines,
layer x on one machine communicates with layer x on another machine. This communication is
governed by an agreed-upon series of rules and conventions called protocols. The processes on each
machine that communicate at a given layer are called peer-to-peer processes. Communication
between machines is therefore a peer-to-peer process using the protocols appropriate to a given layer.

Peer-to-Peer Processes
At the physical layer, communication is direct: In Figure 2.3, device A sends a stream of bits to
device B (through intermediate nodes). At the higher layers, however, communication must move
down through the layers on device A, over to device B, and then back up through the layers. Each
layer in the sending device adds its own information to the message it receives from the layer
just above it and passes the whole package to the layer just below it.


At layer 1 the entire package is converted to a form that can be transmitted to the receiving
device. At the receiving machine, the message is unwrapped layer by layer, with each process
receiving and removing the data meant for it. For example, layer 2 removes the data meant for it,
then passes the rest to layer 3. Layer 3 then removes the data meant for it and passes the rest to
layer 4, and so on.

Interfaces Between Layers


The passing of the data and network information down through the layers of the sending device
and back up through the layers of the receiving device is made possible by an interface between
each pair of adjacent layers. Each interface defines the information and services a layer must
provide for the layer above it. Well-defined interfaces and layer functions provide modularity to a
network. As long as a layer provides the expected services to the layer above it, the specific
implementation of its functions can be modified or replaced without requiring changes to the
surrounding layers.

Organization of the Layers


The seven layers can be thought of as belonging to three subgroups. Layers 1, 2, and 3 (physical,
data link, and network) are the network support layers; they deal with the physical aspects of
moving data from one device to another (such as electrical specifications, physical connections,
physical addressing, and transport timing and reliability). Layers 5, 6, and 7 (session,
presentation, and application) can be thought of as the user support layers; they allow
interoperability among unrelated software systems. Layer 4, the transport layer, links the two
subgroups and ensures that what the lower layers have transmitted is in a form that the upper
layers can use. The upper OSI layers are almost always implemented in software; lower layers
are a combination of hardware and software, except for the physical layer, which is mostly
hardware.
In Figure 2.4, which gives an overall view of the OSI layers, D7 means the data unit at layer 7,
D6 means the data unit at layer 6, and so on. The process starts at layer 7 (the application layer),


then moves from layer to layer in descending, sequential order. At each layer, a header, or
possibly a trailer, can be added to the data unit. Commonly, the trailer is added only at layer 2.
When the formatted data unit passes through the physical layer (layer 1), it is changed into an
electromagnetic signal and transported along a physical link.

Upon reaching its destination, the signal passes into layer 1 and is transformed back into digital
form. The data units then move back up through the OSI layers. As each block of data reaches
the next higher layer, the headers and trailers attached to it at the corresponding sending layer are
removed, and actions appropriate to that layer are taken. By the time it reaches layer 7, the
message is again in a form appropriate to the application and is made available to the recipient.

Encapsulation
Figure 2.3 reveals another aspect of data communications in the OSI model: encapsulation. A
packet (header and data) at level 7 is encapsulated in a packet at level 6. The whole packet at
level 6 is encapsulated in a packet at level 5, and so on. In other words, the data portion of a
packet at level N - 1 carries the whole packet (data and header and maybe trailer) from level N.
The concept is called encapsulation; level N - 1 is not aware of which part of the encapsulated
packet is data and which part is the header or trailer. For level N - 1, the whole packet coming
from level N is treated as one integral unit.
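As a rough illustration of this nesting, the following Python sketch wraps a piece of data with one header per layer on the way down and strips the headers again on the way up. The layer names and header strings are purely illustrative; real layers add binary headers (and, at the data link layer, usually a trailer as well).

```python
# Toy illustration of OSI-style encapsulation: each layer treats whatever it
# receives from the layer above as opaque data and prepends its own header.

layers = ["application", "presentation", "session",
          "transport", "network", "data link"]

def encapsulate(data: str) -> str:
    unit = data
    for layer in layers:                      # moving down the stack
        unit = f"[{layer}-hdr]{unit}"
    return unit                               # handed to the physical layer as bits

def decapsulate(unit: str) -> str:
    for layer in reversed(layers):            # moving back up the stack
        unit = unit.removeprefix(f"[{layer}-hdr]")
    return unit

frame = encapsulate("hello")
print(frame)                # [data link-hdr][network-hdr]...[application-hdr]hello
print(decapsulate(frame))   # hello
```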

LAYERS IN THE OSI MODEL


Physical Layer
The physical layer coordinates the functions required to carry a bit stream over a physical
medium. It deals with the mechanical and electrical specifications of the interface and
transmission medium. It also defines the procedures and functions that physical devices and
interfaces have to perform for transmission to occur. Figure 2.5 shows the position of the
physical layer with respect to the transmission medium and the data link layer.


The physical layer is also concerned with the following:


o Physical characteristics of interfaces and medium. The physical layer defines the
characteristics of the interface between the devices and the transmission medium. It also defines
the type of transmission medium.
o Representation of bits. The physical layer data consists of a stream of bits (a sequence of 0s and
1s) with no interpretation. To be transmitted, bits must be encoded into signals, electrical or
optical. The physical layer defines the type of encoding (how 0s and 1s are changed to signals).
o Data rate. The transmission rate-the number of bits sent each second-is also defined by the
physical layer. In other words, the physical layer defines the duration of a bit, which is how long
it lasts.
o Synchronization of bits. The sender and receiver not only must use the same bit rate but also
must be synchronized at the bit level. In other words, the sender and the receiver clocks must be
synchronized.
o Line configuration. The physical layer is concerned with the connection of devices to the
media. In a point-to-point configuration, two devices are connected through a dedicated link. In a
multipoint configuration, a link is shared among several devices.
o Physical topology. The physical topology defines how devices are connected to make a
network. Devices can be connected by using a mesh topology (every device is connected to
every other device), a star topology (devices are connected through a central device), a ring
topology (each device is connected to the next, forming a ring), a bus topology (every device is
on a common link), or a hybrid topology (this is a combination of two or more topologies).
o Transmission mode. The physical layer also defines the direction of transmission between
two devices: simplex, half-duplex, or full-duplex. In simplex mode, only one device can send;
the other can only receive. The simplex mode is a one-way communication. In the half-duplex
mode, two devices can send and receive, but not at the same time. In a full-duplex (or simply
duplex) mode, two devices can send and receive at the same time.

OSI Model
This model is called Open System Interconnection (OSI) because this model allows any
two different systems to communicate regardless of their underlying architecture.


Therefore the OSI reference model allows open communication between different systems
without requiring changes to the logic of the underlying hardware and software.

The International Standards Organization (ISO), in an effort to encourage open networks,
developed the Open Systems Interconnection reference model. The model logically groups
the functions and sets the rules, called protocols, necessary to establish and conduct
communication between two or more parties. The model consists of seven groups of functions,
often referred to as layers.

The OSI reference model is a logical framework of standards for network communication.
It is now considered a primary reference for internetworking and intercomputing, and many
network communication protocols are based on its standards. In the OSI model, network/data
communication is divided into seven layers, which can be grouped into three groups:
network, transport and application.

• Layers 1, 2 and 3, i.e. physical, data link and network, are the network support layers.
• Layer 4, the transport layer, provides end-to-end reliable data transmission.
• Layers 5, 6 and 7, i.e. session, presentation and application, are the user support
layers.

                     

It is important to note that the OSI model is just a model. It is not a protocol that can be
installed or run on any system.

To remember the names of the seven layers in order, one common mnemonic is "All
People Seem to Need Data Processing".


                                   

The upper three layers are mainly concerned with the organization of terminal software
and are not directly the concern of communications engineers. The transport layer is the
one that links the communication processes to this software-oriented part of the stack.

Layer 7 – Application Layer 

The application layer serves as the window for users and application processes to
access network services. The application layer provides the interface between the
program that is sending or receiving data and the protocol stack. When you download
or send emails, your e-mail program contacts this layer. This layer provides network
services to end users, such as mail, FTP, Telnet and DNS.

                    


Function of Application Layer:

• Resource sharing and device redirection.


• Remote file access.
• Remote printer access.
• Inter-process communication.
• Network management.
• Directory services.
• Electronic messaging (such as mail).

Network Virtual Terminal: A network virtual terminal is a software version of a physical


terminal and allows a user to log on to a remote host. For this, application layer creates
a software emulation of a terminal at the remote host. The user's computer talks to the
software terminal which, in turn, talks to the host and vice-versa. The remote host
believes it is communicating with one of its own terminals and allows the user to log on.

                          

File transfer, access and management (FTAM): This application allows a user to
access a file in a remote host to make changes or to read data, to retrieve files from
remote computer for use in local computer, and to manage or control files in a remote
computer locally.

Mail services: This application provides various e-mail services such as email
forwarding and storage.

Directory services: This application provides the distributed database sources and
access for global information about various objects and services.

Protocols used at application layer are FTP, DNS, SNMP, SMTP, FINGER, and
TELNET.

Layer 6 – Presentation Layer  

Presentation Layer is also called Translation layer. The presentation layer presents the
data into a uniform format and masks the difference of data format between two


dissimilar systems. The presentation layer formats the data to be presented to the
application layer. It can be viewed as the translator for the network. This layer may
translate data from a format used by the application layer into a common format at the
sending station, and then translate the common format to a format known to the
application layer at the receiving station.

                        

Functions of Presentation Layer:

• Character code translation : for example, ASCII to EBCDIC.


• Data conversion : bit order, CR-CR/LF, integer-floating point, and so on.
• Data compression : reduces the number of bits that need to be transmitted on the
network.
• Data encryption : encrypt data for security purposes. For example, password
encryption.

Layer 5 - Session Layer

Session layer has the primary responsibility of beginning, maintaining and ending the
communication between two devices, which is called Session. It also provides for
orderly communication between devices by regulating the flow of data.

The session protocol defines the format of the data sent over the connections. The session
layer establishes and manages the session between the two users at different ends of a
network. It also manages which side can transfer data at a certain time and for how long.

Examples of session layer use are interactive logins and file transfer sessions. The session
layer reconnects the session if it is disconnected, and it also reports and logs upper
layer errors. The session layer allows session establishment between processes
running on different stations. Dialogue control and token management are
responsibilities of the session layer.


Functions of Session Layer:

Session establishment, maintenance and termination : allows two application


processes on different machines to establish, use and terminate a connection, called a
session.

Session support : performs the functions that allow these processes to communicate
over the network, performing security, name recognition, logging and so on.

Dialog control: Dialog control is the function of session layer that determines which
device will communicate first and the amount of data that will be sent.

                         

When a device is contacted first, the session layer is responsible for determining which
device participating in the communication will transmit at a given time as well as
controlling the amount of data that can be sent in a transmission. This is called dialog
control.

The types of dialog control that can take place include simplex, half duplex and full
duplex.

Dialog separation or Synchronization: The session layer is also responsible for


adding checkpoint or markers within the message. This process of inserting markers to
the stream of data is known as dialog separation.

Protocols: The protocols that work on the session layer are NetBIOS, Mail Slots,
Named Pipes, and RPC.

Layer 4 – Transport Layer

The transport layer manages end-to-end (source-to-destination, process-to-process)
message delivery in a network and also provides error checking, thereby guaranteeing
that no duplication or errors occur in the data transfers across the network.
It makes sure that all the packets of a message arrive intact and in order.


Transport layer also provides the acknowledgement of the successful data transmission
and retransmits the data if error is found. The transport layer ensures that messages
are delivered error-free, in sequence, and with no losses or duplications.

The size and complexity of a transport protocol depends on the type of service it can get
from the network layer. Transport layer is at the core of OSI model. Transport layer
provides services to application layer and takes services from network layer.

Transport layer divides the message received from upper layer into packets at source
and reassembles these packets again into message at the destination.

Transport layer provides two types of services:

 Connection Oriented Transmission

(a) In this type of transmission, the receiving device sends an acknowledgment back to
the source after a packet or group of packets is received.

(b) This type of transmission is also known as reliable transport method.

(c) Because connection oriented transmission requires more packets be sent across
network, it is considered a slower transmission method.

(d) If the data that is sent has problems, the destination requests the source for
retransmission by acknowledging only packets that have been received and are
recognizable.

(e) Once the destination computer receives all of the data necessary to reassemble the
packet, the transport layer assembles the data in the correct sequence and then passes
it up to the session layer.

Connectionless Transmission

(a) In this type of transmission the receiver does not acknowledge receipt of a packet.

(b) The sending device assumes that the packets arrive just fine.

(c) This approach allows for much faster communication between devices.

(d) The trade-off is that connectionless transmission is less reliable than connection
oriented.

Functions of Transport Layer:

Segmentation of message into packet and reassembly of packets into message:


accepts a message from the (session) layer above it, splits the message into smaller


units (if not already small enough), and passes the smaller units down to the network
layer. The transport layer at the destination station reassembles the message.

Message acknowledgment: provides reliable end-to-end message delivery with


acknowledgments.

Message traffic control : tells the transmitting station to "back-off" when no message
buffers are available.

Session multiplexing : multiplexes several message streams, or sessions onto one


logical link and keeps track of which messages belong to which sessions.

Service point addressing: The purpose of the transport layer is to deliver a message from
one process running on the source machine to another process running on the destination
machine. Several programs or processes may be running on both machines at a time.
In order to deliver the message to the correct process, the transport layer header
includes a type of address called a service point address or port address. By
specifying this address, the transport layer makes sure that the message is delivered to
the correct process on the destination machine.
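A minimal sketch of port (service point) addressing using Python's standard socket module: the IP address selects the destination host, while the port number selects the listening process on that host. The host and port values below are purely illustrative.

```python
# Sketch: deliver a payload to whichever process listens on (host, port).
import socket

def send_to_process(host: str, port: int, payload: bytes) -> None:
    """Open a TCP connection to the given host/port and send the payload."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload)

# Example (illustrative values): the port number 8080 picks the process.
# send_to_process("192.0.2.10", 8080, b"hello")
```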

Flow control: Like Data link layer, transport layer also performs flow control. Transport
layer makes sure that the sender and receiver communicate at a rate they both can
handle. Therefore flow control prevents the source from sending data packets faster
than the destination can handle. Here, flow control is performed end-to-end rather than
across a link.

      

Error control: Like Data link layer, Transport layer also performs error control. Here
error control is performed end-to-end rather than across a single link. The sending
transport layer ensures that the entire message arrives at the receiving transport layer
without error (damage, loss or duplication). Error correction is achieved through
retransmission.


Protocols: The protocols that work at the transport layer include TCP, SPX, NetBIOS, ATP and
NWLink.

Layer 3 – Network Layer

This layer is in charge of packet addressing, converting logical addresses into
physical addresses. It is responsible for the source-to-destination delivery of a packet
across multiple networks (links). This layer is also in charge of setting the route
the packets will use to arrive at their destination, based on factors like traffic and
priorities. The network layer determines how data is transmitted between the
network devices.

                              

If two systems are connected to the same link, there is no need for a network layer. If
two systems are attached to different networks with connecting devices such as routers
between them, then the network layer is needed.

It also translates logical addresses into physical addresses, e.g. a computer name
into a MAC address. It is also responsible for defining the route, managing network
problems and addressing. The network layer controls the operation of the
subnet, deciding which physical path the data should take based on network
conditions, priority of service, and other factors. The X.25 protocols work at the
physical, data link, and network layers.

 The network layer lies between the data link layer and the transport layer. It takes services from
the data link layer and provides services to the transport layer.


                                   

Functions of Network Layer:

• Subnet Traffic Control: Routers (network layer intermediate systems) can instruct a
sending station to "throttle back" its frame transmission when the router's buffer fills up.

• Logical-Physical Address Mapping: translates logical addresses, or names, into


physical addresses.

• Subnet Usage Accounting: has accounting functions to keep track of frames


forwarded by subnet intermediate systems, to produce billing information.

In the network layer and the layers below, peer protocols exist between a node and its
immediate neighbor, but the neighbor may be a node through which data is routed, not
the destination station. The source and destination stations may be separated by many
intermediate systems.

Internetworking

• One of the main responsibilities of network layer is to provide internetworking between


different networks.

• It provides logical connection between different types of network.

• It is because of this layer, we can combine various different networks to form a bigger
network.

Logical Addressing

• A large number of different networks can be combined together to form bigger networks
or an internetwork.


• In order to identify each device on internetwork uniquely, network layer defines an


addressing scheme.

• Such an address distinguishes each device uniquely and universally.

Routing

• When independent networks or links are combined together to create internet works,
multiple routes are possible from source machine to destination machine.

• The network layer protocols determine which route or path is best from source to
destination. This function of network layer is known as routing.

• Routes frames among networks.

Packetizing

• The network layer receives the data from the upper layers and creates its own packets
by encapsulating this data. The process is known as packetizing.

• This packetizing is done by the Internet Protocol (IP), which defines its own packet format.

Fragmentation

• Fragmentation means dividing the larger packets into small fragments.

• The maximum size of a transportable packet is defined by the underlying physical network protocol.

• For this, network layer divides the large packets into fragments so that they can be
easily sent on the physical medium.

• If it determines that a downstream router's maximum transmission unit (MTU) size is
less than the frame size, a router can fragment a frame for transmission and reassembly
at the destination station.

                          


Protocols: The protocols that work at the network layer include IP, ICMP, ARP, RIP, IPX
and OSPF.

Layer 2 - Data Link layer

It is responsible for reliable node-to-node delivery of data. It receives the data from the
network layer, creates frames, adds physical addresses to these frames and passes them
to the physical layer.

The data link layer provides error-free transfer of data frames from one node to
another over the physical layer, allowing layers above it to assume virtually error-free
transmission over the link. The data link layer defines the format of data on the network. A
network data frame, or packet, includes a checksum, source and destination addresses, and
data.

           

The data link layer handles the physical and logical connections to the packet's
destination, using a network interface. This layer gets the data packets sent by the
network layer and converts them into frames that will be sent out on the network media,
adding the physical address of the network card of your computer, the physical
address of the network card of the destination, control data and checksum data,
also known as the CRC. The X.25 protocols work at the physical, data link, and
network layers.

Data Link layer consists of two sub-layers

1. Logical Link Control (LLC) sublayer

2. Medium Access Control (MAC) sublayer

LLC sublayer provides interface between the media access methods and network layer
protocols such as Internet protocol which is a part of TCP/IP protocol suite.

LLC sublayer determines whether the communication is going to be connectionless or


connection-oriented at the data link layer.


MAC sublayer is responsible for connection to physical media. At the MAC sublayer of
Data link layer, the actual physical address of the device, called the MAC address is
added to the packet. Such a packet is called a Frame that contains all the addressing
information necessary to travel from source device to destination device.

                                                        

A MAC address is a 12-digit hexadecimal number unique to every network interface in the
world. A device's MAC address is located on its Network Interface Card (NIC). Of these
12 hexadecimal digits, the first six identify the NIC manufacturer and the last
six are unique to the device. For example, 32-14-a6-42-71-0c is a 12-digit hexadecimal MAC
address. Thus the MAC address represents the physical address of a device in the network.
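A small sketch splitting a MAC address into its manufacturer (OUI) digits and its device-specific digits; the address used is the illustrative example from the text, not a real assignment.

```python
# Split a MAC address string into manufacturer (OUI) and device-specific halves.

def split_mac(mac: str):
    digits = mac.lower().replace("-", "").replace(":", "")
    if len(digits) != 12 or any(c not in "0123456789abcdef" for c in digits):
        raise ValueError("a MAC address consists of 12 hexadecimal digits")
    return digits[:6], digits[6:]        # (manufacturer part, device part)

print(split_mac("32-14-a6-42-71-0c"))    # ('3214a6', '42710c')
```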

Functions of Data Link Layer:

                       

Link Establishment and Termination: Establishes and terminates the logical link
between two nodes.

Physical addressing: After creating frames, Data link layer adds physical addresses
(MAC address) of sender and/or receiver in the header of each frame.

Frame Traffic Control: Tells the transmitting node to "back off" when no frame buffers
are available.

Frame Sequencing: Transmits/receives frames sequentially.


Frame Acknowledgment: Provides/expects frame acknowledgments. Detects and


recovers from errors that occur in the physical layer by retransmitting non-
acknowledged frames and handling duplicate frame receipt.

Frame Delimiting: Creates and recognizes frame boundaries.

Frame Error Checking: Checks received frames for integrity.

Media Access Management: determines when the node "has the right" to use the
physical medium.

Flow control: It is the traffic regulation mechanism implemented by the data link layer that
prevents a fast sender from drowning a slow receiver. If the rate at which data is
absorbed by the receiver is less than the rate at which it is produced by the sender, the
data link layer imposes this flow control mechanism.

Error control: The data link layer provides the mechanism of error control, in which it
detects and retransmits damaged or lost frames. It also deals with the problem of
duplicate frames, thus providing reliability to the physical layer.

Access control: When a single communication channel is shared by multiple devices,


MAC sub-layer of data link layer helps to determine which device has control over the
channel at a given time.

Feedback: After transmitting the frames, the system waits for the feedback. The
receiving device then sends the acknowledgement frames back to the source providing
the receipt of the frames.

Layer 1 – Physical Layer

The physical layer, the lowest layer of the OSI model, is concerned with the
transmission and reception of the unstructured raw bit stream over a physical
medium. It describes the electrical/optical, mechanical, and functional interfaces to
the physical medium, and carries the signals for all of the higher layers. Physical layer
defines the cables, network cards and physical aspects.

It is responsible for the actual physical connection between the devices. Such physical
connection may be made by using twisted pair cable, fiber-optic, coaxial cable or
wireless communication media. This layer gets the frames sent by the Data Link layer
and converts them into signals compatible with the transmission media. If a metallic
cable is used, then it will convert data into electrical signals; if a fiber optical cable is
used, then it will convert data into luminous signals; if a wireless network is used,
then it will convert data into electromagnetic signals; and so on.

When receiving data, this layer gets the received signal, converts it into 0s and 1s, and
sends them to the data link layer, which puts the frame back together and checks its
integrity. The X.25 protocols work at the physical, data link, and network layers.

Functions of Physical layer: 

Data Encoding: Modifies the simple digital signal pattern (1s and 0s) used by the PC to
better accommodate the characteristics of the physical medium, and to aid in bit and
frame synchronization. It determines:

• What signal state represents a binary 1?


• How the receiving station knows when a "bit-time" starts.
• How the receiving station delimits a frame.

Physical Medium Attachment, Accommodating Various Possibilities in the


Medium:

• Will an external transceiver (MAU) be used to connect to the medium?


• How many pins do the connectors have and what is each pin used for?

Transmission Technique : determines whether the encoded bits will be transmitted by


baseband (digital) or broadband (analog) signaling.

Physical Medium Transmission : transmits bits as electrical or optical signals


appropriate for the physical medium, and determines:

• What physical medium options can be used.


• How many volts/db should be used to represent a given signal state, using a given
physical medium.

Protocols used at physical layer are ISDN, IEEE 802 and IEEE 802.2.

Bit synchronization: The physical layer provides the synchronization of the bits by
providing a clock. This clock controls both transmitter as well as receiver thus providing
synchronization at bit level.

Provides physical characteristics of interfaces and medium: Physical layer


manages the way a device connects to network media. For example, if the physical
connection from the device to the network uses coaxial cable, the hardware that
functions at the physical layer will be designed for that specific type of network. All
components including connectors are also specified at physical layer.

Bit rate control: Physical layer defines the transmission rate i.e. the number of bits
sent in one second. Therefore it defines the duration of a bit.


                           

Line configuration: The physical layer also defines the way in which devices are
connected to the medium. Two different line configurations are used: point-to-point
configuration and multipoint configuration. The physical layer also activates, maintains
and deactivates the physical connection.

Transmission mode: Physical layer also defines the way in which the data flows
between the two connected devices. The various transmission modes possible are:
Simplex, half-duplex and full-duplex.

Physical topologies: The physical layer specifies the way in which the different
devices/nodes are arranged in a network, i.e. bus, star or mesh.

Multiplexing: Physical layer can use different techniques of multiplexing, in order to


improve the channel efficiency.

Circuit switching: Physical layer also provides the circuit switching to interconnect
different networks.

TCP/IP Reference Model


Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite is the
engine for the Internet and networks worldwide. Its simplicity and power have led to its
becoming the single network protocol of choice in the world today. TCP/IP is a set of
protocols developed to allow cooperating computers to share resources across the
network. 

 This model was initially developed and used by ARPANET (Advanced Research Projects
Agency Network). ARPANET was a community of researchers sponsored by the U.S.
Department of Defense. It connected many universities and government installations
using leased telephone lines. Certainly the ARPANET is the best-known TCP/IP
network.


The most accurate name for the set of protocols is the "Internet protocol suite". TCP
and IP are two of the protocols in this suite. The Internet is a collection of networks, and
the term "Internet" applies to this entire set of networks. Like most networking software,
TCP/IP is modeled in layers. This layered representation leads to the term protocol
stack, which refers to the stack of layers in the protocol suite. It can be used for
positioning the TCP/IP protocol suite against other network software such as the Open
Systems Interconnection (OSI) model.

By dividing the communication software into layers, the protocol stack allows for division
of labor, ease of implementation and code testing, and the ability to develop alternative
layer implementations. Layers communicate with those above and below via concise
interfaces. In this regard, a layer provides a service for the layer directly above it and
makes use of services provided by the layer directly below it. For example, the IP layer
provides the ability to transfer data from one host to another without any guarantee of
reliable delivery or duplicate suppression.

                            

TCP/IP is a family of protocols. A few provide "low-level" functions needed by many
applications; these include IP, TCP, and UDP. Others are protocols for specific
tasks, e.g. transferring files between computers, sending mail, or finding out who is
logged in on another computer. Initially TCP/IP was used mostly between
minicomputers or mainframes. These machines had their own disks and were generally
self-contained.

Application Layer 

The application layer is provided by the program that uses TCP/IP for communication.
An application is a user process cooperating with another process usually on a different
host (there is also a benefit to application communication within a single host).
Examples of applications include Telnet and the File Transfer Protocol (FTP).

Transport Layer

The transport layer provides the end-to-end data transfer by delivering data from an
application to its remote peer. Multiple applications can be supported simultaneously.
The most-used transport layer protocol is the Transmission Control Protocol (TCP),


which provides connection-oriented reliable data delivery, duplicate data


suppression, congestion control, and flow control.

Another transport layer protocol is the User Datagram Protocol (UDP). It provides
connectionless, unreliable, best-effort service. As a result, applications using UDP
as the transport protocol have to provide their own end-to-end integrity, flow control,
and congestion control, if desired. Usually, UDP is used by applications that need a
fast transport mechanism and can tolerate the loss of some data.

Internetwork Layer

The internetwork layer, also called the internet layer or the network layer, provides
the "virtual network" image of an internet; this layer shields the higher levels from the
physical network architecture below it. Internet Protocol (IP) is the most important
protocol in this layer. It is a connectionless protocol that does not assume reliability
from the lower layers. IP does not provide reliability, flow control, or error recovery.

These functions must be provided at a higher level. IP provides a routing function that
attempts to deliver transmitted messages to their destination. A message unit in an IP
network is called an IP datagram.

This is the basic unit of information transmitted across TCP/IP networks. Other
internetwork-layer protocols besides IP are ICMP, IGMP, ARP, and RARP.

Network Interface Layer

The network interface layer, also called the link layer or the data-link layer or Host to
Network Layer, is the interface to the actual network hardware. This interface may or
may not provide reliable delivery, and may be packet or stream oriented.

In fact, TCP/IP does not specify any protocol here, but can use almost any network
interface available, which illustrates the flexibility of the IP layer. Examples are IEEE
802.2, X.25, ATM, FDDI, and even SNA. TCP/IP specifications do not describe or
standardize any network-layer protocols here; they only standardize ways of accessing those
protocols from the internetwork layer.

                                        


Comparison of OSI Reference Model and TCP/IP Reference Model


Following are some major differences between OSI Reference Model and TCP/IP Reference
Model, with diagrammatic comparison below.

OSI (Open System Interconnection) | TCP/IP (Transmission Control Protocol / Internet Protocol)
1. OSI is a generic, protocol-independent standard, acting as a communication gateway between the network and the end user. | 1. The TCP/IP model is based on the standard protocols around which the Internet has developed. It is a communication protocol suite which allows connection of hosts over a network.
2. In the OSI model the transport layer guarantees the delivery of packets. | 2. In the TCP/IP model the transport layer does not guarantee delivery of packets. Still, the TCP/IP model is more reliable.
3. Follows a vertical approach. | 3. Follows a horizontal approach.
4. The OSI model has a separate Presentation layer and Session layer. | 4. TCP/IP does not have a separate Presentation layer or Session layer.
5. OSI is a reference model around which networks are built. Generally it is used as a guidance tool. | 5. The TCP/IP model is, in a way, an implementation of the OSI model.
6. The network layer of the OSI model provides both connection-oriented and connectionless service. | 6. The network layer in the TCP/IP model provides connectionless service.
7. The OSI model has the problem of fitting protocols into the model. | 7. The TCP/IP model does not face this problem, since it was built around its protocols.
8. Protocols are hidden in the OSI model and are easily replaced as the technology changes. | 8. In TCP/IP, replacing a protocol is not easy.
9. The OSI model defines services, interfaces and protocols very clearly and makes a clear distinction between them. It is protocol independent. | 9. In TCP/IP, services, interfaces and protocols are not clearly separated. It is also protocol dependent.
10. It has 7 layers. | 10. It has 4 layers.

Section-C

Service Provided To Network Layer and Framing


Data Link layer

The main task of the data link layer is to transform a raw transmission facility into a line that appears
free of undetected transmission errors to the network layer. It deals with the algorithms for
achieving reliable, efficient communication between two adjacent machines. It is responsible for
moving the message from one device to another: it takes the data from the physical layer and
provides services to the network layer.

Major responsibility of Data Link Layer are:

1. Logical Link Control (LLC)


2. Medium Access Control (MAC)
3. Data Framing
4. Physical Addressing
5. Error Detection and Handling
6. Flow Control

 1) Logical Link Control

Logical link control refers to the function required for establishment and control of logical links
between logical devices on a network.

2) Medium Access Control

This refers to the procedure used by the devices to control the access to the network medium.
Since many networks use a shared medium, it is necessary to have a rule for managing the
medium to avoid the conflict.

 3) Data Framing

The data link layer is responsible for final encapsulation of the higher level message into frames
that are sent over the network.

 4) Physical Addressing

Each device on a network has a unique number, usually called the hardware address or MAC address. It
is used by the data link layer to ensure that data intended for a specific machine is delivered to it properly.

 5) Error Detection and Handling

The data link layer handles errors that occur at the lower levels of the network stack.


6) Flow Control

The data link layer controls the flow of data between the two stations.

 Service Provided To Network Layer

The service provided by the data link layer to the network layer is the transmission of data from the
source network layer to the destination network layer. This can be done in three ways:

1. Unacknowledged Connectionless Service
2. Acknowledged Connectionless Service
3. Acknowledged Connection-Oriented Service

1) Unacknowledged Connectionless Service:

No logical connection is established before a message is transferred. If a frame is lost in
the medium, the receiver does not inform the sender about it. There is no error
recovery mechanism, so the service is suitable when the error rate is low.

 2) Acknowledged Connectionless Service:

In this type of service, the data link layer sends a frame and waits for it to be
acknowledged. If the acknowledgment does not arrive before the timer expires, the sender
retransmits the frame.
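A minimal sketch of this timeout-and-retransmit behaviour; the send_frame and wait_for_ack callables are placeholders standing in for a real link implementation.

```python
# Sketch: send a frame, wait for an ACK, and retransmit on timeout.

def send_with_ack(frame, send_frame, wait_for_ack, timeout=2.0, max_tries=3):
    """Return True once the frame is acknowledged, False after max_tries."""
    for _ in range(max_tries):
        send_frame(frame)              # transmit (or retransmit) the frame
        if wait_for_ack(timeout):      # True if an ACK arrived before the timeout
            return True
    return False                       # give up after repeated timeouts
```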

 3) Acknowledged Connection-Oriented Service:

A connection is established before the frames are sent. This service guarantees that each frame is
received exactly once and in the right order. There is a logical connection set up between sender
and receiver. The process of communication follows three steps:

 A logical connection is set up between the sender and receiver.
 Data is transmitted.
 After data transmission is complete, the logical connection is terminated.

 Framing

While transmitting a message from sender to receiver, a large message is broken down
into small data units called frames. The process of forming the frames is called framing.
If the message is transmitted without being broken into frames, it may monopolize the transmission
line, and if there is an error in the message, the whole message has to be retransmitted.

A frame is a digital data transmission unit in computer networking and
telecommunication. A frame typically includes frame synchronization features consisting of a
sequence of bits or symbols that indicate to the receiver the beginning and end of the payload data
within the stream of symbols or bits it receives. If a receiver is connected to the system in the
middle of a frame transmission, it ignores the data until it detects a new frame
synchronization sequence.

In the OSI model of computer networking, a frame is the protocol data unit at the data
link layer. Frames are the result of the final layer of encapsulation before the
data is transmitted over the physical layer. A frame is the unit of transmission in a
link layer protocol, and consists of a link layer header followed by a packet. Each
frame is separated from the next by an interframe gap. A frame is a
series of bits generally composed of framing bits, the packet payload, and a frame check
sequence. Examples are Ethernet (LAN) frames, Point-to-Point Protocol (PPP) frames, Fibre Channel frames,
and V.42 modem frames.

In telecommunications, specifically in time-division multiplexing (TDM) and time-division multiple
access (TDMA) variants, a frame is a cyclically repeated data block that consists
of a fixed number of time slots, one for every logical TDM channel or TDMA
transmitter. In this context, a frame is usually an entity at the physical
layer. TDM application examples are SONET/SDH and the ISDN circuit-switched
B-channel, whereas TDMA examples are the 2G and 3G circuit-switched
cellular voice services. The frame is also an entity in time-division
duplex, where the mobile terminal may transmit during some time slots and receive
during others.

Frames can be categorized into two types:

1. Variable-Size Frames
2. Fixed-Size Frames

 Framing Techniques

Breaking the bit stream into frames is a significant task in the network. One way to
achieve this is to insert timing gaps between the frames, or to insert starting and ending
markers. Following are the approaches used to delimit frames:

1. Character Count
2. Character Stuffing
3. Bit Stuffing
4. Physical Layer Code Violation

 1) Character Count

This is the simplest method of framing. In this technique, the first field of the frame specifies
how many characters the frame contains. Only the start of a frame is marked, so if the count
field is corrupted the receiver loses synchronization and the rest of the message has to be
retransmitted.
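A minimal sketch of character-count framing, assuming a one-byte count field that includes itself (a common textbook convention):

```python
# Character-count framing: each frame begins with a count covering itself.

def frame_by_count(messages):
    stream = bytearray()
    for msg in messages:
        stream.append(len(msg) + 1)            # count field = payload + itself
        stream.extend(msg)
    return bytes(stream)

def deframe_by_count(stream: bytes):
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]                      # a corrupted count here loses sync
        frames.append(stream[i + 1:i + count])
        i += count
    return frames

data = frame_by_count([b"HELLO", b"WORLD"])
print(deframe_by_count(data))                  # [b'HELLO', b'WORLD']
```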

59
Common Reference Material BSBC 603 [Computer Network] BCA 6th

 2) Character Stuffing

In this method of framing, the start and the end of a frame are marked with special characters.
Suppose we use ST and ED to mark the start and end of the frame respectively, and we need to
transmit a message like:

Hi everyone: Good morning

If a control character itself is part of the message to be transmitted, the receiver cannot tell
which occurrence is a control character and which is regular data. So a special character,
DLE (Data Link Escape), is stuffed into the message before each accidental control character.
At the receiving side, the stuffed character is removed and the remaining message is passed
to the higher layer.
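A sketch of byte (character) stuffing, using the classic DLE STX / DLE ETX convention in place of the ST and ED markers mentioned above; any DLE that happens to occur inside the payload is doubled so the receiver does not mistake it for a control sequence.

```python
# Byte stuffing sketch: DLE STX opens a frame, DLE ETX closes it,
# and any DLE inside the payload is doubled.

DLE, STX, ETX = b"\x10", b"\x02", b"\x03"

def stuff(payload: bytes) -> bytes:
    return DLE + STX + payload.replace(DLE, DLE + DLE) + DLE + ETX

def unstuff(frame: bytes) -> bytes:
    body = frame[len(DLE + STX):-len(DLE + ETX)]   # strip start/end markers
    return body.replace(DLE + DLE, DLE)            # undo the stuffing

msg = b"Hi" + DLE + b"there"          # payload that happens to contain a DLE
assert unstuff(stuff(msg)) == msg
```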

 3) Bit Stuffing

In this method, a special bit pattern, 01111110 (the flag), indicates the start and end of a
frame. For example, if we need to transmit the data 10110011011, the framing will be:

01111110 10110011011 01111110

Start flag / Data / End flag

The problem with this method is that the flag sequence 01111110 may itself appear as part of the
message. To prevent this, the sender stuffs a 0 after every five consecutive 1s in the data; at the
receiving end this stuffed 0 is removed and the regular data is passed to the higher layer.
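A sketch of bit stuffing and unstuffing with the 01111110 flag, modelling the bit stream as a string of '0'/'1' characters for readability:

```python
FLAG = "01111110"

def bit_stuff(data: str) -> str:
    out, ones = [], 0
    for bit in data:
        out.append(bit)
        ones = ones + 1 if bit == "1" else 0
        if ones == 5:               # five 1s in a row: stuff a 0
            out.append("0")
            ones = 0
    return FLAG + "".join(out) + FLAG

def bit_unstuff(frame: str) -> str:
    body = frame[len(FLAG):-len(FLAG)]
    out, ones, skip = [], 0, False
    for bit in body:
        if skip:                    # this is the stuffed 0: drop it
            skip, ones = False, 0
            continue
        out.append(bit)
        ones = ones + 1 if bit == "1" else 0
        if ones == 5:
            skip = True             # the next bit was stuffed by the sender
    return "".join(out)

assert bit_unstuff(bit_stuff("011111101111101")) == "011111101111101"
```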

 4) Physical Layer Code Violation

In this method, a data bit 0 is represented by a high-to-low transition and a data bit 1 is
represented by a low-to-high transition. The high-high and low-low combinations are not used
for data, so these code violations can be used to indicate the start and end of a frame.

Error Detection & Correction

There are many reasons, such as noise, cross-talk etc., which may cause data to get corrupted
during transmission. The upper layers work on a generalized view of the network architecture
and are not aware of the actual hardware data processing. Hence, the upper layers expect error-free
transmission between the systems. Most applications would not function as expected if they
received erroneous data. Applications such as voice and video may not be as affected and may
still function well with some errors.


The data link layer uses error control mechanisms to ensure that frames (data bit streams) are
transmitted with a certain level of accuracy. But to understand how errors are controlled, it is
essential to know what types of errors may occur.

Types of Errors
There may be three types of errors:

 Single-bit error

Only one bit of the frame, anywhere in it, is corrupted.

 Multiple-bit error

The frame is received with more than one bit in a corrupted state.

 Burst error

The frame contains more than one consecutive corrupted bit.

Error control mechanism may involve two possible ways:

 Error detection
 Error correction

Error Detection
Errors in the received frames are detected by means of the parity check and the cyclic redundancy
check (CRC). In both cases, a few extra bits are sent along with the actual data to confirm that the bits
received at the other end are the same as they were sent. If the counter-check at the receiver's end fails,
the bits are considered corrupted.

Parity Check

One extra bit is sent along with the original bits to make number of 1s either even in case of even
parity, or odd in case of odd parity.

While creating a frame, the sender counts the number of 1s in it. For example, if even parity is
used and the number of 1s is even, then a bit with value 0 is added; this way the number of 1s
remains even. If the number of 1s is odd, a bit with value 1 is added to make it even.

The receiver simply counts the number of 1s in the frame. If the count of 1s is even and even
parity is used, the frame is considered not corrupted and is accepted. Likewise, if the count of 1s
is odd and odd parity is used, the frame is considered not corrupted.

If a single bit flips in transit, the receiver can detect it by counting the number of 1s. But when
more than one bit is erroneous, it is very hard for the receiver to detect the error.
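
A minimal even-parity sketch in Python (illustrative only) shows both the sender's and the
receiver's side of this check:

def add_even_parity(bits: str) -> str:
    parity = "0" if bits.count("1") % 2 == 0 else "1"   # make the total count of 1s even
    return bits + parity

def check_even_parity(framed: str) -> bool:
    return framed.count("1") % 2 == 0                   # True means "no error detected"

framed = add_even_parity("1100101")        # 4 ones -> parity bit 0
assert check_even_parity(framed)
corrupted = "0" + framed[1:]               # flip the first bit in transit
assert not check_even_parity(corrupted)    # the single-bit error is detected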

Cyclic Redundancy Check (CRC)

CRC is a different approach to detect if the received frame contains valid data. This technique
involves binary division of the data bits being sent. The divisor is generated using polynomials.
The sender performs a division operation on the bits being sent and calculates the remainder.
Before sending the actual bits, the sender adds the remainder at the end of the actual bits. Actual
data bits plus the remainder is called a codeword. The sender transmits data bits as codewords.


At the other end, the receiver performs the division operation on the codeword using the same CRC
divisor. If the remainder contains all zeros, the data bits are accepted; otherwise, it is assumed
that some data corruption occurred in transit.
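
The following Python sketch illustrates the idea with modulo-2 (XOR) division on bit strings; the
divisor 1101 is an arbitrary generator chosen only for this example:

def crc_remainder(bits: str, divisor: str) -> str:
    padded = list(bits + "0" * (len(divisor) - 1))      # append zeros for the remainder
    for i in range(len(bits)):
        if padded[i] == "1":                            # XOR the divisor in at this position
            for j, d in enumerate(divisor):
                padded[i + j] = "0" if padded[i + j] == d else "1"
    return "".join(padded[-(len(divisor) - 1):])

data, divisor = "100100", "1101"
codeword = data + crc_remainder(data, divisor)          # sender appends the remainder
assert crc_remainder(codeword, divisor) == "0" * (len(divisor) - 1)   # all-zero remainder: accept
print(codeword)                                         # 100100001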

Error Correction
In the digital world, error correction can be done in two ways:

 Backward Error Correction  When the receiver detects an error in the data received, it
requests back the sender to retransmit the data unit.
 Forward Error Correction  When the receiver detects some error in the data received, it
executes error-correcting code, which helps it to auto-recover and to correct some kinds
of errors.

The first one, Backward Error Correction, is simple and can be used efficiently only where
retransmission is not expensive, for example over fiber optics. But in the case of wireless
transmission, retransmission may cost too much, so in that case Forward Error Correction is used.

To correct an error in a data frame, the receiver must know exactly which bit in the frame is
corrupted. To locate the bit in error, redundant bits are used as parity bits for error detection.
For example, if we take an ASCII word (7 data bits), then there are 8 kinds of information we need:
seven states to tell us which bit is in error and one more state to tell that there is no error.


For m data bits, r redundant bits are used. r bits can provide 2^r combinations of information. In
an m+r bit codeword, there is the possibility that the r bits themselves may get corrupted. So the
number of redundant bits r must be able to indicate all m+r bit locations plus the no-error case,
i.e. we need 2^r >= m + r + 1.
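
As a quick check of this condition, a small Python sketch can compute the smallest r satisfying
2^r >= m + r + 1:

def redundant_bits(m: int) -> int:
    r = 0
    while 2 ** r < m + r + 1:   # r bits must cover m+r positions plus the "no error" case
        r += 1
    return r

print(redundant_bits(7))        # 7 data bits (one ASCII character) need 4 redundant bits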

Flow Control

Definition - What does Flow Control mean?


Flow control is the mechanism that ensures the rate at which a sender is transmitting is in
proportion with the receiver’s receiving capabilities.

Flow control is utilized in data communications to manage the flow of data/packets among two
different nodes, especially in cases where the sending device can send data much faster than the
receiver can digest.

Networks of any size have many different devices connected and each device has unique data
transmission parameters. For instance, a router is built to manage the routing of data whereas a
desktop, at the receiving end of that data, has far less sending/receiving abilities.

These differences in sending/receiving abilities may lead to problems if the sender starts
transmitting data faster than the receiving node can handle. To counteract this, flow control is
used. This technique manages the flow of data between nodes, keeping the sending and receiving
capabilities of both nodes as the primary concern.

Xon-Xoff is an example of a flow control protocol that keeps the sender in step with the receiver.
The receiver transmits a "transmit off" (XOFF) signal when it no longer has space in its buffer and
a "transmit on" (XON) signal when it can resume taking data. Xon-Xoff works on asynchronous serial
connections.

PPP and SLIP protocols


The majority of people, not having lines (cable or Ethernet) linked directly to the Internet, must
use telephone lines (the most widely used network) to connect to the Internet. The connection is
made using a modem, a device capable of converting digital data from the computer into
analogue signals (that can circulate on telephone lines by amplitude or frequency modulation, in
the same way as voice when you use the telephone).

Considering that only two computers are communicating and the speed of a telephone line is
slow in comparison to that of a local network, it is necessary to use a protocol enabling standard
communication between the different machines using a modem, and not overload the telephone
line. These protocols are called modem protocols.


The notion of a point to point link


Via a standard telephone line, a maximum of two computers can communicate using a modem,
in the same way that it is impossible to call two people simultaneously using the same telephone
line. This is thus called a point to point link, i.e. a link between two machines reduced to its
most simple expression: there is no need to share the line between several machines, each one
speaks and responds in turn.

So, many modem protocols have been developed. The first of them allowed a single transmission
of data between two machines, then some of them were equipped with error control and with the
growth of the Internet, were equipped with the ability to address machines. In this way, there are
now two main modem protocols:

 SLIP: an older protocol with little control


 PPP: the most widely used protocol for accessing the Internet via a modem; it supports
addressing of machines

The SLIP protocol


SLIP means Serial Line Internet Protocol. SLIP is the result of the integration of modem
protocols prior to the suite of TCP/IP protocols.

It is a simple Internet link protocol that performs neither addressing nor error control, which is
why it is quickly becoming obsolete in comparison to PPP.

Data transmission with SLIP is very simple: this protocol sends a frame composed only of data
to be sent followed by an end of transmission character (the END character, the ASCII code of
which is 192). A SLIP frame looks like this:

Data to be transmitted END
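
Following this simplified description, a short Python sketch of SLIP framing might look as below
(real SLIP additionally escapes any END bytes that occur inside the data, which is omitted here):

END = 0xC0                              # 192, the SLIP end-of-frame character

def slip_frame(payload: bytes) -> bytes:
    return payload + bytes([END])       # simplified SLIP: data followed by END

print(slip_frame(b"hello").hex())       # 68656c6c6fc0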

The PPP protocol


PPP means Point to Point Protocol. It is a much more developed protocol than SLIP (which is
why it is replacing it), insofar as it transfers additional data, better suited to data transmission
over the Internet (the addition of data in a frame is mainly due to the increase in bandwidth).


In reality, PPP is a collection of three protocols:

 a datagram encapsulation protocol


 an LCP, Link Control Protocol, enabling testing and communication configuration
 a collection of NCPs, Network Control Protocols allowing integration control of PPP
within the protocols of the upper layers

Data encapsulated in a PPP frame is called a packet. These packets are generally datagrams, but
can also be different (hence the specific designation of packet instead of datagram). So, one field
of the frame is reserved for the type of protocol to which the packet belongs. A PPP frame looks
like this:

Protocol (1-2 bytes) Data to be transmitted Padding data

The padding data is used to adapt the length of the frame for certain protocols.

A PPP session (from opening to closure) takes place as follows:

 Upon connection, an LCP packet is sent


 In the event of an authentication request from the server, a packet relating to an
authentication protocol may be sent (PAP, Password Authentication Protocol, or CHAP,
Challenge Handshake Authentication Protocol or Kerberos)
 Once communication is established, PPP sends configuration information using the NCP
protocol
 Datagrams to be sent are transmitted as packets
 Upon disconnection, an LCP packet is sent to end the session

Multiplexing and Types of Multiplexing

Multiplexing is the set of techniques that allows the simultaneous transmission of multiple
signals across a single data link. Whenever the bandwidth of a medium linking two devices is
greater than the bandwidth needs of the devices, the link can be shared. In a multiplexed system,
n lines share the bandwidth of one link.

The following figure shows the basic format of a multiplexed system. The lines on the left direct
their transmission streams to a multiplexer (MUX), which combines them into a single stream
(many-to-one). At the receiving end, that stream is fed into a demultiplexer (DEMUX), which
separates the stream back into its component transmissions (one-to-many) and directs them to
their corresponding lines.


The three basic multiplexing techniques are

1. Frequency-division multiplexing

2. Wavelength-division multiplexing

3. Time-division multiplexing.

Frequency-division multiplexing

Frequency-division multiplexing (FDM): The spectrum of each input signal is shifted to a distinct
frequency range.

Frequency-division multiplexing (FDM) is inherently an analog technology. FDM achieves the
combining of several signals into one medium by sending signals in several distinct frequency
ranges over a single medium. In FDM the signals are electrical signals. One of the most
common applications for FDM is traditional radio and television broadcasting from terrestrial,
mobile or satellite stations, or cable television. Only one cable reaches a customer's residential
area, but the service provider can send multiple television channels or signals simultaneously
over that cable to all subscribers without interference. Receivers must tune to the appropriate
frequency (channel) to access the desired signal.


A variant technology, called wavelength-division multiplexing (WDM), is used in optical
communications.

Time-division multiplexing

Time-division multiplexing (TDM).

Time-division multiplexing (TDM) is a digital (or in rare cases, analog) technology which uses
time, instead of space or frequency, to separate the different data streams. TDM involves
sequencing groups of a few bits or bytes from each individual input stream, one after the other,
and in such a way that they can be associated with the appropriate receiver. If done sufficiently
quickly, the receiving devices will not detect that some of the circuit time was used to serve
another logical communication path.
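
A minimal Python sketch of synchronous TDM (one character per slot from each input line, purely for
illustration):

def tdm_mux(streams):
    frame = []
    for slot in zip(*streams):          # take one unit from each stream, in fixed order
        frame.extend(slot)
    return "".join(frame)

def tdm_demux(link, n_streams):
    return ["".join(link[i::n_streams]) for i in range(n_streams)]

streams = ["AAAA", "BBBB", "CCCC", "DDDD"]      # e.g. four low-speed terminals
link = tdm_mux(streams)                         # "ABCDABCDABCDABCD" on the shared line
print(tdm_demux(link, 4))                       # the demultiplexer recovers the four streams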

Consider an application requiring four terminals at an airport to reach a central computer. Each
terminal communicates at 2400 baud, so rather than acquire four individual circuits to carry such
a low-speed transmission, the airline has installed a pair of multiplexers. A pair of 9600 baud
modems and one dedicated analog communications circuit from the airport ticket desk back to
the airline data center are also installed. [1] Some web proxy servers (e.g. polipo) use TDM in
HTTP pipelining of multiple HTTP transactions onto the same TCP/IP connection.

Carrier sense multiple access and multidrop communication methods are similar to time-division
multiplexing in that multiple data streams are separated by time on the same medium, but
because the signals have separate origins instead of being combined into a single signal, they are
best viewed as channel access methods rather than as a form of multiplexing.

CDMA
Code Division Multiple Access (CDMA) is a form of multiplexing that allows various signals to
occupy a single transmission channel. It optimizes the use of available bandwidth. The technology is
commonly used in ultra-high-frequency (UHF) cellular telephone systems, in bands ranging between
800 MHz and 1.9 GHz.

Code Division Multiple Access system is very different from time and frequency multiplexing.
In this system, a user has access to the whole bandwidth for the entire duration. The basic
principle is that different CDMA codes are used to distinguish among the different users.

Techniques generally used are direct sequence spread spectrum modulation (DS-CDMA),
frequency hopping or mixed CDMA detection (JDCDMA). Here, a signal is generated which
extends over a wide bandwidth. A code called spreading code is used to perform this action.


Using a group of codes, which are orthogonal to each other, it is possible to select a signal with a
given code in the presence of many other signals with different orthogonal codes.

How Does CDMA Work?


CDMA allows up to 61 concurrent users in a 1.2288 MHz channel by processing each voice
packet with two PN codes. There are 64 Walsh codes available to differentiate between calls, which
sets the theoretical limit; operational limits and quality issues reduce the maximum number of
calls to somewhat lower than this value.

In fact, many different baseband signals with different spreading codes can be modulated onto
the same carrier to allow many different users to be supported. Using different orthogonal codes,
interference between the signals is minimal. Conversely, when signals are received from several
mobile stations, the base station is capable of isolating each as they have different orthogonal
spreading codes.

The following figure shows the working of the CDMA system. During propagation, the signals of all
users are mixed on the channel, but by applying at the receiving side the same code that was used at
the sending side, the signal of each individual user can be extracted.
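
A tiny Python sketch (illustrative assumptions only) of spreading and despreading with two
orthogonal, Walsh-like codes:

code_a = [+1, +1, -1, -1]               # orthogonal codes: their dot product is zero
code_b = [+1, -1, +1, -1]

def spread(bit, code):
    return [bit * c for c in code]      # multiply the data bit (+1/-1) chip by chip

def despread(channel, code):
    corr = sum(x * c for x, c in zip(channel, code))
    return +1 if corr > 0 else -1       # correlate with the user's own code

# User A sends +1 and user B sends -1; their chips simply add on the shared channel.
channel = [a + b for a, b in zip(spread(+1, code_a), spread(-1, code_b))]
print(despread(channel, code_a), despread(channel, code_b))   # +1 -1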

CDMA Capacity
The factors deciding the CDMA capacity are −

 Processing Gain
 Signal to Noise Ratio


 Voice Activity Factor


 Frequency Reuse Efficiency

Capacity in CDMA is soft: CDMA has all users on each frequency, and users are separated by code.
This means CDMA operates in the presence of noise and interference.

In addition, neighboring cells use the same frequencies, which means there is no frequency re-use
in the conventional sense. So CDMA capacity calculations look very simple: the number of code
channels in a cell multiplied by the number of cells. But it is not that simple. Although 64 code
channels are available, it may not be possible to use them all at the same time, since every cell
operates on the same CDMA frequency.

Centralized Methods
 The band used in CDMA is 824 MHz to 894 MHz (50 MHz + 20 MHz separation).
 Frequency channel is divided into code channels.
 1.25 MHz of FDMA channel is divided into 64 code channels.

Processing Gain
CDMA is a spread spectrum technique: each data bit is spread by a code sequence, so the energy per
bit is spread over a much wider bandwidth, and we get a processing gain from this.

P(gain) = 10 log10(W/R)

where W is the spread (chip) rate and R is the data rate.

For CDMA, P(gain) = 10 log10(1228800/9600) = 21 dB

This gain is relative to the actual data rate. On average, a typical transmission condition requires
a signal-to-noise ratio of 7 dB for adequate voice quality.

Translated into a ratio, signal must be five times stronger than noise.

Actual processing gain = P(gain) - SNR = 21 - 7 = 14 dB

CDMA uses a variable-rate voice coder, and a voice activity factor of 0.4 is assumed, i.e. about
-4 dB.

CDMA has 100% frequency reuse, but use of the same frequency in surrounding cells causes some
additional interference.


In CDMA, the frequency reuse efficiency is 0.67, i.e. about -1.73 dB.
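
The arithmetic above can be reproduced in a few lines of Python (values taken from the text):

import math

chip_rate, data_rate = 1228800, 9600                  # W and R
p_gain = 10 * math.log10(chip_rate / data_rate)       # about 21 dB
effective = p_gain - 7                                # subtract the 7 dB SNR requirement
print(round(p_gain), round(effective))                # 21 14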

Advantages of CDMA
CDMA has a soft capacity: the greater the number of codes, the greater the number of users that can
be supported. It has the following advantages −

 CDMA requires tight power control, as it suffers from the near-far effect. In other words, a
user near the base station transmitting with the same power would drown out the signals of users
farther away; all signals must arrive at the receiver with more or less equal power.
 Rake receivers can be used to improve signal reception. Delayed copies of the signal (multipath
components arriving a chip time or more apart) can be collected and combined to make decisions at
the bit level.
 Flexible (soft) handover may be used: a mobile can switch base stations without changing
operator; two base stations receive the mobile's signal and the mobile receives signals from both
base stations.
 Transmission Burst − reduces interference.

Disadvantages of CDMA
The disadvantages of using CDMA are as follows −

 The code length must be carefully selected. A large code length can induce delay or may
cause interference.
 Time synchronization is required.
 Gradual (soft) handover increases the use of radio resources and may reduce capacity.
 Because the total power received and transmitted by a base station must be kept under control,
constant tight power control is needed, which can result in several handovers.


Section D
MAC sublayer: CSMA/CD/CA

To reduce the impact of collisions on network performance, Ethernet uses an algorithm called CSMA
with Collision Detection (CSMA/CD). CSMA/CD is a protocol in which the station senses the carrier or
channel before transmitting a frame, just as in persistent and non-persistent CSMA. If the channel
is busy, the station waits. While transmitting, the station also listens to the medium to make sure
there is no collision with a packet sent by another station. If a collision occurs, the sender
immediately aborts the transmission. This limits the duration of collisions: no time is wasted
sending a complete packet once a collision has been detected. After a collision, the transmitter
waits for the channel to become silent and then waits a further random time before retrying; after
each new collision this random waiting window is roughly doubled (unless it has already reached a
maximum). This is called exponential back-off. Once a packet is transmitted successfully, the
window returns to its original size.
Again, this is what we do naturally in a meeting room: if several people start speaking at exactly
the same time, they realize it immediately (since they listen while they speak) and stop without
completing their sentence. After a while, one of them speaks again. If a new collision occurs, both
stop again and tend to wait a little longer before speaking again.

The entire scheme of CSMA/CD is depicted in the fig.


Frame format of CSMA/CD


The frame format specified by IEEE 802.3 standard contains following fields.

1. Preamble: It is a seven-byte (56-bit) field that provides bit synchronization. It consists of
alternating 0s and 1s. Its purpose is to provide an alert and a timing pulse.
2. Start Frame Delimiter (SFD): It is a one-byte field with the unique pattern 10101011. It marks
the beginning of the frame.
3. Destination Address (DA): It is a six-byte field that contains the physical address of the
packet's destination.
4. Source Address (SA): It is also a six-byte field and contains the physical address of the source
or of the last device to forward the packet (the most recent router on the way to the receiver).
5. Length: This two-byte field specifies the length, i.e. the number of bytes, in the data field.


6. Data: It can be 46 to 1500 bytes, depending upon the type of frame and the length of the
information field.
7. Frame Check Sequence (FCS): This four-byte field contains a CRC for error detection.
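
For illustration, the sketch below slices these fields out of a raw IEEE 802.3 frame in Python. It
assumes the 7-byte preamble and the SFD are still attached, which in practice are usually stripped
by the hardware:

def parse_8023(frame: bytes) -> dict:
    return {
        "preamble": frame[0:7],
        "sfd": frame[7:8],
        "dst": frame[8:14].hex(),                     # destination MAC address
        "src": frame[14:20].hex(),                    # source MAC address
        "length": int.from_bytes(frame[20:22], "big"),
        "data": frame[22:-4],                         # 46 to 1500 bytes of payload
        "fcs": frame[-4:].hex(),                      # CRC for error detection
    }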

CSMA/CD Procedure:
Fig. Shows a flow chart for the CSMA/CD protocol.

Explanation:
• The station that has a frame ready sets the back-off parameter to zero.
• Then it senses the line using one of the persistent strategies.
• It then sends the frame. If there is no collision for a period corresponding to one complete
frame, the transmission is successful.
• Otherwise the station sends the jam signal to inform the other stations about the collision.

• The station then increments the back off time and waits for a random back off time and sends
the frame again.
• If the back off has reached its limit then the station aborts the transmission.
• CSMA/CD is used for traditional Ethernet.
• CSMA/CD is an important protocol. IEEE 802.3 (Ethernet) is an example of CSMA/CD and is an
international standard.
• The MAC sublayer protocol does not guarantee reliable delivery. Even in the absence of a
collision, the receiver may not have copied the frame correctly.
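
The back-off step of this procedure can be sketched in Python as follows (the attempt limit of 15
and the cap at 2^10 follow common Ethernet practice and are assumptions of this example):

import random

def backoff_slots(attempt, max_attempts=15):
    if attempt > max_attempts:
        return None                         # give up and abort the transmission
    k = min(attempt, 10)                    # the collision window stops doubling at 2^10
    return random.randint(0, 2 ** k - 1)    # wait this many slot times before retrying

print([backoff_slots(a) for a in range(1, 5)])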

Difference Between CSMA CA and CSMA CD


CSMA CA vs CSMA CD
Carrier Sense Multiple Access or CSMA is a Media Access Control (MAC) protocol that is used
to control the flow of data in a transmission media so that packets do not get lost and data
integrity is maintained. There are two modifications to CSMA, the CSMA CD (Collision
Detection) and CSMA CA (Collision Avoidance), each having its own strengths.

CSMA operates by sensing the state of the medium in order to prevent or recover from a
collision. A collision happens when two transmitters transmit at the same time. The data gets
scrambled, and the receivers would not be able to discern one from the other thereby causing the
information to get lost. The lost information needs to be resent so that the receiver will get it.

CSMA CD operates by detecting the occurrence of a collision. Once a collision is detected,


CSMA CD immediately terminates the transmission so that the transmitter does not have to waste a
lot of time continuing. The lost information can then be retransmitted. In comparison,
CSMA CA does not deal with the recovery after a collision. What it does is to check whether the
medium is in use. If it is busy, then the transmitter waits until it is idle before it starts
transmitting. This effectively minimizes the possibility of collisions and makes more efficient
use of the medium.

Another difference between CSMA CD and CSMA CA is where they are typically used. CSMA
CD is used mostly in wired installations because it is possible to detect whether a collision has
occurred. With wireless installations, it is not possible for the transmitter to detect whether a
collision has occurred or not. That is why wireless installations often use CSMA CA instead of
CSMA CD.

Most people do not really have to deal with access control protocols as they work behind the
scenes in order for our devices to work together. CSMA CD has also fallen out of favor in modern
wired networks, as it was only necessary with hubs and not with modern switches, which forward
frames to the intended port instead of broadcasting them.

IEEE standards(IEEE802.3 Ethernet, GigabitEthernet,IEEE802.4 Token Bus, IEEE


802.5Token Ring)

1) IEEE802.3 Ethernet

Description: Ethernet local area network operation is specified for selected speeds of operation
from 1 Mb/s to 100 Gb/s using a common media access control (MAC) specification and
management information base (MIB). The Carrier Sense Multiple Access with Collision
Detection (CSMA/CD) MAC protocol specifies shared medium (half duplex) operation, as well
as full duplex operation. Speed specific Media Independent Interfaces (MIIs) allow use of
selected Physical Layer devices (PHY) for operation over coaxial, twisted pair or fiber optic
cables, or electrical backplanes. System considerations for multisegment shared access networks
describe the use of Repeaters which are defined for operational speeds up to 1000 Mb/s. Local
Area Network (LAN) operation is supported at all speeds. Other specified capabilities include:
various PHY types for access networks, PHYs suitable for metropolitan area network
applications, and the provision of power over selected twisted pair PHY types. (The PDF of this
standard is available at no cost, compliments of the GETIEEE802 program.)

Working Group: WG802.3 - Ethernet Working Group
Oversight Committee: C/LM - LAN/MAN Standards Committee
Sponsor: C - IEEE Computer Society

2) Gigabit Ethernet

Gigabit Ethernet and ATM

A few main factors drive network scalability on the campus. First, bandwidth and latency
performance become more important as existing and emerging applications are and will be
requiring higher bandwidth. The typical 80/20 rule (80 percent of the network traffic is local
compared to 20 percent to the backbone) is being reversed such that 80 percent of the traffic is
now destined for the backbone. This setup requires the backbone to have higher bandwidth and
switching capacity.

Both ATM and Gigabit Ethernet solve the issue of bandwidth. ATM provides a migration from
25 Mbps at the desktop, to 155 Mbps from the wiring closet to the core, to 622 Mbps within the
core. All this technology is available and shipping today. ATM also promises 2.4 Gbps of
bandwidth via OC-48, which was available and standard at the end of 1997. Ethernet currently
provides 10 Mbps to the desktop, with 100 Mbps to the core. Cisco has provided Fast
EtherChannel as a mechanism of scaling the core bandwidth and providing a migration to
Gigabit Ethernet.

Second, a scalable campus networking architecture must account for existing desktops and
networking protocols. This scenario forces compatibility with current desktop PCs, servers,
mainframes, and cabling plants. Large enterprise networks have invested millions of dollars into
this infrastructure. Also, existing LAN protocols must be supported in some way in order to ensure
a smooth migration.

Quality of service (QoS) has increased in visibility, as network managers require some traffic to
have higher-priority access to the network relative to other traffic, particularly over the WAN.
The options for QoS include guaranteed QoS, where a particular user or "flow" is guaranteed
performance; class of service (CoS), which provides best-effort QoS; and, finally, increased
bandwidth such that contention for that bandwidth is no longer an issue.

Ethernet promises to provide CoS by mapping priority within the network to mechanisms such as
Resource Reservation Protocol (RSVP) for IP as well as other mechanisms for Internetwork
Packet Exchange (IPX). ATM guarantees QoS within the backbone and over the WAN by using
such mechanisms as available bit rate (ABR), constant bit rate (CBR), variable bit rate (VBR),
and unspecified bit rate (UBR).

Both ATM and Ethernet attempt to solve similar application-type problems. Traditionally,
Ethernet and Fast Ethernet have been utilized for high-speed backbone and riser connectivity. A
common application, for example, is to provide switched or group-switched 10 Mbps to each
desktop, with Fast Ethernet connectivity to and within the core. This scenario can be
accomplished at a relatively low cost. Gigabit Ethernet promises to continue scaling that
bandwidth further. Recently, ATM has also been utilized to build campus-wide backbones at a
moderate price range. However, the key benefit of ATM has been seen in the metropolitan-area
network and in the wide-area network. WAN integration and compatibility has been a significant
driver in scaling campus networks. The importance of integrating data types such as voice,
video, and data over a WAN has been a significant driver for service integration and will be key
in reducing the cost of WAN services and maintenance.

Migration to Gigabit Ethernet

Related Standards
The following sections briefly summarize three related IEEE standards.


IEEE 802.1p
Quality of service has become increasingly important to network managers. In June 1998, the IEEE
802.1p committee standardized a means for an individual end station to request a particular QoS
from the network, and for the network to be able to respond accordingly. This standard also
specifies multicast group management.

A new protocol is defined in 802.1p, generic attribute registration protocol (GARP). GARP is a
generic protocol that will be used by specific GARP applications; for example, GARP multicast
registration protocol (GMRP), and GARP VLAN registration protocol (GVRP). GMRP is
defined in 802.1p; GMRP provides registration services for multicast MAC address groups.

IEEE 802.1Q
The introduction of virtual LANs (VLANs) into switched internetworks has created significant
advantages to networking vendors because they can offer value-added features to their products
such as VLAN trunking, reduction in spanning-tree recalculations effects, and broadcast control.
However, with the exception of ATM LAN emulation, there is no industry standard means of
creating VLANs.

The 802.1Q committee has worked to create standards-based VLANs. This standard is based on
a frame-tagging mechanism that will work over Ethernet, Fast Ethernet, Token Ring, and FDDI.
The standard will allow a means of VLAN tagging over switches and routers and will allow
vendor VLAN interoperability. GVRP has been introduced in 802.1Q; this protocol provides
registration services for VLAN membership.

IEEE 802.3x
The IEEE 802.3x committee standardized a method of flow control for full-duplex Ethernet. This
mechanism is set up between the two stations on the point-to-point link. If the receiving station
at the end becomes congested, it can send back a frame called a "pause frame" to the source at
the opposite end of the connection, instructing that station to stop sending packets for a specific
period of time. The sending station waits the requested time before sending more data. The
receiving station can also send a frame back to the source with a time-to-wait of zero, instructing
the source to begin sending data again.

3) IEEE802.4 Token Bus

Description:
Type: Physical and data link layer protocols.

The Data Link Layer is divided into two sublayers:


 Logical Link Control (LLC). This sublayer establishes the transmission paths between
computers on a network.
 Media Access Control (MAC). On a network, the network interface card (NIC) has a unique
hardware address which identifies a computer or peripheral device. This hardware address is used
for MAC sublayer addressing.

IEEE 802.2 LLC fields are optional. But to transfer IP packets over IEEE 802 networks, the
IEEE LLC and SNAP fields are utilized.

RFC 1042:

IP datagrams are sent on IEEE 802 networks encapsulated within the 802.2 LLC and SNAP data
link layers, and the 802.3, 802.4, or 802.5 physical networks layers. The SNAP is used with an
Organization Code indicating that the following 16 bits specify the Ether Type code.

In IEEE 802.4, the token passing scheme is used in place of Carrier Sense Multiple Access with
Collision Detection (CSMA/CD) on a bus local area network (LAN). A token is circulated on a
network. The computer that has possession of the token has the right to transmit packets for a
certain period of time. If that computer has no packets to transmit then the token is passed to the
next computer. Only one computer at a time can transmit packets so this helps to avoid collision
problems.
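
A toy Python simulation of this token-passing idea (station names and frames are made up for the
example):

stations = {"A": ["frame1"], "B": [], "C": ["frame2", "frame3"]}
ring_order = ["A", "B", "C"]            # the logical ring the token travels around

holder = 0
for _ in range(6):                      # six token rotations
    name = ring_order[holder]
    if stations[name]:                  # only the token holder may transmit
        print(name, "sends", stations[name].pop(0))
    holder = (holder + 1) % len(ring_order)   # pass the token to the next station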

4) IEEE 802.5Token Ring

 Token Ring is formed by the nodes connected in ring format as shown in the diagram
below. The principle used in the token ring network is that a token is circulating in the
ring and whichever node grabs that token will have the right to transmit the data.
 Whenever a station wants to transmit a frame, it inverts a single bit of the 3-byte token,
which instantaneously changes it into a normal data packet. Because there is only one
token, there can be at most one transmission at a time.
 Since the token rotates in the ring, it is guaranteed that every node gets the token within
some specified time. So there is an upper bound on the waiting time to grab the token, and
starvation is avoided.
 There is also an upper limit of 250 on the number of nodes in the network.
 To distinguish the normal data packets from token (control packet) a special sequence is
assigned to the token packet. When any node gets the token it first sends the data it wants
to send, then recirculates the token.

If a node transmits the token and nobody wants to send data, the token comes back to the sender. If
the first bit of the token reaches the sender before the transmission of the last bit, an error
situation arises. To avoid this we should have:

propagation delay + transmission time of n bits (1-bit delay at each node) > token transmission time


A station may hold the token for the token-holding time, which is 10 ms unless the installation
sets a different value. If there is enough time left after the first frame has been transmitted to
send more frames, then these frames may be sent as well. After all pending frames have been
transmitted, or when transmitting another frame would exceed the token-holding time, the station
regenerates the 3-byte token frame and puts it back on the ring.

Modes of Operation

1. Listen Mode: In this mode the node listens to the data and transmits the data to the next
node. In this mode there is a one-bit delay associated with the transmission.

2. Transmit Mode: In this mode the node discards any incoming data and puts its own data onto
the network.

3. By-pass Mode: This mode is reached when the node is down. Any data is simply bypassed; there is
no one-bit delay in this mode.

Token Ring Using Ring Concentrator

One problem with a ring network is that if the cable breaks somewhere, the ring dies. This
problem is elegantly addressed by using a ring concentrator. A Token Ring concentrator simply
changes the topology from a physical ring to a star wired ring. But the network still remains a
ring logically. Physically, each station is connected to the ring concentrator (wire center) by a
cable containing at least two twisted pairs, one for data to the station and one for data from the
station. The Token still circulates around the network and is still controlled in the same manner,
however, using a hub or a switch greatly improves reliability because the hub can automatically
bypass any ports that are disconnected or have a cabling fault. This is done by having bypass
relays inside the concentrator that are energized by current from the stations. If the ring breaks or
station goes down, loss of the drive current will release the relay and bypass the station. The ring
can then continue operation with the bad segment bypassed.

Who should remove the packet from the ring ?


There are 3 possibilities-

1. The source itself removes the packet after one full round in the ring.
2. The destination removes it after accepting it: This has two potential problems. Firstly,
the solution won't work for broadcast or multicast, and secondly, there would be no way
to acknowledge the sender about the receipt of the packet.
3. Have a specialized node only to discard packets: This is a bad solution as the
specialized node would know that the packet has been received by the destination only
when it receives the packet the second time, and by that time the packet may have actually made
about one and a half (or almost two in the worst case) rounds in the ring.

Thus the first solution is adopted: the source itself removes the packet from the ring after one
full round. With this scheme, broadcasting and multicasting can be handled, and the destination can
acknowledge the source about the receipt of the packet (or can tell the source about some error).

Token Format

The token is the shortest frame transmitted (24 bit)


MSB (Most Significant Bit) is always transmitted first - as opposed to Ethernet

SD AC ED

SD = Starting Delimiter (1 Octet)


AC = Access Control (1 Octet)
ED = Ending Delimiter (1 Octet)

Starting Delimiter Format:


J K 0 J K 0 0 0

J = Code Violation 
K = Code Violation

Access Control Format:


P P P T M R R R

T=Token
T = 0  for Token 
T = 1  for Frame
When a station with a Frame to transmit detects a token which has a priority equal to or less than
the Frame to be transmitted, it may change the token to a start-of-frame sequence and transmit
the Frame

P = Priority
The priority bits indicate the token's priority and, therefore, which stations are allowed to use
it. A station can transmit if its priority is at least as high as that of the token.

M = Monitor 
The monitor bit is used to prevent a token whose priority is greater than 0 or any frame from
continuously circulating on the ring. If an active monitor detects a frame or a high priority token
with the monitor bit equal to 1, the frame or token is aborted. This bit shall be transmitted as 0
in all frames and tokens. The active monitor inspects and modifies this bit. All other stations
shall repeat this bit as received.

R = Reserved bits 
The reserved bits allow stations with high-priority frames to request that the next token be issued
at the requested priority.

Ending Delimiter Format:


J K 1 J K 1 I E

J = Code Violation 
K = Code Violation 
I = Intermediate Frame Bit 
E = Error Detected Bit

Frame Format:

MSB (Most Significant Bit) is always transmitted first - as opposed to Ethernet

SD AC FC DA SA DATA CRC ED FS

SD=Starting Delimiter(1 octet)


AC=Access Control(1 octet)
FC = Frame Control (1 Octet)
DA = Destination Address (2 or 6 Octets)
SA = Source Address (2 or 6 Octets)
DATA = Information 0 or more octets up to 4027 
CRC = Checksum(4 Octets)
ED = Ending Delimiter (1 Octet)
FS=Frame Status

Starting Delimiter Format:


J K 0 J K 0 0 0

J = Code Violation
K = Code Violation

Access Control Format:


P P P T M R R R

82
Common Reference Material BSBC 603 [Computer Network] BCA 6th

T=Token

T = “0” for Token,


T = “1” for Frame.

When a station with a Frame to transmit detects a token which has a priority equal to or less than
the Frame to be transmitted, it may change the token to a start-of-frame sequence and transmit
the Frame.

P = Priority
The priority bits indicate the token's priority and, therefore, which stations are allowed to use
it. A station can transmit if its priority is at least as high as that of the token.

M = Monitor
The monitor bit is used to prevent a token whose priority is greater than 0, or any frame, from
continuously circulating on the ring. If an active monitor detects a frame or a high-priority token
with the monitor bit equal to 1, the frame or token is aborted. This bit shall be transmitted as 0
in all frames and tokens. The active monitor inspects and modifies this bit. All other stations
shall repeat this bit as received.

R = Reserved bits
The reserved bits allow stations with high-priority frames to request that the next token be issued
at the requested priority.

Frame Control Format:


F F CONTROL BITS (6 BITS)

FF = Type of packet: regular data packet or MAC layer packet


Control Bits= Used if the packet is for MAC layer protocol itself

Source and Destination Address Format:

The addresses can be of 2 bytes (local address) or 6 bytes (global address).

local address format:

I/G (1 BIT) NODE ADDRESS (15 BITS)

alternatively

I/G (1 BIT) RING ADDRESS (7 BITS) NODE ADDRESS (8 BITS)

The first bit specifies individual or group address.

83
Common Reference Material BSBC 603 [Computer Network] BCA 6th

universal (global) address format:

I/G (1 BIT) L/U (1 BIT) RING ADDRESS (14 BITS) NODE ADDRESS (32 BITS)

The first bit specifies individual or group address.


The second bit specifies local or global (universal) address.

local group addresses (16 bits):

I/G (1 BIT) T/B(1 BIT) GROUP ADDRESS (14 BITS)

The first bit specifies an individual or group address.


The second bit specifies traditional or bit signature group address.

Traditional Group Address: 2^14 groups can be defined.


Bit Signature Group Address: 14 groups are defined. A host can be a member of none or any
number of them. For multicasting, the bits of the groups to which the packet should go are set. For
broadcasting, all 14 bits are set. A host receives a packet only if it is a member of a group whose
corresponding bit is set to 1.

universal group addresses (16 bits):

I/G (1 BIT) RING NUMBER T/B (1 BIT) GROUP ADDRESS (14 BITS)

The description is similar to as above.

Data Format:

No upper limit on amount of data as such, but it is limited by the token holding time.

Checksum:

The source computes and sets this value. The destination also calculates this value. If the two are
different, it indicates an error; otherwise, the data may be correct.

Frame Status:

It contains the A and C bits.

A bit set to 1: destination recognized the packet.


C bit set to 1: destination accepted the packet.

84
Common Reference Material BSBC 603 [Computer Network] BCA 6th

This arrangement provides an automatic acknowledgement for each frame. The A and C bits are present
twice in the Frame Status to increase reliability, inasmuch as they are not covered by the
checksum.

Ending Delimiter Format:


J K 1 J K 1 I E

J = Code Violation
K = Code Violation
I = Intermediate Frame Bit
If this bit is set to 1, it indicates that this packet is an intermediate part of a bigger packet;
the last packet would have this bit set to 0.
E = Error Detected Bit
This bit is set if any interface detects an error.

This concludes our description of the token ring frame format.

The Network Layer

Design Issues

What is Network Layer?


The network layer is concerned with getting packets from the source all the way to the destination.
The packets may need to make many hops at intermediate routers while reaching the destination. This
is the lowest layer that deals with end-to-end transmission. In order to achieve its goals, the
network layer must know about the topology of the communication network. It must also take care to
choose routes so as to avoid overloading some of the communication lines while leaving others idle.
The network layer-transport layer interface is frequently the interface between the carrier and the
customer, that is, the boundary of the subnet. The functions of this layer include:

1. Routing - The process of transferring packets received from the Data Link Layer of the
source network to the Data Link Layer of the correct destination network is called routing. It
involves decision making at each intermediate node about where to send the packet next so that it
eventually reaches its destination. The node which makes this choice is called a router. Routing
requires some mode of addressing which is recognized by the Network Layer; this addressing is
different from the MAC layer addressing.
2. Inter-networking - The network layer is the same across all physical networks (such as
Token Ring and Ethernet). Thus, if two physically different networks have to communicate, the
packets that arrive at the Data Link Layer of the node which connects these two physically
different networks are stripped of their headers and passed to the Network Layer. The network
layer then passes this data to the Data Link Layer of the other physical network.


3. Congestion Control - If the incoming rate of packets arriving at a router is more than the
outgoing rate, congestion is said to occur. Congestion may be caused by many factors. If packets
suddenly begin arriving on many input lines and all need the same output line, a queue will build
up. If there is insufficient memory to hold all of them, packets will be lost. But even if routers
have an infinite amount of memory, congestion gets worse, because by the time packets reach the
front of the queue, they have already timed out (repeatedly) and duplicates have been sent. All
these packets are dutifully forwarded to the next router, increasing the load all the way to the
destination. Another reason for congestion is slow processors. If a router's CPU is slow at
performing the bookkeeping tasks required of it, queues can build up even though there is excess
line capacity. Similarly, low-bandwidth lines can also cause congestion.

We will now look at these functions one by one. 

Addressing Scheme 
IP addresses are of 4 bytes and consist of : 
i) The network address, followed by
ii) The host address
The first part identifies a network on which the host resides and the second part identifies the
particular host on the given network. Some nodes which have more than one interface to a
network must be assigned separate internet addresses for each interface. This multi-layer
addressing makes it easier to find and deliver data to the destination. A fixed size for each of
these would lead to wastage or under-usage that is either there will be too many network
addresses and few hosts in each (which causes problems for routers who route based on the
network address) or there will be very few network addresses and lots of hosts (which will be a
waste for small network requirements). Thus, we do away with any notion of fixed sizes for the
network and host addresses. 
We classify networks as follows:

1. Large Networks : 8-bit network address and 24-bit host address. There are
approximately 16 million hosts per network and a maximum of 126 ( 2^7 - 2 ) Class A
networks can be defined. The calculation requires that 2 be subtracted because 0.0.0.0 is
reserved for use as the default route and 127.0.0.0 be reserved for the loop back function.
Moreover each Class A network can support a maximum of 16,777,214 (2^24 - 2) hosts
per network. The host calculation requires that 2 be subtracted because all 0's are
reserved to identify the network itself and all 1s are reserved for broadcast addresses. The
reserved numbers may not be assigned to individual hosts.
2. Medium Networks : 16-bit network address and 16-bit host address. There are
approximately 65000 hosts per network and a maximum of 16,384 (2^14) Class B
networks can be defined with up to (2^16-2) hosts per network.
3. Small networks : 24-bit network address and 8-bit host address. There are approximately
250 hosts per network.


You might think that Large and Medium networks are sort of a waste as few
corporations/organizations are large enough to have 65000 different hosts. (By the way, there are
very few corporations in the world with even close to 65000 employees, and even in these
corporations it is highly unlikely that each employee has his/her own computer connected to the
network.) Well, if you think so, you're right. This decision seems to have been a mistake.

Address Classes

The IP specifications divide addresses into the following classes :

 Class A - For large networks

0 7 bits of the network address 24 bits of host address


 Class B - For medium networks

1 0 14 bits of the network address 16 bits of host address


 Class C - For small networks

1 1 0 21 bits of the network address 8 bits of host address


 Class D - For multi-cast messages ( multi-cast to a "group" of networks )

1 1 1 0 28 bits for some sort of group address


 Class E - Currently unused, reserved for potential uses in the future

1 1 1 1 28 bits
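
A small Python sketch that classifies an address by its leading bits, following the table above:

def address_class(ip: str) -> str:
    first = int(ip.split(".")[0])       # the first octet determines the class
    if first < 128: return "A"
    if first < 192: return "B"
    if first < 224: return "C"
    if first < 240: return "D"
    return "E"

print(address_class("10.1.2.3"), address_class("150.29.1.1"), address_class("224.0.0.1"))  # A B D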

Internet Protocol
Special Addresses : There are some special IP addresses : 

1. Broadcast Addresses They are of two types :


(i) Limited Broadcast : It consists of all 1's, i.e., the address is 255.255.255.255. It is
used only on the LAN, and not for any external network.
(ii) Directed Broadcast : It consists of the network number + all other bits as 1's. It reaches
the router corresponding to the network number, and from there it broadcasts to all the
nodes in the network. This method is a major security problem, and is not used anymore.
So now if we find that all the bits are 1 in the host no. field, then the packet is simply
dropped. Therefore, now we can only do broadcast in our own network using Limited
Broadcast.
2. Network ID = 0
It means we are referring to this network and for local broadcast we make the host ID
zero.
3. Host ID = 0
This is used to refer to the entire network in the routing table.
4. Loop-back Address
Here we have addresses of the type 127.x.y.z It goes down way upto the IP layer and
comes back to the application layer on the same host. This is used to test network
applications before they are used commercially.

Subnetting 
Subnetting means organizing hierarchies within the network by dividing the host ID as per our
network. For example consider the network ID : 150.29.x.y 
We could organize the remaining 16 bits in any way, like : 
4 bits - department
4 bits - LAN
8 bits - host
This gives some structure to the host IDs. This division is not visible to the outside world. They
still see just the network number, and host number (as a whole). The network will have an
internal routing table which stores information about which router to send an address to. Now
consider the case where we have : 8 bits - subnet number, and 8 bits - host number. Each router
on the network must know about all subnet numbers. This is called the subnet mask. We put the
network number and subnet number bits as 1 and the host bits as 0. Therefore, in this example
the subnet mask becomes : 255.255.255.0 . The hosts also need to know the subnet mask when
they send a packet. To find if two addresses are on the same subnet, we can AND the source address
with the subnet mask and the destination address with the subnet mask, and see if the two results
are the same. The basic reason for subnetting was to avoid broadcasts. But if, at the lower level,
our switches are smart enough to send directed messages, then we do not need subnetting. However,
subnetting has some security-related advantages.
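
The AND-with-mask test described above can be written directly in Python using the standard
ipaddress module:

from ipaddress import IPv4Address

def same_subnet(a: str, b: str, mask: str) -> bool:
    m = int(IPv4Address(mask))
    return int(IPv4Address(a)) & m == int(IPv4Address(b)) & m

print(same_subnet("150.29.5.10", "150.29.5.200", "255.255.255.0"))   # True
print(same_subnet("150.29.5.10", "150.29.6.20", "255.255.255.0"))    # False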

Supernetting 
This is a move towards classless addressing. We could say that the network number is 21 bits
(covering 8 class C networks), or equivalently that it is a given 24-bit class C network together
with the 7 network numbers following it. For example, a.b.c.d / 21 means that only the first 21
bits are treated as the network address.

Addressing on IITK Network


If we do not have connection with the outside world directly then we could have Private IP
addresses (172.31) which are not to be publicised or routed to the outside world. Switches will
make sure that they do not broadcast packets with such addresses to the outside world. The basic
reason for implementing subnetting was to avoid broadcasts. So in our case we can have some subnets
for security and other reasons, although if the switches could do the routing properly, we would
not need subnets. In the IITK network we have three subnets: the CC and the CSE building are two
subnets, and the rest of the campus is one subnet.

Packet Structure

Version (4 bits) | Header Length (4 bits) | Type of Service (8 bits) | Total Length (16 bits)
ID (16 bits) | Flags (3 bits) | Fragment Offset (13 bits)
Time To Live (8 bits) | Protocol (8 bits) | Header Checksum (16 bits)
Source Address (32 bits)
Destination Address (32 bits)
Options

Version Number : The current version is Version 4 (0100).

1. Header Length : We could have multiple sized headers so we need this field. Header
will always be a multiple of 4bytes and so we can have a maximum length of the field as
15, so the maximum size of the header is 60 bytes ( 20 bytes are mandatory ).
2. Type Of Service (ToS) : This helps the router in taking the right routing decisions. The
structure is :
First three bits : They specify the precedence, i.e. the priority of the packets. 
Next three bits :
o D bit - D stands for delay. If the D bit is set to 1, then this means that the
application is delay sensitive, so we should try to route the packet with minimum
delay.
o T bit - T stands for throughput. This tells us that this particular operation is
throughput sensitive.
o R bit - R stands for reliability. This tells us that we should route this packet
through a more reliable network.

Last two bits: The last two bits are never used. Unfortunately, no router in this world
looks at these bits and so no application sets them nowadays. The second word of the header is
meant for handling fragmentation. If a link cannot transmit large packets, then we fragment the
packet and put sufficient information in the header for reassembly at the destination.


3. ID Field : The source address and the ID field together identify the fragments of a unique
packet, so all fragments of the same packet carry the same ID.
4. Offset : It is a 13-bit field that represents where in the original packet the current fragment
starts. The offset is measured in units of 8 bytes, so the packet size can be at most 64 kB. Every
fragment except the last one must have its size in bytes as a multiple of 8 in order to
ensure compliance with this structure. The reason why the position of a fragment is given
as an offset value instead of simply numbering each packet is because refragmentation
may occur somewhere on the path to the other node. Fragmentation, though supported by
IPv4 is not encouraged. This is because if even one fragment is lost the entire packet
needs to be discarded. A quantity M.T.U (Maximum Transmission Unit) is defined for
each link in the route. It is the size of the largest packet that can be handled by the link.
The Path-M.T.U is then defined as the size of the largest packet that can be handled by
the path. It is the smallest of all the MTUs along the path. Given information about the
path MTU we can send packets with sizes smaller than the path MTU and thus prevent
fragmentation. This will not completely prevent it because routing tables may change
leading to a change in the path.
5. Flags :It has three bits -
o M bit : If M is one, then there are more fragments on the way and if M is 0, then it
is the last fragment
o DF bit : If this bit is sent to 1, then we should not fragment such a packet.
o Reserved bit : This bit is not used.

Reassembly can be done only at the destination and not at any intermediate node. This is
because we are considering Datagram Service and so it is not guaranteed that all the
fragments of the packet will be sent through the node at which we wish to do
reassembly.

6. Total Length : It includes the IP header and everything that comes after it.
7. Time To Live (TTL) : Using this field, we can set the time within which the packet
should be delivered or else destroyed. It is strictly treated as the number of hops. The
packet should reach the destination in this number of hops. Every router decreases the
value as the packet goes through it, and if this value becomes zero at a particular router, the
packet is destroyed.
8. Protocol : This specifies the module to which we should hand over the packet ( UDP or
TCP ). It is the next encapsulated protocol. 
Value                                    Protocol 
0                                    IPv6 Hop-by-Hop Option. 
1                                    ICMP, Internet Control Message Protocol. 
2                                    IGMP, Internet Group Management Protocol. RGMP, Router-
port Group Management Protocol. 
3                                    GGP, Gateway to Gateway Protocol. 
4                                    IP in IP encapsulation. 
5                                    ST, Internet Stream Protocol. 
6                                    TCP, Transmission Control Protocol. 
7                                    UCL, CBT. 
8                                    EGP, Exterior Gateway Protocol. 


9                                    IGRP. 
10                                    BBN RCC Monitoring. 
11                                    NVP, Network Voice Protocol. 
12                                    PUP. 
13                                    ARGUS. 
14                                    EMCON, Emission Control Protocol. 
15                                    XNET, Cross Net Debugger. 
16                                    Chaos. 
17                                    UDP, User Datagram Protocol. 
18                                    TMux, Transport Multiplexing Protocol. 
19                                    DCN Measurement Subsystems. 

-
-
255
9. Header Checksum : This is the usual checksum field used to detect errors. Since the
TTL field changes at every router, the header checksum (up to the options field) is checked and
recalculated at every router.
10. Source : It is the IP address of the source node
11. Destination : It is the IP address of the destination node.
12. IP Options : The options field was created in order to allow features to be added to IP
as time passes and requirements change. Currently 5 options are specified, although not
all routers support them. They are:
o Security: It tells us how secret the information is. In theory a military router
might use this field to avoid routing through certain routers. In practice no
routers support this field.
o Source Routing: It is used when we want the source to dictate how the packet
traverses the network. It is of 2 types:
-> Loose Source Record Routing (LSRR): It requires that the packet traverse a
list of specified routers, in the order specified, but the packet may pass through
some other routers as well.
-> Strict Source Record Routing (SSRR): It requires that the packet traverse
only the set of specified routers and nothing else. If this is not possible, the packet is
dropped and an error message is sent to the host.

The option code for SSRR is 137; for LSRR the code is 131.

o Record Routing :

In this option the intermediate routers put their IP addresses in the header, so that the
destination knows the entire path of the packet. Space for storing the IP addresses is
reserved by the source itself. The pointer field points to the position where the
next IP address has to be written. The length field gives the number of bytes reserved
by the source for writing the IP addresses. If the space provided for storing the IP
addresses of the routers visited falls short, the subsequent routers do not write their IP addresses.

o Time Stamp Routing :

It is similar to record route option except that nodes also add their timestamps to
the packet. The new fields in this option are 
-> Flags: It can have the following values
 0 - Enter only the timestamp.
 1 - The nodes should enter the timestamp as well as their IP.
 3 - The source specifies the IPs that should enter their timestamp. A node enters the
time only if its IP is the same as the one at the pointer. Thus if the source specifies IP1
and IP2 but IP2 comes first in the path, the IP2 field is left empty even after the packet
has reached IP2, since IP1 has not yet been reached.

-> Overflow: It stores the number of nodes that were unable to add their
timestamps to the packet. The maximum value is 15.

o Format of the type/code field : It consists of three subfields - Copy bit, Type of option,
and Option number.
 Copy bit: It says whether the option is to be copied to every fragment or
not. A value of 1 stands for copying and 0 stands for not copying.
 Type: It is a 2-bit field. Currently specified values are 0 and 2; 0 means
the option is a control option, while 2 means the option is for measurement.
 Option Number: It is a 5-bit field which specifies the option number.

For all options a length field is included so that a router not familiar with the
option knows how many bytes to skip. Thus every option follows the TLV
(Type/Length/Value) format. This format is followed not only in IP but in nearly
all major protocols.
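To make the TTL and checksum handling in fields 7 and 9 above concrete, here is a minimal Python sketch (not part of the original notes) of how a router might decrement the TTL and recompute the header checksum. The function names and the 20-byte, no-options header in the example are assumptions made purely for illustration; only the standard one's-complement checksum rule and the field offsets (TTL at byte 8, checksum at bytes 10-11) come from the IPv4 header format.

```python
# Minimal sketch (not a full IP implementation): recompute the IPv4 header
# checksum after decrementing TTL, as every router must do. Assumes the header
# is given as bytes with the TTL at byte 8 and the checksum at bytes 10-11.

def ipv4_checksum(header: bytes) -> int:
    """One's-complement sum of all 16-bit words, with the checksum field treated as zero."""
    total = 0
    for i in range(0, len(header), 2):
        word = (header[i] << 8) | header[i + 1]
        if i == 10:                              # skip the checksum field itself
            word = 0
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return (~total) & 0xFFFF

def forward_hop(header: bytearray) -> bool:
    """Decrement TTL and refresh the checksum; return False if the packet must be dropped."""
    if header[8] <= 1:                           # TTL would reach zero at this router
        return False
    header[8] -= 1
    csum = ipv4_checksum(bytes(header))
    header[10], header[11] = csum >> 8, csum & 0xFF
    return True

# Example: a minimal 20-byte header (version/IHL = 0x45, TTL = 64), other fields zero.
hdr = bytearray(20)
hdr[0], hdr[8] = 0x45, 64
csum = ipv4_checksum(bytes(hdr))
hdr[10], hdr[11] = csum >> 8, csum & 0xFF
print(forward_hop(hdr), hdr[8])                  # True 63, with a freshly recomputed checksum
```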

The network layer is concerned with getting packets from the source all the way to the
destination. The packets may require many hops at intermediate routers while
reaching the destination. This is the lowest layer that deals with end-to-end transmission. In order
to achieve its goals, the network layer must know about the topology of the communication
network. It must also take care to choose routes so as to avoid overloading some of the
communication lines while leaving others idle. The main functions performed by the network
layer are as follows:

 Routing
 Congestion Control
 Internetworking

Routing Algorithms

Routing
Routing is the process of forwarding a packet in a network so that it reaches its intended
destination. The main goals of routing are:

1. Correctness: The routing should be done properly and correctly so that the packets may
reach their proper destination.
2. Simplicity: The routing should be done in a simple manner so that the overhead is as low
as possible. With increasing complexity of the routing algorithms the overhead also
increases.
3. Robustness: Once a major network becomes operative, it may be expected to run
continuously for years without any failures. The algorithms designed for routing should
be robust enough to handle hardware and software failures and should be able to cope
with changes in the topology and traffic without requiring all jobs in all hosts to be
aborted and the network rebooted every time some router goes down.
4. Stability: The routing algorithms should be stable under all possible circumstances.
5. Fairness: Every node connected to the network should get a fair chance of transmitting
its packets. This is generally done on a first come first serve basis.
6. Optimality: The routing algorithms should be optimal in terms of throughput and
minimizing mean packet delays. There is a trade-off here, and one has to choose
depending on the requirements.

Classification of Routing Algorithms


The routing algorithms may be classified as follows:

1. Adaptive Routing Algorithm: These algorithms change their routing decisions to reflect
changes in the topology and in traffic as well. These get their routing information from
adjacent routers or from all routers. The optimization parameters are the distance, number
of hops and estimated transit time. This can be further classified as follows:
1. Centralized: In this type some central node in the network gets entire information
about the network topology, about the traffic and about other nodes. This then
transmits this information to the respective routers. The advantage of this is that
only one node is required to keep the information. The disadvantage is that if the
central node goes down the entire network is down, i.e. single point of failure.
2. Isolated: In this method the node decides the routing without seeking information
from other nodes. The sending node does not know about the status of a particular
link. The disadvantage is that the packet may be sent through a congested route,
resulting in delay. Some examples of this type of routing algorithm are:
 Hot Potato: When a packet comes to a node, it tries to get rid of it as fast
as it can, by putting it on the shortest output queue without regard to
where that link leads. A variation of this algorithm is to combine static
routing with the hot potato algorithm. When a packet arrives, the routing
algorithm takes into account both the static weights of the links and the
queue lengths.
 Backward Learning: In this method the routing tables at each node get
modified by information from the incoming packets. One way to
implement backward learning is to include the identity of the source node
in each packet, together with a hop counter that is incremented on each
hop. When a node receives a packet on a particular line, it notes down the
number of hops the packet has taken to reach it from the source node. If the
previously stored hop count is better than the current one, nothing is done;
but if the current value is better, the value is updated for future use. The
problem with this is that when the best route goes down, the node cannot
recall the second best route to a particular node. Hence all the nodes have to
forget the stored information periodically and start all over again.
3. Distributed: In this the node receives information from its neighbouring nodes
and then takes the decision about which way to send the packet. The disadvantage
is that if something changes in the interval between receiving the information and
sending the packet, the packet may be delayed.
2. Non-Adaptive Routing Algorithm: These algorithms do not base their routing decisions
on measurements and estimates of the current traffic and topology. Instead the route to be
taken in going from one node to the other is computed in advance, off-line, and
downloaded to the routers when the network is booted. This is also known as static
routing. This can be further classified as:
1. Flooding: Flooding uses the technique in which every incoming packet is sent
on every outgoing line except the one on which it arrived. One problem with this
method is that packets may go in a loop. As a result a node may receive
several copies of a particular packet, which is undesirable. Some techniques
adopted to overcome these problems are as follows:
 Sequence Numbers: Every packet is given a sequence number. When a
node receives the packet, it checks its source address and sequence number. If
the node finds that it has already forwarded the same packet earlier, it will not
transmit the packet again and will just discard it (a small sketch of this
technique appears after this classification).
 Hop Count: Every packet has a hop count associated with it. This is
decremented(or incremented) by one by each node which sees it. When
the hop count becomes zero(or a maximum possible value) the packet is
dropped.
 Spanning Tree: The packet is sent only on those links that lead to the
destination by constructing a spanning tree rooted at the source. This
avoids loops in transmission but is possible only when all the intermediate
nodes have knowledge of the network topology.

Flooding is not practical for general kinds of applications. But in cases where high
degree of robustness is desired such as in military applications, flooding is of
great help.


2. Random Walk: In this method a packet is sent by the node to one of its
neighbours randomly. This algorithm is highly robust. When the network is
highly interconnected, this algorithm has the property of making excellent use of
alternative routes. It is usually implemented by sending the packet onto the least
queued link.
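To illustrate the sequence-number technique mentioned under Flooding above, here is a small Python sketch (not from the notes; the Node and Packet classes and the link lists are invented for the example): every packet is forwarded on all lines except the arrival line, and a set of (source, sequence number) pairs is used to discard duplicates.

```python
# Illustrative sketch of flooding with duplicate suppression via sequence numbers.

class Packet:
    def __init__(self, source, seq, payload):
        self.source = source
        self.seq = seq
        self.payload = payload

class Node:
    def __init__(self, name, links):
        self.name = name
        self.links = links            # neighbouring nodes
        self.seen = set()             # (source, seq) pairs already flooded

    def receive(self, packet, arrived_from):
        key = (packet.source, packet.seq)
        if key in self.seen:
            return                    # duplicate copy: discard it
        self.seen.add(key)
        for link in self.links:
            if link is not arrived_from:      # never send back on the arrival line
                link.receive(packet, self)

# Tiny example: three fully connected nodes; the packet reaches every node exactly once.
a, b, c = Node("A", []), Node("B", []), Node("C", [])
a.links, b.links, c.links = [b, c], [a, c], [a, b]
a.receive(Packet("A", seq=1, payload="hello"), arrived_from=None)
print(sorted(n.name for n in (a, b, c) if ("A", 1) in n.seen))   # ['A', 'B', 'C']
```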

Delta Routing
Delta routing is a hybrid of the centralized and isolated routing algorithms. Here each node
computes the cost of each line (i.e. some function of the delay, queue length, utilization,
bandwidth etc.) and periodically sends a packet to the central node giving it these values. The
central node then computes the k best paths from node i to node j. Let Cij(1) be the cost of the
best i-j path, Cij(2) the cost of the next best path, and so on. If Cij(n) - Cij(1) < delta (where
Cij(n) is the cost of the n'th best i-j path and delta is some constant), then path n is regarded as
equivalent to the best i-j path, since their costs differ by so little. As delta -> 0 this algorithm
becomes centralized routing, and as delta -> infinity all the paths become equivalent.
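A minimal sketch of the selection rule just described, assuming the central node has already computed the k best path costs; the function, variable names, and the example routes are illustrative only.

```python
# Sketch of the delta-routing selection rule: among the k best path costs,
# all paths whose cost is within 'delta' of the best one are treated as equivalent.

def equivalent_paths(path_costs, delta):
    """path_costs: list of (path, cost) pairs for the k best i-j paths."""
    best_cost = min(cost for _, cost in path_costs)
    return [path for path, cost in path_costs if cost - best_cost < delta]

# Example: with delta = 2, the 10- and 11-cost paths are interchangeable.
routes = [(["i", "a", "j"], 10), (["i", "b", "j"], 11), (["i", "c", "j"], 15)]
print(equivalent_paths(routes, 2))   # [['i', 'a', 'j'], ['i', 'b', 'j']]
```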

Multipath Routing
In the above algorithms it has been assumed that there is a single best path between any pair of
nodes and that all traffic between them should use it. In many networks, however, there are
several paths between pairs of nodes that are almost equally good. Sometimes, in order to
improve performance, multiple paths between a single pair of nodes are used. This technique is
called multipath routing or bifurcated routing. In this each node maintains a table with one row
for each possible destination node. A row gives the best, second best, third best, etc. outgoing line
for that destination, together with a relative weight. Before forwarding a packet, the node
generates a random number and then chooses among the alternatives, using the weights as
probabilities. The tables are worked out manually and loaded into the nodes before the network
is brought up and not changed thereafter.
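The forwarding step of bifurcated routing can be sketched in a few lines of Python; the table contents and names below are invented for illustration, and the relative weights are simply used as probabilities, as described above.

```python
import random

# Sketch of the multipath (bifurcated) forwarding step: one row per destination,
# each row listing alternative outgoing lines with relative weights.
table = {
    "D": [("line1", 0.6), ("line2", 0.3), ("line3", 0.1)],
}

def choose_outgoing_line(destination):
    lines, weights = zip(*table[destination])
    return random.choices(lines, weights=weights, k=1)[0]

print(choose_outgoing_line("D"))   # usually "line1", sometimes one of the alternatives
```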

Hierarchical Routing

In this method of routing the nodes are divided into regions based on a hierarchy. A
particular node can communicate with nodes at the same hierarchical level or with the nodes at
a lower level directly under it. Here the path from any source to a destination is fixed,
and there is exactly one such path if the hierarchy is a tree.

Non-Hierarchical Routing
In this type of routing, interconnected networks are viewed as a single network, where bridges,
routers and gateways are just additional nodes.

 Every node keeps information about every other node in the network
 In case of adaptive routing, the routing calculations are done and updated for all the
nodes.


The above two are also the disadvantages of non-hierarchical routing, since the table sizes and
the routing calculations become too large as the networks get bigger. So this type of routing is
feasible only for small networks.

Hierarchical Routing
This is essentially a 'Divide and Conquer' strategy. The network is divided into different regions
and a router for a particular region knows only about its own domain and other routers. Thus, the
network is viewed at two levels:

1. The Sub-network level, where each node in a region has information about its peers in the
same region and about the region's interface with other regions. Different regions may
have different 'local' routing algorithms. Each local algorithm handles the traffic between
nodes of the same region and also directs the outgoing packets to the appropriate
interface.
2. The Network Level, where each region is considered as a single node connected to its
interface nodes. The routing algorithms at this level handle the routing of packets
between two interface nodes, and are isolated from intra-regional transfer.

Networks can be organized in hierarchies of many levels; e.g. local networks of a city at one
level, the cities of a country at a level above it, and finally the network of all nations.

In Hierarchical routing, the interfaces need to store information about:

 All nodes in its region which are at one level below it.
 Its peer interfaces.
 At least one interface at a level above it, for outgoing packets.

Advantages of Hierarchical Routing :

 Smaller sizes of routing tables.


 Substantially lesser calculations and updates of routing tables.

Disadvantage :

 Once the hierarchy is imposed on the network, it is followed and the possibility of direct
paths is ignored. This may lead to sub-optimal routing.
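A rough sketch of the two-level lookup implied by hierarchical routing, under the assumption that an address is written as a (region, node) pair; all table contents and names are invented for illustration. A router keeps per-node entries only for its own region and a single entry per foreign region, which is what keeps its table small.

```python
# Illustrative two-level hierarchical lookup (all table contents invented).
MY_REGION = "R1"
node_table   = {"A": "line1", "B": "line2"}      # destinations inside R1
region_table = {"R2": "line3", "R3": "line3"}    # one entry per foreign region

def next_hop(region, node):
    if region == MY_REGION:
        return node_table[node]          # intra-region: exact node lookup
    return region_table[region]          # inter-region: route toward the region's interface

print(next_hop("R1", "B"))   # line2
print(next_hop("R3", "X"))   # line3 -- the identity of node X is irrelevant here
```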

Source Routing
Source routing is similar in concept to virtual circuit routing. It is implemented as under:

 Initially, a path between nodes wishing to communicate is found out, either by flooding
or by any other suitable method.
 This route is then specified in the header of each packet routed between these two nodes.
A route may also be specified partially, or in terms of some intermediate hops.


Advantages:

 Bridges do not need to look up their routing tables, since the path is already specified in
the packet itself.
 The throughput of the bridges is higher, and this may lead to better utilization of
bandwidth, once a route is established.

Disadvantages:

 Establishing the route at first needs an expensive search method like flooding.
 To cope with the dynamic relocation of nodes in a network, frequent updates of the tables are
required, else packets would be sent in the wrong direction. This too is expensive.
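The following toy sketch (all names invented, not from the notes) illustrates the idea: the route is computed once and carried in the packet header, and each intermediate bridge simply advances a pointer rather than consulting a routing table.

```python
# Toy illustration of source routing: the route is found once (e.g. by flooding)
# and carried in the packet header; bridges need no table lookup.

class SourceRoutedPacket:
    def __init__(self, route, payload):
        self.route = route        # e.g. ["S", "B1", "B2", "D"]
        self.pointer = 0          # index of the hop currently holding the packet
        self.payload = payload

    def next_hop(self):
        if self.pointer + 1 >= len(self.route):
            return None           # already at the destination
        self.pointer += 1
        return self.route[self.pointer]

pkt = SourceRoutedPacket(["S", "B1", "B2", "D"], "hello")
hop = pkt.next_hop()
while hop is not None:
    print("forwarding to", hop)
    hop = pkt.next_hop()
```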

Policy Based Routing


In this type of routing, certain restrictions are put on the type of packets accepted and sent. For
example, the IIT-K router may decide to handle traffic pertaining to its departments only, and
reject packets from other routes. This kind of routing is used for links with very low capacity or
for security purposes.

Shortest Path Routing


Here, the central question dealt with is 'How to determine the optimal path for routing ?' Various
algorithms are used to determine the optimal routes with respect to some predetermined criteria.
A network is represented as a graph, with its terminals as nodes and the links as edges. A 'length'
is associated with each edge, which represents the cost of using the link for transmission. The
lower the cost, the more suitable the link. The cost is determined depending upon the criteria to be
optimized. Some of the important ways of determining the cost are:

 Minimum number of hops: If each link is given a unit cost, the shortest path is the one
with minimum number of hops. Such a route is easily obtained by a breadth first search
method. This is easy to implement but ignores load, link capacity etc.
 Transmission and Propagation Delays: If the cost is fixed as a function of transmission
and propagation delays, it will reflect the link capacities and the geographical distances.
However these costs are essentially static and do not consider the varying load
conditions.
 Queuing Delays: If the cost of a link is determined through its queuing delays, it takes
care of the varying load conditions, but not of the propagation delays.

Ideally, the cost parameter should consider all the above mentioned factors, and it should be
updated periodically to reflect the changes in the loading conditions. However, if the routes are
changed according to the load, the load changes again. This feedback effect between routing and
load can lead to undesirable oscillations and sudden swings.


Routing Algorithms
As mentioned above, the shortest paths are calculated using suitable algorithms on the graph
representations of the networks. Let the network be represented by a graph G(V, E) and let the
number of nodes be N. For all the algorithms discussed below, the costs associated with the
links are assumed to be positive. A node has zero cost w.r.t. itself. Further, all the links are
assumed to be symmetric, i.e. if di,j = cost of the link from node i to node j, then di,j = dj,i. The
graph is assumed to be complete: if there exists no edge between two nodes, then a link of
infinite cost is assumed. The algorithms given below find the costs of the paths from all nodes to a
particular node; the problem is equivalent to finding the cost of paths from a source to all
destinations.
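For the algorithm sketches that follow, the network can be represented in Python as a cost matrix in the spirit of the assumptions above: symmetric costs, zero cost from a node to itself, and infinite cost where no edge exists. The small example network below is invented purely for illustration, with node 0 playing the role of "node 1" in the text.

```python
# Example cost matrix reused by the algorithm sketches below.
# INF marks node pairs with no direct link; the matrix is symmetric.
INF = float("inf")

d = [
    [0,   4,   1,   INF],
    [4,   0,   2,   5  ],
    [1,   2,   0,   8  ],
    [INF, 5,   8,   0  ],
]
N = len(d)
```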

Bellman-Ford Algorithm
This algorithm iterates on the number of edges in a path to obtain the shortest path. Since the
number of hops possible is limited (cycles are implicitly not allowed),  the algorithm terminates
giving the shortest path.

Notation:
    di,j    =  Length of the path between nodes i and j, indicating the cost of the link.
    h       =  Number of hops.
    D[i,h]  =  Shortest path length from node i to node 1, using up to h hops.
    D[1,h]  =  0 for all h.

Algorithm:

    Initial condition :  D[i,0] = infinity, for all i (i != 1)

    Iteration         :  D[i,h+1] = min { di,j + D[j,h] }, taken over all values of j.

    Termination       :  The algorithm terminates when D[i,h] = D[i,h+1] for all i.

Principle:
For zero hops, the minimum length path has a length of infinity, for every node. For one hop, the
shortest-path length associated with a node is equal to the length of the edge between that node
and node 1. Thereafter, we increment the number of hops allowed (from h to h+1) and find out
whether a shorter path exists through each of the other nodes. If it exists, say through node j,
then its length must be the sum of the length between these two nodes (i.e. di,j) and the shortest
path between j and 1 obtainable in up to h hops. If such a path doesn't exist, the path length
remains the same. The algorithm is guaranteed to terminate, since there are at most N nodes, and
so at most N-1 hops in any path. It has a time complexity of O(N^3).
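Below is a direct Python transcription of the iteration above (not from the notes); it reuses a cost matrix d of the kind shown in the graph sketch earlier, and D_prev / D_next play the roles of D[i,h] and D[i,h+1].

```python
INF = float("inf")

# Bellman-Ford over hop counts, following the recurrence in the text:
# D[i, h+1] = min over j of (di,j + D[j, h]), starting from D[i, 0] = infinity
# for every node except the target. 'd' is a complete, symmetric cost matrix.

def bellman_ford(d, target=0):
    N = len(d)
    D_prev = [INF] * N
    D_prev[target] = 0                       # with zero hops, only the target itself costs 0
    while True:
        D_next = [min(d[i][j] + D_prev[j] for j in range(N)) for i in range(N)]
        D_next[target] = 0
        if D_next == D_prev:                 # D[i, h] == D[i, h+1] for all i: terminate
            return D_next
        D_prev = D_next

# Example: bellman_ford(d), with the matrix from the graph sketch, returns the
# shortest-path cost from every node to node 0.
```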


Dijkstra's Algorithm

Notation:
    Di    =  Length of the shortest path from node i to node 1.
    di,j  =  Length of the path between nodes i and j.

Algorithm
Each node j  is  labeled with Dj, which is an estimate of cost of path from node j to node 1.
Initially, let the estimates be infinity, indicating that nothing is known about the paths.  We now
iterate on the length of paths, each time revising our estimate to lower values, as we obtain them.
Actually, we divide the nodes into two groups ; the first one, called set P contains the nodes
whose shortest distances have been found, and the other Q containing all the remaining nodes.
Initially P contains only the node 1. At each step,  we select the node that has minimum cost path
to node 1. This node is transferred to set P.  At the first step, this corresponds to shifting the node
closest to 1 in P. Its minimum cost to node 1 is now known. At the next step, select the next
closest node from set Q and update the labels corresponding to each node using :

Dj = min [ Dj, Di + dj,i ]

Finally, after N-1 iterations, the  shortest paths for all nodes are known, and the algorithm
terminates. 

Principle
Let the closest node to 1 at some step be i. Then i is shifted to P. Now, for each node j , the
closest path to 1 either passes through i or it doesn't.  In the first case Dj remains the same. In the
second case, the revised estimate of Dj is the sum Di  +  di,j . So we take the minimum of these
two cases and update Dj accordingly.  As each of the nodes get transferred to set P, the estimates
get closer to the lowest possible value. When a node is transferred, its shortest path length is
known. So finally all the nodes are in P and the Dj's represent the minimum costs. The algorithm
is guaranteed to terminate in N-1 iterations and its complexity is O(N^2).
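A Python sketch of the algorithm just described (not from the notes). The textbook version scans all labels to find the closest node in Q, giving O(N^2); this sketch uses a heap for that selection step, which is a common implementation choice, but the label-update rule is exactly Dj = min[Dj, Di + di,j].

```python
import heapq

INF = float("inf")

# Dijkstra's algorithm: set P grows by one node per step, always taking the node
# in Q with the smallest current label. 'd' is a symmetric cost matrix such as
# the example in the graph sketch above.

def dijkstra(d, source=0):
    N = len(d)
    D = [INF] * N
    D[source] = 0
    heap = [(0, source)]                 # candidate nodes ordered by their labels
    in_P = [False] * N                   # nodes whose shortest distance is final
    while heap:
        dist, i = heapq.heappop(heap)
        if in_P[i]:
            continue
        in_P[i] = True                   # i moves from Q to P
        for j in range(N):
            if not in_P[j] and d[i][j] != INF and dist + d[i][j] < D[j]:
                D[j] = dist + d[i][j]    # Dj = min(Dj, Di + di,j)
                heapq.heappush(heap, (D[j], j))
    return D

# Example: dijkstra(d), with the matrix from the graph sketch, gives the
# shortest-path cost from node 0 to every other node.
```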

The Floyd Warshall Algorithm


This algorithm iterates on the set of nodes that can be used as intermediate nodes on paths. This
set grows from a single node ( say node 1 ) at start to finally all the nodes of the graph.  At each
iteration, we find the shortest path using given set of nodes as intermediate nodes, so that finally
all the shortest paths are obtained.

Notation
Di,j[n]  =  Length of the shortest path between nodes i and j, using only the nodes 1, 2, ..., n as
intermediate nodes.

Initial Condition
Di,j[0]  =  di,j   for all nodes i, j.


Algorithm
Initially, n = 0. At each iteration the next node is added to the set of allowed intermediate nodes,
i.e. for n = 0, 1, ..., N-1,

Di,j[n+1] = min { Di,j[n], Di,n+1[n] + Dn+1,j[n] }

Principle
Suppose the shortest path between i and j using nodes 1,2,...n is known. Now, if node n+1 is
allowed to be an intermediate node, then the shortest path under new conditions either passes
through node n+1 or it doesn't. If it does not pass through the node n+1, then Di,j[n+1] is same as
Di,j[n] .  Else, we find the cost of the new route, which is obtained from the sum,  Di,n+1[n] +
Dn+1,j[n]. So we take the minimum of these two cases at each step.  After adding all the nodes to
the set of intermediate nodes, we obtain the shortest paths between all pairs of nodes together. 
The complexity of  Floyd-Warshall algorithm is O ( N3 ).
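A Python sketch of the recurrence above (not from the notes); the loop index k plays the role of the text's node n+1, i.e. the newest node allowed as an intermediate hop. It reuses a cost matrix d like the one in the graph sketch earlier.

```python
import copy

INF = float("inf")

# Floyd-Warshall exactly as in the recurrence above: grow the set of allowed
# intermediate nodes one node at a time and relax every (i, j) pair.

def floyd_warshall(d):
    N = len(d)
    D = copy.deepcopy(d)                 # D[i][j] with no intermediate nodes allowed yet
    for k in range(N):                   # now also allow node k as an intermediate node
        for i in range(N):
            for j in range(N):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D                             # all-pairs shortest-path costs, O(N^3) overall

# Example: floyd_warshall(d)[i][j] is the shortest i-j cost once every node
# may be used as an intermediate hop.
```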

It is observed that all the three algorithms mentioned above give comparable performance,
depending upon the exact topology of the network.

Congestion Control Policies

Congestion in a network may occur if the load on the network (the number of packets sent to the
network) is greater than the capacity of the network (the number of packets a network can
handle). Congestion control refers to the mechanisms and techniques to control the congestion
and keep the load below the capacity.

• When too many packets are pumped into the system, congestion occurs, leading to degradation of performance.
• Congestion tends to feed upon itself and back up.
• Congestion shows a lack of balance between various networking equipment.
• It is a global issue.

In general, we can divide congestion control mechanisms into two broad categories: open-loop
congestion control (prevention) and closed-loop congestion control (removal), as shown in the figure.

Open-Loop Congestion Control : In open-loop congestion control, policies are applied to prevent
congestion before it happens. In these mechanisms, congestion control is handled by either the
source or the destination.

Retransmission Policy : Retransmission is sometimes unavoidable. If the sender feels that a sent
packet is lost or corrupted, the packet needs to be retransmitted. Retransmission in general may
increase congestion in the network. However, a good retransmission policy can prevent
congestion. The retransmission policy and the retransmission timers must be designed to
optimize efficiency and at the same time prevent congestion. For example, the retransmission
policy used by TCP is designed to prevent or alleviate congestion.

Window Policy : The type of window at the sender may also affect congestion. The Selective
Repeat window is better than the Go-Back-N window for congestion control. In the Go-Back-N
window, when the timer for a packet times out, several packets may be resent, although some
may have arrived safe and sound at the receiver. This duplication may make the congestion
worse. The Selective Repeat window, on the other hand, tries to send only the specific packets
that have been lost or corrupted.

Acknowledgment Policy : The acknowledgment policy imposed by the receiver may also affect
congestion. If the receiver does not acknowledge every packet it receives, it may slow down the
sender and help prevent congestion. Several approaches are used in this case. A receiver may
send an acknowledgment only if it has a packet to be sent or a special timer expires. A receiver
may decide to acknowledge only N packets at a time. We need to know that the acknowledgments are also part
of the load in a network. Sending fewer acknowledgments means imposing less load on the
network.

Discarding Policy : A good discarding policy by the routers may prevent congestion and at the
same time may not harm the integrity of the transmission. For example, in audio transmission, if
the policy is to discard less sensitive packets when congestion is likely to happen, the quality of
sound is still preserved and congestion is prevented or alleviated.

Admission Policy : An admission policy, which is a quality-of-service mechanism, can also
prevent congestion in virtual-circuit networks. Switches in a flow first check the resource
requirement of a flow before admitting it to the network. A router can deny establishing a
virtual-circuit connection if there is congestion in the network or if there is a possibility of future
congestion.

Closed-Loop Congestion Control : Closed-loop congestion control mechanisms try to alleviate
congestion after it happens. Several mechanisms have been used by different protocols.

Backpressure : The technique of backpressure refers to a congestion control mechanism in which
a congested node stops receiving data from the immediate upstream node or nodes. This may
cause the upstream node or nodes to become congested, and they, in turn, reject data from their
upstream nodes, and so on. Backpressure is a node-to-node congestion control that starts with a
node and propagates, in the opposite direction of data flow, to the source. The backpressure
technique can be applied only to virtual-circuit networks, in which each node knows the
upstream node from which a flow of data is coming. Node III in the figure has more input data
than it can handle. It drops some packets in its input buffer and informs node II to slow down.
Node II, in turn, may be congested because it is slowing down the output flow of data. If node II
is congested, it informs node I to slow down, which in turn may create congestion. If so, node I
informs the source of data to slow down. This, in time, alleviates the congestion. Note that the
pressure on node III is moved backward to the source to remove the congestion. None of the
virtual-circuit networks we studied in this book use backpressure. It was, however, implemented
in the first virtual-circuit network, X.25. The technique cannot be implemented in a datagram
network because, in this type of network, a node (router) does not have the slightest knowledge
of the upstream router.

Choke Packet : A choke packet is a packet sent by a node to the source to inform it of congestion.
Note the difference between the backpressure and choke packet methods. In backpressure, the
warning is from one node to its upstream node, although the warning may eventually reach the
source station. In the choke packet method, the warning is from the router which has encountered
congestion to the source station directly; the intermediate nodes through which the packet has
traveled are not warned. We have seen an example of this type of control in ICMP. When a
router in the Internet is overwhelmed with datagrams, it may discard some of them, but it informs
the source host, using a source-quench ICMP message. The warning message goes directly to the
source station; the intermediate routers do not take any action. The figure shows the idea of a
choke packet.

Implicit Signaling : In implicit signaling, there is no communication between the congested node
or nodes and the source. The source guesses that there is congestion somewhere in the network
from other symptoms. For example, when a source sends several packets and there is no
acknowledgment for a while, one assumption is that the network is congested. The delay in
receiving an acknowledgment is interpreted as congestion in the network, and the source should
slow down. We will see this type of signaling when we discuss TCP congestion control later in
the chapter.

Explicit Signaling : The node that experiences congestion can explicitly send a signal to the
source or destination. The explicit signaling method, however, is different from the choke packet
method. In the choke packet method, a separate packet is used for this purpose; in the explicit
signaling method, the signal is included in the packets that carry data. Explicit signaling,
as we will see in Frame Relay congestion control, can occur in either the forward or the backward
direction.

Backward Signaling : A bit can be set in a packet moving in the direction opposite to the
congestion. This bit can warn the source that there is congestion and that it needs to slow down
to avoid the discarding of packets.

Forward Signaling : A bit can be set in a packet moving in the direction of the congestion. This
bit can warn the destination that there is congestion. The receiver in this case can use policies,
such as slowing down the acknowledgments, to alleviate the congestion.

Concept of Internetworking

Internetworking started as a way to connect disparate types of computer networking technology.
The term computer network is used to describe two or more computers that are linked to
each other. When two or more computer LANs or WANs or computer network segments are
connected using devices such as a router, and configured with a logical addressing scheme using
a protocol such as IP, the result is called computer internetworking.

Internetworking is a term used by Cisco. Any interconnection among or between public, private,
commercial, industrial, or governmental computer networks may also be defined as an
internetwork or "Internetworking".

In modern practice, the interconnected computer networks or Internetworking use the Internet
Protocol. Two architectural models are commonly used to describe the protocols and methods
used in internetworking. The standard reference model for internetworking is Open Systems
Interconnection (OSI).

Types of Internetworking


Internetworking is implemented in Layer 3 (the Network Layer) of this model. The most notable
example of internetworking is the Internet (capitalized). There are three variants of internetwork
or internetworking, depending on who administers them and who participates in them:

• Extranet

• Intranet

• Internet

Intranets and extranets may or may not have connections to the Internet. If connected to the
Internet, the intranet or extranet is normally protected from being accessed from the Internet
without proper authorization. The Internet is not considered to be a part of the intranet or
extranet, although it may serve as a portal for access to portions of an extranet.

Extranet

An extranet is a network of internetworks that is limited in scope to a single organisation or
entity, but which also has limited connections to the networks of one or more other, usually but
not necessarily trusted, organizations or entities. Technically, an extranet may also be
categorized as a MAN, WAN, or other type of network, although, by definition, an extranet
cannot consist of a single LAN; it must have at least one connection with an external network.

Intranet

An intranet is a set of interconnected networks, using the Internet Protocol and IP-based tools
such as web browsers and FTP tools, that is under the control of a single administrative entity.
That administrative entity closes the intranet to the rest of the world, and allows only specific
users. Most commonly, an intranet is the internal network of a company or other enterprise. A
large intranet will typically have its own web server to provide users with browseable
information.

Internet

The Internet is a specific internetwork, consisting of a worldwide interconnection of
governmental, academic, public, and private networks based upon the Advanced Research
Projects Agency Network (ARPANET) developed by ARPA of the U.S. Department of Defense.
It is also home to the World Wide Web (WWW) and is referred to as the 'Internet' with a capital
'I' to distinguish it from other generic internetworks. Participants in the Internet, or their service
providers, use IP addresses obtained from address registries that control assignments.
