
DATA COMMUNICATION AND COMPUTER NETWORKS

Sub-Topics:
 Principles of data communication and computer networks
 Data communication devices
 Data transmission characteristics
 Types of networks
 Network topologies
 Network and internet protocols
 Benefits and challenges of networks in organizations
 Network security
 Internet of things

Introduction
Data becomes useful once it has been transferred from the source to the recipient. Transferring data involves
various techniques and technologies that facilitate fast, efficient and effective data transfer so that
delays and eavesdropping by unintended recipients are avoided.

Principles of data communication


Data communications
This refers to the transmission of digital data between two or more computers.
Computer network or data network is a telecommunications network that allows computers to exchange data.
The physical connection between networked computing devices is established using either cable media or
wireless media. The best-known computer network is the Internet.

Data communication systems


These are the electronic systems that transmit data over communication lines from one location to another.
End users need to know the essential parts of communication technology, including connections, channels,
transmission, network architectures and network types. Communication allows microcomputer users to
transmit and receive data and gain access to electronic resources

Components:
A physical data communications system consists of the following five main components:
1. Source– creates the data, could be a computer or a telephone
2. Transmitter – encodes the information e.g. modem, network card
3. Transmission system – is the physical path by which a message travels from sender to receiver. Some
examples of transmission media include twisted-pair wire, coaxial cable, fiber-optic cable, and radio
wave
4. Receiver– decodes the information for the destination e.g. modem, network card
5. Destination – accepts and uses the incoming information, could be a computer or telephone
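The five-component flow can be sketched in a few lines of Python (the function names are illustrative, not from any networking library):

```python
# A toy sketch of the five-component data communication model.

def transmitter(message: str) -> bytes:
    """Encode the source's message for the transmission system."""
    return message.encode("ascii")

def transmission_system(signal: bytes) -> bytes:
    """The physical path; here it simply carries the signal unchanged."""
    return signal

def receiver(signal: bytes) -> str:
    """Decode the signal for the destination."""
    return signal.decode("ascii")

source_message = "hello"                 # 1. Source creates the data
signal = transmitter(source_message)     # 2. Transmitter encodes it
carried = transmission_system(signal)    # 3. Transmission system carries it
destination = receiver(carried)          # 4. Receiver decodes it
print(destination)                       # 5. Destination uses it -> hello
```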

The following determine successful communication


1. Message. The message is the information (data) to be communicated. Popular forms of information
include text, numbers, pictures, audio, and video.
2. Protocol. A protocol is a set of rules that govern data communications. It represents an agreement
between the communicating devices. Without a protocol, two devices may be connected but not
communicating, just as a person speaking French cannot be understood by a person who speaks only
Japanese.

Data Representation:
Information today comes in different forms such as text, numbers, images, audio, and video.

Text:
In data communications, text is represented as a bit pattern, a sequence of bits (0s or 1s).
Different sets of bit patterns have been designed to represent text symbols. Each set is called a code, and the
process of representing symbols is called coding. Today, the prevalent coding system is Unicode, which
uses up to 32 bits to represent a symbol or character used in any language in the world. The American Standard Code
for Information Interchange (ASCII), developed some decades ago in the United States, now constitutes the first
128 characters in Unicode and is also referred to as Basic Latin.
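Python's standard library can show the bit patterns behind text coding; `ord` returns the Unicode code point of a character, which for Basic Latin characters equals the ASCII code:

```python
text = "Hi!"
for ch in text:
    code = ord(ch)                        # Unicode code point (same as ASCII here)
    print(ch, code, format(code, "08b"))  # character, decimal code, 8-bit pattern
# H 72 01001000
# i 105 01101001
# ! 33 00100001
```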
Numbers:
Numbers are also represented by bit patterns. However, a code such as ASCII is not used to represent numbers;
the number is directly converted to a binary number to simplify mathematical operations. Appendix B
discusses several different numbering systems.
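A sketch of the direct binary conversion of a number, using only built-in Python functions:

```python
n = 25
bits = format(n, "b")    # direct conversion to a binary bit pattern
print(bits)              # -> 11001
print(int(bits, 2))      # converting back recovers the number: 25
```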

Images:
Images are also represented by bit patterns. In its simplest form, an image is composed of a matrix of pixels
(picture elements), where each pixel is a small dot. The size of the pixel depends on the resolution. For example,
an image can be divided into 1000 pixels or 10,000 pixels. In the second case, there is a better representation
of the image (better resolution), but more memory is needed to store the image. After an image is divided into
pixels, each pixel is assigned a bit pattern. The size and the value of the pattern depend on the image.
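The memory cost of dividing an image into pixels can be estimated directly. The figures below (a 100 x 100 image at 24 bits per pixel) are illustrative:

```python
# Storage needed for an uncompressed bitmap: pixels x bits per pixel.
width, height = 100, 100           # 10,000 pixels in total
bits_per_pixel = 24                # e.g. 8 bits each for red, green, blue
total_bits = width * height * bits_per_pixel
print(total_bits // 8, "bytes")    # 30000 bytes
```

Doubling the resolution in each dimension quadruples the pixel count, and hence the memory needed.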

Audio
Audio refers to the recording or broadcasting of sound or music. Audio is by nature different from text,
numbers, or images. It is continuous, not discrete. Even when we use a microphone to change voice or music to
an electric signal, we create a continuous signal.

Video
Video refers to the recording or broadcasting of a picture or movie. Video can either be produced as a
continuous entity (e.g., by a TV camera), or it can be a combination of images, each a discrete entity, arranged
to convey the idea of motion. Again we can change video to a digital or an analog signal.

Communication channels
The transmission media used in communication are called communication channels. Two ways of connecting
microcomputers for communication with each other and with other equipment are through cable and through the air.
There are five kinds of communication channels used for cable or air connections:
 Telephone lines
 Coaxial cable
 Fibre-optic cable
 Microwave
 Satellite

Data transmission
Technical matters that affect data transmission include:
 Bandwidth
 Type of transmission
 Direction of data flow
 Mode of transmitting data
 Protocols

Bandwidth
Bandwidth is the bits-per-second (bps) transmission capability of a communication channel.

There are three types of bandwidth:


 Voice band – bandwidth of standard telephone lines (9.6 to 56 kbps)
 Medium band – bandwidth of special leased lines (56 to 264,000 kbps)
 Broadband – bandwidth of microwave, satellite, coaxial cable and fiber optic (56 to 30,000,000 kbps).
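Given a channel's bandwidth in bits per second, the ideal transfer time for a message follows by simple division. This sketch ignores protocol overhead and retransmissions:

```python
def transfer_time_seconds(size_bits: int, bandwidth_bps: int) -> float:
    """Ideal time to move size_bits over a channel of bandwidth_bps."""
    return size_bits / bandwidth_bps

file_bits = 1_000_000                                  # a 125 KB file
print(transfer_time_seconds(file_bits, 56_000))        # voice band: ~17.9 s
print(transfer_time_seconds(file_bits, 30_000_000))    # broadband: a fraction of a second
```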

Types of transmission – serial or parallel


Serial data transmission
In serial transmission, bits flow in a continuous stream. It is the way most data is sent over telephone lines. It
is used by external modems typically connected to a microcomputer through a serial port. The technical names
for such serial ports are RS-232C connector or asynchronous communications port.
Parallel data transmission
In parallel transmission, bits flow through separate lines simultaneously (at the same time).
Parallel transmission is typically limited to communications over short distances (not telephone lines). It is the
standard method of sending data from a computer’s CPU to a printer.
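The difference can be illustrated in Python: serial sends the eight bits of a byte one after another on a single line, while parallel presents all eight at once on separate lines (LSB-first ordering is assumed here, as RS-232 uses):

```python
byte = 0b01000001  # the letter 'A'

# Serial: one line, bits sent one after another (least significant bit first).
serial_stream = [(byte >> i) & 1 for i in range(8)]

# Parallel: eight lines, all bits presented at the same instant.
parallel_lines = {f"line{i}": (byte >> i) & 1 for i in range(8)}

print(serial_stream)     # [1, 0, 0, 0, 0, 0, 1, 0]
print(parallel_lines)
```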

Direction of data transmission


There are three directions or modes of data flow in a data communication system.
 Simplex
 Half-duplex
 Full-duplex

Communication between two devices can be simplex, half-duplex, or full-duplex as shown in Figure below

Simplex:
In simplex mode, the communication is unidirectional, as on a one-way street. Only one of the two devices on a
link can transmit; the other can only receive (see Figure a). Keyboards and traditional monitors are examples
of simplex devices. The keyboard can only introduce input; the monitor can only accept output. The simplex
mode can use the entire capacity of the channel to send data in one direction.

Half-Duplex:
In half-duplex mode, each station can both transmit and receive, but not at the same time.
When one device is sending, the other can only receive, and vice versa. The half-duplex mode is like a one-lane
road with traffic allowed in both directions.
When cars are traveling in one direction, cars going the other way must wait. In a half-duplex transmission, the
entire capacity of a channel is taken over by whichever of the two devices is transmitting at the time. Walkie-
talkies and CB (citizens band) radios are both half-duplex systems.
The half-duplex mode is used in cases where there is no need for communication in both directions at the same
time; the entire capacity of the channel can be utilized for each direction.

Full-Duplex:
In full-duplex mode, both stations can transmit and receive simultaneously (see Figure c). The full-duplex mode is like
a two-way street with traffic flowing in both directions at the same time.
In full-duplex mode, signals going in one direction share the capacity of the link with signals going in the other
direction. This sharing can occur in two ways: either the link must contain two physically separate transmission
paths, one for sending and the other for receiving; or the capacity of the channel is divided between signals
traveling in both directions. One common example of full-duplex communication is the telephone network.
When two people are communicating by a telephone line, both can talk and listen at the same time. The full-
duplex mode is used when communication in both directions is required all the time. The capacity of the
channel, however, must be divided between the two directions.
COMPUTER NETWORKS
A computer network is a collection of computers and other hardware components interconnected by
communication channels that allow sharing of resources and information.
A computer network is an interconnection of computers, printers, scanners and other hardware devices and
software applications.
Networks connect users within a defined physical space (such as within an office building). The Internet is a
network that connects users from all parts of the world. Educational institutions, government agencies, health
care facilities, banking and other financial institutions, and residential applications use computer networking
to send and receive data and share resources.

Network architecture is a description of how a network is set up (configured) and what strategies are used in
the design. The interconnection of PCs over a network is becoming more important, especially as more
hardware is accessed remotely and PCs intercommunicate with one another.

Terms used to describe computer networks


 Node–any device connected to a network such as a computer, printer or data storage device.
 Client– a node that requests and uses resources available from other nodes. Typically a
microcomputer.
 Server – a node that shares resources with other nodes. May be called a file server, printer server,
communication server, web server or database server.
 Network Operating System (NOS) – the operating system of the network that controls and coordinates
the activities between computers on a network, such as electronic communication and sharing of
information and resources.
 Distributed processing– computing power is located and shared at different locations. Common in
decentralized organizations (each office has its own computer system but is networked to the main
computer).
 Host computer– a large centralized computer, usually a minicomputer or mainframe.

Uses of computer networks

Communication and Access to Information


A network allows a user to instantly connect with another user, or network, and send and receive data. It allows
remote users to connect with one another via videoconferencing, virtual meetings and email.
Computer networks provide access to online libraries, journals, electronic newspapers, chat rooms, social
networking websites, email clients and the World Wide Web. Users can benefit from making online bookings
for theaters, restaurants, hotels, trains and airplanes. They can shop and carry out banking transactions from
the comfort of their homes.
Computer networks allow users to access interactive entertainment channels, such as video on demand,
interactive films, interactive and live television, multiperson real-time games and virtual-reality models.

Resource Sharing
Computer networks allow users to share files and resources. They are popularly used in organizations to cut
costs and streamline resource sharing. A single printer attached to a small local area network (LAN) can
effectively service the printing requests of all computer users on the same network. Users can similarly share
other network hardware devices, such as modems, fax machines, hard drives and removable storage drives.
Networks allow users to share software applications, programs and files. They can share documents (such as
invoices, spreadsheets and memos), word processing software, videos, photographs, audio files, project
tracking software and other similar programs. Users can also access, retrieve and save data on the hard drive
of the main network server.

Centralized Support and Administration


Computer networking centralizes support and administration tasks. Technical personnel
manage all the nodes of the network, provide assistance, and troubleshoot network hardware and software
errors. Network administrators ensure data integrity and devise systems to maintain the reliability of
information through the network. They are responsible for providing high-end antivirus, anti-spyware and
firewall software to the network users. Unlike a stand-alone system, a networked computer is fully managed
and administered by a centralized server, which accepts all user requests and services them as required.

Other uses
 Information sharing by using Web or Internet
 Interaction with other users using dynamic web pages
 IP phones
 Video conferences
 Parallel computing
 Instant messaging

Network Services
Computer systems and computerized systems help human beings to work efficiently and explore the
unthinkable. When these devices are connected together to form a network, their capabilities are enhanced
many times over. Some basic services a computer network can offer are:
 Directory services
 File services
 Communication services
 Application services

Directory Services
These services map names to values, which may be variable or fixed. The software systems that provide them
store information, organize it, and provide various means of accessing it.

 Accounting
In an organization, a number of users have user names and passwords mapped to them. Directory
services provide a means of storing this information in encrypted form and making it available when requested.

 Authentication and Authorization


User credentials are checked to authenticate a user at the time of login and/or periodically. User
accounts can be set into hierarchical structure and their access to resources can be controlled using
authorization schemes.

 Domain Name Services


DNS is widely used and is one of the essential services on which the internet works. This system maps
domain names, which are easier to remember and recall, to IP addresses. Because the
network operates on IP addresses while humans tend to remember website names, DNS looks up the
IP address mapped to a website name whenever a user requests that name.
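Python's standard `socket` module performs this name-to-address lookup. `localhost` is used below because it resolves without internet access; any reachable domain name works the same way:

```python
import socket

# Resolve a host name to an IPv4 address via the system resolver.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```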

File Services
File services include sharing and transferring files over the network.

 File Sharing
One of the reasons that gave birth to networking was file sharing. File sharing enables users to
share their data with other users. A user can upload a file to a specific server that is accessible by all
intended users, or, as an alternative, share a file on their own computer and grant
access to the intended users.

 File Transfer
This is the activity of copying or moving a file from one computer to another, or to multiple
computers, with the help of the underlying network. The network enables its users to locate other users on the
network and transfer files to them.
Communication Services
 Email
Electronic mail is a communication method that a computer user can hardly work without. It is
the basis of many of today's internet features. An email system has one or more email servers. All of its users are
provided with unique IDs. When a user sends an email to another user, it is transferred between
them with the help of an email server.

 Social Networking
Recent technologies have made technical life social. Computer-savvy people can find friends
and acquaintances, connect with them, and share thoughts, pictures, and videos.

 Internet Chat
Internet chat provides instant text transfer services between two hosts. Two or more people can
communicate with each other using text-based Internet Relay Chat services. These days, voice chat and
video chat are also very common.

 Discussion Boards
Discussion boards provide a mechanism to connect multiple people with the same interests. They enable
users to post queries, questions, suggestions, etc., which can be seen by all other users, who may
respond as well.

 Remote Access
This service enables users to access data residing on a remote computer. This feature is known as
remote desktop. It can be used from some remote device, e.g. a mobile phone or home computer.

Application Services
These provide network-based services to users, such as web services, database management,
and resource sharing.
 Resource Sharing
To use resources efficiently and economically, the network provides a means to share them. These may
include servers, printers, storage media, etc.
 Databases
This application service is one of the most important services. It stores data and information, processes
it, and enables the users to retrieve it efficiently by using queries. Databases help organizations to
make decisions based on statistics.
 Web Services
The World Wide Web has become a synonym for the internet. It is used to connect to the internet and to access
files and information services provided by internet servers.

Terminologies
The physical layer in the OSI model plays the role of interacting with actual hardware and signaling mechanisms.
It is the only layer of the OSI network model that actually deals with the physical connectivity of two
different stations. This layer defines the hardware equipment, cabling, wiring, frequencies, and the pulses used to
represent binary signals.
The physical layer provides its services to the data-link layer. The data-link layer hands frames over to the physical
layer, which converts them to electrical pulses that represent binary data. The binary data is then sent over
the wired or wireless media.

Signals
When data is sent over a physical medium, it first needs to be converted into electromagnetic signals. Data itself
can be analog, such as human voice, or digital, such as a file on a disk. Both analog and digital data can be
represented as digital or analog signals.
 Digital Signals
Digital signals are discrete in nature and represent a sequence of voltage pulses. Digital signals are used
within the circuitry of a computer system.
 Analog Signals
Analog signals are continuous in nature and are represented by continuous electromagnetic
waves.

Transmission Impairment
When signals travel through the medium they tend to deteriorate. This may happen for several reasons:
 Attenuation
For the receiver to interpret the data accurately, the signal must be sufficiently strong. When the signal
passes through the medium, it tends to get weaker. As it covers distance, it loses strength.
 Dispersion
As a signal travels through the media, it tends to spread and overlap. The amount of dispersion depends
upon the frequency used.
 Delay distortion
Signals are sent over media with a pre-defined speed and frequency. If the signal speed and frequency
do not match, the signal may reach the destination in an arbitrary fashion. In digital
media, this is especially critical, as some bits may arrive earlier than previously sent ones.
 Noise
Random disturbance or fluctuation in an analog or digital signal is said to be noise, which may
distort the actual information being carried. Noise can be characterized as one of the following classes:
o Thermal Noise
Heat agitates the electronic conductors of a medium which may introduce noise in the media.
Up to a certain level, thermal noise is unavoidable.
o Intermodulation
When multiple frequencies share a medium, their interference can cause noise in the medium.
Intermodulation noise occurs when two different frequencies share a medium and one of
them has excessive strength, or when a component is not functioning properly; the
resultant frequency may then not be delivered as expected.
o Crosstalk
This sort of noise happens when a foreign signal enters the media, because the signal
in one medium affects the signal of a second medium.
o Impulse
This noise is introduced because of irregular disturbances such as lightning, electrical surges,
short circuits, or faulty components. Digital data is mostly affected by this sort of noise.

Transmission Media
The media over which information is sent between two computer systems are called transmission media.
Transmission media come in two forms:
 Guided Media
All communication wires/cables are guided media, such as UTP, coaxial cable, and fiber optics. In these
media, the sender and receiver are directly connected and the information is sent (guided) through them.
 Unguided Media
Wireless or open air space is said to be unguided media, because there is no physical connection between the
sender and receiver. Information is spread over the air, and anyone, including the actual recipient, may
collect it.

Channel Capacity
The speed of transmission of information is said to be the channel capacity. In the digital world, it is counted
as the data rate. It depends on numerous factors such as:
 Bandwidth: The physical limitation of underlying media.
 Error-rate: Incorrect reception of information because of noise.
 Encoding: The number of levels used for signaling.
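These factors are tied together by the Shannon-Hartley theorem, C = B log2(1 + S/N), which gives the theoretical upper bound on the data rate for a given bandwidth and signal-to-noise ratio. A quick sketch with illustrative telephone-line figures:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley theorem: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A telephone line: ~3000 Hz of bandwidth, SNR of about 3162 (35 dB).
print(round(shannon_capacity_bps(3000, 3162)))  # roughly 35 kbps
```

Noise (a lower S/N) reduces capacity; more bandwidth or better encoding raises the achievable rate toward this bound.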
Multiplexing
Multiplexing is a technique to mix and send multiple data streams over a single medium. This technique
requires system hardware called multiplexer (MUX) for multiplexing the streams and sending them on a
medium, and de-multiplexer (DMUX) which takes information from the medium and distributes to different
destinations.
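A minimal time-division multiplexing sketch: the MUX interleaves units from three streams onto one shared "medium", and the DMUX routes each unit back out by its stream label (tagging units by name is a simplification; real TDM uses fixed time slots instead):

```python
# Three data streams to be mixed onto a single medium.
streams = {"A": list("aaa"), "B": list("bbb"), "C": list("ccc")}

# MUX: take one unit from each stream per round and place it on the medium.
medium = []
for round_ in range(3):
    for name in ("A", "B", "C"):
        medium.append((name, streams[name][round_]))

# DMUX: take units off the medium and distribute them to their destinations.
received = {"A": [], "B": [], "C": []}
for name, unit in medium:
    received[name].append(unit)

print(received)  # each stream arrives intact
```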

Switching
Switching is a mechanism by which data/information is sent from a source towards a destination that is not
directly connected to it. Networks have interconnecting devices which receive data from directly connected
sources, store it, analyze it, and then forward it to the next interconnecting device closest to the destination.
Switching can be categorized as circuit switching, message switching, or packet switching.

COMPONENTS OF COMPUTER NETWORKS


A network consists of hardware (devices and media), software (services) and protocols that facilitate
communication.

Network Hardware
Network hardware refers to the collection of end devices, intermediary devices and communication
media that are used to build a network.

End devices
An end device, also known as a host, refers to a piece of equipment that is either the source or the destination
of a message on a network. It is the interface between the human user and the network. Each host on a
network is identified by an address.
They include
 Computers
 Printers
 Servers
 Mobile phones
 PDAs
 etc.

End devices on a typical networks setup


Intermediary devices
 Provide connectivity and ensure that data flows across the network
 Connect individual hosts to the network and can connect multiple individual networks to form an
internetwork
 Use the destination host address to determine the path that messages take through the network.
They include
 Hubs
 Switches
 Repeaters
 Bridges
 Router
 Gateway
 Transceivers
 Transponders

Repeaters
A repeater is an electronic device that receives a signal, cleans it of unnecessary noise, regenerates it, and
retransmits it at a higher power level, or to the other side of an obstruction, so that the signal can cover longer
distances without degradation. In most twisted pair Ethernet configurations, repeaters are required for cable
that runs longer than 100 meters. A repeater with multiple ports is known as a hub. Repeaters work on the
Physical Layer of the OSI model. Repeaters require a small amount of time to regenerate the signal. This can
cause a propagation delay which can affect network communication when there are several repeaters in a row.
Many network architectures limit the number of repeaters that can be used in a row (e.g. Ethernet's 5-4-3 rule).

Bridges
A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model. Bridges
broadcast to all ports except the port on which the broadcast was received. However, bridges do not
promiscuously copy traffic to all ports, as hubs do, but learn which MAC addresses are reachable through
specific ports. Once the bridge associates a port and an address, it will send traffic for that address to that port
only.
Bridges learn the association of ports and addresses by examining the source addresses of the frames they see on
various ports. Once a frame arrives through a port, its source address is stored and the bridge assumes that
MAC address is associated with that port. The first time that a previously unknown destination address is seen,
the bridge will forward the frame to all ports other than the one on which the frame arrived.
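This learning behavior can be sketched as a dictionary from MAC address to port (the MAC strings and port count below are made up for illustration):

```python
# A minimal learning-bridge sketch: the forwarding table maps MAC -> port,
# learned from the source address of each frame that arrives.
table = {}

def handle_frame(src_mac, dst_mac, in_port, num_ports=4):
    table[src_mac] = in_port            # learn: src is reachable via in_port
    if dst_mac in table:
        return [table[dst_mac]]         # known destination: forward to one port
    # unknown destination: flood to every port except the arrival port
    return [p for p in range(num_ports) if p != in_port]

print(handle_frame("aa:aa", "bb:bb", in_port=0))  # flood: [1, 2, 3]
print(handle_frame("bb:bb", "aa:aa", in_port=2))  # learned: [0]
print(handle_frame("aa:aa", "bb:bb", in_port=0))  # now learned: [2]
```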
Bridges come in three basic types:
 Local bridges: Directly connect LANs
 Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges,
where the connecting link is slower than the end networks, largely have been replaced with routers.
 Wireless bridges: Can be used to join LANs or connect remote stations to LANs.
Switches
A network switch is a device that forwards and filters OSI layer 2 frames (chunks of data communication)
between ports (connected cables) based on the MAC addresses in the frames. A switch is distinct from a hub
in that it only forwards frames to the ports involved in the communication rather than to all connected ports.
A switch breaks the collision domain but represents itself as a broadcast domain. Switches make forwarding
decisions of frames on the basis of MAC addresses. A switch normally has numerous ports, facilitating a star
topology for devices, and cascading additional switches. Some switches are capable of routing based on Layer
3 addressing or additional logical levels; these are called multi-layer switches. The term switch is used loosely
in marketing to encompass devices including routers and bridges, as well as devices that may distribute traffic
on load or by application content (e.g., a Web URL identifier).

Routers
A router is an internetworking device that forwards packets between networks by processing information
found in the datagram or packet (Internet protocol information from Layer 3 of the OSI Model). In many
situations, this information is processed in conjunction with the routing table (also known as forwarding table).
Routers use routing tables to determine the interface on which to forward packets (this can include the "null",
also known as the "black hole", interface, because data can go into it but no further processing is done for that
data).
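A toy routing-table lookup using Python's standard `ipaddress` module; the networks and interface names are invented for illustration. Real routers pick the most specific (longest) matching prefix, as the sketch does:

```python
import ipaddress

# Illustrative routing table: (destination network, outgoing interface).
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"),  "eth0"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth1"),
    (ipaddress.ip_network("0.0.0.0/0"),   "eth2"),   # default route
]

def lookup(dst):
    addr = ipaddress.ip_address(dst)
    matches = [(net, iface) for net, iface in routing_table if addr in net]
    # the most specific (longest) prefix wins
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.1.2.3"))  # eth1 (the /16 is more specific than the /8)
print(lookup("10.9.9.9"))  # eth0
print(lookup("8.8.8.8"))   # eth2 (default route)
```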

COMMUNICATION MEDIA
Network media are the channels that provide pathways for messages to move around a network. Modern
networks primarily use three types of media: copper cable, fiber-optic cable, and wireless transmission.
Computer networks can be classified according to the hardware and associated software technology that is
used to interconnect the individual devices in the network, such as electrical cable optical fiber, and radio waves
(wireless LAN).
A well-known family of communication media is collectively known as Ethernet. It is defined by IEEE 802.3 and
utilizes various standards and media that enable communication between devices. Wireless LAN technology is
designed to connect devices without wiring. These devices use radio waves or infrared signals as a transmission
medium.
a) Guided media (Wired technologies)
b) Unguided media (Wireless technologies)

Wired technologies
The order of the following wired technologies is, roughly, from slowest to fastest transmission speed.
1. Twisted pair wire is the most widely used medium for telecommunication. Twisted-pair cabling
consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of two insulated
copper wires twisted into pairs. Computer networking cabling (wired Ethernet as defined by IEEE
802.3) consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission.
The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The
transmission speed ranges from 2 million bits per second to 10 billion bits per second. Twisted pair
cabling comes in two forms:
 Unshielded twisted pair (UTP) and
 Shielded twisted-pair (STP).
Each form comes in several category ratings, designed for use in various scenarios.

Unshielded Twisted Pair (UTP)


UTP stands for Unshielded Twisted Pair cable. UTP cable is a 100-ohm copper cable that consists of 2 to 1800
unshielded twisted pairs surrounded by an outer jacket. The pairs have no metallic shield, which makes the cable
small in diameter but unprotected against electrical interference. The twist helps to improve immunity to
electrical noise and EMI.
For horizontal cables, the number of pairs is typically four.

Shielded Twisted Pair cable (STP)


This is a type of copper telephone wiring in which each of the two copper wires that are twisted together is
coated with an insulating coating that functions as a ground for the wires. The extra covering in shielded
twisted pair wiring protects the transmission line from electromagnetic interference leaking into or out of the
cable. STP cabling is often used in Ethernet networks, especially fast data rate Ethernets.

2. Coaxial cable
Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area
networks. The cables consist of copper or aluminum wire surrounded by an insulating layer (typically a flexible
material with a high dielectric constant), which itself is surrounded by a conductive layer. The insulation helps
minimize interference and distortion. Transmission speed ranges from 200 million bits per second to more
than 500 million bits per second.
Because of its structure, coaxial cable can carry higher-frequency signals than twisted pair
cable. The wrapped structure provides a good shield against noise and crosstalk. Coaxial cables provide high
bandwidth rates of up to 450 Mbps.
There are three categories of coax cables namely, RG-59 (Cable TV), RG-58 (Thin Ethernet), and RG-11 (Thick
Ethernet). RG stands for Radio Government.
Cables are connected using BNC connector and BNC-T. BNC terminator is used to terminate the wire at the far
ends.

3. Optical fiber
Fiber optic cable uses a glass fiber and works on the properties of light. When a light ray strikes the core
boundary at an angle greater than the critical angle, it is reflected back into the core (total internal reflection);
this property is exploited in fiber optics. The core of fiber optic cable is made of high-quality glass or plastic.
Light is emitted into one end, travels through the fiber, and at the other end a light detector detects the light
stream and converts it to electrical data.
Fiber optic provides the highest transmission speeds. It comes in two modes: single mode fiber and
multimode fiber. Single mode fiber can carry a single ray of light, whereas multimode fiber is capable of
carrying multiple beams of light.
Some advantages of optical fibers over metal wires are
 less transmission loss,
 immunity from electromagnetic radiation, and
 Very fast transmission speed, up to trillions of bits per second.
One can use different colors of light to increase the number of messages being sent over a fiber optic cable.
Fiber optic also comes in unidirectional and bidirectional capabilities. Special types of connectors are used to
connect to and access fiber optic cable: Subscriber Channel (SC), Straight Tip (ST), or MT-RJ.
Wireless Technologies
Wireless transmission is a form of unguided media. Wireless communication involves no physical link
established between two or more devices. Wireless signals are spread over the air and are received and
interpreted by appropriate antennas.
When an antenna is attached to the electrical circuit of a computer or wireless device, it converts the digital
data into wireless signals and spreads them within its frequency range. The receptor on the other end receives
these signals and converts them back to digital data.
A little part of electromagnetic spectrum can be used for wireless transmission.

Wireless technologies include


 Radio transmission
 Microwave Transmission
 Infrared Transmission
 Light Transmission
 Terrestrial microwave
 Communications satellites
 Cellular and PCS systems
 Radio and spread spectrum technologies
 A global area network

Radio Transmission
Radio frequencies are easy to generate and, because of their large wavelengths, can penetrate walls and
similar structures. Radio waves can have wavelengths from 1 mm to 100,000 km and frequencies ranging from
3 Hz (Extremely Low Frequency) to 300 GHz (Extremely High Frequency). Radio frequencies are sub-divided
into six bands.
Radio waves at lower frequencies can travel through walls, whereas higher radio frequencies travel in straight
lines and bounce back off obstacles. The power of low-frequency waves decreases sharply as they cover long
distances; high-frequency radio waves have more power.
Wireless local area networks use high-frequency radio technology similar to digital cellular. Wireless LANs
use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE
802.11 defines a common flavor of open-standards wireless radio-wave technology.
Lower frequencies, such as the VLF, LF and MF bands, can travel along the ground for up to 1,000 kilometers
over the earth's surface.

Radio waves at high frequencies are prone to absorption by rain and other obstacles. They make use of the
Ionosphere of the earth's atmosphere: high-frequency radio waves such as those in the HF and VHF bands are
spread upwards, and when they reach the Ionosphere they are refracted back to the earth.
Microwave Transmission
Electromagnetic waves above 100 MHz tend to travel in a straight line, and signals carried on them can be sent
by beaming those waves towards one particular station. Because microwaves travel in straight lines, the
sender and receiver must be aligned strictly in line-of-sight.
Microwaves can have wavelength ranging from 1 mm – 1 meter and frequency ranging from 300 MHz to 300
GHz.

Microwave antennas concentrate the waves into a narrow beam, and multiple antennas can be aligned in
relays to reach farther. Microwaves have higher frequencies and do not penetrate wall-like obstacles.
Microwave transmission depends highly upon the weather conditions and the frequency being used.

Infrared Transmission
Infrared lies between the visible light spectrum and microwaves. It has wavelengths of 700 nm to 1 mm and
frequencies ranging from 300 GHz to 430 THz.
Infrared is used for very short-range communication, such as between a television and its remote control.
Infrared travels in a straight line and is therefore directional by nature. Because of its high frequency range,
infrared cannot cross wall-like obstacles.

Light Transmission
The highest part of the electromagnetic spectrum that can be used for data transmission is light, or optical
signaling. This is achieved by means of a LASER.
Because of the frequencies light uses, it travels strictly in a straight line, so the sender and receiver must be in
line-of-sight. Because laser transmission is unidirectional, a laser and a photo-detector must be installed at
both ends of the link. A laser beam is generally about 1 mm wide, so aligning two distant receptors, each
pointing at the other's laser source, is a work of precision.

Laser works as Tx (transmitter) and photo-detectors works as Rx (receiver).


Lasers cannot penetrate obstacles such as walls, rain, and thick fog. Additionally, the laser beam is distorted
by wind, atmospheric temperature, or variations in temperature along the path.
Laser transmission is relatively secure, as it is very difficult to tap a 1 mm wide beam without interrupting the
communication channel.

Terrestrial microwave
Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes.
Terrestrial microwaves are in the low-gigahertz range, which limits all communications to line-of-sight. Relay
stations are spaced approximately 48 km (30 mi) apart.

Communications satellites
The satellites communicate via microwave radio waves, which are not deflected by the Earth's atmosphere.
The satellites are stationed in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the
equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.

Cellular and PCS systems


They use several radio communications technologies. The systems divide the region covered into multiple
geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one
area to the next area.

Infrared communication
They can transmit signals for small distances, typically no more than 10 meters. In most cases, line-of-sight
propagation is used, which limits the physical positioning of communicating devices.

A global area network (GAN)


This is a network used for supporting mobile users across an arbitrary number of wireless LANs, satellite
coverage areas, etc. The key challenge in mobile communications is handing off user communications from
one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.

Selecting a network media


When selecting which media to use, you need to consider the following factors:
 The distance the media can successfully carry a signal. Each media has a maximum range (length) over
which it can reliably carry data.
 The environment in which the media is installed
 The amount of data and the speed at which it must be transmitted. Different media have different
bandwidths hence carry data at different speeds
 The cost of the media and the cost of installation. Different media have different purchase prices and
requirements for installation

SOFTWARE
The main software used for network management is a Network Operating System (NOS).
Network Operating System
Unlike operating systems, such as Windows, that are designed for single users to control one computer,
network operating systems (NOS) coordinate the activities of multiple computers across a network. The
network operating system acts as a director to keep the network running smoothly.
The two major types of network operating systems are:
 Peer-to-Peer
 Client/Server
Nearly all modern networks are a combination of both. The networking design can be considered independent
of the servers and workstations that will share it.

Peer-to-Peer
Peer-to-peer network operating systems allow users to share resources and files located on their computers
and to access shared resources found on other computers. However, they do not have a file server or a
centralized management source. In a peer-to-peer network, all computers are considered equal; they all have
the same abilities to use the resources available on the network. Peer-to-peer networks are designed primarily
for small to medium local area networks. Nearly all modern desktop operating systems, such as Macintosh OSX,
Linux, and Windows, can function as peer-to-peer network operating systems.

Advantages of a peer-to-peer network:


 Less initial expense - No need for a dedicated server.
 Setup - An operating system (such as Windows XP) already in place may only need to be reconfigured
for peer-to-peer operations.

Disadvantages of a peer-to-peer network:


 Decentralized - No central repository for files and applications.
 Security - Does not provide the security available on a client/server network.

Client/Server
Client/server network operating systems allow the network to centralize functions and applications in one or
more dedicated file servers. The file servers become the heart of the system, providing access to resources and
security. Individual workstations (clients) have access to the resources available on the file servers. The
network operating system provides the mechanism to integrate all the components of the network and allow
multiple users to simultaneously share the same resources irrespective of physical location.
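As a minimal sketch of this arrangement, the snippet below runs a tiny dedicated server on the local machine and has one client request a service from it. The port number and the upper-casing "service" are arbitrary choices for illustration, not part of any real network operating system.

```python
import socket
import threading
import time

PORT = 50007  # arbitrary port chosen for this sketch

def run_server() -> None:
    """The dedicated server: it centralizes one resource (an
    upper-casing service) and controls access to it."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    conn, _addr = srv.accept()        # wait for a client workstation
    data = conn.recv(1024)
    conn.sendall(data.upper())        # provide the centralized service
    conn.close()
    srv.close()

threading.Thread(target=run_server, daemon=True).start()

# The client (workstation) requests the service from the central server.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
for _ in range(50):                   # retry until the server is listening
    try:
        cli.connect(("127.0.0.1", PORT))
        break
    except ConnectionRefusedError:
        time.sleep(0.05)
cli.sendall(b"shared resource")
reply = cli.recv(1024)
cli.close()
print(reply)                          # b'SHARED RESOURCE'
```

Note how the client holds no resources of its own: if the server process were stopped, the service would be unavailable to every client, which is exactly the dependence listed under the disadvantages below.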

Advantages of a client/server network:


 Centralized - Resources and data security are controlled through the server.
 Scalability - Any or all elements can be replaced individually as needs increase.
 Flexibility - New technology can be easily integrated into system.
 Interoperability - All components (client/network/server) work together.
 Accessibility - Server can be accessed remotely and across multiple platforms.

Disadvantages of a client/server network:


 Expense - Requires initial investment in dedicated server.
 Maintenance - Large networks will require a staff to ensure efficient operation.
 Dependence - When the server goes down, operations will cease across the network

Protocols and standards

Protocols
In computer networks, communication occurs between entities in different systems. An entity is anything
capable of sending or receiving information. However, two entities cannot simply send bit streams to each
other and expect to be understood. For communication to occur, the entities must agree on a protocol. A
protocol is a set of rules that govern data communications. A protocol defines what is communicated, how
it is communicated, and when it is communicated. The key elements of a protocol are syntax, semantics, and
timing.
A group of inter-related protocols that are necessary to perform a communication function is called a
protocol suite.
 Syntax. The term syntax refers to the structure or format of the data, meaning the order in which they
are presented. For example, a simple protocol might expect the first 8 bits of data to be the address of
the sender, the second 8 bits to be the address of the receiver, and the rest of the stream to be the
message itself.
 Semantics. The word semantics refers to the meaning of each section of bits. How is a particular
pattern to be interpreted, and what action is to be taken based on that interpretation?
For example, does an address identify the route to be taken or the final destination of the message?
 Timing. The term timing refers to two characteristics: when data should be sent and how fast they can
be sent. For example, if a sender produces data at 100 Mbps but the receiver can process data at only
1 Mbps, the transmission will overload the receiver and some data will be lost.
In a data communications network, protocols play the following roles:
 Determine the format or structure of the message, such as how much data to put into each segment.
 Establish the process by which intermediary devices share information about the path to the
destination.
 Establish the method to handle error and system messages between intermediary devices.
 Establish the process to setup and terminate communications or data transfers between hosts.
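The simple protocol described under syntax above (first 8 bits the sender's address, next 8 bits the receiver's, the rest the message) can be sketched as a small parser. The frame layout is the hypothetical one from the text, not a real protocol.

```python
def parse_frame(frame: bytes) -> dict:
    """Parse the hypothetical frame layout described above: the first
    byte is the sender address, the second byte the receiver address,
    and the rest of the stream is the message itself."""
    if len(frame) < 2:
        raise ValueError("frame too short to hold both addresses")
    return {
        "sender": frame[0],    # first 8 bits
        "receiver": frame[1],  # second 8 bits
        "message": frame[2:],  # rest of the stream
    }

frame = bytes([10, 31]) + b"HELLO"
parsed = parse_frame(frame)
print(parsed)  # {'sender': 10, 'receiver': 31, 'message': b'HELLO'}
```

Both sender and receiver must agree on this layout (the syntax) and on what each field means (the semantics) before any such frame can be understood.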

Internet protocols

TCP/IP
The standard protocol for the Internet is TCP/IP. TCP/IP (Transmission Control Protocol/Internet
Protocol) are the rules for communicating over the Internet. Protocols control how the messages are broken
down, sent and reassembled. With TCP/IP, a message is broken down into small parts called packets before it
is sent over the Internet. Each packet is sent separately, possibly travelling through different routes to a
common destination. The packets are reassembled into correct order at the receiving computer.
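A rough sketch of this packetizing idea is shown below, with made-up packet sizes and simple sequence numbers; real TCP/IP segments carry much richer headers.

```python
import random

def packetize(message: bytes, size: int):
    """Break a message into small numbered packets, as TCP/IP does
    conceptually before sending it over the Internet."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Put packets back into the correct order at the receiving
    computer, whatever order they arrived in."""
    return b"".join(data for _seq, data in sorted(packets))

packets = packetize(b"HELLO, INTERNET", 4)
random.shuffle(packets)      # packets may travel different routes
message = reassemble(packets)
print(message)               # b'HELLO, INTERNET'
```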

Internet services
The four commonly used services on the Internet are:
 Telnet
 FTP
 Gopher
 The Web

Telnet
 Telnet allows you to connect to another computer (host) on the Internet.
 With Telnet you can log on to the computer as if you were a terminal connected to it.
 There are hundreds of computers on the Internet you can connect to.
 Some computers allow free access; some charge a fee for their use.

FTP (File Transfer Protocol)


 FTP allows you to copy files on the Internet
 If you copy a file from an Internet computer to your computer, it is called downloading.
 If you copy a file from your computer to an Internet computer, it is called uploading.

Gopher
 Gopher allows you to search and retrieve information at a particular computer site called a gopher site.
 Gopher is a software application that provides menu-based functions for the site.
 It was originally developed at the University of Minnesota in 1991.
 Gopher sites are computers that provide direct links to available resources, which may be on other
computers.
 Gopher sites can also handle FTP and Telnet to complete their retrieval functions.

The Web
 The web is a multimedia interface to resources available on the Internet.
 It connects computers and resources throughout the world.
 It should not be confused with the term Internet.

Browser
 A browser is special software used on a computer to access the web.
 The software provides an uncomplicated interface to the Internet and web documents.
 It can be used to connect you to remote computers using Telnet.
 It can be used to open and transfer files using FTP.
 It can be used to display text and images using the web.
 Two well-known browsers are:
o Netscape Communicator
o Microsoft Internet Explorer

Uniform Resource Locators (URLs)


 URLs are addresses used by browsers to connect to other resources.
 URLs have at least two basic parts.
o Protocol – used to connect to the resource, HTTP (Hyper Text Transfer Protocol) is the most
common
o Domain Name – the name of the server where the resource is located.
 Many URLs have additional parts specifying directory paths, file names and pointers.
 Connecting to a URL means that you are connecting to another location called a web site.
 Moving from one web site to another is called surfing.
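Python's standard urllib.parse module can be used to pull apart the basic parts of a URL; the address below is a made-up example.

```python
from urllib.parse import urlparse

# A made-up URL with the two basic parts plus a directory path and file name.
url = "http://www.example.com/docs/networks/intro.html"
parts = urlparse(url)

print(parts.scheme)  # protocol used to connect: 'http'
print(parts.netloc)  # domain name of the server: 'www.example.com'
print(parts.path)    # additional directory path and file name
```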

Web portals
Web portals are sites that offer a variety of services typically including e-mail, sports updates, financial data,
news and links to selected websites. They are designed to encourage you to visit them each time you access the
web. They act as your home base and as a gateway to their resources.
Web pages
A web page is a document file sent to your computer when the browser has connected to a website. The
document file may be located on a local computer or halfway around the world.
The document file is formatted and displayed on your screen as a web page through the interpretation of special
command codes embedded in the document called HTML (Hyper Text Mark-up Language).
Typically, the first web page on a website is referred to as the home page. The home page presents information
about the site and may contain references and connections to other documents or sites called hyperlinks.
Hyperlink connections may contain text files, graphic images, audio and video clips. Hyperlink connections can
be accessed by clicking on the hyperlink.

Applets and Java


 Web pages contain links to special programmes called applets written in a programming language
called Java.
 Java applets are widely used to add interest and activity to a website.
 Applets can provide animation, graphics, interactive games and more.
 Applets can be downloaded and run by most browsers.

Search tools
Search tools developed for the Internet help users locate precise information. To access a search tool, you must
visit a web site that has a search tool available. There are two basic types of search tools available:
- Indexes
- Search engines

Indexes
 Indexes are also known as web directories.
 They are organized by major categories e.g. health, entertainment, education, etc.
 Each category is further organized into sub-categories
 Users can continue to search subcategories until a list of relevant documents appears
 The best known search index is Yahoo.

Search engines
 Search engines are also known as web crawlers or web spiders.
 They are organized like a database.
 Key words and phrases can be used to search through a database.
 Databases are maintained by special programmes called agents, spiders or bots.
 Widely used search engines are Google, HotBot and AltaVista.

Web utilities
Web utilities are programmes that work with a browser to increase your speed, productivity and capabilities.
These utilities can be included in a browser. Some utilities may be free on the
Internet while others charge a nominal fee. There are two categories of web utilities:
 Plug-ins
 Helper applications

Plug-ins
 A plug-in is a programme that automatically loads and operates as part of your browser.
 Many websites require plug-ins for users to fully experience web page contents
 Some widely used plug-ins are:
o Shockwave from macromedia – used for web-based games, live concerts and dynamic
animations.
o QuickTime from Apple – used to display video and play audio.
o Live-3D from Netscape – used to display three-dimensional graphics and virtual reality.

Helper applications
Helper applications are also known as add-ons. They are independent programmes that can be executed or
launched from your browser. The four most common types of helper applications are:
 Off-line browsers – also known as web-downloading utilities and pull products. These programmes
automatically connect you to selected websites and download HTML documents, saving them to your hard
disk. The documents can be read later without being connected to the Internet.
 Information pushers – also known as web broadcasters or push products. They automatically gather
information on topic areas called channels. The topics are then sent to your hard disk. The information
can be read later without being connected to the Internet.
 Metasearch utilities – offline search utilities, also known as metasearch programmes. They
automatically submit search requests to several indices and search engines. They receive the results,
sort them, eliminate duplicates and create an index.
 Filters– filters are programmes that allow parents or organizations to block out selected sites e.g. adult
sites. They can monitor the usage and generate reports detailing time spent on activities.

Discussion groups
There are several types of discussion groups on the Internet:
 Mailing lists
 Newsgroups
 Chat groups

Mailing lists
In this type of discussion groups, members communicate by sending messages to a list address.
To join, you send your e-mail request to the mailing list subscription address. To cancel, send your email
request to unsubscribe to the subscription address.

Newsgroups
Newsgroups are the most popular type of discussion group. They use a special network of computers called
UseNet. Each UseNet computer maintains the newsgroup listing. There are over 10,000 different newsgroups
organized into major topic areas. The newsgroup organization hierarchy is similar to the domain name system.
Contributions to a particular newsgroup are sent to one of the UseNet computers. UseNet computers save
messages and periodically share them with other UseNet computers. Interested individuals can read
contributions to a newsgroup.
Chat groups
Chat groups are becoming a very popular type of discussion group. They allow direct ‘live’ communication (real
time communication). To participate in a chat group, you need to join by selecting a channel or a topic. You
communicate live with others by typing words on your computer. Other members of your channel immediately
see the words on their computers and they can respond. The most popular chat service is called Internet Relay
Chat (IRC), which requires special chat client software.

Instant messaging
Instant messaging is a tool to communicate and collaborate with others. It allows one or more people to
communicate with direct ‘live’ communication. It is similar to chat groups, but it provides greater control and
flexibility. To use instant messaging, you specify a list of friends (buddies) and register with an instant
messaging server e.g. Yahoo Messenger. Whenever you connect to the Internet, special software will notify your
messaging server that you are online. It will notify you if any of your friends are online and will also notify your
buddies that you are online.

E-mail (Electronic Mail)


E-mail is the most common Internet activity. It allows you to send messages to anyone in the world who has an
Internet e-mail account. You need access to the Internet and e-mail programme to use this type of
communication. Two widely used e-mail programmes are Microsoft’s Outlook Express and Netscape’s
Communicator.

E-mail has three basic elements:


i) Header – appears first in an e-mail message and contains the following information
a) Address – the address of the person(s) that is to receive the e-mail.
b) Subject – a one line description of the message displayed when a person checks their mail.
c) Attachment–files that can be sent by the e-mail programme.
ii) Message – the text of the e-mail communication.
iii) Signature – may include sender’s name, address and telephone number (optional).
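These three elements can be sketched with Python's standard email library; the addresses, subject and signature below are fictitious.

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["To"] = "recipient@example.com"          # Header: address of the recipient
msg["From"] = "sender@example.com"
msg["Subject"] = "Meeting agenda"            # Header: one-line description

body = "Please find the agenda below."       # Message: text of the communication
signature = "\n--\nJ. Doe, +254 700 000000"  # Signature (optional)
msg.set_content(body + signature)

print(msg["Subject"])  # Meeting agenda
```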

E-mail addresses
The most important element of an e-mail message is the address of the person who is to receive the letter. The
Internet uses an addressing method known as the Domain Name System (DNS).
The system divides an address into three parts:
i) User name – identifies a unique person or computer at the listed domain.
ii) Domain name – refers to a particular organization.
iii) Domain code – identifies the geographical or organizational area.
Almost all ISPs and online service providers offer e-mail service to their customers.
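A simple sketch of splitting an address into these three parts follows; the address itself is made up for illustration, and taking only the last dot-separated label as the domain code is a simplification.

```python
def split_address(address: str) -> dict:
    """Split an e-mail address into the three parts described above."""
    user, _, domain = address.partition("@")
    labels = domain.split(".")
    return {
        "user_name": user,                     # unique person or computer
        "domain_name": ".".join(labels[:-1]),  # the organization
        "domain_code": labels[-1],             # geographical/organizational area
    }

addr_parts = split_address("jdoe@strathmore.ac.ke")
print(addr_parts)
```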
The main advantages of email are:
 It is normally much cheaper than using the telephone (although, as time equates to money for
most companies, this relates any savings or costs to a user’s typing speed).
 Many different types of data can be transmitted, such as images, documents, speech, etc.
 It is much faster than the postal service
 Users can filter incoming email easier than incoming telephone calls.
 It normally cuts out the need for work to be typed, edited and printed by a secretary.
 It reduces the burden on the mailroom.
 It is normally more secure than traditional methods.
 It is relatively easy to send to groups of people (traditionally, either a circulation list was required or a
copy to everyone in the group was required).
 It is usually possible to determine whether the recipient has actually read the message (the electronic
mail system sends back an acknowledgement).

The main disadvantages are:


 It stops people from using the telephone
 It cannot be used as a legal document
 Electronic mail messages can be sent on the spur of the moment and may be regretted later on
(sending by traditional methods normally allows for a rethink). In extreme cases, messages can be sent
to the wrong person (typically when replying to an email message, where a message is sent to the
mailing list rather than the originator).
 It may be difficult to send to some remote sites. Many organizations have either no electronic mail or
merely an intranet. Large companies are particularly wary of Internet connections and limit the
amount of external traffic.
 Not everyone reads his or her electronic mail on a regular basis (although this is changing as more
organizations adopt email as the standard communication medium).

The main standards that relate to the protocols of email transmission and reception are:
• Simple Mail Transfer Protocol (SMTP) – used with the TCP/IP suite. It has traditionally been
limited to text-based electronic messages.
• Multipurpose Internet Mail Extensions (MIME) – allows the transmission and reception of mail that
contains various types of data, such as speech, images and motion video.
It is a newer standard than SMTP and uses much of its basic protocol.
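The relationship between the two standards can be illustrated with Python's standard email library: a plain-text message (all that traditional SMTP carried) becomes a MIME multipart message once non-text data is attached. The addresses and the attachment bytes below are placeholders.

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Report with chart"
msg.set_content("The chart is attached.")  # plain text: all SMTP needed

# MIME lets non-text data ride along; these bytes stand in for a real image.
fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 8
msg.add_attachment(fake_png, maintype="image", subtype="png",
                   filename="chart.png")

print(msg.get_content_type())  # multipart/mixed once an attachment is added
```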

Standards
Standards are essential in creating and maintaining an open and competitive market for equipment
manufacturers and in guaranteeing national and international interoperability of data and telecommunications
technology and processes. Standards provide guidelines to manufacturers, vendors, government agencies, and
other service providers to ensure the kind of interconnectivity necessary in today's marketplace and in
international communications.
Data communication standards fall into two categories:
 de facto (meaning "by fact" or "by convention") and
 de jure (meaning "by law" or "by regulation").

De facto. Standards that have not been approved by an organized body but have been adopted as standards
through widespread use are de facto standards. De facto standards are often established originally by
manufacturers who seek to define the functionality of a new product or technology.

De jure. Those standards that have been legislated by an officially recognized body are de jure standards.

CLASSIFICATION OF NETWORKS
Networks can be classified by the following
 Size or scale (geographical area/span)
 Network Configuration
 Topology or physical connectivity

Classification of networks by size or scale


Under this scheme, networks are classified as belonging to one of the following classes:-
 LAN-Local Area Network
 WAN-Wide Area Network
 MAN-Metropolitan Area Network
 PAN-Personal Area Network
 CAN- Campus Area Network
 HAN- Home Area Network
 SAN- Storage Area Network

Local Area Network (LAN)


A local area network (LAN) is a network that connects computers and devices in a limited geographical area
such as home, school, computer laboratory, office building, or closely positioned group of buildings. Each
computer or device on the network is a node. Current wired LANs are most likely to be based on Ethernet
technology, although new standards like ITU-T G.hn also provide a way to create a wired LAN using existing
home wires (coaxial cables, phone lines and power lines).

Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates up to 10 Gbit/s.

Characteristics of LAN
 Higher data transfer rates
 Smaller geographic range
 No need for leased telecommunication lines

Wide Area Network (WAN)


A wide area network (WAN) is a computer network that covers a large geographic area such as a country, or
spans even intercontinental distances, using a communications channel that combines many types of media
such as telephone lines, cables, and air waves. A WAN often uses transmission facilities provided by common
carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the
OSI reference model: the physical layer, the data link layer, and the network layer.

Metropolitan Area Network (MAN)


A Metropolitan area network (MAN) is a large computer network that usually spans a city or a town.

Personal Area Networks (PAN)


A personal area network (PAN) is a computer network used for communication among computers and other
information technology devices close to one person. Some examples of devices that are used in a PAN are
personal computers, printers, fax machines, telephones, PDAs, scanners, and even video game consoles. A PAN
may include wired and wireless devices. The reach of a PAN typically extends to 10 meters. A wired PAN is
usually constructed with USB and FireWire connections, while technologies such as Bluetooth and infrared
communication typically form a wireless PAN.
For example, a piconet is a Bluetooth-enabled personal area network which may contain up to 8 devices
connected together in a master-slave fashion.

Home Area Network (HAN)


A home area network (HAN) is a residential LAN which is used for communication between digital devices
typically deployed in the home, usually a small number of personal computers and accessories, such as printers
and mobile computing devices. An important function is the sharing of Internet access, often a broadband
service through a cable TV or Digital Subscriber Line (DSL) provider.

Storage Area Network (SAN)


A storage area network (SAN) is a dedicated network that provides access to consolidated, block level data
storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical
jukeboxes, accessible to servers so that the devices appear like locally attached devices to the operating system.
A SAN typically has its own network of storage devices that are generally not accessible through the local area
network by other devices.

Campus Area Network (CAN)


A campus area network (CAN) is a computer network made up of an interconnection of LANs within a limited
geographical area. The networking equipment (switches, routers) and transmission media (optical fiber,
copper plant, Cat5 cabling etc.) are almost entirely owned by the campus tenant or owner (an enterprise,
university, government etc.).
In the case of a university campus network, the network is likely to link a variety of campus
buildings including, for example, academic colleges or departments, the university library, and student
residence halls.

Backbone network
A backbone network is part of a computer network infrastructure that interconnects various pieces of network,
providing a path for the exchange of information between different LANs or sub-networks. A backbone can tie
together diverse networks in the same building, in different buildings in a campus environment, or over wide
areas. Normally, the backbone's capacity is greater than that of the networks connected to it.
A large corporation which has many locations may have a backbone network that ties all of these locations
together, for example, if a server cluster needs to be accessed by different departments of a company which are
located at different geographical locations. The equipment which ties these departments together constitutes
the network backbone. Network performance management, including network congestion, is a critical
parameter taken into account when designing a network backbone.
A specific case of a backbone network is the Internet backbone, which is the set of wide-area network
connections and core routers that interconnect all networks connected to the Internet.

Virtual Private Network (VPN)


A virtual private network (VPN) is a computer network in which some of the links between nodes are carried
by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires.
The data link layer protocols of the virtual network are said to be tunneled through the larger network when
this is the case. One common application is secure communications through the public Internet, but a VPN need
not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used
to separate the traffic of different user communities over an underlying network with strong security features.
VPN may have best-effort performance, or may have a defined service level agreement (SLA) between the VPN
customer and the VPN service provider. Generally, a VPN has a topology more complex than point-to-point.

Intranets
Intranets are in-house, tailor-made networks for use within the organization and provide limited access
(if any) to outside services and also limit the external traffic (if any) into the intranet.
An intranet might have access to the Internet but there will be no access from the Internet to the organization’s
intranet.
Organizations which have a requirement for sharing and distributing electronic information normally have
three choices:
- Use a proprietary groupware package such as Lotus Notes
- Set up an Intranet
- Set up a connection to the Internet
Groupware packages normally replicate data locally on a computer whereas Intranets centralize their
information on central servers which are then accessed by a single browser package. The stored data is
normally open and can be viewed by any compatible WWW browser. Intranet browsers have the great
advantage over groupware packages in that they are available for a variety of clients, such as PCs, Macs, UNIX
workstations and so on. A client browser also provides a single GUI interface, which offers easy integration
with other applications such as electronic mail, images, audio, video, animation and so on.

The main elements of an Intranet are:


• Intranet server hardware
• Intranet server software
• TCP/IP stack software on the clients and servers
• WWW browsers
• A firewall

Other properties defining an Intranet are:


• Intranets use browsers, websites, and web pages to resemble the Internet within the business.
• They typically provide internal e-mail, mailing lists, newsgroups and FTP services
• These services are accessible only to those within the organization

Extranets
Extranets (external intranets) allow two or more companies to share parts of their intranets related to joint projects. For example, if two companies are working on a common project, an extranet allows them to share files related to the project.
• Extranets allow other organizations, such as suppliers, limited access to the organization’s network.
• The purpose of the extranet is to increase efficiency within the business and to reduce costs

Classification of networks by network configuration


Under this scheme networks can also be categorized either as
 Client / Server OR
 Peer to Peer (P2PN)

Peer to Peer networks


 Nodes both provide and request services (a node is typically a computer on the network)
 User in each node administers resources
 Cheap to implement
 Easy to setup
 Very weak security
 Additional load on nodes
 Most operating systems can support peer to peer networks

Client-Server networks
 Typically consists of a designated computer to administer the network
 Nodes are called clients
 Servers are used to control access
 Resources centralized on dedicated servers, from which clients get services on request
 Nodes and servers share data roles
 Supports larger networks
 Strong security
 Expensive
 Access to data controlled by server
 Uses a Network operating system
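The client/server pattern above can be sketched with Python's standard socket module: a designated server process controls access to a resource (here a simple echo service) and clients request it. The loopback address, port number and message are illustrative assumptions, not part of any specific product.

```python
import socket
import threading
import time

def run_server(port):
    """Designated server: controls access to a resource (here, an echo
    service) and serves one client request."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()            # wait for a client
    data = conn.recv(1024)            # receive the client's request
    conn.sendall(b"echo: " + data)    # serve it
    conn.close()
    srv.close()

def run_client(port, message):
    """Client node: requests the service from the central server."""
    for _ in range(100):              # retry until the server is listening
        try:
            sock = socket.create_connection(("127.0.0.1", port))
            break
        except OSError:
            time.sleep(0.05)
    sock.sendall(message)
    chunks = []
    while True:                       # read until the server closes the connection
        data = sock.recv(1024)
        if not data:
            break
        chunks.append(data)
    sock.close()
    return b"".join(chunks)

# Run the server in the background, then let a client use its service.
server = threading.Thread(target=run_server, args=(50007,))
server.start()
reply = run_client(50007, b"hello")
server.join()
print(reply.decode())
```

In a peer-to-peer network, by contrast, every node would run both roles and there would be no single designated server.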

Classification of Networks by Topology


Network topology is the arrangement of the various elements (links, nodes, etc.) of a computer network. Essentially, it
is the topological structure of a network, and may be depicted physically or logically.
Physical topology refers to the placement of the network's various components, including device location and
cable installation, while
Logical topology shows how data flows within a network, regardless of its physical design. Distances between
nodes, physical interconnections, transmission rates, and/or signal types may differ between two networks,
yet their topologies may be identical.

Typical topologies include:


 Bus
 Star
 Ring
 Mesh
 Hybrid

A ring network is a network topology in which each node connects to exactly two other nodes, forming a single
continuous pathway for signals through each node - a ring. Data travels from node to node, with each node
along the way handling every packet.

Advantages
 Very orderly network where every device has access to the token and the opportunity to transmit
 Performs better than a bus topology under heavy network load
 Does not require a central node to manage the connectivity between the computers
 Due to the point-to-point line configuration, each device is connected to its immediate neighbours (one on either side), so it is quite easy to install and reconfigure: adding or removing a device requires moving just two connections.
 Point to point line configuration makes it easy to identify and isolate faults.
Disadvantages
 One malfunctioning workstation can create problems for the entire network. This can be solved by
using a dual ring or a switch that closes off the break.
 Moves, adds and changes of devices can affect the network
 Communication delay is directly proportional to number of nodes in the network
 Bandwidth is shared on all links between devices
 More difficult to reconfigure than a star: adding a node requires shutting down and reconfiguring the ring

Mesh networking (topology) is a type of networking where each node must not only capture and disseminate
its own data, but also serve as a relay for other nodes, that is, it must collaborate to propagate the data in the
network.
A mesh network can be designed using a flooding technique or a routing technique. When using a routing
technique, the message propagates along a path, by hopping from node to node until the destination is reached.
To ensure all its paths' availability, a routing network must allow for continuous connections and
reconfiguration around broken or blocked paths, using self-healing algorithms

Advantages
 Point to point line configuration makes identification and isolation of faults easy.
 Messages travel through a dedicated line, meaning that only the intended recipient receives the message: privacy and security are thus ensured.
 In the case of a fault in one link, only the communication between the two devices sharing the link is
affected.
 The use of dedicated links ensures that each connection carries its own data load, thus eliminating the traffic problems that would arise if a link were shared.

Disadvantages
 If the network covers a great area, huge investments may be required due to the amount of cabling and
ports required for input and output devices. It is a rare choice of a network connection due to the costs
involved.

Star topology
Star networks are one of the most common computer network topologies. In its simplest form, a star network
consists of one central switch, hub or computer, which acts as a conduit to transmit messages. This consists of
a central node, to which all other nodes are connected; this central node provides a common connection point
for all nodes. In a star topology, every node (computer workstation or any other peripheral) is connected to the central node, which is called a hub or switch.

Advantages
 Better performance: star topology prevents the passing of data packets through an excessive number
of nodes. At most, 3 devices and 2 links are involved in any communication between any two devices.
Although this topology places a huge overhead on the central hub, with adequate capacity, the hub can
handle very high utilization by one device without affecting others.
 Isolation of devices: Each device is inherently isolated by the link that connects it to the hub. This
makes the isolation of individual devices straightforward and amounts to disconnecting each device
from the others. This isolation also prevents any non-centralized failure from affecting the network.
 Benefits from centralization: As the central hub is the bottleneck, increasing its capacity, or
connecting additional devices to it, increases the size of the network very easily. Centralization also
allows the inspection of traffic through the network. This facilitates analysis of the traffic and detection
of suspicious behavior.
 Easy to detect faults and to remove parts.
 No disruptions to the network when connecting or removing devices.
 Installation and configuration are easy since each device requires only one link and one input/output port to connect it to the central node.

Disadvantages
 High dependence of the system on the functioning of the central hub
 Failure of the central hub renders the network inoperable

Bus network topology


A bus network topology is a network architecture in which a set of clients are connected via a shared
communications line, called a bus.
In a computer or on a network, a bus is a transmission path on which signals are dropped off or picked up at
every device attached to the line. Only devices addressed by the signals pay attention to them; the others
discard the signals.
A bus is a network topology or circuit arrangement in which all devices are attached to a line directly and all
signals pass through each of the devices. Each device has a unique identity and can recognize those signals
intended for it.
Advantages (benefits) of Linear Bus Topology
 It is easy to set up and extend a bus network.
 The cable length required for this topology is the least compared to other topologies.
 A bus topology costs very little.
 Linear bus networks are mostly used in small networks; good for LANs.

Disadvantages (Drawbacks) of Linear Bus Topology


 There is a limit on the central cable length and the number of nodes that can be connected.
 The topology depends on the central cable: if the main cable (i.e. the bus) encounters a problem, the whole network breaks down.
 Proper termination is required to absorb signals; the use of terminators is a must.
 It is difficult to detect and troubleshoot a fault at an individual station.
 Maintenance costs can rise over time.
 The efficiency of a bus network reduces as the number of devices connected to it increases.
 It is not suitable for networks with heavy traffic.
 Security is very low because all the computers receive the signal sent from the source.

Hybrid topology
A network structure whose design contains more than one topology is said to be a hybrid topology. A hybrid topology inherits the merits and demerits of all its constituent topologies.

A hybrid topology may combine attributes of star, ring, bus and daisy-chain topologies. Most WANs are connected by means of a dual-ring topology, and the networks connected to them are mostly star topology networks. The Internet is the best example of a very large hybrid topology.
REFERENCE MODELS
International Standards Organization (ISO) is a multinational body dedicated to worldwide agreement on
international standards. Almost three-fourths of countries in the world are represented in the ISO. An ISO
standard that covers all aspects of network communications is the Open Systems Interconnection (OSI) model.
It was first introduced in the late 1970s.

Two major approaches:


 The seven-layer OSI/ISO model – Open Systems Interconnection, currently maintained by the
International Organization for Standards.
 The four-layer TCP/IP model – Transmission Control Protocol/Internet Protocol.

Open System Interconnection/International Standards Organization (OSI/ISO)


This model is based on a proposal developed by the International Standards Organization (ISO) as a first step
toward international standardization of the protocols used in the various layers (Day and Zimmermann, 1983).
It was revised in 1995 (Day, 1995). The model is called the ISO-OSI (Open Systems Interconnection)
Reference Model because it deals with connecting open systems—that is, systems that are open for
communication with other systems.
The OSI model has seven layers. The principles that were applied to arrive at the seven layers can be briefly
summarized as follows:
1. A layer should be created where a different abstraction is needed.
2. Each layer should perform a well-defined function.
3. The function of each layer should be chosen with an eye toward defining internationally
standardized protocols.
4. The layer boundaries should be chosen to minimize the information flow across the interfaces.
5. The number of layers should be large enough that distinct functions need not be thrown together in
the same layer out of necessity and small enough that the architecture does not become unwieldy.
The fundamental reason for the seven layers OSI model is to comprehend the data transmission and
communication system steps.

1. Physical layer –
The physical layer is concerned with transmitting raw bits over a communication channel. The design issues
have to do with making sure that when one side sends a 1 bit, it is received by the other side as a 1 bit, not as a
0 bit.
It controls electrical and mechanical aspects of data transmission, it controls the transmission of bits between
two nodes.
1. Type of medium: copper wire, coaxial cable, satellite, etc.
2. Digital versus analog transmission, etc.

2. Data-link layer
The main task of the data link layer is to transform a raw transmission facility into a line that appears free of
undetected transmission errors to the network layer. It accomplishes this task by having the sender break up
the input data into data frames (typically a few hundred or a few thousand bytes) and transmits the frames
sequentially. If the service is reliable, the receiver confirms correct receipt of each frame by sending back an
acknowledgement frame.
Another issue that arises in the data link layer (and most of the higher layers as well) is how to keep a fast
transmitter from drowning a slow receiver in data. Some traffic regulation mechanism is often needed to let
the transmitter know how much buffer space the receiver has at the moment.
Frequently, this flow regulation and the error handling are integrated.
This layer provides reliable transmission over the physical medium: it addresses the transmission of data frames (or packets) over a physical link between network entities, including error detection and correction.
1. Takes bits received by physical layer and makes sure there are no errors.
2. If errors, request peer to retransmit data until it is correctly received: error control.
3. Synchronization, flow control; media access in shared medium.
The data link layer is responsible for controlling errors between adjacent nodes and transferring frames to other computers via the physical layer. The data link layer is used by bridges and switches for their operation.

3. Network layer
The network layer controls the operation of the subnet. A key design issue is determining how packets are
routed from source to destination. Routes can be based on static tables that are ''wired into'' the network and
rarely changed. They can also be determined at the start of each conversation, for example, a terminal session
(e.g., a login to a remote machine). Finally, they can be highly dynamic, being determined anew for each packet,
to reflect the current network load.
If too many packets are present in the subnet at the same time, they will get in one another's way, forming
bottlenecks. The control of such congestion also belongs to the network layer. More generally, the quality of
service provided (delay, transit time, jitter, etc.) is also a network layer issue.
When a packet has to travel from one network to another to get to its destination, many problems can arise.
The addressing used by the second network may be different from the first one. The second one may not accept
the packet at all because it is too large. The protocols may differ, and so on. It is up to the network layer to
overcome all these problems to allow heterogeneous networks to be interconnected. In broadcast networks, the routing problem is simple, so the network layer is often thin or even non-existent. In summary, this layer establishes paths for data between computers, determines switching among routes between computers, and determines how to disaggregate messages into individual packets.
1. Data is received error free by network layer.
2. Main function: routing and forwarding;
3. Also, congestion control.
This layer is responsible for translating logical network addresses and names into their physical (MAC) addresses. It is also responsible for addressing, determining routes for sending, and managing network problems such as packet switching, data congestion and routing.

4. Transport layer
The basic function of the transport layer is to accept data from above, split it up into smaller units if need be,
pass these to the network layer, and ensure that the pieces all arrive correctly at the other end. Furthermore,
all this must be done efficiently and in a way that isolates the upper layers from the inevitable changes in the
hardware technology. The transport layer also determines what type of service to provide to the session layer,
and, ultimately, to the users of the network. The most popular type of transport connection is an error-free
point-to-point channel that delivers messages or bytes in the order in which they were sent. However, other
possible kinds of transport service are the transporting of isolated messages, with no guarantee about the order
of delivery, and the broadcasting of messages to multiple destinations. The type of service is determined when
the connection is established.
The transport layer is a true end-to-end layer, all the way from the source to the destination. In other words, a
program on the source machine carries on a conversation with a similar program on the destination machine,
using the message headers and control messages. In the lower layers, the protocols are between each machine
and its immediate neighbors, and not between the ultimate source and destination machines, which may be
separated by many routers.

It deals with data transfer between end systems and determines flow control.
This layer is responsible for end-to-end delivery of messages between the networked hosts. It first divides the stream of data into chunks or packets before transmission, and the receiving computer then re-assembles the packets. It also guarantees error-free data delivery without loss or duplication.
1. E2E communication.
2. Error-, flow- and congestion control end-to-end.

5. Session layer –
The session layer allows users on different machines to establish sessions between them.
Sessions offer various services, including dialog control (keeping track of whose turn it is to transmit), token
management (preventing two parties from attempting the same critical operation at the same time), and
synchronization (check pointing long transmissions to allow them to continue from where they were after a
crash).
Creates and manages sessions when one application process requests access to another application process, e.g., MS Word importing a spreadsheet from Excel.
This layer is responsible for establishing and ending sessions across the network. Interactive login is an example of a service provided by this layer, in which the session is re-established in case of any interruption.

6. Presentation layer
The presentation layer is concerned with the syntax and semantics of the information transmitted.
In order to make it possible for computers with different data representations to communicate, the data
structures to be exchanged can be defined in an abstract way, along with a standard encoding to be used ''on
the wire.'' The presentation layer manages these abstract data structures and allows higher-level data
structures (e.g., banking records), to be defined and exchanged.
Determines syntactic representation of data, e.g., agreement on character code like ASCII/Unicode
1. Data representation.
2. Conversion between different codes.
3. Data compression and encryption.

7. Application layer –
The application layer contains a variety of protocols that are commonly needed by users. One widely-used
application protocol is HTTP (Hypertext Transfer Protocol), which is the basis for the World Wide Web. When
a browser wants a Web page, it sends the name of the page it wants to the server using HTTP. The server then
sends the page back. Other application protocols are used for file transfer, electronic mail, and network news.
Establishes interface between a user and a host computer, e.g., searching in a database application.
This layer interacts with software applications that implement a communicating component. Such application
programs fall outside the scope of the OSI model. Application-layer functions typically include identifying
communication partners, determining resource availability, and synchronizing communication.
1. Provides users with access to the underlying communication infrastructure.
2. E-mail, video conferencing, files transfer, distributed information systems (e.g., the Web).
Examples of services provided by this layer are file transfer, electronic messaging e-mail, virtual terminal
access and network management.

TCP/IP
The TCP/IP reference model was developed prior to the OSI model. The major design goals of this model were:
1. To connect multiple networks together so that they appear as a single network.
2. To survive after partial subnet hardware failures.
3. To provide a flexible architecture.

Unlike the OSI reference model, the TCP/IP reference model has only four layers. They are:
1. Host-to-Network Layer
2. Internet Layer
3. Transport Layer
4. Application Layer

Application Layer
Transport Layer
Internet Layer
Host-to-Network Layer

Host-to-Network Layer:
The TCP/IP reference model does not really say much about what happens here, except to point out that the
host has to connect to the network using some protocol so it can send IP packets to it.
This protocol is not defined and varies from host to host and network to network.

Internet Layer:
This layer, called the internet layer, is the linchpin that holds the whole architecture together. Its job is to permit
hosts to inject packets into any network and have them travel independently to the destination (potentially on
a different network). They may even arrive in a different order than they were sent, in which case it is the job
of higher layers to rearrange them, if in-order delivery is desired. Note that ''internet'' is used here in a generic
sense, even though this layer is present in the Internet.
The internet layer defines an official packet format and protocol called IP (Internet Protocol).
The job of the internet layer is to deliver IP packets where they are supposed to go. Packet routing is clearly the
major issue here, as is avoiding congestion. For these reasons, it is reasonable to say that the TCP/IP internet
layer is similar in functionality to the OSI network layer.

Transport Layer:
The layer above the internet layer in the TCP/IP model is now usually called the transport layer.
It is designed to allow peer entities on the source and destination hosts to carry on a conversation, just as in
the OSI transport layer. Two end-to-end transport protocols have been defined here. The first one, TCP
(Transmission Control Protocol), is a reliable connection-oriented protocol that allows a byte stream
originating on one machine to be delivered without error on any other machine in the internet. It fragments
the incoming byte stream into discrete messages and passes each one on to the internet layer. At the
destination, the receiving TCP process reassembles the received messages into the output stream. TCP also
handles flow control to make sure a fast sender cannot swamp a slow receiver with more messages than it can
handle.
The second protocol, UDP (User Datagram Protocol), is an unreliable, connectionless protocol for applications that do not want TCP's sequencing or flow control and wish to provide their own.

Application Layer:
The TCP/IP model does not have session or presentation layers. On top of the transport layer is the application
layer. It contains all the higher-level protocols. The early ones included virtual terminal (TELNET), file transfer
(FTP), and electronic mail (SMTP). The virtual terminal protocol allows a user on one machine to log onto a
distant machine and work there. The file transfer protocol provides a way to move data efficiently from one
machine to another. Electronic mail was originally just a kind of file transfer, but later a specialized protocol
(SMTP) was developed for it. Many other protocols have been added to these over the years: the
Domain Name System (DNS) for mapping host names onto their network addresses, NNTP, the protocol for
moving USENET news articles around, and HTTP, the protocol for fetching pages on the World Wide Web, and
many others.
NETWORK SWITCHING
Switching is the process of forwarding packets coming in on one port to a port leading towards the destination. When data arrives on a port it is called ingress, and when data leaves a port it is called egress. A communication system may include a number of switches and nodes. At a broad level, switching can be divided into two major categories:
 Connectionless: The data is forwarded on the basis of forwarding tables. No prior handshaking is required and acknowledgements are optional.
 Connection Oriented: Before data can be forwarded to the destination, a circuit must be pre-established along the path between both endpoints. Data is then forwarded over that circuit. After the transfer is completed, the circuit can be kept for future use or torn down immediately.
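The connectionless case can be sketched as a simple table lookup performed independently for every packet; the addresses and port names below are hypothetical, chosen only for illustration.

```python
# Hypothetical forwarding table for one switch: destination address -> egress port.
FORWARDING_TABLE = {
    "10.0.0.1": "port-1",
    "10.0.0.2": "port-2",
    "10.0.0.3": "port-2",
}

def forward(packet):
    """Connectionless switching: every packet is looked up independently
    against the forwarding table; no circuit is pre-established."""
    egress = FORWARDING_TABLE.get(packet["dst"])
    return egress if egress is not None else "drop"

print(forward({"dst": "10.0.0.2", "data": "hello"}))   # port-2
print(forward({"dst": "10.9.9.9", "data": "lost"}))    # drop (unknown destination)
```

In a connection-oriented system, by contrast, the lookup would be done once during circuit establishment and every subsequent packet would follow the stored circuit.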

Circuit Switching
When two nodes communicate with each other over a dedicated communication path, it is called circuit switching. A pre-specified route is needed, along which the data travels, and no other data is permitted on it. In circuit switching, a circuit must be established before the data transfer can take place. Circuits can be permanent or temporary. Applications which use circuit switching may have to go through three phases:
 Establish a circuit
 Transfer the data
 Disconnect the circuit

Circuit switching was designed for voice applications. The telephone is the most suitable example of circuit switching. Before a user can make a call, a virtual path between caller and callee is established over the network.

Message Switching
This technique sits somewhere between circuit switching and packet switching. In message switching, the whole message is treated as a data unit and is switched/transferred in its entirety.
A switch working on message switching first receives the whole message and buffers it until there are resources available to transfer it to the next hop. If the next hop does not have enough resources to accommodate a large message, the message is stored and the switch waits.
This technique was considered a substitute for circuit switching, in which the whole path is blocked for two entities only. Message switching has since been replaced by packet switching. Message switching has the following drawbacks:
 Every switch in the transit path needs enough storage to accommodate the entire message.
 Because of the store-and-forward technique and the waits until resources become available, message switching is very slow.
 Message switching was not a solution for streaming media and real-time applications.

Packet Switching
Shortcomings of message switching gave birth to the idea of packet switching. The entire message is broken down into smaller chunks called packets. The switching information is added to the header of each packet, and each packet is transmitted independently.
It is easier for intermediate networking devices to store small packets, and they do not consume many resources either on the carrier path or in the internal memory of switches.

Packet switching enhances line efficiency as packets from multiple applications can be multiplexed over the
carrier. The internet uses packet switching technique. Packet switching enables the user to differentiate data
streams based on priorities. Packets are stored and forwarded according to their priority to provide quality of
service.
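The break-up and reassembly described above can be sketched as follows; the header fields (a sequence number and a total count) and the packet size are illustrative assumptions, not any particular protocol's format.

```python
import random

def packetize(message, size):
    """Break a message into packets; each header carries the switching
    information (here just a sequence number and the total count)."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"seq": n, "total": len(chunks), "data": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets):
    """Packets travel independently and may arrive out of order; the
    receiver sorts them by sequence number and rebuilds the message."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return "".join(p["data"] for p in ordered)

packets = packetize("the internet uses packet switching", 8)
random.shuffle(packets)        # simulate independent routing and out-of-order arrival
print(reassemble(packets))     # the internet uses packet switching
```

This is exactly the property noted in the TCP/IP internet layer discussion: packets may arrive in a different order than they were sent, and a higher layer rearranges them.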
ERROR DETECTION AND CORRECTION
There are many reasons, such as noise and cross-talk, which may cause data to be corrupted during
transmission. The upper layers work on some generalized view of network architecture and are not aware of
actual hardware data processing. Hence, the upper layers expect error-free transmission between the systems.
Most of the applications would not function expectedly if they receive erroneous data. Applications such as
voice and video may not be that affected and with some errors they may still function well.
The data-link layer uses error control mechanisms to ensure that frames (data bit streams) are transmitted with a certain level of accuracy. But to understand how errors are controlled, it is essential to know what types of errors may occur.

Types of Errors
There may be three types of errors:
 Single bit error

Only one bit in the frame, at any position, is corrupted.


 Multiple bits error

The frame is received with more than one bit corrupted.


 Burst error

The frame contains a sequence of consecutive corrupted bits.


Error control mechanism may involve two possible ways:
 Error detection
 Error correction

Error Detection
Errors in the received frames are detected by means of Parity Check and Cyclic Redundancy Check (CRC). In
both cases, a few extra bits are sent along with the actual data to confirm that the bits received at the other end are the same as those sent. If the counter-check at the receiver's end fails, the bits are considered corrupted.

Parity Check
One extra bit is sent along with the original bits to make the number of 1s either even (even parity) or odd (odd parity).
The sender, while creating a frame, counts the number of 1s in it. For example, if even parity is used and the number of 1s is already even, one bit with value 0 is added, so the number of 1s remains even. If the number of 1s is odd, a bit with value 1 is added to make it even.

The receiver simply counts the number of 1s in the frame. If even parity is used and the count of 1s is even, the frame is considered not corrupted and is accepted; similarly, with odd parity an odd count of 1s is accepted.
If a single bit flips in transit, the receiver can detect it by counting the number of 1s. But when more than one bit is erroneous, it is very hard for the receiver to detect the error.
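The even-parity scheme above can be sketched in a few lines; the 7-bit frame used here is just an illustrative example.

```python
def add_parity(bits):
    """Sender: append one bit so that the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_even_parity(frame):
    """Receiver: a frame with an even count of 1s is accepted."""
    return sum(frame) % 2 == 0

frame = add_parity([1, 0, 1, 1, 0, 1, 1])   # five 1s, so the parity bit is 1
assert check_even_parity(frame)             # clean frame: accepted

frame[2] ^= 1                               # a single bit flips in transit
assert not check_even_parity(frame)         # single-bit error: detected

frame[3] ^= 1                               # a second bit flips as well
assert check_even_parity(frame)             # the two errors cancel out: undetected
```

The last assertion demonstrates the weakness noted above: an even number of bit errors passes the parity check unnoticed.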
Cyclic Redundancy Check (CRC)
CRC is a different approach to detect if the received frame contains valid data. This technique involves binary
division of the data bits being sent. The divisor is generated using polynomials. The sender performs a division
operation on the bits being sent and calculates the remainder. Before sending the actual bits, the sender adds
the remainder at the end of the actual bits. Actual data bits plus the remainder is called a codeword. The sender
transmits data bits as codewords.

At the other end, the receiver performs the division operation on the codewords using the same CRC divisor. If the remainder contains all zeros, the data bits are accepted; otherwise, it is assumed that some data corruption occurred in transit.
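The sender and receiver steps can be sketched with modulo-2 (XOR) division; the generator polynomial x³ + x + 1 and the 7-bit data word are illustrative choices, not a mandated standard.

```python
def mod2_div(dividend, divisor):
    """Binary (modulo-2) division: XOR the divisor in wherever the leading
    bit is 1, and return the remainder (len(divisor) - 1 bits)."""
    bits = list(dividend)
    for i in range(len(bits) - len(divisor) + 1):
        if bits[i] == 1:
            for j, d in enumerate(divisor):
                bits[i + j] ^= d
    return bits[-(len(divisor) - 1):]

def make_codeword(data, divisor):
    """Sender: append zeros, compute the remainder, and attach it."""
    remainder = mod2_div(data + [0] * (len(divisor) - 1), divisor)
    return data + remainder

def is_valid(codeword, divisor):
    """Receiver: an all-zero remainder means the frame is accepted."""
    return all(bit == 0 for bit in mod2_div(codeword, divisor))

G = [1, 0, 1, 1]                      # generator polynomial x^3 + x + 1
codeword = make_codeword([1, 1, 0, 1, 0, 0, 1], G)
assert is_valid(codeword, G)          # clean transmission: accepted

codeword[4] ^= 1                      # one bit corrupted in transit
assert not is_valid(codeword, G)      # non-zero remainder: rejected
```

Because the divisor has more than one term, any single-bit error leaves a non-zero remainder and is therefore always detected.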

Error Correction
In the digital world, error correction can be done in two ways:
 Backward Error Correction: when the receiver detects an error in the received data, it requests the sender to retransmit the data unit.
 Forward Error Correction: when the receiver detects an error in the received data, it executes an error-correcting code, which helps it to auto-recover and correct some kinds of errors.
The first one, backward error correction, is simple and can be used efficiently only where retransmission is not expensive, for example over fibre optics. But in the case of wireless transmission, retransmission may cost too much, so forward error correction is used.
To correct the error in a data frame, the receiver must know exactly which bit in the frame is corrupted. To locate the erroneous bit, redundant bits are used as parity bits. For example, if we take an ASCII word (7 data bits), there are eight kinds of information the receiver needs: seven to indicate which bit is in error, and one more to indicate that there is no error.
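This idea of redundant parity bits that locate the corrupted position is the basis of Hamming codes. As a minimal sketch, a Hamming(7,4) code protects 4 data bits with 3 redundant bits; it is offered as an illustration of the principle, not as the text's exact scheme.

```python
def hamming74_encode(d):
    """Encode 4 data bits with 3 redundant parity bits (Hamming(7,4)).
    Codeword positions are 1..7; parity bits sit at positions 1, 2, 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # covers positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # covers positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code):
    """Receiver: recompute the parities; their pattern (the syndrome) is
    the position of the corrupted bit, or 0 if the frame is error-free."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 means no error
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the located bit back
    return c, syndrome

code = hamming74_encode([1, 0, 1, 1])
received = list(code)
received[5] ^= 1                      # one bit flips in transit (position 6)
fixed, syndrome = hamming74_correct(received)
assert syndrome == 6                  # the redundant bits locate the error
assert fixed == code                  # the receiver auto-recovers the frame
```

This is forward error correction in action: the receiver repairs a single-bit error without asking for a retransmission.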

BENEFITS AND CHALLENGES OF NETWORKS IN AN ORGANIZATION


Benefits of Networks in an Organization
Computer networks have greatly benefited the educational sector, the business world and many other organizations. They can be seen everywhere; they connect people all over the world.
There are some major advantages which computer networks provide, making human life more relaxed and easy. Some of them are listed below.
a) Communication
Communication is one of the biggest advantages provided by computer networks. Networking technology has improved the way people communicate: people from the same or different organizations can communicate in a matter of minutes to collaborate on work activities. In offices and organizations, computer networks serve as the backbone of daily communication from the top to the bottom level of the organization. Different types of software can be installed which are useful for transmitting messages and emails at high speed.
b) Data sharing
Another major advantage of computer networks is data sharing. All data, such as documents, files, accounts information, reports and multimedia, can be shared with the help of computer networks. Hardware sharing and application sharing are also allowed in many organizations such as banks and small firms.

c) Instant and multiple accesses


Computer networks support multiple simultaneous users: many users can access the same
information at the same time, and immediate commands, such as print commands, can be
issued over the network.

d) Video conferencing
Before the arrival of computer networks there was no video conferencing. LANs and WANs
have made it possible for organizations and businesses to hold live video conferences for
important discussions and meetings.

e) Internet Service
Computer networks distribute internet service over the entire network, so every computer
attached to the network can enjoy high-speed internet access, fast processing and workload
distribution.

f) Broadcasting
With the help of computer networks, news and important messages can be broadcast in a
matter of seconds, which saves a great deal of time and effort. People can exchange
messages over the network at any time, around the clock.

g) Photographs and large files


Computer networks can also be used for sending large data files, such as high-resolution
photographs.

h) Saves Cost
Computer networks save organizations money in several ways. Links built over the network
transfer files and messages instantly, which reduces transportation and communication
expenses. Networking also raises the standing of the organization because of the advanced
technologies it employs.

i) Remote access and login


Employees of the same or different organizations connected by a network can access it
remotely by simply entering the network's remote IP address or web address. The
communication gap that existed before computer networks no longer exists.

j) Flexible
Computer networks are quite flexible: all topologies and networking strategies support the
addition of extra components and terminals. They suit large and small organizations equally
well.

k) Reliable
Computer networks are reliable where the safety of data is concerned. If one of the attached
systems fails, the same data can be retrieved from another system attached to the same
network.

l) Data transmission
Data is transferred at high speed even when one or two terminal machines fail to work
properly. Data transmission is seldom affected in computer networks, and nearly complete
communication can be maintained even in critical scenarios.

m) Provides broader view


For the ordinary person, computer networks are a way to share their individual views with
the rest of the world.

Challenges of Networks in an Organization


There are many aspects to network design, all of which present unique challenges. From a
business perspective, planning and executing an effective network strategy requires a
substantial ongoing organizational commitment. The organization must be committed to
developing an infrastructure that facilitates communication of the business objectives to the
network planning team. The organization must also develop internal standards, methods,
and procedures to promote effective planning. A commitment to do things the “right” way
means adhering to the standardized processes and procedures even when there are
substantial pressures to take risky shortcuts.

LIMITATIONS OF NETWORKS IN AN ORGANIZATION


The main disadvantage of networks is that users become dependent upon them.
For example, if a network file server develops a fault, then many users may not be able to run
application programs and get access to shared data. To overcome this, a back-up server can
be switched into action when the main server fails. A fault on a network may also stop users
from being able to access peripherals such as printers and plotters. To minimize this, a
network is normally segmented so that a failure in one part of it does not affect other parts.
Another major problem with networks is that their efficiency is very dependent on the skill
of the systems manager. A badly managed network may operate less efficiently than non-
networked computers.
Also, a badly run network may allow external users into it with little protection against them
causing damage.
Damage could also be caused by novices causing problems, such as deleting important files.

All these could be summarized as below:


1) If a network file server develops a fault, then users may not be able to run application
programs
2) A fault on the network can cause users to lose data (if the files being worked upon
are not saved)
3) If the network stops operating, then it may not be possible to access various resources
4) Users work-throughput becomes dependent upon network and the skill of the
systems manager
5) It is difficult to make the system secure from hackers, novices or industrial espionage
6) Decisions on resource planning tend to become centralized, for example, what word
processor is used, what printers are bought, etc.
7) Networks that have grown with little thought can be inefficient in the long term.
8) As traffic increases on a network, the performance degrades unless it is designed
properly
9) Resources may be located too far away from some users
10) The larger the network becomes, the more difficult it is to manage

CLOUD COMPUTING
Cloud computing is an information technology (IT) paradigm, a model for enabling ubiquitous
access to shared pools of configurable resources (such as computer networks, servers, storage,
applications and services) which can be rapidly provisioned with minimal management effort,
often over the Internet. Cloud computing allows users and enterprises with various computing
capabilities to store and process data either in a privately-owned cloud, or on a third-party server
located in a data center - thus making data-accessing mechanisms more efficient and reliable.
Cloud computing relies on sharing of resources to achieve coherence and economy of scale, similar
to a utility.
Cloud computing allows companies to avoid or minimize up-front IT infrastructure costs.
Third-party clouds also enable organizations to focus on their core businesses instead of
expending resources on computer infrastructure and maintenance. Proponents further claim that
cloud computing allows enterprises to get their applications up and running faster, with improved
manageability and less maintenance, and that it enables IT teams to adjust resources more rapidly
to meet fluctuating and unpredictable business demand. Cloud providers typically use a "pay-as-
you-go" model, which can lead to unexpectedly high charges if administrators are not familiar
with cloud pricing models.

Characteristics
Cloud computing exhibits the following key characteristics:
 Agility for organizations may be improved, as cloud computing may increase users'
flexibility with re-provisioning, adding, or expanding technological infrastructure
resources.
 Cost reductions are claimed by cloud providers. A public-cloud delivery model converts
capital expenditures (e.g., buying servers) to operational expenditure. This purportedly
lowers barriers to entry, as infrastructure is typically provided by a third party and need not
be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility
computing basis is "fine-grained", with usage-based billing options. As well, less in-house
IT skills are required for implementation of projects that use cloud computing. The e-
FISCAL project's state-of-the-art repository contains several articles looking into cost
aspects in more detail, most of them concluding that costs savings depend on the type of
activities supported and the type of infrastructure available in-house.
 Device and location independence enable users to access systems using a web browser
regardless of their location or what device they use (e.g., PC, mobile phone). As
infrastructure is off-site (typically provided by a third-party) and accessed via the Internet,
users can connect to it from anywhere.
 Maintenance of cloud computing applications is easier, because they do not need to be
installed on each user's computer and can be accessed from different places (e.g., different
work locations, while travelling, etc.).
 Multitenancy enables sharing of resources and costs across a large pool of users thus
allowing for:
 Centralization of infrastructure in locations with lower costs (such as real estate,
electricity, etc.)
 Peak-load capacity increases (users need not engineer and pay for the resources and
equipment to meet their highest possible load-levels)
 Utilization and efficiency improvements for systems that are often only 10–20%
utilized.
 Performance is monitored by IT experts from the service provider, and consistent and
loosely coupled architectures are constructed using web services as the system interface.
 Resource pooling means the provider's computing resources are pooled to serve multiple
consumers using a multi-tenant model, with different physical and virtual resources
dynamically assigned and reassigned according to user demand. There is a sense of location
independence in that the consumer generally has no control over, or knowledge of, the exact
location of the provided resources.
 Productivity may be increased when multiple users can work on the same data
simultaneously, rather than waiting for it to be saved and emailed. Time may be saved as
information does not need to be re-entered when fields are matched, nor do users need to
install application software upgrades to their computer.
 Reliability improves with the use of multiple redundant sites, which makes well-designed
cloud computing suitable for business continuity and disaster recovery.
 Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-
grained, self-service basis in near real-time (Note, the VM startup time varies by VM type,
location, OS and cloud providers), without users having to engineer for peak loads. This
gives the ability to scale up when the usage need increases or down if resources are not
being used.
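The "10–20% utilized" figure in the multitenancy bullet above can be turned into simple consolidation arithmetic. A sketch, where the host count, utilization and target load are illustrative assumptions rather than figures from the text:

```python
import math

# Illustrative assumptions: 20 physical hosts, each ~15% busy, and a
# 70% target load per consolidated host; load is treated as divisible.
servers, utilization = 20, 0.15
target = 0.70

work = servers * utilization        # total load, in "hosts' worth"
needed = math.ceil(work / target)   # hosts required after consolidation
```

With these numbers, roughly three hosts' worth of load fits on 5 well-utilized machines instead of 20, which is exactly the utilization and efficiency improvement the multitenancy bullet describes.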

Cloud computing benefits


Cloud computing offers several attractive benefits for businesses and end users. The main
benefits of cloud computing are:
 Self-service provisioning: End users can spin up compute resources for almost any type
of workload on demand. This eliminates the traditional need for IT administrators to
provision and manage compute resources.
 Elasticity: Companies can scale up as computing needs increase and scale down again as
demands decrease. This eliminates the need for massive investments in local infrastructure,
which may or may not remain active.
 Pay per use: Compute resources are measured at a granular level, enabling users to pay
only for the resources and workloads they use.
 Workload resilience: Cloud service providers often implement redundant resources to
ensure resilient storage and to keep users' important workloads running -- often across
multiple global regions.
 Migration flexibility: Organizations can move certain workloads to or from the cloud --
or to different cloud platforms -- as desired or automatically for better cost savings or to
use new services as they emerge.
 On-demand self-service. A consumer can unilaterally provision computing capabilities,
such as server time and network storage, as needed automatically without requiring human
interaction with each service provider.
 Broad network access. Capabilities are available over the network and accessed through
standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g.,
mobile phones, tablets, laptops, and workstations).
 Measured service. Cloud systems automatically control and optimize resource use by
leveraging a metering capability at some level of abstraction appropriate to the type of
service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage
can be monitored, controlled, and reported, providing transparency for both the provider
and consumer of the utilized service.
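The pay-per-use and measured-service points above reduce to multiplying metered usage by unit rates. A toy sketch with hypothetical rates (real providers meter per second, per GB, and across many more dimensions):

```python
# Hypothetical unit rates for illustration; not any real provider's pricing.
RATES = {"vm_hours": 0.05, "storage_gb_months": 0.02, "egress_gb": 0.09}

def monthly_bill(usage):
    """Sum metered quantity * unit rate over every measured resource."""
    return round(sum(RATES[res] * qty for res, qty in usage.items()), 2)

# 720 VM-hours + 100 GB stored + 50 GB egress:
# 720*0.05 + 100*0.02 + 50*0.09 = 36 + 2 + 4.5 = 42.5
bill = monthly_bill({"vm_hours": 720, "storage_gb_months": 100, "egress_gb": 50})
```

The same metering data gives the provider its billing records and the consumer the transparency the measured-service bullet describes.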

Cloud computing service models


Software as a Service (SaaS). The capability provided to the consumer is to use the provider's
applications running on a cloud infrastructure. The applications are accessible from various client
devices through either a thin client interface, such as a web browser (e.g., web-based email), or a
program interface. The consumer does not manage or control the underlying cloud infrastructure
including network, servers, operating systems, storage, or even individual application capabilities,
with the possible exception of limited user-specific application configuration settings.

Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the cloud
infrastructure consumer-created or acquired applications created using programming languages,
libraries, services, and tools supported by the provider. The consumer does not manage or control
the underlying cloud infrastructure including network, servers, operating systems, or storage, but
has control over the deployed applications and possibly configuration settings for the application-
hosting environment.

Infrastructure as a Service (IaaS). The capability provided to the consumer is to provision


processing, storage, networks, and other fundamental computing resources where the consumer is
able to deploy and run arbitrary software, which can include operating systems and applications.
The consumer does not manage or control the underlying cloud infrastructure but has control over
operating systems, storage, and deployed applications; and possibly limited control of select
networking components (e.g., host firewalls).

Security as a service (SECaaS)


Security as a service (SECaaS) is a business model in which a large service provider integrates
their security services into a corporate infrastructure on a subscription basis more cost effectively
than most individuals or corporations can provide on their own, when total cost of ownership is
considered. In this scenario, security is delivered as a service from the cloud, without requiring on-
premises hardware avoiding substantial capital outlays. These security services often include
authentication, anti-virus, anti-malware/spyware, intrusion detection, and security event
management, among others.

Mobile "backend" as a service (MBaaS)


In the mobile "backend" as a service (MBaaS) model, also known as backend as a service (BaaS), web
app and mobile app developers are provided with a way to link their applications to cloud storage
and cloud computing services with application programming interfaces (APIs) exposed to their
applications and custom software development kits (SDKs). Services include user management,
push notifications, integration with social networking services and more. This is a relatively recent
model in cloud computing, with most BaaS startups dating from 2011 or later but trends indicate
that these services are gaining significant mainstream traction with enterprise consumers.

Cloud computing types


Private cloud
Private cloud is cloud infrastructure operated solely for a single organization, whether managed
internally or by a third-party, and hosted either internally or externally. Undertaking a private cloud
project requires significant engagement to virtualize the business environment, and requires the
organization to reevaluate decisions about existing resources. It can improve business, but every
step in the project raises security issues that must be addressed to prevent serious vulnerabilities.

Public cloud
A cloud is called a "public cloud" when the services are rendered over a network that is open for
public use. Public cloud services may be free. Technically there may be little or no difference
between public and private cloud architecture; however, security considerations may be
substantially different for services (applications, storage, and other resources) that are made
available by a service provider to a public audience, and when communication takes place over a
non-trusted network. Generally, public cloud service providers such as Amazon Web Services (AWS),
Microsoft and Google own and operate the infrastructure at their data centers, and access is generally
via the Internet. AWS and Microsoft also offer direct connect services, called "AWS Direct
Connect" and "Azure ExpressRoute" respectively; such connections require customers to purchase
or lease a private connection to a peering point offered by the cloud provider.
Hybrid cloud
Hybrid cloud is a composition of two or more clouds (private, community or public) that remain
distinct entities but are bound together, offering the benefits of multiple deployment models.
Hybrid cloud can also mean the ability to connect collocation, managed and/or dedicated services
with cloud resources. A hybrid cloud service crosses isolation and provider boundaries so that it
can't be simply put in one category of private, public, or community cloud service. It allows one
to extend either the capacity or the capability of a cloud service, by aggregation, integration or
customization with another cloud service.

Advantages of Cloud Computing


Cost Savings
Perhaps, the most significant cloud computing benefit is in terms of IT cost savings. Businesses,
no matter what their type or size, exist to earn money while keeping capital and operational
expenses to a minimum. With cloud computing, you can save substantial capital costs with zero
in-house server storage and application requirements. The lack of on-premises infrastructure also
removes their associated operational costs in the form of power, air conditioning and
administration costs. You pay for what is used and disengage whenever you like - there is no
invested IT capital to worry about. It’s a common misconception that only large businesses can
afford to use the cloud, when in fact, cloud services are extremely affordable for smaller
businesses.

Reliability
With a managed service platform, cloud computing is much more reliable and consistent than in-
house IT infrastructure. Most providers offer a Service Level Agreement which guarantees
24/7/365 service and 99.99% availability. Your organization can benefit from a massive pool of redundant
IT resources, as well as quick failover mechanisms: if a server fails, hosted applications and
services can easily be migrated to any of the available servers.

Manageability
Cloud computing provides enhanced and simplified IT management and maintenance capabilities
through central administration of resources, vendor managed infrastructure and SLA backed
agreements. IT infrastructure updates and maintenance are eliminated, as all resources are
maintained by the service provider. You enjoy a simple web-based user interface for accessing
software, applications and services – without the need for installation - and an SLA ensures the
timely and guaranteed delivery, management and maintenance of your IT services.

Strategic Edge
Ever-increasing computing resources give you a competitive edge, as the time
you require for IT procurement is virtually nil. Your company can deploy mission-critical
applications that deliver significant business benefits, without any upfront costs and minimal
provisioning time. Cloud computing allows you to forget about technology and focus on your key
business activities and objectives. It can also help you to reduce the time needed to market newer
applications and services.

Disadvantages of Cloud Computing


Downtime
As cloud service providers take care of a number of clients each day, they can become
overwhelmed and may even come up against technical outages. This can lead to your business
processes being temporarily suspended. Additionally, if your internet connection is offline, you
will not be able to access any of your applications, server or data from the cloud.

Security
Although cloud service providers implement the best security standards and industry certifications,
storing data and important files on external service providers always opens up risks. Using cloud-
powered technologies means you need to provide your service provider with access to important
business data. Meanwhile, being a public service opens up cloud service providers to security
challenges on a routine basis. The ease in procuring and accessing cloud services can also give
nefarious users the ability to scan, identify and exploit loopholes and vulnerabilities within a
system. For instance, in a multi-tenant cloud architecture where multiple users are hosted on the
same server, a hacker might try to break into the data of other users hosted and stored on the same
server. However, such exploits and loopholes are not likely to surface, and the likelihood of a
compromise is not great.

Vendor Lock-In
Although cloud service providers promise that the cloud will be flexible to use and integrate,
switching cloud services is something that hasn’t yet completely evolved. Organizations may find
it difficult to migrate their services from one vendor to another. Hosting and integrating current
cloud applications on another platform may throw up interoperability and support issues. For
instance, applications developed on Microsoft Development Framework (.Net) might not work
properly on the Linux platform.

Limited Control
Since the cloud infrastructure is entirely owned, managed and monitored by the service provider,
it transfers minimal control over to the customer. The customer can only control and manage the
applications, data and services operated on top of that, not the backend infrastructure itself. Key
administrative tasks such as server shell access, updating and firmware management may not be
passed to the customer or end user.
It is easy to see how the advantages of cloud computing easily outweigh the drawbacks. Decreased
costs, reduced downtime, and less management effort are benefits that speak for themselves.

The risks of cloud computing


 Environmental security — The concentration of computing resources and users in a cloud
computing environment also represents a concentration of security threats. Because of their size
and significance, cloud environments are often targeted by attacks on virtual machines, bot malware,
brute-force attacks, and other attacks.
 Data privacy and security — Hosting confidential data with cloud service providers involves
the transfer of a considerable amount of an organization's control over data security to the
provider. Make sure your vendor understands your organization’s data privacy and security
needs.
 Data availability and business continuity — A major risk to business continuity in the cloud
computing environment is loss of internet connectivity. Ask your cloud provider what controls
are in place to ensure internet connectivity. If a vulnerability is identified, you may have to
terminate all access to the cloud provider until the vulnerability is rectified. Finally, the seizure
of a data-hosting server by law enforcement agencies may result in the interruption of unrelated
services stored on the same machine.
 Record retention requirements — If your business is subject to record retention requirements,
make sure your cloud provider understands what they are, so that it can meet them.
 Disaster recovery — Hosting your computing resources and data at a cloud provider makes
the cloud provider’s disaster recovery capabilities vitally important to your company’s disaster
recovery plans. Know your cloud provider's disaster recovery capabilities and ask your provider
whether they have been tested.

GREEN COMPUTING
This is the term used to denote efficient use of resources in computing
It is also known as Green IT
Green Computing is where organizations adopt a policy of ensuring that the setup and operation
of Information Technology produces the minimal carbon footprint. Key issues are energy
efficiency in computing and promoting environmentally friendly computer technologies.

It is the study and practice of designing, manufacturing, using, and disposing of computers,
servers, and associated subsystems with minimal or no impact on the environment.

Core objectives of Green Computing Strategies:


• Minimizing energy consumption
• Purchasing green energy
• Reducing the paper and other consumables used
• Minimizing equipment disposal requirements
• Reducing travel requirements for employees/customers

Overview of Green Computing:


“Greening” your computing equipment is a low-risk way for your business to not only help the
environment but also reduce costs. It’s also one of the largest growing trends in business today.
Making a proper decision to go green in the workplace not only improves the net
profit of your business, but also reduces your carbon footprint. Reducing energy usage, which also
reduces carbon dioxide emissions and your energy bill, is the most effective thing you can do.

History:
The term "Green Computing" was probably coined shortly after the Energy Star program began
in 1992. One of the first results of green computing was the "sleep mode" function of
computer monitors. As the concept developed, green computing grew to encompass thin-client
solutions, energy-cost accounting, virtualization practices, e-waste handling, and more.

Why Go Green?
We go green because of the following
 Climate Change
 Savings
 Reliability of Power
 Computing

Climate Change
First and foremost, conclusive research shows that CO2 and other emissions are causing global
climate and environmental damage. Preserving the planet is a valid goal because it aims to preserve
life. Planets like ours that support life are very rare: as far as we know, none of the other planets
in our solar system, nor any planet in nearby star systems, is habitable.

Savings
Green computing can lead to serious cost savings overtime. Reductions in energy costs from
servers, cooling, and lighting are generating serious savings for many corporations.

Reliability of Power:
As energy demand in the world goes up, energy supply is declining or flat. Energy-efficient systems
help ensure healthy power systems. Also, more companies are generating more of their own
electricity, which further motivates them to keep power consumption low.

Computing:
Computing Power Consumption has Reached a Critical Point: Data centers have run out of usable
power and cooling due to high densities.
Approaches to Green Computing:
1. Virtualization:
Computer virtualization is the process of running two or more logical computer systems
on one set of physical hardware.
2. Power Management:
ACPI (Advanced Configuration and Power Interface) allows an operating system to directly
control the power-saving aspects of its underlying hardware. Power management for
computer systems is desired for many reasons, particularly:
• Prolong battery life for portable and embedded systems.
• Reduce cooling requirements.
• Reduce noise.
• Reduce operating costs for energy and cooling
3. Power Supply:
Climate savers computing initiative promotes energy saving and reduction of greenhouse
gas emissions by encouraging development and use of more efficient power supplies.
4. Storage:
There are three routes available, which vary in cost, performance, and capacity: desktop
hard drives, laptop hard drives, and solid-state drives.
5. Video Card:
A fast GPU may be the largest power consumer in a computer. Energy efficient display
options include:
• No video card – use a shared terminal, shared thin client, or desktop sharing software
if display required.
• Use motherboard video output – typically low 3D performance and low power.
• Reuse an older video card that uses little power, many do not require heat sinks or fans.

• Select a GPU based on average wattage or performance per watt.
6. Displays:
LCD monitors typically use a cold-cathode fluorescent bulb to provide light for the display.
Some newer displays use an array of light emitting diodes (LEDs) in place of the
fluorescent bulb, which reduces the amount of electricity used by the display. An LCD
monitor typically uses about three times less energy when active, and ten times less energy
when in sleep mode, than a CRT display.

7. Materials Recycling:
Parts from outdated systems may be salvaged and recycled through certain retail outlets
and municipal or private recycling.
8. Telecommuting:
Telecommuting technologies implemented in green computing initiatives have advantages
like increased worker satisfaction, reduction of greenhouse gas emissions related to travel
and increased profit margins.
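The display comparison in item 6 can be put into rough numbers. The wattages, daily usage pattern and tariff below are illustrative assumptions, not measured figures:

```python
HOURS_ACTIVE, HOURS_SLEEP = 8, 16   # assumed daily usage pattern
TARIFF = 0.15                       # assumed electricity price, $ per kWh

def annual_cost(active_w, sleep_w):
    """Yearly electricity cost of a display at the assumed tariff."""
    daily_kwh = (active_w * HOURS_ACTIVE + sleep_w * HOURS_SLEEP) / 1000
    return round(daily_kwh * 365 * TARIFF, 2)

crt = annual_cost(active_w=75, sleep_w=5)     # hypothetical CRT draw
lcd = annual_cost(active_w=25, sleep_w=0.5)   # ~3x / ~10x less, per the text
```

Under these assumptions the LCD costs roughly a third as much per year to run, which is the kind of saving green-computing audits quantify.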

The goals of Green Computing:


• The goal of green computing is to reduce the use of hazardous materials, maximize energy
efficiency during the product's lifetime, and promote the recyclability or biodegradability
of defunct products and factory waste.
• To comprehensively and effectively address the environmental impacts of computing/IT,
we must adopt a holistic approach and make the entire IT lifecycle greener by addressing
environmental sustainability along the following four complementary paths:
• Green Use: Intelligent use of energy and information systems. Reducing the energy
consumption of computers and other information systems as well as using them in an
environmentally sound manner.
• Green Disposal: Reduction of waste, reuse and refurbishment of hardware and recycling
of out of use peripherals and other items. Refurbishing and reusing old computers and
properly recycling unwanted computers and other electronic equipment.
• Green Design: Efficient design of data centers and workstations. Designing energy –
efficient and environmentally sound components, computers, servers, cooling equipment,
and data centers.
• Green Manufacturing: Informed purchasing of components, peripherals and equipment
manufactured with the environment in mind. Manufacturing electronic components,
computers, and other associated subsystems with minimal impact on the environment.

Future of Green Computing:


The plan towards green IT should include new electronic products and services with optimum
efficiency and every possible option for energy savings.
Enterprise-wide, companies are laying emphasis on moving towards eco-friendly components
in computers, and the use of eco-friendly, sustainable components will become the norm
rather than the exception in future.

INTERNET OF THINGS
The movement of information in the Internet is achieved via a system of interconnected computer
networks that share data by packet switching using the standardized Internet Protocol Suite
(TCP/IP).

The Internet is a giant worldwide network. The Internet started in 1969 when the United States
government funded a major research project on computer networking called ARPANET
(Advanced Research Project Agency NETwork). When on the Internet you move through
cyberspace.

Cyberspace is the space of electronic movement of ideas and information.


The web provides a multimedia interface to resources available on the Internet. It is also known
as WWW or the World Wide Web. The web was first introduced in 1992 at CERN (the European
Organization for Nuclear Research) in Switzerland. Prior to the web, the Internet was all text with no
graphics, animations, sound or video.

Common Internet applications


• Communicating
- Communicating on the Internet includes e-mail, discussion groups (newsgroups),
and chat groups.
- You can use e-mail to send or receive messages to people around the world.
- You can join discussion groups or chat groups on various topics.
• Shopping
- Shopping on the Internet is called e-commerce,
- You can window shop at cyber malls called web storefronts
- You can purchase goods using cheques, credit cards or electronic cash called
electronic payment.
• Researching
- You can do research on the Internet by visiting virtual libraries and browse through
stacks of books.
- You can read selected items at the virtual libraries and even check out books.
• Entertainment
- There are many entertainment sites on the Internet such as live concerts, movie
previews and book clubs.
- You can also participate in interactive live games on the Internet.
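The first application above, e-mail, can be illustrated with Python's standard email library. The addresses and the SMTP server name below are placeholders, not real accounts; the sending step is shown but commented out because it would require an actual mail server.

```python
# Sketch of composing an Internet e-mail message with Python's standard
# email library. Addresses and server names are illustrative placeholders.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"       # placeholder sender
msg["To"] = "bob@example.org"           # placeholder recipient
msg["Subject"] = "Greetings from the Internet"
msg.set_content("E-mail travels between mail servers using SMTP.")

# Sending would use the SMTP protocol (not executed here):
# import smtplib
# with smtplib.SMTP("smtp.example.com") as s:   # placeholder server
#     s.send_message(msg)

print(msg["Subject"])
```

A message like this is handed to a mail server, which relays it across the Internet to the recipient's mail server, from where the recipient retrieves it.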

How do you get connected to the Internet?


You get connected to the Internet through a computer. Connection to the Internet is referred to as
access to the Internet. Using a provider is one of the most common ways users can access the
Internet. A provider is also called a host computer and is already connected to the Internet. A
provider provides a path or connection for individuals to access the Internet.

There are three widely used providers:


a) Colleges and universities – colleges and universities provide free access to the Internet
through their Local Area Networks.
b) Internet Service Providers (ISPs) – ISPs offer access to the Internet at a fee. They are
more expensive than online service providers.
c) Online Service Providers – provide access to the Internet and a variety of other services
at a fee. They are the most widely used source for Internet access and less expensive than
ISPs.

Connections
There are three types of connections to the Internet through a provider:
• Direct or dedicated
• SLIP and PPP
• Terminal connection
Direct or dedicated
This is the most efficient access method to all functions on the Internet. However, it is expensive
and rarely used by individuals. It is used by many organizations such as colleges, universities,
service providers and corporations.

SLIP and PPP


This type of connection is widely used by end users to connect to the Internet. It is slower and less
convenient than a direct connection. However, it provides a high level of service at a lower cost
than a direct connection. It uses a high-speed modem and a standard telephone line to connect to
a provider that has a direct connection to the Internet. It requires a special software protocol:
SLIP (Serial Line Internet Protocol) or PPP (Point-to-Point Protocol). With this type of connection,
your computer becomes part of a client/server network. It requires special client software to
communicate with server software running on the provider's computer and other Internet
computers.
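The client/server arrangement described above can be sketched with Python's standard http.server and http.client modules. This is an illustration under simplified assumptions: client and server both run on the local machine, whereas with a real SLIP/PPP connection the server software would run on the provider's computer.

```python
# Sketch of the client/server model: client software requests data,
# server software responds. Both run locally here for illustration.
import http.server
import http.client
import threading

class Hello(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from the server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):       # keep the example output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Hello)  # any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client software sends a request and reads the server's reply.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.read().decode())
server.shutdown()
```

The division of labour is the same as on the real Internet: the client asks, the server answers, and the connection (here loopback, there SLIP/PPP over a telephone line) carries the traffic between them.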

Terminal connection
This type of connection also uses a high-speed modem and a standard telephone line. Your
computer becomes part of a terminal network with a terminal connection. With this connection,
your computer's operations are very limited because it only displays the communication that
occurs between the provider and other computers on the Internet. It is less expensive than SLIP
or PPP but not as fast or convenient.
