Basics of IT
Detailed Curriculum
Module 5 Internet and its uses: Internet; E-mail, E-Commerce & E-banking on the World Wide Web; architecture and functioning of the Internet; the World Wide Web and its structure.
Module 1
Introduction to Computers
A computer is an electronic device that takes input such as numbers, text, sound, images, animations and video, processes it, and converts it into meaningful information, presenting the processed input as output. All numbers, text, sound, images, animations and video used as input are called data, and everything returned as output is called information. Input is the raw data entered into the computer using input devices. In short, a computer is an electronic machine that can accept data, process it according to the instructions given, and then give out meaningful information.
A computer is “an automatic electric machine for performing calculations”. The United States of America Institute has defined a computer as “a device capable of solving problems by accepting data, performing described operations on the data, and supplying the results of these operations. Various types of computers are calculators, digital computers and analog computers”.
According to the International Organization for Standardization (ISO), a computer is “a data processor that can perform substantial computations, including numerous arithmetic and logic operations, without intervention by a human operator during the run”.
The term “Electronic Data Processing” (EDP) is associated with electronic computers and refers to the processing of data through electronic computers. Collected data in coded form, known as input, is fed into the computer so that it can be processed and the desired results produced in the form of reports and documents, known as output, in code or plain language, as desired by the investigator.
Today’s generation could never imagine, even in their wildest dreams, the world of ages past, when there were no computers or other technologies. We have advanced so much that now every piece of information is just a click away, available 24/7. All this advancement was possible only with the introduction of a small device called the “computer”.
Basically, a computer is a device that accepts a message from the user via input devices, processes this message, stores the information on storage devices and later gives an output of the message through output devices.
A simple description of the computer: normally, a computer consists of a processing unit, called the Central Processing Unit or CPU, and some form of memory. The first electronic digital computers were developed in the years between 1940 and 1945. They were as big as a room and consumed as much power as several hundred of today’s personal computers.
Initially, “computer” referred to a person who carried out calculations or computations; the word evolved in this sense from 1613 and continued until the end of the 19th century. Later it was re-described as a machine that carries out computations.
Computers are used for various purposes today, such as weather forecasting, machinery operation and spacecraft guidance. In the medical sector they provide a great helping hand by storing information that can be referred to later; they are also used in space technology, bank automation, ticket booking through the net, traffic control, even playing games, and much more. All this is possible only because of the characteristics a computer possesses: speed, accuracy, reliability and integrity. It can execute over a billion instructions per second without committing any mistake and is completely reliable. The memory of a computer is so vast that it can hold a large amount of data.
It is programming that decides what a computer runs. Programming is defined as a set of instructions given to the computer, which it accepts in order to solve a problem. Many different languages are used to program computers; BASIC, COBOL, C, C++ and Java are a few to name.
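To make this concrete, here is a minimal sketch of a program in Python (one of many possible languages; those named above would serve equally well). The task, averaging exam marks, is invented purely for illustration.

```python
# A tiny program: a set of instructions the computer follows to solve
# one specific problem -- here, averaging a list of exam marks.

def average(marks):
    """Return the arithmetic mean of a list of numbers."""
    return sum(marks) / len(marks)

marks = [72, 85, 90, 64]          # input: raw data
result = average(marks)           # processing: instructions applied to the data
print("Average mark:", result)    # output: meaningful information
```

Note how the program mirrors the input-process-output cycle described earlier: data goes in, instructions transform it, and information comes out.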
With the introduction of computers, attaining information has become much easier. Computers have become the backbone of Information Technology, and a major application in this sector is the Internet. With the Internet, nothing seems impossible today. Apart from acquiring information, one can stay connected with friends and family; it is a great platform for business expansion, purchasing, studying, and the list goes on.
Computerization in almost all sectors has created job openings for thousands. Such is the importance of computer knowledge that computer education has been introduced at the school level, even in primary classes. Every year thousands of students step out of universities and colleges across the globe into the world of computer technology, and this youth is tomorrow’s asset in taking technology to the next level of advancement.
The computer has proved itself in every role it has been assigned: a great helping hand in every sector to which it has been applied. Telecommunication and satellite imagery are also computer-based, adding to the long list of fields in which computers find application.
Classification Of Computers
The basic types of computers range from microcomputers and minicomputers to mainframes. As may be noticed from Fig. 6.2, there are common areas between two adjoining rectangles. These areas represent the fact that the higher end of a smaller computer system may have capacities equivalent to the lower end of a bigger computer system.
For example, a highly configured microcomputer may be as good as a smaller minicomputer. The same is true for a minicomputer and the mainframe. Only a few years ago, computers could be distinguished on the basis of the amount of primary memory or the speed of processing. These bases are no longer valid for classification.
The distinctions are changing, and some of them are fast dying out as a result of advancements in hardware technologies. In each category, the buyer has many configuration options. With increasing competition, sellers are falling over one another trying to sell configurations as high as possible to push up their revenues.
Innovations like parallel processing using cheaper PC platforms are cutting into the mainframe market. Such parallel processing involves combining hundreds of processors that break an application down into many parts in order to enhance processing speed.
WORKSTATION
A workstation was originally a computer used by one person, particularly for graphics and design applications, and was used primarily in engineering. It had a fast and powerful central processor, a high-resolution monitor and large memory. This enabled complex designs to be manipulated easily. These characteristics, however, are no longer unique to workstations. A high-performance personal computer can offer very similar services, so the distinction is a historical one. Personal computers are generally fitted with some kind of graphics expansion card: a circuit board containing the necessary electronics.
(i) Hardware:
The physical components of a computer constitute its Hardware. These include keyboard,
mouse, monitor and processor. Hardware consists of input devices and output devices that
make a complete computer system.
Examples of input devices are keyboard, optical scanner, mouse and joystick which are used
to feed data into the computer. Output devices such as monitor and printer are media to
get the output from the computer.
(ii) Software:
A set of programs that form an interface between the hardware and the user of a computer
system are referred to as Software.
They are of six types:
(a) System software:
A set of programs to control the internal operations such as reading data from input
devices, giving results to output devices and ensuring proper functioning of components is
called system software.
(b) Application software:
Programs designed by the user to perform a specific function, such as accounting software,
payroll software etc.
(c) Operating system:
A set of tools and programs to manage the overall working of a computer using a defined
set of hardware components is called an operating system. It is the interface between the
user and the computer system.
(d) Utility software:
Certain special purpose programs that are designed to perform a specialized task, such as
functions to copy, cut or paste files in a computer, formatting a disk etc.
(e) Language processors:
Special software that accepts program code and translates it into the machine/assembly language understandable by a computer. It also checks the correctness of the language syntax and reports errors.
(f) Connectivity software:
A set of programs and instructions to connect the computer with the main server to enable
sharing of resources and information with the server and other connected computers.
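As a small illustration of the utility-software functions listed in (d) above, the sketch below performs a file copy using Python's standard shutil module. The file names and contents are invented for illustration, and a temporary scratch directory is used so the example is self-contained.

```python
# A sketch of one utility-software task mentioned above: copying a file.
# File names and contents here are made up for illustration.
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()                     # scratch directory
src = os.path.join(workdir, "report.txt")
dst = os.path.join(workdir, "report_backup.txt")

with open(src, "w") as f:
    f.write("quarterly figures\n")

shutil.copy(src, dst)                            # the "copy" utility function

with open(dst) as f:
    print(f.read().strip())                      # -> quarterly figures
```

Real utility programs (file managers, disk formatters) wrap exactly this kind of operation in a user interface.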
(iii) People:
The most important element of a computer system is its users. They are also called the liveware of the computer system.
The following types of people interact with a computer system:
(a) System Analysts:
People who design the operation and processing of the system.
(b) System Programmers:
People who write codes and programs to implement the working of the system.
(c) System Operators:
People who operate the system and use it for different purposes. Also called the end users.
(iv) Procedures:
Procedure is a step by step series of instructions to perform a specific function and achieve
desired output.
In a computer system, there are three types of procedures:
(a) Hardware oriented procedure:
It defines the working of a hardware component.
(b) Software oriented procedure:
It is a set of detailed instructions for using the software.
(c) Internal procedure:
It maintains the overall internal working of each part of a computer system by directing the
flow of information.
(v) Data:
The facts and figures that are fed into a computer for further processing are called data.
Data is raw until the computer system interprets it using machine language, stores it in
memory, classifies it for processing and produces results in conformance with the
instructions given to it. Processed and useful data is called information which is used for
decision making.
(vi) Connectivity:
When two or more computers are connected to each other, they can share information and
resources such as sharing of files (data/music etc.), sharing of printer, sharing of facilities
like the internet etc. This sharing is possible using wires, cables, satellite, infra-red,
Bluetooth, microwave transmission etc.
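The sharing of information between connected computers can be sketched with Python's standard socket module. Here both endpoints run on one machine over the loopback interface, purely for illustration; the same calls work between two real computers on a network.

```python
# A minimal sketch of two "computers" sharing information. Both endpoints
# run on this machine over loopback, but the same socket calls work
# across a real network connection.
import socket
import threading

def server(sock):
    conn, _ = sock.accept()              # wait for the other computer
    conn.sendall(b"shared file contents")
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))          # OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=server, args=(listener,))
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))
received = b""
while True:                              # read until the server closes
    chunk = client.recv(1024)
    if not chunk:
        break
    received += chunk
client.close()
t.join()
listener.close()

print(received.decode())                 # -> shared file contents
```

Printer sharing, file sharing and Internet access all rest on connections of this kind, whether carried over cable, Wi-Fi or Bluetooth.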
PRIMARY STORAGE
Primary storage is also called internal storage or memory. It is used to store programs and data currently being processed by the CPU. Primary storage circuits, like light bulbs, need electricity to stay on: if the power to the computer is turned off, all the circuits turn off and all data in primary storage is lost. When the computer is turned back on, the data does not reappear; it is lost forever. Because of this characteristic, primary storage is called volatile storage. This type of primary storage is called Random Access Memory, or RAM. RAM is the main type of primary storage used in computers, and it is volatile.
ROM – Many computers have another type of primary storage called ROM (Read Only Memory). ROM is non-volatile storage: when the power to the computer is turned off, the contents of ROM are not lost. ROM stores preset programs put there by the computer manufacturer. When you turn on a PC, you will usually see a reference to the BIOS (Basic Input Output System). This is part of the ROM chip, containing all the programs needed to control the keyboard, monitor, disk drive and so on.
SECONDARY STORAGE
Secondary storage is an optional attachment, cable-connected to the CPU. Secondary storage is non-volatile: any data or programs stored in secondary storage stay there, even with the computer power turned off, unless someone purposely erases them. Secondary storage is a permanent form of storage.
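The contrast between volatile primary storage and permanent secondary storage can be sketched in a few lines of Python. An ordinary variable stands in for data held in RAM, and a file on disk stands in for secondary storage; the file name and contents are invented for illustration.

```python
# RAM vs secondary storage, sketched: a Python variable stands in for
# data in volatile primary storage, while a file on disk stands in for
# non-volatile secondary storage that survives "power off".
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "saved.txt")

data_in_ram = "unsaved draft"        # lives only in memory
with open(path, "w") as f:
    f.write("saved draft")           # written out to secondary storage

del data_in_ram                      # "power off": the in-memory copy is gone

with open(path) as f:                # after "restart", the file is still there
    recovered = f.read()
print(recovered)                     # -> saved draft
```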
Introduction to High level and low level languages
High-Level Language
A high-level language (HLL) is a programming language, such as C, FORTRAN or Pascal, that enables a programmer to write programs that are more or less independent of a particular type of computer. Such languages are considered high-level because they are closer to human languages and further from machine languages.
In contrast, assembly languages are considered low-level because they are very close to machine languages.
A high-level language is any programming language that enables development of a program in a much more user-friendly programming context and is generally independent of the computer's hardware architecture.
A high-level language has a higher level of abstraction from the computer, and focuses more
on the programming logic rather than the underlying hardware components such as
memory addressing and register utilization.
High-level languages are designed to be used by the human operator or the programmer.
They are referred to as "closer to humans." In other words, their programming style and
context is easier to learn and implement than low-level languages, and the entire code
generally focuses on the specific program to be created.
A high-level language does not require addressing hardware constraints when developing a program. However, every program written in a high-level language must be translated into machine language before being executed by the computer.
BASIC, C/C++ and Java are popular examples of high-level languages.
High-level languages are similar to human language. Unlike low-level languages, high-level languages are programmer-friendly: easy to code, debug and maintain.
A high-level language provides a higher level of abstraction from machine language. High-level languages do not interact directly with the hardware; rather, they focus more on complex arithmetic operations, optimal program efficiency and ease of coding.
Low-level programming uses machine-friendly language: programmers write code either in binary or in assembly language. Writing programs in binary is a complex and cumbersome process; hence, to make programming more programmer-friendly, programs in a high-level language are written using English-like statements.
High-level programs require compilers or interpreters to translate source code to machine language. Source code written in a high-level language can be compiled to multiple machine languages; thus, high-level languages are machine-independent.
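The translation step can be observed directly in CPython using its standard dis module. What it prints is bytecode for Python's virtual machine rather than real CPU machine code, but it illustrates the idea: one high-level line becomes several lower-level instructions. (The exact instruction names vary between Python versions.)

```python
# CPython translates high-level source into lower-level instructions.
# These are bytecodes for Python's virtual machine, not real CPU machine
# code, but they show what a translator produces from one source line.
import dis

def subtract(a, b):
    return a - b

dis.dis(subtract)   # prints instructions such as LOAD_FAST for each operand
```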
Today almost all programs are developed using a high-level programming language. We can develop a variety of applications using high-level languages; they are used to develop desktop applications, websites, system software, utility software and many more.
High-level languages are grouped into two categories based on their execution model: compiled and interpreted languages.
Advantages of High level language
The main advantage of high-level languages over low-level languages is that they are easier
to read, write, and maintain. Ultimately, programs written in a high-level language must be
translated into machine language by a compiler or interpreter.
i. High-level languages are programmer-friendly: easy to write, debug and maintain.
ii. They provide a higher level of abstraction from machine languages.
iii. They are machine-independent.
iv. They are easy to learn.
v. They are less error-prone, and errors are easy to find and debug.
vi. High-level programming results in better programmer productivity.
Machine language
Machine language is the language closest to the hardware. It consists of a set of instructions that are executed directly by the computer. These instructions are sequences of binary bits, and each instruction performs a very specific and small task. Instructions written in machine language are machine-dependent and vary from computer to computer.
Example: SUB AX, BX = 00001011 00000001 00100010 is an instruction to subtract the values of the two registers AX and BX.
In the early days of programming, programs were written only in machine language: each and every program was written as a sequence of binaries.
A programmer must have additional knowledge of the architecture of the particular machine before programming in machine language. Developing programs in machine language is a tedious job, since it is very difficult to remember the binary sequences for different computer architectures; therefore, it is not much in practice nowadays.
Assembly language
Assembly language is an improvement over machine language. Similar to machine language, assembly language also interacts directly with the hardware, but instead of using raw binary sequences to represent instructions, it uses mnemonics.
Mnemonics relieve programmers from remembering binary sequences for specific instructions, as English-like words such as ADD, MOV and SUB are easier to remember than a binary sequence like 10001011. However, programmers still have to remember the various mnemonics for different computer architectures.
Assembly language uses a special program called an assembler, which translates mnemonics to the corresponding machine code.
Assembly language is still in use. It is used for developing operating systems, device drivers, compilers and other programs that require direct hardware access.
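What an assembler does can be sketched as a simple table lookup: each mnemonic and register name maps to a fixed bit pattern. The opcode and register encodings below are invented for illustration; they are not the real x86 encodings of SUB, ADD or MOV.

```python
# A toy "assembler": it maps mnemonics to opcode bits the way a real
# assembler maps ADD/MOV/SUB to machine code. The opcode values and
# register numbers here are invented for illustration, not real x86.
OPCODES = {"MOV": "0001", "ADD": "0010", "SUB": "0011"}
REGISTERS = {"AX": "00", "BX": "01"}

def assemble(line):
    """Translate one 'MNEMONIC DST, SRC' line into a bit string."""
    mnemonic, operands = line.split(maxsplit=1)
    dst, src = [r.strip() for r in operands.split(",")]
    return OPCODES[mnemonic] + REGISTERS[dst] + REGISTERS[src]

print(assemble("SUB AX, BX"))   # -> 00110001
```

A real assembler also handles labels, addressing modes and symbol tables, but the core job, mnemonic in, bits out, is the same.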
Application Software
An application is any program, or group of programs, that is designed for the end user. Software can be divided into two general classes: systems software and applications software. Applications software (also called end-user programs) includes such things as database programs, word processors, Web browsers and spreadsheets.
Application software is the collection of programs that actually process data to generate
information under various applications. This category of software offers tools for satisfying
the users’ need for information. For each application, there has to be software performing
various data processing activities required for the job.
Application software is software designed to perform a specific task, for example Tally or payroll software. These programs are structured in such a way that they work according to the user's interest. Such software consists of a set of one or more programs written to solve a specific problem and do a specific task; a program created by a scientist to solve his or her particular research question is also application software.
The programs included in a software package are termed application programs, and their programmers are known as application software programmers.
Listed below are some types of application software:
Word-Processing Software: It helps to make use of a system for creating, editing and printing documents.
Spreadsheet Software: This software allows us to create a computerized ledger; it is a number-oriented data analysis tool.
Apart from the above, application software also includes database software, graphics software, personal assistance software and education software.
The subclass of computer programs which utilizes the capabilities of the computer is called application software; “application” here refers to both the application software and its implementation. Examples of application software include media players, spreadsheets and word processors. When multiple applications are packaged together, the result is called an application suite.
There is a common user interface across each application suite, which makes it easier for the user to learn the different applications. In some cases, such as Microsoft Office, the various application programs can interact with each other; this facility is very handy for the user. For example, a user can embed a spreadsheet in a word-processor document. Application software cannot run without the presence of system software.
System software
This class of software manages hardware resources such as primary and secondary memory, display devices, printers, communication links and other peripherals in the IT infrastructure. The management of resources includes the operation, control and extension of the capabilities of each resource. System software may have a variety of components, such as the operating system and translation programs.
System software is software used to control the system and to run applications. Examples: DOS, UNIX, etc.
System software is designed to carry out operational responsibilities while extending the processing capability of a computer system. The computer's system software performs the following functions:
a. It supports the development of other application software.
b. It helps in the proper execution of other application software.
c. It monitors, communicates with and controls the peripheral devices that can be attached to and used by the computer, even though these are not an integral part of the system; examples of such devices are printers, tape drives, etc.
As a result, system software helps in making the operations of a system more effective and efficient. This whole process matters because the software and hardware components must work together. The programs included in a system software package are called system programs.
A few of the most common types of system software are:
Operating Systems: The operating system takes care of the effective and efficient utilization of the components of the system.
Programming Language Translators: These transform the instructions written by programmers in a programming language into a form the machine can execute. Communication software and utility programs are also part of system software.
The programs and files that comprise the operating system are called system software. These files include configuration files, system preferences, system services, libraries of functions and the drivers for the hardware installed on the computer. The computer programs in system software include compilers, system utilities, assemblers, debuggers and file management tools.
When you install the operating system, the system software is also installed. Programs such as “Software Update” or “Windows Update” can be used to update the system software. However, the end user does not run system software directly; for example, while using a web browser you don't need to use the assembler program.
System software is also called low-level software, as it runs at the most basic level of the computer. It creates the graphical user interface through which the user interacts with the hardware via the operating system. System software runs in the background, so you don't need to bother about it.
The system software provides an environment to run application software and it controls
the computer as well as the applications installed on the machine.
(i) Operating system:
Operating system (OS), as an integrated set of programs, acts as an intermediary between
the user and computer hardware. The user is, generally, unconcerned about the technical
details of the hardware and need not be aware of the whole process of giving instructions to
hardware.
OS controls the input/output operations, performs the system scheduling tasks, takes care
of system interruptions and monitors system status, giving appropriate messages to
different hardware and users.
The overall control of a computer system is under the supervision and control of an OS
component called the Supervisor or kernel. The supervisor program, generally, resides in the
primary memory.
The other OS programs such as utility and library programs are generally stored on a mass
storage device attached to the computer system. They are called by the supervisor as and
when required for the current job.
The popular operating systems products include MS-DOS, UNIX, Windows 95, OS/2, Mac OS,
etc.
(ii) Translation software:
Translation software translates programs written in programming languages such as COBOL, FORTRAN, PASCAL and C++ into machine-recognizable instructions (also called object or machine-language programs). The source programs, once debugged and translated, become executable on the computer hardware, under the control of the OS of course. Such translation software is also called a compiler.
System software is computer system specific and it is likely that a given system software
may run only on a specified type of computer system.
Classification of operating systems (from the figure): Batch Processing, Real-Time, Multi-Processing, Multi-Programming, Distributed, Time-Sharing, Resource-Sharing and Client-Server operating systems; programs may be packaged or customized.
Network Topology refers to the layout of a network: how different nodes in a network are connected to each other, and how they communicate, is determined by the network's topology. Topologies are either physical (the physical layout of devices on a network) or logical (the way that the signals act on the network media, or the way that the data passes through the network from one device to the next).
Network topology refers to the arrangement of the different devices on a network. Star, ring, bus, mesh, tree and hybrid are the main topologies in the context of computer networks.
Topology in general is related to the study of spaces; it assists in differentiating types of geometry from each other. The term topology is widely used in the context of computer networks, where it defines the arrangement of components such as links, nodes and peripherals. It can describe the physical as well as the logical arrangement of the nodes in a network. There are several types of topologies:
1. Mesh Topology
Mesh Topology: In a mesh network, devices are connected with many redundant interconnections between network nodes. In a true mesh topology, every node has a connection to every other node in the network. There are two types of mesh topology:
Full mesh topology occurs when every node has a circuit connecting it to every other node in the network. Full mesh is very expensive to implement but yields the greatest amount of redundancy, so in the event that one of the nodes fails, network traffic can be directed to any of the other nodes. Full mesh is usually reserved for backbone networks.
In this type of arrangement, every node participating in the network is connected to every other node. However, this tends to be very expensive and difficult to implement. Multiple paths can be used for transmitting a message. Due to the presence of dedicated links, traffic problems do not arise, but the management of this arrangement is tricky because of the heavy wiring. The system is configured so that data takes the shortest path to reach its destination, and fault identification is also easy in this topology.
Partial mesh topology is less expensive to implement and yields less redundancy than full mesh topology. With partial mesh, some nodes are organized in a full mesh scheme but others are connected to only one or two other nodes in the network. Partial mesh topology is commonly found in peripheral networks connected to a fully meshed backbone.
There are two techniques for transmitting data over a mesh topology:
Routing
Flooding
Routing
In routing, the nodes apply routing logic in accordance with the network's requirements: for example, routing logic that directs data to its destination along the shortest path, or routing logic that has information about broken links and avoids those nodes. We can even have routing logic that re-configures failed nodes.
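Shortest-path routing logic can be sketched with a breadth-first search over an adjacency list, which finds the route with the fewest hops. The four-node mesh below is made up for illustration.

```python
# A sketch of shortest-path routing in a mesh: breadth-first search over
# an adjacency list finds the route with the fewest hops. The network
# layout is invented for illustration.
from collections import deque

network = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

def shortest_route(graph, start, goal):
    """Return the fewest-hop path from start to goal, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(shortest_route(network, "A", "D"))   # -> ['A', 'B', 'D']
```

Avoiding a broken link is then just a matter of removing it from the adjacency list before searching.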
Flooding
In flooding, the same data is transmitted to all the network nodes, so no routing logic is required. The network is robust, and it is very unlikely to lose the data, but flooding leads to unwanted load on the network.
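Flooding can be sketched just as simply: each node retransmits the message to every neighbour that has not already received it, with no routing logic at all. The mesh layout below is invented for illustration.

```python
# A sketch of flooding: every node retransmits the message to each
# neighbour that has not already received it, so no routing logic is
# needed. The mesh layout is invented for illustration.
mesh = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def flood(graph, source, message):
    """Deliver message to every reachable node; return who received it."""
    delivered = {source: message}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for neighbour in graph[node]:
            if neighbour not in delivered:      # avoid endless re-sending
                delivered[neighbour] = message
                frontier.append(neighbour)
    return delivered

print(sorted(flood(mesh, "A", "hello")))        # -> ['A', 'B', 'C', 'D']
```

Note the extra load: every node handles the message even when only one of them is the real destination.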
Types of Mesh Topology
Partial Mesh Topology: In this topology some of the nodes are connected in the same fashion as full mesh, but some devices are connected to only two or three other devices.
Full Mesh Topology: Each and every node or device is connected to every other.
Features of Mesh Topology
Fully connected.
Robust.
Not flexible.
Advantages of Mesh Topology
Each connection can carry its own data load.
It is robust.
Fault is diagnosed easily.
Provides security and privacy.
Disadvantages of Mesh Topology
Installation and configuration are difficult.
Cabling cost is high.
Bulk wiring is required.
2. Star Topology
Star Topology: In a star network, devices are connected to a central computer, called a hub. Nodes communicate across the network by passing data through the hub.
Main Advantage: In a star network, one malfunctioning node doesn't affect the rest of the network.
Main Disadvantage: If the central computer fails, the entire network becomes unusable.
It is named star topology because it looks similar to a star, with all the elements of the network connected to a central device. This central device is known as the hub and can be a hub, a router or a switch. The central hub also works as a repeater for data flow. A point-to-point connection is laid between each device and the central hub; thus, all nodes are connected to each other only through this central hub. Installation and wiring of a star topology are easy, but the functioning of the entire system depends on the central hub.
In this type of topology all the computers are connected to a single hub through a cable. This hub is the central node, and all other nodes are connected to it.
Features of Star Topology
Every node has its own dedicated connection to the hub.
Hub acts as a repeater for data flow.
Can be used with twisted pair, Optical Fibre or coaxial cable.
Advantages of Star Topology
Fast performance with few nodes and low network traffic.
Hub can be upgraded easily.
Easy to troubleshoot.
Easy to setup and modify.
Only the node which has failed is affected; the rest of the nodes work smoothly.
Disadvantages of Star Topology
Cost of installation is high.
Expensive to use.
If the hub fails then the whole network is stopped because all the nodes depend on
the hub.
Performance depends on the hub, that is, on its capacity.
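The two defining behaviours of a star network, one failed node being harmless while a failed hub stops everything, can be sketched with a small simulation. The Hub class and node names are invented for illustration.

```python
# A sketch of star-topology behaviour: every message goes through the
# central hub, so one failed node is harmless but a failed hub stops
# the whole network. Class and node names are invented for illustration.
class Hub:
    def __init__(self):
        self.nodes = {}       # node name -> that node's inbox
        self.up = True

    def connect(self, name):
        self.nodes[name] = []

    def send(self, sender, recipient, message):
        if not self.up:
            raise RuntimeError("hub down: entire network unusable")
        self.nodes[recipient].append((sender, message))

hub = Hub()
for name in ("pc1", "pc2", "pc3"):
    hub.connect(name)

hub.send("pc1", "pc2", "hello")          # data passes through the hub
print(hub.nodes["pc2"])                  # -> [('pc1', 'hello')]

del hub.nodes["pc3"]                     # one node fails: others unaffected
hub.send("pc1", "pc2", "still works")

hub.up = False                           # hub fails: nothing can send
```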
3. Bus Topology
Bus Topology: In networking, a bus is the central cable (the main wire) that connects all devices on a local-area network (LAN). It is also called the backbone, and the term is often used to describe the main network connections composing the Internet. Bus networks are relatively inexpensive and easy to install for small networks. Early Ethernet systems used a bus topology.
Main Advantage: It's easy to connect a computer or device, and it typically requires less cable than a star topology.
Main Disadvantage: The entire network shuts down if there is a break in the main wire, and it can be difficult to identify the problem when the network shuts down.
A bus topology is defined by the use of a single main cable with terminators on both ends. All the other nodes, such as workstations and peripherals, are connected to this main cable. This type of topology is widely implemented in LANs as it is easy to install and does not cost much; it also does not require as much cabling as some other topologies, such as star and mesh. The main disadvantage of this topology is that the entire network depends on the main cable: if a problem occurs in the main cable, the whole system is affected.
Bus topology is a network type in which every computer and network device is connected to a single cable. When it has exactly two endpoints, it is called a linear bus topology.
Features of Bus Topology
It transmits data only in one direction.
Every device is connected to a single cable.
Advantages of Bus Topology
It is cost effective.
Cable required is least compared to other network topology.
Used in small networks.
It is easy to understand.
Easy to expand by joining two cables together.
Disadvantages of Bus Topology
If the main cable fails, the whole network fails.
If network traffic is heavy or there are many nodes, the performance of the
network decreases.
Cable has a limited length.
It is slower than the ring topology.
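The defining behaviour of a bus, a single shared cable on which every node sees every frame, can be sketched as follows. The Bus and Station classes are hypothetical stand-ins, not a real protocol implementation:

```python
# Sketch of a bus topology: one shared cable, every frame reaches every
# node, and each node keeps only frames addressed to it. Illustrative only.

class Bus:
    def __init__(self):
        self.nodes = []
        self.intact = True

    def broadcast(self, frame):
        if not self.intact:
            raise ConnectionError("cable break: whole bus network fails")
        for node in self.nodes:           # every node on the cable sees it
            node.on_frame(frame)

class Station:
    def __init__(self, addr, bus):
        self.addr, self.received, self.bus = addr, [], bus
        bus.nodes.append(self)

    def send(self, dst, payload):
        self.bus.broadcast({"dst": dst, "src": self.addr, "payload": payload})

    def on_frame(self, frame):
        if frame["dst"] == self.addr:     # accept only frames addressed to us
            self.received.append(frame["payload"])

bus = Bus()
s1, s2, s3 = (Station(a, bus) for a in ("A", "B", "C"))
s1.send("C", "ping")
print(s3.received)     # ['ping']  -- only C accepted the frame
print(s2.received)     # []
```

The single `intact` flag mirrors the main disadvantage above: one break in the shared cable stops every station at once.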
4. Ring Topology
Ring Topology: A local-area network (LAN) whose topology is a
ring. That is, all of the nodes are connected in a closed loop.
Messages travel around the ring, with each node reading those
messages addressed to it.
Main Advantage: One main advantage to a ring network is that
it can span larger distances than other types of networks, such
as bus networks, because each node regenerates messages as
they pass through it.
It is shaped like a ring, in which every node is connected to only two neighbours.
Messages move in only one direction in this arrangement. If any cable or device breaks
away from the loop, it can be a fatal problem for the entire network. Token-ring
technology is used to implement this type of topology. It can be used for handling high
volumes of data. All devices are given the same importance in this topology. If the
capacity is increased beyond its comfortable limit, the network starts to compromise on
speed.
It is called ring topology because it forms a ring: each computer is connected to the
next, with the last one connected to the first, so every device has exactly two neighbours.
Features of Ring Topology
A number of repeaters are used in a ring topology with a large number of nodes,
because if someone wants to send data to the last node in a ring topology
with 100 nodes, the data will have to pass through 99 nodes to reach the 100th
node. Hence, to prevent data loss, repeaters are used in the network.
The transmission is unidirectional, but it can be made bidirectional by having two
connections between each network node; this is called Dual Ring Topology.
In Dual Ring Topology, two ring networks are formed, and data flow is in opposite
direction in them. Also, if one ring fails, the second ring can act as a backup, to keep
the network up.
Data is transferred in a sequential manner, that is, bit by bit. Transmitted data has to
pass through each node of the network until it reaches the destination node.
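The 100-node example above can be checked with a one-line calculation; the function name and node numbering are my own, purely for illustration:

```python
# Sketch of unidirectional travel in a ring: data leaving the source must
# visit every node between it and the destination, in one fixed direction.

def nodes_visited(n_nodes, src, dst):
    """Nodes the data reaches after leaving src, travelling one way
    around the ring (nodes numbered 1..n_nodes)."""
    return (dst - src) % n_nodes

# Matches the example above: in a 100-node ring, data from node 1
# bound for node 100 passes through 99 nodes.
print(nodes_visited(100, 1, 100))   # 99
```

The modulo makes the count wrap around the loop, so a message from node 99 to node 2 in the same ring visits 3 nodes rather than travelling "backwards".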
Advantages of Ring Topology
Transmission is not affected by high traffic or by adding more nodes, as only
the node holding the token can transmit data.
Cheap to install and expand.
Disadvantages of Ring Topology
Troubleshooting is difficult in ring topology.
Adding or deleting the computers disturbs the network activity.
Failure of one computer disturbs the whole network.
5. Tree Topology
Tree Topology: This is a "hybrid" topology
that combines characteristics of linear bus
and star topologies. In a tree network,
groups of star-configured networks are
connected to a linear bus backbone cable.
Main Advantage: A Tree topology is a good
choice for large computer networks as the
tree topology "divides" the whole network
into parts that are more easily manageable.
Main Disadvantage: The entire network
depends on a central hub and a failure of the
central hub can cripple the whole network.
It is also known as the hierarchical topology.
It can be considered as the combination of linear bus and star topologies as it contains
systems with star topology connected to a linear bus main cable. There is dependency on
the main linear bus line, and therefore any fault in this line can bring the entire segment
down. However, this type of arrangement is supported by many hardware and software
vendors. This topology is also known as the expanded star topology. Configuration and
wiring are more difficult than in other topologies. However, its point-to-point wiring for
individual sections is a desirable feature of this topology.
It has a root node and all other nodes are connected to it, forming a hierarchy. It is also
called hierarchical topology. It should have at least three levels in the hierarchy.
Features of Tree Topology
Ideal if workstations are located in groups.
Used in Wide Area Network.
Advantages of Tree Topology
Extension of bus and star topologies.
Expansion of nodes is possible and easy.
Easily managed and maintained.
Error detection is easily done.
Disadvantages of Tree Topology
Heavily cabled.
Costly.
If more nodes are added, maintenance becomes difficult.
If the central hub fails, the network fails.
6. Hybrid Topology
It refers to the arrangement which is basically a
combination of any two or more different types of
network topologies. This arrangement is known for its
flexibility and reliability. It tends to be a little expensive.
It depends upon the requirements of the organization,
according to which the topologies are selected for
creating a hybrid one. Star-bus and star-ring are two
popular hybrid combinations. Corporate offices
usually use this topology to link internal LANs while
connecting external networks via WANs.
A hybrid topology is a mixture of two or more topologies. For example, if one
department in an office uses a ring topology and another uses a star topology,
connecting these topologies will result in a Hybrid Topology (ring topology and star
topology).
Features of Hybrid Topology
It is a combination of two or more topologies.
Inherits the advantages and disadvantages of the topologies included
Advantages of Hybrid Topology
Reliable, as error detection and troubleshooting are easy.
Effective.
Scalable as size can be increased easily.
Flexible.
Disadvantages of Hybrid Topology
Complex in design.
Costly.
Intranet
An intranet is a private network that is contained within an enterprise. It may consist of
many interlinked local area networks and also use leased lines in the wide area network.
Typically, an intranet includes connections through one or more gateway computers to the
outside Internet. The main purpose of an intranet is to share company information and
computing resources among employees. An intranet can also be used to facilitate working in
groups and for teleconferences.
An intranet uses TCP/IP, HTTP, and other Internet protocols and in general looks like a
private version of the Internet. With tunneling, companies can send private messages
through the public network, using the public network with special encryption/decryption
and other security safeguards to connect one part of their intranet to another.
Typically, larger enterprises allow users within their intranet to access the public Internet
through firewall servers that have the ability to screen messages in both directions so that
company security is maintained. When part of an intranet is made accessible to customers,
partners, suppliers, or others outside the company, that part becomes part of an extranet.
An intranet is a private network that can only be accessed by authorized users. The prefix
"intra" means "internal" and therefore implies an intranet is designed for internal
communications. "Inter" (as in Internet) means "between" or "among." Since there is only
one Internet, the word "Internet" is capitalized. Because many intranets exist around the
world, the word "intranet" is lowercase.
Some intranets are limited to a specific local area network (LAN), while others can be
accessed from remote locations over the Internet. Local intranets are generally the most
secure since they can only be accessed from within the network. In order to access an
intranet over a wide area network (WAN), you typically need to enter login credentials.
Intranets serve many different purposes, but their primary objective is to facilitate internal
communication. For example, a business may create an intranet to allow employees to
securely share messages and files with each other. It also provides a simple way for system
administrators to broadcast messages and roll out updates to all workstations connected to
the intranet.
Most intranet solutions provide a web-based interface for users to access. This interface
provides information and tools for employees and team members. It may include calendars,
project timelines, task lists, confidential files, and a messaging tool for communicating with
other users. The intranet website is commonly called a portal and can be accessed using a
custom intranet URL. If the intranet is limited to a local network, it will not respond to
external requests.
Examples of intranet services include Microsoft SharePoint, Huddle, Igloo, and Jostle. While
some services are open source and free of charge, most intranet solutions require a monthly
fee. The cost is usually related to the number of users within the intranet.
An intranet is a private network based on TCP/IP protocols, belonging to an organization,
usually a corporation, accessible only by the organization's members, employees, or others
with authorization. An intranet's websites and software applications look and act just like
any others, but the firewall surrounding an intranet fends off unauthorized access and use.
An intranet is a secure and private enterprise network that shares data and application
resources via Internet Protocol (IP). An intranet differs from the Internet, which is a
public network.
Intranet, which refers to an enterprise’s internal website or partial IT infrastructure, may
host more than one private website and is a critical component for internal communication
and collaboration.
A company's intranet is based on Internet concepts and technology, but for private use. The
term can refer to anything that is web-based but for private use, but typically means a
company's shared web applications. For example, it is common for companies to store
internal contact information, calendars, etc. on their intranet.
Memory Management
Memory management refers to management of Primary Memory or Main Memory. Main
memory is a large array of words or bytes where each word or byte has its own address.
Main memory provides a fast storage that can be accessed directly by the CPU. For a
program to be executed, it must be in the main memory. An Operating System does the
following activities for memory management:
Keeps track of primary memory, i.e., which parts of it are in use and by whom, and
which parts are not in use.
In multiprogramming, the OS decides which process will get memory when and how much.
Allocates the memory when a process requests it to do so.
De-allocates the memory when a process no longer needs it or has been terminated.
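The bookkeeping in the list above can be sketched as a small class. This is a deliberately simplified model, the class, method names, and block-based layout are illustrative, not how any particular OS implements memory management:

```python
# Toy sketch of OS memory-management bookkeeping: track which parts of
# primary memory are in use and by whom, allocate on request, and
# de-allocate when the process terminates.

class MemoryManager:
    def __init__(self, total_blocks):
        self.owner = [None] * total_blocks     # None = free block

    def allocate(self, pid, n_blocks):
        free = [i for i, o in enumerate(self.owner) if o is None]
        if len(free) < n_blocks:
            return None                        # not enough free memory
        for i in free[:n_blocks]:
            self.owner[i] = pid
        return free[:n_blocks]

    def deallocate(self, pid):                 # process terminated
        self.owner = [None if o == pid else o for o in self.owner]

    def usage(self):                           # what is in use, and by whom
        return {o: self.owner.count(o) for o in set(self.owner)
                if o is not None}

mm = MemoryManager(8)
mm.allocate("P1", 3)
mm.allocate("P2", 2)
print(mm.usage())      # {'P1': 3, 'P2': 2}
mm.deallocate("P1")
print(mm.usage())      # {'P2': 2}
```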
Processor Management
In multiprogramming environment, the OS decides which process gets the processor when
and for how much time. This function is called process scheduling. An Operating System
does the following activities for processor management:
Keeps track of the processor and the status of processes. The program responsible for
this task is known as the traffic controller.
Allocates the processor (CPU) to a process.
De-allocates the processor when a process no longer requires it.
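The allocate/run/de-allocate cycle above can be sketched with a round-robin policy. Round robin is one common scheduling algorithm chosen here for illustration; the section does not prescribe a particular policy:

```python
# Sketch of process scheduling: each process gets the CPU for at most one
# fixed time slice (quantum), then goes to the back of the ready queue.

from collections import deque

def round_robin(burst_times, quantum):
    """burst_times: {pid: remaining CPU time}. Returns completion order."""
    ready = deque(burst_times.items())
    finished = []
    while ready:
        pid, remaining = ready.popleft()       # allocate the CPU
        remaining -= min(quantum, remaining)   # run for at most one quantum
        if remaining == 0:
            finished.append(pid)               # de-allocate: process is done
        else:
            ready.append((pid, remaining))     # back of the ready queue
    return finished

print(round_robin({"P1": 5, "P2": 2, "P3": 4}, quantum=2))
# ['P2', 'P3', 'P1']
```

Short jobs such as P2 finish quickly while long jobs take several turns, which is why time-slicing keeps a multiprogramming system responsive.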
Device Management
An Operating System manages device communication via their respective drivers. It does
the following activities for device management:
Keeps track of all devices. The program responsible for this task is known as the I/O
controller.
Decides which process gets the device, when, and for how long.
Allocates the device in the most efficient way.
De-allocates devices.
File Management
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories.
An Operating System does the following activities for file management:
Keeps track of information, location, uses, status, etc. The collective facilities are often
known as the file system.
Decides who gets the resources.
Allocates the resources.
De-allocates the resources.
Advantages of Paging:
Allocating memory is easy and cheap
Any free page will do; the OS can take the first one from the free list it keeps
Eliminates external fragmentation
Data (page frames) can be scattered all over physical memory
Pages are mapped appropriately anyway
Allows demand paging and prepaging
More efficient swapping
No need for considerations about fragmentation
Just swap out the page least likely to be used
Disadvantages of Paging:
Longer memory access times (page table lookup)
Can be improved using TLB
Guarded page tables
Inverted page tables
Memory requirements (one entry per VM page)
Improve using Multilevel page tables and variable page sizes (super-pages)
Guarded page tables
Page Table Length Register (PTLR) to limit virtual memory size
Internal fragmentation
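The page-table lookup responsible for the "longer memory access times" listed above can be sketched directly. The 4 KiB page size and the mapping values are illustrative assumptions:

```python
# Sketch of virtual-to-physical address translation: a virtual address is
# split into a page number and an offset, and the page table maps the page
# to a frame that may sit anywhere in physical memory.

PAGE_SIZE = 4096                      # 4 KiB pages (illustrative)

page_table = {0: 7, 1: 3, 2: 42}      # virtual page -> physical frame

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        raise KeyError("page fault: page %d not resident" % page)
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1004)))   # page 1, offset 4 -> frame 3 -> 0x3004
```

Every memory access pays for this extra table lookup, which is exactly what a TLB caches to make translation fast in practice.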
File Maintenance
(1) The periodic updating of master files. For example, adding/deleting employees and
customers, making address changes and changing product prices. It does not refer to daily
transaction processing and batch processing (order processing, billing, etc.). The terms file
maintenance and "file management" are used synonymously. See file management.
(2) The periodic reorganization of the disk drives. Data that is continuously updated
becomes physically fragmented over the disk space and requires regrouping. An optimizing
program is run (daily, weekly, etc.) that rewrites all files contiguously.
The routine changes, updates, copying, moving, or deleting of files on a computer.
Usually, file maintenance is performed on computers or servers that serve a vast
number of files.
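The "optimizing program" described in point (2) can be sketched on a toy disk model. Modelling the disk as a list of blocks and grouping blocks by file name are simplifications made here for illustration:

```python
# Sketch of defragmentation: a fragmented disk is rewritten so that each
# file's blocks are contiguous and free space is gathered at the end.

disk = ["A", None, "B", "A", None, "B", "A", None]   # fragmented A and B

def defragment(disk):
    blocks = [b for b in disk if b is not None]
    blocks.sort()                      # group each file's blocks together
    free = len(disk) - len(blocks)
    return blocks + [None] * free      # contiguous files, free space at end

print(defragment(disk))   # ['A', 'A', 'A', 'B', 'B', None, None, None]
```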
WAN
A wide area network (WAN) is a geographically distributed private
telecommunications network that interconnects multiple local area networks (LANs). In an
enterprise, a WAN may consist of connections to a company's headquarters, branch
offices, colocation facilities, cloud services and other facilities. Typically, a router or other
multifunction device is used to connect a LAN to a WAN. Enterprise WANs allow users to
share access to applications, services and other centrally located resources.
A wide area network (WAN) is a network that exists over a large-scale geographical area. A
WAN connects different smaller networks, including local area networks (LANs) and metro
area networks (MANs). This ensures that computers and users in one location can
communicate with computers and users in other locations. WAN implementation can be
done either with the help of the public transmission system or a private network.
A WAN connects more than one LAN and is used for larger geographical areas. WANs are
similar to a banking system, where hundreds of branches in different cities are connected
with each other in order to share their official data.
A WAN works in a similar fashion to a LAN, just on a larger scale. Typically, TCP/IP is the
protocol used for a WAN in combination with devices such as routers, switches, firewalls
and modems.
A computer network that spans a relatively large geographical area. Typically, a WAN
consists of two or more local-area networks (LANs).
Computers connected to a wide-area network are often connected through public
networks, such as the telephone system. They can also be connected through leased lines or
satellites. The largest WAN in existence is the Internet.
Types of WAN connections
WAN connections can include wired and wireless technologies. Wired WAN services can
include multiprotocol label switching, T1s, Carrier Ethernet and commercial broadband
internet links. Wireless WAN technologies can include cellular data networks like 4G LTE, as
well as public Wi-Fi or satellite networks.
WANs over wired network connections remain the preferred medium for most enterprises,
but wireless WAN technologies, based on the 4G LTE standard, are gaining traction.
WAN infrastructure may be privately owned or leased as a service from a third-party service
provider, such as a telecommunications carrier, internet service provider, private IP network
operator or cable company. The service itself may operate over a dedicated, private
connection -- often backed by a service-level agreement -- or over a shared, public medium
like the internet. Hybrid WANs employ a combination of private and public network
services.
Advantages of WANs
If your company has branches in several locations, a wide area network is a viable option to
boost productivity and increase internal communications. Below are some of the more
critical business advantages to establishing a WAN:
Centralizes IT infrastructure — Many consider this WAN’s top advantage. A WAN eliminates
the need to buy email or file servers for each office. Instead, you only have to set up one at
your head office’s data center. Setting up a WAN also simplifies server management, since
you won’t have to support, back-up, host, or physically protect several units. Also, setting up
a WAN provides significant economies of scale by providing a central pool of IT resources
the whole company can tap into.
Boosts your privacy — Setting up a WAN allows you to share sensitive data with all your
sites without having to send the information over the Internet. Having your WAN encrypt
your data before you send it adds an extra layer of protection for any confidential material
you may be transferring. With so many hackers out there just dying to steal sensitive
corporate data, a business needs all the protection it can get from network intrusions.
Increases bandwidth — Corporate WANs often use leased lines instead of broadband
connections to form the backbone of their networks. Using leased lines offers several pluses
for a company, including higher upload speeds than typical broadband connections.
Corporate WANs also generally have no monthly data-transfer limits, so you can use
these links as much as you like without increasing costs. Improved communications not only
increase efficiency but also boost productivity.
Eliminates Need for ISDN — WANs can cut costs by eliminating the need to rent expensive
ISDN circuits for phone calls. Instead, you can have your WAN carry them. If your WAN
provider “prioritizes voice traffic,” you probably won’t see any drop off in voice quality,
either. You may also benefit from much cheaper call rates when compared to calls made
using ISDN circuits. Some companies use a hybrid approach. They have inbound calls come
over ISDN and outbound calls go over the WAN. This approach won’t save you as much
money, but it will still lower your bill.
Guaranteed uptime — Many WAN providers offer business-class support. That means you
get a specific amount of uptime monthly, quarterly, or yearly as part of your SLA. They may
also offer you round the clock support. Guaranteed uptime is a big plus no matter what your
industry. Let’s face it. No company can afford to be down for any length of time in today’s
business environment given the stringent demands of modern customers.
Cuts costs, increases profits — In addition to eliminating the need for ISDN, WANs can help
you cut costs and increase profits in a wide variety of other ways. For example, WANs
eliminate or significantly reduce the cost of gathering teams from different offices in one
location. Your marketing team in the United States can work closely with your
manufacturing team in Germany using video conferencing and email. Saving on the travel
costs alone could make investing in a WAN a viable option for you.
WANs also provide some key technical advantages. In addition to supporting a wide
variety of applications and a large number of terminals, WANs allow companies
to expand their networks through plug-in connections across locations and to boost
interconnectivity by using gateways, bridges, and routers. Plus, by centralizing network
management and the monitoring of use and performance, WANs ensure maximum
availability and reliability.
Disadvantages of WANs
While WANs provide numerous advantages, they have their share of disadvantages. As with
any technology, you need to be aware of these downsides to make an informed decision
about WANs. The three most critical downsides are high setup costs, security concerns, and
maintenance issues.
High setup costs — WANs are complicated and complex, so they are rather expensive to set
up. Obviously, the bigger the WAN, the costlier it is to set up. One reason that the setup
costs are high is the need to connect far-flung remote areas. However, by using public
networks, you can set up a WAN using just software (SD-WAN), which reduces setup costs.
Keep in mind also that the price/performance ratio of WANs is better now than a decade or
so ago.
Security Concerns — WANs open the way for certain types of internal security breaches,
such as unauthorized use, information theft, and malicious damage to files. While many
companies have some security in place when it comes to the branches, they deploy the bulk
of their security at their data centers to control and manage information sent to their
locations. This strategy reduces management costs but limits the company’s ability to deal
directly with security breaches at their locations. Some companies also have a hard time
compressing and accelerating SSL traffic without significantly increasing security
vulnerabilities and creating new management challenges.
Maintenance Issues — Maintaining a WAN is a challenge, no doubt about it. Guaranteeing
that your data center will be up and operating 24/7 is the biggest maintenance challenge of
all. Data center managers must be able to detect failures before they occur and reduce data
center downtime as much as possible, regardless of the reasons. Downtime is costly; in fact,
a study by Infonetics Research estimates that medium and large businesses in North
America lose as much as $100 million annually to IT and communication technology
downtime.
Other maintenance concerns include link quality and performance degradation, on-demand
throughput, load balancing for the data center, bandwidth management, scalability, and
data center consolidation and virtualization.
As the person responsible for your company’s network requirements, you need to
consider both the advantages and disadvantages of this powerful tool to make an informed
decision on the viability of a WAN for your company.
WANs are powerful business tools. They boost an organization’s communications,
competitiveness, and even profitability. But WANs also have their downsides, including
internal security concerns and significant maintenance challenges. Either way, now you have
the facts.
MAN
A metropolitan area network (MAN) is a network that interconnects users with computer
resources in a geographic area or region larger than that covered by even a large local area
network (LAN) but smaller than the area covered by a wide area network (WAN). The term
is applied to the interconnection of networks in a city into a single larger network (which
may then also offer efficient connection to a wide area network). It is also used to mean the
interconnection of several local area networks by bridging them with backbone lines. The
latter usage is also sometimes referred to as a campus network.
Examples of metropolitan area networks of various sizes can be found in the metropolitan
areas of London, England; Lodz, Poland; and Geneva, Switzerland. Large universities also
sometimes use the term to describe their networks. A recent trend is the installation of
wireless MANs.
A Metropolitan Area Network (MAN) is a large computer network spanning a geographical
area that may include several buildings or even an entire city (metropolis). The
geographical area of a MAN is larger than a LAN but smaller than a WAN. A MAN includes
many communicating devices and provides Internet connectivity for the LANs in the
metropolitan area.
A MAN is used to combine groups located in different buildings into a single network. The
diameter of such a network can range from 5 to 50 kilometres. As a rule, a MAN does not
belong to any particular organization; in most cases, a group of users or a provider who
takes charge of the service owns its connecting elements and other equipment. The level
of service is agreed in advance and some warranties are discussed. A MAN often acts as a
high-speed network to allow sharing of regional resources (like a big LAN). It is also often
used to provide publicly available connections to other networks using a WAN connection.
There are many ways of classifying networks. The main criterion for classification is the
administration method.
Short for Metropolitan Area Network, a data network designed for a town or city. In terms
of geographic breadth, MANs are larger than local-area networks (LANs), but smaller
than wide-area networks (WANs). MANs are usually characterized by very high-speed
connections using fiber-optic cable or other digital media.
Advantages of DBMS:
1. Integrity:
Centralised control can also ensure that adequate checks are incorporated in the DBMS to
provide data integrity. Data integrity means that the data contained in the data base is both
accurate and consistent. Therefore, data values being entered for storage could be checked
to ensure that they fall within a specified range and are of the correct format.
For example, the value for the age of an employee may be in the range of 16 and 75.
Another integrity check that should be incorporated in the data base is to ensure that if
there is a reference to a certain object, that object must exist. In the case of an automatic
teller machine, for example, a user is not allowed to transfer funds from a non-existent
savings account to a checking account.
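Both integrity checks described above, the range check on an employee's age and the referential check in the teller-machine example, can be sketched as follows. The account names and data structures are hypothetical:

```python
# Sketch of two DBMS integrity checks: a range check on entered values
# and a referential check that transfers only name existing accounts.

accounts = {"SAV-001": 500.0, "CHK-001": 120.0}

def check_age(age):
    if not (16 <= age <= 75):
        raise ValueError("integrity violation: age %r out of range 16-75" % age)
    return age

def transfer(src, dst, amount):
    for acct in (src, dst):
        if acct not in accounts:       # reference to a non-existent object
            raise KeyError("integrity violation: no such account %s" % acct)
    accounts[src] -= amount
    accounts[dst] += amount

check_age(30)                              # accepted: within 16-75
transfer("SAV-001", "CHK-001", 50.0)       # accepted: both accounts exist
try:
    transfer("SAV-999", "CHK-001", 50.0)   # rejected before any money moves
except KeyError as e:
    print(e)
```

Because the referential check runs before any balance is modified, a rejected transfer leaves the data base consistent, which is the point of enforcing integrity centrally in the DBMS.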
2. Security:
Data is of vital importance to an organisation and may be confidential. Such confidential
data must not be accessed by un-authorised persons. The data base administrator (DBA)
who has the ultimate responsibility for the data in the DBMS can ensure that proper access
procedures are followed, including proper authentication schemes for access to the DBMS
and additional checks before permitting access to sensitive data.
Different levels of security could be implemented for various types of data and operations.
The enforcement of security could be data value dependent (e.g., a manager has access to
the salary details of employees in his department only), as well as data type dependent (but
the manager cannot access the medical history of any employee, including those in his
department).
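The manager example above distinguishes value-dependent rules (which department) from type-dependent rules (which field). A minimal sketch, with invented employee records and field names, could look like this:

```python
# Sketch of DBMS security rules: a manager may read salaries only within
# their own department (value-dependent) and may never read medical
# history (type-dependent). Data and names are illustrative.

employees = {
    "alice": {"dept": "sales", "salary": 50000, "medical": "confidential"},
    "bob":   {"dept": "hr",    "salary": 45000, "medical": "confidential"},
}

def read_field(manager_dept, emp, field):
    record = employees[emp]
    if field == "medical":                      # type-dependent rule
        raise PermissionError("managers cannot access medical history")
    if field == "salary" and record["dept"] != manager_dept:
        raise PermissionError("salary visible only within own department")
    return record[field]

print(read_field("sales", "alice", "salary"))   # 50000: same department
try:
    read_field("sales", "bob", "salary")        # different department
except PermissionError as e:
    print(e)
```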
3. Data Independence:
Data independence is usually considered from two points of view; physical data
independence and logical data independence. Physical data independence allows changes in
the physical storage devices or organisation of the files to be made without requiring
changes in the conceptual view or any of the external views and hence in the application
programs using the data base.
Thus, the files may migrate from one type of physical media to another or the file structure
may change without any need for changes in the application programs. Logical data
independence implies that application programs need not be changed if fields are added to
an existing record; nor do they have to be changed if fields not used by application
programs are deleted.
Logical data independence indicates that the conceptual schema can be changed without
affecting the existing external schemas. Data independence is advantageous in the data
base environment since it allows for changes at one level of the data base without affecting
other levels. These changes are absorbed by the mappings between the levels.
4. Shared Data:
A data base allows the sharing of data under its control by any number of application
programs or users. In the example discussed earlier, the applications for the public relations
and payroll departments could share the data contained for the record type employee.
5. Conflict Resolution:
Since the data base is under the control of the data base administrator (DBA), he should
resolve the conflicting requirements of various users and applications. In essence, the DBA
chooses the best file structure and access method to get optimal performance for the
critical applications, while permitting less critical applications to continue to use the data
base, albeit with a relatively slower response.
6. Reduction of Redundancies:
Centralised control of data by the DBA avoids unnecessary duplication of data and
effectively reduces the total amount of data storage required. It also eliminates the extra
processing necessary to trace the required data in a large mass of data.
Another advantage of avoiding duplication is the elimination of the inconsistencies that tend
to be present in redundant data files. Any redundancies that exist in the DBMS are
controlled and the system ensures that these multiple copies are consistent.
Disadvantages of DBMS:
Disadvantages of data base management system are:
1. Complexity of backup and recovery
2. Problem associated with centralization
3. Cost of software, hardware and migration.
Components of DBMS:
(i) Data:
Data is a collection of raw facts that are stored and used inside a database in order to form
meaningful information.
(ii) Hardware:
Hardware is the collection of physical components of a computer system. It includes
secondary storage devices like disk drives (floppy, CD, etc.), the processor, and so on.
(iii) Software:
Software refers to the program that a database system uses in order to run a DBMS
application. It is the platform through which data is accessed from the physical location
(hardware) where data is stored. For example, a Software named “Database Manager”.
(iv) User:
Users are the people who use the database applications. They can be Database
Administrators, Application programmers, Database designers, End users, etc.
(v) Procedure:
A set of instructions that describe the working of a DBMS is called its procedure.
Applications of DBMS:
Telecom: There is a database to keep track of information regarding calls made, network
usage, customer details, etc. Without database systems it would be hard to maintain the
huge amount of data that keeps updating every millisecond.
Industry: Whether it is a manufacturing unit, warehouse or distribution centre, each one
needs a database to keep records of ins and outs. For example, a distribution centre
should keep track of the product units supplied into the centre as well as the products
delivered out from the centre each day; this is where a DBMS comes into the picture.
Banking System: For storing customer information, tracking day-to-day credit and debit
transactions, generating bank statements, etc. All this work is done with the help of
database management systems.
Education sector: Database systems are frequently used in schools and colleges to store and
retrieve data regarding student details, staff details, course details, exam details, payroll
data, attendance details, fee details, etc. There is a huge amount of inter-related data that
needs to be stored and retrieved in an efficient manner.
Online shopping: You must be aware of online shopping websites such as Amazon,
Flipkart, etc. These sites store product information, your addresses and preferences, and
credit details, and provide you a relevant list of products based on your query. All this
involves a database management system.
Modification of Database Files
Drawbacks of File system:
Data Isolation: Because data are scattered in various files, and files may be in different
formats, writing new application programs to retrieve the appropriate data is difficult.
Duplication of data – Redundant data
Dependency on application programs – Changing files would lead to changes in
application programs.
Disadvantages of DBMS:
DBMS implementation cost is high compared to the file system
Complexity: Database systems are complex to understand
Performance: Database systems are generic, which makes them suitable for various
applications. However, this generality can affect their performance for some applications.
Spyware- Spyware is any technology that aids in gathering information about a person or
organization without their knowledge. On the Internet (where it is sometimes called a
Spybot or tracking software), Spyware is programming that is put in someone's computer to
secretly gather information about the user and relay it to advertisers or other interested
parties. Spyware can get into a computer as a software virus or as the result of installing a new
program.
Virus- a virus is a program or programming code that replicates by being copied or initiating
its copying to another program, computer boot sector or document. Viruses can be
transmitted as attachments to an e-mail note or in a downloaded file, or be present on a
diskette or CD
Worm- a worm is a self-replicating virus that does not alter files but duplicates itself. It is
common for worms to be noticed only when their uncontrolled replication consumes
system resources, slowing or halting other tasks.
Trapdoor- is a method of gaining access to some part of a system other than by the normal
procedure (e.g. gaining access without having to supply a password). Hackers who
successfully penetrate a system may insert trapdoors to allow them entry at a later date,
even if the vulnerability that they originally exploited is closed. There have also been
instances of system developers leaving debug trapdoors in software, which are then
discovered and exploited by hackers.
Trojan (Trojan Horse)- a Trojan horse is a program in which malicious or harmful code is
contained inside apparently harmless programming or data in such a way that it can get
control and do its chosen form of damage, such as ruining a certain area of your hard
disk. A Trojan horse may be widely redistributed as part of a computer virus.
RATs (Remote Admin Trojans) - are a special form of Trojan horse that allows remote
control over a machine. These programs are used to steal passwords and other sensitive
information. Although they are "invisible", symptoms such as a slow system, the CD tray
opening and closing, and unexplained restarting of your computer may appear.
Malware - Malware (for "malicious software") is any program or file that is harmful to a
computer user. Thus, malware includes computer viruses, worms, Trojan horses, and also
Spyware, programming that gathers information about a computer user without permission.
Mobile Malicious Code - web documents often have server-supplied code associated with
them which executes inside the web browser. This active content allows information servers
to customize the presentation of their information, but also provides a mechanism to attack
systems running a client browser. Mobile malicious code may arrive at a site through active
content such as JavaScript, Java Applets and ActiveX controls or through Plug-ins.
Malicious Font - webpage text that exploits the default method used to de-compress
Embedded Open Type Fonts in Windows based programs including Internet Explorer and
Outlook. These malicious fonts are designed to trigger a buffer overflow which will disable
the security on Windows-based PCs. This allows an intruder to take complete control of the
affected computer and remotely execute destructive activities including installing
unauthorized programs and manipulating data.
Rootkits - Rootkits are a set of software tools used by an intruder to gain and maintain
access to a computer system without the user's knowledge. These tools conceal covert
running processes, files and system data making them difficult to detect. There are rootkits
to penetrate a wide variety of operating systems including Linux, Solaris and versions of
Microsoft Windows. A computer with rootkits on it is called a rooted computer.
Ransomware: Ransomware displays a screen warning that you have been locked out of your
computer until you pay for alleged cybercrimes. The warning is not a real notification from
the FBI, but an infection of the system itself. Even if you pay and the system is unlocked,
you are not free of it locking you out again. The request for money, usually in the
hundreds of dollars, is completely fake.
Keyloggers: Records everything you type on your PC in order to glean your log-in names,
passwords, and other sensitive information, and send it on to the source of the keylogging
program. Many times keyloggers are used by corporations and parents to acquire computer
usage information.
Adware: The least dangerous and most lucrative malware. Adware displays ads on your
computer.
Browser Hijacker: When your homepage changes unexpectedly, you may have been infected
with one form or another of a browser hijacker. This dangerous malware will redirect your
normal search activity and give you the results the developers want you to see. Its intention
is to make money off your web surfing. Using this homepage and not removing the malware
lets the source developers capture your surfing interests. This is especially dangerous when
banking or shopping online. These homepages can look harmless, but in every case they
allow other, more infectious malware onto the system.
Phishing: A fake website designed to look almost like the actual website is a form of
phishing attack. The idea of this attack is to trick the user into entering their username and
password into the fake login form, thereby stealing the identity of the victim. Every form
submitted on the phishing site goes not to the actual server, but to the attacker-controlled
server.
Cookies: Cookies are not really malware. They are simply used by most websites to store
small pieces of data on your computer. They are listed here because they have the ability to
store things on your computer and track your activities within a site. If you really do not like
the existence of cookies, you can choose to reject them for sites you do not
know.
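The mechanism can be sketched with Python's standard library; the cookie name and value below are made up for illustration:

```python
from http.cookies import SimpleCookie

# Server side: put a small value into the visitor's browser.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"      # illustrative name and value
cookie["session_id"]["path"] = "/"
header = cookie.output()             # the Set-Cookie line sent in the HTTP response
print(header)

# On later requests the browser sends the value back; the server parses it:
incoming = SimpleCookie("session_id=abc123")
print(incoming["session_id"].value)
```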
DDoS: Sending a huge amount of traffic to a single server, from many machines at once, in
order to bring the system down (sometimes with certain security features disabled so that
data can be stolen during the outage) is known as a Distributed Denial of Service, or DDoS,
attack. It is one of the most famous techniques used by groups such as Anonymous.
Cryptography
Cryptography or cryptology is the practice and study of techniques for secure
communication in the presence of third parties called adversaries. More generally,
cryptography is about constructing and analyzing protocols that prevent third parties or the
public from reading private messages; various aspects in information security such as
data confidentiality, data integrity, authentication, and non-repudiation are central to
modern cryptography. Modern cryptography exists at the intersection of the disciplines
of mathematics, computer science, electrical engineering, communication science,
and physics. Applications of cryptography include electronic commerce, chip-based payment
cards, digital currencies, computer passwords, and military communications.
Cryptography is a method of storing and transmitting data in a particular form so that only
those for whom it is intended can read and process it.
Cryptography is closely related to the disciplines of cryptology and cryptanalysis.
Cryptography includes techniques such as microdots, merging words with images, and other
ways to hide information in storage or transit. However, in today's computer-centric world,
cryptography is most often associated with scrambling plaintext (ordinary text, sometimes
referred to as cleartext) into ciphertext (a process called encryption), then back again
(known as decryption). Individuals who practice this field are known as cryptographers.
Modern cryptography concerns itself with the following four objectives:
1) Confidentiality (the information cannot be understood by anyone for whom it was not
intended)
2) Integrity (the information cannot be altered in storage or transit between sender and
intended receiver without the alteration being detected)
3) Non-repudiation (the creator/sender of the information cannot deny at a later stage his
or her intentions in the creation or transmission of the information)
4) Authentication (the sender and receiver can confirm each other’s identity and the
origin/destination of the information)
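The plaintext-to-ciphertext round trip can be illustrated with a deliberately simplified XOR cipher. This is a toy, not a real encryption algorithm, and the key and message are made up:

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with a repeating key. Applying the same operation
    # twice with the same key restores the original bytes.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"attack at dawn"
key = b"secret"
ciphertext = xor_cipher(plaintext, key)   # encryption: plaintext -> ciphertext
recovered = xor_cipher(ciphertext, key)   # decryption: ciphertext -> plaintext
assert recovered == plaintext
```

Real ciphers such as AES are far more sophisticated, but the confidentiality objective is the same: without the key, the ciphertext should reveal nothing about the plaintext.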
Digital Signature
A digital signature is a mathematical scheme for demonstrating the authenticity of digital
messages or documents. A valid digital signature gives a recipient reason to believe that the
message was created by a known sender (authentication), that the sender cannot deny
having sent the message (non-repudiation), and that the message was not altered in transit
(integrity).
Digital signatures are a standard element of most cryptographic protocol suites, and are
commonly used for software distribution, financial transactions, contract management
software, and in other cases where it is important to detect forgery or tampering.
A digital signature (not to be confused with a digital certificate) is a mathematical technique
used to validate the authenticity and integrity of a message, software or digital document.
The digital equivalent of a handwritten signature or stamped seal, but offering far more
inherent security, a digital signature is intended to solve the problem of tampering and
impersonation in digital communications. Digital signatures can provide the added
assurances of evidence to origin, identity and status of an electronic document, transaction
or message, as well as acknowledging informed consent by the signer.
In many countries, including the United States, digital signatures have the same legal
significance as the more traditional forms of signed documents. The United States
Government Printing Office publishes electronic versions of the budget, public and private
laws, and congressional bills with digital signatures.
Digital signatures are the public-key primitives of message authentication. In the physical
world, it is common to use handwritten signatures on handwritten or typed messages. They
are used to bind the signatory to the message.
Similarly, a digital signature is a technique that binds a person or entity to digital data. This
binding can be independently verified by the receiver as well as by any third party.
Digital signature is a cryptographic value that is calculated from the data and a secret key
known only by the signer.
In the real world, the receiver of a message needs assurance that the message belongs to the
sender, and the sender should not be able to repudiate the origination of that message. This
requirement is very crucial in business applications, since the likelihood of a dispute over
exchanged data is very high.
Digital signatures are based on public key cryptography, also known as asymmetric
cryptography. Using a public key algorithm such as RSA, one can generate two keys that are
mathematically linked: one private and one public. To create a digital signature, signing
software (such as an email program) creates a one-way hash of the electronic data to be
signed. The private key is then used to encrypt the hash. The encrypted hash -- along with
other information, such as the hashing algorithm -- is the digital signature. The reason for
encrypting the hash instead of the entire message or document is that a hash function can
convert an arbitrary input into a fixed length value, which is usually much shorter. This saves
time since hashing is much faster than signing.
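The hash-then-sign flow described above can be sketched with a toy RSA key pair. The primes here are tiny and there is no padding scheme, so this is only an illustration of the mathematics, not usable cryptography:

```python
import hashlib

# Toy RSA key (tiny primes for illustration only; real keys use
# thousands of bits plus a proper padding scheme).
p, q = 61, 53
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def sign(message: bytes) -> int:
    # Hash the message, reduce the digest mod n, then apply the private key.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Recompute the hash and compare it with the signature raised
    # to the public exponent.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"pay Alice 100"
sig = sign(msg)
assert verify(msg, sig)                  # authentic message verifies
assert not verify(b"pay Bob 100", sig)   # tampering makes verification fail
```

Note how only the short hash is signed, exactly as the text explains: the heavy private-key operation runs on a fixed-length value rather than on the whole document.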
Importance of Digital Signature
Out of all cryptographic primitives, the digital signature using public key cryptography is
considered a very important and useful tool for achieving information security.
Apart from ability to provide non-repudiation of message, the digital signature also provides
message authentication and data integrity. Let us briefly see how this is achieved by the
digital signature −
Message authentication − When the verifier validates the digital signature using the public
key of a sender, he is assured that the signature has been created only by the sender, who
possesses the corresponding secret private key, and no one else.
Data Integrity − In case an attacker has access to the data and modifies it, the digital
signature verification at receiver end fails. The hash of modified data and the output
provided by the verification algorithm will not match. Hence, receiver can safely deny the
message assuming that data integrity has been breached.
Non-repudiation − Since it is assumed that only the signer has knowledge of the signature
key, only the signer can create a valid signature on given data. Thus the receiver can
present the data and the digital signature to a third party as evidence if any dispute arises in
the future.
By adding public-key encryption to digital signature scheme, we can create a cryptosystem
that can provide the four essential elements of security namely − Privacy, Authentication,
Integrity, and Non-repudiation.
The receiver after receiving the encrypted data and signature on it, first verifies the
signature using sender’s public key. After ensuring the validity of the signature, he then
retrieves the data through decryption using his private key.
Firewall
In computing, a firewall is a network security system that monitors and controls incoming
and outgoing network traffic based on predetermined security rules. A firewall typically
establishes a barrier between a trusted internal network and untrusted outside network,
such as the Internet.
Firewalls are often categorized as either network firewalls or host-based firewalls. Network
firewalls filter traffic between two or more networks; they are either software
appliances running on general-purpose hardware, or hardware-based firewall computer
appliances. Firewall appliances may also offer other functionality to the internal network
they protect, such as acting as a DHCP or VPN server for that network. Host-based firewalls
run on host computers and control network traffic in and out of those machines.
The word firewall originally referred literally to a wall, which was constructed to halt the
spread of a fire. In the world of computer firewall protection, a firewall refers to a network
device which blocks certain kinds of network traffic, forming a barrier between a trusted
and an untrusted network. It is analogous to a physical firewall in the sense that firewall
security attempts to block the spread of computer attacks.
A firewall is a network security device that monitors incoming and outgoing network traffic
and decides whether to allow or block specific traffic based on a defined set of security
rules.
Firewalls have been a first line of defense in network security for over 25 years. They
establish a barrier between secured and controlled internal networks that can be trusted
and untrusted outside networks, such as the Internet.
A firewall is a network security system, either hardware- or software-based, that uses rules
to control incoming and outgoing network traffic.
A firewall acts as a barrier between a trusted network and an untrusted network. A
firewall controls access to the resources of a network through a positive control model. This
means that the only traffic allowed onto the network is defined in the firewall policy; all
other traffic is denied.
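A minimal sketch of this positive control model, assuming a policy expressed as (protocol, destination port) pairs; a real firewall matches on many more fields, such as source address and connection state:

```python
# Only traffic explicitly allowed by the policy passes;
# everything else is denied by default.
ALLOW_RULES = [
    ("tcp", 80),    # HTTP
    ("tcp", 443),   # HTTPS
    ("udp", 53),    # DNS
]

def filter_packet(protocol: str, dst_port: int) -> str:
    for proto, port in ALLOW_RULES:
        if protocol == proto and dst_port == port:
            return "allow"
    return "deny"   # default deny: traffic not in the policy is blocked

print(filter_packet("tcp", 443))   # allowed by the HTTPS rule
print(filter_packet("tcp", 23))    # denied: telnet is not in the policy
```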
Types of firewalls
Proxy firewall
An early type of firewall device, a proxy firewall serves as the gateway from one network to
another for a specific application. Proxy servers can provide additional functionality such as
content caching and security by preventing direct connections from outside the network.
However, this also may impact throughput capabilities and the applications they can
support.
Stateful inspection firewall
Now thought of as a “traditional” firewall, a stateful inspection firewall allows or blocks
traffic based on state, port, and protocol. It monitors all activity from the opening of a
connection until it is closed. Filtering decisions are made based on both administrator-
defined rules as well as context, which refers to using information from previous
connections and packets belonging to the same connection.
Unified threat management (UTM) firewall
A UTM device typically combines, in a loosely coupled way, the functions of a stateful
inspection firewall with intrusion prevention and antivirus. It may also include additional
services and often cloud management. UTMs focus on simplicity and ease of use.
Next-generation firewall (NGFW)
Firewalls have evolved beyond simple packet filtering and stateful inspection. Most
companies are deploying next-generation firewalls to block modern threats such as
advanced malware and application-layer attacks.
According to Gartner, Inc.’s definition, a next-generation firewall must include:
Standard firewall capabilities like stateful inspection
Integrated intrusion prevention
Application awareness and control to see and block risky apps
Upgrade paths to include future information feeds
Techniques to address evolving security threats
While these capabilities are increasingly becoming the standard for most companies,
NGFWs can do more.
Threat-focused NGFW
These firewalls include all the capabilities of a traditional NGFW and also provide
advanced threat detection and remediation. With a threat-focused NGFW you can:
Know which assets are most at risk with complete context awareness
Quickly react to attacks with intelligent security automation that sets policies and
hardens your defenses dynamically
Better detect evasive or suspicious activity with network and endpoint event
correlation
Greatly decrease the time from detection to cleanup with retrospective security that
continuously monitors for suspicious activity and behavior even after initial
inspection
Ease administration and reduce complexity with unified policies that protect across
the entire attack continuum
Many grapple with the concept of authentication in information security. What tends to
happen is that they confuse authentication with identification or authorization. They are in
fact all distinct concepts, and should be thought of as such. Let’s go over each and give an
example or two:
Identification
Identification is nothing more than claiming you are somebody. You identify yourself when
you speak to someone on the phone that you don’t know, and they ask you who they’re
speaking to. When you say, “I’m Jason.”, you’ve just identified yourself.
In the information security world, this is analogous to entering a username.
It’s not analogous to entering a password. Entering a password is a method for verifying that
you are who you identified yourself as, and that’s the next one on our list.
Authentication
Authentication is how one proves that they are who they say they are. When you claim to
be Jane Smith by logging into a computer system as “jsmith”, it’s most likely going to ask
you for a password. You’ve claimed to be that person by entering the name into the
username field (that’s the identification part), but now you have to prove that you are really
that person. Most systems use a password for this, which is based on “something you
know”, i.e. a secret between you and the system.
Another form of authentication is presenting something you have, such as a driver’s license,
an RSA token, or a smart card. You can also authenticate via something you are. This is the
foundation for biometrics. When you do this, you first identify yourself and then submit a
thumb print, a retina scan, or another form of bio-based authentication.
Once you’ve successfully authenticated, you have now done two things: you’ve claimed to
be someone, and you’ve proven that you are that person. The only thing that’s left is for the
system to determine what you’re allowed to do.
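A sketch of the "something you know" check, assuming the system stores a salted hash of the password rather than the password itself (the password strings are made up):

```python
import hashlib
import hmac
import os

def make_record(password: str):
    # Store only a random salt and a slow salted hash, never the password.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password: str, salt: bytes, stored: bytes) -> bool:
    # Recompute the hash from the attempt and compare in constant time.
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, stored)

salt, stored = make_record("correct horse")
assert authenticate("correct horse", salt, stored)
assert not authenticate("wrong guess", salt, stored)
```

Identification is the username lookup; this check is the authentication step, proving the claimed identity.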
Authorization
Authorization is what takes place after a person has been both identified and authenticated;
it's the step that determines what a person can then do on the system.
An example in people terms would be someone knocking on your door at night. You say,
“Who is it?”, and wait for a response. They say, “It’s John.” in order to identify themselves.
You ask them to back up into the light so you can see them through the peephole. They do
so, and you authenticate them based on what they look like (biometric). At that point you
decide they can come inside the house.
If they had said they were someone you didn’t want in your house (identification), and you
then verified that it was that person (authentication), the authorization phase would not
include access to the inside of the house.
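The step can be sketched as a lookup from an authenticated identity's role to the actions it is permitted to perform; the role and permission names are illustrative:

```python
# After identification and authentication, authorization maps the
# identity (here, its role) to the actions it may perform.
PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_authorized(role: str, action: str) -> bool:
    # Unknown roles get the empty permission set: deny by default.
    return action in PERMISSIONS.get(role, set())

print(is_authorized("editor", "write"))   # permitted by the editor role
print(is_authorized("viewer", "delete"))  # refused: viewers may only read
```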
Everyday Use
It’s interesting to note that these three steps take place every day in a very transparent
fashion. When your boss calls you at work and asks to meet you across town for lunch, two
things happen instantly — usually at the exact same time: just by hearing the boss’s voice
you have both identified and authenticated them. Identification doesn’t have to be done by
the person being identified; it can be done by the person doing the identifying as well.
Another interesting hybrid is trying to get into a night club. When you get to the door and
present your I.D., you’re not just claiming you are that person, but you’re presenting the I.D.
as proof — that’s both steps in one. The result of whether or not your authentication was
accepted as authentic is what determines whether or not you will be given authorization to
get into the club.
Adding a bit of authorization to that analogy, it may be a club where you’re allowed to get in
once you prove who you are, but you only get a wrist band that allows you to consume
alcohol if you’re over 21, and otherwise you’re not allowed to. This would be authorization
because it’s assigning you privileges based on some attribute of your identity.
Security Awareness and Policies
Creating the Security Awareness Program
Identify compliance or audit standards that your organization must adhere to.
Identify security awareness requirements for those standards.
Identify organizational goals, risks, and security policy.
Identify stakeholders and get their support.
Create a baseline of the organization’s security awareness.
Create project charter to establish scope for the security awareness training program.
Create steering committee to assist in planning, executing and maintaining the awareness
program.
Identify who you will be targeting—different roles may require different/additional
training (employees, IT personnel, developers, senior leadership).
Identify what you will communicate to the different groups (goal is shortest training
possible that has the greatest impact).
Identify how you will communicate the content—three categories of training: new,
annual, and ongoing.
Uses of Internet
The key to the success of the Internet is information. The better the quality of the
information, the greater the use of the Internet.
Large volume of Information: Internet can be used to collect information from around the
world. This information could relate to education, medicine, literature, software,
computers, business, entertainment, friendship, tourism, and leisure. People can search for
information by visiting the home page of various search engines such as Google, Yahoo,
Bing, etc.
News and Journals: All the newspapers, magazines and journals of the world are available
on the Internet. With the introduction of broadband and advanced mobile
telecommunication technologies such as 3G (third generation) and 4G (fourth generation),
the speed of internet service has increased tremendously. A person can get the latest
news about the world in a matter of few seconds.
Electronic Mode of Communication: The Internet has given the most exciting mode of
communication to all. We can send an e-mail (short for electronic mail)
to all the corners of the world.
Chatting: There are many chat applications that can be used to send and receive real-
time messages over the internet. We can chat with our friends and relatives using any one of
them.
Social Networking: People can connect with old friends on social networking sites. They can
even chat with them when they are online. Social networking sites also allow us to share
pictures with others. We can share pictures with our loved ones, while we are on a vacation.
People are even concluding business deals over these social networking sites such as
Facebook.
Online Banking (Net-Banking): The use of the internet can also be seen in the field of banking
transactions. Many banks such as HSBC, SBI, Axis Bank, HDFC Bank, etc. offer online banking
facilities to their customers. Customers can transfer funds from one account to another using
the net-banking facility.
E-commerce: Internet is also used for carrying out business operations and that set of
operations is known as Electronic Commerce (E-commerce). Flipkart is the largest e-
commerce company in India. The rival, Amazon, is giving stiff competition to Flipkart.
Mobile commerce: Mobile commerce (also M-Commerce) refers to commercial
transactions that take place over the mobile internet. Using mobile internet technology,
many companies have introduced mobile versions of their websites and mobile apps to
promote and sell their products. Customers can simply browse through the products and buy
online through the mobile internet.
Mobile wallet: Many companies offer a mobile wallet service to their customers. Users
must have a smartphone and an internet connection to use this service. Users can pay an
amount into their mobile wallet, which they can then use to make online payments such as
bill payments, recharges, etc.
Entertainment: Apart from being a major source of knowledge and information, the utility of
the Internet in the field of entertainment cannot be underestimated. We can visit various
video sites and watch movies and serials at our convenient time.
Technology of the Future: Internet is the technology of future. In the times to come, offices
would be managed at distant places through Internet.
Conclusion
The Internet is very useful for everyone. It is the superhighway of information. The cost of
Internet access has been reduced over time. The cost of the computer system, modem and
other associated hardware is also likely to come down. In case a computer system is not
available, one can browse the internet on a mobile phone. All major smartphones support
browsing functionality.
The possibilities of the Internet are endless. However, some people waste their time while
surfing through various websites. Some others try to view those websites that are not
meant for them. This is a bad tendency and must be checked. Internet must be used for
development and not for decay.
People must learn Internet operations and must try to collect only the useful information.
The present century would usher humanity into a new era of Information Technology (IT)
and Internet is the backbone of this exciting era.
Architecture and Functioning Of Internet
Internet Architecture
The Internet architecture is based on a simple idea: ask all networks that want to be part of
it to carry a single packet type, in a specific format defined by the IP protocol. In addition,
this IP packet must carry an address defined with sufficient generality to identify
each of the computers and terminals scattered throughout the world. This architecture is
illustrated in Figure.
A user who wishes to send data over this internetwork must place it in IP packets, which are
delivered to the first network to be crossed. This first network encapsulates the IP packet in its
own packet structure, packet A, which circulates in this form until it reaches an exit gateway,
where it is decapsulated so as to retrieve the IP packet. The IP address is examined to locate,
thanks to a routing algorithm, the next network to cross, and so on until the packet arrives at
the destination terminal.
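The routing decision described above can be sketched with Python's ipaddress module: the router compares the destination address against its table and forwards along the most specific matching route (the table entries below are made up):

```python
import ipaddress

# An illustrative routing table: (destination network, next hop).
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"), "network A"),
    (ipaddress.ip_network("10.1.0.0/16"), "network B"),
    (ipaddress.ip_network("0.0.0.0/0"), "default gateway"),
]

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    # Longest-prefix match: the most specific matching network wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.3"))    # network B (more specific than 10.0.0.0/8)
print(next_hop("192.0.2.1"))   # default gateway
```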
To complement IP, the US Department of Defense added the TCP protocol, which specifies
the nature of the interface with the user. This protocol further determines how to transform
a stream of bytes into IP packets, while ensuring the quality of transport of those packets.
The two protocols, assembled under the TCP/IP abbreviation, take the form of a layered
architecture. They correspond to the packet level and message level of the reference model.
The internet architecture can be broadly classified into three layers. The very first layer
consists of Internet Backbones and very high speed network lines. The National Science
Foundation (NSF) created the first high-speed backbone, called NSFNET, in 1987; it was a T1
line that connected 170 smaller networks together and operated at 1.544 Mbps (million bits
per second). IBM, MCI and Merit worked with NSF to create the backbone and developed a
T3 (45 Mbps) backbone the following year. Backbones are typically fiber optic trunk lines.
A trunk line has multiple fiber optic cables combined together to increase the capacity.
Fiber optic cables designated OC-48 can transmit 2,488 Mbps (2.488 Gbps). The connection
points are known as Network Access Points (NAPs). The second layer is usually known as the
Internet Service Provider (ISP) layer. The ISPs are connected to the backbones at NAPs with
high-speed lines.
The end users, which form the third layer, are connected to ISPs by dial-up or leased lines
and modems. The speed of communication usually ranges from 1,400 bps to 2,048 kbps.
In the real Internet, dozens of large Internet providers interconnect at NAPs (Network
Access Points) in various cities, and trillions of bytes of data flow between the individual
networks at these points. The Internet is a collection of huge corporate networks that
interconnect with one another at the NAPs, using backbones and routers to talk to each
other. A message can leave one computer, travel halfway across the world through several
different networks, and arrive at another computer in a fraction of a second.
Routers determine where to send information from one computer to another. Routers
are specialized computers that send messages to their destinations along thousands of
pathways. A router joins two networks, passing information from one to the other.