
Introduction To Computer Systems

Computer systems are extremely complex objects. Even the smallest systems
consist of millions of individual parts. Each part is relatively simple in its function
and easy to understand. It is only when these millions of simple parts interact with
each other that we develop the complex structures that comprise our modern
computing systems. In this lesson, we take a brief look at the major components of
a computing system. In subsequent lessons, we will study each component in more
detail.

Many people have tried to classify computing systems in different ways, always
with difficulty. We will look at some of the terminology used to describe different
types of computing systems, from the ubiquitous micro to the extremely powerful
supercomputer. Common terms, such as hardware and software, will be discussed.
Different methods of processing data and information will be explained. After this
brief survey, we will come back, in later lessons, and explore in greater detail the
topics introduced here.

Data and information processing have been previously defined as the transformation of unorganised raw material or data into useful information. This
processing may be done manually or with the help of mechanical and electronic
devices. It is common to refer to the collection of personnel, equipment, software,
and procedures that perform the data and information processing functions as an
information processing system. Very often, when a computer is used, the system is
referred to as a computer system. We will use the term computer system to refer to
the hardware and software exclusively, and reserve the term information
processing system to include the people and procedures. In the early days of
computing, when the computer was used simply to replace manual operations such
as payroll, the term that was used was EDP, or electronic data processing system.
In this lesson we will concentrate on the hardware and software. In subsequent
lessons we will discuss how people and procedures interact with the hardware and
software to create the total system.

Among the operations that a computer can perform are the simple mathematical
functions of addition, subtraction, multiplication, and division. The computer also
has the ability to compare two items and make a decision based on this
comparison. It is by combining these simple functions into sequences of increasing
complexity that we can have the computer perform the information processing
functions. The very first computers electronically wired these sequences on
removable cards called breadboards. For each new task the wiring would have to
be changed manually. The idea that these sequences could be symbolically coded
and stored in the computer was a major step forward, since the instructions could
then be changed very easily. Sets of instructions that control the sequence of
operations are called programs. Collectively, the programs are called software and
the physical equipment is called the hardware. The hardware in a modern
computing system may consist of many components. As computers have been
improved, scientists and engineers have been able to make electronic circuits
perform ever more complex tasks. Many of the functions that formerly were
executed by software are now fabricated in hardware. When software functions are
permanently placed in the hardware, we refer to the programs as firmware.

These concepts will be discussed in more depth later. It is the combination of physical equipment (hardware), logical instructions (software), and software
imbedded in hardware (firmware) that gives modern computing systems their
power and versatility.

In the future we will continue to build computers out of these three logical
components but the electronic implementation will change. Systems will still
include people, machines, software, and procedures, although our techniques for
defining them will become more rigorous. As you read through the following
material, try to distinguish those concepts that are apt to change substantially and
those that will not.

The Hardware (Equipment)

Hardware can be categorized in numerous ways. Below we will look at the central
unit, or central processing unit (CPU), and the associated equipment consisting of
auxiliary storage, input and output devices, and communication devices. All of the
units that are connected to the central processing unit are called peripheral devices.
In this section we will consider each component in turn. In the second section of
the text we will study these components in more detail.

Central Processing Unit


The part of the computer system that performs the logic transformations on the
data and controls all the other devices is the central processing unit, or CPU. The
CPU interprets and executes the program instructions. It has three main
components: the arithmetic-logic unit (ALU), the control unit, and the primary
storage unit. In many personal computers, the ALU and control unit are fabricated
as a single component on a single silicon chip with the primary storage on separate
chips. For this reason, some people refer to the CPU as just the ALU and control
unit and treat primary memory as a separate device.

The control unit serves to coordinate and control the entire operation of the
computer system. It controls the transfer of data between the CPU, primary
storage, and possibly other auxiliary devices that will be discussed later. The
control unit decodes instructions and keeps track of which instruction is to be
executed next. In a modern computing system, there may be hundreds or thousands
of hardware components. It is the function of the control unit to coordinate their
activities. A VDT (video display terminal) is usually employed by the user to monitor the computer's functions and to send control commands to the CPU. With large systems, this task is handled by a regular employee called the operator, who works at a special VDT called a console. The control unit examines each instruction and
then routes the proper numbers to the arithmetic-logic unit (ALU). The arithmetic-
logic unit consists of the circuitry that performs the specified logical or arithmetic
operations.

The primary storage unit, or main memory, stores all data and instructions before they can be used by the CPU. Main memory is divided into a large number of storage locations of a fixed size called words. Each word can be accessed by the control unit because it has a unique address. The word's address is just like your postal address, except that it is a single unique number. Each word is made up of a fixed number of units called bytes. In most systems, a byte can hold one character of data. Often the hardware logic permits access to individual bytes, and it appears to the user that the computer is "byte addressable," when, in fact, a full word has been accessed in memory. (These terms will be explained in detail later in the text.)
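To make the idea of words, bytes, and addresses a little more concrete, here is a minimal Python sketch. It assumes a hypothetical machine with 4 bytes per word; that word size is an assumption chosen for illustration, not something fixed by the text.

    # Hypothetical machine: 4 bytes per word (an assumed value for illustration).
    BYTES_PER_WORD = 4

    def byte_to_word_address(byte_address):
        """Return the word address and the byte offset within that word."""
        word_address = byte_address // BYTES_PER_WORD
        offset = byte_address % BYTES_PER_WORD
        return word_address, offset

    # Byte address 13 lives in word 3, at offset 1 within that word.
    print(byte_to_word_address(13))   # (3, 1)

On such a machine the hardware fetches word 3 as a whole, even though the program appears to be addressing a single byte.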

The size of a computer's memory is usually expressed in terms of bytes. The metric
abbreviations for one thousand one million and one billion, K, M and G, are used
in specifying memory size. These actually stand for kilobyte, megabyte and
gigabyte respectively. However, because of the binary nature of today's computers, it is usual to have K as 2¹⁰ (or 1 024), M as 2²⁰ (or 1 048 576) and G as 2³⁰ (or 1 073 741 824). Therefore, a 128M computer has approximately 134 000 000 bytes (128 × 2²⁰), or characters, of memory. The size of the memory is one of the parameters
that determines the power and cost of the computer. The different types of memory
will be discussed in detail later.
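The binary prefixes above are easy to check with a few lines of Python; this is just arithmetic based on the definitions given in the text.

    # Binary prefixes as defined in the text: K = 2**10, M = 2**20, G = 2**30.
    K = 2 ** 10      # 1 024
    M = 2 ** 20      # 1 048 576
    G = 2 ** 30      # 1 073 741 824

    # A "128M" memory therefore holds 128 * 2**20 bytes.
    print(128 * M)                            # 134217728
    print(f"{128 * M / 1e6:.0f} million bytes")  # about 134 million bytes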

Auxiliary Storage Devices

In today's modern computing systems, it is necessary to have another form of memory. It is often the case that the main memory is not large enough to
accommodate all the data necessary for a particular application. It is also the case
that main memory is only used to hold data that are currently being processed.
When not in use, data are stored on auxiliary storage devices. Auxiliary storage
devices (sometimes called secondary storage) can overcome this difficulty. The
most common devices used are magnetic disk drives, CDs or magnetic tapes used
for backups. Many personal computers still use a flexible disk called a floppy.
Common memory sizes for personal computers are on the order of several tens of megabytes, while associated hard disk drives may contain tens of gigabytes of storage.

Auxiliary storage devices can store very large quantities of data in an economical
and reliable manner. Their major limitation is that the data are not available to the
CPU nearly as quickly as are data stored in main memory, and care must be taken
to balance the use of main and auxiliary memory. Main memory may be up to 100 000 times faster than auxiliary memory.
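As a rough illustration of why this balance matters, the sketch below computes an average access time when some fraction of requests must go to auxiliary storage. The specific timings and hit ratio are assumed numbers chosen for illustration, not figures from the text.

    # Assumed, illustrative timings: main memory ~100 ns, disk ~10 ms
    # (about 100 000 times slower, in line with the ratio quoted above).
    MAIN_NS = 100
    DISK_NS = 10_000_000

    def average_access_ns(fraction_in_main):
        """Average access time when a given fraction of requests hit main memory."""
        return fraction_in_main * MAIN_NS + (1 - fraction_in_main) * DISK_NS

    # Even if 99.9 % of accesses are satisfied from main memory, the average
    # time is dominated by the rare trips to auxiliary storage.
    print(average_access_ns(0.999))   # 10099.9 ns, roughly 100x slower than main memory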

Input/Output Devices

Information we normally deal with in our daily lives, such as printed material we
read or voice information we hear on the telephone, is essentially useless to the
computer in the form that we understand. Devices that convert data into a form that
the computer can interpret (computer-readable) and enter it into the computer are
called input devices. There is a wide variety of input devices. Among the ones we
will consider are keyboard devices, scanning devices, and audio or voice
recognition devices. As in other areas of computing, technological advances in
input devices continue to improve accuracy and speed while reducing costs.
Output devices serve two important functions:

· to create new computer-readable media that can be utilized in subsequent data and information processing steps, and

· to transform the internal information into a form that we can understand, such as printed reports or voice messages.

Some familiar output devices include printers and plotters.

Classification of Computer Systems

Any classification of computer systems is an ephemeral thing. It will be at least a year from the time this lesson was first typed to the time you are reading it in class.
In that time, some of the physical classifications that we have devised will be
outdated. However, we will review some of the classifications that are commonly
used and try to incorporate them into an overall scheme. One trend that seems to
have been followed since the very early days of data and information processing is
that the cost of the largest computing systems has stayed relatively constant. These
systems grew in complexity and power, but not significantly in cost.

Mainframes

The term most commonly used for the largest general purpose computing system is
a mainframe computer. The cost ranges from several hundred thousand to several
million dollars. This cost has stayed relatively constant since the early 1950s;
however, the power of the mainframe has increased dramatically.

Mainframe systems are designed for large-scale scientific and commercial applications. Their scientific applications range from long-range weather
forecasting to the analysis of complex data from high energy nuclear physics
experiments. Typical commercial applications are the very complex airline
reservation systems and massive banking systems.

Mainframe systems have very fast processor times and extremely large memories.
Usually the CPU is comprised of many special purpose logic units that permit the
mainframe to handle many tasks concurrently. How this is accomplished will be
explained later. It is not unusual to have several hundred people using a single
mainframe at once. In addition to the size of memory and speed of processing,
these systems have access to vast amounts of secondary storage. Among the best
known of the mainframe manufacturers are IBM, UNISYS, Honeywell, and
Control Data Corporation (CDC). Burroughs and Sperry merged in 1986 to form
UNISYS. Digital Equipment Corporation (DEC) has recently begun to compete
directly in this market.

Mainframes are found in almost all large organizations. Although many applications that were once executed exclusively on these systems have migrated
to smaller systems, there are still many applications that require the large capacity
that only a mainframe can offer. A central system also helps enforce standards and
allows management better control of the corporate information resource.

Minicomputers

In the 1960s, as the cost of computing continued to decrease, another type of computing system appeared. Its cost is typically from about thirty thousand dollars
to several hundred thousand dollars. These systems are usually called
minicomputers, although today the term may be misleading since a mini often is
found doing the work of a mainframe.

Originally, the mini had very limited memory and a single logic unit. The most
attractive thing about it was its cost. Relatively inexpensive computing capacity
could be purchased with a mini, so management was willing to buy these systems
for specialized purposes such as process control. Today the capacity of many of the
minis found in industry rivals the mainframes of a few years ago. Many of the
applications that were exclusively run on the mainframes are being done on minis
and the distinction between the two systems, mainframe and mini, has become
clouded. The only distinction that seems to remain is the relative cost. Among the
best-known minicomputer manufacturers are Digital Equipment Corporation,
Hewlett Packard, SUN and Silicon Graphics.

Microcomputers

In the 1970s one of the most significant things that occurred in the evolution of
computing systems was the development of "a processor on a chip”. Scientists
developed the techniques that allowed all of the functions of the ALU to be placed
on a single wafer of silicon the size of a fingernail. This led to the introduction of
the third general category of computing systems called the microcomputer, or
personal computer. Early micros had very limited processing capabilities, but as
has happened with all other computer developments, they have rapidly grown in
power and decreased in cost. A typical system will cost from under one hundred to several thousand dollars.

Microcomputers are widely used as personal workstations in almost all aspects of business, industry, and government. They are found extensively in the home and in
educational institutions. Although there are microcomputer systems that are
capable of serving several users at once, most are used by one person. This has led
to the term personal computer (PC), which is used interchangeably with the term
micro. Among the most popular microcomputers are those produced by IBM,
Apple, and Tandy-Radio Shack. An entire industry has developed manufacturing
PCs that function identically to the IBM PC. These machines are often called PC
clones. In 1987 IBM introduced its next generation of PCs, the Personal System 2.
It is a family of products that significantly increases the power of the earlier
machines.

The typical personal computer has a typewriter-like keyboard for input, a television-type screen, a separate printer for output, and a hard disk for secondary storage.
Larger systems add more secondary storage with larger disk drives or re-writable
CDs. Many of the systems have the ability to communicate with other personal
computers or minis and mainframes. This makes them extremely versatile tools.
The applications available on micros are as varied as on the larger systems; the
only restrictions are due to their relatively slow speed and memory size.

Supercomputers

One additional category of computing systems we have not discussed is commonly referred to as the supercomputer. These systems have extremely large memories and are very fast. They cost in the range of millions of dollars. They are most
commonly used for very large computational problems. These systems are so
complex that they usually have a mainframe computer that functions as the input
and output device for the supercomputer. A good definition of a supercomputer is
simply the largest and most powerful computer that is available at the present time.
Supercomputers have found application in long-range weather forecasting and
other very complex problems that require enormous amounts of computation.
Manufacturers of the largest supercomputers are Cray Research and Control Data
Corporation.

Software

We mentioned that computing systems had two primary components: hardware and software. Initially, the most expensive component of the system was the
hardware. The cost of equipment has steadily decreased; at the same time, the
complexity and therefore the cost of the software has increased. More attention has
been paid to the development and maintenance of the software.

Software usually falls into several broad categories. Most computing systems are
sold with certain manufacturer supplied software programs that permit the user to
use the system more conveniently. This software is referred to as the operating
system. Other manufacturer supplied software, called utilities, can be used to help
develop applications. The operating system and utilities are referred to as the
system software. Applications software are special purpose computer programs.
They are designed to solve a specific problem, such as solve a set of equations, or
perform a specific function, such as payroll or word processing.

Just as the divisions between different sizes of computing systems were not clear,
it is also true that distinctions between different categories of software are not
clearly defined. Sometimes, when several of the functions are found in a single
program, we say we have an integrated system. An integrated system may also
include elements of hardware. An example may be a personal computing system
that is designed just to do word processing. A related term that is often used is
integrated package. Many personal computer software packages provide a variety
of functions, such as word processing, spreadsheet, and data-base. This is an
example of an integrated package.

Advances in software technology have not been as dramatic as those in hardware, but they are equally important. It has been predicted that future productivity gains in computing and information processing will result primarily from new software techniques.

Operating System
An operating system is the collection of programs that controls the overall
processing in the computer. The operating system acts as an interface between the
system hardware, application software, and the user. It also acts as the system
resource manager. Operating systems have evolved in parallel with developments
in computer hardware. As computing systems became more complex, it became
inefficient for each individual who wanted to make use of the system to rewrite all
of the programs that controlled the hardware. Libraries of these programs were
developed so that everyone could use the same programs, thus saving much time.

The first operating systems were developed when these programs were
automatically accessed for the user by a control program. As the equipment
became more complex, it was necessary for the operating system to provide more
services. The operating system is responsible for the management of the
computer's resources. In the large computer systems that are available today, many
hundreds of different people or programs may be requesting a variety of different
services simultaneously. Operating system programs that manage these complex
functions may contain many millions of instructions.

It is interesting to note that the operating systems for the personal computers
resemble the early operating systems for older, large computers. Both the early
systems and personal computers process only one task at a time; thus, the
management of the computer's resources is very similar.

Today, operating systems provide a variety of functions and very often may
become quite complex. The operating system even for a PC contains millions of
instructions. It is the job of the operating system to keep track of which locations in
memory belong to a particular task and then make sure that one program does not
destroy another program's data. Many tasks may be competing for a particular
device, such as a line printer. The operating system must schedule devices among
the users so that all processes are treated fairly and no information is lost.
Input/output devices all require programs to control them. It is also the operating
system's job to see that all these devices are properly controlled and act in unison
with the other components. The operating system is also responsible for
communications.
As distributed data processing becomes more important and the individual
computing systems become more complex, we will have to develop new operating
systems capable of managing the new environment. Even as these systems evolve
in complexity, it is important that they become easy to use, or are "user friendly."

Utilities

Utility programs provide a variety of different functions that are similar in some
ways to the functions of the operating system. One of the more common
functions might be the copying of a data file. Many of the utility programs are
designed to help make programming easier. Computer languages have been
developed in which English words and mathematical expressions take the place of
machine symbols representing the program instructions. Utility computer
programs, called translators, have been written that transform these programs into
machine language programs that the computer can actually understand. Text
editors and word processors are important utility programs. Other utility programs
are spreadsheets, data-bases and graphic programs. Utility programs may also help
monitor system performance or automatically bill for computer services.

Applications

Certainly, the software that is most often seen by the general user is the application
software. Application software may be written by the user or purchased
commercially. These may vary from very large payroll systems to a little program
that your instructor has written to keep track of your grades. The game that you
play on your home computer or in the video arcade is another commonly seen
example of applications software. The less experienced the user, the more
important it is to make the programs user friendly.

Application software development now occupies more time and employs more
people than any other aspect of the data and information processing industry. In the
future, this trend is expected to continue. Writing software can be very time
consuming. It is therefore important that we develop tools to make this task as
efficient as possible.

Integrated Systems
An integrated system combines several different software and hardware
components to provide a specific function. One of the most common examples is a
word processing system that provides no other functions. Word processing is an
application very often found on dedicated integrated systems where a computing
system is used for a single application by one person at a time. Special keys might
provide such capabilities as underlining, setting margins, and centring text.
Although this system may have a very powerful processor, both the hardware and
software are designed for a single purpose.

Another common example of integrated systems is the arcade game. Here the input
(a set of joysticks) and the output (a colour video monitor and speaker) are
designed for one purpose: to play a particular game. The instructions or programs
that execute the game are usually in firmware and cannot be changed.

Methods of Processing Information

As with our discussions on hardware and software, trying to fit different methods
of processing information into rigidly defined categories is difficult. In this section
we will look at two broadly defined methods and try to explain how they may
overlap. The first, batch processing, was developed in conjunction with the first
computers. The second, interactive processing, became possible as computers and
operating systems became more powerful. We will also briefly discuss a third
category called real-time processing in which the computer is used to control other
processes.

Batch Processing

Batch processing requires all of the data and instructions to be placed in the
computing system before the processing begins. Once the application begins to
execute, it runs to completion essentially without human intervention. This type of
processing is well suited to tasks that are run on
a regular basis, such as grade reports or a payroll system.

In the early computers, all of the data and programs would be punched onto data
cards. These cards would then be read into the computer as a "batch." The program
would run to completion before another batch of cards would be read.
Today's computers are capable of handling many tasks seemingly simultaneously.
In these systems, several batch applications may be in the computer at once with
the operating system scheduling the processor among the several tasks so that they
all share the processor fairly. As one task completes, another is loaded into the
computer memory. This is called multiprogramming with the tasks coming from
multiple batch job streams where the tasks are lined up in several lines or queues
waiting for their turn. When there is more than one processor in the CPU we refer
to the computer as a multiprocessor system. Commonly this is referred to as a
multiprocessing system.

Interactive Processing

As computers became more powerful and less expensive, it became feasible to have them interact directly with human beings during the execution of a program.
This is called interactive processing. Batch processing does not allow such
interaction. One of the difficulties with the early systems that prevented this was
the large difference in the time it takes a computer to process data and the time it
takes for the human to provide input. If the computer had to wait for a person to
type in each data item before it continued processing, it would be idle most of the
time. Just think of how you would react to a person who could only carry on a
conversation at the rate of one word every few hours! If a computer could think,
that would be how it would feel. Early computers were simply too expensive to
leave them idle so much of the time.

The two problems mentioned above have now been solved. Personal computers are
so inexpensive that we are not concerned if they sit idle waiting for us to respond.
Larger, more expensive systems have the ability to do multiprocessing. When this
kind of system is waiting for a human response, it simply moves onto another task
and works on it for a specified period of time. When the person finally responds,
the system returns to that task and continues the processing. As you might expect,
if too many requests are made for processing, performance slows down for
everyone.

There are two general types of interactive processing. In the first type, we have
many users doing the same kind of processing. This is called transaction
processing. A common example would be an airline reservation system with a
number of reservation clerks all accessing the information and "selling" tickets.

The second type is called time-sharing in which a number of people may be doing
a variety of different things, such as payroll, word processing, and perhaps game
playing. One group may be doing transaction processing. The operating system
again has to schedule the processor so that everyone is treated fairly. A common
way to accomplish this is to give each user a block of processor time, called a time
slice, before moving onto someone else's task. Jobs are scheduled in a circular or
round-robin method so the system will always return to a task that was suspended.
Modern computers are so fast that each user appears to have complete control of
the entire system.
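The round-robin idea described above can be sketched in a few lines of Python. The task names, time slice, and amounts of remaining work are invented for illustration; a real operating system scheduler is, of course, far more elaborate.

    from collections import deque

    def round_robin(tasks, time_slice):
        """Very simplified round-robin scheduling.

        `tasks` maps a task name to the processor time it still needs.
        Each task runs for at most one time slice, then goes to the back
        of the queue, so every task gets a fair share of the processor.
        """
        queue = deque(tasks.items())
        while queue:
            name, remaining = queue.popleft()
            run = min(time_slice, remaining)
            print(f"{name} runs for {run} units")
            remaining -= run
            if remaining > 0:
                queue.append((name, remaining))   # suspended, resumed later
            else:
                print(f"{name} finished")

    # Invented workload: three users' jobs sharing one processor.
    round_robin({"payroll": 5, "word processing": 3, "game": 4}, time_slice=2)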

It is possible for all of the users in a time-sharing system to be using the same application software. In this case, the time-sharing system is doing transaction
processing. It is also possible to mix batch jobs with interactive applications.

Real-Time Processing

A final type of processing warrants attention. Real-time processing may be used to control many different types of apparatus. In these applications, sensing devices
measuring such things as temperature and pressure are connected to the computer.
The computer can then control a process like the production of certain chemicals.
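A real-time control loop of the kind just described might look, in very simplified form, like the sketch below. The sensor reading, the set point, and the heater control are all hypothetical stand-ins for whatever apparatus is actually being controlled.

    import random
    import time

    SET_POINT = 80.0          # desired temperature (hypothetical units)

    def read_temperature():
        """Stand-in for a real sensor; returns a simulated reading."""
        return SET_POINT + random.uniform(-5.0, 5.0)

    def set_heater(on):
        """Stand-in for the actuator that a real system would drive."""
        print("heater", "ON" if on else "OFF")

    # A real-time system repeats this measure-decide-act cycle continuously;
    # here we run only a few iterations for illustration.
    for _ in range(5):
        temperature = read_temperature()
        print(f"temperature = {temperature:.1f}")
        set_heater(temperature < SET_POINT)
        time.sleep(0.1)   # a real controller would run at a fixed sampling rate
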
A BRIEF COMPUTER HISTORY

The computer as we know it today had its beginning with a 19th-century English mathematics professor named Charles Babbage. He designed the Analytical Engine, and it is on this design that the basic framework of today's computers is based.

Generally speaking, computers can be classified into three generations. Each generation lasted for a certain period of time, and each gave us either a new and improved computer or an improvement to the existing computer.

First generation: 1937 – 1946 - In 1937 the first electronic digital computer was built by Dr. John V. Atanasoff and Clifford Berry. It was called the Atanasoff-Berry Computer (ABC). In 1943 an electronic computer named the Colossus was built for the military. Other developments continued until, in 1946, the first general-purpose digital computer, the Electronic Numerical Integrator and Computer (ENIAC), was built. It is said that this computer weighed 30 tons and had 18,000 vacuum tubes, which were used for processing. When this computer was turned on for the first time, lights dimmed in sections of Philadelphia. Computers of this generation could only perform a single task, and they had no operating system.

Second generation: 1947 – 1962 - This generation of computers used transistors instead of vacuum tubes, which made them more reliable. In 1951 the first computer for commercial use was introduced to the public: the Universal Automatic Computer (UNIVAC 1). In 1953 the International Business Machines (IBM) 650 and 700 series computers made their mark in the computer world. During this generation over 100 computer programming languages were developed, and computers had memory and operating systems. Storage media such as tape and disk were in use, as were printers for output.

Third generation: 1963 - present - The invention of the integrated circuit brought us the third generation of computers. With this invention computers became smaller, more powerful and more reliable, and they are able to run many different programs at the same time. In 1980 the Microsoft Disk Operating System (MS-DOS) was born, and in 1981 IBM introduced the personal computer (PC) for home and office use. Three years later Apple gave us the Macintosh computer with its icon-driven interface, and the 1990s gave us the Windows operating system.

As a result of the various improvements in the development of the computer, we have seen the computer being used in all areas of life. It is a very useful tool that will continue to experience new development as time passes.
Introduction to Computer System

Representation of Basic Information

The basic functional units of a computer are made of electronic circuits and work with electrical signals. We provide input to the computer in the form of electrical signals and get the output in the form of electrical signals.

There are two basic types of electrical signals, namely, analog and digital. The
analog signals are continuous in nature and digital signals are discrete in nature.

An electronic device that works with continuous signals is known as an analog device, and an electronic device that works with discrete signals is known as a digital device. Nowadays most computers are digital, and we will deal with digital computers in this course.

A computer is a digital device, which works on two levels of signal. We call these two signal levels High and Low. The High-level signal corresponds to a higher voltage (say 5 Volt or 12 Volt) and the Low-level signal corresponds to a lower voltage (say 0 Volt). This is one convention, known as positive logic; there are other conventions as well, such as negative logic.

Since a computer is a digital electronic device, we have to deal with these two levels of electrical signal. But while designing a new computer system, or while studying the working principle of a computer, it is awkward to write or work with 0 V or 5 V, so the two levels are instead represented symbolically by the binary digits 0 and 1.
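A tiny sketch of this idea, assuming the positive-logic convention and the 5 V example level mentioned above (the decision threshold of 2.5 V is an assumption for illustration):

    HIGH_VOLTS = 5.0   # example High level from the text (could also be 12 V)
    LOW_VOLTS = 0.0    # example Low level

    def to_bit(volts, threshold=2.5):
        """Positive logic: a voltage above the threshold is read as 1, else 0."""
        return 1 if volts > threshold else 0

    print(to_bit(HIGH_VOLTS))  # 1
    print(to_bit(LOW_VOLTS))   # 0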

Computer Organization and Architecture

Computer technology has made incredible improvement in the past half century. In the early part of computer evolution there were no stored-program computers, the computational power was low and, on top of that, the computers were physically huge.

Today, a personal computer has more computational power, more main memory and more disk storage than those early machines, is smaller in size, and is available at an affordable cost. This rapid rate of improvement has come both from advances in the technology used to build computers and from innovation in computer design. In this course we will mainly deal with the innovation in computer design.

The task that the computer designer handles is a complex one: Determine what
attributes are important for a new machine, and then design a machine to maximize
performance while staying within cost constraints.

This task has many aspects, including instruction set design, functional
organization, logic design, and implementation.

When discussing computer design, both the terms computer organization and computer architecture come into the picture. It is difficult to give precise definitions for the terms Computer Organization and Computer Architecture. But while describing a computer system we come across these terms, and in the literature computer scientists try to make a distinction between the two.

Computer architecture refers to those parameters of a computer system that are visible to a programmer or those parameters that have a direct impact on the
logical execution of a program. Examples of architectural attributes include the
instruction set, the number of bits used to represent different data types, I/O
mechanisms, and techniques for addressing memory.

Computer organization refers to the operational units and their interconnections that realize the architectural specifications. Examples of organizational attributes
include those hardware details transparent to the programmer, such as control
signals, interfaces between the computer and peripherals, and the memory
technology used.

In this course we will touch upon all those factors and finally see how these attributes contribute to building a complete computer system.

Basic Computer Model and different units of Computer

The model of a computer can be described by four basic units at a high level of abstraction, as shown in Figure 1.1. These basic units are:

Central Processor Unit

Input Unit

Output Unit

Memory Unit

Figure 1.1: Basic units of a computer

A. Central Processor Unit (CPU) :

The central processor unit consists of two basic blocks:

The program control unit has a set of registers and a control circuit to generate control signals.

The execution unit, or data processing unit, contains a set of registers for storing data and an Arithmetic and Logic Unit (ALU) for execution of arithmetic and logical operations.

In addition, the CPU may have some additional registers for temporary storage of data.

B. Input Unit :

With the help of the input unit, data from outside can be supplied to the computer. Programs or data are read into main storage from an input device or secondary storage under the control of CPU input instructions.

Examples of input devices: keyboard, mouse, hard disk, floppy disk, CD-ROM drive, etc.

C. Output Unit :

With the help of the output unit, computer results can be provided to the user, or they can be stored in a storage device permanently for future use. Output data from main storage go to an output device under the control of CPU output instructions.

Examples of output devices: printer, monitor, plotter, hard disk, floppy disk, etc.

D. Memory Unit :

The memory unit is used to store data and programs. The CPU can work with the information stored in the memory unit. This memory unit is termed the primary memory or main memory module. These are basically semiconductor memories.

There are two types of semiconductor memories:

Volatile memory: RAM (Random Access Memory).

Non-volatile memory: ROM (Read Only Memory), PROM (Programmable ROM), EPROM (Erasable PROM), EEPROM (Electrically Erasable PROM).

Secondary Memory :

There is another kind of storage device, apart from primary or main memory, which is known as secondary memory. Secondary memories are non-volatile memories and are used for permanent storage of data and programs.

Examples of secondary memories:

Hard disk, floppy disk, magnetic tape ------ these are magnetic devices.
CD-ROM ------ an optical device.
Thumb drive (or pen drive) ------ a semiconductor memory.

Basic Working Principle of a Computer

Before going into the details of working principle of a computer, we will analyse
how computers work with the help of a small hypothetical computer.

In this small computer, we do not consider the input and output units. We will consider only the CPU and the memory module. Assume that somehow we have stored the program and data in main memory. We will see how the CPU can perform the job depending on the program stored in main memory.
P.S. - Our assumption is that students understand common terms like program, CPU, memory, etc. without knowing the exact details.

Consider the Arithmetic and Logic Unit (ALU) of the Central Processing Unit:

Consider an ALU which can perform four arithmetic operations and four logical operations.
To distinguish between arithmetic and logical operations, we may use a signal line, in which

0 - on that signal line represents an arithmetic operation, and
1 - on that signal line represents a logical operation.

In a similar manner, we need another two signal lines to distinguish between the four arithmetic operations.
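Taken together, these signal lines form a small operation code. The sketch below models them in Python: one line selects arithmetic versus logical, and two more lines (shown here as a 2-bit number) select one of four operations in each group. The particular operations assigned to each code are assumptions made for illustration.

    # One select line: 0 = arithmetic, 1 = logical.
    # Two more lines (combined into a 2-bit number) pick one of four operations.
    ARITHMETIC_OPS = {0b00: lambda a, b: a + b,    # add
                      0b01: lambda a, b: a - b,    # subtract
                      0b10: lambda a, b: a * b,    # multiply
                      0b11: lambda a, b: a // b}   # divide

    LOGICAL_OPS = {0b00: lambda a, b: a & b,       # AND
                   0b01: lambda a, b: a | b,       # OR
                   0b10: lambda a, b: a ^ b,       # XOR
                   0b11: lambda a, b: int(a == b)} # compare

    def alu(mode_line, select_lines, a, b):
        """mode_line: 0 for arithmetic, 1 for logical; select_lines: 2-bit code."""
        ops = LOGICAL_OPS if mode_line else ARITHMETIC_OPS
        return ops[select_lines](a, b)

    print(alu(0, 0b00, 6, 3))   # arithmetic add -> 9
    print(alu(1, 0b00, 6, 3))   # logical AND    -> 2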

Main Memory Organization

The main memory unit is the storage unit. There are several locations for storing information in the main memory module.

The capacity of a memory module is specified by the number of memory locations and the amount of information stored in each location.

A memory module of capacity 16 X 4 indicates that there are 16 locations in the memory module and in each location we can store 4 bits of information.

We have to know how to indicate or point to a specific memory location. This is done by the address of the memory location.

We need two operations to work with memory.

READ operation: this operation is to retrieve the data from memory and bring it to a CPU register.
WRITE operation: this operation is to store the data to a memory location from a CPU register.

We need some mechanism to distinguish these two operations, READ and WRITE.
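A 16 × 4 memory module with READ and WRITE operations can be modelled directly. The sketch below is a software simulation of the idea, not a description of real hardware.

    class MemoryModule:
        """Simulated memory: 16 locations, each holding a 4-bit value."""

        def __init__(self, locations=16, bits_per_location=4):
            self.max_value = 2 ** bits_per_location - 1
            self.cells = [0] * locations

        def write(self, address, data):
            """WRITE: store data from a CPU register into the addressed location."""
            self.cells[address] = data & self.max_value   # keep only 4 bits

        def read(self, address):
            """READ: retrieve the data at the address and return it to the CPU."""
            return self.cells[address]

    memory = MemoryModule()
    memory.write(5, 0b1011)     # store the 4-bit value 1011 at location 5
    print(memory.read(5))       # 11
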
Information security

Information security, sometimes shortened to InfoSec, is the practice of preventing unauthorized access, use, disclosure, disruption, modification, inspection, recording or destruction of information. It is a general term that can be used regardless of the form the data may take (e.g. electronic, physical).

Overview

IT security

Sometimes referred to as computer security, information technology security is information security applied to technology (most often some form of computer
system). It is worthwhile to note that a computer does not necessarily mean a home
desktop. A computer is any device with a processor and some memory. Such
devices can range from non-networked standalone devices as simple as calculators,
to networked mobile computing devices such as smartphones and tablet computers.
IT security specialists are almost always found in any major
enterprise/establishment due to the nature and value of the data within larger
businesses. They are responsible for keeping all of the technology within the
company secure from malicious cyber attacks that often attempt to breach into
critical private information or gain control of the internal systems.

Information assurance

Information assurance is the act of providing trust in the information, that is, assuring that the Confidentiality, Integrity and Availability (CIA) of the information are not violated, e.g. ensuring that data is not lost when critical issues arise. These issues include, but are not limited to: natural disasters, computer/server malfunction or physical theft. Since most information is stored on computers in our modern era, information assurance is typically dealt with by IT security specialists. A common method of providing information assurance is to have an off-site backup of the data in case one of the mentioned issues arises.

Threats

Information security threats come in many different forms. Some of the most
common threats today are software attacks, theft of intellectual property, identity
theft, theft of equipment or information, sabotage, and information extortion. Most
people have experienced software attacks of some sort. Viruses,[2] worms, phishing attacks, and Trojan horses are a few common examples of
software attacks. The theft of intellectual property has also been an extensive issue
for many businesses in the IT field. Identity theft is the attempt to act as someone
else usually to obtain that person's personal information or to take advantage of
their access to vital information. Theft of equipment or information is becoming more prevalent today because most devices are mobile. Cell phones are prone to theft, and have also become far more desirable as
the amount of data capacity increases. Sabotage usually consists of the destruction
of an organization′s website in an attempt to cause loss of confidence on the part of
its customers. Information extortion consists of theft of a company′s property or
information as an attempt to receive a payment in exchange for returning the
information or property back to its owner, as with ransomware. There are many
ways to help protect yourself from some of these attacks but one of the most
functional precautions is user carefulness.

Governments, military, corporations, financial institutions, hospitals and private businesses amass a great deal of confidential information about their
employees, customers, products, research and financial status. Most of this
information is now collected, processed and stored on electronic computers and
transmitted across networks to other computers.

Should confidential information about a business' customers or finances or new product line fall into the hands of a competitor or a black hat hacker, a business
and its customers could suffer widespread, irreparable financial loss, as well as
damage to the company's reputation. From a business perspective, information
security must be balanced against cost; the Gordon-Loeb Model provides a
mathematical economic approach for addressing this concern.[3]

For the individual, information security has a significant effect on privacy, which
is viewed very differently in different cultures.

The field of information security has grown and evolved significantly in recent
years. It offers many areas for specialization, including securing networks and
allied infrastructure, securing applications and databases, security testing,
information systems auditing, business continuity planning and digital forensics.

Responses to threats
Possible responses to a security threat or risk are:[4]

reduce/mitigate – implement safeguards and countermeasures to eliminate vulnerabilities or block threats

assign/transfer – place the cost of the threat onto another entity or organization
such as purchasing insurance or outsourcing

accept – evaluate if cost of countermeasure outweighs the possible cost of loss due
to threat

ignore/reject – not a valid or prudent due-care response

In 2011, The Open Group published the information security management standard O-ISM3.[20] This standard proposed an operational definition of the key
concepts of security, with elements called "security objectives", related to access
control (9), availability (3), data quality (1), compliance and technical (4). This
model is not currently widely adopted.

Confidentiality

In information security, confidentiality "is the property, that information is not made available or disclosed to unauthorized individuals, entities, or processes"
(Excerpt ISO27000).

Integrity

In information security, data integrity means maintaining and assuring the accuracy
and completeness of data over its entire life-cycle.[21] This means that data cannot
be modified in an unauthorized or undetected manner. This is not the same thing
as referential integrity in databases, although it can be viewed as a special case of
consistency as understood in the classic ACID model of transaction processing.
Information security systems typically provide message integrity in addition to
data confidentiality.
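One common way to detect unauthorized or accidental modification is to compare a cryptographic hash of the data against a previously recorded value. The short sketch below uses Python's standard hashlib module to illustrate the idea; it is a toy example rather than a complete integrity mechanism (real systems must also protect the stored hash itself, for instance with digital signatures or message authentication codes).

    import hashlib

    def fingerprint(data: bytes) -> str:
        """Return the SHA-256 digest of the data."""
        return hashlib.sha256(data).hexdigest()

    original = b"Pay 100 dollars to Alice"
    recorded_digest = fingerprint(original)           # stored somewhere safe

    tampered = b"Pay 900 dollars to Alice"
    print(fingerprint(original) == recorded_digest)   # True  -> data unchanged
    print(fingerprint(tampered) == recorded_digest)   # False -> modification detected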

Availability

For any information system to serve its purpose, the information must
be available when it is needed. This means that the computing systems used to
store and process the information, the security controls used to protect it, and the
communication channels used to access it must be functioning correctly. High
availability systems aim to remain available at all times, preventing service
disruptions due to power outages, hardware failures, and system upgrades.
Ensuring availability also involves preventing denial-of-service attacks, such as a
flood of incoming messages to the target system essentially forcing it to shut down.[22]

Non-repudiation

In law, non-repudiation implies one's intention to fulfill their obligations to a contract. It also implies that one party of a transaction cannot deny having received
a transaction nor can the other party deny having sent a transaction. Note: This is
also regarded as part of Integrity.

It is important to note that while technology such as cryptographic systems can assist in non-repudiation efforts, the concept is at its core a legal concept
transcending the realm of technology. It is not, for instance, sufficient to show that
the message matches a digital signature signed with the sender's private key, and
thus only the sender could have sent the message and nobody else could have
altered it in transit. The alleged sender could in return demonstrate that the digital
signature algorithm is vulnerable or flawed, or allege or prove that his signing key
has been compromised. The fault for these violations may or may not lie with the
sender himself, and such assertions may or may not relieve the sender of liability,
but the assertion would invalidate the claim that the signature necessarily proves
authenticity and integrity and thus prevents repudiation.

Controls


Selecting proper controls and implementing those will initially help an organization to bring down risk to acceptable levels. Control selection should
follow and should be based on the risk assessment. Controls can vary in nature but
fundamentally they are ways of protecting the confidentiality, integrity or
availability of information. ISO/IEC 27001:2005 has defined 133 controls in
different areas, but this is not exhaustive. Organizations can implement additional
controls according to requirement of the organization. ISO 27001:2013 has cut
down the number of controls to 113. From 08.11.2013 the technical standard of
information security in place is: ABNT NBR ISO/IEC 27002:2013.[27]

Administrative

Administrative controls (also called procedural controls) consist of approved written policies, procedures, standards and guidelines. Administrative controls
form the framework for running the business and managing people. They inform
people on how the business is to be run and how day-to-day operations are to be
conducted. Laws and regulations created by government bodies are also a type of
administrative control because they inform the business. Some industry sectors
have policies, procedures, standards and guidelines that must be followed –
the Payment Card Industry Data Security Standard (PCI DSS) required
by Visa and MasterCard is such an example. Other examples of administrative
controls include the corporate security policy, password policy, hiring policies, and
disciplinary policies.

Administrative controls form the basis for the selection and implementation of
logical and physical controls. Logical and physical controls are manifestations of
administrative controls. Administrative controls are of paramount importance.

Logical

Logical controls (also called technical controls) use software and data to monitor
and control access to information and computing systems. For example:
passwords, network and host-based firewalls, network intrusion
detection systems, access control lists, and data encryption are logical controls.

An important logical control that is frequently overlooked is the principle of least privilege. The principle of least privilege requires that an individual, program or
system process is not granted any more access privileges than are necessary to
perform the task. A blatant example of the failure to adhere to the principle of least
privilege is logging into Windows as user Administrator to read email and surf the
web. Violations of this principle can also occur when an individual collects
additional access privileges over time. This happens when employees' job duties
change, or they are promoted to a new position, or they transfer to another
department. The access privileges required by their new duties are frequently
added onto their already existing access privileges which may no longer be
necessary or appropriate.

Physical

Physical controls monitor and control the environment of the work place and
computing facilities. They also monitor and control access to and from such
facilities. For example: doors, locks, heating and air conditioning, smoke and fire
alarms, fire suppression systems, cameras, barricades, fencing, security guards,
cable locks, etc. Separating the network and workplace into functional areas are
also physical controls.

An important physical control that is frequently overlooked is the separation of duties. Separation of duties ensures that an individual can not complete a critical
task by himself. For example: an employee who submits a request for
reimbursement should not also be able to authorize payment or print the check. An
applications programmer should not also be the server administrator or
the database administrator – these roles and responsibilities must be separated from
one another.[28]
Types of known threats

To know what can threaten your data, you should know what malicious programs (malware) exist and how they function. Malware can be subdivided into the following types:

Viruses: programs that infect other programs by adding virus code to them so that the virus gains control when the infected file starts up. This simple definition captures the main action of a virus – infection. The spreading speed of viruses is lower than that of worms.

Worms: this type of malware uses network resources to spread. The class was called worms because of its peculiar ability to "creep" from computer to computer using networks, mail and other information channels. Because of this, the spreading speed of worms is very high.

Worms intrude into your computer, calculate the network addresses of other computers and send copies of themselves to those addresses. Besides network addresses, the data in mail clients' address books is used as well. Representatives of this malware type sometimes create working files on system discs, but may not use any computer resources other than main memory.

Trojans: programs that carry out actions on infected computers without the user's authorization; i.e. depending on the conditions they delete information on discs, make the system freeze, steal personal information, etc. This malware type is not a virus in the traditional sense (i.e. it does not infect other programs or data): Trojans cannot intrude into a PC by themselves and are spread by attackers disguised as "useful" and necessary software. Even so, the harm caused by Trojans can be greater than that of a traditional virus attack.

Spyware: software that collects data about a specific user or organization without their knowledge. You may not even guess that you have spyware on your computer. As a rule, the aim of spyware is to:

Trace the user's actions on the computer

Collect information about hard drive contents; this often means scanning some folders and the system registry to make a list of the software installed on the computer.

Collect information about the quality of the connection, the way of connecting, modem speed, etc.

Collecting information is not the main function of these programs; they also threaten security. At least two known programs – Gator and eZula – allow an attacker not only to collect information but also to control the computer. Another example of spyware is programs embedded in the browser installed on the computer that redirect traffic. You have probably come across such programs: when requesting the address of one web site, another web site was opened. One form of spyware is phishing delivery.

Phishing is a mail delivery whose aim is, as a rule, to get confidential financial information from the user. Phishing is a form of social engineering, characterized by attempts to fraudulently acquire sensitive information, such as passwords and credit card details, by masquerading as a trustworthy person or business in an apparently official electronic communication, such as an email or an instant message. The messages contain a link to a deliberately false site where the user is asked to enter his or her credit card number and other confidential information.

Adware: program code embedded in software, without the user being aware of it, to show advertising. As a rule, adware is embedded in software that is distributed free. The advertisement appears in the working interface. Adware often gathers and transfers the user's personal information to its distributor.

Riskware: this software is not a virus, but it contains a potential threat. Under some conditions, the presence of such riskware on your PC puts your data at risk. This category includes remote administration utilities, programs that use a dial-up connection, and some others that connect to pay-per-minute internet sites.

Jokes: software that does not harm your computer but displays messages claiming that harm has already been caused, or is going to be caused under some conditions. This software often warns the user about non-existent dangers, e.g. it may display messages about hard disc formatting (though no formatting is really happening), detect viruses in uninfected files, etc.

Rootkit: these are utilities used to conceal malicious activity. They disguise malware to prevent it from being detected by antivirus applications. Rootkits can also modify the operating system on the computer and substitute its main functions to disguise their presence and the actions that the attacker performs on the infected computer.

Other malware: various programs that have been developed to create other malware, organize DoS attacks on remote servers, intrude into other computers, etc. Hack tools, virus constructors and the like belong to this category.

Spam: anonymous, mass, unwanted mail correspondence. Spam includes political and propaganda mailings and mails that ask for help for somebody. Other categories of spam are messages suggesting you cash in a great sum of money or inviting you into financial pyramids, mails that steal passwords and credit card numbers, messages asking you to forward them to your friends (chain letters), etc. Spam increases the load on mail servers and increases the risk of losing information that is important to the user.
Cloud computing

Simply put, cloud computing is the delivery of computing services—servers, storage, databases, networking, software, analytics and more—over the Internet (“the cloud”). Companies offering these computing services are called cloud providers and typically charge for cloud computing services based on usage, similar to how you are billed for water or electricity at home.
Uses of cloud computing

You are probably using cloud computing right now, even if you don’t realise it. If you use an
online service to send email, edit documents, watch movies or TV, listen to music, play games or
store pictures and other files, it is likely that cloud computing is making it all possible behind the
scenes. The first cloud computing services are barely a decade old, but already a variety of
organisations—from tiny startups to global corporations, government agencies to non-profits—
are embracing the technology for all sorts of reasons. Here are a few of the things you can do
with the cloud:

 Create new apps and services

 Store, back up and recover data

 Host websites and blogs

 Stream audio and video

 Deliver software on demand

 Analyse data for patterns and make predictions


Top benefits of cloud computing

Cloud computing is a big shift from the traditional way businesses think about IT resources.
What is it about cloud computing? Why is cloud computing so popular? Here are 6 common
reasons organisations are turning to cloud computing services:
1. Cost

Cloud computing eliminates the capital expense of buying hardware and software and setting up
and running on-site datacenters—the racks of servers, the round-the-clock electricity for power
and cooling, the IT experts for managing the infrastructure. It adds up fast.
2. Speed

Most cloud computing services are provided self service and on demand, so even vast amounts
of computing resources can be provisioned in minutes, typically with just a few mouse clicks,
giving businesses a lot of flexibility and taking the pressure off capacity planning.

3. Global scale

The benefits of cloud computing services include the ability to scale elastically. In cloud speak,
that means delivering the right amount of IT resources—for example, more or less computing
power, storage, bandwidth—right when it's needed and from the right geographic location.

4. Productivity

On-site datacenters typically require a lot of “racking and stacking”—hardware set up, software
patching and other time-consuming IT management chores. Cloud computing removes the need
for many of these tasks, so IT teams can spend time on achieving more important business goals.

5. Performance

The biggest cloud computing services run on a worldwide network of secure datacenters, which
are regularly upgraded to the latest generation of fast and efficient computing hardware. This
offers several benefits over a single corporate datacenter, including reduced network latency for
applications and greater economies of scale.

6. Reliability

Cloud computing makes data backup, disaster recovery and business continuity easier and less
expensive, because data can be mirrored at multiple redundant sites on the cloud provider’s
network.
Types of cloud services: IaaS, PaaS, SaaS
Most cloud computing services fall into three broad categories: infrastructure as a service (IaaS),
platform as a service (PaaS) and software as a service (SaaS). These are sometimes called the
cloud computing stack, because they build on top of one another. Knowing what they are and
how they are different makes it easier to accomplish your business goals.

Infrastructure-as-a-service (IaaS)

The most basic category of cloud computing services. With IaaS, you rent IT infrastructure—
servers and virtual machines (VMs), storage, networks, operating systems—from a cloud
provider on a pay-as-you-go basis. To learn more, see What is IaaS?

Platform as a service (PaaS)

Platform-as-a-service (PaaS) refers to cloud computing services that supply an on-demand environment for developing, testing, delivering and managing software applications. PaaS is
designed to make it easier for developers to quickly create web or mobile apps, without worrying
about setting up or managing the underlying infrastructure of servers, storage, network and
databases needed for development. To learn more, see What is PaaS?

Software as a service (SaaS)

Software-as-a-service (SaaS) is a method for delivering software applications over the Internet,
on demand and typically on a subscription basis. With SaaS, cloud providers host and manage
the software application and underlying infrastructure and handle any maintenance, like software
upgrades and security patching. Users connect to the application over the Internet, usually with a
web browser on their phone, tablet or PC. To learn more, see What is SaaS?
Types of cloud deployments: public, private, hybrid

Not all clouds are the same. There are three different ways to deploy cloud computing resources:
public cloud, private cloud and hybrid cloud.
Public cloud

Public clouds are owned and operated by a third-party cloud service provider, which delivers its
computing resources like servers and storage over the Internet. Microsoft Azure is an example of
a public cloud. With a public cloud, all hardware, software and other supporting infrastructure is
owned and managed by the cloud provider. You access these services and manage your account
using a web browser.
Private cloud

A private cloud refers to cloud computing resources used exclusively by a single business or
organisation. A private cloud can be physically located on the company’s on-site datacenter.
Some companies also pay third-party service providers to host their private cloud. A private
cloud is one in which the services and infrastructure are maintained on a private network.

Hybrid cloud

Hybrid clouds combine public and private clouds, bound together by technology that allows data
and applications to be shared between them. By allowing data and applications to move between
private and public clouds, hybrid cloud gives businesses greater flexibility and more deployment
options.

How cloud computing works

Cloud computing services all work a little differently, depending on the provider. But many
provide a friendly, browser-based dashboard that makes it easier for IT professionals and
developers to order resources and manage their accounts. Some cloud computing services are
also designed to work with REST APIs and a command-line interface (CLI), giving developers
multiple options.
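As an illustration of the REST-API style of access, the sketch below lists virtual machines through a hypothetical provider endpoint using the widely used Python requests library. The URL, token, and response fields are invented for the example; a real provider's API (and its official SDK or CLI) will differ.

    import requests

    API_BASE = "https://api.example-cloud.com/v1"     # hypothetical endpoint
    TOKEN = "replace-with-a-real-access-token"        # hypothetical credential

    def list_virtual_machines():
        """Ask the (hypothetical) cloud provider for the account's VMs."""
        response = requests.get(
            f"{API_BASE}/virtual-machines",
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()

    for vm in list_virtual_machines():
        print(vm["name"], vm["status"])   # field names are assumptions
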
Microsoft and cloud computing

Microsoft is a leading global provider of cloud computing services for businesses of all sizes. To
learn more about our cloud platform, Microsoft Azure and how it compares to other cloud
providers, see What is Azure? and Azure vs. AWS.
