Paper 1 - Introduction To It
Computer systems are extremely complex objects. Even the smallest systems
consist of millions of individual parts. Each part is relatively simple in its function
and easy to understand. It is only when these millions of simple parts interact with
each other that we develop the complex structures that comprise our modern
computing systems. In this lesson, we take a brief look at the major components of
a computing system. In subsequent lessons, we will study each component in more
detail.
Many people have tried to classify computing systems in different ways, always
with difficulty. We will look at some of the terminology used to describe different
types of computing systems, from the ubiquitous micro to the extremely powerful
supercomputer. Common terms, such as hardware and software, will be discussed.
Different methods of processing data and information will be explained. After this
brief survey, we will come back, in later lessons, and explore in greater detail the
topics introduced here.
Among the operations that a computer can perform are the simple mathematical
functions of addition, subtraction, multiplication, and division. The computer also
has the ability to compare two items and make a decision based on this
comparison. It is by combining these simple functions into sequences of increasing
complexity that we can have the computer perform the information processing
functions. The very first computers electronically wired these sequences on
removable cards called breadboards. For each new task the wiring would have to
be changed manually. The idea that these sequences could be symbolically coded
and stored in the computer was a major step forward, since the instructions could
then be changed very easily. Sets of instructions that control the sequence of
operations are called programs. Collectively, the programs are called software and
the physical equipment is called the hardware. The hardware in a modern
computing system may consist of many components. As computers have been
improved, scientists and engineers have been able to make electronic circuits
perform ever more complex tasks. Many of the functions that formerly were
executed by software are now fabricated in hardware. When software functions are
permanently placed in the hardware, we refer to the programs as firmware.
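The stored-program idea described above, where instructions are held in memory as data so that a new task means a new program rather than new wiring, can be sketched with a toy interpreter. The instruction names and the single-accumulator design below are invented for illustration, not taken from any real machine:

```python
# A toy stored-program machine: the program is just a list in memory,
# so changing the task means editing the list, not rewiring hardware.
def run(program):
    acc = 0  # a single accumulator register
    pc = 0   # program counter: index of the next instruction
    while pc < len(program):
        op, arg = program[pc]
        if op == "ADD":            # simple arithmetic
            acc += arg
        elif op == "SUB":
            acc -= arg
        elif op == "JUMP_IF_NEG":  # compare and make a decision
            if acc < 0:
                pc = arg
                continue
        pc += 1
    return acc

# 7 - 10 is negative, so the jump skips the SUB 100 instruction.
result = run([("ADD", 7), ("SUB", 10), ("JUMP_IF_NEG", 4),
              ("SUB", 100), ("ADD", 3)])
print(result)  # 0
```

Combining only addition, subtraction, and one comparison already gives the machine the power to make decisions, which is the point of the paragraph above.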
In the future we will continue to build computers out of these three logical
components but the electronic implementation will change. Systems will still
include people, machines, software, and procedures, although our techniques for
defining them will become more rigorous. As you read through the following
material, try to distinguish those concepts that are apt to change substantially and
those that will not.
Hardware can be categorized in numerous ways. Below we will look at the central
unit, or central processing unit (CPU), and the associated equipment consisting of
auxiliary storage, input and output devices, and communication devices. All of the
units that are connected to the central processing unit are called peripheral devices.
In this section we will consider each component in turn. In the second section of
the text we will study these components in more detail.
The control unit serves to coordinate and control the entire operation of the
computer system. It controls the transfer of data between the CPU, primary
storage, and possibly other auxiliary devices that will be discussed later. The
control unit decodes instructions and keeps track of which instruction is to be
executed next. In a modern computing system, there may be hundreds or thousands
of hardware components. It is the function of the control unit to coordinate their
activities. A VDT (video display terminal) is usually employed by the user to
monitor the computer's functions and to send control commands to the CPU. With
large systems, the person doing this is a regular employee, called the operator,
working at a special VDT called a console. The control unit examines each instruction and
then routes the proper numbers to the arithmetic-logic unit (ALU). The arithmetic-
logic unit consists of the circuitry that performs the specified logical or arithmetic
operations.
The primary storage unit, or main memory, stores all data and instructions
before they can be used by the CPU. Main memory is divided into a large number
of storage locations of a fixed size called words. Each word can be accessed by
the control unit because it has a
unique address. The word's address is just like your postal address except it is a
single unique number. Each word is made up of a fixed number of units called
bytes. In most systems, a byte can hold one character of data. Often the hardware
logic permits access to individual bytes and it appears to the user that the computer
is "byte addressable," when, in fact, a full word has been accessed in memory.
(These terms will be explained in detail later in the text.)
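The relationship between byte addresses and word addresses is simple arithmetic, and can be sketched as below. The 4-byte word size is an assumption for illustration; real machines use various sizes:

```python
BYTES_PER_WORD = 4  # assumed word size; real machines use 2, 4, 8, ...

def locate(byte_address):
    """Map a byte address to the word the hardware actually fetches."""
    word_address = byte_address // BYTES_PER_WORD
    offset = byte_address % BYTES_PER_WORD  # which byte inside that word
    return word_address, offset

# Byte 10 lives in word 2 at offset 2: the machine fetches the whole
# word even though the program appears to be "byte addressable".
print(locate(10))  # (2, 2)
```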
The size of a computer's memory is usually expressed in terms of bytes. The metric
abbreviations for one thousand one million and one billion, K, M and G, are used
in specifying memory size. These actually stand for kilobyte, megabyte and
gigabyte respectively. However, because of the binary nature of today's computers,
it is usual to have K as 2^10 (or 1 024), M as 2^20 (or 1 048 576) and G as 2^30 (or
1 073 741 824). Therefore, a 128M computer has approximately 134 000 000
bytes, or characters, of memory. The size of the memory is one of the parameters
that determines the power and cost of the computer. The different types of memory
will be discussed in detail later.
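The binary values of K, M and G are easy to verify directly; this snippet simply restates the arithmetic from the paragraph above:

```python
K = 2 ** 10  # 1 024
M = 2 ** 20  # 1 048 576
G = 2 ** 30  # 1 073 741 824

print(K, M, G)
print(128 * M)  # bytes in a "128M" memory: 134 217 728
```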
Auxiliary storage devices can store very large quantities of data in an economical
and reliable manner. Their major limitation is that the data are not available to the
CPU nearly as quickly as are data stored in main memory, and care must be taken
to balance the use of main and auxiliary memory. Main memory may be up to
100 000 times faster than auxiliary memory.
Input/Output Devices
Information we normally deal with in our daily lives, such as printed material we
read or voice information we hear on the telephone, is essentially useless to the
computer in the form that we understand. Devices that convert data into a form that
the computer can interpret (computer-readable) and enter it into the computer are
called input devices. There is a wide variety of input devices. Among the ones we
will consider are keyboard devices, scanning devices, and audio or voice
recognition devices. As in other areas of computing, technological advances in
input devices continue to improve accuracy and speed while reducing costs.
Output devices perform the complementary function, converting results from the
computer's internal form into forms that people or other machines can use.
Mainframes
The term most commonly used for the largest general purpose computing system is
a mainframe computer. The cost ranges from several hundred thousand to several
million dollars. This cost has stayed relatively constant since the early 1950s;
however, the power of the mainframe has increased dramatically.
Mainframe systems have very fast processor times and extremely large memories.
Usually the CPU is composed of many special-purpose logic units that permit the
mainframe to handle many tasks concurrently. How this is accomplished will be
explained later. It is not unusual to have several hundred people using a single
mainframe at once. In addition to the size of memory and speed of processing,
these systems have access to vast amounts of secondary storage. Among the best
known of the mainframe manufacturers are IBM, UNISYS, Honeywell, and
Control Data Corporation (CDC). Burroughs and Sperry merged in 1986 to form
UNISYS. Digital Equipment Corporation (DEC) has recently begun to compete
directly in this market.
Minicomputers
Originally, the mini had very limited memory and a single logic unit. The most
attractive thing about it was its cost. Relatively inexpensive computing capacity
could be purchased with a mini, so management was willing to buy these systems
for specialized purposes such as process control. Today the capacity of many of the
minis found in industry rivals the mainframes of a few years ago. Many of the
applications that were exclusively run on the mainframes are being done on minis
and the distinction between the two systems, mainframe and mini, has become
clouded. The only distinction that seems to remain is the relative cost. Among the
best-known minicomputer manufacturers are Digital Equipment Corporation,
Hewlett Packard, SUN and Silicon Graphics.
Microcomputers
In the 1970s one of the most significant things that occurred in the evolution of
computing systems was the development of "a processor on a chip". Scientists
developed the techniques that allowed all of the functions of the ALU to be placed
on a single wafer of silicon the size of a fingernail. This led to the introduction of
the third general category of computing systems called the microcomputer, or
personal computer. Early micros had very limited processing capabilities, but as
has happened with all other computer developments, they have rapidly grown in
power and decreased in cost. A typical system will cost from under one hundred
to several thousand dollars.
The typical personal computer has a typewriter-like keyboard for input, a
television-type screen, a separate printer for output, and a hard disk for secondary storage.
Larger systems add more secondary storage with larger disk drives or re-writable
CDs. Many of the systems have the ability to communicate with other personal
computers or minis and mainframes. This makes them extremely versatile tools.
The applications available on micros are as varied as on the larger systems; the
only restrictions are due to their relatively slow speed and memory size.
Supercomputers
Software
Software usually falls into several broad categories. Most computing systems are
sold with certain manufacturer supplied software programs that permit the user to
use the system more conveniently. This software is referred to as the operating
system. Other manufacturer supplied software, called utilities, can be used to help
develop applications. The operating system and utilities are referred to as the
system software. Applications software consists of special-purpose computer
programs, designed to solve a specific problem, such as a set of equations, or to
perform a specific function, such as payroll or word processing.
Just as the divisions between different sizes of computing systems were not clear,
it is also true that distinctions between different categories of software are not
clearly defined. Sometimes, when several of the functions are found in a single
program, we say we have an integrated system. An integrated system may also
include elements of hardware. An example may be a personal computing system
that is designed just to do word processing. A related term that is often used is
integrated package. Many personal computer software packages provide a variety
of functions, such as word processing, spreadsheet, and data-base. This is an
example of an integrated package.
Operating System
An operating system is the collection of programs that controls the overall
processing in the computer. The operating system acts as an interface between the
system hardware, application software, and the user. It also acts as the system
resource manager. Operating systems have evolved in parallel with developments
in computer hardware. As computing systems became more complex, it became
inefficient for each individual who wanted to make use of the system to rewrite all
of the programs that controlled the hardware. Libraries of these programs were
developed so that everyone could use the same programs, thus saving much time.
The first operating systems were developed when these programs were
automatically accessed for the user by a control program. As the equipment
became more complex, it was necessary for the operating system to provide more
services. The operating system is responsible for the management of the
computer's resources. In the large computer systems that are available today, many
hundreds of different people or programs may be requesting a variety of different
services simultaneously. Operating system programs that manage these complex
functions may contain many millions of instructions.
It is interesting to note that the operating systems for the personal computers
resemble the early operating systems for older, large computers. Both the early
systems and personal computers process only one task at a time; thus, the
management of the computer's resources is very similar.
Today, operating systems provide a variety of functions and very often may
become quite complex. Even the operating system for a PC contains millions of
instructions. It is the job of the operating system to keep track of which locations in
memory belong to a particular task and then make sure that one program does not
destroy another program's data. Many tasks may be competing for a particular
device, such as a line printer. The operating system must schedule devices among
the users so that all processes are treated fairly and no information is lost.
Input/output devices all require programs to control them. It is also the operating
system's job to see that all these devices are properly controlled and act in unison
with the other components. The operating system is also responsible for
communications.
As distributed data processing becomes more important and the individual
computing systems become more complex, we will have to develop new operating
systems capable of managing the new environment. Even as these systems evolve
in complexity, it is important that they become easy to use, or are "user friendly."
Utilities
Utility programs provide a variety of different functions that are similar in some
ways to the functions of the operating system. One of the more common
functions might be the copying of a data file. Many of the utility programs are
designed to help make programming easier. Computer languages have been
developed in which English words and mathematical expressions take the place of
machine symbols representing the program instructions. Utility computer
programs, called translators, have been written that transform these programs into
machine language programs that the computer can actually understand. Text
editors and word processors are important utility programs. Other utility programs
are spreadsheets, data-bases and graphic programs. Utility programs may also help
monitor system performance or automatically bill for computer services.
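How a translator substitutes machine codes for symbolic names can be hinted at with a toy example. The mnemonics and opcode numbers below are invented for illustration; real instruction sets differ:

```python
# A toy one-line translator: symbolic mnemonics become numeric opcodes.
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3}  # invented assignments

def translate(source_line):
    """'ADD 5' -> (2, 5): the mnemonic is replaced by its opcode."""
    mnemonic, operand = source_line.split()
    return OPCODES[mnemonic], int(operand)

print(translate("ADD 5"))  # (2, 5)
```

A real translator also handles labels, addresses, and error checking, but the substitution step is the heart of the idea.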
Applications
Certainly, the software that is most often seen by the general user is the application
software. Application software may be written by the user or purchased
commercially. These may vary from very large payroll systems to a little program
that your instructor has written to keep track of your grades. The game that you
play on your home computer or in the video arcade is another commonly seen
example of applications software. The less experienced the user, the more
important it is to make the programs user friendly.
Application software development now occupies more time and employs more
people than any other aspect of the data and information processing industry. In the
future, this trend is expected to continue. Writing software can be very time
consuming. It is therefore important that we develop tools to make this task as
efficient as possible.
Integrated Systems
An integrated system combines several different software and hardware
components to provide a specific function. One of the most common examples is a
word processing system that provides no other functions. Word processing is an
application very often found on dedicated integrated systems where a computing
system is used for a single application by one person at a time. Special keys might
provide such capabilities as underlining, setting margins, and centring text.
Although this system may have a very powerful processor, both the hardware and
software are designed for a single purpose.
Another common example of integrated systems is the arcade game. Here the input
(a set of joysticks) and the output (a colour video monitor and speaker) are
designed for one purpose: to play a particular game. The instructions or programs
that execute the game are usually in firmware and cannot be changed.
As with our discussions on hardware and software, trying to fit different methods
of processing information into rigidly defined categories is difficult. In this section
we will look at two broadly defined methods and try to explain how they may
overlap. The first, batch processing, was developed in conjunction with the first
computers. The second, interactive processing, became possible as computers and
operating systems became more powerful. We will also briefly discuss a third
category called real-time processing in which the computer is used to control other
processes.
Batch Processing
Batch processing requires all of the data and instructions to be placed in the
computing system before the processing begins. Once the application begins to
execute, it runs to completion essentially without human intervention. This type of
processing is well suited to tasks that are run on a regular basis, such as grade
reports or a payroll system.
In the early computers, all of the data and programs would be punched onto data
cards. These cards would then be read into the computer as a "batch." The program
would run to completion before another batch of cards would be read.
Today's computers are capable of handling many tasks seemingly simultaneously.
In these systems, several batch applications may be in the computer at once with
the operating system scheduling the processor among the several tasks so that they
all share the processor fairly. As one task completes, another is loaded into the
computer memory. This is called multiprogramming with the tasks coming from
multiple batch job streams where the tasks are lined up in several lines or queues
waiting for their turn. When there is more than one processor in the CPU we refer
to the computer as a multiprocessor system. Commonly this is referred to as a
multiprocessing system.
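Multiple batch job streams can be pictured as simple queues that the operating system drains in turn. This sketch assumes two streams and strict alternation, which is only one possible scheduling policy:

```python
from collections import deque

def run_batches(stream_a, stream_b):
    """Alternate between two batch job queues; each job runs to completion."""
    queues = [deque(stream_a), deque(stream_b)]
    finished = []
    turn = 0
    while any(queues):
        q = queues[turn % 2]
        if q:                             # this stream still has jobs waiting
            finished.append(q.popleft())  # the job runs to completion
        turn += 1
    return finished

print(run_batches(["payroll", "grades"], ["backup"]))
# ['payroll', 'backup', 'grades']
```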
Interactive Processing
Interactive processing originally posed two problems: computer time was expensive
to waste, and the processor sat idle while it waited for a human response. Both
problems have now been solved. Personal computers are
so inexpensive that we are not concerned if they sit idle waiting for us to respond.
Larger, more expensive systems have the ability to do multiprocessing. When this
kind of system is waiting for a human response, it simply moves onto another task
and works on it for a specified period of time. When the person finally responds,
the system returns to that task and continues the processing. As you might expect,
if too many requests are made for processing, performance slows down for
everyone.
There are two general types of interactive processing. In the first type, we have
many users doing the same kind of processing. This is called transaction
processing. A common example would be an airline reservation system with a
number of reservation clerks all accessing the information and "selling" tickets.
The second type is called time-sharing in which a number of people may be doing
a variety of different things, such as payroll, word processing, and perhaps game
playing. One group may be doing transaction processing. The operating system
again has to schedule the processor so that everyone is treated fairly. A common
way to accomplish this is to give each user a block of processor time, called a time
slice, before moving onto someone else's task. Jobs are scheduled in a circular or
round-robin method so the system will always return to a task that was suspended.
Modern computers are so fast that each user appears to have complete control of
the entire system.
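The round-robin time-slice scheme just described can be simulated in a few lines. The jobs and the two-unit time slice are made up for the example:

```python
from collections import deque

def round_robin(jobs, slice_units=2):
    """jobs maps a task name to its remaining work units.
    Returns the order in which tasks complete."""
    ready = deque(jobs.items())
    done = []
    while ready:
        name, remaining = ready.popleft()
        remaining -= slice_units             # run for one time slice
        if remaining > 0:
            ready.append((name, remaining))  # suspended; rejoins the circle
        else:
            done.append(name)
    return done

print(round_robin({"edit": 3, "payroll": 5, "game": 1}))
# ['game', 'edit', 'payroll']
```

Because the circle always comes back around, every suspended task eventually gets another slice, which is the fairness property the text describes.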
It is possible for all of the users in a time-sharing system to be using the same
application software. In this case, the time-sharing system is doing transaction
processing. It is also possible to mix batch jobs with interactive applications.
Real-Time Processing
The computer as we know it today had its beginning with a 19th-century English
mathematics professor named Charles Babbage. He designed the Analytical
Engine, and it is on this design that the basic framework of today's computers is based.
First generation: 1937 – 1946 - In 1937 the first electronic digital computer was
built by Dr. John V. Atanasoff and Clifford Berry. It was called the Atanasoff-
Berry Computer (ABC). In 1943 an electronic computer named the Colossus was
built for the military. Other developments continued until in 1946 the first general-
purpose digital computer, the Electronic Numerical Integrator and Computer
(ENIAC), was built. It is said that this computer weighed 30 tons and had 18,000
vacuum tubes, which were used for processing. When this computer was turned on
for the first time, lights dimmed in sections of Philadelphia. Computers of this
generation could only perform a single task, and they had no operating system.
The basic functional units of a computer are made of electronic circuits and work
with electrical signals. We provide input to the computer in the form of electrical
signals and get the output in the form of electrical signals.
There are two basic types of electrical signals, namely, analog and digital. The
analog signals are continuous in nature and digital signals are discrete in nature.
The electronic device that works with continuous signals is known as analog
device and the electronic device that works with discrete signals is known
as digital device. In present days most of the computers are digital in nature and we
will deal with Digital Computer in this course.
A computer is a digital device that works with two levels of signal, called High
and Low. The High level corresponds to a higher voltage (say 5 V or 12 V) and the
Low level corresponds to a lower voltage (say 0 V). This is one convention, known
as positive logic; there are other conventions as well, such as negative logic.
Since a computer is a digital electronic device, we have to deal with these two
kinds of electrical signals. But while designing a new computer system or studying
the working principle of a computer, it is awkward to write or work with 0 V and
5 V, so the two levels are represented symbolically instead.
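The jump from voltages to symbols can be made explicit: under the positive-logic convention, any voltage near the high rail reads as the symbol 1 and any voltage near ground reads as 0. The 5 V rail and midpoint threshold below are assumptions for illustration:

```python
HIGH_VOLTS = 5.0  # assumed supply rail; 12 V systems also exist

def to_bit(volts, threshold=HIGH_VOLTS / 2):
    """Positive logic: a high voltage reads as 1, a low voltage as 0."""
    return 1 if volts >= threshold else 0

print(to_bit(5.0), to_bit(0.2))  # 1 0
```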
Computer technology has made incredible improvements in the past half century. In
the early part of computer evolution, there were no stored-program computers,
computational power was limited, and, on top of that, the machines were physically enormous.
The task that the computer designer handles is a complex one: Determine what
attributes are important for a new machine, and then design a machine to maximize
performance while staying within cost constraints.
This task has many aspects, including instruction set design, functional
organization, logic design, and implementation.
While looking at the task of computer design, both the terms computer
organization and computer architecture come into the picture. It is difficult to give
precise definitions for the terms Computer Organization and Computer
Architecture. But while describing a computer system, we come across these terms,
and in the literature, computer scientists try to make a distinction between the two.
In this course we will touch upon all those factors and finally see how these
attributes contribute to building a complete computer system.
Basic Computer Model and Different Units of the Computer
The model of a computer can be described by four basic units in high-level
abstraction, as shown in figure 1.1. These basic units are:
A. Central Processing Unit (CPU) :
The program control unit has a set of registers and a control circuit to generate
control signals.
The execution unit or data processing unit contains a set of registers for storing
data and an Arithmetic and Logic Unit (ALU) for the execution of arithmetic and
logical operations.
In addition, the CPU may have some additional registers for temporary storage of data.
B. Input Unit :
With the help of the input unit, data from outside can be supplied to the computer.
A program or data is read into main storage from an input device or secondary
storage under the control of a CPU input instruction.
Examples of input devices: keyboard, mouse, hard disk, floppy disk, CD-ROM
drive, etc.
C. Output Unit :
With the help of the output unit, computer results can be provided to the user or
stored permanently in a storage device for future use. Output data from main
storage go to an output device under the control of CPU output instructions.
Examples of output devices: printer, monitor, plotter, hard disk, floppy disk, etc.
D. Memory Unit :
The memory unit is used to store data and programs. The CPU can work with the
information stored in the memory unit. This memory unit is termed the primary
memory or main memory module. These are basically semiconductor memories.
Secondary Memory :
There is another kind of storage device, apart from primary or main memory,
which is known as secondary memory. Secondary memory is non-volatile and is
used for permanent storage of data and programs.
Before going into the details of working principle of a computer, we will analyse
how computers work with the help of a small hypothetical computer.
In this small computer, we do not consider the input and output units. We will
consider only CPU and memory module. Assume that somehow we have stored the
program and data into main memory. We will see how CPU can perform the job
depending on the program stored in main memory.
P.S. - Our assumption is that students understand common terms like program,
CPU, memory etc. without knowing the exact details.
Consider the Arithmetic and Logic Unit (ALU) of the Central Processing Unit :
Consider an ALU which can perform four arithmetic operations and four logical
operations.
To distinguish between arithmetic and logical operations, we may use one signal
line: a 0 on that line represents an arithmetic operation, and a 1 on that line
represents a logical operation. In a similar manner, we need another two signal
lines to distinguish among the four arithmetic operations.
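The three signal lines (one selecting the group, two selecting the operation within it) amount to a small decoder, which can be sketched as below. The particular operations assigned to each code are this sketch's own choices, not a real instruction set:

```python
# 'kind' is the arithmetic/logical line (0 or 1);
# 'select' is the two-line operation code (0..3).
ARITHMETIC = {0b00: lambda a, b: a + b,
              0b01: lambda a, b: a - b,
              0b10: lambda a, b: a * b,
              0b11: lambda a, b: a // b}
LOGICAL = {0b00: lambda a, b: a & b,
           0b01: lambda a, b: a | b,
           0b10: lambda a, b: a ^ b,
           0b11: lambda a, b: ~a & 0xFF}  # complement; 8-bit word assumed

def alu(kind, select, a, b):
    group = LOGICAL if kind == 1 else ARITHMETIC
    return group[select](a, b)

print(alu(0, 0b00, 3, 4))            # 7  (arithmetic group: add)
print(alu(1, 0b01, 0b1010, 0b0101))  # 15 (logical group: OR)
```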
The main memory unit is the storage unit; there are several locations for storing
information in the main memory module.
IT security
Information assurance
The act of ensuring that information can be trusted, i.e. that the Confidentiality,
Integrity and Availability (CIA) of the information are not violated, e.g. ensuring
that data is not lost when critical issues arise. These issues include, but are not
limited to: natural disasters, computer/server malfunction or physical theft. Since
most information is stored on computers in our modern era, information assurance
is typically dealt with by IT security specialists. A common method of providing
information assurance is to have an off-site backup of the data in case one of the
mentioned issues arises.
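An off-site backup can be as simple as a verified copy. This is a minimal sketch with placeholder paths, not a complete backup strategy:

```python
import hashlib
import pathlib
import shutil

def backup(source, destination):
    """Copy a file and confirm the copy matches byte for byte."""
    shutil.copy2(source, destination)
    digest = lambda p: hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest()
    return digest(source) == digest(destination)
```

In practice the destination would sit on separate hardware at a separate site, and the copying would be scheduled and monitored rather than run by hand.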
Threats
Information security threats come in many different forms. Some of the most
common threats today are software attacks, theft of intellectual property, identity
theft, theft of equipment or information, sabotage, and information extortion. Most
people have experienced software attacks of some sort. Viruses,[2] worms,
phishing attacks, and Trojan horses are a few common examples of
software attacks. The theft of intellectual property has also been an extensive issue
for many businesses in the IT field. Identity theft is the attempt to act as someone
else usually to obtain that person's personal information or to take advantage of
their access to vital information. Theft of equipment or information is becoming
more prevalent today because most devices are mobile. Cell phones are prone to
theft, and have also become far more desirable as the amount of data they hold
increases. Sabotage usually consists of the destruction of an organization's website
in an attempt to cause loss of confidence on the part of its customers. Information
extortion consists of theft of a company's property or
information as an attempt to receive a payment in exchange for returning the
information or property back to its owner, as with ransomware. There are many
ways to help protect yourself from some of these attacks but one of the most
functional precautions is user carefulness.
For the individual, information security has a significant effect on privacy, which
is viewed very differently in different cultures.
The field of information security has grown and evolved significantly in recent
years. It offers many areas for specialization, including securing networks and
allied infrastructure, securing applications and databases, security testing,
information systems auditing, business continuity planning and digital forensics.
Responses to threats
Possible responses to a security threat or risk are:[4]
assign/transfer – place the cost of the threat onto another entity or organization
such as purchasing insurance or outsourcing
accept – evaluate if cost of countermeasure outweighs the possible cost of loss due
to threat
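The accept decision above is an expected-value comparison. One common way to quantify it multiplies the cost of a single loss by how often that loss is expected per year (the annualized loss expectancy); the figures below are invented for illustration:

```python
def accept_risk(countermeasure_cost, single_loss, losses_per_year):
    """Accept the risk when protecting against it costs more than
    the expected annual loss it would prevent."""
    expected_annual_loss = single_loss * losses_per_year
    return countermeasure_cost > expected_annual_loss

# A $10 000 control against a $2 000 loss expected once a decade:
print(accept_risk(10_000, 2_000, 0.1))  # True: accept the risk
```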
Confidentiality
Integrity
In information security, data integrity means maintaining and assuring the accuracy
and completeness of data over its entire life-cycle.[21] This means that data cannot
be modified in an unauthorized or undetected manner. This is not the same thing
as referential integrity in databases, although it can be viewed as a special case of
consistency as understood in the classic ACID model of transaction processing.
Information security systems typically provide message integrity in addition to
data confidentiality.
Availability
For any information system to serve its purpose, the information must
be available when it is needed. This means that the computing systems used to
store and process the information, the security controls used to protect it, and the
communication channels used to access it must be functioning correctly. High
availability systems aim to remain available at all times, preventing service
disruptions due to power outages, hardware failures, and system upgrades.
Ensuring availability also involves preventing denial-of-service attacks, such as a
flood of incoming messages to the target system essentially forcing it to shut down.
[22]
Non-repudiation
Controls
Administrative
Administrative controls form the basis for the selection and implementation of
logical and physical controls. Logical and physical controls are manifestations of
administrative controls. Administrative controls are of paramount importance.
Logical
Logical controls (also called technical controls) use software and data to monitor
and control access to information and computing systems. For example:
passwords, network and host-based firewalls, network intrusion
detection systems, access control lists, and data encryption are logical controls.
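An access control list, one of the logical controls named above, can be modeled as a simple lookup. The users, file, and permissions here are invented for the example:

```python
# resource -> user -> set of permitted actions (all names invented)
ACL = {"payroll.dat": {"alice": {"read", "write"},
                       "bob": {"read"}}}

def allowed(user, resource, action):
    """Deny by default: a permission must be explicitly listed."""
    return action in ACL.get(resource, {}).get(user, set())

print(allowed("bob", "payroll.dat", "read"))   # True
print(allowed("bob", "payroll.dat", "write"))  # False
```

The deny-by-default lookup is the essential property: anything not explicitly granted is refused.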
Physical
Physical controls monitor and control the environment of the work place and
computing facilities. They also monitor and control access to and from such
facilities. For example: doors, locks, heating and air conditioning, smoke and fire
alarms, fire suppression systems, cameras, barricades, fencing, security guards,
cable locks, etc. Separating the network and workplace into functional areas are
also physical controls.
To know what can threaten your data, you should know what malicious programs
(malware) exist and how they function. Malware can be subdivided into the
following types:
Viruses: programs that infect other programs by adding virus code to them, so as
to gain control when an infected file is started. This simple definition captures the
main action of a virus: infection. The spreading speed of viruses is lower than that of worms.
Worms: this type of malware uses network resources to spread. The class was
called worms because of its peculiar feature of "creeping" from computer to
computer over networks, mail and other information channels. Because of this, the
spreading speed of worms is very high.
Worms intrude your computer, calculate network addresses of other computers and
send to these addresses its copies. Besides network addresses, the data of the mail
clients' address books is used as well. Representatives of this Malware type
sometimes create working files on system discs, but may not deploy computer
resources (except the operating memory).
Spyware: software that collects data about a specific user or organization without their knowledge. You may not even suspect that spyware is present on your computer. As a rule, the aim of spyware is to:
Collect information about the contents of the hard drive; this often means scanning certain folders and the system registry to compile a list of the software installed on the computer.
Collect information about the quality of the connection, the way of connecting, modem speed, etc.
Collecting information is not the only function of these programs; they also threaten security. At least two known programs, Gator and eZula, allow an attacker not only to collect information but also to take control of the computer. Another example of spyware is a program embedded in the browser that redirects traffic: you have probably encountered this when requesting one web-site address and having a different web-site open instead. One channel for spreading spyware is phishing delivery.
Phishing is a mail delivery whose aim, as a rule, is to obtain confidential financial information from the user. Phishing is a form of social engineering, characterized by attempts to fraudulently acquire sensitive information, such as passwords and credit card details, by masquerading as a trustworthy person or business in an apparently official electronic communication, such as an email or instant message. The messages contain a link to a deliberately false site where the user is prompted to enter a credit card number and other confidential information.
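One common defence against such links is to compare the domain a link claims to point to with the domain it actually targets. A hedged heuristic sketch of that check in Python (the example URLs are invented):

```python
from urllib.parse import urlparse

def looks_like_phishing(display_text, href):
    """Flag a link whose visible text names one host but whose target
    URL points somewhere else. A simple heuristic sketch only."""
    # Prepend "//" so urlparse treats bare text like a network location.
    shown = urlparse(display_text if "//" in display_text
                     else "//" + display_text).hostname
    actual = urlparse(href).hostname
    return shown is not None and actual is not None and shown != actual

# The visible text claims mybank.example but the link goes elsewhere.
print(looks_like_phishing("www.mybank.example",
                          "http://evil.example/login"))       # True
print(looks_like_phishing("www.mybank.example",
                          "http://www.mybank.example"))       # False
```

Real mail filters combine many such signals (look-alike domains, reputation lists, message headers); this check illustrates only the masquerading aspect described above.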
Adware: program code embedded in software, without the user's knowledge, to display advertising. As a rule, adware is embedded in software that is distributed free of charge, with the advertisements appearing in the program's interface. Adware often also gathers personal information about the user and transfers it to the adware's distributor.
Riskware: software that is not itself a virus but carries a potential threat. Under some conditions, the presence of riskware on your PC puts your data at risk. This category includes remote-administration utilities, programs that use a dial-up connection to reach pay-per-minute internet sites, and some others.
Jokes: software that does not harm your computer but displays messages claiming that harm has already been done, or will be done under certain conditions. Such software often warns the user about dangers that do not exist, e.g. displaying messages about the hard disk being formatted (though no formatting actually takes place) or reporting viruses in uninfected files.
Rootkit: utilities used to conceal malicious activity. They disguise malware to prevent it from being detected by antivirus applications. Rootkits can also modify the operating system and substitute its core functions in order to hide their own presence and the actions the attacker performs on the infected computer.
Other malware: various programs developed to create other malware, organize DoS attacks on remote servers, intrude into other computers, and so on. Hack tools and virus constructors belong to this category.
Uses of cloud computing
You are probably using cloud computing right now, even if you don’t realise it. If you use an
online service to send email, edit documents, watch movies or TV, listen to music, play games or
store pictures and other files, it is likely that cloud computing is making it all possible behind the
scenes. The first cloud computing services are barely a decade old, but already a variety of
organisations—from tiny startups to global corporations, government agencies to non-profits—
are embracing the technology for all sorts of reasons.
Cloud computing is a big shift from the traditional way businesses think about IT resources.
Why is cloud computing so popular? Here are six common reasons organisations are turning to cloud computing services:
1. Cost
Cloud computing eliminates the capital expense of buying hardware and software and setting up
and running on-site datacenters—the racks of servers, the round-the-clock electricity for power
and cooling, the IT experts for managing the infrastructure. It adds up fast.
2. Speed
Most cloud computing services are provided self-service and on demand, so even vast amounts
of computing resources can be provisioned in minutes, typically with just a few mouse clicks,
giving businesses a lot of flexibility and taking the pressure off capacity planning.
3. Global scale
The benefits of cloud computing services include the ability to scale elastically. In cloud speak,
that means delivering the right amount of IT resources—for example, more or less computing
power, storage, bandwidth—right when it's needed and from the right geographic location.
4. Productivity
On-site datacenters typically require a lot of “racking and stacking”—hardware set up, software
patching and other time-consuming IT management chores. Cloud computing removes the need
for many of these tasks, so IT teams can spend time on achieving more important business goals.
5. Performance
The biggest cloud computing services run on a worldwide network of secure datacenters, which
are regularly upgraded to the latest generation of fast and efficient computing hardware. This
offers several benefits over a single corporate datacenter, including reduced network latency for
applications and greater economies of scale.
6. Reliability
Cloud computing makes data backup, disaster recovery and business continuity easier and less
expensive, because data can be mirrored at multiple redundant sites on the cloud provider’s
network.
Types of cloud services: IaaS, PaaS, SaaS
Most cloud computing services fall into three broad categories: infrastructure as a service (IaaS),
platform as a service (PaaS) and software as a service (SaaS). These are sometimes called the
cloud computing stack, because they build on top of one another. Knowing what they are and
how they are different makes it easier to accomplish your business goals.
Infrastructure-as-a-service (IaaS)
The most basic category of cloud computing services. With IaaS, you rent IT infrastructure—
servers and virtual machines (VMs), storage, networks, operating systems—from a cloud
provider on a pay-as-you-go basis. To learn more, see What is IaaS?
Software-as-a-service (SaaS)
Software-as-a-service (SaaS) is a method for delivering software applications over the Internet,
on demand and typically on a subscription basis. With SaaS, cloud providers host and manage
the software application and underlying infrastructure and handle any maintenance, like software
upgrades and security patching. Users connect to the application over the Internet, usually with a
web browser on their phone, tablet or PC. To learn more, see What is SaaS?
Types of cloud deployments: public, private, hybrid
Not all clouds are the same. There are three different ways to deploy cloud computing resources:
public cloud, private cloud and hybrid cloud.
Public cloud
Public clouds are owned and operated by a third-party cloud service provider, which delivers its computing resources, such as servers and storage, over the Internet. Microsoft Azure is an example of
a public cloud. With a public cloud, all hardware, software and other supporting infrastructure is
owned and managed by the cloud provider. You access these services and manage your account
using a web browser.
Private cloud
A private cloud refers to cloud computing resources used exclusively by a single business or
organisation. A private cloud can be physically located in the company's on-site datacenter.
Some companies also pay third-party service providers to host their private cloud. A private
cloud is one in which the services and infrastructure are maintained on a private network.
Hybrid cloud
Hybrid clouds combine public and private clouds, bound together by technology that allows data
and applications to be shared between them. By allowing data and applications to move between
private and public clouds, hybrid cloud gives businesses greater flexibility and more deployment
options.
Cloud computing services all work a little differently, depending on the provider. But many
provide a friendly, browser-based dashboard that makes it easier for IT professionals and
developers to order resources and manage their accounts. Some cloud computing services are
also designed to work with REST APIs and a command-line interface (CLI), giving developers
multiple options.
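Provisioning a resource through such a REST API typically amounts to an authenticated HTTP request with a JSON body. The endpoint, token, and request fields below are invented placeholders for illustration, not any real provider's API:

```python
import json
import urllib.request

# Hypothetical endpoint and token: real providers document their own
# URLs, authentication schemes, and request fields.
API_URL = "https://cloud.example.com/v1/vms"
TOKEN = "my-api-token"

def create_vm(name, size):
    """Sketch of a REST call asking the provider to provision a VM."""
    body = json.dumps({"name": name, "size": size}).encode()
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # the provider's JSON description of the VM

# create_vm("web-01", "small") would send the request and return the
# provider's response; a CLI typically wraps exactly this kind of call.
```

The browser dashboards mentioned above issue the same kind of requests behind the scenes, which is why the dashboard, the CLI, and direct API calls can all manage the same resources.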
Microsoft and cloud computing
Microsoft is a leading global provider of cloud computing services for businesses of all sizes. To
learn more about our cloud platform, Microsoft Azure and how it compares to other cloud
providers, see What is Azure? and Azure vs. AWS.