Historical Development of The Computer
INTRODUCTION
The computer is one of the marvelous inventions of recent times. It has grown beyond its
original purpose of computing. Today, it is used in all walks of human life. Its applications
range from simple data entry to complex global electronic commerce, and from online training to
technology development. Ever since its invention, the power of the computer has been growing
rapidly while the cost of hardware declines by the year. A large number of application
software packages are now available that make computers highly productive, versatile, and easy
to use. The use of computers is spreading to more areas of human activity, making life more
comfortable and leisurely.
DEFINITION OF A COMPUTER
A computer can be defined as an electronic device that performs rapid computations and generates
the desired output for users based on input data and programs.
A computer can capture, store, retrieve, and process data. The data may be numbers, characters,
audio, video, or images. Basically, computers can recognize only two states: whether a signal is
present or not. These two states are represented using the binary digits 1 and 0. All forms of data
are finally converted into binary digits for the computer to recognize and process. Instructions are
also converted into binary digits. A digital computer has the capability to manipulate a series of
binary digits according to the instructions (software) given to it.
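The idea that all forms of data reduce to binary digits can be illustrated with a short sketch (Python is used here purely for illustration; the specific values are examples, not part of any standard):

```python
# Sketch: how different kinds of data reduce to patterns of 0s and 1s.

# A number is stored as its binary representation.
number = 13
number_bits = format(number, "08b")  # 13 -> "00001101"

# A character is first mapped to a numeric code, then stored in binary.
text = "Hi"
text_bits = [format(b, "08b") for b in text.encode("ascii")]
# 'H' (code 72) -> "01001000", 'i' (code 105) -> "01101001"

print(number_bits)          # 00001101
print(" ".join(text_bits))  # 01001000 01101001
```

Audio, video, and images follow the same principle: samples and pixels are turned into numbers, and the numbers into binary digits.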
Characteristics of a computer
A computer is a versatile device. It can be designed to do any kind of activity, provided all data and
instructions are made available to it in digital form. The important characteristics of a computer are:
1. Speed
Modern computers have very high processing speeds. Today's computers, including recent
microcomputers, have clock cycles measured in nanoseconds and can carry out millions of instructions
per second (MIPS).
2. Accuracy
A computer can ensure a consistently high degree of accuracy in computations. It processes data
according to the sequence of instructions provided through a program. Hence, if input data and
procedures are correct, the output will be consistently accurate.
3. Storage
A computer has huge storage capability. For a personal computer, 8 GB of RAM is almost standard
today. Secondary or auxiliary storage devices are used for permanent storage. Modern computers
have enormous secondary storage capacity; e.g., a PC hard disk can have a storage capacity of more than
1 TB. Huge storage and fast retrieval capability make computers very special tools for data
processing and communications.
4. Versatility
Though computers are basically designed to carry out simple arithmetic and logical operations,
they are capable of performing almost any task that has a series of finite logical steps. Computers
can be used for communications, process control, research, weather forecasting, healthcare, online
trading, education, training, defense applications, and many other areas.
5. Diligence
The computer is free from fatigue. It does not get tired of work and never loses concentration. It
can perform basic arithmetic operations with the same speed and accuracy continuously for any
length of time, with the same efficiency as for the first transaction.
6. Programmable
A computer can be programmed to function automatically, which differentiates it from any other
calculation device. It functions as programmed for any stretch of time until the condition to
terminate is satisfied.
7. Networking capability
Computers can be interconnected into a network. A network, in turn, can be connected to other
networks. Networks extend the capabilities of computers. They provide the basic
infrastructure for electronic communications, electronic commerce, online trading, and
information services.
Limitations of a computer
The computer is, no doubt, a marvelous tool. Yet it has some limitations. Some of the major
limitations of computers are as follows:
1. A computer cannot think on its own. It has to be given instructions to perform any
operation. Research is currently underway to impart artificial intelligence (AI) to
computers. Once this becomes possible, computers will be able to think on their own,
making them a reasonable replication of the human mind.
2. It does not have intuition. It cannot draw a conclusion without going through all the
intermediate steps.
3. It can do a task only if it can be expressed in a series of finite steps leading to the
completion of the task.
4. It cannot learn from experience; it will commit the same error repeatedly. But changes
are taking place in this area as research progresses on artificial intelligence (AI).
Introduction
The history of computers dates back to the age when man started using tools for computations.
The whole history of computing can be divided into two periods based on the technology used in
computing devices: the mechanical era and the electronic era. During the mechanical era,
computers relied on physical mechanisms like gears and levers to perform calculations.
However, it was during the electronic era that the development of transistors and integrated
circuits revolutionized computing, leading to smaller and more powerful devices.
The mechanical era of computing devices spanned from the early 17th century to the mid-20th
century, during which machines like the abacus, slide rule, and mechanical calculators were used
for mathematical calculations. The electronic era, on the other hand, began with the invention of
electronic computers in the mid-20th century and continues to this day with advancements in
technology leading to smaller, faster, and more powerful computing devices.
The Abacus
The abacus, which emerged about 5,000 years ago in Asia Minor and is still in use today, may be
considered the earliest known computing device. This device allows users to make computations
using a system of sliding beads arranged on a rack. Early merchants used the abacus to keep
track of trading transactions. But as the use of paper and pencil spread, particularly in Europe,
the abacus lost its importance.
Napier's Bones
Scientists continued to develop improved calculators as needed. John Napier of Scotland devised
the Napier Bones calculating instrument in 1617. Napier's invention employed bone rods with
printed numbers for counting. These rods could perform addition, subtraction, multiplication,
and division calculations with simplicity.
The Pascaline
In 1642, Blaise Pascal (1623-1662), the son of a French tax collector, invented what he called a
numerical wheel calculator to help his father with his duties. This brass rectangular box, also
called a Pascaline, used eight movable dials to add sums up to eight figures long. Pascal's device
used a base of ten to accomplish this. For example, as one dial moved ten notches, or one
complete revolution, it moved the next dial, which represented the tens column, one place.
The hundreds dial rotated one notch when the tens dial moved one revolution, and so forth.
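The dial-and-carry principle described above is ordinary decimal counting with carry propagation. A minimal simulation (an illustrative sketch, not a description of the actual mechanism's construction):

```python
# Illustrative sketch of the Pascaline's carry principle: each dial
# holds a digit 0-9, and a full revolution of one dial advances the
# next (higher-order) dial by one notch.

def add_on_dials(dials, amount, position=0):
    """Add `amount` to the dial at `position`, propagating carries.
    dials[0] is the units dial, dials[1] the tens dial, and so on."""
    dials[position] += amount
    while position < len(dials) and dials[position] > 9:
        carry, dials[position] = divmod(dials[position], 10)
        if position + 1 < len(dials):
            dials[position + 1] += carry  # the next dial advances
        position += 1
    return dials

dials = [0] * 8          # eight dials, as on Pascal's brass box
add_on_dials(dials, 7)   # units dial shows 7
add_on_dials(dials, 5)   # 7 + 5 = 12: units wraps to 2, tens advances to 1
print(dials)             # [2, 1, 0, 0, 0, 0, 0, 0]
```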
In 1694, a German mathematician and philosopher, Gottfried Wilhelm von Leibniz (1646-1716),
improved the Pascaline by creating a machine that could also multiply. Like its predecessor,
Leibniz's mechanical multiplier worked by a system of gears and dials. Partly by studying
Pascal's original notes and drawings, Leibniz was able to refine his machine. It wasn't until 1820,
however, that mechanical calculators gained widespread use. Charles Xavier Thomas de Colmar,
a Frenchman, invented a machine that could perform the four basic arithmetic functions.
Colmar's mechanical calculator, the arithmometer, presented a more practical approach to
computing because it could add, subtract, multiply, and divide. With its enhanced versatility, the
arithmometer was widely used up until the First World War. Although later inventors refined
Colmar's calculator, he, together with fellow inventors Pascal and Leibniz, helped define the age
of mechanical computation.
Analytical Engine
In 1822, Charles Babbage proposed and began developing the Difference Engine, considered to
be the first automatic computing engine, capable of computing several sets of numbers
and making hard copies of the results. Unfortunately, because of funding problems he was never able to
complete a full-scale functional version of this machine. In June 1991, the London Science
Museum completed Difference Engine No. 2 for the bicentennial year of Babbage's birth and
later completed the printing mechanism in 2000. Later, in 1837, Charles Babbage proposed the
first general mechanical computer, the Analytical Engine. The Analytical Engine contained
an Arithmetic Logic Unit (ALU), basic flow control, and integrated memory and is the first
general-purpose computer concept. Unfortunately, because of funding issues this computer was
also never built while Charles Babbage was alive. In 1910, Henry Babbage, Charles Babbage's
youngest son, completed a portion of this machine, which was able to perform basic
calculations.
Hollerith's Tabulating Machine
Herman Hollerith's tabulating machine, first used in the 1890 US census, was the first successful
electromechanical machine for processing data. It used punched cards to store data, which were
then read by the machine and tabulated. This was a major improvement over the previous
method of hand-counting census data, which was slow and tedious.
Hollerith's machine also represented data by the presence or absence of holes, an early form of
the binary coding that computers still use today.
It used punched cards to store data. Each card represented a single individual, and each
column on the card represented a different data point, such as name, age, and gender.
The machine used an electric current to read the punched cards. When a hole was present
in a card, the machine would complete an electrical circuit. This would trigger a counter,
which would keep track of the number of holes in each column.
The machine could be used to sort and tabulate the data on the punched cards. This
allowed census workers to quickly and easily generate reports on the population.
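The counting principle — a hole completes a circuit and advances a counter — can be sketched as follows (the card layout and column names here are hypothetical, chosen only for illustration):

```python
# Illustrative sketch of Hollerith-style tabulation: each card is a
# row of hole positions; a hole (1) closes a circuit and advances the
# counter for that column. The column meanings are hypothetical.

columns = ["male", "female", "under_21", "farmer"]

cards = [
    [1, 0, 0, 1],   # card 1: male, farmer
    [0, 1, 1, 0],   # card 2: female, under 21
    [1, 0, 1, 0],   # card 3: male, under 21
]

counters = {name: 0 for name in columns}
for card in cards:
    for name, hole in zip(columns, card):
        if hole:                 # hole present -> circuit closes
            counters[name] += 1  # -> counter advances

print(counters)  # {'male': 2, 'female': 1, 'under_21': 2, 'farmer': 1}
```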
Hollerith's machine was a major success. It was used by the US Census Bureau for the 1890 and
1900 censuses, and it was also used by businesses and governments around the world. Hollerith
founded the Tabulating Machine Company, which later became IBM.
Hollerith's machine was a significant breakthrough in computing. It paved the way for the
development of modern computers and data processing systems. Today, his work is still
remembered and celebrated by computer scientists around the world.
First Generation Computers
First generation computers consumed a lot of power, often requiring their own dedicated power
generators. They also required elaborate cooling mechanisms to dissipate the generated heat. This
made them expensive to operate and limited their use to large organizations.
First generation computers were programmed using low-level programming languages (machine
language). These languages are difficult to learn and use, and they are specific to the computer
hardware. Operators were also required to have a thorough knowledge of the computer. This
meant that very few people could use these computers.
First generation computers were not very reliable. Vacuum tubes were prone to failure, and
computers would often break down.
They used magnetic tape and punched cards for input and output.
They were very expensive, and only large organizations could afford them.
They were used for a limited range of applications, such as scientific computing and
military applications.
Some popular first generation computers include:
ENIAC
UNIVAC I
Harvard Mark I
Colossus
First generation computers, even though they were limited in their capabilities, had a huge impact
on society. They were the first computers to be used for practical applications, and they laid
the foundation for the development of more powerful and reliable computers in the future.
Second Generation Computers
Second generation computers replaced vacuum tubes with transistors, which made
second generation computers more energy-efficient. The programming languages FORTRAN
and COBOL were introduced during this generation.
Second generation computers used magnetic core memory, which was faster and more reliable
than the drum memory used in first generation computers. They were used for a wider range of
applications, including business, scientific, and engineering.
Third Generation Computers
Operating systems became more important in the third generation of computers. Operating
systems are software programs that manage the computer's hardware and resources. They also
provide a platform for running user applications. High-level programming languages became
more popular in the third generation of computers. High-level programming languages are easier
to learn and use than low-level programming languages, such as assembly language. Some
popular high-level programming languages from the third generation include COBOL,
FORTRAN, and ALGOL. High-level programming languages made it easier to develop
software.
Mini and mainframe computers were developed during this generation, with provision for
time sharing and multiprogramming. The storage capacity and speed of these
computers increased manyfold, which allowed the use of user-friendly package programs, word
processing, and remote terminals. Remote terminals could access central computer
facilities and get results instantaneously.
Third generation computers laid the foundation for the personal computer revolution of the
fourth generation.
Fourth Generation Computers
The fourth generation computers were introduced in the early 1970s and were characterized by
the development of microprocessors and microcomputers. Microprocessors are integrated
circuits that contain all of the central processing unit (CPU) functions on a single chip.
Microcomputers are small, personal computers that use microprocessors.
Microcomputers were first introduced in the mid-1970s. The Altair 8800, released in 1975, was
one of the first successful microcomputers. It was a kit computer that users had to assemble
themselves. The first commercially successful microcomputer, the Apple II, was released in
1977. It was a fully assembled computer that came with a keyboard, monitor, and software.
High-level programming languages became more popular in the fourth generation of computers.
High-level programming languages are easier to learn and use than low-level programming
languages, such as assembly language. Some popular high-level programming languages from
the fourth generation include BASIC, COBOL, Pascal, and C. The Internet began to develop in
the fourth generation of computers. The Internet allows users to share information and resources
with each other. The first email was sent in 1971, and the World Wide Web was proposed in 1989.
The fourth generation of computers has had a profound impact on society. Microprocessors and
microcomputers made computers more affordable and accessible to a wider range of users. High-
level programming languages made computers easier to program and use. And the Internet
revolutionized the way people communicate and share information. The fourth generation of
computers has been a time of great innovation in the computer industry, with a profound impact
on society.
Fifth Generation (Present and Beyond)
Fifth generation computers are still under development, but they are expected to be characterized
by heavy use of AI technologies and applications. Fifth generation computers will be able to use
artificial intelligence (AI) to understand and respond to human language, learn from data, and
make decisions. This will enable new applications in areas such as natural language processing,
machine translation, and expert systems.
Fifth generation computers will be able to take advantage of the high speed and low latency of
5G cellular networks and the Internet of Things (IoT) to connect to a wide range of devices and
sensors. This will enable new applications in areas such as smart cities, self-driving cars, and
telemedicine. They will also make use of parallel processing to perform calculations much faster
than previous generations of computers. This will enable new applications in areas such as
scientific computing, video processing, and financial modeling.
They will use new materials and technologies to create smaller, more powerful, and more
energy-efficient chips.
They will use new software architectures and programming languages to make it easier to
develop and deploy complex AI applications.
They will be more secure and reliable than previous generations of computers.
Natural language processing: Fifth generation computers will be able to understand and
respond to human language, which will enable new applications in areas such as customer
service, education, and entertainment.
Machine translation: Fifth generation computers will be able to translate languages
accurately and fluently, which will enable new applications in areas such as global
communication and international business.
Expert systems: Fifth generation computers will be able to use AI to provide expert advice in
a variety of fields, such as medicine, law, and finance.
Smart cities: Fifth generation computers will be able to help cities manage traffic, energy,
and other resources more efficiently.
Self-driving cars: Fifth generation computers will be able to power the self-driving cars of
the future.
Telemedicine: Fifth generation computers will be used to provide remote medical care to
patients in rural or underserved areas.
Scientific computing: Fifth generation computers will be used to perform complex scientific
simulations and calculations.
Video processing: Fifth generation computers will be used to create and process high-quality
video content.
Financial modeling: Fifth generation computers will be used to develop and deploy complex
financial models.
Fifth generation computers are still under development, with most of the areas listed above
having projects in initial trials. By using AI, 5G, and parallel processing, fifth generation computers
will be able to perform tasks that are currently impossible or impractical. This will enable new
and innovative applications in a wide range of fields.