Chapter 2


COMPUTER EVOLUTION AND PERFORMANCE

Course Instructor:
Asst. Prof. Dr. Rashidah Funke Olanrewaju
Evolution of Computers: Key Characteristics

A generation in computing is a step in technology.
It provides a framework for the growth of the computer industry.
The term was originally used to distinguish between varying hardware technologies.
Nowadays it has been extended to include both the hardware and the software, which together make up an entire computer system.
Evolution of Computers: Key Characteristics

Characteristics:
Increasing processor speed: heavy use of pipelining and parallel execution techniques, including speculative execution.
Decreasing component size.
Increasing memory size.
Increasing I/O capacity and speed.

Critical issue in design: balancing the performance of the various elements. Processor speed has increased more rapidly than memory access time.
Evolution of Computers: Brief History

The history of computer development is often described in terms of the different generations of computing devices.

Each generation of computer is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful, and more efficient and reliable devices.
The Early Developments: Charles Babbage (1791–1871)

Lucasian Professor of Mathematics, Cambridge University, 1827–1839.
Difference Engine, 1823.
Analytical Engine, 1833: the forerunner of the modern digital computer!

Application:
Mathematical tables for astronomy; nautical tables for the Navy.
Background:
Any continuous function can be approximated by a polynomial (Weierstrass).
Technology:
Mechanical: gears, Jacquard's loom, simple calculators.
The Early Developments (1930–1940)

Relays were used in the earliest equipment: electronic accounting machines.
Early computers were made in the 1930s and early 1940s at Bell Labs and by Zuse in Germany.
Relays are mechanical and hence relatively slow: their response time is measured in milliseconds.
IBM 407: Electronic Accounting Machine

The IBM 407 accounting machine prepared reports and records from punched cards.
It was a fast, versatile unit with advanced features that provided high production and complete flexibility.
Introduced on July 19, 1949, the 407 read 80-column cards, positioned forms, recorded details, and added and subtracted to print any desired combination of totals at up to 150 lines per minute.
The Type 407 had been in the IBM product line for almost three decades when IBM withdrew the last of the models from marketing in 1976.
The Early Developments: Example of a Relay Machine

The Harvard Mark I
Built in 1944 in IBM's Endicott laboratories by Howard Aiken, Professor of Physics at Harvard.
Essentially mechanical, but had some electromagnetically controlled relays and gears.
Weighed 5 tons and had 750,000 components.
A synchronizing clock beat every 0.015 seconds.
0.3 seconds for an addition, 6 seconds for a multiplication, 1 minute for a sine calculation.
Broke down once a week!
The Early Developments: Example of a Tube-and-Relay Machine

Linear Equation Solver
John Atanasoff, Iowa State University, 1930s: Atanasoff built the Linear Equation Solver.
It had 300 tubes!

Application:
Linear equations and integral-differential equations.
Background:
Vannevar Bush's Differential Analyzer, an analog computer.
Technology:
Tubes and electromechanical relays.
The First Generation (1940–1956): Vacuum Tubes

First-generation computers are characterized by the use of vacuum tubes.
These vacuum tubes were used for calculation as well as for storage and control.
They used vacuum tubes for circuitry and magnetic drums for memory.
Later, magnetic tapes and magnetic drums were implemented as storage media.
They were often enormous, taking up entire rooms.
They were very expensive to operate; in addition to using a great deal of electricity, they generated a lot of heat, which was often the cause of malfunctions.
They relied on machine language to perform operations, and they could only solve one problem at a time.
Input was based on punched cards and paper tape, and output was displayed on printouts.
The First Generation (1940–1956): Vacuum Tubes
Example: The ENIAC

Acronym for Electronic Numerical Integrator And Computer.
The first general-purpose operational electronic digital computer.
Designed and constructed under the supervision of John Mauchly and John Presper Eckert at the University of Pennsylvania.
The project was a response to the US Army's need to compute World War II ballistic firing tables.
In addition to ballistics, the ENIAC's field of application included weather prediction, atomic-energy calculations, cosmic-ray studies, thermal ignition, random-number studies, wind-tunnel design, and other scientific uses.
The First Generation (1940–1956): Vacuum Tubes

Tubes were much faster than relays (because they had no mechanical moving parts), but they were bulky, required high power, and had a relatively short life (crucial because of the large number of components).
The First Generation (1940–1956): Vacuum Tubes
Example: The ENIAC

The ENIAC was completed in 1946.
Weighed 30 tons.
Occupied 1500 square feet.
Used 200 kilowatts of electric power.
Consisted of 18,000 vacuum tubes.
ENIAC was NOT a stored-program device: it had to be programmed manually by setting switches and plugging and unplugging cables.
For each problem, someone analyzed the arithmetic processing needed and prepared wiring diagrams to follow when wiring the machine. The process was time-consuming and error-prone.
The ENIAC was disassembled in 1955.
The First Generation (1940–1956): Vacuum Tubes
Example: The ENIAC

TECHNICAL DETAILS
Decimal machine: ENIAC was a decimal rather than a binary machine (numbers were represented in decimal form and arithmetic was performed in the decimal system).
Memory: its memory consisted of 20 accumulators, each capable of holding a signed 10-digit decimal number.
Each digit was represented by a ring of 10 vacuum tubes acting as a 10-bit ring counter: only one tube is ON at a time, and its position gives the digit, so the pattern 1000000000 represents 0 (illustrated in the sketch below).
Faster than any electromechanical computer: 5,000 additions per second.

Accumulator (28 vacuum tubes)
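As a quick illustration of that digit encoding, here is a minimal sketch in Python (an illustrative model of a ring counter, not ENIAC's actual circuitry):

```python
# Model of a decimal digit stored as a 10-position ring counter:
# exactly one "tube" is ON, and its position encodes the digit.

def digit_to_ring(d):
    """Return the ring-counter pattern for decimal digit d (0-9)."""
    bits = ["0"] * 10
    bits[d] = "1"                        # the single tube switched ON
    return "".join(bits)

def ring_to_digit(pattern):
    """Recover the digit from a ring-counter pattern."""
    return pattern.index("1")

print(digit_to_ring(0))                  # 1000000000, as quoted on the slide
print(ring_to_digit("0010000000"))       # 2
```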
The First Generation (1940–1956): Vacuum Tubes

The ENIAC
The Von Neumann Machine

In 1945, John von Neumann (a consultant on the ENIAC project) proposed the EDVAC (Electronic Discrete Variable Automatic Computer), built around the stored-program concept.
1945: the first publication of the idea was a proposal by von Neumann for a new computer, the EDVAC.
1946: design began on a new stored-program computer, the IAS machine, at the Princeton Institute for Advanced Studies, New Jersey, USA.

John von Neumann (December 28, 1903 – February 8, 1957) was a Hungarian-American mathematician who made major contributions to a vast range of fields.
The Von Neumann Machine

The IAS machine was a binary computer with a 40-bit word, storing two 20-bit instructions in each word.
The memory was 1024 words (5.1 kilobytes).
Negative numbers were represented in "two's complement" format (see the sketch below).
It had two general-purpose registers available: the Accumulator (AC) and the Multiplier/Quotient (MQ).
In 1952 the IAS was completed; it became the prototype of all subsequent general-purpose computers.
The Von Neumann Machine

Three key concepts in the von Neumann architecture:

1. Data and instructions are stored together in a single read-write memory.
2. The contents of this memory are addressable by location, without regard to the type of data contained there.
3. Execution occurs in a sequential fashion (unless explicitly modified) from one instruction to the next.
The Von Neumann Machine (Stored Program Concept)
Structure of the von Neumann Machine
IAS Memory Formats

The memory of the IAS consists of 1000 storage locations (called words) of 40 bits each.
Both data and instructions are stored there.
Numbers are represented in binary form, and each instruction is a binary code.
Structure of the IAS Computer
CONCEPT OF REGISTERS

Registers
Registers are high-speed memories located within the Central Processing Unit (CPU). All data must be placed in a register before it can be processed. For example, if two numbers are to be multiplied, both numbers must be in registers, and the result is also placed in a register. (The register can contain the address of a memory location where data is stored rather than the actual data itself.)

Registers are small in size: typically less than 64 bits; 32-bit and, more recently, 64-bit registers are common in desktops.

The contents of a register can be read or written very quickly: often an order of magnitude faster than main memory and several orders of magnitude faster than disk memory, e.g. in less than a nanosecond (10^-9 sec).

Usually, the movement of data in and out of registers is completely transparent to users, and even to programmers. Only assembly language programs can manipulate registers directly. In high-level languages, the compiler is responsible for translating high-level operations into low-level operations that access registers.
Registers

Different processors have different sets of registers.
A common register is the Accumulator (ACC), a data register that the user can directly address and use to store any results they wish.
Processors may also have other registers with particular purposes:
General-purpose registers: users may use them as they wish.
Address registers: used for storing addresses.
Condition registers: hold truth values for loops and selection.
Registers

Memory buffer register (MBR): contains a word to be stored in memory or sent to the I/O unit, or is used to receive a word from memory or from the I/O unit.
Memory address register (MAR): specifies the address in memory of the word to be written from or read into the MBR.
Instruction register (IR): contains the 8-bit opcode instruction being executed.
Instruction buffer register (IBR): employed to temporarily hold the right-hand instruction from a word in memory.
Program counter (PC): contains the address of the next instruction pair to be fetched from memory.
Accumulator (AC) and multiplier quotient (MQ): employed to temporarily hold operands and results of ALU operations.
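To show how these registers cooperate, here is a minimal sketch of a stored-program fetch-decode-execute cycle. The three-instruction machine is invented for illustration; it is not the IAS instruction set, and the IBR and MQ are omitted for brevity:

```python
# Toy stored-program machine: PC, MAR, MBR, IR and AC drive a
# fetch-decode-execute loop over one memory holding code and data.

memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("HALT", None),
          10: 7, 11: 35}             # data stored alongside instructions

PC, AC = 0, 0
while True:
    MAR = PC                         # address of the next instruction
    MBR = memory[MAR]                # fetch the word from memory
    IR, addr = MBR                   # decode opcode and operand address
    PC += 1                          # sequential execution by default
    if IR == "LOAD":
        AC = memory[addr]
    elif IR == "ADD":
        AC += memory[addr]
    elif IR == "HALT":
        break

print(AC)                            # 42
```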
Registers

Arithmetic instructions require their operands to be in registers.
The compiler associates variables with registers.
Usually only 32 registers are provided.
The Second Generation (1957–1963): Transistors

Transistors (made from silicon) replaced vacuum tubes.
Invented in 1947 at Bell Labs (William Shockley et al.).
The transistor was far superior to the vacuum tube, allowing 200,000 operations per second and enabling computers to become:
smaller
faster
cheaper
cooler-running (less heat)
more energy-efficient
more reliable
The Second Generation (1957–1963): Transistors

Second-generation computers still relied on punched cards for input and printouts for output.
Second-generation computers moved from cryptic binary machine language to symbolic assembly languages, which allowed programmers to specify instructions in words.
High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN.
The Second Generation (1957–1963): Transistors
The Third Generation (1964–1971): Integrated Circuits

The development of the integrated circuit was the hallmark of the third generation of computers.
Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers.
Instead of punched cards and printouts, users interacted with third-generation computers through keyboards and monitors, and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory.
Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.
The Third Generation (1964–1971): Integrated Circuits
Moore's Law

Propounded by Gordon Moore, co-founder of Intel, in 1965.
Observed the increasing density of components on a chip: the number of transistors on a chip will double every year.
The cost of a chip has remained almost unchanged.
Higher packing density means shorter electrical paths, giving higher performance.
Smaller size gives increased flexibility.
Reduced power and cooling requirements.
Fewer interconnections increase reliability.
Moore's Law

1965: Gordon Moore, co-founder of Intel, observed that the number of transistors that could be put on a single chip was doubling every year.
The pace slowed to a doubling every 18 months in the 1970s, but has sustained that rate ever since.

Consequences of Moore's law:
The cost of computer logic and memory circuitry has fallen at a dramatic rate.
The electrical path length is shortened, increasing operating speed.
The computer becomes smaller and more convenient to use in a variety of environments.
Reduction in power and cooling requirements.
Fewer interchip connections.
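To see what a doubling every 18 months implies, here is a back-of-the-envelope projection (an illustrative extrapolation from the Intel 4004's roughly 2,300 transistors, not measured data):

```python
# Transistor count projected under Moore's law (doubling every 18 months),
# starting from the Intel 4004 of 1971 (~2,300 transistors, an assumption).

base_year, base_count = 1971, 2300
months_per_doubling = 18

for year in (1971, 1981, 1991, 2001):
    doublings = (year - base_year) * 12 / months_per_doubling
    print(year, f"{base_count * 2 ** doublings:,.0f}")
```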
Characteristics of the System/360 Family

Table 2.4: Characteristics of the System/360 Family
The Third Generation (1964–1971): Integrated Circuits

Growth in CPU Transistor Count
The Third Generation (1964–1971): Integrated Circuits

Growth in Transistor Count on Integrated Circuits (DRAM memory)


Later Generations

LSI: Large Scale Integration
VLSI: Very Large Scale Integration
ULSI: Ultra Large Scale Integration

Semiconductor Memory
Microprocessors
The Fourth Generation (1972–present): Microprocessors

Semiconductor Memory
In 1970 Fairchild produced the first relatively capacious semiconductor memory.
The chip was about the size of a single core, could hold 256 bits of memory, was non-destructive on read, and was much faster than core.
In 1974 the price per bit of semiconductor memory dropped below the price per bit of core memory.
There has been a continuing and rapid decline in memory cost, accompanied by a corresponding increase in physical memory density.
Developments in memory and processor technologies changed the nature of computers in less than a decade.
Since 1970, semiconductor memory has been through 13 generations. Each generation has provided four times the storage density of the previous generation, accompanied by declining cost per bit and declining access time (see the calculation below).
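That compounding is easy to check (a quick illustrative calculation):

```python
# Density growth after 13 memory generations, each 4x denser than the last.
generations = 13
print(f"{4 ** generations:,}")   # 67,108,864x the density of the 1970 chip
```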
The Fourth Generation (1972–present): Microprocessors

The microprocessor brought in the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip.
What in the first generation filled an entire room could now fit in the palm of the hand.
Intel was founded in 1968 by Robert Noyce, Gordon Moore, and Andrew Grove, with a focus on random access memory (RAM) chips.
The Intel 4004 chip, developed in 1971, located all the components of the computer, from the central processing unit and memory to input/output controls, on a single chip: the microprocessor was born.
Evolution of Intel Microprocessors

a. 1970s Processors

b. 1980s Processors
Evolution of Intel Microprocessors

c. 1990s Processors

d. Recent Processors
Microprocessor Speed

Techniques built into contemporary processors include:

Pipelining: the processor moves data or instructions into a conceptual pipe, with all stages of the pipe processing simultaneously.

Branch prediction: the processor looks ahead in the instruction code fetched from memory and predicts which branches, or groups of instructions, are likely to be processed next.

Data flow analysis: the processor analyzes which instructions are dependent on each other's results, or data, to create an optimized schedule of instructions.

Speculative execution: using branch prediction and data flow analysis, some processors speculatively execute instructions ahead of their actual appearance in the program execution, holding the results in temporary locations, keeping execution engines as busy as possible.
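A standard way to quantify the benefit of pipelining is the textbook ideal-speedup formula: with a k-stage pipeline and n instructions, execution takes k + (n - 1) cycles instead of n * k. A minimal sketch (idealized; it ignores stalls and hazards):

```python
# Ideal pipeline speedup for a k-stage pipeline executing n instructions.

def pipeline_speedup(k, n):
    unpipelined = n * k        # each instruction runs start-to-finish alone
    pipelined = k + (n - 1)    # fill the pipe, then one completion per cycle
    return unpipelined / pipelined

print(round(pipeline_speedup(k=5, n=1000), 2))   # 4.98, approaching k
```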
Branch Predictor

In computer architecture, a branch predictor is a digital circuit that tries to guess which way a branch (e.g. an if-then-else structure) will go before this is known for sure. The purpose of the branch predictor is to improve the flow in the instruction pipeline. Branch predictors play a critical role in achieving high effective performance in many modern pipelined microprocessor architectures such as x86.
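A classic hardware scheme for this is the 2-bit saturating counter. Below is a minimal behavioural sketch (an illustrative model, not any particular CPU's circuit):

```python
# 2-bit saturating counter: states 0-1 predict NOT taken, 2-3 predict taken.
# One surprise outcome nudges the counter instead of flipping the prediction.

state = 2                            # start in "weakly taken"

def predict():
    return state >= 2                # True means "predict taken"

def update(taken):
    global state
    state = min(3, state + 1) if taken else max(0, state - 1)

outcomes = [True] * 9 + [False]      # a loop branch: taken 9 times, then exits
hits = 0
for taken in outcomes:
    hits += (predict() == taken)
    update(taken)
print(f"{hits}/{len(outcomes)} predictions correct")   # 9/10
```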
Pipelining
Speculative Execution

Speculative execution is an optimization technique in which a computer system performs some task that may not actually be needed. The main idea is to do work before it is known whether that work will be needed at all, so as to prevent the delay that would be incurred by doing the work only after it is known to be needed. If it turns out the work was not needed after all, any changes made by the work are reverted and the results are ignored. The goal is to provide more concurrency when extra resources are available.
The Fourth Generation (1972–present): Microprocessors

Intel Microprocessors
8088
80286
486
Pentium
Pentium II
Pentium 4
Pentium D
Pentium Dual Core
The Fourth Generation (1972–present): Microprocessors
Intel Microprocessors
The Fourth Generation (1972–present): Microprocessors

IBM 5150: the first IBM PC (Intel 8088)
Original Macintosh (Motorola 68000)
Dell Inspiron 530: blazing Core 2 Duo processor
iMac computer, modeled in MAX
The Fourth Generation (1972–present): Microprocessors

Processor speed increased.
Memory capacity increased.
Memory speed lags behind processor speed.
The Fourth Generation (1972–present): Microprocessors
Performance Balance: Solution

Adjust the organization and architecture to compensate for the mismatch among the capabilities of the various components. Architectural examples include:

Increase the number of bits that are retrieved at one time by making DRAMs wider rather than deeper and by using wide bus data paths.
Change the DRAM interface to make it more efficient by including a cache or other buffering scheme on the DRAM chip.
Reduce the frequency of memory access by incorporating increasingly complex and efficient cache structures between the processor and main memory.
Increase the interconnect bandwidth between processors and memory by using higher-speed buses and a hierarchy of buses to buffer and structure data flow.
Typical I/O Device Data Rates
Multi-core Processor

A single computing component with two or more independent CPUs (called "cores").
Multiple cores can run multiple instructions at the same time, increasing overall speed for programs amenable to parallel computing (see the sketch after the examples below).
Manufacturers typically integrate the cores onto a single integrated circuit die (known as a chip multiprocessor or CMP), or onto multiple dies in a single chip package.
Processors were originally developed with only one core. Multi-core processors were developed in the early 2000s by Intel, AMD and others.
Multi-core Processor

Multi-core processors may have two cores (dual core, e.g. AMD Phenom II X2, Intel Core Duo), four cores (quad core, e.g. AMD Phenom II X4, Intel's quad-core i5 and i7 at Intel Core), six cores (e.g. AMD Phenom II X6, Intel Core i7 Extreme Edition 980X), eight cores (e.g. Intel Xeon E7-2820, AMD FX-8350), ten cores (e.g. Intel Xeon E7-2850), or more.
Multi-core systems may have cores that are not identical. Just as with single-processor systems, cores in multi-core systems may implement architectures such as superscalar, VLIW, vector processing, SIMD, or multithreading.
Multi-core processors are widely used across many application domains, including general-purpose, embedded, network, digital signal processing (DSP), and graphics.
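To make "amenable to parallel computing" concrete, here is a minimal sketch that spreads an independent workload across however many cores the machine has (illustrative; the actual speedup depends on the workload and hardware):

```python
# Split an embarrassingly parallel job (counting primes) across CPU cores.
import os
from multiprocessing import Pool

def count_primes(bounds):
    lo, hi = bounds
    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(is_prime(n) for n in range(lo, hi))

if __name__ == "__main__":
    chunks = [(i, i + 50_000) for i in range(0, 400_000, 50_000)]
    with Pool(os.cpu_count()) as pool:       # one worker process per core
        total = sum(pool.map(count_primes, chunks))
    print(total)    # same result as a serial loop, computed on all cores
```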
Multicore

The use of multiple processors on the same chip provides the potential to increase performance without increasing the clock rate.
The strategy is to use two simpler processors on the chip rather than one more complex processor.
With two processors, larger caches are justified.
As caches became larger, it made performance sense to create two and then three levels of cache on a chip.
The Fifth Generation (Present–Future): Artificial Intelligence

Scientists are now at work on fifth-generation computers. Based on artificial intelligence, they are still in development. The aim is machines with genuine I.Q., the ability to reason logically, and real knowledge of the world. Unlike each of the last four generations, which naturally followed its predecessor, the fifth generation will be totally different, totally novel, and totally new.

In structure, it will be parallel and will be able to do multiple tasks simultaneously.
In function, it will not be algorithmic (step by step, one step at a time).
In nature, it will do not just data processing (number crunching) but knowledge processing. In inference, it will not be merely deductive, but also inductive. In application, it will behave like an expert.
In programming, it will interact with humans in ordinary language (unlike BASIC, COBOL, FORTRAN, etc., which present computers need).
In architecture, it will have KIPS (Knowledge Information Processing Systems) rather than the present DIPS/LIPS (Data/Logic Information Processing Systems).
FIFTH GENERATION (NOT HERE YET?)

The odds of coming out with a fifth-generation computer are heaviest for Japan, which started work in this direction a few years back. Japan has chosen the PROLOG (Programming in Logic) language as its operating software and plans to have the final machine talk with human beings, see and deliver pictures, and hear normal, natural language.

There are already some applications, such as voice recognition, in use today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation computing is to develop devices that respond to natural-language input and are capable of learning and self-organization.

https://www.youtube.com/watch?v=CMdHDHEuOUE
FIFTH GENERATION (HERE NOW)

Every gadget you see today is a fifth-generation computer: your laptops, mobile phones, tablets, video-game consoles, smart watches, digital cameras, and even some high-tech pens. Fifth-generation computers have huge memories, ranging into terabytes and beyond.

Check out Ex Machina, Chappie, Nautilus, and Google DeepMind.
https://www.youtube.com/watch?v=HHe8_PKOWHs
FIFTH GENERATION

DELL XPS Convertible PC

http://www.atarimagazines.com/startv2n5/prolog.html
WHAT ELSE? A SIXTH GENERATION?

http://www.youtube.com/watch?v=V68WRTV4OcA

http://www.youtube.com/watch?v=hO5HkH71XcA
Computer Generations
Computer Taxonomy

Supercomputers
Mainframes
Minicomputers
Microcomputers

Size, speed, performance, capacity, and price all increase as you move up this list.
Supercomputers

Largest and fastest; occupy a room or rooms of space.
Primary usage: large scientific or military calculations.
For example:
Simulation (Boeing, Ford, etc.)
Very large databases (NASA)
Mainframe Computers

Smaller and slower than supercomputers.
Usually take up a portion of a room.
Allow remote access.
Minicomputers

For general business applications, and for department-level operations in large enterprises.
In recent years, the minicomputer has evolved into the "server" and is part of a network.
Microcomputers
Less powerful than minicomputers & inexpensive
Workstations
Personal Computers
Network Computers:
Laptop
Personal Digital Assistant (PDA)
Wearable Computers
Embedded Computers
Amdahl's Law

Proposed by Gene Amdahl [AMDA67].
Deals with the potential speedup of a program using multiple processors compared to a single processor.
Illustrates the problems facing industry in the development of multi-core machines: software must be adapted to a highly parallel execution environment to exploit the power of parallel processing.
Can be generalized to evaluate and design technical improvements in a computer system.
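Stated explicitly (the standard formulation of the law): if a fraction f of a program's execution can be parallelized across N processors while the rest stays serial, the speedup is 1 / ((1 - f) + f / N). A minimal sketch:

```python
# Amdahl's Law: overall speedup when a fraction f of the work is
# parallelizable across n processors and the remainder stays serial.

def amdahl_speedup(f, n):
    return 1.0 / ((1.0 - f) + f / n)

# Even 95%-parallel code saturates near 1 / (1 - f) = 20x:
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))
```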
Amdahl's Law
Little's Law

A fundamental and simple relation with broad applications.
Can be applied to almost any system that is statistically in steady state, and in which there is no leakage.
Queuing system: if the server is idle, an item is served immediately; otherwise an arriving item joins a queue.
There can be a single queue for a single server or for multiple servers, or multiple queues with one for each of multiple servers.
The average number of items in a queuing system equals the average rate at which items arrive multiplied by the time that an item spends in the system.
The relationship requires very few assumptions.
Because of its simplicity and generality it is extremely useful.
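In symbols this is L = λW: the average number of items L in the system equals the average arrival rate λ times the average time W an item spends in the system. A quick illustrative check (the request-rate numbers are made up):

```python
# Little's Law: L = arrival_rate * time_in_system.

arrival_rate = 50.0     # items arriving per second (lambda)
time_in_system = 0.2    # average seconds each item spends in the system (W)

L = arrival_rate * time_in_system
print(L)                # 10.0 items in the system, on average
```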
Summary: First Generation

Advantages:
Vacuum tubes were the only electronic components available during those days.
Vacuum tube technology made possible the advent of electronic digital computers.
These computers were the fastest calculating devices of their time. They could perform computations in milliseconds.

Disadvantages:
Too bulky in size.
Unreliable.
The thousands of vacuum tubes used emitted large amounts of heat and burnt out frequently.
Air conditioning required.
Prone to frequent hardware failures.
Constant maintenance required.
Not portable.
Manual assembly of individual components into a functioning unit required.
Commercial production was difficult and costly.
Limited commercial use.
Summary: Second Generation

Advantages:
Smaller in size as compared to first-generation computers.
More reliable.
Less heat generated.
These computers were able to reduce computational times from milliseconds to microseconds.
Less prone to hardware failures.
Better portability.
Wider commercial use.

Disadvantages:
Air conditioning required.
Frequent maintenance required.
Manual assembly of individual components into a functioning unit was required.
Commercial production was difficult and costly.
Summary: Third Generation

Advantages:
Smaller in size as compared to previous-generation computers.
Even more reliable than second-generation computers.
Even lower heat generated than second-generation computers.
These computers were able to reduce computational times from microseconds to nanoseconds.
Maintenance cost is low because hardware failures are rare.
Easily portable.
Totally general purpose. Widely used for various commercial applications all over the world.
Less power requirement than previous-generation computers.
Manual assembly of individual components into a functioning unit not required, so human labour and cost involved at the assembly stage were reduced drastically.
Commercial production was easier and cheaper.

Disadvantages:
Air conditioning required in many cases.
Highly sophisticated technology required for the manufacture of IC chips.
Summary: Fourth Generation

Advantages:
Smallest in size because of high component density.
Very reliable.
Heat generated is negligible.
No air conditioning required in most cases.
Much faster in computation than previous generations.
Hardware failure is negligible, and hence minimal maintenance is required.
Easily portable because of their small size.
Totally general purpose.
Minimal labour and cost involved at the assembly stage.
Cheapest among all generations.

Disadvantages:
Highly sophisticated technology required for the manufacture of LSI chips.
