
Prepared by Dasun Nilanjana

Selected Topics
 Computer as a System
Hardware
○ Input, Output, Processing, Storage, Communication
Software
○ System SW
 OS, Language Converters, Utilities
○ Application SW
 Application Packages
 Ready-made, Tailor-made
○ Open Source and Proprietary (Closed Source)
Firmware
Liveware
ICT in Communication
 Teleworking (Telecommuting)
 Presentation
 Conferencing (Audio & Video)
ICT in Business
 Stock Control
 Credit Control
 Marketing
 Advertising
Product, Business, Service
 Personnel Management
 Finance and ICT
Computer Hardware
 Input Devices
 Keyboard
 Pointing Devices
 Scanner
 Web Cam
 Digital Camera
 Output Devices
 Impact and Non-Impact printers
○ Dot Matrix, Bubble Jet, Inkjet, Laser
 VDU
 Speakers
Memory
 Cache memory
 Primary Storage
 Secondary Storage
Modes Of Data Input
 Keyboard entry - for non-spatial attributes and, occasionally, locational data
 Manual locating devices - the user directly manipulates a device whose location is recognized by the computer
 e.g. digitizing
 Automated devices - automatically extract spatial data from maps and photographs
 e.g. scanning
 Conversion - directly from other digital sources
 Voice input - has been tried, particularly for controlling digitizer operations, but with little success: the machine needs to be recalibrated for each operator, after coffee breaks, etc.
Issues created by ICT (40 x 5 min)
 Social
 Economic
 Environmental
 Ethical
 Legal
 • Privacy and Piracy
 • Copyright
 • Plagiarism - presenting someone else's work as one's own
 • Licensed software
What is Von Neumann Architecture?

Most computers use the stored-program concept
designed by the Hungarian mathematician John von
Neumann.
 In it, you store programs and data in a slow-to-
access storage medium (such as a hard disk) and
work on them in a fast-access, volatile storage
medium (RAM).
 A stored-program digital computer is one that keeps
its programmed instructions, as well as its data, in
read-write, random-access memory (RAM).
 The terms "von Neumann architecture" and "stored-
program computer" are generally used
interchangeably.
Cont.
 This concept, however, has
an attendant bottleneck: it
was designed to process
instructions one after the
other instead of using faster
parallel processing.

 A von Neumann Architecture computer has five parts (see the sketch after this list):
 an arithmetic-logic unit,
 a control unit,
 a memory,
 some form of input/output and
 a bus that provides a data path between these
parts.
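
As a rough sketch of how these five parts fit together, the Python below models each part as a tiny class. The class names and interfaces are my own simplification for illustration, not anything defined on these slides.

    # Minimal sketch of the five von Neumann parts (hypothetical, simplified design).

    class Memory:
        """Single shared store for both instructions and data."""
        def __init__(self, size=256):
            self.cells = [0] * size
        def read(self, address):
            return self.cells[address]
        def write(self, address, value):
            self.cells[address] = value

    class ALU:
        """Arithmetic-logic unit: does the actual calculation."""
        def add(self, a, b):
            return a + b

    class Bus:
        """Data path between the parts; here it just forwards memory reads/writes."""
        def __init__(self, memory):
            self.memory = memory
        def load(self, address):
            return self.memory.read(address)
        def store(self, address, value):
            self.memory.write(address, value)

    class ControlUnit:
        """Decodes an operation and commands the other parts to carry it out."""
        def __init__(self, alu, bus):
            self.alu, self.bus = alu, bus
        def execute_add(self, addr_a, addr_b, addr_result):
            result = self.alu.add(self.bus.load(addr_a), self.bus.load(addr_b))
            self.bus.store(addr_result, result)

    # Input/output is reduced to a plain print() in this sketch.
    memory = Memory()
    memory.write(10, 2)
    memory.write(11, 3)
    ControlUnit(ALU(), Bus(memory)).execute_add(10, 11, 12)
    print(memory.read(12))   # -> 5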
Stored-program concept
 Storage of instructions in computer memory to enable it to
perform a variety of tasks in sequence or intermittently.
 The fundamental computer architecture in which the
computer acts upon (executes) internally stored
instructions.
 The idea was introduced in the late 1940s by John von
Neumann, who proposed that a program be electronically
stored in binary-number format in a memory device so that
instructions could be modified by the computer as
determined by intermediate computational results.
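
A tiny sketch of the idea follows. The (opcode, operand) encoding is invented purely for illustration: because the program and its data share one memory, an instruction can be rewritten like any other value.

    # Both instructions and data sit in the same list ("memory").
    memory = [
        ("LOAD", 4),    # 0: load the value stored at address 4
        ("ADD", 5),     # 1: add the value stored at address 5
        ("STORE", 4),   # 2: write the result back to address 4
        ("HALT", 0),    # 3: stop
        7,              # 4: data
        3,              # 5: data
    ]

    # Instructions are just values in memory, so they can be modified like data,
    # e.g. turning the ADD into a SUB before (or even while) the program runs:
    memory[1] = ("SUB", 5)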
Fetch-execute cycle
 This is the sequence of steps that happens when the CPU (Central Processing
Unit) fetches an instruction from the memory. It involves several registers inside
the CPU - specifically, the Program Counter. Here is a summary of the registers needed (a code sketch follows the list):
 The program counter is the register that holds the memory address of the current instruction
being executed. When the next instruction is to be fetched, this register is incremented by the
appropriate number of bytes.
 Some CPUs contain a memory address register, which holds the address of the byte being loaded. Other CPUs don't have this register; they simply increase the program counter and use it to fetch the next byte(s) from memory.
 CPUs contain general registers. In the example below, I shall use the 6502's registers. The 6502
processor (used in the BBC Micro computer) contains three general registers - the Accumulator
(A) and two index registers (X and Y).
 In the CPU, there is a status register (also called the condition codes register) which indicates
various things about the last calculation carried out. For instance, there is a zero flag (which is set
to true if the last calculation produced a zero), a carry flag (true if the last calculation produced a
carry out, i.e. an overflow), etc.
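
As a minimal sketch, the register set described above could be modelled like this in Python (loosely based on the 6502; the field names are my own):

    from dataclasses import dataclass

    @dataclass
    class Registers:
        pc: int = 0           # program counter: address of the next instruction
        a: int = 0            # accumulator
        x: int = 0            # index register X
        y: int = 0            # index register Y
        # two of the status-register flags mentioned above
        zero: bool = False    # set when the last result was zero
        carry: bool = False   # set when the last result produced a carry out

    regs = Registers()
    regs.a = 255
    result = regs.a + 1           # an 8-bit addition that carries
    regs.carry = result > 255
    regs.a = result & 0xFF
    regs.zero = (regs.a == 0)
    print(regs)                   # Registers(pc=0, a=0, ..., zero=True, carry=True)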
Fetch-execute cycle
 A von Neumann Architecture computer performs or emulates the following sequence of steps (sketched in code below):
 Fetch the next instruction from memory at the
address in the program counter.
 Add 1 to the program counter.
 Decode the instruction using the control unit. The
control unit commands the rest of the computer
to perform some operation. The instruction may
change the address in the program counter,
permitting repetitive operations. The instruction
may also change the program counter only if
some arithmetic condition is true, giving the
effect of a decision, which can be calculated to
any degree of complexity by the preceding
arithmetic and logic.
 Go back to step 1.
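
The steps above can be sketched as a small interpreter loop. The (opcode, operand) instruction format and the one-accumulator design below are my own invention for illustration, not any real machine's encoding.

    def run(memory):
        """Toy fetch-decode-execute loop for a hypothetical one-accumulator machine."""
        pc = 0          # program counter
        acc = 0         # accumulator
        while True:
            opcode, operand = memory[pc]   # 1. fetch the instruction at the PC
            pc += 1                        # 2. add 1 to the program counter
            # 3. decode and execute (this if/elif chain plays the control unit)
            if opcode == "LOAD":
                acc = memory[operand]
            elif opcode == "ADD":
                acc += memory[operand]
            elif opcode == "STORE":
                memory[operand] = acc
            elif opcode == "JUMP_IF_ZERO":     # may change the PC, but only if
                if acc == 0:                   # an arithmetic condition holds
                    pc = operand
            elif opcode == "HALT":
                return acc
            # 4. go back to step 1 (the while loop does this)

    program = [
        ("LOAD", 5),   # acc = memory[5]
        ("ADD", 6),    # acc = acc + memory[6]
        ("STORE", 5),  # memory[5] = acc
        ("HALT", 0),
        None,          # padding
        2,             # address 5: data
        3,             # address 6: data
    ]
    print(run(program))   # -> 5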
Fetch-execute cycle
 A more complete form of the Instruction Fetch-Execute Cycle can be broken down into the following steps:
1. Fetch Cycle
2. Decode Cycle
3. Execute Cycle
4. Interrupt Cycle
Cont.
 Very few computers have a pure von Neumann architecture.
Most computers add another step to check for interrupts,
electronic events that could occur at any time. An interrupt
resembles the ring of a telephone, calling a person away from
some lengthy task. Interrupts let a computer do other things
while it waits for events.
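
Continuing the toy interpreter sketched earlier, the extra interrupt step can be added at the top of the loop. The queue-based mechanism here is my own simplification of how real hardware signals interrupts.

    from collections import deque

    pending_interrupts = deque()   # devices would append events here

    def run_with_interrupts(memory, handle_interrupt):
        pc, acc = 0, 0
        while True:
            # extra step: before fetching, check whether an interrupt is pending
            if pending_interrupts:
                handle_interrupt(pending_interrupts.popleft())
            opcode, operand = memory[pc]
            pc += 1
            if opcode == "HALT":
                return acc
            # ... remaining opcodes handled exactly as in the earlier sketch ...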
 Von Neumann computers spend a lot of time moving data to and
from the memory, and this slows the computer. So, engineers
often separate the bus into two or more buses, usually one for instructions and the other for data.
Instruction Set Architecture (ISA)
 The Instruction Set Architecture (ISA) is the part of the
processor that is visible to the programmer or compiler writer.
 The ISA serves as the boundary between software and
hardware.
 The ISA of a processor can be described using 5 categories (illustrated in the sketch after this list):
 Operand Storage in the CPU - Where are the operands kept other than
in memory?
 Number of explicitly named operands - How many operands are named in a typical instruction?
 Operand location - Can any ALU instruction operand be located in
memory? Or must all operands be kept internally in the CPU?
 Operations - What operations are provided in the ISA?
 Type and size of operands - What is the type and size of each operand
and how is it specified?
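
As a rough illustration, the five categories could be filled in for a single hypothetical instruction, say a three-operand register ADD on an imaginary load/store machine (none of these values come from the slides):

    # Hypothetical description of one instruction in terms of the 5 ISA categories.
    add_instruction = {
        "operand_storage":  "general-purpose registers inside the CPU",
        "explicit_operands": 3,                       # e.g. ADD r1, r2, r3
        "operand_location": "registers only; ALU operands never come from memory",
        "operation":        "integer addition",
        "operand_type_and_size": "32-bit signed integers",
    }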
Instruction Set Architecture (ISA)

 The 3 most common types of ISAs are listed below; a short code sketch comparing them follows the list:
Stack - The operands are implicitly on top of the stack.
Accumulator - One operand is implicitly the accumulator.
General Purpose Register (GPR) - All operands are explicitly mentioned; they are either registers or memory locations.
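
To make the three styles concrete, the sketch below shows the same computation, c = a + b, under each operand-storage model. The mnemonics in the comments are invented for illustration, not taken from any real instruction set.

    # Stack machine: operands live implicitly on top of a stack.
    def stack_add(memory):
        stack = []
        stack.append(memory["a"])                   # PUSH a
        stack.append(memory["b"])                   # PUSH b
        stack.append(stack.pop() + stack.pop())     # ADD (no operands named)
        memory["c"] = stack.pop()                   # POP c

    # Accumulator machine: one operand is implicitly the accumulator.
    def accumulator_add(memory):
        acc = memory["a"]                           # LOAD a
        acc = acc + memory["b"]                     # ADD b
        memory["c"] = acc                           # STORE c

    # GPR machine: every operand is named explicitly (register or memory).
    def gpr_add(memory):
        r1 = memory["a"]                            # LOAD r1, a
        r2 = memory["b"]                            # LOAD r2, b
        r3 = r1 + r2                                # ADD r3, r1, r2
        memory["c"] = r3                            # STORE c, r3

    mem = {"a": 2, "b": 3, "c": 0}
    for add in (stack_add, accumulator_add, gpr_add):
        add(mem)
        print(mem["c"])   # -> 5 each time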
Cont.
 Stack
Advantages: Simple model of expression evaluation (reverse Polish). Short instructions.
Disadvantages: A stack can't be randomly accessed. This makes it hard to generate efficient code. The stack itself is accessed on every operation and becomes a bottleneck.

 Accumulator
Advantages: Short instructions.
Disadvantages: The accumulator is only temporary storage so memory traffic is the highest for
this approach.

 GPR
Advantages: Makes code generation easy. Data can be stored for long periods in registers.
Disadvantages: All operands must be named leading to longer instructions.

 Earlier CPUs were of the first 2 types, but in the last 15 years all CPUs made have been GPR processors. The 2 major reasons are that registers are faster than memory, and the more data that can be kept internally in the CPU, the faster the program will run. The other reason is that registers are easier for a compiler to use.
CISC
 Pronounced "sisk"; stands for Complex Instruction Set Computer. Most PCs use CPUs based on this architecture; for instance, Intel and AMD CPUs are based on CISC architectures.

 Typically CISC chips have a large number of different and complex instructions. The philosophy behind this is that hardware is always faster than software, so one should make a powerful instruction set, which provides programmers with assembly instructions that can do a lot with short programs.

 In general, CISC chips are relatively slow per instruction (compared to RISC chips), but they need fewer instructions than RISC chips to do the same work.
Reduced Instruction Set Computer or RISC (pronounced risk)

 RISC chips evolved around the mid-1980s as a reaction to CISC chips. The philosophy behind them is that almost no one uses the complex assembly language instructions provided by CISC; people mostly use compilers, which rarely generate complex instructions. Apple, for instance, uses RISC chips.
 Therefore, fewer, simpler and faster instructions would be better than the
large, complex and slower CISC instructions. However, more instructions
are needed to accomplish a task.
 Another advantage of RISC is that, in theory, because of the simpler instructions, RISC chips require fewer transistors, which makes
them easier to design and cheaper to produce.
 Finally, it's easier to write powerful optimized compilers, since fewer
instructions exist.
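
The classic illustration of the difference is multiplying two values held in memory. The sketch below simulates both styles in Python; the mnemonics in the comments are hypothetical, not drawn from any real CISC or RISC instruction set.

    memory = {2: 6, 3: 7}            # the two factors live at addresses 2 and 3
    registers = {"r1": 0, "r2": 0}

    # CISC style: one complex memory-to-memory instruction does all the work,
    # e.g. something like  MULT 2, 3
    def cisc_mult(addr_a, addr_b):
        memory[addr_a] = memory[addr_a] * memory[addr_b]

    # RISC style: the same task needs several simple register-to-register
    # instructions, e.g.  LOAD r1, 2 / LOAD r2, 3 / MUL r1, r1, r2 / STORE 2, r1
    def risc_mult(addr_a, addr_b):
        registers["r1"] = memory[addr_a]                       # LOAD
        registers["r2"] = memory[addr_b]                       # LOAD
        registers["r1"] = registers["r1"] * registers["r2"]    # MUL (registers only)
        memory[addr_a] = registers["r1"]                       # STORE

    cisc_mult(2, 3)
    print(memory[2])   # -> 42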
CISC Vs. RISC
CISC                                              RISC
Emphasis on hardware                              Emphasis on software
Includes multi-clock complex instructions         Single-clock, reduced instruction only
Memory-to-memory: "LOAD" and "STORE"              Register-to-register: "LOAD" and "STORE"
incorporated in instructions                      are independent instructions
Small code sizes, high cycles per second          Low cycles per second, large code sizes
Transistors used for storing                      Spends more transistors
complex instructions                              on memory registers
Conclusion – RISC vs. CISC?
 CISC
Effectively realizes one particular High-Level Language Computer System in hardware (HW) - recurring HW development costs when a change is needed

 RISC
Allows effective realization of any High-Level Language Computer System in software (SW) - recurring SW development costs when a change is needed
