
Welcome

Introduction to RISC
Since the development of the stored-program computer around 1950, there
have been few major innovations in the areas of computer organization and
architecture. Some of the major developments are:
• The Family Concept (1964)
• Microprogrammed Control Unit (1951)
• Cache Memory (1968)
• Pipelining
• Multiple Processors

Wednesday, June 19, 2024 2


Introduction to RISC
Computer designers intended to reduce this gap (the semantic gap between
machine instructions and high-level programming languages) by including
large instruction sets, more addressing modes, and hardware
implementations of various HLL statements.
As a result, the instruction set becomes complex. Such complex
instruction sets are intended to:
• Ease the task of the compiler writer.
• Improve execution efficiency, because complex sequences of operations
can be implemented in microcode.
• Provide support for even more complex and sophisticated HLLs.



Introduction to Complex Instruction Set Computer
(CISC)

The instruction execution characteristics involve the following aspects
of computation:
• Operations performed
• Operands used
• Execution sequencing



Key Elements of Reduced Instruction Set Computer
(RISC)

• A large number of general-purpose registers, or the use
of compiler technology to optimize register usage.
• A limited and simple instruction set.
• An emphasis on optimizing the instruction pipeline.



Comparison of RISC & CISC



Characteristics of Reduced Instruction Set Architecture (RISC)

Certain common characteristics are:
1. One instruction per cycle.
2. Register-to-register operations.
3. Simple addressing modes.
4. Simple instruction formats.



Design Issues of RISC

1. The use of a large register file:
Two basic approaches are possible, one based on software and the other
on hardware:
• The software approach
• The hardware approach



Design Issues of RISC (Cont’d…..)
2. Register Window:
The use of a large set of registers should decrease the need to
access memory. The design task is to organize the registers in such a
way that this goal is realized.
The variables used in a program can be categorized as follows:
• Global variables
• Local variables
• Passed parameters
• Returned variable



Design Issues of RISC (Cont’d…..)

2. Register Window: (Cont’d….)

The register window is divided into three fixed-size areas:
• Parameter registers hold parameters passed down from the procedure that
called the current procedure, and hold results to be passed back up.
• Local registers are used for local variables.
• Temporary registers are used to exchange parameters and results with the
next lower level (the procedure called by the current procedure).
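The overlap between adjacent windows can be sketched as follows; the area sizes and indexing scheme here are illustrative assumptions, not taken from any particular processor:

```python
# Hypothetical sketch of overlapping register windows. The area sizes are
# made-up examples, not the values of any specific architecture.
PARAMS, LOCALS, TEMPS = 6, 10, 6   # registers per fixed-size area
WINDOW = PARAMS + LOCALS + TEMPS   # registers visible to one procedure

def window_base(depth):
    # Each call advances the window by (LOCALS + TEMPS), so the caller's
    # temporary registers overlap the callee's parameter registers.
    return depth * (LOCALS + TEMPS)

# Caller at depth 0: its temporaries start exactly where the callee's
# parameter registers (depth 1) start, so parameters pass without copying.
caller_temps_start = window_base(0) + PARAMS + LOCALS
callee_params_start = window_base(1)
assert caller_temps_start == callee_params_start  # windows overlap
```

The overlap is what lets a procedure call pass parameters by simply renaming registers rather than moving data.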



Design Issues of RISC (Cont’d…..)

2. Register Window: (Cont’d….)

Overlapping Register Windows



Design Issues of RISC (Cont’d…..)
3. Compiler based Register Optimization:
To optimize the use of registers, the approach taken is as follows:
• Each program quantity that is a candidate for residing in a register is assigned to a
symbolic or virtual register.
• The compiler then maps the unlimited number of symbolic registers into a fixed
number of real registers.
• Symbolic registers whose usage does not overlap can share the same real register.
• If, in a particular portion of the program, there are more quantities to deal
with than real registers, then some of the quantities are assigned to memory
locations.
Design Issues of RISC (Cont’d…..)
3. Compiler based Register Optimization: (Cont’d….)
The technique most commonly used in RISC compilers is known as
graph colouring.
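A minimal sketch of the graph-colouring idea, assuming a hypothetical interference graph and three real registers (the greedy colouring below is a simplification of what production allocators do):

```python
# Register allocation by graph colouring (greedy sketch). The interference
# graph and register count are made-up examples for illustration only.
interference = {               # edges connect variables live at the same time
    'a': {'b', 'c'},
    'b': {'a', 'c'},
    'c': {'a', 'b', 'd'},
    'd': {'c'},
}
NUM_REGS = 3                   # number of real registers available

assignment = {}
for var in interference:       # colour each symbolic register in turn
    used = {assignment[n] for n in interference[var] if n in assignment}
    free = [r for r in range(NUM_REGS) if r not in used]
    # If no register (colour) is free, the variable is spilled to memory.
    assignment[var] = free[0] if free else 'spill'

print(assignment)
```

Note how 'a' and 'd' can share a real register because their usage does not overlap, exactly as the slide describes.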



Large Register file versus cache



Introduction to pipelining
• Pipelining is a technique of decomposing a sequential process into
sub-operations, with each sub-process being executed in a special
dedicated segment that operates concurrently with all other
segments.
Conventional Sequential Execution
Cycle: 1  2  3  4  5  6  7  8  9  10 11 12
I1:    F1 D1 E1 W1
I2:                F2 D2 E2 W2
I3:                            F3 D3 E3 W3

Pipelined Execution
Cycle: 1  2  3  4  5  6
I1:    F1 D1 E1 W1
I2:       F2 D2 E2 W2
I3:          F3 D3 E3 W3
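As a rough sketch (assuming one cycle per stage and no stalls), the cycle counts in the two execution styles above can be computed as:

```python
def total_cycles(n, k, pipelined):
    # Sequential execution takes k cycles for each of the n instructions.
    # Pipelined execution takes k cycles for the first instruction, then
    # completes one instruction per cycle.
    return k + (n - 1) if pipelined else n * k

# Three instructions through a 4-stage pipeline (F, D, E, W):
print(total_cycles(3, 4, pipelined=False))  # 12 cycles sequentially
print(total_cycles(3, 4, pipelined=True))   # 6 cycles pipelined
```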



Introduction to pipelining: (Cont’d…..)



Four Stage Pipeline
F: Fetch, read the instruction from memory
D: Decode, decode the instruction and fetch the source operand(s)
E: Operate, perform the operation
W: Write, store the result in the destination location.

Step: 1  2  3  4  5  6  7  8  9
I1:   F1 D1 E1 W1
I2:      F2 D2 E2 W2
I3:         F3 D3 E3 W3
I4:            F4 D4 E4 W4
I5:               F5 D5 E5 W5
I6:                  F6 D6 E6 W6
Timing diagram for a 4-stage instruction pipeline



Instruction Execution in a Four Stage
Pipeline
[Flow chart: fetch instruction from memory → decode instruction → if the
instruction is a branch, empty the pipe and update the PC; otherwise
execute the instruction → if an interrupt is pending, empty the pipe and
enter interrupt handling; otherwise write the result to the destination
and update the PC]



Six Stage Pipelining Instruction Execution

Decomposition of the instruction execution:
• Fetch Instruction (FI): Read the next expected instruction into a buffer.
• Decode Instruction (DI): Determine the opcode and the operand specifiers.
• Calculate Operands (CO): Calculate the effective address of each source
operand. This may involve displacement, register indirect, indirect, or
other forms of address calculation.



Six Stage Pipelining Instruction
Execution (Cont’d…..)
Decomposition of the instruction execution:
• Fetch Operands (FO): Fetch each operand from memory. Operands
in registers need not be fetched.
• Execute Instruction (EI): Perform the indicated operation and store
the result, if any, in the specified destination operand location.
• Write Operand (WO): Store the result in memory.



Instruction Execution in a Six Stage Pipeline

Flow-chart



Timing diagram of Six stage Pipelining

Timing Diagram for Instruction Pipeline Operation



Design Issues of Pipeline
The cycle time τ of an instruction pipeline is the time needed to advance
a set of instructions one stage through the pipeline. The cycle time can
be determined as:

τ = max[τi] + d = τm + d,  1 ≤ i ≤ k

Where,
τi = time delay of the circuitry in the ith stage of the pipeline
τm = maximum stage delay (delay through the stage with the largest delay)
k = number of stages in the instruction pipeline
d = time delay of a latch, needed to advance signals and data from one
stage to the next


Design Issues of Pipeline: (Cont’d….)
• Let Tk,n be the total time required for a pipeline with k stages to
execute n instructions. Then,

Tk,n = [k + (n - 1)]τ

• A total of k cycles are required to complete the execution of the first
instruction, and the remaining n - 1 instructions require one cycle each,
i.e. n - 1 cycles.
• Now consider a processor with equivalent functions but no pipeline, and
assume that its instruction cycle time is kτ, so that T1,n = nkτ.
Design Issues of Pipeline: (Cont’d….)
• The speedup factor for the instruction pipeline compared to execution
without the pipeline is defined as

Sk = T1,n / Tk,n = nkτ / [k + (n - 1)]τ = nk / (k + n - 1)

Problem: What is the speedup factor of a 6-stage pipeline system executing
a program of 20 instructions?
Hint: n = 20 and k = 6
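The hinted problem can be checked with a short sketch, using the speedup formula Sk = nk / (k + n - 1) as defined above:

```python
def speedup(n, k):
    # S_k = (n * k) / (k + n - 1): ratio of non-pipelined time (n*k*tau)
    # to pipelined time ((k + n - 1)*tau); tau cancels out.
    return (n * k) / (k + n - 1)

print(speedup(20, 6))  # 120 / 25 = 4.8
```

For large n the speedup approaches k, which is why deeper pipelines look attractive until hazards are taken into account.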



Pipeline Hazards
• A pipeline hazard occurs when the pipeline, or some portion of the
pipeline, must stall because conditions do not permit continued
execution.
• Such a pipeline stall is also referred to as a pipeline bubble.
• Types of hazards:
1. Resource Hazard
2. Data Hazard
3. Control Hazard



Resource Hazards
• Assume that main memory has a single port
and that all instruction fetches and data
reads and writes must be performed one at a
time.
• Assume that the source operand for
instruction I1 is in memory, rather than a
register.
• Therefore, the fetch instruction stage of the
pipeline must idle for one cycle before
beginning the instruction fetch for instruction
I3
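A toy model of this stall, assuming a four-stage pipeline and a single shared memory port (the stage names follow the slides; the exact cycle positions are illustrative):

```python
# Structural (resource) hazard sketch: one memory port is shared by
# instruction fetch (FI) and operand fetch (FO). When I1 fetches its
# source operand from memory in cycle 3, I3's instruction fetch cannot
# use the port and must idle for one cycle. List index i = cycle i + 1.
schedule = {
    'I1': ['FI', 'DI', 'FO', 'EI'],                       # FO uses the port
    'I2': ['',   'FI', 'DI', 'FO', 'EI'],
    'I3': ['',   '',   'stall', 'FI', 'DI', 'FO', 'EI'],  # idles one cycle
}

# Without the conflict, I3's FI would occur in cycle 3; here it slips to 4.
print(schedule['I3'].index('FI') + 1)  # 4
```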
Data Hazards

1. Read after write (RAW), or true dependency
2. Write after read (WAR), or anti-dependency
3. Write after write (WAW), or output dependency.
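A minimal sketch that classifies the three dependency types, assuming each instruction is represented as a hypothetical (destination, sources) pair:

```python
def classify_hazard(first, second):
    # Classify the dependency between two instructions executed in order
    # first-then-second; each is a (destination, set-of-sources) pair.
    dest1, srcs1 = first
    dest2, srcs2 = second
    if dest1 in srcs2:
        return 'RAW'   # second reads what first writes (true dependency)
    if dest2 in srcs1:
        return 'WAR'   # second writes what first reads (anti-dependency)
    if dest1 == dest2:
        return 'WAW'   # both write the same location (output dependency)
    return 'none'

# ADD r3, r1, r2  followed by  SUB r4, r3, r1  -> RAW hazard on r3
print(classify_hazard(('r3', {'r1', 'r2'}), ('r4', {'r3', 'r1'})))
```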



Control Hazards

Dealing with Branches


1. Multiple streams
2. Pre-fetch branch target
3. Loop buffer
4. Branch prediction
5. Delayed branch



Control Hazard due to branching

Due to a conditional branch, a branch penalty of 4 cycles is incurred



Introduction to Parallel processing
Example of parallel processing



Introduction to Parallel processing
(Cont’d…..)
• Parallel processing may occur in the instruction stream, in the data
stream, or in both. Flynn's classification divides computers into four
major groups as follows:
1. Single instruction stream, single data stream (SISD)
2. Single instruction stream, multiple data stream (SIMD)
3. Multiple instruction stream, single data stream (MISD)
4. Multiple instruction stream, multiple data stream (MIMD)
Introduction to Parallel processing
(Cont’d….)
• Single Instruction stream, Single Data stream (SISD): A computer architecture in which a single uni-core
processor executes a single instruction stream, to operate on data stored in a single memory.

• Single Instruction, Multiple Data (SIMD) system: A single machine instruction controls the simultaneous
execution of a number of processing elements on a lockstep basis. Each processing element has an associated
data memory, so that each instruction is executed on a different set of data by the different processors.
Vector and array processors fall into this category.

• Multiple Instruction, Single Data (MISD) system: A sequence of data is transmitted to a set of processors,
each of which executes a different instruction sequence. This structure has never been implemented.

• Multiple Instruction, Multiple Data (MIMD) system: A set of processors simultaneously execute different
instruction sequences on different data sets. SMPs, clusters, and NUMA systems fit into this category.
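The SIMD idea of one instruction applied in lockstep to many data elements can be sketched in plain Python (real SIMD hardware performs the element operations in parallel; this loop is only an illustration):

```python
# SIMD-style data parallelism sketch: a single operation ("add") applied
# element-wise across multiple data items. On SIMD hardware each element
# would be handled by a separate processing element in lockstep.
a = [1, 2, 3, 4]
b = [10, 20, 30, 40]

# One "instruction", multiple data elements:
result = [x + y for x, y in zip(a, b)]
print(result)  # [11, 22, 33, 44]
```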



Attached Array Processor
• An attached array processor is an auxiliary processor attached to a
general-purpose computer. It is intended to improve the performance
of the host computer in specific tasks.

Attached array processor with host computer


SIMD array processor
• A Single Instruction, Multiple Data (SIMD) array processor is a computer
with multiple processing units operating in parallel.

SIMD array processor organization
