
Computer Architecture Unit 3

3. Memory addressing is the logical structure of a computer's random-access
memory (RAM). Refer Section 3.3.
4. A distinct addressing mode field is required in the instruction format for
signal processing. Refer Section 3.4.
5. The program is executed by going through a cycle for each instruction.
Each instruction cycle is subdivided into a sequence of sub-cycles or
phases. Refer Section 3.5.
6. The conditions for altering the content of the program counter are
specified by program control instructions, and the conditions for data-
processing operations are specified by data transfer and manipulation
instructions. Refer Section 3.6.
7. After considerable research on efficient processor organisation and
VLSI integration at Stanford University, the MIPS architecture evolved.
Refer Section 3.7.


Unit 4 Pipelined Processor


Structure:
4.1 Introduction
Objectives
4.2 Pipelining
4.3 Types of Pipelining
4.4 Pipelining Hazards
4.5 Data Hazards
4.6 Control Hazards
4.7 Techniques to Handle Hazards
Minimising data hazard stalls by forwarding
Reducing pipeline branch penalties
4.8 Performance Improvement with Pipelines
4.9 Effects of Hazards on Performance
4.10 Summary
4.11 Glossary
4.12 Terminal Questions
4.13 Answers

4.1 Introduction
In the previous unit, you studied about the changing face of computing.
Also, you studied the meaning and tasks of a computer designer. We also
covered the technology trends and the quantitative principles in computer
design. In this unit, we will introduce you to pipeline processing, pipeline
hazards (structural, data and control hazards) and the techniques used to
handle them. We will also examine the performance improvement achieved with
pipelines and understand the effect of hazards on performance.
A parallel processing system can carry out concurrent data processing to
attain quicker execution time. For example, as an instruction is being
executed in the ALU, the subsequent instruction can be read from memory.
The system may have more than one ALU and be able to execute two or
more instructions simultaneously. Additionally, the system may have two or
more processors operating at the same time. The rationale of parallel
processing is to speed up computer processing and thereby increase its
overall throughput.

Parallel processing can be viewed from various levels of complexity. A
multifunctional organisation is usually associated with a complex control unit
to coordinate all the activities among the various components.
There are a variety of ways in which parallel processing can be done. We
consider parallel processing under the following main topics:
1. Pipeline processing
2. Vector processing
3. Array processing
Out of these, we will study pipeline processing in this unit.
Objectives:
After studying this unit, you should be able to:
 explain the concept of pipelining
 list the types of pipelining
 identify various pipeline hazards
 describe data hazards
 discuss control hazards
 analyse the techniques to handle hazards
 describe the performance improvement with pipelines
 explain the effect of hazards on performance

4.2 Pipelining
An implementation technique by which the execution of multiple instructions
can be overlapped is called pipelining. This technique splits the sequential
process of an instruction cycle into sub-processes that operate concurrently
in separate segments. As you know, computer processors can execute millions of
instructions per second. While one instruction is being processed, the next
one in line is processed at the same time, and so on. A pipeline permits
multiple instructions to be executed simultaneously. Without a pipeline, every
instruction has to wait for the previous one to complete. The main advantage
of pipelining is
that it increases the instruction throughput, which is defined as the number
of instructions completed per unit time. Thus, a program runs faster.
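To make this throughput gain concrete, here is a minimal sketch (not taken from the text) that compares cycle counts with and without pipelining, assuming an ideal k-stage pipeline in which every stage takes one clock cycle and no stalls occur: a non-pipelined processor needs n × k cycles for n instructions, while the pipelined one needs k + (n − 1) cycles.

```python
def cycles_without_pipeline(n_instructions: int, n_stages: int) -> int:
    # Each instruction passes through every stage before the next one starts.
    return n_instructions * n_stages


def cycles_with_pipeline(n_instructions: int, n_stages: int) -> int:
    # The first instruction fills the pipeline (n_stages cycles); after that,
    # one instruction completes in every cycle.
    return n_stages + (n_instructions - 1)


if __name__ == "__main__":
    n, k = 1000, 5       # hypothetical workload: 1000 instructions, 5 stages
    plain = cycles_without_pipeline(n, k)
    piped = cycles_with_pipeline(n, k)
    print(f"non-pipelined: {plain} cycles")        # 5000
    print(f"pipelined:     {piped} cycles")        # 1004
    print(f"speedup:       {plain / piped:.2f}x")  # about 4.98
```

For n = 1000 and k = 5 the speedup is roughly 4.98, and it approaches the stage count k as n grows, which is why pipelining is described here as increasing instruction throughput rather than shortening any single instruction.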
In pipelining, several computations can run in distinct segments at the same
time. A register is associated with each segment in the pipeline to provide
isolation between segments, so that each segment can operate on distinct data
simultaneously. Pipelining is also called virtual parallelism, as it provides
a sense of parallelism only at the instruction level. In pipelining, the CPU
executes each instruction in a series of stages (the sketch after the list
below shows how these stages overlap across instructions):
1. Instruction Fetching (IF)
2. Instruction Decoding (ID)
3. Instruction Execution (EX)
4. Memory Access (MEM)
5. Register Write Back (WB)
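As an illustration of how these five stages overlap (a sketch under the same ideal, no-stall assumption, not part of the original text), the following Python snippet prints a space-time diagram in which each instruction enters IF one cycle after its predecessor, so up to five instructions are in flight at once.

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]


def space_time_diagram(n_instructions: int) -> None:
    """Print which stage each instruction occupies in every clock cycle."""
    total_cycles = len(STAGES) + n_instructions - 1
    print("cycle:      " + " ".join(f"{c:>4}" for c in range(1, total_cycles + 1)))
    for i in range(n_instructions):
        row = []
        for cycle in range(1, total_cycles + 1):
            stage = cycle - 1 - i    # instruction i enters IF at cycle i + 1
            row.append(f"{STAGES[stage]:>4}" if 0 <= stage < len(STAGES) else "   .")
        print(f"instr {i + 1:>2}:   " + " ".join(row))


if __name__ == "__main__":
    space_time_diagram(4)   # four instructions overlapping in a five-stage pipeline
```

Reading the diagram column by column shows that, once the pipeline is full, every clock cycle has a different instruction in each of the five stages.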
The CPU, while executing a sequence of instructions, can pipeline these
common steps. In a non-pipelined CPU, however, each instruction must complete
all of the steps above before the next one begins. In pipelined processors, it
is desirable to determine the outcome of a conditional branch as early as
possible in the execution sequence. To understand pipelining, let us discuss
how an instruction flows through the data path in a five-segment pipeline.
Consider a pipeline with five processing units, where each unit is assumed to
take one clock cycle to finish its work, as described in the following steps
(a simplified sketch of this flow is given after the list):
a) Instruction fetch cycle (IF): In the first step, the instruction whose
address is held in the Program Counter (PC) is fetched from memory into the
Instruction Register (IR), and the PC is incremented to point to the next
instruction.
b) Instruction decode/register fetch cycle (ID): The fetched instruction is
decoded and the source registers are read into two temporary registers.
Decoding and reading of the registers are done in parallel.
c) Instruction execution cycle (EX): In this cycle, the ALU operates on the
operands prepared in the previous cycle, computing an arithmetic or logic
result, an effective memory address, or a branch condition, and places the
result in the ALU output register.
d) Memory access completion cycle (MEM): In this cycle, the address of the
operand calculated during the prior cycle is used to access memory. In the
case of load and store instructions, data either returns from memory and is
placed in the Load Memory Data (LMD) register or is written into memory. In
the case of a branch instruction, if the branch is taken, the PC is replaced
with the branch destination address held in the ALU output register.
e) Register write back cycle (WB): During this stage, both single cycle
and two cycle instructions write their results into the register file.
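The following is a highly simplified, single-instruction sketch of the register transfers described in steps a) to e). The instruction encoding, register file size and memory contents are made-up assumptions used only to illustrate how values flow through the IR, the temporaries A and B, the ALU output register and LMD; it models one instruction at a time, with no overlap and no hazards.

```python
class SimpleDatapath:
    def __init__(self):
        self.pc = 0
        self.regs = [0] * 8                          # small register file
        self.instr_mem = {0: ("LOAD", 1, 2, 100),    # R1 <- MEM[R2 + 100]
                          4: ("ADD", 3, 1, 1)}       # R3 <- R1 + R1
        self.data_mem = {100: 42}

    def run_one(self):
        # IF: fetch the instruction addressed by the PC into IR; advance the PC.
        ir = self.instr_mem[self.pc]
        self.pc += 4
        # ID: decode and read the source operands into temporaries A and B.
        op, rd, rs1, rs2_or_imm = ir
        a = self.regs[rs1]
        b = self.regs[rs2_or_imm] if op == "ADD" else rs2_or_imm
        # EX: the ALU computes a result or an effective address.
        alu_output = a + b
        # MEM: loads read data memory into LMD; other instructions skip this.
        lmd = self.data_mem[alu_output] if op == "LOAD" else None
        # WB: write the result back into the register file.
        self.regs[rd] = lmd if op == "LOAD" else alu_output
        return op, rd, self.regs[rd]


dp = SimpleDatapath()
print(dp.run_one())   # ('LOAD', 1, 42) -> R1 receives 42 from memory
print(dp.run_one())   # ('ADD', 3, 84)  -> R3 = R1 + R1 = 84
```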

These steps of a five-segment pipelined processor are shown in figure 4.1.

Figure 4.1: A Five-Segment Pipelined Processor

The segments are isolated by registers. The simplest way to visualise a
segment is to think of it as consisting of an input register and a
combinational circuit that processes the data stored in the register. See
table 4.1 for examples of the sub-operations performed in each segment of the
pipeline.

Table 4.1: Sub-operations Performed in Each Segment of the Pipeline

R1 An, R2 Bn, R3 Cn, R 4  D n Input An, Bn, Cn Dn

R5 An * Bn, R6 Cn * Dn, Multiply

Add and store in


R7 R5 + R6
Register R7
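As a rough illustration of table 4.1, the sketch below simulates the three segments with ordinary Python variables standing in for the registers R1 to R7; the operand streams An, Bn, Cn and Dn are arbitrary sample values. Each loop iteration represents one clock pulse, and the segments are updated in reverse order so that every segment works on the values latched by its predecessor on the previous pulse.

```python
def arithmetic_pipeline(a, b, c, d):
    r1 = r2 = r3 = r4 = None          # segment 1 registers (None = empty)
    r5 = r6 = None                    # segment 2 registers
    results = []                      # values successively stored in R7

    n = len(a)
    for cycle in range(n + 2):        # n inputs plus 2 cycles to drain the pipe
        # Segment 3: R7 <- R5 + R6
        if r5 is not None:
            results.append(r5 + r6)
        # Segment 2: R5 <- R1 * R2 (= An * Bn), R6 <- R3 * R4 (= Cn * Dn)
        r5, r6 = (r1 * r2, r3 * r4) if r1 is not None else (None, None)
        # Segment 1: R1 <- An, R2 <- Bn, R3 <- Cn, R4 <- Dn
        if cycle < n:
            r1, r2, r3, r4 = a[cycle], b[cycle], c[cycle], d[cycle]
        else:
            r1 = r2 = r3 = r4 = None
    return results


# Arbitrary sample operand streams (illustrative values only).
print(arithmetic_pipeline([1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]))
# -> [74, 98, 126], i.e. 1*4 + 7*10, 2*5 + 8*11, 3*6 + 9*12
```

Once the pipeline is full, one result of (An * Bn) + (Cn * Dn) emerges on every clock pulse, which is exactly the behaviour summarised in table 4.1.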
