
1.1.1 Structure and function of the CPU

Components of a CPU
The CPU is the brain of the computer, executing instructions to allow programs
to run.

Arithmetic logic unit:


The ALU carries out all of the arithmetic and logical operations. Results are
stored in the accumulator (ACC).

The Control Unit:


Sends out control signals to coordinate how the processor works. It manages
the flow of data between the CPU and other devices, accepts new instructions,
decodes instructions and stores resulting data in memory.

Registers:
Registers are memory locations within the processor that operate at high
speeds.
Program Counter – Holds the address of the next instruction to be executed.
During fetching this address is copied to the MAR, and the PC is incremented
to point to the next instruction.
Memory Data Register – Temporarily stores data that has been fetched from or
stored in memory.
Memory Address Register – Stores the address of the memory location that data
or instructions are to be fetched from or written to.
Current Instruction Register – Stores the most recently fetched instruction
waiting to be decoded and executed.
Accumulator – Stores the results of executed instructions which are usually
arithmetic and logic calculations made by the ALU.

Factors affecting CPU performance



Clock speed:
CPUs are often compared in terms of their clock speed. The clock is an electronic
oscillator that produces a signal to synchronise the operations of the processor.
Clock speed gives a theoretical maximum for how many instructions can be executed
per second.
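As a rough worked example (a simplified sketch that assumes one instruction completes per clock cycle, which real processors do not strictly achieve):

```python
# Simplified illustration: clock speed as a theoretical ceiling on
# instructions per second (assumes one instruction per cycle, an idealisation).
clock_speed_hz = 3.2e9                 # a 3.2 GHz processor
instructions_per_cycle = 1             # idealised assumption

max_instructions_per_second = clock_speed_hz * instructions_per_cycle
print(f"{max_instructions_per_second:.1e}")  # 3.2e+09
```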

Number of cores:
Each core in a CPU can carry out its own fetch-execute cycle, so the more cores a
CPU has, the more instructions it can execute simultaneously. Increasing the number
of cores can have a significant impact on performance because it allows parallel
processing.
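A sketch of the idea in Python (the workload here, a naive prime count, is a hypothetical example chosen only because it is CPU-bound; `ProcessPoolExecutor` defaults to one worker per core):

```python
# Sketch: splitting a CPU-bound task across cores so sub-tasks run in parallel.
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Naively count the primes below `limit` - a CPU-bound sub-task."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    sub_tasks = [20_000] * 4                  # four independent sub-tasks
    with ProcessPoolExecutor() as pool:       # one worker process per core by default
        results = list(pool.map(count_primes, sub_tasks))
    print(sum(results))
```

On a multi-core machine the four calls to `count_primes` can genuinely run at the same time, one per core, rather than taking turns on a single core.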

Cache:
Cache is a small amount of high-speed memory in (or very close to) the CPU which
stores frequently used data and instructions. Note that cache doesn't improve the
performance of the ALU or CU; rather, it speeds up the transfer of data and
instructions between main memory and the registers (MDR/MAR).
The size of cache varies between systems, though cache capacity is currently
measured in megabytes, far smaller than RAM. This is because cache memory is
expensive and it cannot be easily upgraded or replaced.

Modern computer systems have 3 levels of cache.


Level 1: the smallest and fastest cache; part of the circuitry of each core.
Level 2: often shared between cores; slower than L1 but bigger in capacity.
Level 3: slower than L2 but larger again; it sits on the processor or near it on
the motherboard.
The larger the cache, the more instructions can be queued and carried out quickly,
as accessing cache is much faster than taking data from RAM.
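A toy model of why cache helps (the latency figures are illustrative assumptions, not real hardware values):

```python
# Toy model of cache: check a small, fast store before slow main memory.
CACHE_ACCESS_NS = 1        # assumed latency figures, for illustration only
RAM_ACCESS_NS = 100

cache = {}                                     # address -> value (tiny, fast)
ram = {addr: addr * 2 for addr in range(1024)}  # pretend main memory

def read(addr):
    """Return (value, latency_ns); copy the value into cache on a miss."""
    if addr in cache:
        return cache[addr], CACHE_ACCESS_NS    # cache hit: fast
    value = ram[addr]
    cache[addr] = value                        # store it for next time
    return value, RAM_ACCESS_NS                # cache miss: slow

_, first = read(42)   # miss: must go to RAM
_, second = read(42)  # hit: served from cache
print(first, second)  # 100 1
```

The second access to the same address is served from the cache, which is the whole benefit: frequently reused data stops costing a trip to RAM.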

Pipelining
Pipelining is a technique used to improve the performance of the CPU by
streamlining the fetch-execute cycle: multiple instructions are worked on
simultaneously, with data held in a buffer in close proximity to the CPU until
it is required.

While one instruction is being executed, another can be decoded and a third can
be fetched.
Pipelining aims to reduce the amount of time the CPU is kept idle, which means
instructions are executed at a faster rate, improving processor performance.
To apply pipelining to a task, that task must be able to be broken down into
subtasks that can be handled independently.
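A back-of-the-envelope sketch of the saving, assuming a simple three-stage (fetch, decode, execute) pipeline where every stage takes one cycle:

```python
# Cycle counts for n instructions, with and without a 3-stage pipeline.
def cycles_unpipelined(n, stages=3):
    """Each instruction occupies the whole processor for `stages` cycles."""
    return n * stages

def cycles_pipelined(n, stages=3):
    """After `stages` cycles the pipeline is full; then one instruction
    completes every cycle."""
    return stages + (n - 1)

print(cycles_unpipelined(10))  # 30
print(cycles_pipelined(10))    # 12
```

With 10 instructions, the pipelined processor finishes in 12 cycles instead of 30, because fetch, decode and execute for different instructions overlap.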

Contemporary processing
Contemporary processors use a combination of Harvard and Von Neumann
architecture. Von Neumann architecture is used when working with data and
instructions in main memory, while Harvard architecture is used to divide the
cache into an instruction cache and a data cache.

Computer architecture
Stored program concept:
Data and programs must be loaded into main memory to be executed by the processor. The
instructions are fetched one at a time, decoded and executed sequentially by the processor.
Von Neumann:

The most common computer architecture which implements the stored program concept.
Instructions and data are stored in a common main memory and transferred using a single
shared bus.

Advantages: Simpler design reduces cost and complexity. Most general purpose computers
are based on this design.
Disadvantages: Data and instructions cannot be fetched simultaneously, since they
share a single bus, which causes a bottleneck.
Usage: Used in PCs, laptops, servers and high performance computers.

Harvard architecture:
An alternative model which separates the data and instructions into separate memories
using different buses. Program instructions and data are no longer competing for the same
bus.
Advantages: separate buses allow parallel access to data and instructions, which
prevents bottlenecking and increases speed. Suitable for real-time applications.
Disadvantages: more expensive and complex to design; more memory hardware is
required.
Usage: specialist embedded systems where speed takes priority over the complexity
of the design, and other more dedicated tools.

Buses
System bus:
The system bus consists of 3 separate buses, carrying control, address and data
signals.

Data bus:
Carries data between the processor and memory. The width of the data bus is
defined by the number of lines it contains. This bus is bi-directional.

Address bus:
The bus that carries the address of the memory location being read from or
written to. The width of the bus determines the maximum possible memory
address of the system.
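This is a simple power-of-two relationship: an n-bit address bus can identify 2^n distinct memory locations. A quick worked example:

```python
# An n-bit address bus can address 2**n distinct memory locations.
def addressable_locations(bus_width_bits):
    return 2 ** bus_width_bits

print(addressable_locations(16))  # 65536 locations (64 KiB if each holds one byte)
print(addressable_locations(32))  # 4294967296 locations (4 GiB)
```

This is why, for example, widening an address bus from 16 to 32 lines raises the maximum addressable memory from 64 KiB to 4 GiB (assuming byte-addressable memory).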

Control bus:
The bus which sends control signals from the control unit, coordinates the use
of the address and data buses and provides status information between system
components.
Control signals include:
 Bus request – indicates that a device is requesting the use of the
data bus
 Bus grant – indicates that the CPU has granted access to the data bus
 Memory write - causes data on the data bus to be written into the
addressed location in RAM
 Memory read – causes data from the addressed location in RAM to
be placed on the data bus
 Interrupt request – shows that a device is requesting access to the
CPU
 Clock – used to synchronise operations

Fetch – Decode – Execute cycle


A good video on it: https://www.youtube.com/watch?v=jFDMZpkUWCw&pp=ygUpZmV0Y2ggZGVjb2RlIGV4ZWNpdGUgY3ljbGUgaW4gbW9yZSBkZXRhaWw%3D

Fetching:
The program counter stores the address of the next instruction to be fetched.
Before the instruction can be fetched, its address must be copied from the PC into
the MAR.
Once this happens, the contents of that address location are loaded into the MDR.
The instruction is then copied from the MDR to the CIR.
The program counter is incremented by one.

Decode / execute:
The instruction is copied from the CIR to the CU, where it is decoded and then executed.

This repeats continuously.
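The steps above can be sketched as a small register-level simulation (the instruction format here, a simple (opcode, operand) pair with a made-up LOAD/ADD/HALT instruction set, is a hypothetical illustration, not a real machine code):

```python
# Minimal sketch of the fetch-decode-execute cycle using the named registers.
memory = {0: ("LOAD", 5), 1: ("ADD", 3), 2: ("HALT", None)}  # a tiny program

pc, acc = 0, 0                 # program counter, accumulator
running = True
while running:
    mar = pc                   # fetch: PC copied into the MAR
    mdr = memory[mar]          # contents of the addressed location loaded into MDR
    cir = mdr                  # instruction copied from MDR into the CIR
    pc += 1                    # PC incremented to point at the next instruction

    opcode, operand = cir      # decode: CU works out what the instruction means
    if opcode == "LOAD":       # execute: ALU/registers carry the instruction out
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        running = False

print(acc)  # 8
```

Notice that the loop body mirrors the cycle exactly: PC to MAR, memory to MDR, MDR to CIR, increment PC, then decode and execute, repeating until the program halts.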
