
Chapter 2

Lecture 3
WHAT A
MICROPROCESSOR IS

Dr. Marwa Gamal


Direct memory access
 In programmed I/O and interrupt I/O, data is transferred to memory
through the accumulator. In Direct Memory Access (DMA), the I/O device
transfers data directly to or from memory, bypassing the accumulator.
The operation sequence in the case of Direct Memory Access is as
follows (a minimal simulation sketch follows the list):
 1. The microprocessor checks for the DMA request signal once in each
machine cycle.
 2. The I/O device sends a signal on the DMA request pin.
 3. The microprocessor tristates the address, data and control buses.
 4. The microprocessor sends an acknowledgement signal to the I/O
device on the DMA acknowledgement pin.
 5. The I/O device uses the bus system to perform the data transfer
operation on memory.
 6. On completion of the data transfer, the I/O device withdraws the
DMA request signal.
 7. The microprocessor continues to check the DMA request signal. When
the signal is withdrawn, the microprocessor resumes control of the
buses and resumes normal operation.
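As an illustration, here is a minimal C sketch that simulates the
request/acknowledge handshake described above. The flag and function
names (dma_request, dma_ack) are illustrative stand-ins keyed to the
step numbers in the list; the real mechanism is hardware.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical signal lines modelled as flags. */
    static bool dma_request = false;  /* driven by the I/O device     */
    static bool dma_ack     = false;  /* driven by the microprocessor */

    /* Step 2: the I/O device sends a signal on the DMA request pin. */
    static void device_request_dma(void) { dma_request = true; }

    /* Steps 1, 3, 4: once per machine cycle the CPU samples the request
       line; if asserted, it tristates its buses (modelled by a message)
       and raises the acknowledgement line. */
    static void cpu_machine_cycle(void) {
        if (dma_request && !dma_ack) {
            printf("CPU: tristating address/data/control buses\n");
            dma_ack = true;
        } else if (!dma_request && dma_ack) {
            /* Step 7: request withdrawn -> CPU resumes bus control. */
            printf("CPU: resuming control of the buses\n");
            dma_ack = false;
        }
    }

    /* Steps 5, 6: with the buses granted, the device transfers a block
       to memory, then withdraws its request. */
    static void device_transfer(int words) {
        for (int i = 0; i < words; i++)
            printf("I/O device: word %d -> memory\n", i);
        dma_request = false;
    }

    int main(void) {
        device_request_dma();   /* step 2        */
        cpu_machine_cycle();    /* steps 1, 3, 4 */
        device_transfer(3);     /* steps 5, 6    */
        cpu_machine_cycle();    /* step 7        */
        return 0;
    }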
Serial Data Transfer
 The data transfer between two processors takes place in serial mode.
 The data is transferred bit by bit on a single line.
 Microprocessors have two pins for the input and output of serial
data.
 Special software instructions are used to effect the data transfer
(a minimal sketch follows).
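A minimal sketch of the idea in C: one byte is shifted out bit by bit,
least-significant bit first, over a single (simulated) line. The
serial_out function is a hypothetical stand-in for writing the
serial-output pin.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-in for driving the serial-output pin. */
    static void serial_out(int bit) { printf("%d", bit); }

    /* Shift one byte out bit by bit, LSB first, on a single line. */
    static void send_byte(uint8_t byte) {
        for (int i = 0; i < 8; i++) {
            serial_out(byte & 1);   /* put the current bit on the line */
            byte >>= 1;             /* move the next bit into position */
        }
        printf("\n");
    }

    int main(void) {
        send_byte(0x5A);   /* transmits the bits of 0x5A, LSB first */
        return 0;
    }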
Architectural advancements of
microprocessors
 A number of advancements that have taken place in computers have
migrated to the microprocessor field.
 Pipelined microprocessors have a number of smaller independent units
connected in series, like a pipeline. Each unit is able to perform its
task independently, and the system as a whole is able to give much
more throughput than a single non-pipelined microprocessor.
Pipelining
 Pipelining in computers is used to enhance execution speed.
 An instruction's execution consists of the following four
independent sub-tasks:
(a) Instruction Fetch
(b) Instruction Decode
(c) Operand Fetch
(d) Execute
Pipelining
 Consider an example of four processing elements P(1) to P(4). A
task T is subdivided into four sub-tasks T(1) to T(4), and each
processor takes 1 clock cycle to execute its sub-task.
 Thus a single task T will take 4 clock cycles, which is the same as
on a non-pipelined processor.
 For a single task, there is no advantage derived from a pipelined
processor.
Pipelining
 Now consider an example in which there is a continuous flow of tasks
T(1), T(2), T(3), T(4), …, each of which can be divided into four
sub-tasks.
 T(1) completes at the end of cycle 4; thereafter one task completes
in every cycle, so T(2) completes at the end of cycle 5 and T(3) at
the end of cycle 6 (see the arithmetic sketch below).
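These cycle counts follow from simple arithmetic: with k pipeline
stages and n back-to-back tasks, a pipeline needs k + (n - 1) cycles,
versus k * n cycles without pipelining. A small C sketch of that
arithmetic:

    #include <stdio.h>

    int main(void) {
        const int k = 4;               /* pipeline stages (sub-tasks)  */
        for (int n = 1; n <= 4; n++) { /* number of back-to-back tasks */
            int pipelined     = k + (n - 1);
            int non_pipelined = k * n;
            printf("tasks=%d: pipelined=%d cycles, non-pipelined=%d cycles\n",
                   n, pipelined, non_pipelined);
        }
        return 0;
    }

For n = 1 both take 4 cycles, matching the single-task case above; for
larger n the pipeline approaches one completed task per cycle.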


Pipelining
 Problems:
 Not all instructions are independent. If instruction I2 has to work
on the result of the previous instruction I1, it cannot proceed until
that result is available.
 The control logic inserts stall (wasted) clock cycles into the
pipeline until such dependencies are resolved (a toy sketch follows
this list).
 Another problem frequently encountered relates to branch
instructions: the next instruction is not known until the branch is
resolved. This problem can be mitigated by branch prediction.
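To make the data-dependency stall concrete, here is a toy C sketch: if
I2 reads the register that I1 writes, the control logic inserts stall
cycles. The two-cycle stall count is an illustrative assumption, not a
figure from any particular processor.

    #include <stdio.h>

    /* Toy model: each instruction writes one register and reads one. */
    struct instr { int writes_reg; int reads_reg; };

    int main(void) {
        struct instr i1 = { .writes_reg = 3, .reads_reg = 1 };
        struct instr i2 = { .writes_reg = 4, .reads_reg = 3 };

        /* I2 depends on I1's result, so stalls are inserted
           (2 cycles here is an assumed, illustrative latency). */
        int stalls = (i2.reads_reg == i1.writes_reg) ? 2 : 0;
        printf("stall cycles inserted between I1 and I2: %d\n", stalls);
        return 0;
    }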
Cache Memory
 Increasing memory throughput increases the execution speed of the
processor. To achieve this, a cache memory is provided between the
processor and the main memory.
 The cache memory consists of a few kilobytes of high-speed static
RAM (SRAM), whereas the main memory consists of a few megabytes to
gigabytes of slower but cheaper dynamic RAM (DRAM).
Cache Memory
When the CPU wants to read a byte or word, it outputs the address on
the Address Bus. The cache controller checks whether the required
contents are available in the cache memory.
If the addressed byte/word is available in the cache memory, the
cache controller enables the cache memory to output the addressed
byte/word on the Data Bus.
Cache Memory
 If the addressed byte/word is not present in the cache memory, the
cache controller enables the DRAM controller.
 The DRAM controller sends the address to the main memory to get the
byte/word. Since DRAM is slower, this access requires some wait states
to be inserted.
 The byte/word which is read is transferred to the CPU as well as to
the cache memory. If the CPU needs this byte/word again, it can be
accessed without any wait states (see the sketch below).
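The hit/miss decision can be sketched in C with a direct-mapped cache;
the sizes here are invented for illustration, and a real cache
controller does all of this in hardware.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NLINES 256           /* assumed: 256 one-byte cache lines */

    struct line { bool valid; uint32_t tag; uint8_t data; };
    static struct line cache[NLINES];
    static uint8_t dram[1 << 16];   /* stand-in for slow main memory */

    /* The controller's check: index by the low address bits, compare
       the stored tag against the high address bits. */
    static uint8_t read_byte(uint32_t addr) {
        struct line *l = &cache[addr % NLINES];
        uint32_t tag = addr / NLINES;
        if (l->valid && l->tag == tag) {
            printf("hit  at 0x%04x\n", (unsigned)addr);
            return l->data;                /* no wait states needed */
        }
        printf("miss at 0x%04x (wait states inserted)\n", (unsigned)addr);
        l->valid = true;                   /* fill the line from DRAM; */
        l->tag   = tag;                    /* the CPU and the cache    */
        l->data  = dram[addr];             /* both receive the byte    */
        return l->data;
    }

    int main(void) {
        dram[0x1234] = 42;
        read_byte(0x1234);   /* miss: fetched from slow DRAM */
        read_byte(0x1234);   /* hit: served by the cache     */
        return 0;
    }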
Multilevel caches

• Multilevel caches can be either inclusive or exclusive.
• In an inclusive cache design, the data in the L1 cache will also be
in the L2 cache.
• In exclusive caches, data is present either in L1 or in L2, but not
in both (a sketch of the inclusion check follows).
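A sketch of the inclusion property only, with invented toy sizes: in
an inclusive design, every block held in L1 must also be present in
L2.

    #include <stdbool.h>
    #include <stdio.h>

    #define L1_LINES 4
    #define L2_LINES 16

    /* Toy caches: each entry holds a block address; 0 means empty. */
    static int l1[L1_LINES] = { 10, 11 };
    static int l2[L2_LINES] = { 10, 11, 12, 13 };

    static bool in_l2(int block) {
        for (int i = 0; i < L2_LINES; i++)
            if (l2[i] == block) return true;
        return false;
    }

    /* Inclusive invariant: every block in L1 is also in L2.
       An exclusive design would require the opposite. */
    static bool is_inclusive(void) {
        for (int i = 0; i < L1_LINES; i++)
            if (l1[i] != 0 && !in_l2(l1[i])) return false;
        return true;
    }

    int main(void) {
        printf("inclusion property holds: %s\n",
               is_inclusive() ? "yes" : "no");
        return 0;
    }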
Virtual Memory
 As computer applications grew in complexity, programs became complex
and lengthy, and it became evident that physical memory size would
become a bottleneck.
 The virtual memory system evolved as a compromise in which the
complete program is divided into fixed- or variable-size pages and
stored on the hard disk.
 The basic idea is that pages can be swapped between the disk and the
main memory as and when needed. This task is performed by the
operating system.
Virtual Memory
 The task of translating a virtual address to a physical address is
carried out by the Memory Management Unit (MMU). The MMU is a hardware
device (a software sketch of the translation follows).
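A minimal C sketch of the translation the MMU performs in hardware,
assuming an invented 4 KB page size and a toy page table: the virtual
page number selects a physical frame, and the offset within the page
is carried over unchanged.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u   /* assumed 4 KB pages          */
    #define NPAGES    16u     /* toy space: 16 virtual pages */

    /* Toy page table: virtual page number -> physical frame number. */
    static uint32_t page_table[NPAGES] = { 7, 3, 9, 5 };

    static uint32_t translate(uint32_t vaddr) {
        uint32_t vpn    = vaddr / PAGE_SIZE;  /* virtual page number */
        uint32_t offset = vaddr % PAGE_SIZE;  /* unchanged by MMU    */
        return page_table[vpn] * PAGE_SIZE + offset;
    }

    int main(void) {
        uint32_t va = 1 * PAGE_SIZE + 0x12c;  /* page 1, offset 0x12c */
        printf("virtual 0x%05x -> physical 0x%05x\n",
               (unsigned)va, (unsigned)translate(va));
        return 0;
    }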
Virtual Memory
 Dirty bit: if the bit = 1, the page in memory has been modified.
 Referenced bit: if the bit = 1, the program has made a reference to
the page, meaning that the page may be currently in use.
 Protection bits: these bits specify whether the data in the page can
be read, written, or executed (a flag-bit sketch follows).
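These bits can be pictured as flag fields in a page-table entry. A
hedged sketch with invented bit positions:

    #include <stdint.h>
    #include <stdio.h>

    /* Bit positions are invented for illustration only. */
    #define PTE_DIRTY      (1u << 0)  /* page modified in memory     */
    #define PTE_REFERENCED (1u << 1)  /* page recently accessed      */
    #define PTE_READ       (1u << 2)  /* protection: read allowed    */
    #define PTE_WRITE      (1u << 3)  /* protection: write allowed   */
    #define PTE_EXEC       (1u << 4)  /* protection: execute allowed */

    int main(void) {
        uint32_t pte = PTE_READ | PTE_WRITE;  /* readable, writable page */

        pte |= PTE_REFERENCED;   /* set when the page is accessed */
        pte |= PTE_DIRTY;        /* set when the page is written  */

        printf("dirty=%d referenced=%d writable=%d executable=%d\n",
               !!(pte & PTE_DIRTY), !!(pte & PTE_REFERENCED),
               !!(pte & PTE_WRITE), !!(pte & PTE_EXEC));
        return 0;
    }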
Assignments
 Page 30 Q.13
 Deadline 25/4
