C1 - The Computer System
FIVE MAJOR OPERATIONS OF A COMPUTER SYSTEM
(Figure: the five operations, with the processing operation controlled by the control unit.)
COMPUTER ORGANISATION AND COMPUTER ARCHITECTURE
The output operation sends results to the video screen or printer.
The storage operation keeps track of files for later use.
This describes aspects of how information is being sent over the system bus, and in what manner.
(Figure: DRAM main memory contrasted with SRAM cache.)
How Memory Caching Works
1.2.1 What is cache memory
• Cache memory sits between the CPU and main memory. A
cache controller monitors the addresses requested by the
CPU and predicts which memory locations will be required in the future.
• Data is read into the cache memory in advance, allowing the
computer to obtain data far more quickly from the cache than
from main memory. Tags are used to identify where each cached
block originated. Cache is built from SRAM.
Definition of cache memory
• A special, very-high-speed memory called a cache is used to increase
the speed of processing by making current programs and data
available to the CPU at a rapid rate.
Mapping Functions
The mapping functions are used to map a particular block of main memory
to a particular block of cache; they govern how a block is transferred
from main memory to cache memory. Three different mapping
functions are available:
• Direct mapping
• Associative mapping
• Set-associative mapping
Direct mapping:
• A particular block of main memory can be brought only to one particular
block of cache memory, so this scheme is not flexible.
Associative mapping:
• In this mapping function, any block of main memory can potentially
reside in any cache block position. This is a much more flexible mapping
method.
Set-associative mapping:
• In this method, blocks of cache are grouped into sets, and the mapping
allows a block of main memory to reside in any block of a specific set.
In terms of flexibility, it lies between the other two methods.
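The three placement rules above can be compared side by side. The following is a minimal Python sketch (the cache size and set size are made-up illustrative numbers, not from the notes) that lists which cache blocks a given main-memory block may occupy under each mapping function:

```python
# Sketch: candidate cache blocks for main-memory block `b` under each
# mapping function. NUM_BLOCKS and SET_SIZE are illustrative assumptions.

NUM_BLOCKS = 8   # hypothetical cache size, in blocks
SET_SIZE = 2     # ways per set for the 2-way set-associative case

def direct_mapped(b):
    # exactly one candidate: block b modulo the number of cache blocks
    return [b % NUM_BLOCKS]

def fully_associative(b):
    # any cache block may hold the memory block
    return list(range(NUM_BLOCKS))

def set_associative(b):
    # any block inside set (b modulo the number of sets)
    num_sets = NUM_BLOCKS // SET_SIZE
    s = b % num_sets
    return [s * SET_SIZE + w for w in range(SET_SIZE)]

print(direct_mapped(13))      # [5]
print(fully_associative(13))  # all of blocks 0..7
print(set_associative(13))    # set 1 -> blocks [2, 3]
```

Note how the set-associative result sits between the other two in flexibility: more candidates than direct mapping's single block, fewer than fully associative's entire cache.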
Cache Organization
Direct Mapped cache is also referred to as 1-way set-associative cache. The figure
above shows a diagram of a direct-mapped scheme. In this scheme, main memory
is divided into cache pages. The size of each page is equal to the size of the
cache. Unlike the fully associative cache, the direct-mapped cache may only store
a specific line of memory within the same line of cache. For example, Line 0 of
any page in memory must be stored in Line 0 of cache memory. Therefore, if
Line 0 of Page 0 is stored within the cache and Line 0 of Page 1 is requested,
then Line 0 of Page 0 will be replaced with Line 0 of Page 1. This scheme
directly maps a memory line into an equivalent cache line, hence the name
Direct Mapped cache.
Direct Mapping
A Direct Mapped cache scheme is the least complex of the three
caching schemes: the currently requested address needs to be
compared with only one cache address. Since this implementation is
less complex, it is far less expensive than the other caching schemes.
The disadvantage is that a Direct Mapped cache is far less flexible,
which lowers performance, especially when jumping between cache
pages.
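The single-comparison lookup described above works because the address itself fixes the only possible cache line. A minimal Python sketch, with made-up line and cache sizes, splits an address into the tag, line, and offset fields a direct-mapped cache would use:

```python
# Sketch (sizes are illustrative assumptions, not from the notes):
# a direct-mapped cache splits each address into tag / line / offset.

LINE_SIZE = 16   # bytes per cache line -> 4 offset bits
NUM_LINES = 64   # lines in the cache   -> 6 line-index bits

def split_address(addr):
    offset = addr % LINE_SIZE                  # byte within the line
    line = (addr // LINE_SIZE) % NUM_LINES     # the one line it may occupy
    tag = addr // (LINE_SIZE * NUM_LINES)      # identifies the memory page
    return tag, line, offset

# Two addresses with the same line field but different tags conflict:
# loading the second evicts the first (the page-jumping penalty above).
print(split_address(0x1234))  # (4, 35, 4)
```

A lookup then compares just one stored tag against the tag field, which is why the hardware is cheap.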
Fully-Associative Mapping
WHY? Because a computer must support a wide variety of peripherals with various methods of operation.
p/s: A buffer is something that prevents something else from being harmed or that prevents two things from harming each other.
1.3.2 I/O MODULE BLOCK DIAGRAM
The I/O bus consists of data lines, address lines, and control lines.
The I/O bus from the processor is attached to all peripheral
interfaces. To communicate with a particular device, the processor
places a device address on the address lines. Each interface attached
to the I/O bus contains an address decoder that monitors the
address lines.
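The address-decoder behaviour described above can be sketched in a few lines of Python. The device addresses and names here are invented for illustration; the point is that every interface watches the address lines but only the matching one responds:

```python
# Sketch: each interface on the I/O bus owns a device address and
# ignores bus traffic addressed to any other device.

class Interface:
    def __init__(self, device_addr, name):
        self.device_addr = device_addr
        self.name = name

    def decode(self, bus_addr):
        # the address decoder: respond only to our own address
        return bus_addr == self.device_addr

bus = [Interface(0x10, "keyboard"), Interface(0x20, "printer")]

def select(bus_addr):
    # the processor places bus_addr on the address lines; every
    # decoder monitors it, but at most one interface is selected
    return [i.name for i in bus if i.decode(bus_addr)]

print(select(0x20))  # ['printer']
```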
The Input/Output interface provides a method for
transferring information between internal
storage and external I/O devices. Peripherals
connected to a computer need special
communication links for interfacing them with
the central processing unit.
1.4 DESCRIBE INPUT / OUTPUT DATA TRANSFER
Input / Output data transfer
I/O activities are asynchronous; that is, they are
not synchronized to the CPU clock, as memory
data transfers are. Additional signals, called
handshaking signals, may need to be
incorporated on a separate I/O bus to
coordinate when the device is ready to have
data read from it or written to it.
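One common form of handshaking uses a ready signal from the device and an acknowledge signal from the CPU. The following Python sketch models that exchange in software; the signal names and the dictionary-based "wires" are illustrative, not taken from any particular bus standard:

```python
# Sketch of a ready/acknowledge handshake; names are illustrative.

def device_write(dev, data):
    # Device side: place data on the bus and raise the ready signal.
    dev["data"] = data
    dev["ready"] = True

def cpu_read(dev):
    # CPU side: only read when the device says it is ready.
    if not dev["ready"]:
        return None              # device not ready; nothing transferred
    value = dev["data"]
    dev["ack"] = True            # acknowledge receipt
    dev["ready"] = False         # device drops ready after seeing ack
    dev["ack"] = False           # handshake done, lines return to idle
    return value

dev = {"data": None, "ready": False, "ack": False}
assert cpu_read(dev) is None     # nothing to read before device is ready
device_write(dev, 0x41)
print(cpu_read(dev))  # 65
```

In real hardware the two sides run concurrently; the sketch collapses the exchange into sequential calls to show the ordering of the signals.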
1.4.1 Define the asynchronous serial transfer
The transfer of data between two units may be done in parallel or in serial.
Serial transmission is slower but less expensive, since it requires only one pair
of conductors. Serial transmission can be synchronous or asynchronous:
In synchronous transmission, the two units share a common clock and bits are
transmitted continuously. In asynchronous transmission, information is sent only
when it is available, and the line remains idle when there is no information
being transmitted.
1.4.2 The asynchronous communication
interface
An asynchronous character frame consists of a start bit, the character bits, and a stop bit.
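The frame layout above can be sketched in Python. This is an illustrative model, not a driver for a real interface: the idle line is 1, a start bit of 0 marks the beginning of a character, the data bits follow least-significant bit first, and a stop bit of 1 returns the line to idle:

```python
# Sketch of asynchronous serial framing: start bit, LSB-first data
# bits, stop bit. Parameters are illustrative assumptions.

def frame(char_byte, data_bits=8):
    bits = [0]                                                # start bit
    bits += [(char_byte >> i) & 1 for i in range(data_bits)]  # LSB first
    bits.append(1)                                            # stop bit
    return bits

def deframe(bits, data_bits=8):
    assert bits[0] == 0 and bits[-1] == 1, "bad start/stop bit"
    return sum(b << i for i, b in enumerate(bits[1:1 + data_bits]))

f = frame(ord("A"))   # 'A' = 0x41 -> 10 bits on the line
print(f)              # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
assert deframe(f) == ord("A")
```

Note the overhead: each 8-bit character costs 10 bit times on the line, which is part of why asynchronous serial transfer is slower than parallel transfer.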
1.4.4 Describe mode of transfer
Modes of Transfer
There are THREE (3) methods for managing input and
output:
Programmed I/O (also known as polling)
Interrupt-driven I/O
Direct memory access (DMA)
DMA services are usually provided by a DMA controller, a dedicated
device that transfers data between memory and an I/O device without
continuous CPU involvement.
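The first of the three modes can be illustrated with a short Python sketch of polling: the CPU busy-waits on a status flag before each transfer. The Device class and its "registers" are invented for the example and stand in for a real device's status and data registers:

```python
# Sketch of programmed I/O (polling). The device pretends to need a
# few polls before each word becomes ready.

class Device:
    def __init__(self, data):
        self._data = list(data)
        self._ticks = 0

    @property
    def status_ready(self):
        # simulate: the device is ready once every 3 status reads
        self._ticks += 1
        return self._ticks % 3 == 0 and bool(self._data)

    def read_data(self):
        return self._data.pop(0)

def polled_read(dev, count):
    out = []
    while len(out) < count:
        while not dev.status_ready:   # busy-wait: wasted CPU cycles
            pass
        out.append(dev.read_data())
    return out

print(polled_read(Device([1, 2, 3]), 3))  # [1, 2, 3]
```

The busy-wait loop is exactly what interrupt-driven I/O and DMA exist to avoid: with polling, the CPU burns cycles checking status instead of doing useful work.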