Chapter 1

THE COMPUTER SYSTEM

DFC10093 – COMPUTER SYSTEM ARCHITECTURE
1.1 COMPUTER ORGANISATION AND COMPUTER ARCHITECTURE

COMPUTER ORGANISATION
• The way the hardware components operate and the way they are connected together to form the computer system.
• Structural relationships that are not visible to the programmer.

COMPUTER ARCHITECTURE
• The structure and functional behaviour of the computer as seen by the user/programmer.
• Engineering considerations that are useful in coming up with a desirable computer design.
Computer Architecture vs Computer Organisation

Computer Architecture
• Is about the attributes of a computer system as viewed by the programmer, which have a direct impact on the logical execution of a program.
• Examples: instruction set, arithmetic, addressing modes and input/output mechanisms.
• An architecture may be maintained for many years.

Computer Organisation
• Is about the components that are linked to form the operational units of a computer system.
• Examples: hardware technology, interfaces, memory technology and control signals.
• An organisation may change with rapid changes in technology.

One computer model, for example the Intel x86, may maintain its architecture but differ in its organisation.
INTERCONNECTION STRUCTURES WITHIN A COMPUTER SYSTEM

[Figure: interconnection of the major components; transfers over the interconnection are controlled by the control unit.]

FIVE MAJOR OPERATIONS OF A COMPUTER SYSTEM

 The input operation recognizes input from the keyboard or mouse.

 The processing operation manipulates data according to the user's instructions.

 The output operation sends output to the video screen or printer.

 The storage operation keeps track of files for later use. Examples of storage devices include flash memory and hard drives.

 The control operation supervises the processes of input, output, processing and storage.

Computer's bus

In computer architecture, a bus is a subsystem that transfers data between components inside a computer, or between computers.

a) Internal bus (system bus)
An internal bus connects all the internal components of a computer to the motherboard (and thus, the CPU and internal memory). These types of buses are also referred to as local buses.

b) External bus (expansion bus)
An external bus connects external peripherals to the motherboard.
The Bus System Model (Bus Interconnection)

The system bus consists of three buses:
• Data bus – carries the information being transmitted.
• Address bus – identifies where the information is being sent.
• Control bus – describes aspects of how the information is being sent, and in what manner.

Communications among the components are by means of a shared pathway.
1.2 Understand the concept of cache memory

Background: RAM on the Motherboard
• Loses all data when the PC is turned off (except data stored on the CMOS chip)
• Two categories:
  • Static RAM (SRAM) – fast; used as a memory cache
  • Dynamic RAM (DRAM) – slower; requires constant refreshing

How Memory Caching Works
1.2.1 What is cache memory

• Cache memory sits between the CPU and the main memory. A cache controller monitors the addresses requested by the CPU and predicts which memory will be required in the future.
• Data is read into the cache memory in advance, allowing the computer to obtain data far more quickly from the cache than from the main memory. Tags are used to identify where cached data originated. Cache is built from SRAM.

Definition of cache memory
• A special very-high-speed memory, called a cache, is used to increase the speed of processing by making current programs and data available to the CPU at a rapid rate.

The Concept of Cache Memory


• The cache is used for storing segments of programs currently being executed in the CPU and temporary data frequently needed in the present calculation.

• By making the programs and data available at a rapid rate, it is possible to increase the performance of the computer.
Think…
• Caches are faster than main memory, so why do we need main memory at all? Once again, cost is the deciding factor: although caches are fast, they are very expensive memories and are therefore used only in small sizes.
1.2.2 Types of cache memory

The transformation of data from main memory to cache memory is called mapping. There are THREE (3) types of mapping procedures of practical interest when considering the organization of cache memory:

• Direct mapping
• Associative mapping
• Set-associative mapping
Mapping Functions
The mapping functions are used to map a particular block of main memory to a particular block of cache; a mapping function determines how a block is transferred from main memory to cache memory. Three different mapping functions are available (a small sketch after this list illustrates where each scheme allows a block to be placed):
Direct mapping:
• A particular block of main memory can be brought only to a particular block of cache memory, so it is not flexible.
Associative mapping:
• In this mapping function, any block of main memory can potentially reside in any cache block position. This is a much more flexible mapping method.
Set-associative mapping:
• In this method, the blocks of cache are grouped into sets, and the mapping allows a block of main memory to reside in any block of a specific set. In terms of flexibility, it lies between the other two methods.
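To make the three placement rules concrete, here is a minimal Python sketch (not from the slides) that computes, for an assumed cache of eight lines, where a given main-memory block may be placed under each scheme. The cache size, number of ways and block numbers are illustrative assumptions, not values from the course material.

```python
# Illustrative sketch: where a main-memory block may be placed under each mapping
# scheme, assuming a cache of NUM_LINES lines and a 2-way set-associative variant.

NUM_LINES = 8                     # assumed cache size, in lines
NUM_SETS_2WAY = NUM_LINES // 2    # a 2-way cache groups the lines into NUM_LINES/2 sets

def direct_mapped_line(block: int) -> int:
    """Direct mapping: block i may only go to cache line (i mod NUM_LINES)."""
    return block % NUM_LINES

def fully_associative_lines(block: int) -> list[int]:
    """Associative mapping: block i may go to any cache line."""
    return list(range(NUM_LINES))

def set_associative_lines(block: int) -> list[int]:
    """Set-associative (2-way): block i may go to either line of set (i mod NUM_SETS_2WAY)."""
    s = block % NUM_SETS_2WAY
    return [2 * s, 2 * s + 1]

if __name__ == "__main__":
    for block in (0, 5, 13):
        print(f"block {block:2d}: direct -> line {direct_mapped_line(block)}, "
              f"2-way set-associative -> lines {set_associative_lines(block)}, "
              f"fully associative -> any of {fully_associative_lines(block)}")
```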
Cache Organization

In order to fully understand how caches can be organized, two terms need to be defined: cache page and cache line. Main memory is divided into equal pieces called cache pages. The size of a page depends on the size of the cache and how the cache is organized. A cache page is broken into smaller pieces, each called a cache line. The size of a cache line is determined by both the processor and the cache design. Figure above shows how main memory can be broken into cache pages and how each cache page is divided into cache lines.
Direct Mapping

Direct Mapped cache is also referred to as 1-Way set associative cache. Figure
above shows a diagram of a direct map scheme. In this scheme, main memory
is divided into cache pages. The size of each page is equal to the size of the
cache. Unlike the fully associative cache, the direct map cache may only store
a specific line of memory within the same line of cache. For example, Line 0 of
any page in memory must be stored in Line 0 of cache memory. Therefore, if Line 0 of Page 0 is stored within the cache and Line 0 of Page 1 is requested, then Line 0 of Page 0 will be replaced with Line 0 of Page 1. This scheme
directly maps a memory line into an equivalent cache line, hence the name
Direct Mapped cache.
Direct Mapping
A Direct Mapped cache scheme is the least complex of all three
caching schemes. Direct Mapped cache only requires that the current
requested address be compared with only one cache address. Since
this implementation is less complex, it is far less expensive than the
other caching schemes.
The disadvantage is that a Direct Mapped cache is far less flexible, so its performance is lower, especially when jumping between cache pages.
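The page-jumping penalty can be seen in a toy simulation. The following Python sketch is illustrative only: it assumes a cache that holds a single page of four lines, and models main-memory addresses as (page, line) pairs as in the example above.

```python
# Toy direct-mapped cache: one cache entry per line number, so Line 0 of any page
# always lands in cache line 0. The page size, line count and access pattern are
# assumptions made for illustration.

LINES_PER_PAGE = 4   # assumed: cache size equals one page of four lines

class DirectMappedCache:
    def __init__(self):
        self.tags = [None] * LINES_PER_PAGE   # which page each cache line currently holds

    def access(self, page: int, line: int) -> str:
        if self.tags[line] == page:
            return "hit"
        self.tags[line] = page                # miss: replace whatever page held this line
        return "miss (line replaced)"

cache = DirectMappedCache()
print(cache.access(page=0, line=0))  # miss: Line 0 of Page 0 is loaded
print(cache.access(page=0, line=0))  # hit
print(cache.access(page=1, line=0))  # miss: Line 0 of Page 1 evicts Line 0 of Page 0
print(cache.access(page=0, line=0))  # miss again -- the cost of jumping between pages
```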
Fully-Associative Mapping

Figure above shows a diagram of a Fully Associative cache. This organizational scheme allows any line in main memory to be stored at any location in the cache. Fully-Associative cache does not use cache pages, only lines. Main memory and cache memory are both divided into lines of equal size. For example, Figure above shows that Line 1 of main memory is stored in Line 0 of cache. However, this is not the only possibility: Line 1 could have been stored anywhere within the cache. Any cache line may store any memory line, hence the name Fully Associative.
Fully-Associative Mapping
A Fully Associative scheme provides the best performance because any
memory location can be stored at any cache location. The disadvantage
is the complexity of implementing this scheme. The complexity comes
from having to determine if the requested data is present in cache. In
order to meet the timing requirements, the current address must be
compared with all the addresses present in the tag RAM (TRAM). This requires a
very large number of comparators that increase the complexity and cost
of implementing large caches. Therefore, this type of cache is usually
only used for small caches, typically less than 4K.
Set-Associative Mapped Cache

A Set-Associative cache scheme is a combination of the Fully-Associative and Direct Mapped caching schemes. A set-associative scheme works by dividing the cache SRAM into equal sections (typically 2 or 4) called cache ways. The cache page size is equal to the size of a cache way. Each cache way is treated like a small direct-mapped cache. To make the explanation clearer, let's look at a specific example. Figure above shows a diagram of a 2-Way Set-Associative cache scheme. In this scheme, two lines of memory that map to the same set may be stored in the cache at the same time. This helps to reduce the number of times cache line data is overwritten.
This scheme is less complex than a Fully-Associative cache because the number of comparators is equal to the number of cache ways. A 2-Way Set-Associative cache only requires two comparators, making this scheme less expensive than a fully-associative scheme.
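To complement the direct-mapped sketch above, here is an equally illustrative 2-way set-associative toy cache in Python, with least-recently-used replacement inside each set. The number of sets, the replacement policy and the access pattern are assumptions for the example, but they show why the alternating accesses that thrashed the direct-mapped cache now hit.

```python
# Toy 2-way set-associative cache with LRU replacement within each set.
# NUM_SETS and the accesses below are assumed values for illustration.

NUM_SETS = 4   # each set holds up to 2 page tags (2 ways)

class TwoWaySetAssociativeCache:
    def __init__(self):
        self.sets = [[] for _ in range(NUM_SETS)]   # most recently used tag kept last

    def access(self, page: int, line: int) -> str:
        ways = self.sets[line % NUM_SETS]
        if page in ways:
            ways.remove(page)
            ways.append(page)        # refresh LRU order
            return "hit"
        if len(ways) == 2:
            ways.pop(0)              # evict the least recently used way
        ways.append(page)
        return "miss"

cache = TwoWaySetAssociativeCache()
print(cache.access(page=0, line=0))  # miss
print(cache.access(page=1, line=0))  # miss -- stored in the second way of the set
print(cache.access(page=0, line=0))  # hit
print(cache.access(page=1, line=0))  # hit
```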
1.3 INPUT/OUTPUT IN COMPUTER SYSTEM
Why not connect peripherals directly to the system bus?

• The data transfer rate of peripherals is often much slower than that of the memory or processor.
• There is a wide variety of peripherals with various methods of operation.
• The data transfer rate of some peripherals is much faster than that of the memory or processor.
• Peripherals use different data formats and word lengths.
Why not connect peripherals directly to the system bus?

 There is a wide variety of peripherals with various methods of operation. It would be impractical to incorporate the necessary logic within the processor to control a range of devices.
 The data transfer rate of peripherals is often much slower than that of the memory or processor. Thus, it is impractical to use the high-speed system bus to communicate directly with a peripheral.
 The data transfer rate of some peripherals is faster than that of the memory or processor. Again, the mismatch would lead to inefficiencies if not managed properly.
 Peripherals often use different data formats and word lengths than the computer to which they are attached.
 Thus, an I/O module is required.
1.3.1 Define I/O module
An I/O module has two major functions:
 Interface to the CPU and memory via the system bus or central switch.
 Interface to one or more peripheral devices via tailored data links.

Generic Model of an I/O Module


1.3.1 Define I/O module

 I/O MODULE: The input/output subsystem of a computer, referred to as I/O, provides an efficient mode of communication between the central system and the outside environment.
 I/O MODULE: Any program, operation or device that transfers data to or from a computer and to or from a peripheral device.
1.3.2 I/O MODULE BLOCK DIAGRAM

 This module connects to the rest of the computer through a set of signal lines (e.g. the system bus lines).
 Data transferred to and from the module are buffered in one or more data registers.
 There may also be one or more status registers that provide current status information.
 A status register may also function as a control register, to accept detailed control information from the processor.
 The logic within the module interacts with the processor via a set of control lines.
 The processor uses the control lines to issue commands to the I/O module.

P/S: In this context, a buffer is a temporary storage area that holds data in transit, smoothing out the difference in speed between the devices exchanging it.
1.3.2 I/O MODULE BLOCK DIAGRAM

 The module must also be able to recognize and generate addresses associated with the devices it controls.
 Each I/O module has a unique address or, if it controls more than one external device, a unique set of addresses.
 Finally, the I/O module contains logic specific to the interface with each device that it controls.
1.3.2 I/O MODULE DIAGRAM
Function of I/O modules
1)   Control and Timing.
2)   CPU Communication.
3)   Device Communication.
4)   Data Buffering.
5)   Error Detection.
1.3.3 List the I/O devices

 Human readable – for communicating with the computer user
   • Printer, keyboard, video display terminals (VDTs)
 Machine readable – for communicating with equipment
   • Magnetic disk, tape systems, sensors
 Communication – for communicating with remote devices
   • Terminal, machine-readable device, or even another computer
1.3.4 Describe the I/O bus and interface modules
A typical communication link between the processor and several peripherals is shown in the figure above.

The I/O bus consists of data lines, address lines, and control lines.

The I/O bus from the processor is attached to all peripheral interfaces. To communicate with a particular device, the processor places a device address on the address lines. Each interface attached to the I/O bus contains an address decoder that monitors the address lines.

 An Input/Output interface provides a method for transferring information between internal storage and external I/O devices. Peripherals connected to a computer need special communication links for interfacing them with the central processing unit.
1.4 DESCRIBE INPUT/OUTPUT DATA TRANSFER

Input/Output data transfer
 I/O activities are asynchronous; that is, they are not synchronized to the CPU clock, as memory data transfers are. Additional signals, called handshaking signals, may need to be incorporated on a separate I/O bus to coordinate when the device is ready to have data read from it or written to it.
1.4.1 Define the asynchronous serial transfer

The transfer of data between two units may be done in parallel or serial. Serial transmission is slower, but is less expensive since it requires only one pair of conductors. Serial transmission can be synchronous or asynchronous:
 In synchronous transmission, the two units share a common clock frequency and bits are transmitted continuously at the rate dictated by the clock pulses.
 In asynchronous transmission, binary information is sent only when it is available, and the line remains idle when no information is being transmitted.
1.4.2 The asynchronous communication interface

A serial asynchronous data transmission technique used in many interactive terminals employs special bits that are inserted at both ends of the character code. With this technique, each transmitted character consists of three parts (see the framing sketch below):
 Start bit
 Character bits
 Stop bit
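As an illustration of this framing, the short Python sketch below builds the bit sequence for one character, assuming the common convention of one start bit (0), eight character bits sent least-significant bit first, and one stop bit (1); real interfaces may add a parity bit or extra stop bits.

```python
# Hedged sketch of asynchronous serial framing: start bit + character bits + stop bit.
# The 8-data-bit, LSB-first, 1-stop-bit format is an assumed common convention.

def frame_character(ch: str, data_bits: int = 8) -> list[int]:
    code = ord(ch)
    bits = [(code >> i) & 1 for i in range(data_bits)]  # character bits, LSB first
    return [0] + bits + [1]                             # start bit, data, stop bit

if __name__ == "__main__":
    # 'A' is 0x41 = 0b01000001, so the frame is [0, 1,0,0,0,0,0,1,0, 1]
    print(frame_character("A"))
```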
1.4.4 Describe mode of transfer

Modes of Transfer
There are THREE (3) methods for managing input and output:
 Programmed I/O (also known as polling)
 Interrupt-driven I/O
 Direct Memory Access (DMA)

I/O technique summary:
• I/O-to-memory transfer through the processor, no interrupts: Programmed I/O
• I/O-to-memory transfer through the processor, with interrupts: Interrupt-driven I/O
• Direct I/O-to-memory transfer (with interrupts): Direct memory access (DMA)
a) Programmed I/O

With programmed I/O, data are exchanged between the processor and the I/O module. The processor executes a program that gives it direct control of the I/O operation, including sensing device status, sending a read or write command, and transferring the data. When the processor issues a command to the I/O module, it must wait until the I/O operation is complete. If the processor is faster than the I/O module, this is wasteful of processor time.
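The busy-wait at the heart of programmed I/O can be sketched in a few lines of Python. The device here is simulated (there is no real hardware or register map); the assumed READY bit, register names and timing exist only to show the processor repeatedly polling a status register and doing nothing useful until the device is ready.

```python
# Hedged sketch of programmed I/O (polling) against a simulated device.
# READY_BIT, the registers and the timing are assumptions for illustration only.

import random

READY_BIT = 0x01   # assumed position of the "data ready" bit in the status register

class SimulatedDevice:
    def __init__(self):
        self.status = 0
        self.data = 0
        self._countdown = random.randint(3, 8)   # cycles until the device has data

    def tick(self):
        self._countdown -= 1
        if self._countdown <= 0:
            self.data = 42                       # the byte the device makes available
            self.status |= READY_BIT

def programmed_io_read(dev: SimulatedDevice) -> int:
    polls = 0
    while not (dev.status & READY_BIT):          # busy-wait: wasted processor time
        dev.tick()
        polls += 1
    print(f"device became ready after {polls} polls")
    return dev.data

print("value read:", programmed_io_read(SimulatedDevice()))
```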
b) The Interrupt-Driven I/O

With interrupt-driven I/O, the processor issues an I/O command, continues to execute other instructions, and is interrupted by the I/O module when the latter has completed its work. With both programmed and interrupt-driven I/O, the processor is responsible for extracting data from main memory for output and storing data in main memory for input.
b) The Interrupt-Driven I/O
 With interrupt-driven I/O, the CPU does not access a device until it needs servicing, and so it does not get caught up in busy-waits. In interrupt-driven I/O, the device requests service through a special interrupt request line that goes directly to the CPU.
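The difference from polling can be mimicked in Python, with a background thread standing in for the device and a plain callback standing in for the interrupt handler; this is only an analogy for the hardware interrupt request line described above, not real interrupt machinery.

```python
# Hedged simulation of interrupt-driven I/O: the "CPU" keeps doing other work and is
# only notified, via a callback acting as the interrupt handler, when the device is done.

import threading
import time

def interrupt_handler(data):
    print(f"interrupt: device delivered {data!r}")

def simulated_device(raise_interrupt):
    time.sleep(0.2)                      # the device works at its own slower pace
    raise_interrupt("sensor reading")    # stands in for asserting the interrupt line

threading.Thread(target=simulated_device, args=(interrupt_handler,)).start()

for step in range(5):                    # meanwhile, the processor executes other work
    print(f"CPU doing useful work, step {step}")
    time.sleep(0.1)
```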
c) Direct Memory Access (DMA)

A direct memory access (DMA) device can transfer data directly to and from memory rather than using the CPU as an intermediary, and can thus relieve congestion on the system bus. This third technique is known as direct memory access (DMA). In this mode, the I/O module and main memory exchange data directly, without processor involvement.
c) Direct Memory Access (DMA)

DMA Controller
 DMA services are usually provided by a DMA controller, which is itself a specialized processor whose specialty is transferring data directly between I/O devices and memory.
 Many hardware systems use DMA, including disk drive controllers, graphics cards, network cards and sound cards.
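To make the contrast with the earlier per-word transfers concrete, the toy Python sketch below lets a "DMA controller" copy a whole block from a device buffer into memory and then signal completion once; the buffers, lengths and completion callback are illustrative stand-ins for the real hardware.

```python
# Hedged sketch of a DMA-style block transfer: the controller moves the whole block
# without a CPU copy loop and involves the processor only once, via a completion
# "interrupt" (here just a callback). All names and sizes are assumed for illustration.

memory = [0] * 16                       # toy main memory
device_buffer = list(range(100, 108))   # toy data waiting inside an I/O device

def dma_transfer(src, dst, dst_start, length, on_complete):
    dst[dst_start:dst_start + length] = src[:length]   # block copied directly
    on_complete(length)                                 # single completion interrupt

def completion_interrupt(n):
    print(f"DMA complete: {n} words transferred")

dma_transfer(device_buffer, memory, dst_start=4, length=8,
             on_complete=completion_interrupt)
print("memory after transfer:", memory)
```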
