
TM112: Introduction to Computing and Information Technology 2

Meeting #2
Block 1 (Part 3)
Hardware and Software Concepts
OU Materials, PPT prepared by Dr. Khaled Suwais
1
Edited by Dr. Ahmad Mikati
Contents

• Introduction
• 3.1 The processor
• 3.2 Storing and moving data and instructions
• 3.3 Peripherals and pulling it all together
• 3.4 Instructing the processor
• 3.5 Programmers, programming and programs
• Summary
2
Introduction
You will learn the answers to questions such as the following.
• How does a data bottleneck occur in a computer, and how can it be avoided?
• How can I melt my computer?
• What are those strange strings of symbols when I get the ‘blue screen of
death’ on my Windows machine?
• How can a sip and a puff help a person with disabilities interact with a
computer?
• How do computers and programmers pull themselves up by their
bootstraps?
• Do you do RISC?
• When is hardware not required for a computer?
3
3.1 The processor

• The processor of a computer is the part that actually performs the instructions that we ask the computer to execute.
• A commercial processor is a wafer of silicon, called a chip or
microchip, on which are etched several hundreds of millions
of the logic gates that are used to store and process
instructions and data.
• Processors come in ‘families’ such as the Intel Core, Celeron
and AMD Athlon, etc.

4
3.1 The processor
• The arithmetic and logic unit (ALU) and the floating-
point unit (FPU) are at the heart of the processor, as these
are the places where the data is actually manipulated.
ALU:
• Contains electronic circuits that perform binary arithmetic, such as addition, subtraction, multiplication and division on integers.
• Contains circuits to perform logical operations, such as comparing integers with zero, testing two integers for equality, testing if one integer is greater than another, etc.

FPU:
• It is a common part of most modern processors.
• Its function is very similar to that of the ALU, but it operates on floating-point numbers using specialised circuitry optimised to be as efficient as possible when working with floating-point representations.

5
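To make these operations concrete, here is a small Python sketch (an illustration only: in a real machine the work below is carried out by the ALU's and FPU's circuitry, not by a programming language):

```python
# Illustrative only: in hardware, integer work is done by the ALU and
# floating-point work by the FPU; Python is just used to show the operations.

# Integer arithmetic and logical tests - the ALU's territory
a, b = 12, 5
print(a + b, a - b, a * b, a // b)   # add, subtract, multiply, integer-divide
print(a == 0, a == b, a > b)         # compare with zero, test equality, greater-than

# Floating-point arithmetic - the FPU's territory
x, y = 12.0, 5.0
print(x / y)                         # 2.4, a floating-point result
print(0.1 + 0.2)                     # 0.30000000000000004 - floating-point
                                     # values are approximate representations
```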
3.1.2 Registers and cache memory
• Main memory is a storage area that contains program
instructions and data.
• When a program is first loaded, the corresponding instructions
and data are put into main memory, which is outside the
processor.

• Each instruction and piece of data is held in a ‘chunk’ called a word.
• A word has a fixed size (usually 32 or 64 bits in a modern
computer), and it is handled as a unit by the hardware of the
processor.

6
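As a rough illustration of a 64-bit word, the sketch below uses Python's struct module to pack an integer into 8 bytes (the value and byte order are arbitrary choices for this example):

```python
import struct

value = 300
# Pack the integer into one 64-bit (8-byte) word, little-endian byte order.
word = struct.pack("<q", value)
print(len(word))      # 8 bytes = 64 bits
print(word.hex())     # 2c01000000000000 - the bit pattern held in memory
# A 64-bit processor moves and manipulates these 64 bits as a single unit.
```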
3.1.2 Registers and cache memory

As each word gets closer to being processed, it is moved into the processor in two steps, so that it can be accessed more quickly:
• first, the word is moved to cache memory, which is inside the processor
• then it is moved from cache memory to memory locations called registers.
• Registers are very small but very fast areas of memory that are used
as a holding area for instructions and data immediately before they
are needed by the ALU/FPU.

7
3.1.2 Registers and cache memory
In modern processors, there may be several levels of cache
memory.
• Level 1 cache is the fastest (and smallest), and the aim is to use this
for the data and instructions that will imminently be transferred to the
registers.
• Level 2 cache is a larger but slower cache memory.
• There may be two more levels of cache below Level 2, each with more
capacity but slower speed.

The organisation of, relationships between, and speeds of the different levels of cache memory.
8
3.1.2 Registers and cache memory

• Data has to be moved into cache memory from main memory before it is needed by the processor.
• The use of the cache to speed up execution depends on how
effective the cache management is at predicting future data
use.
• If a sequence of instructions is to be executed, then pre-
loading all these instructions into cache before execution
begins can improve the overall processing speed.

9
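The principle of cache management can be sketched in software: keep a small, fast store of items likely to be needed again soon, and fall back to the larger, slower store on a miss. The sketch below is only an analogy for the hardware cache – the slow_fetch function, the cache size and the eviction policy are all invented for the example.

```python
from collections import OrderedDict
import time

def slow_fetch(address):
    """Stand-in for a slow read from main memory."""
    time.sleep(0.01)              # simulate the delay of the slower store
    return address * 2            # pretend this is the value held there

CACHE_SIZE = 4
cache = OrderedDict()             # small, fast store (most recent item last)

def read(address):
    if address in cache:          # cache hit: served from the fast store
        cache.move_to_end(address)
        return cache[address]
    value = slow_fetch(address)   # cache miss: go to the slow store
    cache[address] = value
    if len(cache) > CACHE_SIZE:   # evict the least recently used entry
        cache.popitem(last=False)
    return value

# Repeated accesses to the same few addresses are served from the cache,
# so only the first access to each address pays the slow-memory penalty.
for address in [1, 2, 3, 1, 2, 3, 1, 2, 3]:
    read(address)
```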
3.1.2 Registers and cache memory
• There are several different types of registers in different
parts of the processor, and each is designed to hold a
particular type of information for a specific function.
• The accumulator is a register within the ALU where an
actual calculation takes place.
• The status register, sometimes called the flags register,
holds further information about the last operation
executed. Each bit in the register represents some
description of the result – is the result zero? Is the result
negative? Is the result too big to be stored in the
accumulator? And so on.
10
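The behaviour of a flags register can be modelled in a few lines of Python. The 8-bit accumulator and the particular flags below are chosen purely for illustration; real processors differ in the flags they provide.

```python
WORD_BITS = 8                     # an illustrative 8-bit accumulator
WORD_MAX = 2 ** WORD_BITS         # 256

def add_with_flags(a, b):
    full = a + b
    result = full % WORD_MAX      # the part that fits in the accumulator
    flags = {
        "zero": result == 0,                   # is the result zero?
        "negative": result >= WORD_MAX // 2,   # is the top (sign) bit set?
        "carry": full >= WORD_MAX,             # too big for the accumulator?
    }
    return result, flags

print(add_with_flags(200, 100))   # (44, {... 'carry': True ...}) - overflowed
print(add_with_flags(0, 0))       # (0, {'zero': True, ...})
```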
3.1.3 The control unit and other registers

• The control unit has the role of coordinating the movement of data and instructions within the processor.
It does this by sending out electrical pulses, called control signals, that
activate the necessary connections between main memory, cache,
registers, ALU and FPU, as required, to execute the instruction.

• The address register holds the memory address of the next instruction to be executed.

• The data registers are where data is stored when it is on its way to the ALU or FPU, or when a result is on its way back to main memory.
11
3.1.4 Multi-core processors
• A multi-core processor is a single chip that contains two or more
independent processors called cores.
• Each core performs the usual functions of loading data and instructions into
registers and performing arithmetic manipulations or floating-point
manipulations, but instructions can be shared between each of the cores and
run at the same time, increasing the overall speed of programs.

• You may think that four cores all working simultaneously would
make a program run four times as fast. However, this is far from
being the case, for several reasons.
• Firstly, each core requires its share of the data and instructions to be moved
from the shared main memory into cache memory, and from there into its
registers.
• Each core may have its own Level 1 cache memory, but often the other levels of cache memory are shared between them. This can lead to delays while the cores wait for data and instructions to be transferred.
12
3.1.4 Multi-core processors
• In order to take advantage of multiple cores, the program
has to be written in such a way that a task can be split up into
independent sub-tasks, each of which can be completed by a
core, and then, if necessary, reassembled into a final
solution. This process is called threading – with each of the independent tasks being coordinated by a separate thread (a sketch of this idea follows the figure below).

13

A multi-core processor where each core is processing a separate thread. (L1, Level 1; L2, Level 2.)
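Here is a minimal Python sketch of splitting a task into independent sub-tasks and reassembling the results. Because CPython's threads share a global interpreter lock for CPU-bound work, the sketch uses a pool of processes to occupy several cores; the splitting-and-reassembling idea is the same.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Independent sub-task: sum one slice of the overall range."""
    start, stop = bounds
    return sum(range(start, stop))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)          # make sure nothing is lost

    # Each chunk can run on a separate core; the partial results are
    # then reassembled into the final answer.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(partial_sum, chunks))

    print(total == sum(range(n)))            # True
```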
3.2 Storing and moving data and instructions
(Main Memory)

• Main memory is where the instructions, and the data they act on, are
loaded from when a program is executed.
• It is volatile memory, which means that its content is lost when the power
is switched off.
• Each byte in main memory is numbered in sequence, so that it has a
unique memory address.
• In main memory, every memory address can be directly accessed, which is
why this type of memory is referred to as random-access memory (RAM).
• Most forms of memory today are random access, but for historical reasons
we still tend to reserve the acronym RAM for main memory.
• An advantage of any form of random-access memory is that accessing any location in memory takes the same amount of time, regardless of whether it is stored at a location with a low or a high memory address (the sketch below illustrates this).
14
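A rough way to see this from a program is to time access to a ‘low’ and a ‘high’ position in a large in-memory list; the measured times are indicative only and will vary from machine to machine.

```python
import timeit

data = list(range(10_000_000))

# Indexed access into random-access memory does not get slower for
# higher addresses - there is no need to read through from the start.
low  = timeit.timeit(lambda: data[5], number=1_000_000)
high = timeit.timeit(lambda: data[9_999_995], number=1_000_000)
print(f"low index:  {low:.3f} s")
print(f"high index: {high:.3f} s")   # essentially the same as the low index
```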
3.2.3 Buses and clocks
• The wiring that connects the various internal and external
components of a computer is known as a bus. Internal
buses inside the processor connect the various registers
and cache memory together.
• The control bus: this bus carries the control signals
between the processor and main memory (and other
parts of the computer system).
• The address bus: this bus carries the addresses of
memory locations to be accessed.
• The data bus: this bus transfers data from place to
place.
15
3.2.3 Buses and clocks

• All computers have a processor clock, which sends out pulses at regular intervals.
• The clock sends a synchronising signal between the
circuits within the processor to ensure that they remain
in step.
• You can think of each pulse of the processor clock as
being like the rhythmic stroke of a pump that regulates
the movement of data and instructions along the buses
within the processor.

16
3.2.4 The operating system
• Managing the various resources of a computer and coordinating
the hardware components is the job of a collection of programs
known as the operating system.
• In early computers, all the direct interaction between devices,
users and executing programs was coded into each program. This
made programs difficult to write, requiring specialist knowledge
of how to interact with devices connected to the computer (so
called peripheral devices) such as keyboards, screens, printers
and disk drives.
• Without an operating system, the programmers also had to have
specific knowledge of the components of the processor on which
their programs would execute.
17
3.2.4 The operating system

• The operating system provides an interface between the program and the rest of the computer system.
• The operating system allows the user, who writes the
program, to interact at a higher level of abstraction with
the computer that executes the program.
• The operating system is independent of the type of processor, and this makes it possible to talk generally about using a Windows computer or a Mac.

18
3.2.4 The operating system
Some of the functions that the operating system provides are as follows:
• Provision of a user interface:
• It provides us with a means of inputting data and instructions, and displaying output in a form
that users can understand.

• Management of multiple programs:


• The operating system supports hardware designed to enable the processor to switch
between different executing programs in order to multitask.

• Management of memory:
• It is the job of the operating system to allocate appropriately sized areas of memory to each
executing program, and to ensure that program instructions and data do not interfere with
each other or with data and instructions of other programs.

• Coordination and control of peripheral devices:


• In order to carry out its tasks, a computer will need to communicate with one or more peripheral devices. For example, it may need to receive data from the keyboard or mouse, read from a file on a disk, send output to the monitor or printer, connect to a network, and so on.
19
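From a program's point of view, these services are reached through ordinary library calls. The short Python sketch below touches a few of them; the exact figures printed will depend on the machine and operating system it runs on.

```python
import os, platform, shutil

# The operating system hides the hardware behind simple services.
print(platform.system())              # which operating system is running
print(os.cpu_count())                 # how many processor cores it manages

total, used, free = shutil.disk_usage(".")
print(free // 1_000_000, "MB free")   # secondary storage, reported by the OS

# Writing to a file: the program never drives the disk hardware itself -
# the operating system coordinates the peripheral device on its behalf.
with open("demo.txt", "w") as f:
    f.write("hello\n")
os.remove("demo.txt")
```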
3.3.2 Secondary memory

• Secondary memory (or secondary storage) is the term given to the storage devices that contain persistent data.
• In most devices, secondary storage is built into the case and is usually
supplied in the form of a hard disk drive, or a solid-state drive.

• Secondary memory is used to store program code and data files that are not immediately needed by the computer system.
• Secondary memory devices usually make up the bulk of the
memory in desktop, laptop and mainframe computers, and
other devices such as mobile phones or tablets, but may be
completely absent in embedded computer systems.
20
The memory hierarchy

• The fastest memory access is in the registers; however, register memory is very expensive.
• It is also the case that the registers are built directly into the processor, so there are usually a fixed number of them – typically fewer than 50.
• The slowest access is to hard disk storage, roughly 10,000 times slower than for the registers, but hard disk storage is much cheaper, and expansion is usually possible by adding more disk drives.
21
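The gap between main memory and secondary storage can be glimpsed with a rough timing sketch. The file name and sizes below are arbitrary, and on a modern machine the operating system's file cache narrows the measured gap considerably, so treat the numbers as indicative only.

```python
import os, tempfile, timeit

payload = os.urandom(1_000_000)                 # 1 MB of data held in RAM
path = os.path.join(tempfile.gettempdir(), "tm112_demo.bin")
with open(path, "wb") as f:                     # copy it out to disk
    f.write(payload)

in_memory = timeit.timeit(lambda: bytes(payload), number=100)

def from_disk():
    with open(path, "rb") as f:                 # open and read the file back
        return f.read()

on_disk = timeit.timeit(from_disk, number=100)
print(f"copy within memory:  {in_memory:.4f} s")
print(f"read back from disk: {on_disk:.4f} s")  # noticeably slower
os.remove(path)
```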


3.5 Programmers, programming and programs

• Early computer programmers wrote instructions in a form that could be directly understood by their computer’s family of processors, i.e. in some dialect of machine language (also known as machine code).
• These consisted of binary patterns that were entered
directly into the hardware of the machine using plug
boards or panel switches.

22
3.5 Programmers, programming and programs

• An assembly language is a programming language that uses human-readable symbolic instructions and symbolic addresses that translate into machine language instructions on a one-to-one basis.
• A program written in assembly language has the ability to directly
access all the features and instructions available on the processor it is
designed for.
• Whenever a program is written in a language other than machine
language, the instructions in the original program (called the source
code) need to be converted into equivalent machine language
instructions.

23
3.5 Programmers, programming and programs

• The task of converting the source code into machine language is carried
out by special programs called translators.
• When the source code is in assembly language, the program that does
this translation into machine code is called an assembler.
• An assembler takes an assembly language program and generates an
equivalent program in machine language, which can then be loaded
into memory and executed. Since each processor family has a different
machine language, and therefore a different assembly language, they
each require a different assembler.

24
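The one-to-one translation an assembler performs can be sketched for a made-up processor. The three mnemonics and their opcodes below are invented for this illustration; a real assembler targets the actual machine language of one processor family.

```python
# A toy assembler for an imaginary processor with three instructions.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03}

def assemble(source):
    """Translate each symbolic instruction into one machine-language word."""
    machine_code = []
    for line in source.strip().splitlines():
        mnemonic, operand = line.split()
        # one assembly instruction -> one machine instruction (one-to-one)
        machine_code.append((OPCODES[mnemonic] << 8) | int(operand))
    return machine_code

program = """
LOAD 10
ADD 11
STORE 12
"""
print([hex(word) for word in assemble(program)])   # ['0x10a', '0x20b', '0x30c']
```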
3.5 Programmers, programming and programs

• It would be exceptionally tedious (not to mention error-prone) to have to deal with computer programs by writing in low-level languages and writing code specifically for each family of processors, so modern computing is not done in this way.
• Instead, high-level programming languages are used, in which each
instruction in the high-level language is translated into many
instructions in the machine language of the processor on which it is to
be executed.
• High-level programming languages include Python, JavaScript, Java, C++, Smalltalk, Scratch and a whole range of application-specific
languages that attempt to make the process of writing programs easier
for the human involved.
25
3.5 Programmers, programming and programs

• In compilation, the program written in the high-level language, called the source code or source program, is used as the input to a translator program called a compiler.
• The compiler translates the entire source program into the machine language understood by the processor; the translated program is referred to as the object code or object program.
• The object code is then saved, and it is this machine language program that is loaded into memory and executed when the program is run.
• Languages such as C, C++, and Visual Basic are designed to be
compiled.
26
3.5 Programmers, programming and programs

• Whereas a compiler translates all the source code in one go, an interpreter
translates each instruction in the source code only when it is required for
that instruction to be executed.
• There is never a complete translation of the whole of the source code into
machine language, and so no object code program is generated.
• The advantage of an interpreted language is that the potentially lengthy
process of compilation does not need to be gone through for each small
change in the source code.
• The main disadvantage is that the translation process must take place every time a program is executed, resulting in slower execution of the program. As with compilers, each processor family needs a different interpreter.
• Languages such as JavaScript, Perl and Basic are designed to be
interpreted.
27
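The trade-off can be imitated with Python's built-in compile() and exec() functions. This is only an analogy – compile() produces Python bytecode rather than machine language – but it shows why translating the source once and reusing the result is faster than re-translating it on every execution.

```python
import timeit

source = "total = sum(i * i for i in range(1000))"

# 'Compiled' style: translate the whole source once, then reuse the
# translated form (the code object) on every execution.
code_object = compile(source, "<demo>", "exec")
compiled_runs = timeit.timeit(lambda: exec(code_object), number=10_000)

# 'Interpreted' style: translate the source again every time it is run.
interpreted_runs = timeit.timeit(
    lambda: exec(compile(source, "<demo>", "exec")), number=10_000)

print(f"translate once, run many times: {compiled_runs:.3f} s")
print(f"re-translate on every run:      {interpreted_runs:.3f} s")
```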
3.5 Programmers, programming and programs

• Virtualisation is a term used to describe any configuration where a physical computer system is emulated using software.
• Using a virtual machine to interpret bytecode as we described above is just
one example, but there are many different kinds of virtualisation. For
example, if you use a Mac, you might have a virtual machine on your
computer that allows you to also run an emulated Windows platform.

• Cloud computing relies on virtual machines sitting on top of remote servers, allowing the server’s processing and storage capacity to be shared between several users, by using a software layer called a hypervisor to act as an intermediary between multiple ‘guest’ operating systems and the host operating system that directly interacts with the hardware.

28
Summary
• In this part, you have learned how the main components
of a computer work together to execute a program.

• Knowing a little about processors, memory and various peripherals helps you to be more aware of how to match specifications to a person’s particular computing needs.

• We have also explored how code written in high-level programming languages is turned into the instructions a processor understands.

29
