Micro 3


Computational Model and Architecture

Basic Concepts
• A program is a static sequence of high-level statements or constructs broken down into simple,
structured instructions.
• The central processing unit manipulates instructions and data to perform computations.
• The execution of a program is a dynamic process that can be abstracted using the notion of flow.
o The instruction flow is the succession of executed instructions, that is, the path taken by
execution through the program’s code.
o The control flow or flow of control is the succession of path selections for an execution.
o The data flow is the path that the data takes during an execution.
• Mechanisms of the Computational Unit
o The control mechanism specifies how the computation is executed and how one instruction causes
the execution of another.
▪ In control-driven execution, an instruction is executed when it is selected by the control flow;
its execution then designates the following instruction.
▪ In data-driven execution, an instruction is executed when all of its arguments are available (a
minimal sketch follows this list).
▪ With demand-driven execution, an instruction is executed if its result is necessary for the
execution of another instruction that is already executing.
▪ For pattern-driven execution, execution of an instruction is conditioned on the matching of
certain patterns against a goal statement.
o The data mechanism specifies how an instruction obtains its operands and how the result is
communicated to others or exactly how computational units exchange data.
▪ In shared memory, the main memory stores a single copy of the information available for
computation. Data is shared and accessed by reference.
▪ In message passing, a copy of the operands is sent to each unit of computation. Here, the
data access mechanism is by value.
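
As a minimal sketch of the data-driven mechanism described above, the following Python fragment fires an instruction (a graph node) as soon as all of its operands have arrived; the graph encoding, node contents, and operand names are all hypothetical, chosen only for illustration.

from operator import add, mul

# Each node: (operation, names of input operands, name of the output).
nodes = [
    (add, ("a", "b"), "s"),  # s = a + b
    (mul, ("s", "c"), "p"),  # p = s * c; must wait until s arrives
]

tokens = {"a": 2, "b": 3, "c": 4}  # initially available data (passed by value)

fired = set()
while len(fired) < len(nodes):
    for i, (op, ins, out) in enumerate(nodes):
        # Data-driven rule: execute when ALL arguments are available.
        if i not in fired and all(name in tokens for name in ins):
            tokens[out] = op(*(tokens[name] for name in ins))
            fired.add(i)

print(tokens["p"])  # (2 + 3) * 4 = 20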

Computation Model
It is a high-level abstraction that explains how computations are carried out. It specifies the basic entities for the
computation, the possible operations, and the execution and data models.
• The Turing model, named after its inventor (Turing, 1937), makes it possible to know whether a function is
computable.
• The Object-oriented model (Dahl and Nygaard, 1966; Nguyen and Hailpern, 1986) uses the object as the basic
entity. An object encapsulates attributes (variables) and methods (functions), and its methods are invoked on it
as directed by messages.
• The Dataflow model is a data-driven execution model with message passing. The basic entity is the data to
which operations will be applied. The instructions produce data consumed by other instructions.
• The Applicative model uses the argument as the basic entity to which functions are applied for evaluation (a sketch follows this list).
• The Predicate logic-based model is based on a set of objects to which predicates are applied. A predicate is a
property or attribute of an object. The control mechanism is of the “pattern-driven” type, and the data
mechanism is of the shared data type.
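
A small sketch of the applicative, demand-driven idea, using Python thunks (zero-argument lambdas); the function names are hypothetical. A computation runs only at the moment another computation demands its result.

def expensive_square(x):
    print(f"computing {x}**2")  # the side effect shows when execution happens
    return x * x

# Build suspended computations: nothing executes yet.
a = lambda: expensive_square(3)
b = lambda: expensive_square(4)

# Demanding only a's result triggers only a's execution; b never runs.
print(a() + 1)  # prints "computing 3**2", then 10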

Architecture in Computer Systems


The computation model relies on computer architecture and a programming language.
• The main language types are procedural-imperative, object-oriented, functional, and logical.
• The term “architecture” was originally defined as “the attributes of the system seen by the programmer, that
is, the conceptual structure and the functional behavior”, based on a computation model and its programming
languages (Amdahl et al., 1964).
• The term architecture also refers to the study and classification of computers.
• The semantic gap is the difference between a high-level programming language’s (HLL) computation model
and the architecture that must support the execution of programs written in that language.
Architecture Models
• Original von Neumann Model
o This architecture is at the heart of all current processors, even if new mechanisms are added to
accelerate computation time or data access speed, such as the pipeline, cache memory, or
prefetching of instructions or data.
▪ The problem description model is procedural, a sequence of instructions executed on an
incoming flow of data and producing an outgoing flow of data.
▪ The centralized execution model is based on the semantics of state transition (sketched below).
▪ The stream of instructions executed on data stored in memory is unique (single-instruction
stream).
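
A minimal sketch of this state-transition semantics in Python: the machine state is the pair (program counter, memory), and each step consumes exactly one instruction from a single instruction stream. The two opcodes are hypothetical, kept small for illustration.

def step(pc, mem, program):
    op, arg = program[pc]  # fetch and decode
    if op == "INC":
        mem[arg] += 1      # execute: transform the state
        return pc + 1, mem
    return None, mem       # "HALT": no successor state

program = [("INC", "x"), ("INC", "x"), ("HALT", None)]
pc, mem = 0, {"x": 0}
while pc is not None:
    pc, mem = step(pc, mem, program)
print(mem["x"])  # 2
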
• Modern von Neumann Model
o This architecture has been extended over time to offer additional functionality in high-level languages
and increase processing speed.
▪ The concept of the stack was introduced to support recursion in computation.
▪ New kinds of data were added: binary-coded decimal, fixed- and floating-point real numbers,
and character strings.
▪ Complex addressing modes were added: indexing and indirection.
▪ Tables were supported using the concepts of the pointer and of dynamic entities.
▪ The capacity of main memory was increased using virtual memory mechanisms, namely
paging and segmentation (a translation sketch follows this list).
▪ Parallel computation was introduced by multiplying the functional units, first externally and
then internally, for example in superscalar architectures and by implementing the pipeline
structure.
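
To make paging concrete, here is a minimal address-translation sketch; the 4 KiB page size, the single-level page table, and its contents are all assumptions made for illustration.

PAGE_SIZE = 4096

page_table = {0: 7, 1: 3}  # virtual page number -> physical frame number

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        raise RuntimeError("page fault")  # the OS would load the page here
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1010)))  # vpn 1, offset 0x10 -> frame 3 -> 0x3010
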
• Pure Harvard Architecture Model
o To avoid bottlenecks, the computer stores code and data in two distinct memories that operate
independently.
▪ Each possesses its own communication path (i.e., bus), so access conflicts are avoided.
▪ Consequently, a given address corresponds to several storage locations, each belonging to a
separate address space (sketched below). Parallelism is intrinsic to this model.
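
A minimal sketch of this separation, with contents chosen arbitrarily: the same numeric address names two distinct locations, one per address space, so code and data can be fetched in the same cycle over separate buses.

instruction_memory = {0: "LOAD R1, 0"}  # address 0 in the code space
data_memory = {0: 42}                   # address 0 in the data space

# Address 0 designates two distinct storage locations.
print(instruction_memory[0], "|", data_memory[0])
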
• Modified Harvard Architecture Model
o The modern variants gathered under this architectural umbrella mix the von Neumann and
Harvard architectures.
o Memory is unified, but specialized memories placed closer to the processor improve throughput:
a split cache holds instructions and data in separate cache memories.
o Unlike in the original model, address zero refers to a single cell of the unified memory, which
contains both instructions and data, but the communication buses to the caches remain separate
(see the sketch below).
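
A minimal sketch of this organization, assuming (for illustration only) that a cache can be modeled as a bare dictionary: a single unified memory sits behind separate instruction and data caches, each filled on a miss.

unified_memory = {0: "ADD R1, R2", 1: 99}  # instructions and data share one space
i_cache, d_cache = {}, {}

def fetch_instruction(addr):
    if addr not in i_cache:                # miss: fill from unified memory
        i_cache[addr] = unified_memory[addr]
    return i_cache[addr]

def load_data(addr):
    if addr not in d_cache:
        d_cache[addr] = unified_memory[addr]
    return d_cache[addr]

print(fetch_instruction(0), load_data(1))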

Parallelism
It is a method of computing in which an overall complex task is broken into parts that run simultaneously on
multiple CPUs, thereby reducing the processing time.
• Instruction-Level Parallelism
o It refers to how many operations a program can perform simultaneously.
o It brings together design techniques from processor and compiler families to overcome
sequential execution.
o This technique speeds up the execution of instructions, particularly those related to transfers
between the CPU and main memory (and vice versa) and to arithmetic computation with integer
and floating-point numbers (see the dependency sketch below).
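
A minimal dependency sketch, with a hypothetical instruction encoding of (destination register, source registers): two instructions may issue in the same cycle only when neither reads or overwrites a register the other writes.

instructions = [
    ("r1", ("r4", "r5")),  # r1 = r4 op r5
    ("r2", ("r6", "r7")),  # r2 = r6 op r7 (independent of the first)
    ("r3", ("r1", "r2")),  # r3 = r1 op r2 (depends on both)
]

def independent(i, j):
    (d1, s1), (d2, s2) = instructions[i], instructions[j]
    return d1 not in s2 and d2 not in s1 and d1 != d2

print(independent(0, 1))  # True: may execute in the same cycle
print(independent(0, 2))  # False: the third must wait for r1
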
• Thread-Level Parallelism
o It allows a program to run multiple threads at the same time.
o It is also referred to as multithreaded parallelism, which breaks down along the lines of two approaches:
▪ Explicit Multithreading (Chip Multithreading) – The hardware issues instructions from
multiple explicitly created threads in a cycle.
• Hardware Multithreading – Maintains multiple thread contexts in a single processor.
o Fine-grained – The processor switches between threads on every cycle.
o Coarse-grained – The processor switches threads only on costly stalls, such as
cache misses.
o Hyperthreading – It consists of transforming parallelism at the activity
thread-level into parallelism at the instruction level by issuing instructions from
several threads in the same cycle.
• Chip Multiprocessing – It replicates an entire processor core for each thread to
support multiple threads in a single processor chip.
▪ Implicit Multithreading – Threads are generated implicitly by the hardware or the compiler.
• Multicore Architecture
o A multicore microprocessor (single-chip multiprocessor) is made up of several independent cores
gathered on the same chip (die). Example: Dual-core, quad-core, etc.
o When their number exceeds several hundred or even a thousand cores, one speaks of many-core
and massively multicore approaches (Borkar, 2007).
o Each core is a modern, pipelined core with several levels of cache (a parallel-execution sketch
follows this list).
▪ Symmetric Multiprocessing – The cores are identical and share memory.
▪ Asymmetric Multiprocessing – One or more of the cores are more powerful than the others.
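
A minimal sketch of thread-level parallelism on a multicore chip, using a process pool so each task can run on its own core (in CPython, processes rather than threads are the usual route to true parallelism); the workload and the assumption of four available cores are illustrative.

from multiprocessing import Pool

def count_multiples_of_7(limit):
    return sum(1 for n in range(limit) if n % 7 == 0)

if __name__ == "__main__":
    with Pool(processes=4) as pool:  # one worker per assumed core
        results = pool.map(count_multiples_of_7, [10**6] * 4)
    print(results)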

Instruction Set Architecture (ISA)


This refers to the architecture of the processor seen by the programmer. It is the interface between software and
hardware, providing only the hardware details necessary for programming and compilation.
• Instruction Set
o It consists of defining the operations, the format of the instructions, their coding (operation code)
and addressing modes, and the number, type, and size of the explicit operands.
• Execution modes
o The processor supports several execution modes: the application-level programmers’ model, the
system-level programmers’ model, and the supervisor (or system) mode.
• Storage components
o The potential information storage components are the general-purpose register, the main memory,
and the stack (based on registers or in main memory).
▪ In a register-register architecture, only the load and store instructions access the memory to
(un)load the registers containing the operands and the result of the computation. The other
instructions only use the registers as a memory to improve access time.
▪ In a memory-memory architecture, all operands are in memory. An instruction references at
most three operands in memory, so the instruction size is large and variable, depending on
the addressing mode (the two styles are compared in the sketch below).
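
The two storage models can be compared on the same computation, c = a + b; both instruction sequences below are hypothetical encodings written as plain Python data.

register_register = [               # only LOAD and STORE touch memory
    ("LOAD",  "r1", "a"),
    ("LOAD",  "r2", "b"),
    ("ADD",   "r3", "r1", "r2"),    # operates purely on registers
    ("STORE", "r3", "c"),
]

memory_memory = [                   # one instruction, three memory operands
    ("ADD", "c", "a", "b"),
]

mem_ops = sum(1 for ins in register_register if ins[0] in ("LOAD", "STORE"))
print(f"{len(register_register)} short instructions ({mem_ops} touching memory) "
      f"vs {len(memory_memory)} long instruction with 3 memory operands")
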
• Types
o Various popular instruction sets are used in the industry and are of theoretical importance. Each one
has its usage and advantages.
▪ Reduced Instruction Set Computer (RISC) is an instruction set architecture (ISA) that has
fewer cycles per instruction (CPI). Examples: ARM, MIPS, OpenRISC, and SPARC.
▪ Complex Instruction Set Computer (CISC) is an instruction set architecture (ISA) that has
fewer instructions per program than a Reduced instruction set computer (RISC). Examples:
x86, z/Architecture, and Intel 8080.
▪ Minimal Instruction Set Computer (MISC) is a processor architecture with a very small
number of basic instruction operations and corresponding opcodes. Example: Transputer.
▪ Very Long Instruction Word (VLIW) is an instruction set architecture designed to exploit
instruction-level parallelism (ILP). Central processing units (CPU, processor) mostly allow
programs to specify instructions to execute in sequence only. A VLIW processor allows
programs to explicitly specify instructions to execute in parallel. Examples: Transmeta Crusoe
and Elbrus 2000.
▪ Explicitly Parallel Instruction Computing (EPIC) is an instruction set that permits
microprocessors to execute software instructions in parallel by using the compiler, rather
than complex on-die circuitry, to control parallel instruction execution. This was intended to
allow simple performance scaling without resorting to higher clock frequencies. Example:
Itanium.
▪ One Instruction Set Computer (OISC) is an abstract machine that uses only one instruction,
obviating the need for a machine-language opcode. OISCs have been recommended as
guides in teaching computer architecture and have been used as computational models in
structural computing research. Example: Cryptoleq.
▪ Zero Instruction Set Computer (ZISC) is a computer architecture based on pattern matching
and the absence of (micro-)instructions in the classical sense. These chips are often
compared to neural networks and are marketed by the number of "synapses" and "neurons"
they provide. Examples: NI1000 and CM1K.
• Basic Examples of an Instruction Set (a toy simulator follows this list)
o LOAD - loads information from RAM to the CPU
o STORE - stores information to RAM
o OUT - outputs information to a device, e.g., monitor
o IN - inputs information from a device, e.g., keyboard
o ADD - adds two (2) numbers together
o COMPARE - compares numbers
o JUMP - jumps to designated RAM address
o JUMP IF - conditionally jumps to a designated RAM address
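
The following toy simulator sketches how such a set could execute; the single accumulator, the dictionary RAM, and the operand encoding are simplifying assumptions, and IN is omitted for brevity.

def run(program, ram):
    acc, flag, pc = 0, False, 0
    while pc < len(program):
        op, *args = program[pc]
        pc += 1
        if op == "LOAD":      acc = ram[args[0]]   # RAM -> CPU
        elif op == "STORE":   ram[args[0]] = acc   # CPU -> RAM
        elif op == "ADD":     acc += ram[args[0]]
        elif op == "COMPARE": flag = acc == ram[args[0]]
        elif op == "JUMP":    pc = args[0]
        elif op == "JUMP_IF": pc = args[0] if flag else pc  # "JUMP IF" above
        elif op == "OUT":     print(acc)           # e.g., to a monitor

# c = a + b, then output it
ram = {"a": 2, "b": 3, "c": 0}
run([("LOAD", "a"), ("ADD", "b"), ("STORE", "c"), ("OUT",)], ram)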

References:
7 Types of Instruction Set. (n.d.). In Iq.opengenus.org. Retrieved on February 7, 2020, from https://iq.opengenus.org/seven-types-of-instruction-set/
Darche, P. (2020). Computer engineering series: Microprocessor 1: Prolegomena – calculation and storage functions – models of computation and computer architecture. ISTE & Wiley.
Darche, P. (2020). Computer engineering series: Microprocessor 2: Communication in a digital system. ISTE & Wiley.
Darche, P. (2020). Computer engineering series: Microprocessor 3: Core concepts – hardware aspects. ISTE & Wiley.
de Lamadrid, J. (2018). Computer organization: Basic processor structure. CRC Press.
Examples of Instruction Sets. (n.d.). In Iq.opengenus.org. Retrieved on February 7, 2020, from https://iq.opengenus.org/examples-of-instruction-sets/
Farahmand, F. (2016). Fundamentals of microprocessor and microcontroller [Lecture notes]. Retrieved from Sonoma State University.
Instruction Set. (2018). In Computerhope.com. Retrieved on February 7, 2020, from https://www.computerhope.com/jargon/i/instset.htm
Parallel Computing. (n.d.). In Omnisci.com. Retrieved on February 7, 2020, from https://www.omnisci.com/technical-glossary/parallel-computing
Toomsalu, A. (n.d.). Microprocessor Systems I: Microprocessor systems architecture [Lecture notes]. Retrieved from Tallinn University of Technology – Department of Computer Engineering.
Zhu, Z. (n.d.). Chip-level multithreading and multiprocessing [PDF]. Retrieved on February 7, 2020, from http://home.eng.iastate.edu/~zzhang/courses/cpre585/slides/Lecture25.pdf
