Computer Architecture (General Approach)


A computer is a multifunctional electronic device designed for the accumulation, processing and transmission of information. The architecture of a computer is understood as its logical organization, structure and resources, that is, the means of the computing system that can be allocated to data processing for a given time interval.
The overwhelming majority of computing devices are based on the general principles formulated by the American scientist John von Neumann. Computers built on these principles have the classic architecture. The variety of modern computing devices is very large, but their architectures are based on common logical principles (the von Neumann architecture), which make it possible to distinguish the following main components in any computing device:
• Memory - receives information from other devices, stores it, and issues it on request to other components;
• Processor (CPU, central processing unit) - includes a control unit (CU), which consists of an instruction decoder, an instruction register (IR), a unit for generating the executive (effective) address and an instruction counter, and an arithmetic logic unit (ALU), which performs arithmetic and logical operations on data (called operands); the two units are separated only conceptually and are not structurally distinct;
• input unit;
• output unit.
These devices are connected by communication channels (buses) through which information is transmitted.
Block diagram labels: control unit (CU), instruction counter, address buffer, control and decoding logic, instruction register, data buffer; arithmetic logic unit (ALU), internal registers, control signals.

John von Neumann's principles:


1. The principle of programmed control. The program consists of a set of instructions that the processor executes automatically, one after another, in a certain sequence.
A program is fetched from memory using the instruction counter. This processor register sequentially increases the address of the next instruction stored in it by the instruction length. Since the instructions of a program are located in memory one after another, the fetching of a chain of instructions from sequentially located memory cells is organized in this way. If, after an instruction has been executed, it is necessary to proceed not to the next instruction but to some other one, conditional or unconditional jump instructions are used; they load the number of the memory cell containing the next instruction into the instruction counter. Fetching instructions from memory stops after the "stop" instruction is reached and executed. Thus, the processor executes the program automatically, without human intervention. The basic cycle consists of the steps below (a minimal code sketch of the cycle follows the list).

a. Initialization
b. Instruction fetch
c. Incrementing the instruction counter
d. Instruction decoding and execution
e. End of operation (halt)
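Below is a minimal sketch of how an instruction counter drives this cycle: sequential fetch, an unconditional jump that overwrites the counter, and a stop instruction that ends execution. The instruction names and memory layout are invented for illustration, not taken from a real machine.

```python
# Toy von Neumann-style fetch-execute loop (illustrative only).
# Memory holds instructions as (opcode, operand) tuples; a real machine
# would store binary words, but the control flow is the same.
memory = {
    0: ("LOAD", 100),   # load the word at cell 100 into the accumulator
    1: ("ADD", 101),    # add the word at cell 101 to the accumulator
    2: ("STORE", 102),  # write the accumulator back to cell 102
    3: ("JUMP", 5),     # unconditional jump: overwrite the instruction counter
    4: ("ADD", 101),    # skipped because of the jump above
    5: ("STOP", None),  # fetching stops here
    100: 7, 101: 35, 102: 0,
}

def run(memory):
    pc = 0              # instruction counter: address of the next instruction
    acc = 0             # accumulator register
    while True:
        opcode, operand = memory[pc]   # fetch the instruction at address pc
        pc += 1                        # increment the counter by one instruction
        if opcode == "LOAD":
            acc = memory[operand]
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STORE":
            memory[operand] = acc
        elif opcode == "JUMP":
            pc = operand               # jump: load a new address into the counter
        elif opcode == "STOP":
            break                      # "stop" ends automatic execution
    return memory[102]

print(run(memory))  # -> 42
```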

2. The principle of memory homogeneity. Programs and data are stored in the same memory, so the computer does not distinguish what is stored in a given memory location: a number, a text, or an instruction. Instruction memory and data memory both contain words in a binary alphabet, which means the two can be combined into one main, or random access, memory (RAM). It holds both instructions (program code) and data written as words in a binary alphabet.
In different situations during the execution of a program, the contents of the same memory area can be treated either as data or as program code, and the same operations can be performed on instructions as on data.
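A short sketch of this idea, assuming a toy 16-bit word format (4-bit opcode, 12-bit address) invented for the example: the same binary word can be read as a number or split into instruction fields, depending only on how the processor chooses to use it.

```python
# The same 16-bit word interpreted two ways (toy format: 4-bit opcode,
# 12-bit address; the encoding is an illustrative assumption).
word = 0b0001_0000_0110_0100   # one word stored in a memory cell

# Treated as data: just an unsigned number.
as_number = word               # 4196

# Treated as an instruction: split into opcode and address fields.
opcode  = word >> 12           # 0b0001, e.g. "LOAD"
address = word & 0x0FFF        # 0b0000_0110_0100 = 100

print(as_number, opcode, address)   # 4196 1 100
```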
3. The principle of addressability. Structurally, main memory consists of numbered cells, and any cell is available to the processor at any time. Each cell stores a word in the two-letter alphabet {0, 1}. Knowing the cell number, you can retrieve exactly the word stored in it and, depending on your needs, either use it to perform the next operation or write a new word into the cell.
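A minimal illustration of addressability, using a Python list as the numbered cells (cell count and contents are arbitrary here): knowing a cell number is enough to read back exactly the word stored there or to overwrite it with a new one.

```python
# Main memory as numbered cells; each cell holds one binary word.
memory = [0b0000_0000] * 256     # 256 cells, all initially zero

memory[42] = 0b1010_0110         # write a word into cell number 42
word = memory[42]                # read back exactly the word stored there
memory[42] = word | 0b0000_0001  # or overwrite it with a new word

print(bin(memory[42]))           # 0b10100111
```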
Computers based on these principles are said to be of the von Neumann type.

Memory functions:
• receiving information from other parts of the computing device;
• storage of information (long-term storage is provided by external storage devices);
• delivery of information on request to other parts of the computing device.
Processor functions:
• data processing according to a given program by performing arithmetic
and logical operations;
• program control of the operation of the components of the computing
device;
• short-term storage of a number or an instruction;
• performing certain operations on them.
The processor contains a number of special storage cells called registers. The main element of a register is an electronic circuit called a flip-flop, which can store one binary digit (bit). A register is a set of flip-flops connected to each other in a certain way and sharing a common control system. A register has two functions:
1) short-term storage of a number or an instruction;
2) performing certain operations on them.
A register differs from a memory cell in that it can not only store a binary code but also transform it.
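A rough sketch of this distinction, with class and method names invented for the example: the register below stores bits in individual flip-flops, like a memory cell does, but it can also transform its contents, here by shifting the stored code.

```python
class FlipFlop:
    """Stores a single binary digit (bit)."""
    def __init__(self):
        self.bit = 0

class Register:
    """A set of flip-flops with a common control system (simplified)."""
    def __init__(self, width=8):
        self.flip_flops = [FlipFlop() for _ in range(width)]

    def load(self, value):
        # Store: distribute the bits of `value` over the flip-flops.
        for i, ff in enumerate(self.flip_flops):
            ff.bit = (value >> i) & 1

    def value(self):
        return sum(ff.bit << i for i, ff in enumerate(self.flip_flops))

    def shift_left(self):
        # Transform: a memory cell only stores a code, a register can also
        # convert it, e.g. shift it one position to the left.
        self.load((self.value() << 1) & 0xFF)

r = Register()
r.load(0b0001_0101)
r.shift_left()
print(bin(r.value()))   # 0b101010
```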
There are several types of registers, differing in the kind of operations they perform. Some important registers have special names, for example:
• instruction counter - a CU register whose contents correspond to the address of the next instruction to be executed; it serves for the automatic fetching of the program from sequential memory cells;
• adder (accumulator) - an ALU register that participates in the execution of every operation;
• instruction register - a CU register that stores the instruction code for the period of time required for its execution.
The basic idea of sequential processing, which describes the course of a computation according to a given algorithm, lies at the heart of most modern computing devices. The algorithms themselves are supplied from outside, by programmers; the computer is the executor.
The format of a three-address instruction of the von Neumann machine contains an operation code and three address fields: the addresses of the two operands and the address where the result is stored.
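As an illustration of such a format (the field widths are an arbitrary assumption, not a standard), a three-address instruction can be packed into a single word and unpacked back into its fields:

```python
# Toy three-address format: | opcode (8 bits) | addr1 | addr2 | addr_result |
# with 8 bits per address field; the widths are purely illustrative.

def pack(opcode, addr1, addr2, addr_result):
    return (opcode << 24) | (addr1 << 16) | (addr2 << 8) | addr_result

def unpack(word):
    return (word >> 24) & 0xFF, (word >> 16) & 0xFF, (word >> 8) & 0xFF, word & 0xFF

ADD = 0x01
word = pack(ADD, 100, 101, 102)   # "add cell 100 and cell 101, store in cell 102"
print(hex(word))                  # 0x1646566
print(unpack(word))               # (1, 100, 101, 102)
```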

The main steps of the machine cycle when executing an instruction, in the order in which they are performed (a code sketch of one such cycle follows the list):
1. Fetching (reading) the instruction from memory for execution. A long instruction may require several memory accesses. The instruction is placed in the instruction register.
2. Forming the executive (physical) addresses of the operands, taking into account their addressing modes.
3. Reading the operands from memory and placing them in the corresponding registers of the arithmetic logic unit of the central processor.
4. Performing the operation in the arithmetic logic unit according to the operation code stored in the instruction register.
5. Forming the result according to the operation code, setting the result attributes in the flags register, and writing the result to memory.
6. Changing the value of the instruction counter (if necessary).
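The sketch below walks through these six steps for a single three-address ADD instruction; the word layout and opcode value reuse the illustrative assumptions from the earlier sketches and do not describe a real processor.

```python
# One machine cycle for a three-address ADD, step by step (illustrative).
memory = {0: 0x01646566, 100: 7, 101: 35, 102: 0}   # program word + data
pc = 0                                               # instruction counter
flags = {"zero": False, "negative": False}

# 1. Fetch the instruction from memory into the instruction register.
ir = memory[pc]

# 2. Form the executive addresses of the operands (here they are direct).
opcode      = (ir >> 24) & 0xFF
addr1       = (ir >> 16) & 0xFF
addr2       = (ir >> 8) & 0xFF
addr_result = ir & 0xFF

# 3. Read the operands from memory into ALU registers.
op1, op2 = memory[addr1], memory[addr2]

# 4. Perform the operation indicated by the opcode (0x01 = ADD).
result = op1 + op2 if opcode == 0x01 else None

# 5. Form the result and the flags, then write the result back to memory.
flags["zero"] = result == 0
flags["negative"] = result < 0
memory[addr_result] = result

# 6. Change the instruction counter (no jump here, so simply advance it).
pc += 1

print(memory[102], flags, pc)   # 42 {'zero': False, 'negative': False} 1
```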

Required processor parts:


1. Instruction fetch unit. Instructions are fetched here from the address indicated in the instruction counter. The number of instructions read simultaneously depends on the number of decoder blocks installed, which helps to load each working cycle with as many instructions as possible.
2. Branch (transition) control unit. It is responsible for the efficient operation of the instruction fetch unit and determines the sequence of instructions to be executed.
3. Data fetch units. They take information from memory, selecting exactly the data needed at the given moment to execute the instruction.
4. Control unit (control device). It is the main element, because it distributes instructions and data.
5. Result write-back unit. It writes the result to memory after an instruction has been processed; the destination address is specified in the running task.
6. Interrupt handling unit. Thanks to the interrupt mechanism, the processor can work on several tasks at once by suspending the execution of one program and switching to other instructions.
7. Registers. Temporary results of instructions are stored here; this component can be thought of as a small, fast RAM. Its size usually does not exceed a few hundred bytes.
8. Instruction counter. It stores the address of the instruction that the processor will execute next.
A controller (adapter) is a device that connects peripheral (input/output) equipment or communication channels (buses) to the central processor, freeing the processor from direct control of this equipment.

At present, designers strive to increase the speed of data processing by raising the degree of parallelism of computations, using multiprocessor computers with multi-core processors. This trend is driven by the development of computer communications, which have united computers into the single worldwide Internet, where most of the traffic is no longer sequences of symbols but images, music, speech and video. The spread of modern broadband communication lines brings such processing tasks to the fore. von Neumann microprocessors have a complex hierarchical architecture, and the number of gates on a chip grows rapidly with the bit width of the input data. The advancement of semiconductor integrated-circuit technology has provided several decades of exponential growth in the performance of von Neumann computers, mainly through increased clock speeds and, to a lesser extent, through the use of parallelism.

The foundation of any processor is the instruction set architecture


The instruction set architecture (ISA) is, in a sense, the foundation of the processor: it determines how the processor works and how all of its internal subsystems interact with each other.
The elements of the processor can be divided into two main parts: the CU (control unit) and the ALU (operational unit). In simple terms, the processor is a train in which the driver (the CU) controls the various engine elements (the ALU). The ALU is like the engine: it is the path along which data travels while it is being processed. It receives the input data, processes it, and sends it to the correct location after the operation completes. The CU directs this data flow. Depending on the instruction (operation), the CU routes signals to the various components of the processor, switches different parts of the datapath on and off, and monitors the state of the entire processor.
Processor operation (in the diagram, black lines represent the data flow and red lines the instruction flow).

The processor determines which instructions should be executed next and moves them from memory into the CU. The most common types of basic instructions are "load", "store", "add", "subtract" and others. The CPU always maintains an internal register that keeps track of where the next instruction should be fetched from; this register is the instruction counter. Once the processor has determined where the next cycle starts, the instruction is moved from memory into the instruction register; this process is called instruction fetch. When the processor receives an instruction, it needs to determine the exact type of that instruction; this process is called decoding. Each instruction contains a special set of bits that allows the processor to recognize its type. Decoding is one of the most complex stages and requires a considerable amount of processor hardware. Most often, processors decode instructions related to memory, arithmetic and branching. Registers are used to store the values currently in use and can be thought of as a kind of zero-level cache.
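As an illustration of how a dedicated set of bits identifies the instruction type, the sketch below decodes the opcode field of a toy 32-bit word and classifies it as memory, arithmetic or branch; the encoding and opcode table are invented for the example.

```python
# Decode step: extract the opcode bits and classify the instruction type.
# The encoding (top 8 bits = opcode) and the opcode table are illustrative.
OPCODES = {
    0x01: ("ADD", "arithmetic"),
    0x02: ("SUB", "arithmetic"),
    0x10: ("LOAD", "memory"),
    0x11: ("STORE", "memory"),
    0x20: ("JUMP_IF_NEG", "branch"),
}

def decode(word):
    opcode = (word >> 24) & 0xFF          # the bits that identify the type
    name, kind = OPCODES[opcode]
    return name, kind

print(decode(0x01646566))   # ('ADD', 'arithmetic')
print(decode(0x10000064))   # ('LOAD', 'memory')
```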
The simplest instructions to understand are the arithmetic ones. These instructions are sent to the arithmetic logic unit (ALU) for processing. The ALU is a circuit that most often takes two operand values together with a control signal and produces a result.

The arithmetic logic unit (ALU) performs the operation indicated by the operation code that the CU sends to it; in addition to basic arithmetic, it can perform bitwise operations on values, such as AND, OR, NOT and XOR. The ALU also reports information about the completed calculation back to the control unit (for example, whether the result was positive, negative, equal to zero, or caused an overflow). The ALU is most often associated with arithmetic operations, but it is also used by memory and branch instructions: for example, when the processor needs to compute a memory address from the result of the previous calculation, or to compute the offset to add to the program counter when an instruction requires a jump (for example: "if the previous result is negative, jump 20 instructions forward").
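A minimal sketch of such an ALU under the same toy assumptions: it selects the operation by a code, produces a result, and reports flags that a conditional jump like the one in the example above can then test.

```python
# Simplified 8-bit ALU: picks an operation by its code and reports flags.
def alu(op, a, b):
    if op == "ADD":
        raw = a + b
    elif op == "SUB":
        raw = a - b
    elif op == "AND":
        raw = a & b
    elif op == "OR":
        raw = a | b
    elif op == "XOR":
        raw = a ^ b
    else:
        raise ValueError("unknown operation code")
    result = raw & 0xFF                       # keep the low 8 bits
    flags = {
        "zero": result == 0,
        "negative": bool(result & 0x80),      # sign bit of the 8-bit result
        "overflow": raw != result,            # result did not fit in 8 bits
    }
    return result, flags

result, flags = alu("SUB", 5, 25)
print(result, flags)   # 236 {'zero': False, 'negative': True, 'overflow': True}

# A conditional jump uses the flags: "if the previous result is negative,
# jump 20 instructions forward".
pc = 40
if flags["negative"]:
    pc += 20
print(pc)              # 60
```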
The most common architectural solutions are the following.
• Classical architecture (von Neumann architecture, uniprocessor computer): one arithmetic logic unit (ALU), through which the data stream passes, and one control unit (CU), through which the instruction stream (the program) passes.
• Multi-machine computing system: several processors that form the computing system do not share a common RAM; each has its own (local) memory. Each computer in a multi-machine system has the classical architecture. However, the benefit of such a computing system can be obtained only when solving problems with a very special structure: the problem must be broken down into as many loosely coupled subtasks as there are computers in the system.
• Multiprocessor architecture: having multiple processors in a computer means that many data streams and many instruction streams can be organized in parallel, so several fragments of the same task can be executed simultaneously. The machine has shared RAM and multiple processors.
• Parallel processor architecture: several ALUs operate under the control of one CU. This means that a lot of data can be processed by a single program, that is, by one instruction stream, at a time. High performance is obtained only in tasks where the same computational operations are performed simultaneously on different data sets of the same type.
The speed advantage of multiprocessor and multicomputer systems over single-
processor ones is obvious.
