
Unit 2 - The Instruction Set Architecture and Memory

• SYLLABUS:
2.1 Hardware Components of the Instruction Set Architecture
2.2 ARC - A RISC Computer
2.3 Pseudo Operations
2.4 Synthetic Instructions
2.5 Examples of Assembly Language Programs
2.6 Accessing Data in Memory-Addressing Modes
2.7 The Memory Hierarchy
2.8 Cache Memory.
• Introduction
• First, let us look at a simple C program.

• In the above program, a = 10 and b = 20 are data, while c = a + b and printf(c) are instructions.
• We store these data and instructions in memory, as per the Von Neumann model/architecture.
• The data and instructions first go into memory (RAM); from there they are fetched into registers so that the CPU (e.g., the ALU) can operate on them.
• We have different types of registers, such as the accumulator, data register, input register, and output register.
• The CPU works with these registers; the data itself resides in memory, either in contiguous locations or scattered across different locations.
• Instruction format:

• Mode (address of operand): indicates the addressing mode, i.e., how the operand's address is determined and whether the operand resides in memory or in a register.
• Opcode (operation): indicates which operation to perform; in our example, the operation is addition.
• Operand (data): indicates the data on which the operation is performed.
• The instruction register holds the fetched instruction; it is decoded to find where the operand is located. The operand is then brought into the accumulator or a data register, the ALU performs the addition, and the result is stored in the output register.
• The length of an instruction depends on the computer organization (for example, the processor family, such as AMD).
• If the instruction length increases, the size of the instruction register increases, and the bus must widen as well.
• Instruction Set Architecture (ISA)
• An Instruction Set Architecture (ISA) is part of the abstract model of a computer
that defines how the CPU is controlled by the software.
• The ISA acts as an interface between the hardware and the software, specifying both what the processor can do and how it does it.
• ISAs can be categorized into two main types:
• Complex Instruction Set Architecture (CISC): CISC ISAs have a large and diverse
set of complex instructions that can perform multiple operations in a single
instruction. These instructions often have variable-length encodings and can
perform operations that require multiple clock cycles to complete. Examples of
CISC ISAs include x86, which is widely used in desktop and server computers.

• Reduced Instruction Set Architecture (RISC): RISC ISAs have a smaller and
simpler set of instructions that are designed to execute in a single clock cycle.
These instructions have fixed-length encodings and perform basic operations, with
more complex operations being implemented through combinations of simple
instructions. Examples of RISC ISAs include ARM, MIPS, and RISC-V, which
are commonly used in embedded systems, mobile devices, and other specialized
applications.
2.1 Hardware Components of the Instruction Set Architecture
• The Instruction Set Architecture (ISA) is a crucial component of a computer
system that defines the set of instructions that a computer's central processing
unit (CPU) can understand and execute.

• The ISA includes various hardware components that work together to interpret
and execute instructions. Some of the key hardware components of an ISA are:
• Instruction Register (IR): The IR is a register that holds the currently fetched
instruction from the memory. It typically contains the opcode (operation code)
and operands (data or memory addresses) of the instruction being executed.

• Control Unit (CU): The CU is responsible for coordinating the execution of instructions. It interprets the opcode from the IR and generates control signals to direct other hardware components to execute the instruction, such as the ALU (Arithmetic Logic Unit), registers, and memory.
• Arithmetic Logic Unit (ALU): The ALU performs arithmetic and logic operations
on data. It can perform operations such as addition, subtraction, AND, OR, and
NOT, as well as other operations defined by the ISA.

• Registers: Registers are small, high-speed storage locations within the CPU used
to hold data temporarily during instruction execution. Examples of registers
include the program counter (PC), which holds the address of the next instruction
to be fetched; the stack pointer (SP), which points to the top of the stack; and
general-purpose registers (e.g., AX, BX, CX, DX) used for temporary data
storage and manipulation.

• Memory Address Register (MAR) and Memory Data Register (MDR): The MAR
holds the memory address of the data or instruction being fetched from or written
to memory, while the MDR holds the actual data or instruction being read from or
written to memory.
• Bus Interface Unit (BIU): The BIU is responsible for managing data transfer
between the CPU and other parts of the computer system, such as memory and
input/output devices, using data buses and address buses.
• Cache: Cache is a high-speed memory that holds frequently accessed data and
instructions to reduce the time taken to fetch them from main memory. It is
typically organized in levels, such as L1, L2, and L3 cache, with decreasing
speeds and increasing sizes.
• Input/Output (I/O) Interfaces: These interfaces provide communication between
the CPU and various input/output devices, such as keyboards, mice, displays, and
storage devices, allowing data to be exchanged between the CPU and these
devices.

• These are some of the hardware components of an ISA that work together to
execute instructions and perform various operations in a computer system. The
specific implementation of these components may vary depending on the
architecture and design of the CPU and the ISA being used.
2.2 ARC - A RISC Computer
• The two main architectural design approaches for CPUs are Reduced Instruction Set Computing (RISC) and Complex Instruction Set Computing (CISC).

• John Cocke of the IBM research team developed RISC by reducing the number of instructions required, making computation faster than on CISC machines.

• RISC stands for Reduced Instruction Set Computer. A RISC computer is a type
of computer architecture that emphasizes a small set of simple instructions that
can be executed very quickly.

• The idea behind RISC is that by simplifying the instruction set, the computer can
execute instructions more quickly and efficiently, which can lead to faster overall
performance.
• The RISC computer uses a simplified instruction set, with each instruction performing
a single, low-level operation.

• This approach allows RISC computers to execute instructions more quickly and
efficiently than computers with more complex instruction sets.

• RISC computers also typically have a large number of registers, which are small,
high-speed memory locations that the processor can use to store data and intermediate
results.

• RISC computers typically have a larger number of general-purpose registers, which can be accessed more quickly than memory.

• This allows RISC computers to perform operations on data in registers rather than
having to fetch data from memory, which can be slower.
• RISC computers also use pipelining [CPU can start working on the next
instruction before the previous instruction is fully completed, which reduces the
idle time of the CPU] , which allows multiple instructions to be executed at the
same time, increasing performance.

• In addition, RISC computers often have separate instruction and data caches,
which further improves performance.

• RISC processors are used in a variety of applications, from small embedded systems to large supercomputers. Some examples of RISC processors include the ARM architecture used in many mobile devices, the Power architecture used in IBM servers, and the MIPS architecture used in some networking equipment.

• RISC computers are commonly used in mobile devices, embedded systems, and high-performance computing applications.
• Features of RISC Architecture:
A. A limited and simple instruction set.
B. Optimized register usage: with more registers available in RISC, many interactions with memory can be avoided.
C. The number of bits used for the opcode is reduced.
D. In general, there are 32 or more registers in RISC.
E. On-chip cache and floating-point registers.
F. A simple instruction pipeline.
2.3 Pseudo Operations

• Pseudo-operations, also known as assembler directives or pseudo-ops, are commands used in assembly language programming that do not represent actual machine instructions.

• Instead, they provide instructions to the assembler on how to process the code,
such as defining constants or reserving memory space for variables.

• Pseudo-operations are instructions to the assembler to perform some actions at assembly time.
• Pseudo op stands for "pseudo operation" and is sometimes called "assembler
directive". These are keywords which do not directly translate to a machine
instruction.

• The assembler resolves pseudo-ops during assembly, unlike machine instructions, which are executed only at runtime.

• In general, pseudo-ops give the assembler information about data alignment, block
and segment definition, and base register assignment.

• Pseudo-ops are typically used to improve the readability and maintainability of assembly code, as they allow programmers to use symbolic names and macros instead of hard-coded values.
Some common examples of pseudo-ops include:
• .org - Defines the starting address for the program
• .equ - Defines a constant with a specified value
• .db - Defines one or more bytes of data
• .dw - Defines one or more words of data
• .ascii - Defines a string of characters
• .align - Aligns the current address to a specified boundary
• .extern - Declares a symbol as external, allowing it to be used across multiple
modules
• Pseudo-ops are processed by the assembler and do not themselves translate into machine instructions. Instead, they direct the assembler to lay out data, reserve space, and otherwise shape the contents of the final executable file.
2.5 Examples of Assembly Language Programs
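The original slides' program listings are not reproduced in this text. As an illustrative sketch (generic assembler syntax; the directive names follow the list in Section 2.3 and are not tied to any specific assembler), a program that adds two numbers stored in memory might look like:

```asm
        .org   0x1000        ; assemble the program starting at address 0x1000
START:  ld     A, r1         ; load the word at label A into register r1
        ld     B, r2         ; load the word at label B into register r2
        add    r1, r2, r3    ; r3 = r1 + r2
        st     r3, C         ; store the result at label C
        halt                 ; stop execution
A:      .dw    10            ; data word: first operand
B:      .dw    20            ; data word: second operand
C:      .dw    0             ; data word reserved for the result
```

Note how `.org` and `.dw` generate no machine instructions themselves: they only tell the assembler where to place the code and what data to emit.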
2.6 Accessing Data in Memory-Addressing Modes
• In computer architecture and assembly language programming, memory-
addressing modes are used to specify how the operand(s) of an instruction are
accessed in memory.
• There are several addressing modes, each of which allows accessing data in
memory in different ways. Here are some of the commonly used memory-
addressing modes for accessing data:

• Direct addressing: In this mode, the operand is directly specified as a memory address. The instruction uses the specified memory address to read or write data from or to memory.

• Indirect addressing: In this mode, the operand contains a memory address that
points to the actual data in memory. The instruction uses the memory address
pointed to by the operand to read or write data from or to memory.
• Indexed addressing: In this mode, the operand contains a memory
address, which is modified by adding an index or offset to it. The
instruction uses the modified memory address to read or write data
from or to memory.

• Relative addressing: In this mode, the operand contains a relative address that is added to the program counter to obtain the memory address of the data in memory. This mode is used for accessing data in code segments, where the addresses of the data are relative to the current program counter.
• Register indirect addressing: In this mode, the operand contains a
register that contains the memory address of the data. The
instruction uses the memory address contained in the register to read
or write data from or to memory.

• Each memory-addressing mode has its own advantages and disadvantages, and is chosen based on the specific requirements of the program being written. Understanding memory-addressing modes is an important part of programming in assembly language and low-level system programming.
2.7 The Memory Hierarchy

• The memory hierarchy refers to the various levels of memory storage in a computer system, each with varying speed, capacity, and cost.
• The memory hierarchy is designed to maximize system performance by
minimizing the time it takes to access data.
• Memory Hierarchy Design is divided into 2 main types:
• External Memory or Secondary Memory – comprising magnetic disk, optical disk, and magnetic tape, i.e., peripheral storage devices accessible to the processor via an I/O module.
• Internal Memory or Primary Memory – comprising main memory, cache memory, and CPU registers. This is directly accessible by the processor.

• There are typically four levels of memory in a memory hierarchy:

1.Registers: Registers are small, high-speed memory units located in the CPU.
They are used to store the most frequently used data and instructions. Registers
have the fastest access time and the smallest storage capacity, typically ranging
from 16 to 64 bits.
2.Cache Memory: Cache memory is a small, fast memory unit located close to
the CPU. It stores frequently used data and instructions that have been recently
accessed from the main memory. Cache memory is designed to minimize the time
it takes to access data by providing the CPU with quick access to frequently used
data.

3.Main Memory: Main memory, also known as RAM (Random Access Memory),
is the primary memory of a computer system. It has a larger storage capacity than
cache memory, but it is slower. Main memory is used to store data and instructions
that are currently in use by the CPU.

• Types of Main memory:
• Static RAM
• Dynamic RAM
4.Secondary Storage: Secondary storage, such as hard disk drives (HDD) and
solid-state drives (SSD), is a non-volatile memory unit that has a larger storage
capacity than main memory.
• It is used to store data and instructions that are not currently in use by the CPU.
Secondary storage has the slowest access time and is typically the least expensive
type of memory in the memory hierarchy.

• The Memory Hierarchy Design exhibits the following characteristics:
• Capacity: It is the global volume of information the memory can store. As we
move from top to bottom in the Hierarchy, the capacity increases.
• Access Time: It is the time interval between the read/write request and the
availability of the data. As we move from top to bottom in the Hierarchy, the
access time increases.
• Performance: Earlier, when computer systems were designed without a memory hierarchy, the speed gap between CPU registers and main memory kept growing due to the large difference in access times.

• This lowered system performance, so an enhancement was needed; that enhancement took the form of the memory hierarchy design, which improves system performance.

• One of the most significant ways to increase system performance is minimizing how far down the memory hierarchy one has to go to manipulate data.

• Cost per bit: As we move from bottom to top in the Hierarchy, the cost per bit
increases i.e. Internal Memory is costlier than External Memory.
2.8 Cache Memory
• Cache memory is a type of high-speed memory that is used to improve the
performance of a computer system.

• It is located between the main memory (RAM) and the processor (CPU) in the
computer architecture.

• The purpose of cache memory is to store frequently used data and instructions
that the CPU can access quickly, without having to access the slower main
memory.

• Cache memory operates on the principle of locality of reference, which means that data that has been recently accessed is likely to be accessed again in the near future.
• There are several levels of cache memory in modern computer systems, with each level
having different capacities and speeds.

• The cache closest to the CPU is the Level 1 (L1) cache, which is the fastest and smallest
but also the most expensive.

• The Level 2 (L2) cache is larger than L1 and slightly slower, and there may be
additional levels of cache beyond that.

• Cache memory is a vital component of modern computer systems, and its use can
greatly enhance the overall performance of a system.

• By storing frequently used data and instructions close to the CPU, cache memory
reduces the amount of time the CPU spends waiting for data from main memory.