
Unit: 2 - Programming the basic Computer

Introduction to Programming the basic Computer


In CAO, Programming the Basic Computer refers to the process of writing programs for the Basic
Computer architecture. The Basic Computer is a simplified computer system that is used as a teaching
tool to understand the fundamental concepts of computer organization and programming.

The Basic Computer model typically consists of basic components such as a CPU (Central Processing
Unit), memory, input/output devices, and control logic. The instruction set architecture (ISA) of the
Basic Computer is usually limited and straightforward, which makes it easier for learners to grasp the
core concepts of programming.

When programming the Basic Computer, you typically write programs in assembly language,
which is a low-level programming language closely tied to the machine language instructions of the
computer architecture. Assembly language allows you to directly manipulate the registers, memory
locations, and other components of the Basic Computer.

Why Program the Basic Computer?


By programming the Basic Computer, you gain hands-on experience in understanding how
instructions are executed, how data is stored and manipulated, and how programs interact with the
hardware components. This process helps develop a solid foundation in computer architecture and
the underlying principles of programming.

Programming the Basic Computer often involves tasks such as:


1. Writing assembly language programs: You write programs using mnemonic instructions that
correspond to the Basic Computer's instruction set. These instructions manipulate data, control
flow, and perform input/output operations.

2. Understanding memory organization: You learn how memory is organized and how to access
and store data in different memory locations. This includes using memory addresses and data
types.

3. Implementing control structures: You create loops, conditional statements, and other control
structures to control the flow of execution in your programs.

4. Handling input/output operations: You interact with input and output devices, such as reading
data from a keyboard or displaying output on a screen.
5. Debugging and testing: You identify and fix errors or bugs in your programs, ensuring they
function correctly.
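
The tasks above all revolve around the Basic Computer's fetch-decode-execute cycle. As a rough
illustration, the cycle can be simulated in Python; note this is a sketch of a toy machine with a
hypothetical three-instruction set (LDA, ADD, HLT), not Mano's exact Basic Computer.

```python
# Minimal sketch of a fetch-decode-execute cycle for a toy machine.
# Hypothetical 3-instruction ISA: LDA addr, ADD addr, HLT (illustrative).

def run(memory):
    ac, pc, running = 0, 0, True          # accumulator, program counter
    while running:
        op, addr = memory[pc]             # fetch the instruction at PC
        pc += 1                           # advance PC to the next instruction
        if op == "LDA":                   # load a memory word into AC
            ac = memory[addr]
        elif op == "ADD":                 # add a memory word to AC
            ac += memory[addr]
        elif op == "HLT":                 # stop execution
            running = False
    return ac

# Program: AC = mem[4] + mem[5]
program = {0: ("LDA", 4), 1: ("ADD", 5), 2: ("HLT", 0), 4: 7, 5: 3}
print(run(program))  # 10
```

Even this tiny simulation exercises the ideas listed above: instruction fetch, memory access, and
control flow through a halt condition.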

Machine Language
Machine Language is a low-level programming language that directly corresponds to the machine
code instructions executed by a computer's hardware. It is the most fundamental and basic form of
programming language understood by the computer's central processing unit (CPU).

Machine Language instructions are represented as binary patterns, consisting of a series of 0s and
1s, which the CPU can interpret and execute. Each instruction in Machine Language performs a
specific operation, such as arithmetic calculations, data manipulation, or control flow.

Here are some key points about Machine Language:


1. Binary Representation: Machine Language instructions are represented using binary digits (bits).
Each instruction is typically divided into different fields, such as opcode (operation code),
operands, and addressing modes, which specify the operation to be performed and the data
involved.

2. Direct Hardware Interaction: Machine Language instructions directly interact with the
computer's hardware components, such as the CPU, memory, registers, and input/output
devices. Each instruction corresponds to a specific operation that the hardware can execute.

3. Lack of Abstraction: Machine Language has a very close relationship with the computer's
hardware architecture, lacking high-level abstractions found in programming languages like
variables, functions, or control structures. It operates at a low level, dealing with individual bits,
registers, and memory addresses.

4. Platform Specific: Machine Language is specific to a particular computer architecture or
processor. Different processors have their own unique instruction sets, meaning that Machine
Language programs are not portable across different hardware platforms without modification.

5. Difficult to Read and Write: Machine Language is not human-readable or easily understandable
by programmers. Writing programs directly in Machine Language requires deep knowledge of
the hardware architecture and instruction set.

6. Assembly Language Translation: Assembly language, a low-level symbolic programming
language, is often used as an intermediary step between Machine Language and high-level
languages. Assembly language instructions represent the Machine Language instructions using
human-readable mnemonics, making it easier for programmers to write and understand low-
level code.
A few examples of Machine Language:
 x86 Machine Language: This is the machine language used by Intel and AMD x86 processors,
which are widely used in personal computers. It includes instructions such as mov, add, sub, jmp,
and cmp.
 ARM Machine Language: ARM processors are prevalent in mobile devices and embedded
systems. ARM machine language instructions are specific to the ARM architecture, with
operations like ldr, str, add, sub, and branch instructions.
 MIPS Machine Language: MIPS (Microprocessor without Interlocked Pipeline Stages) is a RISC-
based architecture commonly used in embedded systems and academic settings. MIPS machine
language instructions include add, sub, lw, sw, beq, and j.
 PowerPC Machine Language: PowerPC architecture, developed by IBM, Motorola, and Apple,
was used in older Macintosh computers and game consoles like the Xbox 360 and PlayStation 3.
PowerPC machine language instructions consist of add, sub, lwz, stw, and b.
 SPARC Machine Language: SPARC (Scalable Processor Architecture) is a RISC-based architecture
developed by Sun Microsystems (now Oracle). SPARC machine language includes instructions
like add, sub, ld, st, and branch instructions.
 Z80 Machine Language: The Z80 is an 8-bit microprocessor widely used in the early days of
personal computers, including popular machines like the Sinclair ZX Spectrum and MSX. Z80
machine language instructions consist of add, sub, ld, jp, and call instructions.
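
To make the idea of binary instruction fields concrete, the following Python sketch packs and
unpacks a hypothetical 16-bit instruction word with a 4-bit opcode field and a 12-bit address field.
The format and opcode values are illustrative assumptions, not any real processor's encoding.

```python
# Sketch: packing and unpacking a hypothetical 16-bit instruction word
# (4-bit opcode | 12-bit address). Opcode values are illustrative.

OPCODES = {"AND": 0x0, "ADD": 0x1, "LDA": 0x2, "STA": 0x3}

def encode(mnemonic, address):
    """Combine the opcode and address fields into one 16-bit word."""
    return (OPCODES[mnemonic] << 12) | (address & 0x0FFF)

def decode(word):
    """Split a 16-bit word back into (opcode, address)."""
    return word >> 12, word & 0x0FFF

word = encode("ADD", 0x135)
print(f"{word:016b}")   # 0001000100110101
print(decode(word))     # (1, 309)
```

The shift and mask operations mirror what hardware decoders do combinationally: the opcode field
selects the operation, the address field selects the data.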

Assembly Language
In the context of Computer Architecture and Organization (CAO), Assembly Language is a low-level
programming language that bridges the gap between machine language and high-level programming
languages. It provides a human-readable representation of machine language instructions and
allows programmers to write code that is easier to understand and work with compared to writing
directly in machine language.

Here are some key points about Assembly Language:


1. Symbolic Representation: Assembly Language uses mnemonic codes or symbols to represent
machine instructions. Instead of writing binary patterns directly, programmers use symbolic
names that correspond to specific machine instructions. For example, instead of writing a binary
opcode like "10101011," you might use a mnemonic like "ADD" to represent an addition
instruction.

2. One-to-One Mapping with Machine Language: Each assembly language instruction directly
corresponds to a machine language instruction. Assembly language instructions are essentially
human-readable representations of the binary instructions that the computer's hardware can
execute.
3. Low-Level Operations: Assembly Language provides access to low-level operations of the
computer architecture, including memory operations, arithmetic and logic operations, control
flow instructions, and interaction with hardware devices.

4. Direct Memory and Register Manipulation: Assembly Language allows programmers to
manipulate memory locations, registers, and flags directly. It provides instructions for loading
data from memory into registers, performing arithmetic operations, storing data back to
memory, and controlling program flow based on conditions.

5. Platform Specific: Assembly Language is specific to a particular computer architecture or
processor. Different processors have their own unique instruction sets, so assembly language
programs written for one architecture may not work on another without modification.

6. Closer to Hardware: Assembly Language programming requires a good understanding of the
underlying computer architecture, including the instruction set, memory organization, and
register usage. It allows programmers to have fine-grained control over the hardware resources.

7. Assembler Translation: Assembly Language code needs to be translated into machine language
instructions before execution. Assemblers are software tools that convert assembly language
programs into machine code that can be directly executed by the computer's hardware.

Assembler
An Assembler is a software tool used to translate programs written in Assembly Language into
machine language instructions that can be executed by the computer's hardware. It is a type of
language translator that converts the human-readable assembly code into the corresponding binary
representation understood by the computer's processor.
The assembler takes the assembly code as input and generates a file containing the machine
language instructions. It performs tasks such as parsing the assembly instructions, resolving memory
addresses, assigning machine code opcodes, and generating the binary output.

Assemblers are essential in the development of software at a low level. They enable programmers to
write code in a more readable and manageable form (assembly language) while ensuring
compatibility with the computer's hardware by converting it into machine language.
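
The classic design is a two-pass assembler: pass one scans the source to build a symbol table of
label addresses, and pass two emits machine words with labels resolved. The sketch below shows
this for a hypothetical 16-bit format (4-bit opcode, 12-bit address); the mnemonics and opcode
values are illustrative assumptions, not a real ISA.

```python
# Sketch of a two-pass assembler: pass 1 builds the symbol table,
# pass 2 emits 16-bit machine words (4-bit opcode | 12-bit address).
# Mnemonics and opcode values here are illustrative, not a real ISA.

OPCODES = {"ADD": 0x1, "LDA": 0x2, "STA": 0x3, "HLT": 0x7}

def assemble(lines):
    symbols, counter = {}, 0
    for line in lines:                    # pass 1: map each label to its address
        if line.endswith(":"):
            symbols[line[:-1]] = counter
        else:
            counter += 1
    words = []
    for line in lines:                    # pass 2: emit machine words
        if line.endswith(":"):
            continue                      # labels produce no machine word
        parts = line.split()
        addr = symbols[parts[1]] if len(parts) > 1 else 0
        words.append((OPCODES[parts[0]] << 12) | addr)
    return words

print([hex(w) for w in assemble(["LDA data", "ADD data", "HLT", "data:"])])
# ['0x2003', '0x1003', '0x7000']
```

Two passes are needed because a label such as `data` may be referenced before the line that
defines it, so its address is not known until the whole source has been scanned once.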

Program Loops
In computer architecture organization, program loops are expressed using a combination of
instructions, registers, and memory accesses. Here's a general outline of how program loops can be
expressed at the architectural level:
 Loop Initialization: Before entering the loop, any necessary initialization steps are performed.
This may involve setting up loop counters, initializing loop control variables, and allocating
memory for loop-specific data.

 Loop Condition Evaluation: At the beginning of each iteration, the loop condition is evaluated.
The condition typically involves comparing values in registers or memory locations to determine
whether the loop should continue executing or exit. This evaluation is often done using branch
instructions or conditional jump instructions.

 Loop Body Execution: If the loop condition evaluates to true, the processor executes the loop
body. The loop body consists of the instructions that need to be repeated in each iteration.
These instructions can perform computations, manipulate data, update loop counters, and make
decisions based on program logic.

 Loop Control Update: After executing the loop body, any necessary updates to loop control
variables or counters are performed. This may involve incrementing or decrementing loop
counters, updating loop-related flags, or modifying memory addresses for data access within the
loop.

 Branch or Jump Back: After the loop control update, the processor branches or jumps back to
the loop condition evaluation step. This allows the loop to either continue executing or
terminate based on the updated loop control variables.

 Loop Termination: If the loop condition evaluates to false, indicating that the loop should
terminate, the processor proceeds to the next instruction outside the loop, effectively ending
the loop execution.

Example

1. For Loop:
MOV R1, 0 ; Initialize loop counter
MOV R2, 10 ; Set loop limit

FOR_LOOP:
; Code to be executed repeatedly
; ...

ADD R1, R1, 1 ; Increment loop counter
CMP R1, R2 ; Compare loop counter with loop limit
BLT FOR_LOOP ; Branch to FOR_LOOP if less than
In this example, the loop executes a specific block of code 10 times, incrementing the loop counter
(R1) with each iteration. The loop continues as long as the loop counter is less than the loop limit
(R2).

2. While Loop:
MOV R1, 0 ; Initialize loop counter

WHILE_LOOP:
CMP R1, 5 ; Compare loop counter with condition
BGE END_LOOP ; Branch to END_LOOP if greater than or equal

; Code to be executed repeatedly


; ...

ADD R1, R1, 1 ; Increment loop counter
B WHILE_LOOP ; Branch to WHILE_LOOP unconditionally

END_LOOP:
; Code following the loop
; ...
In this example, the loop continues as long as the loop counter (R1) is less than 5. The loop body is
executed repeatedly until the condition is no longer satisfied.

3. Do-While Loop:
MOV R1, 0 ; Initialize loop counter

DO_WHILE_LOOP:
; Code to be executed repeatedly
; ...

ADD R1, R1, 1 ; Increment loop counter
CMP R1, 5 ; Compare loop counter with condition
BLT DO_WHILE_LOOP ; Branch to DO_WHILE_LOOP if less than

; Code following the loop
; ...
In this example, the loop body executes at least once before the condition is checked. The loop
repeats until the loop counter (R1) reaches 5.

Programming Arithmetic and Logical Operations
Arithmetic Operations:
1. Addition:
Addition is the process of combining two values to produce a sum. In computer architecture
organization, addition is typically performed using the ALU (Arithmetic Logic Unit) of the processor.
Here's an example in assembly language:

MOV R1, 5 ; Move value 5 into register R1
MOV R2, 3 ; Move value 3 into register R2
ADD R3, R1, R2 ; Add values in R1 and R2, store result in R3
In this example, the values in registers R1 and R2 are added, and the result is stored in register R3.

2. Subtraction:
Subtraction is the process of subtracting one value from another to produce a difference. Here's an
example:

MOV R1, 8 ; Move value 8 into register R1
MOV R2, 3 ; Move value 3 into register R2
SUB R3, R1, R2 ; Subtract value in R2 from R1, store result in R3
In this example, the value in register R2 is subtracted from the value in register R1, and the result is
stored in register R3.

3. Multiplication:
Multiplication is the process of multiplying two values to produce a product. Here's an example:

MOV R1, 4 ; Move value 4 into register R1
MOV R2, 5 ; Move value 5 into register R2
MUL R3, R1, R2 ; Multiply values in R1 and R2, store result in R3
In this example, the values in registers R1 and R2 are multiplied, and the result is stored in register
R3.

4. Division:
Division is the process of dividing one value by another to produce a quotient. Here's an example:

MOV R1, 10 ; Move value 10 into register R1
MOV R2, 2 ; Move value 2 into register R2
DIV R3, R1, R2 ; Divide value in R1 by R2, store quotient in R3
In this example, the value in register R1 is divided by the value in register R2, and the quotient is
stored in register R3.

Logical Operations:
1. AND:
The logical AND operation compares two binary values and produces a 1 if both inputs are 1, and 0
otherwise. Here's an example:

MOV R1, 6 ; Move binary value 0110 into register R1
MOV R2, 3 ; Move binary value 0011 into register R2
AND R3, R1, R2 ; Perform bitwise AND operation on R1 and R2, store result in R3
In this example, the logical AND operation is performed between the binary values in registers R1
and R2, and the result is stored in register R3.

2. OR:
The logical OR operation compares two binary values and produces a 1 if at least one input is 1, and
0 otherwise. Here's an example:

MOV R1, 6 ; Move binary value 0110 into register R1
MOV R2, 3 ; Move binary value 0011 into register R2
OR R3, R1, R2 ; Perform bitwise OR operation on R1 and R2, store result in R3
In this example, the logical OR operation is performed between the binary values in registers R1 and
R2, and the result is stored in register R3.

3. NOT:
The logical NOT operation flips each bit of a binary value, converting 1s to 0s and vice versa. Here's
an example:

MOV R1, 6 ; Move binary value 0110 into register R1
NOT R2, R1 ; Perform bitwise NOT operation on R1, store result in R2
In this example, the logical NOT operation is performed on the binary value in register R1, and the
result is stored in register R2.

4. XOR:
The logical XOR (exclusive OR) operation compares two binary values and produces a 1 if the inputs
differ, and 0 if they are the same. Here's an example:

MOV R1, 6 ; Move binary value 0110 into register R1
MOV R2, 3 ; Move binary value 0011 into register R2
XOR R3, R1, R2 ; Perform bitwise XOR operation on R1 and R2, store result in R3
In this example, the logical XOR operation is performed between the binary values in registers R1
and R2, and the result is stored in register R3.
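
The four register examples can be cross-checked with Python's bitwise operators. The only caveat
is NOT: Python integers are unbounded, so an explicit mask is needed; the 8-bit register width
assumed here is illustrative.

```python
# Verifying the AND/OR/NOT/XOR examples with Python's bitwise operators.
r1, r2 = 0b0110, 0b0011          # the values 6 and 3 from the examples above

print(bin(r1 & r2))              # AND -> 0b10        (decimal 2)
print(bin(r1 | r2))              # OR  -> 0b111       (decimal 7)
print(bin(~r1 & 0xFF))           # NOT -> 0b11111001  (8-bit complement of 6)
print(bin(r1 ^ r2))              # XOR -> 0b101       (decimal 5)
```

These match the results the assembly snippets would leave in R3 (or R2 for NOT).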
Subroutines
In the context of computer architecture organization (CAO), a subroutine refers to a sequence of
instructions that performs a specific task and can be invoked (called) from different parts of a
program. Subroutines are used to modularize code, promote code reuse, and improve the overall
organization and readability of a program.

Here's an example of a subroutine in x86 assembly language:

section .data
message db 'Hello, World!', 0

section .text
global _start

_start:
; Call the subroutine
call greet

; Exit the program
mov eax, 1
xor ebx, ebx
int 0x80

; Subroutine definition
greet:
; Display the message
mov eax, 4
mov ebx, 1
mov ecx, message
mov edx, 13
int 0x80

; Return from the subroutine
ret

In this x86 assembly code example, we define a subroutine called greet that displays the message
"Hello, World!" on the console. The _start label represents the program's entry point, and it calls the
greet subroutine using the call instruction.

Within the greet subroutine, the message is displayed using the write system call. The necessary
values are loaded into the appropriate registers (eax, ebx, ecx, edx), and the system call is invoked
using int 0x80. Finally, the ret instruction is used to return from the subroutine to the instruction
following the call statement in the _start routine.

Input Output Programming in CAO


In the context of computer architecture organization (CAO), input/output (I/O) programming refers
to the techniques and mechanisms used to handle data transfer between a computer system and
external devices. It involves designing and implementing software routines, instructions, and
protocols that enable the exchange of information with peripheral devices such as keyboards, mice,
displays, printers, network interfaces, and storage devices.

I/O programming in CAO encompasses the following aspects:


1. Device Communication: It involves establishing communication channels and protocols between
the computer system and the peripheral devices. This includes the physical connection, data
transfer protocols, and device-specific communication interfaces.

2. Device Drivers: Device drivers are software components that facilitate communication between
the operating system and specific hardware devices. They provide an abstraction layer, allowing
applications to interact with devices through standardized interfaces, regardless of the
underlying hardware implementation.

3. Interrupt Handling: I/O operations often rely on interrupts, which are signals generated by
devices to notify the processor of an event. Interrupt handling routines are responsible for
managing these interrupts, suspending the current program execution, and servicing the device
request in a timely manner.

4. Buffering and Data Transfer: Buffering mechanisms are used to optimize data transfer between
devices and the computer system. Buffers temporarily store data during I/O operations,
reducing the need for direct interaction with devices and enabling efficient data transfers.

5. Synchronization and Control: I/O programming involves coordinating and controlling concurrent
I/O operations to ensure proper synchronization and prevent data corruption or conflicts.
Techniques such as locking, signaling, and synchronization primitives are employed to manage
access to shared resources.

6. Error Handling: Robust I/O programming involves handling errors and exceptional conditions
that may arise during data transfer. Error detection, recovery mechanisms, and appropriate
error handling routines are implemented to ensure reliable I/O operations.
In the context of computer architecture organization (CAO) and assembly language, input/output
(I/O) programming involves using specific instructions and techniques in assembly language to
interact with peripheral devices. Here are a few examples:

Example of Reading from the keyboard (using x86 assembly language):

section .data
message db 'Enter a number: '
length equ $-message

section .bss
input resb 16

section .text
global _start
_start:
; Display message
mov eax, 4
mov ebx, 1
mov ecx, message
mov edx, length
int 0x80

; Read input
mov eax, 3
mov ebx, 0
mov ecx, input
mov edx, 16
int 0x80

; Display input (read returns the byte count in eax)
mov edx, eax ; length = number of bytes actually read
mov eax, 4
mov ebx, 1
mov ecx, input
int 0x80

; Exit
mov eax, 1
xor ebx, ebx
int 0x80
Micro-programmed control
Microprogrammed control, also known as microcode control, is a control strategy used in computer
architecture organization (CAO) to execute instructions. It involves using microcode—a lower-level
program stored in a control memory—to control the operations of the CPU and its components.

In microprogrammed control, instead of directly implementing control logic using combinational
circuits or hardwired control, the control signals are generated by a microprogram. The
microprogram resides in a control memory, which is a specialized memory unit dedicated to storing
microinstructions. Each microinstruction corresponds to a specific control operation or sequence of
control operations.

The control memory is a crucial component of microprogrammed control. It typically consists of a
read-only memory (ROM) or a random-access memory (RAM) that stores the microinstructions. Each
microinstruction contains control signals that activate or deactivate various functional units and
components within the CPU, such as the arithmetic logic unit (ALU), registers, memory interfaces,
and input/output (I/O) units.

Control memory provides several advantages in CAO:


 Flexibility: Microprogrammed control allows for greater flexibility and ease of modification
compared to hardwired control. Altering the control behavior of the CPU can be achieved by
simply updating the microcode stored in the control memory.

 Simplification: Complex control logic can be implemented using microcode, simplifying the
design of the control unit and reducing the complexity of the CPU implementation.

 Debugging and Testing: Microprogrammed control facilitates testing and debugging since the
control behavior can be easily modified and monitored by analyzing the microinstructions stored
in the control memory.

 Instruction Set Architecture (ISA) Independence: Microprogramming allows for the
implementation of different instruction sets on the same underlying hardware, enabling support
for various high-level programming languages and architectures.

Address Sequencing
In computer architecture, address sequencing refers to the process of generating a sequence of
memory addresses in order to access data or instructions stored in the computer's memory. It
involves determining the sequence of addresses that the processor needs to access to fetch or store
data during program execution. Address sequencing is an essential aspect of computer organization
and is typically performed by the memory management unit (MMU) in modern computer systems.

To understand address sequencing, let's consider a simplified view of a computer's memory. In this
model, memory is divided into individual storage units called bytes, and each byte has a unique
address. The processor uses these addresses to read data from or write data to specific locations in
memory.

The address sequencing process can be broken down into several stages:
1. Instruction Fetch
The processor fetches instructions from memory in order to execute them. It starts by fetching the
instruction located at a specific address, typically stored in a program counter (PC). The PC holds the
address of the next instruction to be fetched. After fetching an instruction, the PC is incremented to
point to the next instruction.

2. Operand Fetch
During instruction execution, the processor may need to fetch additional data from memory, such as
operands for arithmetic operations or variables used in the program. The memory addresses of
these operands are specified in the instructions themselves or in registers. The processor generates
the appropriate addresses based on the instructions and fetches the operands from memory.

3. Data Storage
In addition to fetching data from memory, the processor may also need to store data back to
memory. For example, the results of arithmetic operations or values assigned to variables are often
stored in memory. The processor generates the memory addresses where the data should be stored
and performs the necessary write operations.

The exact method of address sequencing depends on the computer architecture and memory
management scheme employed.

Some common techniques used in modern processors include:


1. Direct Addressing: In this simple addressing mode, the memory address is directly specified in
the instruction itself. The processor fetches the instruction and accesses the memory location
specified by the address to read or write data.

2. Register Indirect Addressing: In this mode, the instruction specifies a register that contains the
memory address. The processor fetches the instruction, reads the register's value, and uses it as
the address to access memory.

3. Indexed Addressing: This mode combines a base memory address with an offset value. The
processor fetches the instruction, reads the base address, adds the offset to it, and uses the
resulting address to access memory. Indexed addressing is useful for accessing elements of
arrays or data structures.

4. Indirect Addressing: In this mode, the instruction contains a memory address that points to
another memory location, known as an indirect address. The processor first fetches the
instruction, accesses the memory location specified by the indirect address, and retrieves the
actual memory address from that location. It then uses this obtained address to access the
desired data.

5. Virtual Addressing: In systems that employ virtual memory, address sequencing involves a
translation step. The processor generates virtual addresses, which are then translated to
physical addresses by the MMU. The MMU maps virtual addresses to physical addresses,
allowing the processor to access the actual data stored in memory.
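
The first four techniques can be sketched as effective-address calculations over a flat memory array.
The memory layout and register values below are illustrative assumptions chosen so that every mode
resolves to the same data word.

```python
# Sketch of four addressing modes over a flat memory array.
# Memory layout and register values here are illustrative assumptions.

memory = [0] * 16
memory[5] = 42          # the data word all four modes will reach
memory[7] = 5           # pointer used by indirect addressing
registers = {"R1": 5}   # register used by register-indirect addressing

def direct(addr):
    return memory[addr]                  # address comes from the instruction

def register_indirect(reg):
    return memory[registers[reg]]        # register holds the address

def indexed(base, offset):
    return memory[base + offset]         # effective address = base + offset

def indirect(addr):
    return memory[memory[addr]]          # memory holds the final address

print(direct(5), register_indirect("R1"), indexed(3, 2), indirect(7))
# 42 42 42 42
```

Note that indirect addressing costs an extra memory access compared with direct addressing, which
is why the choice of mode affects instruction timing as well as flexibility.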

Example:
Let's consider a simple example to illustrate address sequencing. Suppose we have a computer with
a 16-bit address bus, capable of addressing up to 64 kilobytes of memory. We'll use direct addressing
as the addressing mode.

Let's say we have a program stored in memory starting at address 0x2000. The program has three
instructions: load, add, and store. Each instruction is 2 bytes in size. Here's the program:

0x2000: Load value from memory address 0x3000 into register A
0x2002: Add the value in register B to the value in register A
0x2004: Store the result in register A to memory address 0x4000

To execute this program, the processor performs address sequencing as follows:


1. Instruction Fetch:
The program counter (PC) initially holds the address of the first instruction, 0x2000.
The processor fetches the instruction located at address 0x2000.
The PC is incremented to 0x2002 to point to the next instruction.

2. Operand Fetch:
The fetched instruction is "Load value from memory address 0x3000 into register A."
The processor generates the memory address 0x3000 based on the instruction.
The processor fetches the value stored at memory address 0x3000 and stores it in register A.

3. Instruction Fetch:
The PC contains the updated address 0x2002.
The processor fetches the instruction located at address 0x2002.
The PC is incremented to 0x2004.
4. Operand Fetch:
The fetched instruction is "Add the value in register B to the value in register A."
The processor reads the values stored in registers A and B.
It performs the addition operation using the values from the registers.

5. Instruction Fetch:
The PC contains the updated address 0x2004.
The processor fetches the instruction located at address 0x2004.
The PC is incremented to 0x2006.

6. Operand Fetch:
The fetched instruction is "Store the result in register A to memory address 0x4000."
The processor generates the memory address 0x4000 based on the instruction.
It writes the result from register A to memory address 0x4000.
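
The six-step trace above can be reproduced with a small Python simulation. The 2-byte instruction
encoding is abstracted away (each instruction is stored as a tuple), and the operand value 10 and
register B's initial value 7 are illustrative assumptions.

```python
# Simulating the load/add/store trace: PC sequencing plus operand fetch.
# Instructions are stored as tuples rather than real 2-byte encodings.

memory = {
    0x2000: ("LOAD", 0x3000),    # load mem[0x3000] into register A
    0x2002: ("ADD", None),       # A = A + B
    0x2004: ("STORE", 0x4000),   # store A to mem[0x4000]
    0x3000: 10,                  # source operand (assumed value)
}
regs = {"A": 0, "B": 7}          # B's initial value is assumed

pc = 0x2000
for _ in range(3):
    op, addr = memory[pc]        # instruction fetch
    pc += 2                      # each instruction is 2 bytes, so PC += 2
    if op == "LOAD":
        regs["A"] = memory[addr] # operand fetch
    elif op == "ADD":
        regs["A"] += regs["B"]
    elif op == "STORE":
        memory[addr] = regs["A"] # data storage

print(hex(pc), memory[0x4000])   # 0x2006 17
```

The PC ends at 0x2006, exactly as in the trace, and the sum lands at address 0x4000.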

Micro-program Example


In computer architecture, microprogramming is a technique used to implement complex
instructions or instruction sets in a processor. It involves breaking down complex instructions into a
sequence of simpler microinstructions stored in a control memory called a microprogram. The
microinstructions control the internal operations of the processor, specifying the microoperations
needed to execute each step of the complex instruction.

To illustrate microprogramming, let's consider an example of a simplified processor that supports an
ADD instruction. The ADD instruction adds two numbers stored in registers and stores the result in
another register. We'll assume that the processor's architecture supports a set of microinstructions
to execute this ADD instruction.
We'll assume that the processor's architecture supports a set of microinstructions to execute this
ADD instruction.

1. Instruction Fetch:
 The processor fetches the ADD instruction from memory.
 The instruction is decoded, and the control unit determines that it is an ADD instruction.
 The control unit generates a control signal to fetch the operands from the specified registers.

2. Operand Fetch:
 The control unit generates microinstructions to load the operands from the specified registers
into internal storage or temporary registers.
 These microinstructions may include signals to enable the multiplexers that select the source
registers and store the operands in temporary storage.

3. Addition Operation:
 The control unit generates microinstructions to perform the addition operation.
 These microinstructions may include signals to enable the arithmetic and logic unit (ALU) and
specify the operation to be performed (in this case, addition).

4. Result Storage:
 The control unit generates microinstructions to store the result of the addition in the specified
destination register.
 These microinstructions may include signals to enable the multiplexers that select the
destination register and store the result.

5. Next Instruction:
 The control unit generates microinstructions to update the program counter (PC) to point to
the next instruction.
 These microinstructions may include signals to increment the PC or load the next instruction
address from a memory location.

The micro-program for executing the ADD instruction in this example would consist of a sequence of
microinstructions that define the control signals needed for each step. These microinstructions are
stored in a control memory (typically implemented using ROM or PLA) within the processor.
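
The five steps above can be sketched as a control memory holding a micro-program for ADD, with a
sequencer stepping through the microinstructions. The control signal names here are illustrative,
not any real processor's signal set.

```python
# Sketch: a micro-program for ADD stored in a control memory.
# Each microinstruction is a set of control signals; names are illustrative.

CONTROL_MEMORY = {
    "ADD": [
        {"fetch_operands": True},               # load the source registers
        {"alu_enable": True, "alu_op": "add"},  # perform the addition
        {"write_dest": True},                   # store the result
        {"increment_pc": True},                 # point PC at next instruction
    ],
}

def execute(opcode, regs, dest, srcs, pc):
    a = b = result = 0
    for micro in CONTROL_MEMORY[opcode]:        # sequence the microinstructions
        if micro.get("fetch_operands"):
            a, b = regs[srcs[0]], regs[srcs[1]]
        if micro.get("alu_enable") and micro["alu_op"] == "add":
            result = a + b
        if micro.get("write_dest"):
            regs[dest] = result
        if micro.get("increment_pc"):
            pc += 1
    return pc

regs = {"R1": 4, "R2": 5, "R3": 0}
pc = execute("ADD", regs, "R3", ("R1", "R2"), pc=0)
print(regs["R3"], pc)   # 9 1
```

Adding a new instruction in this scheme means adding a new entry to the control memory, which is
exactly the flexibility advantage discussed below.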

Microprogramming provides several advantages, including:


 Simplified processor design: Microprogramming allows complex instructions to be implemented
using simpler microinstructions, reducing the complexity of the control unit.
 Instruction flexibility: Microprograms can be easily modified or updated to introduce new
instructions or modify existing ones, without requiring changes to the hardware implementation.
 Instruction abstraction: Microprogramming provides a layer of abstraction between the hardware
and the instruction set architecture, allowing for a more efficient and manageable
implementation.

Design of Control Unit


In computer architecture, the control unit is a crucial component responsible for coordinating and
controlling the operations of the processor. It generates the necessary control signals to execute
instructions, manages the flow of data between different components, and ensures the correct
sequencing of operations. In the context of Computer Architecture and Organization (CAO), the
design of the control unit is typically based on one of two approaches: hardwired control or
microprogrammed control.

Hardwired Control:
Hardwired control, also known as combinational control, involves designing the control unit using a
network of combinational logic circuits. The control signals are generated directly based on the
current instruction and the state of the processor. The design process involves understanding the
instruction set architecture (ISA) and creating the necessary logic circuits to decode instructions and
generate control signals.
The steps involved in the design of a hardwired control unit are as follows:

 Instruction Decoding: The control unit decodes the instruction opcode to identify the type of
instruction being executed. This is typically done using a combination of logic gates, multiplexers,
and decoders.
 Control Signal Generation: Based on the decoded instruction and the current state of the
processor, the control unit generates the necessary control signals to perform the required
operations. These signals control various components such as the arithmetic and logic unit (ALU),
registers, memory, and input/output devices.
 Sequencing: The control unit manages the sequencing of operations, including the fetching of
instructions, reading and writing of data, and updating the program counter. It ensures that the
operations occur in the correct order and at the appropriate times.
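
Because hardwired control is combinational, it can be approximated as a pure function: the same
decoded opcode always yields the same set of control signals, with no stored micro-program. The
opcodes and signal names below are illustrative assumptions.

```python
# Sketch of hardwired control: a pure (combinational) mapping from the
# decoded opcode to control signals. Opcodes and signals are illustrative.

def control_signals(opcode):
    """Combinational decode: same opcode always yields the same signals."""
    table = {
        "LOAD":  {"mem_read": True,  "reg_write": True,  "alu_op": None},
        "STORE": {"mem_write": True, "reg_write": False, "alu_op": None},
        "ADD":   {"mem_read": False, "reg_write": True,  "alu_op": "add"},
    }
    return table[opcode]

print(control_signals("ADD"))
# {'mem_read': False, 'reg_write': True, 'alu_op': 'add'}
```

In real hardware this table is realized with gates and decoders rather than a lookup, which is why
changing the behavior requires redesigning the circuit instead of editing stored microcode.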

Microprogrammed Control:
Microprogrammed control involves designing the control unit using microinstructions stored in a
control memory (often implemented using ROM or PLA). Each microinstruction specifies the control
signals for a specific microoperation. The control unit fetches the microinstructions based on the
current instruction being executed and executes them in a sequenced manner.
The steps involved in the design of a microprogrammed control unit are as follows:

 Microinstruction Encoding: Each instruction in the ISA is encoded with a unique opcode. The
microinstruction encoding scheme assigns a corresponding microaddress to each opcode.
 Control Memory Design: The microinstructions are stored in a control memory, with each
microaddress containing the control signals for a specific microoperation.
 Microinstruction Sequencing: The control unit fetches the microinstructions based on the current
instruction and the microaddress specified by the opcode. The microinstructions are executed
sequentially, with each microinstruction controlling a specific operation of the processor.
 Control Signal Generation: As the control unit executes each microinstruction, it generates the
necessary control signals to enable or disable specific components and perform the required
operations.
Assignment Questions
1. Why do we need to program the Basic Computer?
2. Explain Machine Language with examples.
3. What are the key aspects of Assembly Language?
4. Explain the Assembler in CAO.
5. Explain loops in Assembly Language.
6. Explain how arithmetic and logical operations are written in Assembly Language. (Please add 3
examples with their definitions.)
7. Explain Address Sequencing and its types.
8. What is Micro-programmed Control?
9. Explain the Design of the Control Unit.
10. Explain the Micro-program with its process of execution.
