
UNIT - II

Microprogrammed Control: Control Memory

In the context of computer architecture and microprogramming, "Control Memory" refers to a special type of memory used in microprogrammed control.

1. **Micro Programmed Control:**

- In computer architecture, the control unit is responsible for managing and coordinating the operations of the various components within the CPU. Microprogramming is a technique used to implement the control unit by breaking down complex instructions into a sequence of simpler, micro-level instructions.

2. **Control Memory:**

- Control Memory, in the context of microprogramming, is a memory unit that stores microinstructions. Microinstructions are elementary instructions that control the internal operations of the control unit. These microinstructions are fetched and executed in sequence to carry out the overall instruction fetched from main memory.

In summary, microprogrammed control uses a control memory to store microinstructions, and the control unit fetches and executes these microinstructions to control the operations of the CPU. This approach provides more flexibility in designing and modifying the control unit compared to hardwired control.
Address Sequencing:

Address sequencing is the process of generating a sequence of addresses to access the microinstructions stored in the control memory; by following this sequence, the control unit executes a series of microinstructions to carry out a machine-level instruction.

Here's how address sequencing typically works:

1. **Microinstruction Address:**

- Control memory stores microinstructions, and each microinstruction has a unique address. The address specifies the location of the microinstruction in control memory.

2. **Address Sequencer:**

- The control unit includes an address sequencer or a control memory address register that generates the addresses for fetching microinstructions. This sequencer is responsible for producing a sequence of addresses to access the microinstructions in the correct order.

3. **Control Flow:**

- The sequence of addresses generated by the address sequencer dictates the control flow of the microprogram. Each address corresponds to a microinstruction, and the control unit fetches and executes microinstructions sequentially. The address field in a microinstruction specifies the next address to be fetched, enabling the control unit to follow the desired sequence.
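The next-address mechanism described above can be sketched in a few lines of Python; the control-memory contents, the signal names, and the END sentinel are invented for illustration:

```python
# Toy model of microinstruction address sequencing. Each entry in control
# memory pairs a control signal with a next-address field; the sequencer
# follows the next-address fields until it reaches a sentinel value.

END = -1  # hypothetical sentinel marking the end of the microroutine

# Control memory: address -> (control signal, next address)
control_memory = {
    0: ("FETCH", 1),
    1: ("DECODE", 2),
    2: ("EXECUTE", END),
}

def run_microroutine(start_address):
    """Fetch microinstructions in sequence, following each next-address field."""
    trace = []
    address = start_address
    while address != END:
        signal, next_address = control_memory[address]
        trace.append(signal)       # "executing" here just records the signal
        address = next_address     # the sequencer loads the next address
    return trace

print(run_microroutine(0))  # ['FETCH', 'DECODE', 'EXECUTE']
```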

In summary, address sequencing in microprogrammed control is fundamental to controlling the various operations of the CPU and executing machine-level instructions.
Microprogram Example:

Microprograms are sequences of microinstructions, each of which typically corresponds to a basic operation or control signal. Below is a simplified example of a microprogram.

Let's consider an instruction "ADD R1, R2, R3", which adds the contents of registers R2 and R3 and stores the result in register R1.

Instruction: ADD R1, R2, R3

Microprogram:

1. Fetch instruction from main memory

2. Decode instruction to identify the operation (ADD)

3. Fetch operands from registers R2 and R3

4. Perform addition operation

5. Store the result in register R1

6. Increment the program counter for the next instruction.

This microprogram breaks down the execution of the ADD instruction into a sequence of six microinstructions.

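The six steps can be mimicked in Python; the register contents, the tuple encoding of the instruction, and the increment-by-one PC update are illustrative assumptions rather than a real ISA:

```python
# Simulating the ADD R1, R2, R3 microprogram with a dict as the register file.

registers = {"R1": 0, "R2": 5, "R3": 7, "PC": 100}
instruction = ("ADD", "R1", "R2", "R3")      # step 1: pretend this was fetched

def execute_add(instr):
    op, dest, src1, src2 = instr             # step 2: decode the instruction
    assert op == "ADD"
    a, b = registers[src1], registers[src2]  # step 3: fetch operands from R2, R3
    result = a + b                           # step 4: perform the addition
    registers[dest] = result                 # step 5: store the result in R1
    registers["PC"] += 1                     # step 6: increment the program counter

execute_add(instruction)
print(registers["R1"], registers["PC"])      # 12 101
```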
Design of Control Unit:

The design of a control unit in a computer is a crucial aspect of computer architecture. The control unit manages and coordinates the activities of various components within the CPU to execute instructions. Here's a high-level overview of the design process for a control unit:

### 1. **Instruction Set Architecture (ISA):**

- Understand the instruction set architecture of the target computer. This includes
the types of instructions, addressing modes, and the format of instructions that the
control unit needs to support.

### 2. **Instruction Fetch:**

- Design the instruction fetch unit to fetch instructions from memory. This
involves specifying the size of the instruction register, the program counter, and
the mechanism for retrieving instructions from memory.

### 3. **Instruction Decode:**

- Develop the instruction decode unit to interpret the fetched instructions. Identify the operation code (opcode) and any operands specified in the instruction.

### 4. **Control Signals:**

- Determine the control signals needed to execute each instruction. These signals
will control various components of the CPU, such as the ALU (Arithmetic Logic
Unit), registers, and data paths.

### 5. **Microprogramming vs. Hardwired Control:**

- Decide whether to use microprogramming or hardwired control. Microprogramming involves using a control memory that stores sequences of microinstructions, while hardwired control uses a network of logic gates to generate control signals directly.

### 6. **Testing and Verification:**

- Implement simulation models or prototypes of the control unit to test its functionality. Verify that the control unit can correctly execute a variety of instructions from the instruction set.

### 7. **Optimization {making something (such as a design, system, or decision) as fully perfect, functional, or effective as possible}:**

- Optimize the design for performance, considering factors such as clock cycles
per instruction and minimizing delays in the control path.

### 8. **Documentation:**

- Document the control unit design thoroughly {PLAN}, including the instruction set, control signals, microinstructions (if applicable), and any other relevant details.

The design of the control unit is closely tied to the overall architecture of the
computer system, and it requires a careful balance between simplicity, speed, and
flexibility. The chosen design influences the performance and capabilities of the
entire computer.

Hard Wired Control:

Hardwired control is a method of designing the control unit of a computer using a fixed set of logic circuits, without the use of a control memory. Here are the key aspects and steps involved in the design of hardwired control:

1. **Opcode Decoding:**

- The first step in hardwired control is to decode the opcode of the instruction.
The opcode is a part of the instruction that specifies the operation to be performed.

2. **Control Signal Generation:**

- Based on the decoded opcode, design logic circuits that generate the necessary
control signals to coordinate the activities of various components in the CPU.
These control signals activate or deactivate specific functional units, such as the
ALU, registers, and data paths.

3. **Combinational Logic:**

- The control signals are typically generated using combinational logic circuits,
such as AND gates, OR gates, and multiplexers. The logic circuits take the
decoded opcode and produce the appropriate control signals based on the
instruction's requirements.

4. **Sequential Logic (Optional):**

- Depending on the complexity of the CPU architecture, sequential logic elements like flip-flops might be used to store the state of the control unit between clock cycles.

5. **Testing and Verification:**

- Simulate and test the hardwired control design to ensure that it correctly
generates the required control signals for a variety of instructions. Verification is
crucial to confirm that the control unit behaves as expected and follows the
instruction set architecture.
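The opcode decoding and control-signal generation from steps 1-3 can be sketched as a lookup table standing in for the combinational gate network; the opcodes and signal names below are invented for illustration:

```python
# Hardwired control as combinational logic: the same decoded opcode always
# produces the same fixed set of control signals, with no control memory.

control_logic = {
    "ADD":   {"alu_op": "add",  "reg_write": 1, "mem_read": 0},
    "LOAD":  {"alu_op": "pass", "reg_write": 1, "mem_read": 1},
    "STORE": {"alu_op": "pass", "reg_write": 0, "mem_read": 0},
}

def generate_control_signals(opcode):
    """Stands in for the AND/OR gate network that decodes the opcode."""
    return control_logic[opcode]

print(generate_control_signals("LOAD"))  # {'alu_op': 'pass', 'reg_write': 1, 'mem_read': 1}
```

Changing this mapping means rebuilding the "gates", which mirrors the inflexibility of real hardwired control noted below.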

Hardwired control has the advantage of simplicity and can be faster than
microprogrammed control since there is no need to fetch microinstructions from a
control memory. However, it can be less flexible than microprogramming because
any changes to the instruction set or control signals may require physical changes
to the hardware.

It's worth noting that modern computer architectures often use a combination of
hardwired and microprogrammed control to achieve a balance between flexibility
and efficiency.
Microprogrammed Control:

Microprogrammed control is a method of designing the control unit in a computer's central processing unit (CPU) using microinstructions. These microinstructions are stored in a control memory, and each microinstruction corresponds to a set of control signals that coordinate the operation of various components within the CPU. Here are the key aspects and steps involved in the design of microprogrammed control:

1. **Microinstruction Format:**

- Define the format of a microinstruction. A microinstruction typically consists of fields that specify control signals, the next address to be fetched, and any additional information needed for the control unit.

2. **Control Signals:**

- Identify the control signals needed for the CPU to execute each instruction.
Assign specific values to these control signals in the microinstructions to
coordinate the activities of the CPU components, such as the ALU, registers, and
data paths.

3. **Microprogram Counter (MPC):**

- Implement the microprogram counter, which keeps track of the address of the
current microinstruction. The microprogram counter is updated based on the
sequencing logic.

4. **Flexibility:**

- One advantage of microprogrammed control is its flexibility. Changes to the instruction set or control signals can be made by updating the microinstructions in the control memory, without the need to modify the hardware.

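One possible microinstruction layout for step 1, with field widths chosen purely for illustration, can be packed into and unpacked from a single integer word:

```python
# Hypothetical 13-bit microinstruction format:
#   bits 0-7  : next-address field (8 bits)
#   bits 8-11 : ALU control field (4 bits)
#   bit  12   : register-write enable (1 bit)

def pack(next_addr, alu_ctrl, reg_write):
    """Assemble the fields into one microinstruction word."""
    return (reg_write << 12) | (alu_ctrl << 8) | next_addr

def unpack(word):
    """Split a microinstruction word back into (next_addr, alu_ctrl, reg_write)."""
    return word & 0xFF, (word >> 8) & 0xF, (word >> 12) & 0x1

word = pack(next_addr=0x2A, alu_ctrl=0b0011, reg_write=1)
print(unpack(word))  # (42, 3, 1)
```
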
The Memory System: Basic Concepts of Semiconductor RAM Memories

Semiconductor RAM (Random Access Memory) is a type of computer memory that provides high-speed data access to a processor and allows data to be read from or written to the memory cells in almost the same amount of time regardless of the physical location of data inside the memory. Here are some fundamental concepts related to semiconductor RAM memories:

1. **Byte:** A group of 8 bits. It is the basic addressable storage unit in RAM.

2. **Cell:**

- **Memory Cell:** The smallest addressable unit in RAM, typically storing one bit of information.

3. **Memory Address:**

- Each memory cell in RAM is assigned a unique address, allowing the processor to read from or write to specific locations in memory.

4. **Memory Capacity:**

- RAM capacity is measured in bytes and represents the total amount of data the memory can store.

5. **Read and Write Operations:**

- **Read Operation:** The process of retrieving data from a specific memory address.

- **Write Operation:** The process of storing data at a specific memory address.

6. **Volatile Nature:**

- RAM is volatile {temporary} memory, meaning it loses its contents when the power is turned off. This is in contrast to non-volatile memory like hard drives and SSDs, which retain data even when the power is off.

7. **Types of RAM:**

- **DRAM (Dynamic RAM):** Requires periodic refreshing {the memory controller reads the data from each row of memory cells and then immediately rewrites it back to the same cells} to maintain data integrity.

- **SRAM (Static RAM):** Does not need refreshing, and data is stored in
flip-flops. SRAM is faster but more expensive and consumes more power
than DRAM.
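The read and write operations described above can be modeled with a minimal byte-addressable memory class; the size and values below are arbitrary:

```python
# Toy RAM model: a list of byte cells, where every address is reached the
# same way - the "random access" property described above.

class RAM:
    def __init__(self, size):
        self.cells = [0] * size             # each cell holds one byte (0-255)

    def write(self, address, value):
        self.cells[address] = value & 0xFF  # store data at a specific address

    def read(self, address):
        return self.cells[address]          # retrieve data from an address

ram = RAM(1024)          # 1 KB capacity
ram.write(512, 0xAB)
print(ram.read(512))     # 171
```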

Read-Only Memories

Read-Only Memory (ROM) is a type of non-volatile {permanent} memory that is used primarily in the startup process of a computer or electronic system. Unlike Random Access Memory (RAM), ROM retains its data even when the power is turned off.

1. **Permanent Data:**

- The data stored in ROM is written during the manufacturing process and is intended to be permanent. It typically contains essential instructions or data that should not be altered during normal system operation.

2. **Startup Instructions:**

- ROM is often used to store the firmware/software or basic input/output system (BIOS) of a computer. During the startup process, the system reads these instructions from ROM to initialize the hardware and load the operating system.

3. **Firmware Storage:**

- Many devices use ROM to store firmware {e.g., in a calculator, notepad, or small game device}, which is a set of instructions or software embedded in the hardware. Firmware is responsible for providing low-level control for the device's specific functions.

4. **Security Features:**

- Because the data in ROM is not easily modified, it provides a level of security against unauthorized changes. This is particularly important for critical system instructions.

Cache Memories: Performance Considerations

Cache memory is a type of high-speed volatile computer memory that provides fast data access to the processor and stores frequently used programs, applications, and data. It serves as a buffer between the slower main memory (RAM) and the faster CPU. Here are several performance considerations related to cache memories:

1. **Temporal and Spatial Locality:**

- **Temporal Locality:** If a particular memory location is accessed, it is likely to be accessed again in the near future.

- **Spatial Locality:** If a particular memory location is accessed, nearby memory locations are also likely to be accessed soon.

Cache memory is designed to exploit {utilize} both temporal and spatial locality by storing recently accessed data and nearby data in the cache.

2. **Cache Hit and Cache Miss:**

- A **cache hit** occurs when the CPU requests data that is already
present in the cache.
- A **cache miss** occurs when the CPU requests data that is not in the
cache, requiring the data to be fetched from the slower main memory.

Minimizing cache misses and maximizing cache hits are critical for
improving performance.

3. **Cache Size:**

- The size of the cache memory is a crucial factor. Larger caches can store more data but may have longer access times. Finding the right balance is essential.

4. **Cache Levels (L1, L2, L3):**

- Modern processors often have multiple levels of cache (L1, L2, and
sometimes L3). Each level serves as a progressively larger but slower cache.
Optimizing the use of each level is crucial for overall system performance.

5. **Cache Coherency {the state of being connected orderly}:**

- In multiprocessor systems, maintaining cache coherency is important. If one processor modifies data in its cache, other processors' caches need to be updated to reflect the change to avoid inconsistencies.

6. **Cache Replacement Policies:**

- When a cache is full and a new item needs to be loaded, a decision must
be made about which existing item to evict. Cache replacement policies like
LRU (Least Recently Used) or FIFO (First In, First Out) impact the
effectiveness of the cache.

7. **Cache Prefetching:**

- Prefetching is a technique where the cache predicts and loads data into
the cache before it is actually needed. This helps to reduce cache misses.
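The hit/miss behavior from point 2 can be illustrated with a toy direct-mapped cache simulator; the line count and access trace below are invented:

```python
# Direct-mapped cache sketch: address % num_lines picks the line, and the
# stored tag tells whether that line currently holds the requested block.

class DirectMappedCache:
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.tags = [None] * num_lines
        self.hits = 0
        self.misses = 0

    def access(self, address):
        line = address % self.num_lines
        tag = address // self.num_lines
        if self.tags[line] == tag:
            self.hits += 1             # cache hit: data already present
        else:
            self.misses += 1           # cache miss: fetch from main memory
            self.tags[line] = tag      # ...and install the new block

cache = DirectMappedCache(4)
for addr in [0, 1, 0, 1, 8, 0]:       # address 8 maps to line 0, evicting 0
    cache.access(addr)
print(cache.hits, cache.misses)       # 2 4
```

Repeated accesses to the same addresses (temporal locality) are what turn misses into hits here.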
Virtual Memories and Secondary Storage
**Virtual Memory and Secondary Storage:**

### 1. **Overview:**

- **Virtual Memory:**

- It extends the available physical memory by using disk space as a temporary storage area.

- **Secondary Storage:**

- Permanent, non-volatile storage used for long-term data retention.

- Examples include hard drives, solid-state drives (SSDs), and optical disks.

### 2. **Purpose of Virtual Memory:**

- **Address Space Extension:**

- Provides the illusion of a larger address space than the physical memory.

### 3. **Working Mechanism:**

- **Page File (Swap Space):**

- A designated space on the secondary storage used as an extension of RAM.

- **Page Fault:**

- When a requested page is not in RAM, a page fault occurs, and the
required page is loaded from the page file into RAM.

### 4. **Paging and Segmentation:**

- **Paging:**

- Paging allows a computer's operating system to use secondary storage (usually a hard disk or SSD) as an extension of its primary storage (RAM).
- Divides memory into fixed-size pages. Pages are transferred between RAM and the page file.

- **Segmentation:**

- Divides memory into variable-sized segments. Each segment is independently managed.

### 5. **Advantages:**

- **Efficient Resource Utilization:**

- Allows the execution of larger programs that may not fit entirely into
physical memory.

- **Ease of Multitasking:**

- Facilitates the concurrent execution of multiple processes.

### 6. **Challenges:**

- **Performance Overhead {burden}:**

- Swapping pages between RAM and secondary storage can introduce performance overhead.

- **Page Faults:**

- Excessive page faults may lead to decreased performance.

### 7. **Page Replacement Policies:**

- **Least Recently Used (LRU):**

- Replaces the page that has not been used for the longest time.

- **FIFO (First-In-First-Out):**

- Replaces the oldest page in memory.
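The two policies can be compared on a small page-reference string; the trace and the three-frame limit below are illustrative only:

```python
# Counting page faults under FIFO and LRU replacement for the same trace.
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.discard(queue.popleft())   # evict the oldest page
            resident.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    order, faults = OrderedDict(), 0
    for page in refs:
        if page in order:
            order.move_to_end(page)                 # page is most recent again
        else:
            faults += 1
            if len(order) == frames:
                order.popitem(last=False)           # evict least recently used
            order[page] = None
    return faults

refs = [1, 2, 3, 1, 4, 1, 2, 5]
print(fifo_faults(refs, 3), lru_faults(refs, 3))    # 7 6
```

On this particular trace LRU causes fewer faults than FIFO because it keeps the recently reused page 1 resident.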


**Introduction to RAID (Redundant Array of Independent Disks)**

RAID is a storage technology that combines multiple physical disk drives into a single logical unit to improve data performance, reliability, and/or fault tolerance. It is widely used in servers, data centers, and storage systems to enhance storage capabilities.

### **Key Objectives:**

- **Data Redundancy {keeping duplicate copies of data for safety}:**

- Provides fault tolerance by duplicating data across multiple drives.

- **Performance Improvement:**

- Enhances read and write performance by distributing data across multiple drives.

- **Data Striping:**

- Divides data into blocks and writes these blocks across multiple drives
simultaneously for improved speed.

- **RAID 0 (Striping):**

- **Advantages:**

- Improved read and write performance.

- **Disadvantages:**

- No fault tolerance; if one drive fails, all data is lost.

- **RAID 1 (Mirroring):**

- **Mirroring:**

- Each drive has an identical copy of the data.

- **Advantages:**

- High fault tolerance.

- Fast read performance.

- **Disadvantages:**

- High cost (requires twice the storage capacity).

- **RAID 10 (Combination of RAID 1 and RAID 0):**

- **Mirroring and Striping:**

- Data is mirrored and striped for performance.

- **Advantages:**

- High fault tolerance and performance.

- **Disadvantages:**

- Requires a minimum of four drives.
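The striping and mirroring layouts above can be sketched with plain lists standing in for physical drives; the block names and drive counts are arbitrary:

```python
# Toy RAID 0 (striping) and RAID 1 (mirroring) write layouts.

def raid0_write(blocks, num_drives):
    """Stripe blocks round-robin across drives - fast, but no redundancy."""
    drives = [[] for _ in range(num_drives)]
    for i, block in enumerate(blocks):
        drives[i % num_drives].append(block)
    return drives

def raid1_write(blocks, num_drives):
    """Mirror: every drive holds an identical copy of all blocks."""
    return [list(blocks) for _ in range(num_drives)]

data = ["B0", "B1", "B2", "B3"]
print(raid0_write(data, 2))  # [['B0', 'B2'], ['B1', 'B3']]
print(raid1_write(data, 2))  # two identical copies of the four blocks
```

Losing one list in the RAID 0 layout loses half the blocks, while either RAID 1 copy alone still holds everything, which is the fault-tolerance trade-off described above.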

### **Applications:**

- **Data Centers:**

- RAID is commonly used to ensure data availability and prevent data loss
in large-scale storage environments.

- **Servers:**

- RAID configurations are employed to balance performance and reliability.

- **Personal Computers:**

- Used for data protection and improved disk performance.

RAID technology has become integral to modern storage solutions, providing a range of options to meet specific performance, capacity, and fault-tolerance requirements. The choice of RAID level depends on the specific needs and priorities of the storage environment.
