
Q1. Explain Flynn's Classification.

1. Single Instruction, Single Data (SISD): This category represents the traditional von Neumann architecture, where
a single instruction stream operates on a single data stream. This is the simplest and most common type of
architecture found in most general-purpose computers.
2. Single Instruction, Multiple Data (SIMD): SIMD architectures have a single instruction stream that operates on
multiple data streams simultaneously. In this type of architecture, a single instruction is broadcast to multiple
processing units, and each unit operates on a different piece of data. SIMD architectures are well-suited for
parallel processing tasks that can be broken down into identical computations on multiple data elements.
3. Multiple Instruction, Single Data (MISD): MISD architectures involve multiple instruction streams operating on a
single data stream simultaneously. This type of architecture is less common in practical implementations and is
typically used in specialized scenarios such as fault-tolerant systems or redundant processing.
4. Multiple Instruction, Multiple Data (MIMD): MIMD architectures support multiple instruction streams operating
on multiple data streams concurrently. Each instruction stream can execute different instructions, and the
data streams can be independent or shared between the instruction streams. MIMD architectures are commonly
found in modern multi-core processors and distributed computing systems, where different tasks can be executed
independently on separate processing units.
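The SIMD idea above can be sketched in a few lines: one "instruction" (here, adding a constant) is applied to every element of a data stream at once. This is only an illustration in plain Python; real SIMD hardware performs the lanes in parallel with vector units, and the function name is invented for this sketch.

```python
def simd_add(data, constant):
    """Apply a single 'add' instruction to multiple data elements (lanes)."""
    return [x + constant for x in data]

lanes = [1, 2, 3, 4]          # four data elements (vector lanes)
result = simd_add(lanes, 10)  # one instruction, four results
print(result)                 # [11, 12, 13, 14]
```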

Q2 Explain Different Types Of Distributed And Centralized Bus Arbitration Methods.

• Distributed Bus Arbitration:


In distributed bus arbitration, the responsibility for resolving bus conflicts is distributed among the devices
connected to the bus. Each device has the ability to arbitrate for bus control independently. Some common
distributed bus arbitration methods include:
o Daisy Chain: In this method, devices are connected in series, forming a daisy chain. A device that needs
the bus asserts a shared request line, and the grant signal is passed from device to device along the chain.
The first requesting device the grant reaches absorbs it and takes control of the bus, so devices nearer the
head of the chain have higher priority.
o Token Passing: This method uses a special control token that circulates among the devices connected to
the bus. Only the device holding the token has the right to access the bus. When a device wants to access
the bus, it waits until the token arrives, gains control, performs its operation, and then passes the token to
the next device.
o Random Selection: In this method, devices contend for bus control randomly. Each device generates a
random number and compares it with the numbers generated by other devices. The device with the lowest
or highest number (depending on the protocol) gains control of the bus. Random selection provides
fairness among devices, but it can also result in unpredictable bus access times.
• Centralized Bus Arbitration:
In centralized bus arbitration, there is a dedicated controller or arbiter responsible for granting bus access to
devices. The arbiter receives requests from devices and makes the decision on which device should be granted
access to the bus. Some common centralized bus arbitration methods include:
o Priority-Based: In this method, each device is assigned a priority level. The arbiter grants bus access to
the device with the highest priority among the requesting devices. Priority levels can be fixed or
dynamically assigned based on factors such as device type or criticality.
o Round Robin: In this method, the arbiter grants bus access to devices in a sequential manner. Each device
gets a turn to access the bus, and the arbiter cycles through the devices in a fixed order. This method
ensures fairness as each device gets an equal opportunity to access the bus.
o Reservation-Based: In this method, devices request bus access in advance by reserving specific time slots.
The arbiter allocates time slots to devices based on their requests, which makes bus access times
predictable at the cost of idle slots when a reservation goes unused.
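The round-robin scheme described above can be sketched as a small arbiter class. This is a toy model, not real arbitration hardware: the class name, integer device IDs, and the `requests` set are assumptions for illustration; a real arbiter works with request and grant signal lines.

```python
class RoundRobinArbiter:
    """Toy centralized round-robin bus arbiter."""

    def __init__(self, num_devices):
        self.num_devices = num_devices
        self.next_device = 0  # pointer into the fixed cyclic order

    def grant(self, requests):
        """Grant the bus to the next requesting device in cyclic order."""
        for i in range(self.num_devices):
            candidate = (self.next_device + i) % self.num_devices
            if candidate in requests:
                self.next_device = (candidate + 1) % self.num_devices
                return candidate
        return None  # no device is requesting the bus

arbiter = RoundRobinArbiter(4)
print(arbiter.grant({1, 3}))  # 1
print(arbiter.grant({1, 3}))  # 3
print(arbiter.grant({1, 3}))  # 1  (cycles back, so access is fair)
```

Because the pointer always advances past the device just served, no requesting device can be starved.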
Q3 Write A Short Note On Cache Coherency.
1. In a multiprocessor system, data inconsistency may occur among adjacent levels or within the same level of the
memory hierarchy. In a shared memory multiprocessor with a separate cache memory for each processor, it is
possible to have many copies of any one instruction operand: one copy in the main memory and one in each cache
memory. When one copy of an operand is changed, the other copies must be updated as well.
Example: the cache and main memory may hold inconsistent copies of the same object.
2. As multiple processors operate in parallel and independently, multiple caches may hold different copies of the
same memory block; this creates the cache coherence problem. Cache coherence is the discipline that ensures that
changes to the values of shared operands are propagated throughout the system in a timely fashion.
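One common way to maintain coherence is a write-invalidate policy: when one processor writes a shared block, the copies in all other caches are invalidated, so a stale value can never be read. The sketch below models this with plain dictionaries; the class and method names are invented for this illustration and do not correspond to any real protocol implementation.

```python
class CoherentSystem:
    """Toy shared-memory system with per-CPU caches and write-invalidate."""

    def __init__(self, num_cpus):
        self.memory = {}                       # main memory: addr -> value
        self.caches = [dict() for _ in range(num_cpus)]

    def read(self, cpu, addr):
        cache = self.caches[cpu]
        if addr not in cache:                  # miss: fetch from main memory
            cache[addr] = self.memory.get(addr, 0)
        return cache[addr]

    def write(self, cpu, addr, value):
        for i, cache in enumerate(self.caches):
            if i != cpu:
                cache.pop(addr, None)          # invalidate other copies
        self.caches[cpu][addr] = value
        self.memory[addr] = value              # write through to memory

system = CoherentSystem(2)
system.read(1, 0x10)           # CPU 1 caches the old value (0)
system.write(0, 0x10, 42)      # CPU 0 writes; CPU 1's copy is invalidated
print(system.read(1, 0x10))    # 42 -- CPU 1 re-fetches the new value
```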

Q4 Explain Cache Memory in Computer Organization.

• Cache Memory is a special, very high-speed memory. It is a smaller and faster memory that stores copies of
data from frequently used main-memory locations. A CPU contains several independent caches that store
instructions and data. The most important use of cache memory is to reduce the average time needed to access
data from main memory.
• Characteristics of Cache Memory:-
o Cache memory is an extremely fast memory type that acts as a buffer between RAM and the CPU.
o Cache Memory holds frequently requested data and instructions so that they are immediately available to the
CPU when needed.
o Cache memory is costlier than main memory or disk memory but more economical than CPU registers.
o Cache Memory is used to speed up and synchronize with a high-speed CPU.

Q5 Differences between Computer Architecture and Computer Organization.


Computer Architecture vs. Computer Organization
1. Architecture describes what the computer does; Organization describes how it does it.
2. Computer Architecture deals with the functional behavior of a computer system; Computer Organization deals
with the structural relationships among its components.
3. Architecture deals with high-level design issues; Organization deals with low-level design issues.
4. Architecture indicates the hardware's capabilities; Organization indicates its performance.
5. As a programmer, you can view the architecture as a set of instructions, addressing modes, and registers; the
implementation of that architecture is called the organization.

Q6 Define Instruction Cycle.


1. Fetch: In this phase, the processor fetches the instruction from memory. The program counter (PC) holds the
address of the next instruction to be fetched. The processor reads the instruction from memory using the address
provided by the PC and stores it in an instruction register (IR).
2. Decode: In this phase, the processor decodes the fetched instruction. It interprets the opcode (operation code)
portion of the instruction to determine the type of operation to be performed and identifies the operands or
registers involved.
3. Execute: In this phase, the processor performs the operation specified by the instruction. It may involve
calculations, data manipulation, logical operations, or control flow modifications based on the decoded
instruction.
4. Store: In this phase, the result of the execution is stored in the appropriate location. It could be a register, memory
location, or an I/O device, depending on the instruction and the architecture of the computer system.
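The four phases above can be sketched as a small interpreter loop. This is a toy model, not a real ISA: the three-field instruction format `(opcode, destination, operand)` and the two opcodes are assumptions made for this illustration.

```python
def run(program):
    """Toy fetch-decode-execute-store loop over a list of instructions."""
    registers = {"R0": 0, "R1": 0}
    pc = 0                                    # program counter
    while pc < len(program):
        instruction = program[pc]             # 1. fetch (address from PC)
        pc += 1
        opcode, dest, operand = instruction   # 2. decode opcode and operands
        if opcode == "LOADI":                 # 3. execute the operation
            result = operand
        elif opcode == "ADDI":
            result = registers[dest] + operand
        else:
            raise ValueError(f"unknown opcode {opcode!r}")
        registers[dest] = result              # 4. store the result
    return registers

regs = run([("LOADI", "R0", 5), ("ADDI", "R0", 3)])
print(regs["R0"])  # 8
```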
Q7 Explain Different Addressing Modes.
1. Immediate addressing: The operand is a constant value or immediate data directly embedded within the
instruction itself. It is useful for operations that involve constants or immediate values.
2. Register addressing: The operand is the content of a specific register. This mode allows direct access to registers
in the processor, which are typically fast storage locations.
3. Direct addressing: The operand is the actual memory address where the data is stored. The processor directly
accesses the memory location specified in the instruction.
4. Indirect addressing: The operand is a memory address that contains the actual memory address where the data is
stored. The processor accesses the memory location indirectly by first obtaining the address from the specified
memory location.
5. Indexed addressing: The operand is calculated by adding a constant offset or index value to a base address. It is
commonly used in array or table access, where the index determines the position of the element.
6. Relative addressing: The operand is a memory address calculated relative to the current program counter (PC) or
instruction pointer. It is often used in branch instructions to specify the target address relative to the current
instruction.
7. Stack addressing: The operand is implicitly specified from the top of the stack. It is commonly used in stack-
based architectures, where operands are pushed onto and popped from the stack.
8. Base/Offset addressing: The operand is obtained by adding a constant offset to a base address specified in a
register or memory location. It is useful for accessing data structures like arrays, records, or objects.
9. Indirect indexed addressing: This mode combines indirect and indexed addressing. The operand is obtained by
first obtaining a memory address indirectly and then adding an index value to that address.
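A few of these modes can be made concrete by showing how each resolves to an operand value, using a list as "memory" and a dictionary as the register file. All names and addresses here are invented for this sketch; no real instruction set is being modeled.

```python
memory = [4, 7, 30, 99, 13]     # toy memory: memory[0] == 4, etc.
registers = {"R1": 3, "BASE": 1}

def immediate(value):            # operand embedded in the instruction
    return value

def register(reg):               # operand is a register's content
    return registers[reg]

def direct(addr):                # operand stored at the given address
    return memory[addr]

def indirect(addr):              # addr holds the address of the data
    return memory[memory[addr]]

def indexed(base_reg, offset):   # base register content + constant offset
    return memory[registers[base_reg] + offset]

print(immediate(5))          # 5
print(register("R1"))        # 3
print(direct(2))             # 30
print(indirect(0))           # memory[memory[0]] = memory[4] = 13
print(indexed("BASE", 2))    # memory[1 + 2] = 99
```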

Q8 Explain IEEE 754 Floating Point Representation.

• The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point
computation which was established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE). The
standard addressed many problems found in the diverse floating point implementations that made them difficult to
use reliably and reduced their portability. IEEE Standard 754 floating point is the most common representation
today for real numbers on computers, including Intel-based PCs, Macs, and most Unix platforms.
• IEEE 754 has 3 basic components:
1. The Sign Bit –
This is as simple as the name: 0 represents a positive number, while 1 represents a negative number.
2. The Biased Exponent –
The exponent field needs to represent both positive and negative exponents, so a bias is added to the actual
exponent to obtain the stored exponent (127 for single precision, 1023 for double precision).
3. The Normalised Mantissa –
The mantissa is the part of a floating-point number consisting of its significant digits. In binary we have only
two digits, 0 and 1, so a normalised mantissa is one with a single 1 to the left of the binary point.
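The three fields can be inspected directly by reinterpreting a float's bits. The sketch below uses Python's `struct` module on the double-precision (64-bit) format: 1 sign bit, an 11-bit biased exponent with bias 1023, and a 52-bit mantissa. The function name is chosen for this illustration.

```python
import struct

def ieee754_fields(x):
    """Split a double-precision float into (sign, biased exponent, mantissa)."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]  # raw 64 bits
    sign = bits >> 63
    biased_exponent = (bits >> 52) & 0x7FF
    mantissa = bits & ((1 << 52) - 1)
    return sign, biased_exponent, mantissa

sign, exp, mantissa = ieee754_fields(-6.0)  # -6.0 = -1.5 * 2**2
print(sign)         # 1  (negative)
print(exp - 1023)   # 2  (actual exponent after removing the bias)
```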
Q9 Explain JK Flip Flop and SR Flip Flop.
➢ JK Flip-Flop:
• A JK flip-flop is a clocked sequential logic device that can store one bit of binary data. It has two inputs: J
(set) and K (reset), and two outputs: Q (output) and Q̅ (complement of the output). The JK flip-flop
operates based on the current state and the input values, as well as the rising or falling edge of a clock
signal. Here are the main characteristics of a JK flip-flop:
o When both J and K inputs are 0, the flip-flop remains in its current state (hold condition).
o When J and K inputs are both 1, the flip-flop toggles, meaning the output switches to its opposite state. If
the output was 0, it becomes 1, and vice versa.
o When J is 1 and K is 0, the flip-flop sets to 1 (output is forced to 1).
o When J is 0 and K is 1, the flip-flop resets to 0 (output is forced to 0).
➢ SR Flip-Flop:
• An SR flip-flop (Set-Reset flip-flop) is another type of sequential logic circuit used for storing and
controlling binary data. It also has two inputs: S (set) and R (reset), and two outputs: Q and Q̅. Here are
the key characteristics of an SR flip-flop:
o When both S and R inputs are 0, the flip-flop remains in its current state (hold condition).
o When S is 1 and R is 0, the flip-flop sets to 1 (output is forced to 1).
o When S is 0 and R is 1, the flip-flop resets to 0 (output is forced to 0).
o When both S and R inputs are 1, the flip-flop enters a forbidden (indeterminate) state: both outputs are
driven to the same value, and the state the flip-flop settles into when the inputs are released is
unpredictable. This condition should be avoided in practical designs.
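The four JK rules above amount to a simple next-state function (equivalently, the characteristic equation Q_next = JQ' + K'Q). The sketch below encodes them directly; it models only the logic, not clocking or timing.

```python
def jk_next(j, k, q):
    """Next output of a JK flip-flop on a clock edge, given inputs and state."""
    if j == 0 and k == 0:
        return q          # hold: keep the current state
    if j == 1 and k == 1:
        return 1 - q      # toggle: switch to the opposite state
    if j == 1 and k == 0:
        return 1          # set: output forced to 1
    return 0              # j == 0, k == 1 -> reset: output forced to 0

print(jk_next(0, 0, 1))   # 1  (hold)
print(jk_next(1, 1, 1))   # 0  (toggle)
print(jk_next(1, 0, 0))   # 1  (set)
print(jk_next(0, 1, 1))   # 0  (reset)
```

Note that, unlike the SR flip-flop, the J = K = 1 case is well defined: it toggles rather than entering a forbidden state.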

Q10 Draw Flowchart Of Booth Algorithm For Multiplication.

• Booth's algorithm gives a procedure for multiplying binary integers in signed 2's complement representation
efficiently, i.e., with fewer additions and subtractions. It exploits the fact that a string of 0's in the multiplier
requires no addition, only shifting, and that a string of 1's from bit weight 2^k down to weight 2^m can be
treated as 2^(k+1) − 2^m. As in all multiplication schemes, Booth's algorithm requires examining the
multiplier bits and shifting the partial product. Prior to each shift, the multiplicand may be added to the
partial product, subtracted from it, or left unchanged.
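The steps the flowchart would show can be sketched as code: examine the multiplier bit pair (Q0, Q−1), add or subtract the multiplicand M into the accumulator A accordingly, then arithmetic-shift-right the combined A,Q,Q−1 register. This is a minimal sketch assuming an 8-bit word; the variable names follow the usual A/Q/M register convention.

```python
def booth_multiply(multiplicand, multiplier, n_bits=8):
    """Multiply two signed integers with Booth's algorithm (n-bit words)."""
    mask = (1 << n_bits) - 1
    m = multiplicand & mask                  # M in two's complement
    q = multiplier & mask                    # Q (multiplier register)
    a = 0                                    # A (accumulator)
    q_minus1 = 0                             # the extra Q-1 bit
    for _ in range(n_bits):
        pair = (q & 1, q_minus1)
        if pair == (0, 1):                   # end of a string of 1s: A += M
            a = (a + m) & mask
        elif pair == (1, 0):                 # start of a string of 1s: A -= M
            a = (a - m) & mask
        # arithmetic shift right of the combined A,Q,Q-1 register
        q_minus1 = q & 1
        q = (q >> 1) | ((a & 1) << (n_bits - 1))
        a = (a >> 1) | (a & (1 << (n_bits - 1)))   # replicate the sign bit
    result = (a << n_bits) | q               # 2n-bit product
    if result & (1 << (2 * n_bits - 1)):     # reinterpret as signed
        result -= 1 << (2 * n_bits)
    return result

print(booth_multiply(7, -3))   # -21
print(booth_multiply(-4, -5))  # 20
```

Notice that a multiplier like −3 (11111101 in 8 bits) triggers only two add/subtract steps; the long run of 1's is handled entirely by shifting, which is exactly the saving Booth's recoding provides.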
