DDCO Model Paper Solved
Consider three inputs: x, y, and z. We'll demonstrate that the NOR gate doesn't follow the associative property.
1. (x ↓ y) ↓ z = ((x + y)' + z)' = (x + y)z' = xz' + yz'
2. x ↓ (y ↓ z) = (x + (y + z)')' = x'(y + z) = x'y + x'z
These results are not always the same. Therefore, the NOR gate doesn't follow the associative property
because changing the order of operations affects the outcome.
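The non-associativity argument above can be checked exhaustively with a short Python sketch over all eight input combinations:

```python
# Exhaustively check whether NOR is associative.
# NOR(a, b) = NOT(a OR b).
from itertools import product

def nor(a, b):
    return int(not (a or b))

counterexamples = [
    (x, y, z)
    for x, y, z in product([0, 1], repeat=3)
    if nor(nor(x, y), z) != nor(x, nor(y, z))
]
print(counterexamples)  # non-empty list, so NOR is not associative
```

For example, (x, y, z) = (0, 0, 1) gives (x ↓ y) ↓ z = 0 but x ↓ (y ↓ z) = 1.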
1b) Design a car safety alarm circuit diagram. The system considers four inputs: door (D), key (K), seat pressure (P) and seat belt (B). An input is considered HIGH (1) if the door is closed, the key is in, the driver is on the seat, or the seat belt is fastened, respectively. The alarm (A) should sound under the two conditions stated below: (1) the door is not closed and the key is in; (2) the door is closed, the key is in, the driver is on the seat, and the seat belt is not fastened.
(a) Construct a truth table for the system based on input arrangement D, K, P, B with A as an output
A testbench is a simulation environment used in hardware description languages like Verilog or VHDL to verify
the functionality of a digital design by applying stimulus to the inputs and observing the responses at the
outputs.
Example: Suppose we want to verify the functionality of the AND gate for all possible input combinations. The
testbench applies different input combinations (0,0), (0,1), (1,0), and (1,1) sequentially to the AND gate and
observes the output y. After applying all the test cases, it terminates the simulation. This example demonstrates
how the testbench interacts with the Verilog module to verify its functionality under different scenarios.
VERILOG CODE:
module and_gate (
  input wire a,
  input wire b,
  output wire y
);
  assign y = a & b;
endmodule
-This Verilog module defines an AND gate, which takes two input wires a and b, and produces one output wire
y.
-The logic for the AND gate is implemented using the assign statement, where y is assigned the result of the
logical AND operation between inputs a and b.
TestBench:-
module and_gate_tb;
  reg a, b;
  wire y;
  and_gate dut (
    .a(a),
    .b(b),
    .y(y)
  );
  initial begin
    a = 0; b = 0; #10;
    a = 0; b = 1; #10;
    a = 1; b = 0; #10;
    a = 1; b = 1; #10;
    $finish;
  end
endmodule
Explanation: This Verilog module serves as a testbench for the AND gate module.
The AND gate module and_gate is instantiated within the testbench, connecting its inputs and output.
Within the initial block, stimulus is applied to inputs a and b for different test cases.
Each test case waits for a short delay (#10) before moving to the next one.
After all test cases are executed, the simulation is terminated using $finish.
2a) Demonstrate positive and negative logic signals
- Signals in binary digital circuits can have two values: high (H) and low (L).
- Gates operate based on their truth tables, which specify behavior for H and L.
- Symbols for gates reflect their behavior and can change based on logic polarity.
- Converting between positive and negative logic involves swapping 1's and 0's.
- When switching between positive and negative logic, AND operations become OR operations, and vice versa.
- The graphic symbol for a positive logic AND gate can equally be drawn as a negative logic OR gate.
- In negative logic, the truth table entries are flipped compared to positive logic.
- A single gate can act as either a positive logic AND gate or a negative logic OR gate.
- Graphic symbols for gates may include polarity indicators for negative logic.
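As an illustrative sketch (Python, not part of the original answer), the duality above can be demonstrated: the same physical gate, described only by its H/L behavior, reads as AND under positive logic and as OR under negative logic:

```python
# One physical gate, two logical readings.
# Positive logic: H = 1, L = 0. Negative logic: H = 0, L = 1.
from itertools import product

def gate(a, b):
    """Physical behavior: output is H only when both inputs are H."""
    return "H" if (a == "H" and b == "H") else "L"

pos = {"H": 1, "L": 0}  # positive-logic encoding
neg = {"H": 0, "L": 1}  # negative-logic encoding

for a, b in product("HL", repeat=2):
    out = gate(a, b)
    assert pos[out] == (pos[a] & pos[b])  # reads as AND in positive logic
    assert neg[out] == (neg[a] | neg[b])  # reads as OR in negative logic
print("positive-logic AND == negative-logic OR")
```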
Truth Table:
| Input ABCD (Month) | Output |
|--------------------|--------|
| 0000 (January) | 0 |
| 0001 (February) | 1 |
| 0010 (March) | 0 |
| 0011 (April) | 0 |
| 0100 (May) | 0 |
| 0101 (June) | 0 |
| 0110 (July) | 0 |
| 0111 (August) | 0 |
| 1000 (September)| 0 |
| 1001 (October) | 0 |
| 1010 (November) | 0 |
| 1011 (December) | 1 |
SOP Expression:
POS Expression:
Output = (A + B + C + ~D)
\ CD\AB | 00 | 01 | 11 | 10 |
--------+----+----+----+----+
00 | 0 | 0 | 0 | 0 |
--------+----+----+----+----+
01 | 1 | 0 | 0 | 0 |
--------+----+----+----+----+
11 | 0 | 0 | 0 | 0 |
--------+----+----+----+----+
10 | 0 | 0 | 0 | 1 |
You can implement the simplified expression (~A & ~B & ~C & D) using AND and NOT gates as follows:
Output = ~(A + B + C) * D
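As a quick sanity check (a Python sketch), De Morgan's law confirms that ~(A + B + C) · D is the same function as the product term ~A · ~B · ~C · D:

```python
# Verify the Boolean identity ~(A + B + C) & D == ~A & ~B & ~C & D
# over all 16 input combinations.
from itertools import product

def simplified(A, B, C, D):
    return int(not (A or B or C)) & D  # ~(A + B + C) * D

def product_form(A, B, C, D):
    return (1 - A) & (1 - B) & (1 - C) & D  # ~A & ~B & ~C & D

assert all(simplified(*bits) == product_form(*bits)
           for bits in product([0, 1], repeat=4))
print("equivalent for all 16 input combinations")
```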
2c) What is User-Defined Primitives in Verilog? What are the general rules for UDP? Explain with an example
HDL for user defined primitive. Draw the Schematic for the Circuit with UDP_02467
- Verilog logic gates like AND, OR, etc., are predefined by the system and are known as system primitives.
- User-defined primitives (UDPs) can be created by defining them in tabular form, often using a truth table.
- UDPs are declared using the keyword pair `primitive ... endprimitive`, unlike modules in Verilog which use
`module ... endmodule`.
1. Declare the UDP using the keyword `primitive`, followed by a name and port list.
2. The output must be listed first in the port list and declared using the keyword `output`.
3. Any number of inputs can be specified, listed in the input declaration in the order they are given values in
the truth table.
4. Enclose the truth table within the keywords `table` and `endtable`.
5. Input values in a table row may be 0, 1, or x (unknown), listed in the same order as the input declaration.
6. The output value is always the last entry in a row, followed by a semicolon (`;`).
primitive UDP_02467 (D, A, B, C);
output D;
input A, B, C;
table
// A B C : D
0 0 0 : 1;
0 0 1 : 0;
0 1 0 : 1;
0 1 1 : 0;
1 0 0 : 1;
1 0 1 : 0;
1 1 0 : 1;
1 1 1 : 1;
endtable
endprimitive
module Circuit_with_UDP_02467 (E, F, A, B, C, D);
output E, F;
input A, B, C, D;
// Instantiate primitive
UDP_02467 (E, A, B, C); // E = 1 for minterms 0, 2, 4, 6, 7 of (A, B, C)
and (F, E, D); // Perform AND operation between E, D and assign to output F
endmodule
Note that the variables listed at the top of the table are part of a comment and are shown only for clarity.
The system recognizes the variables by the order in which they are listed in the input declaration.
A user-defined primitive can be instantiated in the construction of other modules (digital circuits), just as the
system primitives are used. For example, the declaration Circuit_with_UDP_02467(E,F,A,B,C,D); will produce a
circuit that implements the hardware shown in Fig
3a) Differentiate Latches and Flip-Flop
3b) Explain the working of Four-bit adders using 4-full Adders.
-Each full adder's output carry is connected to the input carry of the next.
-Each full adder adds one bit of the numbers, and carries propagate through the adder chain.
A = 1011
B = 0011
Sum S = 1110
Subscript i: 3 2 1 0
Input carry: 0 1 1 0 Ci
Augend (A): 1 0 1 1 Ai
Addend (B): 0 0 1 1 Bi
Sum (S): 1 1 1 0 Si
Starting from the right (least significant) bit, we add each pair of bits along with the input carry.
The output carry from each addition becomes the input carry for the next addition.
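The ripple-carry behavior described above can be sketched in Python, tracing the worked example A = 1011, B = 0011:

```python
# Sketch of a 4-bit ripple-carry adder built from four 1-bit full adders.
def full_adder(a, b, cin):
    s = a ^ b ^ cin                          # sum bit
    cout = (a & b) | (a & cin) | (b & cin)   # carry out
    return s, cout

def ripple_add(a_bits, b_bits):
    """Add two bit lists given MSB-first, e.g. [1, 0, 1, 1] for 1011."""
    carry = 0
    sum_bits = []
    for a, b in zip(reversed(a_bits), reversed(b_bits)):
        s, carry = full_adder(a, b, carry)   # carry ripples to the next stage
        sum_bits.append(s)
    return list(reversed(sum_bits)), carry

s, cout = ripple_add([1, 0, 1, 1], [0, 0, 1, 1])
print(s, cout)  # [1, 1, 1, 0] 0 -> S = 1110, no final carry
```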
3c) Implement Y(A, B, C, D) = ∑m(0, 1, 6, 7, 8, 9, 10, 11, 12, 14) using a 16-to-1 multiplexer and an 8-to-1 multiplexer.
4a) Explain different modeling styles used to write the code in VERILOG with an example
Verilog and VHDL support three common styles of modeling combinational circuits:
1. Structural Modeling (Gate-level Modelling)
2. Dataflow Modeling
3. Behavioral Modeling:
1. Structural Modeling:
-Gate-level modeling creates complex circuits by connecting basic logic circuits.
-It describes circuits by specifying the gates used and how they're connected
- Example:
module and2 (output y, input a, b);
and g1 (y, a, b); // gate instantiation with explicit connections
endmodule
2. Dataflow Modeling:
-Dataflow modeling represents circuit functionality using operators and assignment statements.
-It focuses on describing how inputs are processed to produce outputs.
- Example:
module and2 (output y, input a, b);
assign y = a & b; // continuous assignment
endmodule
3. Behavioral Modeling:
-Behavioral modeling creates an abstract model of a circuit using procedural statements.
-It describes circuit behavior without detailing its internal structure.
- Example:
module and2 (output reg y, input a, b);
always @ (a, b)
y = a & b; // procedural assignment inside an always block
endmodule
- BCD and excess-3 codes use four bits for a decimal digit representation.
- A code converter needs four input (A, B, C, D) and four output (W, X, Y, Z) variables.
- The truth table for the converter is derived from the codes, showing input-output mappings.
- Despite 16 possible combinations for four variables, the truth table lists only 10 relevant combinations.
- Unlisted combinations are considered don't-care values, as they have no meaning in BCD.
A decoder is a combinational circuit that converts binary information from n input lines to a maximum of 2^n unique output lines.
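A minimal Python sketch of the n-to-2^n decoder idea (illustrative only): exactly one output line is active, selected by the binary value on the inputs.

```python
# n-to-2^n line decoder: one-hot output selected by the input value.
def decoder(inputs):
    """inputs is a list of bits, MSB-first, e.g. [1, 0] selects line 2."""
    n = len(inputs)
    index = int("".join(map(str, inputs)), 2)  # binary value of the inputs
    return [1 if i == index else 0 for i in range(2 ** n)]

print(decoder([1, 0]))  # [0, 0, 1, 0] -> output line 2 is active
```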
5a) Write one address, two address, and three address instructions to carry out C← [A] + [B].
The operation of adding two numbers is a fundamental capability in any computer. The statement
C = A + B
in a high-level language program is a command to the computer to add the current values of the two variables A and B and to assign the sum to a third variable, C.
The same high-level-language statement requires the following action to take place in the computer:
C ← [A] + [B]
Instruction formats with examples:
- Three-address: Add A, B, C (adds the operands at A and B and stores the result at C, i.e., C ← [A] + [B])
- Two-address: Move B, C followed by Add A, C (C ← [B], then C ← [C] + [A])
- One-address (accumulator-based): Load A; Add B; Store C
5b) Explain the basic operation concepts of the computer with a neat diagram.
Processor Components:
a) Instruction Register (IR): Holds the instruction currently being executed.
b) Program Counter (PC): Holds the memory address of the next instruction.
c) General-purpose registers (R0 to Rn-1): Hold data and addresses needed during processing.
d) Memory Address Register (MAR): Holds the address of the memory location to be accessed.
e) Memory Data Register (MDR): Stores data for read or write operations.
f) Arithmetic Logic Unit (ALU): Executes arithmetic and logical operations.
1. Programs reside in the memory, usually placed there through the input unit.
2. Execution of a program starts when the PC is set to point to its first instruction.
3. The content of the PC is transferred to the MAR, initiating a read operation to memory.
4. The addressed instruction (word) is read from memory into the MDR.
5. The content of the MDR is then transferred to the IR for decoding and execution.
6. If the instruction involves an ALU operation, operands are retrieved from memory.
7. Operand addresses are sent to the MAR, initiating memory read operations; the operands arrive in the MDR.
8. The operands are transferred from the MDR to the ALU.
9. The ALU performs the required operation on the operands.
10. The result is sent to the MDR.
11. The address where the result is to be stored is sent to the MAR, initiating a write operation by the MDR.
12. During instruction execution, the PC's contents are incremented, pointing to the next instruction.
13. This process allows for the fetching of the next instruction once the current one completes.
14. Overall, this system ensures accurate and efficient execution of instructions by coordinating the activities
between the processor and memory.
5C) Explain a) Processor Clock, b) Basic performance equation, c) Clock rate d) Performance measurement
A) Processor Clock :-
- The processor clock is a timing signal controlling processor and system components.
- It divides tasks into clock cycles, each representing a fixed unit of time.
- Modern processors have rates ranging from millions to billions of cycles per second.
- Clock rate formula: R = 1 / P, where R is clock rate and P is length of one cycle.
- Example: A 3.2 GHz processor performs over 3 billion cycles per second, indicating high speed.
B) Basic Performance Equation:-
- Program execution time is given by T = (N × S) / R, where N is the number of machine instructions executed, S is the average number of clock cycles (steps) per instruction, and R is the clock rate.
- Performance improves by reducing N or S, or by increasing R.
C) Clock Rate:-
- Maintaining instruction complexity may require more cycles despite shorter clock period.
D) Performance Measurement:-
- Metrics include execution time, throughput, response time, and resource utilization.
- Benchmarks and profiling tools assess system performance under different workloads.
- The SPEC (System Performance Evaluation Corporation) selects and publishes benchmarks.
- Programs are compiled and run on real computers for performance evaluation.
- The SPEC rating compares the performance of a computer under test to a reference computer.
- For example, a SPEC rating of 50 indicates the computer is 50 times faster than the reference.
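The basic performance equation T = (N × S) / R can be exercised with assumed illustrative numbers (these figures are not from the question, just a sketch):

```python
# Basic performance equation: T = (N * S) / R
def execution_time(N, S, R):
    """N: instruction count, S: average clock cycles per instruction,
    R: clock rate in cycles per second."""
    return (N * S) / R

# e.g. 1 billion instructions, 2 steps each, on a 2 GHz clock:
T = execution_time(1_000_000_000, 2, 2_000_000_000)
print(T)  # 1.0 second
```

Halving S (fewer steps per instruction) or doubling R would each halve T, which is the trade-off discussed above.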
6a) Write ALP of adding a list of n numbers using indirect addressing mode
- In figure (a), the Add instruction uses the value in R1 as the address of the operand.
- A further read operation, using the value stored at B as the address, then retrieves the operand.
- Program Explanation:
Move N, R1 ; R1 holds the count of numbers in the list
Move #NUM1, R2 ; R2 points to the first number
Clear R0 ; R0 accumulates the sum
LOOP: Add (R2), R0 ; indirect: add the number pointed to by R2
Add #4, R2 ; advance the pointer to the next word
Decrement R1 ; one fewer number remaining
Branch>0 LOOP ; repeat until all n numbers are added
Move R0, SUM ; store the result
6b) What is an addressing mode? Explain the different addressing modes. With an example for each
The different ways in which the location of an operand is specified in an instruction are referred
to as addressing modes.”
- In immediate mode, the operand value is directly specified within the instruction.
- Example: `Move #10, R1` (Load immediate value 10 into register R1).
- In register mode, the operand is the contents of a processor register named in the instruction.
- Example: `Load R2, R3` (Load contents of register R2 into register R3).
- In absolute (direct) mode, the operand is in a memory location whose address is given explicitly in the instruction.
- Example: `Store R1, LOC` (Store contents of register R1 at memory location LOC).
- In indirect mode, the instruction provides the address of a register or memory location, which contains the actual operand address.
- Example: `Add (R2), R3` (Add contents of memory address stored in R2 to R3).
- In index mode, an offset value (X) is added to the contents of a register to calculate the effective address.
- Example: `Add 20(R5), R2` (Add value at memory address (R5 + 20) to R2).
- In base with index mode, the effective address is the sum of the contents of two registers.
- Example: `Subtract (R1, R2), R3` (Subtract the contents of the memory address formed by R1 + R2 from R3).
- In base with index and offset mode, two registers and an additional offset value are used to calculate the effective address.
- Example: `Jump 100(R1, R2)` (Jump to the memory address 100 plus the sum of the contents of R1 and R2).
- In relative mode, the effective address is obtained by adding an offset to the program counter (PC).
- Example: `Branch > 0 50(PC)` (Branch to the memory address 50 locations ahead of the current PC).
- In autoincrement mode, the operand is retrieved from the memory location pointed to by a register, and the register is then incremented.
- Example: `Load (R2)+, R3` (Load contents of memory location pointed to by R2 into R3 and increment R2).
- In autodecrement mode, the register is first decremented, and its new contents are then used as the address of the operand.
- Example: `Store R2, -(R1)` (Decrement R1, then store the contents of R2 at the memory location now pointed to by R1).
6c) Explain the following: (i) Byte addressability (ii) Big-endian assignment (i) Little-endian assignment.
- For example, if the word-length is 32 bits, each word consists of 4 bytes, and successive words are located at
addresses 0, 4, 8, etc.
- In Big-Endian assignment, the most significant byte (MSB) of a multi-byte data type is stored at the lowest
memory address.
- Subsequent bytes are stored at higher memory addresses in ascending order of significance.
- For example, considering a 32-bit integer (e.g., 0x12345678), when stored in Big-Endian:
1000: 12
1001: 34
1002: 56
1003: 78
- In Little-Endian assignment, the least significant byte (LSB) of a multi-byte data type is stored at the lowest
memory address.
- Subsequent bytes are stored at higher memory addresses in descending order of significance.
- For example, considering the same 32-bit integer (0x12345678), when stored in Little-Endian:
1000: 78
1001: 56
1002: 34
1003: 12
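Both byte orderings can be demonstrated with Python's struct module for the same 32-bit value 0x12345678:

```python
# Big- vs little-endian byte ordering of the 32-bit value 0x12345678.
import struct

value = 0x12345678
big = struct.pack(">I", value)     # big-endian: MSB first
little = struct.pack("<I", value)  # little-endian: LSB first

print(big.hex())     # 12345678 -> MSB (0x12) at the lowest address
print(little.hex())  # 78563412 -> LSB (0x78) at the lowest address
```

Reading the hex dumps left to right corresponds to ascending memory addresses, matching the 1000-1003 layouts shown above.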
7a) Draw a neat block diagram of memory hierarchy in a computer system. Discuss the variation of size, speed
and cost per bit in the hierarchy
1. Disk Storage:
- Size: Disk devices offer the largest storage capacity among the memory hierarchy.
- Speed: They have the slowest access speed in the hierarchy, much slower than semiconductor memory, though they are cost-effective for large volumes of data.
- Cost per Bit: Disk storage provides the lowest cost per bit of storage due to its large capacity, but it has the
slowest access speed.
2. Main Memory:
- Size: Main memory, implemented using dynamic memory components, offers a significant storage capacity
but is smaller than disk storage.
- Speed: Slower than cache but much faster than disk; main memory serves as a bridge between cache memory and disk storage.
- Cost per Bit: Main memory has a moderate cost per bit, offering a balance between storage capacity and
access speed.
3. Secondary Cache:
- Size: Secondary cache, or L2 cache, is larger than primary cache and offers a larger storage capacity compared to primary cache.
- Speed: It provides faster access than main memory but slower than primary cache, offering a balance
between speed and size.
- Cost per Bit: The cost per bit of secondary cache is higher than main memory but lower than primary cache,
reflecting its intermediate position in terms of speed and size.
4. Primary Cache:
- Size: Primary cache, also known as L1 cache, is smaller in size but faster in access speed compared to secondary cache.
- Speed: It is located on the processor chip and offers faster access to frequently accessed instructions and
data.
- Cost per Bit: Primary cache has the highest cost per bit among the memory hierarchy due to its smaller size
and faster access speed.
5. Registers:
- Size: Registers provide the smallest storage capacity but offer the fastest access speed among the memory
hierarchy.
- Speed: They are located directly on the processor chip and are used for temporary storage of frequently
accessed data and instructions.
- Cost per Bit: Registers have the highest cost per bit due to their limited storage capacity and extremely fast
access speed.
7b) What is DMA Bus arbitration? Explain different bus arbitration techniques.
Bus Arbitration:-
❖ The device that is allowed to initiate data transfers on the bus at any given time is called the bus
master. When the current master relinquishes control of the bus, another device can acquire this
status.
❖ Bus arbitration is the process by which the next device to become the bus master is selected and
bus mastership is transferred to it.
❖ The selection of the bus master must take into account the needs of various devices by
establishing a priority system for gaining access to the bus.
❖ There are two approaches to bus arbitration: centralized and distributed. In centralized
arbitration, a single bus arbiter performs the required arbitration.
❖ In distributed arbitration, all devices participate in the selection of the next bus master.
Centralized Arbitration:-
❖ The bus arbiter may be the processor or a separate unit connected to the bus. A basic arrangement is one in which the processor itself contains the bus-arbitration circuitry.
❖ In this case, the processor is normally the bus master unless it grants bus mastership to one of the
DMA controllers.
❖ A DMA controller indicates that it needs to become the bus master by activating the Bus-
Request line, BR.
❖ The signal on the Bus-Request line is the logical OR of the bus requests from all the devices
connected to it. When Bus-Request is activated, the processor activates the Bus-Grant signal,
BG1, indicating to the DMA controllers that they may use the bus when it becomes free.
❖ This signal is connected to all DMA controllers using a daisy-chain arrangement. Thus, if DMA
controller 1 is requesting the bus, it blocks the propagation of the grant signal to other devices.
❖ Otherwise, it passes the grant downstream by asserting BG2. The current bus master indicates to all devices that it is using the bus by activating another open-collector line called Bus-Busy, BBSY.
❖ Hence, after receiving the Bus-Grant signal, a DMA controller waits for Bus-Busy to become
inactive, then assumes mastership of the bus.
Distributed Arbitration:-
❖ Distributed arbitration means that all devices waiting to use the bus have equal
responsibility in carrying out the arbitration process, without using a central arbiter.
❖ A simple method for distributed arbitration is illustrated in figure 6.
❖ Each device on the bus is assigned a 4-bit identification number.
❖ When one or more devices request the bus, they assert the Start-Arbitration signal and place their 4-bit ID numbers on four open-collector lines, ARB0 through ARB3.
❖ A winner is selected as a result of the interaction among the signals transmitted over those lines by all contenders.
❖ The net outcome is that the code on the four lines represents the request that has the
highest ID number.
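The bitwise drop-out process described above can be sketched in Python (device IDs are assumed 4-bit values; the wired-OR of each line is modeled with `any`):

```python
# Distributed arbitration over open-collector lines ARB3..ARB0:
# each contender compares its ID against the wired-OR of the lines,
# bit by bit from the most significant, and drops out when it drives
# a 0 where the line carries a 1. The highest ID wins.
def arbitrate(ids):
    contenders = list(ids)
    for bit in (3, 2, 1, 0):  # ARB3 down to ARB0
        line = any((d >> bit) & 1 for d in contenders)  # wired-OR of the line
        if line:
            # Devices driving 0 on a line that reads 1 withdraw.
            contenders = [d for d in contenders if (d >> bit) & 1]
    return contenders[0]

print(arbitrate([0b0101, 0b0110]))  # 6 -> device 0110 wins over 0101
```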
8a) With neat sketches, explain various methods for handling multiple interrupt requests raised by multiple
devices.
1. Polling:
- Processor sequentially checks each device's status register to identify the one requesting an interrupt.
- First device encountered with its interrupt request bit set is serviced.
- Easy to implement but may result in wasted processing time if many devices are polled unnecessarily.
2. Vectored Interrupts:
- Devices directly identify themselves to the processor by sending a special code over the bus when requesting
an interrupt.
- Processor immediately executes the corresponding interrupt-service routine based on the device's
identification.
- Reduces polling time but requires additional hardware support for interrupt vectoring.
3. Interrupt Nesting:
- Interrupts are disabled during the execution of an interrupt-service routine to prevent multiple interruptions.
- Ensures one interrupt is serviced at a time, but may lead to delays for critical tasks.
4. Priority-Based Handling:
- Devices organized into priority levels, with the processor accepting interrupts only from devices with higher
priorities.
- Ensures critical interrupts are addressed promptly while lower-priority interrupts are queued.
5. Daisy Chaining:
- Processor services interrupts based on device position in the chain, with closest device having highest
priority.
- Reduces wire complexity but may not prioritize interrupts based on device importance.
6. Simultaneous Requests:
- Processor selects the highest-priority interrupt request when multiple devices raise simultaneous interrupts.
- Ensures critical interrupts are addressed promptly while providing fair handling for multiple devices.
8b) What is cache memory? Explain the different mapping functions used in cache memory
Cache memory is a high-speed, volatile memory used to store frequently accessed data and instructions,
reducing the time it takes for the CPU to retrieve information from the slower main memory.
Mapping functions
There are 3 techniques to map main-memory blocks into cache memory:
1. Direct Mapping
2. Associative Mapping
3. Set-Associative Mapping
1) DIRECT MAPPING
• The simplest way to determine cache locations in which to store memory blocks
• If there are 128 blocks in a cache, block j of the main memory maps onto block (j modulo 128) of the cache.
• When memory blocks 0, 128, and 256 are loaded into the cache, each is stored in cache block 0. Similarly, memory blocks 1, 129, and 257 are stored in cache block 1 (e.g., 1 mod 128 = 1, 129 mod 128 = 1).
• More than one memory block is therefore mapped onto a given cache-block position. The contention is resolved by allowing the new block to overwrite the currently resident block.
• Each block consists of 16 words, so the least significant 4 bits of the address select one of the 16 words.
• The next 7 bits of the memory address specify the position of the block in the cache, and the most significant 5 bits are stored as the tag. The tag distinguishes which of the 2^5 = 32 memory blocks that map to this cache position is currently resident (tag value 0-31).
• The higher order 5 bits of memory address are compared with the tag bits associated with cache
location. If they match, then the desired word is in that block of the cache.
• If there is no match, then the block containing the required word must first be read from the main memory.
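The direct-mapped address split described above (5 tag + 7 block + 4 word bits for a 16-bit address) can be sketched in Python:

```python
# Split a 16-bit address for a direct-mapped cache with
# 128 blocks of 16 words: 5 tag + 7 block + 4 word bits.
def split_address(addr):
    word = addr & 0xF            # low 4 bits: one of 16 words
    block = (addr >> 4) & 0x7F   # next 7 bits: one of 128 cache blocks
    tag = (addr >> 11) & 0x1F    # high 5 bits: stored as the tag
    return tag, block, word

# Memory blocks 0, 128, and 256 all map to cache block 0,
# distinguished only by their tags (0, 1, 2):
for mem_block in (0, 128, 256):
    print(mem_block, "->", split_address(mem_block * 16))
```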
2) Associative Mapping:
• In this technique, a main-memory block can be placed into any cache-block position.
• In this case, 12 tag bits are required to identify a memory block when it is resident in the cache.
• The 12 tag bits of the address generated by the processor are compared with the tag bits of each block of the cache to see if the desired block is present. This is called the associative-mapping technique.
3) Set Associative Mapping:
• The blocks of the cache are divided into several groups, called sets.
• Each set consists of a number of cache blocks. A memory block is loaded into one of the blocks of a particular cache set.
• The main memory address consists of three fields, as shown in the figure.
• The lower 4 bits of the memory address are used to select one of the 16 words in a block.
• A cache consists of 64 sets as shown in the figure. Hence 6 bit set field is used to select a cache set from 64
sets.
• As there are 64 sets, the memory is divided into groups containing 64 blocks, where each group is given a tag
number.
• The most significant 6 bits of the memory address are compared with the tag fields of the blocks in the selected set to determine whether the desired memory block is present.
• The following figure clearly describes the working principle of Set Associative Mapping technique.
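The set-associative address split described above (6 tag + 6 set + 4 word bits for a 16-bit address) can be sketched in Python:

```python
# Split a 16-bit address for a set-associative cache with
# 64 sets and 16 words per block: 6 tag + 6 set + 4 word bits.
def split_set_associative(addr):
    word = addr & 0xF               # low 4 bits: word within the block
    set_index = (addr >> 4) & 0x3F  # next 6 bits: one of 64 sets
    tag = (addr >> 10) & 0x3F       # high 6 bits: tag
    return tag, set_index, word

print(split_set_associative(0b101010_110011_0101))
# (42, 51, 5): tag 42, set 51, word 5
```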
9a) Explain the single-bus organization of computers and fundamental concepts with a neat diagram.
- External memory-bus data and address lines connect to the internal processor-bus via MDR (Memory Data
Register) and MAR (Memory Address Register) respectively.
- MDR can load data from either the memory-bus or the processor-bus.
- MAR's input is connected to the internal-bus, while its output is connected to the external-bus.
- Instruction Decoder & Control Unit issues control signals to all processor units and implements actions
specified by the loaded instruction (in IR).
- Registers R0 through R(n-1) are accessible for general-purpose use by the programmer.
- Registers Y, Z, and Temp are used for temporary storage during program execution and are only accessible by
the processor.
- ALU's "A" input is from the multiplexer output, while the "B" input comes directly from the processor-bus.
- MUX selects between the output of Y and a constant value 4 (used for incrementing PC content) for the "A"
input of the ALU.
- Instructions involve transferring data between registers, performing arithmetic/logic operations, fetching
memory contents, and storing data into memory.
- Disadvantage: Only one data-word can be transferred over the bus in a clock cycle.
Fundamental Concepts:-
1. Fetch Instruction:
- Load the Program Counter (PC) value into the Memory Address Register (MAR).
- Set the Mux to select the constant 4, adding it to the PC's content and storing the result in Z.
- Move the updated value from Z back to the PC, incrementing it to point to the next instruction.
3. Transfer Instruction:
- Transfer the fetched instruction from the Memory Data Register (MDR) to the Instruction Register (IR).
4. Decode Instruction:
- Activate control signals for subsequent steps based on the decoded instruction.
5. Fetch Operand:
- Issue a read signal to memory to fetch the operand from the memory location pointed to by R3.
6. Prepare Second Operand:
- Transfer the contents of Register R1 to the input Y to prepare for the addition operation.
7. Perform Addition:
- When the read operation is completed, the memory operand is available in the MDR.
- The ALU adds the operand from the MDR to the value in Y, placing the result in Z.
8. Store Result:
- The result is transferred from Z to the destination register R1.
9. End of Execution:
- The End signal triggers the start of a new instruction fetch cycle by returning to step 1.
10a) Explain with an example the different types of hazards that occur during pipelining.
Consider an example in which one of the pipeline stages cannot complete its processing task for a given instruction in the time allotted, as in Figure 5.13.
Figure 5.13: Execution unit takes more than one cycle for execution
Here instruction I2 requires three cycles to complete, from cycle 4 through cycle 6. Thus, in cycles 5
and 6, the Write stage must be told to do nothing, because it has no data to work with. Meanwhile, the
information in buffer B2 must remain intact until the Execute stage has completed its operation. This means
that stage 2 and, in turn, stage 1 are blocked from accepting new instructions because the information in B1
cannot be overwritten. Thus, steps D4 and F5 must be postponed.
Pipelined operation in Figure 5.13 is said to have been stalled for two clock cycles. Normal pipelined
operation resumes in cycle 7. Any condition that causes the pipeline to stall is called a hazard.
Instruction I1 is fetched from the cache in cycle 1, and its execution proceeds normally. However, the fetch operation for instruction I2, which is started in cycle 2, results in a cache miss. The instruction fetch unit
must now suspend any further fetch requests and wait for I2 to arrive. We assume that instruction I2 is received
and loaded into buffer B1 at the end of cycle 5. The pipeline resumes its normal operation at that point.
Figure 5.14 Instruction Hazard
The memory address, X + [R1], is computed in step E2 in cycle 4; the memory access then takes place in cycle 5. The
operand read from memory is written into register R2 in cycle 6. This means that the execution step of this
instruction takes two clock cycles (cycles 4 and 5). It causes the pipeline to stall for one cycle, because both
instructions I2 and I3 require access to the register file in cycle 6 which is shown in Figure 5.15.
The most common case in which this hazard may arise is in access to memory. One instruction may
need to access memory as part of the Execute or Write stage while another instruction is being fetched. If
instructions and data reside in the same cache unit, only one instruction can proceed and the other instruction
is delayed.
10b) Write and explain the control sequence for the execution of an unconditional branch instruction.
1. Fetch Instruction:
- The instruction is fetched as usual; while it is being fetched, the PC is incremented and the updated value is also saved in register Y.
2. Extract Offset:
- The offset value is extracted from the Instruction Register (IR) through instruction decoding.
3. Update PC:
- In step 4 of the control sequence, the offset from the IR is added to the saved PC value through the ALU, and the result is loaded into the PC.
Explanation:
- The branch instruction directs the PC to fetch the next instruction from a specific branch target address.
- Typically, the branch target address is obtained by adding the offset to the current PC value.
- The offset represents the difference between the branch target address and the address immediately
following the branch instruction.
- In the case of a conditional branch, the status of condition codes needs to be checked before updating the PC.
- If the branch condition is not satisfied (e.g., N = 0), the processor returns to step 1 and continues with the next sequential instruction.
- If the branch condition is satisfied (e.g., N = 1), the branch target address is loaded into the PC in step 5.