CO Unit-4 Q&A
With a neat diagram, show the memory address map of RAM and ROM for a computer system
(assume 512 bytes of RAM and 512 bytes of ROM). [Apply]
Main Memory
The main memory is the central storage unit in a computer system.
Primary memory holds only those data and instructions on which the computer is currently
working.
It has limited capacity, and data is lost when power is switched off.
It is generally made up of semiconductor devices.
These memories are not as fast as registers.
The data and instructions required to be processed reside in main memory.
It is divided into two subcategories: RAM and ROM.
The designer of a computer system must calculate the amount of memory required for the
particular application and assign it to either RAM or ROM.
The interconnection between memory and processor is then established from knowledge of the
size of memory needed and the type of RAM and ROM chips available.
The addressing of memory can be established by means of a table that specifies the
memory address assigned to each chip.
The table, called a memory address map, is a pictorial representation of the assigned address space
for each chip in the system, shown in table 9.1.
To demonstrate with a particular example, assume that a computer system needs 512 bytes
of RAM and 512 bytes of ROM.
The RAM and ROM chips to be used are specified in figure 9.1 and figure 9.2.
The component column specifies whether a RAM or a ROM chip is used.
The hexadecimal address column assigns a range of hexadecimal equivalent addresses for
each chip.
The address bus lines are listed in the third column.
Although there are 16 lines in the address bus, the table shows only 10 lines because the other
6 are not used in this example and are assumed to be zero.
The small x's under the address bus lines designate those lines that must be connected to the
address inputs in each chip.
The RAM chips have 128 bytes and need seven address lines. The ROM chip has 512 bytes and
needs 9 address lines.
The x's are always assigned to the low-order bus lines: lines 1 through 7 for the RAM and lines 1
through 9 for the ROM.
It is now necessary to distinguish between four RAM chips by assigning to each a different
address. For this particular example we choose bus lines 8 and 9 to represent four distinct
binary combinations.
The table clearly shows that the nine low-order bus lines constitute a memory space for RAM
equal to 2^9 = 512 bytes.
The distinction between a RAM and ROM address is done with another bus line. Here we choose
line 10 for this purpose.
When line 10 is 0, the CPU selects a RAM, and when this line is equal to 1, it selects the ROM.
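The decoding rule described above can be sketched in a few lines of Python (the function name and return format are ours, for illustration only):

```python
def decode_address(addr):
    """Decode a 10-bit address per the memory address map.

    Line 10 (bit 9) selects RAM (0) or ROM (1). For RAM, lines 8-9
    (bits 7-8) pick one of the four 128-byte chips and lines 1-7
    (bits 0-6) address a byte within the chip. For ROM, lines 1-9
    (bits 0-8) address a byte within the single 512-byte chip.
    """
    if (addr >> 9) & 1 == 0:           # line 10 = 0 -> RAM
        chip = (addr >> 7) & 0b11      # lines 8-9 select the RAM chip
        offset = addr & 0x7F           # lines 1-7 address within the chip
        return ("RAM", chip, offset)
    else:                              # line 10 = 1 -> ROM
        return ("ROM", 0, addr & 0x1FF)
```

For example, address 0x000 selects RAM chip 0, while 0x200 through 0x3FF fall in the ROM range of the map.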
Auxiliary Memory
Magnetic Disks
A magnetic disk is a type of memory constructed using a circular plate of metal or plastic coated with
magnetized materials. Usually, both sides of the disks are used to carry out read/write operations.
However, several disks may be stacked on one spindle with read/write head available on each surface.
The following image shows the structural representation of a magnetic disk.
o The memory bits are stored in the magnetized surface in spots along the concentric circles called
tracks.
o The concentric circles (tracks) are commonly divided into sections called sectors.
Magnetic Tape
Magnetic tape is a storage medium that allows data archiving, collection, and backup for different
kinds of data. The magnetic tape is constructed using a plastic strip coated with a magnetic
recording medium.
The bits are recorded as magnetic spots on the tape along several tracks. Usually, seven or nine
bits are recorded simultaneously to form a character together with a parity bit.
Magnetic tape units can be halted, started to move forward or in reverse, or can be rewound.
However, they cannot be started or stopped fast enough between individual characters. For this
reason, information is recorded in blocks referred to as records.
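The parity bit mentioned above can be illustrated with a short sketch (even parity assumed; the function name is ours):

```python
def add_parity(bits, even=True):
    """Append a parity bit to a tuple of data bits (0/1).

    With even parity the appended bit makes the total number of 1s
    even; with odd parity, odd. On a 9-track tape, 8 data bits plus
    this parity bit would be recorded simultaneously as one frame.
    """
    ones = sum(bits)
    parity = (ones % 2) if even else (1 - ones % 2)
    return bits + (parity,)
```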
3. A two-way set-associative cache memory uses blocks of four words. The cache can accommodate a
total of 2048 words from main memory. The main memory size is 128K × 32.
Cache Mapping:
There are three different types of mapping used for the purpose of cache memory which are as follows:
o Direct mapping,
o Associative mapping
o Set-Associative mapping
(i)Direct mapping
The number of bits in the index field is equal to the number of address bits required to access
the cache memory.
The internal organization of the words in the cache memory is as shown in figure 9.7
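A minimal sketch of the tag/index split used in direct mapping, assuming the 15-bit address and 9-bit index of the running example (the function name is ours):

```python
def direct_map_fields(address, index_bits=9):
    """Split a main-memory address into (tag, index) for a
    direct-mapped cache: the low-order index_bits select the cache
    word, and the remaining high-order bits form the tag stored
    alongside the data."""
    index = address & ((1 << index_bits) - 1)   # low-order bits
    tag = address >> index_bits                 # remaining high bits
    return tag, index
```

For instance, the 15-bit octal address 02777 splits into tag 02 and index 777.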
(ii)Associative Mapping
The associative memory stores both the address and content (data) of the memory word.
This permits any location in the cache to store any word from main memory.
The figure shows three words presently stored in the cache.
The address value of 15 bits is shown as a five-digit octal number, and its corresponding 12-bit word is shown
as a four-digit octal number.
A CPU address of 15 bits is placed in the argument register, and the associative memory is searched for a matching
address.
If the address is found, the corresponding 12-bit data is read and sent to the CPU.
If no match occurs, the main memory is accessed for the word. The address-data pair is then transferred to
the associative cache memory.
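The lookup just described can be sketched as follows (the class name and the simple eviction choice are ours; the text discusses replacement policies separately):

```python
class AssociativeCache:
    """Sketch of associative mapping: the cache holds (address, data)
    pairs and is searched by the full address, so any word from main
    memory can occupy any cache location."""
    def __init__(self, size):
        self.size = size
        self.lines = {}                 # address -> data

    def read(self, address, main_memory):
        if address in self.lines:       # match found: cache hit
            return self.lines[address]
        data = main_memory[address]     # miss: access main memory
        if len(self.lines) >= self.size:
            # evict the oldest entry (FIFO-style, one of several policies)
            self.lines.pop(next(iter(self.lines)))
        self.lines[address] = data      # transfer the address-data pair
        return data
```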
A third type of cache organization, called set-associative mapping, is one in which each word of cache
can store two or more words of memory under the same index address.
Each data word is stored together with its tag, and the number of tag-data items in one word
of cache is said to form a set.
An example of one set-associative cache organization for a set size of two is shown in figure 9.8.
Each index address refers to two data words and their associated tags.
Each tag requires six bits and each data word has 12 bits, so the word length is 2(6 + 12) = 36 bits.
An index address of nine bits can accommodate 512 words.
Thus the size of the cache memory is 512 × 36. It can accommodate 1024 words of main
memory, since each word of cache contains two data words.
In general, a set-associative cache of set size k will accommodate k words of main
memory in each word of cache.
The octal numbers listed in figure 9.8 are with reference to the main memory contents.
The words stored at addresses 01000 and 02000 of main memory are stored in cache
memory at index address 000.
Similarly, the words at addresses 02777 and 00777 are stored in cache at index address 777.
When the CPU generates a memory request, the index value of the address is used to access
the cache.
The tag field of the CPU address is then compared with both tags in the cache to
determine if a match occurs.
The comparison logic is done by an associative search of the tags in the set, similar to an
associative memory search; thus the name "set-associative".
When a miss occurs in a set-associative cache and the set is full, it is necessary to replace one of
the tag-data items with a new value.
The most common replacement algorithms used are: random replacement, first-in first-out
(FIFO), and least recently used (LRU).
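The set-associative lookup and LRU replacement described above can be sketched as follows (class and method names are ours; set size k = 2 as in the example):

```python
class TwoWaySetAssociativeCache:
    """Sketch of a two-way set-associative cache with LRU replacement.
    The low-order index_bits select the set; the remaining address
    bits form the tag compared against both tags in the set."""
    def __init__(self, index_bits=9):
        self.index_bits = index_bits
        self.sets = {}                  # index -> [tag, data] pairs, MRU last

    def access(self, address, main_memory):
        index = address & ((1 << self.index_bits) - 1)
        tag = address >> self.index_bits
        ways = self.sets.setdefault(index, [])
        for entry in ways:
            if entry[0] == tag:         # tag match: hit, refresh recency
                ways.remove(entry)
                ways.append(entry)
                return entry[1], True
        data = main_memory[address]     # miss: fetch from main memory
        if len(ways) == 2:              # set full: replace the LRU item
            ways.pop(0)
        ways.append([tag, data])
        return data, False
```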
5. Describe the characteristics of Cache memory. Explain Write-through and Write-back methods.
[7M][SET-2] July 2023 [Understand]
During a read operation, when the CPU finds the requested word in the cache, main memory is not
involved in the transfer. When the operation is a write, however, there are two ways the system can
proceed.
1. Write-Through Method: The simplest method is to update main memory with every memory write
operation, with cache memory being updated in parallel if it contains the word at the specified
address. This is known as the write-through method.
2. Write-Back Method: In this method only the cache location is updated during a write operation;
the location is then marked by a flag, so that when the word is removed from the cache it is
copied into main memory.
With write-through, the number of memory write operations in a typical program is more; with
write-back it is less, since a word may be updated several times while it resides in the cache but
is written to memory only once.
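The difference between the two write policies can be sketched as follows (class names are ours; the point of the sketch is the count of memory write operations):

```python
class WriteThroughCache:
    """Every CPU write goes to main memory immediately; the cache
    copy (if present) is updated in parallel."""
    def __init__(self):
        self.cache, self.mem_writes = {}, 0

    def write(self, addr, value, memory):
        memory[addr] = value
        self.mem_writes += 1            # one memory write per CPU write
        if addr in self.cache:
            self.cache[addr] = value    # cache updated in parallel

class WriteBackCache:
    """Writes update only the cache; dirty words reach main memory
    when the line is flushed (e.g. on removal from the cache)."""
    def __init__(self):
        self.cache, self.dirty, self.mem_writes = {}, set(), 0

    def write(self, addr, value, memory):
        self.cache[addr] = value
        self.dirty.add(addr)            # mark the location with a flag

    def flush(self, memory):
        for addr in self.dirty:
            memory[addr] = self.cache[addr]
            self.mem_writes += 1        # one memory write per dirty word
        self.dirty.clear()
```

Three writes to the same address cost three memory writes with write-through but only one with write-back.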
7. Explain the hardware organization of associative memory with a neat block diagram [7M][SET-1]
July 2023 [Understand]
The time required to find an item stored in memory can be reduced considerably if stored data
can be identified for access by the content of the data itself rather than by an address.
A memory unit accessed by content is called an associative memory or content-addressable
memory (CAM).
This type of memory is accessed simultaneously and in parallel on the basis of data
content rather than by a specific address or location.
Figure 9.3: Block diagram of associative memory
It consists of a memory array and match logic for m words with n bits per word.
The argument register A and key register K each have n bits, one for each bit of a word.
The match register M has m bits, one for each memory word.
Each word in memory is compared in parallel with the content of the argument register.
The words that match the bits of the argument register set a corresponding bit in the match
register.
After the matching process, those bits in the match register that have been set indicate the
fact that their corresponding words have been matched.
Reading is accomplished by a sequential access to memory for those words whose
corresponding bits in the match register have been set.
Hardware Organization
The key register provides a mask for choosing a particular field or key in the argument word.
The entire argument is compared with each memory word if the key register contains all 1's.
Otherwise, only those bits in the argument that have 1's in their corresponding position of the
key register are compared.
Thus the key provides a mask or identifying piece of information which specifies how the
reference to memory is made.
To illustrate with a numerical example, suppose that the argument register A and the key
register K have the bit configuration shown below.
Only the three leftmost bits of A are compared with memory words because K has 1's in these
positions.
A       101 111100
K       111 000000
Word 1  100 111100   (no match)
Word 2  101 000001   (match)
Word 2 matches the unmasked argument field because the three leftmost bits of the
argument and the word are equal.
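The masked comparison in this example can be reproduced with a short sketch (the function name is ours; words are represented as Python ints used as bit patterns):

```python
def cam_match(argument, key, words):
    """Return the indices of words that match `argument` in exactly
    those bit positions where `key` has a 1 (the masked field).
    Positions where key is 0 are ignored, as in a CAM search."""
    return [i for i, w in enumerate(words)
            if (w & key) == (argument & key)]

A = 0b101111100          # argument register
K = 0b111000000          # key register: only the 3 leftmost bits compared
words = [0b100111100,    # word 1
         0b101000001]    # word 2
```

Only word 2 matches, because its three leftmost bits equal those of the argument.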
Virtual Memory
A virtual memory system provides a mechanism for translating program-generated addresses
into correct main memory locations.
Address space
An address used by a programmer will be called a virtual address, and the set of such addresses
is known as address space.
Memory space
An address in main memory is called a location or physical address. The set of such locations
is called the memory space.
Figure 9.9: Relation between address and memory space in a virtual memory system
Suppose a computer has a main-memory capacity of 64K words (K = 1024). Sixteen bits are
required to specify a physical address in memory because 64K = 2^16.
Assume that the computer has auxiliary memory for storing information equivalent to the capacity
of 16 main memories.
In a multiprogramming computer system, programs and data are transferred to and from auxiliary
memory and main memory based on demands made by the CPU.
Suppose program 1 is currently being executed in the CPU. Program 1 and a part of its associated
data are moved from auxiliary memory into main memory, as displayed in the figure.
Parts of programs and data need not be in contiguous locations in memory, because
information is moved in and out and empty spaces may be available in scattered locations in
memory.
Figure: Memory table for mapping a virtual address
In our illustration, the address field of an instruction code comprises 20 bits, but physical
memory addresses are specified with just 16 bits.
So the CPU references data and instructions with a 20-bit address, though the information at this
address must be taken from physical memory, since access to auxiliary storage for individual
words would be prohibitively long.
A mapping table is then required, as displayed in the figure below, to map a virtual address of
20 bits to a physical address of 16 bits.
Mapping is a dynamic operation, which means every address is translated instantly as a word is
referenced by the CPU.
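The 20-bit-to-16-bit mapping can be sketched as a simple page-table lookup (the 1K-word page size, the table contents, and the names are our assumptions for illustration):

```python
def translate(virtual_addr, page_table, page_size=1024):
    """Map a virtual address to a physical address: split the address
    into a page number and an offset within the page, look the page
    number up in the mapping table, and recombine with the offset."""
    page = virtual_addr // page_size
    offset = virtual_addr % page_size
    if page not in page_table:
        # the page is not resident in main memory
        raise KeyError("page fault: page %d not resident" % page)
    return page_table[page] * page_size + offset
```

Usage: with `page_table = {0: 3}`, virtual address 5 in page 0 maps to physical address 3 × 1024 + 5.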
When a page that resides in virtual memory is requested by a process for its execution, the operating
system needs to decide which page will be replaced by this requested page. This process is known as
page replacement.
This algorithm depends on the number of frames available. Each frame holds one page, and pages
are accessed from the frames. The actual problem starts once all the frames are filled.
The frames are initially filled with the first pages referenced; this is accomplished with the
help of demand paging.
After the frames are filled, the next page in the reference string tries to enter a frame. If the
page is already present in one of the allocated frames, no problem occurs, because the page being
searched for is already in memory.
If the page to be searched for is found among the frames, the event is known as a page hit.
If the page to be searched for is not found among the frames, the event is known as a page
fault.
When a page fault occurs and all frames are full, the Least Recently Used (LRU) page
replacement algorithm comes into the picture.
The Least Recently Used (LRU) page replacement algorithm works on the following principle:
Replace the page that has not been used for the longest period of time in the past.
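The LRU principle can be sketched as a fault-counting simulation (the function name is ours):

```python
def lru_page_faults(reference_string, num_frames):
    """Count page faults under LRU: on a fault with all frames full,
    evict the page whose last use lies furthest in the past."""
    frames = []                         # ordered least- to most-recently used
    faults = 0
    for page in reference_string:
        if page in frames:              # page hit: refresh recency
            frames.remove(page)
        else:                           # page fault
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)           # evict the least recently used page
        frames.append(page)             # page is now the most recently used
    return faults
```

For the reference string 7, 0, 1, 2, 0, 3, 0, 4 with three frames, LRU incurs six page faults.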
11.Explain about Input-output interface with an example.[7M] June/July 2022[Understand]
I/O interface with example
1. Peripherals are electromechanical and electromagnetic devices, and their manner of operation
is different from that of the CPU and memory, which are electronic devices.
2. The data transfer rate of peripherals is usually slower than the transfer rate of the
CPU and, consequently, a synchronization mechanism may be needed.
3. Data codes and formats in the peripherals differ from the word format in the CPU
and memory.
4. The operating modes of peripherals are different from each other, and each must be
controlled so as not to disturb the operation of other peripherals connected to the CPU.
These components are called interface units because they interface between the
processor bus and the peripheral devices.
The I/O bus defines the typical link between the processor and several peripherals.
The I/O bus consists of data lines, address lines and control lines.
The I/O bus from the processor is attached to all peripheral interfaces.
To communicate with a particular device, the processor places a device address on the
address lines.
Each interface decodes the address and control received from the I/O bus, interprets
them for the peripheral, and provides signals for the peripheral controller.
It also synchronizes the data flow and supervises the transfer between
peripheral and processor.
For example, the printer controller controls the paper motion and the print timing.
The control lines are referred to as I/O commands. The commands are as follows:
Control command - A control command is issued to activate the peripheral and to
inform it what to do.
Status command - A status command is used to test various status conditions in the
interface and the peripheral.
Data Output command - A data output command causes the interface to respond by
transferring data from the bus into one of its registers.
Data Input command - The data input command is the opposite of the data output.
In this case the interface receives an item of data from the peripheral and
places it in its buffer register.
I/O Versus Memory Bus
In addition to communicating with I/O, the processor must communicate with the memory unit.
Like the I/O bus, the memory bus contains data, address and read/write control
lines. There are three ways that computer buses can be used to communicate with
memory and I/O:
i. Use two separate buses, one for memory and the other for I/O.
ii. Use one common bus for both memory and I/O, but separate control lines for each.
iii. Use one common bus for memory and I/O with common control lines.
I/O Processor
In the first method, the computer has independent sets of data, address and control buses,
one for accessing memory and the other for I/O. This is done in computers that provide a
separate I/O processor (IOP). The purpose of the IOP is to provide an independent pathway
for the transfer of information between external devices and internal memory.
12.What is Asynchronous Data Transfer? Explain any one method to achieve the
asynchronous way of data transfer[7M][SET-3]July 2023[Remember]
15 Explain how Handshaking Asynchronous data transfer is advantageous over strobe
control data transfer[Understand]
In this method, two types of techniques are used, based on signals before data transfer.
i. Strobe Control
ii. Handshaking
Strobe Signal:
In the block diagram, fig. (a), the data bus carries the binary information from the
source to the destination unit. Typically, the bus has multiple lines to transfer an
entire byte or word. The strobe is a single line that informs the destination unit
when a valid data word is available.
In the timing diagram, fig. (b), the source unit first places the data
on the data bus. The information on the data bus and the strobe signal remain in
the active state long enough to allow the destination unit to receive the data.
In the destination-initiated method, the destination unit activates the strobe pulse
to inform the source to provide the data. The source responds by placing the
requested binary information on the data bus.
The data must be valid and remain on the bus long enough for
the destination unit to accept it. When it is accepted, the destination unit
disables the strobe and the source unit removes the data from the bus.
Disadvantage of Strobe Signal:
The disadvantage of the strobe method is that a source unit that initiates the
transfer has no way of knowing whether the destination unit has actually received
the data item that was placed on the bus. Similarly, a destination unit that initiates
the transfer has no way of knowing whether the source unit has actually placed
the data on the bus. The handshaking method solves this problem.
Handshaking:
Principle of Handshaking:
The basic principle of the two-wire handshaking method of data transfer is as follows:
One control line is in the same direction as the data flow in the bus, from
the source to the destination. It is used by the source unit to inform the destination
unit whether there is valid data on the bus. The other control line is in the other
direction, from the destination to the source. It is used by the destination unit to
inform the source whether it can accept the data. The sequence of control
during the transfer depends on the unit that initiates the transfer.
The sequence of events shows four possible states that the system can be in at
any given time. The source unit initiates the transfer by placing the data on the
bus and enabling its data valid signal. The data accepted signal is activated by
the destination unit after it accepts the data from the bus. The source unit then
disables its data valid signal, the destination unit disables its data accepted
signal, and the system returns to its initial state.
Destination Initiated Transfer Using Handshaking:
The name of the signal generated by the destination unit has been changed to
ready for data to reflect its new meaning. The source unit in this case does
not place data on the bus until after it receives the ready for data signal from
the destination unit. From there on, the handshaking procedure follows the
same pattern as in the source-initiated case.
The only difference between the source-initiated and the destination-initiated
transfer is in their choice of initial state.
Advantage of the Handshaking method:
If one of the units is faulty, the data transfer will not be completed.
Such an error can be detected by means of a timeout mechanism, which
raises an alarm if the transfer is not completed within a given time.
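The source-initiated handshake sequence can be sketched as follows (the bus dictionary and the signal names are our illustrative stand-ins for the two control wires and the data bus):

```python
def source_initiated_transfer(bus):
    """Simulate one source-initiated two-wire handshake and return
    the sequence of control events in order."""
    events = []
    bus["data"] = 0x5A                  # source places data on the bus
    bus["data_valid"] = 1               # ... and enables data valid
    events.append("data_valid=1")
    bus["data_accepted"] = 1            # destination accepts the data
    events.append("data_accepted=1")
    bus["data_valid"] = 0               # source invalidates the data
    events.append("data_valid=0")
    bus["data_accepted"] = 0            # destination returns to initial state
    events.append("data_accepted=0")
    return events
```

The four events correspond to the four states the text describes for one complete transfer.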
13.Explain about asynchronous data transfer and asynchronous communication
interface [7M][SET-4]July2023[Understand]
16 Discuss various possible modes of Data transfer to and from the peripherals in a
computer system[7M][SET-2]July 2023[Analyze]
Mode of Transfer:
The technique used to transfer information between internal storage and
external I/O devices is known as the I/O interface. The peripherals connected
to a computer system use special communication links to interface with the CPU.
The CPU merely processes the information; the source and destination are always
the memory unit. Data transfer between the CPU and the I/O devices may be done in
different modes. Data transfer to and from the peripherals may be done in any of
three possible ways:
1 Programmed I/O.
2 Interrupt- initiated I/O.
3 Direct memory access( DMA).
Programmed I/O
In the programmed I/O method, the I/O device does not have direct access to
memory.
An example of data transfer from an I/O device through an interface into the CPU
is shown in figure 8.7.
When a byte of data is available, the device places it on the I/O bus and enables
its data valid line.
The interface accepts the byte into its data register and enables the data accepted
line.
The interface sets a bit in the status register that we will refer to as an F or "flag" bit.
The device can now disable the data valid line, but it will not transfer another
byte until the data accepted line is disabled by the interface.
A program is written for the computer to check the flag in the status register to
determine if a byte has been placed in the data register by the I/O device.
This is done by reading the status register into a CPU register and checking the
value of the flag bit.
Once the flag is cleared, the interface disables the data accepted line and the
device can then transfer the next data byte.
Example of Programmed I/O:
A flowchart of the program that must be written for the CPU is shown in figure 8.8.
It is assumed that the device is sending a sequence of bytes that must be
stored in memory.
The transfer of each byte requires three instructions:
1. Read the status register.
2. Check the status of the flag bit and branch to step 1 if not set, or to
step 3 if set.
3. Read the data register.
Each byte is read into a CPU register and then transferred to memory with a
store instruction.
A common I/O programming task is to transfer a block of words from an I/O
device and store them in a memory buffer.
Figure 8.8: Flowchart for CPU program to input data
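The three-instruction loop of figure 8.8 can be sketched as follows (the interface dictionary is a hypothetical stand-in for the status and data registers):

```python
def programmed_io_read(interface, buffer, count):
    """Polling loop for programmed I/O: read the status register,
    test the flag bit, and read the data register when the flag is
    set, until `count` bytes have been stored in the buffer."""
    while len(buffer) < count:
        status = interface["status"]()      # 1. read the status register
        if status & 0x01 == 0:              # 2. flag not set: branch to step 1
            continue
        buffer.append(interface["data"]())  # 3. read the data register
    return buffer
```

Usage: with a stand-in interface whose flag is always set, three polls store three bytes in the buffer.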
Interrupt-Initiated I/O :
Priority Interrupt:
• There are a number of I/O devices attached to the computer.
• They are all capable of generating an interrupt.
• When interrupts are generated by more than one device, a priority interrupt
system is used to determine which device is to be serviced first.
• Devices with high-speed transfer are given higher priority, and slow devices are given
lower priority.
• Establishing the priority can be done in two ways:
Using Software
Using Hardware
• A polling procedure is used to identify the highest priority by software means.
Polling Procedure:
• There is one common branch address for all interrupts.
• The branch address contains the code that polls the interrupt sources in sequence. The
highest priority is tested first.
• The particular service routine of the highest-priority device is served.
• The disadvantage is that the time required to poll the devices can exceed the time to
serve them when there is a large number of I/O devices.
Using Hardware:
• To speed up the operation, each interrupting device has its own interrupt vector.
• No polling is required; all decisions are established by the hardware priority interrupt unit.
Explain Priority interrupt and Daisy chain interrupt system with necessary
diagrams [7M][SET-3] July 2023 [Understand]
Parallel Priority Interrupt :
• It consists of an interrupt register whose bits are set separately by the interrupting
devices.
• Priority is established according to the position of the bits in the register.
• A mask register is used to provide the facility for higher-priority devices
to interrupt when a lower-priority device is being serviced, or to disable all
lower-priority devices when a higher one is being serviced.
• The corresponding interrupt bit and mask bit are ANDed and applied to the priority encoder.
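The AND of interrupt and mask bits feeding a priority encoder can be sketched as follows (bit 0 is taken as the highest priority; the function name and convention are ours):

```python
def highest_priority_device(interrupt_reg, mask_reg):
    """AND each interrupt bit with its mask bit, then return the
    index of the highest-priority (lowest-numbered) active line,
    or None if no unmasked interrupt is pending."""
    active = interrupt_reg & mask_reg   # interrupt bits ANDed with mask bits
    if active == 0:
        return None
    bit = 0
    while not (active >> bit) & 1:      # priority encoder: find lowest set bit
        bit += 1
    return bit
```

Masking bit 1, for example, lets a pending lower-priority request on bit 3 through instead.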
Programmed I/O: The CPU cannot do any other work until the transfer is complete, as it has to
stay in the loop to continuously monitor the peripheral device.
Interrupt-initiated I/O: The CPU can do other work until it is interrupted by a signal
indicating the readiness of the device for data transfer.
19. Draw the block diagram of DMA controller and explain its key
components [7M][SET-1] July 2023 [Understand]
20. Explain in detail about Direct Memory Access in computer system [7M][SET-4]
July 2023 [Understand]
Direct Memory Access (DMA):
In Direct Memory Access (DMA), the interface transfers data into
and out of the memory unit through the memory bus. The transfer of data
between a fast storage device such as a magnetic disk and memory is often
limited by the speed of the CPU. Removing the CPU from the path and
letting the peripheral device manage the memory buses directly
improves the speed of transfer. This transfer technique is called Direct
Memory Access (DMA).
During the DMA transfer, the CPU is idle and has no control of the
memory buses. A DMA controller takes over the buses to manage the
transfer directly between the I/O device and memory.
The CPU may be placed in an idle state in a variety of ways. One common method
extensively used in microprocessors is to disable the buses through special control
signals. The transfer can then take place in two ways:
i) DMA Burst: In a DMA burst transfer, a block sequence consisting of a
number of memory words is transferred in a continuous burst while the
DMA controller is master of the memory buses.
ii) Cycle Stealing: Cycle stealing allows the DMA controller to transfer
one data word at a time, after which it must return control of the
buses to the CPU.
DMA Controller:
i. Address Register: holds the address of the memory location to be accessed; it is
incremented after each word transfer.
ii. Word Count Register: WC holds the number of words to be transferred. The register is
decremented by one after each word transfer and internally tested for zero.
The unit communicates with the CPU via the data bus and control lines.
The registers in the DMA are selected by the CPU through the address
bus by enabling the DS (DMA select) and RS (register select) inputs. The
RD (read) and WR (write) inputs are bidirectional.
When the BG (bus grant) input is 0, the CPU can communicate with the DMA
registers through the data bus to read from or write to the DMA registers. When BG
= 1, the DMA can communicate directly with the memory by specifying an address on
the address bus and activating the RD or WR control.
DMA Transfer:
The CPU communicates with the DMA through the address and data
buses as with any interface unit. The DMA has its own address, which
activates the DS and RS lines. The CPU initializes the DMA through the
data bus. Once the DMA receives the start control command, it can
transfer data between the peripheral and the memory.
When BG = 0, RD and WR are input lines, allowing the CPU to communicate with
the internal DMA registers. When BG = 1, RD and WR are output lines from the
DMA controller to the random-access memory, specifying the read or write operation
on the data.