
Computer Organization

and Architecture

Dr. Sumita Nainan


UNIT 2

 System buses
 Overview of basic instruction cycle
 Interrupts
 Bus interconnection
 Elements of bus design
 Read and write timing diagrams
 Bus hierarchy
 Bus arbitration techniques.



Overview of basic instruction cycle

 Instruction Cycle
A program stored in a computer's memory unit consists of a series of instructions. The
processor executes these instructions by repeating the same cycle for each instruction.
In a basic computer, each instruction cycle consists of the following phases:
1. Fetch the instruction from memory.
2. Decode the instruction: determine what operation is to be performed.
3. Read the effective address from memory (if the instruction has an indirect address).
4. Execute the instruction.





Overview of basic instruction cycle

Registers Involved In Each Instruction Cycle:


1. Memory Address Register (MAR): It is connected to the address lines of the system
bus. It specifies the address in memory for a read or write operation.
2. Memory Buffer Register (MBR): It is connected to the data lines of the system
bus. It holds the value to be stored in memory, or the last value read from
memory.
3. Program Counter (PC): Holds the address of the next instruction to be
fetched.
4. Instruction Register (IR): Holds the last instruction fetched.



Overview of basic instruction cycle

Fetch cycle
At the beginning of the fetch cycle, the address of the next instruction to execute is in the
Program Counter (PC).
Step 1: The address in the program counter is transferred to the Memory Address
Register (MAR), as this is the only register that is connected to the address lines of the system bus.
Step 2: The address in MAR is placed on the address bus, the control unit issues a Read
command on the control bus, and the result appears on the data bus and is copied into
the Memory Buffer Register (MBR). The program counter is incremented by one, to get ready for the next
instruction. These two actions can be carried out concurrently to save time.
Step 3: The content of the MBR is moved to the Instruction Register (IR). The instruction fetch
cycle thus consists of four micro-operations: MAR ← PC; MBR ← memory; PC ← PC + 1; IR ← MBR.
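The three steps above can be written out at the register-transfer level. The following is a minimal Python sketch (the register names follow the text; the memory contents are made up for illustration):

```python
# Hypothetical register-level sketch of the fetch cycle's micro-operations.
memory = {0x100: 0xA1, 0x101: 0xB2}   # toy memory: address -> instruction word
PC, MAR, MBR, IR = 0x100, 0, 0, 0

MAR = PC                 # t1: MAR <- PC
MBR = memory[MAR]        # t2: MBR <- memory[MAR]  (read via the data bus)
PC = PC + 1              # t2: PC  <- PC + 1       (can overlap with the read)
IR = MBR                 # t3: IR  <- MBR
```

After these micro-operations, IR holds the fetched instruction and PC already points at the next one.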



Overview of basic instruction cycle

Decode instruction cycle


Once an instruction is fetched, the next step is to fetch the source operands. A source
operand may be obtained by any addressing mode; here it is obtained by indirect
addressing. Register-based operands do not need to be fetched. Once the opcode is
executed, a similar process may be needed to store the result in main memory.
Step 1: The address field of the instruction is passed to the MAR. This is used to fetch the operand's
address.
Step 2: The address field of the IR is updated from the MBR.
Step 3: The IR is now in the same state as if direct addressing had been used, and it is ready for the execute cycle.



Overview of basic instruction cycle

Execute instruction Cycle


The other three cycles (Fetch, Indirect, and Interrupt) are simple and predictable. Each
requires a simple, small, and fixed sequence of micro-operations; in each case the same
micro-operations are repeated each time around. The execute cycle is different:
for a machine with N different opcodes, there are N different sequences of micro-operations
that can occur. Consider, as an example, an instruction that adds the contents of a memory location to a register R:
Step 1: The address portion of the IR is loaded into the MAR.
Step 2: The referenced memory location is read into the MBR.
Step 3: The contents of R and the MBR are added by the ALU.



Interrupts

An interrupt in computer architecture is a signal that requests the processor to suspend its
current execution and service the occurred interrupt. To service the interrupt, the processor
executes the corresponding interrupt service routine (ISR). After executing the
interrupt service routine, the processor resumes the execution of the suspended program.
Interrupts are of two types: hardware interrupts and software interrupts.



Interrupts

Types of Interrupts in Computer Architecture


 1. Hardware Interrupts
 If a processor receives an interrupt request from an external I/O device, it is termed a
hardware interrupt. Hardware interrupts are further divided into maskable and non-maskable
interrupts.
 Maskable Interrupt: A hardware interrupt that can be ignored or delayed for some time if the
processor is executing a higher-priority program is termed a maskable interrupt.
 Non-Maskable Interrupt: A hardware interrupt that can neither be ignored nor delayed and
must be serviced immediately by the processor is termed a non-maskable interrupt.
 2. Software Interrupts
 Software interrupts are interrupts that occur when a condition is met or a system call
occurs.



Interrupts

Interrupt Cycle
 A normal instruction cycle starts with an instruction fetch and execute. To accommodate
the occurrence of interrupts during normal processing of instructions, an interrupt
cycle is added to the normal instruction cycle as shown in the figure below.



Interrupts

• After the execution of the current instruction, the processor checks the interrupt signal to see
whether any interrupt is pending. If no interrupt is pending, the processor proceeds to fetch the
next instruction in sequence.
• If the processor finds a pending interrupt, it suspends the execution of the current program by
saving the address of the next instruction to be executed, and it updates the program counter
with the starting address of the interrupt service routine for the occurred interrupt.
• After the interrupt is serviced completely, the processor resumes the execution of the program it had
suspended.
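The extended cycle can be sketched in a few lines of Python (the addresses and names here, such as `ISR_ADDRESS`, are made up for illustration; real hardware does the check in the control unit):

```python
# Instruction cycle extended with an interrupt check after each execution.
ISR_ADDRESS = 0x2000          # hypothetical start of the interrupt service routine
pc = 0x100
interrupt_pending = False
saved_pc = None
trace = []

def step():
    """One pass through the cycle: execute, then test for a pending interrupt."""
    global pc, interrupt_pending, saved_pc
    trace.append(f"execute @{pc:#x}")
    pc += 1                   # address of the next instruction in sequence
    if interrupt_pending:     # interrupt cycle
        saved_pc = pc         # save the return address
        pc = ISR_ADDRESS      # redirect execution to the service routine
        interrupt_pending = False

step()                        # no interrupt: pc advances normally
interrupt_pending = True
step()                        # interrupt pending: pc is redirected to the ISR
```

After the second step, `saved_pc` holds the address to resume from once the ISR finishes.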
Interrupts

Interrupt Latency
• To service the occurred interrupt, the processor suspends the execution of the current
program and saves the details of the program to maintain the integrity of the program's
execution. Modern processors store the minimum information that the processor will need
to resume the execution of the suspended program. Still, the saving
and restoring of information from memory and registers, which involve memory transfers,
increase the execution time of the program.
• A memory transfer also occurs when the program counter is updated with the starting
address of the interrupt service routine. These memory transfers cause a delay
between the time an interrupt is received and the time the processor starts executing the
interrupt service routine. This time delay is termed interrupt latency.



Interrupts

Enabling and Disabling Interrupts in Computer Architecture


• Modern computers have facilities to enable or disable interrupts. A programmer must have control over
the events during the execution of the program.
• For example, consider a situation in which a particular sequence of instructions must be executed without
any interruption, since the execution of an interrupt service routine might change the
data used by that sequence of instructions. So the programmer must have the facility to enable and
disable interrupts in order to control the events during the execution of the program.
• Interrupts can be enabled and disabled at both ends, i.e. either at the processor end or at the
I/O device end. If interrupts are enabled or disabled at the processor end, the
processor can accept or reject interrupt requests. If I/O devices are allowed to enable or
disable interrupts at their end, then each I/O device is either allowed to raise an interrupt request or
prevented from raising one.
• To enable or disable interrupts at the processor end, one bit of the processor's status register, IE (Interrupt
Enable), is used. When the IE flag is set to 1, the processor accepts occurred interrupts. If the IE flag is
set to 0, the processor ignores requested interrupts.
• To enable and disable interrupts at the I/O device end, the control register present at the interface of
the I/O device is used. One bit of this control register is used to regulate the enabling and disabling of
interrupts from the I/O device end.
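The IE-flag check reduces to a single bit test. A minimal sketch, assuming the IE flag sits in bit 0 of the status register (the bit position and register layout are assumptions, not those of any specific processor):

```python
# Sketch of interrupt enabling via an IE bit in a processor status register.
IE_BIT = 0x01  # assumed bit position of the Interrupt Enable flag

def accepts_interrupt(status_register: int) -> bool:
    """The processor accepts interrupts only when the IE flag is 1."""
    return (status_register & IE_BIT) != 0

status = 0x00        # IE = 0: requested interrupts are ignored
status |= IE_BIT     # enable interrupts by setting the IE flag
```

The same masking idea applies at the I/O device end, with the bit living in the device's control register instead.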
Interrupts

Handling Multiple Devices


• Consider the situation that the processor is connected to multiple devices each of which
is capable of generating the interrupt. Now as each of the connected devices is
functionally independent of each other, there is no certain ordering in which they can
initiate interrupts.
• Let us say device X may interrupt the processor when it is servicing the interrupt caused
by device Y. Or it may happen that multiple devices request interrupts simultaneously.
These situations raise several questions:
• How will the processor identify which device has requested the interrupt?
• If different devices request different types of interrupts and the processor has to
service them with different service routines, how will the processor get the
starting address of that particular interrupt service routine?
• Can a device interrupt the processor while it is servicing the interrupt produced by
another device?
• How can the processor handle multiple devices requesting interrupts simultaneously?
Interrupts

• How these situations are handled varies from computer to computer. If
multiple devices are connected to the processor, each capable
of raising an interrupt, how will the processor determine which device
has requested an interrupt?
• The solution is that whenever a device requests an interrupt, it sets
its interrupt request (IRQ) bit to 1 in its status register. The processor
checks the IRQ bit of each device, and the device whose IRQ bit
is 1 is the device that has raised the interrupt.
• But this is a time-consuming method, as the processor spends time checking
the IRQ bits of every connected device. The time wasted can be reduced
by using a vectored interrupt.
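The polling scheme can be sketched as a linear scan over device status registers (the device list and the IRQ bit position are made up for illustration, not a real device interface):

```python
# Sketch of polling: the processor scans each device's status register
# until it finds one with its IRQ bit set.
IRQ_BIT = 0x80  # assumed position of the IRQ bit in the status register

devices = [
    {"name": "keyboard", "status": 0x00},
    {"name": "disk",     "status": 0x80},   # IRQ bit set: requesting service
    {"name": "printer",  "status": 0x00},
]

def find_requesting_device(devs):
    for dev in devs:                 # time cost grows with the device count
        if dev["status"] & IRQ_BIT:
            return dev["name"]
    return None                      # no device is requesting an interrupt
```

The scan makes the cost of the method visible: it is proportional to the number of connected devices, which is exactly what vectored interrupts avoid.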
Interrupts
Vectored Interrupt
• Devices raising a vectored interrupt identify themselves directly to the processor.
Instead of wasting time identifying which device has requested an interrupt, the
processor immediately starts executing the corresponding interrupt service routine for
the requested interrupt.
• To identify itself directly to the processor, the device either requests with its
own interrupt request signal or sends a special code to the processor, which helps
the processor identify which device has requested an interrupt.
• Usually, a permanent area in memory is allotted to hold the starting address of each
interrupt service routine. The addresses referring to the interrupt service routines are
termed interrupt vectors, and together they constitute an interrupt vector table.
• The device requesting an interrupt sends a specific interrupt request signal or a special
code to the processor. This information acts as a pointer into the interrupt vector table, and
the corresponding address (the address of the specific interrupt service routine
required to service the interrupt raised by the device) is loaded into the program counter.
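The table lookup described above can be sketched in a few lines (the device codes and ISR addresses are invented for illustration):

```python
# Sketch of a vectored interrupt: the device's code indexes the interrupt
# vector table, and the selected ISR address is loaded into the PC.
interrupt_vector_table = {
    0: 0x1000,   # ISR starting address for device 0
    1: 0x1040,   # ISR starting address for device 1
    2: 0x1080,   # ISR starting address for device 2
}

def dispatch(device_code: int) -> int:
    """Return the new PC value: the starting address of the matching ISR."""
    return interrupt_vector_table[device_code]

pc = dispatch(1)   # device 1 raised the interrupt
```

A single constant-time lookup replaces the per-device IRQ scan of the polling method.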
Interrupts

• Interrupt Nesting
• When the processor is busy executing an interrupt service routine,
interrupts are disabled in order to ensure that the device does not raise
more than one interrupt. A similar arrangement is used where
multiple devices are connected to the processor, so that the servicing of
one interrupt is not interrupted by an interrupt raised by another device.
• If multiple devices raise interrupts simultaneously, the
interrupts are prioritized.



Interrupts

Priority Interrupts in Computer Architecture


• The I/O devices are organized in a priority structure such that an interrupt raised by a
high-priority device is accepted even if the processor is servicing an interrupt from a
low-priority device.
• A priority level is assigned to the processor, and it can be changed under program control.
Whenever a processor starts the execution of some program, its priority level is set
equal to the priority of the program in execution. While executing the current
program, the processor accepts interrupts only from devices that have higher
priority than the processor's.
• When the processor is executing an interrupt service routine, the processor's priority
level is set to the priority of the device whose interrupt the processor is servicing.
Thus the processor accepts interrupts only from devices with higher priority
and ignores interrupts from devices with the same or lower priority. To set the
priority level of the processor, some bits of the processor's status register are used.
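The acceptance rule reduces to a single comparison. A minimal sketch (the numeric levels are illustrative; only the strict "higher than the processor's level" test comes from the text):

```python
# Sketch of priority interrupts: a request is accepted only if the device's
# priority is strictly higher than the processor's current priority level.
def accept(request_priority: int, processor_priority: int) -> bool:
    return request_priority > processor_priority

# While servicing a level-3 device, the processor's own level is set to 3:
processor_level = 3
```

So a level-5 request would interrupt the level-3 service routine, while level-3 and level-1 requests would be ignored until it finishes.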
Bus Interconnection
• A bus is a collection of wires that connects several devices.
• Buses are used to send control signals and data between the processor and other components.
• This is to achieve a reasonable speed of operation.
• In a computer system, all the peripherals are connected to the microprocessor through a bus.
• Multiple devices connect to the bus, and a signal transmitted by any one device is available
for reception by all other devices attached to the bus.
• If two devices transmit during the same time period, their signals will overlap and become garbled.
Thus, only one device at a time can successfully transmit.
• Typically, a bus consists of multiple communication pathways, or lines. Each line is capable of
transmitting signals representing binary 1 and binary 0.
• An 8-bit unit of data can be transmitted over eight bus lines. A bus that connects major computer
components (processor, memory, I/O) is called a system bus.



Bus Interconnection

• Types of Bus structure:


1. Address bus
2. Data bus
3. Control bus



Bus Interconnection
1. Address Bus:
• The address bus carries the memory address while reading from or writing into memory.
• The address bus carries the I/O port address or device address from the I/O port.
• With a uni-directional address bus, only the CPU can send addresses; other units cannot address
the microprocessor.
• Nowadays, computers have a bi-directional address bus.
2. Data Bus:
• The data bus carries the data.
• The data bus is a bidirectional bus.
• The data bus fetches instructions from memory.
• The data bus is used to store the result of an instruction into memory.
• The data bus carries commands to an I/O device controller or port.
• The data bus carries data from a device controller or port.
• The data bus issues data to a device controller or port.


Bus Interconnection
3. Control Bus:
• Different types of control signals are used on a bus:
• Memory Read: This signal is issued by the CPU or DMA controller when performing a read
operation with the memory.
• Memory Write: This signal is issued by the CPU or DMA controller when performing a write
operation with the memory.
• I/O Read: This signal is issued by the CPU when it is reading from an input port.
• I/O Write: This signal is issued by the CPU when writing into an output port.
• Ready: Ready is an input signal to the CPU, generated in order to synchronize the slow
memory or I/O ports with the fast CPU.
• A system bus is a single computer bus that connects the major components of a computer
system, combining the functions of a data bus to carry information, an address bus to
determine where it should be sent, and a control bus to determine its operation.
Bus Interconnection
3. Control Bus:
• The control lines are used to control the access to and the use of the data and
address lines.
• Control signals transmit both command and timing information among system
modules.
• Timing signals indicate the validity of data and address information. Command
signals specify operations to be performed.
• Typical control lines include
• Memory write: Causes data on the bus to be written into the addressed location
• Memory read: Causes data from the addressed location to be placed on the bus
• I/O write: Causes data on the bus to be output to the addressed I/O port
• I/O read: Causes data from the addressed I/O port to be placed on the bus
Bus Interconnection
3. Control Bus:
• Transfer ACK: Indicates that data have been accepted from or placed on the bus
• Bus request: Indicates that a module needs to gain control of the bus
• Bus grant: Indicates that a requesting module has been granted control of the bus
• Interrupt request: Indicates that an interrupt is pending
• Interrupt ACK: Acknowledges that the pending interrupt has been recognized
• Clock: Is used to synchronize operations
• Reset: Initializes all modules
The operation of the bus is as follows. If one module wishes to send data to another, it must
do two things: (1) obtain the use of the bus, and (2) transfer data via the bus. If one module
wishes to request data from another module, it must (1) obtain the use of the bus, and (2)
transfer a request to the other module over the appropriate control and address lines. It must
then wait for that second module to send the data.
Bus Interconnection
Multiple-Bus Hierarchies
• If a great number of devices are connected to the bus, performance will suffer.
• There are two main causes:
1. In general, the more devices attached to the bus, the greater the bus length and hence the greater the
propagation delay.
2. The bus may become a bottleneck as the aggregate data transfer demand approaches the capacity of the
bus.
• Most computer systems use multiple buses, generally laid out in a hierarchy.
• A typical traditional structure is shown in Figure.



Bus Interconnection

Multiple-Bus Hierarchies
• There is a local bus that connects the processor to a cache memory and that
may support one or more local devices.
• The cache memory is connected to a system bus to which all of the main
memory modules are attached.
• It is possible to connect I/O controllers directly onto the system bus.
• A more efficient solution is to make use of one or more expansion buses for
this purpose.
• This arrangement allows the system to support a wide variety of I/O devices
and at the same time insulate memory-to-processor traffic from I/O traffic.
Bus Interconnection
Multiple-Bus Hierarchies
• Network connections include local area networks (LANs), wide area networks (WANs), SCSI (Small Computer System Interface), and serial
ports.
• This traditional bus architecture is reasonably efficient but begins to break down as higher and higher performance is seen in the I/O
devices.
• In response to these growing demands, a common approach taken by industry is to build a high-speed bus that is closely integrated
with the rest of the system, requiring only a bridge between the processor’s bus and the high-speed bus.
• This arrangement is sometimes known as a mezzanine architecture.
• Figure shows a typical realization of this approach



Bus Interconnection

Multiple-Bus Hierarchies
• Again, there is a local bus that connects the processor to a cache controller, which is
in turn connected to a system bus that supports main memory.
• The cache controller is integrated into a bridge, or buffering device, that connects to
the high-speed bus.
• This bus supports connections to high-speed LANs, video and graphics workstation
controllers, SCSI, and FireWire. Lower-speed devices are still supported off an
expansion bus, with an interface buffering traffic between the expansion bus and the
high-speed bus.
• The advantage of this arrangement is that the high-speed bus brings high-demand
devices into closer integration with the processor and at the same time is
independent of the processor.
Elements of Bus Design



Elements of Bus Design

Type
• Bus lines can be separated into two generic types: dedicated and multiplexed.
• A dedicated bus line is permanently assigned either to one function or to a physical subset of computer
components.
• Physical dedication refers to the use of multiple buses, each of which connects only a subset of modules.
• The potential advantage of physical dedication is high throughput, because there is less bus contention.
• A disadvantage is the increased size and cost of the system.
• Address and data information may be transmitted over the same set of lines using an Address Valid control
line.
• At the beginning of a data transfer, the address is placed on the bus and the Address Valid line is activated.
• The address is then removed from the bus, and the same bus connections are used for the subsequent read or
write data transfer.
• This method of using the same lines for multiple purposes is known as time multiplexing.
• The advantage of time multiplexing is the use of fewer lines, which saves space and, usually, cost.
• The disadvantage is that more complex circuitry is needed within each module.
Elements of Bus Design

Method Of Arbitration
• The various methods can be roughly classified as being either centralized or
distributed.
• In a centralized scheme, a single hardware device, referred to as a bus controller
or arbiter, is responsible for allocating time on the bus.
• In a distributed scheme, there is no central controller. Rather, each module
contains access control logic and the modules act together to share the bus.
• With both methods of arbitration, the purpose is to designate either the
processor or an I/O module, as master.
• The master may then initiate a data transfer (e.g., read or write) with some
other device, which acts as slave for this particular exchange.
Elements of Bus Design

Timing
• Buses use either synchronous timing or asynchronous timing.
• With synchronous timing, the occurrence of events on the bus is
determined by a clock. A single 1–0 transmission is referred to as a clock
cycle or bus cycle and defines a time slot.
• The figure shows a typical, but simplified, timing diagram for synchronous
read and write operations.



Elements of Bus Design
Timing
• In this example, the processor places a memory address on the
address lines during the first clock cycle and may assert various status
lines.
• Once the address lines have stabilized, the processor issues an address
enable signal.
• For a read operation, the processor issues a read command at the
start of the second cycle.
• A memory module recognizes the address and, after a delay of one
cycle, places the data on the data lines.
• The processor reads the data from the data lines and drops the read
signal.
• For a write operation, the processor puts the data on the data lines at
the start of the second cycle, and issues a write command after the
data lines have stabilized.
• The memory module copies the information from the data lines during
the third clock cycle.
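The synchronous read described above can be sketched cycle by cycle. This is a Python sketch of the sequencing only, not a hardware model; the signal names and the example address/data are invented:

```python
# Clock-by-clock sketch of a synchronous bus read: address in cycle 1,
# read command in cycle 2, data from memory in cycle 3.
memory = {0x40: 0xCAFE}
events = []

def synchronous_read(address):
    # Cycle 1: the processor places the address on the address lines
    events.append((1, "address", address))
    # Cycle 2: the processor asserts the read command on the control lines
    events.append((2, "read", True))
    # Cycle 3: memory places the data on the data lines after a one-cycle delay
    data = memory[address]
    events.append((3, "data", data))
    return data

value = synchronous_read(0x40)
```

Every event is pinned to a clock cycle, which is what makes synchronous timing simple to implement and test.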
Elements of Bus Design
Timing
• The processor places address and status signals on the bus.
• After pausing for these signals to stabilize, it issues a read command,
indicating the presence of valid address and control signals.
• The appropriate memory decodes the address and responds by placing the
data on the data line.
• Once the data lines have stabilized, the memory module asserts the
acknowledge line to signal the processor that the data are available.
• Once the master has read the data from the data lines, it deasserts the read
signal.
• This causes the memory module to drop the data and acknowledge lines.
• Finally, once the acknowledge line is dropped, the master removes the
address information.
• Synchronous timing is simpler to implement and test.
• However, it is less flexible than asynchronous timing.
• With asynchronous timing, a mixture of slow and fast devices, using older and
newer technology, can share a bus.
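The asynchronous handshake above is a chain of events, each triggered by the previous signal rather than by a clock edge. A minimal sketch of that ordering (signal transitions are logged as strings; the address and data are invented):

```python
# Sketch of the asynchronous read handshake: each transition is caused by
# the one before it, with no shared clock.
memory = {0x10: 0x5A}
log = []

def async_read(address):
    log.append("master: address valid, read asserted")
    data = memory[address]                       # slave decodes and fetches
    log.append("slave: data valid, ack asserted")
    log.append("master: data latched, read deasserted")
    log.append("slave: data and ack dropped")
    log.append("master: address removed")
    return data

value = async_read(0x10)
```

Because only the ordering matters, a slow memory simply delays the "ack asserted" step without breaking the protocol, which is why mixed-speed devices can share an asynchronous bus.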
Elements of Bus Design
Bus Width
• The width of the data bus has an impact on system performance:
the wider the data bus, the greater the number of bits transferred at one time.
• The width of the address bus has an impact on system capacity:
the wider the address bus, the greater the range of locations that can be referenced.

Data Transfer Type


• In the case of a multiplexed address/data bus, the bus is first used for specifying the
address and then for transferring the data.
• For a read operation, there is typically a wait while the data are being fetched from the
slave to be put on the bus.
• For either a read or a write, there may also be a delay if it is necessary to go through
arbitration to gain control of the bus for the remainder of the operation.
Elements of Bus Design
Data Transfer Type
• In the case of dedicated address and data buses, the address is put on the address bus and remains
there while the data are put on the data bus.
• For a write operation, the master puts the data onto the data bus as soon as the address has stabilized
and the slave has had the opportunity to recognize its address.
• For a read operation, the slave puts the data onto the data bus as soon as it has recognized its address
and has fetched the data.



Read/Write timing diagram
• The exact working of the processor bus can be explained by a
series of timing diagrams for basic operations such as memory
read and memory write.
• What all operations of the processor bus have in common is the
general order of steps, which typically starts with the processor
setting an address on the address bus and a signal on the control
bus that indicates presence of a valid address, and proceeds
with the transfer of data.
• Any device connected to the processor bus is responsible for
recognizing its address, usually through an address decoder that
sends the chip select signal when the address of the device is
recognized.
• The ISA (Industry Standard Architecture) bus is synchronized by a
clock signal ticking with the frequency of 8-10 MHz.
• In the first clock tick of a bus cycle, the bus master, which is
typically the processor, sets the address on the address bus and
pulses the BALE (Bus Address Latch Enable) signal to indicate
that the address is valid.
Read/Write timing diagram
• In a read bus cycle, the bus master activates one of the MEMR (Memory
Read) or IOR (Input/Output Read) signals to indicate either reading from
memory or reading from an input device.
• The bus master waits the next four cycles for the memory or the device to
recognize its address and set the data on the data bus.



Read/Write timing diagram
• In a write bus cycle, the bus master activates one of the MEMW (Memory
Write) or IOW (Input/Output Write) signals to indicate either writing to
memory or writing to an output device.
• The bus master sets the data on the data bus and waits the next four
cycles for the memory or the device to recognize its address and data.



Bus arbitration

• Bus arbitration refers to the process by which the current bus master accesses and then leaves
control of the bus, passing it to another requesting processor unit.
• The controller that has access to the bus at a given instant is known as the bus master.
• A conflict may arise if a number of DMA controllers, other controllers, or processors try to access the
common bus at the same time, but access can be given to only one of them.
• Only one processor or controller can be bus master at the same point in time.
• To resolve these conflicts, a bus arbitration procedure is implemented to coordinate the activities of all
devices requesting memory transfers.
• The selection of the bus master must take into account the needs of various devices by establishing a
priority system for gaining access to the bus.
• The bus arbiter decides who becomes the current bus master.
• There are two approaches to bus arbitration:
1. Centralized bus arbitration – a single bus arbiter performs the required arbitration.
2. Distributed bus arbitration – all devices participate in the selection of the next bus master.
Methods of Centralized Bus Arbitration

1. Daisy Chaining method –


• It is a simple and cheap method in which all the bus masters use the same line for making bus requests.
• The bus grant signal serially propagates through each master until it encounters the first one that is
requesting access to the bus.
• This master blocks the propagation of the bus grant signal, therefore any other requesting module will not
receive the grant signal and hence cannot access the bus.
• During any bus cycle, the bus master may be any device – the processor or any DMA controller unit,
connected to the bus.
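The grant propagation can be sketched as a scan down the chain: the first requesting master blocks the signal and wins, so priority is fixed by position (a minimal Python sketch; the request pattern is made up):

```python
# Sketch of daisy-chain arbitration: the bus grant propagates through the
# masters in order and stops at the first one requesting the bus.
def daisy_chain_grant(requests):
    """requests[i] is True if master i wants the bus; return the winner's
    position, where position 0 is closest to the arbiter (highest priority)."""
    for position, requesting in enumerate(requests):
        if requesting:           # this master blocks further propagation
            return position
    return None                  # no master requested: the grant goes unused

winner = daisy_chain_grant([False, True, True])
```

Here master 1 wins even though master 2 is also requesting, which illustrates the positional-priority disadvantage listed below.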



Methods of Centralized Bus Arbitration

1. Daisy Chaining method –


Advantages –
• Simplicity and scalability.
• The user can add more devices anywhere along the chain, up to a certain maximum value.
Disadvantages –
• The priority assigned to a device depends on its position in the chain.
• Propagation delay arises in this method.
• If one device fails, then the entire system will stop working.



Methods of Centralized Bus Arbitration

2. Polling or Rotating Priority method –


• In this method, the controller generates the address of the
master (each master has a unique priority); the number of address lines required depends on
the number of masters connected in the system.
• The controller generates a sequence of master addresses. When a
requesting master recognizes its address, it activates the busy line and
begins to use the bus.
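The rotating version of polling can be sketched as a scan that starts just after the previous winner, so no master is permanently favored (a Python sketch; the request patterns are invented):

```python
# Sketch of rotating-priority polling: the arbiter generates master
# addresses in sequence, starting after the previous winner.
def poll(requests, last_winner=-1):
    """Return the first requesting master found in the rotation, or None."""
    n = len(requests)
    for offset in range(1, n + 1):
        candidate = (last_winner + offset) % n   # next address in the rotation
        if requests[candidate]:
            return candidate                     # master activates the busy line
    return None

first = poll([True, False, True])            # rotation starts from master 0
second = poll([True, False, True], first)    # rotation now favors master 2
```

With the same request pattern, master 0 wins the first round and master 2 the second, showing why the method favors no particular device.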



Methods of Centralized Bus Arbitration

2. Polling or Rotating Priority method –


Advantages –
• This method does not favor any particular device or processor.
• The method is also quite simple.
• If one device fails, the entire system will not stop working.
Disadvantages –
• Adding bus masters is difficult, as it increases the number of address lines of the circuit.



Methods of Centralized Bus Arbitration

3. Fixed priority or Independent Request method –


• In this, each master has a separate pair of bus request and bus grant lines
and each pair has a priority assigned to it.
• The built-in priority decoder within the controller selects the highest
priority request and asserts the corresponding bus grant signal.
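The priority decoder inside the controller reduces to picking the highest-priority active request line. A minimal sketch (here index 0 is assumed to be the highest priority; the request pattern is invented):

```python
# Sketch of the independent-request scheme: each master has its own request
# line, and a priority encoder grants the highest-priority active request.
def priority_grant(request_lines):
    """request_lines[0] has the highest priority; return the granted index."""
    for index, requesting in enumerate(request_lines):
        if requesting:
            return index       # assert this master's bus grant line
    return None                # no request lines are active

granted = priority_grant([False, True, True])
```

Because every master has a dedicated pair of lines, the grant is computed in one pass with no polling sequence, which is why the method responds fast at the cost of many control lines.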



Methods of Centralized Bus Arbitration

3. Fixed priority or Independent Request method –


Advantages –
• This method generates a fast response.
Disadvantages –
• Hardware cost is high as a large no. of control lines is required.



Distributed Bus Arbitration

• In this method, all devices participate in the selection of the next bus master. Each
device on the bus is assigned a 4-bit identification number.
• The priority of the device is determined by the generated ID.
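A common realization is that each competing device drives its 4-bit ID onto shared arbitration lines and the largest ID survives. The following is a simplified sketch of that outcome, not of the bit-by-bit open-drain mechanism; the IDs are invented:

```python
# Sketch of distributed arbitration: among the devices competing in a given
# round, the one with the highest 4-bit ID wins, with no central arbiter.
def distributed_arbitrate(competing_ids):
    """Each device compares its ID with the bus; the largest ID survives."""
    return max(competing_ids) if competing_ids else None

winner = distributed_arbitrate([0b0101, 0b1001, 0b0110])
```

Every device runs the same comparison logic locally, which is exactly what removes the need for a central controller.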

