Unit 2
System buses
Overview of basic instruction cycle
Interrupts
Bus interconnection
Elements of bus design
Read and write timing diagrams
Bus hierarchy
Bus arbitration techniques.
Instruction Cycle
A program stored in a computer's memory unit consists of a series of instructions. The
processor executes these instructions through a cycle repeated for each instruction.
In a basic computer, each instruction cycle consists of the following phases:
1. Fetch the instruction from memory
2. Decode the instruction: determine what operation is to be performed
3. Read the effective address from memory (if the instruction uses an indirect address)
4. Execute the instruction
Fetch cycle
The address of the next instruction to execute is in the Program Counter (PC) at the beginning
of the fetch cycle.
Step 1: The address in the program counter is transferred to the Memory Address
Register (MAR), as this is the only register connected to the address lines of the system bus.
Step 2: The address in the MAR is placed on the address bus, the control unit issues a Read
command on the control bus, and the result appears on the data bus, from which it is copied
into the Memory Buffer Register (MBR). Meanwhile the program counter is incremented by
one, to get ready for the next instruction; these two actions can be carried out concurrently
to save time.
Step 3: The content of the MBR is moved to the Instruction Register (IR).
The instruction fetch cycle thus consists of four micro-operations: MAR ← PC,
MBR ← memory, PC ← PC + 1, and IR ← MBR.
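The fetch micro-operations above can be sketched as register transfers. This is a minimal simulation; the register dictionary, memory map, and instruction words are illustrative, not a real machine.

```python
# Hypothetical register-transfer sketch of the fetch cycle described above.
# Register names (PC, MAR, MBR, IR) follow the text; values are invented.

def fetch(regs, mem):
    """Perform one instruction fetch as a sequence of micro-operations."""
    regs["MAR"] = regs["PC"]          # t1: MAR <- PC
    regs["MBR"] = mem[regs["MAR"]]    # t2: MBR <- memory[MAR] (Read command)
    regs["PC"] = regs["PC"] + 1       # t2: PC <- PC + 1 (done concurrently)
    regs["IR"] = regs["MBR"]          # t3: IR <- MBR
    return regs

regs = {"PC": 100, "MAR": 0, "MBR": 0, "IR": 0}
mem = {100: 0x1940, 101: 0x5941}      # two example instruction words
fetch(regs, mem)
print(hex(regs["IR"]), regs["PC"])    # instruction 0x1940 fetched, PC now 101
```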
Interrupt Cycle
A normal instruction cycle starts with the instruction fetch and execute phases. To
accommodate the occurrence of interrupts during normal processing of instructions, an
interrupt cycle is added to the normal instruction cycle, as shown in the figure below.
• After executing the current instruction, the processor checks the interrupt signal to see
whether any interrupt is pending. If no interrupt is pending, the processor proceeds to fetch
the next instruction in sequence.
• If the processor finds a pending interrupt, it suspends execution of the current program by
saving the address of the next instruction to be executed, and it loads the program counter
with the starting address of the interrupt service routine that services the interrupt.
• After the interrupt is serviced completely, the processor resumes execution of the
suspended program.
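The extended cycle above can be sketched as a loop: execute, check for a pending interrupt, service it, then resume. This is an abstract sketch under assumed names (`run`, `arrivals`); real processors save and restore more state than just the PC.

```python
# Minimal sketch of the instruction cycle with an interrupt check after each
# execute phase, as described above. Instructions are just labels here.

def run(program, isr, arrivals):
    """Execute `program`; after step i, if i is in `arrivals`, run the ISR."""
    pc, trace = 0, []
    while pc < len(program):
        trace.append(program[pc])   # fetch + decode + execute (abstracted)
        pc += 1
        if pc - 1 in arrivals:      # interrupt cycle: any interrupt pending?
            saved_pc = pc           # save address of the next instruction
            trace.extend(isr)       # jump to and run the service routine
            pc = saved_pc           # restore PC and resume the program
    return trace

out = run(["I0", "I1", "I2"], ["ISR"], arrivals={1})
print(out)   # ['I0', 'I1', 'ISR', 'I2'] — ISR runs between I1 and I2
```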
Dr. Sumita Nainan
Interrupts
Interrupt Latency
• To service an interrupt, the processor suspends execution of the current
program and saves the details needed to maintain the integrity of the program's
execution. Modern processors store the minimum information the processor will need
to resume the suspended program. Even so, saving and restoring information to and
from memory and registers involves memory transfers that increase the program's
execution time.
• A memory transfer also occurs when the program counter is loaded with the starting
address of the interrupt service routine. These transfers cause a delay between the
time the interrupt is received and the time the processor starts executing the
interrupt service routine. This delay is termed the interrupt latency.
• How these situations are handled varies from computer to computer. Now,
if multiple devices capable of raising an interrupt are connected to the
processor, how does the processor determine which device has requested
an interrupt?
• One solution is that whenever a device requests an interrupt, it sets the
interrupt request (IRQ) bit to 1 in its status register. The processor then
checks the IRQ bit of each device, and the device whose IRQ bit is 1 is the
one that raised the interrupt.
• But this is a time-consuming method, as the processor spends time checking
the IRQ bits of every connected device. The time wasted can be reduced
by using vectored interrupts.
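The polling scheme above can be sketched as a scan over each device's status register. The device records and field names are invented for illustration; the point is that cost grows with the number of devices checked.

```python
# Sketch of polling: the processor scans each device's status register for an
# IRQ bit set to 1. Device names and the scan order are assumptions.

def poll(devices):
    """Return the name of the first device whose IRQ bit is set, or None."""
    for dev in devices:
        if dev["IRQ"] == 1:
            return dev["name"]
    return None                     # no interrupt pending

devices = [
    {"name": "keyboard", "IRQ": 0},
    {"name": "disk",     "IRQ": 1},
    {"name": "timer",    "IRQ": 1},
]
print(poll(devices))   # disk — devices earlier in the scan order win
```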
Interrupts
Vectored Interrupt
• Devices raising a vectored interrupt identify themselves directly to the processor.
So instead of wasting time identifying which device has requested an interrupt, the
processor immediately starts executing the corresponding interrupt service routine for
the requested interrupt.
• To identify itself directly to the processor, a device either asserts its own dedicated
interrupt request line or sends a special code that helps the processor identify which
device has requested the interrupt.
• Usually, a permanent area of memory is allotted to hold the starting address of each
interrupt service routine. These addresses are termed interrupt vectors, and together
they constitute the interrupt vector table.
• The device requesting an interrupt sends a specific interrupt request signal or a special
code to the processor. This information acts as an index into the interrupt vector table, and
the corresponding address (the address of the specific interrupt service routine required
to service the interrupt raised by the device) is loaded into the program counter.
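The vector-table lookup above can be sketched in a few lines. All addresses and device codes here are invented; a real table lives at a fixed memory area and its layout is machine-specific.

```python
# Sketch of vectored interrupt dispatch: the device's code indexes the
# interrupt vector table, and the ISR's start address is loaded into the PC.
# Codes and addresses below are hypothetical.

vector_table = {0: 0x2000,   # code 0 -> ISR start address for timer
                1: 0x2400,   # code 1 -> ISR start address for disk
                2: 0x2800}   # code 2 -> ISR start address for keyboard

def dispatch(device_code):
    """Return the new PC: the ISR start address for the requesting device."""
    return vector_table[device_code]

pc = dispatch(1)
print(hex(pc))   # 0x2400 — the processor jumps straight to the disk ISR
```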
Interrupts
• Interrupt Nesting
• While the processor is busy executing an interrupt service routine, interrupts
are disabled to ensure that a device does not raise more than one interrupt.
A similar arrangement is used when multiple devices are connected to the
processor, so that the servicing of one interrupt is not interrupted by an
interrupt raised by another device.
• What if multiple devices raise interrupts simultaneously? In that case,
the interrupts are prioritized.
Multiple-Bus Hierarchies
• There is a local bus that connects the processor to a cache memory and that
may support one or more local devices
• The cache memory is connected to a system bus to which all of the main
memory modules are attached.
• It is possible to connect I/O controllers directly onto the system bus.
• A more efficient solution is to make use of one or more expansion buses for
this purpose.
• This arrangement allows the system to support a wide variety of I/O devices
and at the same time insulate memory-to-processor traffic from I/O traffic.
Bus Interconnection
Multiple-Bus Hierarchies
• Network connections include local area networks (LANs), wide area networks (WANs), SCSI (small computer system interface), and serial
ports.
• This traditional bus architecture is reasonably efficient but begins to break down as higher and higher performance is seen in the I/O
devices.
• In response to these growing demands, a common approach taken by industry is to build a high-speed bus that is closely integrated
with the rest of the system, requiring only a bridge between the processor’s bus and the high-speed bus.
• This arrangement is sometimes known as a mezzanine architecture.
• Figure shows a typical realization of this approach
Multiple-Bus Hierarchies
• Again, there is a local bus that connects the processor to a cache controller, which is
in turn connected to a system bus that supports main memory.
• The cache controller is integrated into a bridge, or buffering device, that connects to
the high-speed bus.
• This bus supports connections to high-speed LANs, video and graphics workstation
controllers, SCSI, and FireWire. Lower-speed devices are still supported off an
expansion bus, with an interface buffering traffic between the expansion bus and the
high-speed bus.
• The advantage of this arrangement is that the high-speed bus brings high-demand
devices into closer integration with the processor and at the same time is
independent of the processor.
Elements of Bus Design
Type
• Bus lines can be separated into two generic types: dedicated and multiplexed.
• A dedicated bus line is permanently assigned either to one function or to a physical subset of computer
components.
• Physical dedication refers to the use of multiple buses, each of which connects only a subset of modules.
• The potential advantage of physical dedication is high throughput, because there is less bus contention.
• A disadvantage is the increased size and cost of the system.
• Address and data information may be transmitted over the same set of lines using an Address Valid control
line.
• At the beginning of a data transfer, the address is placed on the bus and the Address Valid line is activated.
• The address is then removed from the bus, and the same bus connections are used for the subsequent read or
write data transfer.
• This method of using the same lines for multiple purposes is known as time multiplexing.
• The advantage of time multiplexing is the use of fewer lines, which saves space and, usually, cost.
• The disadvantage is that more complex circuitry is needed within each module.
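The time-multiplexed transfer above can be sketched as two bus phases on the same lines. Signal names (`address_valid`) and values are illustrative; real buses add setup and hold timing that this sketch omits.

```python
# Sketch of time multiplexing on a shared address/data bus, per the text:
# first the address is driven with Address Valid asserted, then the same
# lines carry the data for the read or write.

def multiplexed_write(address, data):
    """Yield (lines_value, address_valid) pairs for one write transfer."""
    yield (address, True)    # phase 1: address on the bus, Address Valid high
    yield (data, False)      # phase 2: same lines reused for the data

transfer = list(multiplexed_write(0x00FF, 0xABCD))
print(transfer)   # [(255, True), (43981, False)] — two phases, one set of lines
```

The trade-off in the text is visible here: one set of lines carries both kinds of information, at the cost of needing logic in each module to latch the address before the data phase.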
Elements of Bus Design
Method of Arbitration
• The various methods can be roughly classified as being either centralized or
distributed.
• In a centralized scheme, a single hardware device, referred to as a bus controller
or arbiter, is responsible for allocating time on the bus.
• In a distributed scheme, there is no central controller. Rather, each module
contains access control logic and the modules act together to share the bus.
• With both methods of arbitration, the purpose is to designate either the
processor or an I/O module, as master.
• The master may then initiate a data transfer (e.g., read or write) with some
other device, which acts as slave for this particular exchange.
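A common centralized scheme, sketched below, is a daisy chain: the arbiter's grant propagates device to device until a requester absorbs it. The daisy chain itself is an assumption (the text names only a central arbiter); device positions and the fixed priority it implies are illustrative.

```python
# Sketch of daisy-chained centralized arbitration: the grant passes along the
# chain and the first requesting device takes it, becoming bus master.

def daisy_chain_grant(requests):
    """`requests[i]` is True if device i wants the bus; return the winner."""
    for position, requesting in enumerate(requests):
        if requesting:
            return position     # this device absorbs the grant: bus master
    return None                 # grant falls off the end; the bus stays idle

print(daisy_chain_grant([False, True, True]))   # 1 — closer to the arbiter wins
```

Note the design consequence: devices nearer the arbiter always win, so placement on the chain fixes priority.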
Elements of Bus Design
Timing
• Buses use either synchronous timing or asynchronous timing.
• With synchronous timing, the occurrence of events on the bus is
determined by a clock. A single 1–0 transmission is referred to as a clock
cycle or bus cycle and defines a time slot.
• The figure shows a typical, but simplified, timing diagram for synchronous
read and write operations.
Bus Arbitration
• Bus arbitration refers to the process by which the current bus master relinquishes control of
the bus and passes it to another requesting processor unit.
• The controller that has access to a bus at an instance is known as a Bus master.
• A conflict may arise if a number of DMA controllers, other controllers, or processors try to access the
common bus at the same time, but access can be granted to only one of them.
• Only one processor or controller can be Bus master at the same point in time.
• To resolve these conflicts, the Bus Arbitration procedure is implemented to coordinate the activities of all
devices requesting memory transfers.
• The selection of the bus master must take into account the needs of various devices by establishing a
priority system for gaining access to the bus.
• The Bus Arbiter decides who would become the current bus master.
• There are two approaches to bus arbitration:
1. Centralized bus arbitration – A single bus arbiter performs the required arbitration.
2. Distributed bus arbitration – All devices participate in the selection of the next bus master.
Methods of Distributed Bus Arbitration
• In distributed arbitration, all devices participate in the selection of the next bus
master. Each device on the bus is assigned a 4-bit identification number.
• The priority of the device is determined by its ID.