
Advanced Computer Architecture

Kai Hwang & Naresh Jotwani

Chapter Five
Bus, Cache and Shared Memory
Bus system
In a computer system, the system bus can be requested by several active devices,
such as processors, at the same time. However, only one of them can be granted access
at a time. Standard bus specifications should be technology- and architecture-independent.
• Backplane Bus Specification
◦ Interconnects processors, data storage and peripheral devices in a tightly coupled hardware configuration.
◦ Must be designed to allow communication between devices on the bus without disturbing the internal activities of all other devices attached to the bus.
◦ A timing protocol must be established for arbitrating among multiple requests.
◦ Data Transfer Bus (DTB): data, address and control lines make up the DTB (a simple model follows Fig 5.1 below).
◦ Address lines are used to broadcast the data address and device address.
◦ Data lines are used to carry the data required for any specific operation.
◦ Control lines are used to indicate read/write operations, timing control and bus-error conditions.
• Bus Arbitration: Arbitration is the process of assigning control of the DTB to a requester. The requester is called the master, and the receiving end is called the slave.
• Interrupt and Synchronization: Interrupt lines control interrupt handling. Synchronization lines are used to coordinate parallel activities among the processor modules.
• Utility: The utility bus provides signals for clock timing and for coordinating power-up and power-down sequences.
Fig 5.1: Backplane bus specification with multiprocessor system
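The DTB signal groups above can be pictured with a small data structure. The Python sketch below is purely illustrative; the class and field names are assumptions, not part of any real bus standard such as VME.

from dataclasses import dataclass

@dataclass
class DataTransferBus:
    # DTB line groups: address, data and control
    address: int = 0              # address lines: broadcast the data/device address
    data: int = 0                 # data lines: carry the data being transferred
    read_write: bool = True       # control: True = read, False = write
    timing_strobe: bool = False   # control: validates the address/data lines
    bus_error: bool = False       # control: flags a failed transfer

# A master drives the lines, then asserts the strobe to start the transfer.
dtb = DataTransferBus(address=0x1000, read_write=True, timing_strobe=True)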
• Arbitration, Transaction and Interrupt
◦ The duration for which a master occupies the bus is called bus tenure. The arbitration process restricts tenure of the bus to one master at a time. This can be established in several ways.
◦ Central Arbitration: Also known as daisy-chained arbitration; it uses a central bus arbiter.
◦ A specific signal line is used to propagate the bus-grant signal from the first master to the last.
◦ Each master can send a specific bus-request signal, even though all masters share the same bus-request line.
◦ Priority is fixed by position in the daisy chain, from left to right: only when the masters to its left do not request bus control can a master in a given slot be granted bus tenure.
◦ A master can establish its tenure over the DTB only when the bus is not busy.
◦ This system is simple, and additional devices can be added easily (the grant propagation is sketched below).
◦ The major disadvantage is its fixed priority, which violates fair arbitration practice.
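The fixed-priority behaviour of daisy chaining can be seen in a few lines of Python. This is a minimal sketch under assumed conventions (slot 0 is the highest-priority master, nearest the arbiter); the function name is illustrative.

def daisy_chain_grant(requests, bus_busy):
    """requests[i] is True if the master in slot i wants the bus;
    slot 0 (nearest the arbiter) has the highest priority.
    Returns the slot that wins tenure, or None."""
    if bus_busy or not any(requests):
        return None                  # a grant is issued only on a free bus
    for slot, wants_bus in enumerate(requests):
        if wants_bus:
            return slot              # the grant stops at the first requester
        # otherwise the grant signal ripples on to the next slot

# Slot 0 always wins while it keeps requesting: fixed, unfair priority.
print(daisy_chain_grant([True, False, True], bus_busy=False))   # -> 0
print(daisy_chain_grant([False, False, True], bus_busy=False))  # -> 2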
◦ Independent Requests and Grants:
◦ Instead of a single bus-request and a single bus-grant line, this scheme uses multiple such lines, provided independently for each potential master.
◦ Like the daisy-chained scheme, this system has a central arbiter, as well as a single bus-busy line.
◦ Priority-based allocation of the bus can be implemented easily (see the arbiter sketch below).
◦ This system is more flexible and arbitrates faster than the daisy-chained arbiter.
◦ The only major drawback is the number of arbitration lines used.
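With independent request and grant lines, the central arbiter reduces to a priority encoder. The sketch below is illustrative; a round-robin or other policy could replace the fixed priority order without changing the masters' wiring.

def independent_arbiter(request_lines, bus_busy, priority_order):
    """request_lines maps master id -> bool; priority_order lists ids,
    highest priority first. Returns one grant line per master."""
    grants = {master: False for master in request_lines}
    if not bus_busy:
        for master in priority_order:
            if request_lines[master]:
                grants[master] = True   # exactly one grant line is asserted
                break
    return grants

print(independent_arbiter({"cpu": True, "dma": True}, False, ["dma", "cpu"]))
# -> {'cpu': False, 'dma': True}: 'dma' outranks 'cpu' this cycle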
• Distributed Arbitration:
◦ Each potential master is equipped with its own arbiter and a unique arbitration number (AN).
◦ When two or more devices request the DTB, the one with the largest arbitration number wins.
◦ All competing arbiters send their arbitration numbers to the shared-bus request/grant (SBRG) lines on the arbitration bus.
◦ Each arbiter compares its own number with the resulting number on the SBRG lines; if the SBRG value is greater than its own, the master's request is dismissed, otherwise the master retains the bus (the bit-by-bit resolution is sketched below).
◦ The main advantage of this scheme is that arbitration is inherently priority-based.
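A minimal sketch of the arbitration-number competition, assuming the SBRG lines behave like a wired-OR resolved one bit at a time from the most significant bit down (real open-collector timing is simplified away):

def distributed_arbitration(arbitration_numbers, width=8):
    """arbitration_numbers maps master id -> unique AN of each requester.
    Returns the master whose AN is largest."""
    contenders = dict(arbitration_numbers)
    for bit in reversed(range(width)):       # resolve one SBRG line at a time
        mask = 1 << bit
        if any(an & mask for an in contenders.values()):
            # an arbiter driving 0 here sees a greater number on the SBRG
            # lines than its own AN, so its request is dismissed
            contenders = {m: an for m, an in contenders.items() if an & mask}
    return next(iter(contenders))            # unique ANs leave one winner

print(distributed_arbitration({"A": 0b0101, "B": 0b0110, "C": 0b0011}))
# -> 'B' (largest arbitration number wins)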
Cache Memory Organization
• Cache Addressing Modes
◦ Physical address mode: in physical address mode, the cache is accessed with a physical memory address.
◦ A cache hit occurs when the addressed data is found in the cache. If a miss occurs, the cache is loaded with the data from memory.
◦ On a miss, the whole cache block is loaded at a time, not just the requested word (see the sketch after Fig 5.6).
◦ Figs 5.5 and 5.6 show a unified cache and a split cache organization for physically addressed models, respectively.

Fig 5.5 Fig 5.6

Legends: VA = Virtual Address, PA = Physical Address, I = Instruction, D = Data
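A minimal sketch of a physically addressed, direct-mapped cache may help: the physical address selects a line, and a miss loads the whole block from memory at once. The sizes and names here are illustrative assumptions.

BLOCK_SIZE = 16     # bytes per cache block
NUM_LINES = 256     # number of cache lines

cache_tags = [None] * NUM_LINES
cache_blocks = [None] * NUM_LINES

def cache_access(physical_addr, memory):
    block_no = physical_addr // BLOCK_SIZE
    index = block_no % NUM_LINES          # line this block maps to
    tag = block_no // NUM_LINES
    if cache_tags[index] == tag:
        return "hit", cache_blocks[index]
    # miss: the entire block is loaded from memory, not just one word
    start = block_no * BLOCK_SIZE
    cache_blocks[index] = memory[start:start + BLOCK_SIZE]
    cache_tags[index] = tag
    return "miss", cache_blocks[index]

memory = bytes(range(256)) * 16           # toy 4 KB physical memory
print(cache_access(0x40, memory)[0])      # -> 'miss' (block loaded)
print(cache_access(0x44, memory)[0])      # -> 'hit'  (same block)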


• Virtual Cache Memory
◦ A virtual address cache is indexed with the virtual address.
◦ The cache lookup and the MMU (memory management unit) address translation proceed in parallel.
◦ Accessing the cache via the virtual address is faster than physical addressing, because the lookup does not wait for address translation (see the sketch after Fig 5.8).
◦ Figs 5.7 and 5.8 depict unified and split cache organizations indexed with virtual addresses.

Fig 5.7 Fig 5.8

Legends: VA = Virtual Address, PA = Physical Address, I = Instruction, D = Data
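The speed advantage of virtual addressing can be sketched as follows: the cache lookup starts immediately on the virtual address, and the MMU's translation (performed in parallel) is consumed only on a miss. The page size, toy TLB and names are illustrative assumptions.

PAGE_SIZE = 4096
tlb = {0x0: 0x7, 0x1: 0x3}             # toy MMU: virtual page -> page frame
virtual_cache = {}                     # cache indexed by virtual address

def mmu_translate(va):
    page, offset = divmod(va, PAGE_SIZE)
    return tlb[page] * PAGE_SIZE + offset

def access(va, physical_memory):
    if va in virtual_cache:            # lookup needs no translation at all
        return "hit", virtual_cache[va]
    pa = mmu_translate(va)             # the PA, translated in parallel, is
    virtual_cache[va] = physical_memory[pa]   # used only to service the miss
    return "miss", virtual_cache[va]

physical_memory = list(range(8 * PAGE_SIZE))
print(access(0x10, physical_memory))   # -> ('miss', 28688)
print(access(0x10, physical_memory))   # -> ('hit', 28688)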
