
VLSI Design

Module - 5

Syllabus:
Semiconductor Memories: Introduction, Dynamic Random Access Memory (DRAM) and
Static Random Access Memory(SRAM) (10.1 to 10.3 of TEXT1)
Testing and Verification: Introduction, Logic Verification Principles, Manufacturing Test
Principles, Design for testability (15.1, 15.3, 15.5 15.6.1 to 15.6.3 of TEXT 2).
Textbooks:
1. “CMOS Digital Integrated Circuits: Analysis and Design” - Sung Mo Kang & Yosuf
Leblebici, Third Edition, Tata McGraw-Hill.
2. “CMOS VLSI Design- A Circuits and Systems Perspective”- Neil H. E. Weste and
David Money Harris, 4th Edition, Pearson Education.

Semiconductor Memories
Introduction
 Semiconductor memory arrays capable of storing large quantities of digital information
are essential to all digital systems.
 The amount of memory required in a particular system depends on the type of
application, but, in general, the number of transistors utilized for the information (data)
storage function is much larger than the number of transistors used in logic operations
and for other purposes.
 The ever-increasing demand for larger data storage capacity has driven the fabrication
technology and memory development towards more compact design rules and,
consequently, toward higher data storage densities.
 Thus, the maximum realizable data storage capacity of single-chip semiconductor
memory arrays approximately doubles every two years.
 On-chip memory arrays have become widely used subsystems in many VLSI circuits,
and commercially available single-chip read/write memory capacity has reached 64
megabits.


 The area efficiency of the memory array, i.e., the number of stored data bits per unit area,
is one of the key design criteria that determine the overall storage capacity and, hence,
the memory cost per bit.
 Another important issue is the memory access time, i.e., the time required to store and/or
retrieve a particular data bit in the memory array. The access time determines the
memory speed, which is an important performance criterion of the memory array.
 Finally, the static and dynamic power consumption of the memory array is a significant
factor to be considered in the design, because of the increasing importance of low-power
applications.
 Memory circuits are generally classified according to the type of data storage and the
type of data access.
 Read-Only Memory (ROM) circuits allow only the retrieval of previously stored data
and do not permit modifications of the stored information contents during normal
operation.
 ROMs are non-volatile memories, i.e., the data storage function is not lost even when the
power supply voltage is off.
 Depending on the type of data storage (data write) method, ROMs are classified as mask-
programmed ROMs, Programmable ROMs (PROM), Erasable PROMs (EPROM), and
Electrically Erasable PROMs (EEPROM).

Fig.5.1: Semiconductor memory types


 Read-write (R/W) memory circuits permit the modification (writing) of data bits stored
in the memory array, as well as their retrieval (reading) on demand.

 Therefore, the data storage function must be volatile, i.e., the stored data are lost when
the power supply voltage is turned off. The read-write memory circuit is commonly
called Random Access Memory (RAM).
 RAMs are classified into two main categories: Static RAMs (SRAM) and Dynamic RAMs
(DRAM).
 A typical memory array organization of a Random Access Memory (RAM) is shown in
fig.5.2. The data storage structure, or core, consists of individual memory cells arranged
in an array of horizontal rows and vertical columns.

Fig.5.2: Memory array organization of a Random Access Memory (RAM)


 Each cell is capable of storing one bit of binary information. There are 2^N rows, also
called word lines, and 2^M columns, called bit lines; therefore, the total number of memory
cells in this array is 2^M * 2^N.
 To access a particular memory cell, the corresponding bit line and the corresponding
word line must be activated.
 The row and column selection operations are accomplished by row and column decoders.


 The row decoder circuit selects one out of 2^N word lines according to an N-bit row
address, while the column decoder circuit selects one out of 2^M bit lines according to an
M-bit column address.
 Once a memory cell or a group of memory cells is selected, a data read and/or a data
write operation may be performed on the selected single bit or multiple bits in a
particular row.
 The column decoder circuit selects particular columns and routes the corresponding
data content in the selected row to the output.
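
 As a rough behavioral illustration of this organization, the following Python sketch (illustrative only; the sizes N = 2 and M = 2 are hypothetical) models the core as 2^N rows by 2^M columns and shows how an address is split between the row and column decoders:

    # Illustrative model of a 2^N x 2^M RAM cell array (hypothetical sizes).
    N, M = 2, 2                     # row- and column-address widths
    ROWS, COLS = 2 ** N, 2 ** M     # 2^N word lines, 2^M bit lines
    core = [[0] * COLS for _ in range(ROWS)]   # one bit per memory cell

    def split_address(addr):
        """Row decoder takes the upper N bits, column decoder the lower M bits."""
        return (addr >> M) & (ROWS - 1), addr & (COLS - 1)

    def write(addr, bit):
        row, col = split_address(addr)  # activate one word line, one bit line
        core[row][col] = bit

    def read(addr):
        row, col = split_address(addr)
        return core[row][col]

    write(0b1011, 1)     # row 10, column 11
    print(read(0b1011))  # -> 1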

Read-Only Memory (ROM) Circuits


 The read-only memory array can also be seen as a simple combinational Boolean
network which produces a specified output value for each input combination, i.e., for
each address.

Fig.5.3: 4-bit*4-bit NOR based ROM array


 Storing binary information at a particular address location can be achieved by the
presence or absence of a data path from the selected row (word line) to the selected
column (bit line), which is equivalent to the presence or absence of a device at that
particular location.


 Consider a 4-bit*4-bit NOR based ROM array shown in fig.5.3. Each column consists of
a pseudo-nMOS NOR gate driven by some of the row signals, i.e., the word lines.
 Only one word line is activated (selected) at a time by raising its voltage to VDD, while
all other rows are held at a low voltage level.
 If an active transistor exists at the cross point of a column and the selected row, the
column voltage is pulled down to the logic low level by that transistor.
 If no active transistor exists at the cross point, the column voltage is pulled high by the
pMOS load device.
 Thus, a logic 1 is stored as the absence of an active transistor, while a logic 0 is stored as
the presence of an active transistor at the cross point.
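
 The NOR ROM read operation described above can be mimicked behaviorally in a few lines of Python; this is only a sketch, and the 4*4 transistor pattern below is chosen arbitrarily (a 1 means an nMOS transistor is present at that cross point):

    # Behavioral sketch of a 4-bit*4-bit NOR ROM read (pattern is arbitrary).
    transistor = [
        [1, 0, 1, 0],
        [0, 1, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 1, 1],
    ]

    def nor_rom_read(selected_row):
        """Raise one word line; any transistor present pulls its bit line low."""
        # Presence of a transistor -> column pulled to 0 (stored 0);
        # absence -> column stays at the pull-up level (stored 1).
        return [0 if transistor[selected_row][c] else 1 for c in range(4)]

    print(nor_rom_read(2))  # -> [0, 0, 1, 0]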
 Consider a 4-bit*4-bit NAND based ROM array shown in fig.5.4. Here, each bit line
consists of a depletion-load NAND gate, driven by some of the row signals, i.e., the word
lines.

Fig.5.4: 4-bit*4-bit NAND based ROM array


 In normal operation, all word lines are held at the logic-high voltage level except for the
selected line, which is pulled down to logic-low level.
 If a transistor exists at the cross point of a column and the selected row, that transistor is
turned off and the column voltage is pulled high by the load device.


 If no transistor exists at that particular cross point (i.e., it is replaced by a short), the
column voltage is pulled low by the other nMOS transistors in the multi-input NAND structure.
 Thus, a logic 1 is stored by the presence of a transistor that can be deactivated, while a
logic 0 is stored by a shorted or normally on transistor at the cross point.

Design of Row and Column Decoders


 Row and column address decoders select a particular memory location in the array,
based on the binary row and column addresses.
 A row decoder designed to drive a NOR ROM array must select one of the word lines.
 Consider the simple row address decoder shown in fig.5.5, which decodes a two-bit row
address and selects one out of four word lines.

Fig.5.5: Row address decoder for 2 address bits and 4 word lines.
 A straightforward implementation of this decoder is a NOR array, consisting of 4 rows
(outputs) and 4 columns (two address bits and their complements).
 The NOR-based decoder array can be built just like the NOR ROM array as shown in
fig.5.6, using the same selective programming approach.


Fig.5.6: NOR-based row decoder circuit for 2 address bits and 4 word lines
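
 The selective programming of the decoder array can be sketched behaviorally as follows (Python, illustrative only): each word line is the NOR of the address bits or their complements, so exactly one line is raised for any 2-bit address.

    # Behavioral sketch of the 2-to-4 NOR-based row decoder.
    def nor(*inputs):
        return 0 if any(inputs) else 1

    def row_decoder(a1, a0):
        """Each word line is a NOR of address bits or their complements."""
        na1, na0 = 1 - a1, 1 - a0
        return [
            nor(a1, a0),    # WL0 selected when A1 A0 = 00
            nor(a1, na0),   # WL1 selected when A1 A0 = 01
            nor(na1, a0),   # WL2 selected when A1 A0 = 10
            nor(na1, na0),  # WL3 selected when A1 A0 = 11
        ]

    print(row_decoder(1, 0))  # -> [0, 0, 1, 0]: only WL2 is raised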
 The ROM array and its row decoder can thus be fabricated as two adjacent NOR arrays,
as shown in fig.5.7, or as two adjacent NAND arrays, as shown in fig.5.8.

Fig.5.7: NOR based decoder and ROM array


Fig.5.8: NAND based decoder and ROM array


 The column decoder circuitry is designed to select one out of 2^M bit lines (columns) of
the ROM array according to an M-bit column address, and to route the data content of the
selected bit line to the data output.
 A straightforward approach would be to connect an nMOS pass transistor to each bit-line
(column) output, and to selectively drive one out of 2^M pass transistors by using a NOR-
based column address decoder, as shown in fig.5.9.

Fig.5.9: Bit-line (column) Decoder arrangement


 In this arrangement, only one nMOS pass transistor is turned on at a time, depending on
the column address bits applied to the decoder inputs.
 The conducting pass transistor routes the selected column signal to the data output.
 Similarly, a number of columns can be chosen at a time, and the selected columns can be
routed to a parallel data output port.


Static Read-Write Memory (SRAM) Circuits


 Read-write (R/W) memory circuits are designed to permit the modification (writing) of
data bits to be stored in the memory array, as well as their retrieval (reading) on demand.
 The memory circuit is said to be static if the stored data can be retained indefinitely (as
long as a sufficient power supply voltage is provided), without any need for a periodic
refresh operation.
 The data storage cell, i.e., the 1-bit memory cell in static RAM arrays, consists of a
simple latch circuit with two stable operating points (states).
 Depending on the preserved state of the two-inverter latch circuit, the data being held in
the memory cell will be interpreted either as a logic "0" or as a logic "1."
 To access (read and write) the data contained in the memory cell via the bit line, we need
at least one switch, which is controlled by the corresponding word line, i.e., the row
address selection signal as shown in the fig.5.10.
 Usually, two complementary access switches consisting of nMOS pass transistors are
implemented to connect the 1-bit SRAM cell to the complementary bit lines (columns).

Fig.5.10: 1-bit SRAM cell


 Fig.5.11 shows the generic structure of the MOS static RAM cell, consisting of two
cross-coupled inverters and two access transistors.
 The load devices may be polysilicon resistors, depletion-type nMOS transistors, or
pMOS transistors, depending on the type of the memory cell.
 The pass gates acting as data access switches are enhancement-type nMOS transistors.


Fig.5.11: General structure of the MOS static RAM cell

Fig.5.12: Resistive-load SRAM cell

Fig.5.13: Depletion-load nMOS SRAM cell.


Fig.5.14: Full CMOS SRAM cell.

SRAM Operation Principles:


 Fig.5.15 shows a typical four-transistor resistive-load SRAM cell consisting of a pair of
cross-coupled inverters.

Fig.5.15: Basic structure of the resistive-load SRAM cell


 The two stable operating points of this basic latch circuit are used to store a one-bit piece
of information in the pair of cross-coupled inverters.
 To perform read and write operations, we use two nMOS pass transistors, both of which
are driven by the row select signal, RS.
 The SRAM cell shown in fig.5.15 is accessed via two bit lines or columns, instead of one.


 When the word line (RS) is not selected, i.e., when the voltage level of line RS is equal to
logic "0," the pass transistors M3 and M4 are turned off.
 The simple latch circuit consisting of two cross-connected inverters preserves one of its
two stable operating points; hence, data is being held.
 Consider the two columns, C and C̅. If all word lines in the SRAM array are inactive, the
relatively large column capacitances are charged-up by the column pull-up transistors,
MP1 and MP2.
 The memory cell is selected by raising its word line voltage to logic "1"; hence, the pass
transistors M3 and M4 are turned on.
 Once the memory cell is selected, four basic operations may be performed on this cell.
Write "1" operation:
 The voltage level of column C̅ is forced to logic-low by the data-write circuitry. The
driver transistor M1 turns off. The voltage V1 attains a logic-high level, while V2 goes
low.
Read "1" operation:
 The voltage of column C retains its precharge level while the voltage of column C̅ is
pulled down by M2 and M4. The data-read circuitry detects the small voltage difference
(VC > VC̅) and amplifies it as a logic "1" data output.
Write "0" operation:
 The voltage level of column C is forced to logic-low by the data-write circuitry. The
driver transistor M2 turns off. The voltage V2 attains a logic-high level, while V1 goes
low.
Read "0" operation:
 The voltage of column C̅ retains its precharge level while the voltage of column C is
pulled down by M1 and M3. The data-read circuitry detects the small voltage difference
(VC < VC̅) and amplifies it as a logic "0" data output.
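
 The four operations can be summarized in a small behavioral sketch (Python, illustrative only): the latch is reduced to a single state bit and the columns to ideal logic levels, so no analog sensing effects are modeled.

    # Behavioral sketch of the four SRAM cell operations.
    class SramCell:
        def __init__(self):
            self.v1 = 0            # state node V1; V2 is its complement

        def write(self, bit):      # word line RS assumed raised to "1"
            # Forcing column C_bar low writes a 1; forcing C low writes a 0.
            self.v1 = bit

        def read(self):
            # Precharged columns split: V_C > V_Cbar reads "1",
            # V_C < V_Cbar reads "0"; the sense circuitry amplifies this.
            return self.v1

    cell = SramCell()
    cell.write(1); print(cell.read())  # write "1" then read "1" -> 1
    cell.write(0); print(cell.read())  # write "0" then read "0" -> 0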


Testing and Verification

Introduction
 Tests fall into three main categories.
i. The first set of tests verifies that the chip performs its intended function. These tests,
called functionality tests or logic verification, are run before tapeout to verify the
functionality of the circuit.
ii. The second set of tests is run on the first batch of chips that return from fabrication.
These tests confirm that the chip operates as it was intended and help debug any
discrepancies. They can be much more extensive than the logic verification tests
because the chip can be tested at full speed in a system. This silicon debug requires
creative detective work to locate the cause of failures because the designer has much
less visibility into the fabricated chip compared to during design verification.
iii. The third set of tests verifies that every transistor, gate, and storage element in the chip
functions correctly. These tests are conducted on each manufactured chip before
shipping to the customer to verify that the silicon is completely intact. These are
called manufacturing tests.
 The goal of a manufacturing test procedure is to determine which die are good and should
be shipped to customers.
 Testing a die (chip) can occur at the following levels:
i. Wafer level
ii. Packaged chip level
iii. Board level
iv. System level
v. Field level
 By detecting a malfunctioning chip early, the manufacturing cost can be kept low. For
instance, the approximate cost to a company of detecting a fault at the various levels is at
least
i. Wafer - $0.01–$0.10
ii. Packaged chip - $0.10–$1
iii. Board - $1–$10
iv. System - $10–$100
v. Field - $100–$1000
 Obviously, if faults can be detected at the wafer level, the cost of manufacturing is lower.
 For example, Intel failed to correct a logic bug in the Pentium floating point divider until
more than 4 million units had shipped in 1994. IBM halted sales of Pentium-based
computers and Intel was forced to recall the flawed chips. The mistake and lack of
prompt response cost the company an estimated $450 million.
Logic Verification
 In the design of integrated circuits, at all levels of abstraction, verification tools compare
the design at different levels to make sure that in the synthesis process, the designers or
optimization tools have not introduced errors, particularly logic errors.
 Due to the high complexity of VLSI design and the complexity of synthesis tools, logic
verification has become increasingly important.
 Logic verification detects any discrepancy in the function implemented by the two
compared logic designs.
 Fig.5.16 shows functional equivalence at different levels of abstraction.
 To check functional equivalence, simulations are run on the two descriptions of the chip
(e.g., one at the gate level and one at a functional level), and the outputs are compared
for all applied inputs.
 Verification tools are used to check the functional equivalence. These tools verify that a
lower-level implementation is equivalent to a higher-level one.
 Verification tools can be one of the following:
i. Formal verification tools: such as mathematical models, Boolean equivalence,
Binary Decision Diagrams (BDDs)
ii. Simulation tools
 A simulator uses mathematical models to represent the behavior of circuit
components.
 For given input signals, the simulator solves for the signals inside
the circuit.
 Simulators come in a wide variety depending on the level of accuracy and the
simulation speed desired.


a. circuit simulation
b. switch-level simulation
c. logic simulation
d. functional simulation
 The behavioral specification might be a verbal description, a plain language textual
specification, a description in some high level computer language such as C, or a
hardware description language such as VHDL or Verilog, or simply a table of inputs and
required outputs.
 RTL synthesis converts the HDL into a set of registers and combinational logic. The combinational
logic is optimized using algebraic and/or Boolean techniques.
 The structural specification converts the combinational logic into a switch-level network.
 The physical specification converts the switch-level network into layout layer specifications.

Fig.5.16: Functional equivalence at various levels of abstraction


 Functional equivalence involves running a simulator on the two descriptions of the chip
(e.g., one at the gate level and one at a functional level) and ensuring that the outputs are
equivalent at some convenient check points in time for all inputs applied.
 This is done in an HDL by employing a test bench; i.e., a wrapper that surrounds a
module and provides for stimulus and automated checking.
Manufacturing Tests
 While the verification or functionality tests confirm the function of a chip as a
whole, manufacturing tests are used to verify that every gate operates as expected.
 In general, manufacturing test generation assumes the function of the circuit/chip is
correct.
 The need to do this arises from a number of manufacturing defects that might occur
during either chip fabrication or accelerated life testing (where the chip is stressed by
over-voltage and over-temperature operation).
 Typical defects include the following:
i. Layer-to-layer shorts (e.g., metal-to-metal)
ii. Discontinuous wires (e.g., metal thins when crossing vertical topology jumps)
iii. Missing or damaged vias
iv. Shorts through the thin gate oxide to the substrate or well
 These in turn lead to
i. Nodes shorted to power or ground
ii. Nodes shorted to each other
iii. Inputs floating/outputs disconnected
 Tests are required to verify that each gate and register is operational and has not been
compromised by a manufacturing defect.
 Tests can be carried out at the wafer level to cull out bad dies, or can be left until the
parts are packaged. This decision would normally be determined by the yield and
package cost.
 If the yield is high and the package cost low (i.e., a plastic package), then the part can be
tested only once after packaging. However, if the wafer yield is lower and the package
cost high (i.e., an expensive ceramic package), it is more economical to first screen bad
dice at the wafer level. The length of the tests at the wafer level can be shortened to
reduce test time based on experience with the test sequence.
 Apart from the verification of internal gates, I/O integrity is also tested, with the
following tests being completed:
i. I/O levels (i.e., checking noise margin for TTL, ECL, or CMOS I/O pads)
ii. Speed test

Logic Verification Principles:


 Fig.5.17 shows a combinational circuit with N inputs. To test this circuit, a sequence of
2^N inputs (or test vectors) must be applied and observed to fully exercise the circuit.

Fig.5.17: Combinational Logic


 This combinational circuit is converted to a sequential circuit with the addition of M
registers, as shown in fig.5.17(b).
 The state of the circuit is determined by the inputs and the previous state. A minimum of
2^(N+M) test vectors must be applied to exhaustively test the circuit.
 The verification engineer must cleverly devise test vectors that detect any (or nearly any)
defective node without requiring so many patterns.
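
 The exponential growth is easy to quantify. The short sketch below (Python; the sizes N = 25 and M = 50 are hypothetical) shows why exhaustive testing is impractical:

    # Why exhaustive testing fails: vector counts explode exponentially.
    N, M = 25, 50                 # hypothetical input and register counts

    combinational = 2 ** N        # vectors for the combinational block alone
    sequential = 2 ** (N + M)     # vectors once M bits of state are added

    print(f"{combinational:.3e} vectors for N = {N} inputs")
    print(f"{sequential:.3e} vectors for N + M = {N + M} bits")
    # Even at 10^9 vectors per second, 2^75 vectors would take
    # over a million years to apply.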
Test Vectors
 Test vectors are a set of patterns applied to inputs and a set of expected outputs. Both
logic verification and manufacturing test require a good set of test vectors.
 The set should be large enough to catch all the logic errors and manufacturing defects, yet
small enough to keep test time (and cost) reasonable.
 Directed and random vectors are the most common test vector types.


 Directed vectors are selected by an engineer who is knowledgeable about the system.
Their purpose is to cover the corner cases where the system might be most likely to
malfunction.
 For example, in a 32-bit datapath, likely corner cases include the following:
0x00000000 All zeros
0xFFFFFFFF All ones
0x00000001 One in the LSB
0x80000000 One in the MSB
0x55555555 Alternating 0’s and 1’s
0xAAAAAAAA Alternating 1’s and 0’s
0x7A39D281 A random value
 Directed vectors are an efficient way to catch the most obvious design errors and a good
logic designer will always run a set of directed tests on a new piece of RTL to ensure a
minimum level of quality.
 Applying a large number of random or semi-random vectors is a good way to detect more
errors. The effectiveness of the set of vectors is measured by the fault coverage.
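
 A practical vector set therefore mixes both kinds. The sketch below (Python, illustrative only) builds a set from the directed corner cases listed above, topped up with seeded random vectors for a 32-bit datapath:

    import random

    # Directed corner cases for a 32-bit datapath, plus random vectors.
    directed = [
        0x00000000, 0xFFFFFFFF,   # all zeros, all ones
        0x00000001, 0x80000000,   # one in the LSB, one in the MSB
        0x55555555, 0xAAAAAAAA,   # alternating bit patterns
        0x7A39D281,               # a random value
    ]

    random.seed(0)                # seeded, so the test set is reproducible
    randoms = [random.getrandbits(32) for _ in range(100)]

    test_vectors = directed + randoms
    print(len(test_vectors), hex(test_vectors[-1]))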
Testbenches and Harnesses
 A verification test bench or harness is a piece of HDL code that is placed as a wrapper
around a core piece of HDL to apply and check test vectors.
 In the simplest test bench, input vectors are applied to the module under test and at each
cycle, the outputs are examined to determine whether they comply with a predefined
expected data set.
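
 In practice this wrapper is written in an HDL such as Verilog or VHDL; the same idea is sketched below in language-neutral Python (the 32-bit adder is only a hypothetical stand-in for the module under test):

    # Minimal test-bench idea: apply vectors, compare outputs with expected data.
    def module_under_test(a, b):       # stand-in for the real design
        return (a + b) & 0xFFFFFFFF    # behaves as a 32-bit adder

    vectors = [                        # (input a, input b, expected output)
        (0x00000000, 0x00000000, 0x00000000),
        (0xFFFFFFFF, 0x00000001, 0x00000000),  # wrap-around corner case
        (0x7FFFFFFF, 0x00000001, 0x80000000),
    ]

    for a, b, expected in vectors:
        got = module_under_test(a, b)
        assert got == expected, f"mismatch: {a:#x} + {b:#x} -> {got:#x}"
    print("all vectors passed")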
Regression Testing
 Regression testing is a technique that performs a set of simulations automatically to verify
that no functionality has changed in a module or set of modules.
 During a design, it is common practice to run a regression script every time after design
activities have concluded to check that bug fixes or feature enhancements have not
broken completed modules.
Bug Tracking
 Another important tool to use during verification (and in fact the whole design cycle) is a
bug-tracking system.


 Bug-tracking systems such as GNATS (Unix/Linux based) allow the management of a wide variety of bugs.
 In these systems, each bug is entered and the location, nature, and severity of the bug
noted. The bug discoverer is noted, along with the perceived person responsible for fixing
the bug.

Manufacturing Test Principles:


 Manufacturing tests verify that every gate and register in the chip functions correctly.
These tests are used after the chip is manufactured to verify that the silicon is intact.
Fault Models
 To deal with the existence of good and bad parts, it is necessary to propose a fault model;
i.e., a model for how faults occur and their impact on circuits.
 Types of fault models
i. Stuck-At model
ii. Short Circuit / Open Circuit model
Stuck-At Faults
 In the Stuck-At model, a faulty circuit node is permanently fixed to a logic value.
 Stuck-at faults occur when a line is permanently stuck at VDD or ground, giving a faulty
output. The line may be an input or output of any gate. The circuit may contain a single
stuck-at fault or multiple stuck-at faults.
 When a signal, or gate output, is stuck at a 0 or 1 value, independent of the inputs to the
circuit, the signal is said to be “stuck at” and the fault model used to describe this type
of error is called a “stuck-at fault model.”
 Fig.5.18 illustrates the S-A-0 and S-A-1 fault.

Fig.5.18: S-A-0 and S-A-1 faults


 These faults most frequently occur due to gate oxide shorts (the nMOS gate to GND or
the pMOS gate to VDD) or metal-to-metal shorts.
Short-Circuit and Open-Circuit Faults
 In an open-circuit fault, a single transistor is permanently stuck in the open state; in a
short-circuit fault, a single transistor is permanently conducting, irrespective of its gate voltage.

Fig.5.19: Bridge faults


 Two bridging or shorted faults are shown in fig.5.19. The short S1 results in an S-A-0
fault at input A, while short S2 modifies the function of the gate.
 Fig.5.20 shows a 2-input NOR gate in which the nMOS transistor driven by input A is
stuck open. The function realized by the gate then becomes
Z = A̅·B̅ + B̅·Z′
where Z′ is the previous state of the gate.

Fig.5.20: Open faults
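
 As a quick check, the stuck-open model can be verified exhaustively against the good gate (a behavioral Python sketch, not a transistor-level model):

    # Verify the stuck-open NOR model Z = A'B' + B'Z' over all cases.
    def good_nor(a, b):
        return int(not (a or b))

    def faulty_nor(a, b, z_prev):
        # nMOS transistor A stuck open: with A = 1, B = 0 neither the
        # pull-up nor the pull-down network conducts, so the output
        # node floats and retains its previous value.
        return z_prev if (a, b) == (1, 0) else good_nor(a, b)

    for a in (0, 1):
        for b in (0, 1):
            for z_prev in (0, 1):
                model = int((not a and not b) or (not b and z_prev))
                assert faulty_nor(a, b, z_prev) == model
    print("formula matches the stuck-open behavior")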


Controllability and Observability
 The controllability of a circuit is a measure of the ease (or difficulty) with which a
specific signal value can be established at each node by setting values at the circuit input
terminals.


 The observability is a measure of the ease (or difficulty) with which one can determine
the signal value at any logic node in the circuit by controlling its primary input and
observing the primary output.
 The degree of controllability and observability and, thus, the degree of testability of a
circuit, can be measured with respect to whether test vectors are generated
deterministically or randomly.
 For example, if a logic node can be set to either logic 1 or 0 only through a very long
sequence of random test vectors, the node is said to have a very low random
controllability since the probability of generating such a vector in random test generation
is very low.
 Consider the simple circuit shown in fig.5.21 consisting of four simple logic gates. To
detect any defect on line 8, the primary inputs A and B must be set to logic 1. However,
such a setting forces line 7 to logic 1. Thus, any stuck-at-1 (s-a-1) fault on line 7 cannot be
tested at the primary output, although in the absence of such a fault, the logic value on
line 7 can be fully controllable through primary inputs B, C, and D. Therefore, this circuit
is not fully testable. The main cause of this difficulty in this circuit is the fact that input B
fans out to lines 5 and 6, and then after the OR3 gate, both line signals are combined in
the AND3 gate. Such a fanout is called reconvergent fanout. Reconvergent fanouts make
the testing of the circuit much more difficult.

Fig.5.21: A simple combinational circuit


 If a large number of input vectors are required to set a particular node value to 1 or 0
(fault excitation) and to propagate an error at the node to an output (fault effect
propagation), then the testability is low.
 The circuits with poor controllability include those with feedbacks, decoders, and clock
generators. The circuits with poor observability include sequential circuits with long
feedback loops and circuits with reconvergent fanouts, redundant nodes, and embedded
memories such as RAM, ROM, and PLA.
Repeatability
 The repeatability of a system is its ability to produce the same outputs given the same
inputs. Combinational logic and synchronous sequential logic are always repeatable when
functioning correctly. However, certain asynchronous sequential circuits are
nondeterministic.
 Testing is much easier when the system is repeatable.
Survivability
 The survivability of a system is the ability to continue functioning after a fault. For example,
error-correcting codes provide survivability in the event of soft errors.
Fault Coverage
 It is the measure of the ability of a test (a collection of test patterns) to detect the
faults that may occur on the device under test.
 The fault coverage is calculated as follows.
i. Each circuit node is taken in sequence and held to 0 (S-A-0), and the circuit is
simulated with the test vectors comparing the chip outputs with a known good
machine (a circuit with no nodes artificially set to 0 (or 1)).
ii. When a discrepancy is detected between the faulty machine and the good
machine, the fault is marked as detected and the simulation is stopped.
iii. This is repeated for setting the node to 1 (S-A-1). In turn, every node is stuck
(artificially) at 1 and 0 sequentially.
 Therefore, the fault coverage of a set of test vectors is the percentage of the total nodes
that can be detected as faulty when the vectors are applied.
 To achieve world-class quality levels, circuits are required to have in excess of 98.5%
fault coverage.
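
 The procedure above amounts to serial fault simulation; it is sketched below in Python on a hypothetical two-gate circuit (y1 = A AND B, output y2 = y1 OR C). Each node is forced to 0 and 1 in turn, and the vectors are replayed against the good machine:

    # Sketch of fault-coverage computation by serial stuck-at simulation.
    def evaluate(a, b, c, force=None):
        y1 = a & b
        if force and force[0] == "y1": y1 = force[1]   # inject stuck-at on y1
        y2 = y1 | c
        if force and force[0] == "y2": y2 = force[1]   # inject stuck-at on y2
        return y2

    vectors = [(1, 1, 0), (0, 0, 0), (0, 1, 1)]        # an example test set
    faults = [(node, v) for node in ("y1", "y2") for v in (0, 1)]

    detected = 0
    for fault in faults:
        # A fault is detected if any vector makes the faulty and good outputs differ.
        if any(evaluate(*v) != evaluate(*v, force=fault) for v in vectors):
            detected += 1
    print(f"fault coverage: {100 * detected / len(faults):.1f}%")   # -> 100.0%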
Automatic Test Pattern Generation (ATPG)
 ATPG (Automatic Test Pattern Generation) is an EDA method/technology used to find an
input or test sequence that, when applied to a digital circuit, enables automatic test
equipment to distinguish between the correct circuit behavior and the faulty circuit
behavior caused by defects.


 The generated patterns are used to test semiconductor devices after manufacture, or to
assist with determining the cause of failure.
 The effectiveness of ATPG is measured by the number of modeled defects, or fault
models, detectable and by the number of generated patterns.
 These metrics generally indicate test quality (higher with more fault detections) and test
application time (higher with more patterns).
 ATPG efficiency is another important consideration that is influenced by the fault model
under consideration, the type of circuit under test (full scan, synchronous sequential, or
asynchronous sequential), the level of abstraction used to represent the circuit under test
(gate, register-transfer, switch), and the required test quality.
Delay Fault Testing
 A delay fault increases the input-to-output delay of one logic gate at a time, while the
functionality of the circuit remains unchanged.
 For example, consider an inverter gate composed of paralleled nMOS and pMOS transistors as
shown in fig.5.22.

Fig.5.22: Delay fault


 If an open circuit occurs in one of the nMOS transistor source connections to GND, then
the gate would still function but with increased fall time propagation delay (tpdf).
 The fault now becomes sequential as the detection of the fault depends on the previous
state of the gate.

Design for Testability:


 Design-for-testability (DFT) is a design technique to make testing a chip cost-effective
by adding test circuitry to the chip.


 The main goal of the DFT logic is to greatly enhance the controllability and
observability.
 Controllability is the ability to set (to 1) and reset (to 0) every node internal to the circuit.
Observability is the ability to observe, either directly or indirectly, the state of any node
in the circuit.
 Good observability and controllability reduce the cost of manufacturing testing because
they allow high fault coverage with relatively few test vectors.
 The DFT techniques can be broadly classified into three categories
i. Ad hoc testing
ii. Scan-based approaches
iii. Built-in self-test (BIST)
Ad Hoc Testing
 Ad hoc test techniques are collections of ideas aimed at reducing the combinational
explosion of testing.
 They are only useful for small designs where scan, ATPG, and BIST are not available. A
complete scan-based testing methodology is recommended for all digital circuits.
 The following are common techniques for ad hoc testing:
i. Partitioning large sequential circuits
ii. Adding test points
iii. Adding multiplexers
iv. Providing for easy state reset
Things to be followed during Ad Hoc testing
 Large circuits should be partitioned into smaller sub-circuits to reduce test costs. One of
the most important steps in designing a testable chip is to first partition the chip in an
appropriate way such that for each functional module there is an effective (DFT)
technique to test it. Partitioning must be done at every level of the design process, from
architecture to circuit, whether testing is considered or not. Partitioning can be functional
(according to functional module boundaries) or physical (based on circuit topology).
Partitioning can be done by using multiplexers and/or scan chains.
 Test access points must be inserted to enhance controllability & observability of the
circuit. Test points include control points (CPs) and observation points (OPs). The CPs
are active test points, while the OPs are passive ones. There are also test points that are
both CPs and OPs. Before exercising tests through test points that are not PIs and POs, one
should investigate the additional requirements on the test points raised by the use of test
equipment.
 Circuits (flip-flops) must be easily initializable to enhance predictability. A power-on
reset mechanism controllable from primary inputs is the most effective and widely used
approach.
 Test control must be provided for difficult-to-control signals.
 Automatic Test Equipment (ATE) requirements such as pin limitation, tri-stating, timing
resolution, speed, memory depth, driving capability, analog/mixed-signal support,
internal/boundary scan support, etc., should be considered during the design process to
avoid delay of the project and unnecessary investment in equipment.
 Internal oscillators, PLLs and clocks should be disabled during test. To guarantee tester
synchronization, internal oscillator and clock generator circuitry should be isolated
during the test of the functional circuitry. The internal oscillators and clocks should also
be tested separately.
 Analog and digital circuits should be kept physically separate. Analog circuit testing is
very much different from digital circuit testing. Testing for analog circuits refers to real
measurement, since analog signals are continuous (as opposed to discrete or logic signals
in digital circuits). They require different test equipment and different test
methodologies. Therefore, they should be tested separately.
Things to be avoided during Ad Hoc testing
 Asynchronous (unclocked) logic feedback in the circuit must be avoided. Feedback in the
combinational logic can give rise to oscillation for certain inputs. Since no clocking is
employed, timing is continuous instead of discrete, which makes tester synchronization
virtually impossible, and therefore only functional test by application board can be used.
 Monostables and self-resetting logic should be avoided. A monostable (one-shot)
multivibrator produces a pulse of constant duration in response to the rising or falling
transition of the trigger input. Its pulse duration is usually controlled externally by a
resistor and a capacitor (with current technology, they also can be integrated on chip).
One-shots are used mainly for 1) pulse shaping, 2) switch-on delays, 3) switch-off delays,
4) signal delays. Since it is not controlled by clocks, synchronization and precise duration
control are very difficult, which in turn reduces testability by ATE. Counters and dividers
are better candidates for delay control.
 Redundant gates must be avoided.
 High fanin/fanout combinations must be avoided as large fan-in makes the inputs of the
gate difficult to observe and makes the gate output difficult to control.
 Gated clocks should be avoided. These degrade the controllability of circuit nodes.

Scan Design
 The scan-design strategy for testing has evolved to provide observability and
controllability at each register.
 In designs with scan, the registers operate in one of two modes. In normal mode, they
behave as expected. In scan mode, they are connected to form a giant shift register called
a scan chain spanning the whole chip.
 By applying N clock pulses in scan mode, all N bits of state in the system can be shifted
out and N new bits of state can be shifted in. Therefore, scan mode gives easy
observability and controllability of every register in the system.

Fig.5.23: Scan based testing


 Modern scan is based on the use of scan registers, as shown in fig.5.23. The scan register
is a D flip-flop preceded by a multiplexer.
 When the SCAN signal is deasserted, the register behaves as a conventional register,
storing data on the D input.
 When SCAN is asserted, the data is loaded from the SI pin, which is connected in shift
register fashion to the previous register Q output in the scan chain.
 For the circuit to load the scan chain, SCAN is asserted and CLK is pulsed eight times to
load the first two ranks of 4-bit registers with data. SCAN is deasserted and CLK is
asserted for one cycle to operate the circuit normally with predefined inputs. SCAN is
then reasserted and CLK asserted eight times to read the stored data out. At the same
time, the new register contents can be shifted in for the next test.
 Testing proceeds in this manner of serially clocking the data through the scan register to
the right point in the circuit, running a single system clock cycle and serially clocking the
data out for observation.
 In this scheme, every input to the combinational block can be controlled and every output
can be observed.
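
 The shift-in / capture / shift-out sequence can be sketched behaviorally as follows (Python, illustrative only; the inverting "combinational logic" is a stand-in):

    # Behavioral sketch of a scan chain: shift a pattern in, run one normal
    # clock cycle to capture the logic outputs, then shift the response out.
    class ScanChain:
        def __init__(self, n):
            self.regs = [0] * n                # one bit per scan flip-flop

        def clock(self, scan, si=0, logic=None):
            if scan:                           # SCAN asserted: shift register
                so = self.regs[-1]
                self.regs = [si] + self.regs[:-1]
                return so                      # bit appearing at SO
            self.regs = logic(self.regs)       # normal mode: capture outputs
            return None

    chain = ScanChain(4)
    for bit in reversed([1, 0, 1, 1]):         # shift the test pattern in
        chain.clock(scan=1, si=bit)
    chain.clock(scan=0, logic=lambda s: [b ^ 1 for b in s])  # toy logic: invert
    response = [chain.clock(scan=1) for _ in range(4)]       # shift out
    print(response)  # -> [0, 0, 1, 0], the captured response bits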
Built-In Self-Test (BIST)
 Self-test and built-in test techniques rely on augmenting circuits to allow them to perform
operations upon themselves that prove correct operation.
 These techniques add area to the chip for the test logic, but reduce the test time required
and thus can lower the overall system cost.
 One method of testing a module is to use signature analysis or cyclic redundancy
checking. This involves using a pseudo-random sequence generator (PRSG) to produce
the input signals for a section of combinational circuitry and a signature analyzer to
observe the output signals.
 A PRSG of length n is constructed from a linear feedback shift register (LFSR), which in
turn is made of n flip-flops connected in a serial fashion, as shown in fig.5.24(a).


Fig.5.24: Pseudo-random sequence generator


 The XOR of particular outputs is fed back to the input of the LFSR.
 An n-bit LFSR will cycle through 2^n − 1 states before repeating the sequence. They are
described by a characteristic polynomial indicating which bits are fed back.
 A complete feedback shift register (CFSR), shown in fig.5.24(b), includes the zero state
that may be required in some test situations.
 An n-bit LFSR is converted to an n-bit CFSR by adding an (n−1)-input NOR gate
connected to all but the last bit.
 When in state 0…01, the next state is 0…00. When in state 0…00, the next state is
10…0. Otherwise, the sequence is the same.
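
 A minimal sketch of a 3-bit LFSR in Python is given below; the tap positions are chosen so that the register cycles through all 2^3 − 1 = 7 nonzero states (a behavioral model only):

    # 3-bit LFSR (PRSG) sketch; taps chosen for a maximal-length sequence.
    def lfsr_states(seed=(0, 0, 1)):
        b2, b1, b0 = seed
        states = []
        while (b2, b1, b0) not in states:
            states.append((b2, b1, b0))
            feedback = b2 ^ b1             # XOR of the tapped flip-flops
            b2, b1, b0 = b1, b0, feedback  # shift, feeding the XOR back in
        return states

    seq = lfsr_states()
    print(len(seq))  # -> 7: every nonzero state; all-zero never appears
    # A CFSR adds an (n-1)-input NOR gate so the 0...0 state is also visited.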
 A signature analyzer receives successive outputs of a combinational logic block and
produces a syndrome that is a function of these outputs.
 The syndrome is reset to 0, and then XORed with the output on each cycle.
 The syndrome is swizzled each cycle so that a fault in one bit is unlikely to cancel itself
out.


 At the end of a test sequence, the LFSR contains the syndrome that is a function of all
previous outputs. This can be compared with the correct syndrome (derived by running a
test program on the good logic) to determine whether the circuit is good or bad.
 If the syndrome contains enough bits, it is improbable that a defective circuit will
produce the correct syndrome.
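
 The compression step can be sketched as follows (Python, illustrative only; a real signature analyzer uses an LFSR, whereas this sketch swizzles by rotation to keep the idea visible):

    # Sketch of signature analysis: compress an output stream into a syndrome.
    def signature(outputs, width=8):
        syndrome = 0                       # the syndrome is reset to 0
        mask = (1 << width) - 1
        for out in outputs:
            # swizzle: rotate so earlier outputs keep influencing the result
            syndrome = ((syndrome << 1) | (syndrome >> (width - 1))) & mask
            syndrome ^= out & mask         # fold in the new output with XOR
        return syndrome

    good = [0x3, 0x7, 0x1, 0x0, 0x5]       # outputs of the good machine
    bad  = [0x3, 0x7, 0x1, 0x4, 0x5]       # one faulty output word
    print(hex(signature(good)), hex(signature(bad)))  # -> 0x9 0x1: they differ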
Built-In Logic Block Observation (BILBO)
 The essence of BIST is to have internal capability for generation of tests and for
compression of the results.
 Instead of using separate circuits for these two functions, it is possible to design a single
circuit that serves both purposes, known as the built-in logic block observation (BILBO),
as shown in fig.5.25.

Fig.5.25: Built-In Logic Block Observation


 The 3-bit BILBO register shown in fig.5.25 is a scannable, resettable register that also
can serve as a pattern generator and signature analyzer.
 The BILBO circuit has four modes of operation, which are controlled by the mode bits
C[1:0].
 In the reset mode (10), all the flip-flops are synchronously initialized to 0.


 In normal mode (11), the flip-flops behave normally with their D input and Q output.
 In scan mode (00), the flip-flops are configured as a 3-bit shift register between SI and
SO.
 In test mode (01), the register behaves as a pseudo-random sequence generator or
signature analyzer.
 If all the D inputs are held low, the Q outputs loop through a pseudo-random bit
sequence, which can serve as the input to the combinational logic.
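
 The four modes can be captured in a small behavioral sketch (Python, illustrative only; the LFSR tap choice in test mode is arbitrary):

    # Behavioral sketch of a 3-bit BILBO register's four modes, C[1:0].
    def bilbo_step(c, state, d=(0, 0, 0), si=0):
        if c == (1, 0):                    # reset: all flip-flops cleared
            return [0, 0, 0]
        if c == (1, 1):                    # normal: load the D inputs
            return list(d)
        if c == (0, 0):                    # scan: 3-bit shift register SI->SO
            return [si] + state[:2]
        # test mode (0, 1): LFSR-style update, folding in the D inputs;
        # with D held low it behaves as a PRSG, otherwise as a signature analyzer.
        fb = state[0] ^ state[2]           # illustrative tap choice
        return [fb ^ d[0], state[0] ^ d[1], state[1] ^ d[2]]

    s = bilbo_step((1, 1), [0, 0, 0], d=(1, 0, 0))  # seed via normal mode
    for _ in range(3):                              # test mode, D inputs low
        s = bilbo_step((0, 1), s)
        print(s)  # pseudo-random sequence: [1,1,0], [1,1,1], [0,1,1]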
