
Tomographic Testing and Validation of Probabilistic Circuits

Alexandru Paler and Ilia Polian


Department of Informatics and Mathematics University of Passau, Innstr. 43, Passau, Germany {alexandru.paler|ilia.polian}@uni-passau.de

John P. Hayes
Advanced Computer Architecture Laboratory University of Michigan, Ann Arbor, MI 48109, USA jhayes@eecs.umich.edu

Abstract: Some emerging technologies for manufacturing computers have behavior that is inherently probabilistic under normal or fault conditions. Examples include stochastic and quantum computing circuits, and conventional nanoelectronic circuits subject to design or environmental errors. Problems common to these technologies are testing and validation, which require determining whether observed non-deterministic behavior is within acceptable limits. Traditional solutions rely on the determinism of operations performed by the circuit under test, and are not applicable to probabilistic circuits, where signals are often described by probability distributions. We introduce a generic methodology for testing probabilistic circuits by approximating signal probability distributions using tomograms, which aggregate the outcomes of multiple, repeated test measurements. While the name comes from quantum computation, tomography is applicable to both quantum and non-quantum probabilistic circuits, as we show. Our approach makes use of fault or error models that allow handling of large and complex circuits. We report the first experimental results on the tomographic validation of quantum circuits.

I. INTRODUCTION

A number of technologies seen as successors to today's CMOS IC technology will result in components (logic gates) and circuits whose behavior is not deterministic. Current nanoscale CMOS circuits already exhibit some non-deterministic or probabilistic behavior caused by quantum-mechanical effects, or by variations in hard-to-control manufacturing parameters such as 0/1 switching thresholds. Furthermore, advanced ICs are becoming more sensitive to soft errors caused by random environmental effects such as thermal noise and cosmic rays. Consequently, circuit behavior is likely to become increasingly probabilistic in nature, and will therefore require radically new methodologies for design and test.

Deterministic behavior is a fundamental assumption underlying almost all existing IC technologies. It implies that if a given input vector is applied to a (fault-free) circuit, the output response can be determined with certainty, and applying the same input vector to the same component multiple times must result in identical outcomes. A probabilistic circuit, on the other hand, may respond to the same input with different outputs at different times. Thus, its output responses are best characterized by a probability distribution. Probabilistic models of computation have mainly been studied in the past to capture the behavior of deterministic circuits with faulty components [9]. Relatively little attention has been paid to computational technologies whose normal (fault-free) behavior is inherently probabilistic. Examples include quantum computing circuits [8] and stochastic circuits [4], which are potential nanoscale replacements for conventional circuits in some applications.

This paper considers a fundamental problem in the testing and design validation of logic circuits with inherently probabilistic behavior, namely, how to determine whether their actual or observed probabilistic behavior is within acceptable limits under normal or faulty conditions. Two representative circuit types are considered: quantum circuits and a class of classical (non-quantum) circuits called stochastic circuits, both of which are reviewed in the next section. We introduce a technique called tomographic testing, which involves obtaining (by physical experiment or computer simulation) a sequence of multiple measurements of circuit signals to determine their probability distributions in a form called a tomogram. Dissimilarities among tomograms can be used to detect design errors, physical faults, or the like.

Section II reviews the representative technologies, quantum and stochastic computing, to which tomographic concepts apply. Tomographic testing is formalized in Sec. III. Experimental results of applying tomographic testing to small quantum circuits are reported and evaluated in Sec. IV. Some conclusions and ideas for further work are given in Sec. V.

II. PROBABILISTIC CIRCUITS

First, we summarize two probabilistic computing techniques that can benefit from the proposed tomographic approach. Quantum computing circuits and signals are inherently probabilistic, while stochastic circuits use deterministic means to represent and process probabilistic data.

A. Quantum Computing

Quantum computing (QC) is a relatively recent form of computation that provides efficient algorithms for some problems believed to be intractable in classical computer science [8].
The most important of these is number factorization, which is very relevant to cryptography, since fast factorization would undermine the most widely used data encryption technique. Small-scale QC systems have been demonstrated, and slow but steady progress is being made toward building practical ones [7]. Several competing implementation technologies are known, all of which are based on subtle interactions among nanoscale particles such as ions or electrons.

[The figure shows an 8 x 8 unitary matrix of complex entries (values such as 0.07-0.53i, 0.01+0.04i, 0.46, 0.04) together with an input vector (a0, a1, ..., a7); the matrix layout was lost in extraction.]
Figure 2. Matrix representation of circuit 3qubitcnot from Fig. 1 and an input vector.

QC processes information in the form of qubits (quantum bits). A qubit has two parts, α and β, denoting its zeroness and oneness, respectively. It can be expressed as:

|ψ⟩ = α|0⟩ + β|1⟩

and so on. Note that measurement also changes the output state to the measured values. Thus quantum measurements yield probabilistic results and destroy the state |ψ⟩ being measured. In general, a quantum circuit produces different output values and states when run multiple times with the same inputs, even in the absence of faults or errors. When a computation is repeated many times, it yields a tomogram listing the number of times each of the 2^n possible output values 0,0,...,0,0; 0,0,...,0,1; ...; 1,1,...,1,1 is observed. If these numbers are normalized to lie between 0 and 1, the tomogram takes the form T = (t0, t1, t2, ..., tq-1), where 0 ≤ ti ≤ 1 and 0 ≤ i < q = 2^n. The relative frequency ti of a particular output value in the tomogram should correspond to |ai|², the squared magnitude of its probability amplitude ai. So, unlike traditional Boolean logic, two QC test applications using the same test vector are likely to result in different outcomes. Applying a test pattern to an error-free and an erroneous circuit may or may not lead to identical outputs. Consequently, the conventional notion of fault detection is not well-defined for quantum circuits. Realistic quantum test methods must therefore take the tomograms of the error-free and erroneous circuits into account. There is an extensive literature on quantum state tomography and the related, but harder, problem of quantum process tomography [8]. Various distance measures have been proposed to quantify the similarities or differences between quantum tomograms. An example is fidelity, which is defined as follows for tomograms P = (p0, p1, ..., pq-1) and T = (t0, t1, ..., tq-1):
fidelity(P, T) = Σi √(pi ti)
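The fidelity measure can be computed directly from two tomograms. A minimal sketch in plain Python (the function name is ours):

```python
import math

def fidelity(p, t):
    """Fidelity of two discrete probability distributions of equal length:
    the sum over i of sqrt(p_i * t_i). Equals 1 iff the distributions match."""
    assert len(p) == len(t)
    return sum(math.sqrt(pi * ti) for pi, ti in zip(p, t))

# Identical distributions: fidelity 1; disjoint supports: fidelity 0.
print(fidelity([0.5, 0.5], [0.5, 0.5]))  # -> 1.0
print(fidelity([1.0, 0.0], [0.0, 1.0]))  # -> 0.0
```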

Here α and β are complex numbers called probability amplitudes. When a qubit is measured, quantum mechanics requires that the measured outcome be 0 or 1 with certain probabilities. The squares |α|² and |β|² are the probabilities of measuring ψ as 0 and 1, respectively, implying that |α|² + |β|² = 1. QC operations can be described by unitary matrices that act on qubit vectors. Thus, linear algebra over complex vector spaces plays a role in QC analogous to that of Boolean algebra in classical logic circuits. Fig. 1 shows a small quantum circuit containing several gate types: a controlled-NOT (CNOT) gate on the far right, two Hadamard (H) gates, two rotation (R) gates, and a 2-CNOT or Toffoli gate on the far left. This circuit operates on 3 qubits indicated by the horizontal lines or wires. Mathematically, it performs a matrix operation in a complex vector space (Hilbert space) of dimension 2^3 = 8. The matrix is shown in Fig. 2.
Figure 1. Example of a small quantum circuit 3qubitcnot (gates shown include H, Ry, Rz and H; the injected missing-gate fault is marked).

After the last (rightmost) gate operation has been performed, the output qubits are subjected to a quantum measurement procedure. As a result of the measurement, Boolean values (0 or 1) appear at the circuit's n outputs with certain probabilities. Suppose the values in the 2^n-entry qubit vector at the quantum circuit's outputs before the measurement are (a0, a1, ..., a(2^n - 2), a(2^n - 1)). The probability that the measured values will be 0,0,...,0,0 is given by |a0|², the probability of 0,0,...,0,1 is |a1|²,
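The amplitude-to-probability rule described above is easy to sketch. The following Python fragment (names and the 1-qubit example amplitudes are ours, not taken from Fig. 2) computes |ai|² for each basis outcome:

```python
import math

def measurement_probabilities(amplitudes):
    """Map the 2^n complex amplitudes (a0, ..., a_{2^n - 1}) of an n-qubit
    state to the measurement probabilities |ai|^2 of the basis outcomes."""
    probs = [abs(a) ** 2 for a in amplitudes]
    assert abs(sum(probs) - 1.0) < 1e-9  # a valid state vector is normalized
    return probs

# Hypothetical 1-qubit state H|0> = (1/sqrt(2), 1/sqrt(2)): a fair coin.
h = 1 / math.sqrt(2)
print(measurement_probabilities([h, h]))  # both outcomes close to 0.5
```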

The notion of a fault as a model of a manufacturing defect cannot be maintained in the quantum world [1][5][10]. A defect in a classical digital circuit is a physical imperfection, such as an open or short circuit associated with a transistor or an interconnecting wire. QC technologies are based on radically different principles. For example, in the trapped-ion technology [8][11], each of a quantum circuit's wires (qubits) might be implemented by an ion of beryllium. An n-qubit trapped-ion circuit comprises n such ions held in a certain spatial configuration by a combination of static and dynamic electrical fields operating in a low-temperature regime. A gate operation corresponds to one or more laser pulses of well-controlled duration, which enable the ions to interact with each other. For example, a CNOT gate operation in an ion trap can be implemented using three laser pulses. So an operational error involving the CNOT might correspond to its three laser pulses being missing or ineffective for some reason when a test is applied to the CNOT. The usual concept of a fault would imply that if a measurement is repeated, the same pulses will still be missing. Finally, we note that QC circuits are subject to a kind of massive noise known as decoherence, which originates from interactions between the quantum system and its environment. While decoherence can be minimized, it cannot be entirely eliminated, because the quantum circuit must obtain its input values from outside, and must communicate its results to the environment. Much of the effort in building a quantum computer goes into keeping decoherence in check while allowing an adequate degree of external interaction. For test methods, decoherence implies one further source of uncertainty.

B. Stochastic Computing

The term stochastic computing (SC) was introduced in the 1960s for a class of computing circuits in which a number is represented by a bit-stream or sequence such as 010011... that has a probabilistic interpretation [4]. Consider an n-bit stream B containing m 1s and n - m 0s. The ratio p = m/n can be seen as the probability of a randomly observed bit in B being a 1; p is referred to as the signal probability [14]. SC consists of operations on the probabilities pi associated with a set of bit-streams Bi of fixed or variable length n. It will usually be necessary to convert the pi values to and from ordinary binary numbers at the SC circuit's periphery. The main attraction of SC has been its ability to perform certain complex arithmetic operations using simple circuits. For example, multiplication can be performed on bit-streams by means of a single AND gate, as shown in Fig. 3.
The output signal probability is 5/16 = 0.3125, which approximates the product 0.375 of the input signal probabilities. Small errors, such as a 0 replacing a 1 due to a lost 1-pulse, have little effect on signal probabilities; hence SC is inherently resilient in the presence of bit-flip errors.
Input bit-streams of Fig. 3: 1011 0101 1111 1011 (p1 = 0.75) and 0101 1100 1001 0110 (p2 = 0.5).

In this respect, SC can be compared to analog computing; indeed, the probabilities it processes can be interpreted as analog quantities. Nevertheless, SC seems worth revisiting in the light of current trends toward nanotechnologies, where the small size, low power, error tolerance, and probabilistic aspects of SC circuits are increasingly attractive. Errors in stochastic circuits manifest themselves in the appearance of (output) bit-streams whose probabilities deviate from desired values. The deviations or inaccuracies can be caused by design errors, e.g., not ensuring sufficiently low correlation among the circuit's signals, or by signal degradation due to environmental effects. An example of the latter is a bit-flip due to a radiation strike that creates spurious bit values or similar soft errors [6].

III. TOMOGRAPHIC TESTING

As the preceding circuit technologies suggest, tomographic testing is a way to determine whether a probabilistic circuit functions according to its specifications. As with deterministic circuits, there are two main reasons for specification violation: manufacturing defects (addressed by testing) and design errors (addressed by design validation). Tomographic testing estimates the probability distribution of an instance of the circuit under test by deriving its tomogram, and compares it to the tomograms of the fault-free circuit or of known erroneous circuits. In the following, we discuss the terms fault and error as applied to probabilistic circuits, putting special emphasis on their relevance to quantum and stochastic circuits. Then we formally introduce the concept of tomographic testing.

A. Faults and Errors in Probabilistic Circuits

As noted above, standard fault models such as the stuck-line or bridging models of conventional CMOS circuits [3] are unrealistic when applied to quantum circuits.
Quantum gate operations are typically implemented by electromagnetic (EM) pulses interacting with qubits [8][11]; there are no physical gates or wires in the conventional sense. While physical disturbances may well occur during these EM interactions, they may not be repeatable. On the other hand, errors in the design of quantum circuits are likely to lead to repeatable effects. For example, the task of mapping quantum gates to pulse sequences, either manually or by compiler, is complex and error-prone. It can also be expected that QC compilers will have bugs, resulting in erroneous circuit implementations. Such design errors have the characteristics of manufacturing defects in CMOS circuits: they are repeatable, i.e., they affect the probability distribution of the circuit's responses in a deterministic and measurable way, and thus can be exposed by testing. Stochastic circuits are implemented by classical logic gates of the AND-OR-NOT type that process high-quality uncorrelated signal sequences produced by (pseudo-)random number generators. Consequently, they are prone to manufacturing defects that affect the circuit's gates and signal probabilities, and these can be modeled by standard fault models. They are also subject to the usual types of design errors, as well as some that are peculiar to SC. Of special concern are subtle errors stemming from unintended correlations among signals. In our discussion of tomographic testing, we will use fault to include both manufacturing-style faults and design errors.

Output bit-stream of Fig. 3: 0001 0100 1001 0010 (5/16 = 0.3125).

Figure 3. Stochastic multiplication by an AND gate. The inputs are p1 = 0.75 and p2 = 0.5; the output is p1 p2 = 0.375.

On the other hand, SC has serious drawbacks. Computing times grow exponentially with the desired precision. Operations like that of Fig. 3 require input signals that are (pseudo)random or, more precisely, uncorrelated; the provision of many such signals is necessary and by no means easy. For example, if the input bit-streams in the SC multiplier of Fig. 3 are both the identical (maximally correlated) bit-for-bit representation B of p = 0.5, then the output will also be B, and so will represent 0.5 instead of the expected product p² = 0.25, a huge error. Consequently, although SC has a wide range of potential applications such as neural networks [2] and control systems [12], its overall impact to date on computer science has been minimal.
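The AND-gate multiplier of Fig. 3, and the correlation failure just described, can be reproduced in a few lines of Python (helper names are ours; the bit-streams are those of Fig. 3):

```python
def signal_probability(bits):
    """Signal probability of a bit-stream: the fraction of 1s."""
    return sum(bits) / len(bits)

def stochastic_and(b1, b2):
    """Bitwise AND of two streams; multiplies the signal probabilities
    only when the streams are uncorrelated."""
    return [x & y for x, y in zip(b1, b2)]

# The bit-streams of Fig. 3: p1 = 0.75, p2 = 0.5.
b1 = [1,0,1,1, 0,1,0,1, 1,1,1,1, 1,0,1,1]
b2 = [0,1,0,1, 1,1,0,0, 1,0,0,1, 0,1,1,0]
out = stochastic_and(b1, b2)
print(signal_probability(out))  # -> 0.3125 (5/16), approximating 0.375

# Maximally correlated inputs break the multiplier:
b = [1, 0] * 8                  # p = 0.5
print(signal_probability(stochastic_and(b, b)))  # -> 0.5, not 0.25
```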

B. Some Definitions

Next, we formalize and generalize the tomographic testing concepts which were introduced in the preceding section and illustrated there for quantum and stochastic circuits.

Definition 1: Consider a (combinational) circuit or circuit component C that computes a mapping F: I → O. C is called a probabilistic circuit if each i ∈ I is mapped to each j ∈ O with some probability p(i,j), and Σj p(i,j) = 1 for every i ∈ I.

and T and to decide whether a particular tomogram has been produced by a correct or an erroneous probabilistic circuit, as will be explained later. For simplicity, our experiments employ the standard Euclidean distance
dist((p1, ..., pq), (t1, ..., tq)) := √( Σj=1..q (pj - tj)² )

while noting that there are alternative definitions such as fidelity or the Kolmogorov distance [8]. A distance of 0 means that the tomogram T is a perfect match to the given distribution P.

C. Tomographic Testing and Diagnosis

A probabilistic circuit C can be transformed into a different probabilistic circuit Cf by a fault f. The fault is defined by some formal model that mimics physical defect mechanisms or typical designer errors. In the experiments conducted in this paper, we concentrate on the widely used single-missing-gate fault model for quantum circuits [5]; however, our methodology is readily applicable to other fault types. The faulty circuit Cf has the same input and output space as the correct version C, but its probability distributions differ from C's. All the faults being considered constitute a fault list f1, f2, ..., fF. The main purpose of tomographic testing is to determine whether a particular instance C of a circuit C0 is operating correctly or has a fault. An underlying assumption is that the structure of C cannot be observed from outside. Only the measurement results are available to decide whether C is correct. To make this decision, a tomogram T(C,i,N) is obtained, and its F + 1 distance measures d0, d1, ..., dF with respect to the probability distributions of the correct circuit C0 and the possible faulty circuits C1, C2, ..., CF are calculated. C is considered correct with respect to test input i if the minimum distance is d0, corresponding to C0; otherwise, it is considered faulty. Fig. 4 illustrates tomographic testing. The probability distribution of the fault-free circuit C0 under some input vector (test) i is depicted as C0(i). Six faults are modeled and the probability distributions of the faulty circuits under i are shown as C1(i), ..., C6(i).
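The decision rule just described, i.e., computing the F + 1 distances and picking the minimum, can be sketched as follows (plain Python; the Euclidean distance is the one used in our experiments, while the distributions and names below are illustrative):

```python
import math

def dist(p, t):
    """Euclidean distance between two distributions of equal length."""
    return math.sqrt(sum((pj - tj) ** 2 for pj, tj in zip(p, t)))

def classify(tomo, correct_dist, faulty_dists):
    """Return -1 if the tomogram is closest to the correct circuit's
    distribution C0(i); otherwise the index of the closest faulty one."""
    best, best_d = -1, dist(tomo, correct_dist)
    for f, pf in enumerate(faulty_dists):
        df = dist(tomo, pf)
        if df < best_d:
            best, best_d = f, df
    return best

c0 = [0.5, 0.5, 0.0]            # correct distribution (illustrative)
c1 = [0.0, 0.5, 0.5]            # distribution under one hypothetical fault
print(classify([0.4, 0.6, 0.0], c0, [c1]))  # -> -1 (classified as fault-free)
```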
A tomogram Tj = Tj(C, i, N) is obtained by applying vector i some N times to the circuit under test C and recording the frequencies of the outcomes; three such tomograms T1, T2 and T3 are shown in Fig. 4. Suppose T1 is obtained from the fault-free circuit, i.e., C = C0 and T1 = T1(C0, i, N). Due to measurement inaccuracies, the observed frequencies do not match the correct distribution C0(i). Still, the distance from T1 to C0(i), indicated by a grey solid arrow, is smaller than to any other Cj(i), so C is correctly classified as fault-free. Now consider T2 = T2(C0, i, N), another tomogram obtained by exactly the same testing experiment as T1. The differences between T1 and T2 are due to the fact that measurements are non-deterministic. Even though T2 originates from the correct distribution C0(i) (as indicated by the dashed arrow), the closest distribution to T2 is C1(i) and not C0(i). For this reason, C would be mis-classified by T2 as erroneous. The final tomogram T3 = T3(C6, i, N) has minimal distance to C5. A circuit C with a fault transforming its probability distribution under i to C6(i) would then be (correctly) classified as faulty by tomographic testing using T3. Another interesting question is: Given that a circuit has been identified as erroneous, which of the F fault types is present in the circuit? (For conventional circuits, this problem

A probabilistic circuit is specified by a set of |I| × |O| probabilities, which can be represented in various ways, e.g., by means of a probabilistic transfer matrix (PTM) [6]. Each input i ∈ I is associated with a probability distribution C(i) = (p(i,1), p(i,2), ..., p(i,|O|)) corresponding to a complete row of a PTM. In testing terms, p(i,j) is the probability of measuring j at the outputs of C when a test vector i is applied. This leads directly to the formal definition of a tomogram.

Definition 2: Let C be a probabilistic circuit that computes a mapping F: I → O with probabilities {p(i,j)}. Let i ∈ I have the probability distribution (p1, p2, ..., pq). A tomogram T(C,i,N) of C under input i obtained by N non-deterministic measurements is a probability distribution (t1, t2, ..., tq) that is the outcome of the following experiment: Apply input vector i to the inputs of C and measure the output response. Repeat this procedure N times and count the number of times nj that each oj ∈ O occurs. Set tj = nj / N.

For example, consider a probabilistic circuit C with O = {o1, o2, o3, o4} and a reference (good) probability distribution (p1, p2, p3, p4) = (0, 0.5, 0.25, 0.25) for some input vector i. To construct T(C,i,5), suppose five successive experimental measurements are made, yielding the four possible output values 1, 2, 3 and 4 a total of 0, 3, 1 and 1 times, respectively. By Definition 2, T(C, i, 5) = (t1, t2, t3, t4) = (0/5, 3/5, 1/5, 1/5) = (0, 0.6, 0.2, 0.2). Note that T(C, i, 5) is by no means deterministic, so repeating this experiment k times may result in up to k different probability distributions. In contrast, if C is non-probabilistic (and fault-free), each application of the test vector i will produce the same output value F(i), and the tomogram T(C, i, N) will have a 1 in the position corresponding to F(i) and 0s everywhere else. In general, a tomogram T = (t1, t2, ..., tq) is an experimental probability distribution that approximates some given or ideal distribution P = (p1, p2, ..., pq).
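The experiment of Definition 2 is straightforward to simulate when the circuit's distribution is known. A sketch (the function name is ours):

```python
import random
from collections import Counter

def tomogram(distribution, n_measurements):
    """Simulate T(C, i, N): draw N outcomes from the circuit's known output
    distribution and return the normalized frequencies (t1, ..., tq)."""
    q = len(distribution)
    outcomes = random.choices(range(q), weights=distribution, k=n_measurements)
    counts = Counter(outcomes)
    return [counts[j] / n_measurements for j in range(q)]

# Reference distribution from the example above: (0, 0.5, 0.25, 0.25).
print(tomogram([0.0, 0.5, 0.25, 0.25], 5))     # e.g. [0.0, 0.6, 0.2, 0.2]
print(tomogram([0.0, 0.5, 0.25, 0.25], 1000))  # much closer to the reference
```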
Intuitively, large values of N, corresponding to many measurements, will result in a tomogram that is very close or identical to the ideal distribution. We will use a concept of distance to measure the similarity of P

Figure 4. Tomographic testing with input vector i, six faults resulting in circuits C1,C2,,C6, and three tomograms T1,T2,T3.

corresponds to diagnosis when manufacturing defects are considered, and to post-silicon debugging when design errors are targeted.) State-of-the-art diagnostic approaches construct a ranked list of candidate faults. A diagnostic method is considered efficient if the fault which is actually present in the circuit under test is near the beginning of this ranked list, ideally in the first position. The rank matters because the designer is interested in correcting the error, and is likely to do so by eliminating the first error reported. If that fails, the designer will consider the second error, and so on, until arriving at the actual error. The position of this error in the ranked list corresponds to the effort spent by the designer trying to correct the wrong errors. For tomographic testing, the ranked diagnostic list is easy to construct for a given tomogram T: the faults are simply sorted according to their distance from T. More formally, we define the diagnostic rank of a fault with respect to T to be the number of other faults with a smaller distance to T, plus one. For example, tomogram T3 in Fig. 4 succeeds in identifying the circuit as faulty but does not point to the right fault candidate, as its distance to C5(i) is smaller than its distance to C6(i). Since the distance of T3 to its original distribution C6(i) is second-smallest among all distributions, the diagnostic rank of T3 equals 2.

D. Evaluation of Tomographic Testing

Tomographic testing is prone to both false positives (identifying a correct circuit as erroneous) and false negatives (identifying an erroneous circuit as correct). Tomogram T2 in Fig. 4 is an instance of a false positive. Intuitively, the probability of such misclassification is reduced if the number of measurements N, and thus the accuracy of the tomogram, is increased.
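The diagnostic-rank definition above translates directly into code. A sketch with hypothetical fault distributions (all names are ours):

```python
import math

def dist(p, t):
    """Euclidean distance between two distributions of equal length."""
    return math.sqrt(sum((pj - tj) ** 2 for pj, tj in zip(p, t)))

def diagnostic_rank(tomo, fault_dists, actual_fault):
    """Rank of the actual fault: the number of other faults strictly closer
    to the tomogram, plus one (rank 1 = perfect diagnosis)."""
    d_actual = dist(tomo, fault_dists[actual_fault])
    closer = sum(1 for f, pf in enumerate(fault_dists)
                 if f != actual_fault and dist(tomo, pf) < d_actual)
    return closer + 1

# Three hypothetical fault distributions over a 2-valued output.
faults = [[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]]
# A tomogram produced under fault 2 that happens to land nearer fault 1:
print(diagnostic_rank([0.5, 0.5], faults, actual_fault=2))  # -> 2
```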
Since repeating experiments is costly, it is of interest to know how many measurements are required to achieve a given confidence in a tomogram, i.e., to ensure that the probability of a misclassification does not exceed a pre-defined limit. We evaluated the effectiveness of tomographic testing by means of Monte Carlo simulation. Lacking manufactured instances of probabilistic circuits, we used simulators of such circuits (with or without faults or errors present) to obtain probability distributions, and then generated tomograms from these distributions. Let the probability distribution determined by the simulator be (p1, p2, ..., pq). A tomogram of N measurements is obtained as follows:

1. Repeat the following experiment N times: draw a (pseudo)random integer between 1 and q according to the distribution (p1, p2, ..., pq), i.e., the probability of j being drawn is pj.
2. Set tj to the number of times j is obtained in Step 1, divided by N.

This experiment is repeated many times, e.g., M = 10,000. If N measurements are needed for one tomogram, then the overall experiment requires M · N random draws in Step 1. For each simulated tomogram (t1, t2, ..., tq), we calculate the distances to the correct circuit C0's probability distribution and to those of all erroneous circuits C1, C2, ..., CF. If the original probability distribution (p1, p2, ..., pq) stems from an erroneous circuit but the minimal distance is to the correct circuit's probability distribution, a false negative is recorded; in the opposite case, a false positive is recorded. The fraction of false positives and negatives among the M experiments indicates the confidence that can be placed in the accuracy of the tomogram.
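The Monte Carlo procedure above can be sketched end to end. The two-valued distributions below are illustrative, not those of 3qubitcnot, and the helper names are ours:

```python
import math
import random
from collections import Counter

def dist(p, t):
    """Euclidean distance between two distributions of equal length."""
    return math.sqrt(sum((pj - tj) ** 2 for pj, tj in zip(p, t)))

def tomogram(distribution, n):
    """Draw n outcomes from the distribution; return normalized frequencies."""
    counts = Counter(random.choices(range(len(distribution)),
                                    weights=distribution, k=n))
    return [counts[j] / n for j in range(len(distribution))]

def misclassification_rate(true_dist, correct_dist, faulty_dists, n, m=10_000):
    """Fraction of m simulated tomograms of true_dist (n measurements each)
    whose nearest candidate distribution is not the one they were drawn from."""
    candidates = [correct_dist] + faulty_dists
    true_idx = candidates.index(true_dist)
    errors = 0
    for _ in range(m):
        t = tomogram(true_dist, n)
        nearest = min(range(len(candidates)),
                      key=lambda k: dist(t, candidates[k]))
        if nearest != true_idx:
            errors += 1
    return errors / m

c0, c1 = [0.8, 0.2], [0.3, 0.7]  # correct and single-fault distributions
# The false-positive rate for the correct circuit falls as n grows:
print(misclassification_rate(c0, c0, [c1], n=5))
print(misclassification_rate(c0, c0, [c1], n=100))
```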

Table 1. Results for the circuit 3qubitcnot of Fig. 1


         False positives (input vector)     False negatives (input vector)
   N      000    001    100    111           000    001    100    111
   5     1112   1301   1197   1421         19996  20116  20129  20225
   6      257    319    284    319         18798  18668  18693  18877
   7       86     89     90     88         12868  12953  12978  13160
   8       89     96     84     95          9412   9373   9448   9455
   9       84     84     81     96         10091  10091  10097  10182
  10       40     48     47     54         10604  10563  10751  10756
  11       10     19     12     20          8678   8821   8800   8725
  12        4      8      7      5          6499   6516   6636   6534
  15        1      2      2      3          5725   5645   5713   5566
  20        1      0      0      0          2721   2856   2736   2740
 100        0      0      0      0             1      1      1      0
1000        0      0      0      0             0      0      0      0

IV. EXPERIMENTAL RESULTS

We first report on a simple experiment for a small quantum circuit with a fault list consisting of one missing-gate fault. The circuit and the missing-gate fault are shown in Fig. 1. With input vector 000 (or (1,0,0,0,0,0,0,0) in the state-vector notation), the probability distribution is (0.286, 0.002, 0.210, 0.001, 0.210, 0.001, 0.287, 0.002) for the correct circuit (with all the gates in place), and (0.167, 0.122, 0.122, 0.089, 0.122, 0.089, 0.167, 0.122) for the missing-gate case. The probabilities have been calculated using the quantum circuit simulator QuIDDPro [13]. The numbers of false positives and negatives among M = 100,000 tomograms for different tomogram precision levels N and different input vectors are shown in Table 1. Several observations can be made from this experiment. First, increasing the number of measurements per tomogram improves the accuracy of classification: both false positives and false negatives fall significantly with rising N. Second, the numbers of false positives and negatives are of different orders of magnitude. (One might be tempted to think that, since the distance metric is commutative, the numbers should be comparable, but that is not the case.) Third, the particular input vector used has little influence. This is due to a specific feature of the circuit under test, where Hadamard gates at the inputs create a state superposition that is independent of the input vector. Such gates at the inputs are common in quantum circuits. Fig. 5 shows, for four circuits and different numbers N of measurements per tomogram, the misclassification rate, i.e., the share of tomograms obtained from the correct circuits that have been classified as erroneous among 10,000 tomograms. The circuits used are 3qubitcnot and three circuits provided with QuIDDPro [13]. The number of qubits ranges between 3 and 13 (note that 13 qubits defines a very large quantum circuit by today's implementation standards).
The exact numbers of qubits and single-missing-gate faults in the circuits can be found in the headings of Table 2. We considered a fault f with dist(C0(i), Cf(i)) < 0.0001 to be hard-to-detect, or redundant, for the given input vector, and excluded all redundant faults according to this definition. We used the all-0 input vector for all experiments.


Table 2. Diagnostic rank statistics


         Circuit 3qubitcnot                Circuit simple_grover
         (3 qubits, 6 faults,              (3 qubits, 9 faults,
          4 non-redundant)                  4 non-redundant)
   N    Min  Max   Mean     SD           Min  Max   Mean     SD
   5     1    4   1.1526  0.3931          1    4   1.0118  0.1347
  10     1    4   1.0589  0.2437          1    4   1.0028  0.0653
  20     1    3   1.0143  0.1194          1    3   1.0002  0.0142
  50     1    2   1.0001  0.0089          1    1   1.0000  0.0000
 100     1    1   1.0000  0.0000          1    1   1.0000  0.0000
1000     1    1   1.0000  0.0000          1    1   1.0000  0.0000

         Circuit steaneX                   Circuit steaneZ
         (12 qubits, 95 faults,            (13 qubits, 105 faults,
          41 non-redundant)                 42 non-redundant)
   N    Min  Max   Mean     SD           Min  Max   Mean     SD
   5     1   41   1.2153  0.5502          1   18   1.2198  0.6331
  10     1    4   1.1518  0.4306          1    7   1.1802  0.5264
  15     1    3   1.1359  0.4152          1    6   1.1804  0.5215
  20     1    3   1.1262  0.4023          1    5   1.1650  0.5006
 100     1    3   1.1036  0.3662          1    4   1.1182  0.4345
1000     1    3   1.0147  0.1392          1    4   1.0146  0.1502

Figure 5. Circuit misclassification rates with M = 10,000 (y-axis: misclassification rate, 0 to 1; x-axis: number of measurements; curves for simplegrover, 3qubitcnot, steaneX and steaneZ; the plot itself was lost in extraction).

It can be seen that the classification accuracy is relatively poor for tomograms based on a very small number of measurements N, but it quickly improves with rising N. In one case, there are fluctuations (with a period of 4) for small N, which are due to particular statistical properties of the probability distributions considered. Table 2 gives the diagnostic ranks for all faults in the four target circuits for various values of N. The minimum and maximum ranks are reported, along with the mean and standard deviation. Rank improves with increasing accuracy of the tomograms, achieving near-perfect resolution by N = 100 for the two smaller circuits. The two larger ones have probability distributions with 2^12 and 2^13 entries, respectively, so their approximation by tomograms composed of few measurements may not be adequate for diagnostic purposes.

V. CONCLUSIONS AND FUTURE WORK

Probabilistic effects are of increasing concern in many types of circuits. We have introduced a practical methodology for the test and verification of such circuits based on approximating their probability distributions by means of tomograms. In particular, we designed a Monte Carlo strategy to quantify and control the uncertainty inherent in tomographic testing. Quantum and stochastic computing were used to illustrate the potential applications. We presented experimental results for quantum circuits which show that even a small number of measurements, relative to the exponential size of a circuit's probability distribution, is quite effective for classification (testing) purposes, as well as for fault diagnosis. A number of research challenges remain. One challenge is to improve scalability. While the Monte Carlo approach is conceptually simple and applicable to small- to medium-sized circuits, an analytical model for determining the minimal number of measurements is needed for larger circuits.
Implicit or approximate techniques to store and manipulate probability distributions (which are exponential in the number of circuit
outputs) need to be developed. It will also be interesting to see how effective test and diagnosis are when the probabilistic circuit has un-modeled faults.

ACKNOWLEDGEMENTS

This paper is partially based on work done by J. P. Hayes as a recipient of the Humboldt Foundation Research Award.

REFERENCES
[1] J.D. Biamonte, J.S. Allen and M.A. Perkowski. Fault models for quantum mechanical switching networks. Quantum Physics Archive, arXiv:quant-ph/0508147v3, 2010.
[2] B. Brown and H. Card. Stochastic neural computation I: computational elements. IEEE Trans. Computers (50), pp. 891-905, Sept. 2001.
[3] M.L. Bushnell and V.D. Agrawal. Essentials of Electronic Testing. Kluwer Academic Publishers, Norwell, MA, 2000.
[4] B.R. Gaines. Stochastic computing. Proc. AFIPS Spring Joint Computer Conf., pp. 149-156, April 1967.
[5] J.P. Hayes, I. Polian and B. Becker. Testing for missing-gate fault models in reversible circuits. Proc. Asian Test Symp., pp. 100-105, 2004.
[6] S. Krishnaswamy, G.F. Viamontes, I.L. Markov and J.P. Hayes. Probabilistic transfer matrices in symbolic reliability analysis of logic circuits. ACM Trans. Des. Autom. Elec. Sys. (13), paper 8, Jan. 2008.
[7] C. Monroe and M. Lukin. Remapping the quantum frontier. Physics World, pp. 32-39, 2008.
[8] M.A. Nielsen and I.L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.
[9] J. von Neumann. Probabilistic logics and the synthesis of reliable organisms from unreliable components. Automata Studies (34), pp. 43-98, 1956.
[10] K.N. Patel, J.P. Hayes and I.L. Markov. Fault testing for reversible circuits. IEEE Trans. CAD (23), pp. 1220-1230, 2004.
[11] D. Stick et al. Ion trap in a semiconductor chip. Nature Physics (2), pp. 36-39, 2006.
[12] S. Toral, J. Quero and L. Franquelo. SRC passivation controller implementation using stochastic computing. Proc. ISCAS, pp. 723-726, May 2001.
[13] G.F. Viamontes, I.L. Markov and J.P. Hayes. Quantum Circuit Simulation. Springer, 2009.
[14] H.-J. Wunderlich. PROTEST: a tool for probabilistic testability analysis. Proc. Des. Autom. Conf., pp. 204-211, 1985.
