
Fast Spectral Transforms and

Logic Synthesis
DoRon Motter
August 2, 2001
Introduction
• Truth Table Representation
– Value provides complete information for one
combination of input variables
– Provides no information about other combinations
• Spectral Representation
– Value provides some information about the behavior of
the function at multiple points
– Does not contain complete information about any
single point
Spectral Transformation
• Synthesis
– Many proposed algorithms leverage fast transforms
• Verification
– Correctness can be checked more efficiently using a spectral representation
Review – Linear Algebra
• Let M be a real-valued square matrix.
– The transposed matrix M^t is found by interchanging rows and columns
– M is orthogonal if M·M^t = M^t·M = I
– M is orthogonal up to the constant k if M·M^t = M^t·M = kI
– M^-1 is the inverse of M if M·M^-1 = M^-1·M = I
– M has an inverse iff the column vectors of M are linearly independent
Review – Linear Algebra
• Let A and B be (n × n) square matrices
• Define the Kronecker product of A and B as

  A ⊗ B = | a11·B  a12·B  …  a1n·B |
          | a21·B  a22·B  …  a2n·B |
          |   ⋮      ⋮    ⋱    ⋮   |
          | an1·B  an2·B  …  ann·B |
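The block layout above translates directly into code. The following is an illustrative plain-Python sketch (the helper name `kron` is mine; NumPy's `np.kron` computes the same product):

```python
def kron(A, B):
    # (A ⊗ B) places a copy of B scaled by A[i][j] in block (i, j):
    # row index i*p + k, column index j*q + l
    p, q = len(B), len(B[0])
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(q)]
            for i in range(len(A)) for k in range(p)]

T1 = [[1, 1], [1, -1]]
# kron(T1, T1) reproduces the 4 x 4 Walsh-Hadamard matrix T(2)
```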
Spectral Transform
• General Transform Idea
– Consider the 2^n output values of f as a column vector F
– Find some transformation matrix T(n) and possibly its inverse T^-1(n)
– Produce RF, the spectrum of F, as a column vector by
• RF = T(n)·F
• F = T^-1(n)·RF
Walsh-Hadamard Transform
• The Walsh-Hadamard Transform is defined recursively:

  T(0) = [ 1 ]

  T(n) = | T(n-1)   T(n-1) |
         | T(n-1)  -T(n-1) |
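The recursion can be sketched in a few lines of illustrative Python (the helper name `walsh_hadamard` is mine, not from the slides):

```python
def walsh_hadamard(n):
    # T(0) = [1]; T(n) tiles T(n-1), flipping the sign of the
    # bottom-right block
    if n == 0:
        return [[1]]
    t = walsh_hadamard(n - 1)
    return [row + row for row in t] + \
           [row + [-v for v in row] for row in t]
```

`walsh_hadamard(1)`, `walsh_hadamard(2)`, and `walsh_hadamard(3)` reproduce the matrices on the next slide.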
Walsh-Hadamard Matrices

  T(1) = | 1  1 |
         | 1 -1 |

  T(2) = | 1  1  1  1 |
         | 1 -1  1 -1 |
         | 1  1 -1 -1 |
         | 1 -1 -1  1 |

  T(3) = | 1  1  1  1  1  1  1  1 |
         | 1 -1  1 -1  1 -1  1 -1 |
         | 1  1 -1 -1  1  1 -1 -1 |
         | 1 -1 -1  1  1 -1 -1  1 |
         | 1  1  1  1 -1 -1 -1 -1 |
         | 1 -1  1 -1 -1  1 -1  1 |
         | 1  1 -1 -1 -1 -1  1  1 |
         | 1 -1 -1  1 -1  1  1 -1 |
Walsh-Hadamard Example

  f(x1, x2, x3) = x1·x2'·x3' + x1'·x2·x3' + x1·x2·x3

• Truth vector (x1 varying fastest): F = [0 1 1 0 0 0 0 1]^t
• Spectrum RF = T(3)·F:

  | 1  1  1  1  1  1  1  1 |   | 0 |   |  3 |
  | 1 -1  1 -1  1 -1  1 -1 |   | 1 |   | -1 |
  | 1  1 -1 -1  1  1 -1 -1 |   | 1 |   | -1 |
  | 1 -1 -1  1  1 -1 -1  1 | · | 0 | = | -1 |
  | 1  1  1  1 -1 -1 -1 -1 |   | 0 |   |  1 |
  | 1 -1  1 -1 -1  1 -1  1 |   | 0 |   |  1 |
  | 1  1 -1 -1 -1 -1  1  1 |   | 0 |   |  1 |
  | 1 -1 -1  1 -1  1  1 -1 |   | 1 |   | -3 |
Walsh-Hadamard Example

  f(x1, x2, x3) = x1·x2'·x3' + x1'·x2·x3' + x1·x2·x3

• With F recoded from {0, 1} to {1, -1} (0 → 1, 1 → -1):

  | 1  1  1  1  1  1  1  1 |   |  1 |   |  2 |
  | 1 -1  1 -1  1 -1  1 -1 |   | -1 |   |  2 |
  | 1  1 -1 -1  1  1 -1 -1 |   | -1 |   |  2 |
  | 1 -1 -1  1  1 -1 -1  1 | · |  1 | = |  2 |
  | 1  1  1  1 -1 -1 -1 -1 |   |  1 |   | -2 |
  | 1 -1  1 -1 -1  1 -1  1 |   |  1 |   | -2 |
  | 1  1 -1 -1 -1 -1  1  1 |   |  1 |   | -2 |
  | 1 -1 -1  1 -1  1  1 -1 |   | -1 |   |  6 |
Walsh-Hadamard Transform
• Why is it useful?
– Each row vector of T(n) has a ‘meaning’:

  T(2) = | 1  1  1  1 |   constant
         | 1 -1  1 -1 |   x1
         | 1  1 -1 -1 |   x2
         | 1 -1 -1  1 |   x1 ⊕ x2

– Since we take the dot product of each row vector with F, we find the correlation between the two
Walsh-Hadamard Transform

  T(3) = | 1  1  1  1  1  1  1  1 |   constant, call it x0
         | 1 -1  1 -1  1 -1  1 -1 |   x1
         | 1  1 -1 -1  1  1 -1 -1 |   x2
         | 1 -1 -1  1  1 -1 -1  1 |   x1 ⊕ x2
         | 1  1  1  1 -1 -1 -1 -1 |   x3
         | 1 -1  1 -1 -1  1 -1  1 |   x1 ⊕ x3
         | 1  1 -1 -1 -1 -1  1  1 |   x2 ⊕ x3
         | 1 -1 -1  1 -1  1  1 -1 |   x1 ⊕ x2 ⊕ x3
Walsh-Hadamard Transform
• Alternate Definition

  T(n) = ⊗_{i=1}^{n} | 1  1 |
                     | 1 -1 |

• The recursive Kronecker structure gives rise to a DD/graph algorithm
Walsh-Hadamard Transform
• Alternate (symbolic) Definition

  T(n) = ⊗_{i=1}^{n} [ 1   (1 - 2xi) ]

• Note, T(1) is orthogonal up to the constant 2 (in general, T(n)·T(n)^t = 2^n·I):

  T(1)·T(1)^t = | 1  1 | · | 1  1 | = | 2  0 | = 2I
                | 1 -1 |   | 1 -1 |   | 0  2 |
Decision Diagrams
• A Decision Diagram has an implicit transformation in its function expansion
– Suppose T(n) = ⊗_{i=1}^{n} T(1)
– This mapping defines an expansion of f

  f = [xi'  xi] · T^-1(1) · T(1) · | f0 |
                                   | f1 |
Binary Decision Diagrams
• To understand this expansion better, consider the identity transformation
• Symbolically, T(n) = ⊗_{i=1}^{n} | 1  0 |, with basis row vector [xi'  xi]
                                   | 0  1 |

  f = [xi'  xi] · I^-1(1) · I(1) · | f0 |
                                   | f1 |

  f = [xi'  xi] · | f0 | = xi'·f0 + xi·f1
                  | f1 |
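The Shannon expansion above gives a simple recursive evaluator over the truth vector (an illustrative sketch assuming the top variable is the most significant bit of the row index; `eval_bdd` is my name, not a BDD-package API):

```python
def eval_bdd(F, bits):
    # Shannon expansion f = xi'·f0 + xi·f1: descend into the cofactor
    # selected by the current variable's value
    if len(F) == 1:
        return F[0]
    half = len(F) // 2
    return eval_bdd(F[half:] if bits[0] else F[:half], bits[1:])

F = [0, 1, 1, 0, 0, 0, 0, 1]
# eval_bdd(F, (0, 0, 1)) recovers row 001 of the truth vector
```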
Binary Decision Diagrams
• The expansion of f defines the node
semantics
• By using the identity transform, we get
standard BDDs
• What happens if we use the Walsh
Transform?
Walsh Transform DDs

  T(n) = ⊗_{i=1}^{n} [ 1   (1 - 2xi) ]

  f = [xi'  xi] · T^-1(1) · T(1) · | f0 |
                                   | f1 |

  f = [ 1   (1 - 2xi) ] · (1/2) | 1  1 | · | f0 |
                                | 1 -1 |   | f1 |

  f = (1/2)·[ (f0 + f1) + (1 - 2xi)·(f0 - f1) ]
Walsh Transform DDs

  f = (1/2)·[ (f0 + f1) + (1 - 2xi)·(f0 - f1) ]

[Node diagram: a WTDD node for xi with outgoing edges labelled 1 and (1 - 2xi), leading to the cofactor combinations (1/2)·(f0 + f1) and (1/2)·(f0 - f1)]
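The node identity can be checked exhaustively over Boolean cofactor values (illustrative Python; `wtdd_node` is a made-up name for the arithmetic a Walsh node performs):

```python
def wtdd_node(f0, f1, x):
    # (1/2)[(f0 + f1) + (1 - 2x)(f0 - f1)]
    return ((f0 + f1) + (1 - 2 * x) * (f0 - f1)) / 2

# selects f0 when x = 0 and f1 when x = 1, for every cofactor pair
```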
Walsh Transform DDs
• It is possible to convert a BDD into a
WTDD via a graph traversal.
• The algorithm essentially does a DFS
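On an explicit truth vector (rather than the shared graph), the same divide-and-conquer structure is the classic fast Walsh-Hadamard butterfly (an illustrative sketch, not the BDD-to-WTDD conversion itself):

```python
def fwht(a):
    # iterative butterfly; computes T(n)·a in O(n · 2^n) additions
    a, h = list(a), 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

# fwht([0, 1, 1, 0, 0, 0, 0, 1]) == [3, -1, -1, -1, 1, 1, 1, -3]
```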
Applications: Synthesis
• Several different approaches (all very
promising) use spectral techniques
– SPECTRE – Spectral Translation
– Using Spectral Information as a heuristic
– Iterative Synthesis based on Spectral
Transforms
Thornton’s Method
• M. Thornton developed an iterative technique for combinational logic synthesis
• The technique is based on finding correlation with constituent functions
• Needs a more general transformation than Walsh-Hadamard
– This is still possible quickly with DDs
Thornton’s Method
• Constituent Functions
– Boolean functions whose output vectors are the
rows of the transformation matrix
– If we use XOR as the primitive, we get the rows of the Walsh-Hadamard matrix
• Other functions are also permissible
Thornton’s Method
1. Convert the truth table F from {0, 1} to {1, -1}
2. Compute transformation matrix T using
constituent functions {Fc(x)}
• Constituent functions are implied via gate library
3. Compute spectral coefficients
4. Choose largest magnitude coefficient
5. Realize constituent function Fc(x) corresponding
to this coefficient
Thornton’s Method
6. Compute the error e(x) = Fc(x) ∘ F with respect to some operator ∘
7. If e(x) indicates w or fewer errors, continue to 8. Otherwise iterate by synthesizing e(x)
8. Combine intermediate realizations of the chosen {Fc(x)} using ∘ and directly realize e(x) for the remaining w or fewer errors
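The steps above can be condensed into a small sketch (hedged and illustrative: `thornton_sketch` and its conventions are mine, exploiting the fact that in the {1, -1} encoding XOR is elementwise multiplication):

```python
def thornton_sketch(F, constituents, w=1):
    # F and each constituent output vector are in {1, -1}
    chosen, e = [], F[:]
    while True:
        # steps 3-4: coefficients are correlations (dot products) with e(x)
        name, c = max(((nm, sum(a * b for a, b in zip(vec, e)))
                       for nm, vec in constituents.items()),
                      key=lambda t: abs(t[1]))
        # step 5: a negative coefficient means realizing the complement (XNOR)
        vec = constituents[name] if c > 0 else [-v for v in constituents[name]]
        chosen.append(name)
        # step 6: error function e(x) = Fc(x) XOR e(x), i.e. elementwise product
        e = [a * b for a, b in zip(vec, e)]
        if sum(v == -1 for v in e) <= w:    # step 7: stop at w or fewer errors
            return chosen, e

# {1, -1} output vector of x1 XOR x2 XOR x3 (0 -> 1, 1 -> -1)
parity = [1 - 2 * (bin(j).count("1") & 1) for j in range(8)]
target = parity[:]
target[0] = -target[0]                      # parity with one disturbed entry
chosen, err = thornton_sketch(target, {"x1^x2^x3": parity})
```

On this toy input a single iteration picks the parity constituent and leaves exactly one residual error, mirroring the worked example that follows.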
Thornton’s Method
• Guaranteed to converge
• Creates completely fan-out free circuits
• Essentially a repeated correlation analysis
• Extends easily to multiple gate libraries
Thornton’s Method: Example

  f(x) = x1·x3 + x1·x2·x3 + x1·x2 + x2·x3

• Step 1: Create the truth table using {-1, 1}

[Truth table: x1, x2, x3, and F over all eight input combinations, in the {1, -1} encoding]
Thornton’s Method: Example
• Step 2: Compute transformation matrix T
using constituent functions.
• In this case, we’ll use AND, OR, XOR
Thornton’s Method: Example
• The rows of T are the {1, -1} output vectors of the 16 constituent functions:

  1 (constant)
  x1, x2, x3
  x1 ⊕ x2, x1 ⊕ x3, x2 ⊕ x3, x1 ⊕ x2 ⊕ x3
  x1 + x2, x1 + x3, x2 + x3, x1 + x2 + x3
  x1·x2, x1·x3, x2·x3, x1·x2·x3

[16 × 8 transformation matrix T, one row per constituent function]
Thornton’s Method: Example
• Step 3: Compute spectral coefficients
• S[Fc(x)] = [-2, -2, 2, -2, 2, -2, 2, -6, 2, -2, 2, 2, -2, -2, -2, -4]
• Step 4: Choose the largest magnitude coefficient
– In this case, it corresponds to x1 ⊕ x2 ⊕ x3
Thornton’s Method: Example
• Step 5: Realize the constituent function Fc
– In this case we use XNOR since the coefficient
is negative
Thornton’s Method: Example
• Step 6: Compute the error function e(x)
– Use XOR as a robust error operator

[Truth table: x1, x2, x3, F, Fc, and e over all eight input combinations, in the {1, -1} encoding]
Thornton’s Method: Example
• Step 7: Since there is only a single error, we
can stop, and realize the final term directly
Conclusion
• The combination of spectral transforms and
implicit representation has many
applications
• Many ways to leverage the spectral
information for synthesis