
US 20140172937A1

(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2014/0172937 A1
     LINDERMAN et al.                  (43) Pub. Date: Jun. 19, 2014

(54) APPARATUS FOR PERFORMING MATRIX VECTOR MULTIPLICATION APPROXIMATION USING CROSSBAR ARRAYS OF RESISTIVE MEMORY DEVICES

(71) Applicant: UNITED STATES OF AMERICA AS REPRESENTED BY THE SECRETARY OF THE AIR FORCE, Rome, NY (US)

(72) Inventors: RICHARD LINDERMAN, ROME, NY (US); QING WU, MANLIUS, NY (US); GARRETT ROSE, CLINTON, NY (US); HAI LI, PITTSBURGH, PA (US); YIRAN CHEN, PITTSBURGH, PA (US); MIAO HU, PITTSBURGH, PA (US)

(21) Appl. No.: 13/965,459

(22) Filed: Aug. 13, 2013

Related U.S. Application Data

(60) Provisional application No. 61/848,775, filed on Dec. 19, 2012.

Publication Classification

(51) Int. Cl. G06F 17/16 (2006.01)
(52) U.S. Cl. CPC G06F 17/16 (2013.01); USPC 708/607

(57) ABSTRACT

An apparatus that performs mathematical matrix-vector multiplication approximation operations using crossbar arrays of resistive memory devices (e.g., memristor, resistive random-access memory, spintronics, etc.). A crossbar array formed by resistive memory devices serves as a memory array that stores the coefficients of a matrix. Combined with input and output analog circuits, the crossbar array system realizes the method of performing matrix-vector multiplication approximation operations with significant performance, area and energy advantages over existing methods and designs. This invention also includes an extended method that realizes the auto-associative neural network recall function using the resistive memory crossbar architecture.
APPARATUS FOR PERFORMING MATRIX VECTOR MULTIPLICATION APPROXIMATION USING CROSSBAR ARRAYS OF RESISTIVE MEMORY DEVICES

PRIORITY CLAIM UNDER 35 U.S.C. § 119(e)

[0001] This patent application claims the priority benefit of the filing date of provisional application Ser. No. 61/848,775 having been filed in the United States Patent and Trademark Office on Dec. 19, 2012 and now incorporated by reference herein.

STATEMENT OF GOVERNMENT INTEREST

[0002] The invention described herein may be manufactured and used by or for the Government for governmental purposes without the payment of any royalty thereon.

BACKGROUND OF THE INVENTION

[0003] Matrix-vector multiplication is one of the most commonly used mathematical operations in science and engineering computations. Mathematically, a matrix is usually represented by a two-dimensional array of real numbers, and a vector is represented by a one-dimensional array of real numbers. Since a matrix can be viewed as an array of vectors, the matrix-vector multiplication operation can be generalized to vector-vector multiplication (inner product) or matrix-matrix multiplication operations. In today's computer systems, matrix-vector multiplication operations are performed by digital integrated circuits, in which the numbers are represented in binary format and the computation is performed by Boolean logic circuits.

[0004] The existence of resistive memory devices, such as the memristor, resistive random-access memory (RRAM), and spintronic switches, was predicted in circuit theory nearly forty years ago. However, it wasn't until 2008 that the first physical realization was demonstrated by HP Labs through a TiO2 thin-film structure. Afterward, many resistive memory materials and devices have been reported or rediscovered. The resistive memory device has many promising features, such as non-volatility, low-power consumption, high integration density, and excellent scalability. More importantly, the unique property of recording the historical profile of the excitations on the device makes it an ideal candidate to perform storage and computation functions in one device.

[0005] For the purpose of succinct description, the present invention uses the terminology "memristor" to represent the category of "resistive memory device". For the remainder of the patent description, references to "memristor" shall be regarded as referring to any "resistive memory device".

[0006] Based on circuit theory, an ideal memristor with memristance M builds the relationship between the magnetic flux φ and the electric charge q that passes through the device, that is, dφ = M·dq. Since the magnetic flux and the electric charge are time-dependent parameters, the instantaneous memristance varies with time and reflects the historical profile of the excitations through the device.

[0007] When developing actual memristive devices, many materials have demonstrated memristive behavior in theory and/or experimentation via different mechanisms. In general, a certain energy (or threshold voltage) is required to enable a state change in a memristor. When the electrical excitation through a memristor is greater than the threshold voltage, e.g., |V| > |Vth|, the memristance changes. Otherwise, the memristor behaves like a resistor.

OBJECTS AND SUMMARY OF THE INVENTION

[0008] It is therefore an object of the present invention to provide an apparatus that performs mathematical matrix-vector multiplication approximation.

[0009] It is a further object of the present invention to provide an apparatus that utilizes resistive memory devices to store matrix coefficients so as to facilitate matrix-vector multiplication approximation.

[0010] It is yet a further object of the present invention to apply matrix-vector multiplication approximation to facilitate auto-associative neural network recall functionality.

[0011] Briefly stated, the present invention is an apparatus that performs mathematical matrix-vector multiplication approximation operations using crossbar arrays of resistive memory devices (e.g., memristor, resistive random-access memory, spintronics, etc.). A crossbar array formed by resistive memory devices serves as a memory array that stores the coefficients of a matrix. Combined with input and output analog circuits, the crossbar array system realizes the method of performing matrix-vector multiplication approximation with significant performance, area and energy advantages over existing methods and designs. This invention also includes an extended method that realizes the auto-associative neural network recall function using the resistive memory crossbar architecture.

SUMMARY OF THE INVENTION

[0012] In this invention, we describe a matrix-vector multiplication approximation hardware method and design based on crossbar arrays of resistive memory devices.

[0013] By nature, the memristor crossbar array is attractive for the implementation of matrix-vector multiplication operations. The crossbar array inherently provides capabilities for this type of operation. In this invention, we focus on a hardware architecture and realization of the matrix-vector multiplication approximation assuming all of the memristors have been pre-programmed to store the elements of the matrix. During the multiplication operations, the voltages across the memristors are constrained below the threshold voltage so that all the memristance values remain unchanged. The crossbar architecture can naturally transfer the weighted combination (matrix) of input voltages (input vector) to output voltages (output vector). We also describe a fast approximation mapping method so that the matrix can be mapped to pure circuit element relations.

[0014] As shown in FIG. 1, the proposed hardware method and design includes two crossbar arrays of resistive memory devices, M1 10 for positive elements of the matrix and M2 20 for negative elements of the matrix. The input vector is represented by the input voltages VI that are driving the row (horizontal) wires 30 of the crossbar arrays. The matrix-vector multiplication approximation resulting vector is represented by the outputs VO of the analog subtraction amplifier circuits 40 connected to the column (vertical) wires 50 of the crossbar arrays.

[0015] This invention also includes an extended method and design that realizes the auto-associative neural network recall function using the resistive memory crossbar architecture (see FIG. 4 and FIG. 5). The Brain-State-in-a-Box (BSB) model recall operation will be used as an example to illustrate the method.
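To make the threshold behavior described in paragraphs [0006] and [0007] concrete, the following minimal Python sketch models a single device whose memristance changes only when the applied voltage exceeds the threshold and otherwise acts as a fixed resistor. The linear-drift update rule, the parameter values, and the function name apply_voltage are illustrative assumptions, not part of the disclosed device.

```python
import numpy as np

# Toy model of the threshold behavior in paragraphs [0006]-[0007]: below the
# threshold voltage the device behaves as a fixed resistor; above it, the
# memristance drifts with the excitation. The drift rate "k" and all numeric
# values are illustrative assumptions, not device data.

def apply_voltage(m, v, dt, v_th=1.0, k=1e4):
    """Return the updated memristance after applying voltage v for time dt."""
    if abs(v) > v_th:                   # state change only above threshold
        m = m - k * v * dt              # simple linear-drift placeholder model
        m = float(np.clip(m, 100.0, 16e3))  # keep memristance within assumed bounds
    return m                            # below threshold: memristance unchanged

m = 8e3                                 # initial memristance (ohms), assumed
m = apply_voltage(m, 0.5, 1e-6)         # read-like pulse: no state change
m = apply_voltage(m, 1.5, 1e-6)         # write-like pulse: memristance drifts
```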
BRIEF DESCRIPTION OF THE DRAWINGS

[0016] FIG. 1 depicts the two crossbar arrays of memristive devices and the subtraction amplifiers used by the present invention to perform matrix-vector multiplication approximation.

[0017] FIG. 2 depicts a single crossbar array used in the present invention showing more clearly the word lines, bit lines, and the connection of memristive devices between them.

[0018] FIG. 3 depicts the circuit schematic of the subtraction amplifier used in the preferred embodiment of the present invention.

[0019] FIG. 4 depicts an alternate embodiment of the present invention employing the auto-associative neural network recall function.

[0020] FIG. 5 depicts the circuit schematic of the summation-subtraction amplifier circuit used in the alternate embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

A. Crossbar Array Implementation of Matrix-Vector Multiplication Approximation

[0021] Referring to FIG. 2, the present invention uses an N-by-M memristor crossbar array as a basic building block to achieve matrix-vector multiplication approximation computation functionality.

[0022] A set of input voltages VI^T = {vi_1, vi_2, ..., vi_N} is applied on each of the N word-lines (WLs) 30 of the array, and the current is collected through each of the M bit-lines (BLs) 50 by measuring the voltage across a sensing resistor 60. The same sensing resistors are used on all the BLs with resistance r_s, or conductance g_s = 1/r_s. The output voltage vector is VO^T = {vo_1, vo_2, ..., vo_M}. Assume the memristive device (i.e., memristor) 70 sitting on the connection between WL_i and BL_j has a memristance of m_{i,j}. The corresponding conductance is g_{i,j} = 1/m_{i,j}. Then the relation between the input and output voltages can be approximated by:

VO ≈ C × VI    (1)

Here, C is an M-by-N matrix that can be represented by the memristors 70 and the sensing (load) resistors 60 as:

C = D × G = diag(d_1, ..., d_M) × [ g_{1,1} ... g_{1,N} ; g_{2,1} ... g_{2,N} ; ... ; g_{M,1} ... g_{M,N} ]    (2)

where D is a diagonal matrix with diagonal elements of

d_i = 1 / (g_s + Σ_{k=1}^{N} g_{i,k}), i = 1, 2, ..., M.    (3)

[0023] Eq. (1) indicates that a trained memristor crossbar array can be used to construct the connection matrix C, and transfer the input vector VI to the output vector VO. However, Eq. (2) shows that C is not a direct one-to-one mapping of the conductance matrix G of the memristor crossbar array, since the diagonal matrix D is also a function of G. We have two methods to obtain the solution of G: a complex numerical iteration method to obtain the exact mathematical solution of G, and a simple approximation useful for frequent or resource-constrained updates. In this invention we adopt the second method, which is described next.

[0024] In this section we first show how to map C to G when c_{i,j} is in the range of [0, 1.0). In the next section we will show the method of mapping a general matrix A with both positive and negative elements.

[0025] We assume any g_{i,j} ∈ G satisfies g_min ≤ g_{i,j} ≤ g_max, where g_min and g_max respectively represent the minimum and the maximum conductance of all memristors in the crossbar array. We propose a simple and fast approximation to the mapping problem by allowing:

g_{i,j} = c_{i,j} · (g_max − g_min) + g_min    (4)

[0026] In the following, we will prove that by using this mapping method, a scaled version Ĉ of the connection matrix C can be approximately mapped to the conductance matrix G of the memristor array.

[0027] PROOF. By plugging Eq. (4) into Eq. (3), we have:

ĉ_{i,j} = [c_{i,j} · (g_max − g_min) + g_min] / [g_s + N·g_min + (g_max − g_min) · Σ_{k=1}^{N} c_{i,k}]    (5)

[0028] Note that many memristor materials, such as the TiO2 memristor, demonstrate a large g_max/g_min ratio. Thus, a memristor at the high resistance state under a low voltage excitation can be regarded as an insulator, that is, g_min ≈ 0. In addition, the impact of the term (g_max − g_min) · Σ_{k=1}^{N} c_{i,k} can be further reduced by increasing the ratio g_s/g_max. As a result, the impact of Σ_{k=1}^{N} c_{i,k} can be ignored. These two facts indicate that Eq. (5) can be further simplified to:

ĉ_{i,j} = (g_max / g_s) · c_{i,j}    (6)

[0029] In summary, with the proposed mapping method, the memristor crossbar array performs as a scaled connection matrix Ĉ between the input and output voltage signals.
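The following minimal Python/NumPy sketch illustrates Eqs. (1) through (6): it maps a coefficient matrix C onto crossbar conductances with Eq. (4), forms the effective connection matrix D × G of Eqs. (2) and (3), and checks that it approximates the scaled matrix (g_max/g_s)·C of Eq. (6). The conductance values, array sizes, and variable names are illustrative assumptions, not measured device parameters.

```python
import numpy as np

# Numerical sketch of Eqs. (1)-(6): map a matrix C (entries in [0, 1.0)) onto
# crossbar conductances with Eq. (4), then check that the circuit relation
# VO = D * G * VI behaves as a scaled connection matrix (gmax/gs) * C.
# All values below are illustrative assumptions, not device data.

rng = np.random.default_rng(0)
M, N = 4, 6                        # bit-line (output) and word-line (input) counts
g_min, g_max = 1e-6, 1e-3          # assumed memristor conductance range (S)
g_s = 100 * g_max                  # sensing-resistor conductance; large gs/gmax ratio

C = rng.uniform(0.0, 1.0, (M, N))        # target coefficients, c_ij in [0, 1.0)
G = C * (g_max - g_min) + g_min          # Eq. (4): fast approximate mapping
d = 1.0 / (g_s + G.sum(axis=1))          # Eq. (3): diagonal elements d_i
C_hat = np.diag(d) @ G                   # Eq. (2): effective connection matrix

VI = rng.uniform(-0.3, 0.3, N)           # input voltages, assumed below threshold
VO = C_hat @ VI                          # Eq. (1): crossbar output voltages

VO_ideal = (g_max / g_s) * (C @ VI)      # Eq. (6): scaled ideal result
print(np.max(np.abs(VO - VO_ideal)))     # residual shrinks as gs/gmax grows
```

The residual printed at the end becomes smaller as the ratio g_s/g_max increases, which is the condition invoked in paragraph [0028] to justify the simplification from Eq. (5) to Eq. (6).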
B. Transformation of Matrix

[0030] In this section we describe the method of mapping from a matrix A to the memristor conductances G, under the condition that a_{i,j} is in the range from -1.0 to +1.0. In the general case, we can scale any given matrix to the [-1.0, 1.0] range, then perform the operation, and finally scale the resulting outputs back to the original range.

[0031] Given a matrix A with all its elements scaled to the range of [-1.0, 1.0], we first split the positive and negative elements of A into two matrices A+ and A- as:

a+_{i,j} = a_{i,j}, if a_{i,j} > 0; and 0, if a_{i,j} ≤ 0    (7a)

a-_{i,j} = 0, if a_{i,j} > 0; and −a_{i,j}, if a_{i,j} ≤ 0    (7b)

[0032] As such, a general matrix-vector multiplication approximation becomes:

y = A·x = A+·x − A−·x    (8)

Here, the two matrices A+ and A− can be mapped to two memristor crossbar arrays M1 and M2 in a scaled version Â+ and Â−, respectively, by combining the mapping method in Eq. (4) with Eq. (7a) and (7b) as follows:

For M1: g_{i,j} = a_{i,j} · (g_max − g_min) + g_min, if a_{i,j} > 0.0; and g_min, if a_{i,j} ≤ 0.0    (9)

For M2: g_{i,j} = −a_{i,j} · (g_max − g_min) + g_min, if a_{i,j} < 0.0; and g_min, if a_{i,j} ≥ 0.0    (10)

C. Circuit Realization of Matrix-Vector Multiplication

[0033] To realize the matrix-vector multiplication approximation function y = A·x at the circuit level, the elements of the input vector x are converted to the range of input voltage levels VI. The corresponding functions for the multiplication approximation can be expressed as:

VO = (g_s / g_max) · (VO+ − VO−)    (11)

where VO+ = Â+ · VI and VO− = Â− · VI    (12)

and where VI = (v_bn / x_max) · x    (13)

Here, x_max is the maximum possible magnitude of any element of input vector x, and v_bn is the input voltage boundary, that is, −v_bn ≤ vi_i ≤ v_bn for any vi_i ∈ VI. In implementation, v_bn must be smaller than v_th so that the memristance values will not change during the multiplication operation.

[0034] As shown in FIG. 1, the memristor crossbar arrays 10, 20 are used to realize the matrix-vector multiplication approximation operation. To obtain both positive and negative elements in the matrix, two memristor crossbar arrays M1 10 and M2 20 are required in the design to represent the positive and negative matrices A+ and A−, respectively. The memristor crossbar arrays have the same dimensions as the transposed matrix A.

[0035] In the present invention, the input signal VI along with VO+ and VO−, the corresponding voltage outputs of the two memristor crossbar arrays, are fed into a number of analog subtraction amplifier circuits 40. The design of the subtraction amplifier 40 is shown in FIG. 3.

[0036] Resulting from the scaled mapping method, the required VO should be g_s/g_max times the generated VO+ or VO−. In the present invention, the resistances of the subtraction amplifier 40 are set to 1/g_s and 1/g_max such that the amplifier applies a gain of g_s/g_max to both VO+ and VO−. The resulting output of the subtraction amplifier 40 is:

VO = (g_s / g_max) · VO+ − (g_s / g_max) · VO−    (14)

which indicates that the scaled effect (caused by mapping from A to Â+ and Â−) has been canceled out. The M-by-N dimensional matrix requires M subtraction amplifiers 40 to realize the subtraction operation in Eq. (14). Also, for the subtraction amplifiers 40, we should adjust their power supplies so that their maximum/minimum output voltages reflect the same scaling factor used when converting the input vector x to the voltages VI. Finally, the resulting vector y can be obtained from VO with the inverse of the scaling factor applied to x, as shown in Eq. (15):

y = (x_max / v_bn) · VO    (15)
Description of an Alternate Embodiment of the Present Invention

D. Extended Method and Design for Auto-Associative Neural Network Recall Function

[0037] The Brain-State-in-a-Box (BSB) model is a typical auto-associative neural network. The mathematical model of the BSB recall function can be represented as:

x(t+1) = S(α·A·x(t) + λ·x(t))    (16)

where x is an N dimensional real vector, and A is an N-by-N connection matrix. A·x(t) is a matrix-vector multiplication operation, which is the main function of the recall function. α is a scalar constant feedback factor. λ is an inhibition decay constant. S(y) is the "squash" function:

S(y) = 1, if y ≥ 1; y, if −1 < y < 1; −1, if y ≤ −1    (17)

[0038] For a given input pattern x(0), the recall function computes Eq. (16) iteratively until convergence, that is, when all entries of x(t+1) are either "1" or "−1".

[0039] Using the same method for general matrix-vector multiplication approximation described previously, Eq. (16) converts to:

x(t+1) = S(A+·x(t) − A−·x(t) + x(t))    (18)

Here, for the default case we set α = λ = 1. The two connection matrices A+ and A− can be mapped to two N-by-N memristor crossbar arrays 100 and 110 in a scaled version Â+ and Â−, respectively, by following the mapping method in Eq. (9) and Eq. (10).

[0040] To realize the BSB recall function at the circuit level, the normalized input vector x(t) is converted to a set of input voltage signals V(t). The corresponding function for the voltage feedback system can be expressed as:

V(t+1) = S'((g_s/g_max)·Â+·V(t) − (g_s/g_max)·Â−·V(t) + V(t)) = S'(V_A+(t) − V_A−(t) + V(t))    (19)

Here, v_bn represents the input voltage boundary, that is, −v_bn ≤ v_i(t) ≤ v_bn for any v_i(t) ∈ V(t). The new saturation boundary function S'() needs to be modified accordingly as:

S'(y) = v_bn, if y ≥ v_bn; y, if −v_bn < y < v_bn; −v_bn, if y ≤ −v_bn

In implementation, v_bn can be adjusted based on requirements for convergence speed and accuracy. Meanwhile, v_bn must be smaller than v_th so that the memristance values will not change during the recall process.

[0041] FIG. 4 illustrates the BSB recall circuit built based on Eq. (19). The design is an analog system consisting of three major components. The selector (switch) selects V(0) as the input voltage at the start of the recall computation, then selects V(t+1) afterward. We assume that "t" is discretized time, so we have t = 0, 1, 2, .... After the output voltages have all converged, we reset t to 0 so that the circuit takes the new input V(0) to be computed (recalled). Below is the detailed description.

[0042] Memristor Crossbar Arrays:

[0043] As the key component of the overall design, the memristor crossbar arrays 100, 110 are used to approximate the matrix-vector multiplication functions in the BSB recall operation. To obtain both positive and negative weights in the original BSB algorithm in Eq. (16), two N-by-N memristor crossbar arrays 100 and 110 are required in the design to represent the connection matrices A+ and A−, respectively. The memristor crossbar arrays have the same dimensions as the BSB weight matrix A transposed.

[0044] Summation-Subtraction Amplifiers:

[0045] In the present invention, the input signal v_i(t) along with v_Â+,i(t) and v_Â−,i(t), the corresponding voltage outputs of the two memristor crossbar arrays 100, 110, are fed into a summation-subtraction amplifier 80. The design of the summation-subtraction amplifier circuit 80 can be found in FIG. 5.

[0046] Resulting from the decayed mapping method, the required v_A+,i(t) and v_A−,i(t) should be g_s/g_max times the generated v_Â+,i(t) and v_Â−,i(t), respectively. In the present invention, the resistances of the summation-subtraction amplifier are set to 1/g_s and 1/g_max such that a gain of g_s/g_max is applied to v_Â+,i(t) and v_Â−,i(t). The resulting output of the summation-subtraction amplifier 80 is:

v_i(t+1) = (g_s/g_max)·v_Â+,i(t) − (g_s/g_max)·v_Â−,i(t) + v_i(t) = v_A+,i(t) − v_A−,i(t) + v_i(t)    (20)

which indicates that the decayed effect has been canceled out. The N dimensional BSB model requires N summation-subtraction amplifiers 80 to realize the addition/subtraction operation in Eq. (20). Also, for the amplifiers, we should adjust their power supply levels to make their maximum/minimum output voltages equal to ±v_bn, respectively. In the present invention, the amplifier resistances can be adjusted to match the required α and λ in Eq. (16), if they are not the default value 1.

[0047] Comparator:

[0048] Once a new set of voltage signals V(t+1) is generated from the summation-subtraction amplifiers 80, the present invention sends them back as the input of the next iteration. Meanwhile, each v_i(t+1) ∈ V(t+1) is compared to v_bn and −v_bn so that when v_i equals either v_bn or −v_bn, we deem output i as having "converged". The recall operation stops when all N outputs reach convergence. In total, N comparators 90 are needed to cover all the outputs.

[0049] There are three major physical constraints in the circuit implementation: (1) for any v_i(0) ∈ V(0), the voltage amplitude of the initial input signal v_i(0) is limited by the input circuit; (2) the boundary voltage v_bn must be smaller than v_th of the memristors 70; and (3) the summation-subtraction amplifier 80 has finite resolution.

[0050] In the BSB recall function, the ratio between the boundaries of S(y) and the initial amplitude of x_i(0), x_i(0) ∈ x(0), determines the learning space of the recall function. If the ratio is greater than the normalized value, the recall operation will take more iterations to converge, with a higher accuracy. Otherwise, the procedure converges faster at the cost of stability. Thus, minimizing the ratio of V(0) and v_bn can help obtain the best performance. However, the real amplifier has a finite resolution, and v_bn is limited within v_th of the memristor 70. Continuously reducing V(0) eventually will lose a significant amount of information in the recall circuit.
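As a rough software analogue of the recall circuit of FIG. 4, the sketch below iterates Eqs. (18) through (20) on two simulated crossbars, saturates at ±v_bn as in S'(), and stops when every output reaches the boundary, mirroring the comparator behavior of paragraphs [0047] and [0048]. The outer-product connection matrix, the probe vector, and all parameter values are illustrative assumptions, not the patented circuit.

```python
import numpy as np

# Sketch of the BSB recall loop of Eqs. (18)-(20): two simulated crossbars
# provide the A+ and A- products, a summation-subtraction stage applies gain
# gs/gmax and adds the feedback term V(t), and S'() saturates at +/- v_bn.
# Convergence is declared when every output sits at +/- v_bn. Illustrative only.

rng = np.random.default_rng(2)
N = 8
g_min, g_max = 1e-6, 1e-3
g_s = 100 * g_max
v_bn = 0.3                                      # boundary voltage (< v_th), assumed

def crossbar(C):
    G = C * (g_max - g_min) + g_min             # Eq. (4) / Eqs. (9)-(10)
    d = 1.0 / (g_s + G.sum(axis=1))             # Eq. (3)
    return np.diag(d) @ G

# Illustrative BSB connection matrix built from one stored pattern (outer product).
pattern = rng.choice([-1.0, 1.0], N)
A = np.outer(pattern, pattern) / N
A_hat_pos = crossbar(np.where(A > 0, A, 0.0))   # scaled A+ crossbar
A_hat_neg = crossbar(np.where(A > 0, 0.0, -A))  # scaled A- crossbar

V = v_bn * 0.4 * pattern                        # scaled probe V(0), assumed noiseless
for t in range(100):
    V_plus = (g_s / g_max) * (A_hat_pos @ V)    # scaled crossbar outputs, Eq. (20)
    V_minus = (g_s / g_max) * (A_hat_neg @ V)
    V = np.clip(V_plus - V_minus + V, -v_bn, v_bn)   # S'() saturation, Eq. (19)
    if np.all(np.abs(V) >= v_bn - 1e-12):       # comparator-style convergence test
        break

print(t, np.sign(V) == pattern)                 # recalled pattern after convergence
```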
[0051] Having described preferred embodiments of the invention with reference to the accompanying drawings, it is to be understood that the invention is not limited to these precise embodiments, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope or spirit of the invention as defined in the appended claims.

What is claimed is:

1. An apparatus for performing matrix-vector multiplication approximation, comprising
a first crossbar array of memristive devices for positive elements of a matrix;
a second crossbar array of memristive devices for negative elements of a matrix; wherein
said first and said second crossbar array further comprise a plurality of word lines and a like plurality of bit lines with each of said word lines having an input and each of said bit lines having an output;
a subtraction amplifier having a positive input, a negative input, and an output;
said subtraction amplifier being connected to each corresponding pair of bit lines from said first and said second crossbar array;
said subtraction amplifier having a positive input connected to an output of a bit line from said first crossbar array and having a negative input connected to an output of a corresponding bit line from said second crossbar array; and
an output in common with said output of said subtraction amplifier, wherein when an input voltage vector is applied to said word lines, an output vector is produced at said output approximately according to
VO = A × VI
where VO represents said output voltage vector;
VI represents said input voltage vector; and
A represents a matrix with element values in the range of [-1.0, 1.0] being a function of the conductance of said memristive devices.

2. The apparatus for performing matrix-vector multiplication approximation of claim 1, wherein each of said first and said second crossbar arrays further comprises
a plurality of word lines intersecting a like plurality of bit lines, each of said plurality of word lines having an input end and each of said plurality of bit lines having an output end;
a plurality of memristive devices each having a first terminal and a second terminal, wherein
one said memristive device is connected at each said intersection; and
at each said intersection said first terminal of said memristive device is connected to said intersected word line and said second terminal of said memristive device is connected to said intersected bit line;
a sensing resistor connected between each said output end of each said bit line, and ground; and
when an input voltage vector is applied to said word lines, an output vector is produced at said output approximately according to
VO = C × VI
where VO represents said output voltage vector;
VI represents said input voltage vector; and
C represents a matrix being a function of the conductance of said memristive devices.

3. The apparatus for performing matrix-vector multiplication approximation of claim 2, wherein said matrix C is defined by the expression
C = D × G = diag(d_1, ..., d_M) × [ g_{1,1} ... g_{1,N} ; g_{2,1} ... g_{2,N} ; ... ; g_{M,1} ... g_{M,N} ]
where
G represents a conduction matrix corresponding to the conductance of said plurality of memristive devices in said first or said second crossbar array; and
D represents a diagonal matrix corresponding to the inverse of the summation of a row of said conduction matrix with the conductance of a sensing resistor.

4. The apparatus for performing matrix-vector multiplication of claim 3, wherein said diagonal matrix D further comprises matrix elements d_i defined by the expression
d_i = 1 / (g_s + Σ_{k=1}^{N} g_{i,k})
where
g_s is the conductance of a sensing resistor, and
g_{i,k} is the conductance of a memristive device.

5. The apparatus for performing matrix vector multiplication approximation of claim 1 wherein the conductance of said memristive devices in the first crossbar array is set by the expression:
g_{i,j} = a_{i,j} · (g_max − g_min) + g_min, if a_{i,j} > 0.0; and g_min, if a_{i,j} ≤ 0.0
and wherein the conductance of said memristive devices in the second crossbar array is set by the expression:
g_{i,j} = −a_{i,j} · (g_max − g_min) + g_min, if a_{i,j} < 0.0; and g_min, if a_{i,j} ≥ 0.0
where
g_{i,j} is the conductance of a memristive device;
g_min and g_max respectively represent the minimum and the maximum conductance of all said memristors in a crossbar array; and
a_{i,j} are the elements of said matrix A of claim 1.

6. The apparatus for performing matrix-vector multiplication approximation of claim 1 wherein the voltage elements vi_i of input voltage vector VI are constrained to an input voltage boundary defined by
−v_bn ≤ vi_i ≤ v_bn
where v_bn and −v_bn are the upper and lower input voltage boundaries.

7. The apparatus for performing matrix-vector multiplication approximation of claim 6, wherein the absolute value of v_bn and −v_bn is less than v_th, where v_th is the threshold voltage of said memristive devices.

8. An apparatus for performing auto-associative neural network recall function, comprising
a first crossbar array of memristive devices for positive elements of a matrix;
a second crossbar array of memristive devices for negative elements of a matrix; wherein
said first and said second crossbar array further comprise a plurality of word lines and a like plurality of bit lines with each of said word lines having an input and each of said bit lines having an output;
a summation-subtraction amplifier having a positive input, a negative input, a feedback input, and an output;
said summation-subtraction amplifier positive and negative inputs being connected to each corresponding pair of bit lines from said first and said second crossbar array;
said summation-subtraction amplifier having a positive input connected to an output of a bit line from said first crossbar array and having a negative input connected to an output of a corresponding bit line from said second crossbar array; and
said summation-subtraction amplifier having said feedback input connected to its output and to an input to said apparatus;
a comparator having an input, an output, and a first and a second comparison input,
said comparator input being connected to said output of said summation-subtraction amplifier;
said first comparison input being connected to a positive boundary voltage;
said second comparison input being connected to a negative boundary voltage;
wherein said comparator outputs a "1" to indicate convergence when said comparator input voltage is equal to either said positive boundary voltage or said negative boundary voltage; and
an output in common with said comparator output, wherein when an input voltage vector is applied to said word lines, a convergence output vector is produced at said output approximately according to
V(t+1) = S(A × V(t) + V(t))
where V(t+1) represents said convergence output voltage vector;
V(t) represents said input voltage vector;
A represents a matrix with element values in the range of [-1.0, 1.0] being a function of the conductance of said memristive devices; and
S() is the saturation boundary function as follows:
S(y) = v_bn, if y ≥ v_bn; y, if −v_bn < y < v_bn; −v_bn, if y ≤ −v_bn

9. The apparatus for performing auto-associative neural network recall function of claim 8, wherein each of said first and said second crossbar arrays further comprises
a plurality of word lines intersecting a like plurality of bit lines, each of said plurality of word lines having an input end and each of said plurality of bit lines having an output end;
a plurality of memristive devices each having a first terminal and a second terminal, wherein
one said memristive device is connected at each said intersection; and
at each said intersection said first terminal of said memristive device is connected to said intersected word line and said second terminal of said memristive device is connected to said intersected bit line; and
a sensing resistor connected between each said output end of each said bit line, and ground;
when an input voltage vector is applied to said word lines, an output vector is produced at said output approximately according to
VO = C × VI
where VO represents said output voltage vector;
VI represents said input voltage vector; and
C represents a matrix being a function of the conductance of said memristive devices.

10. The apparatus for performing auto-associative neural network recall function of claim 9, wherein said connection matrix C is defined by the expression
C = D × G = diag(d_1, ..., d_N) × [ g_{1,1} ... g_{1,N} ; g_{2,1} ... g_{2,N} ; ... ; g_{N,1} ... g_{N,N} ]
where
G represents a conduction matrix corresponding to the conductance of said plurality of memristive devices in said first or said second crossbar array; and
D represents a diagonal matrix corresponding to the inverse of the summation of a row of said conduction matrix with the conductance of a sensing resistor.

11. The apparatus for performing auto-associative neural network recall function of claim 10, wherein said diagonal matrix D further comprises matrix elements d_i defined by the expression
d_i = 1 / (g_s + Σ_{k=1}^{N} g_{i,k})
where
g_s is the conductance of a sensing resistor, and
g_{i,k} is the conductance of a memristive device.

12. The apparatus for performing auto-associative neural network recall function of claim 8 wherein the conductance of said memristive devices in the first crossbar array is set by the expression:
g_{i,j} = a_{i,j} · (g_max − g_min) + g_min, if a_{i,j} > 0.0; and g_min, if a_{i,j} ≤ 0.0
and wherein the conductance of said memristive devices in the second crossbar array is set by the expression:
g_{i,j} = −a_{i,j} · (g_max − g_min) + g_min, if a_{i,j} ≤ 0.0; and g_min, if a_{i,j} > 0.0
where
g_{i,j} is the conductance of a memristive device;
g_min and g_max respectively represent the minimum and the maximum conductance of all said memristors in a crossbar array; and
a_{i,j} are the elements of said connection matrix A of claim 8.

13. The apparatus for performing auto-associative neural network recall function of claim 8 wherein the voltage elements vi_i of input voltage vector VI are constrained to an input voltage boundary defined by
−v_bn ≤ vi_i ≤ v_bn
where v_bn and −v_bn are the upper and lower input voltage boundaries.

14. The apparatus for performing auto-associative neural network recall function of claim 13, wherein the absolute value of v_bn and −v_bn is less than v_th, where v_th is the threshold voltage of said memristive devices.

15. The apparatus of claim 14 further comprising an input selector for
selecting V(0) as an input voltage at the start of a recall computation and selecting V(t+1) thereafter;
resetting t = 0 after the convergence of all said output voltages; and
selecting a new input voltage V(0) for a recall computation.

16. The apparatus for performing matrix-vector multiplication of claim 15, wherein said first and said second comparison inputs are set to v_bn and −v_bn, respectively.

17. The apparatus for performing matrix-vector multiplication of claim 16, wherein said comparator compares each of its positive and its negative outputs to said first and said second comparison inputs, respectively, to determine whether an output has converged.

* * * * *
