Invited Paper
This paper reviews the problem of translating signals into symbols while maximally preserving the information contained in the signal time structure. In this context, we motivate the use of nonconvergent dynamics for the signal-to-symbol translator. We then describe a biologically realistic model of the olfactory system, proposed by Walter Freeman, that has locally stable dynamics but is globally chaotic. We show how Freeman's model can be discretized using digital signal processing techniques, providing an alternative to the more conventional Runge–Kutta integration. This analysis leads to a direct mixed-signal (analog amplitude/discrete time) implementation of the dynamical building block that simplifies the implementation of the interconnect. We present results of simulations and measurements obtained from a fabricated analog very large scale integration (VLSI) chip.

Keywords—Analog VLSI implementation, digital simulation models, neural assemblies, nonlinear dynamics.

I. DESIGN OF SIGNAL TO SYMBOL TRANSLATORS

There are many important differences between biological and man-made information processing systems. Animals have goal-driven behavior and have explored inductive principles throughout the course of evolution to work reliably in a non-Gaussian, nonstationary, nonlinear world. Autonomous man-made systems with sensors and computational algorithms (animats) are still unable to match these capabilities. Engineered systems may be made faster and more accurate, but at the expense of specialization, which brings brittleness. Our present model of computation was inherited from Turing and Von Neumann and is based on the theory of formal systems [26]. The formal model is invaluable and quite general for the purpose of symbolic processing [22].

However, we have to remember that symbols do not exist in the real world. The real world provides time-varying signals, usually faint signals corrupted by noise. Hence, the critical issue for accurate and robust interpretation of real-world signals by animats is not only how to process symbols but also how to transform signals into symbols. The processing device that transforms signals into symbols is called here the signal-to-symbols translator. We can specify an optimal translator as a device that is able to capture the goal-relevant information contained in the signal time structure and map it, with as little excess irrelevant information as possible, to a stable representation in the animat's computational framework.

There is no generally accepted theory to optimally design such translators for the unconstrained signals found in the real world. They fall between two very important areas of research, symbolic processing and signal processing, which use very diverse tools and techniques. Symbolic manipulation is unable to deal with time signals, while signal processing techniques are ill-prepared to deal with symbols. Many hybrid approaches using the minimum description length principle [40] and optimal signal processing principles [42], pattern recognition [14], neural networks [41], or other machine learning approaches [28] have been proposed.

A framework where the signal-to-symbols translator is modeled as a dynamical system coupled to the external world seems a productive alternative. To bring a biological flavor, we will center the discussion on distributed, adaptive arrangements of nonlinear processing elements (PEs) called coupled lattices [28]. The appeal of these coupled lattices is that they have potentially very rich autonomous dynamics, and external-world signals can directly influence the dynamics as forcing inputs. Von Neumann was the first to propose cellular automata, which are discrete-state and discrete-time coupled lattices.

Manuscript received January 1, 2000; revised February 1, 2001. This work was supported in part by the Office of Naval Research under Grant N00014-01-1-0405 and by the National Science Foundation under Grant ECS-9900394. The work of V. G. Tavares was supported by Fundacao para a Ciencia e Tecnologia and Fundacao Luso-Americana para o Desenvolvimento scholarships.
J. C. Principe, V. G. Tavares, and J. G. Harris are with the Computational NeuroEngineering Laboratory, University of Florida, Gainesville, FL 32611 USA.
W. J. Freeman is with the Department of Molecular and Cell Biology, University of California, Berkeley, CA 94720 USA.
Publisher Item Identifier S 0018-9219(01)05407-X.
where $a$ and $b$ are constants. Equation (1) presages Grossberg's additive model [19], and each PE is a second-order ordinary differential equation to take into consideration the independent dynamics of the wave density for the action of dendrites and the pulse density for the parallel action of axons. No auto-feedback from the nonlinearity output is allowed. The operation at trigger zones is nonlinear (wave-to-pulse transformation) and is incorporated in the model through $Q(\cdot)$, a static sigmoidal nonlinearity, which is nonsymmetric as opposed to the nonlinearities used in neurocomputing. This asymmetry is an important source of instability in the neural network because of the asymmetric excitatory and inhibitory gain.

$Q(\cdot)$ is defined by (2) and plotted in Fig. 2 [12]. In the model, $Q_m$ is fixed at all levels to 5:

$$
Q(x) =
\begin{cases}
Q_m\left(1 - e^{-(e^{x}-1)/Q_m}\right) & \text{if } x > x_0\\
-1 & \text{if } x \le x_0
\end{cases}
\qquad (2)
$$

where $x_0 = \ln\left(1 - Q_m \ln(1 + 1/Q_m)\right)$ is the cutoff below which the output stays at its minimum.

Fig. 2. Static nonlinear function sigmoidal characteristic.

A set of inputs may be prewired for temporal processing, as in the gamma model [39]. Freeman's model is also an additive model because the topological parameters are independent of the input. The dynamical parameters $a$ and $b$ are real time constants of two decaying eigenfunctions and are determined experimentally from brain tissue [11]. This is also true for all the other topological parameters. The forcing term in (1) is an external input.

If we follow the engineering literature, a neural network would be an arbitrary arrangement of these PEs. However, in the cortex there is a very tight relation between the topological arrangement of neurons and function [44] that is totally missed in neurocomputing. Freeman's model incorporates neurophysiological knowledge by modeling the functional connection among groups of neurons, called neural assemblies. Although each neural assembly is built from thousands to millions of neurons, the functional connections have been catalogued in a hierarchy of topologies by Katchalsky [11].

A. The K0 Set

The simplest structure in the hierarchy of sets is the K0 set. This set has the characteristic of having no functional interconnections among the elements of the neural assembly. The neural population in a K0 shares common inputs and a common sign for the outputs, as shown in Fig. 3. It can accept several spatial inputs that are weighted and summed, and then convolved with a linear time-invariant system defined by the left-hand side of (1). The output state resulting from the convolution is magnitude-shaped by (2) to form the output. We will represent a K0 set by a square, as in Fig. 3. This is the basic building block necessary for the construction of all the other higher level structures. A K0 network is a parallel arrangement of K0 sets.

B. The KI Network

The KI set is formed from K0 units interconnected through lateral feedback of common sign; hence, we can have excitatory (denoted by a + sign in Fig. 4) and inhibitory KI sets (denoted by a − sign). There is no auto-feedback. A number of KI sets connected only by forward lateral feedback connections also comprises a KI network. Each connection represents a weight that can be constant or time varying and is obtained from neurophysiological measurements. These models are lumped circuit models of the dynamics.

C. The KII Set and Network

The KII set is built from at least two KI sets with dense functional interconnections [11]. The most interesting interconnectivity occurs when the KII set is formed by two KI (or four K0) sets with opposite polarity (i.e., two excitatory K0 and two inhibitory) (Fig. 5). KII sets have fixed connecting weights, and they form oscillators with the parameter settings found in biology. The response measured at the output of the excitatory K0 to an impulse applied to its input is of a high-pass filter type (damped oscillation). However, with a sustained input it evolves into an undamped oscillation, which vanishes after the input returns to a zero state. The KII set is then an oscillator controlled by the input, with an intrinsic frequency set up by the connection weights.

When the KII sets are connected together they form a KII network, which becomes a set of mutually coupled oscillators. Note that the excitatory K0 of each set is connected through weights to all the other excitatory K0 sets of the network; likewise for the inhibitory K0 sets. The excitatory weights interconnecting the individual KII sets in the network are adapted with Hebbian learning, while the inhibitory weights are fixed with a value obtained from physiological measurements [13].

The input to the KII network is a vector of values, each applied to the excitatory K0 (named M1 in Fig. 5) in the KII set. The output is spatially distributed over the network, as we discuss next.

Fig. 5. KII network made up of tetrads of K0 sets. M: Mitral cells. G: Granular cells.

When an external excitation is applied to one or more KII sets in the network, those sets tend to build up energy on the other sets through excitatory (positive) interconnection feedback. The oscillation magnitude of the sets that do not have any excitation will depend on the magnitude of the coupling gain defined by the interconnecting weights. If these interconnections are adapted to an input pattern, a distributed representation for that input is achieved. When a known pattern is applied to the system input after training, then a recognizable output oscillation pattern emerges across the KII sets. In this perspective, the KII network functions as an associative memory [30] that associates an input bit pattern with the spatially distributed coupling that is stored in the interconnecting pathways, and that becomes "visible" from the outside as larger amplitude oscillations along the KII sets that have larger weights. The excitatory connections represent cooperative behavior, while the inhibitory connections introduce competitiveness into the system. They are crucial for contrast enhancement between the channels [13] (a channel is an input or output taken from a single set). For an "ON–OFF" (1/0) type of input pattern, the learning is accomplished with a modified Hebbian rule as follows [13]: if between any two input channels the product is equal
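The K0 building block described above can be sketched numerically as a second-order low-pass stage followed by the asymmetric wave-to-pulse sigmoid. The sigmoid below is the standard Freeman form with $Q_m = 5$; the rate constants `a` and `b` and the integration step are illustrative assumptions, not the paper's measured values.

```python
import math

# Freeman's asymmetric sigmoid (2): wave-to-pulse conversion.
# Q_M = 5, as the model fixes at all levels; x0 is the cutoff below
# which the population output saturates at -1.
Q_M = 5.0

def sigmoid(x, qm=Q_M):
    x0 = math.log(1.0 - qm * math.log(1.0 + 1.0 / qm))  # lower cutoff
    if x < x0:
        return -1.0
    return qm * (1.0 - math.exp(-(math.exp(x) - 1.0) / qm))

# Second-order K0 dynamics: (1/(a*b)) (x'' + (a+b) x' + a*b x) = u(t),
# integrated with semi-implicit Euler.  a, b (in 1/s) are assumed
# values chosen only for illustration.
def k0_response(u, dt=1e-4, a=220.0, b=720.0):
    x, v = 0.0, 0.0                    # state and its derivative
    out = []
    for u_t in u:
        v += dt * (a * b * (u_t - x) - (a + b) * v)
        x += dt * v
        out.append(sigmoid(x))         # output magnitude-shaped by Q(.)
    return out

# Pulse input: 5 ms on, then silence; the output rises, then decays
# back to the basal level with no sustained activity.
inp = [1.0] * 50 + [0.0] * 1950
y = k0_response(inp)
```

The two real poles make the linear part a damped, nonoscillatory filter; the oscillations of the higher K sets come only from the feedback interconnections, not from a single K0.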
order gamma filter. The sampling frequency was chosen as 20 times larger than the largest frequency pole. Fig. 12 represents the impulse-invariant system response to an impulse. The gamma filter approximation of the sampled differential equation is also plotted. The resulting match is very good, although the approximation filter has only a double pole. Table 1 summarizes the gamma filter parameter values found using the discussed system identification approach.

D. Dispersive Delays Digitization

As discussed earlier, time dispersion was introduced in the KIII network to model the different axonal interconnection lengths and thicknesses. These delay functions can be modeled as in (13)

$$h(t) = \frac{1}{T}\left[u(t) - u(t-T)\right] \qquad (13)$$

which can be rewritten in a convolution form as

$$x_d(n) = x(n) * \frac{1}{N}\left[u(n) - u(n-N)\right] \qquad (14)$$

where $*$ denotes the convolution and $u(\cdot)$ is the step function. Equation (14) represents an FIR low-pass filter in discrete time and is recognized as a linear-phase filter response of Type I or II, depending on whether the number of samples is odd or even, respectively. A linear-phase system is nondispersive. To be more biologically realistic, the filter should have nonlinear phase. A simple low-pass IIR system like a first-order gamma kernel has nonlinear phase and so implements a more realistic dispersive delay.

The gamma kernel has the added feature of allowing an analytical procedure to calculate the time dispersion. From (14), we may interpret the delay as a memory structure with a given time depth. A gamma kernel has a depth of memory given by (12), so we may match the two depths and find the gamma parameter for a given system order. However, as mentioned before, time resolution is decreased with this design and may be a concern. In the present implementation, a cascade of two dispersive delays was utilized. Following this procedure, we get the dispersive delay response of Fig. 13.

E. Digital Simulation Results

After discretization, any software package (DSP or neural networks) or even hardware simulators can be utilized in the simulation. The environment used for the simulation task is a commercial neural network package called "NeuroSolutions" [35]. The software has all the basic building blocks necessary to construct the K sets and the dispersive delays. The user interface enabled the creation of the topologies of each set using an icon-based interface. We performed many tests on each K set, but here we will report on the more interesting ones (forced responses).

First, we show in Fig. 14 a comparison of the KII output spectrum to a "high" input obtained using Runge–Kutta integration (ODE45 in Matlab [32]) and our modeling method with the chosen sampling period. The KII system response was simulated with an input square wave with an amplitude of six (arbitrary units) when excitation is present, and zero otherwise. The parameter set is summarized in Table 2, where the K values are the weights between the mitral (E) and granular (I) cells denoted by M and G in Fig. 5. We plot in Fig. 14 the magnitude spectrum of the oscillating time response using the fast Fourier transform (FFT).
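The contrast between the linear-phase FIR delay and the dispersive gamma kernel can be checked numerically. A length-N boxcar delays every frequency by the same $(N-1)/2$ samples, while a first-order gamma kernel $g[n] = \mu(1-\mu)^n$ has mean delay $(1-\mu)/\mu$ but spreads it in time; the values of `N` and `mu` below are illustrative choices only.

```python
# Boxcar FIR delay: h[n] = 1/N on 0..N-1 -- symmetric, hence linear
# phase and nondispersive (every component delayed by (N-1)/2).
N = 9
boxcar = [1.0 / N] * N

# First-order gamma kernel: g[n] = mu * (1 - mu)**n -- an IIR kernel
# with nonlinear phase, hence dispersive.  mu = 0.2 is illustrative.
mu = 0.2
gamma = [mu * (1 - mu) ** n for n in range(200)]   # truncated tail

def mean_delay(h):
    # First moment of the (normalized) impulse response, in samples.
    s = sum(h)
    return sum(n * v for n, v in enumerate(h)) / s

d_boxcar = mean_delay(boxcar)   # exactly (N - 1)/2 = 4 samples
d_gamma = mean_delay(gamma)     # approximately (1 - mu)/mu = 4 samples
```

With these choices both kernels have the same mean delay, so the comparison isolates the dispersion: the boxcar is flat over N samples, while the gamma kernel's exponentially decaying tail spreads the energy, mimicking the distribution of axonal conduction delays.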
Table 2. KII Parameter Set

Table 3. Patterns Stored
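The input-gated, transient oscillation attributed to the KII set can be illustrated with a minimal two-state caricature: one excitatory and one inhibitory state with antisymmetric cross-coupling. The damping and coupling values below are illustrative assumptions, not Freeman's parameters.

```python
def kii_pair(u, dt=1e-3, a=1.0, w=40.0):
    # x: excitatory state, y: inhibitory state.  The +w/-w cross
    # coupling makes the pair rotate (eigenvalues -a +/- i*w), i.e. a
    # damped oscillator driven through the excitatory side.
    x = y = 0.0
    xs = []
    for u_t in u:
        x += dt * (-a * x - w * y + u_t)
        y += dt * (-a * y + w * x)   # semi-implicit: uses updated x
        xs.append(x)
    return xs

# Square-wave input: "on" for 5 s, "off" for 5 s.
u = [6.0] * 5000 + [0.0] * 5000
xs = kii_pair(u)
# During the input, the excitatory state oscillates about a nonzero
# offset; when the input vanishes, the oscillation decays back to the
# zero basal state -- unlike a Hopfield network, no extra energy is
# needed to leave an attractor.
```

This linear toy cannot sustain the undamped oscillation of a real KII set (that requires the asymmetric nonlinearity), but it does reproduce the qualitative behavior discussed here: oscillation only while the input is present, and a return to the basal state afterwards.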
to a large limit cycle centered away from the origin. It stays there for the duration of the input, and when the system input vanishes the output returns to zero.

The phase plot resembles a Shilnikov singularity, but we have not investigated further the local characteristics of the unstable point [45]. This result also indicates that the KII may be stable in the sense of Lyapunov, i.e., there is a potential function that models the system as energy conservative. Finally, Fig. 16 shows the outputs of all the KII sets in the KII network after the application of pattern 5 at the input (recall of pattern 5). The input pattern produces a large amplitude oscillation in the channels that have 1 in the input pattern, so we have the expected association between inputs and the AM oscillation that Freeman describes in the OB of the rabbit following a sniff, which was also simulated in [10].

One interesting property of this output is that it is transient in time, disappearing when the input vanishes. This behavior is very different from Hopfield networks, where the system dynamics stay in the fixed point after the input is removed. Extra energy would have to be supplied to the system to get away from the fixed point, while here the KII network is always ready for real-time processing of temporal patterns.

The KIII network was constructed in NeuroSolutions with only eight channels in the OB layer because the simulation takes much longer. Table 4 summarizes the parameter set used for the simulation. All the results were generated for the recall of pattern 2. The input was a smooth rising pulse within the values of 2 and 8, as specified in [49].

Table 4. Parameter Set for the KIII Network Simulations (see Fig. 6)

In the full KIII system, we have many possible probe points. We will start by analyzing the resting state at the PG, OB, AON, and PC layers. If we observe the time signals at any of these locations in the absence of input, we will see random fluctuations of a quasi-periodic activity that resembles the electroencephalogram's background activity. When we plot the activity as phase plots between the inhibitory and excitatory states (Fig. 17), we see phase plots that resemble strange attractors of different shapes across the four layers. Therefore, the KIII has a high-order nonconvergent dynamical behavior compatible with a chaotic attractor. We have not made any tests for chaoticity yet, but a simple power spectrum estimation of the time series generated by the AON (E1 state) clearly shows a type of spectrum that is once again compatible with a self-similar attractor (Fig. 18). So we conclude that the expected behavior simulated using Runge–Kutta integration in [49] is again obtained in our discretized model.

Let us now turn to the OB layer and analyze the response to a pulse input. Fig. 19 shows the time series for the recall of pattern two. Notice that the channels where the "high" level of the input was stored do indeed show a dc offset during the time the pulse is applied. We were also expecting an increase in oscillatory amplitude during the pulse, but the small size of the OB layer (only eight channels) may explain the difference.

We investigated in more detail the structure of the oscillations among the different channels of the OB layer. When we
create the phase plot between the excitatory and inhibitory state of channel 2, we clearly see [Fig. 20(a)] that the basal attractor (for the "off" input) changes to a "wing" when the input switches to a high amplitude. Channels that have a stored pattern oscillate in perfect synchronization [Fig. 20(b) and (c)] and share the same basal state evolution. This is also reported by Freeman [13]. However, channels belonging to different templates oscillate with different, incommensurable phases [Fig. 20(d)]. We remark that in the beginning of the simulation they start in phase, but they lose synchrony very quickly thereafter. Freeman states that this lack of coherence is not apparent in the signals collected from the rabbit, nor in analog implementations of the model. He states that it corresponds to the breakdown of the chaotic attractor, due possibly to quantization errors that create limit cycles in phase space [14].

Therefore, we conclude that when excited, the KIII system undergoes a transition from the basal state to a different chaotic behavior characteristic of the pattern being recalled. When the excitation vanishes, the system always returns to the initial attractor (Fig. 20). The "on" channels are distinguishable by the higher offset and signal dynamics (Fig. 20). All these results are consistent with Freeman's results using Runge–Kutta numerical methods [49], while here we use our discretization of the linear part of the dynamics using the gamma approximation.

V. ANALOG CIRCUIT IMPLEMENTATIONS

The previous sections presented Freeman's model of the olfactory system and its DSP implementation. The flexibility, immunity to noise, and modularity of the DSP implementation are unsurpassed when compared to analog VLSI implementations. However, in the general framework of signal-to-symbols translators discussed in the introduction, an analog implementation also presents advantages of its own, such as:
• natural coupling to the analog real world;
• computation is intrinsically parallel;
• the computation time is practically instantaneous (set only by the delay imposed by each component);
• there are no roundoff problems, i.e., the amplitude resolution is effectively infinite;
• it generally renders smaller circuits;
• it is one order of magnitude more power efficient.

These benefits motivated the development of an analog VLSI implementation, but unlike the traditional design approach, we seek to preserve as much as possible the functional correspondence to the digital design, to help us set in the digital simulator the parameters of the analog KIII model. Otherwise, finding appropriate parameters in the analog version of the KIII becomes a daunting task. Our analog design approach is centered on a continuous-amplitude and discrete-time implementation of the dynamic building blocks (mixed-signal implementation) for the following reasons.
• We wish to preserve a continuous amplitude representation due to the negative effect of the roundoff errors reported in implementing chaotic dynamical systems [50].
• The most serious design constraint in the VLSI implementation of the KIII is the size of the interconnect. If a full analog (i.e., analog amplitude and continuous time) implementation is selected, the area required for the interconnect would be prohibitive. Another notable advantage of the mixed-signal solution is the control gained over the timing of events. The synchronous response update at discrete times provides free time slots
to implement other functions, such as reading inputs and parameters, and transparent multiplexing with the consequent optimal resource management.
• Discrete-time implementation also brings important advantages for testing, since a one-to-one correspondence can be established between the DSP simulation and the chip components. This is evident from the fact that both have the same mathematical representations, difference equations, and transform. The difference resides in dimensionality, which can easily be translated between both domains by a substitution of variables between the continuous-time and discrete-time descriptions.
• Unexpectedly, time discretization also leads to a very efficient implementation of the dynamics in the form of a novel circuit called filter and hold (F&H, patent pending) that decreases the size of the capacitors and resistors for a given time constant.

The discretization of the KIII system provides a clear roadmap for the analog implementation. As was discussed above, the only components required are gamma filters for the dynamics and dispersive delays, static nonlinearities, summing nodes, and the vast interconnect.

As a conclusion, the high-level formal simulation proposed in earlier sections becomes the centerpiece for the discrete-time analog implementation. We have access to a formal high-level representation that can, on a sample-by-sample basis, predict how the real system will behave. The VLSI system can be flexibly tested and the parameters refined with the digital simulator, and the refined parameters can be directly brought onto the designed chip by external active control.

Of course, this tight coupling between the mixed-signal and the DSP implementations also creates extra difficulties, because the exact mathematical functions are seldom implementable in VLSI. Hence, the design cycle should include a redefinition of the formal DSP system to include the small alterations imposed by implementation limitations. If both behave similarly, the analog implementation will be a good representation of the original continuous model, since the previous results showed that the digital model reproduces the Runge–Kutta simulation well. This is the procedure followed in the next sections, where we describe each of the building blocks and present measurements from circuits designed in MOSIS 1.2-μm AMI technology using the subthreshold CMOS regime for low power.

A. Nonlinear Asymmetrical Sigmoidal Function

The function responsible for the nonlinear characteristic of Freeman's olfactory model is static and has the peculiarity of being asymmetric around the zero point (Fig. 2). A precise implementation of (2) is not an easy task, but we succeeded in approximating the sigmoid with a modified operational transconductance amplifier (OTA). The important aspects of the original function are preserved, such as the exponential sigmoidal shape and the asymmetry around the input axis. The exponential shape is set by placing the MOS transistors in the subthreshold operating region, where they display an exponential relationship between the gate-source voltage and the drain current. The asymmetry is accomplished by unmatching the current mirror that acts as an active load to a differential input
stage. The output current relation with the input differential voltage can be represented by (15), which has the required shape, as Fig. 21 shows:

(15)

The offset that is evident from Fig. 21 (left panel) is produced by the current-mirror unbalancing; however, it can be cancelled out by subtracting, at the cell input, a voltage equal to the offset value. This can be done automatically by placing feedback around a similar cell and applying the output to the inverting terminal [48]. The residual offset will then be due only to the natural fabrication process. The sigmoidal curve after offset cancellation is shown in Fig. 21 (left panel). The final schematic is shown in Fig. 22. The current is converted to a voltage with an amplifier, and the OTA was implemented using a wide-range topology to prevent the inversion of current at the input differential pair when the output voltage drops below a certain value. Chip measurements with two different bias currents are shown in Fig. 21 (right panel), and the overall shape matches the circuit simulations reasonably well.

Notice that the dynamic ranges of the DSP implementation and of the chip are different. However, in the chip we preserve the 1/5 ratio between the dc unbalance and the saturation level that is implemented by Freeman's nonlinearity with $Q_m = 5$. We chose a bias current (in the nanoampere range) to yield a 60-mV dynamic range, but different bias currents preserve the ratio while changing the gain and the saturating levels. We are only concerned with the slope of the nonlinearity, which did not match (2). This will impact the parameter setting between the two implementations, but for comparison purposes (15) was also programmed into the digital simulator.
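The kind of approximation involved can be sketched numerically: a shifted and scaled hyperbolic tangent (the natural characteristic of a differential pair) reproduces the −1/+5 saturation levels of Freeman's nonlinearity, i.e., the 1/5 ratio preserved on the chip, while its slope in general differs and must be corrected separately. The gain value and the input-shift offset cancellation below are illustrative assumptions, not the chip's circuit equation.

```python
import math

Q_M = 5.0

def freeman_sigmoid(x, qm=Q_M):
    # Asymmetric wave-to-pulse nonlinearity: saturates at -1 below the
    # cutoff and at qm above, giving the 1/5 level ratio.
    x0 = math.log(1.0 - qm * math.log(1.0 + 1.0 / qm))
    if x < x0:
        return -1.0
    return qm * (1.0 - math.exp(-(math.exp(x) - 1.0) / qm))

def ota_like_sigmoid(x, gain=1.0):
    # Shifted/scaled tanh: saturations at -1 and +5 preserve the 1/5
    # ratio; the input shift c plays the role of the offset
    # cancellation so that the output is zero for zero input.  The
    # slope (set by "gain") will generally differ from Q(.) and has to
    # be matched by a separate normalizing factor.
    A = (Q_M + 1.0) / 2.0          # 3.0: half of the total swing
    B = (Q_M - 1.0) / 2.0          # 2.0: asymmetric dc offset
    c = math.atanh(B / A) / gain   # input shift so f(0) = 0
    return A * math.tanh(gain * (x - c)) + B
```

Both functions cross zero at the origin and share the same saturation levels, so any residual mismatch between the two implementations is confined to the slope, exactly the discrepancy the text singles out.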
Fig. 24. F&H measured AC response together with predicted response by (16) (solid).
straint for the implementation. In discrete time, there are basically two filtering techniques available: the switched capacitor (SC) [5] and the switched current (SI) [26], but they both utilize large areas and are not compatible with nanowatt consumption. A third alternative would use analog filtering with a sample and hold (S/H) at the input and output to implement continuous-time processing of a discrete-time signal [36]; however, it does not bring any advantage when compared with the above two approaches.

Small area and nanopower consumption usually go hand in hand. Simpler circuits take fewer elements and in general will require less power. We present an example of a novel discrete-time technique that in essence is analog in magnitude, but uses time discretization in a very simple way to get very long time constants with little area and low power. The technique was named "filter and hold" (F&H), and it is a mixed analog/discrete-time design that combines conventional analog continuous-time filtering with sampling [47], [48].

The F&H technique is based on the very intuitive idea of allowing a capacitor to integrate a current during a fraction of each sampling period and halting it during the remainder, the process being repeated every period. The effective time constant is the original RC value divided by the duty cycle, defined as the ratio of the integration time over the sampling period. The scaling of the time constant depends solely on the duty cycle of the clock and not on the sampling period. Therefore, large time constants can be achieved without a decrease in sampling frequency or high capacitor (or, in SI systems, transistor) ratios. The switching scheme is simple, and both the area and power requirements can be made small. Further details about F&H are outside the scope of this paper; however, we have shown that the procedure is general for any filter order and type, i.e., high-pass, low-pass, bandpass, and band-reject filters for sampled input signals.

Table 5. KII–K0 Interconnection Gains in Analog and Digital Simulations

The F&H circuit used for the K0 is shown in Fig. 23. The difference equation can easily be found by taking the step response [47], and the corresponding frequency response becomes

(16)

The very low frequency poles (35 Hz and 114 Hz) were realized with a very low power implementation (100 nW) and with a fairly low area consumption of 200 μm × 150 μm (capacitors of 1 pF and 4 pF); with a duty cycle of 1%, the equivalent capacitors are virtually scaled by 100 (to 100 pF and 400 pF). Fig. 24 shows the predicted and measured frequency responses.

C. K0 Cells

At this point, the K0 model is almost complete. In order to allow interconnectivity, an input summing node is needed. The summing node is a simple voltage adder, as represented in Fig. 25. Since the adder inverts the signal, the output nonlinearity was changed to take the signal inversion into account. To build the higher level models, excitatory and inhibitory K0 cells are needed. The voltage adder configuration inverts the input signals, so to preserve the proper sign between input and output, the nonlinear circuits were topologically changed to reverse the signal polarity. Fig. 25 shows conceptually the nonlinear function change needed to ensure the proper signs. The circuit that performs these nonlinear functions is a small variant of that of Fig. 22; only the unbalancing transistors and the input positions change.

Each designed K0 takes an area of 500 μm × 160 μm, and the power consumption is roughly 300 μW to ensure driving capabilities in the summing amplifier of Fig. 25.
D. A Coupled KII–K0 Measured Results earity by (15). With these pole values and the same coupling
parameters of Table 5, our digital simulation produced the
A KII set was implemented in analog VLSI with the com- expected dynamical behavior that we report next. Hence, we
ponents described in the previous sections. On the same chip, have reenforced our conviction that the digital simulation can
we also implemented several K0 sets and for the tests, we be used as a flexible test bed to set analog system parameters;
connected one of the K0 sets to the KII as shown in Fig. 26. we just need to model all the components more accurately.
All the resistors are external, and for this setup we have two more coupling parameters K and K linking the KII and the K0. Our first experiment was to find the values of the coupling coefficients that made sense from an analog circuit point of view (all the values in the digital simulations are dimensionless multipliers). For dynamic modeling, the ratio of resistors is what is important for the simulations, so it is the quantity presented in Table 5. The feedback resistor is 100 kΩ.

Due to the F&H implementation of the dynamics and differences in the nonlinearity, the coupling coefficients of Table 2 did not provide the sought-after dynamics for the KII, so a tuning of parameters was necessary. We present in Table 5 a parameter set that produced the expected dynamical behavior. This reparametrization was anticipated, but according to our methodology, we now need to make sure that we redefine the DSP model to match the new dynamical behavior.

First, we measured the poles of the analog K0 to be 110/s and 360/s. The bias current for the nonlinearity was 40 nA, very close to the designed value. Next, we substituted in our digital implementation the gamma filter model by the F&H equation (16), and the original nonlinearity by the hyperbolic tangent function.

The waveform measured in the VLSI chip with a forcing input pulse is plotted in Fig. 27. Here, we plot the digital input and the measured response at the excitatory state (ac measurement). This figure should be compared with Fig. 15 obtained from the digital simulation.

In Fig. 28, we show the response between the K0 and the excitatory state of the KII set, for both the digital simulation and the waveforms collected from the chip.

In Fig. 29, we select the KII inhibitory state to show the expected out-of-phase response. We conclude that the new digital simulation correctly follows the KII VLSI implementation time evolution with forced inputs in phase, frequency, and relative magnitudes.

We conclude that the major sources of error responsible for some observable differences are the F&H implementation and the shape of the nonlinearity. The amplifiers were designed as a simple differential pair with emitter-degenerating, diode-connected transistors to increase the linear input range and diminish the value. It can be shown that for each stage in Fig. 23, when the switch is "on," the differential equation is [33]

(17)
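As an illustration of this redefinition step, the two measured K0 poles (110/s and 360/s) determine the linear part of the discretized model up to a gain. The sketch below is not the paper's F&H discretization of (16); it is a minimal pole-matching alternative (each continuous-time pole p mapped to z = e^{pT}), with the sampling period and the unit DC gain chosen arbitrarily for the example:

```python
import math

# Measured K0 poles from the analog chip, in rad/s (110/s and 360/s).
p1, p2 = -110.0, -360.0
T = 1e-3  # assumed sampling period; hypothetical, not from the paper

# Pole matching: map each continuous-time pole to z = exp(p*T).
z1, z2 = math.exp(p1 * T), math.exp(p2 * T)
a1, a2 = -(z1 + z2), z1 * z2  # denominator of H(z): 1 + a1*z^-1 + a2*z^-2
b0 = 1.0 + a1 + a2            # numerator chosen so that the DC gain is one

def k0_impulse_response(n_samples):
    """Impulse response of the discretized two-pole K0 stage."""
    y, y1, y2 = [], 0.0, 0.0
    for n in range(n_samples):
        x = 1.0 if n == 0 else 0.0
        yn = b0 * x - a1 * y1 - a2 * y2
        y2, y1 = y1, yn
        y.append(yn)
    return y

h = k0_impulse_response(5000)
# The response decays to zero and sums to the unit DC gain.
```

Whatever stable mapping is used, the discretized stage must reproduce this decaying second-order response before the nonlinearity is reinserted; that is the baseline against which the F&H model is tuned.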
The nonlinear hyperbolic tangent function will have a reshaping effect on the signals and will effectively limit the signals' magnitudes (the amplitude difference we see in the simulations of Figs. 28 and 29). For a better match between the simple and flexible digital implementation and the real chip implementation, this effect needs to be included in the model, or another, more linear F&H topology should be designed.

We are beginning to understand the effect of the amplitude nonlinearity on the overall dynamics, which is the missing link to go back and forth between the parameterization of the digital simulator and the analog implementation. Observing Table 5, the ratio of K /K is smaller than the scale factor for the other gains of Table 2. A more careful analysis of Fig. 21 reveals that the exact nonlinear function proposed by Freeman has a higher slope and consequently requires a smaller input for the same saturating output levels. This is the gain factor responsible for such a parameter change. In fact, the normalization factor needed to match the slopes of the two functions is about . Furthermore, the implemented nonlinearity has a negative saturating level of 25 mV and a positive saturating level of 80 mV. To scale up this function to the −1 and 5 saturating levels used in the digital simulations of Freeman's nonlinearity, we have to make in (15) , and the resulting normalizing factor becomes , where the numerator comes from the fact that Freeman's nonlinearity has a ratio of five between the saturating levels. This scaled-up function has the same saturating levels as the one plotted in Fig. 21, but with a different slope due to the different normalization of . If we now correct Freeman's nonlinearity slope with a normalizing factor of , and perform a digital simulation with the same conditions of Figs. 28 and 29, the same dynamics (including the oscillating frequency) are obtained, except for the obvious scaled-up amplitudes.

Other higher order effects, such as clock feedthrough [53] and extra delays due to the finite bandwidth of all the needed components, have not been accounted for in the digital representation. We conclude that our discretization methodology is a good prediction of how the mixed-signal analog system behaves dynamically, and it will be a tremendous asset to the parametrization of future, more complex analog implementations. But with the present silicon technology, these chips will still have relatively few components to attack the problems of representing real-world signals.
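The slope bookkeeping above can be checked numerically. In the sketch below, the measured differential-pair characteristic is modeled as a shifted tanh — an assumption for illustration, since (15) is not reproduced here — using the −25 mV/+80 mV chip levels and the −1/+5 simulation levels from the text; the slope ratio that falls out is the amplitude scale factor a normalizing gain must compensate:

```python
import math

# Saturating levels: measured on the chip vs. used in the digital simulations.
V_NEG, V_POS = -0.025, 0.080   # chip tanh: -25 mV and +80 mV
Q_NEG, Q_POS = -1.0, 5.0       # Freeman's nonlinearity: ratio of five between levels

def shifted_tanh(v, lo, hi, v0=0.025):
    """A tanh scaled and offset to saturate at lo (v -> -inf) and hi (v -> +inf).
    v0 is an assumed input scale (25 mV, thermal-voltage-like), not from the paper."""
    return 0.5 * (hi - lo) * math.tanh(v / v0) + 0.5 * (hi + lo)

def slope_at_zero(f, eps=1e-6):
    # Central-difference estimate of the small-signal gain.
    return (f(eps) - f(-eps)) / (2.0 * eps)

chip = lambda v: shifted_tanh(v, V_NEG, V_POS)
sim = lambda v: shifted_tanh(v, Q_NEG, Q_POS)

# Rescaling the output levels rescales the small-signal slope by the same
# factor, so the input gain must be divided by it to keep the dynamics
# (including the oscillation frequency) unchanged, as observed in the text.
scale = slope_at_zero(sim) / slope_at_zero(chip)  # (5+1)/(0.080+0.025), about 57
```

The point of the exercise is only that amplitude rescaling and slope normalization are coupled; the exact factor depends on the true shape of (15) and on Freeman's nonlinearity, not on the tanh stand-in used here.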
One very promising area of research is to seek implementations with nanotechnology components. They hold the promise of shrinking the size and of bringing new properties such as short-term memory, which is very difficult to implement in analog VLSI. Biology can provide the insight for new computer architectures, and to take full advantage of the physics and chemistry of macromolecules, the language of dynamics should be utilized to describe the components of the next generation of nanoinformation processing devices.

VI. CONCLUSION

The work described in this paper encompasses several disciplines, and it is a prototype of the issues that will be faced in creating animats possessing the exquisite information processing capabilities found in animals. We think that effectively transferring information from real-world signals to stable representations is a key link in this process, one that requires new insights, mixed analog/digital solutions, and a dynamic theory of information processing.

Translating signals into internal representations is not an easy task, as the size of the sensory cortices of mammals demonstrates. Understanding neurophysiology and computational neuroscience is the starting point, but the description language and the proper level of abstraction are very important to achieve a model that can be studied and implemented using the tools of engineering. Freeman's model of the olfactory system shows how different the information processing solutions found in biology are from the conventional engineering approaches. Pragmatically, we should extract from this model what is essential, and discount the requirements of the "wetware." Unfortunately, we are still far from being able to understand the model well enough to make this distinction. First and foremost, we lack a dynamic theory of information processing. Great strides and interest followed Hopfield's work on dynamic associative memories, but his model only utilizes the simplest of the singularities: the fixed point. Recently, analog processors have been built for image processing using dynamical processing elements [7], and chaos is being applied in communication systems [1] and signal processing [21]. Freeman [12], Haken [20], and many others are postulating nonconvergent dynamics for information processing, but we need to understand and quantify what is gained by increasing the order of the singularities, and the difficulties associated with chaotic attractors. The sensitivity to initial conditions of chaotic attractors brings dynamic memory but, on the other hand, requires exactly the same inputs to reach the same attractor, which is obviously impossible with the variability of real-world signals. The role of network noise here seems crucial to smooth out the attractor
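The sensitivity argument can be made concrete with any chaotic system. A minimal sketch — the logistic map, chosen for brevity and not part of Freeman's model — shows two nearly identical inputs ending up in different regions of the attractor:

```python
def logistic(x, r=4.0):
    """Logistic map in its fully chaotic regime (r = 4)."""
    return r * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-10  # two "inputs" differing by one part in 10^10
max_sep = 0.0
for _ in range(200):
    x, y = logistic(x), logistic(y)
    max_sep = max(max_sep, abs(x - y))

# The tiny initial difference is amplified exponentially, so the two
# trajectories become macroscopically different: recovering the same
# symbol from variable real-world inputs requires some mechanism, such
# as noise, to smooth the attractor landscape.
```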