Professional Documents
Culture Documents
Hesham Hamed Amin AbuElhassan - Spiking Neural Networks. Learning, Applications and Analysis (Diss.) - University of Aizu (2005)
Hesham Hamed Amin AbuElhassan - Spiking Neural Networks. Learning, Applications and Analysis (Diss.) - University of Aizu (2005)
A DISSERTATION
SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
4.4 Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.5 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Tables
4.1 XOR Input spike times (including the bias) and output times. . . . . . . 55
4.2 Input spike train classification, clustering, and final output times. . . . . 56
5.3 complex spike train classification and comparison with ANN Back-propagation
method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
1.1 (A) Neuron with spatio-temporal excitatory t i inputs and output tj . (B)
1.4 (A) Block diagram of sound localization application. (B) Block diagram
pattern input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.12 Output times of the ISI1 and IS2 neurons for different input spike trains
3.13 Output times of the ISI1 and IS2 neurons for different input spike trains
3.14 Output times of the ISI1 and IS2 neurons for different input spike trains
type1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.15 Output times of the ISI1 and IS2 neurons for different input spike trains
type2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.16 Output time differences produced by the ISI1 neuron for two different
3.17 Output potential the ISI1 neuron for two different input spike trains. . . 36
3.18 Output potential the ISI2 neuron for two different input spike trains. . . 37
3.19 Output time differences produced by the ISI1 neuron for two different
3.20 Output potential the ISI1 neuron for two different input spike trains (for
exponential W function). . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.21 Output potential the ISI2 neuron for two different input spike trains (for
exponential W function). . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.1 ISI neuron for the Learning Stage. . . . . . . . . . . . . . . . . . . . . . 42
4.6 (A) Multiple Input ISI Neuron. (B) Two-Input (plus the local reference
4.8 Spiking neural network for XOR function with spatio-temporal encoding
4.9 The original spike train for each class is spike train number 1. The other
5.1 Input x variable encoded into 8 spike times using gaussian receptive fields. 60
and τ = 5.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.3 Steel plate impact sound waveform and its corresponding 20 spike trains. 65
5.4 Two patterns with 10 spike trains each (the other patterns are not
6.4 (A) Spike train arrives at the latest possible times. (B) Spike train arrives
P1
6.6 Plots of n and two other approximations of it. . . . . . . . . . . . . . 83
6.7 ISI1 mapping neuron output time window vs. β 1 for different weight
6.8 ISI1 mapping neuron output separation times vs. β 1 for different weight
6.9 ISI2 mapping neuron output time window vs. β 2 for different weight
6.10 ISI2 mapping neuron output separation times vs. β 2 for different weight
6.13 (A) All spatio-temporal input spikes come at the latest possible times.
(B) All spatio-temporal input spikes come at the earliest possible times. 93
6.14 ISI1 learning neuron output time window vs. β 1 for different weight ω1
6.15 ISI2 learning neuron output time window vs. β 2 for different weight ω2
sA
n−1
> sB
n−1
; (C) and (D) tA
n
< tB
n
and sA
n−1
> sB
n−1
; (E) tA
n
= tB
n−1
and sA
n−1
> sB
n−1
; (F) tA
n
= tB
n−1
and sA
n−1
< sB
n−1
. . . . . . . . . . . . 109
Abstract
Artificial Neural Networks (ANNs) are considered as special circuits that have the
ability to approximate, classify, and perform other computations that can be considered
as emulating some of the functions performed in the biological brain. In this research,
focus has been placed on a relatively new field of ANNs called “Spiking Neural Networks
(SNNs)”. A biological basis exists for SNN. Research on artificial SNNs has gained
momentum in the last decade due to its ability to emulate biological neural network
Input data arrives into a spiking neural network as temporal data instead of values
within a time window (rate code) as was used in the classical ANNs. The input data into
such a neural network arrives in the shape of sequence of pulses or spikes in time, which
called “spike train” patterns. Another type of input data are single spikes in time which
are distributed spatially at the input terminals, and in this case called “spatio-temporal”
patterns.
For both types of input data, spatio-temporal and spike train patterns, there is a
need for a pre-processing method, a learning algorithm, and deep analysis for a practical
model is a highly postulated matter because research is scant regarding these points
Emphasis has been placed on finding a robust and practical SNN learning al-
gorithm and as well as an analysis of how various parameters affect the algorithm’s
behavior. A special pre-processing stage (the mapping stage) was used to convert spike
Another main point of this research was to achieve a learning organization that
can be practically implemented. Hence, learning schemes have been developed in a way
determine appropriate design parameter values to be used for the neuron threshold po-
tential value and appropriate time windows for the mapping and the learning stages.
The use of the resulting model in conjunction with devices (such as motors and
sensors) can help people with disabilities. This work may lead us a step closer to achiev-
ing robust and computationally efficient neural models that can be implemented as small
VLSI chips that can be implanted in the human body to replace damaged or malfunc-
tioning parts.
Chapter 1
Introduction
has been achieved especially in the past couple of decades. An ANN is a computational
model whose functions are inspired by fundamental principles of real biological neural
networks [21], [45]. Evidence exists that biological neurons communicate with each
other through short pulses, or spikes, which are emitted at varying rates at different
times [1], [17], [38]. In the classical ANNs, firing rate (rate code) has been considered
for a long time as the relevant information being exchanged between neurons as inputs
and outputs [38]. However, in the classical ANNs the input and output firing rates are
represented as a binary or analog value. The firing rate represents the number of active
Research which attempts to understand and utilize the neural code with which
biological neurons communicate in the temporal domain is relatively new [1]. Information
in the temporal domain consists of the neuron spike times relative to a local reference
time. The rate code does not consider spike times; hence, valuable information contained
succinctly in the spike times is not utilized. In addition, in the context of fast information
processing there is not always sufficient time to sample an average firing rate. Hence,
it seems very promising to extend the computational possibilities of ANNs which utilize
spiking neurons, which use spike time information [14], [31], [38].
2
SNN can be considered as a third generation of ANN after the perceptron neuron
and the sigmoidal function generation neuron [28], [31]. When a spiking neuron fires,
function. A spike in a spiking neuron is considered to be a pulse with unit amplitude and
very short duration; thus, it simply acts as a triggering signal. Information in a SNN
is contained in the timing of individual input spikes. This gives SNNs the capability
of exploiting time as a resource for coding and computation in a more efficient manner
Synaptic spike inputs with only one spike per synapse during a given time window
are called spatio-temporal inputs. A spike train synaptic input consists of a sequence of
spikes with various inter-spike times which occur during a given time window. Examples
The spike times within a spike train has a much larger encoding space than
the simple rate code used in classical neural networks as was shown in [29], [30]. A
Substantial evidence exits which indicates that the timing sequence of neuronal
spikes (spike trains) is utilized in biological neural signal processing [1], [38].
ing paradigms suitable for fast hardware implementations, e.g. VLSI neural network
chips [34]. In addition, this research may shed some light on the neural code used by
biological neurons. So far not much is known about the computational mechanisms used
3
t1
(A)
w1
ti wi
tj
wn
tn
(B)
wi
tn ti t1 tj
Fig. 1.1. (A) Neuron with spatio-temporal excitatory t i inputs and output tj . (B)
Neuron with a spike train input and output t j .
by biological neurons to process the spike timing information. Overview of SNN models
Learning algorithms for spiking neural networks proposed in the past by others
have employed spatio-temporal input patterns and used synaptic weights and delays as
well as multiple sub-synapses (in some of these models) for supervised learning [3], back-
(SOM) learning [40], and radial basis function (RBF) based learning [39]. In [27], it has
been shown that the timing of spikes can be used to simulate sigmoidal gates with SNNs.
4
In [24], [25] a novel learning method which uses the neuron input spike times to
trigger a set of currents with a variety of fixed decay rates is proposed; however, the net-
work used a large number of neurons and learning convergence took many iterations. The
SNN learning algorithm proposed in this research is a one step learning algorithm and
utilizes only synaptic weights for learning; hence, the proposed SNN learning algorithm
In [32], [33] learning of spike train patterns has been investigated by mapping the
input spike trains into another set of spike trains using a highly complex spiking neural
network (liquid state machine (LSM)). The output of the mapping stage used in [32], [33]
was fed into a read out circuit that was trained to distinguish inputs. The drawbacks
of spiking neural network model used by [32], [33] are: a) a highly complex network
structure which employed many neurons; b) the use of recurrent connection which make
analysis very complex; c) the use of a large number of synapses which required various
parameters; and d) the relatively long time needed to decode and process input spike
trains.
1.2 Objectives
The objective of this research was to advance the current state of knowledge in
the field of spiking neural networks. New research results were obtained in the following
areas:
• A learning algorithm for an SNN that employs spike train patterns as inputs;
5
Pre-computing Parameters
ch. 6
spike trains )
Input
Pre-processing
Signal/
Stimulus Mapping Learning
ch. 5
Mapping-Learning Organziation
mainly proposed to simplify the dilemma of input spike trains learning. The map-
ping stage transforms input spike trains into spatio-temporal outputs which are
employed in the proceeding learning stage for learning irregular spike trains with
different number of spikes as shown in Figure 1.3. The main emphasis in the
mapping stage is the one-to-one relationship between inputs and outputs which
as a universal real time pre-processing for any input spike trains. Emphasis has
been placed in this research on the applicability of the proposed schemes; thus,
the mapping model use only synaptic weights modification to achieve a one-to-one
input/output relationship.
The use of half of the number of the proposed neurons is discussed with the re-
strictions made to use it in this way. Applications are used to show the usefulness
1.4(A), and mapping sound spike trains into spatio-temporal patterns as shown in
Figure 1.4(B).
Chapter 4 : The proposed learning stage is described. The main challenge in this part
of thesis is the argument that practical design is needed. Not many SNN learning
used to achieve the learning part to give synchronization of input spikes moreover
7
ISI1 ISI1
CD
Input stimulus Signal
ISI2 ISI2
pre-
processing
stage
n spike
2n spikes
trains
MAPPING LEARNING Output
STAGE STAGE neuron
Sound
Sound
Source
localization
(mics)
(A)
Sound
Source Pre-
Sound
processing
mapping
stage
(B)
Fig. 1.4. (A) Block diagram of sound localization application. (B) Block diagram of
sound spike train mapping.
8
the use of synaptic weights and sub-synapse connections. However, the design of
the delay parameters is not practically an easy matter, as it is needed to store the
input signals for some time and release them later to be delayed; thus, large circuit
more freedom in the learning, while it obviously needs more computations than a
In this research, the learning units can be done in simple computations using the
In the learning stage, like in the mapping stage, the one-to-one relationship between
inputs and outputs in the learning units to give a unique center of each learning
unit (cluster). The same neurons building blocks of the mapping stage are used in
the learning stage but with different function. Each synaptic weight was modified
by both the mapping and learning rules using locally available information.
Chapter 5 : Simulations and real-world applications are used to demonstrate the ro-
with another learning method of classical ANNs. Last application is the classifi-
cation of complex input patterns; each consists of 10 spike trains generated with
different rate for each pattern. Comparison with the well-known back-propagation
9
Signal
pre-
processing Mapping Learning
(Onset, stage stage
peak,
and offset)
Chapter 6 : Further analyses and design criteria are introduced for both the mapping
and the learning stage parameters. This is done to introduce a clear idea about
how the parameters of both stages can be chosen. Also, the time needed to give
the correct result of each stage as well as the overall model is determined.
It should be emphasized that this thesis is not about biology but about possibil-
ities of computing with spiking neurons which are inspired by biology. It is not claimed
10
t1
t2
x Encoding Learning y
tn
that any of the mechanisms described here do occur in exactly the same way in bio-
logical systems. However, a thorough understanding of such networks, which are rather
simplified in comparison to real biological networks, may be helpful for detecting and un-
Chapter 2
Many models have been proposed to represent a spiking neuron. The most biolog-
ically accurate model uses a conductance-based neuron with many complex details [31].
This model can reproduce the behavior of a biological neuron with a high degree of ac-
curacy. Unfortunately, such a model is complex and analytically difficult to use to model
the dynamic behavior of a spiking neuron. For this reason, simpler spiking neuron mod-
els have often been used. In one of these simple models spike outputs are generated
whenever the neuron potential crosses some pre-determined threshold ϑ from below [20].
The spiking neuron model employed in this research is based on the Spike Re-
sponse Model (SRM) [20]. Input spikes come at times {t · · · t · · · t } into the input
1 n n
The neuron outputs a spike when the internal neuron membrane potential x (t)
j
The threshold potential ϑ is assumed to be constant for the neuron. After the
firing of a spiking neuron, it does not respond to any input spikes for a limited period
of time which is called “the neuron refractory time”. However, the neuron refractory
time is not taken into account in this research. The relationship between input spike
12
times and the internal neuron potential (or Post Synaptic Potential (PSP)) x (t) can be
j
described as follows:
n
X
x (t) = W .α(t − t ) (2.2)
j i i
i=1
i represents the ith synapse, W is the ith synaptic weight variable which can change the
i
amplitude of the neuron potential x (t), t is the ith input spike arrival-time, and α(t)
j i
t 1− τt
α(t) = e (2.3)
τ
0.9
0.8
0.7
0.6
Voltage
0.5
0.4
0.3
0.2
0.1
0
0 5 10 15 20 25
Time
t
α(t) = .e (2.4)
τ
e can be included as a constant in τ , i.e, Equation 2.4 can be rewritten for simplicity as
follows:
t
α(t) = (2.5)
τ
it then follows that Equation 2.2, the internal neuron potential, can be re-written as:
n
tX
x (t) = W .u(t − t ); tτ (2.6)
j τ i i
i=1
1 if t ≥ 0
u(t) = (2.7)
0 otherwise
15
Chapter 3
encephalogram signals, or time varying analog sensor data when converted into spike
on being able to produce a one-to-one correspondence between input spike trains and
output spike firing times. In a spiking neuron, inputs from different synapse inputs
cause the post synaptic potential (PSP) to increase at some rate which is dependent on
the inter-spike intervals (ISI). By selecting an appropriate set of synaptic weights for a
neuron, a particular spike train or a set of spike trains which belong to the same class
can be distinguished by its unique neuron output firing time. A class C of an input
pattern set may be represented by at least one learning unit cluster Cl or maximum of
m learning units, where m is the number of input pattern samples in the class. The
The mapping stage, which is described in details in this chapter, is used for
transforming the input spike train(s), large or spatio-temporal patterns, into simple
unique spatio-temporal output patterns [4], [5], [6]. The output spatio-temporal patterns
are then used as inputs for the next stage which performs the learning task by clustering
16
inputs into various classes (as described in chapter 4). In both stages, a one-to-one
mapping between inputs and outputs is necessary for processing information (The proof
for the one-to-one mapping of the mapping scheme is shown in Appendix A).
In the next sections, a detailed description of the mapping stage will be given.
MU LU CD
MU LU CD
MU LU CD
The proposed mapping scheme utilizes the absolute arrival times of input spikes.
This scheme can be used for multiple synapse spatio-temporal inputs neuron and a single
17
synapse spike train input neuron. The description of the mapping scheme will be for a
multiple synapse spatio-temporal input neuron. For a single synapse spike train input
neuron, the mapping scheme is similar and can be done in a simpler manner because
the order of the spike arrivals is implicit in a single spike train. The mapping model is
t1
t2 tout1
ISI1
tn
tout2
ISI2
tout3
ROC
The first part of the mapping model (Figure 3.3) utilizes the Inter-Spike Intervals
(ISI) between input spike times for mapping the inputs. An inter-spike interval is the
18
time between two successive input spikes which belong to some pattern and arrive at
EPSP level
Neuron
t1= tr
w1
t2 w2
Input spikes
tout1 (tout2)
wn
tn
Input spikes that make up an input pattern vector, arrive at different times at
different input synapses as shown in Figure 3.3. The input spike times carry information
about the input pattern. The input pattern spikes arrive at times {t , ...., t }, with
1 n
some minimum time resolution 4t, into the input synapses. The ISI1 neuron (Figure
3.3) consists of two units: a) a neuron with excitatory inputs; and b) an Excitatory Post
Synaptic Potential (EPSP) unit. The EPSP unit updates the dynamic weight variable
W in Equation 2.6 after every synaptic input according to the following equation:
i
where β is a small constant and i refers to the temporal order of the input spikes, not the
spatial number of an input synapse, and ω is an initial weight value. Equation 3.1 shows
that the value of the dynamic weight variable W is proportional to the input spike time
i
t.
i
The ISI2 neuron has a construction and function similar to the ISI1 neuron (Figure 3.3)
β
W = , W =ω , i = 2, 3, ..., n (3.2)
i t 1
i
The ISI1 (ISI2) neuron fires output spikes at certain times which can be utilized
to distinguish patterns whose order of input spikes are the same but for which the actual
spike times may be different. For instance, two patterns P and P with spike times
A B
A A A A B B B B
{t = 1, t = 2, t = 3, t = 4} and {t = 2, t = 4, t = 1, t = 3} will give the
1 2 3 4 1 2 3 4
same output spike time t (t ) because the inter-spike intervals are the same for
out1 out2
these pair of patterns. In this case and by substituting in Equation 2.6, it can be shown
that:
ISI1 t t
x (t) = β(1 + 2 + 3 + 4) = 10β.
j τ τ
ISI2 t 1 1 1 25 t
x (t) = β(1 + + + ) = β.
j τ 2 3 4 12 τ
The output values are equal for both patterns P and P for each ISI1 and ISI2
A B
neurons. The synaptic weight values may be initially set to be identical (W = ω),
i
20
as shown in Figure 3.3. Using a combination of the ISI1 and ISI2 neurons produces
proof that using the two neurons ISI1 and ISI2 together gives a one-to-one relationship
Inhibitory
-
Neuron
t1
w
Input spikes
t2 2w
tout3
2n-1 w
tn
As these two neurons can distinguish different patterns if and only if their input
spikes come in the same order but with various different arrival times (as shown in the
example above), it is necessary to have another part which distinguishes the order of
Order of arrival means which input spike arrives first, which one is the second
Rank Order Coding (ROC) [36] is a suitable approach to distinguish the order of
the input spike arrivals. The ROC neuron is composed of two units, as shown in Figure
3.4. One of these units utilizes an excitatory neuron and the other unit an inhibitory-
like neuron with a special function for attenuating the synapse weights according to the
order of arrival of the spatio-temporal inputs. The weight values must be distinct in this
To distinguish the order of arrival of the spikes, the inhibitory neuron in the ROC
neuron acts to progressively inhibit the effect of later arriving spikes. The first input
spike has the strongest effect, the second input gives the next strongest effect and so on
Consider the case in Figure 3.5 where four inputs arrive into the ROC unit at
four different times. Assume that t < t < t < t . t is the time for the first spike, t
1 2 3 4 1 2
is the time for the second spike, and so on until the last spike at time t . Initially, the
4
effect of the inhibitory neuron is null and so each input is maximally effective. However,
every time one of the inputs arrives, the inhibitory neuron attenuates the effectiveness
of the weights (for example by 50%). t can be calculated for input patterns P and
out3 A
P as follows:
B
A 0 1 2 3
t = (1 ∗ 0.5 ) + (3 ∗ 0.5 ) + (5 ∗ 0.5 ) + (10 ∗ 0.5 ) = 5.0
out3
B 0 1 2 3
t = (5 ∗ 0.5 ) + (1 ∗ 0.5 ) + (10 ∗ 0.5 ) + (3 ∗ 0.5 ) = 8.375
out3
22
Inhibitory
-
Neuron
w=1 D
t2
w=3 C
t4
tout3
t1 w=5 B
t3 w=10 A
Any other input pattern with a different order will produce a different output
time value.
Modifications to the original ROC inhibition sequence and synaptic weights pro-
posed in [36] are necessary to account for cases in which two or more input pattern spikes
arrive at the same time. In such a case, the inhibitory neuron must be able to recognize
that two or more inputs have arrived simultaneously and compensate accordingly.
For example, the ROC can work with the following inhibition sequence [7.3, 3.0,
1.7, 0.5] together with the synaptic weights shown in Figure 3.5. For all possible order
combinations, including the cases when the four inputs arrive at different times, a one-
to-one coding relationship between the inputs and outputs can be obtained with these
following algorithm was used: a) determine a set of synaptic weights (within some ap-
propriate magnitude range as shown in Figure 3.5) such that any sub-set sum of the
synaptic weights is not equal to any other sub-set sum of the remaining synaptic weights.
b) Determine an appropriate set of inhibition sequence values for all possible input order
Real-time mapping of inputs is possible when the ISI1, ISI2 and ROC neurons
produced for all input patterns. In the ISI1 (ISI2) neuron, all output spike times t
out1
max
(t ) must be larger than the last input spike time t = max{t , i = 1, 2, ..., n} of all
out2 i
the input patterns to be mapped where n is the number of input spikes defining the input
pattern. This means that the patterns to be learned must be known a priori in order
to know the time range of the input spikes for all the input patterns. Furthermore, all
input spikes have equal importance, so all spikes representing a pattern must be utilized
The objectives of the ISI1 (or ISI2) neuron mapping scheme are:
1. Each input pattern l can be represented by a vector of input spike times P [t ....t ....t ],
l 1 i n
+
where t ∈ R , n is the number of input spikes defining the input pattern.
i
24
map
2. Calculate the threshold value ϑ (refer to Chapter 6) so that the output firing
1
max map max
time will be larger than t (T >t ).
1
map max
The same scheme is used for the ISI2 neuron in a similar way to get T >t .
2
Spike train mapping can be done in the same manner as the mapping of a spatio-
Since only one synapse’s input is considered, spike train is being the order of the spikes
within a spike train do not need to be considered. Hence, the ROC neuron is not needed.
Ref. time=0
Input Spike Train
tout1
tn t4 t3 t2 t1
ISI1
t1
t2
t3 tout2
t4
ISI2
Fig. 3.6. Spike train input and its equivalent representation as a spatio-temporal pattern
input
25
t is the first spike which arrive into the neuron, as shown in Figure 3.6. However,
1
t spike time is used as a reference time for the next coming spikes in the same input
1
spike train.
Sound localization was used as an interesting and useful application of the pro-
posed mapping scheme. The azimuth and elevation angles were to be deduced from
sound input data. Sound localization was thought to be an appropriate application be-
cause it can utilize the Inter-aural Time Differences (ITD) defined to be the difference
between the arrival times of a sound signal to each ear. In the proposed mapping scheme,
the sound signal itself can be used directly without complex modifications such as those
needed in [22] and the Head Related Transfer Function (HRTF) approach [18] or us-
ing delays in multiple coincidence detector [2], [15], [26], [41]. The reception time of a
sound at a sensor was determined by the first incoming audio signal which exceeded a
pre-determined sound level. Sensors representing right (R), left (L), front (F), back (B),
above (AB) and below (BL) were placed in their appropriate positions as shown Figure
3.7.
Depending on the sound source location with respect to the six sensors, the spike
arrival time will be different for each of the sensors. The mapping unit, shown in Figure
26
AB Sound
Source
L
F
B R
BL
3.8, generates a set of output spikes for each input pattern. Echo effects were not con-
unit can be used to determine the sound source location. The output spike times t
out1
and t increased (or decreased) within the appropriate output spike time firing range.
out2
tout3 time did not change as long as the order of input spike arrival times was unchanged;
i.e., tout3 was not affected by the actual input spike arrival times within the same region
with respect to the sensors. As a result, the three output times {t ,t ,t } could
out1 out2 out3
be used to determine the exact position of a sound source. These outputs can be used
as inputs to a learning stage to get only one output representing the exact sound source
location.
The proposed mapping scheme for sound localization in this research has some
advantages over other sound localization approaches such as in [18], [22]. For example;
27
R
L
tout1
F Mapping tout2
B Unit
tout3
AB
BL
(a) the use of the first incoming audio signal which exceeds a pre-determined sound level
instead of the use of the analysis of the sound signal itself; and (b) the azimuth and
elevation angles can be deduced directly from the input audio signal time differences,
thus there is no need to use the azimuth and elevation estimator circuits used in [18].
(c) Hebbian learning with integrate and fire neurons (IFNs) was used to achieve the
sound source localization [22]. Multiple synapses with delays were used to detect the
phase shift between signals resulting from analyzing the input sound with several pre-
processing stages.
Sound localization experiments were also carried out using real sounds gathered
using microphones placed at four different azimuth positions set at various distances
from each other in a circle as shown in Figure 3.9. The reception time of a sound at
a microphone was determined by the first incoming audio signal which exceeded a pre-
determined sound level. The distance between microphones ranged from 30 cm to 150
28
cm as shown in Figure 3.9. The sound localization experiments using real sounds were
carried out in two different environments: an anechoic chamber and a regular room filled
with furniture, different kinds of equipment, and with walls which reflect sound. In both
environments the recognition was working well because the model depends on sensing
the beginning of an input signal and not the subsequent sound signals.
mic3
m
0c
15
mic4 30~ mic2
mic1
For testing the mapping unit’s ability to discriminate among various spike trains,
1
a publicly accessible dataset consisting of input sound files for various words spoken by
1 http://neuron.princeton.edu/∼moment/Organism
29
different speakers was used. These sound files (in wav-format) represented the numbers
0,1,...,9 spoken by various speakers. The wav-format files were transformed into 40-
channel spatio-temporal spike events using a bank of frequency filters [25]; the 40-channel
spatio-temporal spikes were then used in a single spike train. The output times of the
mapping unit for some of the spike trains representing the spoken words “zero”, “one”,
and “two” spoken by 5 different speakers are shown in Figure 3.10. From this plot,
it can be seen that the mapping unit output times for these spoken words appear in
separate groupings except for the word “one”. A non-linear classifier can be used after
the mapping unit for better clustering of the mapping unit outputs.
4
"Zero"
"One"
3.5 "Two"
Output time of ISI2 unit
2.5
1.5
0.5
1.95 2 2.05 2.1 2.15 2.2 2.25 2.3 2.35 2.4 2.45
Output time of ISI1 unit
Fig. 3.10. Output times of the ISI1 and IS2 neurons for different input spike trains
representing the words ”Zero”, ”One”, and ”Two”.
30
In this experiment, only the pre-processing of speech sounds has been tried using
input spike trains. This was accomplished by mapping an input spike train using the
proposed mapping mode. To complete the classification of the inputs, the outputs of the
Randomly generated spike trains were used as another example to verify the
practicality of the proposed mapping scheme. Five spike trains were generated using a
Poisson distribution for the inter-spike times and assuming a mean firing rate of 15 Hz.
#5
#4
Spike Train Number
#3
#2
#1
0 0.1 0.2 0.3 0.4 0.5 0.6
Time
Noise was added to each of these five spike trains to check the mapping robustness.
For these particular five spike trains, the outputs for each original spike train as well as
its noisy versions can be clustered so that the five spike trains can be distinguished as
shown in Figure 3.12 (will be referred here as noise type1). For other sets of spike trains,
non-linearly separable clusters may occur. However, when different type of noise with
different addition of spike times was added to the original spike trains (will be referred
here as noise type2), the output may change to non-linearly separable as shown in Figure
3.13.
It could be shown from Figures 3.12 and 3.13 that the outputs for some spike
trains are either located very close to each other or somewhat spaced apart depending
on the amount of noise introduced into the original spike train. Synapse delays can be
used in the next stage for clustering of outputs which belong to the same spike train.
ISI1 neuron:
t
W = β ∗ e i, W =ω , i = 2, 3, ..., n (3.3)
i 1
ISI2 neuron:
β
W = t , W =ω , i = 2, 3, ..., n (3.4)
i ei 1
32
3.5
train 1
train 2
3
train 3
Output time of ISI2 unit
train 4
train 5
2.5
1.5
0.5
1.5 2 2.5 3 3.5 4
Fig. 3.12. Output times of the ISI1 and IS2 neurons for different input spike trains
representing noisy spike trains (noise type1)
It is shown in Figures 3.14 and 3.15 for the two types of added noise, type1 and
type2 respectively, with using the exponential dynamic weight variable W parameter as
i
in Equations 3.3 and 3.4. It is clear form these figures that better separation properties
could be done. However, these analysis is a case dependent and could be different for
3.5
train 1
train 2
3
Output time of ISI2 unit
train 3
train 4
train 5
2.5
1.5
0.5
1.5 2 2.5 3 3.5 4
Fig. 3.13. Output times of the ISI1 and IS2 neurons for different input spike trains
representing noisy spike trains(noise type2)
The use of only one of the mapping neurons, either ISI1 or ISI2 instead of both,
is possible if the threshold voltage ϑ of the neuron can be specified a priori. The neuron
firing threshold voltage ϑ has to be chosen high enough so that any two spike trains (to
In Figure 3.16, a plot of the output time differences produced by the ISI1 neuron
vs. neuron threshold voltage ϑ for two different spike trains using the dynamic weight
variable W in Equation 3.1 is shown. An output time difference of zero means that
i
34
3 train1
train2
train3
train4
2.8
train5
Output time of ISI2 unit
2.6
2.4
2.2
1.8
2.4 2.6 2.8 3 3.2 3.4 3.6 3.8 4
Fig. 3.14. Output times of the ISI1 and IS2 neurons for different input spike trains
representing noisy spike trains (for exponential W function and noise type1)
the two spike trains have produced an ISI1 output at identical times. By choosing the
time difference to clearly distinguish a spike train. The choice of the neuron threshold
voltage ϑ has to be made in such a way that a) mapping time is not overly long and b)
The outputs of the ISI1 and ISI2 neurons for two different input spike trains using
the dynamic weight variable W in Equations 3.1 and 3.2, are shown in Figures 3.18 and
i
3.17, respectively.
35
3 train1
train2
train3
train4
2.8
Output time of ISI2 unit
train5
2.6
2.4
2.2
1.8
2.5 3 3.5 4
Fig. 3.15. Output times of the ISI1 and IS2 neurons for different input spike trains
representing noisy spike trains (for exponential W function and noise type2)
For this particular pair of spike train inputs, it can be seen from Figure 3.18 that
using the ISI2 neuron can easily distinguish them because the two spike trains produce
diverging outputs. If the outputs of the ISI1 neuron are used (Figure 3.17), the time
differences are not as large (as those produced by the ISI1 neuron) and hence the use of
the ISI1 output by itself may be inappropriate. However, ISI1 neuron may be used due
0.35
0.3
T ime Difference
0.25
0.2
0.15
0.1
0.05
0
0 1 2 3 4 5 6 7 8 9 10
Thres hold ϑ
Fig. 3.16. Output time differences produced by the ISI1 neuron for two different input
spike trains.
12
10
8
Voltage
0
0 1 2 3 4 5 6 7 8 9 10
Time
Fig. 3.17. Output potential the ISI1 neuron for two different input spike trains.
37
Voltage 5
0
0 1 2 3 4 5 6 7 8 9 10
Time
Fig. 3.18. Output potential the ISI2 neuron for two different input spike trains.
Figure 3.19 shows the relationship between output time differences and neuron
threshold voltage ϑ when the ISI1 neuron is used with the exponential function given in
Equation 3.3. Comparing the plots shown in Figures 3.16 and 3.19, it can be observed
that the output time differences are larger for small threshold voltages ϑ in the case
and t ' 0.1 (Figure 3.19) for the linear (original) and exponential W functions
dif f
respectively. This implies that the use of an exponential W function can lead to better
The neuron membrane potential plots for the ISI1 and ISI2 neurons utilizing the
exponential W function are shown in Figures 3.20 and 3.21, respectively. The exponential
38
W function can better differentiate input spike trains when ISI1 neuron is used as can
0.16
0.14
0.12
T ime Difference
0.1
0.08
0.06
0.04
0.02
0
0 1 2 3 4 5 6 7 8 9 10
Thres hold ϑ
Fig. 3.19. Output time differences produced by the ISI1 neuron for two different input
spike trains (for exponential W function).
39
16
14
12
10
Voltage
0
0 1 2 3 4 5 6 7 8 9 10
Time
Fig. 3.20. Output potential the ISI1 neuron for two different input spike trains (for
exponential W function).
40
1.4
1.2
1
Voltage
0.8
0.6
0.4
0.2
0
0 1 2 3 4 5 6 7 8 9 10
Time
Fig. 3.21. Output potential the ISI2 neuron for two different input spike trains (for
exponential W function).
41
Chapter 4
The spatio-temporal patterns generated by the ISI1 and ISI2 neurons in the map-
ping stage are used as inputs for the learning stage where a supervised learning method
is used to classify input patterns [7], [8]. Clustering of input patterns which belong to
the same class is achieved by setting the synaptic weight vector for a neuron in a manner
that makes the output fire at approximately the same time for more than one input spike
The mapping stage ISI1 and ISI2 neurons are also used in the learning stage but
with multiple synapse inputs as shown in Figure 4.1. The learning unit (LU) consists of
the ISI1 and ISI2 neuron is shown in Figure 4.2. Multiple synaptic inputs are needed
because the mapping stage produces several spatio-temporal outputs as shown in Figure
β
W = , W = const, , i = 2, 3, ..., n (4.2)
i t 1
i
EPSP level
Neuron
t1= tr
w1
t2 w2
Input spikes
tout1 (tout2)
wn
tn
The t reference time shown in Figure 4.3 input is used as a local reference time for
r
fires when the outputs of a learning unit are output at nearly coincident times.
1. Choose an input pattern vector (say P ) at random from the set of P = (P , P , ....)
A l A B
pattern vectors to be used for the learning phase. Each pattern P consists of the
l
43
tn Final output
tout2
ISI2
learning Coincidence
unit detection
neuron
pattern P is used to assign weights to the ISI neurons in a learning unit. This
A
learning unit will represent the class to which pattern P belongs. Once the weights
A
have been assigned, they are temporarily fixed. The weights selected for the ini-
tial input pattern works as a center vector which can later be modified slightly to
accommodate more than one input pattern; in this manner, similar input patterns
can then be clustered together and fewer learning units will be needed.
2. Another input pattern (say P ) belonging to the same class as pattern P chosen
B A
in step 1 above is selected. This new pattern is applied to the learning unit for P
A
44
MU LU CD
MU LU CD
MU LU CD
and the output of the ISI neurons times for P {t ,t } are compared against
B out1 out2
∗ ∗
the output times for P {t ,t }. This new pattern (P ) is assigned to the
A out1 out2 B
learning unit (e.g. learning unit for P ) with which each of the output times differ
A
by less than .
∗ ∗
|t −t |≤ε and |t −t |≤ε (4.3)
out1 out1 out2 out2
ε is a small error value determined empirically. If the error is larger than ε for any
one of the error conditions in Equation 4.3 , a new learning unit is added as is done
in incremental learning.
45
A one-to-one relationship between inputs and outputs for each of the LUs must
be achieved (i.e. each LU must output a spike at a time which is different from the
output times corresponding to other inputs). For example, assuming that the weights
for a learning unit, composed of the ISI1 and the ISI2 neurons, have been fixed for
A A A A
the spatio-temporal pattern P = {t , t , ......., t , t }, if a different pattern P =
A 1 2 n−1 n B
B B B B
{t , t , ......., t , t } is input into the learning unit for P (LUA), the output time
1 2 n−1 n A
t and/or t for P must be different from those output by the learning unit
out1 out2 A
assigned for P (LUB). However, since LUA weights have been fixed, it may be possible
B
for P to make both LUA and LUB fire simultaneously. It should be noted that such
B
cases occur with very low probability. In cases when the problem does occur, there
are two possible solutions: 1) the winner-take-all scheme and 2) the two-input sub-LU
scheme.
In case P simultaneously activates both LUA and LUB, the synaptic weight
B
B B B
values (W , W , ...., W ) for LUB can be modified so that LUB can fire an output
1 2 n
earlier than LUA and inhibit LUA from firing. This winner-take-all scheme is shown in
figure 4.4.
46
Learning
units
W1A tout1A
W2A LUA CD
t1B
WnA tout2A Fast
t2B inhibitory
W1B
connection
tout1B
W2B
tnB LUB CD
WnB
tout2B
In this approach, a learning unit (LU) which fires for the wrong input pattern (in
addition to firing for the correct input pattern(s)) can be designed so that the one-to-
sub-learning units using two-input ISI neurons within each sub-learning unit as shown
in Figure 4.5. Each sub-learning unit (e.g LUA1) takes two inputs from one mapping
unit (MU) in the mapping stage. For example, outputs t and t from the mapping unit
1 2
are input into the sub-learning unit LUA1 shown in Figure 4.5. The sub-learning unit
ISI neurons perform the same functions as the ISI neurons used in the LUs described in
section 4.1. However, in this case there can be up to n sub-learning units in a learning
unit (LU). n represents the number of input spike trains fed into the mapping stage. Each
47
sub-learning unit consists of one two-input ISI1 neuron, one two-input ISI2 neuron, and
one coincidence generation (CG) unit. The t reference time input shown in Figure
r
4.5 is the same local reference signal which is used in the combined mapping-learning
organization as shown in Figure 4.3. The coincidence generation (CG) neuron in a sub-
learning unit performs the function of aligning the output spike times of the ISI1 and
ISI2 neurons so that they are coincident in time. When all CG neurons in an LU fire
Reference
Inputs from the mapping stage
time input tr
t1 ISI1
CG1
t2 ISI2
LUA1
t3 ISI1
CG2
CD
t4 ISI2
LUA2
t2n-1 ISI1
t2n CGn
ISI2
LUAn
learning
unit (LU)
achieved by using the sub-learning units described above. The one-to-one input/output
relationship for each sub-learning unit guarantees that a sub-learning unit which had its
synaptic weight set for a given pattern (e.g. pattern P ) does not respond with equal
A
output times for a different pattern (e.g. pattern P ). The one-to-one input/output
B
Assume that patterns P and P produce the same ISI output times t (ISI1
A B out1
output time) and t (ISI2 output time) for the sub-learning unit for pattern P .
out2 A
t3 W3
tn Wn
Fig. 4.6. (A) Multiple Input ISI Neuron. (B) Two-Input (plus the local reference input)
Sub-Learning Unit’s ISI Neuron.
For the multiple input ISI neuron case (shown in Figure 4.6(A)), the internal ISI
t n
A A
W .t + out
X
W .u(t − t ) =
1 r τ i i
i=2
t n
A B
W .t + out
X
W .u(t − t ) (4.4)
1 r τ
i=2 i i
where W is the weight assigned for the local reference time t and t is the firing time
1 r out
at the output of the ISI neuron. Equation 4.4 can be rewritten as:
A A A A
W (t − t ) · · · + W (t −t )=
2 out 2 n out n
A B A A
W (t − t ) · · · + W (t −t ) (4.5)
2 out 2 n out n
A A A A A A
W .t + W .t + · · · + W .t =
2 2 3 3 n n
A B A B A A
W .t + W .t + · · · + W .t (4.6)
2 2 3 3 n n
For a sub-learning unit, the ISI neuron has only two inputs (plus t as shown in
r
A A A A A B A B
W .t + W .t = W .t + W .t . (4.7)
2 2 3 3 2 2 3 3
50
For the sub-learning unit ISI1 neuron W = β.t and Equation 4.7 can be rewritten
i i
as:
A 2 A 2 A B A B
(t ) + (t ) = t .t + t .t . (4.8)
2 3 2 2 3 3
and for the sub-learning unit ISI2 neuron W = tβ and Equation 4.7 can be
i i
rewritten as:
B B
t t
1+1= 2 + 3 . (4.9)
tA tA
2 3
B A B A
The simultaneous solution of Equations 4.8 and 4.9 gives t =t and t =t
2 2 3 3
which can happen only if P = P . Thus, the uniqueness or the one-to-one in-
A B
put/output relationship for a sub-learning unit has been proven. When the number
of spatio-temporal inputs fed into an ISI neuron exceeds two, a unique solution cannot
be found.
It should be noted that the number of sub-learning units within a learning unit do
not have to equal the number of the spike train inputs fed into the mapping stage. What
used in order to have a one-to-one input/output relationship. Thus, there may be cases
in which only a combination of one sub-learning unit and one multiple input learning
unit is sufficient to guarantee the one-to-one input/output relationship for the pattern
In order to have only one of the LUs in the learning stage fire, the t (ISI1)
out1
and t (ISI2) times can be made coincident by changing the ISI1 and/or the ISI2
out2
input synaptic weight values by increasing the β value of the ISI1 or the ISI2 neuron in
order to adjust its output spike time as shown in Figure 4.2. The coincidence detection
neuron, shown in Figure 4.2, uses the exponential response function (Equation 2.3) of a
spiking neuron and not the linear response function (Equation 2.5) because a fast decay
the relative time difference |t −t | may not be. In other words, two different
out1 out2
learning units can fire at different output times, t and t , but the relative time
out1 out2
difference |t −t | may be the same. Thus, the local input reference time t is
out1 out2 r
necessary to differentiate these output as shown in Figure 4.2. This reference time t
r
is the time when all the ISI neurons in the learning unit (as well as the mapping unit)
begin to operate.
4.4 Clustering
The outputs of the LUs fire at certain times according to the synaptic weights
which have been assigned during the learning stage. The other patterns, which have been
joined to the same learning unit cause the outputs {t ,t } to fire at times which
out1 out2
are close to the ones corresponding to the center pattern. These other patterns cause
the output to fire at slightly different times depending on the value chosen during the
52
learning steps as was described in section 4.1. The value can be chosen according to
the accuracy needed for learning. If the value is small, more learning units are required
but the learning accuracy will be high. On the other hand, a large ε value required fewer
learning units but the learning accuracy will be lower. The neuron threshold voltage of
the coincident detection neuron must be set at a sufficiently low value in order to make
The number of learning units needed for classification and clustering cannot be
known a priori; thus, learning units are incrementally added as needed. The clustering
algorithm produces only a locally optimal input clustering because the input patterns
proposed in [39] but without the need for synapse delays and/or multiple sub-synapse
weights and delays; thus, the proposed learning scheme may be easier to implement in
hardware.
4.5 Simulations
(XOR) function has often been used to test the function approximation or classification
As shown in Figure 4.7, the XOR function has non-linearly sparable input values.
The main idea of the learning algorithm described in earlier section 4.1 was to assign
x2
C1
1
C2
C1
0 1 x1
x x digital inputs 00 and 11. C represents x x digital inputs 01 and 10. The two
1 2 2 1 2
distinguishable because the inputs are not referenced to a clock. Thus, in order to
54
distinguish such cases, a third reference input x = 0 can be used as shown in Figure
0
4.8.
Learning
ISI1 units Coincidence
detection
ISI2 neurons
0
0
0
0
0
x0 1
0
x1
0 0
x2 1 Final
0.1 0 output
time
0
1
1
Fig. 4.8. Spiking neural network for XOR function with spatio-temporal encoding for
logical input ”001”. Details of a learning unit is shown.
The learning simulation results for the XOR function are shown in Table 4.1.
Inputs 0 and 1 are represented by times 0 and 0.1 respectively as shown in Table 4.1.
detection neuron generates a spike when the appropriate spatio-temporal pattern is in-
put.
55
Table 4.1. XOR Input spike times (including the bias) and output times.
The XOR neural network organization is shown in Figure 4.8. The final output
neuron, shown in Figure 4.8, is used to represent the XOR output value in the time
domain by appropriately assigning its input synaptic weights. It should be noted that
only one of the coincident detection neurons fires for any one particular input.
low frequency) spike trains were used as inputs. A set of spike trains were generated by
adding noise to each of the original three spike trains as shown in Figure 4.9.
Spike time skews were produced by adding Gaussian white noise (GWN) to the
spike train, or by shifting one or two spikes in the spike train to the left or to the right
randomly. Adding both types of noises was used to test the classification capability of
the neural network after learning had been completed. By injecting various amounts of
GWN into a spike train, noisy time shifted versions of the original spike trains could be
generated as shown in Figure 4.9, where spike train number 1 is the original spike train
for each class, i.e, all the patterns including the noisy patterns were used as a learning
set. The closer the noisy versions were to the original spike train, the likelihood of being
56
able to use an already assigned learning unit increased. The original spike train and
its five noisy versions were used as inputs to the mapping stage which utilized multiple
mapping units with different β values in the range of [0.25, 1.0] to give a relatively wide
The learning and input pattern clustering simulation results are shown in Table
4.2. For example, for the three classes a total of six clusters were needed. For input
class 1, learning unit 1 was used for clustering five input patterns and learning unit 2
was used for clustering one input pattern as shown in Table 4.2.
Class No. Learning unit # # Learning patterns # Test patterns Final output time
1 1 5 5 4.0
2 1 -
2 3 4 3 5.0
4 2 2
3 5 4 3 6.0
6 2 2
Table 4.2. Input spike train classification, clustering, and final output times.
After the learning phase was completed, additional noisy spike trains for each
of the three classes were used to test the neural network. These additional noisy spike
trains are called test patterns in Table 4.2. The testing phase spike trains were generated
with the same range of noise used during the learning phase. For input class 3, three
input patterns were joined to learning unit 5 as shown in Table 4.2. The final output
57
neuron shown in Figure 4.3 was used to represent the final output time values for each
Class #1
Spike Train Number
0
0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5
Class #2
Spike Train Number
0
0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5
Class #3
Spike Train Number
0
0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5
Time (S)
Fig. 4.9. The original spike train for each class is spike train number 1. The other five
trains are noisy versions of it (one class is represented).
59
Chapter 5
SNN Applications
Three applications were used to represent the classification capability and robust-
proximation was carried out. In this case, the mapping stage was not needed because
the inputs had already been encoded into spatio-temporal patterns by a pre-processing
stage. Encoding method of the input variables into spatio-temporal patterns will be
Second, the classification of sounds produced when a glass ball struck different
materials of various shapes and sizes was carried out [9]. In this case, spike trains were
generated using the input sound signal and then mapped as described in chapter 3 using
the mapping stage; the learning stage was used to classify the different materials.
spike time distributions is described. The back-propagation learning algorithm was used
An encoding method that can efficiently transform input pattern variables into
spike times is required. One method is based on an array of gaussian functions [13] which
is employed to transform the input variable x into spike times {t , t , ...., t } as shown
1 2 n
in Figure 5.1. For example x = 3.5 is transformed into a spatio-temporal pattern with
firing times {20, 95, 325, 705, 985, 880, 505, 185} ms.
1 2 3 4 5 6 7 8
1 t
5
0.9
t6
0.8
t4
output spike times
0.7
0.6
0.5 t
7
0.4
t3
0.3
0.2
t8
t
0.1 2
t1
0
−2 −1 0 1 2 3 x 4 5 6 7 8
Input variable
Fig. 5.1. Input x variable encoded into 8 spike times using gaussian receptive fields.
61
gaussian functions can be used [11], [19], [37], [42], [43], [44].
by sharpening (or broadening) the gaussian functions as well as increasing the number of
overlapped gaussian functions [43]. Such a coding method has been applied successfully
For the function approximation experiment, the parameters of the encoding gaussian
functions [13]were selected as follows: a) for an input variable x with minimum value
x and maximum value x , k overlapped gaussian functions were used (k=8 in this
min max
x −xmin
expreiment); b) The center of the ith gaussian function was set to x +i. max
k−2 (k >
min
x −xmin
2); and c) the width of the gaussian functions were set to γ max
k−2 (k > 2), γ = 1.5.
output of the gaussian function neurons are used directly as spike times. All gaussian
function neurons were assumed to fire including the un-excited ones that were assumed
The proposed learning algorithm was used to approximate the following non-linear
function
−x
f (x) = e sin(3x)
62
0.7
Target
0.6 ε = 0.003
0.5 ε = 0.007
ε = 0.010
0.4 ε = 0.020
ε = 0.030
f(x)0.3
0.2
0.1
0
−0.1
−0.2
−0.3
0 0.5 1 1.5 2 2.5 3 3.5 4
x
Fig. 5.2. Function approximation for different tolerances with ϑ = 0.3, β = 0.5, and
τ = 5.0
The interval [0, 4] was sampled at 41 points with an interval spacing of 0.1. The
learning algorithm, described in section 4.1, was used to train the neural network to
To test the generalization capability of the network after training, the same in-
terval was sampled at 401 points, at intervals of 0.01, in order to generate the test data
Table 5.1 shows the proposed SNN learning results together with radial basis
function (RBF) based learning results. It can be observed from these results that as
the number of clusters increases (decreasing of the ε value) the learning accuracy is
improved. It can also be observed that RBF learning produces a smaller ”maximum
fit error (MFE)”than the proposed learning algorithm for the same number of clusters
(centers) and the same for mean squared error (MSE). When ε = 0.020 and ε = 0.030,
RBF learning could not achieve the same MSE. RBF learning needed many iterations
to achieve equal MSE values while the proposed learning algorithm only requires a one
step learning.
As a real world experiment, sounds produced when a small hard glass ball struck
different materials were recorded. The impacted materials were of different sizes and
64
shapes. The materials consisted of sheets of steel (S), sheets of copper (C), and pieces
of wood (D). For example C , C in Table 5.2 represent two sheets of copper of different
1 2
To encode each sound into spike trains, the method proposed in [24] was employed.
In this method, frequency tuned cells are used. These cells respond to transient changes
in input signals. The transient changes of the input that can be detected by these special
function cells; onsets, offsets, and peaks of each filter output which pass predetermined
threshold level from below, above, and maximum critical value respectively.
In this experiment, all onset, offset, and peak output times of each filter belonging
to a filter bank consisting of 20 band-pass filters were used as shown in Figure 5.3. The
20 spike trains generated by the outputs of the filter bank were mapped into a spatio-
temporal pattern containing 40 spikes (two output spikes for each mapping unit) using
the mapping stage described in Chapter 3. The filter bank center frequencies ranged
65
from 100Hz to 4000 Hz, with each filter having a bandwidth of 200 Hz. The spatio-
temporal patterns for the various impact sounds were then used as input patterns for
0.8
0.6
0.4
Amplitude
0.2
0
−0.2
−0.4
−0.6
0 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08
Time (Sec.)
Train Number
20
19
18
17
2
Spike
1
0
0 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08
Time (Sec.)
Fig. 5.3. Steel plate impact sound waveform and its corresponding 20 spike trains.
66
As can be seen in Table 5.2, each material could be correctly classified with
relatively good accuracy in the testing phase. It should be noted that the learning
phase can achieve 100% learning of the learning set because learning units can be added
incrementally as needed. Better classification accuracy during the testing phase may
require a better way to pre-process sound signals as well as a larger learning set.
The experiment was repeated using a filter bank of 50 band-pass filters working in the range of 100 Hz to 4000 Hz as before. In this case, the 50 spike trains generated by the outputs of the filter bank were mapped into a spatio-temporal pattern containing 100 spikes (two output spikes for each mapping unit) using the mapping stage. Results, together with the corresponding testing-phase classification accuracy, are also shown in Table 5.2.
A pattern classification problem was used to evaluate the usefulness and robustness of the proposed learning algorithm. Five patterns, each consisting of 10 spike trains, were used.
Each of the spike trains consisted of spikes which were uniformly distributed within a time window T = 1 second, as shown in Figure 5.4. The distinguishing characteristic of the 5 patterns was the number of spikes within a spike train: a) the first pattern, about 20 spikes; b) the second pattern, about 25 spikes; c) the third pattern, about 40 spikes; d) the fourth pattern, about 45 spikes; e) the fifth pattern, about 50 spikes.
Fig. 5.4. Two patterns with 10 spike trains each (the other patterns are not shown). (A) A pattern with spike trains generated at a rate of 20 spikes/spike train. (B) 50 spikes/spike train.
A small amount of normally distributed noise was added to each of the 5 patterns. For each of the original patterns, 23 noisy patterns were generated. Thus, in total, 120 patterns were available for the learning phase (100 patterns) and the testing phase (20 patterns).
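A minimal sketch of this pattern-generation procedure follows (not the original code; the noise standard deviation sigma is an assumption, while the spike counts and pattern counts follow the text):

% Sketch: 5 classes of uniformly distributed spike trains plus 23 noisy variants each.
rates = [20 25 40 45 50]; T = 1; nTrains = 10; nNoisy = 23; sigma = 0.005;
patterns = {};
for c = 1:numel(rates)
    base = cell(1, nTrains);
    for k = 1:nTrains
        base{k} = sort(rand(1, rates(c)) * T);   % uniformly distributed spike times
    end
    patterns{end+1} = base;                      % the original pattern
    for v = 1:nNoisy                             % 23 noisy variants per class
        noisy = cellfun(@(s) sort(s + sigma*randn(size(s))), base, ...
            'UniformOutput', false);
        patterns{end+1} = noisy;
    end
end
% 5 classes x 24 patterns = 120 patterns: 100 for learning, 20 for testing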
The number of clusters needed to make a correct classification is shown in Table 5.3.
(clusters)   steps (10^6)   (sec)        (epochs)        (sec)
2    135 (38)   1654   0.236   5.34   100%      11   5710   13.722 (12)   1256   95%
5    120 (33)   1439   0.234   4.88    95%       9   4568   12.806 (14)   1176   95%
10   102 (27)   1181   0.232   3.93    95%       7   3426   11.662 (17)    938   90%
Table 5.3. Complex spike train classification and comparison with the ANN back-propagation method.
In this experiment, the time required by the mapping and learning stages for an output firing depends on the parameters chosen. In the mapping stage ISI1 units, the utilized parameters were β_1 = 5 × 10^{−3}, initial weight value W_11 = 0.5, and threshold value ϑ_1 = 0.05; the output times were in the range of a few milliseconds (about 50 ms) after the last input spike time (the input time window was assumed to be T = 1 second). The ISI2 unit utilized β_2 = 1 × 10^{−4} with the same initial weight and threshold values as the ISI1 unit. It should be noted that the weight value ω should be initialized in an appropriate manner with respect to the β parameter, which directly scales the weight values assigned to the input spike times; otherwise, the learning stage will have difficulty distinguishing time differences in the input spike trains.
The outputs of the mapping stage (spatio-temporal patterns) are used by the learning stage for classification, as described in Chapter 4. In this experiment, the learning stage ISI1 unit used β_1 = 5 × 10^{−5} with threshold value ϑ_1 = 1 × 10^{−4}, and the ISI2 unit used β_2 = 5 × 10^{−6} with threshold value ϑ_2 = 1 × 10^{−5}. The output time of the learning stage was in the range of a few milliseconds.
Matlab ver. 7.0 was used for simulating the proposed SNN learning algorithm. For the back-propagation comparison, the spike trains (composing a pattern) were used as a vector pattern by combining all of them in one vector, as shown in Figure 5.5. In Figure 5.5, each spike train was assumed to consist of 57 spikes (the maximum number of spike inputs in a spike train). If the number of spikes in a spike train was less than 57, the spike input(s) with no spike had a "0" as an input. Thus, each hidden layer neuron had a total of 570 synaptic inputs. The classification results are shown in Table 5.3 for a constant learning rate of 0.05 and a target error of 10^{−10}.
Each back-propagation hidden layer neuron had 570 input synapses; furthermore, back-propagation needed many iterations to converge. On the other hand, the proposed SNN learning algorithm used ISI1 and ISI2 units with one input synapse per neuron in the mapping stage and 20 input synapses per neuron in the learning stage.
The number of computation steps in Table 5.3 for the proposed mapping-learning algorithm was estimated as

k (4pn + c((c − 1)/2 + 2))    (5.1)
Fig. 5.5. Back-propagation neural network.
where p is the number of input learning samples (patterns), k is the number of input spike trains per pattern, n is the number of spikes per input spike train, and c is the number of clusters.
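For illustration, Equation 5.1 can be evaluated with a small helper; the example argument values below are assumptions based on the pattern classification setup above:

% Sketch: evaluating Equation 5.1 for the SNN computation step count.
snn_steps = @(k, p, n, c) k * (4*p*n + c*((c-1)/2 + 2));
steps = snn_steps(10, 100, 57, 38);   % e.g., 10 trains, 100 samples, 57 spikes, 38 clusters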
The number of computation steps in Table 5.3 for the back-propagation learning algorithm was estimated in a similar manner, where p is the number of input learning samples (patterns), k is the number of inputs per pattern, h is the number of utilized hidden layer neurons, and E_p is the number of epochs.
Matlab ver. 7.0 was used for the back-propagation learning. The Matlab newff function does not allow a large number of hidden layer neurons; thus, for the back-propagation learning case, the number of hidden layer neurons was made considerably smaller than the number of learning units required by the proposed SNN learning algorithm. The number of computation steps needed by the SNN learning algorithm compared to the back-propagation algorithm is shown in Table 5.3. Reducing the number of learning units (by increasing the ε parameter) for the SNN learning algorithm reduced the number of computation steps. For the back-propagation learning algorithm, reducing the number of hidden layer neurons caused the number of epochs to increase and the classification accuracy to decrease.
It can be observed from Table 5.3 that the number of epochs needed to achieve the required error rate for the back-propagation learning algorithm was small. The small number of epochs is likely due to the fact that only 100 input samples were used in this application example. Another reason is the similarity of the inputs: for each of the 5 basic patterns, 19 similar patterns were generated by adding small amounts of noise.
The CPU times shown in Table 5.3 were obtained using Matlab ver. 7.0 on a PC (Celeron 2.40 GHz and 2.0 GB RAM) running the Microsoft Windows XP SP2 operating system.
Chapter 6

In this chapter, an analysis of the proposed mapping and learning stages is performed. The analysis determines the appropriate parameter values, such as the neuron threshold ϑ, β, and time window values, needed to perform mapping and learning.
An input spike train can be mapped to two output spike times using the ISI1 and ISI2 mapping neurons. One way to enlarge the output of the mapping stage is to use several mapping units with different β values. Different β values produce non-linear mapping unit output firing time delays for the same spike train input. A larger input space for the learning stage may be beneficial when relatively similar spike trains need to be distinguished. The effect of using multiple units in the mapping stage, with each unit using a different β value, is analyzed below.
Assume that the potential function in the two ISI neurons has a sufficiently long time constant, with t ≪ τ, so that the neuron PSP of Equation 2.6 becomes

x_j(t) = (t/τ) Σ_{i=1}^{n} W_i · u(t − t_i);    t ≪ τ

In Equation 2.6, u(t) is the unit step function. The slope of the x_j(t) function, once all input spikes have arrived, is s = (1/τ) Σ_{i=1}^{n} W_i.
Fig. 6.1. Internal neuron potential x(t) versus time, showing the threshold ϑ, the angle ϕ at the last input spike time t_n, and the output time t_out.
where ϕ is the angle at the last input t_n of a spike train, as shown in Figure 6.1.

x(t_n) = (β t_n/τ) Σ_{i=1}^{n} t_i    (6.3)

Assume

J = Σ_{i=1}^{n} t_i    (6.4)
tan(90° − ϕ) = cot(ϕ) = Y/A = Y/(ϑ − x(t_n))    (6.7)

then,

Y = cot(ϕ) · (ϑ − x(t_n)) = (τ/(βJ))(ϑ − β t_n J/τ) = τϑ/(βJ) − t_n    (6.8)
t_out = Y + t_n = τϑ/(βJ)    (6.9)

For the same input spike train, t_n and J are constants, and ϑ is constant over the whole mapping stage.

Fig. 6.2. ISI1 neuron output time t_out versus β.

Figure 6.2 shows that linear changes in β result in drastic non-linear changes in the output times of the ISI1 neuron, especially in the range β = [0.25, 2.0].
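A minimal sketch reproducing the shape of Figure 6.2 from Equation 6.9 is given below; all parameter values are assumptions for illustration.

% Sketch: output time t_out versus beta (Equation 6.9).
tau = 5; theta = 0.3;             % assumed values
ti = 0.1:0.1:1.0;                 % an example spike train
J = sum(ti);                      % Equation 6.4
beta = 0.05:0.05:6;
t_out = tau*theta ./ (beta*J);    % Equation 6.9
plot(beta, t_out); xlabel('\beta'); ylabel('t_{out} (sec)');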
A similar procedure can be used for the ISI2 neuron by using W_i = β/t_i; the relation is again represented by Equation 6.9, but with J = Σ_{i=1}^{n} 1/t_i.
The mapping unit neuron weights are assigned using Equations 3.1 and 3.2, and the initial weight values W_11 and W_12 for the ISI1 and ISI2 neurons must be chosen appropriately; otherwise, problems such as wrong input spike train mapping or a long delay time for a correct mapping can occur.
All of these parameters are related to each other and cannot be independently determined. In this research, it is assumed that all input spikes (belonging to a spike train input pattern) arrive within one input time window of length T_inp seconds, as shown in Figure 6.3. Thus, the latest arriving spike within an input spike train occurs at some time t_max ≤ T_inp.
Fig. 6.3. Presentation of input time window and mapping time window.
W = Σ_{i=1}^{n} W_i;    W_1 = const.    (6.10)

where n is the number of input spikes which arrive at the neuron input within a time window of length 0 ≤ t ≤ T_inp.
To select the threshold values, the maximum possible sum of the weights must be calculated for both the ISI1 and ISI2 neurons; it can be computed for the hypothetical spike trains shown in Figure 6.4.
Fig. 6.4. (A) Spike train arrives at the latest possible times. (B) Spike train arrives at
the earliest possible times.
The following explanation is for the mapping units, which encode spike train inputs. From Equation 3.1 (W_i = β_1 t_i) of the ISI1 neuron, it is clear that the maximum sum of the weights resulting from all input spikes within one input time window T_inp occurs when all input spikes come at the latest possible times (i.e., the t_i values are the largest within the input time window), as shown in Figure 6.4(A).
Fig. 6.5. Internal potential x_j(t) for a spike train with inputs at t_1 = 0, t_2, …, t_n and output firing at t_out.
The internal ISI neuron potential (Equation 2.6) can be calculated using Figure
6.5 as follows:
x_j(t) = t_out W_1 + (t_out − t_2) W_2/τ + ⋯ + (t_out − t_n) W_n/τ
       = t_out (W_1 + (1/τ) Σ_{i=2}^{n} W_i) − (1/τ) Σ_{i=2}^{n} W_i t_i    (6.11)

where the initial weight value W_1 is attached to the input synapse for the first input spike within a spike train, which is considered as the local reference at time t_r = 0, as described in Chapter 4.
ϑ_1^map = t_out (W_11 + (β_1/τ) Σ_{i=2}^{n} t_i) − (β_1/τ) Σ_{i=2}^{n} t_i^2    (6.12)

where W_11 = W_1 of Equation 6.11 is the initial weight value of the ISI1 mapping unit neuron.
Input spikes at times t_i come at the latest possible times within the input time window, so that

Σ_{i=2}^{n} t_i = T_inp + (T_inp − Δt) + (T_inp − 2Δt) + ⋯ + (T_inp − (n − 2)Δt)
              = (n − 1) T_inp − (Δt/2)(n − 1)(n − 2)    (6.13)

Σ_{i=2}^{n} t_i^2 = T_inp^2 + (T_inp − Δt)^2 + (T_inp − 2Δt)^2 + ⋯ + (T_inp − (n − 2)Δt)^2
               = (n − 1) T_inp^2 − T_inp Δt (n − 1)(n − 2) + (Δt^2/6)(n − 2)(n − 1)(2n − 3)    (6.14)
Thus, by substituting Equations 6.13 and 6.14 in Equation 6.12, the mapping threshold becomes

ϑ_1^map = (T_inp + Δt)(W_11 + (β_1/τ)(n − 1) T_inp − (β_1 Δt/2τ)(n − 1)(n − 2)) − (β_1/τ)(n − 1) T_inp^2
          + (β_1/τ) T_inp Δt (n − 1)(n − 2) − (β_1 Δt^2/6τ)(n − 1)(n − 2)(2n − 3)
        ≈ W_11 T_inp + (β_1 Δt/2τ)(n − 1)(n − 2)(T_inp − (2n − 3) Δt/3)    (6.15)
Δt is the minimum time resolution between input spikes within a spike train. ϑ_1^map should be selected to be slightly larger than the value given in Equation 6.15, by a small value γ.
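As a small illustration, the bound of Equation 6.15 can be computed directly; the parameter values below are assumptions:

% Sketch: the ISI1 mapping threshold bound of Equation 6.15.
theta1_map = @(W11, b1, tau, Tinp, dt, n) ...
    W11*Tinp + (b1*dt/(2*tau))*(n-1)*(n-2)*(Tinp - (2*n-3)*dt/3);
th1 = theta1_map(0.5, 5e-3, 2.5, 1, 3e-3, 20);  % choose the threshold slightly above th1 (+ gamma)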
From Equation 3.2 of the ISI2 neuron (W_i = β_2/t_i), it is clear that the maximum of the sum of the weights resulting from all input spikes within one input time window occurs when all input spikes come at the earliest possible times (i.e., the t_i values are the smallest within the input time window), as shown in Figure 6.4(B). Then,
ϑ_2^map = t_out (W_12 + (β_2/τ) Σ_{i=2}^{n} 1/t_i) − (β_2/τ) Σ_{i=2}^{n} t_i/t_i    (6.16)

where W_12 = W_1 of Equation 6.11 is the initial weight value of the ISI2 mapping unit neuron.
Σ_{i=2}^{n} 1/t_i = 1/Δt + 1/(2Δt) + ⋯ + 1/((n − 1)Δt)
                = (1/Δt)(1 + 1/2 + ⋯ + 1/(n − 1))    (6.17)
From Figure 6.6, since the maximum value is of interest in the computations, the harmonic sum can be bounded as

ln(1 + n) < Σ_{i=1}^{n} 1/i < 1 + ln(n)    (6.18)
Thus,

ϑ_2^map = T_inp (W_12 + (β_2/(Δt τ))(1 + ln(n − 1))) − (β_2/τ)(n − 1)    (6.19)

Δt is the minimum time resolution between input spikes within a spike train. ϑ_2^map should be selected to be slightly larger than the value given in Equation 6.19, by a small value γ.
Fig. 6.6. Plots of Σ 1/n and its two bounding approximations, ln(1 + n) and 1 + ln(n).
Sufficient time should be allowed for the internal neuron potential to reach the threshold potential ϑ. For the ISI1 neuron, the latest output firing time occurs when all the spikes within a spike train arrive at the beginning of the input time window T_inp (i.e., the weight values W_i will be the smallest), as shown in Figure 6.4(B).
ϑ_1^map = T_1^map (W_11 + (1/τ) Σ_{i=2}^{n} W_i) − (1/τ) Σ_{i=2}^{n} W_i t_i
        = T_1^map (W_11 + (β_1/τ) Σ_{i=2}^{n} t_i) − (β_1/τ) Σ_{i=2}^{n} t_i^2
        = T_1^map (W_11 + (β_1 Δt/2τ) n(n − 1)) − (β_1 Δt^2/6τ) n(n − 1)(2n − 1)    (6.20)
where:

Σ_{i=2}^{n} t_i = Δt + 2Δt + ⋯ + (n − 1)Δt = (Δt/2) n(n − 1)    (6.21)

Σ_{i=2}^{n} t_i^2 = Δt^2 + (2Δt)^2 + ⋯ + ((n − 1)Δt)^2 = (Δt^2/6)(n − 1) n (2n − 1)    (6.22)
T_1^map = (ϑ_1^map + (β_1 Δt^2/6τ) n(n − 1)(2n − 1)) / (W_11 + (β_1 Δt/2τ) n(n − 1))    (6.23)

where Δt is the minimum time resolution between input spikes within a spike train.
A similar reasoning can be applied for the ISI2 neuron, except that the latest output firing time occurs when all spikes come at the very end of the input time window (i.e., the weight values W_i will be the smallest). In other words, the first spike within a spike train occurs at t_1 = 0 and the rest of the spikes occur at the very end of the input time window.

ϑ_2^map = T_2^map (W_12 + (β_2/τ) Σ_{i=2}^{n} 1/t_i) − (β_2/τ) Σ_{i=2}^{n} t_i/t_i
where:

Σ_{i=2}^{n} 1/t_i = 1/T_inp + 1/(T_inp − Δt) + 1/(T_inp − 2Δt) + ⋯ + 1/(T_inp − (n − 2)Δt)    (6.24)

Since the minimum value is the point of interest here, this sum can be approximated as follows:

Σ_{i=2}^{n} 1/t_i ≈ (n − 1)/T_inp;    0 < t_i ≤ 1    (6.25)

where n is the number of input spikes within an input time window. Thus,
ϑ_2^map = T_2^map (W_12 + (β_2/(τ T_inp))(n − 1)) − (β_2/τ)(n − 1)    (6.26)
from which T_2^map can be expressed as follows:

T_2^map = (ϑ_2^map + (β_2/τ)(n − 1)) / (W_12 + (β_2/(τ T_inp))(n − 1))    (6.27)
The larger of either T_1^map or T_2^map is defined as the mapping time window (T_map = max{T_1^map, T_2^map}), as shown in Figure 6.3.
The mapping stage should produce sufficiently separated output spike times for nearly similar input patterns (spike trains) [10]. At the same time, the mapping time window T_map should be as small as possible in order to have a short input-output delay. Two input spike trains were considered. The first spike train is the one shown in Figure 6.4(A); the second spike train differs from this one by having its last spike come at time nΔt instead of (n − 1)Δt. In other words, the difference between the two spike trains is just one spike shifted in time by Δt in the second spike train.
The worst case output separation time for the ISI1 and ISI2 neurons can be computed using the internal ISI neuron potential Equation 6.11 (for the two slightly different spike trains):

ϑ = t_out (W_1 + (1/τ) Σ_{i=2}^{n} W_i) − (1/τ) Σ_{i=2}^{n} W_i t_i    (6.28)

where the initial weight value W_1 is assigned to the input synapse for the first input spike within a spike train, which is considered as the local reference at time t = 0, as described in Chapter 4. Then,

t_out = (ϑ + (1/τ) Σ_{i=2}^{n} W_i t_i) / (W_1 + (1/τ) Σ_{i=2}^{n} W_i)    (6.29)
The output times will then occur within 100 ms after the T_inp time for the parameters used in Table 6.1, assuming that the input spike train occurs at the latest possible time, as shown in Figure 6.4(A). Thus, the output time differences between the two spike trains will be the smallest among all other possible output separation times for both the ISI1 and ISI2 units. When an input spike train comes at the latest possible time within the input time window T_inp, as shown in Figure 6.4(A), and the worst case output firing time has been set to T_inp + 100 msec, the difference in output times for the two spike trains can be computed. In Figure 6.7, the relationship between the output time window and β_1 for various initial W_11 weight values for the ISI1 mapping neuron is shown.
In Figure 6.8, the relationship between the output firing time difference for two similar spike trains and β_1 for different initial W_11 weight values for the ISI1 mapping neuron is shown. In Figure 6.10, the relationship between the output firing time difference for two similar spike trains and β_2 for different initial W_12 weight values for the ISI2 mapping neuron is shown.
By comparing Figures 6.7 and 6.9 for the ISI1 and ISI2 neurons, it can be observed that the ISI1 neuron output time shown is for the input spike train which arrives at the earliest time within T_inp; thus it takes a long time (approximately the entire T_inp time) to fire. On the other hand, the ISI2 neuron output time shown is for an input spike train which arrives at the latest time within T_inp; thus the ISI2 neuron fires after T_inp, at a time chosen by the designer.
For the ISI1 neuron in the mapping unit, Figures 6.7 and 6.8 can be used to achieve a small T_1^map time window with an appropriate output separation time for some β_1 and W_11 values. Results are shown in Table 6.1 for a time constant τ = 2.5 sec, n = 20, an input time window T_inp = 1 sec, and an output time window of 100 ms (T_map = 1.1 sec) for various values of Δt.
Fig. 6.7. ISI1 mapping neuron output time window vs. β_1 for different weight W_11 values (Δt = 3 ms and τ = 2.5 sec).
It should be noted that the output separation times shown are for two spike trains with the minimum Euclidean distance between them.
In a similar manner, for the ISI2 neuron in the mapping unit, Figures 6.9 and 6.10 can be used to achieve a small T_2^map time window with an appropriate output separation time for some β_2 and W_12 values.
The outputs of the ISI1 and ISI2 neurons in the mapping stage are used as input patterns for the learning stage.
Fig. 6.8. ISI1 mapping neuron output separation times vs. β_1 for different weight W_11 values (Δt = 3 ms and τ = 2.5 sec).
In the learning stage, a supervised learning method is used to classify input patterns, as shown in Figure 4.3. The learning unit is repeated in Figure 6.11 for convenience. The neuron weights are assigned using W_i = β_1 t_i and W_i = β_2/t_i for ISI1 and ISI2 respectively, as was done in Equations 4.1 and 4.2, and the initial weight values ω of the ISI1 and ISI2 neurons must be chosen appropriately. All of these parameters are related to each other and cannot be independently determined. However, the learning stage analysis considers multiple spatio-temporal inputs instead of a single spike train at each ISI neuron. It is assumed that all spatio-temporal input
Fig. 6.9. ISI2 mapping neuron output time window vs. β_2 for different weight W_12 values (Δt = 3 ms and τ = 2.5 sec).
spikes arrive within one learning unit input time window T_map, as shown in Figure 6.12. To select the learning unit thresholds, the maximum possible sum of the weights must be known for both the ISI1 and ISI2 neurons; it can be computed for the hypothetical spatio-temporal pattern cases shown in Figure 6.13.
In Equation 4.1 for the ISI1 neuron, the maximum sum of the weights resulting from all input spikes (composing a spatio-temporal pattern) within one input time window (T_map) occurs when all input spikes come at the latest possible times (i.e., the t_i values are the largest within the input time window), as shown in Figure 6.13(A).
Fig. 6.10. ISI2 mapping neuron output separation times vs. β_2 for different weight W_12 values (Δt = 3 ms and τ = 2.5 sec).
Fig. 6.11. The learning unit neuron: input spikes at t_1 = t_r, t_2, …, t_n with weights w_1, w_2, …, w_n produce the output t_out1 (t_out2) when the EPSP level is reached.
Fig. 6.12. Presentation of input time window and learning time window.
Fig. 6.13. (A) All spatio-temporal input spikes come at the latest possible times. (B)
All spatio-temporal input spikes come at the earliest possible times.
It should be noted that the first spike in the mapping unit is used as a reference time for all the mapping and learning units. It should also be noted that Equation 4.1 is being used here despite the fact that spatio-temporal inputs, and not a spike train input, are fed into the ISI neurons of the learning unit; the spatio-temporal inputs shown in Figures 6.13(A) and 6.13(B) are considered in this analysis to behave as a spike train.
Thus, the learning unit neuron threshold potential can be expressed as follows:

ϑ_1^learn = t_out (ω_1 + (β_1/τ) Σ_{i=1}^{n} t_i) − (β_1/τ) Σ_{i=1}^{n} t_i^2    (6.30)

where ω_1 is the initial weight value of the ISI1 learning unit neuron.
Σ_{i=1}^{n} t_i = T_map + (T_map − Δt) + (T_map − 2Δt) + ⋯ + (T_map − (n − 1)Δt)
              = n T_map − (Δt/2) n(n − 1)    (6.31)

Σ_{i=1}^{n} t_i^2 = T_map^2 + (T_map − Δt)^2 + (T_map − 2Δt)^2 + ⋯ + (T_map − (n − 1)Δt)^2
               = n T_map^2 − T_map Δt (n − 1) n + (Δt^2/6)(n − 1) n (2n − 1)    (6.32)
Thus:

ϑ_1^learn = (T_map + Δt)(ω_1 + (β_1/τ) n T_map − (β_1 Δt/2τ) n(n − 1)) − (β_1/τ) n T_map^2
            + (β_1/τ) T_map Δt n(n − 1) − (β_1 Δt^2/6τ) n(n − 1)(2n − 1)
          ≈ ω_1 T_map + (β_1 Δt/2τ) n(n − 1)(T_map − (2n − 1) Δt/3)    (6.33)

ϑ_1^learn should be selected to be slightly larger than the value given in Equation 6.33, by a small value γ.
In Equation 4.2 for the ISI2 neuron, the maximum sum of the weights resulting from all input spikes within one input time window occurs when all input spikes come at the earliest possible times (i.e., the t_i values are the smallest within the input time window), as shown in Figure 6.13(B).

ϑ_2^learn = t_out (ω_2 + (β_2/τ) Σ_{i=1}^{n} 1/t_i) − (β_2/τ) Σ_{i=1}^{n} t_i/t_i    (6.34)

where ω_2 is the initial weight value of the ISI2 learning unit neuron.
Σ_{i=1}^{n} 1/t_i = 1/T_inp + 1/(T_inp + Δt) + ⋯ + 1/(T_inp + (n − 1)Δt)
                ≈ n/(T_inp + Δt)    (6.35)
The approximation in Equation 6.35 is used because the maximum threshold value is of interest. Thus,

ϑ_2^learn = T_map ω_2 + (β_2 n/τ)(T_map/(T_map + Δt) − 1)    (6.36)

ϑ_2^learn should be selected to be slightly larger than the value given in Equation 6.36, by a small value γ.
Sufficient time should be allowed for the internal neuron potential to reach the threshold potential ϑ. For the ISI1 neuron, the latest output firing time occurs when all the spikes in the spatio-temporal pattern arrive at the beginning of the input time window. Hence, when the internal neuron potential x(t) of Equation 6.11 is set equal to the threshold potential ϑ_1^learn, the following relationship can be established:
ϑ_1^learn = T_1^learn (ω_1 + (1/τ) Σ_{i=1}^{n} W_i) − (1/τ) Σ_{i=1}^{n} W_i t_i
          = T_1^learn (ω_1 + (β_1/τ) Σ_{i=1}^{n} t_i) − (β_1/τ) Σ_{i=1}^{n} t_i^2    (6.37)
where,

Σ_{i=1}^{n} t_i = T_inp + (T_inp + Δt) + (T_inp + 2Δt) + ⋯ + (T_inp + (n − 1)Δt)
              = n T_inp + Δt (1 + 2 + ⋯ + (n − 1))
              = n T_inp + (Δt/2) n(n − 1)
              = n (T_inp + (Δt/2)(n − 1))    (6.38)

Σ_{i=1}^{n} t_i^2 = T_inp^2 + (T_inp + Δt)^2 + (T_inp + 2Δt)^2 + ⋯ + (T_inp + (n − 1)Δt)^2
               = n T_inp^2 + T_inp Δt (n − 1) n + (Δt^2/6)(n − 1) n (2n − 1)    (6.39)
Thus, the learning unit ISI1 neuron output time window can be expressed as:

T_1^learn = (ϑ_1^learn + (β_1/τ)(n T_inp^2 + T_inp Δt (n − 1) n + (Δt^2/6)(n − 1) n (2n − 1))) / (ω_1 + (β_1/τ) n (T_inp + (Δt/2)(n − 1)))    (6.40)

T_1^learn is the latest possible ISI1 neuron firing time.
A similar reasoning can be applied for the ISI2 neuron, except that the latest output firing time occurs when all spikes come at the very end of the input time window. In other words, the first spike of the spatio-temporal pattern occurs at t_1 = T_inp and the rest of the spikes occur at the very end of the input time window, as shown in Figure 6.13(A).
Hence, when the internal neuron potential x(t) of Equation 6.11 is set equal to the threshold potential ϑ_2^learn, the following relationship can be established:

ϑ_2^learn = T_learn (ω_2 + (β_2/τ) Σ_{i=1}^{n} 1/t_i) − (β_2/τ) Σ_{i=1}^{n} t_i/t_i
          = T_learn (ω_2 + (β_2/(τ T_map)) n) − (β_2/τ) n    (6.41)
where:

Σ_{i=1}^{n} 1/t_i = 1/T_map + 1/(T_map − Δt) + 1/(T_map − 2Δt) + ⋯ + 1/(T_map − (n − 1)Δt)    (6.42)
                ≈ n/T_map;    0 < t_i ≤ 1    (6.43)
T_2^learn = (ϑ_2^learn + (β_2/τ) n) / (ω_2 + (β_2/(τ T_map)) n)    (6.44)

T_2^learn is the latest possible ISI2 neuron firing time.
The larger of either T_1^learn or T_2^learn is defined as the learning time window, as shown in Figure 6.12.
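A minimal sketch computing the learning time window from Equations 6.38-6.40 and 6.44 follows; all parameter values are assumptions for illustration.

% Sketch: learning time window from Equations 6.38-6.40 and 6.44.
tau = 2.5; n = 40; Tinp = 1; Tmap = 1.1; dt = 1e-3;
b1 = 5e-3; w1 = 0.5; th1 = 0.6;     % assumed ISI1 learning parameters
b2 = 1e-4; w2 = 0.5; th2 = 0.6;     % assumed ISI2 learning parameters
sum_t  = n*(Tinp + dt*(n-1)/2);                                   % Eq. 6.38
sum_t2 = n*Tinp^2 + Tinp*dt*n*(n-1) + dt^2/6*n*(n-1)*(2*n-1);     % Eq. 6.39
T1learn = (th1 + b1/tau*sum_t2) / (w1 + b1/tau*sum_t);            % Eq. 6.40
T2learn = (th2 + b2/tau*n) / (w2 + b2*n/(tau*Tmap));              % Eq. 6.44
Tlearn_window = max(T1learn, T2learn);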
In Figure 6.14, the relationship between T_1^learn and β_1 in Equation 6.40 for different initial weight ω_1 values in the ISI1 learning neuron is shown. It can be seen that T_1^learn < 1.1 sec.
Fig. 6.14. ISI1 learning neuron output time window vs. β_1 for different weight ω_1 values (Δt = 1 msec and τ = 2.5 sec).
In Figure 6.15, the relationship between T_2^learn and β_2 in Equation 6.44 for different initial weight ω_2 values in the ISI2 learning neuron is shown. It can be observed that T_2^learn > 1.1 sec.
Fig. 6.15. ISI2 learning neuron output time window vs. β_2 for different weight ω_2 values (Δt = 1 msec and τ = 2.5 sec).
By comparing Figures 6.14 and 6.15 for the ISI1 and ISI2 neurons, it can be observed that the ISI1 neuron output time shown is for the input spikes which arrive at the earliest time within T_map; thus it takes a long time (approximately the entire T_map time) to fire. On the other hand, the ISI2 neuron output time shown is for input spikes which arrive at the latest time within T_map; thus the ISI2 neuron fires after T_map, at a time chosen by the designer (50 msec was used in this example).
In the learning stage, coincidence detection is needed, and hence the final output neuron uses the exponential characteristics of the neuron response function (Equation 2.3) to ensure that only coincident spikes cause the output to fire. Thus, the required time constants must be chosen appropriately in order to represent each input class with a unique output time. Hence, the learning time window T_learn (shown in Figure 6.12) can be written as follows:

T_learn = max{T_1^learn, T_2^learn} + τ_c + τ_out    (6.45)
It can be observed from Figures 6.14 and 6.15 that T_2^learn > T_1^learn; thus Equation 6.45 can be expressed as follows:

T_learn ≥ T_2^learn + τ_c + τ_out    (6.46)
The complexity of the proposed learning algorithm is calculated for a one-processor, random-access-model machine. The learning algorithm complexity is O(4nk + p^2), where k is the number of input spike trains, n is the number of spikes per spike train, and p is the number of learning samples. The complexity order was calculated for the worst case number of clusters needed for classification, which occurs when the number of clusters is equal to the number of learning samples p. The dominant factor in this learning algorithm is the number of learning patterns p. Thus, the learning algorithm complexity can be approximated by O(p^2).
Conclusions
It has been shown that SNNs can be computationally efficient and practical. Their consideration and analysis are important both from the biological and from the computational points of view. In this dissertation, emphasis has been placed on temporal coding, which distinguishes SNNs from the classical ANN. It has been shown that some types of ANNs known to be very efficient can be implemented with SNNs in the temporal domain, such as the proposed learning model, which is closely related to the radial basis function (RBF) network [21]. In this case, the ability to obtain additional information by considering the firing times of neurons was shown; this also makes SNNs attractive for analog/digital VLSI implementations. Spiking neural networks can be used to process time domain analog real-world signals once these signals are converted to spike trains. A hardware realization of SNNs would make it possible to process real-world signals in close to real-time.
A new scheme for mapping both spike trains and spatio-temporal patterns (one spike time per input synapse) was introduced. This scheme transforms temporal domain spikes into two outputs (in the case of spike train inputs) which can produce a one-to-one representation of inputs in the spatial and/or temporal domain. As shown in Appendix A, the inputs and outputs have a one-to-one relationship when the outputs of the ISI1 and ISI2 neurons are used in combination.
A new learning algorithm for spiking neural networks was proposed. The resulting spiking neural network can classify input spike trains which are transformed using the mapping stage. Simulations have shown that classification of input spike trains with noise can be achieved by either adding learning units or clustering input spike trains. The applications studied in this research, as well as other applications, can be practical areas for the proposed learning algorithm. The simulations and real-world applications discussed in this research show the computational power of spiking neural networks, especially when the inputs into the SNN arrive as spike trains (the impact sound classification example). Spiking neural networks can process real-world signals in close to real-time. Applications in which timing information can be directly utilized would also be good candidates.
The proposed methods use spike trains in the mapping stage and spatio-temporal patterns (single spikes) in the learning stage. Moreover, each neuron is allowed to fire only once within the coding interval (the mapping and learning windows, as shown in the analysis). However, the presentation of an input vector may just happen within a small time window of a longer-lasting computation. For example, one may consider the case where all neurons of the SNN fire with the same frequency, but phase-shifted, as suggested in [23]. In the SNN constructions proposed in this dissertation, the local reference time of the first arriving spike (within an input spike train) is used for all computations. A phase-shift of some input neurons then yields a phase-shift of the output neuron.
Using other neuron models, or even other types such as classical neurons, could influence the quality of the introduced methods and the computational power of such neurons in general. This work is only the first step in the exploration of the possibilities of SNNs. The computational power of SNNs may be enhanced by making use of other synaptic features, like synaptic depression. If more features of biological neurons are included in SNNs, would it improve their computational power? Can learning be better realized using both temporal and rate code information?
The proposed model, after some modifications, may be useful in directly processing real biological signals from some body parts (such as muscles) to overcome physical disabilities.
Appendix A
The one-to-one mapping of inputs to outputs of the mapping unit will be proved. Assume that the potential function in the two ISI neurons has a sufficiently long time constant, so that α(t) of Equation 2.3 can be considered to work simply as a linear function:

x(t) = (t/τ) Σ_{i=1}^{n} W_i · u(t − t_i)    (A.1)

In Equation A.1, u(t) is the Heaviside function. The slope of the function x(t) is:

s = (1/τ) Σ_{i=1}^{n} W_i    (A.2)

In Equation A.2, the slope depends on the values of the dynamic weights W_i.
To prove that no coincident potential values are produced for different input spike trains, or spatio-temporal patterns, after the last input spike has arrived, it is sufficient to show that the slopes for two different input spike trains cannot be equal. The following cases cover all the worst case input spike train combinations.
• Assume two different input spike trains P_A and P_B have the same spike orders but different spike times, P_A = {t_1^A, t_2^A, …, t_{n−1}^A, t_n^A} and P_B = {t_1^B, t_2^B, …, t_{n−1}^B, t_n^B}.

• If the last spike inputs have the relation t_n^A > t_n^B and s_{n−1}^A > s_{n−1}^B, as shown in Figure A.1(A), the potential functions of P_A and P_B may intersect at some later time after the last input spike, i.e., s_n^A < s_n^B and then t_out1^A = t_out1^B, for the ISI1 neuron (ISI1 internal potential ∝ β t_i); however, for the same spike trains P_A and P_B, the ISI2 neuron (ISI2 internal potential ∝ β/t_i) makes the internal potential functions diverge (s_n^A > s_n^B) and thus t_out2^A ≠ t_out2^B, as shown in Figure A.1(B).
• If t_n^A < t_n^B and s_{n−1}^A > s_{n−1}^B, as shown in Figure A.1(D), the potential slopes may intersect at some later time (s_n^A < s_n^B and then t_out2^A = t_out2^B) for the ISI2 neuron, while the ISI1 neuron would make the internal potentials diverge (s_n^A > s_n^B), because its internal potential is proportional to the input spike time t_i, and thus t_out1^A ≠ t_out1^B, as shown in Figure A.1(C).
• If t_n^A = t_n^B and s_{n−1}^A > s_{n−1}^B, the ISI1 and ISI2 neurons would both produce s_n^A > s_n^B and thus t_out1^A ≠ t_out1^B (not shown in Figure A.1).
• If t_n^A = t_{n−1}^B and s_{n−1}^A > s_{n−1}^B, then the ISI1 neuron would produce t_out1^A ≠ t_out1^B, as shown in Figure A.1(E); furthermore, if t_n^A = t_{n−1}^B and s_{n−1}^A < s_{n−1}^B, then the ISI2 neuron would produce t_out2^A ≠ t_out2^B, as shown in Figure A.1(F).
Thus, all possible input spike trains produce a unique combination of outputs at t_out1 and t_out2 of the ISI1 and ISI2 neurons, which can be used to recognize a particular input sequence.
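As a rough numeric illustration of this uniqueness argument (not part of the original appendix; all parameter values are assumptions), the output pair (t_out1, t_out2) of Equation 6.29 can be computed for two example spike trains:

% Sketch: the (t_out1, t_out2) pairs of two different trains differ.
tau = 2.5; beta1 = 0.05; beta2 = 1e-4; W1 = 0.5; th = 0.6;
tout = @(t, W) (th + sum(W.*t)/tau) / (W1 + sum(W)/tau);   % Equation 6.29
PA = [0 0.2 0.4 0.8];  PB = [0 0.2 0.5 0.7];               % first spike = local reference
outA = [tout(PA(2:end), beta1*PA(2:end)), tout(PA(2:end), beta2./PA(2:end))];
outB = [tout(PB(2:end), beta1*PB(2:end)), tout(PB(2:end), beta2./PB(2:end))];
disp([outA; outB])   % the two output pairs differ in at least one coordinate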
Fig. A.1. Internal potentials for the ISI1 and ISI2 neurons. (A) and (B): t_n^A > t_n^B and s_{n−1}^A > s_{n−1}^B; (C) and (D): t_n^A < t_n^B and s_{n−1}^A > s_{n−1}^B; (E): t_n^A = t_{n−1}^B and s_{n−1}^A > s_{n−1}^B; (F): t_n^A = t_{n−1}^B and s_{n−1}^A < s_{n−1}^B.
Matlab Code

(1) Mapping-Learning Organization

% Top-level script: maps the input spike train patterns and clusters the
% resulting output spike times. Several passages of this listing were lost
% at page breaks in the original; they are marked below and not reconstructed.
clear; clc;
load('x100.dat');                    % input spike train patterns (variable x)
[a,b] = size(x);
s = [];
t = cputime;                         % start CPU timing
for i = 1:pat                        % pat: number of patterns (set elsewhere)
    % map each pattern's spike trains (trns trains per pattern) to spike times
    y = decoder_sngl_beta(x(trns*(i-1)+1:trns*i,:));
    s = [s y];
end
y1 = s';
w1 = 0.7; sml_dst = 0.005;           % initial weight; clustering distance
for j = 1:np1
    % (loop body elided in the original listing)
end
ii = 0;
ynn = [];
jj = 1;
spike_out1 = 0;
% (the enclosing loop and condition of the following fragment were elided
%  in the original listing)
    yn(nn,:) = 0;
    y1(nn,:) = 0;
else
    % spike_out2 = combin_inputs(spike1, spike2, th)
    c1 = compare(spike1(ii,1), spike11, sml_dst);   % compare with cluster centers
    c2 = compare(spike2(ii,1), spike22, sml_dst);
    yn(nn,:) = 0;
    y1(nn,:) = 0;
end
% keep only the non-empty cluster outputs
i = 0;
for n = 1:np1
    if yn(n,1) ~= 0
        i = i + 1;
        ynn(i,:) = yn(n,1);
        y2(i,:) = y1(n,:);
    end
end
y1 = y2;
yn = ynn;
s1 = size(y1);
e = cputime - t;                     % elapsed CPU time
% A function whose inputs are spike trains "x" and which returns the two
% mapping-stage output spike times (ISI1 and ISI2) per spike train.
% (The function header and outer loop were lost at page breaks in the
%  original listing and are restored here; the ISI2 output line is
%  reconstructed from Equation 6.29 and is an assumption.)
function spike = decoder_sngl_beta(x)
endvalue = 0.2; tau = 10;
beta1 = 0.1;                         % calculated
beta2 = 3.5*10^-5;
V1 = 0.76;                           % calculated (ISI1 threshold)
V2 = 0.77;                           % calculated (ISI2 threshold)
w1 = 0.7;
[np,ns] = size(x);   % np = no. of spike trains & ns = no. of spikes/spike train
x_tmp = [];
for i = 1:np
    for j = 1:ns
        x_tmp(i,j) = x(i,j);
    end
end
[np1,ns1] = size(x_tmp);
for i = 1:np1
    % use the first spike of each train as the local reference time
    x_tmp(i,:) = x_tmp(i,:) - min(x_tmp(i,:));
    if x_tmp(i,1) == 0
        % ISI1 output time (Equation 6.29 with W_i = beta1*t_i)
        spike(i*2-1,1) = (V1*tau + beta1*sum(x_tmp(i,2:ns1).*x_tmp(i,2:ns1)))...
            /(w1*tau + beta1*sum(x_tmp(i,2:ns1)));
        % ISI2 output time (reconstructed from Equation 6.29 with W_i = beta2./t_i)
        spike(i*2,1) = (V2*tau + beta2*(ns1-1))/(w1*tau + beta2*...
            sum(1./x_tmp(i,2:ns1)));
    else
        spike(i*2-1,1) = (V1*tau + beta1*sum(x_tmp(i,:)))/...
            (beta1*sum(x_tmp(i,:)));
    end
end
References
1658, 1993.
[2] Hagai Agmon-Snir, Catherine E. Carr, and John Rinzel, “The role of dendrites in
[3] Hesham H. Amin and Robert H. Fujii, “Learning Algorithm for Spiking Neural
2003.
Scheme for a Spiking Neural Network, ”Proceeding of The 12th European Sympo-
[5] Hesham H. Amin and Robert H. Fujii, “Spike Train Decoding Scheme for a Spiking
[6] Hesham H. Amin and Robert H. Fujii, “Spiking Neural Network Inter-Spike Time
Based Decoding Scheme”, Special Issue of IEICE Trans. of Circuits and Systems,
[7] Hesham H. Amin and Robert H. Fujii, “Spike Train Classification Based on Spiking
[8] Hesham H. Amin and Robert H. Fujii, “Learning Algorithm for Spiking Neural Net-
[9] Hesham H. Amin and Robert H. Fujii, “Sound Classification and Function Approx-
[10] Hesham H. Amin and Robert H. Fujii, “Spike Train Learning Algorithm, Applications,
and Analysis”, 48th IEEE Int'l Midwest Symposium on Circuits and Systems, Ohio,
2005.
[11] P. Baldi and W. Heiligenberg, “How sensory maps could enhance resolution through
1988.
[12] S.M. Bohte, H. La Poutr and J.N. Kok, “Unsupervised classification in a Network
of spiking neurons”, IEEE Trans. Neural Networks, Vol.13, No.2, pp.426-435, 2002.
[13] S.M. Bohte, J.N. Kok and H. La Poutré, “Spike-prop: error-backpropagation in multi-
[14] Emery N. Brown, Robert E. kass, and Partha P. Mitra, “Multiple neural spike train
[15] C. E. Carr, “Processing of temporal information in the brain”, Annu Rev Neurosci.,
[16] Hideyuki Câteau and Tomoki Fukai, “A Stochastic Method to Predict the Conse-
and Spatial Hearing in Real and Virtual Environments, ed. R. H. Gilkey and T. B.
[20] W. Gerstner and W. Kistler, Spiking Neuron Models, Single Neurons, Populations,
[21] Simon Haykin, Neural Networks, A Comprehensive Foundation, Prentice Hall In-
[22] W. Gerstner, R. Kempter, J.L. van Hemmen, and H. Wagner, “Hebbian learning
of pulse timing in the barn owl auditory system”. In: W. Maass and C.M. Bishop
[23] J. J. Hopfield, “Pattern recognition computation using action potential timing for
[24] J. J. Hopfield and C. D. Brody, “What Is a Moment? ‘Cortical’ Sensory Integration
Over a Brief Interval”, Proc. Natl. Acad. Sci. USA, Vol.97, No.25, pp.13919-13924,
2000.
[25] J. J. Hopfield and C. D. Brody, “What Is a Moment? Transient Synchrony as a
Collective Mechanism for Spatiotemporal Integration”, Proc. Natl. Acad. Sci. USA,
[26] M. Konishi, “Listening with two ears”, Sci. Amer., Vol.268, No.4, pp.66-73. 1993
[27] W. Maass, “Fast sigmoidal networks via spiking neurons”, Neural Computation,
[28] W. Maass, “Networks of spiking neurons: the third generation of neural network
[29] W. Maass, ”Lower bounds for the computational power of networks of spiking neu-
[30] W. Maass, ”Noisy spiking neurons with temporal coding have more computational
tems, ed. M. Mozer, M. I. Jordan, and T. Petsche, Vol.9, pp.211-217, MIT Press,
Cambridge, 1997.
[31] W. Maass and C. Bishop, editors, Pulsed Neural Networks, MIT press, Cambridge,
1999.
[35] T. Natschläger and B. Ruf, “Spatial and Temporal Pattern Analysis via Spiking
[37] A. Pouget, S. Deneve, J. C. Ducom, and P. Latham, “Narrow versus wide tuning
curves: What’s best for a population code?”, Neural Computation, 11(1), pp.85-90,
1999.
[38] Fred Rieke, David Warland, Rob de Ruyter van Steveninck and William Bialek,
Spikes: Exploring the Neural Code (Computational Neuroscience), MIT press, 1997.
[39] Berthold Ruf: Computing and Learning with Spiking Neurons - Theory and Simu-
lations, Chapter (8), Doctoral Thesis, Technische Universitaet Graz, Austria, 1997.
Available: ftp://ftp.eccc.uni-trier.de/pub/eccc/theses/ruf.ps.gz
[40] B. Ruf and M. Schmitt, “Self-Organization of Spiking Neurons using Action Poten-
tial Timing”, IEEE Trans. Neural Networks, Vol.9, No.3, pp.575-578, 1998.
[41] Jonathan Z. Simon, Catherine E. Carr, and Shihab A. Shamma, ”A dendritic model
269, 1999.
[45] Jacek M. Zurada, Introduction to Artificial Neural Systems, Pws Pub Co., 1992.