
Spiking Neural Networks

Learning, Applications, and Analysis

Hesham Hamed Amin AbuElhassan

A DISSERTATION
SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

Graduate Department of Computer Systems


The University of Aizu
2005
Contents

List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.1 Past Research in SNN . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.2 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.3 Dissertation Organization and Contributions . . . . . . . . . . . . . 5

Chapter 2. Model of Spiking neural networks . . . . . . . . . . . . . . . . . . . . 11

Chapter 3. Input Arrival-Time-Dependent Mapping . . . . . . . . . . . . . . . . 15

3.1 Computational Model for Mapping-Learning Organization . . . . . . 15

3.2 Spatio-temporal Pattern Mapping Scheme . . . . . . . . . . . . . . . 16

3.3 Spike Train Mapping Scheme . . . . . . . . . . . . . . . . . . . . . . 24

3.4 Application of The Mapping scheme . . . . . . . . . . . . . . . . . . 25

3.4.1 Spatio-temporal Patterns Applications . . . . . . . . . . . . . 25

3.4.2 Spike Train Patterns Applications . . . . . . . . . . . . . . . 28

3.5 Mapping a Spike Train Using a Single Mapping Unit . . . . . . . . . 33

Chapter 4. Learning and Clustering using Spiking Neural Networks . . . . . . . 41

4.1 The Learning Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 41

4.2 Learning Unit Output Time Uniqueness . . . . . . . . . . . . . . . . 45


4.2.1 Winner-Take-All Scheme . . . . . . . . . . . . . . . . . . . . . 45

4.2.2 Two-Input Sub-Learning Unit Scheme . . . . . . . . . . . . . 46

4.3 Coincidence Detection Neuron . . . . . . . . . . . . . . . . . . . . . . 51

4.4 Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

4.5 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

4.5.1 Spiking Neural Network Realization of the XOR Function . . 52

4.5.2 Classification of Spike Trains . . . . . . . . . . . . . . . . . . 55

Chapter 5. SNN Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

5.1 Function Approximation . . . . . . . . . . . . . . . . . . . . . . . . . 60

5.1.1 Encoding Continuous Input Variables Into Spike-Times . . . 60

5.1.2 Example of Function Approximation . . . . . . . . . . . . . . 61

5.2 Classification of Materials Based on Impact Sounds . . . . . . . . . . 63

5.3 Classification of Complex Spike Train Patterns . . . . . . . . . . . . 66

Chapter 6. Further Analysis of the Mapping-Learning Organization . . . . . . . 73

6.1 Analysis of the Mapping Stage Parameters . . . . . . . . . . . . . . . 73

6.1.1 Multiple Mapping Units for a Single Spike Train Input . . . . 73

6.1.2 Mapping Stage Threshold Values ϑ . . . . . . . . . . . . . . . 77

6.1.3 Required Input and Mapping Time-Windows . . . . . . . . . 83

6.2 Learning Stage Parameter Calculations . . . . . . . . . . . . . . . . . 89

6.2.1 Time Window for the Learning Stage . . . . . . . . . . . . . . 96

6.3 Complexity of the Mapping-Learning Organization Algorithm . . . . 101


Appendix A. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

Appendix B. Matlab Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Tables

4.1 XOR Input spike times (including the bias) and output times. . . . . . . 55

4.2 Input spike train classification, clustering, and final output times. . . . . 56

5.1 Comparison of proposed and RBF learning algorithms. . . . . . . . . . . 63

5.2 Impact sound based material classification accuracy. . . . . . . . . . . . 64

5.3 Complex spike train classification and comparison with ANN back-propagation
    method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

6.1 Output separation times for different Δt values. . . . . . . . . . . . . . . 88


Figures

1.1 (A) Neuron with spatio-temporal excitatory inputs t_i and output t_j. (B)
    Neuron with a spike train input and output t_j. . . . . . . . . . . . . . . 3

1.2 Research flowchart. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.3 Block diagram of the learning-mapping organization. . . . . . . . . . . . 7

1.4 (A) Block diagram of sound localization application. (B) Block diagram

of sound spike train mapping. . . . . . . . . . . . . . . . . . . . . . . . . 7

1.5 Block diagram of sound classification. . . . . . . . . . . . . . . . . . . . 9

1.6 Block diagram of function approximation. . . . . . . . . . . . . . . . . . 10

2.1 Spike-Response Function α(t) with a time constant τ = 10 . . . . . . . . 12

3.1 Combined Mapping-Learning Organization. . . . . . . . . . . . . . . . . 16

3.2 The combined mapping unit. . . . . . . . . . . . . . . . . . . . . . . . . 17

3.3 ISI mapping neuron. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

3.4 Rank Order Coder (ROC). . . . . . . . . . . . . . . . . . . . . . . . . . 20

3.5 Rank Order Coder (ROC) with four inputs. . . . . . . . . . . . . . . . . 22

3.6 Spike train input and its equivalent representation as a spatio-temporal

pattern input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

3.7 Distribution of sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.8 Sound localization unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

3.9 Four microphones with different distance in between. . . . . . . . . . . . 28


3.10 Output times of the ISI1 and ISI2 neurons for different input spike trains
     representing the words "Zero", "One", and "Two". . . . . . . . . . . . . 29

3.11 Spike trains generated by Poisson distribution. . . . . . . . . . . . . . . 30

3.12 Output times of the ISI1 and ISI2 neurons for different input spike trains
     representing noisy spike trains (noise type1) . . . . . . . . . . . . . . . . 32

3.13 Output times of the ISI1 and ISI2 neurons for different input spike trains
     representing noisy spike trains (noise type2) . . . . . . . . . . . . . . . . 33

3.14 Output times of the ISI1 and ISI2 neurons for different input spike trains
     representing noisy spike trains (for exponential W function and noise
     type1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

3.15 Output times of the ISI1 and ISI2 neurons for different input spike trains
     representing noisy spike trains (for exponential W function and noise
     type2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

3.16 Output time differences produced by the ISI1 neuron for two different

input spike trains. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

3.17 Output potential of the ISI1 neuron for two different input spike trains. . . 36

3.18 Output potential of the ISI2 neuron for two different input spike trains. . . 37

3.19 Output time differences produced by the ISI1 neuron for two different
     input spike trains (for exponential W function). . . . . . . . . . . . . . . 38

3.20 Output potential of the ISI1 neuron for two different input spike trains (for
     exponential W function). . . . . . . . . . . . . . . . . . . . . . . . . . . 39

3.21 Output potential of the ISI2 neuron for two different input spike trains (for
     exponential W function). . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.1 ISI neuron for the Learning Stage. . . . . . . . . . . . . . . . . . . . . . 42

4.2 Learning Unit Details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

4.3 Combined Mapping-Learning Organization. . . . . . . . . . . . . . . . . 44

4.4 Winner Take-All Scheme. . . . . . . . . . . . . . . . . . . . . . . . . . . 46

4.5 Learning Unit (LU) with Sub-Learning Units. . . . . . . . . . . . . . . . 47

4.6 (A) Multiple Input ISI Neuron. (B) Two-Input (plus the local reference

input) Sub-Learning Unit’s ISI Neuron. . . . . . . . . . . . . . . . . . . 48

4.7 XOR Function Input Classification. . . . . . . . . . . . . . . . . . . . . . 53

4.8 Spiking neural network for XOR function with spatio-temporal encoding
    for logical input "001". Details of a learning unit are shown. . . . . . . . 54

4.9 The original spike train for each class is spike train number 1. The other

five trains are noisy versions of it (one class is represented). . . . . . . . 58

5.1 Input x variable encoded into 8 spike times using gaussian receptive fields. 60

5.2 Function approximation for different tolerances with ϑ = 0.3, β = 0.5,

and τ = 5.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

5.3 Steel plate impact sound waveform and its corresponding 20 spike trains. 65

5.4 Two patterns with 10 spike trains each (the other patterns are not

shown). (A) A pattern with spike trains generated with a rate of 20

spikes/spike train. (B) 50 spikes/spike train . . . . . . . . . . . . . . . . 67

5.5 Back propagation neural network. . . . . . . . . . . . . . . . . . . . . . . 70

6.1 Neuron internal potential . . . . . . . . . . . . . . . . . . . . . . . . . . 74

6.2 Change of output time with changing of β . . . . . . . . . . . . . . . . . 76


6.3 Presentation of input time window and mapping time window. . . . . . 77

6.4 (A) Spike train arrives at the latest possible times. (B) Spike train arrives

at the earliest possible times. . . . . . . . . . . . . . . . . . . . . . . . . 78

6.5 ISI1 neuron internal potential. . . . . . . . . . . . . . . . . . . . . . . . 79

6.6 Plots of $\sum \frac{1}{n}$ and two other approximations of it. . . . . . . . . . . . . . 83

6.7 ISI1 mapping neuron output time window vs. β_1 for different weight
    W_11 values (Δt = 3ms and τ = 2.5sec). . . . . . . . . . . . . . . . . . . 89

6.8 ISI1 mapping neuron output separation times vs. β_1 for different weight
    W_11 values (Δt = 3ms and τ = 2.5sec). . . . . . . . . . . . . . . . . . . 90

6.9 ISI2 mapping neuron output time window vs. β_2 for different weight
    W_12 values (Δt = 3ms and τ = 2.5sec). . . . . . . . . . . . . . . . . . . 91

6.10 ISI2 mapping neuron output separation times vs. β_2 for different weight
     W_12 values (Δt = 3ms and τ = 2.5sec). . . . . . . . . . . . . . . . . . . 92

6.11 ISI neuron for the Learning Stage. . . . . . . . . . . . . . . . . . . . . . 92

6.12 Presentation of input time window and learning time window. . . . . . . 93

6.13 (A) All spatio-temporal input spikes come at the latest possible times.
     (B) All spatio-temporal input spikes come at the earliest possible times. . 93

6.14 ISI1 learning neuron output time window vs. β_1 for different weight ω_1
     values (Δt = 1msec and τ = 2.5sec). . . . . . . . . . . . . . . . . . . . . 99

6.15 ISI2 learning neuron output time window vs. β_2 for different weight ω_2
     values (Δt = 1msec and τ = 2.5sec). . . . . . . . . . . . . . . . . . . . . 100


A.1 Internal Potential for ISI1 and ISI2 neurons, (A) and (B) $t_n^A > t_n^B$ and
    $s_{n-1}^A > s_{n-1}^B$; (C) and (D) $t_n^A < t_n^B$ and $s_{n-1}^A > s_{n-1}^B$; (E) $t_n^A = t_{n-1}^B$
    and $s_{n-1}^A > s_{n-1}^B$; (F) $t_n^A = t_{n-1}^B$ and $s_{n-1}^A < s_{n-1}^B$. . . . . . . . . . . . 109
Abstract

Artificial Neural Networks (ANNs) are considered to be special circuits that have the ability to approximate, classify, and perform other computations that can be considered as emulating some of the functions performed in the biological brain. In this research, focus has been placed on a relatively new field of ANNs called "Spiking Neural Networks (SNNs)". A biological basis exists for SNNs. Research on artificial SNNs has gained momentum in the last decade due to their ability to emulate biological neural network signals and their enhanced computational capabilities.

Input data arrives into a spiking neural network as temporal data instead of as values within a time window (rate code) as was used in the classical ANNs. The input data into such a neural network arrives in the form of a sequence of pulses or spikes in time, which are called "spike train" patterns. Another type of input data consists of single spikes in time which are distributed spatially at the input terminals; these are called "spatio-temporal" patterns.

For both types of input data, spatio-temporal and spike train patterns, a pre-processing method, a learning algorithm, and a thorough analysis of a practical model are needed, because research is scant regarding these points, especially in the case of input spike trains.

Emphasis has been placed on finding a robust and practical SNN learning algorithm, as well as an analysis of how various parameters affect the algorithm's behavior. A special pre-processing stage (the mapping stage) was used to convert spike train pattern inputs into spatio-temporal outputs.

Another main point of this research was to achieve a learning organization that

can be practically implemented. Hence, learning schemes have been developed in a way

that avoids complex or costly designs.

The analysis of the mapping-learning organization was carried out in order to determine appropriate design parameter values to be used for the neuron threshold potential value and appropriate time windows for the mapping and the learning stages.

The use of the resulting model in conjunction with devices (such as motors and sensors) can help people with disabilities. This work may lead us a step closer to achieving robust and computationally efficient neural models that can be implemented as small VLSI chips that can be implanted in the human body to replace damaged or malfunctioning parts.
Chapter 1

Introduction

Considerable research progress in the field of Artificial Neural Networks (ANNs)

has been achieved especially in the past couple of decades. An ANN is a computational

model whose functions are inspired by fundamental principles of real biological neural

networks [21], [45]. Evidence exists that biological neurons communicate with each

other through short pulses, or spikes, which are emitted at varying rates at different

times [1], [17], [38]. In the classical ANNs, firing rate (rate code) has been considered

for a long time as the relevant information being exchanged between neurons as inputs

and outputs [38]. However, in the classical ANNs the input and output firing rates are

represented as a binary or analog value. The firing rate represents the number of active

inputs within a specific time window.

Research which attempts to understand and utilize the neural code with which

biological neurons communicate in the temporal domain is relatively new [1]. Information

in the temporal domain consists of the neuron spike times relative to a local reference

time. The rate code does not consider spike times; hence, valuable information contained

succinctly in the spike times is not utilized. In addition, in the context of fast information

processing there is not always sufficient time to sample an average firing rate. Hence,

it seems very promising to extend the computational possibilities of ANNs by utilizing spiking neurons, which use spike time information [14], [31], [38].

SNN can be considered as a third generation of ANN after the perceptron neuron

and the sigmoidal function generation neuron [28], [31]. When a spiking neuron fires,

it sends an electric pulse which is commonly referred to as “action potential, or spike”.

Spikes induce a post-synaptic potential in the neuron according to a predefined response

function. A spike in a spiking neuron is considered to be a pulse with unit amplitude and

very short duration; thus, it simply acts as a triggering signal. Information in a SNN

is contained in the timing of individual input spikes. This gives SNNs the capability

of exploiting time as a resource for coding and computation in a more efficient manner

than typical rate code based neural networks.

Synaptic spike inputs with only one spike per synapse during a given time window

are called spatio-temporal inputs. A spike train synaptic input consists of a sequence of

spikes with various inter-spike times which occur during a given time window. Examples

of spatio-temporal inputs and a spike train are shown in Figure 1.1.

The spike times within a spike train have a much larger encoding space than the simple rate code used in classical neural networks, as was shown in [29], [30]. A

practical spiking neuron model, practical and efficient encoding/learning algorithms,

and algorithm parameter analyses for practical implementation are needed.

Substantial evidence exists which indicates that the timing sequence of neuronal spikes (spike trains) is utilized in biological neural signal processing [1], [38].

Exploration of the computational possibilities of SNNs may lead to new computing paradigms suitable for fast hardware implementations, e.g. VLSI neural network chips [34]. In addition, this research may shed some light on the neural code used by biological neurons. So far not much is known about the computational mechanisms used

Fig. 1.1. (A) Neuron with spatio-temporal excitatory inputs $t_i$ and output $t_j$. (B) Neuron with a spike train input and output $t_j$.

by biological neurons to process the spike timing information. An overview of SNN models and research is given in [20], [31], [38].

1.1 Past Research in SNN

Learning algorithms for spiking neural networks proposed in the past by others

have employed spatio-temporal input patterns and used synaptic weights and delays as

well as multiple sub-synapses (in some of these models) for supervised learning [3], back-propagation learning (SpikeProp) [13], unsupervised learning [35], self-organizing map

(SOM) learning [40], and radial basis function (RBF) based learning [39]. In [27], it has

been shown that the timing of spikes can be used to simulate sigmoidal gates with SNNs.

In [24], [25] a novel learning method which uses the neuron input spike times to

trigger a set of currents with a variety of fixed decay rates is proposed; however, the network used a large number of neurons and learning convergence took many iterations. The

SNN learning algorithm proposed in this research is a one step learning algorithm and

utilizes only synaptic weights for learning; hence, the proposed SNN learning algorithm

may be considered to be more efficient and practical than past approaches.

In [32], [33] learning of spike train patterns has been investigated by mapping the

input spike trains into another set of spike trains using a highly complex spiking neural

network (liquid state machine (LSM)). The output of the mapping stage used in [32], [33]

was fed into a read-out circuit that was trained to distinguish inputs. The drawbacks of the spiking neural network model used by [32], [33] are: a) a highly complex network structure which employed many neurons; b) the use of recurrent connections which make analysis very complex; c) the use of a large number of synapses which required various

parameters; and d) the relatively long time needed to decode and process input spike

trains.

1.2 Objectives

The objective of this research was to advance the current state of knowledge in

the field of spiking neural networks. New research results were obtained in the following

areas:

• A one-to-one mapping (pre-processing) of input spike trains into spike outputs;

• A learning algorithm for an SNN that employs spike train patterns as inputs;

• A practical (implementable and efficient) learning algorithm;

• An analysis of the proposed SNN mapping/learning algorithm parameters.

1.3 Dissertation Organization and Contributions

The flowchart of the thesis chapters is shown in Figure 1.2.

Fig. 1.2. Research flowchart (signal pre-processing and applications: Chapter 5; mapping stage: Chapter 3; learning stage: Chapter 4; pre-computing of parameters: Chapter 6).

This dissertation is organized as follows:



Chapter 2 : Biological and computational foundations of spiking neural networks are

described. The model that is used in this research is explained in detail.

Chapter 3 : The mapping stage (pre-processing stage) is described. This stage is mainly proposed to simplify the problem of learning input spike trains. The mapping stage transforms input spike trains into spatio-temporal outputs which are employed in the subsequent learning stage for learning irregular spike trains with different numbers of spikes, as shown in Figure 1.3. The main emphasis in the mapping stage is the one-to-one relationship between inputs and outputs, which is proved in detail in Appendix A. The proposed mapping stage can be used as a universal real-time pre-processing stage for any input spike trains. Emphasis has been placed in this research on the applicability of the proposed schemes; thus, the mapping model uses only synaptic weight modification to achieve a one-to-one input/output relationship.

The use of half of the number of the proposed neurons is also discussed, together with the restrictions that apply when it is used in this way. Applications are used to show the usefulness of the proposed mapping scheme, such as sound localization as shown in Figure 1.4(A), and mapping sound spike trains into spatio-temporal patterns as shown in Figure 1.4(B).

Chapter 4 : The proposed learning stage is described. The main challenge in this part of the thesis is that a practical design is needed. Not many SNN learning algorithms have been proposed by others for spatio-temporal patterns; those that exist use delays to synchronize the input spikes, in addition to

Fig. 1.3. Block diagram of the learning-mapping organization: the signal pre-processing stage produces n spike trains, the mapping stage (ISI1/ISI2 neurons) produces 2n spikes, and the learning stage (with coincidence detection) feeds the output neuron.

Fig. 1.4. (A) Block diagram of sound localization application. (B) Block diagram of sound spike train mapping.

the use of synaptic weights and sub-synapse connections. However, the design of the delay parameters is not practically an easy matter, as the input signals need to be stored for some time and released later in delayed form; thus, a large circuit area is required to realize these parameters. Sub-synapse connections may give more freedom in the learning, but they obviously need more computation than a single synapse connection, as well as a large circuit area.

In this research, the learning units can perform their task with simple computations using only synaptic weight modification, which is considered to be a significant contribution. In the learning stage, as in the mapping stage, a one-to-one relationship between inputs and outputs is maintained in the learning units to give a unique center for each learning unit (cluster). The same neuron building blocks of the mapping stage are used in the learning stage but with a different function. Each synaptic weight is modified by both the mapping and learning rules using locally available information.

Chapter 5 : Simulations and real-world applications are used to demonstrate the robustness of the proposed mapping-learning organization. One of the applications is an original impact sound data-set of different materials, collected in the laboratory to be used for classification as shown schematically in Figure 1.5. Another application is function approximation, which is used as a linearly non-separable input application as shown schematically in Figure 1.6. A comparison with another learning method of classical ANNs is introduced. The last application is the classification of complex input patterns, each consisting of 10 spike trains generated with a different rate for each pattern. A comparison with the well-known back-propagation method is introduced. It is shown that the proposed learning algorithm is faster in learning and needs fewer computation steps.

Fig. 1.5. Block diagram of sound classification: signal pre-processing (onset, peak, and offset detection with n filters) produces n spike trains, the mapping stage produces 2n spikes, and the learning stage produces the output.

Chapter 6 : Further analyses and design criteria are introduced for both the mapping and the learning stage parameters. This is done to give a clear idea of how the parameters of both stages can be chosen. Also, the time needed to obtain the correct result of each stage, as well as of the overall model, is determined.

It should be emphasized that this thesis is not about biology but about possibilities of computing with spiking neurons which are inspired by biology. It is not claimed that any of the mechanisms described here do occur in exactly the same way in biological systems. However, a thorough understanding of such networks, which are rather simplified in comparison to real biological networks, may be helpful for detecting and understanding possible similar mechanisms in biological systems or could help in replacing some malfunctioning part in the brain in the near future.

Fig. 1.6. Block diagram of function approximation.



Chapter 2

Model of Spiking neural networks

Many models have been proposed to represent a spiking neuron. The most biologically accurate model uses a conductance-based neuron with many complex details [31]. This model can reproduce the behavior of a biological neuron with a high degree of accuracy. Unfortunately, such a model is complex and analytically difficult to use to model the dynamic behavior of a spiking neuron. For this reason, simpler spiking neuron models have often been used. In one of these simple models spike outputs are generated whenever the neuron potential crosses some pre-determined threshold ϑ from below [20].

The spiking neuron model employed in this research is based on the Spike Response Model (SRM) [20]. Input spikes come at times $\{t_1 \cdots t_i \cdots t_n\}$ into the input synapse(s) of a neuron as shown in Figure 1.1.

The neuron outputs a spike when the internal neuron membrane potential $x_j(t)$ crosses the threshold potential ϑ from below at firing time $t_j$:

$$t_j = \min\{t : x_j(t) \geq \vartheta\} \tag{2.1}$$

The threshold potential ϑ is assumed to be constant for the neuron. After the

firing of a spiking neuron, it does not respond to any input spikes for a limited period

of time which is called “the neuron refractory time”. However, the neuron refractory

time is not taken into account in this research. The relationship between input spike

times and the internal neuron potential (or Post Synaptic Potential (PSP)) $x_j(t)$ can be described as follows:

$$x_j(t) = \sum_{i=1}^{n} W_i \cdot \alpha(t - t_i) \tag{2.2}$$

where $i$ represents the $i$th synapse, $W_i$ is the $i$th synaptic weight variable which can change the amplitude of the neuron potential $x_j(t)$, $t_i$ is the $i$th input spike arrival time, and $\alpha(t)$ (Figure 2.1) is the spike response function defined as follows:

$$\alpha(t) = \frac{t}{\tau}\, e^{1 - \frac{t}{\tau}} \tag{2.3}$$

where $\tau$ represents the membrane potential decay time constant.

Fig. 2.1. Spike-Response Function α(t) with a time constant τ = 10 (voltage vs. time).


In this research, the α(t) function in Equation 2.3 is approximated as a linear function for $t \ll \tau$ as follows:

$$\alpha(t) = \frac{t}{\tau} \cdot e \tag{2.4}$$

$e$ can be included as a constant in $\tau$, i.e., Equation 2.4 can be rewritten for simplicity as follows:

$$\alpha(t) = \frac{t}{\tau} \tag{2.5}$$

It then follows that Equation 2.2, the internal neuron potential, can be rewritten as:

$$x_j(t) = \frac{t}{\tau} \sum_{i=1}^{n} W_i \cdot u(t - t_i); \qquad t \ll \tau \tag{2.6}$$

where $u(t)$ is the unit step function:

$$u(t) = \begin{cases} 1 & \text{if } t \geq 0 \\ 0 & \text{otherwise} \end{cases} \tag{2.7}$$

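To make the simplified model concrete, the following is a minimal Python sketch (illustrative only, with assumed parameter values; the dissertation's own code in Appendix B is in Matlab) of Equations 2.1, 2.6 and 2.7: the membrane potential grows linearly with a slope set by the weights of the spikes that have already arrived, and the neuron fires at the first threshold crossing.

```python
import numpy as np

# Minimal sketch of the simplified Spike Response Model used in this research:
# Eq. 2.6 for the membrane potential (valid for t << tau) and Eq. 2.1 for the
# firing time. Parameter values below are assumed for illustration.
def membrane_potential(t, spike_times, weights, tau):
    """x_j(t) = (t / tau) * sum_i W_i * u(t - t_i)   (Eq. 2.6)."""
    arrived = t >= np.asarray(spike_times, dtype=float)   # u(t - t_i), Eq. 2.7
    return (t / tau) * np.sum(np.asarray(weights, dtype=float)[arrived])

def firing_time(spike_times, weights, theta, tau, dt=0.01, t_end=100.0):
    """t_j = min{t : x_j(t) >= theta}   (Eq. 2.1); None if the threshold is never reached."""
    for t in np.arange(0.0, t_end, dt):
        if membrane_potential(t, spike_times, weights, tau) >= theta:
            return round(float(t), 3)
    return None

# Three input spikes with equal weights and an assumed threshold of 0.3:
print(firing_time(spike_times=[1.0, 2.0, 3.0], weights=[0.5, 0.5, 0.5], theta=0.3, tau=10.0))
```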

Chapter 3

Input Arrival-Time-Dependent Mapping

3.1 Computational Model for Mapping-Learning Organization

Actual real-world time-domain signals such as sound, electrocardiogram signals, encephalogram signals, or time-varying analog sensor data can, when converted into spike trains, be processed in a relatively simple manner by spiking neural networks.

The mapping-learning organization algorithms proposed in this research are based

on being able to produce a one-to-one correspondence between input spike trains and

output spike firing times. In a spiking neuron, inputs from different synapse inputs

cause the post synaptic potential (PSP) to increase at some rate which is dependent on

the inter-spike intervals (ISI). By selecting an appropriate set of synaptic weights for a

neuron, a particular spike train or a set of spike trains which belong to the same class

can be distinguished by its unique neuron output firing time. A class C of an input

pattern set may be represented by at least one learning unit cluster Cl or maximum of

m learning units, where m is the number of input pattern samples in the class. The

combined mapping-learning organization is shown in Figure 3.1.

The mapping stage, which is described in detail in this chapter, is used for transforming the input spike train(s), large or spatio-temporal patterns, into simple, unique spatio-temporal output patterns [4], [5], [6]. The output spatio-temporal patterns are then used as inputs for the next stage which performs the learning task by clustering inputs into various classes (as described in Chapter 4). In both stages, a one-to-one mapping between inputs and outputs is necessary for processing information (the proof of the one-to-one mapping of the mapping scheme is given in Appendix A).

In the next sections, a detailed description of the mapping stage will be given.

Fig. 3.1. Combined Mapping-Learning Organization.

3.2 Spatio-temporal Pattern Mapping Scheme

The proposed mapping scheme utilizes the absolute arrival times of input spikes. This scheme can be used for a neuron with multiple-synapse spatio-temporal inputs and for a neuron with a single-synapse spike train input. The description of the mapping scheme will be given for a multiple-synapse spatio-temporal input neuron. For a single-synapse spike train input neuron, the mapping scheme is similar and can be done in a simpler manner because the order of the spike arrivals is implicit in a single spike train. The mapping model is composed of three parts as shown in Figure 3.2.


Fig. 3.2. The combined mapping unit (input spike terminals t_1, ..., t_n feeding the ISI1, ISI2 and ROC neurons with outputs t_out1, t_out2, t_out3).

The first part of the mapping model (Figure 3.3) utilizes the Inter-Spike Intervals

(ISI) between input spike times for mapping the inputs. An inter-spike interval is the

time between two successive input spikes which belong to some pattern and arrive at

different input synapses.

Fig. 3.3. ISI mapping neuron.

Input spikes that make up an input pattern vector arrive at different times at different input synapses as shown in Figure 3.3. The input spike times carry information about the input pattern. The input pattern spikes arrive at times $\{t_1, \ldots, t_n\}$, with some minimum time resolution Δt, into the input synapses. The ISI1 neuron (Figure 3.3) consists of two units: a) a neuron with excitatory inputs; and b) an Excitatory Post Synaptic Potential (EPSP) unit. The EPSP unit updates the dynamic weight variable $W_i$ in Equation 2.6 after every synaptic input according to the following equation:

$$W_i = \beta \ast t_i, \qquad W_1 = \omega, \qquad i = 2, 3, \ldots, n \tag{3.1}$$
where β is a small constant and i refers to the temporal order of the input spikes, not the spatial number of an input synapse, and ω is an initial weight value. Equation 3.1 shows that the value of the dynamic weight variable $W_i$ is proportional to the input spike time $t_i$.

The ISI2 neuron has a construction and function similar to the ISI1 neuron (Figure 3.3) except that the dynamic weight variable $W_i$ is defined as follows:

$$W_i = \frac{\beta}{t_i}, \qquad W_1 = \omega, \qquad i = 2, 3, \ldots, n \tag{3.2}$$

The ISI1 (ISI2) neuron fires output spikes at certain times which can be utilized to distinguish patterns whose order of input spikes is the same but whose actual spike times may be different. For instance, two patterns $P_A$ and $P_B$ with spike times $\{t_1^A = 1, t_2^A = 2, t_3^A = 3, t_4^A = 4\}$ and $\{t_1^B = 2, t_2^B = 4, t_3^B = 1, t_4^B = 3\}$ will give the same output spike time $t_{out1}$ ($t_{out2}$) because the inter-spike intervals are the same for this pair of patterns. In this case, by substituting in Equation 2.6, it can be shown that:

$$x_j^{ISI1}(t) = \beta(1 + 2 + 3 + 4)\frac{t}{\tau} = 10\beta\,\frac{t}{\tau}$$

$$x_j^{ISI2}(t) = \beta\left(1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4}\right)\frac{t}{\tau} = \frac{25}{12}\beta\,\frac{t}{\tau}$$

The output values are equal for both patterns $P_A$ and $P_B$ for each of the ISI1 and ISI2 neurons. The synaptic weight values may be initially set to be identical ($W_i = \omega$),

as shown in Figure 3.3. Using a combination of the ISI1 and ISI2 neurons produces

a one-to-one correspondence between inputs and outputs. Refer to Appendix A for a

proof that using the two neurons ISI1 and ISI2 together gives a one-to-one relationship

between the input and output.
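The example above can be reproduced with a short sketch. Under Equation 2.6 the potential grows linearly once all spikes have arrived, so, provided the threshold ϑ is set high enough that firing happens after the last input spike, the output time reduces to $t_{out} = \vartheta\tau / \sum_i W_i$. The Python below (assumed β, τ and ϑ values; the first weight is also taken from $t_1$, as in the worked example) shows that $P_A$ and $P_B$ give identical $(t_{out1}, t_{out2})$ pairs, which is exactly why the ROC neuron is needed.

```python
import numpy as np

# Sketch of the ISI1/ISI2 mapping outputs for the patterns P_A and P_B above.
# Assumption: the threshold crossing happens after the last input spike, so
# t_out = theta * tau / sum(W); beta, tau and theta values are illustrative.
def isi_output_time(spike_times, theta, tau, beta, kind="ISI1"):
    t = np.asarray(spike_times, dtype=float)
    weights = beta * t if kind == "ISI1" else beta / t   # Eq. 3.1 / Eq. 3.2
    return theta * tau / weights.sum()

P_A = [1.0, 2.0, 3.0, 4.0]   # same spike times assigned to different synapses,
P_B = [2.0, 4.0, 1.0, 3.0]   # i.e. a different spatial order
beta, tau, theta = 0.1, 10.0, 0.5
for name, p in [("P_A", P_A), ("P_B", P_B)]:
    print(name,
          "t_out1 =", round(isi_output_time(p, theta, tau, beta, "ISI1"), 3),
          "t_out2 =", round(isi_output_time(p, theta, tau, beta, "ISI2"), 3))
# Both patterns give the same (t_out1, t_out2), so the ROC neuron must resolve the order.
```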

Fig. 3.4. Rank Order Coder (ROC).

As these two neurons can distinguish different patterns if and only if their input

spikes come in the same order but with various different arrival times (as shown in the

example above), it is necessary to have another part which distinguishes the order of

arrival of the spikes comprising a pattern regardless of their absolute time.

Order of arrival means which input spike arrives first, which one is the second

and so on regardless of the actual input spike times.



Rank Order Coding (ROC) [36] is a suitable approach to distinguish the order of

the input spike arrivals. The ROC neuron is composed of two units, as shown in Figure

3.4. One of these units utilizes an excitatory neuron and the other unit an inhibitory-

like neuron with a special function for attenuating the synapse weights according to the

order of arrival of the spatio-temporal inputs. The weight values must be distinct in this

neuron in order to produce distinct output times [36].

To distinguish the order of arrival of the spikes, the inhibitory neuron in the ROC

neuron acts to progressively inhibit the effect of later arriving spikes. The first input

spike has the strongest effect, the second input gives the next strongest effect and so on

until the last spike has the weakest effect.

Consider the case in Figure 3.5 where four inputs arrive into the ROC unit at four different times. Assume that $t_1 < t_2 < t_3 < t_4$: $t_1$ is the time of the first spike, $t_2$ is the time of the second spike, and so on until the last spike at time $t_4$. Initially, the effect of the inhibitory neuron is null and so each input is maximally effective. However, every time one of the inputs arrives, the inhibitory neuron attenuates the effectiveness of the weights (for example by 50%). $t_{out3}$ can be calculated for input patterns $P_A$ and $P_B$ as follows:

$$t_{out3}^A = (1 \ast 0.5^0) + (3 \ast 0.5^1) + (5 \ast 0.5^2) + (10 \ast 0.5^3) = 5.0$$

$$t_{out3}^B = (5 \ast 0.5^0) + (1 \ast 0.5^1) + (10 \ast 0.5^2) + (3 \ast 0.5^3) = 8.375$$

Fig. 3.5. Rank Order Coder (ROC) with four inputs (synaptic weights 1, 3, 5, and 10).

Any other input pattern with a different order will produce a different output

time value.
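The ROC computation in the example can be sketched as follows (a hypothetical helper, not the dissertation's code; the weight orderings are chosen to reproduce the two sums above, and a fixed 50% attenuation per arrival is assumed):

```python
# Sketch of the Rank Order Coder output: every arriving spike's synaptic weight is
# attenuated by 50% for each spike that arrived before it, so earlier spikes count more.
def roc_output(arrival_times, weights, attenuation=0.5):
    """arrival_times[k] and weights[k] belong to the same input synapse k."""
    order = sorted(range(len(arrival_times)), key=lambda k: arrival_times[k])
    total = 0.0
    for rank, k in enumerate(order):
        total += weights[k] * attenuation ** rank   # later arrivals contribute less
    return total

# Weights seen in order of arrival are (1, 3, 5, 10) for P_A and (5, 1, 10, 3) for P_B:
print(roc_output([1, 2, 3, 4], [1, 3, 5, 10]))   # 5.0
print(roc_output([1, 2, 3, 4], [5, 1, 10, 3]))   # 8.375
```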

Modifications to the original ROC inhibition sequence and synaptic weights proposed in [36] are necessary to account for cases in which two or more input pattern spikes arrive at the same time. In such a case, the inhibitory neuron must be able to recognize that two or more inputs have arrived simultaneously and compensate accordingly.

For example, the ROC can work with the inhibition sequence [7.3, 3.0, 1.7, 0.5] together with the synaptic weights shown in Figure 3.5. For all possible order combinations, including the cases when the four inputs arrive at different times, a one-to-one coding relationship between the inputs and outputs can be obtained with these inhibition sequences and synaptic weights.


To obtain an appropriate set of synaptic weights and inhibition sequences, the following algorithm was used: a) determine a set of synaptic weights (within some appropriate magnitude range as shown in Figure 3.5) such that any sub-set sum of the synaptic weights is not equal to any other sub-set sum of the remaining synaptic weights; b) determine an appropriate set of inhibition sequence values for all possible input order sequences using an iterative search algorithm.

Real-time mapping of inputs is possible when the ISI1, ISI2 and ROC neurons work simultaneously. Unique output spike combinations at $t_{out1}$, $t_{out2}$ and $t_{out3}$ will be produced for all input patterns. In the ISI1 (ISI2) neuron, all output spike times $t_{out1}$ ($t_{out2}$) must be larger than the last input spike time $t^{max} = \max\{t_i, i = 1, 2, \ldots, n\}$ of all the input patterns to be mapped, where n is the number of input spikes defining the input pattern. This means that the patterns to be learned must be known a priori in order to know the time range of the input spikes for all the input patterns. Furthermore, all input spikes have equal importance, so all spikes representing a pattern must be utilized to determine the neuron membrane potential.

The objectives of the ISI1 (or ISI2) neuron mapping scheme are:

1. All n input spikes are included.

2. Allow enough time for the mapping to complete.

These two objectives are achieved by the following:

1. Each input pattern l can be represented by a vector of input spike times $P_l[t_1 \ldots t_i \ldots t_n]$, where $t_i \in \mathbb{R}^+$ and n is the number of input spikes defining the input pattern.

2. Calculate the threshold value $\vartheta_1^{map}$ (refer to Chapter 6) so that the output firing time will be larger than $t^{max}$ ($T_1^{map} > t^{max}$).

The same scheme is used for the ISI2 neuron in a similar way to get $T_2^{map} > t^{max}$.
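A quick way to see how such a threshold can be chosen (a sketch under the linear model of Equation 2.6; the detailed parameter analysis is given in Chapter 6): if the potential at the last input spike time is still below the threshold, the crossing can only occur afterwards,

$$x_j(t^{max}) = \frac{t^{max}}{\tau}\sum_{i=1}^{n} W_i < \vartheta_1^{map}
\quad\Longrightarrow\quad
T_1^{map} = \frac{\vartheta_1^{map}\,\tau}{\sum_{i=1}^{n} W_i} > t^{max},$$

so choosing $\vartheta_1^{map} > \frac{t^{max}}{\tau}\max_l \sum_i W_i$, where the maximum is taken over all patterns $P_l$ to be mapped, guarantees that the ISI1 (and, analogously, the ISI2) output fires only after the last input spike has arrived.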

3.3 Spike Train Mapping Scheme

Spike train mapping can be done in the same manner as the mapping of a spatio-temporal pattern as explained in the previous section.

A spike train can be treated as a spatio-temporal pattern as shown in Figure 3.6. Since only one synapse's input is considered, the order of the spikes within a spike train does not need to be considered. Hence, the ROC neuron is not needed.

Fig. 3.6. Spike train input and its equivalent representation as a spatio-temporal pattern input.
In this case, a combination of the $\{t_{out1}, t_{out2}\}$ outputs is used to represent a single spike train as shown in Figure 3.6.

$t_1$ is the first spike which arrives at the neuron, as shown in Figure 3.6; the $t_1$ spike time is used as a reference time for the subsequent spikes in the same input spike train.
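A small sketch of this treatment of a spike train (hypothetical helper name; the first spike is taken as the reference time, as in Figure 3.6):

```python
# Sketch: a single spike train is handled as a spatio-temporal pattern by taking its
# first spike t_1 as the local reference time; the ROC neuron is not needed because
# the order of spikes within one train is implicit.
def spike_train_to_pattern(train):
    t_ref = train[0]
    return [t - t_ref for t in train]

print(spike_train_to_pattern([3.0, 5.5, 6.0, 9.2]))   # [0.0, 2.5, 3.0, 6.2]
```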

3.4 Application of The Mapping scheme

3.4.1 Spatio-temporal Patterns Applications

Sound localization was used as an interesting and useful application of the proposed mapping scheme. The azimuth and elevation angles were to be deduced from sound input data. Sound localization was thought to be an appropriate application because it can utilize the Inter-aural Time Differences (ITD), defined to be the difference between the arrival times of a sound signal at each ear. In the proposed mapping scheme, the sound signal itself can be used directly without complex modifications such as those needed in [22] and the Head Related Transfer Function (HRTF) approach [18], or using delays in multiple coincidence detectors [2], [15], [26], [41]. The reception time of a

sound at a sensor was determined by the first incoming audio signal which exceeded a

pre-determined sound level. Sensors representing right (R), left (L), front (F), back (B),

above (AB) and below (BL) were placed in their appropriate positions as shown in Figure

3.7.

Depending on the sound source location with respect to the six sensors, the spike

arrival time will be different for each of the sensors. The mapping unit, shown in Figure

Fig. 3.7. Distribution of sensors.

3.8, generates a set of output spikes for each input pattern. Echo effects were not considered in this application. The set of $\{t_{out1}, t_{out2}, t_{out3}\}$ times output by the mapping unit can be used to determine the sound source location. The output spike times $t_{out1}$ and $t_{out2}$ increased (or decreased) within the appropriate output spike time firing range. The $t_{out3}$ time did not change as long as the order of input spike arrival times was unchanged; i.e., $t_{out3}$ was not affected by the actual input spike arrival times within the same region with respect to the sensors. As a result, the three output times $\{t_{out1}, t_{out2}, t_{out3}\}$ could be used to determine the exact position of a sound source. These outputs can be used as inputs to a learning stage to get only one output representing the exact sound source location.

The proposed mapping scheme for sound localization in this research has some

advantages over other sound localization approaches such as those in [18], [22]. For example:

Fig. 3.8. Sound localization unit.

(a) the use of the first incoming audio signal which exceeds a pre-determined sound level

instead of the use of the analysis of the sound signal itself; and (b) the azimuth and

elevation angles can be deduced directly from the input audio signal time differences,

thus there is no need to use the azimuth and elevation estimator circuits used in [18].

(c) Hebbian learning with integrate and fire neurons (IFNs) was used to achieve the

sound source localization [22]. Multiple synapses with delays were used to detect the

phase shift between signals resulting from analyzing the input sound with several pre-

processing stages.

Sound localization experiments were also carried out using real sounds gathered

using microphones placed at four different azimuth positions set at various distances

from each other in a circle as shown in Figure 3.9. The reception time of a sound at

a microphone was determined by the first incoming audio signal which exceeded a pre-

determined sound level. The distance between microphones ranged from 30 cm to 150

cm as shown in Figure 3.9. The sound localization experiments using real sounds were

carried out in two different environments: an anechoic chamber and a regular room filled

with furniture, different kinds of equipment, and with walls which reflect sound. In both

environments the recognition worked well because the model depends on sensing

the beginning of an input signal and not the subsequent sound signals.

Fig. 3.9. Four microphones with different distances in between (30 to 150 cm).

3.4.2 Spike Train Patterns Applications

For testing the mapping unit’s ability to discriminate among various spike trains,
1
a publicly accessible dataset consisting of input sound files for various words spoken by

1 http://neuron.princeton.edu/∼moment/Organism
29

different speakers was used. These sound files (in wav-format) represented the numbers

0,1,...,9 spoken by various speakers. The wav-format files were transformed into 40-

channel spatio-temporal spike events using a bank of frequency filters [25]; the 40-channel

spatio-temporal spikes were then used in a single spike train. The output times of the

mapping unit for some of the spike trains representing the spoken words “zero”, “one”,

and “two” spoken by 5 different speakers are shown in Figure 3.10. From this plot,

it can be seen that the mapping unit output times for these spoken words appear in

separate groupings except for the word “one”. A non-linear classifier can be used after

the mapping unit for better clustering of the mapping unit outputs.

Fig. 3.10. Output times of the ISI1 and ISI2 neurons (ISI1 output time vs. ISI2 output time) for different input spike trains representing the words "Zero", "One", and "Two".
In this experiment, only the pre-processing of speech sounds has been tried using input spike trains. This was accomplished by mapping an input spike train using the proposed mapping model. To complete the classification of the inputs, the outputs of the

mapping units should be processed by another unit.

Randomly generated spike trains were used as another example to verify the

practicality of the proposed mapping scheme. Five spike trains were generated using a

Poisson distribution for the inter-spike times and assuming a mean firing rate of 15 Hz.

The five spike trains are shown in Figure 3.11.

Fig. 3.11. Spike trains generated by Poisson distribution (spike train number vs. time).



Noise was added to each of these five spike trains to check the mapping robustness. Noise consisted of time-shifting of the spikes and/or addition/deletion of a few spikes. For these particular five spike trains, the outputs for each original spike train as well as its noisy versions can be clustered so that the five spike trains can be distinguished, as shown in Figure 3.12 (referred to here as noise type1). For other sets of spike trains, non-linearly separable clusters may occur. However, when a different type of noise, with different added spike times, was applied to the original spike trains (referred to here as noise type2), the outputs may become non-linearly separable as shown in Figure 3.13.

It can be seen from Figures 3.12 and 3.13 that the outputs for some spike trains are either located very close to each other or somewhat spaced apart, depending on the amount of noise introduced into the original spike train. Synapse delays can be used in the next stage for clustering of outputs which belong to the same spike train.
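A sketch of how such test data could be produced (assumed rate, window and noise parameters; not the actual data-generation code used in the experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson spike train: exponential inter-spike intervals with a mean rate of 15 Hz,
# truncated to a 0.6 s window as in Figure 3.11.
def poisson_spike_train(rate_hz=15.0, window_s=0.6):
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_hz)
        if t > window_s:
            return np.array(times)
        times.append(t)

# Noise of the kind described above: small time shifts plus random deletion of spikes
# (addition of spurious spikes could be handled in the same way).
def add_noise(train, jitter_s=0.01, p_delete=0.1):
    jittered = train + rng.normal(0.0, jitter_s, size=train.shape)
    keep = rng.random(train.shape) > p_delete
    return np.sort(jittered[keep])

trains = [poisson_spike_train() for _ in range(5)]
noisy = [add_noise(tr) for tr in trains]
```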

If the dynamic weight variable $W_i$ in Equation 2.6 is changed to an exponential function of time, different characteristics can be produced:

ISI1 neuron:

$$W_i = \beta \ast e^{t_i}, \qquad W_1 = \omega, \qquad i = 2, 3, \ldots, n \tag{3.3}$$

ISI2 neuron:

$$W_i = \frac{\beta}{e^{t_i}}, \qquad W_1 = \omega, \qquad i = 2, 3, \ldots, n \tag{3.4}$$
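The exponential weight rules can be sketched in the same style as before (assumed β; with the closed-form output time $t_{out} = \vartheta\tau/\sum_i W_i$, the larger spread of the $e^{t_i}$ values translates into larger differences between mapped output times, consistent with the better separation reported for Figures 3.14 and 3.15):

```python
import numpy as np

# Exponential variants of the dynamic weights (Eqs. 3.3 and 3.4), sketched as drop-in
# replacements for the linear/reciprocal rules of Eqs. 3.1 and 3.2.
def isi1_weights_exp(spike_times, beta):
    return beta * np.exp(np.asarray(spike_times, dtype=float))

def isi2_weights_exp(spike_times, beta):
    return beta / np.exp(np.asarray(spike_times, dtype=float))
```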

Fig. 3.12. Output times of the ISI1 and ISI2 neurons (ISI1 output time vs. ISI2 output time) for different input spike trains representing noisy spike trains (noise type1).

The results for the two types of added noise, type1 and type2, using the exponential dynamic weight variable $W_i$ as in Equations 3.3 and 3.4, are shown in Figures 3.14 and 3.15 respectively. It is clear from these figures that better separation properties can be obtained. However, this analysis is case dependent and the results could be different for other spike train examples.



Fig. 3.13. Output times of the ISI1 and ISI2 neurons (ISI1 output time vs. ISI2 output time) for different input spike trains representing noisy spike trains (noise type2).

3.5 Mapping a Spike Train Using a Single Mapping Unit

The use of only one of the mapping neurons, either ISI1 or ISI2 instead of both,

is possible if the threshold voltage ϑ of the neuron can be specified a priori. The neuron

firing threshold voltage ϑ has to be chosen high enough so that any two spike trains (to

be mapped) can be differentiated.

In Figure 3.16, a plot of the output time differences produced by the ISI1 neuron

vs. neuron threshold voltage ϑ for two different spike trains using the dynamic weight

variable $W_i$ in Equation 3.1 is shown. An output time difference of zero means that

Fig. 3.14. Output times of the ISI1 and ISI2 neurons (ISI1 output time vs. ISI2 output time) for different input spike trains representing noisy spike trains (for exponential W function and noise type1).

the two spike trains have produced an ISI1 output at identical times. By choosing the

neuron threshold voltage ϑ appropriately, it is possible to have a large enough output

time difference to clearly distinguish a spike train. The choice of the neuron threshold

voltage ϑ has to be made in such a way that a) mapping time is not overly long and b)

all spike trains under consideration can be mapped appropriately.

The outputs of the ISI1 and ISI2 neurons for two different input spike trains, using the dynamic weight variable $W_i$ in Equations 3.1 and 3.2, are shown in Figures 3.17 and 3.18, respectively.

Fig. 3.15. Output times of the ISI1 and ISI2 neurons (ISI1 output time vs. ISI2 output time) for different input spike trains representing noisy spike trains (for exponential W function and noise type2).

For this particular pair of spike train inputs, it can be seen from Figure 3.18 that the ISI2 neuron can easily distinguish them because the two spike trains produce diverging outputs. If the outputs of the ISI1 neuron are used (Figure 3.17), the time differences are not as large (as those produced by the ISI2 neuron) and hence the use of the ISI1 output by itself may be inappropriate. However, the ISI1 neuron may still be used because of its simpler circuit design compared with the ISI2 neuron.



Fig. 3.16. Output time differences produced by the ISI1 neuron for two different input spike trains (time difference vs. threshold ϑ).

Fig. 3.17. Output potential of the ISI1 neuron for two different input spike trains (voltage vs. time).

Fig. 3.18. Output potential of the ISI2 neuron for two different input spike trains (voltage vs. time).

Figure 3.19 shows the relationship between output time differences and neuron

threshold voltage ϑ when the ISI1 neuron is used with the exponential function given in

Equation 3.3. Comparing the plots shown in Figures 3.16 and 3.19, it can be observed

that the output time differences are larger for small threshold voltages ϑ in the case

of the exponential W function. For example, at ϑ = 2, $t_{diff} \simeq 0.06$ (Figure 3.16) and $t_{diff} \simeq 0.1$ (Figure 3.19) for the linear (original) and exponential W functions, respectively. This implies that the use of an exponential W function can lead to better differentiation and faster mapping.

The neuron membrane potential plots for the ISI1 and ISI2 neurons utilizing the exponential W function are shown in Figures 3.20 and 3.21, respectively. The exponential W function can better differentiate input spike trains when the ISI1 neuron is used, as can be seen by comparing Figures 3.17 and 3.20.

Fig. 3.19. Output time differences produced by the ISI1 neuron for two different input spike trains (for exponential W function; time difference vs. threshold ϑ).

Fig. 3.20. Output potential of the ISI1 neuron for two different input spike trains (for exponential W function; voltage vs. time).

Fig. 3.21. Output potential of the ISI2 neuron for two different input spike trains (for exponential W function; voltage vs. time).

Chapter 4

Learning and Clustering using Spiking Neural Networks

4.1 The Learning Algorithm

The spatio-temporal patterns generated by the ISI1 and ISI2 neurons in the mapping stage are used as inputs for the learning stage where a supervised learning method

is used to classify input patterns [7], [8]. Clustering of input patterns which belong to

the same class is achieved by setting the synaptic weight vector for a neuron in a manner

that makes the output fire at approximately the same time for more than one input spike

train belonging to the same class.

The mapping stage ISI1 and ISI2 neurons are also used in the learning stage but

with multiple synapse inputs as shown in Figure 4.1. The learning unit (LU), which consists of the ISI1 and ISI2 neurons, is shown in Figure 4.2. Multiple synaptic inputs are needed

because the mapping stage produces several spatio-temporal outputs as shown in Figure

4.3. The neuron synaptic weights are assigned using, for the ISI1 neuron:

$$W_i = \beta \ast t_i, \qquad W_1 = \text{const}, \qquad i = 2, 3, \ldots, n \tag{4.1}$$

and for the ISI2 neuron:

$$W_i = \frac{\beta}{t_i}, \qquad W_1 = \text{const}, \qquad i = 2, 3, \ldots, n \tag{4.2}$$

as was done previously in the mapping stage.

Fig. 4.1. ISI neuron for the Learning Stage.

The reference time input $t_r$ shown in Figure 4.3 is used as a local reference time for the combined mapping-learning organization. The coincidence detection (CD) neuron fires when the outputs of a learning unit occur at nearly coincident times.

The learning algorithm works as follows:

1. Choose an input pattern vector (say P ) at random from the set of P = (P , P , ....)
A l A B

pattern vectors to be used for the learning phase. Each pattern P consists of the
l
43

Non Coincident Reference


Coincident time input tr

Inputs from mapping stage ISI1


t1 tout1
t2

tn Final output
tout2
ISI2

learning Coincidence
unit detection
neuron

Fig. 4.2. Learning Unit Details.

spatio-temporal outputs generated by the mapping stage. The randomly chosen

pattern P is used to assign weights to the ISI neurons in a learning unit. This
A

learning unit will represent the class to which pattern P belongs. Once the weights
A

have been assigned, they are temporarily fixed. The weights selected for the ini-

tial input pattern works as a center vector which can later be modified slightly to

accommodate more than one input pattern; in this manner, similar input patterns

can then be clustered together and fewer learning units will be needed.

2. Another input pattern (say $P_B$) belonging to the same class as pattern $P_A$ chosen in step 1 above is selected. This new pattern is applied to the learning unit for $P_A$, and the output times of the ISI neurons for $P_B$, $\{t_{out1}, t_{out2}\}$, are compared against the output times for $P_A$, $\{t_{out1}^{*}, t_{out2}^{*}\}$. This new pattern ($P_B$) is assigned to the learning unit (e.g. the learning unit for $P_A$) for which each of the output times differs by less than ε:

$$|t_{out1} - t_{out1}^{*}| \leq \varepsilon \quad \text{and} \quad |t_{out2} - t_{out2}^{*}| \leq \varepsilon \tag{4.3}$$

ε is a small error value determined empirically. If the error is larger than ε for any one of the error conditions in Equation 4.3, a new learning unit is added, as is done in incremental learning.

Fig. 4.3. Combined Mapping-Learning Organization.

3. Steps 1 and 2 are repeated for all input patterns.
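The three steps can be summarized in a short sketch (hypothetical helper names and an assumed ε; `isi_outputs(center, pattern)` stands for applying a learning unit whose weights were set from `center` to an input `pattern` and reading out its ISI1/ISI2 firing times):

```python
# Sketch of the one-pass incremental learning of steps 1-3: a learning unit keeps the
# weights set from its first (center) pattern; a new pattern of the same class joins an
# existing unit only if both ISI output times match within epsilon (Eq. 4.3).
def learn(patterns, labels, isi_outputs, epsilon=0.05):
    units = []   # each unit: {"label", "center", "ref" = (t_out1*, t_out2*)}
    for pattern, label in zip(patterns, labels):
        placed = False
        for unit in units:
            if unit["label"] != label:
                continue
            t1, t2 = isi_outputs(unit["center"], pattern)
            r1, r2 = unit["ref"]
            if abs(t1 - r1) <= epsilon and abs(t2 - r2) <= epsilon:   # Eq. 4.3
                placed = True
                break
        if not placed:   # incremental learning: open a new unit centered on this pattern
            units.append({"label": label, "center": pattern,
                          "ref": isi_outputs(pattern, pattern)})
    return units
```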

4.2 Learning Unit Output Time Uniqueness

A one-to-one relationship between inputs and outputs for each of the LUs must be achieved (i.e. each LU must output a spike at a time which is different from the output times corresponding to other inputs). For example, assuming that the weights for a learning unit, composed of the ISI1 and the ISI2 neurons, have been fixed for the spatio-temporal pattern $P_A = \{t_1^A, t_2^A, \ldots, t_{n-1}^A, t_n^A\}$, if a different pattern $P_B = \{t_1^B, t_2^B, \ldots, t_{n-1}^B, t_n^B\}$ is input into the learning unit for $P_A$ (LUA), the output time $t_{out1}$ and/or $t_{out2}$ for $P_A$ must be different from those output by the learning unit assigned for $P_B$ (LUB). However, since the LUA weights have been fixed, it may be possible for $P_B$ to make both LUA and LUB fire simultaneously. It should be noted that such cases occur with very low probability. In cases when the problem does occur, there are two possible solutions: 1) the winner-take-all scheme and 2) the two-input sub-LU scheme.

4.2.1 Winner-Take-All Scheme

In case $P_B$ simultaneously activates both LUA and LUB, the synaptic weight values $(W_1^B, W_2^B, \ldots, W_n^B)$ for LUB can be modified so that LUB can fire an output earlier than LUA and inhibit LUA from firing. This winner-take-all scheme is shown in Figure 4.4.
figure 4.4.

Fig. 4.4. Winner Take-All Scheme.

4.2.2 Two-Input Sub-Learning Unit Scheme

In this approach, a learning unit (LU) which fires for the wrong input pattern (in addition to firing for the correct input pattern(s)) can be designed so that the one-to-one input/output property is guaranteed. This is accomplished by dividing an LU into sub-learning units, using two-input ISI neurons within each sub-learning unit as shown in Figure 4.5. Each sub-learning unit (e.g. LUA1) takes two inputs from one mapping unit (MU) in the mapping stage. For example, outputs t_1 and t_2 from the mapping unit are input into the sub-learning unit LUA1 shown in Figure 4.5. The sub-learning unit ISI neurons perform the same functions as the ISI neurons used in the LUs described in section 4.1. However, in this case there can be up to n sub-learning units in a learning unit (LU), where n represents the number of input spike trains fed into the mapping stage. Each sub-learning unit consists of one two-input ISI1 neuron, one two-input ISI2 neuron, and one coincidence generation (CG) unit. The t_r reference time input shown in Figure 4.5 is the same local reference signal which is used in the combined mapping-learning organization as shown in Figure 4.3. The coincidence generation (CG) neuron in a sub-learning unit performs the function of aligning the output spike times of the ISI1 and ISI2 neurons so that they are coincident in time. When all CG neurons in an LU fire simultaneously, the coincidence detection (CD) neuron fires.

Fig. 4.5. Learning Unit (LU) with Sub-Learning Units.

A one-to-one relationship between inputs and outputs of a learning unit can be achieved by using the sub-learning units described above. The one-to-one input/output relationship for each sub-learning unit guarantees that a sub-learning unit which had its synaptic weights set for a given pattern (e.g. pattern P_A) does not respond with equal output times for a different pattern (e.g. pattern P_B). The one-to-one input/output relationship for a sub-learning unit can be proven as follows:

Assume that patterns P_A and P_B produce the same ISI output times t_out1 (ISI1 output time) and t_out2 (ISI2 output time) for the sub-learning unit for pattern P_A.

Fig. 4.6. (A) Multiple-Input ISI Neuron. (B) Two-Input (plus the local reference input) Sub-Learning Unit ISI Neuron.

For the multiple-input ISI neuron case (shown in Figure 4.6(A)), the internal ISI neuron potential x_j(t) (Equation 2.6) can be written as follows:

   W_1·t_r + (t_out/τ)·Σ_{i=2}^{n} W_i^A·u(t − t_i^A) = W_1·t_r + (t_out/τ)·Σ_{i=2}^{n} W_i^A·u(t − t_i^B)    (4.4)

where W_1 is the weight assigned to the local reference time t_r and t_out is the firing time at the output of the ISI neuron. Equation 4.4 can be rewritten as:

   W_2^A·(t_out − t_2^A) + · · · + W_n^A·(t_out − t_n^A) = W_2^A·(t_out − t_2^B) + · · · + W_n^A·(t_out − t_n^B)    (4.5)

which can be reduced to:

   W_2^A·t_2^A + W_3^A·t_3^A + · · · + W_n^A·t_n^A = W_2^A·t_2^B + W_3^A·t_3^B + · · · + W_n^A·t_n^B    (4.6)

For a sub-learning unit, the ISI neuron has only two inputs (plus t_r, as shown in Figure 4.6(B)) and Equation 4.6 can be reduced to:

   W_2^A·t_2^A + W_3^A·t_3^A = W_2^A·t_2^B + W_3^A·t_3^B.    (4.7)

For the sub-learning unit ISI1 neuron W_i = β·t_i, and Equation 4.7 can be rewritten as:

   (t_2^A)² + (t_3^A)² = t_2^A·t_2^B + t_3^A·t_3^B.    (4.8)

For the sub-learning unit ISI2 neuron W_i = β/t_i, and Equation 4.7 can be rewritten as:

   1 + 1 = t_2^B/t_2^A + t_3^B/t_3^A.    (4.9)

The simultaneous solution of Equations 4.8 and 4.9 gives t_2^B = t_2^A and t_3^B = t_3^A, which can happen only if P_A = P_B. Thus, the uniqueness, or one-to-one input/output relationship, for a sub-learning unit has been proven. When the number of spatio-temporal inputs fed into an ISI neuron exceeds two, a unique solution cannot be found.
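The algebra behind this conclusion can be spelled out with a short substitution (the shorthand a and b below is introduced here only for illustration). Let a = t_2^B/t_2^A and b = t_3^B/t_3^A. Equation 4.9 states that a + b = 2, and Equation 4.8 becomes

   (t_2^A)² + (t_3^A)² = a·(t_2^A)² + b·(t_3^A)².

Substituting b = 2 − a and collecting terms gives

   (a − 1)·((t_2^A)² − (t_3^A)²) = 0,

so a = b = 1, i.e. t_2^B = t_2^A and t_3^B = t_3^A, whenever t_2^A ≠ t_3^A, which holds as long as the two input spikes of the sub-learning unit arrive at distinct times.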

It should be noted that the number of sub-learning units within a learning unit does

not have to equal the number of the spike train inputs fed into the mapping stage. What

is required is that a sufficient number of sub-learning units (within a learning unit) be

used in order to have a one-to-one input/output relationship. Thus, there may be cases

in which only a combination of one sub-learning unit and one multiple input learning

unit is sufficient to guarantee the one-to-one input/output relationship for the pattern

set being learned.



4.3 Coincidence Detection Neuron

In order to have only one of the LUs in the learning stage fire, the t_out1 (ISI1) and t_out2 (ISI2) times can be made coincident by changing the ISI1 and/or the ISI2 input synaptic weight values, i.e., by increasing the β value of the ISI1 or the ISI2 neuron in order to adjust its output spike time as shown in Figure 4.2. The coincidence detection neuron, shown in Figure 4.2, uses the exponential response function (Equation 2.3) of a spiking neuron and not the linear response function (Equation 2.5) because a fast decay time of the neuron internal potential is necessary for coincidence detection.

It should be noted that although the LU output times {t_out1, t_out2} are unique, the relative time difference |t_out1 − t_out2| may not be. In other words, two different learning units can fire at different output times t_out1 and t_out2, but the relative time difference |t_out1 − t_out2| may be the same. Thus, the local input reference time t_r is necessary to differentiate these outputs, as shown in Figure 4.2. This reference time t_r is the time when all the ISI neurons in the learning unit (as well as the mapping unit) begin to operate.
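A small MATLAB illustration of this point is given below (a sketch only: the kernel, threshold, and spike times are arbitrary assumed values, not those used in the simulations). With an exponentially decaying PSP, only nearly coincident inputs can push the summed potential over the threshold:

% Coincidence detection with an exponentially decaying PSP (illustrative sketch).
tau_cd = 0.002; theta_cd = 1.5; w = 1.0;         % assumed decay time, threshold, synaptic weight
t = 0:1e-5:0.02;                                  % simulation time grid (s)
psp = @(t, ti) w * exp(-max(t - ti, 0) ./ tau_cd) .* (t >= ti);   % exponential response kernel

coincident     = [0.005 0.0051];                  % two spikes about 0.1 ms apart
non_coincident = [0.005 0.0100];                  % two spikes 5 ms apart

for spikes = {coincident, non_coincident}
    x = zeros(size(t));
    for ti = spikes{1}
        x = x + psp(t, ti);                       % superpose the PSPs
    end
    fired = any(x >= theta_cd);                   % only the near-coincident pair reaches threshold
    fprintf('peak potential %.2f, fires: %d\n', max(x), fired);
end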

4.4 Clustering

The outputs of the LUs fire at certain times according to the synaptic weights which have been assigned during the learning stage. The other patterns, which have been joined to the same learning unit, cause the outputs {t_out1, t_out2} to fire at times which are close to the ones corresponding to the center pattern. These other patterns cause the output to fire at slightly different times depending on the ε value chosen during the learning steps, as was described in section 4.1. The ε value can be chosen according to the accuracy needed for learning. If the ε value is small, more learning units are required but the learning accuracy will be high. On the other hand, a large ε value requires fewer learning units but the learning accuracy will be lower. The neuron threshold voltage of the coincidence detection neuron must be set at a sufficiently low value in order to make it possible to cluster inputs which produce non-perfectly aligned t_out1 and t_out2 times.

The number of learning units needed for classification and clustering cannot be

known a priori; thus, learning units are incrementally added as needed. The clustering

algorithm produces only a locally optimal input clustering because the input patterns

for a given class are sequentially chosen at random.

This supervised incremental learning scheme is relatively similar to the algorithm

proposed in [39] but without the need for synapse delays and/or multiple sub-synapse

weights and delays; thus, the proposed learning scheme may be easier to implement in

hardware.

4.5 Simulations

4.5.1 Spiking Neural Network Realization of the XOR Function

Due to its nonlinearly separable input characteristics, a two-input eXclusive OR

(XOR) function has often been used to test the function approximation or classification

capability of a neural network [21].



As shown in Figure 4.7, the XOR function has non-linearly separable input values. The main idea of the learning algorithm described earlier in section 4.1 was to assign one learning unit per classification cluster.

Fig. 4.7. XOR Function Input Classification.

The XOR problem has non-linearly separable classes (C_1 and C_2). C_1 represents the x_1x_2 digital inputs 00 and 11. C_2 represents the x_1x_2 digital inputs 01 and 10. The two classes cannot be separated by a single straight line, as shown in Figure 4.7.

For a spiking neural network, the inputs x_1x_2 = 00 and x_1x_2 = 11 are not distinguishable because the inputs are not referenced to a clock. Thus, in order to distinguish such cases, a third reference input x_0 = 0 can be used as shown in Figure 4.8.

Fig. 4.8. Spiking neural network for the XOR function with spatio-temporal encoding for logical input "001". Details of a learning unit are shown.

The learning simulation results for the XOR function are shown in Table 4.1.

Inputs 0 and 1 are represented by times 0 and 0.1 respectively as shown in Table 4.1.

As described in section 4.1, each learning unit in conjunction with a coincidence detection neuron generates a spike when the appropriate spatio-temporal pattern is input.

 Input Patterns     Coincident firing time   Final output time
 0    0    0        1.464                    4
 0    0    0.1      1.910                    2
 0    0.1  0        1.910                    2
 0    0.1  0.1      3.013                    4

Table 4.1. XOR input spike times (including the bias) and output times.

The XOR neural network organization is shown in Figure 4.8. The final output

neuron, shown in Figure 4.8, is used to represent the XOR output value in the time

domain by appropriately assigning its input synaptic weights. It should be noted that

only one of the coincidence detection neurons fires for any one particular input.

4.5.2 Classification of Spike Trains

Three randomly generated (using Poisson distributed inter-spike intervals at a

low frequency) spike trains were used as inputs. A set of spike trains was generated by

adding noise to each of the original three spike trains as shown in Figure 4.9.

Spike time skews were produced by adding Gaussian white noise (GWN) to the spike train, or by randomly shifting one or two spikes in the spike train to the left or to the right. Both types of noise were used to test the classification capability of the neural network after learning had been completed. By injecting various amounts of GWN into a spike train, noisy time-shifted versions of the original spike trains could be generated as shown in Figure 4.9, where spike train number 1 is the original spike train for each class; all the patterns, including the noisy patterns, were used as a learning set. The closer the noisy versions were to the original spike train, the higher the likelihood that an already assigned learning unit could be used. The original spike train and its five noisy versions were used as inputs to the mapping stage, which utilized multiple mapping units with different β values in the range of [0.25, 1.0] to give a relatively wide input dimension for the learning stage, as was explained in Chapter 3.
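The noisy versions used for learning and testing can be generated along the following lines (a minimal MATLAB sketch; the window length, rate, jitter level, and single-spike shift are assumed illustrative values, not the exact settings used here):

% Generate a low-rate Poisson spike train and noisy versions of it (illustrative sketch).
T = 0.5; rate = 20; sigma = 0.002; n_noisy = 5;    % window (s), mean rate (Hz), GWN std (s)
isi   = -log(rand(1, 100)) / rate;                  % Poisson process via exponential inter-spike intervals
train = cumsum(isi);  train = train(train <= T);    % original spike train (the class "center")

noisy = cell(1, n_noisy);
for k = 1:n_noisy
    v = train + sigma * randn(size(train));         % Gaussian white noise added to every spike time
    j = ceil(rand * numel(v));                      % additionally pick one spike at random
    v(j) = v(j) + sign(randn) * 5 * sigma;          % and shift it to the left or right
    noisy{k} = sort(max(v, 0));                     % keep the times ordered and non-negative
end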

The learning and input pattern clustering simulation results are shown in Table

4.2. For example, for the three classes a total of six clusters were needed. For input

class 1, learning unit 1 was used for clustering five input patterns and learning unit 2

was used for clustering one input pattern as shown in Table 4.2.

 Class No.   Learning unit #   # Learning patterns   # Test patterns   Final output time
 1           1                 5                     5                 4.0
             2                 1                     -
 2           3                 4                     3                 5.0
             4                 2                     2
 3           5                 4                     3                 6.0
             6                 2                     2

Table 4.2. Input spike train classification, clustering, and final output times.

After the learning phase was completed, additional noisy spike trains for each

of the three classes were used to test the neural network. These additional noisy spike

trains are called test patterns in Table 4.2. The testing phase spike trains were generated

with the same range of noise used during the learning phase. For input class 3, three

input patterns were joined to learning unit 5 as shown in Table 4.2. The final output

neuron shown in Figure 4.3 was used to represent the final output time values for each

of the three classes shown in Table 4.2.


Fig. 4.9. The original spike train for each class is spike train number 1. The other five trains are noisy versions of it (one class is represented).

Chapter 5

SNN Applications

Three applications were used to demonstrate the classification capability and robustness of the proposed SNN learning algorithm.

First, a spatio-temporal input application example for a non-linear function approximation was carried out. In this case, the mapping stage was not needed because the inputs had already been encoded into spatio-temporal patterns by a pre-processing stage. The input variables were encoded into spatio-temporal patterns according to the method of [13].

Second, the classification of sounds produced when a glass ball struck different

materials of various shapes and sizes was carried out [9]. In this case, spike trains were

generated using the input sound signal and then mapped as described in chapter 3 using

the mapping stage; the learning stage was used to classify the different materials.

Third, the classification of patterns consisting of multiple spike trains of various

spike time distributions is described. The back-propagation learning algorithm was used

as a comparison with the proposed SNN learning algorithm.


60

5.1 Function Approximation

5.1.1 Encoding Continuous Input Variables Into Spike-Times

An encoding method that can efficiently transform input pattern variables into

spike times is required. One method is based on an array of gaussian functions [13], which is employed to transform the input variable x into spike times {t_1, t_2, ...., t_n} as shown in Figure 5.1. For example, x = 3.5 is transformed into a spatio-temporal pattern with firing times {20, 95, 325, 705, 985, 880, 505, 185} ms.

Fig. 5.1. Input variable x encoded into 8 spike times using gaussian receptive fields.

To increase the temporal representation of an input variable, several overlapped

gaussian functions can be used [11], [19], [37], [42], [43], [44].

Improved representation accuracy for a particular input variable may be obtained

by sharpening (or broadening) the gaussian functions as well as increasing the number of

overlapped gaussian functions [43]. Such a coding method has been applied successfully

for classification problems using spiking neural networks [12], [13].

For the function approximation experiment, the parameters of the encoding gaussian functions [13] were selected as follows: a) for an input variable x with minimum value x_min and maximum value x_max, k overlapped gaussian functions were used (k = 8 in this experiment); b) the center of the i-th gaussian function was set to x_min + i·(x_max − x_min)/(k − 2) (k > 2); and c) the width of the gaussian functions was set to γ·(x_max − x_min)/(k − 2) (k > 2), with γ = 1.5. In general, for a function of m variables, m × k gaussian functions are needed. The outputs of the gaussian function neurons are used directly as spike times. All gaussian function neurons were assumed to fire, including the un-excited ones, which were assumed to fire at time 0. The encoding of a single input variable x by a set of gaussian functions (population coding) is shown in Figure 5.1.
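A minimal MATLAB sketch of this encoding step is given below. The center and width formulas are the ones stated above; the scaling of the gaussian activations to millisecond spike times (t_max), the treatment of the width as a standard deviation, and the function name are illustrative assumptions.

% Encode a scalar input x into k spike times using overlapping gaussian receptive fields.
function t_spikes = gauss_encode(x, x_min, x_max, k, gamma, t_max)
    c = x_min + (1:k) .* (x_max - x_min) / (k - 2);   % centers of the k gaussians (k > 2)
    w = gamma * (x_max - x_min) / (k - 2);            % common width, taken as the std. deviation (assumed)
    a = exp(-((x - c).^2) / (2 * w^2));               % gaussian activations in (0, 1]
    t_spikes = a * t_max;                             % activations used directly as spike times (scaled)
end                                                   % un-excited neurons get activation ~0, i.e. fire at ~time 0

For example, gauss_encode(3.5, -2, 8, 8, 1.5, 1000) produces eight spike times qualitatively similar to the example above; the exact values depend on the x_min and x_max range actually used.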

5.1.2 Example of Function Approximation

The proposed learning algorithm was used to approximate the following non-linear function

   f(x) = e^(−x)·sin(3x)

in the interval [0, 4], as shown in Figure 5.2.

Fig. 5.2. Function approximation for different tolerances with ϑ = 0.3, β = 0.5, and τ = 5.0.

The interval [0, 4] was sampled at 41 points with an interval spacing of 0.1. The

learning algorithm, described in section 4.1, was used to train the neural network to

assign cluster centers.

To test the generalization capability of the network after training, the same in-

terval was sampled at 401 points, at intervals of 0.01, in order to generate the test data

for the neural network.


63

           The Proposed SNN Learning Algorithm        RBF Algorithm (the same number of clusters)
 ε        No. of learning   Mean squared    Max fit    Mean squared    Max fit
          clusters          error           error      error           error
 0        41                0               0          0               0
 0.003    36                0.27 × 10^-4    0.0320     0.06 × 10^-4    0.0227
 0.007    28                1.77 × 10^-4    0.0453     0.54 × 10^-4    0.0415
 0.010    23                6.23 × 10^-4    0.0867     2.52 × 10^-4    0.0490
 0.020    17                15.0 × 10^-4    0.1656     15.9 × 10^-4    0.0789
 0.030    14                23.5 × 10^-4    0.2700     28.3 × 10^-4    0.1202

Table 5.1. Comparison of the proposed and RBF learning algorithms.

Table 5.1 shows the proposed SNN learning results together with radial basis function (RBF) based learning results. It can be observed from these results that as the number of clusters increases (decreasing ε value), the learning accuracy improves. It can also be observed that RBF learning produces a smaller "maximum fit error" (MFE) than the proposed learning algorithm for the same number of clusters (centers), and similarly for the mean squared error (MSE). When ε = 0.020 and ε = 0.030, RBF learning could not achieve the same MSE. RBF learning needed many iterations to achieve equal MSE values, while the proposed learning algorithm requires only one-step learning.

5.2 Classification of Materials Based on Impact Sounds

As a real world experiment, sounds produced when a small hard glass ball struck

different materials were recorded. The impacted materials were of different sizes and shapes. The materials consisted of sheets of steel (S), sheets of copper (C), and pieces of wood (D). For example, C_1 and C_2 in Table 5.2 represent two sheets of copper of different thicknesses and sizes.

                                                  20 filters                  50 filters
 Material   No. of learning   No. of testing   No. of     Classification   No. of     Classification
 type       patterns          patterns         clusters   accuracy         clusters   accuracy
 S          30                10               18         70%              6          100%
 C_1        15                5                11         80%              9          100%
 C_2        30                10               13         80%              8          90%
 D_1        45                15               8          87%              7          93.33%
 D_2        30                10               13         70%              7          90%

Table 5.2. Impact sound based material classification accuracy.

To encode each sound into spike trains, the method proposed in [24] was employed.

In this method, frequency tuned cells are used. These cells respond to transient changes

in input signals. The transient changes of the input that can be detected by these special function cells are onsets, offsets, and peaks of each filter output, i.e., crossings of a predetermined threshold level from below, crossings from above, and maxima exceeding a critical value, respectively.

In this experiment, all onset, offset, and peak output times of each filter belonging to a filter bank consisting of 20 band-pass filters were used, as shown in Figure 5.3. The 20 spike trains generated by the outputs of the filter bank were mapped into a spatio-temporal pattern containing 40 spikes (two output spikes for each mapping unit) using the mapping stage described in Chapter 3. The filter bank center frequencies ranged from 100 Hz to 4000 Hz, with each filter having a bandwidth of 200 Hz. The spatio-temporal patterns for the various impact sounds were then used as input patterns for the learning stage shown in Figure 4.3.
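A simplified MATLAB sketch of this pre-processing chain is shown below. The Butterworth band-pass design, the fixed threshold, and the single-peak detection are assumptions standing in for the onset/offset/peak cells of [24] (butter and filter require the Signal Processing Toolbox):

% Sketch: convert an impact sound into onset/offset/peak spike trains via a filter bank.
function spikes = sound_to_spikes(x, fs, n_filters)
    x = x(:).';                                      % ensure a row vector
    fc = linspace(100, 4000, n_filters);             % filter center frequencies (Hz)
    bw = 200;  thr = 0.1;                            % bandwidth (Hz) and assumed threshold
    spikes = cell(1, n_filters);
    t = (0:numel(x)-1) / fs;
    for i = 1:n_filters
        [b, a] = butter(2, [max(fc(i)-bw/2, 1) fc(i)+bw/2] / (fs/2), 'bandpass');
        env = abs(filter(b, a, x));                  % rectified band-pass output as an envelope
        above = env > thr;
        onsets  = t(diff([0 above]) == 1);           % upward threshold crossings
        offsets = t(diff([above 0]) == -1);          % downward threshold crossings
        [~, pk] = max(env);                          % single peak time (simplified)
        spikes{i} = sort([onsets offsets t(pk)]);    % one spike train per filter
    end
end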

Fig. 5.3. Steel plate impact sound waveform and its corresponding 20 spike trains.

As can be seen in Table 5.2, each material could be correctly classified with

relatively good accuracy in the testing phase. It should be noted that the learning

phase can achieve 100% learning of the learning set because learning units can be added

incrementally as needed. Better classification accuracy during the testing phase may

require a better way to pre-process sound signals as well as a larger learning set.

The experiment was repeated using a filter bank of 50 band-pass filters working in the range of 100 Hz to 4000 Hz as before. In this case, 50 spike trains generated by

the outputs of the filter bank were mapped into a spatio-temporal pattern containing

100 spikes (two output spikes for each mapping unit) using the mapping stage. Results

are also shown in Table 5.2. In this case, the testing phase classification accuracy was

improved due to the finer pre-processing of the inputs.

5.3 Classification of Complex Spike Train Patterns

A pattern classification problem was used to evaluate the usefulness and robustness

of the proposed learning algorithm. Five patterns each consisting of 10 spike trains were

used.

Each of the spike trains consisted of spikes which were uniformly distributed

within a time window T = 1 second as shown in Figure 5.4. The distinguishing char-

acteristic of the 5 patterns was the number of spikes within a spike train: a) the first

pattern, about 20 spikes; b) the second pattern, about 25 spikes; c) the third pattern,

about 40 spikes; d) the fourth pattern, about 45 spikes; e) the fifth pattern, about 50

spikes.
Fig. 5.4. Two patterns with 10 spike trains each (the other patterns are not shown). (A) A pattern with spike trains generated at a rate of 20 spikes/spike train. (B) 50 spikes/spike train.

A small amount of normally distributed noise was added to each of the 5 patterns.

For each of the original patterns, 23 noisy patterns were generated. Thus, in total 120

patterns were available for the learning (100 patterns) and testing (20 patterns) phases.

The number of clusters needed to make a correct classification is shown in Table 5.3.
            The Proposed SNN Learning Algorithm                                  BP Algorithm (the same number of clusters)
 ε         No. of neurons   No. of     Average computing   CPU time   Testing   No. of    No. of     No. of computing      CPU time   Testing
 (10^-3)   (clusters)       synapses   steps (10^6)        (sec)      accuracy  neurons   synapses   steps (10^6)(epochs)  (sec)      accuracy
 2         135 (38)         1654       0.236               5.34       100%      11        5710       13.722 (12)           1256       95%
 5         120 (33)         1439       0.234               4.88       95%       9         4568       12.806 (14)           1176       95%
 10        102 (27)         1181       0.232               3.93       95%       7         3426       11.662 (17)           938        90%

Table 5.3. Complex spike train classification and comparison with the ANN back-propagation method.

In this experiment, the mapping and learning stages required a sufficient amount of time to fire outputs. As discussed in Chapter 6, the determination of the amount of time needed for an output firing depends on the parameters chosen. In the mapping stage ISI1 units, the utilized parameters were β_1 = 5 × 10^-3, an initial weight value W_11 = 0.5, and a threshold value ϑ_1 = 0.05; the output times then occur within about 50 ms after the last input spike time (the input time window was assumed to be T = 1 second). The ISI2 unit utilized β_2 = 1 × 10^-4 with the same initial weight and threshold values as the ISI1 unit. It should be noted that the weight value ω should be initialized in an appropriate manner with respect to the β parameter, which directly scales the weight values assigned to the input spike times; otherwise the learning stage will have difficulty distinguishing time differences in the output times produced by the mapping stage.

The outputs of the mapping stage (spatio-temporal patterns) are used by the learning stage for classification, as was described in chapter 4. In this experiment, the parameters utilized in the learning stage were β_1 = 5 × 10^-5 with threshold value ϑ_1 = 1 × 10^-4 for the ISI1 unit, and β_2 = 5 × 10^-6 with threshold value ϑ_2 = 1 × 10^-5 for the ISI2 unit. The output times of the learning stage occurred within about 50 ms.

Matlab ver. 7.0 was used for simulating the proposed SNN learning algorithm.

A code sample is shown in Appendix B.

The back-propagation learning algorithm was used as a comparison. All 10 spike trains (composing a pattern) were combined into one vector and used as a vector pattern as shown in Figure 5.5. In Figure 5.5, each spike train was assumed to consist of 57 spikes (the maximum number of spike inputs in a spike train). If the number of spikes in a spike train was less than 57, the spike input(s) with no spike had a "0" as an input. Thus, each hidden layer neuron had a total of 570 synaptic inputs. The classification results are shown in Table 5.3 for a constant learning rate of 0.05 and a target error of 10^-10.

As shown in Figure 5.5, back-propagation learning used hidden layer neurons

which had 570 input synapses; furthermore, back-propagation needed many iterations to

converge. On the other hand, the proposed SNN learning algorithm needed single-input-synapse ISI1 and ISI2 neurons in the mapping stage and 20-input-synapse neurons in the learning stage.

The number of computation steps in Table 5.3 for the proposed mapping-learning

organization was calculated according to the following equation:

   k·(4p·n + c·((c − 1)/2 + 2))    (5.1)

Fig. 5.5. Back-propagation neural network.

where p is the number of input learning samples (patterns), k is the number of input

spike trains per pattern, n is the number of spikes per input spike train, and c is the

number of the required learning units in the learning stage.
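For example, substituting the values from the first row of Table 5.3 (p = 100 learning patterns, k = 10 spike trains per pattern, n = 57 spikes per spike train — the maximum spike count noted above — and c = 38 learning units) gives

   10·(4·100·57 + 38·((38 − 1)/2 + 2)) ≈ 0.236 × 10^6

computation steps, which is the value listed in the table for the proposed algorithm.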

The number of computation steps in Table 5.3 for the back propagation learning

algorithm was calculated according to the following equation:

   h·(k + 1) + p·E_p·(2k·h + 3h + 2)    (5.2)

where p is the number of input learning samples (patterns), k is the number of inputs per pattern, h is the number of utilized hidden layer neurons, and E_p is the number of required learning epochs.



Matlab ver. 7.0 was used for the back-propagation learning. The Matlab newff

function does not allow a large number of hidden layer neurons; thus, for the back-propagation learning case the number of hidden layer neurons was made considerably smaller than the number of learning units required by the proposed SNN learning algorithm. The number of computation steps needed by the SNN learning algorithm is considerably smaller than that needed by the back-propagation learning algorithm. Reducing the number of

learning units (by increasing the ε parameter) for the SNN learning algorithm reduced

the number of computation steps. For the back-propagation learning algorithm, reducing

the number of hidden layer neurons caused the number of epochs to increase and the

number of computation steps to decrease as shown in Table 5.3.

It can be observed from Table 5.3 that the number of epochs needed to achieve the

required error rate for the back-propagation learning algorithm was small. The reason

for the small number of epochs is likely the fact that only 100 input samples were

used in this application example. Another reason is the similarity of the inputs: for each

of the 5 basic patterns, 19 similar patterns were generated by adding small amounts of

normally distributed noise.

The CPU times shown in Table 5.3 were obtained using Matlab ver. 7.0 on a PC

(Celeron 2.40 GHz and 2.0 GB RAM) running Microsoft Windows XP SP2 operating

system.

Chapter 6

Further Analysis of the Mapping-Learning Organization

In this chapter, an analysis of the proposed mapping-learning organization is

performed. The analysis determines the appropriate parameter values, such as the

values for the neuron threshold ϑ, β, and time window values to perform mapping and

learning.

6.1 Analysis of the Mapping Stage Parameters

6.1.1 Multiple Mapping Units for a Single Spike Train Input

An input spike train can be mapped to two output spike times using the ISI1

and ISI2 mapping neurons. One way to increase the output of the mapping stage is to

use several mapping units with different β values. Different β values produce non-linear

mapping unit output firing time delays for the same spike train input. A larger input-

space for the learning stage may be beneficial when relatively similar spike trains need

to be distinguished from each other.

The effect of using multiple units in the mapping stage with each unit using a

different β value to map a single input spike train will be described.

Assume that the potential function in the two ISI neurons has a sufficiently long time constant, where t ≪ τ, as shown in Equation 2.6 of the neuron PSP, which is repeated here for convenience:

   x_j(t) = (t/τ)·Σ_{i=1}^{n} W_i·u(t − t_i);    t ≪ τ
In Equation 2.6, u(t) is the unit step function. The slope of the x_j(t) function once all n spikes have been input is:

   s = (1/τ)·Σ_{i=1}^{n} W_i    (6.1)

In Equation 6.1, the slope s is dependent on the value of W_i, assuming τ is a constant. Equation 6.1 can be rewritten as follows:

   tan(ϕ) = (1/τ)·Σ_{i=1}^{n} W_i    (6.2)

Fig. 6.1. Neuron internal potential.

where ϕ is the angle at the last input t_n of a spike train, as shown in Figure 6.1.

For the ISI1 neuron, W_i = β·t_i, and x(t_n) can be rewritten as:

   x(t_n) = β·(t_n/τ)·Σ_{i=1}^{n} t_i    (6.3)

Assume

   J = Σ_{i=1}^{n} t_i    (6.4)

then Equation 6.3 becomes:

   x(t_n) = β·(t_n/τ)·J    (6.5)

and the slope s = tan(ϕ):

   tan(ϕ) = (1/τ)·Σ_{i=1}^{n} β·t_i = (β/τ)·J    (6.6)

   tan(90° − ϕ) = cot(ϕ) = Y/A = Y/(ϑ − x(t_n))    (6.7)

then,

   Y = cot(ϕ)·(ϑ − x(t_n)) = (τ/(β·J))·(ϑ − β·t_n·J/τ) = τ·ϑ/(β·J) − t_n    (6.8)

from which it can be shown that:

   t_out = Y + t_n = τ·ϑ/(β·J)    (6.9)

For the same input spike train, t_n and J are constants, and ϑ is constant in the whole mapping stage. Figure 6.2 shows that linear changes in β result in drastic non-linear changes in the output times of the ISI1 neuron, especially in the range β = [0.25, 2.0].

Fig. 6.2. Change of the output time with changing β.

A similar procedure can be used for the ISI2 neuron by using W_i = β/t_i; the relation is again given by Equation 6.9, but with J = Σ_{i=1}^{n} 1/t_i.
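The non-linearity of Equation 6.9 in β can be visualized directly; the short MATLAB sketch below evaluates t_out = τ·ϑ/(β·J) for one example spike train over a range of β values (the spike times and parameter values are arbitrary illustrative choices):

% Output firing time of an ISI1 mapping neuron vs. beta (Equation 6.9), illustrative values.
tau = 2.5; theta = 0.3;                       % time constant and threshold (assumed)
t_i = [0.05 0.12 0.31 0.44 0.78 0.95];        % one example input spike train (s)
J = sum(t_i);                                 % J of Equation 6.4 (use sum(1./t_i) for the ISI2 neuron)
beta = 0.25:0.25:6;
t_out = tau * theta ./ (beta * J);            % Equation 6.9
plot(beta, t_out), xlabel('\beta'), ylabel('t_{out} (s)')   % reproduces the shape of Figure 6.2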

6.1.2 Mapping Stage Threshold Values ϑ

The mapping stage parameters to be considered are the threshold potential ϑ, β in Equations 3.1 and 3.2, and the initial weight values W_11 and W_12 for the ISI1 and ISI2 neurons respectively. Mainly, these parameters have to be computed a priori to avoid problems such as wrong input spike train mapping or a long delay time for a correct mapping.

All of these parameters are related to each other and cannot be independently determined. In this research, it is assumed that all input spikes (belonging to a spike train input pattern) arrive within one input time window of length T_inp seconds, as shown in Figure 6.3. Thus, the latest arriving spike within an input spike train occurs at some t^max ≤ T_inp.

Fig. 6.3. Presentation of input time window and mapping time window.

The sum of the input synaptic weights W can be expressed as follows:

   W = Σ_{i=1}^{n} W_i;    W_1 = const.    (6.10)

where n is the number of input spikes which arrive at the neuron input within a time window of length 0 ≤ t ≤ T_inp.

In order to determine the appropriate neuron threshold potential ϑ, the maximum

possible sum of the weights must be calculated for both the ISI1 and ISI2 neurons, which

can be computed for the hypothetical spike trains shown in Figure 6.4.

Fig. 6.4. (A) Spike train arriving at the latest possible times. (B) Spike train arriving at the earliest possible times.

The following explanation applies to the mapping units which encode spike train inputs, which are the main focus of this research.

From Equation 3.1 (W_i = β_1·t_i) of the ISI1 neuron, it is clear that the maximum sum of the weights, resulting from all input spikes within one input time window T_inp, occurs when all input spikes come at the latest possible times (i.e. the t_i values are the largest within the input time window T_inp), as shown in Figure 6.4(A).

Fig. 6.5. ISI1 neuron internal potential.

The internal ISI neuron potential (Equation 2.6) can be calculated using Figure

6.5 as follows:

   x_j(t) = t_out·W_1 + (t_out − t_2)·W_2/τ + · · · + (t_out − t_n)·W_n/τ
          = t_out·(W_1 + (1/τ)·Σ_{i=2}^{n} W_i) − (1/τ)·Σ_{i=2}^{n} W_i·t_i    (6.11)

where the initial weight value W_1 is attached to the input synapse for the first input spike within a spike train, which is considered as the local reference at time t_r = 0 as described in Chapter 4.

Thus, the ISI1 neuron threshold potential can be represented as follows:

   ϑ_1^map = t_out·(W_11 + (β_1/τ)·Σ_{i=2}^{n} t_i) − (β_1/τ)·Σ_{i=2}^{n} t_i²    (6.12)

where W_11 = W_1 of Equation 6.11 is the initial weight value of the ISI1 mapping unit neuron.

The input spikes at times t_i come at the latest possible times within the input time window T_inp, as shown in Figure 6.4(A). Thus,

   Σ_{i=2}^{n} t_i = T_inp + (T_inp − Δt) + (T_inp − 2Δt) + ..... + (T_inp − (n − 2)Δt)
                 = (n − 1)·T_inp − Δt·(1 + 2 + ...... + (n − 2))
                 = (n − 1)·T_inp − (Δt/2)·(n − 2)(n − 1)
                 = (n − 1)·(T_inp − (Δt/2)·(n − 2))    (6.13)

   Σ_{i=2}^{n} t_i² = T_inp² + (T_inp − Δt)² + (T_inp − 2Δt)² + ..... + (T_inp − (n − 2)Δt)²
                  = (n − 1)·T_inp² − T_inp·Δt·(n − 1)(n − 2) + (Δt²/6)·(n − 2)(n − 1)(2n − 3)    (6.14)

Thus, by substituting Equations 6.13 and 6.14 in Equation 6.12, the mapping unit ISI1 neuron threshold value can be expressed as follows:

   ϑ_1^map = (T_inp + Δt)·(W_11 + (β_1/τ)·(n − 1)·T_inp − (β_1·Δt/2τ)·(n − 1)(n − 2)) − (β_1/τ)·(n − 1)·T_inp²
             + (β_1/τ)·T_inp·Δt·(n − 1)(n − 2) − (β_1·Δt²/6τ)·(n − 1)(n − 2)(2n − 3)
           ≈ W_11·T_inp + (β_1·Δt/2τ)·(n − 1)(n − 2)·(T_inp − (Δt/3)·(2n − 3))    (6.15)

Δt is the minimum time resolution between input spikes within a spike train. ϑ_1^map should be selected to be slightly larger than the value given in Equation 6.15 by a small value γ.

From Equation 3.2 of the ISI2 neuron (W_i = β_2/t_i), it is clear that the maximum sum of the weights resulting from all input spikes within one input time window occurs when all input spikes come at the earliest possible times (i.e. the t_i values are the smallest within the input time window), as shown in Figure 6.4(B). Then, the ISI2 neuron threshold potential can be represented as follows:

   ϑ_2^map = t_out·(W_12 + (β_2/τ)·Σ_{i=2}^{n} 1/t_i) − (β_2/τ)·Σ_{i=2}^{n} t_i/t_i    (6.16)

where W_12 = W_1 of Equation 6.11 is the initial weight value of the ISI2 mapping unit neuron. Thus,

   Σ_{i=2}^{n} 1/t_i = 1/Δt + 1/(2Δt) + ..... + 1/((n − 1)Δt)
                   = (1/Δt)·(1 + 1/2 + ..... + 1/(n − 1))    (6.17)

From Figure 6.6, since the maximum value is what is of interest in these computations, the value of the series of fractions can be bounded as:

   ln(1 + n) < Σ_{i=1}^{n} 1/i < 1 + ln(n)    (6.18)

Equation 6.16 can be rewritten after approximation as follows:

   ϑ_2^map = T_inp·(W_12 + (β_2/(Δt·τ))·(1 + ln(n − 1))) − (β_2/τ)·(n − 1)    (6.19)

Δt is the minimum time resolution between input spikes within a spike train. ϑ_2^map should be selected to be slightly larger than the value given in Equation 6.19 by a small value γ.
Fig. 6.6. Plots of Σ 1/n and two approximations of it.
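For design purposes the two bounds can be evaluated directly; the following MATLAB helper (an illustrative sketch, not code from this work) computes the right-hand sides of Equations 6.15 (approximate form) and 6.19, to which a small margin γ is then added. The example values in the comment are those used later in Table 6.1.

% Upper bounds on the mapping-stage thresholds (Equations 6.15 and 6.19), illustrative helper.
function [th1, th2] = map_thresholds(W11, W12, beta1, beta2, n, Tinp, dt, tau)
    % ISI1 bound: approximate form of Equation 6.15
    th1 = W11*Tinp + (beta1*dt/(2*tau)) * (n-1)*(n-2) * (Tinp - (dt/3)*(2*n - 3));
    % ISI2 bound: Equation 6.19
    th2 = Tinp * (W12 + (beta2/(dt*tau)) * (1 + log(n-1))) - (beta2/tau)*(n-1);
end
% Example: [th1, th2] = map_thresholds(0.7, 0.7, 0.1, 1e-4, 20, 1, 3e-3, 2.5)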

6.1.3 Required Input and Mapping Time-Windows

Sufficient time should be allowed for the internal neuron potential to reach the threshold potential ϑ. For the ISI1 neuron, the latest output firing time occurs when all the spikes within a spike train arrive at the beginning of the input time window T_inp (i.e. the weight values W_i will be the smallest), as shown in Figure 6.4(B).

From Equation 6.12:

   ϑ_1^map = T_1^map·(W_11 + (1/τ)·Σ_{i=2}^{n} W_i) − (1/τ)·Σ_{i=2}^{n} W_i·t_i
           = T_1^map·(W_11 + (β_1/τ)·Σ_{i=2}^{n} t_i) − (β_1/τ)·Σ_{i=2}^{n} t_i²
           = T_1^map·(W_11 + (β_1·Δt/2τ)·n(n − 1)) − (β_1·Δt²/6τ)·n(n − 1)(2n − 1)    (6.20)

where:

   Σ_{i=2}^{n} t_i = Δt + 2Δt + ..... + (n − 1)Δt = Δt·(1 + 2 + ..... + (n − 1)) = (Δt/2)·n(n − 1)    (6.21)

   Σ_{i=2}^{n} t_i² = Δt² + (2Δt)² + ..... + ((n − 1)Δt)² = (Δt²/6)·(n − 1)·n·(2n − 1)    (6.22)

Thus, the output time window can be expressed as:

   T_1^map = (ϑ_1^map + (β_1·Δt²/6τ)·n(n − 1)(2n − 1)) / (W_11 + (β_1·Δt/2τ)·n(n − 1))    (6.23)

where Δt is the minimum time resolution between input spikes within a spike train.

A similar reasoning can be applied for the ISI2 neuron, except that the latest output firing time occurs when all spikes come at the very end of the input time window (i.e. the weight values W_i will be the smallest). In other words, the first spike within a spike train occurs at t_1 = 0 and the rest of the spikes occur at the very end of the input time window, as shown in Figure 6.4(A). In this case:

From Equation 6.11:

   ϑ_2^map = T_2^map·(W_12 + (β_2/τ)·Σ_{i=2}^{n} 1/t_i) − (β_2/τ)·Σ_{i=2}^{n} t_i/t_i

where:

   Σ_{i=2}^{n} 1/t_i = 1/T_inp + 1/(T_inp − Δt) + 1/(T_inp − 2Δt) + ..... + 1/(T_inp − (n − 2)Δt)    (6.24)

Since the minimum value of this sum is what matters here, it can be approximated as follows:

   Σ_{i=2}^{n} 1/t_i ≈ (n − 1)/T_inp;    0 < t_i ≤ 1    (6.25)

where n is the number of input spikes within an input time window. Thus,
   ϑ_2^map = T_2^map·(W_12 + (β_2/(τ·T_inp))·(n − 1)) − (β_2/τ)·(n − 1)    (6.26)

from which T_2^map can be expressed as follows:

   T_2^map = (ϑ_2^map + (β_2/τ)·(n − 1)) / (W_12 + (β_2/(τ·T_inp))·(n − 1))    (6.27)

where n is the number of input spikes within an input time window.

The larger of T_1^map and T_2^map is defined as the mapping time window (T_map = max{T_1^map, T_2^map}), as shown in Figure 6.3.
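As a numerical illustration (a sketch only; the parameter values below are the ones used for Table 6.1, while the threshold values are assumed to have been obtained from the bounds of Equations 6.15 and 6.19 plus a small margin), the two worst-case firing times and the resulting mapping window can be evaluated directly:

% Mapping time window from Equations 6.23 and 6.27 (illustrative sketch).
tau = 2.5; dt = 3e-3; n = 20; Tinp = 1;             % parameters as used for Table 6.1
W11 = 0.7; W12 = 0.7; beta1 = 0.1; beta2 = 1e-4;
th1 = 0.72; th2 = 0.76;                             % assumed thresholds (bounds of Eqs. 6.15/6.19 plus a margin)
T1map = (th1 + (beta1*dt^2/(6*tau))*n*(n-1)*(2*n-1)) / (W11 + (beta1*dt/(2*tau))*n*(n-1));   % Eq. 6.23
T2map = (th2 + (beta2/tau)*(n-1)) / (W12 + (beta2/(tau*Tinp))*(n-1));                         % Eq. 6.27
Tmap = max(T1map, T2map);                           % mapping time window seen by the learning stage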

One of the important parameters to consider is the distinguishability of the neuron output spike times for nearly similar input patterns (spike trains) [10]. At the same time, the mapping time window T_map should be as small as possible in order to have a short input-output delay. Two input spike trains were considered. The first spike train is the one shown in Figure 6.4(A); the second spike train differs from this one by having its last spike come at time n·Δt instead of (n − 1)·Δt. In other words, the difference between the two spike trains is just one spike shifted in time by Δt in the second spike train.

The worst case output separation time for the ISI1 and ISI2 neurons can be

computed using the internal ISI neuron potential Equation 6.11 (for the two slightly

different spike trains) as follows:


   ϑ = t_out·(W_1 + (1/τ)·Σ_{i=2}^{n} W_i) − (1/τ)·Σ_{i=2}^{n} W_i·t_i    (6.28)

where the initial weight value W_1 is assigned to the input synapse for the first input spike within a spike train and is considered as the local reference at time t = 0, as described in Chapter 3. W_1 = W_11 and W_1 = W_12 for the ISI1 and ISI2 units respectively.

Thus, the output time t_out can be calculated as follows:

   t_out = (ϑ + (1/τ)·Σ_{i=2}^{n} W_i·t_i) / (W_1 + (1/τ)·Σ_{i=2}^{n} W_i)    (6.29)

The output times will then occur within 100 ms after the T_inp time for the parameters used in Table 6.1, assuming that the input spike train occurs at the latest possible times as shown in Figure 6.4(A). Thus, the output time differences between the two spike trains will be the smallest among all other possible output separation times for both the ISI1 and ISI2 units. When an input spike train comes at the latest possible times within the input time window T_inp, as shown in Figure 6.4(A), and the worst case output firing time has been set to T_inp + 100 msec, the difference in output times for two very similar spike trains will be the smallest.


map
In Figure 6.7, the relationship between the T and β in Equation 6.23 for
1 1

various initial W weight values for the ISI1 mapping neuron is shown.
11

(The output time value can be selected appropriately by the designer.)

Table 6.1. Output separation times for different Δt values.

               ISI1 (W_11 = 0.7)              ISI2 (W_12 = 0.7)
 Δt (ms)   β_1     Separation (µs)      β_2 (10^-3)   Separation (µs)
 3         0.1     74                   0.13          0.24
 4         0.1     95                   0.18          0.45
 5         0.1     114                  0.22          0.7
 10        0.1     181                  0.46          2.9

In Figure 6.8, the relationship between the output firing time difference for two similar spike trains and β_1 for a range of initial W_11 weight values is shown.

In Figure 6.9, the relationship between T_2^map and β_2 in Equation 6.27 for different initial W_12 weight values for the ISI2 mapping neuron is shown.

In Figure 6.10, the relationship between the output firing time difference for two similar spike trains and β_2 for a range of initial W_12 weight values is shown.

By comparing Figures 6.7 and 6.9 for the ISI1 and ISI2 neurons, it can be observed that the ISI1 neuron output time is for an input spike train which arrives at the earliest times within T_inp; thus it takes a long time (approximately the entire T_inp time) to fire. On the other hand, the ISI2 neuron output time is for an input spike train which arrives at the latest times within T_inp; thus the ISI2 neuron fires after T_inp at a time chosen by the designer (100 msec was used in this example).

For the ISI1 neuron in the mapping unit, Figures 6.7 and 6.8 can be used to achieve a small T_1^map time window with an appropriate output separation time for some β_1 and W_11 values. Results are shown in Table 6.1 for a time constant τ = 2.5 sec, n = 20, an input time window T_inp = 1 sec, and an output time window of 100 ms (T_map = 1.1 sec) for various values of Δt. It should be noted that the output separation times shown are for two spike trains with the minimum Euclidean distance between them. In a similar manner, for the ISI2 neuron in the mapping unit, Figures 6.9 and 6.10 can be used to achieve a small T_2^map time window with an appropriate output separation time for some β_2 and W_12 values.

Fig. 6.7. ISI1 mapping neuron output time window vs. β_1 for different weight W_11 values (Δt = 3 ms and τ = 2.5 sec).

6.2 Learning Stage Parameter Calculations

As described in chapter 4, the spatio-temporal patterns generated by the ISI1

and ISI2 neurons in the mapping stage are used as input patterns for the learning stage
Fig. 6.8. ISI1 mapping neuron output separation times vs. β_1 for different weight W_11 values (Δt = 3 ms and τ = 2.5 sec).

where a supervised learning method is used to classify input patterns as shown in Figure 4.3. The learning unit ISI neuron is repeated in Figure 6.11 for convenience. The neuron weights are assigned using W_i = β_1·t_i and W_i = β_2/t_i for ISI1 and ISI2 respectively, as was done previously in the mapping stage.

The learning stage parameters to be considered are the threshold potential ϑ, β

in Equations 4.1 and 4.2, and the weight values W_i of the ISI1 and ISI2 neurons. All of these parameters are related to each other and cannot be independently determined.
of these parameters are related to each other and cannot be independently determined.

However, the learning stage analysis considers multiple spatio-temporal inputs instead

of a single spike train at each ISI neuron. It is assumed that all spatio-temporal input
Fig. 6.9. ISI2 mapping neuron output time window vs. β_2 for different weight W_12 values (Δt = 3 ms and τ = 2.5 sec).

spikes arrive within one learning unit input time window T_map, as shown in Figure 6.12.

In order to determine the appropriate neuron threshold voltage ϑ, the maximum

possible sum of the weights must be known for both the ISI1 and ISI2 neurons which can

be computed for the hypothetical spatio-temporal pattern cases shown in Figure 6.13.

In Equation 4.1 for the ISI1 neuron, the maximum sum of the weights resulting from all input spikes (composing a spatio-temporal pattern) within one input time window (T_map) occurs when all input spikes come at the latest possible times (i.e. the t_i values are the largest within the input time window), as shown in Figure 6.13(A). It
Fig. 6.10. ISI2 mapping neuron output separation times vs. β_2 for different weight W_12 values (Δt = 3 ms and τ = 2.5 sec).

Fig. 6.11. ISI neuron for the Learning Stage.

Fig. 6.12. Presentation of input time window and learning time window.

Fig. 6.13. (A) All spatio-temporal input spikes come at the latest possible times. (B) All spatio-temporal input spikes come at the earliest possible times.

should be noted that the first spike in the mapping unit is used as a reference time for

all the mapping and learning units. It should also be noted that Equation 4.1 is being

used here despite the fact that spatio-temporal inputs and not a spike train input are fed

into the ISI neurons of the learning unit; the spatio-temporal inputs shown in Figures

6.13(A) and 6.13(B) are considered in this analysis to be behaving as a spike train.
Thus, the learning unit neuron threshold potential can be expressed as follows:

   ϑ_1^learn = t_out·(ω_1 + (β_1/τ)·Σ_{i=1}^{n} t_i) − (β_1/τ)·Σ_{i=1}^{n} t_i²    (6.30)

where ω_1 is the initial weight value of the ISI1 learning unit neuron.

   Σ_{i=1}^{n} t_i = T_map + (T_map − Δt) + (T_map − 2Δt) + ..... + (T_map − (n − 1)Δt)
                 = n·T_map − Δt·(1 + 2 + ...... + (n − 1))
                 = n·T_map − (Δt/2)·n(n − 1)
                 = n·(T_map − (Δt/2)·(n − 1))    (6.31)

   Σ_{i=1}^{n} t_i² = T_map² + (T_map − Δt)² + (T_map − 2Δt)² + ..... + (T_map − (n − 1)Δt)²
                  = n·T_map² − T_map·Δt·(n − 1)·n + (Δt²/6)·(n − 1)·n·(2n − 1)    (6.32)
Thus:

   ϑ_1^learn = (T_map + Δt)·(ω_1 + (β_1/τ)·n·T_map − (β_1·Δt/2τ)·n(n − 1)) − (β_1/τ)·n·T_map²
               + (β_1/τ)·T_map·Δt·n(n − 1) − (β_1·Δt²/6τ)·n(n − 1)(2n − 1)
             ≈ ω_1·T_map + (β_1·Δt/2τ)·n(n − 1)·(T_map − (Δt/3)·(2n − 1))    (6.33)

ϑ_1^learn should be selected to be slightly larger than the value given in Equation 6.33 by a small value γ.

In Equation 4.2 for the ISI2 neuron, the maximum sum of the weights resulting

from all input spike inputs within one input time window occurs when all input spikes

come at the earliest possible times (i.e. the t_i values are the smallest within the input time window), as shown in Figure 6.13(B).

The threshold potential for the ISI2 neuron is:

   ϑ_2^learn = t_out·(ω_2 + (β_2/τ)·Σ_{i=1}^{n} 1/t_i) − (β_2/τ)·Σ_{i=1}^{n} t_i/t_i    (6.34)

where ω_2 is the initial weight value of the ISI2 learning unit neuron.

   Σ_{i=1}^{n} 1/t_i = 1/T_inp + 1/(T_inp + Δt) + ..... + 1/(T_inp + (n − 1)Δt)
                   ≈ n/(T_inp + Δt)    (6.35)

The approximation in Equation 6.35 is used because the maximum threshold value is needed. Equation 6.34 can then be rewritten as follows:

   ϑ_2^learn = T_map·ω_2 + (β_2·n/τ)·(T_map/(T_inp + Δt) − 1)    (6.36)

ϑ_2^learn should be selected to be slightly larger than the value given in Equation 6.36 by a small value γ.

6.2.1 Time Window for the Learning Stage

Sufficient time should be allowed for the internal neuron potential to reach the

threshold potential ϑ. For the ISI1 neuron, the latest output firing time occurs when

all the spikes in the spatio-temporal pattern arrive at the beginning of the input time

window as shown in Figure 6.13(B).

Hence, when the internal neuron potential x(t) of Equation 6.11 is set equal to the threshold potential ϑ_1^learn, the following relationship can be established:

   ϑ_1^learn = T_1^learn·(ω_1 + (1/τ)·Σ_{i=1}^{n} W_i) − (1/τ)·Σ_{i=1}^{n} W_i·t_i
             = T_1^learn·(ω_1 + (β_1/τ)·Σ_{i=1}^{n} t_i) − (β_1/τ)·Σ_{i=1}^{n} t_i²    (6.37)
where,

   Σ_{i=1}^{n} t_i = T_inp + (T_inp + Δt) + (T_inp + 2Δt) + ..... + (T_inp + (n − 1)Δt)
                 = n·T_inp + Δt·(1 + 2 + ...... + (n − 1))
                 = n·T_inp + (Δt/2)·n(n − 1)
                 = n·(T_inp + (Δt/2)·(n − 1))    (6.38)

   Σ_{i=1}^{n} t_i² = T_inp² + (T_inp + Δt)² + (T_inp + 2Δt)² + ..... + (T_inp + (n − 1)Δt)²
                  = n·T_inp² + T_inp·Δt·(n − 1)·n + (Δt²/6)·(n − 1)·n·(2n − 1)    (6.39)

Thus, the learning unit ISI1 neuron output time window can be expressed as:

   T_1^learn = (ϑ_1^learn + (β_1/τ)·(n·T_inp² + T_inp·Δt·(n − 1)·n + (Δt²/6)·(n − 1)·n·(2n − 1))) / (ω_1 + (β_1/τ)·n·(T_inp + (Δt/2)·(n − 1)))    (6.40)

T_1^learn is the latest possible ISI1 neuron firing time.
A similar reasoning can be applied for the ISI2 neuron, except that the latest output firing time occurs when all spikes come at the very end of the input time window. In other words, the first spike of the spatio-temporal pattern occurs at t_1 = T_inp and the rest of the spikes occur at the very end of the input time window, as shown in Figure 6.13(A).
Hence, when the internal neuron potential x(t) of Equation 6.11 is set equal to the threshold potential ϑ_2^learn, the following relationship can be established:

   ϑ_2^learn = T_2^learn·(ω_2 + (β_2/τ)·Σ_{i=1}^{n} 1/t_i) − (β_2/τ)·Σ_{i=1}^{n} t_i/t_i
             = T_2^learn·(ω_2 + (β_2/(τ·T_map))·n) − (β_2/τ)·n    (6.41)

where:

   Σ_{i=1}^{n} 1/t_i = 1/T_map + 1/(T_map − Δt) + 1/(T_map − 2Δt) + ..... + 1/(T_map − (n − 1)Δt)    (6.42)
                   ≈ n/T_map;    0 < t_i ≤ 1    (6.43)

The approximation in Equation 6.42 is used in order to compute the maximum output time window T_2^learn, which can be derived from Equation 6.41 as follows:

   T_2^learn = (ϑ_2^learn + (β_2/τ)·n) / (ω_2 + (β_2/(τ·T_map))·n)    (6.44)

T_2^learn is the latest possible ISI2 neuron firing time.

The larger of T_1^learn and T_2^learn is defined as the learning time window, as shown in Figure 6.12.
In Figure 6.14, the relationship between T_1^learn and β_1 in Equation 6.40 for different initial weight ω_1 values in the ISI1 learning neuron is shown. It can be seen that T_1^learn < 1.1 sec.

Fig. 6.14. ISI1 learning neuron output time window vs. β_1 for different weight ω_1 values (Δt = 1 msec and τ = 2.5 sec).

In Figure 6.15, the relationship between T_2^learn and β_2 in Equation 6.44 for different initial weight ω_2 values in the ISI2 learning neuron is shown. It can be observed that T_2^learn > 1.1 sec.
Fig. 6.15. ISI2 learning neuron output time window vs. β_2 for different weight ω_2 values (Δt = 1 msec and τ = 2.5 sec).

By comparing Figures 6.14 and 6.15 for the ISI1 and ISI2 neurons, it can be observed that the ISI1 neuron output time is for input spikes which arrive at the earliest times within T_map; thus it takes a long time (approximately the entire T_map time) to fire. On the other hand, the ISI2 neuron output time is for input spikes which arrive at the latest times within T_map; thus the ISI2 neuron fires after T_map at a time chosen by the designer (50 msec was used in this example).

In the learning stage, coincidence detection is needed and hence the final output

requires some additional time. As described in chapter 4, the coincidence detection

neuron uses the exponential characteristics of the neuron response function (Equation
2.3) to ensure that only coincident spikes cause the output to fire. Thus, the required extra time for this part can be assigned as τ_c.

In the final output stage of the mapping-learning organization (Figure 4.3), τ_out must be chosen appropriately in order to represent each input class with a unique output time. Hence, the learning time window T_learn (shown in Figure 6.12) can be written as follows:

   T_learn = max{T_1^learn, T_2^learn} + τ_c + τ_out    (6.45)

It can be observed from Figures 6.14 and 6.15 that T_2^learn > T_1^learn; thus Equation 6.45 can be expressed as follows:

   T_learn ≥ T_2^learn + τ_c + τ_out    (6.46)

6.3 Complexity of the Mapping-Learning Organization Algorithm

The complexity of the proposed learning algorithm is calculated for a one-processor, random access model machine. The learning algorithm complexity is O(4nk + p²), where k is the number of input spike trains, n is the number of spikes per spike train, and p is the number of learning samples. The complexity order was calculated for the worst case number of clusters needed for classification, which occurs when the number of clusters is equal to the number of learning samples p. The dominant factor in this learning algorithm is the number of learning patterns p. Thus, the learning algorithm complexity can be approximated by O(p²).
The comparisons needed during learning to determine whether or not to assign a new neuron cause the p² factor.

Conclusions

It has been shown that SNNs can be computationally efficient and practical. Their consideration and analysis are important both from the biological as well as from the computational points of view. In this dissertation, emphasis has been placed on temporal coding, so that the time-consuming sampling of a firing rate, as previously done in classical ANNs, is avoided. It has been shown that some types of ANNs known to be very efficient can be implemented with SNNs in the temporal domain, such as the proposed learning model, which is relatively close to radial basis function (RBF) networks [21]. In this case the ability to obtain additional information by considering the firing times of neurons was shown, which gave rise to additional features such as clustering in the learning algorithm.

The mapping/learning organization proposed in this work is suitable for ana-

log/digital VLSI implementations. Spiking neural networks can be used to process time

domain analog real world signals once these signals are converted to spike trains. A

hardware realization of SNNs would make it possible to process real world signals in

close to real-time.

A new scheme for mapping both a spike train and spatio-temporal patterns (one

spike time per input synapse) was introduced. This scheme transforms temporal domain

spikes into two outputs (in the case of spike train inputs) which can produce a one-to-one

correspondence between input spikes (spatio-temporal patterns or trains) and outputs.

A one-to-one output is important to distinguish inputs which could be very similar in

the spatial and/or temporal domain. As shown in Appendix A, the inputs and outputs

have a one-to-one relationship when the outputs of the ISI1 and ISI2 neurons are used

in combination.

A new learning algorithm for spiking neural networks was proposed. The resulting

spiking neural network can classify input spike trains which are transformed using the

mapping stage. Simulations have shown that classifications of input spike trains with

noise can be achieved by either adding learning units or clustering input spike trains. The

learning algorithm is relatively simple compared to algorithms such as back-propagation.

The function approximation and impact sound classification applications carried out in this research, as well as other applications, can be practical areas for the proposed learning algorithm. The simulations and real-world applications which were discussed in this research show the computational power of spiking neural networks, especially when the inputs to the SNN arrive as spike trains (as in the impact sound classification example). Spiking neural networks can process real world signals in close to real-time applications such as spoken word classification. Other applications in which temporal information can be directly utilized, such as robot vision and movement in a real world environment, would also be good application candidates.

The proposed methods use spike trains in the mapping stage and spatio-temporal patterns (single spikes) in the learning stage. Moreover, each neuron is allowed to fire only once within the coding interval (the mapping and learning windows shown in the analysis). However, the presentation of an input vector may occur within only a small time window of a longer-lasting computation. For example, one may consider the case where all neurons of the SNN fire with the same frequency but phase-shifted, as suggested in [23]. Then, for the SNN constructions proposed in this dissertation, the local reference time of the first incoming spike (within an input spike train) is used for all computations. A phase shift of some input neurons yields a corresponding phase shift of the output neuron.
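As a small illustration of this local reference time, the sketch below (Matlab, with arbitrary spike times that are not taken from the dissertation's data) shows that a common phase shift of all input spikes disappears once the train is re-referenced to its earliest spike, as is done inside the decoder_sngl_beta function listed in Appendix B:

% Sketch: a common phase shift of the input spikes does not change the
% mapping input once spike times are re-referenced to the first spike.
p           = [3 5 9 14];           % original spike train (arbitrary times)
p_shift     = p + 2;                % the same train, phase-shifted by 2 time units
p_ref       = p - min(p);           % local reference: the first incoming spike
p_shift_ref = p_shift - min(p_shift);
isequal(p_ref, p_shift_ref)         % returns logical 1: identical after re-referencing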

Future work includes investigating how incorporating other features of biological neurons, or even other neuron types such as classical neurons, could influence the quality of the introduced methods and the computational power of such networks in general. This work is only a first step in the exploration of the possibilities of SNNs. The computational power of SNNs may be enhanced by making use of other synaptic features such as synaptic depression and facilitation [17] or spike-timing-dependent plasticity (STDP) [16].

Many fundamental questions about artificial SNNs remain unanswered: a) If more features of biological neurons are incorporated into SNNs, will this improve their computational power? b) How will hardware designs be affected by biological concepts? c) Can learning be better realized using both temporal and rate-code information?

The proposed model, after some modifications, may be useful in directly processing real biological signals from some body parts (such as muscles) to overcome physical disabilities.

Appendix A

The one-to-one mapping of inputs to outputs of the mapping unit will be proved.

Assume that the potential function in the two ISI neurons has a sufficiently long time

constant so that α(t) of Equation 2.3 can be considered to work simply as a linear

function (Equation 2.6), which is repeated here for convenience:

x(t) = \frac{t}{\tau}\sum_{i=1}^{n} W_i \, u(t - t_i)   (A.1)

In Equation A.1, $u(t)$ is the Heaviside step function. The slope of the function $x(t)$ is:

s = \frac{1}{\tau}\sum_{i=1}^{n} W_i   (A.2)

In Equation A.2, the slope is dependent on the value of the dynamic weight variable $W_i$, assuming $\tau$ is constant. Assume $s_i$ represents the slope of the potential function after the arrival of input spike $t_i$.
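As a purely illustrative sketch (the weights, spike times, and time constant below are assumed example values, not taken from the dissertation), the potential of Equation A.1 and the slope of Equation A.2 can be evaluated numerically as follows:

% Minimal numeric sketch of Equations A.1 and A.2 (assumed example values).
tau  = 10;                        % time constant
t_in = [1 3 7];                   % example input spike times t_i
W    = [1 1 1];                   % dynamic weights W_i
t    = 0:0.1:10;                  % time axis
x    = zeros(size(t));
for k = 1:numel(t)
    active = t(k) >= t_in;        % Heaviside terms u(t - t_i)
    x(k)   = (t(k)/tau) * sum(W(active));   % Equation A.1
end
s = sum(W)/tau;                   % slope after the last input spike (Equation A.2)

After the last input spike has arrived, the slope of x(t) equals the constant s, which is the quantity compared for two different spike trains in the cases below.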

To prove that no coincident potential values are produced for different input spike

trains, or spatio-temporal patterns, after the last input spike has arrived, it is sufficient

to show that the slopes of two different input spike trains cannot be equal. The following

cases cover all the worst-case combinations of input spike trains.

• Assume two different input spike trains $P_A$ and $P_B$ have the same spike orders but different spike times: $P_A = \{t^A_1, t^A_2, \ldots, t^A_{n-1}, t^A_n\}$ and $P_B = \{t^B_1, t^B_2, \ldots, t^B_{n-1}, t^B_n\}$.

• If the last input spikes have the relation $t^A_n > t^B_n$ and $s^A_{n-1} > s^B_{n-1}$, as shown in Figure A.1(A), the potential functions of $P_A$ and $P_B$ may intersect at some later time after the last input spike, i.e., $s^A_n < s^B_n$ and then $t^A_{out1} = t^B_{out1}$, for the ISI1 neuron (ISI1 internal potential $\propto \beta\, t_i$); however, for the same spike trains $P_A$ and $P_B$, the ISI2 neuron (ISI2 internal potential $\propto \beta / t_i$) makes the internal potential functions diverge ($s^A_n > s^B_n$), and thus $t^A_{out2} \neq t^B_{out2}$, as shown in Figure A.1(B).

• If $t^A_n < t^B_n$ and $s^A_{n-1} > s^B_{n-1}$, as shown in Figure A.1(D), the potential slopes may intersect at some later time ($s^A_n < s^B_n$ and then $t^A_{out2} = t^B_{out2}$) for the ISI2 neuron, while the ISI1 neuron would make the internal potentials diverge ($s^A_n > s^B_n$) because its internal potential is proportional to the input spike time $t_i$, and thus $t^A_{out1} \neq t^B_{out1}$, as shown in Figure A.1(C).

• If $t^A_n = t^B_n$ and $s^A_{n-1} > s^B_{n-1}$, the ISI1 and ISI2 neurons would produce $s^A_n > s^B_n$ and thus $t^A_{out1} \neq t^B_{out1}$ (not shown in Figure A.1).

• If $t^A_n = t^B_{n-1}$ and $s^A_{n-1} > s^B_{n-1}$, then the ISI1 neuron would produce $t^A_{out1} \neq t^B_{out1}$, as shown in Figure A.1(E); furthermore, if $t^A_n = t^B_{n-1}$ and $s^A_{n-1} < s^B_{n-1}$, then the ISI2 neuron would produce $t^A_{out2} \neq t^B_{out2}$, as shown in Figure A.1(F).

Thus, all possible input spike trains produce a unique combination of outputs at $t_{out1}$ and $t_{out2}$ of the ISI1 and ISI2 neurons, which can be used to recognize a particular input sequence.
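As a numeric illustration of this uniqueness (not part of the proof), the closed-form output-time expression that appears in the Appendix B listing can be evaluated for two spike trains that share their last spike time but differ earlier. The parameter values below are the learning-stage values from Appendix B and are used here only for illustration:

% Illustrative check of ISI1/ISI2 output-time uniqueness using the
% closed-form output-time expression from the Appendix B listing.
beta1 = 0.01; beta2 = 0.1; tau = 2.5;
Vth1  = 0.77/2; Vth2 = 0.84/2; w1 = 0.7;
PA = [1 4 8];                 % spike train A (same last spike time as B)
PB = [2 5 8];                 % spike train B
isi1 = @(p) (Vth1*tau + sum(p.*(beta1.*p))) / (w1*tau + sum(beta1.*p));
isi2 = @(p) (Vth2*tau + sum(p.*(beta2./p))) / (w1*tau + sum(beta2./p));
fprintf('ISI1 outputs: %.4f vs %.4f\n', isi1(PA), isi1(PB));
fprintf('ISI2 outputs: %.4f vs %.4f\n', isi2(PA), isi2(PB));
% Both output times differ for PA and PB, so the pair (t_out1, t_out2)
% identifies the input train.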

[Figure A.1: six panels, (A)-(F), plotting the internal potential (voltage) of the ISI1 and ISI2 neurons versus time for the spike trains $P_A$ and $P_B$.]

Fig. A.1. Internal potential for the ISI1 and ISI2 neurons: (A) and (B) $t^A_n > t^B_n$ and $s^A_{n-1} > s^B_{n-1}$; (C) and (D) $t^A_n < t^B_n$ and $s^A_{n-1} > s^B_{n-1}$; (E) $t^A_n = t^B_{n-1}$ and $s^A_{n-1} > s^B_{n-1}$; (F) $t^A_n = t^B_{n-1}$ and $s^A_{n-1} < s^B_{n-1}$.

Appendix B

Matlab Code

(1) Mapping-Learning Organization

clear; clc;

load('x100.dat');

% x100.dat contains 100 patterns each composed of 10 spike trains with

% different length (rate)

[a,b]=size(x);

trns=10; % No. of spike trains per pattern

pat=a/trns; % No. of patterns (each is a bunch of spike trains)

s=[]; t = cputime;

for i=1:pat

y= decoder_sngl_beta(x(trns*(i-1)+1:trns*i,:));

s=[s y];

end
y1=s';

% Calculated values for the learning stage

beta1=0.01; beta2=0.1; tau=2.5; Vth1=0.77/2; Vth2=0.84/2;

[np1,ns1]=size(y1); % np1 # patterns, ns1 # spikes/pattern

w1=0.7; sml_dst=0.005;

for j=1:np1

yn(j,1)=j; % the input pattern numbers, pointer to pattern numbers

end

ii=0;

while np1 > 0 % pattern by pattern

ynn=[];

y2=[]; % tmp for all patterns intially

jj=1;

spike_out1=0;

for nn=1:np1 % skip one pattern every loop

if nn==1 % Assign weights according to the 1st pattern

ii=ii+1 % % the new column in the classification matrix

v1(ii,:)=beta1.*y1(nn,:); % weight assignment for the ISI1

v2(ii,:)=beta2./y1(nn,:); % weight assignment for the ISI2



spike1(ii,1)=(Vth1*tau + sum(y1(nn,:).*v1(ii,:)))/(w1*tau + sum(v1(ii,:)));

spike2(ii,1)=(Vth2*tau + sum(y1(nn,:).*v2(ii,:)))/(w1*tau + sum(v2(ii,:)));

class_mat(jj,ii)=yn(nn,1) % heading of the adj. matrix.

yn(nn,:)=0;

y1(nn,:)=0;

else

spike11=(Vth1*tau + sum(y1(nn,:).*v1(ii,:)))/(w1*tau + sum(v1(ii,:)));

spike22=(Vth2*tau + sum(y1(nn,:).*v2(ii,:)))/(w1*tau + sum(v2(ii,:)));

%spike_out2=combin_inputs(spike1,spike2,th)

c1=compare(spike1(ii,1),spike11,sml_dst);

c2=compare(spike2(ii,1),spike22,sml_dst);

if c1==1 && c2==1

jj=jj+1; % the new row in the classification matrix

class_mat(jj,ii)=yn(nn,1) %join with already learnt pattern

yn(nn,:)=0;

y1(nn,:)=0;

end

end

end

i=0;

for n=1:np1

if yn(n,1)~=0

i=i+1;

ynn(i,:)=yn(n,1);

y2(i,:)=y1(n,:);

end

end

y1=y2

yn=ynn;

s1=size(y1);

np1=s1(1,1); % new no. of patterns after joining the similars

end

e = cputime-t;
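The script above calls a helper function compare that is not listed in this appendix. A minimal sketch consistent with how it is used (c1 and c2 are tested against 1, and sml_dst is a small tolerance) might look as follows; this is an assumed reconstruction, not the original helper:

% Assumed reconstruction of the missing compare helper: returns 1 when the
% two spike times are within the tolerance sml_dst of each other, else 0.
function c = compare(t1, t2, sml_dst)
c = double(abs(t1 - t2) <= sml_dst);
end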

(2) decoder_sngl_beta(x) function

% A function whose inputs are spike trains "x"; it returns outputs as
% spatio-temporal patterns "spike", which are in turn used as inputs to
% the next SNN stage.

function [spike]= decoder_sngl_beta(x)

endvalue=0.2; tau=10;

beta1=0.1; %calculated

beta2=3.5*10^-5;

V1=0.76; %calculated

V2=0.77; %calculated

w1=0.7;

[np,ns]=size(x); % np= no. of spike trains & ns= no. of spikes/spike train

x_min=min(min(x)); for i=1:np

x_tmp=[];

for j=1:ns

if x(i,j)>= 0 % neglect the NaN entries

x_tmp(i,j)=x(i,j);

end

end

[np1,ns1]=size(x_tmp);

x_tmp(i,:)=x_tmp(i,:)-min(x_tmp(i,:));

if x_tmp(i,1)==0

spike(i*2-1,1)=(V1*tau + beta1*sum(x_tmp(i,2:ns1).*x_tmp(i,2:ns1)))...

/(w1*tau + beta1*sum(x_tmp(i,2:ns1)));

spike(i*2,1) =(V2*tau + beta2*(ns-1))/(w1*tau + beta2*...

sum(1./x_tmp(i,2:ns1)));

else

spike(i*2-1,1)=(V1*tau + beta1*sum(x_tmp(i,:)))/...

(beta1*sum(x_tmp(i,:)));

spike(i*2,1) =(V2*tau + beta2*ns)/(beta2*sum(1./x_tmp(i,:)));

end

end
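A small usage sketch (with hypothetical spike times; the shorter train is padded with NaN so that both rows fit in one matrix, and NaN entries are skipped by the ">= 0" test inside the function):

% Hypothetical usage of decoder_sngl_beta with two spike trains of
% different length; NaN pads the shorter train.
x = [ 1  4  9 15 ;
      2  6 11 NaN ];
spikes = decoder_sngl_beta(x)   % 4x1 vector: [ISI1; ISI2] output per train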

References

[1] M. Abeles, H. Bergman, E. Margalit, and E. Vaadia, “Spatiotemporal Firing Pat-

terns in the Frontal Cortex of Behaving Monkeys”, J. Neurophysiol, Vol.70, pp.1629-

1658, 1993.

[2] Hagai Agmon-Snir, Catherine E. Carr, and John Rinzel, “The role of dendrites in

auditory coincidence detection”, Nature No.393, pp.268-272, 1998.

[3] Hesham H. Amin and Robert H. Fujii, “Learning Algorithm for Spiking Neural

Networks Based on Synapse Delays”, 3D FORUM, Vol.17, No.1, pp.191-197, March

2003.

[4] Hesham H. Amin and Robert H. Fujii, “Input Arrival-Time-Dependent Decoding Scheme for a Spiking Neural Network”, Proceedings of the 12th European Symposium on Artificial Neural Networks (ESANN 2004), pp.355-360, April 2004.

[5] Hesham H. Amin and Robert H. Fujii, “Spike Train Decoding Scheme for a Spiking

Neural Network”, Proceedings of the 2004 International Joint Conference on Neural

Networks (IJCNN’04), IEEE, pp.477-482, 2004.

[6] Hesham H. Amin and Robert H. Fujii, “Spiking Neural Network Inter-Spike Time

Based Decoding Scheme”, Special Issue of IEICE Trans. of Circuits and Systems,

Vol.E88-D, No.8, pp.1893-1902, August 2005.

[7] Hesham H. Amin and Robert H. Fujii, “Spike Train Classification Based on Spiking

Neural Networks”, WSEAS Trans. Systems, Vol.4, March 2005.



[8] Hesham H. Amin and Robert H. Fujii, “Learning Algorithm for Spiking Neural Networks”, The First International Conference on Natural Computation (ICNC’05), China, 2005 (Lecture Notes in Computer Science Vol.3610, pp.456-465, Springer-Verlag Berlin Heidelberg 2005).

[9] Hesham H. Amin and Robert H. Fujii, “Sound Classification and Function Approximation Using Spiking Neural Networks”, International Conference on Intelligent Computing (ICIC2005), China, 2005 (Lecture Notes in Computer Science Vol.3644, pp.621-630, Springer-Verlag Berlin Heidelberg 2005).

[10] Hesham H. Amin and Robert H. Fujii, “Spike Train Learning Algorithm, Applications, and Analysis”, 48th IEEE Int’l Midwest Symposium on Circuits and Systems, Ohio, 2005.

[11] P. Baldi and W. Heiligenberg, “How sensory maps could enhance resolution through

ordered arrangements of broadly tuned receivers”, Biol. Cybern., No.59, pp.313-318,

1988.

[12] S.M. Bohte, H. La Poutré and J.N. Kok, “Unsupervised classification in a Network of spiking neurons”, IEEE Trans. Neural Networks, Vol.13, No.2, pp.426-435, 2002.

[13] S.M. Bohte, J.N. Kok and H. La Poutré, “Spike-prop: error-backpropagation in multi-layer networks of spiking neurons”, Proceedings of the European Symposium on Artificial Neural Networks ESANN’2000, pp.419-425, 2000.



[14] Emery N. Brown, Robert E. Kass, and Partha P. Mitra, “Multiple neural spike train data analysis: state-of-the-art and future challenges”, Nature Neuroscience, Vol.7, No.5, May 2004.

[15] C. E. Carr, “Processing of temporal information in the brain”, Annu Rev Neurosci.,

No.16, pp.223-43, 1993.

[16] Hideyuki Câteau and Tomoki Fukai, “A Stochastic Method to Predict the Consequence of Arbitrary Forms of Spike-Timing-Dependent Plasticity”, Neural Computation, Vol.15, pp.597-620, 2003.

[17] Peter Dayan, and L. F. Abbott, Theoretical Neuroscience: Computational and

Mathematical Modeling of Neural Systems, MIT Press, 2001.

[18] R. O. Duda, “Elevation dependence of the interaural transfer function”, in Binaural

and Spatial Hearing in Real and Virtual Environments, ed. R. H. Gilkey and T. B.

Anderson, pp.49-75, Lawrence Erlbaum Associates, Mahwah, NJ, 1997.

[19] C. W. Eurich and S. D. Wilke, “Multi-dimensional encoding strategy of spiking

neurons”, Neural Computation, Vol.12, pp.1519-1529, 2000.

[20] W. Gerstner and W. Kistler, Spiking Neuron Models, Single Neurons, Populations,

Plasticity, Cambridge University Press, 2002.

[21] Simon Haykin, Neural Networks, A Comprehensive Foundation, Prentice Hall In-

ternational Inc., 1999.



[22] W. Gerstner, R. Kempter, J.L. van Hemmen, and H. Wagner, “Hebbian learning of pulse timing in the barn owl auditory system”, in: W. Maass and C.M. Bishop (Editors), Pulsed Neural Networks, MIT Press, pp.353-377, 1998.

[23] J. J. Hopfield, “Pattern recognition computation using action potential timing for stimulus representation”, Nature, Vol.376, pp.33-36, 1995.

[24] J. J. Hopfield and C. D. Brody, “What is a Moment? Cortical Sensory Integration

Over a Brief Interval”, Proc. Natl. Acad. Sci. USA, Vol.97, No.25, pp.13919-13924,

2000.

[25] J. J. Hopfield and C. D. Brody, “What is a Moment? Transient Synchrony as a

Collective Mechanism for Spatiotemporal Integration”, Proc. Natl. Acad. Sci. USA,

Vol.98, No.3, pp.1282-1287, 2001.

[26] M. Konishi, “Listening with two ears”, Sci. Amer., Vol.268, No.4, pp.66-73, 1993.

[27] W. Maass, “Fast sigmoidal networks via spiking neurons”, Neural Computation,

Vol.9, pp.279-304, 1997.

[28] W. Maass, “Networks of spiking neurons: the third generation of neural network

models”, Neural Networks, Vol.10, pp.1659-1671, 1997.

[29] W. Maass, ”Lower bounds for the computational power of networks of spiking neu-

rons”, Neural Computation, Vol.8, No.1, pp.1-40, 1996.



[30] W. Maass, ”Noisy spiking neurons with temporal coding have more computational

power than sigmoidal neurons”, in Advances in Neural Information Processing Sys-

tems, ed. M. Mozer, M. I. Jordan, and T. Petsche, Vol.9, pp.211-217, MIT Press,

Cambridge, 1997.

[31] W. Maass and C. Bishop, editors, Pulsed Neural Networks, MIT press, Cambridge,

1999.

[32] W. Maass, T. Natschläger, and H. Markram, “A Model for Real-Time Computation

in Generic Neural Microcircuits”, in Proc. of NIPS 2002, Advances in Neural In-

formation Processing Systems, Vol.15, ed. S. Becker, S. Thrun, and K. Obermayer,

pp.229-236, MIT Press, 2003.

[33] W. Maass, T. Natschläger, and H. Markram, “Computational Models for Generic

Cortical Microcircuits”, in Computational Neuroscience: A Comprehensive Ap-

proach, chapter 18, ed. J. Feng, CRC-Press, 2003.

[34] A. F. Murray, Pulse-based computation in VLSI neural networks. In W. Maass and

C. Bishop., editors, Pulsed Neural Networks, MIT press, Cambridge, 1999.

[35] T. Natschläger and B. Ruf, “Spatial and Temporal Pattern Analysis via Spiking

Neurons”, Network: Comp. Neural Syst., Vol.9, No.3, pp.319-332, 1998.

[36] L. Perrinet, A. Delorme, S. Thorpe, “Network of Integrate-and-Fire Neurons using Rank Order Coding A: How to Implement Spike Timing Dependent Plasticity”, Neurocomputing, 38-40(1-4), pp.817-822, 2001.



[37] A. Pouget, S. Deneve, J. C. Ducom, and P. Latham, “Narrow versus wide tuning

curves: What’s best for a population code?”, Neural Computation, 11(1), pp.85-90,

1999.

[38] Fred Rieke, David Warland, Rob de Ruyter van Steveninck and William Bialek,

Spikes: Exploring the Neural Code (Computational Neuroscience), MIT press, 1997.

[39] Berthold Ruf: Computing and Learning with Spiking Neurons - Theory and Simu-

lations, Chapter (8), Doctoral Thesis, Technische Universitaet Graz, Austria, 1997.

Available: ftp://ftp.eccc.uni-trier.de/pub/eccc/theses/ruf.ps.gz

[40] B. Ruf and M. Schmitt, “Self-Organization of Spiking Neurons using Action Poten-

tial Timing”, IEEE Trans. Neural Networks, Vol.9, No.3, pp.575-578, 1998.

[41] Jonathan Z. Simon, Catherine E. Carr, and Shihab A. Shamma, ”A dendritic model

of coincidence detection in the avian brainstem”, Neurocomputing No.26-27, pp.263-

269, 1999.

[42] H. Snippe and J. Koenderink, “Information in channel-coded systems: correlated

receivers”, Biol. Cybern., Vol.67, No.2, pp.183-190, 1992.

[43] K. Zhang and T. Sejnowski, “Neuronal tuning: To sharpen or broaden?”, Neural

Computation, Vol.11, No.1, pp.75-84, 1999.

[44] K. Zhang, I. Ginzburg, B. McNaughton, and T. Sejnowski, “Interpreting neuronal

population activity by reconstruction: Unified framework with application to hip-

pocampal place cells”, J. Neurophysiol., Vol.79, pp.1017-1044, 1998.

[45] Jacek M. Zurada, Introduction to Artificial Neural Systems, Pws Pub Co., 1992.
