Spiking Neural Network Model MATLAB Implementation Based on Izhikevich Mathematical Model for Control Systems

Konstantin S. Sayarkin, Alexey V. Popov, Anton A. Zhilenkov

Faculty of Control Systems and Robotics
Department of Control Systems and Informatics
ITMO University
Saint Petersburg, Russia

The High School of Cyberphysical Systems and Control
Institute of Computer Science and Technology
Peter the Great St. Petersburg Polytechnic University
St. Petersburg, Russia
zhilenkovanton@gmail.com

Abstract—This article considers the results of implementing a spiking neural network model based on the Izhikevich mathematical model in the MATLAB environment. The most complete mathematical model of a biological nerve cell, in terms of reproducing its functionality, is the model of the Nobel laureates Hodgkin and Huxley. However, it contains a large number of differential equations, which makes it of little use for hardware or software implementation, especially when building large-scale artificial neural networks. Izhikevich's model is less demanding of computing resources and at the same time reproduces the functionality of a biological neuron rather precisely. A particular question is the problem of discrete hardware implementation of this model.

Keywords—spiking neural network; neuron; MATLAB; Simulink; robotics

I. INTRODUCTION

Today artificial neural networks are widely used in all kinds of fields. Neural networks replace credit managers, financial traders, data analysts and doctors, and they are widely used for voice and image recognition tasks. Although mathematical neuron models are more than half a century old, they still enjoy great popularity due to their simplicity and satisfying results.

However, many modern applications try to apply those simple old models to complex tasks usually associated with a higher degree of sentience. They usually address such tasks by scaling up the neural network size, which reduces performance and still leaves many problems unsolved, instead of using more complex, biologically plausible models that accurately replicate biological processes.

In this paper we consider different mathematical models of a neuron in order to assess their strong and weak sides for future application in control systems. We review current developments in the field of neuron modeling and try to apply newer, biologically plausible neuron models as a method of control in control systems.

While researching different neuron models, we consider their useful applications by analyzing their biological plausibility as well as their computational performance.

II. PRINCIPLES OF ARTIFICIAL NEURONS

The simplest neuron model usually considered for practical application is the McCulloch-Pitts model, described in 1943 [1] and implemented by F. Rosenblatt in 1958 [2]. It represents a neuron as a weighted-sum module that multiplies each input value by a value called a weight and adds them all up. The sum is then passed through a transfer function and serves as the output. It can be represented as

$y = f(u), \quad u = \sum_{i=1}^{n} \omega_i x_i + \omega_0 x_0.$ (1)

In (1), $x_i$ are the input signals, $\omega_i$ are the input weights, and $f(u)$ is the transfer function. $\omega_0$ and $x_0$ serve to initialize the neuron for the first time (the bias term).

The transfer function determines how the output signal depends on the weighted sum of the input signals. It usually needs to satisfy a few conditions, such as being bounded, monotonically increasing, defined for all real arguments, and differentiable to facilitate learning algorithms. Some common examples of transfer functions include the saturating linear transfer function (2):

$f(x) = \begin{cases} 0, & \text{if } x < 0 \\ 1, & \text{if } x > 1 \\ x, & \text{otherwise} \end{cases}$ (2)

the Heaviside function (3):

$f(x) = \begin{cases} 1, & \text{if } x > T \\ 0, & \text{otherwise} \end{cases}$ (3)

where $T = -\omega_0 x_0$; sigmoid functions such as the logistic function (4):

$\sigma(x) = \dfrac{1}{1 + \exp(-tx)},$ (4)

where $t$ determines the slope of the function; and the hyperbolic tangent function (5):

$\operatorname{th}(Ax) = \dfrac{e^{Ax} - e^{-Ax}}{e^{Ax} + e^{-Ax}},$ (5)

which behaves similarly to (4).
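As a brief illustration of (1) and (4), the following MATLAB sketch (ours, not part of the paper; the input values, weights and slope are illustrative) computes the output of a single neuron with a logistic transfer function:

% Single artificial neuron per (1) with logistic transfer function (4).
% Input values, weights and slope t are illustrative.
x = [0.5; -1.2; 0.8];        % input signals x_i
w = [0.4; 0.1; -0.7];        % input weights w_i
w0 = 0.2; x0 = 1;            % bias term w_0 * x_0
t = 1;                       % slope of the logistic function

u = w' * x + w0 * x0;        % weighted sum u from (1)
y = 1 / (1 + exp(-t * u));   % logistic transfer function (4)
disp(y)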



III. ARTIFICIAL NEURAL NETWORK PRINCIPLES

When it comes to connecting artificial neurons into a network that is able to perform a useful task, there exist different paradigms of network architectures. Most neural networks in one way or another utilize the concept of layers: a series of neurons that are not interconnected among themselves and are connected to no more than two other layers. They also usually involve a separation of neuronal layers into three types: input, output and hidden.

The simplest architecture of an artificial neural network is a feed-forward one-hidden-layer network. In this architecture there are three layers of neurons: one input layer, one hidden layer and one output layer. The qualifier "feed-forward" means that signals travel strictly from one layer to the next, in the direction from input to output.
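A forward pass through such a three-layer network can be sketched in a few lines of MATLAB (our illustration, not from the paper; layer sizes and the random weights are arbitrary, and sigma is the logistic function (4)):

% Forward pass of a feed-forward network: input -> hidden -> output.
% Layer sizes and weights are illustrative.
sigma = @(u) 1 ./ (1 + exp(-u));   % logistic transfer function (4)

x  = rand(4, 1);         % input layer activations (4 inputs)
W1 = randn(3, 4);        % input-to-hidden weights (3 hidden neurons)
b1 = randn(3, 1);        % hidden-layer biases
W2 = randn(2, 3);        % hidden-to-output weights (2 outputs)
b2 = randn(2, 1);        % output-layer biases

h = sigma(W1 * x + b1);  % hidden-layer activations
y = sigma(W2 * h + b2);  % network output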
A more complex type of neural network is the recurrent neural network (RNN). RNNs involve connections that form a loop inside the network, which leads to memory-like properties: the network's results may vary depending on the order of the input variables. This, however, also leads to the problem of vanishing or exploding gradients, which means that over time weights may reach abnormally high or low values, rendering the influence of previous states on the output meaningless or greatly reduced.

Deep convolutional neural networks (DCNN) are very useful for data classification tasks that would require very large layers in a traditional feed-forward network. They usually involve a part of the network called a "scanner" that parses the input data in small overlapping chunks; the processed result is then fed through convolutional layers that do not feature all-to-all connections between layers and gradually get smaller in the direction of the output [3]. Often a feed-forward network segment is also present before the output to discern more complex features. Pooling layers may also be utilized to reduce the amount of input data while preserving useful information, using techniques such as max-pooling (taking only the maximum value out of a small group of input values).
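As a small illustration of max-pooling (our sketch, not from the paper), the following MATLAB code downsamples a matrix by taking the maximum of each non-overlapping 2x2 block:

% Max-pooling with a 2x2 window and stride 2 (no overlap).
% Each output element is the maximum of one 2x2 block of A.
A = magic(4);                       % illustrative 4x4 input
pooled = zeros(2, 2);
for i = 1:2
    for j = 1:2
        block = A(2*i-1:2*i, 2*j-1:2*j);
        pooled(i, j) = max(block(:));
    end
end
disp(pooled)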
IV. SPIKING NEURAL NETWORKS

Traditional artificial neural networks aim only to represent the informational behavior of biological neural networks. The models that serve as a base for their neurons simply select some sort of transfer function that represents only a few features of biological neurons, such as the modification of propagated information by synaptic weights, the ability of neurons to fire, etc. To create a better representation of biological neuron processes, spiking neuron models have been developed.

The oldest spiking neuron model, called integrate-and-fire, was developed in 1902. It models spiking behavior in a very simplistic way. One of the core problems with this model is its tendency to store below-threshold input voltage indefinitely, which is very implausible in real neurons. This effect is reduced in the leaky integrate-and-fire model, which introduces a voltage "leak" preventing infinite potential storage. There have also been many other modifications to this model that aim to mitigate other inaccuracies.
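A minimal simulation sketch of the leaky integrate-and-fire neuron described above (our illustration; the parameter values are arbitrary, not from the paper):

% Leaky integrate-and-fire neuron, forward Euler integration.
% Membrane voltage v leaks toward rest and resets after a spike.
dt = 0.1; T = 100;                 % time step and duration, ms
tau = 10; v_rest = 0; v_th = 1;    % leak time constant, rest, threshold
v_reset = 0; I = 0.12;             % reset value and constant input
v = v_rest; trace = zeros(1, round(T/dt));
for k = 1:round(T/dt)
    v = v + dt * ((v_rest - v) / tau + I);  % leak plus input current
    if v >= v_th                            % threshold crossed:
        v = v_reset;                        % emit a spike and reset
    end
    trace(k) = v;
end
plot((1:numel(trace))*dt, trace); xlabel('t, ms'); ylabel('v')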
The first breakthrough in the field of biologically plausible neuron models was the Hodgkin-Huxley model, which was deemed worthy of the Nobel prize. It is still considered one of the most accurate representations of biophysical processes in neural networks, and many other models are simplifications of it [4]. Models such as FitzHugh-Nagumo aim to simplify the calculations involved while preserving much of its accuracy.

A more modern model is the Izhikevich neuron model, which reproduces many neuron types while being computationally efficient. While it does not model intraneural processes, it shows plausible behavior.

An important quality of spiking neurons is their ability to encode and process information in the timing between spikes. This allows for much denser information processing and better suitability for real-time tasks.

V. NN LEARNING MECHANISMS

It is important to understand that neural networks are not programmed for their tasks like typical software. Instead, they employ different learning mechanisms that modify synaptic weights to achieve the desired results. The basic principle of learning is the Hebbian rule, which states that neurons that fire in close timing proximity strengthen the synaptic connection between themselves. In the case of supervised learning, an error function is employed that rates how well the network performs compared to the desired results. With unsupervised learning, the network does not require such an accuracy evaluation.

A popular class of learning mechanisms employed in feed-forward ANNs is backpropagation. It involves calculating the gradient of the error function and propagating weight modifications in the direction opposite to signal propagation. It has also been employed for spiking neural networks, with algorithms such as SpikeProp and its modification for multiple spikes [5, 10-15].

A more biologically plausible mechanism is spike-timing-dependent plasticity (STDP). It implements the Hebbian learning rule in a more accurate way by changing the weight modification rate based on the difference between presynaptic and postsynaptic firing times. It is an unsupervised learning mechanism in principle; however, it can be used in more complex networks employing supervised learning, or it can be influenced by modifying neuron firings directly.
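A common pair-based formulation of the STDP rule updates a weight from the spike-time difference dt = t_post - t_pre (a hedged sketch; the exponential window and the constants below are a textbook choice, not taken from this paper):

% Pair-based STDP weight update: potentiate when the presynaptic
% spike precedes the postsynaptic one (dt > 0), depress otherwise.
A_plus = 0.01; A_minus = 0.012;   % learning rates (illustrative)
tau_plus = 20; tau_minus = 20;    % time constants, ms (illustrative)

t_pre = 10; t_post = 15;          % example spike times, ms
dt = t_post - t_pre;
w = 0.5;                          % current synaptic weight
if dt > 0
    w = w + A_plus * exp(-dt / tau_plus);    % potentiation
else
    w = w - A_minus * exp(dt / tau_minus);   % depression
end
disp(w)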
VI. MATLAB AND SIMULINK AS NN MODELING TOOLS

When it comes to the implementation of a neural network on embedded hardware, MATLAB Simulink provides a convenient set of tools for the development, testing and deployment of a signal processing system such as a neural network. The ability to measure performance allows network parameters to be tweaked in an automated way, which is especially important considering the cyclical nature of neural network learning. Simulink also provides tools for deployment on different robotic platforms or FPGA chips, as well as for generation of C/C++ code from a model.

The MATLAB development environment provides tools to interact with the Simulink environment and to create, modify and execute Simulink models, as well as to send data to and receive data from a model and analyze it with MATLAB's vast array of functions.

MATLAB functions such as add_block and add_line allow us to quickly create Simulink block diagrams with specified parameters. We can further specify block parameters by using the set_param function to alter standard Simulink block parameters.
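For example, a small diagram can be assembled programmatically with these functions (a minimal sketch; the model name 'neuron_demo' and the block names are our own):

% Build a tiny Simulink diagram programmatically.
new_system('neuron_demo');
add_block('simulink/Sources/Constant', 'neuron_demo/I_in');
add_block('simulink/Math Operations/Gain', 'neuron_demo/Weight');
add_block('simulink/Sinks/Scope', 'neuron_demo/Out');
set_param('neuron_demo/I_in', 'Value', '10');     % constant input current
set_param('neuron_demo/Weight', 'Gain', '0.5');   % synaptic weight
add_line('neuron_demo', 'I_in/1', 'Weight/1');    % connect the blocks
add_line('neuron_demo', 'Weight/1', 'Out/1');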

When it comes to modularization of Simulink block diagrams, there are two ways to group blocks into modules: visual and nonvisual. The visual way includes referenced models and simple subsystems and, as its name hints, only affects the way a model is presented visually as a block diagram. When compiled, it will be identical to a model where those blocks are simply on the top level [7, 10].

The nonvisual way is atomic subsystems, which force the code generator to treat a subsystem as an atomic function, so that it is represented as a function in the compiled code. This should be considered in traditional neural networks with neuron activation functions; however, since spiking models simulate action potential recovery, each neuron needs to keep track of its own state, and therefore there are few possibilities for this kind of performance improvement.
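Marking a subsystem as atomic is done through a block parameter (a one-line sketch; 'neuron_demo/Neuron' is a hypothetical subsystem name):

% Treat the subsystem as a unit during code generation, so it is
% emitted as a separate function in the generated C/C++ code.
set_param('neuron_demo/Neuron', 'TreatAsAtomicUnit', 'on');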

Simulink also provides tools to analyse system performance, including detailed reports on bottlenecks, which allow different model features to be tested to achieve optimal performance.

VII. IMPLEMENTATION OF IZHIKEVICH SPIKING NEURON IN SIMULINK ENVIRONMENT

The Izhikevich spiking model provides the ability to model different types of neurons, including bursting and chattering neurons, which are among the better studied types and therefore have useful applications (fig. 1).

Fig. 1. Different types of neurons in the Izhikevich spiking model [6]

The basis of the Izhikevich spiking model is a system (6) of ordinary differential equations whose parameters define the type of neuron, the reset voltage drop and the rise of the recovery variable at reset [6, 8-9]:

$\begin{cases} v' = 0.04v^2 + 5v + 140 - u + I \\ u' = a(bv - u) \end{cases}$ (6)

with the after-spike reset: if $v \geq 30$ mV, then $v \leftarrow c$ and $u \leftarrow u + d$.
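Before building the Simulink diagram, system (6) can be checked directly in MATLAB with forward Euler integration (a minimal sketch; we use the regular-spiking parameter set a = 0.02, b = 0.2, c = -65, d = 8 from [6] and an arbitrary constant input current):

% Izhikevich neuron (6), regular-spiking parameters from [6].
a = 0.02; b = 0.2; c = -65; d = 8;
dt = 0.5; T = 1000;                 % time step and duration, ms
I = 10;                             % constant input current
v = -65; u = b * v;                 % initial membrane and recovery state
V = zeros(1, round(T/dt));
for k = 1:round(T/dt)
    v = v + dt * (0.04*v^2 + 5*v + 140 - u + I);
    u = u + dt * a * (b*v - u);
    if v >= 30                      % spike: apply the reset from (6)
        V(k) = 30;                  % record the spike peak
        v = c; u = u + d;
    else
        V(k) = v;
    end
end
plot((1:numel(V))*dt, V); xlabel('t, ms'); ylabel('v, mV')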

System (6) is relatively computationally efficient and can be implemented as a Simulink model relatively easily (fig. 2).

Fig. 2. Izhikevich spiking neuron model in Simulink

To study the behavior of the model, we can monitor the membrane voltage of a neuron with a constant applied current (fig. 3).

Fig. 3. Membrane voltage

As is clear from figure 3, the spiking neuron exhibits a rapid spiking pattern during the buildup of the recovery variable and then settles into regular spiking behavior, in accordance with the behavior described by Izhikevich in fig. 1. We can therefore conclude that the developed software model is relatively accurate.

VIII. CONCLUSIONS

This article only outlines the common challenges that we encountered during the design of a robot controlled by a spiking neural network. Different applications may allow sacrifices that we deemed unacceptable, or instead place additional constraints that make our suggested solutions inapplicable.

It is still hard to test SNN solutions in robotics on a large scale, due to the secrecy of the companies that are pioneers in this field and a lack of practical research. Hopefully, this article will provide basic guidelines for SNN implementation.
REFERENCES
[1] W. S. McCulloch, W. Pitts, "A logical calculus of the ideas immanent in nervous activity", The Bulletin of Mathematical Biophysics, 5(4), 1943, pp. 115-133.
[2] F. Rosenblatt, "The perceptron: A probabilistic model for information storage and organization in the brain", Psychological Review, 65(6), 1958, p. 386.
[3] A. Karpov, A. Zhilenkov and D. Lisitsa, "The integration of the video monitoring, inertial orientation and ballast systems for container ship's emergency stabilization," 2017 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus), 2017.
[4] L. F. Abbott, T. B. Kepler, "Model neurons: from Hodgkin-Huxley to Hopfield", Statistical Mechanics of Neural Networks, 1990, pp. 5-18.
[5] A. Zhilenkov, "The study of the process of the development of marine robotics," Vibroengineering Procedia, vol. 8, pp. 17-21, 2016.
[6] E. M. Izhikevich, "Simple model of spiking neurons", IEEE Transactions on Neural Networks, 14(6), 2003, pp. 1569-1572; S. M. Bohte, J. N. Kok, H. La Poutre, "Error-backpropagation in temporally encoded networks of spiking neurons", Neurocomputing, 48(1), 2002, pp. 17-37.
[7] O. Booij, H. tat Nguyen, "A gradient descent rule for spiking neurons emitting multiple spikes", Information Processing Letters, 95(6), 2005, pp. 552-558.
[8] O. Booij, "Temporal Pattern Classification using Spiking Neural Networks" (Master's thesis), University of Amsterdam, 2004.
[9] A. Nyrkov, K. Goloskokov, E. Koroleva, S. Sokolov, A. Zhilenkov and S. Chernyi, "Mathematical Models for Solving Problems of Reliability Maritime System", Advances in Systems, Control and Automation, pp. 387-394, 2017.
[10] D. Lisitsa and A. Zhilenkov, "Comparative analysis of the classical and nonclassical artificial neural networks," 2017 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus), 2017.
[11] D. Lisitsa and A. Zhilenkov, "Prospects for the development and application of spiking neural networks," 2017 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus), 2017.
[12] S. Chernyi and A. Zhilenkov, "Modeling of Complex Structures for the Ship's Power Complex Using Xilinx System", Transport and Telecommunication Journal, vol. 16, no. 1, 2015.
[13] A. Zhilenkov, "GaN Materials Nanostructures Growth Control in the Epitaxial Units", Solid State Phenomena, vol. 265, pp. 627-630, 2017.
[14] A. Zhilenkov and S. Chernyi, "Investigation Performance of Marine Equipment with Specialized Information Technology", Procedia Engineering, vol. 100, pp. 1247-1252, 2015.
[15] A. Karpov and A. Zhilenkov, "Designing the platform for monitoring and visualization orientation in Euler angles," 2017 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus), 2017.
