
PROCEEDINGS OF SPIE
SPIEDigitalLibrary.org/conference-proceedings-of-spie

Neurocomputer architecture based on spiking neural network and its
optoelectronic implementation

Kolesnytskyj, Oleh; Kutsman, Vladislav; Skorupski, Krzysztof; Arshidinova, Mukaddas

Oleh K. Kolesnytskyj, Vladislav V. Kutsman, Krzysztof Skorupski, Mukaddas
Arshidinova, "Neurocomputer architecture based on spiking neural network
and its optoelectronic implementation," Proc. SPIE 11176, Photonics
Applications in Astronomy, Communications, Industry, and High-Energy
Physics Experiments 2019, 1117609 (6 November 2019); doi:
10.1117/12.2536607

Event: Photonics Applications in Astronomy, Communications, Industry, and
High-Energy Physics Experiments 2019, 2019, Wilga, Poland



Neurocomputer architecture based on spiking neural network and its
optoelectronic implementation
1Oleh K. Kolesnytskyj, 1Vladislav V. Kutsman, 2Krzysztof Skorupski, 3Mukaddas Arshidinova
1Vinnytsia National Technical University, Khmelnytske Hwy 95, Vinnytsia, Vinnyts'ka oblast, 21000, Ukraine;
2Lublin University of Technology, Nadbystrzycka 38A, 20-618 Lublin, Poland;
3Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan

ABSTRACT

The paper clarifies the definitions of the terms neurocomputer and neurocomputer architecture. The choice of a spiking
neural network as the neurocomputer operating unit is substantiated. The organization principles of a spiking
neurocomputer are formulated by analysis and generalization of the current state of knowledge on neurocomputer
architecture (by analogy with the well-known principles of organization of the von Neumann digital computer). An
analytical overview of current projects on spiking neural network hardware implementation is conducted and their
major disadvantages are highlighted. An optoelectronic hardware implementation of a spiking neural network is
proposed that is free of these disadvantages, owing to the use of optical signals for communication between neurons
and to learning organized in hardware. The main technical parameters of the proposed spiking neural network are estimated.
Keywords: neurocomputer, neurocomputer architecture, spiking neural network, construction principles, hardware
implementation

1. INTRODUCTION
Today, the answer to the question "What instrument is best suited to solving poorly formalized and unformalized tasks?" is
undoubtedly the neurocomputer. A neurocomputer is an information system whose main processing unit is an artificial
neural network (as opposed to a microprocessor in a digital computer) and whose basic functioning principle is learning
from examples (as opposed to programming). Most modern neurocomputers exist in the form of software or
hardware-and-software implementations. It is well known, however, that the maximum benefit from the use of neurocomputers
can be obtained specifically with their hardware implementation. Unfortunately, no effective hardware implementation of a
neurocomputer has been created to date. An effective neurocomputer hardware implementation should contain the largest
possible number of neurons (ideally close to the number of neurons in the human brain, about 5×10^10), while occupying
minimal volume and consuming a minimum of energy. The creation of such brain-like neurocomputers will solve two
mutually related tasks: 1) the creation of "smart" computers for performing complex cognitive unformalized tasks; 2) the
revelation of the human brain's secrets through its reverse engineering by technical means. The immediate strategic goal is
to create a neuromorphic core (a hardware neural network chip) that anyone will be able to use to test their own theories
about the brain's functioning principles, and to build a variety of neurocomputer systems on its basis for practical
applications. This new knowledge can then be used in the next generation of neuromorphic cores.

2. FORMULATION OF THE PROBLEM


There is a large amount of information in the scientific literature about various neurocomputer architectures.
Since all known neurocomputers can be divided into three major classes (software, hardware-and-software, and hardware),
the architecture can also be considered separately for software [1,2], hardware-and-software [3,4] and hardware neurocomputers [4]. Since
the maximum advantages of neurocomputers over traditional computers can only be obtained with their hardware
implementation [4,16], in this article we consider only the architecture of hardware neurocomputers.
The late 1990s were a period of overwhelming devotion to the software emulation of neurocomputer technology on
personal computers, workstations and even supercomputers. It became clear that only the creation of a specialized element
base (as it seemed then, VLSI neurochips) for the purely hardware realization of biologically plausible neural networks with a
large number of neurons would significantly increase the performance-to-cost ratio of neurocomputer information
processing. However, implementation on the basis of electronic VLSI has a number of significant drawbacks related to the
impossibility of organizing a large number of electrical connection lines between neurons in the plane of a semiconductor
crystal. Because of this, instead of asynchronous transmission of pulses between neurons (as in nature), synchronized
transmission of encoded digital data packets [5] between groups of neurons is used, which distorts the very principles of
organization of natural neural structures (neuromorphism) [17,18].
The problem today is the creation of hardware neurocomputers with a large number of energy-efficient neurons and with
neuromorphic organization principles: direct links between physically separate spiking neurons and adaptive learning
without the use of computational procedures.
To identify ways of overcoming this problem, the following key questions need to be answered:
1) What should the architecture of the neurocomputer be?
2) What should the structure of its processing unit (neural network) be?
3) Which technology should be used for the hardware implementation of this neural network?
4) What methods and means should be applied for adaptive neural network training without the use of
computational procedures?
The purpose of this article is to formulate the principles of neurocomputer architecture design by analogy with the well-
known von Neumann principles of digital computer design, to analyze current projects on spiking neural network
hardware implementation in order to identify their advantages and disadvantages, and to propose ways to eliminate these
disadvantages [19,20].

3. NEUROCOMPUTER ARCHITECTURE DESIGN PRINCIPLES


The circuit-level (constructive-technological) and system-level (architectural) aspects of computing hardware
development are closely linked with each other. This fact plays an important role in choosing the optimal computer
design strategy, and it applies to neurocomputers as well. It is rather hard to determine what is primary and what is
secondary: on the one hand, new architectural solutions stimulate the development of technology, and on the other hand,
advances in technology lead to changes in architectural decisions, so these two processes evolve cyclically in an
expanding spiral [21,22].
In the case of a regular digital computer, the computer architecture includes both the structure that reflects the computer's
composition and its software and mathematical support. The foundations of digital computer architecture theory
were laid by John von Neumann [6]. Von Neumann not only formulated the fundamental principles of computer
design and functioning, but also proposed its structure, presented in Fig. 1.

[Fig. 1. The architecture of a digital computer built on the von Neumann principles: an input device and an output device
connected to a central processing unit (CPU) containing an arithmetic logic unit (ALU) and a control unit, together with an
operational memory device and an external memory device. Solid lines denote information streams, dashed lines denote
control signal streams.]
Let us recall von Neumann's principles:
1. The principle of binary coding: all information in the computer is represented in binary form, as combinations of 0s and 1s.
2. The principle of memory homogeneity: both programs and data are stored in the same memory, so the computer
does not distinguish what is stored in a given memory cell; numbers, text, commands, etc. can be stored there. The same
actions can be performed on commands as on data.
3. The principle of memory addressability: structurally, the main memory (MM) consists of numbered cells, and any
memory cell is available to the central processing unit (CPU) at any time. It is therefore possible to assign names to
memory blocks for more convenient interaction between the MM and the CPU.
4. The principle of sequential program control: a program consists of a set of commands that are executed by the
CPU automatically, one after another, in a definite sequence.
5. The principle of conditional transition: commands are not always performed one after another, so a conditional
transition command exists which changes the sequence of command execution depending on the value of the stored data.
Let us define the architecture of a neurocomputer by analogy with that of a digital computer.
The neurocomputer architecture is the combination of the neurocomputer's conceptual structure and the basic
principles of operation and interaction of its components, including methods of information representation and
processing, training algorithms, and methods of organizing a user-friendly interface [23,24].
What should the architecture of a neurocomputer be? In the scientific and technical literature there are very few
publications concerning the architecture of an abstract neurocomputer and its conceptual generalized structure. On the
other hand, there is a wealth of information on the structures and architectures of various types of neural networks
(multilayer perceptron, Hopfield network, Hamming network, etc.). There are also structures and architectures of specific
(rather than abstract) neuroprocessors (e.g., based on vector-matrix operations), neuroboards or "vector-matrix"
neurocomputers [3,4], oriented towards a specific element base (e.g., FPGA) or a specific neural network structure
based on formal or analog neurons. As an example of a generalized neurocomputer structure that correlates with
other similar research, one can cite the scheme presented in Fig. 2 [7].

[Fig. 2. Generalized structural diagram of an abstract neurocomputer: an input device and an output device connected to the
neural network, which interacts with a memory device, a learning unit and a control unit.]

In the same paper [7], the basic architectural principles of an abstract neurocomputer design are formulated:
1. The main operating unit of the neurocomputer (its processor) is an artificial neural network, which is a set of
formal neurons connected by information transmission channels.
2. The neural network does not perform calculations; it only transforms the input signal (pattern) into the output signal
according to its topology and the values of the synaptic interconnections.
3. A neurocomputer memory device stores the program for changing the synaptic coefficients between neurons.
4. The information input and output devices perform the same functions as in von Neumann's structure, and the
control device synchronizes the work of all structural blocks of the neurocomputer when solving a specific task.
5. The neurocomputer works in two modes: learning and functioning. The learning process is the solution of an
optimization problem whose purpose is to minimize the error function on a given set of training input-output pattern
examples by selecting the coefficients of the inter-neuron connections. In the functioning mode, the training unit is
disconnected, and the neurocomputer receives noisy signals that need to be recognized.
As we can see, in [7,25,26] the principles are formulated in a somewhat amorphous manner, not in as definite a
style as von Neumann's principles. This structure and the formulated principles require further clarification and clearer
wording in light of the achievements of scientific research in recent years, in particular in the field of spiking
neural networks [8,9,27].

4. JUSTIFICATION FOR THE CHOICE OF SPIKING NEURAL NETWORK AS
NEUROCOMPUTER PROCESSING UNIT
In recent years, research in the field of spiking neural networks has intensified; in these networks, the information
signals are impulses (spikes) rather than static signals of a certain level. What is the cause of this? The question is
discussed in detail in [8,9,10]; in brief, it is mainly due to the fact that these networks are neuromorphic (more
similar to the brain than traditional ones) and universal (their structure does not depend on the task being solved, unlike
traditional networks with static signals). That is, one does not need a whole "zoo" of neural network structures to meet all
practical needs.
Due to their neuromorphism, spiking neural networks have the following advantages over traditional ones:
1) recognition of dynamic patterns (speech, time-varying images, etc.) without any discretization or feature
extraction, and non-iteratively;
2) multi-task capability (information about the input streams circulates in a recurrent neural network, so the results of
multiple tasks can be produced at the output simultaneously by different groups of readout neurons, each trained to carry out a
certain task);
3) recognition with prediction (any dynamic process can be recognized from incomplete data, i.e. even before it is
over);
4) simplicity of the learning procedure (not all neurons in the network learn, only the output readout neurons);
5) increased data processing performance and noise immunity due to time-pulse information coding.
These advantages make spiking neural networks the most promising candidate for the implementation of the
neurocomputer's operational unit.
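To make the notion of time-pulse (spike) coding concrete, the following minimal Python sketch simulates a single leaky integrate-and-fire neuron (one of the neuron models that appears later in Table 1). All parameter values here are illustrative assumptions, not figures taken from this paper.

import numpy as np

def lif_spike_times(input_current, dt=1e-4, tau=20e-3, v_rest=0.0,
                    v_thresh=1.0, v_reset=0.0):
    # Leaky integrate-and-fire neuron: the output is a list of spike times,
    # so the information is carried by the moments at which spikes occur,
    # not by a static output level.
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += dt / tau * (v_rest - v) + dt * i_in   # leaky integration of the input
        if v >= v_thresh:                          # threshold crossing: emit a spike
            spike_times.append(step * dt)
            v = v_reset                            # reset the membrane potential
    return spike_times

# A stronger input makes the neuron fire earlier and more often, so the
# input intensity is encoded in spike timing.
weak = lif_spike_times(np.full(2000, 60.0))
strong = lif_spike_times(np.full(2000, 120.0))
print(len(weak), len(strong))

Nothing in this sketch is specific to the optoelectronic implementation discussed below; it only illustrates why spike timing alone can carry the information that static-output networks encode in signal levels.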

5. IMPROVING THE PRINCIPLES OF NEUROCOMPUTER ARCHITECTURE DESIGN


Today, the neurocomputer structure (Fig. 2) and the design principles given above [7,11] are in need of clarification
based on the achievements of scientific research in recent years, namely the results of [8,9,10]. Thus, in principle #1,
"formal neurons" should be replaced with "biologically realistic spiking neurons", and "connected by channels of
information transmission" with "connected directly". There is a simple explanation for this: in order to reveal the secrets
of the human brain, it is necessary to simulate it as realistically as possible; that is, spiking neurons should be used
instead of formal neurons with a potential output, connections between neurons should be direct, and asynchronous
transmission of impulses should not be replaced by synchronized packet transmission of digital codes. If this is not done,
there is a risk of not obtaining the desired emergence of extraordinary properties (such as awareness, thinking, emotions,
etc.) in a network of ordinary information units, which is what neurons are.
One could fully agree with principle #2.
In case of principle #3, doubts are caused by the fact that there are no programs in natural neural networks (in the
brain); in particular, there are no training programs that change the connection weights between neurons. The learning
mechanism is built into the very structural and functional organization of the brain. This is confirmed by Hebb's rule
[3,7], derived from the results of neurophysiological research. The mechanisms of the brain have not yet been studied
deeply enough, so it is still impossible to adjust the structure of Fig. 2 clearly and unambiguously, but it is clear that the
learning mechanism should be part of the organization of the neurocomputer core (neural network). Memorization should
also take place in the neural network (the principle of "memory in the processor"). However, a separate storage device
may still exist to store secondary information in order to decrease the load on the neural network [28,29,30].
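As an illustration of a learning mechanism that is built into the structure itself rather than supplied as an external program, the minimal sketch below applies the classical Hebbian weight update mentioned above ("neurons that fire together wire together"). The rate constant and the activity variables are illustrative assumptions; spiking implementations would typically use a spike-timing-dependent variant of this rule.

import numpy as np

def hebbian_update(weights, pre, post, learning_rate=0.01):
    # Local Hebbian step: dw_ij = eta * post_i * pre_j.  Only quantities
    # available at the synapse itself (pre- and post-synaptic activity)
    # are used, so no external training program is required.
    return weights + learning_rate * np.outer(post, pre)

# Toy usage: 3 presynaptic and 2 postsynaptic neurons.
w = np.zeros((2, 3))
pre_activity = np.array([1.0, 0.0, 1.0])   # which inputs were active
post_activity = np.array([1.0, 0.0])       # which outputs fired
w = hebbian_update(w, pre_activity, post_activity)
print(w)  # only synapses between co-active neurons are strengthened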
Principle #4 can be accepted in part. As for the input devices, it has long been believed that the neurocomputer must
have sensory fields similar to the five human senses (that is, understand human language, perceive images, smell,
taste, and have tactile sensors). Respectively, as for the output devices, the neurocomputer must generate human
speech, produce images, and have mechanical organs for movement in space and for orientation of its sensory
fields. Neurophysiologists have determined that the human brain consists of separate neuron micronetworks (neocortical
columns), and that there are separate areas in the brain responsible for certain functions. As for the control device, it may
be necessary at the initial stages, but as our knowledge of the brain progresses, the control functions will also be
incorporated into one of the subsystems of the neurocomputer core [31,32].
We can also partially agree with principle #5, with the clarification that the division into two modes (training
and functioning) is rather nominal. After all, it is known that in the biological brain these two modes run in parallel, and
there is no external system that switches between them.

In addition to these points, it would make sense to add an item about attaching a conventional digital computer to the
neurocomputer in order to solve formalized tasks. That is, the digital computer should not manage the neurocomputer's
blocks; rather, the neurocomputer should control the digital computer (setting tasks for it that it can do better and faster,
and using the results of these tasks).

[Fig. 3. Improved generalized structural diagram of the abstract neurocomputer: sensors (image, sound/language, smell, taste,
touch) feed the neural network, which contains distributed memory, a learning mechanism and distributed control; its outputs
drive an image generator, a sound (language) generator and mechanical executing organs; an external memory and an interface
with a digital computer (and, through it, the digital computer itself) are attached to the network.]

Thus, taking into account the above arguments, the structure of abstract neurocomputer can be represented by the
scheme in Fig. 3, and the architectural principles of its construction can be formulated as follows:
1. The principle of spike coding: all information in the neurocomputer is represented in the form of spikes
(information is encoded by the moments at which impulses occur; a small encoding sketch is given after this list).
2. The principle of processing associativity: the main processing core of the neurocomputer is a hardware-
implemented network of the highest possible number of biologically realistic spiking neurons that are directly connected to
each other. The neural network does not perform any computations; it transforms the input signal (pattern) into the
output image according to its topology and the interconnection coefficients.
3. The principle of learning autonomy and adaptability: the neurocomputer should be trained to perform a specified
mapping autonomously on the basis of adaptation, and the learning mechanism should be incorporated into the
organization of its processor core (neural network).
4. The principle of memory distribution: memory functions in the neurocomputer are realized by its processor core
(neural network), and a separate external memory plays only a supplementary role.
5. The principle of interface naturalness: the neurocomputer should have an interface natural to humans, that is,
it should understand and generate human language, produce images, have sensory fields similar to the five human
senses, and have mechanical organs for movement in space and for orientation of its sensory fields.
6. The principle of homogeneity of processing and control: control over the functional parts of the neurocomputer is
carried out by its processor core (neural network).
7. The principle of amplification by a digital computer: the neurocomputer must incorporate a digital computer that is
used for solving formalized problems. The neurocomputer controls the digital computer (sets formalized tasks for it,
which it can do better and faster, through an appropriate interface).
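The spike-coding principle (#1) can be illustrated with a minimal sketch. It assumes a simple time-to-first-spike scheme in which stronger inputs fire earlier; both the scheme and its parameters are illustrative assumptions here, not the encoding used in the hardware proposed below.

def time_to_first_spike(values, t_max=0.1, v_max=1.0):
    # Encode analog values as spike times: the larger the value, the
    # earlier the spike.  All information is carried by spike timing.
    return [t_max * (1.0 - min(max(v, 0.0), v_max) / v_max) for v in values]

# Pixel intensities 0.9, 0.5 and 0.1 become spikes at 10 ms, 50 ms and 90 ms.
print(time_to_first_spike([0.9, 0.5, 0.1]))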
According to these principles, a neurocomputer can be defined as follows:
A neurocomputer is a complex of technical means for information processing whose main processor unit is a
hardware-implemented large-scale network of spiking neurons, which has a verbal-visual interface and sensory fields
intrinsic to human nature, and which performs cognitive functions characteristic of the human brain.
According to this definition, such a neurocomputer is significantly different from all previous software, hardware-and-software and
hardware neurocomputers based on digital or analog potential-output artificial neurons. It can be set apart from them by
calling it a "natural neurocomputer", because it is meant to be as intelligent, creative and self-aware as a human being.
No such computer has yet been built, but it is only a matter of time.

6. REVIEW OF WELL-KNOWN PROJECTS FOR HARDWARE IMPLEMENTATION OF
SPIKING NEURAL NETWORKS
Let us consider the most notable current projects in the field of hardware implementation of spiking neural
networks. There are numerous projects in different countries, but the four main ones are presented in Table 1
(columns 1-4).

Table 1. The most powerful projects in the field of spiking neural network hardware implementation

Title: SpiNNaker [5] | Neurogrid [13] | BrainScaleS [14] | TrueNorth [15] | Proposed [12]
Country: UK | USA | EU | USA | Ukraine
Element base: ARM968 microprocessors (130 nm) | analogue CMOS VLSI (180 nm) | analogue CMOS VLSI | digital VLSI (28 nm) | hybrid (optoelectronics and VLSI)
Neurons on chip: 20,000 (20 cores, 1000 neurons per core) | 65,536 (256×256) | 512 | 1 million (4096 cores, 256 neurons per core) | ~3000
Chip size: 10.4×9.8 mm² | 11.9×13.9 mm² | 5×10 mm² | 4.2 mm² | ~30×30 mm²
Neurons in the full system: 1 billion; the machine is housed in ten 19-inch racks of 100,000 cores each, with the cards carrying the chips held in 5 blade enclosures (as of 14 Oct. 2018) | 1 million; Neurogrid consists of 16 "neurocores" of 65,536 neurons each | ~200,000 neurons on a wafer; each 20-cm Si wafer contains 384 chips | largest current configuration: 16 chips, 16 million neurons, 4 billion synapses; next configuration: 4096 chips, 4 billion neurons, 1 trillion synapses | modules may be cascaded, with connections between chips made by optical signals
Synapses per neuron: 1000 | ~6000 | 256 | 256 | ~3000
Neuron model: Izhikevich spiking neuron | quadratic integrate-and-fire (QIF) neuron | exponential integrate-and-fire (AdExp) neuron | integrate-and-fire neuron | integrate-and-fire neuron
Network type: fully connected | 2-D recurrent with "Mexican hat" connectivity | programmable | programmable | fully connected
Power consumption: 1 mW/neuron | 3.1 µW/neuron | 5 mW/neuron | 0.1 mW/neuron | 10-20 µW/neuron
Qualitative parameters:
- SpiNNaker: there are no separate physical neurons (–)
- Neurogrid: there are separate neurons (+); connectivity is not limited (+)
- BrainScaleS: there are separate neurons (+); connectivity is not limited (+)
- TrueNorth: many neurons, but their connectivity is limited (–)
- Proposed: there are separate neurons (+); connectivity is not limited (+)
- SpiNNaker, Neurogrid, BrainScaleS, TrueNorth: the chip is installed in a case, and the case on a board with additional servicing electronic circuits, so the size of the system as a whole is large (–); Proposed: no board with additional electronic circuits is required, so the size of the system relative to the chip does not increase (+)
- SpiNNaker, Neurogrid, BrainScaleS, TrueNorth: synchronous pulse transfer (whereas transmission in the brain is asynchronous) (–); Proposed: asynchronous transfer (+)
- SpiNNaker, Neurogrid, BrainScaleS, TrueNorth: there are no direct connections between neurons (–); Proposed: direct connections between neurons (+)
- SpiNNaker, Neurogrid, BrainScaleS, TrueNorth: training of the neural network is done with the help of digital computers and special software (–); Proposed: on-chip learning (+)
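As a side note on the "Neuron model" row: the Izhikevich model used in SpiNNaker can be simulated in a few lines. The sketch below uses the standard regular-spiking parameters from Izhikevich's original formulation; it is an illustrative aside, not part of the compared designs.

import numpy as np

def izhikevich_spike_times(i_in, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    # Izhikevich neuron: v' = 0.04 v^2 + 5 v + 140 - u + I, u' = a (b v - u);
    # when v >= 30 mV the neuron spikes, then v <- c and u <- u + d.
    v, u = c, b * c
    spikes = []
    for step, i in enumerate(i_in):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i)
        u += dt * a * (b * v - u)
        if v >= 30.0:                    # spike event
            spikes.append(step * dt)     # spike time in ms
            v, u = c, u + d
    return spikes

# One second (2000 steps of 0.5 ms) of constant input produces regular spiking.
print(izhikevich_spike_times(np.full(2000, 10.0)))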

The two main drawbacks of all the known projects are indicated in the last two rows of Table 1:
1) There are no direct connections between neurons, since it is technologically impossible to create a large number of
electrical traces in the plane of a semiconductor crystal. Direct connections are replaced by artificial programmable or
specially coded exchange protocols between groups of neurons, which reduce the number of electrical traces in the plane
of the semiconductor crystal but violate the principles by which the biological brain works;
2) Training of the neural network is done with the help of digital computers and special software, rather than with the
network's own non-programmed, non-computational tools.
If the first disadvantage were eliminated by creating direct electrical connections between the neuron electronic
circuits, then, taking into account the space needed to isolate one communication trace from another, the occupied area
could become so large that no room would be left on the silicon wafer for the neuro-element circuits themselves. At the
same time, connecting neuro-elements with light beams does not require isolation between signal paths, because light
beams can pass through one another without mutual influence. Moreover, signal paths can be located in all three
dimensions. The density of optical transmission paths is limited only by the size of the light sources, their divergence,
and the size of the detectors. In addition, all signal paths can work simultaneously, thus providing an enormous rate of
data transfer.
Therefore, the first disadvantage can be corrected with spiking neural network architectures designed for
optoelectronic implementation, in which the connections between neurons are carried out using optical signals. In the
future, it will also be possible to use a nanoelectronic element base, provided that methods of three-dimensional
construction of nanoelements are invented. The second disadvantage can be corrected by developing spiking neural
network architectures that have the means to organize training with their own circuitry, without the use of computational
procedures (similar to biological neural networks) [33].

7. ESTIMATION OF TECHNOLOGICAL PARAMETERS OF OPTOELECTRONIC SPIKING NEURAL NETWORK
In [12], an implementation of an optoelectronic spiking neural network is proposed. It is made in a hybrid form, i.e. it
combines optical two-dimensional spatial light modulators (SLM) and electronic (VLSI) components. This spiking
neural network can be manufactured in the form of a "sandwich" structure. The estimated parameters of this structure are
given in Table 1 (column 5).
In [12] it is shown that one neuron in such a "sandwich" structure can be placed on an area of 10×10 µm. If an
aperture of 3 cm is considered a technologically normal size for optoelectronic devices, then it can accommodate
3000 pixels. Thus, it is already possible today to produce hardware implementations of a spiking neural network with
about 3000 neurons. These will be spiking neural network modules, which can be cascaded by optical means to obtain a
spiking neural network with more elements.
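A quick back-of-envelope check of this estimate (the neuron pitch and the aperture are the values quoted from [12]; the per-axis reading of the 3000-pixel figure is an assumption of this sketch):

# Rough check of the module-size estimate from [12].
neuron_pitch_um = 10.0      # one neuron occupies a 10 x 10 um cell
aperture_um = 3.0e4         # a 3 cm aperture expressed in micrometres

pixels_per_axis = aperture_um / neuron_pitch_um
print(pixels_per_axis)      # 3000.0 -> a module of about 3000 neurons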
From Table 1 it can be seen that the proposed project is inferior or equal to the known ones in most quantitative
parameters (except power consumption), but surpasses them in most qualitative parameters. The main qualitative
advantages are: 1) there are individual physical neurons with direct physical connections between them (rather than
transmission of information about the generated pulses in the form of encoded packets in synchronized intervals via a
bus); 2) training is carried out without a digital computer, by means of on-chip tools (SLM) and simple systems for
changing the control signals.
Due to these qualitative advantages, the proposed project can be considered more neuromorphic than the others (its
functioning is more adequate to the functioning of the brain); therefore, when it is used to study the principles of the
brain, it is more likely to reveal the secrets of the brain's work, and it reproduces the processes of brain functioning more
accurately.

8. CONCLUSIONS
A review of the scientific and technical information on neurocomputer architecture design has shown that there are
currently no clearly formulated principles of neurocomputer architecture design analogous to the well-known von
Neumann principles of digital computer architecture design. The information on known neurocomputer structures and
the principles of their design and functioning has been generalized, and the choice of a spiking neural network as the
neurocomputer operating unit has been substantiated. Seven principles of neurocomputer architecture design have been
formulated by analogy with the von Neumann principles: 1) the principle of spike coding, 2) the principle of processing
associativity, 3) the principle of learning autonomy and adaptability, 4) the principle of memory distribution, 5) the
principle of interface naturalness, 6) the principle of homogeneity of processing and control, 7) the principle of
amplification by a digital computer.
An analytical review of current projects on spiking neural network hardware implementation has shown that today
they all use very-large-scale integration (VLSI) technology, which is well developed and tested, and therefore
convenient. Two main drawbacks of all these projects have been emphasized: 1) the absence of direct connections
between neurons, since it is technologically impossible to create a large number of electrical traces in the plane of a
semiconductor crystal (direct links are replaced by artificial programmable or specially coded exchange protocols
between groups of neurons, which reduces the number of electrical traces in the plane of the semiconductor crystal but
breaks compliance with the principles of the biological brain's work); 2) training of spiking neural networks is done
using digital computers and special software, rather than using their own non-programmed, non-computational tools.
The proposed optoelectronic version of the spiking neural network hardware implementation is free of these
disadvantages. The first drawback is eliminated by the use of optical signals for the organization of the connections
between neurons. The second is corrected by training the proposed spiking neural network in hardware (SLM) without
the use of computational procedures, which also provides the network with the ability to learn further and to relearn
(adaptability). Due to these qualitative advantages, the proposed project is more neuromorphic (more adequate to the
brain), and therefore, when used, it is more likely to reveal the secrets of the human brain.

REFERENCES

[1] Mirkes, E. M., [Neyrocomputer. Proekt Standarta], Nauka, Sibirskaya izdatelskaya firma RAN, Novosibirsk, 188 p., ISBN 5-02-031409-9 (1998)
[2] Grytsenko, V. I., Misuno, I. S., Rachkovskiy, D. A., "The concept and architecture of the software neurocomputer SNC", Control Systems and Machines, no. 3, pp. 3-14 (2004)
[3] Krug, P. G., [Neural networks and neurocomputers: study guide], MEI Publishing House, Moscow, 176 p. (2002)
[4] Galushkin, A. I., [Neurocomputers], IPRGR, Moscow, 528 p., ISBN 5-93108-007-4 (2000)
[5] SpiNNaker Home Page http://apt.cs.manchester.ac.uk/projects/SpiNNaker/
[6] Von Neumann, J., [The General and Logical Theory of Automata], Wiley, New York, pp. 1–41 (1951)
[7] Komartsova, L. G., Maksimov, A. V., [Neurocomputers: study guide for universities], Publishing House of
Moscow State Technical University, Moscow, ISBN 5-7038-2554-7, p.400 (2004)
[8] Maass, W., Bishop, C.M., [Pulsed Neural Networks], MIT Press, Cambridge, ISBN 0-262-13350-4, p.377
(2001)

[9] Bardachenko, V. F., Kolesnytskyj, O. K., Vasyletsky, S. F., "Application prospects for spiking neural network with time information representations for the dynamical pattern recognition", Upravliayuschie sistemy i mashyny, no. 6, pp. 73-82 (2003)
[10] Natschläger, T., “The liquid computer: A novel strategy for real-time computing on time series”, Foundations of
Information Processing of TELEMATIK, vol.8, no1: pp.39-43, (2002)
[11] Kolesnytskyj, O. K., Bokotsey, I. V., Yaremchuk, S. S., "Optoelectronic Implementation of Pulsed Neurons and Neural Networks Using Bispin-Devices", Optical Memory & Neural Networks, Vol. 19, no. 2, pp. 154-165 (2010)
[12] Kozemiako, V. P., Kolesnytskyj, O. K., Lischenko, T. S., Wojcik, W., Sulemenov, A., "Optoelectronic spiking neural network", Proc. SPIE 8698, Optical Fibers and Their Applications, 86980M (2012)
[13] The Board: Neurogrid. https://web.stanford.edu/group/brainsinsilicon/neurogrid.html
[14] BrainScaleS. http://brainscales.kip.uni-heidelberg.de/index.html
[15] Introducing a Brain-inspired Computer. http://www.research.ibm.com/articles/brain-chip.shtml/
[16] Zabolotna, N. I.; Pavlov, S. V.; Radchenko, K. O., “Diagnostic efficiency of Mueller - matrix polarization
reconstruction system of the phase structure of liver tissue”, Proc. of SPIE, Vol. 9816, 98161E, (2015)
[17] Kukharchuk, V. V.; Bogachuk, V. V.; Hraniak, V. F.; et al., “Method of magneto-elastic control of mechanic
rigidity in assemblies of hydropower units”, Proc. of SPIE, Vol. 10445, 104456A, (2017)
[18] Vyatkin, S. I.; Romanyuk, A. N.; Gotra, Z. Y.; et al., “Offsetting, relations and blending with perturbation
functions”; Proc.of SPIE, Vol. 10445, 104452B, (2017)
[19] Wojcik, W., Komada, P., Cieszczyk, S., et al., "ECTL application for carbon monoxide measurements", Proc. of SPIE, Vol. 5958, 595837 (2005)
[20] Wojcik, W., Lach, Z., Smolarz, A., et al., "Power supply via optical fibre in home telematic networks", Przeglad Elektrotechniczny, Vol. 84, Issue 3, pp. 277-279 (2008)
[21] Wojcik, W., Kisala, P., "The application of inverse analysis in strain distribution recovery using the fibre Bragg grating sensors", Metrology and Measurement Systems, Vol. 16, Issue 4, pp. 649-660 (2009)
[22] Wojcik, W., Gromaszek, K., Kotyra, A., et al., "Pulverized coal combustion boiler efficient control", Przeglad Elektrotechniczny, Vol. 88, Issue 11B, pp. 316-319 (2012)
[23] Zabolotna, N. I., Wojcik, W., Pavlov, S. V., et al., "Diagnostics of pathologically changed birefringent networks by means of phase Mueller matrix tomography", Proc. of SPIE, Vol. 8698, 86980E (2013)
[24] Chepurna, O; Shton, I.; Kholin, V, et al., “Photodynamic therapy with laser scanning mode of tumor”, Proc.
of SPIE, Vol. 9816, 98161F (2015)
[25] Krak, Y. V., Barmak, A. V., Bagriy, R. A., Stelya, I. O., "Text entry system for alternative speech communications", Journal of Automation and Information Sciences, Vol. 49, Issue 1, pp. 65-75 (2017)
[26] Kryvonos, I. G., Krak, I. V., Barmak, O. V., Bagriy, R. O., "Predictive text typing system for the Ukrainian language", Cybernetics and Systems Analysis, Vol. 53, Issue 4, pp. 495-502 (2017)
[27] Zabolotna, N. I.; Pavlov, S. V., Sobko, O. V., Savich, V. O., “Multivariate system of polarization tomography of
biological crystals birefringence networks”, Proc. SPIE, Vol. 9166, 916615 (2014)
[28] Wojcik, W., Smolarz, A., [Information Technology in Medical Diagnostics], Taylor & Francis Group, CRC Press, London, 210 p. (2017)
[29] Vassilenko, S., Valtchev, S, Teixeira, J.P., Pavlov, S., “Energy harvesting: an interesting topic for education
programs in engineering specialities” Internet, Education, Science IES, pp. 149-156 (2016)
[30] Tenderenda, T., Murawski, M., Szymanski, M., Becker, M., Rothhardt, M., Bartelt, H., Mergo, P., Poturaj, K.,
Makara, M., Skorupski, K., Marc, P., Jaroszewicz, L., Nasilowski, T., “Fibre Bragg Gratings written in highly
birefringent microstructured fiber as very sensitive strain sensors”, Proc. SPIE 8426, 84260D (2012)
[31] Kvyetnyy R., Bunyak Y., Sofina O., "Blur recognition using second fundamental form of image surface", Proc.
SPIE 9816, 98161A (2015)
[32] Kvyetnyy, R. N., Romanyuk, O. N., Titarchuk, E. O., "Usage of the hybrid encryption in a cloud instant messages exchange system", Proc. SPIE 10031, 100314S (2016)
[33] Kvyetnyy, R, Sofina, O., Orlyk P., Utreras, A. J., Wojcik, W., "Improving the quality perception of digital
images using modified method of the eye aberration correction", Proc. SPIE 10031, 1003113 (2016)
