Proceedings of SPIE: Neurocomputer Architecture Based on Spiking Neural Network and Its Optoelectronic Implementation
ABSTRACT
The paper clarifies the definitions of the terms "neurocomputer" and "neurocomputer architecture". The choice of a spiking neural network as the neurocomputer's operating unit is substantiated. The organization principles of a spiking neurocomputer are formulated by analyzing and generalizing the current state of knowledge on neurocomputer architecture (by analogy with the well-known organization principles of the von Neumann digital computer). An analytical overview of current projects on hardware implementation of spiking neural networks is conducted, and their major disadvantages are highlighted. An optoelectronic hardware implementation of a spiking neural network is proposed that is free of these disadvantages, owing to the use of optical signals for communication between neurons and to the organization of learning in hardware. The main technical parameters of the proposed spiking neural network are estimated.
Keywords: neurocomputer, neurocomputer architecture, spiking neural network, construction principles, hardware
implementation
1. INTRODUCTION
Today, the answer to the question "What instrument is best for solving poorly formalized and unformalized tasks?" is undoubtedly the neurocomputer. A neurocomputer is an information system whose main processing unit is an artificial neural network (as opposed to a microprocessor in a digital computer) and whose basic functioning principle is learning from examples (as opposed to programming). Most modern neurocomputers exist as software or hardware&software implementations, but it is well known that the maximum benefit from neurocomputers can be obtained specifically through hardware implementation. Unfortunately, no effective hardware implementation of a neurocomputer has yet been created. An effective hardware implementation should contain the largest possible number of neurons (ideally close to the number of neurons in the human brain, 5 × 10¹⁰), while occupying minimal volume and consuming minimal energy. The creation of such a brain-like neurocomputer would solve two mutually related tasks: 1) building "smart" computers to perform complex cognitive unformalized tasks; 2) revealing the secrets of the human brain through its reverse engineering by technical means. The immediate strategic goal is to create a neuromorphic core (a hardware neural network chip) that anybody will be able to use to test their own theories of the brain's functioning principles and on whose basis a variety of neurocomputer systems can be built for practical applications. The new knowledge can then be incorporated into the next generation of neuromorphic cores.
Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2019,
edited by Ryszard S. Romaniuk, Maciej Linczuk, Proc. of SPIE Vol. 11176, 1117609
© 2019 SPIE · CCC code: 0277-786X/19/$21 · doi: 10.1117/12.2536607
Fig. 1. The architecture of a digital computer, built on the von Neumann principles.
Solid lines - information streams, dashed lines - control signal streams
Let us recall the principles of von Neumann:
1. The principle of binary coding: all information in the computer is represented in binary form, as combinations of 0s and 1s.
In the same paper [7], the basic architectural principles of an abstract neurocomputer design are formulated:
1. The main operating unit of the neurocomputer (its processor) is an artificial neural network: a set of formal neurons connected by information transmission channels.
2. The neural network does not perform calculations; it only transforms the input signal (pattern) into the output signal according to its topology and the values of the synaptic interconnections.
3. The neurocomputer memory device stores the program for changing the synaptic coefficients between neurons.
4. The information input and output devices perform the same functions as in von Neumann's structure, and the control device synchronizes the work of all structural blocks of the neurocomputer when solving a specific task.
5. The neurocomputer works in two modes: learning and functioning. The learning process is the solving of an optimization problem whose purpose is to minimize the error function on a given set of training input and output pattern examples by selecting the coefficients of inter-neuron connections. In the functioning mode, the training unit is disconnected, and the neurocomputer receives noisy signals that need to be recognized.
As we can see, in [7,25,26] the principles are formulated in a somewhat amorphous manner, not as definitively as von Neumann's principles. This structure and these principles require further clarification and clearer wording in light of recent scientific achievements, in particular in the field of spiking neural networks [8,9,27].
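Principle 2 of this abstract design (the network performs no program-driven calculations, only a mapping fixed by its topology and synaptic coefficients) can be illustrated with a minimal sketch; the weights and input pattern below are made up for illustration and are not a model from any of the cited works:

```python
import numpy as np

def formal_neuron_layer(x, W, b, threshold=0.0):
    """One layer of formal neurons: a weighted sum of the inputs followed by
    a step activation. The network's whole 'program' lives in W and b."""
    return (W @ x + b > threshold).astype(int)

# Hypothetical 2-neuron layer mapping a 3-element input pattern.
W = np.array([[0.5, -0.2, 0.7],
              [-0.3, 0.9, 0.1]])
b = np.array([-0.4, -0.4])
x = np.array([1.0, 1.0, 0.0])
y = formal_neuron_layer(x, W, b)   # -> array([0, 1])
```

Changing the output for the same input requires changing W and b (learning), not rewriting any instruction sequence, which is exactly the distinction principle 2 draws.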
Fig. 3. Improved generalized structural diagram of the abstract neurocomputer
Thus, taking into account the above arguments, the structure of an abstract neurocomputer can be represented by the scheme in Fig. 3, and the architectural principles of its construction can be formulated as follows:
1. The principle of spike coding: all information in the neurocomputer is represented in the form of spikes (information is encoded by the moments at which impulses occur).
2. The principle of processing associativity: the main processing core of the neurocomputer is a hardware-implemented network of the highest possible number of biologically realistic spiking neurons that are directly connected to each other. The neural network does not perform any computations; it transforms the input signal (pattern) into the output image according to its topology and interconnection coefficients.
3. The principle of learning autonomy and adaptability: the neurocomputer should learn to perform a specified mapping autonomously, on the basis of adaptation, and the learning mechanism should be incorporated into the organization of its processor core (neural network).
4. The principle of memory distribution: memory functions in the neurocomputer are realized by its processor core (neural network); a separate external memory has only a supplementary role.
5. The principle of naturalness of interface: the neurocomputer should have an interface natural to humans; that is, it should understand and generate human language, produce images, have sensory fields similar to the five human senses, and have mechanical organs for movement in space and for orientation of its sensory fields.
6. The principle of homogeneity of processing and control: control over the functional parts of the neurocomputer is carried out by its processor core (neural network).
7. The principle of amplification by a digital computer: the neurocomputer must incorporate a digital computer used for solving formalized problems. The neurocomputer controls the digital computer through an appropriate interface, setting for it the formalized tasks that it can do better and faster.
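The spike-coding principle can be made concrete with a small sketch of latency (time-to-first-spike) coding, in which each analog intensity is represented purely by the moment its impulse occurs; the linear time map and its parameters below are illustrative assumptions, not part of the proposed design:

```python
import numpy as np

def latency_encode(intensities, t_max=10.0):
    """Latency (time-to-first-spike) coding: the stronger the input, the
    earlier the spike. Zero intensity produces no spike (time = inf)."""
    intensities = np.asarray(intensities, dtype=float)
    times = np.full_like(intensities, np.inf)
    nonzero = intensities > 0
    # Linear map: intensity 1.0 fires immediately, weaker inputs fire later.
    times[nonzero] = t_max * (1.0 - intensities[nonzero])
    return times

# An analog pattern in [0, 1] becomes a set of spike moments.
spike_times = latency_encode([1.0, 0.5, 0.0])   # -> [0.0, 5.0, inf]
```

The information is carried entirely by *when* each channel fires, not by any signal amplitude, which is what distinguishes spike coding from the binary coding of principle 1 in the von Neumann scheme.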
According to these principles, a neurocomputer can be defined as follows:
A neurocomputer is a complex of technical means for information processing whose main processor unit is a hardware-implemented large-scale network of spiking neurons, which has a verbal-visual interface and sensory fields intrinsic to human nature, and which performs cognitive functions common to the human brain.
According to this definition, the neurocomputer differs significantly from all previous software, hardware&software and hardware neurocomputers based on digital or analog potential artificial neurons. It can be set apart from them as a "natural neurocomputer", because it is intended to be as intelligent, creative and self-aware as a human being. No such computer has yet been built, but it is only a matter of time.
Table 1. The most powerful projects in the field of spiking neural network hardware implementation

| Parameter | SpiNNaker [5] | Neurogrid [13] | BrainScaleS [14] | TrueNorth [15] | Proposed [12] |
|---|---|---|---|---|---|
| Neurons on chip | 20,000 (20 cores, 1,000 neurons each) | 65,536 (256×256) | 512 | 1 million (4,096 cores, 256 neurons each) | ~3,000 |
| Chip size | 10.4×9.8 mm² | 11.9×13.9 mm² | 5×10 mm² | 4.2 mm² | ~30×30 mm² |
| Neurons in a set | Largest current configuration: 1 billion, housed in 10 19-inch racks of 100,000 cores each; the cards holding the chips are held in 5 Blade enclosures (Oct. 14, 2018) | ~1 million: Neurogrid consists of 16 "neurocores" of 65,536 neurons each | ~200,000 neurons per wafer; each Ø20 cm Si wafer contains 384 chips | 16 chips: 16 million neurons, 4 billion synapses; next configuration: 4,096 chips, 4 billion neurons, 1 trillion synapses | Cascadable; connections between chips by optical signals |
| Synapses per neuron | 1,000 | ~6,000 | 256 | 256 | ~3,000 |
| Power consumption | 1 mW/neuron | 3.1 μW/neuron | 5 mW/neuron | 0.1 mW/neuron | 10–20 μW/neuron |
| Pulse transfer | synchronous (–) | synchronous (–) | synchronous (–) | synchronous (–) | asynchronous, as in the brain (+) |
| Connections between neurons | no direct connections (–) | no direct connections (–) | no direct connections (–) | no direct connections (–) | direct connections (+) |
| Training | by digital computer and special software (–) | by digital computer and special software (–) | by digital computer and special software (–) | by digital computer and special software (–) | on-chip learning (+) |
The two main drawbacks of all known projects are indicated in the last two rows of Table 1:
1) There are no direct connections between neurons, since it is technologically impossible to create a large number of electrical traces in the plane of a semiconductor crystal. Direct connections are replaced by artificial programmable or specially coded exchange protocols between groups of neurons; this reduces the number of electrical traces in the plane of the semiconductor crystal, but violates the working principles of the biological brain.
2) Training of the neural network is done with the help of digital computers and special software, rather than by the network's own non-programmed, non-computing means.
Even if the first disadvantage were eliminated by creating direct electrical connections between the neuron electronic circuits, the space needed to isolate one communication trace from another could become so large that no room would be left on the silicon wafer for the neuroelement circuits themselves. In contrast, connecting neuroelements with light beams requires no isolation between signal paths, because light streams can pass through one another without mutual influence. Moreover, optical signal paths can be routed in all three dimensions. The density of optical transmission paths is limited only by the size of the light sources, their divergence, and the size of the detectors. In addition, all signal paths can operate simultaneously, providing a huge data transfer rate.
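The density argument can be checked with back-of-the-envelope arithmetic; the facet size and per-beam footprint below are hypothetical round numbers chosen only to show the scale involved, not parameters of the proposed implementation:

```python
# Hypothetical numbers: a 1 cm x 1 cm optical facet, with each light beam
# needing a 10 um x 10 um footprint (source size plus a divergence margin).
facet_side_um = 10_000          # 1 cm expressed in micrometers
beam_pitch_um = 10              # per-channel footprint; no isolation needed
channels_per_facet = (facet_side_um // beam_pitch_um) ** 2
# 1000 x 1000 = 1,000,000 parallel, simultaneously active optical channels
```

Even under these conservative assumptions, a single square centimeter supports on the order of a million independent channels, whereas electrical traces at comparable pitch would additionally require isolation area and could not cross freely.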
Therefore, the first disadvantage can be overcome by spiking neural network architectures designed for optoelectronic implementation, in which the connections between neurons are carried out by optical signals. In the future, it may also become possible to use a nanoelectronic element base, provided that methods of three-dimensional construction of nanoelements are invented. The second disadvantage can be overcome by developing spiking neural network architectures that organize training through their own circuitry, without computational procedures (similarly to biological neural networks) [33].
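One well-known local learning rule of this kind is pair-based spike-timing-dependent plasticity (STDP): a weight changes only as a function of the relative timing of the spikes at its own synapse, exactly the kind of quantity a circuit can measure without computational procedures. The sketch below uses illustrative parameter values and is not the learning scheme of the proposed network:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP: the weight change depends only on the interval
    between one pre- and one post-synaptic spike (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:        # pre fires before post: strengthen (potentiation)
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:      # post fires before pre: weaken (depression)
        w -= a_minus * math.exp(dt / tau)
    return min(max(w, w_min), w_max)   # keep the weight in its allowed range

w_causal = stdp_update(0.5, t_pre=10.0, t_post=15.0)      # grows above 0.5
w_anticausal = stdp_update(0.5, t_pre=15.0, t_post=10.0)  # shrinks below 0.5
```

Because the rule needs only two local spike times and a bounded analog quantity, it maps naturally onto per-synapse circuitry rather than onto an external training computer.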
8. CONCLUSIONS
A review of the scientific and technical information on neurocomputer architecture design has shown that there are currently no clearly formulated principles of neurocomputer architecture design analogous to the well-known principles of von Neumann's digital computer architecture. The information on known neurocomputer structures and the principles of their design and functioning has been generalized, and the choice of spiking neural networks as the neurocomputer operating unit has been substantiated. Seven principles of neurocomputer architecture design have been formulated by analogy with the von Neumann principles: 1) the principle of spike coding, 2) the principle of processing associativity, 3) the principle of learning autonomy and adaptability, 4) the principle of memory distribution, 5) the principle of naturalness of interface, 6) the principle of homogeneity of processing and control, 7) the principle of amplification by a digital computer.
An analytical review of current projects on spiking neural network hardware implementation has shown that today they all use very large scale integration (VLSI) technology, which is well developed and tested, and therefore convenient. Two main drawbacks of all these projects have been emphasized: 1) the absence of direct connections between neurons, since it is technologically impossible to create a large number of electrical traces in the plane of a semiconductor crystal (direct links are replaced by artificial programmable or specially coded exchange protocols between groups of neurons, which reduces the number of electrical traces but departs from the working principles of the biological brain); 2) training of spiking neural networks is done using digital computers and special software, rather than by their own non-programmed, non-computing means.
The proposed optoelectronic version of the spiking neural network hardware implementation is free of these disadvantages. The first drawback is eliminated by using optical signals to organize the connections between neurons. The second is corrected by training the proposed spiking neural network in hardware (SLM) without computational procedures, which also provides the network with the ability to learn further and to relearn (adaptability). Owing to these qualitative advantages, the proposed project is more neuromorphic (closer to the brain), and its use is therefore more likely to help reveal the secrets of the human brain.
REFERENCES
[1] Mirkes, E. M., [Neurocomputer. Proekt Standarta], Nauka, Sibirskaya izdatelskaya firma RAN, Novosibirsk, 188 p., ISBN 5-02-031409-9 (1998)
[2] Grytsenko, V. I., Misuno, I. S., Rachkovskiy, D. A., "The concept and architecture of the software neurocomputer SNC", Control Systems and Machines, no. 3, pp. 3-14 (2004)
[3] Krug, P. G., [Neural Networks and Neurocomputers: Study Guide], MEI Publishing House, Moscow, 176 p. (2002)
[4] Galushkin, A. I., [Neurocomputers], IPRGR, Moscow, 528 p., ISBN 5-93108-007-4 (2000)
[5] SpiNNaker Home Page, http://apt.cs.manchester.ac.uk/projects/SpiNNaker/
[6] Von Neumann, J., [The General and Logical Theory of Automata], Wiley, New York, pp. 1-41 (1951)
[7] Komartsova, L. G., Maksimov, A. V., [Neurocomputers: Study Guide for Universities], Publishing House of Moscow State Technical University, Moscow, 400 p., ISBN 5-7038-2554-7 (2004)
[8] Maass, W., Bishop, C. M., [Pulsed Neural Networks], MIT Press, Cambridge, 377 p., ISBN 0-262-13350-4 (2001)