Final ERP Submission
1 Introduction
Neuromorphic computing is an approach to computing that aims to mimic the way the human brain
works in terms of function, structure and efficiency. The concept was coined in the late 1980s, and
research continues today. This type of computing makes use of spiking neural networks built from
artificial neurons, and both the theoretical and hardware aspects of the field are explored here. This
article covers four vital sections, namely Hardware, Simulators, Algorithms and Techniques, in order
to conclude whether research in this field is ultimately detrimental or whether we can take advantage
of our programming proficiency to make greater progress than we can imagine.
2 Hardware
2.1 Deployment Hardware
Numerous pieces of deployment hardware exist, but in this article we focus on a neurosynaptic
chip called TrueNorth. It contains 4096 cores, with a transistor count of just over 5.4 billion
(Esser et al., 2015). The main issue that arises in neuromorphic computing is backpropagation.
This is the process by which the weights of the neurons are adjusted to produce accurate results,
but it cannot be applied directly here: standard backpropagation assumes continuous-output neurons
and continuous synaptic weights in a regular perceptron model (Esser et al., 2015), whereas
neuromorphic computing makes use of spiking neurons and discrete synaptic weights instead. This
might suggest that neuromorphic computing cannot be efficient until more research has been done;
however, the deployment hardware suggests otherwise. It is also important to note that TrueNorth
uses digital rather than analogue circuitry; analogue circuits have a tendency towards more noise,
so more errors can accumulate in them. This chip is capable of driving a noticeable shift
in AI development.
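To make this mismatch concrete, the minimal sketch below (illustrative only, not TrueNorth code) contrasts a differentiable continuous-output neuron, which backpropagation can train, with a spiking neuron whose step activation has no useful gradient:

import numpy as np

def continuous_neuron(x, w):
    # Sigmoid output: smooth, so dL/dw exists and backpropagation works.
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

def spiking_neuron(x, w, v_th=1.0):
    # Step output: the derivative is zero almost everywhere,
    # so gradients cannot flow back through the spike.
    return 1.0 if np.dot(w, x) >= v_th else 0.0

x = np.array([0.5, 0.2])
w = np.array([0.8, 1.1])
print(continuous_neuron(x, w), spiking_neuron(x, w))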
Figure 1: TrueNorth Chip with 64x64 cores
Table 1: A table displaying a few differences between modern-day and future computing.
around the simulation of neural networks with three important types of hardware (Calimera et al., 2013),
namely accelerator boards, programmable arrays and general-purpose hardware. Accelerator boards use
digital signal processing to let a model be simulated more quickly, whereas an FPGA allows real-time
programming, which relates directly to the key question of how programming provides a tangible
advantage. Key concepts of the simulative strategy are explored in greater depth in the section
titled Simulators.
Figure 2: Sliding Window Technique
2.6 Network-on-Chip
An NoC is a specialised chip that can receive data packets for DSP (digital signal processing) over
a full-duplex bus, where transmission can occur in both directions. Using NoCs has one clear
advantage: it is a step closer to the three-dimensional connectivity found in the brain
(Roy et al., 2019), and perhaps even changing the network topology could help with this. If we put
more research and time into this, AI could become even more powerful than it is right now.
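As a rough illustration of how spike events might travel across an NoC, the sketch below wraps spikes in address-event-style packets and routes them over a 2D mesh of cores; the packet fields and X-then-Y routing are simplifying assumptions for illustration, not any specific chip's protocol:

from collections import namedtuple

# Simplified address-event packet: which core/neuron fired, and when.
SpikePacket = namedtuple("SpikePacket", ["src_core", "src_neuron", "timestep"])

def route_xy(src, dst):
    """Toy X-then-Y routing on a 2D mesh of cores (full-duplex links)."""
    x, y = src
    hops = []
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        hops.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        hops.append((x, y))
    return hops

pkt = SpikePacket(src_core=(0, 0), src_neuron=42, timestep=17)
print(pkt, route_xy((0, 0), (2, 1)))   # hops: (1,0), (2,0), (2,1)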
3.1 SuperNeuro
One simulator that stands out for the programming paradigm in neuromorphic computing is
SuperNeuro. This simulator has two modes, MAT and ABM. We will highlight the differences
between the two, but mainly dive into the importance of this simulator in the forthcoming
epoch of neuromorphic computing. Both modes are crucial, but MAT supports homogeneous
simulations, in which the bias, the synapses of given weights and the spiking neurons are all of
the same type (Prasanna, 2023). One spiking neuron model used is the leaky integrate-and-fire
(LIF) model, which has a threshold, reset, leak and refractory period that need to be altered,
along with an offset value, to allow for greater accuracy. ABM, also known as Agent-Based
Modelling, is a simulation mode in SuperNeuro that works by treating each neuron as a separate
"agent" (Prasanna, 2023). Each "agent" contains a neuron and synapse step function, with the
ability to alter parameters like axonal delay and refractory periods, alongside the use of GPU
acceleration (Prasanna, 2023). Using these modes, the best simulation can be produced, allowing
for more discoveries in the world of neuromorphic computing.
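As a minimal sketch of the LIF dynamics just described, using simple Euler integration; the parameter names are illustrative and do not follow SuperNeuro's API:

import numpy as np

def lif_trace(current, tau=10.0, v_th=1.0, v_reset=0.0, leak=0.0,
              refractory=2, dt=1.0):
    """Leaky integrate-and-fire with threshold, reset, leak and refractory period."""
    v, refr, spikes, trace = 0.0, 0, [], []
    for t, i_in in enumerate(current):
        if refr > 0:                                 # inside refractory period: hold at reset
            refr -= 1
            v = v_reset
        else:
            v += dt * (-(v - leak) / tau + i_in)     # leaky integration of input current
            if v >= v_th:                            # threshold crossing -> spike and reset
                spikes.append(t)
                v = v_reset
                refr = refractory
        trace.append(v)
    return spikes, trace

spikes, _ = lif_trace(np.full(100, 0.15))
print("spike times:", spikes)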
3.3 SNNs
There are two critical approaches that help justify the shift towards this type of computing.
They are called conversion-based and spike-based approaches, and both are techniques used for
learning in SNNs. SNNs, otherwise known as Spiking Neural Networks, allow for increased
plasticity as well as sparsity of neurons, and a great deal of research has been put into
backpropagation for them (Roy et al., 2019). Backpropagation was discussed repeatedly above, and
the advantage of exploring it is that it could allow multi-layer SNNs to be created, albeit with
greater training times; it would also establish a programming paradigm that is open for
exploration by all. At the moment, stochastic gradient descent is used over random neuron firing
rather than precise one-to-one updates (Roy et al., 2019), and this area is still being
investigated. We can compare this with the conversion-based approach, where weight scaling and
normalisation methods are applied; this yields energy efficiency as well as a great increase in
accuracy, though at the cost of increased latency. We can also alter membrane thresholds and leak
time constants, but this brings problems: for example, the softmax function (Roy et al., 2019)
allows negative values to be discarded, which in turn affects performance. The two approaches
encourage further research and the creation of a programming paradigm in which one can argue over
which approach is easier to implement and more efficient, truly enabling a greater transition
within the neuromorphic world.
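To illustrate the weight scaling used in conversion-based approaches, the sketch below applies a common data-based normalisation idea, scaling each layer's weights by the peak activation observed on calibration data so that converted spiking neurons operate within their dynamic range; this is a generic sketch of the technique, not a specific toolchain's implementation:

import numpy as np

def normalise_weights(weights, activations):
    """Scale each layer's weights by the maximum activation seen on
    calibration data (per-layer factors cancel across layers)."""
    scaled, prev_factor = [], 1.0
    for W, acts in zip(weights, activations):
        factor = np.max(acts)                    # layer's peak activation
        scaled.append(W * prev_factor / factor)  # rescale relative to previous layer
        prev_factor = factor
    return scaled

# Hypothetical two-layer example with calibration activations.
weights = [np.random.randn(4, 3), np.random.randn(2, 4)]
activations = [np.array([2.0, 3.5, 1.2, 0.8]), np.array([1.4, 0.9])]
print([W.shape for W in normalise_weights(weights, activations)])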
Figure 3: An example of a problem that drove a great deal of development
MAT                                   | ABM
--------------------------------------|---------------------------
CPU usage                             | GPU usage
Inherently scalable                   | Analogous synaptic weights
Spike-timing-dependent plasticity     | Process scheduling scheme

Table 2: A table displaying a few differences between the two modes, MAT and ABM.
the true power of neuromorphic computing, and could slow research almost to a halt, which lets us
question whether it is truly worth putting a great deal of effort in; yet this is also why it is
much better than other types of computing, owing to the programming paradigm, which we shall
explore in detail later. Furthermore, there is currently a lack of programming features, so one
can argue that the shift to AI is premature. Examples include designing the different layers
within the neural network, the threshold, as well as the synaptic strength at an intricate level
each time a network must be created; this is much more work and can affect accuracy levels,
increasing the chance of error. It has been noted that the "Neural Engineering Framework" and
"Dynamic Neural Fields" allow sub-networks to be defined (Schuman et al., 2022) that perform
particular tasks like binary operations, sequences and conditional statements, substituting for
programming jargon.
3.6 Code
import time
import matplotlib.pyplot as plt
from brian2 import *
import neuro  # NOTE: the 'neuro' package and its API below are as given in the
              # original listing; the calls resemble a PyNN-style interface and
              # may require a compatible simulator package.

# Shared simulation parameters
num_neurons = 100
tau = 10 * ms
threshold = 0.8        # dimensionless membrane threshold
reset = 0
sim_duration = 100 * ms

start_time_brian2 = time.time()

# Define the Brian2 model: v decays towards 0.5 with time constant tau.
# Note: with a drive of 0.5 and a threshold of 0.8, these neurons never
# actually spike; the comparison here is purely about execution time.
eqs_brian2 = '''
dv/dt = (0.5 - v) / tau : 1
'''
neurons_brian2 = NeuronGroup(num_neurons, eqs_brian2,
                             threshold='v > threshold',
                             reset='v = reset', method='linear')
neurons_brian2.v = reset

# Record spikes and the membrane potential of one neuron
spike_mon_brian2 = SpikeMonitor(neurons_brian2)
state_mon_brian2 = StateMonitor(neurons_brian2, 'v', record=0)

# Run Brian2 simulation
run(sim_duration)
end_time_brian2 = time.time()

start_time_neuro = time.time()

# Create a network in Neuro
net_neuro = neuro.Network()
neurons_neuro = net_neuro.create_population(
    num_neurons, neuro.types.IF_curr_exp,
    {'tau_rc': tau, 'v_thresh': threshold, 'v_reset': reset})
membrane_potential_probe = neurons_neuro[0].probe('v')

# Run Neuro simulation
net_neuro.run(sim_duration)
end_time_neuro = time.time()

# Left panel (reconstructed; the original listing only plotted the right panel):
# Brian2 membrane potential of the recorded neuron.
plt.subplot(1, 2, 1)
plt.plot(state_mon_brian2.t / ms, state_mon_brian2.v[0])
plt.xlabel('Time (ms)')
plt.ylabel('Membrane potential')
plt.title('Brian2 - Membrane potential of a neuron over time')

plt.subplot(1, 2, 2)
plt.plot(membrane_potential_probe.time, membrane_potential_probe.data[0])
plt.xlabel('Time (ms)')
plt.ylabel('Membrane potential')
plt.title('Neuro - Membrane potential of a neuron over time')
plt.tight_layout()
plt.show()

# Compare execution times
print("Brian2 execution time:", end_time_brian2 - start_time_brian2, "seconds")
print("Neuro execution time:", end_time_neuro - start_time_neuro, "seconds")

# Measurements used for Figure 4 (simulation time and scalability vs network size)
simulator_a_times = [1, 3, 6, 20, 50]
simulator_b_times = [0.5, 2, 4, 15, 40]
simulator_a_scalability = [1, 0.95, 0.90, 0.85, 0.80]
simulator_b_scalability = [1, 0.98, 0.95, 0.92, 0.90]
Figure 4: Graph of Simulator Time against Network Size of Brian2 and Neuro
computing in terms of its ease of implementation as well as its ability to be explored much more
deeply through programming, something that is not possible for optical computing, for example. We
start with the Hopfield algorithm, a single-layer feedback neural network with the ability to
identify features even in the presence of heavy noise. Before taking a critical approach, let us
lay out that there are two types: discrete and continuous. The discrete network takes binary
inputs indicating whether a neuron is activated or inhibited, while the major difference in the
continuous network is its activation function, which is a sigmoid rather than a discrete step
function (Yu et al., 2020). The algorithm is associative, meaning it is able to learn as well as
remember, much like the "memristor" concept talked about earlier, in that it retains its data in
the static sense even after power is switched off. This algorithm relates to hardware deployment,
network deployment and mapping as discussed earlier, and the fact that it enables network
optimisation goes a long way to show how just one algorithm can accelerate progress within
neuromorphic computing, and in turn AI and its development.
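As a minimal sketch of the discrete Hopfield network's associative recall, using standard Hebbian storage of ±1 patterns and asynchronous updates; this illustrates the behaviour described above rather than any cited paper's implementation:

import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: W is the sum of outer products of ±1 patterns,
    normalised, with a zero diagonal (no self-connections)."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, sweeps=5):
    """Asynchronous updates until the state settles into a stored attractor."""
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]])
W = train_hopfield(patterns)
noisy = np.array([1, -1, -1, -1, 1, -1])   # first pattern with one bit flipped
print(recall(W, noisy))                    # should recover [1, -1, 1, -1, 1, -1]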
4.2 What can be done to improve the stages of creating the Hopfield Network?
There are five vital stages: selecting standard samples (1), designing the Hopfield network (2),
training the Hopfield network (3), running test data through the neural network (4), and
analysing the results, perhaps through a confusion matrix (5). Neuromorphic computing is quite a
new field, emerging around the 1990s, and if we want progress to occur, especially in making use
of a programming paradigm, each stage can be improved (Yu et al., 2020). For stage 1, we can
experiment with different sample sizes, and therefore different matrix sizes, and possibly find a
better way to process the data than a binarisation matrix. For stage 2, rather than taking into
account the matrix size, we should take into account the number of neurons the neural network
will need, which is the matrix size squared. Stage 3 is important to consider as it uses a
"neurodynamic method", which recurses until a stable state is reached; this can be altered
slightly, and the question arises of whether there is a better way to train the data in the
Hopfield network. It makes use of associative memory, with the concept of evolution involved
(Yu et al., 2020). Stage 5 is the most critical of all: here we can use a confusion matrix to
compare against the trained object vector, looking at bias, variance and F1 score.
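As a small sketch of the stage-5 analysis, the snippet below computes bit-wise confusion counts and an F1 score against a stored target vector; comparing recalled ±1 patterns bit-wise is an assumption made for illustration:

import numpy as np

def f1_from_confusion(predicted, target):
    """Bit-wise confusion counts for ±1 vectors, then precision, recall and F1."""
    tp = np.sum((predicted == 1) & (target == 1))    # true positives
    fp = np.sum((predicted == 1) & (target == -1))   # false positives
    fn = np.sum((predicted == -1) & (target == 1))   # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

target = np.array([1, -1, 1, -1, 1, -1])
predicted = np.array([1, -1, -1, -1, 1, -1])
print("F1:", f1_from_confusion(predicted, target))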
in neuromorphic computing, like an artificial synapse (Vishwa et al., 2020), is needed because we
want to transfer, store and process data all at once without the need for a dedicated storage
space; this is remarkable because memory is quite a large bottleneck. Given this potential, AI
could be changed completely, and it makes sense for neuromorphic computing to shift towards
development in the AI sector while the existence of a programming paradigm allows for different
approaches to neuromorphic computing. Another concept that must be known to all is the memristor
(Vishwa et al., 2020). This is a non-volatile component that is able to retain its memory even
when it loses power. It is formed as a matrix of switches and can retain data because, when no
voltage passes through the component, the remaining charge allows the memristor to act as what is
known as a "memory component" (Vishwa et al., 2020). These are ground-breaking discoveries that
could change the direction in which AI is heading and would answer the question with certainty.
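To make the memristor's non-volatility concrete, here is a sketch using the well-known linear ion-drift idealisation (the parameter values are illustrative, not a specific device's): resistance depends on a state variable that integrates past current and simply holds once the drive voltage is removed:

import numpy as np

def memristor_trace(voltage, dt=1e-3, R_on=100.0, R_off=16e3,
                    mobility=1e-14, D=1e-8, w0=0.5):
    """Linear ion-drift memristor: state w in [0, 1] integrates current,
    so the resistance persists (non-volatile) once the drive is zero."""
    w, resistances = w0, []
    for v in voltage:
        R = R_on * w + R_off * (1 - w)           # mixed doped/undoped resistance
        i = v / R
        w += dt * mobility * R_on / D**2 * i     # ion drift moves the doped boundary
        w = min(max(w, 0.0), 1.0)
        resistances.append(R)
    return resistances

# Apply a voltage pulse, then zero volts: resistance stays where the pulse left it.
v = np.concatenate([np.full(50, 1.0), np.zeros(50)])
trace = memristor_trace(v)
print(trace[0], trace[55], trace[-1])   # after the pulse, resistance holds steady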
4.6 SW Optimization
An approach called model compression can alter the sparsity and the distribution of neuron weights
as well as biases. A technique known as weight pruning (arxiv.org, n.d.) can be applied, in which
the values of unneeded weights in the tensor are set to zero. Research has shown that energy and
power consumption decrease, with maximum data efficiency, as fewer reads and writes to the
neuromorphic chip are performed (arxiv.org, n.d.).
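A minimal sketch of magnitude-based weight pruning as described above: the smallest-magnitude weights are zeroed, increasing sparsity and reducing the reads and writes the chip must perform; the sparsity target here is an illustrative choice:

import numpy as np

def prune_weights(W, sparsity=0.7):
    """Zero out the smallest-magnitude weights until `sparsity` of them are zero."""
    cutoff = np.quantile(np.abs(W), sparsity)    # magnitude threshold
    return np.where(np.abs(W) < cutoff, 0.0, W)

W = np.random.randn(4, 4)
P = prune_weights(W)
print("fraction of zeros:", np.mean(P == 0.0))   # roughly the requested sparsity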
4.7 HW Optimization
Accelerators that are HW-optimised allow for efficient spike transmission within the Spiking
Neural Network (arxiv.org, n.d.) and therefore faster memory access, hence faster read/write
speeds to the neuromorphic chip. This allows for better-optimised memory as well as a reduced
risk of collisions and other kinds of errors. All of this is essential to consider so that the
reader can be satisfied that the shift to AI development is justified, given that these
techniques increase efficiency while remaining straightforward to implement for engineers with
years of industry experience.
4.8 Conclusion
In conclusion, the shift towards neuromorphic computing in AI development is justified to an
extent, provided that a government or a private company can allocate a budget for more research
in this field. From what we have explored above, there is pending potential waiting to be
unlocked, and the evidence of a programming paradigm, from plotting graphs to accessing a
memristor's internal structure using C and assembly language, really does provide a tangible
advantage over conventional approaches built on emulative and simulative strategies. If
programming were not so central, the development of neuromorphic computing would take epochs, and
we must truly be grateful that it exists.
4.9 Reference List
Dureja, H., Garg, Y., Chaujar, R. and Kumar, B. (n.d.). Review Research Paper: Neuromorphic
Computing and Applications. International Research Journal of Modernization in Engineering. [online]

Roy, K., Jaiswal, A. and Panda, P. (2019). Towards spike-based machine intelligence with
neuromorphic computing. Nature, [online] 575(7784), pp.607–617. doi:10.1038/s41586-019-1677-2.

Schuman, C.D., Kulkarni, S.R., Parsa, M., Mitchell, J.P., Date, P. and Kay, B. (2022).
Opportunities for neuromorphic computing algorithms and applications. Nature Computational
Science, [online] 2(1), pp.10–19.

Prasanna (2023). SuperNeuro: A Fast and Scalable Simulator for Neuromorphic Computing. [online]
Available at: https://arxiv.org/pdf/2305.02510v1.pdf [Accessed 11 Nov. 2023].

Yu, Z., Abdulghani, A.M., Zahid, A., Heidari, H., Imran, M.A. and Abbasi, Q.H. (2020). An
overview of neuromorphic computing for artificial intelligence enabled hardware-based Hopfield
neural network. IEEE Access, vol. 8, pp.67085–67099.

Esser, S.K., Appuswamy, R., Merolla, P., Arthur, J.V. and Modha, D.S. (2015). Backpropagation for
Energy-Efficient Neuromorphic Computing. [online] Neural Information Processing Systems.
[Accessed 11 Nov. 2023].

Calimera, A., Macii, E. and Poncino, M. (2013). The Human Brain Project and neuromorphic
computing. Functional Neurology, [online] 28(3), pp.191–196.

arxiv.org. (n.d.). Embodied Neuromorphic Artificial Intelligence for Robotics: Perspectives,
Challenges, and Research Development Stack. [online]

Vishwa, R., Karthikeyan, R., Rohith, R. and Sabaresh, A. (2020). Current Research and Future
Prospects of Neuromorphic Computing in Artificial Intelligence. IOP Conference Series: Materials
Science and Engineering, 912, p.062029.

Patil, P., Shirashyad, P., Pandhare, P. and Biradar, P. (2023). Recent Trends in AI: Neuromorphic
Computing Architectures for Artificial Intelligence. International Journal of Creative Research
Thoughts (IJCRT), [online] 11. Available at: www.ijcrt.org.