
Is the shift towards neuromorphic computing in AI development justified? Explain whether the programming paradigm for neuromorphic systems provides a tangible advantage over conventional approaches in terms of both efficiency and ease of implementation.
Sanjhay Vijayakumar
June 8, 2024

1 Introduction
Neuromorphic computing is a method of computing that aims to mimic the way the human brain works in terms of function, structure and efficiency. The concept was coined in the late 1980s, and research continues today. This type of computing makes use of spiking neural networks built from artificial neurons, and so spans both the theoretical and hardware aspects of the field. This article explores four vital sections, namely Hardware, Simulators, Algorithms and Techniques, to then conclude whether research in this field is actually detrimental or whether we are able to take advantage of our programming proficiency to make greater progress than we can imagine.

2 Hardware
2.1 Deployment Hardware
Numerous kinds of deployment hardware exist, but in this article we shall focus on a neurosynaptic chip called TrueNorth. It contains 4096 cores, with a transistor count of just over 5.4 billion (Esser, S.K., 2015). The main issue that arises in neuromorphic computing is backpropagation. This is the process by which the weights of the neurons are adjusted to ensure accurate results are produced, but it cannot be applied directly here because it assumes continuous synaptic weights and continuous-output neurons in a regular perceptron model (J.V., Modha, D.S., 2015), whereas neuromorphic computing makes use of spiking neurons and discrete synaptic weights instead. This might suggest that neuromorphic computing will not be efficient until more research is done; however, when exploring deployment hardware, that does not seem to be the case. It is important to note that TrueNorth uses digital circuitry rather than analogue circuits, which tend towards more noise and therefore accumulate more errors. This chip is thus well placed to make a noticeable shift in AI development.

2.2 Emulative and Simulative Strategies


A key thing to note is that there are numerous ways to drive greater development of neuromorphic computing, and some may be more promising than others, namely emulative strategies. An emulative strategy allows for stochasticity (Calimera, Macii, 2013), inherent scalability and, lastly, event-driven computation. It focuses on physical perceptron models built from "noisy" electronic components, e.g. an NPN transistor approaching the atomic scale. Why this is used is important to consider: AI development generally aims to correlate positively with a human in terms of how it functions, i.e. its characteristics and demeanour, and this strategy makes it possible to replicate the electrochemical behaviour of human neurons, which is not necessarily discrete. Another strategy, called simulative, revolves around the simulation of neural networks with three important types of hardware (E., Poncino, 2013): accelerator boards, programmable arrays and general-purpose processors.

Figure 1: TrueNorth chip with its 64x64 grid of cores

Von Neumann | Neuromorphic
Sequential | Parallel
Binary | SNN
Synchronous | Asynchronous

Table 1: A table displaying a few differences between modern-day and future computing.

Accelerator boards use digital signal processing to allow a model to be simulated more quickly, whereas an FPGA allows real-time reprogramming, which in fact relates to the key question of how programming is able to provide a tangible advantage. Key concepts of the simulative strategy will be explored in greater depth in the section titled Simulators.

2.3 ODIN (Online-learning Digital Spiking Neuromorphic Processor)


ODIN is a type of neuromorphic processor that enables further development in neuromorphic computing, with the ability to build chips of high complexity while also allowing much better manipulation of the hardware. A long list of examples could be named, but what matters is that these chips are complementary metal-oxide semiconductors (CMOS), which are silicon based. A lot of potential is seen in this hardware in terms of ease of implementation, and especially in justifying that we should soon move away from the Von Neumann architecture and its bottlenecks. Properties that can be tinkered with include phase-change behaviour, topological insulators and even channel-doped biomembranes. One important concept to note in hardware is the existence of the memristor; often described as a fourth fundamental circuit element alongside the resistor, capacitor and inductor, it is able to store its resistance value, representing a given state, even when there is no power, essentially making it an artificial synapse. Research in this field can improve the memory efficiency of the chips as well as their energy consumption and simulation times.
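To make the memristor's role as an artificial synapse concrete, here is a minimal Python sketch of a toy device model; the class name, update rule and parameter values are illustrative assumptions, not a model of any real device. The key point is that the conductance state persists between uses, just as the resistance value described above is retained without power.

import numpy as np

# Toy memristor model (illustrative only): the conductance is the stored state.
class Memristor:
    def __init__(self, g=0.5, g_min=0.01, g_max=1.0, rate=0.1):
        self.g = g                      # conductance state, retained without power
        self.g_min, self.g_max = g_min, g_max
        self.rate = rate                # how strongly write pulses move the state

    def apply_pulse(self, v):
        # Positive pulses potentiate, negative pulses depress the "synapse".
        self.g = float(np.clip(self.g + self.rate * v, self.g_min, self.g_max))

    def read_current(self, v):
        # Ohmic read, I = G * V; a small read voltage barely disturbs the state.
        return self.g * v

m = Memristor()
for _ in range(5):
    m.apply_pulse(+1.0)                 # repeated writes strengthen the synapse
print(m.g)                              # the state persists until the next write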

2.4 Network Topology


This is defined as the structure of the network, which can be specified physically or logically. Using the same TrueNorth architecture seen in Figure 1, we have a feedforward network over the 64x64 grid of cores, where the first layer is a rows x columns x channels input array, which is normally the start of a convolutional neural network used in image analysis, and the remaining parts consist of the cores (Appuswamy, Merolla, P., 2015). A technique called the sliding window, Figure 2, can be used to connect the layers; it works by selecting a window size and moving the window across the input at regular intervals until the image has been covered. In this case, it would operate at the points where the layers can be connected. Relevantly, by running a benchmark network we can measure bias, variance and energy consumption, with this topology simulating how a CNN works.

Figure 2: Sliding Window Technique
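
Since the sliding-window technique is central to how the layers are connected, a brief Python sketch may help; the window size and stride below are illustrative choices, not TrueNorth parameters.

import numpy as np

def sliding_windows(image, window=3, stride=1):
    # Yield every window x window patch of a 2D image at the given stride;
    # each patch would map a region of the input layer onto a core.
    rows, cols = image.shape
    for r in range(0, rows - window + 1, stride):
        for c in range(0, cols - window + 1, stride):
            yield image[r:r + window, c:c + window]

image = np.arange(36).reshape(6, 6)
patches = list(sliding_windows(image))
print(len(patches))                     # 16 overlapping 3x3 patches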

2.5 Alterations in Chip Design


From what we have explored, we see a large amount of potential in neuromorphic computing, with our current verdict justifying its shift towards AI development; but we have not yet explored the crucial importance of chip design, as this unlocks the true potential of this type of computing and demands great programming proficiency. Two related ideas are imperative to mention: asynchronous operation and address-event representation (AER) (Roy, K., 2019). Asynchronous means not dependent on an external clock, sending data intermittently, which differs from the conventional design of a chip seen today. The sparse nature of a spiking neural network fits remarkably well with this chip design, and in fact drives forward the future of neuromorphic computing and therefore projects a voice in AI development.
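Address-event representation can be pictured as a stream of (address, timestamp) events produced only when spikes occur, which is why it suits the sparsity of an SNN; the following Python sketch uses assumed field names rather than any standard AER packet format.

from collections import namedtuple

# An AER event: only spiking neurons generate traffic, so sparse activity
# yields a sparse event stream instead of a dense, clock-driven frame.
Event = namedtuple("Event", ["address", "timestamp_us"])

def encode_spikes(spike_times):
    # Turn {neuron_id: [spike times in microseconds]} into a time-ordered stream.
    events = [Event(nid, t) for nid, times in spike_times.items() for t in times]
    return sorted(events, key=lambda e: e.timestamp_us)

for e in encode_spikes({3: [120, 480], 7: [250]}):
    print(e)                            # e.g. Event(address=3, timestamp_us=120)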

2.6 Network-on-Chip
An NoC is a specialised interconnect that carries data packets for DSP (digital signal processing) over a full-duplex bus, where transmission can occur both ways. Using NoCs has an advantage: it is a step slightly closer to recreating the three-dimensional connectivity (A., Panda, P., 2019) found in the brain; perhaps even changing the network topology could aid this. We do know that if we were to put more research and time into this, AI could become even more powerful than it is right now.

3 Simulators and Networks


Simulators are able to create spiking neural networks and run them in an environment where we can constantly inspect properties like energy and memory usage; hence the existence of the programming paradigm, even where neuromorphic hardware is lacking, is an advantage, and it is also easy to adopt thanks to the many examples that exist.

3.1 SuperNeuro
One simulator that stands out for the programming paradigm in neuromorphic computing is SuperNeuro. This simulator has two modes, MAT and ABM. We will highlight the differences between the two, but mainly dive into the importance of this simulator in the forthcoming epoch of neuromorphic computing. Both modes are crucial, but MAT supports homogeneous simulations, which are when the biases, synapses of given weights and spiking neurons are all of the same type (Prasanna, 2023). One spiking neuron model used is the leaky integrate-and-fire (LIF) model, which has threshold, reset, leak and refractory-period parameters that need to be altered, along with an offset value, to allow for greater accuracy. ABM, also known as agent-based modelling, is a simulation mode in SuperNeuro that works by treating each neuron as a separate "agent" (Prasanna, 2023). Each "agent" contains a neuron and a synapse step function, with the ability to alter parameters like axonal delay, refractory periods and more, alongside the use of GPU acceleration (Prasanna, 2023). Using these modes, the best simulation can be produced, allowing for more discoveries in the world of neuromorphic computing.
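
As a minimal illustration of the LIF parameters named above (threshold, reset, leak and refractory period), here is a NumPy sketch of one discrete-time update; the parameter values and the exact update rule are illustrative assumptions rather than SuperNeuro's implementation.

import numpy as np

def lif_step(v, refrac, inp, leak=0.1, threshold=1.0, reset=0.0, refractory=3):
    # One discrete leaky integrate-and-fire update for a vector of neurons.
    active = refrac == 0                          # neurons outside their refractory period
    v = np.where(active, v + inp - leak * v, v)   # integrate input with a leak
    spikes = v >= threshold
    v = np.where(spikes, reset, v)                # reset the membrane after a spike
    refrac = np.where(spikes, refractory, np.maximum(refrac - 1, 0))
    return v, refrac, spikes

v, refrac = np.zeros(4), np.zeros(4, dtype=int)
for t in range(20):
    v, refrac, spikes = lif_step(v, refrac, inp=np.full(4, 0.3))
    if spikes.any():
        print(t, np.where(spikes)[0])             # spike time and neuron indices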

3.2 Deployment Networks


Networks are again crucial to explore in detail; many exist, but one that should be looked at is the feedforward network, in which backpropagation does not occur at deployment time. Instead of the usual analogue neurons, we have binary spikes that can be held in only two states (P., Arthur, n.d.). Now we must revert to the question of whether this sort of system provides a tangible advantage, as these networks are able to take advantage of backpropagation to train spiking neural networks offline and then map them to neuromorphic hardware (Esser, S.K., 2015). This method is known as "constrain then train": the deployment network is trained while also being constrained to what the hardware supports, which decreases training error and hence decreases error in hardware. We must ask why this is important and whether techniques like this can help training in neuromorphic computing; it truly does improve the ease of implementing networks of high accuracy, making training easy (J.V., Modha, 2015). This is similar to how high-precision sensory data is represented in the retina, and it helps to justify that there is a progressive shift in development, making us realise that the day of AI is nigh through the help of these deployment and training networks. Moreover, the fact that we are able to alter the networks after they are trained offline and mapped to the deployment network is a deal to think about, because we would need to train only once yet are able to sample across the training network. The true nature of these two networks is splendid and shows the efficiency they possess.
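
One common way to picture "constrain then train" is to train weights at full precision offline while projecting them onto the discrete values the deployment hardware supports; the sketch below is a simplified illustration of that idea, not the exact procedure of Esser et al.

import numpy as np

def constrain(w):
    # Project high-precision training weights onto the discrete synaptic
    # values the deployment hardware supports (here: -1, 0, +1).
    return np.clip(np.round(w), -1, 1)

rng = np.random.default_rng(0)
w_train = rng.normal(0, 0.5, size=(4, 4))   # trained offline at full precision
w_deploy = constrain(w_train)               # mapped once to the deployment network
print(w_deploy)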

3.3 SNNs
There are two critical approaches that help justify the shift towards this type of computing. They are called conversion-based and spike-based approaches, and both are techniques used for learning in SNNs. SNNs, otherwise known as spiking neural networks, allow for an increase in plasticity as well as sparsity of neurons, alongside a great deal of research being put into backpropagation (Roy, K., 2019). This term has been discussed repeatedly, and the advantage of its exploration is that it could allow multi-layer SNNs to be created with better training times, while also establishing a programming paradigm that is open for exploration by all. At the moment, stochastic gradient descent is used over random neuron firing rather than precise one-to-one spike timing (Panda, P., 2019), and this area is still being investigated. We can also compare this approach to the conversion-based approach, where weight scaling and normalization methods are present, which trades increased latency for an energy-efficient attitude as well as a great increase in accuracy. We are able to alter membrane thresholds and leak time constants, but this brings problems: the softmax function (Jaiswal, 2019) allows negative values to be discarded, which in turn affects performance. However, the two different approaches encourage more research and the creation of a programming paradigm that allows one to argue which approach offers easier implementation and better efficiency, truly enabling a greater transition within the neuromorphic world.
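
To give a flavour of the weight scaling mentioned for conversion-based approaches, the sketch below normalises each layer by the peak activation observed on sample data, in the spirit of data-based normalisation; the function and variable names are my own assumptions.

import numpy as np

def normalize_layers(weights, activations):
    # Scale each layer's weights by the peak activation seen at that layer,
    # so that converted spiking rates stay within the neuron's operating range.
    scaled, prev_scale = [], 1.0
    for w, act in zip(weights, activations):
        scale = act.max()
        scaled.append(w * prev_scale / scale)
        prev_scale = scale
    return scaled

rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 8)) for _ in range(2)]          # toy two-layer net
activations = [rng.uniform(0, 5, size=100) for _ in range(2)]  # sampled activations
print([w.max() for w in normalize_layers(weights, activations)])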

3.4 Constraints within Networks


Problems will always exist in neuromorphic computing, especially within the sub-topic of networks, and this could pose a challenge to the shift in AI development. We should understand that the field is quite new, so there is an expected scarcity of information that must be developed, making it harder to fathom which network is best mapped to which hardware, since each possesses different system requirements (Schuman, C.D., Kulkarni, 2022). Figure 3 shows ImageNet, a large visual database pertaining to computer vision. Through this, specific "benchmarks" have been created for the specific purpose of optimizing models (J.P., Kay, 2022).

Figure 3: ImageNet, an example of a problem set that drove heaps of development

MAT | ABM
CPU usage | GPU usage
Inherently scalable | Analogous synaptic weights
Spike-timing-dependent plasticity | Process scheduling scheme

Table 2: A table displaying a few differences between the two modes, MAT and ABM.

These constraints will restrict the true power of neuromorphic computing and could slow down research, almost halting it, allowing us to ask whether it is truly worth putting a great deal of effort in; yet that is also why this field fares better than other types of computing, owing to the programming paradigm which we shall explore in detail later. Furthermore, there is a lack of programming features at the moment, so one can argue that the shift to AI is premature. Examples include having to design the different layers within the neural network, the threshold, and the synaptic strength at an intricate level each time a network must be created; this is much more work and can affect accuracy levels, increasing the chance of error. It has been mentioned that the "Neural Engineering Framework" and "Dynamic Neural Fields" can allow sub-networks to be defined (Parsa, M., Mitchell, 2022) that perform particular tasks like binary operations, sequencing and conditional statements, substituting for programming jargon.

3.5 Controllability and manipulation of conductance


Let us lay out two definitions, namely FPGA and ASIC. An FPGA is a field-programmable gate array, which makes it reprogrammable, and an ASIC is an application-specific integrated circuit (Panda, P., 2019). Both types are subject to hardware manipulation, are comparatively easy to implement, and above all launch us into the world of programming, something unavailable with other possible hardware. We can appreciate the great research done into continuous regulation of conductance as well as PCRAM. Synaptic depression can be used to control the area, with connectivity chosen from sparse to distributed yet accurate networks (Roy, K., Jaiswal, 2019). It is quite a difficult task to accomplish through integrated circuits, but since the main target is implementing this in AI development, the hardware must be miniature, meaning that an IC is the only path. There is a list of problems that must be noted, consisting of sparse network concentration, core-to-core multicast, and variable synaptic formats (Roy, K., Jaiswal, 2019). Within this piece of writing, we will write some code using what we call a LIF mechanism.

3.6 Code
import time
import matplotlib.pyplot as plt
from brian2 import *
import neuro  # assumed simulator package; the API below follows the author's usage

num_neurons = 100
tau = 10 * ms
threshold = 0.8
reset = 0

sim_duration = 100 * ms

start_time_brian2 = time.time()

# Define the Brian2 LIF model; the drive of 1.0 sits above the 0.8 threshold
# so that the membrane potential can actually reach it and emit spikes.
eqs_brian2 = '''
dv/dt = (1.0 - v) / tau : 1
'''
neurons_brian2 = NeuronGroup(num_neurons, eqs_brian2,
                             threshold='v > threshold',
                             reset='v = reset', method='exact')
neurons_brian2.v = reset
# Record spikes
spike_mon_brian2 = SpikeMonitor(neurons_brian2)
# Run the Brian2 simulation
run(sim_duration)
end_time_brian2 = time.time()

start_time_neuro = time.time()
# Create a network in Neuro (parameter names follow the author's assumed API)
net_neuro = neuro.Network()
neurons_neuro = net_neuro.create_population(
    num_neurons, neuro.types.IF_curr_exp,
    {'tau_rc': tau, 'v_thresh': threshold, 'v_reset': reset})

membrane_potential_probe = neurons_neuro[0].probe('v')
# Run the Neuro simulation
net_neuro.run(sim_duration)
end_time_neuro = time.time()

plt.figure(figsize=(10, 5))

plt.subplot(1, 2, 1)
plt.plot(spike_mon_brian2.t / ms, spike_mon_brian2.i, '.k')
plt.xlabel('Time (ms)')
plt.ylabel('Neuron index')
plt.title('Brian2 - Spike raster plot')

plt.subplot(1, 2, 2)
plt.plot(membrane_potential_probe.time, membrane_potential_probe.data[0])
plt.xlabel('Time (ms)')
plt.ylabel('Membrane potential')
plt.title('Neuro - Membrane potential of a neuron over time')

plt.tight_layout()
plt.show()

# Compare execution times
print("Brian2 execution time:", end_time_brian2 - start_time_brian2, "seconds")
print("Neuro execution time:", end_time_neuro - start_time_neuro, "seconds")

3.7 Another Implementation of Code


import matplotlib.pyplot as plt

network_sizes = [100, 500, 1000, 5000, 10000]

# Illustrative timings set by hand (see Section 3.8), not measured results.
simulator_a_times = [1, 3, 6, 20, 50]       # stands in for Brian2
simulator_b_times = [0.5, 2, 4, 15, 40]     # stands in for Neuro

plt.figure(figsize=(12, 6))

plt.plot(network_sizes, simulator_a_times, marker='o', label='Simulator A (Brian2)')
plt.plot(network_sizes, simulator_b_times, marker='s', label='Simulator B (Neuro)')
plt.xlabel('Network Size (Number of Neurons)')
plt.ylabel('Simulation Time (s)')
plt.title('Simulation Time Comparison')
plt.legend()
plt.grid(True)
plt.show()

# Illustrative speedup factors, again set by hand rather than measured.
simulator_a_scalability = [1, 0.95, 0.90, 0.85, 0.80]
simulator_b_scalability = [1, 0.98, 0.95, 0.92, 0.90]

plt.figure(figsize=(12, 6))

plt.plot(network_sizes, simulator_a_scalability, marker='o', label='Simulator A (Brian2)')
plt.plot(network_sizes, simulator_b_scalability, marker='s', label='Simulator B (Neuro)')
plt.xlabel('Network Size (Number of Neurons)')
plt.ylabel('Scalability (Speedup Factor)')
plt.title('Scalability Comparison')
plt.legend()
plt.grid(True)
plt.show()

Figure 4: Graph of simulation time against network size for Brian2 and Neuro

Figure 5: Graph of scalability against network size for Brian2 and Neuro

3.8 Graph of Two Simulators


We must understand that there are two implementations of the code, as seen above. The first contains the actual simulator libraries and shows how they would be used, but because of the complexity and experience required to use them, the second instead stands in for Neuro and Brian2 by setting different processing times for each by hand. When we plot these values, we obtain the graphs in Figures 4 and 5. The general trends are that as network size increases, simulation time increases, since more neurons need to be processed; however, as network size increases, scalability decreases, i.e. the speedup available for processing is limited. It is imperative once more to ask: if the programming paradigm did not exist, how would we be able to critique a variety of simulators by plotting graphs and benchmarking systems? This approach of graphing through programming is essential to creating efficient approaches to neuromorphic computing, with the backbone being easy to implement. The fact that an aspect of programming can contribute so much to saying whether one simulator is better than another shows how much could occur in AI development.

4 The Discussion of Algorithms and Their Possible Impacts


4.1 Hopfield Algorithm
Algorithms exist in great abundance and shall be discussed in depth within this section, but what is imperative is that we ask how, or indeed whether, they contribute to development in neuromorphic computing in terms of ease of implementation and the ability to be explored much more deeply through programming, something that cannot occur for optical computing, for example. We start with the Hopfield algorithm, a single-layer feedback neural network with the ability to identify features even when they are buried in robust noise. Before we take a critical approach, let us lay out that there are two types: discrete and continuous. The discrete type takes binary inputs indicating whether a neuron is in activation or inhibition and uses a step activation function, while the continuous type's major difference is that it uses a continuous activation function such as a sigmoid (Imran, M.A., Abbasi, 2020). This algorithm implements associative memory, meaning that it is able to learn as well as remember, which is quite similar to the "memristor" concept talked about earlier: even after power is switched off, it retains its data in the static sense. This algorithm can be related to the hardware deployment, network deployment and mapping discussed earlier, and the fact that it is able to drive network optimization goes a great way to prove how just one algorithm can increase progress within neuromorphic computing, and therefore has the capability to cause advances in AI and its development.

4.2 What can be done to improve the stages of creating the Hopfield Net-
work?
We have five vital stages: first selecting standard samples (1), designing the Hopfield network (2), training the Hopfield network (3), using test data in the neural network (4), and then analysing results, perhaps through a confusion matrix (5). Now, neuromorphic computing is quite a new field, coined in the late 1980s, and if we want progression to occur we especially want to make use of a programming paradigm (Yu, Z., Abdulghani, 2020). For stage 1, we can experiment with different sample sizes, and therefore with the change in matrix sizes, and possibly find a better way to process data than a binarisation matrix. For stage 2, rather than taking into account the matrix size, we should take into account the number of neurons that the neural network will need, which is the matrix size squared. Stage 3 is important to consider as it uses a "neurodynamic method", which recurses until a stable form is reached; this can be altered slightly, but the question arises of whether there may be a better way to train the data in the Hopfield network. It does make use of associative memory, with the concept of evolution towards stable states being involved (Zahid, A., Heidari, 2020). Stage 5 is the most critical of all, where we can use a confusion matrix to compare against the trained object vector, looking at bias, variance and F1 score.
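
To ground these stages, here is a minimal discrete Hopfield sketch in Python covering stages 1 to 4: a standard sample is stored with a Hebbian rule, and recall recurses until a stable state corrects a noisy test input. The pattern and sizes are illustrative assumptions.

import numpy as np

def train_hopfield(patterns):
    # Stage 3 (Hebbian rule): sum of outer products with a zeroed diagonal.
    w = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(w, 0)
    return w / len(patterns)

def recall(w, state, steps=10):
    # Stage 4 (neurodynamic method): recurse until a stable attractor is reached.
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1
    return state

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])   # stage 1: a standard sample
w = train_hopfield(pattern[None, :])               # stage 2: 8 neurons, 8x8 weights
noisy = pattern.copy(); noisy[0] *= -1             # corrupt one entry
print(np.array_equal(recall(w, noisy), pattern))   # True: the noise is corrected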

4.3 Neuromorphic Computing in AI Development


It is of great importance that we zoom into the question of whether neuromorphic computing in AI development is justified, and that will of course be explored by taking a much more holistic approach. Many companies have adopted neuromorphic computing for its parallelism, low power consumption and event-driven structures (Patil et al., 2023). AI development, from what we know, is all around us and is crucial for aiding humans in research and much more, yet we must ask: is it worth it? A detailed investigation into the optimization of a system should be undertaken, evaluating hardware and software components and utilizing memory components in the short term (Patil et al., 2023). Moreover, analysing metrics like memory storage requirements as well as energy and power efficiency is required to ensure that the model developed also meets the ethical standards set by society (Patil et al., 2023). Because of AI development, neuromorphic computing cannot be looked at from just one angle: AI is associated with mimicking human activities and must therefore think like a human and resonate with one. This introduces the entire subject of AI ethics, which can be defined as a set of guidelines that should be followed to ensure proper influence on decision-making. Hence, although neuromorphic computing can really drive AI development, it is not a black-and-white process but rather a critical one involving specialists and computers identifying weaknesses and performing benchmark tests to ensure suitability.

4.4 Why Artificial Intelligence and Not Something Else?


This is a question that must be asked, but it can be answered eloquently. If we divert neuromorphic computing's potential to AI, then more doors open, including Transportation, Health Care, Education, Media, Customer Service (Vishwa et al., 2020) and more. Looking at intricate concepts in neuromorphic computing like the artificial synapse (Vishwa et al., 2020) is needed because we are able to transfer, store and process data all at once without the need for a dedicated storage space, which is remarkable because memory is quite a large problem. Knowing this potential, AI can be changed completely, and it is fantastic for neuromorphic computing to shift towards development in the AI sector while the existence of a programming paradigm allows for different approaches to neuromorphic computing. Another concept that must be known to all is the memristor (Vishwa et al., 2020). This is a non-volatile component that is able to save its memory even when it loses power. It is formed as a matrix of switches and can retain data because, when no voltage passes through the component, the remaining charge allows the memristor to act as what is known as a "memory component" (Vishwa et al., 2020). These are ground-breaking discoveries that can change the direction in which AI is heading and would completely answer the question with sureness.

4.5 AI-Based Robotics


Let us discuss whether neuromorphic computing has its importance engraved in a different type of artificial intelligence, namely robotics. Spiking neural networks are seen as the trend in this type of robotics, with Long Short-Term Memory networks (arxiv.org, n.d.) being used alongside consideration of STDP (spike-timing-dependent plasticity). Perhaps neuromorphic computing is not needed to drive development in this sector, as quantum and optical computing can also provide intelligence and efficient parallel processing while keeping reliability and integrity in place. In neuromorphic robotics there are limited supports, such as frames with strong structural integrity (arxiv.org, n.d.), which endangers the existence of this type of computing in this specific set of robotics, as there is a reduction of deployment in neuromorphic systems. But the main things to consider, to help in the quest of arguing why we should drive towards AI development, are that contrasting hardware and software layers can maximize energy efficiency while a full pipeline of a robotic system (arxiv.org, n.d.) is in place for an outlined robot framework.

4.6 SW Optimization
An approach called model compression can alter the sparsity and probability distribution of the weights of neurons as well as their biases. A technique known as weight pruning (arxiv.org, n.d.) can be performed, in which the values of unneeded weights in the tensor are set to zero. Research has shown how energy and power consumption decrease with maximum data efficiency (arxiv.org, n.d.), since fewer reads and writes to the neuromorphic chip are performed.
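
As a concrete picture of weight pruning, the following sketch zeroes the smallest-magnitude weights in a tensor; the 50% sparsity level is an illustrative choice, not a figure from the cited work.

import numpy as np

def prune_weights(w, sparsity=0.5):
    # Zero out the smallest-magnitude weights so that fewer synapses need to
    # be written to (and read from) the neuromorphic chip.
    k = int(sparsity * w.size)
    if k == 0:
        return w
    cutoff = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= cutoff, 0.0, w)

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 4))
print((prune_weights(w) == 0).mean())   # roughly half the weights are removed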

4.7 HW Optimization
Accelerators that are HW-optimized allow for efficient spike transmissions (arxiv.org, n.d.) within the spiking neural network and therefore enable faster memory access, hence faster read/write speeds to the neuromorphic chip. This allows for more optimized memory as well as a reduced risk of collisions and other types of errors. All of this is essential to consider so that the reader can be satisfied that the shift to AI development is needed, while knowing that all of this increases efficiency and remains easy to implement by engineers with years of experience in the industry.

4.8 Conclusion
In conclusion, I would like to finalise that the shift towards neuromorphic computing in AI development is justified to an extent, provided a government or private company is able to allocate a budget for more research in this field. From what we have explored above, there is pending potential to be unlocked, and the evidence of the programming paradigm, from plotting graphs to accessing a memristor's internal structure using C and assembly language, really does allow for a tangible advantage over conventional approaches using emulative and simulative strategies. If programming were not so key, then the development of neuromorphic computing would take epochs, and we must truly be grateful that it exists.

4.9 Reference List
Dureja, H., Garg, Y., Chaujar, R. and Kumar, B. (n.d.). Review research paper: neuromorphic computing and applications. International Research Journal of Modernization in Engineering. [online]
Roy, K., Jaiswal, A. and Panda, P. (2019). Towards spike-based machine intelligence with neuromorphic computing. Nature, [online] 575(7784), pp.607-617. doi:10.1038/s41586-019-1677-2.
Schuman, C.D., Kulkarni, S.R., Parsa, M., Mitchell, J.P., Date, P. and Kay, B. (2022). Opportunities for neuromorphic computing algorithms and applications. Nature Computational Science, [online] 2(1), pp.10-19.
Prasanna (2023). SuperNeuro: A Fast and Scalable Simulator for Neuromorphic Computing. [online] Available at: https://arxiv.org/pdf/2305.02510v1.pdf [Accessed 11 Nov. 2023].
Yu, Z., Abdulghani, A.M., Zahid, A., Heidari, H., Imran, M.A. and Abbasi, Q.H. (2020). An overview of neuromorphic computing for artificial intelligence enabled hardware-based Hopfield neural network. IEEE Access, [online] 8, pp.67085-67099.
Esser, S.K., Appuswamy, R., Merolla, P., Arthur, J.V. and Modha, D.S. (2015). Backpropagation for Energy-Efficient Neuromorphic Computing. [online] Neural Information Processing Systems. [Accessed 11 Nov. 2023].
Calimera, A., Macii, E. and Poncino, M. (2013). The Human Brain Project and neuromorphic computing. Functional Neurology, [online] 28(3), pp.191-196.
arxiv.org (n.d.). Embodied Neuromorphic Artificial Intelligence for Robotics: Perspectives, Challenges, and Research Development Stack. [online]
Vishwa, R., Karthikeyan, R., Rohith, R. and Sabaresh, A. (2020). Current Research and Future Prospects of Neuromorphic Computing in Artificial Intelligence. IOP Conference Series: Materials Science and Engineering, 912, p.062029.
Patil, P., Shirashyad, P., Pandhare, P., Biradar, P., Entc and Cse (2023). Recent trends in AI: neuromorphic computing architectures for artificial intelligence. International Journal of Creative Research Thoughts (IJCRT), [online] 11. ISSN 2320-2882.
