Neuromorphic


Table of Contents for Neuromorphic Systems

1. Overview of Neuromorphic Computing
2. Advantages of Neuromorphic Computing
3. How does Neuromorphic Computing Work?
4. Why do We Need Neuromorphic Systems?

As the name suggests, neuromorphic computing uses a model inspired by the workings of the brain.

Neuromorphic computing takes an engineering approach based on the activity of the biological brain. This approach can make technologies more versatile and adaptable, and can deliver richer results than traditional architectures, for instance the von Neumann architecture that underpins most conventional hardware design.
 Most hardware today is based on the von Neumann architecture, which separates out memory and
computing. Because von Neumann chips have to shuttle information back and forth between the
memory and CPU, they waste time (computations are held back by the speed of the bus between the
compute and memory) and energy -- a problem known as the von Neumann bottleneck. As time goes
on, von Neumann architectures will make it harder and harder to deliver the increases in compute
power that we need.

 To keep up, a new type of non-von Neumann architecture will be needed. Quantum computing and neuromorphic systems have both been put forward as that solution; the focus here is neuromorphic computing, in which both processing and memory are governed by the neurons and the synapses. Programs in neuromorphic computers are defined by the structure of the neural network and its parameters, rather than by explicit instructions as in a von Neumann computer.
o Neurons in spiking neural networks accumulate charge over time from either the environment (via input information to the
network) or from internal communications (usually via spikes from other neurons in the network).
o Neurons have an associated threshold value, and when the charge value on that neuron reaches the threshold value, it fires,
sending communications along all of its outgoing synapses.
o Neurons may also include a notion of leakage, where the accumulated charge that is not above the threshold dissipates as
time passes. Furthermore, neurons may have an associated axonal delay, in which outgoing information from the neuron is
delayed before it affects its outgoing synapses.
o Synapses form the connections between neurons, and each synapse has a pre-synaptic neuron and a post-synaptic neuron.
Synapses have an associated weight value, which may be positive (excitatory) or negative (inhibitory).
o Synapses may have an associated delay value such that communications from the presynaptic neuron are delayed in reaching
the post-synaptic neuron.
o Synapses also commonly include a learning mechanism in which the weight value of the synapse changes over time based on
activity in the network.
o Neuromorphic computers often realize a particular fabric of connectivity, but the synapses may be turned on and off to
realize a network structure within that connectivity.
o Furthermore, parameters of the neurons and synapses such as neuron thresholds, synaptic weights, axonal delays and
synaptic delays are often programmable within a neuromorphic architecture.
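The accumulate-leak-fire behavior described in the points above can be sketched as a minimal leaky integrate-and-fire neuron. The threshold and leak values here are illustrative, not taken from any particular neuromorphic chip:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: charge accumulates from
# input, leaks over time, and the neuron fires when charge crosses threshold.

def simulate_lif(input_current, threshold=1.0, leak=0.1):
    """Return the time steps at which the neuron fires.

    input_current: charge injected at each time step.
    threshold:     firing threshold (illustrative value).
    leak:          fraction of accumulated charge lost per step.
    """
    charge = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        charge = charge * (1.0 - leak) + current  # leak, then integrate
        if charge >= threshold:
            spikes.append(t)                      # fire a spike...
            charge = 0.0                          # ...and reset the charge
    return spikes

# Sub-threshold input leaks away; sustained input eventually fires.
print(simulate_lif([0.4, 0.4, 0.4, 0.4]))  # [2]
```

A single 0.4 injection alone never fires the neuron because the charge dissipates; repeated input accumulates faster than it leaks and crosses the threshold at step 2.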
Unlike traditional artificial neural networks, in which information is received at the input and then passed synchronously between layers, SNNs propagate information asynchronously. Even if input arrives at the same time and the SNN is organized into layers, the delays on each synapse and neuron may differ, so information travels through the network and arrives at different times. This is beneficial for realizing SNNs on neuromorphic hardware, which can be designed to operate in an event-driven, asynchronous manner that fits the temporal dynamics of spiking neurons and synapses.
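This asynchronous, delay-driven propagation can be sketched with a simple event queue; the two-neuron network, delay values, and function names below are made up for illustration:

```python
import heapq

def run_snn(neurons, synapses, input_spikes, t_max=20):
    """Event-driven SNN simulation sketch.

    neurons:      {name: threshold}
    synapses:     {pre: [(post, weight, delay), ...]}
    input_spikes: [(time, neuron), ...] external input spikes
    Returns a list of (time, neuron) firing events.
    """
    charge = {n: 0.0 for n in neurons}
    events = [(t, n, 1.0) for t, n in input_spikes]  # (arrival, target, charge)
    heapq.heapify(events)
    fired = []
    while events:
        t, n, c = heapq.heappop(events)
        if t > t_max:
            break
        charge[n] += c
        if charge[n] >= neurons[n]:
            charge[n] = 0.0
            fired.append((t, n))
            # A spike reaches each downstream neuron only after that
            # synapse's delay, so activity propagates asynchronously.
            for post, weight, delay in synapses.get(n, []):
                heapq.heappush(events, (t + delay, post, weight))
    return fired

# Hypothetical two-neuron chain: A fires at t=0; its spike reaches B at t=3.
fired = run_snn(
    neurons={"A": 1.0, "B": 1.0},
    synapses={"A": [("B", 1.0, 3)]},
    input_spikes=[(0, "A")],
)
print(fired)  # [(0, 'A'), (3, 'B')]
```

The priority queue orders spike deliveries by arrival time, mirroring the event-driven operation of neuromorphic hardware: nothing is computed between spikes.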
An example SNN and how it operates in the temporal domain is shown in the figure. In this example, synapses are depicted
with a time delay. Information is communicated by spikes passed throughout the network. In this example, the network’s
operation at time t (left) and time t+1 (right) is depicted, to show how the network’s state changes with time.
Features of Neuromorphic Computing

Rapid Response System - Compared to traditional computers, neuromorphic computers are built to work like a human brain, so their rapid response is a major highlight.

Low Consumption of Power - Owing to the SNN model, neuromorphic machines compute only when electric spikes or signals are passed through the artificial neurons. Because the artificial neurons work only when spikes pass through them, energy consumption is low.

Mobile Architecture - One of the most striking features of neuromorphic computing is its compact, mobile architecture. These systems do not require much space and are highly efficient in terms of space occupancy.

Higher Adaptability - Neuromorphic computers keep pace with the evolving demands of technology. As conditions change, they adapt over time, resulting in efficient operation.

Fast-paced Learning - Machines working on neuromorphic principles learn rapidly, building algorithms from interpretation of data and refining them as new data is fed in.
Building Blocks
In functional terms, the simplest, most naïve properties of the various devices and their functions in the brain include the following.
o 1. Somata (also known as neuron bodies), which function as integrators and threshold spiking devices
o 2. Synapses, which provide dynamical interconnections between neurons
o 3. Axons, which provide long-distance output connections from a presynaptic to a postsynaptic neuron
o 4. Dendrites, which provide multiple, distributed inputs into the neurons
For compute-intensive activities, edge computing devices like smartphones currently have to hand off processing to a cloud-based system, which processes the query and transmits the answer back to the device. With neuromorphic systems, that query wouldn't have to be shunted back and forth; it could be handled right on the device. The most important motivator of neuromorphic computing, however, is the hope it provides for the future of AI.
Potential Applications of Neuromorphic Computing
o Running AI algorithms at the edge instead of in the cloud
o Driverless cars
o Smart home devices
o Natural language understanding
o Data analytics and process optimization
o Real-time image processing for police cameras
NEUROMORPHIC COMPUTER SYSTEMS AVAILABLE TODAY

Many neuromorphic systems have been developed and utilized by academics, startups, and the most prominent players in the technology world:

1. IBM's TrueNorth chip
2. Intel's Loihi chips
3. The Tianjic chip
4. Intel's Pohoiki Beach computers
5. BrainScaleS from Heidelberg University

Intel's neuromorphic chip Loihi has 130 million synapses and 131,000 neurons and was designed for spiking neural networks. Scientists use Intel Loihi chips to develop artificial skin and powered prosthetic limbs. Intel Labs' second-generation neuromorphic research chip, codenamed Loihi 2, and Lava, an open-source software framework, have also been announced.

IBM's neuromorphic system TrueNorth was unveiled in 2014, with 64 million neurons and 16 billion synapses. IBM recently announced a collaboration with the US Air Force Research Laboratory to develop a "neuromorphic supercomputer" known as Blue Raven. While the technology is still being developed, one use may be to develop smarter, lighter, and less energy-demanding drones.

The Human Brain Project (HBP), a 10-year project that began in 2013 and is funded by the European Union, was established to further understand the brain through six areas of study, including neuromorphic computing. The HBP has inspired two major neuromorphic projects from universities, SpiNNaker and BrainScaleS. In 2018, a million-core SpiNNaker system was introduced, becoming the world's largest neuromorphic supercomputer at the time; the University of Manchester aims to scale it up to model one billion neurons in the future.

The examples from IBM and Intel focus on computational performance. In contrast, the examples from the universities use neuromorphic computers as a tool for learning about the human brain. Both approaches are necessary for neuromorphic computing, since both types of information are required to advance AI.
