
NEUROMORPHIC COMPUTING

-ABSTRACT-
Neuromorphic computing is a concept developed by Carver Mead. In neuromorphic
computing, inspiration is taken from the operating principles of the brain, which are then
mimicked in hardware using knowledge from nanoelectronics and VLSI design. Neuromorphic
computing refers to brain-inspired computing structures and algorithms; it is currently used in a
wide variety of applications, including recognition tasks, decision making, and forecasting.
Neuromorphic architectures are expected to deliver lower energy consumption, exploit novel
nanostructured materials, and enhance computation.
Compared with the von Neumann computer architecture, neuromorphic systems offer novel
solutions to problems in artificial intelligence. Inspired by biology, these systems model the
human brain by connecting artificial neurons and synapses, which in turn helps reveal new
neuroscience concepts. Neuromorphic computing has therefore become a popular alternative to
the von Neumann architecture for applications such as cognitive processing. It relies on highly
connected synthetic neurons and synapses to build biologically inspired methods that realize
theoretical neuroscientific models and support demanding machine learning techniques. The
von Neumann architecture remains the predominant computing standard for machines, but it
differs significantly from the working model of the human brain in organizational structure,
power requirements, and processing capabilities. Neuromorphic computation has therefore
emerged in recent years as a complementary architecture to the von Neumann system. It is
used to create programming frameworks; systems can learn from these computations and
create applications that simulate neuromorphic functions. These encompass neuro-inspired
models, algorithms and learning methods, hardware and devices, supporting systems, and
applications.
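
As a concrete illustration of the artificial neurons and synapses referred to above, the short
Python sketch below simulates a single leaky integrate-and-fire neuron driven through one
synaptic weight. This is a minimal textbook model assumed purely for illustration; the
parameter values are arbitrary, and real neuromorphic hardware realizes such dynamics
directly in analog or digital circuits.

    # Minimal leaky integrate-and-fire (LIF) neuron driven by a random
    # presynaptic spike train through a single synaptic weight.
    # Illustrative only; parameter values are assumptions.
    import numpy as np

    dt = 1.0          # time step (ms)
    tau = 20.0        # membrane time constant (ms)
    v_rest = 0.0      # resting potential
    v_thresh = 1.0    # firing threshold
    weight = 0.5      # synaptic weight from the input to the neuron

    rng = np.random.default_rng(0)
    input_spikes = rng.random(200) < 0.1   # random presynaptic spikes

    v = v_rest
    output_spikes = []
    for t, spike in enumerate(input_spikes):
        v += dt / tau * (v_rest - v)       # leak toward rest
        if spike:
            v += weight                    # synaptic input
        if v >= v_thresh:                  # fire and reset
            output_spikes.append(t)
            v = v_rest

    print("output spike times (ms):", output_spikes)

The neuron integrates incoming spikes, leaks charge between them, and emits a spike of its
own whenever its membrane potential crosses the threshold, which is the basic behaviour
neuromorphic circuits reproduce in hardware.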

Neuromorphic architectures have several significant and special requirements, such as high
connectivity and parallelism, low power consumption, and the collocation of memory and
processing. They offer the ability to execute complex computations faster than traditional
von Neumann architectures while saving power and occupying a smaller footprint. These
features address the bottlenecks of the von Neumann architecture, so the neuromorphic
architecture is considered an appropriate choice for implementing machine learning
algorithms. Many researchers have explored neuromorphic systems and implemented
corresponding applications. Recently, some researchers have demonstrated the capabilities
of Hopfield algorithms in notable large-scale hardware projects and have seen significant
progress.
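
For reference, the Hopfield algorithm mentioned above can be sketched in a few lines of
Python: binary patterns are stored in a symmetric weight matrix with the Hebbian rule, and a
corrupted pattern is recovered by repeatedly updating the neuron states. This is the generic
textbook formulation, given here only as an assumed illustration, not the specific large-scale
hardware implementations referred to above.

    # Textbook Hopfield network: Hebbian storage and iterative recall.
    import numpy as np

    patterns = np.array([
        [1, -1, 1, -1, 1, -1, 1, -1],
        [1, 1, 1, 1, -1, -1, -1, -1],
    ])
    n = patterns.shape[1]

    # Hebbian learning: sum of outer products, no self-connections.
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)

    # Start from a corrupted copy of the first stored pattern.
    state = patterns[0].copy()
    state[0] *= -1
    state[3] *= -1

    # Sequential updates; each pass drives the state toward a stored pattern.
    for _ in range(5):
        for i in range(n):
            state[i] = 1 if W[i] @ state >= 0 else -1

    print("recovered first pattern:", np.array_equal(state, patterns[0]))

Because memory (the weight matrix) and processing (the update rule) are realized in the same
structure, the Hopfield network is a natural fit for neuromorphic hardware in which synapses
store the weights physically.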
Neuromorphic computing systems will have much lower power consumption than
conventional processors and they are explicitly designed to support dynamic learning in the
context of complex and unstructured data. Early signs of this need show up in the Office of
Science portfolio with the emergence of machine learning based methods applied to problems
where traditional approaches are inadequate. These methods have been used to analyze the
data produced from climate models, in search of complex patterns not obvious to humans.
They have been used to recognize features in large-scale cosmology data, where the data
volumes are too large for human inspection. They have been used to predict maintenance
needs for accelerator magnets, so that they can be replaced before they fail, to search for rare
events in high-energy physics experiments, and to predict plasma instabilities that might
develop in fusion reactors. These novel approaches are also being used in biological research
from searching for novel features in genomes to predicting which microbes are likely to be in
a given environment at a given time. Machine learning methods are also gaining traction in
designing materials and predicting faults in computer systems, especially in the so-called
“materials genome” initiative. Nearly every major research area in the DOE mission was
affected by machine learning in the last decade. Today these applications run on existing
parallel computers; however, as problems scale and dataset sizes increase, there will be huge
opportunities for deep learning on neuromorphic hardware to make a serious impact in
science and technology.
Traditional computational architectures and their parallel derivatives are based on a core
concept known as the von Neumann architecture. The system is divided into several major,
physically separated, rigid functional units, such as a memory unit (MU), a central processing
unit (CPU), an arithmetic logic unit (ALU), and data paths. This separation produces a temporal and energetic
bottleneck because information has to be shuttled repeatedly between the different parts of
the system. This “von Neumann” bottleneck limits the future development of revolutionary
computational systems. Traditional parallel computers interconnect thousands or millions of
conventional processors. Aggregate computing performance is increased, but the basic
computing element is fundamentally the same as that in a serial computer and is similarly
limited by this bottleneck. In contrast, the brain is a working system that has major advantages
in these respects. Its energy efficiency is superior by many orders of magnitude. In addition,
memory and processing in the brain are collocated, because the constituents can take on
different roles depending on a learning process. Moreover, the brain is a flexible system, able
to adapt to complex environments, to program itself, and to carry out complex processing.
While the design, development, and
implementation of a computational system similar to the brain is beyond the scope of today’s
science and engineering, some important steps in this direction can be taken by imitating
nature. Clearly a new disruptive technology is needed which must be based on revolutionary
scientific developments. In this “neuromorphic” architecture, the various computational
elements are mixed together and the system is dynamic, based on a “learning” process by
which the various elements of the system change and readjust depending on the type of
stimuli they receive.
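
A rough sense of such a stimulus-driven readjustment can be given with a pair-based
spike-timing-dependent plasticity (STDP) rule, sketched below in Python. STDP is only one
possible learning rule and is assumed here for illustration; the text above does not prescribe a
specific mechanism, and the parameter values are arbitrary.

    # Simplified pair-based STDP: a synaptic weight is strengthened when the
    # presynaptic spike precedes the postsynaptic spike, and weakened otherwise.
    import math

    a_plus, a_minus = 0.05, 0.06    # learning rates (potentiation / depression)
    tau = 20.0                      # STDP time constant (ms)

    def stdp_update(weight, t_pre, t_post):
        """Return the updated weight for one pre/post spike pair (times in ms)."""
        dt = t_post - t_pre
        if dt > 0:                                  # pre before post: strengthen
            weight += a_plus * math.exp(-dt / tau)
        else:                                       # post before pre: weaken
            weight -= a_minus * math.exp(dt / tau)
        return min(max(weight, 0.0), 1.0)           # keep the weight bounded

    w = 0.5
    for t_pre, t_post in [(10, 15), (40, 42), (70, 65)]:
        w = stdp_update(w, t_pre, t_post)
        print(f"pre={t_pre} ms, post={t_post} ms -> weight={w:.3f}")

Each synapse adjusts itself using only locally available spike timing, which is exactly the kind
of distributed, stimulus-driven change described above.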
One of the more recent accomplishments in neuromorphic computing has come from
IBM Research, namely a biologically inspired chip (“TrueNorth”) that implements one
million spiking neurons and 256 million synapses on a chip with 5.5 billion transistors, with a
typical power draw of 70 milliwatts. As impressive as this system is, if scaled up to the size of
the human brain, it would still be about 10,000 times too power intensive.
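
The rough scaling behind a figure of this order can be reproduced with a back-of-the-envelope
calculation, sketched below. The human-brain numbers used (on the order of 10^15 synapses
and roughly 20 watts of power) are common estimates assumed here, not values stated in this
report.

    # Back-of-the-envelope scaling of TrueNorth power to brain scale, by synapse count.
    truenorth_synapses = 256e6      # synapses on the TrueNorth chip
    truenorth_power_w = 0.070       # typical power draw (70 mW)

    brain_synapses = 1e15           # assumed estimate for the human brain
    brain_power_w = 20.0            # assumed power budget of the human brain

    scale = brain_synapses / truenorth_synapses
    scaled_power_w = scale * truenorth_power_w
    print(f"scaled-up power: {scaled_power_w / 1e3:.0f} kW")              # ~270 kW
    print(f"ratio to the brain: {scaled_power_w / brain_power_w:.0f}x")   # ~14,000x

Dividing the roughly 270 kW result by the brain's approximately 20 W gives a ratio on the
order of ten thousand, consistent with the figure quoted above.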

JEESHMA KRISHNAN K
S7 EC2
RNO: 05
TLY19EC048
