IBM's Brain-Inspired Chip Tested for Deep Learning - IEEE Spectrum
Illustration: Shutterstock
The deep-learning software driving the modern artificial intelligence revolution has mostly run on fairly standard computer hardware. Some tech giants such as Google and Intel have focused some of their considerable resources on creating more specialized computer chips designed for deep learning. But IBM has taken a more unusual approach: It is testing its brain-inspired TrueNorth computer chip as a hardware platform for deep learning.
Deep learning’s powerful capabilities rely on algorithms called convolutional neural networks that consist of layers
of nodes (also known as neurons). Such neural networks can filter huge amounts of data through their “deep”
layers to become better at, say, automatically recognizing individual human faces or understanding different
languages. These are the types of capabilities that already empower online services offered by the likes of
Google, Facebook, Amazon, and Microsoft.
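For a concrete picture of that layered structure, here is a minimal sketch of a convolutional network for 32 x 32 color images (the frame size that comes up later in this article), written in PyTorch. It is purely illustrative, not the network IBM ran on TrueNorth, and every name in it is invented for the example.

```python
import torch
import torch.nn as nn

# A minimal convolutional network for 32 x 32 color images
# (illustrative only; not the network IBM mapped onto TrueNorth).
class TinyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # first layer of filters
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),   # deeper layer
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One forward pass on a dummy batch of four frames.
logits = TinyConvNet()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```

Each convolutional layer filters the output of the one before it; stacking more such layers is what makes the network "deep."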
In recent research, IBM has shown that such deep-learning algorithms could run on brain-inspired hardware (http://spectrum.ieee.org/computing/hardware/howibmgotbrainlikeefficiencyfromthetruenorthchip) that typically supports a very different type of neural network.
IBM published a paper on its work in the 9 September 2016 issue of the journal Proceedings of the National
Academy of Sciences (http://www.pnas.org/content/early/2016/09/19/1604850113.full). The research was
funded with just under $1 million (https://govtribe.com/contract/award/fa945315c0055) from the U.S. Defense
Advanced Research Projects Agency (DARPA). Such funding formed part of DARPA’s Cortical Processor program
(https://www.fbo.gov/index?s=opportunity&mode=form&id=91bc9e58d6fa024d55d7c0583d38fc21&tab=core&_cview=0) aimed at brain-inspired AI that can recognize complex patterns and adapt to changing environments.
“The new milestone provides a palpable proof of concept that the efficiency of brain-inspired computing can be merged with the effectiveness of deep learning, paving the path towards a new generation of chips and algorithms with even greater efficiency and effectiveness,” says Dharmendra Modha, chief scientist for brain-inspired computing at IBM Research-Almaden (http://researcher.watson.ibm.com/researcher/view.php?person=usdmodha), in San Jose, Calif.
IBM first laid down the specifications for TrueNorth and a prototype chip in 2011. So, TrueNorth predated—and was therefore never specifically designed to harness—the deep-learning revolution based on convolutional neural networks that took off starting in 2012. Instead, TrueNorth typically supports spiking neural networks that more closely mimic the way real neurons work in biological brains.
Instead of firing every cycle, the neurons in spiking neural networks must gradually build up their potential before they fire. To achieve precision on deep-learning tasks, spiking neural networks typically have to go through multiple cycles to see how the results average out. That effectively slows down the overall computation on tasks such as image recognition or language processing.
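To see why that averaging costs time, consider a toy leaky integrate-and-fire neuron. This is a minimal sketch with made-up parameters, not TrueNorth's actual neuron model: the spike-rate estimate it produces only stabilizes after many cycles.

```python
import random

def integrate_and_fire(input_current, threshold=1.0, leak=0.1, cycles=100):
    """Simulate a toy leaky integrate-and-fire neuron and return its spike rate.

    The neuron accumulates potential each cycle and emits a spike only when
    the potential crosses the threshold; all parameters are illustrative.
    """
    potential, spikes = 0.0, 0
    for _ in range(cycles):
        potential += input_current + random.gauss(0, 0.05) - leak
        if potential >= threshold:
            spikes += 1
            potential = 0.0  # reset after firing
    return spikes / cycles  # the rate estimate improves as cycles grows

# The estimated rate for the same input fluctuates over short windows,
# which is why spiking networks need many cycles for stable outputs.
for cycles in (5, 20, 100, 1000):
    print(cycles, integrate_and_fire(0.3, cycles=cycles))
```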
Deep-learning experts have generally viewed spiking neural networks as inefficient—at least, compared with
convolutional neural networks—for the purposes of deep learning. Yann LeCun, director of AI research at Facebook
and a pioneer in deep learning, previously critiqued
(https://www.facebook.com/yann.lecun/posts/10152184295832143) IBM’s TrueNorth chip because it primarily
supports spiking neural networks. (See IEEE Spectrum’s previous interview
(http://spectrum.ieee.org/automaton/robotics/artificialintelligence/facebookaidirectoryannlecunondeep
learning#qaTopicTwo) with LeCun on deep learning.)
The IBM TrueNorth design may better support the goals of neuromorphic computing that focus on closely mimicking and understanding biological brains, says Zachary Chase Lipton, a deep-learning researcher in the Artificial Intelligence Group at the University of California, San Diego (http://ai.ucsd.edu/). By comparison, deep-learning researchers are more interested in getting practical results for AI-powered services and products. He explains the difference as follows:
To evoke the cliche metaphor about birds and airplanes, you might say the computational
neuroscience/neuromorphic community is more concerned with studying birds, and the
machine learning community more interested in understanding aerodynamics, with or
without the help of biology. The deep learning community is generally bullish on the benefits of
specialized hardware. [Therefore,] the neuromorphic chips don't inspire as much excitement
because the spiking neural networks they focus on are not so popular in deep learning.
To make the TrueNorth chip a good fit for deep learning, IBM had to develop a new algorithm that could enable convolutional neural networks to run well on its neuromorphic computing hardware. This combined approach achieved what IBM describes as “near state-of-the-art” classification accuracy on eight data sets involving vision and speech challenges. The researchers saw between 65 and 97 percent accuracy in the best circumstances.
When just one TrueNorth chip was being used, it surpassed state-of-the-art accuracy on just one out of eight data sets (http://www.pnas.org/content/early/2016/09/19/1604850113/T3.expansion.html). But IBM researchers were able to boost the hardware’s accuracy on the deep-learning challenges by using up to eight chips. That enabled TrueNorth to match or surpass state-of-the-art accuracy on three of the data sets.
The TrueNorth testing also managed to process between 1,200 and 2,600 video frames per second. That means a single TrueNorth chip could detect patterns in real time from as many as 100 cameras at once, Modha says. This assumes each camera uses 1,024 color pixels (32 x 32) and streams information at a standard TV rate of 24 frames per second.
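That camera figure follows from simple arithmetic: dividing chip throughput by the per-camera frame rate gives roughly 50 to 108 simultaneous streams. The snippet below is a quick sanity check on the numbers, not IBM's published calculation.

```python
# Sanity check on Modha's camera figure: chip throughput divided by the
# per-camera frame rate gives the number of simultaneous streams.
FRAMES_PER_CAMERA = 24  # standard TV rate cited in the article

for chip_fps in (1200, 2600):
    cameras = chip_fps // FRAMES_PER_CAMERA
    print(f"{chip_fps} frames/s -> about {cameras} cameras")
# 1200 frames/s -> about 50 cameras
# 2600 frames/s -> about 108 cameras
```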
Such results may be impressive for TrueNorth’s first major foray into deep-learning testing, but they should be taken with a grain of salt, Lipton says. He points out that the vision data sets involved fairly minor problems based on small 32 x 32 pixel images.
Still, IBM’s Modha seems enthusiastic about continuing to test TrueNorth for deep learning. He and his colleagues hope to test the chip on so-called unconstrained deep learning, which involves gradually introducing hardware constraints during the training of neural networks instead of constraining them from the very beginning.
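The article does not describe how that gradual introduction would work. One plausible, and entirely hypothetical, reading is an annealing schedule that blends full-precision weights with hardware-constrained, quantized copies over the course of training. The sketch below illustrates the idea with invented functions and a trinary weight constraint chosen purely for illustration.

```python
import numpy as np

def quantize(w, levels=(-1.0, 0.0, 1.0)):
    """Snap weights to hardware-friendly levels (a hypothetical constraint)."""
    levels = np.asarray(levels)
    return levels[np.abs(w[..., None] - levels).argmin(axis=-1)]

def constrained_weights(w, step, total_steps):
    """Blend unconstrained and quantized weights.

    Early in training (step near 0) the network is effectively
    unconstrained; by the end (step near total_steps) it sees
    fully quantized weights.
    """
    alpha = step / total_steps  # 0 -> unconstrained, 1 -> constrained
    return (1 - alpha) * w + alpha * quantize(w)

w = np.random.randn(4)
for step in (0, 500, 1000):
    print(step, constrained_weights(w, step, 1000).round(2))
```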
Modha also points to TrueNorth’s general design as an advantage over those of more specialized deep-learning
hardware designed to run only convolutional neural networks. It will likely allow the running of multiple types of
AI networks on the same chip.
“Not only is TrueNorth capable of implementing these convolutional networks, which it was not originally designed
for, but it also supports a variety of connectivity patterns (feedback and lateral, as well as feed forward) and can
simultaneously implement a wide range of other algorithms,” Modha says.
Such biologically inspired chips would probably become popular only if they show that they can outperform other
hardware approaches for deep learning, Lipton says. But he suggests that IBM could leverage its hardware expertise to join Google and Intel in creating new specialized chips designed specifically for deep learning.
“I imagine that some of the neuromorphic chipmakers will use their expertise in hardware acceleration to develop
chips more focused on practical deep-learning applications and less focused on biological simulation,” Lipton says.