What It'll Take To Go Exascale
http://www.sciencemag.org/content/335/6067/394.full
Science 27 January 2012:
Vol. 335 no. 6067 pp. 394-396
DOI: 10.1126/science.335.6067.394
NEWS FOCUS
COMPUTER SCIENCE
Scientists hope the next generation of supercomputers will carry out a million trillion operations
per second. But first they must change the way the machines are built and run.
Using real climate data, scientists at Lawrence Berkeley National Laboratory (LBNL) in California recently ran a simulation on one of ...
... figure out ways to make future machines far more energy efficient and ...
"The step we are about to take to exascale computing will be very, very difficult," says Robert Rosner, a physicist at the University of Chicago in Illinois, who chaired a recent Department of Energy (DOE) committee charged with exploring whether exascale computers would be achievable. Charles Shank, a former director of LBNL who recently headed a separate panel collecting widespread views on what it would take to build an exascale machine, agrees: "Nobody said it would be easy."

[Figure: combustion simulations expected to help revolutionize engine designs. Credit: J. Chen/Center for Exascale Simulation of Combustion in Turbulence, Sandia National Laboratories]
Gaining support
The next generation of powerful supercomputers will be used to design high-efficiency engines tailored
to burn biofuels, reveal the causes of supernova explosions, track the atomic workings of catalysts in
real time, and study how persistent radiation damage might affect the metal casing surrounding nuclear
weapons. "It's a technology that has become critically important for many scientific disciplines," says ...
... synchronize the results, and synthesize the final ensemble.

[Figure: today's top computers (number in parentheses).]

... 1 billion processors will not get us to the exascale, Simon says. "These computers are becoming so complicated that a number of issues have come up that were not there before," Rosner agrees.
The biggest issue relates to a supercomputer's overall power
use. The largest supercomputers today use about 10 megawatts
(MW) of power, enough to power 10,000 homes. If the current
trend of power use continues, an exascale supercomputer would
require 200 MW. "It would take a nuclear power reactor to run it," Shank says.
Even if that much power were available, the cost would be
prohibitive. At $1 million per megawatt per year, the electricity
to run an exascale machine would cost $200 million annually.
"That's a non-starter," Shank says. So the current target is a
machine that draws 20 MW at most. Even that goal will require a
300-fold improvement in flops per watt over today's technology.
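To see the arithmetic behind those figures, here is a brief Python sketch. The megawatt and dollar values are the ones quoted above; the flop rates are simply derived from them rather than independently reported.

    # Back-of-the-envelope arithmetic for the power figures quoted above.
    EXAFLOP = 1e18                 # one million trillion operations per second
    COST_PER_MW_YEAR = 1_000_000   # $1 million per megawatt per year

    # Electricity bill if power use follows the current trend (200 MW):
    print(f"200 MW machine: ${200 * COST_PER_MW_YEAR:,} per year")      # $200,000,000

    # Efficiency needed to hit the 20 MW target:
    target = EXAFLOP / (20 * 1e6)                                       # flops per watt
    print(f"Target: {target / 1e9:.0f} gigaflops per watt")             # 50 Gflops/W

    # The 300-fold figure implies a 2012-era baseline of roughly:
    print(f"Implied baseline: {target / 300 / 1e6:.0f} megaflops per watt")  # ~167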
... which are very fast at certain types of calculations. Chip manufacturers are now looking at going from
multicore chips with four or eight cores to many-core chips, each containing potentially hundreds of
CPU and GPU cores, allowing them to assign different calculations to specialized processors. That
change is expected to make the overall chips more energy efficient. Intel, AMD, and other chip
manufacturers have already announced plans to make hybrid many-core chips.
Another stumbling block is memory. As the number of processors in a supercomputer skyrockets, so,
too, does the need to add memory to feed bits of data to the processors. Yet, over the next few years,
memory manufacturers are not projected to increase the storage density of their chips fast enough to
keep up with the performance gains of processors. Supercomputer makers can get around this by
adding more memory modules. But "that's threatening to drive costs too high," Simon says.
Even if researchers could afford to add more memory modules, that still wouldn't solve matters. Moving ever-growing streams of data back and forth to processors already creates a bottleneck that can dramatically slow a computer's performance. Today's supercomputers use 70% of their power to move bits of data around from one place to another.
One potential solution would stack memory chips on top of one another and run communication and
power lines vertically through the stack. This more-compact architecture would require fewer steps to
route data. Another approach would stack memory chips atop processors to minimize the distance bits
need to travel.
A third issue is errors. Modern processors compute with stunning accuracy, but they aren't perfect. The
average processor will produce one error per year, as a thermal fluctuation or a random electrical spike
flips a bit of data from one value to another.
Such errors are relatively easy to ferret out when the number of processors is low. But it gets much harder when 100 million to 1 billion processors are involved. And increasing complexity produces additional software errors as well. One possible solution is to have the supercomputer crunch each problem multiple times and vote for the most common solution. But that creates a new problem. "How can I do this without wasting double or triple the resources?" Lucas asks. Solving this problem will probably require new circuit designs and algorithms.
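As a rough illustration of the voting idea (a plain majority vote over redundant runs; nothing here comes from a real exascale runtime), a Python sketch might look like this:

    from collections import Counter

    def vote(results):
        """Return the most common answer from redundant runs of one calculation.

        A stray bit flip should corrupt at most one copy, so the majority
        answer is taken as the correct one.
        """
        answer, count = Counter(results).most_common(1)[0]
        if count <= len(results) // 2:
            raise RuntimeError("no clear majority among the redundant runs")
        return answer

    # Three redundant runs; the second suffered a silent bit flip.
    print(vote([42.0, 42.000001, 42.0]))   # -> 42.0

Running every calculation two or three times is exactly the doubling or tripling of resources that Lucas wants to avoid, which is why the likely answer involves new circuits and algorithms rather than brute redundancy.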
Finally, there is the challenge of redesigning the software applications themselves, such as a novel climate model or a simulation of a chemical reaction. "Even if we can produce a machine with 1 billion processors, it's not clear that we can write software to use it efficiently," Lucas says. Current parallel computing machines use a strategy, known as the Message Passing Interface (MPI), that divides computational problems, parcels out the pieces to individual processors, and then collects the results. But coordinating all this traffic for millions of processors is becoming a programming nightmare. "There's a huge concern that the programming paradigm will have to change," Rosner says.
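As a concrete, toy-sized illustration of that divide-and-collect strategy, here is a minimal sketch using the mpi4py bindings to MPI; the work being farmed out (squaring a few numbers) is purely a stand-in.

    # Launch with, for example: mpiexec -n 4 python scatter_gather.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # The root process divides the problem into one chunk per processor...
    chunks = [list(range(i * 3, (i + 1) * 3)) for i in range(size)] if rank == 0 else None

    # ...parcels the pieces out...
    my_chunk = comm.scatter(chunks, root=0)

    # ...each processor works on its own piece...
    partial = [x * x for x in my_chunk]

    # ...and the root collects the results back together.
    results = comm.gather(partial, root=0)
    if rank == 0:
        print(results)

Keeping a handful of processes in step this way is easy; keeping hundreds of millions of them in step, while tolerating failures along the way, is the programming nightmare described above.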
DOE has already begun laying the groundwork to tackle these and other challenges. Last year it began
funding three co-design centers, multi-institution cooperatives led by researchers at Los Alamos,
Argonne, and Sandia national laboratories. The centers bring together scientific users who write the
software code and hardware makers to design complex software and computer architectures that work
in the fastest and most energy-efficient manner. The approach sets up a potential clash between scientists who favor openness and hardware companies that normally keep their activities secret for proprietary reasons. But "it's a worthy goal," agrees Wilfred Pinfold, Intel's director of extreme-scale programming in Hillsboro, Oregon.
... Management and Budget]?" Simon asks. "I'm curious to see." DOE's strategic plan, due out next month, should provide some answers.
The rest of the world faces a similar juggling act. China, Japan, the European Union, Russia, and India all
have given indications that they hope to build an exascale computer within the next decade. Although
none has released detailed plans, each will need to find the necessary resources despite these tight
fiscal times.
The victor will reap more than scientific glory. Companies use 57% of the computing time on the
machines on the Top500 List, looking to speed product design and gain other competitive advantages,
Dongarra says. So government officials see exascale computing as giving their industries a leg up.
That's particularly true for chip companies that plan to use exascale designs to improve future
commodity electronics. "It will have dividends all the way down to the laptop," says Peter Beckman, who
directs the Exascale Technology and Computing Initiative at Argonne National Laboratory in Illinois.
The race to provide the hardware needed for exascale computing will be extremely competitive,
Beckman predicts, and developing software and networking technology will be equally important,
according to Dongarra. Even so, many observers think that the U.S. track record and the current alignment of its political and scientific forces make it America's race to lose.
Whatever happens, U.S. scientists are unlikely to be blindsided. The task of building the world's first
exascale computer is so complex, Simon says, that it will be nearly impossible for a potential winner to
hide in the shadows and come out of nowhere to claim the prize.