ACM Queue Vol. 4 No. 10, December 2006
December/January 2006-2007
www.acmqueue.com
Architecture's Renaissance
Multithreading for Mere Mortals
Virtualization Comes of Age
The Hennessy-Patterson Interview
CONTENTS
DECEMBER/JANUARY 2006-2007 VOL. 4 NO. 10
FOCUS
COMPUTER ARCHITECTURE
Unlocking Concurrency 24
Ali-Reza Adl-Tabatabai, Intel,
Christos Kozyrakis, Stanford University,
and Bratin Saha, Intel
Can transactional memory ease
the pain of multicore programming?
NEWS 2.0 10
Taking a second look at the news so you don't have to.
INTERVIEW
BOOK REVIEWS 49
CALENDAR 50
CURMUDGEON 56
Will the Real Bots Stand Up?
Stan Kelly-Bootle, Author
Publisher and Editor
Charlene O'Hanlon
cohanlon@acmqueue.com

Editorial Staff

Executive Editor
Jim Maurer
jmaurer@acmqueue.com

Managing Editor
John Stanik
jstanik@acmqueue.com

Copy Editor
Susan Holly

Art Director
Sharon Reuter

Production Manager
George Neville-Neil

Guest Expert
Kunle Olukotun

Sales Staff

National Sales Director
Ginny Pohlman
415-383-0203
gpohlman@acmqueue.com

Regional Eastern Manager
Walter Andrzejewski
207-763-4772
walter@acmqueue.com

Contact Points
Queue editorial
queue-ed@acm.org

ACM Headquarters
Executive Director and CEO: John White
Director, ACM U.S. Public Policy Office: Cameron Wilson
Deputy Executive Director and COO: Patricia Ryan
Director, Office of Information Systems: Wayne Graves
Director, Financial Operations Planning: Russell Harris
Director, Office of Membership: Lillian Israel
Director, Office of Publications: Mark Mandelbaum
Deputy Director, Electronic Publishing: Bernard Rous
Deputy Director, Magazine Development: Diane Crawford
Publisher, ACM Books and Journals: Jono Hardjowirogo
Director, Office of SIG Services: Donna Baglio
Assistant Director, Office of SIG Services: Erica Johnson

ACM Queue (ISSN 1542-7730) is published ten times per year by the ACM, 2 Penn Plaza, Suite 701, New York, NY 10121-0701. POSTMASTER: Please send address changes to ACM Queue, 2 Penn Plaza, Suite 701, New York, NY 10121-0701 USA. Printed in the U.S.A. The opinions expressed by ACM Queue authors are their own, and are not necessarily those of ACM or ACM Queue. Subscription information available online at www.acmqueue.com.

… the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to republish from: Publications Dept. ACM, Inc. Fax +1 (212) 869-0481 or e-mail <permissions@acm.org>. For other copying of articles that carry a code at the bottom of the first or last page or screen display, copying is permitted provided that the per-copy fee indicated in the code is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, 508-750-8500, 508-750-4470 (fax).
Technology Knows No Fear

I am of the opinion that humans are not flexible creatures. We resist change like oil resists water. Even if a change is made for the good of humankind, if it messes around with our daily routine, then our natural instinct is to fight the change like a virus.

Let's face it, all of us thrive on routine—what time we get up, how we brush our teeth, where we sit on the train, what we eat for lunch—and for some it takes a lot to break the routine. If you don't agree, take a look at your life. How many of you regularly perform some task that you dislike (backing up your hard drive, going to the same boring job, eating liver every Tuesday night) simply because you don't want to face the alternative (a hard-drive crash, no extra money for new CDs, the chance that your iron level will dip so low you'll end up in the hospital getting mass blood transfusions)?

I grew up in a household in which Saturday was cleaning day and everyone was forced to pitch in, so as a result there was a time not too long ago when I was absolutely stringent about keeping a perfectly clean house. As I've gotten older and somewhat wiser, however, I've started slacking off somewhat in the housecleaning department. A creature of habit, I used to begin my picking-up process in earnest every night at 9:30, darting in and out of every room in the house like a dervish and cleaning up the detritus of the day. Then one night, out of pure exhaustion, I just didn't. And I woke up the next morning still alive and healthy. My house was a little out of order, but it wasn't anything I couldn't handle. Since then I've cut down my dervish episodes to three a week, and it suits me well (I'm also a little calmer now).

Baby steps, I know. But for some it takes baby steps to precede the big steps. But because this is December and a new year—and the chance to make those dreaded New Year's resolutions—is just weeks away, I've decided that 2007 will be the year I make some real changes in my life. I don't just mean switching laundry detergents, but real change. And if I fail in my attempts, then I will work harder to make my changes successful. I know there will be difficulties, both internal and external, but I will face the changes and the challenges head on, embracing the changes rather than fighting them.

Can we say the same for our industry in the next year? Can technology face the changes and adapt accordingly? Can we force an evolution, or will it come naturally? Charles Darwin said living things must adapt or die, but I wonder whether the same applies to technology. Indeed, we humans are the ones forcing the change—after all, technology does not create itself—but are we moving along a path in which one day technology will be responsible for its own evolution?

It's a thought that is both thrilling and scary—the kind of stuff that Michael Crichton novels are made of. Some may scoff and say that humans ultimately have control over the amount of intelligence any machine has, and that we will always be superior. But I would point out that humans are often held back by the one thing that technology knows nothing about: fear.

A certain amount of fear is healthy; fear is what keeps us from jumping off a cliff without a bungee cord just to see what it feels like. But too much fear can prevent us from discovering our true talents and best assets—fear of the unknown, fear of being ridiculed, fear of failure.

Call me crazy, but I'm sure a Web server doesn't care whether it is being laughed at.

I, for one, can envision a day when technology becomes smarter than humans. I think we will reach that threshold when man and machine possess equal intelligence, and then technology will evolve to surpass man simply because we humans can't get past our fears. Which may be a good thing, depending on how one looks at it. I, for one, would never wish humankind to lose its humanity for the sake of lightning-fast decisions or a better way to build a widget. Fear, along with all our myriad emotions, is what makes us human.

You can't say that about a Web server. Q

CHARLENE O'HANLON, editor of Queue, is in for some big adventures in 2007. Stick around and see for yourself. Meanwhile, send your comments to her at cohanlon@acmqueue.com.
news 2.0
Taking a second look at the news so you don't have to.

Fox and the Weasel
Capitalizing on the growing popularity of Mozilla's Firefox, many Linux distributors now package the open source Web browser with their Linux code. According to Mozilla's licensing policies, distributors may package the Firefox code with the Firefox name and logo, provided that Mozilla approves any changes made to the code. Mozilla wants to protect its trademark and prevent the confusion that might ensue if there were many separate forks of Firefox that all used the Firefox name and logo.

Debian, a Linux distribution closely aligned with the free software movement, is butting heads with Mozilla over these requirements. The folks at Debian want to package a version of Firefox, but they object to using the logo because it's trademarked and therefore conflicts with Debian's free-use ethos. They also object to Mozilla's code approval process, which could disqualify Debian's browser from any association with the Firefox brand.

So what's a self-respecting free software advocate to do? One solution would be for Debian to adopt the GNU fork of Firefox, which, in obvious tribute to its parent, is cutely named IceWeasel. Another option would be for Debian to apply the IceWeasel name and logo, which are not trademarked, to its own Firefox code.

WANT MORE?
http://www.internetnews.com/dev-news/article.php/3636651

Down on the Wireless Farm
As Queue reported in its September 2006 issue, compliance is a growing challenge for enterprises that's creating business opportunities for those savvy enough to sort it out. Lest we get too bogged down in SOX and HIPAA and Basel II, however, we must remember that compliance with government mandates is a challenge for all industries. For example, farmers across the globe must comply with government reporting requirements to verify the safety of the food they produce. European Union farmers must keep detailed records about their cattle—everything from where they're grazing to their health problems.

Farmers are turning to technology to help them comply. Companies such as Ireland's FarmWizard are seizing the opportunity to provide solutions. FarmWizard allows cattle farmers to manage important farming data right from the cow pasture. Farmers can input, view, and manage information using wireless devices equipped with a Web browser. FarmWizard's wirelessly accessed hosted service shows that this new breed of "Agri-IT" applications closely aligns with computing trends seen in other sectors.

WANT MORE?
http://www.vnunet.com/computing/news/2167254/handhelds-collect-farming

Second-Life Commerce Meets First-Life IRS
It's becoming increasingly difficult to draw boundaries between the imaginary and the real. Immersive online simulations such as Second Life and World of Warcraft have evolved virtual exchange systems that closely resemble real-world commerce. Players looking for an edge in these games can head to eBay, where valuable items can be bought and sold with real currency, with the actual exchange of goods occurring in the online gaming world. Congress has noticed all this commerce and is evaluating its policies for governing these virtual-to-real-world transactions. After all, any transaction occurring in a real marketplace using real money reasonably could be subject to taxation, regardless of whether the goods exchanged are tangible or imaginary. But things become complex when you consider the potential real-world value of virtual goods traded in cyberspace. If one person sells a deed to some Second-Life property on eBay, while someone else, acting as an avatar online, completes the same transaction using Second Life's internal Linden dollars, is the first transaction taxable and the second one not taxable?

The problem for the IRS is that while these games are quite sophisticated, their economic systems lack the structures and institutions, such as a stock market, that real-world tax law relies on. If the lack of these features is what's keeping taxes out of virtual worlds, it seems unlikely game developers will add them anytime soon.

WANT MORE?
http://today.reuters.com/news/ArticleNews.aspx?type=technologyNews&storyID=2006-10-16T121700Z_01_N15306116_RTRUKOC_0_US-LIFE-SECONDLIFE-TAX.xml Q
reader files

As the year draws to an end, we would like to thank all of our readers who have submitted to WOYHD. Over the past 12 months we've seen a wide variety of tools mentioned, and, come 2007, we would like to see a lot more of the same. So log on to our Web site at http://www.acmqueue.com and send us your rants, raves, and more new tools that you absolutely can't live without—or can't stand to use. As further incentive, if we publish your submission, you'll be starting off the New Year with a brand new Queue coffee mug!
kode vicious
A koder with attitude, KV answers your questions. Miss Manners he ain't.

Peer-to-peer networking (better known as P2P) has two faces: the illegal file-sharing face and the legitimate group collaboration face. While the former, illegal use is still quite prevalent, it gets an undue amount of attention, often hiding the fact that there are developers out there trying to write secure, legitimate P2P applications that provide genuine value in the workplace. While KV probably has a lot to say about file sharing's dark side, it is to the legal, less controversial incarnation of P2P that he turns his attention to this month. Take it away, Vicious…

Dear KV,
I've just started on a project working with P2P software, and I have a few questions. Now, I know what you're thinking, and no this isn't some copyright-violating piece of kowboy kode. It's a respectable corporate application for people to use to exchange data such as documents, presentations, and work-related information.

My biggest issue with this project is security—for example, accidentally exposing our users' data or leaving them open to viruses. There must be more things to worry about, but those are the top two.

So, I want to ask, "What would KV do?"
Unclear Peer

Dear UP,
What would KV do? KV would run, not walk, to the nearest bar and find a lawyer. You can always find lawyers in bars, or at least I do; they're the only ones drinking faster than I am. The fact that you believe your users will use your software only for your designated purpose makes you either naive or stupid, and since I'm feeling kind today, I'll assume naive.

So let's assume your company has lawyers to protect them from the usual charges of providing a system whereby people can exchange material that perhaps certain other people, who also have lawyers, consider wrong to exchange. What else is there to worry about? Plenty.

At the crux of all file-sharing systems—whether they are peer-to-peer, client/server, or what have you—is the type of publish/subscribe paradigm they follow. The publish/subscribe model defines how users share data. The models follow a spectrum from low to high risk. A high-risk model is one in which the application attempts to share as much data as possible, such as sharing all data on your disk with everyone as the basic default setting. Laugh if you like, but you'll cry when you find out that lots of companies have built just such systems, or systems that are close to being as permissive as that.

Here are some suggestions for building a low-risk peer-to-peer file-sharing system.

First of all, the default mode of all such software should be to deny access. Immediately after installing the software, no new files should be available to anyone. There are several cases in which software did not obey this simple rule, so when a nefarious person wanted to steal data, he or she would trick someone into downloading and installing the file-sharing software. This is often referred to as a "drive-by install." The attacker would then have free access to the victim's computer or at least to the My Documents or similar folder.

Second, the person sharing the files—that is, the sharer—should have the most control over the data. The person connecting to the sharer's computer should be able to see and copy only the files that the sharer wishes that person to see and copy. In a reasonably low-risk system, the sharing of data would have a timeout such that unless the requester got the data by a certain time (say, 24 hours), the data would no longer be available. Such timeouts can be implemented by having the sharer's computer generate a one-time use token containing a timeout that the requester's computer must present to get a particular file.

Got a question for Kode Vicious? E-mail him at kv@acmqueue.com—if you dare! And if your letter appears in print, he may even send you a Queue coffee mug, if he's in the mood. And oh yeah, we edit letters for content, style, and for your own good!
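KV's two rules above (deny everything by default; give the sharer time-limited, one-time control over each file) can be sketched in a few lines of Python. This is a hypothetical illustration of the token scheme he describes, not code from any real P2P product: the HMAC signing, the token format, and the 24-hour default are all assumptions of the sketch.

```python
import hashlib
import hmac
import os
import time

SECRET = os.urandom(32)   # per-sharer signing key; never leaves the sharer's machine
SHARED = {}               # explicit allow-list: path -> expiry timestamp (deny by default)
REDEEMED = set()          # tokens already used (one-time use)

def share(path, ttl_seconds=24 * 3600):
    """Explicitly share one file for a limited time; everything else stays private."""
    expiry = time.time() + ttl_seconds
    SHARED[path] = expiry
    msg = "%s|%f" % (path, expiry)
    sig = hmac.new(SECRET, msg.encode(), hashlib.sha256).hexdigest()
    return "%s|%s" % (msg, sig)   # token handed to the requester

def redeem(token):
    """Serve a file only if the presented token is genuine, unexpired, and unused."""
    msg, sig = token.rsplit("|", 1)
    want = hmac.new(SECRET, msg.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, want):
        return False              # forged or tampered token
    path, expiry = msg.rsplit("|", 1)
    if path not in SHARED or time.time() > float(expiry):
        return False              # never shared, or past the sharer's deadline
    if token in REDEEMED:
        return False              # tokens are one-time use
    REDEEMED.add(token)
    return True
```

A fresh install shares nothing (SHARED starts empty), which is exactly KV's first rule; the expiry and one-time check implement his second.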
Coming in February:
Secure Open Source
Open Source vs. Closed Source Security
Vulnerability Management
interview
Photography by Jacob Leverich
They Wrote the Book on Computing

As authors of the seminal textbook, Computer Architecture: A Quantitative Approach (4th Edition, Morgan Kaufmann, 2006), John Hennessy and David Patterson probably don't need an introduction. You've probably read them in college or, if you were lucky enough, even attended one of their classes. Since rethinking, and then rewriting, the way computer architecture is taught, both have remained committed to educating a new generation of engineers with the skills to tackle today's tough problems in computer architecture, Patterson as a professor at Berkeley and Hennessy as a professor, dean, and now president of Stanford University.

In addition to teaching, both have made significant contributions to computer architecture research, most notably in the area of RISC (reduced instruction set computing). Patterson pioneered the RISC project at Berkeley, which produced research on which Sun's Sparc processors (and many others) would later be based. Meanwhile, Hennessy ran a similar RISC project at Stanford in the early 1980s called MIPS. Hennessy would later commercialize this research and found MIPS Computer Systems, whose RISC designs eventually made it into the popular game consoles of Sony and Nintendo.

Interviewing Hennessy and Patterson this month is Kunle Olukotun, associate professor of electrical engineering and computer science at Stanford University. Olukotun led the Stanford Hydra single-chip multiprocessor
research project, which pioneered multiple processors on a single silicon chip. Technology he helped develop and commercialize is now used in Sun Microsystems's Niagara line of multicore CPUs.

KUNLE OLUKOTUN I want to start by asking why you decided to write Computer Architecture: A Quantitative Approach.
DAVID PATTERSON Back in the 1980s, as RISC was just getting under way, I think John and I kept complaining to each other about the existing textbooks. I could see that I was going to become the chair of the computer science department, which I thought meant I wouldn't have any time. So we said, "It's now or never."
JOHN HENNESSY As we thought about the courses we were teaching in computer architecture—senior undergraduate and first-level graduate courses—we were very dissatisfied with what resources were out there. The common method of teaching a graduate-level, even an introductory graduate-level computer architecture course, was what we referred to as the supermarket approach. The course would consist of selected readings—sometimes a book, but often selected readings. Many people used [Dan] Siewiorek, [Gordon] Bell, and [Allen] Newell (authors of Computer Structures, McGraw-Hill, 1982), which were essentially selected readings. Course curricula looked as though someone had gone down the aisle and picked one selection from each aisle, without any notion of integration of the material, without thinking about the objective, which in the end was to teach people how to design computers that would be faster or cheaper, and with better cost performance.
KO This quantitative approach has had a significant impact on the way that the industry has designed computers and especially the way that computer research has been done. Did you expect your textbook to have the wide impact that it had?
JH The publisher's initial calculation was that we needed to sell 7,000 copies just to break even, and they thought we had a good shot at getting to maybe 10,000 or 15,000. As it turned out, the first edition sold well over 25,000. We didn't expect that.
DP This was John's first book, but I had done several books before, none of which was in danger of making me money. So I had low expectations, but I think we were shooting for artistic success, and it turned out to be a commercial success as well.
JH The book captured a lot of attention both among academics using it in classroom settings and among practicing professionals in the field. Microsoft actually stocked it in its company store for employees. I think what also surprised us is how quickly it caught on internationally. We're now in at least eight languages.
DP I got a really great compliment the other day when I was giving a talk. Someone asked, "Are you related to the Patterson, of Patterson and Hennessy?" I said, "I'm pretty sure, yes, I am." But he says, "No, you're too young." So I guess the book has been around for a while.
JH Another thing I'd say about the book is that it wasn't until we started on it that I developed a solid and complete quantitative explanation of what had happened in the RISC developments. By using the CPI formula

Execution Time/Program = Instructions/Program x Clocks/Instruction x Time/Clock

we could show that there had been a real breakthrough in terms of instruction throughput, and that it overwhelmed any increase in instruction count. With a quantitative approach, we should be able to explain such insights quantitatively. In doing so, it also became clear how to explain it to other people.
DP The subtitle, Quantitative Approach, was not just a casual additive. This was a turn away from, amazingly, people spending hundreds of millions of dollars on somebody's hunch of what a good instruction set would be—somebody's personal taste. Instead, there should be engineering and science behind what you put in and what you leave out. So, we worked on that title. We didn't quite realize—although I had done books before—what we had set ourselves up for. We both took sabbaticals, and we said, "Well, how hard can it be? We can just use the lecture notes from our two courses." But, boy, then we had a long way to go.
JH We had to collect data. We had to run simulations. There was a lot of work to be done in that first book. In the more recent edition, the book has become sufficiently well known that we have been able to enlist other people to help us collect data and get numbers, but in the first one, we did most of the work ourselves.
DP We spent time at the DEC Western Research Lab, where we hid out three days a week to get together and talk. We would write in between, and then we would go there and spend a lot of time talking through the ideas. We made a bunch of decisions that I think are unchanged in the fourth edition of the book. For example, an idea has to be in some commercial product before we put it into the book. There are thousands of ideas, so how do you pick? If no one has bothered to use it yet, then we'll wait till it gets used before we describe it.
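The CPI formula Hennessy cites, often called the iron law of processor performance, can be checked with a small worked example. All numbers below are invented for illustration: the RISC-style program executes about 30 percent more instructions, but each one completes in far fewer clocks, so the gain in instruction throughput overwhelms the increase in instruction count, just as he describes.

```python
def execution_time(instructions, clocks_per_instruction, clock_hz):
    # Iron law: Time/Program = Instructions/Program x Clocks/Instruction x Time/Clock
    return instructions * clocks_per_instruction * (1.0 / clock_hz)

# Invented numbers, both machines clocked at 10 MHz:
cisc = execution_time(1_000_000, 6.0, 10_000_000)   # fewer instructions, ~6 clocks each
risc = execution_time(1_300_000, 1.5, 10_000_000)   # 30% more instructions, ~1.5 clocks each

print(cisc)  # about 0.6 seconds
print(risc)  # about 0.195 seconds: the CPI win dwarfs the instruction-count loss
```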
look at architecture-driven shifts, then this is probably only the fourth. There's the first-generation electronic computers. Then I would put a sentinel at the IBM 360, which was really the beginning of the notion of an instruction-set architecture that was independent of implementation.

I would put another sentinel marking the beginning of the pipelining and instruction-level parallelism movement. Now we're into the explicit parallelism multiprocessor era, and this will dominate for the foreseeable future. I don't see any technology or architectural innovation on the horizon that might be competitive with this approach.
DP Back in the '80s, when computer science was just learning about silicon and architects were able to understand chip-level implementation and the instruction set, I think the graduate students at Berkeley, Stanford, and elsewhere could genuinely build a microprocessor that was faster than what Intel could make, and that was amazing. Now, I think today this shift toward parallelism is being forced not by somebody with a great idea, but because we don't know how to build hardware the conventional way anymore. This is another brand-new opportunity for graduate students at Berkeley and Stanford and other schools to build a microprocessor that's genuinely better than what Intel can build. And once again, that is amazing.
JH In some ways it's déjà vu, much as the early RISC days relied on collaboration between compiler writers and architects and implementers and even operating-system people in the cases of commercial projects. It's the same thing today because this era demands a level of collaboration and cross-disciplinary problem solving and design. It's absolutely mandatory. The architects can't do it alone. Once ILP (instruction-level parallelism) got rolling, at least in the implicit ILP approaches, the architects could do most of the work. That's not going to be true going forward.
DP This parallelism challenge involves a much broader community, and we have to get into applications and language design, and maybe even numerical analysis, not just compilers and operating systems. God knows who should be sitting around the table—but it's a big table. Architects can't do it by themselves, but I also think you can't do it without the architects.
KO One of the things that was nice about RISC is that with a bunch of graduate students, you could build a 30,000- or 40,000-transistor design, and that was it. You were done.
DP By the way, that was a lot of work back then. Computers were a lot slower!
JH We were working with hammers and chisels.
DP We were cutting Rubylith with X-acto knives, as I remember.
KO Absolutely. So today, if you really want to make an impact, it's very difficult to actually do VLSI (very large scale integration) design in an academic setting.
JH I don't know that that's so true. It may have gotten easier again. One could imagine designing some novel multiprocessor starting with a commercial core, assuming that commercial core has sufficient flexibility. You can't design something like a Pentium 4, however. It's completely out of the range of what's doable.
DP We recently painfully built a large microprocessor. At the ISCA (International Symposium on Computer Architecture) conference in 2005, a bunch of us were in the hallway talking about exactly this issue. How in the world are architects going to build things when it's so hard to build chips? We absolutely have to innovate, given what has happened in the industry and the potential of this switch to parallelism.

That led to a project involving 10 of us from several leading universities, including Berkeley, Carnegie-Mellon, MIT, Stanford, Texas, and Washington. The idea is to use FPGAs (field programmable gate arrays). The basic bet is that FPGAs are so large we could fit a lot of simple processors on an FPGA. If we just put, say, 50 of them together, we could build 1,000-processor systems from FPGAs. FPGAs are close enough to the design effort of hardware, so the results are going to be pretty convincing. People will be able to innovate architecturally in this FPGA and will be able to demonstrate ideas well enough that we could change what industry wants to do.

We call this project Research Accelerator for Multiple Processors, or RAMP. There's a RAMP Web site (http://ramp.eecs.berkeley.edu).
KO Do you have industry partners?
DP Yes, we've got IBM, Sun, Xilinx, and Microsoft. Chuck Thacker, Technical Fellow at Microsoft, is getting Microsoft back into computer architecture, which is another reflection that architecture is exciting again. RAMP is one of his vehicles for doing architecture research.
JH I think it is time to try. There are challenges, clearly, but the biggest challenge by far is coming up with sufficiently new and novel approaches. Remember that this era is going to be about exploiting some sort of explicit parallelism, and if there's a problem that has confounded computer science for a long time, it is exactly that. Why did the ILP revolution take off so quickly? Because pro-
JH If anything, a bit of self-reflection on what happened ers don’t control the software business, so you’ve got a
in the last decade shows that we—and I mean collectively very difficult situation.
the companies, research community, and government It’s far more important now to be engaging the univer-
funders—became too seduced by the ease with which sities and working on these problems than it was, let’s say,
instruction-level parallelism was exploited, without helping find the next step in ILP. Unfortunately, we’re not
thinking that the road had an ending. We got there very going to find a quick fix.
quickly—more quickly than I would have guessed—but DP RAMP will help us get to the solution faster than
now we haven’t laid the groundwork. So I think Dave is without it, but it’s not like next year when RAMP is avail-
right. There’s a lot of work to do without great certainty able, we’ll solve the problem six months later. This is
that we will solve those problems in the near future. going to take a while.
For RISC, the big controversy was whether or not
to change the instruction set. Parallelism has changed
KO One of the things that we had in the days when you the programming model. It’s way beyond changing the
were doing the RISC research was a lot of government instruction set. At Microsoft in 2005, if you said, “Hey,
funding for this work. Do we have the necessary resources what do you guys think about parallel computers?” they
to make parallelism what we know it has to be in order to would reply, “Who cares about parallel computers? We’ve
…keep computer performance going?

DP I'm worried about funding for the whole field. As ACM's president for two years, I spent a large fraction of my time commenting about the difficulties facing our field, given the drop in funding by certain five-letter government agencies. They just decided to invest it in little organizations like IBM and Sun Microsystems instead of the proven successful path of universities.

JH DARPA spent a lot of money pursuing parallel computing in the '90s. I have to say that they did help achieve some real advances. But when we start talking about parallelism and ease of use of truly parallel computers, we're talking about a problem that's as hard as any that computer science has faced. It's not going to be conquered unless the research program has a level of long-term commitment and has sufficiently significant segments of strategic funding to allow people to do large experiments and try ideas out.

DP For a researcher, this is an exciting time. There are huge opportunities. If you discover how to efficiently program a large number of processors, the world is going to beat a path to your door. It's not such an exciting time to be in industry, however, where you're betting the company's future that someone is going to come up with the solution.

KO Do you see closer industry/academic collaboration to solve this problem? These things wax and wane, but given the fact that industry needs new ideas, then clearly there's going to be more interest in academic research to try to figure out where to go next.

JH I would be panicked if I were in industry. Now I'm forced into an approach that I haven't laid the groundwork for, it requires a lot more software leverage than the previous approaches, and the microprocessor manufacturers…

…had 15 or 20 years of doubling every 18 months. Get lost." You couldn't get anybody's attention inside Microsoft by saying that the future was parallelism. In 2006, everybody at Microsoft is talking about parallelism. Five years ago, if you had this breakthrough idea in parallelism, industry would show you the door. Now industry is highly motivated to listen to new ideas.

So they are a ready market, but I just don't think industry is set up to be a research funding agency. The one organization that might come to the rescue would be the SRC (Semiconductor Research Council), which is a government/semiconductor industry joint effort that funnels monies to some universities. That type of an organization is becoming aware of what's facing the microprocessor and, hence, semiconductor industry. They might be in position to fund some of these efforts.

KO There are many other issues beyond performance that could impact computer architecture. What ideas are there in the architecture realm, and what sort of impact are these other nonperformance metrics going to have on computing?

JH Well, power is easy. Power is performance. Completely interchangeable. How do you achieve a level of improved efficiency in the amount of power you use? If I can improve performance per watt, I can add more power and be assured of getting more performance.

DP It's something that has been ignored so far, at least in the data center.

JH I agree with that. What happened is we convinced ourselves that we were on a long-term road with respect to ILP that didn't have a conceivable end, ignoring the fact that with every step on the road we were achieving…
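Hennessy's equivalence can be stated as an identity: performance equals performance-per-watt times the power budget, so any efficiency gain converts directly into performance at a fixed power budget. A trivial illustration (Python; the numbers are invented):

```python
def performance(perf_per_watt, watts):
    # performance = efficiency x power budget
    return perf_per_watt * watts

# Improving efficiency by 50 percent at the same 100 W budget
# yields 50 percent more performance.
base = performance(2.0, 100)      # 200 units of work
improved = performance(3.0, 100)  # 300 units at the same power
```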
…fixed hardware budget. Then five or 10 years go by where a bunch of software people try to figure out how to make that thing programmable, and then we're off to the next architecture idea when the old one doesn't turn out. Maybe we should put some science behind this, trying to evaluate what worked and what didn't work before we go on to the next idea.

My guess is that's really the only way we're going to solve these problems; otherwise, it will just be that all of us will have a hunch about what's easier to program. Even shared memory versus message passing—this is not a new trade-off. It has been around for 20 years. I'll bet all of us in this conversation have differing opinions about the best thing to do. How about some experiments to shed some light on what the trade-offs are in terms of ease of programming of these approaches, especially as we scale?

If we just keep arguing about it, it's possible it will never get solved; and if we don't solve it, we won't be able to rise up and meet this important challenge facing our field.

KO Looking back in history at the last big push in parallel computing, we see that we ended up with message passing as a de facto solution for developing parallel software. Are we in danger of that happening again? Will we end up with the lowest common denominator—whatever is easiest to do?

JH The fundamental problem is that we don't have a really great solution. Many of the early ideas were motivated by observations of what was easy to implement in the hardware rather than what was easy to use: how we're going to change our programming languages; what we can do in the architecture to mitigate the cost of various things, communication in particular, but synchronization as well.

Those are all open questions in my mind. We're really in the early stages of how we think about this. If it's the case that the amount of parallelism that programmers will have to deal with in the future will not be just two or four processors but tens or hundreds and thousands for some applications, then that's a very different world than where we are today.

DP On the other hand, there's exciting stuff happening in software right now. In the open source movement, there are highly productive programming environments that are getting invented at pretty high levels. Everybody's example is Ruby on Rails, a pretty different way to learn how to program. This is a brave new world where you can rapidly create an Internet service that is dealing with lots of users. There is evidence of tremendous advancement in part of the programming community—not particularly the academic part. I don't know if academics are paying attention to this kind of work or not in the language community, but there's hope of very different ways of doing things than we've done in the past.

Is there some way we could leverage that kind of innovation in making it compatible with this parallel future that we're sure is out there? I don't know the answer to that, but I would say nothing is off the table. Any solution that works, we'll do it.

KO Given that you won't be able to buy a microprocessor with a single core in the near future, you might be optimistic that the proliferation of these multicore parallel architectures will enable the open source community to come up with something interesting. Is that likely?

DP Certainly. What I've been doing is to tell all my colleagues in theory and software, "Hey, the world has changed. The La-Z-Boy approach isn't going to work anymore. You can't just sit there, waiting for your single processor to get a lot faster and your software to get faster, and then you can add the feature sets. That era is over. If you want things to go faster, you're going to have to do parallel computing."

The open source community is a real nuts-and-bolts community. They need to get access to parallel machines to start innovating. One of our tenets at RAMP is that the software people don't do anything until the hardware shows up.

JH The real change that has occurred is the free software movement. If you have a really compelling idea, your ability to get to scale rapidly has been dramatically changed.

DP In the RAMP community, we've been thinking about how to put this in the hands of academics. Maybe we should be putting a big RAMP box out there on the Internet for the open source community, to let them play with a highly scalable processor and see what ideas they can come up with.

I guess that's the right question: What can we do to engage the open source community to get innovative people, such as the authors of Ruby on Rails and other innovative programming environments? The parallel solutions may not come from academia or from research labs as they did in the past. Q

LOVE IT, HATE IT? LET US KNOW
feedback@acmqueue.com or www.acmqueue.com/forums

© 2006 ACM 1542-7730/06/1200 $5.00
UNLOCKING
CONCURRENCY
ALI-REZA ADL-TABATABAI, INTEL
CHRISTOS KOZYRAKIS, STANFORD UNIVERSITY
BRATIN SAHA, INTEL
Multicore architectures are an inflection point in mainstream software development because they force developers to write parallel programs. In a previous article in Queue, Herb Sutter and James Larus pointed out, "The concurrency revolution is primarily a software revolution. The difficult problem is not building multicore hardware, but programming it in a way that lets mainstream applications benefit from the continued exponential growth in CPU performance."1 In this new multicore world, developers must write explicitly parallel applications that can take advantage of the increasing number of cores that each successive multicore generation will provide.

Parallel programming poses many new challenges to the developer, one of which is synchronizing concurrent access to shared memory by multiple threads. Programmers have traditionally used locks for synchronization, but lock-based synchronization has well-known pitfalls. Simplistic coarse-grained locking does not scale well, while more sophisticated fine-grained locking risks introducing deadlocks and data races. Furthermore, scalable libraries written using fine-grained locks cannot be easily composed in a way that retains scalability and avoids deadlock and data races.

TM (transactional memory) provides a new concurrency-control construct that avoids the pitfalls of locks and significantly eases concurrent programming. It brings to mainstream parallel programming proven concurrency-control concepts used for decades by the database community. Transactional-language constructs are easy to use and can lead to programs that scale. By avoiding deadlocks and automatically allowing fine-grained concurrency, transactional-language constructs enable the programmer to compose scalable applications safely out of thread-safe libraries.

Although TM is still in a research stage, it has increasing momentum pushing it into the mainstream. The recently defined HPCS (high-productivity computing system) languages—Fortress from Sun, X10 from IBM, and Chapel from Cray—all propose new constructs for transactions in lieu of locks. Mainstream developers who are early adopters of parallel programming technologies have paid close attention to TM because of its potential for improving programmer productivity; for example, in his keynote address at the 2006 POPL (Principles of Programming Languages) symposium, Tim Sweeney of Epic Games pointed out that "manual synchronization…is hopelessly intractable" for dealing with concurrency in game-play simulation and claimed that "transactions are the only plausible solution to concurrent mutable state."2

Despite its momentum, bringing transactions into the mainstream still faces many challenges. Even with transactions, programmers must overcome parallel programming challenges, such as finding and extracting parallel tasks and mapping these tasks onto a parallel architecture for efficient execution. In this article, we describe how…
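The composability pitfall is easy to demonstrate. In the following sketch (illustrative Python, not from the article; the Account class and transfer helpers are invented for this example), each method is individually thread-safe, yet composing two calls is not atomic; a single global lock, the crudest model of an atomic transaction, restores atomicity at the cost of all concurrency:

```python
import threading

class Account:
    """A thread-safe account: every method holds the account's own lock."""
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()

    def withdraw(self, amount):
        with self.lock:
            self.balance -= amount

    def deposit(self, amount):
        with self.lock:
            self.balance += amount

def transfer_unsafe(src, dst, amount):
    # Each call is thread-safe, but between them another thread can
    # observe the money "in flight" (the sum invariant is broken), and
    # holding both locks manually risks deadlock if two threads transfer
    # in opposite directions and acquire the locks in opposite orders.
    src.withdraw(amount)
    dst.deposit(amount)

# The simplest (deliberately non-scalable) model of what an atomic
# transaction promises: the whole block appears to execute indivisibly.
_tx_lock = threading.Lock()

def transfer_atomic(src, dst, amount):
    with _tx_lock:
        src.withdraw(amount)
        dst.deposit(amount)
```

A TM system aims to combine the atomicity of `transfer_atomic` with the concurrency of fine-grained locking, by detecting conflicts between transactions instead of serializing everything.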
[FIGS 1–4: fragments only—Map code listings, and a performance graph over number of threads. The surviving body text notes that the code for ConcurrentHashMap is significantly longer and more complicated than the transactional version, and that barriers can be eliminated by the compiler—for example, barriers to the same address or to immutable variables.7]
To detect conflicts, the caches must communicate their read sets and write sets using the cache coherence protocol implemented in multicore chips. Pessimistic conflict detection uses the same coherence messages exchanged in existing systems.12 On a read or write access within a transaction, the processor will request shared or exclusive access to the corresponding cache line. The request is transmitted to all other processors that look up their caches for copies of this cache line. A conflict is signaled if a remote cache has a copy of the same line with the R bit set (for an exclusive access request) or the W bit set (for either request type). Optimistic conflict detection operates similarly but delays the requests for exclusive access to cache lines in the write set until the transaction is ready to commit. A single, bulk message is sufficient to communicate all requests.13

Even though HTM systems eliminate most sources of overhead for transactional execution, they nevertheless introduce additional challenges. The modifications HTM requires in the cache hierarchy and the coherence protocol are nontrivial. Processor vendors may be reluctant to implement them before transactional programming becomes pervasive. Moreover, the caches used to track the read set, write set, and write buffer for transactions have finite capacity and may overflow on a long transaction. Long transactions may be rare, but they still must be handled in a manner that preserves atomicity and isolation. Placing implementation-dependent limits on transaction sizes is unacceptable from the programmer's perspective. Finally, it is challenging to handle the transaction state in caches for deeply nested transactions or when interrupts, paging, or thread migration occur.14

Several proposed mechanisms virtualize the finite resources and simplify their organization in HTM systems. One approach is to track read sets and write sets using signatures based on Bloom filters. The signatures provide a compact yet inexact (pessimistic) representation of the sets that can be easily saved, restored, or communicated if necessary. The drawback is that the inexact representation leads to additional, false conflicts that may degrade performance. Another approach is to map read sets, write sets, and write buffers to virtual memory and use…

…the transactions are rolled back and restarted in the STM mode.15 The challenge with hybrid TM is conflict detection between software and hardware transactions. To avoid the need for two versions of the code, the software mode of a hybrid STM system can be provided through the operating system with conflict detection at the granularity of memory pages.16

A final implementation approach is to start with an STM system and provide a small set of key mechanisms that targets its main sources of overhead.17 This approach is called HASTM (hardware-accelerated STM). HASTM introduces two basic hardware primitives: support for detecting the first use of a cache line, and support for detecting possible remote updates to a cache line. The two primitives can significantly reduce the read barrier instrumentation overhead in general and the read-set validation time in the case of optimistic reads.

CONCLUSIONS
Composing scalable parallel applications using locks is difficult and full of pitfalls. Transactional memory avoids many of these pitfalls and allows the programmer to compose applications safely and in a manner that scales. Transactions improve the programmer's productivity by shifting the difficult concurrency-control problems from the application developer to the system designer.

In the past three years, TM has attracted a great deal of research activity, resulting in significant progress.18 Nevertheless, before transactions can make it into the mainstream as first-class language constructs, there are many open challenges to address.

Developers will want to protect their investments in existing software, so transactions must be added incrementally to existing languages, and tools must be developed that help migrate existing code from locks to transactions. This means transactions must compose with existing concurrency features such as locks and threads. System calls and I/O must be allowed inside transactions, and transactional memory must integrate with other transactional resources in the environment. Debugging and tuning tools for transactional code are also challenges, as transactions still require tuning to achieve scalability and concurrency bugs are still possible using transactions.

Transactions are not a panacea for all parallel program…
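The Bloom-filter signature scheme described above can be modeled in a few lines (an illustrative Python sketch; in a real HTM the signature lives in hardware registers and is tested against incoming coherence messages, and the class and function names here are invented):

```python
import hashlib

class Signature:
    """Compact, conservative summary of a transaction's read or write set.
    Membership tests can return false positives (spurious conflicts)
    but never false negatives, so correctness is preserved."""
    def __init__(self, bits=128, hashes=2):
        self.bits, self.hashes = bits, hashes
        self.word = 0  # the whole signature is one bit vector

    def _positions(self, addr):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{addr}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.bits

    def add(self, addr):
        for p in self._positions(addr):
            self.word |= 1 << p

    def might_contain(self, addr):
        # True means addr *may* be in the set; False means definitely not.
        return all((self.word >> p) & 1 for p in self._positions(addr))

def conflicts(read_sig, remote_writes):
    """Signal a (possibly false) conflict if any remotely written
    address may be in this transaction's read set."""
    return any(read_sig.might_contain(a) for a in remote_writes)
```

Because a signature can answer "maybe" for an address that was never added, some reported conflicts are false—exactly the performance-degrading inexactness the text describes—but a "no" is always safe.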
The Virtualization Reality
A number of important challenges are associated with the deployment and configuration of contemporary computing infrastructure. Given the variety of operating systems and their many versions—including the often-specific configurations required to accommodate the wide range of popular applications—it has become quite a conundrum to establish and manage such systems.

Significantly motivated by these challenges, but also owing to several other important opportunities it offers, virtualization has recently become a principal focus for computer systems software. It enables a single computer to host multiple different operating system stacks, and it decreases server count and reduces overall system complexity. EMC's VMware is the most visible and early entrant in this space, but more recently XenSource, Parallels, and Microsoft have introduced virtualization solutions. Many of the major systems vendors, such as IBM, Sun, and Microsoft, have efforts under way to exploit virtualization. Virtualization appears to be far more than just another ephemeral marketplace trend. It is poised to deliver profound changes to the way that both enterprises and consumers use computer systems.

What problems does virtualization address, and moreover, what will you need to know and/or do differently to take advantage of the innovations that it delivers? In this article we provide an overview of system virtualization, taking a closer look at the Xen hypervisor and its paravirtualization architecture. We then review several challenges in deploying and exploiting computer systems and software applications, and we look at IT infrastructure management today and show how virtualization can help address some of the challenges.

A POCKET HISTORY OF VIRTUALIZATION
All modern computers are sufficiently powerful to use virtualization to present the illusion of many smaller VMs (virtual machines), each running a separate operating system instance. An operating system virtualization environment provides each virtualized operating system (or guest) the illusion that it has exclusive access to the underlying hardware platform on which it runs. Of course, the virtual machine itself can offer the guest a different view of the hardware from what is really available, including CPU, memory, I/O, and restricted views of devices.

Virtualization has a long history, starting in the mainframe environment and arising from the need to provide isolation between users. The basic trend started with time-sharing systems (enabling multiple users to share a single expensive computer system), aided by innovations in operating system design to support the idea of processes that belong to a single user. The addition of user and supervisor modes on most commercially relevant…
CPU AND MEMORY VIRTUALIZATION
In Xen's paravirtualization, virtualization of the CPU, memory, and low-level hardware interrupts is provided by an efficient low-level hypervisor layer that is implemented in about 50,000 lines of code. When the operating system updates hardware data structures, such as the page table, or initiates a DMA operation, it collaborates with the hypervisor by making calls into an API that is offered by the hypervisor. This, in turn, allows the hypervisor to keep track of all changes made by the operating system and to optimally…

[FIG 1: the Xen architecture—guest OSes sit above a hypercall API; a small hypervisor runs directly on hardware; guest OSes cooperate with the hypervisor for resource management and I/O; device drivers live outside the hypervisor.]
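The collaboration between guest and hypervisor can be caricatured as follows (a toy Python model; real Xen hypercalls such as mmu_update are a low-level binary ABI, and the class and field names here are invented for illustration):

```python
class Hypervisor:
    """Toy model of paravirtualized memory management: the guest never
    writes its page tables directly; it makes a hypercall so the
    hypervisor can validate and track every change."""
    def __init__(self, owned_frames):
        self.owned_frames = set(owned_frames)  # machine frames this guest may map
        self.page_table = {}                   # the real table, hypervisor-controlled

    def hypercall_mmu_update(self, vaddr, frame):
        # Validation is the point of the indirection: a buggy or
        # malicious guest cannot map memory it does not own.
        if frame not in self.owned_frames:
            raise PermissionError("guest attempted to map a foreign frame")
        self.page_table[vaddr] = frame  # apply the validated, tracked update

# The guest OS, instead of writing the page table itself, calls:
hv = Hypervisor(owned_frames={7, 8})
hv.hypercall_mmu_update(0x1000, 7)
```

Because every update flows through the hypercall, the hypervisor always holds an accurate picture of the guest's mappings, which is what enables the tracking and optimization the text describes.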
Since I started a stint as chair of the IETF (Internet Engineering Task Force)
in March 2005, I have frequently been asked, “What’s coming next?” but I
have usually declined to answer. Nobody is in charge of the Internet, which
is a good thing, but it makes predictions difficult (and explains why this
article starts with a disclaimer: It represents my views alone and not those
of my colleagues at either IBM or the IETF).
The reason the lack of central control is a good thing is that it has
allowed the Internet to be a laboratory for innovation throughout its
life—and it’s a rare thing for a major operational system to serve as its own
development lab. As the old metaphor goes, we frequently change some of
the Internet’s engines in flight.
This is possible because of a few of the Internet’s basic goals:
• Universal connectivity—anyone can send packets to anyone.
• Applications run at the edge—so anyone can install and offer services.
• “Cheap and cheerful” core technology—so transmission is cheap.
• Natural selection—no grand plan, but good technology survives and the
rest dies.
Of course, this is an idealistic view. In recent years, firewalls and network
address translators have made universal connectivity sticky. Some telecom-
munications operators would like to embed services in the network. Some
transmission technologies try too hard, so they are not cheap. Until now,
however, the Internet has remained a highly competitive environment and
natural selection has prevailed, even though there have been attempts to
protect incumbents by misguided regulation.
In this environment of natural selection, predicting technology trends
is very hard. The scope is broad—the IETF considers specifications for how IP runs over emerging hardware media, maintenance and improvements to IP itself and to transport protocols including the ubiquitous TCP, routing protocols, basic application protocols, network management, and security. A host of other standards bodies operate in parallel with the IETF.

To demonstrate the difficulty of prediction, let's consider only those ideas that get close enough to reality to be published within the IETF; that's about 1,400 new drafts per year, of which around 300 end up being published as IETF requests for comments (RFCs). By an optimistic rough estimate, at most 100 of these specifications will be in use 10 years later (i.e., 7 percent of the initial proposals). Of course, many other ideas are floated in other forums such as ACM SIGCOMM. So, anyone who agrees to write about emerging protocols has at least a 93 percent probability of writing nonsense.

What would I have predicted 10 years ago? As a matter of fact, I can answer that question. In a talk in May 1996 I cautiously quoted Lord Kelvin, who stated in 1895 that "heavier-than-air flying machines are impossible," and I incautiously predicted that CSCW (computer-supported collaborative work), such as packet videoconferencing and shared whiteboard, would be the next killer application after the Web, in terms of bandwidth and realtime requirements. I'm still waiting.

A little earlier, speaking to an IBM user meeting in 1994 (before I joined IBM), I made the following specific predictions:
• Desktop client/server is the whole of computing. The transaction processing model is unhelpful.
• Cost per plug of LAN will increase.
• Internet and IPX will merge and dominate.
• Desktop multimedia is more than a gimmick, but only part of desktop computing.
• Wireless mobile PCs will become very important.
• Network management (including manageable equipment and cabling) is the major cost.

Well, transaction processing is more important in 2006 than it has ever been, and IPX has just about vanished. The rest, I flatter myself, was reasonably accurate.

FASTER, MORE SECURE
…on observable challenges and trends today.

The original Internet goal that anyone could send a packet to anyone at any time was the root of the extraordinary growth observed in the mid-1990s. To quote Tim Berners-Lee, "There's a freedom about the Internet: As long as we accept the rules of sending packets around, we can send packets containing anything to anywhere."1 As with all freedoms, however, there is a price. It's trivial to forge the origin of a data packet or of an e-mail message, so the vast majority of traffic on the Internet is unauthenticated, and the notion of identity on the Internet is fluid.

Anonymity is easy. When the Internet user community was small, it exerted enough social pressure on miscreants that this was not a major problem area. Over the past 10 years, however, spam, fraud, and denial-of-service attacks have become significant social and economic problems. Thus far, service providers and enterprise users have responded largely in a defensive style: firewalls to attempt to isolate themselves, filtering to eliminate unwanted or malicious traffic, and virtual private networks to cross the Internet safely.

These mechanisms are not likely going away, but what seems to be needed is a much more positive approach to security: Identify and authenticate the person or system you are communicating with, authorize certain actions accordingly, and if needed, account for usage. The term of art is AAA (authentication, authorization, accounting).

AAA is needed in many contexts and may be needed at several levels for the same user session. For example, a user may first need to authenticate to the local network provider. A good example is a hotel guest using the hotel's wireless network. The first attempt to access the Internet may require the user to enter a code supplied by the front desk. In an airport, a traveler may have to supply a credit card number to access the Internet or use a preexisting account with one of the network service providers that offer connectivity. A domestic ADSL customer normally authenticates to a service provider, too. IETF protocols such as EAP (Extensible Authentication Protocol) and RADIUS (Remote Authentication Dial-in User Service) are used to mediate these AAA interactions. This form of AAA, however, authenticates the user only as a sender and receiver of IP packets, and it isn't used at all where free service is provided (e.g., in a coffee shop).

Often (e.g., for a credit card transaction) the remote server needs a true identity, which must be authenti…
…may and does lose a (hopefully small) fraction of all packets. By the end-to-end principle, end systems are required to detect and compensate for missing packets. For reliable data transmission, that means retransmission, normally performed by the TCP half of TCP/IP. Users will see such retransmission, if they notice it at all, as a performance glitch. For media streams such as VoIP, packet loss will often be compensated for by a codec—but a burst of packet loss will result in broken speech or patchy video. For this reason, the issue of QoS (quality of service) came to the fore some years ago, when audio and video codecs first became practical. It remains a challenge.

One aspect of QoS is purely operational. The more competently a network is designed and managed, the better the service will be, with more consistent performance and fewer outages. Although unglamorous, this is probably the most effective way of providing good QoS.

Beyond that, there are three more approaches to QoS, which can be summarized as:
• Throw bandwidth at the problem.
• Reserve bandwidth.
• Operate multiple service classes.

The first approach is based on the observation that both in the core of ISP networks and in properly cabled business environments, raw bandwidth is cheap (even without considering the now-historical fiber glut). In fact, the only place where bandwidth is significantly limited is in the access networks (local loops and wireless networks). Thus, most ISPs and businesses have solved the bulk of their QoS problem by overprovisioning their core bandwidth. This limits the QoS problem to access networks and any other specific bottlenecks.

The question then is how to provide QoS management at those bottlenecks, which is where bandwidth reservations or service classes come into play. In the reservation approach, a session asks the network to assign bandwidth all along its path. In this context, a session could be a single VoIP call, or it could be a semi-permanent path between two networks. This approach has been explored in the IETF for more than 10 years under the name of "Integrated Services," supported by RSVP (Resource Reservation Protocol)…

…clumsy. A related approach, however—building virtual paths with guaranteed bandwidth across the network core—is embodied in the use of MPLS (MultiProtocol Label Switching). In fact, a derivative of RSVP known as RSVP-TE (for traffic engineering) can be used to build MPLS paths with specified bandwidth. Many ISPs are using MPLS technology.

MPLS does not solve the QoS problem in the access networks, which by their very nature are composed of a rapidly evolving variety of technologies (ADSL, CATV, various forms of Wi-Fi, etc.). Only one technology is common to all these networks: IP itself. Therefore, the final piece of the QoS puzzle works at the IP level. Known as Differentiated Services, it is a simple way of marking every packet for an appropriate service class, so that VoIP traffic can be handled with less jitter than Web browsing, for example. Obviously, this is desirable from a user viewpoint, and it's ironic that the more extreme legislative proposals for so-called "net neutrality" would ostensibly outlaw it, as well as outlawing priority handling for VoIP calls to 911.

The challenge for service providers is how to knit the four QoS tools (competent operation, overprovision of bandwidth, traffic engineering, and differentiated services) into a smooth service offering for users. This challenge is bound up with the need for integrated network management systems, where not only the IETF but also the DMTF (Distributed Management Task Force), TMF (TeleManagement Forum), ITU (International Telecommunication Union), and other organizations are active. This is an area where we have plenty of standards, and the practical challenge is integrating them.

However, the Internet's 25-year-old service model, which allows any packet to be lost without warning, remains; and transport and application protocols still have to be designed accordingly.

BACK TO THE FUTURE
As previously mentioned, MPLS allows operators to create virtual paths, typically used to manage traffic flows across an ISP backbone or between separate sites in a large corporate network. At first glance, this revives an old controversy in network engineering—the conflict between datagrams and virtual circuits. More than three decades ago this was a major issue. At that time, conventional solutions depended on end-to-end electrical circuits (hardwired or switched, and multiplexed where convenient).
Note that not all experts are convinced by these benefits. Modern IP routers are hardly slow, and as previously noted, QoS may in practice not be a problem in the network core. Most ISPs insist on the need for such traffic engineering, however.

Even with MPLS virtual paths, the fundamental unit of transmission on the Internet remains a single IP packet.

[FIG 1: chart; vertical axis 0–200,000, horizontal axis labeled "date," '95–'06.]
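Differentiated Services marking, described above, is visible even from ordinary application code: the DSCP occupies the top six bits of the old IP TOS byte and can be set per socket. A minimal sketch (standard Python sockets on a Unix-like host; EF, the low-jitter Expedited Forwarding class typically used for VoIP, is code point 46):

```python
import socket

EF_DSCP = 46        # Expedited Forwarding code point (RFC 3246)
tos = EF_DSCP << 2  # DSCP sits in the upper 6 bits of the TOS byte -> 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Every packet sent on this socket will carry the EF marking; routers
# that honor DiffServ may then queue it ahead of best-effort traffic.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
sock.close()
```

Whether the marking is honored, of course, is up to each network along the path—which is precisely the service providers' integration challenge noted in the text.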
FASTER,
by being open—and open-minded. Any engineer who
wants to join in can do so. The IETF has no membership
requirements; anyone can join the mailing list of any
MORE SECURE
working group, and anyone who pays the meeting fee
can attend IETF meetings. Decisions are made by rough
consensus, not by voting. The leadership committees in
the IETF are drawn from the active participants by a com-
change in a table of that size could greatly exceed the rate munity nomination process. Apart from meeting fees, the
at which routing updates could be distributed worldwide. Although we have known about this problem for more than 10 years, we are still waiting for the breakthrough ideas that will solve it.

MULTIPLE UNIVERSES?
The telecommunications industry was fundamentally surprised by the Internet's success in the 1990s and then fundamentally shaken by its economic consequences. Only now is the industry delivering a coherent response, in the form of the ITU's NGN (Next Generation Networks) initiative launched in 2004. NGN is to a large extent founded on IETF standards, including IP, MPLS, and SIP (Session Initiation Protocol), which is the foundation of standardized VoIP and IMS (IP Multimedia Subsystem). IMS was developed for third-generation cellphones but is now the basis for what ITU calls "fixed-mobile convergence." The basic principles of NGN are:3
• IP packet-based transport using MPLS
• QoS-enabled
• Embedded service-related functions—layered on top of transport or based on IMS
• User access to competing service providers
• Generalized mobility
At this writing, the standardization of NGN around these principles is well advanced. Although it is new for the telecommunications industry to layer services on top rather than embedding them in the transport network, there is still a big contrast with the Internet here: Internet services are by definition placed at the edges and are not normally provided by ISPs as such. The Internet has a history of avoiding monopoly deployments; it grows by spontaneous combustion, which allows natural selection of winning applications by the end users. Embedding service functions in the network has never worked in the past (except for directories). Why will it work now?

COME JOIN THE DANCE
It should be clear from this superficial and partial personal survey that we are still having fun developing the Internet. IETF is supported by the Internet Society. Any engineer who wants to join in can do so in several ways: by supporting the Internet Society (http://www.isoc.org), by joining IETF activities of interest (http://www.ietf.org), or by contributing to research activities (http://www.irtf.org and, of course, ACM SIGCOMM at http://www.acm.org/sigs/sigcomm/). Q

REFERENCES
1. Berners-Lee, T. 1999. Weaving the Web. San Francisco: HarperCollins.
2. Gross, P., Almquist, P. 1992. IESG deliberations on routing and addressing. RFC 1380 (November). DDN Network Information Center; http://www.rfc-archive.org/getrfc.php?rfc=1380.
3. Based on a talk by Keith Knightson. 2005. Basic NGN architecture principles and issues; http://www.itu.int/ITU-T/worksem/ngn/200505/program.html.

ACKNOWLEDGMENTS
Thanks to Bernard Aboba and Stu Feldman for valuable comments on a draft of this article.

LOVE IT, HATE IT? LET US KNOW
feedback@acmqueue.com or www.acmqueue.com/forums

BRIAN E. CARPENTER is an IBM Distinguished Engineer working on Internet standards and technology. Based in Switzerland, he became chair of the IETF (Internet Engineering Task Force) in March 2005. Before joining IBM, he led the networking group at CERN, the European Laboratory for Particle Physics, from 1985 to 1996. He served from March 1994 to March 2002 on the Internet Architecture Board, which he chaired for five years. He also served as a trustee of the Internet Society and was chairman of its board of trustees for two years until June 2002. He holds a first degree in physics and a Ph.D. in computer science, and is a chartered engineer (UK) and a member of the IBM Academy of Technology.
© 2006 ACM 1542-7730/06/1200 $5.00
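SIP, named above as the foundation of standardized VoIP, is a text-based request/response protocol in the mold of HTTP. As a purely illustrative sketch (the addresses, tag, and branch values are invented placeholders, and this is nowhere near a conformant SIP stack), a minimal INVITE request can be assembled as plain text:

```python
# Build a minimal, illustrative SIP INVITE request as plain text.
# SIP (RFC 3261) is line-oriented like HTTP; the addresses and the
# branch/tag values below are invented placeholders for illustration.

def make_invite(caller: str, callee: str, call_id: str) -> str:
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/UDP client.example.com;branch=z9hG4bK776asdhds",
        "Max-Forwards: 70",
        f"To: <sip:{callee}>",
        f"From: <sip:{caller}>;tag=1928301774",
        f"Call-ID: {call_id}",
        "CSeq: 314159 INVITE",
        f"Contact: <sip:{caller}>",
        "Content-Length: 0",
    ]
    # SIP uses CRLF line endings and a blank line before any body.
    return "\r\n".join(lines) + "\r\n\r\n"

msg = make_invite("alice@client.example.com", "bob@example.net", "a84b4c76e66710")
print(msg.splitlines()[0])  # → INVITE sip:bob@example.net SIP/2.0
```

That human-readable framing is one reason SIP spread so quickly through both the IETF and NGN worlds: it can be generated, proxied, and debugged with ordinary text tools.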
Continued from page 56

with what seem now to be ridiculously frugal resources. Maurice Wilkes, David Wheeler, and Stan Gill had written the first book on programming.4 This revered, pioneering trio are generally acknowledged as the co-inventors of the subroutine and relocatable code. As with all the most sublime of inventions, it's difficult to imagine the world without a call/return mechanism. Indeed, I meet programmers, whose parasitic daily bread is earned by invoking far-flung libraries, who have never paused to ponder with gratitude that the subroutine concept needed the brightest heaven of invention. Although no patents for the basic subroutine mechanism were sought (or even available) back then, a further sign of changing times is that patents are now routinely [sic] awarded for variations on the call/return mechanism, as well as for specific subroutines.5

David Wheeler died suddenly in 2004 after one of his daily bicycle rides to the Cambridge Computer Labs. It's quite Cantabrigian to "die with your clips on." I had the sad pleasure of attending David's memorial service and learning more of his extensive work in many areas of computing.6

Other innovations from the Cambridge Mathematical Laboratory in the early 1950s included Wilkes's paper introducing the concept of microprogramming. On a more playful note was the XOX program written by my supervisor A. S. (Sandy) Douglas. This played (and never lost!) tic-tac-toe (also known as OXO)—a seemingly trivial pursuit, yet one with enormous, unpredicted consequences. XOX was the very first computer game with an interactive CRT display, the challenge being not the programming logic, of course, but the fact that the CRT was designed and wired for entirely different duties. Little could anyone guess then that games and entertainment would become the dominant and most demanding applications for computers. Can anyone gainsay this assertion? One would need to add up all the chips, MIPS, terabytes, and kid-hours (after defining kid), so I feel safe in my claim. Discuss! If you insist, I can offer a weaselly cop-out: Games and entertainment are now among the most dominant and demanding applications for computers.

Cue in some computer-historic Pythonesque clichés: "We had it tough in them days, folks. The heat, mud, dust, and flies. Try telling the young'uns of today—they just don't believe yer. And did I mention the heat? 3,000 red-hot Mullard valves. All of us stripped down t' waist—even t' men!" Then an older old soldier would intervene: "512 words? You were lucky! All we had were two beads on a rusty abacus!" More ancient cries of disbelief: "An abacus? Sheer luxury! We had to dig out our own pebbles from t' local quarry. We used to dream of having an abacus..."

In truth, adversity did bring its oft-touted if not so sweet usages. Programming at the lower levels with limited memory constantly "focused the mind"—you were nearer the problem, every cycle had to earn its keep, and every bit carry [sic!] its weight in expensive mercury, as it were. The programming cycle revolved thus: handwrite the code on formatted sheets; punch your tape on a blind perforator (the "prayer" method of verification was popular, whence quips about the trademark Creed); select and collate any subroutines (the library was a set of paper tapes stored in neat white boxes); wait in line at the tape reader (this was before the more efficient "cafeteria" services were introduced); then finally collect and print your output tape (if any). All of which combined to impose a stricter discipline on what we now call software development. More attention perhaps than in these agile, interactive days was given to the initial formulation of the program, including "dry runs" on Brunsviga hand calculators. Indeed, the name of our discipline was numerical analysis and automatic computing, only later to be called computer science.7

EDSAC designer Professor (now Sir) Maurice Wilkes was quoted by the Daily Mail, October 1947:

"The brain will carry out mathematical research. It may make sensational discoveries in engineering, astronomy, and atomic physics. It may even solve economic and philosophical problems too complicated for the human mind. There are millions of vital questions we wish to put to it."

A few years later, the Star (June 1949) was reporting:

"The future? The 'brain' may one day come down to our level and help with our income-tax and bookkeeping calculations. But this is speculation and there is no sign of it so far."

Allowing for journalistic license, one can spot early differences between how computing was expected to evolve and how, in fact, things turned out. The enormous impact on scientific research did come about and continues to grow, but the relative pessimism about commercial applications quickly vanished. Indeed, soon after the June 1949 quote ("no sign of it so far"), the UK's leading caterers, food manufacturers, and tea-shop chain, J. (Joe) Lyons & Co., embarked on its LEO (Lyons Electronic Office) project, a business computer based directly on Wilkes's EDSAC designs (with appropriate financial support). I recall visits by the Joe Lyons "suits," who
replies lack any real signs of human intelligence. Discussing Democratic presidential candidates (George's idea), I entered "Hillary Clinton." George replied, "Hilary Clinton is a senator." I said, "She's also a candidate for president." George replied, "Oh yes, I know!!!" But when I asked, "Is she Hilary or Hillary?", George answered, "This does not make sense—how about math?" I said, "Nyet," and George answered: "You're right, that's Russian."

The site has a disclaimer: "Jabberwacky learns the behavior and words of its users. It may use language and produce apparent meanings that some will consider inappropriate. Use this site with discretion, and entirely at your own risk."

The point has long been made that human knowledge and learning relies deeply on our having a corporeal entity able to explore three-dimensional space and some cognitive "devices" to acquire the notions of both place and time. Chatbots that reside in static hardware are rather limited in this respect. Hence the need for a mobile bot, whether it has humanoid features such as Rossum's original robots or not. From "embedded systems" to "embodied"?

For limited, repetitive actions in tightly controlled environments, tremendous progress has been made, as in motor-car assembly. As a devout soccer fan, I'm intrigued by the possibilities of two teams of 11 robots playing the Beautiful Game. In 1997, Hiroaki Kitano launched the annual Robot World Cup, or RoboCup for short.9 (The name Ballbot has been taken up elsewhere for a robot that moves around rather like a gymnast walking while balanced on a large sphere; see http://www.post-gazette.com/pg/06235/715415-96.stm.) By 2002, 29 different countries had entered the RoboCup staged in Fukuoka, Japan, attracting a total audience of 120,000 fans. Peter Seddon describes the game between the Baby Tigers (Japan) and the Dirty Dozen (Germany) as "looking remarkably like a contest between toasters on wheels, while the Four-Legged League (RoboMutts) appeared to spend most of the time sniffing each other's shorts."

Kitano remains optimistic that by 2050 a team of autonomous bots will beat the human World Cup Champions. That's a prediction that's difficult to gainsay. Kitano points out that 50 years after EDSAC, the IBM Deep Blue beat world chess champion Garry (or some prefer Gary) Kasparov. Seddon argues that playing a symbolic "computable" game like chess cannot be compared with the physical complexities of soccer, where the rules appear simple but defy algorithmic precision. "He was bleedin' off-side!" "Oh no, he bleedin' wasn't!"

Seddon reckons that the chances of Kitano's prophesy coming true are about the same as Beckham ever becoming world chess champion. That honor, by the way, has just been achieved by the Russian Vladimir Kramnik but not without some all-too-human, sordid altercations. His rival, the Bulgarian Veselin Topalov, objected to Kramnik's frequent trips to the restroom (or, in chess notation, K x P?). The ultimate insult was that Kramnik had been consulting a computer hidden in the Gents. This was convincingly disproved, but one further irony ensued. Since the normal games ended in a points tie, a soccer-like extra-time had to be played: four games of nail-biting rapid play, which, to the relief of all fair-chess lovers, was won by Kramnik. Q

REFERENCES
1. You, too, can relive those heroic days. Martin Campbell-Kelly (no relation) of Warwick University offers an EDSAC simulator for the PC and Mac; http://www.dcs.warwick.ac.uk/~edsac/Software/EdsacTG.pdf. This site will also point you to the vast EDSAC bibliography.
2. I'm lying for a cheap laugh. In fact, I've never knowingly stolen a file of any kind. As a member of ASCAP (American Society of Composers, Authors, and Publishers), I urge you all to obey the IP (intellectual property) protocols.
3. EOF, possibly overloaded from EndOfFile to ExtremelyOldFart, started life as plain OF (OldFart) in the Jargon File and subsequent versions of the Eric Raymond/Guy Steele Hacker's Dictionary. In the 1980s OF was generally applied (with pride or sarcasm) to those with more than about 25 years in the trenches. It now seems appropriate to define EOFs by stretching the time served to "more than about 50 years."
4. Wilkes, M. V., Wheeler, D. J., Gill, S. 1951. The Preparation of Programs for an Electronic Digital Computer.
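The closed subroutine that Wilkes, Wheeler, and Gill describe in reference 4 predates hardware call stacks: the caller handed the subroutine its own return address, and the subroutine planted that address in its closing jump (the celebrated "Wheeler jump"). The toy interpreter below is a hedged Python sketch of that linkage idea only, not an EDSAC-accurate simulation; all opcode names are invented for illustration:

```python
# Toy illustration of Wheeler-style subroutine linkage: the caller
# passes the return address explicitly (here via the accumulator),
# and the subroutine's closing "jump" uses it to resume the caller.

def run(program, start=0):
    acc = 0          # accumulator; CALL also passes the return address in it
    link = None      # return address saved by the subroutine prologue
    pc = start
    trace = []
    while pc < len(program):
        op, arg = program[pc]
        if op == "CALL":        # caller: put return address in acc, jump
            acc = pc + 1
            pc = arg
            continue
        if op == "SETLINK":     # prologue: save return address, free acc
            link, acc = acc, 0
        elif op == "ADD":
            acc += arg
        elif op == "EMIT":      # record the accumulator for inspection
            trace.append(acc)
        elif op == "RETURN":    # epilogue: jump back through the saved link
            pc = link
            continue
        elif op == "HALT":
            break
        pc += 1
    return trace

# Main routine at 0..2 calls the subroutine at 4; control resumes at 1.
program = [
    ("CALL", 4),        # 0: call subroutine
    ("EMIT", None),     # 1: resumed here after the subroutine returns
    ("HALT", None),     # 2
    ("HALT", None),     # 3: padding
    ("SETLINK", None),  # 4: subroutine entry, stash the return address
    ("ADD", 41),        # 5: do some "work"
    ("RETURN", None),   # 6: jump back to the caller
]
print(run(program))  # → [41]
```

On the real machine the "jump" was self-modifying code, the return address being written into the subroutine's final order, which is exactly why relocatable code had to be invented alongside it.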
more queue: www.acmqueue.com ACM QUEUE December/January 2006-2007 55
curmudgeon

Will the Real Bots Stand Up?
From EDSAC to iPod—Predictions Elude Us

Stan Kelly-Bootle, Author

When asked which advances in computing technology have most dazzled me since I first coaxed the Cambridge EDSAC 1 1 into fitful leaps of calculation in the 1950s, I must admit that Apple's iPod sums up the many unforeseen miracles in one amazing, iconic gadget. Unlike those electrical nose-hair clippers and salt 'n' pepper mills (batteries not included) that gather dust after a few shakes, my iPod lives literally near my heart, on and off the road, in and out of bed like a versatile lover—except when it's recharging and downloading in the piracy of my own home.2

I was an early iPod convert and remain staggered by the fact that I can pop 40 GB of mobile plug-and-play music and words in my shirt pocket. I don't really mind if the newer models are 80 GB or slightly thinner or can play movies; 40 GB copes easily with my music and e-lecture needs. Podcasts add a touch of potluck and serendipity-doo-dah. Broadcasts from the American public radio stations that I've missed since moving back to England now reach my iPod automatically via free subscriptions and Apple's iTunes software. I've learned to live with that pandemic of "i-catching" prefixes to the point where I've renamed Robert Graves's masterwork "iClaudius," but I digress.

The functional "completeness" of the audio iPod stems from its ideal marriage of hardware and software. The compactness is just right, respecting the scale of human manipulations. The Dick Tracy wristwatch vade mecum failed through over-cram and under-size. The iPod succeeds with a legible alphanumeric screen and that senile-proof, uncluttered, almost minimal, click-wheel user interface. This avoids the input plague of most portable gadgets such as phones, calculators, and PDAs: the minuscule keyboards and buttons. I hasten to deflect the wrath of my daughter-in-law Peggy Sadler and all who have mastered and swear by the Palm Pilot stylus! The click wheel offers circular, serial access to and selection of your titles, but that's a decent compromise when you ponder the problems of searching by keywords. Spoken commands remain, as always, waiting for the next reassuring "breakthrough." I'll return anon to other Next-Big-Fix-Release promises.

Meanwhile, adding still-life pictures, such as cover art, may retain the iPod's simple "completeness," but pushing the device to TV seems to me to break the spell of sound gimcrackery [sic]. Peering at tiny moving pictures is a pointless pain, whereas even modestly priced earphones provide the superb hi-fi we used to dream about when growing up.

The near-exponential improvement of every computing power-performance parameter—physical size, clock speed, storage capacity, and bandwidth, to name the obvious features—is now a cliché of our fair trade. Yet even my older readers3 may need reminding just how bleak things were almost 60 years ago as the world's first stored-program machine (note the Cambridge-chauvinistic singular) moved into action.

The house-size EDSAC was effectively a single-user personal computer—a truly general computing factotum, but as Rossini's Figaro warns: Ahime, che furia! Ahime, che folla! Uno alla volta, per carità! (Heavens, what mayhem! Goodness, what crowds! One at a time, for pity's sake!)

Originally (1947) EDSAC boasted [sic] 512 words of main memory stored in 16 ultrasonic mercury-delay-line tanks, cleverly known as "long" tanks because they were longer than the short tanks used for registers. On the bright side, as we used to quip, each of the 512 words was 18 bits! Forget the word count, feel the width! Alas, for technical reasons, only 17 of the 18 bits were accessible. By 1952, the number of long tanks had doubled, providing a dizzy total of 1-KB words. Input/output was via five-track paper tape, which therefore also served as mass [sic again] storage. Subject only to global timber production, one might see this as virtually unlimited mass storage, although access was strictly slow-serial via 20-characters-per-second tape readers and 10-characters-per-second teletype printers. (By 1958, with EDSAC 2 taking over, paper tape and printer speeds had risen and magnetic tapes had become the standard backup and mass storage medium.)

Although hindsight and nostalgia can distort, one still looks back with an old soldier's pride at the feats achieved Continued on page 52
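The EDSAC capacity and tape-speed figures quoted in the column lend themselves to a quick back-of-the-envelope check. The sketch below uses only numbers from the text, plus one flagged assumption (four 5-bit tape characters per 18-bit word, chosen purely for illustration):

```python
# Back-of-the-envelope figures for EDSAC, using numbers quoted above.

WORDS_1947 = 512
WORDS_1952 = 1024          # "a dizzy total of 1-KB words" by 1952
BITS_PER_WORD = 18
USABLE_BITS = 17           # only 17 of the 18 bits were accessible

usable_1947 = WORDS_1947 * USABLE_BITS           # bits of usable store
print(usable_1947 // 8, "bytes usable in 1947")  # → 1088 bytes usable in 1947

# Five-track tape carries 5 bits per character, so assume an 18-bit
# word needs about 4 characters (an assumption, for illustration only).
CHARS_PER_WORD = 4
READER_CPS = 20            # the 20-characters-per-second tape reader
seconds = WORDS_1952 * CHARS_PER_WORD / READER_CPS
print(f"~{seconds / 60:.1f} minutes to reload the full 1952 store from tape")
```

So the "house-size" machine offered roughly a kilobyte of usable memory, and even reloading that kilobyte from paper tape took on the order of minutes, which puts the column's talk of every bit earning its keep in concrete terms.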