
Moravec’s Paradox of Artificial Intelligence, and a Possible Solution by Hiroshi Yamakawa, with Interesting Ethical Implications

Have you heard of Moravec’s Paradox?


This is a principle discovered by AI robotics expert Hans
Moravec in the 1980s. He discovered that, contrary to
traditional assumptions, high-level reasoning requires
relatively little computational power, whereas low-level
sensorimotor skills require enormous computational
resources. The paradox is sometimes summarized by the
phrase: robots find the difficult things easy and the easy
things difficult. Moravec’s Paradox explains why we can now
create specialized AI, such as predictive coding software to
help lawyers find evidence, or AI software that can beat the
top human experts at complex games such as Chess,
Jeopardy and Go, but we cannot create robots as smart as
dogs, much less as smart as gifted two-year-olds like my
granddaughter. Also see the possible economic and cultural
implications of this paradox as described, for instance,
by Larry Elliott in Robots will not lead to fewer jobs – but
the hollowing out of the middle class (The Guardian,
8/20/17).
Hans Moravec is a legend in the world of
AI. An immigrant from Austria, he is now serving as a
research professor in the Robotics Institute of Carnegie
Mellon University. His work includes attempts to develop a
fully autonomous robot that is capable of navigating its
environment without human intervention. Aside from his
paradox discovery, he is well known for a book he wrote in
1988, Mind Children: The Future of Robot and Human
Intelligence. This book has become a classic, well known and
admired by most AI scientists. It is also fairly easy for
non-experts to read and understand, a rarity in this field.

Moravec is also a futurist, with many of his
publications and predictions focusing on transhumanism,
including Robot: Mere Machine to Transcendent Mind (Oxford
U. Press, 1998). In Robot he predicted that machines will
attain human levels of intelligence by the year 2040, and by
2050 will have far surpassed us. His prediction may still
come true, especially if the exponential acceleration of
computational power following Moore’s Law continues. But
for now, we still have a long way to go.
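
To get a rough sense of the scale behind that prediction, here is a minimal back-of-envelope sketch in Python. The two-year doubling period is an illustrative assumption on my part, not a figure from Moravec’s book.

# Back-of-envelope sketch of Moore's Law compute growth.
# Illustrative assumption: compute doubles every two years.
def compute_growth(start_year: int, end_year: int, doubling_years: float = 2.0) -> float:
    """Multiplicative growth in compute between two years."""
    return 2 ** ((end_year - start_year) / doubling_years)

# From Robot's publication (1998) to Moravec's predicted date (2040):
print(f"1998 -> 2040: ~{compute_growth(1998, 2040):,.0f}x more compute")
# Prints ~2,097,152x, i.e. roughly a two-million-fold increase.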

Yamakawa on Moravec’s Paradox

A recent interview of Hiroshi
Yamakawa, a leading researcher in Japan working on
Artificial General Intelligence (AGI), sheds light on the
Moravec Paradox. See the April 5, 2017 interview of Dr.
Hiroshi Yamakawa by a panel of AI experts: Eric Gastfriend,
Jason Orlosky, Mamiko Matsumoto, Benjamin Peterson, and
Kazue Evans. The interview is published by the Future of Life
Institute, where you will find the full transcript and more
details about Yamakawa.

In his interview, Yamakawa explains the Moravec Paradox and
the emerging best hope for its solution: deep learning.

The field of AI has traditionally progressed with symbolic
logic as its center. It has been built with knowledge defined
by developers and manifested as AI that has a particular
ability. This looks like “adult” intelligence ability. From this,
programming logic becomes possible, and the development
of technologies like calculators has steadily increased. On the
other hand, the way a child learns to recognize objects or
move things during early development, which corresponds to
“child” AI, is conversely very difficult to explain. Because of
this, programming some child-like behaviors is very difficult,
which has stalled progress. This is also called Moravec’s
Paradox.

However, with the advent of deep learning, development of
this kind of “child” AI has become possible by learning from
large amounts of training data. Understanding the content of
learning by deep learning networks has become an important
technological hurdle today. Understanding our inability to
explain exactly how “child” AI works is key to understanding
why we have had to wait for the appearance of deep learning.
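
To make that contrast concrete, here is a minimal sketch of my own, not code from the interview. The “adult” task is a few lines of explicit, developer-defined logic; the “child” task has no obvious rule to hand-code, so a toy logistic learner, standing in here for deep learning, must extract the rule from labeled examples.

import numpy as np

# "Adult" intelligence: explicit, developer-defined logic is easy to program.
def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# "Child" intelligence: perceptual skills resist hand-written rules, so the
# rule must be learned from examples. A toy logistic learner stands in for
# a deep network; the principle (fit parameters to data) is the same.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # 200 two-feature "sensory" samples
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # hidden concept to be learned
w, b = np.zeros(2), 0.0
for _ in range(100):                       # simple gradient-descent steps
    p = 1 / (1 + np.exp(-(X @ w + b)))     # sigmoid predictions
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)
pred = (1 / (1 + np.exp(-(X @ w + b)))) > 0.5
print(f"learned accuracy: {np.mean(pred == y):.2f}")  # rule emerges from data, not code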

Hiroshi Yamakawa calls his approach to deep
learning the Whole Brain Architecture approach.

The whole brain architecture is an engineering-based
research approach “to create a human-like artificial general
intelligence (AGI) by learning from the architecture of the
entire brain.” … In short, the goal is brain-inspired AI, which
is essentially AGI. Basically, this approach to building AGI is
the integration of artificial neural networks and machine-
learning modules while using the brain’s hard wiring as a
reference. However, even though we are using the entire
brain as a building reference, our goal is not to completely
understand the intricacies of the brain. In this sense, we are
not looking to perfectly emulate the structure of the brain
but to continue development with it as a coarse reference.
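
The toy sketch below illustrates that idea of coarse, brain-referenced modularity. The module names and the routing order are my own illustrative choices, not Yamakawa’s actual platform; each placeholder would be a trained machine-learning module in a real system.

from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    def process(self, signal: dict) -> dict:
        # Placeholder: in a real system this would be a trained neural network.
        signal[self.name] = f"processed by {self.name}"
        return signal

@dataclass
class WholeBrainAgent:
    # Coarse reference to brain anatomy, not an emulation of it.
    modules: list = field(default_factory=lambda: [
        Module("neocortex"),      # perception and prediction
        Module("hippocampus"),    # episodic memory
        Module("basal_ganglia"),  # action selection
    ])
    def step(self, observation: dict) -> dict:
        # Route the signal through the modules along brain-inspired wiring.
        for m in self.modules:
            observation = m.process(observation)
        return observation

print(WholeBrainAgent().step({"input": "camera frame"}))
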
Yamakawa sees at least two advantages to this approach.

The first is that since we are
creating AI that resembles the human brain, we can develop
AGI with an affinity for humans. Simply put, I think it will be
easier to create an AI with the same behavior and sense of
values as humans this way. Even if superintelligence exceeds
human intelligence in the near future, it will be comparatively
easy to communicate with AI designed to think like a human,
and this will be useful as machines and humans continue to
live and interact with each other. …

The second merit of this unique approach is that if we
successfully control this whole brain architecture, our
completed AGI will arise as an entity to be shared with all of
humanity. In short, in conjunction with the development of
neuroscience, we will increasingly be able to see the entire
structure of the brain and build a corresponding software
platform. Developers will then be able to collaboratively
contribute to this platform. … Moreover, with collaborative
development, it will likely be difficult for this to become
“someone’s” thing or project. …

Act Now for AI Safety?

As part of the interview, Yamakawa
was asked whether he thinks it would be productive to start
working on AI safety now. As readers here know, one of the
major points of the AI-Ethics.com organization I started is
that we need to begin work now on such regulations.
Fortunately, Yamakawa agrees. His promising Whole Brain
Architecture approach to deep learning as a way to
overcome Moravec’s Paradox thus will likely have a strong
ethics component. Here is Hiroshi Yamakawa’s full, very
interesting answer to this question.

I do not think it is at all too early to act for safety, and I
think we should progress forward quickly. Technological
development is accelerating at a fast pace as predicted by
Kurzweil. Though we may be in the midst of this exponential
development, since the insight of humans is relatively linear,
we may still not be close to the correct answer. In situations
where humans are exposed to a number of fears or risks,
something referred to as “normalcy bias” in psychology
typically kicks in. People essentially think, “Since things have
been OK up to now, they will probably continue to be OK.”
Though this is often correct, in this case, we should subtract
this bias.

If possible, we should have several methods to be able to
calculate the existential risk brought about by AGI. First, we
should take a look at the Fermi Paradox. This is a type of
estimation process that proposes that we can estimate the
time at which intelligent life will become extinct based on the
fact that we have not yet met with alien life and on the
probability that alien life exists. However, using this type of
estimation would result in a rather gloomy conclusion, so it
doesn’t really serve as a good guide as to what we should
do. As I mentioned before, it probably makes sense for us to
think of things from the perspective of increasing decision-
making bodies that have increasing power to bring about the
destruction of humanity.
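
Here is a minimal sketch of the Fermi-style estimate Yamakawa is gesturing at; all numbers are made-up assumptions of mine, not figures from the interview. In a steady state, the number of detectable civilizations is roughly the birth rate times the average lifetime, so observing zero pushes the estimate toward short lifetimes.

# Fermi-style back-of-envelope (illustrative assumptions only): in steady
# state, visible civilizations ~ birth rate x average lifetime, so seeing
# none favors short lifetimes -- the "gloomy conclusion" in the quote.
birth_rate_per_year = 0.01  # assumed new civilizations per year in the galaxy
for lifetime_years in (1e2, 1e4, 1e6):  # candidate average lifetimes
    visible = birth_rate_per_year * lifetime_years
    print(f"average lifetime {lifetime_years:>9,.0f} yr -> ~{visible:,.0f} visible")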
