
Keith Pope

Introduction to Nanotechnology
Dr. James Smith

CONVERGENCE

December 14, 2015

Introduction
In the nanotech world, the concept of convergence originated with a 2001
conference organized by the U.S. National Science Foundation and the U.S.
Department of Commerce titled "Nanotechnology, Biotechnology, Information
Technology and Cognitive Science (NBIC): Converging Technologies for Improving
Human Performance."
It was a recognition that the research being conducted in a number of separate
disciplines was coming together around new developments at the nanoscale.
Nanoscience was seen as a revolutionary way of thinking about, and solving,
problems at the molecular level in a variety of fields; it sits at the intersection
between classical physics, which governs the world that we see and interact with,
and quantum mechanics, which governs interactions at the atomic level. The included
fields of biotechnology, information technology, and cognitive science are obviously
not comprehensive; several other fields are also being fundamentally affected by
nanoscience, such as materials science, chemistry, MEMS, and synthetic biology.
The focus of the conference was a deliberate effort to attract funding for
basic research projects [1]. Nanotechnology was used as the centerpiece since its
perceived potential upside is almost limitless. The organizers also deliberately
emphasized the subtitle, "Converging Technologies for Improving Human
Performance," in the belief that improved human performance would be attractive
to various stakeholders, including the federal government, which could provide the
necessary funding for basic research, and in an attempt to distance themselves
from the negative publicity that already existed around the "grey goo" scenario of
self-replicating nanobots [2].

But what an extraordinary premise! Improving human performance. Not
seeking advances in medical diagnosis and treatment, not seeking ways to improve
the human lifestyle, but improving the capabilities of a fully functioning human. In
what manner are humans to be seen as deficient and in need of improvement? The
premise also suggests a dissatisfaction with the current system for improving
human capabilities: evolution. Why wait eons for changes that may only be focused
on the survival of the species if we can make any changes we desire quickly, based
on ongoing technological advances? It also raises a number of ethical issues that
seem to have gotten lost in the focus on the individual advances being made in a
variety of areas of nanoscience [3].
While developments in nanotechnology are being made in many areas, this
paper focuses on the overall consequences of the convergence of developments in
artificial intelligence and the interface with the human brain, which together may
lead to fundamental changes in human capabilities.
Human and Machine Intelligence
A number of commentators see the convergence of medicine, biology, and
artificial intelligence, coupled with the control over the building blocks of matter
offered by nanoscience, as life-changing, or even human-changing, on a scale not
previously seen.
One particularly aggressive proponent of nanotechnology and the
convergence is Ray Kurzweil. In his 2005 book The Singularity Is Near, he argues
that advances in information technology will permit humanity to transcend its
biological limitations [4]. He foresees a not-too-distant future in which human
intelligence and computer intelligence will be combined, which, together with
ongoing medical advances, will extend human life, perhaps indefinitely. Medical
advances in the diagnosis and treatment of diseases have already significantly
extended the human lifespan, but not to the degree that Kurzweil anticipates. The
average American male born in 1900 had a life expectancy of 48 years; one born in
2000 had a life expectancy of 74 years, an increase of over 50% [2]. It is no longer
uncommon for people to live well into their 90s. Kurzweil foresees a day in which
worn-out parts can simply be replaced and life continued indefinitely.
Kurzweil uses the term singularity, rather than convergence. It is a
reference to the event horizon around a black hole, the boundary beyond which all
matter is pulled into the black hole and from within which nothing can escape.
Since no information is available from beyond that boundary, we don't know what
happens inside a black hole. Kurzweil's concept of the singularity is similar: once
the merger of human and machine intelligence is achieved, new developments will
occur so quickly and in such unforeseen ways that we cannot see what the future
will be from our side of the divide.
Ray Kurzweil is not a science fiction writer. He has a lifetime of scientific
achievements; in fact, Inc. magazine called him "the rightful heir to Thomas
Edison." One of his early developments was pioneering speech-recognition
software that allowed computers to understand the spoken word, work that laid
the foundation for Siri, Google Now, Alexa, and other computer-based assistants.
He is currently a Director of Engineering at Google, leading a team developing
machine intelligence and natural language understanding.

We don't have to accept Kurzweil's vision of the future (which has been
disputed by many and is counter-balanced by those who foresee a dystopian
outcome if artificial intelligence is achieved). However, we are currently making
strides in both artificial intelligence and the potential for improved human
intelligence that may lead to some version of his cyborg combination of human
and machine intelligence.
Artificial Intelligence
Alan Turing first proposed the Turing Test in 1950. He suggested that if a
person could have a natural-language conversation with a machine and was unable
to determine whether they were talking to a machine or another human, we would
have achieved artificial intelligence. In 1950! Computers existed only in what we
would consider a very primitive form, and he was already thinking about the
possibility of artificial intelligence rivaling human intelligence at some point.
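
Turing's "imitation game" is, at bottom, a simple protocol, and can be sketched in a
few lines of code. In this toy harness (the canned machine_reply and the coin-flip
assignment are illustrative assumptions, not Turing's specification), a judge
questions two hidden parties over the same channel and must guess which is the
machine; the machine passes if the judge can do no better than chance.

    import random

    def machine_reply(question: str) -> str:
        # Hypothetical stand-in: a real candidate AI system would answer here.
        return "That's an interesting question. What do you think?"

    def human_reply(question: str) -> str:
        # The hidden human types an answer at the console.
        return input(f"(hidden human) {question}\n> ")

    def imitation_game(questions: list[str]) -> None:
        # Randomly assign the machine and the human to labels A and B.
        a, b = ((machine_reply, human_reply) if random.random() < 0.5
                else (human_reply, machine_reply))
        for q in questions:
            print("A:", a(q))
            print("B:", b(q))
        guess = input("Which party is the machine, A or B? ").strip().upper()
        actual = "A" if a is machine_reply else "B"
        print("Judge was right." if guess == actual else "Judge was fooled.")

    imitation_game(["Do you enjoy poetry?", "What is 7 times 8?"])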
Since then, there has been an ongoing debate about how smart machines
can be made. One supposedly impossible goal was achieved in 1997, when the IBM
computer Deep Blue defeated the then-reigning world chess champion, Garry
Kasparov, under tournament conditions. Once accomplished, the feat was dismissed
by skeptics as only involving a game with defined rules and a technically finite
number of moves, one particularly subject to massive computational power.
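
The skeptics' point can be made concrete: a game with defined rules reduces to
searching a tree of moves, and raw computation lets a machine search deeper than
any human. Below is a minimal, self-contained minimax search, written for
tic-tac-toe rather than chess as an illustrative simplification; Deep Blue combined
the same principle with specialized hardware and hand-tuned chess knowledge.

    # Minimax search for tic-tac-toe: enumerate legal moves, recurse on each
    # resulting position, and keep the move with the best guaranteed outcome.

    def winner(board: str):
        lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
        for a, b, c in lines:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board: str, player: str):
        """Return (score, best move) with X maximizing and O minimizing."""
        w = winner(board)
        if w is not None:
            return (1 if w == "X" else -1), None
        moves = [i for i, square in enumerate(board) if square == " "]
        if not moves:
            return 0, None                     # board full: a draw
        best = None
        for m in moves:
            child = board[:m] + player + board[m + 1:]
            score, _ = minimax(child, "O" if player == "X" else "X")
            if (best is None
                    or (player == "X" and score > best[0])
                    or (player == "O" and score < best[0])):
                best = (score, m)
        return best

    print(minimax(" " * 9, "X"))   # (0, 0): perfect play by both sides is a draw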
Computers are far better than humans at computation and at maintaining
and retrieving accurate information. Recent studies of the brain indicate that we
have a single "read/write head" [5]; that is, any time we access a memory, we may
also modify it. For example, when we see a person we know, we automatically
update our memory of that person's image. That is why, when we see someone
every day, we may not notice changes that happen slowly over time, but can be
shocked by the changes in appearance of someone we haven't seen in several
years. It is also a reason why our memories can be notoriously unreliable.
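
A toy simulation makes the point. In the sketch below, each recall overwrites the
stored trace with a blend of the old memory and the current percept; the blending
rule and the 0.3 weight are illustrative assumptions, not a neuroscience model.
Daily recall tracks gradual change almost perfectly, which is exactly why the
change goes unnoticed.

    # Toy read-modify-write memory: every recall rewrites the stored trace,
    # pulling it toward whatever we currently perceive.

    def recall_and_update(memory: float, percept: float, blend: float = 0.3) -> float:
        """Recalling a memory also modifies it, blending in the current percept."""
        return (1 - blend) * memory + blend * percept

    memory = 0.0        # remembered appearance of a friend (abstract scale)
    appearance = 0.0    # the friend's actual appearance
    for day in range(365):
        appearance += 0.01                              # slow daily change
        memory = recall_and_update(memory, appearance)  # we see them every day

    print(round(appearance - memory, 3))  # ~0.023: memory tracks, change unnoticed
    # Skip the daily visits and the trace stays frozen at its old value, so
    # after years apart the mismatch with reality is large and shocking.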
On the other hand, humans are much better than computers at
understanding and communicating in natural language. We use a large number of
cues, including context, a common culture, visual input, facial expressions, the
known speech patterns of the person we are talking to, and the emphasis placed
on the words, to extract meaning from what is spoken (which, if interpreted
literally, can often be quite nonsensical). For a long time, programmers thought
that programming computers to understand natural language was impossible.
In 2011, Watson, Deep Blue's successor, defeated Ken Jennings and Brad
Rutter (two former champions) on the quiz show Jeopardy!. Jeopardy! is designed
to test contestants on their ability to remember a large number of facts and to
make logical, and sometimes whimsical, connections. According to IBM, Watson
was designed to apply "advanced natural language processing, information
retrieval, knowledge representation, automated reasoning and machine learning
technologies to the field of open domain question answering."
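
IBM's description corresponds to a recognizable pipeline shape: retrieve candidate
passages, score the evidence for each candidate against the question, and return
the best-supported answer. The keyword-overlap toy below illustrates only that
shape; the two-entry corpus and the naive scoring are invented for the example,
and Watson's actual DeepQA system used hundreds of far more sophisticated
scorers.

    # Toy open-domain question answering in the shape IBM describes:
    # retrieval -> evidence scoring -> answer ranking.

    CORPUS = {
        "Toronto": "Toronto is the largest city in Canada, on Lake Ontario.",
        "Chicago": "Chicago is the third largest city in the United States.",
    }

    def words(text: str) -> set:
        """Lowercase word set, stripped of punctuation, for crude matching."""
        return {w.strip(".,?!") for w in text.lower().split()}

    def score(question: str, passage: str) -> int:
        """Evidence score: naive word overlap between question and passage."""
        return len(words(question) & words(passage))

    def answer(question: str) -> str:
        """Rank each candidate answer by how well its passage matches."""
        return max(CORPUS, key=lambda name: score(question, CORPUS[name]))

    print(answer("What is the largest city in Canada?"))  # -> Toronto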
Watson is now being used at Memorial Sloan Kettering Cancer Center to
provide feedback to doctors and nurses about the choice and implementation of
various courses of cancer treatment. IBM has also announced that it plans to invest
$100 million over a 10-year period to use Watson to help African countries address
development problems, beginning with healthcare and education.
Artificial intelligence does not need to be limited to a single high-powered
computer. In a speech at a Google event in 2014, Kurzweil said that Google was
developing new search software capable of understanding text and providing a
fully reasoned response to a natural-language question, not just a list of
inter-related websites that might be helpful. Eighteen months later, glimpses of
those systems are becoming available. On your phone! Google may become the
basis for our first interaction with a computer system as if we were communicating
with another human.
Between Watson and the natural-language computer assistants currently
becoming available, it would seem that the Turing Test may be within reach.
It may not matter whether a new species (an artificially intelligent,
self-conscious machine or robot) is created or a hybrid between humans and
machines develops. If we get to the point where we can naturally communicate a
problem to a computer and rely on that computer to analyze the problem and give
us a reasoned solution, or work side by side with us in reviewing the best course
of action, we will have greatly expanded our capabilities.
Communication with the Brain
In order to take advantage of developments in artificial intelligence to
improve the capabilities of humans, the computer-human interface must be
improved. Initially we interfaced by trying to talk the computer's language, through
punch cards or a command-line interface such as DOS. The adoption of icons to
simplify a large number of common instructions was an improvement, but recently
we have focused on getting computers to understand us in our language. Even so,
the interface is still extremely slow. What is needed is direct communication with
the brain. Is that possible?

In May 2012, MIT announced the implanting of a chip directly into a woman's
brain that permitted her to control a robotic arm. Neurological impulses were sent
to a computer that interpreted the pulses to control the robotic arm. This
technology has subsequently developed sufficiently to permit the direct control of
prosthetic devices. This is possible because the brain is electrical: both its inputs
and its outputs are information coded in electrical impulses.
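
Because the signals are electrical, decoding them is, at its core, a mapping from
recorded firing rates to movement commands. The linear decoder below, with
invented weights and baselines, is only a schematic of that mapping; clinical
systems calibrate far richer decoders to each individual patient.

    # Schematic neural decoder for a motor prosthesis: electrode channels
    # report firing rates, and a linear map turns deviations from baseline
    # into a 2-D arm velocity. All numbers here are invented for illustration.

    import numpy as np

    # One row per output axis (x, y), one column per recorded channel.
    W = np.array([[0.8, -0.2,  0.1, 0.0],
                  [0.1,  0.5, -0.4, 0.3]])
    baseline = np.array([10.0, 12.0, 8.0, 15.0])   # resting firing rates (Hz)

    def decode_velocity(firing_rates: np.ndarray) -> np.ndarray:
        """Map deviations from baseline firing to an (x, y) velocity command."""
        return W @ (firing_rates - baseline)

    # One moment of recorded activity: channels 2 and 4 fire above baseline.
    rates = np.array([10.0, 20.0, 8.0, 25.0])
    print(decode_velocity(rates))   # -> [-1.6  7.], sent to the robotic arm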
In June 2014, Ohio State University's Wexner Medical Center announced that
it had implanted a chip in the brain of a patient that gave him control over his arm,
which had been paralyzed for four years. This was a major development in moving
from control of a mechanical device to control of the patient's own limb.
These developments deal with getting information from the brain. What
about the other direction? Can we deliver information directly to the brain in a
form that can be understood? On September 11, 2015, DARPA (the Defense
Advanced Research Projects Agency) announced that it had successfully connected
a prosthetic hand to both the motor cortex and the sensory cortex of the brain,
giving the person control over a prosthetic device with real-time sensory feedback.
DARPA also has other current projects designed to create a closed-loop direct
interface to the brain [6].
Clinical work has also been done in the area of restoring eyesight. Trials to
date have enabled blind people to see light sources and high-contrast edges.
Sheila Nirenberg of Cornell University, a recipient of a "genius grant" from the
MacArthur Foundation, has developed algorithms that mirror the coded information
that the eyes send to the brain [7]. The front end of the visual system can be
replaced by photographic technology (which has been expanded to use more than
the visual spectrum to collect information) and, if we know the code, this
information can be communicated directly to the brain. She has successfully tested
the system in animals and is working on human applications.
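
The idea of "knowing the code" can be illustrated with the textbook
linear-nonlinear-Poisson model of retinal ganglion cells: filter the image with a
receptive field, convert the result into a firing rate, and emit spikes. The sketch
below is only that generic model with invented constants, a hedged stand-in for
Nirenberg's far more elaborate encoder.

    # Generic retinal-style encoder (linear-nonlinear-Poisson): filter the
    # image, rectify into a firing rate, then draw spikes. The filter and
    # gain are illustrative, not parameters from Nirenberg's work.

    import numpy as np

    rng = np.random.default_rng(0)

    def center_surround(size: int = 7) -> np.ndarray:
        """Crude receptive field: excited center, weakly inhibitory surround."""
        f = -np.ones((size, size)) / (size * size)
        f[size // 2, size // 2] += 1.0
        return f

    def encode(image: np.ndarray, gain: float = 50.0) -> int:
        """Image patch -> spike count: linear filter, rectification, Poisson."""
        drive = float(np.sum(center_surround(image.shape[0]) * image))
        rate = gain * max(drive, 0.0)    # firing rates cannot be negative
        return int(rng.poisson(rate))    # stochastic spiking, like real neurons

    spot = np.zeros((7, 7)); spot[3, 3] = 1.0
    print(encode(spot))             # strong response to a centered bright spot
    print(encode(np.ones((7, 7))))  # uniform illumination cancels out: ~0 spikes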
All of these are first steps. They don't necessarily work well or
smoothly in all circumstances. However, we are developing ways to send
information directly to the brain and to use neurological impulses to get
instructions directly from the brain. How long will it be before we consider the
interfaces we currently use with computers insufferably slow and archaic? Or,
alternatively, before we implant a chip in a person's head with memory and
computational power directly accessible by the brain?
Conclusion
As one author has put it, when does all of this change from "wow" to "yuck" [8]?
We already know some limits in other areas. People are leery of genetically
modified foods. While we are comfortable with gene therapies that address
gene-based diseases, are we ready for designer babies? Is it all right to clone
humans? Is it acceptable to use artificially grown replacement organs, even if
grown from harvested human cells? Is it better or worse to use organs harvested
from animals?
Nanotechnology presents its own issues. Will we accept implants or drugs
that increase our physical and mental abilities? If so, will we create a privileged
class that has access to that technology and is bigger, stronger, faster, and smarter
than those who do not? How do we treat the lesser, non-improved humans? Are
their rights not as important? Can we extend the human lifespan without people
becoming jaded and losing interest in life? Will we really improve human
performance in fundamental ways? If so, will we continue to be human, or become
some combination of human and machine? Which advances count as improvements?

What about regulation? Nanotechnology is advancing so quickly and on so
many fronts that it is impossible to keep up, which makes effective regulation
difficult. While the EPA has determined that carbon nanotubes are new chemicals
requiring advance notice prior to manufacturing, that says very little about overall
control of the ongoing significant developments. We need to think about more
comprehensive regulations in advance, since by the time public debate is initiated
on one technology, an even more radical new technology comes on line [3].
I recently met a senior physician at the National Institutes of Health. He no
longer practices medicine, but spends his time on an internal team working on the
legal, moral, and ethical issues presented by medical advances. His bottom line was
that physicians are caregivers, not creators. What if we create new life forms
(bacteria that eat hydrocarbons to clean up environmental spills) or create life
itself? Some commentators have predicted the end of religion as technological
advances continue. Is being smart enough, or do we also need to focus on
developing an ethical framework and new forms of governance for social
interactions?
It appears that these life-changing developments will continue to come, and
at an accelerating pace. There is currently almost no public discourse about the
ethical questions they present. In general, is it acceptable to improve on the
human hardware (the body) and software (the brain), in essence replacing
evolution, which happens very slowly, with scientific advances and ongoing updates
to our abilities [9]? Who decides which advances are desirable? We should all be
thoughtful about how we deal with the issues that will ultimately be presented.

[1] Wolbring, Gregor. "Why NBIC? Why Human Performance Enhancement?"
Innovation: The European Journal of Social Sciences 21.1 (2008): 25-40. Academic
Search Premier. Web. 3 Dec. 2015.
[2] Bainbridge, William Sims. "Converging Technologies and Human Destiny." Journal
of Medicine & Philosophy 32.3 (2007): 197-216. Academic Search Premier. Web. 3
Dec. 2015.
[3] Paradise, Jordan, Susan M. Wolf, Jennifer Kuzma, Gurumurthy Ramachandran,
and Efrosini Kokkoli. "The Challenge of Developing Oversight Approaches to
Nanobiotechnology." Journal of Law, Medicine & Ethics, Winter 2009.
[4] Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology.
Penguin Publishing Group, 2005. ISBN 978-0-143-03788-0.
[5] Buonomano, Dean. Brain Bugs: How the Brain's Flaws Shape Our Lives. W.W.
Norton & Company, 2011. ISBN 978-0-393-07602-8.
[6] Press release. "Neurotechnology Provides Near-Natural Sense of Touch."
Defense Advanced Research Projects Agency, September 11, 2015.
[7] Nirenberg, Sheila. "A Prosthetic Eye to Treat Blindness." TEDMED, filmed
October 2011. Available at
https://www.ted.com/talks/sheila_nirenberg_a_prosthetic_eye_to_treat_blindness?
language=en
[8] Kulinowski, Kristen. "Nanotechnology: From 'Wow' to 'Yuck'?" Bulletin of Science,
Technology & Society 24.1 (2004): 13-20. Academic Search Premier. Web. 3 Dec.
2015.
[9] Khushf, George. "Open Questions in the Ethics of Convergence." Journal of
Medicine & Philosophy 32.3 (2007): 299-310. Academic Search Premier. Web. 3 Dec.
2015.
