
The history of Artificial Intelligence (A.I. or AI) begins in ancient societies, primarily in Greek myths. The most famous of these are, perhaps, the myths of Pygmalion and his creation Galatea, and of Pandora. The first was the inspiration for many later stories about an artisan whose creation comes to life and becomes sentient. This idea of playing God has been ubiquitous in literature since before the Common Era, sometimes as the main theme, sometimes as a subplot.
Although the real development of artificial intelligence as we know it today began in the twentieth century, the first steps towards making a thinking machine were taken even before the Christian era. In order to understand AI we need to distinguish three separate problems: the machine itself, its way of communicating with its surroundings, and its train of thought.
Automata [1] are the first machines ever made to communicate with their surroundings, so to speak. In reality these are nothing more than machines which, once started, follow a fixed set of instructions laid down by their maker. They differ from today's machines precisely in that they follow a pattern, while the latter do not: modern-day machines have a way of anticipating a conclusion and changing the flow of operations, rather than just following a set of instructions.
This set of instructions is a machine's train of thought, the way a machine thinks; and it can think and communicate with its surroundings thanks to its language, the set of possible actions it can perform.
The history of communicating machines has unfolded through the ages, as more and more sophisticated machines have surfaced. Many of these were merely the boasts of certain scientists, or rather alchemists, but every now and then a real programmable [2] machine turned up.
The story of machines capable of performing complex tasks truly begins with the French scientist Blaise Pascal [3], who in 1642 invented the first mechanical calculator, the Pascaline, essentially the first digital calculating machine. It was not really capable of complex computing: initially it could only perform simple mathematical operations like addition and subtraction, although its use could be extended to multiplication and division by repetition.
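The idea of widening addition into multiplication by repetition can be sketched in a few lines. This is a modern illustration of the principle, not a description of the Pascaline's actual mechanism:

```python
def multiply_by_repeated_addition(a, b):
    """Multiply two non-negative integers using only addition,
    repeating the addition b times, much as a Pascaline operator would."""
    total = 0
    for _ in range(b):
        total += a
    return total

print(multiply_by_repeated_addition(7, 6))  # 42
```

Division can be treated the same way, as repeated subtraction while counting the steps.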
This was not a machine that could do things on its own, but it was a first step towards machines that could, because the revolution that started with this simple calculator showed that people were willing to let machines do tiring jobs instead. In a sense this invention is an early start of the industrial revolution, because since the Pascaline there have been numerous calculating machines, and
this development of calculating machines culminated in 1971 with the implementation of a microprocessor in the Busicom calculator. This trend continued, and microprocessors are still the core of every computer.

[1] An automaton is a machine designed to follow a predetermined set of instructions.
[2] Programming is, in a way, giving instructions to a machine in order to achieve a certain goal.
[3] Blaise Pascal (1623–1662), French mathematician, physicist, and inventor.
But from the Pascaline to the Busicom calculator, and later on to better computers, it was a long way, and one war and one man played a key part in it.
In 1939 the British government funded a project to crack the supposedly unbreakable German encryption machine, the Enigma. The project assembled some of the finest British minds at Bletchley Park (Britain's codebreaking centre), including Alan Turing, an aspiring mathematician from the University of Cambridge. For a time Turing led one of the teams, the one responsible for German naval cryptanalysis. During that time he devised a number of techniques for breaking German ciphers and, among other things, found a way to deduce the settings of the Enigma machine.
The Enigma machines were a series of electro-mechanical rotor cipher machines. They worked in a very complex way, changing the electrical circuit after every letter typed. The outcome was a scrambled text, and the only way to recover the original message was to run the scrambled text through an Enigma set to the same initial settings, which changed every day. Since the Enigma had 26 letters, three rotors (chosen from a set that later grew to eight), each with 26 contacts, a reflector, and a plugboard, the number of different possible settings is approximately 159 quintillion.
The reason the Enigma is so important in computer and AI history is that Alan Turing, with help from his team, built an enormous electrical machine, consisting of thousands of electrical circuits, which could, given a set of numerical instructions, solve a problem. Compared with today's computers it was rather rudimentary, because it was slow and could only solve numerical problems, but that hardly mattered, since it solved those problems many times faster than any human ever could.
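The often-quoted settings count can be reproduced with a little arithmetic. The sketch below assumes the common wartime configuration of three rotors chosen from five and ten plugboard cables; the plugboard accounts for the bulk of the figure:

```python
from math import factorial, perm

# Three rotors chosen, in order, from a set of five
rotor_orders = perm(5, 3)                      # 60
# Each of the three rotors can start at any of its 26 positions
rotor_positions = 26 ** 3                      # 17,576
# Ten plugboard cables pairing up 20 of the 26 letters
plugboard = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)

total = rotor_orders * rotor_positions * plugboard
print(f"{total:,}")  # 158,962,555,217,826,360,000 -- about 1.59 * 10**20
```

Roughly 159 quintillion settings, which is why a brute-force search by hand was out of the question.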
Nevertheless, the Bombe, as Turing's machine was called, was indeed a prototype computer, and thus began the Age of Information.
Although the war ended, computers remained for military use only. Commonly referred to as the first computer ever, ENIAC saw the light of day in 1946. The Electronic Numerical Integrator and Computer never really saw the light of day, though, since it was as big as the room it was stored in. ENIAC weighed more than 30 tons, consumed 150 kW of electricity and was entirely handmade. It was primarily used in ballistics, to predict the behaviour of unpowered projectiles, although its first programs included a study of the feasibility of the H-bomb. Programming ENIAC usually took weeks, consisting of planning, turning switches and manipulating cables. An important role in this process was played by punch cards, which also aided the programming and, especially, the debugging process.
When it came to a language any computer would understand, machine language emerged as the best solution. Machine language was derived from formal logic, improved through the years by numerous scientists for use in computer science.

Since machine language is represented by true or false (in syllogisms and mathematical logic), by 1s and 0s in binary, or by charged and uncharged units in computer memory, there had to be a way to translate this language into human language. Initially assembly languages were formed, which resembled machine language in many ways, and later on many higher-level programming languages were introduced.
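The layering described above can be illustrated in a few lines of a modern high-level language, which hides the translation entirely. The same pattern of bits can be read as a number, a character, or binary notation:

```python
# One byte: charged/uncharged memory cells written out as 1s and 0s
bits = "01001000"
value = int(bits, 2)       # the bits interpreted as a binary number
print(value)               # 72
print(chr(value))          # H -- the same bits read as a character code
print(bin(value))          # 0b1001000 -- back to binary notation
```

An assembler performs one small, mechanical step of this translation; a high-level language compiler performs many such steps at once.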

Even before the first computing machine there were fears regarding a machine awakening, essentially the machine becoming aware of itself. The first to seriously consider this question was not a computer scientist but a writer. His name was Karel Capek, and in his play R.U.R. [4] he introduced certain robots [5], artificial biological organisms which can be mistaken for humans. He inspired later science fiction to use this word for any sentient, non-human being that resembled humans in appearance.
Later sci-fi writers and scientists gradually defined the ethics and philosophy behind artificial intelligence. The most notable of these definitions were given by Isaac Asimov (probably the most prominent, and a very prolific, science fiction writer) and Mark Tilden (a notable robotics physicist). Tilden's laws regulate an AI's relation to itself, while Asimov's regard the robot in relation to humans and humanity.
Tilden's laws of robotics:
1. A robot must protect its existence at all costs.
2. A robot must obtain and maintain access to its own power source.
3. A robot must continually search for better power sources.
Asimov's laws of robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to
harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict
with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
In later books, a zeroth law was introduced:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

[4] Rossum's Universal Robots (1920)
[5] The word is probably derived from the Czech words robota (forced labour) and rab (slave).

A robot following Asimov's laws may run into a logical dilemma, since a conflict can arise between the zeroth law and one of the other laws (or all of them, for that matter). To understand this problem we need to perform a few simple thought experiments:
1. What should a robot do if its master ordered it to kill someone?
2. What should a robot do if it has more than one master, and their orders are in conflict?
3. What should a robot do if some part of humanity is in danger, but the only way to prevent the danger is to kill one or more humans?
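The dilemmas above come down to rule priority. A toy sketch (entirely hypothetical, and a drastic simplification of the real ethical questions) shows how the laws can be modelled as checks run in priority order:

```python
# Asimov's laws as predicates, checked from highest priority to lowest
# (the zeroth law outranks the first, the first outranks the second, ...).
LAWS = [
    ("zeroth", lambda a: not a.get("harms_humanity", False)),
    ("first",  lambda a: not a.get("harms_human", False)),
    ("second", lambda a: not a.get("disobeys_order", False)),
]

def evaluate(action):
    """Return the highest-priority law the action violates, if any."""
    for name, permits in LAWS:
        if not permits(action):
            return name
    return "permitted"

# Thought experiment 1: a master orders the robot to kill someone.
# Obeying violates the First Law; refusing violates only the Second.
print(evaluate({"harms_human": True}))     # first
print(evaluate({"disobeys_order": True}))  # second
```

Because "second" sits lower in the priority order than "first", a robot comparing the two outcomes would refuse the order, which is exactly the resolution Asimov's ordering intends; the harder cases arise when both options violate laws of the same rank.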
These questions are not easy to answer, even for a human, let alone a robot. But considering that we have yet to make a sentient, self-aware AI program, the EPSRC [6] and AHRC [7] devised a new set of rules in 2011:
1. Robots should not be designed solely or primarily to kill or harm humans.
2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
3. Robots should be designed in ways that assure their safety and security.
4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an
emotional response or dependency. It should always be possible to tell a robot from a human.
5. It should always be possible to find out who is legally responsible for a robot.

And so, the problems of how robots should function were effectively solved, for the time being. For the time being, because there are still open questions regarding AI and Artificial General Intelligence [8]:

[6] Engineering and Physical Sciences Research Council
[7] Arts and Humanities Research Council

1. Is artificial general intelligence possible? Can a machine solve any problem that a human being
can solve using intelligence? Or are there hard limits to what a machine can accomplish?
2. Are intelligent machines dangerous? How can we ensure that machines behave ethically and that
they are used ethically?
3. Can a machine have a mind, consciousness and mental states in exactly the same sense that
human beings do? Can a machine be sentient, and thus deserve certain rights? Can a
machine intentionally cause harm?

Another thing to consider, in regard to AGI, is the result of two thought experiments: Turing's test and Mary's room.

Turing's test
In the "standard interpretation" of the Turing test, player C, the interrogator, is given the task of determining which of players A and B is a computer and which is a human. The interrogator is limited to using the responses to written questions to make the determination.
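The setup can be sketched in a few lines. Everything here is invented for illustration: the player functions, the interrogator strategy, and the single-question round are all hypothetical:

```python
import random

def turing_test(interrogator, human, machine, questions):
    """One round of the standard interpretation: C exchanges written
    questions and answers with the hidden players A and B, then guesses
    which label belongs to the machine."""
    players = [human, machine]
    random.shuffle(players)               # hide who is behind which label
    labels = dict(zip("AB", players))
    transcripts = {label: [(q, answer(q)) for q in questions]
                   for label, answer in labels.items()}
    guess = interrogator(transcripts)     # "A" or "B"
    return labels[guess] is machine       # True if C identified the machine

# Hypothetical players: this machine gives itself away immediately.
human_player   = lambda q: "I had toast for breakfast."
machine_player = lambda q: "QUERY NOT UNDERSTOOD"
naive_c = lambda t: next(label for label, qa in t.items()
                         if "NOT UNDERSTOOD" in qa[0][1])
print(turing_test(naive_c, human_player, machine_player,
                  ["What did you eat today?"]))  # True
```

A machine "passes" the test when, over many such rounds, the interrogator can do no better than chance.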

Mary's room
(the knowledge argument)
Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a
black and white room via a black and white television monitor. She specializes in the neurophysiology of
vision and acquires, let us suppose, all the physical information there is to obtain about what goes on
when we see ripe tomatoes, or the sky, and use terms like "red", "blue", and so on. She discovers, for
example, just which wavelength combinations from the sky stimulate the retina, and exactly how this
produces via the central nervous system the contraction of the vocal cords and expulsion of air from the
lungs that results in the uttering of the sentence "The sky is blue". [...] What will happen when Mary is
released from her black and white room or is given a color television monitor? Will she learn anything or
not?

[8] AGI is the intelligence of a (hypothetical) machine capable of effectively solving any problem and successfully performing any intellectual task that a human being can.
