INTRODUCTION: On Having a Mind of Your Own
TERRY DARTNALL
Griffith University
Since the French Enlightenment of the Eighteenth Century there has been a
growing belief that people are machines. In 1745, the French physician and philoso-
pher La Mettrie published The Natural History of the Soul. This brought him such
official censure that he exiled himself in Holland. Two years later he published
L'Homme Machine (Man a Machine), whose materialistic contents aroused even
the liberal-minded Dutch to angry protest. Two hundred years ago, then, the belief
that people are machines was bold and dangerous. Today it is so deeply rooted in
our culture that we find it difficult to imagine what else people might be.
In the Nineteenth Century, Lady Lovelace, the friend and colleague of Charles
Babbage, expressed the opinion that computers can have nothing to do with creativity because they "have no pretensions whatever to originate anything". This,
too, has taken root in the culture, so that the notion of 'machine creativity' is seen
as paradoxical or contradictory. Consequently we tend to believe, at least in our
uncritical moments, that people are machines, but also that machines cannot do
something which is characteristically human.
In this introduction I will tease out some of the implications of these conflicting
intuitions. I will first try to show why creativity is a key issue for AI and cognitive
science, and will then ask whether 'machine creativity' is, in fact, a paradoxical
concept. I will finish by outlining the papers in this part of the book.
with our ability to have thoughts at all-in other words, to have minds of our own.
This is discussed in more detail in the papers by Dartnall and Clark in this section
of the book.1
Finally, there is this. In less than 40 years AI has gone from believing that
intelligence is a general feature of the mind, to recognising that it requires domain-
specific knowledge, to appreciating the need for common-sense knowledge, and to
at least beginning to recognise that this is grounded in skills and abilities. It has
discovered that intelligence is not an independent cognitive quality, but is related
in some intimate way to general, evolved abilities: intelligence is found in our
capacity to understand language, to find Coke cans, and to recognise 'A'. All this is
fine and good, not least because it forces us to appreciate abilities that have taken
a long time to evolve (such as recognising Coke cans). But there is a danger, with
such a broad notion of intelligence, that AI will lose its way. (What, now, is the
quarry?)
A concern with creativity restores the focus, for creativity is about things that
we are told computers cannot do-and yet which they must be able to do if they
are to be intelligent. Normal science does not test theories by looking at easy cases,
since it is always possible to find data that will fit. A crucial part of the methodology
of normal science is attempting to falsify theories by exposing them to maximum
risk-a point that has been made famous by Popper. In artificial intelligence we
know some of the things that machines are good at. They can perform rapid search,
and can do so far more efficiently than people. Consequently they can do things
such as play chess and backgammon. They are adept at formulating and deploying
complex bodies of rules, so that they can diagnose diseases of the blood and
configure computer systems. But the time has come to put AI to the test by looking
at the things that it is claimed computers cannot do. Creativity provides us with the
acid test.
In her paper in this volume, and in more detail in her book (Boden, 1990/1992;
see also Boden, 1993; Boden, 1994), Margaret Boden addresses Lady Lovelace's
claim that computers cannot have anything to do with creativity because they
cannot originate anything. Boden points out that the matter is not so simple, and
distinguishes between four 'Lovelace questions':
1. can computers help us to understand human creativity?
2. could computers (now or in the future) do things which at least appear to be creative?
3. could computers appear to recognize creativity?
4. can computers really be creative (as opposed to producing apparently creative performances whose originality is really due to the human programmer)?
1 Both authors draw on the work of the developmental psychologist Annette Karmiloff-Smith.
We can ask other questions. We can ask, for instance, whether computers can
enhance human creativity. If they can, should we develop them as creativity-
enhancing tools, in which the human operator is kept in the loop, rather than
trying to produce autonomously creative machines? The last section of this volume
addresses this issue.
Boden is principally concerned with the first Lovelace question, to which her
answer is a clear 'yes': computational concepts and theories (rather than computers
as such) can help us to specify the conceptual structures and processes in people's
minds.
Her answer to the fourth question ('Can computers be creative?') is qualified.
She believes that computers can do things that appear to be creative, but whether
we regard them as actually creative will depend on a moral/political question: are
we prepared to allow them 'a moral and intellectual respect comparable with the
respect we feel for fellow human beings'? (1990/1992: 11).
It is certainly true that something can appear to be creative without actually
being so. We might judge a picture or a poem to be creative but change our minds
when we discover that it was produced by a random process. If, for instance, we
discovered that the paintings on the Sistine Chapel ceiling resulted from an orgy of
paint throwing, or that Hamlet was written by monkeys and typewriters, we would
retract our judgements about their creativity. We would do so because we would
have discovered that these things were not produced in the right kind of way: the
aetiology, the causal history, would be wrong. In the same way a computer could
appear to be creative without actually being so-if it used only random processes,
for example.
The fact that we take aetiology so seriously indicates that we believe that it is
the underlying process-how the product was generated-that matters. Ironically
we have almost no understanding of this process, but it seems that we put our faith
in the fact that it is there. What, then, of moral/political decisions? As Boden says
(1990/92: 283-284), we are swayed by the appearance of things, and will accord
intelligence and rights to warm, cuddly creatures whilst withholding them from
structures of silicon and steel. Indeed, this is so. But I believe that we make these
judgements because we believe that warm, cuddly things are similar to ourselves,
and therefore that the right sorts of processes are going on. If, as Boden claims, it
is the computational processes, rather than the hardware, that matter, then these
processes should yield intelligence and creativity, whether they are implemented in
carbon or silicon.
There is a huge literature on this general issue, and I cannot go into it here.
Since Boden is principally concerned with the way in which computers can help
us to understand creativity, I will briefly explore the fourth Lovelace question: Can
computers really be creative?
The argument under consideration is that creativity cannot consist in following
instructions; computers only follow instructions; therefore computers cannot be
creative. This is valid, but the first premiss is false, for we sometimes instruct
people to be creative. For instance, Pope Julius II instructed Michelangelo to be
creative when he painted the Sistine Chapel ceiling. Therefore it is possible to be
creative and to be following instructions.
"But," the objection goes, "Julius II only gave Michaelangelo very general
instructions, and left the rest to him. In contrast, every single thing that a computer
does is something that it was told to do. Suppose that Michaelangelo had been
instructed about every brushstroke that he made!"
The argument is now: nothing that is instructed in its every action can be creative;
computers are instructed in every action that they perform; therefore computers
cannot be creative. Now the second premiss is false, for we do not instruct computers
in every action that they perform. This would require us to give them millions of
instructions per second.
"What I meant was that everything that they do follows from instructions that
we give them."
But what does this mean? If it means that the machine's performance literally
follows from its instructions then it is false, for if we wrote all the instructions on
a piece of paper, nothing at all would happen.
Presumably it means that the computer is built, or designed, to respond in a
predictable way to its instructions. So we have a revised premiss: everything that
a computer does results from the predictable, designed response of its innards to
the instructions that we give it.
But this still isn't clear. Is the complaint that computers are predictable-that we
can predict the output given the input plus an exhaustive description of the innards?
Or is it that we have designed the innards, so that their ostensible creativity is really
ours?
Let us consider the second interpretation first.
We can 'give someone a good education', but it is still their education. We can
'teach them to think for themselves', and the fact that we have taught them does
not detract from their ability. Indeed, we can 'give someone a good mind' and (as
we say) 'give them a mind of their own'. Haugeland (1985: 10) says "Why should
an entity's potential for inventiveness be determined by its ancestry (like some
hereditary title) and not by its own manifest competence?" If we were to discover
that we were created, would this mean that we are not creative? Surely not. And
would we be more inclined to assign creativity to a computer that, rather than having
been created, had come together by chance?! Systems cannot create themselves,
so if we deny creativity to systems that have been produced (by evolution or by
people), we will have to say that nothing is creative, and that there is no such thing
as creativity.
What about the other interpretation, that computers are predictable input-output
devices? I suspect that there is circularity here, and that the intuition is that if
creativity was predictable, then a machine could do it-and machines cannot be
creative!
This suspicion aside, why shouldn't creativity be predictable? Julius II could
have predicted that Michelangelo would creatively paint the Sistine Chapel ceiling.
"But that was only predicting that he would be creative. It was not predicting
what he would create."
Here we need Margaret Boden's distinction between P-creativity and H-creativity
(1990/92; this volume). Something is P-creative ('psychologically creative') if it
is fundamentally novel for the individual, and it is H-creative ('historically
creative') if it is fundamentally novel with respect to the whole of human
history.
Now, as Boden says, we can predict P-creativity. For instance, we can put a
child into an environment in which we know that she will discover something, or
solve a problem, in a P-creative way (Boden, 1990/92: 37).
What about H-creativity? We can predict that someone will be H-creative: we
lock a genius in a broom cupboard and tell her that she cannot come out until she
has been H-creative. We cannot, it is true, predict what she will H-create-but let
us be clear why we cannot do this. We cannot do it because then, in a sense, we
would have created it first. It is not that what she will come up with is in principle
unpredictable. There is no magic going on inside the broom cupboard. There is just the trivial
truth that, by definition, H-creative thoughts haven't been thought before, so that if
you have the thought before Sally, then her thought isn't H-creative.
Notice, too, that the claim that we are considering is that something cannot
be creative if it is predictable in principle. Now, something can be predictable in
principle without having been predicted in fact. If it was not predicted in fact then it
may be fundamentally new with respect to human history. Consequently, something
can be H-creative and predictable in principle.
These points are worth labouring. Suppose we discover that Michelangelo was
an intelligent machine (built by a Renaissance genius called 'Michelangelo'-or,
to avoid confusion, plain 'Mike'). And let us assume that Mike knew all there was
to know about Michelangelo's innards, so that, in principle, he could predict the
output for any given input. He did not, however, bother to make the predictions (he
was too busy converting lead into gold). "Oh, very nice," he said, when he looked
at the Sistine Chapel ceiling, and added, sotto voce, "Be careful what you say to
the Pope."
First, the fact that Mike gave Michelangelo his ability does not detract from
Michelangelo having the ability. Second, it doesn't detract from the qualities of the
Sistine Chapel ceiling (always assuming, of course, that Mike gave Michelangelo
the right sorts of innards-not random ones, for instance). Finally, because Mike
hadn't in fact predicted what the outcome would be, Michelangelo's work was
H-creative. It was fundamentally new with respect to human history.
4. But yet...
There is, then, no obvious reason why computers cannot be creative, and no obvious
reason why they cannot have minds of their own. The final argument, that creativity
is not predictable, is little more than a trick of the light. (I say that there is no obvious
reason. There may be unobvious ones!)
Nevertheless, we are bound to ask what it would look like for a computer to be
creative-what it would be (in very general terms) for a machine to have a mind of
its own.
We can get a conceptual overview by looking at a similar, but better understood,
issue: the problem of free will and determinism.
In 'On Giving Libertarians What They Want to Say', Dennett (1978: 296) says that
the compatibilist position "looks terribly mechanical and inevitable, and seems to
leave no room for creativity or genius." It allows no distinction to be drawn, he
says, between authorship and mere implication in a causal chain.
In an effort to give libertarians something like what they want, he first draws the
sting out of the notion of randomness. In common parlance this has connotations
of pointlessness or meaninglessness, but the point that the libertarian wishes to
make is exactly that random actions (actions that are random in the sense of being
undetermined) need not be meaningless.
Dennett invites us to imagine someone who has to make a decision in a hurry.
She does not have time to think of all the considerations, and has to make her
mind up immediately. Now, as Dennett says, "it just might be the case that exactly
which considerations occur to [her] in such circumstances is to some degree strictly
undetermined." (1978: 294). Some considerations will occur to her, but others will
not. If we knew everything about her personality we could only predict what she
would do if such and such considerations occurred to her. What she will do in fact
is strictly indeterminate.
This leads Dennett to suggest that two factors are involved in decision making.
There is a "consideration-generator whose output is to some degree undetermined"
(p. 295). This produces a series of considerations that are either rejected as irrelevant
by the agent, or selected as having bearing on the reasoning process. He quotes
Valery as saying:
It takes two to invent anything. The one makes up combinations; the other
one chooses, recognises what he wishes and what is important to him in the
mass of things which the former has imparted to him. What we call genius is
much less the work of the first one than the readiness of the second one to grasp
the value of what has been laid before him and to choose it.2
This two-stage model maps perfectly onto what is almost a standard account
of creativity: Wallas (1926), taking his cue from Poincaré and others, said that
creativity proceeds in the four stages of preparation, incubation, illumination and
verification. The account has been criticised, but mainly on the grounds that an
incubation period may not be necessary (Weisberg, 1986). As far as I know there is
no disagreement with the claim that creativity involves generation and evaluation.
More recent research, in fact, addresses the generative-evaluative cycle, in which
a product is generated, evaluated, new goals are set, and the cycle is repeated (see
the sections on Design and the Enhancement of Human Creativity in this volume).
Dennett, of course, is suggesting that we locate a randomiser in the generative
component, so that randomness, rather than giving us chaos, actually works for
2 Originally quoted by Jacques Hadamard (1949: 30). Dennett discusses the implications of Valery's
claim in Dennett (1978, ch. 5).
creativity. This will have to be at a low level, or we will have the chaos that the
compatibilist fears.
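Dennett's two-stage picture can be given a toy rendering in code. This is purely an illustrative sketch, not anything drawn from Dennett's text: the function names, the use of a seeded pseudo-random generator to stand in for the 'undetermined' consideration-generator, and the numeric scoring are all my own assumptions.

```python
import random

def generate_considerations(pool, k, rng):
    # Undetermined 'consideration-generator': which candidates happen
    # to surface is left to chance (modelled here by pseudo-randomness).
    return rng.sample(pool, k)

def select_relevant(considerations, is_relevant):
    # Deterministic second stage: the agent rejects irrelevant
    # considerations and keeps those with a bearing on the decision.
    return [c for c in considerations if is_relevant(c)]

def decide(pool, is_relevant, score, rounds=5, seed=0):
    # Generate-and-select loop: randomness at a low level,
    # reasoned evaluation on top of it.
    rng = random.Random(seed)
    kept = []
    for _ in range(rounds):
        kept += select_relevant(generate_considerations(pool, 3, rng), is_relevant)
    return max(kept, key=score) if kept else None
```

The point of the sketch is structural: the randomness is confined to which considerations come up, while the selection and scoring that follow are fully deliberate, so chance works for, not against, the reasoning process.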
In the Epilogue to this volume, Douglas Hofstadter explains how low level
randomness works for creativity in the COPYCAT program. Randomness occurs
in the low-level act of 'micro-exploration'. The purpose of micro-exploration is to
efficiently explore the vast, foggy world of possibilities lying ahead without get-
ting bogged down in a combinatorial explosion; for this purpose, randomness,
being equivalent to non-biasedness, is the most efficient method.
These, then, are two general ways, drawn in the broadest brush-strokes, in which
machines might be creative. They might be deterministic, but have rich internal
processes that stamp a distinctive hallmark on their products. Or they might include
a randomising element at a low level of description. In the latter case, of course, we
would still need complex cognitive processes to account for style and 'personality':
Hofstadter talks about COPYCAT's 'personality' and 'aesthetic taste'.
7. Overview of papers
I will begin with Margaret Boden's paper, 'Creativity and Computers', because the
story starts there.
Boden argues that creativity is the mapping, exploration and transformation of
conceptual spaces (shortened to 'METCS' in Boden, 1994). A conceptual space is
a space of structures generable, that is, defined, by the rules of a generative system.
We map it and explore it by combining its elements according to the rules-thereby
discovering structures in the space. We transform it by mapping and exploring
it, and then modifying the rules to produce different types of structures. Boden
calls the first kind of creativity 'improbabilist' and the second kind 'impossibilist'
(Boden, 1994).
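Boden's contrast between exploring and transforming a conceptual space can be sketched in code. The 'space' below (strings over a two-letter alphabet, with a single constraint playing the role of the generative rule) is entirely my own toy example, not Boden's formalism: exploring enumerates the structures the rules allow, and transforming changes a rule so that previously impossible structures become generable.

```python
from itertools import product

def explore(alphabet, max_len, constraint):
    # Map/explore the conceptual space: enumerate every structure
    # that the generative rules (alphabet plus constraint) allow.
    space = []
    for n in range(1, max_len + 1):
        for letters in product(alphabet, repeat=n):
            word = ''.join(letters)
            if constraint(word):
                space.append(word)
    return space

def old_rule(word):
    # A toy generative rule: 'a' may never follow 'b'. Strings such as
    # 'ba' are impossible relative to this framework.
    return 'ba' not in word

def new_rule(word):
    # Transforming the space: dropping the constraint makes the
    # previously impossible structures generable.
    return True
```

On this picture, improbabilist creativity corresponds to finding an unvisited string that `old_rule` already permits, while impossibilist creativity corresponds to the move from `old_rule` to `new_rule`, which redraws the boundaries of what can be generated at all.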
Boden argues for this by first considering the notion that creativity involves
a novel combination of old ideas. She accepts that creativity involves novelty,
but observes that such accounts do not tell us which combinations are novel,
nor how novel combinations come about. Her main, and related, criticism is that
many creative ideas not only did not occur before, but could not have occurred
before. Previous thinking was trapped in a framework, relative to which new
ideas were impossible. Consequently, creativity sometimes requires us to 'think the
impossible', to have ideas that are impossible in the present framework: we need
to 'break out' of a conceptual space by changing the rules that define it. Kekulé,
for instance, broke out of the space defined by the rules of nineteenth-century
chemistry, and opened up the new space of aromatic chemistry. We 'break out',
says Boden, by using heuristics (that is, higher-level rules) to modify the rules that
define the space.
Boden argues that AI provides us with concepts and theories that help us to
study creativity, and she considers computer programs in both the Arts and the
Sciences. She concludes by addressing some philosophical puzzles associated with
creativity.
In their papers Terry Dartnall and Andy Clark discuss creativity in the context of
Annette Karmiloff-Smith's Representational Redescription Hypothesis (RRH). The
RRH is a theory of cognitive development that tries to explain how the mind goes
beyond domain-specific constraints. It maintains that we are 'driven from within' to
redescribe our procedural knowledge as declarative knowledge, and to continue to
redescribe it at increasingly abstract levels. Skilled performers, such as beavers, are
flexible within a domain, but they cannot progress beyond what Karmiloff-Smith
calls 'behavioural mastery'. Creative cognizers, on the other hand, redescribe their
implicit knowledge as accessible structures (as thoughts), which they can examine
and amend. Thus there is an intimate relationship between our ability to be creative
and our ability to have a mental life.
Dartnall relates this to the way in which creativity 'gets something out of
nothing'. When we have an idea, or solve a problem, or write a novel we get
something out of nothing inasmuch as there is no significant sense in which these
things are combinations of previously existing elements. Consequently, combination
theories-even sophisticated ones such as Boden's-do not tell the whole
story. In the final analysis they may not tell much of it, for they only provide us
with well-formedness conditions on ideas and artefacts.
To get at the broader picture, Dartnall turns to the question of intentionality-of
how our thoughts and products can be about anything. Following an Information
Theoretic line, he argues that we first acquire intentional states by being causally
situated in the world. We do not have access to these states, which account for
our implicit, procedural knowledge. Later we redescribe this knowledge as
explicit, accessible structures (as thoughts) that have intentionality because they are
redescriptions of states that we acquired by being causally situated in the world.
Viewed thus, creativity is the struggle to give expression to what it is to be reflective
creatures causally situated in the world.
Clark relates redescription, creativity and the mental life to connectionism. First
order connectionist systems, such as NETtalk, are like the beaver: they are skilled
in a domain, but are unable to present their knowledge to themselves as accessible
structures. Consequently, they cannot be creative. How, then, is interdomain
flexibility to be achieved? Clark discusses Adrian Cussins's model of cognition, which
aims to show how connectionist systems could achieve an increasingly general and
flexible view of the world. He compares this with the evidence that in the later
phases of development we need a symbolic Language of Thought-and he records
an open verdict.
The RRH does not tell us how we redescribe, that is, what mechanisms underlie
representational redescription. In his paper, Donald Peterson addresses this issue in
a more traditional problem-solving framework. He observes that a problem might
be difficult, or even impossible, to solve in one representation, but quite simple to
solve when redescribed into another one. He asks whether redescription is just a
matter of re-representing the same content in a different form.

Lucas argued that computers, but not people, are limited to strictly rule-bound
behaviour. Priest
spells out Lucas' argument and shows that it can avoid a number of problems. But
he also argues that both the original paper and subsequent discussions are flawed
by confusions over the concept of what it is to give a proof-and he maintains that
the argument fails when this concept is clarified.
We finish with Richard McDonough's spirited attack on cognitive science. This
paper is not for the faint-hearted, though the present version has been heavily
tranquillized.
McDonough is not attacking AI as such, nor is he directly attacking the concept
of machine creativity. He is attacking the idea that people are machines (the idea of
l'homme machine that we started out with) and his thesis is that we are creative in
a way that machines cannot be, for our intelligent products cannot be understood
as the results of mechanical processes.
When we say that something is a machine (says McDonough) we mean that it
is possible to explain its behaviour by causally tracing it to the machine's inner
states. The machine does not have to be 'causally closed' to the environment, but
the data received from the outside must be 'processed' by the internal mechanism,
so that the machine's behaviour results from a change in the state of the internal
mechanism.
Now, machine behaviour can be narrowly described and it can be broadly de-
scribed. A narrow description only refers to the internal workings of the machine,
but a broad description depends on external considerations. Describing the be-
haviour of my watch only in terms of cogs and gears, for instance, is to give a
narrow description. Describing it as saying that it is 3 o'clock is to provide a broad
description, because it depends on social conventions.
McDonough's point is that narrow description is sufficient to explain mechanical
behaviour: in the case of my watch, for instance, it is sufficient to explain why the
hands move as they do. But it is not sufficient to understand intelligent human
behaviour. To understand intelligent human behaviour we need to refer to the
normative environment. When I sign a cheque, the movement of my fingers is
intelligent because it falls under the description 'signing a cheque'-and cheque-
signing depends on a normative context. The intelligence of my behaviour depends
on the appropriateness of moving my fingers in a certain way in a certain context.
It does not suffice, says McDonough, to explain my intelligent behaviour narrowly
described. The mechanist must explain the intelligence of my narrowly described
behaviour. The claim is that he cannot do this because it requires him to go beyond
a description of the mechanism. It requires him to look at the external environment.
McDonough's paper takes us back to an issue discussed earlier in this intro-
duction: the notion of 'getting something out of nothing'. McDonough claims that
human thought and creativity get something out of nothing, not in Dartnall's sense
that they go beyond the combination and recombination of previously existing
elements, but in the stronger sense that they are not the products of mechanism.
Clark (1993) discusses whether machines can act appropriately in normative
contexts (see especially Part II of his book). I think that the jury is out, and that it
is too soon to record a verdict.
References
Boden, M. A.: 1990, The Creative Mind: Myths and Mechanisms, Weidenfeld & Nicolson, London.
Revised edition, 1992, Cardinal, London.
Boden, M. A.: 1993, Mapping, exploring, transforming, in Dartnall, T. H. (1993).
Boden, M. A.: 1994, summary of Boden (1990/1992), with peer reviews, Behavioural and Brain
Sciences, 17: 3.
Clark, A. C.: 1993, Associative Engines: Connectionism, Concepts and Representational Change,
MIT Press/Bradford Books.
Dartnall, T. H. (ed.): 1993a, Artificial Intelligence & Simulation of Behaviour Quarterly, Special
Issue on AI and Creativity, No. 85, Autumn.
Dartnall, T. H.: 1993b, AI, Creativity, Representational Redescription, Intentionality, Mental Life: An
Emerging Picture, in Dartnall, T. H., et al. (1993).
Dartnall, T. H.: 1994a, Creativity, combination and cognition, peer review of Boden (1990/1992),
Behavioural and Brain Sciences, 17: 3.
Dartnall, T. H.: 1994b, Redescribing Redescription, peer review of Karmiloff-Smith (1992),
Behavioural and Brain Sciences, 17: 4.
Dartnall, T. H., Levinson, R., Kim, S., Subramaniam, D., Sudweeks, F. (eds): 1993, AAAI Technical
Report on Artificial Intelligence and Creativity, AAAI Press.
Dennett, D.: 1978, Brainstorms: Philosophical Essays on Mind and Psychology, Harvester Press,
Brighton.
Dreyfus, H. L.: 1979, What Computers Can't Do: The Limits of Artificial Intelligence, 2nd edition,
Harper and Row, New York.
Fetzer, J. H. (ed.): 1988, Aspects of Artificial Intelligence, Kluwer Academic Publishers, Dordrecht.
Hadamard, J.: 1949, The Psychology of Invention in the Mathematical Field, Princeton University
Press.
Haugeland, J.: 1985, Artificial Intelligence: The Very Idea, MIT Press/Bradford Books, Cambridge, Mass.
Hempel, C. G.: 1985, Thoughts on the limitations of discovery by computer, in Schaffner, K. F.
(1985).
Karmiloff-Smith, A.: 1992, Beyond Modularity: A Developmental Perspective on Cognitive Science,
MIT Press/Bradford Books, Cambridge, Mass.
Sacks, O.: 1985, The Man Who Mistook His Wife For a Hat, Picador, London.
Schaffner, K. F. (ed.): 1985, Logic of Discovery and Diagnosis in Medicine, chapter 5, University of
California Press, Berkeley.
Scheines, R.: 1988, Automating creativity, in Fetzer, J. H. (1988).
Wallas, G.: 1926, The Art of Thought, Jonathan Cape, London.
Weisberg, R.: 1986, Creativity: Genius and Other Myths, W. H. Freeman.