
INTRODUCTION:

ON HAVING A MIND OF YOUR OWN

TERRY DARTNALL
Griffith University

Since the French Enlightenment of the Eighteenth Century there has been a
growing belief that people are machines. In 1745, the French physician and
philosopher La Mettrie published The Natural History of the Soul. This brought him such
official censure that he exiled himself in Holland. Two years later he published
L'Homme Machine (Man A Machine), whose materialistic contents aroused even
the liberal-minded Dutch to angry protest. Two hundred years ago, then, the belief
that people are machines was bold and dangerous. Today it is so deeply rooted in
our culture that we find it difficult to imagine what else people might be.
In the Nineteenth Century, Lady Lovelace, the friend and colleague of Charles
Babbage, expressed the opinion that computers can have nothing to do with
creativity because they "have no pretensions whatever to originate anything". This,
too, has taken root in the culture, so that the notion of 'machine creativity' is seen
as paradoxical or contradictory. Consequently we tend to believe, at least in our
uncritical moments, that people are machines, but also that machines cannot do
something which is characteristically human.
In this introduction I will tease out some of the implications of these conflicting
intuitions. I will first try to show why creativity is a key issue for AI and cognitive
science, and will then ask whether 'machine creativity' is, in fact, a paradoxical
concept. I will finish by outlining the papers in this part of the book.

1. The acid test


Lady Lovelace's intuition about creativity is a common one. We feel that if there is
anything that computers cannot do, and that distinguishes them from people, it is
that they cannot be creative.
In fact there are two intuitions here. The first is about the concept of man as
a machine, and concerns cognitive science. The second is about the possibility of
machines being intelligent, and concerns artificial intelligence.
The first issue is that if machines cannot be creative, then people are not
machines, for people are creative. This would be the end of La Mettrie's vision of Man
as a Machine.
It would also deal a body blow to cognitive science. Cognitive science tries to
provide computational models of the mind, that is, computational simulations of
human cognitive processes. If creativity is not a computational process, it might
still be possible to simulate it computationally, just as it is possible to simulate
T. Dartnall (ed.), Artificial Intelligence and Creativity, 29--42.
© 1994 Kluwer Academic Publishers.

hurricanes or digestive processes without the simulation itself being a hurricane
or a digestive process. That is, it might be possible to have machine models of
human creative processes, even if machines themselves cannot be creative. The
main point here, of course, is that simulation is not duplication. Nevertheless, if
machines cannot be creative the driving force behind cognitive science will have
been lost, for cognitive science is driven by the beliefs that it is cognitive processes
that matter, and that these can be performed by silicon computers as well as by
carbon brains. It is not clear whether cognitive science could survive the loss of its
central metaphor of the mind as a computational device.
The second issue concerns artificial intelligence. If machines cannot be creative
then I doubt that there is any significant sense in which they can be intelligent, for
they will never 'have minds of their own'. I do not mean this in the weak sense that
they will always slavishly do what we tell them, but in the strong sense that they
will never be able to generate their own ideas. And I take it as axiomatic that if they
cannot generate their own ideas they cannot be intelligent.
This issue of 'having a mind of your own' is emerging as a key issue in AI. I
will briefly outline the history.
To begin with, AI embraced the concept of 'general intelligence' traditionally
favoured by psychologists, and focussed on general problem-solving methods such
as heuristic search. Later it became disillusioned with these methods, and recognised
that intelligence requires a great deal of domain-specific knowledge. Knowledge
Representation became a major concern, and dealt with the representation and
manipulation of domain-specific knowledge. Expert Systems developed
domain-oriented knowledge-based packages and took them to the market-place.
AI then discovered that intelligence also requires common-sense, background
knowledge about the world. Intelligent systems, if they are to go beyond narrow,
domain-specific constraints, need to know, for instance, that fluid flows out of tilted
containers, and that objects fall when you drop them.
But common-sense knowledge proved to be very elusive. In all probability this
is not only because there is a great deal of it, and not only because it is difficult to
represent in declarative form. It is probably, too, because it is partly procedural-
because it is based (as Hubert Dreyfus (e.g. 1979) is always telling us) in skills and
abilities. The child endlessly fills and empties cups of water. Why does she do this?
It seems that she is acquiring thousands of water-pouring skills that will eventually
blossom forth as structured, declarative knowledge. Until she can articulate this
knowledge she quite literally knows more than she can say-and even when she
can say it, I doubt that she can say it all.
As she learns to articulate her knowledge, she comes to have thoughts that she
did not have before. These are not recombinations of previously existing elements
or ideas, but declarative structures that have emerged out of skills and abilities.
They are new and original for her-and they are hers, because they arise out of her
interaction with the world. This ability to generate ideas and beliefs effectively ex
nihilo is, I believe, the operational core of creativity, and is intricately interwoven
with our ability to have thoughts at all-in other words, to have minds of our own.
This is discussed in more detail in the papers by Dartnall and Clark in this section
of the book.1
Finally, there is this. In less than 40 years AI has gone from believing that
intelligence is a general feature of the mind, to recognising that it requires domain-
specific knowledge, to appreciating the need for common-sense knowledge, and to
at least beginning to recognise that this is grounded in skills and abilities. It has
discovered that intelligence is not an independent cognitive quality, but is related
in some intimate way to general, evolved abilities: intelligence is found in our
capacity to understand language, to find Coke cans, and to recognise 'A'. All this is
fine and good, not least because it forces us to appreciate abilities that have taken
a long time to evolve (such as recognising Coke cans). But there is a danger, with
such a broad notion of intelligence, that AI will lose its way. (What, now, is the
quarry?)
A concern with creativity restores the focus, for creativity is about things that
we are told computers cannot do-and yet which they must be able to do if they
are to be intelligent. Normal science does not test theories by looking at easy cases,
since it is always possible to find data that will fit. A crucial part of the methodology
of normal science is attempting to falsify theories by exposing them to maximum
risk-a point that has been made famous by Popper. In artificial intelligence we
know some of the things that machines are good at. They can perform rapid search,
and can do so far more efficiently than people. Consequently they can do things
such as play chess and backgammon. They are adept at formulating and deploying
complex bodies of rules, so that they can diagnose diseases of the blood and
configure computer systems. But the time has come to put AI to the test by looking
at the things that it is claimed computers cannot do. Creativity provides us with the
acid test.

2. The Lovelace questions

In her paper in this volume, and in more detail in her book (Boden, 1990/1992;
see also Boden, 1993; Boden, 1994), Margaret Boden addresses Lady Lovelace's
claim that computers cannot have anything to do with creativity because they
cannot originate anything. Boden points out that the matter is not so simple, and
distinguishes between four 'Lovelace questions':
1. Can computers help us to understand human creativity?
2. Could computers (now or in the future) do things which at least appear to be
creative?
3. Could computers appear to recognize creativity?
4. Can computers really be creative (as opposed to producing apparently creative
performances whose originality is really due to the human programmer)?

1 Both authors draw on the work of the developmental psychologist Annette Karmiloff-Smith.

We can ask other questions. We can ask, for instance, whether computers can
enhance human creativity. If they can, should we develop them as creativity-
enhancing tools, in which the human operator is kept in the loop, rather than
trying to produce autonomously creative machines? The last section of this volume
addresses this issue.

Boden is principally concerned with the first Lovelace question, to which her
answer is a clear 'yes': computational concepts and theories (rather than computers
as such) can help us to specify the conceptual structures and processes in people's
minds.
Her answer to the fourth question ('Can computers be creative?') is qualified.
She believes that computers can do things that appear to be creative, but whether
we regard them as actually creative will depend on a moral/political question: are
we prepared to allow them 'a moral and intellectual respect comparable with the
respect we feel for fellow human beings'? (1990/1992: 11).
It is certainly true that something can appear to be creative without actually
being so. We might judge a picture or a poem to be creative but change our minds
when we discover that it was produced by a random process. If, for instance, we
discovered that the paintings on the Sistine Chapel ceiling resulted from an orgy of
paint throwing, or that Hamlet was written by monkeys and typewriters, we would
retract our judgements about their creativity. We would do so because we would
have discovered that these things were not produced in the right kind of way: the
aetiology, the causal history, would be wrong. In the same way a computer could
appear to be creative without actually being so-if it used only random processes,
for example.
The fact that we take aetiology so seriously indicates that we believe that it is
the underlying process-how the product was generated-that matters. Ironically
we have almost no understanding of this process, but it seems that we put our faith
in the fact that it is there. What, then, of moral/political decisions? As Boden says
(1990/92: 283-284), we are swayed by the appearance of things, and will accord
intelligence and rights to warm, cuddly creatures whilst withholding them from
structures of silicon and steel. Indeed, this is so. But I believe that we make these
judgements because we believe that warm, cuddly things are similar to ourselves,
and therefore that the right sorts of processes are going on. If, as Boden claims, it
is the computational processes, rather than the hardware, that matter, then these
processes should yield intelligence and creativity, whether they are implemented in
carbon or silicon.
There is a huge literature on this general issue, and I cannot go into it here.
Since Boden is principally concerned with the way in which computers can help
us to understand creativity, I will briefly explore the fourth Lovelace question: Can
computers really be creative?

3. The arguments against


Lady Lovelace maintains that computers cannot be creative because they cannot
originate anything. Carl Hempel recently reiterated this claim when he said that
the discovery of an explanatory theory requires the introduction of new theoretical
terms and principles, and "it does not seem clear at all how a computer might be
programmed to discover such powerful theories". (Hempel, 1985: 120; quoted in
Scheines, 1988: 341.)
The claim that computers cannot originate anything appears to be
straightforward, but when we try to articulate the reasons behind it we discover that what
passes for a single claim is really a cluster of related claims. The claims are that
machines cannot be creative because: they only follow instructions; everything
they do is something they have been told to do; everything they do follows from
something they have been told to do; their internal structure determines their
output; their internal structure makes their output predictable; their internal structure
is something that we have given them; and so on. I will consider these in turn.
The most common reason put forward to support the claim that computers cannot
originate anything (and therefore that they cannot be creative) is that they merely
follow instructions. The argument is:

If X is merely following instructions, X is not being creative.
Computers only follow instructions.
Therefore computers are not creative.

This is valid, but the first premiss is false, for we sometimes instruct people to
be creative. For instance, Pope Julius II instructed Michelangelo to be creative
when he painted the Sistine Chapel ceiling. Therefore it is possible to be creative
and to be following instructions.
"But," the objection goes, "Julius II only gave Michelangelo very general
instructions, and left the rest to him. In contrast, every single thing that a computer
does is something that it was told to do. Suppose that Michelangelo had been
instructed about every brushstroke that he made!"
The argument is now:

If everything that X does is something that it was told to do, then X is not creative.
Everything that a computer does is something that it was told to do.
Therefore computers are not creative.

Now the second premiss is false, for we do not instruct computers in every action
that they perform. This would require us to give them millions of instructions per
second.

"What I meant was that everything that they do follows from instructions that
we give them."
But what does this mean? If it means that the machine's performance literally
follows from its instructions then it is false, for if we wrote all the instructions on
a piece of paper, nothing at all would happen.
Presumably it means that the computer is built, or designed, to respond in a
predictable way to its instructions. So we have:

If X is designed to respond in a predictable way to its instructions, then X is not creative.
Computers are designed to respond in a predictable way to their instructions.
Therefore computers are not creative.

But this still isn't clear. Is the complaint that computers are predictable-that we
can predict the output given the input plus an exhaustive description of the innards?
Or is it that we have designed the innards, so that their ostensible creativity is really
ours?
Let us consider the second interpretation first.
We can 'give someone a good education', but it is still their education. We can
'teach them to think for themselves', and the fact that we have taught them does
not detract from their ability. Indeed, we can 'give someone a good mind' and (as
we say) 'give them a mind of their own'. Haugeland (1985: 10) says "Why should
an entity's potential for inventiveness be determined by its ancestry (like some
hereditary title) and not by its own manifest competence?" If we were to discover
that we were created, would this mean that we are not creative? Surely not. And
would we be more inclined to assign creativity to a computer that, rather than having
been created, had come together by chance?! Systems cannot create themselves,
so if we deny creativity to systems that have been produced (by evolution or by
people), we will have to say that nothing is creative, and that there is no such thing
as creativity.
What about the other interpretation, that computers are predictable input-output
devices? I suspect that there is circularity here, and that the intuition is that if
creativity was predictable, then a machine could do it-and machines cannot be
creative!
This suspicion aside, why shouldn't creativity be predictable? Julius II could
have predicted that Michelangelo would creatively paint the Sistine Chapel ceiling.
"But that was only predicting that he would be creative. It was not predicting
what he would create."
Here we need Margaret Boden's distinction between P-creativity and
H-creativity (1990/92; this volume; etc.). Something is P-creative ('psychologically
creative') if it is fundamentally novel for the individual, and it is H-creative
('historically creative') if it is fundamentally novel with respect to the whole of human
history.
Now, as Boden says, we can predict P-creativity. For instance, we can put a
child into an environment in which we know that she will discover something, or
solve a problem, in a P-creative way (Boden, 1990/92: 37).
What about H-creativity? We can predict that someone will be H-creative: we
lock a genius in a broom cupboard and tell her that she cannot come out until she
has been H-creative. We cannot, it is true, predict what she will H-create-but let
us be clear why we cannot do this. We cannot do it because then, in a sense, we
would have created it first. It is not that we cannot predict what she will come up
with. There is no magic going on inside the broom cupboard. There is just the trivial
truth that, by definition, H-creative thoughts haven't been thought before, so that if
you have the thought before Sally, then her thought isn't H-creative.
Notice, too, that the claim that we are considering is that something cannot
be creative if it is predictable in principle. Now, something can be predictable in
principle without having been predicted in fact. If it was not predicted in fact then it
may be fundamentally new with respect to human history. Consequently, something
can be H-creative and predictable in principle.
These points are worth labouring. Suppose we discover that Michelangelo was
an intelligent machine (built by a Renaissance genius called 'Michelangelo'-or,
to avoid confusion, plain 'Mike'). And let us assume that Mike knew all there was
to know about Michelangelo's innards, so that, in principle, he could predict the
output for any given input. He did not, however, bother to make the predictions (he
was too busy converting lead into gold). "Oh, very nice," he said, when he looked
at the Sistine Chapel ceiling, and added, sotto voce, "Be careful what you say to
the Pope."
First, the fact that Mike gave Michelangelo his ability does not detract from
Michelangelo having the ability. Second, it doesn't detract from the qualities of the
Sistine Chapel ceiling (always assuming, of course, that Mike gave Michelangelo
the right sorts of innards-not random ones, for instance). Finally, because Mike
hadn't in fact predicted what the outcome would be, Michelangelo's work was
H-creative. It was fundamentally new with respect to human history.

4. But yet...

There is, then, no obvious reason why computers cannot be creative, and no obvious
reason why they cannot have minds of their own. The final argument, that creativity
is not predictable, is little more than a trick of the light. (I say that there is no obvious
reason. There may be unobvious ones!)
Nevertheless, we are bound to ask what it would look like for a computer to be
creative-what it would be (in very general terms) for a machine to have a mind of
its own.
We can get a conceptual overview by looking at a similar, but better understood,
issue: freewill versus determinism. This is similar because it is concerned with
whether (and if so, how) people can have minds of their own.
The traditional battle-lines are as follows. Determinists believe that every event
is caused. Hard determinists believe that, because every event is caused, there is no
freewill. Libertarians believe that some events are uncaused-namely, events in the
human mind or brain-and that this gives us freewill. Soft determinists (also called
'compatibilists') agree that every event is caused, but say that this is compatible
with freewill, for an action is free if it is caused by our preferences, volitions and
desires-if it is the product of individual personality. This (they say) is what we
mean by 'free action'.

5. Creativity and compatibilism


Margaret Boden (1990/92: chapter 9) provides a persuasive, broadly compatibilist,
account of creativity. The creative product arises out of the creator's particular
genius, out of her idiosyncratic personality and abilities. The processes that are
going on inside her are causal, but they are clearly and uniquely her processes, and
this is evidenced by the fact that the products have her hallmark upon them.
An apparent problem for a compatibilist account of creativity is disavowal. We
sometimes disavow responsibility for our creative acts, and say that something
'came to us' or that 'we were spoken to by the Muse'.
It is true that creative products are not always, or not entirely, the fruits of
conscious effort or deliberation, but this need not militate against compatibilism.
All that is needed is that sometimes we can articulate at least part of the creative
process, saying why we did this, and why we did not do that. If we can never do this
then we have the problem of idiot savants, who can just do things, but cannot say
how. In The Man Who Mistook His Wife For a Hat, Oliver Sacks (1985) describes
twins who could perform astonishing mathematical feats by just 'seeing' patterns
of prime numbers, but who were unable to articulate how they did this, on any level
of description. I think that under these circumstances we suspect that the agent is so
unaware of, and so disconnected from, her processes that they are not her processes
at all.
I believe that we do not perceive our creativity in this way, even when we
disavow it. If we really believed that we were not responsible for what we had
created, we would not call it 'creative' at all, any more than we would accord
creativity to a radio or a TV set.
The appeal of compatibilism is two-fold. First, it reconciles creativity with
mechanism in a fairly straightforward way. Second, it avoids what appears to
be a major problem (indeed, a disaster) for libertarianism. For it appears that
libertarianism gives us, not freewill, but chaos: if our actions were genuinely
uncaused they would be chaotic and random rather than free.
This is the standard criticism of libertarianism. I will outline an idea of Dennett's
which suggests that it is premature.
6. Creativity and indeterminism

In 'On Giving Libertarians What They Say They Want', Dennett (1978: 296) says that
the compatibilist position "looks terribly mechanical and inevitable, and seems to
leave no room for creativity or genius." It allows no distinction to be drawn, he
says, between authorship and mere implication in a causal chain.
In an effort to give libertarians something like what they want, he first draws the
sting out of the notion of randomness. In common parlance this has connotations
of pointlessness or meaninglessness, but the point that the libertarian wishes to
make is exactly that random actions (actions that are random in the sense of being
undetermined) need not be meaningless.
Dennett invites us to imagine someone who has to make a decision in a hurry.
She does not have time to think of all the considerations, and has to make her
mind up immediately. Now, as Dennett says, "it just might be the case that exactly
which considerations occur to [her] in such circumstances is to some degree strictly
undetermined." (1978: 294). Some considerations will occur to her, but others will
not. If we knew everything about her personality we could only predict what she
would do if such and such considerations occurred to her. What she will do in fact
is strictly indeterminate.
This leads Dennett to suggest that two factors are involved in decision making.
There is a "consideration-generator whose output is to some degree undetermined"
(p. 295). This produces a series of considerations that are either rejected as irrelevant
by the agent, or selected as having bearing on the reasoning process. He quotes
Valery as saying:
It takes two to invent anything. The one makes up combinations; the other
one chooses, recognises what he wishes and what is important to him in the
mass of things which the former has imparted to him. What we call genius is
much less the work of the first one than the readiness of the second one to grasp
the value of what has been laid before him and to choose it.2
This two-stage model maps perfectly onto what is almost a standard account
of creativity: Wallas (1926), taking his cue from Poincare and others, said that
creativity proceeds in the four stages of preparation, incubation, illumination and
verification. The account has been criticised, but mainly on the grounds that an
incubation period may not be necessary (Weisberg, 1986). As far as I know there is
no disagreement with the claim that creativity involves generation and evaluation.
More recent research, in fact, addresses the generative-evaluative cycle, in which
a product is generated, evaluated, new goals are set, and the cycle is repeated (see
the sections on Design and the Enhancement of Human Creativity in this volume).
Dennett, of course, is suggesting that we locate a randomiser in the generative
component, so that randomness, rather than giving us chaos, actually works for
creativity. This will have to be at a low level, or we will have the chaos that the
compatibilist fears.

2 Originally quoted by Jacques Hadamard (1949: 30). Dennett discusses the implications of Valery's
claim in Dennett (1978, ch. 5).
In the Epilogue to this volume, Douglas Hofstadter explains how low level
randomness works for creativity in the COPYCAT program. Randomness occurs
in the low-level act of 'micro-exploration'. The purpose of micro-exploration is to
efficiently explore the vast, foggy world of possibilities lying ahead without
getting bogged down in a combinatorial explosion; for this purpose, randomness,
being equivalent to non-biasedness, is the most efficient method.
These, then, are two general ways, drawn in the broadest brush-strokes, in which
machines might be creative. They might be deterministic, but have rich internal
processes that stamp a distinctive hallmark on their products. Or they might include
a randomising element at a low level of description. In the latter case, of course, we
would still need complex cognitive processes to account for style and 'personality':
Hofstadter talks about COPYCAT's 'personality' and 'aesthetic taste'.
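The two-stage picture sketched above can be given a minimal computational form. Everything in the following sketch is my own illustrative assumption-the element list, the scoring rule and the function names-and none of it is a detail of Dennett's account or of the COPYCAT program itself. A partly random generator proposes combinations of considerations; a deterministic selector keeps those the agent's own 'taste' judges relevant.

```python
import random

def generate_considerations(elements, n, rng):
    """The undetermined stage: randomly combine elements into candidate pairs."""
    return [tuple(rng.sample(elements, 2)) for _ in range(n)]

def select(candidates, evaluate, threshold):
    """The selective stage: deterministically keep candidates judged relevant."""
    return [c for c in candidates if evaluate(c) >= threshold]

rng = random.Random(0)  # seeded, so the 'random' stage is repeatable here
elements = ["benzene", "ring", "chain", "snake", "tail"]
candidates = generate_considerations(elements, 20, rng)
# An illustrative taste: this selector favours any pairing involving 'ring'.
chosen = select(candidates, lambda c: 1.0 if "ring" in c else 0.0, 0.5)
print(chosen)
```

The randomness sits only in the generator, at a low level; the selector's standards supply the stability that, on the compatibilist reading, gives the output its 'hallmark'.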

7. Overview of papers

I will begin with Margaret Boden's paper, 'Creativity and Computers', because the
story starts there.
Boden argues that creativity is the mapping, exploration and transformation of
conceptual spaces (shortened to 'METCS' in Boden, 1994). A conceptual space is
a space of structures generable, that is, defined, by the rules of a generative system.
We map it and explore it by combining its elements according to the rules-thereby
discovering structures in the space. We transform it by mapping and exploring
it, and then modifying the rules to produce different types of structures. Boden
calls the first kind of creativity 'improbabilist' and the second kind 'impossibilist'
(Boden, 1994).
Boden argues for this by first considering the notion that creativity involves
a novel combination of old ideas. She accepts that creativity involves novelty,
but observes that such accounts do not tell us which combinations are novel,
nor how novel combinations come about. Her main, and related, criticism is that
many creative ideas not only did not occur before, but could not have occurred
before. Previous thinking was trapped in a framework, relative to which new
ideas were impossible. Consequently, creativity sometimes requires us to 'think the
impossible', to have ideas that are impossible in the present framework: we need
to 'break out' of a conceptual space by changing the rules that define it. Kekule,
for instance, broke out of the space defined by the rules of nineteenth-century
chemistry, and opened up the new space of aromatic chemistry. We 'break out',
says Boden, by using heuristics (that is, higher-level rules) to modify the rules that
define the space.
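Boden's picture of mapping, exploring and transforming a conceptual space can be illustrated with a toy generative system. The alphabet and rules below are invented for the sketch and stand in for the rules of, say, a musical style or nineteenth-century chemistry: exploring the space enumerates the structures the rules admit, and transforming it changes a rule so that previously impossible structures become generable.

```python
from itertools import product

def space(alphabet, length, rule):
    """Explore a conceptual space: all structures of a given length the rule admits."""
    return {"".join(s) for s in product(alphabet, repeat=length) if rule(s)}

# Original rule: no symbol may be repeated adjacently anywhere in the structure.
old_rule = lambda s: all(a != b for a, b in zip(s, s[1:]))
# Transformed rule: the constraint is dropped at the first position,
# admitting structures that were impossible under the old rule.
new_rule = lambda s: all(a != b for a, b in zip(s[1:], s[2:]))

old_space = space("ab", 3, old_rule)
new_space = space("ab", 3, new_rule)
# 'aab' is impossible in the old space but generable in the transformed one.
print(sorted(new_space - old_space))  # prints ['aab', 'bba']
```

Combining elements according to the fixed rules corresponds to Boden's 'improbabilist' creativity; the rule change that enlarges the space corresponds to her 'impossibilist' creativity.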
Boden argues that AI provides us with concepts and theories that help us to
study creativity, and she considers computer programs in both the Arts and the
Sciences. She concludes by addressing some philosophical puzzles associated with
creativity.
In their papers Terry Dartnall and Andy Clark discuss creativity in the context of
Annette Karmiloff-Smith's Representational Redescription Hypothesis (RRH). The
RRH is a theory of cognitive development that tries to explain how the mind goes
beyond domain-specific constraints. It maintains that we are 'driven from within' to
redescribe our procedural knowledge as declarative knowledge, and to continue to
redescribe it at increasingly abstract levels. Skilled performers, such as beavers, are
flexible within a domain, but they cannot progress beyond what Karmiloff-Smith
calls 'behavioural mastery'. Creative cognizers, on the other hand, redescribe their
implicit knowledge as accessible structures (as thoughts), which they can examine
and amend. Thus there is an intimate relationship between our ability to be creative
and our ability to have a mental life.
Dartnall relates this to the way in which creativity 'gets something out of
nothing'. When we have an idea, or solve a problem, or write a novel we get
something out of nothing inasmuch as there is no significant sense in which these
things are combinations of previously existing elements. Consequently,
combination theories-even sophisticated ones such as Boden's-do not tell the whole
story. In the final analysis they may not tell much of it, for they only provide us
with well-formedness conditions on ideas and artefacts.
To get at the broader picture, Dartnall turns to the question of intentionality-of
how our thoughts and products can be about anything. Following an Information
Theoretic line, he argues that we first acquire intentional states by being causally
situated in the world. We do not have access to these states, which account for
our implicit, procedural knowledge. Later we redescribe this knowledge as
explicit, accessible structures (as thoughts) that have intentionality because they are
redescriptions of states that we acquired by being causally situated in the world.
Viewed thus, creativity is the struggle to give expression to what it is to be reflective
creatures causally situated in the world.
Clark relates redescription, creativity and the mental life to connectionism. First
order connectionist systems, such as NETtalk, are like the beaver: they are skilled
in a domain, but are unable to present their knowledge to themselves as accessible
structures. Consequently, they cannot be creative. How, then, is interdomain
flexibility to be achieved? Clark discusses Adrian Cussins's model of cognition, which
aims to show how connectionist systems could achieve an increasingly general and
flexible view of the world. He compares this with the evidence that in the later
phases of development we need a symbolic Language of Thought-and he records
an open verdict.
The RRH does not tell us how we redescribe, that is, what mechanisms underlie
representational redescription. In his paper, Donald Peterson addresses this issue in
a more traditional problem-solving framework. He observes that a problem might
be difficult, or even impossible, to solve in one representation, but quite simple to
solve when redescribed into another one. He asks whether redescription is just 'redescription of problems under useful new concepts'. To answer this he examines three problems and their solutions through re-representation. He concludes that,
although re-representation crucially involves the introduction of useful new con-
cepts, it also crucially involves emergent information, which can be calculated once
the new concepts have been introduced. This information constitutes a discovery
which increases our understanding of the problem.
Wales and Thornton explore other ways in which psychological research can
contribute to the development of computational models of creativity. They first
consider accounts of creativity in the literature and observe that these have a
common theme of novelty. One of the important issues is whether there are tests for creativity: can it be measured? And is it separate from other factors such as
intelligence and motivation? Some psychologists believe that it is not: creative
people are more motivated, work on problems longer, strive for originality, are
ready to take independent action and are more flexible, but do not have a special
ingredient of 'creativity'.
The authors then consider three experiments that have been conducted to illu-
minate the creative process.
The first experiment considers the way in which children draw pictures: these
are representational and iconic rather than attempts to produce replicas. They reflect
cultural differences. Children develop styles of their own. These studies point to
the importance of experience in the creative process and to the ability to think in
terms of associations, symbols and representations.
A second experiment involves children's use of blocks to form a bridge to span
a gap for which the blocks are individually too short. Significant strategy changes
occur during these sessions, and striking differences in behaviour are exhibited. The
authors argue, however, that this discontinuity is only apparent, since information
and features from previous failed attempts are used to design future ones and
ultimately to produce the strategy that works.
A third experiment involves the way in which children process metaphorical
expressions. Wales and Thornton argue that the deployment of metaphor may fall
within the realm of linguistic knowledge, so that it may not be as difficult to model
as many suppose.
The last two papers in the section are about philosophical objections to machine
creativity. The first defuses an old argument, but the second articulates a new one,
so that we end on a note of dissent.
Graham Priest's paper addresses J. R. Lucas' infamous argument (recently re-
stated by Lucas, and given a new incarnation by Roger Penrose) that a mind will
always be able to prove mathematical results that a machine cannot. Lucas' argu-
ment draws on Gödel's Incompleteness Theorem, which states that, for a particular type of mathematical theory, T, if T is consistent (i.e. if a formula and its negation cannot be derived in T), then there is a true formula which is not provable in T. That
is, under certain conditions a true formula is not provable in a certain theory. Lucas
maintains that computers are bound by this restriction, whereas people are not, so that computers, but not people, are limited to strictly rule-bound behaviour. Priest
spells out Lucas' argument and shows that it can avoid a number of problems. But
he also argues that both the original paper and subsequent discussions are flawed
by confusions over the concept of what it is to give a proof, and he maintains that
the argument fails when this concept is clarified.
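For readers who want the claim in symbols, the theorem Lucas invokes is standardly stated as follows (the notation here is the usual logical one, not drawn from Priest's paper):

```latex
% Goedel's First Incompleteness Theorem (standard form):
% for any consistent, recursively axiomatizable theory T that
% contains enough arithmetic, there is a sentence G_T such that
T \nvdash G_T
\quad \text{and} \quad
T \nvdash \neg G_T ,
% yet G_T is true in the standard model of arithmetic.
```

Lucas' argument turns on the claim that we can see that the sentence G_T is true, while the machine formalized by T cannot prove it.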
We finish with Richard McDonough's spirited attack on cognitive science. This
paper is not for the faint-hearted, though the present version has been heavily
tranquillized.
McDonough is not attacking AI as such, nor is he directly attacking the concept
of machine creativity. He is attacking the idea that people are machines (the idea of
l'homme machine that we started out with) and his thesis is that we are creative in
a way that machines cannot be, for our intelligent products cannot be understood
as the results of mechanical processes.
When we say that something is a machine (says McDonough) we mean that it
is possible to explain its behaviour by causally tracing it to the machine's inner
states. The machine does not have to be 'causally closed' to the environment, but
the data received from the outside must be 'processed' by the internal mechanism,
so that the machine's behaviour results from a change in the state of the internal
mechanism.
Now, machine behaviour can be narrowly described and it can be broadly de-
scribed. A narrow description only refers to the internal workings of the machine,
but a broad description depends on external considerations. Describing the be-
haviour of my watch only in terms of cogs and gears, for instance, is to give a
narrow description. Describing it as saying that it is 3 o'clock is to provide a broad
description, because it depends on social conventions.
McDonough's point is that narrow description is sufficient to explain mechanical
behaviour: in the case of my watch, for instance, it is sufficient to explain why the
hands move as they do. But it is not sufficient to understand intelligent human
behaviour. To understand intelligent human behaviour we need to refer to the
normative environment. When I sign a cheque, the movement of my fingers is
intelligent because it falls under the description 'signing a cheque', and cheque-signing depends on a normative context. The intelligence of my behaviour depends
on the appropriateness of moving my fingers in a certain way in a certain context.
It does not suffice, says McDonough, to explain my intelligent behaviour narrowly
described. The mechanist must explain the intelligence of my narrowly described
behaviour. The claim is that he cannot do this because it requires him to go beyond
a description of the mechanism. It requires him to look at the external environment.
McDonough's paper takes us back to an issue discussed earlier in this intro-
duction: the notion of 'getting something out of nothing'. McDonough claims that
human thought and creativity get something out of nothing, not in Dartnall's sense
that they go beyond the combination and recombination of previously existing
elements, but in the stronger sense that they are not the products of mechanism.
Clark (1993) discusses whether machines can act appropriately in normative
contexts (see especially Part II of his book). I think that the jury is out, and that it
is too soon to record a verdict.

References
Boden, M. A.: 1990, The Creative Mind: Myths and Mechanisms, Weidenfeld & Nicolson, London. Revised edition, 1992, Cardinal, London.
Boden, M. A.: 1993, Mapping, exploring, transforming, in Dartnall, T. H. (1993).
Boden, M. A.: 1994, summary of Boden (1990/1992), with peer reviews, Behavioral and Brain Sciences, 17: 3.
Clark, A. C.: 1993, Associative Engines: Connectionism, Concepts and Representational Change, MIT Press/Bradford Books, Cambridge, Mass.
Dartnall, T. H. (ed.): 1993a, Artificial Intelligence & Simulation of Behaviour Quarterly, Special Issue on AI and Creativity, No. 85, Autumn.
Dartnall, T. H.: 1993b, AI, creativity, representational redescription, intentionality, mental life: an emerging picture, in Dartnall, T. H., et al. (1993).
Dartnall, T. H.: 1994a, Creativity, combination and cognition, peer review of Boden (1990/1992), Behavioral and Brain Sciences, 17: 3.
Dartnall, T. H.: 1994b, Redescribing redescription, peer review of Karmiloff-Smith (1992), Behavioral and Brain Sciences, 17: 4.
Dartnall, T. H., Levinson, R., Kim, S., Subramaniam, D., Sudweeks, F. (eds): 1993, AAAI Technical Report on Artificial Intelligence and Creativity, AAAI Press.
Dennett, D.: 1978, Brainstorms: Philosophical Essays on Mind and Psychology, Harvester Press, Brighton.
Dreyfus, H. L.: 1979, What Computers Can't Do: The Limits of Artificial Intelligence, 2nd edition, Harper and Row, New York.
Fetzer, J. H. (ed.): 1988, Aspects of Artificial Intelligence, Kluwer Academic Publishers, Dordrecht.
Hadamard, J.: 1949, The Psychology of Invention in the Mathematical Field, Princeton University Press, Princeton.
Haugeland, J.: 1985, Artificial Intelligence: The Very Idea, MIT Press/Bradford Books, Cambridge, Mass.
Hempel, C. G.: 1985, Thoughts on the limitations of discovery by computer, in Schaffner, K. F. (1985).
Karmiloff-Smith, A.: 1992, Beyond Modularity: A Developmental Perspective on Cognitive Science, MIT Press/Bradford Books, Cambridge, Mass.
Sacks, O.: 1985, The Man Who Mistook His Wife For a Hat, Picador, London.
Schaffner, K. F. (ed.): 1985, Logic of Discovery and Diagnosis in Medicine, chapter 5, University of California Press, Berkeley.
Scheines, R.: 1988, Automating creativity, in Fetzer, J. H. (1988).
Wallas, G.: 1926, The creative process, The Art of Thought, Jonathan Cape, London.
Weisberg, R.: 1986, Creativity: Genius and Other Myths, W. H. Freeman.
