
So, can we start with you sharing your name and your position?

I’m Alan Winfield, Professor of Robot Ethics here at the University of the West of England,
Bristol.

Excellent. Alan, can you tell me a little bit about how your work in AI began?

Gosh. Well, when I was hired at this university in 1991, I was hired as Head of Engineering
Research. And, in fact, that rapidly became, within a few months, Associate Dean for Research
in Engineering. And the big task that I had was to start some new research areas. And by
chance, another colleague who had also recently been hired, Chris Melhuish, and I got together,
and together with a third colleague, Tony Pipe, we all discovered we had an interest in robotics.
And with a fourth colleague, Owen Holland, we started the lab. In fact, it was then called the
Intelligent Autonomous Systems Lab. Here, on the campus. In fact, initially, I’m not kidding, we
were in a corridor, so we had no space, so we had to do our experiments in the corridor. And we
once got told off by the services department of the university because we moved a drinks
machine to make room in the corridor. But one of the advantages of actually doing swarm
robotics experiments, we discovered, is that people like the cleaners and the dinner ladies
would actually come and sit and watch the robots doing their thing, which was really interesting,
a nice little bit of, if you like, public engagement. I didn’t appreciate that at the time. So, it was
partly serendipitous that a group of us were interested in robotics. The only one of us who had
actually done robotics was Owen Holland, and his background was biological robotics; in fact,
he was one of the founders of the field of what we now refer to as bio-inspired robotics. And, you
know, I very quickly realized that actually this was an area not covered at all in this region. In
fact, hardly covered anywhere in the UK. So, you know, I figured this was a good area in which
to try and start a new research group. And we grew from truly nothing to the lab you see now,
with, you know, more than 200 faculty and students.

Yeah, I want to ask a little bit more about that in a second. But, your own interests in this area,
were they influenced ever by science fiction?

Oh totally, yes.

Oh, can you say a little bit about that?

Well, sure, I mean I read, you know, science fiction as a boy, and I still read science fiction now.
So yeah, I mean I grew up with Isaac Asimov and Heinlein and, you know, Frank Herbert, and
Larry Niven, and all of that great, classic science fiction. Well actually, even before their time, I
was an avid reader of H.G. Wells and Jules Verne especially. So, yeah, I really loved the idea of
robots from boyhood, and so it was very exciting to suddenly have the opportunity to start to
work in this area. And of course, one of the beauties of being, well, as I was then, Head of
Research, is that I could essentially decide the strategy. Of course, I had to get the University’s
support, particularly when we needed space and facilities. But fortunately, you know, really from
the very beginning we were able to win grants. And, of course, that’s a really good persuader.

It is.

As far as universities are concerned. University management, I mean.


Sure. Yeah. So, can you share a little bit about how some of your work has evolved...

Sure.

Up until present projects?

Yeah, and it’s been a big evolution. So, I started research essentially in the area that Owen
cofounded, which is swarm robotics, swarm intelligence. And all of us actually, all four of us, the
original founders of the lab if you like, worked in swarm robotics in one way or another. And I
continued to work in that area for some years. And I still have a little bit of work in swarm
robotics, but I’ve kind of moved away from that area now. So, in swarm robotics, I was
particularly interested in, if you like, the idea that a swarm of physical robots can help us to
understand the natural phenomena of self-organization and emergence. And very quickly, as a
research roboticist, I kind of came to the idea that I’m not really so interested in robots for their
real-world application. I am, but I’m in a sense more interested in robots as scientific
instruments. In other words, robots as a kind of microscope, if you like, for investigating
properties of natural intelligence, life, evolution, even culture. And I’ve had projects in all of
those fields. So, you know, I became very interested in exactly what emergence is and how self-
organizing systems work. And, if you like, in the meta-question of whether robots can be used as
a scientific instrument, and that’s a question I’m still working on.

Yeah.

So, and then to continue the story, not long after starting the lab, I got to know our science
communication people here in the university, already quite well-known and, you know, quite
famous for working in public engagement. And it was a kind of marriage made in heaven,
because robotics is fabulous as a medium, a vehicle, for doing public engagement and science
communication. I discovered that I liked doing science communication, and in a sense it’s what
professors should do; after all, what is a professor but someone who professes, someone who
communicates? You know, I get very cross with professors who don’t do any science
communication because, as far as I’m concerned, they’re not doing professoring. They’re not
being professors unless they do science communication or public engagement in general. And I
don’t just mean science professors. And interestingly, that work in public engagement got me
interested in ethics, robot ethics. It sensitized me to ordinary people’s concerns with robotics
and AI. You know, people used to ask me when I gave talks, “So how intelligent are your
intelligent robots?” And there was always a kind of subtext, which is “And should I be worried?”
Of course, the answer was always “Not very, and don’t worry.” But that really opened my eyes
to, if you like, the broader ethical and societal questions about robotics, and of course by
extension AI. So, I then really kind of diversified, so that since around 2010, so the last eight
years or so, I’ve been doing both, if you like, hardcore scientific research with robots, in parallel
with robot ethics.

Yeah, actually, communicating to the public is one of the central themes that we’re quite
interested in. So, I’m wondering if there’s an example you think of as a strong case of the
sophistication, or the particular focus, of a tool being successfully communicated to the public
at large?

So, what do you mean by examples? You mean a project, or...?

So, an example of accurate... yeah, or useful communication about AI to a broad public.


I led a project called Walking with Robots for several years, which was a kind of UK-wide public
engagement project where essentially we networked all of the robotics labs in the UK and kind
of opened them up, if you like, so that, you know, children, families and so on were able to
meet the scientists. In fact, we ran events here in the science center in Bristol, it was then
called At Bristol, it’s now changed its name, it’s now called We The Curious. We ran events
called Meet the Scientist. We also ran open lab events so that people could come, and we still
do. You know, children, school children, could actually come and meet, you know, the people
who make robots, the roboticists. And you’ve reminded me of an interesting experience I had.
So, I can’t remember exactly which year, but one summer I was a kind of visiting scientist, if you
like, in At Bristol, the local science center, and I had a swarm of robots that were running in a
little kind of enclosure. But they weren’t very reliable robots. These were first generation robots.
And they were about the size of a dinner plate, so they were quite large. And, you know, I guess
it was an arena of a few square meters. So, quite a large arena by modern standards. And the
robots broke down. So about once a week I went down to the center, to At Bristol, to fix the
robots, and it became a thing. So, I became known as the robot doctor. And what we found was
that the kids loved it when I was fixing them, and they could see the insides of the robots. And it
was actually more instructive when the robots broke and I was able to show them, you know,
the robots from the inside, than when they were doing their thing normally.

It’s an interesting insight, because on some level it seems like there’s a tension: a lot of the
metaphors or narrative descriptions that rely on science fiction draw our attention (and by “our”
I include myself, because I’m not a technologist) to the upper bounds, the upper possibilities, of
these systems, rather than to what you’re pointing to, which is the limitations of the systems
alongside their actual functions.

I mean Illah will know as well as I do that, you know, most of us are concerned with not very
intelligent things. I mean most robots are not very smart.

Yes.

And, and actually, you know, the study of the intelligence in not very intelligent things is really
instructive.

Yes, yeah.

So, often, I mean there’s a kind of, if you like, unhealthy obsession, particularly in the media,
and by some scientists who should know better, with, you know, super smart robots and
AIs. And I think that’s a mistake. We certainly don’t need to be worrying about super
intelligence. You know we can’t even make a robot that could make you a cup of tea in your
kitchen. And we’re a long way from being able to do that.

Yeah.

So, and in fact I discovered that, now who was it? I think it’s, correct me on this, the
Wozniak coffee test, because he came up with the same test before I did, you know. If a robot
could come and make a cup of coffee in your kitchen, then you’ve really cracked it. I may be
wrong. Is it Steve Wozniak?

There’s a possibility.

Yeah, yeah, so, but I’m nevertheless going to claim the Winfield Tea Test, being British of
course.

Yeah, of course. Keeping with your roots.

Yeah.

In regards to this notion of work, not necessarily in our own kitchen, from your vantage point,
how do you think AI systems have changed the way that people work up until now?

Not very much, I think. Some of course, yes. I mean, we, you know, we all use search engines,
pretty much all the time. And those are little bits of AI, they’re very narrow, you know, very
specialized bits of AI. Pretty useful, yeah. But, in everyday use, I mean, it’s pretty limited still. I
mean, some of the AI tools we have are really great, and I’d be the first to agree. You know, I
think machine translation is remarkable. It means that I can read papers written in German or
Chinese in a way that I couldn’t have done, you know, at all twenty years ago. And even five
years ago, the machine translation systems were just not good enough to make sense. So, you
know, for some people at least, you know, if you like, experts or professionals, I think that made
a big difference. But, I don’t think AI is as pervasive as we think it is.

Based on the work that you and colleagues throughout the UK have been doing, how do you
think AI will have changed the way that people work in the coming twenty years, in this
boundary period?

Yeah, that’s a really hard question. I think things will change a lot, I suspect. And I hope in good
ways. I mean I’m an optimist. But I’m also worried, I mean, you know, I’m particularly worried
about some of the big ethical issues in AI, particularly bias. You know, we now know of course
that AI systems, particularly deep learning systems, you know, the trendy kind of AI systems,
only work because of large datasets. And it’s very difficult to curate data to make sure that that
data is truly representative, you know, which is why you have dreadful examples of racial bias,
you know, demonstrated by image recognition systems for instance, which are truly atrocious.
And they’re entirely solvable, but only by curating datasets. So, I think that’s one problem, one
big problem, we should all worry about. The other problem is data ownership and data privacy.
And of course, some of us knew this some years ago, but I think more and more people are
waking up to the fact that, you know, when we use services like Google and Facebook, we’re
kind of making a Faustian pact, if you like. You know, we give up, we give away our data in
exchange for these services and tools, which are now pretty cool, you know. I don’t use
Facebook at all, but I do use Google a lot. And I think that needs to change. And, you know, I’m
really excited by, for instance, a new initiative by Tim Berners-Lee in creating frameworks that
allow you to effectively encapsulate your own data and have control over who accesses that
data and how they access it. I think that we really need to take back control. I mean, I’m sorry, I
hate that phrase of course, because of its association with Brexit, but it’s true, I think, of the
Internet. You know, as far as AI and social media are concerned, we really do need to wrest
back ownership of our own data from, you know, the AI giants.

Yeah, that actually moves us back to...

Can I take a break please to just have a slurp of tea?


So, we were just finishing up discussing ownership and curation of data, and the ways in which
individuals might be given tools but also become more informed in thinking about how they
choose to share their data.

Exactly, yes. Yes.

We’re quite concerned with the power relationships between individuals, communities, and
these tools. So, I’m wondering if you can describe an AI tool where you believe power has been
transferred from the human user to that system?

I’m trying to understand the question. Examples of AI where power’s been transferred...

To the system for decision making.

Well, I mean there are kind of trivial examples, in the sense that if you use Amazon, then, you
know, Amazon will give you recommendations. And Netflix is another good example where
there are recommendations. I don’t know yet of any recommender systems in, for instance,
medical diagnosis, but those will be coming soon. And those are, in a sense, a transfer, though
not of authority, because while they remain recommender systems, and I hope they always will,
you know, I think this is where we need to draw a line: we should not have systems like a
medical diagnosis AI that actually makes the diagnosis. It should only ever recommend a
diagnosis, if indeed at all, to a clinician who actually makes the decision. So, I see these as
augmenting and supporting human intelligence and human expertise rather than replacing it.
Gosh, I mean, there are bad examples of handing over authority. And, you know, one of the
worst, I think, is driverless cars. Now, I say the worst, but I don’t want to give you the wrong
impression; I’m a great fan of driverless car technology, it’s just that we’re not doing it right.
What we have right now are driverless car autopilots that I believe are badly engineered and
have not been properly, formally verified and validated against standards. Yet, if you own a top
of the range Tesla, for instance, you can simply press the button and engage that autopilot. This
is very dangerous. I mean, it’s dangerous not just for the driver, but, as we’ve seen, there have
been some very high profile, dreadful accidents. You know, not just killing the driver, but in one
case killing a pedestrian. You know, this is the Uber accident in, was it Tempe?

Tempe.

Arizona. And the problem is this is an unproven technology. And I think actually it’s the wrong
paradigm. You know, we should not be making driverless cars that require as it were you as the
driver to be in charge of the vehicle all of the time and be ready to take over at a moment’s
notice. The worst thing that you should ever do is give a human being nothing to do and expect
them to pay attention all of the time. Of course, we expect that of airline pilots with airline
autopilots, but there’s a big difference there. Firstly, those airline pilots are trained in how to use
an autopilot. And secondly, they generally have minutes to react, whereas of course if you’re on
a freeway, you’re traveling at speed, and if your car’s autopilot, you know, sounds an alert and
wants you to take over, you have a fraction of a second, which is not enough.

And can you speak a little bit more about the details associated with that social organization?
Because the way the human intervenes with the autopilot is one configuration. But to your
point, compared with a pilot’s traffic, which is highly regulated by air traffic control, the social
variables at play for a driver in freeway traffic are far more complex.

Yes, they are radically different. That’s right.

Yeah.

The point is that a car is navigating a crowded and fast-moving dynamic environment, and of
course, you know, one of the many ways in which current generation autopilots fail completely,
I don’t mean dangerously, but they fail to do the job, is if the car is emerging from a side street
onto a busy city road. Now, what you and I would do, as human drivers, is we’d nudge our way
out and we’d make eye contact. And as soon as you make eye contact with one of the drivers,
they’ll let you out. And that’s essentially a human protocol which is not in the highway code. It’s,
if you like, an emergent social protocol that keeps traffic flowing. It means that you’re not stuck
waiting, you know, for hours to merge onto a busy city street.

It’s an interesting twist on the swarm work.

Exactly.

From your early work.

Well exactly. It is a kind of collective behavior, that’s right. And, you know, I said at the beginning
that I think that driverless car technology has enormous potential, but I think we should be
working on the paradigm where there is no human control whatsoever. You know, the pod
paradigm, if you like, which is where you literally just sit in the vehicle and it does its thing.
Now, that’s much harder of course, to get that right. And as someone who works in standards,
I’m currently doing a fair bit of work in standards, you know, I’m shocked by the fact that
standards don’t exist yet for driverless car technologies. That means that, in effect, we cannot
regulate. You know, regulation typically depends on standards against which you can then
measure the safety of the technology.

So, I’m wondering if you could maybe speak a little bit more about this work in standards?

Sure.

And also some of the criteria by which you, or other individuals concerned with standards,
might determine the features that would allow a human to relinquish some decision making to
a system.

Gosh, that’s a really hard question. That’s an entire lecture’s worth of question. Let me take a
break.

Yeah that’s fine.

And think about that. So, do you want to say that question again, Jennifer?

Yeah, so if we’re thinking about the development of standards that are currently not in place,
alongside what is an evolving body of research on the prospect of autonomy, in the example of
driverless cars, where might we begin...

Okay.

With standards...

So.

In regards to that example?

I’m going to try and address that by telling you a little bit about the standards that I have been
involved in and I’m currently involved in. And I’ll kind of segue into the autonomy thing. So, the
first standard that I helped to draft is what we believe is the very first ever ethical standard in
robotics. And it’s a British standard, it’s called BS 8611, Guide to the Ethical Design of Robots
and Robotic Systems. And of course, that includes autonomous systems. So, what is 8611? Well,
it’s basically a method, a kind of toolkit for doing what we call an ethical risk assessment of a
robot or robotic system. And I think that’s a great starting point. You know I would like to see an
environment where every robot and AI, at a very early stage of development goes through a
process of ethical risk assessment. Now what kind of risks am I talking about? Well in BS 8611,
we, if you like, brainstormed a whole bunch of risks, ethical hazards and ethical risks, and they
cover everything from robot addiction, for instance, robot dependency, and robot deception, as
well as the obvious ones like loss of employment and environmental factors, sustainability,
reparability. So it’s a very broad-ranging set of ethical risks that go from the very personal
through societal, economic and ultimately environmental risks. So, you know, I think that that’s a really
good starting point, if you like. For me, that’s a baseline standard that should apply to all
autonomous systems, cause it makes you ask questions about those systems. Now, within the
IEEE Standards Association, and I’m sure you’re aware of the Ethics Initiative that the IEEE
started a couple of years ago, we’re developing, we have developed, a thing called Ethically
Aligned Design. And in turn that’s spinning out a whole bunch of ethical standards in robotics
and AI, and they’re so-called human standards, which I think is a really, really good way of
describing those standards. Standards that put humans at the center, not the technology but
humans. And they really try to embody the principle that AI should benefit humans both
individually and societally, and increase well-being. I mean, that’s a fundamental goal, if you like,
that we have in robotics and AI. Now, I’ll tell you a little bit about one of those standards. But
first I’ll tell you about P7000, so they’re called the P7000 series. So P7000 is itself an ethical
design method. So, in a sense, IEEE P7000 is, if you like, the IEEE’s equivalent of BS 8611. I
think it’s going to be more expansive, I haven’t read it yet, it’s still in draft. They all are. But that’s
again an important baseline standard. Now the one that I’m leading is called P7001 and that’s
called Transparency in Autonomous Systems. So, it’s a standard about transparency. And it’s
based on the radical proposition that it should always be possible to find out why an
autonomous system made a particular decision. Now that’s of course very easy to say, but hard
to do. It’s also controversial and you’d be surprised how many AI people go “Woah, you can’t do
that. That means that we can’t use a lot of AI systems.” Well, you know, my answer to that is,
well it’s okay, I don’t mind AI systems if they’re doing non-critical things, like, you know video
games, or playing Go or Chess or something like that. Or even, you know, recommending what
movie you might like to watch or what book you might want to buy. That’s okay. But if that AI is
driving my car, then it certainly does need to be transparent.

Or diagnosing illness perhaps.

Or diagnosing illness. Actually, there are many examples of what I call safety-critical AI. So, it
isn’t just physical systems, it’s also soft AI systems and, you know, governmental AI systems. So,
if an AI is recommending, for instance, prison sentences, if it’s recommending welfare payments,
if it’s recommending immigration status, then those are decisions that have real human impact
and should come under the category of safety critical, even though traditionally we reserve the
name of safety critical for physical systems that might cause injury. But actually, you know,
harms...

It can cause institutional harm, assuming it’s scaled.

Exactly.

Cause that rapidity of the...

Exactly.

Repetition of decisions they make.

That’s exactly right.

Scaled...

You know harms don’t have to be physical in order to be real harms, you know, harms can be
psychological, emotional, societal, as you say institutional. So, actually, there’s a very broad
range of AIs that I regard as safety critical. And all of them should be transparent in my view.
Now, what does that mean? Let me unpick that a bit. The first thing that we realized when we
started to write this standard is that transparency’s not one thing. So, transparency depends on
the question: who is it transparent for? And these are, if you like, the stakeholders, or perhaps
the better word is beneficiaries, of the standard. So, for instance, let me give you an example: the
kind of transparency that your elderly mother would need from her care robot is very different to
the transparency required by the engineer who fixes it or indeed if there’s an accident, the group
of people who investigate that accident. So those are three examples of beneficiaries or
stakeholders if you like, of transparency. So, you know, your elderly mother, for instance,
actually needs not transparency so much as explainability. So, what she needs is the ability to
say to the robot: “Why did you just do that?” Or even better, “What would you do if I fell down?”
Or “What would you do if I didn’t take my medicine?” That kind of explainability, transparency in
other words, will help her to understand what the robot’s doing, what it might do, and then trust
it, so in a way it builds trust and therefore the value of the robot is increased. I mean if robots
behave unpredictably and we humans don’t understand what they’re doing and why, we just
won’t use them. So, the technology will fail. So, this is really important to building trust. Now,
clearly, the engineer who fixes it needs a different kind of transparency. They need the
transparency of understanding how it works, at a level that means they can repair it. The
accident investigator, and this of course is particularly relevant to driverless cars, needs a kind
of log, the equivalent of a flight data recorder, a black box. And, you know, I have a paper
called The Case for an Ethical Black Box, where we argue that all robots, and some AIs, should
be fitted as standard with, if you like, a data logger that records a continuous log of not only the
decisions that that AI is making, but the sense data, maybe sampled sense data, the context if
you like of those decisions, and the internal processes that are leading to those decisions. Now
it’s very interesting that, you know, the National Transportation Safety Board’s report on the first
Tesla accident. The first well known one. The one where Joshua Brown was killed. I forget
exactly when that was. I think it was May 2016. Is that right? I may be wrong. But that accident
was subject to a very deep investigation and it’s very interesting that the investigators
discovered from examining the proprietary, you know, data logs that Tesla maintain, because
there is no standard for data logging in cars or vehicles generally except aircraft.

Yes.

The investigators discovered that the car had failed to perceive the truck, the large trailer, that
was crossing the highway in front of the car, but the logs were not able to reveal why the
autopilot failed to perceive it, and it’s the why question that’s really important. So, we need the
kind of transparency that allows an accident investigator to answer: why did the autopilot fail? I
mean, only then can we have a system, if you like, an environment, ideally a kind of regulatory
environment, which means that the faults that are discovered when there are accidents,
because accidents are inevitable, it’s not if, it’s when, can be fixed not just for the particular
manufacturer’s vehicles, but for all manufacturers. And you know, one of
the amazing things about air accident investigation, is that there’s a culture of data sharing
among manufacturers which is very well-established. In fact, in air accident investigation there’s
a kind of motto, which is very powerful, which goes: anybody’s accident is everybody’s
accident. And we need to move to that culture in robotics and AI. And I’m sorry to say we’re far
from that. Now, let’s get back to the question. Of course, transparency’s only one aspect of
autonomy, and there are other aspects of autonomy that we need to have standards for. And,
you know, some examples are standards for fail safety, for instance, standards that tell us how
we prove the safety of an autonomous system. Now a lot of people would be surprised that we
don’t already have those. It’s because it’s really hard, it’s extremely hard. I mean there are, for
instance, emerging new standards for things like lane assist in driverless cars, but that’s as far
as it goes. Now lane assist is just one little piece of autonomy for a car, not a driverless car, but
a, you know, regular car. So, we’re far from having standards for, for instance, autopilots for
driverless cars or trucks or other terrestrial vehicles.
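
To make the idea of the ethical black box described above concrete, here is a minimal sketch in
Python of the kind of continuous record such a data logger might keep: the decision taken, a
sample of the sense data, and a summary of the internal state that led to the decision. The field
names and the JSON-lines format are illustrative assumptions, not a schema from the paper or
from any existing standard.

    # Illustrative sketch only: one possible record format for an "ethical
    # black box" style data logger. Field names are assumptions, not a
    # published schema.
    import json
    import time

    def log_decision(logfile, decision, sense_sample, internal_state):
        """Append one timestamped record of a decision to the black-box log."""
        record = {
            "timestamp": time.time(),
            "decision": decision,              # e.g. "brake" or "steer_left"
            "sense_sample": sense_sample,      # e.g. down-sampled sensor readings
            "internal_state": internal_state,  # e.g. which rule or model output fired
        }
        logfile.write(json.dumps(record) + "\n")

    # Usage: an append-only log lets an accident investigator later ask *why*
    # a decision was made, not just *what* the system did.
    with open("ethical_black_box.jsonl", "a") as f:
        log_decision(f, "brake", {"lidar_min_range_m": 1.8}, {"rule": "obstacle_ahead"})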

It’s interesting because, as you were speaking, I was trying to think about how, in my mind at
least, we move from standards to identifying accountability.

Yeah, yeah.

Stakeholders who would be...

Exactly.

Responsible. And the ways in which that could also be an emerging labor market on some level.

Exactly, yes.

Because of the sheer scale of work that would need to be done for the management of the data
and also the very mechanisms for gathering that information.

Yeah, absolutely agree.

That is sometimes quite tactile and not necessarily going to be taken on by another AI system.

No, you’re absolutely right.

And I’m wondering, as you’re starting to work with the IEEE, what strategies have been
discussed, even at the lowest level, in terms of where you begin this conversation about who
would be accountable?

Yeah.

Do you start it at the governmental level, by that initial education campaign on what these
systems actually are...

Yes, I mean.

Governmental entities to develop.

Of the fundamental ethical principles that have been, if you like, articulated by the IEEE, one is
that systems should be transparent. Another one is that systems should be accountable. And,
you know, we realized very early in developing those principles that of course you can’t have
accountability without transparency. If you don’t know what’s gone wrong and how it’s gone
wrong, you can’t figure out who’s responsible for it. But one thing is very sure: the system itself,
the AI, is never accountable. It’s always humans. Humans or collections of humans, companies,
organizations, institutions, or even governments. And, you know, you’re asking me a question
which is really a little bit above my pay grade, if you like, in the IEEE.

It’s an unfair, it’s an admittedly unfair question, but on some level, I think...

Sure.

It’s where we’re headed, especially given the richness of the work you’re doing in regards to
standards.

But I know that Konstantinos, you know, the director of the Standards Association, and the
leads on this initiative are having these conversations, you know, at governmental level for
instance. I mean, in fact, I shared one of those conversations with Konstantinos when we both
gave oral evidence to the House of Lords Select Committee on AI just six months or so ago.
So, we have to have multiple conversations at all levels, if you like, from central government
through to standards bodies, which we’re doing, to regulatory authorities, and at a broader
societal level because, you know, there needs, and this is where we come back to public
engagement.

Yes.

There needs to be a conversation in society so that we, you know, as a public can be aware of
the risks and can therefore lobby our representatives to make law on our behalf. But, you know,
people often say to me, well how do you, you know, how can standards get enforced? And
often, interestingly, you don’t need regulation. So, there’s a really nice piece of soft governance
which I’ve been advocating for. So, governments typically procure a lot, you know, in this
country the National Health Service procures an enormous amount of stuff, and a lot of that
increasingly will be robotics and AI technology. Now, what we can do in that procurement
process is this: if those procurers specify, as a condition of tender, that those systems should
comply with the standards that we’ve been talking about, then that makes a big difference,
because, you know, the prime contractors will then in turn feed that requirement down the
supply chain. And, for large procurers like government, it means that those standards become
embedded really rather quickly.

It could also be perhaps...

Yeah.

Integrated into expectations for grant proposals for...

Exactly.

Developing research.

Exactly right, that’s exactly right. So, the point is you don’t need to make standards law in order
to make them effective. They can become de facto required, you know, if you like mandated,
just by virtue of procurement.

And I wonder, as we start to close the interview now, if I can ask you to speak a little bit about
your position as a professor, working at this beautiful facility that we just walked through, with
40% women in your lab. How does your work as an educator also shift this culture, in regard to
thinking about embedding many of these principles and questions?

Sure. Sure.

From the design phase of research up to these higher levels of shifting the entire paradigm.

Yeah, absolutely. And I think, I mean, as an educator and erstwhile teacher, I don’t do teaching
so much these days, I think that it’s really important that we embed ethics education at all levels
of technology teaching and training. And, you know, again, I really like the phrase ethically
aligned design. What we would ideally like to see, and I’d like to see, is a new generation of
developers and technologists who are ethically aware and who, in a sense, therefore design
ethics in from the very beginning. Now, that isn’t the case right now. But it’s interesting that, you
know, we are seeing cases of individual developers in large AI companies who are saying, “No,
I’m not happy about this, about the ethics of this particular piece of AI,” and in some cases
they’re resigning, and in other cases they’re writing, you know, to, if you like, the senior
management, saying, “No, no, this is not acceptable.” So, that’s good, you know. We need to
have ethically principled professional developers, creators, and manufacturers at the individual
level, but also principled leadership. And of course, that’s a
tougher nut to crack. That really is more difficult, so, you know, we need codes of conduct,
almost, if you like, a Hippocratic Oath for engineers, and, you know, I was really pleased
to see the ACM Code of Ethics for engineers. Of course, it needs to have teeth. So, what we
need are professional bodies like, you know, the ACM, like the IEEE, who will actually, as it
were, disbar members who’ve been shown to behave unethically in the same way that a
medical doctor can be barred from practice. I think we need to go further than just having the
codes of conduct. They need to have teeth, rather like, well, you know, not just rather like,
exactly the way we do in medicine. And we also need ethical governance. Now that’s much
more difficult and in fact I have a new paper with my colleague Marina Jirotka on ethical
governance in robotics and AI. Really setting out, if you like, ways in which companies,
organizations, and institutions can embed ethical governance, and, you know, we suggest a
whole load of measures that they should implement. You know, what we’ve seen in the last two
or three years is a kind of explosion of ethical principles, which is great, it’s really great. But of
course, principles are not practice. What we need now is to move to the next stage where those
principles are actually put into effect. Put into practice. And that’s ethical governance. And we’re
not there yet.

Thank you so much for this conversation. Is there anything that...


[...] I’m particularly interested in a technology which we call simulation-based internal models.
So, for a couple of years what we’ve been doing is putting a simulator of a robot inside that
robot. Now, it’s an odd thing to say but it kind of makes sense. Because we already have robot
simulators, so robot simulation is a standard technology. We use robot simulations all the time
to test ideas and so on without having to try them out on the real robot, ’cause real robots are
expensive and often it takes longer to try them out. So, a robot simulator is a common tool. But
it’s rather rare to put a simulator of a robot inside that exact robot. In fact, it’s kind of hard to do.
It’s a tricky thing to do. But it’s also very powerful. ’Cause if a robot has a simulation of itself and
its environment inside itself, including other actors in that environment, which could be people
or other robots, then it can ask itself questions like: what would happen if I tried this action? Or
that action? So, in other words, it can model the consequences of each of its next possible
actions. And we call this a consequence engine. It means that we can make robots that can
predict the future, they can anticipate the future. And we’ve tested this in several different ways,
some of which are really very exciting. So, one of them is making very simple ethical robots.
Now, for different reasons, I’m not sure I think ethical robots are a good idea, but nevertheless
scientifically we’ve shown that it is possible to make robots that can make very simple ethical
decisions based on predicting the future.
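
As a rough illustration of the consequence-engine idea described above, here is a minimal
sketch in Python. The one-dimensional world, the function names, and the harm scoring are all
simplifying assumptions made for illustration; they are not the lab’s actual implementation.

    # Illustrative sketch of a consequence engine: the robot holds a simple
    # internal model of itself and another agent, simulates each candidate
    # action, and chooses the one with the least predicted harm.
    from dataclasses import dataclass

    @dataclass
    class WorldState:
        robot_pos: int   # robot position along a one-dimensional corridor
        human_pos: int   # position of the (simulated) human
        hole_pos: int    # location of the hazard

    def simulate(state, robot_action):
        """Internal model: roll the world forward one step for a candidate action.
        The human keeps walking toward the hole; the robot moves by robot_action."""
        return WorldState(state.robot_pos + robot_action,
                          state.human_pos + 1,
                          state.hole_pos)

    def predicted_harm(state):
        """Harm is predicted if the human reaches the hole and the robot
        has not moved to intercept them."""
        human_falls = state.human_pos >= state.hole_pos and state.robot_pos != state.human_pos
        return 1.0 if human_falls else 0.0

    def choose_action(state, candidate_actions=(0, 1, 2, 3, 4)):
        # Ask "what would happen if I tried this action?" for each candidate,
        # then act to minimise the predicted harm.
        return min(candidate_actions, key=lambda a: predicted_harm(simulate(state, a)))

    print(choose_action(WorldState(robot_pos=0, human_pos=3, hole_pos=4)))  # prints 4

In the real work the internal model is of course a full robot simulator rather than a toy world
model, but the loop of simulate, evaluate, and select is the core of the consequence engine idea.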

Are they predicting, and this is where we fall into semantics...

Yeah.

Rather than predicting the future, they are predicting their future?

Yes, not just their future.

Particular to their...

So, let me give you an example. If you’re walking down the sidewalk and there’s a hole in the
pavement, a hole because of road work or something or, you know, builders or whoever,
maintenance guys are doing something, and you see a child about to fall into that hole because
they’re not looking where they’re going. Maybe they’re peering at their smartphone. You will
probably try and intervene to prevent them, you know, from falling into the hole in the ground.
Now why is that? It’s not just because you’re a good person. It’s because you have the ability to
predict the future. Not just your own future, but her future too. You can predict that if she doesn’t
notice the hole, she’ll fall into it and come to harm.

But that’s learned behavior by virtue of an individual’s relationship to the world.

That’s true.

As well as social.

Yes.

Engagement.

That’s true. But we can program a very simple robot with the rules. The simulator of course will
have in it, if you initialize it, the hole in the ground, and it means that the robot, the ethical robot,
can predict whether another robot, which is pretending to be a human not looking where they’re
going, might fall into the hole, and it can decide what to do on the basis of that prediction. So
that’s very simple ethical behavior. But we’ve also shown the same technique for just safe
behavior. So, a robot can be safer if it can predict what might happen to it and what’s going on
around it. And the analogy is you walking down a busy corridor in an airport where everybody
else is coming in the opposite direction. You’ll navigate by essentially [...] and that ability is
useful also for
driverless cars of course, that’s another example.

The transparency question.

That’s right, yes. But, we’ve also discovered the same approach can be used to build robots that
can imitate, not just your actions, but your goals, by simulating what you’re trying to do and how
it would do what you’re trying to do. So, that’s the imitation of goals, which is interestingly quite
different to the imitation of actions. And, perhaps the most interesting, if you like, application of
this is that if a robot has a simulation of itself and others inside itself, then it can answer
questions, the questions I talked about earlier, like: why did you just do that, and what would you
do if? So, I think that simulation-based internal models are the key to explainability for social
robots. And I think that’s perhaps the most exciting practical outcome. I mean there are other
scientific questions I’m interested in like, for instance I have a recent paper called Experiments
in Artificial Theory of Mind. So, I’m proposing in that paper that a simulation-based internal
model is a theory-driven, realizable, computational model of theory of mind, in other words
artificial theory of mind. That’s controversial but interesting anyway.

Yeah.

I hope, I hope it’s interesting.

Well thank you so much.

Great, well it’s been a pleasure. Thank you.
