
So, could you start with introducing yourself and your position here?

I’m Dan Siewiorek. I hold a tri-appointment in Electrical and Computer Engineering and the School of Computer Science, in particular the Human-Computer Interaction Institute. I've been at CMU 47 years.

Excellent.

Lots of changes.

I can only imagine. Can you talk a little bit about how your work in AI began?

Well, one of the things you have to understand is that I revolve around. So, in the 1970s we did multiprocessors, in the 1980s we did computer-aided design, in the 1990s we did wearable computing, in the 2000s we did basically context-aware computing, and now we're doing virtual coaches. So, the first time I got involved with AI was back in the 1970s when we were doing computer-aided design, and there was something called layout; in other words, what you want to do is lay out a printed circuit board, or you wanted to [inaudible @1:06] a printed circuit board. And at that time there were these blackboard architectures and there were these rule-based systems, and we actually created a rule-based system for something called a switch box, where you have four sides and you have things entering and things exiting, and you want them to connect without crossing over. And so we actually did a multi-expert system on that, including the common-sense expert that said, when nobody else can move, everybody moves one step into the center. And we were actually able to route things that were claimed unroutable in the literature. Unfortunately, it didn't really take off commercially, because the rule-based systems were slow. Once our competition figured out what it was doing, they could hard-code it a lot faster. So, much to Allen Newell's chagrin, we didn't get as much press for what we did compared to another West Coast university that just hard-coded it up and treated it as their own. But anyway.

It will remain unnamed.

Yeah, but one thing we found out when we were doing computer-aided design was that most of the AI researchers did, let me call them, shallow systems. So, another thing we were doing was basically laying out integrated circuit chips. MIT had a system that was a 300-rule rule-based system, and we took an ECE student who had worked for AT&T, I think one summer, and he did a system that had 3,000 rules. And it seemed to be the AI researchers would stop about an order of magnitude short of what you needed in terms of knowledge to really make it a viable, competitive tool. So, anyway, that was just sort of a little anecdotal observation at the time. So that was the first encounter, but then we went on and did a lot of
things with computer-aided design. We probably created what was, I'll call it, a silicon compiler, where we could take a description, and this is a really good example of Raj Reddy's influence. So, in about 1980, the Robotics Institute had just started, and they were trying to use computers to control robotic arms. And Raj always needed something that was bigger or faster than what he could buy, so he wanted something that had twice the memory, but it took six months to get it done, so he said, "Why can't you do that in ten minutes?" Well, we did it in ten minutes, but it took a year, two PhD, about four master's, and six undergraduate degrees to do it, and we actually created a company called Omniview. We were actually able to take the type of specification you get from the back of a product brochure, like which processor, what clock speed, how much memory, what type of I/O ports, how long you wanted the battery to last, system reliability, cost. And we could literally do that in about 10 minutes, get a parts list and netlist, then go into an optimization phase that would actually cut the resources in half. But the problem is, we started a company, and the students were involved in the company, and
unfortunately, we were way too far ahead of the field. At the time that we were talking about
doing synthesis, less than half the companies were doing even simulation. And so that’s one of
the times that you know the research can get out too far ahead. A lot of these projects we
actually wrote books about, so there's a book about the MICON project. Then we wanted to demonstrate that MICON could actually build things, interesting things. We had an Engineering Design Research Center funded by NSF at this time, and we had an industrial affiliates program, so we had about 20 people from industry coming for 10 weeks during the summer to learn about design. We decided we didn't want to just have them sit in lectures for eight hours a day; we wanted them to design something. And so we ended up designing what became our first wearable computer, in about 1991. And this is what it was: we actually synthesized the electronic board with MICON, and then some design students worked with the people from industry to come up with the shape of the structure. You wore it over your shoulder like a man-purse, and you rest your hand on the outcrop here. The buttons each had a different feel, so you could use them without looking.
And it’s very interesting, we actually made 30 copies so everybody in the class went home with
one. And we thought it was just a demonstration of our computer-aided design program, but
when we showed it to people they came up with all kinds of applications for it. And so that
started our wearable computing. So, for about a decade we built about 20 systems, including systems for bridge inspectors and systems that did welds for nuclear submarines. We did maintenance for Marines at Camp Hamilton; we got out into the field. We worked on F14s at Warner Robins Air Force Base. So you've got probably about 20 different systems, and we were looking for the repeated patterns, much like the desktop was driven by, you know, PowerPoint and Excel and word processing. And we came up with those repeated patterns. It's kind of interesting, now that you're starting to see these wearable computers with head-mounted displays come out, you're starting to see them rediscover that it's not Google Glass, which was meant to be a social thing, but helping people in manufacturing, and things like we learned: our display had a place you could just glance at for your information, but your attention was still in the real world; you're just getting your instructions for what you're manufacturing, or what you're picking in the warehouse, and so forth. So, all of that started coming out. But just a little footnote to that: sometimes you transfer things by software, sometimes you transfer things by designs, but we transferred a lot of our stuff by people. So, the first wearable computing company, BodyMedia, was spun out of our group, and John Stivoric and Chris Kasabach worked on these projects. As a matter of fact, probably the first computer, Chris Kasabach did this one, and this one was done by John Stivoric. And you can sort of see this was not your typical square box.

It’s not.

It had a lot of character to it. The metal was sort of a heat sink, and this is a one-dimensional mouse.
So, everything on your screen was in a circle, and then you could select it. And if you’re right-
handed you got this button, left-handed you got this button. So, we actually put that out in the
field. It actually won an international design award. Although the design jury missed a point, they
said it looked almost good enough to work, even though we had it out in the field working.

What year was that?

This was 1993. So, if you take a look at what's coming out now, you know, Google Glass 2 and so forth, this technology... so, if you want to look at where things are going to be in 25 years, look at the universities' research. But anyway, we ended up transferring knowledge by people. So, John, basically his company got bought by Jawbone, and now he's inside of Google X leading their design team, because, quote unquote, he's been doing it for 20 years. Well, two engineers who did the electronics inside of this started at a local company and did the first five generations of Fitbit products.

Okay.

And then, in an only-in-Pittsburgh thing, one of the two founders of Fitbit was my older daughter's senior prom date.

Oh my goodness. Family affair of sorts.

But anyway, we obviously took a couple of other directions. We did our first smartwatch in about 2003, which was about 10 years before other people came out with smartwatches. And we had a microphone, accelerometers, and a light sensor, and we could actually use machine learning algorithms to decide what room you're in. It turns out this room has a sound and a light fingerprint which is different from other rooms. So, we could tell if, you know, a student was in their home, or on the bus, or in the grocery store, or in their office, or at lunch, just by the signature of the physical activity as well as the light and sound background. And so that was a good 10 years before. And then, this device: we were working with Compaq, which was the old Digital Equipment Corporation, and they built something called the Itsy, which was this top part here; it had a touch screen and it had accelerometers and so forth. And we added WiFi to it and a battery. And you could consider this the first smartphone, in about 1999. It had the functionality that we take for granted in smartphones. A little bigger, but.

A little heavier.

A little heavier, but that’s what happens when technology has a chance to take hold.

Right.

So anyway, that's where we started getting into context awareness, using sensors and then machine learning algorithms to figure out, first of all, what you're doing, and then where you were, and other elements of your context. So now we're working on what we call virtual coaches. So, the wearable computer got data out to where you're working, and became portable. And then the sensors made it aware of your context. Now, giving guidance to the user, closing that feedback loop, is a virtual coach. So, we first started out with the virtual coaches with people who were doing exercises to recover from stroke. There's a company called Myomo, or myoMOTION, that actually had sort of an arm brace with a motor at the elbow. So, if my right side is affected, I perhaps can't do something like bring a cup to my mouth. But I can do that with a therapist's help; the therapist is only there for six weeks, though, and after that it takes a while, at least six months, before you retrain your brain. So, the Myomo was there to give sort of a power assist.
So, what we're trying to do is use things like the Kinect to watch a person doing their exercise, to try to decide if they're doing it correctly. Well, first of all, are they doing it at all? And then, are they doing it correctly? And so we actually created some games, like a fishing game for bringing a cup to your mouth: you would wait for the fish to get near the hook and then you bring it up to your mouth. We started some work with Portugal; there's a Portuguese-CMU joint degree. And so we started working with them, and they had access to stroke survivors, and we were doing some exercises. Just doing the straight exercise as fast as you can, you can do about 600 of those a day; that's what it took to really train your brain. Now, our virtual coach did about 450, but when we did the game, only 300. And it took us one of these moments where you say "Duh." Well, the "Duh" was that there was a game situation: you had to wait for the fish to get near the hook before you could pull it into the boat. So that, that...

So that delayed it.

...cut down the repetition rate. So now we've been working with Flappy Bird and other things like that, which require, what would you say, constant motion to keep the repetition rate up. The other thing we did: now we're working with a research institute at Northwestern, in Chicago, to help people who are getting prosthetic limbs practice using the limb before they actually get it. So, what happens is, there's a sleeve that senses your muscle movement, so we put the person basically in a virtual world where they can now control something like a shopping cart and pick things up, or play Flappy Bird if we want left and right, and so forth. But the idea is to get them used to using their muscles before they get the limb, because when they get the limb, if they haven't been able to practice controlling it, their motivation goes away and they never end up using the limb at all. So we've got about a dozen games and about half a dozen virtual reality games. There's a VA hospital down in Texas that is going to try to use these with people who are about to get their prosthetic limbs and see if games really help them get engaged.
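
To illustrate one small piece of what a coach like that has to do, here is a minimal sketch of counting repetitions from a joint-angle trace, the kind of elbow angle you can compute from Kinect skeleton joints, using a smoothed signal and two thresholds with hysteresis so jitter near a threshold isn't double-counted. The thresholds, smoothing window, and synthetic trace are illustrative assumptions, not values from the actual coach.

```python
# Sketch: count exercise repetitions from a joint-angle time series.
# A "cup to mouth" rep is the elbow angle dropping below a flexed threshold and
# then returning above an extended threshold (hysteresis avoids double-counting
# sensor jitter). Thresholds, smoothing, and the fake trace are illustrative only.
import numpy as np


def count_reps(elbow_angle_deg, flexed=60.0, extended=150.0, smooth=5):
    """Count flex/extend cycles in a 1-D array of elbow angles in degrees."""
    angle = np.convolve(elbow_angle_deg, np.ones(smooth) / smooth, mode="valid")
    reps, is_flexed = 0, False
    for a in angle:
        if not is_flexed and a < flexed:       # arm has fully flexed (cup at mouth)
            is_flexed = True
        elif is_flexed and a > extended:       # and has returned to extended
            is_flexed = False
            reps += 1
    return reps


if __name__ == "__main__":
    t = np.linspace(0, 30, 900)                    # 30 s of fake skeleton samples
    trace = 105 + 60 * np.cos(2 * np.pi * t / 5)   # one slow repetition every 5 s
    print(count_reps(trace))                       # prints 6
```

Judging whether a repetition was done correctly would sit on top of a count like this, for example by also checking range of motion or trunk posture from the same skeleton data.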

Yeah, I mean this is just an extraordinary cross-section of projects.

Well yeah, it's one of these things: if you asked me where I would be in 10 years, I couldn't tell you. I can tell you the path I came by, but it always sort of seemed to be a serendipitous thing. You know, Raj Reddy coming by, or somebody offering you a challenge of "why don't you do this," and you start thinking about it.

And that likes.

Yeah.

Yeah. So, can you tell me, in your earlier years, when you were first getting started, were you
ever influenced by popular culture, science fiction or anything?

Well, I mean, I was always that October Sky kid, you know.

Okay.

Well, 1957, you go out in the backyard and there was Sputnik. And everything and anything NASA, you know, I gobbled that up, and sort of said, well, I don't know what engineering is, but I guess I want to be one. And I got into engineering. Just a little footnote, though: my dream job was to work for NASA, and actually my summer job senior year in college was working for NASA up in Cleveland, at what's now the Glenn Center. I actually have a telegram from Glenn wishing me luck on my ear surgery. My parents, I don't know how my parents worked it out; they're from Ohio and Glenn was from Ohio. So anyway, it's NASA Glenn now. We were actually working on the supersonic transport, and they had the high-speed wind tunnels, but the shock waves off the tunnel walls would corrupt the data, so they were going to do high-speed runs over Lake Erie with a delta-wing fighter. And halfway through the summer, I got laid off. The summer interns got laid off with the gardeners, and it was '68; Apollo was just about, you know, winding down, even though they still hadn't gotten to the moon. And so, at that point, I decided that maybe NASA wasn't stable enough for a career, so luckily that drove me to computing and made up my mind what I wanted to do.

Yeah, sounds like most of the influences were in the real world.

Well yeah.

And pushing the boundaries of the material world.

Well yeah, I think that's a good observation. I like to build things, and you like to get them out there doing something. I don't like to just build something to build it. Hopefully, you're serving some type of need.

Yeah. You've offered this cross-section of devices that have direct application in populations that have a need. In the current climate, many of these AI devices are becoming quite complex. I'm wondering if you can think of an example of a very accurate or useful communication in regard to AI that keeps the public engaged, but also understanding the complexity of the systems?

Well, I'm a little challenged there. Let me divert that by giving another example.

Great.

So, the wearable computer, you know, basically for each of these decades, we would wrap up what we learned in a textbook, and usually try to transfer the technology, often by starting a company. So, we did start a company based on wearable computing, when was it, early 90s, mid-90s, called Inmedius. And we actually picked aircraft maintenance because it was a high-value thing. At that time USAir had a maintenance hub here, and if they had a fully loaded airplane that wasn't pushing back, they were losing a thousand dollars a minute. And in those days, head-up displays were about $5,000, so you weren't going to do that commercially, I mean for the general public, but for something like aircraft maintenance you could. And everybody had the same problem: they had a 10,000-page manual, and you only needed two pages, but how do I find the right two pages? How do I share knowledge? Somebody fixed this problem before; what was it? And so forth. So, we created software called interactive electronic technical manuals. Literally every aircraft is unique. It comes out, 200 built alike, but they come in and get modified at different times and so forth. So, what they had was a paper library with huge books, and then you had all these inserts: if you wanted to look at your airplane, you put your inserts here and so forth. So that was just ready-made for electronics and a computer. So anyway, we eventually had the whole F18 fleet in the world using our interactive software. And finally, Boeing wanted to commercialize that for commercial aircraft. But from that beginning to being bought out by Boeing was 17 years.

Wow.

So, this is not your, you know, hockey-stick Silicon Valley story. I mean, during those 17 years we employed 40 people in good jobs and stuff like that. But sometimes it just takes that long for the technology to mature and then, you know, get caught up with. It's really very interesting now: you see some of these new head-up displays coming out, and Boeing [inaudible @22:41] as one of the users, doing wire harnesses. You have these large wire bundles in airplanes. Well, we saw a demonstration of that by a guy named Mizell at Boeing in 1996. So they're finally now starting to use it on the production floor. So, I guess the challenge is to, you know, keep things alive, and ideally the lessons that were learned get passed along. Well, let me just tell you a little anecdote. We did something called the health kiosk; this is for older people who are living in high-rise apartments and have to take their blood pressure periodically and so forth. So, we built a system where the doctor could say, "Ms. Jones, we'd like you
to take your blood pressure in the morning when you get up, or your pulse or your blood oxygen
before you go to bed at night.” And they could walk up to the kiosk, identify themselves with an
RFID tag, and see what they had to do. And there’s instructions there. And as soon as they did
it, the data would be sent to their doctor. Well, the nursing homes wanted to use a touchscreen, because a keyboard and mouse would be foreign to people. But so was a touchscreen. They didn't know you could actually touch things on the screen and something would happen. So, we had things like "Touch here." So they touched. Nothing happened. It's broken. I don't trust it. I'll never use it again. I'll tell my friends to never use it again. Well, every software engineer is looking for the release. But you told me touch, so they're very literal, so they touch, and hold. So, when we changed it to "tap," all of a sudden it started working, and it was the greatest thing around for the people. But you don't get those things in theory or in the lab. You know, now I'm seeing more and more user interfaces on smartphones that will say "tap," so they don't get themselves into that quandary as you get to the older generation that hasn't had the experience of growing up with the technology.

Yeah, I mean, I just find that to be a really illustrative anecdote, in that my next question was going to be what you see as your responsibility as a technologist in communicating that. But it sounds to me, and please correct me if I'm inferring incorrectly, that the user population is identified in much of the preliminary work that you do as you're developing a new device, whether it's the airplane mechanic or this group of individuals who really need to take regular blood pressure readings.

Right.

And that the features of the design, as well as how you communicate about that design, are really developed in the earliest phases, before anything is actually built, by learning how to communicate with the user population. Is that the case?

Yeah, so we do something called user-centered design. And the idea is you get to your, you
know, I don’t like the word, stakeholders, but you get your people together. So, in this particular
case, the health kiosk, we had the nurses, we had the patients, we had the doctors. And we
start out and we create a baseline scenario, a story with personas and that. A day in the life of.
And then we create a visionary scenario of how the technology might be able to improve that
and of course we’re checking with our customers as we go along. And then we create a paper
prototype if you please. Then we actually go out and build a physical prototype and install it and
get some user feedback. So, if you look at my table over there, I'm talking about the quilt: we've actually run this class 25 times, and each time the students basically create an identity for themselves, create their own color scheme and so forth. We surprise them with a t-shirt with their scheme at the end of the last day of class. So we got the idea, and I've got three quilts like this, which represent the projects. So, every one of these projects had that type of lead-up to it, working with end users, but the main thing is that we do it in 15 weeks. So, we have a functional prototype in 15 weeks. We've done things like congestive heart failure; we did scoliosis last spring. These are braces that young girls have to wear to straighten out their spine, and they're uncomfortable, so we were sensing the temperature and pressure and so forth. So, we created a brace that was an active brace. And it's amazing, the students don't know what can't be done, so you end up with these incredible things, and every now and then, you know, a good enough idea to start a company or something. So that uses user-centered design with the people engaged right from the beginning. And even if you don't get it exactly right, they're champions for you with the rest of the community, because they know you listened and you're trying to do things with them and not ignoring them.

Yeah and that communication principle is not only in your practice it sounds like, but core to
your work as an educator as well.

Yeah, definitely, and of course all the students. So, we get about 25 students in the class, so it's like five to seven thousand engineering hours. So, you can do a fairly decent system with that amount of time, and they're all working on the same system.

Excellent. So, there’s a lot of discourse in mainstream media, pertaining to the influence that AI
will have on the way that people work, but from your vantage point, can you tell me a little bit
about how you think AI systems have changed the way that people work up until now?

Well, do you remember Clippy? Okay, so Clippy was this AI help-you-write-documents tool from Microsoft. And Clippy would show up, a cute little paperclip.

The little paperclip. I remember that icon.

Yeah, and most people wanted to shoot it, because it was annoying as all get out. And so it's a very subtle thing. Again, you know, we have a mantra in the Human-Computer Interaction Institute: the "User" is not "I." And Randy Pausch would make his students go out to Monroeville Mall; you want to show your system, take it out to mall walkers or somebody like that, people who aren't technical, because nobody on campus is typical.

Yes.

Typical of the population. And I think Microsoft maybe got caught by that: you know, they often do their test marketing in Seattle. Well, Seattle is not...

It’s not typical either.

And of course, what they give as prizes is next-generation Microsoft software, so again there's some self-selection going on. So, you really have to get out to, you know, whoever the mainline users are going to be. And we preferred initially starting with the industrial area, because, you know, if you could save them time or something, there's a direct cost-benefit that you could quantify. People are, quote unquote, wearing uniforms, so you can make them use the technology. Whereas, if it's voluntary, you have to have a pretty captivating thing. You certainly see, you know, different things, tweeting and stuff like that, social media; some things have caught on like that, but the vast majority of things don't have that user community, and so it's somebody's idea of what people would want, but usually it's not serving what other people actually want.

Yeah. Do you have any thoughts on how you think current AI systems will affect the way that people work in the coming 20 years? I know it's a difficult projection, but... [rest of the question cut off @5:53]

Well, I can tell you what I would like for me. There are things that I do in somewhat of a repetitive way. So, any time something breaks at home, and it's been a year or two since I fixed it, I start doing the problem-solving all over again, making the same mistakes. Like my windshield wiper: it's usually November when I realize it needs to be changed, but I don't know how I did it the last time. So I go and, you know, I fumble around, and then I'll forget what I did. So, I'd like a system that looked over my shoulder and remembered the last thing I did and could remind me of that, as opposed to... you know, there are just these things I do rarely, and things I do fairly repetitively; that's one thing. Now, I'm not sure this is AI, but where I see the workload, and my frustration, go up too is that everybody wants you to fill out their form on the web. It makes their job easier, but you now have to learn their peculiarities, and for some reason, I tend to break things very easily, or get into a state that they didn't anticipate. And I don't know what it is, but that's very frustrating.

They didn’t have you in the group of people they were working with when they were designing.

Yeah maybe I wasn’t mainstream there or whatever, but...

You have to start walking the Mall.

Yeah, so there are just all kinds of examples like that, where you get yourself into a Catch-22: you can't do this without that, but you don't have that, so how do you do this? Just very recently, you know, the Pittsburgh Post-Gazette went to electronic media and they're not publishing paper two days a week, so I had to finally activate my account; my wife likes the newspaper, and we didn't need the electronic version until now. And so it took me 45 minutes to find my account number, and I went through any number of, you know, you get the answering machine, well, the robot or whatever it is, and they say press this, press this, press this. I go through it and they cut me off. They just drop me. That's the only instruction they give you; it probably increases their response time, I guess, or whatever. But I always seem to have, on these automated systems, you know, I'm thinking differently or something, but I end up probably taking five times longer to do things than if I did it daily and knew about it. So, I think for those rare activities, I'd love something to, you know, reach over my shoulder and tell me: press the number one button, this is what the hash mark is, or whatever. I just need to know what I need to do. I don't need a big long list of press one to do this, press two to do this.

Yeah, it's such an interesting shift in relationships, because it sounds to me that what you're describing there is actually a shift in power negotiation, right? And so, what you've shared in the development of these various systems, whether developed over 17 years or in the short prototype practice with your students, really puts the emphasis on the needs articulated by a user population. It sounds to me, in this example of the newspaper, that the power negotiation has shifted, where the person, instead of using this as a tool, is kind of subject to the design features of that tool.

Right.

Can you talk about that a little bit?

Well, I mean, in the newspaper case, the thing was to sort of blast through to get to a person. And then that person, now, he couldn't help me, but he got me to the person who could help me and give me my account number, so that I could then sign onto my web account to get the electronic version delivered to my mailbox. And, you know, if you knew what you were doing, it was five minutes. But if you don't know what you are doing, you run out of what you think are the obvious things. So, I think it takes a certain critical mass to have enough people try things. We haven't even talked about people with disabilities, or color-blindness, or other things that are naturally assumed not to happen. One of our faculty, Jen Mankoff, did some work on evaluating your website to see if it was navigable by somebody who is blind, or whether a screen reader could make sense out of it. So often, the people that need the help the most are nowhere near the designers, or nowhere near the experience of the designers. That's why the assistive technologies that come out are often from people who either experience it themselves, or they have a family member or somebody that they're trying to help.

Yeah, it’s an exercise in empathy. Can you think of a system either that you’ve worked on or
you’ve come in contact with, where an AI system actually empowers an individual or a group of
people?

I'm sure there are some, and they're probably so buried that we don't consider them AI systems anymore. But, you know, to some extent, the automatic fill-in-the-blanks and that; now, occasionally it just bothers me. You know, if I tell it no the first time, it should learn. So, there are these things that automatically take strings and make them into web pointers. Well, sometimes I don't want that, but every time I put one in, I have to tell it no. So, I guess there are things, and they're better than Clippy, but at times they're frustrating; you can't figure out how to turn them off. But there are certainly getting to be more and more of these things; maybe they're doing minor functions and you don't really think of them as an AI system. So, I sort of liken it to an automobile. Your automobile probably has like 60 motors in it, electric motors. Well, you don't think of the electric motors: they pop the gas tank door, they pop the trunk, control the windows, the windshield wipers, and so forth. So, you think of them as a service, or their function, and I think that's what you're seeing with AI systems. The AI is buried inside, you know, Echo or something like that. And I mean, those are certainly still in their trial area, and I guess weather and time are sort of the most often requested things, which is kind of expensive for just getting those two things, but you've got to get enough out there so you get the imagination and the applications. Because usually the platform developers don't really have a good handle on applications, and it takes somebody with a different type of thinking to see how they could use it in a different way.

So, can you think of an example that’s really undermining human decision-making power?

Undermining.

Perhaps where someone’s deferring to the system.

Well, I am concerned about a couple of things, and I don't know the solutions to them. You talk about the automatic cars, self-driving cars; well, the default seems to be, okay, when we don't know what to do, give it back to the driver. Well, I mean, if the driver's involved in driving the car, he's going to be aware of the situation. But if you're chatting or something like this, and, you know, it doesn't take much of a distraction, talk about cellphones or that, but even talking to somebody in the seat next to you can be enough of a distraction. Every now and then I just go automatic on which way I turn, and my wife says, "Where are we going?" And because we were talking, I was going on my old habits. But if I'm not having those habits exercised, then my ability to become aware of the situation and figure out the right way to handle it goes down. So I'm really concerned about, you know, totally automated cars. There are things, you know, like the automatic braking systems that have worked really, really well. But we don't even think of them; I guess you can think of them as an AI system. They probably were in the first incarnation, but maybe, like my router type of thing, once you've figured out exactly the best way to do it, you don't need all the scaffolding of AI; you can just hard-code the algorithms.

Right.

But discovering those things needs a tool, and AI provides a way of getting a lot of dispersed knowledge, you know, working on the same problem.

So, in regard to some of these systems that are moving towards autonomy, do you see intrinsic value in the prospect of machine autonomy?

You mean in terms of financial gain or what?

I think it’s open to interpretation, so what might be the values in machine autonomy? What do
you think of it?

Well, I think of a couple of things that are not necessarily related. So, one thing I don't think we've addressed at all, but I'll call it garbage collection. We're going to have this Internet of Things, and there are going to be billions and billions of these things. Nobody's going to know where they are; nobody's going to know what they're doing. And they'll be generating their own power, so they won't even naturally die. And so, an example I like to give: in the very early days of the space race, the U.S. was trying to put up a satellite using commercial rather than military hardware, and you had something called Vanguard. Well, Vanguard was sort of a grapefruit-sized satellite, and it had solar cells, and the only thing it did was go "beep beep beep beep," just to show they could do it. Well, they forgot to put in a shut-off switch, and so for 25 years an important part of the satellite wireless spectrum was taken up by "beep beep beep beep beep." And so we're not putting in shut-off switches. You know, some of these systems, you might want to think that they should commit suicide after a while, because I'm always concerned about unlearning things. The system may have learned something that was important at one time, but things changed. They're always talking about learning systems, but they don't talk about how you unlearn. So, I think there are some challenges there, because we've got more and more of these systems, but we don't know how they interact. We don't even know how to turn them off anymore.

So, that's an interesting idea with the unlearning concern. It also links to fears pertaining to surveillance, or concerns associated with causing harm with the system.

[...] Well, somebody was telling me they just looked at a mattress ad or something, and then all of a sudden, every time they went to the web, they got an advertisement popup for mattresses. And so, you know, these things are sort of, quote unquote, free, but in order to be free they've got to make money somehow. So they're either pushing things at you or using your stuff to, you know, create a profile or something. So there's an awful lot of stuff out there that people know about you, even if you're not part of the social web, and exactly where that's going to go... You know, we've already seen theft. They were talking the other day, apparently a big scam nowadays is, as soon as a newborn comes into the world, they get a Social Security number, and there are people poaching these Social Security numbers to file fraudulent claims. So they're saying you should go and freeze your newborn's account, but remember to tell them about it, maybe 60 years later, but who's going to remember? So, there are going to be these, I don't want to call them time bombs, but there are going to be these switches that are buried, and we don't know what they are, and we don't know what they do.

My next, second-to-last question is actually about that whole issue of time bombs. As we afford more autonomy to machines, as they do things for us, and, as you've pointed out, in some cases manipulate us for the desires of somebody else, like somebody trying to sell us something, sometimes harm happens. Sometimes a bad thing occurs. And we'd love your thoughts on how you think we as a society should think about ascribing responsibility when AI or autonomous systems cause harm.

Are you looking at allocating responsibility or...

Yes.

Or, you know, something happens, and who's responsible for it? I find this really difficult. Many years ago there was a case about, I guess it was a power lawnmower. Something had happened, but the power lawnmower was a 20-year-old lawnmower, so at the time it was built it met every specification and safety measure that they knew about. Twenty years later they know a lot more, but you're still left with the old device. And it caused a problem that, if it had been a new device, it wouldn't have caused. The responsibility for that... it's the same thing: you've got all these things that are buried, particularly if they're self-renewing, if you please, either energy-wise or software-update-wise or protocol or whatever it is. There's an awful lot of things that are sort of done automatically that you don't even know are happening.

So, if AI and computer science keep progressing more and more rapidly, could we face the prospect of a world where we have a lot of variegated, historic intelligent machines around us, all built in their own decades, but all connected to us in day-to-day interaction? That seems messy.

Yeah, it does seem messy, and there probably needs to be some discipline; you know, go back to the original TCP/IP or something like that. You know, where's the architecture anymore? There used to be people very carefully designing protocols, and they did a really magnificent job; look at how long they've stayed viable. But I think now, more and more, you get startups composing systems out of other systems, and so, you know, people talk about emergent behavior: things that could happen that you never envisioned, but in hindsight you'd say, well, yeah, that's sort of obvious, but you didn't know those things were going to interact in that way. So, I just hope that, you know, power companies and other places that had their own proprietary networks, as they move more onto the commercial network, keep whatever discipline they had in terms of keeping their systems safe. But I think we're going to just keep stumbling into new things and then patching things as we go along.

Thank you for the answers. We have one final question we've been asking each of the experts that we talked to, which is about artificial general intelligence, because this is so much in the news these days. Just your thoughts on the development of AGI.

So, you mean not goal-directed?

The idea of human-level intelligence. And not for one task, but for all human capability.

Well, let's see. Not in my lifetime. But, I mean, you know, AI has gone through these different phases: we think we've got it cracked with rule-based, then we ended up against the wall of, like, thousands of rules, and then statistical came along and that was really great, but we didn't know what it was modeling; we just give it enough data and it seems to come up with a good answer.

Now we've got deep neural nets.

Yeah, so there will always be this new fad or something. So it seems to me the systems that are eventually going to break out of, let me say, whatever barrier there is would be, let me say, multi-level AI. There will be certain AI approaches that are good for certain things, and others for other things. But I don't see anybody architecting that. And I think a good university project would be to try to figure out the architecture of these, call them composite or hybrid systems, or whatever. But, you know, you've got to figure out how they can work together. Sort of like a population, too, because you've got people who all have different capabilities, but they somehow have to get along and still survive and still thrive.

Well thank you for all the answers to the questions.

Well thank you for the opportunity.
