More fun with finite state machines
February 14, 2019 | andy | AI and Society, Coder's corner, Feature, Philosophy-AI-Book
This is the second part of a post on finite state machines programmed in Prolog.
The first part is here. A short intro to Prolog is here. Have fun!

A virtual, well-behaved dog
Let’s write an FSM that describes some of the common behaviours of pet dogs and
that could be used to program a robotic pet.

We begin with a very simple FSM that (like the first example in the previous post) describes only the eating behaviour of the dog. But now we will do it in actual code. We will write this example in Prolog, because it is relatively easy to read, even for non-programmers, and its expressiveness makes implementing finite state machines particularly easy.

Our first FSM will not have any entry or exit actions, just two states and their
transitions. For a basic introduction to Prolog, see the post here.

You can easily try this out online at: https://swish.swi-prolog.org

SWISH is a fully working Prolog environment that runs directly in your browser, without installing anything or creating an account. Just go to that page and copy the code below into the window on the left. Any text between /* and */ is a comment for the reader; it is entirely ignored by the Prolog interpreter.

state( hungry, "Hungry" ).
state( full, "Full" ).

transition( hungry, full, eat ).
transition( full, hungry, wait ).

next( State ) :-
    state( State, Name ),
    print( Name ),
    read( Cond ),
    transition( State, Next, Cond ),
    next( Next ).
next( State ) :-
    /* This clause is reached when no transition matches:
       print a warning and stay in the same state. */
    print("No transition for that verb."), nl,
    next( State ).

start( Startstate ) :-
    next( Startstate ).
Let’s now look at this program in a little more detail.

In the first four lines, we describe the states (hungry, full) and their
transitions. Every transition takes three arguments: the state it transitions from,
the state it transitions to, and the transition condition that will trigger this
transition.

So:

transition( hungry, full, eat ).

will go from the state ‘hungry’ to the state ‘full’ if the condition ‘eat’ is fulfilled.
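You can check this matching directly at the Prolog prompt:

?- transition( hungry, Next, eat ).
Next = full.

?- transition( hungry, Next, bark ).
false.

There is no transition fact for ‘bark,’ so the second query simply fails.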

next( State ) :-
    state( State, Name ),
    print( Name ),
    read( Cond ),
    transition( State, Next, Cond ),
    next( Next ).
This is the main predicate that drives the state changes. For each state ‘State’ (remember, in Prolog capitalised words are variables!) it computes the next state of the state machine, as follows:

1. Match the current state ‘State’ and bind its printable name to the variable ‘Name.’
2. Print the name of the current state ‘State’ so that the user can see where we are.
3. Read the user’s input from the keyboard. What the user enters will be the transition condition out of the present state ‘State.’ So the user is expected to type in something like ‘eat’ or ‘wait,’ which are our transition conditions.
4. Find a suitable transition. Prolog will try to match the call to ‘transition’ against the known transitions we defined at the beginning of our program. If it can find a transition from State to Next via the condition Cond, it will put the name of the next state into the variable Next (which up to now has been unused, so it doesn’t have any value).
5. In the last step we go to that new state by calling the next() predicate again, but now with Next in place of State.

If there was no suitable next state for the transition condition entered by the user, the program will fail at step 4 and fall through to the second ‘next()’ clause, which does nothing except output a warning message. So we will stay in the same state and do the same thing all over again.
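As an aside, the same fall-back behaviour can also be written as a single clause using Prolog’s if-then-else construct; here is a minimal, equivalent sketch:

next( State ) :-
    state( State, Name ),
    print( Name ),
    read( Cond ),
    (   transition( State, Next, Cond )
    ->  next( Next )
    ;   /* No matching transition: warn and stay in the same state. */
        print("No transition for that verb."), nl,
        next( State )
    ).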
Use this program by querying:

?- start( hungry ).
and then try to enter a few transition conditions when prompted, both valid ones (eat, wait) and invalid ones (bark). Note that Prolog’s read/1 expects every input to be terminated with a full stop, so type ‘eat.’ rather than ‘eat’. You should now understand how the system behaves.
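A session might look roughly like this (the exact prompts and line breaks depend on your environment; in a terminal, SWI-Prolog shows ‘|:’ while waiting for input):

?- start( hungry ).
Hungry
|: eat.
Full
|: bark.
No transition for that verb.
Full
|: wait.
Hungry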

A slightly more interesting dog


The previous example was not meant to be very thrilling. It was just meant to show how to program a simple FSM in Prolog. We will now give our dog a few more states and a few more interesting behaviours. Let’s start with the code. If you understood the previous example, this one should be easy to follow:

state( sleeping, "Sleeping", "(Dog lies down.)" ).
state( hungry, "Hungry", "(Dog looks around and barks.)" ).
state( startled, "Startled", "(Dog jumps up.)" ).
state( barking, "Barking", "(Dog barks.)" ).
state( satisfied, "Satisfied", "(Dog looks satisfied and wags its tail.)" ).

transition( sleeping, startled, shout ).
transition( sleeping, barking, bark ).
transition( sleeping, hungry, wakeup ).
transition( hungry, satisfied, givecookie ).
transition( startled, satisfied, givecookie ).
transition( sleeping, sleeping, givecookie ).
transition( satisfied, satisfied, wakeup ).
transition( barking, satisfied, givecookie ).
transition( startled, barking, bark ).
transition( satisfied, hungry, wait ).
transition( satisfied, sleeping, wait ). /* This is never reached! */
transition( barking, barking, shout ).
transition( satisfied, barking, shout ).
transition( _, barking, shout ). /* The _ matches anything. */
transition( _, barking, bark ).
transition( _, barking, _ ). /* Whenever it doesn't understand, it barks. */

next( State ) :-
    state( State, Name, Action ),
    print( Name ), nl,
    print( Action ), nl,
    read( Cond ),
    transition( State, Next, Cond ),
    next( Next ).

next( State ) :-
    /* This clause is reached when no transition matches:
       print a warning and stay in the same state. */
    print("No transition for that verb."), nl,
    next( State ).

start( Startstate ) :-
    print("Toy dog simulator."), nl,
    next( Startstate ).
Use by querying:

?- start( hungry ).
One big difference is that now we also have entry actions:

state( sleeping, "Sleeping", "(Dog lies down.)" ).
The state ‘sleeping’ will be printed as “Sleeping” to the user, and when the system
enters this state, it will perform the action of printing the predicate’s third
parameter; in this case, it will print “(Dog lies down.)” In a robot or a simulated
game character this would likely be a real action, but here we just print it to
keep things simple.
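Again, you can inspect these facts directly at the prompt:

?- state( sleeping, Name, Action ).
Name = "Sleeping",
Action = "(Dog lies down.)".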

transition( satisfied, hungry, wait ).
transition( satisfied, sleeping, wait ). /* This is never reached! */
Here is another interesting point. Since ours is a deterministic FSM, we can only have one transition with a given condition attached to a state. Here, ‘wait’ will always match the first transition and lead from the state ‘satisfied’ to ‘hungry.’ Since this transition will have consumed the user’s input, there is no way the system could ever reach the state ‘sleeping’ from ‘satisfied’ via the condition ‘wait.’ So the second line here is useless and should be removed. I just wanted to show you the difference between a deterministic FSM (like ours), which ignores this second line, and a non-deterministic one, which might choose one of the two transitions at random, or have some other means of determining which one to follow.
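For illustration, here is a minimal sketch of such a non-deterministic variant in SWI-Prolog (the predicate name next_random is hypothetical): it collects every transition matching the input with findall/3 and picks one at random with random_member/2 from library(random):

:- use_module(library(random)).

next_random( State ) :-
    state( State, Name, Action ),
    print( Name ), nl,
    print( Action ), nl,
    read( Cond ),
    /* Collect ALL states reachable from State via Cond, not just the first. */
    findall( N, transition( State, N, Cond ), Nexts ),
    (   Nexts == []
    ->  print("No transition for that verb."), nl,
        next_random( State )
    ;   random_member( Next, Nexts ),
        next_random( Next )
    ).

With this driver, the two ‘wait’ transitions out of ‘satisfied’ would each be chosen about half of the time.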

Another interesting idiom is this:

transition( barking, barking, shout ).

Here we have a transition from one state to itself. There is nothing wrong with that: sometimes an input simply leaves the machine in the state it is already in. For example, if you shout at a barking dog, it will go on barking; if you are in the state ‘hungry,’ walking around will keep you in the state ‘hungry’; and so on.

A final point: the underscore “_” character in Prolog will match anything. So the
first line in this snippet:

transition( _, barking, bark ).
transition( _, barking, _ ). /* Whenever it doesn't understand, it barks. */
will transition from any state to ‘barking’ if the transition condition is ‘bark.’ In other words: if the dog hears barking, it will bark back, no matter what state it is currently in. The second line will transition to the state ‘barking’ from any state, given any transition condition, including all those the program doesn’t deal with explicitly. So really any word entered by the user as a transition condition will match this transition, and the dog will answer all unrecognised user inputs with a bark.
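You can see the catch-all at work by querying with a verb that appears nowhere else in the transition table (‘fetch’ here is just an arbitrary, undefined verb):

?- transition( sleeping, Next, fetch ).
Next = barking.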

Of course, once we have this second line, we no longer need the previous one, because the underscore in the condition position will also match the ‘bark’ of the first line. Thus the first of these two lines is now redundant. Equally, the error clause (the second next() predicate) is now never reached, because all transitions will succeed. Whatever the user enters will successfully trigger a transition instead of causing an error, because any user input that is not understood just leads to the fall-back state of barking.

When to use Finite State Machines


Finite state machines can simplify the way we think about particular problems and
make them easier to implement in an algorithm. But they are not always the best
choice.

Essentially, an FSM expresses a big and convoluted set of conditions. If the state is S1 and condition C1 is fulfilled, then perform the entry action A2 of state S2. If, instead, the state is S2 and condition C2 is present, perform A3. And so on. There is no technical reason not to use a big conditional statement instead of an FSM. But there is a good reason that has to do with how human programmers keep track of all the possible conditions and actions, as the sketch below illustrates.
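Here is our first dog example rewritten as one big conditional (a sketch; the predicate name next_cond is hypothetical). Even with only two states, every new state or condition makes this single clause longer and harder to read, whereas the transition table grows by just one fact per transition:

next_cond( State ) :-
    read( Cond ),
    (   State == hungry, Cond == eat
    ->  print("Full"), nl, next_cond( full )
    ;   State == full, Cond == wait
    ->  print("Hungry"), nl, next_cond( hungry )
    ;   /* Everything else: warn and stay put. */
        print("No transition for that verb."), nl,
        next_cond( State )
    ).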

Finite state machines are an organisational principle that we use to analyse and describe complex behaviour in a way that reduces its apparent complexity by dividing it up into distinct “groups of behaviour” that we call states and that can be treated individually. In this way, we can work our way through implementing the complex behaviour in software state by state, without being overwhelmed by the overall complexity of the complete system.

This is particularly important where we have a sequence of conditional decisions that depend on context and on previous decisions. As an example, think of the meaning of a word like “Sun.” Seen in isolation, we could not decide whether it is supposed to name a heavenly body, or perhaps the computer company, or whether it might be an abbreviation in a calendar for “Sunday.” But in the context of a sentence, the meaning usually becomes clear: “The sun was hot today.” “I bought a computer from Sun.” So here the meaning of “Sun” depends on what came before it, and perhaps on what follows it.

Similarly, when we talk about simulating a virtual dog, its reaction to the same
input might be different depending on the state it is in. Giving it a cookie when
it’s hungry will make it gobble up the cookie. Giving it a cookie when it’s
sleeping may not cause any behavioural response. Giving it a cookie when it’s full
might make it sniff at the cookie but not eat it. The point here is that we have three entirely different responses to exactly the same input. Which response is appropriate depends entirely on the previous history of the system and the state it is in at the moment.
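The second dog program encodes exactly this pattern in its transition table: the same input ‘givecookie’ leads to different outcomes depending on the current state:

transition( hungry,   satisfied, givecookie ).  /* gobbles up the cookie   */
transition( sleeping, sleeping,  givecookie ).  /* no behavioural response */
transition( barking,  satisfied, givecookie ).  /* calms down              */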

To capture such dependencies of responses on previous events or long-term system states would be difficult and confusing if one wanted to do it with straightforward conditional statements, since every condition would have to describe a whole history of previous interactions. It is much easier to see the dog’s behaviour as a sequence of states: when input event E1 in context C1 has caused the system to enter state S1, we can discard any memory of E1. For the subsequent behaviour of the system, it is only relevant to know that it starts in state S1; it is not important how the system got to be there. This simplifies things immensely. The programmer does not need to think in long causal chains of events, but can break up these chains at specific breakpoints that she calls “states.” A state is thus always a clean slate, a place without relevant history. Any conditional responses only need to look back to that state, rather than to the whole history of the system before that state.

What does all this mean for the programmer? As long as your simulation only reacts
to individual events and does not need to behave in different ways depending on the
history of the system and its context, a state machine will not provide any
advantage. But if you want to model different responses to the same input,
depending on the context of the interaction, then finite state machines can be very
helpful.

I hope you had fun playing with this program. You can surely find countless ways to extend this little example and give our dog even more interesting behaviours. One could, for example, try to model the dog’s mental states of trust in or fear of the user, its reaction to cats and other environmental factors, or its food-searching behaviour.
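For instance, a ‘fearful’ state could be added with just a few more facts (all names here are hypothetical, not part of the program above). Note that the new transitions would have to be placed before the catch-all ‘_’ clauses, or they would never be reached:

state( fearful, "Fearful", "(Dog cowers and whines.)" ).

transition( _, fearful, thunder ).            /* loud noises scare the dog   */
transition( fearful, satisfied, givecookie ). /* a cookie restores its trust */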

The possibilities are endless.
