Robotics in Education
RiE 2021
Advances in Intelligent Systems and Computing
Volume 1359
Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences,
Warsaw, Poland
Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing,
Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering,
University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University,
Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas
at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao
Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology,
University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute
of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro,
Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management,
Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering,
The Chinese University of Hong Kong, Shatin, Hong Kong
The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing such as: computational intelligence, soft computing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms, social intelligence, ambient intelligence, computational neuroscience, artificial life, virtual worlds and society, cognitive science and systems, perception and vision, DNA and immune based systems, self-organizing and adaptive systems, e-Learning and teaching, human-centered and human-centric computing, recommender systems, intelligent control, robotics and mechatronics including human-machine teaming, knowledge-based paradigms, learning paradigms, machine ethics, intelligent data analysis, knowledge management, intelligent agents, intelligent decision making and support, intelligent network security, trust management, interactive entertainment, Web intelligence and multimedia.
The publications within “Advances in Intelligent Systems and Computing” are
primarily proceedings of important conferences, symposia and congresses. They
cover significant recent developments in the field, both of a foundational and
applicable character. An important characteristic feature of the series is the short
publication time and world-wide distribution. This permits a rapid and broad
dissemination of research results.
Indexed by DBLP, INSPEC, WTI Frankfurt eG, zbMATH, Japanese Science and
Technology Agency (JST).
All books published in the series are submitted for consideration in Web of Science.
Robotics in Education
RiE 2021

Editors
Munir Merdan, Practical Robotics Institute Austria (PRIA), Vienna, Austria
Wilfried Lepuschitz, Practical Robotics Institute Austria (PRIA), Vienna, Austria
Gottfried Koppensteiner
David Obdržálek, Charles University, Prague, Czech Republic
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Committee
Co-chairpersons
Balogh Richard, Slovak University of Technology in Bratislava, Slovakia
Lepuschitz Wilfried, Practical Robotics Institute Austria, Austria
Obdržálek David, Charles University in Prague, Czech Republic
Contents
Teachers in Focus
Enhancing Teacher-Students’ Digital Competence
with Educational Robots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Elyna Heinmäe, Janika Leoste, Külli Kori, and Kadri Mettis
Disciplinary Knowledge of Mathematics and Science Teachers
on the Designing and Understanding of Robot
Programming Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Jaime Andrés Carmona-Mesa, Daniel Andrés Quiroz-Vallejo,
and Jhony Alexander Villa-Ochoa
Teachers’ Perspective on Fostering Computational Thinking
Through Educational Robotics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Morgane Chevalier, Laila El-Hamamsy, Christian Giang, Barbara Bruno,
and Francesco Mondada
Programming Environments
CORP. A Collaborative Online Robotics Platform . . . . . . . . . . . . . . . . . 231
Olga Sans-Cope, Ethan Danahy, Daniel Hannon, Chris Rogers,
Jordi Albo-Canals, and Cecilio Angulo
A ROS-based Open Web Platform for Intelligent
Robotics Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
David Roldán-Álvarez, Sakshay Mahna, and José M. Cañas
HoRoSim, a Holistic Robot Simulator: Arduino Code, Electronic
Circuits and Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Andres Faiña
Exploring a Handwriting Programming Language
for Educational Robots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Laila El-Hamamsy, Vaios Papaspyros, Taavet Kangur, Laura Mathex,
Christian Giang, Melissa Skweres, Barbara Bruno,
and Francesco Mondada
Artificial Intelligence
AI-Powered Educational Robotics as a Learning Tool to Promote
Artificial Intelligence and Computer Science Education . . . . . . . . . . . . . 279
Amy Eguchi
An Interactive Robot Platform for Introducing Reinforcement
Learning to K-12 Students . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
Ziyi Zhang, Sara Willner-Giwerc, Jivko Sinapov, Jennifer Cross,
and Chris Rogers
The BRAIINS AI for Kids Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
Gottfried Koppensteiner, Monika Reichart, Liam Baumann,
and Annamaria Lisotti
Gradation of Cognitive Operations of Blue-Bot Control

Abstract. The aim of our research is to identify a scale whose levels represent the gradation of cognitive operations in learning with the Blue-Bot (TTS, Nottingham, England) in primary school informatics in Slovakia. Qualitative research was therefore conducted, taking place in iterative rounds at two different schools. During the pandemic, when the schools were closed, the research continued with small groups of children at home. As a working tool, a series of activities was used that were iteratively developed, edited, and then analyzed. Preliminary results show that the scale has four levels, and the content of each level covers four areas: a) how the robot is manipulated, b) defining the state of the robot, i.e., its position and rotation, c) the difficulty of the pad on which it moves, and d) cognitive operations with commands. By combining these areas, different levels of difficulty can be set when creating graduated tasks, with regard to the pupils' cognitive level.
These results are based on the cognitive domain of learning; in the future, we want to connect them with the psychomotor domain.
1 Introduction
Educational robotics has long had a place in high-quality, modern education [1]. Its implementation is slowly moving to lower and lower levels of education [2]. In Slovakia, within the project Education of Pedagogical Employees of Kindergartens as Part of Lifelong Learning [3], laptops, multifunctional devices, digital cameras, televisions and electronic teaching aids were delivered to kindergartens between 2007 and 2013. Among them were about 4,000 robotic Bee-Bot toys, so-called “Bees” [4]. Since then, we have encountered several interesting ideas (from Slovakia and abroad) on how to use this robot in education [5–8]. In recent years a new version of this robot, the Blue-Bot (TTS, Nottingham, England), has been developed with expanded functionality. We describe it in more detail in the next section.
In Slovakia, informatics is a compulsory subject from the 3rd year of elementary school (8-year-old pupils). We are trying to contribute systematically to the building of its curriculum so that it includes educational robotics with a finer granularity of cognitive operations. In its current form, the curriculum contains only rough outlines of operations, such as controlling or planning robot movement. A detailed exploration of the potential that robotics offers at this level of education is a lengthy process. In this preliminary research, the focus was on the gradation of programming constructs that can be taught with the help of the Blue-Bot.
The basic term is programming concept: for example, repetition, a sequence of commands, or rotation. Operations on these concepts include, e.g., reading, compiling, tracing, and executing a program. Programming constructs are then operations on one or more concepts. The state of the robot means its rotation and position on the pad. The pad is a square grid of m × n squares, each square 15 cm on a side. A box is a single square on the pad that may or may not contain a shape (e.g., a beehive, a flower, a house, a number, and so on). The reader (TacTile Code Reader) is a blue board connected via Bluetooth, into which children can insert commands in the form of small plastic tiles. With the kids we call it the “blue panel”.
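These terms can be made concrete in code. The sketch below is our own illustration, not part of the study's materials: it models the state of the robot as a position on the pad plus a heading, with the four movement commands as state transitions.

```python
from dataclasses import dataclass

# Headings on the pad, clockwise; turning right/left steps through this list.
HEADINGS = ["N", "E", "S", "W"]
MOVES = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}


@dataclass
class RobotState:
    """State of the robot: position (a box on an m x n pad) plus rotation."""
    x: int
    y: int
    heading: str = "N"

    def apply(self, command: str) -> "RobotState":
        """Apply one command: F(orward), B(ack), L(eft), R(ight)."""
        if command == "R":
            return RobotState(self.x, self.y,
                              HEADINGS[(HEADINGS.index(self.heading) + 1) % 4])
        if command == "L":
            return RobotState(self.x, self.y,
                              HEADINGS[(HEADINGS.index(self.heading) - 1) % 4])
        dx, dy = MOVES[self.heading]
        step = 1 if command == "F" else -1   # "B" moves one box backwards
        return RobotState(self.x + step * dx, self.y + step * dy, self.heading)


# Tracing a two-command program from the box (0, 0), facing "north":
s = RobotState(0, 0, "N")
for c in ["F", "R"]:
    s = s.apply(c)
print(s)  # RobotState(x=0, y=1, heading='E')
```

Reading, tracing, and executing a program all become operations over this one small state, which mirrors how the concepts above are layered.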
Fig. 1. On the left - Emil 3 environment with absolute control, on the right - Emil 4 environment
with relative control.
The educational robot Blue-Bot is a small programmable robot that moves on the floor. It is a new version of the popular Bee-Bot. It uses relative driving on a square grid, just like the turtle in the Logo language [11]. The Blue-Bot is affordable and user-friendly and, in addition, offers a variety of interesting programming constructs, allowing pupils to develop many skills such as communication, cooperation, verbalization of ideas and the like [12–14].
We can work with the Blue-Bot in the same way as with the Bee-Bot: we control it using the integrated buttons. After entering single commands (forward, backwards, right, left) and pushing the GO button, all given instructions are executed. Commands given later are added to the previously remembered sequence of instructions (the previous instructions are not canceled). Up to 40 commands can be given in one sequence; the Blue-Bot's memory is limited to 200 commands. The CLEAR and PAUSE labels of the Bee-Bot's buttons were replaced by pictures on the Blue-Bot, making them clearer to children who cannot yet read. The disadvantage of this kind of control is that children do not remember the given sequence of instructions. They also often forget to clear the previous sequence of commands before giving a new one.
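The button semantics described above — new commands are appended to the remembered program until CLEAR is pressed, with the stated limits of 40 commands per sequence and 200 in memory — can be sketched as a minimal model. This is our own approximation of the described behaviour, not vendor firmware:

```python
class CommandMemory:
    """Models the Blue-Bot button behaviour described in the text: commands
    are appended to the remembered program until CLEAR is pressed."""
    MAX_SEQUENCE = 40     # commands that can be entered in one sequence
    MAX_MEMORY = 200      # total command memory of the robot

    def __init__(self):
        self.program = []   # remembered sequence of instructions
        self.pending = 0    # commands entered since the last GO

    def press(self, command: str) -> None:
        if self.pending >= self.MAX_SEQUENCE or len(self.program) >= self.MAX_MEMORY:
            return          # extra presses are ignored once a limit is reached
        self.program.append(command)
        self.pending += 1

    def go(self) -> list:
        """GO executes *all* remembered commands, not only the new ones."""
        self.pending = 0
        return list(self.program)

    def clear(self) -> None:
        self.program = []   # the step pupils often forget
        self.pending = 0


m = CommandMemory()
m.press("F")
m.press("F")
m.go()            # robot moves forward twice
m.press("R")
print(m.go())     # ['F', 'F', 'R'] - the old commands are replayed too
```

The final GO replays the old commands together with the new one, which is exactly the pitfall mentioned above when children forget to press CLEAR.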
There are two more ways of controlling the Blue-Bot:
3 Methodology
We are aware that major influences on the quality of pupils' understanding of programming are the software environment, the programming language, the rendering of commands, etc. [15], in addition to the experience and programming skills of teachers [16]. All the aforementioned aspects affect the speed and depth of pupils' understanding, and therefore a qualitative approach [17] was chosen in order to better understand this process. As a research tool, the educational robot Blue-Bot was used with a series of activities focused on the development of programming skills. The choice of the Blue-Bot was influenced by the fact that Bee-Bots, the Blue-Bot's predecessors, are relatively widespread in Slovak kindergartens, so there is a chance that primary school pupils had used them in their previous education.
Our main research goal was to identify a scale that would reflect the appropriate
cognitive complexity of operations in activities for primary school pupils. This scale
can contribute to a better understanding of the difficulty of programming constructs.
Therefore, the following research question was posed:
What are the levels of cognitive operations in Blue-Bot programming with primary school pupils?
In order to find out what an appropriate gradation of tasks might look like for primary school pupils, several tasks with Blue-Bots had to be designed and validated. These tasks should pose a reasonable challenge for pupils while remaining engaging. This resulted in a series of 20 lessons, which was used as part of the verification tool.
Based on the nature of the research, design research [18] was used. It allows us to iteratively validate the correct arrangement of tasks on the scale and thus refine the granularity of task complexity and the terms used. The collected data consist of field records, videos, photo documentation, and interviews with teachers and pupils [19].
Gradation of Cognitive Operations of Blue-Bot Control 7
The research participants came from three different backgrounds. One was an elementary school in the capital, where pupils had experience with Bee-Bots and other innovative teaching techniques; these were 2nd- and 4th-year classes. The second environment was an elementary school near a big city, where the participating pupils had no experience with such robotic kits; research took place there in the 1st year. As Slovak schools had been closed for a long time during the pandemic, we verified individual activities with a small number of children, mostly at home. We managed to verify each part of the series of 20 lessons, which consists of several activities and thus forms the research tool of our study. In the next section, we present the cognitive coverage of these activities.
4 Research Results
Level 1
Manipulation: Robot control using the reader (with one card: direct control; with more cards: programmed control).
Difficulty of the pad: Each position on the pad is uniquely identified by an object or a picture.
State of the robot: Pupils do not deal with the initial direction, because the robot always starts facing the same way. Its position naturally differs between tasks, but it is always given. These rules remain the same throughout this level.
Programming constructs:
As an example of this level, we present an activity (Fig. 3) in which pupils must place the Blue-Bot at the start, enter the program and draw its path in the workbook.
In addition to the first two points (of the programming constructs), pupils must also identify the start according to the assignment and correctly place the robot on the pad (at this level, always facing the meadow).
Level 2
Manipulation: Control of the robot using its buttons (the program is not visible to pupils).
Difficulty of the pad: Some boxes of the pad are not uniquely identifiable (they carry neither an object nor a picture).
State of the robot: Pupils get acquainted with determining the direction and turning of the robot using a compass rose (up, down, right, left).
Programming constructs:
One of the tasks pupils solve at this level is to find a way from the start to the finish without passing through other houses (Fig. 4). In addition, pupils must not use the Right command.
The children themselves decide how the robot is oriented at the beginning, and they write this direction in a circle at the starting box. At the target box, they record how the robot was oriented at the end.
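From a computing perspective, this Level-2 task is a search over (position, heading) states with a restricted command set. The sketch below is our own illustration with a hypothetical 4 × 4 pad and house positions (the actual layout of Fig. 4 is not reproduced here); the route is found by a plain breadth-first search that simply omits the forbidden Right command:

```python
from collections import deque

HEADINGS = ["N", "E", "S", "W"]
MOVES = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}


def find_route(start, goal, houses, size=4, commands=("F", "L")):
    """BFS over (x, y, heading) states; 'R' is deliberately excluded."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (x, y, h), path = queue.popleft()
        if (x, y) == goal:
            return path
        for c in commands:
            if c == "L":
                nxt = (x, y, HEADINGS[(HEADINGS.index(h) - 1) % 4])
            else:  # "F"
                dx, dy = MOVES[h]
                nxt = (x + dx, y + dy, h)
                if not (0 <= nxt[0] < size and 0 <= nxt[1] < size) \
                        or (nxt[0], nxt[1]) in houses:
                    continue  # off the pad, or the box holds another house
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [c]))
    return None  # no route exists under these restrictions


# Hypothetical pad: start bottom-left facing north, finish top-right,
# two houses to drive around.
route = find_route((0, 0, "N"), (3, 3), houses={(1, 1), (2, 2)})
print(route)
```

Restricting the command set (here, dropping R) is what raises the cognitive demand: the pupil must substitute three left turns for a right turn, and the search space grows accordingly.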
10 K. Miková et al.
Level 3
Manipulation: Path planning for the robot using paper cards. This changes the robot's control from implicit to explicit; it is similar to using the reader, but programs can be longer.
Difficulty of the pad: Boxes on the pad are marked in more abstract terms, either letters or numbers.
State of the robot: The direction of the robot is defined by the terms “turned to” and “turned from”.
Programming constructs:
• running and tracing the same program from different positions and with different rotations
• planning longer programs (using paper cards)
• restricting the use of commands: first one (arbitrary) command, later two
• completing the commands at the end of a program according to the assignment
• determining the initial direction when the program and destination are specified
• creating assignments with conditions.
In one of the tasks at this level, children have to read the program from the cards, draw the path of the robot into a picture of the pad and answer a question (Fig. 5). Children solve this task without Blue-Bots; they can check their solutions using the robot afterwards.
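Running and tracing the same program from different positions and rotations can be illustrated with a small trace function. This is our own sketch, not the study's worksheet:

```python
def trace(program, x, y, heading):
    """Replay a card program, returning every intermediate (x, y, heading)."""
    headings = ["N", "E", "S", "W"]
    moves = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}
    states = [(x, y, heading)]
    for c in program:
        if c in ("L", "R"):
            step = -1 if c == "L" else 1
            heading = headings[(headings.index(heading) + step) % 4]
        else:  # "F" moves one box in the current heading
            dx, dy = moves[heading]
            x, y = x + dx, y + dy
        states.append((x, y, heading))
    return states


program = ["F", "R", "F"]
# The same program ends in different boxes depending on the initial rotation:
print(trace(program, 0, 0, "N")[-1])  # (1, 1, 'E')
print(trace(program, 0, 0, "E")[-1])  # (1, -1, 'S')
```

The intermediate states returned by `trace` correspond to what pupils draw box by box when tracing a card program on the picture of the pad.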
Level 4
Manipulation: Combining the previous levels. Planning the path (by drawing it on the pad, with paper cards, or with cards for the reader) and carrying it out. Planning the parallel behavior of two robots.
Difficulty of the pad: Pupils work with a more abstract pad, with a pad whose boxes are unmarked, or without a pad at all, i.e., without spatial restrictions.
State of the robot: Pupils must determine the state according to the assignment.
Programming constructs:
As an example, we present a task in which pupils have to read and interpret a program, find out which command is wrong, and correct this error so that the task is fulfilled (Fig. 6).
5 Discussion
We consider verification in schools to be the weakest point of the research, one that could have contributed to inaccuracy in the results. This risk is particularly high at the fourth level: because of the pandemic, work with the original class could not be continued, and verification took place only in smaller groups (at home). We realize that minor changes may occur during continuous verification with an entire class. It was also time-consuming for us to create the pads (six identical pads for one class). These are oversized sheets (A2 to A1) that we could not print at school; we printed them on plain A4 paper, then cut them out and glued them onto hard cardboard, which took an enormous amount of time. Another unpleasant complication was reworking the gradation after ruling out the tablet as a means of controlling the robot, with which we had originally wanted to start.
The reason for ruling it out was a change in the application, which no longer met our requirements. The original application, which contained only buttons to control the robot, was replaced by one in which the buttons were radically reduced and a large part of the screen was occupied by an on-screen robot that rotated according to the entered commands. However, children often identify with the real robot: they follow it and rotate exactly as it does, in which case the visualization on the tablet does not correspond to the real situation. Ultimately, this created more misconceptions in pupils than it deepened their understanding. For this reason, we placed activities with the reader first. Some may argue that it would be easier for pupils to work first with the buttons on the robot and only then with the reader. However, when using the buttons, the pupils who create a program have to memorize it and, before each new program, delete the previous one from the robot's memory, which we know from experience causes them problems. It is therefore better to visualize commands via the reader, starting with tasks whose programs are only one command long.
6 Conclusion
In this article, we present the preliminary results of qualitative research in which we sought an answer to our research question: What are the levels of cognitive operations in Blue-Bot programming with primary school pupils? Four levels have been identified. These levels will help us in creating appropriate activities and assessing their complexity. The content of each level is determined by the following characteristics: a) difficulty of the pad, b) state of the robot (its rotation and position), c) robot control by pupils and d) programming constructs.
We believe that the created scale will contribute to better knowledge in computer science didactics. This part of the research focused mainly on cognitive actions, but since manipulating robots is one of the main parts of robotics, it can also inform the identification of psychomotor skills. We will continue with further verification, together with research into other learning domains, when schools reopen and on-site research becomes possible.
Acknowledgments. The authors would like to thank Ivan Kalaš, who is a co-author of the activities without which this research would not be possible; material with the activities is currently being prepared for print. Many thanks also go to all the children and teachers from our design schools and to Roman Hrušecký, who participated in the verification of the activities. We would also like to thank Indícia, the non-profit organization that supported the whole process, and the project VEGA 1/0602/20, thanks to which the results of this research could be published.
References
1. Miglino, O., Lund, H.H., Cardaci, M.: Robotics as an educational tool. J. Interact. Learn. Res.
10(1), 25–47 (1999)
2. Bers, M.U., Ponte, I., Juelich, C., Viera, A., Schenker, J.: Teachers as designers: integrating
robotics in early childhood education. Inf. Technol. Child. Educ. Annu. 2002(1), 123–145
(2002)
Bee-Bot Educational Robot as a Means of Developing Social Skills

Janika Leoste1(B), Tiiu Tammemäe1, Getter Eskla1, José San Martín López2,
Luis Pastor2, and Elena Peribáñez Blasco2
1 Tallinn University, Narva rd 25, 10120 Tallinn, Estonia
leoste@tlu.ee
2 Universidad Rey Juan Carlos, Calle Tulipán, s/n, 28933 Móstoles, Madrid, Spain
Abstract. Lately, some high-end robots have been proposed for facilitating interaction with children with ASD, with promising results. Nevertheless, most educational environments lack these robots, among other reasons because of cost and complexity considerations. In this work, a simple Bee-Bot has been used as a support tool for children with ASD who played a memory game about emotions. The main objective of the experiment was to study whether using this robot while playing the game resulted in improvements in the children's verbal communication. This work also explored whether incorporating the Bee-Bot robot into the game resulted in additional improvements in the level of the children's involvement in the activity.
This study was conducted with children who had no previous experience with Bee-Bots. We qualitatively evaluated the children's abilities during the memory game and quantitatively evaluated their verbal and non-verbal communication during the game with and without the Bee-Bot. The results revealed that all children showed a clear improvement in the use of verbal communication when playing with the Bee-Bot, a result that allows us to encourage schools and kindergartens that have educational robots to put them to use with children with ASD. Our study also indicates that even a relatively primitive robotic toy, such as the Bee-Bot, produces results comparable to those of a more complex humanoid robot. On the other hand, although the use of robots captures the attention of children with ASD, we cannot confirm that these robots helped the children pay more attention to the memory game. This experiment will be expanded in the future to obtain more comprehensive results.
1 Introduction
Autism Spectrum Disorder (ASD) has become a serious public health concern worldwide, with its prevalence steadily rising since the 1990s [1]. The estimated prevalence of ASD in the USA is currently one in 54 children [2]. These children have difficulties in understanding verbal and non-verbal communication, social cues, and the behaviors and feelings of other kids, and they struggle to use their skills outside specific contexts. For children with ASD it is difficult both to learn various social and functional skills and to learn about the outside world through the observation and imitation of other people [3–5]. These difficulties, including social interaction disorders, impaired verbal and non-verbal communication, and limited interests and behaviors [6], make it challenging to engage children with ASD in learning activities. To address these challenges, various novel autism interventions have been suggested, one of them being robotic therapy. A number of studies have shown that (educational) robots may help autistic children develop social skills and become more involved in learning activities [7–10].
The promise of robotic therapy is based on utilizing the children's natural interest
in technology for reassuring and motivating children with ASD, engaging them more
deeply in learning activities (see [5, 11]). In this approach, robots are used as simplified
models of living creatures, or as embodied agents to keep the children’s attention engaged
with the learning context. In current research, the following types of robots stand out:
• Social robots (e.g., Nao robot) that use spoken words, body movements and human-
like social cues when communicating with children with ASD (see also [12]).
• Robotic pets (e.g., puppy-like robot Zoomer) that look like pet animals, such as dogs,
and behave similarly to these (see also [13]).
• Robotic toys (e.g., LEGO Mindstorms EV3 or Bee-Bot), which are relatively simple
educational toys that, in the context of autism, can be used as animated 3D objects
that attract children’s attention, or as embodied objects to visualize children’s thinking
processes [14] and to encourage social engagement in lessons [15, 16].
Most of the research performed in this area so far has focused on studying the effects of using social or animal-like robots on children with ASD, as these robots offer human-like social cues while keeping object-like simplicity [17]. In these studies, robots are used as social mediators between children and a task, object, or adult [18]. In addition, social robots can also be seen as simplified models of other persons or animals; being more predictable and less complex, they encourage the autistic child to demonstrate reciprocal communication [19]. However, using social robots in real-life classrooms can often be problematic due to their relatively high price (for example, on ebay.de, used Nao robots cost more than €2,000).
A less common approach for interacting with children with ASD is to use relatively simple robotic toys as animated learning tools that attract their attention and encourage greater laughter and more vocal responses [19, 20]. In this context, robots are seen as supportive tools for constructing a highly structured learning environment for children with ASD, helping them to focus on the relevant stimuli (see [12, 21]). One of the benefits of using these robots is the ease of transferring the results of a study into real-life conditions, since, due to their relatively low price, they are already widely available and teachers are skilled in using them.
In this study, we use a simple robotic toy (the Bee-Bot) as a supportive tool for children with ASD when playing a memory game about emotions. The Bee-Bot, a modern representation of the LOGO turtle (see also [21]), is suitable for neurotypical children 3 years of age or older. Its movement can be programmed using only the seven buttons on the robot, and it needs no separate app or device. Our goal is to examine whether playing the memory game with and without the Bee-Bot has different outcomes in child verbal communication and whether it influences the level of child engagement, posing two specific questions:
1. Are there any differences in ASD child verbal communication when playing a
collaborative board game with or without the Bee-Bot robot?
2. Are there any differences in ASD child engagement when playing a collaborative
board game with or without the Bee-Bot robot?
2 Methods
2.1 Participants
The participants of the study were three children with ASD, all around five years old. We used purposeful sampling [22], combined with snowball and convenience sampling, to select the sample. A convenience sample [23] was chosen due to the COVID-19 restrictions. By email, we contacted familiar special educators, who helped us identify suitable participants and guided us to them. The participants had to meet the following criteria: (1) a medically diagnosed autism spectrum disorder; (2) an age between 4 and 6 years; and (3) residence in Tallinn or its vicinity. As the participants belong to a vulnerable group, we also contacted their parents, who were informed of the study goals and of the methods of collecting, analyzing and using the data. The parents who agreed to participate were also asked to sign an informed consent form. The following participants formed the study sample:
A – a Girl Aged 5 Years and 1 Month. Her home and kindergarten language is English. She has definite preferences, is very curious about new things, and is attracted to new environments. She does not like to show her weaknesses, so she prefers to abandon activities or fool around rather than let others see that she is not doing well or has not yet mastered an activity. If she really likes a new thing or activity, she will find a way to learn it and get better at it. Symptoms characteristic of autism spectrum disorder are currently in the foreground. In assessments, the child's speech (self-reinforcement and speech comprehension) and total mental ability scores were slightly lower than age-appropriate, lagging behind due to the autism spectrum disorder.
B – a Girl Aged 5 Years and 3 Months. Her home and kindergarten language is Estonian. She is sociable, and new and interesting things excite her. Like most children she may be a little shy at first, though not always, and she can be provocative at times if she is not interested in the activities. She has a diagnosed severe speech impairment, and specialists have identified characteristics of autism.
C – a Boy Aged 4 Years and 11 Months. Diagnosed with autism and epilepsy. His home and kindergarten language is Estonian, but he has acquired English through videos and cartoons. To ensure better understanding, the researcher and he communicated
Bee-Bot Educational Robot as a Means of Developing Social Skills 17
in a mixture of both languages. He has a strong interest in electronic devices. His attention
is quickly distracted.
None of the children had previous experience with Bee-Bot robots.
2.2 Materials
Memory Game. The aim of the board game (Fig. 1) was to present emotions that the
participants could identify. For representing emotions, we used photos of young adults
(one male model and one female model). The presence of confusing factors was reduced
to the minimum (the background of cards was neutral, and both models wore similar
white shirts). The game used four different pairs of emotions with two different models
(a happy man and happy woman; an angry man and angry woman; a sad man and sad
woman; a surprised man and a surprised woman), sixteen cards in total.
Each test was conducted in the presence of the participating child, a parent or teacher, and a researcher. The test started with building a relationship of trust between the child and the researcher: the researcher introduced herself, discussed an interesting topic with the child, or played a game that the child enjoyed.
The Memory Game was first played on the table without the Bee-Bot robot, and then on the floor with the Bee-Bot robot (Fig. 2). Depending on the needs of the child, there were breaks during which the researcher and child discussed free topics or played something else.
1 https://www.terrapinlogo.com/Bee-Bot-family.html.
The Memory Game without a Robot. The researcher introduced the emotions of the
Memory Game to the child before starting to play. This was done in order
to verify that the child was familiar with the emotions and able to name
them. The cards of the happy man, angry woman, sad man, and surprised woman were
placed on the table in front of the child, who was then asked, "In your opinion, how
does this man/woman feel? Why do you think so?" After discussing these concepts, the
child and researcher reached an agreement that a happy person laughs, a sad
person has the corners of their mouth turned down, an angry person has a crease between
their eyes, and a surprised person has their mouth open.
Next, the cards were shuffled and placed on the table, face down. It was explained
to the child that the cards were to be turned face up one by one, naming at the same time
the person on the card and the emotion it expressed. Each turned card remained
on the table. If a pair of cards with the same emotion was found, these cards were
set aside. The game was played until all cards were grouped into pairs or until the child
expressed unwillingness to continue playing. The duration of the Memory Game
without a robot was 7–9 min.
The Memory Game with the Bee-Bot Robot. The Bee-Bot was demonstrated to the
child for about 5 min, explaining and trying out how the robot could be controlled via
its buttons. Next, the Memory Game cards were placed on the floor, face down, into the
pockets of the Bee-Bot transparent play mat. The child and adult took turns choosing
a route for Bee-Bot to reach a freely chosen card. The card was then turned over and
its emotion voiced; the card remained face up. Next, a new route was chosen for
Bee-Bot to reach another card. If two cards with the same emotion were found, these
cards were set aside. The duration of the Memory Game with Bee-Bot
was 14–18 min.
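Bee-Bot is programmed by pressing arrow buttons (forward, back, turn left, turn right) and then GO, so each route to a card is effectively a short program. As a minimal sketch of this idea, assuming a grid mat, the route choice can be simulated as follows (the grid coordinates and the specific route are illustrative, not taken from the study):

```python
# Minimal simulation of Bee-Bot button programming on a grid mat.
# The grid layout is an illustrative assumption; Bee-Bot's real
# command set is forward, back, 90-degree left/right turns, and GO.

HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W as (dx, dy)

def run_program(commands, start=(0, 0), heading=0):
    """Execute a list of button presses and return the final mat cell."""
    x, y = start
    for cmd in commands:
        if cmd == "FWD":
            dx, dy = HEADINGS[heading]
            x, y = x + dx, y + dy
        elif cmd == "BACK":
            dx, dy = HEADINGS[heading]
            x, y = x - dx, y - dy
        elif cmd == "RIGHT":
            heading = (heading + 1) % 4
        elif cmd == "LEFT":
            heading = (heading - 1) % 4
    return (x, y)

# A route to the card two cells ahead and one to the right:
print(run_program(["FWD", "FWD", "RIGHT", "FWD"]))  # -> (1, 2)
```

In the sessions described above, the child would press the equivalent sequence on the robot itself; the simulation only illustrates how a button program maps to a cell of the play mat.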
3 Results
3.1 Individual Observations
Participant A. During the introduction of the Memory Game, participant A was
able to correctly choose an emotion (happy or sad) suggested by the adult, and was able
to independently voice the emotion on the card, although without reasoning. The child did
not understand how the turn changed during the game, needing constant adult guidance
(“Take this card!”, “Who is on the picture?”, “How does he feel?”), and she voiced
emotions only when answering the researcher’s questions. The child constantly moved
around the room and looked for other activities, not focusing on the Memory Game from
the beginning to the end, whether it was being played with or without a Bee-Bot. During
the game (playing the game with or without Bee-Bot) the child was not able to find the
same emotion card either independently or with the adult guidance. The content and
the general idea of the game remained incomprehensible to her, but she acted happily,
clapped hands and laughed. Although she did not learn to control her Bee-Bot, she tried
actively. Mostly she tried to push Bee-Bot on the floor forward as if she was playing with
a toy car. Participant A used echolalia 13 times when playing without Bee-Bot
and 15 times when playing with Bee-Bot. She used spontaneous expressions 20 times
when playing without Bee-Bot and 28 times when playing with Bee-Bot. She looked
away from the game 3 times when playing without Bee-Bot and 3 times when playing
with Bee-Bot. She left the game area 5 times when playing without Bee-Bot and 6 times
when playing with Bee-Bot. She turned 3 cards up without additional guidance when
playing without Bee-Bot and 2 cards when playing with Bee-Bot.
Participant B. During the introduction of the Memory Game she was able to correctly
choose an emotion (happy or sad), suggested by the adult, but could not independently
voice or reason emotions on cards. The child did partially understand how the turn
changed during the game, but she needed constant adult guidance, and she voiced emo-
tions only when answering the researcher’s questions. During the game (playing it
with or without Bee-Bot) the child was able to find the same emotion card only with
the adult guidance. She acted happily, clapped hands and laughed, expressing her posi-
tive emotions more intensely and being more active when playing with Bee-Bot. When
playing without Bee-Bot she needed constant reminding about her turn and she did not
interfere with the researcher’s game. When playing with Bee-Bot she pressed its buttons
and turned cards for the adult. Although she did not learn to control Bee-Bot, she tried
actively and attempted to press its buttons during the whole game. The participant B
used echolalia 8 times when playing without Bee-Bot, and 17 times when playing with
Bee-Bot. She used a spontaneous expression 7 times when playing without Bee-Bot and
12 times when playing with Bee-Bot. She looked away from the game 2 times when
playing without Bee-Bot and 6 times when playing with Bee-Bot. She did not leave
the game area. She turned 8 cards without additional guidance when playing without
Bee-Bot and 8 cards when playing with Bee-Bot. She played until the end of the game.
Participant C. During the introduction of the Memory Game, he was twice able to
correctly choose an emotion (happy or sad) on the cards, although without reasoning. The
child did not understand how the turn changed during the game, and he needed constant adult
guidance. He voiced emotions only sometimes and only when answering the researcher’s
questions. During the game (playing with or without Bee-Bot) the child was not able to
find the same emotion cards either independently or with the adult guidance. He could
not comprehend the content and the idea of the game. The child could not learn to control
Bee-Bot although he tried actively. Mostly he tried to use Bee-Bot as a toy car, and he
was more interested in the robot compared to the cards or game content. The participant
C did not use echolalia when playing without Bee-Bot, and used it 5 times when playing
with Bee-Bot. He used a spontaneous expression 4 times when playing without Bee-Bot
and 7 times when playing with Bee-Bot. He looked away from the game 4 times when
playing without Bee-Bot and 9 times when playing with Bee-Bot. He left the game area
3 times when playing without Bee-Bot and 5 times when playing with Bee-Bot. He
turned 1 card without additional guidance when playing without Bee-Bot and 6 cards
when playing with Bee-Bot. He did not play until the end of the game on either occasion.
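The per-participant counts reported above can be gathered into a single structure for side-by-side comparison of the two play conditions. The following sketch simply re-tabulates the numbers from the text (no new data) and computes the change from the no-robot to the robot condition:

```python
# Observation counts reported above, per participant, without vs. with
# Bee-Bot (order: echolalia, spontaneous expressions, looked away,
# left game area, cards turned without guidance).
counts = {
    "A": {"without": (13, 20, 3, 5, 3), "with": (15, 28, 3, 6, 2)},
    "B": {"without": (8, 7, 2, 0, 8),   "with": (17, 12, 6, 0, 8)},
    "C": {"without": (0, 4, 4, 3, 1),   "with": (5, 7, 9, 5, 6)},
}

measures = ["echolalia", "spontaneous", "looked_away", "left_area", "cards_turned"]

def deltas(child):
    """Change from the no-robot to the robot condition for one child."""
    w, r = counts[child]["without"], counts[child]["with"]
    return dict(zip(measures, (b - a for a, b in zip(w, r))))

# Both verbal measures rose for every child when Bee-Bot was involved:
for child in counts:
    d = deltas(child)
    print(child, d["echolalia"], d["spontaneous"])
```

Across all three children, both verbal measures (echolalia and spontaneous expressions) increased in the Bee-Bot condition, which is the pattern the study reports.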
4 Discussion
Are there any differences in ASD child engagement when playing a collaborative
board game with or without the Bee-Bot robot? The children had different results in
other areas, such as focusing on the game (looking away from it), leaving the game
area, and needing adult guidance, but we cannot draw firm generalizations
when comparing playing the game with or without Bee-Bot.
We discovered that children with ASD communicated more intensively when the
robot was involved as a mediator: the children exhibited more echolalia and they verbal-
ized expressions that were more spontaneous; similar observations were made by [25].
Communication became more intensive even when children were not able to use the
robot properly, confirming the remarks made by [13]. These results can be considered
as encouraging, supporting using robots in educational activities even when the players
(children with ASD) are unable to program them. In addition, we suggest that develop-
ing the programming skills or other STEAM skills of children with ASD should not be
considered as an essential goal – instead robots can be used as animated toys, mediating
between the child and the learning object [19]. In this sense, we encourage schools and
kindergartens that have educational robots to put them into use with children with ASD.
Our study indicates that using a relatively primitive robotic toy, such as the Bee-Bot
robot, with children with ASD yields comparable results to using more advanced (and
more expensive) humanoid robots such as the Softbank Robotics’ Pepper robot [26].
Compared to other similar studies, we focused on relatively young children (5 years
old), as with early intervention the probability of improving the quality of life of those
afflicted is higher [27]. For this age group, simple educational robots can work better,
as these devices have fewer confusing attributes. If these notions were to be supported
by further research, it would indicate that using robotic toys for ASD treatment would
be financially feasible to implement in specialized institutions.
Our results regarding engagement were mixed. Although using robots activates
children with ASD, we could not confirm that robots helped children pay more attention
to the Memory Game. The results indicated that the robot itself was of interest to
the children with ASD involved in our tests, making it difficult to claim that using robots
would engage children in learning activities [28]. Some additional remarks can be made
about this work. First, since this study was posed as a preparation and test ground for
a larger international research project, the amount of experimental work was somewhat
reduced. For example, we used only one session per participant, introducing the selected
robot to each child in that same session. In many similar studies, researchers use separate
sessions for introducing robots; additionally, learning activities are usually conducted
iteratively through several sessions, in order to gather more comprehensive results.
We will follow a similar approach in our next study. Second, although various
previous studies recommend some specific robotics platforms, we intend to focus on the
robots’ functionality more than on the actual features of particular robots. Therefore, we
are planning to use several robotics platforms within the same study design (for example,
our study in Spain will also use the Mouse robot, which is very similar to Bee-Bot,
allowing a comparative study of the results obtained with both robots). This will
introduce new variables into the study: on the one hand, a cultural one, as the participants
will be Spanish children; on the other, another low-end robot, the aforementioned Mouse.
It has to be pointed out that even though the two robots are different, their functionality
is quite similar, allowing the study to concentrate on robot functionality. Last, in future
interventions we are planning to perform the field work with the help of the children’s
therapists instead of university researchers. This will help establish trusting relationships
between children and researchers, enabling longer research cycles.
In the future, we plan to expand the range of robots to be used in these studies,
including robots of our own design, in order to categorize their response to the questions
raised in this preliminary study: improvement in the child’s verbal communication with
a robot, and improvement of child involvement in games and activities.
The results of this study can be used to make changes in the way activities for ASD
children are planned and implemented, as the test results reflect the effectiveness of
involving educational robots as mediators between learning activities and ASD chil-
dren. Nevertheless, this study has some limitations, making it difficult to propose wider
generalizations. For example, the sample is small and consists of test subjects with differ-
ent home languages, cognitive abilities, and levels of speech development. The number
of test-session iterations was low, and (due to the COVID-19 restrictions) some of these
sessions were conducted in an environment unfamiliar to the children, by a person
unfamiliar to them. Last, the children tested did not have previous experience with
Bee-Bots.
5 Ethical Considerations
This work was approved by the Tallinn University ethics committee. Parents were able
to inform researchers about their child’s capacity, communication ability and special
needs, based on which the researcher could adjust the duration of the experiments to
meet the individual needs of the child. The child’s (verbal or nonverbal) consent and
readiness to participate in the study was important. To minimize the risk of test subjects
becoming tired during the experiment, short breaks were planned between the different
stages of the experiments. Parents were allowed to withdraw their child from the study
at any time and they were allowed to access the research results about their child, and in
generalized format about the whole study. Informed consent forms were digitized and
stored with video recordings and other person-related data on an encrypted hard drive,
with the paper originals destroyed immediately after digitalization.
References
1. Newschaffer, C.J., et al.: The epidemiology of autism spectrum disorders. Annu. Rev. Public
Health 28, 235–258 (2007). https://doi.org/10.1146/annurev.publhealth.28.021406.144007
2. Maenner, M.J., Shaw, K.A., Baio, J., et al.: Prevalence of Autism spectrum disorder among
children aged 8 years—Autism and developmental disabilities monitoring network, 11 Sites,
United States, 2016. MMWR Surveill. Summ. 69(SS-4), 1–12 (2020). http://dx.doi.org/10.
15585/mmwr.ss6904a1
3. Dewey, D.: Error analysis of limb and orofacial praxis in children with developmental motor
deficits. Brain Cogn. 23, 203–221 (1993)
4. Baldwin, D.A.: Joint attention: its origins and role in development. In: Moore, C., Dunham,
P.J. (eds.) Understanding the Link Between Joint Attention and Language, pp. 131–158.
Lawrence Erlbaum Associates Inc., Hillsdale (1995)
5. Srinivasan, S.M., Eigsti, I.M., Neelly, L., Bhat, A.N.: The effects of embodied rhythm and
robotic interventions on the spontaneous and responsive social attention patterns of children
with Autism Spectrum Disorder (ASD): a pilot randomized controlled trial. Res. Autism
Spectr. Disord. 27, 54–72 (2016). https://doi.org/10.1016/j.rasd.2016.01.004
6. Frith, U.: Autism: Explaining the Enigma, 2nd edn. Blackwell Publishing, Hoboken (2003)
7. Albo-Canals, J., et al.: A pilot study of the KIBO robot in children with severe ASD. Int. J.
Soc. Robot. 10(3), 371–383 (2018). https://doi.org/10.1007/s12369-018-0479-2
8. Di Lieto, M.C., et al.: Improving executive functions at school in children with special needs
by educational robotics. Front. Psychol. 10, 2813 (2020). https://doi.org/10.3389/fpsyg.2019.
02813
9. Kozima, H., Michalowski, M.P., Nakagawa, C.: Keepon: a playful robot for research, therapy,
and entertainment. Int. J. Soc. Robot. 1, 3–18 (2008). https://doi.org/10.1007/s12369-008-
0009-8
10. Sandygulova, A., et al.: Interaction design and methodology of robot-assisted therapy for
children with severe ASD and ADHD. Paladyn, J. Behav. Robot. 10(1), 330–345 (2019).
https://doi.org/10.1515/pjbr-2019-0027
11. Scassellati, B.B.: How social robots will help us to diagnose, treat, and understand Autism.
In: Robotics Research: Results of the 12th International Symposium, ISRR, pp. 552–563.
Springer, Heidelberg (2007)
12. So, W.-C., et al.: A robot-based play-drama intervention may improve the joint attention and
functional play behaviors of Chinese-speaking preschoolers with Autism spectrum disorder:
a pilot study. J. Autism Dev. Disord. 50(2), 467–481 (2019). https://doi.org/10.1007/s10803-
019-04270-z
13. Silva, K., Lima, M., Santos-Magalhães, A., Fafiães, C., de Sousa, L.: Living and robotic dogs as
elicitors of social communication behavior and regulated emotional responding in individuals
with autism and severe language delay: a preliminary comparative study. Anthrozoös 32(1),
23–33 (2019). https://doi.org/10.1080/08927936.2019.1550278
14. Han, I.: Embodiment: a new perspective for evaluating physicality in learning. J. Educ.
Comput. Res. 49(1), 41–59 (2013)
15. Kopcha, T.J., et al.: Developing an integrative STEM curriculum for robotics education
through educational design research. J. Form. Des. Learn. 1, 31–44 (2017)
16. Werfel, J.: Embodied teachable agents: learning by teaching robots (2014)
17. Pop, C., Pintea, S., Vanderborght, B., David, D.D.: Enhancing play skills, engagement and
social skills in a play task in ASD children by using robot-based interventions. A pilot study.
Interact. Stud. 15, 292–320 (2014)
18. Scassellati, B., Admoni, H., Matarić, M.: Robots for use in Autism research. Annu. Rev.
Biomed. Eng. 14, 275–294 (2012). https://doi.org/10.1146/annurev-bioeng-071811-150036
19. Duquette, A., Michaud, F., Mercier, H.: Exploring the use of a mobile robot as an imitation
agent with children with low-functioning autism. Auton. Robot. 24, 147–157 (2008). https://
doi.org/10.1007/s10514-007-9056-5
20. Dautenhahn, K.: Design issues on interactive environments for children with Autism. In:
Proceedings of the International Conference on Disability, Virtual Reality and Associated
Technologies (2000)
21. Emanuel, R., Weir, S.: Using LOGO to catalyse communication in an autistic child. Technical
report DAI Research Report No. 15, University of Edinburgh (1976)
22. Creswell, J.W.: Collecting Qualitative Data in Educational Research: Planning, Conducting
and Evaluating Quantitative and Qualitative Research, 4th edn. Pearson, Boston (2012)
23. Patton, M.Q.: Qualitative Evaluation and Research Methods. Sage, Thousand Oaks (2002)
24. Mason, J.: Qualitative Researching, 2nd edn. Sage Publications, London (2002)
25. Schadenberg, B.R., Reidsma, D., Heylen, D.K.J., Evers, V.: Differences in spontaneous inter-
actions of autistic children in an interaction with an adult and humanoid robot. Front. Robot.
AI 7, 28 (2020). https://doi.org/10.3389/frobt.2020.00028
26. Efstratiou, R., et al.: Teaching daily life skills in Autism Spectrum Disorder (ASD) interven-
tions using the social robot Pepper. In: Springer Series: Advances in Intelligent Systems and
Computing, 11th International Conference on Robotics in Education. Springer, Singapore
(2021)
27. Cabibihan, J.-J., Javed, H., Ang, M., Aljunied, S.M.: Why robots? A survey on the roles
and benefits of social robots in the therapy of children with Autism. Int. J. Soc. Robot. 5(4),
593–618 (2013). https://doi.org/10.1007/s12369-013-0202-2
28. Costa, S., Soares, F., Santos, C., Pereira, A., Moreira, M.: Lego robots & Autism spectrum dis-
order: a potential partnership? Revista de Estudios e Investigación en Psicología y Educación
3(1), 50–59 (2016)
Development of Educational Scenarios
for Child-Robot Interaction: The Case
of Learning Disabilities
Abstract. During the last decade, there has been an increased interest in research
on the use of social robots in education, both in typical education as well as in
special education. Despite the demonstrated advantages of robot-assisted tutoring
in typical education and the extensive work on robots in supporting autism therapy-
related scenarios, little work has been published on how social robots may support
children with Learning Disabilities (LD). The purpose of this paper is to present a
comprehensive experimental protocol where a social robot is directly involved in
specially designed intervention strategies, which are currently underway. These
strategies are designed to address reading comprehension and other learning skills
through child-robot interaction, aiming at improving the academic performance
of the children diagnosed with learning disabilities.
1 Introduction
We are currently witnessing a new technological revolution in which robots play a
prominent role. One type of this new generation of robots, called social robots, is
designed to participate in daily human activities such as education. Much research
has been carried out to demonstrate the educational utility of social robots. The benefits
of using robots in the learning process are mainly due to their physical appearance,
which makes them attractive to children and stimulates curiosity. Consequently, several
educational advantages arise. Firstly, robots can be used for curricula or populations that
require enhanced engagement [1]. Secondly, using a robot encourages students to
develop social behaviors that are beneficial to learning. Additionally, robots can work
tirelessly as long as their power demands are met and their teaching performance does not
deteriorate with time. They can also be programmed to deliver different subjects without
the need of years of training [2]. Furthermore, educational robots do not discriminate, do
not express frustration and are generally small in size, which makes children feel more
comfortable and increases their confidence [3].
However, despite the fact that much research has been carried out regarding the
potential of using robots as tutors, there have been only a few studies that explore
robots’ potential when it comes to learning independence and efficiency in children
who have been diagnosed with LD. Moreover, social robots have so far mostly been
used in short-term interventions with strictly structured learning scenarios, with limited
adaptation and flexibility to each student’s needs. One important challenge is that
students with LD come into classrooms without the requisite knowledge, skills, and
disposition to read and comprehend texts. This is because children with LD have not
developed metacognitive awareness or the ability to skilfully apply comprehension
strategies [4]. Over the last decades, a plethora of studies have shown the effectiveness
of reading interventions based on cognitive and metacognitive strategies in children with
LD.
In this paper, a comprehensive long-term intervention for children with LD is pre-
sented, where the specially-designed educational activities are supported by a social
robot. It is part of the national project titled “Social Robots as Tools in Special Educa-
tion” which aims at using social robots as innovative tools in Greek Special Treatment
and Education. The paper is structured as follows: In Sect. 2 there is an overview of
the current literature on robot-assisted special education, and the issues in addressing
LD. Section 3 describes the experimental procedure and the role of the social robot in
the proposed educational scenarios. Finally, Sect. 4 summarizes the various aspects and
identifies potential issues of the proposed methodology.
2 Literature Review
An increasing number of studies confirm the potential that social robots have when it
comes to education and tutoring. Teaching assistant robots are being used to provide
motivation, engagement and personalized support to students [5, 6]. Moreover, social
robots can create an interactive and engaging learning experience for students [7, 8].
The physical presence of a robot can lead to positive perceptions and increased task
performance when compared with virtual agents [9]. For example, work presented in
[10] suggests that instructions given by a social robot during a cognitive puzzle task can
be more effective than those that were given by video-based agents or a disembodied
voice. As it can be noticed in these studies, the use of social robots in the learning
process motivates children and increases their attention. This is particularly important,
since students with LD often lack motivation and do not usually participate actively
in the learning process.
The use of social robots in therapeutic interventions in special education has mainly
focused on the treatment of autism. As far as learning disability interventions are concerned,
studies are typically limited to short, well-defined lessons delivered with limited
adaptation to individual learners. Recent studies aim at exploring how social robots can
support meta-cognitive strategies that lead to learning independence. In [11], there is
an investigation of how personalized robot tutoring using an open learner model can
impact learning and self-regulation skills. The study concluded that a social robot can be
effective in scaffolding self-regulated learning and in providing motivation in a real-world
school environment. In another study, the effectiveness of the “think aloud” protocol
delivered via a robot tutor was investigated [12]. The results showed that students who
completed the “think aloud” exercise were more engaged when it was delivered by the
robot tutor, and that it fostered immediate learning outcomes. Although these results refer
to typical education, the underlying principles can be applied in equivalent special
education settings.
3 Methods
In this section, the experimental design and experimental procedure that will be followed
in this study are described.
chance of being selected as a subject. The 29 participants of the clinical trials are children
in the third and fourth grades of primary school (9–11 years old, 21 males and 7 females),
since in these early grades emphasis is given to the teaching of decoding, as opposed to
later grades where emphasis is given to reading comprehension. The participants have
a diagnosis of LD (DSM-5 315.00, 315.1, 315.2). To confirm their eligibility for the
study, and after parental consent was given, an official assessment was carried
out at the Papageorgiou General Hospital in Thessaloniki, Greece. The students’ individual
educational profiles were created from this initial assessment.
The eligible participants have then been divided into two groups depending on the
type of intervention they receive:
The comparison between the two groups at the end of all the intervention sessions will
result in quantifiable conclusions regarding the effect of using the social robot. To address
privacy issues, any data collected was anonymized.
Interventions take place in a therapy room which provides all the necessary equip-
ment for the intervention, such as the robot, a laptop for the control and recording
software, and other items that are part of the educational activities.
The social robot NAO has been selected for the proposed learning intervention. It is a
humanoid robot that has already been used by the authors in previous work [19, 20] and
is a popular choice in educational robotics research because of its appealing physical
appearance as well as its sensory and motion capabilities. The robot is able to speak
and move; its speech can be controlled in terms of speed and pitch. It can also use
LEDs and play back sounds, which complement and enhance its interaction with the
child. Speech recognition allows the robot to understand what the child is saying, and
to monitor and record sound. Finally, the robot has cameras that can be used to detect
motion or eye contact. The robot’s role in the interventions is dual:
The use of a familiar, informal communication style by the robot that personalizes the
intervention has been found to lead to learning benefits [21]. Therefore, in terms of
personalization, the robot is able to remember and use the child’s name during speech,
and recalls past activities to adjust the difficulty level of current activities.
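The adaptation described above, greeting the child by name and using past activities to set the difficulty of current ones, is not specified in detail in the paper; the following is an illustrative sketch only, with a made-up threshold rule standing in for whatever adaptation logic the system actually uses:

```python
# Illustrative sketch of session personalization: greet the child by
# name and pick a difficulty level from past performance scores.
# The 0.8 / 0.5 thresholds are assumptions, not taken from the paper.

def greet(name):
    """Personalized greeting using the child's name."""
    return f"Hello, {name}! Ready to play?"

def next_difficulty(past_scores, current=1, max_level=3):
    """Raise difficulty after two good sessions, lower it after a poor one."""
    if len(past_scores) >= 2 and all(s >= 0.8 for s in past_scores[-2:]):
        return min(current + 1, max_level)
    if past_scores and past_scores[-1] < 0.5:
        return max(current - 1, 1)
    return current

print(greet("Maria"))                           # -> Hello, Maria! Ready to play?
print(next_difficulty([0.9, 0.85], current=1))  # -> 2
print(next_difficulty([0.9, 0.4], current=2))   # -> 1
```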
Table 1 shows an example intervention session in which the objectives are memory
reinforcement, decoding and reading comprehension.
Introduction: The robot greets the child, and then a short conversation between robot
and child follows.
Memory reinforcement: The child is given a set of pictures (e.g. animals). After the
child observes the pictures, one of them is removed. The robot asks which picture is
missing and checks the answer.
Decoding: A word is formed and then one or more letters are substituted with other
letters in order to form pseudo-words; for example, the word “table” becomes the word
“mable”. When a number of these pseudo-words have been produced, the robot asks the
student to read them, according to the letter substitutions given by the educator. The
robot assesses the answers.
Reading comprehension: The student is asked by the robot to work through the text
using the fix-up strategies that have been taught. The robot gives the following
instructions and monitors the child’s performance: 1) underline unknown words and
phrases, 2) re-read the part that is harder for you to understand, 3) ask some new
questions about the text, 4) try to retell the story in your own words.
Psychopedagogical activity: The robot describes itself to the child, mentioning the
features it feels are its strongest. The robot then asks the child to do the same. The
activity ends with the robot praising the child for the features he or she chose
(self-image enhancement).
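The letter-substitution step of the decoding activity can be sketched as a small helper; the substitution positions and letters here are illustrative examples only (in the sessions they are chosen by the educator):

```python
# Sketch of pseudo-word generation for the decoding activity: one or
# more letters of a real word are substituted to form a non-word,
# e.g. "table" -> "mable". Substitution pairs are illustrative.

def make_pseudo_word(word, substitutions):
    """Apply letter substitutions given as (position, new_letter) pairs."""
    letters = list(word)
    for pos, new_letter in substitutions:
        letters[pos] = new_letter
    return "".join(letters)

print(make_pseudo_word("table", [(0, "m")]))             # -> mable
print(make_pseudo_word("window", [(0, "p"), (3, "b")]))  # -> pinbow
```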
At the end of every session, the educator reviews the data collected by the robot.
After the completion of the entire intervention, the children’s reassessment (follow-up
session) takes place at the hospital in order to examine the learning gains.
It can be seen from the above that the social robot has a central role in all educational
activities both by a) guiding the educational process, and (b) monitoring the children’s
performance. The implementation of the scenarios requires specific capabilities on the
part of the NAO social robot, particularly the text-to-speech function for instructions,
recitation and pronunciation exercises. The robot is frequently required to interact with
the child, so speech recognition is also vital to the implementation of the scenarios.
Preliminary work revealed that recognition works optimally when the child is directly
in front of the robot while speaking. Since the child is expected to move during the
intervention, face tracking has been employed so that the robot continuously maintains
its focus on the child, thereby improving the recognition rate as well as helping to
maintain the child’s engagement. Other features, such as gestures, sound playback and
the multi-color LEDs, are used as motivators and improve the child’s experience.
Preliminary observations in this ongoing work reveal that most children in the robot-
assisted intervention group appear to be motivated by the presence of the robot and are
generally more responsive to the robot’s instruction. However, the quantified educational
benefits of using the robot as opposed to the control group remain to be seen in the
statistical analysis at the end of the intervention.
4 Conclusions
The paper presents a complete social robot-based intervention for dealing with learning
disabilities, which is currently being applied. There are four distinct learning objectives,
namely phonological awareness, reading skills, working memory strategies and reading
comprehension skills. The protocol consists of 24 sessions with a well-defined struc-
ture of activities. The social robot NAO is integrated into the activities and participates
actively as a teacher’s assistant by giving instructions, providing motivation and guiding
the workflow of the various exercises. The robot is also used as an annotator,
monitoring the child’s performance and recording quantities that are not easily measured
by an educator in real time, such as duration of speech, voice volume, performance in
each exercise, etc. In this way, the educator can keep a detailed record of the sessions and
monitor the child’s improvement throughout the duration of the intervention. Ultimately,
the skills practiced during the intervention with the aid of the social robot, are expected
to improve the overall academic performance of the children.
Acknowledgment. This research has been co-financed by the European Union and Greek national
funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under
the call RESEARCH – CREATE – INNOVATE (project code: T1EDK-00929).
Development of Educational Scenarios for Child-Robot Interaction 33
Educational Robotics in Online Distance Learning: An Experience from Primary School
1 Introduction
With the onset of the Covid-19 pandemic, schools worldwide faced temporary
closure, intermittently affecting more than 90% of the world’s students [1]. As
a consequence, many teachers had to adapt their activities to online formats in order to continue instruction at a distance. Moving to the online space represented a particular challenge for compulsory schools, where, in contrast to many universities, online learning is still rather unexplored [2]. In addition to general difficulties associated with this new format of instruction, such as lack of experience
in online learning or poor digital infrastructure [3], implementing Educational
Robotics (ER) activities appeared to be particularly complicated. ER activities
usually comprise a combination of different learning artifacts [4], namely robots, programming interfaces and playgrounds, which are often dependent on each other and have thus also been referred to as "Educational Robotics Learning Systems" (ERLS) [5]. Often, students benefit from their school's infrastructure providing them with access to these artifacts. In this regard, the restrictions on individual mobility imposed during the pandemic in many countries have significantly limited these opportunities. Although previous works have explored remote labs as a way to implement robotics education in distance learning, most approaches were limited to university level (e.g. [6–9]) or secondary school (e.g. [10,11]). Bringing such approaches to primary school education is not straightforward, since it would require extensive, age-appropriate adjustments of the proposed tools and activities.
Furthermore, ER activities usually capitalize on students' interactions with their peers and teachers, building on the ideas of project-based didactics [12] and social constructivist learning [13]. As illustrated in a previous study by Negrini and Giang [14], students indeed perceived collaboration as one of the most important aspects of ER activities. However, during the pandemic, with teachers and students working from home, providing possibilities for direct interactions became significantly more difficult. While recent technological advances have yielded a myriad of tools for online communication and collaboration, only a few educators have so far acquired the pedagogical content knowledge to leverage them for the design of meaningful online learning experiences [15]. As a matter of
fact, Bernard et al. [16] have emphasized that the effectiveness of distance educa-
tion depends on the pedagogical design rather than the properties of the digital
media used. Designing engaging and pedagogically meaningful online activities
is especially important at primary school level, in order to address the limited
online attention spans of young children as well as the concerns of many parents
with regard to online learning [17].
Both the limited access to ER learning artifacts and the unfavorable conditions for direct interactions in online settings thus represented major obstacles to be overcome to implement ER online activities during the Covid-19 pandemic. On the other hand, these circumstances created an opportunity to experiment with new approaches addressing these issues. In this work, we present one such approach, developed during the Covid-19 pandemic to implement ER online activities for primary school students. To address the issue of limited access to ER learning artifacts, the activities were mainly based on pen-and-paper approaches. Commonly used social media served as a complement to augment the learning experience, allowing students to communicate with their peers and teacher as well as to collaborate with each other. The devised activities were proposed to 13 students aged between 9 and 12 years, over the course of four weeks. In the following, we present the devised activity, report on the benefits and challenges observed from this experience, and finally discuss how the findings could inform future work in this domain.
36 C. Giang and L. Negrini
Fig. 1. Paper grid and triangles to locally simulate the behavior of the Bee-Bots (A),
set of instructions for the robot (B) and an example of a solution proposal (C).
The paper grid and the triangles served the students as learning artifacts
to locally simulate the behavior of the Bee-Bots. Each triangle represented a
robot with its orientation on the grid. The paper strips allowed the students
to write down their solution proposals before submitting them to their teacher
for evaluation. Solutions were submitted as photos sent through smartphone
messaging applications (Fig. 2A). The teacher, having access to Bee-Bot robots,
1 https://www.tts-international.com/bee-bot-programmable-floor-robot/1015268.html.
Fig. 2. Example of a message exchange between a student and the teacher (A), assign-
ment in which four robots have to be programmed to drive around the outer circle
of the grid (B) and assignment in which all four robots have to perform the same
choreography without crashing into each other (C).
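The paper triangles simulate the Bee-Bot's command set (forward/backward one cell, 90° left/right turns). These semantics can be sketched in code; the command names, grid size and coordinates below are illustrative, since the real Bee-Bot is programmed through physical buttons rather than a textual interface:

```python
# Bee-Bot-style command semantics on a grid, as simulated by the students
# with paper triangles. Names and the grid size are illustrative.

HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W as (dx, dy)

def run_program(start, heading, commands, grid_size=4):
    """Execute F/B/L/R commands; return the final (x, y) cell and heading index."""
    x, y = start
    h = heading  # index into HEADINGS
    for cmd in commands:
        if cmd == "L":
            h = (h - 1) % 4
        elif cmd == "R":
            h = (h + 1) % 4
        elif cmd in ("F", "B"):
            dx, dy = HEADINGS[h]
            sign = 1 if cmd == "F" else -1
            nx, ny = x + sign * dx, y + sign * dy
            if 0 <= nx < grid_size and 0 <= ny < grid_size:  # stay on the paper grid
                x, y = nx, ny
        else:
            raise ValueError(f"unknown command: {cmd}")
    return (x, y), h

print(run_program(start=(0, 0), heading=1, commands=list("FFLF")))
# → ((2, 1), 0): two cells east, turn left, one cell north
```

A solution proposal written on a paper strip is then just such a command list, which the teacher can replay on a physical robot.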
The integration of social media that most students were familiar with from their
everyday life (i.e., smartphone messaging applications) facilitated a smooth oper-
ation without any technical issues. At the same time, it allowed students to com-
municate and collaborate with their peers even from afar. Similarly, it allowed
the teacher to provide direct feedback to individual students using text or audio
messages. In addition to the asynchronous approach through messaging appli-
cations, the weekly online video conferences enabled synchronous moments of
interaction. During the video conferences, the teacher discussed with the students in plenary sessions, explaining upcoming assignments and debriefing completed ones. In their work published during the pandemic, Rapanta et al. [15] have suggested mixing synchronous and asynchronous approaches as a favorable way to organize online learning. When well designed, such a mix makes it possible to shift the responsibility for learning to the student, while the teacher takes the role of a facilitator and tutor. Since ER is built on the same learner-centered approach, following this idea of hybrid forms of instruction could provide a favorable framework for the implementation of ER online activities. Moreover, from the teacher's perspective, following hybrid approaches could significantly reduce the fatigue associated with the extensive use of synchronous forms of online instruction. Except for the first week, when around 30 min were taken to explain the general idea of the devised activities, only a little time (around 5 min) was needed to discuss the activities during the online video conferences of the subsequent weeks.
In a recent work, El-Hamamsy et al. [18] have highlighted that the intro-
duction of screens in primary school is still met with reticence. Reducing active
screen time was particularly relevant during the pandemic, in order to address
the limited online attention span of young children as well as the concerns of
their parents with respect to online learning [17]. With online learning becoming
increasingly prevalent, finding alternative, “analogue” ways of distance learning
is crucial. As Mehrotra et al. [19] have suggested before, one way to address the issue of extensive screen time in ER activities is to introduce paper-based learning approaches. Building on this idea, the activities proposed in this work mainly
relied on paper-based materials. It could be argued that performing ER activities
without having students directly interact with robots and programming inter-
faces represents a major limitation of the proposed approach. However, a recent
work by Chevalier et al. [20] has also highlighted that providing students with
unregulated access to ER tools, in particular the programming interfaces, does
not necessarily result in better learning experiences. Indeed, when ER activities
are not well designed, involving these learning artifacts can even be counter-
productive, since they can promote blind trial-and-error approaches and hence
prevent desired reflection processes. The pen-and-paper approach applied in this work provided students with alternative learning artifacts that, in contrast to real robots, cannot provide immediate feedback on the proposed solutions. The decelerated feedback loop, requiring students to take pictures, send them to their teacher and wait for their response, may therefore represent an inter-
esting approach to promote the use of pauses for reflection. As shown by Perez
et al. [21], the strategic use of pauses was strongly associated with successful
learning in inquiry-based learning activities.
The three youngest students (9 years old) who chose to participate in the
proposed ER online activities, completed one assignment per week for the whole
duration of the school closure (four weeks in total). Of the six older students (12 years old) who started in the first week, only half continued until the end. The higher dropout rate of the older students could be related to the insufficient difficulty of the proposed activities. Bee-Bots are designed as
tools for rather young children and are therefore limited in the complexity of
the possible programming instructions. The proposed ER online activities could
thus have been perceived as not sufficiently challenging by the older students.
4 Conclusion
The presented ER online activity was developed to face the challenges of online distance learning and allowed primary school students to continue with ER activities during the Covid-19 pandemic. Moreover, some interesting aspects emerged from this experience, in particular the promotion of reflection processes due to
the use of pen and paper learning artifacts. The relevance of such approaches
could indeed go beyond the context of online learning and potentially also gen-
eralize to ER in face-to-face classroom activities. However, it should be acknowl-
edged that more exhaustive studies are needed to confirm this hypothesis. More-
over, future work could also investigate whether similar approaches could be
adapted for more advanced ER tools in order to design more complex tasks for
more mature students.
Acknowledgements. The authors would like to thank the teacher Claudio Giovanoli
for his contributions in designing and implementing the Bee-Bot online activity as well
as the children of the school in Maloja who decided to participate in the voluntary ER
online activities.
References
1. Donohue, J.M., Miller, E.: Covid-19 and school closures. JAMA 324(9), 845–847
(2020)
2. Allen, J., Rowan, L., Singh, P.: Teaching and teacher education in the time of
covid-19 (2020)
3. Carrillo, C., Flores, M.A.: Covid-19 and teacher education: a literature review of
online teaching and learning practices. Eur. J. Teach. Educ. 43(4), 466–487 (2020)
4. Giang, C., Piatti, A., Mondada, F.: Heuristics for the development and evaluation
of educational robotics systems. IEEE Trans. Educ. 62(4), 278–287 (2019)
5. Giang, C.: Towards the alignment of educational robotics learning systems with
classroom activities. Ph.D. thesis, EPFL, Lausanne, Switzerland (2020)
6. Kulich, M., Chudoba, J., Kosnar, K., Krajnik, T., Faigl, J., Preucil, L.: SyRoTek - distance teaching of mobile robotics. IEEE Trans. Educ. 56(1), 18–23 (2012)
7. dos Santos Lopes, M.S., Gomes, I.P., Trindade, R.M., da Silva, A.F., Lima,
A.C.d.C.: Web environment for programming and control of a mobile robot in
a remote laboratory. IEEE Trans. Learn. Technol. 10(4), 526–531 (2016)
1 Introduction
The fast development of robotics is leading to an ever greater presence of robots in our society. One of the main sectors in which robots are showing their potential outside of industry and research is education. As a matter of fact, educational robotics programs in schools are spreading worldwide [1].
The use of robots as a teaching method has been suggested not only to develop and improve students' academic skills (e.g., mathematics and geometry, coding, physics, etc.), but also to boost important soft skills [2], although further quantitative research might be needed to confirm this [3]. In particular, educational robotics is expected to help develop problem-solving, logical reasoning, creativity, linguistic skills, and social skills such as teamwork and interaction with peers [4, 5].
Robotics has been widely deployed in high-school curricula as well as in secondary and primary schools [6, 7]. One of the most common tools that has proven effective for introducing robotics to children and young students is LEGO Mindstorms1.
1 https://www.lego.com/it-it/themes/mindstorms.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 43–51, 2022.
https://doi.org/10.1007/978-3-030-82544-7_5
44 G. C. Bettelani et al.
Indeed, the pedagogical use of LEGO goes in the direction of learning through play, which has been shown to be a very effective approach to students' learning [8].
As expected, while students are usually very enthusiastic about the use of robots at school, teachers may feel anxious about it and need adequate training and, possibly, technical support during the lessons [7, 9, 10].
Although many experiences have been conducted, robotics courses held by university robotics researchers are not common in most pre-university institutes. In this work, we describe a collaboration between the "Research Center E. Piaggio" of the University of Pisa and an Italian high school in the scientific and technological area ("ITI Galileo Galilei", Livorno, Italy). The collaboration began in 2018 within the national program for school-work transition promoted by the Italian Ministry of Education (MIUR). The aim of the program is twofold: 1) putting into practice, in realistic working environments, the knowledge acquired at school; 2) helping the students make a conscious decision about their future job or university career.
The objective of this paper is to share our experience, which we think could serve as a model for teachers in future robotics courses. Results of the students' end-of-term survey are reported and show positive feedback on the proposed approach. Finally, the paper also aims at discussing gender issues and COVID-19 restriction problems in the educational robotics field.
2 Preliminary Activities
Before starting the robotics lessons in class, the students attended a general introduction to the activities of the "Research Center E. Piaggio" of the University of Pisa. In particular, a Professor of the University and the Ph.D. students who participated in the project (i.e., the authors) presented their research topics: humanoid robots, autonomous
cars, drones, etc. The students also had the opportunity to visit the laboratories of the "Research Center E. Piaggio" and see real robots (e.g., the KUKA arm [11], the EGO robot [12]). During this visit, they could gain insight into real-world robotics applications.
They also understood that robotics is not limited to industrial robots but is a wide field that includes, for example, the realization of prosthetic robotic hands (e.g., the Pisa/IIT SoftHand Pro [13]) and wearable haptic devices (e.g., the CUFF device [14] and the Skin-stretch device [15]), which are developed to restore the sense of touch in amputees.
These preliminary activities were of particular importance to introduce a large group
of students to robotics and to give them a general overview of the multidisciplinarity of
robotics, which includes mechanics, electronics, programming, automatic control and
so on. Moreover, they could see how working in robotics can be an actual profession and had the chance to get a taste of what working in the field of robotics research means (e.g., working in teams and meeting people with different backgrounds such as bio-medical, mechanical, computer, automation, and even management engineers). Finally, the students who were most fascinated by these activities decided to take part in the extracurricular robotics course that was part of the school-work transition program.
Robotics Laboratory Within the Italian School-Work Transition Program 45
Fig. 1. Example of the race environment. On the left side: the black line that the robot had to
follow (choosing the direction signaled by the green markers in the case of multiple options). On
the right side: the rectangular “dangerous area” with the balls. The red cross indicates the other
possible corner where the triangle could be placed.
The goal of the mission was to get as many points as possible before the time ran out. A mobile robot, positioned at the starting point, had to recognize and follow a twisty path, created with a black line, which was painted on a floor with many asperities (e.g., bumps, holes, obstacles). At the end of the black path, the robot found a silvery line that marked the entrance to a rectangular area: the "dangerous area" with the "victims", which were represented by balls. Specifically, the balls could be black ("dead victims") or silvery ("alive victims"). The robot had to pick up the balls and place them into a triangular area, which was raised to a certain height above the floor. The triangle could be positioned in one of the "dangerous area" corners that were in front of the robot when it entered the area. Notably, the rescue of "alive victims" gave more points than the rescue of "dead victims". After rescuing all the "victims", the robot had to find a way to exit the "dangerous area".
2 https://junior.robocup.org/.
– Identification of the triangle position: the students decided to use the ultrasound sensor (placed on the front side of the robot and facing ahead) to measure the robot's distance from the side walls of the "dangerous area". In doing so, the robot was able to compute the center of the rectangular area in the transversal direction and move there. After that, by measuring its distance from the frontal wall and from the corners of the room, the robot was able to determine in which corner of the area the triangle was (see Fig. 2 for an explanation of all the steps).
– Pick-up of the "victims": some students decided to design a calliper with the FreeCAD software (Fig. 3). They decided to control the calliper elevation by implementing the PID controller that the researchers had explained during the related theoretical lessons.
– Coverage of the rectangular dangerous zone: the students decided to make the robot follow a serpentine path. If a light sensor, positioned at the base of the calliper, identified a ball, the robot moved forward to the wall, where the ball could be dragged into the calliper and then brought to the triangle. It is important to note that when the robot arrived at the triangle, the height of the calliper had to be controlled in order to lift the ball into the triangle. For this task the students decided not to distinguish between "alive" and "dead" victims but only to detect whether the robot had picked up a ball or not (Fig. 4).
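The PID control of the calliper elevation mentioned in the strategies above can be sketched as a discrete-time loop. The gains, sampling time and the crude integrator plant below are illustrative assumptions, not the students' actual implementation:

```python
# Discrete PID loop for the calliper elevation; all numeric values and the
# toy plant model are illustrative assumptions.

def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.02):
    """One PID update; state carries the running integral and the previous error."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    command = kp * error + ki * integral + kd * derivative
    return command, (integral, error)

# Drive a toy elevation model toward a 5 cm setpoint.
setpoint, height, state = 5.0, 0.0, (0.0, 0.0)
for _ in range(2000):
    command, state = pid_step(setpoint - height, state)
    height += 0.02 * command  # toy plant: elevation rate proportional to command

print(round(height, 2))
```

On the real robot, the measured elevation would replace the toy plant and the command would drive the LEGO motor.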
For the development of these strategies, the students exploited multiple skills. Indeed, while the first and the third proposed strategies are related to the development of technical skills, the other two strategies are associated with problem solving.
Fig. 2. Triangle position identification. The solution proposed by the students is articulated as follows: a) After entering the rectangular area, the robot rotates counterclockwise and measures its distance from the wall on its left. b) The robot measures its distance from the wall on its right. c) The robot can now compute the midpoint of the area width and, once it positions itself there, it computes its distance from the frontal wall. d-e) Knowing the distances computed in panel c, named x and y, the robot can compute the angle α. At this point, the robot turns counterclockwise and clockwise by α. The triangle is placed in the corner where the measured robot-corner distance is smaller (dashed red line).
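The corner identification in Fig. 2 reduces to simple trigonometry. A sketch under our reading of the caption, in which x is the half-width of the area (the robot sits at the midpoint) and y its distance from the frontal wall; the function name and the sample readings are illustrative:

```python
import math

# Triangle-corner identification following Fig. 2; x/y semantics are our
# interpretation of the caption, and all sample values are illustrative.

def find_triangle_corner(x, y, dist_left, dist_right):
    """Return the turn angle alpha (rad) and the corner holding the triangle."""
    alpha = math.atan2(x, y)  # angle from the frontal direction to either corner
    # An empty corner measures the full robot-corner distance sqrt(x^2 + y^2);
    # the raised triangle reflects the ultrasound earlier, so it reads shorter.
    return alpha, ("left" if dist_left < dist_right else "right")

alpha, corner = find_triangle_corner(x=0.5, y=1.0, dist_left=0.9, dist_right=1.12)
print(round(math.degrees(alpha), 1), corner)  # → 26.6 left
```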
Fig. 3. Mechanical parts designed and realized by the students: a) the calliper CAD model realized with the FreeCAD software; b) lateral view of the calliper mounted on a LEGO motor; c) calliper and motor mounted on the robot.
Fig. 4. Coverage strategy found by the students: when the robot identifies the position of the triangle, it memorizes that position and moves there. From the triangle position, the robot starts the serpentine and, when it recognizes a ball, it drags it to the wall, so that the ball can be wedged in the calliper and brought to the triangle.
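The coverage pattern in Fig. 4 is a standard serpentine (boustrophedon) sweep. A sketch of the cell-visiting order over a discretized area; the grid dimensions and the cell abstraction are illustrative:

```python
# Serpentine (boustrophedon) sweep over a discretized rectangular area, as in
# the students' coverage strategy; grid dimensions are illustrative.

def serpentine(cols, rows):
    """Yield grid cells column by column, reversing direction every column."""
    for c in range(cols):
        sweep = range(rows) if c % 2 == 0 else range(rows - 1, -1, -1)
        for r in sweep:
            yield (c, r)

print(list(serpentine(3, 2)))
# → [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (2, 1)]
```

Visiting cells in this zig-zag order covers the whole rectangle without revisiting any cell.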
4 Discussion
The solutions proposed by the students were of course not optimal, but the message that we tried to deliver was that in robotics there is often no unique solution. This is the reason why, in this field, it is important to work with others and share opinions and doubts. We also tried to show the students that robotics is indeed multidisciplinary. For this reason, we introduced 3D mechanical design and invited Eng. Daniele Benedettelli, a freelance LEGO designer, who presented to the students LEGONARDO5, a LEGO portrayer robot. During this lesson, the students also had the opportunity to be introduced to the concept of Artificial Intelligence, which is becoming deeply intertwined with modern robotics.
Another important point that we would like to underline is that the students, dur-
ing the implementation of the race solutions, were able to apply and understand the
importance of the concepts explained during the lessons, such as the calibration of the
sensors, the concept of closed-loop control and the 3D mechanical design. We think
that it was of particular relevance to have robotics researchers explaining these ideas.
5 https://robotics.benedettelli.com/legonardo/.
Indeed, robotics researchers naturally speak using the right contextual vocabulary and show great confidence in topics they deal with regularly. Because of this, it came quite naturally to us to teach these concepts, as we experience them daily. Teachers, on the other hand, usually lack deep robotics knowledge and practice, and can thus benefit from input coming from collaborations with universities. As mentioned before, this project was an experience of school-work transition, so it was essential that the students collaborated with experts in the robotics field. However, we would like to stress the importance that the teachers' knowledge of didactics and pedagogy would have in a curricular course. For this reason, in that case, the teachers should hold the course, but robotics professionals, such as robotics researchers, could provide substantial support to both students and teachers.
Moreover, we also think that this project was a great opportunity for both researchers and students. On the one hand, as said before, working on problem solving with researchers was a good chance for the students to be introduced to the robotics world and, more generally, to the research world. On the other hand, for us, this project was not only very satisfying but also gave us the opportunity to explain in a simple way concepts that we take for granted. The students' impressions of the course/project are reflected in the results of the Likert scale questionnaire (Table 1) proposed at the end of the course. In particular, the students identified the project as useful (Q1), they considered the lessons well explained (Q2) and they found that the lesson topics were sufficiently detailed (Q3). Then, the students considered it useful to work in groups (Q4). They also considered that the course should have been longer (Q5). This last result is to be interpreted in light of the students' enthusiasm. Indeed, some of them left a comment suggesting that they would have willingly done a longer internship.
Finally, we would like to discuss some important problems related to robotics. The first one concerns gender issues. The situation has improved considerably in recent years, with more women in scientific fields. However, the work that needs to be done to highlight and resolve gender-related differences in STEM (Science, Technology, Engineering, and Mathematics) is still significant [16, 17]. Indeed, it is necessary that females feel as much a part of these fields as males do. Only male students participated in the laboratory in 2018. The laboratory was open to volunteers, since it represented one possible choice among the opportunities offered within the school-work transition program. We want to stress the importance of the preliminary activities that we carried out (the preliminary lessons and the visit to our laboratory), since they were aimed at a wider audience (a large number of students participated, females included, not only the ones involved in the laboratory activity). During the preliminary lessons and the visit to the "Research Center E. Piaggio", we explained that our course would be held by three female researchers and two male researchers, and all the students could directly see females at work in a robotics laboratory. Encouragingly, in 2019 one female student participated in the course, and in 2020 two were enrolled, even though the course was not held due to COVID-19 restrictions. It is important to underline that the high school "ITI Galileo Galilei" is mainly attended by males, so having two girls enrolled in the course can be considered a good result. We hope that we served as positive role models encouraging female students to approach robotics, and that the greater female participation in subsequent years is a consequence of this.
Table 1. Likert scale questions (rated from 1 to 7, where 1 means that the students completely disagree with the statement and 7 means that they completely agree).
The last point that is important to underline in this context is that the teaching of
scientific subjects has been heavily affected by COVID-19 restrictions. Indeed, organizing
a laboratory tour and teaching a course like the one we held is difficult to manage
under pandemic restrictions. However, the presentation of the laboratory
activities could be held online. During the pandemic, a virtual presentation of the labora-
tory activities of the Research Center E. Piaggio was tested for the Researchers’ Night
event, and the experience was truly immersive for the people at home (see the
video of the event at this link https://www.youtube.com/watch?v=9QvTPpAReUg).
As for the robotics course itself, something could be achieved with
funds to buy a laptop and a LEGO Mindstorms kit for each student. The students could
not collaborate during the lessons as they would have done in class, but they could at
least try out their solutions on the robot. Moreover, it has also been suggested [18] that
simulators represent the most suitable solution for easily carrying out robotics classes
within the restrictions imposed by the recent social distancing. Indeed, if the school
does not have enough funds to buy a large number of robots, each student could use the
simulator to test his/her strategies, and the teacher could then try the solutions on the
real robot during a video lesson. Furthermore, the target simulator should be easy to use,
accurate enough for the students to test their solutions reliably, and moderately priced.
Finally, to solve the problem related to the printing of the 3D mechanical parts, the
school could rely on online platforms that offer 3D printing and shipping services
(e.g., 3Dhubs6 or the Fab Labs of the city).
Acknowledgments. We would like to thank Professor Cristina Tani and Professor Giorgio Meini
of the high school “ITI Galileo Galilei”, Livorno, Italy, who gave us the opportunity to undertake
this great experience. We also thank Eng. Daniele Benedettelli, who contributed to this course
with his great experience with LEGO and his enthusiasm.
References
1. Miller, D.P., Nourbakhsh, I.: Robotics for education. In: Siciliano, B., Khatib, O. (eds.)
Springer Handbook of Robotics, pp. 2115–2134. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-32552-1_79
6 https://www.3dhubs.com/.
Robotics Laboratory Within the Italian School-Work Transition Program 51
2. Toh, L.P.E., Causo, A., Tzuo, P.-W., Chen, I.-M., Yeo, S.H.: A review on the use of robots in
education and young children. J. Educ. Technol. Soc. 19(2), 148–163 (2016)
3. Benitti, F.B.V.: Exploring the educational potential of robotics in schools: a systematic
review. Comput. Educ. 58(3), 978–988 (2012)
4. Blanchard, S., Freiman, V., Lirrete-Pitre, N.: Strategies used by elementary schoolchildren
solving robotics-based complex tasks: Innovative potential of technology. Procedia-Soc.
Behav. Sci. 2(2), 2851–2857 (2010)
5. Souza, I.M.L., Andrade, W.L., Sampaio, L.M.R., Araujo, A.L.S.O.: A systematic review on
the use of LEGO robotics in education. In: 2018 IEEE Frontiers in Education Conference
(FIE), pp. 1–9. IEEE (2018)
6. Scaradozzi, D., Sorbi, L., Pedale, A., Valzano, M., Vergine, C.: Teaching robotics at the pri-
mary school: an innovative approach. Procedia - Soc. Behav. Sci. 174, 3838–3846 (2015).
International Conference on New Horizons in Education, INTE 2014, 25-27 June 2014,
Paris, France
7. Khanlari, A., Mansourkiaie, F.: Using robotics for stem education in primary/elementary
schools: Teachers’ perceptions. In: 2015 10th International Conference on Computer Science
& Education (ICCSE), pp. 3–7. IEEE (2015)
8. Atmatzidou, S., Markelis, I., Demetriadis, S.: The use of LEGO Mindstorms in elementary and
secondary education: game as a way of triggering learning. In: International Conference of
Simulation, Modeling and Programming for Autonomous Robots (SIMPAR), Venice, Italy,
pp. 22–30. Citeseer (2008)
9. Chalmers, C.: Robotics and computational thinking in primary school. Int. J. Child-Comput.
Interact. 17, 93–100 (2018)
10. Alimisis, D.: Robotics in education & education in robotics: shifting focus from technology
to pedagogy. In: Proceedings of the 3rd International Conference on Robotics in Education,
pp. 7–14 (2012)
11. Bischoff, R., et al.: The KUKA-DLR lightweight robot arm - a new reference platform for robotics
research and manufacturing. In: ISR 2010 (41st International Symposium on Robotics) and
ROBOTIK 2010 (6th German Conference on Robotics), pp. 1–8. VDE (2010)
12. Lentini, G., et al.: Alter-ego: a mobile robot with a functionally anthropomorphic upper body
designed for physical interaction. IEEE Robot. Autom. Mag. 26(4), 94–107 (2019)
13. Godfrey, S.B., et al.: The softhand pro: translation from robotic hand to prosthetic prototype.
In: Ibáñez, J., González-Vargas, J., Azorn, J., Akay, M., Pons, J. (eds.) Converging Clinical
and Engineering Research on Neurorehabilitation II, pp. 469–473. Springer, Cham (2017).
https://doi.org/10.1007/978-3-319-46669-9_78
14. Casini, S., Morvidoni, M., Bianchi, M., Catalano, M., Grioli, G., Bicchi, A.: Design and
realization of the cuff-clenching upper-limb force feedback wearable device for distributed
mechano-tactile stimulation of normal and tangential skin forces. In: 2015 IEEE/RSJ Inter-
national Conference on Intelligent Robots and Systems (IROS), pp. 1186–1193. IEEE (2015)
15. Colella, N., Bianchi, M., Grioli, G., Bicchi, A., Catalano, M.G.: A novel skin-stretch haptic
device for intuitive control of robotic prostheses and avatars. IEEE Robot. Autom. Lett. 4(2),
1572–1579 (2019)
16. Kuschel, K., Ettl, K., Dı́az-Garcı́a, C., Alsos, G.A.: Stemming the gender gap in stem
entrepreneurship-insights into women’s entrepreneurship in science, technology, engineer-
ing and mathematics. Int. Entrepreneurship Manage. J. 16(1), 1–15 (2020)
17. Casad, B.J., et al.: Gender inequality in academia: problems and solutions for women faculty
in stem. J. Neurosci. Res. 99(1), 13–23 (2021)
18. Bergeron, B.: Online stem education a potential boon for affordable at-home simulators.
SERVO Mag. 2 (2020)
How to Promote Learning and Creativity
Through Visual Cards and Robotics
at Summer Academic Project Ítaca
1 Introduction
Nowadays, the use of robots is quite common in compulsory education, basically
through non-formal educational activities. Such activities have been shown to
contribute to reducing the risk of school dropout, and robots have also proved
useful in activities with students who have diverse special needs [1,2]. Taking
these benefits into consideration, the Campus Ítaca program1 decided to include
a robotics workshop, which we have tailored since its beginnings.
Campus Ítaca is a socio-educational program of the Universitat Autònoma
de Barcelona (UAB), managed by the Fundación Autónoma Solidaria (FAS)
and aimed at secondary school students. It consists of a stay on the UAB
Campus during the months of June and July, organized in two shifts of seven
days each. The students carry out a series of pedagogical and recreational activ-
ities whose objective is to encourage them to continue their education once the
1 Campus Ítaca. Universitat Autònoma de Barcelona (UAB). https://www.uab.cat/web/campus-itaca-1345780037226.html.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 52–63, 2022.
https://doi.org/10.1007/978-3-030-82544-7_6
compulsory stage has finished. At the same time, the campus aims to be a space
for coexistence between students from diverse social environments.
Campus Ítaca was founded in 2004. It was inspired by the Summer Academy
project of the University of Strathclyde (Scotland), and its main objectives were
to reduce early school dropout and bring students closer to the university. The
students are between 14 and 15 years old, and over the years (2004–2020) more
than five thousand students from more than 70 schools in the Barcelona area
have participated.
In order to favor active participation, activities are presented in an appealing
way and set in real contexts the students can relate to. In each of these activities
the students need to apply research methodologies to confirm or reject their initial
hypotheses. During these activities the students are accompanied by a monitor
who ensures that the group works properly. At the same time, the activities
promote cooperative work, dialogue and reflection among the students. Two
types of activities are carried out on campus: pedagogical activities and leisure-
sports activities. One of the main pedagogical activities in Campus Ítaca is to
show students the research done at the university on several topics, by offering
about fifteen different workshops, which are repeated in two shifts.
The workshops are therefore the activities with the most weight within the cam-
pus. All of them take place during the last three days of the campus; each has
around 12 participants and is gender balanced. Students are assigned to a single
workshop and cannot choose it. Each workshop takes ten hours, plus a final oral
presentation at the Closing Ceremony, where students show their results to family
members, high-school and town council representatives, and members of the
university. The aim of these presentations is to empower the students, as they feel
that what they have done is relevant, but also to show families that their children
could enroll at university.
Thanks to the students’ good reviews, our robotics workshop has been running
since the beginning of the program fifteen years ago. To maintain this good
performance, we have to adapt to new frameworks and platforms [3]. Therefore,
in the last edition we designed new material to help students in their robotic
tasks. This material was developed as visual cards called STEAMOLUS2 and is
the main contribution of this project. Also, in the last edition, we introduced a
new cloud platform for robotics (Open Roberta3 [4]). The degree of satisfaction
achieved by the students in this edition has been very high, so the recently
introduced changes will be applied in further editions.
This paper is structured into five sections. Section one introduces the educational
program Campus Ítaca. Section two outlines our robotics workshop, detailing its
main goals and describing each session. Section three describes the methodology
at a pedagogical level, as well as our novel STEAMOLUS cards and the visual
platform Open Roberta. Section four summarizes the evaluation reports of our
workshop. Finally, section five provides conclusions and future directions.
2 STEAMOLUS: Make things to learn to code, learn to code to make things! https://steamolus.wordpress.com.
3 Open Roberta Lab. https://lab.open-roberta.org/.
54 M.-I. Cárdenas et al.
The main objective of the workshop is for students to be capable of solving
simple challenges using a customized robot, adapting its hardware (physical
part, sensors and actuators) and programming its software (behavioural part)
in the most efficient way.
Other objectives are:
Finally, in the last session, the students prepare and rehearse the presentation
for the Campus Ítaca closing ceremony. Before leaving, participants disassemble
all the robots and sort the pieces inside boxes for the next edition.
A detailed description of these sessions for the 2019 edition follows:
1st Session: Introduction and building the first robot.
First of all, an overview of the state of the art in robotics and intelligent
systems is given. Then, teams of four members are created. Each group
assembles its own basic robot depending on its abilities (groups with fewer
technical skills are provided with assembly instructions, while more skilled
groups have to create their own robot design). It is important to encourage
the groups to distribute the assembly task among the members: one part of
the group looks for the pieces while the other part assembles them. Since the
beginning of the campus, the platform we have used has been Lego Mindstorms
(first RCX and now NXT)4 [11], since it is the material provided by the FAS
foundation.
2nd Session: Introduction to visual programming.
We have used different visual programming languages over the years with
considerable success: LabVIEW for RCX, NXT-G and Open Roberta. They are
very intuitive and easy to use, so students grasp the fundamentals of
programming very quickly and can start thinking about their own projects
very soon. In this session we teach the basic blocks: flow control, conditionals
and loops. The students also learn how to command the motors and how to
receive and process data from the different available sensors. During the session,
students follow our specifically designed guides, the STEAMOLUS cards (see
Subsect. 3.2), where basic tasks such as moving the robot or reacting to sensor
inputs are explained. Some of these tasks involve adding new sensors to the
basic robot. The programming task is distributed among the team by applying
pair programming [12]. In our case, one student takes control of the computer
while another tells him/her which blocks are needed and how to connect
them. The rest of the team incrementally tests the robot once it has been pro-
grammed, observing whether it really does what it is supposed to do. In case of
a robot malfunction, students have to think about how to modify the program
until it works correctly.
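The kind of sense-act program students build from these basic blocks can be sketched as follows. This is not an actual NXT-G or Open Roberta program (those are visual); it is a hypothetical Python rendering of the same pattern, combining the loop, the conditional and the motor commands of the lesson:

```python
# Illustrative sketch: drive forward until the touch sensor is pressed.
# Sensor readings are fed in as a sequence, one per control step; the
# function returns the log of motor commands it would issue.

def drive_until_pressed(touch_readings):
    """Consume touch-sensor readings and return the motor-command log."""
    log = []
    for pressed in touch_readings:   # loop block: one control step per reading
        if pressed:                  # conditional block: sensor check
            log.append("stop")       # actuator command: halt the motors
            break
        log.append("forward")        # actuator command: keep driving
    return log
```

On a real robot the reading would come from the brick's touch-sensor port and the commands would drive the motors directly; the control structure, however, is exactly what the visual blocks express.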
3rd Session: First Challenge: waiter bot line-follower.
The first challenge is to create a robot that follows a black line using two
light sensors without leaving the line. The students have to adapt the body of
the robot to perform this task efficiently with the given sensors, and then deduce
and implement the robot’s behaviour. The robot is tested in a scenario with
two similar circuits (two separate lines) so teams can compete in pairs and see
who has come up with the most efficient overall solution (the fastest without
leaving the line). In this session the team is divided into two subgroups. Two
students program the algorithm, following a pair programming approach. The rest adapts
4 Lego Mindstorms. http://mindstorms.lego.com/en-us/Default.aspx.
and improves the body of the robot to make it faster, doing tests to ensure that
the robot does not step off the line.
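One way the behaviour the teams have to deduce can look is sketched below. This is hypothetical Python, not the students' actual block programs, and it assumes one common sensor layout: both light sensors straddle a wide black line, and the wheel on the side whose sensor still sees the line is slowed to steer the robot back over it.

```python
# Illustrative two-sensor line-follower decision rule. A reading of True
# means that sensor sees the black line; the function returns the speeds
# (left_motor, right_motor) for one control step, in arbitrary units.

def line_follower_step(left_on_line: bool, right_on_line: bool,
                       cruise: int = 50, slow: int = 10):
    """Return (left, right) motor speeds for one control step."""
    if left_on_line and right_on_line:
        # Centered on the line: drive straight at cruise speed.
        return cruise, cruise
    if left_on_line:
        # Drifted right: slow the left wheel to steer back left.
        return slow, cruise
    if right_on_line:
        # Drifted left: slow the right wheel to steer back right.
        return cruise, slow
    # Line lost: creep forward slowly until a sensor finds it again.
    return slow, slow
```

Tuning the `cruise`/`slow` gap is where the competition is won: a larger gap corrects more sharply but drives the circuit more slowly.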
4th Session: Second Challenge: Sumo players.
The second challenge is to create a robot that can stay inside a circle bounded
by a black line, while being able to locate an opposing robot and push it out of
the circle. In this case, the students have total freedom to create the body of the
robot, as well as to add new sensors or motors to make it more robust. At the
end of the session, a competition is held between the different teams to see which
has made the most robust and efficient sumo fighter robot. In this session the
team is again divided into two subgroups. Two students design and program the
new algorithm, following the pair programming approach. The rest adapts and
improves the body of the robot to make it heavier, doing tests to ensure that
the robot does not leave the circle.
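The sumo behaviour can be summarized as a priority list: staying inside the ring beats attacking, which beats searching. A minimal sketch follows, in hypothetical Python rather than the students' block programs; the 40 cm attack range is an assumed example value.

```python
from typing import Optional

# Illustrative priority-based sumo behaviour for one control step.
# on_boundary: True when a light sensor sees the black boundary line.
# opponent_cm: distance to the opponent from an ultrasonic sensor, or
# None when nothing is detected.

def sumo_step(on_boundary: bool, opponent_cm: Optional[float],
              attack_range_cm: float = 40.0) -> str:
    """Return the action to take for one control step."""
    if on_boundary:
        # Highest priority: retreat before driving out of the circle.
        return "reverse_and_turn"
    if opponent_cm is not None and opponent_cm <= attack_range_cm:
        # Opponent detected nearby: push forward at full power.
        return "charge_forward"
    # Nothing detected: rotate in place, scanning for the opponent.
    return "search_rotate"
```

The same priority ordering (boundary, then opponent, then search) is what keeps the robot from pushing itself out of the ring while attacking.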
5th Session: Preparing the presentation.
The last session is dedicated to preparing the presentation for the closing
ceremony. Students have to discuss among themselves which aspects of the
project they want to include. The table of contents of the presentation is agreed
on between the students, and each section is assigned to a group of students to
develop. The best photos and videos of the workshop are selected and edited.
Then, a summary video of at most two minutes is prepared for each challenge
with the results of each team. Once the presentation has been designed, the
students have time to rehearse it before the closing ceremony. Before leaving,
students disassemble the robots and store the pieces in boxes for the next round
of students.
Figure 1 shows some stages of the students’ learning process, as they agree on
the final design of their robots, taking into account their size and shape depending
on the task to be performed. The left picture shows a STEAMOLUS card. The
top-right picture depicts a working team of students building a robot. The
center-right picture shows two teams performing a sumo challenge. Finally, the
bottom-right picture shows the students and instructors of the workshop.
We point out here that we want to consolidate STEAM areas, improve academic
and professional orientation with a vocation for success, optimising the
transversality of content and improving the treatment of Special Educational
Needs (SEN) diversity.
We focus on the last edition of the workshop, which was divided into two shifts
of fifteen students, distributed in four heterogeneous groups of three or four
students. Three instructors were involved in training the students, and one
monitor was responsible for watching and motivating the group.
In this edition two novelties were introduced: our own visual cards,
STEAMOLUS, and the open platform for programming the robots, Open
Roberta. Before that, we used both text-based and blog-based materials5, and
for coding the robots we had used proprietary Lego visual block programming
software.
These elements are smoothly integrated into our workshop through different
artifacts: the STEAMOLUS visual cards, the Open Roberta visual programming
environment, and the well-known Lego Mindstorms NXT.
1. Motivating statements:
Did you know that there are already factories and shops where robots move
around following marks on the floor?
Or that there are restaurants where robot waiters follow marks on the floor?
Do you dare to make a robot capable of following some marks on the floor to
avoid getting lost?
2. Encouraging advice:
Find out the way robot waiters have to go.
You can try to make your waiter robot from these visual clues.
Think about the challenge step by step, not all at once.
6 The Tinkering Studio. https://www.exploratorium.edu/tinkering.
instructions are inspired by puzzle pieces (like the Scratch [18] visual
programming language). Students just need to register on the platform and start
programming using the NEPO language. Robots can be paired with the platform
via Wi-Fi or a USB/Bluetooth connection. Once the pairing is done, the programs
can be executed on the real robot to test its behaviours. Open Roberta supports
a large number of robots from several manufacturers. However, in case a student
does not have any hardware available, the platform offers a basic simulation that
includes sensor/obstacle emulation. Moreover, the simulator can be used to test
programs before downloading them to the real robots. It is even possible to make
changes while the simulator runs.
Figure 4 shows the coding of a line follower and its corresponding simulation
in Open Roberta Lab.
4 Results
At the end of each Campus Ítaca edition, students, monitors and instructors
are evaluated by responding to several satisfaction surveys. The evaluation
reports are prepared by the Campus Ítaca organizers, helping instructors to find
the strengths and weaknesses of their workshops. Note that the surveys are
designed by the organization and instructors cannot modify their questions.
In the last edition of our workshop, students highlighted that what they liked
most was being able to contribute throughout the creation of the robot. They
also enjoyed programming, building, and then dismantling the robots. In
addition, they emphasized that they had learned more about programming and
robotics. Figure 5 shows the answers of the instructor (red) and the students
(blue) in the satisfaction evaluations. Students are asked about eight different
indicators concerning their experience in the workshop, while instructors are
asked about only four.
Fig. 5. Robotics workshop, last edition: instructor and student satisfaction evaluation.
Finally, the monitors stated that the workshop gave students the opportunity to
discover the world of robotics and programming through the development of a
robot, and that the activity was very dynamic, although it turned out to be
complicated on some occasions, causing some students to become demotivated
when the challenges did not work out.
On average, the workshop mark is 8.2 out of 10, which attests to the success
of the activity.
Figure 6 shows the comparison between the last two editions. The respondents
are all the students of both shifts that attended the workshop, around 24
students per year. Although the results are broadly similar, the degree of
participation, the acquisition of new learning and the interest are key indicators
suggesting that this methodology will keep improving.
5 Conclusions
The last edition of our robotics workshop has been one of the most successful
and best evaluated compared to the fifteen previous editions. Only the best
evaluated workshops are asked to continue in the next editions, and robotics is
one of the few that has existed since the very first edition. This means that our
methodology keeps the interest and passion for robotics alive, even though
robotics is better known nowadays than fifteen years ago. From the observations
gathered in our workshop, we have seen that motivational dynamics induce
curiosity, which sparks interest in novel and/or surprising activities [19]. In that
sense, survey results from all previous editions showed that these pedagogical
choices had positive results, not only in improving robot programming but also
in building a strong motivation in the STEAM area.
Introducing the STEAMOLUS visual cards encouraged students to solve the
tasks even when they had no previous experience in the field. Incorporating this
new kind of material helps us to adapt to new generations and connect better
with the way they understand and interact with new technologies. Furthermore,
a new open robotics platform in the cloud, Open Roberta in our case, gave
students a way to bring their own results home or to school.
In future editions, we want to further promote the context provided by the
STEAMOLUS challenges in order to engage a greater number of students.
Moreover, we are planning to move to a newer hardware platform as soon as the
organizers have enough funding.
References
1. Daniela, L., Lytras, M.D.: Educational robotics for inclusive education. Tech. Know
Learn. 24, 219–225 (2019). https://doi.org/10.1007/s10758-018-9397-5
2. Daniela, L., Strods, R.: The role of robotics in promoting the learning motivation
to decrease the early school leaving risks. In: ROBOESL Conference Proceedings,
Athens (2016)
3. Vandevelde, C., Saldien, J., Ciocci, M.C., Vanderborght, B.: Overview of technolo-
gies for building robots in the classroom. In: Proceedings of International Confer-
ence on Robotics in Education, pp. 122–130 (2013)
4. Ketterl, M., Jost, B., Leimbach, T., Budde, R.: Open Roberta - a web based approach
to visually program real educational robots. LOM Online J. 8(14), 22 (2015).
https://doi.org/10.7146/lom.v8i14.22183
5. Anwar, S., Bascou, N.A., Menekse, M., Kardgar, A.: A systematic review of studies
on educational robotics. J. Pre-Coll. Eng. Educ. Res. (J-PEER): 9(2), Article 2
(2019). https://doi.org/10.7771/2157-9288.1223
6. Benitti, F.B.V.: Exploring the educational potential of robotics in schools: a sys-
tematic review. Comput. Educ. 58(3), 978–988 (2012)
7. Eguchi, A.: Educational robotics for promoting 21st century skills. J. Autom.
Mob. Robot. Intell. Syst. 8(1), 5–11 (2014)
8. Miller, G., Church, R., Trexler, M.: Teaching diverse learners using robotics. In:
Druin, A., Hendler, J. (eds.) Robots for kids: Exploring New Technologies for
Learning, pp. 165–192. Morgan Kaufmann, San Francisco (2000)
9. Afari, E., Khine, M.S.: Robotics as an educational tool: impact of LEGO Mindstorms.
Int. J. Inf. Educ. Technol. 7(6), 437–442 (2017)
10. Ferrer, G.J.: Using LEGO Mindstorms NXT in the classroom. J. Comput. Sci. Coll.
23, 153–153 (2008)
11. Souza, I., Andrade, W., Sampaio, L., Araújo, A.L., Araujo, S.: A systematic review
on the use of LEGO robotics in education. In: Proceedings - Frontiers in Education
Conference, FIE, October 2018
12. Hanks, B., Fitzgerald, S., Mccauley, R., Murphy, L., Zander, C.: Pair programming
in education: a literature review. Comput. Sci. Educ. 21, 135–173 (2011)
13. Kumar, D., Meeden, L.: A robot laboratory for teaching artificial intelligence. In:
Joyce, D. (ed.) Proceedings of the Twenty-Ninth SIGCSE Technical Symposium on
Computer Science Education (SIGCSE-98). ACM Press (1998)
14. Carbonaro, M., Rex, M., et al.: Using LEGO robotics in a project-based learning
environment. Interact. Multi. Electron. J. Comput. Enhanc. Learn. 6(1), 55–70
(2004)
15. Alimisis, D., Kynigos, C.: Constructionism and robotics in education. In: Teacher
Education on Robotics-Enhanced Constructivist Pedagogical Methods. School of
Pedagogical and Technological Education, Athens, Greece (2009)
16. Innovations 7(3), 11–14. https://www.mitpressjournals.org/doi/pdf/10.1162/INOV_a_00135
17. Jost, B., Ketterl, M., Budde, R., Leimbach, T.: Graphical programming environ-
ments for educational robots: open roberta - yet another one? In: IEEE Interna-
tional Symposium on Multimedia, pp. 381–386 (2014). https://doi.org/10.1109/
ISM.2014.24
18. Maloney, J., Resnick, M., Silverman, B., Eastmond, E.: The scratch programming
language and environment. TOCE 10(4), 1–15 (2010)
19. Tyng, C.M., Amin, H.U., Saad, N.M.N., Malik, A.S.: The influences of emotion
on learning and memory. Front Psychol. 8, 1454 (2017). https://doi.org/10.3389/
fpsyg.2017.01454
20. Gelber, S.M.: Do-it-yourself: constructing, repairing and maintaining domestic
masculinity. American Quarterly (1997). https://doi.org/10.1353/aq.1997.0007
An Arduino-Based Robot with a Sensor
as an Educational Project
1 Introduction
Robotics has become commonplace in our environment, and people have become
accustomed to interacting with robots in everyday life. Robotic tools have been used
successfully in teaching for several decades. The main advantage of teaching robotics
is that it is an interdisciplinary field that includes mechanics, electronics and program-
ming. Students improve in all these subjects simultaneously, and at the same time
designing robots is fun for them [1]. An important prerequisite for teaching robotics is
that students can work on their own technical equipment, individually or in small groups.
Each student should be able to solve the problems posed in class with a given platform
without needing to know the platform’s underlying principles, and the platform should
be flexible and extensible [2]. For the sake of clarity, it is also better to use real
robots instead of simulated environments [3].
Robots are physical three-dimensional objects that can move in space and imitate
human or animal behavior. From the point of view of education, it has been observed
that the use of robots in teaching offers the following advantages:
• When young people work with concrete, physical objects rather than just
abstractions and patterns, they learn faster and more easily. It can therefore be
assumed that the same applies when they are dedicated to computer
programming.
• Robots also evoke fascination and are motivating during the learning process. This
often leads to thinking in context.
In addition to the use of robots in schools, the study of robotics itself is a rich
experience for children. It is very important to realize that we should not only
talk about technical aspects: robotics reflects the interdisciplinary nature of
this science. This is evident from the fact that the ability to build and program a
robotic device requires several skills and competencies based on different disciplines.
Combining studies related to the field of robotics has many advantages. The most
visible is the development of technical skills related to the world of information
technology and of computational thinking. The STEAM educational approach
(Science, Technology, Engineering, Arts and Mathematics), where some
creativity is added through the arts, allows robotics to be used in school as a practical
activity that encourages students to learn and apply theoretical knowledge in science,
technology, engineering, art (creativity) and mathematics. Combining robotics with
STEAM and DIY (Do It Yourself) can be very effective and help students improve
basic skills such as logic and communication.
This approach can be better specified by introducing the term “Computational Think-
ing”. This concept can be described as a problem-solving approach [4]. Computational
thinking is not a single skill, but rather a set of concepts, applications, tools and
thinking strategies used to solve problems. In general, four main aspects of computational
thinking can be defined [5]:
Technology is constantly changing our society, and students must learn to think
critically as well as to master and build on their own digital experience. This leads us
to strive for students to be not just consumers of digital technologies but also their
creators. Teaching young people computational thinking allows them to understand
how digital technologies work, so it is important to ensure that they are directly involved
in understanding the principles of digital technology.
This approach was also applied in the PRIM project [8], which aims to support a
shift in the orientation of the school subject of informatics from user operation of ICT
towards the basics of informatics as a field. The resulting robotics textbook [11] uses
the basic concepts of STEM. It aims not only to provide materials to students, but above
all to help teachers understand the new approach to teaching programming.
66 M. Novák et al.
2 Teaching Materials
The textbook consists of several topics, which are approached using the physical
computing paradigm. These topics should teach students basic programming
techniques. All topics are built on the open Arduino platform, which is widely
available and supports not only the programming itself but also creativity and
invention in solving the individual tasks.
The creation of the teaching materials was based on the established ADDIE model.
This model is part of ISD (Instructional System Design) and provides a general
framework for the systematic design of courses or educational processes [5]. Its five
basic components helped us to clarify and choose a strategy for creating the teaching
materials (see Fig. 1).
Fig. 1. The five components of the ADDIE model: Analysis, Design, Development,
Implementation and Evaluation.
The teaching materials themselves were divided into separate topics, each prepared
according to the model, with structure and content designed to meet the requirements
of STEM as much as possible.
A model has been created that primarily considers the teacher and the student and
emphasizes the didactic principles of the teaching process, such as sequence,
consistency, permanence, clarity and the connection of theory with practice.
The teaching material for each topic is composed of three basic elements (see
Fig. 2). The so-called lesson guide, a methodological sheet intended for teachers,
contains a timed outline of the lesson with recommendations for leading it. The
structure of the lesson guide is mirrored by a worksheet, which is intended for
students. Both materials contain the developed program code and separate tasks.
Both documents are supplemented by a Detailed Guide to Theory section, which
contains the program codes and electronic diagrams described in detail, with an
explanation of the physical phenomena. It forms the expository component of the
teaching material, with all the details.
Fig. 2. The structure of the teaching materials: the Lesson Guide (for teachers), the
Worksheets (for students) and the Theory Guide, built around the basic wiring diagram,
the basic program code, the developed program and the individual tasks.
This lesson is intended for mastering work with an ultrasonic sensor. Students will learn
to include in their program code an external library for working with the sensor, and with
this component they can solve further tasks involving various calculations, the processing
of multiple measurements, etc. A partial project for this lesson is distance measurement.
68 M. Novák et al.
Topic: Working with an ultrasonic sensor. Skills: Using the IF conditional statement.
Components: Ultrasonic sensor, breadboard, Arduino board, resistor.
Final project: MEASUREMENT OF DISTANCE. When using an ultrasonic sensor, students will learn
to use external libraries to control the sensor and to use the conditional if statement.
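The core computation of the distance-measurement project can be sketched in plain C++ (an illustrative sketch only, not the textbook's actual code; the 0.0343 cm/µs echo-timing model and the function names are our assumptions):

```cpp
#include <cmath>

// Convert an ultrasonic echo pulse width (microseconds) to a distance (cm).
// Sound travels roughly 0.0343 cm per microsecond, and the pulse covers the
// sensor-to-obstacle distance twice (out and back), hence the division by 2.
double pulseToDistanceCm(double echoMicros) {
    const double kSpeedOfSoundCmPerUs = 0.0343;
    return echoMicros * kSpeedOfSoundCmPerUs / 2.0;
}

// The lesson's 'if' task: report whether an obstacle is closer than a threshold.
bool obstacleDetected(double echoMicros, double thresholdCm) {
    return pulseToDistanceCm(echoMicros) < thresholdCm;
}
```

On an Arduino, the pulse width would typically come from a `pulseIn()` call on the sensor's echo pin; here the conversion is kept as a pure function so it can be tested without hardware.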
Students then apply the knowledge from the above projects in the robotic vehicle project.
This project requires connecting all previously used components to one development board
at the same time, and it also involves building the mechanical part of the robotic chassis
itself. Here students can use their imagination and their own free ideas. They do not have
to be bound by general conventions as to what a robotic vehicle might look like and may
use any material, e.g., wooden blocks, components from old kits, old CDs, etc.
4 Verification
As part of the PRIM project, the individual modules were tested at several secondary schools
over a period of two years. These were secondary vocational schools focusing on technical
fields, and grammar schools. The responsible teachers tested individual lessons within their
professional subjects, with a focus on the didactic side of the textbook: whether the selected
topics are supportive and can be implemented in the current model of teaching in secondary
schools. Teachers used the detailed lesson guides to save as much lesson-preparation time as
possible. Based on their own experience, the materials were evaluated according to several
criteria:
Each criterion could be supplemented by a verbal comment and subsequent discussion
between the teacher and the authors. Testing of the effectiveness of teaching, in the form
of students' acquired programming knowledge, followed, and the first results are available.
As part of the preparation of the teaching materials in 2018, alpha testing took place at
one grammar school and one secondary vocational school, with around twenty students in
each classroom. All authors of the article directly participated in this testing, either as
teachers or as observers. During these tests, we noticed some interesting facts:
• Grammar school students have trouble understanding the wiring of the examples. They
do not understand wiring diagrams and have trouble understanding the operating principle
of components such as DC motors. However, if they accept the wiring as a given, they have
no problem with any code modification.
• Vocational secondary school students have no problem with connecting the circuits and
understanding the function of individual elements but, on the contrary, have great problems
with any modification of the code.
After the end of this testing and the incorporation of the identified problems and
requirements, beta testing took place in five schools, of which three were vocational
and one was a grammar school. There were again about twenty students in each class. Here,
as authors, we had no choice of schools; they were selected as part of the project that
included our textbook. We also did not have the opportunity to influence the behavior of
teachers and students in any way. At the end of this phase, in February 2020, we received
evaluations from teachers who had taught according to our materials. The textbook has 12
chapters, and each was evaluated by up to five teachers. Unfortunately, not all ratings
came back to us; we received a total of 42, a little more than two thirds. Unfortunately,
the evaluation system was set up in a way that did not force teachers to evaluate all
chapters (Table 1).
Question | Score
Structure and sequence of individual topics, manageability of the curriculum regarding the length of the teaching unit | 4.28
Adequacy for the age and previous knowledge of students, usability in everyday life | 4.25
Color and quality of objects (pictures, diagrams, graphs) | 4.50
Quantity, adequacy/complexity for students | 4.14
As authors, we could not influence the number of questions or their structure; these were
the same for all parts of the project. Here we again received a few comments:
• Some teachers found our materials complex and others simple. From this we conclude that
the complexity is generally right.
• In some chapters we found that the teachers could not get through all the material, and
in others that there was too little curriculum, so we partially moved some subject matter
between chapters.
• Teachers praised that, thanks to these materials, students finally understood the
principle of some components and their connection with the outside world.
• We received feedback that the teaching material is well structured and nicely follows
the principle of moving from the simple to the more complex.
• There were complaints about the poor quality of some of the recommended components
purchased, especially regarding cheap servos.
• We also moved the recommended year of study a bit upwards, to the second or third year,
with a recommendation to use this textbook in the third year of studies.
Further extended testing was planned for this year but had to be canceled due to the
epidemiological situation. These materials, in their current form, are unsuitable for
distance online teaching.
6 Conclusion
We are seeing an increasing impact of technology and its development in every aspect of
human life. This affects people's interests, values, needs, and perception of the world,
because technology is an integral part of their lives, providing both social and
entertainment functions. The task of education is to prepare the individual to become
part of it [7]. For the development of teachers' digital competencies, it is essential
to find common ground with their students and to be able to bring these skills into
the educational process. The development of digital skills is important for teachers in
terms of the ability to use technology in the educational process to motivate students.
It is essential to adopt technologies and methods that may seem interesting and exciting
to students of the digital generation. The field of robotics provides both: it develops
digital competencies for students and teachers, and at the same time motivates students
to participate in any subject taught in connection with robotics. Therefore, the above
example, in the form of the presented material, can form the basis for further development
regarding the principles of STEM and computational thinking.
References
1. García-Saura, C., González-Gómez, J.: Low-cost educational platform for robotics, using open
source. In: 5th International Conference of Education, Research and Innovation, ICERI2012,
Madrid, Spain (2012). ISBN 9788461607631. ISSN 23401095
2. Ryu, H., Kwak, S.S., Kim, M.: A study on external form design factors for robots as elementary
school teaching assistants. In: The 16th IEEE International Symposium on Robot and Human
interactive Communication, RO-MAN 2007, Jeju. IEEE (2007). ISBN 9781424416349. ISSN
9781424416359
3. Balogh, R.: Basic activities with the Boe-Bot mobile robot. In: 14th International Conference
DidInfo2008. FPV UMB, Banská Bystrica, Slovakia (2008)
4. Wing, J.M.: Computational thinking. Commun. ACM 49(3), 33–35 (2006). https://doi.org/10.1145/1118178.1118215
5. ADDIE Model: Wikipedia: the free encyclopedia. San Francisco: Wikimedia Foundation,
November 2016. https://en.wikipedia.org/wiki/ADDIE_Model. Accessed 11 Mar 2020
6. Crouch, C., Fagen, A.P., Callan, J.P., Mazur, E.: Classroom demonstrations: learning tools or
entertainment? Am. J. Phys. 72(6), 835–838 (2004). https://doi.org/10.1119/1.1707018. ISSN
0002-9505
7. Morin, E.: Seven Complex Lessons in Education for the Future. United Nations Educational,
Scientific and Cultural Organization, Paris (1999)
8. Informatické myšlení: About the project. iMysleni. Jihočeská univerzita v Českých
Budějovicích (2019). https://imysleni.cz/about-the-project. Accessed 5 Jan 2021
9. Novák, M., Pech, J.: Robotika pro střední školy - programujeme Arduino. Jihočeská univerzita
v Českých Budějovicích, Pedagogická fakulta, České Budějovice (2020). ISBN 978-80-7394-786-6.
https://github.com/Nowis75/PRIM. Accessed 29 Jan 2021
A Social Robot Activity for Novice
Programmers
1 Introduction
Preparing students for digitization requires that certain aspects of computer science
be addressed early in the curriculum [1–4]. The Flemish government requires schools to
achieve a set of minimum learning goals with their students. For several decades, computer
science was almost absent from the curriculum of Flemish schools [5]. Recently, however,
policy has changed, and since September 2019 minimum goals1 on computer science and STEM
have been included. This means computational thinking (CT), programming, and technological
fluency are becoming increasingly important in Flemish secondary schools.
However, there are some challenges in putting the new curriculum successfully into
practice. To begin with, teachers have little time to spend on teaching CT and programming.
Besides this, they often face a limited set of resources due to a small budget. Another
key point is that teaching computer science and STEM requires appropriate knowledge and
skills, which many teachers lack. Teachers do not necessarily have a computer science
background when they are assigned to teach programming and CT [6,7]. At the same time,
digital competencies are not always a strong component of their teacher training
[3, pp. 45–59]. Within physical computing, teachers often see the constructions and
electronics they have to handle as an insurmountable threshold [8]. Thus it becomes a
challenge to fulfill the goals of the curriculum that is in place.
Teachers find it challenging to teach programming and physical computing [5,9]. When
teachers search for appropriate exercises for their learners, they
1 https://onderwijsdoelen.be, in Dutch.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 72–77, 2022.
https://doi.org/10.1007/978-3-030-82544-7_8
Fig. 1. The Dwenguino microcontroller with breadboard and ultrasonic sensor (left)
and some social robots created during the project (right).
74 Z. Van de Staey et al.
2 Methods
We equip the teachers with a manual that gives them insights into the state of the art
of social robotics and human-robot interaction (HRI), and the role artificial intelligence
(AI) plays in those fields. During the Social Robot activity, teachers can make use of our
custom robotics kit, which allows their students to build a personalized social robot (in
pairs). The kit includes: (1) a Dwenguino, an Arduino-based microcontroller board for
robotics in education (see Fig. 1) [16]; (2) several sensors and actuators; (3) laser-cut
adaptor pieces to easily attach sensors and actuators to a robot body; (4) the
DwenguinoBlockly software, a graphical programming environment developed for the Dwenguino;
and (5) a cloud-based simulator view of the DwenguinoBlockly software in which the students
can test their code, free from physical restraints (see Fig. 2). The simulator provides
several programming scenarios, such as a driving robot, a drawing robot, and a social
robot. For inclusiveness, the environment can be used in several languages. The kit was
designed in an iterative process in which different parameters were taken into
consideration: low production cost, high durability (it should be reusable), high
robustness, and high flexibility in its application.
The Social Robot activity consists of multiple modules, of which some are optional (see
Fig. 3). During an introductory activity, students familiarize themselves with the context
of social robots. They imagine how their robot can communicate with and respond to people.
They can take part in unplugged activities and class discussions. Afterward, they start
the hands-on design and programming phase in the online simulator. There they add different
components to a simulated robot and see the direct result of their programming code.
Gradually they can make their program more sophisticated. Following this phase, the
students can start constructing their robot in physical space, using the sensors and
actuators in the kit. During the construction phase, students are encouraged to upcycle
leftover materials or be creative with materials such as papier-mâché. After uploading
the code to the physical robot, the learners may encounter problems with the executability
of their program due to physical issues. Students might then have to go back to the
programming phase to adjust their implementation. Subsequently, teachers can propose a
creative assignment with the final result, such as using the social robot in visual
storytelling or organizing an exhibition for other students. The activity can be wrapped
up with a class discussion, e.g., on the ethical implications of AI and social robotics.
The Social Robot activity gives learners plenty of opportunities to think creatively,
communicate, and work together with their peers. It makes computational thinking a social
event [17]. Since social robots come in all types and looks, from pet-like robots in
elderly homes to guard robots in shopping malls, the project can be approached in a
gender-neutral way. The social robot scenario provides a less competitive context than
some other scenarios in the simulator (in particular the driving robot, which is used in
robot competitions [18]). It also offers opportunities for a more 'narrative' character,
since social robotics connects to various school subjects and a social robot can be
designed for communication with humans [11]. To increase the Social Robot activity's
accessibility, we took
several design decisions. First of all, we designed social robot programming blocks with
multiple levels of abstraction. For instance, to program the movement of the robot's
joints and limbs, students can use simplified movement blocks such as 'wave arms' or
'turn eyes left' [13]. They can also opt to use lower-level programming blocks, such as
blocks that control a servo motor directly, to achieve more technological fluency.
Secondly, by making use of the default sensor and actuator pins on the Dwenguino and
identifying unique default pins for the other components, learners are less prone to
making errors in the programming and construction process. Programming happens in an
online environment available in multiple languages. Learners also have the possibility
to switch their code from block-based to textual. Last but not least, the laser-cut
pieces for easily attaching sensors and actuators lower the barrier for both teachers
and students.
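The idea of layering high-level blocks on top of low-level ones can be sketched as follows (a hypothetical model, not the actual DwenguinoBlockly API; the class, pin numbers, and angles are made up for illustration):

```cpp
#include <utility>
#include <vector>

// A toy robot model that records servo commands instead of driving hardware.
struct RobotModel {
    std::vector<std::pair<int, int>> commands;  // (servo pin, angle in degrees)

    // Low-level block: set a single servo to a given angle.
    void setServoAngle(int pin, int angle) {
        commands.push_back({pin, angle});
    }

    // High-level block composed of low-level ones: swing both arm servos
    // back and forth once, as a 'wave arms' gesture might.
    void waveArms(int leftArmPin, int rightArmPin) {
        setServoAngle(leftArmPin, 45);
        setServoAngle(rightArmPin, 135);
        setServoAngle(leftArmPin, 135);
        setServoAngle(rightArmPin, 45);
    }
};
```

Novices can call the high-level gesture directly, while more advanced learners drop down to the single-servo block for finer control; the same two-level structure underlies the simplified movement blocks described above.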
3 Discussion
The Social Robot activity is one of the projects of AI At School2, in which we are
developing teaching materials on CT, programming, and CS, in particular AI, for the whole
of secondary school education. These teaching materials consist of several STEM projects,
each in a different context. The contexts are chosen for their relevance to society and
their links with state-of-the-art scientific research, and special attention goes to the
ethical aspects of AI that arise. Our projects tackle some of the challenges STEM education
copes with: more inclusive projects, the improvement of digital and AI literacy,
stimulating the transfer of knowledge between disciplines, and awareness of ethical issues.
Previous work suggests that many factors can contribute to fostering the interest and
engagement of a more diverse audience when it comes to computer science education
[11,15,17,19]. In the Social Robot project we have included a substantial number of these
initiatives. We have contextualized designing social robots within the needs of our
society. Additionally, we proposed linking the social robotics theme to a discussion on
the ethics of AI. During the project, learners start from an engaging, real-world,
multidisciplinary application of computing, which leads to a tangible result. We selected
an open-ended context to stimulate the creativity of the participants. At the same time,
we ensured that the assignments do not have a competitive nature but instead encourage
collaboration by allowing co-creation and pair programming. We support the teacher with
background information on the theme through a manual. The theme and its connection with
non-STEM learning goals have a lot of potential to capture the teachers' and learners'
interests.
2 https://www.aiopschool.be, in Dutch.
Fig. 2. Overview of the simulator layout: on the left, a programming code editor with
graphical programming blocks; on the right, the different programming scenarios and
program controls. An example of the social robot scenario is shown, in which the user
can drag and drop the physical components in the simulation window below.
Fig. 3. Overview of the Social Robot project structure. There can be multiple iterations
for the design and programming phase, as well as for the construction phase. To allow
Sim2Real transfer the design and programming phase can be repeated once more after
the construction phase.
References
1. Barr, V., Stephenson, C.: Bringing computational thinking to K-12: what is involved
and what is the role of the computer science education community? ACM Inroads
2(1), 48–54 (2011)
2. Bocconi, S., et al.: Developing Computational Thinking in Compulsory Education.
European Commission, JRC Science for Policy Report (2016)
3. European Commission/EACEA/Eurydice. Digital education at school in Europe.
Technical report, Publications Office of the European Union (2019)
4. Caspersen, M.E., Gal-Ezer, J., McGettrick, A., Nardelli, E.: Informatics for all:
The strategy. Technical report, February 2018
5. wyffels, F., Martens, B., Lemmens, S.: Starting from scratch: experimenting with
computer science in Flemish secondary education. In: Proceedings of the 9th Workshop
in Primary and Secondary Computing Education, WiPSCE 2014, pp. 12–15. Association for
Computing Machinery, New York, NY, USA, November 2014
6. Martinez, C., Gomez, M.J., Moresi, M., Benotti, L.: Lessons learned on computer
science teachers professional development. In: Proceedings of the 2016 ACM Conference
on Innovation and Technology in Computer Science Education (2016)
7. Ravitz, J., Stephenson, C., Parker, K., Blazevski, J.: Early lessons from evaluation
of computer science teacher professional development in Google's CS4HS program.
ACM Trans. Comput. Educ. (TOCE), August 2017
8. Margot, K.C., Kettler, T.: Teachers' perception of STEM integration and education:
a systematic literature review. Int. J. STEM Educ. 6, 1–16 (2019)
9. Benitti, F.B.V.: Exploring the educational potential of robotics in schools: a
systematic review. Comput. Educ. 58(3), 978–988 (2012)
10. Neutens, T., Wyffels, F.: Unsupervised functional analysis of graphical programs
for physical computing, pp. 1–6 (2020)
11. Eguchi, A.: Bringing Robotics in Classrooms, pp. 3–31 (2017)
12. Karim, M.E., Lemaignan, S., Mondada, F.: A review: can robots reshape K-12
STEM education? In: 2015 IEEE International Workshop on Advanced Robotics
and its Social Impacts (ARSO), pp. 1–8, June 2015
13. Bau, D., Gray, J., Kelleher, C., Sheldon, J., Turbak, F.: Learnable programming:
blocks and beyond. Commun. ACM 60(6), 72–80 (2017)
14. West, M., Kraut, R., Ei Chew, H.: I’d blush if I could: closing gender divides in
digital skills through education (2019)
15. Resnick, M., Rusk, N.: Coding at a crossroads. Commun. ACM 63(11), 120–127
(2020)
16. Neutens, T., Wyffels, F.: Analyzing coding behaviour of novice programmers in
different instructional settings: creating vs. debugging, pp. 1–6 (2020)
17. Kafai, Y.B., Burke, Q.: Connected Code: Why Children Need to Learn Programming.
MIT Press, Cambridge, July 2014
18. wyffels, F., et al.: Robot competitions trick students into learning. In: Stelzer,
R., Jafarmadar, K., (eds.), Proceedings of the 2nd International Conference on
Robotics in Education, pp. 47–51 (2011)
19. Happe, L., Buhnova, B., Koziolek, A., Wagner, I.: Effective measures to foster
girls’ interest in secondary computer science education. Educ. Inf. Technol. 26(3),
2811–2829 (2020)
Workshops, Curricula and Related Aspects (Universities)
Robotics and Intelligent Systems:
A New Curriculum Development
and Adaptations Needed in Coronavirus
Times
1 Introduction
Robotics, Artificial Intelligence, Machine Learning, Automation: these are all topics
receiving very high attention in society and strong demand from industry. While they are
often taught as part of other more canonical programs, like Electrical/Mechanical/Computer
Engineering and Computer Science, there is increased interest in dedicated programs, where
students can start learning these subjects from the very beginning of their higher
education. These sciences have reached a level of maturity and a body of knowledge that
would grant them the dignity of stand-alone programs, similar to what happened to computer
science decades ago. Even robotics alone is a quite diverse field, with respect to both
research and education, and it accordingly touches upon many different disciplines [6].
Curriculum development in this area is hence not straightforward [22], and it spans
several quite different dimensions. With respect to content, it ranges from add-on
elements to, e.g., Computer Science education [11] or Science-Technology-Engineering-Math
(STEM) oriented programs in general [2,14,17], to full-fledged robotics programs [1,25].
It also encompasses different target groups, ranging from kindergarten [4,16] through
K12 [3,15,18] to college education [21,26,27].
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 81–93, 2022.
https://doi.org/10.1007/978-3-030-82544-7_9
82 F. Maurelli et al.
Knowledge and Understanding. After finishing this program, the student will
have knowledge and understanding of
– Kinematics and dynamics of multi-body systems
– Linear and nonlinear control systems
– Basic electronics, operational principles of motors and drives
– Machine Learning algorithms and techniques for pattern recognition, classification,
and decision-making under uncertainty
– Computer Vision algorithms for inferring 3D information from camera images, and for
object recognition and localization
– Robotic manipulators and mobile robots
– Simultaneous Localization and Mapping (SLAM) algorithms
– Motion planning techniques in robotics
– Relevant sensors, signal-processing, and probabilistic estimation techniques
– Analytical and numerical optimization in continuous and discrete domains
Ability. After finishing this program, the student will be capable of designing and
implementing complete intelligent mobile systems that carry out complex tasks
in challenging environments without permanent human supervision. Concretely,
the student will be able to
– Model common mechanical and electrical systems which are part of intelligent
mobile systems
– Design control systems and tune their performance
Fig. 2. The study plan of the initial intelligent mobile systems BSc
The faculty of the program has therefore worked on a new concept, starting from the
original IMS program and making the necessary modifications to take into account the
feedback received. This process happened in the framework of a wider reorganisation of
the BSc programs at Jacobs University in view of the renewal of the program accreditation,
as explained in the following section.
Fig. 3. Governance levels in higher education pedagogy: macro, meso, and micro. For each
level, an operative framework is presented alongside objectives and examples. Macro: the
external framework of educational policies, funding programs, and international guidelines.
Meso: the operational framework within an institution, developing a culture and providing
an infrastructure, for example through quality management or professional development of
educators. Micro: study programs, defining a profile through curricula development and
coherence between curricula, and learning environments, developing student competencies
in courses, projects, seminars, and facilities.
being able to dedicate more time to the research questions they choose. C++ is introduced
in the first semester, Python in the second semester, and these languages are then used
in various courses in the second year. In addition to being aligned with the overall
program ILOs, these structural changes at the program level responded very well to the
requests from students. Finally, at the micro level of individual modules, each course
has been revised, again starting with the definition of the module ILOs and adjusting
course content and examination type in order to cover all ILOs. In particular,
constructive alignment principles were employed in the design of the modules and
examinations [5]. In the end, a table matching the Program Intended Learning Outcomes
with the program modules ensured that all ILOs were appropriately addressed, as in Fig. 4.
The resulting schematic study plan for the Robotics and Intelligent Systems BSc can be
found in Fig. 5. By the end of the program, according to the intended learning outcomes,
students will be able to:
– demonstrate knowledge of kinematics and dynamics of multibody systems;
– design and develop linear and nonlinear control systems;
– design basic electronic circuits;
– show competence about operational principles of motors and drives;
– design and develop machine learning algorithms and techniques for pattern recognition,
classification, and decision-making under uncertainty;
– design and develop computer vision algorithms for inferring 3D information
from camera images, and for object recognition and localization;
– model common mechanical and electrical systems that are part of intelligent
mobile systems;
– design robotics systems and program them using popular robotics software
frameworks;
– use academic or scientific methods as appropriate in the field of Robotics and
Intelligent Systems, such as defining research questions, justifying methods, collecting,
assessing and interpreting relevant information, and drawing scientifically founded
conclusions that consider social, scientific, and ethical insights;
– develop and advance solutions to problems and arguments in their subject
area and defend these in discussions with specialists and non-specialists;
– engage ethically with the academic, professional, and wider communities and actively
contribute to a sustainable future, reflecting and respecting different views;
– take responsibility for their own learning, personal, and professional development
and role in society, evaluating critical feedback and self-analysis;
– apply their knowledge and understanding to a professional context;
– work effectively in a diverse team and take responsibility in a team;
– adhere to and defend ethical, scientific, and professional standards.
The first cohort on the new curriculum started in Fall 2019, while from Fall 2020 the
program finally changed its name from Intelligent Mobile Systems to the more
understandable Robotics and Intelligent Systems, also to highlight the important
structural changes.
Fig. 5. The study plan of the new robotics and intelligent systems BSc
Like every sector, education has been severely impacted by the coronavirus. On the
positive side, it has also represented a vigorous push for innovation in education. The
IMS/RIS program was hit during the first year of implementation of the new curriculum.
The first-year module Introduction to Robotics and Intelligent Systems was particularly
challenging due to the high number of students (146) and
Fig. 7. Weighted student satisfaction - IMS-RIS programs. Weights used: {−2, −1, 0,
1, 2}
Fig. 8. Course evaluation RIS-average (red line) vs JUB-average (dotted blue line).
Year | Question | −− | − | N | + | ++
2018 | Quality of teaching | 0% | 36% | 18% | 37% | 9%
2018 | Your studies | 0% | 9% | 36% | 55% | 0%
2018 | Overall organisation | 9% | 18% | 37% | 36% | 0%
2019 | Quality of teaching | 7% | 26% | 13% | 47% | 7%
2019 | Your studies | 7% | 13% | 20% | 53% | 7%
2019 | Overall organisation | 0% | 40% | 33% | 27% | 0%
2020 | Quality of teaching | 0% | 18% | 12% | 58% | 12%
2020 | Your studies | 0% | 12% | 18% | 46% | 24%
2020 | Overall organisation | 6% | 0% | 23% | 53% | 18%
the lab activities, involving three faculty members and eight teaching assistants. With
one week's notice, the whole university moved to remote teaching. Such a sudden change,
already demanding for lectures, presented additional challenges for the lab activities.
Due to the numbers, the Arduino-based lab associated with the introductory module was
split into six rotations over the semester. Only about three rotations (half of the
students) were able to complete the lab component before in-person teaching was
suspended. The remainder of the semester was organised with the following strategy. The
hardware kits were distributed to the teaching assistants, who are senior students living
on campus together with the other students in college accommodation. A limited number of
students were allowed to collect a hardware kit at predefined times, following a specific
hygiene protocol. The other students, including those who had decided to travel back to
their home country or family home, were provided with a simulation environment and
performed the tasks in simulation. At the end of the semester, a grade analysis showed
no statistically significant difference between the various groups (in the lab with a
faculty member, working alone after collecting the hardware from a TA, or in simulation).
This initial result was very interesting and will be explored further in Spring 2021,
gathering more data to arrive at more solid conclusions about the use of simulation and
real hardware in fulfilling the ILOs of a specific module.
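A grade comparison between two groups of this kind could, for instance, use Welch's two-sample t-statistic; the sketch below is our own illustration of the idea, not the analysis actually performed in the course:

```cpp
#include <cmath>
#include <vector>

// Mean of a sample.
double mean(const std::vector<double>& x) {
    double s = 0.0;
    for (double v : x) s += v;
    return s / x.size();
}

// Unbiased sample variance.
double variance(const std::vector<double>& x) {
    double m = mean(x), s = 0.0;
    for (double v : x) s += (v - m) * (v - m);
    return s / (x.size() - 1);
}

// Welch's t-statistic for two independent samples with unequal variances.
double welchT(const std::vector<double>& a, const std::vector<double>& b) {
    double se = std::sqrt(variance(a) / a.size() + variance(b) / b.size());
    return (mean(a) - mean(b)) / se;
}
```

A |t| below the critical value of the corresponding t-distribution would be consistent with the reported finding of no significant difference between the lab, take-home, and simulation groups.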
Regarding the adaptation of the lectures, a good example is reported for the Robotics
class (CORE, second year) in Birk et al. [7].
While curriculum stability is important, the need for continuous improvement cannot be
ignored. Quality management measures will continue to ensure that any concern, as well
as any chance to make things better, is proactively addressed, in service of the
students' personal and professional development.
Notes and Comments. This research was funded by the project “Hands-On 4.0:
Individualized Applied Education in the Digitalization Age” (Grant 2019-1347-
00) by the Jacobs Foundation within the “B3—Bildung Beyond Boundaries”
framework.
References
1. Ahlgren, D.: Meeting educational objectives and outcomes through robotics education.
In: Proceedings of the 5th Biannual World Automation Congress, vol. 14, pp. 395–404 (2002)
2. Altin, H., Pedaste, M.: Learning approaches to applying robotics in science educa-
tion. J. Baltic Sci. Educ. 12(3), 365 (2013)
3. Benitti, F.B.V.: Exploring the educational potential of robotics in schools: a
systematic review. Comput. Educ. 58(3), 978–988 (2012)
4. Bers, M.U., Flannery, L., Kazakoff, E.R., Sullivan, A.: Computational thinking and
tinkering: exploration of an early childhood robotics curriculum. Comput. Educ. 72,
145–157 (2014). https://www.sciencedirect.com/science/article/pii/S0360131513003059
5. Biggs, J.B., Tang, C.K.C.: Teaching for Quality Learning at University. McGraw-
Hill and Open University Press, Maidenhead (2011)
6. Birk, A.: What is robotics? An interdisciplinary field is getting even more diverse.
IEEE Robot. Autom. Mag. (RAM) 18(4), 94–95 (2011)
7. Birk, A., Dineva, E., Maurelli, F., Nabor, A.: A robotics course during COVID-
19: lessons learned and best practices for online teaching beyond the pandemic.
Robotics 10(1), 5 (2021)
8. Bologna Working Group: A Framework for Qualifications of the European Higher
Education Area. Bologna Working Group Report on Qualifications Frameworks.
(Copenhagen, Danish Ministry of Science, Technology and Innovation) (2005)
9. Brahm, T., Jenert, T., Euler, D.: Pädagogische hochschulentwicklung - überblick
und perspektiven. In: Brahm, T., Jenert, T., Euler, D. (eds.) Pädagogische
Hochschulentwicklung: Von der Programmatik zur Implementierung, pp. 9–16.
Springer Fachmedien Wiesbaden, Wiesbaden (2016)
10. Careers Research and Advisory Centre (CRAC) Limited: Vitae—Researcher Devel-
opment Framework. Joint Statement of the UK Research Councils’ Training
Requirements for Research Students, 2001, UK GRAD Programme and the
Research Councils, 2 edn (2011)
11. Cosma, C., Confente, M., Botturi, D., Fiorini, P.: Laboratory tools for robotics and
automation education. In: Confente, M. (ed.) Proceedings of IEEE International
Conference on Robotics and Automation, ICRA 2003, vol. 3, pp. 3303–3308 (2003)
92 F. Maurelli et al.
12. Dineva, E., Bachmann, A., Knodt, U., Nagel, B.: Lessons learned in participative
multidisciplinary design optimization. J. Aerosp. Oper. 4(1–2), 49–66 (2016)
13. Hadgraft, R.: New curricula for engineering education: experiences, engagement,
e-resources. Glob. J. Eng. Educ. 19(3), 112–117 (2017)
14. Howell, A., Way, E., McGrann, R., Woods, R.: Autonomous robots as a generic
teaching tool. In: Way, E. (ed.) 36th Annual Frontiers in Education Conference,
pp. 17–21 (2006)
15. Johnson, J.: Children, robotics, and education. Artif. Life Robot. 7(1), 16–21
(2003). https://doi.org/10.1007/BF02480880
16. Jung, S.E., Won, E.S.: Systematic review of research trends in robotics education
for young children. Sustainability 10(4), 905 (2018). https://www.mdpi.com/2071-
1050/10/4/905
17. Khine, M.S.: Robotics in STEM Education - Redesigning the Learning Experience.
Springer, Cham (2017). https://doi.org/10.1007/978-3-319-57786-9
18. Kolberg, E., Orlev, N.: Robotics learning as a tool for integrating science technology
curriculum in K-12 schools. In: Orlev, N. (ed.) 31st Annual Frontiers in Education
Conference, vol. 1, pp. T2E–12–13 (2001)
19. Maurelli, F., Cartwright, J., Johnson, N., Petillot, Y.: Nessie IV autonomous under-
water vehicle wins the SAUC-E competition. In: 10th International Conference on
Mobile Robots and Competitions, Portugal (2010)
20. Maurelli, F., Petillot, Y., Mallios, A., Ridao, P., Krupiński, S.: Sonar-based AUV
localization using an improved particle filter approach. In: IEEE Oceans 2009,
Germany (2009)
21. Mirats Tur, J., Pfeiffer, C.: Mobile robot design in education. IEEE Robot. Autom.
Mag. 13(1), 69–75 (2006)
22. Padir, T., Chernova, S.: Guest editorial special issue on robotics education. IEEE
Trans. Educ. 56(1), 1–2 (2013)
23. Prpic, J.K., Hadgraft, R.: Interdisciplinarity as a path to inclusivity in the engi-
neering classroom: a design-based research approach. In: Proceedings of the: AAEE
Conference, Fremantle, Western Australia (2011)
24. Sheppard, S., Macatangay, K., Colby, A., Shulman, L., Sullivan, W.: Educating
Engineers: Designing for the Future of the Field. Jossey-Bass/Carnegie Foundation
for the Advancement of Teaching. Wiley (2009)
25. Shibata, M., Demura, K., Hirai, S., Matsumoto, A.: Comparative study of robotics
curricula. IEEE Trans. Educ. 1–9 (2020)
26. Spolaor, N., Benitti, F.B.V.: Robotics applications grounded in learning theories
on tertiary education: a systematic review. Comput. Educ. 112, 97–107 (2017).
http://www.sciencedirect.com/science/article/pii/S0360131517300970
27. Tsoy, T., Sabirova, L., Magid, E.: Effective robotics education: surveying experi-
ences of master program students in introduction to robotics course. In: MATEC
Web Conference, vol. 220, p. 06005 (2018). https://doi.org/10.1051/matecconf/
201822006005
28. Ulrich, I., Heckmann, C.: Taxonomien hochschuldidaktischer Designs und Metho-
den aus pädagogisch-psychologischer Sicht samt Musterbeispielen aus der aktuellen
Forschung. die hochschullehre 3, 1–28 (2017)
29. Valeyrie, N., Maurelli, F., Patron, P., Cartwright, J., Davis, B., Petillot, Y.: Nessie
V turbo: a new hover and power slide capable torpedo shaped AUV for survey,
inspection and intervention. In: AUVSI North America 2010 Conference (2010)
Robotics and Intelligent Systems: A New Curriculum 93
1 Introduction
Humanoid Robotics is receiving a lot of attention for its potential usage in dif-
ferent fields of society. Research is being done in care applications, socially assis-
tive situations and education (Gross et al. 2019; Belpaeme et al. 2018), among
other areas. To ensure future research in humanoid robotics, new generations of
researchers must be trained. However, high-quality education is only possible if
the students have access to appropriate hardware. On the one
hand, there are commercially available products, such as Nao from Aldebaran
Robotics (Gouaillier et al. 2009). On the other hand, there are open source
projects like iCub (Metta et al. 2008) and robots built by research groups them-
selves like Myon (Hild et al. 2012). All these robots are highly complex systems
that are fragile, expensive and not easy to maintain. As a result, there are usu-
ally few of these systems, if any, available for education. This can be a barrier
to high-quality teaching. Another, already mentioned difficulty is the high com-
plexity of these systems. Even for experienced researchers, it often takes a lot of
time and effort to become familiar with such a system. Therefore, it is under-
standable that one cannot complete a large project on a humanoid robot within
a short period of time, unless a lot of simplifications are made (like using pre-
existing solutions). Additionally, the different learning needs and performance
levels of the students must be taken into account. This applies to any kind of
complex problem that is to be solved with trainees. As long as the trainees have
no experience in solving such tasks, a lot can go wrong. This includes not only
the availability of possibly required hardware, as in the case of robotics. A group
of trainees is always diverse in their level of knowledge of individual content.
The level of comprehension capacities also varies naturally. How can we help
weaker learners? Frustration should be avoided to guarantee satisfying results
(see (Meindl et al. 2019) for possible consequences of frustration). And how can
we ensure that the stronger learners are not bored when the content is taught
more slowly? What happens if certain teaching content is not understood by
individuals? Everyone has strengths and weaknesses and there should be no sys-
tematic disadvantage. Last but not least, hardware, as mentioned above, is often
a bottleneck. Not only must availability be ensured, but also functionality. How
to deal with technical problems, whether or not they are the fault of the learn-
ers? We address these questions with a didactic concept that makes it possible to
implement complex projects even in a large course with many students. It pro-
vides equal opportunity for students with different performance levels while still
delivering measurable results. With smooth transitions between different levels
of complexity, frustration does not arise for weaker learners, nor does motivation
disappear for the better ones. Furthermore, our curriculum can handle technical
problems of complex hardware. We show how this concept is realized in a project
course in the program Humanoid Robotics at Beuth University of Applied Sci-
ences in Berlin. For this university course, we introduce three levels of increasing
complexity. For each level, different hardware requirements are given and differ-
ent contents of the accompanying lectures are relevant. Nevertheless, there are
enough meaningful tasks to satisfy all needs and performance levels. In addition,
learners have the opportunity to participate in a research project with their work
and have direct contact with future clients.
The paper is structured as follows. In Sect. 2, the didactic concept is described
in detail based on the robotics course example. Section 3 describes the integration
of the curriculum into the research project RoSen. The paper concludes
with Sect. 4.
Related Work
Many studies on robots in education use these machines to deliver teaching
content or to motivate students. For example, the authors of (Donnermann et al.
2020) use the robot Pepper to provide individual tutoring as exam preparation. A
similar situation is established in (Rosenberg-Kima et al. 2020). Here, the robot
Nao serves as a moderator of a group assignment. Another field of application
is the use of robots as a mechatronic example for engineering subjects (see for
example (de Gabriel et al. 2015)). Close to the approach of this work is (Danahy
et al. 2014). Here, LEGO robots were used over a period of 15 years to implement
various teaching programs at universities. Different engineering topics such as
96 S. Untergasser et al.
Fig. 1. Students start in the simulation, shown on the left side. When successful, they
get a real MiniBot shown in the middle. At the end of the semester, they get the chance
to test their code on a big humanoid robot like Myon shown on the right (source: Messe
Berlin).
2 Concept
This section first describes the problem in more detail. Then it outlines the
proposed concept in the robotics course.
In any learning situation with multiple learners, one faces the challenge of
serving different levels of proficiency. Some of the learners understand
the content faster than others. Typically, the goal is to maintain an average
speed and level of complexity so that most of the group can follow along. This
can mean that the material is nevertheless too difficult or fast for some learners,
so that frustration can spread among this part of the group. These learners then
have to spend more of their free time on the subject in question. As a result,
they may miss out on other courses. At the same time, the better learners are
bored because the material is not challenging enough for them or is taught too
slowly.
Research in Education 97
It is possible that there are teaching contents which cannot be divided into
several levels. However, with some effort and abstraction, it is possible in many
cases. For example, one goal of the research project RoSen (described in Sect. 3)
is to use a complex humanoid robot like Myon (see Fig. 1 on the right). This
can only be achieved if all intermediate steps are understood. These include, for
example, controlling the actuators, visual processing of camera data, or handling
the up to 32 degrees of freedom. However, concentrating on simpler subtasks of
the bigger goal allows for a much broader range of projects. For example, local-
izing the robot in complex environments and building a map of the environment
are essential parts of the RoSen project. To solve this task, however, the robot
Myon is not necessary in the first step. All that is needed is a small analogue,
called MiniBot (shown in the center of Fig. 1), which has the necessary capa-
bilities and degrees of freedom so that relevant algorithms can be tested on it
before being applied to Myon. This MiniBot provides a huge reduction in com-
plexity. It also circumvents potential hardware bottlenecks, since the MiniBot
is relatively inexpensive to produce and therefore each student can have their
own robot. However, the lowest level of complexity is formed by a simulation of
the two robots (large and small) in the simulation environment WeBots (Michel
2004). This virtual environment is used for the development and initial testing
of algorithms and project ideas and serves as a playground for the first level.
Figure 1 shows the simulated MiniBot on the left. In the transition to the second
level the code developed by the students can be transferred to the MiniBot and
adapted to the real world. The transition to the third level is achieved when
everything works on the MiniBot. Then the students get the opportunity to test
their project developments on the large robot. The three-level curriculum of the
example presented here is shown in Fig. 2. The total 15-week course is divided
into three blocks of equal length. The lower part of each block lists the main
course contents. In the following, the two courses forming the formal framework
of the robotic projects are first outlined in terms of content. Building on this,
the concepts and contents of the three levels are described in detail.
Fig. 2. The timeline of the project course is divided into three blocks. Students start
in the simulation, continue with the MiniBot and can end up programming Myon.
Contents of the student tasks are listed in the lower part of the blocks.
The Humanoid Robotics project module forms the formal framework for the
students’ project work. The details for the WeBots simulation environment (see
next section) and the simulated worlds and robots in it are presented and dis-
cussed here. Additionally, the hardware and programming details of the robots
are introduced. Students can carry out projects alone or in small groups of up to
three people. The examination consists of three videos, which are due at the end
of the respective blocks. In these videos, the project group members describe
their current progress, any problems and their solutions. The expected content
relates to the content of the block, but of course also to the intended projects. In
the first five weeks only the simulation is used. Here the students are given tasks
that build on each other in order to get to know the simulation environment and
the behavior of the robots better. This already includes the first algorithms for
robot localization and map creation (Durrant-Whyte and Bailey 2006), which
are implemented and tested as part of the tasks. The students also get to know
extensions and improvements of the algorithms, which they can later implement
and test as possible projects. In the following five weeks, the students who have
successfully submitted their first video will each receive a MiniBot. The first
tasks are to let the real MiniBot balance and to port the code that has already
been written for orientation. Here it is of particular didactic relevance to over-
come the difference between the simulated world and the real world, where, for
example, sensor values are always noisy. In the course of the second block, the
students are encouraged to come together in groups and to develop project ideas.
In the second video, at the end of the second block, the MiniBot is expected to
balance, move when switched on and form a map. A sketch and first develop-
ments regarding the project ideas are also expected here. Individual performance
can vary greatly, which is why there are only a few predetermined expectations.
The third block now consists of the implementation of the project ideas on the
MiniBot and, in the case of successful projects, the adaptation of the code for
Myon. The lecture Machine Learning deals with the required algorithms. The
contents are adapted to the timeline in Fig. 2 to enable successful projects. Here,
too, there is a division into three subject blocks. The first block deals with the
mathematical basics of the course (probability theory, graph theory, particle
filter, etc.).

Fig. 3. The C program in the middle is written by the students in the course of their
projects. Different SDKs are available for the platforms, since they all have different
hardware and therefore different implementations. These differences are hidden below
the SDKs.

In addition, robot localization and map building (SLAM) are
dealt with in detail. The characteristics of the different robotic platforms, for
example, the specific features of the sensor values, are taken into account here.
The second part deals with neural networks in general and the visual processing
with the help of these networks in particular. Architectures that are suitable for
mobile robot applications are examined in more detail. These are so-called light
weight networks such as MobileNet (Howard et al. 2017) and YOLO (Redmon
et al. 2016). By the end of the second block, the students are familiar with all
the algorithms for their projects. In the third part of the lecture, further algo-
rithms from machine learning are presented, but they have no direct relation
to the projects (e.g. regression, decision trees, boosting, etc.). Platform-specific
SDKs (Software Development Kits) have been developed to make the transitions
between simulation, MiniBot and Myon as easy as possible. The same function-
alities, independent of the robotic platform, are available to the students. These
are implemented differently in the background on the different platforms. In the
course of the lectures, however, it will be discussed exactly where the differences
are and what to look out for (e.g. different number of sensors, different pro-
cessing capabilities). Figure 3 shows the merging of the different platforms on a
conceptual level. The common basis is always the C code, which is created by
the students as part of their projects. Depending on which platform they want
their code to run on, a different SDK is integrated. This enables easy porting of
the code between all platforms once the SDKs are available. The details of the
platforms are described in more detail below.
Fig. 4. The initial simulated world for the MiniBot experiments. Walls within a
two-square-meter rectangle can be added or removed according to a grid.
2.2 Simulation
To maximize the efficiency of learning, the concept described above was devel-
oped in close collaboration with students. To this end, selected students were
given the opportunity to work on specific sub-aspects. One of these sub-aspects
was the development and construction of the MiniBot (see Sect. 2.3). Another
part was to create a simulation of the MiniBot (see Fig. 1 on the left) and a
corresponding world in the simulation environment WeBots. As can be seen in
Fig. 4, this world consists of a set of walls placed on a table. Within the rectangle
of two square meters, the walls can be added or removed as desired, creating
different room geometries. The simulation corresponds exactly to a model in the
university’s lab, where the real MiniBot can drive around. The real model also
consists of movable walls of the same size as in the simulation. The algorithms
tested in the simulation can therefore be tested in the exact same environment
in reality. Accordingly, the simulated MiniBot is also an exact replica of the real
robot. This includes shape, weight and components, but also the driving behavior
and sensor qualities. As can be seen in Fig. 3, the students’ C program commu-
nicates with the simulation environment via a TCP/IP connection. In order for
the communication to work, a C program runs inside the WeBots simulation,
executing a TCP/IP server that handles the communication (this includes reading
sensor values and writing motor commands). The TCP/IP server sends the
sensor values (including camera image) to the central C program and receives
the motor commands from it.
2.3 MiniBot
When the students are ready to transition from the simulation to the real world,
they receive a MiniBot as shown in the middle of Fig. 1. The central computing
unit of the MiniBot is the Kendryte K210 with a RISC-V dual-core processor and
the so-called Kernel Processing Unit (KPU), which accelerates the computation
of Convolutional Neural Networks. The K210 is mounted on the Maix BiT board
from Sipeed. The set includes a microphone, a camera, an SD card slot and
an LCD display. In addition, a gyro sensor, a time-of-flight sensor, some
switches and an infrared receiver for a remote control were built onto the MiniBot
(see Fig. 5 on the left for all parts of one robot). The battery was placed at the
very top for better weight distribution. The drive is provided by two Dynamixel
XH430 servos from Robotis. The wheels, as well as the body of the MiniBot, are
made from acrylic (PMMA) with a laser cutter. Figure 5 also shows the charger and
power supply for the battery. To run the same central C code from Fig. 3 on the
MiniBot, it is compiled into a BIN file using the MiniBot’s SDK. This BIN file is
then flashed onto the K210. It is also possible to place neural networks in the SD
card’s memory and then run them using the KPU. However, the implementation
details are hidden in the background in the respective SDKs.
2.4 Myon
Finally, if the students handle the MiniBot successfully as well, they can test their
code on the large robot Myon (see Fig. 1 on the right). For this purpose, a student
developed a drivable lower body, the prototype of which can be seen in Fig. 5 on
the right side. This development is part of the student’s final thesis. Also, part
of the work is to bring Myon into WeBots, including the mobile lower body. So
far the head with its functionalities is already realized in the simulation. Due to
the mobile lower body, the possible behaviors of the large robot correspond to
those of the MiniBot. The algorithms developed by the students can therefore be
brought to the large robot without having to develop everything from scratch.
Of course, minor adjustments are necessary. For example, Myon has eight
time-of-flight sensors, whereas the MiniBot has only one. Also, the size of the wheels
in relation to the body size is completely different. However, the adaptation of
the parameters is part of the didactic unit of the last block and is an important
lesson to learn.
3 Project RoSen
The RoSen project is a cooperative project between two universities and a housing
cooperative that operates senior housing facilities. Through conversations
between students and residents of the facilities, the project aims to identify the
wishes and needs of the participants.

Fig. 5. Left: parts of a MiniBot, assembled by students in collaboration with teachers.
Right: Myon with the prototype of its drivable lower body.

These are then
tested for their technical feasibility. Together with the residents and technical
experts, the students develop possible application scenarios to create a win-win
situation. After prioritization and further elaboration of the ideas, the agreed on
applications are implemented and the robots are then brought into interaction
with the seniors for defined time periods. These interactions are observed and
evaluated. In addition, there are surveys of all participants on the fulfillment of
the expected effort and benefits. The primary goal, then, is to gather and analyze
the needs and expectations of potential users, as well as of the young
technologists.
In the third block of the project course, one of the students’ tasks is to contact
residents of the senior citizens’ residence. Due to the health situation, the planned
interviews will not be conducted in person, but by letter or telephone. For this
purpose, the students write a welcome letter to selected seniors, in which they
introduce themselves, formulate their concerns and make suggestions on how
to proceed. Optionally, the same is done by telephone. In the course of several
conversations, the above-mentioned project goals are worked out together with
the seniors. In the module, data protection, appropriate forms of approaching
and the methods for finding the needs of potential users are discussed. This
gives the students the opportunity to align their robotics projects with the RoSen
project and the wishes and needs of the seniors. They can test and evaluate their
ideas directly in interaction with users. They can try out the algorithms they have
learned and implemented (such as robot localization and recognition of visual
impressions) in a realistic environment. Direct involvement in active research also
provides a unique insight. Interested students can also participate in the project
after the end of the semester in the context of final theses. Nevertheless, it is also
a challenge for those involved. It is not just a technical event where everyone
can program for themselves. The social component through direct contact with
potential users opens up completely new perspectives.
3.1 Results
After the first semester of the presented university courses we can report first
results. All participating students (17 people) have completed the course and all
of them have realized a project. We observed a typical distribution of performance
levels: some easier projects, most with an intermediate level of difficulty,
and some very challenging ones. Some results of the projects will now
be incorporated into the hardware and software, so that the base system for the
next generation of students already starts at a higher level. One third of the
students this semester are interested in continuing the projects in their bachelor
thesis.
4 Conclusion
When young recruits learn about complex machines, it usually takes years before
they are proficient enough to carry out larger projects. The same holds for
humanoid robots. Understanding these very complex machines and enabling them
to do something is a tedious task. This work presented how such complex projects
can nevertheless be realized in a short time. The described educational frame-
work meets all the needs of the learners. By dividing the study contents into sev-
eral difficulty levels, projects with different demands can be realized. Thus, every
learner has a chance to successfully complete a project. For the specific example
of humanoid robotics projects, three of these levels were introduced. Students
were first taught to program the virtual version of the small robot MiniBot in
the simulation environment WeBots. Each student who successfully implemented
the algorithms from the accompanying machine learning lecture
received a real MiniBot. In terms of functionality, this MiniBot is a small
copy of the large robot Myon. The transfer from simulation to the real world
has many obstacles. Those who successfully master these obstacles and enable
the MiniBot to find its way around are allowed to test their own developments
on the large robot. To enable these seamless transitions between platforms, the
MiniBot was developed along the lines of Myon. A virtual version of the MiniBot
was then built in WeBots. A virtual version of the Myon is also nearly complete.
In addition, SDKs for all platforms have been developed, allowing all platforms
to be programmed with the same code. Through the RoSen research project,
students can directly try out what they have learned in an application-oriented
environment and practice initial customer contact. In each level of the curricu-
lum there are enough topics that can be worked on by the learners so that they
can contribute to the complete project in a meaningful way.
References
Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati, B., Tanaka, F.: Social robots
for education: a review. Sci. Robot. 3(21), eaat5954 (2018)
Danahy, E., Wang, E., Brockman, J., Carberry, A., Shapiro, B., Rogers, C.B.: Lego-
based robotics in higher education: 15 years of student creativity. Int. J. Adv. Robot.
Syst. 11(2), 27 (2014)
de Gabriel, J.M.G., Mandow, A., Fernández-Lozano, J., Garcı́a-Cerezo, A.: Mobile
robot lab project to introduce engineering students to fault diagnosis in mechatronic
systems. IEEE Trans. Educ. 58(3), 187–193 (2015)
Donnermann, M., Schaper, P., Lugrin, B.: Integrating a social robot in higher
education–a field study. In: 2020 29th IEEE International Conference on Robot and
Human Interactive Communication (RO-MAN), pp. 573–579. IEEE (2020)
Durrant-Whyte, H., Bailey, T.: Simultaneous localization and mapping: part I. IEEE
Robot. Autom. Mag. 13(2), 99–110 (2006)
Čehovin Zajc, L., Rezelj, A., Skočaj, D.: Teaching intelligent robotics with a low-cost
mobile robot platform (2015)
Gouaillier, D., et al.: Mechatronic design of NAO humanoid. In: 2009 IEEE Interna-
tional Conference on Robotics and Automation, pp. 769–774. IEEE (2009)
Gross, H.-M., Scheidig, A., Müller, S., Schütz, B., Fricke, C., Meyer, S.: Living with a
mobile companion robot in your own apartment-final implementation and results of
a 20-weeks field study with 20 seniors. In: 2019 International Conference on Robotics
and Automation (ICRA), pp. 2253–2259. IEEE (2019)
Hild, M., Siedel, T., Benckendorff, C., Thiele, C., Spranger, M.: Myon, a new humanoid.
In: Language Grounding in Robots, pp. 25–44. Springer (2012)
Howard, A.G., et al.: MobileNets: efficient convolutional neural networks for mobile
vision applications. arXiv preprint arXiv:1704.04861 (2017)
Meindl, P., et al.: A brief behavioral measure of frustration tolerance predicts academic
achievement immediately and two years later. Emotion 19(6), 1081 (2019)
Metta, G., Sandini, G., Vernon, D., Natale, L., Nori, F.: The iCub humanoid robot:
an open platform for research in embodied cognition. In: Proceedings of the 8th
Workshop on Performance Metrics for Intelligent Systems, pp. 50–56 (2008)
Michel, O.: Cyberbotics Ltd. Webots™: professional mobile robot simulation. Int. J.
Adv. Robot. Syst. 1(1), 5 (2004)
Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: Unified, real-
time object detection. In: Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, pp. 779–788 (2016)
Rosenberg-Kima, R.B., Koren, Y., Gordon, G.: Robot-supported collaborative learn-
ing (RSCL): social robots as teaching assistants for higher education small group
facilitation. Front. Robot. AI 6, 148 (2020)
Education Tool for Kinematic Analysis:
A Case Study
Abstract. This work describes the part of the Applied Mechanics course con-
cerning the kinematics of mechanisms in the Mechanical Engineering master’s
degree course of the University of Perugia. The kinematic study of mechanisms is
essential for mechanical engineers, but also for the design and control of robots and
intelligent machines. Traditionally, in mechanical engineering courses this subject is
dealt with using methods that are not easy for students to generalise and visualise. In
the activity described in this paper the lecturer, after having dealt with the funda-
mental properties of closed and open kinematic chains, with particular attention to
problems of motion, presented an algebraic method for determining the position,
velocity and acceleration of 1-DoF (degree-of-freedom) mechanisms through
explanatory examples, and asked the students to individually analyse mechanisms
employed in robotic systems, providing a complete kinematic analysis
integrated with the dynamic animation of the mechanism.
This paper reports the response of the students, who were asked to select a
mechanism of their choice to test the application of the taught analysis method.
In particular, one student’s exercise is described as an explanatory case, which
highlights the importance of kinematics in educational robotics.
1 Introduction
The role of engineers is rapidly changing in the new industry 4.0 framework and requires
cross-functional and multidisciplinary knowledge and skills that combine IT and pro-
duction knowledge [1]. In his 2017 talk, Peter Fisk defined a list of features of future
education: personalized in time, space, and tools; based on projects and field
experience; peer-to-peer; and allowing data interpretation. Universities have an
important role in fulfilling this need, and the teaching methods adopted also for basic
courses need to be revised
according to this new perspective [2].
The need for innovative education tools, allowing distance learning, possibly through
the use of simulation environments, has become even more important during the recent
lockdown period, due to the COVID-19 pandemics [3].
In this work, a specific path followed during the Applied Mechanics course, focused
on the kinematics of mechanisms, is described. It dealt with methodologies which
aroused the interest of some students in the study of the mechanics of robots. This
experience showed how an applied mechanics didactic path, carried out during a general
mechanical engineering MSc degree course that does not explicitly offer a
curriculum in robotics, can be an incentive and an introduction to robotics [4].
Traditionally, the course of applied mechanics is one of the fundamental courses in
mechanical engineering MSc degree courses, introducing the study of common components
found in various categories of machines, regardless of the specificity of each
machine [5–7]. Some examples are mechanical seals and bearings [8, 9], cams [10–12],
mechanisms and gears [13–15]. The study can be approached from a purely kinematic
point of view, regardless of the forces which cause the motion [16], or from a dynamic
point of view, considering the motion as the effect of forces acting on the machine,
and evaluating the onset of vibrations [17]. All these studies can be approached both
neglecting friction and considering the tribological conditions [18].
Notwithstanding the availability of computational resources and the development
of efficient algorithms and tools for the kinematic and dynamic analysis of mechanical
systems, in several academic courses [17] traditional analysis methods, often based on
graphical constructions, are still quite widespread [5–7] (see Fig. 1). Even if, thanks to
the availability of CAD software, the solution of mechanics problems with traditional
graphical methods has improved, these methods are still time consuming, they are
limited to design and mechanical analysis, and they are not suitable for integration
with other contents, for example the design of mechatronic components and/or the
mechanism control system. Such an integration is desirable in robotic systems, in which
the behaviour of the mechanical structure is strictly related to the sensorimotor and control
systems. A more computationally oriented perspective has been employed in robotics
teaching: several software frameworks have been realised for education and training
purposes, mostly at an academic level. They were recently reviewed by Cañas et al. in
[19]; among them, the most famous is the Robotics Toolbox by Peter Corke
[4], and other examples are SynGrasp [20], devoted to the simulation of robotic grasping,
and ARTE (A Robotics Toolbox for Education), allowing the study of industrial robotic
manipulators [21]. In all these toolboxes the mechanical structure of robots is mostly
limited to serial structures that can be modelled with standard kinematic tools, such as
the Denavit–Hartenberg representation. Other recent simulation tools developed
for kinematics and educational robotics are presented in the literature, such as [22], whose
purpose was to analyse the kinetostatic effects of singularities in parallel robots
to evaluate changes in resistance or loss of control. In [23], the authors describe a virtual
laboratory of parallel robots developed for educational purposes. The tool is named
LABEL and is aimed at teaching the kinematics of parallel robots, solving the inverse
and direct kinematic problems of three kinds of robots. In [24], the authors deal with the
teaching of parallel robots, introducing an intelligent computer-aided instruction (ICAI)
Fig. 1. Acceleration analysis of a planar mechanism with graphical methods from [7]
108 M. C. Valigi et al.
The configuration analysis has the objective of determining the positions of the joint
centres, the centres of mass and the rotation angles of all the bodies that constitute the
mechanism, as a function of a set of input parameters whose dimension is defined by the
number of DoF of the mechanism. Typically, in planar mechanisms, joints are represented
as nodes and bodies are schematised as links between two joints.
From the mathematical point of view, the problem consists in solving a non-linear
algebraic system.
Once the system has been solved in terms of configuration, and the velocities
and accelerations of the input elements are provided, the next step in the kinematic
analysis consists in determining the velocities and accelerations of all the mechanism's elements.
The solution of the velocity and acceleration problem is simpler than the position analysis;
in fact, it consists in solving a linear equation system. For velocities, the fundamental
kinematic relationship of the rigid body is used, while for accelerations Rivals' theorem
is applied. In some mechanisms it is convenient to introduce relative motions and
therefore to include relative velocities, relative accelerations and the Coriolis acceleration term.
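In the planar case, the rigid-body velocity relationship vC = vB + ω × rBC reduces to two scalar equations. A minimal sketch follows (in Python rather than the Matlab used in the course; the numeric values are hypothetical):

```python
def point_velocity(vB, omega, rBC):
    """Planar rigid body: v_C = v_B + omega x r_BC, with omega = (0, 0, w).
    vB and rBC are (x, y) tuples; omega is the scalar angular velocity w."""
    return (vB[0] - omega * rBC[1], vB[1] + omega * rBC[0])

# hypothetical data: B moves at (1, 0) m/s, the body spins at 2 rad/s,
# and C sits 0.5 m above B; C here happens to be the instantaneous centre
vC = point_velocity((1.0, 0.0), 2.0, (0.0, 0.5))  # -> (0.0, 0.0)
```

Because the unknown velocities appear linearly, stacking one such relationship per body yields the linear system mentioned above.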
The kinematic simulation comprises all three steps previously introduced and allows
the motion characteristics (configuration, velocity, acceleration) of all the system's
elements to be determined, once the motion laws of the input elements are known.
The kinematic simulation provides useful information about the mechanism's motion;
therefore, it can be employed for the trajectory analysis of each element, with the purpose
of identifying any collision, interference, etc.
Fig. 2. a) Sample mechanism: links, joints and input data; b) determination of the configuration
of point C
The system can be analytically formulated by writing the equations of the two
circumferences:

DC² = (xC − xD)² + (yC − yD)²
BC² = (xC − xB)² + (yC − yB)²     (1)
It is clear that, since the system is not linear, multiple solutions can appear and should
be managed. In particular, by solving the system of the two equations, points 1 and 2 are
determined, as shown in Fig. 2b), respectively C1 = [xC1, yC1, 0] and C2 = [xC2, yC2, 0].
The point corresponding to the effective point C of the mechanism is chosen by selecting
the point with the lower y, in this case C2, since yC2 < yC1.
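As a sketch of how system (1) can be solved programmatically (the course work was done in Matlab; Python is used here, and the joint coordinates and link lengths are hypothetical), the two-circle intersection admits a standard closed form, after which the lower-y root is kept:

```python
import math

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of the circles centred at c1, c2 with radii r1, r2."""
    x1, y1 = c1
    x2, y2 = c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []  # no real intersection (or coincident centres)
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)  # distance from c1 to the chord midpoint
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))   # half chord length
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    return [(mx + h * (y2 - y1) / d, my - h * (x2 - x1) / d),
            (mx - h * (y2 - y1) / d, my + h * (x2 - x1) / d)]

# hypothetical joint positions and link lengths (D, B fixed; DC, BC rod lengths)
D, B = (0.0, 0.0), (4.0, 0.0)
DC, BC = 3.0, 3.0
C1, C2 = circle_intersections(D, DC, B, BC)
C = min((C1, C2), key=lambda p: p[1])  # keep the solution with the lower y
```

The `min` over the y coordinate implements the selection rule described above for this configuration.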
Similarly, the position of the joint E can be computed by solving the following
system, as graphically reported in Fig. 3:

CE² = (xE − xC)² + (yE − yC)²
(yC − yB)/(xC − xB) = (yE − yB)/(xE − xB)     (2)
This system also has two solutions, indicated with 1 and 2 in Fig. 3a), corresponding
to the two points E1 = [xE1, yE1, 0] and E2 = [xE2, yE2, 0]. The point corresponding
to the effective point E of the mechanism is the one with the lower x coordinate;
therefore, in this case, point E1 is chosen, since xE1 < xE2.
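The second system admits a closed form: writing E = B + t(C − B), the distance constraint CE gives t = 1 ± CE/|C − B|. A sketch under the same assumptions as before (Python instead of Matlab, hypothetical coordinates and lengths):

```python
import math

def on_line_at_distance(B, C, CE):
    """Points E on the line through B and C with |E - C| = CE (system (2))."""
    dx, dy = C[0] - B[0], C[1] - B[1]
    L = math.hypot(dx, dy)
    # E = B + t*(C - B); |E - C| = CE  =>  |t - 1| * L = CE
    return [(B[0] + t * dx, B[1] + t * dy) for t in (1 + CE / L, 1 - CE / L)]

B, C = (4.0, 0.0), (2.0, -2.0)   # hypothetical joint positions
E1, E2 = on_line_at_distance(B, C, 1.0)
E = min((E1, E2), key=lambda p: p[0])  # the mechanism's point E has the lower x
```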
The joint F can move along a guide track parallel to the x axis; therefore, the value
of yF is known and xF can be determined (Fig. 3b). Solving the equation, also in this case two solutions xF1
and xF2 can be found. The point corresponding to the point F of the mechanism is the
one with the lower xF value; in this case, xF1 is chosen, since xF1 < xF2.
The velocity and acceleration problems can then be formulated and solved.
Based on the described method, a series of Matlab scripts developed for educational
purposes was set up to evaluate the kinematics of the mechanism in terms of position,
velocity and acceleration during a complete rotation of member 1, i.e. varying the angle ϕ
in the range 0° < ϕ < 360°, and to provide a graphical representation of the mechanism's
movement during the rotation of member 1 (as shown in Fig. 4 for the sample mechanism).
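The overall structure of such a simulation script can be sketched as follows (Python rather than the course's Matlab; the crank length AB and the angular step are illustrative):

```python
import math

AB = 1.5  # hypothetical crank length; reference frame origin at the fixed joint A
trajectory_B = []
for phi_deg in range(0, 360, 5):       # sweep the input angle over a full turn
    phi = math.radians(phi_deg)
    B = (AB * math.cos(phi), AB * math.sin(phi))  # position of the crank endpoint
    # ...solve the remaining joint positions (C, E, F) for this value of phi,
    # then the velocity and acceleration linear systems...
    trajectory_B.append(B)
# trajectory_B can then be plotted frame by frame to animate the mechanism
```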
effective in the course, especially in the last semester in which the lectures were forced
to be on-line:
• With respect to more efficient but complex analysis tools [28, 29], such as those implemented
in multibody simulators, it is still quite intuitive and easy for the students
to visualise, and it requires only very basic mathematics and physics knowledge.
• The formulation is provided in an analytical form that can easily be implemented
directly by students with very basic programming skills, for instance in Matlab.
• The implementation allows the students to immediately visualise the results of the
analysis, the main features of the motion and the main properties of the kinematic
parameters.
The course was attended online by approximately 50 students in the first
semester of Academic Year 2020/2021. During the first lessons, the theoretical fundamentals
of kinematics were explained step by step; in parallel, the taught methods
were implemented in practical exercises and examples, also based on robotic applications,
and solved by the lecturer. Later, the same exercises were proposed to the students
with different input data, and they solved the mechanisms autonomously. At the end
of the course, the students were asked to choose a mechanism implemented on a real
mechanical system and use it to practise the methods. Different types of mechanisms
were chosen; a graphical representation of the distribution of the applications is reported
in Fig. 5. An example is reported in the following section.
This section reports, as a case study, one of the most significant works developed by the students in the
robotic field: a part of the walking mechanism created by Theo Jansen,
a Dutch artist who has developed kinetic art experiences (New forms of life, as he
defines them) since 1990.
Some of his most famous artworks are the Strandbeesten (beach animals,
Fig. 6), a sort of animated skeletons whose propulsion originates directly from
wind energy. The motion of these mechanisms is governed by complex systems, made
up of mechanical members connected by joints. Although they have large dimensions, these
structures are built up using flexible plastic pipes, nylon wires and adhesive tape [30].
These complex kinematic systems can be large, generally from 3–4 m up to
12 m. The large dimensions provide wide surfaces for the wind to act on, resulting in
considerable propulsive forces.
The case study concerns the position and kinematic analysis of the mechanism of a
single limb. This is a single-DoF planar mechanism (Fig. 7a), composed of 11 links and
8 joints. The reference frame has its origin in A, which is a stationary point. ϕ is the
angle between the horizontal direction through A and the link AB, and it represents
the unique DoF of the motion. Applying the method presented in the previous section,
the position analysis was performed as shown in Fig. 7b, and the kinematic simulation
was carried out, determining the velocities (Fig. 8) and accelerations (Fig. 9) of the main
points of the mechanism.
References
1. Onar, S., Ustundag, A., Kadaifci, Ç., Oztaysi, B.: The changing role of engineering education
in industry 4.0 Era. In: Industry 4.0: Managing the Digital Transformation, pp. 137–151.
Springer, Cham (2018). https://doi.org/10.1007/978-3-319-57870-5_8
2. Fisk, P.: Education 4.0 (2017)
3. Birk, A., Dineva, E., Maurelli, F., Nabor, A.: A robotics course during COVID-19: lessons
learned and best practices for online teaching beyond the pandemic. Robotics 10, 5 (2021)
4. Corke, P.: Robotics, Vision and Control: Fundamental Algorithms in MATLAB, 2nd edn.,
vol. 118. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-54413-7
5. Callegari, M., Fanghella, P., Pellicano, F.: Meccanica Applicata alle Macchine. UTET
Università
6. Meneghetti, U., Maggiore, A., Funaioli, E.: Lezioni di meccanica applicata alle macchine.
Fondamenti di meccanica delle macchine, Pàtron, vol. 1 (2005)
7. Papini, S., Malvezzi, M., Falomi, S.: Esercitazioni di meccanica applicata alle macchine.
Cinematica e cinetostatica, Esculapio, vol. 1
8. Valigi, M.C., Braccesi, C., Logozzo, S.: A parametric study on friction instabilities in
mechanical face seals. Tribol. Trans. 59(5), 911–922 (2016)
9. Liu, Z., Wang, Y., Zhao, Y., Cheng, Q., Dong, X.: A review of hydrostatic bearing system:
researches and applications. Adv. Mech. Eng. 9(10) (2017)
10. Yousuf, L.S., Hadi, N.H.: Contact stress distribution of a pear cam profile with roller follower
mechanism. Chin. J. Mech. Eng. 34(1), 1–14 (2021). https://doi.org/10.1186/s10033-021-00533-y
11. Cardona, S., Zayas, E., Jordi, L., Català, P.: Synthesis of displacement functions by Bézier
curves in constant-breadth cams with parallel flat-faced double translating and oscillating
followers. Mech. Mach. Theory 62, 51–62 (2013)
12. Chen, F.Y.: Mechanics and Design of Cam Mechanisms. Pergamon Press, Oxford (1982)
13. Dudley, D.: Gear Handbook. McGraw-Hill, New York (1962)
14. Baglioni, S., Cianetti, F., Landi, L.: Influence of the addendum modification on spur gear
efficiency. Mech. Mach. Theory 49, 216–233 (2012)
15. Litvin, F.L., Fuentes, A.: Gear Geometry and Applied Theory. Cambridge University Press,
Cambridge (2004)
16. Valigi, M.C., Logozzo, S., Malvezzi, M.: Design and analysis of a top locking snap hook for
landing manoeuvres. In: Niola, V., Gasparetto, A. (eds.) IFToMM ITALY 2020. MMS, vol.
91, pp. 484–491. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-55807-9_55
17. Kelly, S.: Fundamentals of Mechanical Vibrations. McGraw-Hill, Singapore (2000)
18. Mang, T., Dresel, W.: Lubricants and Lubrication. Wiley-VCH Verlag GmbH & Co., KGaA
(2007)
19. Cañas, J., Perdices, E., García-Pérez, L.: A ROS-based open tool for intelligent robotics
education. Appl. Sci. 10, 7419 (2020)
20. Malvezzi, M., Gioioso, G., Salvietti, G., Prattichizzo, D.: SynGrasp: a Matlab toolbox for
underactuated and compliant hands. IEEE Robot. Autom. Mag. 22(4), 52–68 (2015)
21. Gil, A.: ARTE: a robotics toolbox for education. Miguel Hernández University (UMH) (2012)
22. Peidró, A., Marín, J.M., Payá, L., Reinoso, O.: Simulation tool for analyzing the kinetostatic
effects of singularities in parallel robots. In: IEEE International Conference on Emerging
Technologies and Factory Automation, ETFA 2019, September 2019
23. Gil, A., Peidró, A., Reinoso, O., Marín, J.M.: Implementation and assessment of a virtual
laboratory of parallel robots developed for engineering students. IEEE Trans. Educ. 57(2),
92–98 (2014)
24. Tan, D.-P., Ji, S.-M., Jin, M.-S.: Intelligent computer-aided instruction modeling and a method
to optimize study strategies for parallel robot instruction. IEEE Trans. Educ. 56(3), 268–273
(2012)
25. Kovar, J., Andrs, O., Brezina, L., Singule, V.: Laboratory delta robot for mechatronic edu-
cation purposes. In: SPEEDAM 2012 - 21st International Symposium on Power Electronics,
Electrical Drives, Automation and Motion (2012)
26. Marghitu, D.B.: Mechanisms and Robots Analysis with MATLAB. Springer, London (2009).
https://doi.org/10.1007/978-1-84800-391-0
27. Baglioni, S., Cianetti, F., Braccesi, C., De Micheli, D.: Multibody modelling of N DOF robot
arm assigned to milling manufacturing. Dynamic analysis and position errors evaluation. J.
Mech. Sci. Technol. 30(1), 405–420 (2016)
28. Shabana, A.: Dynamics of Multibody Systems. Cambridge University Press, Cambridge
(2020)
29. Samin, J., Fisette, P.: Symbolic modeling of multibody systems, vol. 112. Springer, Dordrecht
(2003). https://doi.org/10.1007/978-94-017-0287-4
30. www.strandbeest.com
Evaluating Educational Robotics
Girls and Technology – Insights
from a Girls-Only Team at a Reengineered
Educational Robotics Summer Camp
SDU Game Development and Learning Technology, The Maersk Mc-Kinney Moeller Institute,
University of Southern Denmark, Odense, Denmark
bkp@mmmi.sdu.dk
Abstract. In this paper, we present the results from a case study of an annual
extra-curricular summer camp, in which participants (n = 126) engaged in
STEM and Computational Thinking activities, facilitated through the use of
the micro:bit microcontroller platform. The camp was a reengineered version of
an annual summer camp held by Teknologiskolen, a Danish non-profit organisation
offering weekly classes in technology. The focus of the reengineering was
to increase the number of girls participating in the camp. This was attempted by
creating a girls-only team, which employed highly contextualised projects and
had an emphasis on using everyday materials, such as cardboard, paint and glue. The
result was a significant increase in the number of participating girls and in their
attitude towards technology, which, at the end of the camp, matched that of the
boys. Based on the results, we argue that the girls-only team was the main reason
for the higher number of participating girls, while the change in attitude was due
to the highly contextualised projects and the selection of materials.
1 Introduction
Teknologiskolen (transl. “The School of Technology”) (TS) [1] is a non-profit organi-
sation based in Odense, Denmark.
TS was founded for two reasons: First, we saw a societal need for an increase in the
STEM (Science, Technology, Engineering and Mathematics) workforce [2, 3]; Second,
and most importantly, we identified a need for an extracurricular technology club. The
intention was to give youths from the local community a place to cultivate and grow
their interest in technology, much like they can attend extracurricular sports and music
activities. The activities at TS revolve around teaching technology, grounded in both STEM
and Computational Thinking (CT) [4], through the use of Educational Robotics (ER), which
has been proven a feasible approach to achieving this [5–7]. Our approach to teaching
RQ1: Will the addition of a girls-only team influence the number of participating girls
and their interest in the camp?
RQ2: Will introducing more contextualised projects and a change in materials influence
the number of participating girls and their interest in the camp?
2 Method
The study was carried out as a case study at Teknologiskolen's annual summer camp on
robotics and automation, which took place from Monday 29 June to Friday 3 July 2020,
with participants meeting from 9 AM to 3 PM each day. When signing up for the camp,
the participants (N = 126) had to choose which team to participate in, based on their
age group and gender (see Table 1). The case study and the following sections will mainly
revolve around Team 3, which was specifically created for this camp.
Four months ahead of the camp, the official advertisement materials (TS website
information) were distributed in the local community. The descriptions for Team 2 and
Team 3 were largely similar and revolved around learning more about the technology
which surrounds us, by building and programming technological machines and robots
at the camp. The main difference was that the description of Team 3 mentioned the
potential for robot technology to help humans and animals, without providing any
specific contexts for this. In addition, it mentioned a list of prototyping materials to be
used in the projects, such as cardboard, paper, glue and fabric. Participants (girls only)
who signed up for Team 3 were asked to fill out a short initial survey regarding their
interest in the following ER themes: smart homes, with a focus on persons with a
physical disability (61.5%); traffic safety (23.1%); interactive art (38.5%); and
wearables/intelligent clothing (46.2%). The price of attending the camp was DKK 850,
which included a personal box of technology worth approximately DKK 800 to take
home afterwards, made possible through a donation from the local Facebook data centre.
122 B. K. M. K. Pedersen et al.
Projects and Contexts: During the camp, several projects and contexts were explored
and worked on. Firstly, there was the automatic soap dispenser. This project served as
an introduction to robotics and automation (microcontrollers, sensors, actuators and the
interplay between these), while demonstrating the strength and flexibility of relatively
simple input/output systems (one sensor, one actuator). To exemplify the relevance of
robot technology in relation to the daily lives of the participants, the chosen context for
the project was the current Covid-19 pandemic.
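The one-sensor/one-actuator logic behind such a dispenser amounts to a single conditional. A minimal sketch in plain Python (the camp used MakeCode blocks on the micro:bit; the sensor reading, threshold and state names here are purely illustrative):

```python
def dispenser_state(distance_cm, threshold_cm=10.0):
    """One sensor, one actuator: if a hand is closer than the threshold,
    drive the servo to dispense; otherwise stay idle."""
    if distance_cm < threshold_cm:
        return "dispense"   # e.g. rotate the servo arm to press the pump
    return "idle"

# simulated distance readings: far away, hand present, hand withdrawn
states = [dispenser_state(d) for d in (50.0, 8.0, 12.0)]
```

Swapping the sensor or actuator (say, a light sensor driving an LED) keeps the same structure, which is the flexibility of simple input/output systems the project was meant to demonstrate.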
For practical reasons, mainly costs and number of helpers, it was decided that the
participants could choose between two different projects to work on, after having com-
pleted the automatic soap dispenser project. In this regard, the results from the admission
survey show that the three most popular options in technology were the smart home,
wearables/intelligent clothing and interactive art. After considering the overlap in mate-
rial and work processes, it was decided to let the students choose between projects
within interactive art and smart home, after having finished the soap dispenser project.
While developing both projects, an emphasis was put on designing for a low floor entry
threshold for beginners and a high ceiling for the more experienced participants [24].
The project regarding interactive art involved developing paintings which would
react to their surroundings. This could be, for example, streetlights that turn on when
it gets dark (light sensor/LED); characters that hide when a person approaches (dis-
tance sensor/servomotor); or flowerheads that spin around when grass is being stroked
(switch/DC motor). The context was designing and developing technological products
that the participants would want to continue using after the camp had finished.
The smart home project was about creating devices intended for people with physi-
cal disabilities to exemplify how robot technology is an important factor in the welfare
sector. For this purpose, a laser-cut model for a two-room house with hollow walls
for containing the technology was developed. The model was designed to support the
implementation of the following functionalities: automatic windows (switch or temperature
sensor/servomotor); automatic doors (switch/servomotor); automatic curtains (light
sensor/servomotor); automatic lighting (PIR/LED); door lock (hall sensor/servomotor);
door alarm (distance sensor or PIR/buzzer); automatic plant watering (soil moisture
sensor/DC motor water pump).
Materials: Considering the massive national investments in the MakeCode [25] com-
patible micro:bit technology [26], it was decided to use this as the basis development
platform for the camp, to bridge with the compulsory education of the participants. To
expand the micro:bit with a breadboard, and thus allow for external sensors and actuators,
the FireFly connector from Teknologihuset [27] was chosen, due to its built-in dual
H-bridges providing native control of up to four DC motors. In addition, a variety of
basic components, sensors, actuators and prototyping materials was made freely available
to the participants to use as they saw fit (see Table 2). Furthermore, to support the
participants in working as autonomously as possible, they were provided with a
compendium of cheat sheets developed for each sensor and actuator. The files for the house
model and compendium can be found here: https://bit.ly/3pRBOk3.
Procedure: Different pedagogical methods were applied throughout the camp, which
was split into two sections. The first section (Monday) was based on teacher-centred
instructions, guiding the pupils through their first project (automatic soap dispenser).
Here the students were taught about the difference between sensors and actuators, the
role of the microcontroller, and how to use conditionals to connect the input with a
desired output. That is, how to construct and programme simple input/output systems, in
which the sensors and actuators can be freely swapped around with other types to obtain
completely new functionalities. The second section (Tuesday to Friday) was inspired
by constructionism [8, 9], self-directed learning and Challenge-Based Learning [28, 29]
with each proof of concept (smart home) acting as a nano-challenge [28]. During this
time, the participants would direct their own learning through their choice of project
and the functionalities they wanted to implement, while the teachers acted as facilitators
assisting them in understanding the challenges. To let the participants make an informed
decision regarding which type of project they wished to work with, and to provide a basis
for understanding the scope of the possibilities and limitations, three pieces of interactive
art and three models of smart homes were demonstrated. For practical reasons, and due to
the Covid-19 pandemic, the participants worked in pairs with a minimum distance
of two metres between each pair. In addition, participants could opt to keep their project
at the end of the camp. However, the pairs working with the smart homes had to choose
internally which one of them could keep it.
2.3 Data
A mixed-method approach was used to collect both quantitative and qualitative data.
Quantitative data was collected in both Team 2 and Team 3 through pre- and post-
questionnaires. The questionnaires made use of multiple-choice answers, 5-point Likert
scales and open answers. Qualitative data from Team 3 was collected through observa-
tions and conversations with the participants throughout the study. Additionally, individ-
ual semi-structured interviews were conducted with six randomly selected participants
during the last day of the camp. For the analysis, Pearson's chi-squared tests and independent
and dependent t-tests (95% confidence interval) were used to find the level of significance
between the scores from the pre- and post-questionnaires. In the case of non-response
errors, the data field was left empty and the 'exclude cases analysis by analysis' method
in IBM's SPSS [30] was used. While the pre- and post-questionnaires were anonymous,
a generated code allowed for comparison between them.
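For reference, Pearson's chi-squared statistic for a 2 × 2 contingency table, as used for the gender-composition comparison, can be computed directly from the four cell counts. A sketch in Python; the counts below are purely illustrative, not the study's data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson's chi-squared statistic (1 df, no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# hypothetical girl/boy counts for two camp years (illustrative only)
stat = chi_square_2x2(5, 45, 22, 53)
```

Comparing `stat` against the chi-squared distribution with 1 degree of freedom then yields the p-value.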
3 Results
The demographic, quantitative and qualitative data will be presented in this section. In
the following, statistically significant results will be in bold.
3.1 Demographics
Table 3 presents an overview of the participants’ age, their previous level of experience
with programming and robotics, previous attendance at Teknologiskolen, if they knew
at least one other person attending the camp and the importance of this.
Age:
          8      9      10     11     12     13     14     15
Team 2:   0%     2.1%   31.9%  21.3%  19.1%  14.9%  8.5%   2.1%
Team 3:   4.5%   4.5%   31.8%  36.4%  22.7%  0%     0%     0%

          Prev. attended TS   Knew another person   It is important to know another person
                                                    Yes     No      Don't know
Team 2:   41.1%               45.1%                 41.2%   52.9%   0%
Team 3:   36.4%               68.2%                 50.0%   40.9%   9.1%

          Prev. experience with programming   Prev. experience with robotics
          Vast (3)   Some (2)   None (1)      Vast (3)   Some (2)   None (1)
Team 2:   45.8%      45.8%      0%            25%        52.1%      22.9%
Team 3:   9.1%       81.8%      9.1%          4.5%       59.1%      36.4%
                                Team 2               Team 3               Significance
Experience with programming:    M = 2.46, SE = .073  M = 2.00, SE = .093  t(68) = 3.882, p = .000
Experience with robotics:       M = 2.02, SE = .101  M = 1.68, SE = .121  t(68) = 1.991, p = .051
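Given only group means and standard errors, an independent t statistic can be approximated as t ≈ (M1 − M2)/√(SE1² + SE2²); applied to the programming-experience figures above, this roughly reproduces the reported value (a sketch of the arithmetic, not SPSS's exact computation):

```python
import math

def t_from_summary(m1, se1, m2, se2):
    """Approximate independent-samples t statistic from the group means
    and standard errors (Welch-style, no pooled variance)."""
    return (m1 - m2) / math.sqrt(se1 ** 2 + se2 ** 2)

t_prog = t_from_summary(2.46, 0.073, 2.00, 0.093)  # close to the reported 3.882
```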
and wished to be moved to Team 3 (see Table 4 for an overview). The difference in the
gender composition of the participating pupils aged 10–16 between the years 2019 and 2020
was significant at X²(1, N = 125) = 9.941, p = .002.
Table 4. The gender composition of participants aged 10–16 from the camp in 2019 and 2020.
The participants were also shown a series of MakeCode programming blocks and
asked to write down what each block did, in addition to explaining related vocabulary.
The answers were given scores based on the following criteria: The answer is wrong,
blank or an abbreviation of ‘I don’t know’ (0); there is a trace of understanding, although
very vague (1); there is a clear understanding (2); there is a very clear and detailed
understanding (3). In addition, they were asked whether different electronic components
could be categorised as sensors or actuators. See Table 6 for the results.
Table 6. The participants pre- and post-scores regarding programming and sensors/actuators.
3.4 Evaluation
Table 7 presents the participants’ answers regarding their evaluation of the camp.
Answers were given on a 5-point Likert-scale ranging from 5 (very much) to 1 (not
at all). The table also presents the most often-mentioned topics within the participants’
written answers, regarding what they felt had been the best about the camp, and what
could have been better. Two participants from Team 3 had written ‘That it wasn’t just
technology, but also art’ and ‘Doing different things than the ones on Team 2’.
Table 7. The scores and most mentioned topics from the feedback.
The six participants were, among other subjects, interviewed regarding their motivation
for attending the camp and why they had chosen Team 3.
Motivation: A single participant answered that it was because she already attended TS's
weekly classes. Two answered that they really enjoyed working with technology, with one
of them saying that she felt they did not do it enough at school. Three answered that the
prospect of learning about and building their own robots sounded interesting.
Why Team 3: Although being interviewed individually, all six of the participants
quickly answered that it was because it was a girls-only team. They also stressed how
important it was that there would be other girls at the camp when deciding to sign up,
how none of them liked it when there were too many boys, and how they generally
preferred being with other girls. Their answers covered the spectrum from: ‘It’s fun
because there are other girls’ all the way to ‘Boys are annoying’. Interestingly, none
of them mentioned the slight difference in the advertisement materials for Team 2 and
Team 3. When asking how they felt about the available projects and materials, they had
found them to be fun, relevant, creative, exciting and engaging, with two girls specifically
mentioning not liking LEGO® Technic (one mentioned having seen Team 2 use it during
the shared breaks).
Fig. 1. 1st row (left, middle): The flower goes up/down and the stars turn on/off when it's light/dark
(LDR/servomotor, LDR/NEO pixels), and a fish (magnet) attracts a cat (hall sensor/servomotor).
1st row (right): The unicorn lights up when you blow on the sun (microphone/NEO pixels). 2nd
row (left): Automatic curtains go down/up when it's dark/light (LDR/DC motor). 2nd row (middle):
Automatic watering when the earth is too dry (soil moisture-sensor/DC motor water pump). 2nd
row (right): Automatic lights turn on when motion is detected (PIR/NEO-pixels). 3rd row (left):
Automatic soap-dispenser (IR sensor/servomotor). 3rd row (middle): Automatic front door with a
key card (magnet) (hall sensor/servomotor). 3rd row (right): The earth starts/stops spinning when
Saturn is pressed down (switch/DC-motor), the colours of the stars change when the planet next
to Saturn is rotated (potentiometer/NEO-pixels), and Baby Yoda hides when a hand comes close
to him (IR sensor/servomotor).
4 Discussion
The Immediate Results, Attitude and Learning Outcome: The Team 3 (T3) initiative
yielded an immediate and significant increase in female participants and a change in the
gender composition of participants in the oldest half of the camp. However, it did not manage
to attract female participants aged 13–16, an age group where we also see a reduction
in the number of male participants. In addition, while Team 2 (T2) did not develop
a significantly higher interest in technology like T3 – which can be explained by T2's
initially higher interest herein – it is interesting to see that, despite this initial difference in
their interests in technology and related jobs, these were both equalised at the end of the
camp. Likewise, although both T2 and T3 developed a significantly higher awareness
of their daily interactions with robot technology, T3's awareness became noticeably
130 B. K. M. K. Pedersen et al.
higher. This difference can possibly be due to the higher level of contextualisation in the projects developed for T3. In comparison, it was, however, only T2 who developed a significantly higher interest in STEM-related school topics. While no effort had been made to emphasise the relation between these, or the involved labour market perspectives, and the content of the camp, this is something which could possibly prove beneficial to do in the future. However, T3 reported having gained a significantly higher level of improvement in their understanding of robot technology than T2, and in their feedback they had a higher focus on having learned/made new things. This might be explained by T2's initially significantly higher experience with programming and robotics and could indicate that participating in the camp had a larger effect on the girls' self-efficacy
than the boys’. To our surprise, and despite both T2 and T3 reporting high average scores
regarding how fun and exciting it had been attending the camp, T3 reported a significantly
higher desire to participate in the camp the following year compared to T2. Likewise,
a high percentage indicated a desire for more time at the camp, which was not something we had discussed with them during the camp. The results also show that
while T3 became significantly better at differentiating between sensors and actuators,
and at explaining the MakeCode programming blocks, the average scores for the latter
were still relatively low. This inability to use related vocabulary was indeed observed
during the camp and stood in stark contrast to their actual abilities. Thus, we consider
their final projects a better indicator for the learning outcome.
Considering that the camp managed to significantly increase the number of female
participants and had a significant effect on their attitude towards technology – an attitude that for the most part matched or even surpassed that of the boys – while also providing them with a significant learning outcome, we consider the camp and the new initiatives a success. The results regarding their change in attitude are likewise in accordance with related research [15–18].
RQ1 – The Effect of the Girls-Only Team: T3 generally had a higher focus on the
social aspects of the camp in their written feedback than T2 and likewise held a signif-
icantly higher average regarding having made new friends at the camp. In addition, a
noticeably higher percentage of them also knew at least one other person who would be
attending the camp and reported a slightly higher need for this. Furthermore, while the participants' initial interest in robotics was present before the camp, the confirmation that there would be other girls at the camp and that they would not risk being the only girl at a camp full of boys was expressed as the decisive factor when deciding to attend the camp.
We likewise interpret the interviewed girls' attitude towards boys as an indicator of this and as a common phenomenon at that age.
In comparison, none of them mentioned the slight difference between the advertisement materials for T2 and T3, namely the mention of robot technology's potential to help humans and animals and the examples of prototyping materials. With this in mind, we argue
that the implementation of the girls-only team had the largest effect regarding the sig-
nificantly higher number of participating girls at the camp. Their expressed desire for a
girls-only team is also in accordance with related research [23].
RQ2 – The High Contextualisation of the Projects, the Change in Materials,
and the Effect of This: The strength and flexibility of the heavily used, yet relatively
Girls and Technology – Insights from a Girls-Only Team 131
simple input/output system became very clear when observing the wide variety of functionalities embedded in the participants' individual projects. We therefore consider this approach very efficient in letting the participants freely design the desired behaviour of their projects: what should it do, and when should it do it? Likewise, we consider it an efficient approach in letting them analyse the technologies in their surroundings with this conceptual framework in mind: how do automatic sinks, hand dryers, etc., work?
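The one-rule pattern "what should it do, when should it do it?" can be sketched in a few lines of Python. This is our illustration, not the participants' actual MakeCode programs; the function names and the LDR threshold of 100 are assumptions, chosen to mirror the "stars turn on when it's dark" (LDR/NeoPixels) project from Fig. 1:

```python
# Minimal sketch of the camp's input/output pattern: a project is one
# rule pairing "when should it do it?" (a condition on a sensor reading)
# with "what should it do?" (an actuator state). Names and the LDR
# threshold below are our assumptions, not the camp's actual programs.

def run_rule(sensor_value, condition, action, fallback):
    """Return the actuator state chosen by the rule."""
    return action if condition(sensor_value) else fallback

# "The stars turn on when it's dark" (LDR -> NeoPixels, Fig. 1):
def is_dark(light_level, threshold=100):
    return light_level < threshold

stars = run_rule(42, is_dark, action="NeoPixels on", fallback="NeoPixels off")
```

Every project listed in Fig. 1 fits this shape by swapping in a different condition (motion detected, soil too dry, sound above a level) and a different actuator state.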
Regarding the developed compendium, it had a great impact on enabling them to search for information by themselves, yet within a restricted scope. We therefore recommend this approach as an alternative to, e.g., providing them with internet access and thus the risk of finding incompatible or incomprehensible information. However, it should also be noted that while LEGO naturally affords being assembled/disassembled, this is not the case for a painting, as was also evident from the participants' strong attachment to the interactive art projects. A forced requirement to disassemble these might therefore result in dissatisfaction. In such cases, we recommend the smart home projects, which, possibly due to also being a standardised assembly kit, did not evoke the same level of affection. Overall, the participants were generally observed to be very satisfied with the contextualisation of the projects, which they found relevant in relation to both themselves (soap dispensers, interactive art) and to helping humans (soap dispensers, smart homes). In addition, the chosen materials were found to afford creativity without eliciting the feelings of inadequacy that can stem from having to scale the technical barrier that the LEGO platform can present [21, 22].
Considering this, we argue that the design of the projects, the high level of con-
textualisation and the change in materials had the largest effect in relation to keeping
them engaged and interested throughout the camp, their significant change in attitude,
learning outcome and desire to participate in the camp the following year. This is also
in accordance with previous research [19–22].
Implications for Future STEM Activities and TS: Based on the results of this case study, we argue that neither the initiative of a girls-only team nor the increased contextualisation of the projects and the change in prototyping materials can stand alone regarding voluntarily attended robotics activities. We therefore argue that they will probably need to be implemented together to obtain similar results. As a direct implication, future TS summer camps will likewise implement both. In addition, we have also created a new girls-only weekly TS class, designed around the knowledge obtained from this study. While the implementation of a girls-only team can be controversial, we also acknowledge its effect on the number of participating girls, in that it serves as a guarantee that they will not be the only girl at a camp or class otherwise full of boys.
5 Conclusion
With this case study, in which participants (n = 126) attended an educational robotics summer camp, we have shown that the implementation of a girls-only team, in combination with highly contextualised projects (automatic soap dispensers, smart homes and interactive art) and the usage of everyday materials (paint, cardboard, glue, etc.), can help attract significantly more female participants to extracurricular robotics activities. The camp also had a significant effect on the girls' attitude towards and interest in technology and their desire to return for more. We have argued that while the girls-only team was probably the main factor in attracting them, it was likely the contextualised projects and the choice of materials which kept them engaged and interested. We therefore conclude that, in relation to voluntarily attended extracurricular robotics activities, it is necessary to implement the initiatives together to obtain these results.
References
1. Teknologiskolen: Teknologiskolen (2020). https://www.teknologiskolen.dk/. Accessed 22 Jan 2021
2. Engineer the Future: Prognose for STEM-mangel 2025 (2018). engineerthefuture.dk
3. European Schoolnet: European Schoolnet's 2017 Annual Report (2018). eun.org, Brussels, Belgium
4. Wing, J.M.: Computational thinking. Commun. ACM 49(3), 33–35 (2006)
5. Atmatzidou, S., Demetriadis, S.: Advancing students’ computational thinking skills through
educational robotics: a study on age and gender relevant differences. Robot. Auton. Syst. 75,
661–670 (2016)
6. Blanchard, S., Freiman, V., Lirrete-Pitre, N.: Strategies used by elementary schoolchildren
solving robotics-based complex tasks: Innovative potential of technology. Procedia-Soc.
Behav. Sci. 2(2), 2851–2857 (2010)
7. Sáez-López, J.-M., Sevillano-García, M.-L., Vazquez-Cano, E.: The effect of programming
on primary school students’ mathematical and scientific understanding: educational use of
mBot. Educ. Tech. Res. Dev. 67(6), 1405–1425 (2019). https://doi.org/10.1007/s11423-019-
09648-5
8. Papert, S., Harel, I.: Constructionism: Research Reports and Essays, 1985–1990. Ablex Publishing Corporation (1991)
9. Wilson, B.G.: Constructivist Learning Environments: Case Studies in Instructional Design. Educational Technology Publications (1996)
10. Corbett, C., Hill, C.: Solving the Equation: The Variables for Women's Success in Engineering and Computing. AAUW (2015)
11. García-Holgado, A., et al.: European proposals to work in the gender gap in STEM: a system-
atic analysis. IEEE Revista Iberoamericana de Tecnologias del Aprendizaje 15(3), 215–224
(2020)
12. García-Holgado, A., et al.: Trends in studies developed in Europe focused on the gender gap in
STEM. In: Proceedings of the XX International Conference on Human Computer Interaction
(2019)
13. Peixoto, A., et al.: Diversity and inclusion in engineering education: looking through the
gender question. In: 2018 IEEE Global Engineering Education Conference (EDUCON). IEEE
(2018)
14. Master, A., et al.: Programming experience promotes higher STEM motivation among first-grade girls. J. Exp. Child Psychol. 160, 92–106 (2017)
15. Craig, M., Horton, D.: Gr8 designs for Gr8 girls: a middle-school program and its evaluation.
In: Proceedings of the 40th ACM technical symposium on Computer Science Education
(2009)
16. Mason, R., Cooper, G., Comber, T.: Girls get it. ACM Inroads 2(3), 71–77 (2011)
17. Weinberg, J.B., et al.: The impact of robot projects on girls’ attitudes toward science and
engineering. In: Workshop on Research in Robots for Education. Citeseer (2007)
18. Sullivan, A.: Breaking the STEM Stereotype: Investigating the Use of Robotics to Change Young Children's Gender Stereotypes About Technology and Engineering. Tufts University (2016)
19. Terry, B.S., Briggs, B.N., Rivale, S.: Work in progress: gender impacts of relevant robotics
curricula on high school students’ engineering attitudes and interest. In: 2011 Frontiers in
Education Conference (FIE). IEEE (2011)
20. Sklar, E., Eguchi, A.: RoboCupJunior — four years later. In: Nardi, D., Riedmiller, M., Sam-
mut, C., Santos-Victor, J. (eds.) RoboCup. LNCS (LNAI), vol. 3276, pp. 172–183. Springer,
Heidelberg (2005). https://doi.org/10.1007/978-3-540-32256-6_14
21. Pedersen, B.K.M.K., Marchetti, E., Valente, A., Nielsen, J.: Fabric robotics - lessons learned
introducing soft robotics in a computational thinking course for children. In: Zaphiris, P.,
Ioannou, A. (eds.) HCII 2020. LNCS, vol. 12206, pp. 499–519. Springer, Cham (2020).
https://doi.org/10.1007/978-3-030-50506-6_34
22. Borum, N., Kristensen, K., Peterssonbrooks, E., Brooks, A.L.: Medium for children’s creativ-
ity: a case study of artifact’s influence. In: Stephanidis, C., Antona, M. (eds.) UAHCI 2014.
LNCS, vol. 8514, pp. 233–244. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-
07440-5_22
23. Carmichael, G.: Girls, computer science, and games. ACM SIGCSE Bull. 40(4), 107–110 (2008)
24. Papert, S.A.: Mindstorms: Children, Computers, and Powerful Ideas. Basic Books (2020)
25. Microsoft: MakeCode. https://makecode.microbit.org/. Accessed 14 Dec 2020
26. Holst, K.: Ny bevilling giver ultra:bit vokseværk (2020). https://www.dr.dk/om-dr/ultrabit/
ny-bevilling-giver-ultrabit-voksevaerk. Accessed 14 Dec 2020
27. Teknologihuset. FireFly (2021). https://teknologihuset.dk/vare/firefly/. Accessed 20 Jan 2021
28. Conde, M.Á., et al.: RoboSTEAM-A Challenge Based Learning Approach for integrating
STEAM and develop Computational Thinking. In: Proceedings of the Seventh International
Conference on Technological Ecosystems for Enhancing Multiculturality (2019)
29. Johnson, L.F., et al.: Challenge-based learning: an approach for our time. The New Media Consortium (2009)
30. IBM. SPSS (2021). https://www.ibm.com/analytics/spss-statistics-software. Accessed 4 Jan
2021
Improved Students’ Performance in an
Online Robotics Class During COVID-19:
Do only Strong Students Profit?
1 Introduction
In spring 2020, Jacobs University Bremen responded – like almost every university [8,11] – to the CoViD-19 outbreak by a switch to online education, which is expected to have significant, long-lasting effects on higher education [1,2,22,23]. In particular, CoViD-19 has accelerated a trend that is expected to become persistent, namely that (elements of) online education will be the new normal in higher education [15].
Here, effects on a course in a robotics module are reported, which we consider to be of interest for online teaching in robotics and related disciplines in general. In particular, the online version of the course led to an unexpected increase in the grade performance of the students, while the difficulty of the exams remained at the same level. There are indications that the asynchronous online version of the course addressed some general student interests and needs [3], especially with respect to the use of video material. While learning with videos has played an increasingly important role for several years [4,14], it has recently experienced a massive
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 134–145, 2022.
https://doi.org/10.1007/978-3-030-82544-7_13
Students’ Performance in Online Robotics Teaching 135
surge. For example, YouTube ranked first for learning according to a 2018 survey among Generation Z conducted by The Harris Poll for the publisher Pearson [17]. Asynchronously provided video material hence caters to the learning preferences of current undergraduate students.
Given the unexpected increase in students' performance, it is an interesting question whether particular groups of students benefited more than others. In particular, there is the question of whether there is any correlation with general student performance, i.e., whether, e.g., strong students profit more than under-performers. Note that evidence was found in the past that online teaching may increase performance gaps [24], i.e., strong students may profit significantly more than low achievers from online learning elements.
The paper strives for two main contributions. First, it provides indications in
the spirit of evidence-based education [9,19], which can be further used in, e.g.,
meta-studies to identify general trends. Second, it provides some insights and
guidelines that can be useful for instructors of robotics courses who are interested
in using online or blended learning in the future beyond the CoViD-19 pandemic;
in particular, we motivate why asynchronous online material, including videos, can be beneficial, and we postulate that all students can benefit from this.
Table 2. The number of slides and the duration (min) of the videos of the asynchronous
online lecture part of the Spring 2020 robotics course at Jacobs University.
Table 3. The number of slides and the duration (min) of the videos for the asynchronous online explanations and solutions of the homework exercises of the Spring 2020 robotics course at Jacobs University.
videos and to engage in paper and pencil work on their own to reproduce the
derivation of presented methods and their application to specific problems. This
holds especially for the provided solutions to the homework exercises, where the
expected time to interact with the slides/video material is about 6 to 7 times
the duration of each video when properly doing all calculations step by step,
e.g., when performing extensive linear algebra calculations. In addition to the
slides and videos, a course forum was provided on Moodle [7,10], where stu-
dents were encouraged to ask questions and to engage in discussions with the
instructor as well as with each other. Furthermore, Microsoft Teams [12] support was provided for each course at Jacobs University, and the option of (synchronous) Q&A sessions in the lecture slots was also offered for the robotics course.
Fig. 1. ANOVA box plot statistics of grades by year for the Robotics Lectures. There
is a strong shift toward higher grades in the year with the online teaching (2020)
compared to the years with face-to-face teaching (2017–2019).
140 A. Birk and E. Dineva
Fig. 2. Histogram of the exam grades by year. The relative frequencies for each year
are shown.
As already mentioned in the introduction, there are indications that the asynchronous online material, especially in the form of the video material, was in line with the students' learning preferences [3]. The main question for this paper is whether some students may have profited more than others, given especially that some evidence was found in the past that online teaching may increase performance gaps [24].
Fig. 3. The Grade Point Average (GPA) of the students in the course in each year. There are no significant differences, i.e., the general performance of the students was very similar in each year.
For this analysis, the Grade Point Average (GPA) of the students who took the robotics course in the years 2017 to 2020 is used as a measure of the more general performance of the students. Figure 3 shows the average GPA for each year. While there are slight fluctuations, there are no significant differences, i.e., the average general performance of the students taking the course is about the same in each year.
Furthermore, the grade of each student is roughly proportional to the GPA of that student. This is also illustrated in Fig. 4. This roughly linear relation holds for every year. What differs is the proportionality constant for 2020, where all the grades are, so to speak, shifted towards higher performance in the course. This is more clearly illustrated in Fig. 5. The grades at Jacobs University are given on a scale from 1.00 to 5.00 in steps of 0.33, i.e., 1.00, 1.33, 1.67, 2.00, and so on. The lower the number, the better the grade, i.e., 1.00 is the best and 5.00 the worst possible grade. In Fig. 5, the histograms of the number of students per performance difference pd are shown for every year 2017 to 2020, i.e., the difference pd = cg − gpa between the course grade cg and the GPA gpa is used.
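The quantity pd is straightforward to compute. The following sketch uses made-up grades (the real data are not published here) purely to illustrate the performance difference histogrammed in Fig. 5; the cohort values are invented:

```python
# Sketch of the analysis: pd = cg - gpa per student, where grades run
# from 1.00 (best) to 5.00 (worst) in steps of 0.33. A negative pd means
# the course grade is better than the student's overall GPA. The grades
# below are invented for illustration only.
from collections import Counter

def performance_differences(course_grades, gpas):
    """pd = cg - gpa for each student, rounded to two decimals."""
    return [round(cg - gpa, 2) for cg, gpa in zip(course_grades, gpas)]

# Hypothetical 2020-style cohort: course grades shifted toward 1.00.
cg = [1.00, 1.33, 1.67, 2.00]
gpa = [1.67, 1.67, 2.00, 2.00]
pd = performance_differences(cg, gpa)
hist = Counter(pd)  # number of students per performance difference
```

A 2020-like shift shows up as the histogram mass sitting at negative pd values, while the face-to-face years cluster around zero.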
Fig. 4. A scatter plot of the grades and GPAs of the students for the four years. Overall, a student's grade tends to be proportional to his/her GPA, i.e., the course grade tends to be better, the better the student's GPA.
Fig. 5. Histograms of the number of students per performance difference between the
course grade and the GPA.
for robotics education [3] - and that this holds for all students, i.e., that also
under-performing students can achieve clear performance gains through online
teaching.
5 Conclusion
References
1. Aristovnik, A., Kerzic, D., Ravselj, D., Tomazevic, N., Umek, L.: Impacts of the
COVID-19 pandemic on life of higher education students: a global perspective.
Sustainability 12(20), 8438 (2020)
2. Bai-Yun, C.: The effects of COVID-19 on international secondary assessment
(2020). https://www.naric.org.uk/downloads/The%20Effects%20of%20COVID-
19%20on%20International%20Secondary%20Assessment%20-%20UK%20NARIC.
pdf
3. Birk, A., Dineva, E., Maurelli, F., Nabor, A.: A robotics course during COVID-
19: Lessons learned and best practices for online teaching beyond the pandemic.
Robotics 10(1), 5 (2021)
4. Buzzetto-More, N.A.: An examination of undergraduate student’s perceptions and
predilections of the use of YouTube in the teaching and learning process. Interdis-
cip. J. e-Skills Lifelong Learn. 10, 17–32 (2014)
5. Carpenter, S.K., Witherby, A.E., Tauber, S.K.: On students’ (mis)judgments of
learning and teaching effectiveness. J. Appl. Res. Mem. Cogn. 9(2), 137–151 (2020).
http://www.sciencedirect.com/science/article/pii/S2211368120300024
6. Clayson, D.E.: Student evaluations of teaching: are they related to what students
learn? A meta-analysis and review of the literature. J. Market. Educ. 31(1), 16–30
(2009). https://doi.org/10.1177/0273475308324086
7. Costello, E.: Opening up to open source: looking at how Moodle was adopted in
higher education. Open Learn. J. Open Distance e-Learn. 28(3), 187–200 (2013).
https://doi.org/10.1080/02680513.2013.856289
8. Crawford, J., et al.: COVID-19: 20 countries’ higher education intra-period digital
pedagogy responses. J. Appl. Learn. Teach. 3(1), 1–20 (2020)
9. Davies, P.: What is evidence-based education? Br. J. Educ. Stud. 47(2), 108–121
(1999). https://doi.org/10.1111/1467-8527.00106
10. Dougiamas, M., Taylor, P.C.: Moodle: using learning communities to create an open
source course management system. In: EdMedia. Association for the Advancement
of Computing in Education (AACE) (2003)
11. IAU: Regional/national perspectives on the impact of COVID-19 on higher education (2020). https://www.iau-aiu.net/IMG/pdf/iau_covid-19_regional_perspectives_on_the_impact_of_covid-19_on_he_july_2020_.pdf
12. Ilag, B.N.: Microsoft Teams overview. In: Understanding Microsoft Teams Administration, pp. 1–36. Apress, Berkeley (2020). https://doi.org/10.1007/978-1-4842-5875-0_1
13. Linse, A.R.: Interpreting and using student ratings data: guidance for faculty serv-
ing as administrators and on evaluation committees. Stud. Educ. Eval. 54, 94–106
(2017). http://www.sciencedirect.com/science/article/pii/S0191491X16300232
14. Nicholas, A.J.: Preferred learning methods of the millennial generation. Int. J.
Learn. Ann. Rev. 15(6), 27–34 (2008)
15. Pearson: The global learner survey (2020). https://www.pearson.com/content/
dam/one-dot-com/one-dot-com/global/Files/news/gls/Pearson Global-Learners-
Survey 2020 FINAL.pdf
16. Perry, R.P., Smart, J.C.: The Scholarship of Teaching and Learning in Higher
Education: An Evidence-Based Perspective. Springer, Dordrecht (2007). https://
doi.org/10.1007/1-4020-5742-3
17. The Harris Poll: Beyond Millennials: The Next Generation of Learners. Pearson, London (2018)
18. Schneider, M., Preckel, F.: Variables associated with achievement in higher educa-
tion: a systematic review of meta-analyses. Psychol. Bull. 143(6), 565–600 (2017)
19. Slavin, R.E.: Evidence-based education policies: transforming educational prac-
tice and research. Educ. Res. 31(7), 15–21 (2002). https://doi.org/10.3102/
0013189X031007015
20. Stroebe, W.: Why good teaching evaluations may reward bad teaching: on grade
inflation and other unintended consequences of student evaluations. Perspect. Psy-
chol. Sci. 11(6), 800–816 (2016). https://doi.org/10.1177/1745691616650284
21. Stroebe, W.: Student evaluations of teaching encourages poor teaching and con-
tributes to grade inflation: a theoretical and empirical analysis. Basic Appl. Soc.
Psychol. 42(4), 276–294 (2020). https://doi.org/10.1080/01973533.2020.1756817
22. QS Quacquarelli Symonds: The coronavirus crisis and the future of higher education (2020). https://www.qs.com/portfolio-items/the-coronavirus-crisis-and-the-future-of-higher-education-report/
23. QS Quacquarelli Symonds: International student survey - global opportunities in the new higher education paradigm (2020). https://info.qs.com/rs/335-VIN-535/images/QS_EU_Universities_Edition-International_Student_Survey_2020.pdf
24. Xu, D., Jaggars, S.S.: Performance gaps between online and face-to-face courses:
differences across types of students and academic subject areas. J. High. Educ.
85(5), 633–659 (2014)
Pupils’ Perspectives Regarding Robot Design
and Language Learning Scenario
Petra Karabin1(B) , Ivana Storjak2 , Kristina Cergol1 , and Ana Sovic Krzic2
1 Faculty of Teacher Education, University of Zagreb, Zagreb, Croatia
{petra.karabin,kristina.cergol}@ufzg.hr
2 Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
{ivana.storjak,ana.sovic.krzic}@fer.hr
1 Introduction
Information and communication technology (ICT) has developed rapidly over the past two decades. This development introduced modern digital materials into mainstream schooling, the beneficial usage of which quickly became well-recognized and supported by the educational system. One form of ICT that was advanced
and implemented as part of the classroom materials at the beginning of the 21st cen-
tury was the educational robot [1]. The robots used in educational research differentiate
according to their characteristics. Thus, taking into consideration the autonomy of social robots, we distinguish between the teleoperated type (the robot is controlled remotely to
achieve telepresence), the autonomous type (the robot is controlled by artificial intelli-
gence), and the transformed type (the robot utilizes both teleoperation and autonomous
control) [2]. Educational robots are also classified according to their appearance. Namely,
we differentiate the anthropomorphic robot type (the robot that has humanoid character-
istics: head, torso, limbs), the zoomorphic robot type (the robot that looks and behaves
like an animal), the mechanomorphic robot type (the robot that has an industrial appear-
ance and machine-like characteristics), and the cartoon-like robot (the robot that has
caricature-like characteristics) [3]. Moreover, as the development of educational robots
amplified the number of studies on the topic, the concept of Robot-Assisted Learning
(RAL) was introduced into the field of education research [4]. Han [4] defined RAL as
learning assisted by robots with anthropomorphized characteristics (the ability to move,
produce sounds, sense the surrounding, etc.). According to [5], applications of robots in
education include technical subjects, nontechnical subjects, second language learning,
and assistive robotics. Specific to the field of foreign language learning is the concept
of Robot-Assisted Language Learning (RALL), i.e. using robots in language instruction
[6]. Most of the research considering RALL has been carried out in Asia (South Korea, Taiwan, and Japan), while in Europe such studies are only beginning to emerge [3]. The results
of previous research [7, 8] showed that pupils in ELT classrooms felt quite motivated
and satisfied while using a robot as the educational material, but some research [9, 10]
found that the pupils' interest in using robots decreased over time. In Croatia, robots are included in the curriculum only in the scope of technical subjects, namely Informatics and Technical Education [11], and research regarding robot usage in primary schools is in its early stages [1]. To tackle the issue of interest, in earlier studies we focused on future teachers' attitudes towards using robots as educational tools [12] and towards using robots with appropriate activities in everyday lessons when teaching the native language [13]. On the
other hand, this study aims to explore the perspective of foreign language learners (age
8–16) on using robots in the ELT classroom.
2 Methodology
This pilot study was conducted during the summer robotics camp “Petica 2020”. The camp was held on a rural estate and lasted for four days. The schedule included learning activities, i.e. robotics workshops, in the morning, recreational activities in the afternoon, and social activities in the evening. The duration of the learning activities was six hours per day with a break in the middle. According to their background knowledge, pupils were divided into three groups: beginners (M = 6, F = 1), experienced (M = 7, F = 5), and high schoolers (M = 3, F = 2). For beginners and experienced, progression
was guided by using worksheets in the Croatian language where programming tasks
were scaffolded. All the programming interfaces were used in the English language,
which can also serve as an ELT classroom vocabulary-building tool. Beginners used the LEGO Mindstorms EV3 kit to assemble and program a robot to race, rescue animals, solve a maze, talk, alarm about intruders, show cartoons, recognize colors, and follow a black line. The robot design was adapted according to the objectives of the tasks. The experienced group used LEGO SPIKE Prime to drive a slalom, clean scattered bricks, solve a maze, make sounds, recognize colors, follow a black line, and use gears. This
group assembled more complex artifacts such as robotic arms, soccer players, and sumo
robots. High schoolers used LEGO SPIKE Prime and REV Robotics metal elements
to build and control more advanced robots. REV was used because it is suitable for
hands-on learning of engineering principles [14].
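A task such as "follow the black line" reduces to a simple sense-decide-act loop. The sketch below is our illustration of the underlying idea (a bang-bang controller riding the line's edge), not the pupils' actual EV3/SPIKE programs, and the reflectance threshold is an assumption:

```python
# One decision step of a two-motor line follower: the color sensor rides
# the edge of the black line. Bright readings mean the robot has drifted
# onto the floor; dark readings mean it has drifted onto the line. The
# threshold is an assumed mid-point between floor and line reflectance.

def line_follower_step(reflectance, threshold=50):
    """Return the steering command for one control step."""
    if reflectance > threshold:   # bright: on the floor, steer back to line
        return "steer_left"
    return "steer_right"          # dark: on the line, steer back to the edge
```

Calling this in a loop with fresh sensor readings makes the robot zig-zag along the line's edge, which is the behaviour the scaffolded worksheets guide the pupils to program in blocks.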
148 P. Karabin et al.
As the participants worked on solving the tasks, two researchers conducted unstruc-
tured interviews with them to direct their attention to the possibilities of using the tech-
nology they were exploring in the ELT classroom. The idea behind this approach was
to investigate pupils’ perspectives so that further application of robots can be adjusted
to the needs of the ELT classroom. Based on their feedback, a set of 12 questions was
formed aiming to uncover specific and more elaborated possibilities and pupils’ pref-
erences for using robots in an ELT classroom. The questionnaires were distributed at
the end of the robotics camp. As one camp participant did not take part in the study, the sample consisted of 8 females and 15 males from different parts of Croatia. Questions
were related to robot design and learning scenario design, as seen in Table 1. To investigate preferable robot design, two approaches were tested: the first based on a preference for a real robot design introduced at the camp (Q1), and the second based on an imaginary robot whose design depended on the pupils' vision (Q3, Q4, Q5, Q8). Characteristics
of a preferable learning scenario design concerning the scope of the tasks, nature of the
content, assembly and programming responsibility, duration, and working practice were
investigated. To give context and promote creative thinking the following sentence was
used as the introduction to the questionnaire: “Imagine you are in an English class and
you learn about fruits and vegetables, or you learn concepts and words that can be found in a store.” As all the pupils were already familiar with the content from the input, it
was expected that it would easily prompt their ideas about possible robot design and
learning scenario.
Table 1. Questionnaire
Robot design
1 Which robot design that you assembled at the camp could be used in an English class?
3 Design a different robot that you could use in an English class when learning about fruits and
vegetables or store-related words. Draw that robot
4 Which sensors are on the robot?
5 How many motors does the robot have?
8 Which parts would you use to assemble the robot (e.g. Fischertechnik, STEMI, LEGO, wood,
metal)?
Learning scenario design
2, 6 What would be the task for the pupils with this robot?
7 Would you learn new content in this class, or would you revise what you have previously learned?
9 Would the teacher assemble the robot in advance, or would the pupils assemble it in class?
10 Would the teacher program the robot in advance, or would the pupils program it in class?
11 How long would such activity last?
12 How many pupils would use one robot?
Pupils’ Perspectives Regarding Robot Design and Language Learning Scenario 149
1 P9 stands for the answer provided by participant No. 9. The abbreviations used to represent the
other participants are P1, P2, …, and P23.
150 P. Karabin et al.
pupils would need to recognize or read a picture or a word from the screen.”. Listening
tasks involve listening to a target lexis-related question that the robot presents and the
pupils need to react to. For instance, P22 stated that the pupils would need to help the
robot reach a goal by replying to the robot’s questions using a remote control.
Fig. 1. Pupils’ robot design. From left to right, top to bottom: recognition of fruits and vegetables
from pictures; pupils answer questions and help the robot to follow the line; robotic arm that
puts names of fruits and vegetables in different boxes based on pupils’ input; robot assigns words
to translate; and pupils control the robot to sort fruits and vegetables.
An often-cited example of a speaking task involves having a robot perform a movement
it has been pre-programmed to do. The movement would involve interpreting target
vocabulary and performing an activity connected to it. P7 suggested: “The pupils would
teach the robot to move and to recognize fruits and vegetables by color. When the robot
finds fruits or vegetables, it would need to stop and name the fruit or the vegetable.”
A similar activity was carried out in [6] where the pupils gave navigation commands to
the LEGO Mindstorms robot. The robot was required to reach a colored target, sense
the color of the target, and shoot a ball into a matching goal. These two examples illustrate
the Total Robo Physical Response (TRPR), since the robot moves according to the users’
commands. We propose the concept of TRPR as a Total Physical Response (TPR [17])
in robot-assisted learning where the robot is the physical doer of an activity, operated
and controlled by a child. On the other hand, [18] presented
the case of TPR, where pupils followed the commands (such as turning around, putting
the hands up, etc.) presented by a humanoid robot. Finally, writing practice has also been
suggested, as a robot would require the pupils to provide the correct spelling of the
target vocabulary. P23 shared: “The pupils would need to pronounce the word and spell
it in English.” Some pupils provided ideas for practicing grammar and translation. P11
proposed that the pupils need to translate the words that the robot pronounces – if they
do it correctly, they score a point.
According to the pupils’ answers, robots should be used for learning new content (12),
for revising previously learned content (9), or for both, depending on the robot envisioned
(2).
Assembly is well accepted among pupils: a majority (11) considered that they should
assemble the robot during the ELT class or at least upgrade it (7). Only three pupils wrote
that the robot should be already assembled. One pupil did not answer, and one pupil wrote
that it depends on the motivation of the pupils in the class.
Regarding programming responsibility, the majority would like to create their own
program during the lecture (16), while some (5) prefer that the teacher does it beforehand.
Pupils prefer working with robots for a longer period as seen in the distribution of
answers: the activity with the robot should last less than half an hour (3), half an hour
(5), the whole hour (8), or more than one hour (7).
Lastly, the number of pupils per robot was the question with the most divergent
answers: a pupil should have her/his own robot (4), a maximum of two pupils per robot
(5), a maximum of three pupils per robot (6), and four or more pupils per robot (7).
The highest number noted was 10 pupils per robot. According to teachers, robots should be used
individually, or in small groups to experience teamwork [19], and based on the answers,
pupils in this study recognized the benefit of working in small groups with robots.
In this paper, we presented the results of a pilot study concerning pupils’ perspectives,
that is, their ideas and preferences regarding the utilization of robots in an ELT classroom. Pupils
envisioned robots, described and drew them, and decided about sensors (mostly ultrasonic
and color) and building materials (mostly LEGO kit pieces or metal). Analysis of
their input regarding a given context showed that the robots could be used for teaching all
four language skills: reading, listening, speaking, and writing; should have the appearance
of an anthropomorphic or mechanomorphic robot type; and could sort or transport
objects. Pupils would prefer to assemble and program robots in an ELT classroom by
working individually or collaboratively in small groups for a longer period. The results
can be used as a starting point for implementing a robot as an educational tool into the
ELT classroom, namely in designing tasks and making an appropriate choice of a robot
for a given context. In the future, we plan to organize camps annually so that pupils
can develop their programming skills while using robots in an informal environment but
also learn English utilizing robots. Research should be further conducted for specific age
groups and different contexts to correspond with the prescribed English language edu-
cational materials, with an aim of conceptualizing the model of a robot as an educational
tool in the ELT classroom in Croatia.
Acknowledgment. This work has been supported by the Croatian Science Foundation under the
project UIP-2017-05-5917 HRZZ-TRES and by the Croatian Robotic Association. The work of doctoral
students Petra Karabin and Ivana Storjak has been fully supported by the “Young researchers’
career development project – training of doctoral students” of the Croatian Science Foundation,
DOK-2018-09 and ESF-DOK-2018-01.
References
1. Nikolić, G.: Robotska edukacija “robotska pismenost” ante portas? Andragoški glasnik 20(1–
2), 25–57 (2016)
2. Han, J.: Emerging technologies: robot assisted language learning. Lang. Learn. Technol.
16(3), 1–9 (2012)
3. Randall, N.: A survey of robot-assisted language learning (RALL). ACM Trans. Hum.-Robot
Interact. 9(1), Article 7, 1–36 (2019)
4. Han, J.: Robot-aided learning and r-learning services. Hum.-Robot Interact. 247–266 (2010)
5. Mubin, O., Stevens, C.J., Shahid, S., Al Mahmud, A., Dong, J.J.: A review of the applicability
of robots in education. J. Technol. Educ. Learn. 1(209–0015), 13 (2013)
6. Mubin, O., Shadid, S., Bartneck, C.: Robot assisted language learning through games: a
comparison of two case studies. Aust. J. Intell. Inf. Process. Syst. 13(3), 9–14 (2013)
7. Lee, S., et al.: On the effectiveness of robot-assisted language learning. ReCALL 23(1), 25–58
(2011)
8. Hong, Z.-W., Huang, Y.-M., Hsu, M., Shen, W.-W.: Authoring robot-assisted instructional
materials for improving learning performance and motivation in EFL classrooms. Educ.
Technol. Soc. 19(1), 337–349 (2016)
9. Kanda, T., Hirano, T., Eaton, D., Ishiguro, H.: Interactive robots as social partners and peer
tutors for children: a field trial. J. Hum.-Comput. Interact. 19(1–2), 61–84 (2004)
10. You, Z.-J., Shen, C.-Y., Chang, C.-W., Liu, B.-J., Chen, G.-D.: A robot as a teaching assistant
in an English class. In: Proceedings of the 6th International Conference on Advanced Learning
Technologies, Kerkrade, pp. 87–91. IEEE (2006)
11. Kurikulumi–Škola Za Život. https://skolazazivot.hr/kurikulumi-2/. Accessed 30 Mar 2021
12. Karabin, P., Cergol, K., Mikac, U., Sović Kržić, A., Puškar, L.: Future teachers’ attitudes on
using robots as educational tools. In: Proceedings of the Symposium Trends and Challenges
in FL Education and Research, International Scientific and Art Conference Contemporary
Themes in Education, In press
13. Milašinčić, A., Anđelić, B., Puškar, L., Sović Kržić, A.: Using robots as an educational tool
in native language lesson. In: Merdan, M., Lepuschitz, W., Koppensteiner, G., Balogh, R.,
Obdržálek, D. (eds.) RiE 2019. AISC, vol. 1023, pp. 296–301. Springer, Cham (2020). https://
doi.org/10.1007/978-3-030-26945-6_26
14. REV Robotics. https://www.revrobotics.com/. Accessed 30 Mar 2021
15. Bruni, F., Nisdeo, M.: Educational robots and children’s imagery: a preliminary investigation
in the first year of primary school. Res. Educ. Media 9(1), 37–44 (2017)
16. Storjak, I., Puškar, L., Jagušt, T., Sović Kržić, A.: First steps into STEM for young pupils
through informal workshops. In: 2020 IEEE Frontiers in Education Conference (FIE), pp. 1–5,
Uppsala. IEEE (2020)
17. Asher, J.J.: The total physical response approach to second language learning. Modern Lang.
J. 53(1), 3–17 (1969)
18. Chang, C.-W., Lee, J.-H., Chao, P.-Y., Wang, C.-Y., Chen, G.-D.: Exploring the possibility
of using humanoid robots as instructional tools for teaching a second language in primary
school. Educ. Technol. Soc. 13(2), 13–24 (2010)
19. Serholt, S., et al.: Teachers’ views on the use of empathic robotic tutors in the classroom. In:
23rd IEEE RO-MAN, Edinburgh, pp. 955–960. IEEE Press (2014)
Teachers in Focus
Enhancing Teacher-Students’ Digital
Competence with Educational Robots
Abstract. The digital competence of an average teacher is still behind the level
needed for teaching in technology-rich classrooms. The teacher-students need
skills and knowledge for enhancing their lesson plans and activities with technol-
ogy. Educational robots can be used to improve teacher-students’ digital compe-
tence to support their problem solving, communication, creativity, collaboration,
and critical thinking skills. Together with STEAM kits, educational robots pro-
vide good opportunities to teach subjects like mathematics, chemistry, biology,
and physics. The aim of this case study is to find out whether a robot-enhanced training
course encourages the development of teacher-students’ digital competence, and which
educational robots or STEAM kits teacher-students prefer to use in learning activities.
We used the European Digital Competence Framework for Educators to assess
teacher-students’ competence in four areas (digital resources, teaching and learning,
empowering learners, facilitating learners’ digital competence) before and
after a robot-enhanced training course. Our results indicate that activities with
educational robots (hands-on experiments and designing innovative lesson plans)
supported and developed teacher-students’ digital competence. In conclusion,
we recommend using educational robots or other STEAM kits when developing
teachers’ digital competence.
1 Introduction
Rapid digital transformation is reshaping society, the labor market and the future of
education and training [1]. Thus, being digitally competent is a key requirement for a
21st century teacher [2, 3]. The vast and growing array of digital technologies (devices,
systems, electronic tools, and software) forces us to think more about people, their
mindsets, attitudes and skills. Although teachers must be digitally competent (i.e., able
to effectively integrate technology in classrooms and to empower learners to become
creators of technology etc.), studies [4, 5] show that while teachers have a positive attitude
towards using technology in the classroom, their digital competence is still behind the
level needed for teaching in technology-rich classrooms. This leads to situations where
schools and kindergartens are equipped with ICT tools and innovative STEAM kits
(robots, sensors, electronic sets) but left unused, as teachers are unable to integrate these
sets into their daily teaching practices [5]. This could originate from a lack of time
and support, or from inadequate technological education during teachers’ studies
(a poor digital skill level).
According to EU Kids Online [6], many secondary education students are entering
universities, professional education or the labor market with extraordinarily relevant
digital skills. However, there is still too large a percentage of learners with a low level of
digital and technological literacy skills. In this context, an education system that does not
provide learners with the adequate background and familiarity with technology-based
(digital) tools, will fail to prepare them for their future professional activities in our
digital societies. The research problem here is twofold: on the one hand, the teacher-
students’ low digital competence, and on the other hand, a lack of courage to integrate
educational robotics into educational activities.
In order to cope with these problems, Estonia has designed systemic strategies
to enhance the attractiveness of the teaching profession. Implementing robots in teacher
training courses helps to engage teacher-students in learning STEM subjects, introduce
programming, and raise awareness that using robots in classrooms is not difficult [7, 8].
The aim of this case study is (a) to find out which digital competences can be taught
through educational robots, and (b) to examine which educational robots and STEAM
kits teacher-students preferred for developing their digital skills.
Based on the research problem and research aims, the following research questions
were formulated:
1. Which of teacher-students’ digital competences differ before and after a training course
with STEAM K12 open educational resources?
2. What type of educational robots and STEAM kits do teacher-students consider
most appropriate for developing their digital competence?
We wanted to find out if using the STEAM K12 collection for solving experiments
on their own, adapting these OERs to the classroom environment, and using the adapted
OERs to conduct robotics workshops for students would have a significant influence on
teacher-students’ digital competence.
Technological advancements eventually find their way into the education sector. Stake-
holder pressure and necessities of the labor market force the education sector to embrace
these advancements with the goal of making learning and teaching more effective. Using
technology to enhance learning is expected to improve some or all aspects of the learning
process without additional overhead.
The term Educational Robots (ER) refers to the use of robots in education: robots
as learning objects, robots as learning tools, and robots as learning aids. Robots as
learning tools, for instance, means using robots to teach different subjects, such as science, math, or physics
[11]. Surveys show [12, 13] that educational robots should be seen as tools to support
and foster STEAM skills such as problem solving, communication, creativity, collab-
oration and critical thinking. Moreover, using educational robots is an innovative way
to increase the attractiveness of science education [14]. Despite the positive results of
using educational robots in the learning process, there are some crucial obstacles, which
have to be pointed out. Studies show [15, 16] that educational robotics technology is
still in its developmental stage and there is a lack of appropriate hardware platforms and
software frameworks. There are also rather few integrated didactic concepts available,
making it difficult for teachers to design lesson plans and to share materials and best
teaching practices. We can say that many teachers do not have a strong enough under-
standing of and background knowledge in this area [17]. Therefore, applying educational
robots in the context of teacher training is essential.
According to DigCompEdu [10], Estonian national documents [8, 18] and quali-
fication standards [19], teachers must be digitally competent. Digital pedagogy is an
important strategic goal in the Estonian Education Strategy for 2021–2035 [20]. Thus, edu-
cators have to know the trends, opportunities, risks and methodologies related to new tech-
nologies, and they have to apply these technologies in a purposeful way. Defining digital
competence as a key competence across Europe is a consistent approach, based on Dig-
CompEdu [10]. This framework aims to describe how digital technologies can be used to
enhance and innovate education and training. The majority of European education systems
have included learning outcomes related to digital competence [7].
3 Methodology
This research was designed as a case study of master’s degree teacher-
students who participated in three different courses. The first course, “Educational Tech-
nology in Learning Process”, with a volume of 3 ECTS (European Credit Transfer
System), equal to 78 h, was conducted by one of the researchers. Another researcher con-
ducted the other two courses: an interdisciplinary course “Using robots to save the world”,
158 E. Heinmäe et al.
with the volume of 6 ECTS equal to 156 h, and a “Robot-integrated learning in the kinder-
garten and primary” course with the volume of 6 ECTS equal to 156 h. The courses were
conducted during the autumn semester of 2020.
Our study included a semi-structured online survey based on the DigCompEdu Check-
In Self-Reflection Tool [21]. The check-in self-reflection tool had previously been translated
by HITSA. Thus, some descriptions of the competences (compared with DigCompEdu)
may differ a little in wording.
We decided to use this self-reflection tool as a method that allows the participants
to assess their digital competence (17 items) in four areas (see Table 1). This self-
reflection tool is also mainly used in teacher training courses at Tallinn University. Digital
competences in areas 1 and 4 were not measured because these areas could not be
tested in the context of these courses.
The participants assessed their expertise levels in 17 digital competences. Each com-
petence could be evaluated from Level 0 (the lowest level) to Level 5 (the highest level).
They were also asked to give examples (via open-ended questions) of how they
apply these skills in their everyday practices.
We also used level (0–5) descriptions similar to DigCompEdu, with some mod-
ifications: Level 0 (No use), Level 1 (Beginner), Level 2 (developing Expert),
Level 3 (Expert), Level 4 (developing Leader), Level 5 (Leader). According to the
DigCompEdu explanations, Level 0 means that educators have had very little contact
with digital technologies; Levels 1 and 2 mean that educators assimilate new information
and develop basic digital practices; Levels 3 and 4 mean that educators apply, further
expand and reflect on their digital practices; and Level 5 means that educators pass on
their knowledge, critique existing practice and develop new practices.
of data for research purposes and participation was voluntary. The questionnaire was
personalized, and only those teacher-students who filled in the survey twice were
included in the sample. In total, the population consisted of 55 teacher-students (across
the three courses) and the sample was 36 teacher-students.
Fig. 1. Examples of robots used in the workshops (first row: LEGO WeDo, second row: Sphero,
third row: EasiScope microscope and Dash).
The participants (n = 36) assessed their digital competence before and after the
course, which included activities with educational robots (Edison, Ozobot, LEGO WeDo
2.0, LEGO EV3, LEGO Spike Prime, mTiny, Sphero, etc.), STEAM kits (Makeblock
Neuron, Little Bits, thermal camera Flir One, different sensors of Vernier and Globisens,
TTS EasiScope microscope), and virtual reality solutions. One of their tasks was to design
lesson plans based on a freely chosen educational robotics or STEAM platform. These
lesson plans were used to gather information about which of these educational robots
and STEAM kits the participants preferred to use in the learning and teaching context.
Depending on the COVID-19 restrictions, the three courses were conducted first in
person, then in hybrid form, and finally as distance learning courses. The participants
worked as teams of 3–5 members. At first, they were familiarized with the STEAM K12
OERs, and they solved robotics experiments in teamwork. Next, each group chose 1–2
experiments to be used as the basis for a workshop. The teacher-students adapted the
material to the age group that participated in the workshops. Depending on the COVID-
19 restrictions, the participants of two courses conducted their workshops at school,
kindergarten or with their own children. As the swimming pools were open, the
“Using robots to save the world” mission with the Sphero robot was conducted in a
swimming pool (see Fig. 1). The participants in the third course simulated children of
the appropriate age and conducted their classes at the university.
As not all participants’ schools, kindergartens and homes had the necessary
equipment (i.e. educational robots, tablets), it was borrowed from the university when
needed. Teacher-students were also allowed to prepare their materials in the university’s
EDUSPACE research laboratory.
4 Results
Data was analyzed with IBM SPSS Statistics 27, using paired sample t-tests and descrip-
tive statistics. Paired sample t-tests showed that all digital competences (in the four areas)
improved statistically significantly (see Table 2). In order to put our findings into
context, we analyzed the participants’ answers to open-ended questions about using
educational robots in learning activities.
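The paired-sample test used here compares each participant with her/his own earlier rating, so it operates on per-participant before/after differences. As an aside, the same test the authors ran in SPSS can be sketched in Python with SciPy; the scores below are purely illustrative placeholders, not the study’s data:

```python
from scipy import stats

# Illustrative before/after self-assessment levels (0-5) for one
# competence item; placeholder values, NOT the study's data.
before = [0, 1, 2, 0, 1, 3, 2, 1, 0, 2, 1, 0]
after = [2, 3, 3, 2, 2, 4, 3, 3, 1, 4, 2, 2]

# The paired-sample t-test works on the per-participant differences,
# testing whether their mean differs from zero.
t_stat, p_value = stats.ttest_rel(after, before)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

With the study’s n = 36 the call is identical; SPSS’s paired-samples t-test reports the equivalent t statistic and two-tailed p value.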
The frequency analysis showed that before the course (including activities with
educational robots), 26 teacher-students assessed their digital resources competences (2.1,
2.2, 2.3, and 2.4) at Level 0 (No use). After the course, only 6 of the 36 teacher-students
considered their digital skills in selecting, creating and modifying, managing,
and protecting digital resources relatively low. In addition, before the ICT-enriched
course only 6 teacher-students saw themselves as Leaders (Level 5) in the context of
digital resources; after the course, 16 teacher-students evaluated themselves as Leaders (Level 5).
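The frequency analysis behind these tallies amounts to counting how many participants chose each level, before and after. A sketch in Python, where the Level 0 and Level 5 counts follow the figures reported above while the remaining levels are arbitrary fillers for illustration:

```python
from collections import Counter

# Digital resources self-assessments (levels 0-5) for 36 teacher-
# students. The Level 0 and Level 5 counts match the reported tallies
# (26 -> 6 at Level 0, 6 -> 16 at Level 5); middle levels are
# arbitrary illustrative values.
before = [0] * 26 + [3] * 4 + [5] * 6
after = [0] * 6 + [2] * 8 + [3] * 6 + [5] * 16

# Tally how many participants selected each level.
for label, levels in (("before", before), ("after", after)):
    freq = Counter(levels)
    print(label, [(level, freq[level]) for level in range(6)])
```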
According to the results, before the course the majority of teacher-students rated
their digital competences, especially communication skills (6.2), self-regulated learning
(3.4) and content creation competence (6.4), very low.
We found that in all areas (digital resources, teaching and learning, empowering
learners, facilitating learners’ digital competence), teacher-students initially assessed their
skills mostly at No use (Level 0), whereas after the course their self-assessment had significantly
improved, reaching Level 2 (developing Expert) and Level 3 (Expert).
Surprisingly, however, some teacher-students who had assessed their digital compe-
tence in teaching (3.1), guidance (3.2) and self-regulated learning (3.4) quite high before
the course revised their assessments and reported lower values after the course.
We also used open-ended questions to find out teacher-students’ opinions of and expla-
nations for using educational robots in learning activities. The answers suggest that suc-
cessful exploitation of robots depended on the availability of OER lesson design templates.
The participants pointed out that using educational robots with researcher-supplied OER
materials (STEAM K12) was novel and exciting: “I didn’t know anything about this kind
of materials and I hope this type of content will be supplemented and further developed”
(S1). The learners mentioned that STEAM K12 resources were helpful and contributed
to planning and implementing learning activities with robots. In their opinion, they
felt more confident and supported when they were able to use these previously created
collections.
Based on the lesson plans that the participants designed, and on answers to the open-
ended questions in the survey, we studied what types of educational robots and STEAM
kits they preferred to use in learning activities. Results show that the most popular
educational robots were Ozobot, LEGO WeDo 2.0, and Sphero. Of the STEAM kits, only
the handheld digital microscope called EasiScope was chosen. The teacher-students did
not use any standalone sensors in their lesson plans or activities with pupils.
The tools selected by the participants are designed with the needs of kindergarten
children in mind: they are highly reliable, easy to use, and easy to program in a
graphical programming environment. The Ozobot robot can be used without additional
devices, using only its pre-programmed line following and color-code programming
features. The Sphero robot can be controlled in a remote-control mode, without any
programming at all. We would like to emphasize the fact that Sphero is suitable for
introducing the working principles of robots, meaning that learner engagement with
robotics could begin before learning programming and coding. In addition, programming
alone does not cover all digital competence areas.
Building a LEGO WeDo 2.0 robot is not very time consuming – constructing a typical
model takes only 15–20 min, its programming language is minimalist, and when using
examples, the robot can be made to move easily. As these robots are relatively common in
Estonian schools and kindergartens, and as the majority of our participants are already
working in these institutions, we can assume that the teacher-students preferred the
devices that they had already seen in their workplaces and perceived as easy to use.
It is possible that these results could have been different if the participants had had
prior knowledge of all the robotics platforms that were used during the course. Some
teacher-students indicated that for them, robots and technology were quite unfamiliar
topics, but after the course, they had gathered more confidence and were planning to
implement educational robots in their teaching practices.
The aim of this case study was to discover the areas in which teacher-students’ digital com-
petences improve after a robot-enhanced training course, and to identify the educational
robots and STEAM kits that teacher-students preferred to use for developing their digital
skills. We theorized that involving educational robots in teacher training courses would
develop teacher-students’ digital competence in four DigCompEdu areas. Using the Dig-
CompEdu check-in list (an Estonian translation with a few modifications), we evaluated
in total 17 competences. The results showed that learners’ competence advanced in all
4 DigCompEdu areas, with significant improvements in digital resources (2.1, 2.2, 2.3,
2.4), teaching and learning (3.1, 3.2, 3.4) and facilitating learners’ digital competence
(6.2). Our results suggest that educational robots can be used as helpful tools to facilitate
developing learners’ digital competence in the context of higher education [14].
As we had expected, the majority of the participants rated their digital competence
much higher after the course. These results can be explained by the fact that the courses
were very practical, with hands-on tasks. In addition, the COVID-19 situation forced
teacher-students to communicate more online and to take responsibility for their own
learning because there was less in-person communication and direct guidance by the
lecturer.
Our findings showed that the participants rated their competence after the training
course predominantly at Level 2 (developing Expert) and Level 3 (Expert). The reason for
this could be teacher-students’ preference for easier-to-handle robots and STEAM kits
to plan their lesson activities. The use of these robots (like Ozobot etc.) did not require
top-level digital competence, and the participants felt more confident and comfortable
with these tools. It is important to emphasize that with novice teachers (both didactically
and technologically), it’s wise to start with simpler robots and sensors.
Our results indicate, similarly to previous findings [22], that some teacher-students
evaluated their digital competence in the context of teaching and learning much lower
after the ICT-enriched course. This could be a result of a reality check for their existing
knowledge after reading the descriptions of the DigCompEdu check-in items. Learners,
when assessing their digital competence, had to understand the descriptions of all items.
Thus, reading the descriptions could give a clearer understanding of what is being
asked and what they have to assess. As a result, learners could discover what knowledge
and competence they are lacking. Thus, understanding the DigCompEdu model and
how to analyze digital competence is a very important skill that needs to be taught and
supported in teacher-training courses.
The case study has some limitations. One of the weaknesses is a non-representative
sample that was rather small. This does not allow us to make broader generalizations
about the findings. Also, teacher-students’ activities during the course were assessed by
the lecturer, and learners got additional feedback from their peers. This feedback may
have influenced the teacher-students’ self-perception of their digital competence.
We believe that additional research, especially qualitative research, is needed on the
topic. Valuable data can be gathered from, for instance, interviews and observations
with teacher-students in order to explore how they use educational robots in real life
situations with their pupils, and how they use STEAM kits to support and enhance
pupils’ digital competence. Future studies should also focus on pre-service and in-service
teachers’ experiences with educational robots in the context of digital competence. Also,
we plan to carry out an experiment with test and control groups, with various settings,
in order to examine how STEAM activities change learner digital competence.
In conclusion, we believe our findings provide valuable input for the wider topic
of how to support teachers’ professional development in the ICT field in general, and in the
STEAM field specifically, by using educational robots.
References
1. The Digital Education Action Plan (2021–2027): Resetting education and training for the digi-
tal age. https://ec.europa.eu/education/education-in-the-eu/digital-education-action-plan_en.
Accessed 21 Dec 2020
2. OECD: The Future of Education and Skills: Education 2030. OECD Publishing, Paris (2018)
3. All digital webpage. https://all-digital.org/all-digital-annual-report-2018-is-out/. Accessed 08
Nov 2020
4. OECD: TALIS 2018 Results (Volume II): Teachers and School Leaders as Valued Profes-
sionals, TALIS. OECD Publishing, Paris (2020)
5. Praxis Homepage. http://www.praxis.ee/en/works/ict-education-in-estonian-schools-and-kin
dergartens/. Accessed 10 Jan 2020
6. Smahel, D., et al.: EU Kids Online 2020: survey results from 19 countries. EU Kids Online
(2020)
7. Eurydice: Digital Education at School in Europe. Eurydice Report. Publications Office of the
European Union, Luxembourg (2019)
8. The Estonian Lifelong Learning Strategy 2020. Ministry of education and research, Tallinn
(2014)
9. STEAM K12 - open education resource. https://e-koolikott.ee/kogumik/28660-STEAM-K12.
Accessed 10 Jan 2020
10. Punie, Y., Redecker, C.: Digital Competence Framework for Educators: DigCompEdu.
Publications Office of the European Union (2017)
11. Angel-Fernandez, J.M., Vincze, M.: Introducing storytelling to educational robotic activities.
In: IEEE Global Engineering Education Conference (EDUCON) 2018, pp. 608–615, Tenerife
(2018)
12. Alimisis, D.: Educational robotics: open questions and new challenges. Themes in Sci.
Technol. Educ. 6(1), 63–71 (2013)
13. Harris, A., de Bruin, L.R.: Secondary school creativity, teacher practice and STEAM educa-
tion: An international study. J. Educ. Change 19(2), 153–179 (2017). https://doi.org/10.1007/
s10833-017-9311-2
14. De Gasperis, G., Di Mascio, T., Tarantino, L.: Toward a cognitive robotics laboratory based on
a technology enhanced learning approach. In: Vittorini, P., et al. (eds.) International Confer-
ence in Methodologies and Intelligent Systems for Techhnology Enhanced Learning, AISC,
vol. 617, pp. 101–109. Springer Nature, Switzerland (2017).
15. Gerecke, U., Wagner, B.: The challenges and benefits of using robots in higher education.
Intell. Automation Soft Comput. 13(1), 29–43 (2007)
16. Gorakhnath, I., Padmanabhan, J.: Educational robotics in teaching learning process. Online
Int. Interdiscip. Res. J. 7(2), 161–168 (2017)
17. Leoste, J., Heidmets, M.: Factors influencing the sustainability of robot supported math
learning in basic school. In: Silva, M.F., et al. (eds.) ROBOT’2019: Fourth Iberian Robotics
Conference, AISC 1092, p. 443−454, Springer Nature, Switzerland AG (2019)
18. National curriculum for basic schools. https://www.riigiteataja.ee/en/eli/524092014014/con
solide. Accessed 10 Jan 2020
19. Occupational Qualification Standards: Teacher, level 7. https://www.kutseregister.ee/ctrl/en/
Standardid/vaata/10719336. Accessed 10 Jan 2020
20. Estonia Education Strategy 2021–2035. https://www.hm.ee/en/activities/strategic-planning-
2021-2035/education-strategy-2035-objectives-and-preliminary-analysis. Accessed 10 Jan
2020
21. The Check-In Self-Reflection Tool. https://ec.europa.eu/jrc/en/DigCompEdu/self-assess
ment. Accessed 10 Jan 2020
22. Abbiati, G., Azzolini, D., Piazzalunga, D., Rettore, E., Schizzerotto, A.: MENTEP Evaluation
Report, Results of the field trials: The impact of the technology- enhanced self-assessment
tool. TET-SAT) - European Schoolnet, FBK- IRVAPP, Brussels (2018)
Disciplinary Knowledge of Mathematics
and Science Teachers on the Designing
and Understanding of Robot Programming
Algorithms
Abstract. This chapter presents the results of a study aimed at identifying the
disciplinary knowledge that pre-service mathematics and science teachers exhibit
while understanding and designing a programming algorithm. For this, a block-
programming experience with Arduino UNO boards was designed, and an interpretative
content analysis of the data recorded during the experience was carried out. The
results show that both groups of pre-service teachers exhibit explicit as well as
implicit basic disciplinary knowledge. This study suggests that science teachers can
fluently guide students in making explicit the relationship between algorithms and
disciplinary knowledge.
1 Theoretical Background
The current technological revolution (known as the fourth industrial revolution - 4IR)
demands an education that prepares the new generations for challenges such as artificial
intelligence, rapid development of global communications and maximum data dissemi-
nation (Carmona-Mesa et al. 2021; Mpungose 2020; Scepanovic 2019). Faced with these
challenges, educational proposals that highlight the role of programming as an instru-
ment to face this technological revolution have been consolidating. These initiatives focus
on computational thinking as a methodological resource or object of study within the
framework of STEM education (Lee et al. 2020; Villa-Ochoa and Castrillón-Yepes 2020).
They enhance interdisciplinary educational processes based on a broad understanding
of technology in which other disciplines are connected and required (Carmona-Mesa
et al. 2020; Weintrop et al. 2016).
Benton et al. (2017) and Bellas et al. (2019) analyze programming training and its
integration with mathematics through robotics in the context of primary education; sim-
ilarly, Santos et al. (2019) explore the integration of programming with natural sciences.
These studies highlight that: (i) although the contributions of robotics in understanding
the notion of algorithm (a core concept in programming) stand out, its relationship with
other disciplinary knowledge requires further investigation; and (ii) teachers have a key
role to make explicit the utility of algorithms in other disciplines, so it is necessary to
expand their training and confidence to guide experiences in programming based on
robotics.
International research points out that the possibilities of robotics are not fully
exploited in the teaching and learning of specific disciplines, because teachers do not
fully identify its potential (e.g., Zhong and Xia 2018). On the other hand, it is reported
that the main sources of training used by teachers are previous personal experiences in
the subject, self-learning, and experience exchange with colleagues; furthermore, engi-
neers are the principal professionals who guide robotics-supported programming
experiences in the school curriculum (Patiño et al. 2014).
Different initiatives have been generated to meet the educational needs of teachers;
some aimed at the design of methodological strategies (Bellas et al. 2019); others con-
sidered the design of educational materials, accompanied by related training to support
its integration in the school system (Benton et al. 2017); to a lesser extent, other initia-
tives focused on offering virtual courses (Zampieri and Javaroni 2020). Nevertheless,
the research results reported by these initiatives are not conclusive; in particular, there
is no empirical evidence on supporting the educational needs of specific-discipline
teachers in making explicit the relationship between algorithms and specific curricular contents.
In this sense, this study was developed to answer the question: What are the disciplinary
skills that pre-service mathematics and science teachers exhibit in understanding and
designing algorithms when programming robots?
2 Methodology
In the empirical part of this study, a training experience in programming through robotics
was designed and implemented; it consisted of the design of an algorithm using the
mBlock software; the algorithm implicitly involved disciplinary concepts such as
distance, time, speed, right angles, and proportional scales. The experience consisted of
a four-hour session and was implemented in two courses aimed at training teachers in
technology integration, one for mathematics and the other for science (with an emphasis on
physics). This section presents the description of the programming training experience,
the tools used for empirical data analysis and the data analysis method.
The programming educational experience has a theoretical and practical nature based
on the objective of learning from and through robotics. This experience emerges from
a challenge where elements such as teamwork and disciplinary-knowledge decision
making, as well as specific software and hardware knowledge, are considered (López
and Andrade 2013). For this study, the block programming language mBlock (version
3.4.12) was used; the software allows algorithms to be visualized both as blocks and as
lines of code. In addition, the generated algorithms were uploaded to Arduino UNO boards.
168 J. A. Carmona-Mesa et al.
Fig. 1. Map of the University (left). The right image represents the left black rectangle zoomed
area and shows the route to be covered by the algorithm.
As the robots used had their own power supply, their motors could be operated without
plugging them into an external source.
Educational experiences with robots have focused on procedural learning
(Patiño et al. 2014). To transcend this, a challenge was designed to build an algorithm
in such a way that planning, design, test and final evaluation phases were necessary
to solve it (see Table 1). The challenge consisted of programming a vehicle that could
enter the university campus. A map was given to the pre-service teachers (Fig. 1). The
instructions were: through the western vehicular access, enter the university campus; the
vehicle (robot) must be parked correctly and in the shortest possible time in a specific
location of the parking zone (see Fig. 1). The path to be described by the robot was
studied in the classroom, considering real campus dimensions, but using different scales
to construct sections: A (1/10 scale), B (1/30 scale), C (1/20 scale) and D (1/30 scale).
After that activity, the following questions were raised: how can the real dimensions
of the path be found and, through the scales, the dimensions of the path the robot must
travel in the classroom? How can the speed of the robotic car be determined in order to
program the algorithm that allows it to travel the entire trajectory?
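The arithmetic behind these questions can be sketched in a few lines. In this hypothetical sketch, the four scales and section A's values (10 m real length, a measured speed of 0.33 m/s) are taken from the chapter, while the real lengths of sections B, C and D are invented for illustration:

```python
# Scale-and-speed arithmetic for the route challenge.
# Scales A 1/10, B 1/30, C 1/20, D 1/30 come from the chapter;
# lengths of B, C, D are invented values.
REAL_LENGTHS_M = {"A": 10.0, "B": 45.0, "C": 24.0, "D": 36.0}
SCALES = {"A": 1 / 10, "B": 1 / 30, "C": 1 / 20, "D": 1 / 30}

def classroom_path(real_lengths, scales):
    """Scale each real section down to the length the robot must travel."""
    return {s: real_lengths[s] * scales[s] for s in real_lengths}

def segment_times(classroom_lengths, speed):
    """Duration of each 'move forward' block: t = d / v."""
    return {s: d / speed for s, d in classroom_lengths.items()}

path = classroom_path(REAL_LENGTHS_M, SCALES)   # section A -> 1.0 m
times = segment_times(path, speed=0.33)         # section A -> ~3.03 s
```

The per-section durations are exactly the values the teams iteratively tuned in their "wait" blocks.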
The educational experiences were implemented in two courses over one week. Thirteen
future mathematics teachers were divided into five teams and nine future science
teachers were divided into three teams. The activities had a full-time teacher and two
assistant teachers; the three teachers jointly designed and implemented the experience.
The experience lasted four hours and was videotaped by two cameras that captured it
from opposite angles; in addition, the algorithm was tested before each trial. In total,
four hours of video (from both cameras) and three algorithms from each work team were
transcribed.
To answer the study question, an interpretative content analysis was developed using
qualitative criteria to determine the reliability and validity of the analytical processes,
also allowing inferences to be made by objectively and systematically identifying both
manifest and latent content in the recorded data (Drisko and Maschi 2016). Each author
independently analyzed and coded the three algorithms constructed by each team and
the transcription (manifest contents). Once this phase was completed, the findings were
organized into a list of three types of algorithms (basic, intermediate, and sophisti-
cated). After that, the authors analyzed the transcripts trying to identify the influence of
disciplinary knowledge made implicit or explicit in the participants’ arguments (latent
content). Finally, the findings of each author were discussed, with the whole team seek-
ing to reach consensus; because of this debate, a more refined categorization emerged
(Table 2).
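The chapter describes independent coding followed by consensus but does not name an agreement statistic; Cohen's kappa is one common way to quantify inter-coder reliability for categorical codes such as the basic/intermediate/sophisticated algorithm types. A minimal sketch with invented ratings:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical labels of the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # observed agreement
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    cats = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in cats)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Invented example: two coders classify eight team algorithms.
a = ["basic", "basic", "intermediate", "sophisticated", "basic",
     "intermediate", "intermediate", "basic"]
b = ["basic", "intermediate", "intermediate", "sophisticated", "basic",
     "intermediate", "basic", "basic"]
kappa = cohens_kappa(a, b)
```

Values near 1 indicate strong agreement before the consensus discussion; values near 0 indicate agreement no better than chance.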
3 Results
The results are presented according to the categories described in Table 2. For each
category, conceptual aspects and records of the training experience that support them
are shown.
their construction, for instance: “wait_seconds” and “forever”. In the science course, two
teams achieved intermediate algorithms and the remaining one achieved a sophisticated
one; for this last case, it is worth noting the fact of limiting the rotation of the robot by
turning off the digital pin 11 and not keeping pin 8 on.
Although the experience was introductory, it is noteworthy that the pre-service teachers
built functional algorithms, even if not the most optimized versions for the challenge.
For example, when the shortest time to complete the journey is taken as an evaluative
variable, keeping both motors on during a turn and giving them opposite rotation
directions allows the turn to be performed much faster (to do this, pin 9 or pin 10 must
be activated, which drive counter-clockwise rotation for each motor).
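The speed advantage of counter-rotating both motors follows from differential-drive kinematics: pivoting about one stopped wheel rotates the robot at angular speed v/L (where L is the distance between the wheels), while counter-rotating both wheels spins it about the midpoint at 2v/L, halving the turn time. A numeric sketch (wheel speed and track width are illustrative values, not taken from the study):

```python
import math

def turn_time(deg, wheel_speed, track, mode):
    """Time for a differential-drive robot to rotate `deg` degrees.

    pivot: one wheel stopped, rotation about that wheel -> omega = v / track
    spin : wheels counter-rotate, rotation about center -> omega = 2 * v / track
    """
    omega = wheel_speed / track if mode == "pivot" else 2 * wheel_speed / track
    return math.radians(deg) / omega

t_pivot = turn_time(90, wheel_speed=0.1, track=0.12, mode="pivot")
t_spin = turn_time(90, wheel_speed=0.1, track=0.12, mode="spin")
# The spin turn takes half the time of the pivot turn at the same wheel speed.
```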
It is important to note that, in the case of the science course teams, none considered
the last turn of the route (between section C and D). In contrast, all teams in the math
course considered the four sections and the respective turns in detail. All teams
consistently formulated an initial block structure and adjusted it slightly after each
test; that is, the essence of the algorithm was identified from the beginning and the rest of
the experience was focused on modifying it according to time issues (both for sections
and turns).
an efficient algorithm. In both courses, the professor asked about the relationship that
the participants established between the algorithm and the concepts of distance, time,
speed, right angle, and proportional scale.
Divided opinions were identified in the mathematics course. On the one hand, a
member of one of the teams stated: “It seemed silly to me that we took so much time
looking at how much the measurement in meters was, knowing that this was not going to
be of any use for programming [proportional scale]”. To that comment, another member
of the same team added:
We did the conversion, we did all that and then we obtained ten meters for the first
[section A], (...) and then we calculated the speed divided by time, uh! the distance
divided by time to find the speed and we got 0.33, and that was not right! ... it had
no sense. We did not see any connection in the data we collected, we were just
calculating.
The reaction of this team shows that, in their experience, the disciplinary knowl-
edge had no influence on the construction of the algorithm; furthermore, there is some
imprecision (or doubt) regarding the concept of speed. However, the teachers’
observations allowed them to infer that this perception was not generalized. They asked
the rest of the pre-service teachers whether the same thing had happened to everyone.
Faced with this, one of the
members of another team replied:
no, it did help us, (…) making an approximate translation of scales, we obtained 1.3
meters here (…) and with the speed I assumed, then we made those calculations,
and it did help us, so much so that what we have modified here has been more in
terms of the lap and the turn than the distance. The distance as calculated in the
beginning kept unchanged and it has worked well, only with respect to the turns
have we had problems.
During the same experience in the science course, one of the members of a team
responded to the same previous question by stating: “There are basic concepts of physics
such as (…): take the map [proportional scale], find the distance, ( …) The speed of the
car”; the participant complemented: “no one is one hundred percent sure that this is
the case, but, if we go further and say: (…) next time we will try to understand the
two concepts and see if either of the two is right [when analyzing the notion of time for
speed and for turns]”. In addition, he emphasized the challenge that the 90º angle concept
implied: “this turn of the car was not perpendicular, it did not result in a ninety-degree
turn, so what are we going to do? look, it is a little more [than perpendicular] so we
could lower the time a little”.
The records in the previous paragraphs denote a greater sensitivity to the role of
disciplinary knowledge in the construction of the algorithms. Furthermore, the third team
was the one that developed the sophisticated algorithm. These statements, unlike those
of the math group, were common across the science teams. Moreover, while opinions in the
mathematics group were divided in relation to disciplinary knowledge, the science group
assumed such knowledge as basic. For this reason, the
professor, aiming to delve into this discussion, asked: “if the radius of the robot’s wheel
is reduced by one third of its current value, how is the route it makes modified?”. The
science group began to discuss and suggested that the length of the circumference
described by the wheel would be 2πr; therefore, the length of the new route would be
2πr/3. They then stated that, although the motor operation would be the same, the route
length would be reduced to one third of the one initially considered.
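Their answer amounts to a proportionality argument: a time-based program produces the same number of wheel revolutions, each revolution advances the robot one circumference, and the circumference scales linearly with the radius. A minimal numeric check (the radius value is illustrative):

```python
import math

r = 0.03  # original wheel radius in metres (illustrative value)

def distance_per_revolution(radius):
    """A wheel advances one circumference per revolution: d = 2 * pi * r."""
    return 2 * math.pi * radius

d_old = distance_per_revolution(r)
d_new = distance_per_revolution(r / 3)
# Same program, same number of revolutions: the route length scales with
# the radius, so d_new / d_old == 1/3.
```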
In the first part of this chapter, the need to expand the empirical evidence on teachers’
educational needs in specific disciplines was presented. Teachers’ knowledge of the
relationship between algorithms and specific curricular contents is also required. It
was then proposed to contribute to international research through two categories:
(i) understanding and design of algorithms, and (ii) the influence of disciplinary
knowledge on the construction of algorithms, analyzed for the case of mathematics and
science teachers.
In this regard, this chapter offers two important results. The first is that, although
future teachers can produce functional algorithms in short-term training experiences,
there is an important difference in the emphasis given by mathematics and science
teachers. For the former, meticulous reasoning that prioritized the functionality of the
robot-path algorithm was observed. Pre-service science teachers, instead, prioritized
optimizing the algorithm with respect to the number of blocks and the time in which the
robot completed the journey.
Although the results are not conclusive, there is evidence that science teachers, by
showing greater confidence and a higher degree of algorithm optimization, could better accompany
students when explaining the relationship between algorithms and disciplinary knowl-
edge (Benton et al. 2017; Zhong and Xia 2018). In addition, it is confirmed that the design
of algorithms constitutes a challenge for mathematics teachers as it transcends the usual
reasoning of their discipline, involving aspects of computer science (Carmona-Mesa
et al. 2021; Benton et al. 2017).
A second result refers to the disciplinary knowledge that future teachers exhibit in the
construction of algorithms. The importance of transcending functional aspects of robotics
and promoting training in practical programming is highlighted, where elements such as
teamwork and decision-making based on disciplinary knowledge could emerge (López
and Andrade 2013; Patiño et al. 2014). This showed that, while for mathematics teachers
disciplinary concepts did not play an important role in the understanding and design of
an “ideal” algorithm, for science teachers there was a greater presence of their
disciplinary knowledge, for example, in the use of time to optimize the “wait” block.
This provides evidence in relation to the conjecture of a possibly closer relationship
between science and robot programming (Santos et al. 2019), by showing a possible
explicit influence on the design of the algorithm.
In this study, on the one hand, the influence of an absolutist vision of mathematics was
identified in the process carried out by the teachers of this discipline. This was
evidenced by the fact that they seemed to ignore the suggested disciplinary concepts and
focused on achieving an “ideal version” of the algorithm. On the other hand, science
teachers reflected a tendency, common in their discipline, to neglect variables with
lesser effects and to focus their work on functional procedures closely related to the
challenge faced. These
Acknowledgments. We are grateful to the Committee for the Development of Research of the
University of Antioquia for financing the project “Foundation and development of a STEM training
proposal for future mathematics teachers.” In the same way, we express our gratitude to the Doc-
toral Excellence Scholarship Program of the Bicentennial of MinCiencias-Colombia for financing
the project “Design and validation of a STEM training proposal for teachers of the Department of
Antioquia.”
References
Benton, L., Hoyles, C., Kalas, I., Noss, R.: Bridging primary programming and mathematics:
some findings of design research in England. Digit. Exp. Math. Educ. 3(2), 115–138 (2017).
https://doi.org/10.1007/s40751-017-0028-x
Bellas, F., Salgado, M., Blanco, T.F., Duro, R.J.: Robotics in primary school: a realistic mathe-
matics approach. In: Daniela, L. (ed.) Smart Learning with Educational Robotics, pp. 149–182.
Springer, Cham (2019). https://doi.org/10.1007/978-3-030-19913-5_6
Carmona-Mesa, J.A., Krugel, J., Villa-Ochoa, J.A.: La formación de futuros profesores en tec-
nología. Aportes al debate actual sobre los Programas de Licenciatura en Colombia. In: Richit,
A., Oliveira, H. (eds.) Formação de Professores e Tecnologias Digitais, pp. 35–61. Livraria da
Física, São Paulo (2021)
Carmona-Mesa, J.A., Cardona, M.E., Castrillón-Yepes, A.: Estudio de fenómenos físicos en la
formación de profesores de Matemáticas. Una experiencia con enfoque en educación STEM.
Uni-pluriversidad 20(1), e2020101 (2020). https://doi.org/10.17533/udea.unipluri.20.1.02
Drisko, J., Maschi, T.: Content Analysis. Oxford University Press (2016)
Mpungose, C.B.: Student teachers’ knowledge in the era of the fourth industrial revolution. Educ.
Inf. Technol. 25(6), 5149–5165 (2020). https://doi.org/10.1007/s10639-020-10212-5
Lee, I., Grover, S., Martin, F., Pillai, S., Malyn-Smith, J.: Computational thinking from a disci-
plinary perspective: integrating computational thinking in K-12 science, technology, engineer-
ing, and mathematics education. J. Sci. Educ. Technol. 29(1), 1–8 (2020). https://doi.org/10.
1007/s10956-019-09803-w
López, P.A., Andrade, H.: Aprendizaje de y con robótica, algunas experiencias. Revista Educación
37(1), 43 (2013). https://doi.org/10.15517/revedu.v37i1.10628
Santos, I., Grebogy, E.C., de Medeiros, L.F.: Crab robot: a comparative study regarding the use of
robotics in STEM education. In: Daniela, L. (ed.) Smart Learning with Educational Robotics,
pp. 183–198. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-19913-5_7
Patiño, K.P., Diego, B.C., Rodilla, V.M., Conde, M.J.R., Rodriguez-Aragon, J.F.: Using robotics
as a learning tool in Latin America and Spain. IEEE Revista Iberoamericana de Tecnologias
del Aprendizaje 9(4), 144–150 (2014). https://doi.org/10.1109/RITA.2014.2363009
Scepanovic, S.: The fourth industrial revolution and education. In: 2019 8th Mediterranean Con-
ference on Embedded Computing (MECO), vol. 8, pp. 1–4. IEEE (2019). https://doi.org/10.
1109/MECO.2019.8760114
Zhong, B., Xia, L.: A systematic review on exploring the potential of educational robotics in
mathematics education. Int. J. Sci. Math. Educ. 18(1), 79–101 (2018). https://doi.org/10.1007/
s10763-018-09939-y
Villa-Ochoa, J.A., Castrillón-Yepes, A.: Temas y tendencias de investigación en América Latina
a la luz del pensamiento computacional en Educación Superior. In: Políticas, Universidad e
innovación: retos y perspectivas, pp. 235–248. JMBosch (2020)
Weintrop, D., et al.: Defining computational thinking for mathematics and science classrooms. J.
Sci. Educ. Technol. 25(1), 127–147 (2016). https://doi.org/10.1007/s10956-015-9581-5
Zampieri, M.T., Javaroni, S.L.: A dialogue between computational thinking and interdisciplinarity
using scratch software. Uni-pluriversidad 20(1), e2020105 (2020). https://doi.org/10.17533/
udea.unipluri.20.1.06
Teachers’ Perspective on Fostering
Computational Thinking Through
Educational Robotics
1 Introduction
Educational robotics has garnered significant interest in recent years to teach
students not only the fundamentals of robotics, but also core Computer Science
This research was supported by the NCCR Robotics, Switzerland. We would like to
thank the trainers and teachers who made this study possible.
M. Chevalier and L. El-Hamamsy contributed equally to this work.
The original version of this chapter was previously published non-open access. A Correction
to this chapter is available at https://doi.org/10.1007/978-3-030-82544-7_30
© The Author(s) 2022, corrected publication 2022
M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 177–185, 2022.
https://doi.org/10.1007/978-3-030-82544-7_17
178 M. Chevalier et al.
(CS) concepts [5] and Computational Thinking (CT) competencies [2,3]. However,
participation in an Educational Robotics (ER) Learning Activity (ERLA) does not automatically
ensure student learning [17], with the design of the activity playing a key role
towards the learning outcomes [6]. Indeed, the lack of understanding as to how
specific instructional approaches impact student learning in ER activities has
been raised on multiple occasions [16]. Many researchers have even evoked the
need to have an operational model to understand how to foster CT skills [10]
within the context of ER activities [1,9]. To that effect, Chevalier & Giang et
al. [3] developed the Creative Computational Problem Solving (CCPS) model
for CT competencies using an iterative design-oriented approach [13] through
student observations. The resulting five-phase model (Fig. 1) helped identify and
understand students’ cognitive processes while engaging in ER learning activi-
ties aimed to foster CT skills. By analysing the students’ behaviour through the
lens of the CCPS model to understand the students’ thought processes, both
teachers and researchers may have a means of action and intervention in the
classroom to foster the full range of cognitive processes involved in creative com-
putational problem solving. The authors concluded that a validation by teachers
was essential to ensure that they could “effectively take advantage of the model
for their teaching activities”, not only at the design stage, but also to guide
specific interventions during ER learning activities.
This article reports the findings of a study involving 334 in-service and pre-
service primary school teachers, with the purpose of evaluating their perception
of the model and investigating whether their own needs as users of the model
are met. The Utility, Usability and Acceptability framework [18] for computer-
based learning environment assessment was leveraged, as it has been previously
used for the evaluation of teachers’ perception of the use of educational robots
in formal education [4]. More formally, we address the following questions:
• RQ1: What is the perceived utility of the CCPS model?
• RQ2: What is the perceived usability of the model?
• RQ3: What is the acceptability of the model by teachers?
2 Methodology
To evaluate the model, the study was conducted with 232 in-service and 102
pre-service teachers participating in the mandatory training program for Digi-
tal Education underway in the Canton Vaud, Switzerland [5] between Novem-
ber 2019 and February 2020. The inclusion of both pre-service and in-service
teachers within the context of a mandatory ER training session helps ensure
the generalisability of the findings to a larger pool of teachers, and not just
experienced teachers and/or pioneers who are interested in ER and/or already
actively integrating ER into their practices [4]. During the ER training session,
the teachers participated in an ER learning activity (see Lawnmower activity
in Fig. 1 [3]) which was mediated by the CCPS model. During this activity, the
teachers worked in groups of 2 or 3 to program the event-based Thymio II robot
[12] to move across all of the squares in the lawn autonomously. As the robot is
event-based, the participants “have to reflect on how to use the robot’s sensors
and actuators to generate a desired [behaviour]” [3], which requires that the par-
ticipants leverage many CT-related competencies. So that teachers could understand
how the CCPS model can be used to mediate an ER learning activity, and similarly
to Chevalier & Giang et al. (2020) [3], access to the programming interface was
temporarily blocked at regular time intervals.
Fig. 1. The CCPS model (left) and lawnmower activity setup with Thymio (right) [3].
Table 1. Utility, usability and acceptability survey [18]. Utility in terms of transver-
sal skills considers 5 dimensions: collaboration (COL), communication (COM), learning
strategies (STRAT), creative thinking (CREA) and reflexive processes (REFL)
Construct Question
Utility of the (COL) We exchanged our points of view/evaluated the pertinence of our
CCPS for actions/confronted our ways of doing things
transversal skills (COM) We expressed ourselves in different ways (gestural, written
(4-point Likert etc...)/identified links between our achievements and
scale) discoveries/answered our questions based on collected information
(STRAT) We persevered and developed a taste for effort/identified
success factors/chose the adequate solution from the various
approaches
(CREA) We expressed our ideas in different and new ways/expressed our
emotions/were engaged in new ideas and exploited them
(REFL) We identified facts and verified them/made place for doubt and
ambiguity/compared our opinions with each other
Utility of the What helps i) identify the problem (USTD)? ii) generate ideas (IDEA)?
intervention iii) formulate the solution (FORM)? iv) program (PROG)? v) evaluate the
methods solution found (EVAL)?
(checkboxes) Max 3 of 5 options: 1) manipulating the robot; 2) writing down the
observations; 3) observing 3 times before reprogramming; 4)
programming; 5) not being able to program
Usability (4-point The model helps i) plan an ERLA; ii) intervene during an ERLA; iii)
Likert scale) regulate student learning during an ERLA; iv) evaluate student
learning during an ERLA
Acceptability I will redo the same ERLA in my classroom
(4-point Likert I will do a similar ERLA that I already know in my classroom
scale) I will do a similar ERLA that I will create in my classroom
I will do a more complex ERLA in my classroom
less perceived by teachers. This is coherent with the fact that the ER Lawnmower
activity was conceived to promote students’ use of transversal skills to help the
emergence of related CT competencies and suggests that the use of the CCPS
model in designing ER learning activities helps teachers see and strengthen the
link between ER, transversal skills and CT, confirming the results of [4] with
teachers who were novices. Although in the mandatory curriculum teachers are
taught to evaluate transversal skills, little indication is provided as to how to
foster them. The use of ER learning activities informed and mediated by the
CCPS model can provide a concrete way to foster skills already present in the
curriculum and ensure that students acquire the desired competencies.
Fig. 2. Teachers’ perceived utility (with respect to fostering transversal skills), usability
and acceptability of the CCPS model. For each transversal skill we report the mean
and standard deviation μ±std, and Cronbach’s α measure of internal consistency.
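Fig. 2 reports Cronbach's α for each construct. As a reference for readers unfamiliar with the statistic, a minimal sketch of its standard formula, α = k/(k−1) · (1 − Σ item variances / variance of totals), applied to invented 4-point Likert responses:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for `items`: a list of k item-score lists,
    each of length n (one score per respondent).
    """
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sums
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Invented 4-point Likert data: 3 items, 5 respondents.
items = [
    [3, 4, 2, 4, 3],
    [3, 3, 2, 4, 3],
    [4, 4, 1, 4, 2],
]
alpha = cronbach_alpha(items)
```

Higher α means the items within a construct vary together, i.e. they plausibly measure the same underlying skill.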
To clarify the link between the CCPS model and the employed intervention
methods, the teachers were asked to select a maximum of three intervention
methods that they believed were useful to engage in each of the phases of the
CCPS model (see Fig. 3). The element which emerges as the most relevant for
all the phases of the model is the possibility of manipulating the robot, thereby
reinforcing the role of physical agents in fostering CT skills [8]. This however is
dependent on the fact that the Thymio robot provides immediate visual feed-
back through the LEDs in relation to each sensor’s level of activation. This
highlights once more the importance of constructive alignment in ER learning
activity design [7], which stipulates that the robot be selected in relation to the
desired learning outcomes. The second most popular choice was to
write down the observations, likely because this constitutes a means of specifying
what happens with the robot in the environment. The written observations then
become a “thinking tool” that supports modelling and investigation [15]. Sur-
prisingly, and although the teachers were introduced to the fact that unregulated
access to the programming interface tends to lead to trial and error behaviour
[3], programming was often selected as being useful to foster the different CT
phases, whilst not being able to program was one of the least selected. Only in
the case of idea generation did both programming and not programming receive
an equal number of votes. We believe that the frequent selection of programming is due
to the fact that the question was based on the respondents’ experience as participants
in the ER learning activity (and therefore on their need for a high sense of
controllability [14]), and not on their experience as teachers leading the activity in
the classroom.
Fig. 3. Teachers’ perception of the link between intervention methods and the different
phases of the CCPS model. For each phase of the model and intervention method, the
proportion of teachers having selected the approach as relevant is shown.
full extent, this is also highly linked to the difficulty found in both the ER and
CT literature in terms of assessment of learning and/or transversal skills [8,9].
The question of acceptability here targets teachers’ intent to use the CCPS model
in their practices. Intent to use is considered at progressive levels of appropriation
(see Fig. 2), which is why it is not surprising to find that while teachers might be
willing to conduct the same ER learning activity in their classrooms (64%), fewer are willing to adapt the activity (40%), create their own custom one (32%), or conduct a more complex one (20%). This can be related to the Use-
Modify-Create (UMC) progression [11] which was developed to scaffold student
learning in CT contexts. Teachers need to start by using the model in a given ER
learning activity to gain in self-efficacy. Only then will they be able to progress
to the next stage where they feel comfortable adapting the use of the model to
their teaching style and to the individual students. Finally, teachers will reach
a level where they create their own ER pedagogical interventions to foster CT
competencies. One must however note that intent is likely influenced by external
factors (e.g. time or access to the robots, frequent barriers to the introduction
of ER in formal education [4,5]).
4 Conclusion
Given the prominent role that teachers play in the integration of ER and
CT in formal education, this study investigated teachers’ perception of an oper-
ational model to foster Computational Thinking (CT) competencies through
ER activities: the Creative Computational Problem Solving (CCPS) model [3].
Three research questions were considered in a study with 334 pre-service and
in-service primary school teachers: What is the perceived utility (RQ1), usabil-
ity (RQ2) and acceptability (RQ3) of the CCPS model? While teachers found
that the activity design and intervention methods employed were useful to foster
transversal skills (RQ1), their perception of the utility of the intervention meth-
ods on the different cognitive processes defined by the CCPS model (RQ1) was
somewhat unexpected. In terms of the usability (RQ2), teachers perceived how
they could design an activity and intervene using the model, but were less able
to perceive how the model could be used to assess where the students were in
terms of learning and regulate the activity to mediate their learning. The find-
ings of RQ1 and RQ2 support the importance of training teachers to recognise
and understand the different cognitive processes to intervene adequately and be
able to differentiate their teaching per student, rather than adopting a unique
strategy for an entire class.
To help teachers implement ER learning activities in the classroom and gain
in autonomy to create their own activities that foster CT skills (RQ3), it seems
relevant to alternate between experimentation in classrooms and debriefing dur-
ing teacher training and go beyond providing pedagogical resources. To conclude,
the operationalisation of ER to foster CT skills must also consider the key role
that teachers have to play in the introduction of any such model and its appli-
cation in formal education.
References
1. Atmatzidou, S., Demetriadis, S.: Advancing students’ computational thinking skills
through educational robotics: A study on age and gender relevant differences.
Robotics and Autonomous Systems 75, 661–670 (Jan 2016)
2. Bers, M.U., Flannery, L., Kazakoff, E.R., Sullivan, A.: Computational thinking
and tinkering: Exploration of an early childhood robotics curriculum. Computers
& Education 72, 145–157 (Mar 2014)
3. Chevalier, M., Giang, C., Piatti, A., Mondada, F.: Fostering computational think-
ing through educational robotics: a model for creative computational problem solv-
ing. International Journal of STEM Education 7(1), 39 (Dec 2020)
4. Chevalier, M., Riedo, F., Mondada, F.: Pedagogical Uses of Thymio II: How Do
Teachers Perceive Educational Robots in Formal Education? IEEE Robotics &
Automation Magazine 23(2), 16–23 (Jun 2016)
5. El-Hamamsy, L., et al.: A computer science and robotics integration model for
primary school: evaluation of a large-scale in-service K-4 teacher-training program.
In: Education And Information Technologies, (Nov 2020)
6. Fanchamps, N.L.J.A., Slangen, L., Hennissen, P., Specht, M.: The influence of SRA
programming on algorithmic thinking and self-efficacy using Lego robotics in two
types of instruction. ITDE (Dec 2019)
7. Giang, C.: Towards the alignment of educational robotics learning systems with
classroom activities p. 176 (2020)
8. Grover, S., Pea, R.: Computational Thinking in K-12: A Review of the State of
the Field. Educational Researcher 42(1), 38–43 (Jan 2013)
9. Ioannou, A., Makridou, E.: Exploring the potentials of educational robotics in
the development of computational thinking: A summary of current research and
practical proposal for future work. Education And Information Technologies 23(6),
2531–2544 (Nov 2018)
10. Lye, S.Y., Koh, J.H.L.: Review on teaching and learning of computational thinking
through programming: What is next for K-12? Computers in Human Behavior 41,
51–61 (2014)
11. Lytle, N., Cateté, V., Boulden, D., Dong, Y., Houchins, J., Milliken, A., Isvik, A.,
Bounajim, D., Wiebe, E., Barnes, T.: Use, Modify, Create: Comparing Computa-
tional Thinking Lesson Progressions for STEM Classes. In: ITiCSE. pp. 395–401.
ACM (2019)
12. Mondada, F., Bonani, M., Riedo, F., Briod, M., Pereyre, L., Retornaz, P., Magne-
nat, S.: Bringing Robotics to Formal Education: The Thymio Open-Source Hard-
ware Robot. IEEE Robotics Automation Magazine 24(1), 77–85 (Mar 2017)
13. Rogers, Y., Sharp, H., Preece, J.: Interaction design: beyond human-computer
interaction. John Wiley & Sons (2011)
14. Viau, R.: La motivation en contexte scolaire. Pédagogies en développement:
Problématiques et recherches. De Boeck Université (1994)
15. Sanchez, É.: Quelles relations entre modélisation et investigation scientifique dans
l’enseignement des sciences de la terre? Éducation et didactique (2–2), 93–118
(2008)
Teachers’ Perspective on Fostering CT Through Educational Robotics 185
16. Sapounidis, T., Alimisis, D.: Educational robotics for STEM: A review of tech-
nologies and some educational considerations. p. 435 (Dec 2020)
17. Siegfried, R., Klinger, S., Gross, M., Sumner, R.W., Mondada, F., Magnenat, S.:
Improved Mobile Robot Programming Performance through Real-time Program
Assessment. In: ITiCSE. pp. 341–346. ACM (Jun 2017)
18. Tricot, A., Plégat-Soutjis, F., Camps, J., Amiel, A., Lutz, G., Morcillo, A.: Util-
ity, usability, acceptability: interpreting the links between three dimensions of the
evaluation of the computerized environments for human training (CEHT). Envi-
ronnements Informatiques pour l'Apprentissage Humain (2003)
Open Access This chapter is licensed under the terms of the Creative Commons
Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/),
which permits use, sharing, adaptation, distribution and reproduction in any medium
or format, as long as you give appropriate credit to the original author(s) and the
source, provide a link to the Creative Commons license and indicate if changes were
made.
The images or other third party material in this chapter are included in the
chapter’s Creative Commons license, unless indicated otherwise in a credit line to the
material. If material is not included in the chapter’s Creative Commons license and
your intended use is not permitted by statutory regulation or exceeds the permitted
use, you will need to obtain permission directly from the copyright holder.
Technologies for Educational Robotics
Mirobot: A Low-Cost 6-DOF Educational
Desktop Robot
School of Mechanical Electronic and Information Engineering, China University of Mining and
Technology (Beijing), Beijing, China
zhoudongxv@yeah.net
Abstract. Traditional industrial robots are not suitable as teaching equipment for
students to learn due to their high cost and large size. Accordingly, this paper proposes Mirobot, a low-cost, multi-degree-of-freedom desktop educational robot suitable for use in various kinds of robotics courses and research. We apply 3D printing technology to the design iteration and manufacture of the robot's structural parts. The look-ahead speed control algorithm
is applied to the execution of a large number of small line segments in trajectory
control. The geometric method and the Euler angle transformation method are used
to solve the inverse kinematics problem. The robot's computer control
software, mobile phone software, and Bluetooth remote control are presented.
1 Introduction
the success of Cassino University in this regard. Quigley et al. [2] designed a low-
cost 7-degree-of-freedom robotic arm. The robot arm is equipped with several stepper
motors and Dynamixel robotics RX-64 servos to act as joints. The unique feature of this
robotic arm is that it guarantees a relatively high-performance index, such as repeated
positioning accuracy and movement speed. McLurkin et al. [3] developed a low-cost,
multi-robot collaborative mobile robot system for science, technology, engineering, and
mathematics (STEM) education. The developed robot is applied to the teaching work of
graduate students and has achieved good results. Eilering et al. [4] proposed a low-cost
3D printed desktop robotic arm and innovatively proposed the use of a small desktop
robotic arm as a controller to control industrial robotic arms. A three-degree-of-freedom
desktop robotic arm named Magician [5] has been proposed by a company. The robot arm is equipped with three stepper motors with planetary reducers as joints, and MEMS inclination sensors installed on the upper arm and the forearm detect the joint angles. The uArm Swift manipulator [6] designed by another company has a similar
structure and function. However, this type of robotic arm has relatively few degrees of freedom and cannot control the orientation of the end effector. Sahin
et al. [7] designed a three-degree-of-freedom education desktop robot R3D. The robot
is equipped with a DC motor as the joint and CNC-machined aluminum as the arm
material. Linke et al. [8, 9] designed a low-cost multi-axis grinder based on a modular
desktop robotic arm structure. By reassembling each module, it can realize a robotic arm
grinder with different mechanical structures. Ghio et al. [10] proposed another low-cost three-degree-of-freedom writing robotic arm, which uses cardboard as the arm material and three servo motors as joints. Due to the low rigidity of the
cardboard, the accuracy of the robotic arm is not high. Shevkar et al. [11] proposed
a three-degree-of-freedom SCARA desktop robotic arm. The robotic arm is driven by
three stepper motors, which control the joint speeds well.
Most desktop robots have relatively few degrees of freedom. In the actual teaching
of robotics, the concepts of robot position and orientation are essential. If the number of
degrees of freedom of the robot is less than 6, the orientation of its end effector cannot
be controlled well. Some desktop robotic arms have sufficient degrees of freedom [2],
but they often need to be equipped with small-sized servos to form joints. However,
ordinary servos only provide position control but not speed control. This is a significant
drawback of robotic arms whose joints are formed by servos. When such an arm must traverse a large number of small line segments, the smoothness of its movement cannot be guaranteed.
Most desktop robotic arms have poor motion stability due to their lack of look-ahead
speed control [12, 13]. When such a robotic arm performs spot movement, its motion
effect is relatively good. However, as mentioned in the first point, its end-effector will
experience severe jitter when it moves through a large number of small line segments.
The materials of most desktop robotic arms are inappropriate. The structure of some
desktop robotic arms is made of metal, which results in high weight and complicated machining processes. There are also some desktop robotic arms made of plastic sheet, cardboard, or sheet metal, which results in low accuracy and poor structural strength.
2 Mechanical Structure
The actual Mirobot robotic arm is shown in Fig. 1. The overall mechanical system
consists of a base, link frames, gear reducer stepping motors and limit switch sensors.
The robot's link frames and appearance are similar to those of an industrial robot, and it consists of six joints, each driven by a stepper motor with a reduction gear.
The Mirobot robotic arm uses six small stepper motors as drivers. There are many
advantages to using a stepper motor as a joint drive for a desktop robotic arm. The
characteristic of the stepper motor is that it can provide larger torque when running at
low speed. The desktop robotic arm is mainly used in teaching and scientific research,
and its movement speed does not need to be very high. Using a stepper motor as a joint
can also reduce the cost of a desktop robotic arm. A stepper motor can be combined with
192 D. Zhou et al.
its driver to achieve precise, open-loop, pulse-based control: the number of pulses sent determines the motor's rotation angle, and the pulse frequency determines its speed. By contrast, a servo motor system that realizes the same angle and speed control is costly, which is one of the main reasons why the cost of a small robotic arm is difficult to reduce. Also, stepper motors can act as electromagnetic clutches [2]. If a force exceeding the holding torque is accidentally applied to the motor's output, the stepper motor slips until the force is small enough for it to resume transmission. This characteristic is of great significance in the use of desktop
education robotic arms: no additional collision detection function is needed. When the robotic arm touches the user and meets higher resistance, the stepper motor joint automatically slips to ensure the safety of the user. Of course, when the robotic arm is in the "lost step" sliding state, it must be reset before it can continue to be used. Therefore, after restarting the Mirobot robot arm, a mechanical reset must be performed first; otherwise, each axis is locked and
cannot move. The small gear reducer stepping motors used by Mirobot’s six axes are
shown in Fig. 2.
Fig. 2. The small gear reducer stepping motors used by Mirobot’s six axes
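To make the pulse-based control concrete, the conversion from a desired joint motion to step pulses can be sketched as follows; the step angle, microstepping factor, and gear ratio are illustrative assumptions, not Mirobot's actual specifications:

```python
# Convert a desired joint rotation and speed into step pulses for an
# open-loop stepper drive. All hardware values below are illustrative
# assumptions, not Mirobot's actual specifications.

STEP_ANGLE_DEG = 1.8      # full-step angle of a typical stepper motor
MICROSTEPS = 16           # driver microstepping factor (assumed)
GEAR_RATIO = 20           # reduction ratio of the gearbox (assumed)

STEPS_PER_JOINT_REV = (360.0 / STEP_ANGLE_DEG) * MICROSTEPS * GEAR_RATIO

def pulses_for_angle(joint_angle_deg):
    """Number of pulses that rotates the joint by joint_angle_deg."""
    return round(STEPS_PER_JOINT_REV * joint_angle_deg / 360.0)

def pulse_frequency(joint_speed_deg_s):
    """Pulse frequency (Hz) that produces the given joint speed."""
    return STEPS_PER_JOINT_REV * joint_speed_deg_s / 360.0
```

With these assumed values, a 90° joint rotation corresponds to `pulses_for_angle(90)` pulses, and sending them at `pulse_frequency(30)` Hz yields a joint speed of 30°/s.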
Considering the cost of the desktop robot and the need for rapid iteration in design, the
prototype of the Mirobot robot is manufactured using 3D printing. 3D printing [14] is an additive manufacturing technology that has emerged in recent years. After
completing the 3D CAD drawing, the designer can quickly print a set of prototype struc-
tural parts with only a few modifications. Designers can then quickly modify drawings
and print again based on the effects of the prototype. The structural design process of the
Mirobot robot has gone through multiple iterations. With the help of 3D printing tech-
nology, the speed of iteration has been increased while the processing costs have been
reduced. Figure 3 shows the 3D-printed structural components after painting.
Figure 4 shows the prototypes of various iterative versions of the Mirobot robot, which
are all manufactured using 3D printing technology.
Table 1. DH parameters

i   αi−1    ai−1   di   θi
1   0       0      d1   0
2   −π/2    a1     0    −π/2
3   0       a2     0    0
4   −π/2    a3     d4   0
5   π/2     0      0    π/2
6   π/2     0      d6   0
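The table follows the modified (Craig) DH convention, in which each row (αi−1, ai−1, di, θi) defines the transform between consecutive joint frames. A minimal forward-kinematics sketch chaining these transforms is shown below; the numeric link lengths are placeholders, not Mirobot's real dimensions:

```python
import math

def dh_transform(alpha, a, d, theta):
    """Homogeneous transform for one row of a modified-DH (Craig) table."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    ct, st = math.cos(theta), math.sin(theta)
    return [[ct,      -st,      0.0,  a],
            [st * ca,  ct * ca, -sa, -sa * d],
            [st * sa,  ct * sa,  ca,  ca * d],
            [0.0,      0.0,     0.0,  1.0]]

def mat_mul(A, B):
    """Multiply two 4x4 matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(joint_angles, dh_rows):
    """Chain the per-joint transforms; dh_rows holds (alpha, a, d, theta_offset)."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for q, (alpha, a, d, offset) in zip(joint_angles, dh_rows):
        T = mat_mul(T, dh_transform(alpha, a, d, q + offset))
    return T

# Placeholder link lengths (metres), NOT Mirobot's real dimensions.
d1, a1, a2, a3, d4, d6 = 0.10, 0.03, 0.11, 0.02, 0.10, 0.05
PI = math.pi
DH = [(0.0,   0.0, d1,  0.0),
      (-PI/2, a1,  0.0, -PI/2),
      (0.0,   a2,  0.0, 0.0),
      (-PI/2, a3,  d4,  0.0),
      (PI/2,  0.0, 0.0, PI/2),
      (PI/2,  0.0, d6,  0.0)]
```

Each joint angle is added to the constant offset θi from the table before its transform is evaluated, so the zero configuration already reflects the −π/2 and π/2 offsets of joints 2 and 5.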
The look-ahead control of speed plays an essential role in high-end robot controllers and
CNC equipment. It is one of the core technologies of various industrial robot controller
manufacturers. Some desktop robotic arms are not equipped with the look-ahead control
algorithm. In that case, the small line segments densified by the interpolation algorithm are sent directly to each joint axis for execution after the inverse kinematics solution. Without look-ahead control, the movement of the robotic arm is basically accurate, but its smoothness is very poor: the robot executes the many small interpolated segments frame by frame, and because consecutive frames are not connected in speed, sudden speed changes occur and the arm jitters severely. When the jitter is too large, the trajectory of the desktop robotic arm becomes inaccurate.
Look-ahead control plans the speed profile of each small line segment when the robot arm must execute a large number of them, ensuring that the speed profiles of consecutive segments are connected end to end. During the movement of the robotic arm, the interpolation algorithm densifies its trajectory into a large number of small line segments joined end to end. The robotic arm must execute these segments within a particular time to ensure the
accuracy of the trajectory. This situation is prevalent in the welding function of industrial
robotic arms and the painting and laser engraving functions of desktop robotic arms.
The speed connection between the small line segments directly affects the smoothness
of the movement. Typical look-ahead speed profiles are trapezoidal and S-shaped. The trapezoidal profile gives each small line segment a trapezoidal speed curve, and consecutive trapezoids are joined according to a particular connection rule. The S-shaped profile further ensures continuity of acceleration between small line
segments. The Mirobot robotic arm adopts a trapezoidal method that is relatively easy
to implement.
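For a single segment, planning the trapezoidal profile reduces to choosing a peak speed compatible with the entry speed, exit speed, cruise limit, and acceleration. The sketch below assumes one symmetric acceleration/deceleration limit and is an illustration, not Mirobot's firmware:

```python
import math

def trapezoid_profile(distance, v_entry, v_exit, v_cruise, accel):
    """Plan a trapezoidal speed profile for one small line segment.

    Returns (accel_dist, cruise_dist, decel_dist, v_peak). If the segment
    is too short to reach v_cruise, the profile degenerates to a triangle.
    Assumes |v_entry^2 - v_exit^2| <= 2*accel*distance, i.e. the exit
    speed is actually reachable within the segment.
    """
    # Highest speed reachable if we accelerate, then immediately decelerate.
    v_peak = math.sqrt((2.0 * accel * distance + v_entry**2 + v_exit**2) / 2.0)
    v_peak = min(v_peak, v_cruise)
    d_acc = (v_peak**2 - v_entry**2) / (2.0 * accel)   # acceleration phase
    d_dec = (v_peak**2 - v_exit**2) / (2.0 * accel)    # deceleration phase
    d_cruise = max(0.0, distance - d_acc - d_dec)      # constant-speed phase
    return d_acc, d_cruise, d_dec, v_peak
```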
In the control system, each small line segment of the movement of the robot arm is
abstracted as a “block” structure, as shown in Fig. 7. Each block contains information
such as the entry speed, normal speed, and movement distance of the small line segment.
The role of speed look-ahead is to plan the movement speed of these small line segments, connecting the exit speed of each segment to the entry speed of the next under the given speed constraints, as shown in Fig. 8. This ensures that the movement speed does not change abruptly, thereby guaranteeing the smoothness and stability of the movement.
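This end-to-end speed linking can be sketched as the backward/forward pass used by open-source motion planners such as Grbl. The Block fields mirror those described for Fig. 7, but the scheme below is an illustrative assumption rather than Mirobot's exact implementation:

```python
import math
from dataclasses import dataclass

@dataclass
class Block:
    """One small line segment of the planned motion (cf. Fig. 7)."""
    distance: float              # segment length
    nominal_speed: float         # desired cruise speed on this segment
    junction_speed: float = 0.0  # geometric speed limit at the segment's entry
    entry_speed: float = 0.0     # planned entry speed after look-ahead

def plan_lookahead(blocks, accel):
    """Link entry/exit speeds across a queue of blocks (assumed scheme).

    Backward pass: each entry speed must allow deceleration to the next
    block's entry speed within the block's distance. Forward pass: each
    entry speed must be reachable by accelerating from the previous
    block's entry speed. The queue starts and ends at standstill.
    """
    next_entry = 0.0  # the queue ends at rest
    for b in reversed(blocks):
        limit = math.sqrt(next_entry**2 + 2.0 * accel * b.distance)
        b.entry_speed = min(b.junction_speed, b.nominal_speed, limit)
        next_entry = b.entry_speed
    prev_entry, prev_dist = 0.0, 0.0  # the queue also starts from rest
    for b in blocks:
        reachable = math.sqrt(prev_entry**2 + 2.0 * accel * prev_dist)
        b.entry_speed = min(b.entry_speed, reachable)
        prev_entry, prev_dist = b.entry_speed, b.distance
    return blocks
```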
computer provides the single-axis motion function of the robotic arm, the posture control function in Cartesian mode, the teaching function, the Blockly [16] graphical programming function, and the drawing function. These functions of the Mirobot robot are designed according to common teaching and laboratory research needs.
For convenient operation, we also developed a simple mobile phone app and a remote control for the Mirobot robotic arm. Both the mobile phone and the remote control
can use Bluetooth to wirelessly connect with the Mirobot robotic arm to achieve wireless
control of the robotic arm. Figure 10(left) shows the App software interface on the mobile
phone, and Fig. 10(right) shows the remote control interface.
Fig. 11. The operator used his arm and hand to control the robotic arm
As shown in Fig. 12, a watercolor pen was installed on the end flange of the robot,
and the robot wrote a Chinese character “Yong”.
We wrote a URDF file for the Mirobot robot and imported it into ROS's Gazebo simulation environment. The MoveIt! plug-in in ROS was used for motion planning of the Mirobot robot. As shown in Fig. 13, through the MoveIt! plug-in, the virtual robot arm in Gazebo and the external real robotic arm were simultaneously controlled for motion planning.
8 Conclusion
In this paper, we propose a low-cost desktop robot named Mirobot that is suitable for
robotics education and scientific research. In mechanical structure design, we use stepper
motors to form the joints of the robot and use 3D printing technology to manufacture each
structural part. In the design of the control system, we use the look-ahead speed control algorithm to ensure the stability of the movement, together with an inverse kinematics algorithm of low computational complexity. The computer control software and remote
control designed for the robot are presented. We showed some typical applications of
the Mirobot robot in writing and scientific research experiments.
References
1. Ceccarelli, M.: Robotic teachers’ assistants - low-cost robots for research and teaching
activities. IEEE Robot. Autom. Mag. 10, 37–45 (2003)
2. Quigley, M., Asbeck, A., Ng, A.: A low-cost compliant 7-DOF robotic manipulator. In: 2011
IEEE International Conference on Robotics and Automation. IEEE, New York (2011)
3. McLurkin, J., Rykowski, J., John, M., Kaseman, Q., Lynch, A.J.: Using multi-robot systems
for engineering education: teaching and outreach with large numbers of an advanced, low-cost
robot. IEEE Trans. Educ. 56, 24–33 (2013)
4. Eilering, A., Franchi, G., Hauser, K.: ROBOPuppet: low-cost, 3D printed miniatures for
teleoperating full-size robots. In: 2014 IEEE/RSJ International Conference on Intelligent
Robots and Systems, pp. 1248–1254 (2014)
5. Ai, Q.S., Yang, Q.F., Li, M., Feng, X.R., Meng, W.: Implementing multi-DOF trajectory
tracking control system for robotic arm experimental platform. In: 2018 10th International
Conference on Measuring Technology and Mechatronics Automation, pp. 282–285. IEEE,
New York (2018)
6. UFACTORY Homepage. https://www.ufactory.cc/#/en/uarmswift. Accessed 26 Jan 2021
7. Sahin, O.N., Uzunoglu, E., Tatlicioglu, E., Dede, M.I.C.: Design and development of an
educational desktop robot R3D. Comput. Appl. Eng. Educ. 25, 222–229 (2017)
8. Ghadamli, F., Linke, B.: Development of a desktop hybrid multipurpose grinding and 3D
printing machine for educational purposes. In: Shih, A., Wang, L. (eds.) 44th North American
Manufacturing Research Conference, NAMRC 44, vol. 5, pp. 1143–1153 (2016)
9. Linke, B., Harris, P., Zhang, M.: Development of desktop multipurpose grinding machine for
educational purposes. In: Shih, A.J., Wang, L.H. (eds.) 43rd North American Manufacturing
Research Conference, NAMRC 43, vol. 1, pp. 740–746 (2015)
10. Grana, R., Ghio, A., Weston, A., Ramos, O.E., Sect, I.P.: Development of a low-cost robot
able to write (2017)
11. Shevkar, P., Bankar, S., Vanjare, A., Shinde, P., Redekar, V., Chidrewar, S.: A Desktop SCARA
robot using stepper motors (2019)
12. Todd, J., Schuett, A.: Closer look at look-ahead, speed and accuracy benefits. Creative
Technology Corp (1996)
13. Tsai, M.-S., Nien, H.-W., Yau, H.-T.: Development of a real-time look-ahead interpolation
methodology with spline-fitting technique for high-speed machining. Int. J. Adv. Manuf.
Technol. 47, 621–638 (2010)
14. Attaran, M.: The rise of 3-D printing: the advantages of additive manufacturing over traditional
manufacturing. Bus. Horiz. 60, 677–688 (2017)
15. Zhou, D., Xie, M., Xuan, P., Jia, R.: A teaching method for the theory and application of robot
kinematics based on MATLAB and V-REP. Comput. Appl. Eng. Educ. 28, 239–253 (2020)
16. García-Zubía, J., Angulo, I., Martínez-Pieper, G., Orduña, P., Rodríguez-Gil, L., Hernandez-
Jayo, U.: Learning to program in K12 using a remote controlled robot: RoboBlock. In: Auer,
M.E., Zutin, D.G. (eds.) Online Engineering & Internet of Things. LNNS, vol. 22, pp. 344–
358. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-64352-6_33
17. Mizera, C., Delrieu, T., Weistroffer, V., Andriot, C., Decatoire, A., Gazeau, J.P.: Evaluation
of hand-tracking systems in teleoperation and virtual dexterous manipulation. IEEE Sens. J.
20, 1642–1655 (2020)
Developing an Introduction to ROS
and Gazebo Through the LEGO SPIKE
Prime
1 Introduction
the physical robot build allowing for prototyping and testing in a simulation
sandbox, greatly reducing physical prototyping time and increasing accessibility
for users. Once testing of the virtual twin is completed, results will be deployed
to the physical robot for final physical testing and tuning.
In [1] the authors write about utilizing simulation software in the context of
introducing inverse kinematics and programming by framing industrial robots as
the virtual twins. However, given the complexity of industrial robotics for begin-
ning students, a simplified learning platform may also be successful in introduc-
ing fundamental topics.
With the hope of lowering costs and increasing accessibility to robotics, prior
research has investigated the utilization of low-cost robotics kits and systems [2],
mainly Arduino [5] and LEGO education platforms [6].
LEGO Education’s platforms have been widely adopted in universities and K-12 curricula to provide the base frames for the robotics design
process [7,12]. Given access to LEGO bricks and easily interfaced hardware and
software, the student is quickly able to dive into the design process. By utilizing
this platform, educators have been able to introduce complex robotics topics,
such as reinforcement learning [10] and autonomous operation [12].
Our goal for this research project was to build on previous work leveraging
LEGO platforms in curriculum by using the LEGO SPIKE Prime [13] platform
as the means to introduce ROS 2 and Gazebo to college students. We outline the process we developed, combining Onshape [14], ROS 2, and Gazebo, to allow for the creation and programming of a LEGO SPIKE Prime virtual
twin. We will cover our process of creating an asset library of LEGO pieces in
Onshape, the build and design procedure of the virtual twin, and the control
and simulation in Gazebo using ROS 2.
2 Connecting Platforms
2.1 Process Flow Overview
The process flow shown in Fig. 1 demonstrates a high-level overview of the devel-
oped connection. The users will first design a SPIKE Prime robot using the cre-
ated Onshape part library. They will then export their robots to URDF/SDF
format and create their own ROS 2 package. Lastly, the users will simulate
their builds in Gazebo, deploying the simulation results to control their physical
builds. This section of the paper will dive into each of these respective categories
and our solution to packaging this process into a deliverable format.
Fig. 1. Complete process overview of building and simulating a virtual LEGO SPIKE
Prime robot in Onshape.
Unlike competing CAD suites, a single part document in Onshape can contain
multiple solid-body models. This unique format allows for the entire LEGO
part library to be created in a single document, which can be made accessible
to all Onshape users. Leveraging this format, we configured each piece into its respective part category, e.g., liftarm or square brick. This configuration-driven format allowed for a final assembly workflow, which can be seen in Fig. 2. In
the insertion menu on the left, the user can select a part from the desired part
category. This selection will then be populated into the assembly build space to
the right.
The initial goal of creating a virtual twin that utilizes the new asset library in
Onshape led to the discovery of a Python 3 [20] package, onshape-to-robot [21].
Onshape-to-robot utilizes the Onshape API to export assemblies into SDF
format which is compatible with ROS and Gazebo. The SDF format provides
a full dynamic description of a collection of rigid bodies. By connecting LEGO
pieces in parent/child style during the assembly process, the exporter identifies
the trunk link and obtains each concurrent link’s visual, collision, and inertial
information. Inertial data was determined using a LEGO-like ABS material from
the Onshape material library.
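For intuition, the inertial quantities such an exporter needs can be approximated with the standard solid-cuboid formulas and an assumed ABS density (the actual export reads material properties from Onshape, so the values below are illustrative only):

```python
ABS_DENSITY = 1040.0  # kg/m^3, a typical value for ABS plastic (assumed)

def cuboid_inertia(x, y, z, density=ABS_DENSITY):
    """Mass and principal moments of inertia of a solid cuboid of
    dimensions x*y*z (metres) about its centre of mass."""
    m = density * x * y * z
    ixx = m * (y**2 + z**2) / 12.0  # about the x axis
    iyy = m * (x**2 + z**2) / 12.0  # about the y axis
    izz = m * (x**2 + y**2) / 12.0  # about the z axis
    return m, ixx, iyy, izz
```

In the real pipeline these values are taken per mesh from the CAD model; the closed-form cuboid case simply shows what "inertial information" means in the SDF description.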
204 O. Gervais and T. Patrosio
Fig. 2. Example part library usage during the build process of a virtual LEGO SPIKE
Prime robot in Onshape
In order to use the newly exported robot model with ROS 2, the user creates a
new ROS 2 Python package with the previously supplied pathname. The user
then copies the onshape-to-robot meshes and sdf folders into the ROS 2 package.
A sample launch folder and setup.py file is also supplied to the user ensuring
that all dependency and description file paths are generated correctly. Figure 4
illustrates the described process.
After completion of these steps, the user will be able to build the package and
launch the ROS 2 start file, spawning their robot into Gazebo. As imported, the
robot will not be controllable until a joint controller is applied to the model. For
the scope of this project, we worked with a built-in differential-drive controller
from gazebo_ros_pkgs [22], which generated the necessary topics, /cmd_vel (wheel velocity control) and /odom (odometry of the base frame), for operation.
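The controller's job can be illustrated by the kinematic mapping from a /cmd_vel-style twist (linear x, angular z) to individual wheel speeds; the wheel radius and track width below are assumed example values, not measurements of the SPIKE Prime build:

```python
# Map a /cmd_vel-style body twist to wheel angular speeds for a
# differential-drive robot. Wheel radius and track width are assumed
# example values, not measurements of the SPIKE Prime build.

WHEEL_RADIUS = 0.028  # metres (assumed)
TRACK_WIDTH = 0.112   # distance between wheel centres, metres (assumed)

def twist_to_wheel_speeds(linear_x, angular_z):
    """Return (left, right) wheel angular speeds in rad/s."""
    v_left = linear_x - angular_z * TRACK_WIDTH / 2.0
    v_right = linear_x + angular_z * TRACK_WIDTH / 2.0
    return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS
```

A pure linear command drives both wheels equally; a pure angular command spins them in opposite directions, which is exactly the behaviour the plug-in reproduces in simulation.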
Fig. 3. Onshape-to-robot workflow and output folder structure, meshes and sdf
Fig. 4. Process of copying asset folders and supplemental material into the new ROS
2 package
We then converted the virtual twin from an Onshape assembly into SDF
format and created the new ROS 2 package using the supplied materials in
the Ubuntu virtual machine. After creating a simple PID control, utilizing the
orientation of the SPIKE Prime Hub for pose feedback, we tuned the system until
we achieved stability in Gazebo for three seconds. Captures from the Gazebo
simulation can be seen in Fig. 7.
We then deployed these generated gains to the physical counterpart of the
self-balancing LEGO SPIKE Prime robot. By controlling motor speed with
PWM, we were able to achieve dynamic stability with the same gain values
generated in Gazebo.
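A minimal sketch of such a discrete PID loop is shown below; the gains and loop period are placeholders, not the tuned values that were deployed:

```python
class PID:
    """Minimal discrete PID controller (placeholder gains)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """One control step: error in, actuator command out."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. pitch error from the Hub's orientation drives the motor PWM duty;
# gains here are illustrative, not the deployed values.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
```

Because the same gain structure runs in simulation and on the physical robot, gains tuned against the virtual twin can be carried over directly, as described above.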
Fig. 7. Screen capture from the tuning process during Gazebo simulation
4 Discussion
In this project, we were motivated to introduce virtual robotics design and sim-
ulation tools, ROS 2 and Gazebo, through the common framework of the LEGO
SPIKE Prime. The choice of the LEGO Education robotics platform stemmed
from its high level of complexity abstraction. By providing intuitive electronics,
materials, and programming environments, the LEGO SPIKE Prime lowers the
barrier of entry to robotics for beginning users.
The creation of this process was intended to bridge the knowledge gap that
arises when students wish to add ROS and Gazebo to their toolboxes. In order
to build, design, and simulate using these tools, students have to first gain high
proficiency in CAD and programming before they produce their first iteration.
By removing the need to delve into the inner workings of each software package, complexity can be abstracted away in order to introduce ROS and Gazebo earlier.
By providing the entire LEGO SPIKE Prime set in Onshape, we remove the
need for students to model each individual component. This allows the students
to focus on learning assemblies and mechanical design more broadly. Following
the process outlined in this paper, the student is then able to quickly export
their model, start their own physics simulations, and set up control theory testing environments. This allows the students to focus on learning the basics of
ROS and Gazebo without the setup time of structuring a ROS 2 package that
connects with Gazebo.
Throughout this process, we ran into many issues while streamlining the workflow
and finding the best way to balance abstraction against valuable learning
opportunities. After personal exploration of the respective programs, we found a
method of setting repeatable paths to all of the assets and packaged this work
in the form of a VMware virtual machine.
While the complexity of this process is currently out of reach for our
original middle-school audience, there are still exciting possibilities for future work
introducing machine learning and reinforcement learning through this medium.
In the proof-of-concept example in Sect. 3, all PID gain tuning was done by hand.
Future ventures could foreseeably connect AI training to this platform in order
to teach different control approaches.
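As a point of reference for the hand tuning mentioned above, a minimal discrete PID loop might look like the following sketch. The gains and the toy first-order plant are illustrative values of our own, not the values used in the paper's Gazebo setup.

```python
# Minimal discrete PID controller of the kind tuned by hand in Sect. 3.
# Gains and the toy plant below are illustrative assumptions, not the paper's.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order plant x' = -x + u, driven toward a setpoint of 1.0
# with explicit Euler steps (dt = 0.01 s, 20 s of simulated time).
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):
    u = pid.update(1.0, x)
    x += (-x + u) * 0.01
```

An AI-driven tuner, as envisioned above, would adjust `kp`, `ki`, and `kd` automatically based on the simulated response instead of relying on manual iteration.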
Looking to streamline the process further, we could implement a bash script
that would tie the file renaming and ROS 2 package creation into a one-line oper-
ation. With this new workflow, after the student creates their model in Onshape,
they would be able to run this bash command and quickly dive into the ROS
and Gazebo environment.
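The paper proposes a bash one-liner for this step; as an illustration of what such automation might look like, here is a hedged Python sketch of the same idea (Python is already in the toolchain via onshape-to-robot). The package name "spike_sim" and the directory layout are our own assumptions, not the paper's.

```python
# Sketch of the proposed one-step automation: rename exported model files and
# scaffold a minimal ROS 2 package layout around them. The package name
# "spike_sim" and the layout are illustrative assumptions.
from pathlib import Path

def scaffold_ros2_package(export_dir: str, pkg_name: str = "spike_sim") -> Path:
    """Move Onshape-exported URDFs into a fresh ROS 2 package skeleton."""
    pkg = Path(export_dir) / pkg_name
    for sub in ("urdf", "launch", "meshes"):
        (pkg / sub).mkdir(parents=True, exist_ok=True)
    # Give every exported .urdf a predictable name that launch files can rely on.
    for i, urdf in enumerate(sorted(Path(export_dir).glob("*.urdf"))):
        urdf.rename(pkg / "urdf" / f"robot_{i}.urdf")
    # Minimal manifest; a real package also needs setup.py or CMakeLists.txt.
    (pkg / "package.xml").write_text(
        f"<?xml version='1.0'?>\n<package format='3'>\n"
        f"  <name>{pkg_name}</name>\n</package>\n"
    )
    return pkg
```

A student would run this once after export and then open the generated package directly in their ROS 2 workspace.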
5 Conclusion
References
1. Lopez-Nicolas, G., Romeo, A., Guerrero, J.: Simulation tools for active learn-
ing in robot control and programming, pp. 1–6. (2009). https://doi.org/10.1109/
EAEEIE.2009.5335490
2. Irigoyen, E., Larzabal, E., Priego, R.: Low-cost platforms used in control education:
an educational case study. IFAC Proc. Volumes 46(17), 256–261 (2013). https://
doi.org/10.3182/20130828-3-UK-2039.00058
3. ROS 2. https://index.ros.org/doc/ros2/
4. Gazebo. http://gazebosim.org/
5. Arduino. https://www.arduino.cc/
6. LEGO Education. https://education.lego.com/en-au/
7. Edwards, G., et al.: A test platform for planned field operations using LEGO
Mindstorms NXT. Robotics 2(4), 203–216 (2013). https://doi.org/10.3390/robotics2040203
8. Danahy, E., Wang, E., Brockman, J., Carberry, A., Shapiro, B., Rogers, C.B.:
LEGO-based robotics in higher education: 15 years of student creativity. Int. J.
Adv. Robot. Syst. 11(2), 27 (2014). https://doi.org/10.5772/58249
9. Cheng, C.-C., Huang, P.-L., Huang, K.-H.: Cooperative learning in lego robotics
projects: exploring the impacts of group formation on interaction and achievement.
J. Netw. 8(7), 1529–1535 (2013). https://doi.org/10.4304/jnw.8.7.1529-1535
10. Martinez-Tenor, A., Cruz-Martin, A., Fernandez-Madrigal, J.A.: Teaching
machine learning in robotics interactively: the case of reinforcement learning
with lego mindstorms. Interact. Learn. Environ. 27(3), 293–306 (2019). https://
doi.org/10.1080/10494820.2018.152541
11. Fanchamps, N.L.J.A., Slangen, L., Hennissen, P., Specht, M.: The influence of SRA
programming on algorithmic thinking and self-efficacy using Lego robotics in two
types of instruction. Int. J. Technol. Des. Educ. (2019). https://doi.org/10.1007/
s10798-019-09559-9
12. Akin, H.L., Mericli, C., Mericli, T.: Introduction to autonomous mobile robotics
using Lego mindstorms NXT. Comput. Sci. Educ. 23(4), 368–386 (2013). https://
doi.org/10.1080/08993408.2013.838066
13. LEGO SPIKE Prime. https://education.lego.com/en-us/products/lego-education-
spike-prime-set/45678#spike
14. Onshape. https://www.onshape.com/en/
15. GrabCAD. https://grabcad.com/
16. Thingiverse. https://www.thingiverse.com/
17. LEGO SPIKE Prime Onshape Document. https://cad.onshape.com/documents/
f80b668b3ae9c732b3c7d91b/w/cc29213b0eb52b9d3bc554e2/e/78708f75527b34f
b9f21b768
18. Solidworks. https://www.solidworks.com/
19. Fusion 360. https://www.autodesk.com/products/fusion-360/overview
20. Python. https://www.python.org/
21. Rhoban, onshape-to-robot, Github Repository (2020). https://github.com/
Rhoban/onshape-to-robot
22. gazebo_ros_pkgs. http://wiki.ros.org/gazebo_ros_pkgs
23. VMware. https://www.vmware.com/
24. Ubuntu. https://ubuntu.com/
25. Google Drive. https://www.google.com/intl/en/drive/
26. Virtual Machine Google Drive Hosting. https://drive.google.com/drive/folders/
1jaeOl53kcf-iK5GLCMRy_wXDmsG37tWbi?usp=sharing
Constructing Robots for Undergraduate
Projects Using Commodity Aluminum
Build Systems
John Seng
1 Introduction
Robotics education in the college classroom environment can take shape in many
forms, including: using small robots in a class, having students participate in
large scale team projects, or using virtual environment simulators. In the first
example, there is much students can learn from smaller-scale robots (e.g. Arduino-powered
designs), but they may miss out on larger system integration
issues and may need more compute capability. When working on a large
scale team project, such as a group of students working on an autonomous car,
students may encounter system level engineering issues, but may become pigeon-
holed into the area of the project they are working on. In the last example of
a simulation environment, students may be missing the hands-on aspect that
many students appreciate when working in the field of robotics.
In our work, we focus on the area of what we categorize as medium-sized
robots (i.e. microwave oven-sized) which have a single Linux computer for
compute, coupled with several other sensors (e.g. cameras, lidar). Having
medium-sized robots available for undergraduate students to work on as projects
can be very effective in improving student learning and can expose them to many
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 210–220, 2022.
https://doi.org/10.1007/978-3-030-82544-7_20
Constructing Robots for Undergraduate Projects 211
software, mechanical, and electrical issues that they may encounter when work-
ing on an engineering project in their future careers.
Commercial research robots in the university environment can often be
expensive and not very configurable if research needs change over time. In this
work, we discuss the use of commodity aluminum build systems (ones that are
often marketed towards high school robotics competitions) and our experience
in utilizing such systems for undergraduate projects, along with the computer
systems used in the robots.
We have used these modern aluminum build systems to construct two gen-
erations of project robots that we call Herbie 1 and Herbie 2. These robots are
used at our university as a foundation for student projects and for undergraduate
research. The robots use different build systems and we outline the differences
between the systems.
In this paper, we will describe the robot designs in parallel and cover the
various design aspects of each robot. This work focuses on robots that are not
intended to be duplicated for multiple student groups in a classroom setting.
That is completely possible with our work, but it is currently not viable in our
university’s educational environment and budget. Instead, our robots are used
as development platforms for senior design projects where a few students work
on a particular aspect of the robot at a time.
This paper is organized as follows: Sect. 2 outlines prior work that is related
and relevant to this project. Section 3 outlines the construction systems used in
the robot designs and covers the benefits and drawbacks of using the systems in
an undergraduate student environment. Section 4 describes the compute hard-
ware and software used on the Herbie robots. Section 5 covers the experience
gained from utilizing these robots. Section 6 concludes.
2 Related Work
Aside from the construction systems that are discussed in our work, there are a
number of other modular aluminum construction systems that are available on
the market. Some of these include Tetrix Max [12], Rev Robotics [13] channel
and extrusion, and the Andy Mark S3 system [14]. The Tetrix Max system by
Pitsco Education is very similar to the construction systems that we utilize. The
system consists of aluminum channel and provides flexibility in motor mounting.
The channel design offered by Rev Robotics has the unique feature of having
slots to allow easy slide mounting of brackets in a manner similar to a T-slot.
The S3 system by Andy Mark provides a custom box aluminum extrusion for
building structures. We find that all three related systems target FTC (FIRST Tech
Challenge) contestants on their respective web sites. For our work, we selected
the Actobotics [2] system and the goBilda [3] system based on their merits,
academic discount pricing, and market availability.
One research robot that utilizes the Actobotics construction system is a snake
robot developed by Dear et al. [6]. The authors use U-channel as the body of the
snake and utilize servos to actuate the locomotion of the robot. In that work,
212 J. Seng
Fig. 1. Photograph of Actobotics (on the left) and goBilda channel (on the right).
The channel is U-shaped with regularly spaced larger holes for mounting bearings and
axles. The smaller holes can be used for fastening brackets and other connectors.
the authors studied snake robot locomotion when motion constraints are placed
at each segment of the robot. Work by Ray et al. [7] uses an Actobotics gripper
to implement techniques to efficiently detangle salad from plant material. These
works focus primarily on using the Actobotics components for research work and
not in the educational domain.
Ramos et al. [8] describe an open source platform that is designed for teaching
legged robot control in a classroom environment. The hardware structure they
present is constructed using the goBilda system.
Fig. 2. Diagram of Actobotics pattern spacing. The larger center hole is for mounting
ball bearings and the smaller holes can be used for brackets and connectors. Image
used with permission [4].
3 Construction Systems
In our work, we utilize two construction systems and discuss their suitability
to building robots for university-level student projects and research. These build
systems are produced by the same parent company and we outline the differences
between the systems. We are not affiliated with the parent company and have
selected these build systems on their merits.
3.1 Actobotics
Actobotics is a build system based on the Imperial measurement system. The
structural components of the Actobotics system are fastened together using 6–32
screws. These size screws are sold in various lengths and are sufficient for the
robot size we are using (approximately 10 pounds total weight).
The primary structural component of the Actobotics system is U-channel
which has a 1.5 in. square U-shaped cross section (open on one side), with holes
drilled as shown in Fig. 2. The channel comes in lengths based on the number of
holes in the channel (e.g. 3 hole, 5 hole, 7 hole).
Figure 2 shows the layout of holes in Actobotics channel. In the Actobotics
build system, the main holes are 0.502 in. in diameter and are intended to house
0.5 in. flanged ball bearings and allow axles (commonly sized to 1/4 in. diameter)
to pass through. One significant feature of both of these build systems is the
ability to utilize ball bearings in various locations of a robot build. The flange
on the ball bearing allows it to be easily fitted from one side of the channel while
having the flange prevent the bearing from falling through.
Surrounding the 0.502 in. primary hole are 8 smaller holes that allow a 6–32
bolt to pass through. These 8 holes are not threaded and allow beams and
brackets to be connected when the connecting holes are spaced 0.770 in. apart. The
8 holes allow mounting items conveniently at a 45◦ angle to the U-channel.
U-channel on its own does not provide the strength that an equivalently sized
Fig. 3. Diagram of goBilda pattern spacing. Holes are drilled on an 8 mm grid pattern.
Image used with permission [5].
3.2 GoBilda
The Herbie 1 robot is constructed entirely using the Actobotics build system and
there was a significant investment in the system. At the time of the design of
Herbie 2, the goBilda system had just become available, and its advantages
were significant enough to justify investment in the newer system.
One of the main differentiators of the goBilda build system is that the system
is completely metric. All bolt lengths, channel lengths, and hole spacings are
specified in millimeters. Although this fact in itself was not a primary reason to
move to the build system, one benefit is that the system was designed to use
M3 metric screws. The larger diameter M3 screws were a better design choice
for the size of the Herbie 2 robot to be constructed.
A second advantage of the goBilda system is that the standard channel is
larger. The exterior dimension of the square U-channel is 48 mm, with the
interior dimension being 43 mm. This increased size allows motors to fit inside
the channel to produce a very compact right-angle drivetrain. Because Herbie
1 and Herbie 2 needed to pass through doorways, robot width was always an
important design constraint. The right angle drivetrain allows the robot width
to not be limited by the length of the drive motors. The goBilda system provides
many different wheel options, but for Herbie 2 it was determined that a larger
wheel diameter was required. 8 in. pneumatic kick-scooter wheels were found to
be suitable, and they worked with an 8 mm axle that is available in goBilda.
As shown in Fig. 3, the main center hole for bearings is 14 mm. This hole is
surrounded by 4 mm holes arranged on an 8 mm grid. The grid allows components
to be connected in several locations, and the holes at the corners surrounding the
14 mm hole are enlarged to match a 16 mm grid spacing when connecting items
at 45◦.
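The grid geometry described above can be checked with a few lines. Coordinates are in millimetres, and the layout is our reading of the pattern in Fig. 3.

```python
# Quick geometric check of the goBilda pattern: 4 mm holes on an 8 mm grid,
# with the enlarged holes at the corners of the grid cell around each 14 mm
# bearing bore. Coordinates in mm; layout is our reading of Fig. 3.
import math

corners = [(x, y) for x in (-8, 8) for y in (-8, 8)]

# Each corner hole sits 8*sqrt(2) ≈ 11.31 mm from the bore, on a 45° diagonal.
for cx, cy in corners:
    assert math.isclose(math.hypot(cx, cy), 8 * math.sqrt(2))

# Adjacent corner holes are 16 mm apart, i.e. the corners form a 16 mm square —
# the "16 mm grid spacing" that the enlarged holes are matched to.
side = math.dist((-8, -8), (-8, 8))
assert side == 16.0
```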
3.3 Comparison
While both build systems provide similar functionality, the ability to place drive
motors inside U-channel provided a significant advantage for our application
in Herbie 2. If there were no width requirement for our particular application,
Actobotics would have served just as well in terms of drivetrain capabilities for
the second generation design.
The standard grid spacing of goBilda was very useful when designing the
sensor mounts for Herbie 2. Several of the sensor brackets are 3-D printed with
8 mm holes spaced to match goBilda and this allowed the sensor positions to be
easily modified on the robot when necessary. We have found that because some
of our sensors and components are sourced from international companies, the
metric spacing of goBilda has proven to be advantageous.
Both Actobotics and goBilda provide a variety of mounts and channel con-
nectors. There are various mounts to hold DC motors to U-channel and also
various patterned mounts which have drilled out holes to match the spacing
pattern of the channel.
In terms of additional structural components, both systems provide extruded
aluminum rail (similar to 80/20 aluminum extrusion [15], but matching the
respective channel hole spacing). This extruded railing allows infinite adjust-
ment of bracket mounting positions. Additionally, both systems provide timing
belts and chain for power transmission. At this time, we have not utilized these
components for drivetrain design.
square footprint. Because of the smaller footprint, we use a 3-D printed mount-
ing bracket to hold the AGX Xavier board. The Jetson Xavier provides support
for an M.2 SSD and that is used to hold additional data and record video. For
low level control, the Arduino Uno is replaced by a Sparkfun Redboard Turbo
[16]. A picture of Herbie 2 under development is shown in Fig. 5.
The sensor set for both robots is similar, with some slight upgrades where available.
Because the batteries of both robots are relatively small (4S LiPo, 6000 mAh),
there is a limitation on the number of sensors that can be on-board.
For camera input, both robots use the ZED stereo camera with Herbie 1
using the original ZED camera and Herbie 2 using the ZED 2. These cameras
provide a depth map which is computed using the on-board GPU of the NVIDIA
boards.
For Herbie 2, we add an additional 2-D laser scanner for obstacle detection
and for building maps. For both robots, we have used the RTABMAP SLAM
system [17]. This SLAM system allows for map building using visual input as
well as with laser scanner input. RTABMAP has built-in support for ROS and
supports several feature detection and matching techniques.
5 Experience
In this section, we outline two types of experience that we have obtained through
the Herbie 1 and Herbie 2 projects. The first is our experience in using the
modular construction systems. The second section describes student involvement
in the projects.
6 Conclusion
In this work, we describe the construction of two robots for undergraduate stu-
dent projects: Herbie 1 and Herbie 2. Instead of purchasing pre-built robot chas-
sis designs, the structures for these robots are constructed using Actobotics and
goBilda. These are two commodity aluminum build systems that are commonly
used in high school FIRST robotics competitions. We find these build systems to
be robust, reconfigurable, and reusable - all characteristics needed for develop-
ing robots for undergraduate student projects and research. The build systems
provide the ability to quickly construct a stable platform for carrying embedded
computers, sensors, and batteries.
Throughout the development of these robots, we have incorporated under-
graduate student work through various projects. Some students have been
involved in designing the chassis itself while other students have been a part
of adding various features (mechanical, electrical, and software) to the robots.
In addition, students have been involved in developing software infrastructure
to enable future groups to develop machine learning models to enhance robot
capabilities.
References
1. Burack, C., Melchior, A., Hoover, M.: Do after-school robotics programs expand
the pipeline into STEM majors in college? J. Pre-College Eng. Educ. Res. 9(2),
1–13 (2019)
2. Actobotics. https://www.servocity.com/actobotics
3. goBilda. https://www.gobilda.com
4. Actobotics Pattern. https://cdn11.bigcommerce.com/s-tnsp6i3ma6/images/
stencil/original/products/3671/21044/585440-Schematic 02283.1583537050.png?
c=2
5. goBilda Pattern. https://cdn11.bigcommerce.com/s-eem7ijc77k/images/stencil/
original/products/846/27263/1120-0001-0048-Schematic 02571.1605298181.png?
c=2
6. Dear, T., Buchanan, B., Abrajan-Guerrero, R., Kelly, S., Travers, M., Choset, H.:
Locomotion of a multi-link non-holonomic snake robot with passive joints. Int. J.
Robot. Res. 39(5), 598–616 (2020)
7. Ray, P., Howard, M.: Robotic untangling of herbs and salads with parallel grippers.
In: International Conference on Intelligent Robots and Systems (2020)
8. Ramos, J., Ding, Y., Sim, Y., Murphy, K., Block, D.: HOPPY: an open-source and
low-cost kit for dynamic robotics education. https://arxiv.org/pdf/2010.14580.pdf
9. NVIDIA Robotics Teaching Kit. https://developer.nvidia.com/teaching-kits
10. Erector by Meccano. http://www.meccano.com/products
11. VEX Robotics. https://vexrobotics.com
12. Tetrix Robotics. https://www.pitsco.com/Shop/TETRIX-Robotics
13. REV Robotics. https://www.revrobotics.com/
14. Andy Mark S3 Tubing. https://www.andymark.com/products/s3-tubing-options
15. 80/20 T-Slot Aluminum Building System. https://8020.net/
16. Sparkfun Redboard Turbo. https://www.sparkfun.com/products/14812
17. Labbe, M., Michaud, F.: RTAB-map as an open-source lidar and visual SLAM
library for large-scale and long-term online operation. J. Field Robot. 36(2), 416–
446 (2019)
18. Tan, M., Le, Q.: EfficientNet: rethinking model scaling for convolutional neural
networks. In: International Conference on Machine Learning (2019)
Hands-On Exploration of Sensorimotor
Loops
1 Introduction
In robotics research, among other things, researchers are working intensively on
sensorimotor coupling. This refers to the coupling of sensory stimuli with motor
activity in natural and in artificial systems. The sensors provide information
about the environment (for example visual stimuli) and about the state of the
own body (for example temperature). To survive in an environment, a system
must process this information and translate it into actions that satisfy its needs
and, if possible, improve its state. Since there is an interdependence, actions
performed can influence sensory information and vice versa. This ongoing pro-
cess is called a sensorimotor loop. To enable robots to interact in our highly
complex and unstructured environment, it can be helpful if they, too, have some
sort of coupling of their sensors to their actuators. These can be simple wirings
from sensors to motors, such as those described in [Braitenberg 1986], or more
complex networks, such as in [Sun et al. 2020]. However, it can be said that the
study of sensorimotor couplings is still in its infancy. Therefore, it is important to
incorporate the study of sensorimotor couplings into education in relevant top-
ics (such as humanoid robotics in this paper) so that future researchers can be
exposed to it at an early stage. However, it is often difficult to teach these com-
plex relationships and the mathematical background in an easily understandable
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 221–228, 2022.
https://doi.org/10.1007/978-3-030-82544-7_21
222 S. Untergasser et al.
2 Sensorimotorics Curriculum
In the following, an overview of topics from the curriculum of the sensorimotor
course is given first. Afterwards, it is described how this curriculum is used,
the individual topics are put into context, and it is argued why they are relevant
to robotics.
– Dynamixel XH430 Motor
– Dynamical Systems Theory
– Sensorimotor Loops
– Neural Networks
– Artificial Evolution
– Central Pattern Generators etc.
This list of topics comprises the “sensorimotorics” lecture (second semester) in
the program Humanoid Robotics at Beuth University of Applied Sciences in
Berlin. This lecture continues topics from the first semester (reactive robotics,
electronics, mathematics) and prepares for the following semesters (cognitive
robotics, adaptive systems, control theory). In the course of this lecture, the
students deepen their knowledge using different robots. To do so, they first need
to understand how a typical robot actuator works, and how they can use it. So
in the first weeks, an important topic is the Dynamixel XH430 actuator and how
to program it. Equipped with this actuator, different sensorimotor behaviors
Sensorimotor Curriculum 223
can be explored. The mathematical tools for sensorimotor systems come from
dynamical systems theory. A dynamical system is the mathematical description
of a system that changes over time. In most cases, the exact behavior of more
complex systems cannot be predicted, but qualitative statements can be made.
For example, specific states, called attractors, can be described toward which
a system moves (e.g., the resting state of a pendulum). Figure 1 shows a cog-
nitive agent (for example, a robot) embedded in an environment. It interacts
with this environment using its actuators and sensors. The environment and the
cognitive system, both dynamic systems in their own right, influence each other
and therefore also form a dynamic system overall. The controller may consist
of a recurrent neural network [Haykin et al. 2009]. These networks are capa-
ble of enabling robots to behave in interesting ways (see, for example, [Hülse
et al. 2004] or [Candadai et al. 2019]). One can also consider them as dynamical
systems and therefore study them with the same mathematical tools. To find
suitable networks, it is helpful to use artificial evolution. Following ideas from
nature, artificial evolution is a population-based optimization algorithm [Nolfi
and Floreano 2000]. It is often used when classical optimization strategies such
as gradient methods do not work well. Artificial evolution can provide a solution
to a problem that is not optimal in some cases, but good enough for the task at
hand. Instead of evolving neural networks via artificial evolution, one can also
find interesting behaviors through theoretical reasoning (see, e.g., [Schilling et al.
2013]). For example, Central Pattern Generators are simple constructs of a few
neurons that can serve as the basis of walking behaviors of more complex robots
([Hild 2008]). Another important topic is the interaction of robots with their
environment. To achieve robust behavior in dynamic and unstructured environ-
ments, it is necessary for robots to adapt to changing environmental conditions.
It can be beneficial if the robot is able to use its body and sensors to acquire the
information it needs about the environment to do so. Again, dynamic systems
theory can be used to structure the gathered information. For example, stable
postures (lying down, standing up straight see Fig. 4 for some examples) of the
robot correspond to attractors in the dynamic system consisting of robot and
environment. In addition, there are unstable postures (repellors, e.g. tilting on
an edge). This possibility space can be explored by the robot and paths (i.e.
movement patterns) can be found so that the robot can switch between different
postures. Depending on the complexity of the robot and the environment, more
or less complex sensorimotor manifolds are thus found.
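The pendulum example given above for an attractor can be made concrete with a minimal simulation: a sketch, with arbitrary illustrative parameters of our own, showing that trajectories from different initial conditions converge to the same resting state.

```python
# Minimal illustration of an attractor: a damped pendulum settles to its
# resting state (angle 0, velocity 0) from different initial conditions.
# Parameters are arbitrary illustrative values, not from the lecture.
import math

def simulate(theta0, omega0, damping=0.5, g_over_l=9.81, dt=0.001, steps=20000):
    theta, omega = theta0, omega0
    for _ in range(steps):
        # Euler step of theta'' = -damping * theta' - (g/l) * sin(theta)
        omega += (-damping * omega - g_over_l * math.sin(theta)) * dt
        theta += omega * dt
    return theta, omega

# Two different starts end up near the same fixed point (the attractor).
a = simulate(1.0, 0.0)
b = simulate(-0.5, 2.0)
```

Qualitative statements of this kind — which states the system moves toward — are exactly what dynamical systems theory provides even when exact trajectories cannot be predicted.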
To give students a feel for how these theoretical topics can be applied to the real
world, they are given the opportunity to experiment with a number of robots.
To do this, they are given a small box, as shown in Fig. 2, which contains a
Dynamixel XH430 motor, as well as a battery and connectors. This allows them
to connect the motor to a smartphone, as shown on the left side of the figure.
First, the teaching content about the motor is described. Then it is discussed
Fig. 2. The box, shown on the right top, includes the motor XH430, a battery and
some cables and connectors. Students can connect this system to their smartphones
as shown in the left part of the figure. In the bottom right, screenshots of the Android
apps are shown, which are used to program the motor.
how the robot is programmed and which tools are provided for this purpose. The
conclusion is formed by two examples of robots that are discussed in the lecture.
Fig. 3. The cognitive sensorimotor loop [Hild 2013] as a flow graph. In the right bottom,
its implementation in the programming language BDE is shown. The z −1 block is a
delay of one time step. The coefficients are gi = 22.5 and gf = 0.3.
whose flow chart can be seen in Fig. 3. This sensorimotor loop has different modes;
for example, the contraction mode causes the motor to move against external
forces.
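The flow graph of Fig. 3 can be sketched structurally in a few lines: the measured joint velocity is fed back through the gain gi, and the previous output is fed back through gf via the one-step delay (z−1). The sign convention and update order below are our assumptions from reading the figure, not the paper's definitive implementation.

```python
# Structural sketch of the cognitive sensorimotor loop (CSL) of Fig. 3.
# Sign convention and update order are our assumptions from the flow graph.
GI, GF = 22.5, 0.3  # coefficients given in the Fig. 3 caption

def make_csl(gi=GI, gf=GF):
    """One CSL channel: velocity feedback (gi) plus delayed output feedback (gf)."""
    state = {"prev_angle": 0.0, "prev_out": 0.0}
    def step(angle):
        # The z^-1 blocks: reuse the previous angle and the previous output.
        delta = angle - state["prev_angle"]          # angular change per step
        out = -gi * delta + gf * state["prev_out"]
        state["prev_angle"], state["prev_out"] = angle, out
        return out
    return step

csl = make_csl()
u = csl(0.1)  # joint pushed from 0 to 0.1 rad by an external force
# The command opposes the imposed motion (u < 0), i.e. the loop acts
# against external forces, as the contraction mode is described in the text.
```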
With a few additional parts that the students can produce themselves, the first
simple robot can be built. One example is shown in Fig. 4 on the left. It consists
of the XH430 motor described above, the battery from the box, and some molded
parts laser cut from acrylic glass. As can be seen in the figure, it is designed so
that the axis of the motor always remains parallel to the ground once it has
been oriented that way. This allows the state of the robot to be described by
four variables. The first two are the angle of the robot to the ground and its
derivative. The next two are the current angular position of the motor and its
derivative. The states where the robot does not move are the fixed points of
the system. The robot postures shown in the right part of Fig. 4 are unstable fixed
points, where slight perturbations cause the robot to leave the posture. The CSL
in contraction mode swaps stable and unstable fixed points, so that the postures
in the figure become stable. The corresponding nodes are connected by arrows
when the robot can move from one posture to another. Green arrows mean
counterclockwise rotation of the motor, red means clockwise rotation. These
transitions are only possible in one direction. Only between postures that are
connected with a black arrow (pointing in both directions), the robot can switch
between postures by itself.
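The posture graph described above maps naturally onto a directed graph: one-way arrows (green/red rotations) give single edges, and black arrows give an edge in each direction. The edges below are illustrative placeholders — the actual transition graph is the one shown in Fig. 4 — but the reachability computation applies unchanged.

```python
# Posture transitions as a directed graph. The edges here are ILLUSTRATIVE
# placeholders; the real transition graph is shown in Fig. 4.
from collections import deque

edges = {
    "a": ["b"],        # one-way arrow: the robot can move from a to b, not back
    "b": ["c"],
    "c": ["b", "d"],   # b <-> c together model a black (bidirectional) arrow
    "d": [],           # a posture the robot cannot leave by itself
}

def reachable(graph, start):
    """Set of postures the robot can reach from `start` by itself (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# From "a" every posture is reachable; from "d" the robot is stuck.
assert reachable(edges, "a") == {"a", "b", "c", "d"}
assert reachable(edges, "d") == {"d"}
```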
This simple example already shows a rich behavior that can be investigated
with the learned methods. To find out which posture the robot is in, students
usually suggest equipping the robot with a sensor (e.g. an IMU) to measure the angle
between the robot and the table. But that is not necessary. By choosing a suitable
movement, the sensorimotor behavior provides enough relevant information. For
example, the motor voltage can be used to find out which posture the robot is in. To see
this, experiments are carried out in which joint angle and motor voltage are mea-
sured and the robot’s postures are recorded. At the respective posture, one can
then see the influence of gravity on the motor voltage, as can be seen in Fig. 5.
Fig. 4. One design of a simple robot, which can be built with the described motor.
The right side shows stable fixed points of the system containing the robot, the CSL and
the environment, as well as the possible transitions between them (which the robot can
do by itself). The green arrows represent counterclockwise rotation of the motor, red
means clockwise. Black arrows represent both directions.
Fig. 5. Recording of joint angle and motor voltage in one experiment (see text). Hori-
zontal axis is the joint angle ϕ, where 360◦ is divided into 2^12 steps. The vertical axis
represents the motor voltage U, where −60 corresponds to 0.7 V.
In the displayed series of measurements, the robot’s motor was kept at a constant
rotational speed. During this process, the motor voltage was measured (blue line
in the figure). It can be seen that the robot’s postures with the same angular posi-
tions require different amounts of motor voltage to maintain the speed, depending
on whether the robot is standing upright (posture a), or lying on the ground (pos-
ture h). Moving from posture a to the right, you can see the point where the robot
falls over (ϕ ≈ 70), which is the sharp drop in motor voltage. If the running direc-
tion of the motor is reversed here, the robot can no longer come to posture a by
itself; hence the motor voltage follows another curve (the ripples in the curve stem
from the stick-slip effect between robot and table). Without complex sensor tech-
nology, the robot can therefore already learn a lot about its environment and its
own body. The students now have all the freedom they need to make custom bod-
ies for their robots and to repeat the experiments and gain their own insights. Of
course, feeling and understanding the sensorimotor behaviors is also part of the
exam for this course. For example, a randomly chosen behavior is set on the robot
before the examinee enters the room. Now, as a first task, the behavior (and the
sensorimotor loop behind it) should be identified through interaction.
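The voltage-based posture inference described above can be sketched as follows: the fall-over event appears as a sharp drop in the recorded motor voltage, so it can be located by scanning the series for the largest single-step drop. The trace below is synthetic and merely mimics the kind of drop visible in Fig. 5.

```python
# Sketch of sensor-free posture inference: locate the fall-over point as the
# largest single-step drop in the motor-voltage series. The trace is synthetic,
# loosely mimicking the sharp drop visible in Fig. 5; real data would come
# from the constant-speed experiment described in the text.

def fall_index(voltages):
    """Index of the sample just before the largest single-step voltage drop."""
    drops = [voltages[i] - voltages[i + 1] for i in range(len(voltages) - 1)]
    return max(range(len(drops)), key=drops.__getitem__)

# Synthetic trace: voltage rises while the motor holds the robot up, then
# collapses when the robot tips over (here between samples 3 and 4).
trace = [-55, -50, -45, -40, -62, -60, -59, -58]
assert fall_index(trace) == 3
```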
4 Conclusion
Insights on sensorimotor couplings from research on living beings are often useful
in robotics. In order to prepare future roboticists, sensorimotorics is being incor-
porated into various curricula of programs like humanoid robotics. To under-
stand this complex topic in depth, it is useful to interact with the robot and
experience the behaviors haptically. In this work, it was presented how in-depth
hands-on experiences can be built into a sensorimotor curriculum. For this pur-
pose, the necessary mathematical basics (dynamical systems, neural networks)
were deepened with the help of experiments on simple robotic systems. Each
student gets a motor with accessories, with which she can try out the learned
contents in practice. Android apps were developed to be able to program the
motor. In addition, a simple programming language was flashed on the motor so
that different sensorimotor loops can be tried out. Using a simple robotic body,
interactions of the robot with the environment, but also with the students, were
explored, so that a deep understanding of the content emerges.
References
Braitenberg, V.: Vehicles: Experiments in Synthetic Psychology. MIT Press (1986)
Brodie, L.: Thinking Forth. Punchy Publisher (2004)
Candadai, M., Setzler, M., Izquierdo, E.J., Froese, T.: Embodied dyadic interaction increases complexity of neural dynamics: a minimal agent-based simulation model. Front. Psychol. 10, 540 (2019)
Haykin, S.: Neural Networks and Learning Machines. Prentice Hall, New York (2009)
Hild, M.: Neurodynamische Module zur Bewegungssteuerung autonomer mobiler Roboter (2008)
Hild, M.: Defying gravity - a minimal cognitive sensorimotor loop which makes robots
with arbitrary morphologies stand up. In: Proceedings 11th International Confer-
ence on Accomplishments in Electrical and Mechanical Engineering and Information
Technology (DEMI) (2013)
Hülse, M., Wischmann, S., Pasemann, F.: Structure and function of evolved neuro-
controllers for autonomous robots. Connection Sci. 16(4), 249–266 (2004)
Montúfar, G., Ghazi-Zahedi, K., Ay, N.: A theory of cheap control in embodied systems.
PLoS Comput. Biol. 11(9), e1004427 (2015)
Nolfi, S., Floreano, D.: Evolutionary Robotics: The Biology, Intelligence, and Technol-
ogy of Self-Organizing Machines. MIT Press (2000)
228 S. Untergasser et al.
Schilling, M., Hoinville, T., Schmitz, J., Cruse, H.: Walknet, a bio-inspired controller
for hexapod walking. Biol. Cybern. 107(4), 397–419 (2013)
Sun, T., Xiong, X., Dai, Z., Manoonpong, P.: Small-sized reconfigurable quadruped
robot with multiple sensory feedback for studying adaptive and versatile behaviors.
Front. Neurorobotics 14, 14 (2020)
Programming Environments
CORP. A Collaborative Online Robotics
Platform
1 Introduction
As the COVID-19 pandemic increased the use of distance learning and eLearning at all levels of education and interrupted many types of hands-on instruction, we introduced CORP (Collaborative Online Robotics Platform), an integrated, web-based educational development environment intended to teach robotics.
As the literature shows, technology has been used for more than 20 years to bring hands-on instruction to online learning, with multiple approaches [1,2]; however, it is relatively new in K-12 instruction [3]. Currently, some universities have opened remote access to their labs for secondary schools [4,5], initially to optimize costs, because physical experiments are expensive to maintain and require dedicated facilities to accommodate the experimental setups. Additionally, COVID-19 boosted the use
c The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 231–242, 2022.
https://doi.org/10.1007/978-3-030-82544-7_22
232 O. Sans-Cope et al.
of remote labs, adding the social-distancing factor [6]. Remote labs already exist and are being applied successfully, e.g. [7] and [8]. We focus our proposal on an alternative architecture based on an existing environment, Google Cloud, to accelerate the solution's adoption and scalability by building on the already existing user base within the K-12 education population [9,10].
Hereafter, a research prototype of the platform, developed at the Tufts Center for Engineering Education and Outreach (CEEO), is presented. The system allows participants to directly interact with remote physical robots over the Internet, facilitating an immersive experience, also known as a phygital experience, in a collaborative virtual environment that combines physical robots with remote STEM-based instruction. Particularly relevant during COVID-19, it also helps connect students potentially isolated by other factors (geographic, economic, environmental, or health limitations). While it is challenging to transfer the directness of collaboration to remotely located teams [1], this computer-supported cooperative work (CSCW) system allows real-time coding, sharing, and reporting by merging existing Google Slides functionality with an additional web-enabled sidebar for connecting directly to the physical robots (locally or remotely located).
Through this platform, “roboticist learners” collectively give instructions to a remote robot in a shared environment. The system combines online activities with shared experiences in which students are exposed to a technological climate where, in addition to their STEM skills, they also work on their SEL skills to achieve the goals: self-management (goal setting and following instructions), relationship building (communication and cooperation), social awareness (turn-taking and appreciating diversity), self-awareness (confidence and efficacy), and responsible decision-making (problem-solving).
Fig. 1. Left: Scenario 1, students sharing the same space. Right: Scenario 2, students
working in different locations
and a device with a camera pointing at the robot. This configuration gives CORP the ability to connect several users with a robot over the Internet, creating a workspace where users can collaborate remotely, controlling the inputs and outputs to and from the robot from anywhere in the world.
3 Hardware Architecture
A Flask (Python) web server has been developed to answer requests from Google Slides.
4 Software Architecture
An overview of the different software elements is shown in Fig. 3. They are built on Google Slides, Google Meet, and the LEGO Mindstorms EV3 software, using several programming languages and applications.
Google Slides Add-On. As mentioned above, Google Slides offers the user interface (UI) to interact with the robotic tool. The UI is an add-on built using Apps Script, presented to the user as a sidebar through the add-on features [22]. The add-on is implemented with HTML and JavaScript, which make it possible to create highly customizable user-interface applications within the menu system (sidebars and dialog boxes) of Google Slides. The
HTML service serves these web pages as custom user interfaces and interacts with server-side Apps Script functions [21]. In addition, the add-on allows the user to integrate data from, and communicate with, the robot using the URL Fetch and HTML services.
Our custom (EV3) sidebar (Slides add-on) is built using Apps Script, a tool that lets one edit Google Slides and connect directly to many Google products, using the built-in Apps Script Slides service [16] to read, update, and manipulate data within G Suite applications. We also leveraged the Slides REST API directly through the Apps Script advanced Slides service.
The add-on uses Apps Script's HTML service to set up client-server communication calls for interacting with the EV3 robotic platform. User actions in the interface are tied to functions in the script that result in actions taken on the Google servers where the editor file resides, and vice versa. The structure of the files that implement the add-on is shown in Fig. 5.
The EV3Control.gs file is the core of the add-on. It manages all the user's interactions, from opening and closing the add-on to the dialog boxes inside the sidebar tabs. Interactions in which the user requests data from, or sends commands to, the robot are also managed by the EV3Control.gs file. In those cases, it makes requests and receives data through the URL Fetch service, which allows data to be retrieved from or uploaded to the robotic platform and the Google Slides presentation as JSON objects. Depending on the information retrieved, the data is displayed directly on the slides or sidebar tabs, or in a spreadsheet from which data is taken and displayed on Google Slides using the Slides service [21].
The communication between the sidebar and dialog boxes (HTML service) and the EV3Control.gs file (Apps Script application) consists of asynchronous client-side JavaScript API calls to the server-side Apps Script functions stored in the EV3Control.gs file.
To interact with the LEGO Mindstorms EV3 robot over Wi-Fi from Google Slides, we used the open-source firmware EV3dev [15]. EV3dev is a Debian Linux-based operating system running on LEGO Mindstorms EV3 robots that brings a low-level driver framework for controlling the EV3 robot's sensors and motors using multiple languages such as Python, Java, C, C++, and Go. In this project, EV3dev runs from a microSD card inserted in the EV3 robot, and a Wi-Fi dongle connected to the USB port of the LEGO EV3 brick allows Internet control of the robot.
One of the advantages of using EV3dev is that it does not require flashing new firmware, so it is compatible with many free software packages. To allow communication between the EV3 robot and Google Slides, the Flask package is installed on top of EV3dev, and Python is the language selected to expand the EV3 robot's communication capabilities with Google Slides.
A Flask server manages the communication to and from Google Slides and the EV3 LEGO robot. The server processes users' commands and controls the programmable robot, retrieving measurement data from the robot and sending it back to the client when requested by the user. Once the user makes a request through the sidebar user interface, it is processed by the core file of the add-on, EV3Control.gs, which makes a URL Fetch request to a specific address and sends a JSON data package.
The Flask server listens for requests, processing and executing them asynchronously. Some requests are actions executed directly on the robot, such as moving motors or running a Python file on it. Others, such as sensor readings, require data to be sent back to Google Slides; in the latter case, the data is also returned as a JSON package.
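The request-dispatch logic described above can be sketched as follows. The command vocabulary ("move", "read_sensor") and the robot interface are illustrative assumptions, since the paper does not list CORP's actual routes; in the real system this logic sits behind Flask routes on the EV3dev brick.

```python
import json

def handle_request(payload: str, robot) -> str:
    """Process one JSON request from Google Slides and return a JSON reply.

    The action names and robot methods are hypothetical stand-ins for
    CORP's real command set.
    """
    request = json.loads(payload)
    action = request.get("action")
    if action == "move":              # action executed directly on the robot
        robot.run_motors(request["speed"])
        return json.dumps({"status": "ok"})
    if action == "read_sensor":       # request that sends data back to Slides
        value = robot.read_sensor(request["port"])
        return json.dumps({"status": "ok", "value": value})
    return json.dumps({"status": "error", "reason": "unknown action"})
```

In the paper's design the reply JSON is what the sidebar's URL Fetch call receives and renders in the slides or sidebar tabs.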
Python is selected as a high-level, general-purpose, interpreted scripting language because of its versatility for both beginners and experienced coders. Programs written in Python can be sent from Google Slides to the robot to execute sophisticated actions. Finally, the overall CORP software architecture is shown in Fig. 7.
Fig. 8. CORP's sidebar options and tabs' configuration. The red dashed box in the sensors tab is an example of data from the robot shown in the sidebar

Options like reading sensors or graphs receive data from the robot and visualize it in either the same block of the sidebar (see Fig. 8 for the reading-sensors tab) or on a slide.
If the user wants to see the data on a slide, a specific field should be filled in with the number of the slide where the data is to be shown. For example, in the case of running a datalog, data collected by the robot is sent to Google Slides. Next, the EV3Control.gs file registers the (time, sensor value) tuples in a spreadsheet and generates a line graph from the data pairs. Then, in the graphs tab, the user can add the graph to a slide by selecting the desired slide number. Figure 9 shows an example.

Fig. 9. CORP's sidebar options and tabs' configuration. The red dashed box in the sensors tab is an example of data from the robot shown in the sidebar
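The robot-side half of this datalog round trip can be sketched as a small sampling loop producing the (time, sensor value) tuples that end up as rows behind the line graph. The function name and parameters are illustrative, not CORP's actual API.

```python
def run_datalog(read_sensor, samples: int, period: float) -> list:
    """Collect (elapsed-time, sensor-value) tuples for a line graph.

    `read_sensor` is any zero-argument callable returning a reading.
    In the real robot each iteration would include a timed wait of
    `period` seconds before the next sample.
    """
    tuples = []
    t = 0.0
    for _ in range(samples):
        tuples.append((round(t, 3), read_sensor()))
        t += period
    return tuples
```

The resulting list would be serialized as JSON and sent back to Google Slides, where EV3Control.gs writes it into the spreadsheet.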
Because the sidebar is a client-side HTML file, it is different for each user. The actions, and the data received and shown in the sidebar, are attached to the user who performed the action. Users can share their programs and the data obtained from the robot with the group, and work collaboratively on the same section of code or program sequence, using the “Share” option field in the sidebar's tabs and dialog boxes.
Acknowledgements. This work was partially funded by a grant from LEGO Education. The authors thank Emma Stevens, Joshua Balbi, and Dipeshwor Man Shrestha for their helpful collaboration on this work. The research results presented in this paper fall under Tufts Institutional Review Board Submission ID MOD-01-1911005, approved 06/11/2020.
References
1. Gröber, S., Vetter, M., Eckert, B., Jodl, H.J.: Experimenting from a distance-
remotely controlled laboratory (RCL). Eur. J. Phys. 28(3), S127 (2007)
2. Matarrita, C. A., Concari, S.B.: Remote laboratories used in physics teaching: a
state of the art. In: 2016 13th International Conference on Remote Engineering
and Virtual Instrumentation (REV), pp. 385–390. IEEE (2016)
3. Gravier, C., Fayolle, J., Bayard, B., Ates, M., Lardon, J.: State of the art about
remote laboratories paradigms-foundations of ongoing mutations. Int. J. Online
Eng. 4(1) (2008)
4. Maiti, A., Maxwell, A.D., Kist, A.A., Orwin, L.: Merging remote laboratories and
enquiry-based learning for STEM education. Int. J. Online Eng. 10(6), 50–57
(2014)
5. Lowe, D., Newcombe, P., Stumpers, B.: Evaluation of the use of remote laboratories
for secondary school science education. Res. Sci. Educ. 43(3), 1197–1219 (2013)
6. West, R.E.: Ideas for supporting student-centered stem learning through remote
labs: a response. Educ. Technol. Res. Dev. 1–6 (2020)
7. de Jong, T., Sotiriou, S., Gillet, D.: Innovations in STEM education: the Go-Lab
federation of online labs. Smart Learn. Environ. 1(1), 1–16 (2014). https://doi.
org/10.1186/s40561-014-0003-6
8. da Silva, J.B., Rochadel, W., Simão, J.P.S., da Silva Fidalgo, A.V.: Adaptation
model of mobile remote experimentation for elementary schools. IEEE Revista
Iberoamericana de Tecnologias del Aprendizaje 9(1), 28–32 (2014)
9. Ok, M.W., Rao, K.: Digital tools for the inclusive classroom: google chrome as assis-
tive and instructional technology. J. Spec. Educ. Technol. 34(3), 204–211 (2019)
10. Khazanchi, R.: Current scenario of usage, challenges, and accessibility of emerging
technological pedagogy in k-12 classrooms. In: Society for Information Technology
—& Teacher Education International Conference, pp. 1767–1769. Association for
the Advancement of Computing in Education (AACE) (2020)
11. Otto, O., Roberts, D., Wolff, R.: A review on effective closely-coupled collaboration
using immersive CVE’s. In: Proceedings of the 2006 ACM International Conference
on Virtual Reality Continuum and its Applications, pp. 145–154 (2006)
12. LEGO Education: User Guide LEGO Mindstorms EV3. https://education.lego.
com/en-gb/product-resources/mindstorms-ev3/downloads/user-guide
13. Souza, I.M., Andrade, W.L., Sampaio, L.M., Araujo, A.L.: A systematic review on the use of LEGO® robotics in education. In: 2018 IEEE Frontiers in Education Conference (FIE), pp. 1–9. IEEE Press, New York (2018). https://doi.org/10.1109/FIE.2018.8658751
14. CanaKit: Raspberry Pi WiFi Adapter. https://www.canakit.com/raspberry-pi-
wifi.html
15. ev3dev: ev3dev software. https://www.ev3dev.org/
16. Google: Slides Services. https://developers.google.com/apps-script/reference/
slides
17. Google: Apps Script. https://developers.google.com/apps-script/overview
18. Google: Built-in Google Services. https://developers.google.com/apps-script/
guides/services
19. Google: URL Fetch Service. https://developers.google.com/apps-script/reference/
url-fetch
20. Google: Class UrlFetchApp. https://developers.google.com/apps-script/reference/
url-fetch/url-fetch-app
21. Google: HTML Service: Create and Serve HTML. https://developers.google.com/
apps-script/guides/html
22. Google: Extending Google Slides with Add-ons. https://developers.google.com/
gsuite/add-ons/editors/slides
23. https://drive.google.com/file/d/1s6qXG1YQnE-09eg5E9OUwKEAGdfKF7AV/
A ROS-based Open Web Platform
for Intelligent Robotics Education
Abstract. This paper presents the new release of the Robotics Academy learning framework and the open course on Intelligent Robotics available on it. The framework hosts a collection of practical robot-programming exercises for engineering students and teachers at universities. It has evolved from an open tool, which users had to install on their machines, to an open web platform, simplifying the user experience. It has been redesigned with state-of-the-art web technologies and DevOps practices that make it cross-platform and scalable. The web browser is the new frontend for users, both for source-code editing and for the monitoring GUI of the exercise execution. All the software dependencies come preinstalled in a container, including the Gazebo simulator. The proposed web platform provides a free and convenient way to teach robotics following the learn-by-doing approach. It is also a useful complement for remote educational robotics.
1 Introduction
In the last decade, robotics has experienced large growth in the number of applications available on the market. We usually think of robots as the robotic arms used in the industrial sector and automated assembly processes. However, today robots have appeared in many areas of daily life, such as food processing and warehouse logistics [1]. Moreover, robots have recently been used in homes to carry out real-life tasks for people, such as vacuum cleaning, showing how robots can successfully address real-life tasks. Robots are progressively being included in many areas, with technologies such as automatic parking or driver-assistance systems, not to mention aerial robots, which have risen in popularity in recent years.
Robotics in Education tries to strengthen the learning skills of future engineers and scientists through robot-based projects. In both schools and colleges, presenting robots in the classroom gives students a more engaging vision of
c The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 243–255, 2022.
https://doi.org/10.1007/978-3-030-82544-7_23
244 D. Roldán-Álvarez et al.
science and engineering, and they can directly observe the practical application of theoretical concepts from the fields of mathematics and technology. From the university's perspective, robot-based projects help to enhance key abilities from the start, for instance those associated with mechatronics [2].
The fast pace of technological change and the subsequent movements in education require the development and use of educational methodologies and opportunities at a moderate economic cost. Many institutions answer this challenge by creating distance-education programs. Web-based robotics distance education is a growing field of engineering education. Web-based applications allow students to code robotic projects without having to run special software on their computers or have the robot physically near them [3,4].
When learning online, robotic projects take place using a robotic framework and a teaching tool where the robot's behavior is programmed in a certain language [4]. The most common way to interact with robots remotely is through virtual and remote laboratories [5]. According to Esposito's work [6], MATLAB (62%) and C (52%) are the dominant choices, with a minority using free open-source packages such as the Robot Operating System (ROS) (28%) or OpenCV (17%). We did not find more recent studies in the literature with this kind of information, but in the work of García et al. [7], 16 professionals were interviewed about the frameworks they use when programming robots. Their results showed that most of them (88%) used ROS. ROS stands out for its large and enthusiastic community. It enables all users to leverage an ever-increasing number of algorithms and simulation packages, as well as providing a common ground for testing and virtual commissioning [8]. One of the most widely used simulators for ROS is Gazebo [9]. It provides 3D physics simulation for a large variety of sensors and robots, and it can be customized to the user's needs. Gazebo is bundled into the full ROS installation package, making it widely and easily available. Many robot manufacturers offer ROS packages specifically designed for Gazebo support. Therefore, ROS and Gazebo are suitable tools for use in web environments to offer robotics education at no cost.
In this paper we present the latest release of the Robotics Academy learning framework. In its exercises, the student develops the intelligence of several robots for various robotics applications. Robotics Academy is open source and uses Python, ROS, Gazebo, and Docker to supply an easy way to create robotics or computer-vision applications.
and whenever they wish, as long as they have an Internet connection. Some leading universities already provide online robotics courses, such as Artificial Intelligence for Robotics from Stanford University, Autonomous Mobile Robots from ETH Zurich, or the Art of Grasping and Manipulation in Robotics from the University of Siena. They see the potential of online learning in robotics engineering. In addition, we can find such courses in the market. One of the best-known proprietary platforms providing robotics courses is TheConstruct [13]. The goal of this online tool is to provide an environment to learn ROS-based advanced robotics online. It provides courses at different levels using different programming languages (C++ and Python).
In addition, many authors and developers have joined forces to create online learning tools for robotics. We may find in the literature examples of robotics online learning tools such as RobUALab [11]. This platform was programmed in Java and, thanks to web technologies, it allows users to teleoperate, experiment with, and program a robotic arm. With it, students can test different configurations, evaluate torques, follow both Cartesian and joint-space trajectories, and program the arm with a list of commands.
Another example is the work of Peidró et al. [12], who presented a virtual and remote laboratory of parallel robots consisting of a simulation tool and a real robot. The virtual tool helps students simulate the inverse and forward kinematics of three parallel robots in an intuitive and graphical way. In the remote lab, students can experiment with a real parallel robot and explore phenomena that arise in the motion of the real robot but do not occur in simulation.
Fabregas et al. [5] presented the Robots Formation Control Platform, a web-based tool for simulation and real experimentation with mobile robots for pedagogical purposes. The platform provides an interactive simulator to develop advanced formation-control experiments with multi-robot systems, an experimental environment for laboratory sessions with real mobile robots, and a low-cost robotic platform. It also allows students to conduct experiments with mobile robots in a dynamic environment by changing the configuration without having to reprogram the simulation itself.
One of the main issues with robotics platforms is that they do not follow a standard in the robotic infrastructure used, and they often do not use open hardware or software [14]. Only recently has ROS been taken into account in the development of educational robotics platforms. One interesting educational robotics platform is EUROPA, an extensible platform based on open hardware and software (ROS) that includes friendly user interfaces for control and data acquisition and a simulation environment [15]. By using ROS, it gives both teachers and students the opportunity to work with advanced simulation and visualization tools. EUROPA allows easy programmability based on Python. Despite all its advantages, the functionality of the EUROPA platform is limited to a single robot and the computer where the software is installed.
The robotics software is inherently complex, comprises many pieces, and therefore usually has many dependencies. For instance, software access to robot sensors and actuators is provided by drivers.

¹ https://github.com/JdeRobot/RoboticsAcademy
² https://unibotics.org

Fig. 1. Exercise architecture (diagram): the browser, holding the Python template with the student's code in execution, connects through websockets and to the GzWeb server; the Gazebo simulator, with the robot and world and the sensor/actuator drivers (plugins), runs inside the Docker container

In some cases the focus for the Robotics
students is on the drivers themselves, but most of the time the focus is on the algorithms and their implementation. In that case, the students just need to install and use the drivers without digging further into them. Drivers are included in common robotics frameworks such as ROS and the Gazebo simulator.
To run Robotics Academy exercises, many software pieces are required, and so the student previously had to install all of them. The installation recipes delivered with the second framework release were not enough to significantly simplify the experience for users: they still had to manually install all the needed packages on their local operating system, which was an entry barrier. In the presented release, a Docker container is provided³ that includes all the exercise dependencies already preinstalled. It is called RADI (Robotics Academy Docker Image), and it can easily be installed and run on Windows, Linux, or macOS, as all of them support this de-facto DevOps standard. This approach follows some of the ideas in ROSLab [18], which uses ROS, Jupyter Notebooks, and Docker containers to provide reproducible research environments.
³ https://hub.docker.com/r/jderobot/robotics-academy/tags
The burden of installing all the dependencies is removed from users and moved to the Robotics Academy developers who build that container. This way the learning curve is greatly reduced for end users, who skip most of the details. They are only required to (1) install the Docker image, which is a simple and standard step, (2) start its execution, and (3) open a web page. The browser then automatically connects to the robot simulator and to the empty robot brain inside the container. The Docker container and the browser are cross-platform technologies, and so is the presented framework, expanding the number of its potential users.
The robot simulator and the student's code both run inside that container, on the user's machine. This design allows the framework to scale up to many users, as the main computing costs are already distributed. The simulator runs headless inside the container, regardless of the operating system of the host computer. The Gazebo simulated world is displayed in the browser through an HTTP connection with the “GzWeb server”, which also runs inside the container. The browser and the container are connected through several websockets and that HTTP connection.
The new exercise templates have two parts, as shown in Fig. 1: a Python program inside the RADI (exercise.py) and a web page inside the browser (exercise.html), connected to each other through two websockets. The browser holds the code editor and sends the user's source code for an exercise to the empty robot brain, exercise.py, where it is run. In turn, exercise.py sends debugging and GUI information to the browser, where it is displayed.
The student just programs the robot's brain in the browser. From the user's point of view, the template provides two programming interfaces: (a) the HAL-API, for reading the robot's sensor measurements and sending commands to the robot's actuators; and (b) the GUI-API, which includes several useful methods for debugging, such as showing print messages in the browser or displaying the processed images.
The Python program exercise.py is connected to the Gazebo simulator for
getting the sensor readings and setting the motor commands. In addition, it runs
the source code of the robot’s brain received from the browser, which typically
executes perception and control iterations in an infinite loop.
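The mechanics of this empty robot brain can be sketched as follows. The names are illustrative assumptions (the real exercise.py also wires in the websockets and the Gazebo connection); the sketch only shows the idea of loading received source code and calling its step function in a perception-control loop.

```python
def run_brain(user_code: str, hal, iterations: int) -> dict:
    """Load the user's source code and run its step function repeatedly.

    Assumes the user's program defines an `execute(hal)` function; in the
    real template the code string arrives over a websocket and the loop
    is infinite rather than bounded.
    """
    namespace = {}
    exec(user_code, namespace)        # load the user's definitions
    step = namespace["execute"]
    for _ in range(iterations):       # one perception-control iteration each
        step(hal)
    return namespace
```

Here `hal` stands in for the HAL-API object through which the user's code reads sensors and commands actuators.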
The browser is the only Graphical User Interface (GUI). The webpage of each exercise, exercise.html, is divided into two vertical columns, as shown in Fig. 2. The left column holds an inline text editor and the buttons for executing the code, stopping it, etc. The right column typically includes a view of the simulated world to observe the generated robot behavior. It also includes a debugging console for text messages and several exercise-specific illustrative widgets to show debugging information, such as the processed images. The widgets use state-of-the-art HTML5 web technologies such as the 2D canvas, 3D graphics, websockets, etc.
3.3 Webserver
The core of the web platform is a Django-based webserver. It shows the different exercises available to users so they may choose which one to try to solve. The currently available exercises are shown in Fig. 3. Once the user chooses an exercise, the webserver shows the corresponding template, and its simulation is executed in the Docker container that the user has instantiated on their computer, as shown in Fig. 1.
The webserver also supports a second option: the Docker containers are instantiated and run on a backend computer farm, so the user is not required to install anything on their computer. The webserver distributes the user's simulation onto one of the available computers in the farm. The IPs of the farm computers remain hidden from the user, since all of the required communications (sending code, receiving images, sensor information, etc.) occur through the webserver itself.
To manage the computer farm and the running Docker containers, the webserver provides an administration panel, as shown in Fig. 4. The panel offers all the functionality needed to monitor execution, add a new computer, delete a container, restart a container, and run a new container.
Fig. 4. Admin panel with the computers in the farm with a running RADI.
Through the webserver, the user is able to save and load the code written for each exercise. When the code is saved, the webserver stores it in the Amazon S3 service. Amazon S3 is an object storage service that offers scalability, data availability, security, and performance. Every time the user loads an exercise, the last version of the code is retrieved from Amazon S3.
forward speed V and angular speed W. Figure 2 shows the Gazebo environment and the F1 car used.
This exercise is designed to teach basic reactive control, including PID controllers, as well as to introduce students to basic image processing, for example color filters, morphological filters, or segmentation of the red line from the track. The solution involves segmenting the red line and making the car follow it using PID-based control. The GUI of this exercise includes a bird's-eye-view widget to show the current position of the car on the track, as shown in Fig. 2.
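The PID controller at the heart of such a solution can be sketched as follows. The gains and the error definition are illustrative assumptions: in the exercise, the error would be the horizontal offset of the segmented red line from the image center, and the output would be sent as an angular speed W through the HAL-API.

```python
class PID:
    """Textbook PID controller: output = Kp*e + Ki*∫e dt + Kd*de/dt."""

    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error: float, dt: float) -> float:
        """Advance one control iteration and return the command."""
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

In the perception-control loop, each iteration would compute the line-centroid error from the color-filtered image and call `step(error, dt)` to obtain W.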
The solution of this exercise involves the use of local navigation algorithms, such as the Virtual Force Field (VFF). A sequence of waypoints is provided to the students for a successful completion of the circuit. Navigation with the VFF algorithm consists of obstacles generating a repulsive force and the current waypoint generating an attractive force. This makes it possible for the robot to head towards the nearest waypoint while distancing itself from the obstacles, steering along the vector sum of the forces. A debugging widget specific to this vector-sum algorithm is provided in the GUI, as shown in Fig. 5.
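The VFF force computation can be sketched as follows. The gains and the distance decay of the repulsive force are illustrative assumptions; the exercise's sensor data would supply the obstacle positions.

```python
import math

def vff_direction(robot, waypoint, obstacles,
                  k_att: float = 1.0, k_rep: float = 1.0):
    """Return the (x, y) resultant force steering the robot.

    The active waypoint attracts the robot; each obstacle repels it,
    with a magnitude that decays with distance. Gains are illustrative.
    """
    fx = k_att * (waypoint[0] - robot[0])     # attractive force
    fy = k_att * (waypoint[1] - robot[1])
    for ox, oy in obstacles:                  # repulsive forces
        dx, dy = robot[0] - ox, robot[1] - oy
        dist = math.hypot(dx, dy)
        if dist > 1e-6:
            # unit vector away from the obstacle, scaled by 1/dist**2
            fx += k_rep * dx / dist**3
            fy += k_rep * dy / dist**3
    return fx, fy
```

The robot would then convert the resultant vector into forward and angular speed commands, which is exactly the vector sum visualized by the debugging widget.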
This exercise is similar to the previous one. The student is provided with the same Roomba vacuum-cleaner robot to clean an apartment. However, the robot is equipped with an additional pose-estimation sensor, which is used to estimate the 3D position of the robot. Its position estimates are available through a Python API call to the getPose3d() method.
This exercise has been developed to teach planning algorithms. As self-localization is provided, efficient foraging algorithms, such as boustrophedon motion and the A* search algorithm, are available choices. They minimize the revisiting of places and sweep the apartment more systematically and efficiently.
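A compact sketch of grid-based A*, one of the planning choices mentioned above, is shown below. The occupancy-grid encoding (0 = free, 1 = obstacle) and 4-connectivity are illustrative assumptions, not the exercise's actual map format.

```python
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic (admissible on a 4-grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), start)]
    came_from = {}
    g = {start: 0}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:
            path = [current]
            while current in came_from:     # reconstruct the path
                current = came_from[current]
                path.append(current)
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                tentative = g[current] + 1
                if tentative < g.get((nr, nc), float("inf")):
                    came_from[(nr, nc)] = current
                    g[(nr, nc)] = tentative
                    heapq.heappush(open_set,
                                   (tentative + h((nr, nc)), (nr, nc)))
    return None
```

In the exercise, the robot would follow the returned cell sequence using the pose estimates from getPose3d() for feedback.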
5 Conclusions
The lessons learnt with the previous release of the learning framework have motivated the transition from an open tool to an open web platform. First, the installation of the software dependencies has been greatly simplified, as they are now all preinstalled in the Docker container. This makes the framework simpler to use.
Second, the web browser is the only frontend for the student, who both edits the robot program and monitors its execution there. The new exercise templates are composed of a webpage inside the browser and a Python program inside the Docker container, mutually connected. The Python program is also connected to the robot simulator and runs the user's code for the robot's brain.
A web server has also been developed, and the new platform is ready to be used by anyone at a website, for free. The framework is truly cross-platform, as container technology and the web technologies used by browsers are widely supported on the main operating systems (Windows, macOS, Linux).
Beyond the major framework upgrade, the exercises of the Intelligent Robotics course have been successfully migrated to the web platform and are ready for use by robotics teachers and students around the world. An instance of this course is currently taking place with twenty students of the Rey Juan Carlos University.
Future lines of work for the new web platform are: (a) including support for real robots; (b) introducing more educational content, such as a computer vision course and an industrial robotics course; (c) upgrading the infrastructure dependencies to ROS 2 and the Ignition simulator; and (d) performing an in-depth empirical study, with quantitative evidence, of its impact on the learning process of real students, since the platform has only recently entered use.
HoRoSim, a Holistic Robot Simulator:
Arduino Code, Electronic Circuits
and Physics
Andres Faiña(B)
Abstract. Online teaching, which has been the only way to keep teaching during the pandemic, imposes severe restrictions on hands-on robotics courses. Robot simulators can help to reduce the impact, but most of them use high-level commands to control the robots. In this paper, a new holistic simulator, HoRoSim, is presented. The new simulator uses abstractions of electronic circuits, Arduino code and a robot simulator with a physics engine to simulate microcontroller-based robots. To the best of our knowledge, it is the only tool that simulates Arduino code together with multi-body physics. In order to illustrate its possibilities, two use cases are analysed: a line-following robot, which was used by our students to finish their mandatory activities, and a PID controller testbed. Preliminary tests with master's students indicate that HoRoSim can help to increase student engagement during online courses.
1 Introduction
Many robots controlled by microcontrollers interact with physical objects; examples include line-following robots and walking robots. In these robots, sensors transform physical properties into electrical signals, which are then processed by the microcontroller. Based on these inputs, the microcontroller produces electrical signals as outputs, which are then processed by an electronic circuit to power actuators. Finally, these actuators produce some effect in the environment (turning a motor, for example) and the sensors generate new signals. Thus, teaching these robotics first principles involves a wide curriculum that includes mechanics, electronics and programming.
The main aim of teaching these basic topics in a robotics course is to demystify these areas, show the big picture to the students and make them understand that robotics is not only computer science. At the IT University of Copenhagen, we use this holistic approach in the first course of the robotics specialization. The course is called "How to Make (Almost) Anything"¹ and is targeted to
¹ Inspired by the course of the same name taught at MIT by Neil Gershenfeld.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 256–267, 2022.
https://doi.org/10.1007/978-3-030-82544-7_24
HoRoSim: A Holistic Robot Simulator 257
2 Related Work
There are several simulators that can be used to simulate some aspects of robots. Physics engines such as ODE [4] or Bullet [5] can simulate multi-body dynamics and use collision-detection algorithms to calculate the distance from a sensor to a body. However, these physics engines require the torques or forces for each joint as input, and their output is just the position of the bodies in the system at the next time step. Robot simulators such as Gazebo [6], CoppeliaSim (previously known as V-REP) [7] or Webots [8] facilitate the use of physics engines, as they provide a graphical user interface where the robot and physical bodies are displayed, a set of controllers for the joints (move a joint at constant speed, etc.), sensors (cameras, distance sensors, etc.), and predefined models of robots and common objects. Nevertheless, they ignore the fact that these actuators and sensors are controlled by some electronic circuit.
The most popular electronics simulator is SPICE [9], together with its various forks such as LTspice [10] and Ngspice [11]. These simulators take the definition of an electronic circuit as a netlist and can perform different types of analyses: direct-current operating point, transient, noise, etc. However, the complexity of these analyses is especially high when simulating devices that interact with the real world (motors, switches, photodiodes, etc.), as the properties of these components must be specified as functions of the physical environment.
There are different simulators for microcontrollers, but most of them are very specialized tools. These simulators change the registers of the microcontroller according to the loaded firmware, without simulating any hardware other than LEDs [12]. Some of them have been extended to interact with electronic circuits. For example, Proteus [13] and SimulIDE [14] are able to combine the simulation of a microcontroller with a SPICE simulation. They are very useful for learning electronics; however, they are not able to perform physics simulations, and their use for teaching robotics is limited.
3 HoRoSim
HoRoSim² has four main components. The first one is the Arduino code that students program, which has been extended with a function to specify the hardware devices employed (electronic circuits, motors and sensors). The second one is the HoRoSim libraries, which replace the standard libraries used by Arduino. The third component is a robot simulator, CoppeliaSim [7], which is employed for visualization and for computing the physics (multi-body dynamics, object collisions, distance sensors, etc.). Finally, there is a user interface to handle those devices that are not part of the physical simulation and that the user only employs to interface with the microcontroller (buttons, potentiometers and LEDs). A representation of these four components is shown in Fig. 1.
HoRoSim is used from the Arduino IDE and follows the same sketch-building process as Arduino. Basically, HoRoSim installs a new Arduino "core" (software API) that compiles and links the user's program against the HoRoSim libraries. Thus, the user can choose HoRoSim (and the Arduino board to be simulated³) from the Board menu in the Arduino IDE. For the user, the only difference is that HoRoSim needs an extra function in the program that specifies the hardware devices (motors, sensors, LEDs and their circuits) that are connected to the Arduino. This is done in a mandatory function called hardware_setup(). In this function, the user creates instances of the hardware devices employed and specifies their properties and the Arduino pins used to interact with them. Two implementations of the hardware_setup function can be seen in Sect. 4.
² HoRoSim is open source (MIT license) and available for download at https://bitbucket.org/afaina/horosim.
³ Currently, only the Arduino Uno is supported.
Fig. 1. Main components of the HoRoSim simulator: the Arduino code is extended to specify the hardware devices employed, the code is compiled using the HoRoSim libraries, and the libraries are in charge of communicating with a robot simulator (CoppeliaSim) and the user interface.
Currently, the hardware devices that HoRoSim can simulate are shown in
Fig. 2 and include:
– Direct current (DC) motors controlled with a transistor or with motor controllers (H-bridges)
– Stepper motors controlled by drivers with a STEP/DIR interface
– Radio control (RC) servomotors
– Proximity sensors like ultrasound or infrared (IR) sensors (analogue or digital)
– Infrared sensors used as vision sensors to detect colours (analogue or digital)
– Potentiometers connected to a joint in the robot simulator
– LEDs (User Interface)
– Potentiometers (User Interface)
– Buttons (momentary or latching) (User Interface)
The program is compiled with a C++ compiler through the Arduino IDE interface (the Verify button), which creates an executable program. This program can be run from the Arduino IDE (the Upload button) or from a terminal on the computer.
Once the compiled program is launched, it connects to the robot simulator (CoppeliaSim) through TCP/IP communication and starts a user interface. The robot simulator is used to perform the rigid-body dynamics calculations. The components that do not need the robot simulator (LEDs, buttons and potentiometers) are rendered in the user interface using ImGui, a graphical user interface library for C++.
Fig. 2. The different hardware devices wired on a breadboard; note that this view is for illustration purposes only and is not available in the simulator. The wires that come out of the breadboard are the connections that need to be specified in the hardware_setup function (blue for Arduino outputs, purple for digital inputs and orange for analogue inputs). For the sake of clarity, the power applied to the rails of the breadboard is not shown.
Calls to the Arduino functions are executed by the HoRoSim libraries. The list of currently available Arduino functions is shown in Table 1. The functions that interact with hardware devices (I/O and Servo libraries) call the respective functions of each hardware-device instance defined in the hardware_setup function. The code in each hardware-device instance checks that the Arduino pin passed as an argument matches the pin where the device is connected and that the pin has been initialized properly. If these conditions are met, the HoRoSim libraries call the functions that replicate the expected behaviour in the robot simulator or the user interface. Therefore, a digital output pin on the Arduino can control several devices. An example of this software architecture for the digitalWrite function can be seen in Fig. 3. In this case, the user has defined a DC motor controlled by a transistor whose base is connected to a specific pin of the Arduino. If the conditions are met, the library sends a message to CoppeliaSim to set a new speed (zero or maximum).
Fig. 3. Pseudo-code that illustrates how HoRoSim works. The Arduino functions call the functions of the hardware-device instances. Each hardware-device instance checks that the pin passed as an argument is used by that device and that it has been initialized properly. If so, the correct behaviour is applied or the correct value is returned.
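The dispatch in Fig. 3 can be modelled compactly. The sketch below is in Python for brevity (HoRoSim itself is C++), and the class and function names are illustrative rather than the simulator's actual API; it shows how one global Arduino call fans out to every registered device, each of which acts only on its own, properly initialized pin:

```python
class DCMotorTransistor:
    """Illustrative stand-in for a HoRoSim hardware-device instance:
    a DC motor switched by a transistor whose base is on `pin`."""

    def __init__(self, pin, max_speed=100.0):
        self.pin = pin
        self.max_speed = max_speed
        self.initialized_pins = set()
        self.speed = 0.0  # in HoRoSim this would be sent to CoppeliaSim

    def pin_mode(self, pin, mode):
        if pin == self.pin and mode == "OUTPUT":
            self.initialized_pins.add(pin)

    def digital_write(self, pin, value):
        # Act only if the pin belongs to this device and was initialized.
        if pin == self.pin and pin in self.initialized_pins:
            self.speed = self.max_speed if value else 0.0


DEVICES = []  # filled by the user's hardware_setup()

def hardware_setup():
    DEVICES.append(DCMotorTransistor(pin=9))

def pinMode(pin, mode):
    for dev in DEVICES:
        dev.pin_mode(pin, mode)

def digitalWrite(pin, value):
    # One output pin may drive several devices, as in HoRoSim.
    for dev in DEVICES:
        dev.digital_write(pin, value)
```

Because every device filters on its own pin, the same digitalWrite call can switch several devices wired to one pin while leaving all others untouched.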
4 Use Cases
In this section, we illustrate the potential of the simulator with two examples: a line-following robot and a testbed to study PID controllers.
This example was used by our master's students (21 to 30 years of age) to finish a mandatory assignment of the course, as the real robots were locked in the university during the lockdown. The assignment consisted of programming a robot able to follow a line, find a can at the end of the line, grasp it with a gripper and place it near a crossing. This example replicates all the hardware that the students had in their line-following robots: DC motors controlled by transistors, an Arduino, two IR sensors for detecting the line on the floor, an ultrasound sensor to detect the can and a gripper controlled by an RC servomotor. Figure 4a shows
the Arduino code that implements the hardware_setup function, and Fig. 4b shows the robot following the line.
Fig. 4. A line-following robot that was used to work on a mandatory activity during the lockdown.
The scene with the robot, the line and the can was given to the students. To avoid problems, we also gave them a small Arduino sketch with the hardware_setup function already implemented, to check that the connection between HoRoSim and CoppeliaSim worked correctly. As the installation of HoRoSim is somewhat cumbersome on Windows, an Ubuntu virtual machine with HoRoSim already installed was available for download (students still had to install CoppeliaSim on their computers).
Fig. 5. Simulation of a PID testbed. The dial is moved between two different target
positions using a proportional controller. The user can change the target positions and
the proportional constant by moving the potentiometers in the user interface (sliders).
The angle of the dial is plotted for proportional values of 1, 10 and 100 in subfigures c, b
and d respectively. Thus, different behaviours are observed: overdamped, overshooting
and oscillation around the target position.
(P) controller, but of course this could be extended to PID controllers. The dial
is moved continuously between two target positions by the DC motor every few
seconds. The target positions and the proportional constant can be adjusted
by tuning the three potentiometers in the user interface (Fig. 5b). In addition,
a graph element has been inserted in the scene. This graph plots the angle of
the dial (in degrees) versus time and it is updated automatically during the
simulation.
This example allows students to experiment with different controllers (and values of their constants) and observe the different behaviours that they produce. In Fig. 5b, a proportional value of around 10 has been set with the potentiometer, and the graph shows a small overshoot when reaching the targets. When the proportional value is changed, different behaviours are observed, as shown in Fig. 5c and 5d. Note that the simulator allows students to quickly change properties other than the software of the controller: for example, they could study the response of the system when increasing the mass of the dial or the speed or torque of the motor.
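The three regimes shown in Fig. 5 can be reproduced with a few lines of simulation. The model below (unit inertia, viscous damping, explicit Euler integration) is an illustrative stand-in for the CoppeliaSim dynamics, not the actual testbed:

```python
def simulate_dial(kp, target=90.0, inertia=1.0, damping=0.8, dt=0.01, steps=2000):
    """Proportional control of a dial: motor torque = kp * (target - angle),
    opposed by viscous friction; returns the angle trace in degrees."""
    angle, velocity, trace = 0.0, 0.0, []
    for _ in range(steps):
        torque = kp * (target - angle) - damping * velocity
        velocity += (torque / inertia) * dt
        angle += velocity * dt
        trace.append(angle)
    return trace

# A small kp approaches the target without overshoot (overdamped),
# while a large kp overshoots and oscillates before settling.
overdamped = simulate_dial(kp=0.1)
oscillating = simulate_dial(kp=10.0)
```

In this model the damping ratio is damping / (2 * sqrt(kp * inertia)), so increasing kp moves the response from overdamped through overshoot to oscillation, matching the progression across Fig. 5c, 5b and 5d.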
5 Discussion
5.1 Students' Reception
The students used the simulator to fulfil the last mandatory assignment of the course (programming a line-following robot), and the reception of the simulator was very good. As an example, one student wrote in the evaluation of the course: "I liked that Andres made the simulator which helped us finish the mandatories! It was very cool to use it actually." Many students also liked the attempt to keep the course hands-on even during the lockdown, to which the simulator contributed.
Furthermore, all the students were able to program their robots to successfully pass the mandatory assignment. None of them complained about the simulator, and most of the questions were about how to make the robot take the sharp turns and go straight over the crossing. In that sense, it was a pleasure to observe that the mandatory assignment could be carried out online without any problems, triggering the same kinds of questions as if the students had used a physical robot.
The simulator, which was originally built to cover only the line-following robot assignment, was extended to allow simulating other kinds of robots and machines, and more hardware devices were implemented. This gave students the possibility of using the simulator for their final projects (building a robot or a machine with mechanisms, electronics and embedded programming). When they started to work on their final projects, a small survey was carried out, and 10 students replied. Half of them were interested in using the simulator for their final projects, while the other half discarded it. The reasons for discarding the simulator were: three students were too busy to use it, one machine could not be simulated, and one machine could be built at home, so there was no need for the simulator. However, only one group out of twenty ended up using the simulator.
The low usage of the simulator (one group out of twenty) had several causes. First, the students could use a small budget to order electronic and mechanical parts from online stores and prototyping services. Thus, most of the students managed to finish a minimal prototype and had no need for the simulator. However, the university had to spend a considerable amount of money on this, whereas in a normal course the students can prototype their own parts at our workshop and use our stock materials. Additionally, some students reported a lack of engagement and difficulty collaborating during the lockdown. Finally, the current limitations of the simulator, which are described in the next section, also contributed to its low usage.
5.2 Limitations
The current implementation of the code has several limitations. First, there
are still several functions and libraries available for Arduino that have not
been implemented. They include interrupts, receiving serial messages and I2C
6 Conclusions
This paper has presented a new robot simulator, HoRoSim, that takes a holistic approach. Students can simulate Arduino code, electronic hardware and multi-body dynamics, which helps to teach the basic principles of robotics. To the best of our knowledge, it is the first simulator that allows users to simulate Arduino code combined with a physics engine. HoRoSim provides teachers with a flexible tool that can be used for assignments in robotics courses. Preliminary results have shown that it can contribute to keeping courses hands-on and maintaining student engagement in online courses, which is especially relevant in the current pandemic.
In order to address the limitations of the current version, future work includes (1) generating CoppeliaSim models automatically from CAD designs, (2) allowing students to make their own electronic circuits by wiring electronic components, and (3) adding a SPICE simulator. We are currently looking at how we could integrate Fritzing [15] and Ngspice [11]: Fritzing could provide a platform where students create their circuits (as schematics or by wiring the components on a breadboard), while Ngspice could simulate those circuits.
References
1. Schocken, S.: Nand to Tetris: building a modern computer system from first principles. In: Proceedings of the 49th ACM Technical Symposium on Computer Science Education, p. 1052 (2018)
2. Mondada, F., et al.: The e-puck, a robot designed for education in engineering. In: Proceedings of the 9th Conference on Autonomous Robot Systems and Competitions, vol. 1, pp. 59–65. IPCB: Instituto Politécnico de Castelo Branco (2009)
3. Bellas, F., et al.: The Robobo project: bringing educational robotics closer to real-world applications. In: International Conference on Robotics and Education RiE 2017, pp. 226–237. Springer (2017)
4. Smith, R., et al.: Open dynamics engine
5. Coumans, E., et al.: Bullet physics library. bulletphysics.org. Accessed 04 Feb 2021
6. Koenig, N., Howard, A.: Design and use paradigms for Gazebo, an open-source multi-robot simulator. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 3, pp. 2149–2154. IEEE (2004)
7. Rohmer, E., Singh, S.P., Freese, M.: V-REP: a versatile and scalable robot simulation framework. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1321–1326. IEEE (2013)
8. Michel, O.: Cyberbotics Ltd. Webots™: professional mobile robot simulation. Int. J. Adv. Robot. Syst. 1(1), 40–43 (2004)
9. Nagel, L., Pederson, D.: Simulation program with integrated circuit emphasis. In:
Midwest Symposium on Circuit Theory (1973)
10. Analog Devices: LTspice. https://www.analog.com/en/design-center/design-tools-and-calculators/ltspice-simulator.html. Accessed 08 Jan 2021
11. Vogt, H., Hendrix, M., Nenzi, P., Warning, D.: Ngspice users manual version 33
(2020)
12. Gonçalves, P.F., Sá, J., Coelho, J., Durães, J.: An Arduino simulator in classroom – a case study. In: First International Computer Programming Education Conference (ICPEC). Schloss Dagstuhl-Leibniz-Zentrum für Informatik (2020)
13. Bo Su, L.W.: Application of Proteus Virtual System Modelling (VSM) in teaching of microcontroller. In: 2010 International Conference on E-Health Networking, Digital Ecosystems and Technologies (EDT), vol. 2, pp. 375–378 (2010). https://doi.org/10.1109/EDT.2010.5496343
14. SimulIDE. https://www.simulide.com. Accessed 08 Jan 2021
15. Knörig, A., Wettach, R., Cohen, J.: Fritzing: a tool for advancing electronic prototyping for designers. In: Proceedings of the 3rd International Conference on Tangible and Embedded Interaction, pp. 351–358 (2009)
Exploring a Handwriting Programming
Language for Educational Robots
1 Introduction
With the increasing efforts to introduce computer science (CS) in K-12 worldwide [2], it has become necessary to develop tools and environments that offer
an appropriate progression across school years [13]. Nowadays, aside from text-based programming languages such as Python or C, three main approaches to teaching CS exist: CS unplugged activities¹, which do not involve the use of screens and aim at teaching core CS concepts rather than programming per se; Tangible Programming Languages (TPLs), which involve the use of physical manipulatives to program virtual or physical agents [9,10]; and Visual Programming Languages (VPLs) such as Scratch², which offer block-based approaches to programming, either on tablets or computers [7]. Although VPLs are increasingly
gramming, either on tablets or computers [7]. Although VPLs are increasingly
present in education settings, they are met with resistance from teachers at the
lower levels of primary school mainly due to the presence of screens [2]. Con-
versely, TPLs, which are still not present in formal settings [8] appear to be
better suited for the needs of primary school education. Indeed, the tangibility
of TPL approaches aligns nicely with early childhood education practices and
principles [3,15], promotes active and collaborative learning [9,16], and is less
likely to be met with resistance by teachers. Moreover, TPLs seem to appeal
more than VPLs to young students [7] and help mitigate gender stereotypes
[7,9]. Lastly, experiments suggest that TPLs have a positive impact on interest,
enjoyment and learning [9], all the while reducing the amount of errors produced
[17] and the amount of adult support required [8] compared to VPLs.
Educational Robots (ER), which by design embody and enact abstract CS
concepts and directives, can be thought of as naturally aligned with the principles
of TPL, and thus a potential means to promote the adoption of TPL by teachers.
Among the solutions proposed in the literature to combine ER and TPL, those
not requiring the purchase of additional dedicated hardware (i.e., besides the robot itself) seem particularly promising for primary school education settings [6]. A recent example is the work of Mehrotra et al. [8], who developed a Paper-based Programming Language (PaPL) for the Thymio II robot³, observing that the paper-based interface helps promote the use of tangible interfaces in classrooms by offering a ubiquitous and accessible solution which only requires the robot and a classroom laptop. Similarly, TPL approaches that introduce handwriting as a way to provide instructions or values [14] seem particularly interesting for primary school education settings, enhancing the programming sessions with the possibility to train essential graphomotor skills [18], which have been shown to be linked to improved student learning [12].
In this work, we build on the principles of the first handwriting TPL approaches
and the Paper-based Programming Language to introduce a Handwriting-based
Programming Language (HPL). By allowing students to program the robot by
handwriting instructions on a sheet of paper, HPL relies on two cornerstones of
¹ CS Unplugged activities: csunplugged.org/en/.
² See scratch.mit.edu for Scratch resources and scratchjr.org for Scratch Jr.
³ Thymio II: https://www.thymio.org/.
270 L. El-Hamamsy et al.
2 Development of HPL
Fig. 1. HPL symbols (A) and workflow from the handwriting activity to the robot commands (B). The "up" and "down" arrows (first row) correspond to forwards and backwards motions. The "forward right" and "forward left" arrows (second row) represent a forward motion along an arc of given circumference. The "rotate right" and "rotate left" arrows (third row) correspond to a rotation. Icons are taken from FlatIcon and the NounProject.
A data set of 6888 images of handwritten arrows was acquired from 92 students aged 9 to 12, and split 60%–40% between training and testing. During both training and testing, each image is first pre-processed using adaptive filtering and binarisation, followed by morphological operators, to reduce noise while preserving edges and removing asperities [5]. The arrows are then isolated by identifying contours. The feature set is designed to characterise the shape differences between the arrows by combining (a) Fourier descriptors, (b) Hellinger distances between the pixel-density histograms, along the x and y axes, of the arrow to be classified and the class centroids computed on the training set, and (c) geometric features (circularity, convexity, inertia, rotated bounding rectangle, minimum enclosing circle, moments, etc.). A number of state-of-the-art classifiers were trained and tested, including Decision Trees, Support Vector Machines and a Multi-Layer Perceptron (MLP).
Table 1. Confusion matrix, Precision (P), Recall (R) and F1-score for the MLP classifier (learning rate 0.01, ReLU activation, Adam optimiser), computed on the test set. Each row of the confusion matrix corresponds to a true label and reports the number of observations predicted to be in the class indicated by each column.
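For reference, feature (b) reduces to the standard Hellinger distance between normalised projection histograms; the sketch below uses that textbook formula, while the paper's exact binning and normalisation may differ:

```python
import math

def density_profiles(mask):
    """Pixel-density histograms of a binary arrow image along the
    x axis (per column) and y axis (per row), normalised to sum to 1."""
    total = sum(sum(row) for row in mask) or 1
    cols = [sum(row[j] for row in mask) / total for j in range(len(mask[0]))]
    rows = [sum(row) / total for row in mask]
    return cols, rows

def hellinger(p, q):
    """Hellinger distance between two discrete densities:
    H(p, q) = (1 / sqrt(2)) * ||sqrt(p) - sqrt(q)||_2, bounded in [0, 1]."""
    s = sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q))
    return math.sqrt(s / 2.0)
```

Each arrow then contributes two such distances per class: one between the column profiles and one between the row profiles of the arrow and the class centroid.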
Fig. 2. Learning activity with ER-HPL using the Thymio robot. Participants are expected to program the robot to move from one icon on the map to another using the HPL programming language. To learn about the notion of cost, on the playground map (top), the colour of the ground indicates the cost for the robot to navigate along it: the darker the ground colour of a segment, the higher the cost for the robot to navigate it and the faster the robot uses up its initial energy (denoted by the colours of the LEDs on the surface of the robot), with the robot measuring the intensity of the ground colour through its ground sensors. The instructions corresponding to the displayed trajectory (in white on the map) are shown at the bottom. Each step of the algorithm generates either a displacement (indicated by the white arrows on the map) equal to the distance between the illustrated locations on the map (i.e., 11 cm) or a rotation of 45°. Icons are taken from FlatIcon and the NounProject.
Fig. 2, Option 1). Conversely, if the objective is to have the students practice
their handwriting, the teacher can choose to have them write the arrows on a
single sheet of paper (see Fig. 2, Option 2).
A preliminary heuristic evaluation of the HPL platform and the corresponding activity framework was conducted, based on the methodology developed by Giang [4]. The evaluation included 8 participants with experience in education (2 teachers, 4 engineers, 2 educational researchers), representing the different stakeholders involved in the development of ER content. The participants were interested
and highlighted the engaging, kinaesthetic, playful character of this low-cost approach
to teaching programming. Although one researcher mentioned that they
were unsure how the HPL would help improve learning CS concepts, other par-
ticipants appreciated that the HPL would help train graphomotor skills and
increase the sense of ownership of the system. Participants mentioned that the HPL
components were easy to understand (but not necessarily easy to draw), and
suggested continuing to improve the robustness of the symbol recognition. They
believed that drawing the symbols on individual cards would facilitate collaboration
and make it easy to modify the number of symbols used. They found the designed
activity set intuitive and didactic. They appreciated the
various scaffolding possibilities and the presence of a physical exercise book com-
plementing the in-app instructions. They characterised the overall platform as
intuitive, but suggested improvements in terms of storytelling and the applica-
tion’s graphical user interface, specifically recommending additional feedback on
the smartphone.
4 Conclusion
In this article, we propose the Handwriting-based Programming Language (HPL),
which aims to facilitate the introduction of programming in primary school education.
HPL relies on paper and handwriting to program Educational Robots (ER) in a
tangible way, while leveraging well-known practices for early childhood education.
From the students' perspective, the use of handwriting as input
modality introduces the possibility to train graphomotor skills while program-
ming, and can potentially contribute to an increased sense of controllability and
ownership, which, like handwriting in general, are important factors for student
learning. From the teachers’ perspective, HPL aims to be close to traditional
pedagogical approaches and facilitate the development of learning activities that
organically take into account instructions (and instruction modalities), learning
artefacts (i.e., the robot, playground and interfaces) and assessment methods [4].
Therefore, we believe that HPL will be more attractive to teachers who are still
reticent about educational technologies. Future steps include evaluating these
hypotheses in classroom experiments with the target group and expanding to a
larger set of instructions. One could even envision a fully tablet-based alternative
in conjunction with a digital pen and an automated handwriting diagnostic sys-
tem [1]. Such a system would not only be able to assess the child’s progress over
time but would be able to propose adequate remediation to address common
handwriting difficulties [1] through playful means.
Acknowledgements. A big thank you goes out to the teacher A.S. and the students
involved in the data collection, as well as our always supportive colleagues.
274 L. El-Hamamsy et al.
References
1. Asselborn, T.L.C.: Analysis and remediation of handwriting difficulties. Ph.D. thesis,
EPFL (2020). https://doi.org/10.5075/epfl-thesis-8062
2. El-Hamamsy, L., et al.: A computer science and robotics integration model for
primary school: evaluation of a large-scale in-service K-4 teacher-training program.
Educ. Inf. Technol. 26(3), 2445–2475 (2020). https://doi.org/10.1007/s10639-020-10355-5
3. Elkin, M., Sullivan, A., Bers, M.U.: Programming with the KIBO robotics kit in
preschool classrooms. Comput. Schools 33(3), 169–186 (2016)
4. Giang, C.: Towards the alignment of educational robotics learning systems with
classroom activities. Ph.D. thesis, EPFL (2020). https://doi.org/10.5075/epfl-thesis-9563
5. Herrera-Camara, J.I., Hammond, T.: Flow2Code: from hand-drawn flowcharts to
code execution. In: SBIM 2017, pp. 1–13. ACM (2017)
6. Horn, M.S., Jacob, R.J.K.: Designing tangible programming languages for class-
room use. In: TEI 2007, p. 159. ACM Press (2007)
7. Horn, M.S., Solovey, E.T., Crouser, R.J., Jacob, R.J.: Comparing the use of tangi-
ble and graphical programming languages for informal science education. In: CHI
2009, pp. 975–984. ACM, April 2009
8. Mehrotra, A., et al.: Introducing a paper-based programming language for computing
education in classrooms. In: ITiCSE 2020, pp. 180–186. ACM (2020)
9. Melcer, E.F., Isbister, K.: Bots & (Main)Frames: exploring the impact of tangible
blocks and collaborative play in an educational programming game. In: CHI 2018,
pp. 1–14. ACM (2018)
10. Mussati, A., Giang, C., Piatti, A., Mondada, F.: A tangible programming language
for the educational robot Thymio. In: 2019 10th International Conference on
Information, Intelligence, Systems and Applications (IISA), pp. 1–4, July 2019
11. Nasir, J., Norman, U., Johal, W., Olsen, J.K., Shahmoradi, S., Dillenbourg, P.:
Robot analytics: what do human-robot interaction traces tell us about learning?
In: 2019 28th IEEE RO-MAN, pp. 1–7, October 2019
12. Ose Askvik, E., van der Weel, F.R.R., van der Meer, A.L.H.: The importance of
cursive handwriting over typewriting for learning in the classroom: a high-density
EEG study of 12-year-old children and young adults. Front. Psychol. 11, 1810
(2020)
13. Papavlasopoulou, S., Giannakos, M.N., Jaccheri, L.: Reviewing the affordances of
tangible programming languages: implications for design and practice. In: 2017
IEEE EDUCON, pp. 1811–1816 (2017)
14. Sabuncuoğlu, A., Sezgin, M.: Kart-ON: affordable early programming education
with shared smartphones and easy-to-find materials. In: Proceedings of the 25th
International Conference on Intelligent User Interfaces Companion, IUI 2020, pp.
116–117. ACM (2020)
15. Sapounidis, T., Demetriadis, S.: Educational robots driven by tangible programming
languages: a review on the field. In: Alimisis, D., Moro, M., Menegatti, E. (eds.)
Edurobotics 2016. AISC, vol. 560, pp. 205–214. Springer, Cham (2017).
https://doi.org/10.1007/978-3-319-55553-9_16
16. Sapounidis, T., Demetriadis, S., Papadopoulos, P.M., Stamovlasis, D.: Tangible
and graphical programming with experienced children: a mixed methods analysis.
Int. J. Child Comput. Interact. 19, 67–78 (2019)
Exploring a Handwriting Programming Language for Educational Robots 275
Open Access This chapter is licensed under the terms of the Creative Commons
Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/),
which permits use, sharing, adaptation, distribution and reproduction in any medium
or format, as long as you give appropriate credit to the original author(s) and the
source, provide a link to the Creative Commons license and indicate if changes were
made.
The images or other third party material in this chapter are included in the
chapter’s Creative Commons license, unless indicated otherwise in a credit line to the
material. If material is not included in the chapter’s Creative Commons license and
your intended use is not permitted by statutory regulation or exceeds the permitted
use, you will need to obtain permission directly from the copyright holder.
Artificial Intelligence
AI-Powered Educational Robotics as a Learning
Tool to Promote Artificial Intelligence
and Computer Science Education
Amy Eguchi
1 Introduction
We cannot ignore the fact that the impacts of Artificial Intelligence (AI) on our society
are becoming more prominent than ever. For example, many children may have used
AI assistants without becoming aware of them, and some have had the privilege of
growing up with AI assistants or AI-assisted smart devices in their homes (e.g., Google
smart speakers, or devices equipped with Siri or Alexa). The rapid development of
voice and facial recognition technologies and machine learning algorithms has started
to transform our everyday lives. One of the most common AI algorithms that we
encounter every day is the recommender system, which powers customized timelines
on Social Networking Services (SNS, such as Facebook and Twitter) and the video
suggestions we see on YouTube.
Educators across fields, from computer science and AI to education, strongly suggest
that it is crucial to help people understand the science behind AI, its limits, and its potential
societal impacts in our everyday lives as well as in the future. What is especially urgent
is to prepare K-12 students for their future professions, which might not currently exist,
and to become citizens capable of understanding and utilizing AI-enhanced technologies
appropriately, so that these technologies do not benefit some populations over others [1].
This is of particular importance for low-income and underrepresented minority students,
for whom the potential to be left behind is greatest due to a lack of resources and
opportunities to explore computing in their schools and a lack of exposure to AI-enabled
devices in their homes and communities.
In the US, several initiatives at the grassroots level have emerged to provide greater
access to K-12 AI Education. AI4All provides AI summer camps for historically
excluded populations through education and mentorship with a focus on high school.
AI4K12, a joint project between CSTA (the Computer Science Teachers Association)
and AAAI (the Association for the Advancement of Artificial Intelligence), is developing
K-12 AI guidelines for teachers
and students, organized around the Five Big Ideas in AI [1]. The Five Big Ideas in AI
were introduced in 2019 (see Fig. 1).
The AI4K12 guidelines are being developed in response to the existing CSTA K-12 CS
Standards (2017), which include only two standards addressing AI, under the Algorithms
& Programming concept for the 11–12 grade band:
3B-AP-08 - Describe how artificial intelligence drives many software and physical
systems.
3B-AP-09 - Implement an artificial intelligence algorithm to play a game against a human
opponent or solve a problem.
positive, including their excitement to continue learning about robotics and coding, and
tips to make the robots better. Moreover, teachers asked the author if she could help them
start incorporating educational robotics into their lessons, although this has not yet
started due to the COVID-19 pandemic.
Educational robotics started to gather attention from the education community after
LEGO Mindstorms RCX was released in the late 1990s. It has gained further popularity
and demand since the worldwide focus on STEM and computer science education
began to prevail. It is widely accepted and understood that educational robotics brings
excitement and motivation to learn, and fosters curiosity and creativity in students [7–11].
Educational robotics is an effective tool for promoting students' learning. It is most
effective when students engage in making or creating with the hands-on robotic tool,
which produces a fun and engaging learning environment. Making with robotics
engages students in manipulating, assembling, disassembling, and reassembling
materials while going through design-learning and engineering iteration processes.
Students also engage in problem-solving processes through trial and error while
improving the code for their robots. The learning approaches that make learning with robotics
successful are project-based, problem-based, learning by design, student-centered, and
constructionist learning approaches where the focus is on the process of learning, rather
than the final product [12].
AI, as mentioned earlier, is still an emerging technology that is continuously being
developed, incorporated into, and influencing our daily lives. When students engage in
activities using AI technology, they become curious and wonder how it works. Some
might also become interested in creating their own or tinkering to make it their own. AI
provides students with excitement and more learning opportunities to explore the new
technology; however, since some students have more exposure to AI in their homes or
communities while others have less access, it is crucial to incorporate AI learning early. To help
the early introduction of AI, there are several tools available for K-12 populations of
students. For example, AI for Oceans (https://code.org/oceans), designed for grades 3 and
up, introduces AI, machine learning, training data, and AI bias. It aligns with some con-
cepts of the United States’ Computer Science Teachers Association Computer Science
Standards (Data and Analysis and Impacts of Computing). There are also AI-enhanced
online tools and resources that introduce AI which are fun to use. For example, there
are several AI-tools accessible to K-5, including Scratch Lab’s Face Sensing (https://lab.
scratch.mit.edu/face/), Scratch extensions (i.e. text to speech, translate, video sensing,
face sensing), Machine Learning for Kids (https://machinelearningforkids.co.uk/), and
Cognimates (https://cognimates.me/).
Although there are several AI-enhanced online tools that support students’ learning of
AI, computer science, and computational thinking concepts, the concepts remain rather
abstract for students since their work stays inside of the computer (virtual and abstract).
Students, especially younger ones, tend to struggle to fully grasp abstract concepts and
ideas without applying them in real-life situations using manipulatives.
With Seymour Papert's constructionism theory as its foundation, the learning environment
that robotics tools create promotes students' learning of abstract concepts and ideas [12–
18]. Incorporating AI-powered robotics tools as manipulatives could make abstract AI,
computer science, and computational thinking concepts and ideas visible while
students apply the concepts in real-life situations. Since the subjects and concepts that AI
introduces are foreign to many, making AI and its concepts visible through AI-robotics
tools could support students' construction of new knowledge, deepen their understanding
of AI, and help them retain the newly constructed knowledge.
3.1 Cozmo
Cozmo (Fig. 2) is the first AI-powered robot that became available for students to explore
AI while learning to code. Cozmo is equipped with proximity sensors, a gyroscope and a
downward-facing cliff detector, and a camera, which allows it to sense its environment.
Cozmo has vision capabilities, which allow it to learn human faces and objects and to
sense human emotions. It provides several coding options, such as Code Lab – a block
coding environment built on Scratch Blocks for younger students – and Python, along
with SDKs for expert programmers to explore its AI capabilities. Code Lab is a coding
app that runs on tablets. Calypso (https://calypso.software/), another tablet-based
coding app for Cozmo, is a simple tile-based user interface for teaching robot logic and
behavior. It also provides an AI lesson that encourages students to learn how AI works
while developing coding skills. ReadyAI, a company promoting AI education based in
Pittsburgh, developed an AI education unit for elementary school students (https://edu.
readyai.org/courses/lesson-plans-elementary-school/). The lessons incorporate project-
based learning and let students explore AI concepts with a special focus on AI ethics and
AI for social good. Cozmo is currently not available; however, it will become available
again in Spring 2021.
3.2 Zumi
Zumi, by Robolink (https://www.robolink.com/zumi/), is a self-driving car robot with
vision and navigation capabilities that allows students to explore AI, machine learning,
3.3 CogBots
The CogBots – CogBot and CogMini (Figs. 4 and 5) – are open-source robotics kits
developed by CogLabs in collaboration with Google and UNESCO. They are
do-it-yourself-type robotics kits that spark students' creativity and imagination. CogBots
empower students through designing, assembling, and programming their own “thinking” robot.
The kits use sustainable and readily available materials like recycled boxes and
cardboard (Fig. 6), 3D-printed parts, and recycled smartphones. For the controller,
CogBots use an Arduino-compatible ESP32 board that communicates with the motors
and a smartphone. The ESP32 is a low-cost, low-power microcontroller board with
integrated WiFi and dual-mode Bluetooth communication capabilities, which makes it
robust enough for the AI-robotics tool.
Its motors, micro 360-degree continuous rotation servo motors, are also low-cost (US$5
each), which contributes to keeping the price of the robot affordable.
where no installation is required, making it easy for teachers to use in their classrooms,
since any installation on classroom computers requires IT support. Because Scratch is
widely used in K-12 classrooms around the world, this makes programming with
CogBots accessible to many students who have experience coding with Scratch. Teachable
Machine, by Google Creative Lab, is an additional tool that enables CogBot to provide
a machine learning experience (https://teachablemachine.withgoogle.com/train/image).
The ThinkBot Scratch extension allows students to use an image classification model
created with Teachable Machine in their code to control the CogBots.
References
1. Touretzky, D.S., Gardner-McCune, C., Martin, F., Seehorn, D.: Envisioning AI for K-12: what
should every child know about AI? In: AAAI 2019. AAAI Press, Palo Alto (2019)
2. Payne, B.H.: An Ethics of Artificial Intelligence Curriculum for Middle School Students.
https://aieducation.mit.edu/aiethics.html
3. ISTE: Artificial Intelligence Explorations and Their Practical Use in Schools. https://www.
iste.org/learn/iste-u/artificial-intelligence
4. Clarke, B.: Artificial Intelligence Alternate Curriculum Unit (2019). http://www.exploringcs.
org/wp-content/uploads/2019/09/AI-Unit-9-16-19.pdf
5. AI4ALL: AI4ALL - Open Learning. https://ai-4-all.org/open-learning/. Accessed 1 Feb 2021
6. Eguchi, A.: Keeping students engaged in robotics creations while learning from the
experience. In: Presented at the 11th International Conference on Robotics in Education
(2020)
7. Eguchi, A.: Educational robotics theories and practice: tips for how to do it right. In: Barker,
B.S., Nugent, G., Grandgenett, N., Adamchuk, V.L. (eds.) Robotics in K-12 education: a
new technology for learning, pp. 1–30. Information Science Reference (IGI Global), Hershey
(2012)
8. Eguchi, A.: Learning experience through RoboCupJunior: promoting STEM education and
21st century skills with robotics competition. In: Proceedings of the Society for Information
Technology & Teacher Education International Conference (2014)
9. Eguchi, A.: Educational robotics for promoting 21st century skills. J. Autom. Mobile Robot.
Inf. Syst. 8, 5–11 (2014). https://doi.org/10.14313/JAMRIS_1-2014
10. Eguchi, A.: Computational thinking with educational robotics. In: Proceedings of the Society
for Information Technology & Teacher Education International Conference (2016)
11. Valls, A., Albó-Canals, J., Canaleta, X.: Creativity and contextualization activities in edu-
cational robotics to improve engineering and computational thinking. In: Lepuschitz, W.,
Merdan, M., Koppensteiner, G., Balogh, R., Obdržálek, D. (eds.) RiE 2017. AISC, vol. 630,
pp. 100–112. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-62875-2_9
12. Eguchi, A.: Bringing robotics in classrooms. In: Khine, M.S. (ed.) Robotics in STEM
Education, pp. 3–31. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-57786-9_1
13. Eguchi, A.: Student learning experience through CoSpace educational robotics. In:
Proceedings of the Society for Information Technology & Teacher Education
International Conference (2012)
14. Eguchi, A.: Educational robotics as a learning tool for promoting rich environments for active
learning (REALs). In: Keengwe, J. (ed.) Handbook of Research on Educational Technology
Integration and Active Learning, pp. 19–47. Information Science Reference (IGI Global),
Hershey (2015)
15. Papert, S.: The Children’s Machine: Rethinking School in the Age of the Computer. Basic
Books, New York (1993)
16. Papert, S.: Mindstorms: Children, Computers, and Powerful Ideas, 2nd edn. Basic Books,
New York (1993)
17. Bers, M.U.: Using robotic manipulatives to develop technological fluency in early childhood.
In: Saracho, O.N., Spodek, B. (eds.) Contemporary Perspectives on Science and Technology in
Early Childhood Education, pp. 105–125. Information Age Publishing Inc., Charlotte (2008)
18. Bers, M.U.: The TangibleK robotics program: applied computational thinking for young
children. Early Child. Res. Pract. 12 (2010)
19. Rhodes, C.M., Schmidt, S.W.: Culturally Responsive Teaching in the Online Classroom
(2018). https://elearnmag.acm.org/featured.cfm?aid=3274756
20. Richards, H.V., Brown, A.F., Forde, T.B.: Addressing diversity in schools: culturally
responsive pedagogy. Teach. Except. Child. 39, 64–68 (2007)
21. Eglash, R., Gilbert, J.E., Foster, E.: Broadening participation toward culturally responsive
computing education - Improving academic success and social development by merging
computational thinking with cultural practices (2013)
An Interactive Robot Platform
for Introducing Reinforcement Learning
to K-12 Students
1 Introduction
Artificial intelligence (AI) is predicted to be a critical tool in the majority of
future work and careers. It is increasingly important to introduce basic AI con-
cepts to K-12 students to build familiarity with AI technologies that they will
interact with. Reinforcement Learning (RL), a sub-field of AI, has been demon-
strated to positively contribute to many fields, including control theory [1],
physics [2] and chemistry [3]. Its promise was also showcased by the AlphaZero system,
which attracted widespread public attention by defeating the world's top chess
and Go players [4]. For students, the basic concepts of RL are intuitive and attractive
to learn, since RL resembles our natural understanding of how learning happens [5] and
is easy to demonstrate with games. For instance, when researchers introduced
Super Mario to undergraduate RL sessions, they argued that it increased stu-
dents’ engagement [6]. However, most current platforms and empirical research
on introducing AI to K-12 students are based on training supervised learning
c The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 288–301, 2022.
https://doi.org/10.1007/978-3-030-82544-7_27
models [7,8]. Only a few activities on the Machine Learning for Kids
platform, and a VR system designed by researchers, are intended to introduce
RL to K-12 students [9]. The activities in these RL teaching projects were
all developed entirely in simulated worlds, and there is no research on using physical
tools such as educational robots to introduce RL concepts to K-12 students in
the real world.
To address this need, we designed three consecutive sessions that combine the
LEGO SPIKE Prime robot with a code-free web-based GUI, aimed at students in
middle school and above, to teach basic RL concepts. Our first two sessions are
designed around the LEGO SPIKE Prime robot. We asked students to build and
control a robot car to move a specific distance, first with mathematical methods
using extrapolation, and then with a RL method. In the third session, students
were asked to train a virtual robot agent to finish a 1D and a 2D treasure hunt
challenge on a web-based GUI. Prior research has shown that educational robots
are powerful tools for introducing AI concepts to students from young age [10]
to college students [11,12]. Through our sessions, we want students to learn
RL in both the physical and virtual world and give the students an intuitive,
interactive, and engaging learning experience. Due to COVID-19, we designed
our system to be easily built and explored at home, with the instructors providing
feedback and guidance using common video conferencing platforms. Our system
and proposed structured activities cover the following aspects of RL: 1) Key
concepts in RL, including state, action, reward and policy; 2) Exploration and
exploitation and how the agent chooses whether to explore or exploit; 3) Q-table
and agent’s decision making based on it; 4) Episodes and termination rules; and
5) Impacts of human input. We ran a pilot study in December 2020 to test
our system design and session plan remotely with 6 high school students in
the Boston area, and we describe the key results and takeaways in this paper.
works indicate that students are more motivated and engaged when learning
AI through games [19]. Educators also consider “toy problems” to be satisfactory
approaches for introducing basic AI knowledge [20]. Our activity is inspired by
classic treasure hunt games, which we make tangible by combining the web
GUI with an educational robotics platform.
LEGO and other educational robots have been studied in many K-12 education
contexts, including physics [21], mathematics [22] and engineering [23,24].
These studies have shown that robotics kits can improve the students’ engage-
ment and facilitate understanding of STEM concepts. LEGO robots have also
been used in studies for teaching AI and robotics to students of different age groups
[10–12,25–27]. These researchers have reported that LEGO robots can make
the learning more interactive, attractive and friendly to students who do not
have an AI or robotics background, which provided inspiration for us on how to
utilize the advantages of physical robots when designing our RL activity.
Fig. 1. An example physical build of the robot car and the activity environment.
choose the action with the highest Q-value in its current state (i.e., the agent
acts greedily with respect to its current action-value function).
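The greedy rule described above can be sketched in a few lines of Python (a minimal illustration with made-up Q-values, not the authors' implementation):

```python
# Acting greedily: in each state, pick the action whose current Q-value
# estimate is highest. Q maps (state, action) pairs to estimated values;
# the numbers below are invented for illustration.
Q = {
    ("s0", "left"): -1.2, ("s0", "right"): 3.4,
    ("s1", "left"):  0.8, ("s1", "right"): 0.1,
}

def greedy(Q, state, actions=("left", "right")):
    return max(actions, key=lambda a: Q[(state, a)])

print(greedy(Q, "s0"), greedy(Q, "s1"))  # right left
```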
RL can be computationally expensive and may require large amounts of
interaction. To speed up learning, researchers have proposed human-in-the-loop
RL methods. For example, in the learning-from-demonstration (LfD) framework,
human teachers take over the action selection step, often providing several tra-
jectories of complete solutions before the agent starts learning autonomously. In
a related paradigm, the agent can seek “advice” from its human partner, e.g.,
what action to select in a particular state for which the Q-values are thought
to be unreliable due to lack of experience. One of the goals of our system and
proposed activity is to demonstrate to students how human partners can help a
robot learn through interacting with it.
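The advice paradigm can be sketched as follows; the names (`visit_counts`, `ask_human`) and the visit-count criterion are our own illustrative assumptions, not part of the authors' system:

```python
# When a state has been visited too few times, its Q-values are treated as
# unreliable, so the agent asks its human partner for an action; otherwise
# it acts greedily on its own estimates.
def choose_action(Q, state, actions, visit_counts, ask_human, min_visits=5):
    if visit_counts.get(state, 0) < min_visits:
        return ask_human(state)              # defer to the human teacher
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

ask_human = lambda s: "right"                # stand-in for a real human
# No experience in "s0" yet, so the human's advice is followed:
print(choose_action({}, "s0", ["left", "right"], {}, ask_human))  # right
```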
4 Technical Approach
4.1 System Overview
The following system components are required for setting up the activity: a
physical robot platform, a custom code-free web-based GUI environment to bet-
ter visualize and control the RL training process, and instructions to guide stu-
dents as they go through the activity. We used the LEGO SPIKE Prime robot,
which supports MicroPython and Scratch-like block-based coding, and Blue-
tooth communication between the robot and a computer. It comes with several
sensors, including a distance sensor, a force sensor and a color sensor. It also has an
intuitive programming application for students to code and monitor the status
of the robot. In our activity, students are guided through a challenge to build
a robotic car like the one shown in Fig. 1 and program it to move a specific
distance, first using a mathematical approach, and then using an RL approach.
All code and materials are available at: https://github.com/ZyZhangT/rlplayground.
292 Z. Zhang et al.
Fig. 2. The front side of the placemat instructions used in sessions 1 and 2. We provide
students with some example builds and hints for the solutions to the challenges. High-
resolution images are available at: https://github.com/ZyZhangT/rlplayground/tree/
main/Placemats
The first part of our GUI is based on a 1-D treasure hunt activity, as shown
in Fig. 3. The agent is located in a six-state environment displayed at
the center of the interface. Students can differentiate between various states,
including a starting state, a trap (restart from the starting state) and a treasure
Fig. 3. GUI layout of the 1-D treasure hunt. (The upper part is for students to visualize
the virtual robot's moves in the maze and interact with the GUI. The lower part is for
students to visualize the agent's assessment of taking different actions in each state.)
(termination). The agent can choose its action to be either moving left or moving
right. We set the rewards for the trap, the treasure and all other states to −20,
+20, and −1, respectively. Students can let the agent train itself for an episode
using the Q-learning algorithm by clicking on the button above the map and inspect
the learning process by watching how the agent moves around in the environment
and also from the information provided in the user interface, such as the cumulative
return of the episode, the current state, and the last action and reward. We also provide
buttons for students to reset the training process, or to navigate to the 2-D
treasure hunt challenge. To help students acquire a better understanding of the
learning procedure, we provide a visual aid to illustrate the current estimated
Q-value of each state-action pair, which is located under the virtual maze. For
each state-action pair, if the corresponding box turns greener, it means the agent
has a more positive value estimation; if the corresponding box turns redder, it
has a more negative value estimation. How each Q-value is calculated, however, is not
exposed to the student, since it would be too complicated. We also have two
buttons with question marks to help students with some common confusions
they might have during the training process.
Through the 1-D treasure hunt activity, we hope to introduce basic RL concepts
such as state, action and episode to students and help them get familiar with
our interface. After that, students are guided to proceed to the 2-D treasure
hunt activity.
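The 1-D activity can be mirrored in code. The sketch below implements tabular Q-learning for a six-state line with the rewards given above (−20 trap, +20 treasure, −1 otherwise); the state layout (trap at 0, start at 1, treasure at 5) and the learning parameters are our own assumptions for illustration, not taken from the platform:

```python
import random

# Six states on a line; layout and hyperparameters are illustrative.
TRAP, START, TREASURE, N = 0, 1, 5, 6
MOVES = {"left": -1, "right": +1}

def step(state, action):
    nxt = min(max(state + MOVES[action], 0), N - 1)
    if nxt == TRAP:
        return START, -20, False   # trap: penalty, restart from the start
    if nxt == TREASURE:
        return nxt, 20, True       # treasure: reward, episode terminates
    return nxt, -1, False          # every other move costs -1

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    random.seed(seed)
    Q = {(s, a): 0.0 for s in range(N) for a in MOVES}
    for _ in range(episodes):
        s, done = START, False
        while not done:
            a = (random.choice(list(MOVES)) if random.random() < eps
                 else max(MOVES, key=lambda m: Q[(s, m)]))  # epsilon-greedy
            s2, r, done = step(s, a)
            target = r + gamma * max(Q[(s2, m)] for m in MOVES)
            Q[(s, a)] += alpha * (target - Q[(s, a)])       # Q-learning update
            s = s2
    return Q

Q = train()
# Near the treasure, moving right should be valued higher than moving left.
print(Q[(4, "right")] > Q[(4, "left")])  # True
```

This is essentially what the GUI animates: each click on the training button runs one episode of the inner loop, and the green/red boxes visualize the entries of `Q`.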
As shown in Fig. 4, the agent is now in a 2-D maze with the addition of an
exit and some barriers between states to make the problem more complex. To
put students in the robot's perspective, the maze structure starts out hidden
from students, and as the training proceeds, the states visited by the agent and
the barriers it detects are revealed on the map. The maze has
16 states, including a starting state, two traps (restart from the starting state),
Fig. 4. GUI layout for the 2-D treasure hunt. The general layout is the same as the 1-D
version, except that we added a middle part to the GUI for students to play with
different training modes. At the top of the right part of the GUI, students can tweak
the exploration rate and the automatic training speed.
a treasure, and an exit (termination). The rewards for actions that make the
robot reach the traps, the treasure or the exit, or that hit a barrier, are −10, 20,
50, and −10, respectively; otherwise, the reward is −1. The agent now has 4
available actions: moving up, down, left, or right. For the 2-D treasure hunt, we
provide buttons for both automatic and manual training: students can either let
the agent train itself based on the reward table we provide, or give the agent
rewards themselves at each step to train it in their own way. If students wish to
take a step-by-step look at the training process, we also provide an adjustable
automatic training speed so they can slow the training down. Students can also
tweak the exploration rate (ε) during the
training to see how it affects the agent’s learning process. We provide the same
kind of information about the training process above the maze map, and a visual
illustration of the Q-value estimates on the right side of the interface. Through
the 2-D treasure hunt activity, we expect students to gain a deeper comprehension
of basic RL concepts, and also to understand exploration and exploitation and
the impact of human input.
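The adjustable exploration rate corresponds to the ε in an ε-greedy rule: with probability ε the agent picks a random action (explore); otherwise it picks the action with the best current Q-value (exploit). A minimal sketch (the Q-values and ε below are invented for illustration):

```python
import random

def epsilon_greedy(Q, state, actions, eps):
    if random.random() < eps:
        return random.choice(actions)                          # explore
    return max(actions, key=lambda a: Q.get((state, a), 0.0))  # exploit

random.seed(1)
Q = {("s", "up"): 5.0, ("s", "down"): 0.0}
picks = [epsilon_greedy(Q, "s", ["up", "down"], eps=0.3) for _ in range(1000)]
# With eps = 0.3, about 15% of picks land on the inferior action
# (half of the 30% random choices); the rest exploit "up".
print(picks.count("up"))
```

Raising ε makes the agent wander more and discover hidden parts of the maze; lowering it makes the agent commit to what it already believes is best.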
The lesson was divided into three one-hour sessions. The first two sessions lever-
aged the LEGO SPIKE Prime robot. In the first session, students built a robot
car for our “Going the Distance” challenge. Students used the placemat instruction shown in Fig. 2 as a support resource. The goal of this activity was for students to program their car to get as close as possible to, without running over, a LEGO mini-figure placed between 5 in. and 25 in. away from their cars. They were not allowed to use a sensor for this challenge. The objective was
to get them to think about interpolation and extrapolation and get more famil-
iar with programming their robot using mathematical relationships. While this
activity did not leverage RL or other artificial intelligence concepts, it acted as
a setup to the second activity which did introduce RL.
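The mathematical relationship at the heart of the challenge is a unit conversion from target distance to wheel rotations, which can be sketched off-robot; the wheel diameter below is a hypothetical value, since it depends on each student's build.

```python
import math

WHEEL_DIAMETER_IN = 2.2   # hypothetical; depends on the wheels in the build

def rotations_for_distance(distance_in):
    """Motor rotations needed to cover a distance with no sensors:
    each full rotation moves the car one wheel circumference."""
    return distance_in / (math.pi * WHEEL_DIAMETER_IN)

needed = rotations_for_distance(15.0)   # ~2.17 rotations for a 15 in. run
```

Interpolation enters when students calibrate: measuring the distance covered by, say, 1 and 3 rotations and fitting the line between them, instead of trusting a nominal diameter.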
Robot Platform for Learning RL 295
In the second session, the students used reinforcement learning to solve the “Going the Distance” challenge. The goal of this challenge was to help students obtain a general understanding of what RL is and let them try to code a simple RL method. This session contained 4 phases. Since we assumed that students had little to no prior experience with artificial intelligence, the first phase was a 5-min introduction to the general concepts of AI and reinforcement learning. Phase 2 was the main part of the session: we sent out the placemat shown in Fig. 2, and students individually tried to implement the RL method provided on the placemat to accomplish the challenge. In Phase 3, we asked students to show how far they got. The goal of this phase was for them to share ideas
with each other, and also for us to evaluate how much the students learned from
this session and whether the activity design was achieving aspects of the desired
outcomes. The last phase was to get feedback from students and introduce some
of the concepts for the final session.
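The paper does not reproduce the placemat's RL method, but a minimal bandit-style sketch conveys its likely shape: treat each candidate rotation count as an action, use how close the car stops as the reward, and increasingly prefer the best candidate. The simulated `distance_error` and its target of 2.2 rotations are hypothetical stand-ins for a physical trial run on the robot.

```python
import random

def distance_error(rotations, target_rotations=2.2):
    """Stand-in for a physical trial: how far from the mini-figure the car stops.
    On the real robot this would be a measurement, not a formula."""
    return abs(rotations - target_rotations)

def learn_rotations(candidates, trials=200, epsilon=0.3, seed=0):
    """Epsilon-greedy trial and error over a fixed set of rotation counts."""
    rng = random.Random(seed)
    value = {c: 0.0 for c in candidates}
    count = {c: 0 for c in candidates}
    for _ in range(trials):
        if rng.random() < epsilon:
            c = rng.choice(candidates)               # explore a random setting
        else:
            c = max(candidates, key=value.get)       # exploit the best so far
        reward = -distance_error(c)                  # closer stop -> higher reward
        count[c] += 1
        value[c] += (reward - value[c]) / count[c]   # incremental mean
    return max(candidates, key=value.get)

best = learn_rotations([round(0.5 * k, 1) for k in range(1, 11)])  # 0.5 .. 5.0 -> 2.0
```

Since the simulated reward is deterministic, the estimate for each candidate settles after one trial, and the loop converges on the rotation count closest to the target.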
The third session was designed around our web platform. The learning objec-
tives were to let students: 1) Understand more specific RL concepts (state,
action, reward and episode); 2) Identify the differences between exploration and exploitation in the context of reinforcement learning, and understand why exploration is important and how the training process is affected if an inappropriate exploration rate (ε) is given; 3) Associate a specific action performed by the agent with the corresponding policy function, i.e., relate the agent’s choice with the values in the Q-table; and
4) Understand how humans can be part of the learning process and facilitate
the agent identifying more desirable actions in a shorter amount of time.
We separated this session into 4 phases. In the first phase, students played
with the 1D treasure hunting game to get familiar with the GUI and basic RL
concepts. Then they started the 2D treasure hunting challenge, in which they
first watched how the robot trained itself and learned about the environment.
Next, they manually trained the robot by giving it a reward chosen from six options (+10, +5, +1, −1, −5, −10) at each step for several episodes, to compare the two methods. After that, we asked students to reset the environment, set ε to zero, automatically train the robot for one episode, and then observe how the robot kept moving to the treasure box and failed to find the exit. We let them discuss why this happens, as a way to introduce the concepts of exploration and exploitation. Due to time limitations, we interspersed our assessment of students’ learning outcomes throughout the training process in the form of discussion questions. These questions included asking them to explain why specific phenomena
happen during training and to describe their understanding of RL concepts.
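Why the purely greedy robot shuttles to the treasure instead of exiting can be reproduced in a toy setting: because the treasure square pays out on every visit, a greedy (ε = 0) rollout of the learned Q-table farms it rather than taking the one-off exit reward. The 1-D corridor below reuses the maze's reward values, but its layout and hyperparameters are illustrative assumptions, not the platform's actual code.

```python
import random

TREASURE, EXIT_STATE = 2, 4   # toy corridor: states 0..4

def step(state, action):      # action: -1 = left, +1 = right
    nxt = min(max(state + action, 0), EXIT_STATE)
    if nxt == TREASURE:
        return nxt, 20        # the treasure pays out on every visit
    if nxt == EXIT_STATE:
        return nxt, 50        # terminal
    return nxt, -1

def train(episodes=300, epsilon=0.2, alpha=0.5, gamma=0.9, seed=1):
    """Q-learning with exploration, as in the automatic training mode."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    for _ in range(episodes):
        state = 0
        for _ in range(50):   # cap episode length
            a = rng.choice((-1, 1)) if rng.random() < epsilon \
                else max((-1, 1), key=lambda x: Q[(state, x)])
            nxt, r = step(state, a)
            best_next = max(Q[(nxt, -1)], Q[(nxt, 1)])
            Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
            state = nxt
            if state == EXIT_STATE:
                break
    return Q

def greedy_rollout(Q, steps=20):
    """What students saw with the exploration rate set to zero."""
    state, path = 0, []
    for _ in range(steps):
        a = max((-1, 1), key=lambda x: Q[(state, x)])
        state, _ = step(state, a)
        path.append(state)
        if state == EXIT_STATE:
            break
    return path

path = greedy_rollout(train())
# With a repeatable treasure reward, the greedy agent loops near the
# treasure square instead of continuing on to the exit.
```

The discounted return of shuttling past the treasure (+20 every other step) exceeds the one-off +50 for exiting, so exploitation alone never discovers that the exit even exists as a better stopping point.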
5 Pilot Study
5.1 Background
We conducted a pilot study with 6 high school students in a city just outside
of Boston, Massachusetts. These six students were members of a robotics club
that had been meeting weekly for three months to do activities with the LEGO
SPIKE Prime robotic kit. Two of the students had previous robotics and coding experience, and four were new to both robotics and coding.

Fig. 5. A screenshot from our pilot study. One student was testing his robot while others were watching or preparing their robot cars.

The students had no prior knowledge of AI. The testing comprised three separate sessions, as proposed in the previous section. The first session was carried out on December 4th, and the following two sessions were held on the two subsequent Fridays. Each session lasted for an hour, and all the sessions were held remotely on Zoom (Fig. 5).
The LEGO SPIKE Prime robot and the placemat instructions were two key
components of the first two sessions. In our study, we found that the robot not only increased student engagement but also inspired students to be creative. Unlike
a virtual platform where students have limited ways of interacting with the
environment, a physical robot allows them to manipulate the robot in space and
time and control not only the robot itself but also the robot’s environment. For
example, one student used the relationship between the rotations and driving
distance to control his robot in Session 1. They also built different robot cars
and were responsible for finding ways to test them. The placemat instructions
for Session 1 and 2 acted as a shared artifact between the students and the
instructors. In Session 1, the placemat instruction was useful for helping students
get started with building their robot car. In Session 2, the placemat instruction
served as a resource for the instructors to point students towards answers when
they had questions about how to code using RL. The diagram on the front of the
placemat instruction helped spark dialogue about what the relevant variables in
the system were, and the flowchart on the back helped students combine those
variables in a single program to achieve the desired outcome.
treasure hunting activity. He also checked the source code behind the web GUI,
which showed a high level of interest and engagement.
5.4 Limitations
Since the pilot study had only 6 participants, we did not collect or analyze quantitative data from our testing. For Session 2, time was tight, and some students spent too much of it programming the robot and debugging syntax errors rather than working out the logic of the RL algorithm. In the future, we would provide students with more pre-written code chunks so they could focus on the algorithm itself. In Session 3, since students were all working on their own screens, we found it hard to track how far they had got. Going forward, we would use multiple screen-sharing so that we can see what students are doing in real time, which would be especially helpful in the manual training phase. We also found that once one student had answered a question, other students were sometimes unwilling to answer it again. We therefore need to find ways to encourage students to talk about their own thinking and to reduce the influence of other students’ answers, so that we can better evaluate their learning outcomes.
5.5 Results
Overall, students demonstrated satisfactory learning outcomes on both general and specific RL concepts. Though most of the time we let the students independently explore the activities we designed, they still gained an understanding of the AI concepts we hoped they would learn, and the activities successfully sparked their interest in the AI and RL fields. In Session 2 specifically, we expected students to learn the logic of a general RL algorithm through the hands-on experience of coding a solution to a problem. With the exception of the two students who had hardware issues, most students were able to produce the correct structure of the RL method by the end of the session. In Session 3, we hoped that students would learn specific RL concepts, including reward, action, exploration, and exploitation. As shown in Table 1, students could correctly answer most of our evaluation questions. The interesting ideas shared during the session revealed that some students not only had a clear understanding, but even thought one step further than we expected on the concepts covered in this session.
The BRAIINS AI for Kids Platform
annamarialisotti@cavazzisorbelli.it
1 Introduction
Artificial Intelligence is no longer the stuff of the future, although awareness of it is still inadequate at all levels. Children growing up in the era of artificial intelligence (AI) will
have a fundamentally different relationship with technology than those before them [1].
Moreover, as grownups they will surely have to interact with AI in both their personal and
professional lives. As a result, Artificial Intelligence education is very important in the
21st century knowledge information society [2]. Nevertheless, the topic is not explicitly
taught in most of our schools and has not found an appropriate place in curricula yet.
Only a few platforms focusing on AI for schools have emerged, such as Machine Learning for Kids, Calypso for Cozmo, and Cognimates [3–5]. All of these platforms provide AI
engines and enable children to do hands-on projects. There are also some professional
tools, used in projects by companies and teaching at universities, like Firebase or KNIME
[6, 7]. However, to prepare adequate digital literacy, avoid an increasing digital divide
among students and ensure equal opportunities to everyone, educators need to under-
stand how AI technologies can be leveraged to facilitate learning, accelerate inclusion,
solve real-world problems and how teaching can and should change accordingly. BRAI-
INS (BRing AI IN Schools) 2020–2023 is an Erasmus+ KA229 project funded by the
EU Commission and targeting students (15–18 yrs old), teachers and local communities
alike [8]. Coordinated by IIS Cavazzi - Pavullo (IT) the project involves TGM Wien
(AT), IES Villoslada Pamplona (ES) and 2° GEL Kalymnos (GR). The main priorities
are creation and testing of innovative educational practices in the digital era while con-
textually increasing interest and level of achievement in STEM and operating for an
inclusive school and – consequently - society. In this context the project aims to increase students’ competences in the following domains: AI, robotics, and programming.
Focus on AI. Participants in the project will acquire a critical knowledge of basic AI
ideas; gain awareness of its impact on the fast-changing future of work, of the related
ethical issues (privacy, data collection, algorithmic bias and their effect on individuals
and society, human-machine relationship) together with the new opportunities it offers.
They will also identify AI that can be used to support teachers (AES, personal tutors, expert systems, language support), thus favoring a shift in the teachers’ role from a transmission model to that of facilitators.
Focus on Robotics. Working with humanoid robots will be a core section of the project
as human-machine relationships will be a critical point in the near future, dense with
ethical and social implications. With their human-like appearance and physical pres-
ence, such robots call for a more natural and intuitive interaction, are highly motivating
for all students, and have proved particularly effective with special-needs kids, both for engagement and for the improvement of cognitive tasks and social abilities.
Humanoid robots introduce new and attractive pedagogical challenges and
approaches promoting meaningful learning based on authentic tasks. NAO robots, for example, will give students the opportunity to get their hands, minds, and even hearts involved in the world of robots, letting them play the role of innovators in a high-tech but still inclusive society.
the rapid changes in the knowledge and information society of the 21st century and
the reason why it should be embraced and taught by teachers [13]. Moreover, artificial intelligence education should start in elementary school as an integral part of the curriculum and be taught at all stages up to university level. Currently, AI is emerging so fast that society is unable to cope with the transition. Everybody is using it, yet there is a widespread lack of understanding of what AI technologies do (and could do) with the data collected about us. In this context, the BRAIINS (Bring AI IN Schools)
project will set the scene for the understanding of AI future scenarios within schools
and local communities. Students within a service-learning frame will engage with the
public to make them reflect on AI imagery and related messages, unveiling how AI
technologies are already embedded in many different aspects of our lives, realizing how
they can benefit humanity but also which are their drawbacks. In addition, students (with
technical background) will provide online-learning materials and a ready to use website
for their peers (from schools in other countries) in order to give them the opportunity
to learn from each other, encourage them to enhance the website with exercises devised
by them as well as use the website for their discussions of ethics and developing a
creative mindset. Different tools will be integrated in order to learn and teach about AI,
like ML4Kids, KNIME, and the ML Kit for Firebase. Dissemination will be achieved
in informal interactive learning environments such as hackathons, exhibitions, robotic
courses, contests.
2.1 ML4Kids
For an easy entry-point to start AI education, especially at the elementary level, we are
planning to use the ML4Kids Environment. Machine Learning for Kids (ML4Kids) is a
free tool that introduces machine learning by providing hands-on experiences for learning
about and training machine learning systems. The website provides an easy-to-use guided
environment for training machine learning models to recognize text, numbers, images,
or sounds. Through integration with other educational coding platforms, such as Scratch and App Inventor, it allows children to create projects and build games with the machine learning models they train [3]. In addition to the website, a series of books will be presented in order to
guide young readers through creating compelling AI-powered games and applications
using the Scratch programming language [19]. Within the BRAIINS Project this website
will be evaluated, and the best examples will be provided at the project internal website
AI for Kids. Additionally, this website will provide the opportunity to share projects
done by kids and teachers in order to help the community.
2.2 KNIME
The KNIME Software is a professional tool, structured in an analytics platform and a
server. KNIME Analytics Platform is the open-source software for creating and learning
data science. It is intuitive, open, and continuously integrates new developments.
As a result, KNIME makes understanding data and designing data science workflows and
reusable components accessible to everyone. Together with KNIME Server, analytical
applications and services for automation, management, and deployment of data science
workflows could be integrated into the curricula.

The BRAIINS AI for Kids Platform 305

Integrating the KNIME Analytics Platform into our concept allows us to focus on concepts and methods instead of the syntax of programming languages. The KNIME platform offers
a wide range of data access and processing functions, modeling tools, and techniques in
a single platform. Students will be able to apply methods using a software that they can
continue using throughout various stages of their professional lives. Within the KNIME
Academic Alliance [20], material as well as ideas on how to integrate KNIME in classes
will be shared. There are also a couple of books available, like the Guide to intelligent
data analysis: how to intelligently make sense of real data [21].
group to get in touch with Machine Learning. In this context, five groups of students
from the Department of Information Technology from the Vienna Institute of Technology
(TGM, Technologisches Gewerbemuseum), with 5 students per group, will be working
on the tools explained in Sect. 2 in different phases. First of all, within the next semester
all of these groups will work to get a deeper understanding of the topic, the possibilities
of these tools as well as how to learn about them in an easy way. As a result, the
groups will provide their knowledge on a webpage in order to support students from
the other three high schools (with non-technical background). With the beginning of
the next semester, the website will be presented during the European Conference on
Educational Robotics (ECER), which is a research conference of students for students,
where robotic tournaments take place [28]. Besides the presentation of the website, different tournaments involving ML technologies will also be announced. As the students at the other schools involved in the BRAIINS Project learn about the presented ML tools in order to participate in the tournaments, they will also give feedback on the website, which will be improved further as a result. During
the preparation for the tournaments, multiple exercises will be provided on the website.
All the participating teams need to solve those and present their work on the website.
The best solutions will be rewarded with points in their respective categories and for the
overall winner. This should lead to a common knowledge sharing community. After the
BRAIINS Project, this mechanism of introducing new exercises will be opened to the
public and should lead to the growth of this educational material.
4 Conclusion
The project is still in its early stages, but as research in the field of AI technologies has shown, many tools are available. Only a few of them are for absolute beginners or usable for outreach programs. The BRAIINS project therefore tries to overcome this and to find new ways of preparing the right tools for young students out of the ideas and needs of their peer group. In order to learn about the needs of students while they learn about AI technologies, we will evaluate the work done by 20 students in 5 groups across 4 different countries/schools with more than 400 students in total, and present the results in our future work.
References
1. Druga, S., et al.: Inclusive AI literacy for kids around the world. In: Proceedings of FabLearn
2019, pp. 104–111 (2019)
2. Ali, S., Payne, B.H., Williams, R., Park, H.W., Breazeal, C.: Constructionism, ethics, and
creativity: developing primary and middle school artificial intelligence education. In: Pro-
ceedings of the International Workshop on Education in Artificial Intelligence K-12 (EDUAI
2019) (2019)
The BRAIINS Humanoid Robotics Lab
gkoppensteiner@tgm.ac.at
1 Introduction
Human-machine relationships will be a critical point in the near future, dense with ethical
and social implications. AI is much more than just robotics, but in the public imagination the two are often one and the same. As we plunge head first - but often dangerously unaware - into the very midst of a digital revolution, the introduction of humanoid robotics in schools has many advantages: the promotion of meaningful learning based on authentic tasks, the opportunity for students to play the role of innovators in a high-tech but still inclusive society, and a rethinking of society’s relationship with the machines we create.
BRAIINS (Bring AI IN Schools) is an Erasmus+ KA229 Project [1] funded by the
EU Commission targeting students (15–18 years old), teachers and local communities
alike. Its main priorities are the creation and testing of innovative educational practices in
the digital era while contextually increasing interest and level of achievement in STEM
and operating for an inclusive school and – consequently - society. Based on the idea to
bring more students into the STEM fields, Robots have been proven by many projects
as the perfect tool for it [2–4]. Therefore, the project focuses on Artificial Intelligence,
Coding, Robotics, and the integration of all these topics in regular classroom activities
in order to prepare the students for their future. On the other hand, students - within a service-learning frame - will engage with the local communities: social robots will be
instrumental in demonstrating the 5 big ideas of AI to the public [5, 6].
As COVID-19 halted all on-site activities, students reverted to working from home. Remote robotic laboratories are nothing new [7, 8]. However, what we aimed at was something tailored to our needs: free, quick, flexible, easy to access, directly managed by the teachers, with no time limits, and open to experimentation with all the robot’s functionalities and their possible applications. We did not mean to train for just one challenge, but to offer a whole playground. We wanted students to interact online, in real time, with the real robot and with their classmates. In a word, we had in view not just a simulator but a substitute for the usual robotics lab experience, preserving the socializing aspect and the empathy generated by the humanoid robot. The paper is structured as follows: Sect. 2 introduces the Nao robotic platform; Sect. 3 presents the Humanoid Robots Remote Lab; Sect. 4 gives the pedagogic point of view; finally, a conclusion is given in Sect. 5 and future developments in Sect. 6.
and empower the cultural experience within a local institution: either an art gallery, a
museum or a landmark. Cavazzi’s students partnered with Montecuccolo Castle which
hosts some art collections from local painters [15]. One of these collections in particular
captured their interest: “Il Paese ritrovato”. It tells the story of the inhabitants of the
Tuscan Emilian Apennines in a not-so-distant past, when life on the mountains was
hard but villages were still full of life. Since the ‘50s, in fact, because of industrialization and the economic boom, the area as a whole has suffered a slow but persistent demographic decline and a disruptive change in social habits and values. Students decided to offer
the perspective of a young resilient society rooted in its traditional values and confidently
facing the future thanks to the right use of technology, namely robots.
Far from stopping in lockdown, our robotics club gained extra momentum. As with many other aspects of the pandemic’s impact on education, the need to redesign our approaches eventually brought a new perspective and disclosed new possibilities.
Since February ‘20 the stage was set: each student had their own device and Internet connection at home, and had been trained to work collaboratively at a distance with teachers acting as facilitators.
The Cavazzi BRAIINS Remote Lab consisted of two distant stations. We call the Laboratory the place where the robot was physically present, and a Remote Station any place from which a student connected remotely.
The Laboratory. It had one Nao Academic V.6 robot, a laptop with Windows 10, a second device to launch Google Meet, and an operator (usually but not necessarily the teacher). Installed on the laptop were Choregraphe 2.8.6 and TeamViewer. Choregraphe is drag-and-drop graphical software based on building blocks used to produce workflows and program Nao. Blocks are either taken from the very rich library or created by the users themselves; advanced users may also code in Python. It can be freely downloaded from SoftBank Robotics and also works in simulator mode [16]. TeamViewer is a solution for remote access and control of computers or mobile devices. It works with almost every desktop and mobile platform and is free and unlimited for private, non-commercial use [17]. AnyDesk is a low-cost alternative [18]. The second
device just needed a web connection and a working camera, through which students could work collaboratively in Google Meet and follow Nao’s movements and its reactions to the uploaded workflow. They could also interact with Nao, either through the operator (touch sensors) or directly: Nao is in fact able to recognize vocal commands and perform face detection through the screen. We asked each student to use his own avatar to make the recognition task easier for Nao, but it also worked with real faces, provided the level of illuminance was right [19].
The Remote Stations. Students joined in small groups from their homes through
Google Meet. They all had the same software as the laptop in the Laboratory installed
The BRAIINS Humanoid Robotics Lab 311
on their device. The TeamViewer connection, however, was one-directional; we therefore had no privacy issues and no risk of intrusion into students’ laptops.
TeamViewer also enabled quick file sharing, so students could produce workflows on their own laptop prior to the lab and then upload the .crg files and run them. Up to 4 students with the same TeamViewer ID and password connected and worked collaboratively and synchronously. We didn’t use the videoconferencing option in TeamViewer because we wanted a completely free and fixed screen for the teacher to monitor the workflow and join in the collaborative work as a coach.
When a group was satisfied with its coding results and ready for a final test with the real robot, a 30-min slot was booked in the remote lab. The online modality had the added bonus of maximum flexibility and no transport constraints: booking could therefore be extended far beyond the traditional 14.30–16.30 slot. Everyone had the possibility to upload and improve their own workflow and interact with Nao.
5 Conclusion
The Remote Lab experience was totally positive! Against a few cons we count many pros. Cons: the presence of an operator sitting next to Nao is still needed for the tactile sensors, and sometimes also for facial recognition, to adjust the screen in the robot’s view. These two functionalities in fact do not work in the simulator.

312 A. Lisotti and G. Koppensteiner

Pros:
No Connection Issues: Our setup turned out to be very robust. Taking control with TeamViewer/AnyDesk takes just a few seconds. It seldom happened, but in case of a connection failure a student simply dropped out of TeamViewer and Google Meet while the rest of the group was still in and working. Nao was not disrupted, as the connection was not direct but went through the laptop in the same room as the robot.
Time Flexibility: IIS Cavazzi draws students from the whole surrounding mountain area,
and almost 40% of the student population are commuters. Ours is a rural territory, with
villages spread far apart and infrequent public transport connections.
After 4 p.m. many pupils have to leave because there are no later buses home. Moreover,
some students do not join robotics activities because the set day and time do not fit
their already tightly packed schedule of extracurricular activities. Remote labs thus
offer the opportunity to expand well beyond the regular 2.30–4.00 p.m. slot of
traditional extracurricular activities at school.
Easy Outreach to Junior Schools in the Local Area. We plan to offer short outreach
courses on AI and humanoid robotics on a regular basis: three sessions held remotely
and one in presence. We have already started a pilot collaboration with the Junior
Secondary School in Sestola (14 km away, 1,000 m a.s.l.).
6 Further Work
Crises are rare occasions for change. Responding to the necessity of ensuring students
access to the robot so they could complete the Italian Nao Challenge had much wider
implications for our school than we had previously envisioned:
Remote Collaboration with Local Junior Schools: this will become part of our school
Open Day program. Teachers' Training in integrating humanoid robotics into the
disciplines: planned for next September in hybrid mode. Outreach: we plan to launch
and test a beta course for local junior students during the robotics/code week in
October 2021. A hybrid approach will be of great advantage to kids living scattered
across mountain villages where STEM and robotics programs are almost non-existent.
All the above activities will be part of the BRAIINS dissemination program.
Correction to: Robotics in Education
Correction to: M. Merdan et al. (Eds.): Robotics in Education, AISC 1359,
https://doi.org/10.1007/978-3-030-82544-7
The original versions of Chapters 17 and 25 were previously published as non-open
access. They have now been changed to open access under the CC BY 4.0 license, and the
copyright holder has been updated to 'The Author(s)'. The book and the chapters have
been updated accordingly.
Author Index

A
Albo-Canals, Jordi, 231
Angulo, Cecilio, 231
Avramidou, Eleftheria, 26

B
Baumann, Liam, 302
Bettelani, Gemma C., 43
Bezáková, Daniela, 3
Birk, Andreas, 81, 134
Blasco, Elena Peribáñez, 14
Bruno, Barbara, 177, 268
Budinská, Lucia, 3

C
Campos, Jordi, 52
Cañas, José M., 243
Cárdenas, Martha-Ivón, 52
Carmona-Mesa, Jaime Andrés, 166
Cergol, Kristina, 146
Chevalier, Morgane, 177
Cross, Jennifer, 288

D
Danahy, Ethan, 231
Dineva, Evelina, 81, 134

E
Eguchi, Amy, 279
El-Hamamsy, Laila, 177, 268
Eskla, Getter, 14

F
Faiña, Andres, 256

G
Gabellieri, Chiara, 43
Gervais, Owen, 201
Gesquière, Natacha, 72
Giang, Christian, 34, 177, 268

H
Hannon, Daniel, 231
Heinmäe, Elyna, 155
Hild, Manfred, 94, 221
Hrušecká, Andrea, 3

J
Jia, Ruiqing, 189

K
Kaburlasos, Vassilis G., 26
Kalová, Jana, 64
Kangur, Taavet, 268
Karabin, Petra, 146
Karageorgiou, Elpida, 26
Kechayas, Petros, 26
Koppensteiner, Gottfried, 302, 308
Kori, Külli, 155
Kourampa, Efrosyni, 26

L
Larsen, Jørgen Christian, 119
Leoste, Janika, 14, 155
Lisotti, Annamaria, 302, 308
Logozzo, Silvia, 105
Lytridis, Chris, 26

M
Mahna, Sakshay, 243
Malvezzi, Monica, 105
Mannucci, Anna, 43
Massa, Federico, 43
Mathex, Laura, 268
Maurelli, Francesco, 81
Mengacci, Riccardo, 43
Mettis, Kadri, 155
Miková, Karolína, 3
Mondada, Francesco, 177, 268

N
Nabor, Andreas, 81
Negrini, Lucio, 34
Nielsen, Jacob, 119
Novák, Milan, 64

P
Pallottino, Lucia, 43
Panreck, Benjamin, 94, 221
Papakostas, George A., 26
Papanikolaou, Athanasia-Tania, 26
Papaspyros, Vaios, 268
Pastor, Luis, 14
Patrosio, Therese, 201
Pech, Jiří, 64
Pedersen, Bjarke Kristian Maigaard Kjær, 119
Puertas, Eloi, 52

Q
Quiroz-Vallejo, Daniel Andrés, 166

R
Reichart, Monika, 302
Rogers, Chris, 231, 288
Roldán-Álvarez, David, 243

S
Sabri, Rafailia-Androniki, 26
San Martín López, José, 14
Sans-Cope, Olga, 231
Seng, John, 210
Sinapov, Jivko, 288
Skweres, Melissa, 268
Sovic Krzic, Ana, 146
Storjak, Ivana, 146

T
Tammemäe, Tiiu, 14

U
Untergasser, Simon, 94, 221

V
Valigi, Maria Cristina, 105
Van de Staey, Zimcke, 72
Villa-Ochoa, Jhony Alexander, 166

W
Willner-Giwerc, Sara, 288
wyffels, Francis, 72

X
Xie, Mingzuo, 189

Z
Zhang, Ziyi, 288
Zhou, Dongxu, 189

© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2022
M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 315–316, 2022.
https://doi.org/10.1007/978-3-030-82544-7