Robotics in Education


Advances in Intelligent Systems and Computing 1359

Munir Merdan · Wilfried Lepuschitz · Gottfried Koppensteiner · Richard Balogh · David Obdržálek
Editors

Robotics in Education
RiE 2021
Advances in Intelligent Systems and Computing

Volume 1359

Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences,
Warsaw, Poland

Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing,
Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering,
University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University,
Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas
at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao
Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology,
University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute
of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro,
Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management,
Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering,
The Chinese University of Hong Kong, Shatin, Hong Kong
The series “Advances in Intelligent Systems and Computing” contains publications
on theory, applications, and design methods of Intelligent Systems and Intelligent
Computing. Virtually all disciplines such as engineering, natural sciences, computer
and information science, ICT, economics, business, e-commerce, environment,
healthcare, life science are covered. The list of topics spans all the areas of modern
intelligent systems and computing such as: computational intelligence, soft comput-
ing including neural networks, fuzzy systems, evolutionary computing and the fusion
of these paradigms, social intelligence, ambient intelligence, computational neuro-
science, artificial life, virtual worlds and society, cognitive science and systems,
Perception and Vision, DNA and immune based systems, self-organizing and
adaptive systems, e-Learning and teaching, human-centered and human-centric
computing, recommender systems, intelligent control, robotics and mechatronics
including human-machine teaming, knowledge-based paradigms, learning para-
digms, machine ethics, intelligent data analysis, knowledge management, intelligent
agents, intelligent decision making and support, intelligent network security, trust
management, interactive entertainment, Web intelligence and multimedia.
The publications within “Advances in Intelligent Systems and Computing” are
primarily proceedings of important conferences, symposia and congresses. They
cover significant recent developments in the field, both of a foundational and
applicable character. An important characteristic feature of the series is the short
publication time and world-wide distribution. This permits a rapid and broad
dissemination of research results.
Indexed by DBLP, INSPEC, WTI Frankfurt eG, zbMATH, Japanese Science and
Technology Agency (JST).
All books published in the series are submitted for consideration in Web of Science.

More information about this series at http://www.springer.com/series/11156


Munir Merdan · Wilfried Lepuschitz · Gottfried Koppensteiner · Richard Balogh · David Obdržálek
Editors

Robotics in Education
RiE 2021

Editors
Munir Merdan
Practical Robotics Institute Austria (PRIA)
Vienna, Austria

Wilfried Lepuschitz
Practical Robotics Institute Austria (PRIA)
Vienna, Austria

Gottfried Koppensteiner
Practical Robotics Institute Austria (PRIA)
Vienna, Austria

Richard Balogh
Slovak University of Technology (STU)
Bratislava, Slovakia

David Obdržálek
Charles University
Prague, Czech Republic

ISSN 2194-5357 ISSN 2194-5365 (electronic)


Advances in Intelligent Systems and Computing
ISBN 978-3-030-82543-0 ISBN 978-3-030-82544-7 (eBook)
https://doi.org/10.1007/978-3-030-82544-7
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2022, corrected publication 2022
Chapters “Teachers’ Perspective on Fostering Computational Thinking Through Educational Robotics”
and “Exploring a Handwriting Programming Language for Educational Robots” are licensed under the
terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/
licenses/by/4.0/). For further details see license information in the chapters.
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

We are honored to present the proceedings of the 12th International Conference on
Robotics in Education (RiE), which was carried out as a purely virtual conference
from April 28 to 30, 2021. Although it was originally planned to be held in Bratislava,
Slovak Republic (already the second attempt, after 2020), the conference had to be
moved online due to the COVID-19 pandemic. The International Conference on
Robotics in Education is organized every year with the goal of providing an
opportunity to present relevant novel research and development in a strongly
multidisciplinary context in the educational robotics domain.
Educational robotics is an innovative way of increasing the attractiveness of
science education and scientific careers in the eyes of young people. Robotics
represents a multidisciplinary and highly innovative domain encompassing physics,
mathematics, informatics and even industrial design as well as social sciences. As a
multidisciplinary field, it promotes the development of systems thinking and
problem solving. Due to various application areas, teamwork, creativity and
entrepreneurial skills are required for the design, programming and innovative
exploitation of robots and robotic services. The fascination for autonomous
machines and the variety of fields and topics covered make robotics a powerful idea
to engage with. Robotics confronts the learners with the areas of Science,
Technology, Engineering, Arts and Mathematics (STEAM) through the design,
creation and programming of tangible artifacts for creating personally meaningful
objects and addressing real-world societal needs. Thus, young girls and boys can
easily connect robots to their personal interests and share their ideas. As a conse-
quence, it is regarded as very beneficial if engineering schools and university
study programs include the teaching of both theoretical and practical knowledge on
robotics. In this context, current curricula need to be improved, and new didactic
approaches for an innovative education need to be developed for improving the
STEAM skills among young people. Moreover, an exploration of the multidisci-
plinary potential of robotics toward an innovative learning approach is required for
fostering the pupils’ and students’ creativity, leading them toward collaborative
entrepreneurial, industrial and research careers.


In these proceedings, we present methodologies and technologies for teaching
and learning in the field of educational robotics. The book offers insights into the
latest research, developments and results regarding curricula, activities and their
evaluation. Moreover, the book introduces interesting programming approaches as
well as new applications, technologies, systems and components for educational
robotics. The presented applications cover the whole educative range, from
kindergarten, primary and secondary school, to the university level and beyond.
Besides, some contributions especially focus on teachers and their role in educa-
tional robotics. In total, 40 papers were submitted and 29 papers are now part
of these proceedings after a careful peer-review process. We would like to express
our thanks to all authors who submitted papers to RiE 2021, and our congratula-
tions to those whose papers were accepted.
This publication would not have been possible without the support of the RiE
International Program Committee and the Conference Co-Chairs. All of them
deserve many thanks for having helped to attain the goal of providing a balanced
event with a high level of scientific exchange and a pleasant environment. We
acknowledge the use of the EasyChair conference system for the paper submission
and review process. We would also like to thank Dr. Thomas Ditzinger and
Mr. Nareshkumar Mani from Springer for providing continuous assistance and
advice whenever needed.
Organization

Committee
Co-chairpersons
Balogh Richard Slovak University of Technology in Bratislava,
Slovakia
Lepuschitz Wilfried Practical Robotics Institute Austria, Austria
Obdržálek David Charles University in Prague, Czech Republic

International Program Committee


Alimisis Dimitris EDUMOTIVA (European Lab for Educational
Technology), Greece
Bellas Francisco Universidade da Coruña, Spain
Castro-Gonzalez Alvaro Universidad Carlos III de Madrid, Spain
Čehovin Luka University of Ljubljana, Slovenia
Chevalier Morgane HEP-Vaud, Switzerland
Christoforou Eftychios University of Cyprus, Cyprus
Demo Barbara Universita Torino, Italy
Dessimoz Jean-Daniel Western Switzerland University of Applied
Sciences and Arts, Switzerland
Dias André INESC Porto – LSA/ISEP, Portugal
Duro Richard Universidade da Coruña, Spain
Ebner Martin Graz University of Technology, Austria
Eguchi Amy University of California San Diego, USA
Eteokleous Nikleia Frederick University Cyprus, Cyprus
Ferreira Hugo LSA-ISEP, Portugal
Ferreira João Institute of Engineering of Coimbra, Portugal
Ferreira Nuno Institute of Engineering of Coimbra, Portugal
Fislake Martin University of Koblenz, Germany


Gerndt Reinhard Ostfalia University of Applied Sciences,


Germany
Gonçalves José ESTiG-IPB, Portugal
Hofmann Alexander University of Applied Sciences Technikum Wien,
Austria
Jäggle Georg Vienna University of Technology, Austria
Kandlhofer Martin Graz University of Technology, Austria
Kasyanik Valery Brest State Technical University, Belarus
Kazed Boualem University of Blida, Algeria
Krajník Tomáš Czech Technical University in Prague,
Czech Republic
Kulich Miroslav Czech Technical University in Prague,
Czech Republic
Leoste Janika Tallinn University, Estonia
Lucny Andrej Comenius University, Slovakia
Mallela Praneeta DEKA Research & Development Corporation,
USA
Malvezzi Monica University of Siena, Italy
Mantzanidou Garyfalia Primary Education School, Greece
Martins Alfredo Instituto Superior de Engenharia do Porto,
Portugal
Menegatti Emanuele University of Padua, Italy
Mikova Karolina Comenius University, Slovakia
Mora Marta Covadonga Universitat Jaume I, Spain
Moro Michele DEI UNIPD Padova, Italy
Negrini Lucio SUPSI, Switzerland
Papakostas George EMT Institute of Technology, Greece
Paya Luis Universidad Miguel Hernández, Spain
Petrovic Pavel Comenius University, Slovakia
Pina Alfredo Public University of Navarra, Spain
Portugal David University of Coimbra, Portugal
Postma Marie Tilburg University, Netherlands
Reinoso Oscar Universidad Miguel Hernández, Spain
Sansone Nadia Sapienza University of Rome, Italy
Schmoellebeck Fritz University of Applied Sciences Technikum Wien,
Austria
Thiruvathukal George Loyola University Chicago, USA
Usart Mireia Universitat Rovira i Virgili, Spain
Valente Antonio UTAD University, Portugal
Valiente David Universidad Miguel Hernández, Spain
Yudin Anton Bauman Moscow State Technical University,
Russia
Zapatera Alberto Universidad CEU Cardenal Herrera, Spain

Local Conference Organization


Koppensteiner Gottfried Practical Robotics Institute Austria, Austria
Merdan Munir Practical Robotics Institute Austria, Austria
Contents

Workshops, Curricula and Related Aspects Primary Schools and Kindergartens
Gradation of Cognitive Operations of Blue-Bot Control
in the Primary Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Karolína Miková, Andrea Hrušecká, Lucia Budinská,
and Daniela Bezáková
Bee-Bot Educational Robot as a Means of Developing Social Skills
Among Children with Autism-Spectrum Disorders . . . . . . . . . . . . . . . . 14
Janika Leoste, Tiiu Tammemäe, Getter Eskla, José San Martín López,
Luis Pastor, and Elena Peribáñez Blasco
Development of Educational Scenarios for Child-Robot Interaction:
The Case of Learning Disabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Elpida Karageorgiou, Efrosyni Kourampa, Athanasia-Tania Papanikolaou,
Petros Kechayas, Eleftheria Avramidou, Rafailia-Androniki Sabri,
Chris Lytridis, George A. Papakostas, and Vassilis G. Kaburlasos
Educational Robotics in Online Distance Learning: An Experience
from Primary School . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Christian Giang and Lucio Negrini

Workshops, Curricula and Related Aspects Secondary Schools


Robotics Laboratory Within the Italian School-Work Transition
Program in High Schools: A Case Study . . . . . . . . . . . . . . . . . . . . . . . . 43
Gemma C. Bettelani, Chiara Gabellieri, Riccardo Mengacci,
Federico Massa, Anna Mannucci, and Lucia Pallottino
How to Promote Learning and Creativity Through Visual Cards
and Robotics at Summer Academic Project Ítaca . . . . . . . . . . . . . . . . . 52
Martha-Ivón Cárdenas, Jordi Campos, and Eloi Puertas


An Arduino-Based Robot with a Sensor as an Educational Project . . . . 64


Milan Novák, Jiří Pech, and Jana Kalová
A Social Robot Activity for Novice Programmers . . . . . . . . . . . . . . . . . 72
Zimcke Van de Staey, Natacha Gesquière, and Francis wyffels

Workshops, Curricula and Related Aspects Universities


Robotics and Intelligent Systems: A New Curriculum Development
and Adaptations Needed in Coronavirus Times . . . . . . . . . . . . . . . . . . . 81
Francesco Maurelli, Evelina Dineva, Andreas Nabor, and Andreas Birk
An Educational Framework for Complex Robotics Projects . . . . . . . . . 94
Simon Untergasser, Manfred Hild, and Benjamin Panreck
Education Tool for Kinematic Analysis: A Case Study . . . . . . . . . . . . . 105
Maria Cristina Valigi, Silvia Logozzo, and Monica Malvezzi

Evaluating Educational Robotics


Girls and Technology – Insights from a Girls-Only Team
at a Reengineered Educational Robotics Summer Camp . . . . . . . . . . . . 119
Bjarke Kristian Maigaard Kjær Pedersen, Jørgen Christian Larsen,
and Jacob Nielsen
Improved Students’ Performance in an Online Robotics Class
During COVID-19: Do only Strong Students Profit? . . . . . . . . . . . . . . . 134
Andreas Birk and Evelina Dineva
Pupils’ Perspectives Regarding Robot Design and Language
Learning Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Petra Karabin, Ivana Storjak, Kristina Cergol, and Ana Sovic Krzic

Teachers in Focus
Enhancing Teacher-Students’ Digital Competence
with Educational Robots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Elyna Heinmäe, Janika Leoste, Külli Kori, and Kadri Mettis
Disciplinary Knowledge of Mathematics and Science Teachers
on the Designing and Understanding of Robot
Programming Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Jaime Andrés Carmona-Mesa, Daniel Andrés Quiroz-Vallejo,
and Jhony Alexander Villa-Ochoa
Teachers’ Perspective on Fostering Computational Thinking
Through Educational Robotics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Morgane Chevalier, Laila El-Hamamsy, Christian Giang, Barbara Bruno,
and Francesco Mondada

Technologies for Educational Robotics


Mirobot: A Low-Cost 6-DOF Educational Desktop Robot . . . . . . . . . . . 189
Dongxu Zhou, Ruiqing Jia, and Mingzuo Xie
Developing an Introduction to ROS and Gazebo Through
the LEGO SPIKE Prime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Owen Gervais and Therese Patrosio
Constructing Robots for Undergraduate Projects Using Commodity
Aluminum Build Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
John Seng
Hands-On Exploration of Sensorimotor Loops . . . . . . . . . . . . . . . . . . . . 221
Simon Untergasser, Manfred Hild, and Benjamin Panreck

Programming Environments
CORP. A Collaborative Online Robotics Platform . . . . . . . . . . . . . . . . . 231
Olga Sans-Cope, Ethan Danahy, Daniel Hannon, Chris Rogers,
Jordi Albo-Canals, and Cecilio Angulo
A ROS-based Open Web Platform for Intelligent
Robotics Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
David Roldán-Álvarez, Sakshay Mahna, and José M. Cañas
HoRoSim, a Holistic Robot Simulator: Arduino Code, Electronic
Circuits and Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Andres Faiña
Exploring a Handwriting Programming Language
for Educational Robots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Laila El-Hamamsy, Vaios Papaspyros, Taavet Kangur, Laura Mathex,
Christian Giang, Melissa Skweres, Barbara Bruno,
and Francesco Mondada

Artificial Intelligence
AI-Powered Educational Robotics as a Learning Tool to Promote
Artificial Intelligence and Computer Science Education . . . . . . . . . . . . . 279
Amy Eguchi
An Interactive Robot Platform for Introducing Reinforcement
Learning to K-12 Students . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
Ziyi Zhang, Sara Willner-Giwerc, Jivko Sinapov, Jennifer Cross,
and Chris Rogers
The BRAIINS AI for Kids Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
Gottfried Koppensteiner, Monika Reichart, Liam Baumann,
and Annamaria Lisotti

The BRAIINS Humanoid Robotics Lab . . . . . . . . . . . . . . . . . . . . . . . . . 308


Annamaria Lisotti and Gottfried Koppensteiner
Correction to: Robotics in Education . . . . . . . . . . . . . . . . . . . . . . . . . . . C1
Munir Merdan, Wilfried Lepuschitz, Gottfried Koppensteiner,
Richard Balogh, and David Obdržálek

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315


Workshops, Curricula and Related
Aspects Primary Schools
and Kindergartens
Gradation of Cognitive Operations of Blue-Bot
Control in the Primary Education

Karolína Miková, Andrea Hrušecká, Lucia Budinská, and Daniela Bezáková

Comenius University, 842 48, Bratislava, Slovakia


{mikova,hrusecka,budinska,bezakova}@fmph.uniba.sk

Abstract. The aim of our research is to identify a scale with levels representing
gradation of cognitive operations in learning with the Blue-Bot (TTS, Nottingham,
England) in primary school informatics in Slovakia. Therefore, qualitative research
was conducted, and it took place in iterative rounds at two different schools. During
the pandemic, when the schools were closed, the research continued with small
groups of children at home. As a working tool, a series of activities that were
iteratively developed, edited, and then analyzed, was used. Preliminary results of
this research show that there are 4 levels on the scale, and the content of each
level covers 4 different areas: a) how to manipulate the robot, b) defining the
state of the robot, i.e. its position and rotation, c) difficulty of the pad on which
it moves and d) cognitive operations with commands. By combining these areas,
different levels of difficulty can be set in creating gradual tasks, with regard to the
pupils’ cognitive level.
These results are based on the cognitive domain of learning; in the future,
we want to connect them with the psychomotor domain.

Keywords: Educational robotics · Difficulty gradation · Cognitive operations ·


Blue-Bot · Primary education

1 Introduction

Educational robotics has long found its place in quality and modern education [1].
Its implementation is slowly moving to lower and lower levels of education [2]. In
Slovakia, within the project Education of pedagogical employees of kindergartens as
part of lifelong learning [3], laptops, multifunctional devices, digital cameras, televisions
and electronic teaching aids were delivered to kindergartens in the years 2007 to 2013.
Among them were about 4,000 robotic Bee-Bot toys, the so-called “Bees” [4]. Since
then, we have met with several interesting ideas (from Slovakia and also from abroad) on
how to use this robot in education [5–8]. In recent years a new version of this robot,
the Blue-Bot (TTS, Nottingham, England), has been developed with expanded functionality. We
will write more about it in the next chapter.
In Slovakia, Informatics is a compulsory subject from the 3rd year of elementary
school (8 years old pupils). We try to systematically contribute to the building of its cur-
riculum, which would include educational robotics with a finer granularity of cognitive


operations. In its current form, it contains only rough outlines of operations, such as
controlling or planning robot movement, etc. The detailed exploration of the potential
that robotics enables at this level of education is a lengthy process. In this preliminary
research, the focus was on the gradation of programming constructs, which can be taught
with the help of Blue-Bot.
The basic term is the programming concept, e.g. repetition, a sequence of
commands, rotation, etc. Operations on these concepts are e.g. reading, compiling,
tracing, program execution, etc. Programming constructs are then operations on one
or more concepts. The term state of the robot means its rotation and position on
the pad. The pad is a square grid of dimensions m * n, where one square measures 15 cm. A
box is a single square on the pad that may or may not contain a shape (e.g., a beehive, a
flower, a house, a number, and so on). The reader (TacTile Code Reader) is a blue board
connected via Bluetooth, into which children can insert commands in the form of small
plastic pieces. With the kids, we call it the “blue panel”.

2 Ways to Control the Robot

If computing/programming with children is realized using computers, absolute
control is more natural and should be used before relative control.
In environments with absolute control we control the agent either with the arrow
keys or with buttons of the same meaning, where ↑ means moving the agent
one step up, ↓ means moving it one step down, → moving it one step right and
← moving it one step left from its current position. In this type of control, we
ignore the heading of the agent; mostly the agent does not rotate at all. In
environments with relative control, we can also use arrow keys to drive an agent
(more often some instructions are used), but with different meanings: ↑ stands
for move forward one step in the agent's current direction, ↓ means move backward one
step in the current direction, → stands for rotate 90° right without changing position and
← means rotate 90° left without any position change.
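As an illustration (our own, not taken from the paper), the two control styles can be contrasted in a few lines of Python; the grid representation, heading names and function names below are assumptions made only for this sketch.

```python
# Minimal sketch of absolute vs. relative control on a square grid (illustrative only).
HEADINGS = ["up", "right", "down", "left"]                       # clockwise order
STEPS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}


def absolute_move(row, col, arrow):
    """Absolute control: the arrow moves the agent in a fixed direction; heading is ignored."""
    dr, dc = STEPS[arrow]
    return row + dr, col + dc


def relative_move(row, col, heading, arrow):
    """Relative control: the same arrows are interpreted w.r.t. the agent's current heading."""
    if arrow == "up":      # forward one step in the current direction
        dr, dc = STEPS[heading]
        return row + dr, col + dc, heading
    if arrow == "down":    # backward one step, heading unchanged
        dr, dc = STEPS[heading]
        return row - dr, col - dc, heading
    if arrow == "right":   # rotate 90° right in place
        return row, col, HEADINGS[(HEADINGS.index(heading) + 1) % 4]
    if arrow == "left":    # rotate 90° left in place
        return row, col, HEADINGS[(HEADINGS.index(heading) - 1) % 4]
    raise ValueError(f"unknown arrow: {arrow}")


# Example: the same key press leads to different outcomes.
print(absolute_move(2, 2, "right"))          # -> (2, 3): one step to the right on screen
print(relative_move(2, 2, "up", "right"))    # -> (2, 2, 'right'): turn in place, no movement
```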
At Slovak schools, computing in primary education has in recent years been partly realized
with the application Emil the Robot. Emil is a character on the screen - an agent controlled
in a virtual environment based on a square grid [9]. In the first Emil 3 environment
(Emil the Collector) children control the robot by clicking a position in the current row
or column to which they want to move it. In two other Emil 3 environments (Emil the
Caretaker and Emil the Collagist) the robot is controlled by arrow buttons in an absolute
way. The relative control of the robot comes up in the Emil 4 environment aimed at
Year 4 (Fig. 1).
Using robotic tools like Blue-Bot, where we work in a real environment rather than a
virtual one, it is natural to use relative control. Children can follow the robot, stand
behind it and imitate its movements. Thus they identify with the robot, and it is easier
for them to figure out how to go on [10].

Fig. 1. On the left - Emil 3 environment with absolute control, on the right - Emil 4 environment
with relative control.

2.1 Blue-Bot Controls

The educational robot Blue-Bot is a small programmable robotic kit that moves on the floor.
It is a new version of the popular Bee-Bot. It uses relative control on a square grid, just
like the turtle in the Logo language [11]. Blue-Bot is affordable, user-friendly and, in
addition, offers a variety of interesting programming constructs and allows pupils to
develop many skills such as communication, cooperation, verbalization of ideas and the
like [12–14].
We can work with Blue-Bot in just the same way as with Bee-Bot - we control it using the
integrated buttons. After giving single commands (forward, backward, right, left) and
pushing the GO button, all given instructions are executed. Commands given later
are added to the previously remembered sequence of instructions (the previous instructions
are not canceled). We can give 40 commands in one sequence; the memory of Blue-Bot
is limited to 200 commands. The CLEAR and PAUSE text labels of the Bee-Bot buttons were
replaced by appropriate pictures on Blue-Bot, making them clearer to children who
cannot read yet. The disadvantage of this kind of control is that children do not remember
the given sequence of instructions. They also often forget to clear the previous sequence
of commands before giving a new one.
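A rough behavioural sketch (our own simplification, in Python) of the button control described above may help: commands are appended to the remembered sequence until CLEAR is pressed, which is exactly the step children tend to forget. The class and method names are ours; only the appending behaviour and the 40-command sequence limit come from the text.

```python
# Illustrative simulation of Blue-Bot's button control (not an official API).
class ButtonControlledBot:
    MAX_SEQUENCE = 40               # commands per sequence, as stated in the text

    def __init__(self):
        self.sequence = []          # remembered commands, not visible to pupils

    def press(self, command):
        """command: 'forward', 'backward', 'left', 'right' or 'pause'."""
        if len(self.sequence) < self.MAX_SEQUENCE:
            self.sequence.append(command)

    def press_clear(self):
        self.sequence = []          # forgetting this step keeps the old program

    def press_go(self):
        return list(self.sequence)  # the whole remembered sequence is executed


bot = ButtonControlledBot()
bot.press("forward"); bot.press("right")
print(bot.press_go())               # ['forward', 'right']
bot.press("forward")                # without CLEAR, the new command is appended ...
print(bot.press_go())               # ... so now: ['forward', 'right', 'forward']
```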
There are two more ways of controlling Blue-Bot:

• using a tablet or a PC (remote control),


• using a reader (Fig. 2).

Both ways use Bluetooth technology to connect with Blue-Bot. If Blue-Bot is
connected (paired) with an appropriate device, it lights up blue.
To control Blue-Bot using a tablet or a PC, an appropriate application is needed. Using
the application, we can control the robot directly - after giving an instruction, the robot
executes it automatically - but we can also plan a sequence of instructions, which can be
seen on the screen and is executed later, after moving a virtual robot to some position
and pushing the GO button in the application.
Using the reader, we create a sequence of instructions which we then send to Blue-Bot
for execution.

Fig. 2. Blue-Bot with TacTile Code Reader.

3 Methodology

We are aware that the software environment, the programming language, the rendering of
commands, etc. [15] have a major influence on the quality of pupils’ understanding of
programming, in addition to the experience and programming skills of teachers [16]. All
the aforementioned
aspects affect the speed and depth of pupils’ understanding and therefore a qualitative
approach [17] was chosen in order to better understand this process. As a research tool,
the educational robot Blue-Bot with a series of activities focused on the development
of programming skills was used. The choice of Blue-Bot was affected by the fact that
Bee-Bots, as Blue-Bot’s predecessors, are relatively widespread in Slovak kindergartens
so there is a chance that primary school pupils had used them in their previous education.
Our main research goal was to identify a scale that would reflect the appropriate
cognitive complexity of operations in activities for primary school pupils. This scale
can contribute to a better understanding of the difficulty of programming constructs.
Therefore, the following research question was proposed:
What are the levels of cognitive operations in Blue-Bot programming with primary
school pupils?
In order to find out what the appropriate gradation of tasks might look like for primary
school pupils, several tasks with Blue-Bots had to be designed and validated. These tasks
should create a reasonable challenge for pupils and still remain engaging. This resulted
in a series of 20 lessons, which was used as part of a verification tool.
Based on the nature of the research, design research [18] was used. This allows
us to iteratively validate the correct arrangement of tasks on the scale and thus refine the
granularity of task complexity and the terms used. The collected data consist of field records,
videos, photo documentation, and interviews with teachers and pupils [19].

The research participants came from three different backgrounds. One was an ele-
mentary school in the capital, where pupils had experience with Bee-Bots and other
innovative teaching techniques. These were 2nd and 4th year classes. The second envi-
ronment was an elementary school near a big city, where pupils who participated in the
research had no experience with such robotic kits. Research took place there in the 1st
year. As Slovak schools had been closed for a long time during the pandemic, we verified
individual activities with a small number of children, mostly at home. We managed to
verify each part of the series of 20 lessons, each of which consists of several activities;
together they form the research tool for our study. In the next chapter, we will present the cognitive coverage
of these activities.

4 Research Results

Based on the results of our observations, we iteratively designed and adjusted the grades on a
scale that appropriately reflects the cognitive sequence of operations of first-grade pupils
when familiarizing themselves with programming constructs to control the educational robot Blue-Bot,
naturally respecting the cognitive maturity of children of the given age.
During the verification, we recorded several examples that confirmed the suitability
of the gradation. Here are two of them. At level 1, pupils moved absolutely smoothly from
direct control to planning. First, they placed one card in the reader and had it executed,
then the other card and had it executed, and so on. Subsequently, they began to place
several cards next to each other on their own, thus starting to plan the program. It was a
spontaneous reaction of the pupils, not planned by us. As a second example, we present a
situation from level 3, where pupils began to create longer programs, sometimes making
a mistake, then looking for it together in the plan made of cards and correcting it;
this was a spontaneous propaedeutic for debugging, which belongs one level higher.
When creating the scale, we also took into account the many years of experience of
our department with teaching computer science in primary education [19, 20] and with
teaching using the robotic toy Bee-Bot in kindergartens [21–23]. As we wrote in the
methodology section, the activities themselves are not the goal of our research, but only
a tool, so we will not deal with them in more detail, but we will present only the cognitive
umbrella of these activities.
The scale we created has 4 levels, in which we focus on graduation of four different
areas.

a) The way pupils manipulate the educational robot.
b) The difficulty of the pad, which is graduated mainly by the presence (or absence) of
objects on its boxes.
c) The state of the educational robot, which is determined by its position on the pad
and its heading. Most often this means determining the start or end position.
d) Programming constructs. As the robot does not have as many commands to control
it as text-based or children’s programming languages, we see the essence of
these constructs in the complexity of the programs that pupils have to create and in the
restrictions imposed on them when planning the journey (e.g. forbidden commands or
impassable places on the pad, …).

1. Level
Manipulation: Robot control using a reader. (With one card - direct control. More
cards - programming control.)
Difficulty of the pad: Each position on the pad is uniquely identified by an object or
a picture.
State of the robot: Pupils do not deal with the initial direction, because the robot
always starts in the same direction. In different tasks, of course, its position is different,
albeit given. These rules remain the same at this level.

Programming constructs:

• reading and interpreting short programs


• drawing the path according to the program
• control of the robot from position A to B without restrictions
• control of the robot from position A to B with the condition of transition through
the given intermediate stop
• identification of possible end positions when using a maximum of three commands
• reverse tracing of an already passed path
• control of the robot without Forward command.

As an example of this level, we present an activity (Fig. 3), in which pupils must
place the Blue-Bot at the start, enter the program and draw its path in the workbook:

Fig. 3. Activity from the first level.

In addition to the first two points (of the programming constructs), pupils must also
identify the start according to the assignment and correctly place the Bot on the pad
(always facing the meadow at this level).

2. Level
Manipulation: Control the robot using its buttons (the program is not visible to
pupils).
Difficulty of the pad: Some boxes of the pad are not uniquely identifiable (there is
neither object nor picture at them).
State of the robot: Pupils get acquainted with determining the direction and turning
the robot using a directional rose (up, down, right, left).

Programming constructs:

• adding restrictions when passing through the pad:

– some edges are forbidden


– some boxes are forbidden
– the order of visited boxes (objects) is given

• identification of possible end positions when using a maximum of three or four


commands
• combination of restrictions - forbidden boxes and one forbidden command.

One of the tasks pupils solve at this level is to find the way from the start to the finish
so that they do not go through other houses (Fig. 4). In addition, pupils must not use the
Right command.

Fig. 4. Activity from the second level.

Children decide for themselves how the robot is turned at the beginning. They then
write this direction in a circle at the starting box. At the target box, they record
how the robot is turned at the end.

3. Level
Manipulation: Path planning for the robot using paper cards. This changes the
robot’s control from implicit to explicit. It’s similar to using a reader, but programs
can be longer.
Difficulty of the pad: Boxes on the pad are marked (described) in more abstract
terms. Either letters or numbers.
State of the robot: The direction of the robot is defined by the terms: “turned To”,
“turned From”.

Programming constructs:

• running and tracing the same program from different positions and with different
rotations
• planning longer programs (using paper cards)
• restriction on the use of one (any) command at first, later of two commands
• completion of the commands at the end of the program - according to the assignment
• determining the initial direction when the program and destination are specified
• creation of assignments with conditions.

In one of the tasks of this level, children have to read the program from the cards,
draw the path of the robot to the picture of the pad and answer the question (Fig. 5).
Children solve this task without Blue-Bots. They can check their solutions using the
robot after solving the task.

Fig. 5. Activity from the third level.

4. Level
Manipulation: Combining previous levels. Planning the path (by drawing the path
in the pad or with paper cards or cards for the reader) and its implementation. Planning
the parallel behavior of two robots.

Difficulty of the pad: Pupils work with a more abstract pad, with a pad without
marked boxes, or with no pad at all, i.e. without space restrictions.
State of the robot: Pupils must determine the state according to the assignment.

Programming constructs:

• finding and fixing a bug in a given program


• planning the behavior of two robots (for parallel execution of their programs) with
the condition: no collisions and required or/and forbidden boxes
• identification of the sequence of commands according to the visualization
• creating a sequence of commands for the movement of two robots according to the
central and axial symmetry.

As an example, we present a task in which pupils have to read and interpret the
program, find out which command is wrong and correct this error to meet the task
(Fig. 6).

Fig. 6. Activity from the fourth level.

5 Discussion

Within the research, we consider the weakest point, which could have contributed to
inaccuracy in the results, to be the verification in schools. This risk is particularly high at the
fourth level, as work with the original class could not continue due to the pandemic, and the
verification took place only in smaller groups (at home). We realize that minor changes
may occur during continuous verification with an entire class. It was also time consuming
for us to create pads (6 identical pads for one class). These are oversized sheets (A2
to A1) that we could not print at school. We printed them on plain A4 paper, then cut
them out and glued them to hard cardboard, which, however, took an enormous amount
of time. Another unpleasant situation was having to rework the gradation after
disqualifying the tablet as a means of controlling the robot, with which we had originally wanted to start.

The reason for the disqualification was a change in the application, which no longer met
our requirements. The original application, which contained only buttons to control the
robot, was replaced by an application in which the buttons were radically reduced in size and
a large part of the screen was occupied by a virtual robot that rotated according to the entered
commands. However, children often identify with the real robot - they follow it and rotate
exactly as it does - in which case the visualization on the tablet does not correspond to the
real situation. Ultimately, this created more misconceptions in pupils than it deepened their
understanding. For this reason, we first included activities with the reader. Some may
argue that it would be easier for pupils to work first with the buttons on the robot and
then with the reader. However, when the buttons are used, the pupils who create the
program have to memorize it and, before entering a new program, delete the previous one from
the robot’s memory, which we know from experience causes them problems. Therefore,
it is better to visualize commands via the reader, where pupils begin with
tasks whose programs are only one command in length.

6 Conclusion
In this article, we present preliminary results of qualitative research, in which we sought
an answer to our research question: What are the levels of cognitive operations in Blue-
Bots programming with primary school pupils? Four levels have been identified. These
levels will help us in creating appropriate activities and assessing their complexity. The
content of each level is determined by the following characteristics: a) difficulty of the
pad, b) state of the robot (its rotation + position), c) robot control by pupils and d)
programming constructs.
We believe that the created scale will contribute to a better knowledge of computer
science didactics. This part of the research focused mainly on cognitive actions, but since
one of the main parts of robotics is the manipulation of robots, it can also support the
identification of psychomotor skills. We will continue with further verification, together
with research on other learning domains, when schools reopen and on-site research
becomes possible.

Acknowledgments. The authors would like to thank Ivan Kalaš, who is a co-author of activities
without which this research would not be possible. Material with activities is currently being
prepared for print. Many thanks also go to all the children and the teachers from our design
schools and Roman Hrušecký, who participated in the verification of the activities. We would also
like to thank Indícia, the non-profit organization, which covered the whole process, and the project
VEGA 1/0602/20, thanks to which the results of this research could be published.

References
1. Miglino, O., Lund, H.H., Cardaci, M.: Robotics as an educational tool. J. Interact. Learn. Res.
10(1), 25–47 (1999)
2. Bers, M.U., Ponte, I., Juelich, C., Viera, A., Schenker, J.: Teachers as designers: integrating
robotics in early childhood education. Inf. Technol. Child. Educ. Annu. 2002(1), 123–145
(2002)

3. Project: Education of teachers of kindergartens as part of education reform (2007–13). https://


archiv.mpc-edu.sk/sk/projekty/mat. Accessed 4 Feb 2021
4. Bee-Bot. https://www.terrapinlogo.com/bee-bot-family.html. Accessed 4 Feb 2021
5. Activities with Bee-Bot. https://www.infracz.cz/data_13/soubory/365.pdf. Accessed 4 Feb
2021
6. Cacco, L., Moro, M.: When a Bee meets a Sunflower. In: Proceedings of 4th International
Workshop Teaching Robotics Teaching with Robotics and 5th International Conference on
Robotics in Education, Padova, Italy, pp. 68–75 (2014)
7. Maněnová, M., Pekárová, S.: Algoritmizace s využitím robotických hraček pro děti do 8 let,
p. 72. Univerzita Hradec Králové (2020)
8. Lydon, A.: Let’s Go with Bee-Bot: Using Your Bee-Bot Across the Curriculum (2007)
9. Application Robot Emil. https://www.robotemil.com/. Accessed 4 Feb 2021
10. Kalas, I., Blaho, A., Moravcik, M.: Exploring control in early computing education. In:
International Conference on Informatics in Schools: Situation, Evolution, and Perspectives,
pp. 3–16. Springer, Cham (2018)
11. De Michele, M.S., Demo, G.B., Siega, S.: A piedmont schoolnet for a k-12 mini-robots
programming project: experiences in primary schools. In: Workshop Proceedings of SIMPAR
2008 International Conference on Simulation, Modeling and Programming for Autonomous
Robots, pp. 90–99 (2008)
12. Bers, M.U.: Designing Digital Experiences for Positive Youth Development: From Playpen
to Playground. Oxford University Press, Oxford (2012)
13. Khanlari, A.: Effects of robotics on 21st century skills. Eur. Sci. J. 9(27), 26–36 (2013)
14. Eguchi, A.: Educational robotics for promoting 21st century skills. J. Autom. Mob. Robot.
Intell. Syst. 8, 5–11 (2014)
15. Kelleher, C., Pausch, R.: Lowering the barriers to programming: a taxonomy of programming
environments and languages for novice programmers. ACM Comput. Surv. (CSUR) 37(2),
83–137 (2005)
16. Alimisis, D.: Robotics in education & education in robotics: Shifting focus from technology
to pedagogy. In: Proceedings of the 3rd International Conference on Robotics in Education,
pp. 7–14 (2012)
17. Lichtman, M.: Qualitative Research in Education: A User’s Guide. Sage Publications,
Thousand Oaks (2012)
18. Van den Akker, J., Gravemeijer, K., McKenney, S., Nieveen, N.: Educational Design Research.
Routledge, London (2006)
19. Creswell, J.W.: Educational Research: Planning, Conducting, and Evaluating Quantitative
and Qualitative Research. Prentice Hall, Upper Saddle River (2002)
20. Kalas, I.: Programming in lower primary years: design principles and powerful ideas.
Submitted to Constructionism, Vilnius (2018)
21. Kabátová, M., Kalaš, I., Tomcsányiová, M.: Programming in Slovak primary schools. Olymp.
Inform. 10, 125–159 (2016)
22. Pekárová, J.: Using a programmable toy at preschool age: why and how. In: Teaching with
Robotics: Didactic Approaches and Experiences. Workshop of International Conference
on Simulation, Modeling and Programming Autonomous Robots, Venice, November 2008,
pp. 112–121 (2008)
23. Pekárová, J., Záhorec, J., Hrušecký, R.: Digitálne technológie v materskej škole 3 multi-
médiá. Digitálne hračky (Digital technologies in kindergarten 3. Multimedia. Digital toys).
Metodicko-pedagogické centrum, Bratislava (2013)
Bee-Bot Educational Robot as a Means
of Developing Social Skills Among Children
with Autism-Spectrum Disorders

Janika Leoste1, Tiiu Tammemäe1, Getter Eskla1, José San Martín López2,
Luis Pastor2, and Elena Peribáñez Blasco2
1 Tallinn University, Narva rd 25, 10120 Tallinn, Estonia
leoste@tlu.ee
2 Universidad Rey Juan Carlos, Calle Tulipán, s/n, 28933 Móstoles, Madrid, Spain

Abstract. Lately, some high-end robots have been proposed for facilitating inter-
action with children with ASD with promising results. Nevertheless, most edu-
cational environments lack these robots, among other reasons, for cost and com-
plexity considerations. In this work, a simple Bee-Bot has been used as a support
tool for children with ASD who played a memory game about emotions. The main
objective of the experiment was to study whether using this robot while playing
the game resulted in improvements in the children’s verbal communication. Also,
this work explored if incorporating the Bee-Bot robot to the game resulted in
additional improvements in the level of child involvement in the activity.
This study has been conducted with children who had no previous experience
with Bee-Bots. We qualitatively evaluated the children’s abilities during the memory
game and quantitatively the children’s verbal and non-verbal communication during
the memory game with and without Bee-Bot. The results revealed that all children
showed a clear improvement in the use of verbal communication when playing
with Bee-Bot, a result that allows us to encourage schools and kindergartens with
educational robots to put them to use with children with ASD. Our study indicates
also that even using a relatively primitive robotic toy, such as Bee-Bot, produces
comparable results to using a more complex humanoid robot. On the other hand,
although the use of robots manages to capture the attention of children with ASD,
we cannot confirm that these robots helped children pay more attention to the
memory game. This experiment will be expanded in the future to obtain results
that are more comprehensive.

Keywords: Autism Spectrum Disorder · Social skills · Educational robotics ·


Experimental learning · Robot-assisted therapy

1 Introduction

Autism Spectrum Disorder (ASD) has become a serious public health concern
worldwide, with its prevalence steadily rising since the 1990s [1]. The estimated prevalence
of ASD in the USA is currently one in 54 children [2]. These children have difficulties


in understanding both verbal and non-verbal communication, social cues, behaviors and
feelings of other kids, and they struggle when using their skills outside specific contexts.
For children with ASD it is as difficult to learn various social and functional skills, as it
is to learn about the outside world through the observation and imitation of other people
[3–5]. These difficulties, including social interaction disorders, impaired verbal and
non-verbal communication and limited interests and behaviors [6], makes engagement
of children with ASD in learning activities challenging. To address these challenges,
various novel autism interventions have been suggested, one of these being robotic
therapy. A number of studies has shown that (educational) robots may help autistic
children to develop social skills and to become more involved in learning activities
[7–10].
The promise of robotic therapy is based on utilizing the children’s natural interest
in technology for reassuring and motivating children with ASD, engaging them more
deeply in learning activities (see [5, 11]). In this approach, robots are used as simplified
models of living creatures, or as embodied agents to keep the children’s attention engaged
with the learning context. In current research, the following types of robots stand out:

• Social robots (e.g., Nao robot) that use spoken words, body movements and human-
like social cues when communicating with children with ASD (see also [12]).
• Robotic pets (e.g., puppy-like robot Zoomer) that look like pet animals, such as dogs,
and behave similarly to these (see also [13]).
• Robotic toys (e.g., LEGO Mindstorms EV3 or Bee-Bot), which are relatively simple
educational toys that, in the context of autism, can be used as animated 3D objects
that attract children’s attention, or as embodied objects to visualize children’s thinking
processes [14] and to encourage social engagement in lessons [15, 16].

Most of the research performed in this area so far has been focused on studying
the effects of using social or animal-like robots on children with ASD, as these robots
offer human-like social cues while keeping object-like simplicity [17]. In these studies,
robots are used as social mediators between children and a task, object, or adult [18]. In
addition, social robots can also be seen as simplified models of other persons or animals,
being more predictable and less complex, encouraging the autistic child to demonstrate
reciprocal communication [19]. However, using social robots in real life classrooms can
often be problematic due to their relatively high price tag (for example, in ebay.de, used
Nao robots cost more than 2,000 €).
A less common approach for interacting with ASD children is using relatively simple
robotic toys as animated learning tools that attract their attention and encourage them to
express greater laughter and vocal responses [19, 20]. In this context, robots are seen as
supportive tools for constructing a highly structured learning environment for children
with ASD, helping them to focus on the relevant stimuli (see [12, 21]). One of the
benefits of using these robots is the ease of transferring the results of a study into real-life
conditions since, due to their relatively low price tag, they are already widely available
and teachers are skilled in using them.
In this study, we are using a simple robotic toy (Bee-Bot) as a supportive tool for
children with ASD when playing a memory game about emotions. Bee-Bot, being a
modern representation of the LOGO turtle (see also [21]), is suitable for neuro-typical

children 3 years of age or older. Its movement can be programmed, using only seven
buttons on the robot, and it does not need a separate app or device. Our goal is to
examine whether playing the memory game with and without Bee-Bot has different
outcomes in child verbal communication and whether it has an influence on the levels of
child engagement, posing two specific questions:

1. Are there any differences in ASD child verbal communication when playing a
collaborative board game with or without the Bee-Bot robot?
2. Are there any differences in ASD child engagement when playing a collaborative
board game with or without the Bee-Bot robot?

2 Methods

2.1 Participants

The participants of the study were three children with ASD, aged around five
years. We used purposeful sampling [22] with snowball sampling and convenience
sampling to select the sample. Convenience sampling [23] was chosen due to the COVID-
19 restrictions. Using email, we contacted familiar special educators who helped us to
identify suitable participants and guided us to them. The participants had to meet the
following criteria: (1) had to have medically diagnosed autism spectrum disorder; (2)
their age had to be from 4 to 6 years; and (3) they had to reside in Tallinn or its vicinity.
As the participants belong to a vulnerable group, we also contacted their parents who
were informed of the study goals, and the methods of collecting, analyzing and using
data. The parents who agreed to participate were also asked to sign an informed consent
form. The following participants formed the study sample:

A – a Girl Aged 5 Years and 1 Month. Her home language and kindergarten language
is English. She has definite preferences, is very curious about new things, and is
attracted to new environments. She does not like to show her weaknesses, so she prefers
to abandon some activities or fool around to let others know that she is not doing well
or has not yet mastered an activity. If she really likes a new thing or activity, she
will find a way to learn it and get better at it. Symptoms characteristic of autism
spectrum disorder are currently in the foreground. In the assessments, the child’s speech (self-
reinforcement and speech comprehension) and total mental ability scores were (slightly)
lower than age-appropriate, lagging behind due to the autism spectrum disorder.

B – a Girl Aged 5 Years and 3 Months. Her home language and kindergarten language
is Estonian. She is sociable. New and interesting things are very exciting for her. Like
most children, she may be a little shy at first, but not always. She can be provocative at
times if she is not interested in the activities. She has a diagnosed severe speech impairment
and characteristics of autism identified by specialists.

C – a Boy Aged 4 Years and 11 Months. He has diagnosed autism and epilepsy. His home and
kindergarten language is Estonian, but he has acquired English through videos and car-
toons in English. To ensure better understanding, the researcher and he communicated
in a mixture of both languages. He has a strong interest in electronic devices. His attention
is easily distracted.
None of the children had previous experience with Bee-Bot robots.

2.2 Materials

Memory Game. The aim of the board game (Fig. 1) was to present emotions that the
participants could identify. For representing emotions, we used photos of young adults
(one male model and one female model). The presence of confusing factors was reduced
to the minimum (the background of cards was neutral, and both models wore similar
white shirts). The game used four different pairs of emotions with two different models
(a happy man and happy woman; an angry man and angry woman; a sad man and sad
woman; a surprised man and a surprised woman), sixteen cards in total.

Fig. 1. Memory game and Bee-Bot.

Bee-Bot1 is a small (about 15 × 15 × 8 cm) rechargeable robot (Fig. 1) that can
be programmed with its seven buttons or with a tablet. Its programs consist of up to
40 steps that can make the robot move forward or backward by 15 cm, turn to the left
or right, or wait for a few seconds. Bee-Bot resembles a bee, and it flashes its eyes and
makes sounds in order to be more attractive to children.

2.3 Design and Procedure

Each test was conducted in the presence of a participating child, a parent or teacher,
and a researcher. The test started with building a relationship of trust between the child
and the researcher. The researcher introduced herself, discussed an interesting topic with
the child, or played a game that the child enjoyed.
The Memory Game was first played on the table without the Bee-Bot robot, and
then on the floor with the Bee-Bot robot (Fig. 2). Depending on the needs of the child,
there were breaks, the researcher and child had discussions on free topics, or played
something else.

1 https://www.terrapinlogo.com/Bee-Bot-family.html.

The Memory Game without a Robot. The researcher introduced the emotions of the Memory Game to the child before starting to play the game. This was done in order to confirm that the child was familiar with the emotions and able to name them. The cards of the happy man, angry woman, sad man, and surprised woman were placed on the table in front of the child, who was then asked, “In your opinion, how does this man/woman feel? Why do you think so?” After discussing these concepts, the child and researcher reached an agreement that a happy person laughs, a sad person has the corners of their mouth turned down, an angry person has a crease between their eyes, and a surprised person has their mouth open.
Next, the cards were shuffled and placed on the table, face down. It was explained to the child that the cards were to be turned face up one by one, stating at the same time who was on the card and what emotion that card expressed. Each turned card remained on the table. If a pair of cards with the same emotion was found, these cards were
set aside. The game was played until all cards were grouped into pairs or until the child
expressed their unwillingness to continue playing. The duration of the Memory Game
without a robot was 7–9 min.
The Memory Game with the Bee-Bot Robot. The Bee-Bot was demonstrated to the
child for about 5 min, explaining and trying out how the robot could be controlled via
its buttons. Next, the Memory Game cards were placed on the floor into the pockets of
the Bee-Bot transparent play mat, face down. The child and adult took turns choosing a route for Bee-Bot to reach a freely chosen card. The card was then turned over and its emotion voiced. The card remained face up. Next, a new route
was chosen for Bee-Bot to reach another card. If two cards with the same emotion were
found then these cards were set aside. The duration of the Memory Game with Bee-Bot
was 14–18 min.

Fig. 2. Playing the memory game with Bee-Bot.



Data Collection. We used observation for collecting research results. Observation is considered a suitable data collection method in situations where conditions change and are difficult to predict [22]. Although conducting observational studies is time consuming and resource intensive, it provides the most objective representation of observed objects [24].
We collected the following data about the subjects: (a) video files collected during the
filming of the testing activities; (b) observation tables created during the video analysis
process; and (c) informed consent forms, signed by the parents.
The whole event was digitally recorded, and researchers filled in observation tables while reviewing the recordings. In the observation tables, we recorded the child’s activity and their
verbal and non-verbal communication, when playing the Memory Game, with or without
Bee-Bot. For the pre-game introductory part (where the emotions were introduced) it was
noted how the child understood emotions: (a) the child does not understand emotions,
being unable to name them, (and/or) does not accept help; (b) under adult initiative,
the child chooses and voices the correct emotion (e.g., the person is happy or is sad);
(c) the child voices emotions independently, but without reasoning; (d) the child voices
emotions independently and reasons non-verbally (mimics facial expressions); and last,
(e) the child voices emotions and reasons verbally (the person is happy because he
smiles).
The participants were anonymized for data analysis: each participant was assigned
an anonymous code (A, B, C), which was written on an observation sheet. The collected
data is kept on an encrypted HDD in a safe area in the chief researcher’s office.
We qualitatively evaluated the following abilities of the child during the Memory Game with and without Bee-Bot: (a) how they understood turn change; (b) whether they were
able to find the same emotion independently or they needed adult help; (c) how they
understood the game content; (d) how well they learnt to control the Bee-Bot; and (e)
how they expressed their emotions.
We quantitatively evaluated the following features of the child’s verbal and non-verbal communication during the Memory Game with and without Bee-Bot: (a) how many
times echolalia (meaningless repetition of another person’s spoken words) was used;
(b) how many times a spontaneous expression was used; (c) how many times the child
looked away during the game; (d) how many times the child left the game area; and (e)
how many cards the child turned up without additional assistance.
The pilot study was conducted in Tallinn University’s research laboratory on August
27, 2020 with a neuro-typical boy. The event was recorded and analyzed with the goal
of testing the methodology and adapting it if necessary. The child had no previous
experience with Bee-Bot, but he quickly became proficient, played the Memory Game
without and with Bee-Bot, and expressed his opinion that he liked the Bee-Bot version
of the game better.
The main study was conducted from November 20 to December 15, 2020. Depending on the COVID-19 restrictions, the test location was either the child’s kindergarten or the Tallinn University EDUSPACE research lab. The rooms where the tests were conducted were child-friendly, with toys in the room, a large floor area for free movement, a floor carpet for comfortable playing, and a table and chair suitable for the child.

3 Results
3.1 Individual Observations
Participant A. During the introduction of the Memory Game, participant A was able to correctly choose an emotion (happy or sad) suggested by the adult, and was able to independently voice the emotion on a card, although without reasoning. The child did not understand how the turn changed during the game, needing constant adult guidance (“Take this card!”, “Who is on the picture?”, “How does he feel?”), and she voiced emotions only when answering the researcher’s questions. The child constantly moved around the room and looked for other activities, not focusing on the Memory Game from the beginning to the end, whether it was played with or without Bee-Bot. During the game (with or without Bee-Bot) the child was not able to find the same emotion card either independently or with adult guidance. The content and the general idea of the game remained incomprehensible to her, but she acted happily, clapped her hands and laughed. Although she did not learn to control the Bee-Bot, she tried actively; mostly she tried to push the Bee-Bot forward on the floor as if she was playing with a toy car. Participant A used echolalia 13 times when playing without Bee-Bot and 15 times when playing with Bee-Bot. She used a spontaneous expression 20 times when playing without Bee-Bot and 28 times when playing with Bee-Bot. She looked away from the game 3 times when playing without Bee-Bot and 3 times when playing with Bee-Bot. She left the game area 5 times when playing without Bee-Bot and 6 times when playing with Bee-Bot. She turned up 3 cards without additional guidance when playing without Bee-Bot and 2 cards when playing with Bee-Bot.

Participant B. During the introduction of the Memory Game, she was able to correctly choose an emotion (happy or sad) suggested by the adult, but could not independently voice or reason about the emotions on the cards. The child partially understood how the turn changed during the game, but she needed constant adult guidance, and she voiced emotions only when answering the researcher’s questions. During the game (with or without Bee-Bot) the child was able to find the same emotion card only with adult guidance. She acted happily, clapped her hands and laughed, expressing her positive emotions more intensely and being more active when playing with Bee-Bot. When playing without Bee-Bot she needed constant reminders about her turn and did not interfere with the researcher’s game. When playing with Bee-Bot she pressed its buttons and turned cards for the adult. Although she did not learn to control Bee-Bot, she tried actively and attempted to press its buttons during the whole game. Participant B used echolalia 8 times when playing without Bee-Bot and 17 times when playing with Bee-Bot. She used a spontaneous expression 7 times when playing without Bee-Bot and 12 times when playing with Bee-Bot. She looked away from the game 2 times when playing without Bee-Bot and 6 times when playing with Bee-Bot. She did not leave the game area. She turned up 8 cards without additional guidance when playing without Bee-Bot and 8 cards when playing with Bee-Bot. She played until the end of the game.

Participant C. During the introduction of the Memory Game, he was twice able to correctly choose an emotion (happy or sad) on the cards, although without reasoning. The child did not understand how the turn changed during the game, and he needed constant adult guidance. He voiced emotions only sometimes and only when answering the researcher’s questions. During the game (with or without Bee-Bot) the child was not able to find the same emotion cards either independently or with adult guidance. He could not comprehend the content and the idea of the game. The child could not learn to control Bee-Bot although he tried actively; mostly he tried to use Bee-Bot as a toy car, and he was more interested in the robot than in the cards or the game content. Participant C did not use echolalia when playing without Bee-Bot, and used it 5 times when playing with Bee-Bot. He used a spontaneous expression 4 times when playing without Bee-Bot and 7 times when playing with Bee-Bot. He looked away from the game 4 times when playing without Bee-Bot and 9 times when playing with Bee-Bot. He left the game area 3 times when playing without Bee-Bot and 5 times when playing with Bee-Bot. He turned up 1 card without additional guidance when playing without Bee-Bot and 6 cards when playing with Bee-Bot. He did not play until the end of the game on either occasion.

3.2 Answers to the Research Questions


Are there any differences in ASD child verbal communication when playing a
collaborative board game with or without the Bee-Bot robot? The greatest differences
were observed in the children’s verbal activity. All children exhibited more echolalia and more spontaneous expressions when playing the game with Bee-Bot (Table 1).

Table 1. Children’s verbal activity when playing with or without Bee-Bot.

Participant   Echolalia (without / with Bee-Bot)   Spontaneous expression (without / with Bee-Bot)
A             13 / 15                              20 / 28
B             8 / 17                               7 / 12
C             0 / 5                                4 / 10
Average       7 / 12.3                             10.3 / 16.6

Are there any differences in ASD child engagement when playing a collaborative board game with or without the Bee-Bot robot? The children had differing results in other areas, such as focusing on the game (looking away from it), leaving the game area, and needing adult guidance, but we cannot make considerable generalizations when comparing playing the game with or without Bee-Bot.

4 Conclusions and Discussion


The main goal of our study was to examine if using a robotic toy such as the Bee-Bot as a
mediator between a child with ASD and a memory board game would change the child’s
social interaction, including verbal and nonverbal communication and engagement,
compared to playing the same game without a robot.

We discovered that children with ASD communicated more intensively when the robot was involved as a mediator: the children exhibited more echolalia and used more spontaneous expressions; similar observations were made by [25]. Communication became more intensive even when the children were not able to use the robot properly, confirming the remarks made by [13]. These results can be considered encouraging, supporting the use of robots in educational activities even when the players (children with ASD) are unable to program them. In addition, we suggest that developing the programming skills or other STEAM skills of children with ASD should not be considered an essential goal; instead, robots can be used as animated toys, mediating between the child and the learning object [19]. In this sense, we encourage schools and kindergartens that have educational robots to put them into use with children with ASD.
Our study indicates that using a relatively primitive robotic toy, such as the Bee-Bot robot, with children with ASD yields results comparable to those obtained with more advanced (and more expensive) humanoid robots such as SoftBank Robotics’ Pepper [26]. Compared to other similar studies, we focused on relatively young children (5 years old), as early intervention increases the probability of improving the quality of life of those affected [27]. For this age group, simple educational robots can work better, as these devices have fewer confusing attributes. If these notions were supported by further research, it would indicate that using robotic toys in ASD treatment would be relatively inexpensive to implement in specialized institutions.
Our results regarding engagement were mixed. Although using robots activated the children with ASD, we could not confirm that the robots helped the children pay more attention to the Memory Game. The results indicated that the robot itself was of interest to the children with ASD involved in our tests, making it difficult to claim that using robots would engage children in learning activities [28]. Some additional remarks can be made about this work. First, since this study was conceived as a preparation and test ground for a larger international research project, the amount of experimental work was somewhat reduced. For example, we used only one session per participant, introducing the selected robot to each child in that same session. In many similar studies, researchers use separate sessions for introducing robots; additionally, learning activities are usually conducted iteratively over several sessions in order to gather more comprehensive results. We will follow a similar approach in our next study. Second, although various previous studies recommend specific robotics platforms, we intend to focus on the robots’ functionality rather than on the actual features of particular robots. Therefore, we are planning to use several robotics platforms within the same study design: for example, our study in Spain will also use the Mouse robot, which is very similar to Bee-Bot, allowing a comparative study of the results obtained with both robots. This will introduce new variables into the study: on the one hand, a variable of a cultural nature, as the participants will be Spanish children, and on the other hand, another low-end robot, the aforementioned Mouse. It has to be pointed out that even though the two robots are different, their functionality is quite similar, therefore allowing the study to concentrate on robot functionality. Last, in future interventions we are planning to perform the field work with the help of the children’s therapists instead of university researchers. This will make it easier to establish trusting relationships with the children, which will help in executing longer research cycles.

In the future, we plan to expand the range of robots used in these studies, including robots of our own design, in order to examine how they address the questions raised in this preliminary study: improvement in the child’s verbal communication with a robot, and improvement of the child’s involvement in games and activities.
The results of this study can be used to make changes in the way activities for ASD
children are planned and implemented, as the test results reflect the effectiveness of
involving educational robots as mediators between learning activities and ASD chil-
dren. Nevertheless, this study has some limitations, making it difficult to propose wider
generalizations. For example, the sample is small and consists of test subjects with differ-
ent home languages, cognitive abilities, and levels of speech development. The number of test-session iterations was low, and (due to the COVID-19 restrictions) some of these sessions were conducted in an environment unfamiliar to the children, by a person unfamiliar to them. Last, the children being tested did not have previous experience with
Bee-Bots.

5 Ethical Considerations
This work was approved by the Tallinn University ethics committee. Parents were able
to inform researchers about their child’s capacity, communication ability and special
needs, based on which the researcher could adjust the duration of the experiments to
meet the individual needs of the child. The child’s (verbal or nonverbal) consent and
readiness to participate in the study was important. To minimize the risk of test subjects
becoming tired during the experiment, short breaks were planned between the different
stages of the experiments. Parents were allowed to withdraw their child from the study at any time, and they were allowed to access the research results about their child as well as the generalized results of the whole study. Informed consent forms were digitized and stored with video recordings and other person-related data on an encrypted hard drive, with the paper originals destroyed immediately after digitization.

Acknowledgements. Project “TU TEE - Tallinn University as a promoter of intelligent lifestyle” (nr 2014-2020.4.01.16-0033) under activity A5 in the Tallinn University Centre of Excellence in Educational Innovation.

References
1. Newschaffer, C.J., et al.: The epidemiology of autism spectrum disorders. Annu. Rev. Public
Health 28, 235–258 (2007). https://doi.org/10.1146/annurev.publhealth.28.021406.144007
2. Maenner, M.J., Shaw, K.A., Baio, J., et al.: Prevalence of Autism spectrum disorder among
children aged 8 years—Autism and developmental disabilities monitoring network, 11 Sites,
United States, 2016. MMWR Surveill. Summ. 69(SS-4), 1–12 (2020). http://dx.doi.org/10.
15585/mmwr.ss6904a1
3. Dewey, D.: Error analysis of limb and orofacial praxis in children with developmental motor
deficits. Brain Cogn. 23, 203–221 (1993)
4. Baldwin, D.A.: Joint attention: its origins and role in development. In: Moore, C., Dunham,
P.J. (eds.) Understanding the Link Between Joint Attention and Language, pp. 131–158.
Lawrence Erlbaum Associates Inc., Hillsdale (1995)

5. Srinivasan, S.M., Eigsti, I.M., Neelly, L., Bhat, A.N.: The effects of embodied rhythm and
robotic interventions on the spontaneous and responsive social attention patterns of children
with Autism Spectrum Disorder (ASD): a pilot randomized controlled trial. Res. Autism
Spectr. Disord. 27, 54–72 (2016). https://doi.org/10.1016/j.rasd.2016.01.004
6. Frith, U.: Autism: Explaining the Enigma, 2nd edn. Blackwell Publishing, Hoboken (2003)
7. Albo-Canals, J., et al.: A pilot study of the KIBO robot in children with severe ASD. Int. J.
Soc. Robot. 10(3), 371–383 (2018). https://doi.org/10.1007/s12369-018-0479-2
8. Di Lieto, M.C., et al.: Improving executive functions at school in children with special needs
by educational robotics. Front. Psychol. 10, 2813 (2020). https://doi.org/10.3389/fpsyg.2019.
02813
9. Kozima, H., Michalowski, M.P., Nakagawa, C.: Keepon: a playful robot for research, therapy,
and entertainment. Int. J. Soc. Robot. 1, 3–18 (2008). https://doi.org/10.1007/s12369-008-
0009-8
10. Sandygulova, A., et al.: Interaction design and methodology of robot-assisted therapy for
children with severe ASD and ADHD, Paladyn. J. Behav. Robot. 10(1), 330–345 (2019).
https://doi.org/10.1515/pjbr-2019-0027
11. Scassellati, B.B.: How social robots will help us to diagnose, treat, and understand Autism.
In: Robotics Research: Results of the 12th International Symposium, ISRR, pp. 552–563.
Springer, Heidelberg (2007)
12. So, W.-C., et al.: A robot-based play-drama intervention may improve the joint attention and
functional play behaviors of Chinese-speaking preschoolers with Autism spectrum disorder:
a pilot study. J. Autism Dev. Disord. 50(2), 467–481 (2019). https://doi.org/10.1007/s10803-
019-04270-z
13. Silva, K., Lima, M., Santos-Magalhães, A., Fafiães, C., de Sousa, L.: Living and robotic dogs as
elicitors of social communication behavior and regulated emotional responding in individuals
with autism and severe language delay: a preliminary comparative study. Anthrozoös 32(1),
23–33 (2019). https://doi.org/10.1080/08927936.2019.1550278
14. Han, I.: Embodiment: a new perspective for evaluating physicality in learning. J. Educ.
Comput. Res. 49(1), 41–59 (2013)
15. Kopcha, T.J., et al.: Developing an integrative STEM curriculum for robotics education
through educational design research. J. Form. Des. Learn. 1, 31–44 (2017)
16. Werfel, J.: Embodied teachable agents: learning by teaching robots (2014)
17. Pop, C., Pintea, S., Vanderborght, B., David, D.D.: Enhancing play skills, engagement and
social skills in a play task in ASD children by using robot-based interventions. A pilot study.
Interact. Stud. 15, 292–320 (2014)
18. Scassellati, B., Admoni, H., Matarić, M.: Robots for use in Autism research. Annu. Rev.
Biomed. Eng. 14, 275–294 (2012). https://doi.org/10.1146/annurev-bioeng-071811-150036
19. Duquette, A., Michaud, F., Mercier, H.: Exploring the use of a mobile robot as an imitation
agent with children with low-functioning autism. Auton. Robot. 24, 147–157 (2008). https://
doi.org/10.1007/s10514-007-9056-5
20. Dautenhahn, K.: Design issues on interactive environments for children with Autism. In:
Proceedings of the International Conference on Disability, Virtual Reality and Associated
Technologies (2000)
21. Emanuel, R., Weir, S.: Using LOGO to catalyse communication in an autistic child. Technical
report DAI Research Report No. 15, University of Edinburgh (1976)
22. Creswell, J.W.: Collecting Qualitative Data in Educational Research: Planning, Conducting
and Evaluating Quantitative and Qualitative Research, 4th edn. Pearson, Boston (2012)
23. Patton, M.Q.: Qualitative Evaluation and Research Methods. Sage, Thousand Oaks (2002)
24. Mason, J.: Qualitative Researching, 2nd edn. Sage Publications, London (2002)

25. Schadenberg, B.R., Reidsma, D., Heylen, D.K.J., Evers, V.: Differences in spontaneous inter-
actions of autistic children in an interaction with an adult and humanoid robot. Front. Robot.
AI 7, 28 (2020). https://doi.org/10.3389/frobt.2020.00028
26. Efstratiou, R., et al.: Teaching daily life skills in Autism Spectrum Disorder (ASD) interven-
tions using the social robot Pepper. In: Springer Series: Advances in Intelligent Systems and
Computing, 11th International Conference on Robotics in Education. Springer, Singapore
(2021)
27. Cabibihan, J.-J., Javed, H., Ang, M., Aljunied, S.M.: Why robots? A survey on the roles
and benefits of social robots in the therapy of children with Autism. Int. J. Soc. Robot. 5(4),
593–618 (2013). https://doi.org/10.1007/s12369-013-0202-2
28. Costa, S., Soares, F., Santos, C., Pereira, A., Moreira, M.: Lego robots & Autism spectrum dis-
order: a potential partnership? Revista de Estudios e Investigación en Psicología y Educación
3(1), 50–59 (2016)
Development of Educational Scenarios
for Child-Robot Interaction: The Case
of Learning Disabilities

Elpida Karageorgiou1, Efrosyni Kourampa1, Athanasia-Tania Papanikolaou1,
Petros Kechayas2, Eleftheria Avramidou2, Rafailia-Androniki Sabri3, Chris Lytridis4,
George A. Papakostas4(B), and Vassilis G. Kaburlasos4
1 Family Center KPG, 54352 Thessaloniki, Greece
elpis_kar@hotmail.com, kourampa@hotmail.com,
athanasiapapanik@gmail.com
2 1st Psychiatric Clinic of Aristotle University, Papageorgiou General Hospital, Aristotle
University of Thessaloniki, 56403 Thessaloniki, Greece
pekehag@gmail.com, avramdiouria@yahoo.gr
3 Novel Therapeutic and Consulting Unit Praxis, El. Venizelou 42, 65302 Kavala, Greece
niki2609@hotmail.com
4 HUMAIN-Lab, Department of Computer Science, International Hellenic University,
65404 Kavala, Greece
{lytridic,gpapak,vgkabs}@teiemt.gr

Abstract. During the last decade, there has been an increased interest in research
on the use of social robots in education, both in typical education as well as in
special education. Despite the demonstrated advantages of robot-assisted tutoring
in typical education and the extensive work on robots in supporting autism therapy-
related scenarios, little work has been published on how social robots may support
children with Learning Disabilities (LD). The purpose of this paper is to present a
comprehensive experimental protocol where a social robot is directly involved in
specially designed intervention strategies, which are currently underway. These
strategies are designed to address reading comprehension and other learning skills
through child-robot interaction, aiming at improving the academic performance
of the children diagnosed with learning disabilities.

Keywords: Social robots · Learning disabilities · Reading comprehension · Child-robot interaction · Special educational needs

1 Introduction
We are currently witnessing a new technological revolution in which robots play a prominent role. One type of this new generation of robots, called social robots, is designed to participate in the daily activities of humans, such as education. Much research
has been carried out to demonstrate the educational utility of social robots. The benefits
of using robots in the learning process are mainly due to their physical appearance, which makes them attractive to children and stimulates curiosity. Consequently, several educational advantages arise. Firstly, robots can be used for curricula or populations that require enhanced engagement [1]. Secondly, using a robot encourages students to develop social behaviors that are beneficial to learning. Additionally, robots can work tirelessly as long as their power demands are met, and their teaching performance does not deteriorate over time. They can also be programmed to deliver different subjects without the need for years of training [2]. Furthermore, educational robots do not discriminate, do not express frustration and are generally small in size, which makes children feel more comfortable and increases their confidence [3].
However, despite the fact that much research has been carried out regarding the
potential of using robots as tutors, there have been only a few studies that explore
robots’ potential when it comes to learning independence and efficiency in children
who have been diagnosed with LD. Moreover, social robots have so far mostly been
used in short-term interventions with strictly structured learning scenarios, with limited
adaptation and flexibility to each student’s needs. One important challenge is that students with LD come into classrooms without the requisite knowledge, skills, and
disposition to read and comprehend texts. This is because children with LD have not
developed this metacognitive awareness or the ability to skilfully apply comprehension
strategies [4]. Over the last decades, a plethora of studies have shown the effectiveness
of reading interventions based on cognitive and metacognitive strategies in children with
LD.
In this paper, a comprehensive long-term intervention for children with LD is pre-
sented, where the specially-designed educational activities are supported by a social
robot. It is part of the national project titled “Social Robots as Tools in Special Educa-
tion” which aims at using social robots as innovative tools in Greek Special Treatment
and Education. The paper is structured as follows: In Sect. 2 there is an overview of
the current literature on robot-assisted special education, and the issues in addressing
LD. Section 3 describes the experimental procedure and the role of the social robot in
the proposed educational scenarios. Finally, Sect. 4 summarizes the various aspects and
identifies potential issues of the proposed methodology.

2 Literature Review

2.1 Use of Robots in Education

An increasing number of studies confirm the potential that social robots have when it
comes to education and tutoring. Teaching assistant robots are being used to provide
motivation, engagement and personalized support to students [5, 6]. Moreover, social
robots can create an interactive and engaging learning experience for students [7, 8].
The physical presence of a robot can lead to positive perceptions and increased task
performance when compared with virtual agents [9]. For example, work presented in
[10] suggests that instructions given by a social robot during a cognitive puzzle task can
be more effective than those that were given by video-based agents or a disembodied
voice. As it can be noticed in these studies, the use of social robots in the learning
process motivates children and increases their attention. This is particularly important,
since students with LD are often observed to lack motivation and usually do not actively participate in the learning process.
The use of social robots in therapeutic interventions in special education has mainly focused on the treatment of autism. As far as learning disability interventions are concerned, studies are typically limited to short, well-defined lessons delivered with limited adaptation to individual learners. Recent studies aim at exploring how social robots can support meta-cognitive strategies that lead to learning independence. In [11], there is an investigation of how personalized robot tutoring using an open learner model can impact learning and self-regulation skills. The study concluded that a social robot can be effective in scaffolding self-regulated learning and in motivating students in a real-world school environment. In another study, the effectiveness of the “think aloud” protocol delivered via a robot tutor was investigated [12]. The results showed that students who completed the “think aloud” exercise were more engaged when it was delivered by the robot tutor, which fostered immediate learning outcomes. Although these results refer to typical education, the underlying principles can be applied in equivalent special education settings.

2.2 Reading Intervention for Children with LD


Reading comprehension is a basic skill that must be mastered in order for someone to become a capable reader. More specifically, reading comprehension is the process in which meaning is simultaneously extracted and constructed [13]. It can also be defined as one’s ability to understand the meaning of written texts in order to achieve a learning goal [14]. Therefore, it is a basic skill which affects academic performance.
Reading comprehension is the product of a complex integration of knowledge and skills such as phonological and phonemic awareness, decoding and vocabulary [15]. When a student has poor phonological awareness at the initial stage of learning to read, there is likely to be a delay in learning to decode [16]. As a result, if a student does not learn to decode what he/she reads, then they will not be able to understand the text. The majority of students with LD (approximately 80%) experience problems while learning to read [17]. Previous studies and meta-analyses have shown that reading comprehension interventions can be effective when it comes to improving children’s academic performance [18].
Based on the above, our proposed intervention protocol aims to address the following
components of reading: a) phonological awareness, b) reading skills, c) working memory
strategies, and d) reading comprehension skills. By improving these components, it is
expected that academic skills of children can be improved as a whole.

3 Methods
In this section, the experimental design and experimental procedure that will be followed
in this study are described.

3.1 Experimental Design


The experimental design involves a randomized study of the learning intervention, in which random sampling was adopted. Using this methodology, each member has an equal chance of being selected as a subject. The 29 participants of the clinical trials are children in the third and fourth grade of primary school (9–11 years old, 21 males and 7 females), since during these early grades the emphasis is given to the teaching of decoding, as opposed to later grades where the emphasis is given to reading comprehension. The participants have a diagnosis of LD (DSM-5 315.00, 315.1, 315.2). To confirm their eligibility for the study, and after parental consent was given, an official assessment was carried out at the Papageorgiou General Hospital in Thessaloniki, Greece. Students’ individual educational profiles were created from the initial assessment.
The eligible participants have then been divided into two groups depending on the
type of intervention they receive:

• Group A: Robot-free intervention. The intervention protocol is implemented exclusively by the specialist educator. This is the control group. When the full intervention
is completed, they will be able to get to know the robot and play with it.
• Group B: Intervention with the robot. The robot assists the educator during the learning
intervention and actively participates in the delivery of the various exercises.

The comparison between the two groups at the end of all the intervention sessions will
result in quantifiable conclusions regarding the effect of using the social robot. To address
privacy issues, any data collected was anonymized.
Interventions take place in a therapy room which provides all the necessary equip-
ment for the intervention, such as the robot, a laptop for the control and recording
software, and other items that are part of the educational activities.

3.2 The Social Robot and Its Role

The social robot NAO has been selected for the proposed learning intervention. It is a humanoid robot that has already been used by the authors in previous work [19, 20] and is a popular choice in educational robotics research because of its appealing physical appearance as well as its sensory and motion capabilities. The robot is able to speak and move, and its speech can be controlled in terms of speed and pitch. It can also use LEDs and play back sounds, which complement and enhance its interaction with the child. Speech recognition allows the robot to understand what the child is saying and to monitor and record sound. Finally, the robot has cameras which can be used to detect motion or eye contact. The robot’s role in the interventions is dual:

• A teacher’s assistant. The robot participates as an assistant of the special educator by giving instructions, modelling part of the cognitive and metacognitive strategies,
giving feedback and where possible guiding the learning performances.
• An annotator. The robot monitors the child during the session and depending on the
activity, records various quantities. For example, in a task where the child is reading
text, the robot records the reading speed, volume, rhythm and possible mistakes during
reading. The educator can, at the end of the session, collect these results and create a record for each child, which helps monitor the child’s progress. This is particularly important since some of these features are difficult for the educator to measure and record during the session. In contrast to other studies, where these features are measured by a human annotator using a video recording, our system provides automated logging that uses the robot sensors as data acquisition devices (a minimal sketch of such a log record is shown after this list).
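
A minimal sketch of what such an automatically collected log record could look like is given below; the field names and values are hypothetical illustrations, not the project’s actual data schema.

# Hypothetical sketch of a per-activity log record collected by the robot;
# field names are illustrative, not the project's actual data schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReadingLogEntry:
    child_id: str                 # anonymized participant code
    activity: str                 # e.g. "decoding" or "reading comprehension"
    reading_speed_wpm: float      # words per minute estimated from the audio
    mean_volume_db: float         # average voice volume during the task
    mistakes: List[str] = field(default_factory=list)  # words read incorrectly

session_log: List[ReadingLogEntry] = []
session_log.append(ReadingLogEntry("S01", "decoding", 42.0, 58.3, ["mable"]))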

The use of a familiar, informal communication style by the robot that personalizes the
intervention has been found to lead to learning benefits [21]. Therefore, in terms of
personalization, the robot is able to remember and use the child’s name during speech,
and recalls past activities to adjust the difficulty level of current activities.

3.3 Educational Scenarios


The intervention protocol includes 24 sessions (taking place twice a week) of integrated
learning activities of cognitive and metacognitive strategies. The activities consist of
narrative text, comprehension, vocabulary editing and writing exercises in a form that
students are familiar with, since they are based on their school textbooks.
The special educator guides the intervention, intervening wherever he judges it to be
necessary for the smooth execution of the session. In addition, during each session, the
special educator records his observations on the progress in the special sheets produced
for this purpose. These sheets are later examined in conjunction with the robot logs.
Apart from the first introductory session, where the child becomes familiar with
the educator, the robot and the therapy room, the next sessions deal with gradually
achieving the educational goals through the various activities. Each session consists of
a set of learning activities and each activity targets one of the following objectives:

• Phonological and phonemic awareness: Phonological awareness refers to the ability to segment and manipulate the sounds of spoken language. Therefore, the activities
that are given to the students are a mixture of blending and segmenting phonemes,
onset-rime blending and segmentation, syllable blending and deletion etc.
• Reading skills: Basic reading skills activities include vocabulary acquisition, pre-
reading strategies, textual comprehension, organizational skills and response tech-
niques. In order to master basic reading skills, participants of this study learn how to
increase their reading speed, fluency, and overall vocabulary through activities.
• Working memory strategies: Working memory refers to how children utilize informa-
tion stored in short-term memory. Children use working memory to learn and follow
instructions. Therefore, the intervention’s activities in this particular field are based
on working on visual skills, card games, games that use visual memory, backward
digit span tasks, listening span tasks, retelling stories etc.
• Reading comprehension: During the intervention, children learn pre-reading, dur-
ing reading and post-reading strategies. Pre-reading strategies involve brainstorming
about a topic that is familiar and predicting what they think they will learn. During
reading, the educator monitors children’s comprehension and applies fix-up strategies
to assist with unknown words. Also, the children identify the most important ideas
about a topic in a section of text by using visual imagery strategy. After reading,
students generate questions and review the key ideas using graphic organizers. The
robot presents the strategies to the children using modelling and think-aloud. After
students become proficient in the use of the strategies, the robot adopts a more passive
role and repeats the basic steps of the strategy only when asked to do so.

Table 1 shows an example intervention session in which the objectives are memory
reinforcement, decoding and reading comprehension.

Table 1. Example structure of an intervention session

Introduction: The robot greets the child, and then a short conversation between robot and child follows.

Memory reinforcement: The child is given a set of pictures (e.g. animals). After the child observes the pictures, one of the pictures is removed. The robot asks which picture is missing and checks the answer.

Decoding: A word is formed and then one or more letters are substituted with other letters in order to form pseudo-words. For example, the word «table» becomes the word “mable”. When a number of these pseudo-words have been produced, the robot asks the student to read them, according to the letter substitutions that have been given by the educator. The robot assesses the reading.

Reading comprehension: The student is asked by the robot to work through the text using the fix-up strategies that have been taught. The robot gives the following instructions and monitors the student’s performance: 1) Underline unknown words and phrases, 2) Re-read the part that makes it harder for you to understand, 3) Ask some new questions about the text, 4) Try to retell the story in your own words.

Psychopedagogical activity: The robot describes itself to the child and mentions the features that it feels are its strongest. The robot then asks the child to do the same. The activity ends with the robot praising the child for the features he chose (self-image enhancement).

At the end of every session, the educator reviews the data collected by the robot.
After the completion of the entire intervention, the children’s reassessment (follow-up
session) takes place at the hospital in order to examine the learning gains.
It can be seen from the above that the social robot has a central role in all educational activities, both by (a) guiding the educational process and (b) monitoring the children’s performance. The implementation of the scenarios requires specific capabilities on the part of the NAO social robot, particularly the text-to-speech function for instructions and for recitation and pronunciation exercises. The robot is frequently required to interact with the child, and so speech recognition is also vital to the implementation of the scenarios. Preliminary work revealed that recognition works optimally when the child is directly in front of the robot while speaking. Since the child is expected to move during the intervention, face tracking has been employed so that the robot continuously maintains its focus on the child, thereby improving the recognition rate as well as helping to maintain the child’s engagement. Other features, such as gestures, sound playback and the multi-color LEDs, are used as motivators and improve the child’s experience.
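
As an illustration of how these capabilities are typically accessed, the following minimal sketch assumes the NAOqi Python SDK and a reachable robot; the IP address, greeting text and parameter values are hypothetical, and the snippet is not the project’s actual control software.

# Minimal sketch assuming the NAOqi Python SDK; the robot address, greeting
# and parameter values below are hypothetical examples.
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559      # hypothetical robot address

tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
tts.setParameter("speed", 80)              # slow the speech down for young readers
tts.setParameter("pitchShift", 1.1)        # slightly raise the voice pitch
tts.say("Hello Maria, shall we read the story together?")  # personalized greeting

tracker = ALProxy("ALTracker", ROBOT_IP, PORT)
tracker.registerTarget("Face", 0.15)       # expected face width in metres
tracker.track("Face")                      # keep the robot's head turned towards the child
# ... run the session activities here ...
tracker.stopTracker()
tracker.unregisterAllTargets()

Comparable calls would be issued by the control and recording software running on the laptop mentioned in Sect. 3.1.
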
Preliminary observations in this ongoing work reveal that most children in the robot-
assisted intervention group appear to be motivated by the presence of the robot and are
generally more responsive to the robot’s instruction. However, the quantified educational
benefits of using the robot as opposed to the control group remain to be seen in the
statistical analysis at the end of the intervention.

4 Conclusions and Future Work

The paper presents a complete social robot-based intervention for dealing with learning disabilities, which is currently being applied. There are four distinct learning objectives,
namely phonological awareness, reading skills, working memory strategies and reading
comprehension skills. The protocol consists of 24 sessions with a well-defined struc-
ture of activities. The social robot NAO is integrated into the activities and participates
actively as a teacher’s assistant by giving instructions, providing motivation and guiding
the workflow of the various exercises. The robot is also used as an annotator by mon-
itoring the child’s performance, and recording quantities that are not easily measured
by an educator in real time, such as duration of speech, voice volume, performance in
each exercise etc. In this way, the educator can keep a detailed record of the sessions and
monitor the child’s improvement throughout the duration of the intervention. Ultimately,
the skills practiced during the intervention with the aid of the social robot, are expected
to improve the overall academic performance of the children.

Acknowledgment. This research has been co-financed by the European Union and Greek national
funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under
the call RESEARCH – CREATE – INNOVATE (project code: T1EDK-00929).

References
1. Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati, B., Tanaka, F.: Social robots for
education: a review. Sci. Robot. 3 (2018). https://doi.org/10.1126/scirobotics.aat5954
2. Ivanov, S.: Will robots substitute teachers? In: 12th International Conference “Modern Sci-
ence, Business and Education”, Varna, Bulgaria, pp. 42–47 (2016). https://doi.org/10.1080/
15388220903185605
3. Kaburlasos, V.G., Vrochidou, E.: Social robots for pedagogical rehabilitation: trends and
novel modeling principles. In: Dimitrova, M., Wagatsuma, H. (eds.) Cyber-Physical Systems
for Social Applications. Advances in Systems Analysis, Software Engineering, and High
Performance Computing (ASASEHPC), pp. 1–21. IGI Global, Hershey (2019). https://doi.
org/10.4018/978-1-5225-7879-6.ch001
4. Baker, L., Brown, A.L.: Metacognitive skills and reading. In: Pearson, P.D. (ed.) Handbook
of Research in Reading, pp. 353–395. Longman, New York (1984)
5. Matarić, M.: Socially assistive robotics. In: Proceedings of the 2014 ACM/IEEE international
conference on Human-robot interaction - HRI 2014, p. 333. ACM Press, New York (2014).
https://doi.org/10.1145/2559636.2560043
6. Jacq, A., Lemaignan, S., Garcia, F., Dillenbourg, P., Paiva, A.: Building successful long child-
robot interactions in a learning context. In: 2016 11th ACM/IEEE International Conference
on Human-Robot Interaction (HRI), pp. 239–246. IEEE (2016). https://doi.org/10.1109/HRI.
2016.7451758

7. Barco, A., Albo-Canals, J., Garriga, C.: Engagement based on a customization of an iPod-
LEGO robot for a long-term interaction for an educational purpose. In: Proceedings of the 2014
ACM/IEEE International Conference on Human-Robot Interaction - HRI 2014, pp. 124–125.
ACM Press, New York (2014). https://doi.org/10.1145/2559636.2563697
8. Aslam, S., Shopland, N., Standen, P.J., Burton, A., Brown, D.: A comparison of humanoid and
non humanoid robots in supporting the learning of pupils with severe intellectual disabilities.
In: Proceedings of the 2016 International Conference on Interactive Technologies and Games
EduRob Conjunction with iTAG 2016, iTAG 2016, pp. 7–12 (2016). https://doi.org/10.1109/
iTAG.2016.9
9. Li, J.: The benefit of being physically present: a survey of experimental works comparing
copresent robots, telepresent robots and virtual agents. Int. J. Hum. Comput. Stud. 77, 23–37
(2015). https://doi.org/10.1016/j.ijhcs.2015.01.001
10. Leyzberg, D., Spaulding, S., Scassellati, B.: Personalizing robot tutors to individuals’ learning
differences. In: Proceedings of the 2014 ACM/IEEE International Conference on Human-
Robot Interaction, pp. 423–430. ACM, New York (2014). https://doi.org/10.1145/2559636.
2559671
11. Jones, A., Castellano, G.: Adaptive robotic tutors that support self-regulated learning: a longer-
term investigation with primary school children. Int. J. Soc. Robot. 10(3), 357–370 (2018).
https://doi.org/10.1007/s12369-017-0458-z
12. Ramachandran, A., Huang, C.-M., Gartland, E., Scassellati, B.: Thinking aloud with a tutoring
robot to enhance learning. In: Proceedings of the 2018 ACM/IEEE International Conference
on Human-Robot Interaction - HRI 2018, pp. 59–68. ACM Press, New York (2018). https://
doi.org/10.1145/3171221.3171250
13. Sweet, A.P., Snow, C.E.: Rethinking Reading Comprehension. Guilford Publications, New
York (2003)
14. Vellutino, F.R.: Individual differences as sources of variability in reading comprehension
in elementary school children. In: Sweet, A.P., Snow, C.E. (eds.) Rethinking Reading
Comprehension, pp. 51–81. Guilford Publications, New York (2003)
15. Paris, S.G.: Children’s Reading Comprehension and Assessment. Routledge, Milton Park
(2005). https://doi.org/10.4324/9781410612762
16. Nicholson, T.: Reading comprehension processes. In: Thompson, B.G., Nicholson, T. (eds.)
Learning to Read: Beyond Phonics and Whole Language, pp. 127–149. Teachers College
Press, New York (1998)
17. Gersten, R., Fuchs, L.S., Williams, J.P., Baker, S.: Teaching reading comprehension strategies
to students with learning disabilities: a review of research. Rev. Educ. Res. (2007). https://
doi.org/10.3102/00346543071002279
18. Vaughn, S., et al.: Response to intervention for middle school students with reading difficulties:
effects of a primary and secondary intervention. School Psych. Rev. 39, 3–21 (2010)
19. Lytridis, C., et al.: Distance special education delivery by social robots. Electronics 9, 1034
(2020). https://doi.org/10.3390/electronics9061034
20. Papakostas, G.A., Strolis, A.K., Panagiotopoulos, F., Aitsidis, C.: Social robot selection: a case
study in education. In: 2018 26th International Conference on Software, Telecommunications
and Computer Networks (SoftCOM), Split, Croatia, pp. 1–4. IEEE (2018). https://doi.org/10.
23919/SOFTCOM.2018.8555844
21. Baxter, P., Ashurst, E., Read, R., Kennedy, J., Belpaeme, T.: Robot education peers in a situated
primary school study: Personalisation promotes child learning. PLoS ONE 12, e0178126
(2017). https://doi.org/10.1371/journal.pone.0178126
Educational Robotics in Online Distance
Learning: An Experience from Primary
School

Christian Giang1,2(B) and Lucio Negrini1

1 Department of Education and Learning, University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Locarno, Switzerland
{christian.giang,lucio.negrini}@supsi.ch
2 Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland

Abstract. Temporary school closures caused by the Covid-19 pandemic have posed new challenges for many teachers and students worldwide. The abrupt shift to online distance learning, in particular, posed many obstacles to be overcome and complicated the implementation of Educational Robotics activities. Such activities usually comprise a variety of different learning artifacts, which were not accessible to many students during the period of school closure. Moreover, online distance learning considerably limits the possibilities for students to interact with their peers and teachers. In an attempt to address these issues, this work presents the development of an Educational Robotics activity particularly conceived for online distance learning in primary school. The devised activities are based on pen and paper approaches that are complemented by commonly used social media to facilitate communication and collaboration. They were proposed to 13 students as a way to continue ER activities in online distance learning over a period of four weeks.

Keywords: Educational robotics · Distance learning · Online learning · Pen and paper · Primary school

1 Introduction

With the onset of the Covid-19 pandemic, schools worldwide faced temporary
closure, intermittently affecting more than 90% of the world’s students [1]. As
a consequence, many teachers had to adapt their activities to online formats in
order to continue instruction at distance. Moving to the online space represented
a particular challenge for compulsory schools, where, in contrast to many uni-
versities, online learning is still rather unexplored [2]. In addition to general diffi-
culties associated with this new format of instruction, such as lack of experience
in online learning or poor digital infrastructure [3], implementing Educational
Robotics (ER) activities appeared to be particularly complicated. ER activities
usually comprise a combination of different learning artifacts [4], namely robots,
programming interfaces and playgrounds, that are often dependent on each
other and have thus also been referred to as “Educational Robotics Learning Sys-
tems” (ERLS) [5]. Many times, students benefit from their school’s infrastructure
providing them with access to these artifacts. In this regard, the restrictions on
individual mobility imposed during the pandemic in many countries, have sig-
nificantly limited these opportunities. Although previous works have explored
remote labs as a way to implement robotics education in distance learning, most
approaches were limited to university level (e.g. [6–9]) or secondary school (e.g.
[10,11]). Bringing such approaches to primary school education is not obvious, since it would require extensive adjustments of the tools and activities to make them age-appropriate.
Furthermore, ER activities usually capitalize on students’ interactions with
their peers and teachers building on the ideas of project-based didactics [12] and
social constructivist learning [13]. As illustrated in a previous study by Negrini
and Giang [14], students indeed perceived collaboration as one of the most
important aspects of ER activities. However, during the pandemic, with teachers
and students working from home, providing possibilities for direct interactions
became significantly more difficult. While the recent technological advances have
yielded a myriad of tools for online communication and collaboration, only a few
educators have already acquired the pedagogical content knowledge to leverage
them for the design of meaningful online learning experiences [15]. As a matter of
fact, Bernard et al. [16] have emphasized that the effectiveness of distance educa-
tion depends on the pedagogical design rather than the properties of the digital
media used. Designing engaging and pedagogically meaningful online activities
is especially important at primary school level, in order to address the limited
online attention spans of young children as well as the concerns of many parents
with regard to online learning [17].
Both the limited access to ER learning artifacts and the unfavorable conditions for direct interactions in online settings thus represented major obstacles to be overcome to implement ER online activities during the Covid-19 pandemic. On the other hand, these circumstances emerged as an opportunity to experiment with new approaches addressing these issues. In this work, we will present one such approach, developed during the Covid-19 pandemic, to implement ER online activities for primary school students.
access to ER learning artifacts, activities were mainly based on pen and paper
approaches. Commonly used social media were used as a complement to augment
the learning experience and to allow students to communicate with their peers
and teacher as well as to collaborate with each other. The devised activities were
proposed to 13 students aged between 9 and 12 years old, over the time course
of four weeks. In the following, we will present the devised activity, report on
the benefits and challenges observed from this experience and finally discuss how
the findings could inform future works in this domain.

2 An Example of an ER Online Activity for Primary School

The devised ER online activities were proposed to 13 primary school students from a small village in Switzerland (aged between 9 and 12 years old). In the
German-speaking part of Switzerland the compulsory school curriculum foresees
“Media and informatics” activities but leaves it to the teacher to decide which
tools they want to use for those activities. In this case, the teacher decided
to perform ER activities during those classes as well as during math classes.
Therefore, in the classroom activities performed before the pandemic, the stu-
dents had the possibility to familiarize themselves with the Bee-Bot1 robot and
its programming instructions. After school closure due to the pandemic, online
activities were developed and proposed to the students as a voluntary option to
continue the ER activities from home. Of the 13 students who were offered
this option, 9 decided to complete them. School lessons during the closure were
organized on a weekly basis. At the beginning of each week, new assignments
and learning materials were deployed to the students by email or by post, which
were then discussed on the following day through online video conferences. The
rest of the week was dedicated to the independent completion of the assignments
with the possibility for individual discussions with the teacher. Since students
did not have access to the necessary ER tools, a pen and paper approach was
taken to provide them with alternative learning artifacts: colored triangles were
used as robots, paper strips as programming interfaces and a paper grid as the
playground (Fig. 1).

Fig. 1. Paper grid and triangles to locally simulate the behavior of the Bee-Bots (A),
set of instructions for the robot (B) and an example of a solution proposal (C).

The paper grid and the triangles served the students as learning artifacts
to locally simulate the behavior of the Bee-Bots. Each triangle represented a
robot with its orientation on the grid. The paper strips allowed the students
to write down their solution proposals before submitting them to their teacher
for evaluation. Solutions were submitted as photos sent through smartphone
messaging applications (Fig. 2A). The teacher, having access to Bee-Bot robots,
1 https://www.tts-international.com/bee-bot-programmable-floor-robot/1015268.html.

programmed the robots according to the solution proposals submitted by the
students and made video recordings of the resulting behaviors. The videos were
then returned to the students together with some feedback from the teacher (using
text or audio messages). Moreover, students could directly provide feedback to
their peers, either during the synchronous video conferencing sessions or through
exchanges in group chats. In total, 12 different tasks were developed, from which
the students could choose one at the beginning of each week. To complete the
tasks, students had to find a suitable series of programming instructions for the
Bee-Bots that would satisfy certain conditions. For instance, in the activity illus-
trated in Fig. 2B, four robots (starting positions marked in colors) have to be
programmed to drive around the outer circle of the grid (marked with crosses).
The idea of this activity was for students to realize that a turn takes an additional
step and they therefore had to make use of the “pause step” command to suc-
cessfully complete the task without making the robots crash into each other. In
another example, illustrated in Fig. 2C, students had to program four Bee-Bots
with the exact same programming instructions, making them perform a choreog-
raphy and finally return to their starting positions (orange fields). The difficulty
here was to find the longest possible choreography, without making robots crash
into each other. The other ten activities were similar, with an increasing difficulty
given by adding more robots, larger grids or more constraints. For all proposed
activities, students worked independently from home. In case of difficulties, they
could contact the teacher, who was available via social media.

Fig. 2. Example of a message exchange between a student and the teacher (A), assign-
ment in which four robots have to be programmed to drive around the outer circle
of the grid (B) and assignment in which all four robots have to perform the same
choreography without crashing into each other (C).
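For readers who want to see the logic of these tasks in code form, the short Python sketch below simulates how a sequence of Bee-Bot-style instructions moves a robot across a grid. It is only an illustration of the task structure (the actual activity was entirely pen-and-paper), and the command names, grid size and starting pose are assumptions made for this example.

```python
# Minimal sketch: simulate a Bee-Bot-style instruction sequence on a paper grid.
# Grid size, command names and starting pose are illustrative assumptions.

HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W as (dx, dy)

def run_program(program, start=(0, 0), heading=0, grid=(6, 6)):
    """Return the list of poses visited by one robot executing the program."""
    (x, y), h = start, heading
    trace = [(x, y, h)]
    for cmd in program:
        if cmd == "FWD":                # one step forward
            dx, dy = HEADINGS[h]
            x, y = x + dx, y + dy
        elif cmd == "BACK":             # one step backward
            dx, dy = HEADINGS[h]
            x, y = x - dx, y - dy
        elif cmd == "LEFT":             # rotate 90 degrees counterclockwise in place
            h = (h - 1) % 4
        elif cmd == "RIGHT":            # rotate 90 degrees clockwise in place
            h = (h + 1) % 4
        elif cmd == "PAUSE":            # stay in place for one step
            pass
        if not (0 <= x < grid[0] and 0 <= y < grid[1]):
            raise ValueError(f"Robot left the grid at step {len(trace)}")
        trace.append((x, y, h))
    return trace

# Example: drive along one side of the outer circle, pausing so that a turning
# robot behind does not catch up (the "a turn takes an additional step" insight).
print(run_program(["FWD", "FWD", "FWD", "PAUSE", "RIGHT"]))
```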

3 Results and Discussion


The students who participated in the activities showed high engagement and
evaluated the ER online activity as mainly positive. Students usually eagerly
awaited the video recordings of their programmed Bee-Bot behaviors.
Once they received the videos from their teacher, students uploaded them to
the group chat, allowing others to provide feedback and suggest improvements.

The integration of social media that most students were familiar with from their
everyday life (i.e., smartphone messaging applications) facilitated a smooth oper-
ation without any technical issues. At the same time, it allowed students to com-
municate and collaborate with their peers even from afar. Similarly, it allowed
the teacher to provide direct feedback to individual students using text or audio
messages. In addition to the asynchronous approach through messaging appli-
cations, the weekly online video conferences enabled synchronous moments of
interaction. During the video conferences, the teacher discussed with the stu-
dents in plenary sessions, explaining upcoming assignments and debriefing com-
pleted ones. In their work published during the pandemic, Rapanta et al. [15]
have suggested mixing synchronous and asynchronous approaches as a favorable
strategy for online learning. When well designed, it makes it possible to shift the responsi-
bility for learning to the student, while the teacher takes the role of a facilitator
and tutor. Since ER is built on the same learner-centered approach, following
this idea of hybrid forms of instruction could provide a favorable framework
for the implementation of ER online activities. Moreover, from the teacher’s
perspective, following hybrid approaches could significantly reduce the fatigue
associated with the extensive use of synchronous forms of online instruction.
Except for the first week, when around 30 min were needed to explain the gen-
eral idea of the devised activities, only a little time (around 5 min)
was needed to discuss the activities during the online video conferences of the
subsequent weeks.
In a recent work, El-Hamamsy et al. [18] have highlighted that the intro-
duction of screens in primary school is still met with reticence. Reducing active
screen time was particularly relevant during the pandemic, in order to address
the limited online attention span of young children as well as the concerns of
their parents with respect to online learning [17]. With online learning becoming
increasingly prevalent, finding alternative, “analogue” ways of distance learning
is crucial. As Mehrotra et al. [19] have suggested before, one way to address the
issue of extensive screen time in ER activities is to introduce paper-based learn-
ing approaches. Building on this idea, the activities proposed in this work mainly
relied on paper-based materials. It could be argued that performing ER activities
without having students directly interact with robots and programming inter-
faces represents a major limitation of the proposed approach. However, a recent
work by Chevalier et al. [20] has also highlighted that providing students with
unregulated access to ER tools, in particular the programming interfaces, does
not necessarily result in better learning experiences. Indeed, when ER activities
are not well designed, involving these learning artifacts can even be counter-
productive, since they can promote blind trial-and-error approaches and hence
prevent desired reflection processes. The pen and paper approach applied in this
work provided students with alternative learning artifacts, that in contrast to
real robots, cannot provide immediate feedback on the proposed solutions. The
decelerated feedback loop, requiring students to take pictures, sending them to
their teacher and waiting for their response, may therefore represent an inter-
esting approach to promote the use of pauses for reflection. As shown by Perez
et al. [21], the strategic use of pauses was strongly associated with successful
learning in inquiry-based learning activities.

The three youngest students (9 years old) who chose to participate in the
proposed ER online activities completed one assignment per week for the whole
duration of the school closure (four weeks in total). Of the six older students
(12 years old) who started in the first week, only half continued until the
end. The higher dropout rate of the older students could be related to the limited
difficulty of the proposed activities. Bee-Bots are designed as
tools for rather young children and are therefore limited in the complexity of
the possible programming instructions. The proposed ER online activities could
thus have been perceived as not sufficiently challenging by the older students.

4 Conclusion
The presented ER online activity was developed to face the challenges of online
distance learning and allowed primary school students to continue with ER activ-
ities during the Covid-19 pandemic. Moreover, some interesting aspects emerged
from this experience, in particular the promotion of reflection processes due to
the use of pen and paper learning artifacts. The relevance of such approaches
could indeed go beyond the context of online learning and potentially also gen-
eralize to ER in face-to-face classroom activities. However, it should be acknowl-
edged that more extensive studies are needed to confirm this hypothesis. More-
over, future work could also investigate whether similar approaches could be
adapted for more advanced ER tools in order to design more complex tasks for
more mature students.

Acknowledgements. The authors would like to thank the teacher Claudio Giovanoli
for his contributions in designing and implementing the Bee-Bot online activity as well
as the children of the school in Maloja who decided to participate in the voluntary ER
online activities.

References
1. Donohue, J.M., Miller, E.: Covid-19 and school closures. JAMA 324(9), 845–847
(2020)
2. Allen, J., Rowan, L., Singh, P.: Teaching and teacher education in the time of
covid-19 (2020)
3. Carrillo, C., Flores, M.A.: Covid-19 and teacher education: a literature review of
online teaching and learning practices. Eur. J. Teach. Educ. 43(4), 466–487 (2020)
4. Giang, C., Piatti, A., Mondada, F.: Heuristics for the development and evaluation
of educational robotics systems. IEEE Trans. Educ. 62(4), 278–287 (2019)
5. Giang, C.: Towards the alignment of educational robotics learning systems with
classroom activities. Ph.D. thesis, EPFL, Lausanne, Switzerland (2020)
6. Kulich, M., Chudoba, J., Kosnar, K., Krajnik, T., Faigl, J., Preucil, L.: SyRoTek-
distance teaching of mobile robotics. IEEE Trans. Educ. 56(1), 18–23 (2012)
7. dos Santos Lopes, M.S., Gomes, I.P., Trindade, R.M., da Silva, A.F., Lima,
A.C.d.C.: Web environment for programming and control of a mobile robot in
a remote laboratory. IEEE Trans. Learn. Technol. 10(4), 526–531 (2016)

8. Di Giamberardino, P., Temperini, M.: Adaptive access to robotic learning expe-
riences in a remote laboratory setting. In: 2017 18th International Carpathian
Control Conference (ICCC), pp. 565–570. IEEE (2017)
9. Farias, G., Fabregas, E., Peralta, E., Vargas, H., Dormido-Canto, S., Dormido, S.:
Development of an easy-to-use multi-agent platform for teaching mobile robotics.
IEEE Access 7, 55885–55897 (2019)
10. Després, C., George, S.: Computer-supported distance learning-an example in edu-
cational robotics (1999)
11. Almeida, T.O., Netto, J.F.d.M., Rios, M.L.: Remote robotics laboratory as support
to teaching programming. In: 2017 IEEE Frontiers in Education Conference (FIE),
pp. 1–6. IEEE (2017)
12. Kilpatrick, W.H.: The project method: the use of the purposeful act in the educa-
tive process, no. 3, Teachers College, Columbia University (1926)
13. Vygotsky, L.S.: Mind in society: the development of higher psychological processes.
Harvard University Press (1980)
14. Negrini, L., Giang, C.: How do pupils perceive educational robotics as a tool to
improve their 21st century skills?. J. e-Learn. Knowl. Soc. 15(2), 77–87 (2019)
15. Rapanta, C., Botturi, L., Goodyear, P., Guàrdia, L., Koole, M.: Online univer-
sity teaching during and after the covid-19 crisis: refocusing teacher presence and
learning activity. Postdigit. Sci. Educ. 2(3), 923–945 (2020)
16. Bernard, R.M., Abrami, P.C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L.,
Wallet, P.A., Fiset, M., Huang, B.: How does distance education compare with
classroom instruction? A meta-analysis of the empirical literature. Rev. Educ. Res.
74(3), 379–439 (2004)
17. Dong, C., Cao, S., Li, H.: Young children’s online learning during covid-19 pan-
demic: Chinese parents’ beliefs and attitudes. Child. Youth Serv. Rev. 118, 105440
(2020)
18. El-Hamamsy, L., et al.: A computer science and robotics integration model for
primary school: evaluation of a large-scale in-service K-4 teacher-training program.
Educ. Inf. Technol. 26(3), 2445–2475 (2020)
19. Mehrotra, A., et al.: Introducing a paper-based programming language for com-
puting education in classrooms. In: ITiCSE 2020, pp. 180–186. ACM, June 2020
20. Chevalier, M., Giang, C., Piatti, A., Mondada, F.: Fostering computational think-
ing through educational robotics: a model for creative computational problem solv-
ing. Int. J. STEM Educ. 7(1), 1–18 (2020). https://doi.org/10.1186/s40594-020-
00238-z
21. Perez, S., et al.: Identifying productive inquiry in virtual labs using sequence min-
ing. In: International Conference on Artificial Intelligence in Education, pp. 287–
298. Springer (2017)
Workshops, Curricula and Related Aspects - Secondary Schools
Robotics Laboratory Within the Italian
School-Work Transition Program in High
Schools: A Case Study

Gemma C. Bettelani1(B), Chiara Gabellieri1, Riccardo Mengacci1, Federico Massa1,
Anna Mannucci2, and Lucia Pallottino1
1 Research Center E. Piaggio, Department of Information Engineering of University of Pisa,
Pisa, Italy
lucia.pallottino@unipi.it
2 Multi-Robot Planning and Control Lab, Örebro University, Örebro, Sweden

Abstract. This paper presents a robotics laboratory that originated from the collabora-
tion between a university and a high school within the Italian school-work tran-
sition program. The educational objective of the proposed lab is twofold: 1) to ease
the transfer of robotics researchers’ expertise into useful means for the students’
learning; 2) to teach by practice the multidisciplinarity of robotics. We exploited
the RoboCup Junior Race as a useful scenario to cover topics from 3D print-
ing for fast prototyping to low-level and high-level controller design. An ad-hoc
end-of-term student survey confirms the effectiveness of the approach. Finally,
the paper includes some considerations on how general problems in the robotics
and scientific community, such as gender issues and COVID-19 restrictions, can
impact educational robotics activities.

Keywords: Educational robots · Lego Mindstorms NXT · High school ·
RoboCup junior race · Italian work-school transition program

1 Introduction
The fast development of robotics is leading to an ever greater presence of
robots in our society. One of the main sectors in which robots are showing their poten-
tial outside of industry and research is education. As a matter of fact,
educational robotics programs in schools are spreading worldwide [1].
The use of robots as a teaching method has been suggested to develop and improve
not only students’ academic skills (e.g., mathematics and geometry, coding, physics,
etc.), but also to boost important soft skills [2], although further quantitative research
might be needed to confirm this [3]. In particular, educational robotics is expected to
help develop problem-solving, logical reasoning, creativity, linguistic skills, and social
skills such as teamwork and interaction with peers [4, 5].
Robotics has been widely deployed in high-school curricula as well as in secondary
and primary schools [6, 7]. One of the most common tools that have proven to be
effective for introducing robotics to children and young students is LEGO Mindstorms1.
1 https://www.lego.com/it-it/themes/mindstorms.


Indeed, the pedagogical use of LEGO goes in the direction of learning through play,
which has been shown to be a very effective approach to students’ learning [8].
As expected, while students are usually very enthusiastic about the use of robots at
school, teachers may feel anxious about it and need adequate training and, possibly,
technical support during the lessons [7, 9, 10].
Although many experiences have been reported, robotics courses held by univer-
sity robotics researchers are not common in most pre-university institutes. In this
work, we describe a collaboration between the “Research Center E. Piaggio” of the
University of Pisa and an Italian high school of the scientific and technological area
(“ITI Galileo Galilei”, Livorno, Italy). The collaboration was born in 2018 within the
national program for school-work transition promoted by the Italian Ministry of Edu-
cation (MIUR). The aim of the program is twofold: 1) putting into practice in realistic
working environments the knowledge acquired at school; 2) helping the students in
making a conscious decision on their future job or university career.
The objective of this paper is to share our experience, which we think could be taken
as a model by teachers in future robotics courses. Results of the students’ end-of-term
survey are reported and show positive feedback on the proposed approach. Finally, the
paper also aims at discussing gender issues and COVID-19 restriction problems in the
educational robotics field.

2 Preliminary Activities

Before starting the robotics lessons in class, the students attended a general introduction
about the activities of the “Research Center E. Piaggio” of the University of Pisa. In
particular, a Professor of the University and the Ph.D. students that participated in the
project (i.e., the authors) presented their research topics: humanoid robots, autonomous
cars, drones, etc. The students also had the opportunity to visit the laboratories of the
“Research Center E. Piaggio” and see real robots (e.g., KUKA arm [11], EGO robot
[12]). During this visit, they could get an insight into real-world robotics applications.
They also understood that robotics is not only limited to industrial robots but is a wide
field that includes, for example, the realization of prosthetic robotic hands (e.g., the
Pisa/IIT Softhand Pro [13]) and wearable haptics devices (e.g., the CUFF device [14]
and the Skin-stretch device [15]), which are developed to restore the sense of touch in
amputees.
These preliminary activities were of particular importance to introduce a large group
of students to robotics and to give them a general overview of the multidisciplinarity of
robotics, which includes mechanics, electronics, programming, automatic control and
so on. Moreover, they could see how working in robotics can be an actual profession
and had the chance to get a taste of what working in the field of robotics research means (e.g.,
working in teams and finding people with different backgrounds such as bio-medical,
mechanical, computer, automation, and even management engineers). Finally, the stu-
dents who were most amazed by these activities decided to take part in the extracurric-
ular robotics course that is part of the school-work transition program.

Fig. 1. Example of the race environment. On the left side: the black line that the robot had to
follow (choosing the direction signaled by the green markers in the case of multiple options). On
the right side: the rectangular “dangerous area” with the balls. The red cross indicates the other
possible corner where the triangle could be placed

3 The Robotics Classes


The robotics laboratory within the Italian school-work transition program was sched-
uled into five lessons of four hours each, for a total of twenty hours. The aim of
the lessons was to find strategies to complete a “rescue mission” performed by an
autonomous robot (Fig. 1 shows an example of the race circuit). The “rescue mission”
was part of the national program of the RoboCup Junior challenge2.

3.1 The “Rescue Mission” in a Nutshell

The goal of the mission was to get as many points as possible before the time ran out.
A mobile robot, positioned at the starting point, had to recognize and follow a twisty
path, created with a black line, which was painted on a floor with many irregularities (e.g.,
bumps, holes, obstacles). At the end of the black path, the robot found a silvery line that
marked the entrance to a rectangular area: the “dangerous area” with the “victims”,
which were represented by balls. In particular, the balls could be black (“dead victims”)
or silvery (“alive victims”). The robot had to pick up the balls and place them into a
triangular area, which was raised with respect to the floor. The triangle could
be positioned in one of the “dangerous area” corners that were in front of the robot
when it entered the area. The rescue of “alive victims” gave more points
than the rescue of “dead victims”. After rescuing all the “victims”, the robot had to find
a way to exit the “dangerous area”.

2 https://junior.robocup.org/.

3.2 Lesson Overview


Eleven students attended the course and they were divided into groups of two or three
students each. Each group had to build and program its own robot using the LEGO
Mindstorms NXT kit. During the course, by collaborating with the researchers and
using the tools acquired during the lessons, they had to find strategies to accom-
plish the tasks of the race. Then, at the end of the course, the students were asked to
fill in a questionnaire related to the quality and the usefulness of the course.
In the following sections, we present the tools that the researchers gave to the stu-
dents and some strategies that they found using them. The results of the questionnaire
are discussed in Sect. 4.

3.3 Proposed Tools


The students were already familiar with the ROBOTC programming language3 used to
program their robots. Thus, at the beginning of the first lesson, they were able to give
the robot’s wheels the basic commands to move it backward or forward. For this reason,
we decided to start the lessons with an overview of the sensors included in the LEGO
Mindstorms kit that could be useful for the race. In particular, we introduced the use
of the light sensor and the ultrasound sensor. During this lesson, we focused on the
importance of sensor calibration. In the following lesson, we decided to introduce the
students to the concept of closed-loop systems, comparing them to open-loop systems.
Then, we explained a more advanced architecture: the PID controller. This choice was
motivated by the fact that some students were curious why their robots were
sometimes unable to reach exactly the desired position. After that, we decided to explain
some basics of 3D mechanical design using the FreeCAD software4. This choice was
motivated by the fact that the race rules allowed adding 3D-printed
parts and by the fact that mechanical design is a very useful skill in robotics, related
to rapid prototyping. After the introduction to the aforementioned topics, we had a
brainstorming session with each group to investigate possible strategies to accomplish
the “rescue mission” tasks.
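As a rough illustration of the closed-loop ideas introduced in these lessons, the following Python sketch shows a generic discrete PID update. The gain values and the read_position()/set_motor() functions are hypothetical placeholders; they do not correspond to the actual ROBOTC code written by the students.

```python
# Minimal, illustrative discrete PID loop (gains and I/O functions are hypothetical).
import time

KP, KI, KD = 1.2, 0.05, 0.3   # illustrative gains, to be tuned on the real system
DT = 0.02                     # control period in seconds

def pid_step(error, state):
    """One PID update; state carries the integral and the previous error."""
    state["integral"] += error * DT
    derivative = (error - state["previous"]) / DT
    state["previous"] = error
    return KP * error + KI * state["integral"] + KD * derivative

def control_loop(read_position, set_motor, target, steps=500):
    """Repeatedly measure the error and command the motor with the PID output."""
    state = {"integral": 0.0, "previous": 0.0}
    for _ in range(steps):
        error = target - read_position()      # how far we are from the goal
        set_motor(pid_step(error, state))     # command proportional to the PID output
        time.sleep(DT)
```

In practice, the three gains have to be tuned experimentally on the real robot, which is exactly the kind of trial the students carried out with the calliper elevation described later.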

3.4 Solutions and Strategies


In this section we report the most interesting strategies found by the students’ groups while
brainstorming with the researchers.
– Black line following: to accomplish this task the students decided to use a light
sensor positioned in the frontal part of the robot, between the wheels, and facing
down. Thanks to the lesson on sensor calibration, they immediately under-
stood that the values given by the sensor were strongly dependent on the
ambient light conditions. For this reason, they decided to recompute the thresh-
old for the black color every time the light conditions changed, and they made a
note to do this during the race, before the beginning of the “rescue mission” (a sketch
of this calibration step is given after this list).
3 https://www.robotc.net/.
4 https://www.freecadweb.org/.

– Identification of the triangle position: the students decided to use the ultrasound
sensor (placed on the frontal side of the robot and facing ahead) to measure the robot’s
distance from the side walls of the “dangerous area”. In doing so, the robot was able
to compute the center of the rectangular area in the transversal direction and to go
there. After that, by measuring its distance from the frontal wall and from the corners
of the area, the robot was able to understand in which corner of the area the triangle
was (see Fig. 2 for a detailed explanation of all the steps).
– Pick up of the “victims”: some students decided to design a calliper with the FreeCAD
software (Fig. 3). They decided to control the calliper elevation by implement-
ing the PID controller that the researchers had explained during the related theoretical
lessons.
– Coverage of the rectangular dangerous zone: the students decided to make the
robot follow a serpentine path. If a light sensor, positioned at the base of the calliper, identified
a ball, the robot moved forward until it reached the wall, where the ball could be wedged into
the calliper and then brought to the triangle. It is important to note that when the
robot arrived at the triangle, the height of the calliper had to be controlled in order to
lift the ball into the triangle. For this task the students decided not to distinguish
between “alive” and “dead victims”, but only to detect whether the robot had picked up a
ball or not (Fig. 4).
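The calibration step referred to in the first strategy can be illustrated with a short sketch. The Python fragment below recomputes a black/white threshold from fresh readings; read_light() is a hypothetical stand-in for the actual sensor call, and the procedure is only a sketch of the idea, not the students’ ROBOTC program.

```python
# Illustrative recalibration of the black-line threshold (sensor API is hypothetical).

def calibrate_threshold(read_light, samples=20):
    """Sample the sensor over the white floor and over the black line,
    then place the threshold halfway between the two averages."""
    input("Place the sensor over the WHITE floor and press Enter")
    white = sum(read_light() for _ in range(samples)) / samples
    input("Place the sensor over the BLACK line and press Enter")
    black = sum(read_light() for _ in range(samples)) / samples
    return (white + black) / 2

def on_black(read_light, threshold):
    # Lower readings are assumed to correspond to darker surfaces.
    return read_light() < threshold
```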

For the development of these strategies, the students exploited multiple skills.
Indeed, while the first and the third proposed strategies are related to the development of
technical skills, the other two strategies are associated with problem solving.

Fig. 2. Triangle position identification. The solution proposed by the students is articulated as fol-
lows: a) After entering the rectangular area, the robot rotates counterclockwise
and measures its distance from the wall on its left. b) The robot measures its distance from the
wall on its right. c) The robot can now compute the midpoint of the area width and, once it is placed
there, it computes its distance from the frontal wall. d-e) Knowing the distances computed in panel
c), named x and y, the robot can compute the angle α. At this point, the robot turns counter-
clockwise and clockwise by α. The triangle is placed in the corner where the measured
robot-corner distance is smaller (dashed red line).
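The geometry summarized in Fig. 2 can be expressed in a few lines of code. The following Python fragment is an illustration of the students’ reasoning (not their actual program): it computes the turning angle α from the half-width x and the frontal distance y, and selects the corner whose measured distance is smaller.

```python
# Geometry behind Fig. 2 (illustrative; the distances would come from the ultrasound sensor).
import math

def corner_angle(x, y):
    """Angle (radians) between the robot's forward direction and a front corner,
    for a robot standing at the midpoint of the area width:
    x = half of the area width, y = distance to the frontal wall."""
    return math.atan2(x, y)

def triangle_corner(dist_left_corner, dist_right_corner):
    """After turning by +/- the corner angle, the corner whose measured
    distance is smaller is assumed to hold the raised triangle."""
    return "left" if dist_left_corner < dist_right_corner else "right"

# Example with made-up measurements (in cm):
x, y = 45.0, 90.0
alpha = math.degrees(corner_angle(x, y))   # about 26.6 degrees
free_corner = math.hypot(x, y)             # distance to an empty corner, about 100.6 cm
print(alpha, free_corner, triangle_corner(82.0, 100.5))
```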

Fig. 3. Mechanical parts designed and realized by the students. a) The calliper CAD model realized with
the FreeCAD software. b) Lateral view of the calliper mounted on a LEGO motor. c) Calliper
and motor mounted on the robot.

Fig. 4. Coverage strategy found by the students: when the robot identifies the position of the
triangle, it memorizes that position and moves there. From the triangle
position, the robot starts the serpentine and, when it recognizes a ball, it drags it to the wall, so that
the ball can be wedged into the calliper and brought to the triangle.

4 Discussions

The solutions proposed by the students were of course not optimal, but
the message that we tried to deliver was that in robotics there is often no unique solu-
tion. This is the reason why, in this field, it is important to work with others and share
opinions and doubts. We also tried to show the students that robotics is indeed mul-
tidisciplinary. For this reason, we introduced 3D mechanical design and we invited
Eng. Daniele Benedettelli, a freelance LEGO designer who presented to the students
LEGONARDO5, a LEGO portrait-drawing robot. During this lesson, the students also had the
opportunity to be introduced to the concept of Artificial Intelligence, which is becoming
closely intertwined with modern robotics.
Another important point that we would like to underline is that the students, dur-
ing the implementation of the race solutions, were able to apply and understand the
importance of the concepts explained during the lessons, such as the calibration of the
sensors, the concept of closed-loop control and the 3D mechanical design. We think
that it was of particular relevance to have robotics researchers explaining these ideas.
5 https://robotics.benedettelli.com/legonardo/.

Indeed, robotics researchers naturally speak using the right contextual vocabulary and
show great confidence in topics they deal with regularly. Because of this, it comes
quite naturally to us to teach these concepts, as we experience them daily. Teachers, on
the other hand, usually lack deep robotics knowledge and practice, and can thus benefit
from inputs coming from collaborations with universities. As mentioned before, this
project was an experience of work-school transition, so it was essential that the stu-
dents collaborated with experts in the robotic field. However, we would like to stress
the importance that the teachers’ knowledge of didactics and pedagogy would have in a
curricular course. For this reason, in that case, the teachers should hold the course, but
robotics professionals, such as robotics researchers, could provide substantial support
to both students and teachers.
Moreover, we also think that this project was a great opportunity for both
researchers and students. On the one hand, as said before, working on problem solving
with researchers was a good chance for the students to be introduced to the robotics
world and, more in general, to the research world. On the other hand, for us, this project
was not only very satisfying but it also gave us the opportunity to explain in a simple
way concepts that we take for granted. The students’ impressions of the course/project
are reflected in the results of the Likert scale questionnaire (Table 1) proposed
at the end of the course. In particular, the students identified the project as useful (Q1),
they considered the lessons well explained (Q2) and they found that the lesson topics
were sufficiently detailed (Q3). The students also considered it useful to work in
groups (Q4) and felt that the course should have been longer (Q5). This
last result is to be interpreted in light of the enthusiasm of the students. Indeed, some of
them left a comment suggesting that they would have willingly done a longer placement.
Finally, we would like to discuss some important problems related to robotics. The
first one is related to gender issues. The situation has improved considerably in recent
years, with more women in the scientific fields. However, the work that needs to be
done to highlight and resolve gender-related differences in STEM (Science, Technol-
ogy, Engineering, and Mathematics) is still significant [16, 17]. Indeed, it is necessary
that women feel as much a part of these fields as men do. Only male students participated in
the laboratory in 2018. The laboratory was open to volunteers since it represented one
possible choice among the opportunities offered within the school-work transition pro-
gram. We want to stress the importance of the preliminary activities that we carried out
(preliminary lessons and the visit to our laboratory) since they were aimed at a wider
audience (a large number of students participated, females included, not only the ones
involved in the laboratory activity). During the preliminary lessons and the visit to the
“Research Center E. Piaggio”, we explained that our course would be held by three
female researchers and two male researchers, and all the students could directly see
female researchers at work in a robotics laboratory. Encouragingly, in 2019 one female student partici-
pated in the course, and in 2020 two female students were enrolled, even though the course was
not held due to COVID-19 restrictions. It is important to underline that the high school
“ITI Galileo Galilei” is a school mainly attended by males, so having two girls enrolled
in the course can be considered a good result. We hope that we served as positive
role models to encourage female students as well to approach robotics, and that the greater
female participation in subsequent years is a consequence of this.

Table 1. Likert scale questions (from 1 to 7). 1 means that the students completely disagree with
the question and 7 means that the students completely agree with the question.

Questions                                                            Mean ± std
Q1  It was useful having taken part in this project                  6.6 ± 0.5
Q2  The lessons were well explained                                  6.7 ± 0.5
Q3  The topics I was most interested in were sufficiently detailed   6.4 ± 0.7
Q4  It was useful working in groups                                  6.2 ± 0.8
Q5  The entire course duration was appropriate                       4.4 ± 0.7

The last point that is important to underline in this context is that educational sci-
entific subjects have been strongly affected by COVID-19 restrictions. Indeed, organizing
a laboratory tour and teaching a course like the one we held is difficult to
manage under the pandemic restrictions. However, the presentation of the labora-
tory activities could be held online. During the pandemic, a virtual presentation of the labora-
tory activities of the Research Center E. Piaggio was tested for the Researchers’ Night
event and the experience was really immersive for the people who were at home (see the
video of the event at this link https://www.youtube.com/watch?v=9QvTPpAReUg).
Then, to address the problem of holding the robotics course, something could be achieved with
funds to buy a laptop and a LEGO Mindstorms kit for each student. The students could
not collaborate during the lessons as they would have done in class, but they could at
least try out their solutions on the robot. Moreover, it has also been suggested [18] that
simulators represent the most suitable solution to easily carry out robotics classes within
the restrictions imposed by the recent social distancing. Indeed, if the school does not have
enough funds to buy a large number of robots, each student could use the simulator
to test his/her strategies and then the teacher could try the solutions on the real robot
during a video lesson. Furthermore, the target simulator should be easy to use, accurate
enough for the students to test their solutions reliably, and moderately priced. Finally,
to solve the problem related to the printing of the 3D mechanical parts, the school could
rely on online platforms that offer 3D printing and shipping services (e.g., 3Dhubs6 or
the city’s Fab Labs).

Acknowledgments. We would like to thank Professor Cristina Tani and Professor Giorgio Meini
of the high school “ITI Galileo Galilei”, Livorno, Italy, who gave us the possibility to undertake
this great experience. We also thank Eng. Daniele Benedettelli, who took part in this course,
bringing his great experience with LEGO and his enthusiasm.

References
1. Miller, D.P., Nourbakhsh, I.: Robotics for education. In: Siciliano, B., Khatib, O. (eds.)
Springer Handbook of Robotics, pp. 2115–2134. Springer, Cham (2016). https://doi.org/10.
1007/978-3-319-32552-1 79

6 https://www.3dhubs.com/.

2. Toh, L.P.E., Causo, A., Tzuo, P.-W., Chen, I.-M., Yeo, S.H.: A review on the use of robots in
education and young children. J. Educ. Technol. Soc. 19(2), 148–163 (2016)
3. Benitti, F.B.V.: Exploring the educational potential of robotics in schools: a systematic
review. Comput. Educ. 58(3), 978–988 (2012)
4. Blanchard, S., Freiman, V., Lirrete-Pitre, N.: Strategies used by elementary schoolchildren
solving robotics-based complex tasks: Innovative potential of technology. Procedia-Soc.
Behav. Sci. 2(2), 2851–2857 (2010)
5. Souza, I.M.L., Andrade, W.L., Sampaio, L.M.R., Araujo, A.L.S.O.: A systematic review on
the use of LEGO robotics in education. In: 2018 IEEE Frontiers in Education Conference
(FIE), pp. 1–9. IEEE (2018)
6. Scaradozzi, D., Sorbi, L., Pedale, A., Valzano, M., Vergine, C.: Teaching robotics at the pri-
mary school: an innovative approach. Procedia - Soc. Behav. Sci. 174, 3838–3846 (2015).
International Conference on New Horizons in Education, INTE 2014, 25-27 June 2014,
Paris, France
7. Khanlari, A., Mansourkiaie, F.: Using robotics for stem education in primary/elementary
schools: Teachers’ perceptions. In: 2015 10th International Conference on Computer Science
& Education (ICCSE), pp. 3–7. IEEE (2015)
8. Atmatzidou, S., Markelis, I., Demetriadis, S.: The use of lego mindstorms in elementary and
secondary education: game as a way of triggering learning. In: International Conference of
Simulation, Modeling and Programming for Autonomous Robots (SIMPAR), Venice, Italy,
pp. 22–30. Citeseer (2008)
9. Chalmers, C.: Robotics and computational thinking in primary school. Int. J. Child-Comput.
Interact. 17, 93–100 (2018)
10. Alimisis, D.: Robotics in education & education in robotics: shifting focus from technology
to pedagogy. In: Proceedings of the 3rd International Conference on Robotics in Education,
pp. 7–14 (2012)
11. Bischoff, R., et al.: The kuka-dlr lightweight robot arm - a new reference platform for robotics
research and manufacturing. In: ISR 2010 (41st International Symposium on Robotics) and
ROBOTIK 2010 (6th German Conference on Robotics), pp. 1–8. VDE (2010)
12. Lentini, G., et al.: Alter-ego: a mobile robot with a functionally anthropomorphic upper body
designed for physical interaction. IEEE Robot. Autom. Mag. 26(4), 94–107 (2019)
13. Godfrey, S.B., et al.: The softhand pro: translation from robotic hand to prosthetic prototype.
In: Ibáñez, J., González-Vargas, J., Azorn, J., Akay, M., Pons, J. (eds.) Converging Clinical
and Engineering Research on Neurorehabilitation II, pp. 469–473. Springer, Cham (2017).
https://doi.org/10.1007/978-3-319-46669-9 78
14. Casini, S., Morvidoni, M., Bianchi, M., Catalano, M., Grioli, G., Bicchi, A.: Design and
realization of the cuff-clenching upper-limb force feedback wearable device for distributed
mechano-tactile stimulation of normal and tangential skin forces. In: 2015 IEEE/RSJ Inter-
national Conference on Intelligent Robots and Systems (IROS), pp. 1186–1193. IEEE (2015)
15. Colella, N., Bianchi, M., Grioli, G., Bicchi, A., Catalano, M.G.: A novel skin-stretch haptic
device for intuitive control of robotic prostheses and avatars. IEEE Robot. Autom. Lett. 4(2),
1572–1579 (2019)
16. Kuschel, K., Ettl, K., Dı́az-Garcı́a, C., Alsos, G.A.: Stemming the gender gap in stem
entrepreneurship-insights into women’s entrepreneurship in science, technology, engineer-
ing and mathematics. Int. Entrepreneurship Manage. J. 16(1), 1–15 (2020)
17. Casad, B.J., et al.: Gender inequality in academia: problems and solutions for women faculty
in stem. J. Neurosci. Res. 99(1), 13–23 (2021)
18. Bergeron, B.: Online stem education a potential boon for affordable at-home simulators.
SERVO Mag. 2 (2020)
How to Promote Learning and Creativity
Through Visual Cards and Robotics
at Summer Academic Project Ítaca

Martha-Ivón Cárdenas1, Jordi Campos2, and Eloi Puertas3(B)
1 Department of Computer Science, Universitat Politècnica de Catalunya,
Barcelona, Spain
mcardenas@cs.upc.edu
2 Department of STEAM, Escola Sadako, Barcelona, Spain
jcampos@escolasadako.com
3 Department Matemàtiques i Informàtica, Universitat de Barcelona,
Barcelona, Spain
epuertas@ub.edu

Abstract. This paper presents a robotics workshop that aims to encour-
age high-school students to enrol in university STEAM-related degrees. It
is included in a socio-educational program in which we have been involved
for several years. The workshop attendees become familiar with the con-
cept of robotics through a set of visual cards that we recently introduced,
called STEAMOLUS cards. These cards introduce real-world problems to
be solved while learning the basics of robotics. The results suggest these
cards contribute to developing students’ creativity and inspiration.

Keywords: Robotics · Education · Visual cards · STEAMOLUS · Open Roberta

1 Introduction
Nowadays, the use of robots is quite common in compulsory education, mainly
through non-formal educational activities. It has been shown that such activities
contribute to reducing the risk of school dropout. Also, robots have proven
useful in activities with students who have diverse special needs [1,2]. Taking
into consideration such benefits, the Campus Ítaca program1 decided to include
a robotics workshop, which we have tailored since its beginnings.
Campus Ítaca is a socio-educational program of the Universitat Autònoma
de Barcelona (UAB), managed by the Fundación Autónoma Solidaria (FAS)
and aimed at secondary school students. It consists of a stay on the UAB
Campus during the months of June and July, organized in two shifts of seven
days each. The students carry out a series of pedagogical and recreational activ-
ities whose objective is to encourage them to continue their education once the
1 Campus Ítaca. Universitat Autònoma de Barcelona (UAB). https://www.uab.cat/web/campus-itaca-1345780037226.html.

compulsory stage has finished. At the same time, the campus aims to be a space
for coexistence between students from diverse social environments.
Campus Ítaca was founded in 2004. It was inspired by the Summer Academy
project of the University of Strathclyde (Scotland) and its main objectives were
to reduce early school dropout and bring students closer to the university. The
students are between 14 and 15 years old, and the total number of participants over all
these years (2004–2020) is more than five thousand students from more than
70 schools in the Barcelona area.
In order to favor active participation, activities are presented in an appealing
way and set in real contexts the students can relate to. In each of these activities
the students need to apply research methodologies to confirm or reject their initial
hypotheses. During these activities the students are accompanied by a monitor
who ensures that the group works properly during the activity. At the same time,
the activities promote cooperative work, dialogue and reflection among them. Two
types of activities are carried out on campus: pedagogical activities and leisure-
sports activities. One of the main pedagogical activities in the Campus Ítaca is to
show students the research done on several topics at the university by offering about
fifteen different workshops, which are repeated in two shifts.
Therefore, the workshops are the activities with the most weight within the cam-
pus. All of them take place during the last three days of the campus; each workshop
has around 12 participants and is gender balanced. Students
are assigned to a single workshop and they cannot choose it. Each workshop takes ten
hours plus a final oral presentation at the Closing Ceremony, where students
show their results to family members, high-school and town council represen-
tatives and members of the University. The aim of these presentations is to
empower them, as they feel that what they have done is relevant, but also to
show families that their children could enroll at university.
Thanks to the students’ good reviews, our robotics workshop has been running since
the beginning of the program, fifteen years ago. To maintain this good perfor-
mance, we have to adapt to new frameworks and platforms [3]. Therefore, in the
last edition we designed new material to help students in their robotics tasks.
This material was developed as visual cards called STEAMOLUS2, and these cards are the
main contribution of this project. Also, in the last edition, we introduced a new
cloud platform for robotics (Open Roberta3 [4]). The degree of satisfaction of the
students in this edition has been very high; thus, the recently introduced
changes will be applied in further editions.
This paper is structured into five sections. Section one gives an introduction
to the educational program Campus Ítaca. Section two provides an overview
and outlines our robotics workshop detailing the main goals and describing each
session. Section three describes the methodology at a pedagogical level as well as
our novel STEAMOLUS cards and the visual platform Open Roberta. Section
four summarizes the evaluation reports of our workshop. Finally, section five
provides a conclusion with future directions.

2 STEAMOLUS: Make things to learn to code, learn to code to make things! https://steamolus.wordpress.com.
3 Open Roberta Lab. https://lab.open-roberta.org/.

2 The Robotics Workshop: “Robots for Everyone”


In the context of the Campus Ítaca, the workshop we perform is called “Robots
for everyone”. Our main goal in this project is to promote students’ interest
in Science, Technology, Engineering, Art and Mathematics (STEAM), looking
for strategies to provide fun, learning and creativity. Workshops in the program
are ten hours long, giving students the opportunity to solve challenges inspired
by small real-world problems. In our case, they get familiar with robotics using an
iterative guess-design-test technique [8].

2.1 Workshop Goals

The main objective of the workshop is that students become capable of solving
simple challenges using a customized robot by adapting its hardware (physical
part, sensors and actuators) and programming its software (behavioural part)
in the most efficient way.
Other objectives are:

– To improve students’ capacity for logical thinking through the design
and development of algorithms for the solution of concrete problems.
– To enhance students’ spatial vision by building a robot from
assembly instructions.
– To develop their creativity and engineering skills by building a robot from
scratch (in those challenges without assembly instructions).
– To explore how robots can acquire stimuli from the environment through
sensors and how these stimuli can be processed by a computer system to produce a
response while the robots act on the environment according to this response.
– To work in a team and be able to divide the tasks and find the best solutions
to specific problems.
– To understand the importance of intelligent computer systems in today’s
society and how they may affect us in the future.

2.2 Workshop Description

In our workshop, intended to foster exchange among students and spark interest in robotics,
participants are grouped in teams of 3 or 4. Each group creates its
own robot using Lego [9,10] bricks and, in order to complete the proposed chal-
lenges, programs the robot’s behaviour according to the methodology described
below in Sect. 3. The workshop is divided into five sessions of two hours each.
During the first two sessions, the hardware and the software of the robot are
introduced. This way, the students get some basics to solve the following tasks.
In the next two sessions, participants have to deal with two well-known chal-
lenges in the world of robotics: line-tracking and sumo wrestling. Each session
ends with a small competition between teams, to see which team has made the
most efficient robot for each challenge.

Finally, in the last session, the students prepare and rehearse the presentation
for the Campus Ítaca closing ceremony. Before leaving, participants disassemble
all the robots and sort the pieces inside boxes for the next edition.
The detailed description of these sessions for the 2019 edition can be found
below:
1st Session: Introduction and building the first robot.
First of all, an overview of the state of the art in robotics and intelligent
systems is presented. Then, teams of four members are created. Each group
assembles their own basic robot depending on their abilities (those groups with
less technical skills are provided with assembly instructions while those groups
with more skills have to create their own robot design). It is important to encour-
age the groups to distribute the assembly task amongst the different members:
one part of the group looks for the pieces while the other part assembles them.
Since the beginning of the campus, the platform we have used has been Lego Mind-
storms (first RCX and now NXT)4 [11], since it is the material provided by the
FAS foundation.
2nd Session: Introduction to visual programming.
We have used different visual programming languages over the years with
considerable success: LabView for RCX, NXT-G and Open Roberta. They are
very intuitive and easy to use. Hence, students grasp the fundamentals of
programming very quickly and can start to think about their own projects
very soon. In this session we teach the basic blocks: flow control, conditionals
and loops. The students also learn how to command the motors and how to
receive and process data from the different available sensors. During the session,
students follow our specifically designed guides called STEAMOLUS cards (see
Subsect. 3.2), where basic tasks such as moving the robot or reacting to sensor
inputs are explained. Some of these tasks imply adding new sensors to the
basic robot. The programming task is distributed among the team by applying pair
programming [12]. In our case, one student takes control of the computer
while another one tells him/her which blocks are needed and how to connect
them. The rest of the team incrementally tests the robot once it has been pro-
grammed, observing whether it really does what it is supposed to do. In case of a robot
malfunction, students have to think about how to modify the program until it works
correctly.
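To give a concrete flavour of the kind of first program taught in this session, the sketch below renders a typical "drive until an obstacle is detected" behaviour in Python. In the workshop this is built from visual blocks, so the function names used here are hypothetical placeholders.

```python
# Illustrative example of a first sensor-driven program (hardware API is hypothetical).

def drive_until_obstacle(read_ultrasound, set_motors, stop, limit_cm=20):
    """Drive forward and stop when an obstacle is closer than limit_cm."""
    while read_ultrasound() > limit_cm:   # loop + conditional: the basic blocks
        set_motors(50, 50)                # command both motors forward
    stop()                                # obstacle detected: stop the robot
```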
3rd Session: First Challenge: waiter bot line-follower.
The first challenge is to create a robot that follows a black line using two
light sensors without leaving the line. The students have to adapt the body of
the robot to perform this task efficiently with the given sensors and then deduce
and implement the robot’s behaviour. The robot is tested in a scenario with
two similar circuits (two separate lines) so teams can compete in pairs and see
who has come up with the most efficient overall solution (the fastest without leaving
the line). In this session the team is divided into two subgroups. Two students
program the algorithm, following a pair programming approach. The rest adapts
4 Lego Mindstorms. http://mindstorms.lego.com/en-us/Default.aspx.

and improves the body of the robot to make it faster, doing tests to ensure that
the robot does not step off the line.
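The line-following behaviour described above follows the classic two-sensor rule. The sketch below is a simplified Python rendering of that logic, with hypothetical sensor and motor functions standing in for the visual blocks actually used by the students; in practice the power values have to be tuned on the real robot and track.

```python
# Simplified two-sensor line-follower logic (sensor/motor API is hypothetical).

BASE_SPEED = 40   # illustrative motor power
TURN_SPEED = 15

def follow_line(left_on_black, right_on_black, set_motors):
    """Steer towards the side whose sensor sees the black line."""
    while True:
        left, right = left_on_black(), right_on_black()
        if left and not right:            # line drifting to the left: turn left
            set_motors(TURN_SPEED, BASE_SPEED)
        elif right and not left:          # line drifting to the right: turn right
            set_motors(BASE_SPEED, TURN_SPEED)
        else:                             # line between the sensors (or a crossing): go straight
            set_motors(BASE_SPEED, BASE_SPEED)
```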
4th Session: Second Challenge: Sumo players.
The second challenge is to create a robot that can stay inside a circle bounded
by a black line, while being able to locate an opposing robot and push it out of the
circle. In this case, the students have total freedom to create the body of the
robot, as well as to add new sensors or motors to the robot in order to make
it more robust. At the end of the session, a competition is held between the
different teams to see which team has made the most robust and efficient sumo
fighter robot. In this session the team is also divided into two subgroups. Two
students design and program the new algorithm, following the pair programming
approach. The rest adapts and improves the body of the robot to make it heavier,
doing tests to ensure that the robot does not leave the circle.
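Similarly, a minimal sketch of the sumo behaviour described above might look as follows; again, all hardware calls and threshold values are hypothetical placeholders rather than the students’ actual programs.

```python
# Illustrative sumo-robot decision step (all hardware calls are hypothetical placeholders).

EDGE_THRESHOLD = 30      # light reading below this is assumed to mean the black boundary line
OPPONENT_RANGE = 50      # ultrasound distance (cm) below which we assume an opponent is ahead

def sumo_step(read_light, read_distance, drive, turn, back_up):
    if read_light() < EDGE_THRESHOLD:
        back_up()                 # boundary detected: retreat into the circle
        turn()                    # and change direction
    elif read_distance() < OPPONENT_RANGE:
        drive(fast=True)          # opponent in sight: push at full power
    else:
        turn()                    # nothing detected: rotate to search for the opponent
```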
5th Session: Preparing the presentation
The last session is dedicated to preparing the presentation for the closing
ceremony. Students have to discuss among themselves what aspects of the project
they want to include in the presentation. The table of contents of the presentation
is agreed on among the students and each section is assigned to a group of
students to develop. The best photos and videos of the workshop are selected
and edited. Then, a summary video of at most two minutes is prepared for
each challenge with the results of each team. Once the presentation has been
designed, the students have time to rehearse it before the closing ceremony.
Before leaving, students disassemble the robots and store the pieces in boxes for
the next round of students.
Figure 1 shows some stages of the students’ learning process, as they design
their robots, agreeing on a final design and taking into account size
and shape depending on the task to be performed. The left picture
shows a STEAMOLUS card. The top-right picture depicts how a working team
of students build a robot. The center-right picture shows two teams perform-
ing a sumo challenge. Finally, the bottom-right picture shows the students and
instructors of the workshop.

3 Theoretical Framework and Methodology


Previous studies that have focused on educational robotics state that robots
and mobile robotics are an educational tool and a complete learning solution
that brings technology to secondary school students [13,14]. In that sense, Lego
Mindstorms allows students to discover coding through simple and intuitive
software while they control the robot’s input and output devices. It gives atten-
tion and support to the constructivist learning of our students in a society that
is evolving with new technologies and methodologies that identify three main
skill sets: thinking skills (observation, estimation and manipulation), science pro-
cess skills/problem-solving approaches, and social interaction/teamwork skills
[6,15].

Fig. 1. Robotics workshop.

Regarding the implementation of educational robotics in our workshop, we point
out that we want to consolidate STEAM areas, improve academic and pro-
fessional orientation with a vocation for success, optimise the transversality
of content and improve the treatment of Special Educational Needs (SEN)
diversity.
We focus on the last edition of the workshop, which was divided into two shifts of fifteen
students, each distributed into four heterogeneous groups of three or four students. Three
instructors were involved in training the students and one monitor was responsible
for watching and motivating the group.
In this edition two novelties were introduced: our newly designed visual cards,
STEAMOLUS, and the open platform for programming the robots, Open
Roberta. Before that, we had used both text-based and blog-based materials5 and, for
coding the robots, the proprietary Lego visual block programming
software.

3.1 Pedagogical Issues


Our robotics workshop obviously provides a hands-on STEAM experience to
the students. Moreover, it is also a consolidated methodology based on didactic
skills such as:
5 https://edulogix.wordpress.com/.

– Linguistic, audiovisual and communicative skill.
– Mathematics skill.
– Information processing and digital skill.
– Learning to learn (especially from mistakes).
– Autonomy and personal initiative.

These elements are smoothly integrated in our workshop thanks to the applica-
tion of different artifacts: the STEAMOLUS visual cards, the Open Roberta visual
programming environment and the well-known Lego Mindstorms NXT.

3.2 STEAMOLUS: A “Maker” Approach to Work on STEAM Through Creativity

STEAMOLUS, pronounced stimulus, is a simple and creative way of learning by
doing in a STEAM context. Its recipe is based on different movements: Maker
[16], DIY [20] and Tinkering6. It mainly consists of a series of cards designed
to inspire rather than giving highly detailed instructions (see Figs. 2 and 3).
These cards are: (1) platform-independent, since they show a generic robot, (2)
cross-language as they use visual language instead of text, (3) multi-age thanks
to their visual style and (4) creativity-fostering because they are not purely
instructional. The cards are visual proposals that indicate what to do but not
how to do it, in order to promote creativity and interaction between students.
This way, the students end up finding the solution after having tried out different
approaches.
This alternative environment based on visual reasoning is shown in two activ-
ities explained below: the waiter bot and the sumo players. See both cards in
Figs. 2 and 3 respectively, and the following examples of teacher tips for students:
STEAMOLUS 01: Waiter bot card.

1. Motivating statements:
Do you know that there are already factories and shops where robots move
around following marks on the floor?
Or that there are restaurants where waiter robots follow floor marks?
Do you dare to make a robot capable of following some marks on the floor to
avoid getting lost?
2. Encouraging advice:
Find out the way robot waiters have to go.
You can try to make your waiter robot from these visual clues.
Think about the challenge step by step and not all at once.

6 The Tinkering Studio. https://www.exploratorium.edu/tinkering.

STEAMOLUS 02: Sumo players card.


1. Motivating statements:
Oh, you have such a good time when you play sports with your friends? Well, robots have a lot of fun, too.
But there are robots that prefer sports with even more physical contact, as in the case of sumo.
Do you dare to make your own sumo wrestler and compete at the next robotic Olympics?
2. Encouraging advice:
Make sure you have a base construction with a light sensor looking at the floor and that you have practiced previously with the programming environment.
Adapt the robot's body (hardware) to perceive your opponents.
Add one ultrasound sensor to your robot.
Can you make your robot as strong as a rock and as agile as a hare?
Adapt the robot software to stay within the combat circle.

Fig. 2. Waiter bot card. Fig. 3. Sumo players card.
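Purely as an illustration of where such open-ended clues can lead (the cards themselves deliberately avoid this level of detail, and the workshop robots were programmed with Lego blocks and, later, NEPO), one possible sumo behaviour can be sketched in Arduino-style C++. All pin numbers and thresholds below are assumptions made only for this sketch, not the workshop code.

// Illustrative sumo behaviour: stay inside the ring, find and push the opponent.
const int LIGHT_PIN  = A0;   // light sensor looking at the floor (assumed pin)
const int TRIG_PIN   = 7;    // ultrasonic sensor trigger (assumed pin)
const int ECHO_PIN   = 8;    // ultrasonic sensor echo (assumed pin)
const int LEFT_PWM   = 5;    // left motor driver input (assumed pin)
const int RIGHT_PWM  = 6;    // right motor driver input (assumed pin)
const int EDGE_THRESHOLD  = 600;  // readings above this are treated as the white border
const long ATTACK_DISTANCE = 40;  // centimetres

long distanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000);  // echo time, 30 ms timeout
  return duration / 58;                            // rough conversion to centimetres
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(LEFT_PWM, OUTPUT);
  pinMode(RIGHT_PWM, OUTPUT);
}

void loop() {
  long d = distanceCm();
  if (analogRead(LIGHT_PIN) > EDGE_THRESHOLD) {
    // White ring border detected: pivot away to stay inside the combat circle.
    analogWrite(LEFT_PWM, 0);
    analogWrite(RIGHT_PWM, 180);
    delay(400);
  } else if (d > 0 && d < ATTACK_DISTANCE) {
    // Opponent detected in front: push forward at full speed.
    analogWrite(LEFT_PWM, 255);
    analogWrite(RIGHT_PWM, 255);
  } else {
    // Nothing seen: rotate slowly to search for the opponent.
    analogWrite(LEFT_PWM, 150);
    analogWrite(RIGHT_PWM, 0);
  }
}

A team that arrives at a structure like this has, in effect, rediscovered the sense-think-act loop the card is only hinting at.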

3.3 Open Roberta Lab


Open Roberta Lab [17] is a visual online programming environment. Since it is free to use and cloud-based, it enables students to start coding without long-winded system installations, setups or additional technology. Its visual programming language is called NEPO (New Easy Programming Online) and its code instructions are inspired by puzzle pieces (like the Scratch [18] visual programming language). Students just need to register on the platform and start programming in the NEPO language. Robots can be paired with the platform using a Wi-Fi, USB or Bluetooth connection. Once the pairing is done, programs can be executed on the real robot to test its behaviours. Open Roberta supports a large number of robots from several manufacturers. However, in case a student does not have any hardware available, the platform offers a basic simulation that includes sensor/obstacle emulation. Moreover, the simulator can be used to test programs before downloading them to the real robots; it is even possible to make changes while the simulator runs.

Fig. 4. Open Roberta environment.

Figure 4 shows the coding of a line follower and its corresponding simulation in Open Roberta Lab.
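NEPO programs are assembled from graphical blocks and have no canonical textual form. Purely to convey the logic such a line follower implements, and not as code from the workshop, a rough Arduino-style C++ analogue with assumed pins and thresholds could look like this:

// Illustrative two-sensor line follower in Arduino-style C++.
// The actual workshop program is built from NEPO blocks; all pins and
// thresholds here are assumptions made only for this sketch.
const int LEFT_SENSOR  = A0;   // reflectance sensor over the left edge of the line
const int RIGHT_SENSOR = A1;   // reflectance sensor over the right edge of the line
const int LEFT_MOTOR   = 5;    // PWM pin driving the left motor
const int RIGHT_MOTOR  = 6;    // PWM pin driving the right motor
const int DARK = 500;          // readings above this value count as "on the line"

void setup() {
  pinMode(LEFT_MOTOR, OUTPUT);
  pinMode(RIGHT_MOTOR, OUTPUT);
}

void loop() {
  bool leftOnLine  = analogRead(LEFT_SENSOR)  > DARK;
  bool rightOnLine = analogRead(RIGHT_SENSOR) > DARK;

  if (leftOnLine && rightOnLine) {        // centred on the line: go straight
    analogWrite(LEFT_MOTOR, 200);
    analogWrite(RIGHT_MOTOR, 200);
  } else if (leftOnLine) {                // drifting right: steer left
    analogWrite(LEFT_MOTOR, 80);
    analogWrite(RIGHT_MOTOR, 200);
  } else if (rightOnLine) {               // drifting left: steer right
    analogWrite(LEFT_MOTOR, 200);
    analogWrite(RIGHT_MOTOR, 80);
  } else {                                // line lost: stop
    analogWrite(LEFT_MOTOR, 0);
    analogWrite(RIGHT_MOTOR, 0);
  }
}

The same behaviour can first be tried in the Open Roberta simulator and only then transferred to the physical robot.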

4 Results

At the end of each Campus Ítaca edition, students, monitors and instructors answer several satisfaction surveys. The evaluation reports are elaborated by the Campus Ítaca organizers, helping instructors to find the strengths and weaknesses of their workshops. Note that the surveys are designed by the organization and instructors cannot modify their questions.
In the last edition of our workshop, students highlighted that what they liked most was being able to contribute throughout the creation of the robot. They also enjoyed programming, building, and then dismantling the robots. In addition, they emphasized that they had learned more about programming and robotics. Figure 5 shows the answers of the instructor (red) and the students (blue) satisfaction evaluations. Students are asked about eight different indicators concerning their experience in the workshop, while instructors are asked about only four.

Fig. 5. Robotics workshop last year edition: instructor and student satisfaction evaluation.

Fig. 6. Comparison of the last editions' satisfaction surveys.

Finally, the monitors stated that the workshop gave students the opportunity to discover the world of robotics and programming through the development of a robot, and that the activity was very dynamic. On the other hand, the activity turned out to be complicated on some occasions, causing some students to become demotivated when the challenges did not work out.
On average, the workshop mark is 8.2 out of 10, which confirms the success of the activity.

Figure 6 shows the comparison between the last two editions. The respondents are all the students of both shifts who attended the workshop, that is, around 24 students per year. Although the results are broadly similar, the degree of participation, the acquisition of new learning and the interest are key indicators suggesting that this methodology will keep improving.

5 Conclusions
The last edition of our robotics workshop has been one of the most successful and best evaluated workshops compared to the fifteen previous editions. Only the best evaluated workshops are asked to continue in the next editions, and robotics is one of the few that has existed since the very first edition. This means that our methodology keeps the interest and passion for robotics alive, even though robotics is better known nowadays than fifteen years ago. From the observations gathered in our workshop, we have seen that motivational dynamics induce curiosity, which sparks interest in novel and/or surprising activities [19]. In that sense, survey results from all previous editions showed that the pedagogical principles applied in our workshop had positive effects, not only on learning robot programming but also on building strong motivation in the STEAM area.
Introducing the STEAMOLUS visual cards encouraged students to solve the tasks even when they had no previous experience in the field. Incorporating this new kind of material helps us adapt to new generations and connect better with the way they understand and interact with new technologies. Furthermore, an open cloud-based robotics platform, Open Roberta in our case, gave students a way to take their own results home or to school.
In future editions, we want to further promote the context provided by the STEAMOLUS challenges in order to engage a larger number of students. Moreover, we are planning to move to a newer hardware platform as soon as the organizers have enough funding.

Acknowledgements. This work has been partially supported by the Spanish project PID2019-105093GB-I00 (MINECO/FEDER, UE) and CERCA Programme/Generalitat de Catalunya.

References
1. Daniela, L., Lytras, M.D.: Educational robotics for inclusive education. Tech. Know
Learn. 24, 219–225 (2019). https://doi.org/10.1007/s10758-018-9397-5
2. Daniela, L., Strods, R.: The role of robotics in promoting the learning motivation
to decrease the early school leaving risks. In: ROBOESL Conference Proceedings,
Athens (2016)
3. Vandevelde, C., Saldien, J., Ciocci, M.C., Vanderborght, B.: Overview of technolo-
gies for building robots in the classroom. In: Proceedings of International Confer-
ence on Robotics in Education, pp. 122–130 (2013)
4. Ketterl, M., Jost, B., Leimbach, T., Budde, R.: Open Roberta - a web based approach to visually program real educational robots. LOM. Online J. 8(14), 22 (2015). https://doi.org/10.7146/lom.v8i14.22183

5. Anwar, S., Bascou, N.A., Menekse, M., Kardgar, A.: A systematic review of studies
on educational robotics. J. Pre-Coll. Eng. Educ. Res. (J-PEER): 9(2), Article 2
(2019). https://doi.org/10.7771/2157-9288.1223
6. Benitti, F.B.V.: Exploring the educational potential of robotics in schools: a sys-
tematic review. Comput. Educ. 58(3), 978–988 (2012)
7. Eguchi, A.: Educational robotics for promoting 21st century skills. J. Autom. Mob. Robot. Intell. Syst. 8(1), 5–11 (2014)
8. Miller, G., Church, R., Trexler, M.: Teaching diverse learners using robotics. In:
Druin, A., Hendler, J. (eds.) Robots for kids: Exploring New Technologies for
Learning, pp. 165–192. Morgan Kaufmann, San Francisco (2000)
9. Afari, E., Khine, M.S.: Robotics as an educational tool: impact of lego mindstorms.
Int. J. Inf. Educ. Technol. 7(6), 437–442, June 2017
10. Ferrer, G.J.: Using Lego mindstorms NXT in the classroom. J. Comput. Sci. Coll.
23, 153–153 (2008)
11. Souza, I., Andrade, W., Sampaio, L., Araújo, A.L., Araujo, S.: A systematic review
on the use of lego robotics in education. In: Proceedings - Frontiers in Education
Conference, FIE, October 2018
12. Hanks, B., Fitzgerald, S., Mccauley, R., Murphy, L., Zander, C.: Pair programming
in education: a literature review. Comput. Sci. Educ. 21, 135–173 (2011)
13. Kumar, D., Meeden, L.: A robot laboratory for teaching artificial intelligence. In: Proceedings of the Twenty-Ninth SIGCSE Technical Symposium on Computer Science Education (SIGCSE-98). ACM Press (1998)
14. Carbonaro, M., Rex, M., et al.: Using LEGO robotics in a project-based learning environment. Interact. Multi. Electron. J. Comput. Enhanc. Learn. 6(1), 55–70 (2004)
15. Alimisis, D., Kynigos, C.: Constructionism and robotics in education. In: Teacher Education on Robotics-Enhanced Constructivist Pedagogical Methods. School of Pedagogical and Technological Education (ASPETE), Athens, Greece (2009)
16. Dougherty, D.: The maker movement. Innovations 7(3), 11–14 (2012). https://www.mitpressjournals.org/doi/pdf/10.1162/INOV_a_00135
17. Jost, B., Ketterl, M., Budde, R., Leimbach, T.: Graphical programming environ-
ments for educational robots: open roberta - yet another one? In: IEEE Interna-
tional Symposium on Multimedia, pp. 381–386 (2014). https://doi.org/10.1109/
ISM.2014.24
18. Maloney, J., Resnick, M., Silverman, B., Eastmond, E.: The scratch programming
language and environment. TOCE 10(4), 1–15 (2010)
19. Tyng, C.M., Amin, H.U., Saad, N.M.N., Malik, A.S.: The influences of emotion
on learning and memory. Front Psychol. 8, 1454 (2017). https://doi.org/10.3389/
fpsyg.2017.01454
20. Gelber, S.M.: Do-it-yourself: constructing, repairing and maintaining domestic masculinity. American Quarterly (1997). https://doi.org/10.1353/aq.1997.0007
An Arduino-Based Robot with a Sensor
as an Educational Project

Milan Novák(B) , Jiří Pech, and Jana Kalová

Faculty of Science, University of South Bohemia, České Budějovice, Czech Republic


{novis,pechj,jkalova}@prf.jcu.cz

Abstract. This paper presents a demonstration of a project for teaching robotics


in secondary schools with regard to the emerging concept of computational thinking, which is supported by the PRIM project (Support for the
Development of Computational Thinking). The PRIM project aims to support a
change in the orientation of the school subject of informatics from user control of
ICT towards the basics of informatics as a field. In the light of the ongoing Indus-
try 4.0 revolution, the rapidly growing need for both IT professionals and general
education in the field of informatics is perceived. The project responds to similar
activities in several developed countries, where computational thinking comes to
the forefront of the attention of schools and creators of the national curriculum.
As part of the project, a set of textbooks covering the teaching of programming
across all levels of educational institutions in the Czech Republic was created.
Within this concept, textbooks for teaching the basics of robotics were also created, using the Micro:bit and Arduino platforms. The paper presents a sample of projects that try to meet the emerging concept and follow the tested structure of the teaching materials.

Keywords: Teaching robotics · STEAM (Science, Technology, Engineering, Art


and Mathematics) · ADDIE (Analysis, Design, Development, Implementation,
Evaluation) · Arduino

1 Introduction
Robotics has become commonplace in our environment and people have become accus-
tomed to possible interaction with robots in everyday life. Robotic tools have been used
successfully for teaching for several decades. The main advantage of teaching robotics
is that it is an interdisciplinary field that includes mechanics, electronics and program-
ming. Students improve in all these subjects simultaneously and, at the same time, have fun designing robots [1]. An important prerequisite for teaching robotics is that students can work on their own technical equipment, individually or in small groups. Each student should be able to solve the teaching tasks with a given platform without needing knowledge of the platform's deeper working principles, and the platform should be flexible and extensible [2]. For the sake of clarity, it is also better to use real
robots instead of simulated environments [3].
Robots are physical three-dimensional objects that can move in space and imitate
human or animal behavior. From the point of view of education, it has been observed that the use of robots in teaching offers the following advantages:

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022


M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 64–71, 2022.
https://doi.org/10.1007/978-3-030-82544-7_7

• When young people work with concrete, physical objects rather than only with abstractions and patterns, they learn faster and more easily. It can therefore be assumed that this is also advantageous when they devote themselves to computer programming.
• Robots also evoke fascination and are motivating during the learning process. This
often leads to thinking in context.

In addition to the use of robots in schools, the study of the science of robotics for
children is a rich experience. It is very important to realize that we should not only talk about technical aspects, but also that robotics reflects the interdisciplinary nature of this science. This is evident from the fact that the ability to assemble and program a
robotic device requires several skills and competencies based on different disciplines.
Combining studies related to the field of robotics has many advantages. Most visible is
the inclusion of the development of technical skills related to the world of information
technology and the development of computational thinking. This educational approach
to STEAM (Science, Technology, Engineering, Arts and Mathematics), where some
creativity is added through the arts, allows robotics to be used in school as a practical
activity to encourage students to learn and apply theoretical knowledge in science, tech-
nology, engineering, art (creativity) and mathematics. Combinations of robotics using
STEAM and DIY (Do It Yourself) can be very effective and help students improve basic
skills such as logic and communication.
This approach can be better specified by introducing the term “computational thinking”. This concept can be described as a problem-solving approach [4]. Computational thinking is not a single skill; rather, it comprises the concepts, applications, tools and thinking strategies that are used to solve problems. In general, four main aspects of computational
thinking can be defined [5]:

• Decomposition: division of the problem into smaller parts.


• Pattern recognition: finding similarities and differences between different parts to
make predictions.
• Abstraction: the ability to find general principles between parts and patterns of
problems.
• Algorithm design: developing step-by-step instructions that lead to solving various problems.

Technology is constantly changing our society, and students must learn to think critically as well as to master and gain their own digital experience. This leads us to strive for students to be not just consumers of digital technologies but also their creators. Teaching young people computational thinking allows them to understand
how digital technologies work, so it is important to ensure that they are directly involved
in understanding the principles of digital technology.
This approach was also applied in the PRIM project [8], which aims to support a change in the orientation of the school subject of informatics from user control of ICT towards the basics of informatics as a field. The resulting robotics textbook [9] uses
the basic concepts of STEM. It aims not only to provide materials to students, but above
all to help teachers understand the new approach to teaching programming.

2 Teaching Materials
The textbook consists of several topics, which are approached using a physical computing paradigm. These topics should teach students basic programming techniques. All topics are developed on the open Arduino platform, which is widely available and supports not only the programming itself, but also creativity and invention in solving individual tasks.
The creation of teaching materials was based on the established ADDIE model. This
model is part of ISD - Instructional System Design and provides a general framework
for the system design of courses or educational processes [5]. Its five basic components
helped us to clarify and choose a strategy for creating teaching materials (see Fig. 1).

Fig. 1. The ADDIE model (Analysis, Design, Development, Implementation, Evaluation)

The teaching materials themselves were divided into separate topics, each processed according to the model, with structure and content designed to meet the requirements of STEM as much as possible.
A model has been created that primarily considers the teacher and the student and emphasizes the fulfillment of the didactic principles of the teaching process, such as sequence, consistency, permanence, clarity and the connection of theory with practice. The structure of the partial teaching materials for each topic is composed of three basic elements (see Fig. 2). The so-called lesson guide, a methodological sheet, is intended for teachers. It contains a time division of the lesson with recommendations for leading the lesson. The structure of the lesson guide is mirrored by a worksheet, which is intended for students. Both materials contain the developed program code and separate tasks. Both documents are supplemented by a Detailed Guide to Theory section, which contains the program codes and electronic diagrams described in detail with an explanation of the physical phenomena. It forms the expository component of the teaching material with all the details.

Fig. 2. Structure of teaching materials: the lesson guide, the worksheets and the theory guide, built around the basic wiring diagram, the basic program code, the developed program and the individual tasks

3 Separate Project of Robotic Vehicle


This project consists of sub-tasks and sub-projects, which are divided into several separate chapters. In the end, these are combined into one large project: a robotic vehicle with an obstacle sensor. The sub-projects involve the connection of DC motors and of an ultrasonic sensor. The individual lessons build on each other and use the knowledge of the previous lesson (see Fig. 3).

3.1 Sub-project - Fan with Speed Control


In this lesson, students focus on DC motor control. They use the knowledge gained from the previous lesson, especially regarding analog signal processing and the mapping of values from analog devices. The partial project for this lesson is a fan whose speed is controlled by a potentiometer.
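A minimal sketch of the kind of program this lesson targets is shown below. It is not the textbook's reference solution; the pin numbers are assumptions chosen only for illustration, and the motor is assumed to be switched through a transistor on a PWM pin.

// Fan with speed control: read a potentiometer, map the value to a PWM duty
// cycle and drive a DC motor through a transistor. Pins are assumptions.
const int POT_PIN   = A0;  // potentiometer wiper
const int MOTOR_PIN = 9;   // PWM pin connected to the transistor driving the motor

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
  Serial.begin(9600);                      // optional: monitor the mapped speed
}

void loop() {
  int raw   = analogRead(POT_PIN);         // 0..1023 from the analog input
  int speed = map(raw, 0, 1023, 0, 255);   // map to the 0..255 PWM range
  analogWrite(MOTOR_PIN, speed);           // set the fan speed
  Serial.println(speed);
  delay(50);
}

The sketch reuses exactly the analog reading and mapping practised in the previous lesson, which is why the lessons are sequenced this way.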

3.2 Subproject - Distance Measurement

This lesson is intended for mastering work with an ultrasonic sensor. Students learn to include an external library for the sensor in their program code, and with this component they can solve further tasks involving various calculations, the processing of multiple measurements, etc. The partial project for this lesson is distance measurement.
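The lesson relies on an external sensor library, which is not reproduced here. As a library-free illustration of the same measurement, the following sketch reads a common HC-SR04-style ultrasonic sensor with the Arduino core function pulseIn; the pin numbers are assumptions.

// Distance measurement with an HC-SR04-style ultrasonic sensor (no external
// library; the textbook lesson uses one, but it is not reproduced here).
const int TRIG_PIN = 7;   // assumed trigger pin
const int ECHO_PIN = 8;   // assumed echo pin

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  Serial.begin(9600);
}

void loop() {
  // Send a 10 microsecond trigger pulse.
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  // Measure the echo time (timeout after 30 ms) and convert it to centimetres;
  // sound takes roughly 58 microseconds per centimetre there and back.
  long duration   = pulseIn(ECHO_PIN, HIGH, 30000);
  long distanceCm = duration / 58;

  Serial.println(distanceCm);
  delay(200);
}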

Fig. 3. Sequence of lessons. Working with analog input and mapping values, using a DC motor, potentiometer and transistor, leads to the fan project, in which students learn to control the motor through a potentiometer and a transistor while reusing the analog reading and mapping from the previous lesson. Working with the ultrasonic sensor and the if conditional statement, on a breadboard with the Arduino board and a resistor, leads to the distance measurement project, in which students learn to use external libraries to control the sensor and to use the conditional if statement. Both feed into the separate project: the robotic vehicle.

3.3 The Main Project

Students then apply the knowledge from the above projects in the robotic vehicle project. They must connect all the already known components to one development board at the same time, and they must also build the mechanical part of the robotic chassis themselves. Here they can use their imagination and their own ideas. They do not have to be bound by general conventions as to what a robotic vehicle should look like, and may use any material, e.g., wooden prisms, components from old kits, old CDs, etc.
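To indicate roughly how the two sub-projects come together, and not as the textbook's reference solution, an obstacle-avoiding vehicle can be sketched as follows; the pins, the threshold and the simplified one-direction motor control are assumptions of this sketch.

// Robotic vehicle sketch: drive forward and turn away when the ultrasonic
// sensor reports an obstacle. All pins and thresholds are assumptions.
const int TRIG_PIN     = 7;
const int ECHO_PIN     = 8;
const int LEFT_MOTOR   = 5;   // PWM pin for the left motor (via transistor/driver)
const int RIGHT_MOTOR  = 6;   // PWM pin for the right motor (via transistor/driver)
const long OBSTACLE_CM = 20;  // turn when something is closer than this

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000);
  return duration / 58;        // 0 means no echo within the timeout
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(LEFT_MOTOR, OUTPUT);
  pinMode(RIGHT_MOTOR, OUTPUT);
}

void loop() {
  long d = readDistanceCm();
  if (d > 0 && d < OBSTACLE_CM) {
    // Obstacle ahead: pivot by driving only one wheel for a moment.
    analogWrite(LEFT_MOTOR, 0);
    analogWrite(RIGHT_MOTOR, 200);
    delay(500);
  } else {
    // Path clear: drive straight ahead.
    analogWrite(LEFT_MOTOR, 200);
    analogWrite(RIGHT_MOTOR, 200);
  }
}

The structure mirrors the lesson sequence: the motor control comes from the fan sub-project and the distance reading from the measurement sub-project.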

4 Verification

As part of the PRIM project, individual modules were tested at several secondary schools over a period of two years. These were vocational secondary schools focusing on technical fields, and grammar schools. The responsible teachers tested partial lessons within their professional subjects, with a focus on the didactic side of the textbook, i.e., whether the selected topics are supportive and can be implemented in the current model of teaching in secondary schools. Teachers used the detailed lesson guides to save as much time as possible when preparing for the lesson. Based on our own experience, the materials were evaluated according to several criteria:

1. Structure and sequence of individual topics, manageability of the curriculum


regarding the length of the teaching unit.
2. Adequacy of age and previous knowledge of students, usability in everyday life.
3. Stylistic and grammatical accuracy, comprehensibility of the text.
4. Professional accuracy, appropriate terminology.
5. Presence of tasks for self-study.
6. Presence of tasks for deepening knowledge, for pupils with specific learning disabilities and for gifted pupils.
7. Presence of correct answers or problem solving.
8. Color and quality of objects (pictures, diagrams, graphs).
9. Quantity, adequacy or complexity for pupils or students.

Each criterion was supplemented by the possibility of verbal comments and subsequent discussion between the teacher and the authors. This was followed by testing of the effectiveness of teaching, in the form of the students' acquired programming knowledge, and the first results are available.

5 Responses and Test Results

As part of the preparation of teaching materials in 2018, alpha testing took place at one grammar school and one secondary vocational school. There were around twenty students in each classroom. All authors of the article participated directly in this testing, either as teachers or as observers. During these tests, we noticed some interesting facts:

• Grammar school students have trouble understanding the wiring of the examples. They do not understand wiring diagrams and have trouble understanding the operating principle of the DC motor components. However, if they accept the connection as a fact, they have no problem with any code modification.
• Pupils of vocational secondary schools, on the other hand, have no problem with connecting the circuits and understanding the function of the individual elements, but have great problems with any modification of the code.

After the end of this testing and the incorporation of the identified problems and requirements, beta testing took place in five schools, of which three were vocational schools and one was a grammar school. There were again about twenty students in each class. Here, as authors, we had no choice of schools; they were selected as part of the project that included our textbook. We also did not have the opportunity to influence the behavior of teachers and students in any way. At the end of this phase, in February 2020, we

received evaluations from the teachers who had taught according to our materials. The textbook has 12 chapters, and each was evaluated by up to five teachers. Unfortunately, not all ratings came back to us; we received a total of 42 ratings, a little more than two thirds. Unfortunately, the evaluation system was set up in such a way that it did not force teachers to evaluate all chapters (Table 1).

Table 1. Average evaluation of teaching materials by teachers (5 = best, 1 = worst)

Structure and sequence of individual topics, manageability of the curriculum regarding the length of the teaching unit: 4.28
Adequacy of age and previous knowledge of students, usability in everyday life: 4.25
Color and quality of objects (pictures, diagrams, graphs): 4.5
Quantity, adequacy/complexity for students: 4.14

As authors, we could not influence the number of questions or their structure; they were the same for all parts of the project. We also received a few further comments:

• Some teachers found our materials complex and others found them simple. From this we conclude that the complexity is about right.
• In some chapters we found that the teachers could not get through all the material, and in others that there was too little, so we partially moved some subject matter between chapters.
• Teachers praised that, thanks to these materials, students finally understood the principle of some components and their connection with the outside world.
• We received the feedback that the teaching material is well structured and nicely follows the principle of moving from simple to more complex.
• There were complaints about the poor quality of some of the recommended components purchased, especially the cheap servos.
• We also moved the recommended grade up a bit, to the second or third year of studies, with a recommendation to use this textbook in the third year.

Further extended testing was planned for this year but had to be canceled due to the epidemiological situation. These materials, in their current form, are unsuitable for distance online teaching.

6 Conclusion
We are seeing an increasing impact of technology and its development on every aspect of human life. This affects people's interests, values, needs and perception of the world, because technology is an integral part of their lives, providing both social and entertainment functions. The task of education is to prepare the individual to become
part of it [7]. For the development of digital competencies of teachers, it is essential

to find common ground with their students and be able to implement these skills into
the educational process. The development of digital skills is important for teachers, in
terms of the ability to implement the use of technology in the educational process to motivate students. It is essential to use technologies and methods that may seem interesting and exciting to students of the digital generation. The field of robotics provides both: it develops digital competencies for students and teachers, and at the same time motivates students to participate in any subject taught in connection with
robotics. Therefore, the above example in the form of the presented material can form
the basis for further development regarding the principles of STEM and computational
thinking.

References
1. García-Saura, C., González-Gómez, J.: Low-cost educational platform for robotics, using open source. In: 5th International Conference of Education, Research and Innovation, ICERI2012, Madrid, Spain (2012). ISBN 9788461607631. ISSN 23401095
2. Ryu, H., Kwak, S.S., Kim, M.: A study on external form design factors for robots as elementary
school teaching assistants. In: The 16th IEEE International Symposium on Robot and Human
interactive Communication, RO-MAN 2007, Jeju. IEEE (2007). ISBN 9781424416349. ISSN
9781424416359
3. Balogh, R.: Basic activities with the Boe-Bot mobile robot. In: 14th International Conference
DidInfo2008. FPV UMB, Banská Bystrica, Slovakia (2008)
4. Wing, J.M.: Computational thinking. Commun. ACM 49(3), 33–35 (2006). https://doi.org/10.
1145/1118178.1118215
5. ADDIE Model: Wikipedia: the free encyclopedia. San Francisco: Wikimedia Foundation,
November 2016. https://en.wikipedia.org/wiki/ADDIE_Model. Accessed 11 Mar 2020
6. Crouch, C., Fagen, A.P., Callan, J.P., Mazur, E.: Classroom demonstrations: learning tools or
entertainment? Am. J. Phys. 72(6), 835–838 (2004). https://doi.org/10.1119/1.1707018. ISSN
0002-9505
7. Morin, E.: Seven Complex Lessons in Education for the Future. United Nations Educational,
Scientific and Cultural Organization, Paris (1999)
8. Informatické myšlení: About the project. iMysleni. Jihočeská univerzita v Českých Budějovicích (2019). https://imysleni.cz/about-the-project. Accessed 5 Jan 2021
9. Novák, M., Pech, J.: Robotika pro střední školy - programujeme Arduino. Jihočeská univerzita
v Českých Budějovicích, Pedagogická fakulta, České Budějovice (2020). ISBN 978-80-7394-
786-6. https://github.com/Nowis75/PRIM. Accessed 29 Jan 2021
A Social Robot Activity for Novice
Programmers

Zimcke Van de Staey(B) , Natacha Gesquière, and Francis wyffels

Ghent University–Imec, IDLab-AIRO, Ghent, Belgium


{zimcke.vandestaey,natacha.gesquiere,francis.wyffels}@ugent.be

Abstract. We have developed an activity on social robots for learners


of ages 11–14, as part of a broader AI and CT curriculum for secondary
education. We thereby address some of the difficulties that teachers expe-
rience when teaching physical computing and challenges of CS education
such as gender equality. During the project, learners will use a graphi-
cal programming language and simulator to design and implement their
robot. Afterward, they can transfer their design to a physical robot.

Keywords: Social robotics · Computational thinking · Programming ·


Computer science · Physical computing · STEM education

1 Introduction
Preparing students for digitization requires that certain aspects of computer science be addressed early in the curriculum [1–4]. The Flemish government requires schools to achieve a set of minimum learning goals with their students. For several decades computer science was almost absent in the curriculum of Flemish schools [5]. Recently, however, policy has changed, and since September 2019 minimum goals1 on computer science and STEM have also been included. This means computational thinking (CT), programming and technological fluency are
becoming increasingly important in Flemish secondary schools.
However, there are some challenges in putting the new curriculum successfully into practice. To begin with, teachers have little time to spend on teaching CT and programming. Besides this, they often face a limited set of resources due to small budgets. Another key point is that teaching computer science and STEM requires appropriate knowledge and skills, while teachers often lack proficiency in these domains. Teachers do not necessarily have a computer science background when they are assigned to teach programming and CT [6,7]. At the same time,
digital competencies are not always a strong component in their teacher training
[3, pp. 45–59]. Within physical computing teachers often see the constructions
and electronics they have to handle as an insurmountable threshold [8]. Thus it
becomes a challenge to fulfill the goals of the curriculum that is in place.
Teachers find it challenging to teach programming and physical computing
[5,9]. When teachers search for appropriate exercises for their learners, they
1
https://onderwijsdoelen.be, in Dutch.
c The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 72–77, 2022.
https://doi.org/10.1007/978-3-030-82544-7_8

are tempted to choose ready-made assignments from code.org or Blockly Games, since teachers do not need a lot of programming skills for their students to accomplish those tasks.
experiment because an open programming context is far more challenging for the
teacher to coach [10]. In contrast, connecting the activity to existing learning
goals lowers the barrier for the teacher [11,12]. Several learning tools involving
physical computing have been developed and tested over the past years, which
has shown that physical computing motivates learners because of its immediate
feedback and since it adds a tangible component to the learning process [10,12].
Blocks-based programming can empower novice programmers. Bau et al. argue that learnability can be enhanced by features such as online environments, high-level abstractions and easy-to-find examples [13].
Other important challenges of our CS education are reaching underrep-
resented minorities and the ICT gender equality paradox [14]. This paradox
describes the phenomenon that the better countries score on the gender equal-
ity scale, the less likely their female inhabitants will enroll in computer science-
related study programs. It is thus advisable consider the gender equality paradox
when designing materials for the school curriculum, in particular by providing
more learning contexts that equally appeal to all genders. Resnick and Rusk
argue that the way coding is introduced, often undermines its potential and
promise, and that by coding, learners should learn to think creatively, communi-
cate and work together with their peers [15]. Such a broader perspective results
in more inclusive activities [11].
In this paper, we propose a Social Robot activity for ages 11–14 as a pos-
sibility to tackle the new learning goals on CS and STEM through an exciting
theme that also incorporates learning goals on privacy and societal issues. In
this way, we help middle school teachers to overcome their fear of open-ended
problems and aim to give them more confidence in physical computing. At the
same time, we want learners to have a programming experience that appeals to
their interest, creativity and imagination, and to make them think about the
real-world applications that are desirable.

Fig. 1. The Dwenguino microcontroller with breadboard and ultrasonic sensor (left)
and some social robots created during the project (right).

2 Methods
We equip the teachers with a manual that gives them insights into the state of the art of social robotics and human-robot interaction (HRI), and the role artificial intelligence (AI) plays in those fields. During the Social Robot activity,
teachers can make use of our custom robotics kit that allows their students to
build a personalized social robot (in pairs). The kit includes: (1) a Dwenguino
which is an Arduino-based microcontroller board for robotics in education (see
Fig. 1) [16]; (2) several sensors and actuators; (3) laser-cut adaptor pieces to
easily attach sensors and actuators to a robot body; (4) the DwenguinoBlockly
software, a graphical programming environment developed for the Dwenguino.
(5) A cloud-based simulator view of the DwenguinoBlockly software in which
the students can test their code, free from physical restraints (see Fig. 2). The
simulator consists of several programming scenarios such as a driving, a drawing
and a social robot. For inclusiveness, the environment can be used in several
languages. The kit was designed during an iterative process in which different
parameters were taken into consideration: low production cost, high durability
(should be reusable), high robustness and high flexibility in its application.
The Social Robot activity consists of multiple modules of which some are
optional (see Fig. 3). During an introductory activity, students familiarize them-
selves with the context of social robots. They imagine how their robot can com-
municate with and respond to people. They can take part in unplugged activities
and class discussions. Afterward, they start the hands-on design and program-
ming phase in the online simulator. There they add different components to a
simulated robot and see the direct result of their programming code. Gradually
they can make their program more sophisticated. Following this phase the stu-
dents can start constructing their robot in physical space, using the sensors and
actuators in the kit. During the construction phase, students are encouraged to
upcycle leftover materials or be creative with materials such as papier-mâché.
After uploading the code to the physical robot, the learners can encounter some
problems with the executability of their program, due to physical issues. Students
might have to go back to the programming phase to adjust their implementa-
tion. Subsequently, teachers can propose to do a creative assignment with the
final result such as using the social robot in visual storytelling or organizing an
exhibition for other students. The activity can be wrapped up by doing a class
discussion, e.g., discussing the ethical implications of AI and social robotics.
The Social Robot activity gives learners plenty of opportunities to think
creatively, communicate and work together with their peers. It makes computa-
tional thinking a social event [17]. Since social robots come in all types and looks,
from a pet-like robot in elderly homes to guard robots in shopping malls, the
project can be approached gender-neutral. The social robot scenario provides a
less competitive context than some other scenarios in the simulator (in partic-
ular the driving robot that is used in robot competitions [18]). It also provides
opportunities for a more ‘narrative’ character since social robotics connects to
various school subjects and a social robot can be designed for communication
with humans [11]. To increase the Social Robot activity’s accessibility we took

several design decisions. First of all, we designed social robot programming blocks
with multiple levels of abstraction. For instance, to program the movement of
joints and limbs of the robot, students can use simplified movement blocks such
as ‘wave arms’ or ‘turn eyes left’ [13]. They can also opt to use more low-level
programming blocks such as blocks to control a servo motor directly, to achieve
more technological fluency. Secondly, by making use of the default sensors and
actuators pins on the Dwenguino and identifying unique default pins for the
other components, learners are less prone to making errors in the programming
and construction process. Programming happens in an online environment in
multiple languages. Learners also have the possibility to switch their code from
blocks-based to textual. Last but not least the laser-cut pieces to easily attach
sensors and actuators lower the barrier for both teacher and students.
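To make the two abstraction levels concrete: a high-level block such as 'wave arms' essentially wraps a handful of low-level servo commands. The sketch below illustrates that idea with the standard Arduino Servo library rather than the actual code generated by DwenguinoBlockly for the Dwenguino board; the pin numbers and the helper names waveArms and setArm are hypothetical.

#include <Servo.h>

// Two servos acting as the social robot's arms; the pins are hypothetical.
Servo leftArm;
Servo rightArm;

// Low-level control: set a single joint to an angle, which is roughly what the
// direct servo blocks expose.
void setArm(Servo &arm, int angle) {
  arm.write(constrain(angle, 0, 180));
}

// High-level block equivalent: 'wave arms' hides the individual joint commands
// behind one readable action.
void waveArms(int repetitions) {
  for (int i = 0; i < repetitions; i++) {
    setArm(leftArm, 150);
    setArm(rightArm, 150);
    delay(400);
    setArm(leftArm, 30);
    setArm(rightArm, 30);
    delay(400);
  }
}

void setup() {
  leftArm.attach(9);    // assumed servo pins
  rightArm.attach(10);
}

void loop() {
  waveArms(2);          // greet...
  delay(2000);          // ...then pause before greeting again
}

Exposing both layers lets novices start with the readable action blocks and later peel them open to reach the servo-level details, gaining more technological fluency along the way.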

3 Discussion
The Social Robot activity is one of the projects of AI At School2 , in which we are
developing teaching materials on CT, programming and CS, in particular AI,
for the entire secondary school education. These teaching materials consist of
several STEM projects, each in a different context. The contexts are chosen for
their relevance to society, their links with state-of-the-art scientific research, and
special attention goes to the ethical aspects of AI that arise. Our projects tackle
some of the challenges STEM education copes with: more inclusive projects, the improvement of digital and AI literacy, stimulating the transfer of knowledge between disciplines, and raising awareness of ethical issues.
Previous work suggests that many factors can contribute to fostering the
interest and the engagement of a more diverse audience when it comes to com-
puter science education [11,15,17,19]. In the Social Robot project we have included a substantial number of these initiatives. We have contextualized
designing social robots within the needs of our society. Additionally, we pro-
posed to link the social robotics theme to a discussion on the ethics of AI.
During the project, learners start from an engaging real-world multidisciplinary
application of computing, which leads to a tangible result. We selected an open-
ended context to stimulate the creativity of the participants. At the same time,
we ensured that the assignments do not have a competitive nature, but instead
encourage collaboration by allowing co-creation and pair programming. We sup-
port the teacher with background information on the theme through a manual.
The theme and its connection with non-STEM learning goals have a lot of poten-
tial to capture the teachers’ and learners’ interests.

2
https://www.aiopschool.be, in Dutch.

Fig. 2. Overview of the simulator layout: on the left a programming code editor with
graphical programming blocks; on the right the different programming scenarios and
program controls. You can see an example of the social robot scenario where the user
can drag and drop the physical components in the simulation window below.

Fig. 3. Overview of the Social Robot project structure. There can be multiple iterations
for the design and programming phase, as well as for the construction phase. To allow
Sim2Real transfer the design and programming phase can be repeated once more after
the construction phase.

4 Conclusion and Future Work


From 2021 onwards the Social Robot activity will be organized in Flemish sec-
ondary schools where all necessary materials can be borrowed by schools. By
organizing the Social Robot project on a large scale, we can conduct experimen-
tal research to substantiate our claims that the Social Robot project encourages
teachers to approach open-ended physical computing projects and fosters girls’
interests in computing.

References
1. Barr, V., Stephenson, C.: Bringing computational thinking to k-12: what is involved
and what is the role of the computer science education community? Acm Inroads
2(1), 48–54 (2011)
2. Bocconi, S., et al.: Developing Computational Thinking in Compulsory Education.
European Commission, JRC Science for Policy Report (2016)
3. European Commission/EACEA/Eurydice. Digital education at school in Europe.
Technical report, Publications Office of the European Union (2019)
4. Caspersen, M.E., Gal-Ezer, J., McGettrick, A., Nardelli, E.: Informatics for all:
The strategy. Technical report, February 2018
5. wyffels, F., Martens, B., Lemmens, S.: Starting from scratch: experimenting with computer science in Flemish secondary education. In: Proceedings of the 9th Workshop in Primary and Secondary Computing Education, WiPSCE 2014, pp. 12–15, New York, NY, USA, Association for Computing Machinery, November 2014
6. Martínez, M.C., Gomez, M.J., Moresi, M., Benotti, L.: Lessons learned on computer science teachers professional development. In: Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education (2016)
7. Ravitz, J., Stephenson, C., Parker, K., Blazevski, J.: Early lessons from evaluation
of computer science teacher professional development in google’s CS4HS program.
ACM Trans. Comput. Educ. (TOCE), August 2017
8. Margot, K.C., Kettler, T.: Teachers’ perception of stem integration and education:
a systematic literature review. Int. J. STEM Educ. 6, 1–16 (2019)
9. Benitti, F.B.V.: Exploring the educational potential of robotics in schools: a sys-
tematic review. Comput. Educ. 58(3), 978–988 (2012)
10. Neutens, T., Wyffels, F.: Unsupervised functional analysis of graphical programs
for physical computing, pp. 1–6 (2020)
11. Eguchi, A.: Bringing robotics in classrooms. In: Robotics in STEM Education, pp. 3–31. Springer (2017)
12. Karim, M.E., Lemaignan, S., Mondada, F.: A review: can robots reshape K-12
STEM education? In: 2015 IEEE International Workshop on Advanced Robotics
and its Social Impacts (ARSO), pp. 1–8, June 2015
13. Bau, D., Gray, J., Kelleher, C., Sheldon, J., Turbak, F.: Learnable programming:
blocks and beyond. Commun. ACM 60(6), 72–80 (2017)
14. West, M., Kraut, R., Ei Chew, H.: I’d blush if I could: closing gender divides in
digital skills through education (2019)
15. Resnick, M., Rusk, N.: Coding at a crossroads. Commun. ACM 63(11), 120–127
(2020)
16. Neutens, T., Wyffels, F.: Analyzing coding behaviour of novice programmers in
different instructional settings: creating vs. debugging, pp. 1–6 (2020)
17. Kafai, Y.B., Burke, Q.: Connected Code: Why Children Need to Learn Program-
ming. MIT Press, Cambridge, July 2014
18. wyffels, F., et al.: Robot competitions trick students into learning. In: Stelzer,
R., Jafarmadar, K., (eds.), Proceedings of the 2nd International Conference on
Robotics in Education, pp. 47–51 (2011)
19. Happe, L., Buhnova, B., Koziolek, A., Wagner, I.: Effective measures to foster
girls’ interest in secondary computer science education. Educ. Inf. Technol. 26(3),
2811–2829 (2020)
Workshops, Curricula and Related
Aspects Universities
Robotics and Intelligent Systems:
A New Curriculum Development
and Adaptations Needed in Coronavirus
Times

Francesco Maurelli(B) , Evelina Dineva, Andreas Nabor, and Andreas Birk

Jacobs University Bremen, Bremen, Germany


{f.maurelli,e.dineva,a.nabor,a.birk}@jacobs-university.de
http://www.jacobs-university.de/

Abstract. This paper presents the curriculum development of the


Robotics and Intelligent Systems Bachelor of Science program at Jacobs
University, one of the very few bachelor robotics-focused programs in
Germany. After reviewing the quality management measures and the feedback received on the older Intelligent Mobile Systems program, the new curriculum is presented, together with implementation challenges linked to the Covid-19 pandemic. Student satisfaction plots provide an initial assessment of the developed program.

Keywords: Robotics BSc · Curriculum development · Constructive


alignment · Covid impact on education

1 Introduction
Robotics, Artificial Intelligence, Machine Learning, Automation... these are all topics receiving very high attention in society and strong demand from industry. Whilst they are often taught as part of other, more canonical programs, like Electrical/Mechanical/Computer Engineering and Computer Science, there is increased interest in dedicated programs, where students can start learning from the very beginning of their higher education. These sciences have reached a level of maturity and a body of knowledge that grant them the status of stand-alone programs, similar to what happened to computer science decades ago. Already when looking at robotics alone, it is a quite diverse field, both with
respect to research as well as education aspects, and it accordingly touches upon
many different disciplines [6].
Curriculum development in this area is hence not straight-forward [22] and
it spans several quite different dimensions. It ranges for example with respect
to content from add-on elements to, e.g., Computer Science education [11] or
Science-Technology-Engineering-Math (STEM) oriented programs in general [2,
14,17] to full-fledged robotics programs [1,25]. It also encompasses aspects of
different target groups, ranging for example from Kindergarten [4,16] over K12
[3,15,18] to college education [21,26,27].
c The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 81–93, 2022.
https://doi.org/10.1007/978-3-030-82544-7_9

The United Kingdom is leading the world in terms of undergraduate programs focused on robotics. The website www.bachelorsportal.com reports 50 BSc programs, compared to 17 in the USA, 5 in Australia and only 2 in Germany. Although there are some undergraduate robotics-related programs in Ger-
many, there tend to be very few. The list includes the Intelligent Mobile Systems
(IMS) program at Jacobs University Bremen, which was launched in 2015, in the
Department of Computer Science and Electrical Engineering, alongside the BSc
programs in Computer Science and in Electrical and Computer Engineering.
The program was well received by students, but soon a variety of proposals appeared from the student body and faculty. In this paper we aim to present the founding principles that guided the revision of the program, based on the evaluation of the quality management reports and on open discussions with students and faculty. Section 2 presents the analysis of the IMS program, which was used to develop the new Robotics and Intelligent Systems program, presented in Sect. 3. A section about the program's response to the Covid crisis and the students' reaction is then presented in Sect. 4. Finally, the Conclusions highlight future work and optimisations of the program which are currently planned.

2 Challenges and Calls for Action


The IMS Program was designed with the following qualification aims:

Knowledge and Understanding. After finishing this program, the student will
have knowledge and understanding of
– Kinematics and dynamics of multi-body systems
– Linear and nonlinear control systems
– Basic electronics, operational principles of motors and drives
– Machine Learning algorithms and techniques for pattern-recognition, classi-
fication, and decision-making under uncertainty
– Computer Vision algorithms for inferring 3D information from camera images, and for object recognition and localization
– Robotic manipulators and mobile robots
– Simultaneous Localization and Mapping (SLAM) algorithms
– Motion planning techniques in robotics
– Relevant sensors, signal-processing, and probabilistic estimation techniques
– Analytical and numerical optimization in continuous and discrete domains

Ability. After finishing this program, the student will be capable of designing and
implementing complete intelligent mobile systems that carry out complex tasks
in challenging environments without permanent human supervision. Concretely,
the student will be able to
– Model common mechanical and electrical systems which are part of intelligent
mobile systems
– Design control systems and tune their performance

– Design and program image-processing and computer-vision algorithms


– Select and implement classification and pattern recognition algorithms for
real-world problems
– Design robots and program them using popular robotics software frameworks
– Formulate and solve optimization problems of both theoretical and practical
natures, in continuous as well as discrete settings
– Work in a team to develop and integrate different components into a func-
tioning system

Fig. 1. The 3C model of Jacobs University: Choice, Core, Career

The overall program structure follows the Jacobs University 3C Model, as


outlined in Fig. 1. The first year is called Choice, as there is limited program-
specific content, to allow students to choose among the overall university offer.
This expresses the education philosophy of Jacobs University, where education
is seen as inter- and trans-disciplinary. An important additional benefit for the students is the possibility to switch programs at the end of the first year, since three parallel choice tracks have to be selected. This was particularly useful in robotics, also allowing students to complete the first year in line with their own interests. Robotics
and Artificial Intelligence can be applied to virtually any topic. For example
students interested in human-robot collaboration could choose a Psychology
module. Students interested in industry 4.0 could choose Industrial Engineer-
ing. Students more interested in the software aspects in robotics could choose
Computer Science, while students focused on electronics could choose Electri-
cal and Computer Engineering. The second year is named Core, as students take in-depth, discipline-specific modules, which represent the core of the pro-
gram. The third year is then called Career, as it is focused on the post-university
life: an optional internship or a semester abroad, specialisation courses, project
and thesis will allow students to sail off to new endeavors after earning their
degrees. The overall study program can be found in Fig. 2. Over the years, several quality management activities were arranged. Formal activities included a satisfaction questionnaire for the overall program, course-related feedback, and a yearly roundtable between students and faculty. Informal activities included a discussion with graduating students about their experience, and one-to-one discussions between faculty members and students in case of specific feedback or proposals. The main feedback points can be summarised as:
– more labs would be needed to give students hands-on experience
– in some cases the first part of the thesis work was focused on learning basic
robotics software leaving little time for more high-level tasks

Fig. 2. The study plan of the initial intelligent mobile systems BSc

– there was a need for more programming languages in addition to C and


MATLAB
– many classes were too theoretical, with actually no mobile systems used in a program called “Intelligent Mobile Systems”

The faculty of the program has therefore worked on a new concept, starting from the original IMS program and arranging the necessary modifications to take the received feedback into account. This process happened in the framework of a wider reorganisation of the BSc programs at Jacobs University in view of the renewal of the program accreditation, as explained in the following section.

3 The New Robotics and Intelligent Systems Program

Universities operate within an overarching framework of educational policies, founding programs, international guidelines, and professional organisations. One com-

ing programs, international guidelines, or professional organisations. One com-
mon denominator is quite well expressed in the position paper by the German
Council of Science and Humanities (Wissenschaftsrat) [31]: Higher education
must be balanced such that students develop along all three dimensions: (i)
scholarship, (ii) employability, and (iii) civic responsibility. The European Union
(EU) formulates the European Qualifications Framework (EQF), which describes
which competencies need to be achieved at different education levels, includ-
ing higher education through the Bologna Process [8]. Educational associations
specify pathways on how to develop complex sets of competencies; a few notable
examples are the Careers Research and Advisory Centre (CRAC) in the UK

Fig. 3. Governance levels in higher education pedagogy: macro, meso, and micro. For each level, an operative framework is presented alongside objectives and examples: at the macro level, the external framework (educational policies, founding programs, international guidelines); at the meso level, the operational framework within an institution (developing a culture and providing an infrastructure, for example through quality management or professional development of educators); at the micro level, study programs (defining a profile by curricula development and coherence between curricula) and learning environments (development of student competencies in courses, projects, seminars, facilities).

[10] and the German University Association of Advanced Graduate Training


(GUAT) [30]. Trans-disciplinary scholarship in combination with civic responsi-
bility and leadership are also at the core of so-called future skills in engineering.
Those are called for by professional engineering associations [24] and research in
engineering education [13,23] and knowledge management in engineering [12]. Those analyses are important starting points in the design of a new curriculum, as a university is embedded in the overall society. Moving towards the internal organisation of higher education, it is necessary to consider the different levels of action that we can analyse and act upon; this helps to steer changes through level-specific methodologies. Figure 3 describes the well-known macro-, meso-, and micro-level
taxonomy in higher education [28]. Change management is driven by strategies
or innovations on any of those levels [9]. The overall educational offer of Jacobs
University has therefore been redesigned and adapted on all three different layers.
At the macro level, university-wide policies led to the introduction of Big Questions and Community Impact Project modules in all programs, intended to broaden the students' horizon with applied problem solving between and beyond the disciplines. The meso-level, program-specific analysis started from the Intended Learning Outcomes (ILOs) of the overall program, matching them with the current offer. Starting from the ILOs was pivotal to identify gaps, for example in hands-on robotics, and at the same time to find space to fill them by identifying the courses which were less critical in fulfilling the program ILOs. This process has led to the creation of the second-year CORE modules RIS Lab and RIS Project, which at the same time fulfilled the students' requests for more laboratory activities.
With the RIS lab, students start to use the Robot Operating System in simula-
tion and then with the RIS Project they can use real robotics systems working in
a team. The educational offer can be even further improved when lab activities
are paired up with extra-curricular projects, like for example participating in
robotics competitions [19,20,29]. Those curricular changes prepare the students to face the thesis in the third year with a much stronger and more solid background,

being able to dedicate more time to the research questions they choose. C++ is
introduced from the first semester, Python in the second semester and then used
in various courses in the second year. In addition to being aligned with the overall program ILOs, these structural changes at program level responded very well to
the requests from students. Finally, at the individual module micro-level, each
course has been revised, again starting with the definition of the module ILOs
and adjusting course content and examination type in order to cover all ILOs.
In particular, constructive alignment principles were employed in the design of
the module and examinations [5]. At the end, a table matching the Program
Intended Learning Outcomes with the program modules ensured that all ILOs
were appropriately addressed, as in Fig. 4. The resulting schematic study plan
for the Robotics and Intelligent Systems BSc can be found in Fig. 5. By the end
of the program, according to the intended learning outcomes, students will be
able to:
– demonstrate knowledge of kinematics and dynamics of multibody systems;
– design and develop linear and nonlinear control systems;
– design basic electronic circuits;
– demonstrate competence in the operational principles of motors and drives;
– design and develop machine learning algorithms and techniques for pattern-
recognition, classification, and decision-making under uncertainty;
– design and develop computer vision algorithms for inferring 3D information
from camera images, and for object recognition and localization;
– model common mechanical and electrical systems that are part of intelligent
mobile systems;
– design robotics systems and program them using popular robotics software
frameworks;
– use academic or scientific methods as appropriate in the field of Robotics
and Intelligent Systems such as defining research questions, justifying meth-
ods, collecting, assessing and interpreting relevant information, and drawing
scientifically founded conclusions that consider social, scientific, and ethical
insights;
– develop and advance solutions to problems and arguments in their subject
area and defend these in discussions with specialists and non-specialists;
– engage ethically with the academic, professional, and wider communities and
to actively contribute to a sustainable future, reflecting and respecting differ-
ent views;
– take responsibility for their own learning, personal, and professional develop-
ment and role in society, evaluating critical feedback and self-analysis;
– apply their knowledge and understanding to a professional context;
– work effectively in a diverse team and take responsibility in a team;
– adhere to and defend ethical, scientific, and professional standards.
The first cohort under the new curriculum started in Fall 2019, while from Fall 2020 the program finally changed its name from Intelligent Mobile Systems to the more understandable Robotics and Intelligent Systems, also to highlight the
important structural changes.
Fig. 4. Intended learning outcomes assessment matrix - RIS program

With the support of the Quality Management Team, regular feedback is collected from the students. Comparing the general satisfaction feedback across the last three years, 2018–2020, the plots in Fig. 6 show that the changes have
been significantly appreciated by the student body. The plot shows the number
of satisfied and very satisfied students with respect to the total who answered. In
particular, in addition to an improvement of all indicators, the satisfaction about
the overall organisation of the study program had a substantial jump upwards
with the introduction of the new program.
Fig. 5. The study plan of the new robotics and intelligent systems BSc

The overall breakdown into the various satisfaction levels is shown in Table 1.
The improvements from the change of curriculum can also be seen using a
different metric, namely the weighted average of the values in Table 1. Assigning
weights from −2 to +2 to the satisfaction levels, the increase in satisfaction after introducing the new program is even clearer, as shown in Fig. 7. Additionally, the 2020 values significantly exceed the university-level average satisfaction, where the parameters quality of teaching, studies and overall program organisation score on average 58%, 66% and 55%, compared to 70%, 70% and 71% for the RIS program. Looking at course-related feedback (questionnaires compiled at the end of each course, rather than about the overall program), this picture is confirmed, with RIS courses positioned above average with respect to the university-wide feedback. This is shown, for example, in Fig. 8, which refers to the individual
courses in Fall 2020.
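
For reference, both metrics can be recomputed directly from Table 1. The short Python sketch below (not part of the original analysis) derives, for the 2020 answers, the share of satisfied and very satisfied students used in Fig. 6, which reproduces the 70%, 70% and 71% quoted above, and a weighted mean with the weights {−2, −1, 0, 1, 2}; the exact normalisation behind Fig. 7 is not stated in the paper, so the weighted values are only illustrative.

    # Recompute the two satisfaction metrics from the 2020 rows of Table 1.
    # Each row lists the shares of --, -, N, + and ++ answers, in percent.
    TABLE_2020 = {
        "Quality of teaching":  [0, 18, 12, 58, 12],
        "Your studies":         [0, 12, 18, 46, 24],
        "Overall organisation": [6,  0, 23, 53, 18],
    }
    WEIGHTS = [-2, -1, 0, 1, 2]   # weights assigned to the five satisfaction levels

    for item, shares in TABLE_2020.items():
        satisfied = shares[3] + shares[4]   # "+" and "++" answers (Fig. 6 metric)
        weighted = sum(w * s for w, s in zip(WEIGHTS, shares)) / 100.0
        print(f"{item:22s} satisfied: {satisfied}%  weighted mean: {weighted:+.2f}")

Running the sketch gives satisfied shares of 70%, 70% and 71%, matching the values reported for the RIS program in 2020.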

4 Adaptations Needed During the Pandemic

Like every other sector, education has been severely impacted by the coronavirus. On the positive side, the pandemic has also represented a vigorous push for innovation in education.
The IMS/RIS Program was hit during its first year of implementation of the new
curriculum. The first year module Introduction to Robotics and Intelligent Sys-
tems was particularly challenging due to the high number of students (146) and
Fig. 6. General student satisfaction - IMS-RIS programs. Percentage of students satisfied and very satisfied with respect to the total.

Fig. 7. Weighted student satisfaction - IMS-RIS programs. Weights used: {−2, −1, 0,
1, 2}

Fig. 8. Course evaluation RIS-average (red line) vs JUB-average (dotted blue line).
Table 1. Satisfaction summary: strongly unsatisfied (−−), unsatisfied (−), neutral (N), satisfied (+), strongly satisfied (++)

Year  Item                   −−   −    N    +    ++
2018  Quality of teaching    0%   36%  18%  37%  9%
      Your studies           0%   9%   36%  55%  0%
      Overall organisation   9%   18%  37%  36%  0%
2019  Quality of teaching    7%   26%  13%  47%  7%
      Your studies           7%   13%  20%  53%  7%
      Overall organisation   0%   40%  33%  27%  0%
2020  Quality of teaching    0%   18%  12%  58%  12%
      Your studies           0%   12%  18%  46%  24%
      Overall organisation   6%   0%   23%  53%  18%

the lab activities, involving three faculty members and eight teaching assistants.
With one week of notice, the overall university moved to remote teaching. Such a
sudden change, already demanding for lectures, presented additional challenges
for the lab activities. Due to the numbers, the Arduino-based lab associated with the Introductory module was split into six rotations over the semester. Only about
three rotations (half of the students) were able to complete the lab component
before in-person teaching was suspended. The remainder of the semester was
organised with the following strategy. The hardware kits were distributed to the
teaching assistants, who are senior students living on-campus together with the
other students in college accommodations. A limited number of students was
allowed to collect the hardware kit at predefined times, with a specific hygiene
protocol to follow. The other students, including those who decided to travel
back to their country/family home, were provided with a simulation environ-
ment and performed the tasks in simulation. At the end of the semester, a grade analysis showed no statistically significant difference between the various groups (in the lab with a faculty member, working alone with the hardware collected from the TA, or in simulation). This initial result was very interesting and will be further explored in Spring 2021, gathering more data to arrive at more solid conclusions about the
use of simulation and real hardware in fulfilling the ILOs of a specific module.
Regarding the adaptation of the lectures, a great example is reported for the
Robotics class (CORE second year), in Birk et al. [7].
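
As a side note on the grade analysis mentioned above: the paper does not specify which statistical test was used, so the following Python sketch only illustrates how such a comparison between the three groups could be set up, here with a Kruskal-Wallis test on hypothetical grade lists (the real grade data are not reproduced here).

    # Illustrative comparison of grades across the three groups described above.
    # The grade lists are invented; the test choice is an assumption, not the
    # method actually used by the authors.
    from scipy.stats import kruskal

    grades_lab = [1.7, 2.0, 1.3, 2.3, 1.7, 2.0]   # lab with a faculty member
    grades_kit = [2.0, 1.7, 2.3, 1.3, 2.0, 2.7]   # hardware kit collected from a TA
    grades_sim = [1.7, 2.3, 2.0, 2.0, 1.3, 2.3]   # simulation only

    stat, p_value = kruskal(grades_lab, grades_kit, grades_sim)
    print(f"H = {stat:.2f}, p = {p_value:.3f}")
    # A large p-value is consistent with no statistically significant
    # difference between the groups.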

5 Conclusions and Future Work


This paper has presented an innovative robotics curriculum for undergraduate
studies, one of the very few available in Germany. The initial student feedback
shows that the program is well appreciated, with above-average results. As a
trans-disciplinary curriculum, the RIS Program fosters balanced development
of student competencies along (i) scholarship, (ii) employability, and (iii) civic
responsibility.
While curriculum stability is important on the one hand, on the other hand the need for continuous improvement cannot be ignored. Quality management measures will continue to ensure that any concern, as well as any opportunity to make things better, is addressed proactively, in service of the students' personal and professional development.

Notes and Comments. This research was funded by the project “Hands-On 4.0:
Individualized Applied Education in the Digitalization Age” (Grant 2019-1347-
00) by the Jacobs Foundation within the “B3—Bildung Beyond Boundaries”
framework.

References
1. Ahlgren, D.: Meeting educational objectives and outcomes through robotics education. In: Proceedings of the 5th Biannual World Automation Congress, vol. 14, pp. 395–404 (2002)
2. Altin, H., Pedaste, M.: Learning approaches to applying robotics in science educa-
tion. J. Baltic Sci. Educ. 12(3), 365 (2013)
3. Benitti, F.B.V.: Exploring the educational potential of robotics in schools: a sys-
tematic review. Comput. Educ. 58(3), 978–988 (2012)
4. Bers, M.U., Flannery, L., Kazakoff, E.R., Sullivan, A.: Computational think-
ing and tinkering: exploration of an early childhood robotics curriculum. Com-
put. Educ. 72, 145–157 (2014). https://www.sciencedirect.com/science/article/
pii/S0360131513003059
5. Biggs, J.B., Tang, C.K.C.: Teaching for Quality Learning at University. McGraw-
Hill and Open University Press, Maidenhead (2011)
6. Birk, A.: What is robotics? An interdisciplinary field is getting even more diverse.
IEEE Robot. Autom. Mag. (RAM) 18(4), 94–95 (2011)
7. Birk, A., Dineva, E., Maurelli, F., Nabor, A.: A robotics course during COVID-
19: lessons learned and best practices for online teaching beyond the pandemic.
Robotics 10(1), 5 (2021)
8. Bologna Working Group: A Framework for Qualifications of the European Higher
Education Area. Bologna Working Group Report on Qualifications Frameworks.
(Copenhagen, Danish Ministry of Science, Technology and Innovation) (2005)
9. Brahm, T., Jenert, T., Euler, D.: Pädagogische hochschulentwicklung - überblick
und perspektiven. In: Brahm, T., Jenert, T., Euler, D. (eds.) Pädagogische
Hochschulentwicklung: Von der Programmatik zur Implementierung, pp. 9–16.
Springer Fachmedien Wiesbaden, Wiesbaden (2016)
10. Careers Research and Advisory Centre (CRAC) Limited: Vitae—Researcher Devel-
opment Framework. Joint Statement of the UK Research Councils’ Training
Requirements for Research Students, 2001, UK GRAD Programme and the
Research Councils, 2 edn (2011)
11. Cosma, C., Confente, M., Botturi, D., Fiorini, P.: Laboratory tools for robotics and
automation education. In: Confente, M. (ed.) Proceedings of IEEE International
Conference on Robotics and Automation, ICRA 2003, vol. 3, pp. 3303–3308 (2003)
12. Dineva, E., Bachmann, A., Knodt, U., Nagel, B.: Lessons learned in participative
multidisciplinary design optimization. J. Aerosp. Oper. 4(1–2), 49–66 (2016)
13. Hadgraft, R.: New curricula for engineering education: experiences, engagement,
e-resources. Glob. J. Eng. Educ. 19(3), 112–117 (2017)
14. Howell, A., Way, E., Mcgrann, R., Woods, R.: Autonomous robots as a generic
teaching tool. In: Way, E. (ed.) 36th Annual Frontiers in Education Conference,
pp. 17–21 (2006)
15. Johnson, J.: Children, robotics, and education. Artif. Life Robot. 7(1), 16–21
(2003). https://doi.org/10.1007/BF02480880
16. Jung, S.E., Won, E.S.: Systematic review of research trends in robotics education
for young children. Sustainability 10(4), 905 (2018). https://www.mdpi.com/2071-
1050/10/4/905
17. Khine, M.S.: Robotics in STEM Education - Redesigning the Learning Experience.
Springer, Cham (2017). https://doi.org/10.1007/978-3-319-57786-9
18. Kolberg, E., Orlev, N.: Robotics learning as a tool for integrating science technology
curriculum in k-12 schools. In: Orlev, N. (ed.) 31st Annual Frontiers in Education
Conference, vol. 1, pp. T2E–12–13 (2001)
19. Maurelli, F., Cartwright, J., Johnson, N., Petillot, Y.: Nessie IV autonomous under-
water vehicle wins the SAUC-E competition. In: 10th International Conference on
Mobile Robots and Competitions, Portugal (2010)
20. Maurelli, F., Petillot, Y., Mallios, A., Ridao, P., Krupiński, S.: Sonar-based AUV
localization using an improved particle filter approach. In: IEEE Oceans 2009,
Germany (2009)
21. Mirats Tur, J., Pfeiffer, C.: Mobile robot design in education. IEEE Robot. Autom.
Mag. 13(1), 69–75 (2006)
22. Padir, T., Chernova, S.: Guest editorial special issue on robotics education. IEEE
Trans. Educ. 56(1), 1–2 (2013)
23. Prpic, J.K., Hadgraft, R.: Interdisciplinarity as a path to inclusivity in the engi-
neering classroom: a design-based research approach. In: Proceedings of the: AAEE
Conference, Fremantle, Western Australia (2011)
24. Sheppard, S., Macatangay, K., Colby, A., Shulman, L., Sullivan, W.: Educating
Engineers: Designing for the Future of the Field. Jossey-Bass/Carnegie Foundation
for the Advancement of Teaching. Wiley (2009)
25. Shibata, M., Demura, K., Hirai, S., Matsumoto, A.: Comparative study of robotics
curricula. IEEE Trans. Educ. 1–9 (2020)
26. Spolaor, N., Benitti, F.B.V.: Robotics applications grounded in learning theories
on tertiary education: a systematic review. Comput. Educ. 112, 97–107 (2017).
http://www.sciencedirect.com/science/article/pii/S0360131517300970
27. Tsoy, T., Sabirova, L., Magid, E.: Effective robotics education: surveying experi-
ences of master program students in introduction to robotics course. In: MATEC
Web Conference, vol. 220, p. 06005 (2018). https://doi.org/10.1051/matecconf/
201822006005
28. Ulrich, I., Heckmann, C.: Taxonomien hochschuldidaktischer Designs und Metho-
den aus pädagogisch-psychologischer Sicht samt Musterbeispielen aus der aktuellen
Forschung. die hochschullehre 3, 1–28 (2017)
29. Valeyrie, N., Maurelli, F., Patron, P., Cartwright, J., Davis, B., Petillot, Y.: Nessie
V turbo: a new hover and power slide capable torpedo shaped AUV for survey,
inspection and intervention. In: AUVSI North America 2010 Conference (2010)
30. Vurgun, S., et al.: Kompetenzentwicklung von Nachwuchswissenschaftlerinnen und Nachwuchswissenschaftlern – Fördern und Entwickeln. In: UniWiND-Publikationen, vol. 10 (2019)
31. Wissenschaftsrat: Empfehlungen zum Verhältnis von Hochschulbildung und
Arbeitsmarkt – Zweiter Teil der Empfehlungen zur Qualifizierung von Fachkräften
vor dem Hintergrund des demographischen Wandels. Drs. 4925-15 (2015)
An Educational Framework for Complex
Robotics Projects

Simon Untergasser(B) , Manfred Hild, and Benjamin Panreck

Beuth University of Applied Sciences Berlin, Berlin, Germany


simon.untergasser@beuth-hochschule.de,
http://neurorobotik.de

Abstract. In robotics, several subject areas, such as electronics, mechanics and programming, come together. It can be a challenge to
train new generations of robotics engineers in all the necessary areas.
Humanoid robots in particular are very complex systems that require
solid knowledge. At the same time, the goal of good teaching is to pro-
vide equal opportunities to all learners and to foster their individual
abilities. In this paper, we present an educational framework that makes
it possible to realize robotics projects of different levels of complexity
in a short time. By partitioning projects into these levels, there are meaningful tasks at each level of difficulty in which learners can realize their potential. The framework is presented using a project course on humanoid robotics as an example.

Keywords: Humanoid robotics · Project-based · University course

1 Introduction

Humanoid Robotics is receiving a lot of attention for its potential usage in dif-
ferent fields of society. Research is being done in care applications, socially assis-
tive situations and education (Gross et al. 2019; Belpaeme et al. 2018), among
other areas. To ensure future research in humanoid robotics, new generations of
researchers must be trained. However, their education can only be provided in
a high quality if the students have access to appropriate hardware. On the one
hand, there are commercially available products, such as Nao from Aldebaran
Robotics (Gouaillier et al. 2009). On the other hand, there are open source
projects like iCub (Metta et al. 2008) and robots built by research groups them-
selves like Myon (Hild et al. 2012). All these robots are highly complex systems
that are fragile, expensive and not easy to maintain. As a result, there are usu-
ally few of these systems, if any, available for education. This can be a barrier
to high-quality teaching. Another, already mentioned difficulty is the high com-
plexity of these systems. Even for experienced researchers, it often takes a lot of
time and effort to become familiar with such a system. Therefore, it is under-
standable that one cannot complete a large project on a humanoid robot within

a short period of time, unless a lot of simplifications are made (like using pre-
existing solutions). Additionally, the different learning needs and performance
levels of the students must be taken into account. This applies to any kind of
complex problem that is to be solved with trainees. As long as the trainees have
no experience in solving such tasks, a lot can go wrong. The difficulties are not limited to the availability of any required hardware, as in the case of robotics. A group
of trainees is always diverse in their level of knowledge of individual content.
The level of comprehension capacities also varies naturally. How can we help
weaker learners? Frustration should be avoided to guarantee satisfying results
(see (Meindl et al. 2019) for possible consequences of frustration). And how can
we ensure that the stronger learners are not bored when the content is taught
more slowly? What happens if certain teaching content is not understood by
individuals? Everyone has strengths and weaknesses and there should be no sys-
tematic disadvantage. Last but not least, hardware, as mentioned above, is often
a bottleneck. Not only must availability be ensured, but also functionality. How
to deal with technical problems, whether or not they are the fault of the learn-
ers? We address these questions with a didactic concept that makes it possible to
implement complex projects even in a large course with many students. It pro-
vides equal opportunity for students with different performance levels while still
delivering measurable results. With smooth transitions between different levels
of complexity, frustration does not arise for weaker learners, nor does motivation
disappear for the better ones. Furthermore, our curriculum can handle technical
problems of complex hardware. We show how this concept is realized in a project
course in the program Humanoid Robotics at Beuth University of Applied Sci-
ences in Berlin. For this university course, we introduce three levels of increasing
complexity. For each level, different hardware requirements are given and differ-
ent contents of the accompanying lectures are relevant. Nevertheless, there are
enough meaningful tasks to satisfy all needs and performance levels. In addition,
learners have the opportunity to participate in a research project with their work
and have direct contact with future clients.
The paper is structured as follows. In Sect. 2, the didactic concept is described in detail based on the robotics course example. Section 3 describes the integration of the curriculum into the research project RoSen. The paper concludes with Sect. 4.

Related Work
Many studies on robots in education use these machines to deliver teaching
content or to motivate students. For example, the authors of (Donnermann et al.
2020) use the robot Pepper to provide individual tutoring as exam preparation. A
similar situation is established in (Rosenberg-Kima et al. 2020). Here, the robot
Nao serves as a moderator of a group assignment. Another field of application
is the use of robots as a mechatronic example for engineering subjects (see for
example (de Gabriel et al. 2015)). Close to the approach of this work is (Danahy
et al. 2014). Here, LEGO robots were used over a period of 15 years to implement
various teaching programs at universities. Different engineering topics such as
Fig. 1. Students start in the simulation, shown on the left side. When successful, they
get a real MiniBot shown in the middle. At the end of the semester, they get the chance
to test their code on a big humanoid robot like Myon shown on the right (source: Messe
Berlin).

mechanics, electronics and programming were conveyed in the form of robotics projects. Another interesting platform is presented in (Čehovin Zajc et al. 2015).
The authors describe a self-built mobile robot based on an iRobot Roomba. They use their TurtleBot-inspired robot to teach a university course on the
development of intelligent systems. However, all these approaches do not do
justice to the wide spectrum of knowledge students of humanoid robotics need
to know.

2 Concept
This section first describes the problem in more detail. Then it outlines the
proposed concept in the robotics course.
In any learning situation with multiple learners, you face the challenge of
having to serve different levels of proficiency. Some of the learners understand
the content faster than others. Typically, the goal is to maintain an average
speed and level of complexity so that most of the group can follow along. This
can mean that the material is nevertheless too difficult or fast for some learners,
so that frustration can spread among this part of the group. These learners then
have to spend more of their free time on the subject in question. As a result,
they may miss out on other courses. At the same time, the better learners are
bored because the material is not challenging enough for them or is taught too
slowly. Often it is particularly important to keep these learners motivated, as the entire course can benefit from their level of performance. This is especially
true in project courses, where students work together to achieve a larger goal. To
address this issue, a multi-level didactic approach is presented below. It allows
the teaching content to be adapted to different personalities and still get valuable
results from all participants. At the same time, it provides measurable results
in order to make fair assessments. The project course “Humanoid Robotics”,
in which complex robotics projects are realized, serves as an example. The key
concepts are:

– multiple levels of complexity
– flexible, permeable boundaries between levels (you can level up anytime)
– meaningful tasks in each level
– measurable, fair outcomes at all levels
– as much independence from technical dependencies as possible.

It is possible that there are teaching contents which cannot be divided into several levels. However, with some effort and abstraction, it is possible in many
cases. For example, one goal of the research project RoSen (described in part
3) is using a complex humanoid robot like Myon (see Fig. 1 on the right). This
can only be achieved if all intermediate steps are understood. These include, for
example, controlling the actuators, visual processing of camera data, or handling
the up to 32 degrees of freedom. However, concentrating on simpler subtasks of
the bigger goal allows for a much broader range of projects. For example, local-
izing the robot in complex environments and building a map of the environment
are essential parts of the RoSen project. To solve this task, however, the robot
Myon is not necessary in the first step. All that is needed is a small analogue,
called MiniBot (shown in the center of Fig. 1), which has the necessary capa-
bilities and degrees of freedom so that relevant algorithms can be tested on it
before being applied to Myon. This MiniBot provides a huge reduction in com-
plexity. It also circumvents potential hardware bottlenecks, since the MiniBot
is relatively inexpensive to produce and therefore each student can have their
own robot. However, the lowest level of complexity is formed by a simulation of
the two robots (large and small) in the simulation environment WeBots (Michel
2004). This virtual environment is used for the development and initial testing
of algorithms and project ideas and serves as a playground for the first level.
Figure 1 shows the simulated MiniBot on the left. In the transition to the second
level the code developed by the students can be transferred to the MiniBot and
adapted to the real world. The transition to the third level is achieved when
everything works on the MiniBot. Then the students get the opportunity to test
their project developments on the large robot. The three-level curriculum of the
example presented here is shown in Fig. 2. The total 15-week course is divided
into three blocks of equal length. The lower part of the blocks lists the main
course contents. In the following, the two courses forming the formal framework
of the robotic projects are first outlined in terms of content. Building on this,
the concepts and contents of the three levels are described in detail.
Fig. 2. The timeline of the project course is divided in three blocks. Students start
in the simulation, continue with the MiniBot and can end up programming Myon.
Contents of the student tasks are listed in the lower part of the blocks.

2.1 University Courses

The Humanoid Robotics project module forms the formal framework for the
students’ project work. The details for the WeBots simulation environment (see
next section) and the simulated worlds and robots in it are presented and dis-
cussed here. Additionally, the hardware and programming details of the robots
are introduced. Students can carry out projects alone or in small groups of up to
three people. The examination consists of three videos, which are due at the end
of the respective blocks. In these videos, the project group members describe
their current progress, any problems and their solutions. The expected content
relates to the content of the block, but of course also to the intended projects. In
the first five weeks only the simulation is used. Here the students are given tasks
that build on each other in order to get to know the simulation environment and
the behavior of the robots better. This already includes the first algorithms for
robot localization and map creation (Durrant-Whyte and Bailey 2006), which
are implemented and tested as part of the tasks. The students also get to know
extensions and improvements of the algorithms, which they can later implement
and test as possible projects. In the following five weeks, the students who have
successfully submitted their first video will each receive a MiniBot. The first
tasks are to let the real MiniBot balance and to port the code that has already
been written for orientation. Here it is of particular didactic relevance to over-
come the difference between the simulated world and the real world, where, for
example, sensor values are always noisy. In the course of the second block, the
students are encouraged to come together in groups and to develop project ideas.
In the second video, at the end of the second block, the MiniBot is expected to
balance, move when switched on and form a map. A sketch and first develop-
ments regarding the project ideas are also expected here. Individual performance
can vary greatly, which is why there are only a few predetermined expectations.
The third block now consists of the implementation of the project ideas on the
MiniBot and, in the case of successful projects, the adaptation of the code for
Myon. The lecture Machine Learning deals with the required algorithms. The
contents are adapted to the timeline in Fig. 2 to enable successful projects. Here,
too, there is a division into three subject blocks. The first block deals with the
mathematical basics of the course (probability theory, graph theory, particle
Fig. 3. The C-Program in the middle is written by the students in the course of their
projects. There are different SDKs for all platforms available, since they all have dif-
ferent hardware and therefore different implementations. These differences are hidden
below the SDKs.

filter, etc.). In addition, robot localization and map building (SLAM) are dealt with in detail. The characteristics of the different robotic platforms, for
example, the specific features of the sensor values, are taken into account here.
The second part deals with neural networks in general and the visual processing
with the help of these networks in particular. Architectures that are suitable for
mobile robot applications are examined in more detail. These are so-called lightweight networks, such as MobileNet (Howard et al. 2017) and YOLO (Redmon
et al. 2016). By the end of the second block, the students have got to know all
the algorithms for their projects. In the third part of the lecture, further algo-
rithms from machine learning are presented, but they have no direct relation
to the projects (e.g. regression, decision trees, boosting, etc.). Platform-specific
SDKs (Software Development Kits) have been developed to make the transitions
between simulation, MiniBot and Myon as easy as possible. The same function-
alities, independent of the robotic platform, are available to the students. These
are implemented differently in the background on the different platforms. In the
course of the lectures, however, it will be discussed exactly where the differences
are and what to look out for (e.g. different number of sensors, different pro-
cessing capabilities). Figure 3 shows the merging of the different platforms on a
conceptual level. The common basis is always the C code, which is created by
the students as part of their projects. Depending on which platform they want
their code to run on, a different SDK is integrated. This enables easy porting of
the code between all platforms once the SDKs are available. The details of the
platforms are described in more detail below.
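
To make the role of the SDK layer in Fig. 3 more concrete, the following sketch illustrates the platform-abstraction idea. It is purely conceptual and written in Python for brevity: the actual student programs and SDKs described in this paper are C code, and all names used here (RobotSDK, read_sensors, set_wheel_speeds, and the placeholder sensor values) are invented for the example.

    # Conceptual sketch of the platform abstraction behind Fig. 3.
    # The real SDKs are C libraries; every name here is hypothetical.
    from abc import ABC, abstractmethod

    class RobotSDK(ABC):
        """Common interface the student code is written against."""
        @abstractmethod
        def read_sensors(self) -> dict: ...
        @abstractmethod
        def set_wheel_speeds(self, left: float, right: float) -> None: ...

    class SimulationSDK(RobotSDK):
        """Backend that would exchange data with the WeBots controller."""
        def read_sensors(self) -> dict:
            return {"tof": 0.42, "gyro": 0.0}   # placeholder values
        def set_wheel_speeds(self, left: float, right: float) -> None:
            pass                                # would forward motor commands to the simulator

    class MiniBotSDK(RobotSDK):
        """Backend that would drive the real MiniBot hardware."""
        def read_sensors(self) -> dict:
            return {"tof": 0.40, "gyro": 0.01}  # placeholder values
        def set_wheel_speeds(self, left: float, right: float) -> None:
            pass                                # would command the Dynamixel actuators

    def control_step(sdk: RobotSDK) -> None:
        """Student code: identical for every platform."""
        sensors = sdk.read_sensors()
        speed = 0.0 if sensors["tof"] < 0.2 else 0.5  # stop in front of an obstacle
        sdk.set_wheel_speeds(speed, speed)

    control_step(SimulationSDK())   # the same call works with MiniBotSDK() or a Myon backend

The same control_step function can thus be moved between the simulation, the MiniBot and Myon by swapping the backend, which mirrors the porting path described above.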
Fig. 4. The initial simulated world for the MiniBot experiments. The walls within a two-square-meter rectangle can be added or removed according to a grid.

2.2 Simulation

To maximize the efficiency of learning, the concept described above was devel-
oped in close collaboration with students. To this end, selected students were
given the opportunity to work on specific sub-aspects. One of these sub-aspects
was the development and construction of the MiniBot (see Sect. 2.3). Another
part was to create a simulation of the MiniBot (see Fig. 1 on the left) and a
corresponding world in the simulation environment WeBots. As can be seen in
Fig. 4, this world consists of a set of walls placed on a table. Within the rectangle
of two square meters, the walls can be added or removed as desired, creating
different room geometries. The simulation corresponds exactly to a model in the
university’s lab, where the real MiniBot can drive around. The real model also
consists of movable walls of the same size as in the simulation. The algorithms
tested in the simulation can therefore be tested in the exact same environment
in reality. Accordingly, the simulated MiniBot is also an exact replica of the real
robot. This includes shape, weight and components, but also the driving behavior
and sensor qualities. As can be seen in Fig. 3, the students’ C program commu-
nicates with the simulation environment via a TCP/IP connection. In order for
the communication to work, a C program is running in the WeBots Simulation,
which executes a TCP/IP server and handles communication (this includes read-
ing sensor values and writing motor commands). The TCP/IP server sends the
sensor values (including camera image) to the central C program and receives
the motor commands from it.
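
The cyclic exchange with the WeBots-side server can be pictured as a simple read-sensors/compute/write-motors loop. The wire format and port are not specified in the paper, so the sketch below (in Python rather than the C used in the course) assumes a line-based JSON exchange on an invented local port purely to illustrate the loop structure; it is not the actual protocol of the course SDK.

    # Illustrative client loop for the sensor/motor exchange with a WeBots-side
    # TCP/IP server. Address, port and message format are assumptions.
    import json
    import socket

    HOST, PORT = "127.0.0.1", 10020   # hypothetical address of the simulation server

    def compute_motor_commands(sensors):
        """Placeholder for the students' control logic."""
        return 0.5, 0.5

    with socket.create_connection((HOST, PORT)) as sock:
        stream = sock.makefile("rw")
        for _ in range(100):                          # a fixed number of control cycles
            sensors = json.loads(stream.readline())   # server sends the current sensor values
            left, right = compute_motor_commands(sensors)
            stream.write(json.dumps({"left": left, "right": right}) + "\n")
            stream.flush()                            # motor commands go back to the server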

2.3 MiniBot
When the students are ready to transition from the simulation to the real world,
they receive a MiniBot as shown in the middle of Fig. 1. The central computing
unit of the MiniBot is the Kendryte K210 with a RISC-V dual-core processor and the so-called Kernel Processing Unit (KPU), which accelerates the computation
of Convolutional Neural Networks. The K210 is mounted on the Maix BiT board
from Sipeed. The set from Sipeed includes a microphone, a camera, an SD card slot and an LCD display. In addition, a gyro sensor, a time-of-flight sensor, some
switches and an infrared receiver for a remote control were built onto the MiniBot
(see Fig. 5 on the left for all parts of one robot). The battery was placed at the
very top for better weight distribution. The drive is provided by two Dynamixel
XH430 from Robotis. The wheels, as well as the body of the MiniBot, are made from acrylic (PMMA) with a laser cutter. Figure 5 also shows the charger and
power supply for the battery. To run the same central C code from Fig. 3 on the
MiniBot, it is compiled into a BIN file using the MiniBot’s SDK. This BIN file is
then flashed onto the K210. It is also possible to place neural networks in the SD
card’s memory and then run them using the KPU. However, the implementation
details are hidden in the background in the respective SDKs.

2.4 Myon
Finally, if the students handle the MiniBot successfully as well, they can test their
code on the large robot Myon (see Fig. 1 on the right). For this purpose, a student
developed a drivable lower body, the prototype of which can be seen in Fig. 5 on
the right side. This development is part of the student’s final thesis. Also, part
of the work is to bring Myon into WeBots, including the mobile lower body. So
far the head with its functionalities is already realized in the simulation. Due to
the mobile lower body, the possible behaviors of the large robot correspond to
those of the MiniBot. The algorithms developed by the students can therefore be
brought to the large robot without having to develop everything from scratch.
Of course, minor adjustments are necessary. For example, Myon has eight time
of flight sensors, whereas the MiniBot has only one. Also, the size of the wheels
in relation to the body size is completely different. However, the adaptation of
the parameters is part of the didactic unit of the last block and is an important
lesson to learn.

3 Integration of Research and Education


In addition to realizing their own project ideas, the students of the current
semester were involved in the research project RoSen (Robots in Senior Living
Facilities). In the following, the project process and its goals are described first.
Then follows a description of the students’ role in the project and the advantages
and challenges it poses to them.

Project RoSen
The RoSen project is a cooperative project between two universities and a hous-
ing cooperative that operates senior housing facilities. With the help of con-
versations between students and residents of the senior housing facilities, the
Fig. 5. Left: parts of a MiniBot, which are assembled by students in collaboration with teachers. Right: Myon with the prototype of its drivable lower body.

project aims to discuss wishes and needs of the participants. These are then
tested for their technical feasibility. Together with the residents and technical
experts, the students develop possible application scenarios to create a win-win
situation. After prioritization and further elaboration of the ideas, the agreed-upon
applications are implemented and the robots are then brought into interaction
with the seniors for defined time periods. These interactions are observed and
evaluated. In addition, there are surveys of all participants on the fulfillment of
the expected effort and benefits. The primary goal, then, is to gather and ana-
lyze the needs and expectations of potential users, as well as the young junior
technologists.

Integration into the Project Module

In the third block of the project course, one of the students’ tasks is to contact
residents of the senior citizens’ residence. Due to the health situation, the planned
interviews will not be conducted in person, but by letter or telephone. For this
purpose, the students write a welcome letter to selected seniors, in which they
introduce themselves, formulate their concerns and make suggestions on how
to proceed. Optionally, the same is done by telephone. In the course of several
conversations, the above-mentioned project goals are worked out together with
the seniors. In the module, data protection, appropriate forms of approaching
and the methods for finding the needs of potential users are discussed. This
gives the students the opportunity to align their robotics projects with the RoSen
project and the wishes and needs of the seniors. They can test and evaluate their
ideas directly in interaction with users. They can try out the algorithms they have
learned and implemented (such as robot localization and recognition of visual
impressions) in a realistic environment. Direct involvement in active research also
provides a unique insight. Interested students can also participate in the project
after the end of the semester in the context of final theses. Nevertheless, it is also
a challenge for those involved. It is not just a technical event where everyone
can program for themselves. The social component through direct contact with
potential users opens up completely new perspectives.

3.1 Results

After the first semester of the presented university courses we can report first
results. All participating students (17 people) have completed the course and all
of them have realized a project. We observed a typical distribution of performance levels: some easier projects, most of them with an intermediate level of difficulty, and some very challenging ones. Some results of the projects will now
be incorporated into the hardware and software, so that the base system for the
next generation of students already starts at a higher level. One third of the
students this semester are interested in continuing the projects in their bachelor
thesis.

4 Conclusion

When young recruits learn about complex machines, it usually takes years before
they are good enough to carry out larger projects. It’s the same with humanoid
robots. Understanding and being able to enable these very complex machines to
do something is a tedious task. This work presented how such complex projects
can nevertheless be realized in a short time. The described educational frame-
work meets all the needs of the learners. By dividing the study contents into sev-
eral difficulty levels, projects with different demands can be realized. Thus, every
learner has a chance to successfully complete a project. For the specific example
of humanoid robotics projects, three of these levels were introduced. Students
were first taught to program the virtual version of the small robot MiniBot in
the simulation environment WeBots. Each student who was able to successfully implement the algorithms from the accompanying machine learning lecture received a real MiniBot. In terms of functionality, this MiniBot is a small
copy of the large robot Myon. The transfer from simulation to the real world
has many obstacles. Those who successfully master these obstacles and enable
the MiniBot to find its way around are allowed to test their own developments
on the large robot. To enable these seamless transitions between platforms, the
MiniBot was developed along the lines of Myon. A virtual version of the MiniBot
was then built in WeBots. A virtual version of the Myon is also nearly complete.
In addition, SDKs for all platforms have been developed, allowing all platforms
to be programmed with the same code. Through the RoSen research project,
students can directly try out what they have learned in an application-oriented
environment and practice initial customer contact. In each level of the curricu-
lum there are enough topics that can be worked on by the learners so that they
can contribute to the complete project in a meaningful way.
References
Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati, B., Tanaka, F.: Social robots
for education: a review. Sci. Robot. 3(21), eaat5954 (2018)
Danahy, E., Wang, E., Brockman, J., Carberry, A., Shapiro, B., Rogers, C.B.: Lego-
based robotics in higher education: 15 years of student creativity. Int. J. Adv. Robot.
Syst. 11(2), 27 (2014)
de Gabriel, J.M.G., Mandow, A., Fernández-Lozano, J., Garcı́a-Cerezo, A.: Mobile
robot lab project to introduce engineering students to fault diagnosis in mechatronic
systems. IEEE Trans. Educ. 58(3), 187–193 (2015)
Donnermann, M., Schaper, P., Lugrin, B.: Integrating a social robot in higher
education–a field study. In: 2020 29th IEEE International Conference on Robot and
Human Interactive Communication (RO-MAN), pp. 573–579. IEEE (2020)
Durrant-Whyte, H., Bailey, T.: Simultaneous localization and mapping: part I. IEEE
Robot. Autom. Mag. 13(2), 99–110 (2006)
Čehovin Zajc, L., Rezelj, A., Skočaj, D.: Teaching intelligent robotics with a low-cost
mobile robot platform (2015)
Gouaillier, D., et al.: Mechatronic design of NAO humanoid. In: 2009 IEEE Interna-
tional Conference on Robotics and Automation, pp. 769–774. IEEE (2009)
Gross, H.-M., Scheidig, A., Müller, S., Schütz, B., Fricke, C., Meyer, S.: Living with a
mobile companion robot in your own apartment-final implementation and results of
a 20-weeks field study with 20 seniors. In: 2019 International Conference on Robotics
and Automation (ICRA), pp. 2253–2259. IEEE (2019)
Hild, M., Siedel, T., Benckendorff, C., Thiele, C., Spranger, M.: Myon, a new humanoid.
In: Language Grounding in Robots, pp. 25–44. Springer (2012)
Howard, A.G., et al.: MobileNets: efficient convolutional neural networks for mobile
vision applications. arXiv preprint arXiv:1704.04861 (2017)
Meindl, P., et al.: A brief behavioral measure of frustration tolerance predicts academic
achievement immediately and two years later. Emotion 19(6), 1081 (2019)
Metta, G., Sandini, G., Vernon, D., Natale, L., Nori, F.: The icub humanoid robot:
an open platform for research in embodied cognition. In: Proceedings of the 8th
Workshop on Performance Metrics for Intelligent Systems, pp. 50–56 (2008)
Michel, O.: Cyberbotics Ltd. Webots™: professional mobile robot simulation. Int. J.
Adv. Robot. Syst. 1(1), 5 (2004)
Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: Unified, real-
time object detection. In: Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, pp. 779–788 (2016)
Rosenberg-Kima, R.B., Koren, Y., Gordon, G.: Robot-supported collaborative learn-
ing (RSCL): social robots as teaching assistants for higher education small group
facilitation. Front. Robot. AI 6, 148 (2020)
Education Tool for Kinematic Analysis:
A Case Study

Maria Cristina Valigi1 , Silvia Logozzo1 , and Monica Malvezzi2(B)


1 Department of Engineering, University of Perugia, 06125 Perugia, Italy
{mariacristina.valigi,silvia.logozzo}@unipg.it
2 Department of Information Engineering and Mathematics, University of Siena,
53100 Siena, Italy
monica.malvezzi@unisi.it

Abstract. This work describes the part of the Applied Mechanics course con-
cerning the kinematics of mechanisms in the Mechanical Engineering master’s
degree course of the University of Perugia. The kinematic study of mechanisms is
essential for mechanical engineers, but also for the design and control of robots and
intelligent machines. Traditionally in mechanical engineering courses this subject
is dealt with methods that are not easy to generalise and visualise by students. In
the activity described in this paper the lecturer, after having dealt with the funda-
mental properties of closed and open kinematic chains, with particular attention to
problems of motion, presented an algebraic method for determining the position,
velocity and acceleration of 1-DoF (degree of freedom) mechanisms through explanatory examples, and asked the students to individually analyse mech-
anisms employed in robotic systems, providing a complete kinematic analysis
integrated with the dynamic animation of the mechanism.
This paper reports the response of the students, who were asked to select a
mechanism of their choice to test the application of the taught analysis method.
In particular, one student’s exercise is described as an explanatory case, which
highlights the importance of kinematics in educational robotics.

Keywords: Education tool · Kinematic analysis · Robotics · Simulation tool ·


Distance learning

1 Introduction

The role of engineers is rapidly changing in the new industry 4.0 framework and requires
cross-functional and multidisciplinary knowledge and skills that combine IT and pro-
duction knowledge [1]. In his 2017 talk, Peter Fisk defined a list of features of future
education: personalized in time, space, and tools, project and field experience based,
peer-to-peer, allowing data interpretation. Universities have an important role in fulfill-
ing this need and the teaching methods adopted also for basic courses need to be revised
according to this new perspective [2].

The need for innovative education tools, allowing distance learning, possibly through
the use of simulation environments, has become even more important during the recent
lockdown period, due to the COVID-19 pandemics [3].
In this work, a specific path followed during the Applied Mechanics course, focused
on the kinematics of mechanisms, is described. It dealt with methodologies which
brought out the interest of some students in the study of the mechanics of robots. This
experience showed how an applied mechanics didactic path, carried out during a gen-
eral mechanical engineering MSc degree course, which does not explicitly present a
curriculum in robotics, can be an incentive and an introduction to robotics [4].
Traditionally, the course of applied mechanics is one of the fundamental courses in
mechanical engineering MSc degree courses, introducing the study of common com-
ponents found in various categories of machines, regardless of the specificity of each
machine [5–7]. Some examples are mechanical seals and bearings [8, 9], cams [10–12],
mechanisms and gears [13–15]. The study can be approached from a purely kinematic
point of view, regardless of the forces which cause the motion [16], or from a dynamic
point of view, considering the motion as the effect of forces acting on the machine,
and evaluating the onset of vibrations [17]. All these studies can be approached both
neglecting friction and considering the tribological conditions [18].
Notwithstanding the availability of computational resources and the development
of efficient algorithms and tools for the kinematic and dynamic analysis of mechanical
systems, in several academic courses [17] traditional analysis methods, often based on graphical constructions, are still quite widespread [5–7] (see Fig. 1). Even if, thanks to
the availability of CAD software, the solution of mechanics problems with traditional
graphical methods has been improved, these methods are still time-consuming, they are limited to design and mechanical analysis, and they are not well suited to integration with other contents, for example the design of mechatronic components and/or
mechanism control system. Such an integration is desirable in robotic systems, in which
the mechanical structure behaviour is strictly related to the sensorimotor and control
systems. A more computationally oriented perspective has been employed in robotics
teaching: several software frameworks have been realised for education and training
purposes, mostly at an academic level. They were recently revised by Cañas et al. in
[19], and among them, the most famous one is the Robotics Toolbox by Peter Corke
[4], other examples are SynGrasp [20], devoted to the simulation of robotic grasping,
and ARTE (A Robotics Toolbox for Education), allowing the study of industrial robotic
manipulators [21]. In all these toolboxes the mechanical structure of robots is mostly
limited to serial structures that can be modelled with standard kinematic tools, as for
instance the Denavit Hartenberg representation. Other recent simulation tools developed
for kinematics and educational robotics are presented in literature, such as [22], where
the purpose was to analyse the kinetostatic effects of singularities in parallel robots
to evaluate change in resistance or loss of control. In [23], authors describe a virtual
laboratory of parallel robots developed for educational purposes. The tool is named
LABEL and is aimed at the teaching of kinematics of parallel robots, solving the inverse
and direct kinematic problem of three kinds of robots. In [24], authors deal with the
teaching of parallel robots introducing an intelligent computer-aided instruction (ICAI)
modelling method. In [25], the authors propose an educational example to design a delta robot, including a task of kinematic analysis.
The teaching methodology adopted in the activity described in this paper is inspired
to the one described in the book [26] which, based on known 1-DoF planar mechanisms
taken as examples, shows how to deal with a kinematic study in terms of position,
velocity, and acceleration analyses, throughout the application of an algebraic method
and the use of matrix calculation programs, such as Matlab. At the end of the theoretical
teaching phase, and after the application of the method to some examples of mechanisms,
the students were asked to choose a real mechanism at will, and to perform the analysis
of position, velocity and acceleration.
The response from the students was satisfactory, and their interest was very high.
Above all, it was very impressive to see that many of them chose to analyse mechanisms
for robotic applications. This paper presents one of the students’ studies, as an example,
which demonstrates how educational robotics is an integral part of applied mechanics
[27]. After this brief introduction, the first section of the paper reports some references
to the kinematics of the rigid body; the second section describes one of the examples
shown during the course, to explain the algebraic method for the analysis of 1-DoF planar
mechanisms; finally, the third section reports one of the most significant examples of
the students’ works.

Fig. 1. Acceleration analysis of a planar mechanism with graphical methods from [7]
2 Kinematics of the Rigid Body: Some Hints and References


In applied mechanics kinematics refers to the study of the motion of point particles,
bodies, articulated systems, or mechanisms.
In mechanism kinematics, the geometrical configurations of the system during
motion must be congruent with the motion itself and with the constraints; the forces
acting on the bodies are not considered. As known, any free rigid body in a three-
dimensional (3-D) space has six degrees of freedom, thus six parameters are needed to
define its position. One possible set of parameters that could be used is composed by
three lengths, (x, y, z) and three angles (θx, θy, θz). In the general case, a rigid body free
to move in a reference frame will have a complex motion, which is a combination of
a simultaneous rotation and translation. For simplicity, in this paper only planar cases
(two-dimensional (2-D) cases) will be presented.

2.1 Configuration Analysis

The configuration analysis has the objective of determining the positions of the centers
of joints, centers of mass and the rotation angles of all the bodies that constitute the
mechanism as a function of a set of input parameters whose dimension is defined by the
number of DoF of the mechanism. Typically, in planar mechanisms joints are represented
as nodes and bodies are schematized as links between two joints.
From the mathematical point of view, the problem consists in solving a non-linear
algebraic system.

2.2 Velocity and Acceleration Analysis

Once the configuration problem has been solved, and the velocities and accelerations of the input elements are provided, the next step in the kinematic
analysis consists in defining velocities and accelerations of all the mechanism’s elements.
The solution of the velocity and acceleration problem is simpler than the position anal-
ysis; in fact, the problem consists in solving a linear system of equations. For velocity, the fundamental kinematic relationship of the rigid body is used, while for acceleration Rivals' theorem is applied. In some mechanisms it is convenient to introduce relative motions and therefore to include relative velocities, accelerations and the Coriolis acceleration term.
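
For reference, the relationships mentioned above can be written as follows (standard rigid-body kinematics; the notation may differ slightly from the course notes). For two points A and B of the same rigid body, with angular velocity \omega and angular acceleration \alpha = \dot{\omega}:

    v_B = v_A + \omega \times (B - A)
    a_B = a_A + \alpha \times (B - A) + \omega \times \big( \omega \times (B - A) \big)

where the last term reduces to -\omega^2 (B - A) in the planar case. When a moving reference frame is used, the absolute acceleration additionally contains the Coriolis term 2\, \omega \times v_{rel}.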

2.3 Kinematic Simulation

The kinematic simulation comprehends all the three steps previously introduced and
allows to determine the motion characteristics (configuration, velocity, acceleration)
of all the system’s elements, once the motion laws of the input elements are known.
The kinematic simulation provides useful information about the mechanism’s motion;
therefore, it can be employed for the trajectory analysis of each element, with the purpose
of identifying any collision, interference, etc.
3 Sample Mechanism and Educational Tool for Kinematic Analysis


This section illustrates one of the many exercises presented during the course, as example
for the students to deal with the solution of the kinematic problem by means of the
algebraic method applied to a sample planar mechanism [26]. The studied mechanism is
represented in Fig. 2a. The input parameters were the lengths of the links, the positions
of the joint nodes connected to the frame, the angle ϕ between the link AB and the
horizontal axis in the initial position and its angular velocity. The exercise requires the
position analysis for a complete rotation of the link AB with ϕ ∈ [0°, …, 360°], and the
velocity and acceleration analysis of the mechanism.
The reference frame is chosen with its origin O coincident with A, x and y as
horizontal and vertical axes, and the third axis in compliance with the right-hand rule.
For a generic point K, we indicate with rK a three-dimensional vector containing its
coordinates expressed w.r.t. the above defined reference frame. The position of the joint
C can be determined by solving a nonlinear problem, considering the intersection of two circumferences: the first with center in B and radius BC, and the second with center in D and radius DC, as shown in Fig. 2b).

Fig. 2. a) Sample mechanism: links, joints and input data; b) determination of the configuration
of point C

The system can be analytically formulated by writing the equations of the two
circumferences:

DC^2 = (x_C - x_D)^2 + (y_C - y_D)^2,   BC^2 = (x_C - x_B)^2 + (y_C - y_B)^2   (1)

It is clear that, since the system is not linear, multiple solutions can appear and should be managed. In particular, by solving the system of the two equations, the points 1 and 2 shown in Fig. 2b) are determined, respectively C_1 = [x_{C1}, y_{C1}, 0] and C_2 = [x_{C2}, y_{C2}, 0]. The point corresponding to the effective point C of the mechanism is chosen by selecting the point with the lower y, in this case C_2 since y_{C2} < y_{C1}.
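
The course material implements this step in Matlab; as a compact illustration, the numpy sketch below performs the same two-circle intersection and the selection of the solution with the lower y coordinate. The coordinates and link lengths are invented for the example, since the actual input data of the exercise are not reproduced here.

    # Position analysis of joint C: intersect the circle centred in B (radius BC)
    # with the circle centred in D (radius DC), then keep the lower-y solution.
    # All numerical values are invented for illustration.
    import numpy as np

    def circle_intersections(c1, r1, c2, r2):
        """Return the two intersection points of two circles (assumed to intersect)."""
        c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
        d = np.linalg.norm(c2 - c1)                   # distance between the centres
        a = (r1**2 - r2**2 + d**2) / (2 * d)          # distance from c1 to the chord midpoint
        h = np.sqrt(r1**2 - a**2)                     # half-length of the common chord
        mid = c1 + a * (c2 - c1) / d
        perp = np.array([c1[1] - c2[1], c2[0] - c1[0]]) / d
        return mid + h * perp, mid - h * perp

    B, D = (0.30, 0.10), (0.60, 0.00)                 # hypothetical positions of joints B and D [m]
    BC, DC = 0.25, 0.35                               # hypothetical link lengths [m]

    C1, C2 = circle_intersections(B, BC, D, DC)
    C = min((C1, C2), key=lambda p: p[1])             # keep the solution with the lower y
    print("C =", C)

The same pattern applies to joints E and F below, with the second circle replaced by the collinearity and guide-track constraints of Eqs. (2) and (3).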
Similarly, the position of the joint E can be computed by solving the following
system, as graphically reported in Fig. 3, with the following relationships:

CE^2 = (x_E - x_C)^2 + (y_E - y_C)^2,   (y_C - y_B)/(x_C - x_B) = (y_E - y_B)/(x_E - x_B)   (2)

This system also has two solutions, indicated with 1 and 2 in Fig. 3a), respectively, corresponding to the two points E_1 = [x_{E1}, y_{E1}, 0] and E_2 = [x_{E2}, y_{E2}, 0]. The point corresponding to the effective point E of the mechanism is the one with the lower x coordinate, therefore in this case point E_1 is chosen, since x_{E1} < x_{E2}.
The joint F can move along a guide track parallel to the x axis, therefore the value
of yF is known.

Fig. 3. a) Determination of point E b) Determination of point F

Writing down the following relationship, where x_E, y_E and y_F are known,

EF^2 = (x_E - x_F)^2 + (y_E - y_F)^2   (3)

x_F can be determined (Fig. 3b). Solving the equation, also in this case two solutions x_{F1} and x_{F2} can be found. The point corresponding to the point F of the mechanism is the one with the lower x_F value; in this case x_{F1} is chosen since x_{F1} < x_{F2}.
The velocity and acceleration problems can then be formulated and solved analogously.
Based on the described method, a series of Matlab scripts developed for educational
purposes was set up to evaluate the kinematics of the mechanism in terms of position,
velocity and acceleration during a complete rotation of member 1, i.e. varying the angle ϕ
in the range 0° ≤ ϕ ≤ 360°, and to provide a graphical representation of the mechanism
movement during the rotation of member 1 (as shown in Fig. 4 for the sample mechanism).

Fig. 4. Position analysis of the sample mechanism
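
To give an impression of what such a script can look like, the sketch below sweeps ϕ over a full rotation in Python/NumPy/Matplotlib (the original course material is in Matlab); the dimensions, the constant angular velocity omega and the restriction to point C are assumptions made only for illustration, and the finite-difference estimates of velocity and acceleration are a quick numerical stand-in for the analytical formulation used in the course. The circle_intersection helper from the previous sketch is repeated so that the snippet runs on its own.

```python
import numpy as np
import matplotlib.pyplot as plt

def circle_intersection(c1, r1, c2, r2):
    """Two intersection points of the circles with centers c1, c2 and radii r1, r2."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    d = np.linalg.norm(c2 - c1)
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = np.sqrt(max(r1**2 - a**2, 0.0))
    m = c1 + a * (c2 - c1) / d
    n = np.array([-(c2 - c1)[1], (c2 - c1)[0]]) / d
    return m + h * n, m - h * n

# Illustrative (assumed) dimensions, not the input data of the exercise in Fig. 2.
AB, BC, DC = 1.0, 3.0, 2.5
D = np.array([3.0, -1.0])
omega = 2.0                                     # assumed constant angular velocity of AB [rad/s]

phis = np.deg2rad(np.arange(0.0, 360.0, 1.0))   # sweep the crank angle over a full rotation
C_traj = []
for phi in phis:
    B = AB * np.array([np.cos(phi), np.sin(phi)])
    P1, P2 = circle_intersection(B, BC, D, DC)
    C_traj.append(P1 if P1[1] < P2[1] else P2)  # keep the lower-y solution (point C)
C_traj = np.array(C_traj)

# Velocity and acceleration of C approximated by finite differences over the sweep.
dt = (phis[1] - phis[0]) / omega
C_vel = np.gradient(C_traj, dt, axis=0)
C_acc = np.gradient(C_vel, dt, axis=0)

plt.plot(C_traj[:, 0], C_traj[:, 1], label="trajectory of C")
plt.axis("equal")
plt.legend()
plt.show()
```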

4 Preliminary Considerations on Course Outcomes and Teaching Methods

From the didactical point of view, the formulation briefly introduced in the previous
section for the kinematic analysis has some relevant advantages that made it particularly
effective in the course, especially in the last semester in which the lectures were forced
to be on-line:

• With respect to more efficient but complex analysis tools [28, 29], such as those imple-
mented in multibody simulators, it is still quite intuitive and easy to visualise for the
students and requires only very basic mathematics and physics knowledge.
• The formulation is provided in an analytical way that can be easily implemented
directly by students with very basic programming skills, for instance with Matlab.
• The implementation allows the students to immediately visualise the results of the
analysis, the main features of the motion and the main properties of kinematic
parameters.

The course was attended on-line by approximately 50 students in the first
semester of Academic Year 2020/2021. During the first lessons, the theoretical fun-
damentals of kinematics were explained step by step, and in parallel the taught methods
were implemented in practical exercises and examples, also based on robotic applica-
tions, and solved by the lecturer. Later, the same exercises were proposed to the students
with different input data, and they solved the mechanisms in autonomy. At the end
of the course, the students were asked to choose a mechanism implemented on a real
mechanical system and use it to practice the methods. Different types of mechanisms
were chosen; a graphical representation of the distribution of the applications is reported
in Fig. 5. An example is reported in the following section.

Fig. 5. Classification of the projects selected by students according to the application.
The chart distinguishes mechanisms for automotive and airplane applications, robotic
mechanisms, mechanisms for operative machines, mechanisms for common use, and
mechanisms for automatic machines (with shares of 12%, 14%, 18%, 22% and 34%
across these categories).

5 Case Study: A Student’s Work

This section reports one of the most significant works developed by the students in the
robotic field as a case study: a part of the walking mechanism created by Theo Jansen,
a Dutch artist who has developed kinetic art experiences (New forms of life, as he
defines them) since 1990.
Some of his most famous artworks are the Strandbeesten (beach animals,
Fig. 6), a sort of animated skeleton whose propulsion originates directly from
wind energy. The motion of these mechanisms is governed by complex systems, made
up of mechanical members connected by joints. Although they have large dimensions,
these structures are built using flexible plastic pipes, nylon wires and adhesive tape [30].

Fig. 6. An example of Strandbeesten from [30].

These complex kinematic systems can reach large sizes, generally from 3–4 m up to
12 m. The large dimensions provide large surfaces for the wind to act on, resulting in
considerable propulsion forces.

Fig. 7. Strandbeesten leg simulation, a) Initial position b) position analysis.

Fig. 8. Strandbeesten leg simulation, velocity analysis.

Fig. 9. Strandbeesten leg simulation, acceleration analysis



The case study concerns the position and kinematic analysis of the mechanism of a
single limb. This is a single-DoF planar mechanism (Fig. 7a), composed of 11 links and
8 joints. The reference frame has its origin in A, which is a stationary point. ϕ is the
angle between the horizontal direction through A and the link AB and represents
the single DoF of the motion. Applying the method presented in the previous section,
the position analysis was performed as shown in Fig. 7b and the kinematic simulation
was carried out, determining velocities (Fig. 8) and accelerations (Fig. 9) of the main
points of the mechanism.

6 Conclusions and Perspectives


The aim of this paper was to describe the implementation of a formulation for the kine-
matic analysis of mechanisms that presents several advantages from the educational
point of view with respect to other, more traditional methodologies (often based on
complex geometrical constructions or on not very intuitive algebraic formulations). The
formulation is based on an algebraic method that can easily be implemented in simu-
lation procedures directly by students with basic knowledge of mathematics, physics,
mechanics and programming, and it yields intuitive and insightful results in terms of
graphics and animation that help the students to understand mechanism kinematics and
to apply it in several contexts related to Industry 4.0, such as robotic devices.

This formulation was presented to the students during the applied mechanics course of the
MSc in mechanical engineering, and one of the most interesting exercises individually
performed by the students was reported in this paper. The case study chosen by one of
the students was based on the famous kinematic mechanisms created by Theo Jansen:
enormous structures built up with flexible links and moved by wind propulsion. Thanks
to the methodology for kinematic analysis illustrated in the first part of the course and
the procedures developed for simple mechanisms, the kinematic simulation of a single
limb of the mechanism was performed autonomously by the student, and results in terms
of position, velocity and acceleration analysis were presented.

This paper demonstrates how an interactive approach leveraging case studies related to
engaging problems, such as robotic design, can stimulate the creativity and problem-solving
skills of students even in basic and theoretical courses like applied mechanics. During the
last year the educational activities at schools and universities have changed dramatically
due to COVID-19 pandemic restrictions: the physical presence of students in classrooms
and laboratories has been greatly limited, and this could affect the learning process. The
type of educational tools and activities presented in this paper has proven to be par-
ticularly effective and versatile also for distance learning and therefore helped to
limit the drawbacks of the past months.

As a short-term follow-up of this activity, we are going to review all the projects realised
by the students and collect them in a library accessible on-line, which will be the basis for
the development (mid-term objective) of an educational simulation toolbox of mechanisms
with a particular focus on robotics. It is worth observing that some basic concepts (e.g.
configuration analysis of simple mechanisms) could also be targeted at high-school students
from scientific and technological schools. As a second mid-term objective we will make
teaching material available, including details on the mathematical model, simulations and
demonstrations, to high school students and teachers.

References
1. Onar, S., Ustundag, A., Kadaifci, Ç., Oztaysi, B.: The changing role of engineering education
in industry 4.0 Era. In: Industry 4.0: Managing the Digital Transformation, pp. 137–151.
Springer, Cham (2018). https://doi.org/10.1007/978-3-319-57870-5_8
2. Fisk, P.: Education 4.0 (2017)
3. Birk, A., Dineva, E., Maurelli, F., Nabor, A.: A robotics course during COVID-19: lessons
learned and best practices for online teaching beyond the pandemic. Robotics 10, 5 (2021)
4. Corke, P.: Robotics, Vision and Control: Fundamental Algorithms in MATLAB, 2nd edn.,
vol. 118. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-54413-7
5. Callegari, M., Fanghella, P., Pellicano, F.: Meccanica Applicata alle Macchine. UTET
Università
6. Meneghetti, U., Maggiore, A., Funaioli, E.: Lezioni di meccanica applicata alle macchine.
Fondamenti di meccanica delle macchine, Pàtron, vol. 1 (2005)
7. Papini, S., Malvezzi, M., Falomi, S.: Esercitazioni di meccanica applicata alle macchine.
Cinematica e cinetostatica, Esculapio, vol. 1
8. Valigi, M.C., Braccesi, C., Logozzo, S.: A parametric study on friction instabilities in
mechanical face seals. Tribol. Trans. 59(5), 911–922 (2016)
9. Liu, Z., Wang, Y., Zhao, Y., Cheng, Q., Dong, X.: A review of hydrostatic bearing system:
researches and applications. Adv. Mech. Eng. 9(10) (2017)
10. Yousuf, L.S., Hadi, N.H.: Contact stress distribution of a pear cam profile with roller follower
mechanism. Chin. J. Mech. Eng. 34(1), 1–14 (2021). https://doi.org/10.1186/s10033-021-005
33-y
11. Cardona, S., Zayas, E., Jordi, L., Català, P.: Synthesis of displacement functions by Bézier
curves in constant-breadth cams with parallel flat-faced double translating and oscillating
followers. Mech. Mach. Theory 62, 51–62 (2013)
12. Chen, F.Y.: Mechanics and Design of Cam Mechanisms. Pergamon Press, Oxford (1982)
13. Dudley, D.: Gear Handbook. McGraw-Hill, New York (1962)
14. Baglioni, S., Cianetti, F., Landi, L.: Influence of the addendum modification on spur gear
efficiency. Mech. Mach. Theory 49, 216–233 (2012)
15. Litvin, F.L., Fuentes, A.: Gear Geometry and Applied Theory. Cambridge University Press, Cambridge
(2004)
16. Valigi, M.C., Logozzo, S., Malvezzi, M.: Design and analysis of a top locking snap hook for
landing manoeuvres. In: Niola, V., Gasparetto, A. (eds.) IFToMM ITALY 2020. MMS, vol.
91, pp. 484–491. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-55807-9_55
17. Kelly, S.: Fundamental of Mechanical Vibrations. McGraw-Hill, Singapore (2000)
18. Mang, T., Dresel, W.: Lubricants and Lubrication. Wiley-VCH Verlag GmbH & Co., KGaA
(2007)
19. Cañas, J., Perdices, E., García-Pérez, L.: A ROS-based open tool for intelligent robotics
education. Appl. Sci. 10, 7419 (2020)
20. Malvezzi, M., Gioioso, G., Salvietti, G., Prattichizzo, D.: SynGrasp: a Matlab toolbox for
underactuated and compliant hands. IEEE Robot. Autom. Mag. 22(4), 52–68 (2015)
21. Gil, A.: ARTE: a robotics toolbox for education. Miguel Hernández University (UMH) (2012)
22. Peidró, A., Marín, J.M., Payá, L., Reinoso, O.: Simulation tool for analyzing the kinetostatic
effects of singularities in parallel robots. In: IEEE International Conference on Emerging
Technologies and Factory Automation, ETFA 2019, September 2019
23. Gil, A., Peidró, A., Reinoso, O., Marín, J.M.: Implementation and assessment of a virtual
laboratory of parallel robots developed for engineering students. IEEE Trans. Educ. 57(2),
92–98 (2014)

24. Tan, D.-P., Ji, S.-M., Jin, M.-S.: Intelligent computer-aided instruction modeling and a method
to optimize study strategies for parallel robot instruction. IEEE Trans. Educ. 56(3), 268–273
(2012)
25. Kovar, J., Andrs, O., Brezina, L., Singule, V.: Laboratory delta robot for mechatronic edu-
cation purposes. In: SPEEDAM 2012 - 21st International Symposium on Power Electronics,
Electrical Drives, Automation and Motion (2012)
26. Marghitu, D.B.: Mechanisms and Robots Analysis with MATLAB. Springer, London (2009).
https://doi.org/10.1007/978-1-84800-391-0
27. Baglioni, S., Cianetti, F., Braccesi, C., De Micheli, D.: Multibody modelling of N DOF robot
arm assigned to milling manufacturing. Dynamic analysis and position errors evaluation. J.
Mech. Sci. Technol. 30(1), 405–420 (2016)
28. Shabana, A.: Dynamics of Multibody Systems. Cambridge University Press, Cambridge
(2020)
29. Samin, J., Fisette, P.: Symbolic modeling of multibody systems, vol. 112. Springer, Dordrecht
(2003). https://doi.org/10.1007/978-94-017-0287-4
30. www.strandbeest.com
Evaluating Educational Robotics
Girls and Technology – Insights
from a Girls-Only Team at a Reengineered
Educational Robotics Summer Camp

Bjarke Kristian Maigaard Kjær Pedersen(B), Jørgen Christian Larsen, and Jacob Nielsen

SDU Game Development and Learning Technology, The Maersk Mc-Kinney Moeller Institute,
University of Southern Denmark, Odense, Denmark
bkp@mmmi.sdu.dk

Abstract. In this paper, we present the results from a case study on an annual
extra-curricular summer camp, in which participants (n = 126) engaged in
STEM and Computational Thinking activities, facilitated through the usage of
the micro:bit microcontroller platform. The camp was a reengineered version of
an annual summer camp held by Teknologiskolen; a Danish non-profit organisa-
tion offering weekly classes in technology. The focus of the reengineering was
to increase the number of girls participating in the camp. This was attempted by
creating a girls-only team, which employed highly contextualised projects and
had an emphasis on using everyday materials, like cardboard, paint and glue. The
result was a significant increase in the number of participating girls and a significant
improvement in their attitude towards technology, which, at the end of the camp, matched that of the
boys. Based on the results, we argue that the girls-only team was the main reason
for the higher number of participating girls, while the change in attitude was due
to the highly contextualised projects and selection of materials.

Keywords: Educational Robotics · ER · Computational Thinking · CT · STEM ·
STEAM · Gender · Female · Girls · K-12 · Science Camp · Curriculum ·
Micro:bit

1 Introduction
Teknologiskolen (transl. “The School of Technology”) (TS) [1] is a non-profit organi-
sation based in Odense, Denmark.
TS was founded for two reasons: First, we saw a societal need for an increase in the
STEM (Science, Technology, Engineering and Mathematics) workforce [2, 3]; Second,
and most importantly, we identified a need for an extracurricular technology club. The
intention was to give youths from the local community a place to cultivate and grow
their interest in technology, much like they can attend extracurricular sports and music
activities. The activities at TS revolve around teaching technology, based in both STEM
and Computational Thinking (CT) [4], by the use of Educational Robotics (ER) which
has been proven a feasible approach to achieving this [5–7]. Our approach to teaching


is largely based on constructionism [8, 9] and self-directed activities. The University of
Southern Denmark is providing facilities, and most volunteers are engineering students
and faculty staff. When the TS first opened its doors in 2015, approximately 50 pupils
signed up to attend the weekly classes, which last two hours once a week, for the duration
of a season (September – April). Since then, the TS has grown each year, and for the
2020–2021 season, we have approximately 120 pupils – divided into eight teams –
attending our classes every week. In 2016, TS invited the weekly attendees and other
interested youths to participate in its first annual summer camp. The camp was fully
booked, and 31 pupils attended. Since then, the camp has been fully booked each year,
growing steadily. In 2020, 126 pupils attended the camp. At both the camp and the season
classes, participants are divided into two main age groups: ages 6–9 and 10–16.
Over the years, we identified two separate, yet related, challenges within TS. The first
is that despite the continuous growth, very few girls aged 10–16 attend either our weekly
classes or the summer camp. These low numbers are also reflected in the current STEM
workforce, where a significant gender imbalance is seen [10–13]. Research shows that
girls often display negative gender stereotypes and low self-efficacy towards technology
and robotics [14, 15]. This can be seen as early as the age of six [14], and to a higher degree
than for math and physics [14]. While research also shows that participating in
STEM activities can improve girls’ self-efficacy and attitude towards technology [15–
18], our second problem is that the girls in the 10–16 age group who do attend TS very
rarely return for the next season or camp. Often, the attending girls could be observed
acting uninterested in the projects and expressing a feeling of inadequacy at constructing
the requested machines. It should be noted that the typical projects offered often involve
line-following robots, colour-sorting machines and competing robots such as sumo-
robots. In these projects, the dominating construction material is LEGO® Technic, which
they perceived as very boyish and difficult. Research within this area has also shown that
while boys prefer function- and task-oriented projects [19], girls are interested in more
contextualised projects [19–21], like how technology can help humans [19]. Likewise,
studies show that people are more prone to feeling inadequate when building with LEGO®
than when creating, e.g., collages with everyday materials like paper and glue [22].
At TS, previous experiments have been carried out, altering the contextual relations
in the projects, and trying to let participants work with other materials. This has been
successful to some degree. Even though it made the participating girls in the 10–16 age
group decidedly more interested and engaged, it did not result in a significant increase
in the number of female participants [21]. In addition, while girls in a previous research
study held the highest learning outcome when working in mixed-gender groups [18],
research also suggests that they prefer girls-only groups [23].
With this in mind, we set out to reengineer the structure of the 2020 summer camp by
implementing a new girls-only team, and the content by employing highly contextualised
projects using everyday materials. This was done as an effort to both increase the number
of participating girls aged 10–16 and to find ways to keep them interested and engaged
throughout the camp. With this paper, we present the case study and results of the 2020
summer camp in relation to our research questions:

RQ1: Will the addition of a girls-only team influence the number of participating girls
and their interest in the camp?

RQ2: Will introducing more contextualised projects and a change in materials influence
the number of participating girls and their interest in the camp?

2 Method

The study was carried out as a case study at Teknologiskolen's annual summer camp on
robotics and automation, which took place from Monday 29 June to Friday 3 July 2020,
with participants meeting from 9 AM to 3 PM each day. When signing up for the camp,
the participants (N = 126) had to choose which team to participate in based on their
age group and gender (see Table 1). The case study and following sections will mainly
revolve around Team 3, which was specifically created for this camp.

Table 1. Overview of Team 1, Team 2, and Team 3.

Team 1: n = 51, age 6–9, mixed (m/f: 42/9); worked with remote-controlled cars, house alarms and basic circuitry
Team 2: n = 53, age 10–16, mixed (m/f: 51/2); worked with sumo-wrestling robots, drawing robots and programmable games on a physical activity wall
Team 3: n = 22, age 10–16, female only (m/f: 0/22); worked with automatic soap dispensers, interactive art, smart homes

2.1 Advertisement, Registration and Initial Survey

Four months ahead of the camp, the official advertisement materials (TS – website
information) were distributed in the local community. Descriptions for both Team 2 and
Team 3 were largely similar and revolved around learning more about the technology
which surrounds us, by building and programming technological machines and robots
at the camp. The main difference between the descriptions was that in relation to the
description of Team 3, the potential for robot technology to help humans and animals
was mentioned, without providing any specific contexts for this. In addition, a list of
prototyping materials to be used in their projects, such as cardboard, paper, glue and
fabric, was mentioned. Participants (girls-only) who signed up for Team 3 were asked
to fill out a short initial survey regarding their interest in the following ER themes:
Smart homes, with a focus on persons with a physical disability (61.5%); traffic safety
(23.1%); interactive art (38.5%) and wearables/intelligent clothing (46.2%). The price
of attending the camp was DKK 850, which included a personal box of technology to
take home afterwards, with technology worth approximately DKK 800, made possible
through a donation from the local Facebook data centre.

2.2 Projects, Contexts, Materials and Procedure

Projects and Contexts: During the camp, several projects and contexts were explored
and worked on. Firstly, there was the automatic soap dispenser. This project served as
an introduction to robotics and automation (microcontrollers, sensors, actuators and the
interplay between these), while demonstrating the strength and flexibility of relatively
simple input/output systems (one sensor, one actuator). To exemplify the relevance of
robot technology in relation to the daily lives of the participants, the chosen context for
the project was the current Covid-19 pandemic.
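
To make the one-sensor/one-actuator idea concrete, the core behaviour of such a dispenser fits in a few lines. The sketch below is MicroPython for the micro:bit and only an illustration: the wiring (analog IR distance sensor on pin0, servo on pin1) and the detection threshold are assumptions, and the participants built the equivalent logic from MakeCode blocks on top of the FireFly breakout.

```python
# Minimal sketch of the automatic soap dispenser logic (assumed wiring:
# analog IR distance sensor on pin0, servo signal on pin1).
from microbit import pin0, pin1, sleep

HAND_THRESHOLD = 400           # assumed analog reading that indicates a hand is present

pin1.set_analog_period(20)     # 20 ms period for a standard hobby servo

def set_servo(angle):
    """Map 0-180 degrees to roughly a 1-2 ms pulse (about 51-102 out of 1023)."""
    pin1.write_analog(51 + int(angle / 180 * 51))

while True:
    if pin0.read_analog() > HAND_THRESHOLD:    # input: the IR sensor detects a hand
        set_servo(90)                          # output: press the soap pump
        sleep(500)
        set_servo(0)                           # release the pump
        sleep(1000)                            # avoid immediate re-triggering
    sleep(50)
```

Swapping the sensor or the actuator for another component changes the functionality without changing the structure of the loop, which is exactly the flexibility the project was meant to demonstrate.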
For practical reasons, mainly costs and number of helpers, it was decided that the
participants could choose between two different projects to work on, after having com-
pleted the automatic soap dispenser project. In this regard, the results from the admission
survey show that the three most popular options in technology were the smart home,
wearables/intelligent clothing and interactive art. After considering the overlap in mate-
rial and work processes, it was decided to let the students choose between projects
within interactive art and smart home, after having finished the soap dispenser project.
While developing both projects, an emphasis was put on designing for a low floor entry
threshold for beginners and a high ceiling for the more experienced participants [24].
The project regarding interactive art involved developing paintings which would
react to their surroundings. This could be, for example, streetlights that turn on when
it gets dark (light sensor/LED); characters that hide when a person approaches (dis-
tance sensor/servomotor); or flowerheads that spin around when grass is being stroked
(switch/DC motor). The context was designing and developing technological products
that the participants would want to continue using after the camp had finished.
The smart home project was about creating devices intended for people with physi-
cal disabilities to exemplify how robot technology is an important factor in the welfare
sector. For this purpose, a laser-cut model for a two-room house with hollow walls
for containing the technology was developed. The model was designed to support the
implementation of the following functionalities: automatic windows (switch or tempera-
ture sensor/servomotor); automatic doors (switch/servomotor); automatic curtains (light
sensor/servomotor); automatic lighting (PIR/LED); a door lock (hall sensor/servomotor);
a door alarm (distance sensor or PIR/buzzer); automatic plant watering (soil moisture
sensor/DC motor water pump).
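
Each of these functionalities reduces to a similarly small input/output loop. As an illustration, the automatic lighting in the NeoPixel variant later shown in Fig. 1 could be sketched in MicroPython as follows; the pin assignments, the number of pixels and the 10-second on-time are assumptions for illustration only, and at the camp the behaviour was again implemented in MakeCode.

```python
# Minimal sketch of the smart-home automatic lighting: a PIR motion sensor
# (assumed on pin0) switches a strip of NeoPixels (assumed on pin1) on for a while.
from microbit import pin0, pin1, sleep
from neopixel import NeoPixel

NUM_PIXELS = 8
strip = NeoPixel(pin1, NUM_PIXELS)

def set_strip(colour):
    """Set every pixel to the same (r, g, b) colour and update the strip."""
    for i in range(NUM_PIXELS):
        strip[i] = colour
    strip.show()

while True:
    if pin0.read_digital() == 1:       # input: the PIR sensor reports motion
        set_strip((255, 255, 255))     # output: lights on
        sleep(10000)                   # keep the light on for 10 s
        set_strip((0, 0, 0))           # lights off again
    sleep(100)
```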

Materials: Considering the massive national investments in the MakeCode [25] com-
patible micro:bit technology [26], it was decided to use this as the base development
platform for the camp, to bridge with the compulsory education of the participants. To
expand the micro:bit with a breadboard and thus allow for external sensors and actua-
tors, the FireFly connector from Teknologihuset [27] was chosen, due to its built-in dual
H-Bridges providing native control of up to four DC motors. In addition, a variety of
basic components, sensors, actuators and prototyping materials was made freely avail-
able to the participants to use as they saw fit (see Table 2). Furthermore, to support the
participants in working as autonomously as possible, they were provided with a compen-
dium of cheat sheets developed for each sensor and actuator. The files for the house
model and compendium can be found here: https://bit.ly/3pRBOk3.

Table 2. Overview of the different materials made available at the camp.

Basics: micro:bits, FireFlys, breadboards, resistors, jumper wires, battery


clips, usb cables and batteries
Sensors: Light-dependent resistors (LDR), thermistors, potentiometers,
microphones, infrared sensors (IR), ultrasound distance sensors,
motion detection sensors (PIR), hall sensors, soil moisture sensors and
tactile switches (with/without lever arms)
Actuators: LEDs (red, green, blue, yellow and white), DC motors, water pumps
(DC motors), servomotors, buzzers and NeoPixels
Prototyping materials: Cardboard, ice lolly sticks, plastic straws, elastic bands, paint and
brushes, coloured feathers, sequins, markers, canvases and laser-cut
miniature houses

Procedure: Different pedagogical methods were applied throughout the camp, which
was split into two sections. The first section (Monday) was based on teacher-centred
instructions, guiding the pupils through their first project (automatic soap dispenser).
Here the students were taught about the difference between sensors and actuators, the
role of the microcontroller, and how to use conditionals to connect the input with a
desired output. That is, how to construct and programme simple input/output systems, in
which the sensors and actuators can be freely swapped around with other types to obtain
completely new functionalities. The second section (Tuesday to Friday) was inspired
by constructionism [8, 9], self-directed learning and Challenge-Based Learning [28, 29]
with each proof of concept (smart home) acting as a nano-challenge [28]. During this
time, the participants would direct their own learning through their choice of project
and the functionalities they wanted to implement, while the teachers acted as facilitators
assisting them in understanding the challenges. To let the participants make an informed
decision regarding which type of project they wished to work with, and to provide a basis
for understanding the scope of the possibilities and limitations, three pieces of interactive
art and three models of smart homes were demonstrated. For practical reasons and due to
the Covid-19 pandemic, the participants worked in pairs with a minimum distance
of two metres between each pair. In addition, participants could opt to keep their project
at the end of the camp. However, the pairs working with the smart homes had to choose
internally which one of them could keep it.

2.3 Data
A mixed-method approach was used to collect both quantitative and qualitative data.
Quantitative data was collected in both Team 2 and Team 3 through pre- and post-
questionnaires. The questionnaires made use of multiple-choice answers, 5-point Likert
scales and open answers. Qualitative data from Team 3 was collected through observa-
tions and conversations with the participants throughout the study. Additionally, individ-
ual semi-structured interviews were conducted with six randomly selected participants

during the last day of the camp. For the analysis, Pearson’s Chi-Squared tests, indepen-
dent and dependent t-tests (95% confidence interval) were used to determine the significance
of differences between the scores from the pre- and post-questionnaires. In the case of non-response
errors, the data field was left empty and the method 'exclude cases analysis by analysis'
in IBM’s SPSS [30] was used. While the pre- and post-questionnaires were anonymous,
a generated code allowed for the comparison between them.
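
For readers who want to reproduce this kind of analysis without SPSS, the same tests are available in standard Python tooling. The sketch below runs a dependent (paired) t-test on made-up placeholder Likert scores and a Pearson chi-squared test on the 'Motor' counts from Table 6 (without continuity correction, which reproduces the X²(1, N = 44) = 5.939 reported there); it is an illustrative equivalent, not the analysis pipeline used in the study.

```python
from scipy import stats

# Dependent (paired) t-test on matched pre/post answers; the Likert scores
# below are invented placeholders, not data from the study.
pre = [3, 4, 4, 2, 3, 5, 3, 4, 2, 3]
post = [4, 4, 5, 3, 4, 5, 4, 5, 3, 4]
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"t({len(pre) - 1}) = {t_stat:.3f}, p = {p_value:.3f}")

# Pearson chi-squared test on a 2x2 table of wrong/correct classifications
# before and after the camp, using the 'Motor' counts from Table 6.
table = [[9, 13],   # pre:  wrong, correct
         [2, 20]]   # post: wrong, correct
chi2, p, dof, _ = stats.chi2_contingency(table, correction=False)
print(f"X2({dof}, N = {sum(map(sum, table))}) = {chi2:.3f}, p = {p:.3f}")
```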

3 Results
The demographic, quantitative and qualitative data will be presented in this section. In
the following, statistically significant results will be in bold.

3.1 Demographics
Table 3 presents an overview of the participants’ age, their previous level of experience
with programming and robotics, previous attendance at Teknologiskolen, if they knew
at least one other person attending the camp and the importance of this.

Table 3. Overview of the demographic data for Team 2 and Team 3.

Age
8: 9: 10: 11: 12: 13: 14: 15:
Team 2: 0% 2.1% 31.9% 21.3% 19.1% 14.9% 8.5% 2.1%
Team 3: 4.5% 4.5% 31.8% 36.4% 22.7% 0% 0% 0%
Prev. attended the TS:   Knew another person:   It is important to know another person (Yes / No / Don't know):
Team 2: 41.1% 45.1% 41.2% 52.9% 0%
Team 3: 36.4% 68.2% 50.0% 40.9% 9.1%
Prev. experience with programming: Prev. experience with robotics:
Vast (3): Some (2): None (1): Vast (3): Some (2): None (1):
Team 2: 45.8% 45.8% 0% 25% 52.1% 22.9%
Team 3: 9.1% 81.8% 9.1% 4.5% 59.1% 36.4%
Team 2 (M, SE)   Team 3 (M, SE)   Significance
Experience with programming: 2.46 .073 2.00 .093 t(68) = 3.882, p = .000
Experience with robotics: 2.02 .101 1.68 .121 t(68) = 1.991, p = .051

3.2 Number of Participants


While originally designed to accommodate 120 pupils, the camp had 126, partly due to
the TS being contacted by two female pupils who had signed up for Team 2 by accident

and wished to be moved to Team 3 (see Table 4 for an overview). The difference in the
gender composition of the participating pupils aged 10–16 in the years 2019 and 2020
was significant at X²(1, N = 125) = 9.941, p = .002.

Table 4. The gender composition of participants aged 10–16 from the camp in 2019 and 2020.

Team 2 (m/f): Team 3 (m/f): In total (m/f):


2019: 46/4 – 46/4
2020: 51/2 0/22 51/24

3.3 Pre- and Post-test Scores


The participants in Team 2 and Team 3 answered a series of questions regarding their
level of interest in technology, future job preferences, daily usage of robot technology,
and various school topics (see Table 5). Answers were given on a 5-point Likert scale
ranging from 5 (very much) to 1 (not at all).

Table 5. Overview of participants' self-reported interests and usage of robot technology.

Pre (M, SE)   Post (M, SE)   Significance
I am interested in technology: Team 2: 4.25 .117 4.28 .119 t(39) = −0.255, p = .800
Team 3: 3.76 .153 4.29 .122 t(20) = −3.532, p= .002
I would like a job, involving Team 2: 3.15 .222 3.29 .219 t(40) = −1.138, p = .262
programming or robotics: Team 3: 2.86 .221 3.24 .168 t(20) = −1.630, p = .119
I use robot technology in my Team 2: 2.55 .195 2.95 .226 t(37) = −2.499, p= .017
everyday life: Team 3: 2.45 .285 3.77 .246 t(21) = −4.805, p= .000
I am interested in Danish or Team 2: 2.93 .208 2.80 .207 t(40) = 0.777, p = .442
English: Team 3: 3.55 .292 3.73 .248 t(21) = −0.890, p = .383
I am interested in mathematics, Team 2: 3.46 .198 3.78 .162 t(40) = −2.477, p= .018
nature and technology: Team 3: 4.14 .165 4.18 .204 t(21) = −0.271, p = .789
I am interested in sports and Team 2: 3.17 .181 3.29 .195 t(40) = −0.797, p = .430
visual arts: Team 3: 4.18 .252 4.14 .266 t(21) = 0.370, p = .715

The participants were also shown a series of MakeCode programming blocks and
asked to write down what each block did, in addition to explaining related vocabulary.
The answers were given scores based on the following criteria: The answer is wrong,
blank or an abbreviation of ‘I don’t know’ (0); there is a trace of understanding, although
very vague (1); there is a clear understanding (2); there is a very clear and detailed
understanding (3). In addition, they were asked whether different electronic components
could be categorised as sensors or actuators. See Table 6 for the results.

Table 6. The participants' pre- and post-scores regarding programming and sensors/actuators.

Pre (M, SE)   Post (M, SE)   Significance
Block – Forever: 1.00 .263 2.50 .171 t(21) = −5.936, p= .000
Block – Variable: 0.18 .142 0.64 .233 t(21) = −2.485, p= .021
Block – Condition (if, else): 0.36 .203 2.18 .252 t(21) = −6.580, p= .000
Block – Repeat (x4): 1.27 .310 1.77 .301 t(21) = −2.217, p= .038
Block – Function: 0.00 .000 0.73 .202 t(21) = −3.464, p= .002
Block – Digital Write: 0.00 .000 0.68 .202 t(21) = −3.382, p= .003
Word – Variable: 0.23 .146 0.23 .146 t(21) = 0.000, p = 1.000
Word – Conditional: 0.00 .000 0.14 .136 t(21) = −1.000, p = .329
Word – Loop: 0.86 .281 1.82 .306 t(21) = 3.470, p= .002
              Pre (wrong/correct)   Post (wrong/correct)   Significance
Motor:        9 / 13                2 / 20                 X²(1, N = 44) = 5.939, p = .015
Speaker:      16 / 6                9 / 13                 X²(1, N = 44) = 4.539, p = .033
Microphone:   10 / 12               9 / 13                 X²(1, N = 44) = 0.093, p = .761
Switch:       11 / 11               5 / 17                 X²(1, N = 44) = 3.536, p = .058
LED:          16 / 6                6 / 16                 X²(1, N = 44) = 9.091, p = .003

3.4 Evaluation

Table 7 presents the participants’ answers regarding their evaluation of the camp.
Answers were given on a 5-point Likert-scale ranging from 5 (very much) to 1 (not
at all). The table also presents the most often-mentioned topics within the participants’
written answers, regarding what they felt had been the best about the camp, and what
could have been better. Two participants from Team 3 had written ‘That it wasn’t just
technology, but also art’ and ‘Doing different things than the ones on Team 2’.

Table 7. The scores and most mentioned topics from the feedback.

Team 2 (M, SE)   Team 3 (M, SE)   Significance
It was fun and exciting to 4.48 .100 4.73 .117 t(64) = −1.620, p = .111
attend:
I want to attend again next 3.86 .213 4.32 .179 t(64) = −2.290, p= .025
year:
I have gotten a better 3.72 .164 4.55 .127 t(63) = −3.973, p= .000
understanding of the robot
technology I encounter in my
everyday life:
I have gotten new friends: 3.23 .223 4.27 .220 t(63) = −3.322, p= .002
I worked creatively with 4.09 .155 4.55 .109 t(64) = −1.948, p= .019
technology:
The best:              Team 2:  Team 3:      Could be better:        Team 2:  Team 3:
To learn/make things:  17.8%    54.5%        I do not know:          27.3%    31.8%
Social/new friends:    13.3%    36.4%        Nothing:                27.3%    22.7%
To programme:          15.6%    0%           More days/two weeks:    0%       31.8%
Having fun:            0%       18.2%        More robots:            11.4%    0%
Everything:            13.3%    18.2%        More help:              9.1%     0%
Getting it to work:    6.7%     0%           When it doesn't work:   4.5%     0%
Being creative:        6.7%     0%

3.5 Semi-structured Interviews

The six participants were interviewed regarding, among other subjects, their motivation
for attending the camp and why they had chosen Team 3.
Motivation: A single participant answered it was because she already attended TS
weekly classes. Two answered they really enjoyed working with technology, with one
of them saying that she felt they did not do it enough at school. Three answered that the
prospect of learning about and building their own robots sounded interesting.
Why Team 3: Although interviewed individually, all six of the participants
quickly answered that it was because it was a girls-only team. They also stressed how

important it was that there would be other girls at the camp when deciding to sign up,
how none of them liked it when there were too many boys, and how they generally
preferred being with other girls. Their answers covered the spectrum from: ‘It’s fun
because there are other girls’ all the way to ‘Boys are annoying’. Interestingly, none
of them mentioned the slight difference in the advertisement materials for Team 2 and
Team 3. When asking how they felt about the available projects and materials, they had
found them to be fun, relevant, creative, exciting and engaging, with two girls specifically
mentioning not liking LEGO® Technic (one mentioned having seen Team 2 use it during
the shared breaks).

3.6 Observations and Conversations


Materials: The participants were not observed exhibiting feelings of inadequacy in rela-
tion to the prototyping materials used at the camp. To our surprise, they even started
decorating their automatic soap dispensers, painting their houses and creating furniture
for these, while seemingly taking great pleasure in doing so. It should be noted that the
demonstrated house model had no furniture, paint or decorations.
Technology: The participants displayed the ability to both comprehend the con-
cept of the used input/output systems and make great use of its flexibility, resulting in
a high diversity between their respective projects (see Fig. 1). However, while most
participants were able to use the correct vocabulary regarding sensors and actuators,
their vocabulary regarding programmable concepts did not develop much throughout
the camp. Furthermore, while most of the participants were able to copy the circuits
found in the compendium (Fritzing diagrams), they did not display an understanding
of how the internal connections of the breadboard were configured or how the electric
current would flow between the components. As a result, they would often be unable
to continue by themselves due to relatively small mistakes in the circuits or if specific
rows/columns used in a diagram overlapped with those of another.
Compendium: Aside from the aforementioned difficulties, the cheat sheets examples
of how to read and print values from the available sensors and how to control the actuators
were a great tool, enabling them to work autonomously. In addition, it provided a natural
starting point for conversation with the teachers about desired functionalities and how
to approach them.
The projects: Interactive art was a slightly more popular project than the smart homes,
with respectively seven groups (n = 14) having chosen the former and four groups (n =
8) the latter. See Fig. 1 for examples of these. The participants were, however, overall
actively engaged and stayed interested in these throughout the camp, buzzing around and
working busily on their projects, regardless of which they had chosen. Participants also
expressed that they found the context of the different projects relevant, and some took
great pride in having created their own automatic soap dispenser, seeing that it was the
equivalent of the commercial ones they so often encounter outside of the camp. We did,
however, observe a major difference in the affection value shown towards the different
projects at the end of the camp. While some participants had no trouble disassembling
their house at the end of the camp, this was in stark contrast to those who had worked
with the interactive art. None of these participants disassembled it afterwards or left it
behind.

Fig. 1. 1st row (left, middle): The flower goes up/down, and the stars turn on/off when it's light/dark
(LDR/servomotor, LDR/NEO pixels) and a fish (magnet) attracts a cat (hall sensor/servomotor).
1st row (right): The unicorn lights up when you blow on the sun (microphone/NEO pixels). 2nd
row (left): Automatic curtains go down/up when it's dark/light (LDR/DC motor). 2nd row (middle):
Automatic watering when the earth is too dry (soil moisture-sensor/DC motor water pump). 2nd
row (right): Automatic lights turn on when motion is detected (PIR/NEO-pixels). 3rd row (left):
Automatic soap-dispenser (IR sensor/servomotor). 3rd row (middle): Automatic front door with a
key card (magnet) (hall sensor/servomotor). 3rd row (right): The earth starts/stops spinning when
Saturn is pressed down (switch/DC-motor), the colours of the stars change when the planet next
to Saturn is rotated (potentiometer/NEO-pixels), and Baby Yoda hides when a hand comes close
to him (IR sensor/servomotor).

4 Discussion
The Immediate Results, Attitude and Learning Outcome: The Team 3 (T3) initiative
yielded an immediate and significant increase in female participants and change to the
gender composition of participants in the oldest half of the camp. However, it did not manage
to attract female participants aged 13–16, an age group where we also see a reduction
in the number of male participants. In addition, while Team 2 (T2) did not develop
a significantly higher interest in technology like T3 – which can be explained by T2's
initially higher interest herein – it is interesting to see that despite this initial difference in
their interests in technology and related jobs, these were both equalised at the end of the
camp. Likewise, although both T2 and T3 developed a significantly higher awareness
of their daily interactions with robot technology, T3’s awareness became noticeably

higher. This difference can possibly be due to the higher level of contextualisation in
the projects developed for T3. In comparison, it was, however, only T2 who developed
a significantly higher interest in STEM-related school topics. While no effort had been
made to emphasise the relation between these or the involved labour market perspectives
and the content of the camp, this is something which could possibly prove beneficial
to do in the future. However, T3 reported having gained a significantly higher level of
improvement in their understanding of robot technology than T2, and in their feedback
they had a higher focus on having learned/made new things. This might be explained by
T2’s initial significantly higher experience with programming and robotics and could
indicate that participating in the camp had a larger effect on the girls' self-efficacy
than the boys’. To our surprise, and despite both T2 and T3 reporting high average scores
regarding how fun and exciting it had been attending the camp, T3 reported a significantly
higher desire to participate in the camp the following year compared to T2. Likewise,
a high percentage indicated a desire for having more time at the camp, which is not
something we had discussed with them during the camp. The results also show that
while T3 became significantly better at differentiating between sensors and actuators,
and at explaining the MakeCode programming blocks, the average scores for the latter
were still relatively low. This inability to use related vocabulary was indeed observed
during the camp and stood in stark contrast to their actual abilities. Thus, we consider
their final projects a better indicator for the learning outcome.
Considering that the camp managed to significantly increase the number of female
participants and had a significant effect on their attitude towards technology, an attitude
that for the most part matched that of the boys or even surpassed it, while also provid-
ing them with a significant learning outcome, we consider the camp and new initiatives
a success. The results regarding their change in attitude is likewise in accordance with
related research [15–18].
RQ1 – The Effect of the Girls-Only Team: T3 generally had a higher focus on the
social aspects of the camp in their written feedback than T2 and likewise held a signif-
icantly higher average regarding having made new friends at the camp. In addition, a
noticeably higher percentage of them also knew at least one other person who would be
attending the camp and reported a slightly higher need for this. Furthermore, while the
participants’ initial interest for robotics was present before the camp, the confirmation
of there being other girls at the camp and of not risking being the only girl at a camp full
of boys was expressed as being the decisive factor when deciding to attend the camp.
We likewise interpret the interviewed girls' attitude towards boys as an
indicator of this and as a common phenomenon at that age.
In comparison, none of them mentioned the slight difference between the advertise-
ment materials for T2 and T3, i.e. the mention of the potential for robot technology to help
humans and animals and the examples of prototyping materials. With this in mind, we argue
that the implementation of the girls-only team had the largest effect regarding the sig-
nificantly higher number of participating girls at the camp. Their expressed desire for a
girls-only team is also in accordance with related research [23].
RQ2 – The High Contextualisation of the Projects, the Change in Materials,
and the Effect of This: The strength and flexibility of the heavily used, yet relatively

simple input/output system became very clear when observing the wide variety of func-
tionalities embedded in the participants individual projects. We therefore consider this
approach very efficient in letting the participants freely design the desired behaviour of
their projects: What should it do, when should it do it? Likewise, we consider it an effi-
cient approach in letting them analyse the technologies in their surroundings, with this
conceptual framework in mind: how do automatic sinks, hand dryers, etc., work?
Regarding the developed compendium, it made a great impact on enabling them to search
for information by themselves, yet within a restricted scope. We therefore recommend
this approach as an alternative to, e.g., providing them with internet access and thus the
risk of finding incompatible or incomprehensible information. However, it should also
be noted that while LEGO naturally affords for being assembled/disassembled, this is
not the case for a painting, as was also evident from the participants' high affection for
the interactive art projects. A forced requirement of disassembling these might therefore
result in dissatisfaction. In these cases, we recommend the smart home projects, which,
possibly due to also being a standardised assembly kit, did not invoke the same level
of affection. Overall, the participants were generally observed to be very satisfied
with the contextualisation of the projects, which they found relevant in relation to both
themselves (soap dispensers, interactive art) and in helping humans (soap dispensers,
smart homes). In addition, the chosen materials were found to afford creativity without
eliciting the feelings of inadequacy that can stem from having to scale the technical
barrier that the LEGO platform can present [21, 22].
Considering this, we argue that the design of the projects, the high level of con-
textualisation and the change in materials had the largest effect in relation to keeping
them engaged and interested throughout the camp, their significant change in attitude,
learning outcome and desire to participate in the camp the following year. This is also
in accordance with previous research [19–22].
Implications for Future STEM Activities and TS: Based on the results of this case
study, we argue that neither the initiative of a girls-only team nor an increased contextual-
isation of the projects and the change in prototyping materials can stand alone regarding
voluntarily attended robotics activities. We therefore argue that they will probably need
to be implemented in unity to obtain similar results. As a direct implication hereof, future
TS summer camps will likewise implement both. In addition, we have also created a new
girls-only weekly TS class, designed around the knowledge obtained from this study.
While the implementation of a girls-only team can be controversial, we also acknowl-
edge its effect on the number of participating girls, in that it serves as a guarantee for
them not to be the minority girl at a camp or class, otherwise full of boys.

4.1 Limitations and Future Work


Had the study employed an experimental 2 × 2 factorial design, the results would pos-
sibly have produced a more precise insight into the effect of each initiative. However,
the purpose of the TS summer camp is to provide a week of learning about and exper-
imenting with robot technology to interested and voluntary youth. Therefore, this type
of study design would not be possible.

5 Conclusion
With this case study, in which participants (n = 126) attended an educational robotics
summer camp, we have shown that the implementation of a girls-only team in com-
bination with highly contextualised projects (automatic soap dispensers, smart homes
and interactive art) and the usage of everyday materials (paint, cardboard, glue etc.) can
help to attract significantly more female participants to extracurricular robotics activ-
ities. The camp also had a significant effect on the girls’ attitude and interest towards
technology and their desire to return for more. We have argued that while the girls-only
team was probably the main reason for attracting them, it was likely the contextualised
projects and choice of materials which kept them engaged and interested. It is therefore
our conclusion that in relation to extracurricular voluntarily attended robotics activities,
we consider it necessary to implement the initiatives in unity to obtain these results.

References
1. Teknologiskolen (2020). https://www.teknologiskolen.dk/. Accessed 22 Jan 2021
2. Engineer the Future: Prognose for STEM-mangel 2025 (2018). engineerthefuture.dk
3. European Schoolnet: European Schoolnet's 2017 Annual Report (2018). eun.org, Brussels, Belgium
4. Wing, J.M.: Computational thinking. Commun. ACM 49(3), 33–35 (2006)
5. Atmatzidou, S., Demetriadis, S.: Advancing students’ computational thinking skills through
educational robotics: a study on age and gender relevant differences. Robot. Auton. Syst. 75,
661–670 (2016)
6. Blanchard, S., Freiman, V., Lirrete-Pitre, N.: Strategies used by elementary schoolchildren
solving robotics-based complex tasks: Innovative potential of technology. Procedia-Soc.
Behav. Sci. 2(2), 2851–2857 (2010)
7. Sáez-López, J.-M., Sevillano-García, M.-L., Vazquez-Cano, E.: The effect of programming
on primary school students’ mathematical and scientific understanding: educational use of
mBot. Educ. Tech. Res. Dev. 67(6), 1405–1425 (2019). https://doi.org/10.1007/s11423-019-
09648-5
8. Papert, S., Harel, I.: Constructionism: research reports and essays, 1985–1990. 1991: Ablex
publishing corporation (1991)
9. Wilson, B.G.: Constructivist learning environments: case studies in instructional design. 1996:
Educational Technology (1996)
10. Corbett, C., Hill, C.: Solving the Equation: The Variables for Women’s Success in Engineering
and Computing. 2015: ERIC (2015)
11. García-Holgado, A., et al.: European proposals to work in the gender gap in STEM: a system-
atic analysis. IEEE Revista Iberoamericana de Tecnologias del Aprendizaje 15(3), 215–224
(2020)
12. García-Holgado, A., et al.: Trends in studies developed in Europe focused on the gender gap in
STEM. In: Proceedings of the XX International Conference on Human Computer Interaction
(2019)
13. Peixoto, A., et al.: Diversity and inclusion in engineering education: looking through the
gender question. In: 2018 IEEE Global Engineering Education Conference (EDUCON). IEEE
(2018)

14. Master, A., et al.: Programming experience promotes higher STEM motivation among first-
grade girls 160, 92–106 (2017)
15. Craig, M., Horton, D.: Gr8 designs for Gr8 girls: a middle-school program and its evaluation.
In: Proceedings of the 40th ACM technical symposium on Computer Science Education
(2009)
16. Mason, R., Cooper, G., Comber, T.: Girls get it. ACM Inroads 2(3), 71–77 (2011)
17. Weinberg, J.B., et al.: The impact of robot projects on girls’ attitudes toward science and
engineering. In: Workshop on Research in Robots for Education. Citeseer (2007)
18. Sullivan, A.A.J.T.U.: Breaking the STEM Stereotype: Investigating the Use of Robotics to
Change Young Children's Gender Stereotypes About Technology and Engineering (2016)
19. Terry, B.S., Briggs, B.N., Rivale, S.: Work in progress: gender impacts of relevant robotics
curricula on high school students’ engineering attitudes and interest. In: 2011 Frontiers in
Education Conference (FIE). IEEE (2011)
20. Sklar, E., Eguchi, A.: RoboCupJunior — four years later. In: Nardi, D., Riedmiller, M., Sam-
mut, C., Santos-Victor, J. (eds.) RoboCup. LNCS (LNAI), vol. 3276, pp. 172–183. Springer,
Heidelberg (2005). https://doi.org/10.1007/978-3-540-32256-6_14
21. Pedersen, B.K.M.K., Marchetti, E., Valente, A., Nielsen, J.: Fabric robotics - lessons learned
introducing soft robotics in a computational thinking course for children. In: Zaphiris, P.,
Ioannou, A. (eds.) HCII 2020. LNCS, vol. 12206, pp. 499–519. Springer, Cham (2020).
https://doi.org/10.1007/978-3-030-50506-6_34
22. Borum, N., Kristensen, K., Peterssonbrooks, E., Brooks, A.L.: Medium for children’s creativ-
ity: a case study of artifact’s influence. In: Stephanidis, C., Antona, M. (eds.) UAHCI 2014.
LNCS, vol. 8514, pp. 233–244. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-
07440-5_22
23. Carmichael, G.: Girls, computer science, and games. ACM SIGCSE Bulletin 40(4), 107–110 (2008)
24. Papert, S.A.: Mindstorms: children, computers, and powerful ideas. 2020: Basic books (2020)
25. Microsoft: MakeCode. https://makecode.microbit.org/. Accessed 14 Dec 2020
26. Holst, K.: Ny bevilling giver ultra:bit vokseværk (2020). https://www.dr.dk/om-dr/ultrabit/
ny-bevilling-giver-ultrabit-voksevaerk. Accessed 14 Dec 2020
27. Teknologihuset. FireFly (2021). https://teknologihuset.dk/vare/firefly/. Accessed 20 Jan 2021
28. Conde, M.Á., et al.: RoboSTEAM-A Challenge Based Learning Approach for integrating
STEAM and develop Computational Thinking. In: Proceedings of the Seventh International
Conference on Technological Ecosystems for Enhancing Multiculturality (2019)
29. Johnson, L.F., et al.: Challenge-based learning: An approach for our time. The new Media
consortium (2009)
30. IBM. SPSS (2021). https://www.ibm.com/analytics/spss-statistics-software. Accessed 4 Jan
2021
Improved Students’ Performance in an
Online Robotics Class During COVID-19:
Do only Strong Students Profit?

Andreas Birk(B) and Evelina Dineva

Jacobs University Bremen, Campus Ring 1, 28759 Bremen, Germany


a.birk@jacobs-university.de
http://robotics.jacobs-university.de/

Abstract. Observations from the online teaching of a robotics class
during the CoViD-19 pandemic caused by SARS-CoV-2, also known as
the Corona virus, are presented. The online teaching of the course led
to an unexpected increase in the grade performance of the students –
despite the fact that the exams were at the same level of difficulty as in
the years before. There is some evidence that the asynchronous online
teaching catered for some of the general interests and learning trends
in generation-Z students. This paper addresses the question whether the
general study performance of a student is a possible factor in whether
the student profited more or less from the online version, i.e., whether
there is a correlation between the improved course grade and the Grade
Performance Average (GPA) of students.

Keywords: Robotics education · Online teaching · Blended learning ·
Asynchronous online · CoViD-19 · SARS-CoV-2 · Corona

1 Introduction
In spring 2020, Jacobs University Bremen responded – like almost every univer-
sity [8,11] – to the CoViD-19 outbreak by a switch to online education, which is
expected to have significant, long-lasting effects on higher education [1,2,22,23].
Especially, CoViD-19 has accelerated a trend that is expected to become per-
sistent, namely that (elements of) online education will be the new normal in
higher education [15].
Here, effects on a course in a robotics module are reported, which we consider
to be of interest for online teaching in robotics and related disciplines in general.
Especially, the online version of the course led to an unexpected increase in the
grade performance of the students - while the difficulty of the exams remained
at the same level. There are indications that the asynchronous online version of
the course addressed some general students’ interests and needs [3], especially
with respect to the use of video material. While learning with videos has played an
increasing role for several years [4,14], it has recently experienced a massive
Students’ Performance in Online Robotics Teaching 135

surge. YouTube, for example, ranked first for learning according to a survey of
generation-Z conducted in 2018 by The Harris Poll for the publisher Pearson [17].
Asynchronously provided video material hence caters to the learning preferences
of current undergraduate students.
Given the unexpected increase in the students' performance, it is an interesting question whether particular groups of students benefited more than others. In particular, the question arises whether there is any correlation with general student performance, i.e., whether, e.g., strong students profit more than under-performers. Note that evidence was found in the past that online teaching may increase performance gaps [24], i.e., strong students may profit significantly more than low achievers from online learning elements.
The paper aims at two main contributions. First, it provides indications in the spirit of evidence-based education [9,19], which can be further used in, e.g., meta-studies to identify general trends. Second, it provides some insights and guidelines that can be useful for instructors of robotics courses who are interested in using online or blended learning in the future beyond the CoViD-19 pandemic; in particular, we motivate why asynchronous online material including videos can be beneficial and we postulate that all students can benefit from this.

2 The Structure and the Content of the Robotics Course


The robotics course discussed here is part of the “Robotics and Intelligent Sys-
tems (RIS)” BSc program at Jacobs University Bremen, Germany. In this pro-
gram, it is offered as a 2nd year course. The course can also be taken as a
specialization course by 3rd year students of the “Computer Science (CS)” BSc
program. It is in addition an elective option in several study programs including
“Electrical and Computer Engineering (ECE)”. As shown in Table 1, students
from five different majors were registered in Spring 2020.

Table 1. Distribution of the students’ majors in the spring 2020 course.

RIS CS ECE Math Physics Total


#students 21 31 3 2 1 58

The grading is based entirely (100%) on a final exam in the form of a 2-hour written exam. There is also the option to just audit the course. The Robotics course has 5 ECTS, which are earned as part of the standard course design at Jacobs University. The course is structured into the following nine parts P1 to P9:
– P1 Introduction
• definition of a robot
• SciFi and reality
• related concepts and fields throughout history

• robotics beyond factory automation


– P2 Spatial representations and transforms
• coordinate systems
• spatial transformations
• homogeneous matrices
• frames of reference
• properties of rotation matrices
• Euler angles
• quaternions
• chaining transforms
– P3 Forward Kinematics (FK)
• joints and links
• kinematic chains
• forward kinematics
• Denavit-Hartenberg
• work-space
– P4 Actuators
• DC-motor: model and properties
• H-bridge & Pulse-Width-Modulation (PWM)
• gear wheels & gear train
• gear types
• planetary gear
• harmonic gear
• linear actuators
• bearings
• shaft-encoders
• servos
– P5 Inverse Kinematics (IK)
• analytical inverse kinematics
• example 6-legged robot
• arm- and hand-decoupling
• numerical inverse kinematics
• basic Newton Raphson
• Jacobian
• pseudo-inverse
• IK with Newton Raphson
• example simple planar robot-arm
• IK with gradient descent
– P6 Locomotion
• wheel models and properties
• Ackermann drive
• differential drive FK
• tracks
• differential drive IK
• synchro drive
• omnidirectional wheels
Students’ Performance in Online Robotics Teaching 137

• omnidirectional drive FK and IK


– P7 Localization
• global (absolute) localization
• triangulation & trilateration
• least squares optimization
• multilateration
• ranging technologies
• global navigation satellite systems (GNSS)
• beacon-free localization
• multidimensional scaling (MDS)
• orientation sensors
• relative orientation
• dead reckoning
– P8 Probabilistic Localization
• localization error representation
• error propagation
• probabilistic differential drive odometry
• Kalman filter (KF)
• extended Kalman filter (EKF)
• unscented Kalman filter (UKF)
• particle filter (PF)
– P9 Mapping
• map representations in 2D and 3D
• evidence grid
• range sensors
• simultaneous localization and mapping (SLAM)
• registration
• loop closing
• EKF-SLAM
• PF-SLAM
• pose-graph
The course is complemented by other modules like Control Systems, Automation, Computer Vision (CV), Artificial Intelligence (AI), or Machine Learning (ML) in the RIS and CS BSc programs. Hence, additional robotics topics are also covered in some other classes.
Like at almost all universities in the world, almost all courses at Jacobs
University were switched to online teaching in the Spring 2020 semester. An
asynchronous online teaching mode was chosen for the robotics course. The
teaching material consists of slides (PDF) as well as videos (mp4), which were
provided via a course website. The videos present the slides with a voice-over
plus a recorded virtual laser-pointer; on some occasions, animations are also
included. The material covers the lectures and the solutions to the homework
exercises that form the basis for the final exam at the end of the course.
Tables 2 and 3 provide an overview of the number of slides and of the dura-
tion of the videos for the lectures and for the homework exercises for the differ-
ent course parts. Note that the students are frequently encouraged to stop the

Table 2. The number of slides and the duration (min) of the videos of the asynchronous
online lecture part of the Spring 2020 robotics course at Jacobs University.

Lecture part #slides Video (min)


P1 Introduction 27 41
P2 Spatial transforms 70 96
P3 Kinematics 15 22
P4 Actuators 45 86
P5 Inverse kinematics 64 68
P6 Locomotion 47 58
P7 Localization 63 91
P8 Probabilistic localization 56 92
P9 Mapping 60 91

Table 3. The number of slides and the duration (minutes) of the videos for the asyn-
chronous online explanations and solutions of the homework exercises of the Spring
2020 robotics course at Jacobs University.

Solution #slides Video (min)


HW-1 39 54
HW-2 36 36
HW-3 16 20
HW-4 9 15
HW-5 42 61
HW-6 16 24
HW-7 18 20
HW-8 17 23
HW-9 4 16

videos and to engage in paper-and-pencil work on their own to reproduce the derivations of the presented methods and their application to specific problems. This holds especially for the provided solutions to the homework exercises, where the expected time to interact with the slides/video material is about 6 to 7 times the duration of each video when properly doing all calculations step by step, e.g., when performing extensive linear algebra calculations. In addition to the slides and videos, a course forum was provided on Moodle [7,10], where students were encouraged to ask questions and to engage in discussions with the instructor as well as among each other. Furthermore, Microsoft Teams [12] support was provided for each course at Jacobs University, and the option of (synchronous) Q&A sessions in the lecture slots was also offered for the robotics course.
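To give a concrete impression of the kind of step-by-step linear algebra work mentioned above, the following is a minimal, self-contained sketch (not taken from the course material; the frame names and all numerical values are invented for illustration) of chaining two 2D homogeneous transforms, a typical exercise from parts P2 and P3:

```python
import numpy as np

def homogeneous_2d(theta_deg, tx, ty):
    """2D homogeneous transform: rotation by theta (degrees) plus translation (tx, ty)."""
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t), tx],
                     [np.sin(t),  np.cos(t), ty],
                     [0.0,        0.0,       1.0]])

# Illustrative chain: world -> link1 and link1 -> link2.
T_w1 = homogeneous_2d(30.0, 1.0, 0.0)
T_12 = homogeneous_2d(45.0, 0.5, 0.0)
T_w2 = T_w1 @ T_12                   # chaining transforms = matrix multiplication

# Map a point given in the link2 frame into the world frame (homogeneous coordinates).
p_link2 = np.array([0.2, 0.0, 1.0])
p_world = T_w2 @ p_link2
print(p_world[:2])
```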
Students’ Performance in Online Robotics Teaching 139

3 Improved Grade Performance


As already mentioned in the introduction, there was a very unexpected improvement in the grade performance; a short analysis of it is provided in this section. The analysis of grade distributions can in general be a useful tool to assess learning effects [18], though it also has its limitations [6,13,20,21]. Note that the crucial variables [5,6,16] related to the course are constant here, i.e., the instructor and the syllabus did not change over the period of analysis.
For the following analysis, the exact percentages of scored points that each student obtained in each exam are used. They are represented as normalized proportions in [0;1] from worst to best possible. A one-way analysis of variance (ANOVA) of grades by year shows a significant main effect for the year with online teaching, F(3, 117) = 10.21, p < .001, i.e., there is a clear effect from the improved average performance in 2020 compared to previous years. This is illustrated, among others, by the box plot in Fig. 1. As can be seen, in 2020 with online teaching there is a strong shift towards higher grades compared to the face-to-face teaching of previous years. Moreover, achievement levels below 60% are statistical outliers for 2020.
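As a rough, hedged illustration of how such a one-way ANOVA of grades by year could be reproduced, e.g. in Python with SciPy, the sketch below uses invented placeholder proportions; the study itself used the actual per-student exam results of 2017 to 2020:

```python
from scipy import stats

# Normalized exam proportions in [0, 1] per student, grouped by year.
# These values are placeholders for illustration only.
grades_2017 = [0.62, 0.71, 0.55, 0.80, 0.67, 0.59]
grades_2018 = [0.58, 0.73, 0.69, 0.61, 0.75, 0.66]
grades_2019 = [0.66, 0.70, 0.59, 0.72, 0.64, 0.68]
grades_2020 = [0.85, 0.91, 0.78, 0.88, 0.82, 0.93]  # year with online teaching

# One-way ANOVA of grades by year (four groups -> 3 degrees of freedom between groups).
f_stat, p_value = stats.f_oneway(grades_2017, grades_2018, grades_2019, grades_2020)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```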

Fig. 1. ANOVA box plot statistics of grades by year for the Robotics Lectures. There
is a strong shift toward higher grades in the year with the online teaching (2020)
compared to the years with face-to-face teaching (2017–2019).

Fig. 2. Histogram of the exam grades by year. The relative frequencies for each year
are shown.

This observation of a stronger grade performance in 2020 is also supported by the analysis of the frequencies of grades (Fig. 2). For each year, the y-axis extends to the number of students who took the exam, i.e., the frequency distribution of grades within a year is shown. This also permits a normalized comparison of frequency patterns between the years. The relative number of students who achieved a grade of 80% or more is clearly higher in 2020 than in previous years. Furthermore, the cluster of high-grade frequencies in 2020 further supports the evidence of an overall better student performance compared to previous years.
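A minimal sketch of how such per-year relative grade frequencies could be computed (again with invented placeholder data; the actual analysis used the real exam results) might look as follows:

```python
import numpy as np

# Placeholder exam results as normalized proportions in [0, 1].
grades_by_year = {
    2019: np.array([0.55, 0.62, 0.70, 0.66, 0.73, 0.81]),
    2020: np.array([0.78, 0.84, 0.90, 0.86, 0.95, 0.88]),
}

bins = np.arange(0.0, 1.05, 0.05)  # 5%-wide grade bins from 0% to 100%
for year, grades in grades_by_year.items():
    counts, _ = np.histogram(grades, bins=bins)
    rel_freq = counts / len(grades)            # normalize by the number of exam takers
    share_80_plus = (grades >= 0.8).mean()     # fraction of students with >= 80%
    print(year, rel_freq.round(2), "share >= 80%:", round(share_80_plus, 2))
```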

4 Online Performance and GPA

As already mentioned in the introduction, there are indications that the asynchronous online material, especially in the form of the video material, was in line with the students' learning preferences [3]. The main question for this paper is whether some students may have profited more than others - especially given that some evidence was found in the past that online teaching may increase performance gaps [24].
Students’ Performance in Online Robotics Teaching 141

Fig. 3. The Grade Point Average (GPA) of the students in the course in each year. There is no significant difference between the years, i.e., the general performance of the students was very similar in each year.

For this analysis, the Grade Point Average (GPA) of the students who took the robotics course in the years 2017 to 2020 is used as a measure of the more general performance of the students. Figure 3 shows the average GPA for each year. While there are slight fluctuations, there are no significant differences, i.e., the average general performance of the students taking the course is about the same in each year.
Furthermore, the grade of each student is roughly proportional to the GPA of that student. This is also illustrated in Fig. 4. This roughly linear relation holds for every year. What differs is the proportionality constant for 2020, where all the grades are, so to say, shifted towards higher performance in the course. This is more clearly illustrated in Fig. 5. The grades at Jacobs University are given on a scale from 1.00 to 5.00 in steps of 0.33, i.e., 1.00, 1.33, 1.67, 2.00, and so on. The lower the number, the better the grade, i.e., 1.00 is the best and 5.00 the worst possible grade. In Fig. 5, the histograms of the number of students per performance difference pd are shown for every year 2017 to 2020, i.e., the difference pd = cg − gpa between the course grade cg and the GPA gpa is used.

Fig. 4. A scatter plot of the grades and GPAs of the students for the four years. Overall, a student's grade tends to be proportional to his/her GPA, i.e., the course grade tends to be better, the better the student's GPA is.

As can be seen in Fig. 5, the means of the performance difference distributions are positive in the years 2017 to 2019, i.e., the value of the grades tends to be higher in the robotics course than the GPA, i.e., the students perform worse in the course than in general (given the grading scheme of Jacobs University, where lower grade values are rated as better). In 2020, in contrast, the mean is clearly negative, i.e., the students perform better in the course than in general, while there is no significant change in the underlying distribution of the GPA, as also illustrated in Fig. 3. More importantly, no correlation at all between the performance difference and the GPA could be found in any of the years, including 2020. In particular, there is no evidence that strong students profited more than under-performers.
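As a hedged sketch of how the performance difference pd = cg − gpa and its (lack of) correlation with the GPA could be computed, e.g. with SciPy's Pearson correlation, consider the following; the grade values are invented placeholders on the 1.00–5.00 scale, not the actual student data:

```python
import numpy as np
from scipy import stats

# Placeholder values on the Jacobs University scale (1.00 best ... 5.00 worst).
gpa          = np.array([1.33, 2.00, 2.67, 1.67, 3.00, 2.33])
course_grade = np.array([1.00, 1.67, 2.67, 1.33, 2.33, 2.00])

# Performance difference pd = cg - gpa; negative values mean a better result
# in the course than in general (since lower grade values are better).
pd = course_grade - gpa
print("mean pd:", round(pd.mean(), 2))

# Test for a relation between the performance difference and general performance.
r, p = stats.pearsonr(gpa, pd)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```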
This evidence is of course only based on limited data. We hence suggest that it is of interest to further check whether the insights from the well-known study by Xu and Jaggars [24] still hold. While their study is extremely well substantiated in terms of its data basis, it is no longer fully up to date, i.e., it does not cover generation-Z students, who have been found to have their very own learning preferences. We postulate that due to the increasing importance of video material for learning in generation-Z, asynchronous online courses can be very beneficial
Students’ Performance in Online Robotics Teaching 143

Fig. 5. Histograms of the number of students per performance difference between the
course grade and the GPA.

for robotics education [3] - and that this holds for all students, i.e., that also
under-performing students can achieve clear performance gains through online
teaching.

5 Conclusion

Observations from an asynchronous online version of a robotics course during the CoViD-19 pandemic were presented. The online teaching of the course led to an unexpected increase in the performance of the students. It seems that due to the learning preferences of generation-Z students, there is a clear potential for a beneficial use of online teaching elements in robotics education beyond the pandemic. However, an argument that is often used against online teaching is that strong students supposedly tend to profit more from it than under-performers. We postulate that this effect, which has been observed in the past, no longer really holds due to the learning preferences of generation-Z students, where under-performers are also well versed in the use of video teaching material, or even prefer videos over all other teaching material. This hypothesis has to be

further substantiated in future work including, e.g., meta-studies. Nonetheless,


the insights presented here are hopefully already useful when considering the use
of online teaching elements in robotics education.

References
1. Aristovnik, A., Kerzic, D., Ravselj, D., Tomazevic, N., Umek, L.: Impacts of the
COVID-19 pandemic on life of higher education students: a global perspective.
Sustainability 12(20), 8438 (2020)
2. Bai-Yun, C.: The effects of COVID-19 on international secondary assessment
(2020). https://www.naric.org.uk/downloads/The%20Effects%20of%20COVID-
19%20on%20International%20Secondary%20Assessment%20-%20UK%20NARIC.
pdf
3. Birk, A., Dineva, E., Maurelli, F., Nabor, A.: A robotics course during COVID-
19: Lessons learned and best practices for online teaching beyond the pandemic.
Robotics 10(1), 5 (2021)
4. Buzzetto-More, N.A.: An examination of undergraduate student’s perceptions and
predilections of the use of YouTube in the teaching and learning process. Interdis-
cip. J. e-Skills Lifelong Learn. 10, 17–32 (2014)
5. Carpenter, S.K., Witherby, A.E., Tauber, S.K.: On students’ (mis)judgments of
learning and teaching effectiveness. J. Appl. Res. Mem. Cogn. 9(2), 137–151 (2020).
http://www.sciencedirect.com/science/article/pii/S2211368120300024
6. Clayson, D.E.: Student evaluations of teaching: are they related to what students
learn? A meta-analysis and review of the literature. J. Market. Educ. 31(1), 16–30
(2009). https://doi.org/10.1177/0273475308324086
7. Costello, E.: Opening up to open source: looking at how Moodle was adopted in
higher education. Open Learn. J. Open Distance e-Learn. 28(3), 187–200 (2013).
https://doi.org/10.1080/02680513.2013.856289
8. Crawford, J., et al.: COVID-19: 20 countries’ higher education intra-period digital
pedagogy responses. J. Appl. Learn. Teach. 3(1), 1–20 (2020)
9. Davies, P.: What is evidence-based education? Br. J. Educ. Stud. 47(2), 108–121
(1999). https://doi.org/10.1111/1467-8527.00106
10. Dougiamas, M., Taylor, P.C.: Moodle: using learning communities to create an open
source course management system. In: EdMedia. Association for the Advancement
of Computing in Education (AACE) (2003)
11. IAU: Regional/national perspectives on the impact of COVID-19 on higher
education (2020). https://www.iau-aiu.net/IMG/pdf/iau covid-19 regional
perspectives on the impact of covid-19 on he july 2020 .pdf
12. Ilag, B.N.: Microsoft Teams overview. In: Understanding Microsoft Teams Administration, pp. 1–36. Apress, Berkeley (2020). https://doi.org/10.1007/978-1-4842-5875-0_1
13. Linse, A.R.: Interpreting and using student ratings data: guidance for faculty serv-
ing as administrators and on evaluation committees. Stud. Educ. Eval. 54, 94–106
(2017). http://www.sciencedirect.com/science/article/pii/S0191491X16300232
14. Nicholas, A.J.: Preferred learning methods of the millennial generation. Int. J.
Learn. Ann. Rev. 15(6), 27–34 (2008)
15. Pearson: The global learner survey (2020). https://www.pearson.com/content/
dam/one-dot-com/one-dot-com/global/Files/news/gls/Pearson Global-Learners-
Survey 2020 FINAL.pdf
Students’ Performance in Online Robotics Teaching 145

16. Perry, R.P., Smart, J.C.: The Scholarship of Teaching and Learning in Higher
Education: An Evidence-Based Perspective. Springer, Dordrecht (2007). https://
doi.org/10.1007/1-4020-5742-3
17. Poll, T.H.: Beyond Millennials: The Next Generation of Learners. Pearson, London
(2018)
18. Schneider, M., Preckel, F.: Variables associated with achievement in higher educa-
tion: a systematic review of meta-analyses. Psychol. Bull. 143(6), 565–600 (2017)
19. Slavin, R.E.: Evidence-based education policies: transforming educational prac-
tice and research. Educ. Res. 31(7), 15–21 (2002). https://doi.org/10.3102/
0013189X031007015
20. Stroebe, W.: Why good teaching evaluations may reward bad teaching: on grade
inflation and other unintended consequences of student evaluations. Perspect. Psy-
chol. Sci. 11(6), 800–816 (2016). https://doi.org/10.1177/1745691616650284
21. Stroebe, W.: Student evaluations of teaching encourages poor teaching and con-
tributes to grade inflation: a theoretical and empirical analysis. Basic Appl. Soc.
Psychol. 42(4), 276–294 (2020). https://doi.org/10.1080/01973533.2020.1756817
22. Symonds, Q.: The coronavirus crisis and the future of higher educa-
tion (2020). https://www.qs.com/portfolio-items/the-coronavirus-crisis-and-the-
future-of-higher-education-report/
23. Symonds, Q.: International student survey - global opportunities in the new higher
education paradigm (2020). https://info.qs.com/rs/335-VIN-535/images/QS EU
Universities Edition-International Student Survey 2020.pdf
24. Xu, D., Jaggars, S.S.: Performance gaps between online and face-to-face courses:
differences across types of students and academic subject areas. J. High. Educ.
85(5), 633–659 (2014)
Pupils’ Perspectives Regarding Robot Design
and Language Learning Scenario

Petra Karabin1(B) , Ivana Storjak2 , Kristina Cergol1 , and Ana Sovic Krzic2
1 Faculty of Teacher Education, University of Zagreb, Zagreb, Croatia
{petra.karabin,kristina.cergol}@ufzg.hr
2 Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia

{ivana.storjak,ana.sovic.krzic}@fer.hr

Abstract. Educational robots are recognized in English language teaching (ELT). As part of the development of the concept of Robot-Assisted Language Learning in Croatia, this study aimed to gain insight into pupils' perspectives, i.e., to investigate their perceptions, ideas, and preferences for such an implementation. This kind of input could be used when designing future learning activities in ELT to optimize the pupils' experience and thereby facilitate learning. Data for this study were obtained during a summer robotics camp and relate to robot design and learning scenario design. In this qualitative study, 23 pupils participated and their perspectives were investigated with a short questionnaire. The results show that the pupils envisioned versatile robots (anthropomorphic and mechanomorphic) with various sensors (mostly ultrasonic and color), built from different materials (mostly LEGO kit pieces or metal), and applicable for teaching all four language skills (reading, listening, speaking, and writing).

Keywords: Educational robots · English language teaching · Learning scenario design · Robot-assisted language learning · Robot design · Summer robotics camp

1 Introduction
The development of information and communication technology (ICT) has advanced greatly over the past two decades. Such development introduced modern digital materials into mainstream schooling, the beneficial usage of which quickly became well-
recognized and supported by the educational system. One form of ICT that was advanced
and implemented as part of the classroom materials at the beginning of the 21st cen-
tury was the educational robot [1]. The robots used in educational research differ according to their characteristics. Thus, taking into consideration the autonomy of social
robots we distinguish between the teleoperated type (the robot is controlled remotely to
achieve telepresence), the autonomous type (the robot is controlled by artificial intelli-
gence), and the transformed type (the robot utilizes both teleoperation and autonomous
control) [2]. Educational robots are also classified according to their appearance. Namely,
we differentiate the anthropomorphic robot type (the robot that has humanoid character-
istics: head, torso, limbs), the zoomorphic robot type (the robot that looks and behaves


like an animal), the mechanomorphic robot type (the robot that has an industrial appear-
ance and machine-like characteristics), and the cartoon-like robot (the robot that has
caricature-like characteristics) [3]. Moreover, as the development of educational robots
amplified the number of studies on the topic, the concept of Robot-Assisted Learning
(RAL) was introduced into the field of education research [4]. Han [4] defined RAL as
learning assisted by robots with anthropomorphized characteristics (the ability to move,
produce sounds, sense the surrounding, etc.). According to [5], applications of robots in
education include technical subjects, nontechnical subjects, second language learning,
and assistive robotics. Specific to the field of foreign language learning is the concept
of Robot-Assisted Language Learning (RALL), i.e. using robots in language instruction
[6]. Most of the research concerning RALL has been carried out in Asia (South Korea, Taiwan, and Japan), whereas in Europe such research is only just beginning [3]. The results of previous research [7, 8] showed that pupils in ELT classrooms felt quite motivated and satisfied while using a robot as the educational material, but some research [9, 10] found that the pupils' interest in using robots decreased over time. In Croatia, robots are included in the curriculum only within the scope of technical subjects, namely Informatics and Technical Education [11], and research regarding robot usage in primary schools is in its early stages [1]. To tackle the issue of interest, in earlier studies we focused on future teachers' attitudes towards using robots as educational tools [12] and on using robots with appropriate activities in everyday lessons for teaching the native language [13]. In contrast, this study aims to explore the perspective of foreign language learners (age 8–16) on using robots in the ELT classroom.

2 Methodology

This pilot study was conducted during the Summer robotics camp "Petica 2020". The camp was held on a rural estate and lasted for four days. The schedule included learning activities, i.e., robotics workshops, in the morning, recreational activities in the afternoon, and social activities in the evening. The duration of the learning activities was six hours per day with a break in the middle. Based on their background knowledge, the pupils were divided into three groups: beginners (M = 6, F = 1), experienced (M = 7, F = 5), and high schoolers (M = 3, F = 2). For the beginners and the experienced group, progression was guided by worksheets in the Croatian language in which the programming tasks were scaffolded. All the programming interfaces were used in the English language, which can also serve as an ELT classroom vocabulary-building tool. The beginners used the LEGO Mindstorms EV3 kit to assemble and program a robot to race, rescue animals, solve a maze, talk, raise an alarm about intruders, show cartoons, recognize colors, and follow the black line. The robot design was adapted to the objectives of the tasks. The experienced group used LEGO SPIKE Prime to drive a slalom, clean up scattered bricks, solve a maze, make sounds, recognize colors, follow the black line, and use gears. This group assembled more complex artifacts such as robotic arms, soccer players, and sumo robots. The high schoolers used LEGO SPIKE Prime and REV Robotics metal elements to build and control more advanced robots. REV was used because it is suitable for hands-on learning of engineering principles [14].

As the participants worked on solving the tasks, two researchers conducted unstruc-
tured interviews with them to direct their attention to the possibilities of using the tech-
nology they were exploring in the ELT classroom. The idea behind this approach was
to investigate pupils’ perspectives so that further application of robots can be adjusted
to the needs of the ELT classroom. Based on their feedback, a set of 12 questions was
formed aiming to uncover specific and more elaborated possibilities and pupils’ pref-
erences for using robots in an ELT classroom. The questionnaires were distributed at
the end of the robotics camp. As one camp participant did not take part in the study, the sample consisted of 8 females and 15 males from different parts of Croatia. The questions were related to robot design and learning scenario design, as seen in Table 1. To investigate the preferable robot design, two approaches were tested: the first based on a preference for a real robot design introduced at the camp (Q1), and the second based on an imaginary robot whose design depended on the pupils' vision (Q3, Q4, Q5, Q8). Characteristics of a preferable learning scenario design were investigated with respect to the scope of the tasks, the nature of the content, assembly and programming responsibility, duration, and working practice. To give context and promote creative thinking, the following sentence was used as the introduction to the questionnaire: "Imagine you are in an English class and you learn about fruits and vegetables, or you learn concepts and words that can be found in a store." As all the pupils were already familiar with this content from the input, it was expected that it would easily prompt their ideas about a possible robot design and learning scenario.

Table 1. Questionnaire

Robot design
1 Which robot design that you assembled at the camp could be used in an English class?
3 Design a different robot that you could use in an English class when learning about fruits and
vegetables or store-related words. Draw that robot
4 Which sensors are on the robot?
5 How many motors does the robot have?
8 Which parts would you use to assemble the robot (e.g. Fischertechnik, STEMI, LEGO, wood,
metal)?
Learning scenario design
2, 6 What would be the task for the pupils with this robot?
7 Would you learn new content in this class, or would you revise what you have previously learned?
9 Would the teacher assemble the robot in advance, or would the pupils assemble it in class?
10 Would the teacher program the robot in advance, or would the pupils program it in class?
11 How long would such activity last?
12 How many pupils would use one robot?
Pupils’ Perspectives Regarding Robot Design and Language Learning Scenario 149

3 Results and Discussion


A qualitative analysis of the questionnaire was conducted, as it consisted mostly of open-ended questions. A few challenges occurred while analyzing the results. Since the answers were handwritten, some of them were illegible. To overcome the issue of illegibility, all four authors were included in the interpretation of the answers and a consensus regarding their meaning was reached. Another challenge was that not every pupil responded to all the questions. Lastly, due to the convenience sample, specifically the wide span of age groups included, diverse interpretations of the questions were noted.
The first question was interpreted in three ways: naming a robotic kit that was used, naming a design from the workshops, or describing how specific design characteristics could be utilized and upgraded with additional sensors. Accordingly, the answers mention the LEGO SPIKE Prime set (7) and LEGO Mindstorms EV3 (3). Designs such as a delivery cart, a robotic hand, a Hooper robot, and a soccer player were considered applicable in the given context, and so were the color, distance and ultrasonic sensors as well as the speaker.
Considering the imaginary design, the robots envisioned by the pupils could be categorized according to their physical appearance into the anthropomorphic (humanoid) and mechanomorphic (vehicle) robot types. The results correspond to [15], where, without additional guidance, pupils drew imaginary robots as anthropomorphic (95.4%) and mechanomorphic. Similarly, in [16] anthropomorphic robots were noted in 88% of the drawings, followed by mechanomorphic (5%) and zoomorphic (4%) ones. After a workshop utilizing mechanomorphic robots, an influence on perception was noted, namely an increase of the mechanomorphic type and a drop of the anthropomorphic type [16]; hence, the pupils' perception can differ depending on the context. Regardless of the distinction related to physical appearance, all types use movement and perform activities such as sorting fruits and vegetables or transporting such items to a certain point (Fig. 1). The pupils' answers regarding sensors show that children are familiar with a variety of available sensors; they named ultrasonic, color, distance, touch, gyro, camera, and microphone. An influence of the workshops was noted, as the ultrasonic and color sensors, which were used in the workshops, prevailed. Moreover, the imaginary design would most likely include two (12) or three (4) motors. Regarding building materials, most of the pupils would assemble the robot using LEGO kit pieces (12), or LEGO with the addition of a screen, paper, or feathers (3). The rest would use metal parts (5, all males), a combination of metal and wood or of metal, LEGO, and wood (2), or even vegetables (1). Again, the influence of the workshops could be the reason for choosing LEGO. Moreover, many pupils had prior experience with LEGO. Interestingly, although only the high schoolers worked with the metal REV robots, six of the pupils who mentioned metal were not high schoolers.
In the following paragraphs, learning scenario design is discussed. Tasks suggested
by the pupils involve the development of all four language skills: reading, listening,
speaking, and writing. Reading tasks are mostly based on the interpretation of a word
in the English language, where either a robot or a pupil needs to react to the lexis, or
connect it to an appropriate picture, etc. For example, P91 expressed the idea: “The

1 P9 stands for the answer provided by participant No. 9. The abbreviations used to represent the other participants are P1, P2, …, and P23.

pupils would need to recognize or read a picture or a word from the screen." Listening
tasks involve listening to a target lexis-related question that the robot presents and the
pupils need to react to. For instance, P22 stated that the pupils would need to help the
robot reach a goal by replying to the robot’s questions using a remote control.

Fig. 1. Pupils’ robot design. From left to right, up to down: recognition of fruits and vegetables
from pictures; pupils answer to questions and help the robot to follow the line; robotic arm that
puts names of fruits and vegetables in different boxes based on pupils’ input; robot assigns words
to translate; and pupils control the robot to sort fruits and vegetables.

An often-cited example of a speaking task involves having a robot perform a movement it has been pre-programmed to do. The movement would involve interpreting target vocabulary and performing an activity connected to it. P7 suggested: "The pupils would teach the robot to move and to recognize fruits and vegetables by color. When the robot finds fruits or vegetables, it would need to stop and name the fruit or the vegetable." A similar activity was carried out in [6], where the pupils gave navigation commands to a LEGO Mindstorms robot. The robot was required to reach a colored target, sense the color of the target, and shoot a ball into a matching goal. These two examples illustrate the Total Robo Physical Response (TRPR), since the robot moves according to the commands of the users. We propose the concept of TRPR as a Total Physical Response (TPR [17]) in robot-assisted learning, where the robot is the physical doer of an activity operated and controlled by a child. On the other hand, [18] presented a case of TPR where pupils followed the commands (such as turning around, putting their hands up, etc.) presented by a humanoid robot. Finally, writing practice has also been suggested, where the robot would require the pupils to provide the correct spelling of the target vocabulary. P23 shared: "The pupils would need to pronounce the word and spell it in English." Some pupils provided ideas for practicing grammar and translation. P11 proposed that the pupils need to translate the words that the robot pronounces – if they do it correctly, they score a point.
According to pupils’ answers, robots should be used for learning new content (12),
to revise previously learned content (9) or for both depending on the robot envisioned
(2).
Assembly is well accepted among pupils, a majority (11) considered that they should
assemble the robot during the ELT class or at least upgrade it (7). Only three pupils wrote
Pupils’ Perspectives Regarding Robot Design and Language Learning Scenario 151

that the robot should be already assembled. One pupil did not answer, and one pupil wrote
that it depends on the motivation of the pupils in the class.
Regarding programming responsibility, the majority would like to create their own
program during the lecture (16), while some (5) prefer that the teacher does it beforehand.
Pupils prefer working with robots for a longer period, as seen in the distribution of answers: the activity with the robot should last less than half an hour (3), half an hour (5), a whole hour (8), or more than one hour (7).
Lastly, the number of pupils per robot was the question with the most divergent answers: each pupil should have her/his own robot (4), a maximum of two pupils per robot (5), a maximum of three pupils per robot (6), and four or more pupils per robot (7). The highest number noted was 10 pupils per robot. According to teachers, robots should be used individually or in small groups to experience teamwork [19], and based on the answers, the pupils in this study recognized the benefit of working with robots in small groups.

4 Conclusion and Future Work

In this paper, we presented the results of a pilot study concerning pupils' perspectives, i.e., their ideas and preferences regarding the utilization of robots in an ELT classroom. The pupils envisioned robots, described and drew them, and decided on sensors (mostly ultrasonic and color) and building materials (mostly LEGO kit pieces or metal). Analysis of their input for the given context showed that the robots could be used for teaching all four language skills: reading, listening, speaking, and writing; should have the appearance of an anthropomorphic or mechanomorphic robot type; and could sort or transport objects. Pupils would prefer to assemble and program robots in an ELT classroom by working individually or collaboratively in small groups for a longer period. The results can be used as a starting point for implementing a robot as an educational tool in the ELT classroom, namely in designing tasks and making an appropriate choice of a robot for a given context. In the future, we plan to organize camps annually so that pupils can develop their programming skills while using robots in an informal environment, but also learn English by utilizing robots. Research should be further conducted for specific age groups and different contexts, corresponding to the prescribed English language educational materials, with the aim of conceptualizing the model of a robot as an educational tool in the ELT classroom in Croatia.

Acknowledgment. This work has been supported by Croatian Science Foundation under the
project UIP-2017–05-5917 HRZZ-TRES and Croatian robotic association. The work of doctoral
students Petra Karabin and Ivana Storjak has been fully supported by the “Young researchers’
career development project – training of doctoral students” of the Croatian Science Foundation
DOK-2018–09 and ESF-DOK-2018–01.

References
1. Nikolić, G.: Robotska edukacija “robotska pismenost” ante portas? Andragoški glasnik 20(1–
2), 25–57 (2016)

2. Han, J.: Emerging technologies: robot assisted language learning. Lang. Learn. Technol.
16(3), 1–9 (2012)
3. Randall, N.: A survey of robot-assisted language learning (RALL). ACM Trans. Hum.-Robot
Interact. 9(1). 7, 1–36 (2019)
4. Han, J.: Robot-aided learning and r-learning services. Hum.-Robot Interact. 247–266 (2010)
5. Mubin, O., Stevens, C.J., Shahid, S., Al Mahmud, A., Dong, J.J.: A review of the applicability
of robots in education. J. Technol. Educ. Learn. 1(209–0015), 13 (2013)
6. Mubin, O., Shadid, S., Bartneck, C.: Robot assisted language learning through games: a
comparison of two case studies. Aust. J. Intell. Inf. Process. Syst. 13(3), 9–14 (2013)
7. Lee, S., et al.: On the effectiveness of robot-assisted language learning. ReCALL 23(1), 25–58
(2011)
8. Hong, Z.-W., Huang, Y.-M., Hsu, M., Shen, W.-W.: Authoring robot-assisted instructional
materials for improving learning performance and motivation in EFL classrooms. Educ.
Technol. Soc. 19(1), 337–349 (2016)
9. Kanda, T., Hirano, T., Eaton, D., Ishiguro, H.: Interactive robots as social partners and peer
tutors for children: a field trial. J. Hum.-Comput. Interact. 19(1–2), 61–84 (2004)
10. You, Z.-J., Shen, C.-Y., Chang, C.-W., Liu, B.-J., Chen, G.-D.: A robot as a teaching assistant
in an English class. In: Proceedings of the 6th International Conference on Advanced Learning
Technologies, Kerkrade, pp. 87–91. IEEE (2006)
11. Kurikulumi–Škola Za Život. https://skolazazivot.hr/kurikulumi-2/. Accessed 30 Mar 2021
12. Karabin, P., Cergol, K., Mikac, U., Sović Kržić, A., Puškar, L.: Future teachers’ attitudes on
using robots as educational tools. In: Proceedings of the Symposium Trends and Challenges
in FL Education and Research, International Scientific and Art Conference Contemporary
Themes in Education, In press
13. Milašinčić, A., Anđelić, B., Pushkar, L., Sović Kržić, A.: Using robots as an educational tool
in native language lesson. In: Merdan, M., Lepuschitz, W., Koppensteiner, G., Balogh, R.,
Obdržálek, D. (eds.) RiE 2019. AISC, vol. 1023, pp. 296–301. Springer, Cham (2020). https://
doi.org/10.1007/978-3-030-26945-6_26
14. REV Robotics. https://www.revrobotics.com/. Accessed 30 Mar 2021
15. Bruni, F., Nisdeo, M.: Educational robots and children’s imagery: a preliminary investigation
in the first year of primary school. Res. Educ. Media 9(1), 37–44 (2017)
16. Storjak, I., Pushkar, L., Jagust, T., Krzic, A. S.: First steps into STEM for young pupils
through informal workshops. In: 2020 IEEE Frontiers in Education Conference (FIE), pp. 1–5,
Uppsala. IEEE (2020)
17. Asher, J.J.: The total physical response approach to second language learning. Modern Lang.
J. 53(1), 3–17 (1969)
18. Chang, C.-W., Lee, J.-H., Chao, P.-Y., Wang, C.-Y., Chen, G.-D.: Exploring the possibility
of using humanoid robots as instructional tools for teaching a second language in primary
school. Educ. Technol. Soc. 13(2), 13–24 (2010)
19. Serholt, S., et al.: Teachers’ views on the use of empathic robotic tutors in the classroom. In:
23rd IEEE RO-MAN, Edinburgh, pp. 955–960. IEEE Press (2014)
Teachers in Focus
Enhancing Teacher-Students’ Digital
Competence with Educational Robots

Elyna Heinmäe(B) , Janika Leoste, Külli Kori, and Kadri Mettis

Tallinn University, Narva rd 25, 10120 Tallinn, Estonia


elyna.heinmae@tlu.ee

Abstract. The digital competence of an average teacher is still behind the level needed for teaching in technology-rich classrooms. Teacher-students need skills and knowledge for enhancing their lesson plans and activities with technology. Educational robots can be used to improve teacher-students' digital competence and to support their problem solving, communication, creativity, collaboration, and critical thinking skills. Together with STEAM kits, educational robots provide good opportunities to teach subjects like mathematics, chemistry, biology, and physics. The aim of this case study is to find out whether a robot-enhanced training course encourages the development of teacher-students' digital competence, and which educational robots or STEAM kits teacher-students prefer to use in learning activities. We used the European Digital Competence Framework for Educators to assess teacher-students' competence in four areas (digital resources, teaching and learning, empowering learners, facilitating learners' digital competence) before and after a robot-enhanced training course. Our results indicate that activities with educational robots (hands-on experiments and designing innovative lesson plans) supported and developed teacher-students' digital competence. In conclusion, we recommend using educational robots or other STEAM kits when developing teachers' digital competence.

Keywords: Educational robots · Digital competence · Teacher-students · DigCompEdu

1 Introduction

Rapid digital transformation is reshaping society, the labor market and the future of
education and training [1]. Thus, being digitally competent is a key requirement for a
21st century teacher [2, 3]. The vast and growing array of digital technologies (devices,
systems, electronic tools, and software) forces us to think more about people, their
mindsets, attitudes and skills. Although teachers must be digitally competent (i.e., able
to effectively integrate technology in classrooms and to empower learners to become
creators of technology etc.), studies [4, 5] show that while teachers have a positive attitude
towards using technology in the classroom, their digital competence is still behind the
level needed for teaching in technology-rich classrooms. This leads to situations where
schools and kindergartens are equipped with ICT tools and innovative STEAM kits

(robots, sensors, electronic sets) that are left unused, as teachers are unable to integrate these sets into their daily teaching practices [5]. This could originate from a lack of time and support or from inadequate technological education during the teachers' studies (a poor level of digital skills).
According to EU Kids Online [6], many secondary education students are entering universities, professional education or the labor market with extraordinarily relevant digital skills. Still, too large a percentage of learners have a low level of digital and technological literacy skills. In this context, an education system that does not provide learners with an adequate background and familiarity with technology-based (digital) tools will fail to prepare them for their future professional activities in our digital societies. The research problem here is twofold: on the one hand, the teacher-students' low digital competence, and on the other hand, the lack of courage to integrate educational robotics into educational activities.
In order to cope with these problems, Estonia has designed systemic strategies to enhance the attractiveness of the teaching profession. Implementing robots in teacher training courses helps to engage teacher-students in learning STEM subjects, to introduce programming, and to raise awareness that using robots in classrooms is not difficult [7, 8].
The aim of this case study is (a) to find out which digital competence can be taught through educational robots, and (b) to examine which educational robots and STEAM kits teacher-students preferred for developing their digital skills.
Based on the research problem and research aims, the following research questions
were formulated:

1. Which teacher-students’ digital competence differ before and after a training course
with STEAM K12 open educational resources?
2. What type of educational robots and STEAM kits do teacher-students consider as
most appropriate for designing their digital competence?

This study is based on a collection of open educational resources (OER), called STEAM K12, created by the authors and their university colleagues in 2020. The collection aims to increase the digital competence of students of non-ICT fields, with an emphasis on teacher-students, by offering them robotics and STEAM workshops as an essential part of their one-semester study course. Interactive training modules were designed and arranged into collections according to the topics taught in the Estonian school stages K1-K3, K4-K6, K7-K9 and K10-K12. Each collection includes digital interactive learning designs that are based on interactive exercise templates of the free community-based content collaboration platform H5P [9]. For the future teachers to perceive these materials as useful for their disciplines, the content of the learning designs includes discussions about wider social challenges, proposing different solutions for these problems. Multiple robotics and STEAM tools are used for presenting, visualizing, modelling, and prototyping possible problems. The social challenges for the learning designs include climate, topsoil, water, the ocean, and biological diversity. For validating the project aims, the materials were piloted with three different groups of teacher-students. The teacher-students' digital competence in four areas (digital resources, teaching and learning, empowering learners and facilitating learners' digital competence) was measured before and after the course by using the European Digital Competence Framework for Educators (DigCompEdu) [10].
Enhancing Teacher-Students’ Digital Competence 157

We wanted to find out whether using the STEAM K12 collection for solving experiments on their own, adapting these OERs to the classroom environment, and using the adapted OERs to conduct robotics workshops for students would have a significant influence on the teacher-students' digital competence.

2 Educational Robots and Digital Competence

Technological advancements eventually find their way into the education sector. Stake-
holder pressure and necessities of the labor market force the education sector to embrace
these advancements with the goal of making learning and teaching more effective. Using
technology to enhance learning is expected to improve some or all aspects of the learning
process without an additional overhead.
The term Educational Robots (ER) refers to the use of robots in education: robots
as learning objects, robots as learning tools and learning aids. Thus, robots as learning
tools refers to using robots to teach different subjects, such as science, math, or physics
[11]. Surveys show [12, 13] that educational robots should be seen as tools to support
and foster STEAM skills such as problem solving, communication, creativity, collab-
oration and critical thinking. Moreover, using educational robots is an innovative way
to increase the attractiveness of science education [14]. Despite the positive results of
using educational robots in the learning process, there are some crucial obstacles, which
have to be pointed out. Studies show [15, 16] that educational robotics technology is still in its developmental stage and that there is a lack of appropriate hardware platforms and software frameworks. There are also rather few integrated didactic concepts available, making it difficult for teachers to design lesson plans and to share materials and best teaching practices. We can say that many teachers do not have a strong enough understanding and background knowledge in this area [17]. Therefore, applying educational robots in the context of teacher training is essential.
According to DigCompEdu [10], Estonian national documents [8, 18] and quali-
fication standards [19], teachers must be digitally competent. Digital pedagogy is one
important strategic goal in the Estonian Education Strategy for 2021–2035 [20]. Thus, edu-
cators have to know trends, opportunities, risks and methodologies related to new tech-
nologies. Also, they have to apply the technologies in a purposeful way. Defining digital
competence as a key competence across Europe is a consistent approach, based on Dig-
CompEdu [10]. This framework aims to describe how digital technologies can be used to
enhance and innovate education and training. The majority of European education systems
have included learning outcomes related to digital competence [7].

3 Methodology
This research was designed as a case study, which consisted of master's degree teacher-students who participated in three different courses. The first course, "Educational Tech-
nology in Learning Process”, with the volume of 3 ECTS (European Credit Point Transfer
System) equal to 78 h, was conducted by one of the researchers. Another researcher con-
ducted the other courses: an interdisciplinary course “Using robots to save the world”,

with the volume of 6 ECTS equal to 156 h, and a “Robot-integrated learning in the kinder-
garten and primary” course with the volume of 6 ECTS equal to 156 h. The courses were
conducted during the autumn semester of 2020.
Our study included a semi-structured online survey based on the DigCompEdu Check-In Self-Reflection Tool [21]. The check-in self-reflection tool had previously been translated by HITSA1. Thus, some descriptions of the competence (compared with DigCompEdu) may differ a little in wording.
We decided to use this self-reflection tool as a method that allows the participants
to assess their digital competence (17 items) in four areas (see Table 1). Also, this self-
reflection tool is mainly used in teacher training courses at Tallinn University. Digital competence in areas 1 and 4 was not measured because these areas could not be tested in the context of these courses.
The participants assessed their expertise levels in 17 digital competences. Each com-
petence could be evaluated from Level 0 (the lowest level) to Level 5 (the highest level).
They were also asked to give examples (via open-ended questions) of how they apply these skills in their everyday practices.
We also used level descriptions (0–5) similar to those of DigCompEdu, with some modifications: Level 0 (No use), Level 1 (Beginner), Level 2 (developing Expert), Level 3 (Expert), Level 4 (developing Leader), and Level 5 (Leader). According to the DigCompEdu explanations, Level 0 means that educators have had very little contact with digital technologies, and Levels 1 and 2 mean that educators assimilate new information and develop basic digital practices. Levels 3 and 4 mean that educators apply, further expand and reflect on their digital practices, and Level 5 means that educators pass on their knowledge, critique existing practice and develop new practices.

Table 1. Description of selected DigCompEdu competence.

Competence Description of activities


Area 2. (2.1, 2.2, 2.3, 2.4) Digital resources: sourcing, creating and sharing digital
resources
Area 3. (3.1, 3.2, 3.3, 3.4) Teaching and learning: managing and orchestrating
the use of digital technologies in teaching and learning
Area 5. (5.1, 5.2, 5.3) Empowering learners: using digital technologies to
enhance inclusion, personalization and learners’ active
engagement
Area 6. (6.1, 6.2, 6.3, 6.4, 6.5, 6.6) Facilitating learners’ digital competence: enabling
learners to creatively and responsibly use digital
technologies for information, communication, content
creation, wellbeing and problem solving

The teacher-students filled in a digital competence questionnaire in Google Forms after the first and the last course meetings. The teacher-students were informed about the use
1 The Estonian Information Technology Foundation for Education – https://www.hitsa.ee/.
Enhancing Teacher-Students’ Digital Competence 159

of the data for research purposes and participation was voluntary. The questionnaire was personalized, and only those teacher-students who filled in the survey twice were included in the sample. In total, the population consisted of 55 teacher-students (across the three courses) and the sample was 36 teacher-students.

Fig. 1. Examples of robots used in the workshops (first row: LEGO WeDo, second row: Sphero, third row: EasiScope microscope and Dash).

The participants (n = 36) assessed their digital competence before and after the
course, which included activities with educational robots (Edison, Ozobot, LEGO WeDo
2.0, LEGO EV3, LEGO Spike Prime, mTiny, Sphero, etc.), STEAM kits (Makeblock
Neuron, Little Bits, thermal camera Flir One, different sensors of Vernier and Globisens,
TTS EasiScope microscope), and virtual reality solutions. One of their tasks was to design lesson plans based on a freely chosen educational robotics or STEAM platform. These lesson plans were used to gather information about which of these educational robots and STEAM kits the participants preferred to use in the learning and teaching context. Depending on the COVID-19 restrictions, the three courses were conducted first in person, next in hybrid form, and finally as distance learning courses. The participants

worked as teams of 3–5 members. At first, they were familiarized with the STEAM K12
OERs, and they solved robotics experiments in teamwork. Next, each group chose 1–2
experiments to be used as the basis for a workshop. The teacher-students adapted the
material to the age group that participated in the workshops. Depending on the COVID-
19 restrictions, the participants of two courses conducted their workshops at school,
kindergarten or with their own children. As the swimming pools were open, the
“Using robots to save the world” mission with the Sphero robot was conducted in a
swimming pool (see Fig. 1). The participants in the third course simulated children of
appropriate age and conducted their classes at the university.
Due to not all participants’ schools, kindergartens and homes having the necessary
equipment (i.e. educational robots, tablets), these were borrowed from the university if
needed. Teacher-students were also allowed to prepare their materials in the university’s
EDUSPACE research laboratory.

4 Results

4.1 Which Teacher-Students’ Digital Competence Differ Before and After


a Training Course with STEAM K12 Open Educational Resources?

The data were analyzed with IBM SPSS Statistics 27, using paired sample t-tests and descriptive statistics. The paired sample t-tests showed that all digital competences (in the four areas) improved statistically significantly (see Table 2). In order to put our findings into context, we analyzed the participants' answers to the open-ended questions about using educational robots in learning activities.
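The analysis itself was done in SPSS; as a hedged illustration, an equivalent paired-samples t-test for one competence could be sketched in Python as follows (the pre/post levels below are invented placeholders, not the questionnaire data of the 36 participants):

```python
import numpy as np
from scipy import stats

# Self-assessed levels (0-5) of the same respondents before and after the course.
pre  = np.array([1, 2, 0, 3, 1, 2, 1, 0, 2, 1])
post = np.array([2, 3, 2, 4, 3, 3, 2, 1, 3, 2])

# Paired-samples t-test: the same participants are measured twice.
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"M_pre = {pre.mean():.2f}, M_post = {post.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```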
The frequency analysis showed that before the course (including activities with
educational robots), 26 teacher-students assessed their digital resources competence (2.1,
2.2, 2.3, and 2.4) as Level 0 - No use. After the course, only 6 of the 36 teacher-students
still considered their digital skills in selecting, creating and modifying, managing,
protecting, etc. relatively low. In addition, before the ICT-enriched course, only 6 teacher-
students considered themselves Leaders (Level 5) in the context of digital resources. After
the course, 16 teacher-students evaluated themselves as Leaders (Level 5).
According to the results, before the course the majority of teacher-students rated
their digital competence very low, especially communication (6.2), self-regulated learning
(3.4) and content creation (6.4).
We found that, before the course, teacher-students in all areas (digital resources,
teaching and learning, empowering learners, facilitating learners’ digital competence)
tended to assess their skills as No use (Level 0), whereas after the course their
self-assessment had significantly improved, reaching Levels 2 (developing Expert) and 3 (Expert).
Surprisingly, however, some teacher-students who had assessed their digital competence
in teaching (3.1), guidance (3.2) and self-regulated learning (3.4) quite high before
the course revised their assessments and reported lower values after the course.
We also used open-ended questions to find out teacher-students’ opinions of, and
explanations about, using educational robotics in learning activities. The answers suggest that
successful exploitation of robots depended on the availability of OER lesson design templates.
The participants pointed out that using educational robots with researcher-supplied OER

materials (STEAM K12) was novel and exciting: “I didn’t know anything about this kind
of materials and I hope this type of content will be supplemented and further developed”
(S1). The learners mentioned that STEAM K12 resources were helpful and contributed
to planning and implementing learning activities with robots. They reported feeling
more confident and supported when they were able to use these previously created
collections.

Table 2. Improvement in digital competence (N = 36, min = 0, max = 5).

Digital competence | Pre-questionnaire M (SD) | Post-questionnaire M (SD) | Paired sample t-test
2. Digital resources | 1.96 (1.07) | 2.78 (1.14) | t = −5.83, p < 0.01
2.1. Selecting | 1.78 (1.17) | 2.89 (1.21) | t = −7.02, p < 0.01
2.2. Creating and modifying | 1.86 (1.22) | 2.56 (1.36) | t = −3.37, p < 0.01
2.3. Managing, protecting, sharing | 1.61 (1.38) | 2.64 (1.38) | t = −4.92, p < 0.01
2.4. Sensitive information and privacy | 2.58 (1.20) | 3.06 (1.33) | t = −2.62, p < 0.01
3. Teaching and learning | 1.38 (1.08) | 2.49 (1.19) | t = −6.27, p < 0.01
3.1. Teaching | 1.83 (1.32) | 2.89 (1.21) | t = −5.79, p < 0.01
3.2. Guidance | 1.56 (1.36) | 2.50 (1.16) | t = −4.33, p < 0.01
3.3. Collaborative learning | 1.31 (1.22) | 2.33 (1.27) | t = −4.92, p < 0.01
3.4. Self-regulated learning | 0.81 (0.92) | 2.25 (1.42) | t = −6.68, p < 0.01
5. Empowering learners | 1.26 (1.00) | 2.39 (1.25) | t = −6.83, p < 0.01
5.1. Accessibility and inclusion | 1.19 (1.04) | 2.17 (1.34) | t = −5.26, p < 0.01
5.2. Differentiation and personalization | 1.19 (1.06) | 2.28 (1.37) | t = −6.02, p < 0.01
5.3. Actively engaging learners | 1.39 (1.10) | 2.72 (1.32) | t = −6.56, p < 0.01
6. Facilitating learners’ digital competence | 1.27 (1.05) | 2.25 (1.12) | t = −6.50, p < 0.01
6.1. Information and media literacy | 1.44 (1.36) | 2.31 (1.47) | t = −4.07, p < 0.01
6.2. Communication | 1.33 (1.35) | 2.22 (1.40) | t = −3.90, p < 0.01
6.3. Online behavior and digital identity | 1.28 (1.21) | 2.53 (1.30) | t = −7.13, p < 0.01
6.4. Content creation | 0.75 (0.97) | 1.75 (1.36) | t = −5.76, p < 0.01
6.5. Responsible use | 1.53 (1.43) | 2.42 (1.27) | t = −4.58, p < 0.01
6.6. Problem solving | 1.31 (1.14) | 2.25 (1.10) | t = −.06, p < 0.01

Open-ended answers also revealed that teacher-students considered it important to
familiarize themselves with educational robots in practice. They emphasized that testing
and hands-on activities would bring out potential faults and give certainty and confidence
in front of pupils: “I learned that I don’t have to be afraid of robots, especially when there
is always someone to ask help and most importantly, it’s worth discovering it first by
myself. I learned that using educational robots to improve my digital skills is rather easy,
but you have to know some important aspects of learning and teaching and feedback is
essential” (S2). The teacher-students also pointed out that using educational robots offered an
important opportunity to develop their digital skills, in particular by significantly improving
their communication and teamwork competence: “Using robots made me think that I
am very good at it. I know how to use and implement these tools. I’m an expert, but
I want to be a leader. I should focus even more on how to support and inform others
(colleagues, fellow-students) how to use these solutions and develop digital skills” (S3).
Finally, some learners admitted that they would have liked to ‘play’ even more with
the robots, and that after analyzing their digital competence they had realized how they had
overestimated themselves (in some areas) while still having plenty of room for development
and improvement.

4.2 What Type of Educational Robots and STEAM Kits Do Teacher-Students
Consider as Most Appropriate for Developing Their Digital Competence?

Based on the lesson plans that the participants designed, and on answers to the open-
ended questions in the survey, we studied what types of educational robots and STEAM
kits they preferred to use in learning activities. The results show that the most popular
educational robots were Ozobot, LEGO WeDo 2.0 and Sphero. Of the STEAM kits, only
the handheld digital microscope EasiScope was chosen. The teacher-students did
not use any standalone sensors in their lesson plans or activities with pupils.
The tools selected by the participants are designed with the needs of kindergarten
children in mind: they are highly reliable, easy to use, and programmable in a
graphical programming environment. The Ozobot robot can be used without additional
devices, using only its pre-programmed line following and color-code programming
features. The Sphero robot can be controlled in a remote-control mode, without any
programming at all. We would like to emphasize that Sphero is suitable for
introducing the working principles of robots, meaning that learner engagement with
robotics can begin before learning programming and coding. In addition, programming
alone does not cover all digital competence areas.

Building a LEGO WeDo 2.0 robot is not very time consuming – constructing a typical
model takes only 15–20 min, its programming language is minimalist, and when using
examples, the robot can be made to move easily. As these robots are relatively common in
Estonian schools and kindergartens, and as the majority of our participants are already
working in these institutions, we can assume that the teacher-students preferred the
devices that they had already seen in their workplaces and perceived as easy to use.
It is possible that these results could have been different if the participants had
previous knowledge about all robotics platforms that were used during the course. Some
teacher-students indicated that for them, robots and technology were quite unfamiliar
topics, but after the course, they had gathered more confidence and were planning to
implement educational robots in their teaching practices.

5 Discussion and Conclusions

The aim of this case study was to discover the areas in which teacher-students’ digital
competence improves after a robot-enhanced training course, and to identify the educational
robots and STEAM kits that teacher-students preferred to use for developing their digital
skills. We theorized that involving educational robots in teacher training courses would
develop teacher-students’ digital competence in four DigCompEdu areas. Using the
DigCompEdu check-in list (an Estonian translation with a few modifications), we evaluated
a total of 17 competences. The results showed that learners’ competence advanced in all
4 DigCompEdu areas, with significant improvements in digital resources (2.1, 2.2, 2.3,
2.4), teaching and learning (3.1, 3.2, 3.4) and facilitating learners’ digital competence
(6.2). Our results suggest that educational robots can be used as helpful tools to facilitate
the development of learners’ digital competence in the context of higher education [14].
As we had expected, the majority of the participants rated their digital competence
much higher after the course. These results can be explained by the fact that the courses
were very practical, with hands-on tasks. In addition, the COVID-19 situation forced
teacher-students to communicate more online and to take responsibility for their own
learning because there was less in-person communication and direct guidance by the
lecturer.
Our findings showed that the participants rated their competence after the training
course relatively high, at Levels 2 (developing Expert) and 3 (Expert). The reason for
this could be the teacher-students’ preference for easier-to-handle robots and STEAM kits
when planning their lesson activities. The use of these robots (such as Ozobot) did not require
top-level digital competence, and the participants felt more confident and comfortable
with these tools. It is important to emphasize that with novice teachers (both didactically
and technologically), it is wise to start with simpler robots and sensors.
Our results indicate, similarly to previous findings [22], that some teacher-students
evaluated their digital competence in the context of teaching and learning much lower
after the ICT-enriched course. This could be a result of a reality check for their existing
knowledge after reading the descriptions of DigCompEdu check-in items. Learners,
when assessing their digital competence, had to understand the description of all items.
Thus, reading the descriptions could give a clearer understanding of what is being
asked and what they have to assess. As a result, learners could discover what knowledge

and competence they are lacking. Therefore, understanding the DigCompEdu model and
how to analyze digital competence is a very important skill that needs to be taught and
supported in teacher-training courses.
The case study has some limitations. One weakness is the non-representative and
rather small sample, which does not allow us to make broader generalizations from the
findings. Also, teacher-students’ activities during the course were assessed by
the lecturer, and learners got additional feedback from their peers. This feedback may
have influenced the teacher-students’ self-perception of their digital competence.
We believe that additional research, especially qualitative research, is needed on the
topic. Valuable data can be gathered from, for instance, interviews and observations
with teacher-students in order to explore how they use educational robots in real life
situations with their pupils, and how they use STEAM kits to support and enhance
pupils’ digital competence. Future studies should also focus on the pre-teacher and post-
teacher experiences with educational robots in the context of digital competence. Also,
we plan to carry out an experiment with test and control groups, with various settings,
in order to examine how STEAM activities change learners’ digital competence.
In conclusion, we believe our findings provide valuable input to the wider topic
of how to support teachers’ professional development in the ICT field in general, and in the
STEAM field specifically, by using educational robots.

Acknowledgments. Project “TU TEE - Tallinn University as a promoter of intelligent lifestyle”
(nr 2014–2020.4.01.16–0033) under activity A5 in the Tallinn University Centre of Excellence in
Educational Innovation.

References
1. The Digital Education Action Plan (2021–2027): Resetting education and training for the digi-
tal age. https://ec.europa.eu/education/education-in-the-eu/digital-education-action-plan_en.
Accessed 21 Dec 2020
2. OECD: The Future of Education and Skills: Education 2030. OECD Publishing, Paris (2018)
3. All digital webpage. https://all-digital.org/all-digital-annual-report-2018-is-out/. Accessed 08
Nov 2020
4. OECD: TALIS 2018 Results (Volume II): Teachers and School Leaders as Valued Profes-
sionals, TALIS. OECD Publishing, Paris (2020)
5. Praxis Homepage. http://www.praxis.ee/en/works/ict-education-in-estonian-schools-and-kin
dergartens/. Accessed 10 Jan 2020
6. Smahel, D., et al.: EU Kids Online 2020: survey results from 19 countries. EU Kids Online
(2020)
7. Eurydice: Digital Education at School in Europe. Eurydice Report. Publications Office of the
European Union, Luxembourg (2019)
8. The Estonian Lifelong Learning Strategy 2020. Ministry of education and research, Tallinn
(2014)
9. STEAM K12 - open education resource. https://e-koolikott.ee/kogumik/28660-STEAM-K12.
Accessed 10 Jan 2020
10. Punie, Y., Redecker, C.: Digital Competence Framework for Educators: DigCompEdu.
Publications Office of the European Union (2017)

11. Angel-Fernandez, J.M., Vincze, M.: Introducing storytelling to educational robotic activities.
In: IEEE Global Engineering Education Conference (EDUCON) 2018, pp. 608–615, Tenerife
(2018)
12. Alimisis, D.: Educational robotics: open questions and new challenges. Themes in Sci.
Technol. Educ. 6(1), 63–71 (2013)
13. Harris, A., de Bruin, L.R.: Secondary school creativity, teacher practice and STEAM educa-
tion: An international study. J. Educ. Change 19(2), 153–179 (2017). https://doi.org/10.1007/
s10833-017-9311-2
14. De Gasperis, G., Di Mascio, T., Tarantino, L.: Toward a cognitive robotics laboratory based on
a technology enhanced learning approach. In: Vittorini, P., et al. (eds.) International Confer-
ence in Methodologies and Intelligent Systems for Technology Enhanced Learning, AISC,
vol. 617, pp. 101–109. Springer Nature, Switzerland (2017).
15. Gerecke, U., Wagner, B.: The challenges and benefits of using robots in higher education.
Intell. Automation Soft Comput. 13(1), 29–43 (2007)
16. Gorakhnath, I., Padmanabhan, J.: Educational robotics in teaching learning process. Online
Int. Interdiscip. Res. J. 7(2), 161–168 (2017)
17. Leoste, J., Heidmets, M.: Factors influencing the sustainability of robot supported math
learning in basic school. In: Silva, M.F., et al. (eds.) ROBOT’2019: Fourth Iberian Robotics
Conference, AISC 1092, pp. 443–454. Springer Nature, Switzerland AG (2019)
18. National curriculum for basic schools. https://www.riigiteataja.ee/en/eli/524092014014/con
solide. Accessed 10 Jan 2020
19. Occupational Qualification Standards: Teacher, level 7. https://www.kutseregister.ee/ctrl/en/
Standardid/vaata/10719336. Accessed 10 Jan 2020
20. Estonia Education Strategy 2021–2035. https://www.hm.ee/en/activities/strategic-planning-
2021-2035/education-strategy-2035-objectives-and-preliminary-analysis. Accessed 10 Jan
2020
21. The Check-In Self-Reflection Tool. https://ec.europa.eu/jrc/en/DigCompEdu/self-assess
ment. Accessed 10 Jan 2020
22. Abbiati, G., Azzolini, D., Piazzalunga, D., Rettore, E., Schizzerotto, A.: MENTEP Evaluation
Report, Results of the field trials: The impact of the technology-enhanced self-assessment
tool (TET-SAT). European Schoolnet, FBK-IRVAPP, Brussels (2018)
Disciplinary Knowledge of Mathematics
and Science Teachers on the Designing
and Understanding of Robot Programming
Algorithms

Jaime Andrés Carmona-Mesa(B), Daniel Andrés Quiroz-Vallejo,
and Jhony Alexander Villa-Ochoa

Universidad de Antioquia, 005010 Medellín, Colombia
{jandres.carmona,daniel.quirozv,jhony.villa}@udea.edu.co

Abstract. This document presents the results of a study aimed at identifying the
disciplinary knowledge that pre-service mathematics and science teachers exhibit while
understanding and designing a programming algorithm. For this, a block programming
experience with Arduino UNO boards was designed, and an interpretative content
analysis of the data recorded in the experience was developed. The results show
that the two groups of pre-service teachers exhibit both explicit and implicit basic
disciplinary knowledge. This study suggests that science teachers can fluently guide
students in making explicit the relationship between algorithms and disciplinary
knowledge.

Keywords: Computational thinking · Block programming · Fourth industrial
revolution · STEM education · Pre-service teachers

1 Theoretical Background
The current technological revolution (known as the fourth industrial revolution - 4IR)
demands an education that prepares the new generations for challenges such as artificial
intelligence, rapid development of global communications and maximum data dissemi-
nation (Carmona-Mesa et al. 2021; Mpungose 2020; Scepanovic 2019). Faced with these
challenges, educational proposals that highlight the role of programming as an instru-
ment to face this technological revolution have been consolidating. These initiatives focus
on computational thinking as a methodological resource or object of study within the
framework of STEM education (Lee et al. 2020; Villa-Ochoa and Castrillón-Yepes 2020).
They enhance interdisciplinary educational processes based on a broad understanding
of technology in which other disciplines are connected and required (Carmona-Mesa
et al. 2020; Weintrop et al. 2016).
Benton et al. (2017) and Bellas et al. (2019) analyze programming training and its
integration with mathematics through robotics in the context of primary education; sim-
ilarly, Santos et al. (2019) explore the integration of programming with natural sciences.
These studies highlight that: (i) although the contributions of robotics in understanding


the notion of algorithm (a core concept in programming) stand out, its relationship with
other disciplinary knowledge requires further investigation; and (ii) teachers have a key
role to make explicit the utility of algorithms in other disciplines, so it is necessary to
expand their training and confidence to guide experiences in programming based on
robotics.
International research points out that the possibilities of robotics are not fully
exploited in the teaching and learning of specific disciplines, because teachers do not fully
identify their potential (e.g., Zhong and Xia 2018). On the other hand, it is reported
that the main sources of training used by teachers are previous personal experiences in
the subject, self-learning, and experience exchange with colleagues; furthermore,
engineers are the principal professionals who guide programming experiences supported by
robotics in the school curriculum (Patiño et al. 2014).
Different initiatives have been generated to meet the educational needs of teachers:
some aimed at the design of methodological strategies (Bellas et al. 2019); others
considered the design of educational materials, accompanied by related training to support
their integration in the school system (Benton et al. 2017); to a lesser extent, other
initiatives focused on offering virtual courses (Zampieri and Javaroni 2020). Nevertheless,
the research results reported by these initiatives are not conclusive; in particular, there
is no empirical evidence of support to the educational needs of specific-discipline teachers
that makes explicit the relationship between algorithms and specific curricular contents.
In this sense, this study was developed to answer the question: What are the disciplinary
skills that pre-service mathematics and science teachers exhibit in understanding and
designing algorithms when programming robots?

2 Methodology
In the empirical part of this study, a training experience in programming through robotics
was designed and implemented; it consisted of the design of an algorithm using the
mBlock software; the algorithm implicitly implied disciplinary concepts such as dis-
tance, time, speed, right angles, and proportional scales. This experience took a four-hour
session and was implemented during two courses aimed at training teachers in technol-
ogy integration, one for mathematics and the other one for science (with an emphasis on
physics). This section presents the description of the programming training experience,
the tools used for empirical data analysis and the data analysis method.

2.1 Programming Training Based on Robotics

The programming educational experience has a theoretical and practical nature based
on the objective of learning from and through robotics. This experience emerges from
a challenge where elements such as teamwork and disciplinary-knowledge decision
making, as well as specific software and hardware knowledge, are considered (López
and Andrade 2013). For this study, the block programming language mBlock (version
3.4.12) was used; the software allows algorithms to be visualized both as blocks and as
lines of code. In addition, the generated algorithms were uploaded to Arduino UNO boards.

Fig. 1. Map of the university (left). The right image is a zoomed view of the black rectangle
area on the left and shows the route to be covered by the algorithm.

As the robots used had their own power supply, it was possible to operate their motors
without plugging them into an external source.
Educational experiences with robots have often been focused on procedural learning
(Patiño et al. 2014). To transcend this, a challenge was designed to build an algorithm
in such a way that planning, design, test and final evaluation phases were necessary
to solve it (see Table 1). The challenge consisted of programming a vehicle that could
enter the university campus. A map was given to the pre-service teachers (Fig. 1). The
instructions were: enter the university campus through the western vehicular access; the
vehicle (robot) must be parked correctly, and in the shortest possible time, in a specific
location of the parking zone (see Fig. 1). The path to be described by the robot was
studied in the classroom, considering real campus dimensions, but using different scales
to construct the sections: A (1/10 scale), B (1/30 scale), C (1/20 scale) and D (1/30 scale).
After that activity, the following questions were raised: how can the real dimensions of
the path be found and, through the scales, the dimensions of the path that the robot must
cover in the classroom? How can the speed of the robotic car be found in order to
program the algorithm that allows it to travel the entire trajectory?
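A minimal worked example of the reasoning these questions call for (the symbols and numbers below are purely illustrative and are not taken from the study): for a section of real length L drawn at scale 1:k, the classroom length is \ell = L/k; with the robot's average speed measured as v = \Delta d / \Delta t, the motor activation time for that straight section is

T = \frac{\ell}{v} = \frac{L}{k\,v}, \qquad \text{e.g. } L = 12\ \text{m at scale } 1/30 \Rightarrow \ell = 0.4\ \text{m}; \quad v = 0.2\ \text{m/s} \Rightarrow T = 2\ \text{s}.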

Table 1. Phases, activities, knowledge, and times

Contextualization (20 min)
  Activities: Pre-service teachers received a contextualization about the required tasks (Fig. 1).

Planning (30 min)
  Activities: To define the strategy to determine the real measurements of sections A, B, C and D (most of the participants agreed on the use of Google Maps for doing this); to locate the scaled value that the robot must travel; to identify the scripts (code blocks) required for solving the challenge; and to recognize the digital pins that control the wheel motors (the robot has three points of support: a free wheel and two other wheels controlled by motors through digital pins: 8 and 9 for the left motor and 10 and 11 for the right motor; pins 8 and 11 activate clockwise rotation for their respective motors).
  Disciplinary knowledge and algorithms: distance, ratio scale, and the blocks “Arduino Program”, “Wait” and “Set Digital Pin Output”.

Design (30 min)
  Activities: After establishing the connection between mBlock and the Arduino UNO, identifying the actions that each pin performs on both motors and the dimensions of the scaled path built in the classroom, the participants need to establish the average speed of the robot to identify the specific times during which the motors must be activated. In addition, they must identify the configuration of blocks that allows a turn and the time it takes to describe a 90º angle.
  Disciplinary knowledge and algorithms: time, speed, 90º angle, and the blocks required for the design of a functional algorithm.

Verification (100 min: 30 for each of the two initial opportunities and 40 to adjust the final algorithm)
  Activities: Each group has two opportunities to verify the algorithm on the scaled path; each group programs the robot and records the described path. Finally, on the third opportunity, the precision of the final version of the algorithm is tested.
  Disciplinary knowledge and algorithms: from the concepts of distance, time, speed and angle, pre-service teachers make decisions to adjust the built algorithm.

Evaluation and closure (60 min)
  Activities: At the end, an evaluation of the final algorithm is carried out. It considers the information searched, the arguments supporting the produced algorithm, the disciplinary knowledge evoked, the precision of the route, the time spent and the algorithm code.
  Disciplinary knowledge and algorithms: both groups are closely monitored, and questions are formulated in order to help make explicit the reasonings concerning disciplinary and programming knowledge that are implicit in the algorithm.

2.2 Data and Analysis

The educational experiences were implemented in two courses for one week. Thirteen
future mathematics teachers were divided into five teams and nine future science teachers
were divided into three teams. The activities had a full-time teacher and two assistant
teachers. The three teachers jointly designed and implemented the experience. The
experience lasted four hours and was videotaped by two cameras that captured the experience
from opposite angles; in addition, to carry out each test, the algorithm was previously
tested. In total, four hours of video (from both cameras) and three algorithms for each
work team were transcribed.
To answer the study question, an interpretative content analysis was developed using
qualitative criteria to determine the reliability and validity of the analytical processes,
also allowing inferences to be made by objectively and systematically identifying both
manifest and latent content in the recorded data (Drisko and Maschi 2016). Each author
independently analyzed and coded the three algorithms constructed by each team and
the transcription (manifest contents). Once this phase was completed, the findings were
organized into a list of three types of algorithms (basic, intermediate, and sophisticated).
After that, the authors analyzed the transcripts trying to identify the influence of
disciplinary knowledge made implicit or explicit in the participants’ arguments (latent
content). Finally, the findings of each author were discussed, with the whole team seeking
to reach consensus; as a result of this debate, a more refined categorization emerged
(Table 2).

Table 2. List of categories, descriptions, and records for analysis.

Category: Understanding and design of algorithms
  Description: It refers to the analysis of the concept of algorithm that the participants show and the level of its development evidenced in the three performed tests.
  Records: Three algorithms by each team.
Category: Influence of disciplinary knowledge in the construction of algorithms
  Description: It focuses on the influence of the constructs related to the concepts of distance, time, speed, 90° angle and proportional scale shown in the construction and adjustments of the algorithms.
  Records: Transcriptions.

3 Results
The results are presented according to the categories described in Table 2. For each
category, conceptual aspects and records of the training experience that support them
are shown.

3.1 Understanding and Design of Algorithms


Programming educational proposals are usually based on computational thinking as a
methodological resource or as an object of study (Villa-Ochoa and Castrillón-Yepes
2020; Weintrop et al. 2016); however, algorithm design is a central concept in programming
and is common to both approaches. An algorithm is a precise method to solve
a challenge in finite time; it is an abstraction of a procedure that optimizes the steps
required to produce the desired result by using input information (Carmona-Mesa et al.
2021; Benton et al. 2017).
Consequently, the requested algorithm should make explicit the set of scripts (code
blocks) that were used to make the robot go through the route. As this was an introductory
experience intended for pre-service teachers with no previous knowledge of the subject
(or at least none offered in their training program), the algorithm is
relatively simple and, therefore, it is limited to the blocks “Arduino Program”, “wait”
and “set digital pin output” (see Table 1). To reduce possible sources of error, only three
robots with the same hardware characteristics were used and their batteries were changed
after each round of tests.
The teams of both groups were able to build functional algorithms. However, deep
differences were identified between them, which allowed the algorithms to be classified as
basic, intermediate, and sophisticated (see Fig. 2). In the mathematics course, only two out
of five teams presented intermediate algorithms, in which, although only the necessary
blocks are used, digital pin 11 is explicitly turned off to perform the turn; the other three
teams presented basic algorithms that included additional and unnecessary blocks in

Fig. 2. Basic (left), intermediate (center), and sophisticated (right) algorithms.

their construction, for instance “wait_seconds” and “forever”. In the science course, two
teams achieved intermediate algorithms and the remaining one achieved a sophisticated
one; for this last case, it is worth noting that the rotation of the robot was limited by
turning off digital pin 11 and not keeping pin 8 on.
Even though the experience was introductory, it is worth highlighting that the pre-service
teachers built functional algorithms, although not the most optimized versions for the
challenge. For example, when considering the shortest time to complete the journey as an
evaluative variable, keeping both motors on during a turn and giving them
opposite rotation directions allows the turn to be performed much faster (to do this, pin
9 or pin 10 must be activated, which drive counter-clockwise rotation for each motor).
It is important to note that, in the case of the science course teams, none considered
the last turn of the route (between sections C and D). In contrast, all teams in the math
course considered the four sections and the respective turns in detail. It was constant
across all teams to formulate an initial structure of blocks and adjust it slightly after each
test; that is, the essence of the algorithm was identified from the beginning and the rest of
the experience was focused on modifying it according to timing issues (both for sections
and turns).
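Purely as an illustration (the participants worked with mBlock blocks, not hand-written code), the following Arduino-style C++ sketch shows what an "intermediate" solution of this kind could look like, assuming the pin mapping reported in Table 1 (pins 8/9 for the left motor, 10/11 for the right motor, with 8 and 11 driving clockwise); the timing constants are hypothetical placeholders of the kind the teams tuned during the verification phase.

// Illustrative Arduino sketch (C++) of an "intermediate" route algorithm.
// Pin mapping assumed from Table 1: 8/9 left motor, 10/11 right motor;
// pins 8 and 11 drive clockwise (forward). Timing values are placeholders.

const int LEFT_FWD  = 8;   // left motor, clockwise
const int LEFT_REV  = 9;   // left motor, counter-clockwise
const int RIGHT_REV = 10;  // right motor, counter-clockwise
const int RIGHT_FWD = 11;  // right motor, clockwise

const unsigned long SECTION_A_MS = 3000;  // illustrative straight-run time
const unsigned long TURN_90_MS   = 800;   // illustrative time for a 90-degree turn

void driveForward(unsigned long ms) {
  // Both motors clockwise: the robot moves straight ahead for ms milliseconds.
  digitalWrite(LEFT_FWD, HIGH);
  digitalWrite(RIGHT_FWD, HIGH);
  delay(ms);
}

void turnCorner(unsigned long ms) {
  // "Intermediate" turn: pin 11 is explicitly switched off, so only the
  // left wheel keeps driving and the robot pivots around the stopped wheel.
  digitalWrite(RIGHT_FWD, LOW);
  delay(ms);
}

void stopMotors() {
  for (int pin = LEFT_FWD; pin <= RIGHT_FWD; pin++) {
    digitalWrite(pin, LOW);
  }
}

void setup() {
  for (int pin = LEFT_FWD; pin <= RIGHT_FWD; pin++) {
    pinMode(pin, OUTPUT);
    digitalWrite(pin, LOW);
  }
  driveForward(SECTION_A_MS);  // section A
  turnCorner(TURN_90_MS);      // first corner
  // ...sections B, C and D would repeat the same drive/turn pattern...
  stopMotors();
}

void loop() {
  // The route is executed once in setup(); nothing to do here.
}

A "sophisticated" or time-optimized variant would differ mainly in the turn routine, for example by reversing one motor through pin 9 or 10 so that both wheels contribute to the rotation, as discussed above.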

3.2 Influence of Disciplinary Knowledge in the Construction of Algorithms


During the entire training experience, the three professors were attentive to the teamwork
and decision-making of the participants. For this, they focused their attention on the
discussions and explanations that were generated within each team and within the whole group.
From these discussions and explanations, it was possible to recognize the presence of
disciplinary knowledge and the comprehension of the concept of algorithm achieved
after each test. Attentive and meticulous monitoring of these interactions allowed
questions to be generated in the final evaluation process with the purpose of making explicit
whether disciplinary knowledge was present in the construction of the algorithms.
The mathematics pre-service teachers paid attention to all the sections and turns of
the path that the robot had to make. The science students were interested in building

an efficient algorithm. In both courses, the professor asked about the relationship that
the participants established between the algorithm and the concepts of distance, time,
speed, right angle, and proportional scale.
Divided opinions were identified in the mathematics course. On the one hand, a
member of one of the teams stated: “It seemed silly to me that we took so much time
looking at how much the measurement in meters was, knowing that this was not going to
be of any use for programming [proportional scale]”. To that comment, another member
of the same team added:

We did the conversion, we did all that and then we obtained ten meters for the first
[section A], (...) and then we calculated the speed divided by time, uh! the distance
divided by time to find the speed and we got 0.33, and that was not right! ... it had
no sense. We did not see any connection in the data we collected, we were just
calculating.

The reaction of this team shows that, in their experience, the disciplinary knowledge
had no influence on the construction of the algorithm; furthermore, there is some
imprecision (or doubt) regarding the concept of speed. However, the observations of the
teachers allowed them to infer that this perception was not generalized. They asked the rest of
the pre-service teachers whether the same had happened to all of them. Faced with this, one of the
members of another team replied:

no, it did help us, (…) making an approximate translation of scales, we obtained 1.3
meters here (…) and with the speed I assumed, then we made those calculations,
and it did help us, so much so that what we have modified here has been more in
terms of the lap and the turn than the distance. The distance as calculated in the
beginning kept unchanged and it has worked well, only with respect to the turns
have we had problems.

During the same experience in the science course, one of the members of a team
responded to the same previous question by stating: “There are basic concepts of physics
such as (…): take the map [proportional scale], find the distance, ( …) The speed of the
car”; the participant complemented: “no one is one hundred percent sure that this is
the case, but, if we go further and say: (…) next time we will try to understand the
two concepts and see if either of the two is right [when analyzing the notion of time for
speed and for turns]”. In addition, he emphasized the challenge that the 90º angle concept
implied: “this turn of the car was not perpendicular, it did not result in a ninety-degree
turn, so what are we going to do? look, it is a little more [than perpendicular] so we
could lower the time a little”.
The records of the previous paragraph denote a greater sensitivity to the role of
disciplinary knowledge in the construction of the algorithms. Furthermore, the third team
was the one that developed the sophisticated algorithm. These feelings and statements,
unlike those of the math group, were common among the science teams. Moreover, while in the
mathematics group opinions were divided in relation to disciplinary knowledge, in the
science group such knowledge was assumed as basic. For this reason, the
professor, aiming to delve into this discussion, asked: “if the radius of the robot’s wheel
is reduced by one third of its current value, how is the route it makes modified?”. The

science group began to discuss, and they suggested that the length of the circumference
that the wheel described would be 2πr, therefore, the length of the new route would be
2πr/3. They then stated that, although the engine operation would be the same, the route
length was modified by a reduction of one third with respect to the one initially considered.
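For completeness, the relation underlying this discussion (a standard kinematic fact rather than something stated in the chapter) is that, if a fixed program keeps the motors running for the same times and the wheels therefore turn through approximately the same total angle \theta, the distance covered is proportional to the wheel radius:

d = r\,\theta, \qquad r \to k\,r \;\Rightarrow\; d \to k\,d,

so scaling the radius scales every section of the route by the same factor, which is the proportional reasoning the science group was engaging in.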

4 Discussion and Conclusion

In the first part of this chapter, the need to expand the empirical evidence related to
teachers’ educational needs in specific disciplines was presented. Teacher knowledge
of the relationship between algorithms and specific curricular contents is also required.
Then, it was proposed to contribute to the international research through two categories:
(i) understanding and design of algorithms and (ii) influence of disciplinary knowledge
in the construction of algorithms, analyzed for the case of mathematics and science
teachers.
In this regard, this chapter offers two important results. The first is that, although
in short-term training experiences future teachers can produce functional algorithms,
there is an important difference in the emphasis given by mathematics and science
teachers. For the former, meticulous reasoning is observed, in which priority is given to
the functionality of the robot-path algorithm. For pre-service science teachers, priority
is given to optimizing the algorithm with respect to the number of
blocks and the time in which the robot makes the journey.
Although the results are not conclusive, there is evidence that science teachers, by
showing greater confidence and optimization in the algorithm, could better accompany
students when explaining the relationship between algorithms and disciplinary knowledge
(Benton et al. 2017; Zhong and Xia 2018). In addition, it is confirmed that the design
of algorithms constitutes a challenge for mathematics teachers as it transcends the usual
reasoning of their discipline, involving aspects of computer science (Carmona-Mesa
et al. 2021; Benton et al. 2017).
A second result refers to the disciplinary knowledge that future teachers exhibit in the
construction of algorithms. The importance of transcending the functional aspects of robotics
and promoting training in practical programming is highlighted, where elements such as
teamwork and decision-making based on disciplinary knowledge can emerge (López
and Andrade 2013; Patiño et al. 2014). This showed that, while for mathematics
teachers disciplinary concepts do not play an important role in the understanding and
design of an “ideal” algorithm, for science teachers there is a greater presence of their
disciplinary knowledge, for example in the use of time to optimize the “wait” block.
This provides evidence regarding the conjecture of a possible greater relationship between
science and robot programming (Santos et al. 2019), by showing a possible explicit
influence of disciplinary knowledge on the design of the algorithm.
In this study, on the one hand, the influence of an absolutist vision of mathematics
is identified in the process carried out by the teachers of this discipline. This was
evidenced in the fact that they seemed to ignore the suggested disciplinary concepts and focus
on achieving an “ideal version” of the algorithm. On the other hand, science teachers
reflect a common tendency in their discipline to neglect variables with lesser effects and
focus their work on functional procedures closely related to the challenge faced. These

results implicitly reflect a relationship between disciplinary knowledge and algorithm
construction; however, the nature of such a relationship and the ways in which these topics
condition each other should be the object of study of future research.
Consequently, expanding research in this line will allow a better understanding of
the educational needs that implicitly demarcate the discipline in which teachers are
trained and that can limit or enhance interdisciplinary educational processes based on
computational thinking within the framework of STEM education (Carmona-Mesa et al.
2020; Lee et al. 2020). Similarly, the offer of training proposals should be extended
through virtual courses, which are reported to a lesser extent in the literature (Patiño
et al. 2014; Zampieri and Javaroni 2020) but are key to scaling up the efforts to address
the technological revolution that humanity is currently experiencing.

Acknowledgments. We are grateful to the Committee for the Development of Research of the
University of Antioquia for financing the project “Foundation and development of a STEM training
proposal for future mathematics teachers.” In the same way, we express our gratitude to the Doc-
toral Excellence Scholarship Program of the Bicentennial of MinCiencias-Colombia for financing
the project “Design and validation of a STEM training proposal for teachers of the Department of
Antioquia.”

References
Benton, L., Hoyles, C., Kalas, I., Noss, R.: Bridging primary programming and mathematics:
some findings of design research in England. Digit. Exp. Math. Educ. 3(2), 115–138 (2017).
https://doi.org/10.1007/s40751-017-0028-x
Bellas, F., Salgado, M., Blanco, T.F., Duro, R.J.: Robotics in primary school: a realistic mathe-
matics approach. In: Daniela, L. (ed.) Smart Learning with Educational Robotics, pp. 149–182.
Springer, Cham (2019). https://doi.org/10.1007/978-3-030-19913-5_6
Carmona-Mesa, J.A., Krugel, J., Villa-Ochoa, J.A.: La formación de futuros profesores en tec-
nología. Aportes al debate actual sobre los Programas de Licenciatura en Colombia. En Richit,
A., Oliveira, H. (eds.) Formação de Professores e Tecnologias Digitais, pp. 35–61. Livraria da
Física, São Paulo (2021)
Carmona-Mesa, J.A., Cardona, M.E., Castrillón-Yepes, A.: Estudio de fenómenos físicos en la
formación de profesores de Matemáticas. Una experiencia con enfoque en educación STEM.
Uni-pluriversidad 20(1), e2020101 (2020). https://doi.org/10.17533/udea.unipluri.20.1.02
Drisko, J., Maschi, T.: Content Analysis. Oxford University Press (2016)
Mpungose, C.B.: Student teachers’ knowledge in the era of the fourth industrial revolution. Educ.
Inf. Technol. 25(6), 5149–5165 (2020). https://doi.org/10.1007/s10639-020-10212-5
Lee, I., Grover, S., Martin, F., Pillai, S., Malyn-Smith, J.: Computational thinking from a disci-
plinary perspective: integrating computational thinking in K-12 science, technology, engineer-
ing, and mathematics education. J. Sci. Educ. Technol. 29(1), 1–8 (2020). https://doi.org/10.
1007/s10956-019-09803-w
López, P.A., Andrade, H.: Aprendizaje de y con robótica, algunas experiencias. Revista Educación
37(1), 43 (2013). https://doi.org/10.15517/revedu.v37i1.10628
Santos, I., Grebogy, E.C., de Medeiros, L.F.: Crab robot: a comparative study regarding the use of
robotics in STEM education. In: Daniela, L. (ed.) Smart Learning with Educational Robotics,
pp. 183–198. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-19913-5_7

Patiño, K.P., Diego, B.C., Rodilla, V.M., Conde, M.J.R., Rodriguez-Aragon, J.F.: Using robotics
as a learning tool in Latin America and Spain. IEEE Revista Iberoamericana de Tecnologias
del Aprendizaje 9(4), 144–150 (2014). https://doi.org/10.1109/RITA.2014.2363009
Scepanovic, S.: The fourth industrial revolution and education. In: 2019 8th Mediterranean Con-
ference on Embedded Computing (MECO), vol. 8, pp. 1–4. IEEE (2019). https://doi.org/10.
1109/MECO.2019.8760114
Zhong, B., Xia, L.: A Systematic review on exploring the potential of educational robotics in
mathematics education. Int. J. Sci. Math. Educ. 18(1), 79–101 (2018). https://doi.org/10.1007/
s10763-018-09939-y
Villa-Ochoa, J.A., Castrillón-Yepes, A.: Temas y tendencias de investigación en América Latina
a la luz del pensamiento computacional en Educación Superior. En Políticas, Universidad e
innovación: retos y perspectivas, pp. 235–248. JMBosch (2020)
Weintrop, D., et al.: Defining computational thinking for mathematics and science classrooms. J.
Sci. Educ. Technol. 25(1), 127–147 (2016). https://doi.org/10.1007/s10956-015-9581-5
Zampieri, M.T., Javaroni, S.L.: A dialogue between computational thinking and interdisciplinarity
using scratch software. Uni-pluriversidad 20(1), e2020105 (2020). https://doi.org/10.17533/
udea.unipluri.20.1.06
Teachers’ Perspective on Fostering
Computational Thinking Through
Educational Robotics

Morgane Chevalier1,2(B), Laila El-Hamamsy1, Christian Giang1,3,
Barbara Bruno1, and Francesco Mondada1

1 Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
{morgane.chevalier,laila.elhamamsy,christian.giang,barbara.bruno,
francesco.mondada}@epfl.ch
2 Haute École Pédagogique du canton de Vaud (HEP-VD), Lausanne, Switzerland
3 SUPSI-DFA, Locarno, Switzerland

Abstract. With the introduction of educational robotics (ER) and
computational thinking (CT) in classrooms, there is a rising need for opera-
tional models that help ensure that CT skills are adequately developed.
One such model is the Creative Computational Problem Solving Model
(CCPS) which can be employed to improve the design of ER learning
activities. Following the first validation with students, the objective of
the present study is to validate the model with teachers, specifically
considering how they may employ the model in their own practices. The
Utility, Usability and Acceptability framework was leveraged for the eval-
uation through a survey analysis with 334 teachers. Teachers found the
CCPS model useful to foster transversal skills but could not recognise
the impact of specific intervention methods on CT-related cognitive pro-
cesses. Similarly, teachers perceived the model to be usable for activity
design and intervention, although they felt unsure about how to use it to assess
student learning and adapt their teaching accordingly. Finally, the teach-
ers accepted the model, as shown by their intent to replicate the activity
in their classrooms, but were less willing to modify it or create their own
activities, suggesting that they need time to appropriate the model and
underlying tenets.

Keywords: Computational thinking · Educational robotics · Instructional
intervention · Teacher professional development · Teacher practices

This research was supported by the NCCR Robotics, Switzerland. We would like to
thank the trainers and teachers who made this study possible.
M. Chevalier and L. El-Hamamsy contributed equally to this work.
The original version of this chapter was previously published non-open access. A Correction
to this chapter is available at https://doi.org/10.1007/978-3-030-82544-7_30

1 Introduction
Educational robotics has garnered significant interest in recent years to teach
students not only the fundamentals of robotics, but also core Computer Science

(CS) concepts [5] and Computational Thinking (CT) competencies [2,3]. How-
ever, participation in an ER Learning Activity (ERLA) does not automatically
ensure student learning [17], with the design of the activity playing a key role
towards the learning outcomes [6]. Indeed, the lack of understanding as to how
specific instructional approaches impact student learning in ER activities has
been raised on multiple occasions [16]. Many researchers have even evoked the
need to have an operational model to understand how to foster CT skills [10]
within the context of ER activities [1,9]. To that effect, Chevalier & Giang et
al. [3] developed the Creative Computational Problem Solving (CCPS) model
for CT competencies using an iterative design-oriented approach [13] through
student observations. The resulting 5 phase model (Fig. 1) helped identify and
understand students’ cognitive processes while engaging in ER learning activi-
ties aimed to foster CT skills. By analysing the students’ behaviour through the
lens of the CCPS model to understand the students’ thought processes, both
teachers and researchers may have a means of action and intervention in the
classroom to foster the full range of cognitive processes involved in creative com-
putational problem solving. The authors concluded that a validation by teachers
was essential to ensure that they could “effectively take advantage of the model
for their teaching activities”, not only at the design stage, but also to guide
specific interventions during ER learning activities.
This article reports the findings of a study involving 334 in-service and pre-
service primary school teachers, with the purpose of evaluating their perception
of the model and investigating whether their own needs as users of the model
are met. The Utility, Usability and Acceptability framework [18] for computer-
based learning environment assessment was leveraged, as it has been previously
used for the evaluation of teachers’ perception of the use of educational robots
in formal education [4]. More formally, we address the following questions:
• RQ1: What is the perceived utility of the CCPS model?
• RQ2: What is the perceived usability of the model?
• RQ3: What is the acceptability of the model by teachers?

2 Methodology
To evaluate the model, the study was conducted with 232 in-service and 102
pre-service teachers participating in the mandatory training program for Digi-
tal Education underway in the Canton Vaud, Switzerland [5] between Novem-
ber 2019 and February 2020. The inclusion of both pre-service and in-service
teachers within the context of a mandatory ER training session helps ensure
the generalisability of the findings to a larger pool of teachers, and not just
experienced teachers and/or pioneers who are interested in ER and/or already
actively integrating ER into their practices [4]. During the ER training session,
the teachers participated in an ER learning activity (see Lawnmower activity
in Fig. 1 [3]) which was mediated by the CCPS model. During this activity, the
teachers worked in groups of 2 or 3 to program the event-based Thymio II robot
[12] to move across all of the squares in the lawn autonomously. As the robot is

event-based, the participants “have to reflect on how to use the robot’s sensors
and actuators to generate a desired [behaviour]” [3], which requires that the par-
ticipants leverage many CT-related competencies. So that teachers understand
how the CCPS can be used to mediate an ER learning activity, and similarly
to Chevalier & Giang et al. (2020) [3], temporary blocking of access to the
programming interface was implemented at regular time intervals.

Fig. 1. The CCPS model (left) and lawnmower activity setup with Thymio (right) [3].

After the activity, the teachers participated in a debriefing session where
they were asked to express what they had to do to solve the problem and the
trainer grouped these comments into categories relating to CT, the CCPS model
and transversal skills. The teachers were then presented the CCPS model itself
and its 5 phases, together with the results of the study conducted by Chevalier
& Giang et al. [3] to provide concrete testimony as to the effectiveness of the
model when applied in classrooms. Finally, in an overarching conclusion about
fostering CT competencies during ER activities, the trainer provided guidelines
on how to design ER learning activities and intervene accordingly.
The Utility, Usability and Acceptability framework [18] then served as the
basis for the teachers’ evaluation of the CCPS model. As utility “measures the
conformity of the purpose of the device with the users’ needs” [4,18], in the
present context we consider utility with respect to student learning. Two per-
spectives are adopted : 1) how the use of the model helps foster transversal
skills that are part of the mandatory curriculum1 and 2) how certain interven-
tion methods may help promote reflection in the different phases of the model.
Usability on the other hand considers “the ease of use and applicability of the
device” [4,18] by the teacher, which is why the CCPS model in this case is con-
sidered in accordance with the “professional and technical actions” that teachers
make use of in their daily practices2 . Finally, acceptability “measures the pos-
sibility of accessing the device and deciding to use it, the motivation to do so,
and the persistence of use despite difficulties” [4,18]. In this case, we consider
acceptability with respect to what the teachers intend to do with the model with
1 See transversal skills of the curriculum: plandetudes.ch/capacites-transversales.
2 See “gestes professionnels”: go.epfl.ch/hepvd referentiel competences 2015.

Table 1. Utility, usability and acceptability survey [18]. Utility in terms of transversal
skills considers 5 dimensions: collaboration (COL), communication (COM), learning
strategies (STRAT), creative thinking (CREA) and reflexive processes (REFL)

Construct: Utility of the CCPS for transversal skills (4-point Likert scale)
  (COL) We exchanged our points of view / evaluated the pertinence of our actions / confronted our ways of doing things
  (COM) We expressed ourselves in different ways (gestural, written etc.) / identified links between our achievements and discoveries / answered our questions based on collected information
  (STRAT) We persevered and developed a taste for effort / identified success factors / chose the adequate solution from the various approaches
  (CREA) We expressed our ideas in different and new ways / expressed our emotions / were engaged in new ideas and exploited them
  (REFL) We identified facts and verified them / made place for doubt and ambiguity / compared our opinions with each other
Construct: Utility of the intervention methods (checkboxes)
  What helps i) identify the problem (USTD)? ii) generate ideas (IDEA)? iii) formulate the solution (FORM)? iv) program (PROG)? v) evaluate the solution found (EVAL)?
  Max 3 of 5 options: 1) manipulating the robot; 2) writing down the observations; 3) observing 3 times before reprogramming; 4) programming; 5) not being able to program
Construct: Usability (4-point Likert scale)
  The model helps i) plan an ERLA; ii) intervene during an ERLA; iii) regulate student learning during an ERLA; iv) evaluate student learning during an ERLA
Construct: Acceptability (4-point Likert scale)
  I will redo the same ERLA in my classroom
  I will do a similar ERLA that I already know in my classroom
  I will do a similar ERLA that I will create in my classroom
  I will do a more complex ERLA in my classroom

increasing levels of appropriation. To measure the aforementioned constructs, a
set of questions pertaining to each dimension was developed (see Table 1) with
most responses being provided on a 4-point Likert scale (1 - strongly disagree,
2 - disagree, 3 - agree, 4 - strongly agree).

3 Results and Discussion


3.1 RQ1 - Utility
Educational robotics learning activities are often considered to contribute to the
development of a number of transversal skills (e.g., collaboration, problem solv-
ing etc...) [2]. While this perception is also shared by teachers who are pioneers
in robotics [4], it is important to ensure that ER activities mediated using the
CCPS model and designed to foster CT skills are perceived by teachers at large
as contributing to the development of transversal skills. The results of the sur-
vey showed that teachers found the ER learning activity with the Thymio useful
to engage in transversal skills (Fig. 2), in particular collaboration, reflexive pro-
cesses, learning strategies and communication, with only creative thinking being

less perceived by teachers. This is coherent with the fact that the ER Lawnmower
activity was conceived to promote students’ use of transversal skills to help the
emergence of related CT competencies and suggests that the use of the CCPS
model in designing ER learning activities helps teachers see and strengthen the
link between ER, transversal skills and CT, confirming the results of [4] with
teachers who were novices. Although in the mandatory curriculum teachers are
taught to evaluate transversal skills, little indication is provided as to how to
foster them. The use of ER learning activities informed and mediated by the
CCPS model can provide a concrete way to foster skills already present in the
curriculum and ensure that students acquire the desired competencies.

Fig. 2. Teachers’ perceived utility (with respect to fostering transversal skills), usability
and acceptability of the CCPS model. For each transversal skill we report the mean
and standard deviation μ±std, and Cronbach’s α measure of internal consistency.
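As a reminder (this is the standard definition, not specific to this study), the internal-consistency coefficient reported in Fig. 2 and in the usability analysis below, Cronbach's \alpha, is computed for a construct with k items as

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_{Y_i}}{\sigma^2_X}\right),

where \sigma^2_{Y_i} is the variance of the responses to item i and \sigma^2_X the variance of the summed score; values around 0.8, as observed here, are conventionally read as good internal consistency.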

To clarify the link between the CCPS model and the employed intervention
methods, the teachers were asked to select a maximum of three intervention
methods that they believed were useful to engage in each of the phases of the
CCPS model (see Fig. 3). The element which emerges as the most relevant for
all the phases of the model is the possibility of manipulating the robot, thereby
reinforcing the role of physical agents in fostering CT skills [8]. This however is
dependent on the fact that the Thymio robot provides immediate visual feed-
back through the LEDs in relation to each sensor’s level of activation. This
highlights once more the importance of constructive alignment in ER learning
activity design [7] which stipulates the importance of the robot selection in rela-
tion to the desired learning outcomes. The second most popular choice was to
write down the observations, likely because this constitutes a means of specifying
what happens with the robot in the environment. The written observations then
become a “thinking tool” that supports modelling and investigation [15]. Sur-
prisingly, and although the teachers were introduced to the fact that unregulated
access to the programming interface tends to lead to trial and error behaviour
[3], programming was often selected as being useful to foster the different CT
phases, whilst not being able to program was one of the least selected. Only in
the case of idea generation did both programming and not programming receive
an equal number of votes. We believe that the frequent selection of programming
is due to the fact that the teachers answered the question based on their own experience as participants in the ER learning activity, and therefore on their need for a high sense of controllability [14], rather than on their experience as teachers leading the activity in the classroom.

Fig. 3. Teachers’ perception of the link between intervention methods and the different
phases of the CCPS model. For each phase of the model and intervention method, the
proportion of teachers having selected the approach as relevant is shown.

To summarise, on the one hand, teachers perceive the usefulness of promoting transversal skills that they are familiar with, as they are already part of the
curriculum. On the other hand, they do not perceive what research has shown
to be useful to promote CT competencies, likely because it was not part of the
curriculum until now. Therefore, experimentation in their classrooms is neces-
sary, as well as further training to help them acquire a more critical view of ER
learning activities and to understand the impact of specific intervention methods
on the development of students’ skills.

3.2 RQ2 - Usability


With respect to the usability of the CCPS model in the teaching profession,
responses were globally positive (see Fig. 2, μ = 2.89, std = 0.78, Cronbach’s
α = 0.82). Teachers believed that the CCPS model could be used to plan and
intervene during an ER learning activity (over 80% of positive responses), likely
due to the guidelines provided during the theoretical presentation. However, the
link between the CCPS model and student learning was less evident for teachers:
66% believed it could be used to regulate student learning and 60% that it could
be used to evaluate student learning. Both constructs are related by the need to
assess students and understand where they stand in terms of the overall learning
objectives. Although this shows that teachers need to be taught how to identify the phases in which the students find themselves in order to use the CCPS model to its full extent, this is also highly linked to the difficulty found in both the ER and
CT literature in terms of assessment of learning and/or transversal skills [8,9].

3.3 RQ3 - Acceptability

The question of acceptability here targets teachers’ intent to use the CCPS model
in their practices. Intent to use is considered at progressive levels of appropriation
(see Fig. 2), which is why it is not surprising to find that while teachers might be
willing to conduct the same ER learning activity in their classrooms (64%), fewer
are willing to adapt the activity (40%), create their own custom one (32%), and
conduct a more complex one (20%). One can put this in relation with the Use-
Modify-Create (UMC) progression [11] which was developed to scaffold student
learning in CT contexts. Teachers need to start by using the model in a given ER
learning activity to gain in self-efficacy. Only then will they be able to progress
to the next stage where they feel comfortable adapting the use of the model to
their teaching style and to the individual students. Finally, teachers will reach
a level where they create their own ER pedagogical interventions to foster CT
competencies. One must however note that intent is likely influenced by external
factors (e.g. time or access to the robots, frequent barriers to the introduction
of ER in formal education [4,5]).

4 Conclusion

Provided the prominent role that teachers play in the integration of ER and
CT in formal education, this study investigated teachers’ perception of an oper-
ational model to foster Computational Thinking (CT) competencies through
ER activities: the Creative Computational Problem Solving (CCPS) model [3].
Three research questions were considered in a study with 334 pre-service and
in-service primary school teachers: What is the perceived utility (RQ1), usabil-
ity (RQ2) and acceptability (RQ3) of the CCPS model? While teachers found
that the activity design and intervention methods employed were useful to foster
transversal skills (RQ1), their perception of the utility of the intervention meth-
ods on the different cognitive processes defined by the CCPS model (RQ1) was
somewhat unexpected. In terms of the usability (RQ2), teachers perceived how
they could design an activity and intervene using the model, but were less able
to perceive how the model could be used to assess where the students were in
terms of learning and regulate the activity to mediate their learning. The find-
ings of RQ1 and RQ2 support the importance of training teachers to recognise
and understand the different cognitive processes to intervene adequately and be
able to differentiate their teaching per student, rather than adopting a unique
strategy for an entire class.
To help teachers implement ER learning activities in the classroom and gain
in autonomy to create their own activities that foster CT skills (RQ3), it seems
relevant to alternate between experimentation in classrooms and debriefing dur-
ing teacher training and go beyond providing pedagogical resources. To conclude,
the operationalisation of ER to foster CT skills must also consider the key role
that teachers have to play in the introduction of any such model and its appli-
cation in formal education.

References
1. Atmatzidou, S., Demetriadis, S.: Advancing students’ computational thinking skills
through educational robotics: A study on age and gender relevant differences.
Robotics and Autonomous Systems 75, 661–670 (Jan 2016)
2. Bers, M.U., Flannery, L., Kazakoff, E.R., Sullivan, A.: Computational thinking
and tinkering: Exploration of an early childhood robotics curriculum. Computers
& Education 72, 145–157 (Mar 2014)
3. Chevalier, M., Giang, C., Piatti, A., Mondada, F.: Fostering computational think-
ing through educational robotics: a model for creative computational problem solv-
ing. International Journal of STEM Education 7(1), 39 (Dec 2020)
4. Chevalier, M., Riedo, F., Mondada, F.: Pedagogical Uses of Thymio II: How Do
Teachers Perceive Educational Robots in Formal Education? IEEE Robotics &
Automation Magazine 23(2), 16–23 (Jun 2016)
5. El-Hamamsy, L., et al.: A computer science and robotics integration model for
primary school: evaluation of a large-scale in-service K-4 teacher-training program.
In: Education And Information Technologies, (Nov 2020)
6. Fanchamps, N.L.J.A., Slangen, L., Hennissen, P., Specht, M.: The influence of SRA
programming on algorithmic thinking and self-efficacy using Lego robotics in two
types of instruction. ITDE (Dec 2019)
7. Giang, C.: Towards the alignment of educational robotics learning systems with
classroom activities p. 176 (2020)
8. Grover, S., Pea, R.: Computational Thinking in K-12: A Review of the State of
the Field. Educational Researcher 42(1), 38–43 (Jan 2013)
9. Ioannou, A., Makridou, E.: Exploring the potentials of educational robotics in
the development of computational thinking: A summary of current research and
practical proposal for future work. Education And Information Technologies 23(6),
2531–2544 (Nov 2018)
10. Lye, S.Y., Koh, J.H.L.: Review on teaching and learning of computational thinking
through programming: What is next for K-12? Computers in Human Behavior 41,
51–61 (2014)
11. Lytle, N., Cateté, V., Boulden, D., Dong, Y., Houchins, J., Milliken, A., Isvik, A.,
Bounajim, D., Wiebe, E., Barnes, T.: Use, Modify, Create: Comparing Computa-
tional Thinking Lesson Progressions for STEM Classes. In: ITiCSE. pp. 395–401.
ACM (2019)
12. Mondada, F., Bonani, M., Riedo, F., Briod, M., Pereyre, L., Retornaz, P., Magne-
nat, S.: Bringing Robotics to Formal Education: The Thymio Open-Source Hard-
ware Robot. IEEE Robotics Automation Magazine 24(1), 77–85 (Mar 2017)
13. Rogers, Y., Sharp, H., Preece, J.: Interaction design: beyond human-computer
interaction. John Wiley & Sons (2011)
14. Rolland, V.: La motivation en contexte scolaire / Rolland Viau. Pédagogies en
développement Problématiques et recherches, De Boeck Université (1994)
15. Sanchez, É.: Quelles relations entre modélisation et investigation scientifique dans
l’enseignement des sciences de la terre? Éducation et didactique (2–2), 93–118
(2008)
16. Sapounidis, T., Alimisis, D.: Educational robotics for STEM: A review of tech-
nologies and some educational considerations. p. 435 (Dec 2020)
17. Siegfried, R., Klinger, S., Gross, M., Sumner, R.W., Mondada, F., Magnenat, S.:
Improved Mobile Robot Programming Performance through Real-time Program
Assessment. In: ITiCSE. pp. 341–346. ACM (Jun 2017)
18. Tricot, A., Plégat-Soutjis, F., Camps, J., Amiel, A., Lutz, G., Morcillo, A.: Util-
ity, usability, acceptability: interpreting the links between three dimensions of the
evaluation of the computerized environments for human training (CEHT). Envi-
ronnements Informatiques pour l’Apprentissage Humain 2003 (2003)

Open Access This chapter is licensed under the terms of the Creative Commons
Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/),
which permits use, sharing, adaptation, distribution and reproduction in any medium
or format, as long as you give appropriate credit to the original author(s) and the
source, provide a link to the Creative Commons license and indicate if changes were
made.
The images or other third party material in this chapter are included in the
chapter’s Creative Commons license, unless indicated otherwise in a credit line to the
material. If material is not included in the chapter’s Creative Commons license and
your intended use is not permitted by statutory regulation or exceeds the permitted
use, you will need to obtain permission directly from the copyright holder.
Technologies for Educational Robotics
Mirobot: A Low-Cost 6-DOF Educational
Desktop Robot

Dongxu Zhou(B) , Ruiqing Jia, and Mingzuo Xie

School of Mechanical Electronic and Information Engineering, China University of Mining and
Technology (Beijing), Beijing, China
zhoudongxv@yeah.net

Abstract. Traditional industrial robots are not suitable as teaching equipment for
students to learn due to their high cost and large size. Accordingly, this paper’s
motivation is to propose a multi-degree-of-freedom low-cost desktop educational
robot, Mirobot, suitable for use as an educational device for various kinds of robotics courses and research. We apply 3D printing technology to the design iteration and manufacture of the robot's structural parts. The look-ahead speed control algorithm is applied to the execution of a large number of small line segments in trajectory control. The geometric method and the Euler angle transformation method are used to solve the inverse kinematics problem. The robot's computer control
software, mobile phone software, and Bluetooth remote control are presented.

Keywords: Educational robot · Desktop robotic arm · 3D printing · Low-cost robot

1 Introduction

Robot education is of great significance to the cultivation of talent in robotics-related fields. More and more universities offer robotics courses for their students. Many elementary and middle schools have also added robotics courses for their students. Industrial robots have high accuracy and stability, but they are expensive and bulky. Many schools cannot afford the price of industrial robots, or they can only buy one industrial robot for all students to share. The massive weight of industrial robots makes it difficult for teachers and students to carry them. Also, the internal structure and princi-
ple of the control system of industrial robots are confidential. Students can only learn
how to operate a robot, but they cannot match the algorithms of robotics with actual
industrial robots. Such studies are sufficient for vocational or technical school students,
but not enough for undergraduate or graduate students. Therefore, low-cost, desktop-
level, open-source educational robots can better meet the needs of robot education and
laboratory research.
Some scholars have studied small desktop robots. Ceccarelli [1] explained the advan-
tages and importance of low-cost robots in teaching and research activities. He suggested
using low-cost robots for teaching and building mechatronics laboratories, describing


the success of Cassino University in this regard. Quigley et al. [2] designed a low-
cost 7-degree-of-freedom robotic arm. The robot arm is equipped with several stepper
motors and Dynamixel robotics RX-64 servos to act as joints. The unique feature of this
robotic arm is that it guarantees relatively high performance indices, such as repeat positioning accuracy and movement speed. McLurkin et al. [3] developed a low-cost,
multi-robot collaborative mobile robot system for science, technology, engineering, and
mathematics (STEM) education. The developed robot is applied to the teaching work of
graduate students and has achieved good results. Eilering et al. [4] proposed a low-cost
3D printed desktop robotic arm and innovatively proposed the use of a small desktop
robotic arm as a controller to control industrial robotic arms. A three-degree-of-freedom
desktop robotic arm named Magician [5] is proposed by a company. The robot arm is
equipped with three stepper motors with planetary reducers as joints, and MEMS inclination sensors are installed on the upper arm and forearm to detect the joint angles. The UARM Swift manipulator [6] designed by another company has a similar
structure and function. However, this type of robotic arm has relatively few degrees
of freedom, and they are unable to control the orientation of the end effector. Sahin
et al. [7] designed a three-degree-of-freedom education desktop robot R3D. The robot
is equipped with a DC motor as the joint and CNC-machined aluminum as the arm
material. Linke et al. [8, 9] designed a low-cost multi-axis grinder based on a modular
desktop robotic arm structure. By reassembling each module, it can realize a robotic arm
grinder with different mechanical structures. Ghio et al. [10] proposed another low-cost three-degree-of-freedom writing robotic arm. The robot arm uses cardboard as the arm material and three servo motors as joints. Due to the low rigidity of the cardboard, the accuracy of the robotic arm is not high. Shevkar et al. [11] proposed a three-degree-of-freedom SCARA desktop robotic arm. The robotic arm is driven by three stepper motors, which control the joint speeds well.
Most desktop robots have relatively few degrees of freedom. In the actual teaching
of robotics, the concepts of robot position and orientation are essential. If the number of
degrees of freedom of the robot is less than 6, the orientation of its end effector cannot
be controlled well. Some desktop robotic arms have sufficient degrees of freedom [2],
but they often need to be equipped with small-sized servos to form joints. However,
ordinary servos only provide position control but not speed control. This is a significant
drawback of a robotic arm consisting of a servo as a joint. When a robotic arm equipped
with servos needs to walk through a large number of small line segments, the smoothness
of the movement cannot be guaranteed.
Most desktop robotic arms have poor motion stability due to their lack of look-ahead
speed control [12, 13]. When such a robotic arm performs spot movement, its motion
effect is relatively good. However, as mentioned in the first point, its end-effector will
experience severe jitter when it moves through a large number of small line segments.
Finally, the materials of most desktop robotic arms are inappropriate. The structure of some desktop robotic arms is made of metal, which results in high weight and a complicated machining process. Other desktop robotic arms are made of plastic sheet, cardboard or sheet metal, which results in low accuracy and poor structural strength.
In this paper, we propose a low-cost desktop six-degree-of-freedom educational robotic arm named Mirobot. Mechanically, Mirobot is equipped with six low-cost small step-
per motors with reducers as joints. Its structural parts adopt 3D printing technology
for design iteration and manufacturing. In the design of the electronic circuit, the AVR
MEGA2560 single-chip microcomputer is used as the microcontroller. The inverse kine-
matics of Mirobot robot is solved by combining the geometric method with the Euler
angle transform method. In the microcontroller software, look-ahead control technology
is adopted to ensure smooth motion. In order to make the operation of the robotic arm
simpler, we have designed computer and mobile phone control software to control the
Mirobot robot. A remote control is also developed.

2 Mechanical Structure

The actual Mirobot robotic arm is shown in Fig. 1. The overall mechanical system
consists of a base, link frames, gear reducer stepping motors and limit switch sensors.
The robot has link frames and appearance similar to an industrial robot, and it consists
of six joints. A stepper motor with reduction gear is installed at each joint.

Fig. 1. Low-cost 6-DOF desktop robotic arm named Mirobot

2.1 Joints Composed of Stepper Motors

The Mirobot robotic arm uses six small stepper motors as drivers. There are many
advantages to using a stepper motor as a joint drive for a desktop robotic arm. The
characteristic of the stepper motor is that it can provide larger torque when running at
low speed. The desktop robotic arm is mainly used in teaching and scientific research,
and its movement speed does not need to be very high. Using a stepper motor as a joint
can also reduce the cost of a desktop robotic arm. A stepper motor can be combined with
its driver to achieve precise pulse-based control under open-loop control. The control
system precisely controls the rotation angle of the motor by sending the number of pulses
and precisely controls the speed of the motor by sending the frequency of the pulses.
Moreover, the servo motor system that realizes the same rotation angle and speed control
is costly, which is one of the essential reasons why the cost of a small mechanical arm
is difficult to reduce. Also, stepper motors can act as electromagnetic clutches [2]. If
a force exceeding its holding torque is accidentally applied to the motor output end,
the stepper motor will slide until the force is small enough that the stepper motor can
resume transmission. This characteristic is of great significance in the use of desktop
education robotic arms. It means that desktop robotic arms do not need to add additional
collision detection functions. When the robotic arm touches the user and generates higher
resistance, the stepper motor joint will automatically slide to ensure the safety of the
user. Of course, when the robotic arm is in the “lost step” sliding state, it needs to be
reset before it can continue to be used. Therefore, after restarting the Mirobot robot arm,
it is necessary to perform a mechanical reset first. Otherwise, each axis is locked and
cannot move. The small gear reducer stepping motors used by Mirobot’s six axes are
shown in Fig. 2.
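As a small illustration of the pulse-based control described above, the sketch below converts a desired joint rotation into a pulse count and a desired joint speed into a pulse frequency. The motor parameters (steps per revolution, microstepping factor, gear ratio) are placeholders for illustration, not Mirobot's actual values.

```python
# Hypothetical drive parameters for illustration (not Mirobot's actual values).
FULL_STEPS_PER_REV = 200      # a 1.8 degree stepper motor
MICROSTEPPING = 16            # microstep setting of the A4988 driver
GEAR_RATIO = 27.0             # reduction ratio of the gearbox

PULSES_PER_JOINT_REV = FULL_STEPS_PER_REV * MICROSTEPPING * GEAR_RATIO

def pulses_for_angle(delta_deg: float) -> int:
    """Number of pulses needed to rotate the joint by delta_deg degrees."""
    return round(PULSES_PER_JOINT_REV * delta_deg / 360.0)

def pulse_frequency_hz(joint_speed_dps: float) -> float:
    """Pulse frequency (Hz) that yields the requested joint speed in deg/s."""
    return PULSES_PER_JOINT_REV * joint_speed_dps / 360.0

print(pulses_for_angle(90.0))    # pulses for a 90 degree joint move
print(pulse_frequency_hz(30.0))  # pulse rate for 30 deg/s
```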

Fig. 2. The small gear reducer stepping motors used by Mirobot’s six axes

Fig. 3. The 3D printed structural component after painting



Fig. 4. The prototypes of various iterative versions of the Mirobot

2.2 Structural Parts Manufactured by 3D Printing

Considering the cost of the desktop robot and the need for rapid iteration in design, the
prototype of the Mirobot robot is manufactured using 3D printing. 3D printing [14] is an additive manufacturing technology that has emerged in recent years. After completing the 3D CAD drawing, the designer can quickly print a set of prototype structural parts with only a few modifications. Designers can then quickly modify the drawings and print again based on the results of the prototype. The structural design process of the
Mirobot robot has gone through multiple iterations. With the help of 3D printing tech-
nology, the speed of iteration has been increased while the processing costs have been
reduced. Figure 3 shows a 3D printed structural component after painting.
Figure 4 shows the prototypes of various iterative versions of the Mirobot robot, which
are all manufactured using 3D printing technology.

3 Hardware Design of the Control System


Figure 5 shows the connection relationship of all the hardware of the Mirobot robotic arm. Geared stepper motors, limit switches, power and reset buttons, and A4988 driver modules are connected to the control circuit board. The computer software on the bottom
right sends instructions to control the robotic arm through the USB serial port. The robot
arm can also be controlled wirelessly via a remote control handle.

Fig. 5. The connection relationship of all hardware of Mirobot

4 Inverse Kinematics Algorithm


The inverse kinematics algorithm maps the end pose of the desktop robotic arm to the joint angles. For precise control of the end pose of the desktop robot, such as in writing and painting, the inverse kinematics algorithm is essential.
Mirobot’s inverse kinematics is solved using a combination of geometric methods and
Euler angle transformation methods. The first three axes of the manipulator are solved by
geometric methods, and the last three axes are solved by Euler angle transformation. We
use the modified Denavit-Hartenberg method to establish the link model of the robotic
arm, as shown in Fig. 6. According to the established link frame model, the DH parameters are shown in Table 1. More details of the inverse solution algorithm are described in Ref. [15], in which we propose a teaching method for robot inverse kinematics and use the Mirobot robot as a case study to derive its inverse solution algorithm in detail.

Table 1. DH parameters

i   αi−1    ai−1   di   θi
1   0       0      d1   0
2   −π/2    a1     0    −π/2
3   0       a2     0    0
4   −π/2    a3     d4   0
5   π/2     0      0    π/2
6   π/2     0      d6   0

Fig. 6. DH model of Mirobot
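To make the modified DH convention of Table 1 concrete, the following sketch chains the per-link transforms into a forward-kinematics function, interpreting the θi column as fixed joint offsets. The numeric link dimensions are placeholders for illustration only, not Mirobot's real dimensions.

```python
import numpy as np

def mdh_transform(alpha, a, d, theta):
    """Link transform for the modified (Craig) DH convention used in Table 1."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0, a],
        [st * ca,  ct * ca, -sa, -sa * d],
        [st * sa,  ct * sa,  ca,  ca * d],
        [0.0,      0.0,      0.0, 1.0],
    ])

# Placeholder link dimensions (illustrative only, not Mirobot's real values).
d1, a1, a2, a3, d4, d6 = 0.127, 0.030, 0.108, 0.020, 0.170, 0.025

# Rows of Table 1: (alpha_{i-1}, a_{i-1}, d_i, theta offset of joint i).
DH = [(0.0,       0.0, d1,  0.0),
      (-np.pi/2,  a1,  0.0, -np.pi/2),
      (0.0,       a2,  0.0,  0.0),
      (-np.pi/2,  a3,  d4,   0.0),
      (np.pi/2,   0.0, 0.0,  np.pi/2),
      (np.pi/2,   0.0, d6,   0.0)]

def forward_kinematics(joint_angles):
    """Pose of the flange frame for six joint angles given in radians."""
    T = np.eye(4)
    for (alpha, a, d, offset), q in zip(DH, joint_angles):
        T = T @ mdh_transform(alpha, a, d, q + offset)
    return T

print(forward_kinematics([0.0] * 6))
```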

5 Look-Ahead Control Algorithm

The look-ahead control of speed plays an essential role in high-end robot controllers and
CNC equipment. It is one of the core technologies of various industrial robot controller
manufacturers. Some desktop robotic arms are not equipped with the look-ahead control
algorithm. In such arms, the small line segments densified by the interpolation algorithm are sent directly to each joint axis for execution after the inverse kinematics solution. Without look-ahead control, the movement of the robotic arm is basically accurate, but its smoothness is very poor. For a large number of small interpolated line segments, the robot executes the motion segment by segment, and the speed of each segment is not connected with that of the previous one, so sudden speed changes occur and the robotic arm jitters severely. When the jitter is too large, the trajectory of the desktop robotic arm is no longer accurate.
Look-ahead control plans the speed profile of each small line segment when the robot arm needs to execute a large number of them, ensuring that the speed profiles of consecutive segments are connected end to end. During the movement of the robotic arm, the interpolation algorithm densifies its trajectory into a large number of small line segments joined end to end. The robotic arm needs to execute this large number of small line segments within a particular time to ensure the accuracy of the trajectory. This situation is prevalent in the welding function of industrial robotic arms and in the painting and laser engraving functions of desktop robotic arms. The speed connection between the small line segments directly affects the smoothness of the movement. Typical look-ahead speed profiles are trapezoidal and S-shaped. The trapezoidal profile gives each small line segment a trapezoidal speed curve, and the trapezoids are connected according to a particular speed connection rule. The S-shaped profile further ensures the continuity of acceleration between small line segments. The Mirobot robotic arm adopts the trapezoidal method, which is relatively easy
to implement.
In the control system, each small line segment of the movement of the robot arm is
abstracted as a “block” structure, as shown in Fig. 7. Each block contains information
such as the entry speed, normal speed, and movement distance of the small line segment.

Fig. 7. The speed profile of block structure

The role of speed look-ahead is to plan the movement speed of these small line segments and to connect the exit speed of each segment with the entry speed of the next, subject to speed constraints, as shown in Fig. 8. This method ensures that
the movement speed does not change abruptly, thereby ensuring the smoothness and
stability of the movement.

Fig. 8. The connected blocks after speed look-ahead
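A minimal sketch of how such a look-ahead pass might be organised is shown below. It uses a simplified block structure and a backward pass followed by a forward pass, so that each entry speed is both reachable by acceleration and low enough to decelerate in time. The field names, the acceleration limit and the omission of junction-angle constraints are simplifying assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from math import sqrt

@dataclass
class Block:
    """One interpolated line segment (hypothetical fields, simplified)."""
    distance: float        # segment length (mm)
    nominal_speed: float   # programmed feed rate (mm/s)
    entry_speed: float = 0.0

ACCEL = 500.0  # assumed maximum acceleration (mm/s^2)

def plan_look_ahead(blocks, final_speed=0.0):
    """Connect trapezoidal profiles so the speed never changes abruptly."""
    # Backward pass: an entry speed must allow deceleration to the next
    # block's entry speed within this block's distance.
    next_entry = final_speed
    for b in reversed(blocks):
        b.entry_speed = min(b.nominal_speed,
                            sqrt(next_entry**2 + 2 * ACCEL * b.distance))
        next_entry = b.entry_speed

    # Forward pass: an entry speed must also be reachable by accelerating
    # from the previous block's entry speed.
    prev_exit = 0.0  # the arm starts at rest
    for b in blocks:
        b.entry_speed = min(b.entry_speed, prev_exit)
        prev_exit = min(b.nominal_speed,
                        sqrt(b.entry_speed**2 + 2 * ACCEL * b.distance))
    return blocks

segments = [Block(0.5, 20.0), Block(0.4, 20.0), Block(0.6, 10.0)]
for b in plan_look_ahead(segments):
    print(f"entry speed: {b.entry_speed:.2f} mm/s")
```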

6 PC Control Software and Remote Control


The Mirobot desktop robotic arm can be operated with the PC host software Wlkata Studio or with a remote control. The PC-side control software is shown in Fig. 9. The host
computer provides the single-axis motion function of the robotic arm, the posture con-
trol function in Cartesian mode, the teaching function, the Blockly [16] graphical
programming function, and the drawing function. These functions of Mirobot robots are
designed according to common teaching or laboratory research needs.
For convenient operation, we have also developed a simple mobile phone app and a remote control for the Mirobot robotic arm. Both the mobile phone and the remote control connect to the robotic arm wirelessly via Bluetooth. Figure 10 (left) shows the app interface on the mobile phone, and Fig. 10 (right) shows the remote control interface.

Fig. 9. Computer control software

Fig. 10. The mobile app and remote control



7 Practical Application Testing


We connected the Leap Motion [17] controller to the Mirobot robot for control tests. As
shown in Fig. 11, the operator used his arm and hand to control the robotic arm. The
movement of the arm controlled the movement of the gripper at the end of the robot arm.
Opening the hand and making a fist controlled the opening and closing of the end gripper.

Fig. 11. The operator used his arm and hand to control the robotic arm

As shown in Fig. 12, a watercolor pen was installed on the end flange of the robot,
and the robot wrote a Chinese character “Yong”.

Fig. 12. The robot wrote the Chinese character “Yong”

We wrote the URDF file of the Mirobot robot and imported it into ROS's Gazebo simulation environment. The MoveIt! plug-in in ROS was used for motion planning of the Mirobot robot. As shown in Fig. 13, through the MoveIt! plug-in, the virtual robot arm in Gazebo and the external real robotic arm were controlled simultaneously for motion planning.
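For illustration, such MoveIt!-based motion planning is typically scripted from Python through the ROS 1 moveit_commander interface, as in the minimal sketch below; the planning group name "mirobot_arm" and the target pose are assumptions, not taken from the paper.

```python
# Minimal sketch (ROS 1 / MoveIt Python API); the planning group name
# "mirobot_arm" and the target pose are assumptions, not the authors' setup.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("mirobot_moveit_demo")

group = moveit_commander.MoveGroupCommander("mirobot_arm")

target = Pose()
target.position.x, target.position.y, target.position.z = 0.15, 0.0, 0.12
target.orientation.w = 1.0

group.set_pose_target(target)   # plan to a Cartesian goal
group.go(wait=True)             # execute on the simulated and real arm
group.stop()
group.clear_pose_targets()
```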

Fig. 13. ROS control

8 Conclusion
In this paper, we propose a low-cost desktop robot named Mirobot that is suitable for
robotics education and scientific research. In mechanical structure design, we use stepper
motors to form the joints of the robot and use 3D printing technology to manufacture each
structural part. In the design of the control system, we use the look-ahead speed control
algorithm to ensure the stability of the movement, together with an inverse kinematics algorithm of low computational complexity. The computer control software and the remote control designed for the robot are presented. We showed some typical applications of
the Mirobot robot in writing and scientific research experiments.

References
1. Ceccarelli, M.: Robotic teachers’ assistants - low-cost robots for research and teaching
activities. IEEE Robot. Autom. Mag. 10, 37–45 (2003)
2. Quigley, M., Asbeck, A., Ng, A.: A low-cost compliant 7-DOF robotic manipulator. In: 2011
IEEE International Conference on Robotics and Automation. IEEE, New York (2011)
3. McLurkin, J., Rykowski, J., John, M., Kaseman, Q., Lynch, A.J.: Using multi-robot systems
for engineering education: teaching and outreach with large numbers of an advanced, low-cost
robot. IEEE Trans. Educ. 56, 24–33 (2013)
4. Eilering, A., Franchi, G., Hauser, K.: ROBOPuppet: low-cost, 3D printed miniatures for
teleoperating full-size robots. In: 2014 IEEE/RSJ International Conference on Intelligent
Robots and Systems, pp. 1248–1254 (2014)
5. Ai, Q.S., Yang, Q.F., Li, M., Feng, X.R., Meng, W.: Implementing multi-DOF trajectory
tracking control system for robotic arm experimental platform. In: 2018 10th International
Conference on Measuring Technology and Mechatronics Automation, pp. 282–285. IEEE,
New York (2018)
6. UFACTORY Homepage. https://www.ufactory.cc/#/en/uarmswift. Accessed 26 Jan 2021
7. Sahin, O.N., Uzunoglu, E., Tatlicioglu, E., Dede, M.I.C.: Design and development of an
educational desktop robot R3D. Comput. Appl. Eng. Educ. 25, 222–229 (2017)
8. Ghadamli, F., Linke, B.: Development of a desktop hybrid multipurpose grinding and 3D
printing machine for educational purposes. In: Shih, A., Wang, L. (eds.) 44th North American
Manufacturing Research Conference, NAMRC 44, vol. 5, pp. 1143–1153 (2016)
9. Linke, B., Harris, P., Zhang, M.: Development of desktop multipurpose grinding machine for
educational purposes. In: Shih, A.J., Wang, L.H. (eds.) 43rd North American Manufacturing
Research Conference, NAMRC 43, vol. 1, pp. 740–746 (2015)
10. Grana, R., Ghio, A., Weston, A., Ramos, O.E., Sect, I.P.: Development of a low-cost robot
able to write (2017)
11. Shevkar, P., Bankar, S., Vanjare, A., Shinde, P., Redekar, V., Chidrewar, S.: A Desktop SCARA
robot using stepper motors (2019)
12. Todd, J., Schuett, A.: Closer look at look-ahead, speed and accuracy benefits. Creative
Technology Corp (1996)
13. Tsai, M.-S., Nien, H.-W., Yau, H.-T.: Development of a real-time look-ahead interpolation
methodology with spline-fitting technique for high-speed machining. Int. J. Adv. Manuf.
Technol. 47, 621–638 (2010)
14. Attaran, M.: The rise of 3-D printing: the advantages of additive manufacturing over traditional
manufacturing. Bus. Horiz. 60, 677–688 (2017)
15. Zhou, D., Xie, M., Xuan, P., Jia, R.: A teaching method for the theory and application of robot
kinematics based on MATLAB and V-REP. Comput. Appl. Eng. Educ. 28, 239–253 (2020)
16. García-Zubía, J., Angulo, I., Martínez-Pieper, G., Orduña, P., Rodríguez-Gil, L., Hernandez-
Jayo, U.: Learning to program in K12 using a remote controlled robot: RoboBlock. In: Auer,
M.E., Zutin, D.G. (eds.) Online Engineering & Internet of Things. LNNS, vol. 22, pp. 344–
358. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-64352-6_33
17. Mizera, C., Delrieu, T., Weistroffer, V., Andriot, C., Decatoire, A., Gazeau, J.P.: Evaluation
of hand-tracking systems in teleoperation and virtual dexterous manipulation. IEEE Sens. J.
20, 1642–1655 (2020)
Developing an Introduction to ROS
and Gazebo Through the LEGO SPIKE
Prime

Owen Gervais and Therese Patrosio(B)

Tufts University, Medford, MA 02155, USA


{owen.gervais,therese.patrosio}@tufts.edu

Abstract. In an effort to teach controller design in the college classroom, we have built a toolbox connecting the LEGO SPIKE Prime
robotic platform, Raspberry Pi 4, Onshape CAD environment, and ROS
2/Gazebo. In this paper, we outline the exploratory process in connect-
ing these platforms through the creation of a virtual machine package
bundling all process dependencies and connections. This includes devel-
oping a library of LEGO pieces in Onshape and developing a Raspberry
Pi image and a Linux VM for the laptop that can be shared across
the class (all shared on Google Drive). We demonstrate the package in
building a LEGO self-balancing robot and then using its ROS-controlled
digital twin (drawn in Onshape) to tune the controller (PID) in the vir-
tual world (Gazebo) and then successfully apply the controller to the
LEGO robot.

Keywords: LEGO · Gazebo · ROS · Onshape · Simulation · Virtual twins

1 Introduction

The field of robotics requires a breadth of background knowledge across various


engineering disciplines from mechanical engineering to computer science, making
the barrier of entry high for the beginning student. Increased interest in intro-
ducing these diverse topics earlier in education has led researchers to investigate
the use of simulation, virtual design, and low-cost robotics as possible teaching
tools [1,2].
In the space of industrial robotic virtual design and simulation, open-source
software such as ROS [3] and Gazebo [4] are widely utilized. ROS (Robot Oper-
ating System), a set of software libraries and tools for robotic design, allows the
user to create high level robotic systems without the need for setting up custom
communication protocols through TCP/IP. Gazebo, which was created to inter-
face with ROS, allows the user to access a full physics engine for realistic virtual
testing of their designs.
Previous work has been done to leverage the power of these programs in
curriculums through the frame of virtual twin design [1]. A virtual twin mirrors

the physical robot build allowing for prototyping and testing in a simulation
sandbox, greatly reducing physical prototyping time and increasing accessibility
for users. Once testing of the virtual twin is completed, results will be deployed
to the physical robot for final physical testing and tuning.
In [1] the authors write about utilizing simulation software in the context of
introducing inverse kinematics and programming by framing industrial robots as
the virtual twins. However, given the complexity of industrial robotics for begin-
ning students, a simplified learning platform may also be successful in introduc-
ing fundamental topics.
With the hope of lowering costs and increasing accessibility to robotics, prior
research has investigated the utilization of low-cost robotics kits and systems [2],
mainly Arduino [5] and LEGO education platforms [6].
LEGO Education’s platforms have been widely adopted in universities and
K-12 curriculums in order to provide the base frames for the robotics design
process [7,12]. Given access to LEGO bricks and easily interfaced hardware and
software, the student is quickly able to dive into the design process. By utilizing
this platform, educators have been able to introduce complex robotics topics,
such as reinforcement learning [10] and autonomous operation [12].
Our goal for this research project was to build on previous work leveraging
LEGO platforms in curriculum by using the LEGO SPIKE Prime [13] platform
as the means to introduce ROS 2 and Gazebo to college students. Outlined will
be our developed process combining Onshape [14], ROS 2, and Gazebo, in order
to allow for the creation and programming of a LEGO SPIKE Prime virtual
twin. We will cover our process of creating an asset library of LEGO pieces in
Onshape, the build and design procedure of the virtual twin, and the control
and simulation in Gazebo using ROS 2.

2 Connecting Platforms
2.1 Process Flow Overview
The process flow shown in Fig. 1 demonstrates a high-level overview of the devel-
oped connection. The users will first design a SPIKE Prime robot using the cre-
ated Onshape part library. They will then export their robots to URDF/SDF
format and create their own ROS 2 package. Lastly, the users will simulate
their builds in Gazebo, deploying the simulation results to control their physical
builds. This section of the paper will dive into each of these respective categories
and our solution to packaging this process into a deliverable format.

2.2 Onshape and LEGO SPIKE Prime Part Library


In order to develop a simulation and modelling environment for the LEGO
SPIKE Prime, a complete asset library was first created. Although individual
LEGO pieces are accessible on various 3D modelling community platforms like
GrabCAD [15] or Thingiverse [16], a complete standardized library of SPIKE
Prime parts was not available to the public before starting this work.

Fig. 1. Complete process overview of building and simulating a virtual LEGO SPIKE
Prime robot in Onshape.

Onshape, a collaborative browser-based computer aided design suite, was


chosen for document creation and hosting. With its ability for real-time CAD
collaboration and in-browser operation, any student is able to open Onshape
and collaborate virtually to design with their peers on a shared CAD document.
By building the part library on this platform, educators are able to create
student specific instances of the LEGO SPIKE Prime kit in Onshape [17], setup
collaborative assembly workspaces, and foster engaging CAD design with real-
time collaboration that is not available in other industry standard software, such
as Solidworks [18] or Fusion 360 [19].

2.3 Part Library Creation and Usability

Unlike competing CAD suites, a single part document in Onshape can contain
multiple solid-body models. This unique format allows for the entire LEGO
part library to be created in a single document, which can be made accessible
to all Onshape users. Leveraging this format, we configured each piece into its
respective part category, e.g., liftarm or square brick. This configuration-driven
format allowed for a final assembly workflow which can be seen in Fig. 2. In
the insertion menu on the left, the user can select a part from the desired part
category. This selection will then be populated into the assembly build space to
the right.

2.4 Exporting Robots from Onshape to URDF/SDF Format

The initial goal of creating a virtual twin that utilizes the new asset library in
Onshape led to the discovery of a Python 3 [20] package, onshape-to-robot [21].
Onshape-to-robot utilizes the Onshape API to export assemblies into SDF
format which is compatible with ROS and Gazebo. The SDF format provides
a full dynamic description of a collection of rigid bodies. By connecting LEGO
pieces in parent/child style during the assembly process, the exporter identifies
the trunk link and obtains each concurrent link’s visual, collision, and inertial
information. Inertial data was determined using a LEGO-like ABS material from
the Onshape material library.

Fig. 2. Example part library usage during the build process of a virtual LEGO SPIKE
Prime robot in Onshape

Onshape-to-robot utilizes Onshape document identifiers and personal API


keys to target the correct assembly. In order to facilitate a connection between
onshape-to-robot and ROS, we modified onshape-to-robot to take in a pathname
of the newly created ROS 2 package. We then used the pathname to connect the
meshes and sdf description as illustrated in Fig. 3. The meshes folder contains a
3D model of each component of the robot and the sdf folder contains an xacro.sdf (XML macro) file. This file, when passed into ROS, creates absolute paths for all description files needed to get the robot up and running in Gazebo.

2.5 ROS 2 and Gazebo

In order to use the newly exported robot model with ROS 2, the user creates a
new ROS 2 Python package with the previously supplied pathname. The user
then copies the onshape-to-robot meshes and sdf folders into the ROS 2 package.
A sample launch folder and setup.py file is also supplied to the user ensuring
that all dependency and description file paths are generated correctly. Figure 4
illustrates the described process.
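As an illustration of what such a launch file might look like, the sketch below uses the ROS 2 Python launch API together with gazebo_ros to start Gazebo and spawn the exported SDF model. The package name "spike_balancer" and the file layout are assumptions, not the authors' actual package.

```python
# launch/spawn_robot.launch.py -- a minimal sketch, assuming a package named
# "spike_balancer" whose share directory contains the exported sdf/ folder.
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch_ros.actions import Node


def generate_launch_description():
    pkg_share = get_package_share_directory('spike_balancer')
    sdf_file = os.path.join(pkg_share, 'sdf', 'robot.sdf')

    # Start the Gazebo server and client provided by the gazebo_ros package.
    gazebo = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(get_package_share_directory('gazebo_ros'),
                         'launch', 'gazebo.launch.py')))

    # Spawn the exported model into the running simulation.
    spawn = Node(package='gazebo_ros', executable='spawn_entity.py',
                 arguments=['-entity', 'spike_robot', '-file', sdf_file],
                 output='screen')

    return LaunchDescription([gazebo, spawn])
```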
After completion of these steps, the user will be able to build the package and
launch the ROS 2 start file, spawning their robot into Gazebo. As imported, the
robot will not be controllable until a joint controller is applied to the model. For
the scope of this project, we worked with a built-in differential-drive controller
from gazebo_ros_pkgs [22], which generated the necessary topics, /cmd_vel (wheel velocity control) and /odom (odometry of the base frame), for operation.
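Once the differential-drive plugin is loaded, the simulated (or physical) robot can be driven by publishing geometry_msgs/Twist messages on /cmd_vel. A minimal rclpy sketch, not part of the authors' package, is shown below.

```python
# Minimal rclpy sketch: drive the simulated robot by publishing to /cmd_vel.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist


class Driver(Node):
    def __init__(self):
        super().__init__('spike_driver')
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.tick)  # 10 Hz command rate

    def tick(self):
        cmd = Twist()
        cmd.linear.x = 0.1    # forward speed in m/s
        cmd.angular.z = 0.3   # turn rate in rad/s
        self.pub.publish(cmd)


def main():
    rclpy.init()
    rclpy.spin(Driver())


if __name__ == '__main__':
    main()
```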

Fig. 3. Onshape-to-robot workflow and output folder structure, meshes and sdf

Fig. 4. Process of copying asset folders and supplemental material into the new ROS
2 package

2.6 Virtual Machine Package

To keep the large number of platform connections contained, we set up a


VMWare [23] virtual machine running Ubuntu 20.04 LTS [24]. This image comes
pre-installed with the modified onshape-to-robot package along with all ROS 2
dependencies needed to create a simple differential drive LEGO robot. Supplied
on the desktop of the virtual machine is a getting started guide that outlines
the process described in this paper and provides locations for all supplementary
files for ROS 2 package creation. A publicly available Google Drive [25] folder
hosts the virtual machine for easy student access [26].

3 Testing the Virtual LEGO Build Process with the ROS


2 and Gazebo Integration

For a proof of concept, we focused on the design, simulation, and control of


a differential drive navigation bot and a two wheeled self-balancing robot. In
order to control these virtual twin models, we utilized the built-in differential-
drive controller from gazebo_ros_pkgs.
This section focuses on the training of the two wheeled self-balancing LEGO
robot. As a common introductory lesson in control theory, balancing an inverted
pendulum requires constant control gain iteration in order for success. By sim-
ulating the robot in Gazebo, users are able to quickly move through iterations
to generate their PID control gains. Simulation results can then be deployed to
their physical robot build for final testing.
Following the procedure in Fig. 1, we built the physical counterpart and mod-
eled the self-balancing LEGO robot in Onshape. The final designs, both virtual
and physical, can be seen in Fig. 5 and Fig. 6.

Fig. 5. Physical build of LEGO SPIKE Prime balancing robot

We then converted the virtual twin from an Onshape assembly into SDF
format and created the new ROS 2 package using the supplied materials in
the Ubuntu virtual machine. After creating a simple PID control, utilizing the
orientation of the SPIKE Prime Hub for pose feedback, we tuned the system until
we achieved stability in Gazebo for three seconds. Captures from the Gazebo
simulation can be seen in Fig. 7.
We then deployed these generated gains to the physical counterpart of the
self-balancing LEGO SPIKE Prime robot. By controlling motor speed with
PWM, we were able to achieve dynamic stability with the same gain values
generated in Gazebo.
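For illustration, a PID balance loop of the kind described above might be structured as in the following rclpy sketch. The /imu topic, the pitch extraction, the fixed time step and the gain values are assumptions made for the example, not the authors' controller.

```python
# Hedged sketch of a PID balance loop; topic names and gains are assumptions.
import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu
from geometry_msgs.msg import Twist


class BalancePID(Node):
    def __init__(self):
        super().__init__('balance_pid')
        self.kp, self.ki, self.kd = 30.0, 0.5, 1.2   # placeholder gains
        self.integral = 0.0
        self.prev_error = 0.0
        self.dt = 0.01  # assumed fixed IMU update period (s)
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.create_subscription(Imu, '/imu', self.on_imu, 10)

    def on_imu(self, msg: Imu):
        # Approximate pitch from the orientation quaternion.
        q = msg.orientation
        pitch = math.asin(max(-1.0, min(1.0, 2.0 * (q.w * q.y - q.z * q.x))))
        error = -pitch                      # target is upright (0 rad)
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error

        cmd = Twist()
        cmd.linear.x = (self.kp * error + self.ki * self.integral
                        + self.kd * derivative)
        self.pub.publish(cmd)


def main():
    rclpy.init()
    rclpy.spin(BalancePID())


if __name__ == '__main__':
    main()
```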

Fig. 6. Virtual build of LEGO SPIKE Prime balancing robot

Fig. 7. Screen capture from the tuning process during Gazebo simulation

4 Discussion

In this project, we were motivated to introduce virtual robotics design and sim-
ulation tools, ROS 2 and Gazebo, through the common framework of the LEGO
SPIKE Prime. The choice of the LEGO Education robotics platform stemmed
from its high level of complexity abstraction. By providing intuitive electronics,
materials, and programming environments, the LEGO SPIKE Prime lowers the
barrier of entry to robotics for beginning users.
The creation of this process was intended to bridge the knowledge gap that
arises when students wish to add ROS and Gazebo to their toolboxes. In order
to build, design, and simulate using these tools, students have to first gain high
proficiency in CAD and programming before they produce their first iteration.
By removing the need to delve into the inner workings of each piece of software, complexity can be abstracted away in order to introduce ROS and Gazebo earlier.
By providing the entire LEGO SPIKE Prime set in Onshape, we remove the
need for students to model each individual component. This allows the students
to focus on learning assemblies and mechanical design more broadly. Following
the process outlined in this paper, the student is then able to quickly export
their model, start their own physics simulations, and setup control theory test-
ing environments. This allows the students to focus on learning the basics of
ROS and Gazebo without the setup time of structuring a ROS 2 package that
connects with Gazebo.
Throughout this process we ran into many issues in streamlining and finding
the best way to balance abstraction without losing valuable learning oppor-
tunities. After personal explorations into the respective programs, we found a
method of setting repeatable paths to all of the assets and created a package
encompassing this work in the form of a VMware virtual machine.
While the complexity of this process is currently still out of reach for our
original middle-school goal, there are still exciting possibilities for future work
introducing machine learning and reinforcement learning through this medium.
In the proof-of-concept example in Sect. 3, all PID gain tuning was done by hand.
Future ventures could foreseeably connect AI training to this platform in order
to teach different control approaches.
Looking to streamline the process further, we could implement a bash script
that would tie the file renaming and ROS 2 package creation into a one-line oper-
ation. With this new workflow after the student creates their model in Onshape,
they would be able to run this bash command and quickly dive into the ROS
and Gazebo environment.

5 Conclusion

We developed a process to allow students to use ROS 2 and Gazebo through


the common frame of the LEGO SPIKE Prime platform. By utilizing open-
source and industry software throughout, this process connects the student with
multiple valuable learning opportunities: CAD assembly, Python programming,
and APIs. This process culminates in the ability to control and simulate their
SPIKE Prime creations through ROS 2 and Gazebo. Future work on this project
would dive into integrating machine learning platforms in order to leverage the
simulation control environment further.

References
1. Lopez-Nicolas, G., Romeo, A., Guerrero, J.: Simulation tools for active learn-
ing in robot control and programming, pp. 1–6. (2009). https://doi.org/10.1109/
EAEEIE.2009.5335490
2. Irigoyen, E., Larzabal, E., Priego, R.: Low-cost platforms used in control education:
an educational case study. IFAC Proc. Volumes 46(17), 256–261 (2013). https://
doi.org/10.3182/20130828-3-UK-2039.00058
3. ROS 2. https://index.ros.org/doc/ros2/
4. Gazebo. http://gazebosim.org/
5. Arduino. https://www.arduino.cc/
6. LEGO Education. https://education.lego.com/en-au/
7. Edwards, G., et al.: A test platform for planned field operations using LEGO
mindstorms NXT. Robotics 2(4), 203–216. ProQuest. (2013). https://doi.org/10.
3390/robotics2040203
8. Danahy, E., Wang, E., Brockman, J., Carberry, A., Shapiro, B., Rogers, C.B.:
LEGO-based robotics in higher education: 15 years of student creativity. Int. J.
Adv. Robot. Syst. 11(2), 27 (2014). https://doi.org/10.5772/58249
9. Cheng, C.-C., Huang, P.-L., Huang, K.-H.: Cooperative learning in lego robotics
projects: exploring the impacts of group formation on interaction and achievement.
J. Netw. 8(7), 1529–1535 (2013). https://doi.org/10.4304/jnw.8.7.1529-1535
10. Martinez-Tenor, A., Cruz-Martin, A., Fernandez-Madrigal, J.-A.: Teaching machine learning in robotics interactively: the case of reinforcement learning
with lego mindstorms. Interact. Learn. Environ. 27(3), 293–306 (2019). https://
doi.org/10.1080/10494820.2018.152541
11. Fanchamps, N.L.J.A., Slangen, L., Hennissen, P., Specht, M.: The influence of SRA
programming on algorithmic thinking and self-efficacy using Lego robotics in two
types of instruction. Int. J. Technol. Des. Educ. (2019).https://doi.org/10.1007/
s10798-019-09559-9
12. Akin, H.L., Mericli, C., Mericli, T.: Introduction to autonomous mobile robotics
using Lego mindstorms NXT. Comput. Sci. Educ. 23(4), 368–386 (2013). https://
doi.org/10.1080/08993408.2013.838066
13. LEGO SPIKE Prime. https://education.lego.com/en-us/products/lego-education-
spike-prime-set/45678#spike
14. Onshape. https://www.onshape.com/en/
15. GrabCAD. https://grabcad.com/
16. Thingiverse. https://www.thingiverse.com/
17. LEGO SPIKE Prime Onshape Document. https://cad.onshape.com/documents/
f80b668b3ae9c732b3c7d91b/w/cc29213b0eb52b9d3bc554e2/e/78708f75527b34f
b9f21b768
18. Solidworks. https://www.solidworks.com/
19. Fusion 360. https://www.autodesk.com/products/fusion-360/overview
20. Python. https://www.python.org/
21. Rhoban, onshape-to-robot, Github Repository (2020). https://github.com/
Rhoban/onshape-to-robot
22. gazebo_ros_pkgs. http://wiki.ros.org/gazebo_ros_pkgs
23. VMware. https://www.vmware.com/
24. Ubuntu. https://ubuntu.com/
25. Google Drive. https://www.google.com/intl/en/drive/
26. Virtual Machine Google Drive Hosting. https://drive.google.com/drive/folders/
1jaeOl53kcf-iK5GLCMRy wXDmsG37tWbi?usp=sharing
Constructing Robots for Undergraduate
Projects Using Commodity Aluminum
Build Systems

John Seng(B)

California Polytechnic State University, San Luis Obispo, CA 93407, USA


jseng@calpoly.edu
https://www.csc.calpoly.edu/~jseng

Abstract. In this paper, we outline our experiences constructing two


robots for undergraduate student projects using different commodity alu-
minum construction systems, Actobotics and goBilda. These building
systems have gained popularity for use in high school robotics competi-
tions. For our work, we adopt these modular build systems to construct
robots used for undergraduate projects and as research robots. These
building systems provide aluminum components that can be used to con-
struct robot chassis and have the benefit of being reusable. We evaluate
the benefits and drawbacks of each system. In addition to the evaluation
of the construction systems used, we outline integration of embedded
computers running ROS and the characteristics of each system.

Keywords: Aluminum construction system · Actobotics · goBilda · Undergraduate robotics projects

1 Introduction

Robotics education in the college classroom environment can take shape in many
forms, including: using small robots in a class, having students participate in
large scale team projects, or using virtual environment simulators. In the first
example, there is much students can learn from smaller scale robots (i.e. Arduino-
powered designs), but students may be missing out on larger system integration
issues and may be needing more compute capability. When working on a large
scale team project, such as a group of students working on an autonomous car,
students may encounter system level engineering issues, but may become pigeon-
holed into the area of the project they are working on. In the last example of
a simulation environment, students may be missing the hands-on aspect that
many students appreciate when working in the field of robotics.
In our work, we focus on the area of what we categorize as medium-sized
robots (i.e. microwave oven-sized) which have a single Linux computer for com-
pute that is coupled with several other sensors (e.g. cameras, lidar). Having
medium-sized robots available for undergraduate students to work on as projects
can be very effective in improving student learning and can expose them to many

software, mechanical, and electrical issues that they may encounter when work-
ing on an engineering project in their future careers.
Commercial research robots in the university environment can often be
expensive and not very configurable if research needs change over time. In this
work, we discuss the use of commodity aluminum build systems (ones that are
often marketed towards high school robotics competitions) and our experience
in utilizing such systems for undergraduate projects, along with the computer
systems used in the robots.
We have used these modern aluminum build systems to construct two gen-
erations of project robots that we call Herbie 1 and Herbie 2. These robots are
used at our university as a foundation for student projects and for undergraduate
research. The robots use different build systems and we outline the differences
between the systems.
In this paper, we will describe the robot designs in parallel and cover the
various design aspects of each robot. This work focuses on robots that are not
intended to be duplicated for multiple student groups in a classroom setting.
That is completely possible with our work, but it is currently not viable in our
university’s educational environment and budget. Instead, our robots are used
as development platforms for senior design projects where a few students work
on particular aspect of the robot at a time.
This paper is organized as follows: Sect. 2 outlines prior work that is related
and relevant to this project. Section 3 outlines the construction systems used in
the robot designs and covers the benefits and drawbacks of using the systems in
an undergraduate student environment. Section 4 describes the compute hard-
ware and software used on the Herbie robots. Section 5 covers the experience
gained from utilizing these robots. Section 6 concludes.

2 Related Work
Aside from the construction systems that are discussed in our work, there are a
number of other modular aluminum construction systems that are available on
the market. Some of these include Tetrix Max [12], Rev Robotics [13] channel
and extrusion, and the Andy Mark S3 system [14]. The Tetrix Max system by
Pitsco Education is very similar to the construction systems that we utilize. The
system consists of aluminum channel and provides flexibility in motor mounting.
The channel design offered by Rev Robotics has the unique feature of having
slots to allow easy slide mounting of brackets in a manner similar to a T-slot.
The S3 system by Andy Mark provides a custom box aluminum extrusion for
building structures. We find that all 3 related systems target FTC (FIRST Tech
Challenge) contestants on their respective web sites. For our work, we selected
the Actobotics [2] system and the goBilda [3] system based on their merits,
academic discount pricing, and market availability.
One research robot that utilizes the Actobotics construction system is a snake
robot developed by Dear et al. [6]. The authors use U-channel as the body of the
snake and utilize servos to actuate the locomotion of the robot. In that work,

Fig. 1. Photograph of Actobotics (on the left) and goBilda channel (on the right).
The channel is U-shaped with regularly spaced larger holes for mounting bearings and
axles. The smaller holes can be used for fastening brackets and other connectors.

the authors studied snake robot locomotion when motion constraints are placed
at each segment of the robot. Work by Ray et al. [7] uses an Actobotics gripper
to implement techniques to efficiently detangle salad from plant material. These
works focus primarily on using the Actobotics components for research work and
not in the educational domain.
Ramos et al. [8] describe an open source platform that is designed for teaching
legged robot control in a classroom environment. The hardware structure they
present is constructed using the goBilda system.

3 Commodity Construction Systems


In recent years, reconfigurable metal construction systems have become popular
in the educational robotics community especially due to their use in high school
robotics competitions such as FIRST robotics [1]. Because the competition goals
change every year, high school teams competing in FIRST robotics need
construction systems that allow rapid reconfiguration and, to keep costs down,
reuse of parts from season to season. These competitions typically involve having the robots move and
grasp objects while driving very quickly under human remote control. Several
companies have emerged to cater to the needs of educators and students to
support the construction of these competition robots.
The component systems that we describe in our work may seem similar to
other metal-based construction systems (such as Erector [10] or VEX [11]), but
we find a number of advantages with the systems we utilize, Actobotics and
goBilda. The construction systems we evaluate provide some variation of metal
U-shaped channel as long structural members that allow ball-bearings and axles
to be placed at various locations within the channel. In Fig. 1, we show Actobotics
and goBilda channel side-by-side.

Fig. 2. Diagram of Actobotics pattern spacing. The larger center hole is for mounting
ball bearings and the smaller holes can be used for brackets and connectors. Image
used with permission [4].

In our work, we utilize two construction systems and discuss their suitability
to building robots for university-level student projects and research. These build
systems are produced by the same parent company and we outline the differences
between the systems. We are not affiliated with the parent company and have
selected these build systems on their merits.

3.1 Actobotics
Actobotics is a build system based on the Imperial measurement system. The
structural components of the Actobotics system are fastened together using 6–32
screws. Screws of this size are sold in various lengths and are sufficient for the
robot size we are using (approximately 10 pounds total weight).
The primary structural component of the Actobotics system is U-channel,
which has a 1.5″ square U-shaped cross section (open on one side), with holes
drilled as shown in Fig. 2. The channel comes in lengths based on the number of
holes in the channel (e.g. 3 hole, 5 hole, 7 hole).
Figure 2 shows the layout of holes in Actobotics channel. In the Actobotics
build system, the main holes are 0.502″ in diameter and are intended to house
0.5″ flanged ball bearings and allow axles (commonly sized to 1/4″ diameter)
to pass through. One significant feature of both of these build systems is the
ability to utilize ball bearings in various locations of a robot build. The flange
on the ball bearing allows it to be easily fitted from one side of the channel while
preventing the bearing from falling through.
Surrounding the 0.502″ primary hole are 8 smaller holes that allow a 6–32
bolt to pass through. These 8 holes are not threaded and allow beams and
brackets to be connected when the connecting holes are spaced 0.770″ apart. The
8 holes allow mounting items conveniently at a 45° angle to the U-channel. U-
channel on its own does not provide the strength that an equivalently sized

Fig. 3. Diagram of goBilda pattern spacing. Holes are drilled on an 8 mm grid pattern.
Image used with permission [5].

square tubing provides. To create a strong channel similar to square tubing,


spacers can be installed to close the end of the U-channel. One item that we
observed is that the corners of Actobotics channel have a radiused (rounded) edge
which removes the sharp corner of the aluminum. This radiused corner is not present
in the goBilda system described in the next section.
In addition to the U-channel, other structural members that we find very
convenient to use are the Actobotics structural beams. A beam is a straight
aluminum component with regularly spaced drilled holes. These beams can easily
be fastened to U-channel and provide a way to support loose cables and mount
other types of cabling strain relief.
For the drivetrain, we use a direct drive system using 37 mm 12 V gearmotors.
The Actobotics system has a few different mounting options for this diameter
motor and we selected a mount that clamps around the motor. There were no
issues with motors slipping in this clamping mount.
The Herbie 1 wheels are 6″ wheels with a rubberized tread and they connect
to the 6 mm motor shaft using an Actobotics hub for 6 mm D-shafting. The
primary drive surface is outdoor concrete sidewalk. The motors provide approx-
imately 300 oz-in of torque and there was no problem in moving the robot using
these motors.

3.2 GoBilda
The Herbie 1 robot is constructed entirely using the Actobotics build system,
which represented a significant investment. At the time of the design of
Herbie 2, the goBilda system had just become available, and its advantages
were significant enough to justify investment in the newer system.
One of the main differentiators of the goBilda build system is that the system
is completely metric. All bolt lengths, channel lengths, and hole spacings are
specified in millimeters. Although this fact in itself was not a primary reason to
move to the build system, one benefit is that the system was designed to use

M3 metric screws. The larger diameter M3 screws were a better design choice
for the size of the Herbie 2 robot to be constructed.
A second advantage of the goBilda system is that the standard channel is
larger. The exterior dimension of the square U-channel is 48 mm, with the
interior dimension being 43 mm. This increased size allows motors to fit inside
the channel to produce a very compact right-angle drivetrain. Because Herbie
1 and Herbie 2 needed to pass through doorways, robot width was always an
important design constraint. The right angle drivetrain allows the robot width
to not be limited by the length of the drive motors. The goBilda system provides
many different wheel options, but for Herbie 2 it was determined that a larger
wheel diameter was required. 8″ pneumatic kick-scooter wheels were found to
be suitable and they worked with an 8 mm axle that is available in goBilda.
As shown in Fig. 3, the main center hole for bearings is 14 mm. This hole is
surrounded by 4 mm holes arranged on an 8 mm grid. The grid allows components
to be connected in several locations, and the holes at the corners surrounding the
14 mm hole are enlarged to match a 16 mm grid spacing when connecting items
at 45°.

3.3 Comparison

While both build systems provide similar functionality, the ability to place drive
motors inside U-channel provided a significant advantage for our application
in Herbie 2. If there were no width requirement for our particular application,
Actobotics would have served just as well in terms of drivetrain capabilities for
the second generation design.
The standard grid spacing of goBilda was very useful when designing the
sensor mounts for Herbie 2. Several of the sensor brackets are 3-D printed with
8 mm holes spaced to match goBilda and this allowed the sensor positions to be
easily modified on the robot when necessary. We have found that because some
of our sensors and components are sourced from international companies, the
metric spacing of the goBilda system has proven to be advantageous.
Both Actobotics and goBilda provide a variety of mounts and channel con-
nectors. There are various mounts to hold DC motors to U-channel and also
various patterned mounts which have drilled out holes to match the spacing
pattern of the channel.
In terms of additional structural components, both systems provide extruded
aluminum rail (similar to 80/20 aluminum extrusion [15], but matching the
respective channel hole spacing). This extruded railing allows infinite adjust-
ment of bracket mounting positions. Additionally, both systems provide timing
belts and chain for power transmission. At this time, we have not utilized these
components for drivetrain design.

Fig. 4. Photograph of Herbie 1. Herbie 1 is constructed using Actobotics components
and uses an NVIDIA TX2 board for compute. The robot is 15″ (380 mm) wide and
32″ (812 mm) tall.

4 Compute Hardware and Sensors


In this section, we describe the compute hardware on the robots as well as the
sensor systems.

4.1 Embedded Computers


For both robots, it was important to be able to use the Robot Operating System
(ROS) running on top of a standard Linux distribution. We selected two computer
boards from the NVIDIA Jetson line of boards. These boards have multiple
ARM cores along with an NVIDIA GPU for neural network inference. Power
consumption is a very important metric, and the boards we use run on
approximately 10–15 W of power. This allows the robots to be powered
with a commonly available 4S LiPo 6000 mAh battery.
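As a rough sanity check on battery life (our own back-of-the-envelope estimate, not a measured figure), assuming the 14.8 V nominal voltage of a 4S pack and neglecting motor draw:

E ≈ 14.8 V × 6 Ah ≈ 89 Wh,  t ≈ 89 Wh / (10–15 W) ≈ 6–9 h.

In practice the drive motors add to this load, so actual runtimes are shorter.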
For Herbie 1, we use the NVIDIA Jetson TX2, which comes in a Mini-ITX
motherboard form factor. Because the Mini-ITX hole spacing does not align
with the Actobotics standard hole spacing, we use a lasercut acrylic sheet as
an adapter plate to mount the Jetson TX2 to the Actobotics frame. We use an
external SSD for data storage.
In terms of support electronics, we use an Arduino Uno for reading motor
encoder counts and an additional USB dual motor controller to handle driving
the two drive motors. A picture of a completed Herbie 1 is shown in Fig. 4.
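For concreteness, the listing below sketches how the Jetson side might poll such an Arduino for encoder counts over USB serial using pyserial. The port name, baud rate, and the one-line "left,right" reply format are our own placeholders for illustration, not the actual Herbie firmware protocol.

import serial

PORT = "/dev/ttyACM0"   # typical Arduino Uno device node on Linux (assumed)

def read_encoder_counts(link):
    """Request and parse one pair of cumulative encoder counts."""
    link.write(b"E\n")                       # hypothetical 'send encoders' command
    line = link.readline().decode().strip()  # e.g. "10342,10318"
    left, right = (int(v) for v in line.split(","))
    return left, right

if __name__ == "__main__":
    with serial.Serial(PORT, 115200, timeout=1.0) as link:
        print("encoder counts:", read_encoder_counts(link))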
On Herbie 2, we looked to use an upgraded computer and selected the
NVIDIA Jetson AGX Xavier which also comes in a smaller 105 mm × 105 mm

Fig. 5. Photograph of Herbie 2. Herbie 2 is constructed using components from the
goBilda construction system. The robot is 15″ (380 mm) wide and 23″ (585 mm) tall.

square footprint. Because of the smaller footprint, we use a 3-D printed mount-
ing bracket to hold the AGX Xavier board. The Jetson Xavier provides support
for an M.2 SSD and that is used to hold additional data and record video. For
low level control, the Arduino Uno is replaced by a Sparkfun Redboard Turbo
[16]. A picture of Herbie 2 under development is shown in Fig. 5.

4.2 Sensors and Software

The sensor set for both robots is similar, with slight upgrades where available.
Because the batteries of both robots are relatively small (4S LiPo 6000 mAh),
there is a limitation on the number of sensors that can be on board.
For camera input, both robots use the ZED stereo camera with Herbie 1
using the original ZED camera and Herbie 2 using the ZED 2. These cameras
provide a depth map which is computed using the on-board GPU of the NVIDIA
boards.
For Herbie 2, we add an additional 2-D laser scanner for obstacle detection
and for building maps. For both robots, we have used the RTAB-Map SLAM
system [17]. This SLAM system allows for map building using visual input as
well as with laser scanner input. RTAB-Map has built-in support for ROS and
supports several feature detection and matching techniques.
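As an illustration of how student code consumes these sensor streams under ROS, the short rospy node below subscribes to the 2-D laser scan and logs the nearest obstacle. The /scan topic name is an assumption about our launch configuration rather than something fixed by RTAB-Map.

import rospy
from sensor_msgs.msg import LaserScan

def on_scan(msg):
    # report the closest valid return in the current scan
    valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
    if valid:
        rospy.loginfo("nearest obstacle: %.2f m", min(valid))

if __name__ == "__main__":
    rospy.init_node("scan_monitor")
    rospy.Subscriber("/scan", LaserScan, on_scan)  # assumed topic name
    rospy.spin()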

5 Experience
In this section, we outline two types of experience that we have obtained through
the Herbie 1 and Herbie 2 projects. The first is our experience in using the
modular construction systems. The second is student involvement
in the projects.

5.1 Build System Experience


Our primary observation in using these build systems is that the ability to recon-
figure the structure of a robot has provided a tremendous advantage. The deter-
mination of the appropriate width and height of the robot has been a trial and
error process and the ability to quickly rebuild the chassis has saved time on
a number of construction iterations. In addition, because of the large number
of holes drilled into each segment of U-channel, it is easy to quickly adjust the
position of a sensor or to adjust the position of a motor to change weight distri-
bution.
One of our initial concerns when running the robots was the problem of
fasteners coming loose during the testing and operation of these undergraduate
project robots. We have found that this has not been an issue even during our
outdoor operation of these robots on concrete sidewalks. The strength of our
robot structures has not been a concern either. Even though the U-channel of
both systems is not inherently as strong as a boxed square tube, with the addition
of the correct type of standoffs and spacers, the structure can be just as rigid.
A primary reason for using these build systems to construct our robots has
been the potential to save on costs over the long term when rebuilding robots
from one design to the next. Because the robots are built using two different
construction systems, we were not able to see cost savings in transitioning from
Herbie 1 to Herbie 2, but we believe that for future robot designs, we will not
need to invest in a new construction system. We think that this is the greatest
benefit of using a modular system for an undergraduate project robot.

5.2 Student Experience


Initially, the preliminary robot was designed by a faculty member and one com-
puter engineering student as part of an independent study project. This prelim-
inary design of a robot involved selecting the construction system and it was
through this process that the Actobotics build system was selected. During this
initial design phase, an NVIDIA embedded controller board was chosen and the
design required a custom mounting plate for the controller. It was at this point
that it was decided that the robot project as a whole would make a good platform
for undergraduate students. During this process, the student was exposed to the
overall design process involving selecting components, budgeting, and software
development. Although this was a greatly beneficial educational experience for
the student involved, this type of experience would be difficult to duplicate in a
classroom environment.

A second set of undergraduate students who worked on Herbie 1 were focused


on adding audio capabilities to the robot. Instead of using the on board audio of
the computer, the project revolved around making an external audio interface
that could be remotely activated even when the main computer was not active.
This exposed the students to the challenges of low power electronics and wireless
communication.
Because of the reconfigurable nature of the robots, we found that students
were inclined to make suggestions on the design of the robot in terms of driv-
etrain and sensor placement. We believe that if we had used a pre-built robot
design, students would likely have felt more restricted during their development
process.
For the Herbie 2 design, one of the advantages of the upgraded computer
was that groups of students could develop machine learning models in order to
classify various obstacles that the robot may encounter. Students have developed
a process infrastructure to manage training images and to use that data to
develop machine learning models built around the Efficientnet architecture [18].
This infrastructure will be greatly beneficial to the groups of future students
who will be working on building additional machine learning models.
Overall, we have observed that the construction of the two robots has been a
successful approach to providing upper level undergraduate students with plat-
forms that serve as a basis for their design work. Although the robots are not
completely suitable for use in a classroom environment because we are not able
to construct several in parallel, there is enough engineering work required that
small student teams can work on implementing various capabilities. This does
require more coordination and management in terms of faculty involvement, but
robotics projects such as these require close involvement in order to maintain
continuity through the lifespan of the robot design.

6 Conclusion
In this work, we describe the construction of two robots for undergraduate stu-
dent projects: Herbie 1 and Herbie 2. Instead of purchasing pre-built robot chas-
sis designs, the structures for these robots are constructed using Actobotics and
goBilda. These are two commodity aluminum build systems that are commonly
used in high school FIRST robotics competitions. We find these build systems to
be robust, reconfigurable, and reusable - all characteristics needed for develop-
ing robots for undergraduate student projects and research. The build systems
provide the ability to quickly construct a stable platform for carrying embedded
computers, sensors, and batteries.
Throughout the development of these robots, we have incorporated under-
graduate student work through various projects. Some students have been
involved in designing the chassis itself while other students have been a part
of adding various features (mechanical, electrical, and software) to the robots.
In addition, students have been involved in developing software infrastructure
to enable future groups to develop machine learning models to enhance robot
capabilities.

References
1. Burack, C., Melchior, A., Hoover, M.: Do after-school robotics programs expand
the pipeline into STEM majors in college? J. Pre-College Eng. Educ. Res. 9(2),
1–13 (2019)
2. Actobotics. https://www.servocity.com/actobotics
3. goBilda. https://www.gobilda.com
4. Actobotics Pattern. https://cdn11.bigcommerce.com/s-tnsp6i3ma6/images/
stencil/original/products/3671/21044/585440-Schematic 02283.1583537050.png?
c=2
5. goBilda Pattern. https://cdn11.bigcommerce.com/s-eem7ijc77k/images/stencil/
original/products/846/27263/1120-0001-0048-Schematic 02571.1605298181.png?
c=2
6. Dear, T., Buchanan, B., Abrajan-Guerrero, R., Kelly, S., Travers, M., Choset, H.:
Locomotion of a multi-link non-holonomic snake robot with passive joints. Int. J.
Robot. Res. 39(5), 598–616 (2020)
7. Ray, P., Howard, M.: Robotic untangling of herbs and salads with parallel grippers.
In: International Conference on Intelligent Robots and Systems (2020)
8. Ramos, J., Ding, Y., Sim, Y., Murphy, K., Block, D.: HOPPY: an open-source and
low-cost kit for dynamic robotics education. https://arxiv.org/pdf/2010.14580.pdf
9. NVIDIA Robotics Teaching Kit. https://developer.nvidia.com/teaching-kits
10. Erector by Meccano. http://www.meccano.com/products
11. VEX Robotics. https://vexrobotics.com
12. Tetrix Robotics. https://www.pitsco.com/Shop/TETRIX-Robotics
13. REV Robotics. https://www.revrobotics.com/
14. Andy Mark S3 Tubing. https://www.andymark.com/products/s3-tubing-options
15. 80/20 T-Slot Aluminum Building System. https://8020.net/
16. Sparkfun Redboard Turbo. https://www.sparkfun.com/products/14812
17. Labbe, M., Michaud, F.: RTAB-map as an open-source lidar and visual SLAM
library for large-scale and long-term online operation. J. Field Robot. 36(2), 416–
446 (2019)
18. Tan, M., Le, Q.: EfficientNet: rethinking model scaling for convolutional neural
networks. In: International Conference on Machine Learning (2019)
Hands-On Exploration of Sensorimotor
Loops

Simon Untergasser(B) , Manfred Hild, and Benjamin Panreck

Beuth University of Applied Sciences Berlin, Berlin, Germany


simon.untergasser@beuth-hochschule.de
http://neurorobotik.de

Abstract. The behavior of many living beings is at least partly influ-


enced by the coupling of their sensors and actuators. In order to be able
to implement behavior adapted to the environment in robotic systems
as well, it is helpful to understand and emulate these sensorimotor cou-
plings. Therefore, sensorimotorics is not only a current research topic, but
can also be found in robotics curricula. In order to understand this com-
plex topic in depth, first-hand experience of interacting with robots is
extremely helpful. In this paper, a curriculum for a sensorimotor lecture
is presented, which includes a lot of practical experience. For this pur-
pose, the necessary mathematical basics (dynamical systems) are taught
and deepened with experiments on simple robots.

Keywords: Cognitive robotics · Sensorimotorics · Hands-on


experiments

1 Introduction
In robotics research, among other things, researchers are working intensively on
sensorimotor coupling. This refers to the coupling of sensory stimuli with motor
activity in natural and in artificial systems. The sensors provide information
about the environment (for example visual stimuli) and about the state of the
own body (for example temperature). To survive in an environment, a system
must process this information and translate it into actions that satisfy its needs
and, if possible, improve its state. Since there is an interdependence, actions
performed can influence sensory information and vice versa. This ongoing pro-
cess is called a sensorimotor loop. To enable robots to interact in our highly
complex and unstructured environment, it can be helpful if they, too, have some
sort of coupling of their sensors to their actuators. These can be simple wirings
from sensors to motors, such as those described in [Braitenberg 1986], or more
complex networks, such as in [Sun et al. 2020]. However, it can be said that the
study of sensorimotor couplings is still in its infancy. Therefore, it is important to
incorporate the study of sensorimotor couplings into education in relevant top-
ics (such as humanoid robotics in this paper) so that future researchers can be
exposed to it at an early stage. However, it is often difficult to teach these com-
plex relationships and the mathematical background in an easily understandable

Fig. 1. A cognitive system consisting of sensors, actuators and a controller. It is situated


in some environment and interacts with it. The dynamic system of interest comprises
the cognitive agent and the world surrounding it. (Figure based on [Montúfar et al.
2015])

way. In order to still achieve an intuitive understanding among young scientists,


this paper presents a concept in which sensorimotor principles can be experi-
enced interactively with the help of increasingly complex robots. Here, students
are enabled to complement theoretical content with hands-on experiments by
interacting with robots, customizing them with their own programming, and
conducting their own experiments.

2 Sensomotorics Curriculum
In the following, first an overview of topics from the curriculum of the sensori-
motor course is given. Afterwards, it is described how this curriculum is used,
the individual topics are put into context, and it is argued why they are relevant
for robotics.
– The Dynamixel XH430 motor
– Dynamical Systems Theory
– Sensorimotor Loops
– Neural Networks
– Artificial Evolution
– Central Pattern Generators etc.
This list of topics comprises the “sensorimotorics” lecture (second semester)
in the Humanoid Robotics program at Beuth University of Applied Sciences
Berlin. This lecture continues topics from the first semester (reactive robotics,
electronics, mathematics) and prepares for the following semesters (cognitive
robotics, adaptive systems, control theory). In the course of this lecture, the
students deepen their knowledge using different robots. To do so, they first need
to understand how a typical robot actuator works, and how they can use it. So
in the first weeks, an important topic is the Dynamixel XH430 actuator and how
to program it. Equipped with this actuator, different sensorimotor behaviors

can be explored. The mathematical tools for sensorimotor systems come from
dynamical systems theory. A dynamical system is the mathematical description
of a system that changes over time. In most cases, the exact behavior of more
complex systems cannot be predicted, but qualitative statements can be made.
For example, specific states, called attractors, can be described toward which
a system moves (e.g., the resting state of a pendulum). Figure 1 shows a cog-
nitive agent (for example, a robot) embedded in an environment. It interacts
with this environment using its actuators and sensors. The environment and the
cognitive system, both dynamic systems in their own right, influence each other
and therefore also form a dynamic system overall. The controller may consist
of a recurrent neural network [Haykin et al. 2009]. These networks are capa-
ble of enabling robots to behave in interesting ways (see, for example, [Hülse
et al. 2004] or [Candadai et al. 2019]). One can also consider them as dynamical
systems and therefore study them with the same mathematical tools. To find
suitable networks, it is helpful to use artificial evolution. Following ideas from
nature, artificial evolution is a population-based optimization algorithm [Nolfi
and Floreano 2000]. It is often used when classical optimization strategies such
as gradient methods do not work well. Artificial evolution can provide a solution
to a problem that is not optimal in some cases, but good enough for the task at
hand. Instead of evolving neural networks via artificial evolution, one can also
find interesting behaviors through theoretical reasoning (see, e.g., [Schilling et al.
2013]). For example, Central Pattern Generators are simple constructs of a few
neurons that can serve as the basis of walking behaviors of more complex robots
([Hild 2008]). Another important topic is the interaction of robots with their
environment. To achieve robust behavior in dynamic and unstructured environ-
ments, it is necessary for robots to adapt to changing environmental conditions.
It can be beneficial if the robot is able to use its body and sensors to acquire the
information it needs about the environment to do so. Again, dynamic systems
theory can be used to structure the gathered information. For example, stable
postures (lying down, standing up straight; see Fig. 4 for some examples) of the
robot correspond to attractors in the dynamic system consisting of robot and
environment. In addition, there are unstable postures (repellors, e.g. tilting on
an edge). This possibility space can be explored by the robot and paths (i.e.
movement patterns) can be found so that the robot can switch between different
postures. Depending on the complexity of the robot and the environment, more
or less complex sensorimotor manifolds are thus found.
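To make the attractor idea concrete before the robot experiments, a damped pendulum can be simulated in a few lines of Python; this is a classroom-style illustration of the mathematics, not part of the course software.

import math

def simulate(phi0, omega0, damping=0.5, g_over_l=9.81, dt=0.01, steps=5000):
    """Semi-implicit Euler integration of a damped pendulum."""
    phi, omega = phi0, omega0
    for _ in range(steps):
        omega += dt * (-damping * omega - g_over_l * math.sin(phi))
        phi += dt * omega
    return phi, omega

# regardless of the initial angle, the state converges to the attractor (0, 0)
print(simulate(phi0=2.5, omega0=0.0))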

3 Practical Exploration of Sensorimotor Behavior

To give students a feel for how these theoretical topics can be applied to the real
world, they are given the opportunity to experiment with a number of robots.
To do this, they are given a small box, as shown in Fig. 2, which contains a
Dynamixel XH430 motor, as well as a battery and connectors. This allows them
to connect the motor to a smartphone, as shown on the left side of the figure.
First, the teaching content about the motor is described. Then it will be discussed

Fig. 2. The box, shown on the right top, includes the motor XH430, a battery and
some cables and connectors. Students can connect this system to their smartphones
as shown in the left part of the figure. At the bottom right, screenshots of the
Android apps that are used to program the motor are shown.

how the robot is programmed and which tools are provided for this purpose. The
section concludes with two examples of robots that are discussed in the lecture.

3.1 The Dynamixel Actuator for Simple Robots

As an example of a robotic actuator, the XH430-W210-R from Dynamixel is dis-


cussed in the lecture. This high performance servo is equipped with an STM32
microcontroller and a magnetic rotary encoder. Using the electronic inner work-
ings of this motor, H-bridges, shunt resistance and PWM are explained. This
builds on the knowledge students gained in the first semester electronics lectures.
Subsequently, students are given the opportunity to conduct experiments with
the motor using Android apps. For this purpose, a custom firmware was devel-
oped for the actuator. It is based on a specially developed Forth ([Brodie 2004])
dialect, called Behavior Design Environment (BDE). This custom programming
language will also be used in later semesters to program larger humanoid robots
(there is an extended version of BDE for this purpose). In this language, senso-
rimotor loops can be programmed with a few simple commands and thus behav-
iors can be brought to the motor (and later robots). The firmware is flashed on
the actuator using an android app, shown on the right bottom of Fig. 2. The
firmware flashing software is shown in green on the left side. In the middle is
an app, where motor values can be plotted. In this case, the logistic function is
plotted using the motor angle as parameter. So by interacting with the motor,
the curve is plotted. This is a first simple example of a dynamical system. On
the right side, the corresponding code is shown, by using an android terminal
to program the motor. There, the individual BDE programs can be flashed on
the motor. Simple sensorimotor loops are developed and programmed in the lec-
tures with the students. The behavior can then be tested in interaction with the
motor. An example is the Cognitive Sensory Motor Loop (CSL, see [Hild 2013]),

Fig. 3. The cognitive sensorimotor loop [Hild 2013] as a flow graph. At the bottom right,
its implementation in the programming language BDE is shown. The z⁻¹ block is a
delay of one time step. The coefficients are g_i = 22.5 and g_f = 0.3.

whose flow chart can be seen in Fig. 3. This sensorimotor loop has different modes;
for example, the contraction mode causes the motor to move against external
forces.
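The listing below is a rough Python paraphrase of how we read the flow graph in Fig. 3: the measured joint velocity scaled by g_i is added to the previous output delayed by one time step (the z⁻¹ block) and scaled by g_f, and the sum becomes the new motor command. The actual lecture implementation is the BDE program shown in the figure; the wiring, signs, and units here are our own simplification.

G_I = 22.5   # input gain from Fig. 3
G_F = 0.3    # feedback gain from Fig. 3

def csl_step(joint_velocity, u_prev):
    """One time step of the loop; returns the new motor command."""
    return G_I * joint_velocity + G_F * u_prev

# toy rollout with a small constant external disturbance, no motor attached
u = 0.0
for _ in range(20):
    u = csl_step(joint_velocity=0.01, u_prev=u)
print(u)   # settles near G_I * 0.01 / (1 - G_F)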

3.2 An Example Robot

With a few additional parts that the students can produce themselves, the first
simple robot can be built. One example is shown in Fig. 4 on the left. It consists
of the XH430 motor described above, the battery from the box, and some parts
laser-cut from acrylic glass. As can be seen in the figure, it is designed so
that the axis of the motor always remains parallel to the ground once it has
been oriented that way. This allows the state of the robot to be described by
four variables. The first two are the angle of the robot to the ground and its
derivative. The next two are the current angular position of the motor and its
derivative. The states where the robot does not move are the fixed points of
the system. The robot postures shown in the right part of Fig. 4 are unstable fixed
points, where slight perturbations cause the robot to leave the posture. The CSL
in contraction mode swaps stable and unstable fixed points, so that the postures
in the figure become stable. The corresponding nodes are connected by arrows
when the robot can move from one posture to another. Green arrows mean
counterclockwise rotation of the motor, red means clockwise rotation. These
transitions are only possible in one direction. Only between postures that are
connected with a black arrow (pointing in both directions), the robot can switch
between postures by itself.
This simple example already shows a rich behavior that can be investigated
with the learned methods. To find out, in which posture the robot is, students
usually suggest to equip the robot with a sensor (e.g. IMU) to measure the angle
between robot and table. But that is not necessary. By choosing a suitable move-
ment, the sensorimotor behavior provides enough relevant information. For exam-
ple, the motor voltage can be used to find out in which posture the robot is. To see
this, experiments are carried out in which joint angle and motor voltage are mea-
sured and the robot’s postures are recorded. At the respective posture, one can
then see the influence of gravity on the motor voltage, as can be seen in Fig. 5.

Fig. 4. One design of a simple robot, which can be built with the described motor.
The right side shows stable fixed points of the system containing the robot, the CSL, and
the environment, as well as the possible transitions between them (which the robot can
do by itself). The green arrows represent counterclockwise rotation of the motor, red
means clockwise. Black arrows represent both directions.

Fig. 5. Recording of joint angle and motor voltage in one experiment (see text). The
horizontal axis is the joint angle ϕ, where 360° are divided into 2^12 steps. The vertical axis
represents the motor voltage U, where −60 corresponds to 0.7 V.

In the displayed series of measurements, the robot’s motor was kept at a constant
rotational speed. During this process, the motor voltage was measured (blue line
in the figure). It can be seen that the robot’s postures with the same angular posi-
tions require different amounts of motor voltage to maintain the speed, depending
on whether the robot is standing upright (posture a), or lying on the ground (pos-
ture h). Moving from posture a to the right, you can see the point where the robot
falls over (ϕ ≈ 70), which is the sharp drop in motor voltage. If the running direc-
tion of the motor is reversed here, the robot can no longer come to posture a by
itself, hence the motor voltage follows another curve (the ripples in the curve stem
from the stick-slip effect between robot and table). Without complex sensor tech-
nology, the robot can therefore already learn a lot about its environment and its

own body. The students now have all the freedom they need to make custom bod-
ies for their robots and to repeat the experiments and gain their own insights. Of
course, feeling and understanding the sensorimotor behaviors is also part of the
exam for this course. For example, a randomly chosen behavior is set on the robot
before the examinee enters the room. Now, as a first task, the behavior (and the
sensorimotor loop behind it) should be identified through interaction.
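In the same spirit, the raw 12-bit joint position can be converted to degrees and a measured motor voltage compared against previously recorded reference readings (one per posture) to decide which posture the robot is in. The Python sketch below is illustrative only; the reference values are invented placeholders that would in practice be read off a recording like the one in Fig. 5.

STEPS_PER_REV = 4096                      # 360 degrees = 2**12 encoder steps

def raw_to_degrees(raw):
    return (raw % STEPS_PER_REV) * 360.0 / STEPS_PER_REV

def classify_posture(voltage, ref_upright, ref_lying):
    """Pick the posture whose recorded voltage is closer to the measurement."""
    if abs(voltage - ref_upright) <= abs(voltage - ref_lying):
        return "upright"
    return "lying"

print(raw_to_degrees(1024))                                   # 90.0
print(classify_posture(-48, ref_upright=-50, ref_lying=-20))  # 'upright'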

4 Conclusion
Insights on sensorimotor couplings from research on living beings are often useful
in robotics. In order to prepare future roboticists, sensorimotorics is being incor-
porated into various curricula of programs like humanoid robotics. To under-
stand this complex topic in depth, it is useful to interact with the robot and
experience the behaviors haptically. In this work, it was presented how in-depth
hands-on experiences can be built into a sensorimotor curriculum. For this pur-
pose, the necessary mathematical basics (dynamical systems, neural networks)
were deepened with the help of experiments on simple robotic systems. Each
student gets a motor with accessories, with which she can try out the learned
contents in practice. Android apps were developed to be able to program the
motor. In addition, a simple programming language was flashed on the motor so
that different sensorimotor loops can be tried out. Using a simple robotic body,
interactions of the robot with the environment, but also with the students, were
explored, so that a deep understanding of the content emerges.

References
Braitenberg, V.: Vehicles: Experiments in Synthetic Psychology. MIT Press (1986)
Brodie, L.: Thinking Forth. Punchy Publisher (2004)
Candadai, M., Setzler, M., Izquierdo, E.J., Froese, T.: Embodied dyadic interaction
increases complexity of neural dynamics: a minimal agent-based simulation model.
Front. Psychol. 10, 540 (2019)
Haykin, S.S., et al.: Neural networks and learning machines/Simon Haykin. Prentice
Hall, New York (2009)
Hild, M.: Neurodynamische module zur bewegungssteuerung autonomer mobiler
roboter (2008)
Hild, M.: Defying gravity - a minimal cognitive sensorimotor loop which makes robots
with arbitrary morphologies stand up. In: Proceedings 11th International Confer-
ence on Accomplishments in Electrical and Mechanical Engineering and Information
Technology (DEMI) (2013)
Hülse, M., Wischmann, S., Pasemann, F.: Structure and function of evolved neuro-
controllers for autonomous robots. Connection Sci. 16(4), 249–266 (2004)
Montúfar, G., Ghazi-Zahedi, K., Ay, N.: A theory of cheap control in embodied systems.
PLoS Comput. Biol. 11(9), e1004427 (2015)
Nolfi, S., Floreano, D.: Evolutionary Robotics: The Biology, Intelligence, and Technol-
ogy of Self-Organizing Machines. MIT Press (2000)

Schilling, M., Hoinville, T., Schmitz, J., Cruse, H.: Walknet, a bio-inspired controller
for hexapod walking. Biol. Cybern. 107(4), 397–419 (2013)
Sun, T., Xiong, X., Dai, Z., Manoonpong, P.: Small-sized reconfigurable quadruped
robot with multiple sensory feedback for studying adaptive and versatile behaviors.
Front. Neurorobotics 14, 14 (2020)
Programming Environments
CORP. A Collaborative Online Robotics
Platform

Olga Sans-Cope1,2 , Ethan Danahy1 , Daniel Hannon1 , Chris Rogers1 ,


Jordi Albo-Canals1,3(B) , and Cecilio Angulo2
1
CEEO - Tufts University, Boston, MA 02155, USA
{olga.sans cope,ethan.danahy,dan.hannon,Chris.Rogers}@tufts.edu
2
IDEAI - Universitat Politècnica de Catalunya, 08034 Barcelona, Spain
cecilio.angulo@upc.edu
3
Lighthouse-DIG, Cambridge, MA 02139, USA
jordi.albo@lighthouse-dig.com

Abstract. In this paper we introduce CORP (Collaborative Online
Robotics Platform), a novel integrated web-based educational development
environment intended for learning and teaching. CORP com-
bines the potential of the Google web-based platform with the educa-
tional robot technology of the LEGO Mindstorms EV3 Kit. Designed
for students in primary and high school, CORP allows students from
different locations to collaboratively interact through a shared Google
Slide document with a custom add-on developed to connect to the EV3
robot. In addition to their science, technology, engineering and mathe-
matics (STEM) skills, students also work on their social emotional learn-
ing (SEL) skills. Moreover, teachers can analyse log files generated by
the platform for insights into student learning.

Keywords: Remote educational robotics · Web-based robotics ·


Virtual labs · STEM education · Social emotional learning · Remote
learning · Covid

1 Introduction

As the COVID-19 pandemic increased the usage of distance learning and eLearning
at all levels of education and interrupted many types of hands-on instruction,
we introduced CORP (Collaborative Online Robotics Platform), an
integrated educational web-based development environment intended to
teach robotics.
As the literature shows, technology has been used for more than 20 years
to bring hands-on instruction to online learning, with multiple approaches [1,2];
however, it is relatively new in K-12 instruction [3]. Currently, some universities
have opened remote access to their labs for secondary schools [4,5], initially with
the purpose of optimizing costs, because physical experiments are expensive to
maintain and require dedicated facilities to accommodate the experimental setups.
Additionally, COVID-19 boosted the use

of remote labs by adding the social distancing factor [6]. Remote labs already exist
and are being applied with success, for example [7] and [8]. We focus our proposal
on an alternative architecture based on an existing environment, Google
Cloud, to accelerate the solution’s adoption and scalability, building on the already
existing user base within the K-12 education population [9,10].
Hereafter, a research prototype of the platform developed at Tufts Center for
Engineering Education and Outreach (CEEO) is presented. The system allows
participants to directly interact with remote physical robots over the Internet,
facilitating an immersive experience, also known as a phygital experience, in a
collaborative virtual environment by combining physical robots with remote STEM-
based instruction. Particularly relevant during COVID-19, it also helps connect
students potentially isolated by other factors (geographic, economic, environmental,
or health limitations). While it is challenging to transfer the straightforwardness
of co-located collaboration to remotely located teams [1], this computer-supported
cooperative work (CSCW) system allows real-time coding, sharing, and
reporting through a merging of existing Google Slides functionality with a web-
enabled sidebar add-on for directly connecting to the physical robots (locally
or remotely located).
Through this platform, “roboticist learners” collectively give instructions to
a remote robot in a shared environment. The system combines online activities
with shared experiences in a technological climate where, in addition to their
STEM skills, students also work on their SEL skills to achieve the goals:
self-management (goal setting and following instructions), relationship
building (communication and cooperation), social awareness (turn-taking and
appreciating diversity), self-awareness (confidence and efficacy), and responsible
decision-making (problem-solving).

2 The CORP Platform


The Collaborative Online Robotics Platform (CORP) is built using Google Slides
as the front end for the students, through an add-on developed to communicate with
the LEGO Mindstorms EV3 robotic kit over Wi-Fi. The use of Google Slides
provides a web-based environment running on Google servers that is accessible
without any prior software installation; only a processing device (laptop,
tablet) with an Internet connection, a browser, and a Google account is required.
This makes the platform easy and simple to access and compatible with different
devices such as computers or tablets.
The platform allows several users to conduct activities at the same time
while remotely controlling physical LEGO Mindstorms EV3 robots. This arrangement
of resources allows for two possible scenarios depending on the physical
location of the users and the robot, as shown in Fig. 1 (bottom).
In the first scenario (left), the EV3 robot is physically located with the users,
for instance in a classroom. In the second scenario (right), the EV3 robot and the
users can be remotely located relative to one another. When the platform is
used for online learning, we use Google Meet, an Internet meeting application,

Fig. 1. Left: Scenario 1, students sharing the same space. Right: Scenario 2, students
working in different locations

and a device with camera pointing at the robot. This configuration gives CORP
the ability to connect several users with a robot over the Internet, creating
a workspace where users can collaborate remotely, controlling the inputs and
outputs to and from the robot from anywhere in the world.

3 Hardware Architecture

The general overview of the workspace hardware architecture implemented when


users and the robotic platform are not co-located in the same space is shown in
Fig. 2. From the hardware point of view, four elements are needed: (1) User’s
devices (computers, tablets), (2) a LEGO Mindstorms EV3 robot, (3) an SD-card
Micro SDHC (min. 4 GB, max. 32 GB), (4) a Wi-Fi dongle for the EV3 robot. A
Google Slides document, shared across all participants, provides the coding and
documentation environment with the EV3 add-on. The add-on allows the users
to interact with a Flask server running on the EV3 robot.
The resulting setup has student users programming over Google Slides and
watching the robot move over Google Meet.
The LEGO Mindstorms EV3 robot is built as a vehicle with two motors and
four different sensors: touch, ultrasonic, light, and gyro, which are part of the
LEGO Mindstorms EV3 kit [12]. This is a versatile and extensively used structure
for building different scenarios as an educational tool [13]. The communication
between Google Slides and the EV3 robot is established through Wi-Fi. A USB
Wi-Fi dongle supporting Linux is plugged into the EV3’s USB port to connect
the robot to the Wi-Fi access point [14]. Furthermore, the LEGO Mindstorms
EV3 brick has a microSD/microSDHC card with 8–16 GB of space plugged in,
on which a Debian Linux-based operating system called ev3dev [15] is installed.

Fig. 2. CORP hardware architecture

A web server in Flask (Python) has been developed to answer the requests from
Google Slides.

4 Software Architecture
The overview of the different software elements is shown in Fig. 3. They are
built on Google Slides, Google Meet, and LEGO Mindstorms EV3 software using
several programming languages and applications.

4.1 Google Slides


The user interface of CORP is implemented on Google Slides (part of Google
Apps). Google Slides is a free cloud-based presentation application offered by
Google. It provides a web-based environment accessible without any prior software
installation; only a device with an Internet connection, a browser,
and a Google account is required to use it.
As a working space, Google Slides offers tools that let users customize slide
presentations [16], add multimedia elements, format text and images, enable
animations, and add links to Internet resources or other types of Google applications.
All these capabilities make Google Slides a powerful environment for delivering the
content of the learning sessions as well as hosting the students’ work as a notebook
or as part of their e-portfolio.
On the other hand, Google offers the possibility to extend the Google Apps
capabilities and functionalities by using Google Apps Script, a cloud-based plat-
form that allows developing customized applications with multiple functionali-
ties [17] inside Google Apps. These Apps Script applications are developed in

Fig. 3. CORP software architecture

JavaScript and run on Google servers’ infrastructure. In addition, Apps Script
connects directly with many other G Suite applications or third-party services and
systems [18] through the URL Fetch Service [19]. This service allows scripts to
access other resources or to communicate with other applications on the web by
fetching URLs, issuing HTTP and HTTPS requests, and receiving responses [20].
We leverage these capabilities to implement the platform’s user interface,
where the functionality of the Google Slides document is two-fold: (1) it delivers
the sequences of tasks and learning content and serves as the user’s notebook (where
the users report their learning process: reflections, notes, graphs, etc.), and (2) it
is the user interface to interact with the robotic tool.
This document is shared with all the users in the working group and with
the facilitator, which allows working collaboratively in real time.

Google Slides Add-On. As mentioned above, Google Slides offers the user
interface (UI) to interact with the robotic tool. The UI is an add-on built using
Apps Script. It is presented to the user as a sidebar by using the add-on features
[22]. This add-on is implemented with HTML and JavaScript, which makes
it possible to create and design highly customizable user interface applications
within the menu system (sidebars and dialog boxes) of Google Slides. The

HTML service serves these web pages as custom user interfaces and interacts
with server-side Apps Script functions [21]. In addition, the add-on allows the
user to integrate data from, and communicate with, the robot using the
URL Fetch and HTML services.

4.2 CORP Add-On’s Architecture


CORP has been developed using editor add-ons, one of the two ways
to extend Google applications. Editor add-ons let the developer bind [22] a script to
a Google Slides file to build an interface. In this case, we created an add-on menu
using Apps Script’s base UI service, with different items that provide an initial
starting point in the toolbar of the Google Slides application (see Fig. 4).

Fig. 4. CORP add-on in Google Slides

Our custom (EV3) sidebar (Slides add-on) is built using Apps Script, a tool
that lets one edit Google Slides and connect directly to many Google products
using the built-in Apps Script Slides service [16] to read, update, and manipulate
data within G Suite applications. We also leveraged the Slides REST API directly
using the Apps Script advanced Slides Service.
The add-on uses Apps Script’s HTML service for setting up client-server
communication calls to interact with the robotic EV3 platform. User actions
in the interface are tied to functions in the script that result in actions taken
on the Google servers where the editor file resides, and vice versa. The structure
of the files that implements the add-on is shown in Fig. 5.
The EV3Control.gs file is the core of the add-on. It manages all the user’s
interactions, from opening and closing the add-on to the dialog boxes inside
the sidebar tabs. Interactions where the user asks for data from, or sends
commands to, the robot are also managed by the EV3Control.gs file. In those cases,
it makes requests and receives data through the URL Fetch service. That service
allows data to be retrieved from or uploaded to the robotic platform from the Google
Slides presentation, working with JSON objects. Depending on the information
retrieved, the data is displayed directly on the slides or sidebar tabs, or in a
spreadsheet from which data is taken and displayed on Google Slides using the
Slides service [21].
The communication between the sidebar and dialog boxes (HTML service)
and the EV3Control.gs file (Apps Script application) consists of asynchronous
client-side JavaScript API requests to the server-side Apps Script functions stored in
the EV3Control.gs file.

Fig. 5. CORP’s Add-on structure and communication

4.3 Google Meet Video Communication

Users communicate using Google Meet as the videoconferencing system and to
view the robot and its environment, which are located with the facilitator. This
video call service allows the participants to see the robot and the environment
where it is located, interact with other users by video, voice, and/or chat,
and share their laptop screens. Google Meet comes as a web application in the
user’s Google account, so no additional software needs to be installed on the user’s
devices. Examples of the use of the platform in a real scenario are shown in
Fig. 6.

4.4 LEGO EV3 Robot Software

To interact with the LEGO Mindstorms EV3 robot over Wi-Fi from Google
Slides, we used the open-source firmware ev3dev [15]. ev3dev is a Debian Linux-
based operating system running on LEGO Mindstorms EV3 robots that provides a
low-level driver framework for controlling the EV3 robot’s sensors and motors using
multiple languages such as Python, Java, C, C++, and Go. In this project,
ev3dev runs on a microSD card inserted in the EV3 robot, and a Wi-Fi dongle
connected to the USB port of the LEGO EV3 brick allows Internet control
of the robot.

Fig. 6. Example of a real scenario.

One of the advantages of using ev3dev is that it does not require flashing new
firmware, and it is therefore compatible with many free software packages.
To allow communication between the EV3 robot and Google Slides, the Flask
package is installed on top of ev3dev, and Python is the language selected to
extend the EV3 robot’s communication capabilities with Google Slides.
A Flask server manages the communication to and from Google Slides and the
LEGO EV3 robot. The server processes users’ commands, controls the programmable
robot, retrieves measurement data from the robot, and sends it back
to the client when requested by the user. Once the user makes a request
through the sidebar user interface, it is processed by the core file of the
add-on, EV3Control.gs, which issues a fetchUrl request to a specific address
and sends a JSON data package.
The Flask server listens for these requests, processing and executing
them asynchronously. Some requests are actions executed directly on the robot,
such as moving the motors or running a Python file. Others require data to be
sent back to Google Slides, such as sensor readings. In the latter case, the data
is also sent back as a JSON package.
Python was selected as a high-level, general-purpose, interpreted scripting language
because of its versatility for both beginners and experienced coders. Programs
written in Python can be sent from Google Slides to the robot to execute
sophisticated actions. Finally, the overall CORP software architecture is shown in Fig. 7.
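The listing below sketches what such a Flask endpoint pair could look like; it is not the actual CORP server. The route names and JSON fields are our own placeholders, and the motor and sensor calls follow the python-ev3dev bindings (exact class and method names depend on the library version installed on the brick).

from flask import Flask, request, jsonify
import ev3dev.ev3 as ev3

app = Flask(__name__)
left, right = ev3.LargeMotor('outB'), ev3.LargeMotor('outC')
sonar = ev3.UltrasonicSensor()

@app.route('/motors', methods=['POST'])
def motors():
    # expects e.g. {"left": 300, "right": 300} (speeds in tacho counts per second)
    cmd = request.get_json(force=True)
    left.run_forever(speed_sp=cmd.get('left', 0))
    right.run_forever(speed_sp=cmd.get('right', 0))
    return jsonify(status='ok')

@app.route('/sensors', methods=['GET'])
def sensors():
    # sensor data returned as JSON, mirroring how the add-on receives it
    return jsonify(distance_cm=sonar.distance_centimeters)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)   # reachable over the Wi-Fi dongle

On the Apps Script side, UrlFetchApp would POST the same JSON to routes like these; during debugging the endpoints can also be exercised from a laptop with any HTTP client.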

5 CORP User Interface


The add-on presents the user with numerous choices for controlling the robot,
from moving the motors and reading sensors to creating increasingly complex
programs in Python. Each tab relates to different actions to be configured by
the user. Some actions are one-way, from the Google Slides sidebar to
the robot, like moving motors or programming sequences, behaviors, or programs. Others,

Fig. 7. Overview of the overall CORP software architecture

Fig. 8. CORP’s Sidebar options and tabs’ configuration. The red dash box in sensor’s
tab is an example of data from the robot shown in the sidebar

like reading sensors or plotting graphs, receive data from the robot and visualize it
either in the same block of the sidebar (see Fig. 8 for the reading-sensors tab) or on
a slide.
If the user wants to see the data on a slide, a specific field should be filled in
with the number of the slide where the user wants to show the data. For example, in the

Fig. 9. CORP’s Sidebar options and tabs’ configuration. The red dash box in sensor’s
tab is an example of data from the robot shown in the sidebar

case of running a datalog, data collected by the robot is sent to Google Slides.
Next, the EV3Control.gs file registers the (time, sensor value) tuples in
a spreadsheet and generates a line graph from the data pairs. Then, in the graphs
tab, the user can add the graph to a slide by selecting the desired slide number.
Figure 9 shows an example.
Because the sidebar is a client-side HTML file, it is different for each user.
The actions and the data received and shown in the sidebar are attached to the
user who made the action. Users can share their programs and data obtained
from the robot with the group and work collaboratively on the same section of
code or program sequence using the “Share” option field in the sidebar’s tabs and
dialog boxes.

6 Conclusions and Future Work

In this paper we introduced CORP, an integrated web-based educational
development environment intended for learning and teaching that combines
interactive robotics, real-world robot control, and collaborative problem-solving.
Due to lack of space, some capabilities of the platform are not discussed,
even though they are also part of its objective. An overview video of CORP’s
functionalities is available at [23].
CORP is a tool enabling teachers to integrate robotics in the classroom and
measure the progress of students as they work. Teachers can also integrate
assessments and create activities adapted to each student. In the future, one could
imagine EV3s talking to each other (over Wi-Fi), giving CORP the ability to
control robot swarms and other robotic kits.
Beyond the innovation of extending Google Slides for robot control,
the platform also enables educational services (currently challenged by social
distancing restrictions) to reach remote users. For users in remote locations
without access to costly, advanced, or exclusive robotic systems, CORP provides
a connection not only to the advanced technology but also to mentors and
educational experts. Building on the Google Cloud platform and the G Suite of
services ensures accessibility, scalability, and integration with existing tools already
familiar to users.

Currently, with COVID-19 restrictions driving drastic changes to the educational
landscape, there is a great need, on the one hand, for running high-quality
online STEM sessions that help remote users develop their technical skills in
tandem with social emotional learning skills and, on the other hand, for facilitating
flexible layouts that allow in-person, remote, online, and hybrid learning and
teaching modalities that are constantly changing.
An experimental study was carried out during summer 2020 with nine par-
ticipants from 10 to 17 years old, showing promising results about the feasibility
of the platform and pointing to improvements to enhance usability and user experience.
Participants attended one session per week for four weeks from their own locations.
In groups of 2 to 3, participants interacted remotely with a LEGO EV3 robot,
doing activities ranging from basic interactions with the robot to more complex collabo-
rative activities and challenges. Participants achieved the different tasks by taking
turns, making decisions collaboratively, listening to each other's ideas and helping
each other. In some sessions participants highlighted that collaboration with their
peers helped them to achieve the goals. From a technical point of view, a critical
point in this kind of system is user registration and authentication; however,
in the study the majority of students had no problems accessing the plat-
form, and those who did had them only in the first session because
of the test's design. These preliminary results point to a promising future for online
learning while interacting collaboratively with robots remotely.

Acknowledgements. This work was partially funded by a grant from LEGO Edu-
cation. The authors acknowledge Emma Stevens, Joshua Balbi and Dipeshwor Man
Shrestha for their useful and helpful collaboration on this work. The research results
presented in this paper fall under the Tufts Institutional Review Board Submission
ID: MOD-01-1911005. Date Approved: 06/11/2020.

References
1. Gröber, S., Vetter, M., Eckert, B., Jodl, H.J.: Experimenting from a distance-
remotely controlled laboratory (RCL). Eur. J. Phys. 28(3), S127 (2007)
2. Matarrita, C. A., Concari, S.B.: Remote laboratories used in physics teaching: a
state of the art. In: 2016 13th International Conference on Remote Engineering
and Virtual Instrumentation (REV), pp. 385–390. IEEE (2016)
3. Gravier, C., Fayolle, J., Bayard, B., Ates, M., Lardon, J.: State of the art about
remote laboratories paradigms-foundations of ongoing mutations. Int. J. Online
Eng. 4(1) (2008)
4. Maiti, A., Maxwell, A.D., Kist, A.A., Orwin, L.: Merging remote laboratories and
enquiry-based learning for STEM education. Int. J. Online Eng. 10(6), 50–57
(2014)
5. Lowe, D., Newcombe, P., Stumpers, B.: Evaluation of the use of remote laboratories
for secondary school science education. Res. Sci. Educ. 43(3), 1197–1219 (2013)
6. West, R.E.: Ideas for supporting student-centered stem learning through remote
labs: a response. Educ. Technol. Res. Dev. 1–6 (2020)
7. de Jong, T., Sotiriou, S., Gillet, D.: Innovations in STEM education: the Go-Lab
federation of online labs. Smart Learn. Environ. 1(1), 1–16 (2014). https://doi.
org/10.1186/s40561-014-0003-6

8. da Silva, J.B., Rochadel, W., Simão, J.P.S., da Silva Fidalgo, A.V.: Adaptation
model of mobile remote experimentation for elementary schools. IEEE Revista
Iberoamericana de Tecnologias del Aprendizaje 9(1), 28–32 (2014)
9. Ok, M.W., Rao, K.: Digital tools for the inclusive classroom: google chrome as assis-
tive and instructional technology. J. Spec. Educ. Technol. 34(3), 204–211 (2019)
10. Khazanchi, R.: Current scenario of usage, challenges, and accessibility of emerging
technological pedagogy in k-12 classrooms. In: Society for Information Technology
& Teacher Education International Conference, pp. 1767–1769. Association for
the Advancement of Computing in Education (AACE) (2020)
11. Otto, O., Roberts, D., Wolff, R.: A review on effective closely-coupled collaboration
using immersive CVE’s. In: Proceedings of the 2006 ACM International Conference
on Virtual Reality Continuum and its Applications, pp. 145–154 (2006)
12. LEGO Education: User Guide LEGO Mindstorms EV3. https://education.lego.
com/en-gb/product-resources/mindstorms-ev3/downloads/user-guide
13. Souza, I.M., Andrade, W.L., Sampaio, L.M., Araujo, A.L.: A systematic review
on the use of LEGO® robotics in education. In: 2018 IEEE Frontiers in Education
Conference (FIE), pp. 1–9. IEEE Press, New York (2018). https://doi.org/10.1109/
FIE.2018.8658751
14. CanaKit: Raspberry Pi WiFi Adapter. https://www.canakit.com/raspberry-pi-
wifi.html
15. ev3dev: ev3dev software. https://www.ev3dev.org/
16. Google: Slides Services. https://developers.google.com/apps-script/reference/
slides
17. Google: Apps Script. https://developers.google.com/apps-script/overview
18. Google: Built-in Google Services. https://developers.google.com/apps-script/
guides/services
19. Google: URL Fetch Service. https://developers.google.com/apps-script/reference/
url-fetch
20. Google: Class UrlFetchApp. https://developers.google.com/apps-script/reference/
url-fetch/url-fetch-app
21. Google: HTML Service: Create and Serve HTML. https://developers.google.com/
apps-script/guides/html
22. Google: Extending Google Slides with Add-ons. https://developers.google.com/
gsuite/add-ons/editors/slides
23. https://drive.google.com/file/d/1s6qXG1YQnE-09eg5E9OUwKEAGdfKF7AV/
A ROS-based Open Web Platform
for Intelligent Robotics Education

David Roldán-Álvarez1(B) , Sakshay Mahna2 , and José M. Cañas1


1 Escuela Técnica Superior de Ingeniería de Telecomunicación,
Universidad Rey Juan Carlos, Móstoles, Spain
{david.roldan,josemaria.plaza}@urjc.es
2 Indian Institute of Technology, Ropar, India
2018csb1119@iitrpr.ac.in

Abstract. This paper presents the new release of the Robotics Academy
learning framework and the open course on Intelligent Robotics available
on it. The framework hosts a collection of practical exercises of robot
programming for engineering students and teachers at universities. It
has evolved from an open tool, which the users had to install on their
machines, to an open web platform, simplifying the user’s experience.
It has been redesigned with the adoption of state-of-the-art web tech-
nologies and DevOps that make it cross-platform and scalable. The web
browser is the new frontend for the users, both for source code editing
and for the monitoring GUI of the exercise execution. All the software
dependences are already preinstalled in a container, including the Gazebo
simulator. The proposed web platform provides a free and nice way for
teaching Robotics following the learn-by-doing approach. It is also a use-
ful complement for remote educational robotics.

Keywords: Web-based robotics · Simulation · Remote educational


robotics

1 Introduction
In the last decade robotics has experienced a large growth in the number of
applications available in the market. We usually think of robots as the robotic
arms used in the industrial sector and automated assembly processes. However,
today robots have appeared in many areas of daily life such as food process-
ing or warehouse logistics [1]. Moreover, robots have recently been used in
homes to carry out real-life tasks for people, such as vacuum cleaning. This
shows how robots can successfully address real-life tasks. Robots are progressively
being adopted in many other areas, with technologies such as automatic parking
and driver-assistance systems, not to mention aerial robots, whose popularity
has risen in recent years.
Robotics in Education tries to strengthen the learning skills of future engi-
neers and scientists by using robot-based projects. Both in schools and colleges,
presenting robots in the classroom gives students a more interesting vision of
science and engineering, and they will be able to observe directly the practical
application of theoretical concepts in the fields of mathematics and technology.
From the university's perspective, robot-based projects help to enhance key
abilities from an early stage, for instance those related to mechatronics [2].
The fast technological changes and the subsequent shifts in education
require the development and use of educational methodologies and opportunities
with a moderate economic effort. Many institutions answer this challenge by
creating distance education programs. Web-based robotics distance education is
a growing field of engineering education. Web-based applications allow students
to code robotic projects without having to run special software on their computers
or have the robot physically near them [3,4].
When learning online, the robotic projects take place using a robotic frame-
work and a teaching tool where the robot behavior is programmed using a certain
language [4]. The most common way to interact with robots remotely is through
virtual and remote laboratories [5]. According to Esposito’s work [6], MATLAB
(62%) and C (52%) are the dominant choices, with a minority using free open-
source packages such as the Robot Operating System (ROS) (28%) or OpenCV
(17%). We did not find more recent studies with this kind of infor-
mation in the literature, but in García et al.'s work [7], 16 professionals were interviewed about
the frameworks they use when programming robots. Their results showed that
most of them (88%) used ROS. ROS stands out for having a large and enthu-
siastic community. ROS enables all users to leverage an ever-increasing number
of algorithms and simulation packages, as well as providing a common ground
for testing and virtual commissioning [8]. One of the most used simulators for
ROS is Gazebo [9]. It provides 3D physics simulation for a large variety of sen-
sors and robots, and it can be customized for the user's needs. Gazebo is bundled
into the full installation package of ROS, making it widely and easily available.
Many robot manufacturers offer ROS packages specifically designed for Gazebo
support. Therefore, ROS and Gazebo seem suitable tools to be used in web
environments to offer robotics education at no cost.
In this paper we present the latest release of the Robotics Academy learning
framework. In its exercises, the student is expected to develop the intelligence of
robots for several robotics applications. Robotics Academy is open source
and uses Python, ROS, Gazebo and Docker to supply an easy method to
create robotics or computer vision applications.

2 Robotics Engineering Education


In educational robotics at universities there are two areas that are usually
taught: industrial robots and mobile robots [10]. The main topics of knowl-
edge are manipulation control techniques, inverse kinematics, trajectory calcu-
lations, local and global navigation, position control, perception, mapping and
self-location.
Today, the progressive digitalization of the world has transformed how
knowledge is created and consumed. In this sense, online learning tools are gain-
ing popularity exponentially, since they allow users to access content wherever
and whenever as long as they have an Internet connection. Some leading uni-
versities already provide online robotic courses such as Artificial Intelligence
for Robotics from Stanford University, Autonomous Mobile Robots from ETH
Zurich or the Art of Grasping and Manipulation in Robotics from the Univer-
sity of Siena. They see the potential of online learning in robotics engineering.
Such courses can also be found in the market. One of the most well-
known proprietary platforms that provide robotic courses is TheConstruct [13].
The goal of this online tool is to provide an environment to learn ROS-based
advanced robotics online. It provides courses at different levels using different
programming languages (C++ and Python).
In addition, many authors and developers have joined forces to create online
learning tools for robotics. We may find in the literature examples of robotics
online learning tools such as RobUALab [11]. This platform has been pro-
grammed in Java and, thanks to web technologies, it allows students to teleoperate,
experiment with and program a robotic arm. With it, students can test different
configurations, evaluate torques and both Cartesian and articular trajectories, and
program the arm with a list of commands.
Another example is the work of Peidró et al. [12]. They presented a virtual
and remote laboratory of parallel robots consisting of a simulation tool and a real
robot. The virtual tool helps the students to simulate the inverse and forward
kinematic problems of three parallel robots in an intuitive and graphical way.
In the remote lab, the students can experiment with a real parallel robot, which
lets them explore phenomena that arise in the motion of the real robot and do
not appear in the simulation.
Fabregas et al. [5] presented the Robots Formation Control Platform, a web
based tool for simulation and real experimentation with mobile robots with ped-
agogical purposes. The platform provides an interactive simulator to develop
advanced formation-control experiments with multi-robot systems, an exper-
imental environment to develop laboratory sessions with real mobile robots and
a low-cost robotic platform. It also allows the students to conduct experiments
with mobile robots in a dynamic environment by changing the configuration
without having to reprogram the simulation itself.
One of the main issues regarding robotics platforms is that they do not follow
a standard in the robotic infrastructure used and many times they do not use
open hardware or software [14]. It was not until recently that ROS was taken
into account in the development of educational robotics platforms. One
interesting educational robotics platform is EUROPA, an extensible one, based
on open hardware and software (ROS) which includes friendly user interfaces
for control and data acquisition and a simulation environment [15]. By using
ROS, they provide the opportunity for both teachers and students to work with
advanced simulation and visualization tools. EUROPA allows easy programma-
bility based on Python. In spite of all the advantages of the EUROPA platform,
its functionality is limited to only one robot and the computer where the software
is installed.

3 Robotics Academy: From Open Tool to Open Web Platform

This educational robotics framework hosts a collection of practical exercises of
robot programming for engineering students and teachers at universities. It fol-
lows the learn-by-doing paradigm and puts the focus on the programming of the
main robotics algorithms: perception, navigation, control, localization, map-
ping, etc. It supports Python as the robot programming language and it is strongly
based on the Gazebo simulator. In addition, it is open source; its source code is
publicly available on GitHub1.
Each exercise is a robot application, which is designed as the combination
of the student’s code and a template with auxiliary code. The template includes
many functionalities that are required to execute that robot application but
are not the focus of the learning target. There is one specific template for each
exercise.
The first major release was based on ICE middleware [1]. The second one was
completely migrated to ROS middleware and two robotics courses were devel-
oped for it and provided [10,16]. This second release included a ROS-node as the
template for each exercise. The student was required to include his/her code in a
separate Python file and to install all the software dependences needed for each
exercise. The framework included installation recipes for all those dependencies,
which are typically third party software such as ROS middleware for sensor
and actuator drivers, PX4 for drones, OpenCV for image processing, MoveIt for
industrial robotics, etc.
The third major release [17] has been significantly refactored following the
design in Fig. 1 and delivered as a web platform. It is available and ready to
use on the web2, which makes it a good choice for distance engineering learning.
It persistently stores the students' code for the exercises, so they may access it
and run it anytime, anywhere. Its main improvements are described in the
following subsections.

Robotics-Academy main features:
– Cross-platform: it works on Linux, Windows and MacOS
– Web-based interface: the browser is used for code editing and the GUI
– Very simple installation: all dependencies are preinstalled in a Docker container
– Educative contents: a wide collection of exercises is already available

3.1 Robotics Academy Docker Image

The robotics software is inherently complex, includes many pieces and so it usu-
ally has many dependencies. For instance, the software access to robot sensors

1 https://github.com/JdeRobot/RoboticsAcademy.
2 https://unibotics.org.

Fig. 1. New design of the Robotics Academy learning framework: the web browser (student's code editor and execution monitoring GUI) and the Docker container (Python template, student's code in execution, GzWeb server, and Gazebo simulator with its sensor and actuator drivers as plugins) run on the user machine (Linux, MS-Windows, MacOS), connected through https and websockets to the webserver on the server machine.

and actuators is provided by drivers. In some cases the focus for the Robotics
students is on the drivers themselves, but most times the focus is on the algo-
rithms and their implementation. Then the students just need to install and use
the drivers, without further digging into them. They are included in common
robotics frameworks such as ROS and Gazebo simulator.
In order to run RoboticsAcademy exercises, many software pieces are required,
and so the student previously had to install all of them. The installation recipes delivered in
the second framework release were not enough to significantly simplify the expe-
rience for the users. They still had to manually install all the needed packages
on their local operating system, and this was an entry barrier. In the presented
release a Docker container is provided3 which includes all the dependencies
of exercises already preinstalled inside. It is called RADI (Robotics Academy
Docker Image) and it may be easily installed and run in Windows, Linux or
MacOS, as all of them support this de-facto DevOps standard. This approach
follows some of the ideas in ROSLab [18], which uses ROS, Jupyter Notebooks
and Docker containers for providing reproducible research environments.
3 https://hub.docker.com/r/jderobot/robotics-academy/tags.

The burden of installing all the dependencies is removed from users and
moved to the RoboticsAcademy developers who build that container. This way
the learning curve is greatly reduced for final users, who skip most of the details.
They are only required to (1) install a Docker image, which is a simple and
standard step, (2) start its execution and (3) open a web page. Then the
browser automatically connects to the robot simulator and to the empty robot
brain inside the container. The Docker container and the browser are cross-
platform technologies, and so is the presented framework, expanding the number
of its potential users.
The robot simulator and the student’s code both run inside that container,
in the user’s machine. This design allows the framework to scale up to many
users, as the main computing costs are already distributed. The simulator runs
headless inside the container, regardless of the operating system of the host com-
puter. The Gazebo simulated world is displayed in the browser through an http
connection with the “GzWeb server”, which also runs inside the container. The
browser and the container are connected through several websockets and that
http connection.

3.2 New Web-Based Exercise Templates

The new exercise templates have two parts, as shown in Fig. 1: a Python
program inside the RADI (exercise.py) and web page inside the browser
(exercise.html). They are connected to each other through two websockets.
The browser holds the code editor and sends the user’s source code for an exer-
cise to the empty robot brain, the exercise.py, where it will be run. The
exercise.py sends debugging and GUI information to the browser, where it
will be displayed.
The student just programs the robot’s brain in the browser. From the user’s
point of view the template provides two programming interfaces: (a) HAL-API
for reading the robot sensor measurements and for commanding orders to the
robot actuators; and (b) GUI-API which includes several useful methods for
debugging, for showing print messages in the browser or displaying the processed
images.
The Python program exercise.py is connected to the Gazebo simulator for
getting the sensor readings and setting the motor commands. In addition, it runs
the source code of the robot’s brain received from the browser, which typically
executes perception and control iterations in an infinite loop.
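As an illustration, a minimal student program for such a template could be structured as follows. This is only a sketch: the HAL and GUI method names used here (getImage, setV, setW, showImage) are assumptions for illustration, and each exercise documents its own API.

```python
# Minimal sketch of a student's "robot brain" as run by exercise.py.
# HAL (robot access) and GUI (debugging) objects are provided by the exercise
# template; the method names below are illustrative assumptions.

while True:                      # perception-control iterations in an infinite loop
    image = HAL.getImage()       # HAL-API: read the onboard camera
    # ... the student's perception and decision code goes here ...
    HAL.setV(1.0)                # HAL-API: command the forward speed
    HAL.setW(0.0)                # HAL-API: command the angular speed
    GUI.showImage(image)         # GUI-API: display a debug image in the browser
```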
The browser is the only Graphical User Interface (GUI). The webpage of each
exercise, exercise.html, is divided into two vertical columns, as shown in Fig. 2.
The left column is for an inline text editor and the buttons for executing the
code, stopping it, etc. The right column typically includes a view of the simulated
world to see the generated robot behavior. It also includes a debugging console for
text messages and several exercise-specific illustrative widgets to show debugging
information such as the processed images. The widgets use state-of-the-art web
technologies from HTML5 such as the 2D canvas, 3D graphics, websockets, etc.

Fig. 2. Visual follow line exercise.

3.3 Webserver

The core of the web platform is a Django-based webserver. It shows the differ-
ent exercises available to the users so they may choose which one to try to solve.
The currently available exercises are shown in Fig. 3. Once the user chooses an
exercise, the webserver shows the template corresponding to that exercise, and its
simulation is executed in the Docker container that the user has instantiated
on his/her computer, as shown in Fig. 1.
The webserver also supports a second option: the Docker containers are
instantiated and run on a backend computer farm, and so the user is not required
to install anything in his/her computer. The webserver distributes the user’s
simulation to one of the available computers in the farm. The IPs of the farm

Fig. 3. Available exercises in robotics academy.



computers remain hidden from the user, since all of the required communications
(sending code, receiving images, sensor information, etc.) occur through the web-
server itself.
In order to manage the computer farm and the running Docker containers,
the webserver provides an administration panel, as shown in Fig. 4. The panel
offers all the functionalities needed to monitor executions, add a new computer,
delete a container, restart a container and run a new container.

Fig. 4. Admin panel with the computers in the farm with a running RADI.
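As an illustration of how such containers could be managed programmatically, the sketch below uses the Docker SDK for Python; the image tag, naming scheme and port handling are assumptions for illustration and do not reflect the actual webserver code.

```python
# Hedged sketch: managing RADI containers on a farm computer with the Docker
# SDK for Python (docker-py). Image tag, names and ports are assumptions.
import docker

client = docker.from_env()

def start_radi(user_id: str):
    """Run a new RADI container for one user session."""
    return client.containers.run(
        "jderobot/robotics-academy:latest",   # assumed image tag
        name=f"radi-{user_id}",
        detach=True,
        ports={"8080/tcp": None},             # let Docker pick a free host port
    )

def restart_radi(name: str):
    client.containers.get(name).restart()

def remove_radi(name: str):
    client.containers.get(name).remove(force=True)

def list_radis():
    return [c.name for c in client.containers.list() if c.name.startswith("radi-")]
```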

Through the webserver, the user is able to save and load the code written
for each exercise. When the code is saved, the webserver stores it in an Amazon
S3 service. Amazon S3 is an object storage service that offers scalability, data
availability, security and performance. Every time the user loads an exercise, the
last version of the code is retrieved from Amazon S3.
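For illustration, saving and loading a student's code with the boto3 S3 client could look like the sketch below; the bucket name and key layout are hypothetical and do not reflect the platform's actual implementation.

```python
# Hedged sketch of persisting a student's code in Amazon S3 with boto3.
# The bucket name and key scheme are hypothetical assumptions.
import boto3

s3 = boto3.client("s3")
BUCKET = "robotics-academy-user-code"   # hypothetical bucket name

def save_code(user_id: str, exercise: str, source: str) -> None:
    """Store the latest version of the user's code for one exercise."""
    key = f"{user_id}/{exercise}/exercise.py"
    s3.put_object(Bucket=BUCKET, Key=key, Body=source.encode("utf-8"))

def load_code(user_id: str, exercise: str) -> str:
    """Retrieve the last saved version of the user's code."""
    key = f"{user_id}/{exercise}/exercise.py"
    obj = s3.get_object(Bucket=BUCKET, Key=key)
    return obj["Body"].read().decode("utf-8")
```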

4 Intelligent Robotics Course


Robotics-Academy is currently in use in two university courses on Intelligent
Robotics at Universidad Rey Juan Carlos. Both of them follow a ‘learn by doing’
paradigm and focus on robotics algorithms. Solutions to the exercises involve the
use of PID Controllers, Local and Global Navigation, random as well as Optimal
Coverage algorithms. In each exercise, the robot to be used, the environment and
the task to be carried out are detailed. Reference solutions have been developed
and published as illustrative videos, so the students may know in advance the
expected robot behavior and results for each exercise. Some theory related to
the exercise task, as well as programming hints, is also available to the students.

4.1 Visual Follow Line Exercise


The challenge of this exercise is to program an autonomous F1 Car to complete
a lap of the circuit in the shortest possible time by following a red colored line
drawn throughout the circuit. The car is equipped with an onboard front camera.
Regarding movement, an intermediate command interface has been provided:
forward speed V and angular speed W. Figure 2 shows the Gazebo environment
and the F1 car used.
This exercise is designed to teach basic reactive control, including PID con-
trollers as well as introducing students to basic image processing, for example
color filters, morphological filters or segmentation of the red line from the track.
The solution involves segmenting the red line and making the car follow it using
a PID based control. The GUI of this exercise includes a bird’s eye view widget
to know the current position of the car inside the track, as shown in Fig. 2.
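A minimal sketch of this kind of solution is shown below. It is not the reference solution: the HAL/GUI names, the HSV thresholds for the red line and the controller gain are illustrative assumptions.

```python
# Hedged sketch of the visual follow-line logic: segment the red line with an
# HSV color filter and steer with a proportional controller on the lateral error.
# HAL and GUI are provided by the exercise template; names are assumptions.
import cv2

KP = 0.005          # proportional gain (illustrative value)
V_FORWARD = 3.0     # constant forward speed

while True:
    image = HAL.getImage()                          # onboard front camera
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))   # simple red filter

    m = cv2.moments(mask)
    if m["m00"] > 0:                                # line visible
        cx = m["m10"] / m["m00"]                    # centroid column of the line
        error = image.shape[1] / 2 - cx             # pixels off-center
        HAL.setV(V_FORWARD)
        HAL.setW(KP * error)                        # P control; add I/D terms for PID
    else:
        HAL.setV(0.0)
        HAL.setW(0.3)                               # search for the line by turning

    GUI.showImage(mask)                             # debug: show the segmented line
```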

4.2 Obstacle Avoidance Exercise


In this exercise, the user has to program an Autonomous Car to complete a lap
of the circuit in the minimum possible time while avoiding the obstacles present
in the way. The car has a laser sensor in front of it. The car may be given
commands for translational speed V and angular speed W. The Gazebo scenario
consists of a race track similar to Visual Follow Line exercise, with stationary
obstacles present at different places on the track. Figure 5 shows the GUI, the
robot and simulated scenario.

Fig. 5. Obstacle avoidance exercise.

The solution of this exercise involves the use of Local Navigation Algorithms,
such as Virtual Force Field (VFF). A sequence of waypoints is provided to the
students for a successful completion of the circuit. Navigation using the VFF
algorithm consists of obstacles generating a repulsive force and the current way-
point generating an attractive force. This makes it possible for the robot to go
towards the nearest waypoint, distancing itself from the obstacles and heading
towards the vector sum of the forces. A debugging widget specific to this vector
sum algorithm is provided in the GUI, as shown in Fig. 5.
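The following sketch illustrates the VFF computation in Python. The laser data format, the HAL method names and the gains are assumptions made for illustration, not the exercise's actual API or reference solution.

```python
# Hedged sketch of the Virtual Force Field (VFF) idea: the target waypoint
# attracts the robot and each laser return repels it; the robot is steered along
# the resulting vector. Data formats and constants are illustrative assumptions.
import math

K_ATT = 1.0      # attraction gain
K_REP = 0.4      # repulsion gain

def vff_command(waypoint_xy, laser):
    """Return (v, w) from a waypoint in the robot frame and (range, bearing) pairs."""
    fx, fy = K_ATT * waypoint_xy[0], K_ATT * waypoint_xy[1]   # attractive force
    for dist, angle in laser:                  # each obstacle adds a repulsive force
        if 0.1 < dist < 3.0:
            fx -= K_REP * math.cos(angle) / dist
            fy -= K_REP * math.sin(angle) / dist
    v = max(0.5, min(4.0, math.hypot(fx, fy))) # clamp the forward speed
    w = math.atan2(fy, fx)                     # heading of the resulting vector
    return v, w

# Inside the exercise loop (HAL provided by the template, API names assumed):
#   v, w = vff_command(next_waypoint_robot_frame, HAL.getLaserData())
#   HAL.setV(v); HAL.setW(w)
```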

4.3 Vacuum Cleaner Exercise

The challenge of this exercise is to program a Roomba vacuum cleaner robot to


clean an apartment. The robot is intended to visit the whole area of the apart-
ment, in the shortest possible time. It can softly bump into walls, chairs etc., as
real vacuum cleaner robots do. The robot has motors which can be commanded
in translational speed V and rotational speed W. It includes a laser sensor and 3
bump sensors (left, middle and right). For instance, the getLaserData() method
is used to read the laser sensor measurements. Figure 7 shows the Gazebo envi-
ronment of the exercise.

Fig. 6. Exercise GUI.
Fig. 7. Vacuum cleaner exercise.

Random wandering algorithms may be developed to solve this exercise. For
instance, an initial spiral and a bump-and-go behavior may be proposed as base
solutions. This exercise includes a debugging widget as shown in Fig. 7, which
displays the path followed and the area swept by the robot.
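A base solution of this kind could be sketched as follows; the bumper and motion method names and the timing constants are assumptions for illustration.

```python
# Hedged sketch of the "initial spiral, then bump and go" base solution.
# HAL is provided by the exercise template; getBumperData/setV/setW and the
# timing constants are illustrative assumptions, not the documented API.
import time
import random

state, t_state = "spiral", time.time()
spiral_w, turn_dir, turn_time = 1.0, 1, 0.0

while True:
    if any(HAL.getBumperData()) and state != "backward":   # any bumper pressed?
        state, t_state = "backward", time.time()

    if state == "spiral":
        HAL.setV(0.3)
        HAL.setW(spiral_w)
        spiral_w = max(0.05, spiral_w * 0.999)      # widen the spiral over time
    elif state == "forward":
        HAL.setV(0.4)
        HAL.setW(0.0)
    elif state == "backward":
        HAL.setV(-0.2)
        HAL.setW(0.0)
        if time.time() - t_state > 1.0:
            state, t_state = "turn", time.time()
            turn_dir = random.choice([-1, 1])
            turn_time = random.uniform(0.5, 2.0)
    elif state == "turn":
        HAL.setV(0.0)
        HAL.setW(turn_dir * 1.0)
        if time.time() - t_state > turn_time:
            state = "forward"                       # plain bump and go from now on
```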

4.4 Vacuum Cleaner with Localization Exercise

This exercise is similar to the previous one. The student is provided with the
same Roomba vacuum cleaner robot to clean an apartment. However, the robot
is equipped with an additional pose estimation sensor, which is used to estimate
the 3D position of the robot. Its position estimates are available through a Python
API call to the getPose3d() method.
This exercise has been developed to teach planning algorithms. As self local-
ization is provided, efficient foraging algorithms, such as boustrophedon motion
and the A* search algorithm, are available choices. They minimize the revisiting of
places and sweep the apartment more systematically and efficiently.
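As an illustration of the planning component, a compact A* search over an occupancy grid could look like the sketch below; the grid representation (0 for free cells, 1 for occupied cells) is an assumption made for illustration, and the reference solutions may organize this differently.

```python
# Compact A* sketch on an occupancy grid (0 = free, 1 = occupied), as could be
# used to plan motions between cells while sweeping the apartment.
import heapq

def a_star(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or [] if no path."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]          # priority queue of (f-cost, cell)
    came_from, g = {}, {start: 0}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:          # reconstruct the path backwards
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                tentative = g[current] + 1
                if tentative < g.get((nr, nc), float("inf")):
                    came_from[(nr, nc)] = current
                    g[(nr, nc)] = tentative
                    h = abs(nr - goal[0]) + abs(nc - goal[1])   # Manhattan heuristic
                    heapq.heappush(open_set, (tentative + h, (nr, nc)))
    return []

# Example on a hypothetical 2x3 grid:
#   a_star([[0, 0, 0], [1, 1, 0]], (0, 0), (1, 2))
```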

5 Conclusions
The lessons learnt with the previous release of the learning framework have
motivated the transition from an open tool to an open web platform. First, the
installation of the software dependencies has been greatly simplified, as they are
now all pre-installed in the Docker container. This makes the framework simpler
to use.
Second, the web browser is the only frontend for the student, who edits the
robot program there and monitors its execution there as well. The new exercise
templates are composed of a webpage inside the browser and a Python program
inside the Docker container, mutually connected. The Python program is also
connected to the robot simulator and runs the user’s code for the robot’s brain.
A webserver has also been developed and the new platform is ready to use
by anyone at a web site, for free. The framework is truly cross-platform, as the
container technology and the web technologies used by the browser are widely
supported on the main operating systems (Windows, MacOS, Linux).
Beyond the major framework upgrade, the exercises of the Intelligent
Robotics course have been successfully migrated to the web platform and are
ready to use by Robotics teachers and students around the world. An instance
of this course is currently taking place with twenty students of the Rey Juan
Carlos University.
Future lines for the new web platform are to: (a) include support for real robots;
(b) introduce more educative contents, such as a computer vision course and an indus-
trial robotics course; (c) upgrade the infrastructure dependencies to ROS2 and
the Ignition simulator; and (d) perform an in-depth empirical study with quan-
titative evidence of its impact on the learning process of real students, since the
platform has only recently started to be used.

Acknowledgements. This research was partially funded by the Community of


Madrid within three projects: “UNIBOTICS2.0: Plataforma web educativa de pro-
gramación de robots y visión artificial” inside a multiannual agreement with URJC,
Encouragement of Young Ph.D. Students Investigation; “RoboCity2030—Madrid
Robotics Digital Innovation Hub” (Ref. S2018/NMT-4331); and by the Madrid
Regional Government through the project eMadrid-CM (S2018/TCS-4307). The latter
is also co-financed by the Structural Funds (FSE and FEDER). The authors also thank
Google for funding the JdeRobot non-profit organization in its calls for Google Summer
of Code 2015, 2017, 2018, 2019, and 2020. Funded action by the Community of Madrid
in the framework of the Multiannual Agreement with the Rey Juan Carlos University
in line of action 1, “Encouragement of Young Ph.D. Students Investigation”, Project Ref.
F664, Acronym UNIBOTICS2.0.

References
1. Cañas, J.M., Martı́n, A., Perdices, E., Rivas, F., Calvo, R.: Entorno Docente
Universitario para la Programación de los Robots. Revista Iberoamericana de
Automática e Informática industrial 15(4), 404–415 (2018). https://doi.org/10.
4995/riai.2018.8962
2. Curto, B., Moreno, V.: Robotics in education. J. Intell. Robot. Syst. 81(1), 3–4
(2016)
3. Cheng, H.H., Chen, B., Ko, D.: Control system design and analysis education via
the web. In: Web-Based Control and Robotics Education, pp. 39–59. Springer,
Dordrecht (2009)
4. Bacca-Cortés, B., Florián-Gaviria, B., Garcı́a, S., Rueda, S.: Development of a
platform for teaching basic programming using mobile robots. Revista Facultad de
Ingenierı́a 26(45), 61–70 (2017)
5. Fabregas, E., Farias, G., Dormido-Canto, S., Guinaldo, M., Sanchez, J., Dormido,
S.: Platform for teaching mobile robotics. J. Intell. Robot Syst. 81, 131–143 (2016)
6. Esposito, J.M.: The state of robotics education: proposed goals for positively trans-
forming robotics education at postsecondary institutions. IEEE Robot. Autom.
Mag. 24, 157–164 (2017)
7. Garcı́a, S., Strüber, D., Brugali, D., Berger, T., Pelliccione, P.: Robotics software
engineering: a perspective from the service robotics domain. In: Proceedings of
the 28th ACM Joint Meeting on European Software Engineering Conference and
Symposium on the Foundations of Software Engineering, pp. 593–604, November
2020
8. Erős, E., Dahl, M., Hanna, A., Götvall, P.-L., Falkman, P., Bengtsson, K.: Devel-
opment of an industry 4.0 demonstrator using sequence planner and ROS2. In:
Koubaa, A. (ed.) Robot Operating System (ROS). SCI, vol. 895, pp. 3–29. Springer,
Cham (2021). https://doi.org/10.1007/978-3-030-45956-7_1
9. Marian, M., Stı̂ngă, F., Georgescu, M.T., Roibu, H., Popescu, D., Manta, F.: A
ROS-based control application for a robotic platform using the Gazebo 3D simula-
tor. In: 2020 21th International Carpathian Control Conference (ICCC), pp. 1–5.
IEEE, October 2020
10. Cañas, J.M., et al.: Open-source drone programming course for distance engi-
neering education. Electronics 9(12), 2163 (2020). https://doi.org/10.3390/
electronics9122163
11. Jara, C., Candelas, F., Pomares, J., Torres, F.: Java software platform for the
development of advanced robotic virtual laboratories. Comput. Appl. Eng. Educ.
21, 14–30 (2013). https://doi.org/10.1002/cae.20542
12. Peidró, A., Reinoso, Ó., José, A.G., Marı́n, M., Payá, L.: A simulation tool to
study the kinematics and control of 2RPR-PR parallel robots. IFAC-PapersOnLine
49(6), 268–273 (2016)
13. The Construct: a platform to learn ROS-based advanced robotics online (01 March 2021).
https://www.theconstructsim.com/. Accessed 08 Mar 2021
14. Casañ, G. A., Cervera, E., Moughlbay, A. A., Alemany, J., Martinet, P.: ROS-
based online robot programming for remote education and training. In: 2015 IEEE
International Conference on Robotics and Automation (ICRA), pp. 6101-6106.
IEEE, May 2015

15. Karalekas, G., Vologiannidis, S., Kalomiros, J.: EUROPA–A ROS-based open plat-
form for educational robotics. In: 2019 10th IEEE International Conference on
Intelligent Data Acquisition and Advanced Computing Systems: Technology and
Applications (IDAACS), vol. 1, pp. 452–457. IEEE, September 2019
16. Cañas, J.M., Perdices, E., Garcı́a-Pérez, L., Fernández-Conde, J.: A ROS-based
open tool for intelligent robotics education. Appl. Sci. 10(21), 7419 (2020). https://
doi.org/10.3390/app10217419
17. Mahna, S., Roldán, D., Cañas, J.M.: JdeRobot Robotics Academy: web based tem-
plates. ROS World 2020 (2020). https://vimeo.com/480550758
18. Cervera, E., del Pobil, A.P.: ROSLab: sharing ROS code interactively with
Docker and JupyterLab. IEEE Robot. Autom. Mag. 26(3), 64–69 (2019)
HoRoSim, a Holistic Robot Simulator:
Arduino Code, Electronic Circuits
and Physics

Andres Faiña(B)

Robotics, Evolution and Art Lab (REAL), IT University of Copenhagen,


2300 Copenhagen, Denmark
anfv@itu.dk

Abstract. Online teaching, which has been the only way to keep teach-
ing during the pandemic, imposes severe restrictions on hands-on robotics
courses. Robot simulators can help to reduce the impact, but most of
them use high-level commands to control the robots. In this paper, a
new holistic simulator, HoRoSim, is presented. The new simulator uses
abstractions of electronic circuits, Arduino code and a robot simulator
with a physics engine to simulate microcontroller-based robots. To the
best of our knowledge, it is the only tool that simulates Arduino code and
multi-body physics. In order to illustrate its possibilities, two use cases
are analysed: a line-following robot, which was used by our students to
finish their mandatory activities, and a PID controller testbed. Prelim-
inary tests with master’s students indicate that HoRoSim can help to
increase student engagement during online courses.

Keywords: Robot simulator · Arduino · Electronics · Physics ·


Educational tools

1 Introduction
There are a lot of robots controlled by microcontrollers that interact with phys-
ical objects, such as line-following robots or walking robots. In these robots,
sensors are used to transform physical properties into electrical signals, which
are then processed by the microcontroller. Based on these inputs, the micro-
controller produces electric signals as outputs, which are then processed by an
electric circuit to power actuators. Finally, these actuators produce some effect
in the environment (turning a motor for example) and the sensors generate new
signals. Thus, teaching these robotic first principles involves a wide curriculum
that includes mechanics, electronics and programming.
The main aim of teaching these basic topics in a robotics course is to demys-
tify these areas, show the big picture to the students and force them to under-
stand that robotics is not only computer science. At IT University of Copen-
hagen, we use this holistic approach in the first course of the robotics specializa-
tion. The course is called “How to Make (Almost) Anything”1 and is targeted to
1 Inspired by the course of the same name taught at MIT by Neil Gershenfeld.

Computer Science students (21 to 30 years of age) without previous knowledge


in robotics, mechanics or electronics. The approach is comparable to the famous
“NAND to Tetris” course [1] for teaching computer science, where computer sci-
ence abstractions are made concrete through detailed implementations. In
our course, each lecture focuses on a specific technique or topic which allows
students to build some components of a custom line following robot, which is
completely functional after 7 weeks. The approach used was not to simulate
anything and to work directly with hardware. This strategy worked relatively well
until March 2020, when the pandemic forced a lockdown of the university. As a
result, half of the students could not get access to their robots while still
having pending assignments to hand in. This suddenly created the need for a
simulator.
Simulators have several drawbacks. First, they are not accurate, which cre-
ates a reality gap between the simulation and the real behaviour. Second, they
introduce limitations, as not all the aspects of the robots are taken into account
and not all the components are available to be simulated. And finally, the man-
ufacturing of physical prototypes cannot be recreated. To avoid these limitations,
there are several robots designed to be used in education [2,3]. However, simula-
tors have some advantages over working in hardware: (1) They provide a quick
way to prototype and check that everything works as expected, (2) they do not
require any hardware to learn (apart from a computer), which is especially rele-
vant for online learning, (3) they reduce the running costs of the course and (4)
students lose their fear of breaking the hardware, which allows them to experiment
more freely.
While there are a lot of simulators that allow us to simulate robots, there is
a lack of a holistic simulator. Most of the time, teaching robotics involves the
use of several simulators, as no single one covers all the topics (physics,
electronics and embedded programming). As an example, when using robot
simulators, the electronic aspects are usually ignored and students do not need to
think about the electronic hardware that controls the actuators, the type of actuators
of the robot or the low-level controllers used. Other simulators are focused on
electronics and microcontrollers but are unable to simulate multi-body physics.
This paper introduces a new holistic simulator that combines three funda-
mental aspects of robotics: physics, electronics, and embedded programming.
The main advantage is that it forces students to understand the basic princi-
ples of robotics before moving to more advanced topics. The simulator, called
HoRoSim, consists of a robot simulator that simulates multi-body physics (Cop-
peliaSim), custom libraries for the Arduino code and the logic to simulate basic
electronic circuits.

2 Related Work

There are several simulators that can be used to simulate some aspects of robots.
Physics engines such as ODE [4] or Bullet [5] can simulate multi-body dynamics
and use collision detection algorithms to calculate the distance from one sensor
to a body. However, the input of these physics engines requires specifying the
torques or forces for each joint, and the output is just the position of the bodies
in the system for the next time step. Robot simulators such as Gazebo [6],
CoppeliaSim (before known as V-Rep) [7] or Webots [8] can facilitate the use of
the physics engines as they provide a graphical user interface where the robot and
physical bodies are displayed, a set of controllers for the joints (move joint with
constant speed, etc.), sensors (cameras, distance sensors, etc.), and predefined
models of robots and common objects. Nevertheless, they ignore the fact that
these actuators and sensors are controlled by some electronic circuit.
The most popular electronic simulator is SPICE [9] and its different forks
such as LTspice [10] and Ngspice [11]. These simulators take the definition of an
electronic circuit through a netlist and can perform different types of analyses:
direct current operating point, transient, noise, etc. However, the complexity of
these analyses is especially high when simulating devices that interact with the
real world (motors, switches, photo-diodes, etc.), as they require specifying the
properties of their components depending on the physical environment.
There are different simulators for microcontrollers, but most of them are
very specialized tools. These simulators change the registers of the microcon-
troller based on the firmware loaded, without simulating hardware other
than LEDs [12]. Some of them have been extended to interact with electronic
circuits. For example, Proteus [13] or SimulIDE [14] are able to combine the
simulation of a microcontroller with a SPICE simulation. They are very useful
for learning electronics; however, they are not able to perform physics simulations,
so their use for teaching robotics is limited.

3 HoRoSim
HoRoSim2 has four main components. The first one is the Arduino code that
students program, which has been extended with a function to specify the hard-
ware devices employed (electronic circuits, motors and sensors). The second one
is the HoRoSim libraries that replace the standard libraries used by Arduino.
The third component is a robot simulator, CoppeliaSim [7], that is employed
for visualization and calculation of the physics (multi-body dynamics, object
collision, distance sensors, etc.). Finally, there is a user interface to handle those
devices that are not part of the physical simulation and that the user just uses
to interface with the microcontroller (buttons, potentiometers and LEDs). A
representation of these four components is shown in Fig. 1.
HoRoSim is used from the Arduino IDE and uses the same sketch building
process as Arduino. Basically, HoRoSim installs a new Arduino “core” (software
API) that compiles and links the user’s program to the HoRoSim libraries. Thus,
the user can choose to use HoRoSim (and the Arduino board to be simulated3 )
from the Board menu in the Arduino IDE. For the user, the only difference is
2 HoRoSim is open source (MIT license) and available for download at https://bitbucket.org/afaina/horosim.
3 Currently, only Arduino Unos are supported.

Fig. 1. Main components of the HoRoSim simulator: The Arduino code is extended
to specify the hardware devices employed, the code is compiled using the HoRoSim
libraries and the libraries are in charge of communicating with a robot simulator (Cop-
peliaSim) and the user interface.

that HoRoSim needs an extra function in the program that specifies the hard-
ware devices (motors, sensors, LEDs and their circuits) that are connected to the
Arduino. This is done in a mandatory function called hardware_setup().
In this function, the user creates instances of the hardware devices employed and
specifies their properties and the pins of the Arduino that are used to interact
with them. Two implementations of the hardware_setup function can be seen in
Sect. 4.
Currently, the hardware devices that HoRoSim can simulate are shown in
Fig. 2 and include:
– Direct current (DC) motors controlled with a transistor or with motor con-
trollers (H-bridges)
– Stepper motors controlled by drivers with a STEP/DIR interface
– Radio control (RC) servomotors
– Proximity sensors like ultrasound or infrared (IR) sensors (analogue or digital)
– Infrared sensors used as vision sensors to detect colours (analogue or digital)
– Potentiometers connected to a joint in the robot simulator
– LEDs (User Interface)
– Potentiometers (User Interface)
– Buttons (momentary or latching) (User Interface)
The program is compiled with a C++ compiler through the Arduino IDE
interface (verify button), which creates an executable program. This program
can be run from the Arduino IDE (upload button) or through a terminal in the
computer.
Once the compiled program is launched, it connects to the robot simulator
(CoppeliaSim) through TCP/IP communication and starts a user interface. The
robot simulator is used to perform the rigid-body dynamic calculations. The
components that do not need the robot simulator (LEDs, buttons and poten-
tiometers) are rendered in the user interface using ImGui, a graphical user inter-
face library for C++.

Fig. 2. The different hardware devices wired in a breadboard; note that this view
is for illustration purposes only and it is not available in the simulator. The wires
that come out of the breadboard are the connections that need to be specified in
the hardware_setup function (blue for Arduino outputs, purple for digital inputs and
orange for analogue inputs). For the sake of clarity, the power applied to the rails of
the breadboard is not shown.

Calls to the Arduino functions are executed using the HoRoSim libraries.
Currently, the list of available Arduino functions is shown in Table 1. The func-
tions that interact with hardware devices (I/O and Servo libraries) call the
respective functions for each hardware device instance that has been defined
in the hardware_setup function. The code in each hardware device instance will
check that the pin of the Arduino that is passed as an argument matches the
pin where the hardware device is connected and that the pin has been initialized
properly. If these conditions are met, the HoRoSim libraries call the functions to
replicate the expected behaviour in the robot simulator or user interface. There-
fore, a digital output pin in the Arduino can control several devices. An example
of this software architecture for the digitalWrite function can be seen in Fig. 3.
In this case, the user has defined a DC motor controlled by a transistor, which
base is connected to a specific pin in the Arduino. If the conditions are met,
the library sends a message to CoppeliaSim to set a new speed (0 or maximum

Table 1. Arduino functions implemented in HoRoSim
– I/O: pinMode, digitalRead, digitalWrite, analogWrite, analogRead
– Servo: attach, write, writeMicroseconds, read, detach, attached
– Serial: begin, write, print, println
– Time: delay, delayMicroseconds, millis, micros

speed) depending on the argument passed as value. In the case of functions that
return a value (analogRead and digitalRead), the mechanism is similar. The
logic implemented takes into account that analogue values can only be read
on analogue pins and analogue writes can only be used on the pins that have
pulse-width modulation (PWM) hardware.
Thus, there is no simulation of the electronic circuits. The libraries just repli-
cate the behaviour of the electronics. Of course, this means that there are some
electronic aspects that are not addressed such as the appropriate values of the
components used or the wiring of the circuits.
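The following conceptual sketch, written in Python for readability rather than in the C++ of the actual HoRoSim core, mirrors the dispatch logic described above; all class, function and variable names are illustrative.

```python
# Conceptual sketch (Python for readability; HoRoSim itself is C++) of the
# dispatch logic described above. Names are illustrative, not HoRoSim's API.
HIGH, LOW = 1, 0
hardware_devices = []          # filled by the user's hardware_setup()
output_pins = set()            # pins configured as outputs with pinMode()

class DCMotorWithTransistor:
    def __init__(self, pin, joint_name, max_speed):
        self.pin, self.joint, self.max_speed = pin, joint_name, max_speed
        hardware_devices.append(self)

    def on_digital_write(self, pin, value):
        # React only if this device is wired to the written pin and the pin
        # has been initialized as an output.
        if pin == self.pin and pin in output_pins:
            speed = self.max_speed if value == HIGH else 0.0
            send_to_coppeliasim(self.joint, speed)     # set joint target velocity

def digitalWrite(pin, value):
    # The Arduino call is forwarded to every declared hardware device;
    # a single output pin can therefore drive several devices.
    for device in hardware_devices:
        device.on_digital_write(pin, value)

def send_to_coppeliasim(joint, speed):
    print(f"[sim] {joint} -> target velocity {speed}")   # stand-in for the TCP/IP call

# Usage sketch: declared in hardware_setup(), then driven by the Arduino code.
motor = DCMotorWithTransistor(pin=9, joint_name="left_motor", max_speed=6.0)
output_pins.add(9)            # effect of pinMode(9, OUTPUT)
digitalWrite(9, HIGH)         # -> [sim] left_motor -> target velocity 6.0
```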
Before running HoRoSim, the CoppeliaSim simulator needs to be launched
and the user needs to load the scene that contains the robot and environment
to simulate. The user can also decide which physics engine to use (ODE, Bul-
let, Newton, etc.). The name of the joints and sensors used in the scene should
be introduced as arguments to the appropriate hardware devices in the hard-
ware setup function. The simulation is automatically started when the student
presses the upload button in the Arduino IDE and stops automatically when the
user interface is closed.
In order to create the scenes, the students need to use the functionality pro-
vided by CoppeliaSim. This process consists in generating the body of the robot
using primitive shapes and connecting them together with joints (kinematic pairs
which can be passive or actuated). This process can be time consuming, but it
only needs to be performed once.
The simulation is asynchronous. CoppeliaSim updates the physics engine
in its own thread and HoRoSim sends the commands and reads the sensors
independently. Therefore, the handling of the time in HoRoSim is not accurate.
The delays and time related functions are handled by the chrono library. In
addition to the inaccuracies caused by the operating system, the communications
with the simulator and the update of the graphical user interface introduce more
delays. As a workaround, students can slow down the physics simulator through
its graphical user interface (GUI). However, we have found that most robots can
be simulated without noticeable side effects.

Fig. 3. Pseudo-code that illustrates how HoRoSim works. The Arduino functions call
the functions of the hardware device instances. Each hardware device instance checks
that the pin passed as argument is used in that device and that it has been initialized
properly. If so, the correct behaviour is applied or the correct value is returned.

4 Use Cases

In this section, we will illustrate the potential of the simulator with two examples:
A line following robot and a testbed to study PID controllers.

4.1 Line Following Robot

This example was used by our master's students (21 to 30 years of age) to finish
a mandatory assignment of the course, as the real robots were locked in the
university during the lockdown. The assignment consisted of programming a
robot able to follow a line, find a can at the end of the line, grasp it with a gripper
and place it near a crossing. This example replicates all the hardware that the
students had in their line-following robots: DC motors controlled by transistors,
an Arduino, 2 IR sensors for detecting the line on the floor, an ultrasound sensor
to detect the can and a gripper controlled by a RC servomotor. Figure 4a shows

(a) Arduino code (b) CoppeliaSim

Fig. 4. A line following robot that was used to work on a mandatory activity during
the lockdown.

the Arduino code that initializes the hardware_setup function and Fig. 4b shows
the robot following the line.
The scene with the robot, the line and the can was given to the students.
To avoid problems, we also gave them a small Arduino sketch with the hard-
ware_setup function already implemented to check that the connection between
HoRoSim and CoppeliaSim worked correctly.4 As the installation of HoRoSim
is a bit cumbersome in Windows, an Ubuntu virtual machine with HoRoSim
already installed was available to download (students still had to install Cop-
peliaSim on their computers).

4.2 PID Testbed


This example illustrates how to use HoRoSim for teaching complex topics. In this
case, a testbed for PID controllers has been implemented.5 The scene consists of
a big disc that is static (not affected by the physics engine) with a graduated scale
where a DC motor (red joint) controls a dial (dynamic body). The DC motor is
controlled by an H-bridge and, therefore, it can rotate the dial in both directions.
The angle of the dial is measured by a potentiometer, which is connected to the
dial. Hence, a feedback loop can be created. The Arduino sketch of this example
sets up these hardware devices and three extra potentiometers to interact with the
testbed; see Fig. 5a. The code of the sketch implements a closed-loop proportional
4 The scene and the Arduino code with the hardware_setup function implemented are available for download at https://bitbucket.org/afaina/horosim/src/master/examples/lineFollowingRobot/. The code to follow the line is not provided, as it is the students' task to program the robot. A video of the robot following the line is available at https://bitbucket.org/afaina/horosim.
5 The Arduino code and the scene are available for download at https://bitbucket.org/afaina/horosim/src/master/examples/pidController/. A video is available at https://bitbucket.org/afaina/horosim/src/master/videos/pid_controller.mp4.

(a) Arduino code (b) CoppeliaSim and UI

(c) p=1 (d) p=100

Fig. 5. Simulation of a PID testbed. The dial is moved between two different target
positions using a proportional controller. The user can change the target positions and
the proportional constant by moving the potentiometers in the user interface (sliders).
The angle of the dial is plotted for proportional values of 1, 10 and 100 in subfigures c, b
and d respectively. Thus, different behaviours are observed: overdamped, overshooting
and oscillation around the target position.

(P) controller, but of course this could be extended to PID controllers. The dial
is moved continuously between two target positions by the DC motor every few
seconds. The target positions and the proportional constant can be adjusted
by tuning the three potentiometers in the user interface (Fig. 5b). In addition,
a graph element has been inserted in the scene. This graph plots the angle of
the dial (in degrees) versus time and it is updated automatically during the
simulation.
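The control law itself is compact; the sketch below, written in Python for clarity rather than in the Arduino C++ of the real testbed, shows a proportional controller with an assumed PWM-style output range.

```python
# Minimal sketch of the proportional control law used by the testbed, in Python
# for clarity (the real testbed implements it in the Arduino sketch). The
# output scaling to a PWM-like range of +/-255 is an illustrative assumption.
def p_controller(target_deg, measured_deg, kp):
    """Return a motor command from the angular error (positive or negative)."""
    error = target_deg - measured_deg
    command = kp * error                          # proportional term only
    return max(-255.0, min(255.0, command))       # clamp to the assumed H-bridge range

# A small kp gives a slow, overdamped response; a large kp reacts strongly,
# saturates and may oscillate around the target (compare Fig. 5c and 5d).
print(p_controller(target_deg=90.0, measured_deg=30.0, kp=1.0))    # -> 60.0
print(p_controller(target_deg=90.0, measured_deg=30.0, kp=100.0))  # -> 255.0 (saturated)
```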
This example allows students to experiment with different controllers (and
values of their constants) and observe the different behaviours that they produce.
In Fig. 5b, a proportional value around 10 has been set with the potentiometer
and the graph shows a small overshoot when reaching the targets. When
changing the proportional value, different behaviours are observed as shown
in Fig. 5c and 5d. Note that the simulator allows students to quickly change
other properties and not only the software of the controller. For example, they
could study the response of the system when increasing the mass of the dial or
increasing the speed or torque of the motor.

5 Discussion
5.1 Students Reception

The students used the simulator to fulfil the last mandatory assignment of the
course (programming a line-following robot) and the reception of the simulator
was very good. As an example, one student wrote in the evaluation of the course:
“I liked that Andres made the simulator which helped us finish the mandatories!
It was very cool to use it actually.”. And a lot of students liked the attempt
to keep a hands-on course even during the lockdown, to which the simulator
contributed.
Furthermore, all the students were able to program their robots to success-
fully pass the mandatory assignment. None of them complained about the simu-
lator, and most of the questions were about how to make the robot take
the sharp turns and go straight over the crossing. In that sense, it was
a pleasure to observe that the mandatory assignment could be carried out online
without any problems, triggering the same kind of questions as if the students had
used a physical robot.
The simulator, which was originally built to only cover the line following
robot assignment, was extended to allow simulating other kinds of robots and
machines, and more hardware devices were implemented. This gave students
the possibility of using the simulator for their final projects (build a robot or a
machine with mechanisms, electronics and embedded programming). When they
started to work on their final projects, a small survey was carried out and 10
students replied. Half of them were interested in using the simulator for their final
projects, while the other half discarded it. The reasons for discarding the simulator
were: 3 of them were too busy to use it, 1 machine could not be simulated,
and 1 machine could be built at home so there was no need for the simulator.
However, only one group out of twenty ended up using the simulator.
The low usage of the simulator (1 group out of 20) was due to sev-
eral reasons. First, the students could use a small budget to order electronic
and mechanical parts from online stores and prototyping services. Thus, most
of the students managed to finish with a minimum prototype and there was
no need for the simulator. However, the university had to spend a considerable
amount of money on this, whereas in a normal course the students can pro-
totype their own parts at our workshop and use our stock materials. Additionally,
some students reported a lack of engagement and difficulty to collaborate during
the lockdown. Finally, the current limitations of the robot simulator, which are
described in the next section, also contributed to the low usage of the simulator.

5.2 Limitations

The current implementation of the code has several limitations. First, there
are still several functions and libraries available for Arduino that have not
been implemented. They include interrupts, receiving serial messages and I2C

(Inter-Integrated Circuit, Wire in Arduino) and SPI (Serial Peripheral Inter-
face) libraries. They could be implemented in the future, but specific I2C and
SPI libraries need to be implemented for each device (sensor, motor controller,
etc.) which can be tedious.
In addition, the electronic circuits provided are abstractions of the physical
world. Thus, the students are not forced to think about how to wire the electrical
components together. Moreover, the lack of a SPICE simulator makes it impossible to
detect problems that can occur when working with real hardware (wiring, component
values, power supply voltages, etc.).
Finally, a big limitation is that the students need to create a scene with
a physical model of their prototypes. In our course, they use Computer-Aided
Design (CAD) software to design their robots. However, the translation from
these 3D models to CoppeliaSim is not automatic. One could import the parts
in CoppeliaSim as meshes, but this makes the dynamics engine slower and can
cause stability issues. Additionally, too many dynamically enabled parts can
again slow down the simulator significantly. A better approach is to design the
prototype in CoppeliaSim using primitive shapes that are faster to simulate
and simplify the mechanisms. For example, most of our students implemented a
gripper for the line following robot using a servo that rotates some gears, which
open or close the end effector. In the simulator, this is simplified as a linear
actuator that opens or closes a cuboid body. Thus, all the gears and their joints
are not simulated. To compensate for this, a reduction ratio between the motor
and the axis of movement was introduced as a parameter in the constructor
of the hardware devices. This value represents the transmission (gears, belts or
leadscrew) and reduces the speed while increasing the force/torque accordingly.
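
The snippet below illustrates the effect of such a parameter. The class and argument names are hypothetical and do not correspond to HoRoSim's actual constructor signature; the point is only that a single reduction-ratio value can replace the simulation of every gear in the transmission.

    # Hypothetical illustration of a reduction-ratio parameter (not HoRoSim's API).
    class SimulatedActuator:
        def __init__(self, max_speed_rpm, max_torque_nm, reduction_ratio=1.0):
            # The transmission (gears, belt or leadscrew) slows the output axis
            # down and increases its torque by the same factor.
            self.output_speed_rpm = max_speed_rpm / reduction_ratio
            self.output_torque_nm = max_torque_nm * reduction_ratio

    gripper = SimulatedActuator(max_speed_rpm=60, max_torque_nm=0.2, reduction_ratio=5.0)
    print(gripper.output_speed_rpm, gripper.output_torque_nm)   # 12.0 1.0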
The main inconvenience of having to generate a new, simplified
model of the machine to simulate is that it requires learning how to use Cop-
peliaSim. In a course where the students already learn several programs and
topics, introducing one more increases their workload, and the
survey showed that most of them are already very busy and unable to handle
more. Thus, this limitation needs to be addressed to increase the utility
of the simulator.

6 Conclusions
This paper has presented a new robot simulator, HoRoSim, that takes a holistic
approach. Students can simulate Arduino code, electronic hardware and multi-body
dynamics, which helps to teach the basic principles of robotics. To the best of our
knowledge, it is the first simulator that allows users to simulate Arduino code
combined with a physics engine. HoRoSim provides teachers with a flexible tool
that can be used for assignments in robotics courses. Preliminary results have
shown that it can contribute to keeping courses hands-on and maintaining student
engagement in online teaching, which is especially relevant in the current pandemic.

In order to address the limitations of the current version, future work includes
(1) generating CoppeliaSim models automatically from CAD designs, (2) allow-
ing students to make their own electronic circuits by wiring electronic compo-
nents, and (3) adding a SPICE simulator. We are currently looking at how we
could integrate Fritzing [15] and Ngspice [11]. Fritzing could provide a platform
where students create their circuits (as schematics or wiring the components in
a breadboard), while Ngspice could simulate these circuits.

References
1. Schocken, S.: Nand to tetris: building a modern computer system from first princi-
ples. In: Proceedings of the 49th ACM Technical Symposium on Computer Science
Education, p. 1052 (2018)
2. Mondada, F., et al.: The e-puck, a robot designed for education in engineering. In:
Proceedings of the 9th Conference on Autonomous Robot Systems and Competi-
tions, vol. 1, pp. 59–65. IPCB: Instituto Politécnico de Castelo Branco (2009)
3. Bellas, F., et al.: The robobo project: bringing educational robotics closer to real-
world applications. In: International Conference on Robotics and Education RiE
2017, pp. 226–237. Springer (2017)
4. Smith, R., et al.: Open dynamics engine
5. Coumans, E., et al.: Bullet physics library. bulletphysics.org. Accessed 04 Feb 2021
6. Koenig, N., Howard, A.: Design and use paradigms for gazebo, an open-source
multi-robot simulator. In: IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS), vol. 3, pp. 2149–2154. IEEE (2004)
7. Rohmer, E., Singh, S.P., Freese, M.: V-rep: a versatile and scalable robot simula-
tion framework. In: IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), pp. 1321–1326. IEEE (2013)
8. Michel, O.: Cyberbotics LTD-webotsTM : professional mobile robot simulation. Int.
J. Adv. Robot. Syst. 1(1), 40–43 (2004)
9. Nagel, L., Pederson, D.: Simulation program with integrated circuit emphasis. In:
Midwest Symposium on Circuit Theory (1973)
10. Analog Devices, LTspice. https://www.analog.com/en/design-center/design-tools-
and-calculators/ltspice-simulator.html#. Accessed 08 Jan 2021
11. Vogt, H., Hendrix, M., Nenzi, P., Warning, D.: Ngspice users manual version 33
(2020)
12. Gonçalves, P.F., Sá, J., Coelho, J., Durães, J.: An arduino simulator in classroom-a
case study. In: First International Computer Programming Education Conference
(ICPEC), Schloss Dagstuhl-Leibniz-Zentrum für Informatik (2020)
13. Bo Su, L.W.: Application of proteus virtual system modelling (VSM) in teaching of
microcontroller. In: 2010 International Conference on E-Health Networking Digital
Ecosystems and Technologies (EDT), vol. 2, pp. 375–378 (2010). https://doi.org/
10.1109/EDT.2010.5496343
14. SimulIDE. https://www.simulide.com. Accessed 08 Jan 2021
15. Knörig, A., Wettach, R., Cohen, J.: Fritzing: a tool for advancing electronic proto-
typing for designers. In: Proceedings of the 3rd International Conference on Tan-
gible and Embedded Interaction, pp. 351–358 (2009)
Exploring a Handwriting Programming
Language for Educational Robots

Laila El-Hamamsy1,2(B) , Vaios Papaspyros1 , Taavet Kangur1 ,


Laura Mathex1 , Christian Giang3,5 , Melissa Skweres2 , Barbara Bruno4 ,
and Francesco Mondada1,2
1 MOBOTS Group, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
{laila.elhamamsy,vaios.papaspyros,taavet.kangur,laura.mathex,francesco.mondada}@epfl.ch
2 Center LEARN, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
melissa.skweres@epfl.ch
3 D-VET Laboratory, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
christian.giang@epfl.ch
4 CHILI Laboratory, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
barbara.bruno@epfl.ch
5 SUPSI-DFA, Locarno, Switzerland

Abstract. Recently, introducing computer science and educational


robots in compulsory education has received increasing attention. How-
ever, the use of screens in classrooms is often met with resistance, espe-
cially in primary school. To address this issue, this study presents the
development of a handwriting-based programming language for educa-
tional robots. Aiming to align better with existing classroom practices,
it allows students to program a robot by drawing symbols with ordi-
nary pens and paper. Regular smartphones are leveraged to process
the hand-drawn instructions using computer vision and machine learn-
ing algorithms, and send the commands to the robot for execution. To
align with the local computer science curriculum, an appropriate play-
ground and scaffolded learning tasks were designed. The system was
evaluated in a preliminary test with eight teachers, developers and edu-
cational researchers. While the participants pointed out that some tech-
nical aspects could be improved, they also acknowledged the potential
of the approach to make computer science education in primary school
more accessible.

Keywords: Educational robotics · Tangible programming ·


Computing education

Supported by the NCCR Robotics, Switzerland.


The original version of this chapter was previously published non-open access. A Cor-
rection to this chapter is available at https://doi.org/10.1007/978-3-030-82544-7 30
c The Author(s) 2022, corrected publication 2022
M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 268–275, 2022.
https://doi.org/10.1007/978-3-030-82544-7_25

1 Introduction
With the increasing efforts to introduce computer science (CS) in K-12 world-
wide [2], it has become necessary to develop tools and environments that offer
an appropriate progression across school years [13]. Nowadays, aside from Text-
based Programming Languages such as Python or C, three main approaches to
teach CS exist: CS unplugged activities 1 that do not involve the use of screens
and aim at teaching core CS concepts rather than programming per se; Tangi-
ble Programming Languages (TPLs) which involve the use of physical manip-
ulatives to program virtual or physical agents [9,10]; and Visual Programming
Languages (VPLs) such as Scratch2 , which offer block-based solutions to pro-
gramming, either on tablets or computers [7]. Although VPLs are increasingly
present in education settings, they are met with resistance from teachers at the
lower levels of primary school mainly due to the presence of screens [2]. Con-
versely, TPLs, which are still not present in formal settings [8] appear to be
better suited for the needs of primary school education. Indeed, the tangibility
of TPL approaches aligns nicely with early childhood education practices and
principles [3,15], promotes active and collaborative learning [9,16], and is less
likely to be met with resistance by teachers. Moreover, TPLs seem to appeal
more than VPLs to young students [7] and help mitigate gender stereotypes
[7,9]. Lastly, experiments suggest that TPLs have a positive impact on interest,
enjoyment and learning [9], all the while reducing the amount of errors produced
[17] and the amount of adult support required [8] compared to VPLs.
Educational Robots (ER), which by design embody and enact abstract CS
concepts and directives, can be thought of as naturally aligned with the principles
of TPL, and thus a potential means to promote the adoption of TPL by teachers.
Among the solutions proposed in the literature to combine ER and TPL, those
not requiring the purchase of additional dedicated hardware (i.e., beside the
robot itself) seem particularly promising for primary school education settings
[6]. A recent example is the work of Mehrotra et al. [8], who developed a Paper-
based Programming Language (PaPL) for the Thymio II robot3 , observing that
the paper-based interface helps promote the use of tangible interfaces in class-
rooms by offering a ubiquitous and accessible solution which only requires the
robot and a classroom laptop. Similarly, TPL approaches that introduce hand-
writing as a way to provide instructions/values [14] seem particularly interesting
for primary school education settings, enhancing the programming sessions with
the possibility to train essential graphomotor skills [18], which have been shown
to be linked to improved student learning [12].
In this work, we build on the principles of the first handwriting TPL approaches
and the Paper-based Programming Language to introduce a Handwriting-based
Programming Language (HPL). By allowing students to program the robot by
handwriting instructions on a sheet of paper, HPL relies on two cornerstones of

1 CS Unplugged activities: csunplugged.org/en/.
2 See scratch.mit.edu for Scratch resources and scratchjr.org for Scratch Jr.
3 Thymio II - https://www.thymio.org/.

primary school classrooms: the use of paper as material and of handwriting as a


means of expression. HPL can thus be seen as an effort to take programming one
step closer to primary school practices, all the while capitalising on the benefits
handwriting has with respect to student learning [12].

2 Development of HPL

Our goal is to develop a Handwriting-based Programming Language for the


Thymio II robot which could serve as a precursor to more advanced TPLs and
VPLs for ER. For this reason, we focus on sequential programming and basic
robot movements. As movements in TPLs are often represented by arrows, the
instructions of the HPL were designed to allow for a “clear mapping between
[the] tangible and virtual commands” [13] (see Fig. 1 - A).

Fig. 1. HPL symbols (A) and workflow from the handwriting activity to the robot
commands (B). The “up” and “down” arrows (first row) correspond to forwards and
backwards motions. The “forward right” and “forward left” arrows (second row) rep-
resent a forward motion along an arc of given circumference. The “rotate right” and
“rotate left” arrows (third row) correspond to a rotation. Icons are taken from FlatIcon
and the NounProject.

A data set of 6888 images of handwritten arrows was acquired from 92 stu-
dents in the 9 to 12 age range, and split 60%–40% between training and testing.
Both during training and testing, each image is first pre-processed using adap-
tive filtering and binarisation followed by morphological operators, to reduce the
noise while preserving edges and remove asperities [5]. The arrows are then iso-
lated by identifying contours. The feature set is created to characterise the shape
differences between the arrows by combining a) Fourier descriptors, b) Hellinger
distances between the pixel density histograms along the x and y axes of the
arrow to be classified and the centroid of the classes computed on the training
set, and c) geometric features (circularity, convexity, inertia, rotated bounding
rectangle, minimum enclosing circle, moments, etc.). A number of state-of-the-
art classifiers were trained and tested, namely: Decision Tree; Support Vector

Table 1. Confusion matrix, Precision (P), Recall (R) and F1-score (F1) for the MLP
classifier (learning rate 0.01, ReLU activation, Adam optimiser) computed on the test
set. The confusion matrix indicates the number of observations of each true class (rows)
predicted to be in the class indicated by the columns.

                 Up   Down  Forward  Forward  Rotate  Rotate    P    R    F1
                              Right    Left    Right    Left
Up              430     3       5        5        5       3   .95  .95  .95
Down              3   427       4        4        4       3   .97  .96  .96
Forward Right     8     2     544        8        6       0   .96  .96  .96
Forward Left      6     3       7      573        6       0   .95  .96  .96
Rotate Right      2     2       2        3      263      55   .71  .80  .75
Rotate Left       2     3       6       10       86     263   .81  .71  .76

Machine; Multilayer Perceptron (MLP); kNN; Random Forest; AdaBoost; Naive


Bayes classifier. The MLP achieved the best performance (reported in Table 1).
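
A simplified sketch of this pipeline is given below. It is not the authors' implementation: the feature set is reduced to Hu moments plus two geometric descriptors standing in for the full Fourier/Hellinger/geometric combination, the hidden-layer size is an arbitrary choice, and only the reported MLP hyperparameters (learning rate 0.01, ReLU activation, Adam optimiser) and the 60%–40% split are taken from the text.

    # Simplified sketch of the arrow-classification pipeline (not the authors' code).
    import cv2
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    def arrow_features(gray):
        """Binarise, denoise and describe the largest contour (assumed to be the arrow)."""
        binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY_INV, 21, 10)
        binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        arrow = max(contours, key=cv2.contourArea)
        hu = cv2.HuMoments(cv2.moments(arrow)).flatten()      # 7 shape descriptors
        area, perimeter = cv2.contourArea(arrow), cv2.arcLength(arrow, True)
        circularity = 4 * np.pi * area / (perimeter ** 2 + 1e-9)
        convexity = area / (cv2.contourArea(cv2.convexHull(arrow)) + 1e-9)
        return np.concatenate([hu, [circularity, convexity]])

    def train_arrow_classifier(images, labels):
        """images: grayscale arrays; labels: class names such as 'up' or 'rotate left'."""
        X = np.array([arrow_features(img) for img in images])
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.4)
        clf = MLPClassifier(hidden_layer_sizes=(64,), activation="relu",
                            solver="adam", learning_rate_init=0.01, max_iter=1000)
        clf.fit(X_tr, y_tr)
        return clf, clf.score(X_te, y_te)                     # classifier and test accuracy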
While the symbol set here only included motion commands, the paper-and-pen
approach makes it easy to change the symbols and provides the opportunity
to develop more complex commands. While contingent on the creation of a data
set with a selection of symbols which must be both intuitive for participants and
sufficiently distinct for classification, we do not perceive any technical limitations
to expanding the HPL language to include a larger set of instructions (including
conditionals and loops for example).

3 An ER-HPL Learning Activity


Specifically, the activity aims at helping students 1) understand the concept
of sequential programming, 2) discover the notions of cost and path planning,
inspired by the work done in the JUSThink project [11], and 3) reinforce the
understanding of geometry and angles. A set of scaffolded tasks was designed
to introduce the students to these concepts using the playground and interface
presented in Fig. 2. In the tasks, students are asked to provide a sequence of
instructions allowing the robot to reach a target on a map, by first drawing
arrows on sheets of paper and then taking images of them using an ad-hoc
application running on a tablet. The application analyses and transforms the
instructions into robot motion commands, according to the workflow shown in
Fig. 1 - B.
The use of paper introduces the possibility to use a physical exercise book, a
natural approach which is close to classroom practices and helps monitor student
progress over time. Additionally, it gives the teacher the opportunity to scaffold
the instructions according to the needs of the individual students. Indeed, to
promote collaboration, the teacher can choose to employ a card-based solution
(similar to traditional TPLs) requiring students to draw the arrows only once,
on separate cards, and then move them around to find the correct sequence (see

Fig. 2. Learning activity with ER-HPL using the Thymio robot. Participants are
expected to program the robot to move from one icon on the map to another using the
HPL programming language. To learn about the notion of cost, on the playground map
(top), the colour of the ground indicates the cost for the robot to navigate along it. The
darker the ground colour of a segment, the higher the cost for the robot to navigate it
and the faster the robot uses up its initial energy (denoted by the colours of the LEDs
on the surface of the robot), with the ground sensors measuring the intensity of the colour. The
instructions corresponding to the displayed trajectory (in white on the map) are shown
on the bottom. Each step of the algorithm generates either a displacement (indicated
by the white arrows on the map) equal to the distance between the illustrated
locations on the map (i.e., 11 cm) or a rotation of 45°. Icons are taken from
FlatIcon and the NounProject.

Fig. 2, Option 1). Conversely, if the objective is to have the students practice
their handwriting, the teacher can choose to have them write the arrows on a
single sheet of paper (see Fig. 2, Option 2).
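
To illustrate the last step of the workflow in Fig. 1 - B, the snippet below sketches how a classified symbol sequence could be mapped onto motion commands using the playground geometry of Fig. 2 (11 cm grid steps, 45° rotations). The mapping and command tuples are hypothetical, the arc-motion symbols are omitted, and sending the commands to the Thymio is not shown; the actual application is not described at this level of detail.

    # Hypothetical mapping from recognised HPL symbols to motion commands
    # (grid step and rotation angle taken from the Fig. 2 description).
    GRID_STEP_CM = 11
    TURN_DEG = 45

    SYMBOL_TO_COMMAND = {
        "up": ("drive", +GRID_STEP_CM),
        "down": ("drive", -GRID_STEP_CM),
        "rotate right": ("turn", -TURN_DEG),
        "rotate left": ("turn", +TURN_DEG),
    }

    def plan(symbols):
        """Convert a recognised symbol sequence into (command, value) motion steps."""
        return [SYMBOL_TO_COMMAND[s] for s in symbols]

    print(plan(["up", "up", "rotate left", "up"]))
    # [('drive', 11), ('drive', 11), ('turn', 45), ('drive', 11)]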
A preliminary heuristic evaluation of the HPL platform and corresponding
activity framework was conducted based on the methodology developed by Giang
[4]. The evaluation included 8 participants with experience in education (2 teach-
ers, 4 engineers, 2 educational researchers) representing the different stakehold-
ers involved in the development of ER content. The participants were interested

and evoked the engaging, kinaesthetic, playful character of this low-cost app-
roach to teaching programming. Although one researcher mentioned that they
were unsure how the HPL would help improve the learning of CS concepts, other par-
ticipants appreciated that the HPL would help train graphomotor skills and
increase the sense of ownership of the system. Participants mentioned that the HPL
components were easy to understand (but not necessarily easy to draw), and
suggested to continue improving the robustness of the symbol recognition. They
believed that drawing the symbols on individual cards would facilitate collabora-
tion and make it easy to modify the number of symbols to be used. They found
the designed set of activities intuitive and didactic, and appreciated the
various scaffolding possibilities and the presence of a physical exercise book com-
plementing the in-app instructions. They characterised the overall platform as
intuitive, but suggested improvements in terms of storytelling and the applica-
tion's graphical user interface, specifically recommending additional feedback on
the smartphone.

4 Conclusion
In this article we propose the Handwriting-based Programming Language (HPL),
aiming at facilitating the introduction of programming in primary school edu-
cation. HPL relies on paper and handwriting to program Educational Robots
(ER) in a tangible way, while leveraging well-known practices for early child-
hood education. From the students’ perspective, the use of handwriting as input
modality introduces the possibility to train graphomotor skills while program-
ming, and can potentially contribute to an increased sense of controllability and
ownership, which, like handwriting in general, are important factors for student
learning. From the teachers’ perspective, HPL aims to be close to traditional
pedagogical approaches and facilitate the development of learning activities that
organically take into account instructions (and instruction modalities), learning
artefacts (i.e., the robot, playground and interfaces) and assessment methods [4].
Therefore, we believe that HPL will be more attractive for teachers who are still
reticent about educational technologies. Future steps include evaluating these
hypotheses in classroom experiments with the target group and expanding to a
larger set of instructions. One could even envision a fully tablet-based alternative
in conjunction with a digital pen and an automated handwriting diagnostic sys-
tem [1]. Such a system would not only be able to assess the child’s progress over
time but would be able to propose adequate remediation to address common
handwriting difficulties [1] through playful means.

Acknowledgements. A big thank you goes out to the teacher A.S. and the students
involved in the data collection, as well as our always supportive colleagues.

References
1. Asselborn, T.L.C.: Analysis and Remediation of Handwriting difficulties, p. 157
(2020). https://doi.org/10.5075/epfl-thesis-8062
2. El-Hamamsy, L., et al.: A computer science and robotics integration model for
primary school: evaluation of a large-scale in-service K-4 teacher-training program.
Educ. Inf. Technol. 26(3), 2445–2475 (2020). https://doi.org/10.1007/s10639-020-
10355-5
3. Elkin, M., Sullivan, A., Bers, M.U.: Programming with the KIBO robotics kit in
preschool classrooms. Comput. Schools 33(3), 169–186 (2016)
4. Giang, C.: Towards the alignment of educational robotics learning systems with
classroom activities, p. 176 (2020). https://doi.org/10.5075/epfl-thesis-9563
5. Herrera-Camara, J.I., Hammond, T.: Flow2Code: from hand-drawn flowcharts to
code execution. In: SBIM 2017, pp. 1–13. ACM (2017)
6. Horn, M.S., Jacob, R.J.K.: Designing tangible programming languages for class-
room use. In: TEI 2007, p. 159. ACM Press (2007)
7. Horn, M.S., Solovey, E.T., Crouser, R.J., Jacob, R.J.: Comparing the use of tangi-
ble and graphical programming languages for informal science education. In: CHI
2009, pp. 975–984. ACM, April 2009
8. Mehrotra, A., et al.: Introducing a Paper-Based Programming Language for Com-
puting Education in Classrooms. In: ITiCSE 2020, pp. 180–186. ACM (2020)
9. Melcer, E.F., Isbister, K.: Bots & (Main)Frames: exploring the Impact of Tangible
Blocks and Collaborative Play in an Educational Programming Game. In: CHI
2018, pp. 1–14. ACM (2018)
10. Mussati, A., Giang, C., Piatti, A., Mondada, F.: A Tangible programming lan-
guage for the educational robot Thymio. In: 2019 10th International Conference
on Information, Intelligence, Systems and Applications (IISA), pp. 1–4 , July 2019
11. Nasir, J., Norman, U., Johal, W., Olsen, J.K., Shahmoradi, S., Dillenbourg, P.:
Robot analytics: what do human-robot interaction traces tell us about learning?
In: 2019 28th IEEE RO-MAN, pp. 1–7, October 2019
12. Ose Askvik, E., van der Weel, F.R.R., van der Meer, A.L.H.: The importance of
cursive handwriting over typewriting for learning in the classroom: a high-density
EEG study of 12-year-old children and young adults. Front. Psychol. 11, 1810
(2020)
13. Papavlasopoulou, S., Giannakos, M.N., Jaccheri, L.: Reviewing the affordances of
tangible programming languages: implications for design and practice. In: 2017
IEEE EDUCON, pp. 1811–1816 (2017)
14. Sabuncuoğlu, A., Sezgin, M.: Kart-ON: affordable early programming education
with shared smartphones and easy-to-find materials. In: Proceedings of the 25th
International Conference on Intelligent User Interfaces Companion, IUI 2020, pp.
116–117. ACM (2020)
15. Sapounidis, T., Demetriadis, S.: Educational robots driven by tangible programming
languages: a review on the field. In: Alimisis, D., Moro, M., Menegatti, E. (eds.)
Edurobotics 2016. AISC, vol. 560, pp. 205–214. Springer, Cham (2017).
https://doi.org/10.1007/978-3-319-55553-9_16
16. Sapounidis, T., Demetriadis, S., Papadopoulos, P.M., Stamovlasis, D.: Tangible
and graphical programming with experienced children: a mixed methods analysis.
Int. J. Child Comput. Interact. 19, 67–78 (2019)

17. Sapounidis, T., Demetriadis, S., Stamelos, I.: Evaluating children performance with
graphical and tangible robot programming tools. Personal Ubiquit. Comput. 19(1),
225–237 (2014). https://doi.org/10.1007/s00779-014-0774-3
18. Ziviani, J.: The development of graphomotor skills. In: Hand Function in the Child,
pp. 217–236. Elsevier (2006)

Open Access This chapter is licensed under the terms of the Creative Commons
Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/),
which permits use, sharing, adaptation, distribution and reproduction in any medium
or format, as long as you give appropriate credit to the original author(s) and the
source, provide a link to the Creative Commons license and indicate if changes were
made.
The images or other third party material in this chapter are included in the
chapter’s Creative Commons license, unless indicated otherwise in a credit line to the
material. If material is not included in the chapter’s Creative Commons license and
your intended use is not permitted by statutory regulation or exceeds the permitted
use, you will need to obtain permission directly from the copyright holder.
Artificial Intelligence
AI-Powered Educational Robotics as a Learning
Tool to Promote Artificial Intelligence
and Computer Science Education

Amy Eguchi(B)

University of California, San Diego, CA, USA


a2eguchi@ucsd.edu

Abstract. AI is considered a rapidly advancing technological domain capable


of altering every aspect of our lives and society. Many children may have used
AI-assistants, and some have had the privilege of growing up with AI-assistants
or AI-assisted smart devices in their homes. Educators across fields from com-
puter science, AI, to education strongly suggest that it is crucial to help people
understand the science behind AI, its limits, and potential societal impacts in our
everyday lives as well as in the future. What is specifically urgent is to prepare
K-12 students for their future professions, which might not currently exist, and
for becoming citizens capable of understanding and utilizing AI-enhanced technolo-
gies in the right way, so that these technologies do not benefit some populations
over others. AI4K12, a joint project between CSTA (CS Teacher Association) and
AAAI (Association for the Advancement of Artificial Intelligence), is developing K-12 AI
guidelines for teachers and students organized around the Five Big Ideas in AI
which were introduced in 2019. Aligned with Computer Science Standards and
the standards of the core subjects, the Five Big Ideas in AI can guide students’
learning of AI while learning computer science concepts and the concepts of other
subjects. This position paper explores the idea to promote computing education,
including computer science and computational thinking skills, and AI education
through AI-powered educational robotics as a motivating learning tool.

Keywords: AI-powered educational robotics · Robotics as a learning tool ·


Artificial Intelligence · Computer science education

1 Introduction

We cannot ignore the fact that the impacts of Artificial Intelligence (AI) on our society
are becoming more prominent than ever. For example, many children may have
used AI-assistants, becoming aware of them, and some have had the privilege of growing
up with AI-assistants or AI-assisted smart devices in their homes (e.g. Google-enhanced
smart speakers, devices equipped with Siri or Alexa). The rapid development of voice and
facial recognition technologies and machine learning algorithms has started to trans-
form our everyday lives. One of the most common AI algorithms that we encounter every

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022


M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 279–287, 2022.
https://doi.org/10.1007/978-3-030-82544-7_26

day is the recommender system, which powers customized timelines on Social Networking Ser-
vices (SNS, such as Facebook and Twitter) and the video suggestions we see on YouTube.
Educators across fields, from computer science and AI to education, strongly suggest that
it is crucial to help people understand the science behind AI, its limits, and its potential
societal impacts in our everyday lives as well as in the future. What is specifically urgent
is to prepare K-12 students for their future professions, which might not currently exist,
and for becoming citizens capable of understanding and utilizing AI-enhanced technologies
in the right way, so that these technologies do not benefit some populations over oth-
ers [1]. This is of particular importance for low-income and underrepresented minority students,
for whom the risk of being left behind is greatest due to a lack of resources and opportunities
to explore computing in their schools and a lack of exposure to AI-enabled devices in their
homes and communities.
In the US, several initiatives at the grassroots level have emerged to provide greater
access to K-12 AI Education. AI4All provides AI summer camps for historically
excluded populations through education and mentorship with a focus on high school.
AI4K12, a joint project between CSTA (CS Teacher Association) and AAAI (Associa-
tion for the Advancement of Artificial Intelligence), is developing K-12 AI guidelines for teachers
and students organized around the Five Big Ideas in AI [1]. The Five Big Ideas in AI,
introduced in 2019, are (Fig. 1):

1. Perception: Computers perceive the world using sensors,


2. Representation & Reasoning: Agents maintain representations of the world and use
them for reasoning,
3. Learning: Computers can learn from data,
4. Natural Interaction: Intelligent agents require many kinds of knowledge to interact
naturally with humans, and
5. Societal Impact: AI can impact society in both positive and negative ways.

Fig. 1. AI4K12 Five Big Ideas in AI



The AI4K12 guidelines are being developed in response to the existing CSTA K-12 CS
Standards (2017), which included only two standards addressing AI, under the Algorithms
& Programming concept for the 11–12 grade band:

3B-AP-08 - Describe how artificial intelligence drives many software and physical
systems.
3B-AP-09 - Implement an artificial intelligence algorithm to play a game against a human
opponent or solve a problem.

As interest in K-12 AI education increases, there are a number of open-source cur-


ricula and resources that are becoming available for middle and high school [2–5], with
far fewer resources available to elementary schools. However, introducing AI earlier,
especially at the elementary grade-levels, will inspire young students to learn computer
science and acquire computational thinking skills early. Although anecdotal, the author
has witnessed how inspiring AI can be through a workshop in which she engaged
upper elementary school students in AI-powered robotics programming activities with
Scratch extensions and Google's Teachable Machine (https://teachablemachine.withgoogle.com/).
When young students watch Boston Dynamics' Atlas and Spot robots dance,
it excites them (https://youtu.be/fn3KWM1kuAw). When students are introduced to
Teachable Machine or GANPaint Studio (https://ganpaint.io/), they become curious and
wonder how to create their own models and how to improve the models they have tinkered
with. These tools can be used as a lesson hook to engage students in learning computer science
and developing computational thinking skills. But if nothing further is introduced, the
AI technology will be perceived as magic by many students.
This position paper explores the idea of promoting computing education (includ-
ing computer science and computational thinking skills) and AI education through
AI-powered educational robotics as a motivating learning tool.

2 Why AI-Powered Educational Robotics for AI, Computer


Science, and Computational Thinking Learning?
Every time we bring our robotics creations to classrooms, excitement and creative effects
are observed [6]. In December 2018, my college students presented their robotics creations,
built with recycled materials and boxes, to pre-school students, where we observed the
same reactions from the students, who were working on their own box project. Our visit
triggered them to create imaginary box robots as their final products. Similarly,
in December 2019, 30 3rd graders (two 3rd grade classes) from a local elementary school,
where the majority of students are underrepresented minorities, visited the University
of California, San Diego (UCSD) campus for a showcase of robotics creations by my
undergraduate students. The 3rd graders were excited to learn about the robots created by
the UCSD students. The UCSD students were also tasked with teaching the 3rd graders
how their robots were programmed, by either explaining the concept or showing the code
and explaining it step by step. Since all robots were required to use at least one sensor, the
3rd graders engaged in learning how sensor inputs work to control the robot's actions.
The 3rd-grade teachers shared the feedback from the 3rd graders, which was all very
positive, including their excitement to continue learning about robotics and coding and their
tips to make the robots better. Moreover, the teachers asked the author if she could help them
start incorporating educational robotics in their lessons, although this has not started due
to the COVID-19 pandemic.
Educational robotics started to gather attention from the education community after
LEGO Mindstorms RCX was produced in the late 1990s. It has gained more popularity
and demand since the worldwide focus on STEM and computer science education
started to prevail. It is widely accepted and understood that educational robotics brings
excitement and motivation to learn, and fosters curiosity and creativity in students [7–11].
Educational robotics is an effective tool for promoting students' learning. It becomes
most effective when students engage in making or creating with the hands-on robotic tool,
which creates a fun and engaging hands-on learning environment for students. Making with
robotics engages students in manipulating, assembling, disassembling, and reassembling
materials while going through the design learning and engineering iteration processes.
Students also engage in problem-solving processes through trial and error while improv-
ing the code for their robots. The learning approaches that make learning with robotics
successful are project-based, problem-based, learning by design, student-centered, and
constructionist learning approaches where the focus is on the process of learning, rather
than the final product [12].
AI, as mentioned earlier, is still an emerging technology that is continuously devel-
oped, incorporated into our daily lives, and influencing them. When students engage in
activities using AI technology, they become curious and wonder how it works. Some
might also become interested in creating their own models or tinkering to make them their own. AI
provides students with excitement and more learning opportunities to explore the new
technology; however, since some students have more exposure to AI in their homes or communities
while others have less access to it, it is crucial to incorporate AI learning early. To support
the early introduction of AI, there are several tools available for K-12
students. For example, AI for Oceans (https://code.org/oceans), designed for grades 3 and
up, introduces AI, machine learning, training data, and AI bias. It aligns with some con-
cepts of the United States’ Computer Science Teachers Association Computer Science
Standards (Data and Analysis and Impacts of Computing). There are also AI-enhanced
online tools and resources that introduce AI which are fun to use. For example, there
are several AI-tools accessible to K-5, including Scratch Lab’s Face Sensing (https://lab.
scratch.mit.edu/face/), Scratch extensions (i.e. text to speech, translate, video sensing,
face sensing), Machine Learning for Kids (https://machinelearningforkids.co.uk/), and
Cognimates (https://cognimates.me/).
Although there are several AI-enhanced online tools that support students' learning of
AI, computer science, and computational thinking concepts, the concepts remain rather
abstract for students since their work stays inside the computer (virtual and abstract).
Students, especially younger ones, tend to struggle to fully grasp abstract concepts and
ideas without applying them in real-life situations using manipulatives.
With Seymour Papert's constructionism theory as its foundation, the learning environment
that robotics tools create promotes students' learning of abstract concepts and ideas [12–
18]. Incorporating AI-powered robotics tools as manipulatives could make abstract AI,
computer science, and computational thinking concepts and their ideas visible while

students apply the concepts in real-life situations. Since the subject and concepts that AI
introduces are foreign to many, making AI and its concepts visible through AI-robotics
tools could help their construction of new knowledge, make their understanding of AI
deeper, and help them retain the newly constructed knowledge.

3 AI-Powered Educational Robotics Tools


There are several AI-powered Educational Robotics Tools (AI-robots) developed for
K-12 students. Cozmo, Zumi, and CogBots are introduced in this paper.

3.1 Cozmo
Cozmo (Fig. 2) is the first AI-powered robot that became available for students to explore
AI while learning to code. Cozmo is equipped with proximity sensors, a gyroscope and a
downward-facing cliff detector, and a camera, which allows it to sense its environment.
Cozmo has vision capabilities, which allow it to learn human faces and objects and sense
human emotions. It provides several coding options, such as Code Lab, a block-coding
environment built on Scratch Blocks for younger students, and Python. It provides SDKs
for expert programmers to explore its AI capabilities. Code Lab is a coding app that runs
on tablets. Calypso (https://calypso.software/) is another coding app for Cozmo. It also
runs on tablets. Calypso is a simple tile-based user interface to teach robot logic and
behavior. It also provides an AI lesson that encourages students to learn how AI works
while developing coding skills. ReadyAI, a company promoting AI education based in
Pittsburgh, developed an AI education unit for elementary school students (https://edu.
readyai.org/courses/lesson-plans-elementary-school/). The lessons incorporate project-
based learning and let students explore AI concepts with a special focus on AI ethics and
AI for social good. Cozmo is currently not available; however, it will become available
again in Spring 2021.

Fig. 2. Cozmo Fig. 3. Zumi

3.2 Zumi
Zumi, by Robolink (https://www.robolink.com/zumi/), is a self-driving car robot with
vision and navigation capabilities that allows students to explore AI, machine learning,
computer vision, mapping, and self-driving decision-making. She is a Raspberry Pi-
based robot equipped with a gyroscope, accelerometer, camera, and six IR sensors to
navigate her environment. Zumi can be controlled with Blockly and Python. Blockly is
a block-coding app for younger students. Zumi can be trained to identify objects and
make decisions accordingly, or trained to identify different faces (Fig. 3).

3.3 CogBots
CogBots, namely CogBot and CogMini (Figs. 4 and 5), are open-source robotics kits developed
by CogLabs in collaboration with Google and UNESCO. They are do-it-yourself-type
robotics kits that spark students' creativity and imagination. CogBots empower students
through designing, assembling, and programming their own “thinking” robot.
They use sustainable and readily available materials like recycled boxes and cardboard
(Fig. 6), 3D-printed parts, and recycled smartphones. For the controller, CogBots use an
Arduino-compatible ESP32 board that communicates with the motors and a smartphone. The ESP32
is a low-cost, low-power microcontroller board with integrated WiFi and dual-mode
Bluetooth communication capabilities, which is robust enough for the AI-robotics tool.
Its motors, micro 360-degree continuous-rotation servo motors, are also low-cost (US$5
each), which contributes to keeping the price of the robot affordable.

Fig. 4. CogBot Fig. 5. CogMini


CogBot uses a smartphone for its sensors. The smartphone needs to be connected
to the same local WiFi network as the computer used for programming. The
smartphone can be a recycled one without a cellular data subscription (Android 5.0
onwards). It communicates with the computer via the local WiFi network and with the ESP32
board via Bluetooth. The current model works with Android phones only; however,
there are plans to make an iOS app available in the near future.
Scratch is recognized as one of the most powerful, popular, and accessible block-
coding tools available in many countries around the world, which makes it an ideal application
for CogBots. CogBots provide a machine learning experience in which students program and
control the AI-powered robot to learn AI concepts. A Scratch exten-
sion makes this AI-CS experience accessible to students with little or no coding experi-
ence (Fig. 7). Scratch is beneficial because it is a web-based programming environment

Fig. 6. CogBot with cardboard and craft materials

where no installation is required, making it easy for teachers to use in their classrooms
since any installation on their classroom computers requires IT support. Since Scratch is
widely used in K-12 classrooms around the world, it will make programming with Cog-
Bots accessible to many students who have experience coding with Scratch. Teachable
Machine, by Google Creative Lab, is an additional tool that enables CogBot to provide
a machine learning experience (https://teachablemachine.withgoogle.com/train/image).
The ThinkBot Scratch extension allows students to use an image classification model created
with Teachable Machine in their code to control the CogBots.
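
As an illustration of the idea behind this combination, the Python sketch below loads an image model exported from Teachable Machine in its Keras format (keras_model.h5 plus labels.txt) and maps the prediction for one camera frame to a robot command. This is not the ThinkBot extension itself, which is block-based; the class names in labels.txt and the send_to_cogbot stub are assumptions for the example.

    # Illustrative Python sketch (not the ThinkBot Scratch extension): classify one
    # camera frame with a Teachable Machine export and choose a robot command.
    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    model = load_model("keras_model.h5")                    # Teachable Machine export
    labels = [line.strip() for line in open("labels.txt")]  # e.g. "0 go", "1 stop"

    def classify_frame(frame_bgr):
        """Return the label of the most likely class for one camera frame."""
        img = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        img = cv2.resize(img, (224, 224)).astype(np.float32)
        img = img / 127.5 - 1.0                             # normalise to [-1, 1]
        probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
        return labels[int(np.argmax(probs))]

    def send_to_cogbot(command):
        """Placeholder for forwarding the command to the robot (WiFi/Bluetooth)."""
        print("would send:", command)

    capture = cv2.VideoCapture(0)
    ok, frame = capture.read()
    if ok:
        send_to_cogbot(classify_frame(frame))
    capture.release()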

Fig. 7. ThinkBot scratch extension



4 How to Promote AI-Powered Educational Robotics as a Learning


Tool - Next Step
This paper has explored the idea of promoting computing education (including computer sci-
ence and computational thinking skills) and AI education through AI-powered educa-
tional robotics as a motivating learning tool. It introduced AI-powered educational
robotics tools suitable for upper elementary school students and older. These are excit-
ing and fun tools to use in the classroom, which have the potential to engage students
in hands-on learning. However, just bringing a tool into a classroom does not promote
student learning by itself.
In order to support the integration of AI and computer science education into classrooms,
it is important that there are well-structured lessons aligned to the standards
that teachers need to cover. Especially in elementary schools, AI and computer science
education cannot be a standalone course. It needs to be integrated into existing lessons
without sacrificing core subjects' standards. Some of the AI-empowered educational
robotics tools come with lessons. However, their focus is mainly on AI education and
there is no crosswalk with the standards of computer science and other core subjects. The
lessons integrating AI-powered educational robotics tools need to be age-appropriate,
have progressions, be aligned with computer science and core subject standards, and
be easy to integrate into teachers' everyday lessons.
Moreover, it is important to point out that learning with AI-powered educational
robotics tools must provide an authentic connection with the students' lives through real-
world problems and/or challenges [14]. While engaging students in real-world problems,
developing solutions, and demonstrating their learning by physically testing solutions
with their robotics creations, their active engagement in authentic learning ensures deeper
subject knowledge acquisition among the students. In addition, the lessons need to incor-
porate culturally responsive approaches. Culturally responsive teaching is a pedagogy
designed to motivate students by incorporating teaching practices grounded in their
experiences and the ways in which they make sense of their knowledge [19]. It places
students' cultural identities at the center of their learning process while incorporating
various aspects of their lives, including their cultural knowledge, prior experiences and
knowledge, and ways of thinking, into the curriculum.
all students with their learning journey by making it relevant and relatable for the learn-
ers [20]. Culturally responsive computing education is not new. Culturally responsive
pedagogy “can be used to explore problems and solutions in any scientific or technical
field, often using traditional knowledge or practices of the group being educated” [21].
Culturally responsive pedagogy can be incorporated anywhere to benefit students who
do not come from the group that created the AI and computer science education curricu-
lum. To promote the integration of AI and computer science education into classrooms through
AI-empowered educational robotics tools, we need to develop lessons and instructional
materials that meet the needs addressed in this paper.

References
1. Touretzky, D.S., Gardner-McCune, C., Martin, F., Seehorn, D.: Envisioning AI for K-12: what
should every child know about AI? In: AAAI 2019. AAAI Press, Palo Alto (2019)

2. Payne, B.H.: An Ethics of Artificial Intelligence Curriculum for Middle School Students.
https://aieducation.mit.edu/aiethics.html
3. ISTE: Artificial Intelligence Explorations and Their Practical Use in Schools. https://www.
iste.org/learn/iste-u/artificial-intelligence
4. Clarke, B.: Artificial Intelligence Alternate Curriculum Unit (2019). http://www.exploringcs.
org/wp-content/uploads/2019/09/AI-Unit-9-16-19.pdf
5. AI4ALL: AI4ALL - Open Learning. https://ai-4-all.org/open-learning/. Accessed 1 Feb 2021
6. Eguchi, A.: Keeping students engaged in robotics creations while learning from the
experience. In: Presented at the 11th International Conference on Robotics in Education
(2020)
7. Eguchi, A.: Educational robotics theories and practice: tips for how to do it right. In: Barker,
B.S., Nugent, G., Grandgenett, N., Adamchuk, V.L. (eds.) Robotics in K-12 education: a
new technology for learning, pp. 1–30. Information Science Reference (IGI Global), Hershey
(2012)
8. Eguchi, A.: Learning experience through RoboCupJunior: promoting STEM education and
21st century skills with robotics competition. In: Proceedings of the Society for Information
Technology & Teacher Education International Conference (2014)
9. Eguchi, A.: Educational robotics for promoting 21st century skills. J. Autom. Mobile Robot.
Inf. Syst. 8, 5–11 (2014). https://doi.org/10.14313/JAMRIS_1-2014
10. Eguchi, A.: Computational thinking with educational robotics. In: Proceedings of the Society
for Information Technology & Teacher Education International Conference (2016)
11. Valls, A., Albó-Canals, J., Canaleta, X.: Creativity and contextualization activities in edu-
cational robotics to improve engineering and computational thinking. In: Lepuschitz, W.,
Merdan, M., Koppensteiner, G., Balogh, R., Obdržálek, D. (eds.) RiE 2017. AISC, vol. 630,
pp. 100–112. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-62875-2_9
12. Eguchi, A.: Bringing robotics in classrooms. In: Khine, M.S. (ed.) Robotics in STEM
Education, pp. 3–31. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-57786-9_1
13. Eguchi, A.: Student learning experience through CoSpace educational robotics. In: Pro-
ceedings of the Society for Information Technology & Teacher Education International
Conference. (2012).
14. Eguchi, A.: Educational robotics as a learning tool for promoting rich environments for active
learning (REALs). In: Keengwe, J. (ed.) Handbook of Research on Educational Technology
Integration and Active Learning, pp. 19–47. Information Science Reference (IGI Global),
Hershey (2015)
15. Papert, S.: The Children’s Machine: Rethinking School in the Age of the Computer. Basic
Books, New York (1993)
16. Papert, S.: Mindstorms: Children, Computers, and Powerful Ideas, 2nd edn. Basic Books,
New York (1993)
17. Bers, M.U.: Using robotic manipulatives to develop technological fluency in early childhood.
In: Saracho, O.N., Spodek, B. (eds.) Contemporary Perspectives on Science and Technology in
Early Childhood Education, pp. 105–125. Information Age Publishing Inc., Charlotte (2008)
18. Bers, M.U.: The TangibleK robotics program: applied computational thinking for young
children. Early Child. Res. Pract. 12 (2010)
19. Rhodes, C.M., Schmidt, S.W.: Culturally Responsive Teaching in the Online Classroom
(2018). https://elearnmag.acm.org/featured.cfm?aid=3274756
20. Richards, H.V., Brown, A.F., Forde, T.B.: Addressing diversity in schools: culturally
responsive pedagogy. Teach. Except. Child. 39, 64–68 (2007)
21. Eglash, R., Gilbert, J.E., Foster, E.: Broadening participation toward culturally responsive
computing education - Improving academic success and social development by merging
computational thinking with cultural practices (2013)
An Interactive Robot Platform
for Introducing Reinforcement Learning
to K-12 Students

Ziyi Zhang(B) , Sara Willner-Giwerc, Jivko Sinapov, Jennifer Cross,


and Chris Rogers

Tufts University, Medford, MA 02155, USA


{ziyi.zhang,sara.willner giwerc,jivko.sinapov,jennifer.cross,
chris.rogers}@tufts.edu

Abstract. As artificial intelligence (AI) plays a more prominent role


in our everyday lives, it becomes increasingly important to introduce
basic AI concepts to K-12 students. To help do this, we combined the
physical (LEGO® robotics) and the virtual (web-based GUI) worlds to help
students learn some of the fundamental concepts of reinforcement
learning (RL). We chose RL because it is conceptually easy to understand
but has received the least attention in previous research on teaching AI to
K-12 students. Our initial pilot study of 6 high school students in an
urban city consisted of three separate activities, run remotely on three
consecutive Friday afternoons. Students’ engagement and learning were
measured through a qualitative assessment of students’ discussions and
their answers to our evaluation questions. Even with only three sessions,
students were optimizing learning strategies, and understanding key RL
concepts and the value of human inputs in RL training.

Keywords: Reinforcement learning · Robotics education · K-12


education

1 Introduction
Artificial intelligence (AI) is predicted to be a critical tool in the majority of
future work and careers. It is increasingly important to introduce basic AI con-
cepts to K-12 students to build familiarity with AI technologies that they will
interact with. Reinforcement Learning (RL), a sub-field of AI, has been demon-
strated to positively contribute to many fields, including control theory [1],
physics [2] and chemistry [3]. This was demonstrated by the AlphaZero system
which attracted widespread public attention by defeating the world’s top chess
and Go players [4]. For students, the basic concepts of RL are intuitive and attrac-
tive to learn, since RL resembles our natural understanding of how learning works [5] and
is easy to demonstrate with games. For instance, when researchers introduced
Super Mario to undergraduate RL sessions, they argued that it increased stu-
dents’ engagement [6]. However, most current platforms and empirical research
on introducing AI to K-12 students are based on training supervised learning
c The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Merdan et al. (Eds.): RiE 2021, AISC 1359, pp. 288–301, 2022.
https://doi.org/10.1007/978-3-030-82544-7_27

models [7,8]. There are only a few activities on the Machine Learning for Kids
platform, and a VR system designed by researchers, that are intended to intro-
duce RL to K-12 students [9]. The activities in these RL teaching projects were
all fully developed in the simulated world and there is no research on using phys-
ical tools like educational robots to introduce RL concepts to K-12 students in
the real world.
To address this need, we designed three consecutive sessions combining the
LEGO SPIKE Prime robot and a code-free web-based GUI designed for students
in middle school and above to learn basic RL concepts. Our first two sessions are
designed around the LEGO SPIKE Prime robot. We asked students to build and
control a robot car to move a specific distance, first with mathematical methods
using extrapolation, and then with an RL method. In the third session, students
were asked to train a virtual robot agent to finish a 1D and a 2D treasure hunt
challenge on a web-based GUI. Prior research has shown that educational robots
are powerful tools for introducing AI concepts to students from a young age [10]
up to college level [11,12]. Through our sessions, we want students to learn
RL in both the physical and virtual world and give the students an intuitive,
interactive, and engaging learning experience. Due to COVID-19, we designed
our system to be easily built and explored at home, with the instructors providing
feedback and guidance using common video conferencing platforms. Our system
and proposed structured activities cover the following aspects of RL: 1) Key
concepts in RL, including state, action, reward and policy; 2) Exploration and
exploitation and how the agent chooses whether to explore or exploit; 3) Q-table
and agent’s decision making based on it; 4) Episodes and termination rules; and
5) Impacts of human input. We ran a pilot study in December 2020 to test
our system design and session plan with 6 high school students in Boston area
remotely and described the key results and takeaways in this paper.

2 Background and Related Work

It has become increasingly important to introduce AI concepts to pre-college


level or even younger students, resulting in a number of approaches and
methodologies [7,10]. Recent AI education platforms include Google’s Teachable
Machine [13], Machine Learning for Kids, and MIT's Cognimates [14]. Google's
Teachable Machine provides a web-based GUI for students to build classifica-
tion models without any specialized technical expertise and to easily distribute
designed courses and tutorials to others. Other web-based AI education plat-
forms demonstrate AI-related concepts by implementing servers and web inter-
faces for students to design and test AI agents to play games such as Rok [15].
When designing an activity to teach AI to middle and high school students,
we focused on how to lower the barrier to entry and keep it engaging. Game-
like activities can be engaging to K-12 students and many methodologies for
teaching AI have used video games as educational tools [6,16]. Research has
shown that games are a promising tool for introductory AI classes [17,18] and
has demonstrated the potential to engage non-major students [12]. These prior
works indicate that students are more motivated and engaged when learning
AI through games [19]. Educators also consider “toy problems” as satisfactory
approaches for introducing basic AI knowledge [20]. Our activity is inspired by classic treasure hunt games, which we make tangible by combining the web GUI with an educational robotics platform.
Educational LEGO robots and others have been studied in many K-12 edu-
cation contexts including physics [21], mathematics [22] and engineering [23,24].
These studies have shown that robotics kits can improve the students’ engage-
ment and facilitate understanding of STEM concepts. LEGO robots have also
been used in studies for teaching AI and robotics to students of different age groups [10–12,25–27]. These researchers have reported that LEGO robots could make learning more interactive, attractive, and friendly to students who do not
have an AI or robotics background, which provided inspiration for us on how to
utilize the advantages of physical robots when designing our RL activity.

3 Reinforcement Learning Background

Reinforcement learning is a class of problems where an agent has to learn how to act based on scalar reward signals detected over the course of the interaction.
The agent’s world is represented as a Markov Decision Process (MDP), a 5-
tuple < S, A, T , R, γ >, where S is a discrete set of states, A is a set of actions,
T : S × A → Π(S) is a transition function that maps the probability of moving
to a new state given an action and the current state, R : S × A → ℝ gives the reward
of taking an action in a given state, and γ ∈ [0, 1) is the discount factor. We
consider episodic tasks in which the agent starts in an initial state s0 and upon
reaching a terminal state sterm , a new episode begins.
At each step, the agent observes its current state, and chooses an action
according to its policy π : S → A. The goal of an RL agent is to learn an
optimal policy π ∗ that maximizes the long-term expected sum of discounted
rewards. One way to learn the optimal policy is to learn the optimal action-
value function Q∗ (s, a), which gives the expected sum of discounted rewards for
taking action a in state s and following policy π∗ thereafter:

$$Q^*(s, a) = R(s, a) + \gamma \sum_{s'} T(s' \mid s, a)\, \max_{a'} Q^*(s', a')$$

A commonly used algorithm for learning the optimal action-value function is Q-learning [28]. In this algorithm, the Q-function is initialized arbitrarily (e.g., all zeros). Upon performing action a in state s, observing reward R, and ending up in state s′, the Q-function is updated using the following rule:

$$Q(s, a) \leftarrow Q(s, a) + \alpha \left( R + \gamma \max_{a'} Q(s', a') - Q(s, a) \right)$$


where α, the learning rate, is typically a small value (e.g., 0.05). The agent decides which action to select using an ε-greedy policy: with small probability ε, the agent chooses a random action (i.e., the agent explores); otherwise, it chooses the action with the highest Q-value in its current state (i.e., the agent acts greedily with respect to its current action-value function).

Fig. 1. An example physical build of the robot car and activity environment
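To make the update rule and the ε-greedy selection concrete, the following minimal Python sketch implements tabular Q-learning as described above. It is an illustration only, not code from our platform; the `env.reset()`/`env.step()` calls stand for a hypothetical episodic environment interface.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.05, 0.9, 0.1  # learning rate, discount factor, exploration rate

def epsilon_greedy(q_table, state, actions, epsilon=EPSILON):
    """With probability epsilon pick a random action (explore); otherwise act greedily."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table[(state, a)])

def q_learning_episode(env, q_table, actions):
    """Run one episode, applying the Q-learning update after every step."""
    state, done = env.reset(), False
    while not done:
        action = epsilon_greedy(q_table, state, actions)
        next_state, reward, done = env.step(action)  # hypothetical env API: reset()/step()
        best_next = max(q_table[(next_state, a)] for a in actions)
        q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])
        state = next_state
    return q_table

q_table = defaultdict(float)  # Q-values start at zero, as described above
```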
RL can be computationally expensive and may require large amounts of
interaction. To speed up learning, researchers have proposed human-in-the-loop
RL methods. For example, in the learning-from-demonstration (LfD) framework,
human teachers take over the action selection step, often providing several tra-
jectories of complete solutions before the agent starts learning autonomously. In
a related paradigm, the agent can seek “advice” from its human partner, e.g.,
what action to select in a particular state for which the Q-values are thought
to be unreliable due to lack of experience. One of the goals of our system and
proposed activity is to demonstrate to students how human partners can help a
robot learn through interacting with it.
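As a rough illustration of the advice-seeking idea, one simple (and entirely hypothetical) policy is to let the agent defer to its human partner whenever its Q-value estimates in the current state are too close to distinguish; the `ask_human` callback below is an assumed hook, not part of any cited framework.

```python
def choose_action(q_table, state, actions, ask_human, confidence_margin=0.1):
    """Pick the greedy action, but ask the human partner for advice when the top
    Q-values are too close to call (a crude proxy for 'unreliable' estimates)."""
    values = sorted((q_table[(state, a)] for a in actions), reverse=True)
    if len(values) > 1 and values[0] - values[1] < confidence_margin:
        return ask_human(state)  # hypothetical callback supplied by a GUI or a teacher
    return max(actions, key=lambda a: q_table[(state, a)])
```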

4 Technical Approach
4.1 System Overview
The following system components are required for setting up the activity: a
physical robot platform, a custom code-free web-based GUI environment to bet-
ter visualize and control the RL training process, and instructions to guide stu-
dents as they go through the activity. We used the LEGO SPIKE Prime robot,
which supports MicroPython and Scratch-like block-based coding, and Blue-
tooth communication between the robot and a computer. It comes with several
sensors, including a distance sensor, a force sensor, and a color sensor. It also has an
intuitive programming application for students to code and monitor the status
of the robot. In our activity, students are guided through a challenge to build
a robotic car like the one shown in Fig. 1 and program it to move a specific
distance, first using a mathematical approach, and then using an RL approach.
1 All code and materials available at: https://github.com/ZyZhangT/rlplayground.
Fig. 2. The front side of the placemat instruction used in Sessions 1 and 2. We provide students with some example builds and hints for the solutions of the challenges. High-
resolution images are available at: https://github.com/ZyZhangT/rlplayground/tree/
main/Placemats

We also designed an interactive, web-based GUI for both the 1D and 2D treasure hunt activities. The objective of the GUI is to facilitate student learning by illustrating key concepts and processes that are hard to demonstrate with the physical robot. We also wanted to provide students with a friendly and efficient way to manually control the training process of an RL agent. To serve this purpose, the GUI system has the following features: 1) An interface for students to visualize how key RL variables (e.g., state, action, reward) change during the training process; 2) Opportunities for students to participate in the training process by providing rewards to the robot, demonstrating how human input can affect training; 3) An interface for students to tweak the ε value to learn the difference between, and the importance of, exploration and exploitation; 4) Visual aids for teachers to explain or demonstrate RL concepts, especially when implementing this activity remotely; and 5) A repeatable, automated training platform for the agent while students explore the activity independently.
Placemat instructions are one-page (double-sided) instructions that supple-
ment the GUI. They provide students with a few images of example builds, some
guidance to get them started, and activity specific background knowledge. How-
ever, they do not provide step-by-step instructions or dictate the creation of a
single “correct” solution [29]. For this activity, students used two different place-
mat instructions shown in Fig. 2. The goal of using the placemat instructions
was to support students in getting started with the RL activities without telling
them exactly what to do and how to do it.

4.2 Graphical User Interface for Treasure Hunting Activities

Fig. 3. GUI layout of the 1-D treasure hunt. The upper part is for students to visualize the virtual robot's moves in the maze and to interact with the GUI; the lower part is for students to visualize the agent's assessment of taking different actions in each state.

The first part of our GUI is based on a 1-D treasure hunt activity, as shown in Fig. 3. The agent is located in a six-state environment displayed at the center of the interface. Students can differentiate between various states, including a starting state, a trap (restart from the starting state), and a treasure (termination). The agent can choose its action to be either moving left or moving right. We set the rewards for the trap, the treasure, and all other states to −20, 20, and −1, respectively. Students can let the agent train itself for an episode using the Q-learning algorithm by clicking the button above the map and can inspect
the learning process by watching how the agent moves around in the environment
and also from the information provided in the user interface, such as cumulative
return of the episode, current state, last action and reward. We also provide
buttons for students to reset the training process, or to navigate to the 2-D
treasure hunt challenge. To help students acquire a better understanding of the
learning procedure, we provide a visual aid to illustrate the current estimated
Q-value of each state-action pair, which is located under the virtual maze. For
each state-action pair, if the corresponding box turns greener, it means the agent
has a more positive value estimation; if the corresponding box turns redder, it
has a more negative value estimation. But how each Q-value is calculated is not
exposed to the student since it would be too complicated. We also have two
buttons with question marks to help students with some common confusions
they might have during the training process.
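For readers who want to experiment, the 1-D dynamics described above can be sketched in a few lines of Python. The cell positions of the trap and the treasure are assumptions made for illustration, while the reward values (−20, 20, −1) follow the description above; the class matches the hypothetical `reset()`/`step()` interface used in the earlier Q-learning sketch.

```python
class TreasureHunt1D:
    """Six cells in a row (0..5): a start, a trap that sends the agent back to the
    start with reward -20, a terminal treasure with reward +20, and -1 elsewhere."""
    LEFT, RIGHT = -1, +1  # the two available actions

    def __init__(self, start=0, trap=2, treasure=5):  # cell positions are illustrative assumptions
        self.start, self.trap, self.treasure = start, trap, treasure
        self.state = start

    def reset(self):
        self.state = self.start
        return self.state

    def step(self, action):
        self.state = min(max(self.state + action, 0), 5)  # stay inside the six cells
        if self.state == self.trap:        # trap: penalty, then restart from the start cell
            self.state = self.start
            return self.state, -20, False
        if self.state == self.treasure:    # treasure: reward and end of the episode
            return self.state, 20, True
        return self.state, -1, False       # any other move costs -1
```

An episode of Q-learning can then be run directly on this environment, e.g. `q_learning_episode(TreasureHunt1D(), q_table, [TreasureHunt1D.LEFT, TreasureHunt1D.RIGHT])`.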
Through the 1-D treasure hunt activity, we hope to introduce basic RL concepts such as state, action, and episodes to students and to help them get familiar with our interface. After that, students are guided to proceed to the 2-D treasure hunt activity.
Fig. 4. GUI layout of the 2-D treasure hunt. The general layout is the same as in the 1D version, except that we added a middle part to the GUI for students to play with different training modes; on top of the right part of the GUI, students can tweak the exploration rate and the automatic training speed.

As shown in Fig. 4, the agent is now in a 2-D maze, with the addition of an exit and some barriers between states to make the problem more complex. To put students in the robot's perspective, the maze structure starts out hidden from students, and as the training proceeds, the states visited by the agent and the barriers detected by the agent are revealed on the map. The maze has 16 states, including a starting state, two traps (restart from the starting state), a treasure, and an exit (termination). The rewards for actions that make the robot reach a trap, the treasure, or the exit, or that hit a barrier, are −10, 20, 50, and −10, respectively; otherwise, the reward is −1. The agent now has 4 available actions: moving up, down, left, or right. For the 2-D treasure hunt, we provide buttons for both automatic training and manual training, which means students can either let the agent train itself based on the reward table we provide, or give the agent a reward themselves at each step to train it in their own way. If students wish to take a step-by-step look at the training process, we also provide an adjustable automatic training speed so they can slow the training down. Students can also tweak the exploration rate (ε) during training to see how it affects the agent's learning process. We provide the same kind of information about the training process above the maze map and a visual illustration of the Q-value estimates on the right side of the interface. Through the 2-D treasure hunt activity, we expect students to gain a deeper comprehension of basic RL concepts, and also to understand exploration and exploitation and the impact of human inputs.
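How the automatic reward table, the manual rewards, and the exploration rate interact in a single training step can be summarized roughly as follows. This sketch is our own illustration rather than the GUI's actual code; the `env.step` variant returning an event label is an assumption, and only the reward values (−10, 20, 50, −1) and the manual reward options are taken from the description above.

```python
import random

ALPHA, GAMMA = 0.05, 0.9
# Reward table used for automatic training (values from the activity description).
REWARDS = {"trap": -10, "barrier": -10, "treasure": 20, "exit": 50, "default": -1}
MANUAL_CHOICES = (10, 5, 1, -1, -5, -10)  # reward options offered to students in manual mode

def training_step(q_table, state, actions, env, epsilon, manual_reward=None):
    """One 2-D training step. q_table is e.g. a defaultdict(float). If manual_reward is
    given (one of MANUAL_CHOICES), it replaces the automatic reward from the table."""
    if random.random() < epsilon:                      # explore with probability epsilon
        action = random.choice(actions)
    else:                                              # otherwise exploit the current Q-values
        action = max(actions, key=lambda a: q_table[(state, a)])
    next_state, event, done = env.step(action)         # hypothetical API: event is e.g. "trap", "exit"
    reward = manual_reward if manual_reward is not None else REWARDS.get(event, REWARDS["default"])
    best_next = max(q_table[(next_state, a)] for a in actions)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])
    return next_state, done
```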

4.3 Lesson Structure

The lesson was divided into three one-hour sessions. The first two sessions lever-
aged the LEGO SPIKE Prime robot. In the first session, students built a robot
car for our “Going the Distance” challenge. Students used the placemat instruc-
tion shown in Fig. 2 as a support resource. The goal of this activity was to get
students to program their car to get as close as possible without running over
a LEGO mini-figure placed between 5 in. and 25 in. away from their cars. They
were not allowed to use a sensor for this challenge. The objective of this was
to get them to think about interpolation and extrapolation and get more famil-
iar with programming their robot using mathematical relationships. While this
activity did not leverage RL or other artificial intelligence concepts, it acted as
a setup to the second activity which did introduce RL.
In the second session, the students used reinforcement learning to solve the
going the distance challenge. The goal of this challenge was to help students
obtain a general understanding of what RL is and let them try to code a simple
RL method. This session contained 4 phases. Since we assumed that students had
little to no prior experience with artificial intelligence, the first phase was a 5-minute introduction to the general concepts of AI and reinforcement learning. Phase 2 was the main part of this session, in which we sent out the placemat shown in Fig. 2 and students tried to implement the RL method we provided in the placemat to accomplish the challenge individually. In Phase 3, we asked students
to show how far they got. The goal of this section was for them to share ideas
with each other, and also for us to evaluate how much the students learned from
this session and whether the activity design was achieving aspects of the desired
outcomes. The last phase was to get feedback from students and introduce some
of the concepts for the final session.
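To give a flavour of what a solution to this challenge can look like, the sketch below shows one possible trial-and-reward loop in plain Python. It is not the code from the placemat, and `run_motors_for` and `measure_distance_travelled` are hypothetical placeholders for the SPIKE Prime motor command and the distance measurement (which, in practice, students could also enter by hand).

```python
import random

TARGET_CM, N_TRIALS = 50, 20                  # example target distance and number of trials
run_times = [0.5 * i for i in range(1, 11)]   # candidate motor run times in seconds (the actions)
value = {t: 0.0 for t in run_times}           # learned estimate of how good each run time is

def run_motors_for(seconds):
    """Placeholder for the robot's motor command (e.g., a SPIKE Prime motor-pair call)."""
    pass

def measure_distance_travelled():
    """Placeholder for the measurement step; in the activity this could be typed in by hand."""
    return 0.0

def trial(run_time):
    """One trial: drive for run_time seconds, measure the result, and return a reward
    that grows as the car stops closer to the target."""
    run_motors_for(run_time)
    error = abs(measure_distance_travelled() - TARGET_CM)
    return -error

for _ in range(N_TRIALS):
    # Mostly pick the best-looking run time so far, but sometimes explore a random one.
    t = random.choice(run_times) if random.random() < 0.2 else max(run_times, key=value.get)
    value[t] = 0.5 * value[t] + 0.5 * trial(t)  # running average of observed rewards
```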
The third session was designed around our web platform. The learning objec-
tives were to let students: 1) Understand more specific RL concepts (state,
action, reward and episode); 2) Identify the differences between exploration and
exploitation in the context of reinforcement learning and understand why explo-
ration is important and how it affects the training process if an inappropriate ε is
given; 3) Associate a specific action performed by the agent with the correspond-
ing policy function, i.e., relate the agent’s choice with values in the Q-table; and
4) Understand how humans can be part of the learning process and facilitate
the agent identifying more desirable actions in a shorter amount of time.
We separated this session into 4 phases. In the first phase, students played
with the 1D treasure hunting game to get familiar with the GUI and basic RL
concepts. Then they started the 2D treasure hunting challenge, in which they
first watched how the robot trained itself and learned about the environment.
Next, they manually trained the robot by giving it rewards with six options (+10,
+5, +1, −1, −5, −10) for several episodes to compare the two different methods.
After that, we asked students to reset the environment, set ε to zero, automatically train the robot for one episode, and then see how the robot kept moving to the treasure box and failed to find the exit. We let them discuss why this hap-
pens in an effort to introduce the concept of exploration and exploitation. Due
to the limited time, we interspersed our assessment of students' learning outcomes throughout the whole training process in the form of discussion ques-
tions. These questions included asking them to explain why specific phenomena
happen during training and to describe their understanding of RL concepts.

5 Pilot Study
5.1 Background
We conducted a pilot study with 6 high school students in a city just outside
of Boston, Massachusetts. These six students were members of a robotics club
that had been meeting weekly for three months to do activities with the LEGO
SPIKE Prime robotic kit. Two of the students had previous robotics and coding
experience, and four of them were new to both robotics and coding. The students had no prior knowledge of AI. The testing consisted of three separate sessions, as proposed in the previous section. The first session was carried out on December 4th, and the following two sessions were held on the two consecutive Fridays after. Each session lasted for an hour, and all the sessions were held remotely on Zoom (Fig. 5).

Fig. 5. A screenshot from our pilot study. One student was testing his robot while others were watching or preparing their robot cars

5.2 Robot-Centered Sessions


The first two sessions were based on the SPIKE Prime robot. Students were asked
to build a robot car and figure out the relationship between running time and
moving distance, using a mathematical model (Session 1) and RL (Session 2), to
move the robot forward by a specific distance. The students were able to build
their car in 15 min (with the help of the placemat instruction) and came up with
different ways to control their robot, rather than simply using the relationship
between running time and moving distance. By the end of the session, all of the
students were successful or close to being successful in completing the challenge.
In Session 2, students were asked to program a simple RL method to solve the
same challenge as in Session 1. We included code snippets and a flowchart in the
placemat instruction, but some of the students still had difficulties coding the
RL method. Most of the confusion was about how to establish a reward system
by code. By the end of the session, two students came up with the correct
code structure of the RL method. One of them actively expressed his excitement
about the content of this session. The two students who were located in the same
household and therefore worked together had hardware issues and so failed to
catch up and complete the activity. One student who didn’t finish during the
session contacted us after the session and was able to complete the challenge.
Overall, we think the challenge had an appropriate difficulty level. Even though
not all of the students successfully programmed the RL method by the end of
the session, they understood the general methodology of RL with the help of the
placemat instruction and explanations from the instructor.
The LEGO SPIKE Prime robot and the placemat instructions were two key
components of the first two sessions. In our study, we found the robot not only
increased student engagement, but also inspired them to be creative. Unlike
a virtual platform where students have limited ways of interacting with the
environment, a physical robot allows them to manipulate the robot in space and
time and control not only the robot itself but also the robot’s environment. For
example, one student used the relationship between the rotations and driving
distance to control his robot in Session 1. They also built different robot cars
and were responsible for finding ways to test them. The placemat instructions
for Session 1 and 2 acted as a shared artifact between the students and the
instructors. In Session 1, the placemat instruction was useful for helping students
get started with building their robot car. In Session 2, the placemat instruction
served as a resource for the instructors to point students towards answers when
they had questions about how to code using RL. The diagram on the front of the
placemat instruction helped spark dialogue about what the relevant variables in
the system were, and the flowchart on the back helped students combine those
variables in a single program to achieve the desired outcome.

5.3 Web-Based Session


In the third session, centered around the web platform, students were first asked to explore the 1D treasure hunt game to get familiar with the GUI layout, and they
all successfully trained the virtual robot in 10 min. The students pointed out that
with more training episodes, the robot became smarter, and it learned to choose
the shortest route to the treasure, indicating that they understood what “train-
ing” means in RL. After the 1D challenge, they explored automatic and manual
training in the 2D maze. Automatic training went smoothly, but the students
struggled with the manual mode. Since the ε-greedy policy used to choose actions
was hidden from the students, it was difficult for them to understand some of
the agent’s random moves. However, the students also showed creativity and
tried out interesting ideas, including different training strategies. One student
shared that it was more effective to use small rewards at the beginning and then
use larger rewards to cement right moves into the agent’s policy. These types of
comments showed that students started to build up their own understanding of
the importance of the reward system. When asked whether they preferred auto-
matic or manual training, a student answered with the manual option since she
could control the agent’s rewards and it was fun to play around with different
rewards. Next, students tweaked the exploration rate (ε) to see how it affected training, and were asked several questions (shown in Table 1) about exploration vs. exploitation. One student proposed that we could give the robot a high exploration rate at the beginning and then decrease it, which is exactly what decaying ε-greedy training does in RL. Finally, we evaluated students' learning outcomes
on RL concepts by asking them several questions (a sample of answers is shown
in Table 1). After the session ended, one student stayed on to ask many techni-
cal questions, including how long it would take to design a platform like the one
he had just used, and how hard it was to accomplish the RL algorithm for the
treasure hunting activity. He also checked the source code behind the web GUI, which showed a high level of interest and engagement.

Table 1. Evaluation questions and answers from the students (pseudonym names)

1: Do you prefer automatic training or manual training?
A: "I enjoy the manual training, you kinda choose what it gets and doesn't get...though it takes a while to understand what is happening, it's still kind of fun to play around."

2: When the exploration rate was 0, why did the robot keep moving to the treasure box?
B: "Because whenever it hits the treasure, it gets 20 points."; C: "...Because it just always go to wherever it has the highest reward."

3: What would happen when the exploration rate is 1?
C: "...It would completely ignore all of the stuff it had found out and just move randomly."

4: Use one sentence to describe what you think reinforcement learning is?
B: "It takes time." D: "I think reinforcement learning is you go through many trials...test paths you remembered from experience and optimize it to find the most efficient way...Basically reinforcement learning is learn from experience." C: "It's trial and error."

5: What do you think is the most important part in RL?
C: "I think it's having a good system for figuring out how much the reward should be..."

6: For the activity in the last session, what do you think the states and actions are? How many states do we have for that activity?
C: "...While the distance sensor seems to be able to detect anything from 0–200 centimeters, I would say there are 200 states, and the action is moving some amount of time or distance."

5.4 Limitations
Since the pilot study only had 6 participants, we did not collect or analyze quantitative data from our testing. For Session 2, we think the time was tight: some students spent too much time programming the robot and debugging syntax errors rather than coming up with the logic of the RL algorithm. In the future, we would provide students with more pre-written code chunks so they could focus on writing the algorithm. In Session 3, since students were all working on their own screens, we found it hard to track how far they got. Going forward, we would use multiple screen-sharing functions so that we can see what students are doing in real time,
which would be especially helpful in the manual training phase. Also, we found
that if one student answered a question, other students were sometimes unwilling
to answer it again. Therefore, we need to find a way to encourage students to talk
more about their own thinking and reduce the influence of other students' answers so that we can better evaluate their learning outcomes.
5.5 Results
Overall, students demonstrated satisfactory learning outcomes on general and
specific RL concepts. Though most of the time we let the students independently
explore the activity we designed, they still gained an understanding of the AI concepts we were hoping they would learn about, and the activity successfully sparked their interest in the AI and RL fields. In Session 2 specifically, we expected students
to learn the logic of a general RL algorithm through hands-on coding experience
of solving a problem. With the exception of the two students who had hardware
issues, most of the students were able to generate the correct structure of the
RL method by the end of the session. In Session 3, we hoped that students
would learn about specific RL concepts including reward, action, explore and
exploit, etc. As shown in Table 1, students could correctly answer most of our
evaluation questions. The interesting ideas shared by the students during the
session revealed that some students not only had a clear understanding, but even
thought one step further than we expected on concepts covered in this session.

6 Conclusion and Future Work


In this paper, we integrated the LEGO SPIKE Prime robotics kit and designed a web platform to introduce Reinforcement Learning (RL) to students aged 13 and above. Centered around this platform, we designed three sessions for students
to learn about general RL ideas and specific RL concepts, and try to use RL to
solve a real problem. We conducted testing with six high school students and
the results showed that our activity and GUI design was engaging and intuitive
for students to use. Through our activity, students learned about several key
RL concepts and showed excitement and engagement throughout and after the
sessions. In the future, we hope to further evaluate the data we acquired from
the testing to improve the activity and GUI design. Moving forward, we plan to
design more systematic means to measure student learning of AI topics and to
scale up our platform for future testing with larger groups of students. Finally,
Augmented Reality (AR) interfaces (e.g., [30,31]) for human-robot interaction
have shown promise at revealing the robot’s sensory and cognitive data to users
so as to establish common ground between the human and the robot. We plan to introduce such AR interfaces to our activities to enable students to visualize
in context what the robot is learning and what factors it is using to make its
decisions as it attempts to solve a problem, as well as to enable students to debug
their robot in real time [32].

References
1. Kiumarsi, B., Vamvoudakis, K.G., Modares, H., Lewis, F.L.: Optimal and
autonomous control using reinforcement learning: A survey. IEEE Trans. Neural
Netw. Learn. Syst. 29(6), 2042–2062 (2018)
2. Fosel, T., Tighineanu, P., Weiss, T., Marquardt, F.: Reinforcement learning with
neural networks for quantum feedback. Phys. Rev. X 8, 031084 (2018)
3. Zhou, Z., Li, X., Zare, R.: Optimizing chemical reactions with deep reinforcement
learning. ACS Cent. Sci. 3, 12 (2017)
4. Silver, D., et al.: A general reinforcement learning algorithm that masters chess,
shogi, and go through self-play. Science 362(6419), 1140–1144 (2018)
5. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. A Bradford
Book, Cambridge (2018)
6. Taylor, M.E.: Teaching reinforcement learning with Mario: an argument and case
study. In: Proceedings of the 2011 AAAI Symposium Educational Advances in
Artificial Intelligence (2011)
7. Vartiainen, H., Tedre, M., Valtonen, T.: Learning machine learning with very young
children: who is teaching whom? Int. J. Child-Comput. Interact. 100182, 06 (2020)
8. Sakulkueakulsuk, B.: Kids making AI: integrating machine learning, gamification,
and social context in stem education. In: 2018 IEEE International Conference on
Teaching, Assessment, and Learning for Engineering (TALE), pp. 1005–1010. IEEE
(2018)
9. Coppens, Y., Bargiacchi, E., Nowé, A.: Reinforcement learning 101 with a virtual
reality game. In: Proceedings of the 1st International Workshop on Education in
Artificial Intelligence K-12, August 2019
10. Williams, R., Park, H.W., Breazeal, C.: A is for artificial intelligence: The impact
of artificial intelligence activities on young children’s perceptions of robots. In: CHI
2019: Proceedings of the 2019 CHI Conference on Human Factors in Computing
Systems, pp. 1–11, April 2019
11. Parsons, S., Sklar, E., et al.: Teaching AI using LEGO mindstorms. In: AAAI
Spring Symposium (2004)
12. van der Vlist, B., et al.: Teaching machine learning to design students. In: Pan, Z.,
Zhang, X., El Rhalibi, A., Woo, W., Li, Y. (eds.) Technologies for E-Learning and
Digital Entertainment, pp. 206–217. Springer, Heidelberg (2008)
13. Carney, M., et al.: Teachable machine: approachable web-based tool for exploring
machine learning classification. In: Extended Abstracts of the 2020 CHI Conference
on Human Factors in Computing Systems, CHI EA 2020, pp. 1–8, New York, NY,
USA, 2020. Association for Computing Machinery (2020)
14. Druga, S.: Growing up with AI. Cognimates: from coding to teaching machines.
PhD thesis, Massachusetts Institute of Technology (2018)
15. Friese, S., Rother, K.: Teaching artificial intelligence using a web-based game
server. In: Proceedings of the 13th Koli Calling International Conference on Com-
puting Education Research, pp. 193–194 (2013)
16. Yoon, D.-M., Kim, K.-J.: Challenges and opportunities in game artificial intelli-
gence education using angry birds. IEEE Access 3, 793–804 (2015)
17. McGovern, A., Tidwell, Z., Rushing, D.: Teaching introductory artificial intelli-
gence through java-based games. In: AAAI 2011 (2011)
18. DeNero, J., Klein, D.: Teaching introductory artificial intelligence with Pac-Man.
In: First AAAI Symposium on Educational Advances in Artificial Intelligence
(2010)
19. Hainey, T., Connolly, T.M., Boyle, E.A., Wilson, A., Razak, A.: A systematic
literature review of games-based learning empirical evidence in primary education.
Comput. Educ. 102, 202–223 (2016)
20. Wollowski, M.: A survey of current practice and teaching of AI. In: Thirtieth AAAI
Conference on Artificial Intelligence (2016)
21. Petrovič, P.: Spike up prime interest in physics. In: Robotics in Education, pp.
146–160. Springer International Publishing, (2021)
22. Mandin, S., De Simone, M., Soury-Lavergne, S.: Robot moves as tangible feedback
in a mathematical game at primary school. In: Robotics in Education. Advances
in Intelligent Systems and Computing, pp. 245–257. Springer International Pub-
lishing, Cham (2016)
23. Laut, J., Kapila, V., Iskander, M.: Exposing middle school students to robotics and
engineering through LEGO and Matlab. In: 120th ASEE Annual Conference and
Exposition, 2013. 120th ASEE Annual Conference and Exposition; Conference 23
June 2013 Through 26 June 2013
24. Williams, K., Igel, I., Poveda, R., Kapila, V., Iskander, M.: Enriching K-12 science
and mathematics education using LEGOs. Adv. Eng. Educ. 3, 2 (2012)
25. Whitman, L., Witherspoon, T.: Using LEGOs to interest high school students and
improve K-12 stem education. Change 2, 12 (2003)
26. Selkowitz, R.I., Burhans, D.T.: Shallow blue: LEGO-based embodied AI as a plat-
form for cross-curricular project based learning. In: Proceedings of the Twenty-
Eighth AAAI Conference on Artificial Intelligence, pp. 3037–3043 (2014)
27. Cheli, M., Sinapov, J., Danahy, E.E., Rogers, C.: Towards an augmented reality
framework for K-12 robotics education. In: Proceedings of the 1st International
Workshop on Virtual, Augmented, and Mixed Reality for HRI (VAM-HRI) (2018)
28. Watkins, C.J.C.H., Dayan, P.: Q-learning. Mach. Learn. 8(3–4), 279–292 (1992)
29. Willner-Giwerc, S., Danahy, E., Rogers, C.: Placemat instructions for open-
ended robotics challenges. In: Robotics in Education, pp. 234–244. Springer Inter-
national Publishing (2021)
30. Cleaver, A., Muhammad, F., Hassan, A., Short, E., Sinapov, J.: SENSAR: a visual
tool for intelligent robots for collaborative human-robot interaction. In: AAAI Fall
Symposium on Artificial Intelligence for Human-Robot Interaction (2020)
31. Chandan, K., Kudalkar, V., Li, X., Zhang, S.: ARROCH: Augmented reality for
robots collaborating with a human. In: IEEE International Conference on Robotics
and Automation (ICRA) (2021)
32. Ikeda, B., Szafir, D.: An AR debugging tool for robotics programmers. In: Inter-
national Workshop on Virtual, Augmented, and Mixed-Reality for Human-Robot
Interaction (VAM-HRI) (2021)
The BRAIINS AI for Kids Platform

Gottfried Koppensteiner1(B) , Monika Reichart1 , Liam Baumann1 ,


and Annamaria Lisotti2
1 Technologisches Gewerbemuseum, HTBLVA, Wien 20, Austria
{gkoppensteiner,mreichart}@tgm.ac.at
2 IIS Cavazzi, Pavullo, Italy

annamarialisotti@cavazzisorbelli.it

Abstract. The BRAIINS (BRing AI IN Schools) 2020–2023 Erasmus+ project is first and foremost a bottom-up educational alliance between schools in the EU with different backgrounds, expertise, and focus, and among their students. Although already per-
meating our world, AI is not explicitly taught in most schools and has not found an
appropriate place in curricula yet. While some platforms for kids and professional
tools can be found, BRAIINS targets 15–18-year-olds with a peer-to-peer approach. This paper explains how students with a technical background provide online learning materials and a ready-to-use website for their peers from the partner
schools using ML4Kids, KNIME, the ML Kit for Firebase and robotic platforms
such as Misty, Nao, Botball. This will give them the opportunity to learn from each
other, collaborate to enhance the website with self-written exercises and examples
as well as use it for their discussions about ethics and creative mindsets.

Keywords: Machine learning · Robotics · Peer-to-peer learning

1 Introduction
Artificial Intelligence is no longer the stuff of the future although awareness is still inad-
equate at all levels. Children growing up in the era of artificial intelligence (AI) will
have a fundamentally different relationship with technology than those before them [1].
Moreover, as grownups they will surely have to interact with AI in both their personal and
professional lives. As a result, Artificial Intelligence education is very important in the
21st century knowledge information society [2]. Nevertheless, the topic is not explicitly
taught in most of our schools and has not found an appropriate place in curricula yet.
Only a few platforms focusing on AI for schools have emerged, like Machine Learn-
ing for Kids, Calypso for Cozmo, and Cognimates [3–5]. All of these platforms provide AI
engines and enable children to do hands-on projects. There are also some professional
tools, used in projects by companies and teaching at universities, like Firebase or KNIME
[6, 7]. However, to prepare adequate digital literacy, avoid an increasing digital divide
among students and ensure equal opportunities to everyone, educators need to under-
stand how AI technologies can be leveraged to facilitate learning, accelerate inclusion,
solve real-world problems and how teaching can and should change accordingly. BRAI-
INS (BRing AI IN Schools) 2020–2023 is an Erasmus+ KA229 project funded by the

EU Commission and targeting students (15–18 yrs old), teachers and local communities
alike [8]. Coordinated by IIS Cavazzi - Pavullo (IT) the project involves TGM Wien
(AT), IES Villoslada Pamplona (ES) and 2° GEL Kalymnos (GR). The main priorities
are creation and testing of innovative educational practices in the digital era while con-
textually increasing interest and level of achievement in STEM and operating for an
inclusive school and, consequently, society. In this context, the project aims to increase students' competences in the following domains: AI, robotics, and programming.

Focus on AI. Participants in the project will acquire a critical knowledge of basic AI
ideas; gain awareness of its impact on the fast-changing future of work, of the related
ethical issues (privacy, data collection, algorithmic bias and their effect on individuals
and society, human-machine relationship) together with the new opportunities it offers.
They will also identify AI that can be used to support teachers: AES, personal tutors,
expert systems, and language expert support, thus favoring a shift in the teachers' role from a transmission model to that of facilitators.

Focus on Robotics. Working with humanoid robots will be a core section of the project
as human-machine relationships will be a critical point in the near future, dense with
ethical and social implications. With their human-like appearance and physical pres-
ence, such robots call for a more natural and intuitive interaction, are highly motivating
for all students and have proved particularly effective with special needs kids in their
engagement and for the improvement of cognitive tasks and social abilities.
Humanoid robots introduce new and attractive pedagogical challenges and
approaches promoting meaningful learning based on authentic tasks. NAO robots for
example will give to the students the opportunity to get their hands, mind and even heart
involved in the world of robots, letting them play a role of innovators in a high-tech but
still inclusive society.

Focus on Coding. Robotics favors a cross-field approach, involving a multiplicity of areas from language empowerment to STEM, with an emphasis on coding. It promotes new teaching and learning approaches and methodologies such as problem solving,
real tasks and challenge-based learning, teamwork, design thinking, civic engagement
meanwhile developing 21st century skills and new competences like critical and lateral
thinking, creativity, the capacity to tackle problems through algorithms, the ability to
take decisions in uncertainty based on data and probabilistic models. This favors gender
empowerment and the motivation of students who are low achievers in traditional school environments but fit well in a more operative, hands-on context. The paper is structured as follows: Sect. 2 introduces the tools for Artificial Intelligence, Sect. 3 describes the activities around the AI for Kids platform and further work, and Sect. 4 gives a conclusion.

2 Tools for Artificial Intelligence (Machine Learning)


As mentioned in the introduction, AI is part of our everyday lives, whether someone is
using an app on a mobile device [9], playing a game on a tablet or computer [10], driving
a car through the city [11] or buying something online [12]. AI is the driving force for
the rapid changes in the knowledge and information society of the 21st century and
the reason why it should be embraced and trained by teachers [13]. Moreover, artificial intelligence education should start in elementary school as an integral part of the curricula and should be taught at all stages up to university level. Currently, AI is emerging so fast that society is not able to cope with this transition. Everybody is using it, but we have a lack of understanding, and AI technologies do (and could) not care about the usage of data about us. In this context, the BRAIINS (Bring AI IN Schools)
project will set the scene for the understanding of AI future scenarios within schools
and local communities. Students within a service-learning frame will engage with the
public to make them reflect on AI imagery and related messages, unveiling how AI
technologies are already embedded in many different aspects of our lives, realizing how
they can benefit humanity but also what their drawbacks are. In addition, students (with a technical background) will provide online learning materials and a ready-to-use website
for their peers (from schools in other countries) in order to give them the opportunity
to learn from each other, encourage them to enhance the website with exercises devised
by them as well as use the website for their discussions of ethics and developing a
creative mindset. Different tools will be integrated in order to learn and teach about AI,
like ML4Kids, KNIME, and the ML Kit for Firebase. Dissemination will be achieved
in informal interactive learning environments such as hackathons, exhibitions, robotic
courses, contests.

2.1 ML4Kids
For an easy entry-point to start AI education, especially at the elementary level, we are
planning to use the ML4Kids Environment. Machine Learning for Kids (ML4Kids) is a
free tool that introduces machine learning by providing hands-on experiences for learning
about and training machine learning systems. The website provides an easy-to-use guided
environment for training machine learning models to recognize text, numbers, images,
or sounds. With other educational coding platforms, like Scratch and App Inventor, it
allows children to create projects and build games with the machine learning models
they train [3]. In addition to the website, a series of books will be presented in order to
guide young readers through creating compelling AI-powered games and applications
using the Scratch programming language [19]. Within the BRAIINS Project this website
will be evaluated, and the best examples will be provided at the project internal website
AI for Kids. Additionally, this website will provide the opportunity to share projects
done by kids and teachers in order to help the community.

2.2 KNIME
The KNIME Software is a professional tool, structured in an analytics platform and a
server. KNIME Analytics Platform is the open-source software for creating and learning
data science. It is intuitive, open, and will be continuously integrating new developments.
As a result, KNIME makes understanding data and designing data science workflows and
reusable components accessible to everyone. Together with KNIME Server, analytical
applications and services for automation, management, and deployment of data science
workflows could be integrated in the curricula. The idea behind integrating KNIME
Analytics Platform into our concept is that it will allow us to focus on concepts and methods
instead of the syntactic nature of programming languages. The KNIME-platform offers
a wide range of data access and processing functions, modeling tools, and techniques in
a single platform. Students will be able to apply methods using a software that they can
continue using throughout various stages of their professional lives. Within the KNIME
Academic Alliance [20], material as well as ideas on how to integrate KNIME in classes
will be shared. There are also a couple of books available, like the Guide to intelligent
data analysis: how to intelligently make sense of real data [21].

2.3 ML Kit for Firebase


Firebase is a NoSQL cloud-based database that syncs data across all clients in real-time
and provides offline functionality. Data is stored in the Realtime database as JSON, and
all connected clients share one instance, automatically receiving updates with the newest
data. Based on this database the ML Kit offers powerful machine learning functionality
for apps running either iOS or Android and is for both experienced and novice machine
learning developers. With just a few lines of code, powerful and easy to use machine
learning packages can be used [22]. Possible applications are text recognition, facial
detection, recognizing points of interest, image characterization and labeling. Together
with tools for building apps, this could be an easy-to-integrate outreach tool in order to
generate interest in STEM Education.

2.4 Robotic Platforms to Integrate ML


Besides working on online platforms, databases, or apps, (mobile) robots offer a different way to catch the attention of pupils as well as to implement AI technologies. Research on STEM education [14–17] and inclusion (which is a big part of the BRAIINS project) [18] provides evidence that robots can be used with diverse student groups of different ages. In order to follow the principles of peer-to-peer learning, all of the following tools will be evaluated and introduced on a webpage with short getting-started examples and other teaching materials, devised by students for students. Therefore, a few
Robotic-Platforms for the integration of AI technologies or principles will be evaluated
in the BRAIINS Project. A deeper evaluation will be done on three robot tools: first, MISTY [23] as an easy-to-use tool; second, the BOTBALL Robotics Set [24, 25], whose embedded Linux and reloadable firmware allow additional software paradigms to be added, so that the CBCv2 can be enhanced with additional software, like mobile agents in the DISBOTICS Project [26] or, for instance, AI tools; and finally the NAO Robot System, a well-established, robust platform with the support of an already extensive literature and many best-practice examples of integration in school, aiming at STEM enhancement [27].

3 Activities – AI for Kids Platform and Further Work


The basic idea of the project is that students with a technical background evaluate tools,
describe tools, present an easy-to-start tutorial on a webpage as well as help their peer
group to get in touch with Machine Learning. In this context, five groups of students
from the Department of Information Technology at the Vienna Institute of Technology
(TGM, Technologisches Gewerbemuseum), with 5 students per group, will be working
on the tools explained in Sect. 2 in different phases. First of all, within the next semester
all of these groups will work to get a deeper understanding of the topic, the possibilities
of these tools as well as how to learn about them in an easy way. As a result, the
groups will provide their knowledge on a webpage in order to support students from
the other three high schools (with non-technical background). With the beginning of
the next semester, the website will be presented during the European Conference on
Educational Robotics (ECER), which is a research conference of students for students,
where robotic tournaments take place [28]. Besides the presentation of the website, different tournaments involving ML technologies will also be announced. As the students at the other schools involved in the BRAIINS Project start learning about the presented ML tools in order to participate in the tournaments, they will also give feedback on the website, which will thus be further improved. During
the preparation for the tournaments, multiple exercises will be provided on the website.
All the participating teams need to solve those and present their work on the website.
The best solutions will be rewarded with points in their respective categories and for the
overall winner. This should lead to a common knowledge sharing community. After the
BRAIINS Project, this mechanism of introducing new exercises will be opened to the
public and should lead to the growth of this educational material.

4 Conclusion
The project is still at its beginning, but as research in the field of AI technologies has shown, a lot of tools are available. Only a few of them are suitable for absolute beginners or usable for outreach programs. Therefore, the BRAIINS project tries to overcome this and to find new ways of preparing the right tools for young students based on the ideas and needs of their peer group. In order to learn about the needs of students while they learn about AI technologies, we will evaluate the work done by 20 students in 5 groups across 4 different countries/schools with around 400 students and present the results in our future work.

Acknowledgement. The authors acknowledge the financial support by the EU Commission


within the frame of Erasmus+ KA229 program under grant agreement no. 2020-1-IT02-KA229-
078972_3.

References
1. Druga, S., et al.: Inclusive AI literacy for kids around the world. In: Proceedings of FabLearn
2019, pp. 104–111 (2019)
2. Ali, S., Payne, B.H., Williams, R., Park, H.W., Breazeal, C.: Constructionism, ethics, and
creativity: developing primary and middle school artificial intelligence education. In: Pro-
ceedings of the International Workshop on Education in Artificial Intelligence K-12 (EDUAI
2019) (2019)
3. ML for Kids: https://machinelearningforkids.co.uk/#!/about. Accessed 28 Jan 2021


4. Calypso for Cozmo: https://calypso.software. Accessed Dec 2020
5. Cognimates: http://cognimates.me/home/. Accessed Nov 2020
6. ML Kit for Firebase: https://firebase.google.com/. Accessed Feb 2021
7. KNIME: https://www.knime.com. Accessed Jan 2021
8. BRAIINS: https://bit.ly/3dfaqsY. Accessed Feb 2021
9. Sarker, I.H., Hoque, M.M., Uddin, M.K., Alsanoosy, T.: Mobile data science and intelligent
apps: concepts, AI-based modeling and research directions. Mob. Netw. Appl. 26(1), 285–303
(2020)
10. Risi, S., Preuss, M.: Special issue on AI in games. KI – Künst. Intel. 34(1), 5–6 (2020). https://
doi.org/10.1007/s13218-020-00645-y
11. Breunig, M., Kässer, M., Klein, H., Stein, J.P.: Building Smarter Cars with Smarter Factories:
How AI Will Change the Auto Business. McKinsey Digital, McKinsey & Company (2017)
12. Gentsch, P.: Künstliche Intelligenz für Sales, Marketing und Service Mit AI und Bots zu einem
Algorithmic Business – Konzepte, Technologien und Best Practices. Springer, Wiesbaden
(2017). https://doi.org/10.1007/978-3-658-19147-4
13. Kim, K.: An Artificial Intelligence education program development and application for
elementary teachers. J. Korean Assoc. Inf. Educ. 23(6), 629–637 (2019)
14. Jung, S.E., Won, E.S.: Systematic review of research trends in robotics education for young
children. Sustainability 10(4), 905 (2018)
15. Dorouka, P., Papadakis, S., Kalogiannakis, M.: Tablets and apps for promoting robotics,
mathematics, STEM education and literacy in early childhood education. Int. J. Mob. Learn.
Organ. 14(2), 255–274 (2020)
16. Lepuschitz, W., Koppensteiner, G., Merdan, M.: Offering multiple entry-points into STEM for
young people. In: Merdan, M., Lepuschitz, W., Koppensteiner, G., Balogh, R. (eds.) Robotics
in Education. AISC, vol. 457, pp. 41–52. Springer, Cham (2017). https://doi.org/10.1007/
978-3-319-42975-5_4
17. Jäggle, G., Merdan, M., Koppensteiner, G., Lepuschitz, W., Posekany, A., Vincze, M.: A
study on pupils’ motivation to pursue a STEM career. In: Auer, M.E., Hortsch, H., Sethakul,
P. (eds.) ICL 2019. AISC, vol. 1134, pp. 696–706. Springer, Cham (2020). https://doi.org/10.
1007/978-3-030-40274-7_66
18. Daniela, L., Lytras, M.D.: Educational robotics for inclusive education. Technol. Knowl.
Learn. 24(2), 219–225 (2018). https://doi.org/10.1007/s10758-018-9397-5
19. Lane, D.: Machine Learning for Kids: A Project-Based Introduction to Artificial Intelligence.
No Starch Press, Incorporated, United States (2021)
20. KNIME Academic Alliance: https://www.knime.com/. Accessed 22 Jan 2021
21. Berthold, M.R., Borgelt, C., Höppner, F., Klawonn, F.: Guide to Intelligent Data Analysis:
How to Intelligently Make Sense of Real Data. Springer, London (2010). https://doi.org/10.
1007/978-1-84882-260-3
22. Sproull, T., Shook, D., Siever, B.: Machine learning on the move: teaching ML kit for firebase
in a mobile apps course. In: Proceedings of the 51st ACM Technical Symposium on Computer
Science Education, Feb 2020, p. 1389 (2020)
23. Misty Robotics: https://www.mistyrobotics.com/. Accessed Feb 2021
24. Miller, D.P., Winton, C.: Botball kit for teaching engineering computing. In: Proceedings of
the ASEE National Conference, June 2004. ASEE (2004)
25. Stein, C., Nickerson, K., Schools, N.P.: Botball robotics and gender differences in middle
school teams. Age 9, 1 (2004)
26. Koppensteiner, G., Merdan, M., Miller, D.P.: Teaching botball and researching disbotics.
Robot. Educ. (2011)
27. NAO Robots from Softbanks Robotics: https://www.softbankrobotics.com/emea/en/nao.
Accessed 2 Feb 2021
28. ECER: http://pria.at/en/ecer/. Accessed Feb 2021
The BRAIINS Humanoid Robotics Lab

Annamaria Lisotti1(B) and Gottfried Koppensteiner2


1 IIS Cavazzi, Pavullo, Italy
annamarialisotti@cavazzisorbelli.it
2 Technologisches Gewerbemuseum, HTBLVA, 20 Wien, Austria

gkoppensteiner@tgm.ac.at

Abstract. The main aim of the BRAIINS (BRing AI IN Schools) Erasmus+ 2020–2023 project is to create and test innovative educational practices in the digital era. Humanoid robots
and their integration in different disciplines and in society are a major highlight of
the project. Starting on the wave of the Italian Nao Challenge 2021 for Arts and
Cultures, the BRAIINS Humanoid Robotics Lab had to be quickly switched to a remote mode due to COVID-19. The Lab consists of two stations: a Laboratory, where a Nao
Academic v.6 robot is located, and a Remote Station, namely, any place a student
tries to connect from remotely. Students worked collaboratively and synchronously
in small groups from their homes through Google Meet, using TeamViewer and AnyDesk for Nao remote control and Choregraphe to build the workflows. The
need to redesign the Lab eventually led to a new educational perspective
and disclosed a wide range of unprecedented possibilities: from a major emphasis
on problem solving and flipped approach to time flexibility, which is an issue of the
utmost importance in a rural area with a large number of commuter students and a
limited transport service. The classroom environment expanded beyond physical
and geographical borders potentially opening to collaboration with schools from
all over the Globe while enhancing alliances among local institutes. The use of
the Remote Lab for outreach and teachers’ training on robotics in hybrid mode
has also become part of the BRAIINS dissemination plan and introduced in the
regular teaching practice.

Keywords: High school robotics · Humanoid robots in education · Remote


robotics lab

1 Introduction
Human-machine relationships will be a critical point in the near future, dense with ethical
and social implications. AI is much more than just robotics, but in the public imagery the two are often one and the same. As we plunge head first, often dangerously unaware, into the very midst of a digital revolution, the introduction of humanoid robotics in schools will have many advantages: the promotion of meaningful learning based on authentic tasks, the opportunity for students to play the role of innovators in a high-tech but still inclusive society, and a rethinking of society's relationship with the machines we create.
BRAIINS (Bring AI IN Schools) is an Erasmus+ KA229 Project [1] funded by the
EU Commission targeting students (15–18 years old), teachers and local communities

alike. Its main priorities are the creation and testing of innovative educational practices in
the digital era while contextually increasing interest and level of achievement in STEM
and operating for an inclusive school and, consequently, society. Based on the idea of bringing more students into the STEM fields, robots have been proven by many projects to be the perfect tool for this [2–4]. Therefore, the project focuses on Artificial Intelligence, Coding, Robotics, and the integration of all these topics into regular classroom activities in order to prepare the students for their future. On the other hand, students, within a service-learning frame, will engage with the local communities: social robots will be
instrumental in demonstrating the 5 big ideas of AI to the public [5, 6].
As COVID-19 banned all on-site activities, students reverted to working from home. Remote robotics laboratories are nothing new [7, 8]. However, what we aimed at was something tailored to our needs: free, quick, flexible, easy to access, directly managed by the teachers, with no time limits, and open to experimentation with all robot functionalities and their possible applications. We did not mean to train for just one challenge but to offer a whole playground. We wanted students to interact online, in real time, with the real robot together with their classmates. In a word, we envisioned not just a simulator but a substitute experience for the usual robotics lab, preserving the socializing aspect and the empathy generated by the humanoid robot. The paper is structured as follows: Sect. 2 introduces the Nao robotic platform, Sect. 3 presents the Humanoid Robots Remote Lab, and Sect. 4 gives a pedagogic point of view. Finally, a conclusion is given in Sect. 5 and future developments in Sect. 6.

2 The NAO Robotic Platform


Although quite expensive, Nao is a well-established, robust platform supported by an already
wide literature and many best-practice examples of integration in school, aimed at STEM
enhancement, language learning, the development of cognitive skills and social abilities in
special-needs pupils, and many more applications still to come [9–11]. The open software
platform with its available SDK provides ground for in-depth study of AI basics such as
human-machine interaction, cognitive computing and autonomous navigation, with a wide variety
of new applications designed, developed and shared globally by a fast-growing user community
that students will become part of [12]. Addressing everyone from beginners to more experienced
and professionally oriented programmers, Nao supports text-based programming languages as well
as graphical ones.
Robots like Nao are more than simple high-tech objects: they can trigger delicate
psychological mechanisms, sometimes even leading to the formation of long-term relationships
with students. The same potential applies to language learning. We also foresee Nao as a
social robot for second language tutoring [13], able to speak both the mother tongue and the
new language, to translate words when necessary, and to engage in advanced dialogues supported
by rich body language for faster integration of migrant students.
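To illustrate how this bilingual behaviour maps onto the SDK, the short Python sketch below
switches the text-to-speech language at runtime. It is only an illustration, assuming the
Python NAOqi SDK and the corresponding language packs are installed; the robot address is a
hypothetical placeholder, not a value from the project.

from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"    # hypothetical robot address
tts = ALProxy("ALTextToSpeech", ROBOT_IP, 9559)

# Greet in the mother tongue, then repeat in the new language.
# Available languages depend on the language packs installed on the robot.
tts.setLanguage("Italian")
tts.say("Benvenuti nel laboratorio di robotica!")
tts.setLanguage("English")
tts.say("Welcome to the robotics lab!")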

3 The Humanoid Robots Remote Lab


Art and Culture was the established theme for the Italian National Nao Challenge 2021
[14]. Students had to devise innovative ways for humanoid robots to engage the public
and enhance the cultural experience within a local institution: an art gallery, a museum or a
landmark. Cavazzi's students partnered with Montecuccolo Castle, which hosts several art
collections by local painters [15]. One collection in particular captured their interest:
"Il Paese ritrovato". It tells the story of the inhabitants of the Tuscan-Emilian Apennines
in a not-so-distant past, when life in the mountains was hard but the villages were still full
of life. Since the 1950s, because of industrialization and the economic boom, the area as a
whole has suffered a slow but persistent demographic decline and a disruptive change in social
habits and values. The students decided to offer the perspective of a young, resilient society
rooted in its traditional values and confidently facing the future thanks to the right use of
technology, namely robots.

3.1 COVID-19: Impact, Challenges and Opportunities

Far from stopping during lockdown, our robotics club gained extra momentum. As with many other
aspects of the pandemic's impact on education, the need to redesign approaches eventually
brought a new perspective and disclosed new possibilities.
Since February 2020 the stage had been set: each student had their own device and Internet
connection at home and had been trained to work collaboratively at a distance, with teachers
acting as facilitators.

3.2 Technical Setup

The Cavazzi BRAIINS Remote Lab consisted of two types of stations distant from each other: we
call Laboratory the place where the robot was physically present and Remote Station any place
from which a student connected remotely.

The Laboratory. It had one NAO Academic V6 robot, a laptop running Windows 10, a second
device to run Google Meet, and an operator (usually, but not necessarily, the teacher).
Installed on the laptop were Choregraphe 2.8.6 and TeamViewer. Choregraphe is a drag-and-drop
graphical environment based on building blocks used to produce the workflows and program Nao.
Blocks are either taken from its very rich library or created by the users themselves;
advanced users may also code in Python. It can be downloaded free of charge from SoftBank
Robotics and also works in simulator mode [16]. TeamViewer is a solution for remote access
and control of computers and mobile devices. It works with almost every desktop and mobile
platform and is free and unlimited for private, non-commercial use [17]. AnyDesk is a
low-cost alternative [18]. The second device only needed a web connection and a working
camera, through which students could work collaboratively in Google Meet and follow Nao's
movements and its reactions to the uploaded workflow. They could also interact with Nao
either through the operator (touch sensors) or directly: Nao is able to recognize vocal
commands and to detect faces shown on the screen. We asked each student to use an avatar to
make the recognition task easier for Nao, but it also worked with real faces, provided the
illuminance was sufficient [19].
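Beyond the graphical blocks, the same functionalities can be scripted directly against the
robot's NAOqi services in Python. The following is only a minimal sketch, assuming the Python
NAOqi SDK is installed on the Laboratory laptop; the robot address, the subscriber names and
the vocabulary are illustrative placeholders, not values taken from the project.

import time
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"   # hypothetical address of the Laboratory NAO
PORT = 9559                 # default NAOqi port

tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
memory = ALProxy("ALMemory", ROBOT_IP, PORT)

# Vocal command recognition: listen for a small, fixed vocabulary.
asr = ALProxy("ALSpeechRecognition", ROBOT_IP, PORT)
asr.setLanguage("English")
asr.setVocabulary(["hello", "dance", "stop"], False)
asr.subscribe("RemoteLab_ASR")

# Face detection: the extractor writes its results into ALMemory.
faces = ALProxy("ALFaceDetection", ROBOT_IP, PORT)
faces.subscribe("RemoteLab_Faces")

tts.say("I am listening and watching the screen.")
time.sleep(5)   # give the remote students time to speak or show an avatar

word_data = memory.getData("WordRecognized")   # [phrase, confidence, ...], empty if nothing heard
face_data = memory.getData("FaceDetected")     # empty when no face has been seen

if word_data and word_data[1] > 0.4:
    tts.say("You said " + word_data[0])
if face_data:
    tts.say("I can see a face on the screen.")

asr.unsubscribe("RemoteLab_ASR")
faces.unsubscribe("RemoteLab_Faces")

In the Remote Lab such a script would simply be run on the Laboratory laptop through the
TeamViewer session, in the same way as a Choregraphe workflow.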

The Remote Stations. Students joined in small groups from their homes through Google Meet.
They all had the same software as the Laboratory laptop installed
on their devices. The TeamViewer connection, however, was one-directional; therefore, there
was no privacy issue and no risk of intrusion into the students' laptops.
Through TeamViewer they also enabled quick file sharing, so they could produce workflows on
their own laptops before the lab session and then upload the .crg files and run them. Up to
four students with the same TeamViewer ID and password connected and worked collaboratively
and synchronously. We did not use the videoconferencing option in TeamViewer because we wanted
a completely free and fixed screen for the teacher to monitor the workflow and to join in the
collaborative work as a coach.

4 Pedagogical Point of View – The Lab Protocol


Twenty students worked from home in five teams, each focusing on the implementation of a
particular aspect of Nao (movement, speech, sensors, face and image recognition, interaction
with Arduino). The goal was to explore and build basic workflows, which were subsequently
shared on a Padlet in a cooperative learning approach [20, 21]. Students embraced a
problem-solving, flipped approach: they first discussed and designed their concept of the
desired output, then built the workflows, and finally tested them on the virtual Nao
(simulator) on their own laptops. The teacher made a prior check and suggested improvements
(Fig. 1).

Fig. 1. Remote Station – screenshot of the virtual NAO with Choregraphe

When a group was satisfied with the coding results and ready for a final test with the real
robot, a 30-minute slot was booked in the remote lab. The online modality had the added bonus
of maximum flexibility and no transport constraints: booking could therefore be extended far
beyond the traditional 14.30–16.30 slot. Everyone had the opportunity to upload and improve
their own workflow and to interact with Nao.
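Before booking a slot, a behaviour could also be sanity-checked against the virtual robot
outside of Choregraphe. The sketch below is only an illustration, assuming the Python NAOqi
SDK is installed and that the port is replaced by the one Choregraphe reports for its virtual
robot; on the simulator, speech output typically appears in the log rather than as audio.

from naoqi import ALProxy

VIRTUAL_IP = "127.0.0.1"   # Choregraphe's virtual robot runs a local NAOqi instance
VIRTUAL_PORT = 9559        # replace with the port shown in Choregraphe's connection dialog

tts = ALProxy("ALTextToSpeech", VIRTUAL_IP, VIRTUAL_PORT)
motion = ALProxy("ALMotion", VIRTUAL_IP, VIRTUAL_PORT)

# The simulator animates the joints and logs the speech output, so motion and
# dialogue workflows can be checked before the slot with the real robot.
motion.wakeUp()
tts.say("Testing the workflow on the virtual NAO")
motion.rest()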

5 Conclusion
The Remote Lab experience was thoroughly positive: against a few cons we count many pros.
Cons: the presence of an operator sitting next to Nao is still needed for the tactile
sensors and sometimes also for facial recognition, to adjust the screen within the robot's
field of view; these two functionalities do not work in the simulator. Pros:
No Connection Issues: Our setup turned out to be very robust. Taking control with
TeamViewer/AnyDesk takes just a few seconds. It seldom happened, but in case of a connection
failure students simply dropped out of TeamViewer and Google Meet while the rest of the group
remained connected and working. Nao was not disrupted, since the connection was not direct but
went through the laptop in the same room as the robot.

Time Flexibility: IIS Cavazzi draws students from the whole surrounding mountain area, and
almost 40% of the student population are commuters. Our territory is rural, with villages
spread far apart and infrequent public transport connections.
After 4 p.m. many pupils have to leave because there are no later buses to take them home.
Moreover, some students do not join robotics activities because the set day and time do not
fit into their already tightly packed schedule of extracurricular activities. Remote labs thus
offer the opportunity to expand well beyond the regular 2.30–4.00 p.m. slot of traditional
extracurricular activities at school.

Easily Reaching Out to Junior Schools in the Local Area: We plan to offer short outreach
courses on AI and humanoid robotics on a regular basis: three remote sessions and one in
presence. We have already started a pilot collaboration with the Junior Secondary School in
Sestola (14 km away, 1000 m a.s.l.).

No Geographical Boundaries: Whether students connect from 14 or 800 km away makes no
difference. With this kind of remote lab scheme, students may collaborate across the world
remotely and instantly. Within the BRAIINS project this is exactly what will be implemented
soon: mixed-nationality teams will compete to produce a collaborative output in the context
of a Nao Challenge. We aim to expand this new teaching-learning approach and to establish
long-lasting collaborations between the partner schools far beyond 2023, thus potentially
overcoming the walls of our classrooms and improving the expertise of our schools beyond any
physical and geographical border.

6 Further Work
Crises are rare occasions for change. Responding to the necessity of ensuring students access
to the robot in order to complete the Italian Nao Challenge had much wider implications for
our school than we had previously envisioned:

Remote Collaboration with Local Junior Schools: this will become part of our School Open Day
program. Teachers' Training on Integrating Humanoid Robotics into the Disciplines: planned for
next September in hybrid mode. Outreach: we plan to launch and test a beta course for local
junior students during the robotics/code week in October 2021. A hybrid approach will be of
great advantage to kids scattered across the mountain villages, where STEM and robotics
programs are almost non-existent.

All the above activities will be part of the BRAIINS dissemination program.

Acknowledgement. The authors acknowledge the financial support of the EU Commission within the
frame of the Erasmus+ KA229 program under grant agreement no. 2020-1-IT02-KA229-078972_3.

References
1. BRAIINS Project: https://bit.ly/3dfaqsY
2. Jung, S.E., Won, E.S.: Systematic review of research trends in robotics education for young
children. Sustainability 10(4), 905 (2018)
3. Dorouka, P., Papadakis, S., Kalogiannakis, M.: Tablets and apps for promoting robotics,
mathematics, STEM education and literacy in early childhood education. Int. J. Mob. Learn.
Organ. 14(2), 255–274 (2020)
4. Jäggle, G., Merdan, M., Koppensteiner, G., Lepuschitz, W., Posekany, A., Vincze, M.: A
study on pupils' motivation to pursue a STEM career. In: Auer, M.E., Hortsch, H., Sethakul, P.
(eds.) ICL 2019. AISC, vol. 1134, pp. 696–706. Springer, Cham (2020).
https://doi.org/10.1007/978-3-030-40274-7_66
5. Touretzky, D., Gardner-McCune, C., Martin, F., Seehorn, D.: Envisioning AI for K-12: what
should every child know about AI? Proc. AAAI Conf. Artif. Intell. 33, 9795–9799 (2019)
6. https://ai4k12.org
7. https://www.robocup.org/leagues/23
8. https://www.coppeliarobotics.com/downloads
9. https://youtu.be/0Fc9d10D5JI
10. Reich-Stiebert, N., Eyssel, F.: Robots in the classroom: what teachers think about teaching and
learning with education robots. In: Agah, A., Cabibihan, J.-J., Howard, A.M., Salichs, M.A.,
He, H. (eds.) ICSR 2016. LNCS (LNAI), vol. 9979, pp. 671–680. Springer, Cham (2016).
https://doi.org/10.1007/978-3-319-47437-3_66
11. Eguchi, A.: RoboCupJunior for promoting STEM education, 21st century skills, and tech-
nological advancement through robotics competition. Robot. Auton. Syst. 75, 692–699
(2016)
12. NAO Robots from Softbank Robotics: https://www.softbankrobotics.com/emea/en/nao.
Accessed 2 Feb 2021
13. Vogt, P., De Haas, M., De Jong, C., Baxter, P., Krahmer, E.: Child-robot interactions for
second language tutoring to preschool children. Front. Hum. Neurosci. 11, 73 (2017)
14. NAO Challenge Italy: https://www.naochallenge.it/. Accessed Dec 2020
15. Montecuccoli’s Castle. https://bit.ly/3hO1E6X. Accessed Dec 2020
16. Choregraphe: https://www.softbankrobotics.com/emea/en/support/nao-6/downloads-softwares.
Accessed Dec 2020
17. Teamviewer: https://www.teamviewer.com/en/. Accessed Dec 2020
18. AnyDesk: https://anydesk.com/. Accessed Feb 2021
19. Light level NAO: http://doc.aldebaran.com/2-1/naoqi/vision/allandmarkdetection.html
20. Padlet Example for NAO: https://bit.ly/3pmlKG2. Accessed 2 Feb 2021
21. Bodnenko, D.M., Kuchakovska, H.A., Proshkin, V.V., Lytvyn, O.S.: Using a virtual digital
board to organize student’s cooperative learning. In: Proceedings of the 3rd International
Workshop on Augmented Reality in Education (AREdu 2020), Kryvyi Rih, Ukraine, May
2020 (2020)
Correction to: Robotics in Education

Munir Merdan, Wilfried Lepuschitz, Gottfried Koppensteiner, Richard Balogh, and David Obdržálek

Correction to:
M. Merdan et al. (Eds.): Robotics in Education, AISC 1359,
https://doi.org/10.1007/978-3-030-82544-7

The original versions of Chapters 17 and 25 were previously published as non-open access.
They have now been changed to open access under the CC BY 4.0 license and the copyright
holder updated to 'The Author(s)'. The book and the chapters have been updated with these
changes.

The updated original versions of these chapters can be found at
https://doi.org/10.1007/978-3-030-82544-7_17
https://doi.org/10.1007/978-3-030-82544-7_25

Author Index

A
Albo-Canals, Jordi, 231
Angulo, Cecilio, 231
Avramidou, Eleftheria, 26

B
Baumann, Liam, 302
Bettelani, Gemma C., 43
Bezáková, Daniela, 3
Birk, Andreas, 81, 134
Blasco, Elena Peribáñez, 14
Bruno, Barbara, 177, 268
Budinská, Lucia, 3

C
Campos, Jordi, 52
Cañas, José M., 243
Cárdenas, Martha-Ivón, 52
Carmona-Mesa, Jaime Andrés, 166
Cergol, Kristina, 146
Chevalier, Morgane, 177
Cross, Jennifer, 288

D
Danahy, Ethan, 231
Dineva, Evelina, 81, 134

E
Eguchi, Amy, 279
El-Hamamsy, Laila, 177, 268
Eskla, Getter, 14

F
Faiña, Andres, 256

G
Gabellieri, Chiara, 43
Gervais, Owen, 201
Gesquière, Natacha, 72
Giang, Christian, 34, 177, 268

H
Hannon, Daniel, 231
Heinmäe, Elyna, 155
Hild, Manfred, 94, 221
Hrušecká, Andrea, 3

J
Jia, Ruiqing, 189

K
Kaburlasos, Vassilis G., 26
Kalová, Jana, 64
Kangur, Taavet, 268
Karabin, Petra, 146
Karageorgiou, Elpida, 26
Kechayas, Petros, 26
Koppensteiner, Gottfried, 302, 308
Kori, Külli, 155
Kourampa, Efrosyni, 26

L
Larsen, Jørgen Christian, 119
Leoste, Janika, 14, 155
Lisotti, Annamaria, 302, 308
Logozzo, Silvia, 105
Lytridis, Chris, 26

M
Mahna, Sakshay, 243
Malvezzi, Monica, 105
Mannucci, Anna, 43
Massa, Federico, 43
Mathex, Laura, 268
Maurelli, Francesco, 81
Mengacci, Riccardo, 43
Mettis, Kadri, 155
Miková, Karolína, 3
Mondada, Francesco, 177, 268

N
Nabor, Andreas, 81
Negrini, Lucio, 34
Nielsen, Jacob, 119
Novák, Milan, 64

P
Pallottino, Lucia, 43
Panreck, Benjamin, 94, 221
Papakostas, George A., 26
Papanikolaou, Athanasia-Tania, 26
Papaspyros, Vaios, 268
Pastor, Luis, 14
Patrosio, Therese, 201
Pech, Jiří, 64
Pedersen, Bjarke Kristian Maigaard Kjær, 119
Puertas, Eloi, 52

Q
Quiroz-Vallejo, Daniel Andrés, 166

R
Reichart, Monika, 302
Rogers, Chris, 231, 288
Roldán-Álvarez, David, 243

S
Sabri, Rafailia-Androniki, 26
San Martín López, José, 14
Sans-Cope, Olga, 231
Seng, John, 210
Sinapov, Jivko, 288
Skweres, Melissa, 268
Sovic Krzic, Ana, 146
Storjak, Ivana, 146

T
Tammemäe, Tiiu, 14

U
Untergasser, Simon, 94, 221

V
Valigi, Maria Cristina, 105
Van de Staey, Zimcke, 72
Villa-Ochoa, Jhony Alexander, 166

W
Willner-Giwerc, Sara, 288
wyffels, Francis, 72

X
Xie, Mingzuo, 189

Z
Zhang, Ziyi, 288
Zhou, Dongxu, 189
