Review: Facilitating Data Analysis in Educational Games by Harpstead et al.
One of the challenges in taking a data-driven approach to
analyzing student learning in educational games is the need to create
a custom set of logging and evaluation tools for each game. In their
research, Harpstead et al. explore the possibility of creating a system
of tools that could be applied more broadly to a variety of games to
provide insight into new directions for development. Their system
addresses multiple challenges in this space, such as the need for
these tools to be adaptable to various measures of learning, and the
ability to address questions beyond simply whether or not learning is
observed in the game.
They test their toolkit on RumbleBlocks, a game which aims to
teach students basic structural stability concepts. In the game,
players build towers from rectangular blocks that must reach a
specified height, but must also be stable enough to maintain this
height when subjected to random earthquake events. In
RumbleBlocks, there are a variety of measures of learning that could
be used, representative of some of the broader measures used in
educational games. One of the simplest possibilities would be to
measure learning on a levels-completed basis, but this measure can
often be too coarse, since levels don't always have a one-to-one
correspondence with learning objectives, even when they're intended to.
Other methods might include Bayesian Knowledge Tracing, which
we've seen earlier in this class, or empirical knowledge tracing, which
plots the error rate of each knowledge component in the game as
a function of the successive opportunities to demonstrate mastery of
that knowledge component.
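The empirical knowledge tracing idea described above can be sketched briefly. The following is a minimal illustration, not the authors' actual analysis code: it groups a chronological log of student attempts by knowledge component and computes the error rate at each successive opportunity (the data and all names here are invented for illustration).

```python
from collections import defaultdict

def empirical_learning_curves(attempts):
    """Compute error rate per practice opportunity for each knowledge
    component (KC). `attempts` is a chronological list of
    (student, kc, correct) tuples; all names are illustrative."""
    # How many opportunities each student has had on each KC so far.
    opportunity = defaultdict(int)
    # errors[kc][i] / totals[kc][i] gives the error rate at opportunity i.
    errors = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(lambda: defaultdict(int))
    for student, kc, correct in attempts:
        i = opportunity[(student, kc)]
        totals[kc][i] += 1
        if not correct:
            errors[kc][i] += 1
        opportunity[(student, kc)] = i + 1
    return {
        kc: [errors[kc][i] / totals[kc][i] for i in sorted(totals[kc])]
        for kc in totals
    }

# Invented example data: two students practicing one KC.
attempts = [
    ("s1", "wide_base", False), ("s1", "wide_base", True),
    ("s2", "wide_base", False), ("s2", "wide_base", False),
    ("s2", "wide_base", True),
]
print(empirical_learning_curves(attempts))
# {'wide_base': [1.0, 0.5, 0.0]}
```

A downward-sloping curve like the one printed here is the signature of learning: students err less on each successive opportunity to apply the knowledge component.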
The toolkit is introduced to address the fact that each of
these approaches to measuring learning requires different data and
methods of analysis. It is made up of two primary components: a
logging system, which records the smallest unit of meaningful action
that a player can exert on the game world, and a replay system, which
allows for the reconstruction of a student's performance in the game.
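The two components fit together naturally: because the log captures every atomic action, any later state can be rebuilt by re-applying the log. The following is a toy sketch of that pattern, with an invented event schema (`place_block`/`remove_block`) rather than the paper's actual format:

```python
# A minimal event log: each entry is the smallest meaningful player
# action. The schema here is illustrative, not the paper's actual one.
log = []

def record(action, **details):
    """Append one atomic player action to the log."""
    log.append({"action": action, **details})

def replay(events):
    """Rebuild the final game state by re-applying logged actions."""
    blocks = {}
    for e in events:
        if e["action"] == "place_block":
            blocks[e["block_id"]] = (e["x"], e["y"])
        elif e["action"] == "remove_block":
            blocks.pop(e["block_id"], None)
    return blocks

record("place_block", block_id="b1", x=0, y=0)
record("place_block", block_id="b2", x=1, y=0)
record("remove_block", block_id="b2")
print(replay(log))  # {'b1': (0, 0)}
```

Because replay reconstructs states rather than just final outcomes, any new measure of learning can be computed retroactively over old play sessions, which is what makes the toolkit adaptable.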
These tools were used on RumbleBlocks to evaluate evidence of in-game
learning, by analyzing students' performance on pre-test levels
and post-test levels for a display of knowledge of the learning objectives,
including that a wider base, increased symmetry, and a lower center of
mass all result in more stable structures. It was observed that after
playing the game students started to build more symmetric structures
with wider bases, but didn't tend to build structures with lower centers
of mass. This suggests that the game doesn't do a great job of
teaching the center-of-mass concept, and could motivate the game
designers to emphasize this concept with new or improved mechanics
and level design.
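The pre/post comparison described above amounts to measuring how much each structural feature shifts between test phases. As a rough sketch (with invented feature names and made-up numbers, not the study's data):

```python
def mean(xs):
    return sum(xs) / len(xs)

def pre_post_change(pre, post, features):
    """Difference in mean feature value between post-test and pre-test
    structures. `pre`/`post` are lists of per-structure feature dicts;
    the feature names are illustrative."""
    return {f: mean([s[f] for s in post]) - mean([s[f] for s in pre])
            for f in features}

# Invented pre-test and post-test structures.
pre = [{"base_width": 2, "symmetry": 0.4, "com_height": 3.0},
       {"base_width": 3, "symmetry": 0.6, "com_height": 2.8}]
post = [{"base_width": 4, "symmetry": 0.8, "com_height": 3.1},
        {"base_width": 5, "symmetry": 0.9, "com_height": 2.9}]

diffs = pre_post_change(pre, post, ["base_width", "symmetry", "com_height"])
print(diffs)
```

In this toy data, base width and symmetry increase after play while center-of-mass height does not drop, mirroring the pattern the authors observed: gains on two objectives, but not on center of mass.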

Interestingly, the trouble with teaching the center-of-mass
concept shows up when the researchers analyze to what degree
following the learning objectives corresponds with in-game success.
Using their toolkit, they found that players who followed the wide-base
and symmetry principles were more likely to win, but that due to some
level design decisions, in some levels towers with a higher center of
mass were more likely to succeed. This provides an explanation for the
game's poor performance in teaching the center-of-mass concept, and
provides even clearer guidance for the game's level designers on what
to change in order to remedy the issue. This observation displays the
power of the toolkit, and suggests that it is fairly successful at
providing data for meaningful analysis of a game's effectiveness at
teaching learning objectives.
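The success analysis described above can be pictured as a simple comparison of win rates between solutions that follow a design principle and those that violate it. A minimal sketch with invented data and an illustrative predicate:

```python
def win_rate_by_principle(solutions, follows):
    """Win rate among solutions that follow vs. violate a design
    principle. `solutions` is a list of dicts with a boolean `won`
    field; `follows` is a predicate over a solution. All names are
    illustrative, not the paper's actual schema."""
    followed = [s["won"] for s in solutions if follows(s)]
    violated = [s["won"] for s in solutions if not follows(s)]
    rate = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return rate(followed), rate(violated)

# Made-up solutions testing the wide-base principle.
solutions = [
    {"base_width": 4, "won": True}, {"base_width": 5, "won": True},
    {"base_width": 2, "won": False}, {"base_width": 3, "won": True},
    {"base_width": 1, "won": False},
]
wide, narrow = win_rate_by_principle(solutions, lambda s: s["base_width"] >= 3)
print(wide, narrow)  # 1.0 0.0
```

Running the same comparison per level with a higher-center-of-mass predicate is what would surface the misaligned levels the authors found, where violating the principle actually paid off.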