See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/313247478

Procedural Content Generation via Machine Learning (PCGML)
Article in IEEE Transactions on Games · February 2017
DOI: 10.1109/TG.2018.2846639

All content following this page was uploaded by Christoffer Holmgård on 05 February 2017.



Procedural Content Generation via Machine Learning (PCGML)

Adam Summerville1, Sam Snodgrass2, Matthew Guzdial3, Christoffer Holmgård4, Amy K. Hoover5, Aaron Isaksen6, Andy Nealen6, and Julian Togelius6
1 Department of Computational Media, University of California, Santa Cruz, CA 95064, USA
2 College of Computing and Informatics, Drexel University, Philadelphia, PA 19104, USA
3 School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
4 Duck and Cover Games ApS, 1311 Copenhagen K, Denmark
5 College of Arts, Media and Design, Northeastern University, Boston, MA 02115, USA
6 Department of Computer Science and Engineering, New York University, Brooklyn, NY 11201, USA

arXiv:1702.00539v1 [cs.AI] 2 Feb 2017

emails: asummerv@ucsc.edu, sps74@drexel.edu, mguzdial3@gatech.edu, christoffer@holmgard.org, amy.hoover@gmail.com, aisaksen@appabove.com, nealen@nyu.edu, julian@togelius.com

This survey explores Procedural Content Generation via Machine Learning (PCGML), defined as the
generation of game content using machine learning models trained on existing content. As the importance of
PCG for game development increases, researchers explore new avenues for generating high-quality content
with or without human involvement; this paper addresses the relatively new paradigm of using machine
learning (in contrast with search-based, solver-based, and constructive methods). We focus on what is most
often considered functional game content such as platformer levels, game maps, interactive fiction stories, and
cards in collectible card games, as opposed to cosmetic content such as sprites and sound effects. In addition
to using PCG for autonomous generation, co-creativity, mixed-initiative design, and compression, PCGML is
suited for repair, critique, and content analysis because of its focus on modeling existing content. We discuss
various data sources and representations that affect the resulting generated content. Multiple PCGML methods
are covered, including neural networks, long short-term memory (LSTM) networks, autoencoders, and deep
convolutional networks; Markov models, n-grams, and multi-dimensional Markov chains; clustering; and
matrix factorization. Finally, we discuss open problems in the application of PCGML, including learning
from small datasets, lack of training data, multi-layered learning, style-transfer, parameter tuning, and PCG
as a game mechanic.

Index Terms—Computational and artificial intelligence, Machine learning, Procedural Content Generation,
Knowledge representation, Pattern analysis, Electronic design methodology, Design tools

I. INTRODUCTION

Procedural content generation (PCG), the creation of game content through algorithmic means, has become increasingly prominent within both game development and technical games research. It is employed to increase replay value, reduce production cost and effort, save storage space, or simply as an aesthetic in itself. Academic PCG research addresses these challenges, but also explores how PCG can enable new types of game experiences, including games that can adapt to the player. Researchers also address challenges in computational creativity and ways of increasing our understanding of game design through building formal models [1].

In the games industry, many applications of PCG are what could be called "constructive" methods, using grammars or noise-based algorithms to create content in a pipeline without evaluation. Many techniques use either search-based methods [2] (for example using evolutionary algorithms) or solver-based methods [3] to generate content in settings that maximize objectives and/or preserve constraints.

What these methods have in common is that the algorithms, parameters, constraints, and objectives that create the content are in general hand-crafted by designers or researchers. While it is common to examine existing game content for inspiration, machine learning methods have so far been used far less commonly to extract data from existing game content in order to create more content.

Concurrently, there has been an explosion in the use of machine learning to train models based on datasets [4]. In particular, the resurgence of neural networks under the name deep learning has precipitated a massive increase in the capabilities and application of methods for learning models from big data [5], [6]. Deep learning has been used for a variety of tasks in machine learning, including the generation of content.

For example, generative adversarial networks have been applied to generating artifacts such as images, music, and speech [7]. But many other machine learning methods can also be utilized in a generative role, including n-grams, Markov

models, autoencoders, and others [8], [9], [10]. The basic idea is to train a model on instances sampled from some distribution, and then use this model to produce new samples.

This paper is about the nascent idea and practice of generating game content from machine-learned models. We define Procedural Content Generation via Machine Learning (abbreviated PCGML) as the generation of game content using models that have been trained on existing game content. These models could be of many different kinds and trained using very different training algorithms, including neural networks, probabilistic models, decision trees, and others. The generation could be partial or complete, autonomous, interactive, or guided. The content could be almost anything in a game, such as levels, maps, items, weapons, quests, characters, rules, etc.

This paper is focused on game content that is directly related to game mechanics. In other words, we focus on functional rather than cosmetic game content. We define functional content as artifacts that, if they were changed, could alter the in-game effects of a sequence of player actions. The main types of cosmetic game content that we exclude are textures and sound, as those do not directly impact the effects of in-game actions the way levels or rules do in most games, and there is already much research on the generation of such content outside of games [11], [12]. This is not a value judgment, and cosmetic content is extremely important in games; however, it is not the focus of this paper.

It is important to note a key difference between game content generation and procedural generation in many other domains: most game content has strict structural constraints to ensure playability. These constraints differ from the structural constraints of text or music because of the need to play games in order to experience them. Where images, sounds, and in many ways also text can be consumed statically, games are dynamic and must be evaluated through interaction that requires non-trivial effort; in Aarseth's terminology, games are ergodic media [13]. A level that structurally prevents players from finishing it is not a good level, even if it is visually attractive; a strategy game map with a strategy-breaking shortcut will not be played even if it has interesting features; a game-breaking card in a collectible card game is merely a curiosity; and so on. Thus, the domain of game content generation poses different challenges from that of other generative domains. Of course, there are many other types of content in other domains which pose different, and in some sense more difficult, challenges, such as lifelike and beautiful images or evocative musical pieces; however, in this paper we focus on the challenges posed by game content by virtue of its necessity for interaction.

The remainder of this paper is structured as follows. Section II describes the various use cases for PCG via machine learning, including various types of generation and uses of the learned models for purposes that are not strictly generative. Section III discusses the key problem of data acquisition and the recurring problem of small datasets. Section IV includes a large number of examples of PCGML approaches. As we will see, there is already a large diversity of methodological approaches, but only a limited number of domains have been attempted. In Section V, we outline a number of important open problems in research and application of PCGML.

II. USE CASES FOR PCGML

Procedural Content Generation via Machine Learning shares many uses with other forms of PCG: in particular, autonomous generation, co-creation/mixed-initiative design, and data compression. However, because it has been trained on existing content, it can also extend into new use areas, such as repair and critique/analysis of new content.

A. Autonomous Generation

The most straightforward application of PCGML is autonomous PCG: the generation of complete game artifacts without human input at the time of generation. Autonomous generation is particularly useful when online content generation is needed, such as in rogue-like games which require runtime level generation.

PCGML is well-suited for autonomous generation because the input to the system can be examples of representative content specified in the content domain. With search-based PCG using a generate-and-test framework, a programmer must specify an algorithm for generating the content and an evaluation function that can validate the fitness of the new artifact [14]. However, these are rarely coded in the same space as the output artifact, requiring the designer to use a different domain to express their ideas. With PCGML, a designer can create a set of representative artifacts in the target domain as a model for the generator, and then the algorithm can generate new content in this style. PCGML avoids the complicated step of experts having to design specific algorithms to create a particular type of content.

B. Co-creative and Mixed-initiative Design

A more compelling use case for PCGML is AI-assisted design, where a human designer and an algorithm work together to create content. This approach has previously been explored with other methods such as constraint satisfaction algorithms and evolutionary algorithms [15], [16], [17].
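These use cases all rest on the same underlying loop: fit a model to example content, then sample new content from that model. The sketch below is our own deliberately minimal illustration of that loop, not any system surveyed in this paper; the level strings and tile characters ('-', '^', 'B') are invented.

```python
import random
from collections import Counter

# Toy "levels": strings of vertical slices, one character per slice
# ('-' flat ground, '^' gap, 'B' block). These stand in for
# designer-provided example content (hypothetical data).
training_levels = ["----B--^^--B----", "--B--^--B--^--B-", "------^^--B-----"]

def train(levels):
    """Estimate how often each slice type appears in the training levels."""
    counts = Counter(slice_ for level in levels for slice_ in level)
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

def generate(model, length, rng):
    """Sample a new level, slice by slice, from the learned distribution."""
    slices, weights = zip(*model.items())
    return "".join(rng.choices(slices, weights=weights, k=length))

model = train(training_levels)
new_level = generate(model, 16, random.Random(0))
print(new_level)  # a 16-slice level drawn from the learned distribution
```

Because this model ignores all context (it is effectively the n = 0 case of the n-gram models discussed in Section IV), its output is unstructured; the surveyed methods replace the frequency table with Markov chains, LSTMs, autoencoders, and other learned models, but keep the same train-then-sample division of labor.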

Again, because the designer can train the machine-learning algorithm by providing examples in the target domain, the designer is "speaking the same language" the algorithm requires for input and output. This has the potential to reduce frustration, user error, and user training time, and to lower the barrier to entry, because a programming language is not required to specify generation or acceptance criteria.

Because PCGML algorithms are provided with example data, they are suited to auto-complete game content that is partially specified by the designer. Within the image domain, we have seen work on image inpainting, where a neural network is trained to complete images where parts are missing [18]. Similarly, machine learning methods could be trained to complete game content with parts missing.

C. Repair

With a library of existing representative content, PCGML algorithms can identify areas that are not playable (e.g., if an unplayable level or impossible rule set has been specified) and also offer suggestions for how to fix them. Summerville and Mateas [19] use a special tile that represents where an AI would choose to move the player in their training set, so that the algorithm only generates playable content; the system inherently has learned the difference between passable and impassable terrain. Jain et al. [20] used a sliding window and an autoencoder to repair illegal level segments: because they did not appear in the training set, the autoencoder replaced them with a nearby window it had seen during training.

D. Recognition, Critique, and Analysis

A use case for PCGML that sets it apart from other PCG approaches is its capacity for recognition, analysis, and critique of game content. Given that the basic idea of PCGML is to train some kind of model on sets of existing game content, these models could be applied to analyzing other game content, whether created by an algorithm, players, or designers.

For example, by supervised training on sets of game content artifacts and player response data, machine learning models can be trained to classify game content into various types, such as broken or whole, underworld or overworld, etc. They can also be trained to predict quantities such as difficulty, completion time, or emotional tenor [21], [22]. They also have the potential to identify uniqueness, for example by noting how common a particular pattern appears in the training set, or judging how typical a complete content artifact (such as a level) is with regard to an existing set. Being able to identify which user-created levels are truly novel, and then enhance them further so as to fit with the aesthetics of the game and repair broken parts, could be very useful. Such models could also be used to automatically evaluate game content, as is already done within many applications of search-based PCG, and potentially be used together with other generative methods.

E. Data Compression

One of the original motivations for PCG, particularly in early games such as Elite [23], was data compression. There was simply not enough space on disk for the whole game universe. The same is true for some of today's games, such as No Man's Sky [24]. The compression of game data into fewer dimensions through machine learning could allow more efficient game content storage. By exploiting the regularities of a large number of content instances, we can store the distinctive features of each more cheaply. Unsupervised learning methods such as autoencoders might be particularly well suited to this.

III. DATA SOURCES AND REPRESENTATIONS

In this section we investigate how training data, varying in source, quality, and representation, can impact PCGML work.

A. Data Sources

A commonly held view in the machine learning community is that more data is better. Halevy et al. [25] make the case that having access to more data, not better algorithms or better data, is the key difference between why some problems are able to be solved in an effective manner with machine learning while others are not (e.g., language translation is a much harder problem than document classification, but more data exists for the translation task). For some types of machine learning a surfeit of data exists, enabling certain techniques, such as images for generative adversarial networks [26] or text for Long Short-Term Memory recurrent neural networks (LSTMs) [27]; however, video games do not have this luxury. While many games exist, they typically share no common data structures. Additionally, even if they do, they typically do not share semantics (e.g., sprites in the NES share similarity in how they are displayed, but sprite 0x01 is unlikely to have similar semantics between Super Mario Bros. [28] and Metroid [29], despite the fact that both are platformers). As such, data sources are often limited to within a game series [30], [19] and more commonly within individual games [31], [32], [33]. This in turn means that data scarcity is likely to plague PCGML for the foreseeable future, as there are a small, finite number of official maps for most games. However, for many games there are vibrant fan communities producing hundreds,

thousands, and even millions of maps [34], which could conceivably provide much larger data sources.

Recently, Summerville et al. created the Video Game Level Corpus (VGLC) [35], a collection of video game levels represented in several easily parseable formats. The corpus contains 428 levels from 12 games in 3 different file formats. This effort marks an initial step in creating a shared data source that can be used for commonality across PCGML research. That being said, the repository is limited in scale (12 games and 428 levels being small on the standard machine learning front) and the representations are lossy (e.g., in Mario, levels only contain a single Enemy type mapping for all Goombas, Koopa Troopas, etc.), so additional effort is needed to increase the scope and fidelity of the representations.

Assuming bountiful data existed for video games, bigger questions abound: 1) How should the data be represented? 2) What is a training instance? For certain classes of PCGML, such as the aforementioned image generation, obvious data structures exist (e.g., RGB matrices), but there are no common structures for games. Games are a combination of some amount of content (levels, images, 3-D models, text, boards, pieces, etc.) and the rules that govern them (What happens when input x is held down in state y? What happens if z is clicked on? Can player α move piece p? etc.). Furthermore, the content is often dynamic during playtime, so even if the content were captured with perfect fidelity (e.g., a Goomba starts at tile X, Y), how it will affect gameplay requires play (e.g., by the time the player can see the Goomba, it has moved to tile X′, Y′). Thus, issues of representation serve as a second major challenge, after access to sufficient quantities of quality data.

B. Data Representation Example: Mario Levels

We next look at Super Mario Bros. levels as a case study in the importance of data representation in PCGML. Super Mario Bros. levels have been one of the most successful venues for PCGML research, with many of these prior approaches having distinct representations, with individual affordances and limitations. These can be divided according to two features: how the units of the representation are arranged (one- or two-dimensional representations) and the atomic unit of the representation (tiles, voices, shapes).

Two-dimensional representations of Super Mario Bros. levels parallel the two-dimensional spatial relationships in the game. However, a two-dimensional representation limits the types of machine learning techniques that can be applied. Therefore the majority of PCGML approaches structure levels as one-dimensional sequences, meaning that the original two-dimensional level must be transformed according to some function.

Most simply, this involves iterating over the data from left-to-right then top-to-bottom [20]; however, depending on the technique, this can obfuscate spatial relationships. This loss can be mitigated: for example, [19] utilizes a "snaking" path, while [36] represents vertical and horizontal relations by splitting two-dimensional maps into one-dimensional sequences of columns.

Super Mario Bros. stores its levels in a grid of "sprites," a predefined set of 16-pixel-square images that can be tiled to create larger shapes. Many of these sprites represent the same in-game function, such as "solid blocks" or "enemies," which tile-based representations take advantage of. In a tile-based representation, the low-level sprites of the game are classified into a higher-order representation based on gameplay function (e.g., all enemies classified as an "enemy" type). This improves learning by cutting down on variance, which is important in cases of PCGML due to the limited size of the training set. However, this representation requires hand-authored rules to transform from higher-order tiles to sprites after generation.

While tile-based representations have been the most popular thus far, we highlight two alternatives. Voices-based representations as in [36] similarly separate sprites by function, with similar drawbacks, but rely on a musical metaphor to categorize sprites according to voices based on gameplay function. Shape-based representations as in [33] instead use the game's original sprites as their atomic unit, categorizing the various shapes they are arranged into, which therefore requires no processing after generation. However, this representation limits the types of machine learning approaches that can be applied.

PCGML has been largely limited to the representation of video game levels thus far, impacting the types of data source and representation approaches that we've seen. However, we note that PCGML has promise to address many other elements of video games, along with entire games themselves. We expect that future PCGML approaches will still struggle with issues of data sources and data representations, which will in turn restrict the underlying machine learning approaches.

C. Non-level Content

The majority of PCGML work thus far, and the majority covered in this paper, focuses on generating individual game levels. However, PCGML can apply to all types of game content, including mechanics, stories, quests, NPCs, and even full games. We note two examples covered in this paper: Magic/Hearthstone card generation [37] and interactive narrative/story generation [38]. The

former shares a framework with much of the level generation work, relying on a fan-made corpus of game content. The latter relies on crowdsourcing its training data, given the lack of a general, formal story representation. We expect either of these strategies could find success in other types of PCGML content generation.

IV. METHODS OF PCGML

We classify procedural content generation approaches that employ machine learning techniques (PCGML) using two axes: method, the machine learning approach that is used; and content, the type of content that is generated. In this section, we discuss various PCGML techniques, grouped by method. We start with neural networks, because of the recent uptake in PCGML approaches employing neural networks. We then discuss Markov models, which have been explored in the context of platformer game level generation. Next we cover clustering-based approaches that have been used to model and generate platformer levels as well as interactive stories. Afterwards, we discuss matrix factorization techniques for generating platformer and action role-playing game levels. We close the section with an explanation of a wave function collapse approach.

A. Artificial Neural Networks

Recently, neural networks have been used in the context of generating images [40], [41] and learning to play games [42], [43], [44] with great success. Unsurprisingly, neural networks are also being explored in PCG for level and map generation as well as cooperative trading card generation.

1) Function Scaffolding for Mario Levels

Hoover et al. [36] generate levels for Super Mario Bros. by extending a representation called functional scaffolding for musical composition (FSMC) that was originally developed to compose music. The original FSMC representation posits that 1) music can be represented as a function of time and 2) musical voices in a given piece are functionally related [45]. Through a method for evolving artificial neural networks (ANNs) called NeuroEvolution of Augmenting Topologies (NEAT) [39], additional musical voices are then evolved to be played simultaneously with an original human-composed voice.

To extend this musical metaphor and represent Super Mario Bros. levels as functions of time, each level is broken down into a sequence of columns with a tile-sized width. The height of each column extends the height of the screen. While FSMC represents a unit of time by the length of an eighth note, a unit of time in this approach is the width of each column.

At each unit of time, the system queries the ANN to decide at what height to place a tile. FSMC inputs a musical note's pitch and duration to the ANNs; this approach translates pitch to the height at which a tile is placed and duration to the number of times a tile-type is repeated at that height. For a given tile-type or musical voice, this information is then input to an ANN that is trained on two-thirds of the existing human-authored levels to predict the value of a tile-type at each column (as shown in Figure 1). The idea is that the ANN will learn hidden relationships between the tile-types in the human-authored levels that can then help humans construct entire levels from as little starting information as the layout of a single tile-type.

Like in FSMC, this approach combines a minimal amount of human-authored content with the output from previous iterations. The output of the network is added to the newly created level and fed back as input into the generation of new tile layouts. By acknowledging and iterating on the relationships between design pieces inherent in a human-produced level, this method can generate maps that both adhere to some important aspects of level design while deviating from others in novel ways.

2) LSTMs for Mario Levels

Summerville and Mateas [19] used Long Short-Term Memory recurrent neural networks (LSTM RNNs) [46] to generate levels learned from a tile representation of Super Mario Bros. and Super Mario Bros. 2 (JP) [47] levels. LSTMs are a variant of RNNs that represent the current state of the art for sequence-based tasks, able to hold information for hundreds or thousands of time steps, unlike the five or six of standard RNNs. Summerville and Mateas used a tile representation like the one used by Snodgrass and Ontañón [48], but instead of representing levels as 2-D arrays they represent them as linear strings, with 3 different representations used for experimentation. They also included simulated player path information in the input data, forcing the generator to generate exemplar paths in addition to the level geometry. Finally, they included information about how deep into the string the level geometry was, causing the generator to learn both level progression and when a level should end.

Summerville et al. [49] extended this work to incorporate actual player paths extracted from 4 different YouTube playthroughs. The incorporation of a specific player's paths biased the generators to generate content suited to that player (e.g., the player that went out of their way to collect every coin and question-mark block had more coins and question-marks in their generated levels).

3) Autoencoders for Mario Levels

In [20] Jain, Isaksen, Holmgård, and Togelius show how autoencoders [50] may be trained to reproduce levels from the original Super Mario Bros. game. The autoencoders are trained on series of vertical level windows and compress the typical

features of Mario levels into a more general representation. They experimented with the width of the level windows and found that four tiles seems to work best for Mario levels. They proceeded to use these networks to discriminate generated levels from original levels, and to generate new levels as transformations from noise. They also demonstrated how a trained autoencoder may be used to repair unplayable levels, changing tile-level features to make levels playable, by inputting a broken/unplayable level to the autoencoder and receiving a repaired one as output (see Figure 3).

Figure 1: The ANN representation. At each column in a given level, an ANN inputs visual assets (i.e., musical voices) from at least one tile type of Super Mario Bros. and outputs a different type of visual asset. This example figure shows ground tiles being input to the ANN while the brick tile placements are output from predictions made through NeuroEvolution of Augmenting Topologies [39]. To best capture the regularities in a level, each column is also provided information about the most recently encountered inputs (i.e., input values for the three previous columns). The inputs and outputs are then fed back into the ANN at each subsequent tick. Once trained, ANNs can potentially suggest reasonable placements for new human-composed in-game tiles.

Figure 2: Example output of the LSTM approach, including a generated exemplar player path.

Figure 3: (a) original; (b) unplayable; (c) repaired. The original window is overwritten with a vertical wall, making the game unplayable. The autoencoder is able to repair the window to make it playable again, although it chooses a different solution to the problem.

4) Deep Nets for StarCraft II

Lee, Isaksen, Holmgård, and Togelius [51] use convolutional neural networks to predict resource locations in maps for StarCraft II [52]. Resource locations are sparsely represented in StarCraft II maps, but are decisive in shaping gameplay. They are usually placed to match the topology of maps. Exploiting this fact, they transform StarCraft II maps into heightmaps, downsample them, and train deep neural networks to predict resource locations. The neural networks are shown to perform well in some cases, but struggle in other cases, most likely due to overfitting owing to a small training dataset. By adding a number of postprocessing steps to the network, the authors created a tool that allows map designers to vary the number of resource locations in an automatically decorated map, varying the frequency of mineral placements, as shown in Figure 4.

5) LSTM Generation of Magic Cards

A rare example of game content generation that is not a level generator is Magic: The Gathering [53] card generation. The first to use a machine learning approach for this is Morgan Milewicz

with @RoboRosewater [54], a Twitter bot that uses LSTMs to generate cards. Trained on the entirety of the corpus of Magic cards, it generates cards represented as a sequence of text fields (e.g., Cost, Name, Type, etc.). A limitation of this representation and technique is that, by treating cards as a sequence, there is no way to condition generation of cards on fields that occur later in the sequence (e.g., Cost occurs near the end of the sequence and as such cannot be used to condition the generation unless all previous fields are specified).

6) Sequence-to-Sequence Magic Card Creation

Building on the work of @RoboRosewater is Mystical Tutor by Summerville and Mateas [37]. Using sequence-to-sequence learning [55], wherein an encoding LSTM encodes an input sequence into a fixed-length vector which is then decoded by a decoder LSTM, they trained on the corpus of Magic: The Gathering. They corrupted the input sequences by replacing lines of text with a MISSING token and then tried to reproduce the original cards as the output sequence. The corruption of the input along with the sequence-to-sequence architecture allows the generator to be conditioned on any piece of the card, addressing one of the limitations of @RoboRosewater. This work shows multiple different paradigms for PCGML, showing how it can be used on different game content and as a co-creative design assistant, as in Hoover et al.'s [36] previously mentioned work.

Figure 4: A heightmap and generated resource locations for StarCraft II: (a) a StarCraft II heightmap; (b) varying resource amounts.

Figure 5: Example partial card specification and its resulting output.

B. Markov Models

Markov models are stochastic models that model changes in a system over time, where the current state of the system depends on the previous state. This type of model lends itself easily to modeling levels for games that have a directional trend, such as Super Mario Bros. and Kid Icarus [56], where the "time" of the system relates to the position in the level.

1) n-grams for Mario Levels

Perhaps the simplest possible Markov models are n-gram models. An n-gram model (or n-gram for short) is simply an n-dimensional array, where the probability of each character is determined by the n characters that precede it. Training an n-gram consists of scanning a set of strings and calculating the conditional probability of each character.

Dahlskog et al. trained n-gram models on the levels of the original Super Mario Bros. game, and used these models to generate new levels [31]. As n-gram models are fundamentally one-dimensional, these levels needed to be converted to strings in order for n-grams to be applicable. This was done by dividing the levels into vertical "slices," where most slices recur many times throughout the level [57]. This representational trick is dependent on there being a large amount of redundancy in the level design, something that is true in many games. Models were trained using various values of n, and it was observed that while n = 0 creates essentially random structures and n = 1 creates barely playable levels, n = 2 and n = 3 create rather well-shaped levels. See Figure 6 for examples of this.

2) Multi-dimensional Markov Chains for Levels

In [48] Snodgrass and Ontañón present an approach to level generation using multi-dimensional Markov chains (MdMCs) [58]. An MdMC differs from a standard Markov chain in that it allows for dependencies in multiple directions and from multiple
states, whereas a standard Markov chain only allows for dependence on the previous state alone. In their work, Snodgrass and Ontañón represent video game levels as 2-D arrays of tiles representing features in the levels. For example, in Super Mario Bros. they use tiles representing the ground, enemies, ?-blocks, etc. These tile types are used as the states in the MdMC. That is, the type of the next tile is dependent upon the types of surrounding tiles, given the network structure of the MdMC (i.e., the states that the current state's value depends on).

(a) n = 1
(b) n = 2
(c) n = 3
Figure 6: Mario levels reconstructed by n-grams with n set to 1, 2, and 3, respectively.

In order to generate levels, first the model must be trained. They train an MdMC by building a probability table according to the frequency of the tiles in the training data, given the network structure of the MdMC, the set of training levels, and the set of tile types. A new level is then sampled one tile at a time by probabilistically choosing the next tile based upon the types of the previous tiles and the learned probability table.

In addition to their standard MdMC approach, Snodgrass and Ontañón have explored hierarchical [32] and constrained [59] extensions to MdMCs in order to capture higher-level structures and ensure usability of the sampled levels, respectively. They have also developed a Markov random field (MRF) approach [60] that performed better than the standard MdMC model in Kid Icarus, a domain where platform placement is pivotal to playability. Figure 7 shows a section of a Super Mario Bros. level (top) and a section of a Kid Icarus level (bottom-left) sampled using a constrained MdMC approach, and a section of a Kid Icarus level (bottom-right) sampled using the MRF approach.

Figure 7: Sections from a Super Mario Bros. level (top) and a Kid Icarus level (bottom left) generated using the constrained MdMC approach, and a Kid Icarus level generated using an MRF approach (bottom right).

3) Wave Function Collapse
A recent approach by Gumin [61] leverages concepts from quantum mechanics in order to generate images and levels from a representative example tile set. Specifically, he created a wave function collapse algorithm that takes a set of representative level sections and generates new levels that are locally similar to the input. That is, each N × N window in the output appears in the input, and the distribution of each N × N window in the output is similar to the distribution in the input, where N is chosen prior to generation, typically based on the patterns present in the training data.

This wave function collapse algorithm is particularly exciting because it is able to produce high-quality results with a single training example. The algorithm works by initializing a level to superpositions of possible N × N windows from the input. It then chooses a position among those with the lowest entropy and collapses it (i.e., sets it probabilistically according to the distribution of windows in the input and the possible windows for that position). The entropy for the rest of the level is updated accordingly. This process is repeated until all of the positions in the output have been collapsed. This approach is a variant of MdMC, except instead of sampling based on a geometric pattern (e.g., left-to-right, bottom-to-top), samples are chosen via the
aforementioned entropy methodology. This approach was initially explored for bitmap generation, but has since been expanded for use with 3-D tile sets as well as for level generation [62], [63]. The source code and examples of the bitmap project can be found online [61].

C. Clustering
Clustering is a category of machine learning approaches that attempt to group similar elements together in order to learn their relationships. Below we discuss approaches that use clustering to learn grammars and graphs for level and story generation, respectively.

1) Learning Mario Levels from Video
In [33] Guzdial and Riedl utilized gameplay video of individuals playing through Super Mario Bros. to generate new levels. They accomplished this by parsing each Super Mario Bros. gameplay video frame-by-frame with OpenCV [64] and a fan-authored spritesheet, a collection of each image that appears in the game (referred to as sprites). Individual parsed frames could then combine to form chunks of level geometry, which served as the input to the model construction process. In total Guzdial and Riedl made use of nine gameplay videos for their work with Super Mario Bros., roughly 4 hours of gameplay in total.

Guzdial and Riedl's model structure was adapted from [65], a graph structure meant to encode styles of shapes and their probabilistic relationships. The shapes in this case refer to collections of identical sprites tiled over space in different configurations. For further details please see [33], but it can be understood as a learned shape grammar, identifying individual shapes and probabilistic rules on how to combine them. To train this model Guzdial and Riedl used a hierarchical clustering approach utilizing K-means clustering with automatic K estimation. First the chunks of level geometry were clustered to derive styles of level chunks, then the shapes within each chunk were clustered again to derive styles of shapes, and lastly the styles of shapes were clustered to determine how they could be combined to form novel chunks of level. After this point, generating a new level requires generating novel level chunks in sequences derived from the gameplay videos. An example screen of generated content using this approach is shown in Figure 8.

Figure 8: Example level section from the clustering approach.

2) Scheherazade-IF: General Interactive Fiction
PCGML has focused on graphical games, and particularly the levels of graphical games. However, there exists some work in the field on generating interactive fiction: text-based games like choose-your-own-adventure stories. Guzdial et al. adapted Scheherazade [66], a system for automatically learning to generate stories, into Scheherazade-IF [38], which can derive entire interactive fiction games from a dataset of stories.

Figure 9: Example of Scheherazade-IF being played.

Both Scheherazades rely on exemplar stories crowdsourced from Amazon Mechanical Turk (AMT). The reliance on crowdsourcing rather than finding stories "in the wild" allows Scheherazade to collect linguistically simple stories and stories related to a particular genre or setting (e.g., a bank robbery, a movie date, etc.).

Scheherazade-IF structures its model in the form of plot graphs [67], directed graphs with events as vertices and sequentiality information encoded as edges. The system derives events from the individual sentences of the story, learning what events can occur via an adaptation of the OPTICS clustering algorithm [68]. The ordering of these primitive events can then be derived from the exemplar stories, which then enables the construction of a plot graph for each type of story (e.g., bank robbery). Scheherazade-IF uses these learned plot graphs as the basis for an interactive narrative experience, allowing players to take the role of any of the characters in the plot graph (robber or bank teller in bank robbery) and making choices based on the possible sequences of learned events.

D. Matrix Factorization
Matrix factorization is a way of extracting latent features from high-dimensional data. These techniques have been explored (below) in the context of learning from level data and learning from other level generators.

1) Bayes Nets and PCA for Zelda Levels
While platformers, specifically Super Mario Bros., have dominated the field, Summerville et al. [30],
[69] have generated levels learned from The Legend of Zelda [70] series. Combining different approaches for the content, they split the generation process into two domains. The first is the dungeon level, wherein the base topology of a dungeon is generated in a graph structure. The room-to-room structure of a dungeon is learned along with high-level parameters, such as dungeon size and length of the optimal player path, using a Bayes Net [71], a graphical structure of the joint probability distribution.

After generating the topological structure of a dungeon, they then use Principal Component Analysis (PCA) to generate the low-level tile representations of the individual rooms. PCA finds a compressed representation of the original 2-D room arrays by taking the eigenvectors of the original high-dimensional space and keeping only the most informative; in this work the top 20 were kept, accounting for 95% of all variation. These compressed representations are then represented as weight vectors that can be interpolated between to generate new room content, as seen in Figure 10.

2) Matrix Factorization for Level Generation
Many generators (both machine and human) create in a particular identifiable style with a limited expressive range [72]. Shaker and Abou-Zleikha [73] use Non-Negative Matrix Factorization (NNMF) to learn patterns from 5 unique non-ML-based generators of Super Mario Bros. levels (Notch, Parameterized, Grammatical Evolution, Launchpad, and Hopper), to create a wider variety of content than any of the single methods. To their knowledge, this is the first time that NNMF has been used to generate content. Because NNMF requires only non-negative values in its matrix decomposition, the basis representations are localized and purely additive, and they are therefore more intuitive to analyze than PCA.

Their system begins by generating 1000 levels of length 200 with each of the G = 5 unique generators. These 5000 levels in the training set are represented as vectors recording the locations or amount of T = 5 content types at each level column: Platforms, Hills, Gaps, Items, and Enemies. These vectors are combined into T matrices, one for each level content type. The algorithm then uses a multiplicative update NNMF algorithm [74] to factor the data matrices into T approximate "part matrices" which represent level patterns and T coefficient matrices corresponding to weights for each pattern that appears in the training set. The part matrices can be examined to see what types of patterns appear globally and uniquely in different generators, and multiplying these part matrices by novel coefficient vectors can be used to generate new levels. This allows the NNMF method to explore far outside the expressive range of the original generators.

V. OPEN PROBLEMS AND OUTLOOK
In this section, we describe some of the major issues facing PCGML and open areas for active research. Because games as a ML research domain are considerably different from other ML domains, such as vision and audio learning, the problems that arise are commensurately different. In particular, game datasets are typically much smaller than other domains' datasets, games are dynamic systems that rely on interaction, and the availability of high-quality clean data is limited. In the following sections we discuss the issue of limited data as well as underexplored applications of PCGML for games, including style transfer, parameter tuning, and potential PCGML game mechanics.

A. Learning from Small Datasets
As previously mentioned, it is a commonly held tenet that more data is better; however, games are likely to always be data-constrained. The English Gigaword corpus [75], which has been used in over 450 papers at the time of writing, is approximately 27 GB. The entire library of games for the Nintendo Entertainment System is approximately 237 MB, over 2 orders of magnitude smaller. The most common genre, platformers, makes up approximately 14% of the NES library (and is the most common genre for PCG research), which is roughly another order of magnitude smaller. For specific series it is even worse, with the largest series (Mega Man [76]) making up 0.8% of the NES library, 4 orders of magnitude smaller than a standard language corpus. Standard image corpora are even larger than the Gigaword corpus, such as the ImageNet corpus [77] at 155 GB. All told, work in this area is always going to be constrained by the amount of data, even if more work is put into creating larger, better corpora.

While most machine learning is focused on using large corpora, there is work in the field focused on One-Shot Learning/Generalization [78]. Humans have the ability to generalize very quickly after being shown a single object (or a very small set, e.g., n < 5) to other objects of the same type, and One-Shot Learning is similarly concerned with learning from a very small dataset. Most of the work has been focused on image classification, but games are a natural testbed for such techniques due to the paucity of data.

B. Learning on Different Levels of Abstraction
As described above, one key challenge that sets PCGML for games apart from PCGML for domains such as images or sound is the fact that games are multi-modal dynamic systems. This means that generated content will have interactions: generated rules operate on generated levels, which in turn change the consequences of those rules, and so on.
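The multiplicative-update NNMF factorization described above can be sketched with plain NumPy. This is a minimal illustration, not Shaker and Abou-Zleikha's implementation: the Lee-Seung update rule stands in for the algorithm of [74], and the tiny data matrix is a placeholder for the real 5000-level content matrices.

```python
import numpy as np

def nnmf(V, k, iters=200, eps=1e-9):
    """Factor a non-negative matrix V (levels x columns) into
    coefficients W (levels x k) and part matrix H (k x columns)
    using Lee-Seung style multiplicative updates."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        # Multiplicative updates keep every entry non-negative
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy data: three "levels", two columns of some content type
V = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
W, H = nnmf(V, k=2)
# A novel coefficient vector times the part matrix yields a new level row
new_level = W.mean(axis=0) @ H
```

Because the factors stay non-negative, each row of `H` can be read directly as an additive level pattern, which is the interpretability argument made in the text.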
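The PCA room encoding and weight-vector interpolation used in the Zelda work above can be sketched with plain NumPy. This is a minimal illustration under assumed toy data, not the authors' pipeline: the four-tile "rooms" stand in for real flattened tile grids, and a generic k replaces their choice of the top 20 eigenvectors covering 95% of variance.

```python
import numpy as np

def fit_pca(rooms, k):
    """rooms: (n_rooms, n_tiles) array of flattened tile grids.
    Returns the mean room and the top-k principal directions."""
    mean = rooms.mean(axis=0)
    # SVD of the centered data yields the covariance eigenvectors
    _, _, vt = np.linalg.svd(rooms - mean, full_matrices=False)
    return mean, vt[:k]

def interpolate_rooms(room_a, room_b, mean, components, t):
    """Project both rooms to weight vectors, blend them at mixing
    factor t, and reconstruct the blended room in tile space."""
    w_a = components @ (room_a - mean)
    w_b = components @ (room_b - mean)
    return mean + components.T @ ((1 - t) * w_a + t * w_b)

# Three toy "rooms" of four tiles each (placeholder data)
rooms = np.array([[0.0, 0.0, 0.0, 0.0],
                  [1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 1.0]])
mean, comps = fit_pca(rooms, k=2)
halfway = interpolate_rooms(rooms[1], rooms[2], mean, comps, 0.5)
```

Sliding `t` from 0 to 1 sweeps between the two source rooms, which is what the interpolation sequence in Figure 10 visualizes.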
Figure 10: Example of interpolation between two Zelda rooms (the leftmost and rightmost rooms).

A useful high-level formal model for understanding games as dynamic systems that create experiences is the Mechanics, Dynamics, Aesthetics (MDA) framework by Hunicke, LeBlanc and Zubek [79]: "Mechanics describes the particular components of the game, at the level of data representation and algorithms. Dynamics describes the run-time behavior of the mechanics acting on player inputs and each other's outputs over time. Aesthetics describes the desirable emotional responses evoked in the player, when she interacts with the game system." [79] If the end goal for PCGML for games is to generate content or even complete games with good results at the aesthetic level, the problem is inherently a hierarchical one. It is possible that learning from data will need to happen at all three of these levels simultaneously in order to successfully generate content for games.

C. Datasets and Benchmarks
Procedural content generation, and specifically PCG via machine learning, is a developing field of research. As such, there are not many publicly available large datasets and no standardized evaluation benchmarks that are widely used.

Publicly available, large datasets are important, as they will reduce the barrier for entry into PCGML, which will allow the community to grow more quickly. As previously mentioned, the VGLC [35] is an important step in creating widely available datasets for PCGML. However, it currently only provides level data. Other data sets exist for object models¹ and gameplay mechanics², but they have not been utilized by the PCGML community. By bringing these data sets to the attention of researchers, we hope to promote the exploration and development of approaches for generating those types of content. Additionally, creating, improving, and growing both new and existing data sets is necessary for refining and expanding the field.

¹ opengameart.org
² http://www.squidi.net/three/index.php

Widespread benchmarks allow for the comparison of various techniques in a standardized way. Previously, competitions have been used to compare various level generation approaches [80], [81]. However, many of the competitors in the Super Mario Bros. competition included hard-coded rules, and the GVG-AI competition provides descriptions of the games, but not the training data necessary for machine learning approaches.

In addition to competitions, there has been some work in developing metric-based benchmarks for comparing techniques. Horn et al. [72] proposed a benchmark for Super Mario Bros. level generators in a large comparative study of multiple generators, which provided several evaluation metrics that are commonly used now, but only for level generators for platforming games. In recent years, Canossa and Smith [82] presented several general evaluation metrics for procedurally generated levels, which could allow for cross-technique comparisons. Again, however, these only apply to level generation and not other generatable content. Other machine learning domains, such as image processing, have benchmarks in place [83] for testing new techniques and comparing various techniques against each other. For the field of PCGML to grow, we need to be able to compare approaches in a meaningful way.

D. Style Transfer
Style and concept transfer is the idea that information learned from one domain can enhance or supplement knowledge in another domain. Style transfer has most notably been explored for image modeling and generation [84], [85]. For example, in recent work Gatys et al. [41] used neural networks to transfer the style of an artist onto different images. Additionally, Deep Dream [40] trains neural networks on images, and then generates new images that excite particular layers or nodes of the network. This approach could be adapted to learn from game content (e.g., levels, play data, etc.) and used to generate content that excites layers associated with different elements from the training data. More traditional forms of blending domain knowledge have been applied sparingly to games, such as game creature blending [86] and interactive fiction blending [87]. However, until very recently no one had applied this machine learning-oriented style transfer to procedural content generation.

Recently Snodgrass and Ontañón [88] explored domain transfer in level generation across platforming games, and Guzdial et al. [89] used concept blending to meld different level designs from Super Mario Bros. together (e.g., castle, underwater, and overworld levels). These approaches transfer and blend level styles, but do not attempt to address the game mechanics explicitly; both approaches
ensure playable levels, but do not attempt transfer or blending between different mechanics. Gow and Corneli [90] proposed a framework for blending two games together into a new game, including mechanics, but no implementation yet exists.

The above approaches are important first steps, but style transfer needs to be explored more fully in PCG. Specifically, approaches to transferring more than just aesthetics are needed, including game mechanic transfer. Additionally, the above approaches only apply transfer to game levels, and there are many other areas where style transfer can be applied. Character models, stories, and quests are a few such areas.

E. Exposing and Exploring the Generative Space
One avenue that has yet to be deeply explored in PCGML is opening the systems up to allow for designer input. Most systems are trained on a source of data, and editorial control over what data is fed in is the main interaction for a designer, but it is easy to imagine that a designer would want to control things such as difficulty, complexity, or theming (e.g., the difference between an above-ground level and a below-ground level in Super Mario Bros.). The generative approach of Summerville et al. [30] allows a designer to set some high-level design parameters (number of rooms in a dungeon, length of optimal player path), but this approach only works if all design factors are known at training time and a Bayes Net is a suitable technique. Similarly, Snodgrass and Ontañón [59] allow the user to define constraints (e.g., number of enemies, distance of the longest gap, etc.), but require hand-crafted constraint checkers for each constraint used, and for the domain to be able to be modeled with multi-dimensional Markov chains.

The nature of Generative Adversarial Networks has allowed for the discovery of latent factors that hold specific semantics (e.g., subtracting the latent space image of a blank-faced person from a smiling one is the smile vector) [92]. Interpolating and extrapolating along these latent dimensions allows a user to generate content while freely tuning the parameters they see fit. We envision that this type of approach could lead to similar results in the PCG domain (e.g., subtract Mario 1-1 from Mario 1-2 for the underground dimension, subtract Mario 1-1 from Mario 8-3 for the difficulty dimension). Furthermore, if multiple games are used as training input, it is theoretically possible that one could interpolate between two games (e.g., find the half-way point between Mario and Sonic) or find other more esoteric combinations (Contra − Mario + Zelda = ???).

F. Using PCGML as a Game Mechanic
Most current work in PCGML focuses on replicating designed content in order to provide the player infinite or novel variations on gameplay, informed by past examples. Another possibility is to use PCGML as the main mechanic of a game, presenting the PCGML system as an adversary or toy for the player to engage with. Designs could include enticing the player to, for example, generate content that is significantly similar to or different from the corpus the system was trained on, identify content examples that are outliers or typical examples of the system, or train PCGML systems to generate examples that possess certain qualities or fulfill certain objective functions, teaching the player to operate an opaque or transparent model by feeding it examples that shape its output in one direction or the other. This would allow a PCGML system to take on several design pattern roles, including AI as Role-Model, Trainee, Editable, Guided, Co-Creator, Adversary, and Spectacle [93].

Role-model: a PCGML system replicates content generated by players of various levels of skill or generates content suitable for players of certain skill levels. New players are trained by having them replicate the content or by playing the generated content in a form of generative tutorial.

Trainee: the player needs to train a PCGML system to generate a piece of content necessary as part of a puzzle, or a piece of level geometry, to proceed in the game.

Editable: rather than training the AI to generate the missing puzzle piece via examples, the player changes the internal model's values until acceptable content is generated.

Guided: the player corrects PCG system output in order to fulfill increasingly difficult requirements. The AI, in turn, learns from the player's corrections, following the player's guidance.

Co-creator: the player and a PCGML system take turns in creating content, moving toward some external requirement. The PCGML system learns from the player's examples.

Adversary: the player produces content that the PCGML system must replicate by generation to survive, or vice versa, in a "call and response" style battle.

Spectacle: the PCGML system is trained to replicate patterns that are sensorically impressive or cognitively interesting.

VI. CONCLUSION
In this survey paper, we give an overview of an emerging machine learning approach to Procedural Content Generation, including describing and contrasting the existing examples of work taking this approach and outlining a number of challenges and opportunities for future research. We intend
Table I: A summary of the PCGML approaches discussed in this article. Note: the C/R column specifies whether the output data is Categorical (Cat.) or Real valued, and the Data column specifies the format of the data (e.g., Seq. = Sequence). The methods are listed in the order they are discussed in Section IV.

Paper | C/R | Data | Content | Methods | Task
Hoover et al. [36]: Functional Scaffolding for Mario | Cat. | Seq. | Mario Levels | Feed Forward NN, NEAT, FSMC | Autonomous generation, Domain transfer
Summerville, Mateas [19]: LSTMs for Mario | Cat. | Seq. | Mario Levels | Recurrent Neural Networks (RNNs) | Autonomous generation
Jain et al. [20]: Autoencoders for Mario | Cat. | Seq. | Mario Levels | Neural Network Autoencoders | Autonomous generation, Repair, Recognition
Lee et al. [51]: Deep nets for StarCraft | Real | Grid | StarCraft II Maps | Convolutional Neural Networks (CNNs) | Co-creative generation
@RoboRosewater [54]: LSTM generation of Magic cards | Cat. | Seq. | Magic Cards | RNNs | Autonomous generation
Summerville, Mateas [37]: Seq.-to-seq. learning for Magic | Cat. | Seq. | Magic Cards | Sequence-to-Sequence Neural Networks | Co-creative generation
Dahlskog et al. [31]: n-Grams for Mario | Cat. | Seq. | Mario Levels | Markov Chains, n-Grams | Autonomous generation
Snodgrass, Ontañón [48]: Multi-dim. Markov Ch. for Levels | Cat. | Seq. | Mario, Loderunner, Kid Icarus Levels | Multi-dimensional Markov Chains (MdMCs) | Autonomous generation
Snodgrass, Ontañón [32]: Hierarchical Multi-dim. Markov Ch. | Cat. | Seq. | Mario, Loderunner, Kid Icarus Levels | Hierarchical MdMCs, Clustering | Autonomous generation
Snodgrass, Ontañón [59]: Constrained Multi-dim. Markov Ch. | Cat. | Seq. | Mario, Loderunner, Kid Icarus Levels | Constrained MdMCs | Autonomous generation, Domain transfer
Snodgrass, Ontañón [60]: Markov Random Fields for Levels | Cat. | Grid | Mario, Loderunner, Kid Icarus Levels | Markov Random Fields, Metropolis-Hastings | Autonomous generation
Gumin, M. [61]: Wave Function Collapse | Cat. | Grid | Tile Levels | Wave Function Collapse | Autonomous generation, Co-creative generation
Guzdial and Riedl [33]: Learning Mario Levels from Video | Cat., Real | Graph | Mario Levels | K-Means Clustering | Autonomous generation, Co-creative generation, Domain transfer
Guzdial et al. [38]: Scheherazade-IF: Interactive Fiction | Cat. | Graph | Interactive Fiction | OPTICS Clustering | Autonomous generation
Summerville et al. [91]: Bayes Nets and PCA for Zelda | Cat., Real | Graph | Zelda Dungeons | Bayes Nets, Graphical Models | Autonomous generation, Co-creative generation
Shaker, Abou-Zleikha [73]: Matrix Factorization for Level Gen. | Real | Seq. | Mario Levels | Non-Negative Matrix Factorization (NNMF) | Design Pattern Modeling, Autonomous generation

the paper to play a similar role as the Search-Based Procedural Content Generation paper [2], which pointed out existing research as well as work that was yet to be done. Much research that was proposed in that paper was subsequently carried out by various authors. In Table I, we present a summary of the Procedural Content Generation via Machine Learning papers discussed throughout this paper.

As can be seen from this table, and indeed from the rest of the paper, there is much work left to do. The vast majority of work has so far concerned two-dimensional levels, in particular Super Mario Bros. levels. Plenty of work remains in applying these methods to other domains, including rulesets, items, characters, and 3-D levels. There is also very rapid progress within machine learning in general at the moment, and in particular within the deep learning field and in methods with generative capabilities such as Generative Adversarial Networks. There is plenty of interesting and rewarding work to do in exploring how these new capabilities can be adapted to function with the particular constraints and affordances of game content.

REFERENCES
[1] N. Shaker, J. Togelius, and M. J. Nelson, Procedural Content Generation in Games: A Textbook and an Overview of Current Research. Springer, 2016.
[2] J. Togelius, G. N. Yannakakis, K. O. Stanley, and C. Browne, "Search-based procedural content generation: A taxonomy and survey," Computational Intelligence and AI in Games, IEEE Transactions on, vol. 3, no. 3, pp. 172–186, 2011.
[3] A. M. Smith and M. Mateas, "Answer set programming for procedural content generation: A design space approach," Computational Intelligence and AI in Games, IEEE Transactions on, vol. 3, no. 3, pp. 187–200, 2011.
[4] J. Montgomery, "Machine learning in 2016: what does nips2016 tell us about the field today?" 2016, https://blogs.royalsociety.org/in-verba/2016/12/22/machine-learning-in-2016-what-does-nips2016-tell-us-about-the-field-today/.
[5] J. Schmidhuber, "Deep learning in neural networks: An overview," Neural Networks, vol. 61, pp. 85–117, 2015.
[6] I. Goodfellow, Y. Bengio, and A. Courville, "Deep learning," 2016, book in preparation for MIT Press. [Online]. Available: http://www.deeplearningbook.org
[7] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
[8] N. Boulanger-Lewandowski, Y. Bengio, and P. Vincent, "Modeling temporal dependencies in high-dimensional
sequences: Application to polyphonic music generation and transcription," arXiv preprint arXiv:1206.6392, 2012.
[9] S. Fine, Y. Singer, and N. Tishby, "The hierarchical hidden Markov model: Analysis and applications," Machine Learning, vol. 32, no. 1, pp. 41–62, 1998.
[10] K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra, "Draw: A recurrent neural network for image generation," arXiv preprint arXiv:1502.04623, 2015.
[11] L.-Y. Wei, S. Lefebvre, V. Kwatra, and G. Turk, "State of the art in example-based texture synthesis," in Eurographics 2009, State of the Art Report, EG-STAR. Eurographics Association, 2009, pp. 93–117.
[12] Z. Ren, H. Yeh, and M. C. Lin, "Example-guided physically based modal sound synthesis," ACM Transactions on Graphics (TOG), vol. 32, no. 1, p. 1, 2013.
[13] E. J. Aarseth, Cybertext: Perspectives on Ergodic Literature. JHU Press, 1997.
[14] J. Togelius, G. N. Yannakakis, K. O. Stanley, and C. Browne, "Search-Based Procedural Content Generation," in Applications of Evolutionary Computation. Springer, 2010, pp. 141–150.
[15] N. Shaker, M. Shaker, and J. Togelius, "Ropossum: An authoring tool for designing, optimizing and solving cut the rope levels," 2013.
[16] A. Liapis, G. N. Yannakakis, and J. Togelius, "Sentient sketchbook: Computer-aided game level authoring," in FDG, 2013, pp. 213–220.
[17] G. Smith, J. Whitehead, and M. Mateas, "Tanagra: Reactive planning and constraint solving for mixed-initiative level design," IEEE Transactions on Computational Intelligence and AI in Games, no. 99, 2011.
[18] C. Guillemot and O. Le Meur, "Image inpainting: Overview and recent advances," IEEE Signal Processing Magazine, vol. 31, no. 1, pp. 127–144, 2014.
[19] A. Summerville and M. Mateas, "Super Mario as a string: Platformer level generation via LSTMs," Proceedings of 1st International Joint Conference of DiGRA and FDG, 2016.
[20] R. Jain, A. Isaksen, C. Holmgård, and J. Togelius, "Autoencoders for Level Generation, Repair, and Recognition,"
more_than_a_million_super_mario_maker_levels_have_been_uploaded_in_a_week
[35] A. J. Summerville, S. Snodgrass, M. Mateas, and Ontañón, "The VGLC: The video game level corpus."
[36] A. K. Hoover, J. Togelius, and G. N. Yannakakis, "Composing video game levels with music metaphors through functional scaffolding."
[37] A. Summerville and M. Mateas, "Mystical tutor: A Magic: The Gathering design assistant via denoising sequence-to-sequence learning," 2016. [Online]. Available: http://www.aaai.org/ocs/index.php/AIIDE/AIIDE16/paper/view/13980
[38] M. Guzdial, B. Harrison, B. Li, and M. O. Riedl, "Crowdsourcing open interactive narrative," in the 10th International Conference on the Foundations of Digital Games. Pacific Grove, CA, 2015.
[39] K. O. Stanley and R. Miikkulainen, "Evolving neural networks through augmenting topologies," Evolutionary Computation, vol. 10, pp. 99–127, 2002.
[40] A. Mordvintsev, C. Olah, and M. Tyka, "Inceptionism: Going deeper into neural networks," Google Research Blog. Retrieved June, vol. 20, 2015.
[41] L. A. Gatys, A. S. Ecker, and M. Bethge, "A neural algorithm of artistic style," arXiv preprint arXiv:1508.06576, 2015.
[42] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski et al., "Human-level control through deep reinforcement learning," Nature, vol. 518, no. 7540, pp. 529–533, 2015.
[43] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot et al., "Mastering the game of go with deep neural networks and tree search," Nature, vol. 529, no. 7587, pp. 484–489, 2016.
[44] S. Samothrakis, D. Perez-Liebana, S. M. Lucas, and M. Fasli, "Neuroevolution for general video game playing," in Computational Intelligence and Games (CIG), 2015 IEEE Conference on. IEEE, 2015, pp. 200–207.
[45] A. K. Hoover, P. A. Szerlip, and K. O. Stanley, "Functional scaffolding for composing additional musical voices," Computer Music Journal, vol. 38, no. 4, pp. 80–99, 2014.
in ICCC, 2016.
[46] S. Hochreiter and J. Schmidhuber, “Long short-term
[21] G. N. Yannakakis, P. Spronck, D. Loiacono, and E. André,
memory,” Neural Computing, 1997.
“Player modeling,” in Dagstuhl Follow-Ups, vol. 6. Schloss
[47] S. Miyamoto and T. Tezuka, “Super mario bros. 2,” 1986.
Dagstuhl-Leibniz-Zentrum fuer Informatik, 2013.
[48] S. Snodgrass and S. Ontañón, “Experiments in map
[22] M. Guzdial, N. Sturtevant, and B. Li, “Deep static and
generation using Markov chains,” in Proceedings of the
dynamic level analysis: A study on infinite mario,” in Third
9th International Conference on Foundations of Digital
Experimental AI in Games Workshop, vol. 3, 2016.
Games, ser. FDG, vol. 14, 2014.
[23] D. Braben and I. Bell, “Elite,” 1984. [49] A. Summerville, M. Guzdial, M. Mateas, and M. Riedl,
[24] Hello Games, “No Man’s Sky,” 2016. “Learning player tailored content from observation: Plat-
[25] F. Pereira, P. Norvig, and A. Halevy, “The unreasonable former level generation from video traces using lstms,” in
effectiveness of data,” IEEE Intelligent Systems, vol. 24, AAAI Conference on Artificial Intelligence and Interactive
no. undefined, pp. 8–12, 2009. Digital Entertainment, 2016.
[26] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to- [50] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol,
image translation with conditional adversarial networks,” “Extracting and composing robust features with denoising
2016. autoencoders,” in Intl. Conf. on Machine Learning. ACM,
[27] A. Karpathy, “The unreasonable effectiveness of recurrent 2008, pp. 1096–1103.
neural networks,” 2015. [Online]. Available: http://karpathy. [51] S. Lee, A. Isaksen, C. Holmgård, and J. Togelius, “Pre-
github.io/2015/05/21/rnn-effectiveness/ dicting Resource Locations in Game Maps Using Deep
[28] S. Miyamoto and T. Tezuka, “Super Mario Bros.” 1985. Convolutional Neural Networks,” in The Twelfth Annual
[29] G. Yokoi and Y. Sakamoto, “Metroid,” 1986. AAAI Conference on Artificial Intelligence and Interactive
[30] A. Summerville and M. Mateas, “Sampling hyrule: Sam- Digital Entertainment. AAAI, 2016.
pling probabilistic machine learning for level generation,” [52] B. Entertainment, “StarCraft II,” 2010.
2015. [53] W. of the Coast, “Magic: The Gathering,” 1993.
[31] S. Dahlskog, J. Togelius, and M. J. Nelson, “Linear levels [54] M. Milewicz, RoboRosewater.
through n-grams,” in Proceedings of the 18th International https://twitter.com/roborosewater?lang=en, 2016.
Academic MindTrek Conference, 2014. [55] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to
[32] S. Snodgrass and S. Ontañón, “A hierarchical MdMC sequence learning with neural networks,” in CoRR, 2014.
approach to 2D video game map generation,” in Eleventh [56] T. Osawa and Y. Sakamoto, “Kid icarus,” 1986.
Artificial Intelligence and Interactive Digital Entertainment [57] S. Dahlskog and J. Togelius, “Patterns as objectives for
Conference, 2015. level generation,” in Proceedings of EvoApplications, 2013.
[33] M. Guzdial and M. Riedl, “Game level generation from [58] W. Ching, S. Zhang, and M. Ng, “On multi-dimensional
gameplay videos,” in Twelfth Artificial Intelligence and markov chain models,” Pacific Journal of Optimization,
Interactive Digital Entertainment Conference, 2016. vol. 3, no. 2, 2007.
[34] D. McFerran, “More than a million super mario maker [59] S. Snodgrass and S. Ontañón, “Controllable procedu-
levels have been uploaded in a week,” 2015. [Online]. ral content generation via constrained multi-dimensional
Available: http://www.nintendolife.com/news/2015/09/ Markov chain sampling,” in Proceedings of the Twenty-Fifth
15

International Joint Conference on Artificial Intelligence, [86] P. Ribeiro, F. C. Pereira, B. Marques, B. Leitao, and
IJCAI, 2016, pp. 780–786. A. Cardoso, “A model for creativity in creature generation.”
[60] S. Snodgrass and S. Ontanon, “Learning to generate video in Proceedings of the 4th GAME-ON Conference, 2003, p.
game maps using Markov models,” IEEE Transactions on 175.
Computational Intelligence and AI in Games, 2016. [87] J. Permar and B. Magerko, “A conceptual blending ap-
[61] M. Gumin, “Wavefunctioncollapse,” GitHub repository, proach to the generation of cognitive scripts for interactive
2016. narrative,” in Proceedings of the 9th AIIDE Conference,
[62] Freehold Games, “The Caves of Qud,” 2016. 2013.
[63] Arcadia-Clojure, “Proc Skater 2016,” 2016. [88] S. Snodgrass and S. Ontañón, “An approach to domain trans-
[64] K. Pulli, A. Baksheev, K. Kornyakov, and V. Eruhimov, fer in procedural content generation of two-dimensional
“Real-time computer vision with opencv,” Commun. ACM, videogame levels,” in Twelfth Artificial Intelligence and
vol. 55, no. 6, pp. 61–69, Jun. 2012. Interactive Digital Entertainment Conference, 2016.
[65] E. Kalogerakis, S. Chaudhuri, D. Koller, and V. Koltun, “A [89] M. Guzdial and M. Riedl, “Learning to blend computer
probabilistic model for component-based shape synthesis,” game levels,” arXiv preprint arXiv:1603.02738, 2016.
ACM Transactions on Graphics, vol. 31, no. 4, pp. 55:1– [90] J. Gow and J. Corneli, “Towards generating novel games
55:11, Jul. 2014. using conceptual blending,” in Eleventh Artificial Intelli-
gence and Interactive Digital Entertainment Conference,
[66] B. Li, “Learning knowledge to support domain-independent
2015.
narrative intelligence,” Ph.D. dissertation, Georgia Institute
[91] A. J. Summerville, M. Behrooz, M. Mateas, and A. Jhala,
of Technology, 2015.
“The learning of Zelda: Data-driven learning of level
[67] P. Weyhrauch, “Guiding interactive fiction,” Ph.D. disser-
topology.”
tation, Ph. D. Dissertation, Carnegie Mellon University,
[92] T. White, “Sampling generative networks: Notes on a few
1997.
effective techniques,” arXiv preprint arXiv:1609.04468,
[68] M. Ankerst, M. M. Breunig, H.-P. Kriegel, and J. Sander, 2016.
“Optics: ordering points to identify the clustering structure,” [93] M. Treanor, A. Zook, M. P. Eladhari, J. Togelius, G. Smith,
in ACM Sigmod Record, vol. 28, no. 2. ACM, 1999, pp. M. Cook, T. Thompson, B. Magerko, J. Levine, and
49–60. A. Smith, “AI-based game design patterns,” in Proceedings
[69] A. J. Summerville, M. Behrooz, M. Mateas, and A. Jhala, of the 10 International Conference on Foundations of
“The learning of zelda: Data-driven learning of level Digital Games, FDG, 2015.
topology.”
[70] Nintendo, “The Legend of Zelda,” 1986.
[71] S. Wright, “Correlation and causation,” 1921.
[72] B. Horn, S. Dahlskog, N. Shaker, G. Smith, and J. Togelius,
“A comparative evaluation of procedural level generators
in the mario ai framework,” 2014.
[73] N. Shaker and M. Abou-Zleikha, “Alone we can do so little,
together we can do so much: A combinatorial approach
for generating game content.” AIIDE, vol. 14, pp. 167–173,
2014.
[74] D. D. Lee and H. S. Seung, “Learning the parts of objects
by non-negative matrix factorization,” Nature, vol. 401, no.
6755, pp. 788–791, 1999.
[75] D. Graff, J. Kong, K. Chen, and K. Maeda, “English giga-
word,” Linguistic Data Consortium, Philadelphia, 2003.
[76] Capcom, “Mega Man,” 1987.
[77] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-
Fei, “Imagenet: A large-scale hierarchical image database,”
in Computer Vision and Pattern Recognition, 2009. CVPR
2009. IEEE Conference on. IEEE, 2009, pp. 248–255.
[78] F.-F. Li, R. Fergus, and P. Perona, “One-shot learning of
object categories,” IEEE Transactions on Pattern Analysis
and Machine Intelligence, vol. 28, no. 4, pp. 594–611,
2006.
[79] R. Hunicke, M. LeBlanc, and R. Zubek, “Mda: A formal
approach to game design and game research,” in Proceed-
ings of the AAAI Workshop on Challenges in Game AI,
vol. 4, 2004, p. 1.
[80] J. Togelius, N. Shaker, S. Karakovskiy, and G. N. Yan-
nakakis, “The Mario AI Championship 2009-2012,” AI
Magazine, vol. 34, no. 3, pp. 89–92, 2013.
[81] S. L. Ahmed Khalifa, Diego Perez-Liebana and J. Togelius,
“General video game level generation,” in Proceedings of
the Genetic and Evolutionary Computation Conference,
2016.
[82] A. Canossa and G. Smith, “Towards a procedural evaluation
technique: Metrics for level design,” development, vol. 7,
p. 8, 2015.
[83] Y. LeCun, C. Cortes, and C. J. Burges, “The MNIST
database of handwritten digits,” 1998.
[84] S. Bruckner and M. E. Gröller, “Style transfer functions
for illustrative volume rendering,” in Computer Graphics
Forum, vol. 26, no. 3. Wiley Online Library, 2007, pp.
715–724.
[85] A. Hertzmann, C. E. Jacobs, N. Oliver, B. Curless, and
D. H. Salesin, “Image analogies,” in Proceedings of the 28th
annual conference on Computer graphics and interactive
techniques. ACM, 2001, pp. 327–340.

View publication stats

You might also like